Archive for the 'CentOS' Category

CentOS System Administration Essentials

The description of this book is “Become an efficient CentOS administrator by acquiring real-world knowledge of system setup and configuration” and the author, Andrew Mallett, has put together quite a collection of information in there to help you do just that.

Probably worth mentioning here is that this book is obviously designed for someone not only familiar with Linux in general, but also comfortable enough with CentOS to dispense with the usual obligatory chapters dealing with installation and the like. Yes, this information is aimed squarely at someone who is, or has designs on being, a Systems Administrator. As it happens, I am “one of those guys” so I’ll give you my thoughts on how well he did.

One of the interesting things about Linux is that there are so many ways to do things and so many areas of focus. This means that the body of information a System Administrator should know is pretty expansive, and what *I* think a System Administrator should be an expert in is not necessarily what someone else may think. Well, up to a point. There are some real basics in there as well. One of those is using vi or vim and noodling around on the command line, and that is right where Mallett heads at the beginning of the book, and rightly so.

After running through some great tips, you dive into some deep subject matter on Grub, filesystems and processes (all really important stuff). Yum (package management) and managing users are also important standards that are covered well, and then the book diverges a bit from what I would consider “must know” information into, really, its more interesting material. You walk through LDAP authentication, Nginx web servers and Puppet configuration management. While those may not be essentials for your systems, it sure is nice to at least have a basic understanding, and the information here can get you up and running. Lastly, we come back around to one more “must know” topic: security.

I quite liked this book, especially the portion on Nginx, which I had not played with before. The information was good, easy to read and use, and the examples worked. I also noted that, unlike some other similar books I have reviewed, this one is not so voluminous as to make it impractical to read through in an afternoon or so, and you can do just that and come away immediately with some practical and usable information. Again, the book “CentOS System Administration Essentials” by Andrew Mallett is available from Packt Publishing for under $25 and is well worth it for all you budding (and maybe not so budding) System Administrators out there.

Ubuntu 9.10 and Grub 2

Yes, another post about Ubuntu 9.10. I know I tried it out before, but I put it on this new (old) laptop and am giving it a little better run this time. I still believe 9.10 (Karmic) to be a fine running distribution and this time I got to test out my method of installing all the codecs I want on there, along with messing with Grub 2 a little bit.

When you are travelling abroad where it’s legal to do so, as I was just the other day, you might want to have access to all those codecs that make life worth living on a Linux box. Things like listening to your mp3s and watching your DVDs and miscellaneous media files are very difficult without them.

I realise that Ubuntu has, for some time now, been able to detect that you need such and such a codec to play such and such media and ask you if you really want it installed, but I find that particularly irritating. I like that functionality to already be there when I want to use it. To do that, I have a little script that generally takes care of it for me, along with installing most of the programs I need to make my day to day use hassle free.

#!/bin/bash
# Add the Medibuntu repository (codecs Ubuntu can't ship by default)
sudo wget http://www.medibuntu.org/sources.list.d/karmic.list -O /etc/apt/sources.list.d/medibuntu.list
# Pull in the repo's signing key, then refresh the package lists again
sudo apt-get update && sudo apt-get install medibuntu-keyring && sudo apt-get update
# Codecs plus the programs I use day to day
sudo apt-get install mozilla-thunderbird php5-common php5-cli php-pear subversion openssh-server clusterssh imagemagick vim synergy smbfs curl vlc libdvdcss2 ubuntu-restricted-extras w32codecs mplayer mencoder build-essential sqlite dia expect mysql-client

Feel free to modify and use this; basically, I derived it from paying attention to the programs I need and use and making a list. It really does save a lot of time to do this.

The other thing I wanted to mention is Grub 2. For some reason, someone decided it was time to move from the original Grub to Grub 2. Time alone will tell whether that was a smart move or not. I know I certainly had a tough time of it for a day or two. Everything has moved and the methodology has changed as well. The short of it is that you now have some config files in /etc/grub.d that you can manipulate; issuing an “update-grub” then builds your /boot/grub/grub.cfg, which is pretty much the equivalent of the old /boot/grub/menu.lst file. The fun part is figuring out how all this works because, as happens with open source many times, the documentation sucks.
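
In practice the cycle looks like this (just a quick sketch of the two commands involved):

ls /etc/grub.d/
sudo update-grub

The first command shows you the numbered scripts that generate your menu entries; the second rebuilds /boot/grub/grub.cfg from them.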

What I needed to do was to add another Linux distribution to Grub so I could dual (or multi) boot it. This is accomplished in that /etc/grub.d directory. Now it’s worth mentioning here that if you do multiple OS installs on your machine and just issue an “update-grub” on your base Grub 2 enabled OS, it will (or at least mine did) auto detect the other installation by default and add a boot option for it into the Grub boot menu. The problem is, like mine, it probably won’t actually boot your other OS.

The way to fix this is to go into /etc/grub.d and “chmod -x 30_os-prober”. After that you won’t be auto-generating entries. Next, make a copy of the 40_custom file (I named mine 41_centos) and edit it to have the correct boot parameters for your other OS. This is especially fun without a good grasp of the correct syntax. For instance, it took me hours to figure out that the “kernel” line the old Grub used has been replaced with a “linux” line. Other than that, though, just make sure that if you are booting another Linux you use the correct root label and the right kernel and initrd image names and locations. My correct and working CentOS entry looks like this for reference:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry "CentOS 5.4" {
	set root=(hd0,3)
	linux /boot/vmlinuz-2.6.18-164.el5 ro root=LABEL=/ rhgb quiet
	initrd /boot/initrd-2.6.18-164.el5.img
}
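
One more detail before you reboot (a quick note; 41_centos is just the name I gave my file above): the custom file has to be executable for update-grub to pick it up, and you need to regenerate grub.cfg after editing it:

sudo chmod +x /etc/grub.d/41_centos
sudo update-grub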

Have fun!

More CentOS

I know, I know. I wrote earlier about how the T23 was suffering some sort of display death again and would undoubtedly end up on freelinuxbox.org. While that is definitely true, I was looking at it today and thought I’d start it up again and let it do its updates. I did, and the display was working the whole time, so I thought I would play with it a little more…

I wanted to test getting some multimedia playback on this distribution. You see, RedHat based distributions are notorious for following the letter of the law and not letting you have access to any of those nasty codecs we all like to use. You know the ones I am talking about: mp3, wmv, dvd, etc. Well, since I just happened to be traveling abroad in Europe for a few minutes, where this is completely legal, I decided to have a go at it.

A quick search brought me to this website and the directions looked pretty thorough, so that’s where I started. The only change I made to the process was adding vlc, my favorite media player, and everything else worked beautifully. To recap, follow these instructions, taken from the previously mentioned website and edited only to add vlc.

# Add the RPMforge (Dag) and Adobe repositories
rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
rpm -Uhv http://linuxdownload.adobe.com/adobe-release/adobe-release-i386-1.0-1.noarch.rpm
# Install the dvd, gstreamer and flash bits, plus vlc
yum -y install libdvdcss libdvdread libdvdplay libdvdnav lsdvd mplayerplug-in mplayer mplayer-gui compat-libstdc++-33 flash-plugin gstreamer-plugins-bad gstreamer-plugins-ugly vlc
# Grab and install the binary mplayer codec packs
wget www1.mplayerhq.hu/MPlayer/releases/codecs/mplayer-codecs-20061022-1.i386.rpm ; rpm -ivh mplayer-codecs-20061022-1.i386.rpm
wget www1.mplayerhq.hu/MPlayer/releases/codecs/mplayer-codecs-extra-20061022-1.i386.rpm ; rpm -ivh mplayer-codecs-extra-20061022-1.i386.rpm

Now, a couple of notes…

Although I have not yet rebooted to check if that has any effect, the default media player, Totem, still does not play very much. While slightly disappointed, I never really liked Totem anyhow and have found that to be the case on almost every distribution. VLC, however, works exactly as expected, which is to say, perfectly.

I also took pains to install xmms, my favorite mp3 player, on the T23 as well and, although it installed fine from the Dag repos, it doesn’t play a dang thing. VLC to the rescue again. In fact, I hadn’t realized that VLC makes such a good audio player!

It is also worth noting that I still really feel this CentOS desktop runs quite well - very snappy. I know I keep saying that, but it really is quite noticeable on this older laptop.

Rocks burn-in

The other day I was talking about how to install Rocks Cluster. Well, today I’ll give you an idea of how to test it out a bit. Now this is surely not the *proper* way to test the cluster, which would be to run some fancy cluster-aware graphics rendering application or something of the sort, but this will put something on there and make it churn out some cpu cycles just to see how things look.

What I like to use for this task is Folding At Home, which is a protein folding program (hey, help cure diseases and stuff, right?). You can get things ready by downloading the appropriate version of the client for your machine(s) from the download section. The current one that I am using is the Linux version 6.24 Beta.

Log on to your cluster and create a directory for each node that you want to run the FAH client on. If you only have a couple, it’s easy to just do that by hand; if not, you can use this simple script:

#!/bin/bash
# Make a working directory for every host Rocks knows about
rockslist=$(rocks list host | grep ':' | cut -d':' -f1)
for name in $rockslist
do
	mkdir -p "$name"
done

From there, extract the FAH client file you just downloaded into your headnode directory. Tip: your headnode directory will be named something *other* than compute-?-?. Take the fah6 and mpiexec files from there and copy them to all your compute-?-? directories.
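
If you have a lot of nodes, that copy is scriptable too. A minimal sketch, assuming you are in the directory containing all the node directories and “headnode” stands in for whatever your headnode directory is actually named:

for name in compute-*
do
	cp headnode/fah6 headnode/mpiexec "$name/"
done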

This step really deserves better instructions, but you’ll want to install screen on all your nodes. If you have things set up well, you should be able to do this as root:

rocks run host "yum -y install screen"

Go into your headnode directory, start the FAH client with “./fah6” and answer the configuration questions. Once you get it actually processing a work unit, you can stop it with a control-c.

At this point, copy the client.cfg file from your headnode directory to all the compute node directories.
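
Again, easily scripted (same assumption about the “headnode” directory name as above):

for name in compute-*
do
	cp headnode/client.cfg "$name/"
done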

Now, back in the headnode directory, run “screen -d -m ./fah6”, which will start your Folding At Home client in a detached screen session and leave it running.

Now you are ready to start it up the same way on your compute nodes too:

for name in compute*
do
	echo "Killing $name"
	ssh $name killall screen
	echo "Restarting $name"
	ssh $name "cd $name ; screen -d -m ./fah6"
done

And you can also use that script to periodically stop/restart (or just start again) FAH on your compute nodes, as FAH will sometimes hang. I normally run this to restart FAH every couple weeks just to keep things going. Also, do jump in occasionally with “screen -x” to see whether an updated client needs to be installed. Either way, this will eat up your spare cpu cycles and make use of your cluster while you learn on it and figure out what else to do with it. It’s also a lot of fun and you can help study/cure diseases too.
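
If you want to automate even those restarts, cron can run the loop for you. A hypothetical example, assuming you saved the loop above as /root/restart-fah.sh, firing at 3am on the 1st and 15th (close enough to every couple weeks):

0 3 1,15 * * /root/restart-fah.sh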

Throw some Rocks at it!

One of the parts of my day job is dealing with and managing our HPC cluster. This is an 8 node Rocks cluster that was installed maybe a week after I started. Now, I was still a bit green at that point and failed to get a good grasp on some things at the time, like how to maintain and upgrade the thing, and I have recently been paying for that :-)

Apparently, the install we have doesn’t have a clear-cut way to do errata and bug fixes. It was an early version of the cluster software. Well, after some heated discussions with our Dell rep about this, I decided what I really needed to do was a bit of research to see what the deal really was and if I could get us upgraded to something a bit better and more current.

Along came my June 2009 issue of The Linux Journal which just happened to have a GREAT article in it about installing your very own Rocks Cluster (YAY!). Well, I hung on to that issue with the full intention of setting up a development/testing cluster when I had the chance. And that chance came just the other day.

Some of you probably don’t have a copy of the article, and I needed to do some things a bit different anyhow, so I am going to try and summarize here what I did to get my new dev cluster going.

Now what I needed is probably a little different than what most people will need, so you will have to adjust things accordingly, and I’ll try to mention the differences as I go along where I can. First off, I needed to run the cluster on RedHat proper and not CentOS, which is much easier to get going. I am also running my entire dev cluster virtually on an ESX box, while most of you would be doing this with physical hardware.

To start things off I headed over to the Rocks Cluster website, where I went to the download section and then to the page for Rocks 5.2 (Chimichanga) for Linux. At this point, those of you who do not specifically need RedHat should pick the appropriate version of the Jumbo DVD (either 32 or 64 bit). What I did was to grab the ISOs for the Kernel and Core Rolls. Those 2 cd images plus my dvd image for RHEL 5.4 are the equivalent of the one Jumbo DVD iso on the website that uses CentOS as the default Linux install.

Now at this point, you can follow the installation docs there (which are maybe *slightly* outdated?), or just follow along here, as the install is pretty simple really. You will need a head node and one or more cluster nodes for your cluster. Your head node should have 2 network interfaces and each cluster node 1. The idea here is that your head node will be the only node of your cluster that is directly accessible on your local area network, and the head node will communicate on a separate private network with the cluster nodes. So, plug the eth0 interface of every node, head and cluster alike, into a separate switch, and plug eth1 of your head node into your LAN. Turn on your head node and boot it up from the Jumbo DVD, or in the case of the RHEL people, from the Kernel cd.

The Rocks installer is really quite simple. Enter “build” at the welcome screen. Soon you will be at the configuration screen. There you will choose the “CD/DVD Based Rolls” selection, where you can pick from your rolls and such. I chose everything except the Sun specific stuff (descriptions of which Rolls do what are in the download section). Since I was using RHEL instead of CentOS on the jumbo dvd, I had to push that “CD/DVD” button once per cd/dvd and select what I needed from each one.

Once the selections were made, it asks you for information about the cluster. Only the FQDN and cluster name are really necessary. After that you are given the chance to configure your public (lan) and private network settings, your root password, time zone and disk partitioning. My best advice here would be to go with the defaults where possible, although I did change my private network address settings and they worked perfectly. Letting the partitioner handle your disk partitioning is probably best too.

A quick note about disk space: if you are going to have a lot of disk space anywhere, it’s best on the head node, as that space will be put in a partition that is shared between compute nodes. Also, each node should have at least 30GB of disk space to get the install done correctly. I tried with 16GB on one compute node and the install failed!

After all that (which really is not much at all), you just sit back and wait for your install to complete. After completion the install docs tell you to wait a few minutes for all the post install configs (behind the scenes I guess) to finish up before logging in.

Once you are at that point and logged into your head node, it is absolutely trivial to get a compute node running. First, from the command line on your head node, run “insert-ethers” and select “Compute”. Then, power on your compute node (do one at a time) and make sure it’s set to network boot (PXE). You will see the mac address and compute node name pop up on your insert-ethers screen and shortly thereafter your node will install itself from the head node, reboot and you’ll be rockin’ and rollin’!

Once your nodes are going, you can get to that shared drive space on /state/partition1. You can run commands on the hosts by doing “rocks run host uptime”, which would give you an uptime on all the hosts in the cluster. “rocks help” will help you out with more commands. You can ssh into any one of the nodes by simply doing “ssh compute-0-1” or whichever node you want.

Now the only problem I have encountered so far is that I had an issue with a compute node that didn’t want to install correctly (probably because I was impatient). I tried reinstalling it and it somehow got a new node name from insert-ethers. In order to delete the bad info in the node database that insert-ethers maintains, I needed to do a “rocks remove host compute-0-1” and then a “rocks sync config” before I was able to make a new compute-0-1 node.

So now you and I have a functional cluster. What do you do with it? Well, you can do anything on there that requires the horsepower of multiple computers. Some things come to mind like graphics rendering and there are programs and instructions on the web on how to do those. I ran folding at home on mine. With a simple shell script I was able to setup and start folding at home on all my nodes. You could probably do most anything the same way. If any of you find something fantastic you like to run on your cluster, be sure to pass it along and let us know!

CentOS 5.4 The Real Deal

I promised that I would try the full install version of CentOS 5.4 desktop on my thinkpad and didn’t want to disappoint, so here it is…

I actually had a really hard time with this one. That is not to say that I believe there is an issue with CentOS itself, but certainly something odd with the lappy at the very least. For some reason, no matter how hard I tried, I could not get the installer to run correctly from the dvd. Or, more correctly put, from any of several dvds. The installer would just randomly crap out in different places. Finally I tried an old CentOS 5.3 dvd in an external dvd drive, and that finally did the trick. Besides, it only takes one quick “yum -y update” from there and you’re at 5.4 anyhow.

Like the live version (or should that be the other way around), the full install of CentOS 5.4 is quite good looking and very snappy. It uses the Gnome desktop and has all the goodies you would expect from a full blown enterprise desktop. It also carries, smartly, the software that I personally use on a day to day basis - firefox, thunderbird, openoffice, etc.

You know, I almost dislike reviewing this particular distribution because there is nothing particularly exciting about it, other than that it does what I like for a business desktop distribution and does it quite well. The same goes for the server install (both are available from the same dvd): you get a true, reliable enterprise class Linux that “just works” ™ like it’s supposed to. I guess that is the exciting part. You get a good Linux without having to tweak and mess with a bunch of things.

This sure isn’t a *home* desktop Linux. There’s no easy support for multimedia, so I wouldn’t go springing this on mom and dad, but for a business desktop, you just can’t go wrong here. And just FYI, there are plenty of good instructions on how to get your media on within a quick google search.

As unexciting as this sounds, I am still going to get a more permanent desktop install of this somewhere in my house. Just like I run some servers at home on CentOS, it sure couldn’t hurt to have a really stable and quick workstation somewhere within easy reach too!

CentOS 5.4 Live

In my continuing saga of Linux distributions and testing on my trusty, crusty Thinkpad T23, I tried out CentOS 5.4 Live. Now many of you already know that I have developed a sort of love affair with CentOS as a wonderful alternative to paying for enterprise class Linux. Then again, I am RedHat certified, so I like to be able to keep sharp by using similar products on my home and work servers.

Now I will have to say that my original intent was to just install CentOS 5.4 proper on this laptop; however, at the time I was grabbing the iso, I was away from the lappy and forgot whether or not it had a dvd drive in it (it does). So, while looking at the 6 CDs I was going to have to grab and burn, I noticed the “Live” cd. I thought to myself that I should just grab that and install from there. After all, installing from a live cd has become quite commonplace these days, hasn’t it?

Even on a live cd, I noticed that CentOS booted up pretty fast. I was excited to try it out, as I recall my RHEL and CentOS 5.3 desktops were quite snappy. Well, it wasn’t long before it was running and I was horrified at what I saw. CentOS refused to properly autodetect my display, instead providing me with some frightfully blocky looking 800×600 default display. Ick. I was pretty sure right off the bat that this was going to be a “quick” trial :-)

I decided to plod through this and actually give the thing a real try anyway, so I set out to fix the screen resolution. The fix was quite simple. I did a quick system-config-display, picked a 1024×768 lcd screen and millions of colors, and then whacked control-alt-backspace to restart X and, voila, things were looking considerably better. In fact, the default desktop, under a decent color depth and resolution, is quite pleasant to look at and work on, although I did find the default fonts not *quite* as smooth looking as those on Ubuntu or Mint. Not that they were bad looking, just perhaps not as polished.

All that aside, once things were running, they ran well, very very well, and fast too. CentOS 5.4 is hands down the fastest running live cd distribution I have ever used. The desktop is really snappy. The cd access is very quick compared to other live cds I have used. One of the most impressive things is that programs like firefox (the default web browser) started off that cd *faster* than they did on a full install of Mint. At least it felt faster.

The other really fantastic thing about CentOS 5.4 Live is the selection of programs available on the cd. It has Firefox for a browser, Thunderbird for email, OpenOffice and just about everything else *I* use on a day to day business basis. Of course there are no media codecs, outside of the free Ogg variety, but hey, this is an enterprise desktop, right?

This brings me to the installer. My intention was to do a full CentOS 5.4 install here, especially once I wanted to play a video or two; I figured it would be a lot easier to get some necessary codecs installed on a full version than on a live cd. Well, the problem is there is no installer on the live cd. They expect you to install from the dvd or the cd set, and the live cd is just that, a live cd. That being the case, I have put it on my to-do list to do just that and install a full version later on for some testing, but you can bet that I am gonna keep that CentOS live cd close at hand, as I can see it being a fantastic resource for a lot of things like filesystem forensics, fixing broken servers and even anonymous/secure access from other machines (bring your own Linux with you).

Check it out. If you are at all familiar with RedHat/Fedora style of things, or you are looking for a nice fast live cd, give this a look over - you’ll like it.

Pukwudgie roars into life

Last night I finally finished cutting all my server services over to their new residence on Pukwudgie (my spectacular CentOS 5.3 based server VM). I turned off my old Thinkpad server, which has been doing the job reliably for over 2 years, rebooted Pukwudgie just to make sure everything starts up correctly unattended, and that was that. My first impressions are that everything seems to run faster. I really expected that, though, because there are a lot more resources available to Pukwudgie than there were to the old server. I am loving it so far and it sure is nice to have an up-to-date server. The old server was running Ubuntu 6.10, which was so old I couldn’t even get security patches for it anymore, and this new CentOS server is completely current.

Hopefully this is a move for the better, and I can probably offer the old lappy/server on FreeLinuxBox.org too!

On Server Migrations

Lately, I have been slowly migrating my server services from my ancient Thinkpad server to a VM on some real server hardware. This has been an arduous process, mostly because this is my own personal server, which means I am #1 strapped for time, #2 deprived of ambition when home and #3 insanely paranoid about screwing up any of my data.

I mentioned recently how to set up a fakie mail server, which is what I have used for a long time. Well, oddly enough, the *new* portion of the server setup is quick, easy and works great. The hard part, which I didn’t mention in the previous post, was data migration. You see, on the old server I used UW-IMAP and on the new one I am using Dovecot IMAP. They are both set to use Maildir, so I just copied the folders from one machine to the other. Somewhere in there, though, there is something funny in how each uses the folder structure. I noticed the problem immediately when using Alpine to get email from the server: it couldn’t find the previously configured folders. It turns out that I had to rename them to INBOX.foldername on the new system instead of just foldername, and then things would mostly jive. I still had to make a few mail client tweaks, which were irritating, but hey, I got mail!
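
For anyone hitting the same snag, the renaming boils down to something like this (a rough sketch; the folder names here are hypothetical, and do back up your Maildir first):

cd ~/Maildir
for f in Sent Drafts projects
do
	mv "$f" "INBOX.$f"
done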

Once mail was running, I set out to get my webdav share working. That’s a pretty easy process and you can find a good instructable at http://www.cyberciti.biz/faq/rhel-fedora-linux-apache-enable-webdav/. I have used webdav fileshares for quite some time now and really dig them despite the fact that it’s a crap shoot if such-and-such version of nautilus will work with them (/me grates teeth at nautilus). It still makes for good document portability.

I set up subversion (dav) as well. This is a must have for any coder of any kind and a great tutorial for this is at http://wiki.centos.org/HowTos/Subversion. Once again, I pretty much copied my repo files from one machine to the other, and I even renamed the base repo on the way. Everything just worked and retained my logs and revision history, etc. Good stuff!

I installed LAMP on the server and moved over my intranet php code and databases and set up ntpd (a time server).
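
On CentOS 5, all of those are simple installs from the base repo. A minimal sketch (adjust the package list to taste):

yum -y install httpd php php-mysql mysql-server ntp
chkconfig httpd on ; service httpd start
chkconfig mysqld on ; service mysqld start
chkconfig ntpd on ; service ntpd start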

Lastly, I installed bind on the new server so I can keep my dns going. That was a little bit more of a pain, as the installs are *way* different between Ubuntu (old server) and CentOS (new one). CentOS uses a chrooted (jailed) instance, a different directory structure and a slightly different config setup as well. The tutorial at http://www.sanhom.com/?p=83 was invaluable in getting things going.
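
For reference, the CentOS side of the bind setup starts off roughly like this (a sketch; package names are from the CentOS 5 base repo and your zone configs will obviously differ):

yum -y install bind bind-chroot caching-nameserver
# with bind-chroot installed, the config files live under the jail:
ls /var/named/chroot/etc/
chkconfig named on ; service named start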

Whew, we are almost there! The only things I have left to do are to set up my music server (GnuMP3d) and dhcp server, both of which are pretty much a piece of cake. After that I am ready for the momentous unplugging of the old machine :-)

I wonder how Dann managed all this in just one night?! :-)

Personal IMAP mail server

What is a personal IMAP mail server and why would you want one? Well, such a server is, like it sounds, your very own mail server that you can access via IMAP. You might want such a thing because, like me, you have a lot of different email accounts in different places and you want to collect them all into one central and easy to manage location. I also like having more direct control over my access to my email. For example, if your email account at somewhere.com stops working because their server is down and you need to reference an email stored there, you are out of luck, unless you store your somewhere.com email on your own email server where you can still access it even though their server is inaccessible.

Since I am doing some personal server upgrades and migration, I thought it would be great to share just how to get this kind of server up and running with the most minimal hassle.

For starters, my new mail server is going to be a 32 bit server install of CentOS 5.3. This OS is not at all difficult to install, and it’s enterprise ready, so it’s plenty reliable.

When you have a machine ready with CentOS running on it, you will need to install Dovecot to handle your IMAP mail access. This is just a yum install away:

yum -y install dovecot

You will need to configure Dovecot after the install. Edit the /etc/dovecot.conf file and make sure the following is set and uncommented:

protocols = imap imaps
mail_location = maildir:~/Maildir

CentOS uses a sendmail/procmail mail combo by default, so in order to make sure your server and IMAP are both using Maildir (so your email gets delivered to you locally) you’ll need to create a file called /etc/procmailrc and in it put:

DEFAULT=$HOME/Maildir/

And then restart your mail service (just to make sure):

service sendmail restart

Once that is set, you will need to turn on Dovecot!

chkconfig dovecot on
service dovecot start

At this point, you should be able to (firewall issues notwithstanding) connect to your new mail server via IMAP and see that you have no mail. I am assuming that you have set up mail clients before; the only difference now is you will point to “YourNewMailServerName”, set it up for IMAP mail and use your account name and password from “YourNewMailServerName”.
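
Speaking of firewall issues: if you cannot connect, the stock CentOS firewall is the first place to look. A sketch of opening the IMAP and IMAPS ports, assuming the default iptables setup with its RH-Firewall-1-INPUT chain:

iptables -I RH-Firewall-1-INPUT -p tcp --dport 143 -j ACCEPT
iptables -I RH-Firewall-1-INPUT -p tcp --dport 993 -j ACCEPT
service iptables save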

For example, I created a new server called “Pukwudgie.linc.lan” and created an account on it called linc and made a supersecret password. When I set up my mail client to test the mailserver setup, I set it up to point to Pukwudgie.linc.lan using my username of linc and my password of supersecret via the IMAP protocol. I was able to log directly into my new mail account, which was completely empty.

At some point, you will want to SEND some mail from this account. Since this personal IMAP server is just a place to HOLD your emails, you will need to configure your email client to use your ISP’s smtp address to send through. Follow their instructions for doing this. Most ISP’s do not allow you to use any smtp server other than their own these days.

Now for the fun part. You want to collect your mail from other places and store it here. This is accomplished through the use of fetchmail. You will need to place a “~/.fetchmailrc” file in your home directory. Please refer to the fetchmail man page for full details, but in essence mine looks a lot like this:

poll lincisgreat.org user "linc" there with password "itsasecret"
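
One gotcha worth mentioning: fetchmail will refuse to run if that file is readable by anyone other than you, so lock down its permissions first:

chmod 600 ~/.fetchmailrc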

And you can have as many of those lines in that file as you have email accounts. After creating the rc file, you can run fetchmail to get your mail and have it delivered locally on your new server, where you can access it via IMAP. There are several methods of running fetchmail. You can run it by itself and watch the output as it goes each time, or you can run it in daemon mode by starting it with the -d command line switch and specifying a time interval:

fetchmail -d 60

That will check for and grab your email every 60 seconds. Or you could put fetchmail in your crontab and have cron manage getting your mail, like so:

*/5 * * * * /usr/bin/fetchmail > /dev/null 2>&1

That would check your email every 5 minutes.
