Archive for the 'CentOS' Category

CentOS System Administration Essentials

The description of this book is “Become an efficient CentOS administrator by acquiring real-world knowledge of system setup and configuration” and the author, Andrew Mallett, has put together quite a collection of information in there to help you do just that.

Probably worth mentioning here is that this book is obviously designed for someone not only familiar with Linux in general, but also comfortable enough with CentOS to dispense with the usual obligatory chapters dealing with installation, etc. Yes, this information is surely aimed at someone who is, or has designs on being, a Systems Administrator. As it happens, I am “one of those guys” so I’ll give you my thoughts on how well he did.

One of the interesting things about Linux is that there are so many ways to do things and so many areas of focus. This means that the body of information a System Administrator should know is pretty expansive, and what *I* think a System Administrator should be an expert in is not necessarily what someone else may think. Well, up to a point. There are some real basics in there as well. One of those is using vi or vim and noodling around on the command line, and that is right where Mallett heads at the beginning of the book, and rightly so.

After running through some great tips you start to dive into some deep subject matter on Grub, filesystems and processes (all really important stuff). Yum (package management) and managing users are also important standards that are covered well, and then you start diverging a bit from what I would consider “must know” information into, really, the more interesting stuff of the book. You walk through LDAP authentication, Nginx web servers and Puppet configuration management. While those may not be essentials for your systems, it sure is nice to at least have a basic understanding, and the information here can get you up and running. And then lastly we come to the final topic, security, which is also a “must know”.

I quite liked this book, especially the portion on Nginx, which I had not played with before. It was good information, easy to read and use, and the examples worked. I also noted that, much unlike some other similar books I have reviewed, this book is not so voluminous as to make it impractical to read through in an afternoon or so, and you can do so and come away immediately with some practical and usable information. Again, the book “CentOS System Administration Essentials” by Andrew Mallett, is available from Packt Publishing for under $25 and is well worth it for all you budding (and maybe not so budding) System Administrators out there.

twidge on CentOS

A couple of days ago I was reading a post from Knightwise in which he mentioned using twidge on his server to do some fun stuff with his Twitter account. Well! That sounded like just the thing for me to get some use out of my neglected Twitter account. Unfortunately, twidge is really best used on a Debian-type system and *my* server runs CentOS 5. This is a simple recipe to shoehorn twidge onto a CentOS 5 server.

I downloaded the twidge binary from

The binary requires libcurl-gnutls, which CentOS just doesn’t have. I snuck around that by doing

ln -s /usr/lib/ /usr/lib/

Then the binary told me it needed libffi. This I could get from the epel repository. Do that by doing

rpm -Uvh

and then

yum install libffi

That gets twidge working ….. mostly. Because of the sneaky trick I pulled with that libcurl-gnutls thing, twidge generates an error message on each run. It still works fine, but gives me this message every time:

twidge: /usr/lib/ no version information available (required by bin/twidge)

Undaunted, the easy fix for that is to dump the unneeded error to /dev/null like so:

twidge lsrecent 2> /dev/null

And there you have it! For those of you looking to employ twidge on CentOS or a similar Linux, this will get you going pretty quickly. Enjoy, and I’ll tweet ya later!
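If you get tired of typing that redirect, the same trick fits in a tiny wrapper. A sketch, not part of the original recipe: the run_quiet name is mine, and the demo uses a stand-in command instead of twidge so you can see the effect on any box.

```shell
#!/bin/sh
# run_quiet: run any command with its stderr thrown away,
# hiding the harmless libcurl-gnutls version warning.
run_quiet() {
    "$@" 2> /dev/null
}

# Stand-in for "run_quiet twidge lsrecent": the inner command writes to
# both streams, but only stdout survives the wrapper.
run_quiet sh -c 'echo "recent tweets"; echo "version warning" >&2'
```

With twidge installed you would just call `run_quiet twidge lsrecent` (or alias it) and the warning disappears while real output still comes through.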

Rsync bug



Bitten by the rsync bug? I was. Apparently the new RHEL 5.7 (and I am sure the RH clones like CentOS, Scientific Linux and ClearOS(?) as well) has a bug in rsync when you use it with ssh transport like so:

rsync -avz -e ssh remotehost:/data /data

The fix is to make sure to prepend a username to your host, and then it magically starts working properly again.

rsync -avz -e ssh username@remotehost:/data /data


RHEL 5 quick and dirty samba primer



A friend asked me for a quick primer on how to set up a windows accessible share under RHEL 5, so I thought I would include it here for the benefit of anyone interested.

  • sudo yum -y install samba
  • sudo vim /etc/samba/smb.conf
  • replace the file with something like so:

[global]
server string = SERVERHOSTNAME Samba Server Version %v
security = user
netbios name = CALLMESOMETHING

[data]
comment = my data share
path = /data
read only = no
writable = yes
guest ok = no
available = yes
valid users = USERNAME

  • add a local user to the box: sudo useradd USERNAME
  • add the local user to samba and give password: sudo smbpasswd -a USERNAME
  • restart samba service: sudo service smb restart
  • make sure samba starts at boot: sudo chkconfig smb on
  • adjust your firewall settings if necessary

At this point you should be able to access the share at //servername/data.
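If you want to sanity-check the share from another Linux box first, smbclient (from the samba-client package) does the trick. A sketch of mine, wrapped in a function and guarded so it degrades gracefully where smbclient isn't installed; servername and USERNAME are the same placeholders as above.

```shell
#!/bin/sh
# check_share HOST USER: list the host's advertised shares, then the
# contents of its "data" share (prompts for USER's samba password).
check_share() {
    if command -v smbclient > /dev/null 2>&1; then
        smbclient -L "//$1" -U "$2"
        smbclient "//$1/data" -U "$2" -c 'ls'
    else
        echo "smbclient not installed here"
    fi
}

check_share servername USERNAME
```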
Have fun!

Server Build

Last night on the TechShow I was asked about providing some info on a decent default server build. Here are some quick notes to get people going. Adjust as necessary.

Just for ease here, let’s assume you are installing CentOS 5, a nice robust enterprise-class Linux for your server needs.

CentOS 5 / RHEL 5 / Scientific Linux, etc., does a really great job picking the defaults, so sticking with those is just fine and has worked well for me on literally hundreds of servers.

  • I let the partitioner remove all existing partitions and chose the default layout without modification.
  • Configure your networking appropriately, make sure to set your system clock for the appropriate timezone (no I do not generally leave my hardware clock set to UTC).
  • When picking general server packages I go for web server and software devel. I do not, generally, pick virtualization unless there is a specific reason to. I find that the web and devel meta server choices provide a robust background with all the tools I need to set up almost any kind of server I want without having to dredge for hundreds of packages later on.
  • The install itself at this point should take you about 15 minutes depending on the speed of your hardware.
  • Once installed, reboot the server and you should come to a setup agent prompt. Select the firewall configuration. Disable the firewall and SELinux completely (trust me here). Once that is done, exit the setup agent (no need to change anything else here), login to the machine as root and reboot it. This is necessary to completely disable SELinux.

From this point on it’s all post install config…:

  • Add any software repositories you need to.
    I not only have my own repo for custom applications, but also have a local RedHat repo for faster updates and lower network strain/congestion.
  • Install your firewall.
    I use an ingress and egress firewall built on iptables. While mine is a custom written app, there are several iptables firewall generator apps out there you can try.
  • Install your backup software.
    Doesn’t matter if this is a big company backup software like TSM or CommVault, or you are just using tar in a script. Make sure your system is not only being backed up regularly, but that you can actually restore data from those backups if you need to.
  • Add your local admin account(s).
    Don’t be an idiot and log into your server all the time as root. Make a local account and give yourself sudo access (and use it).
  • Fix your mail forwarding.
    Create a .forward file in your root directory and put your email address in there. You will get your server’s root emails delivered to you so you can watch the logwatch reports and any cron results and errors. This is important sysadmin stuff to look at when it hits your inbox.
  • Stop unnecessary services.
    Yes, if you are running a server you can probably safely stop the bluetooth and cups services. Check through what you are running with a “service --status-all” or a “chkconfig --list” (according to your runlevel) and turn off / stop those services you are not and will not be using. This will go a long way toward securing your server as well.
  • Install OSSEC and configure it to email you alerts.
  • No root ssh.
    Change your /etc/ssh/sshd_config and set “PermitRootLogin no”. Remember, you just added an admin account for yourself, you don’t need to ssh into this thing as root anymore. Restart your sshd service after making the change in order to apply it.
  • Set runlevel 3 as default.
    You do not need to have a GUI desktop running on your server. Run the gui on your workstation and save your server resources for serving stuff. Make the change in /etc/inittab “id:3:initdefault:”.
  • Fix your syslog.
    You really should consider having a separate syslog server. They are easy to set up (hey, Splunk is FREE up to so much usage) and it makes keeping track of what’s happening on multiple servers much easier (try that Splunk stuff – you’ll like it).
  • Set up NTPD.
    Your server needs to know what time it is. ‘Nuff said.
  • Install ClamAV.
    Hey, it’s free and it works. If you do ANYTHING at all with handling emails or fileshares for windows folks on this machine, you owe it to yourself and your users to run Clam on there to help keep them safer.
  • Do all your updates now.
    Before you go letting the world in on your new server, make sure to run all the available updates. No sense starting a new server instance with out of date and potentially dangerous software.
  • Lastly, update your logbook.
    You should have SOME mechanism for keeping track of server changes, whether it be on paper or in a wiki or whathaveyou. Use it RELIGIOUSLY. You will be glad someday you did.
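To tie the checklist together, here is a dry-run sketch of a few of the scriptable steps above. This is my own sketch, not a tested build script: the admin username and forwarding address are placeholders, and it only *prints* the commands (clear the RUN variable to actually execute them as root).

```shell
#!/bin/sh
# Dry-run sketch of a few post-install steps; set RUN= to really run them.
RUN=echo
ADMINUSER=youradmin

# Local admin account (remember to grant sudo access via visudo afterwards)
$RUN useradd $ADMINUSER

# Forward root's mail so logwatch and cron output reaches a real inbox
$RUN sh -c "echo you@example.com > /root/.forward"

# Stop and disable services a headless server rarely needs
for svc in bluetooth cups; do
    $RUN service $svc stop
    $RUN chkconfig $svc off
done

# No root logins over ssh, then restart sshd to apply
$RUN sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
$RUN service sshd restart

# Default to runlevel 3 (no GUI)
$RUN sed -i 's/^id:5:initdefault:/id:3:initdefault:/' /etc/inittab
```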

ESXi and Subsonic

In continuation, somewhat, of my last post and a brief review on the last TechShow, I wanted to jot down some notes about my newest encounter with ESXi and Subsonic.



I wanted to try out Subsonic, so I really needed to put together a new machine to play with it a bit. As a RL System administrator, some things carry over into my home computing environment, and paranoia is one of them. I just *have* to test things outside of my “production” servers at home too. Since I run my servers in a virtualized environment, this shouldn’t be too much of a problem.

I run ESXi at home for my virtualization platform, and the norm there is to use virtualcenter (or the vic) to create and manipulate VMs. The problem there is I am just not a Windows fan (no kidding). I had gotten around this problem initially by creating a VM on VMware Server (running on Linux) and then using VMware Converter to move that VM to my ESXi machine. This time, I did a little more digging on the subject of using the command line to create those VMs natively and I actually found some great information that let me do just that. What I found was these two links that contain all the information I needed:
ESXi – creating new virtual machines (servers) from the command line

Without rehashing a lot of the detail provided in those two sites, the basics are using vmkfstools to create a disk image for you to use and then building a small minimal vmx file with enough info in it to get things going. To do the install, make sure to have your vmx boot an iso image from the cdrom drive and turn on vnc for the box. From there it’s quite easy to get an install working.
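For reference, the shape of it looks something like this. This is a sketch from memory rather than the exact recipe from those links; the datastore path, VM name, sizes and ISO filename are all example values of mine.

```
# On the ESXi console, create a thin-provisioned 10 GB disk first:
#   vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/testvm/testvm.vmdk

# Then a minimal testvm.vmx alongside it, booting an install ISO with VNC on:
config.version = "8"
virtualHW.version = "4"
guestOS = "rhel5"
displayName = "testvm"
memsize = "512"
scsi0.present = "true"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "true"
scsi0:0.fileName = "testvm.vmdk"
ide0:0.present = "true"
ide0:0.deviceType = "cdrom-image"
ide0:0.fileName = "/vmfs/volumes/datastore1/iso/CentOS-5.5-i386-bin-DVD.iso"
ethernet0.present = "true"
RemoteDisplay.vnc.enabled = "true"
RemoteDisplay.vnc.port = "5901"
```

Register the vmx with `vim-cmd solo/registervm`, power the VM on, and point a VNC client at port 5901 to drive the installer.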

The server I decided on installing was CentOS 5.5. I chose the standard server install, and the only things required to get Subsonic working on it were:
yum install java-1.6.0-openjdk
and then downloading and installing the rpm from Subsonic’s website. A little later on I found that Subsonic would not stream my ogg files, and that was easily fixed by:
rpm --import
rpm -Uvh rpmforge-release-0.5.2-2.el5.rf.i386.rpm
yum install lame ffmpeg

After all that, point your web browser to http://:4040 and you are rocking and rolling with the big boys. The thing that really impressed me with the setup is when you tell Subsonic where your music is. On every other music server install this is the part where it takes a while to scan and index your music. With Subsonic this was surprisingly almost instantaneous! You tell it where the music is and *whamo* your music shows up, ready to be played. Fantastic! The other great piece is the ability to add album art. You can just tell Subsonic to change your album art and it finds some suggestions on the web and will let you pick the correct one and save it to your collection. It’s very nice and a complete time grabber :)

PHP 5.3.X on RHEL 5 / CentOS 5



Another one for posterity here. I was asked to find out how to upgrade PHP on RHEL 5 / CentOS 5 to v5.3.x and to test the procedure. It turns out to work pretty well and is not as difficult as you might think, as long as you have the right repositories enabled:

rpm -Uvh epel-release-5-4.noarch.rpm
rpm -Uvh remi-release-5.rpm
yum --enablerepo=remi update php php-* mysql

This, of course, assumes that your LAMP stack is already installed. If not, you would change the “update” to “install” and away you go. This will currently set you up with PHP 5.3.3 and MySQL 5.1.51.
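One way to confirm the upgrade took is to compare the running version against 5.3. A sketch using plain shell parameter expansion; the version string here is hard-coded for illustration, and in real use you would feed it the second field of the first line of `php -v` instead.

```shell
#!/bin/sh
# Example version string; in practice: ver=$(php -v | awk 'NR==1 {print $2}')
ver="5.3.3"

major=${ver%%.*}       # text before the first dot  -> 5
rest=${ver#*.}         # text after the first dot   -> 3.3
minor=${rest%%.*}      # text before the next dot   -> 3

if [ "$major" -gt 5 ] || { [ "$major" -eq 5 ] && [ "$minor" -ge 3 ]; }; then
    echo "PHP $ver is 5.3 or newer"
else
    echo "PHP $ver is too old"
fi
```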

Ubuntu 9.10 and Grub 2

Yes, another post about Ubuntu 9.10. I know I tried it out before, but I put it on this new (old) laptop and am giving it a little better run this time. I still believe 9.10 (Karmic) to be a fine running distribution and this time I got to test out my method of installing all the codecs I want on there, along with messing with Grub 2 a little bit.

When you are travelling abroad where it’s legal to do so, as I was just the other day, you might want to have access to all those codecs that make life worth living on a Linux box. Things like listening to your mp3s and watching your dvds and miscellaneous media files are very difficult without them.

I realise that Ubuntu has, for some time now, been able to detect that you need so and so codec to play so and so media and ask you if you really want it installed, but I find that particularly irritating. I like to already have that functionality there when I want to use it. To do that, I have a little script that I use that generally takes care of that for me, along with installing most of the programs I need to make my day to day use hassle free.

sudo wget -O /etc/apt/sources.list.d/medibuntu.list
sudo apt-get update && sudo apt-get install medibuntu-keyring && sudo apt-get update
sudo apt-get install mozilla-thunderbird php5-common php5-cli php-pear subversion openssh-server clusterssh imagemagick vim synergy smbfs curl vlc libdvdcss2 ubuntu-restricted-extras w32codecs mplayer mencoder build-essential sqlite dia expect mysql-client

Feel free to modify and use this, but basically I derived this from paying attention to the programs I need and use and making a list. It really does save a lot of time to do this.

The other thing I wanted to mention is Grub 2. For some reason, someone decided it was time to move from the original Grub to Grub 2. Time alone will tell whether that was a smart move or not. I know I certainly had a tough time of it for a day or two. Everything has moved and the methodology has changed as well. The short of it is you have some config files in /etc/grub.d that you can now manipulate, along with issuing an “update-grub”, which will build your /boot/grub/grub.cfg, which is pretty much the equivalent of the old /boot/grub/menu.lst file. The fun part is figuring out how all this works because, as so often happens with open source, the documentation sucks.

What I needed to do was to add another Linux distribution to grub so I could dual (or multi) boot it. This is accomplished in that /etc/grub.d directory. Now it’s worth mentioning here that if you do multiple OS installs on your machine and just issue an “update-grub” on your base Grub 2 enabled OS, it will (or at least mine did) auto-detect this installation by default and add a boot option for it to the grub boot menu. The problem is, like mine, it probably won’t boot your other OS.

The way to fix this is to go into /etc/grub.d and “chmod -x 30_os-prober”. After that you won’t be auto-genning entries. Next you can make a copy of the 40_custom file (I named mine 41_centos) and edit that file to have the correct boot parameters to boot your other OS. This is especially fun without having a good grasp of the correct syntax. For instance, it took me hours to figure out that the “kernel” line that the old Grub used has been replaced with a “linux” line now. Other than that, though, just make sure that if you are booting another Linux to use the correct root label and kernel and initrd image names and locations. My correct and working CentOS entry looks like this for reference:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry "CentOS 5.4" {
set root=(hd0,3)
linux /boot/vmlinuz-2.6.18-164.el5 ro root=LABEL=/ rhgb quiet
initrd /boot/initrd-2.6.18-164.el5.img
}
Have fun!

More CentOS

I know, I know. I wrote earlier about how the T23 was suffering some sort of display death again and would undoubtedly end up on While that is definitely true, I was looking at it today and thought I’d start it up again and let it do its updates. I did, and the display was working the whole time, so I thought I would play with it a little more:

I wanted to test getting some multimedia playback on this distribution. You see, RedHat based distributions are notorious for following the letter of the law and not letting you have access to any of those nasty codecs we all like to use. You know the ones I am talking about: mp3, wmv, dvd, etc. Well, since I just happened to be traveling abroad in Europe for a few minutes where this is completely legal, I decided to have a go at it.

A quick search brought me to this website and the directions looked pretty thorough so that’s where I started. The only thing I added to the process was adding vlc, my favorite media player, and everything else worked beautifully. To recap, follow these instructions, taken from the previously mentioned website and only edited to add vlc.

rpm -Uhv
rpm -Uhv
yum -y install libdvdcss libdvdread libdvdplay libdvdnav lsdvd mplayerplug-in mplayer mplayer-gui compat-libstdc++-33 flash-plugin gstreamer-plugins-bad gstreamer-plugins-ugly vlc
wget ; rpm -ivh mplayer-codecs-20061022-1.i386.rpm
wget ; rpm -ivh mplayer-codecs-extra-20061022-1.i386.rpm

Now a couple notes….

Although I have not yet rebooted to check if that has any effect, the default media player, Totem, still does not play very much. While slightly disappointed, I never really liked Totem anyhow and found that to be the case on almost every distribution. VLC, however, works exactly as expected, which is to say, perfectly.

I also took pains to install xmms, my favorite mp3 player on the T23 as well and, although it installed fine from the Dag repos, it doesn’t play a dang thing. VLC to the rescue again. In fact, I hadn’t realized that VLC actually makes such a good audio player!

It is important to also note that I still really feel that this CentOS desktop runs quite well - very snappy. I know I keep saying that, but it really is quite noticeable on this older laptop.

Rocks burn-in

The other day I was talking about how to install Rocks Cluster. Well, today I’ll give you an indication of how to test it out a bit. Now this is surely not the *proper* way to test the cluster out, which would be to run some fancy cluster-aware graphics rendering application or something of the sort, but this will put something on there and make it churn out some cpu cycles just to see how things look.

What I like to use for this task is Folding At Home, which is a protein folding program (hey, help cure diseases and stuff, right). You can get things ready by downloading the appropriate version of the client for your machine(s) from the download section. The current one that I am using is the Linux version 6.24 Beta.

Log on to your cluster and create a directory for each node that you want to run the FAH client on. If you only have a couple, it’s easy to just do that by hand; if not, you can use this simple script:

rockslist=$(rocks list host | grep ':' | cut -d':' -f1)
for name in $rockslist; do
    mkdir -p $name
done

From there, extract the FAH client file you just downloaded into your headnode directory. Tip: your headnode directory will be named something *other* than compute-?-?. Take the fah6 and mpiexec files from there and copy them to all your compute-?-? directories.

This should really get better instruction, but you’ll want to install screen on all your nodes. If you have things set up well, you should be able to do this as root:

rocks run host “yum -y install screen”

Go into your headnode directory, start the FAH client with "./fah6" and answer the configuration questions. Once you get it actually processing a work unit, you can stop it with a control-c.

At this point, copy the client.cfg file from your headnode directory to all the compute node directories.

Now, back in the headnode directory, run "screen -d -m ./fah6", which will start your Folding at Home client in a detached screen session and leave it running.

Now you are ready to start it up like that on your compute nodes too:

for name in compute*; do
    echo "Killing $name"
    ssh $name killall screen
    echo "Restarting $name"
    ssh $name "cd $name ; screen -d -m ./fah6"
done

And you can also use that script to periodically stop/restart (or just start again) FAH on your compute nodes, as FAH will sometimes hang. I normally run this to restart FAH every couple of weeks just to keep things going. Also do jump in with a “screen -x” occasionally to see if an updated client needs to be installed. Either way, this will eat up your spare cpu cycles and make use of your cluster while you learn on it and figure out what else to do with it. It’s also a lot of fun and you can help study/cure diseases too.
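Since that restart loop lends itself to automation, you could drop it into a script and let cron handle the every-couple-weeks cadence. A sketch only; the script path, log file and schedule below are my own example values, not anything from the Rocks install.

```
# /etc/cron.d/fah-restart (hypothetical): restart FAH on the compute
# nodes at 04:00 on the 1st and 15th of each month, keeping a log.
0 4 1,15 * * root /root/fah-restart.sh >> /var/log/fah-restart.log 2>&1
```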
