Archive for the 'CentOS' Category

CentOS System Administration Essentials

The description of this book is “Become an efficient CentOS administrator by acquiring real-world knowledge of system setup and configuration” and the author, Andrew Mallett, has put together quite a collection of information in there to help you do just that.

Probably worth mentioning here is that this book is obviously designed for someone not only familiar with Linux in general, but also comfortable enough with CentOS to dispense with the usual obligatory chapters on installation and the like. Yes, this information is aimed at someone who is, or has designs on being, a Systems Administrator. As it happens, I am “one of those guys,” so I’ll give you my thoughts on how well he did.

One of the interesting things about Linux is that there are so many ways to do things and so many areas of focus. The body of information a System Administrator should know is expansive, and what *I* think a System Administrator should be an expert in is not necessarily what someone else may think. Well, up to a point; there are some real basics too. One of those is using vi or vim and noodling around on the command line, and that is right where Mallett heads at the beginning of the book, and rightly so.

After running through some great tips, you dive into some deep subject matter on Grub, filesystems, and processes (all really important stuff). Yum (package management) and managing users are also important standards that are covered well, and then the book diverges a bit from what I would consider “must know” information into, really, its more interesting material: LDAP authentication, Nginx web servers, and Puppet configuration management. While those may not be essentials for your systems, it sure is nice to at least have a basic understanding, and the information here can get you up and running. Lastly, the book returns to one final “must know” topic: security.

I quite liked this book, especially the portion on Nginx, which I had not played with before. The information was good, easy to read and use, and the examples worked. I also noted that, unlike some other similar books I have reviewed, this one is not so voluminous as to make it impractical to read through in an afternoon or so, and you can come away immediately with some practical and usable information. Again, the book “CentOS System Administration Essentials” by Andrew Mallett is available from Packt Publishing for under $25 and is well worth it for all you budding (and maybe not so budding) System Administrators out there.

RHEL 5 quick and dirty samba primer

A friend asked me for a quick primer on how to set up a Windows-accessible share under RHEL 5, so I thought I would include it here for the benefit of anyone interested.

  • sudo yum -y install samba
  • sudo vim /etc/samba/smb.conf
  • replace the file with something like so:

[global]
workgroup = SOMEWORKGROUPNAME
server string = SERVERHOSTNAME Samba Server Version %v
security = user
netbios name = CALLMESOMETHING
[data]
comment = my data share
path = /data
read only = no
writable = yes
guest ok = no
available = yes
valid users = USERNAME

  • add a local user to the box: sudo useradd USERNAME
  • add the local user to samba and give password: sudo smbpasswd -a USERNAME
  • restart samba service: sudo service smb restart
  • make sure samba starts at boot: sudo chkconfig smb on
  • adjust your firewall settings if necessary

At this point you should be able to access the share from Windows at \\servername\data (or smb://servername/data from most Linux file managers).
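
Before hunting down a Windows box, you can sanity-check the share right on the server. This is just a quick verification pass; testparm and smbclient ship with the samba packages:

testparm                                  # syntax-check /etc/samba/smb.conf
smbclient -L localhost -U USERNAME        # list the shares as your samba user
smbclient //localhost/data -U USERNAME    # connect to the share itself
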
Have fun!

Server Build

Last night on the TechShow I was asked about providing some info on a decent default server build. Here are some quick notes to get people going. Adjust as necessary.

Just for ease, here, let’s assume you are installing CentOS 5, a nice robust enterprise-class Linux for your server needs.

CentOS 5 / RHEL 5 / Scientific Linux, etc., does a really great job picking the defaults, so sticking with those is just fine and has worked well for me on literally hundreds of servers.

  • I let the partitioner remove all existing partitions and chose the default layout without modification.
  • Configure your networking appropriately, make sure to set your system clock for the appropriate timezone (no I do not generally leave my hardware clock set to UTC).
  • When picking general server packages I go for web server and software devel. I do not, generally, pick virtualization unless there is a specific reason to. I find that the web and devel meta server choices provide a robust background with all the tools I need to set up almost any kind of server I want without having to dredge for hundreds of packages later on.
  • The install itself at this point should take you about 15 minutes depending on the speed of your hardware.
  • Once installed, reboot the server and you should come to a setup agent prompt. Select the firewall configuration. Disable the firewall and SELinux completely (trust me here). Once that is done, exit the setup agent (no need to change anything else here), login to the machine as root and reboot it. This is necessary to completely disable SELinux.

From this point on it’s all post-install config (a consolidated command sketch follows the list):

  • Add any software repositories you need to.
    I not only have my own repo for custom applications, but also have a local RedHat repo for faster updates and lower network strain/congestion.
  • Install your firewall.
    I use an ingress and egress firewall built on iptables. While mine is a custom written app, there are several iptables firewall generator apps out there you can try.
  • Install your backup software.
    Doesn’t matter if this is a big company backup software like TSM or CommVault, or you are just using tar in a script. Make sure your system is not only being backed up regularly, but that you can actually restore data from those backups if you need to.
  • Add your local admin account(s).
    Don’t be an idiot and log into your server all the time as root. Make a local account and give yourself sudo access (and use it).
  • Fix your mail forwarding.
    Create a .forward file in root’s home directory and put your email address in there. You will get your server’s root emails delivered to you, so you can watch the logwatch reports and any cron results and errors. This is important sysadmin stuff to look at when it hits your inbox.
  • Stop unnecessary services.
    Yes, if you are running a server you can probably safely stop the bluetooth and cups services. Check through what you are running with a “service --status-all” or a “chkconfig --list” (according to your runlevel) and turn off / stop those services you are not and will not be using. This will go a long way toward securing your server as well.
  • Install OSSEC and configure it to email you alerts.
  • No root ssh.
    Change your /etc/ssh/sshd_config and set “PermitRootLogin no”. Remember, you just added an admin account for yourself, you don’t need to ssh into this thing as root anymore. Restart your sshd service after making the change in order to apply it.
  • Set runlevel 3 as default.
    You do not need to have a GUI desktop running on your server. Run the gui on your workstation and save your server resources for serving stuff. Make the change in /etc/inittab “id:3:initdefault:”.
  • Fix your syslog.
    You really should consider having a separate syslog server. They are easy to set up (hey, Splunk is FREE up to a certain usage) and it makes keeping track of what’s happening on multiple servers much easier (try that Splunk stuff – you’ll like it).
  • Set up NTPD.
    Your server needs to know what time it is. ‘Nuff said.
  • Install ClamAV.
    Hey, it’s free and it works. If you do ANYTHING at all with handling emails or fileshares for windows folks on this machine, you owe it to yourself and your users to run Clam on there to help keep them safer.
  • Do all your updates now.
    Before you go letting the world in on your new server, make sure to run all the available updates. No sense starting a new server instance with out of date and potentially dangerous software.
  • Lastly, update your logbook.
    You should have SOME mechanism for keeping track of server changes, whether it be on paper or in a wiki or whathaveyou. Use it RELIGIOUSLY. You will be glad someday you did.
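
For reference, here is a rough shell sketch of the command-line portions of the list above. The username and email address are placeholders, clamav typically comes from a third-party repo like RPMforge or EPEL, and on a real box you should use visudo rather than echoing into sudoers:

#!/bin/bash
# run as root on a fresh CentOS 5 / RHEL 5 install
useradd jsmith && passwd jsmith                     # local admin account
echo 'jsmith ALL=(ALL) ALL' >> /etc/sudoers         # sudo access (prefer visudo)
echo 'you@example.com' > /root/.forward             # root mail forwarding
for svc in bluetooth cups; do                       # stop unnecessary services
    service $svc stop; chkconfig $svc off
done
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
service sshd restart                                # no more root ssh
sed -i 's/^id:5:initdefault:/id:3:initdefault:/' /etc/inittab   # no GUI
yum -y install ntp && chkconfig ntpd on && service ntpd start   # time sync
yum -y install clamav                               # needs a third-party repo
yum -y update                                       # pull all current errata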

ESXi and Subsonic

Somewhat in continuation of my last post, and as a brief recap from the last TechShow, I wanted to jot down some notes about my newest encounter with ESXi and Subsonic.

I wanted to try out Subsonic, so I really needed to put together a new machine to play with it a bit. As a real-life system administrator, some things carry over into my home computing environment, and paranoia is one of them. I just *have* to test things outside of my “production” servers at home too. Since I run my servers in a virtualized environment, this shouldn’t be too much of a problem.

I run ESXi at home for my virtualization platform, and the norm there is to use VirtualCenter (or the VIC) to create and manipulate VMs. The problem there is I am just not a Windows fan (no kidding). I had gotten around this initially by creating a VM on VMware Server (running on Linux) and then using VMware Converter to move that VM to my ESXi machine. This time, I did a little more digging on the subject of using the command line to create those VMs natively, and I found two links that contain all the information I needed:
ESXi – creating new virtual machines (servers) from the command line
and
http://www.vm-help.com/esx40i/manage_without_VI_client_1.php

Without rehashing a lot of the detail provided in those two sites, the basics are using vmkfstools to create a disk image for you to use and then building a small, minimal vmx file with enough info in it to get things going. To do the install, make sure your vmx boots an ISO image from the CD-ROM drive and turns on VNC for the box. From there it’s quite easy to get an install working.
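
As a concrete (and hedged) example of what that boils down to, here is roughly the shape of it. The datastore name, ISO path, VM name, and sizes are placeholders of mine; the vmx keys are the standard ones those articles describe, so treat this as a sketch rather than gospel:

# on the ESXi console: create a 10 GB thin-provisioned disk
vmkfstools -c 10G -d thin /vmfs/volumes/datastore1/centos/centos.vmdk

# minimal centos.vmx: boots the installer ISO and exposes the console over VNC
config.version = "8"
virtualHW.version = "7"
guestOS = "rhel5"
displayName = "centos"
memsize = "512"
scsi0.present = "true"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "true"
scsi0:0.fileName = "centos.vmdk"
ide1:0.present = "true"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "/vmfs/volumes/datastore1/iso/CentOS-5.5-i386.iso"
ethernet0.present = "true"
ethernet0.virtualDev = "e1000"
RemoteDisplay.vnc.enabled = "TRUE"
RemoteDisplay.vnc.port = "5901"
RemoteDisplay.vnc.password = "changeme"

# register the VM so ESXi knows about it
vim-cmd solo/registervm /vmfs/volumes/datastore1/centos/centos.vmx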

The OS I decided on installing is CentOS 5.5. I chose the standard server install, and the only thing required to get Subsonic working on it was:
yum install java-1.6.0-openjdk
and then to download and install the rpm from Subsonic’s website. A little later on I found that Subsonic would not stream my ogg files and that was easily fixed by:
rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el5.rf.i386.rpm
rpm -Uvh rpmforge-release-0.5.2-2.el5.rf.i386.rpm
yum install lame ffmpeg

After all that, point your web browser to http://<servername>:4040 and you are rocking and rolling with the big boys. The thing that really impressed me with the setup is when you tell Subsonic where your music is. On every other music server install, this is the part where it takes a while to scan and index your music. With Subsonic this was surprisingly almost instantaneous! You tell it where the music is and *whamo*, your music shows up, ready to be played. Fantastic! The other great piece is the ability to add album art. You can just tell Subsonic to change your album art and it finds some suggestions on the web, lets you pick the correct one, and saves it to your collection. It’s very nice and a complete time grabber :)

PHP 5.3.X on RHEL 5 / CentOS 5

Another one for posterity here. I was asked to find out how to upgrade PHP on RHEL 5 / CentOS 5 to v5.3.x and to test the procedure. It turns out to work pretty well, and it is not as difficult as you might think as long as you have the right repositories enabled:

wget http://download.fedora.redhat.com/pub/epel/5/i386/epel-release-5-4.noarch.rpm
wget http://rpms.famillecollet.com/enterprise/remi-release-5.rpm
rpm -Uvh epel-release-5-4.noarch.rpm
rpm -Uvh remi-release-5.rpm
yum --enablerepo=remi update php php-* mysql

This, of course, assumes that your LAMP stack is already installed. If not, you would change the “update” to “install” and away you go. This will currently set you to PHP 5.3.3 and MySQL 5.1.51.
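
A quick sanity check after the update never hurts:

php -v             # should now report PHP 5.3.x
mysql --version    # should now report mysql 5.1.x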

Ubuntu 9.10 and Grub 2

Yes, another post about Ubuntu 9.10. I know I tried it out before, but I put it on this new (old) laptop and am giving it a little better run this time. I still believe 9.10 (Karmic) to be a fine running distribution and this time I got to test out my method of installing all the codecs I want on there, along with messing with Grub 2 a little bit.

When you are travelling abroad where it’s legal to do so, as I was just the other day, you might want to have access to all those codecs that make life worth living on a Linux box. Things like listening to your mp3s and watching your DVDs and miscellaneous media files are very difficult without them.

I realise that Ubuntu has, for some time now, been able to detect that you need such-and-such codec to play such-and-such media and ask you if you really want it installed, but I find that particularly irritating. I like to already have that functionality there when I want to use it. To do that, I have a little script that generally takes care of it for me, along with installing most of the programs I need to make my day-to-day use hassle free.

#!/bin/bash
sudo wget http://www.medibuntu.org/sources.list.d/karmic.list -O /etc/apt/sources.list.d/medibuntu.list
sudo apt-get update && sudo apt-get install medibuntu-keyring && sudo apt-get update
sudo apt-get install mozilla-thunderbird php5-common php5-cli php-pear subversion openssh-server clusterssh imagemagick vim synergy smbfs curl vlc libdvdcss2 ubuntu-restricted-extras w32codecs mplayer mencoder build-essential sqlite dia expect mysql-client

Feel free to modify and use this; I derived it by paying attention to the programs I need and use and making a list. It really does save a lot of time.
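
If you keep it around as a script (the filename here is arbitrary), running it on a fresh install is just:

chmod +x postinstall.sh
./postinstall.sh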

The other thing I wanted to mention is Grub 2. For some reason, someone decided it was time to move from the original Grub to Grub 2. Time alone will tell whether that was a smart move or not; I know I certainly had a tough time of it for a day or two. Everything has moved and the methodology has changed as well. The short of it is that you now have some config files in /etc/grub.d that you can manipulate and then issue an “update-grub”, which builds your /boot/grub/grub.cfg, pretty much the equivalent of the old /boot/grub/menu.lst file. The fun part is figuring out how all this works because, as happens with open source many times, the documentation sucks.

What I needed to do was add another Linux distribution to grub so I could dual (or multi) boot it. This is accomplished in that /etc/grub.d directory. Now, it’s worth mentioning here that if you do multiple OS installs on your machine and just issue an “update-grub” on your base Grub 2 enabled OS, it will (or at least mine did) auto-detect the other installation by default and add a boot option for it to the grub boot menu. The problem is, like mine, it probably won’t actually boot your other OS.

The way to fix this is to go into /etc/grub.d and “chmod -x 30_os-prober”. After that you won’t be auto-generating entries. Next, make a copy of the 40_custom file (I named mine 41_centos) and edit it to have the correct boot parameters for your other OS. This is especially fun without a good grasp of the correct syntax; for instance, it took me hours to figure out that the “kernel” line the old Grub used has been replaced with a “linux” line. Other than that, though, just make sure that if you are booting another Linux you use the correct root label and the right kernel and initrd image names and locations. My correct and working CentOS entry looks like this for reference:

#!/bin/sh
exec tail -n +3 $0
# This file provides an easy way to add custom menu entries. Simply type the
# menu entries you want to add after this comment. Be careful not to change
# the 'exec tail' line above.
menuentry "CentOS 5.4" {
set root=(hd0,3)
linux /boot/vmlinuz-2.6.18-164.el5 ro root=LABEL=/ rhgb quiet
initrd /boot/initrd-2.6.18-164.el5.img
}
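
Putting the whole procedure together, the sequence looks like this (file names per the stock Ubuntu 9.10 layout):

cd /etc/grub.d
sudo chmod -x 30_os-prober    # stop auto-generating (broken) entries
sudo cp 40_custom 41_centos   # new custom entry file
sudo vim 41_centos            # add the menuentry shown above
sudo update-grub              # regenerate /boot/grub/grub.cfg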

Have fun!

More CentOS

I know, I know. I wrote earlier about how the T23 was suffering some sort of display death again and would undoubtedly end up on freelinuxbox.org. While that is definitely true, I was looking at it today and thought I’d start it up again and let it do its updates. I did, and the display was working the whole time, so I thought I would play with it a little more:

I wanted to test getting some multimedia playback on this distribution. You see, RedHat-based distributions are notorious for following the letter of the law and not letting you have access to any of those nasty codecs we all like to use. You know the ones I am talking about: mp3, wmv, dvd, etc. Well, since I just happened to be traveling abroad in Europe for a few minutes, where this is completely legal, I decided to have a go at it.

A quick search brought me to this website, and the directions looked pretty thorough, so that’s where I started. The only thing I added to the process was vlc, my favorite media player, and everything else worked beautifully. To recap, follow these instructions, taken from the previously mentioned website and edited only to add vlc.

rpm -Uhv http://apt.sw.be/redhat/el5/en/i386/RPMS.dag/rpmforge-release-0.3.6-1.el5.rf.i386.rpm
rpm -Uhv http://linuxdownload.adobe.com/adobe-release/adobe-release-i386-1.0-1.noarch.rpm
yum -y install libdvdcss libdvdread libdvdplay libdvdnav lsdvd mplayerplug-in mplayer mplayer-gui compat-libstdc++-33 flash-plugin gstreamer-plugins-bad gstreamer-plugins-ugly vlc
wget www1.mplayerhq.hu/MPlayer/releases/codecs/mplayer-codecs-20061022-1.i386.rpm ; rpm -ivh mplayer-codecs-20061022-1.i386.rpm
wget www1.mplayerhq.hu/MPlayer/releases/codecs/mplayer-codecs-extra-20061022-1.i386.rpm; rpm -ivh mplayer-codecs-extra-20061022-1.i386.rpm

Now a couple notes….

Although I have not yet rebooted to check whether that has any effect, the default media player, Totem, still does not play very much. While slightly disappointed, I never really liked Totem anyhow and have found that to be the case on almost every distribution. VLC, however, works exactly as expected, which is to say, perfectly.

I also took pains to install xmms, my favorite mp3 player, on the T23 and, although it installed fine from the Dag repos, it doesn’t play a dang thing. VLC to the rescue again. In fact, I hadn’t realized that VLC makes such a good audio player!

It is important to also note that I still really feel that this CentOS desktop runs quite well - very snappy. I know I keep saying that, but it really is quite noticeable on this older laptop.

Rocks burn-in

The other day I was talking about how to install Rocks Cluster. Well, today I’ll give you an idea of how to test it out a bit. Now, this is surely not the *proper* way to test the cluster, which would be to run some fancy cluster-aware graphics rendering application or something of the sort, but it will put something on there and churn out some cpu cycles just to see how things look.

What I like to use for this task is Folding At Home, which is a protein folding program (hey, help cure diseases and stuff, right). You can get things ready by downloading the appropriate version of the client for your machine(s) from the download section. The current one that I am using is the Linux version 6.24 Beta.

Log on to your cluster and create a directory for each node that you want to run the FAH client on. If you only have a couple, it’s easy to just do that by hand; if not, you can use this simple script:

#!/bin/bash
rockslist=$(rocks list host | grep ':' | cut -d':' -f1)
for name in $rockslist
do
mkdir -p $name
done

From there, extract the FAH client file you just downloaded into your headnode directory. Tip: your headnode directory will be named something *other* than compute-?-?. Take the fah6 and mpiexec files from there and copy them to all your compute-?-? directories.
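
That copy can be scripted the same way as the directory creation; just swap in the real name of your headnode directory for HEADNODE below:

#!/bin/bash
# copy the client binaries from the headnode directory to every compute dir
for name in compute-*
do
mkdir -p $name
cp HEADNODE/fah6 HEADNODE/mpiexec $name/
done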

This step really deserves better instructions, but you’ll want to install screen on all your nodes. If you have things set up well, you should be able to do this as root:

rocks run host "yum -y install screen"

Go into your headnode directory, start the FAH client with "./fah6", and answer the configuration questions. Once you get it actually processing a work unit, you can stop it with a control-c.

At this point, copy the client.cfg file from your headnode directory to all the compute node directories.

Now, back in the headnode directory, run "screen -d -m ./fah6", which will start your Folding At Home client in a detached screen session and leave it running.

Now you are ready to start it the same way on your compute nodes too:

for name in compute*
do
echo "Killing $name"
ssh $name killall screen
echo "Restarting $name"
ssh $name "cd $name ; screen -d -m ./fah6"
done

You can also use that script to periodically stop and restart (or just start again) FAH on your compute nodes, as FAH will sometimes hang. I normally run it every couple of weeks just to keep things going. Also jump in now and then with "screen -x" to see whether an updated client needs to be installed. Either way, this will eat up your spare cpu cycles and make use of your cluster while you learn on it and figure out what else to do with it. It’s also a lot of fun, and you can help study/cure diseases too.

Throw some Rocks at it!

One of the parts of my day job is dealing with and managing our HPC cluster. This is an 8-node Rocks cluster that was installed maybe a week after I started. Now, I was still a bit green at that point and failed to get a good grasp on some things at the time, like how to maintain and upgrade the thing, and I have recently been paying for that :-)

Apparently, the install we have doesn’t have a clear-cut way to do errata and bug fixes. It was an early version of the cluster software. Well, after some heated discussions with our Dell rep about this, I decided what I really needed to do was a bit of research to see what the deal really was and if I could get us upgraded to something a bit better and more current.

Along came my June 2009 issue of The Linux Journal which just happened to have a GREAT article in it about installing your very own Rocks Cluster (YAY!). Well, I hung on to that issue with the full intention of setting up a development/testing cluster when I had the chance. And that chance came just the other day.

Some of you probably don’t have a copy of the article, and I needed to do some things a bit different anyhow, so I am going to try and summarize here what I did to get my new dev cluster going.

Now, what I needed is probably a little different than what most people will, so you will have to adjust things accordingly, and I’ll try to mention the differences as I go along where I can. First off, I needed to run the cluster on RedHat proper and not CentOS, which is much easier to get going. I am also running my entire dev cluster virtually on an ESX box, while most of you would be doing this with physical hardware.

To start things off, I headed over to the Rocks Cluster website, where I went to the download section and then to the page for Rocks 5.2 (Chimichanga) for Linux. At this point, those of you who do not specifically need RedHat should pick the appropriate version of the Jumbo DVD (either 32 or 64 bit). What I did was grab the ISOs for the Kernel and Core Rolls. Those two CD images plus my DVD image for RHEL 5.4 are the equivalent of the one Jumbo DVD ISO on the website that uses CentOS as the default Linux install.

Now at this point you can follow the installation docs there (which are maybe *slightly* outdated?), or just follow along here, as the install is pretty simple really. You will need a head node and one or more compute nodes for your cluster. Your head node should have two network interfaces and each compute node one. The idea here is that your head node will be the only node of your cluster directly accessible on your local area network, and it will communicate on a separate private network with the compute nodes. So plug the eth0 interface of every node, head and compute alike, into a separate switch, and plug eth1 of your head node into your LAN. Turn on your head node and boot it from the Jumbo DVD or, in the case of the RHEL people, from the Kernel CD.

The Rocks installer is really quite simple. Enter “build” at the welcome screen. Soon you will be at the configuration screen. There you will choose the “CD/DVD Based Rolls” selection where you can pick from your rolls and such. I chose everything except the Sun specific stuff (descriptions on which Rolls do what are in the download section). Since I was using RHEL instead of CentOS on the jumbo dvd, I had to push that “CD/DVD” button once per cd/dvd and select what I needed from each one.

Once the selections are made, it asks you for information about the cluster. Only the FQDN and cluster name are really necessary. After that you are given the chance to configure your public (LAN) and private network settings, your root password, time zone, and disk partitioning. My best advice here would be to go with the defaults where possible, although I did change my private network address settings and they worked perfectly. Letting the partitioner handle your disk partitioning is probably best too.

A quick note about disk space: if you are going to have a lot of disk space anywhere, it’s best on the head node, as that space will be put in a partition that is shared between compute nodes. Also, each node should have at least 30GB of disk space to get the install done correctly. I tried with 16GB on one compute node and the install failed!

After all that (which really is not much at all), you just sit back and wait for your install to complete. After completion the install docs tell you to wait a few minutes for all the post install configs (behind the scenes I guess) to finish up before logging in.

Once you are at that point and logged into your head node, it is absolutely trivial to get a compute node running. First, from the command line on your head node, run “insert-ethers” and select “Compute”. Then, power on your compute node (do one at a time) and make sure it’s set to network boot (PXE). You will see the mac address and compute node name pop up on your insert-ethers screen and shortly thereafter your node will install itself from the head node, reboot and you’ll be rockin’ and rollin’!

Once your nodes are going, you can get to that shared drive space on /state/partition1. You can run commands on the hosts by doing "rocks run host uptime", which would give you an uptime on all the hosts in the cluster. "rocks help" will help you out with more commands. You can ssh into any one of the nodes by simply doing "ssh compute-0-1" or whichever node you want.
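
For easy copy-and-paste, the handful of commands from this paragraph:

rocks run host uptime    # run a command on every node in the cluster
rocks list host          # list the nodes the cluster knows about
rocks help               # see what else rocks can do
ssh compute-0-1          # shell into an individual node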

Now, the only problem I have encountered so far is that I had a compute node that didn’t want to install correctly (probably because I was impatient). I tried reinstalling it, and it somehow got a new node name from insert-ethers. In order to delete the bad info in the node database that insert-ethers maintains, I needed to do a "rocks remove host compute-0-1" and then a "rocks sync config" before I was able to make a new compute-0-1 node.
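
The cleanup sequence for a botched node, spelled out:

rocks remove host compute-0-1    # delete the stale node record
rocks sync config                # push the corrected configuration
insert-ethers                    # select "Compute" and PXE-boot the node again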

So now you and I have a functional cluster. What do you do with it? Well, you can do anything on there that requires the horsepower of multiple computers. Some things come to mind like graphics rendering and there are programs and instructions on the web on how to do those. I ran folding at home on mine. With a simple shell script I was able to setup and start folding at home on all my nodes. You could probably do most anything the same way. If any of you find something fantastic you like to run on your cluster, be sure to pass it along and let us know!

CentOS 5.4 The Real Deal

I promised that I would try the full install version of CentOS 5.4 desktop on my thinkpad and didn’t want to disappoint, so here it is…

I actually had a really hard time with this one. That is not to say that I believe there is an issue with CentOS itself, but certainly something odd with the lappy at the very least. For some reason, no matter how hard I tried, I could not get the installer to run correctly from the DVD. Or, more correctly put, from any of several DVDs; the installer would just randomly crap out in different places. Eventually I tried an old CentOS 5.3 DVD in an external DVD drive, and that finally did the trick. Besides, it only takes one quick "yum -y update" from there and you’re at 5.4 anyhow.

Like the live version (or should that be the other way around?), the full install of CentOS 5.4 is quite good looking and very snappy. It uses the GNOME desktop and has all the goodies you would expect from a full-blown enterprise desktop. It also carries, smartly, the software that I personally use on a day-to-day basis: firefox, thunderbird, openoffice, etc.

You know, I almost dislike reviewing this particular distribution, because there is nothing particularly exciting about it other than that it does what I like in a business desktop distribution and does it quite well. The same goes for the server install (both are available from the same DVD): you get a true, reliable, enterprise-class Linux that “just works”™ like it’s supposed to. I guess that is the exciting part. You get a good Linux without having to tweak and mess with a bunch of things.

This sure isn’t a *home* desktop Linux. There’s no easy support for multimedia, so I wouldn’t go springing this on mom and dad, but for a business desktop, you just can’t go wrong here. And just FYI, there are plenty of good instructions on how to get your media on within a quick google search.

As unexciting as this sounds, I am still going to get a more permanent desktop install of this somewhere in my house. Just like I run some servers at home on CentOS, it sure couldn’t hurt to have a really stable and quick workstation somewhere within easy reach too!
