Archive for the 'System Administration' Category

Replace HD in Dell Inspiron N5110

I am no stranger to replacing bad equipment in servers, desktops and laptops, but some laptops don’t make it easy. This was one.

A couple of years ago I swapped out an aging hdd in an older Dell Inspiron for a new ssd and, boy, the performance improved drastically. Lately, I have been using a new(er) Inspiron, an N5110, and have noticed that it sure took a while for things like bootup and Chrome to initially load. It was really starting to annoy me, so I looked up the specs on the original hdd and found that there was a squirrel in there pounding out the bits with a chisel, so I decided it was high time for a modern drive and splurged on a 240Gb ssd. I assumed that this was a simple pull-the-panel-off-the-bottom-and-swap kind of procedure like on the old Dell, so I pulled off the hdd-sized panel and... boom. The only thing under there was more plastic and a small memory slot???!!

Not to be outdone, I turned to YouTube, just like any self-respecting techie would, and was pleased to find some instruction there. You can find the video I used here if you are interested:

That is where is starts to get fun. Apparently you have to disassemble THE ENTIRE LAPTOP to get the hdd out. You have to pull out the battery, memory, all the screws on the bottom, the dvd drive, then flip the machine over and pull off the keyboard, unscrew and pull off the top plate and all the ribbon cables, then unscrew and remove the entire motherboard and one of the monitor mounts. The hdd is underneath the motherboard. Unreal.

Believe it or not, after all that I only had one extra screw(?) and the laptop booted up on the first try. Now came the good part: how to get my existing Linux Mint install onto the new ssd. Normally I would have just used a disk cloning program or dd to do it, but the old hdd was 500Gb and this new ssd is only 240Gb. There are also some complicated tutorials on the web on how to accomplish this task, but let me share with you the easy way.

Do a clean install of your OS. Really. With Linux it takes 15 minutes tops. Don’t bother with any of your configs or personalization. It’s a dummy install to not only get the partitioning correct on your ssd but also generate the correct /etc/fstab file (or get the new uuids and make the correct partitions bootable).

Once you are done, boot into your install media again (I used USB because it was faster) and mount your new installation AND your old hdd (I used an external usb drive case for this). I made the directories I needed by doing (as root) “mkdir -p /mnt/newdisk ; mkdir -p /mnt/olddisk” and then putting things in place with “mount /dev/sda1 /mnt/newdisk ; mount /dev/sdc1 /mnt/olddisk”. I should mention here that my partitions were the default Mint layout with a big Linux partition first, then an extended partition, then swap, on both drives.

Once mounted I made a backup copy of the /etc/fstab on my olddisk (the hdd) and then I copied the /etc/fstab from the newdisk to the /etc/fstab on the olddisk. Now the fun part. Go to (cd) the /mnt/newdisk directory. MAKE SURE IT’S THE NEWDISK DIRECTORY, and “rm -rf *”. That is going to delete all the files you just installed. It’ll only take a second.

Next is the long part. I used rsync to copy all my old files over. If you aren’t a hoarder like me with six linux dvd isos in your download directory and 50Gb of music files, it’ll go a lot faster, but all the same, it’s pretty cool to watch. I did a “rsync -rvlpogdstHEAX /mnt/olddisk/ /mnt/newdisk”. Make note of those slashes in there or you’ll end up having to move stuff around afterwards. In retrospect, I think you could use just rsync -av, but ymmv. What you will see is every file on your old drive being copied to the new one. Like I mentioned, this takes a while, so just sit back or grab a coffee. Once it’s done you are *almost* ready.

The very last thing you’ll need to fix is your grub.cfg file. These days everyone wants to use uuids to assign devices, and your boot file is still looking for your old hdd. Open up a couple of terminals. In one, vi /mnt/newdisk/boot/grub/grub.cfg and in the other vi /mnt/newdisk/etc/fstab. In the fstab file you will see the uuid for your new ssd drive. It’s the first uuid mentioned, mounted at /. You need to replace the old uuid in grub.cfg with the new one from your fstab. It’s easier than you think in vi. Just do a “:g/olduuidstring/s//newuuidstring/g” and hit enter, where olduuidstring is your old uuid and newuuidstring is your new uuid from the fstab file. Once it is finished replacing, you probably need to save it with a “:wq!” because your system will undoubtedly say it’s a read only file. Then reboot! You should be greeted shortly with a much faster but very familiar linux install, complete with all your goodies.
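If you’d rather do the swap non-interactively, sed can do the same global substitution. This demo runs against a throwaway file with made-up uuids; on the real system you’d read the new uuid out of /mnt/newdisk/etc/fstab and point sed at /mnt/newdisk/boot/grub/grub.cfg:

```shell
# Both uuids here are made up for the demo.
olduuid="1111aaaa-0000-0000-0000-000000000000"
newuuid="2222bbbb-0000-0000-0000-000000000000"

# Build a throwaway grub.cfg that still references the old uuid.
cfg=/tmp/grub-demo.cfg
printf 'search --no-floppy --fs-uuid --set=root %s\nlinux /boot/vmlinuz root=UUID=%s ro\n' \
    "$olduuid" "$olduuid" > "$cfg"

# Replace every occurrence of the old uuid, in place.
sed -i "s/$olduuid/$newuuid/g" "$cfg"

grep UUID "$cfg"
```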

One last note. You may want to increase the life of your ssd by adding a couple of options to your /etc/fstab file. Those options are discard and noatime. These options deal with extra disk writes that you really don’t need on ssd. Your / line options in the fstab should look something like “ext4 discard,noatime,errors=remount-ro 0 1”.
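For reference, here is what that / line might look like in /etc/fstab (the uuid below is a made-up placeholder; keep the one your install already generated):

```
# /etc/fstab - root filesystem on the ssd
# UUID is a placeholder - use the uuid already in your fstab
UUID=2222bbbb-0000-0000-0000-000000000000  /  ext4  discard,noatime,errors=remount-ro  0  1
```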


It’s NOT Telecommuting!

OK, so it is telecommuting – but hear me out for just a second…

I have been involved in a job search as a Linux admin for a few months now and one of the barriers I keep running into is (get this) physical location, or company location. WHY? Business owners, let me reason with you for a moment here.

Your servers are “in the cloud”:
There are a LOT of companies these days who are using cloud servers and services. Buzz words like PaaS, SaaS and IaaS are all the rage now, along with their providers AWS, Rackspace, Azure, Google and the like. These services that you use locally for your business are not actually located at your business. Likely, they are not even in the same time zone, and, in some cases, not even the same country. Every time one of your server administrators or users accesses those services and systems, they are doing so remotely, even if they are sitting at a desk next to you in your corporate headquarters.

You have “datacenters”:
For those of you who have your own datacenters for your machines, you have the same issue. Most companies have at least two such facilities for redundancy, and either one or both of them are typically located away from your corporate campus. This, again, means that when you are working on them in any capacity, you are doing so remotely, or “telecommuting”, whether it be from your corporate campus, from home, or across the world.

So you see, in almost every scenario in these modern times, you are already telecommuting to use your own resources. I am here to implore you to consider expanding your employment pool by letting computer workers do their jobs remotely. Save yourself some real estate space. Use conference calls, instant messaging, emails and video chats (free) for your office communications. Dramatically lower your corporate utility bills and *paper costs*. And give someone like myself a shot. You’ll be happy you did!

“Fixing” an old laptop

Dell Inspiron 1545

A few years ago when I was in the market for a new laptop I picked up one of the then wildly popular and cheap Dell Inspiron 1545s. There are gobs of these running around now and you can find them cheap if you look (click the pic for links to Amazon). I used this, it seems, forever. I only ever had one problem with it – a small plastic chip in one of the corners that I repaired with superglue (you would never notice).

Lately, though, it has been running noticeably slow. I don’t know if it’s because it’s actually getting slower, the software is just getting fatter, my work computer is blazing fast in comparison, or a combination of any/all of those. Either way, it’s really been bugging me so much lately that I had considered just getting a new lappy. Before I did, I decided to look over the specs to see what I actually had here. Mine is a core duo 2.2Ghz with 4Gb ram and a 320Gb hdd. Running Linux this thing *should* run like it was on fire. So why so freaking slow? A quick look at “top” revealed what had to be the problem. I was at almost 0% CPU and only 1.5Gb of ram in use. It HAD to be the slow-as-pencil-and-paper hard drive reads and writes.

A quick search showed that somewhere between now and the last time I came up for air at work, SSD prices dropped dramatically, so I stopped by a bigbox store, picked up a 240Gb SSD for <$100, screwed it in and WHAMO! It’s like I have a brand new laptop! Seriously! Not only is the difference noticeable, it’s amazing, so much so that I needed to break my blogging silence to tell you about it. If any of you have an aging laptop like mine that runs but is “meh”, it’s totally worth it to spend the 15 minutes it takes to do this upgrade. It certainly just saved me $500 and I am now, once again, perfectly happy with my trusty old (but well kept) Dell Inspiron 1545.

Diagnosis: Paranoia

You know, there are just some things you do not need first thing on a Monday morning. This was one of them…

I came in and started reviewing my reports and was looking at an access report, which is basically a “last | grep $TheDateIWant” from over the weekend. I keep a pretty tight ship and want to know who is accessing what servers and when (and sometimes why). What I saw was monstrously suspicious! I saw MYSELF logged in to 3 different servers 3 times each around 5am on Sunday morning – while I was sleeping.

This is the kind of thing to throw you into an immediate panic first thing on a Monday morning, but I decided to give myself 10 minutes to investigate before completely freaking out.

The first thing I noticed was that the access/login times looked suspiciously like the same times I ran my daily reports on the machines, however, the previous week I had changed the user that runs those reports and this was still saying it was me. I double, triple and quadruple checked and searched all the report programs to make absolutely sure there was no indication that they were still using my personal account (which was probably bad practice to begin with btw). Then I scoured all the cron logs to see what was actually running at those times, and oddly enough, it was just those reports.

I looked through the command line history on those machines and checked again with “last | head” to see who was logging on to those machines. Nothing out of place, BUT with the “last | head” I was NOT listed as being on the machine on that date! So I ran the entire report command again, “last | grep $TheDateIWant”, and there I was again, listed right under the logins of the report user.

Anyone catching this yet?

What I had stumbled upon were a few machines that are used so infrequently that the wtmp file, which is what the “last” command uses for data, had over 1 year of entries. My search of “last | grep 'Oct 31'” was returning not only this year, but my own logins from last year as well.
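If you want to see the gotcha for yourself, here is a little demo using canned output (the sample lines are made up to mimic last’s format, which prints no year):

```shell
# Plain "last" output has no year column, so grepping for a date
# matches entries from every year still in wtmp. These sample lines
# are made up to mimic last's format.
cat > /tmp/last-demo.txt <<'EOF'
linc  pts/0  10.0.0.5  Sun Oct 31 05:01 - 05:03  (00:02)
root  pts/1  10.0.0.9  Sun Oct 31 05:02 - 05:04  (00:02)
linc  pts/0  10.0.0.5  Sat Oct 31 04:58 - 05:00  (00:02)
EOF

# All three lines match, even though the third is from a year earlier:
grep -c "Oct 31" /tmp/last-demo.txt
```

Depending on your version of last, “last -F” will print full dates including the year, and “last -s” / “last -t” can bound the search to a time window, either of which avoids this trap.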


Moral of the story? Mondays stink – Just stay home!


Updates, updates everywhere. I pushed a bunch of updates to my Blog, LinuxPlanet Casts and Blogs, LinuxForChristians, the TLLTS Planet and the Lincware forums. Everything looks ok right now, but please let me know if you see anything strange happening (or not happening as the case may be). Thanks, and you may now return to your previously scheduled rss feed.

Throw some Rocks at it!

One of the parts of my day job is dealing with and managing our HPC cluster. This is an 8 node Rocks cluster that was installed maybe a week after I started. Now I was a bit green still at that point and failed to get a better grasp on some things at the time, like how to maintain and upgrade the thing, and I have recently been paying for that :-)

Apparently, the install we have doesn’t have a clear-cut way to do errata and bug fixes. It was an early version of the cluster software. Well, after some heated discussions with our Dell rep about this, I decided what I really needed to do was a bit of research to see what the deal really was and if I could get us upgraded to something a bit better and more current.

Along came my June 2009 issue of The Linux Journal which just happened to have a GREAT article in it about installing your very own Rocks Cluster (YAY!). Well, I hung on to that issue with the full intention of setting up a development/testing cluster when I had the chance. And that chance came just the other day.

Some of you probably don’t have a copy of the article, and I needed to do some things a bit different anyhow, so I am going to try and summarize here what I did to get my new dev cluster going.

Now what I needed is probably a little different than what most people will need, so you will have to adjust things accordingly and I’ll try and mention the differences as I go along where I can. First off, I needed to run the cluster on RedHat proper and not CentOS, which is much easier to get going. I am also running my entire dev cluster virtually on an ESX box, while most of you would be doing this with physical hardware.

To start things off I headed over to The Rocks Cluster website, where I went to the download section and then to the page for Rocks 5.2 (Chimichanga) for Linux. At this point, those of you who do not specifically need RedHat should pick the appropriate version of the Jumbo DVD (either 32 or 64 bit). What I did was to grab the isos for the Kernel and Core Rolls. Those 2 cd images plus my dvd image for RHEL 5.4 are the equivalent of the one Jumbo DVD iso on the website that uses CentOS as the default Linux install.

Now at this point, you can follow the installation docs there (which are maybe *slightly* outdated?), or just follow along here, as the install is really pretty simple. You will need a head node and one or more cluster nodes for your cluster. Your head node should have 2 network interfaces and each cluster node 1. The idea here is that your head node will be the only node of your cluster that is directly accessible on your local area network, and that head node will communicate on a separate private network with the cluster nodes. Plug the eth0 interface of every node, head and cluster alike, into a separate switch, and plug eth1 of your head node into your LAN. Turn on your head node and boot it up from the Jumbo DVD or, in the case of the RHEL people, from the Kernel cd.

The Rocks installer is really quite simple. Enter “build” at the welcome screen. Soon you will be at the configuration screen. There you will choose the “CD/DVD Based Rolls” selection where you can pick from your rolls and such. I chose everything except the Sun specific stuff (descriptions on which Rolls do what are in the download section). Since I was using RHEL instead of CentOS on the jumbo dvd, I had to push that “CD/DVD” button once per cd/dvd and select what I needed from each one.

Once the selections were made it asks you for information about the cluster. Only the FQDN and Cluster name are really necessary. After that you are given the chance to configure your public (lan) and private network settings, your root password, time zone and disk partitioning. My best advice here would be to go with default where possible although I did change my private network address settings and they worked perfectly. Letting the partitioner handle your disk partitioning is probably best too.

A quick note about disk space: If you are going to have a lot of disk space anywhere, it’s best on the head node, as that space will be put in a partition that is shared between compute nodes. Also, each node should have at least 30Gb of hdd space to get the install done correctly. I tried with 16Gb on one compute node and the install failed!

After all that (which really is not much at all), you just sit back and wait for your install to complete. After completion the install docs tell you to wait a few minutes for all the post install configs (behind the scenes I guess) to finish up before logging in.

Once you are at that point and logged into your head node, it is absolutely trivial to get a compute node running. First, from the command line on your head node, run “insert-ethers” and select “Compute”. Then, power on your compute node (do one at a time) and make sure it’s set to network boot (PXE). You will see the mac address and compute node name pop up on your insert-ethers screen and shortly thereafter your node will install itself from the head node, reboot and you’ll be rockin’ and rollin’!

Once your nodes are going, you can get to that shared drive space on /state/partition1. You can run commands on the hosts by doing “rocks run host uptime”, which would give you an uptime on all the hosts in the cluster. “rocks help” will help you out with more commands. You can ssh into any one of the nodes by simply doing “ssh compute-0-1” or whichever node you want.

Now the only problem I have encountered so far is I had an issue with a compute node that didn’t want to install correctly (probably because I was impatient). I tried reinstalling it and it somehow got a new nodename from insert-ethers. In order to delete my bad info in the node database that insert-ethers maintains, I needed to do a “rocks remove host compute-0-1” and then “rocks sync config” before I was able to make a new compute-0-1 node.

So now you and I have a functional cluster. What do you do with it? Well, you can do anything on there that requires the horsepower of multiple computers. Some things come to mind like graphics rendering and there are programs and instructions on the web on how to do those. I ran folding at home on mine. With a simple shell script I was able to setup and start folding at home on all my nodes. You could probably do most anything the same way. If any of you find something fantastic you like to run on your cluster, be sure to pass it along and let us know!
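As a sketch of that approach, here is roughly how a fan-out script can look. The node names and the DRY_RUN guard are assumptions for the demo; on a real Rocks head node you could just as easily use “rocks run host”:

```shell
# Fan a command out to every compute node. Node names below are
# examples -- list your own. DRY_RUN=1 just prints what would run,
# so this is safe to try anywhere; set it to 0 on a real cluster.
NODES="compute-0-0 compute-0-1 compute-0-2"
CMD="uptime"
DRY_RUN=1

rm -f /tmp/fanout-demo.log
for n in $NODES; do
    if [ "$DRY_RUN" = 1 ]; then
        echo "would run on $n: ssh $n $CMD" | tee -a /tmp/fanout-demo.log
    else
        ssh "$n" "$CMD" &   # run on all nodes in parallel
    fi
done
wait
```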

National Blog Posting Month

Well, here it is, National Blog Posting Month again. I have decided to accept the challenge. I do, however, think that I am setting myself up for failure. Just curious as to how long that is going to take. :-)

Do stay tuned, though, as I will attempt to interject a few interesting things, if possible, from my many times mundane sysadmin life!

I would like to take this opportunity to challenge the other Linuxish bloggers to do the same and perhaps we can flood the market (so to speak) with some interesting Linux/FOSS/BSD content this month! You can do it!

New Show – Update

Due to the overwhelming response of 4 people on the idea of doing a Linux System Administration show, I have decided to do it anyway. I know - glutton for punishment. I believe I will do this in a video format, or I will at least try. I need to work out just how to get that accomplished, but we’ll see what happens. What I do need from you 4 listeners/readers/watchers is a NAME and (hopefully RFQuerin is reading) a LOGO :-)

As always, hit me up with suggestions, questions or concerns at linc dot fessenden at gmail dot com. Thanks!