Archive for the 'System Administration' Category

Replace HD in Dell Inspiron N5110

I am no stranger to replacing bad equipment in servers, desktops and laptops, but some laptops don’t make it easy. This was one.

A couple of years ago I swapped out an aging hdd in an older Dell Inspiron for a new ssd and, boy, the performance improved drastically. Lately, I have been using a new(er) Inspiron, an N5110, and have noticed that it sure took a while for things like bootup and Chrome to initially load. It was really starting to annoy me, so I looked up the specs on the original hdd and found that there was a squirrel in there pounding out the bits with a chisel, so I decided it was high time for a modern drive and splurged on a 240Gb ssd. I assumed that this was a simple pull-the-panel-off-the-bottom-and-swap kind of procedure like the old Dell, so I pulled off the hdd sized panel and boom. The only thing under there was more plastic and a small memory slot???!!

Not to be outdone, I turned to youtube, just like any self-respecting techie would, and was pleased to find some instruction there. You can find the video I used here if you are interested:

That is where it starts to get fun. Apparently you have to disassemble THE ENTIRE LAPTOP to get the hdd out. You have to pull out the battery, memory, all the screws on the bottom, the dvd drive, then flip the machine over and pull off the keyboard, unscrew and pull off the top plate and all the ribbon cables, then unscrew and remove the entire motherboard and one of the monitor mounts. The hdd is underneath the motherboard. Unreal.

Believe it or not, after all that I only had one extra screw(?) and the laptop booted up on the first try. Now came the good part: how to get my existing Linux Mint install onto the new ssd. Normally I would have just used a disk cloning program or dd to do it, but the old hdd was 500Gb and this new ssd is only 240Gb. There are also some complicated tutorials on the web on how to accomplish this task, but let me share with you the easy way.

Do a clean install of your OS. Really. With Linux it takes 15 minutes tops. Don’t bother with any of your configs or personalization. It’s a dummy install, not only to get the partitioning correct on your ssd, but to generate the correct /etc/fstab file (in other words, to get the new uuids and make the correct partitions bootable).

Once you are done, boot into your install media again (I used USB because it was faster) and mount your new installation AND your old hdd (I used an external usb drive case for this). I made the directories I needed by doing (as root) “mkdir -p /mnt/newdisk ; mkdir -p /mnt/olddisk” and then putting things in place with “mount /dev/sda1 /mnt/newdisk ; mount /dev/sdc1 /mnt/olddisk”. I should mention here that my partitions were the default Mint layout with a big Linux partition first, then an extended partition, then swap, on both drives.
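Laid out as commands (as root; /dev/sda1 and /dev/sdc1 are just what the drives happened to be on my machine, so check yours with fdisk -l or blkid first):

mkdir -p /mnt/newdisk /mnt/olddisk       # mount points for both drives
mount /dev/sda1 /mnt/newdisk             # the fresh dummy install on the ssd
mount /dev/sdc1 /mnt/olddisk             # the old hdd in the usb enclosure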

Once mounted I made a backup copy of the /etc/fstab on my olddisk (the hdd) and then I copied the /etc/fstab from the newdisk to the /etc/fstab on the olddisk. Now the fun part. Go to (cd) the /mnt/newdisk directory. MAKE SURE IT’S THE NEWDISK DIRECTORY, and “rm -rf *”. That is going to delete all the files you just installed. It’ll only take a second.
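In commands, something like this (the backup filename is arbitrary, and triple check where you are before that rm):

cp /mnt/olddisk/etc/fstab /mnt/olddisk/etc/fstab.bak    # keep a copy of the old fstab
cp /mnt/newdisk/etc/fstab /mnt/olddisk/etc/fstab        # put the new uuids on the old disk
cd /mnt/newdisk                                         # make absolutely sure you are on the NEW disk
rm -rf *                                                # wipe the dummy install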

Next is the long part. I used rsync to copy all my old files over. If you aren’t a hoarder like me with six linux dvd isos in your download directory and 50Gb of music files, it’ll go a lot faster, but all the same, it’s pretty cool to watch. I did a “rsync -rvlpogdstHEAX /mnt/olddisk/ /mnt/newdisk”. Make note of where the trailing slash goes in there or you’ll end up having to move stuff around afterwards. In retrospect, I think you could use just rsync -av, but ymmv. What you will see is every file on your old drive being copied to the new one. Like I mentioned, this takes a few minutes, just sit back or grab a coffee. Once it’s done you are *almost* ready.
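For reference, here is the copy again with the slash placement spelled out; the shorter -avHAX form is my rough equivalent and assumes your rsync build supports ACLs and extended attributes:

rsync -rvlpogdstHEAX /mnt/olddisk/ /mnt/newdisk    # trailing slash on the source copies its *contents*
rsync -avHAX /mnt/olddisk/ /mnt/newdisk            # roughly equivalent and easier to remember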

The very last thing you’ll need to fix is your grub.cfg file. These days everyone wants to use uuids to assign devices, and your boot file is still looking for your old hdd. Open up a couple of terminals. In one, vi /mnt/newdisk/boot/grub/grub.cfg, and in the other, vi /mnt/newdisk/etc/fstab. In the fstab file you will see the uuid for your new ssd. It’s the first uuid mentioned, the one mounted at /. You need to replace the old uuid throughout grub.cfg with that new one from your fstab. It’s easier than you think in vi. Just do a “:g/olduuidstring/s//newuuidstring/g” and hit enter, where olduuidstring is your old uuid and newuuidstring is your new uuid from the fstab file. Once it is finished replacing, you will probably need to save with a “:wq!” because your system will undoubtedly say it’s a read only file. Then reboot! You should be greeted shortly with a much faster but very familiar linux install, complete with all your goodies.
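If you would rather not do the swap in vi, a sed one-liner works too. This is just a sketch: olduuidstring and newuuidstring are stand-ins for your real uuids, and blkid will show you the new one:

blkid /dev/sda1                                                           # shows the uuid of the new ssd root partition
sed -i 's/olduuidstring/newuuidstring/g' /mnt/newdisk/boot/grub/grub.cfg  # swap the old uuid for the new one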

One last note. You may want to increase the life of your ssd a bit by adding a couple of options to your /etc/fstab file. Those options are discard and noatime. These options deal with extra disk writes that you really don’t need on an ssd. Your / line options in the fstab should look something like “ext4 discard,noatime,errors=remount-ro 0 1”.
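For illustration, a hypothetical / line with those options in place (the uuid is made up):

UUID=0a1b2c3d-1111-2222-3333-444455556666 / ext4 discard,noatime,errors=remount-ro 0 1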

Enjoy!

It’s NOT Telecommuting!


OK, so it is telecommuting – but hear me out for just a second...

I have been involved in a job search as a Linux admin for a few months now, and one of the barriers I keep running into is (get this) physical location, or company location. WHY? Business owners, let me reason with you for a moment here.

Your servers are “in the cloud”:
There are a LOT of companies these days who are using cloud servers and services. Buzz words like PaaS, SaaS and IaaS are all the rage now, along with their providers AWS, Rackspace, Azure, Google and the like. These services that you use locally for your business are not actually located at your business. Likely, they are not even in the same time zone, and, in some cases, not even in the same country. Every time one of your server administrators or users accesses those services and systems, they are doing so remotely, even if they are sitting at a desk next to you in your corporate headquarters.

You have “datacenters”:
For those of you who have your own datacenters for your machines, you have the same issue. Most companies have at least two such facilities for redundancy, and either one or both of them are typically located away from your corporate campus. This, again, means that when you are working on them in any capacity, you are doing so remotely, or “telecommuting”, whether it be from your corporate campus, from home or from across the world.

So you see, in almost every scenario in these modern times, you are already telecommuting to use your own resources. I am here to implore you to consider expanding your employment pool by letting computer workers do their jobs remotely. Save yourself some real estate space. Use conference calls, instant messaging, emails and video chats (free) for your office communications. Dramatically lower your corporate utility bills and *paper costs*. And give someone like myself a shot. You’ll be happy you did!

“Fixing” an old laptop

Dell Inspiron 1545


A few years ago when I was in the market for a new laptop I picked up one of the then wildly popular and cheap Dell Inspiron 1545s. There are gobs of these running around now and you can find them cheap if you look (click the pic for links to Amazon). I used this for, it seems, forever. I only ever had one problem with it – a small plastic chip in one of the corners that I repaired with superglue (you would never notice). Lately, though, it has been running noticeably slow. I don’t know if it’s because it’s actually getting slower, the software is just getting fatter, my work computer is blazing fast in comparison, or a combination of any/all of those. Either way, it’s really been bugging me so much lately that I had considered just getting a new lappy.

Before I did, I decided to look over the specs to see what I actually had here. Mine is a core duo 2.2GHz with 4Gb ram and a 320Gb HDD. Running Linux this thing *should* run like it was on fire. So why so freaking slow? A quick look at “top” revealed what had to be the problem. I was at almost 0% CPU and only 1.5Gb of ram in use. It HAD to be the slow-as-pencil-and-paper hard drive reads and writes.

A quick search showed that somewhere between now and the last time I came up for air at work, SSD prices had dropped dramatically, so I stopped by a bigbox store, picked up a 240Gb SSD for <$100, screwed it in and WHAMO! It’s like I have a brand new laptop! Seriously! Not only is the difference noticeable, it’s amazing, so much so that I needed to break my blogging silence to tell you about it. If any of you have an aging laptop like mine that runs but is “meh”, it’s totally worth it to spend the 15 minutes it takes to do this upgrade. It certainly just saved me $500 and I am now, once again, perfectly happy with my trusty old (but well kept) Dell Inspiron 1545.

Linux Shell Scripting Cookbook



   As a full time Senior Linux System Administrator in real life I was quite interested to get my fingers on this book for a review. After all, the job of a smart sysadmin pretty much dictates scripting away as much of your work as possible. We are a lazy bunch and we call that being efficient :)

   As this is the first book I have reviewed from Packt Publishing or the author, Sarath Lakshman, I wasn’t really sure what I was in for. In fact I was slightly put off by the price, which I initially thought overly hefty at $45 US. For that kind of scratch I am used to seeing a much more substantially sized book from the sort of publishers I normally review for. I started making my way through the book anyway, and I am glad I did.

   What makes this book really cool is the premise behind it. Inside, as a “cookbook” should, you have these “recipes” for scripts. These are not what I have normally seen in many scripting books before, which are generally theoretical and sometimes lengthy examples; these recipes are pretty straightforward, real-world examples of things you might want to do, and how to handle them efficiently. The recipes are also small enough that you could easily piece them out to compose another script, and I am certain that would be a great help to novice scripters.

   As nice as I think this book would be for novice scripters, there is a lot of smart stuff in there, stuff that had never occurred to me through my years of command line use. I actually got really excited to try some of the examples and to put them into practice. I particularly liked the little tricks here and there, like the “subshell trick”, and I was absolutely thrilled that this book used modern syntax and variable manipulation, dropping deprecated habits like putting commands into backticks. Good form!
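   To show what I mean by that last bit (these are my own toy examples, not the book’s): the modern command substitution syntax nests cleanly where backticks get ugly, and a subshell lets you group commands without side effects on your current shell.

files=`ls /tmp`                      # old backtick style, now deprecated
files=$(ls /tmp)                     # modern $() style, nests cleanly
( cd /var/log && ls -lt | head )     # runs in a subshell; your working directory is untouched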

   This book is certainly a keeper and I would recommend it highly to anyone who wants to become proficient on the command line. Some days you actually *do* get what you pay for, and I believe people will find this book to be a good example of that. This book was truly fun for me to work my way through and I sure hope they have more like it in store for the future. Go buy yourself a copy. I know I will be hanging on to this one for a while :)

RHEL 5 quick and dirty samba primer



A friend asked me for a quick primer on how to set up a Windows-accessible share under RHEL 5, so I thought I would include it here for the benefit of anyone interested.

  • sudo yum -y install samba
  • sudo vim /etc/samba/smb.conf
  • replace the file with something like so:

[global]
workgroup = SOMEWORKGROUPNAME
server string = SERVERHOSTNAME Samba Server Version %v
security = user
netbios name = CALLMESOMETHING
[data]
comment = my data share
path = /data
read only = no
writable = yes
guest ok = no
available = yes
valid users = USERNAME

  • add a local user to the box: sudo useradd USERNAME
  • add the local user to samba and set a password: sudo smbpasswd -a USERNAME
  • restart samba service: sudo service smb restart
  • make sure samba starts at boot: sudo chkconfig smb on
  • adjust your firewall settings if necessary

At this point you should be able to access the share at //servername/data.
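A quick sanity check from another Linux box, assuming smbclient is installed and “servername” resolves:

smbclient -L //servername -U USERNAME       # list the shares the server offers
smbclient //servername/data -U USERNAME     # interactive, ftp-like access to the data share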
Have fun!

System Administration: Information

Probably 50% of a SysAdmin’s job revolves around information. Knowing what is going on with your systems can make all the difference. Just don’t make the mistake of thinking that the more information the better. What you really need is the *correct* information at the appropriate time and it shouldn’t be obfuscated by extra information.

Good info sources:
Use OSSEC and Nagios. These products will notify you about security issues and outages.

There is a children’s fable about the boy who cried wolf. To make a long story short, the boy raised false alarms several times to draw attention, until, when he really saw a wolf, nobody would come. There is an important lesson in there about information too. After a while, if you are flooded with info you don’t need, you tend to stop paying attention and may miss something important.

The right stuff:
Make sure that you set up your source filters or rules well. Use your mail filters wisely and set them up as you go along to remove non-essential notifications. And most importantly, read and pay attention to those alerts and notifications you do get!

Server Build

Last night on the TechShow I was asked about providing some info on a decent default server build. Here are some quick notes to get people going. Adjust as necessary.

Just for ease, here, let’s assume you are installing CentOS 5, a nice robust enterprise class Linux for your server needs.

CentOS 5 / RHEL 5 / Scientific Linux, etc., does a really great job picking the defaults, so sticking with those is just fine and has worked well for me on literally hundreds of servers.

  • I let the partitioner remove all existing partitions and chose the default layout without modification.
  • Configure your networking appropriately, make sure to set your system clock for the appropriate timezone (no I do not generally leave my hardware clock set to UTC).
  • When picking general server packages I go for web server and software devel. I do not, generally, pick virtualization unless there is a specific reason to. I find that the web and devel meta server choices provide a robust background with all the tools I need to set up almost any kind of server I want without having to dredge for hundreds of packages later on.
  • The install itself at this point should take you about 15 minutes depending on the speed of your hardware.
  • Once installed, reboot the server and you should come to a setup agent prompt. Select the firewall configuration. Disable the firewall and SELinux completely (trust me here). Once that is done, exit the setup agent (no need to change anything else here), log in to the machine as root and reboot it. This is necessary to completely disable SELinux (a quick way to double check the SELinux setting follows this list).
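If you skip the setup agent or just want to verify it, the permanent SELinux setting lives in /etc/selinux/config. A quick sketch to check (and set) it:

grep ^SELINUX= /etc/selinux/config                              # should report SELINUX=disabled
sed -i 's/^SELINUX=.*/SELINUX=disabled/' /etc/selinux/config    # set it if it isn't, then reboot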

From this point on it’s all post install config (a condensed command sketch of several of these steps follows the list):

  • Add any software repositories you need to.
    I not only have my own repo for custom applications, but also have a local RedHat repo for faster updates and lower network strain/congestion.
  • Install your firewall.
    I use an ingress and egress firewall built on iptables. While mine is a custom written app, there are several iptables firewall generator apps out there you can try.
  • Install your backup software.
    Doesn’t matter if this is a big company backup software like TSM or CommVault, or you are just using tar in a script. Make sure your system is not only being backed up regularly, but that you can actually restore data from those backups if you need to.
  • Add your local admin account(s).
    Don’t be an idiot and log into your server all the time as root. Make a local account and give yourself sudo access (and use it).
  • Fix your mail forwarding.
    Create a .forward file in root’s home directory and put your email address in there. You will get your server’s root emails delivered to you so you can watch the logwatch reports and any cron results and errors. This is important sysadmin stuff to look at when it hits your inbox.
  • Stop unnecessary services.
    Yes, if you are running a server you can probably safely stop the bluetooth and cups services. Check through what you are running with a “service --status-all” or a “chkconfig --list” (according to your runlevel) and turn off / stop those services you are not and will not be using. This will go a long way toward securing your server as well.
  • Install OSSEC and configure it to email you alerts.
  • No root ssh.
    Change your /etc/ssh/sshd_config and set “PermitRootLogin no”. Remember, you just added an admin account for yourself, you don’t need to ssh into this thing as root anymore. Restart your sshd service after making the change in order to apply it.
  • Set runlevel 3 as default.
    You do not need to have a GUI desktop running on your server. Run the gui on your workstation and save your server resources for serving stuff. Make the change in /etc/inittab “id:3:initdefault:”.
  • Fix your syslog.
    You really should consider having a separate syslog server. They are easy to set up (hey, Splunk is FREE up to a certain amount of usage) and it makes keeping track of what’s happening on multiple servers much easier (try that Splunk stuff – you’ll like it).
  • Set up NTPD.
    Your server needs to know what time it is. ‘Nuff said.
  • Install ClamAV.
    Hey, it’s free and it works. If you do ANYTHING at all with handling emails or fileshares for windows folks on this machine, you owe it to yourself and your users to run Clam on there to help keep them safer.
  • Do all your updates now.
    Before you go letting the world in on your new server, make sure to run all the available updates. No sense starting a new server instance with out of date and potentially dangerous software.
  • Lastly, update your logbook.
    You should have SOME mechanism for keeping track of server changes, whether it be on paper or in a wiki or whathaveyou. Use it RELIGIOUSLY. You will be glad someday you did.
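Here is a condensed sketch of several of the steps above on a CentOS 5 / RHEL 5 box. Treat it as a reminder list rather than something to paste in blindly: the admin account name and email address are placeholders, the services you disable depend on what is actually installed, and the inittab line assumes the default runlevel was 5.

useradd admin && passwd admin                       # local admin account; grant it sudo with visudo
echo "you@example.com" > /root/.forward             # forward root's mail somewhere you will read it
chkconfig bluetooth off; service bluetooth stop     # stop services a server does not need
chkconfig cups off; service cups stop
sed -i 's/^#\?PermitRootLogin.*/PermitRootLogin no/' /etc/ssh/sshd_config
service sshd restart                                # no more ssh logins as root
sed -i 's/^id:5:initdefault:/id:3:initdefault:/' /etc/inittab    # default to runlevel 3, no GUI
yum -y update                                       # bring everything current before going live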

ESXi and Subsonic

In continuation, somewhat, of my last post and a brief review on the last TechShow, I wanted to jot down some notes about my newest encounter with ESXi and Subsonic.


I wanted to try out Subsonic, so I really needed to put together a new machine to play with it a bit. As a RL System administrator, some things carry over into my home computing environment, and paranoia is one of them. I just *have* to test things outside of my “production” servers at home too. Since I run my servers in a virtualized environment, this shouldn’t be too much of a problem.

I run ESXi at home for my virtualization platform, and the norm there is to use virtualcenter (or the vic) to create and manipulate VMs. The problem there is I am just not a Windows fan (no kidding). I had gotten around this problem initially by creating a VM on VMware Server (running on Linux) and then using VMware Converter to move that VM to my ESXi machine. This time, I did a little more digging on the subject of using the command line to create those VMs natively, and I actually found some great information that let me do just that. These two links contain all the information I needed:
ESXi – creating new virtual machines (servers) from the command line
and
http://www.vm-help.com/esx40i/manage_without_VI_client_1.php

Without rehashing a lot of the detail provided in those two sites, the basics are using vmkfstools to create a disk image for you to use and then building a small, minimal vmx file with enough info in it to get things going. To do the install, make sure your vmx boots an iso image from the cdrom drive and turn on vnc for the box. From there it’s quite easy to get an install working.
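Roughly, and purely as a sketch of the kind of thing those guides walk you through (the datastore path, sizes, guest OS type, iso name and vnc port/password are all made-up placeholders):

cd /vmfs/volumes/datastore1 && mkdir testvm && cd testvm
vmkfstools -c 20G -d thin -a lsilogic testvm.vmdk      # create a thin provisioned virtual disk
cat > testvm.vmx <<'EOF'
config.version = "8"
virtualHW.version = "7"
displayName = "testvm"
guestOS = "rhel5"
memsize = "1024"
scsi0.present = "TRUE"
scsi0.virtualDev = "lsilogic"
scsi0:0.present = "TRUE"
scsi0:0.fileName = "testvm.vmdk"
ide1:0.present = "TRUE"
ide1:0.deviceType = "cdrom-image"
ide1:0.fileName = "/vmfs/volumes/datastore1/iso/CentOS-5.5-i386-bin-DVD.iso"
ide1:0.startConnected = "TRUE"
ethernet0.present = "TRUE"
ethernet0.networkName = "VM Network"
RemoteDisplay.vnc.enabled = "TRUE"
RemoteDisplay.vnc.port = "5901"
RemoteDisplay.vnc.password = "changeme"
EOF
vim-cmd solo/registervm /vmfs/volumes/datastore1/testvm/testvm.vmx    # register the VM with the host

From there, vim-cmd vmsvc/getallvms shows the new VM’s id, vim-cmd vmsvc/power.on followed by that id boots it, and you point a vnc client at the host on the port you chose to run through the installer.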

The server OS I decided on installing was CentOS 5.5. I chose the standard server install, and the only things required to get Subsonic working on it were:
yum install java-1.6.0-openjdk
and then to download and install the rpm from Subsonic’s website. A little later on I found that Subsonic would not stream my ogg files and that was easily fixed by:
rpm --import http://apt.sw.be/RPM-GPG-KEY.dag.txt
wget http://packages.sw.be/rpmforge-release/rpmforge-release-0.5.2-2.el5.rf.i386.rpm
rpm -Uvh rpmforge-release-0.5.2-2.el5.rf.i386.rpm
yum install lame ffmpeg

After all that, point your web browser to http://yourserver:4040 and you are rocking and rolling with the big boys. The thing that really impressed me with the setup is when you tell Subsonic where your music is. On every other music server install, this is the part where it takes a while to scan and index your music. With Subsonic this was, surprisingly, almost instantaneous! You tell it where the music is and *whamo*, your music shows up, ready to be played. Fantastic! The other great piece is the ability to add album art. You can just tell Subsonic to change your album art and it finds some suggestions on the web, lets you pick the correct one and saves it to your collection. It’s very nice and a complete time grabber :)

Diagnosis: Paranoia


You know, there are just some things you do not need first thing on a Monday morning. This was one of them…

I came in and started reviewing my reports and was looking at an access report, which is basically a “last | grep $TheDateIWant” from over the weekend. I keep a pretty tight ship and want to know who is accessing what servers and when (and sometimes why). What I saw was monstrously suspicious! I saw MYSELF logged in to 3 different servers, 3 times each, around 5am on Sunday morning – while I was sleeping.

This is the kind of thing to throw you into an immediate panic first thing on a Monday morning, but I decided to give myself 10 minutes to investigate before completely freaking out.

The first thing I noticed was that the access/login times looked suspiciously like the same times I ran my daily reports on the machines; however, the previous week I had changed the user that runs those reports, and this was still saying it was me. I double, triple and quadruple checked and searched all the report programs to make absolutely sure there was no indication that they were still using my personal account (which was probably bad practice to begin with btw). Then I scoured all the cron logs to see what was actually running at those times, and oddly enough, it was just those reports.

I looked through the command line history on those machines and checked again with “last | head” to see who was logging in on those machines. Nothing out of place, BUT with the “last | head” I was NOT listed as being on the machine on that date! So I ran the entire report command again, “last | grep $TheDateIWant”, and there I was again, listed right under the logins of the report user.

Anyone catching this yet?

What I had stumbled upon were a few machines that are used so infrequently that the wtmp file, which is what the “last” command uses for its data, had over 1 year of entries. Since last doesn’t print the year in its output, my search of “last | grep 'Oct 31'” was returning not only this year, but my own logins from last year as well.

WHEW!

Moral of the story? Mondays stink – Just stay home!

Updates

Updates, updates everywhere. I pushed a bunch of updates to FreeLinuxBox.org, my Blog, LinuxPlanet Casts and Blogs, LinuxForChristians, TLLTS Planet and the Lincware forums. Everything looks ok right now, but please let me know if you see anything strange happening (or not happening as the case may be). Thanks and you may now return to your previously scheduled rss feed.
