Checking for a Dell DRAC card on linux

Note to self: this seems to be the most reliable way of checking whether
a Dell machine has a DRAC card installed:
sudo ipmitool sdr elist mcloc

If there is one, you'll see some kind of DRAC card listed:

iDRAC6           | 00h | ok  |  7.1 | Dynamic MC @ 20h

If there isn't, you'll see only a base management controller:

BMC              | 00h | ok  |  7.1 | Dynamic MC @ 20h

You need IPMI set up for this (if you haven't done so already):

# on RHEL/CentOS etc.
yum install OpenIPMI OpenIPMI-tools   # OpenIPMI-tools provides ipmitool
service ipmi start

Subtracting Text Files

This has bitten me a couple of times now, and each time I've had to re-google the utility and figure out the appropriate incantation. So note to self: to subtract text files use comm(1).

Input files have to be sorted, but comm accepts a - argument for stdin, so you can sort on the fly if you like.
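
For example, to do that sorting on the fly (using bash process substitution, or '-' for stdin):

# sort both inputs inline
comm -23 <(sort one.txt) <(sort two.txt)

# or sort just one of them via stdin
sort one.txt | comm -23 - two.txt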

I also find the -1 -2 -3 options pretty counter-intuitive: they indicate what you want to suppress, when I'd rather indicate what I want to select. But whatever.

Here's the cheatsheet:

FILE1=one.txt
FILE2=two.txt

# FILE1 - FILE2 (lines unique to FILE1)
comm -23 $FILE1 $FILE2

# FILE2 - FILE1 (lines unique to FILE2)
comm -13 $FILE1 $FILE2

# intersection (common lines)
comm -12 $FILE1 $FILE2

# xor (non-common lines, either FILE)
comm -3 $FILE1 $FILE2
# or without the column delimiters:
comm -3 --output-delimiter=' ' $FILE1 $FILE2 | sed 's/^ *//'

# union (all lines)
comm $FILE1 $FILE2
# or without the column delimiters:
comm --output-delimiter=' ' $FILE1 $FILE2 | sed 's/^ *//'

OpenLDAP Tips and Tricks

Having spent too much of this week debugging problems around migrating ldap servers from RHEL5 to RHEL6, here are some miscellaneous notes to self:

  1. The service is named ldap on RHEL5, and slapd on RHEL6 - e.g. you do service ldap start on RHEL5, but service slapd start on RHEL6

  2. On RHEL6, you want all of the following packages installed on your clients:

    yum install openldap-clients pam_ldap nss-pam-ldapd
    
  3. This seems to be the magic incantation that works for me (with real SSL certificates, though):

    authconfig --enableldap --enableldapauth \
      --ldapserver ldap.example.com \
      --ldapbasedn="dc=example,dc=com" \
      --update
    
  4. Be aware that there are multiple ldap configuration files involved now. All of the following end up with ldap config entries in them and need to be checked:

    • /etc/openldap/ldap.conf
    • /etc/pam_ldap.conf
    • /etc/nslcd.conf
    • /etc/sssd/sssd.conf

    Note too that /etc/openldap/ldap.conf uses uppercased directives (e.g. URI) that get lowercased in the other files (URI -> uri). Additionally, some directives are confusingly renamed as well - e.g. TLS_CACERT in /etc/openldap/ldap.conf becomes tls_cacertfile in most of the others. :-(

  5. If you want to do SSL or TLS, you should know that the default behaviour is for ldap clients to verify certificates, and to give misleading bind errors if they can't validate them. This means (see the sample client ldap.conf sketch after this list):

    • if you're using self-signed certificates, add TLS_REQCERT allow to /etc/openldap/ldap.conf on your clients, which tells them to accept certificates they can't validate

    • if you're using CA-signed certificates, and want to verify them, add your CA PEM certificate to a directory of your choice (e.g. /etc/openldap/certs, or /etc/pki/tls/certs, for instance), and point to it using TLS_CACERT in /etc/openldap/ldap.conf, and tls_cacertfile in /etc/ldap.conf.

  6. RHEL6 uses a new-fangled /etc/openldap/slapd.d directory in place of the old /etc/openldap/slapd.conf config file, and the RHEL6 Migration Guide tells you how to convert from one to the other. But if you simply rename the default slapd.d directory, slapd will use the old-style slapd.conf file quite happily, which is much easier to read/modify/debug, at least while you're getting things working.

  7. If you run into problems on the server, there are lots of helpful utilities included with the openldap-servers package. Check out the manpages for slaptest(8), slapcat(8), slapacl(8), slapadd(8), etc.
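
For point 5, here's a minimal sketch of what a working client /etc/openldap/ldap.conf might look like (the hostname and paths are examples only):

URI          ldaps://ldap.example.com/
BASE         dc=example,dc=com
# CA-signed certificates, verified against your CA bundle:
TLS_CACERT   /etc/openldap/certs/ca.pem
# OR, for self-signed certificates, accept without validating:
#TLS_REQCERT allow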

rpm-find-changes

rpm-find-changes is a little script I wrote a while ago for rpm-based systems (RedHat, CentOS, Mandriva, etc.). It finds files in a filesystem tree that are not owned by any rpm package (orphans), or are modified from the version distributed with their rpm. In other words, any file that has been introduced or changed from its distributed version.

It's intended to help identify candidates for backup, or just for tracking interesting changes. I run it nightly on /etc on most of my machines, producing a list of files that I copy off the machine (using another tool, which I'll blog about later) and store in a git repository.

I've also used it for tracking changes to critical configuration trees across multiple machines, to make sure everything is kept in sync, and to be able to track changes over time.

Available on github: https://github.com/gavincarr/rpm-find-changes

RHEL6 GDM Sessions Workaround

Update: ilaiho has provided a better solution in the comments, which is to install the xorg-x11-xinit-session package, which adds a "User script" session option. This will invoke your (executable) ~/.xsession or ~/.Xclients configs, if selected, and works well, so I'd recommend you go that route instead of using this patch now.


The GDM Greeter in RHEL6 seems to have lost the ability to select 'session types' (or window managers), which apparently means you're stuck using Gnome, even if you have other better options installed. One workaround is to install KDM instead, and set DISPLAYMANAGER=KDE in your /etc/sysconfig/desktop config, as KDM does still support selectable session types.

Since I've become a big fan of tiling window managers in general, and ion in particular, this was pretty annoying, so I wasted a few hours today working through the /etc/X11 scripts and figuring out how they hung together on RHEL6.

So for any other gnome-haters out there who don't want to have to go to KDM, here's a patch to /etc/X11/xinit/Xsession that ignores the default 'gnome-session' set by GDM, which allows proper window manager selection either by user .xsession or .Xclients files, or by the DESKTOP setting in /etc/sysconfig/desktop.

diff --git a/xinit/Xsession b/xinit/Xsession
index e12e0ee..ab94d28 100755
--- a/xinit/Xsession
+++ b/xinit/Xsession
@@ -30,6 +30,14 @@ SWITCHDESKPATH=/usr/share/switchdesk
 # Xsession and xinitrc scripts which has been factored out to avoid duplication
 . /etc/X11/xinit/xinitrc-common

+# RHEL6 GDM doesn't seem to support selectable sessions, and always requests a
+# gnome-session. So we unset this default here, to allow things like user
+# .xsession or .Xclients files to be checked, and /etc/sysconfig/desktop
+# settings (via /etc/X11/xinit/Xclients) honoured.
+if [ -n "$GDMSESSION" -a $# -eq 1 -a "$1" = gnome-session ]; then
+  shift
+fi
+
 # This Xsession.d implementation, is intended to obsolte and replace the
 # various mechanisms present in the 'case' statement which follows, and to
 # eventually be able to easily remove all hard coded window manager specific

Apply as root:

cd /etc/X11
patch -p1 < /tmp/xsession.patch
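
With the patch applied, session selection works via the usual mechanisms again. For example, a user ~/.xsession along these lines (it must be executable; ion3 here is just my choice - substitute whatever window manager you want):

#!/bin/bash
# ~/.xsession
exec ion3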

Rebuild Inventory

Here's what I use to take a quick inventory of a machine before a rebuild, both to act as a reference during the rebuild itself, and in case something goes pear-shaped. The whole chunk after script up to exit is cut-and-pastable.

# as root, where you want your inventory file
script $(hostname).inventory
export PS1='\h:\w\$ '               # reset prompt to avoid ctrl chars
fdisk -l /dev/sd?                   # list partition tables
cat /proc/mdstat                    # list raid devices
pvs                                 # list lvm stuff
vgs
lvs
df -h                               # list mounts
ip addr                             # list network interfaces
ip route                            # list network routes
cat /etc/resolv.conf                # show resolv.conf
exit

# Cleanup control characters in the inventory
perl -i -pe 's/\r//g; s/\033\]\d+;//g; s/\033\[\d+m//g; s/\007/\//g' \
  $(hostname).inventory

# And then copy it somewhere else in case of problems ;-)
scp $(hostname).inventory somewhere:

Anything else useful I've missed?

Dell OMSA

Following on from my IPMI explorations, here's the next chapter in my getting-down-and-dirty-with-dell-hardware-on-linux adventures. This time I'm setting up Dell's OpenManage Server Administrator software, primarily in order to explore being able to configure bios settings from within the OS. As before, I'm running CentOS 5, but OMSA supports any of RHEL4, RHEL5, SLES9, and SLES10, and various versions of Fedora Core and OpenSUSE.

Here's what I did to get up and running:

# Configure the Dell OMSA repository
wget -O bootstrap.sh http://linux.dell.com/repo/hardware/latest/bootstrap.cgi
# Review the script to make sure you trust it, and then run it
sh bootstrap.sh
# OR, for CentOS5/RHEL5 x86_64 you can just install the following:
rpm -Uvh http://linux.dell.com/repo/hardware/latest/platform_independent/rh50_64/prereq/\
dell-omsa-repository-2-5.noarch.rpm

# Install base version of OMSA, without gui (install srvadmin-all for more)
yum install srvadmin-base

# One of the daemons requires /usr/bin/lockfile, so make sure you've got procmail installed
yum install procmail

# If you're running an x86_64 OS, there are a couple of additional 32-bit
#   libraries you need that aren't dependencies in the RPMs
yum install compat-libstdc++-33-3.2.3-61.i386 pam.i386

# Start OMSA daemons
for i in instsvcdrv dataeng dsm_om_shrsvc; do service $i start; done

# Finally, you can update your path by doing logout/login, or just run:
. /etc/profile.d/srvadmin-path.sh

Now to check whether you're actually functional you can try a few of the following (as root):

omconfig about
omreport about
omreport system -?
omreport chassis -?

omreport is the OMSA CLI reporting/query tool, and omconfig is the equivalent update tool. The main documentation for the current version of OMSA is on Dell's support site; I found the CLI User's Guide the most useful.

Here's a sampler of interesting things to try:

# Report system overview
omreport chassis

# Report system summary info (OS, CPUs, memory, PCIe slots, DRAC cards, NICs)
omreport system summary

# Report bios settings
omreport chassis biossetup

# Fan info
omreport chassis fans

# Temperature info
omreport chassis temps

# CPU info
omreport chassis processors

# Memory and memory slot info
omreport chassis memory

# Power supply info
omreport chassis pwrsupplies

# Detailed PCIe slot info
omreport chassis slots

# DRAC card info
omreport chassis remoteaccess

omconfig allows setting object attributes using a key=value syntax, which can get reasonably complex. See the CLI User's Guide above for details, but here are some examples of messing with various bios settings:

# See available attributes and settings
omconfig chassis biossetup -?

# Turn the AC Power Recovery setting to On
omconfig chassis biossetup attribute=acpwrrecovery setting=on

# Change the serial communications setting (on with serial redirection via com1 or com2)
omconfig chassis biossetup attribute=serialcom setting=com1
omconfig chassis biossetup attribute=serialcom setting=com2

# Change the external serial connector
omconfig chassis biossetup attribute=extserial setting=com1
omconfig chassis biossetup attribute=extserial setting=rad

# Change the Console Redirect After Boot (crab) setting
omconfig chassis biossetup attribute=crab setting=enabled
omconfig chassis biossetup attribute=crab setting=disabled

# Change NIC settings (turn on PXE on NIC1)
omconfig chassis biossetup attribute=nic1 setting=enabledwithpxe

Finally, there are some interesting output-formatting options available to omreport, useful for scripting e.g.

# Custom delimiter format (default semicolon)
omreport chassis -fmt cdv

# XML format
omreport chassis -fmt xml

# To change the default cdv delimiter
omconfig preferences cdvformat -?
omconfig preferences cdvformat delimiter=pipe

IPMI on CentOS/RHEL

Spent a few days deep in the bowels of a couple of datacentres last week, and realised I didn't know enough about Dell's DRAC remote access controllers to use them properly. In particular, I didn't know how to mess with the drac settings from within the OS. So spent some of today researching that.

Turns out there are a couple of routes to do this. You can use the Dell native tools (e.g. racadm) included in Dell's OMSA product, or you can use vendor-neutral IPMI, which is well-supported by Dell DRACs. I went with the latter as it's more cross-platform, and the tools come native with CentOS, instead of having to setup Dell's OMSA repositories. The Dell-native tools may give you more functionality, but for what I wanted to do IPMI seems to work just fine.

So installation is just:

yum install OpenIPMI OpenIPMI-tools
chkconfig ipmi on
service ipmi start

and then from the local machine you can use ipmitool to access and manipulate all kinds of useful stuff:

# IPMI commands
ipmitool help
man ipmitool

# To check firmware version
ipmitool mc info
# To reset the management controller
ipmitool mc reset [ warm | cold ]

# Show field-replaceable-unit details
ipmitool fru print

# Show sensor output
ipmitool sdr list
ipmitool sdr type list
ipmitool sdr type Temperature
ipmitool sdr type Fan
ipmitool sdr type 'Power Supply'

# Chassis commands
ipmitool chassis status
ipmitool chassis identify [<interval>]   # turn on front panel identify light (default 15s)
ipmitool [chassis] power soft            # initiate a soft-shutdown via acpi
ipmitool [chassis] power cycle           # issue a hard power off, wait 1s, power on
ipmitool [chassis] power off             # issue a hard power off
ipmitool [chassis] power on              # issue a hard power on
ipmitool [chassis] power reset           # issue a hard reset

# Modify boot device for next reboot
ipmitool chassis bootdev pxe
ipmitool chassis bootdev cdrom
ipmitool chassis bootdev bios

# Logging
ipmitool sel info
ipmitool sel list
ipmitool sel elist                       # extended list (see manpage)
ipmitool sel clear

For remote access, you need to setup user and network settings, either at boot time on the DRAC card itself, or from the OS via ipmitool:

# Display/reset password for default root user (userid '2')
ipmitool user list 1
ipmitool user set password 2 <new_password>

# Display/configure lan settings
ipmitool lan print 1
ipmitool lan set 1 ipsrc [ static | dhcp ]
ipmitool lan set 1 ipaddr 192.168.1.101
ipmitool lan set 1 netmask 255.255.255.0
ipmitool lan set 1 defgw ipaddr 192.168.1.254

Once this is configured you should be able to connect using the 'lan' interface to ipmitool, like this:

ipmitool -I lan -U root -H 192.168.1.101 chassis status

which will prompt you for your ipmi root password, or you can do the following:

echo <new_password> > ~/.racpasswd
chmod 600 ~/.racpasswd

and then use that password file instead of manually entering it each time:

ipmitool -I lan -U root -f ~/.racpasswd -H 192.168.1.101 chassis status

I'm using an 'ipmi' alias that looks like this:

alias ipmi='ipmitool -I lan -U root -f ~/.racpasswd -H'

# which then allows you to do the much shorter:
ipmi 192.168.1.101 chassis status
# OR
ipmi <hostname> chassis status

Finally, if you configure serial console redirection in the bios as follows:

Serial Communication -> Serial Communication:       On with Console Redirection via COM2
Serial Communication -> External Serial Connector:  COM2
Serial Communication -> Redirection After Boot:     Disabled

then you can setup standard serial access in grub.conf and inittab on com2/ttyS1 and get serial console access via IPMI serial-over-lan using the 'lanplus' interface:

ipmitool -I lanplus -U root -f ~/.racpasswd -H 192.168.1.101 sol activate

which I typically use via a shell function:

# ipmi serial-over-lan function
isol() {
   if [ -n "$1" ]; then
       ipmitool -I lanplus -U root -f ~/.racpasswd -H $1 sol activate
   else
       echo "usage: sol <sol_ip>"
   fi
}

# used like:
isol 192.168.1.101
isol <hostname>
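
For reference, the grub.conf/inittab side of this looks roughly like the following (a sketch only - the 57600 speed and vt100 terminal type are assumptions, so match whatever your bios redirection is actually set to):

# /boot/grub/grub.conf - add near the top:
serial --unit=1 --speed=57600
terminal --timeout=5 serial console
# and append to your kernel line(s):
#   console=tty0 console=ttyS1,57600

# /etc/inittab - spawn a getty on ttyS1
S1:2345:respawn:/sbin/agetty ttyS1 57600 vt100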

Mocking RPMs on CentOS

Mock is a Fedora project that allows you to build RPM packages within a chroot environment, allowing you to build packages for other systems than the one you're running on (e.g. building CentOS 4 32-bit RPMs on a CentOS 5 64-bit host), and ensuring that all the required build dependencies are specified correctly in the RPM spec file.

It's also pretty under-documented, so these are my notes on things I've figured out over the last week setting up a decent mock environment on CentOS 5.

First, I'm using mock 1.0.2 from the EPEL repository, rather than the older 0.6.13 available from CentOS Extras. There are apparently backward-compatibility problems with versions of mock > 0.6, but as I'm mostly building C5 packages I decided to go with the newer version. So installation is just:

# Install mock and python-ctypes packages (the latter for better setarch support)
$ sudo yum --enablerepo=epel install mock python-ctypes

# Add yourself to the 'mock' group that will have now been created
$ sudo usermod -a -G mock gavin

The mock package creates an /etc/mock directory with configs for various OS versions (mostly Fedoras). The first thing you want to tweak there is the site-defaults.cfg file which sets up various defaults for all your builds. Mine now looks like this:

# /etc/mock/site-defaults.cfg

# Set this to true if you've installed python-ctypes
config_opts['internal_setarch'] = True

# Turn off ccache since it was causing errors I haven't bothered debugging
config_opts['plugin_conf']['ccache_enable'] = False

# (Optional) Fake the build hostname to report
config_opts['use_host_resolv'] = False
config_opts['files']['etc/hosts'] = """
127.0.0.1 buildbox.openfusion.com.au nox.openfusion.com.au localhost
"""
config_opts['files']['etc/resolv.conf'] = """
nameserver 127.0.0.1
"""

# Setup various rpm macros to use
config_opts['macros']['%packager'] = 'Gavin Carr <gavin@openfusion.com.au>'
config_opts['macros']['%debug_package'] = '%{nil}'

You can use the epel-5-{i386,x86_64}.cfg configs as-is if you like; I copied them to centos-5-{i386,x86_64}.cfg versions and removed the epel 'extras', 'testing', and 'local' repositories from the yum.conf section, since I typically want to build using only 'core' and 'update' packages.
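
For what it's worth, that's roughly the following (a sketch - the config_opts['root'] value just needs to match your new config name):

cd /etc/mock
sudo cp epel-5-i386.cfg centos-5-i386.cfg
sudo cp epel-5-x86_64.cfg centos-5-x86_64.cfg
# then in each new file, change config_opts['root'] to match the new name,
# and delete the 'extras', 'testing', and 'local' repo sections from the
# embedded yum.conf, leaving just 'core' and 'update'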

You can then run a test by doing:

# e.g. initialise a centos-5-i386 chroot environment
$ CONFIG=centos-5-i386
$ mock -r $CONFIG --init

which will setup an initial chroot environment using the given config. If that seemed to work (you weren't inundated with error messages), you can try a build:

# Rebuild the given source RPM within the chroot environment
# usage: mock -r <mock_config> --rebuild /path/to/SRPM e.g.
$ mock -r $CONFIG --rebuild ~/rpmbuild/SRPMS/clix-0.3.4-1.of.src.rpm

If the build succeeds, it drops your packages into the /var/lib/mock/$CONFIG/result directory:

$ ls -1 /var/lib/mock/$CONFIG/result
build.log
clix-0.3.4-1.of.noarch.rpm
clix-0.3.4-1.of.src.rpm
root.log
state.log

If it fails, you can check mock output, the *.log files above for more info, and/or rerun mock with the -v flag for more verbose messaging.

A couple of final notes:

  • the chroot environments are cached, but rebuilding them and checking for updates can be pretty network intensive, so you might want to consider setting up a local repository to pull from. mrepo (available from rpmforge) is pretty good for that.

  • there don't seem to be any hooks in mock to allow you to sign packages you've built, so if you do want signed packages you need to sign them afterwards via rpm --resign.
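
For example, assuming you have a gpg signing key configured for rpm (i.e. %_gpg_name set in ~/.rpmmacros):

rpm --resign /var/lib/mock/$CONFIG/result/*.rpm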

Anycast DNS

(Okay, brand new year - must be time to get back on the blogging wagon ...)

Linux Journal recently had a really good article by Philip Martin on Anycast DNS. It's well worth a read - I just want to point it out and record a cutdown version of how I've been setting it up recently.

As the super-quick intro, anycast is the idea of providing a network service at multiple points in a network, and then routing requests to the 'nearest' service provider for any particular client. There's a one-to-many relationship between an ip address and the hosts that are providing services on that address.

In the LJ article above, this means you provide a service on a /32 host address, and then use a(n) (interior) dynamic routing protocol to advertise that address to your internal routers. If you're a non-cisco linux shop, that means using quagga/ospf.

The classic anycast service is dns, since it's stateless and benefits from the high availability and low latency of a distributed anycast service.

So here's my quick-and-dirty notes on setting up an anycast dns server on CentOS/RHEL using dnsmasq for dns, and quagga zebra/ospfd for the routing.

  1. First, setup your anycast ip address (e.g. 192.168.255.1/32) on a random virtual loopback interface e.g. lo:0. On CentOS/RHEL, this means you want to setup a /etc/sysconfig/network-scripts/ifcfg-lo:0 file containing:

    DEVICE=lo:0
    IPADDR=192.168.255.1
    NETMASK=255.255.255.255
    ONBOOT=yes
    
  2. Setup your dns server to listen to (at least) your anycast dns interface. With dnsmasq, I use an /etc/dnsmasq.conf config like:

    interface=lo:0
    domain=example.com
    local=/example.com/
    resolv-file=/etc/resolv.conf.upstream
    expand-hosts
    domain-needed
    bogus-priv
    
  3. Use quagga's zebra/ospfd to advertise this host address to your internal routers. I use a completely vanilla zebra.conf, and an /etc/quagga/ospfd.conf config like:

    hostname myhost
    password mypassword
    log syslog
    !
    router ospf
      ! Local segments (adjust for your network config and ospf areas)
      network 192.168.1.0/24 area 0
      ! Anycast address redistribution
      redistribute connected metric-type 1
      distribute-list ANYCAST out connected
    !
    access-list ANYCAST permit 192.168.255.1/32
    

That's it. Now (as root) start everything up:

ifup lo:0
for s in dnsmasq zebra ospfd; do
  service $s start
  chkconfig $s on
done
tail -50f /var/log/messages

And then check that the anycast dns address is getting advertised to and picked up by your router. If you're using cisco, you probably know how to do that; if you're using linux and quagga, the useful vtysh commands are:

show ip ospf interface <interface>
show ip ospf neighbor
show ip ospf database
show ip ospf route
show ip route
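
And from a client somewhere on the network, check that queries against the anycast address are actually answered ('myhost' is a placeholder, obviously):

dig @192.168.255.1 myhost.example.com +short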

Yum Download SRPMs

Found a nice post today on how to use yum to download source RPMs, rather than having to do a manual search on the relevant mirror.
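
I won't reproduce the post, but the short version (as I understand it - treat this as a sketch) is to use yumdownloader from the yum-utils package:

# install yum-utils if you don't already have it
yum install yum-utils
# then download the source RPM for a given package (e.g. postfix) to the current directory
yumdownloader --source postfix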

Skype 2.1 on CentOS 5

The new skype 2.1 beta (woohoo - Linux users are now only 2.0 versions behind Windows, way to go Skype!) doesn't come with a CentOS rpm, unlike earlier versions. And the Fedora packages that are available are for FC9 and FC10, which are too recent to work on a stock RHEL/CentOS 5 system.

So here's how I got skype working nicely on CentOS 5.3, using the static binary tarball.

Note that while it appears skype has finally been ported to 64-bit architectures, the only current 64-bit builds are for Ubuntu 8.10+, so installing on a 64-bit CentOS box requires 32-bit libraries to be installed (sigh). Otherwise you get the error: skype: /lib/ld-linux.so.2: bad ELF interpreter: No such file or directory.

# the available generic skype binaries are 32-bit, so if you're running a 64-bit
# system you need to make sure you have various 32-bit libraries installed
yum install glib2.i386 qt4.i386 zlib.i386 alsa-lib.i386 libX11.i386 \
  libXv.i386 libXScrnSaver.i386

# installing to /opt (tweak to taste)
cd /tmp
wget http://www.skype.com/go/getskype-linux-beta-static
cd /opt
tar jxvf /tmp/skype_static-2.1.0.47.tar.bz2
ln -s skype_static-2.1.0.47 skype

# Setup some symlinks (the first is required for sounds to work, the second is optional)
ln -s /opt/skype /usr/share/skype
ln -s /opt/skype/skype /usr/bin/skype

You don't seem to need pulseaudio installed (at least with the static binary - I assume it's linked in statically already).

Tangentially, if you have any video problems with your webcam, you might want to check out the updated video drivers available in the kmod-video4linux package from the shiny new ELRepo.org. I'm using their updated uvcvideo module with a Logitech QuickCam Pro 9000 and Genius Slim 1322AF, and both are working well.

pvmove Disk Migrations

Lots of people make use of linux's lvm (Logical Volume Manager) for providing services such as disk volume resizing and snapshotting under linux. But few people seem to know about the little pvmove utility, which offers a very powerful facility for migrating data between disk volumes on the fly.

Let's say, for example, that you have a disk volume you need to rebuild for some reason. Perhaps you want to change the raid type you're using on it; perhaps you want to rebuild it using larger disks. Whatever the reason, you need to migrate all your data to another temporary disk volume so you can rebuild your initial one.

The standard way of doing this is probably to just create a new filesystem on your new disk volume, and then copy or rsync all the data across. But how do you verify that you have all the data at the end of the copy, and that nothing has changed on your original disk after the copy started? If you did a second rsync and nothing new was copied across, and the disk usage totals exactly match, and you remember to unmount the original disk immediately, you might have an exact copy. But if your original disk data is changing at all, getting a good copy of a large disk volume can actually be pretty tricky.

The elegant lvm/pvmove solution to this problem is this: instead of doing a userspace migration between disk volumes, you add your new volume into the existing volume group, and then tell lvm to move all the physical extents off of your old physical volume, and the migration is magically handled by lvm, without even needing to unmount the logical volume!

# Volume group 'extra' exists on physical volume /dev/sdc1
$ lvs
  LV   VG     Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  data extra  -wi-ao 100.00G

# Add new physical volume /dev/sdd1 into volume group
$ vgextend extra /dev/sdd1
  Volume group "extra" successfully extended
$ vgs
  VG    #PV #LV #SN Attr   VSize   VFree
  extra   2   1   0 wz--n- 200.00G 100.00G

# Use pvmove to move physical extents off of old /dev/sdc1 (verbose mode)
$ pvmove -v /dev/sdc1
# Lots of output in verbose mode ...

# Done - remove the old physical volume from the volume group and wipe its label
$ vgreduce extra /dev/sdc1
$ pvremove /dev/sdc1
$ lvs
  LV   VG     Attr   LSize  Origin Snap%  Move Log Copy%  Convert
  data extra  -wi-ao 100.00G
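
One more thing worth knowing: if you change your mind part-way through, an in-flight move can be cancelled:

# Abort any pvmove operations currently in progress
pvmove --abort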

The joys of linux.

Quick Linux Box Hardware Overview

Note to self: here's how to get a quick overview of the hardware on a
linux box:
perl -F"\s*:\s*" -ane "chomp \$F[1];
  print qq/\$F[1] / if \$F[0] =~ m/^(model name|cpu MHz)/;
  print qq/\n/ if \$F[0] eq qq/\n/" /proc/cpuinfo
grep MemTotal /proc/meminfo
grep SwapTotal /proc/meminfo
fdisk -l /dev/[sh]d? 2>/dev/null | grep Disk

Particularly useful if you're auditing a bunch of machines (via an ssh loop or clusterssh or something) and want a quick 5000-foot view of what's there.

Basic KVM on CentOS 5

I've been using kvm for my virtualisation needs lately, instead of xen, and finding it great. Disadvantages are that it requires hardware virtualisation support, and so only works on newer Intel/AMD CPUs. Advantages are that it's baked into recent linux kernels, and so more or less Just Works out of the box, no magic kernels required.

There are some pretty useful resources covering this stuff out on the web, but not much that's specific to CentOS, so here's the recipe I've been using for CentOS 5:

# Confirm your CPU has virtualisation support
egrep 'vmx|svm' /proc/cpuinfo

# Install the kvm and qemu packages you need
# From the CentOS Extras repository (older):
yum install --enablerepo=extras kvm kmod-kvm qemu
# OR from my repository (for most recent kernels only):
ARCH=$(uname -i)
OF_MREPO=http://www.openfusion.com.au/mrepo/centos5-$ARCH/RPMS.of/
rpm -Uvh $OF_MREPO/openfusion-release-0.3-1.of.noarch.rpm
yum install kvm kmod-kvm qemu

# Install the appropriate kernel module - either:
modprobe kvm-intel
# OR:
modprobe kvm-amd
lsmod | grep kvm

# Check the kvm device exists
ls -l /dev/kvm

# I like to run my virtual machines as a 'kvm' user, not as root
chgrp kvm /dev/kvm
chmod 660 /dev/kvm
ls -l /dev/kvm
useradd -r -g kvm kvm

# Create a disk image to use
cd /data/images
IMAGE=centos5x.img
# Note that the specified size is a maximum - the image only uses what it needs
qemu-img create -f qcow2 $IMAGE 10G
chown kvm $IMAGE

# Boot an install ISO on your image and do the install
MEM=1024
ISO=/path/to/CentOS-5.2-x86_64-bin-DVD.iso
# ISO=/path/to/WinXP.iso
qemu-kvm -hda $IMAGE -m ${MEM:-512} -cdrom $ISO -boot d
# I usually just do a minimal install with std defaults and dhcp, and configure later

# After your install has completed restart without the -boot parameter
# This should have outgoing networking working, but pings don't work (!)
qemu-kvm -hda $IMAGE -m ${MEM:-512} &

That should be sufficient to get you up and running with basic outgoing networking (for instance as a test desktop instance). In qemu terms this is using 'user mode' networking which is easy but slow, so if you want better performance, or if you want to allow incoming connections (e.g. as a server), you need some extra magic, which I'll cover in a subsequent post.

Simple KVM Bridging

Following on from my post yesterday on "Basic KVM on CentOS 5", here's how to setup simple bridging to allow incoming network connections to your VM (and to get other standard network functionality like pings working). This is a simplified/tweaked version of Hadyn Solomon's bridging instructions.

Note that this is all done on your HOST machine, not your guest.

For CentOS:

# Install bridge-utils
yum install bridge-utils

# Add a bridge interface config file
vi /etc/sysconfig/network-scripts/ifcfg-br0
# DHCP version
ONBOOT=yes
TYPE=Bridge
DEVICE=br0
BOOTPROTO=dhcp
# OR, static version
ONBOOT=yes
TYPE=Bridge
DEVICE=br0
BOOTPROTO=static
IPADDR=xx.xx.xx.xx
NETMASK=255.255.255.0

# Make your primary interface part of this bridge e.g.
vi /etc/sysconfig/network-scripts/ifcfg-eth0
# Add:
BRIDGE=br0
# Optional: comment out BOOTPROTO/IPADDR lines, since they're
# no longer being used (the br0 takes precedence)

# Add a script to connect your guest instance to the bridge on guest boot
vi /etc/qemu-ifup
#!/bin/bash
BRIDGE=$(/sbin/ip route list | awk '/^default / { print $NF }')
/sbin/ifconfig $1 0.0.0.0 up
/usr/sbin/brctl addif $BRIDGE $1
# END OF SCRIPT
# Silence a qemu warning by creating a noop qemu-ifdown script
vi /etc/qemu-ifdown
#!/bin/bash
# END OF SCRIPT
chmod +x /etc/qemu-if*

# Test - bridged networking uses a 'tap' networking device
NAME=c5-1
qemu-kvm -hda $NAME.img -name $NAME -m ${MEM:-512} -net nic -net tap &

Done. This should give you VMs that are full network members, able to be pinged and accessed just like a regular host. Bear in mind that this means you'll want to setup firewalls etc. if you're not in a controlled environment.

Notes:

  • If you want to run more than one VM on your LAN, you need to set the guest MAC address explicitly, since otherwise qemu uses a static default that will conflict with any other similar VM on the LAN. e.g. do something like:
# HOST_ID, identifying your host machine (2-digit hex)
HOST_ID=91
# INSTANCE, identifying the guest on this host (2-digit hex)
INSTANCE=01
# Startup, but with explicit macaddr
NAME=c5-1
qemu-kvm -hda $NAME.img -name $NAME -m ${MEM:-512} \
  -net nic,macaddr=00:16:3e:${HOST_ID}:${INSTANCE}:00 -net tap &
  • This doesn't use the paravirtual ('virtio') drivers that Hadyn mentions, as these require kernel 2.6.25 or later, so they're not available to CentOS linux guests without a kernel upgrade.

Simple dual upstream gateways in CentOS

Had to setup some simple policy-based routing on CentOS again recently, and had forgotten the exact steps. So here's the simplest recipe for CentOS that seems to work. This assumes you have two upstream gateways (gw1 and gw2), and that your default route is gw1, so all you're trying to do is have packets that come in on gw2 go back out gw2.

1) Define an extra routing table e.g.

$ cat /etc/iproute2/rt_tables
#
# reserved values
#
255     local
254     main
253     default
0       unspec
#
# local tables
#
102     gw2
#

2) Add a default route via gw2 (here 172.16.2.254) to table gw2 on the appropriate interface (here eth1) e.g.

$ cat /etc/sysconfig/network-scripts/route-eth1
default table gw2 via 172.16.2.254

3) Add an ifup-local script to add a rule to use table gw2 for eth1 packets e.g.

$ cat /etc/sysconfig/network-scripts/ifup-local
#!/bin/bash
#
# Script to add/delete routing rules for gw2 devices
#

GW2_DEVICE=eth1
GW2_LOCAL_ADDR=172.16.2.1

if [ $(basename $0) = ifdown-local ]; then
  OP=del
else
  OP=add
fi

if [ "$1" = "$GW2_DEVICE" ]; then
  ip rule $OP from $GW2_LOCAL_ADDR table gw2
fi

4) Use the ifup-local script also as ifdown-local, to remove that rule

$ cd /etc/sysconfig/network-scripts
$ ln -s ifup-local ifdown-local

5) Restart networking, and you're done!

# service network restart
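
A quick way to check that the rule and the extra routing table are in place afterwards:

ip rule list
ip route show table gw2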

Finding/Deleting Broken Symlinks

Find goodness (with a recent-ish find for the '-delete'):

find -L . -type l
find -L . -type l -delete

Using Network Driver Images on RedHat/CentOS Installs

I was building a shiny new CentOS 5.0 server today with a very nice 3ware 9650SE raid card.

Problem #1: the RedHat anaconda installer kernel doesn't support these cards yet, so no hard drives were detected.

If you are dealing with a clueful Linux vendor like 3ware, though, you can just go to their comprehensive driver download page, grab the right driver for your kernel, drop the files onto a floppy disk, and boot with a 'dd' (for 'driverdisk') kernel parameter i.e. type 'linux dd' at your boot prompt.

Problem #2: no floppy disks! So the choices were: actually exit the office and go and buy a floppy disk, or (since this was a kickstart anyway) figure out how to build and use a network driver image. Hmmm ...

Turns out the dd kernel parameter supports networked images out of the box. You just specify dd=http://..., dd=ftp://..., or dd=nfs://..., giving it the path to your driver image. So the only missing piece was putting the 3ware drivers onto a suitable disk image. I ended up doing the following:

# Decide what name you'll give to your image e.g.
DRIVER=3ware-c5-x86_64
mkdir /tmp/$DRIVER
cd /tmp/$DRIVER
# download your driver from wherever and save as $DRIVER.zip (or whatever)
# e.g. wget -O $DRIVER.zip http://www.3ware.com/KB/article.aspx?id=15080
#   though this doesn't work with 3ware, as you need to agree to their
#   licence agreement
# unpack your archive (assume zip here)
mkdir files
unzip -d files $DRIVER.zip
# download a suitable base image from somewhere
wget -O $DRIVER.img \
  http://ftp.usf.edu/pub/freedos/files/distributions/1.0/fdboot.img
# mount your dos image
mkdir mnt
sudo mount $DRIVER.img mnt -o loop,rw
sudo cp files/* mnt
ls mnt
sudo umount mnt

Then you can just copy your $DRIVER.img somewhere web- or ftp- or nfs-accessible, and give it the appropriate url with your dd kernel parameter e.g.

dd=http://web/pub/3ware/3ware-c5-x86_64.img

Alternatives: here's an interesting post about how to do this with USB keys as well, but I didn't end up going that way.

Fuzzy Displays and Dual NVIDIA 8xxx Cards

We've been chasing a problem recently with trying to use dual nvidia 8000-series cards with four displays. 7000-series cards work just fine (we're mostly using 7900GSs), but with 8000-series cards (mostly 8600GTs) we're seeing an intermittent problem with one of the displays (and only one) going badly 'fuzzy'. It's not a hardware problem, as it persists across different displays, cables, and cards.

Turns out it's an nvidia driver issue, and present on the latest 100.14.11 linux drivers. Lonni from nvidia got back to us saying:

This is a known bug ... it is specific to G8x GPUs ... The issue is still being investigated, and there is not currently a resolution timeframe.

So this is a heads-up for anyone trying to run dual 8000-series cards on linux and seeing this. And props to nvidia for getting back to us really quickly and acknowledging the problem. Hopefully there's a fix soonish so we can put these lovely cards to use.