Running Solaris on Oracle Solaris certified hardware is a no-brainer. It just works, beautifully. If it were a SPARC box, you don't really have much choice but to buy from Oracle. But what if you want an x86 platform? Generic x86 hardware is so cheap that it's very tempting to try running Solaris on a commodity x86 box.
An Oracle Solaris x86 box would easily cost two to three times as much as an equivalent generic x86 box, and that's just the hardware cost, not counting software licensing and other professional services. It's still very tempting, then, to consider generic x86 hardware.
We’ve embarked on this journey. Over a decade ago we had more or less stopped running generic x86 hardware; now we’re back at it again. So here I’m going to share a bit of our experience exploring Solaris 11.
The easiest (and most fun) way to install Oracle Solaris is to use the Automated Install (AI). That assumes, of course, you already have an AI server running; otherwise, you'd better stick with a DVD-ROM install. In case you're interested in the AI route, I blogged about it before.
For my current installs, I’m trying to do everything remotely, over the network. Not just an AI install over the network, but more importantly, remote console access through serial console redirection. This requires an IPMI controller on the server. IPMI is quite common on server hardware. Many servers implement additional remote management features over what IPMI requires. I’m just going to use the basic IPMI features, because, well, many of those higher-level remote management features require a web browser, Java, and additional TCP ports.
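As a sketch of what that looks like in practice, the stock ipmitool client can attach your terminal to the server's Serial-over-LAN console. (The BMC address and credentials below are placeholders; substitute your own.)

```
# Activate Serial-over-LAN; your terminal becomes the server's serial console.
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sol activate

# If a stale SOL session is holding the console, tear it down first:
ipmitool -I lanplus -H 192.0.2.10 -U admin -P secret sol deactivate
```

Press `~.` (tilde, period) to drop out of an active SOL session.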
Serial console redirection, in case you're not familiar with it, works pretty well for redirecting text-based output from the BIOS and the bootloader (e.g. GRUB). However, it does not ordinarily take care of console output once the operating system has loaded. In the case of Oracle Solaris 11, for example, you can see the GRUB screen, but once GRUB has handed off to the OS, there's nothing on the serial console anymore.
So the trick is, until you have the opportunity to reconfigure GRUB, you’d need to tell GRUB to pass some extra options to the kernel. The Automated Install can proceed just fine without any console intervention, but if you want to see anything or to use the system while Automated Install works, you’d also have to tell GRUB to boot with extra kernel options.
This is what you need to append to the kernel line:
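Based on the ttyb/115200 configuration used later in this post, the extra options are along these lines (treat this as a sketch; adjust the port and speed to match your BIOS redirection settings):

```
-B console=ttyb,ttyb-mode="115200,8,n,1,-"
```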
Remember: append to, don't replace, the kernel command line. This should give you a working console during Automated Install, as well as at the first boot after it completes. At that first boot you may need to step through some setup screens, so a working console is essential at that point.
To permanently have the serial console redirect work, here are some extra things to do:
- First, save the serial console into “EEPROM”:
$ eeprom console=ttyb
- Update GRUB configuration file in /rpool/boot/grub/menu.lst to add the following to the kernel line:
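For reference, a kernel$ line with the console properties added would look something like this (the boot path shown is the stock Solaris 11 x86 one; treat this as a sketch rather than your exact menu.lst):

```
kernel$ /platform/i86pc/kernel/amd64/unix -B $ZFS-BOOTFS,console=ttyb,ttyb-mode="115200,8,n,1,-"
```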
- Edit the file /etc/ttydefs and change the line:
console:9600 hupcl opost onlcr:9600::console
to:
console:115200 hupcl opost onlcr:115200::console
- That’s it.
Reboot and test. All the above assumes that COM2 (i.e. the second serial port) is used for the serial console redirect. You should check your BIOS setup to confirm which port is being used.
Another thing: ZFS mirroring. By default, Automated Install doesn't set up a ZFS mirror on your rpool. (Maybe you could customize the AI manifest to do that.) If you have two disks and you want them in a mirror configuration, here's what to do:
- Attach the second disk to form the mirror:
$ zpool attach -f rpool <firstdisk> <seconddisk>
- Check the mirror status:
$ zpool status
It will take a little while for the pool to resilver, depending on how much data is on it.
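While the resilver is running, zpool status output looks roughly like this (illustrative only; your device names and progress details will differ):

```
  pool: rpool
 state: ONLINE
  scan: resilver in progress ...
config:

        NAME          STATE     READ WRITE CKSUM
        rpool         ONLINE       0     0     0
          mirror-0    ONLINE       0     0     0
            c1t0d0s0  ONLINE       0     0     0
            c1t1d0s0  ONLINE       0     0     0
```

Once the scan line reports the resilver as completed, both disks hold a full copy of the root pool.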