How to install Linux on a large scale

I’ve been thinking about this for a while, to be honest. I worked in an organisation with around 600 machines for over a year, all Windows of course. We installed a copy of Windows on each type of machine (there were around 5 different types, with all machines within each type having exactly the same hardware), tweaked the settings for the Windows domain and the file shares, used Norton Ghost to make a copy of the disk and put it on a server, then used Ghost boot disks to drag the disk image over, changed the machine names, set up a domain account and set up any extra peripherals.

We could roll out a machine in 30 minutes tops, mostly unattended and one person could do the whole organisation in about 4 weeks. I’ve been wondering how this would be done in the Linux world. How do you deploy 600 Linux desktops like this?

I vaguely recall Red Hat having a thing (Kickstart, I think) where you could perform an installation on a machine and save the setup configuration so it could be used to perform the same installation on other machines in a mostly unattended way, but that means using Red Hat. It seems that there is no open source, vendor agnostic tool to do this. Either buy Norton Ghost or use Red Hat.

I read the HOWTO Clone Disk Images on Linux Booted from a Network article from The Linux Documentation Project, but it seemed a bit messy.

So I googled and found the following links and articles.

  • g4u
  • Patagonia
  • Mondo Rescue
  • Dolly
  • this Linux Gazette article

After a quick browse of these projects, I’d say that g4u looks the most like something that would be bearable to use.
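From a quick skim of the g4u docs, the workflow seems to boil down to something like this (the FTP server and image names here are made up for illustration):

# boot the master machine from the g4u CD, then upload a compressed disk image:
uploaddisk ftp.example.com win-desktop.gz
# boot each target machine from the same CD and pull the image back down:
slurpdisk ftp.example.com win-desktop.gz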

I think it would be interesting to look at how to reduce maintenance time too. I think it’s not uncommon to use Linux as a thin client system on larger installations, using something like the Linux Terminal Server Project, like they did at Handsworth Grammar School, but it would be interesting to see how thin is practical, what network file systems people use for best results (NFS, Samba, Coda?) and how to handle network logins (do people really still use NIS? How much of a pain is it to synchronize smbpasswd with /etc/passwd on a large multi-user network? etc).
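On the NFS side at least, serving everyone’s home directory from one box is a one-line export; a minimal sketch (addresses are illustrative):

# /etc/exports on the file server
/home 192.168.0.0/255.255.255.0(rw,sync)
# each thin client then mounts it with: mount server:/home /home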

I would be interested in hearing from anyone who has experience of deploying Linux on a large number of machines, or who has experience of using any of these tools, particularly in comparison to proprietary products like Norton Ghost, PowerQuest Drive Image or Acronis True Image.

LUG Radio Bastards!

The latest episode of LUG Radio came out today, Season 2, episode 10, their 1st anniversary episode. Congratulations guys 🙂 I’m listening to it right now.

As usual it’s a massively feature packed episode, covering the Hula Project, LUG Radio Live, GNU Classpath (an open source Java implementation), a chat with Jeff Waugh about Kubuntu and loads of other stuff.

Aside from all of this, I am publicly barracked (bear in mind that the last episode got about 14,000 downloads) for my failure to produce the Ubuntu jingle that I promised nearly 8 weeks ago, and it is claimed that I blame it all on Linux. Not strictly true.

I blame it on my crap soundcard, on Creative for making 2 completely different Soundblaster Lives, and then on ALSA for not specifying that there is more than one Soundblaster Live – one which works and one whose driver is still under development – to which end I checked compatibility and, not realising there were 2, bought the wrong one. Bleh.

But in the spirit of the show I will take the boiled down version of the facts on the chin. I still haven’t fucking done it.

LUG Radio Bastards!

Ubuntu jingle latest

As some people may know I’ve been trying to make an Ubuntu jingle for LUG Radio. So far this has been nothing short of painful. If you read this post you’ll see I bought 2 new soundcards to try to finish the job.

In the first instance this was to do with my crappy onboard soundcard, which seemed to enjoy chewing up anything recorded via the mic and line inputs. My laptop was away being fixed at the time, so I tried Dynebolic Linux on my dad’s PC (soundcard wasn’t recognised even though it’s a Creative Soundblaster PCI 16, supported by the es1371 kernel module), on my university project machine (similar problem to my main desktop machine – crap quality and crackly), on my laptop when it returned (Dynebolic wouldn’t boot with ACPI enabled and I couldn’t seem to use the sound device with it disabled; haven’t had a chance to install Ubuntu yet) and finally on my old Dell Optiplex desktop machine with an Intel i810 chipset and everything onboard, which has a lowly 128MB RAM and struggled to keep up (it uses non-standard RAM so I couldn’t drop in a spare stick). All of the other machines were Via chipsets using onboard sound (except my dad’s PCI 16 soundcard of course).

Nevertheless, the best sound quality came from the i810 machine, so tonight I persevered and mounted the hard disk (Dynebolic is a Live CD), copied the files across from my USB drive (it causes stutter when running from it) and ran the project from there.

It went great at first, but as I layered the tracks it started to slow down as the memory got used up, so much so that by the time I had recorded about 4 vocal tracks (I’m simulating a tribe), the machine became barely usable and the kernel killed the Audacity process, so I lost everything I had done in the session. I hadn’t saved it as I was technically only testing how well it would work.

As Dynebolic is a live CD and runs in memory, I figured that maybe Audacity was saving some kind of user data in /home, which is also in memory, so I mounted the home partition of the hard disk as /home to see if that saved me a few MBs.

For some reason all this did was make the recorded mic stream slow, bitty and deeeeeeeep. Not to be denied, I unmounted the partition and remounted it under /mnt/home as it was before. I also tried to turn on the hard disk swap partition as swap space, but this failed as it turns out Dynebolic already does this.
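For the record, the juggling is just the usual mount/swapon dance – something like this, though the device names here are guesses (check with fdisk -l):

mount /dev/hda3 /mnt/home   # put the disk's home partition back where it was
swapon /dev/hda2            # enable the disk's swap partition (Dynebolic had beaten me to it)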

Anyway I decided to try again, this time by saving the project every time I recorded a new audio track. I got to about 6 vocal tracks before it started getting flaky so I exported what I had as a wav and called it a night.

I’m not entirely happy with it, but at least I have a proof of concept. I’ll wait until I have my new Soundblaster and make a proper attempt under Ubuntu. At the moment I won’t post it as I’m not sure if it sounds crap or not, you know when you hear your own voice (and accent in my case) on tape? Terrible…

All Soundblasters are not equal

Recent readers will know that I bought a new Creative Labs Soundblaster Live soundcard as they seem to be very well supported under Linux, in a bid to finish this goddam Ubuntu jingle for LUG Radio. Well it arrived today and guess what? It doesn’t work properly under Linux at the moment.

It seems that there are two Soundblaster Lives and they use different chipsets. The older 5.1 is known to work perfectly under Linux using the emu10k1 kernel module, while the newer 24 bit 7.1 card doesn’t work (the numbers refer only to the number of surround sound speakers, not to a versioning scheme like software release numbers).

Before I bought the card I checked the ALSA website which says here that the Soundblaster Live is supported by the emu10k1 module. I also searched Google for Linux support and read this. Sounds fine I thought at the time.

Until it didn’t work. After searching the Ubuntu forums for the SB Live, I read this thread, which points out that the new 24 bit SB Live 7.1 uses a different chipset to the SB Live 5.1 and in fact uses the audigyls module, which isn’t entirely working and is also only available in ALSA 1.0.6 or above, which isn’t available in Ubuntu yet. Why is it you only find this stuff out after you buy it?
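For anyone else in this hole: once the card is physically in the machine, you can at least check what you actually have, rather than what the box says, with something like:

lspci | grep -i audio     # shows the actual chipset on the card
cat /proc/asound/cards    # lists the cards ALSA has claimed and the driver in use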

It also seems that the 24 bit 7.1 card is a piece of shit anyway. Rather than do onboard hardware mixing, it palms it off to the system CPU to do all the work, and it is this that has made the driver slower to develop, as they had to work out how to do it. Now I know that modern computers like mine are powerful enough to handle this, but would you be happy with a software modem if you were expecting a hardware one? Nowhere in the product spec does it say this.

So I had 3 choices:

  • Upgrade to Ubuntu Hoary
  • Compile the latest version of ALSA myself
  • Or forget it and buy a SB Live 5.1

I don’t fancy upgrading to Hoary as I have yet to hear whether the card works with the version of ALSA in Hoary. I prefer to stick with a stable version of Ubuntu now I’ve moved over full-time.

I don’t fancy moving to a compiled version of ALSA as this means I will have to work out how to put the packaged versions of ALSA on hold in apt, and I risk making a mess of the sound system by compiling it myself.
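(For reference, the holding-back part looks like the easy bit – something like this, though the package names are from memory, so check what’s actually installed first:

dpkg --get-selections | grep alsa              # see which ALSA packages are installed
echo "alsa-base hold" | dpkg --set-selections  # stop apt upgrading over a local build

It’s the compiling-it-myself part that worries me more.)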

So I wimped out and found a 5.1 from Scan for £10 or so. That’s something I could really do without, to be honest. It cost me £26 or so for the first card, at a time when I’ve just found out I am £3 overdrawn and have no money coming in until early April. I had to transfer money off my credit card to cover my bills for the next 6 weeks.

Well, I’ve made a big noise about this jingle now, and I seem to be getting some decent traffic because of it, especially when I was linked by Jeff Waugh on Planet Gnome, Planet Debian and Planet Ubuntu. Also I told the LUG Radio guys about it nearly 6 weeks ago and they have been waiting for it ever since. I’ve had quite a few people post comments about it too, so I now feel some kind of responsibility to produce something. God help me if it’s shit…

Don’t know what I’m going to do with the SB Live 7.1 just yet. I might try to sell it on eBay or something to see if I can make some money back, or I might keep it and see if a) it works under Hoary and b) it is actually a better quality card than the 5.1, despite the hardware mixing cop-out.

One day I will learn that knowledge of Linux hardware support is not innate, nor is it as simple as it looks from a kernel perspective. The ALSA people are doing their job (although it would be nice of them to state that the 7.1 is actually a variant of the Audigy LS and not a variant of the SB Live as the name suggests, by listing the 5.1 and 7.1 separately), but it seems Creative have named the 7.1 based on where it fits into their range and not on what chipset it uses.

Bastards.

UPDATE:

After reading a lot of threads about this problem, I filed a bug against the ALSA website asking them to point out that the 24 bit SB Live 7.1 uses the audigyls driver, and to specify that the 5.1 and 24 bit 7.1 are different cards – it just said that the SB Live uses the emu10k1 driver – so as to prevent other drowning souls in the various support forums around the world from buying the wrong card. They have since updated the site to reflect this. Good of them to research this and actually do it.

Of course Hoary is now out and the 7.1 should be supported, but I have yet to open my box and swap the 7.1 in to check…

The Pleasure and Pain of Gentoo

Heh 😉 I’m gonna have to start thinking of another title for my Gentoo posts.

Well, Gentoo is finally installed on my Sun Ultra 10 Sparc64 machine. It went ok really, apart from the fact that it has probably taken me 24 man hours or so over 3 sessions. The (Sparc64) Gentoo docs are very good and useful for non-Gentoo specific stuff that I didn’t know; I will be referring to them again. They could do with a few little tweaks, like explicitly stating that the sparc-sources kernel source package is preferable to gentoo-sources on Sparc machines. It’s not as obvious as it might seem, as you can use either, but sparc-sources are tweaked for Sparc machines. Fortunately I have a sense of completeness that made me choose sparc-sources straight away; other people had problems with gentoo-sources on Sparc.

I started this process again last night and emerged lshw and pciutils (for lspci) so I could work out what was in the box. This sucked in X.org as a dependency for some reason, and meant I spent another night wearing earplugs as I was ssh-ed in again from my desktop with its noisy PSU. Meh.

All finished this morning, so I decided to change the compilation optimisation from level 3 to level 2 to speed up compilation and reduce the size of the binaries. I then did some work on identifying the hardware, got the sparc kernel sources and cautiously did make menuconfig.
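The optimisation change itself is just one flag in /etc/make.conf – the same CFLAGS line I set during the install, with -O3 dropped to -O2:

CFLAGS="-mcpu=ultrasparc -O2 -pipe"
CXXFLAGS="${CFLAGS}"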

Menuconfig actually wasn’t all that bad, as all of the sparc hardware options were already selected; I just removed all of the things I didn’t have. I did worry that I didn’t see options for ebus and a few other things, but I built the kernel anyway and watched it fail on make modules. Fuck. Google. It turns out that kernel 2.4.29 (the latest version of sparc-sources in Gentoo) fails to build on sparc64 due to missing #defines in dmabuf.c when sound is enabled. Well, I only enabled sound support because I hadn’t noticed that the CS4231 sound card uses a separate low level driver in the kernel, not part of the regular sound system.

Cool. Turned off sound support. Compiled nicely. The rest went pretty much as per the instructions but it’s been one long journey. I still don’t have any nice end-user apps. On the hitlist is Gnome and maybe OpenOffice.org but they are gonna be looooooong compiles.

I think getting the X server to work will be interesting. I have an ATI Technologies Inc 3D Rage Pro 215GP (rev 5c) (thanks once again to a wholesale lspci quote…).

Having left this post for an hour or two: getting Xorg working is awkward and manual, as the configuration tools can’t detect the ATI card, the Sun mouse or the Sun keyboard. After some not very helpful googling, and some shot-in-the-dark guessing, I managed to correctly assume that the mouse protocol was busmouse and the device is /dev/sunmouse. Stealing sections of the xorg.conf file from here and here also helped. I just added ATI as the graphics driver and I can get an enormous resolution and a moving mouse.
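For anyone with similar hardware, the relevant bits of my xorg.conf look roughly like this (the identifiers are arbitrary):

Section "InputDevice"
    Identifier "SunMouse"
    Driver     "mouse"
    Option     "Protocol" "busmouse"
    Option     "Device"   "/dev/sunmouse"
EndSection

Section "Device"
    Identifier "RagePro"
    Driver     "ati"
EndSection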

At the moment I can’t see an option to change the resolution in /etc/X11/xorg.conf and, for now, Sun keyboards don’t work with Xorg 6.8 – they require the deprecated (and apparently no longer supplied) keyboard driver and don’t work with the replacement kbd driver. Hmph.

Gentoo (on Sparc64): awkward and drawn out, but that’s the cost of doing everything manually and compiling it all yourself. The keyboard problem isn’t strictly a Gentoo thing; that’s Xorg going through a transitional period. With few other options for my Sparc hardware (though I’m sure after all this, installing Debian on it would be a breeze…), Gentoo’s pay-off will be in the performance and in the learning I did going through the process. Next I have to work out how Gentoo startup scripts work, so I can make ssh, X, gdm and other things start at boot time.
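From what I’ve read so far, the startup scripts part should just be rc-update – something like this once the services are actually installed:

rc-update add sshd default   # start sshd in the default runlevel
rc-update add xdm default    # the xdm service starts gdm/kdm, chosen in /etc/rc.conf I believe
rc-update show               # list what starts in which runlevel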

Ubuntu Jingle Update

After getting annoyed with the frustratingly fiddly process of getting any kind of decent input from my microphone via my soundcard, and trying Dynebolic on various machines which either run out of RAM (as it’s a live CD) or stutter because the Audacity project is running from a USB micro hard disk (i.e. slow read/write access), I have bought a new sound card. It takes so long to check this and that at such granularity that by the time I conclude I need to mount the hard disk, put the files on it and run the project from there, it’s either midnight and I have to abandon it and go to sleep, or I have more important uni work to do. So it was just easier to buy a new soundcard for my main desktop, as recommended by Ant in the comments on the original Ubuntu Jingle post.

So on his advice I now have a Creative Labs Soundblaster Live on its way. After a bit of research I believe it uses the emu10k1 module.

Hopefully this will be the end of my complaining and I can get this jingle finished. Either way, I was starting to have other problems with the old sound card, I just never worried too much about them before. For example, when playing music files the sound would rise and dip randomly. It really is obviously a crap soundcard.

As my cousin once said to me, unless you’re doing real sound work, the soundcard is the last thing anyone ever upgrades. And it is.

Windows is hard to use

I’ve barely used Windows in the last few months and now I have my laptop back I’m stuck with Windows on it until I can sort the crap restore partition thingy out and install Linux.

It’s struck me how hard Windows is to maintain. The number of calls I get when people explode their Windows installations definitely supports this theory. I have a fresh installation. First thing I do is head to Windows Update and install all of the updates and patches. Then I install Firefox, a firewall and anti-virus. Then I update them. Then I install Microsoft Office (I will be moving to OpenOffice.org on Windows when I am more comfortable with it; by that time I may have finished uni and won’t be using Windows at all). Then I update it.

Then I install all of the million apps you need to make Windows do anything useful. RealPlayer and its horrible ad-laden bulkiness, QuickTime, Acrobat Reader and all of the other things I never use. An adware remover.

I think adware and spyware are the biggest threats to Windows users at the moment. I watched a video clip the other day that showed a malicious website installing such malware with no visible output to the user, and certainly without asking the user if they wanted to install the software. The guy showed the Program Files directory before and after to show the new software installed. I don’t care if XP Service Pack 2 makes you have Automatic Updates turned on; in my experience people just tell it to fuck off when it tells them that there are updates to install. Just booting into Windows and getting prompted to check all these things for updates is a pain in the arse, so much so that I’d prefer not to use it. The rest of the world have no interest in learning about why they should care, let alone actually doing any of this, which is why my phone keeps ringing with people complaining that porn and adverts keep popping up, and why I get emailed viruses all the time. I have tried explaining it to them…

Windows takes too much looking after and ordinary people are overwhelmed. Even I, as a technically minded individual, think Windows is a hideous, uncomfortable, over-complicated beast that drains me of energy to use. Linux, in my case Ubuntu, is a case of hitting Reload in Synaptic, then Mark All Upgrades, then Apply, or whatever. To install stuff, hit search, find your package and choose install. Imagine having all the Windows software you might possibly want to use in a searchable list with an install button next to each one, and an update button to get the latest version of everything you have installed, all at once.
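The command line version is even shorter – roughly this, assuming Ubuntu’s sudo setup:

sudo apt-get update            # Synaptic's Reload
sudo apt-get upgrade           # Mark All Upgrades, then Apply
apt-cache search audio         # find packages by keyword
sudo apt-get install audacity  # install one, dependencies and all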

But people want Windows. I think this is mainly due to the PR thing. Like people saying they want a ‘Pentchinum 4’ because they’ve seen it on the TV and their friends have one. I think if Linux were more able to play MP3s, DVD movies, Real, QuickTime, DivX, XviD and other multimedia formats out of the box, then the only real reason to use Windows would be for games. But if that’s all you want a computer for then buy a console. But try explaining that to people…

I really must get around to testing Ubuntu on an innocent bystander.

The Art of Gentoo (on Sparc64)

Further to my question over whether Gentoo was worth the effort, I decided to actually install it. I was somewhat prompted by Ron’s insistence a while back that Gentoo is great, a chat with a guy called Mark Welch from uni and also by Fizz’s comments.

I got frustrated over Christmas that my degree doesn’t cover anything that doesn’t run on Windows, and therefore on x86 hardware, so I bought an old iMac and a Sun Sparc Ultra 10 workstation. (Sidenote: man, is CDE butt ugly.)

Well, I must have been well treated by Linux, because I couldn’t work out how to turn the DHCP client on in Solaris 9 (I’d never used Solaris before), and as we all know, computers are pretty fucking boring without a net connection these days. Yeah, I could have figured it out in the end, but the management console was starting to fail to open, among a few other things, so I figured I’d never use Solaris for anything anyway and decided to install Linux on it. I think Sun are sending me a copy of Solaris 10 for entering some competition or other anyway.

It seems nobody really does a mainstream Sparc64 Linux any more apart from Debian and Gentoo. Debian is my distro of choice, but I’m not really sure what’s in the box, so I need hardware detection, and I can’t be arsed to wait 9 months or however long it’s going to take for Sarge to appear – I don’t think they’ve even gone into a freeze yet. So it’s Gentoo.

And well, it seems cool but not one you’d give to a beginner to install. I chose the stage 2 Live CD method as it offered the most control without having to know all my hardware.

Hmm, I had to fudge some things, but all of the hardware has worked out of the box so far. I could probably do with tweaking the hard disk performance with hdparm, but I’ll worry about that later, when I’ve had time to learn whether my hdparm output was any good and how to tune it.

livecd root # hdparm -tT /dev/hda

/dev/hda:
Timing O_DIRECT cached reads: 716 MB in 2.00 seconds = 358.00 MB/sec
Timing O_DIRECT disk reads: 38 MB in 3.08 seconds = 12.34 MB/sec

livecd root # hdparm /dev/hda

/dev/hda:
multcount = 16 (on)
IO_support = 0 (default 16-bit)
unmaskirq = 0 (off)
using_dma = 1 (on)
keepsettings = 0 (off)
readonly = 0 (off)
readahead = 8 (on)
geometry = 38792/16/63, sectors = 20020396032, start = 0
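I haven’t tried these yet, but going by the output above, the obvious first tweaks would be something like:

hdparm -c1 /dev/hda    # enable 32-bit IO_support (currently 0, i.e. 16-bit)
hdparm -a64 /dev/hda   # raise the readahead from 8 sectors
hdparm -tT /dev/hda    # re-run the benchmark to compare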

I’ve never really bothered to compile stuff apart from my own kernels and a few things that weren’t packaged by Mandrake when I used it years ago, so I’ve never learned about compiler optimisations and whatnot. I opted for:

USE="X gtk gnome alsa -kde -qt"
CHOST="sparc-unknown-linux-gnu"
CFLAGS="-mcpu=ultrasparc -O3 -pipe"
CXXFLAGS="${CFLAGS}"
MAKEOPTS="-j2"

I have no idea how good a choice I made (advice gratefully received) but ultimately when I know what I’m doing I’ll rebuild the entire system. I will use it mainly as a backup desktop machine running Gnome. I might use it as a home server later on.

I let mirrorselect choose my mirrors for me and the performance was dismal. For some reason all my mirrors were in the Netherlands (apparently Holland is only part of the Netherlands, and it annoys the hell out of the Dutch that people think the country is called Holland), but all my downloads were coming from Korean and Taiwanese mirrors, which took 3 minutes to time out, which they did a lot. After 50 minutes I had about 12 packages, so I control-C’ed emerge and manually added the British Blueyonder mirror to /etc/make.conf, and the whole lot came down in about 20 minutes. (Note to self: bash filename auto-completion doesn’t work in a web browser window ;)). I use Blueyonder as my Debian apt source and know I can get a sustained 59KB/s transfer on a 512Kb ADSL link.
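The manual fix is just a line in /etc/make.conf (the URL here is illustrative – use whatever your nearest mirror actually is):

GENTOO_MIRRORS="http://gentoo.mirror.example.co.uk/"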

I left it compiling overnight, as it was about 2am when I started, and it was all done when I woke up. I stupidly forgot to log out of the SSH session from my desktop machine with the loud PSU and run emerge system locally on the Sun box. I had to wear earplugs overnight…

But it all went fine. Now I’m at the Configuration File Protection and Configuring the Kernel stage, but I just don’t have time to absorb all of this reading and sit there and set it all up. I still don’t really know what’s in the box; lsmod only lists ext3, jbd and openpromfs, so everything else must be compiled into the kernel image (doh, must remember to use lspci…). I did note from dmesg that I have a Sun Happy Meal ethernet card 🙂 That made me smile, I’ve seen that in the kernel source over the last few years and thought it was a cute name for a network card 😉 I wonder if Sun or McDonalds came up with it first.

So I now have a half complete Gentoo installation; I just have to do the reading and finish it off before, I assume, installing all the apps that I want and worrying about bootloaders and stuff. I think that will be a(nother) weekend job…

I bumped into Fizz last night actually, we were both pretty drunk and he asked me if I’d read his comments. I think we mumbled to each other for a few seconds about Gentoo. I think he was more interested in the girl I was with to be honest but it was still good to see him and exchange drunkitudes 😀

Blogging is bad for your academic productivity

Trust me, I know. My performance has nosedived since I started reading blogs. Admittedly I am far more interested in what I am picking up from blogs than I am in writing right outer joins in Oracle’s not-completely-ANSI-standard SQL dialect, or in the economic impact of IT globalisation and offshoring. There is no Linux on my degree. There is in the years that follow mine; my year was the last of the old degree scheme. Isn’t that really weird? In an era such as this, my only academic contact with a non-Windows operating system is telnetting into a Solaris server to use Oracle.

I’m a Linux guy and blogging is far more interesting. Just don’t tell my lecturers…

Novell produce another crucial open source app

Not content with vying with Canonical to hire some of the best and coolest open source hackers out there, Novell has offered yet another gift to the open source community by announcing the Hula Project.

Hula is a web based calendaring and email server, somewhat akin to Microsoft Exchange Server – a system that has been lacking in the open source world for years. Although many projects have claimed to offer similar features to Exchange, none has yet offered a clean implementation or, crucially, the shared calendar functionality. They have their eyes on some really cool features, like viewing via RSS feed and interacting with it via your mobile phone. It is worth noting that Hula is still in the planning stage and has as yet made no releases.

A lot of people were worried when Novell bought Ximian and SuSE, including myself, thinking that they would just get swallowed up in corporate bullshit and slowly die a quiet death. It appears not to be the case.

I’ll try to refrain from saying stuff here that I have been intending to use in an article entitled “Why Linux is Good News for Everybody” (feel free to hire me to write this for your publication, email to drinky76 at yahoo dot com ;)), but a large part of this revolves around why Linux is so important to people like Novell, IBM, Sun, HP, Intel and Oracle. They all have products in a shrinking market with one main competitor.

Novell realised they were dying and pretty soon they wouldn’t exist. Novell network and directory services ran on Novell NetWare and Windows, but nobody used them on Windows any more and people weren’t buying NetWare. People were buying Windows, and a few people were buying Unix, but the Unix vendors were painting themselves into a corner. Meanwhile, a hell of a lot of people were looking at Linux as the new cool Unix – a possible investment and one to watch as a future competitor to Windows, if not the future of the operating system market. What to do?

Make Novell stuff run on Linux. How to do that? Hire the right people, buy a Linux distributor with the right profile and buy another Linux company that looks like it is pushing Linux in the right way. Red Hat are too big to be bought, you can’t buy Debian, so what about SuSE? SuSE are about the right size, have a sizeable market and have the right kind of corporate profile. What about hiring the right people? Well, Ximian are doing some really cool things and have some of the best, most focused hackers out there – Miguel de Icaza, Nat Friedman and so on.

And they did. But they also realised something important that a lot of the big guns miss. You can’t win with Linux by just doing your own Linux and going at it with all corporate marketing and PR guns blazing. The community won’t give a fuck about you, and you won’t get anywhere without them. You have to do it right and you have to get the community on side. How to do that? Well, if you have read anything about the open source community, it is characterised in part by the concept that if we all give something to a project (code, patches, money etc), we all get something greater and more valuable back as a whole system. Novell spotted this and decided that the only way to win with Linux was to give the community what it wanted. Open source several high profile and highly desired applications (Ximian Connector, YaST), pay people to work on what they love (Mono, Beagle and so on) and pay people to work on what was sorely needed (an Exchange replacement, among other things). All these things add up to more and more pieces of the jigsaw dropping into place for an open source equivalent to every app in every bedroom/office/server room.

To win in the corporate field, they also realised that they needed to offer Linux services that very few have the might to provide: software support and training. These are big things in the professional world. The thing that scares people most about deploying Linux is that they need somebody to call when things go wrong and someone to take responsibility for it. For Linux to take off, there also needs to be a groundswell of Linux expertise. Linux has always been a bedroom hacker’s system, but how can you prove that a bedroom hacker is skillful enough to run your IT infrastructure? Training and qualifications. Novell offer all of this. Wow.

There are big things happening in the open source world at the moment and the future is exciting – damn, I can’t wait to see what we have in the next 12 months. Gnome looks like the future of the desktop to me and I’ve only been using it for 3 weeks. Stuff like Beagle, Xgl and iFolder look like great apps and show clear, ahead-of-the-game, outside-the-box thinking. Windows users won’t see this kind of stuff for maybe 2 years; I wonder how many more of them will be using Linux by then. Novell and Canonical (via Ubuntu) are really pushing Linux where it needs to be heading, and Novell are paying for a lot of the core pieces of software to be developed.

Bravo Novell, although I still think Ubuntu is the one true way forward, I might try the Novell Linux Desktop at some point.

The Point of Gentoo

Is… Umm…

Well, a few of Wolves LUG are using Gentoo and think it’s great; the main draw seems to be the package management system. I have to be honest, I’ve been using Debian for a few years (and Ubuntu more recently) and am in the Debian way of thinking. Package management is pretty core to how I evaluate a distro these days. Apt is just the business. So, Gentoo uses Portage. The idea is that you get your package source repositories and build the packages from source in an efficient, well managed way. It’s a great idea, but what’s the point? Why build everything from source?
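Mechanically, at least, the day-to-day workflow looks a lot like apt, just with compiling behind it – a rough sketch from what I’ve read so far:

emerge --sync            # update the package tree, like apt-get update
emerge -up world         # preview what a full update would rebuild
emerge mozilla-firefox   # build and install a package from source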

Well, every package is compiled on your own hardware and is therefore optimised for your own machine. Great. But it takes ages. I was told that on a very fast modern system, building all of the packages to make a fresh install takes a weekend. You can expect stuff like OpenOffice.org, KDE, Gnome or X to take around 8 hours each. Phew.

The point of this post is the argument about precompiled packages versus locally compiled, optimised packages, and whether the performance boost is worth the time lost compiling. Even if your software is optimised for the machine it was built on and hence runs faster than pre-compiled packages, is the gain in responsiveness worth the time lost to compiling? Sure, you can still use the machine while you compile, but will you get back in response time what you lose in compile time? In a desktop environment you can probably compile and continue to work – everything will just take longer – but what about a server?

I can see the point in an environment where the software must run fully optimised for the hardware, but what do you do at update time? Take the performance hit of compiling new updates? Won’t that throw off the whole performance thing? Sure, it was said that where this is the case you have a backup machine which runs while such updates are going on. But isn’t this a sidestep? What is more expensive? 2 machines or 1 better machine?

It’s a fantastic idea if you like to know your software is running as fast as it can, but is it worth the hit at compile time? I don’t think the speedup you gain is greater than the time you lose compiling.

This of course is just an opinion and I do aim to take a look at Gentoo sometime soon…

Linux at the forefront of a desktop graphics revolution?

Wow. Look at this post from Nat Friedman.

Xgl is a new X server that uses OpenGL for 3D acceleration. I think Nat’s blog explains it all better than I can.

I don’t know a great deal about graphics subsystems, but I think this kind of idea is just the kind of great thinking that will make Windows and Mac OS X users sit up and take notice. I know Microsoft have a lot of new graphics stuff in mind for Longhorn, but surely people will be using this first, and this will probably be far more powerful. OS X looks great, but I’m not sure how far their graphics subsystem goes. Could it do stuff like this? I don’t think so.

Eye candy rules in the desktop world. If I could demo the possibilities of this to my dad, he’d want it, and that means a lot to me – I want everyone to want Linux.

I meant to write something a little more profound than this, but just like Nat, I’m really tired – it’s late and my thoughts are just starting to slow down.

Just go look at it.

Wow, I’m popular

I set up my website http://www.drinky.org.uk/ a long time ago. I started in Microsoft FrontPage Express (this was before I knew what Linux was). As everyone that ever used it will know, FrontPage Express was utter crap, but I didn’t know any better. Today the same basic design exists, as I’ve never had the time to rewrite the whole thing, though I’ve been meaning to move it over to some kind of CMS for some time. I basically use it as a dumping ground and as a portal for when people ask me stuff. It’s a terrible, disorganised, ugly mess.

Because it’s so poorly put together I’ve never really pimped it, and for around 4 years I have been living under the illusion that nobody ever really reads it. Until today.

I have this workshop to do for a uni module. It basically involves checking the Apache server logs on the uni webserver and grepping the output for your own site. Until about a week ago, my entire website was hosted on the uni server, with DNS forwarding to point at the relevant place. So I expected some kind of logs. What I didn’t expect was today’s logs scrolling off the terminal for a few minutes. On a busy site maybe, but not for my pathetic effort. How wrong I have been.
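The workshop boils down to something like this (the log path is a guess – it varies by server setup):

grep 'drinky.org.uk' /var/log/apache/access_log | less   # pick out hits for my site
grep -c 'drinky.org.uk' /var/log/apache/access_log       # just count them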

It seems that the Big Snake (not suitable if you are squeamish) is popular with the employees of Samsung in Korea and quite a few other people. Why, I don’t know. I’d forgotten all about it. It’s a verbatim copy and paste of an email I received about 2 or 3 years back, complete with annoying text typed with caps lock on. All I can assume is that someone must have come across it and emailed a link to a few friends, who then forwarded it on and on and on – it must be doing the rounds in Korea at the moment. Bizarre. But aside from that, my site has been getting a regular hammering from all over the world. I’m really surprised. I didn’t think anyone read my site at all.

So, having moved my hosting over to the account kindly provided by Sparkes, I decided to check the logs for my new hosting and my blog subdomain. Wow. Obviously not as prolific as the old one – it’s only been up for a few days – but still busy. And then I noticed something really cool.

Some of the biggest referrers are Planet Gnome, Planet Ubuntu and Planet Debian. Holy crap. People on some of the coolest blog syndicates are reading *my* blog. In the last 24 hours. Jeee-zus.

So off I went to Bloglines to have a look, as I’m subscribed to all of those planets. Planet Gnome first, as it was the biggest referrer. It seems that Jeff Waugh has linked to my Ubuntu Jingle post. That really freaked me out. Nobody really knows I’ve got a blog yet, apart from the LUG Radio guys. Jeff was interviewed at some ungodly hour of the morning by the LUG Radio team, the day after Australia Day, and still managed to be intelligent and entertaining.

So. Shit. I’ve been linked to by one of the coolest guys in the open source world. That really made my day 😀 Guess I have to finish my jingle now…

Ubuntu Jingle

I decided to do an Ubuntu jingle for LUG Radio. If you don’t already know, LUG Radio is a Linux radio discussion show that goes some way to recreating the loud, opinionated and very funny nature of our Wolves LUG meetings.

The jingle was to be based on a running joke from LUG Radio and our LUG meetings, where Aq would sing Ubuntu to the Um Bongo theme tune. Um Bongo is a kids’ fruit juice drink with the coolest advert ever, available when we were kids, and judging by the site, it’s still available now.

So, I set about making a proper version for a jingle. As a test case, I was determined to use open source software, in this case the Hydrogen drum machine software for Linux to make the drum track (thanks to Mr Ben’s reply on the LUG Radio forums) and the Eastern Hip-Hop drumkit.

After having trouble getting the JACK server to start, I had to abandon Rosegarden and Ardour, so I decided to use Audacity instead. It’s not a complex enough project to require serious multi-tracking anyway.

So: export the drum track as a .wav out of Hydrogen, create a click track in Audacity and import the .wav. All cool. Plug my crappy PC mic into the mic slot of my soundcard, try to record my hideous tribal “Ubuntu Ubuntu” chanting. Play it back. Umph. Sounds like a crap, crackly mess with heavy breathing and unintelligible, slightly out of time words. You can’t really tell I’m talking at all. Try another 2 crap PC mics and get the same result. Double check the Gnome sound controls, no problem there. Try Dynebolic Linux, a Linux sound recording Live CD – same problem but less out of time (probably to do with the low latency patches that help with sound recording issues under Linux). Try Audacity under Windows, even worse (i.e. no recorded sound at all).

So, I dig out my ‘proper’ mic from my musician days and buy an adapter to connect the quarter inch jack to a 3.5 mm soundcard input. Same problem. Shit.

So there lies my problem. I think this jingle would be really cool, I even had plans to make a version to submit to Ubuntu as a sound test wav file or something. No go. I think the latency problems with Linux sound recording can be overcome by using Dynebolic, but I think the main problem is my crap Via VT8233/A/8235/8237 AC97 Audio Controller (to quote lspci ;)).

So my fantastic project is on hold for now, until I can work out how to record a decent vocal track. I could maybe dig out my old 4-track tape recorder and use my mic through that and into the line-in or mic socket, but that’s a major ball-ache.

Linux Sound Recording (with reservations) 1 – 0 Crap Hardware.

UPDATE 17/02/2005:

Dynebolic can’t play the file back without significant stutter on my laptop. It will boot on my dad’s machine, but can’t recognise his bog standard Creative PCI 16 soundcard, which I vaguely recall uses one of the ensoniq modules. Modprobing either ensoniq module supplied with Dynebolic produces an error about depending on a PCMCIA module, which blah blah… Audacity under Windows works great with my microphone on my dad’s machine. This means the problem with my machine is the soundcard.

I’m running out of machines. Don’t make me use Windows…

UPDATE 21/02/2005:

The latest developments are in another post.

Words of advice for lazy Linux users

Learn to Search the Fucking Web (STFW) and Read the Fucking Manual (RTFM).

Now that’s a terrible thing to say, especially to people who have literally just stepped straight off the boat, as it were. But now some rationalisation.

I’ve been annoyed recently by a few people who asked stupid, obvious questions and displayed no effort to help themselves – information that anyone could find out just by going to the website of the enormously well known software in question.

Sure, I’ve not been using Linux so long that I don’t remember my own mistakes; in fact I’m really happy to see they’re not archived any more. I certainly remember my early days, when starting out with Linux seemed like staring up the north face of Everest, but having passed that stage I can tell you that there is no little secret that everybody else understands about Linux that you don’t. People who know a lot about how Linux works do so because they have read a lot.

Searching for answers to technical questions is hard when you don’t have the surrounding knowledge of the basic principles of what you have to do, but at least try. You can then say, “Aaah yes, I tried looking at that but couldn’t work out what it meant even after reading the man page, could you explain a little?” You are not expected to know the answers to everything. You are expected to try to help yourself before asking.

These incidents were different; this information was easy to find. Where do you download program xyz? What is the release schedule of distributor abc? Search the fucking web! One of these people is a very competent Linux user.

Now here’s the thing. Learn how to help yourself, show some effort, show some etiquette, and learn that subject headings like “Help me pleeeeeeeeease!” and “It doesn’t work!” mean nobody is going to help you, because people get a lot of mail and they use the subjects to determine whether they can help you or not. If you get no answer, it means nobody knows; it doesn’t mean you are being ignored. Asking again means people might then start ignoring you. If you really need to ask again, do some more work on it yourself and leave it a few days before going back, apologising for the repeat post and explaining that you have tried to solve it yourself by doing xyz but you really don’t understand what is wrong or how to fix it.

For a less crass and ill-tempered explanation of these ideas read:

  • How To Ask Questions The Smart Way
  • How to Report Bugs Effectively

Seriously, these are the two most important things a Linux beginner can read. I learnt a lot from them, and you’ll learn how to get more help from people.

Next, learn how to use Google and the man command. To give you a head start, type

man man

at a command prompt.

Learn to do that and you’re already ahead of the game. Show some attempt to help yourself and people will be willing to help you because it shows you have tried to solve the problem yourself but it is beyond your skills. The contents of man pages may baffle the hell out of you for some time but at least you can show you tried and ask for someone to explain it to you.

Don’t forget to go back and thank the people that helped you when you fix something. A quick description of the solution and a thank you will do, it helps people close the issue psychologically.

Learn to trim replies to posts so that people don’t have to wade through acres of unnecessary quotes to read your single line reply. Write your comments below what you are commenting on, not above. Top posting, as it is called, means that people won’t understand what you have written until they read what is below it, and they will have to go back and read what was above it for it to make sense. Doing either of these will make people less likely to read your mails, as you are showing poor etiquette by making it hard work to help you. Don’t write single line “Yes I agree” posts. It wastes people’s time.

Don’t post anything other than text to a mailing list. Don’t post pages of code or output either. Put them on the web and post a link. Don’t ask people to mail you in private unless it is a sensitive matter and you have at least spoken before. Don’t email people privately with questions.

Also read a little about using the command line. If you can get your head around the ls, cd, mv, cp, man and su commands you are running at full steam. If you can get used to using bash filename autocompletion (type a few letters and hit the Tab key), command history (use the up and down arrow keys), and the difference between absolute file paths (starting at /) and relative paths (starting in your current directory), you are almost the master of your own universe.
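A 30 second example of the absolute versus relative path thing (the directory names are just examples):

cd /var/log   # absolute path - starts at /, works from anywhere
cd apache     # relative path - only works if apache exists in the current directory
cd ..         # relative - back up one level
ls -l         # list what's here, in long format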

Ok, I’m going to stop now or I could be here for days. Read the above 2 links; they will really make a difference and stop you getting torn a new arsehole. Learn to search the web, learn to use man, show some willingness to help yourself, show some etiquette and show some gratitude.