Mumblings of an Ubuntu Kernel developer

2017/06/13 Gnome Desktop -- Terminal icon wars

I have recently switched my default desktop environment to Ubuntu Gnome as part of our ongoing dogfooding program. First impressions are good. As always different is confusing and I keep hitting buttons that used to do one thing and get another; 5 years with one desktop will do that to a man.

As a consummate command-line junkie I really use my desktop environment to hold my terminal windows, lots of them. It is really important that my desktop environment collates them sensibly; sensibly to me. I use a lot of command line tools for email, IRC etc. Those are necessarily hosted in terminal windows but are not semantically terminal windows. I want them grouped together away from actual terminals, and preferably they should have my preferred icons.

I am pleased to say I have been able to persuade Gnome of my predilections. It has been a long and frustrating journey. I have to thank Laney for his support in this endeavour, for answering interminable IRC questions and stopping me from sticking a fist through my screen.

Gnome Terminal

The default terminal application in Gnome is Terminal (gnome-terminal to me). This has lots of nice sounding options which ought to let me have the control I want. It supports the --class option to set the window manager class. In the X11 world this is meant to tell the window manager the type of window this is and it is common to use that to group windows. Great:
gnome-terminal --class weechat -- weechat
Of course nothing is ever simple. gnome-terminal is now smart, starting a server which spawns new windows for you, thus defeating the window manager class option. After some playing, and a lot of whining at people further down the road, I was pointed to the --app-id option, allowing me to separate instances by use case with a server for each. After some work (and getting a bug fixed in the Ubuntu gnome-terminal wrapper) I was able to use those two in combination:
gnome-terminal --app-id com.example.Terminal.weechat --class weechat -- weechat
Now my windows are separated and grouped in the alt-TAB popup. Sadly they are all called gnome-terminal.

Gnome Dash

In order to distinguish the various otherwise visually identical Terminal icons and their associated terminal windows, I wish to have specific icons for each group. Icons are determined by the .desktop file for the application, so first we have to create one for each class:
[Desktop Entry]
Encoding=UTF-8
Name=my-weechat
Comment=Chat with other people using Internet Relay Chat
Exec=gnome-terminal --app-id org.shadowen.Terminal.my-weechat --class my-weechat --hide-menubar --title Weechat
Icon=weechat
Terminal=false
Type=Application
Note that you want the Name= attribute to be unique in space and time, otherwise Gnome will associate your windows with another application (likely one with the same icon) but not with your command, and generally make your head hurt. This had me going for an hour, as one of my groups was fine (its name happened to be unique) and the other was not.

This file tells the launcher which icon to associate with this application. You need to drop that into your personal applications directory ($HOME/.local/share/applications) for Gnome to know about it. Now you can start this new application from the overview search box. You can also drag that icon from the searcher to the Gnome Dash to have it clickable. Nice. Now I have my windows grouped on alt-TAB with the specified name underneath and the appropriate icon. Win!

This seemed to work for a while, until it stopped working and they all went back to being named gnome-terminal and using the original Terminal icon in alt-TAB. Arrgggh.

Startup Notification Protocol

After much reading around the subject it seems I was hitting a race, such that the Gnome Launcher was having to guess which window was associated with the applications it knew about. As these are all terminal windows, Gnome felt at liberty to associate them with the Terminal application, even though it did correctly group them by class. This happens because launching an application is a fire-and-forget process, and finding the windows which were spawned by the started application, rather than by something that happened to start around that time, is hard. To sort this out there is a protocol which allows the newly started application to tell the window manager that this window belongs to that application. In Gnome these are defined using the StartupNotify= and StartupWMClass= attributes:
StartupNotify=true
StartupWMClass=my-weechat
With these set to match the class used by gnome-terminal, the Gnome Launcher was able to reliably associate the new windows with the appropriate icon both in the Gnome Dash and in the alt-TAB window.

Complete Example

Here is the final complete desktop entry:
[Desktop Entry]
Encoding=UTF-8
Name=my-weechat
Comment=Chat with other people using Internet Relay Chat
Exec=gnome-terminal --app-id org.shadowen.Terminal.my-weechat --class my-weechat --hide-menubar --title Weechat
Icon=weechat
Terminal=false
Type=Application
StartupNotify=true
StartupWMClass=my-weechat

Result

Finally I have windows grouped under the appropriate icons every time. Nice.


2015/06/08 Living with a Ubiquiti EdgeRouter Lite-3

I have been using an old Dell Mini 9 as firewall, IPv6 tunnel end-point, and file server for my local networks for some years. Fear of it just melting into a heap of slag was starting to keep me up at night; time for it to be put out to pasture. This also seemed like a good time to spend a little money and split out the functions sanely.

After a lot of research I ended up purchasing a Ubiquiti EdgeRouter Lite-3 with a view to using it as my boundary router and IPv6 tunnel end-point. All the documentation implied that this little device would handle all of the pieces I need: DHCP, Hurricane Electric IPv6 tunnels, VLANs, firewalls etc. All that and it was sub-100 GBP delivered to my house, well worth a punt. So I ordered one, and waited impatiently for it to arrive. Once it arrived I put it on the shelf, planning on playing with it "this" evening; needless to say the box sat on the shelf for a couple of months, oops.

Finally, this weekend I got round to pulling it out and booting it up. It is a nice small package, silent of course, and it seems to perform admirably. Using the web interface I was quickly able to assign the various interfaces to the appropriate networks, add the VLAN interfaces I needed, and put down basic addresses on them. Not bad for an hour of fiddling.

When I went to sort out my fairly complicated firewalling requirements things got a bit trickier. After some googling I found the simplest approach was to use Zone based firewalling, but this form is not supported by the web interface. Time to break out a bigger hammer and get to know the configuration CLI.

The configuration CLI turned out to be very simple to use, and pretty intuitive. I am sure it is instantly recognisable to those of you who have to incant at cisco-style routers. You update the configuration in "configure" mode, then "commit" to test the changes, and "save" to make them persistent across reboots. A handy split for when you firewall yourself away from the configuration interfaces! After another couple of hours of googling and hacking at my rules I had the IPv4 side of things set up as I wanted and working pretty well.
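For flavour, a minimal zone-based firewall session looks something like the following. The zone and ruleset names here are invented for illustration; the syntax is the Vyatta-derived EdgeOS configuration language, so check the zone-policy documentation before trusting any of it:

```
configure
set zone-policy zone LAN default-action drop
set zone-policy zone LAN interface eth1
set zone-policy zone LAN from WAN firewall name WAN_TO_LAN
commit
save
exit
```

Here "commit" applies the candidate configuration so it can be tested, and "save" makes it persistent across reboots, matching the split described above.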

I still need to set up the DHCP servers and the IPv6 side of my world, but good progress and so far a pretty nice experience.

2013/10/31 Booting ARM64 Ubuntu Core Images (on the Foundation Model)

For a while now we have been hearing about the new arm64 machines coming
from ARM licencees. New machines are always interesting, new architectures
even more so. For me this one is particularly interesting as it seems to
offer a much higher performance per watt than we are used to,
and that can only bring down hosted server prices. Cheap is something
we can all relate to.

There has been some awesome work going on in both Debian and Ubuntu to
bring up this new architecture as official ports. In the Ubuntu Saucy
Salamander development cycle we started to see arm64 builds, and with
the release of Ubuntu 13.10 there was a new Ubuntu Core image for arm64.

This is awesome (to geeks at least), even if right now it is almost
impossible to actually get anything which can boot it. Luckily for us ARM
offers a "Foundation Model" for this new processor. This is essentially an
emulator akin to qemu which can run arm64 binary code on your amd64 machine,
albeit rather slowly.

As one of the Ubuntu Kernel Engineers, the release of the Ubuntu Core image
for arm64 signalled time for there to be an official Ubuntu Kernel for
the arm64 architecture. This would allow us to start booting and testing
these images. Obviously as there is no actual hardware available for
the general public, it seemed appropriate that the first Ubuntu Kernel
would target the Foundation Model. These kernels are now prototyped,
and the first image published into the archive.

As part of this work, indeed to allow validation of my new kernel, I was
forced to work out how to plumb these kernels into the Ubuntu Core image
and boot them using the Foundation Model from ARM. Having done the work
I have documented this in the Ubuntu WIKI.

If such things excite you and you are interested in detailed instructions
check out the Ubuntu WIKI:

http://wiki.ubuntu.com/ARM64/FoundationModel

Have fun!

2013/04/17 BT Fail (or "I have never been so angry")

For those of you who do not have to hear me whine on a day to day basis about, well frankly, everything, you will not be aware that I have been waiting for broadband to be connected to my new house. Today was the 5th week of waiting for this simple seeming task to be completed. (Please don't make me even more angry by telling me how your US supplier pays you compensation every day it takes longer than ONE, I expect some level of suck from my UK service providers, else I would emigrate.) Along the way I have had to have a huge hole made in my brand new house, and had to have countless engineers attend to try and supply my service. Today should have been the end of this debacle, I should now have super fast Internet, I should be happy.

I am angry, so angry that it is unclear I have ever been more angry. If my house was not so new I suspect that objects might have been thrown, hard.

Today was meant to be the third attempt to hook up my internet. Today at 2pm I get a call:

"Hello we aren't coming today *beam*.
No sir, we don't know why, the system says 'Technical Problems'.
Someone will call you within 30 hours to tell you why, honest.
Sorry we do understand this isn't what you were hoping for."
Frankly you do not understand; you have no clue how I am feeling, so let me enlighten you. My blood is boiling; if I had a heart condition you would likely have killed me. I have had to go out for a walk to avoid breaking things. I am now writing this in catharsis.

As I tried to explain to the caller, it is not so much that you are cancelling my slot, shit happens, people go sick, etc etc, it is that you have no idea why it went wrong, that you won't know for 24 earth hours, and that you cannot tell me when you are going to attend to actually complete the work. This is utterly unacceptable. Actually, when I phoned your own helpdesk they seemed to be able to find out that "Your appointment was cancelled because we [BT] failed to confirm it with the suppliers". The website says that "Your appointment was no longer needed because the engineer could enable your service from the exchange." Who knows what is true. Whatever is true, I do not have the promised service. I did not have an engineer attend, despite confirming the appointment was scheduled on four separate occasions over four consecutive days including Monday this week, on some days by more than one person at BT actually calling the engineers to check.

BT you SUCK. If Virgin (perhaps one day I will be calm enough to tell you how they suck) didn't suck harder you would have lost my business today.

2013/02/18 IPv6 exceeds 1% of google search traffic (continuously)

In the ongoing march towards an IPv6 only Internet, IPv6 is not a speedy traveller, but it did reach a mini-milestone this week. Google reported that IPv6 traffic was greater than 1% of its total traffic all week, on a regular ordinary week (well, the week has the same basic shape as most non-holiday weeks). Usage continues to edge higher and higher:

http://www.google.com/ipv6/statistics.html
Do I hear 2%? (Probably not for a little while.) Yes I know it is sad to be interested in this graph but hey, one has to be into something.

2013/02/12 GPG key management

As all good boys did, I (relatively) recently generated a nice shiny new GPG key, all 4096 bits of it. I have switched everything over to this key and have been happy. Today I was wondering whatever happened to the old key. After some minutes trying to remember what the passphrase was (oops) I finally managed to find and open the key.

Time it seems to revoke it so that I never have to worry about it again (and before I forget the passphrase for good). Revoking a key essentially puts an end date on the key, it says any use of the key after this date is definitively invalid. Luckily revoking a key (that you can remember the passphrase for) is relatively simple:

gpg --edit-key <key-id>
gpg> revkey
gpg> save
gpg --send-key <key-id>
While I was at it I started to wonder about losing keys and how one guards against total loss of a key. The received wisdom is to set an expiration date on your key. These may be extended at any time, even after the key has technically expired, assuming you still have the private key. If you do not then at least the key will automatically fall out of use when it expires. Adding an expiry date to a key is also pretty simple:
gpg --edit-key <key-id>
gpg> key 0
gpg> expire
...
Key is valid for? (0) 18m
gpg> key 1
gpg> expire
Changing expiration time for a subkey.
...
Key is valid for? (0) 12m
gpg> save
gpg --send-key <key-id>
Note here I am setting the subkey (or keys, key 1 and higher) to expire in a year, and the main key to expire in 18 months.

At least now the keys I care about are protected and those I do not are put out of use.

2013/02/11 HTML should be simple even with a javascript infection

Having been there in the simple days when a web server was a couple of hundred lines of code, and when HTML was a simple markup language pretty much only giving you hyperlinks and a bit of bold, I have always found javascript at best an abomination and certainly to be avoided in any personal project.

My hatred mostly stems from just how unclean the page source became when using lots of fancy javascript, and how javascript dependent everything became as a result. Turning javascript off just broke everything, basically meaning you had to have it enabled or not use the sites. This is just wrong.

Recently I have been helping a friend to build their own website, a website which for reasons I find hard to understand could not be simple, with just links and bold, but really had to have popups, fading things, slides which move, all those things you really can only do easily and well in javascript. Fooey.

Reluctantly embracing these goals I spent some time implementing various bits of javascript and ended up, as predicted, in some kind of maze of twisty little passages, all alike. I was fulfilling my own nightmare. Then something happened: I accidentally discovered jquery. Now jquery is no panacea, but it does simplify the javascript you need to write so it is clearer and cleaner, which is no bad thing. The real jolt was the methodology espoused by the community there: write pages which work reasonably well with just HTML and CSS, and then during page load, if and only if javascript is enabled, rewrite the pages to add the necessary magic runes. Now you can have nice maintainable HTML source files and still have fancy effects when available.

I have used this to great effect to do "twistable" sections. When there is no javascript you get a plain document with all of the text displayed at page open. If it is available then little buttons are injected into the sections to allow sections to be opened and closed on demand and the body is hidden by default. All without any significant markup in the main HTML source, what little semantic markup there is has no effect:

<h2 class="twist-title">Section Title</h2>
<div class="twist-body">
Section Text
</div>
Now that is source you can be proud of. Yes, there is some site-wide jquery setup required which I will avoid including in its full glory as it is rather messy, but this example shows the concept:
$(function() {
    $(".twist-title").prepend('<span class="twist-plus">+</span> <span class="twist-minus">-</span> ');
    $(".twist-body").hide();
    $(".twist-title").click(function (event) {
        $(this).children('.twist-plus').toggle();
        $(this).children('.twist-minus').toggle();
        $(this).next().toggle();
    });
    $(".twist-title").css("cursor", "pointer");
});
Ok, this is not so easy to understand, but the majority of the code, the HTML pages that the people who write the content have to look at, is easy to understand. I think you will agree this is a win all round.

2012/10/29 IPv6 hits 1% of google search traffic

There has been much talk as to how IPv6 would only be credible when it reached 1% of overall traffic. Today we hit that milestone, at least for Google search traffic:

http://www.google.com/ipv6/statistics.html
Ok, it is not an overall number by any means but the upward trend is undeniable. Awesome.

2012/09/22 ARIN enters IPv4 exhaustion phase two

This week we saw another step towards IPv4 address exhaustion. The American Registry for Internet Numbers (ARIN) reached 3.00 /8s of space remaining, triggering phase two of its exhaustion planning. This sees requests move to a first-in first-out model and the overall criteria for allocations tightening. Not exhaustion yet, but considering just how quickly addresses disappeared this week (from 3.02 /8s to 2.89 /8s overnight) we will be there soon enough. For more details on what this really means:

https://www.arin.net/resources/request/countdown_phase2.html

2012/09/16 RIPE IPv4 pool exhausted

September 14th 2012 was another big day for IPv6. On that day the RIPE NCC, the Regional Internet Registry (RIR) for Europe, exhausted its free IPv4 pool:

http://www.ripe.net/internet-coordination/news/ripe-ncc-begins-to-allocate-ipv4-address-space-from-the-last-8
This triggers section 5.6 of its IPv4 Address Allocation and Assignment Policies. These procedures still allow RIPE members to receive IPv4 addresses, but only in very small allocations and only if they already have an IPv6 allocation. In particular they are limited to a single allocation:
1a. LIRs may only receive one allocation from this /8. The size of the allocation made under this policy will be exactly one /22.
With two of the five RIRs now essentially out of IPv4 addresses this has to help spur on the adoption of IPv6. One can hope.

2012/07/26 IPv6 and dovecot

For my mail server I use dovecot. By default this only binds its sockets in IPv4. It is however trivial to enable binding of IPv6 as well. Simply change the "listen = *" entry in its configuration (/etc/dovecot/dovecot.conf) as below:

listen="*, [::]"
With that done and the service restarted, my email is now available over IPv6 too. Most likely this should become the default.

2012/07/26 IPv6 and weechat

With my bip proxy IPv6 enabled it is time to look at my IRC client. I am a 'weechat' user so that is my target. It seems that weechat has some support for IPv6 via its 'ipv6' option on the server configuration block. You can set this option for connection 'freenode' as below:

/set irc.server.freenode.ipv6 yes
This certainly switches the connection over to IPv6, but it does not seem to fallback to IPv4 automatically. Some additional support will be required to handle this, though the changes look minor. I guess we can call this half enabled.

2012/07/12 IPv6 dual-stack listeners?

As alluded to in my previous post "IPv6 and bip" when a service creates its network endpoint as an IPv6 socket it is actually able to communicate with both IPv6 clients and IPv4 clients interchangeably. But how does this work?

This compatibility mode is defined in one of the foundational Request For Comments (RFC) documents, specifically RFC 3493, which defines how the socket() interfaces should behave with respect to IPv6 (see section 3.7 for the gory details). This defines a reserved area of IPv6 address space which maps 1:1 to IPv4 addresses. Essentially the IPv4 address may be translated into a unique reserved IPv6 address by concatenating the 0:0:0:0:0:FFFF prefix with the existing IPv4 address. As this address is unique, the underlying socket implementation can directly infer the correct physical protocol to use for this connection from the address alone, allowing connections to safely coexist.

The great thing about this compatibility mode is it works for any socket, server or client. This allows an application to support only IPv6 and yet remain compatible with addresses of either type. We no longer have to care which they are, nor does a service have to bind sockets to each protocol and handle the complexity that entails. Magic.
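A quick way to see the mapped addresses in action is a few lines of Python; this is a sketch of the mechanism itself, not code from any particular service. An AF_INET6 socket bound to the unspecified address, with IPV6_V6ONLY switched off, happily accepts a plain IPv4 client and reports its peer in the ::ffff:0:0/96 range:

```python
import socket

# Sketch of RFC 3493 dual-stack behaviour: an AF_INET6 listener with
# IPV6_V6ONLY disabled also accepts IPv4 clients, which then appear
# as v4-mapped addresses (::ffff:a.b.c.d).
srv = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
srv.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)
srv.bind(("::", 0))                 # unspecified address, ephemeral port
srv.listen(1)
port = srv.getsockname()[1]

# A plain IPv4 client connecting to the same port.
cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.connect(("127.0.0.1", port))
conn, peer = srv.accept()
print(peer[0])                      # the IPv4 client as the IPv6 socket sees it

conn.close()
cli.close()
srv.close()
```

On a Linux machine with the default net.ipv6.bindv6only=0 this prints the loopback client as ::ffff:127.0.0.1, exactly the mapping described above.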

2012/07/10 IPv6 and bip

As a keen IPv6 advocate I have been playing around with the various applications and services I use on a regular basis and have been trying to enable IPv6 use; today it was my IRC proxy 'bip'. bip turns out to be very simple to convert indeed. Simply requesting bip bind on the IPv6 unspecified address (::) triggers it to switch to IPv6 and (through the magic of the Linux dual-stack IPv4 handling) this enables either IPv4 or IPv6 clients to connect to the proxy.

To change the default bind address in bip simply change the ip configuration in your .bip/.bip.conf to use the IPv6 unspecified address as below:

ip = "::";
As simple as that. Probably the default behaviour of bip should be to bind :: (IPv6) and on failure bind 0.0.0.0 (IPv4).

2012/06/20 Ubuntu Plus One

In the Precise development cycle a new archive-oriented team was trialled, the Plus One team. This team was created to help keep the archive in an installable and buildable state at all times, a major driver for early testing and for a solid LTS release. The Plus One team is tasked with general housekeeping for the Ubuntu archive: finding issues such as build failures or packages which are no longer built from the source in the archive, and figuring out what to do with them. They are also responsible for figuring out why our images are broken on any particular day and driving resolution.

When they were looking for volunteers for the team for the Quantal cycle I was put forward to help. It sounded interesting as the work has a much broader base than my normal role, touching anything in the Ubuntu archive. I have been working with Debian packages for over 4 years, but mostly kernel related packages, and they have their own quirks. This gave me a chance to branch out and solidify my Ubuntu skills, a good stepping stone to becoming a core-dev in my own right. An exciting and scary prospect.

I have been working on the Plus One team for a couple of weeks now, and all I can say is it has been a baptism of fire: I have had to touch C++, C, python, ruby, perl (and more), often in the same package. I have had to fiddle with autoconf, and get familiar with the multitudinous patching schemes. I have learned a healthy hatred for quilt (as it lets me lose my changes for the umpteenth time). I have test-built untold packages; my poor test build server is crying out for a rest after the utter pounding I have given it. For all this work we have made some progress indeed, but at this time in the cycle the breakage is building at least as fast as we club it into submission. At times it is soul destroying.

Overall though it has been a very positive experience. I have learned a huge amount about the archive and packaging in the Ubuntu world, and gained a healthy dose of respect for anyone who voluntarily maintains other people's packages. All I can really say is a big thank you to those who look after this stuff full time, you are made of strong stuff indeed.

2012/05/28 World IPv6 Launch Day -- 6th June 2012

The 6th of June 2012 is an interesting day, World IPv6 Launch Day. On this day a swathe of influential web-sites will be enabling IPv6 addresses for their services by default and not turning them off. Why is this interesting? For the most part it is not! For most people nothing should happen, things should continue working probably using IPv4, fine. For those of us with working IPv6 again nothing will change other than we will be producing more IPv6 traffic, great. It is those who have unused broken IPv6 who will start having fun, they will likely lose connectivity to the participating sites until they sort out their issues.

So why is this a good thing? This is a key step towards IPv6 adoption, and we really do not have any other choice but to adopt IPv6. Adoption will only be driven by need, and this move creates exactly that need. Such ISP and client issues need to be identified and solved before end users lose connectivity to even greater swathes of the Internet: those parts which will only exist on the IPv6 Internet.

For more information see: http://www.worldipv6launch.org/

2011/11/15 IPv6 at home?

The Internet has been alive with doom-saying since the IPv4 global address pool was parcelled out. Now I do not subscribe to the view that the Internet is going to end imminently, but I do feel that if the technical people out there do not start playing with IPv6 soon then what hope is there for the masses?

In the UK getting native IPv6 is not a trivial task; only one ISP I can find seems to offer it, and of course it is not the one I am with. So what options do I have? Well, there are a number of different IPv4 tunnelling techniques such as 6to4, but these seem to require the ability to handle the transition on your NAT router, not an option here. The other is a proper 6in4 tunnel to a tunnel broker, but this needs an end-point.

As I have a local server, that makes a sensible anchor for such a tunnel. Talking round with those in the know I settled on getting a tunnel from Hurricane Electric (HE), a company which gives out tunnels to individuals for free and seems to have local presence for their tunnel hosts. HE even supply you with tools to cope with your endpoint having a dynamic address, handy. So with an HE tunnel configuration in hand I set about making my backup server into my IPv6 gateway.

First I had to ensure that protocol 41 (the 6in4 tunnelling protocol) was being forwarded to the appropriate host. This was a little tricky as it required me to talk to the configurator for my wireless router. With that passed on to my server I was able to start configuring the tunnel.

Following the instructions on my HE tunnel broker page, a simple cut-n-paste into /etc/network/interfaces added the new tunnel network device; a quick ifup and my server was using IPv6. Interestingly my apt-cacher-ng immediately switched the backhaul of its incoming IPv4 requests to IPv6, no configuration needed.
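The stanza HE generates looks roughly like this; the addresses below are documentation placeholders rather than my real tunnel details:

```
auto he-ipv6
iface he-ipv6 inet6 v4tunnel
        address 2001:db8:1:2::2
        netmask 64
        endpoint 203.0.113.1
        local 192.0.2.10
        ttl 255
        gateway 2001:db8:1:2::1
```

Here "endpoint" is the HE tunnel server's IPv4 address, "local" is your own IPv4 address, and "address"/"gateway" are your side and HE's side of the tunnel's IPv6 /64.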

Enabling IPv6 for the rest of the network was surprisingly easy. I had to install and configure radvd with my assigned prefix. It also passed out information on the HE DNS servers, prioritising IPv6 in DNS lookup results. No changes were required for any of the client systems; well, other than enabling firewalls. Win.
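For reference, a minimal radvd.conf of the sort involved looks something like this; the prefix and the DNS server address here are placeholders, the real values come from your HE tunnel details page:

```
interface eth0
{
        AdvSendAdvert on;
        prefix 2001:db8:1:2::/64
        {
                AdvOnLink on;
                AdvAutonomous on;
        };
        RDNSS 2001:db8::53
        {
        };
};
```

The prefix block advertises your assigned /64 for stateless autoconfiguration, and the RDNSS block is how the DNS server information gets passed out to clients.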

Overall IPv6 is still not simple as it is hard to obtain native IPv6 support, but if you can get it onto your network the client side is working very well indeed.

2011/06/21 Oh no 3.0

After 39 2.6.x releases Linus Torvalds has chosen to revisit the upstream kernel versioning. The plan is to release what would have been 2.6.40 instead as version 3.0:

"I decided to just bite the bullet, and call the next version 3.0. It
will get released close enough to the 20-year mark, which is excuse
enough for me, although honestly, the real reason is just that I can
no longer comfortably count as high as 40."
When 3.0-rc1 was released the Kernel Team had to decide what version to use for it in Ubuntu. We typically upload every -rcN release within a couple of days of its release, so the pressure was on. We could simply call it 3.0.0, knowing that all the current scripting would cope, or 3.0, better matching its official name, knowing this would not be plain sailing. This was not a decision we could delay as in Debian versioning 3.0 < 3.0.0, so we were likely to be committed for Oneiric if we uploaded using 3.0.0. It was also not clear from upstream discussion what version number the final release would carry, as 3.0 clearly will cause breakage with older userspace.

After much discussion we decided to bite the bullet and upload a 3.0 kernel. At least this way we get a chance to identify problematic applications, while still keeping our options open to move to a 3.0.0 kernel for release should that be prudent. As expected this was not smooth sailing, not least for the kernel packaging which needed much love to even build this version correctly. Plus we had to hack the meta packages to allow them to be reversioned later too.

Once successfully uploaded the problem applications started to crawl out of the woodwork:

  • depmod -- the depmod invocation to create the module dependencies identifies the kernel version on its command line, but was assuming that a version contained three components; this led it to miss the version entirely and rebuild the wrong dependencies;
  • libc6 -- both the runtime and the installation control scripts manipulate the kernel version number, in both cases assuming the version had three components; enormous fun getting the pending updates installed;
  • ps/top -- on startup the kernel version was checked and misdecoded, triggering a rather nasty sounding version warning whenever they were started;
  • nfs-utils -- when attempting to read and identify the kernel version the nfs-utils would trigger a SIGSEGV and die, causing boot failures on machines with NFS roots; and
  • lm-sensors-3 -- this package is only compatible with kernels 2.6.5 and above; failed version detection led to this test failing and sensors being left unconfigured.
Those are the ones we have found so far; I am sure there will be more. If you do find one please file a bug against the failing package and tag it kernel-3.0 so we can find them.
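The common failure mode is easy to sketch in a few lines; this is illustrative code, not taken from any of the packages above. A parser written against the 2.6.x pattern, insisting on three numeric components, simply fails to match a two-component version like 3.0:

```python
import re

# A naive parser of the sort these tools embedded: it insists on
# exactly three numeric components in the kernel release string.
def parse_kernel_version(release):
    m = re.match(r"^(\d+)\.(\d+)\.(\d+)", release)
    return tuple(int(x) for x in m.groups()) if m else None

print(parse_kernel_version("2.6.38"))  # matches: (2, 6, 38)
print(parse_kernel_version("3.0"))     # no match: None
```

Depending on how each tool handled the failed match you got anything from a bogus warning (ps/top) to a crash (nfs-utils), which is exactly the spread of breakage listed above.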

2011/05/19 Union File Systems (again)

During the early part of the Maverick cycle we once again revisited our Union Mount solution. At that time VFS union-mounts was the hit of the day, set finally to produce something which might get into the kernel. Since then the complexity of changing every filesystem to support whiteouts, its invasiveness, and its effects on POSIX semantics have led to it falling by the wayside. In its place has sprung overlayfs.

overlayfs is a small patch set which is a hybrid of the VFS union-mount approach and that of aufs/unionfs, in that it also provides a filesystem. This greatly reduces the complexity of the patch set, reducing its invasiveness and thus increasing its chances of ever being merged. So much simpler is it that your author is actually able to understand and debug it. Win.

We have been tickling overlayfs for most of the Natty cycle, but with Natty in the can I have had some time to catch up with its development and help out a little, both with testing and bug fixing. This culminated today in my being able to inject a kernel containing overlayfs support into an Ubuntu LiveCD and boot it, then update it to the latest Natty, all without error.

overlayfs may shortly be in a mergeable state, nirvana for all union mount lovers. Only time and testing will tell.

2010/07/02 Ubuntu Kernel Crack of the Day

We have been producing the mainline kernel crack of the day[1] for some time now. But this targets the upstream kernel, and while great for testing and bug isolation it does not provide us with any pre-upload testing on the Ubuntu kernel delta.

Enter the pre-proposed kernel PPA[2]. We are now uploading the unreleased tips of the Ubuntu kernel trees to this PPA. These builds will contain any bug fixes marked Fix Committed and should provide a vehicle for advance testing of these before they hit the archive. For Maverick we will be uploading these automatically as the tree changes, roughly daily. This should help us avoid the ThinkPad debacle we experienced late in Lucid.

I would encourage you to add this PPA to your sources and help us test this kernel before we unleash it on the world.
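
For reference, the matching sources.list entry would look roughly like this (the series name here is an assumption; substitute whichever series you are running):

```text
deb http://ppa.launchpad.net/kernel-ppa/pre-proposed/ubuntu maverick main
```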

[1] https://wiki.ubuntu.com/KernelTeam/MainlineBuilds
[2] https://edge.launchpad.net/~kernel-ppa/+archive/pre-proposed/

2010/06/01 Union Filesystem Plans

At UDS we returned to the subject of those drivers we carry which are specific to Ubuntu, why they are not yet upstream, and what we can do about it. Union filesystems are a key technology in producing the live CD environment used to allow both non-destructive testing and the graphical installer. This technology has long been a contentious subject as the patch sets have been extensive and intrusive to the VFS. Worse, there have been a number of camps all disagreeing as to the most sensible solution.

For a number of series we have carried the AUFS/AUFS2 patch kit as an Ubuntu add-on. This has been a solid performer in this space and served our purposes well. About a year ago now talk began upstream on what approach would be acceptable there. This has resulted in a proposal for an integrated VFS based solution for union filesystems called union-mounts. Patches have been circulating for the best part of a year and we flirted with them for the Lucid cycle, but at that time they were feature incomplete, preventing full testing.

Recently updated patches have been circulated which should be feature complete, and we are planning to provide kernels enabled for union-mount for testing in a PPA. Should testing there prove good we will consider switching to this solution for our live CDs. Watch this space as they say.

2010/05/16 Maverick UDS: the hangover

What a mad week. Finally UDS is over and at least some of us are back home, despite the efforts of a certain Icelandic volcano and of Eurostar. For me this has been one of the most valuable UDSes I have attended. I suspect in large part this is due to my being more experienced in the 'one hour to rule the world' mentality. The rather fetching five minute warning popups were pretty handy to focus the mind on getting the pertinent actions into the gobby seance.

Last cycle I was Kernel Release Manager, which meant I had responsibility for all of the core kernel blueprints and was tied to the kernel track almost exclusively. With that responsibility passing to Leann I found myself free to rove to other tracks and stick my nose in. I attended a number of X/graphics related discussions. It was great to meet those behind the IRC nicks I deal with so very often.

More on what we are going to be playing with once the hangover dissipates; in the meantime some Guitar Hero methinks.

2010/04/23 Lucid Kernel Final Configuration

Now that release is fast approaching it was thought appropriate to advertise the final kernel configurations for all of the main distro and ports kernel flavours. The purpose is to expose the main configuration changes to scrutiny and to provide pointers to the full configurations where those are of interest.

For Lucid we have aimed primarily at stabilisation and supportability. As such we do not expect there to be any radical configuration changes in these kernels. There has been a drive to commonise and standardise options between architectures and flavours wherever possible, to help standardise the experience. There has also been a drive to pull out to modules some sub-systems which are commonly replaced by users, such as HID, and also to pull out the majority of the PATA and SATA drivers as it is most common to require only a single one of these. We have also enabled KMS for all graphics hardware where it is supported.

To aid in this comparison we have generated a delta report between Karmic and Lucid which shows all items which have changed and how the value has changed. This report can be found at the URL below:

https://wiki.ubuntu.com/KernelTeam/Configs/KarmicToLucid
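
The report itself was generated from the full configs, but the underlying comparison is a simple per-option diff. A toy sketch (the option values below are invented purely for illustration):

```shell
#!/bin/sh
# Two tiny stand-in config fragments (invented values for illustration).
cat > karmic.config <<'EOF'
CONFIG_ATA=y
CONFIG_HID=y
CONFIG_DRM_I915=m
EOF
cat > lucid.config <<'EOF'
CONFIG_ATA=m
CONFIG_HID=m
CONFIG_DRM_I915=m
EOF

# Report every option whose value differs between the two series.
awk -F= 'NR==FNR { old[$1] = $2; next }
         ($1 in old) && old[$1] != $2 {
             printf "%s: %s -> %s\n", $1, old[$1], $2
         }' karmic.config lucid.config
```

On these made-up inputs the report would show CONFIG_ATA and CONFIG_HID moving from built-in to modular, the kind of change described above, while unchanged options stay out of the noise.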

To facilitate series to series comparison we include pointers to both the full Karmic and Lucid configurations for each flavour, all of which can be found at the URL below:

http://kernel.ubuntu.com/~kernel-ppa/configs/

Enjoy!

2010/03/11 Lucid Kernel Freeze

Today, March 11th 2010, marks Kernel Freeze for the Lucid kernel. This means that the kernel moves from active development into its stabilisation phase. All planned kernel features are now set, included, and enabled, and the kernel team's focus now moves from new enablement to testing, bug isolation, and fixing of issues found in the kernel. The kernel will now transition over to the stable maintenance team, which will be responsible for patch acceptance from here on.

What does this transition mean for you? Now is the time to test things you care about and report any issues in Launchpad against the linux package. If you have bugs open found earlier in the cycle please retest with the latest and greatest kernel and report back whether those bugs are still present and which kernel you tested. The upcoming Beta-1 release is an ideal test platform.

Additionally this transition means it will be much harder to make a change to the kernel. From today patches will need to meet the same criteria as would be required for an SRU[1] to a released kernel. That means that the patch must have a Launchpad bug open, it must be a fix for an actual bug being experienced in the field, it must be sent to the kernel-team email list for review, it must receive two ACKs from kernel team members, and finally you must test the updated kernels and report back.

Lucid will be with us for a long time so please help us make this the best kernel possible. Please test beta-1 and report your issues. Thanks!

[1] https://wiki.ubuntu.com/KernelTeam/StableKernelMaintenance

2010/03/05 Lucid DRM Update

After much discussion within the Ubuntu Kernel team, the Ubuntu X team,
and with the various Graphics upstreams it has become clear that the
2.6.32 drm stack is not of sufficient quality to form a good basis for a
LTS release. 2.6.32 does not contain Nouveau so we are already committed
to a backport of that for KMS there. Upstream is essentially saying
ATI Radeon KMS support in 2.6.32 is so bad that the recommendation is to
disable it globally. Finally, i915 does not support the latest chipsets
well (chipsets which are slated to become prevalent over the next few
months), and backports are already extremely painful.

The recommendation from upstream is to use the 2.6.33 drm stack if we
desire KMS to be enabled generally, a clear goal for Lucid. Following a
review it does appear that the drm subsystem is sufficiently self contained
that it is possible to backport just that subsystem into our 2.6.32 tree.
This gives us a hybrid kernel gaining the long-term stable support backing
for the main kernel (a major bonus as this has to be supported for 5
years on servers) while gaining the more stable 2.6.33 graphics support
for desktop use. Additionally upstream is essentially rejecting 2.6.32
as a supportable stack, and is committing to longer support for 2.6.33
as their stable version. We are therefore planning to upload a hybrid
2.6.32 kernel containing the 2.6.33 drm backported.

From an Ubuntu stable maintenance standpoint we should be able to track a
hybrid of 2.6.33.y for drm and 2.6.32.y for the remainder of the kernel and
due to the separation that drm enjoys we hope to avoid major conflicts.
Plus we gain the longest possible support from upstream for each part.
This will also remove the requirement to install an LBM package to get
Nouveau, cleaning up the install significantly. It seems likely that
Debian and other distros will be following a similar hybrid approach
allowing us to share the maintenance burden.

2009/07/25 Wibble Wobble Gwibber

It happens to us all in the end. "No there is no point in being on FaceBook, Twitter is pointless", you know the drill. You are certain you do not need these things in your life but you sign up just "to be in touch with so and so". Before you know it you have about 20 accounts, no idea which one to check when, and little inclination to do so as time is in such short supply. Gah!

I have recently discovered Gwibber. A simple little desktop application which does all that checking for you and shoves all the interesting titbits in one box. Gwibber reads all your incoming updates and shoves them into a single personal timeline, even colour coding the entries so you can tell them apart. Want to reply to something? Just hit the reply icon and it sends the reply to the right place without you even needing to know which service it came from. What more could a minimalist want, I ask you? Struggling with your feeds? Give it a go.

2009/07/24 Bookmark Knots

I have bookmark knots in my life. I have a laptop, a new netbook, oh another laptop, a media server on my TV. Err, plus my bookmarks are different on every single one of them. In fact I have given up because I could never find the right one, and deleted them from all but my day-to-day machine.

Gaining a new netbook has brought a new urgency to solving this problem. I want to be able to take my netbook away to conferences and yet I need that to be a workable environment. Bookmarks are key to my finding anything it seems, so I need to get those synchronised on both these boxes. As I am already owned by Google I started out evaluating the Google Toolbar. It looks ok I guess, but it adds a toolbar which on a netbook is like stealing half the screen, and it does not have sidebar support. On my netbook I have taken to removing all the toolbars and using the bookmarks sidebar instead to gain some space, so whatever I found really needed to support that.

Enter GMarks. This Firefox extension basically replicates the Bookmarks menu and sidebar but synchronised with google bookmarks. It even seems to support a toolbar though I have yet to find out how one gets that to display. So far so good. I will let you know how I get on with it.

2009/07/24 Playing with my Dell Mini 10v

I have been talking about getting myself a netbook so I do not have to lug my main laptop about when we visit family or travel to conferences. I have said I was going to order a Dell Mini 9 a few times. By the time I got round to ordering it the Dell Mini 10v had basically replaced it, sigh. Anyhow I really did order one. My 10v arrived a couple of days ago, and it's prettier than I imagined. Say hello to Penfold.

So what is it like? It's very light and just that little bit bigger and more usable than the 9 I had seen. The keyboard is a decent size and the only keys which are not normal are the function keys, which it seems I really do not use that much. A significant improvement over the 9's layout.

As a keen Ubuntu convert obviously I ordered the Ubuntu version which thankfully is still cheaper than the M$ loaded version, at least in the UK.
It had a Dell pre-install of Ubuntu based on Hardy. Although it seemed perfectly usable I wanted to see how well Karmic worked on this baby. A quick backup of the SSD and I was installing from a USB stick. Overall completely painless.

The only odd feature of this machine is the touchpad. The left and right clicks are activated by pushing down the corners of the touchpad. This has an odd effect as the buttons are in the active area of the touchpad, and clicking tended to cause the mouse to jump. I will not even attempt to describe the gymnastics required to do a long drag.

I was resigned to getting a little travel mouse when using this thing but thought I would do a little googling first. This turned up a reference to an updated synaptics X driver from Alberto Milone. I have installed the xserver-xorg-input-synaptics package (only) from the PPA below:

https://launchpad.net/~albertomilone/+archive/ppa

That change seems to have sorted things out. The power of open source at work. Awesome.

2009/07/16 Karmic Kapers

I've been putting it off long enough; it's time to upgrade to Karmic fully rather than wussing out running just the kernel. So today I bit the bullet and let update-manager move me up to Karmic. So far ... well I am able to post this so things aren't all bad. In fact so far things are pretty good. As I have an all-Intel box here I have Kernel Mode Setting (KMS) goodness. Rather unexpectedly I have full compositing of OpenGL apps!?! What does that mean? Well, it means that I can move a glxgears window around without it making a mess, and even 'tilt' my compiz cube desktop and see those applications running on its surfaces. Most impressive.

On the negative side I have had a couple of odd moments. When quitting some full screen 3D games I have ended up with my main desktop resolution changed, in one case to 640x480, which was a challenge to sort out. Also my sensors config was dropped on the floor. Most oddly I had to remove my twitter and identi.ca accounts from gwibber and re-add them before they would do anything other than error on me.

It's probably a bit early to recommend you come and join me in the wild-lands, but at least some of the wild animals out here are friendly!

2009/06/30 Welcome to Whipper

It seems that we have a new member of the family. Whipper joined us today. Our new car! We have finally mini-sized our car portfolio, two for one. I hope she will enjoy playing with us :)

2009/06/30 Karmic Kernel jumps to 2.6.31

So the 2.6.31 merge window has slammed shut, 2.6.31-rc1 has been tagged and released, and now the fun begins. I have just finished the job of rolling the Ubuntu kernel delta forward to the new kernel. So far my testing has been pretty positive for an -rc1 release:

$ cat /proc/version_signature
Ubuntu 2.6.31-1.13-generic
Now all I have to do is get the thing into the archive; expect this kernel shortly on a Karmic install near you. Be warned that KMS is enabled for both Intel and ATI Radeon so if you have issues with X you might want to turn it off with i915.modeset=0 or radeon.modeset=0 as appropriate.

2009/06/23 Meet your Upstream

Our desire to work with our upstream counterparts to bring new goodness to Ubuntu is often tempered by the sheer volume of change we are trying to bring to Ubuntu in a particular cycle. It is easy to focus on that work and forget that upstream is out there, and often better equipped to solve issues or provide advice. I have recently managed, indeed in places been forced by circumstance, to interact with upstream very directly on a couple of projects close to my heart, Kernel Mode Setting (KMS) and VFS union-mount.

At the recent UDS we had the opportunity to mix with a number of upstream developers, some involved in Ubuntu, others simply invited for their insight into various subsystems. For me this was particularly relevant as the nominated KMS 'expert' on the Ubuntu kernel team! I was able to spend a number of hours discussing plans with these guys and getting to know them. Out of those discussions came plans to produce bleeding edge kernels containing updated Intel, ATI Radeon, and even Nouveau drivers for those brave enough to give them a spin. This has been hugely beneficial for us, getting testing on these code bases and allowing the userspace guys to work out the kinks before we release KMS-enabled kernels, so that when we do the general experience should be much better much earlier in the cycle. Win all round.

Elsewhere I have been involved in investigations into what we are planning to use as our union mount solution for Karmic. Upstream seems to be leaning towards a VFS based approach, VFS union-mount. We have been trialling these patches, putting together test kernels to allow stress testing. Again providing vital early testing feedback to the maintainers and helping to improve the quality and increase confidence in the code. Again win.

Moving my work closer to the developers, often into their lap, is working well for me at least. Now that I am more involved with them I find them, in their turn, easier to deal with and generally more friendly. My advice: get out there and meet your upstream, find out where their lap is, and make yourself comfy.

2009/06/22 Karmic Kernel Version

At the Ubuntu Developer Summit we had a session on the likely kernel version for the Ubuntu Karmic release. The decision at the session was that we would be aiming for a v2.6.31 based kernel for Karmic. This was based in large part on release timing: we are expecting this kernel to be released around three weeks before our beta freeze, which gives us a fair amount of time to stabilise the kernel before the final release.

What goodness can we expect from the Karmic kernel? We are obviously expecting Intel support for Kernel Mode Setting (KMS) to be stable and enabled. We have some hope of seeing ATI Radeon KMS for at least some cards, indeed the first cut of this support has just been merged. We can expect some good improvements in the Intel graphics drivers as a whole. We also get new in-kernel LZMA compression, which might allow smaller kernels and initramfs files, saving a bit of space on the CDs. There is a pile of DVB updates merged already. Overall there are nearly 20k changes in already and the merge window is still open. Even staging drivers are getting some love.

2009/06/17 Series specific mainline builds

I have been having requests to enable Kernel Mode Setting (KMS) in the latest mainline kernel builds so they can be used for testing on Karmic. The problem is that the primary consumers of these mainline builds are testing on older series, mainly Jaunty, and those do not want KMS enabled. While the official kernel for Karmic should be very close to the mainline builds, being kept up to date approximately weekly, it certainly is not a crack-of-the-day kernel for the development release.

It seems likely we need to be building these kernels both against the current stable kernel, and against the kernel configurations for the current development release. Likely that means we should be naming all the builds by the release for which they are targeted. More upheaval.

2009/06/12 Hamster Wheels

Ever wondered just how much time you are spending on things at work? Constantly forgetting what you are working on, finishing the week with no idea what you contributed? Certainly that is my experience. I have been looking for some tool which would quietly remind me to record what I was doing. I think I might just have found it. Hamster.
The hamster applet sits quietly on my gnome menubar, showing what I am working on and how long I have been at it. A quick click and I can change activity. Once you have your tasks recorded you can then categorise them. For example I am interested in how much of my time is spent on Development tasks and how much on Maintenance. Needless to say it produces pretty bar charts for review. Overall an interesting app.

2009/06/11 LZMA Compression for the Linux Kernel

With the release of the 2.6.30 kernel we now have native support for Lempel-Ziv-Markov chain algorithm (LZMA) compression. This offers something like a 30% improvement in compressed size over GZIP compression (used for kernel image and initramfs compression). The downside is that decompressing LZMA data uses around twice as much CPU as compared to GZIP. Also compression is much slower than GZIP.

What does this mean for the kernel? Well for one it means it is not in the least bit obvious whether switching compression of the kernel and initramfs to this new format is going to be beneficial on average. Yes they would be smaller, but they would also take more time to extract, especially on slow hardware. The key metric is the overall time taken to load and extract this pair, and that is not easy to measure. Obviously we want to enable support for the new compression format, but actually switching to it will take more research and quantitative comparisons.
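
The size side of the trade-off is easy to feel on any box with both gzip and xz (which implements LZMA compression) installed; a rough sketch using a deliberately redundant input file:

```shell
#!/bin/sh
# Build ~900KB of highly redundant text, then compress it both ways.
yes "the quick brown fox jumps over the lazy dog" | head -n 20000 > sample.txt

gzip -9 < sample.txt > sample.txt.gz
xz   -9 < sample.txt > sample.txt.xz

# Compare the resulting sizes; the CPU-time difference is felt by
# timing the two commands above.
wc -c sample.txt sample.txt.gz sample.txt.xz
```

Wrapping the two compression commands in `time` shows the other half of the trade-off: xz produces the smaller file but burns noticeably more CPU doing it.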

2009/06/11 ATI Radeon and KMS

We are expecting to see KMS support for at least some ATI cards by the time the Karmic Koala releases. As part of that we have been playing with previews of ATI KMS support. As a kernel junkie I have pulled the updates for ATI Radeon, applied them to the latest Karmic kernel, and built some test kernels. Initial touch testing by the X-swat team seems to show that it's not in that bad a shape.

With a whole kernel release to stabilise this stuff I am pretty hopeful that we should have something stable for Karmic. Of course that does rely on the support being merged in the v2.6.31 merge window; we shall know in the next few weeks.

If you are keen to try out ATI Radeon KMS you can find kernels with this enabled in my KMS PPA.

2009/06/05 Kernel Mode Setting

There has been much hype over Kernel Mode Setting (KMS) support in the Linux Kernel. KMS is claimed to improve the boot experience, speed up boot and suspend/resume, make suspend/resume more reliable, improve crash reporting, and make you breakfast in bed. I was therefore somewhat sceptical that it would be so world changing.

During the recent Ubuntu Developer Summit I had the opportunity to be involved in planning for the kernel side of KMS and had the chance to see some machines set up with KMS enabled. I can only say I was impressed with the improvements I have seen. Ok, it does not yet seem to make my breakfast, but the rest of the claims seem more than justified. Even without updating the splash screen support to use Plymouth we were seeing a vastly less blinky boot sequence, and resume from suspend seemed almost instant, the screensaver lock being visible in the time it took me to open the lid.

There is a lot of work still to do of course. This is all very fragile only supporting Intel hardware as I write. We do hope to have at least ATI support before Karmic Koala is released (though that depends on how quickly radeon support gets merged). Also getting the right bits together is non-trivial. You need an appropriate kernel, updated X and mesa, some manual configuration etc. But we are planning to put together a PPA with the required bits and document things better. Watch out for further announcements.

2009/06/02 Kernel Crack of the Day

Some time ago the Ubuntu kernel team started generating installable mainline kernel builds. These are kernels built directly from Linus' tree with no Ubuntu modifications. These are generated from each tagged release which includes all full kernel releases (2.6.x, 2.6.x.y) as well as the release candidates (2.6.x-rcN). These kernels have proven very popular both with people wanting support for the latest hardware and as a basis for finding the source of bugs.

As an experiment we are expanding the scope of the mainline builds to include builds of the daily snapshots of the tip of Linus' tree. These are generated every 24 hours where updates are present and are published in the mainline builds archive as normal.

More information on what mainline builds are, why you might want them, and how to obtain and install them can be found on the Ubuntu wiki on the mainline kernel builds page. Enjoy.

2009/06/01 Ubuntu Developer Summit (Barcelona May 2009)

My second Ubuntu Developer Summit (UDS) is over. I think I need a week of sleep to let the sheer volume of information clear from my mind before I can think about anything else. This UDS seemed bigger than the previous one. More tracks, more talks, more information, and more overload. But that has to be seen as a positive: we are trying to tackle more and achieve more. At least this time I knew what to expect and was able to sift the deluge of information and contribute to the event. It was great to see so many dedicated Ubuntu contributors and fans all in one place, and to meet those people who I have been working with regularly.

As a member of the Ubuntu kernel team my focus was on the kernel track, particularly on the subjects I am focused on for this cycle. Though this time round the team did try and make sure we were able to escape our own sessions and get representatives out to other tracks so we could find out what the rest of the distro is planning to do and how that might affect us. They are after all our customers, consuming our kernels.

A number of topics caught my eye during the week. There was the drive to improve boot speed, with some pretty aggressive boot speed targets being bandied about. Also plans to shift to grub2 as the default boot loader, bringing internationalisation and perhaps allowing lilo to be retired. The introduction of Kernel Mode Setting to improve the boot experience. Plus cloud computing and how that will change both what is possible and our own workflows. All in all a week well spent.

2009/06/01 Introductions

I am an Ubuntu Kernel Developer employed by Canonical to work on the Linux kernel full time. I work both with upstreams to test, fix and integrate new features, and with our user community to find and fix issues they are seeing. We as a team strive to make Ubuntu as good a distro as it can be, as feature rich and reliable as possible. I enjoy my work!