Kernel Traffic

Kernel Traffic #334 For 26 Nov 2005

By Zack Brown

No, I did that on purpose.

--Linus Torvalds

Table Of Contents

Mailing List Stats For This Week

We looked at 2243 posts in 14MB. See the Full Statistics.

There were 662 different contributors. 246 posted more than once. The average length of each message was 106 lines.

The top posters of the week were:

88 posts in 386KB by
74 posts in 515KB by russell king
73 posts in 560KB by adrian bunk
68 posts in 384KB by
51 posts in 492KB by greg kh

The top subjects of the week were:

78 posts in 364KB for "new (now current development process)"
41 posts in 172KB for "3d video card recommendations"
38 posts in 178KB for "best way to handle leds"
34 posts in 160KB for "[patch]: clean up of __alloc_pages"
33 posts in 159KB for "ntp broken with 2.6.14"

These stats generated by mboxstats version 2.8

1. Status Of Sharp SL-C3000 Support

13 Oct 2005 - 22 Oct 2005 (10 posts) Archive Link: "spitz (zaurus sl-c3000) support"

Topics: I2C

People: Pavel Machek, Richard Purdie

Pavel Machek asked about the status of the Sharp SL-C3000 (Spitz) PDA, saying, "I got a spitz machine today. I thought oz3.5.3 for spitz would be 2.6-based, but found out that I'm not _that_ fortunate." He saw from the changelogs that Spitz support in some form had come into the kernel in 2.6.14-rc2, but he pointed out that the 2.6 port page seemed old. Specifically, he asked, "Is there a simple way to tell spitz and tosa apart (like without opening the machine)?" Tosa is the SL-6000.

Richard Purdie said, "oz 3.5.4 is due for release soon and will hopefully have a 2.6 option for spitz." He also confirmed that the port page was indeed outdated. He added:

I got a spitz recently which moved 2.6 for it forwards a lot. Have a look at:

This file should give you an idea of which patches to apply in what order:

With my patch series applied, we're missing usb client (usb host works) and sound support.

Mainline is missing power management and currently fails to compile without my patch series but I'm working on that.

Pavel asked if there was a "preview" of oz 3.5.4 anywhere, and Richard replied:

I'm not sure official preview images exist but here's something I built myself recently:

Rename the gpe or opie file to "hdimage1.tgz" to flash, depending on which flavoured image you'd like. You need the other files including gnu-tar. You don't need an initrd.bin file as under 2.6 we can boot directly from the microdrive.

I'm hoping these work - I'm not sure I've tried one of them... :)

Pavel asked what he could do to help the project in general, and Richard summarized:

I'm open to any help in getting the non-ipaq/tosa things merged with mainline. Have a look through the patch series and see if there's anything you fancy taking on. Most of them are simple fixes although some are nasty hacks we need to find some way of doing nicely.

The biggest thing is the battery/power management patch. I've just agreed some changes to enable it to stand a chance of making mainline. It probably needs more coding style cleanup.

There's also sound to get working although some code arrived yesterday which should help with that. The usb client code exists in's kernel26 cvs tree. We need to extract it, fix any bugs and talk to the usb developers about it.

It's a shame you don't have a C1000 as there's a nasty bit of coding someone with such a device needs to do to complete mainline 2.6 support (I2C driver for its IO Expander to enable access to its extra GPIOs).

Pavel didn't take any of that up explicitly, but they did continue to talk for a while about how to get things working for Pavel, and how to interpret various ambiguous situations.

2. Linux 2.6.14-rc4-mm1 Released; NTFS Lurching Toward Writable

16 Oct 2005 - 24 Oct 2005 (75 posts) Archive Link: "2.6.14-rc4-mm1"

Topics: FS: NTFS, I2C, Kernel Release Announcement, PCI, USB

People: Andrew Morton, Anton Altaparmakov

Andrew Morton announced Linux 2.6.14-rc4-mm1, saying:

One of the patches was also an NTFS update. Anton Altaparmakov replied:

This is a request for testers for ntfs. 2.6.14-rc4-mm1 contains a pretty much complete re-write of file write support so some more testers than just myself would be good before I submit it for inclusion in 2.6.15...

The rewrite means that the following features are now supported:

Given an existing uncompressed and unencrypted file, you can use:

What this means is that you can now run your favourite editor on an existing file, e.g. "vim /ntfs/somefile.txt" works fine and you can save your changes. Also things like running OpenOffice should work to edit existing MS Office documents but I haven't tried it yet (it should work as long as OpenOffice does not need to create temporary files in the same directory as the document).

Still not supported features are creation/deletion of files/directories and mmap(2) based writes to sparse regions of files. (The mmap(2) support has not been modified since the last release, only the file write(2) support was rewritten.)

If you do try it, please let me know how it worked for you! - Thanks a lot in advance for testing!

Alberto Patino tested it out, but the code corrupted his filesystem so badly he could no longer boot Windows 2000; and an examination of two files he edited showed parts of each file in the other. Anton replied:

You should be able to hopefully fix windows by booting with the installation CD and going into recovery console and running chkdsk on the partition. You can also try running ntfsfix from ntfsprogs and then booting into windows. This may be sufficient to allow it to do a chkdsk on the partition before it crashes.

I found a bug that caused corruption of the openoffice document as you described. The fix is below.

3. git/Cogito Tutorial; Some Discussion Of The Merge Problem

17 Oct 2005 - 24 Oct 2005 (20 posts) Archive Link: "LCA2006 Git/Cogito tutorial"

Topics: PCI, Version Control

People: Martin Langhoff, Junio C. Hamano, Linus Torvalds, Petr Baudis

On the git mailing list, Martin Langhoff said:

Sounds like I will be hosting a Git/Cogito tutorial in the upcoming LCA2006 (Dunedin, NZ, Jan 25~28). Given that I do have some significant holes in my git knowledge (no, really!?) I'll be happy if other git hackers/users are present at LCA and willing to take part in the tutorial.

Petr Baudis hinted earlier that he might be coming, as did Linus (but he was hoping for a sponsor, I'm not sure whether he'll be there or not). Speak up if you'll be there!

I'll post my slides and presentation plan beforehand to the list, to avoid spreading misinformation/bad practices. They will probably be based on a recent talk I gave @ Wellington Perl Mongers about switching to Git/Cogito:

The feedback (from both non-cogito-users and actual cogito-users) was that I made it sound too complicated, so the current plan is to focus on the tutorial part, and leave "under the hood" parts for a rainy day or for an after tutorial in-the-corridor chat.

Petr Baudis offered some suggestions, and remarked on Martin's description of the merge feature in his tutorial. Martin had said that git and Cogito provided "Very fast stupid merge" and "very smart, slow merges when stupid won't do". Petr said he didn't know of any merges that could be termed "very smart", and asked what Martin was referring to. Martin replied, "I'm very impressed with, which first does the simple git-read-tree -m, and it can then try several merger scripts to resolve the index. The "smartest" merge resolver we have follows renames, but we could have language-specific and project-specific resolvers, for instance." The discussion branched off in different directions at this point. In one of these, Junio C. Hamano replied:

I should not be saying this because I am the primary guilty party, but you should not be so impressed.

Being able to specify which merge strategy to use is a useful thing, but I do not think being able to try more than one merge strategy automatically, while it has some coolness value, is very useful in practice.

The language-specific or project-specific part should be made orthogonal to merge strategy modules, which currently is not. The primary thing Daniel's git-merge-resolve and Fredrik's git-merge-recursive do is to figure out which paths can be resolved without merging the file contents, and which paths need to be resolved with file contents merge, and they use different strategies to find which 3 variants of the contents to use for that final merge.

But at the end of the day, merging the contents is done by running 'merge' in either case. This should be made either customizable, or we ship our standard one that can be extended to first run 'file' to see the file content type of what is being merged and run content specific merge program if there is one.

Even if we did that, we are still doing 3-way merge; git-merge framework may not mesh very well when we want to use something like codeville merge which is not based on 3-way.
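The three-way merge that both strategies ultimately reduce to can be sketched in plain Python. This is a deliberately simplified toy using difflib's opcodes, not git's or merge(1)'s actual algorithm, and it only pairs up overlapping edits two at a time: edits each side made relative to the common ancestor are applied where they don't collide, and collisions produce conflict markers.

```python
from difflib import SequenceMatcher

def changed_regions(base, other):
    # Map each changed base line range to its replacement lines in `other`.
    return [(i1, i2, other[j1:j2])
            for tag, i1, i2, j1, j2
            in SequenceMatcher(None, base, other).get_opcodes()
            if tag != "equal"]

def merge3(base, ours, theirs):
    """Toy three-way merge: take non-overlapping edits from either side,
    emit conflict markers where the two sides' edits overlap."""
    regions = sorted(
        [(i1, i2, r, "ours") for i1, i2, r in changed_regions(base, ours)] +
        [(i1, i2, r, "theirs") for i1, i2, r in changed_regions(base, theirs)])
    out, i, k = [], 0, 0
    while k < len(regions):
        i1, i2, repl, side = regions[k]
        out.extend(base[i:i1])                    # unchanged lines before edit
        nxt = regions[k + 1] if k + 1 < len(regions) else None
        if nxt and (nxt[0] < i2 or nxt[0] == i1):  # the sides' edits overlap
            j1, j2, repl2, _ = nxt
            if (i1, i2, repl) == (j1, j2, repl2):  # identical change on both sides
                out.extend(repl)
            else:
                a, b = (repl, repl2) if side == "ours" else (repl2, repl)
                out += ["<<<<<<< ours"] + a + ["======="] + b + [">>>>>>> theirs"]
            i, k = max(i2, j2), k + 2
        else:                                      # clean, one-sided edit
            out.extend(repl)
            i, k = i2, k + 1
    out.extend(base[i:])                           # trailing unchanged lines
    return out
```

The real tools handle many more cases and better diffs, but the shape is the same: two diffs against a common ancestor, combined.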

Linus Torvalds said:

Oh, the git merge is about a million times better than any silly weave merge with extra BonusPoints and MagicCapitalizedNames.

Why? Because if you want to be slow and careful, you can always just create the weave after-the-fact and do a weave merge.

And because well-behaved git merges are so fast, you can actually afford to do so.

There's nothing magic in a weave merge. It's just a trick. It doesn't need the files to be in weave format beforehand, even though people seem to believe that file formats go together with it.

If somebody thinks a weave merge is wonderful and fixes everything, I have to rain on their parade. You still need to manually fix real conflicts up, and regardless, what kind of merge you do has _nothing_ to do with how you maintain your files.

If you want to do a weave merge inside git, then the way to do that is to just create the weave on demand in the (rare) case where it's needed. We have all the history. You might even just do a "lazy weave", which just starts from the common parent, and ignores the history before that.

Much cheaper that way, and arguably nicer (others will argue that you want to take history into account, to decide about undo's etc. It's a matter of taste).

The thing is, automatic merging isn't all _that_ important. The thing that made BK wonderful at merging was that it had a wonderful tool for merging for when there were real clashes, which is where the _really_ nasty cases are. The actual automatic merge wasn't necessarily anything magical.

(Same went for applying diffs, btw. What made BK nice was "renametool". Of course, it was also what made me decide that tracking renames was the wrong thing to do in the first place, but if you make a CMS that does renames, you'd better have a "renametool").

And if you have a tool that helps you visually merge the _real_ clashes, it doesn't much matter if you are only half-way decent on the automatic ones. They'll be so trivial that nobody cares.

And it doesn't matter _how_ good your automatic merges are, there always _will_ be real clashes.

[ Side note. Think about this for a while. Git did three-way merges pretty much since day one, but they only became _useful_ when we made it easy to see the merge conflicts and fix them up. That's a fundamental lesson right there: you don't have to be perfect, you have to make it easy for the user to fix up your imperfections. ]

So we should spend time on making it easy to see what the clash was, and on tools to help resolve them. Some random merge-strategy-of-the-day is just bling-bling.

The reason people like merge strategies is that it's a nice area for some mental masturbation. You can create all these fancy examples. And then can ignore the fact that most real merge problems end up being two people changing the same code in different ways, that just need manual merging.

Don't get me wrong - if somebody does a nice automated merge for git, it's a good thing, but it's probably much more important to try to integrate something like xxdiff to a git workflow. And _that_ level is probably where you want to have special language-based coloring etc to further help things out.

So keep your eyes on the ball. And "automatic merge" isn't it.

Petr replied, "The *primary* reason for new merge strategies is not reducing number of conflicts, but actually being able to force a conflict at places where it isn't crystal-clear what the resolution should be (but not conflicting where it should be clear), and especially at places where the three-way merge *silently* gets it *wrong* without throwing any conflicts. And weren't it you who wanted a conservative merge strategy which wouldn't ever do that?" Linus replied:

A three-way merge is plenty conservative for me.

Any merge will always get some case wrong, exactly the same way any merge will always get some that require fixing up, even if a human might say "that's a stupid merge". Two patches add the same thing to different places (an unsorted array of PCI ID's or whatever), and pretty much any merge ever will silently "mismerge" it as far as a human is concerned. From a _technical_ point it was correct. In practice, it wasn't.

So when I say "conservative", I don't mean "can never mis-merge", because such a thing doesn't exist. No, "conservative" means that it seldom does it in practice, and that when it happens, you can at least understand why it happened.

A simple strategy that makes people understand what happened is often much better than a more complex one. And you should never underestimate the importance of people's _expectations_. A three-way merge has a _huge_ advantage in that absolutely tons of people are used to what it does: even when they don't necessarily "understand" the merge (they never cared), if they've worked with CVS they've seen them before.

Don't get me wrong - a weave merge is pretty damn conservative too. I'm not down on weave merges, I just don't think that the difference is that huge in practice - the difference between a three-way merge and a weave merge is much smaller than the advantage of a good graphical tool and a good workflow.

That's really my argument here: automated merges aren't the end-all. I realize that a lot of SCM people think that three-way merges are old and boring and stupid, but the fact is, sometimes the old ways aren't the best ways, but sometimes they are old just because they are good enough.

4. Another git/Cogito Tutorial

17 Oct 2005 - 24 Oct 2005 (7 posts) Archive Link: "Scribblings for a cogito/git tutorial"

People: Horst von BrandPetr Baudis

On the git mailing list, Horst von Brand said, "I've also been asked around here for a cogito+git tutorial, to that end I've made up a script that simulates several developers interacting. Hacking around is simulated by patching, ed(1) scripts (merges don't turn out the same diff every time), and plain copying new files in. I've set up a GPG key with an empty passphrase (comment is "Experimental") to have signed tags, etc. in a convenient manner. The idea is to create interesting histories (for browsing) and show off the commands in a compact way. If only there was a convenient way to run a stretch of the (bash) script, look at the results, and then resume..." He gave a link to a git repository of his script and supporting files. Petr Baudis said he loved it, and asked if it could be released under the GPL and added to the Cogito official documentation. Horst said he'd be honored, and added, "I realized later that I didn't clarify the license. It's on my TODO list ;-)" He also noticed that Petr had not only included the latest version of the tutorial into the Cogito repository, but the entire history of its development as well.

5. Linux 2.6.14-rc5 Released

19 Oct 2005 - 21 Oct 2005 (6 posts) Archive Link: "Linux v2.6.14-rc5"

Topics: Kernel Release Announcement

People: Linus Torvalds

Linus Torvalds announced Linux 2.6.14-rc5, saying:

Yeah, I know I said -rc4 was going to be the last one, but as some of you may have noticed from the discussions, a day before I was planning on releasing 2.6.14 we found a couple of bugs (nasty RCU callback delays, swiotlb, etc). The fixes for those weren't all that complicated, but the problems were subtle enough that I wanted to get them fixed and have another -rc before final release.

So here it is. There's a number of other small random fixes in there too.

6. git File-History Tracking; Some Consideration Of Rename Tracking

21 Oct 2005 - 25 Oct 2005 (21 posts) Subject: "git-rev-list: add "--dense" flag"

Topics: Version Control

People: Linus Torvalds, Petr Baudis, Junio C. Hamano

On the git mailing list, Linus Torvalds said:

This is what the recent git-rev-list changes have all been gearing up for.

When we use a path filter to git-rev-list, the new "--dense" flag asks git-rev-list to compress the history so that it _only_ contains commits that change files in the path filter. It also rewrites the parent information so that tools like "gitk" will see the result as a dense history tree.

For example, on the current kernel archive:

        [torvalds@g5 linux]$ git-rev-list HEAD | wc -l
        [torvalds@g5 linux]$ git-rev-list HEAD -- kernel | wc -l
        [torvalds@g5 linux]$ git-rev-list --dense HEAD -- kernel | wc -l

which shows that while we have almost ten thousand commits, we can prune down the work to slightly more than half by only following the merges that are interesting. But further, we can then compress the history to just 356 entries that actually make changes to the kernel subdirectory.

To see this in action, try something like

gitk --dense -- gitk

to see just the history that affects gitk. Or, to show that true parallel development still remains parallel, do

gitk --dense -- daemon.c

which shows some parallel commits in the current git tree.

Signed-off-by: Linus Torvalds <>

I'm really happy with how this turned out. It's a bit expensive to run on big archives, but I _really_ think it's quite spectacular. And likely very useful indeed.

For example, say you love gitk, but only care about networking changes. Easy enough, just do

gitk --dense -- net/ include/net/

and off you go. It's not free (we do a _lot_ of tree comparisons), but dammit, it's fast enough that it's very very useful. The tree comparisons are done very efficiently.

This is _way_ more powerful than annotate. Interested in just a single file? Just do

gitk --dense -- kernel/exit.c

and it will show the 17 or so commits that change kernel/exit.c with the right history (it turns out that there is no parallel development at all in that file, so in this case it will linearize history entirely).

Damn, I'm good.

He added in reply:

one additional point.

This took 0.91 seconds to complete on the current kernel history on my machine with a pretty much fully packed archive, and the memory footprint was a total of about 12MB.

And it scales pretty well too. On the historical linux archive, which is three years of history, the same thing takes me just over 12 seconds and 52MB, and that's for the _whole_ history. And it's not just following one file: it's following that subdirectory.

So it really is pretty damn cool.

Of course, I might have a bug somewhere, but it all _seems_ to work very well indeed.

Marco Costalba was very impressed with this, and updated the qgit graphical archive browser to also support the --dense option.
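The pruning that --dense performs — keep only the commits that touch the given paths, and rewrite each kept commit's parents to its nearest kept ancestors — can be modelled with a small Python toy. This operates on a hypothetical in-memory DAG, not git's object store:

```python
def dense_history(commits, path):
    """Toy model of `git-rev-list --dense -- <path>`.
    `commits` maps id -> (list-of-parent-ids, set-of-paths-touched)."""
    kept = {c for c, (parents, paths) in commits.items() if path in paths}

    def nearest_kept(c):
        # Walk parent links, skipping pruned commits, until kept ones are hit.
        found = set()
        for p in commits[c][0]:
            if p in kept:
                found.add(p)
            else:
                found |= nearest_kept(p)
        return found

    # Rewritten parent lists for every surviving commit.
    return {c: sorted(nearest_kept(c)) for c in kept}

# A linear history a <- b <- c <- d where commit c doesn't touch "f":
history = {"a": ([], {"f"}),
           "b": (["a"], {"f", "g"}),
           "c": (["b"], {"g"}),
           "d": (["c"], {"f"})}
```

Here dense_history(history, "f") drops "c" and rewires "d" straight to "b" — the same parent rewriting that lets gitk render the compressed tree.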

Elsewhere, Petr Baudis raised an old question, "How to track renames? I believe the situation has changed in the last half a year. GIT really is a full-fledged SCM by now (at least its major part code-wise), and I think it's hopefully becoming obvious that we need to track renames." [...] "If I convince you that it is worth tracking the renames explicitly, "how" is already a minor question." Linus replied:

Never. I'm 100% convinced that tracking renames is WRONG WRONG WRONG.

You can follow renames _afterwards_.

Git tracks contents. And I think we've proven that figuring out renames after-the-fact from those contents is not only doable, but very well supported already.

I'm convinced that git handles renames better than any other SCM ever. Exactly because we figure it out when it matters.

Petr tried to discuss it, but Linus closed down the argument with, "The fact is, users have not a frigging clue when a rename happens. I told you before, I'll tell you again: if you depend on users telling you about renames, you'll get it wrong. You'll get it wrong quite often, in fact. This is not something I'm going to discuss again. Go back to all the same arguments from 6 months ago. I was right then, I'm right now."

Elsewhere, he posted some pseudo-code:

let's say that you want to follow a certain filename, what you can do is basically (fake shell syntax with "goto restart")


        git-rev-list $rev --dense --parents -- "$filename" |
                while read commit parent1 restofparents
                do
                        if [ "$restofparents" ]; then
                                .. it's a merge, do whatever it is you do
                                   with merges ..
                        else
                                shaold=$(git-ls-tree $parent1 -- "$filename")
                                shanew=$(git-ls-tree $commit -- "$filename")

                                # Did it disappear?
                                # Maybe it got renamed from something
                                if [ -z "$shaold" ]; then
                                        old=$(git-diff-tree -M -r $commit |
                                                grep " R .* $filename")
                                        echo "rename? $old"
                                        goto restart
                                fi
                                git-diff-tree -p $commit -- "$filename"
                        fi
                done

or something similar.

In other words: you'll basically have to figure out the renames on your own, and follow the renaming, but something like the above should do it.
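The after-the-fact rename detection itself (what git-diff-tree -M performs) is essentially a similarity search between files deleted on one side of a commit and files added on the other. A hypothetical minimal version in Python, with difflib's ratio standing in for git's delta-size heuristic:

```python
from difflib import SequenceMatcher

def detect_renames(deleted, added, threshold=0.5):
    """Pair each deleted path with the most similar added path, if any
    scores above `threshold`. Both arguments map path -> file contents."""
    renames = []
    for old_path, old_text in deleted.items():
        best = None
        for new_path, new_text in added.items():
            score = SequenceMatcher(None, old_text, new_text).ratio()
            if score >= threshold and (best is None or score > best[1]):
                best = (new_path, score)
        if best is not None:
            renames.append((old_path, best[0], best[1]))
    return renames
```

An identical file moved to a new path scores 1.0, which is why git reports such cases as "R100"; git also caps how many candidate pairs it will compare, since this search is quadratic in the number of changed files.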

Junio C. Hamano replied, "Isn't that only true because you are not doing more than "have these paths change" in the new rev-list that already has part of diff-tree? If rev-list can optionally be told to detect renames internally (it has necessary bits after all), it could adjust the set of paths to follow when it sees something got renamed, either by replacing the original path given from the command line with its previous name, or adding its previous name to the set of path limiters (to cover the copy case as well)." Linus said:

The problem with renames isn't the straight-line case.

The problem with renames is the merge case. And quite frankly, I don't know how to handle that sanely.

If everything was straight-line (CVS), renames would be trivial. But git-rev-list very fundamentally works on something that isn't. So let's look at the _fundamental_ problem:

And note that this fundamental issue is true _whether_ we have some explicit rename information in a commit or not.

Git-rev-list has a few additional issues that make it even more interesting:

So what do I propose? I propose that you realize that "git-rev-list" is just the _building_ block. It does one thing, and one thing only: it creates revision list.

A user basically _never_ uses git-rev-list directly: it just doesn't make sense. It's not what git-rev-list is there for. git-rev-list is meant to be used as a base for doing the real work. And that's how you can use it for renaming too.

If you think of "git-rev-list --dense -- name" as a fast way to get a set of commits that affect "name", suddenly it all makes sense. You suddenly realize that that's a nice building block for figuring out renames. It's not _all_ of it, but it's a big portion.

To go back to "gitk", let's see what the path limitation shows us. Right now, doing a

gitk --all --dense -d --

only shows that particular name, and that's by design. Maybe that's what the user wants? You have to realize that especially if you remember an _old_ name that may not even exist any more, that's REALLY what you want. Something that works more like "annotate" is useless, because something that works like annotate would just say "I don't have that file, I can't follow renames" and exit.

So the first lesson to learn is that following just pure path-names is actually meaningful ON ITS OWN! Sometimes you do NOT want to follow renames.

For example, let's say that you used to work on git 4 months ago, but gave up, since it was too rough for you. But you played around with it, and now you're back, and you have an old patch that you want to re-apply, but it doesn't talk about "", it talks about "git-fetch-script". So you do

gitk --dense -- git-fetch-script

and voila, it does the right thing, and top of the thing is "Big tool rename", which tells you _exactly_ what happened to that PATHNAME.

See? Static pathnames are important!

Now, this does show that when you _do_ care about renames, "gitk" right now doesn't help you very much (no offense to gitk - how could it know that git-rev-list would give it pathname detection one day?). Let's go back to the original example, and see what we could do to make gitk more useful..

gitk --all --dense -d --

Go and select that "Big tool rename" thing, and start looking for the rename..

You won't find it. Why? You'll see all-new files, no renames. It turns out that "gitk" follows the "parent" thing a bit _too_ slavishly, which is correct on one level, but in this case with "--dense" it turns out that what you want to do is see only what _that_ commit does, not what that commit did relative to its parent (which was the initial revision).

So while "--dense" made gitk work even without any changes, it's clear that the new capability means that gitk might want to have a new button: "show diff against 'fake parent'" and "show diff against 'real parent'".

If you want the global view, the default gitk behaviour is correct (it will show a "valid" diff - you'll see everything that changed between the points it shows). But in a rename-centric world, you want the _local_ change to that commit, and right now gitk can't show you that.

So for tracking renames, we probably want that as a helper to gitk.

Also, gitk has a "re-read references" button, but if you track renames, you probably want to do more than re-read them: you want to re-DO them with the "Big rename" as the new head (forget the old references entirely), and with the name list changed. New functionality (possibly you'd like to have a "New view" button, which actually starts a new gitk so that you can see both of them). Right now you'd have to do it by hand:

gitk --dense 215a7ad1ef790467a4cd3f0dcffbd6e5f04c38f7 -- git-fetch-script

(where 215a.. is the thing you get when you select the "Big tool rename")

You'd also probably like to have some way to limit the names shown for the git-diff-tree in gitk.

In short, the new "gitk --dense -- filename" doesn't help you nearly as much as it _could_. But when you squint a bit, I think you'll see how it's quite possible to do...

7. RCU Infrastructure Torture Test For Hotplug, Realtime, And More

22 Oct 2005 - 24 Oct 2005 (15 posts) Archive Link: "[PATCH] RCU torture-testing kernel module"

Topics: Hot-Plugging, Real-Time

People: Paul E. McKenney, Kyle Moffett, Badari Pulavarty, Andrew Morton, Ingo Oeser

Paul E. McKenney said:

This patch is a rewrite of the one submitted on October 1st, using modules (

This rewrite adds a tristate CONFIG_RCU_TORTURE_TEST, which enables an intense torture test of the RCU infrastructure. This is needed due to the continued changes to the RCU infrastructure to accommodate dynamic ticks, CPU hotplug, realtime, and so on. Most of the code is in a separate file that is compiled only if the CONFIG variable is set. Documentation on how to run the test and interpret the output is also included.

This code has been tested on i386 and ppc64, and an earlier version of the code has received extensive testing on a number of architectures as part of the PREEMPT_RT patchset.

Ingo Oeser pointed out that this feature should really depend on the DEBUG_KERNEL configuration option; but this led to the discovery that depending on DEBUG_KERNEL would erroneously cause additional debugging code to be included in the compiled binary. After some back-and-forth that included Andrew Morton, the bug was fixed and the dependency updated.

Meanwhile, Badari Pulavarty tested the feature, reporting long boot delays and a lot of CPU hogging. Kyle Moffett explained, "Uhh... It's a torture test. What exactly do _you_ expect it will do? I think the idea is to enable it as a module and load it when you want to start torture testing, and unload it when done. "TORTURE_TEST"s are not for production systems :-D." Badari said he expected some sort of /proc interface to turn the feature on and off. But as Kyle went on to say, the feature should be turned on or off by loading or unloading the module. And Paul, close by, said, "I wonder if I should somehow exclude "=y" on this one -- I haven't come up with any case where it is useful." But there was no real discussion on that.

8. Support For Parallel Port On SGI 02 Workstation

23 Oct 2005 - 25 Oct 2005 (2 posts) Archive Link: "[PATCH] Parallel port support for SGI O2"

People: Arnaud Giersch

Arnaud Giersch said:

I wrote a low-level parallel port driver for the built-in port on SGI O2 (a.k.a. IP32).

The parallel port is driven by a standard ECP chipset, with memory-mapped I/O registers. That's why it was not possible to use the parport_pc module which assumes port-mapped I/O registers.

What works:

What does not work:

All tests were done with an HP LaserJet 5MP connected to a R5000 SGI O2.

The module is named parport_ip32. The patch is not included in this mail because it is not very small (2383 lines, 73 Kb). It is however available from:

The patch is against the latest Linux/MIPS kernel (2.6.14-rc2 as of today). If you prefer that I post it on a mailing list, please just tell me where, and how (inlined, or gzip'ed attached file).

Further information is available at:

9. Improved git-mv May Replace git-rename

23 Oct 2005 (1 post) Archive Link: "[PATCH] This commit implements git-mv"

Topics: Version Control

People: Josef Weidendorfer

On the git mailing list, Josef Weidendorfer implemented the git-mv command, to allow file renaming. He said:

It supersedes git-rename by adding functionality to move multiple files, directories or symlinks into another directory. It also provides corresponding documentation.

The implementation renames multiple files, using the arguments from the command line to produce an array of sources and destinations. In a first pass, all requested renames are checked for errors, and overwriting of existing files is only allowed with '-f'. The actual renaming is done in a second pass. This ensures that any error condition is checked before anything is changed.

With his patch, Josef included a changelog entry:

The recent request on the list for "mv" in GIT reminded me about an addition to git-rename I made a week ago. I renamed it to "git-mv" and added some documentation.

If this works, we can remove git-rename sometime in the future.

I should complement this command with tests. Also, a nice addition would be to support an interactive mode like 'mv', by asking if files should be overwritten.

By the way, it also checks for a request to move a directory into itself, which of course is an error.

Option "-k" is good for this: E.g. a "git-mv -k * dir" moves all revision controlled files and directories (but not "dir"!) into "dir". "-k" makes sure that the errors (trying to move "dir" into itself, or moving files not under revision control) will not terminate the command but will be silently ignored and skipped. "-k" was taken from "make": continue even on an error.

10. ethtool Development Migrates To git

25 Oct 2005 (1 post) Archive Link: "ethtool git repo created"

People: Jeff Garzik

Jeff Garzik said, "After sitting around in a non-public subversion repo for far too long, I've stuffed the latest ethtool source code into a git repository. As soon as it finishes mirroring, it will be available at rsync://"

11. Making gitk Play Nice With --dense

25 Oct 2005 - 27 Oct 2005 (5 posts) Archive Link: "Make "gitk" work better with dense revlists"

People: Linus TorvaldsPaul MackerrasJunio C. Hamano

On the git mailing list, Linus Torvalds said:

To generate the diff for a commit, gitk used to do

git-diff-tree -p -C $p $id

(and same thing to generate filenames, except using just "-r" there) which does actually generate the diff from the parent to the $id, exactly like it meant to do.
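For readers unfamiliar with the invocation, here is a minimal, self-contained illustration using the modern spelling "git diff-tree" (the repository, commits, and file names are invented for the demo):

```shell
# Minimal demo of the invocation gitk used: the diff from parent $p to $id.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email kt@example.com && git config user.name kt
echo one > file && git add file && git commit -qm first
echo two > file && git commit -qam second
p=$(git rev-parse HEAD^)
id=$(git rev-parse HEAD)
# The gitk invocation quoted above: patch (-p) with rename detection (-C).
git diff-tree -p -C "$p" "$id" | grep -q '^+two' && echo "diff shows the change"
```

With a real parent this is exactly the commit's own change; the problem Linus describes arises only when --dense has rewritten $p to a more distant ancestor.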

However, that really sucks with --dense, where the "parent" information has all been rewritten to point to the previous commit. The diff actually works exactly right, but now it's the diff of the _whole_ sequence of commits all the way to the previous commit that last changed the file(s) that we are looking at.

And that's really not what we want 99.9% of the time, even if it may be perfectly sensible. Not only will the diff not actually match the commit message, but it will usually be _huge_, and all of it will be totally uninteresting to us, since we were only interested in a particular set of files.

It also doesn't match what we do when we write the patch to a file.

So this makes gitk just show the diff of _that_ commit.

We might even want to have some way to limit the diff to only the filenames we're interested in, but it's often nice to see what else changed at the same time, so that's secondary.

The merge diff handling is left alone, although I think that should also be changed to only look at what that _particular_ merge did, not what it did when compared to the faked-out parents.

Paul Mackerras, the gitk maintainer, accepted the patch and pushed out an updated release, which Junio C. Hamano accepted into the official git repository. Paul also remarked, "I'm hoping to get back to gitk hacking RSN - I've been going flat out on the ppc32/ppc64 merge. Thanks for doing the --dense thing; I was thinking about doing something like that inside gitk but doing it in git-rev-list is better. It does mean that I now want to be able to get gitk to contract the view to just a given set of files or directories and then expand back to the whole tree view, which means running git-rev-list multiple times, which gitk can't do at the moment..." Linus replied, "Yes. And please also give the option to contract the diffs to a set of files (I think that should be independently controlled, although perhaps with some way to set them both at the same time)."

12. Swap-Based Page Migration

25 Oct 2005 - 26 Oct 2005 (10 posts) Archive Link: "[PATCH 0/5] Swap Migration V4: Overview"

Topics: Hot-Plugging

People: Christoph Lameter

Christoph Lameter said, "This is a patchset intended to introduce page migration into the kernel through a simple implementation of swap based page migration. The aim is to be minimally intrusive in order to have some hopes for inclusion into 2.6.15. A separate direct page migration patch is being developed that applies on top of this patch. The direct migration patch is being discussed on <>. Much of the code is based on code that the memory hotplug project and Ray Bryant have been working on for a long time. See" He posted his patches, but there was no real discussion.

13. Linux 2.6.14 Released

27 Oct 2005 - 3 Nov 2005 (16 posts) Archive Link: "Linux 2.6.14"

Topics: Kernel Release Announcement, PCI, POSIX

People: Linus TorvaldsOleg NesterovRoland McGrath

Linus Torvalds announced Linux version 2.6.14, saying:

Ok, it's finally there.

2.6.14 was delayed twice due to some last-minute bug-reports, some of which ended up being false alarms (hey, I should be happy, but it was a bit frustrating)

But hey, the delays - even when perhaps unnecessary - got us to look at the code and fix some other bugs instead. So it's all good.

So special thanks go to Oleg Nesterov and Roland McGrath for doing some code inspection and fixing and just making the otherwise frustrating wait for bug resolution more productive ;^p.

Let's try the 2-week merge window thing again, I think it worked pretty well despite the delays, and hopefully it will work even better this time around.

The actual changes from 2.6.14-rc5 are a number of mostly one-liners, with the ShortLog appended (full log from 2.6.13 on the normal sites together with the release itself). The only slightly bigger ones (ie more than a handful of lines) are a kernel parameter doc update, the PIIX4 PCI quirk printouts, and the cleanups/fixes for the posix cpu timers.

(In fact, according to diffstat, about half the diff is that one documentation update, and most of that is whitespace cleanups)

14. Status Of Native POSIX Thread Library Support In 2.4 And 2.6

28 Oct 2005 (3 posts) Archive Link: "NPTL support for 2.4.31?"

Topics: POSIX

People: Harald DunkelBert Hubert

Harald Dunkel asked about the status of Native POSIX Thread Library (NPTL) support in the 2.4 tree, adding, "I know that there is a backport in RH's 2.4.21, but obviously it didn't make it into the native 2.4 kernel." Bert Hubert replied, "I doubt anybody else would do the work, you'll find that any NPTL patch is bound to be huge and intrusive. Try running 2.6 if possible. A 2.4 kernel with NPTL patched in is not going to confer any stability benefits over 2.6."

15. Some Discussion Of 2.6 Maintenance Patterns

29 Oct 2005 - 31 Oct 2005 (32 posts) Archive Link: "[git patches] 2.6.x libata updates"

Topics: Version Control

People: Linus TorvaldsJeff Garzik

In the course of discussion, Linus Torvalds remarked:

one of the downsides of the new "merge lots of stuff early in the development series" approach is that the first few daily snapshots end up being _huge_.

So the -git1 and -git2 patches are/will be very big indeed.

For example, patch-2.6.14-git1 literally ended up being a megabyte compressed. Right now my diff to 2.6.14 (after just two days) is 1.6MB compressed.

Admittedly, some of it is due to things like the MIPS merge, but the point I'm trying to make is that it makes the daily snapshot diffs a lot less useful to people who try to figure out where something broke.

Now, I've gotten several positive comments on how easy "git bisect" is to use, and I've used it myself, but this is the first time that patch users _really_ become very much second-class citizens, and you can't necessarily always do useful things with just the tar-trees and patches. That's sad, and possibly a really big downside.

Don't get me wrong - I personally think that the new merge policy is a clear improvement, but it does have this downside.

Jeff Garzik replied:

Back when I did the BK snapshots, I would occasionally do a middle-of-the-day snapshot if there were a ton of incoming merges in a 24-hour span.

If this "huge -git1" becomes a real problem, we could always

None of these are terribly painful, but none are terribly appealing either.

16. Splitting The swsusp Code Into Two Subsystems

29 Oct 2005 - 30 Oct 2005 (17 posts) Archive Link: "[RFC][PATCH 0/6] swsusp: rework swap handling"

Topics: Software Suspend

People: Rafael J. WysockiPavel Machek

Rafael J. Wysocki said:

The following series of patches divides swsusp into two functionally independent subsystems:

On suspend the snapshot-handling part creates the system snapshot and makes the data stored in the snapshot available to the swap-handling part via an interface function allowing it to transfer the data as a series of consecutive data pages, in a specific order. The swap-handling part writes the data pages to a swap partition in such a way that they can be read back in exactly the same order in which they were saved.

On resume the snapshot-handling part is invoked by the swap-handling part to create the pagedir. Then, the swap-handling part is allowed to send it, with the help of an interface function, data pages that are used to populate the snapshot data structure. It is assumed that the data pages will be sent in the same order in which they were received by the swap-handling part on suspend. Finally, the system state (from before suspend) is restored by the snapshot-handling part from the data structure it handles.

From the point of view of the swap-handling part, the contents of the data pages provided by the snapshot-handling part do not matter at all. It handles each data page in the same way without analyzing its contents, and the snapshot-handling part is responsible for recognizing the metadata and using them as appropriate. Consequently, in principle the swap-handling part can be replaced with a user-space process, and the interface functions used in transferring data between the two parts of swsusp can be replaced with a relatively simple kernel-user interface in the future.

The approach used in this series of patches has some additional benefits:

  1. the size of the pagedir is reduced by 1/4 which causes some more memory to be available on resume,
  2. the amount of metadata written to swap is reduced by 3/4,
  3. the artificial limitation on the pagedir size, imposed by the size of the swsusp_info structure, is lifted,
  4. the size of swsusp_info structure is reduced so it can be merged with the swsusp_header structure in the future,
  5. the swap-handling part does not use any global variables related to the snapshot data structure,
  6. the __nosavedata variables are almost eliminated (on x86-64 the last of them is the in_suspend variable).

I have divided the changes into some more or less logical steps for clarity. Although the code has been designed as proof-of-concept, it is functional and has been tested on x86-64, except for the cryptographic functionality and error paths.

For your convenience the patches are available from:

He posted each patch, and Pavel Machek went over them all, and was generally very pleased. He had a few minor objections, to which Rafael was very responsive, and the thread ended.

17. git 0.99.9 Released; Documenting More Of git's Power

29 Oct 2005 - 2 Nov 2005 (28 posts) Archive Link: "GIT 0.99.9"

People: Junio C. HamanoLinus Torvalds

On the git mailing list, Junio C. Hamano announced git 0.99.9, saying:

As I said in the 0.99.8 announcement, git already does everything I want it to do, and from here on I'd like to see us concentrate on fixes (both correctness and performance) until we hit 1.0 which should happen shortly.

Many thanks to everybody who contributed the comments, extra set of eyeballs, and code.

Linus Torvalds replied, "Congrats. I personally think this is very much worthy of a 1.0 after just giving it some time to shake out any possible last-minute bugs." In a later post, he added:

one thing I'd like to see (maybe it already exists and I just have overlooked it) is some kind of simple readme or something about the different ways to limit the output of the various git commands.

I've several times been surprised to see people not realize that "git-whatchanged" takes a file list to limit the files it is interested in. I also suspect people don't realize that you can limit it by time and version and file list, all at the same time.


git-whatchanged -p --pretty=short --since="2 weeks ago" v0.99.8..v0.99.9 Makefile

is a valid query: it basically asks for any change to the Makefile in between versions v0.99.8..v0.99.9, _and_ within the last two weeks, and asks to show it as a patch, with the shortened commit message.

Is it useful? The above exact line almost certainly isn't, but variations on the above definitely are. And I suspect a lot of people never even realized you could do something like that.

(The danger with date-based things is that something may be 4 months old, but it only got _merged_ yesterday, so it may be new to _you_. And the --since="2 weeks ago" will not show it, which can be surprising to people who expect things that are new to _them_ to be shown).

The above limiters now work with "git log" and "gitk" too (they've worked for a long time with "git-whatchanged", but only with the new git-rev-list functionality does the name-limiting work for the other commands).

It would be good to make this more well-known, because a lot of people probably end up using git not as developers, but just to follow what is going on. And then the different limiters are some of the most important parts (the date-one is likely the least important one, but limiting by version and name is _very_ important).
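The combined limiters Linus describes can be tried out on a throwaway repository. A sketch with invented tags and file names (modern git also accepts the pathspec after a "--" separator, and the same limiters work with "git log"):

```shell
# Demo: version range AND file limiter applied at the same time.
set -e
dir=$(mktemp -d) && cd "$dir"
git init -q
git config user.email kt@example.com && git config user.name kt
echo a > Makefile && echo b > notes && git add . && git commit -qm base
git tag v0.1
echo c >> Makefile && git commit -qam "touch Makefile"
echo d >> notes    && git commit -qam "touch notes"
git tag v0.2
# Two commits lie between v0.1 and v0.2, but only the one that
# changed Makefile survives the file limiter:
git log --oneline v0.1..v0.2 -- Makefile
```

Adding --since="2 weeks ago" on top would further restrict the output by date, with the caveat about old-but-newly-merged commits that Linus notes above.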

Junio replied:

I've somewhat updated the git-rev-list documentation and tried to categorize the options into commit selectors and presentation modifiers. The documentation for the commands you mentioned in your message describes only the frequently used options and refers the user to the rev-list documentation. I am not sure this would be enough.

One good thing to have would be to add a section to Tutorial. Currently we cover building a small project from scratch and have the readers graduate when they learn basic commit swapping, but we do not talk much about archaeology tools.

Linus said:

I don't think people really follow the links or think very abstractly at all in the first place.

So I was thinking more of some explicit examples. I actually think every command should have an example in the man-page, and hey, here's a patch to start things off.

Of course, I'm not exactly "Mr Documentation", and I don't know that this is the prettiest way to do this, but I checked that the resulting html and man-page seems at least reasonable.

And hey, if the examples look like each other, that's just because I'm also not "Mr Imagination".

18. Removing Obsolete OSS Sound Drivers

30 Oct 2005 - 1 Nov 2005 (19 posts) Archive Link: "[2.6 patch] schedule obsolete OSS drivers for removal"

Topics: PCI, Sound: ALSA, Sound: OSS

People: Adrian BunkAlistair John StrachanKyle McMartinLee RevellAndi KleenJeff GarzikAndrew Morton

Adrian Bunk said, "This patch schedules obsolete OSS drivers (with ALSA drivers that support the same hardware) for removal." He added, "Scheduling the via82cxxx driver for removal was ACK'ed by Jeff Garzik." Andrew Morton also signed off on the patch. And as Alistair John Strachan added a couple of posts later, "Adrian plans simply to remove drivers which have solid, known working replacements for all PCI ids in an equivalent ALSA driver."

Elsewhere, Kyle McMartin remarked, "I didn't see it here, but SOUND_AD1889 can definitely be removed as well. The driver never worked properly to begin with. This was ACK'd by the author last time this thread reared its head." After a small mixup, Adrian said, "The ad1889 ALSA driver was not present before 2.6.14 and therefore not on my original list. Below is an updated patch that also schedules SOUND_AD1889 for removal."

The above-mentioned mixup involved Adrian thinking Kyle was talking about the ad1816 driver, to which his objection was that "ALSA bugs #1301 and #1302 are still open." As it turned out, the inadvertent consideration of ad1816 was useful. Lee Revell pointed out, "these bug reports can be disregarded. The submitter never responded to requests to retest with the latest ALSA version. #1302 is almost certainly a bug in kphone anyway." And Adrian said, "If these bugs are marked as resolved/closed by the time I send the next batch of OSS driver removals a few months from now, SOUND_AD1816 will be part of that batch."

Elsewhere, Andi Kleen also replied to Adrian's initial post, requesting that "the ICH driver be kept. It works just fine on near all my systems and has a much smaller binary size than the ALSA variant. Moving to ALSA would bloat the kernels considerably." Lee pointed out, "The emu10k1 ALSA driver is considerably smaller than the OSS driver and has more features, like most ALSA drivers. If the ICH driver is really smaller I suspect it's missing some functionality." Adrian also said that Andi should submit a proper bug and provide the bug number, if he wanted the issue to be considered. He added, "If you consider ALSA too bloated you should help on solving this issue instead of insisting on keeping OSS." At this point several other folks did indeed begin a technical consideration of ways to reduce ALSA bloat.

19. Emails Delayed From Reaching Kernel Maintainers

30 Oct 2005 (4 posts) Archive Link: "[git patches] 2.6.x libata update"

Topics: Bug Tracking

People: Jeff GarzikLinus TorvaldsAndrew Morton

Jeff Garzik posted a patch, and Andrew Morton said, "Linus may not receive this. For me at least, large amounts of incoming and outgoing OSDL email have been disappearing into the ether for the past 12 hours or so." And Linus Torvalds added:

Apparently the osdl mail server (and bugzilla etc) had a disk failure overnight.

My (and yours, Andrew) mail should be starting to trickle in now. Just a little bit delayed ;)

20. NTFS Updates; Instructions On Testing

31 Oct 2005 - 1 Nov 2005 (28 posts) Archive Link: "[2.6-GIT] NTFS: Release 2.1.25."

Topics: FS: NTFS

People: Anton Altaparmakov

Anton Altaparmakov posted a bunch of NTFS patches, saying, "This is the next NTFS update containing more extended write support (and in fact pretty much completely rewritten file write support)! This has been in -mm for a while and at least one person (other than me that is) tested it, found a bug which I fixed, and since then no one has reported any bugs with the code. So I think it is definitely ready for a larger audience, i.e. the mainline kernel." Yura Pakhuchiy reported a bug, and it turned out he hadn't known where to get additional patches needed to test NTFS. In a subsequent release, Anton said:

If you go to a mirror and look at:

And then at the kernel you are interested in, e.g. let's take the latest -mm, then you will find the full -mm patch and the broken out bits at the following URLs respectively:

full mm: 2.6.14-rc5-mm1/

broken out: 2.6.14-rc5-mm1/broken-out/

And thusly, the ntfs patch from the developmental ntfs git repository would be: 2.6.14-rc5-mm1/broken-out/git-ntfs.patch

It is also worth looking in the broken out directory for other patches with ntfs in the name, as Andrew may have other ntfs patches. In the above -mm, there is for example: 2.6.14-rc5-mm1/broken-out/ntfs-printk-warning-fixes.patch
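Applying a broken-out patch such as git-ntfs.patch follows the usual "patch -p1" convention: run it from the top of the source tree, stripping one leading path component. A self-contained sketch using a toy tree and a locally generated diff (the -mm URLs elided above are not reconstructed; the path names are examples):

```shell
# Toy demo of applying a broken-out-style patch with patch -p1.
set -e
dir=$(mktemp -d) && cd "$dir"
# Stand-in trees for the old and patched source.
mkdir -p linux.orig/fs/ntfs linux/fs/ntfs
echo 'old code' > linux.orig/fs/ntfs/file.c
echo 'new code' > linux/fs/ntfs/file.c
# diff exits nonzero when files differ, hence the || true under set -e.
diff -urN linux.orig linux > git-ntfs.patch || true
# Apply from the tree root, stripping the first path component
# ("linux.orig/" or "linux/") from the paths in the patch headers.
cd linux.orig
patch -p1 < ../git-ntfs.patch
grep -q 'new code' fs/ntfs/file.c && echo "patch applied"
```

A real broken-out -mm patch is applied the same way against the matching -rc tree, in the order Andrew's series file specifies.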

21. Fresh Attempt To Remove DevFS

1 Nov 2005 (1 post) Archive Link: "[GIT PATCH] Remove devfs from 2.6.14"

Topics: FS: devfs

People: Greg KH

Greg KH said:

Here are the same "delete devfs" patches that I submitted for 2.6.12 and 2.6.13. It rips out all of devfs from the kernel and ends up saving a lot of space. Since 2.6.13 came out, I have seen no complaints about the fact that devfs was not able to be enabled anymore, and in fact, a lot of different subsystems have already been deleting devfs support for a while now, with apparently no complaints (due to the lack of users.)

Please pull from:
or if hasn't synced up yet:

I've posted all of these patches before, but if people really want to look at them, they can be found at:

Also, if people _really_ are in love with the idea of an in-kernel devfs, I have posted a patch that does this in about 300 lines of code, called ndevfs. It is available in the archives if anyone wants to use that instead (it is quite easy to maintain that patch outside of the kernel tree, due to it only needing 3 hooks into the main kernel tree.)

There was no reply.

22. '(H)gct' git And Mercurial GUI-Enabled Commit Tool Update

1 Nov 2005 (1 post) Archive Link: "[ANNOUNCE] (H)gct 0.3"

People: Fredrik Kuivinen

On the git mailing list, Fredrik Kuivinen said:

Version 0.3 of (H)gct, a GUI enabled commit tool for Git and Mercurial, has been released and can be downloaded from

Thanks to Vincent Danjean there are Debian packages available at

Screen shots and a Git repository with a gitweb interface are available at the project homepage,

The major changes compared to v0.2 are:

23. git 0.99.9b In Use On Machines

2 Nov 2005 (1 post) Archive Link: "git 0.99.9b installed on"

People: H. Peter Anvin

On the git mailing list, H. Peter Anvin said, "git 0.99.9b is now installed on all machines."

Sharon And Joy

Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.