Kernel Traffic

Kernel Traffic #308 For 28 Apr 2005

By Zack Brown

In this issue, several discussions of Linus's git filesystem and the git-pasky version control wrappers by Petr Baudis are covered. Readers should be aware that git-pasky has since been renamed Cogito, and the command syntax has completely changed. No one should rely on the examples given here to clarify how to use those tools. Cogito comes with a somewhat useful README file that lists the correct syntax for many commands.

Mailing List Stats For This Week

We looked at 1424 posts in 8MB. See the Full Statistics.

There were 593 different contributors. 204 posted more than once. The average length of each message was 92 lines.

The top posters of the week were:

85 posts in 624KB by adrian bunk
40 posts in 322KB by jesper juhl
27 posts in 149KB by andreas steinmetz
26 posts in 141KB by tejun heo
26 posts in 263KB by dmitry torokhov

The top subjects of the week were:

48 posts in 212KB for "[patch encrypted swsusp 1/3] core functionality"
41 posts in 273KB for "fortuna"
33 posts in 148KB for "fusyn and rt"
30 posts in 123KB for "intercepting syscalls"
23 posts in 87KB for "exploit in 2.6 kernels"

These stats generated by mboxstats version 2.8

1. New memmap Kernel Command-Line Option

14 Apr 2005 - 18 Apr 2005 (3 posts) Archive Link: "[RFC][x86_64] Introducing the memmap= kernel command line option"

People: Hariprasad Nellitheertha, Andi Kleen

Hariprasad Nellitheertha said:

In order to port kdump to x86_64, we need to have the memmap= kernel command line option available. This is so that the dump-capture kernel can be booted with a custom memory map.

The attached patch adds the memmap= functionality to the x86_64 kernel. It is against 2.6.12-rc2-mm3. I have done some amount of testing and it is working fine.

Andi Kleen replied:

You should add a __setup somewhere, otherwise the kernel will complain about unknown arguments or generate a memmap variable in init's environment.

Comma parsing would be nice.

Otherwise it looks ok.

Hariprasad said he'd add a __setup, and do comma parsing.
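The option under discussion is consumed at boot time by the dump-capture kernel. As a purely illustrative sketch (the kernel path, root device, and memory ranges below are invented, and the syntax is assumed to mirror the existing i386 memmap= option), loading a dump-capture kernel with a custom memory map might look like:

```shell
# Hypothetical kexec invocation for a dump-capture kernel (all values
# invented for illustration; syntax assumed to mirror the i386 option).
# memmap=exactmap discards the firmware-provided map, and each
# memmap=<size>@<address> entry then declares one usable RAM region.
kexec -l /boot/vmlinuz-kdump \
    --append="root=/dev/sda1 memmap=exactmap memmap=640K@0 memmap=32M@16M"
```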

2. Status Of Kernel Development Given Switch From BitKeeper

15 Apr 2005 (4 posts) Archive Link: "where is current kernel ?"

Topics: Version Control

People: Maciej Soltysiak, Petr Baudis, Tom Duffy

Maciej Soltysiak asked, "Is there currently a kernel tree that Linus is working on? I mean, now that we have 2.6.12-rc2 not being developed with BK, is that code getting fixes and other patches as we speak, or will development continue in a while someplace else?" Petr Baudis replied, "Linus stopped merging stuff to his kernel for a few days in order to develop his (at least temporary) alternative to BK, called "git". See the mailing list archives for details." Tom Duffy remarked, "I have received many GIT commits recently to the old bk-commits mailing list." And Petr replied, "And they are clearly marked as "GIT testing". It isn't anything official and they can go away randomly - they are mainly for testing git and they are not guaranteed to stay around. :-)"

3. Serial ATA Status Reports Updated; SATA Still Developed On BitKeeper

15 Apr 2005 - 16 Apr 2005 (12 posts) Archive Link: "[SATA] status reports updated"

Topics: Disks: IDE, Serial ATA, Version Control

People: Jeff Garzik

Jeff Garzik said:

My Linux SATA software/hardware status reports have just been updated. To see where libata (SATA) support stands for a particular piece of hardware, or a particular feature, go to

I've still got several patches from EMC (Brett) and IBM (Albert) to go through, as well as a few scattered ones from random authors.

I'm still working in BitKeeper for the time being.

4. New 'Kernel Mentors' Project

15 Apr 2005 (1 post) Archive Link: "[ANNOUNCE] Kernel Mentors Project"

People: Matt Mackall

Matt Mackall said:

Perhaps the hardest part of becoming a kernel developer is submitting your first major feature. There are technical and social hurdles to overcome and the process can be daunting to someone who is new to the community.

Thus, I'm proposing an informal project to get experienced developers to mentor new developers and coach them on the best ways to get their code ready for submission.

Developers will submit a description of their project and its current state, as well as a pointer to the code, to the kernel-mentors mailing list. Mentors will pick for themselves which projects and developers they'd like to work with and offer their assistance.

The mentor will help the developer get their code accepted by:

For their part, new developers will be expected to use the feedback they're given productively and eventually get their code merged!

The project list is at with a web interface at:

If you're interested in helping out, come join us.

5. Some Kernel Developers Using git

18 Apr 2005 - 19 Apr 2005 (18 posts) Archive Link: "[GIT PATCH] I2C and W1 bugfixes for 2.6.12-rc2"

Topics: I2C

People: Greg KH, Linus Torvalds, Petr Baudis

Greg KH tried using git for real kernel work, and asked Linus Torvalds to pull from Greg's tree, at

Afterwards, Greg said:

Nice, it looks like the merge of this tree, and my usb tree worked just fine.

So, what does this now mean? Is your git tree now going to be the "real" kernel tree that you will be working off of? Should we crank up the nightly snapshots and emails to the -commits list?

Can I rely on the fact that these patches are now in your tree and I can forget about them? :)

Just wondering how comfortable you feel with your git tree so far.

Linus confirmed that everything seemed to be working, but he added, "I'm still working out some performance issues with merges (the actual "merge" operation itself is very fast, but I've been trying to make the subsequent "update the working directory tree to the right thing" be much better)." As for his total comfort level, he said:

Hold off for one more day. I'm very comfortable with how well git has worked out so far, and yes, mentally I consider this "the tree", but the fact is, git isn't exactly easy on "normal users".

I think my merge stuff and Pasky's scripts are getting there, but I want to make sure that we have a version of Pasky's scripts that use the new "read-tree -m" optimizations to make tracking a tree faster, and I'd like to have them _tested_ a bit first.

In other words, I want it to be at the point where people can do

git pull <repo-address>

and it will "just work", at least for people who don't have any local changes in their tree. None of this "check out all the files again" crap.

But how about a plan that we go "live" tomorrow - assuming nobody finds any problems before that, of course.

Greg offered a couple more trees for Linus to pull from if he wanted practice. Linus replied:

Done, pushed out. Can you verify that the end result looks sane to you? I just checked that the diffstat looks similar (mine claims just 108 lines changed in aoecmd.c - possibly due to different diff formats).

And yes, my new merge thing seems to have kept the index-cache much better up-to-date, allowing an optimized checkout-cache -f -a to work and only get the new files.

Pasky? Can you check my latest git stuff, notably read-tree.c and the changes to git-pull-script?

Greg confirmed that the end result of Linus's pull looked sane, though he did notice slight anomalies in the changelog entries (a date was in the future). Petr Baudis (a.k.a. Pasky) also said to Linus:

I've made git merge use read-tree -m, HTH.

I will probably not buy git-export, though. (That is, it is merged, but I won't make git frontend for it.) My "git export" already does something different, but more importantly, "git patch" of mine already does effectively the same thing as you do, just for a single patch; so I will probably just extend it to do it for an (a,b] range of patches.

Linus replied, "That's fine. It was a quick hack, just to show that if somebody wants to, the data is trivially exportable."

6. New 'Mercurial' Version Control System Proposed For Linux

20 Apr 2005 - 22 Apr 2005 (2 posts) Archive Link: "Mercurial v0.1 - a minimal scalable distributed SCM"

Topics: Big O Notation, Compression, Version Control

People: Matt Mackall, Bill Davidsen

Matt Mackall said:

April 19, 2005

I've spent the past couple weeks working on a completely new proof-of-concept SCM. The goals:

It's still very early on, but I think I'm doing surprisingly well. Now that I've got something that actually does some interesting things if you poke at it right, I figure it's time to throw it out there. Here's what I've got so far:

When I say "pull/sync" works, that means I can do:

hg merge other-repo

and it will pull all "changesets/deltas" that are in other-repo that I don't have, merge them into the changeset history graph, and do the same for all files changed by those deltas. It will call out to a user-specified merge tool like tkdiff for a proper 3-way merge with the nearest common ancestor in the case of conflicts.

Right now, "cloning/branching" is simply a matter of "cp -al" or "rsync" (mercurial knows how to break hardlinks if needed).
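The "cp -al" trick works because hardlinked clones share inodes: the copy is nearly free in time and space, but a tool must then break the links before rewriting a file, or it would modify both trees at once. A minimal sketch, independent of Mercurial itself (directory and file names invented for illustration):

```shell
# Sketch of hardlink cloning, independent of Mercurial itself
# (directory and file names are invented for illustration).
tmp=$(mktemp -d) && cd "$tmp"
mkdir repo && echo v1 > repo/file
cp -al repo repo-branch        # instant "clone": same inodes, no data copied
[ repo/file -ef repo-branch/file ] && echo "same inode"
# Breaking a hardlink means writing a new file and renaming it into place,
# instead of modifying in place (which would change both trees at once):
cp repo-branch/file repo-branch/file.new
echo v2 >> repo-branch/file.new
mv repo-branch/file.new repo-branch/file
[ repo/file -ef repo-branch/file ] || echo "links broken"
cat repo/file                  # prints v1: the original tree is untouched
```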

Some benchmarks from my laptop:

Much thought has gone into what the best asymptotic performance can be for the various things an SCM has to do and the core algorithms and data structures here should scale relatively painlessly to arbitrary numbers of changesets, files, and file revisions.

What remains to be done:

Sample usage:

 export HGMERGE=tkmerge   # set a 3-way merge tool
 mkdir foo
 hg create                # create a repository in .hg/
 echo foo > Makefile
 hg add Makefile          # add a file to the current changeset
 hg commit                # commit files currently marked for add or delete
 hg history               # show all changesets
 hg index Makefile        # show change
 touch Makefile
 hg stat                  # find changed files
 cd ..; cp -al foo branch-foo # make a branch
 cd foo; hg merge ../branch-foo # sync changesets from the branch

Bill Davidsen thought the project looked interesting, and said he'd give it a try.

7. Mercurial Version 0.2 Released

21 Apr 2005 (1 post) Archive Link: "Mercurial distributed SCM v0.2"

Topics: Version Control

People: Matt Mackall

Matt Mackall said:

This is my continuing attempt to make an SCM suitable for kernel hacking. It supports a distribution model similar to BK and Monotone but is orders of magnitude simpler than both (about 1k lines of code).

New in this version:

Some example usage:

Setting up a Mercurial project:

 $ cd linux/
 $ hg init         # creates .hg
 $ hg status       # show differences between repo and working dir
 $ hg addremove    # add all unknown files and remove all missing files
 $ hg commit       # commit all changes, edit changelog entry

Mercurial commands:

 $ hg history      # show changesets
 $ hg log Makefile # show commits per file
 $ hg checkout     # check out the tip revision
 $ hg checkout <hash> # check out a specified changeset
 $ hg add foo      # add a new file for the next commit
 $ hg remove bar   # mark a file as removed

Branching and merging:

 $ cd ..
 $ cp -al linux linux-work   # create a new hardlink branch
 $ cd linux-work
 $ <make changes>
 $ hg commit
 $ cd ../linux
 $ hg merge ../linux-work    # pull changesets from linux-work

Network support (highly experimental):

 # export your .hg directory as a directory on your webserver
 foo$ ln -s .hg ~/public_html/hg-linux

 # merge changes from a remote machine
 bar$ hg merge http://foo/~user/hg-linux

 This is just a proof of concept of grabbing byte ranges, and is not
 expected to perform well yet.

8. Some Discussion Of Linus' Preferred git Usage

21 Apr 2005 (14 posts) Archive Link: "Re: ia64 git pull"

People: Tony Luck, Petr Baudis, Linus Torvalds, Horst von Brand

Tony Luck said to Linus Torvalds:

I can't quite see how to manage multiple "heads" in git. I notice that in your tree, .git/HEAD is a symlink to heads/master ... perhaps that is a clue.

I'd like to have at least two, or perhaps even three "HEADS" active in my tree at all times. One would correspond to my old "release" tree ... pointing to changes that I think are ready to go into the Linus tree. A second would be the "testing" tree ... ready for Andrew to pull into "-mm", but not ready for the base. The third (which might only exist in my local tree) would be for changes that I'm playing around with.

I can see how git can easily do this ... but I don't know how to set up my public tree so that you and Andrew can pull from the right HEAD.

Petr Baudis (a.k.a. Pasky) explained that to have multiple heads, the user would have to:

go into your "master" working directory, and do:

git fork release ~/release
git fork testing ~/testing

Then in ~/release or ~/testing respectively, you have working trees for the respective branches.

As for Linus and Andrew pulling from the proper head, Petr said:

Currently, git pull will _never_ care about your internal heads structure in the remote repository. It will just take HEAD at the given rsync URI, and update the remote branch's head in your repository to that commit ID. This actually seems to be one of the very common pitfalls for people.

The way to work around that is to set up separate rsync URIs for each of the trees. ;-) I think I will make git-pasky (Cogito) also accept URIs in a form


which will allow you to select the particular branch in the given repository, defaulting to the "master" branch.

Would that work for you?

Horst von Brand said the '!' character should be changed to something that wouldn't confuse the shell.

Elsewhere, Linus replied to Tony's original question, saying:

I personally like the "one repository, one head" approach, and if you want multiple heads you just do multiple repositories (and you can then mix just the object database - set your SHA1_FILE_DIRECTORY environment variable to point to the shared object database, and you're good to go).

But yes, if you want to mix heads in the same repo, you can do so, but it's a bit dangerous to switch between them, since you'll have to blow any dirty state away, or you end up having checked-out state that is inconsistent with the particular head you're working on.

Switching a head is pretty easy from a _technical_ perspective: make sure your tree is clean (ie fully checked in), and switch the .git/HEAD symlink to point to a new head. Then just do

read-tree .git/HEAD
checkout-cache -f -a

and you're done. Assuming most checked-out state matches, it should be basically instantaneous.

Oh, and you may want to check that you didn't have any files that are now stale, using "show-files --others" to show what files are _not_ being tracked in the head you just switched to.

I think Pasky has helper functions for doing this, but since I think multiple heads is really too confusing for mere mortals like me, I've not looked at it.

He added, "In contrast, having separate directories, all with their own individual heads, you are much more likely to know what you're up to by just getting the hints from the environment."
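Pulled together, Linus's recipe for switching heads inside one repository looked roughly like this. These are 2005-era git commands (read-tree, checkout-cache, show-files), shown purely as a historical sketch; none of them run unchanged against modern git, and the heads/testing name is invented for illustration:

```shell
# Historical sketch only - 2005-era git plumbing, not runnable today.
cd ~/linux                       # working tree, fully checked in (clean)
ln -sf heads/testing .git/HEAD   # repoint the HEAD symlink at the new head
read-tree .git/HEAD              # rebuild the index from the new head
checkout-cache -f -a             # refresh the checked-out files
show-files --others              # list leftovers the new head doesn't track
```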

Tony replied that maybe there was a bit of terminology confusion. He said:

I want to have one "shared objects database" which I keep locally and mirror publicly at

I will have several "repositories" locally for various grades of patches, each of which uses SHA1_FILE_DIRECTORY to point to my single object database. So I never have to worry about getting all the git commands to switch context ... I just use "cd ../testing" and "cd ../release".

But now I need a way to indicate to consumers of the public shared object database which HEAD to use.

Perhaps I should just say "merge 821376bf15e692941f9235f13a14987009fd0b10 from rsync://"?

That works for interacting with you, because you only pull from me when I tell you there is something there to pull.

But Andrew had a cron job or something to keep polling every day. So he needs to see what the HEAD is.

Linus replied that with the current state of git, he was still pulling all objects from each developer's repository, and going over everything with a fine-toothed comb, just to make sure the system worked properly. He said this situation would change, but he would still prefer to pull from a known tree instead of specifically identified '821376bf15e692941f9235f13a14987009fd0b10' states. He said:

I think you could just define a head name, and say something like:


and we're all done. Give Andrew a different filename, and you're done. The above syntax is trivial for me to automate.

9. SCSI Development Migrates From BitKeeper To git

21 Apr 2005 (1 post) Archive Link: "[ANNOUNCE] SCSI repository move from BK to git"

Topics: Disks: SCSI, Version Control

People: James Bottomley, Kay Sievers

James Bottomley said:

This is more or less forced by the fact that 2.6.12-rc3 is now git-based. So as of now I won't be updating

Hopefully I've set up broadly similar functionality on www.parisc- (with thanks to Dann Frazier and Paul Bame, the parisc-linux web admins).

To view the currently available SCSI trees, go to

This is using Kay Sievers' scripts and should give you broadly similar browsing capabilities to bkbits.

I'm still pushing the diffs and the change logs for all the trees into

which has almost exactly the same content as the old BK-based ones used to have. Since git is a very fast-moving target at the moment, I suspect the diffs and the changelogs will be of most use to people.

P.S. if you have any general git questions, please, I don't want to hear them. Ask on the git mailing list: (preferably after having searched the archives first).







Sharon And Joy

Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.