Kernel Traffic #158 For 18 Mar 2002

By Zack Brown

Table Of Contents

Mailing List Stats For This Week

We looked at 1506 posts in 6509K.

There were 467 different contributors. 223 posted more than once. 184 posted last week too.

The top posters of the week were:

1. Some Discussion Of The SSSCA

1 Mar 2002 - 7 Mar 2002 (38 posts) Archive Link: "SSSCA: We're in trouble now"

Topics: Legal Issues, Power Management: ACPI, Security

People: Paul G. Allen, Shawn Starr, Xavier Bestel, Florian Weimer, Thomas Hood, Helge Hafting

Paul G. Allen said:

Before anyone remarks about this being Off Topic for the various mailing lists I've sent this to, please think about the effects this could have to Linux. In addition, even though many of you may not be US citizens, the recent happenings with international laws against cybercrime, copy protection and the like could make this US law relevant to you as well, not to mention the impact to your company should you not be able to do business in the US because of such a law. Therefore, it really IS on topic, and the time to think about and act on such things is _BEFORE_ they are written in stone, not after.

In case you haven't heard, the SSSCA is before the Senate Commerce Committee, with a hearing earlier today (http://slashdot.org/articles/02/03/01/1423248.shtml?tid=103 for the story and several links, including a draft of the bill). The SSSCA, if passed, would basically require that all interactive digital devices, including your PC, have copy protection built in. This protection would not allow digital media to be viewed, copied, transferred, or downloaded if the device is not authorized to do so. The bill also makes it a crime to circumvent the protection, including manufacturing or trafficking in anything that does not include the protection or that would circumvent it.

Even if there is no SSSCA, the entertainment industry and the IT industry both agree: we must have copy protection of some kind. While I do not disagree that many movies, songs, and other media are distributed illegally without their owners' consent, and that copyright owners need some sort of protection, this is not the way to fight the problem, and doing so can, and probably will, have drastic and far-reaching consequences not only for the IT industry, but for the entertainment industry and the consumer as well.

Many of us have become increasingly involved with, and dependent upon, Free Software (as in GNU GPL or similar), especially the Linux operating system. This type of software is distributed with the source code, allowing anyone to modify it as they choose and need. Linux has become popular to the point that many companies, especially those that provide some kind of service on or for the Internet, rely upon it heavily. Because of the free nature of Linux, and other Free Software, it is extremely difficult to place actual numbers on how many systems are out there employing such software. Some of you, like me, can approximate the number of such systems in your own company or realm of knowledge. So how does this relate to the SSSCA?

As any programmer worth his/her salt will attest, given the resources, anything that can be programmed into a computer can be programmed out, or worked around. In the case of copy protection such as the SSSCA would require, the resources needed to circumvent it are simply the source code for the operating system of the computer, and/or the source code for applications used on the computer (such as one of the many free video/audio players available). Now, given the wording of the SSSCA, along with the DMCA and other supporting laws, it stands to reason that such Free Software would suddenly become a target for legislation. Such legislation could well end up declaring such software illegal. Such a decision could have serious consequences for the IT industry, the entertainment industry, and the consumer alike. The consumer and the entertainment industry may not realize it, but much of the technology they rely upon today is provided at low cost by Free Software. Take that software away, and suddenly doing business costs a lot more, and eventually the consumer just will not be willing to pay for it.

Now, aside from the consequences for Free Software, what about the consequences for those who do not use such software? Imagine that home movie you shot last weekend on vacation. Now you wish to send that home movie to a relative, a friend, whoever, over the Internet, or place it on your web site for all to download. Well, with many of the protection technologies suggested, this would not be possible, or would be extremely difficult. Some of these technologies require digital watermarks to be placed in the media, for one example. CD burners, digital cameras, etc. cannot make these watermarks. The copy protection works by checking for such a watermark, and if it does not exist, the system either will not allow the media to be played, or will not allow it to be transmitted over the Internet, as the case may be. So much for sending your cousin your latest home movie, or letting your whole family see it on your web site. An additional problem is that all the media you currently legally own, including CDs and DVDs, would not work on the proposed new CD and DVD players with copy-protection hardware. You would not be able to copy CDs, tapes, or anything else that you legally own in order to exercise your right to fair use, such as listening to that CD on the cassette deck in your car.

I could go on, but I think this is long enough and has given some food for thought. Besides, I have work to do. Election time is near, so think about what that person you are voting for represents. Think about actually writing a letter to a congressman or other legislator, to a magazine (I actually had one published once, so it's not beyond the realms of possibility), a newspaper, etc. Many people have the attitude that they can do nothing and make no difference. Well, I say to them: they are right, because there are so many people with that attitude that none of them do anything, and so they make no difference. The ones who make the difference are the ones taking a stance, and right now the ones taking a stance are the ones causing these ridiculous laws to be passed. Guess who those people are?...

Welcome to The United Corporations of America.

Shawn Starr was outraged and said, "Let them pass it, they won't be able to enforce it. I won't let my Linux kernel become 'tainted' by closed binary drivers and I will really actively get involved in defeating such measures in Linux kernel modules." But Xavier Bestel pointed out, "You already use a lot of BIOS code with Linux today, and tomorrow ACPI will be mandatory to use your box. Both are untrusted binary "drivers"." Shawn replied that Linux would avoid using the BIOS if possible and properly configured. Florian Weimer remarked:

The problem is that if you don't follow the Trusted Computing Platform Alliance booting procedure, you won't see much mass-compatible content on the Internet any longer.

The solution is simple: go and create your own content, and share it with your friends. But you won't get Hollywood movies this way.

Thomas Hood replied:

The problem is that copy-protection will only be effective if you impose Soviet style restrictions on the use of computers.

Certain powerful corporations want effective copy-protection. Ergo, those powerful corporations will want to impose Soviet style restrictions on the use of computers.

The attempt will ultimately fail. So did the Soviet Union, but in the meantime the attempt to make it work was, and will be, something of an inconvenience.

Florian replied to Thomas' first paragraph, "That's not necessarily true. Most people cannot circumvent even basic obstacles when it comes to computers, and both industry and legislators might be content with that." Helge Hafting replied:

Don't be too sure about that. If a "few" know how to circumvent it, they'll release circumvention kits that anybody can use.

Few people can use a new buffer overflow exploit; many more can use a rootkit.

At one point Paul said, "The bottom line is, too many too often sit and do nothing until they come to realize it's too late to do anything other than the drastic. Is this to be the trend when it comes to Open Source, including Linux? Maybe the SSSCA or other laws will have no effect, but then maybe they will. I for one do not wish to sit on my rear and wait to find out it (or they) have."

2. Some Dissent Over BitKeeper

6 Mar 2002 - 8 Mar 2002 (30 posts) Archive Link: "Re: Petition Against Official Endorsement of BitKeeper by Linux Maintainers"

Topics: Version Control

People: Andi Kleen, Tom Lord, Larry McVoy, Troy Benjegerdes

Apparently there had been some off-list protest against the use of BitKeeper. Andi Kleen finally took the debate to linux-kernel. He and others seemed to prefer not to use BitKeeper at all for reasons mentioned in private. But he said, "it is already very hard because often source is only available through it, e.g. for ppc or for 2.5 pre patches now -- hopefully this trend does not continue." At some point, Tom Lord said:

Let me share some news for people who might be interested in alternatives to BK:

At least one kernel contributor has a private arch repository for kernel work, so it seems to be at least marginally doable. I am certain that further testing, performance improvements, better documentation, and some touch-ups to existing functionality are necessary before I would say "arch is so good that you have no excuse for not using it for kernel work." Nevertheless, it's interesting that someone is already experimenting with it and the kernel.

I am working on some tools that will help to implement automatic, incremental, bidirectional gateways between arch, Subversion, and Bk.

I've written a document that describes the state of arch and the options that I think exist for getting from its current state to a state where it is unambiguously the best choice. See:

http://www.regexps.com/survey.html

I would like to hear (off-list) from people who are interested in eventually using arch for kernel work, but who aren't yet "early adopters". What milestones or features are needed, in your opinion? Please be sure to mention in your email if I may quote you or not (the default presumption will be that I may not, though I may anonymously paraphrase interesting messages and report aggregate results.)

Larry McVoy replied (after pausing to flame BitKeeper critics), "Gateways, yes, bidirectional, no. Arch doesn't begin to maintain the metadata which BK maintains, so it can't begin to solve the same problems. If you have a bidirectional gateway, you reduce BK to the level of arch or subversion, in which case, why use BK at all? If CVS/Arch/Subversion/whatever works for you, I'd say just use it and leave BK out of it." Troy Benjegerdes replied:

We really *DO* need to have more than one source control system available for people to use.

So maybe Arch and Subversion don't maintain all the metadata BK maintains. That just means that the $OTHER_SCM->BK gateway process has some manual involvement. This is no different conceptually than sending a 'plain old patch' in email to $MAINTAINER.

It is in everyone's best interest to make a functional *bidirectional* BK<->Arch gateway. (Including you, Larry.)

This keeps all the open source zealots quiet, and reduces the support load on Bitmover to those people who actually *want* to use Larry's stuff because it's better, not those who use it now because there is no '90%' alternative.

I'd love to see Larry and Tom sit down in a room and come up with an *easy* way for $MAINTAINER to take patches from both Arch and BK. (I have only left Subversion out because I haven't seen anyone from the project take an interest in making changes for kernel developers)

Larry replied:

Go use arch and find out if you really want it. Using arch at this point is about as smart as using BK 3 years ago. Cort did it 2 years ago and that was painful enough. To foist arch at this point on people is actually the fastest way to kill it as a project. These tools take time to mature and if you want to help arch be prepared to do the same amount of work that Cort did with BK. It was a lot of work and time on his part.

And why Arch and not subversion? Subversion has more people working on it, Collab has put a pile of money into it, it has the Apache guy working on it, and Arch has one guy with no money and a pile of shell scripts. Come on. There is nothing free in this life, if one guy and some hacking could solve this problem, it would have been solved long ago.

I don't like gateways because they force everyone down to whatever is the highest level of functionality that the weakest system can do. It's exactly like a stereo system. You don't spend $4000 on a really nice system and then try to drive it with $5 of speaker wire. It will suck; it's only as good as the weakest part. In spite of your claims to the contrary, Troy, it is really not in our best interests to make a BK<->$OTHER_SCM gateway if that means that BK now works only as well as those other SCM systems. That's just stupid. If you want to do that, you do it, but don't foist the work off on me by trying to pretend it's good for BK, it's not. Diluting BK down to the level of an average SCM is completely pointless and a waste of time.

Tom felt that Larry was trying to draw him into a flame war. He remarked that if Larry really believed Arch was so hopeless, it was strange he spent so much energy attacking it: "I find the apparent urgency and hysteria with which you defame arch on this list to be pretty funny." Larry replied, "Hmm, maybe I am urgent, I'm packing for a long weekend." He reiterated that arch was a hopeless project, and the thread ended.

3. Seeking A Free Alternative To BitKeeper

7 Mar 2002 - 12 Mar 2002 (41 posts) Archive Link: "Kernel SCM: When does CVS fall down where it REALLY matters?"

Topics: FS: ReiserFS, Version Control, Virtual Memory

People: Jonathan A. George, Pavel Machek, Rik van Riel, Pau Aliagas, Andrew Morton, Neil Brown, Erik Andersen, Dave Jones, H. Peter Anvin

Jonathan A. George said:

I am considering adding some enhancements to CVS to address deficiencies which adversely affect my productivity. Since it would obviously be nice to have a completely free (or even GPL :-) tool which is not considered to consist of unacceptable compromises in the process of kernel development, I would like to know what the BitKeeper users consider the minimum acceptable set of improvements that CVS would require for broader acceptance. Obviously the tremendous set of features that BitKeeper has is nice, but I'd like to narrow the comparative flaws down to a manageable set.

Any comments would benefit all of the free SCM projects by at least helping to provide a guiding light.

Pavel Machek replied:

My pet feature?

cvs dontcommit file.c

What should it do? Mark changes in file.c as private to me, so that CVS never tries to commit them to the official tree. It would be best if cvs diff just pretended the changes are not there.

So, if I check out a tree, do some dirty hacks to make it compile, and do cvs dontcommit ., then cvs diff should show nothing and cvs commit should try to commit nothing. That would be nice.
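CVS has no such command; purely as an illustration of the behavior Pavel describes, one could approximate it with a small wrapper that keeps the private hacks in a patch file and strips them out before running cvs diff or cvs commit. The script name, patch file, and workflow below are assumptions, not an existing tool:

  #!/bin/sh
  # cvs-private: hypothetical sketch, NOT a real CVS feature.
  # "save" records the current local changes as private; "run" executes
  # an arbitrary cvs command with those changes temporarily backed out,
  # then restores them afterwards.
  PRIVATE=.private-hacks.patch

  case "$1" in
    save)
        cvs diff -u > "$PRIVATE"          # remember the dirty hacks
        ;;
    run)
        shift
        patch -R -p0 < "$PRIVATE"         # hide the hacks
        cvs "$@"                          # e.g. "diff" or "commit"
        patch -p0 < "$PRIVATE"            # put the hacks back
        ;;
    *)
        echo "usage: $0 {save | run <cvs args>}" >&2
        exit 1
        ;;
  esac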

There was no reply to this, but elsewhere, Rik van Riel listed:

  1. working merges
  2. atomic checkins of entire patches, fast tags
  3. graphical 2-way merging tool like bitkeeper has (this might not seem essential to people who have never used it, but it has saved me many many hours)
  4. distributed repositories
  5. ability to exchange changesets by email

For item 1, Jonathan asked Rik if he could be more specific, and Rik replied, "You do a merge of a particular piece of code once. After that the SCM remembers that this merge was done already and doesn't ask me to do it again when I move my code base to the next official kernel version." Pau Aliagas said, "You can do that: you can have separate branches and a distributed repository, and any normal development tree can be an arch archive. You have reconcile among distributed versions, star-merge, and patch replay or update in any direction. You choose what you want to merge, you can always list the missing patches, you can generate the needed patches to join the branches..." In the course of the sub-thread, Rik said he and others would be willing to try out Arch, if a public repository were available.

Also in Jonathan's initial reply to Rik, he said in response to Rik's item 2, "I was thinking about something like automatically tagged, globally discrete patch sets. It would then be fairly simple to create a tool that simply scanned, merged, and checked in that patch as a set. Is something like this what you have in mind?" Rik replied, "Yes, but doing this with the CVS storage as back-end would just be too slow. Also, the CVS model wouldn't be able to easily clean out the tree afterwards if a checkin is interrupted halfway through."
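For illustration, this is roughly what such manual patch-set handling looks like with stock CVS today; the tag names and file list are hypothetical, and as Rik notes, neither the commit nor the tag is atomic:

  # Commit the files of one logical patch together, then tag them so the
  # group can be retrieved later.  Nothing here is atomic: an interruption
  # halfway through leaves the repository with a partial "changeset".
  cvs commit -m "VM balancing fix (patch-set 42)" mm/vmscan.c mm/page_alloc.c
  cvs tag patchset-42 mm/vmscan.c mm/page_alloc.c

  # Later, regenerate the patch as the difference between two tagged sets
  # (assumes an earlier patchset-41 tag exists on the same module):
  cvs rdiff -u -r patchset-41 -r patchset-42 linux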

To Rik's item 3, Jonathan also replied, "Would having something like VIM or Emacs display a patch diff, providing keystroke-level merge and unmerge, be helpful for something like this, or is the need too complex to address that way?" Rik replied, "That would work, but you really need to try bitkeeper's graphical 2-way merge tool (or even a screenshot) to see how powerful such a simple thing can (and should) be." Pau replied that a really good text-based tool was available for Arch.

To Rik's item 4, Jonathan asked if he could be more specific, and Rik replied:

I'm looking for the ability to make changes to my local tree while away from the internet.

I want to be able to make a branch for some new VM stuff while I'm sitting on an airplane, without needing to "register" the branch with the SCM daemon on Linus's personal workstation.

Another thing to consider here is that you'll have dozens, if not hundreds, of people creating branches to their tree simultaneously. How would you ever convince rsync to merge those ?

Pau replied that changing the local tree while away from the Internet was provided by Arch by default. He said, "you work on your code derived from a concrete point in what we might call the reference repository. You can always see the differences, move them from one branch to the other, and make them available for others to "get"... No need to register anything; to get a remote public archive you only need to specify the location and version that you want to download." Regarding getting rsync to handle many folks creating branches simultaneously, Pau said, "You can't. But you can pull the patchsets you choose from the branches you need. And make your branches public and available for others. It's very easy if you understand which is your development branch and which is your private branch. You move patches back and forth automatically. Even patches coming from the original trunk."

Elsewhere, Andrew Morton replied to Rik's list of desired features. He agreed that item 1 (working merges) was important, though more of a bug report than a feature request. He also agreed with item 2, that "changesets against a *group* of files (ie: a patch) needs to become a first-class citizen." As for item 3, a graphical merging tool, Andrew said that tkdiff was already quite good at this, and might be something to merge into the project. But he added, "The problem I find is that I often want to take (file1+patch) -> file2, when I don't have file1. But merge tools want to take (file1|file2) -> file3. I haven't seen a graphical tool which helps you to wiggle a patch into a file." Neil Brown agreed with this, and said, "I would like a tool (actually an emacs mode) that would show me exactly why a patch fails, and allow me to edit bits until it fits, and then apply it. I assume that is what you mean by "wiggle a patch into a file"." Pavel Machek also thought this would be a really great thing.

Returning to Andrew's reply to Rik's list of features, he said of item 5 (the ability to exchange changesets by email) that this could be handled in a future release and wasn't really necessary right away. He went on:

Probably the requirements of general developers differ from those of tree-owners. The general developer is always working against the official tree.

This is a bit extreme perhaps, but I'm currently working on code which consists of twelve changesets against 100 files. Many of those files are changed by multiple changesets. So two things:

  1. If I have two changesets applied to a file, and I make a change to that file, which changeset is it to be associated with?
  2. The ability to move a set of changes from one changeset into another one. ie: split that damn patch up!

But as a starting point I'd say: changesets as a first-class-concept, and lots of integration with tkdiff.

Rik agreed that changesets, branching, and merging were the top priorities.

Elsewhere, Erik Andersen also responded to Rik's initial set of feature-requests, adding two of his own:

6) Ability to do sane archival and renaming of directories. CVS doesn't even know what a directory is.

7) Support for archiving symlinks, device special files, fifos, etc.

Pau said of item 6, "Doable with arch. You can rename dirs and remove them, also files, and it will detect this, generating a much smaller patchset. It all depends on the tagging you choose for files, be it implicit (tags inside the file), explicit (ci, co), or by name." He added that, regarding item 7, symlinks were supported, though he wasn't sure about the rest.

Elsewhere, Dave Jones also replied to Rik's list of feature requests, saying that item 3 (the graphical merging tool)

is the 'killer feature' of bk, and is my sole reason for spending the last few days beating up Larry to make some minor-ish improvements.

Say for example I want to push Linus the reiserfs bits from my tree.

Old method:

(If during any of the steps above, Linus puts out a new pre that touches any of the files these patches do, resync, and go back to step #1)

This takes a long time. And for some of the more complicated bits, it's a pain to do.

The new method:

If during any of these steps Linus changes any of these files, I bk pull, and with luck, bk does the nasty bits for me, and fires up the conflict resolution tool if need be.

The above steps look about equal in number, but in speed of operation for this work, bk wins hands down.

I'm not aware of anything other than bk that has the functionality of citool and fmtool combined. My usage pattern above doesn't fit the usual approach, as suggested in Jeff's minihowto, where I'd have multiple 'themed' trees for each cset I'd want to push Linus' way. With a 6MB diff, I'd need to grow a lot of themes, and fortunately, bk can be quite easily bent into shape to fit my lazy needs.

I'm going to be trying it out for the next round of merging with Linus (which is partly the reason I've not pushed anything his way recently). As soon as I'm done moving house this weekend, I'll be having quite a long play with bk, to see how much quicker and easier my life becomes.

And the usual Larry disclaimer applies. I'll try it, and if it doesn't work out, I'll go back to my old way of working.
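The step-by-step lists from Dave's original message did not survive into this summary, but the "new method" he describes corresponds roughly to the following BitKeeper workflow; the repository URLs and tree names here are hypothetical, while the commands (clone, citool, pull, push) are the ones mentioned in the thread:

  # Clone (once) a local tree from a public 2.5 repository:
  bk clone bk://linux.bkbits.net/linux-2.5 my-2.5

  cd my-2.5
  # Hack, then check the changes in as one or more changesets:
  bk citool

  # Pick up whatever Linus has merged meanwhile; bk merges automatically
  # and fires up the conflict-resolution tool if needed:
  bk pull

  # Finally, publish the result somewhere Linus (or a maintainer) can pull from:
  bk push bk://my.public.host/my-2.5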

Jonathan replied:

This is a great example, Dave, and is exactly the kind of feedback that free SCM tool developers need. This is my current list of features CVS doesn't have which are important for kernel developers (or me).

  1. Storage of select inode metadata (i.e. link, pipe, dir, owner, ...)
  2. Ability to rename files
  3. Atomic patch set tagging (i.e. global tag patched files)
  4. Advanced merge conflict tool (i.e. tkdiff/gvimdiff like features)
  5. Remote branch repository support
  6. Multi-branch merging and tracking (i.e. merge once)

The first three have been on my personal hit list for a while. Good implementations of 5 & 6 are probably the toughest to do properly, but they also seem like key elements for kernel developers due to the importance of multiple trees. I'm not really worried about the performance of CVS, since any problems here can probably be solved by adding some administrative metadata for caching and some tweaks to the back end. However, it sounds as if Arch and PRCS are pretty interesting, and I hope that a couple of people take a look at them to see how close they are to suitable. My respect for BK has certainly been enhanced by this discussion, but I still would prefer a free (or failing that, GPL) license. ;-)

Elsewhere, H. Peter Anvin requested file copying and renaming support, and Pau replied that Arch could handle this as well.

4. Arranging BitKeeper Repositories

8 Mar 2002 - 11 Mar 2002 (7 posts) Archive Link: "bk://linux.bkbits.net/linux-2.5"

Topics: Version Control

People: Rik van Riel, Russell King, Jeff Garzik, Larry McVoy, Anton Altaparmakov, Paul Mackerras, Linus Torvalds

Paul Mackerras asked Linus Torvalds to push his repository to the publicly available site bk://linux.bkbits.net/linux-2.5, which currently contained only 2.5.6-pre2. Rik van Riel replied:

For now I've put up the 2.5 tree on bk://linuxvm.bkbits.net/linus-2.5

This thing gets mirrored automagically from Linus's home directory on kernel.org by the script /home/riel/pushpull.sh which is run from my crontab.

Ideally Linus would run it from his own crontab and commit the data to bk://linux.bkbits.net/linux-2.5 ;)
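Rik didn't post the contents of pushpull.sh, but a mirror job along the lines he describes might look something like this minimal sketch; the local path and pull source are assumptions, and only the push target comes from his post:

  #!/bin/sh
  # Hypothetical pushpull.sh: pull from the upstream tree (assumed to be
  # the repository's configured parent), then push the result to the
  # public bkbits repository.
  cd /home/riel/bk/linus-2.5 || exit 1
  bk pull && bk push bk://linuxvm.bkbits.net/linus-2.5

  # crontab entry to run it, say, once an hour:
  # 0 * * * * /home/riel/pushpull.sh >/dev/null 2>&1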

Russell King replied, "Jeff also does this - http://gkernel.bkbits.net/linus-2.5. Seems a little wasteful to have multiple trees of the same thing available from the same place." And Jeff Garzik quipped, "Rik thinks that a cron job will somehow notice Linus updates faster than I do :)" But Anton Altaparmakov and Rik both pointed out, as Rik put it, "Not necessarily, but it will notice Linus updates even while you or I are asleep. Sleep latency for humans tends to be quite bad."

Elsewhere and some days later, Larry McVoy replied to Paul's initial post, saying, "I pushed last night, I had forgotten to automate this and have been doing it by hand. Linus is letting me deal with this one because he's unhappy with the locking model in BK and this serves as a reminder that it needs to be fixed (or at least that's my theory)."

5. Status Of Asymmetric Multi-Processing Support

10 Mar 2002 - 13 Mar 2002 (8 posts) Archive Link: "[PATCH] Support for assymmetric SMP"

Topics: SMP

People: Kurt Garloff, Frank van de Pol, Andrea Arcangeli

Kurt Garloff announced:

Some time ago (2.4.2 time), I created a patch that allows using a multiprocessor system with different-speed ix86 CPUs under Linux.

The patch does the following:

The patch works fine, but it has a limited scope: It does not try to teach the scheduler about the fact that the CPUs are different. So a process that runs on the slower CPU does not get any bonus. It's just unlucky ... (Some dynamic prio weighting with the cpu_khz should not be too hard in the goodness calc, but I wanted to keep things simple.)

I attach the patch against 2.4.16.

Frank van de Pol replied, "Running a quasi-symmetric system (dual P-II 300MHz, but different cores; one is Klamath, the other Deschutes), the fix for the flags is very useful and I'd like to see it integrated into the stock kernels. Perhaps the flags and the (more controversial) speed patches can be split?" There was no reply to this, but Andrea Arcangeli also pointed out that the patch as it stood had a problem with timer wrap-around, causing calculations to come out wrong in some cases. He said the correct solution would have to be more complicated than Kurt's implementation. Kurt felt that it would be better simply to document the cases in which the wrap-around problem would manifest (when the timer IRQ was bound to a single CPU) instead of introducing so much more complexity into the code. Andrea replied, saying he was pretty sure the problem could manifest in more ways than just that; and some folks went over some of the details.

6. zlib Security Vulnerability

11 Mar 2002 - 12 Mar 2002 (7 posts) Subject: "zlib vulnerability and modutils"

Topics: Compression, Networking, Samba, Security

People: Keith Owens, Ville Herva, David Woodhouse

Keith Owens reported:

A double free vulnerability has been found in zlib which can be used in a DoS or possibly in an exploit. Distributions are now shipping upgraded versions of zlib; installing the new version of zlib will fix programs that use the shared library.

modutils has an option --enable-zlib which lets modprobe and insmod read modules that have been compressed with gzip. If you built your modutils with --enable-zlib and are using insmod.static then you must rebuild modutils after first upgrading zlib. This only applies if modutils was built with --enable-zlib (the default is not to use zlib) and you also use static versions of modutils.
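As a concrete illustration of Keith's advice, checking for and rebuilding a zlib-enabled static modutils after the system zlib has been upgraded would go roughly like this; the binary path and modutils version are hypothetical:

  # Is the static insmod present, and does it carry its own zlib code?
  file /sbin/insmod.static                      # "statically linked"?
  strings /sbin/insmod.static | grep -i inflate

  # After installing the fixed zlib, rebuild modutils with zlib support:
  cd modutils-2.4.13
  ./configure --enable-zlib
  make
  make install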

Ville Herva asked, "Is there a patch for the kernel ppp zlib implementation available somewhere? I'd like to patch the kernels I'm running rather than stuffing a random vendor kernel to the boxes..." David Woodhouse replied:

ftp://ftp.kernel.org/pub/linux/kernel/people/dwmw2/linux-2.4.19-shared-zlib.bz2

That's a backport of the shared zlib from 2.5.6. As it does all its memory allocation beforehand, I _assume_ it doesn't suffer the same problem.

It may be a little more intrusive than you wanted though.
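For anyone wanting to follow David's suggestion, fetching and applying the backport would go roughly like this; the kernel source path is an assumption, and the URL is the one from his post:

  cd /usr/src/linux-2.4.19
  wget ftp://ftp.kernel.org/pub/linux/kernel/people/dwmw2/linux-2.4.19-shared-zlib.bz2
  # Dry-run first, then apply for real:
  bzcat linux-2.4.19-shared-zlib.bz2 | patch -p1 --dry-run
  bzcat linux-2.4.19-shared-zlib.bz2 | patch -p1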

Ville took a look, though he thought there might be problems since he used 2.0 and 2.2 kernels for some machines. Later, he said:

I suppose this patch

http://cvs.samba.org/cgi-bin/cvsweb/rsync/zlib/infblock.c.diff?r1=text&tr1=1.2&r2=text&tr2=1.6&f=u

is closer to what I need. It seems most vendors have only patched ppp's zlib implementation (drivers/net/zlib.c). I couldn't find that particular patch in the Red Hat update kernel .src.rpm, though. I guess I'll have to apply the zlib diff by hand.

Later, he said he'd found the patch "in the redhat errata kernel .src.rpm. It was well hidden in ipvs-1.0.6-2.2.19.patch... I guess this is the same that Arjan sent to Alan. However, this does not apply to 2.0." There was no reply.

Sharon And Joy
 

Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at kernel.org. All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.