Kernel Traffic
Latest | Archives | People | Topics

Kernel Traffic #217 For 23 May 2003

By Zack Brown

Table Of Contents

Mailing List Stats For This Week

We looked at 3112 posts in 15166K.

There were 601 different contributors. 326 posted more than once. 218 posted last week too.

The top posters of the week were:

1. Status Of Digital Rights Management

23 Apr 2003 - 9 May 2003 (299 posts) Subject: "Flame Linus to a crisp!"

Topics: Disks: IDE, Disks: SCSI, Microsoft, Patents

People: Linus Torvalds, Greg KH, Andre Hedrick, William Lee Irwin III, Jamie Lokier, John Bradford, Jeff Garzik, Daniel Phillips

Linus Torvalds said:

Ok, there's no way to do this gracefully, so I won't even try. I'm going to just hunker down for some really impressive extended flaming, and my asbestos underwear is firmly in place, and extremely uncomfortable.

I want to make it clear that DRM is perfectly ok with Linux!

There, I've said it. I'm out of the closet. So bring it on...

I've had some private discussions with various people about this already, and I do realize that a lot of people want to use the kernel in some way to just make DRM go away, at least as far as Linux is concerned. Either by some policy decision or by extending the GPL to just not allow it.

In some ways the discussion was very similar to some of the software patent related GPL-NG discussions from a year or so ago: "we don't like it, and we should change the license to make it not work somehow".

And like the software patent issue, I also don't necessarily like DRM myself, but I still ended up feeling the same: I'm an "Oppenheimer", and I refuse to play politics with Linux, and I think you can use Linux for whatever you want to - which very much includes things I don't necessarily personally approve of.

The GPL requires you to give out sources to the kernel, but it doesn't limit what you can _do_ with the kernel. On the whole, this is just another example of why rms calls me "just an engineer" and thinks I have no ideals.

[ Personally, I see it as a virtue - trying to make the world a slightly better place _without_ trying to impose your moral values on other people. You do whatever the h*ll rings your bell, I'm just an engineer who wants to make the best OS possible. ]

In short, it's perfectly ok to sign a kernel image - I do it myself indirectly every day through kernel.org, which signs the tar-balls I upload to make sure people can at least verify that they came that way. Doing the same thing on the binary is no different: signing a binary is a perfectly fine way to show the world that you're the one behind it, and that _you_ trust it.

And since I can imagine signing binaries myself, I don't feel that I can disallow anybody else doing so.

Another part of the DRM discussion is the fact that signing is only the first step: _acting_ on the fact whether a binary is signed or not (by refusing to load it, for example, or by refusing to give it a secret key) is required too.

But since the signature is pointless unless you _use_ it for something, and since the decision how to use the signature is clearly outside of the scope of the kernel itself (and thus not a "derived work" or anything like that), I have to convince myself that not only is it clearly ok to act on the knowledge of whether the kernel is signed or not, it's also outside of the scope of what the GPL talks about, and thus irrelevant to the license.

That's the short and sweet of it. I wanted to bring this out in the open, because I know there are people who think that signed binaries are an act of "subversion" (or "perversion") of the GPL, and I wanted to make sure that people don't live under the misapprehension that it can't be done.

I think there are many quite valid reasons to sign (and verify) your kernel images, and while some of the uses of signing are odious, I don't see any sane way to distinguish between "good" signers and "bad" signers.

Comments? I'd love to get some real discussion about this, but in the end I'm personally convinced that we have to allow it.

Btw, one thing that is clearly _not_ allowed by the GPL is hiding private keys in the binary. You can sign the binary that is a result of the build process, but you can _not_ make a binary that is aware of certain keys without making those keys public - because those keys will obviously have been part of the kernel build itself.

So don't get these two things confused - one is an external key that is applied _to_ the kernel (ok, and outside the license), and the other one is embedding a key _into_ the kernel (still ok, but the GPL requires that such a key has to be made available as "source" to the kernel).
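The external-key case Linus describes can be sketched in a few commands. This is a minimal illustration, not anything Linus or kernel.org actually runs: it uses OpenSSL, and the file names (signer.key, vmlinuz) are invented. The private key never becomes part of the image; only a detached signature accompanies it, which is the "outside the license" case:

```shell
# Generate a signing key pair; the private key stays with the signer
# and never enters the kernel image or its build.
openssl genpkey -algorithm RSA -out signer.key 2>/dev/null
openssl pkey -in signer.key -pubout -out signer.pub

# Stand-in for a built kernel image.
echo "pretend this is vmlinuz" > vmlinuz

# Sign the image externally: the signature is a separate file, so the
# image itself (and the GPL'd source behind it) is untouched.
openssl dgst -sha256 -sign signer.key -out vmlinuz.sig vmlinuz

# Anyone holding the public key can verify who signed the image.
openssl dgst -sha256 -verify signer.pub -signature vmlinuz.sig vmlinuz
```

The final command prints "Verified OK". Embedding signer.pub in the kernel source so that the kernel itself checks signatures would be Linus's second case: the key text then becomes part of the kernel and must ship as "source" under the GPL.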

Greg KH said, regarding embedding public keys in the Linux binary, "I know a lot of people can (and do) object to such a potential use of Linux, and I'm glad to see you explicitly state that this is an acceptable use, it helps to clear up the issue." And Linus replied:

The reason I want to make it very explicit is that I know (judging from the private discussions I've had over the last few weeks) that a lot of people think that the GPL can be interpreted in such a way that even just the act of signing a binary would make the key used for the signing be covered by the GPL. Which obviously would make the act of signing something totally pointless.

And even if some lawyer could interpret it that way (and hey, they take limbo classes in law school explicitly to make sure that the lawyers _are_ flexible enough. Really! Look it up in the dictionary - right next to "gullible"), I wanted to make sure that we very explicitly do NOT interpret it that way.

Because signing is (at least right now) the only way to show that you trust something. And if you can't show that you trust something, you can't have any real security.

The problem with security, of course, is exactly _whom_ the security is put in place to protect. But that's not a question that we can (or should) try to answer in a license. That's a question that you have to ask yourself when (and if) you're presented with such a device.

Elsewhere, Andre Hedrick also replied to Linus' initial post, saying:

First Point: DRM is going to happen regardless.
Fact: NCITS T10 adopted MMC3 which is part of SCSI
Fact: MMC3 is the home of CSS.
Fact: SCSI by default supports DRM because of MMC3, see /dev/sg.

Second Point: ATA is a state machine driven protocol.
Fact: DRM requires an alternative state machine.
Fact: Hollywood forced this issue not Linus or me.

Third Point: DRM would be more difficult, had I not introduced Taskfile.
Fact: By forcing the native driver to execute a command sequencer, DRM requires more than simple command operations.
Fact: DRM would have happened regardless, so the best one can do is attempt to manage the mess.

Fourth Point: The Electronic Frontier Foundation is to BLAME!
Fact: I single-handedly forced Intel, IBM, Toshiba, and Matsushita to agree to an on-off mode for DRM with enduser control lock outs.
Fact: Feb 2001, EFF played a wild card and destroyed the deal.
Fact: IBM (4C) withdraws proposal, under firestorm.
Fact: April 2001, General Purpose Commands happen, Son-of-CPRM.
Fact: GPC creates 16-19 flavors of DRM with backdoor re-enable register-banging methods.

I cannot show the unpublished version of the proposal, unless 4C agrees to disclose. Given the simple fact CPRM/DRM is present now, the only solution was to control it. I and a few others on the NCITS T13 committee, with the help of Microsoft, had managed to make it so the enduser could disable the feature set. Again this is all about choice, not single-minded dictatorships of nothing or nothing, aka EFF. The simple fact is some people may want to use DRM, and to prevent them is to cause a license twist on GPL. So now everyone has to live with the fact that DRM is here and it is now in the hardware.

Now the digital signing issue as a means to protect possible embedded or distribution environments is needed. DRM cuts two ways, and do not forget it! We as the open source community can use DRM as a means to allow or deny an operation. Now the time has come to determine how to use this tool. Like fire: control DRM/CPRM and you receive benefits; let it run wild and you will be burned.

For those not aware, each and every kernel you download from K.O is DRM signed as a means to authenticate purity.

I suspect, this will fall in to the same arena as LSM, and that is where I am going to move to push it. DRM/CPRM has its use, and if managed well and open we can exploit it in ways that may even cause Hollywood people to back off.

To Andre's statement that DRM cut both ways, Linus replied:

This is _the_ most important part to remember.

Security is a two-edged sword. It can be used _for_ you, and it can be used _against_ you. A fence keeps the bad guys out, but by implication the bad guys can use it to keep _you_ out, too.

The technology itself is pretty neutral, and I'm personally pretty optimistic that _especially_ in an open-source environment we will find that most of the actual effort is going to be going into making security be a _pro_consumer_ thing. Security for the user, not to screw the user.

Put another way: I'd rather embrace it for the positive things it can do for us, than have _others_ embrace it for the things it can do for them.

Elsewhere, William Lee Irwin III also replied to Linus' initial post, saying, "I'm not particularly interested in the high-flown moral issues, but this DRM stuff smelled like nothing more than a transparent ploy to prevent anything but bloze from booting on various boxen to me." To which Linus said:

Let's be honest - to some people that is _exactly_ what DRM is. No ifs, buts and maybes.

And hey, the fact is (at least as far as I'm concerned), that as long as you make the hardware, you can control what it runs.

The GPL requires that you make the software available - but it doesn't require that the hardware be made so that you can always upgrade it.

You could write a kernel binary into a ROM, and solder it to the motherboard. That's fine - always has been. As long as you give out the sources to the software, there's nothing that says that the hardware has to be built to make it easy - or even possible - to change the binary there.

The beauty of PC's is how _flexible_ they are, and I think a lot of us take that kind of flexibility for granted. But the fact is, 99% of the worlds CPU's tend to go into devices that are _not_ general-purpose or flexible. And it shouldn't offend us (at most it might make us pity the poor hobbled hardware).

And there are projects for doing "Open Hardware" (like OpenCores etc), and that may well end up being a hugely important thing to do. But Linux is about open source, not open hardware, and hardware openness has never been a requirement for running Linux.

In his same post, William also said, "I'm largely baffled as to what this has to do with Linux kernel hacking, as DRM appeared to me to primarily be hardware- and firmware- level countermeasures to prevent running Linux at all, i.e. boxen we're effectively forbidden from porting to. Even if vendors distribute their own special Linux kernels with patches for anti-warezing checks that boot on the things, the things are basically still just off-limits." And Linus said:

It has almost zero to do with the kernel code itself, since in the end all the DRM stuff ends up being at a much lower level (actual hardware, as you say, along with things like firmware - bioses etc - that decide on whether to trust what they run).

So in that sense I don't believe it has much of anything to do with the kernel: you're very unlikely to see any DRM code show up in the "kernel proper", if that's what you're asking. Although obviously many features in the kernel can be used to _maintain_ DRM control (ie something as simple as having file permissions is obviously nothing but a very specific form of rights management).

HOWEVER. The discussion really does matter from a "developer expectation" standpoint. There are developers who feel so strongly about DRM that they do not want to have anything to do with systems that could be "subverted" by a DRM check. A long private thread I've had over this issue has convinced me that this is true, and that some people really do expect the GPL to protect them from that worry.

And I do not want to have developers who _think_ that they are protected from the kinds of controls that signed binaries together with a fascist BIOS can implement. That just leads to frustration and tears. So I want this issue brought out in the open, so that nobody feels that they are being "taken advantage" of.

Again, from personal email discussions I know that this is a real feeling.

So I really want to set peoples _expectations_ right. I'd rather lose a developer over a flame-war here on Linux-kernel as a result of this discussion, than having somebody unhappy later on about having "wasted their time" on a project that then allowed things to happen that that developer felt was inherently morally _wrong_.

And this is where it touches upon kernel development. Not because I expect to apply DRM patches in the near future or anything like that: but simply because it's better to bring up the issue so that people know where they stand, and not have the wrong expectations of how their code might be used by third parties.

William, Jamie Lokier and others started talking about how to deal with a situation in which it could actually become necessary to design and produce their own hardware, just in order to run Linux or other good operating systems. Jamie remarked, "If the hardware that comes out of industry won't let you hack, hey you still have basic materials like SiO2 from the real world to make your own." John Bradford replied, "We should be doing this _anyway_. With open hardware designs, there would be no problem with documentation not being available to write drivers. With open hardware designed by Linux developers, we could have hardware _designed_ for Linux." He added, "Incidentally, using the Transmeta CPUs, is it not possible for the user to replace the controlling software with their own code? I.E. not bother with X86 compatibility at all, but effectively design your own CPU? Couldn't we make the first Lin-PU this way?" Linus replied:

Well, I have to say that Transmeta CPU's aren't exactly known for their openness ;)

Also, the native mode is not very pleasant at all, and it really is designed for translation (and with a x86 flavour, too). You might as well think of it as a microcode on steroids.

If open hardware is what you want, FPGA's are actually getting to the point where you can do real CPU's with them. They won't be gigahertz, and they won't have big nice caches (but hey, you might make something that clocks fairly close to memory speeds, so you might not care about the latter once you have the former).

They're even getting reasonably cheap.

Jeff Garzik replied, "Check out OpenCores. At least one CPU there already can boot Linux. I'm waiting for the day, in fact, when somebody will use the OpenCores tech to build an entirely open system... They seem to have most of the pieces done already, though I dunno how applicable Wishbone technology is to PC-like systems."

Earlier, Jamie had also remarked, "It only gets _really_ bad when it becomes illegal to make your own hardware." And Daniel Phillips replied:

Actually, that's where we were a few years ago with hardware, because nearly anything anybody would want to print on a wafer was covered by patents or copyrights. It's getting better by leaps and bounds. Now, a lot of patents have expired, a lot of non-proprietary cores are available, and it's mainly the EDM tools that are non-free. That's where we coders can help.

Until the EDM tools get free, their current owners will continue to dictate what you can and can't design.

Elsewhere, in a completely different subthread, Linus also said:

Quite frankly, I suspect a much more likely issue is going to be that DRM doesn't matter at all in the long run.

Maybe I'm just a blue-eyed optimist, but all the _bad_ forms of DRM look to be just fundamentally doomed. They are all designed to screw over customers and normal users, and in the world I live in that's not how you make friends (or money, which ends up being a lot more relevant to most companies).

Think about it. Successful companies give their customers what they _want_. They don't force-feed them. Look at the total and utter failure of commercial on-line music: the DRM things that have been tried have been complete failures. Why? I'm personally convinced the cost is only a minor issue - the _anti_convenience of the DRM crap (magic file formats that only work with some players etc) is what really kills it in the end.

And that's a fundamental flaw in any "bad" DRM. It's not going away.

We've seen this before. Remember when dongles were plentiful in the software world? People literally had problems with having dongles on top of dongles to run a few programs. They all died out, simply because consumers _hate_ that kind of lock-in thing.

This is part of the reason why I have no trouble with DRM - let the people who want to try it go right ahead. They'll only screw themselves over in the end, because the people who do _not_ try to control their customers will in the end have the superior product. It's that simple.

As to the quake-on-PC issue - it's a completely made-up example, but it does show the same thing. Nobody in their right mind would ever _do_ a DRM-enabled quake on a PC, because it limits you too much. PC's are _designed_ to be flexible - that's what makes the PC's. DRM on a PC is a totally braindead idea, and I _hope_ Microsoft goes down that path because it will kill them in the end.

The place where client authentication makes sense is on specialty boxes. On a dedicated game machine it's an _advantage_ to verify the client, exactly to make sure that nobody is cheating. I think products like the PS2 and the Xbox actually make _sense_ - they make it convenient for the user, and yes they use DRM techniques to "remove rights", but that's very much by design and when you buy the box 99.9% of all people buy it _because_ it only does one thing.

2. Binary Firmware And The GPL

6 May 2003 - 8 May 2003 (32 posts) Archive Link: "Binary firmware in the kernel - licensing issues."

Topics: PCI

People: Simon KelleyAlan Cox

Simon Kelley said:

I'm currently working on the drivers for Atmel PCMCIA and PCI wireless adaptors with the aim of getting up to snuff for inclusion in the mainline kernel.

I'm working from source drivers released by Atmel themselves last year under the GPL so there are no problems with the code - each source file from Atmel has a GPL notice at the top.

BUT. These things need firmware loaded, at least the ones without built-in flash. The Atmel drivers come with binary firmware as header files full of hex, with the following notice.

            Copyright (c) 1999-2000 by Atmel Corporation

This software is copyrighted by and is the sole property of Atmel
Corporation.  All rights, title, ownership, or other interests
in the software remain the property of Atmel Corporation.  This
software may only be used in accordance with the corresponding
license agreement.  Any un-authorized use, duplication, transmission,
distribution, or disclosure of this software is expressly forbidden.

This Copyright notice may not be removed or modified without prior
written consent of Atmel Corporation.

Atmel Corporation, Inc. reserves the right to modify this software
without notice.

Atmel Corporation.
2325 Orchard Parkway     
San Jose, CA 95131       

It isn't clear what the license agreement referred to in the above actually is, but I don't think it's reasonable to just assume it's the GPL and shove these files into the kernel as-is.

I shall contact Atmel for advice and clarification but my question for the list is, what should I ask them to do? It's unlikely that they will release the source to the firmware and even if they did I wouldn't want firmware source in the kernel tree since the kernel-build toolchain won't be enough to build the firmware. What permissions do they have to give to make including this stuff legal and compatible with the rest of the kernel?

Given the current SCO-IBM situation I don't want to be responsible for introducing any legally questionable IP into the kernel tree.

This situation must have come up before, how was it solved then?

Various folks offered advice, but Alan Cox pointed out that only a lawyer could really give a useful interpretation. But Simon felt that this was not an antagonistic legal situation, and that Atmel really had their heart in the right place, "given that the company itself published the source under the GPL and put them up on Sourceforge. What I need is the correct legalese to replace the above which makes it legal to redistribute (easy) and to combine with the GPL'd bulk of linux - that's the difficult bit. Once I have said legalese I'll put it to Atmel with the message "this is what I think you _meant_ to say.""

3. Process Attribute API for Security Modules

6 May 2003 - 16 May 2003 (7 posts) Archive Link: "[PATCH] Process Attribute API for Security Modules 2.5.69"

Topics: FS: ext2, FS: ext3

People: Stephen SmalleyAndreas GruenbacherJan HarkesAlexander ViroAndrew Morton

Stephen Smalley said, "This patch against 2.5.69 implements a process attribute API for security modules via a set of nodes in a /proc/pid/attr directory. Credit for the idea of implementing this API via /proc/pid/attr nodes goes to Al Viro. Jan Harkes provided a nice cleanup of the implementation to reduce the code bloat." In response to some technical criticism from Alexander Viro and Andrew Morton, Stephen said, "This updated patch against 2.5.69 merges the readdir and lookup routines for proc_base and proc_attr, fixes the copy_to_user call in proc_attr_read and proc_info_read, moves the new data and code within CONFIG_SECURITY, and uses ARRAY_SIZE, per the comments from Al Viro and Andrew Morton. As before, this patch implements a process attribute API for security modules via a set of nodes in a /proc/pid/attr directory. Credit for the idea of implementing this API via /proc/pid/attr nodes goes to Al Viro. Jan Harkes provided a nice cleanup of the implementation to reduce the code bloat." Andreas Gruenbacher remarked:

The Process Attribute API, Ext2 xattr handler, and Ext3 xattr handler look clean, so I have no objections, either. It remains to be seen how useful this API will be.

It will be necessary to document which security attributes are defined, and what their valid values are. These things will eventually have to be incorporated into e2fsck, so that after a file system check it is guaranteed that the file system is in a consistent state.

Alexander said Stephen's patch looked sane to him, and Stephen posted a new version, saying, "This patch, relative to the /proc/pid/attr patch against 2.5.69, fixes the mode values of the /proc/pid/attr nodes to avoid interference by the normal Linux access checks for these nodes (and also fixes the /proc/pid/attr/prev mode to reflect its read-only nature). Otherwise, when the dumpable flag is cleared by a set[ug]id or unreadable executable, a process will lose the ability to set its own attributes via writes to /proc/pid/attr due to a DAC failure (/proc/pid inodes are assigned the root uid/gid if the task is not dumpable, and the original mode only permitted the owner to write). The security module should implement appropriate permission checking in its [gs]etprocattr hook functions. In the case of SELinux, the setprocattr hook function only allows a process to write to its own /proc/pid/attr nodes as well as imposing other policy-based restrictions, and the getprocattr hook function performs a permission check between the security labels of the current process and target process to determine whether the operation is permitted."
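The API Stephen describes is visible from userspace as ordinary files. A minimal sketch of poking at it from a shell, assuming a kernel built with CONFIG_SECURITY; what the nodes contain depends entirely on which security module is loaded:

```shell
# List the per-process attribute nodes (current, prev, exec, ...).
ls /proc/self/attr

# Read this shell's current security attribute. On an SELinux system
# this prints a security context; on other configurations the read may
# return nothing or fail, hence the fallback message.
cat /proc/self/attr/current 2>/dev/null || echo "(no attribute available)"
```

Writes to these nodes go through the security module's setprocattr hook, which is where the policy checks Stephen describes are enforced.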

4. Problems With BitKeeper-To-CVS Gateway

7 May 2003 - 10 May 2003 (7 posts) Archive Link: "bkcvs not up-to-date?"

Topics: CREDITS File, Version Control

People: John LevonPavel MachekLarry McVoy

Pavel Machek noticed that the BitKeeper-to-CVS gateway was not up to date, or at least he wasn't able to get the latest tree from it. John Levon confirmed, "I have the same problem, the CVS gateway got stuck at some point in the middle of 2.5.68 and has had no apparent updates since." Pavel Machek added, "It's even worse: part of the updates gets there. Like CREDITS file is up-to-date but Makefile is not." At some point in the discussion, Larry McVoy said, "This should be fixed now. We had a bad disk on" Pavel still had a problem; but the thread ended.

5. Kernel Policy On Link-Order Dependencies

7 May 2003 - 11 May 2003 (26 posts) Archive Link: "The magical mystical changing ethernet interface order"

Topics: Networking, PCI, Version Control

People: Russell KingRandy DunlapAndrew MortonDave HansenDavid S. MillerLinus TorvaldsJeff GarzikChuck Ebbert

Russell King asked:

Does anyone know if there's a reason that the ethernet driver initialisation order has changed again in 2.5?

In 2.2.xx, we had eth0 = NE2000, eth1 = Tulip
In 2.4, we have eth0 = Tulip, eth1 = NE2000
And in 2.5, it's back to eth0 = NE2000, eth1 = Tulip

Both interfaces are on the same bus:

00:0a.0 Ethernet controller: Digital Equipment Corporation DECchip 21142/43 (rev 30)
00:0d.0 Ethernet controller: Winbond Electronics Corp W89C940F

It's rather annoying when your dhcpd starts on the wrong interface.

Randy Dunlap replied:

What version of 2.5?

There was a patch 17 days ago by Chuck Ebbert (merged by akpm) that "fixed" the PCI scan order in 2.5 to be the same as in 2.4. The comment in the changelog says "Russell King has acked this change."

An alternative is to use 'nameif' to associate MAC addresses with interface names. See here for mini HOWTO:
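The nameif approach Randy mentions is driven by a small configuration file, /etc/mactab, mapping each hardware address to a stable interface name; running `nameif` with no arguments at boot, before the interfaces are configured, applies the renames. A sketch of the file format (the names and MAC addresses below are invented for illustration):

```
# /etc/mactab - interface name, then MAC address (invented examples)
net-tulip  00:40:05:aa:bb:cc
net-ne2k   00:c0:26:dd:ee:ff
```

With this in place, the interfaces keep their names no matter what order the drivers initialize in.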

Russell said he'd noticed the change in 2.5.69; for the code comment attributed to him, he said:

Yes, that affects the order of PCI devices on the global list when you have multiple PCI buses present. This machine has only one PCI bus, so is not affected by this issue.

Note that I haven't been running 2.5 kernels on NetWinders until recently, so I couldn't say when it changed.

As a wild stab in the dark, he guessed that perhaps the init ordering changed. And Andrew Morton replied, "Well stabbed. The relative ordering of tulip and ne2k in drivers/net/Makefile got changed. Maybe we should reorganise the 2.5 Makefile to copy the 2.4 Makefile's ordering. How pleasant. I suspect the linker is at liberty to reorder these anyway." Dave Hansen said, "The linker will order things in the final object in the order that you passed them. We depend on this for getting __init functions run in the right order:" David S. Miller replied:

This is absolutely not guaranteed. The linker is at liberty to reorder objects in any order it so desires, for performance reasons etc.

Any reliance on link ordering is broken and needs to be fixed.

But Linus Torvalds said:

No. Last time this came up rth spoke up and said that link ordering _is_ guaranteed.

The kernel depends on this in a lot more ways than just initcalls, btw: all the exception handling etc also depend on the linker properly preserving ordering of text/data sections.

If the linker ever starts re-ordering things, we'll just either not upgrade to a broken linker, or we'll require a flag that disables the re-ordering.

End of discussion.

Folks like Jeff Garzik disagreed with Linus on this, but acknowledged that "Well, The Leader Has Spoken."
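The dependency under discussion is concrete: in a kbuild Makefile, objects are linked (and their initcalls therefore run) in the order the obj-y entries appear, so swapping two lines swaps the probe order. An excerpt in the style of drivers/net/Makefile, illustrative rather than verbatim:

```
# Link order here is initialization order: listing tulip/ before
# ne2k-pci makes the Tulip adapter probe first and claim eth0.
obj-$(CONFIG_NET_TULIP) += tulip/
obj-$(CONFIG_NE2K_PCI)  += ne2k-pci.o 8390.o
```

This is why Andrew Morton's suggestion to copy the 2.4 Makefile's ordering would restore the 2.4 interface names.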

6. HFS+ Filesystem Rewrite

7 May 2003 - 12 May 2003 (11 posts) Archive Link: "[ANNOUNCE] HFS+ driver"

Topics: Version Control

People: Roman ZippelJeffrey BakerMiles LaneDaniele PalaBrad Boyer

Roman Zippel said:

I'm proud to announce a completely new version of the HFS+ fs driver. This work was made possible by Ardis Technologies. It's based on the driver by Brad Boyer.

The new driver now supports full read and write access. Performance has improved a lot; the btrees are kept in the page cache with a hash on top to speed up access to the btree nodes. I also added support for hard links, and the resource fork is accessible via <file>/rsrc.

This is a beta release. I tested this a lot, so I consider it quite safe to use, but I can't give any guarantees at this time of course. There is also still a bit to do (e.g. the block allocator needs a bit more work).

The driver is available for download; the README describes how to build it.

If something should go wrong, I also have a patch for Apple's diskdev_cmds, which ports newfs_hfs and fsck_hfs to Linux and fixes the endian problems. After applying the patch the tools can be built with 'make -f Makefile.lnx'.

Jeffrey Baker said, "This is a huge development for iPod and other mac users." And Miles Lane replied, "Yes! Will this driver be accepted into the 2.4 and 2.5 trees any time soon?" Daniele Pala also said, "Wow, great! Thanks a lot for that. ;) So the major thing to fix now on macs is the sound part, which is quite poor, at least on my old iMac DV... the problem is that I don't even know which audio chipset it uses! Gotta start searching for info... well, let's hope Apple gives info about this :)"

Brad Boyer also replied to Roman's initial post, saying, "If you don't mind, I'll start merging your changes into the CVS tree on SourceForge. I assume this is all GPL code, since you started from my original patches... I'll wait to hear back from you before merging it in, since it's a pretty big change." Roman said, "Yes, of course it is."

7. Status Of Centrino Wireless Support

7 May 2003 - 13 May 2003 (4 posts) Archive Link: "status of Centrino wireless support"

People: Andrew BaumannAnders KarlssonTimothy D. Witham

Andrew Baumann asked:

Does anyone know anything more than "Intel have a driver internally, depending who you ask, and might release it at an unspecified time, if they feel like it" about the Centrino wireless adapter (Intel PRO/Wireless 2100 LAN MiniPCI Adapter)?

I've heard several interesting rumours along those lines, but nothing more. Just wondering if anyone knows about or is working on support for this device.

Anders Karlsson said he hadn't heard any good news on this front: "I sent e-mails to Scott McLaughlin at Intel and he said he forwarded my queries and comments to some internal team looking into what they were doing. From what I can gather, there is little happening from Intel to make the drivers available." Timothy D. Witham replied, "I am also working on getting something to address this issue." And Anders said:

I just had a response from Mr. McLaughlin at Intel. They appear to have a Centrino team to which all requests about/for the drivers are forwarded, to gauge the interest in the drivers.

A guess is that the more people are interested in the drivers, the bigger the chance that they some day will become available, one way or another. I have made my interest known to Intel, not sure I could do any more than that. (Not really got any close pals in Oregon...)

8. Documentation For Embedded Systems

8 May 2003 (1 post) Archive Link: "[ANNOUNCE] embedded book /"

Topics: Small Systems

People: Karim Yaghmour

Karim Yaghmour said:

For some time now I have been working on putting together documentation to help developers use Linux in embedded systems without requiring the purchase of any product or the use of any pre-packaged distribution:

The approach I've documented requires only that you have an Internet connection to download the various packages straight from the source. The complete procedure to obtain a functional embedded system based on those packages is detailed in the book.

In order to further increase the level of technical discussion around the use of Linux in embedded systems and provide up-to-date information, I've also set up a web site and a mailing list at:

As the site states, hype and other marketing-related material aren't welcome there. I think many will agree that there has been enough of that already on the subject of "embedded Linux."

My intent with writing the book and building the site was to bridge the gap that exists between embedded systems developers that use open source and free software packages, and the open source and free software community that produces these packages. My hope is that we will see mainstream embedded developers make more contributions to the open source and free software packages they use in building embedded Linux systems. Ultimately, this will ensure Linux remains the best choice for an embedded OS.

[There is, of course, more detail to these ideas than I can fit in an email. I invite you to take a look at the book and the site if you're interested.]

9. Linux Test Project 20030508

8 May 2003 (1 post) Archive Link: "[ANNOUNCE]Linux Test Project May Release Announcement"

Topics: Bug Tracking, Device Mapper, POSIX, Security, Version Control

People: Robert WilliamsonMarty RidgewayAndreas JaegerUlrich DrepperDan Kegel

Robert Williamson said:

The Linux Test Project test suite ltp-full-20030508.tgz has been released. Visit our website to download the latest version of the testsuite, which contains 1800+ tests for the Linux OS. Our site also contains other information, such as: test results, a Linux test tools matrix, an area for keeping up with fixes for known blocking problems in the 2.5 kernel releases, technical papers and HowTos on Linux testing, and a code coverage analysis tool.

Lists of test cases that are expected to fail for specific architectures and kernels are located at:

These lists also contain expected LTP compiler warnings for each architecture and kernel.


We encourage the community to post results, patches, or new tests on our mailing list, and to use the CVS bug tracking facility to report problems that you might encounter. More details available at our web-site.


- Updated the LTP to build and execute on NPTL    ( Robbie Williamson )
  installed systems
- Applied 'ash' compatibility patch               ( Dan Kegel )
- Applied "CFLAGS+=" Makefile patch               ( Vasan Sundar )
- Created "/testscripts/index.html" directory and relocated  ( Robbie Williamson )
  scripts to it
- Fixed kill problem with genload's stress.c      ( Amos Waterland )
- Added checking for users and sys groups to      ( Robbie Williamson ) Also, called the script from before executing tests to support
  cross-compiled platforms
- Added 'ltpmenu' GUI                             ( Manoj Iyer
                                                    Robbie Williamson )
- Applied "posixfy" patches                       ( Vasan Sundar )
- Updated to use -o for            ( Robbie Williamson )
  redirecting output.
- Added code to to prompt for      ( Robbie Williamson )
  RHOST and PASSWD when running network tests.
- Updated Open POSIX Test Suite header file to    ( Robbie Williamson )
  allow timer tests to build.
- Compiler warnings cleanups.                     ( Robbie Williamson )
- Corrected buffer overflow in inode02.           ( Dan Kegel )
- Updated disktest to 1.1.10 and fixed for        ( Robbie Williamson )
  systems w/o O_DIRECT
- Completed merge of Open POSIX Test Suite 0.9.0  ( Robbie Williamson )
- Applied ia64 specific patches                   ( Jacky Malcles )
- Updated Makefiles to allow use of "-j"          ( Nate Straz )
- Correct fork05 for use in newer glibc/kernels   ( Ulrich Drepper )
- Applied "type" fixes to recvfrom and recvmsg    ( Andreas Jaeger )
- Applied x86_64 specific patches                 ( Andreas Jaeger )
- Applied MSG_CMSG_COMPAT fix for 64bit 2.5       ( Bryan Logan )
- Added new testcase for setegid.                 ( Dan Kegel )
- Modified syslog tests to use test apis          ( Manoj Iyer )
- Added 2.5 timer tests.                          ( Aniruddha Marathe )
- Added Device Mapper tests.                      ( Marty Ridgeway )
- Added sockets tests.                            ( Marty Ridgeway )
- Removed fptest03 due to use of obsolete         ( Robbie Williamson )
  syscalls that perform 48bit math operations

10. QLogic FC Driver Rewrite

8 May 2003 (1 post) Archive Link: "[ANNOUNCE] QLogic FC Driver for Linux kernel 2.5 available."

Topics: Ioctls

People: Andrew Vasquez

Andrew Vasquez said:

QLogic is pleased to announce the availability of a completely new version of the QLogic FC driver (8.00.00b1) for its ISP21xx/ISP22xx/ISP23xx chips and HBAs. Our desire after community review and continued driver development is inclusion of this work into the Linux 2.5 kernel tree.

This driver contains support for Linux kernels 2.5.x and above *only* (all 2.4.x support has been removed). It's based on the QLogic 6.x driver, which is qualified with a number of OEMs, and is functionally equivalent to the 6.05.00b9 driver.

The new driver contains a number of key functional changes: initialization (device scanning), I/O handling -- command-queuing refinements (front-end), and an ISR rewrite (back-end). The last two notables should yield a reasonable performance improvement. Details pertaining to these changes and the development direction of this work can be found towards the end of this email.

Driver tar-balls are available in two forms from our SourceForge site:

   o Kernel tree drop-in tarball (synced with 2.5.69):  


        Extract the contents directly in the kernel tree:

                # cd /usr/src/linux-2.5.69
                # tar xvfj /tmp/qla2xxx-kernel-v8.00.00b1.tar.bz2
                # make config
                # ...

   o External build tarball:


        Extract the contents to your build directory:

                # mkdir /tmp/qla-8.00.00b1
                # cd /tmp/qla-8.00.00b1
                # tar xvfj /tmp/qla2xxx-src-v8.00.00b1.tar.bz2
                # make -C /usr/src/linux-2.5.69 SUBDIRS=$PWD modules

Please note, this is a (pre)beta release. Testing has been performed against a number of storage devices (JBODs, and FC raid boxes), but certainly has not received the level of test coverage present with the 6.x series code -- basic error injection (cable-pulls and recovery).

NOTE: The driver group will try to address any issues with this work within the linux-scsi and linux-kernel mailing lists. Please do not contact QLogic technical support regarding this driver.


This driver is and will continue to be in a very fluid state. Changes thus far include basic infrastructure and semantic rewrites of some core components of the driver:

        o Initialization:
          - pci_driver scanning.
          - Fabric scanning:
            - GID_PT (if not supported, fallback to GA_NXT).
            - SNS registration - RFT_ID, RFF_ID RNN_ID, RSNN_ID.
          - ISP abstractions:
            - Firmware loading mechanism.
            - NVRAM configuration.
          - 2k port login support (ISP23xx only).
          - SRB pool allocations.

        o Queuing mechanisms:
          - Rewrite command IOCB handling.

        o Command response handling:
          - Rewrite ISR -- simplification.
          - Bottom-half handling via work queues.

        o Code restructuring.

        o Kernel 2.5 support -- currently in sync with 2.5.69.

Additional work to be done includes:

        o Further fabric scanning refinements:
          - Minimizing SNS queries.
          - Asynchronous fabric logins.

        o Review Locking mechanisms -- there are still a number of
          structures which depend on our high-level hardware lock for
          mutual exclusion.

        o Internal device-list management unification.

        o Rework mailbox and IOCTL request handling:
          - To use wait queues.

        o Logging mechanisms.  The current debugging requirement is to
          recompile the driver in 'debug' mode.  Once included in the
          kernel tree, recompilation is not a guaranteed option.
Make use of the 'Extended error logging' NVRAM parameter to enable
          additional debug statements.

        o Kernel 2.5 support:
          - module_param() interface.

11. Deep, Dark, Boot Vector Weirdness

9 May 2003 - 13 May 2003 (37 posts) Archive Link: "[PATCH] Use correct x86 reboot vector"

People: Andi KleenJamie LokierRandy DunlapJos HulzinkLinus TorvaldsDavide LibenziEric W. Biederman

Andi Kleen said:

Extensive discussion by various experts on the mailing list concluded that the correct vector to restart a 286+ CPU is f000:fff0, not ffff:0000. Both seem to work on current systems, but the first is correct.

See the "DPMI on AMD64" and "Warm reboot for x86-64 linux" threads for more details.
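One reason both vectors "seem to work" is that in plain real mode they resolve to the same 20-bit physical address (segment × 16 + offset); what differs is the CS value left behind and how much room remains in the segment. A quick sketch of the arithmetic (not from the thread, just illustrating the real-mode address calculation):

```python
def real_mode_phys(segment: int, offset: int) -> int:
    """Compute the 20-bit physical address for a real-mode segment:offset pair."""
    return ((segment << 4) + offset) & 0xFFFFF

# Both candidate reset vectors hit the same physical byte, 0xFFFF0:
assert real_mode_phys(0xF000, 0xFFF0) == 0xFFFF0
assert real_mode_phys(0xFFFF, 0x0000) == 0xFFFF0

# With f000:fff0, offsets below 0xFFF0 are still inside the segment, so
# negative-offset jumps can work; with ffff:0000, only 16 bytes of the
# segment lie below the 1MB wraparound.
```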

Jamie Lokier replied:

You are right. That's what a 286 does when the RESET signal is asserted.

Which is amazing, because I wrote that ffff:0000 and I was reading from the Phoenix BIOS book at the time. It was long ago but I'm fairly sure I got that address from the book.

I just did some Googling and found that there are examples of DOS code fragments using both vectors. Also, the original IBM BIOS (as they say) had a long jump at the vector, which is presumably one of the many de facto ABIs which real-mode programmers grew to depend on.

Randy Dunlap replied, "This seems to be a difference from 8086/8088 to the 286. My iAPX 286 Hardware Reference Manual says that the RESET signal initializes CS to 0FF0000H and IP to 0FFF0H, while my iAPX 86,88 User's Manual says that RESET sets CS to 0FFFFh and IP to 0." And Jos Hulzink also said to Jamie:

The 16-byte code space is very small, and usually only contains that LONG jump to a usable address space.

When the vector f000:fff0 is used, we can survive BIOSes that use relative jumps with negative offsets or indirect short jumps instead.

When the vector ffff:0000 is used, the code segment effectively contains only 16 bytes (or someone must abuse the 8086 wraparound), so I can't think of negative-offset short jumps working there. As the code is read-only at this early stage (BIOS code is RW only after the BIOS has copied itself to RAM), self-modifying code (which uses absolute addressing) can be excluded too.

Okay... now, as the 386 and newer CPUs need a far jump to unlock A20-A31, I think it is safe to assume all BIOSes will do a far jump as soon as possible, which means it doesn't matter which vector is used.

For the sake of bad behaving BIOSes however, I'd vote for the f000:fff0 vector, unless someone can hand me a paper that says it is wrong.

Jamie replied, "I agree, for the simple reason that it is what the chip does on a hardware reset signal." At this point Linus Torvalds came in with:

Hmm.. Doesn't a _real_ hardware reset actually use a magic segment that isn't even really true real mode? I have this memory that the reset value for an i386 has CS=0xf000, but the shadow base register actually contains 0xffff0000. In other words, the CPU actually starts up in "unreal" mode, and will fetch the first instruction from physical address 0xfffffff0.

At least that was true on an original 386. It's something that could easily have changed since.

In other words, you're all wrong. Nyaah, nyaah.

Jos replied:

Source: 80386 Programmers Reference Manual, Intel (1986)

EIP is set to 0000FFF0H; CS is set to F000H.

After RESET, lines A31-A20 are FORCED high till a far JMP is done.

So, unfortunately we have to say Linus is right once again. Damn ;-) My conclusion is that we are unable to use the CPU reset as the reference for warm boots, for we can't control A31-A20 in real mode. But as far as I can see, my arguments still hold...

Jamie replied, "I got my info from an article on the net which says that a 386 does behave as you say, but it is possible for the system designer to arrange that it boots into the 286-compatible vector at physical address 0x000ffff0. It states that the feature is specifically so that system designers don't have to create a "memory hole" (that's as much detail as it gives)." Davide Libenzi pointed out that 0xfffffff0 and 0x000ffff0 amounted to the same thing, "since the hw remaps the bios. Being picky about Intel specs, it should be f000:fff0 though." Eric W. Biederman replied, "The remapping is quite common but it usually happens that after bootup: 0xf0000-0xfffff is shadowed RAM. While 0xffff0000-0xffffffff still points to the rom chip. Now if someone could tell me how to do a jump to 0xffff0000:0xfff0 in real mode I would find that very interesting." Linus said:

You should be able to do it the same way as you enter unreal mode, ie:

One problem is that the code segment you create this way will have the right base and size, but it will be non-writeable (no way to create a writable code segment in protected mode), so it will be different in other ways.

And because you'll have to do some of the setup with that new and inconvenient CS, you'll either have to make the limit be big (and wrap around EIP in order to first execute code that is in low memory), or you'll have to play even more tricks and clear both PE and PG at the same time and just "fall through" to the code at 0xfffffff0.

Sounds like it might work, at least on a few CPU's.

12. Support For PPP Encryption

12 May 2003 (12 posts) Archive Link: "MPPE in kernel?"

Topics: Compression, Networking

People: Frank CusackPaul Mackerras

Frank Cusack asked:

What are the chances of getting MPPE (PPP encryption) into the 2.4.21 and/or 2.5.x kernels?

For 2.4.21, sha1 and arcfour code needs to be added, so I don't have too much hope :-) even though the code is trivial to integrate.

For 2.5.x, just the arcfour code is needed (since sha1 is already there). I've written a public domain implementation, which I'd be willing to relicense under GPL (although I don't see the point), but in any case the algorithm is easy and could be written by anyone.

In addition to the crypto, the mppe compressor module is required.

I'm not so concerned about getting any of that included though; what I really want is for the changes to ppp_generic.c to be included. It's not so much fun to have to maintain patches. The changes required are generic, don't require crypto, and are generally uneventful. Getting the crypto bits and the mppe compressor itself included would just be a bonus.

Paul Mackerras replied, "The fundamental problem is that MPPE is misusing CCP (compression control protocol) for something for which it was never intended. The specific place where this is a problem is that the compression code in ppp_generic doesn't guarantee that it will never send a packet out uncompressed, but MPPE requires that. How do you get around that problem?" Frank said:

I have the compressor return a 3-valued return code (<0, 0, >0) instead of two-valued (>0, other). A negative value tells ppp_generic to drop the packet. 0 means the same as it does now--the compressor failed for some reason. (All current compressors always return 0 or >0, so the negative return is compatible.)

0 could also mean that CCP isn't up yet, but pppd userland doesn't allow NCP's to come up until CCP completes (iff trying to negotiate MPPE).

Note that ECP would have this same problem, it's addressed the same way.
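Frank's three-valued convention can be modelled with a small sketch (the names and shapes here are hypothetical; the real logic lives in ppp_generic.c and the compressor modules):

```python
DROP, FAIL = -1, 0  # hypothetical labels: <0 = drop packet, 0 = compressor failed

def mppe_compress(ccp_up: bool, data: bytes) -> int:
    """Model of an MPPE-style compressor that must never let data out in the clear."""
    if not ccp_up:
        return DROP       # <0: tell ppp_generic to drop the packet
    if not data:
        return FAIL       # 0: legacy meaning, compression failed
    return len(data)      # >0: length of the processed output

def send_packet(ccp_up: bool, data: bytes) -> str:
    """Model of ppp_generic's transmit path with the extended return code."""
    rc = mppe_compress(ccp_up, data)
    if rc < 0:
        return "dropped"            # new behaviour: no cleartext fallback
    if rc == 0:
        return "sent uncompressed"  # old fallback, unsafe for MPPE
    return "sent compressed"

assert send_packet(False, b"secret") == "dropped"
assert send_packet(True, b"secret") == "sent compressed"
```

Existing compressors only ever return 0 or >0, which is why the negative value can be added without breaking them.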

Paul replied:

are you sure that nothing can cause CCP to go down? If it does then ppp_generic will send data uncompressed. What would happen if an attacker managed to insert a CCP terminate-request into the receive stream somehow?

I think the whole thing needs a careful audit. The idea that you fall back to sending and receiving uncompressed data if CCP goes down or a compressor fails is pretty fundamental to the CCP implementation in ppp_generic.

Frank thought he had a way to keep CCP from going down, but Paul felt more was needed; and the thread ended.

13. kconfig Enhancements

12 May 2003 - 14 May 2003 (18 posts) Archive Link: "[PATCH] new kconfig goodies"

Topics: FS: CIFS, FS: FAT, FS: JFS, FS: NTFS, FS: ext2, FS: ext3, Microsoft

People: Roman ZippelDave Jones

Roman Zippel announced:

There is a new kconfig patch available; it adds a few new features, which were requested a few times:

BTW this clears my todo list of important features for the kconfig syntax itself, if you think there is something missing, please tell me now, otherwise it might have to wait for 2.7. After this I work a bit more on xconfig and the library interface.

The changes in detail:

  1. Working with derived symbols becomes simpler, e.g. this:

    config FS_MBCACHE
            depends on EXT2_FS_XATTR || EXT3_FS_XATTR
            default y if EXT2_FS=y || EXT3_FS=y
            default m if EXT2_FS=m || EXT3_FS=m

    can now also be written as:

    config FS_MBCACHE
            def_tristate EXT2_FS || EXT3_FS
            depends on EXT2_FS_XATTR || EXT3_FS_XATTR

    There are two new keywords "def_bool" and "def_tristate", which behave like "default", except that they also set the type of the config symbol. Defaults also accept expressions now, the result of it will be used as default (this works of course only with boolean and tristate symbols).

  2. There is a new keyword "enable", which can be used to force the value of another config symbol, e.g.

    config NLS
            depends on JOLIET || FAT_FS || NTFS_FS || NCPFS_NLS || SMB_NLS || JFS_FS || CIFS || BEFS_FS
            default y

    this could be written as:

    config NLS
            def_bool JOLIET || FAT_FS || NTFS_FS || NCPFS_NLS || SMB_NLS || JFS_FS || CIFS || BEFS_FS

    but this is now possible as well:

    config NLS
    config JOLIET
            bool "Microsoft Joliet CDROM extensions"
            enable NLS
    config FAT_FS
            tristate "DOS FAT fs support"
            enable NLS

    This means the information that a file system needs NLS is now specified with the file system itself and if the file system is selected, so is NLS.

    Another example:

    config AGP
            tristate "/dev/agpgart (AGP Support)" if !GART_IOMMU
            default y if GART_IOMMU

    this can be changed into:

    config AGP
            tristate "/dev/agpgart (AGP Support)"
    config GART_IOMMU
            bool "IOMMU support"
            enable AGP

    This will cause AGP to be selected if GART_IOMMU is selected.

    To better understand how this new feature works, it might help to describe how a config value is calculated:

            config value = (user input && visibility) || reverse dependency

    Visibility is given by the normal dependencies and limits the maximum value a user can select. Reverse dependencies, on the other hand, limit the minimum value a user can select. In the above example this means a reverse dependency on GART_IOMMU is added to AGP, so that the value of AGP cannot be less than that of GART_IOMMU anymore.

    This feature can easily be abused, so please use it with care; don't use it to take the choice away from the user, i.e. only enable another subsystem if not doing so would result in compile errors. If you're not sure, just ask. To avoid bigger mistakes I finally added the code to check for recursive dependencies.

  3. Finally I added support for ranges, so that this becomes possible:

    config LOG_BUF_SHIFT
            int "Kernel log buffer size" if DEBUG_KERNEL
            range 10 20

    Right now this is only used to check direct user input; this means directly editing .config will ignore the range (please don't rely on this feature :) ).
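Roman's value formula can be modelled by treating tristates as an ordered set n < m < y, with visibility clamping from above and the reverse dependency clamping from below (a sketch of the semantics, not the actual kconfig code):

```python
N, M, Y = 0, 1, 2  # tristate values, ordered n < m < y

def config_value(user: int, visibility: int, reverse_dep: int) -> int:
    """value = (user input limited by visibility), raised to at least the
    reverse dependency -- the rule Roman states for the new "enable" keyword."""
    return max(min(user, visibility), reverse_dep)

# GART_IOMMU=y forces AGP to y even if the user tries to switch it off:
assert config_value(user=N, visibility=Y, reverse_dep=Y) == Y
# With no reverse dependency, visibility still caps the user's choice:
assert config_value(user=Y, visibility=M, reverse_dep=N) == M
```

This also shows why, as Dave Jones confirms below, the AGP option stays visible but cannot be lowered while GART_IOMMU is selected.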

Dave Jones was happy to see this, but asked, "However, will this still offer the CONFIG_AGP tristate in the menu? If IOMMU is on, there must be no way to switch off the agpgart support on which it depends." Roman replied, "Yes, you will see AGP, but you can't change it." And Dave said, "Perfect!"

14. Support For The ARM26 Architecture

13 May 2003 (10 posts) Archive Link: "ARM26 [NEW ARCHITECTURE]"

People: Ian MoltonAlan CoxRussell King

Ian Molton said, "I want to submit the ARM26 architecture. It's still broken, but it's getting there. Now a couple more people want to hack on it, so I'd appreciate it if you could put the non-invasive parts into the kernel tree for me. I have two patches - one to add arch/arm26 and another to add the corresponding include/asm-arm26." Alan Cox remarked, "I guess it's no crazier than the MacII port. What does Russell think about it, however, and also is this 2.4 or 2.5 targeted?" Ian replied, "it is 2.5-targeted, currently 2.5.30, but the arch/ and asm/ stuff is independent as far as the rest of the tree is concerned, so it may as well go in as 'current' and then I can submit smaller patches to 'catch up' with the rest. It actually compiles on 2.5.30 (at least, some of it does ;-) and runs, except so far the mm stuff fails and user-land falls over HARD very early." He added that Russell King was OK with the whole idea; and Russell put in for himself:

I'm fine with it; I'd rather someone else (who has more interest in the machines) picked it up.

The basic idea is to rip out the arm26 code from arch/arm and include/asm-arm, thereby allowing include/asm-arm/proc-armv to be collapsed into include/asm-arm, removing some clutter.

Separating it out should also allow arm26 to shrink down to something smaller, which is fairly critical for these machines.

15. Proposal For Digital Rights Management

14 May 2003 - 15 May 2003 (23 posts) Archive Link: "Digital Rights Management - An idea"

Topics: Access Control Lists, Executable File Format

People: Dean McEwanAlan Cox

Dean McEwan proposed, "I had an idea for DRM: what about a kernel that forces everything downloaded to have a valid signature, and doesn't let the file/program be accessed otherwise?" Alan Cox replied, "You can set this up with both rsbac and selinux," but Dean said, "I'm thinking of much more... It would be set up so that files have an internal signature (the ELF format might have to be fiddled with). It would verify itself by sending info to the content creator's PC or server asking for verification of itself; files could be limited-lease, rented, or automatically expire after some time." Alan replied:

That way around doesn't actually work, because I'll simply lie, fake the server, or firewall you (in fact any serious business firewalls all outgoing traffic from end users). If you want to do it for internal trust and you control the systems (the useful case), you set SELinux or RSBAC up so that all applications create files in a "non-runnable" class. The only way to transition an app is a single user application which does your key checking and other processing, then transitions the binary to "safe". I guess you also add a general rule that writing to a file moves it back into non-runnable.

One of the problems with this is interpreters. It's easy to do this with ELF binaries, but you have to extend it to scripts, and that normally means more pain 8)

16. Support For The Virtual Router Redundancy Protocol (VRRP)

16 May 2003 (3 posts) Archive Link: "VRRP"

People: Chien-Lung WuMaciej Soltysiak

Chien-Lung Wu asked, "Does anyone know whether Linux is able to support VRRP (the Virtual Router Redundancy Protocol)?" Gianni Tedesco gave a link to Keepalived, and Maciej Soltysiak also said to Chien-Lung:

Read about vrrpd in:

Try looking for packages for your Linux distro; Debian and Red Hat have vrrpd packages.

Also try:

Sharon And Joy

Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.