Kernel Traffic #53 For 7 Feb 2000

By Zack Brown

Table Of Contents

Introduction

Sorry about the missed issue; I was hard at work with the rest of the Linuxcare folks on the new site.

Thanks go to Martin Strömberg, who pointed out duplicate links last week in Issue #52, Section #1 (4 Jan 2000: ToDo Before 2.4). Thanks Martin!

Thanks also go to Jakob Frandsen, who noticed that none of the archive links in last week's issue actually led anywhere. Oops! Thanks Jakob!

Mailing List Stats For This Week

We looked at 3088 posts in 13136K.

There were 811 different contributors. 383 posted more than once. 218 posted last week too.

The top posters of the week were:

1. Discussion Of The Development Process

4 Jan 2000 - 24 Jan 2000 (45 posts) Archive Link: "Standard Development Integration"

Topics: CREDITS File, Disk Arrays: RAID, FS: ReiserFS, FS: devfs, MAINTAINERS File

People: Alan Cox, Horst von Brand, Marco Colombo, David Weinehall, Peter Samuelson, Linus Torvalds, Mike A. Harris

Sam Powell asked about how version decisions were made, and Alan Cox explained, "We drop features into 2.3.x until its roughly where we want it. Linus then starts getting very hard to get new features past and the stuff is stabilised then it pops out probably as 2.4 and we start 2.5 a while later." Marco Colombo took the opportunity to suggest that a new unstable branch should begin at the same time as the stable branch, so that development could continue uninterrupted. Mike A. Harris replied that there would be problems with such a thing. He pointed out that Linus Torvalds had only so much time, and that having a larger number of active trees (2.2, 2.4, 2.5) would make it difficult to keep track; software developers might get confused about which kernel to code for, and driver writers, in particular, would have a hard time writing for all those separate trees. Mike also added that if 2.5 were started, some developers would not bother trying to stabilize the 2.4 tree. He suggested that if someone wanted to start a 2.5 development branch on their own, they should go right ahead and start their own website for it.

Marco pointed out that there were only 2 kernels actively being maintained: 2.2 and 2.3; but Horst von Brand replied, "There are at least 3 under active development right now: 2.0.39pre, 2.2.x, 2.3.x. It seems 1.2.13lmp development stopped, but who knows... and then there are the variants from the different distributions (mostly on 2.2.x, but also 2.0.x)" Marco argued that work on 2.0 was restricted to bug fixes, which could not be considered active development. He added, "2.0.xx is rock-stable, and i love 3 lines patches like 2.0.37 to 2.0.38 (or was it 5 lines? B-)). "Active development" means new features added, if not major kernel core redesign. I do hope 2.0.xx won't see that!" David Weinehall, the current maintainer of 2.0.x (see Issue #48, Section #5  (13 Dec 1999: Major Security Hole In 2.0.x!! Alan Hands Off The 2.0 Tree To David Weinehall!!) ), replied:

Rest assured that it won't. While the patch from v2.0.38 to v2.0.39 *will* be somewhat larger than v2.0.37 to v2.0.38, the changes will be almost as minimal. No new features.

v2.0.39pre1 is 5160 bytes. v2.0.39pre2 is, at the moment (not yet finalised) 167075 bytes, and will probably not grow much further. 2979 bytes of the diff touches code (and that includes the diff context), the rest is white-space changes, changes of Documentation, CREDITS, MAINTAINERS, whitespace, backporting of a code-page 8859-14 and some other text-fixes.

The only thing I consider adding now is a change to the Makefile to add an extra-version during the pre-patches.

Marco had also added in his same post:

I'm just proposing to shorten the devel cycle not by simply reducing the time between the first release of 2.5.0 and the final of 2.6.0, which may be a bad thing, but just letting it overlap with the previous cycle of 2.3 - 2.4. Numbers mean nothing in this context, but let's see an example. Given a devel cycle of 12 months, we have now a stable release every year, roughly this way:

3 months of wild changes/core rewriting - 3 months of porting the rest of the kernel to the new API - 3 months making it stable - RELEASE 3 months of fixes - SPAWN of new devel.

I'm just proposing to have two overlapped cycles, about 6 months apart: just spawn the new devel when you're at the pre-release step. This way there's always a place to put development code, with no need to wait or to develop against stable releases... and there's also less pressure on including "new" features on a nearly stable kernel.

Yes, having two cycles means double work. But also developing against 2.2 and then "backporting" to 2.3 is also double work...

Horst replied to Marco's second paragraph, "Today that is Linus + DaveM + others concentrating on the experimental release, plus Alan Cox and a few others working on the stable one." And to Marco's final point, he added, "It is not double work, the differences are small(ish) today. If wild development goes on on two branches, crossporting will be very difficult and they will diverge."

In a later post, he added, "Besides the problem with not enough hackers for the development work, if you fragment the development you are also fragmenting the testers, and this definitely has negative impact on the kernel too."

Somewhat later, Marco summed up his perspective:

The points are:

  1. there's a huge demand for new features in a soon-to-be-released kernel, because everybody knows that we have to wait almost a year for the following release. I don't mean this pressure comes from end users only. Commercial distributions should take care of them (and they do). But also developers want their work to be released as part of the standard kernel, sooner or later. See pressure on relaxing the meaning of a 'feature-freeze' or 'stable kernel'. And that's not just ego: a large user base is required for the final debugging phase, to produce rock-stable software. This pressure tends to delay further the final release of new kernels, short-circuiting to 1).
  2. as a reaction to 1), some developers go on independently, especially if their devel-cycle is out of sync with Linus' one. If they're at 30% of the initial implementing phase when Linus calls a feature-freeze, right now they have no other choice, if they want to go on working. Since it makes a lot of sense for them to keep their patches up to date with kernel patchlevels, in the end they find themselves working on the latest *stable* kernel, say 2.2. Eventually, when they're at 95% of development, and need some extensive testing, they make a public release. A new feature appears for a *stable* kernel. Meanwhile, the new main kernel devel branch (2.3) appears. If it does not break their code, they're lucky, and may go on in fixing bugs, and have a stable piece of software on 2.2 and (with little effort) on 2.3 (later on 2.4). But changes in 2.3 may even force them to choose: either go back to implementation phase (or even re-design something), throwing away some or most of the work already done, or ignore 2.3 and go on debugging. The first choice is the best in the long term, of course, since they get in sync with the main kernel development cycle, but I think most would choose the second one, and I don't blame them for that. We end up with a feature that works great on 2.2 and is not there on 2.3. This too raises the pressure on delaying 2.3/4, short-circuiting to 1) again.
  3. Pressure in 2) is not just a lot of messages posted on this list in "I want this/that" threads. It's also this: "I do want to do some testing, but since I need/like feature 'X' that is a 2.2 only thing, I can't test 2.3", while it should be right the opposite: "if you want the cool feature 'X', you have to help in testing 2.3". This slows down the process of getting 2.3 to 2.4. Sorry, this slows down *A LOT* the process.

    Just imagine that the new RAID and ReiserFS are available for 2.3 only (and that they work together B-)). To make a public release of them, both Redhat and Suse (to name just two) should put much effort on getting some 2.3.xx stable enough to be released. This leads necessarily to a faster development of 2.3.

    Right now 2.3 has fewer features than 2.2. That's not only because of its internal changes that broke drivers/modules/fs's and the like (that's part of the devel process) but also because some features were developed on 2.2, when 2.3 was not available.

Having an earlier appearance of the 2.5 branch, say before 2.4.0, will probably lead to other problems (e.g. Linus and others being overloaded, the press being confused), but will help solving 1), 2) and 3) now, and, above all, avoiding their happening again for 2.5 itself, IMHO. I don't understand if you think that 1), 2) and 3) are not happening, or you don't see them as problems, or you don't think that my idea on facing them is good...

Peter Samuelson pointed out that whether 2.5 was released at or after 2.4, developers would still have to make the choice of whether to code for the stable or unstable versions. Marco replied that with stable and unstable trees released simultaneously, the decision of which to go with would be an easy one. The question would be, "Is my code close to stable (say, at 90%)? If yes, you join the team who's testing and finalizing the stable branch; if no, you go on with the new devel tree." He added, "In statu quo, the choice is easy only if you're at 90% or 5%. Everything in the middle means you have to go faster or put pressure on Linus to delay the final release, or choose between sit down and wait for the new devel (which is months) or go on working on the stable release, which is the wrong direction (we're still in the non-modular case)."

Peter replied, "Do you know the real reason Linus doesn't do this? I don't think it's the added burden of supporting two branches for a longer period of time. The real reason (at least one of the reasons) is psychological: to encourage developers to work on making 2.2.0pre and 2.2.x stable before spending all their time and energy adding new hairy features to 2.3. By not *having* a 2.3 until 2.2 had settled down to a fairly stable state, Linus was purposely making it somewhat more inconvenient to develop new stuff, because you would *have* to maintain it as a separate patch. You could still do it, and people did (Hans continued to work on reiserfs, Richard kept polishing devfs patches, etc), but the *incentive* was to fix bugs in 2.2 instead."

2. ALI M15x3 Chipset: Experimental Or Stable

9 Jan 2000 - 20 Jan 2000 (29 posts) Archive Link: "[2.3.3x] ALI M15x3 chipset support (EXPERIMENTAL)"

People: David Weinehall, Andre Hedrick, Aaron Tiensivu, Oystein Viggen, Rogier Wolff, Tom Crane, Andrzej Krzysztofowicz, Hans-Joachim Baader, David Ford

Luca Montecchiani had been using ALI M15x3 chipset support in his kernel since mid November with no problems, and asked why the driver was still marked as "experimental". David Weinehall pointed out that this decision was up to the maintainer. He added, "There might be problems for other combinations of hardware than yours, for instance. But if you feel it's working fine, try to contact the author/maintainer, and maybe he'll change the status." And Andre Hedrick, the maintainer of the driver, added:

It has that label because the feedback is so light.......... Until more positive reports come in or I get some more hardware to do verification tests upon, that is defined as (EXPERIMENTAL).

Use it and complain, boast, or nothing.........being a bench warmer will continue the status of "EXPERIMENTAL".

This sparked a flurry of success reports. Hans-Joachim Baader said he'd had no problems of any kind using the driver on his K6-2 400 for the past year. Oystein Viggen said it worked perfectly on his Asus P5A and IBM Deskstar, and Jimmy Mekele reported success on identical hardware. David Ford also reported complete success, though he didn't describe his hardware. Tony den Haan reported success on a P5A-B board, and Eric Dittman reported success on a Compaq Presario 1690. David Ropte and Kay Diederichs reported success as well, but didn't describe their hardware. Aaron Tiensivu reported, "I've used the code on about 20 boxes, ranging mainly just ASUS P5A and P5A-B boxes and none of them have fallen over or given any trouble.. as much as I don't like SS7 boxes, they are rock solid under these conditions."

Elsewhere, Oystein eventually asked, "The interesting question would be: Has anyone ever had it _not_ work for them?" Andre replied, "You win the kewpie doll at the carnival............ That is the real question to be asked." But Rogier Wolff pointed out:

Sort of: If you get 0 "it didn't work for me" reports that could be because 0 people actually tried it. However, now we know at least two people tried it.

I suggest that you still want to know both for whom it works and for whom it doesn't work! If you have the hardware: Shout!

Elsewhere, Tom Crane finally reported a problem on a Jetway J-542B MB with 64MB RAM and a K6-2/333 CPU running 2.3.37; apparently everything was fine except he was seeing only 6.87 MB/sec in buffered disk reads, which was much lower than he'd expected. He added that his kernel reported 'ALI15X3: MultiWord DMA enabled' rather than 'ALI15X3: Ultra DMA enabled', and went on, "According to Western Digital's UDMA info webpage, 'multiword DMA mode 2' should give a max. data-rate of 16.6MB/sec - much more than I get." Andrzej Krzysztofowicz explained the kernel misdetection, with, "This chipset revision is not UDMA capable," and felt this explained the low buffered disk read speed. He added, "Note, that driver author (from ALI) suggested that for M5229 rev. <= 0x20 UDMA is "not stable", so it should be disabled by the driver..." Tom thanked him for pointing this out, and asked what the nature of the instability was, and what upgrade possibilities were available. Andrzej replied, "I heard that some UDMA ALi problems are CRC-error related. But I'm not absolutely sure if this is the problem concerning low chipset revisions. If so, it might be possible to disable CRC-error checking. However it may affect other kernel parts' operation ..." He suggested changing motherboards.
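Tom's expectation can be sanity-checked with a little arithmetic. Assuming Multiword DMA mode 2's minimum cycle time of 120 ns per 16-bit word (the figure in the ATA-2 timing tables), the mode's theoretical ceiling works out to roughly the 16.6 MB/sec Western Digital quotes, well above the 6.87 MB/sec Tom measured:

```python
# Back-of-the-envelope check (assumption: Multiword DMA mode 2 uses a
# minimum 120 ns cycle per 16-bit transfer, per the ATA-2 spec).
cycle_ns = 120          # minimum cycle time for MW DMA mode 2
bytes_per_cycle = 2     # one 16-bit word moves per cycle
rate_bytes_per_sec = bytes_per_cycle / (cycle_ns * 1e-9)
print(rate_bytes_per_sec / 1e6)  # ~16.7 MB/sec theoretical ceiling
```

Real-world buffered reads always land below that ceiling, but a shortfall this large is consistent with the drive and chipset negotiating the mode Tom saw rather than UDMA.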

3. US Crypto Laws

14 Jan 2000 - 20 Jan 2000 (17 posts) Archive Link: "Linux crypto patch for 2.3 kernels"

Topics: CREDITS File, Patents

People: H. Peter Anvin, Andrew Pam, Michael H. Warfield, Pavel Machek, David Weinehall, Marc Mutz, Mike A. Harris, David Balazic

Andrew Pam had been porting the international Linux crypto patch from 2.2 to 2.3, and gave a pointer to his homepage (http://www.sericyb.com.au/) containing the patches. He hoped that the new US crypto laws would allow his patches to go into the main kernel sources. H. Peter Anvin replied, "We're currently having the new U.S. cryptographic rules reviewed by legal professionals. They have promised to get back to us by Thursday, Jan 20. At that time we'll figure out how to open up kernel.org and the official kernel for cryptography."

David Balazic gave a pointer to the Electronic Frontier Foundation home page (http://www.eff.org), which contained a link to the new encryption export regulations (http://www.eff.org/pub/Privacy/ITAR_export/2000_export_policy/20000112_cryptoexport_regs.html), as well as a link to a press release (http://www.eff.org/11300_crypto_release.html) arguing that the regulations were not as great as folks might think. Andrew replied, "While I agree with the EFF position that there are still significant problems with the export regulations and that further changes are desirable, I believe that the new regulations now in place already permit encryption to be included in the standard Linux kernels as I suggested."

Michael H. Warfield reminded folks that the new laws only freed the distribution of sources, not binaries. He pointed out that this would prevent distribution maintainers from shipping with compiled encryption binaries. He suggested, "The best way around this problem would be if the distro makers would provide for crypto on the install disks in source form only and then compile the sources into binaries as part of their normal install procedure." Pavel Machek replied that the kernel should not cater to the distribution maintainers. They would have problems, he admitted, but he was sure they'd get around those problems somehow.

Elsewhere, Marc Mutz argued that legalizing source distribution of crypto in the US would not make it legal in other countries, and that therefore the kernel should not include crypto, since that would make it illegal to distribute the kernel in those countries. Mike A. Harris replied that if the US relaxed its crypto laws, other countries would surely follow suit. Marc replied that this was a very US-centric point of view, and would not apply to countries like Russia. Pavel replied, "When we get crypto into official kernel distribution, it starts to be widely used. And soon it will be hard to do _unencrypted_ connection. At that time, Russia will have to relax their laws, or they get effectively disconnected from the net." But Marc hadn't realized that highly populated countries still had restrictive crypto laws, and asked which others there were. David Weinehall replied:

Regardless of how important you consider each country is, they are always important for those who live there...

But I'd bet that for instance China has crypto-laws (no, I don't know if this indeed is true; it's merely a guess, depending on their political system), and China has the world's largest population, is a growing, prospering economy (now that they slowly approach market-economy), and has a rapidly growing Linux userbase.

Oh, and I'd consider Cuba, Iran, Iraq, Libya, North Korea, Sudan and Syria (the "horrible" 7, that the US seem to fear more than death itself), more than enough reason not to put crypto into the kernel.

Would you want to bar off even one of the kernel-developers, when there's already a perfectly legit way of spreading the crypto-code? While it's pretty probable none of the kernel-developers come from the "horrible" 7 (at least, there were none in the CREDITS-file in v2.3.39), there may be more countries that get barred off.

Those people who don't know how to recompile a kernel, buy or download a distribution and the kernel that comes with it. Almost every distro that I know of has some kind of mirror where you can download software banned in the States (crypto, patent-encumbered stuff, etc.), and for those who _do_ know how to compile a kernel, ftp.kerneli.org is the perfect place to visit. What we need is simply the distro-makers to start compiling crypto-support into their kernels as default (only the international versions, of course), and provide them from a server somewhere.

However, if it proves itself that the change of mind in the US indeed is as good as it seems, we could start distributing the kerneli-patches from ftp.kernel.org too. Those mirrors who then aren't allowed to import crypto simply can exclude one directory (I'm no expert on rsync/mirroring or whatever is used, but I believe it's possible, correct me if I'm wrong.)

I think that we have to realise, that the big problem is not whether it's allowed to USE crypto in a country, or if it's allowed to IMPORT crypto. Almost no countries have such regulations. But a lot of mirrors of ftp.kernel.org might have to close down if we put stuff that is illegal to EXPORT into the kernel.

IANAL, but from my review of the Swedish regulations on crypto, for instance, it wouldn't be possible to redistribute the kerneli-patches from Sweden once imported. While the probability of anyone giving a damn here in Sweden is minimal (considering how easy it seems to export weapons from Sweden...), there are other countries that might be harder.

4. Slowing Down For 2.4

13 Jan 2000 - 20 Jan 2000 (119 posts) Archive Link: "[Patch] Cleanup struct gendisk registration, 2.3.40-pre1"

People: Alan Cox, Alexander Viro, Linus Torvalds

In the course of a long implementation discussion, Alan Cox said, "I'd much rather this redoing of stuff didnt expand further. The job list is growing not shrinking right now. Its making me jumpy at least." Alexander replied, "Reasonable. However, there is an impressive collection of bug-reports on interaction between ide-scsi and other ide drivers giving exactly the same mess that Andre got with this patch and I really wonder if this is due to bad ordering of ide and scsi initializations. I'm less than happy about the look of ide_init() - look at it yourself and check the usage of 'initialized' in drivers/block/ide.c ;-/ It seems that we are kludging around some dependency problems here. I would really appreciate if somebody familiar with upper layers of ide subsystem and with ide-scsi would comment on situation."

And Linus Torvalds said to Alan, "I'm definitely nervous about growing changes, but at the same time I'd hate to say "no" to a pending cleanup of an area that really is a bit too tangled, and where a lot of the issues are just shrouded in mystery and years of historical reasons.."

5. i386 TLB Flushing Of Global Pages

17 Jan 2000 - 28 Jan 2000 (15 posts) Archive Link: "BUG? i386 TLB Flushing of Global Pages"

Topics: Patents

People: Mark Giampapa, Jamie Lokier, Manfred Spraul, Ingo Molnar

Mark Giampapa reported that __flush_tlb() in include/asm-i386/pgtable.h would not flush Translation Look-aside Buffer (TLB) entries for global pages. He added that:

According to Intel Manual 24319201.pdf (S/W Vol 3) Section 3.7, pg 3-27, there are only 2 ways to invalidate a global page:

This may be intentional for some uses of __flush_tlb(), but there are several places where Linux attempts to flush TLB entries for global pages, such as in the smp boot code.

Manfred Spraul agreed that this was a bug, and Ingo Molnar also confirmed it. Ingo promised a patch shortly. At some point, Mark added, "One thing that is not specified by Intel, as far as I can tell, is whether or not global and non-global TLB entries compete fairly/equally for normal TLB replacement. Although my assumption is that they compete fairly, given how little memory the TLB's actually map, I have always taken the approach of using _PAGE_GLOBAL sparingly. Clearly interrupt and trap handlers, the scheduler, etc. should be global, but infrequently executed kernel code need not be global." But Jamie Lokier replied, "AFAIK the specification simply says that global pages aren't flushed by reloading cr3. Why assume more?"
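The distinction under discussion is easier to picture with a toy user-space model (a sketch, not the kernel's actual code; the addresses below are invented for illustration): entries marked _PAGE_GLOBAL survive the CR3 reload that __flush_tlb() performs, while toggling CR4.PGE, one of the mechanisms Intel documents for invalidating global entries, flushes them too:

```python
# Toy model of why __flush_tlb() missed global pages: reloading CR3
# evicts only non-global TLB entries, while toggling CR4.PGE (or an
# INVLPG per page) also evicts _PAGE_GLOBAL ones.

class TLBEntry:
    def __init__(self, vaddr, is_global):
        self.vaddr = vaddr
        self.is_global = is_global

tlb = [
    TLBEntry(0xC0100000, is_global=True),   # kernel text, mapped global
    TLBEntry(0x08048000, is_global=False),  # user text, non-global
]

def reload_cr3(entries):
    """Simulate 'mov %cr3': global entries survive the flush."""
    return [e for e in entries if e.is_global]

def toggle_cr4_pge(entries):
    """Simulate clearing and restoring CR4.PGE: everything is flushed."""
    return []

after_cr3 = reload_cr3(tlb)
print([hex(e.vaddr) for e in after_cr3])  # only the global kernel entry remains
print(toggle_cr4_pge(tlb))                # nothing survives
```

This is the behavior Mark was pointing at: any caller of __flush_tlb() that expected kernel-text entries to be invalidated would be silently wrong on CPUs with PGE enabled.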

Mark explained, "I agree there is little if any information in the Intel documentation (anyone from Intel care to comment?) on this topic. I do not have the references handy at the moment, but in a prior-art search for a patent I was working on, I stumbled across a number of patents and papers regarding controlling or optimizing hardware TLB replacement. The only hint we have on IA32 is _PAGE_GLOBAL, and we have already gotten ourselves in trouble with it once."

6. Linus On Trademarks

18 Jan 2000 (1 post) Archive Link: "Re: Using 'linux' in a domain name"

People: Linus Torvalds

Linus Torvalds made a public statement:

I've been getting tons of email about the trademark thing due to the action of stopping the auctioning off of linux-related names, so instead of just answering individually (which was how I started out), I'll just send out a more generic email. And hope that slashdot etc pick it up so that enough people will be reassured or at least understand the issues.

And hey, you may not end up agreeing with me, but with the transmeta announcement tomorrow I won't have much time to argue about it until next week ;)

Basically, the rules are fairly simple, and there really are just a few simple basic issues involved:

Those are the kind of ground rules, I think everybody can pretty much agree with them..

What the above leads to is

So basically, in case the trademark issue comes up, you should make your own judgement. If you read and understood the above, you know pretty much what my motivation is - I hate the paperwork, and I think all of this is frankly a waste of my time, but I need to do it so that in the future I don't end up being in a position I like even less.

And I'm _not_ out to screw anybody. In order to cover the costs of paperwork and the costs of just _tracking_ the trademark issues (and to really make it a legally binding contract in the first place), if you end up going the whole nine yards and think you need your own trademark protection, there is a rather nominal fee(*) associated with combination mark paperwork etc. That money actually goes to the Linux International trademark fund, so it's not me scalping people if anybody really thought that that might be the case ;)

I hope people understand what happened, and why it happened, and why it really hasn't changed anything that we had to assert the trademark issue publically for the first time this week. And I hope people feel more comfortable about it.

And finally - I hope that people who decide due to this that what they really want is trademark protection for their own Linux trademark, that they could just wait a week or two, or contact maddog at Linux International rather than me. We're finally getting the shroud of secrecy lifted from transmeta (hey, we'll have a real web-site and zdtv is supposed to webcast the announcement tomorrow), and I'd rather worry about trademarks _next_ week.

Ok?

(*)("Nominal fee". What an ugly sentence. It's one of those things that implies that if you have to ask, you can't afford it. In reality, it's more a thing where both intent and the size of the project will make a difference - and quite frankly it's also a way to slightly discourage people who aren't really serious about it in the first place.)

7. Big Hardware

21 Jan 2000 (2 posts) Archive Link: "monster machine runs linux!"

Topics: Big Memory Support, Disk Arrays: RAID, Networking

People: Derek GliddenLarry Woodman

Derek Glidden reported with a big smile:

I have just gotten access (through the place I work, which will have to remain 'nonymous) to a fully-loaded Compaq 8500 with 8 Xeon 550Mhz processors (yep, eight of em) with 1MB of cache each and 4GB of RAM along with the usual Big Compaq Server goodies like the latest Compaq SMART RAID controller and some flavor of dual-100Mbps Ethernet/Gigabit fibre network adapter type thing.

The exciting thing (for me anyway) is that it is currently running RedHat 6.1, although with caveats: we haven't gotten it to use more than the first 2GB of RAM yet and the NIC is pretty finicky at 100Mbps and we haven't gotten the fibre channel working yet either. It does, however, use all 8 processors without a problem, which lets it compile the kernel in like 35 seconds. I'm going to try to get the latest 2.3 kernel running on it tomorrow.

If there are kernel development things that will really take advantage of a monster machine like this for testing, please don't hesitate to let me know. Of course, I can't give anyone access to the machine directly, but I am more than willing to run experimental stuff on it. (Within limits, of course.) It's also in a lab environment so we have many client machines on a switched network we can use to pound on it for load testing situations.

Larry Woodman replied:

We have a similar machine (Dell 6300) but currently with 4 cpus instead of 8. You need 2.3 in order to use all of your memory; when you config you can select 4GB.

BTW, Our 6300 ran 2.3.35 fine but 2.3.36 - 2.3.39 had problems booting with all 4 cpus. I just grabbed 2.3.40 and I will try it soon.

8. Fingering Kernel Versions

27 Jan 2000 (4 posts) Archive Link: "linux.kernel.org"

People: H. Peter AnvinMatthew KirkwoodBorislav Deianov

Borislav Deianov was used to the fact that the command "finger @linux.kernel.org" would tell you if a new kernel version was out, but he'd noticed that it didn't seem to work anymore. Matthew Kirkwood replied that "finger @master.kernel.org" would still work, but H. Peter Anvin said, "Please use "finger @finger.kernel.org", thanks..."

9. autofs Version 4, NFS Version 3 In Stable Tree?

28 Jan 2000 - 29 Jan 2000 (7 posts) Archive Link: "autofs v4, nfs v3, 2.2.15 ?"

Topics: FS: NFS, FS: autofs

People: Jeremy FitzhardingeAlan Cox

Richard Ems asked if autofs version 4 or NFS version 3 would get into 2.2.15, 2.2.16, or neither. Jeremy Fitzhardinge replied, "While I would like to believe that I got autofs v4 right first go, I think it would be best to see how it fares once it's in 2.3 for a while," and Alan Cox also replied to Richard, "I'd prefer to see them get a track record in 2.3.x first"


Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at kernel.org. All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.