Kernel Traffic #48 For 27 Dec 1999

By Zack Brown

Table Of Contents

Introduction

Peter Samuelson sent in a correction to Issue #47, Section #1  (20 Nov 1999: spin_unlock() Optimization On Intel) : He wrote:

This probably wasn't mentioned anywhere on l-k but you might want to put in an "update" on the first section of KT. You report that Linus was convinced to do the spinlock optimization on Intel, but apparently someone has since changed his mind back. See <asm-i386/spinlock.h> from 2.3.30pre5 and above:

/*
 * Sadly, some early PPro chips require the locked access,
 * otherwise we could just always simply do
 *
 * #define spin_unlock_string \
 * "movb $0,%0"
 *
 * Which is noticeably faster.
 */
#define spin_unlock_string \
	"lock ; btrl $0,%0"

Thanks, Peter!

This issue of Kernel Traffic is dedicated to Jill Cook, Clemens Wehrmann, and the entire Cook family, who all distracted me from my work long enough to enjoy a wonderful Christmas weekend in their home. Christmas is the most important holiday of the Christian calendar, though in it one may detect certain pagan and capitalist influences.

Mailing List Stats For This Week

We looked at 1585 posts in 6644K.

There were 491 different contributors. 234 posted more than once. 174 posted last week too.

The top posters of the week were:

1. Binary Module Portability Across Kernel Versions

2 Dec 1999 - 14 Dec 1999 (107 posts) Archive Link: "[ot] Released Drivers for Lucent WinModems"

Topics: BSD: FreeBSD, Backward Compatibility, Microsoft, Modems, SMP, Sound: OSS

People: Kendall Bennett, Linus Torvalds, Alan Cox, Ingo Molnar, Chuck Mead, Stephen Frost, Martin Dalecki, Richard B. Johnson, Jeff Garzik, Tim Waugh, Jim Nance

Alexandre Hautequest gave a pointer to the linmodem (http://www.linmodems.org/) site, and asked about the project. Jason Clifford replied that for Lucent-based modems, a binary-only kernel module was available that had been tested on 2.2.5 and 2.2.12; he gave a pointer to http://www.linmodems.com/lucent-linux565.zip and added that he'd be releasing RPMs soon. Alan Cox replied that it wasn't enough to say that the module had been tested on one kernel or another; it was also important whether it had been compiled with 'gcc' or 'egcs', which of the 1 or 2 gig memory options had been selected, and whether the module had been compiled for SMP or not. Jim Nance retched into his hand and said he hadn't realized the situation was that bad. He asked how difficult it would be to make an SMP binary run under a non-SMP kernel, and added that if it were feasible for a UP kernel to provide dummy locking functions that would always say they acquired a lock when in fact they hadn't even tried, that would be an easy solution to implement. Tim Waugh replied that this would entail a performance hit on UP kernels, which would have to implement a function call instead of the current behavior of translating spin-lock code to simple no-ops. Alan also replied to Jim with essentially the same answer, to which Kendall Bennett posted a very lengthy reply. In it, he argued that all of the problems with binary-module portability between kernel versions were solvable, and that there were compelling reasons why those problems should be solved. He listed:

  1. bugs in a later kernel could break a perfectly stable module; the only fix would be to revert to the older kernel, which might be missing important enhancements and other fixes.
  2. binary module portability requires a consistent interface between modules and kernel internals, which would be good because it would prevent module writers from implementing "quick hacks" that accessed the internals of other modules or the kernel directly.
  3. binary portability would mean that much less testing would be needed, to ensure that a module was functioning properly with a given kernel.

He went on:

It would appear to me that unless Linux implemented a more clearly defined, binary portable driver mechanism, compatibility problems will continually creep in over time, plaguing the operating system with incompatibilities. Unless these problems are solved, and device driver conformance tests implemented, Linux is headed for disaster further down the track.

Contrast this again with FreeBSD whose development methodology actively supports binary portable kernel modules. Perhaps now it makes more sense why FreeBSD is considered more stable than Linux and that so many web servers run FreeBSD and not Linux. FreeBSD does not support as much hardware, but for what it does support, it is more stable.

The problem is that the *reasons* why the powers that be (Alan Cox and Linus Torvalds) do not want to implement binary portable drivers for the Linux kernel, are *not* based on sound reasoning. Specifically note the following correspondence between myself, Linus and Alan from about a month ago:

He then quoted Linus as saying:

I'm not all that interested in trying to help binary-only drivers, when people like 3dfx are opening up their specs and their libraries to the open source community. Why would I go to the extra work to help people who aren't even willing to help me?

Quid pro quo. That's what the license is all about. I =allow= binary only drivers, but that is very different from =supporting= them.

Kendall continued, "The *reason* binary portable drivers are not implemented in Linux, is because Linus and Alan are wielding the power of Linux to *force* hardware vendors to implement Open Source device drivers. IMHO this is just as bad as Microsoft using their monopoly power to force vendors to ship Windows on their PC's." He concluded, "Lots of stuff available for Linux outside of the Linux kernel is not Open Source. A lot of stuff is. The stuff that is, is Open Source because it makes sense for it to be Open Source, not because the developers were forced to make it Open Source. Open Source software will be successful because of the power that opening the source code provides. The power that 'With enough eyes, all bugs are shallow' as Linus once said. Has Linus forgotten the reasons why Linux is where it is today? Instead he appears content to wield the power of dictator over the Linux kernel sources to force vendors to do things his way."

There were many angry replies to Kendall's post, and two main threadlets of discussion. In the first threadlet, Jeff Garzik said that supporting binary portability would be a lot of work and would slow down the drivers, but that Linus and Alan would probably accept patches, if they did not add overhead to the kernel. Alan replied:

It must also be running in a way that it cannot trigger a bug in existing kernel code, otherwise we have binaries triggering stuff that we cannot debug. That is what makes it hard.

If netscape Oopses the kernel as a user then I know we screwed up, because we have a defined interface and it wasnt running as root. A driver is totally different, it might causes oopses, application funnies - anything and anywhere in the kernel.

Hans de Goede felt this was an unreasonable requirement, and Kendall argued that Alan's point only proved that Open Source device drivers were a good thing. But he added that if vendors Open Sourced their code because it was good to do so, that was different than if they Open Sourced their code because they had been forced to do so. He concluded, "Remember that the reason Windows is where it is today is because of bullying tactics by Microsoft. Why should Linux have to succeed based on the same bullying tactics to force hardware vendors to open their specifications?"

Ingo Molnar replied:

why should Linux make it easier for anyone to hinder Linux development? Closed-source drivers are definitely bad - not only for the hardware vendor (it's their problem and they are free to create whatever additional headache for themselves at will), but more for Linux users. Even if we had an ideal driver API, which stayed constant over time. [which is impossible btw. as it contradicts some of the fundamental properties of Linux] And Linux is _not_ neutral towards constructs that hinder Linux development and hurt Linux users. Linux _is_ neutral towards other binary-only constructs, like user-space applications. But kernel space is much more different.

Linux could flatly refuse binary modules, do you see that? Binary modules are allowed nevertheless (take this as a gift!), but we simply cannot guarantee cross-kernel-branch compatibility without hindering Linux development. Nevertheless as you might have noticed, we try to keep the driver API constant within the 'stable kernel branch'. Sometimes we have to break module compatibility, but we try to avoid it _in the stable, production branch_ as much as possible. What is your problem with this?

why is it impossible to keep the module API constant? Because Linux evolves so fast, and the goal of Linux is to integrate drivers into the OS as much as possible. This is a constantly fluctuating process, APIs, constants and frameworks change frequently (and without us knowing about it in advance). This is essential to Linux, it gives us speed and generic drivers. i386 networking drivers worked almost out of box on other architectures - this would be impossible with binary-only drivers. Because we simply cannot guarantee '100% binary compatibility', we do not guarantee it at all. It's not a clean concept.

sadly it's not possible to distinguish between 'good' binary modules (which are open-source), and 'bad' binary modules (which are closed-source) - but this is not a problem! i'm sure you will understand the point: it's not hard to create a 'module compilation package' that compiles the binary module on the spot. We _do_ guarantee driver source-compatibility for the stable branch. An on-the-spot driver compilation framework would be a welcome addition to Linux (feel free to contribute it), as it can eg. optimize for the given CPU and architecture.

There were a few more posts in that discussion.

In the second threadlet, Alan replied to Kendall's original long post. To the idea that binary portability could be accomplished without problems, and that this should be done, Alan said, "Not really. The binary format is dependent on compiler, architecture, SMPness and a dozen other things. The source is not. Binary compatibility killed Windows. Windows 98 could have been a much nicer OS without back compatibility. I see no reason to kill Linux by re-enacting proprietary OS history with a free OS."

To the idea that binary portability would require a consistent interface, Alan replied, "You mean slower, multiple and in frequent cases legacy overhead. You want every Linux user to have a slower more buggy system so a few people can use your product, how considerate you are."

Alan concluded:

You have a personal agenda to ship proprietary Linux modules. As Linus and I both told you, feel free - but _YOU_ and not the community can support the results. Its a simple matter of maintainability and debugging.

Anyone using your modules is your problem. People with kernel bug reports of any kind with your modules loaded will be referred to you, and binary compatibility messes will be yours to deal with. You have our source, we dont have yours, so you can debug it we cant.

Its your overhead, created by your product and desires. In the proprietary world every vendor pays their own support costs, I don't see why it should be different here.

Chuck Mead praised Alan's comments, and said to Kendall, "you've spanned the heights of incredibility here! You have Linux kernel source code which is freely available but you want the kernel folk to completely revamp their development program/path to suit your own needs? Since you have such an excellent understanding of what needs to be done why not use the source and do it yourself... that would solve all of your problems! If releasing the source code to your drivers doesn't suit your business model then don't play in the Linux space or do so and provide your own support as Alan has said. I fail to see why the OS community needs to support your proprietary source and business model. When you talk about `binary portability' it makes me shiver... I left that world behind by choice and personally do not want to revisit it and the idea that Linus and Alan are `forcing' you, or any other vendor to implement Open Source device drivers is a joke. The onus is on you to choose to do it or not. If the economic model for your outfit dictates that your source remains closed, that's fine, but don't then ask the Linux community to support your efforts! If that's Linus and Alan twisting your arm then go ahead and add my name to the list as well!"

Kendall replied:

I will be building mechanisms to support binary loadable drivers for our products into the Linux kernels, and making those available as patches. I truly believe such a mechanism will help make Linux a better OS, and I asked both Alan and Linus whether they would accept support for this type of mechanism if I created it and submitted the patches to them.

They flatly refused and said they would not support anything that will allow for binary portable modules. So I will do this for our own needs, but no-one else will benefit from this because it won't be a part of the standard Linux distributions. I don't have any problem developing and supporting this code for our own products, and I don't expect anyone to write this code for me.

However I am upset by the notion that the GPL nature of the Linux kernel source can and is being used to force hardware vendors to release Open Source drivers because they have no other option. Not because another option is not feasible, but because the core developers of Linux want to lock out our proprietary solutions.

Alan pointed out that Kendall's first and third paragraphs above contradicted each other. Kendall protested, and Stephen Frost explained, "In the first paragraph you talk about how you are intending to build a system whereby you will be able to have binary-only drivers for your hardware. In the second paragraph you claim that isn't possible."

Kendall also replied directly to Alan's original response to his long post. To Alan's statement that binary portability was dependent on many factors, Kendall reiterated that if done properly, all those dependencies were solvable. He pointed out that XFree86 4.0 had support for binary modules, and didn't suffer from Alan's dependency problems.

To Alan's statement that backwards compatibility killed Windows, Kendall replied, "Sorry, I completely disagree here. Backwards compatibility in Windows 9x was a choice that Microsoft made to sell the OS (and it worked; OS/2 would have killed NT otherwise)." (Martin Dalecki replied, "Just a question: What is this funny anti trust court case about one hears such much last time in the media about. Please enlighten me. Are are you trying to tell me that Win-Shit won the market due to suprerioty in technology?" )

To Alan's statement that Kendall had a personal agenda to ship proprietary Linux modules, Kendall replied, "you try to force people who want to help Linux grow, to be forced into a model that is fraught with compatibility issues. This is the whole crux of the matter. You are totally against the concept of building support for binary portable modules, because it might enable companies to develop proprietary drivers for Linux? What about the fact that there are a ton of positive benefits for compatibility issues in doing this? Are you willing to jeopardise the future of Linux, just to stop my company from being able to build binary portable modules? I didn't realise that what I do has such a tremendous impact on the Linux community..."

To Alan's statement that he didn't want to re-enact proprietary operating system history, Kendall replied, "A *real* operating system is just not going to be able to survive unless some procedures and policies are put in place to ensure future compatibility with legacy hardware."

Alan replied to this last statement with:

We've put procedures in place. Its called *SOURCE CODE*. The NCR 5380 for example is the most godawful pile of crud scsi controller on the planet. Its old, and while good for its time, is nowadays a joke.

Because we have this *SOURCE CODE* stuff it still works in Linux 2.3.30pre6 and people have extended it to support other platforms. So its extensible, its adaptable.

Its also *maintainable* which is the most important item of all. I can take a bug report and look at the controller source and say "ok thats a driver bug".

Kendall replied, "What I am trying to get across is that binary portable modules help to solve driver compatibility problems when you do new releases of the Linux kernel. I am not trying to say that the source code should be closed." Alan came back with, "I've been helping maintain the Linux kernel for six or seven years. I don't believe a word of your claim. Things like vmware are a problem, the et inc modules were a problem, oss has its problems. And thats despite the fact OSS especially, and also to an extent ET, worked hard on chasing bugs."

Richard B. Johnson spoke up as well:

If Linux allows binary modules, the end of the world, as we know it, has arrived. The result will be Trojan and other 'malware' inserted into the kernel. Even if we trust the world to be on its best behavior, we will become just like NT and other such crash-ware. Many/most/(perhaps all) problems with NT are directly caused by 'drivers' which are written by nitwits contracted by third-parties. You know, the ones who insist that the kernel be rewritten as a polymorphic "object" because they learned those buzz-words in some OO class at Three-Mile-Island.

If a company considers its drivers to be so secret that it will not allow anybody to view its source-code, the company probably obtained the source-code by theft. If so, the proper thing to do would be to use the stolen source as a template from which to write an interface specification. Then, from the interface specification, one writes the target software. However, this is expensive. It takes time to write a specification. Its quicker to steal source-code, hack it to run the proprietary hardware, then keep it in the dark.

There is a good example of this kind of company policy at a one-time major network company near the Great Salt Lake. It has been reported that its President publically stated that he would never pay for something he could steal.

The whole idea of an open-source Operating System is that it's open. We must keep it that way.

2. mmap Changes

8 Dec 1999 - 15 Dec 1999 (13 posts) Archive Link: "mmap on a device returns ENODEV"

People: Reid Spencer, Linus Torvalds, Ingo Molnar, Andrea Arcangeli

After checking out the source, Reid Spencer reported that the 2.2 kernel didn't support mmap on block devices. He'd noticed on line 247 of mm/mmap.c, that do_mmap() would return ENODEV if the file didn't support the mmap operation, and this seemed to be the case for block devices. He pointed out, "It would be very useful if block devices would support mmap because it allows disk geometry to be factored into applications requiring high speed disk operations," and added that the capability existed at least on Solaris and IRIX.

Linus Torvalds replied, confirming Reid's interpretation, but adding that the problem "should finally be a thing of the past soon: one of the issues is that the mmap() code requires the page cache, and the page cache didn't use to handle the case of large devices correctly, and as such couldn't be used - and raw devices for that reason had their very own special code." He added that the feature would not be added to 2.2.x, but that it should be trivial to do in 2.3.x; he concluded, "This is one of those "small details" that I've wanted for a long time. I've never had the time to actually do it, even though it should be trivial. The sucker^H^H^H^H^H^Hperson to ask is probably Alex Viro or Ingo Molnar, and see if you can entice them into doing it - they both know this area forwards and backwards. Or it could be a great project for somebody willing to learn and not too afraid of messing his disks over in case of bugs.."

Alexander opted out, since he wanted to finish the symlinks-in-pagecache patch first. Ingo posted a patch against 2.3.32-pre2, explaining that it:

adds all pagecache blocks that have established mappings to the buffer-cache hashlists (and can thus be looked up), including the swapcache. It works fine here and there is no noticeable slowdown anywhere.

this does not fully solve the problem yet, what we need now is an agreed on set of rules to access buffers that are in other caches as well (eg. the pagecache).

Is it bad to synchronize access through the buffer-cache? I dont think so and it's simple and straightforward. The disadvantage is that the pagecache has to maintain the state of the buffer properly, which adds some overhead. Eg. if a page sees a pagefault with write-access then all underlying buffers have to be marked as 'permanent dirty'. Permanent dirty buffers are not written back by bdflush. This very same new buffer-state could i believe be used by the journalling code too to tell other cache users that a block is not to be written back, yet. In the typical 4k blocksize case it's only one buffer that needs to be maintained, so the cost doesnt look like to be big. There are probably other issues to be solved as well, but this would be the main approach.

Linus and Andrea Arcangeli both objected to the patch, Linus saying, "I really don't like this. It shouldn't be needed, and it does slow down lookups (the fact that we no longer do buffer cache lookups as often as we historically did hides the issue, but it's still conceptually wrong)."

End Of Thread.

3. Telephone Interface Card Under Linux

9 Dec 1999 - 16 Dec 1999 (4 posts) Archive Link: "Telephone Interface Card"

People: Mike A. HarrisAndrew Morton

Someone asked if there were any Telephone Interface Cards that supported Linux. Andrew Morton gave pointers to Asterisk Open Source PBX (http://www.asteriskpbx.com/main/index.html) and Linux Telephony (http://www.linuxtelephony.org/) . Rick Ellis gave a pointer to Franklin Telecom (http://www.ftel.com/products/ict-1.html) . Mike A. Harris put in, "Any card at all that has specs available, would be easy to program, either directly with port writes, or by writing a simple driver for. If specs are not available though, running it in DOSemu or with a breakout box of some kind should yield very simple to decode (reverse engineer) hardware. Any DTMF card AFAICS is a fairly simple device. I can't imagine it using any more than a few ports that are easy to determine, especially with DOSEMU."

4. Compiler Errors In 2.3.31 And 2.3.32

11 Dec 1999 - 16 Dec 1999 (24 posts) Archive Link: "2.3.31 (and 2.3.32pre2) breaks cpp with segmentation fault (ok in 2.3.30), reproducable (mremap problem???)"

People: Benjamin C.R. LaHaise, Heinz Diehl, Mike Galbraith

One of David Dyck's methods of testing new kernels is to compile and test Perl. This time, he got a repeatable segmentation fault in the C pre-processor, when compiling Perl under 2.3.31; Mike Galbraith led a debugging session, in which folks who were not seeing the problem gave him their .config files for comparison. This didn't lead anywhere, but elsewhere, Heinz Diehl reported that the problem was still there in 2.3.32; Thorsten Kranzkowski, also experiencing the problem, asked if Heinz had the netfilter modules loaded; but Heinz replied that no, his machine was a standalone box using UUCP for Internet access. After a bit of back and forth between Thorsten and Heinz, Benjamin C.R. LaHaise posted a patch and reported, "Here's the fix that works for a couple of other people who have triggered the problem. I was being too clever, and it seems that in some cases even though the new parameter is not touched, its presence is corrupting *something*. If someone has an explanation, I'd be interested in hearing it."

Heinz reported success with the patch, but Thorsten was still getting internal compiler errors, though only when he loaded the NAT modules. Benjamin tried to diagnose the problem, but no solution presented itself in the thread.

David, however, posted three days after initiating the thread, saying that Benjamin's fix had made it into 2.3.33, and his seg-fault was gone.

5. Major Security Hole In 2.0.x!! Alan Hands Off The 2.0 Tree To David Weinehall!!

13 Dec 1999 - 17 Dec 1999 (36 posts) Archive Link: "[security] Big problem on 2.0.x? (fwd)"

Topics: Disks: IDE, FS: NFS, Networking

People: Alan Cox, David Weinehall, Pedro M. Rodrigues, Linus Torvalds, Andrea Arcangeli, Mike A. Harris

Daniel Ryde quoted a post from bugtraq, in which Eduardo Cruz reported that doing a 'ping -s 65468 -R ANYIPADDRESS' would disable the system and cause an eventual reboot. Eduardo had tested this on 2.0.35 and 2.0.36; 2.2.x wouldn't fall for it.

Mike A. Harris suggested a 'chmod u-s /bin/ping' on all 2.0.x systems, adding that Alan Cox would probably not release a 2.0.39 just to fix this hole. Andrei Pelinescu-Onciul tracked the problem down to a bug in net/ipv4/ip_output.c, in ip_build_xmit(). He posted a code fragment that would at least prevent the exploit. Andrea Arcangeli agreed with his assessment, and posted a patch. David Weinehall asked if Alan would consider a 2.0.39 containing only that patch, adding that a lot of people still used 2.0.x systems. Alan replied, "If you want to become 2.0.x maintainer and fix this and the other chunk of bugs then be my guest. I don't really have time to worry about 2.0, 2.2 and 2.3.34."

David accepted responsibility for maintaining the stable tree. He asked for confirmation from Linus Torvalds and others, saying, "I REALLY need to KNOW whether people accept me or not. I won't mind being criticised now, as long as complaints are laid out in a serious manner. If any of you has anything on your minds that you don't want to discuss openly on the list, feel free to reply privately."

Pedro M. Rodrigues replied, "While i dont think that a 2.0.39 with just this fix is a good idea (i agree with Alan Cox that only external vulnerabilities should be a reason for a new version) i believe some sort of revision of the 2.0.X kernels once in a while is a good idea." David said, "My idea isn't to ONLY fix this one, but also fixup some other things. However, I'll probably drop a pre-patch 1 in some direction soon (I guess that'd be to Alan?), and that one will probably only contain this fix together with some documentation updates and maybe some other small things. I will release more pre-patches as my reviewing of the patch-log/bug-log Alan sent me progresses." Alan offered to review pre-patches if David (or someone else) was willing to "do all the legwork".

Linus said:

I can certainly look at 2.0.x updates too, but I also suspect that the people who REALLY care are the distribution makers. I don't have any strong feelings about 2.0.x - although I _do_ suspect that you have to be even more careful than usual, because you're not going to get very much testing any more..

The people who are still on 2.0.x are not the kind of people who are excited about testing unless they have major problems, and THAT in itself is a problem - it means that you get a very self-selected tester list, which may result in exactly the wrong output from testing. So I would suggest you only apply stuff that is "obviously correct" from reading the sources and directed testing, but I don't care enough about 2.0.x to really argue strongly one way or the other..

There was more discussion about issues surrounding 2.0.x maintenance, and at one point David announced that he had begun working on 2.0.39; Andrea Arcangeli asked what known problems remained with 2.0.x, and Alan said:

The major 2.0.x bugs left are

Needs lots of driver updates (aic etc). I have archived mails about these including patches in some cases

There was a bit of discussion about which of these changes to implement.

6. Development Process Criticized; Alan Uses egcs 1.1.2

13 Dec 1999 - 14 Dec 1999 (24 posts) Archive Link: "Kernels do not compile anymore"

People: Richard B. Johnson, John Anthony Kazos Jr., Mike A. Harris, Alan Cox, Steve Dodd, Arjan van de Ven, David Parsons, Richard Gooch, Horst von Brand

Richard B. Johnson reported, "I detect a very distressing trend. Since the 2.3.13 release, which took several days to fix so it would compile, I have not been able to download a kernel that will compile. The 2.3.31 release of a development kernel (December 6) had, not only problems with missing variables like "memory_start", etc., but also had incomplete structures, missing structure members, etc., all over the place. Most of the errors had to do with initial ram-disk support which is required on many systems that use modules. I spent about two weeks of my spare time trying to make 2.3.31 compile then gave up. If anyone is interested, I can provide a copy of .config but that's not the problem. The problem is that Linus and others used to make sure that kernels would compile. This is no longer being done. Instead, we have 'secret' versions of kernels retained by distributors while the publicly accessible kernels are junk."

He also pointed out that although he had been contributing patches to the kernel since 1995, his name had been removed from all parts of the code that had once contained it, including the contributors file.

There were a lot of replies. The only one that addressed Richard's concerns about not being given due credit for his contributions was by John Anthony Kazos Jr., where John objected to Richard's "blatant conspiracy-theorist tone. Perhaps you should offer some conclusive proof with your spicy accusations." Richard replied, "It's not a conspiracy. I never blame upon a conspiracy that which can be explained by stupidity." No more was said on the subject.

To Richard's original post, Richard Gooch pointed out that Richard J. shouldn't complain about development kernels not compiling. They were in the unstable tree, after all. He recommended using a kernel from the stable series for folks who needed stability. David Parsons objected that it was better for development if folks complained bitterly about things not working, than just use the stable tree. He added that, of course, submitting a patch to make the kernels work would be best of all.

Arjan van de Ven also replied to Richard J.'s original post, saying that having the kernel fail to compile when drivers were missing or broken was intentional, because it was the best way to get them fixed.

Mike A. Harris also replied to Richard J.'s original post, saying, "These problems occur only with devel kernels, and possibly on very rare occasions with stable ones. Saying that these compilation problems put things in the distributors' hands is ridiculous. I've NEVER seen ANY distribution release a dist with a development kernel. (That is speaking of the mainstream widely used dists like RedHat, Caldera, etc - MAIN RELEASES - not beta releases)."

This spawned a discussion of its own, when Horst von Brand pointed out that there had been a few 1.3.x and 2.1.x Red Hat releases; he seemed to remember Slackware doing a similar thing, and pointed out that SLS (Soft Landing Linux), one of the first Linux distributions, started up before version 1.0.

Mike, Alan Cox and Chris Adams corrected him, saying that Red Hat may have released a "Rawhide" distribution with an unstable kernel, but never a production system. Alan added, "Slackware did a 1.3.59 release once, which at the time was not a bad idea at all. We could probably have gone from 1.3.59 to 2.0pre but people did more stuff and 2.0 took a lot longer."

Steve Dodd added, "Slackware also did a 1.1.x ([45]9?)-based distro, I'm sure. That would have been sometime between late '94 and early '95, I think."

Alan also reported elsewhere that he is currently building kernels with egcs 1.1.2.

7. Disabling Pentium III Serial Numbers

15 Dec 1999 - 20 Dec 1999 (47 posts) Archive Link: "disabling Intel PSN"

Topics: FS: sysfs, SMP

People: Dwayne C. Litzenberger, Marc Mutz, Peter Samuelson, Peter Benie, Marc Lehmann, Matthew Kirkwood, H. Peter Anvin, David Schwartz, David Woodhouse

Pawel Krawczyk asked if anyone had considered an option to disable the Intel Pentium III processor serial number (PSN). He gave a pointer to Intel docs (http://www.intel.com/design/pentiumiii/applnots/245125.htm) on the CPUID instruction, adding that this could be used to disable the PSN until the next reboot. Marc Mutz and Peter Samuelson both replied that Linux already did this in the unstable branch; Peter added that the feature would probably be back-ported to 2.2.15, if he wasn't mistaken.

Matthew Kirkwood also confirmed that the PSN was always disabled at boot-time, since it was not a useful feature. David Schwartz pointed out that this was a reason to not use it, not a reason to take active steps to disable it. H. Peter Anvin felt that there were some valid uses for the PSN. He suggested making it a command-line option, even if it defaulted to off.

There were a lot of replies to H. Peter; Marc Lehmann pointed out that PSNs were non-portable. Peter Benie felt that if the PSN were enabled, vendors would start using it for software licence keys. David Woodhouse felt that this wasn't a valid objection, since the vendors would have to have a fall-back for non-PIIIs, and users would simply disable the PSN during installation of the licence. Dwayne C. Litzenberger opined that the kernel should keep it disabled by default, and force users to patch the kernel in order to re-enable it. Later, he elaborated, "I suggest the default be that the PSN be simply disabled at startup. A kernel parameter (optionally compiled into the kernel) could change this so that the PSN is read before it is disabled. The PSN could then be stored in a /proc variable (and also read through a (perhaps privileged) sysctl command). This variable could be modified (ala /proc/sys/*) if the user wishes (whether or not they actually have a PIII). The PSN would then become no more infringent on privacy than a variable MAC adddress." He concluded, "By marginalising the PSN, it can become a useful feature without allowing vendors to force it upon the users."

Marc Mutz liked that idea a lot, and added, "Additionally, you _have_ to export the psn via /proc or the like, because if you let an application execute the CPUID command (I think that was what reveals the PSN?) on SMP, that value can change between calls."

8. glibc Including Kernel Headers

16 Dec 1999 - 17 Dec 1999 (48 posts) Archive Link: "[PATCH] asm*/resource.h fix for glibc"

Topics: Backward Compatibility

People: Linus TorvaldsMiquel van Smoorenburg

A brief excerpt from a long and interesting one-day thread.

In the course of discussion, Linus Torvalds fumed:

WHY THE H*LL DOES GLIBC CONTINUE TO INCLUDE LINUX KERNEL HEADER FILES?

Stop it NOW.

We had this discussion with the old libc5, where we due to historical reasons did the inclusion of kernel header files into user space. And one of the things glibc was supposed to do was to stop doing that!

The user-space library header files have to match the LIBRARY, not the kernel.

This is not open for discussion. The reason I ended up hating libc5 doing it was that it broke every once in a while when the kernel was re-organized. I thought we had gotten past that with glibc.

If the glibc people cannot figure out how "cp" works, maybe somebody should tell them. Symlinks are a maintenance nightmare, and means that not only does the user-space compilation environment suddenly depend on which version of the kernel sources you have installed (as opposed to which one you're _running_), but they also mean that suddenly you have to install the kernel sources to compile anything (or split up the kernel sources into "headers" and "the rest"). Which means that package management is screwed up etc etc.

I refuse to add more of the __KERNEL__ stupidity. The existing stuff is there for backwards compatibility, but the thing stops here.

Miquel van Smoorenburg replied:

Tell that to RedHat.

Debian ships with a set of known-good and stable kernel headers in /usr/include/linux and /usr/include/asm. Seems to work pretty well, even though people like tytso don't really agree ;)

People just need to understand that you have to compile modules with -I/usr/src/linux/include or -I/usr/src/linux-2.4.22/include

A Makefile fragment in /usr/src/linux (say, config.mk) that keeps the CFLAGS that were used to compile the kernel would go a long way to getting people to use -I/usr/src/linux-2.4.22/include pretty much automatically.
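Miquel's suggestion might look something like the following; the file name config.mk and the variable name KERNEL_CFLAGS are illustrative placeholders, not taken from any actual kernel tree:

```make
# --- /usr/src/linux/config.mk (hypothetical; written out by the
# --- kernel build so module Makefiles can reuse the same flags) ---
# KERNEL_CFLAGS := -O2 -fomit-frame-pointer -D__SMP__ ...

# --- out-of-tree module Makefile (sketch) ---
KERNELDIR ?= /usr/src/linux        # edit this for a non-standard kernel
include $(KERNELDIR)/config.mk     # pulls in KERNEL_CFLAGS

mymod.o: mymod.c
	$(CC) $(KERNEL_CFLAGS) -I$(KERNELDIR)/include -c -o $@ $<
```

The point of the fragment is that the module is then guaranteed to be built with the same compiler options (SMP, memory model, and so on) as the kernel it will be loaded into, which is exactly the mismatch Alan Cox warned about earlier in the thread.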

Later, he elaborated:

The README that comes with a module should tell you to edit the Makefile and adjust the path to the kernel if you want to build against a non-standard kernel. The default should probably be /usr/src/linux for people who installed the running kernel and headers and/or source straight from the rpm or deb and simply want to build a module.

That way, nothing depends on /usr/include/linux anymore.

So actually, we need 2 things:

Note that this is probably the 6th time that this is discussed and I am at least the 2nd person to come to this conclusion ;)

Sharon And Joy

Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at kernel.org. All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.