Table Of Contents
|1.||30 Dec 2004 - 6 Jan 2005||(2 posts)||SATA Support For Intel ICH7 Under 2.4 And 2.6|
|2.||2 Jan 2005 - 11 Jan 2005||(222 posts)||Some Debate On The Development Model|
|3.||4 Jan 2005 - 6 Jan 2005||(5 posts)||DVB bt8xx Attempted Fixes|
|4.||6 Jan 2005 - 10 Jan 2005||(12 posts)||In-Kernel Genetic Algorithm Library|
|5.||7 Jan 2005||(1 post)||Linux 2.4.29-rc1 Released|
|6.||7 Jan 2005 - 9 Jan 2005||(7 posts)||Status Of The "Halloween Document" And Friends|
|7.||9 Jan 2005 - 11 Jan 2005||(13 posts)||Status Of bcopy|
|8.||10 Jan 2005||(1 post)||IA64 Maintainership|
|9.||11 Jan 2005||(1 post)||Linux 2.2.27-rc1 Released|
|10.||12 Jan 2005||(5 posts)||DebugFS Gains Ground|
Mailing List Stats For This Week
We looked at 2201 posts in 12885K.
There were 514 different contributors. 269 posted more than once. 172 posted last week too.
The top posters of the week were:
1. SATA Support For Intel ICH7 Under 2.4 And 2.6
30 Dec 2004 - 6 Jan 2005 (2 posts) Archive Link: "[PATCH] SATA support for Intel ICH7 - 2.6.10 - repost"
Topics: Serial ATA
People: Jason Gaston, Jeff Garzik
Jason Gaston said, "This patch adds the Intel ICH7 DID's to the ata_piix.c SATA driver, ahci.c SATA AHCI driver and quirks.c for ICH7 SATA support. If acceptable, please apply." Jeff Garzik applied this to his trees, for ultimate inclusion in 2.4 and 2.6.
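The patch itself is not reproduced here, but the mechanism it uses is simple: a libata driver such as ata_piix.c carries a table of PCI vendor/device IDs, and supporting a new chipset like ICH7 largely means appending its IDs to that table. Below is a minimal userspace sketch of the matching logic; the struct is a simplified stand-in for the kernel's `struct pci_device_id`, and the table entries are illustrative only (the actual ICH7 DIDs are the ones in Jason Gaston's patch).

```c
#include <stdint.h>

/* Simplified stand-in for the kernel's struct pci_device_id; the real
 * tables in ata_piix.c and ahci.c carry more fields (class masks,
 * driver_data, etc.). */
struct pci_id {
    uint16_t vendor;
    uint16_t device;
};

#define PCI_VENDOR_ID_INTEL 0x8086

/* Illustrative table only -- the actual ICH7 device IDs are the ones
 * the patch adds, and are not reproduced here. */
static const struct pci_id piix_sata_ids[] = {
    { PCI_VENDOR_ID_INTEL, 0x24d1 },  /* an earlier ICH-era SATA ID */
    { PCI_VENDOR_ID_INTEL, 0x2651 },  /* another earlier ICH-era SATA ID */
    /* supporting a new chipset mostly means appending its IDs here */
    { 0, 0 }                          /* table terminator */
};

/* Return nonzero if (vendor, device) is in the table, mimicking how
 * the PCI core decides whether a driver claims a probed device. */
static int id_matches(const struct pci_id *tbl, uint16_t ven, uint16_t dev)
{
    for (; tbl->vendor; tbl++)
        if (tbl->vendor == ven && tbl->device == dev)
            return 1;
    return 0;
}
```

This is why such patches are small and low-risk: the probe and I/O paths are unchanged, and only the ID table grows.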
2. Some Debate On The Development Model
2 Jan 2005 - 11 Jan 2005 (222 posts) Archive Link: "starting with 2.7"
Topics: Backward Compatibility, Big Memory Support, Bug Tracking, FS: NFS, FS: devfs, Feature Freeze, Microsoft, Ottawa Linux Symposium, Power Management: ACPI, Sound: OSS, Version Control
People: William Lee Irwin III, Andries Brouwer, Adrian Bunk, Dr. David Alan Gilbert, L. A. Walsh, Bill Davidsen, Dave Jones, Willy Tarreau, Diego Calleja, Roman Zippel, Theodore Ts'o, Marcelo Tosatti, Russell King, Randy Dunlap, Alan Cox, Len Brown, Rik van Riel, Maciej Soltysiak, Andrew Morton
Maciej Soltysiak asked when the 2.6 tree would fork into 2.7, so folks could start submitting "experimental code"; and a huge discussion ensued. William Lee Irwin III replied:
I have a plan to never ever stop experimental code, which is to actually move on the 2.6.x.y strategy if no one else does and these kinds of complaints remain persistent and become more widespread.
There is a standard. Breaking things and hoping someone cleans up later doesn't work. So it has to be stable all the time anyway, and this is one of the observations upon which the "2.6 forever" theme is based. Frozen "minimal fix trees" for the benefit of those terrified of new working code (or alternatively, the astoundingly risk-averse) are a relatively straightforward theme, which kernel maintainers should be fully able to faithfully develop.
Maciej said he liked the 2.6.x.y idea, but he clarified that his question really pertained to whether there was a stock-pile of really invasive changes building up; and if so when would the 2.7 branch kick-off with these changes.
Andries Brouwer came back to William's initial reply, saying:
You are an optimist. I think reality is different.
You change some stuff. The bad mistakes are discovered very soon. Some subtler things or some things that occur only in special configurations or under special conditions or just with very low probability may not be noticed until much later.
So, your changes have a wake behind them that is wide the first few days and becomes thinner and thinner over time. Nontrivial changes may have bugs discovered after two or three years.
If a kernel is set apart and called "stable", then it is not, but it will become more and more stable over time, until after two or three years only very few unknown problems are encountered.
If you come with a new kernel every month, then you get the stability that the "stable" kernel has after less than a month, which is not particularly stable.
Rik van Riel agreed with this, pointing out that some bugs, such as those involving databases with large data-sets, could take years to fully manifest themselves.
But William said to Andries, "This is not optimism. This is experience. Every ``stable'' kernel I've seen is a pile of incredibly stale code where vi'ing any file in it instantly reveals numerous months or years old bugs fixed upstream. What is gained in terms of reducing the risk of regressions is more than lost by the loss of critical examination and by a long longshot." Adrian Bunk replied:
The main advantage with stable kernels in the good old days (tm) when 4 and 6 were even numbers was that you knew if something didn't work, and upgrading to a new kernel inside this stable kernel series had a relatively low risk of new breakages. This meant one big migration every few years and relatively easy upgrades between stable series kernels.
Nowadays in 2.6, every new 2.6 kernel has several regressions compared to the previous one, and additionally obsolete but used code like ipchains and devfs is scheduled for removal making upgrades even harder for many users.
There's the point that most users should use distribution kernels, but consider e.g. that there are poor souls with new hardware not supported by the 3 years old 2.4.18 kernel in the stable part of your Debian distribution.
Dr. David Alan Gilbert replied:
I have always found the stable series useful for two reasons:
I think (1) is very important - getting large numbers of people to test OSS is its greatest asset.
L. A. Walsh said:
I don't know about #3 below, but #1 and #2 are certainly true. I always preferred to run a vanilla stable kernel as I did not trust the vendors' kernels because their patches were not as well eyed as the vanilla kernel. I prefer to compile a kernel for my specific machines, some of which are old and do better with a hand-configured kernel rather than a Microsoftian monolith that is compiled with all possible options as modules.
I have one old laptop that sound just doesn't work on no matter what the settings -- may be failed hardware, but darned if I can't seem to easily stop the loading of sound related modules as hardware is probed by automatic hardware probing on each bootup, and the loading of sound modules by GUI dependencies on a memory constrained system.
With each new kernel release, I wonder if it will be satisfactory to use for a new, base-line, stable vanilla kernel, but post release comments seem to prove otherwise.
It seems that some developers have the opinion that the end-user base no longer is their problem or audience and that the distros will patch all the little boo-boo's in each unstable 2.6 release. Well, that's just plain asking for problems. Just in SuSE's previous release of 9.1, it wouldn't boot up, for update, on any system that used xfs disks. Redhat has officially dropped support for end-user distros, that leaves...who looking after end users? Debian, Mandrake?
From what I've read here, stable Debian, it seems, is in the 2.4 series. I don't know what Mandrake is up to, but I don't want to have to be jumping distros because my distro maker has screwed up the kernel with one of their patches. I also wouldn't want to give up reporting kernel bugs directly to developers as I would if I am using a non-vanilla, or worse, some tainted module.
However, all that being said, there would still be the choosing of someone, steady and capable, of holding on to the stable release and being its gate-keeper. It seems like it would become quite a chore to decide what code is let into the stable version. It's also considered by many to be "less" fun, not only to "manage the stable distro", but backport code into the previous distro. Maybe no one _qualified_, wanted to manage a stable release. It takes time and possibly enough time to qualify as a full-time job. It takes a special person to find gainful employment as a vendor-neutral kernel maintainer. The alternative is to try to work 2 jobs where, in programming, each job might "like" to have 60-80 hours of attention per week. That's a demanding sacrifice as well.
It may be the case that no one at the last closed door kernel developer meeting wanted to undertake the care of a stable kernel. No volunteers...no kernel. There is less "wiggle room" in the average, mature, developer's schedule with the advent of easy outsourcing to cheaper labor that doesn't come from societies that breed independence and nurture talented, more mature, or eccentric developers that love spending spare cycles working on Open Source code.
Nevertheless, it would be nice to see a no-new-features, stable series spun off from these development kernels, maybe .4th number releases, like 2.6.10 also becomes a 2.6.10.0 that starts a 2.6.10.1, then 2.6.10.2, etc...with iterative bug fixes to the same kernel and no new features in such a branch, it might become stable enough for users to have confidence installing them on their desktop or stable machines.
It wouldn't have to be months upon months of diverging code, as jumps to a new stable base can be started upon any fairly stable development kernel, say 2.6.10 w/e100 fixed, tracing fixed, the slab bug fix, and the capabilities bugs fixed going into a 2.6.10.1 that has no new features or old features removed. Serious bug fixes after that could go into a 2.6.10.2, etc. Such point releases would be easier to manage and only be updated/maintained as long as someone was interested enough to do it.
The same process would be applied to a future dev-kernel that appears to be mostly stable after some number of weeks of alpha testing. It may be the case that a given future dev-kernel has no stable branch off of it because it either a) didn't need one, or b) was too far from stable to start one.
Anyway, just a thought for having something of the old without as much of a headache of kernels that diverge for a year or more before getting sync'ed up.
Among other comments, Bill Davidsen remarked on Linda's idea about forking off a new stable series from any reasonably stable development kernel. Bill said that Andrew Morton was the only one who could accurately gauge how good vendor fixes were, "but I suspect that the kernel is moving too fast and vendors "pick one" and stabilize that, by which time the kernel.org is generations down the road. It's possible that some fixes are then rediffed against the current kernel and fed, but I have zero information on that happening or not." William replied:
It does happen. I can't give a good estimate of how often. Someone at a distro may be able to help here, though it's unclear what this specific point is useful for.
What is a useful observation is that the 2.6-style development model is not in use in these instances, which instead use the older "frozen" model. This means that using frozen models in mainline is redundant. The function and service are available elsewhere and numerous simultaneously frozen trees guarantee no forward progress during such syzygies.
Dave Jones replied:
When we shipped Fedora Core 3, we drew a line in the sand, and decided that 2.6.9 was the kernel we were going to ship with. It happened to coincide nicely with the final release date, and everyone was happy.
Post release, the myriad of users filled RH bugzilla diligently with their many reports of interesting failures. Upstream had now started charging ahead with what was to be 2.6.10.
The delta between 2.6.9 -> 2.6.10 was around 4000 changesets. Cherry picking csets to backport to 2.6.9 at this rate of change is nigh on impossible. You /will/ miss stuff. In the absence of a 2.6.9.1, we chose to use Alan's -ac patches as a base to pick up most of the interesting meat, and then cherry pick anything else which people had noticed go past, or in some cases, after investigation into a bugreport.
So now we're at our 2.6.9-ac+a few dozen 2.6.10 csets and all is happy with the world. Except for the regressions. As an example, folks upgrading from Fedora Core 2, with its 2.6.8 kernel, found that ACPI no longer switched off their machines. Much investigation went into trying to pin this down. Kudos to Len Brown and team for spending many an hour staring into bug reports on this issue, but ultimately the cause was never found. It was noted by several of our users seeing this problem that 2.6.10 no longer exhibits this flaw. Yet our 2.6.9-ac+backports+every-2.6.10-acpi-cset also was broken. It's likely Fedora will get a 2.6.10 based update before the fault is ever really found for a 2.6.9 backport.
This is just one example of a regression that crept in unnoticed, and got fixed almost by accident. (If it was intentionally fixed, we'd know which patches we needed to backport 8-)
For distro kernels to be the 'stable' branch, we *rely* on help from various subsystem maintainers to continue to bugfix old kernels, despite it being unsexy. I admit it's painful, and given the option, replying "just use 2.6.10-bk6" is a much easier alternative, but with thousands of changes going into tree each month, it's not feasible for a distro to ship updates on that basis without something happening to deal with regressions.
As for stuff going back upstream.. You may be surprised how many bugs our 2.6.9-ac-many-backports hybrid has turned up which turned out to be just as relevant on 2.6.10 Here's the patchcount in our current trees..
Fedora Core 2: 245
Fedora Core 3: 63
FC2 is our 2.6.9 hybrid (the fc3 kernel got backport to fc2 as an update), FC3 is a rebase to 2.6.10-ac2. rawhide (FC4-to-be) is 2.6.10-bk6.
Note we still have 63 patches in FC3. Out of those, just over a dozen are 'features' that we added. The majority of the rest are real bugfixes, currently languishing in out-of-tree repositories for projects like NFS, s390, e1000 updates etc.. Note also that when FC3 first shipped, before we started backporting 2.6.10 bits, the patchcount was around 40 or so, so in the 2.6.9->2.6.10 rebase, we 'grew' around 13 patches. Each time I rebase to a new upstream, I want to get back to (or better than) the original patchcount where possible. When this doesn't happen, it means we're accumulating stuff that isn't making its way upstream fast enough.
So, of those 182 patches we dropped in our 2.6.10 rebase.. Some of them were upstream backports, but some of them were patches we pushed upstream that we now get to drop on a rebase. So the push/pull ecosystem is working out pretty well in this regard. Whilst I'd like to get even more of this stuff upstream, it's the job of those out-of-tree pool maintainers to push their work, not mine.
That subthread skewed off into a discussion of binary modules, but elsewhere, Bill replied to Adrian's remark about the difficulty of upgrading in the face of things like ipchains and DevFS being slated for removal from the main kernel tree. Bill said, "And there you have my largest complaint with the new model. If 2.6 is stable, it should not have existing features removed just because someone has a new wet dream about a better but incompatible way to do things. I expect working programs to be deliberately broken in a development tree, but once existing features are removed there simply is no stable set of features." William replied, "The presumption is that these changes are frivolous. This is false. The removals of these features are motivated by their unsoundness, and those removals resolve real problems. If they did not do so, they would not pass peer review." But a few posts down the line, Willy Tarreau objected:
There was a feature freeze by which everything which was considered hard to maintain or not very stable should have been removed. When 2.6 was announced, it was with a set of features. Who knows, perhaps there are a few people who could replace a kernel 2.0 by a 2.6 on some firewalls. Even if they are only 2 or 3 people, there is no reason that suddenly a feature should be removed in the stable series. But it should be removed in 2.7 if it's a nightmare to maintain.
If the motivation to break backwards compatibility is not enough anymore to justify development kernels, I don't know what will justify it anymore. I'm particularly fed up by some developers' attitude who seem to never go outside and see how their creations are used by people who really trust the "stable" term... until they realize that this word is used only for marketing, eg. help distro makers announce their new major release at the right moment. ipfwadm had about 2 years to be removed before 2.6, wasn't that enough? Once the stable release is out, the developer's point of view about how his creation *might* be used is not a justification to remove it. But of course, his difficulties at maintaining the code is fairly enough for him to say "well, it was a mistake to enable this, I don't want it in the future version anymore".
Why do you think that so many people are still using 2.4 (and even older versions) ? This is because they are the only ones who don't constantly change under your feet and from which you can build something reliable and guaranteed maintainable. At least, I've not seen any commercial product based on 2.6 yet !
Please, stop constantly changing the contents of the "stable" kernel.
Diego Calleja said:
2.6 will keep having small issues in each release until 2.7 is forked just like 2.4 broke things until 2.5 was forked. The difference IMO is that linux development now avoids things like the instability which the 2.4.10 changes caused and things like the fs corruption bugs we saw in 2.4.
I fully agree with WLI that the 2.4 development model and the backporting-mania created more problems than it solved, because in the real world almost everybody uses what distros ship, and what distros ship isn't kernel.org but heavily modified kernels, which means that the kernel.org was not really "well-tested" or it took much longer to become "well-tested" because it wasn't really being used.
Roman Zippel replied, "Backporting isn't the primary problem. The real problem were the huge time intervals between stable releases. A new stable release brings a huge amount of changes which got different levels of testing, which makes upgrading quite an experience. What we need are regular releases of stable kernels with a manageable amount of changes and a development tree to pull these changes from. It's a bit comparable to Debian testing/unstable. Changes go only from one tree to the other if they fulfil certain criteria. The job of the stable tree maintainer wouldn't be anymore to apply random patches sent to him, but to select instead which patches to pull from the development tree. This of course doesn't guarantee perfectly stable kernels, but it would encourage more people to run recent stable kernels and avoids the huge steps in kernel upgrades. The only problem is that I don't know of any source code management system which supports this kind of development reasonably easy..." Adrian also said to Diego, "The 2.6.9 -> 2.6.10 patch is 28 MB, and while the changes that went into 2.4 were limited since the most invasive patches were postponed for 2.5, now _all_ patches go into 2.6 . Yes, -mm gives a bit more testing coverage, but it doesn't seem to be enough for this vast amount of changes." A little later, he added, "My opinion is to fork 2.7 pretty soon and to allow into 2.6 only the amount of changes that were allowed into 2.4 after 2.5 forked. Looking at 2.4, this seems to be a promising model."
Theodore Ts'o broke in at this point, to say:
You have *got* to be kidding. In my book at least, 2.4 ranks as one of the less successful stable kernel series, especially as compared against 2.2 and 2.0. 2.4 was far less stable, and a vast number of patches that distributions were forced to apply in an (only partially successful) attempt to make 2.4 stable meant that there are some 2.4-based distributions where you can't even run with a stock 2.4 kernel from kernel.org. Much of the reputation that Linux had of a rock-solid OS that never crashed or locked up that we had gained during the 2.2 days was tarnished by 2.4 lockups, especially in high memory pressure situations.
One of the things which many people have pointed out was that even 2.6.0 was more stable than 2.4 was for systems under high load.
Marcelo Tosatti said, "99% of the features distributions have applied to their 2.4 based kernels are "enterprise" features such as direct IO, AIO, etc. Really I can't recall any "attempt to make 2.4 stable" from the distros, it's mostly "attempt to backport nice v2.6 feature"." Theodore replied, "Sorry, those were two separate points; I should have been more careful to keep the two separate. I believe 2.4 has been less successful than other stable series for two reasons. The first is the very large divergence of what the distributions (and therefore most users) were actually using from each other and from kernel.org. The second is the lack of stability, in particular with systems with HIGHMEM configured, where low memory exhaustion is the first thing I suspect when a customer tells me that a 2.4-based system with a lot of memory freezes up." And William also added, "I am unfortunately holding 2.4.x' earlier history against it. While you were maintaining it, much of what we're discussing was resolved. Unfortunately, the stabilization you're talking about was essentially too late; distros had long-since wildly diverged, they had frozen on older releases, and the damage to Linux' reputation was already done. I'm also unaware of major commercial distros (e.g. Red Hat, SuSE) using 2.4.x more recent than 2.4.21 as a baseline, and it's also notable that one of the largest segments of the commercial userbase I see is using a distro kernel based on 2.4.9." Marcelo agreed wholeheartedly with this.
Elsewhere in the subthread, Theodore remarked, "The real key, as always, is getting users to download and test a release. So another approach might be to shorten the time between 2.6.x and 2.6.x+1 releases, so as to recreate more testing points, without training people to wait for -bk1, -bk2, -rc1, etc. before trying out the kernel code. This is the model that we used with the 2.3.x series, where the time between releases was often quite short. That worked fairly well, but we stopped doing it when the introduction of BitKeeper eliminated the developer synch-up problem. But perhaps we've gone too far between 2.6.x releases, and should shorten the time in order to force more testing." Russell King replied, "It is also the model we used until OLS this year - there was a 2.6 release about once a month prior to OLS. Post OLS, it's now once every three months or thereabouts, which, IMO, is far too long. I really liked the way pre-OLS 2.6 was working... it means I don't have to twiddle my fingers getting completely bored waiting for the next 2.6 release to happen. Can we return to that methodology please?" William seconded this, as did Randy Dunlap, who added, "We (whoever "we" are) have erred too much on longer cycles for stability, but it's not working out as hoped IMO." And Alan Cox said, "After 2.6.9-ac its clear that the long 2.6.9 process worked very badly. While 2.6.10 is looking much better its long period meant the allegedly "official" base kernel was a complete pile of insecure donkey turd for months. That doesn't hurt most vendor users but it does hurt those trying to do stuff on the base kernels very badly."
3. DVB bt8xx Attempted Fixes
4 Jan 2005 - 6 Jan 2005 (5 posts) Archive Link: "PATCH: DVB bt8xx in 2.6.10"
Topics: Digital Video Broadcasting
People: Arne Ahrend, Johannes Stezenbach
Arne Ahrend said, "This patch allows the user to select only actually desired frontend driver(s) for bt8xx based DVB cards by removing calls to frontend-specific XXX_attach() functions and returning NULL instead for unconfigured frontends. To keep this patch small, no attempt is made to #ifdef away other static functions or data for unselected frontends. This leads to compiler warnings about defined, but unused code, unless all four frontends relevant to bt8xx based cards are selected. I have tested this on the Avermedia 771 (the only DVB card I have access to)." Johannes Stezenbach replied, "This approach has been discussed on the linux-dvb list and was rejected because of the huge #ifdef mess it creates (you just touched bt8xx, it's even worse for saa7146 based cards). The frontend drivers are tiny so I think you can afford to load some that aren't actually used by your hardware." Arne accepted this, but offered some cosmetic changes to the code in any case; which Johannes accepted.
4. In-Kernel Genetic Algorithm Library
6 Jan 2005 - 10 Jan 2005 (12 posts) Archive Link: "[ANNOUNCE 0/4][RFC] Genetic Algorithm Library"
People: Jake Moilanen, James Bruce, Pedro Larroy, William Lee Irwin III
Jake Moilanen said:
I'm pleased to announce a new in-kernel library to do kernel tuning using a genetic algorithm.
This library provides hooks for kernel components to take advantage of a genetic algorithm. There are patches to hook the different schedulers included.
The basic flow of the genetic algorithm is as follows:
Over time the tunables should converge toward the optimal settings for that workload. If the workload changes, the tunables should converge to the new optimal settings (this is part of the reason for mutation). This algorithm is used extensively in AI.
Using these patches, there are small gains (1-3%) in Unixbench & SpecJBB. I am hoping a scheduler guru will able to rework them to give higher gains.
The main area that could use reworking is the fitness calculation. The problem is that the kernel is looking more at the micro of what's going on, instead of the macro. I am thinking of moving the fitness calculation to outside the kernel.
However, I would advocate keeping the number of layers needed to communicate between the genetic library and the hooked component down in order to keep it as lightweight as possible.
The patches are based on 2.6.9 and still a little rough, but here are the descriptions:
[1/4 genetic-lib]: This is the base patch for the genetic algorithm. It's based against 2.6.9.
[2/4 genetic-io-sched]: The base patch for the IO schedulers to use the genetic library.
[3/4 genetic-as-sched]: A genetic-lib hooked anticipatory IO scheduler.
[4/4 genetic-zaphod-cpu-sched]: A hooked zaphod CPU scheduler. Depends on the zaphod-v6 patch.
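The archive elides Jake's flow list, but a generational genetic algorithm of this kind typically ranks the population by measured fitness, keeps the fittest individuals, and refills the rest with mutated crossovers of surviving parents. The following is a toy userspace sketch of that loop, not Jake's library: the fitness function and every constant here are invented for illustration (the in-kernel library instead scores real workload performance, the part Jake says most needs reworking). The crossover shown is the two-complementary-children, single-point style he describes.

```c
#include <stdlib.h>

#define NGENES   4    /* tunables per individual (illustrative) */
#define POPSIZE  8    /* individuals per generation (illustrative) */
#define MUT_RATE 10   /* percent chance a child gene mutates */

struct individual {
    int genes[NGENES];   /* e.g. scheduler tunables */
    long fitness;        /* benchmark score; higher is better */
};

/* Toy stand-in for a fitness measurement: genes near 42 score best.
 * The kernel library would measure the running workload instead. */
static long fitness(const struct individual *ind)
{
    long f = 0;
    for (int i = 0; i < NGENES; i++)
        f -= labs(ind->genes[i] - 42);
    return f;
}

/* qsort comparator: best (highest) fitness first. */
static int cmp_fit(const void *a, const void *b)
{
    const struct individual *x = a, *y = b;
    return (y->fitness > x->fitness) - (y->fitness < x->fitness);
}

/* One generation: rank, keep the top half, refill the bottom half
 * with mutated single-point crossovers of two surviving parents. */
static void next_generation(struct individual pop[POPSIZE])
{
    for (int i = 0; i < POPSIZE; i++)
        pop[i].fitness = fitness(&pop[i]);
    qsort(pop, POPSIZE, sizeof(pop[0]), cmp_fit);

    for (int i = POPSIZE / 2; i + 1 < POPSIZE; i += 2) {
        const struct individual *pa = &pop[rand() % (POPSIZE / 2)];
        const struct individual *pb = &pop[rand() % (POPSIZE / 2)];
        int cut = 1 + rand() % (NGENES - 1);   /* crossover point */
        for (int g = 0; g < NGENES; g++) {
            /* two complementary children, so every parent gene
             * survives somewhere -- Jake's stated initial motivation */
            pop[i].genes[g]     = (g < cut) ? pa->genes[g] : pb->genes[g];
            pop[i + 1].genes[g] = (g < cut) ? pb->genes[g] : pa->genes[g];
            if (rand() % 100 < MUT_RATE)
                pop[i].genes[g] += rand() % 11 - 5;      /* mutation */
            if (rand() % 100 < MUT_RATE)
                pop[i + 1].genes[g] += rand() % 11 - 5;  /* mutation */
        }
    }
}
```

Because the top half survives each generation unchanged, the best fitness seen can only improve over time, which is the convergence property the announcement describes.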
One would expect something like an in-kernel genetic algorithm library to receive about the same welcome as an in-kernel Perl interpreter, but actually folks were fairly polite about it. James Bruce asked if Jake included a cross-over algorithm; but after perusing the code he saw that Jake did indeed implement cross-over. He asked, "What is the motivation for generating two children at once, instead of just one? Genes values shouldn't get "lost" since the parents are being kept around anyway. Also, since the parameters in general will not have a meaningful ordering, it might make sense for the generic crossover to be the "each gene randomly picked from one of the two parents" approach. In practice I've found that to mix things up a bit better in the parameter optimization problems I've done with GAs." Jake replied:
The initial motivation for creating two children at once was so each parent could pass on all of their genes. 75% of the parent's genes might be in child A, but the other 25% would be in child B.
Thinking about it more, there should be no reason that all of a parent's genes have to be passed on in a child. It would not be too difficult to have each gene come randomly from one of the two parents. I'll add that in on the next rev of the patches.
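What James suggested, and Jake agreed to adopt, is what the GA literature calls uniform crossover: each gene of a single child is drawn independently from one parent or the other, so no ordering or grouping of the parameters is assumed. A minimal sketch (NGENES is an arbitrary illustrative constant, not taken from the patches):

```c
#include <stdlib.h>

#define NGENES 8   /* number of tunables; arbitrary for illustration */

/* Uniform crossover: each child gene is drawn independently from one
 * of the two parents with equal probability. */
static void uniform_crossover(const int pa[NGENES], const int pb[NGENES],
                              int child[NGENES])
{
    for (int g = 0; g < NGENES; g++)
        child[g] = (rand() & 1) ? pa[g] : pb[g];
}
```

This mixes parameter sets more thoroughly than single-point crossover when neighboring tunables have no meaningful relationship, which is James's point about unordered parameters.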
Pedro Larroy also commented on Jake's patches, saying:
Your algorithm tends to converge to a global optimum, but also as William Lee Irwin III has commented on irc, it might miss "special points" since there's no guarantee that the function to minimize is continuous.
I think it's a good idea to introduce these techniques to tune the kernel, but perhaps userland would be the right place for them, to be able to switch them off when in need or have more control over them. But it's a nice initiative in my opinion.
Jake replied, regarding a user-space implementation. He said, "I considered doing this in userland at first, but I went away from it for a couple reasons. I wanted users of the library to have a lot of flexibility. There was also a concern with the extra overhead going inbetween user/kernel space (important for users whose children have very short life-spans)." And regarding William Lee Irwin III's idea about missing "special points", Jake said, "This is a very good point, and is something that I'm working on now. I would like to be able to have multiple fitness rankings (ex. one that ranks specifically for throughput and one specifically for interactivity/latency). Then tune specific genes, that actually impact that specific fitness check."
5. Linux 2.4.29-rc1 Released
7 Jan 2005 (1 post) Archive Link: "Linux 2.4.29-rc1"
Topics: Serial ATA
People: Marcelo Tosatti
Marcelo Tosatti announced Linux 2.4.29-rc1, saying:
Here goes the first release candidate of v2.4.29.
This time it contains a SATA update, bunch of network drivers updates, amongst others.
More importantly it fixes a sys_uselib() vulnerability discovered by Paul Starzetz:
Upgrade is recommended for users of v2.4.x mainline, distros should be releasing their updates real soon now.
6. Status Of The "Halloween Document" And Friends
7 Jan 2005 - 9 Jan 2005 (7 posts) Archive Link: "2.6.x features log"
Topics: Big O Notation, FS: devfs, Hot-Plugging, Power Management: ACPI, SMP, Scheduler, Version Control
People: Randy Dunlap, Jerome Lacoste, Rahul Karnik, Diego Calleja, Dave Jones, Christoph Hellwig
Randy Dunlap said:
I think that people really like the Dave Jones 2.5/2.6 halloween information/update. It contained a lot of useful info in one place, with pointers to more details.
What I'm seeing (and getting a little concerned about, although I dislike PR with a passion) is that the 2.6.x continuous development cycle will cause us (the Linux community) to miss logging some of these important new features (outside of bk). Has anyone kept a track of new features that are being added in 2.6?
I'll keep a list (or someone else can -- DaveJ ?) if anyone is interested in feeding items into it. Or do distros already keep such a running list of new features?
For example (and some of these might not be needed here):
Jerome Lacoste replied, "I loved going through the kernel newbies status: http://www.kernelnewbies.org/status/. Unfortunately it's not updated anymore."
Rahul Karnik also remarked, "Personally speaking, the key feature of the Halloween document was not documenting what new features we had in the kernel -- it was the ability to see what _user-visible_ changes there were. As a "mainstream" user, I might not care much about a new O(1) scheduler, but I might be affected by the removal of (say) ipchains."
Diego Calleja also said, "lwn.net has always had an excellent kernel development coverage".
Elsewhere, Dave Jones said, "I don't really have the time right now to maintain it, but if you want to take anything from the doc I wrote, or push it for inclusion in the tree so others can modify it at will, feel free." Close by, Christoph Hellwig said, "Debian actually patches Dave's post_halloween document into Documentation. Maybe we should put it there for mainline as well and make sure to update it when doing major changes?" Dave replied, "I've said "Sure, go for it" to a number of people who brought this up, but nothing has ever come of it. I'll send it to Linus myself later today. 8)"
7. Status Of bcopy
9 Jan 2005 - 11 Jan 2005 (13 posts) Archive Link: "removing bcopy... because it's half broken"
People: Arjan van de Ven, Linus Torvalds, Richard Henderson
Arjan van de Ven said:
Nothing in the kernel is using bcopy right now, and that is a good thing. Why? Because a lot of the architectures implement a broken bcopy().... the userspace standard bcopy() is basically a memmove() with a weird parameter order, however a bunch of architectures implement a memcpy() not a memmove().
Instead of fixing this inconsistency, I decided to remove it entirely, explicit memcpy() and memmove() are preferred anyway (welcome to the 1990's) and nothing in the kernel is using these functions, so this saves code size as well for everyone.
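In userspace terms, the semantics Arjan describes are easy to state: a conforming bcopy() is just memmove() with the source and destination arguments swapped, and therefore safe on overlapping buffers; the breakage is architectures wiring it to memcpy(), which is allowed to misbehave when the ranges overlap. A sketch of the correct behavior (the function name here is invented for illustration):

```c
#include <string.h>

/* A conforming bcopy() has memmove() semantics -- overlap-safe --
 * but takes its arguments in the order (src, dst, n). Implementing
 * it as memcpy(), as some architectures did, breaks overlapping
 * copies. */
static void bcopy_correct(const void *src, void *dst, size_t n)
{
    memmove(dst, src, n);   /* note the swapped argument order */
}
```

With an overlapping copy such as shifting "abcdef" forward by two bytes, memmove() produces "ababcd", while a memcpy()-based bcopy would be free to clobber the source mid-copy.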
Linus Torvalds replied:
The problem is that at least some gcc versions would historically generate calls to "bcopy" on alpha for structure assignments. Maybe it doesn't any more, and no such old gcc versions exist any more, but who knows?
That's also why "bcopy" just acts like a memcpy() in many cases: it's simply not worth it to do the complex case, because the only valid use was a compiler that would never validly do overlapping ranges anyway.
Gcc _used_ to have a target-specific "do I use bcopy or memcpy" setting, and I just don't know if that is still true. I also don't know if it affected any other platforms than alpha (I would assume that it matched "target has BSD heritage", and that would likely mean HP-UX too)
Richard? You know both gcc and alpha, what's the word?
Richard Henderson replied:
Yes, TARGET_MEM_FUNCTIONS. It's never not set for Linux targets. Or for OSF/1 for that matter... Indeed, it would take me some time to figure out which targets it's *not* set for.
(Yet another thing that ought to get cleaned up -- either invert the default value or simply require the target to either provide the libc entry point or add a version to libgcc.)
I'm not sure how far back you'd have to go to find an Alpha compiler that needs this. Prolly back to at least gcc 2.6, but I don't have sources that old handy to check.
8. IA64 Maintainership
10 Jan 2005 (1 post) Archive Link: "[PATCH] New ia64 maintainer"
People: Tony Luck, David Mosberger
Tony Luck said that David Mosberger-Tang had "handed over the keys a few months ago. Time to make sure everyone knows to send stuff to me." He posted a patch to list himself as the official maintainer of the IA64 platform instead of David.
9. Linux 2.2.27-rc1 Released
11 Jan 2005 (1 post) Archive Link: "Linux 2.2.27-rc1"
People: Marc-Christian Petersen
Marc-Christian Petersen announced Linux 2.2.27-rc1, saying, "here goes 2.2.27-rc1. Please let me know if I missed something security related. It's hard to keep up2date with latest tons of security vulns ;)"
10. DebugFS Gains Ground
12 Jan 2005 (5 posts) Archive Link: "debugfs directory structure"
People: Roland Dreier, Greg KH
Roland Dreier said, "Now that debugfs is merged into Linus's tree, I'm looking at using it to replace the IPoIB debugging pseudo-filesystem (ipoib_debugfs). Is there any guidance on what the structure of debugfs should look like? Right now I'm planning on putting all the debug info files under an ipoib/ top level directory. Does that sound reasonable?" Greg KH was thrilled that Roland was going to use it; he said, "Anarchy rules in debugfs. Do what you want. If you stomp over someone else's stuff, I expect complaints and maybe someone will have to arbitrate, but the odds that that will ever happen are pretty slim. So yes, ipoib/ in the top level sounds just fine."
Sharon And Joy
Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at kernel.org. All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.