Table Of Contents
1. | 20 Jun 2005 - 14 Jul 2005 | (588 posts) | Status Of -mm Tree Merging Into 2.6.13 |
2. | 28 Jun 2005 - 8 Jul 2005 | (8 posts) | Clamping Down On SysFS "Abuses" |
3. | 28 Jun 2005 - 13 Jul 2005 | (19 posts) | Some Consideration Of Swap Files Versus Swap Partitions |
4. | 1 Jul 2005 - 8 Jul 2005 | (8 posts) | Status Of Linux Trace Toolkit Overhaul |
5. | 7 Jul 2005 - 11 Jul 2005 | (16 posts) | Linux 2.6.13-rc2-mm1 Released |
6. | 7 Jul 2005 - 9 Jul 2005 | (4 posts) | Audit Subsystem Maintainership |
7. | 8 Jul 2005 - 12 Jul 2005 | (28 posts) | New Apple USB Touchpad Driver For Recent PowerBooks |
8. | 9 Jul 2005 | (1 post) | bootutils 0.0.5 Released |
9. | 11 Jul 2005 - 13 Jul 2005 | (7 posts) | Summary Of Recent RT Patch Acceptance Discussion |
10. | 12 Jul 2005 - 14 Jul 2005 | (21 posts) | Linux 2.6.13-rc2-mm2 Released |
Mailing List Stats For This Week
We looked at 2412 posts in 14MB. See the Full Statistics.
There were 717 different contributors. 276 posted more than once. The average length of each message was 102 lines.
The top posters of the week were:
  87 posts in 384KB by hans reiser
  85 posts in 538KB by david masover
  65 posts in 609KB by hal rosenstock
  62 posts in 288KB by nigel cunningham
  57 posts in 289KB by pavel machek
The top subjects of the week were:
  394 posts in 3MB for "reiser4 plugins"
  104 posts in 436KB for "-mm -> 2.6.13 merge status"
  51 posts in 333KB for "[git patches] ide update"
  51 posts in 250KB for "reiser4 vs politics: linux misses out again"
  40 posts in 157KB for "-mm -> 2.6.13 merge status (fuse)"
These stats generated by mboxstats version 2.8
1. Status Of -mm Tree Merging Into 2.6.13
20 Jun 2005 - 14 Jul 2005 (588 posts) Archive Link: "-mm -> 2.6.13 merge status"
Topics: FS: CacheFS, FS: NFS, FS: sysfs, Hot-Plugging, Kexec, Profiling, SMP, Software Suspend
People: Andrew Morton, Miklos Szeredi, Andi Kleen, Christoph Hellwig, Hans Reiser, Jeff Garzik, Eric Van Hensbergen, Ronald G. Minnich
Andrew Morton said:
This summarises my current thinking on various patches which are presently in -mm. I cover large things and small-but-controversial things. Anything which isn't covered here (and that's a lot of material) is probably a "will merge", unless it obviously isn't.
(If you reply to this email it would be a good idea to alter the Subject: to reflect which feature you are discussing)
git-ocfs
The OCFS2 filesystem. OK by me, although I'm not sure it's had enough review.
sparsemem
OK by me for a merge. Need to poke arch maintainers first, check that they've looked at it sufficiently closely.
vm-early-zone-reclaim
Needs some convincing benchmark numbers to back it up. Otherwise OK.
avoiding-mmap-fragmentation
Tricky. Addresses vm area fragmentation issues due to recent optimisations to the free-area lookup code. Will merge.
periodically-drain-non-local-pagesets
Will merge
pcibus_to_node and users
Will merge
CONFIG_HZ for x86 and ia64: changes default HZ to 250, make HZ Kconfigurable.
Will merge (will switch default to 1000 Hz later if that seems necessary)
dmi-*.patch
Will merge. I have a comment "The below break x440". Maybe it got fixed. We'll doubtless hear if not.
xen-*.patch
These are little cleanups and abstractions which make a Xen merge easier. May as well merge them.
CPU hotplug for x86 and x86_64
Not really useful on current hardware, but these provide infrastructure which some power management patches need, and it seems sensible to make the reference architecture support hotplug. Will merge.
swsusp-on-SMP
Will merge.
cfq version 3
Not sure. Jens seems to be setting up a few git trees. On hold.
RCUification of the key management code
Don't know - dhowells seemed diffident last time we discussed this.
timers-fixes-improvements.patch
SMP speedups for the core timer code. It was bumpy, but this seems stable now. Will merge.
kprobes-*
Will merge
rapidio-*
Will merge.
namespace*.patch
Awaiting viro ack.
xtensa architecture
Is xtensa now, or will it be in the future a sufficiently popular architecture to justify the cost of having this code in the tree?
Heaven knows. Will merge.
dlm-*.patch: Red Hat distributed lock manager
Hard. Right now it seems that no in-kernel projects will use this and only one out-of-kernel project will use it. Shelve the problem until after Kernel Summit, where some light may be shed.
Opinions are sought...
connector.patch
Nice idea IMO, but there are still questions around the implementation. More dialogue needed ;)
connector-add-a-fork-connector.patch
OK, but needs connector.
inotify
There are still concerns about the userspace API and internal implementation details. More slogging needed.
pcmcia-*.patch
Makes the pcmcia layer generate hotplug events and deprecates cardmgr. Will merge.
NUMA-aware slab allocator
Seems stable now, but it needs some ifdef reduction work before merging, please.
CPU scheduler
Will merge some of these patches. We're still discussing which ones.
perfctr
Not yet, but getting closer. The PPC64 guys still need to sort out a few interface issues with Mikael. We might be able to fit this into 2.6.13 if people get a move on.
cachefs
This is a ton of code which knows rather a lot about pagecache internals. It allows the AFS client to cache file contents on a local blockdev.
I don't think it's a justified addition for only AFS and I'd prefer to see it proven for NFS as well.
Issues around add-page-becoming-writable-notification.patch need to be resolved.
cachefs-for-nfs
A recent addition. Needs review from NFS developers and considerably more testing.
These things aren't looking likely for 2.6.13.
kexec and kdump
I guess we should merge these.
I'm still concerned that the various device shutdown problems will mean that the success rate for crashing kernels is not high enough for kdump to be considered a success. In which case in six months time we'll hear rumours about vendors shipping wholly different crashdump implementations, which would be quite bad.
But I think this has gone as far as it can go in -mm, so it's a bit of a punt.
reiser4
Merge it, I guess.
The patches still contain all the reiser4-specific namespace enhancements, only it is disabled, so it is effectively dead code. Maybe we should ask that it actually be removed?
v9fs
I'm not sure that this has a sufficiently high usefulness-to-maintenance-cost ratio.
fuse
This is useful, but there are, AFAIK, two issues:
- We're still deadlocked over some permission-checking hacks in there
- It has an NFS server implementation which only works if the to-be-served file happens to be in dcache.
It has been said that a userspace NFS server can be used to get full NFS server functionality with FUSE. I think the half-assed kernel implementation should be done away with.
execute-in-place
Will merge. Have the embedded guys commented on the usefulness of this for execute-out-of-ROM?
There was quite a long thread following this post. A frustrated Miklos Szeredi spoke out about the FUSE situation. The sticking point was whether to keep unprivileged mount support in the FUSE patch, or remove it to make acceptance easier. Miklos said he would not remove it even if that meant FUSE couldn't go in the kernel. He argued that people should examine the code and the feature, and try to improve their understanding before simply rejecting it on aesthetic grounds. Various folks argued back and forth without saying much; then Andrew asked for some clarification on what the controversy was really about. Miklos said:
The controversial part is fuse_allow_task() called from fuse_permission() and fuse_revalidate() (fs/fuse/dir.c).
This function (as explained by the header comment) disallows access to the filesystem for any task, which the filesystem owner (the user who did the mount) is not allowed to ptrace.
The rationale is that accessing the filesystem gives the filesystem implementor ptrace like capabilities (detailed in Documentation/filesystems/fuse.txt)
It is controversial, because obviously root owned tasks are not ptrace-able by the user, and so these tasks will be denied access to the user mounted filesystem (-EACCESS is returned on stat() or any other file operation).
However nobody raised _any_ concrete technical problem associated with this, and the 4 years of widespread use didn't turn up any either. So IMO it's "ugly" only in people's heads and not in reality.
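To picture the shape of the check, here is a minimal sketch of the kind of test Miklos describes; it is not the actual FUSE code, and a bare uid comparison stands in for the kernel's full "may the mount owner ptrace this task?" test (the struct and field names are invented for illustration):

    /* Illustrative sketch only -- not the real fuse_allow_task().
     * A bare uid comparison stands in for the full ptrace-style
     * test; names are invented for this example. */
    struct fuse_conn_sketch {
            uid_t mount_owner_uid;          /* user who did the mount */
    };

    static int fuse_allow_task_sketch(struct fuse_conn_sketch *fc,
                                      struct task_struct *task)
    {
            /* Tasks the mount owner could not ptrace (root-owned daemons,
             * other users' tasks) are refused; their file operations on
             * the mount then fail with -EACCES, which is exactly the
             * behaviour people find controversial. */
            return task->uid == fc->mount_owner_uid &&
                   task->euid == fc->mount_owner_uid;
    }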
The discussion continued, and Andrew seemed to grok what Miklos was saying, but there was still no concrete decision made.
Elsewhere, Andi Kleen had some remarks to make about the prospect of Reiser4 going in. He said, "Has there been actually any serious review on this? Last time I looked there was a lot of very ugly code in there. Also I'm not sure things like coming with its own profiler and spinlock debugger are really acceptable. At least this stuff should be removed too." He asked if the code base had been reviewed, and Christoph Hellwig replied, "I don't think so. Everyone used the previous criteria of the broken core changes, broken filesystem semantics and its own useless abstraction layer as an excuse not to look deeply at this huge mess yet." But Hans said:
V4 has a mailing list, and a large number of testers who read the code and comment on it. V4 has been reviewed and tested much more than V3 was before merging. Given that we sent it in quite some time ago, your suggestion that an additional review by unspecified additional others be a requirement for merging seems untimely. Do you see my point of view on this?
I would however enjoy receiving coding suggestions at ANY time. We don't get as much of that as I would like. I would in particular love to have you Andi Kleen do a full review of V4 if you could be that generous with your time, as I liked much of the advice you gave us on V3.
Unspecified others doing a review, well, who knows, I will surely take the time to consider what is said by them though.....
I would prefer to not get reviews from authors of other filesystems who prefer their own code, skim through our code without taking the time to grok our philosophy and approach in depth, and then complain that our code is different from what they chose to write, and think that our changing to be like them should be mandated. I will not name names here....
Some of the suggestions on our mailing list are great, some reflect a lack of 5 years working with our code, perhaps I should feed our mailing list into the linux kernel mailing list so that people on the kernel mailing list are more aware that we exist and are active?
Jeff Garzik pointed out, "when a merge is imminent, a lot more attention is paid" ... "If you want to get your code merged, you gotta work with the system, and LISTEN to the feedback."
At some point close by, Hans remarked, "I like feedback on our code, and I particularly like feedback from a Mr. Andi Kleen, but there is no need to tie it to merging. If, however, it serves as an effective excuse to get some of your time allocated by SuSE management, sure, go for it.;-)" Jeff replied, "All merges of new code go like this. You've been around here for a while, this should not be a shock. "Hans' team says its good stuff" is not a criteria for merging." Hans suggested benchmarking the code instead of talking so much, and Jeff replied, "Still not a criteria for merging. We have to care about the code behind the benchmarks."
Elsewhere, regarding v9fs, Eric Van Hensbergen said to Andrew:
I think v9fs/9P has some unique aspects which differentiate it from the other distributed system protocols integrated into Linux: a) it presents a unified distributed resource sharing protocol. It will be able to distribute devices, file systems, system services, and application interfaces.
v9fs-2.0 has a somewhat limited audience at the moment - but now that the initial implementation is more or less complete we are working to build applications on top of it (and provide a better server). It's being integrated into cluster projects at LANL and being looked at wrt virtualization I/O at IBM. Its our hope that these improvements and cluster applications will motivate more wide-spread use of the v9fs module.
Ronald G. Minnich also added:
I got pointed at this discussion. Here are my $.02 on why we at LANL are interested in v9fs.
We build clusters on the order of 2000 machines at present, with larger systems coming along. The system which we use to run these clusters is bproc. While bproc has proven to be very powerful to date, it does have its limits:
We have a desire to build single-system-image looking clusters along the bproc model, but at the same time compose those clusters of, e.g., Opterons and G5s. This mixing is highly desirable for computations that have phases, some of which belong on one type of a machine, and some on another.
We are going to use v9fs as the glue for our next-generation cluster software, called 'xcpu'. Xcpu has been implemented on Plan 9 and works there. I have ported xcpu to Linux, using v9fs as the client side and Russ Cox's plan9ports server to write servers.
xcpu presents a remote execution service as a 9p server. xcpu has been tested across architectures and it works very well. By summer 2006, we hope to have cut over our bproc systems to xcpu.
That's one use for v9fs. We also plan to use v9fs to provide us with servers for global /proc, monitoring, and control systems for our clusters.
The global /proc is interesting. bproc provides a global /proc, but it is incomplete; entries for, e.g., exe and maps are not filled in. bproc also caches part of the /proc, but the rules about what is cached and what the timeouts are, are set in the kernel module and not easily changed. We are going to have an "aggregating" user level 9p server based on Mirtchovskis's aggrfs, which will both aggregate all the cluster nodes, and have caching rules that make sense in clusters of 1000s of nodes (for example, it is ok to cache /proc/x/status; there is no need to cache /proc/x/maps, and you probably don't want to anyway).
A neat capability is that if we give a user, e.g., 25% of the cluster, we can tailor that user's name space so that they only see their procs and the 25% of the cluster they own. This is good for security, but also good for convenience: most users don't really care that some other user is on 75% of the cluster. Global pid spaces are neat in theory, messy in practice at large scale. I want my global pid space to be global to *me*, meaning I see the global space of the nodes I care about. The sysadmin, of course, wants to see everything. All this is possible. V9fs, along with Linux private name spaces, will allow us to provide this model: users can see some or all of the global pid space, depending on need; users can be constrained to only see part of the global pid space, depending on other issues.
9p will also replace the Supermon protocol, allowing people to easily view status information in a file system.
In addition to the cluster usage, there is also grid usage. The 9grid, composed of plan 9 systems, is connected by 9p servers. Linux systems can join the 9grid with no problem, once Linux has v9fs.
Were v9fs just a file system, I would not really be interested in it one way or another; we have NFS, after all. But v9fs is really the key piece of a new model of cluster services we are building at LANL. 9p will be the glue, and v9fs will be the needed client side for hooking 9p servers into the file system name space.
I'm hoping we can see v9fs in the kernel someday.
2. Clamping Down On SysFS "Abuses"
28 Jun 2005 - 8 Jul 2005 (8 posts) Archive Link: "sysfs abuse in recent i2o changes"
People: Christoph Hellwig, Markus Lidel, Greg KH
Christoph Hellwig remarked:
drivers/message/i2o/config-osm.c has a function sysfs_create_fops_file, which creates a sysfs file with supplied file_operations. This is pretty much against the sysfs design, which only wants simple attributes: ASCII or, for corner cases, binary.
Also, if we're going to allow this code it should move to sysfs. And stop using lookup_hash directly (use lookup_one_len instead), it'll go away soon.
Markus Lidel replied:
First, the attributes provided through these functions are for accessing the firmware... The controller has a little limitation, it could only handle 64k blocks, but sysfs only has 4k...
Now there are two options:
IMHO the first is not a very good solution, because for a 64k block it has to be written 16 times...
Of course if someone finds a better solution i would be glad to hear about it...
Greg KH said, "Use the binary file interface of sysfs, which was written exactly for this kind of thing. :)" Markus gave this a try, but said, "i haven't found a way to increase the block size beyond 4k, could you please tell me how i could adjust it, or where i could read about it?" Greg replied:
Your code should not care about the block size of the data given to you, as userspace could be giving you 1 byte at a time. Buffer it up yourself and then write it out to the device when needed.
But if you are doing this for firmware, then please use the kernel firmware interface, it does all of the buffering for you.
Either way, having your own file_ops in sysfs is not allowed.
Markus said that Greg's solution was more complex and required a lot more code. Greg offered a couple of suggestions, none of which worked for Markus, and the thread ended.
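For reference, here is a minimal sketch of the binary-attribute interface Greg was pointing at, roughly as it looked in 2.6 kernels of that era; the callback prototypes changed between kernel versions, so treat the signatures as approximate and the firmware-handling bodies as placeholders:

    #include <linux/sysfs.h>
    #include <linux/kobject.h>
    #include <linux/stat.h>

    /* Userspace may hand the data over in arbitrary-sized chunks (sysfs
     * gives the driver at most a page at a time), so the driver buffers
     * the pieces itself and pushes a complete image to the hardware. */
    static ssize_t fw_read(struct kobject *kobj, char *buf,
                           loff_t off, size_t count)
    {
            /* copy up to 'count' bytes of the image, starting at 'off' */
            return 0;
    }

    static ssize_t fw_write(struct kobject *kobj, char *buf,
                            loff_t off, size_t count)
    {
            /* accumulate this chunk; flush to the controller when complete */
            return count;
    }

    static struct bin_attribute fw_attr = {
            .attr  = { .name = "firmware", .mode = S_IRUSR | S_IWUSR },
            .read  = fw_read,
            .write = fw_write,
    };

    /* registered with something like:
     * sysfs_create_bin_file(&dev->kobj, &fw_attr); */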
3. Some Consideration Of Swap Files Versus Swap Partitions
28 Jun 2005 - 13 Jul 2005 (19 posts) Archive Link: "Swap partition vs swap file"
Topics: FS: ext3
People: Andrew Morton, Mike Richards, Coywolf Qi Hunt, Bernd Eckenfels, Jeremy Nickurak, Wakko Warner
Mike Richards asked if there were any differences between using a swap file and a swap partition. Andrew Morton replied, "In 2.6 they have the same reliability and they will have the same performance unless the swapfile is badly fragmented." Mike replied:
Three more short questions if you have time:
To the first question, Andrew replied, "2.4 is weaker: it has to allocate memory from the main page allocator when performing swapout. 2.6 avoids that." And to the third, Andrew said there was no performance penalty for creating a swapfile on a journaled filesystem. He said, "The kernel generates a map of swap offset -> disk blocks at swapon time and from then on uses that map to perform swap I/O directly against the underlying disk queue, bypassing all caching, metadata and filesystem code."
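The idea behind that map is easy to picture: at swapon time the kernel records which runs of pages in the swapfile sit on which runs of disk blocks, and swap I/O then only ever consults that table. A purely illustrative sketch (the names and types here are invented, not the kernel's own):

    /* Invented names; the kernel keeps a comparable per-swapfile extent
     * list and uses it to turn a swap offset into a disk block without
     * going back through the filesystem. */
    struct swap_extent_sketch {
            unsigned long start_page;       /* first swap page of the extent */
            unsigned long nr_pages;         /* extent length in pages */
            unsigned long start_block;      /* first disk block of the extent */
    };

    static unsigned long offset_to_block(struct swap_extent_sketch *map,
                                         int nr_extents, unsigned long offset)
    {
            int i;

            for (i = 0; i < nr_extents; i++) {
                    if (offset >= map[i].start_page &&
                        offset <  map[i].start_page + map[i].nr_pages)
                            return map[i].start_block +
                                   (offset - map[i].start_page);
            }
            return 0;       /* unreachable for a valid swapfile */
    }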
To the second question, regarding fragmentation, Andrew said, "Create the swapfile when the filesystem is young and empty, it'll be nice and contiguous. Once created the kernel will never add or remove blocks. The kernel won't let you use a sparse file for a swapfile." Coywolf Qi Hunt remarked, "I guess/hope dd always makes it contiguously." And Bernd Eckenfels replied, "No, it is creating files by appending just like any other file write. One could think about a call to create unfragmented files however since this is not always working best is to create those files young or defragment them before usage." But Jeremy Nickurak remarked that "this defeats one of the biggest advantages a swap file has over a swap partition: the ability to easily reconfigure the amount of hd space reserved for swap." Wakko Warner then asked, "Is it possible to create a large file w/o actually writing that much to the device (ie uninitialized). There's absolutely no reason that a swap file needs to be fully initialized, only part which mkswap does. Of course, I would expect that ONLY root be able to do this." A couple of posts down the line, Bernd replied, "There is no portable/documented way to grow a file without having the file system null its content. However why is that a problem, you don't create those files very often. Besides it is better for the OS to be able to assume that a page with zeros in it is equal to the page on fresh swap."
4. Status Of Linux Trace Toolkit Overhaul
1 Jul 2005 - 8 Jul 2005 (8 posts) Archive Link: "[PATCH/RFC] Significantly reworked LTT core"
People: Karim Yaghmour, Christoph Hellwig
Karim Yaghmour said:
A few months back, there was a very large thread of discussion about the inclusion of the ltt code by Andrew in -mm. Following this discussion, relayfs was quite heavily trimmed down. However, unlike what I had promised, I never got around to actually do the same to the ltt code. Part of it was my not being ready to actually gut 5 years of coding ... that was just kind of difficult. Lately though, through active discussion on the ltt-dev list, this issue has resurfaced and a few pieces of revamped code started going around. Thanks to Mathieu Desnoyers (Ecole Polytechnique) and Michael Raymond (SGI) getting things moving again, I got back to thinking about the best way to get the LTT code down to a palatable structure. And this time around, I gave simplicity a chance ...
Which brings me to the patch below. This is a significantly cut down version of the ltt core. It's now 5K instead of the initial 100K. While the size has been trimmed down, much of the functionality can still be easily obtained through the introduction of a new method: the ltt multiplexer (ltt_mux). Basically, this is the function that controls the tracing behavior. If none is provided, no tracing goes on. Typically, such a function would be implemented as part of a loadable "control" module. Said module would be responsible for:
IOW, much of what was purged can now be modularized and loaded separately. Obviously this doesn't preclude having those modules still packaged with the rest of the kernel, but it does make things much cleaner.
This patch isn't definitive, it's truly experimental. I've only compile-tested it for now. I'm posting it here mostly as a preview. Of course, your feedback is welcome.
Christoph Hellwig remarked:
This code is rather pointless. The ltt_mux is doing all the real work and it's not included. And while we're at it the layering for it is wrong as well - the ltt_log_event API should be implemented by the actual multiplexer with what's in ltt_log_event now minus the irq disabling becoming a library function.
Exporting a pointer to the root dentry seems like a very wrong API as well, that's an implementation detail that should be hidden.
Besides that the code is not following Documentation/CodingStyle at all, please read it.
Besides that I'd suggest scrapping the ltt name and ltt_ prefix - we know we're on linux, and we don't care whether it's a toolkit, but spelling trace_ out would actually be a lot more descriptive. So what about trace_* symbol names and trace.[ch] filenames?
Karim said he didn't mind changing the name, and he'd look into following CodingStyle more closely. Regarding Christoph's criticism of the code itself, Karim replied:
Yes, you're partially right, ltt_mux is doing a lot of work, and it's not included. However, what work ltt_mux is doing is administrative and that's what was complained about a lot last time the ltt patches were included. So yes, I could provide a very basic ltt_mux that would instantiate a single relayfs channel and does no filtering whatsoever, but that would be insufficient for real usage. And if I provided a full mux, then we'd pretty much end up with the same code we had previously.
By having it this way, the essential part of the mechanism, its logging code, is shared by all, yet there can be any number of muxes loaded on top of it. The LKST project, for example, has got a module that just counts the events that occur. Plug that in as the mux, always return NULL (no channel to write to), and you're ready to go.
For ltt, the mux would be quite involved, including having netlink sockets going back and forth talking to a user-space daemon, and allowing quite a few options/features to be set.
In other cases, it should be fairly simple to implement a mux local to a given subsystem that a developer needs to monitor. He can then manage everything about how tracing goes on without having to rewrite his own logging function.
The rationale here is simple: there is no need to have multiple logging functions, but there are already multiple existing implementations of deciding how and what needs to be logged, how it's controlled, and how it interfaces with the outside world (be it user-space or otherwise.) This code, simplistic as it may be, serves this reality quite well.
If what's in ltt_log_event goes into the multiplexer, then we're back to having each implementation have its own buffering mechanism and yet no single entry-point for tracing inside the kernel.
Replacing local_irq_disable/enable() with function pointers is not a problem, if that's something desirable.
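To picture the split Karim is describing, here is a rough sketch of the hook pattern; the names and signatures are invented for illustration and the real ltt_mux interface may well differ. The core exports a single logging entry point that does nothing unless a control module has registered a multiplexer to choose an output channel:

    /* Rough sketch with invented names; the real ltt_mux API may differ. */
    struct ltt_channel;             /* opaque here */

    typedef struct ltt_channel *(*ltt_mux_fn)(int event_id, const void *data,
                                              size_t len);

    static ltt_mux_fn ltt_mux;      /* NULL => tracing is off */

    int ltt_register_mux(ltt_mux_fn fn)
    {
            ltt_mux = fn;
            return 0;
    }

    void ltt_log_event(int event_id, const void *data, size_t len)
    {
            struct ltt_channel *chan;
            unsigned long flags;

            if (!ltt_mux)
                    return;                 /* no control module loaded */

            local_irq_save(flags);
            chan = ltt_mux(event_id, data, len);
            if (chan)
                    /* write the event into the chosen relayfs channel */;
            local_irq_restore(flags);
    }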
Christoph said, "We're not gonna add hooks to the kernel so you can compile the same horrible code you had before against it out of tree. Do a sane demux and submit it." And Karim replied, "If I just wanted hooks, I would have submitted a patch that did just that, without any logging function. The code for the mux that goes on top of that code is actually on its way to be completely rewritten. I can see that you may have read my posting as indicating that we were recompiling the same previous code out of tree, but that is certainly not the intent. FWIW, we'll look at submitting a minimal mux with the patch."
5. Linux 2.6.13-rc2-mm1 Released
7 Jul 2005 - 11 Jul 2005 (16 posts) Archive Link: "2.6.13-rc2-mm1"
Topics: Digital Video Broadcasting, Kernel Release Announcement, Software Suspend, User-Mode Linux, Virtual Memory
People: Miklos Szeredi, Andrew Morton
Andrew Morton announced Linux 2.6.13-rc2-mm1, saying:
ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.13-rc2/2.6.13-rc2-mm1/
(kernel.org seems to be stuck again - there's a copy at http://www.zip.com.au/~akpm/linux/patches/stuff/2.6.13-rc2-mm1.gz)
Miklos Szeredi asked about the status of FUSE inclusion, but there was no yay or nay on it.
6. Audit Subsystem Maintainership
7 Jul 2005 - 9 Jul 2005 (4 posts) Archive Link: "[PATCH] Add MAINTAINERS entry for audit subsystem"
Topics: MAINTAINERS File
People: David Woodhouse, Chris Wright, Andrew Morton
Chris Wright added an entry for the audit subsystem to the MAINTAINERS file, giving a mailing list but no actual maintainer. As it turned out, David Woodhouse had already submitted a similar patch to Andrew Morton's -mm tree, but Chris had missed it because David had made a slight alphabetizing error. They sorted it out.
7. New Apple USB Touchpad Driver For Recent PowerBooks
8 Jul 2005 - 12 Jul 2005 (28 posts) Archive Link: "[PATCH] Apple USB Touchpad driver (new)"
Topics: USB
People: Stelian Pop, Vojtech Pavlik, Peter Osterlund, Johannes Berg
Stelian Pop said:
This is a driver for the USB touchpad which can be found on post-February 2005 Apple PowerBooks (PowerBook5,6).
This driver is derived from Johannes Berg's appletrackpad driver (http://johannes.sipsolutions.net/PowerBook/touchpad/), but it has been improved in some areas:
This driver has been tested by the readers of the 'debian-powerpc' mailing list for a few weeks now and I believe it is now ready for inclusion into the mainline kernel.
Credits go to Johannes Berg for reverse-engineering the touchpad protocol, Frank Arnold for further improvements, and Alex Harper for some additional information about the inner workings of the touchpad sensors.
Johannes Berg was happy to see this going into the kernel, and offered some technical suggestions, which Stelian accepted, submitting an updated patch. Vojtech Pavlik, Peter Osterlund and others also pitched in with their suggestions, which Stelian also implemented.
8. bootutils 0.0.5 Released
9 Jul 2005 (1 post) Archive Link: "[ANNOUNCE] bootutils v0.0.5"
Topics: FS: ReiserFS, FS: ext2, FS: ext3, FS: initramfs, FS: ramfs, Klibc
People: Nigel Kukard
Nigel Kukard said:
Project Description:
BootUtils is a collection of utilities to facilitate booting of modern Kernel 2.6 based systems. BootUtils is designed for initramfs, although volunteers to add support for initrd are welcome. The process of finding the root volume either by label or explicit label= on the kernel command line, mounting it and 'switchroot'ing is automated. BootUtils can also drop to emergency shell if the root volume cannot be mounted. Why not even start sshd and allow admin login if the box is in a remote location?
Features:
Changes:
Website:
9. Summary Of Recent RT Patch Acceptance Discussion
11 Jul 2005 - 13 Jul 2005 (7 posts) Archive Link: "Attempted summary of "RT patch acceptance" thread, take 2"
Topics: Assembly, Big O Notation, Microkernels: Adeos, Networking, POSIX, Real-Time: RTAI, SMP, Scheduler, Small Systems, Sound: ALSA, Virtual Memory
People: Paul E. McKenney, Thomas Gleixner, Lee Revell, David Lang, Bill Davidsen, Duncan Sands, Karim Yaghmour, Steven Rostedt, John Alvord, Takashi Iwai, Peter Chubb, Inaky Perez-Gonzalez, Andrew Morton, Paul G. Allen, Con Kolivas, Ingo Molnar, Victor Yodaiken, Kristian Benoit, Jonathan Corbet, Andrea Arcangeli, Gene Heskett, Daniel Walker, Darren Hart, Nicolas Pitre, Philippe Gerum, Sven-Thorsten Dietrich, Chris Friesen, Marcelo Tosatti, Paulo Marques, Nick Piggin, Andi Kleen, Bill Huey, William Lee Irwin III, Zwane Mwaikambo
Paul E. McKenney posted a summary of some recent discussion of RT patch acceptance:
CONTENTS
A. INTRODUCTION
B. DESIRABLE PROPERTIES
C. LINUX REALTIME APPROACHES
D. OTHER ASPECTS OF REALTIME
E. SUMMARY
F. RESOURCES
Search for a line beginning with the corresponding capital letter followed by a period to jump to the corresponding section.
A. INTRODUCTION
Common wisdom dictates that realtime operating systems, particularly hard-realtime operating systems, must be designed from the ground up; that serious realtime support cannot simply be grafted onto an existing general-purpose operating system. Although this common wisdom was not arrived at lightly, it is often worthwhile to look for important exceptions to this sort of general rule of thumb. Candidate exceptions include:
There are still limits to the degree of realtime support that one can expect from a general-purpose OS -- there are some extremely demanding applications that can be satisfied only by hand-coded assembly running on bare metal. In fact, there are applications that can be satisfied only by custom hardware implementations. For example, standard DRAM is only so fast, and large CPU caches help only the common case, not the worst case that is important for hard realtime. In this case, the custom hardware might be a small CPU core with a modest amount of static RAM. In still more demanding situations, custom logic might be required.
Nevertheless, it is clear that Linux can support significant realtime requirements, as it is already being used heavily in the realtime arena. But how far should Linux extend its realtime support, and what is the best way to extend Linux in this direction? Can one approach to realtime satisfy all reasonable requirements, or would it be better to support multiple approaches, each with its area of applicability?
The answers to these questions are not yet clear, and have been the subject of much spirited discussion, for example, see the more than 300 messages in the following LKML thread:
http://lkml.org/lkml/2005/5/23/156
http://marc.theaimsgroup.com/?l=linux-kernel&m=111689227213061&w=2
This document looks at some strategies that have been proposed for realtime Linux, comparing and contrasting their capabilities. But, to evaluate these strategies, it is first necessary to determine what exactly one might want in a realtime Linux. If you would rather skip straight to the comparing and contrasting, search for "LINUX REALTIME APPROACHES".
B. DESIRABLE PROPERTIES
As usual, there are conflicting desires, at least they conflict given the current state of the art. These desires fall into the following categories:
Each of these categories is expanded upon below, and later used to compare a number of proposed realtime approaches for Linux. The discussion does go for some time, which is not surprising given that it is summarizing many hundreds of email messages. ;-) Search for the corresponding number at the beginning of a line to skip directly to the discussion of a given category.
The traditional view is that the entire operating system is either hard realtime, soft realtime, or non-realtime, but this viewpoint is too coarse grained. Different workloads have different needs, and there is disagreement over the exact definitions of these three categories of realtime. For example, (at least) the following two definitions of "hard realtime" are in use:
a. In absence of hardware failures, software provably meets the specified deadlines. This is fine and good, but many applications simply do not need this "diamond hard" realtime.
b. Failure to meet the specified deadline results in application failure. This is OK, but -only- if there is a corresponding required probability of success. Otherwise, one could claim "hard realtime" by simply failing the application every time it tries to do anything, which is clearly not useful.
A better approach is to simply specify the required probability of meeting the specified deadline in the absence of hardware failure. A probability of 1.0 is consistent with definition (a). Other applications will be satisfied with a probability such as 0.999999, which might be sufficiently high that the probability of software scheduling failure is "in the noise" compared with the probability of hardware failure. A recent LKML thread called this "metal hard" realtime. Or was it "ruby hard"? ;-)
Of course, one can increase the reliability of hardware through redundancy, but no hardware configuration provides perfect reliability. For example, clusters can increase reliability, so that the probability of failure of the cluster is p^n, where "p" is the probability of a single node failing and "n" is the number of nodes. Note that this expression never reaches a probability of 0, no matter how large "n" is. In addition, this mathematical expression assumes that the failover software is perfectly reliable and perfectly configured. This assumption conflicts sharply with my own experience, in which there has always been a point beyond which adding nodes -decreased- cluster reliability. So one can argue that effort put into making software more reliable than is the underlying hardware is effort wasted. That said, there are situations, such as when human life is on the line, where such effort might be an extremely wise investment.
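As a worked instance of that p^n expression (assuming independent node failures, and that the cluster fails only if every node does):

    P_{\mathrm{cluster}} = p^{n}, \qquad
    \text{e.g. } p = 10^{-3},\ n = 3 \;\Rightarrow\; P_{\mathrm{cluster}} = 10^{-9}

which is tiny but never exactly zero, no matter how large n grows.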
The timeframe is just as important as is the probability of meeting the deadline. Any system can provide hard realtime guarantees if the deadline is an infinite amount of time in the future. No computer system that I am aware of at this writing is capable of meeting a 1-picosecond scheduling deadline for any task of non-zero duration, but then neither can dedicated digital hardware. Some applications have definite response-time goals, for example, industrial process-control applications tend to have response-time goals ranging from 100s of microseconds to small numbers of seconds, while non-interactive applications such as graphics playback (movies and the like) are said to need no better than about 7 milliseconds scheduling jitter. Other applications can benefit from any improvement in response-time goals -- faster is better, think in terms of Doom players -- but even in these cases there is normally a point of diminishing returns.
The services used by the realtime application also figure in. Given current disk technology, it is not possible to meet a 100-microsecond deadline for a 1GB synchronous write to disk. Not even if you cheat and supply the disk with a battery-backed-up DRAM cache. However, many realtime applications need only a few of the services that an operating system might provide. This list might include interrupt handling, process scheduling, disk I/O, network I/O, process creation/destruction, VM operations, and so on. Keep in mind that many popular RTOSes provide very little in the way of services! They frequently leave the complex stuff (e.g., web serving) to general-purpose operating systems. This situation raises the possibility of providing a single Linux operating-system instance that provides some services with realtime guarantees and other services in a non-realtime fashion, with no guarantees of any sort.
Note that each service can have an associated deadline that it can meet. The interrupt system might be able to meet a 1-microsecond deadline, the real-time process scheduler a 10-microsecond deadline, the disk I/O system a 10-millisecond deadline for moderate-sized I/Os, and so on. The deadline that a service can meet might also depend on the parameters, so that the disk-I/O system would be expected to take longer for larger I/Os.
Furthermore, the probability might vary from service to service or with the parameters to that service. For example, the probability of network I/O completing successfully in minimal time might well be a function of the number of packets transmitted (to account for the probability of packet loss) as well as of packet size (to account for bit-error rate). To make things even more complicated, the probability of meeting the deadline will vary depending on the length of time allowed. Considering the networking example, a very short deadline might not allow the data transmission to complete, even if it proceeds at wire speed. A longer deadline might allow transmission to complete, but only if there are no transmission errors. An even longer deadline might allow time for a limited number of retransmissions, in order to recover from packet loss due to transmission errors. Of course, a deadline infinitely far into the future would allow guaranteed completion, but I for one am not that patient.
Finally, the performance and scalability of both realtime and non-realtime applications running on the system can be important. Given the current state of the art, one must pay a performance penalty for realtime support, but the smaller the penalty, the better.
So, to sum up, here are the components of a quality-of-service metric for realtime OSes:
a. List of services for which realtime response is supported.
b. For each service:
i. Probability of meeting a deadline in absence of hardware failure, ranging from 0 to 1, with the value of 1 corresponding to the hardest possible hard realtime.
ii. Allowable deadline, measured from the time that the request is initiated to the time by which the response must be received.
c. Performance and scalability provided to both realtime and non-realtime applications.
So you add a new feature to a realtime operating system. How much of the rest of the system must you inspect and understand in order to be able to guarantee that your new feature provides the required level of realtime response? The smaller this amount of code, the easier it is to add new features and fix bugs, and the greater the number of people who will be able to contribute to the project. In addition, the smaller the amount of such code, the smaller the probability that some well-intentioned bug fix will break realtime response.
Each of the following categories of code might need to be inspected:
a. The low-level interrupt-handling code.
b. The realtime process scheduler.
c. Any code that disables interrupts.
d. Any code that disables preemption.
e. Any code that holds a lock, mutex, semaphore, or other resource that is needed by the code implementing your new feature, as well as the code that actually implements the lock, mutex, semaphore, or other resource.
f. Any code that manipulates hardware that can stall the bus, delay interrupts, or otherwise interfere with forward progress. Note that it is also necessary to inspect user-level code that directly manipulates such hardware.
Of course, use of automated tools could make such inspection much more reliable and less onerous, but one would want such tools to deal with the very large number of CPU architectures and configuration options that Linux supports. The smaller the amount of code that must be inspected, the less chance there is that such a tool will fall victim to configuration-architecture combinatorial explosion. Of course, a tool that supported only a specific CPU architecture with a limited set of configuration options might still be useful, but the wider the coverage, the more useful the tool.
The hardware connection called out in point "f" above is quite important, and much more difficult to deal with, since machine-inspectable source code for firmware and for hardware (e.g., VHDL code) are typically not readily available. These sorts of problems are anything but theoretical, for example, see section 4.5 of:
http://www.cs.utah.edu/~regehr/papers/hotos7/hotos7.html
which describes some problems that were triggered by X-windows (not kernel!) driver bugs that resulted in hardware stalls. Similar problems have been triggered in other chipsets:
http://www.rme-audio.de/english/techinfo/nforce4_tests.htm
At present, there is no known way of finding these problems other than exhaustive testing.
Each of the Linux realtime approaches uses a different strategy to minimize the amount of code in these categories. These differences are surprisingly important, and will be discussed in more detail when going over the various approaches to Linux realtime.
I never have learned to -really- like the POSIX API, with the gets() primitive being a particular cause of heartburn, but given the huge amount of software out there that relies on it and the equally huge number of developers who are familiar with it, one should certainly strive to provide it, or at least a sizeable subset of it.
Other popular APIs include the various Java runtime environments, and of course the feared and loathed, but quite ubiquitous, Windows API.
There are a lot of developers and a lot of software out there. The more of these existing developers and software your API supports, the more successful your realtime facility is likely to be.
How much realtime capability should be added to the operating system? How much of this burden should the applications take on? Is it better to push some of the complexity into a nanokernel, hypervisor, or other software or firmware layer? Let's first look at the tradeoff between OS and application.
For example, although it is certainly possible to program for separate realtime and non-realtime operating-system instances, doing so adds complexity to the application. Complexity is particularly deadly in the hard realtime arena, and can be literally so if human lives are at risk.
Balancing this consideration is the need for simplicity in the operating-system kernel. This balancing act must be carefully considered, taking both the relative complexities and the number of uses into account. Some would argue that it is worthwhile adding 1,000 lines to the OS if that saves 100 lines in each of 1,000 applications. Others would disagree, perhaps citing the greater fault isolation that might be provided by the separation.
But this balance clearly must be struck somewhere between writing the application to bare metal on the one hand (but achieving a perfectly simple zero-size operating system) and bloating the operating system beyond the limits of maintainability on the other hand.
Similar arguments can be made for moving some functionality into a hypervisor or nanokernel layer, though fault isolation also comes into play here.
Many of the most vociferous arguments seem to revolve around this complexity issue. It is quite possible that there never will be a single agreed-upon solution, since different people place different emphasis on different aspects of this design choice. Nonetheless, a well-thought-out discussion is very likely to turn up better design choices.
Can a programming error in a non-realtime application or in a non-realtime portion of the OS harm a realtime application?
Some applications do not care: in these cases, a failure anywhere causes a user-visible failure, so it is not important to isolate faults. Of course, even in these cases, it may be valuable to isolate faults in order to aid debugging, but, other than that, the fault isolation does not help overall application reliability -- regardless of where the bug occurs, the user sees a failure.
In other cases, the realtime portion of the application is protecting someone's life and limb, but the non-realtime portion is only compiling statistics and reports. In this case, fault isolation can be of the utmost importance.
What sorts of faults need isolating?
These faults might occur in the main kernel, in a loadable module, or in some debugging tool, such as a kprobe procedure or a kernel-debugger breakpoint script. Though in the latter case, perhaps realtime deadlines should not be guaranteed when actively debugging. After all, straightforward debugging techniques, such as use of kprint(), can cause response-time problems even in non-realtime environments.
Is SMP required? If so, how many CPUs? How many tasks? How many disks? How many HBAs?
If all the code in the kernel were O(1), it might not matter, but the Linux kernel has not yet reached this goal, and perhaps never will completely reach it. Therefore, some applications may choose to restrict the software or the hardware configuration of the platform in order to meet the realtime deadlines. This approach is consistent with traditional RTOS methodology, as RTOS vendors have been known to restrict the configurations in which they will support hard realtime guarantees.
C. LINUX REALTIME APPROACHES
The following general approaches to Linux realtime have been proposed, along with many variations on each of these themes:
Each of these general approaches is discussed in the following sections. Each section ends with a brief (but perhaps controversial) summary of the corresponding approach's strengths and weaknesses. I do not address "strength of community", even though this may well be the decisive factor. After all, the technical comparison will provide sufficient flame-bait. That said, if you are working on realtime extensions to Linux, you really really should be posting regularly on LKML. Yes, the resulting flames can be painful at times, but a little heat is needed for a patchset to get "well done" (sorry for the pun, but the point is nonetheless serious).
This document does not present measured comparisons among all of the approaches, despite the fact that such comparisons would be extremely useful. The reason for this, aside from gross laziness, is that it is wise to agree on the metrics beforehand. Therefore, the comparisons in this document are for the most part qualitative. In some cases, they are based on actual measurements, but these measurements were taken by different people on different configurations using different benchmarks. This is a prime area for future improvement.
This is the stock kernel, without even preemption. Why would -anyone- think of using stock 2.6 for a realtime task? Because some realtime applications have very forgiving scheduling deadlines. One project I worked on in the early 1980s had 2-second response-time deadlines. This was quite a challenge, given that it was running on a 4MHz Z80 CPU -- though, to be fair, the Z80 was accompanied by a hardware floating-point processor that was able to compute a 32-bit floating-point multiply in well under a millisecond. Modern hardware running a stock Linux 2.6 kernel would have no problem with this application. Hey, just having 32 address bits rather than only 16 would have helped a lot!
a. Quality of service: "soft realtime", with timeframe of 10s of milliseconds for most services. Some I/O requests can take longer. Provides full performance and scalability to both realtime and non-realtime applications.
b. Amount of code that must be inspected to assure quality of service for a new feature: the entire kernel, every little bit of it, since the entire kernel runs with preemption disabled.
c. API provided: POSIX with limited realtime extensions. Realtime and non-realtime applications can interact using the normal POSIX services.
d. Relative complexity of OS and applications: everything is stock, and all the normal system calls operate as expected.
e. Fault isolation: none.
f. Hardware and software configurations supported: all of them. Larger hardware configurations and some device drivers can result in degraded response time.
Strengths: Simplicity and robustness. "Good enough" realtime support for undemanding realtime applications. Excellent performance and scalability for both realtime and non-realtime applications. Applications and administrators see a single OS instance.
Weaknesses: Poor realtime response, need to inspect the entire kernel to find issues that degrade realtime response.
The CONFIG_PREEMPT option renders much of the kernel code preemptible, with the exception of spinlock critical sections, RCU read-side critical sections, code with interrupts disabled, code that accesses per-CPU variables, and other code that explicitly disables preemption.
a. Quality of service: "soft realtime", with timeframe of 100s of microseconds for task scheduling and interrupt handling, but -only- for very carefully restricted hardware configurations that exclude problematic devices and drivers (such as VGA) that can cause latency bumps of tens or even hundreds of milliseconds (-not- microseconds). Furthermore, the software configuration of such systems must be carefully controlled, for example, doing a "kill -1" traverses the entire task list with tasklist_lock held (see kill_something_info()), which might result in disappointing latencies in systems with very large numbers of tasks. System services providing I/O, networking, task creation, and VM manipulation can take much longer. A very small performance penalty is exacted, since spinlocks and RCU must suppress preemption.
Kristian Benoit and Karim Yaghmour measured CONFIG_PREEMPT at a maximum interrupt-response-time latency of about 555 microseconds, see:
http://marc.theaimsgroup.com/?l=linux-kernel&m=112086443319815&w=2
The machine under test was a Dell PowerEdge SC420 with a P4 2.8GHz CPU and 256MB RAM running a UP build of Fedora Core 3.
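The "kill -1" example above is representative of the pattern that hurts here: an amount of work proportional to system size done while preemption is effectively shut out. In rough outline (a simplification; the real kill_something_info() of that era does more per task):

    /* Simplified outline: signalling every process walks the whole task
     * list under tasklist_lock, so the non-preemptible stretch grows
     * with the number of tasks in the system. */
    read_lock(&tasklist_lock);
    for_each_process(p) {
            /* ... deliver the signal to each eligible process ... */
    }
    read_unlock(&tasklist_lock);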
b. Amount of code that must be inspected to assure quality of service for a new feature:
i. The low-level interrupt-handling code.
ii. The process scheduler.
iii. Any code that disables interrupts, which includes all interrupt handlers, both hardware and softirq.
iv. Any code that disables preemption, including spinlock critical sections, RCU read-side critical sections, code with interrupts disabled, code that accesses per-CPU variables, and other code that explicitly disables preemption.
v. Any code that holds a lock, mutex, semaphore, or other resource that is needed by the code implementing your new feature, as well as the code that actually implements the lock, mutex, semaphore, or other resource.
vi. Any code that manipulates hardware that can stall the bus, delay interrupts, or otherwise interfere with forward progress. Note that it is also necessary to inspect user-level code that directly manipulates such hardware.
c. API provided: POSIX with limited realtime extensions.
d. Relative complexity of OS and applications: all the normal system calls operate as expected, so realtime and non-realtime processes can interact normally.
e. Fault isolation: none.
f. Hardware and software configurations supported: all of them. Larger hardware configurations and some device drivers can result in degraded response time.
Strengths: Simplicity. Available now, even from distributions. Provides "good enough" realtime support for a large number of applications. Applications and administrators see a single OS instance.
Weaknesses: Limited testing, so that some robustness issues remain. Need to inspect large portions of the kernel in order to find issues that degrade realtime response.
The CONFIG_PREEMPT_RT patch by Ingo Molnar introduces additional preemption, allowing most spinlock (now "mutexes") critical sections, RCU read-side critical sections, and interrupt handlers to be preempted. Preemption of spinlock critical sections requires that priority inheritance be added to prevent the "priority inversion" problem where a low-priority task holding a lock is preempted by a medium-priority task, while a high-priority task is blocked waiting on the lock. The CONFIG_PREEMPT_RT patch addresses this via "priority inheritance", where a task waiting on a lock "donates" its priority to the task holding that lock, but only until it releases the lock. In the example above, the low-priority task would run at high priority until it released the lock, preempting the medium-priority task, so that the high-priority task gets the lock in a timely fashion. Priority inheritance has been used in a number of realtime OS environments over the past few decades, so it is a well-tested concept.
One problem with priority inheritance is that it is difficult to implement for reader-writer locks, where a high-priority writer might wish to donate its high priority to a large number of low-priority readers. The CONFIG_PREEMPT_RT patch addresses this by allowing only one task at a time to read-acquire a reader-writer lock, although it is permitted to do so recursively. This can limit the scalability of reader-writer locks, but one would not expect any change unless and until someone finds a serious scalability limit that affected a significant fraction of realtime users.
Note that a few critical spinlocks remain non-preemptible, using the "raw spinlock" implementation.
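Priority inheritance is not unique to the RT patch; POSIX exposes the same idea to userspace through mutex protocol attributes. A minimal example, independent of CONFIG_PREEMPT_RT itself (whether the C library and kernel of the day actually honour PTHREAD_PRIO_INHERIT is a separate question):

    #include <pthread.h>
    #include <stdio.h>

    int main(void)
    {
            pthread_mutexattr_t attr;
            pthread_mutex_t lock;

            pthread_mutexattr_init(&attr);
            /* Any thread blocking on 'lock' lends its priority to the
             * current holder until the holder releases the mutex. */
            if (pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT)) {
                    fprintf(stderr, "PTHREAD_PRIO_INHERIT not supported\n");
                    return 1;
            }
            pthread_mutex_init(&lock, &attr);

            pthread_mutex_lock(&lock);
            /* ... critical section ... */
            pthread_mutex_unlock(&lock);

            pthread_mutex_destroy(&lock);
            pthread_mutexattr_destroy(&attr);
            return 0;
    }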
a. Quality of service: "soft realtime", with timeframe of a few 10s of microseconds for task scheduling and interrupt-handler entry. System services providing I/O, networking, task creation, and VM manipulation can take much longer, though some subsystems (e.g., ALSA) have been reworked to obtain good latencies. Since spinlocks are replaced by blocking mutexes, the performance penalty can be significant (up to 40%) for some system calls, but user-mode execution runs at full speed. There is likely to be some performance penalty exacted from RCU, but, with luck, this penalty will be minimal.
Kristian Benoit and Karim Yaghmour have run an impressive set of benchmarks comparing CONFIG_PREEMPT_RT with CONFIG_PREEMPT(?) and Ipipe, see the LKML threads starting with:
1. http://marc.theaimsgroup.com/?l=linux-kernel&m=111846495403131&w=2
2. http://marc.theaimsgroup.com/?l=linux-kernel&m=111928813818151&w=2
3. http://marc.theaimsgroup.com/?l=linux-kernel&m=112008491422956&w=2
4. http://marc.theaimsgroup.com/?l=linux-kernel&m=112086443319815&w=2
This last run put CONFIG_PREEMPT_RT at about 70 microseconds interrupt-response-time latency. The machine under test was a Dell PowerEdge SC420 with a P4 2.8GHz CPU and 256MB RAM running a UP build of Fedora Core 3.
b. Amount of code that must be inspected to assure quality of service by a new feature:
i. The low-level interrupt-handling code.
ii. The process scheduler.
iii. Any code that disables interrupts, but -not- including interrupt handlers, which now run in process context.
iv. Any code that disables preemption, including raw-spinlock critical sections, code with interrupts disabled, code that accesses per-CPU variables, and other code that explicitly disables preemption.
v. Any code that holds a lock, mutex, semaphore, or other resource that is needed by the code implementing your new feature, as well as the code that actually implements the lock, mutex, semaphore, or other resource.
vi. Any code that manipulates hardware that can stall the bus, delay interrupts, or otherwise interfere with forward progress. Note that it is also necessary to inspect user-level code that directly manipulates such hardware.
c. API provided: POSIX with limited realtime extensions.
d. Relative complexity of OS and applications: all the normal system calls operate as expected, so realtime and non-realtime processes can interact normally.
e. Fault isolation: none.
f. Hardware and software configurations supported: most of them. SMP support is a bit rough, and a number of drivers have not yet been upgraded to work properly in the CONFIG_PREEMPT_RT environment. It is likely that larger hardware configurations and some device drivers can result in degraded scheduling latency, but given that normal spinlocks are now preemptible, this effect should be much less of an issue than for CONFIG_PREEMPT.
Strengths: Excellent scheduling latencies, potential for hard realtime for some services (e.g., user-mode execution) in some configurations. A number of aspects of this approach might be incrementally added to Linux (e.g., priority inheritance for semaphores to prevent semaphore priority inversion, see "other aspects of realtime" for more discussion of this). Applications and administrators see a single OS instance.
Weaknesses: Limited testing, so that robustness issues remain. Large patch to Linux (~31K lines of context diff as of V0.7.51-23). Both realtime and non-realtime applications pay performance and scalability penalties for the realtime service.
The Linux instance runs as a user process in an enclosing RTOS. Realtime service is provided by the RTOS, and a richer set of non-realtime services is provided by the Linux instance. Note that there is considerable variety in RTOSes, and this section defines this term in its broadest possible meaning, including full OSes, hypervisors, nanokernels, and interrupt pipelines. At some point, it may make sense to split this section based on the type of the enclosing "OS", but there does not seem to be much reason to break it up at this point.
a. Quality of service: hard realtime, with timeframe of about 10 microseconds for services provided by the underlying RTOS. More complex services (I/O, task creation, and so on) will likely take longer to execute, which may impose a significant performance and scalability penalty.
Philippe Gerum's interrupt-pipeline layer, named Ipipe, is an extreme example of a minimal RTOS. Kristian Benoit and Karim Yaghmour measured Ipipe (running a CONFIG_PREEMPT Linux kernel) at a maximum interrupt-response-time latency of about 50 microseconds; see:
http://marc.theaimsgroup.com/?l=linux-kernel&m=112086443319815&w=2
This result was the best of the three alternatives tested (CONFIG_PREEMPT, CONFIG_PREEMPT_RT, and Ipipe in conjunction with Linux 2.6.12). It is believed that hardware limitations prevent much improvement in this result.
The machine under test was a Dell PowerEdge SC420 with a P4 2.8GHz CPU and 256MB RAM running a UP build of Fedora Core 3.
b. Amount of code that must be inspected to assure quality of service by a new feature:
i. All of the RTOS. One would strive to keep the RTOS quite small, but the greater the number of realtime services provided, the larger the RTOS must be.
ii. Any Linux-kernel code that disables interrupts. Note that in many implementations, the Linux kernel will be prevented from disabling interrupts, since any attempt to disable interrupts will trap into the RTOS.
If the Linux kernel runs in privileged mode, however, all bets are off. In this case, special care must be used to avoid disabling the real hardware interrupts, including such disabling within any kernel modules that might be loaded.
iii. Any code that manipulates hardware that can stall the bus, delay interrupts, or otherwise interfere with forward progress. Note that it is also necessary to inspect user-level code that directly manipulates such hardware.
c. API provided: Whatever the RTOS wants to provide, often a subset of POSIX with realtime extensions.
d. Relative complexity of OS and applications: there are now two operating systems, both of which must be configured and administered. Applications that contain both realtime and non-realtime components must be explicitly aware of both OS instances, and of their respective APIs.
e. Fault isolation: the following faults may propagate from the Linux OS to the underlying RTOS, or not, depending on the implementation:
i. Excessive disabling of interrupts, if the Linux instance is permitted to disable them (hopefully not).
ii. Memory corruption, if the Linux instance is given direct access to the hardware MMU or to DMA-capable I/O devices.
f. Hardware and software configurations supported: depends on the implementation, however, there are products with this architecture that support SMP and a reasonable variety of devices. Note that supporting a large variety of devices either requires that this support be present in the RTOS, or that Linux be granted access to the devices. In the latter case, Linux will likely have the ability to DMA over the top of the RTOS.
Strengths: Excellent scheduling latencies. Hard-realtime support for some services in some configurations. Reasonable fault isolation for some implementations. Well-tested and robust implementations are available (I-pipe, L4Linux, RT-Linux, ...).
Weaknesses: Realtime application software must deal with two separate OS instances and their respective APIs, with explicit communication. Administrators must deal with two OS instances. Non-realtime applications are likely to suffer significant performance and scalability penalties.
Linux and RTOS instances run side-by-side on different CPUs in the same system. The CPUs might be different physical CPUs, different hardware threads in the same CPU, or different virtual CPUs provided by a virtualizing layer, such as Xen. The two instances might or might not share memory, and, if they do share memory, there might or might not be hardware protection to prevent one OS from overwriting the other OS's memory.
a. Quality of service: hard realtime, with timeframe of about 10 microseconds for services provided by the RTOS. Extremely simple polling-loop "RTOSes" could potentially provide sub-microsecond latencies. More complex services (I/O, task creation, and so on) will likely take longer to execute. Since the Linux instance runs on a separate core, there need not be any performance or scalability penalty for non-realtime tasks.
b. Amount of code that must be inspected to assure quality of service by a new feature: all of the RTOS, but only the RTOS. One would strive to keep the RTOS quite small, but the greater the number of realtime services provided, the larger the RTOS must be.
One important exception: if the RTOS and the Linux kernel access a shared hardware device (including memory!), it may be possible for Linux accesses to that hardware device to stall the RTOS.
c. API provided: Whatever the RTOS wants to provide, often a subset of POSIX with realtime extensions.
d. Relative complexity of OS and applications: there are now two operating systems, both of which must be configured and administered. Applications that contain both realtime and non-realtime components must be explicitly aware of both OS instances and APIs, and must also be aware of whatever hardware facility is used to communicate between the realtime and non-realtime CPUs.
e. Fault isolation: the following faults may propagate from the Linux OS to the underlying RTOS, or not, depending on the implementation:
i. Memory corruption, but only if the Linux instance is given direct access to the RTOS's memory or to DMA-capable I/O devices that can access the RTOS's memory.
f. Hardware and software configurations supported: depends on the implementation, however, there are products based on this approach that support SMP and a reasonable variety of devices.
Strengths: Best possible scheduling latencies with the hardest reasonable realtime -- just as good as bare metal in some implementations. Best possible fault isolation for some implementations. Well-tested and robust implementations are available. Linux can be used as is, so full performance and scalability can be provided to non-realtime tasks.
Weaknesses: Realtime application software must deal with two separate OS instances, with explicit communication. Administrators must deal with two OS instances. "RTOSes" that provide the best latencies offer the least services -- in extreme cases, the only service is execution of raw code on bare metal. The pair of cores will be more expensive than a single core, though one might use virtualization to emulate the two CPUs.
A Linux and RTOS instance run side-by-side in the same system. The two OSes might run on different physical CPUs, different hardware threads in the same CPU, different virtual CPUs provided by a virtualizing layer like Xen, or alternatively, the two OSes might use some sort of interrupt-pipeline scheme (such as Adeos) to share a single CPU.
However, applications see a single unified environment. Applications run on the RTOS, but the RTOS provides Linux-compatible system calls and memory layout. If the application invokes a non-realtime system call, the task is transparently migrated to the Linux OS instance for the duration of that system call. This differs from the other dual-OS approaches, where the applications must be explicitly aware of the different OSes.
At this writing, it appears that the two instances need to share memory, since tasks can migrate from one OS to the other.
a. Quality of service: hard realtime, with timeframe of about 10 microseconds for services provided by the RTOS. More complex services (I/O, task creation, and so on) will likely take longer to execute. It is also possible for tasks to be "trapped" in the Linux instance, for example, if they are sleeping, but have not yet been given a chance to respond to some event that should wake them up. The performance and scalability penalties to non-realtime tasks can be expected to depend on the amount of protection provided for realtime tasks against non-realtime misbehavior -- the greater the protection, the greater the expected penalty. It may be possible to provide hardware support to improve this tradeoff.
b. Amount of code that must be inspected to assure quality of service by a new feature:
i. All of the RTOS. One would strive to keep the RTOS quite small, but the greater the number of realtime services provided, the larger the RTOS must be.
ii. Any Linux-kernel code that disables interrupts. Note that in many implementations, the Linux kernel will be prevented from disabling interrupts, since any attempt to disable interrupts will trap into the RTOS or into the underlying software/firmware layer (e.g., Xen or Adeos).
If the Linux kernel runs in privileged mode, however, all bets are off. In this case, special care must be used to avoid disabling the real hardware interrupts, including such disabling within any kernel modules that might be loaded.
iii. Any code that manipulates hardware that can stall the bus, delay interrupts, or otherwise interfere with forward progress. Note that it is also necessary to inspect user-level code that directly manipulates such hardware.
iv. Any Linux code that manipulates a data structure that the RTOS accesses. If the Linux and RTOS code share any sort of lock, then all critical sections of that lock must be inspected, as must the implementation of the lock itself. The same is true of any shared mutex, shared semaphore, or other shared resource.
c. API provided: Full POSIX with realtime extensions. Anytime a task running in the context of the RTOS attempts to execute a non-realtime system call, it is migrated to the Linux instance.
d. Relative complexity of OS and applications: there are now two operating systems, both of which must be configured and administered. However, applications can be written as if there was only one OS instance that provided the full set of services, some realtime and some not.
e. Fault isolation: the following faults may propagate from the Linux OS to the underlying RTOS, or not, depending on the implementation:
i. Excessive disabling of interrupts, if the Linux OS is permitted to disable hardware interrupts (hopefully not, though preventing this may require special hardware).
ii. Memory corruption, either due to wild pointer or via wild DMA.
f. Hardware and software configurations supported: depends on the implementation, however, it is reasonable to believe that SMP and a reasonable variety of devices could be supported. Note that supporting a large variety of devices either requires that this support be present in the RTOS, or that Linux be granted access to the devices. In the latter case, Linux will likely have the ability to DMA over the RTOS.
Strengths: Excellent scheduling latencies. Hard-realtime support for some services in some configurations. Applications see a single OS.
Weaknesses: Administrators must deal with two OS instances. The two OSes will be extremely sensitive to each other's version and patch level, since they access each other's data structures.
A Linux instance runs on multiple CPUs, either different physical CPUs, different hardware threads in the same CPU, or different virtual CPUs provided by a virtualizing layer such as Xen. Some (but not all!) of the CPUs are designated as realtime CPUs. If a task running on a realtime CPU executes a trap or system call that contains non-deterministic code sequences, the task is migrated to a non-realtime CPU to complete execution of the trap or system call, then migrated back. This prevents any non-realtime execution of a given realtime task from interfering with that of other realtime tasks.
Interrupts can be directed away from realtime CPUs. Such interrupt redirection is supported on a few architectures, and has in fact been used for realtime support since at least the 2.4 kernel.
a. Quality of service: ~40 microseconds for ARTiS, with restricted hard/firm realtime supported for user-mode execution. More complex services (I/O, task creation, and so on) will likely take longer to execute. It is also possible for tasks to be "trapped" on the non-realtime CPUs, for example, if they are sleeping, but have not yet been given a chance to respond to some event that should wake them up. Since a stock non-CONFIG_PREEMPT Linux may be used, there need be no performance or scalability penalty for non-realtime tasks, nor for realtime tasks that execute only realtime operations. There can be a significant migration penalty when realtime tasks frequently execute non-realtime operations.
b. Amount of code that must be inspected to assure quality of service by a new feature:
i. Any part of the Linux kernel that is permitted to execute on the realtime CPUs. This would normally be only the realtime portions of the scheduler and the low-level interrupt and trap handling code (the actual interrupts and traps would be migrated, if necessary).
ii. Any critical section of any lock acquired by the portion of the Linux kernel that is permitted to execute on the realtime CPUs.
iii. Any code that manipulates hardware that can stall the bus, delay interrupts, or otherwise interfere with forward progress, but only if that hardware can affect or is used by both the realtime and the non-realtime CPUs.
That said, note that it is also necessary to inspect user-level code that directly manipulates such hardware.
c. API provided: Full POSIX with realtime extensions.
d. Relative complexity of OS and applications: There is but one OS, though it has a bit of added complexity due to the migration capability. Applications see only one OS.
e. Fault isolation: the following faults may propagate from the non-realtime CPUs to the realtime CPUs:
i. Holding a lock, mutex, or semaphore for too long, when that resource must be acquired by code that is permitted to run on the realtime CPUs.
ii. Memory corruption, either due to wild pointer or via wild DMA.
f. Hardware and software configurations supported: all configurations, though single-CPU systems must have some sort of virtualizing facility so that the OS sees at least two virtual CPUs.
Strengths: Excellent scheduling latencies. Hard-realtime support for some services in some configurations. Applications and administrators see a single OS and API. Full performance and scalability for non-realtime and for pure-realtime tasks.
Weaknesses: Migration overhead. Requires multiple CPUs, either real or virtual.
D. OTHER ASPECTS OF REALTIME
1. PRIORITY INVERSION PROBLEM STATEMENT
2. PRIORITY INVERSION SOLUTIONS
3. PRIORITY INVERSION AND PTHREADS
Priority inversion is a situation where a low-priority thread is holding a resource that a high-priority task needs. Priority inversion can result in indefinite delay of the high-priority task, so is fatal for realtime applications, and, in extreme cases, can be intolerable even for non-realtime applications.
To see how priority inversion can happen, consider the following sequence of events:
a. Low-priority thread A acquires a pthread_mutex.
b. Medium-priority thread B starts executing CPU-bound, preempting thread A.
c. High-priority thread C attempts to acquire the pthread_mutex, but is blocked because A holds it.
Suppose that thread B is a realtime thread and that it will execute CPU-bound indefinitely. Since it is a realtime thread, its priority will never age down, so low-priority thread A will never get to execute. Thread A will therefore never release the pthread_mutex, so high-priority thread C will never be able to proceed. This situation is fatal for realtime systems, and can be literally so if thread C is controlling a life-support system.
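The following user-level sketch reproduces this sequence of events; the thread names, priorities, and timings are invented for illustration and are not from the discussion. Pinned to a single CPU (for example with taskset) and run with privileges sufficient for SCHED_FIFO, the final printf never appears, because the medium-priority spinner starves the low-priority mutex holder:

    /* Illustrative only: build with -lpthread, pin to one CPU, run as root. */
    #include <pthread.h>
    #include <sched.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;

    static void set_fifo(int prio)
    {
            struct sched_param sp = { .sched_priority = prio };

            pthread_setschedparam(pthread_self(), SCHED_FIFO, &sp);
    }

    static void *thread_a(void *unused)     /* low priority, holds the mutex */
    {
            volatile unsigned long i;

            set_fifo(1);
            pthread_mutex_lock(&m);
            for (i = 0; i < 1000000000UL; i++)
                    ;                       /* needs CPU time to finish ...  */
            pthread_mutex_unlock(&m);       /* ... so this is never reached  */
            return NULL;
    }

    static void *thread_b(void *unused)     /* medium priority, CPU-bound forever */
    {
            set_fifo(2);
            for (;;)
                    ;
            return NULL;
    }

    static void *thread_c(void *unused)     /* high priority, needs the mutex */
    {
            set_fifo(3);
            pthread_mutex_lock(&m);         /* blocks behind A, which B starves */
            printf("high-priority thread got the mutex\n");
            pthread_mutex_unlock(&m);
            return NULL;
    }

    int main(void)
    {
            pthread_t a, b, c;

            set_fifo(4);                    /* keep main runnable above A, B, C */
            pthread_create(&a, NULL, thread_a, NULL);
            sleep(1);                       /* let A acquire the mutex first    */
            pthread_create(&c, NULL, thread_c, NULL);
            pthread_create(&b, NULL, thread_b, NULL);
            pthread_join(c, NULL);          /* never returns: priority inversion */
            return 0;
    }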
Note that although this example used a pthread_mutex, many other types of resources can be involved in a priority-inversion situation. For a second example, consider the following sequence of events:
a. Low-priority task A holds a large block of memory, which it is about to free up.
b. Medium-priority task B starts executing CPU-bound, preempting task A.
c. High-priority task C attempts to allocate some memory, but is blocked because the system is short on memory, and A has not yet freed up its large block.
Different type of resource, but very similar result. This problem is not limited to mutexes and memory; other types of resources that can be involved in priority inversion include:
a. Communications packets. Low-priority task A is prevented from transmitting by medium-priority task B, thereby blocking high-priority task C, which needs to receive the packet that task A is being prevented from sending. In the case of things like TCP/IP, the priority inversion can span multiple systems, for example, tasks A and B might be on one system and task C on another system on the same LAN.
b. Signals and/or events. Low-priority task A is prevented from posting by medium-priority task B, thereby blocking high-priority task C, which needs to receive the signal/event that task A is being prevented from sending.
c. File data. Low-priority task A is prevented from writing out data to a file by task B, thereby blocking task C, which needs this data in order to proceed with its own processing.
The hard cold fact is that pretty much any resource that can cause a task to block can be involved in a priority inversion situation.
There are a number of ways of preventing priority inversion:
a. Disable preemption while a resource is held.
b. Forbid resources to be acquired by tasks of different priorities.
c. Priority inheritance.
These are each covered in the following sections.
a. Disable preemption while a resource is held.
A simple, but effective, way to prevent priority inversion is to disable preemption during the time that the resource is held. This works very well for some sorts of resources, particularly locks. The CONFIG_PREEMPT option in the Linux kernel uses this approach for all spinlocks and also for RCU read-side critical sections. However, this approach is impractical for resources that may be held while blocked, such as sema_t sleeplocks, memory, and communications (the last of which might involve memory allocation, which can block if the system is low on memory).
Even where disabling preemption does work well, it can degrade scheduling latencies. Since a major goal of extreme realtime support is to -reduce- scheduling latencies, other approaches are needed.
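In the Linux kernel, this is exactly what the existing spinlock primitives provide under CONFIG_PREEMPT, and preempt_disable()/preempt_enable() can be used directly for a resource that is not protected by a spinlock. A minimal kernel-style sketch (the counter is an invented resource for illustration):

    #include <linux/preempt.h>

    static unsigned long my_counter;        /* invented resource for illustration */

    void touch_my_counter(void)
    {
            preempt_disable();              /* no preemption from here on ...    */
            my_counter++;                   /* keep the critical work short      */
            preempt_enable();               /* ... scheduling is possible again  */
    }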
b. Forbid resources to be acquired by tasks of different priorities.
The "diamond-hard" realtime approach is to simply prohibit tasks of different priorities from sharing any blocking resources. This is simple in principle, but can become quite complex in practice. In some cases, non-blocking mechanisms can be used, such as asynchronous I/O or non-blocking synchronization. However, although non-blocking mechanisms can prevent the high-priority task from blocking, they are of no help if the high-priority task really needs the information held by the low-priority task. In such cases, it may be necessary to dynamically adjust priorities, perhaps via schemes such as deadline scheduling.
There is a huge body of literature on realtime scheduling mechanisms at all levels of complexity and effectiveness, which cannot be reproduced here. However, a conceptually simple approach would be to increase the priority of "supplier" tasks so that "consumer" tasks get what they need when they need it. If this is automated, it is called "priority inheritance".
c. Priority inheritance.
With priority inheritance, the holder of a given resource is temporarily boosted to the maximum priority of all tasks waiting for that resource. This temporary priority-boost is removed as soon as the resource is released.
Of course, there can be complications. For example, a given low-priority task might be holding multiple locks, each of which is being waited on by different high-priority tasks. While the low-priority task holds all of these locks, its priority is boosted to that of the highest-priority task waiting on any of the locks, but when it releases one of the locks, it might be necessary to decrease (but not eliminate) the boost to allow for the smaller set of high-priority tasks still waiting.
Another complication is "transitivity", where a low-priority task A holds one lock needed by medium-priority task B, which in turn holds a second lock needed by high-priority task C. In this case, task A needs to inherit task C's priority in a transitive manner through both of the locks. Such a priority-inheritance chain could be arbitrarily long.
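To make the deboost step concrete, the sketch below recomputes a holder's effective priority as the maximum of its own priority and the highest-priority waiter on each lock it still holds. The structures and field names are invented for this example (they are not the CONFIG_PREEMPT_RT data structures), higher numeric values mean higher priority, and transitive chains would additionally require walking the blocked-on relationships:

    #include <linux/list.h>

    struct pi_lock {
            struct list_head held_list;     /* links the locks held by one task  */
            int top_waiter_prio;            /* highest-priority waiter, or -1    */
    };

    struct pi_task {
            int normal_prio;                /* priority with no boost applied    */
            int effective_prio;             /* normal_prio, or boosted above it  */
            struct list_head held_locks;    /* list of pi_lock.held_list entries */
    };

    /* Recompute the boost after one of possibly several held locks is released. */
    static void pi_recompute_prio(struct pi_task *t)
    {
            struct pi_lock *l;
            int prio = t->normal_prio;

            list_for_each_entry(l, &t->held_locks, held_list)
                    if (l->top_waiter_prio > prio)
                            prio = l->top_waiter_prio;

            t->effective_prio = prio;       /* may drop, but never below normal_prio */
    }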
Furthermore, avoiding blocking does not necessarily make the underlying problem go away, for example, suppose that the high-priority task was executing the following loop:
    for (;;) {
            spin_trylock(&my_mutex);                /* poll for the lock ...              */
            set_current_state(TASK_UNINTERRUPTIBLE);
            schedule_timeout(HZ / 100);             /* ... sleeping roughly 10ms per pass */
    }

The standard priority-inheritance mechanisms would not understand the need for a priority boost in this case. But suppose that they did. Then what would they make of the following code?
    for (;;) {
            spin_trylock(&my_mutex);
            if ((random() & 0xfff) == 0)
                    break;                          /* exit for a reason unrelated to the lock */
            set_current_state(TASK_UNINTERRUPTIBLE);
            schedule_timeout(HZ / 100);
    }

How is the priority-inheritance mechanism going to figure out that it should remove the priority boost when the high-priority task breaks out of the loop?
Despite such complications, priority inheritance works reasonably well for exclusive locks, and is a major component of Ingo Molnar's CONFIG_PREEMPT_RT patch. There are strongly held opinions both for and against priority inheritance, for example:
http://www.linuxdevices.com/articles/AT7168794919.html
in which Victor Yodaiken considers priority inheritance to be harmful, and, as near as I can tell, soft realtime to be irrelevant. Doug Locke posted a rebuttal at:
http://www.linuxdevices.com/articles/AT5698775833.html
The big advantage of priority inheritance is that it is simple for its users. Use of priority inheritance does degrade scheduling latency compared to a carefully hand-crafted solution, and priority inheritance's implementation is difficult for reader-writer locks, to say nothing of memory allocation or communications primitives.
Nevertheless, priority inheritance does seem to have a significant role to play in mainstream "metal hard" realtime. It is not perfect, but, then again, what is?
Inaky Perez-Gonzalez's "fusyn" project is intended to bring priority inheritance to user-level pthread_mutex primitives, although it (perhaps wisely) leaves reader-writer primitives alone. More information on fusyn may be found at the following web sites and LKML threads:
http://developer.osdl.org/dev/robustmutexes/fusyn/20040510
http://marc.theaimsgroup.com/?l=linux-kernel&m=111362457509145&w=2
http://marc.theaimsgroup.com/?t=111333601400001&r=1&w=2
Interestingly enough, the complexity of pthread_mutex priority inheritance depends strongly on the threading model in use. Linux NPTL uses a 1:1 threading model, so that each user-visible pthread has its own kernel task. In this threading model, priority inheritance can be carried out entirely by the Linux kernel, since all pthreads are visible to it.
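At the API level, an application requests priority inheritance on a pthread_mutex through the standard POSIX mutex-protocol attribute; whether the request actually produces a kernel-assisted boost depends on the threading library and on kernel support such as the fusyn work above. A minimal sketch:

    #include <pthread.h>

    static pthread_mutex_t m;

    int init_pi_mutex(void)
    {
            pthread_mutexattr_t attr;
            int ret;

            pthread_mutexattr_init(&attr);
            /* May fail with ENOTSUP if the implementation lacks PI support. */
            ret = pthread_mutexattr_setprotocol(&attr, PTHREAD_PRIO_INHERIT);
            if (ret == 0)
                    ret = pthread_mutex_init(&m, &attr);
            pthread_mutexattr_destroy(&attr);
            return ret;
    }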
However, some pthreads implementers choose an m:n threading model, where a user-level thread scheduler multiplexes multiple user-visible pthreads onto a potentially smaller set of kernel tasks. In this m:n case, the Linux kernel has no idea which of multiple tasks it should priority-boost, and it might well be that the pthread in need of a boost is currently not assigned to a task. Therefore, m:n priority boosting must involve both the kernel and the user-level schedulers, making it quite complex and fragile.
Therefore, use of 1:1 user-level thread scheduling is recommended in the strongest possible terms.
Why use m:n user-level thread scheduling in the first place? It turns out that some applications benefit from the extremely efficient user-level context switches that m:n scheduling provides. However, every optimization has its price, and the price of m:n user-level thread scheduling becomes apparent in realtime systems.
E. SUMMARY
At this point, it does not appear that any one approach can be all things to all realtime applications. It is therefore too early to pick a winner. Advocates of a given approach are therefore advised to concentrate their energy on implementations of their favorite approach, rather than engaging in flamewars with advocates of other approaches. ;-)
After all, in the end, the approaches that best meet the needs of the user community will win out. In fact, given that the Linux community has come up with no fewer than seven classes of solutions to a problem that is commonly thought to be unsolvable, it seems quite reasonable to expect that yet more classes of solutions will appear.
So, which of these approaches can be combined? The first three can be thought of as elaborations on the general preemption theme, and can be combined with each of the remaining four. The nested-OS and dual-OS/dual-core ideas can be combined by having one of the OSes on one of the cores have another OS nested within it. The dual-core/dual-OS approach can be combined with either of the migration approaches, simply by having one of the cores implement the migration approach. It should be possible to combine the two migration approaches, though it is not clear that this is useful.
Regardless of whether Linux's direction ends up being a single one of these approaches, a yet-as-unknown approach, some combination, or one of several approaches depending on the workload, realtime Linux looks to remain an exciting area.
F. RESOURCES
1. General Discussion
http://marc.theaimsgroup.com/?l=linux-kernel&m=111689227213061&w=2
Spirited LKML debate on realtime Linux that inspired this document.
http://marc.theaimsgroup.com/?l=linux-kernel&m=111846495403131&w=2
http://marc.theaimsgroup.com/?l=linux-kernel&m=111928813818151&w=2
http://marc.theaimsgroup.com/?l=linux-kernel&m=112008491422956&w=2
http://marc.theaimsgroup.com/?l=linux-kernel&m=112086443319815&w=2
Kristian Benoit's and Karim Yaghmour's realtime-latency measurement LKML threads.
http://www.cs.utah.edu/~regehr/papers/hotos7/hotos7.html
http://www.rme-audio.de/english/techinfo/nforce4_tests.htm
Description of how hardware latencies can impact response time.
http://marc.theaimsgroup.com/?l=linux-kernel&m=111362457509145&w=2
http://marc.theaimsgroup.com/?t=111333601400001&r=1&w=2
LKML discussions of "fusyn" priority-inheritance implementation of pthread_mutex.
2. Example Realtime Approaches
ftp://kernel.org/pub/linux/kernel/v2.6
Linux kernel source for non-CONFIG_PREEMPT and CONFIG_PREEMPT kernels.
http://people.redhat.com/mingo/realtime-preempt/
Ingo Molnar's CONFIG_PREEMPT_RT patch.
http://marc.theaimsgroup.com/?l=linux-kernel&m=112051169508144&w=2
Philippe Gerum's I-pipe patch 2.6.12-v0.9-00. This is an example of the nested-OS approach, with I-pipe being an extreme example of a lightweight enclosing OS.
http://download.gna.org/rtai/documentation/fusion/pdf/Life-with-Adeos.pdf
http://download.gna.org/rtai/documentation/fusion/pdf/Introduction-to-UVMs.pdf
http://download.gna.org/rtai/documentation/fusion/pdf/Native-API-Tour.pdf
Documents describing Philippe Gerum's RTAI/fusion approach, which is an example of migration between OSes.
http://www.lifl.fr/west/publi/MPSD04rtlws.pdf
http://lkml.org/lkml/2005/5/3/50
Paper describing ARTiS (Asymmetric RealTime SMP), an example of the migration-within-OS approach, along with an LKML posting of the corresponding Linux patch. Additional ARTiS publications may be found at http://www.lifl.fr/west/artis/.
ACKNOWLEDGEMENTS
This document was extracted from the emails and code of a large number of people, including those listed below in alphabetic order. Please accept my apologies if I left you out, and please let me know of this or any other error or omission so that I can generate the fix.
Andi Kleen, Andrea Arcangeli, Andrew Morton, Bill Davidsen, Bill Huey, Brian O'Mahoney, Chris Friesen, Con Kolivas, Daniel Walker, Darren Hart, David Lang, Duncan Sands, Elladan, Eric Piel, Esben Nielsen, Gene Heskett, Giuseppe Bilotta, Hari N, Henry Kingman, Ingo Molnar, James R Bruce, John Alvord, Jonathan Corbet, K.R. Foley, Karim Yaghmour, Kristian Benoit, Kusche Klau, Lee Revell, Manas Saksena, Marcelo Tosatti, NZG, Nick Piggin, Nicolas Pitre, Paul G. Allen, Paulo Marques, Peter Chubb, Philippe Gerum, Steven Rostedt, Sven-Thorsten Dietrich, Takashi Iwai, Theodore Y Tso, Thomas Gleixner, Tim Bird, Tom Vier, Valdis Kletniek, William Lee Irwin III, Zan Lynx, Zwane Mwaikambo, john cooper
10. Linux 2.6.13-rc2-mm2 Released
12 Jul 2005 - 14 Jul 2005 (21 posts) Archive Link: "2.6.13-rc2-mm2"
Topics: Kernel Release Announcement
People: Andrew Morton, Matthias Urlichs
Andrew Morton announced Linux version 2.6.13-rc2-mm2, saying:
ftp://ftp.kernel.org/pub/linux/kernel/people/akpm/patches/2.6/2.6.13-rc2/2.6.13-rc2-mm2/
(And at http://www.zip.com.au/~akpm/linux/patches/stuff/2.6.13-rc2-mm2.gz - kernel.org mirroring is being slow again)
Matthias Urlichs added that this was also available "as a GIT archive (once the mirror has mirrored): http://www.kernel.org/pub/scm/linux/kernel/git/smurf/v2.6.13-rc2-mm2.git/. Suggestions for improvements welcome."
Sharon And Joy
Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at kernel.org. All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.