
Hurd Traffic #116 For 25 Mar 2002

By Paul Emsley

Mach 4 | Hurd Servers | Debian Hurd Home | Debian Hurd FAQ | debian-hurd List Archives | bug-hurd List Archives | help-hurd List Archive | Hurd Reference Manual | Hurd Installation Guide | Cross-Compiling GNUMach | Hurd Hardware Compatibility Guide


1. Use bochs not VMWare

4 Mar 2002 - 19 Mar 2002 (24 posts) Archive Link: "H3 first reboot after install failed"

Topics: Emulators: VMWare, Emulators: bochs, Emulators: plex86, FS: ext2

People: Jeroen Dekkers, Robert Millan

This was quite a long-running thread started by a self-admitted Hurd newbie who quickly ran into problems when he tried to install from the H3 image using VMWare.

It has become apparent that talk of VMWare is not welcome among the developers on the various Hurd mailing lists. A typical response came from Jeroen Dekkers: "We don't and will never support proprietary software. There is no way for us to debug to be sure, so we don't care either." VMWare users are encouraged to use bochs or plex86 instead.

The idea is of course to install the operating system into a disk image. Jeroen compares the different methods of making the disk image available for writing:

Readers are invited to draw their own conclusions about the relative elegance of the solutions.

It seems generally agreed that it is much more convenient to run GNU/Hurd on a second-hand machine than to suffer the current tangles of VMWare/bochs/plex86.

Robert Millan let us know that "Network cards are supported in Bochs 1.4pre2 for both GNU/Linux and FreeBSD hosts (any guest)" .

2. Console code for OSKit-Mach

17 Mar 2002 - 21 Mar 2002 (16 posts) Archive Link: "some console code checked in"

People: Roland McGrath, Marcus Brinkmann

Regular readers will know that the developers want to move away from the GNU Mach microkernel and use the OSKit-Mach microkernel instead. Recently, substantial strides have been made in this direction by Marcus Brinkmann, with the development of parts of a console driver (which was previously missing).

Roland McGrath comments:

I think all the terminal emulation stuff belongs in the common layer and not in the vga code. All that stuff would be the same for a different sort of output device. Each output device can just have a few calls for move cursor, scroll, clear space, write chars, etc.

About LEDs: in the PC keyboard hardware they are part of the same hardware, so it needs to be the same driver as handles the keyboard input. It will be the same story for USB keyboards and such as well. So it will be either a device_set_status call or just output bytes to the kbd device to diddle the state (write a new bitmask of lit LEDs). We can just punt this until we have a real keyboard driver in oskit-mach, but it would be good to factor it into the vc switching code and so on from the start anyway.

About the bell, you can see the oskit code for how it's handled in the PC hardware (pc_speaker). It's just io ports that you use to set the frequency and turn the tone on, then you wait, then you turn it off. I don't think anything else uses those particular io ports (0x61 turns the speaker on, 0x48 controls the timer that drives the tone frequency), so console could just diddle them directly as it does for VGA (though there is the timing question--it would suck to have your speaker keep beeping while something gets paged in before console runs again to turn it off).

Visual bell is certainly a feature that many people like. Ultimately, you want to have the bell behavior be an independent pluggable thingabob like screensavers so people can have it do something visually fancy, or contact fancy sound hardware, or turn on the vibrator in their pants, or whatever.
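The layering Roland describes maps naturally onto a small callback table. Below is a minimal sketch in C; all names are hypothetical (this is not the actual Hurd console interface), and the PC-speaker routine uses the standard PC ports (speaker gate on port 0x61, PIT channel 2 programmed through ports 0x43/0x42) via the x86 <sys/io.h> helpers, which is an assumption about the build environment:

#include <stddef.h>
#include <stdint.h>
#include <sys/io.h>   /* outb, inb; x86-specific, needs I/O permission */

/* One cell of the virtual screen matrix kept by the common layer,
   which also does all the escape-sequence parsing.  */
struct cons_cell
{
  uint32_t ch;                  /* character */
  uint32_t attr;                /* colour/intensity attributes */
};

/* The few calls a display driver (VGA, serial, speech, braille, ...)
   has to supply.  */
struct display_ops
{
  void (*set_cursor) (void *dev, int col, int row);
  void (*scroll) (void *dev, int lines);       /* positive = up */
  void (*clear) (void *dev, int col, int row, size_t len);
  void (*write_cells) (void *dev, int col, int row,
                       const struct cons_cell *cells, size_t n);
  void (*bell) (void *dev);     /* audible, visual, or pluggable */
};

#define PIT_HZ 1193182          /* PIT input clock frequency */

/* Hypothetical PC-speaker bell: program PIT channel 2 with the tone,
   then gate the timer output onto the speaker.  */
static void
speaker_on (unsigned freq)
{
  unsigned div = PIT_HZ / freq;
  outb (0xb6, 0x43);              /* channel 2, lo/hi byte, square wave */
  outb (div & 0xff, 0x42);
  outb (div >> 8, 0x42);
  outb (inb (0x61) | 0x03, 0x61); /* enable speaker and timer gate */
}

static void
speaker_off (void)
{
  outb (inb (0x61) & ~0x03, 0x61);
}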

Roland went on to add:

Our chief concern to begin with is clean code reuse and consistency of the terminal emulation for standard display devices. Turning the interface inside out doesn't lose that.

But I am still on the fence about what the proper structure really is.

First are some feature concerns, with a garnish of efficiency concerns that are no doubt in fact negligible. There are several things you get from having the virtual terminal (the screen matrix) maintained in a common layer independent of and "above" any particular display driver.

The negligible efficiency concern is that you avoid duplication of the data structures and the processing code path that maintain the screen matrix and the state of escape sequence parsing.

You can have generic visual bell and screensaver modules that plug in at the higher level interface. They could use the character drawing interfaces or use the escape sequence engine to do whatever (play ascii movies, towers of hanoi, etc); the generic part can implement save/restore state, and also inspection of the matrix so you can do ascii decayscreen and the like. Such hacks would work the same for any underlying display driver(s) you might have (presumably a display driver could have its own fancier graphics things and those would take over instead when turned on).

Similarly, you could have things like split-screen display of two virtual terminals implemented generically at this layer (i.e. an interface like "window" or newer versions of "screen").

You can dynamically detach and attach display drivers to a virtual screen. New ones attaching can reconstruct the current screen matrix immediately from the higher layer's state. (The only other way is to replay an arbitrarily long log of the output to process the escape sequences--but in the general case you might need the entire output history.) I see uses for both the case of additional disparate display drivers attaching to spy on another display, and the case of maintaining the virtual terminal screen matrix while having no display driver attached at all, to be displayed when one attaches.

There is surely more than one way to structure the code such that these things are possible. But all I have in mind at the moment is the imposition of the central terminal emulation module.

Second, I am still of two minds about what "the right thing" is for the console abstraction (actually, I'm two minds aware that in the end my mind will just go to little pieces). I was buying your argument for a while, thinking about applications a la conserver where you are treating the virtual console as you would a serial port (i.e. equivalent to a pty), as well as the speech/Braille examples you mentioned. But now I am thinking more of applications a la screen, where there is a consistent virtual terminal screen image maintained that can be attached (sequentially or simultaneously) to multiple disparate players. And for speech and Braille interfaces, they might well have a virtual screen image in the canonical sense and have in their own front ends scrolling or "panning" sort of interfaces (screen has support for some Braille devices that appear to work this way).

Ultimately where this should be heading is libchannel. We want to decompose the functionality we're implementing and provide flexible forms of interconnection. For example, say I have 11 physical displays. On each of the first 10 physical displays I would like to have 12 virtual terminals and switch among those with hot keys. On the 11th physical display I would like to be able to see virtual terminal #1 of each of the first 10 displays, and switch among those virtual terminals with hot keys. Now generalize from there.

In the fully decomposed model, a terminal emulation module is a component that takes asynchronous input events (a stream of bytes) and on the other end provides a kind of storage interface (though one that delivers immediate updates asynchronously to asynchronous readers). You can stick this in the middle or not, depending on what configuration you want.

All these features are more than fancy enough not to worry about overmuch now. It's fine to impose a different sort of structure for now that gets just the features you are working on now. Keeping all the new code clean and modular in the general sense means it won't be terribly painful to restructure it later.
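The attach/detach idea is worth a sketch of its own: a newly attached display driver is repainted from the stored screen matrix, so no output history needs replaying. This builds on the hypothetical types from the previous sketch:

/* The common layer's state for one virtual terminal.  */
struct virt_term
{
  int width, height;
  struct cons_cell *matrix;     /* width * height cells */
  int cur_col, cur_row;
};

/* A driver attaching mid-session gets the current screen contents
   reconstructed immediately from the matrix.  */
static void
display_attach (struct virt_term *vt, const struct display_ops *ops,
                void *dev)
{
  for (int row = 0; row < vt->height; row++)
    ops->write_cells (dev, 0, row,
                      vt->matrix + (size_t) row * vt->width, vt->width);
  ops->set_cursor (dev, vt->cur_col, vt->cur_row);
}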

Marcus largely agreed with these comments.

3. Translator with sockets

18 Mar 2002 - 20 Mar 2002 (23 posts) Archive Link: "run.c translator"

People: Marcus Brinkmann, Thomas Bushnell, Roland McGrath, Moritz Schulte

Marcus Brinkmann had been writing a run translator for the Hurd and recently tidied it up. He wrote:

I think it is a simple translator for beginners to learn from, but it also provides useful functionality. It's one piece that bridges between the Hurd filesystems and UNIX programs.

It supports reading and writing, through a bidirectional pipe (but it will only set up the directions that correspond to the open() flags). One simple way is to use it to connect a file to the output of a program:

$ settrans -ac ~/.signature /hurd/run -r /games/fortune -s

Another simple way is to direct what is written into a file to a program:

$ settrans -ac /var/log/apache.log /hurd/run -W /bin/sh -c '/bin/logfile-dispatcher >/tmp/dispatcher-status/index.html'

There followed discussion about how to handle EOF:

Marcus wrote:

The implementation is using the Hurd's IO interface. It seems I was not clear enough in my original mail. The translator creates a pipe to the forked program, and translates io_read into a pipe read and io_write into a pipe write. The translator forks for every open().

Now, suppose you have a program like wc that collects data and returns a summary of that data. It will read from stdin until it gets EOF, and then print to stdout. But if I use the above translator, I have only one file descriptor, and I cannot simply close it if I want to read back the summary of wc. So how do I inform the translator that it should close the pipe the forked program reads from (it can easily use two pipes instead of one bidirectional one, but the program holding a port to the translator cannot easily get two ports, one for the reads and one for the writes)?

And Thomas Bushnell replied:

Everything that uses the IO interface is either a file or a socket.

You are using a bidirectional pipe: you should really call it a socketpair. (Exactly how are you creating it?)

This is what the shutdown call is for. You shutdown for writing, which is like closing one half of the pipe. The guy reading on the other end will get EOF, but the other half of the pipe will be just fine.
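Thomas's shutdown idiom is plain sockets code and easy to try outside the translator. Here is a minimal sketch of the wc scenario Marcus described (the path to wc is an assumption):

#include <stdio.h>
#include <sys/socket.h>
#include <sys/types.h>
#include <unistd.h>

int
main (void)
{
  int fds[2];
  pid_t pid;

  if (socketpair (AF_UNIX, SOCK_STREAM, 0, fds) < 0)
    {
      perror ("socketpair");
      return 1;
    }

  pid = fork ();
  if (pid == 0)
    {
      /* Child: wc reads stdin and prints its summary to stdout,
         both ends of the same bidirectional pipe.  */
      dup2 (fds[1], 0);
      dup2 (fds[1], 1);
      close (fds[0]);
      close (fds[1]);
      execl ("/usr/bin/wc", "wc", (char *) 0);
      _exit (127);
    }

  close (fds[1]);
  write (fds[0], "one two three\n", 14);

  /* Half-close: wc sees EOF on its stdin, but our read side of the
     pipe stays open for the summary.  */
  shutdown (fds[0], SHUT_WR);

  char buf[128];
  ssize_t n = read (fds[0], buf, sizeof buf - 1);
  if (n > 0)
    {
      buf[n] = '\0';
      printf ("wc said: %s", buf);
    }
  return 0;
}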

Which prompted Roland McGrath to say: "Saying "bidirectional pipe" is descriptive too. :-) They're the same thing. As we discussed in great detail here at the time, `pipe' now creates a bidirectional pipe, i.e. it creates a socketpair and does not call shutdown. This follows what `pipe' nowadays does on FreeBSD, Solaris, and some other systems. (And of course, it's a feature got by just doing less rather than adding anything.) "

So Marcus then agreed that if the translator is a pipe, shutdown is appropriate.

However, Moritz Schulte had already written a neat translator for sockets. Marcus had a look at this and decided to drop run.c for now.

4. The Hurd gets (another) /proc filesystem

20 Mar 2002 - 22 Mar 2002 (13 posts) Archive Link: "Linux style /proc filesystem translator"

Topics: FS: procfs

People: Jon Arney, Marcus Brinkmann, Neal Walfield

Jon Arney, a relative newcomer to Hurd hacking, nevertheless seems to have the bit between his teeth already, having had some success with core files. Jon writes:

I've been busy the last week or so putting together a first pass at a Linux style /proc filesystem translator. It seems to have barely enough smarts in it now to support the 'procps' package (not that procps is better than the native Hurd utilities). The intent here was to start with a Linux-2.4 style /proc filesystem and then move on to bigger and better by supporting a 'native' Hurd mode /proc filesystem which models the Hurd's behavior better rather than just emulating a Linux style /proc. i.e. rather than /proc/<pid>/* we might have /proc/tasks/<pid>/<thread>/* and /proc/tasks/<pid>/<children>/* and other cool stuff. It would also be possible for /proc to only show processes owned or grouped by you so that you can't see other users' processes.

The biggest problem with the Linux-2.4 /proc filesystem emulation is that many of the things available under Linux's /proc filesystem either have no equivalent under the Hurd, or are not readily available through existing data structures/RPCs because it's a fundamentally different architecture.

If anyone's interested, I've put it out at http://orac.ensor.org:8080/hurd/hurd-procfs/ with some documentation on what it is, what it does, etc. Let me know if you can't get to it because I've had problems from time to time with the network.

The beauty [] of the Hurd is that it's an entirely independent package and doesn't need to be compiled into the /hurd tree. It supports the traditional automake;autoconf;./configure;make scheme.

Marcus pointed to Neal Walfield's procfs, which attempts to do something similar, and wrote:

It is not really clear to me that any procfs would model the Hurd adequately. In fact, the Hurd already has its own models, like the individual translator settings. Or, for processes, we have libps, which is a library and is much better than parsing a filesystem.

Note that there is a hard reason for Linux to use a procfs. First, they don't have any really good interface to communicate with the system, whereas the Hurd has not only the whole filesystem but also remote procedure call interfaces to communicate with the system. Second, it is easier for them to lump everything together into a single directory hierarchy, because procfs is just a hack, and not a design concept of the system. In the Hurd, the individual system components provide their own interfaces at their location, and a lot of the flexibility of the Hurd stems from the fact that there is _no_ monolithic block that keeps control or has the information.

This basically means that something like a procfs can never be an integral part of the system, because it doesn't fit in conceptually. However, it might be useful anyway, be it to learn about translators, to give Linux users a touch of "feeling at home", or even for compatibility. But if you ask about the Hurd way of doing things, procfs doesn't really come into the answer at all. For the Hurd, procfs is, conceptually, already everywhere you look.

That is not to say that there shouldn't be a filesystem view of the process table. Maybe it can even be provided by the proc server itself, making it a filesystem (but maybe better not, for robustness). A more appropriate place, if you only talk about processes, would then be /servers/proc. If you talk about partitions of hd0, /dev/hd0/ (as a directory rather than a file, which it also would be) might be an idea.

I also think we should have an mtab translator with which filesystems can register themselves voluntarily (for example when you specify the command line option --mtab /path), for compatibility with Unix (so that df is a little more helpful). (nickr is working on that.) And this, although associating a filesystem with a path name is utterly broken conceptually :) [note: the opposite, associating a path name with a filesystem, makes perfect sense]

Jon replied like this:

What you say is probably correct, however, inadequate models are often still useful :) I am not suggesting that Hurd utilities stop using 'libps' in favor of parsing data from /proc. Clearly, this is not an appropriate approach for the Hurd.

I realize that /proc under Linux is a hack and has a reason for existing. Its evolution made sense to the architecture (even if the design was ad-hoc).

After writing it, I also realize that the data obtained through /proc comes from a multitude of sources under the Hurd because of the division of responsibilities in the system.

When I tell Linux users about the Hurd, the first question out of their mouths is 'what does /proc look like?'. They rarely listen to the full architectural spiel and sort of tune out. Part of the "value" in supporting some of the "legacy" applications and interfaces is in winning over more users, which will ultimately (in my opinion) lead to greater interest in developing on and for the Hurd, shaking out bugs, etc.

There followed discussion about a System V Shared Memory implementation and the mtab translator. Code to come soon, perhaps?

5. libio bugs

24 Mar 2002 - 25 Mar 2002 (8 posts) Archive Link: "two bugs in libio environment"

People: Marcus Brinkmann, Roland McGrath, Jeff Bailey

The developers are intending to move from stdio to libio. To this end, Marcus Brinkmann wrote:

I have stressed the libio environment a bit today. As this is a fresh build, there might still be issues that can be attributed to build weirdness, but it seems unlikely to me.

Sometimes creating a file fails with ENOPERM for no apparent reason. This is not reproducible; trying again creates the file just fine. (For example, gcc fails to create a tmp file, or install cannot install a file.) Can there be a race between io authentication and creating files? I think it is only for newly created files, but I am not sure. This happens quite often, about once an hour or more often.

The second bug that does not happen as often is that a program hangs without outputting anything (well, I am not sure _which_ program hangs, so it might also be in the middle of output, but only at the start of a new line). This seems to be more like something that can be introduced by the libio change, as this has to do with buffered output. This has happened to me twice today. Terminal echo still worked, and interrupting the program and then restarting helped.

I have no full environment yet, so this is nothing I can debug right now (we first need a debugger :) But I am interested if Jeff is seeing this as well, and if Roland can make some guesses to what the nature is of such bugs. And maybe Thomas wants to stare at some code ;)

Jeff Bailey replied that he had seen this too. Marcus then added a reproducible failure case, make install for the Hurd sources: "It fails when creating a file, creating a symlink, changing the mode. Maybe at other situations, too. We will have to look at the authentication code."

Roland McGrath replied: "There should not be any sort of races possible that could account for the lossage you are seeing. It must be some more indirect sort of bug."

Marcus printed out the user and group id vectors and the stat user and group fields that are looked for. "The latter were normal, the former were empty. Can it be that some tasks are created with a bogus auth port? This looks to me like a bug in exec."

Roland suggested looking at the creation of the iouser, which is on the server side of the authentication: "If the server is correct then the auth port is indeed suspect."

And that is where the problem stands at present.

6. Disk performance on the Hurd

25 Mar 2002 - 29 Mar 2002 (37 posts) Archive Link: "removing an ext2fs file forces disk activity"

Topics: FS: ext2

People: Marcus Brinkmann, Thomas Bushnell, Diego Roversi, Jon Arney

The issue of (poor) disk/filesystem performance has been a long-running one - but until recently there has been little more than anecdotal evidence.

Marcus Brinkmann writes:

I just found out that

$ while touch /tmp/foo; do rm /tmp/foo; done

causes a lot of disk activity. Further tests showed that the disk is activated for each rm. Is this a hard requirement? In Linux, the loop above does not cause any disk activity (except at the beginning and maybe at the end), it seems to be done completely in the cache.

Marcus went on to say: " we know about this problem in general. And trust me that it has been worse two years ago :) Linux has a very superb caching for the filesystem, while we have no equivalent caching (we just have the pager). However, I was wondering about whether this particular sequence of commands requires disk activity. Maybe it's just a bug somewhere that can be fixed. "

Thomas Bushnell had a different explanation:

The Hurd is synchronizing the disk more carefully than Linux.

It must guarantee that the directory is updated to drop the link *before* the inode refcnt is decremented and the inode possibly cleared.

So it synchronously writes the directory, and then lets the inode get cleared on the next regular sync.

A better way is to properly order writes with the devices, which would require having a table of which blocks depend on which, and the pagers not writing blocks out of order; that's hard work, but would be nice.

Linux, IIRC, simply ignores the issue entirely, and hopes you don't get screwed.

Diego Roversi ran 'tar' as a benchmark:

It looks like there is less than 1% of cpu time used by tar. And about 3% of cpu time spent in kernel code.

In some of my simple benchmarking, I have found that the Hurd is 5-10 times slower than Linux in fs access. For example, bonnie++, a benchmark born to stress filesystems, shows that while Linux has low cpu utilisation, usually <5%, the Hurd is always using a lot of cpu.

And doing some tweaking in the ext2fs translator, I found that caching of data is often inconsistent. I mean, rerunning the same test program, which always made the same file accesses, sometimes caused disk access and sometimes did not.

Jon Arney also had an anecdote:

I noticed this activity as well quite a while back. It's not limited to 'rm'. I also wrote a similar test script with 'mv' and even a 'hello-world' with 'rename' to continuously rename a file from 'foo.0' to 'foo.fffffff', and the drive light just went _crazy_. As you observed, the same sort of thing under Linux operates almost entirely in disk and filesystem cache.

I understand the goals of having disk synchronization performed in the proper order to avoid disk inconsistencies. I also, however, agree with Adam that something less than "optimal" might be better than nothing at all.

Jon went on to discuss performance comparisons:

%cpu time is an instantaneous rate of cpu consumption which varies widely from time to time depending on sample interval. What we're really interested in, I think, is the integral of %cpu over the lifetime of a process. That is, the amount of time (on behalf of the process, both user and system) consumed. In other words, if I take 10% cpu time for 10 seconds, that is roughly equivalent to taking 100% cpu time for 1 second. The amount of system resource required to complete a task of known parameters is the same. Of course, there may be particular scheduling constraints which make the former more desirable in some circumstances over the latter, but I leave that issue alone for the purposes of discussion. An equivalent metric for disk I/O is the transfer rate. That is, the number of bytes successfully transferred to/from a disk device or filesystem per unit time spent on said transfer.

High CPU utilization is usually a GOOD sign during I/O intensive operations because it indicates that the CPU is not spending lots of time waiting on the I/O but is instead being employed to good use. i.e. if the CPU utilization is high this generally indicates that the system is not waiting awfully long for I/O operations to be completed (at least in cache). The system's performance should be gated by the fastest individual component of the system (usually the CPU and front side bus).

This is why I initially suggested that some relative (and independently verifiable) performance characterizations be gathered for the Hurd, lest we be fooled into believing that the Emperor is wearing clothes.

Such performance data might also ultimately be useful in tracking the progress of the Hurd over time so that we can see what the changes we make are doing in terms of performance of various subsystems (io/network/...).
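As an aside, Jon's metric - the total CPU time charged to a task, user plus system - can be measured directly with the standard getrusage interface. A minimal sketch:

#include <stdio.h>
#include <sys/resource.h>
#include <sys/wait.h>
#include <unistd.h>

int
main (int argc, char **argv)
{
  if (argc < 2)
    {
      fprintf (stderr, "usage: %s command [args...]\n", argv[0]);
      return 1;
    }

  if (fork () == 0)
    {
      execvp (argv[1], argv + 1);       /* run the task being measured */
      _exit (127);
    }
  wait (0);

  /* Total CPU seconds charged to the child: the "integral of %cpu"
     Jon is talking about.  */
  struct rusage ru;
  getrusage (RUSAGE_CHILDREN, &ru);
  printf ("total CPU time: %.3f s\n",
          ru.ru_utime.tv_sec + ru.ru_utime.tv_usec / 1e6
          + ru.ru_stime.tv_sec + ru.ru_stime.tv_usec / 1e6);
  return 0;
}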

[Here are the results from Bonnie]

Linux:

Machine: Compaq Pentium III 850 Mhz
256 Kb L1 cache

Under Linux-2.4.17
hdc: 78165360 sectors (40021 MB) w/2048KiB Cache, CHS=77545/16/63,
UDMA(33)

File './Bonnie.1126', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 3...Seeker 2...start 'em...done...done...done...
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
          100 10464 91.5 140505 97.4 141525 99.5 11226 97.7 445682 100.1 21369.7 96.2
 

Under Hurd-0.2

Transfer mode unknown

File './Bonnie.94', size: 104857600
Writing with putc()...done
Rewriting...done
Writing intelligently...done
Reading with getc()...done
Reading intelligently...done
Seeker 1...Seeker 2...Seeker 3...start 'em...done...done...done...
              -------Sequential Output-------- ---Sequential Input-- --Random--
              -Per Char- --Block--- -Rewrite-- -Per Char- --Block--- --Seeks---
Machine    MB K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU K/sec %CPU /sec %CPU
          100  1164  3.6  1375  1.0  1087  5.6  1508  5.3  3104  3.9 159.5  4.1
 

After examining the numbers, I think you will join me in appreciating the relative performance differences between the Hurd's ext2fs and Linux's ext2fs implementations. For the sake of completeness, it would be interesting to examine the numbers under FreeBSD and other similar systems. The more numbers we get, the better understanding we will have of the performance of the Hurd relative to other O/S implementations. This may lead ultimately in the direction of altering the implementation to the betterment of the system.

So it seems that the Hurd could do with some improvement.

Marcus read the results and said: "We need to run more stress tests and test suites. Before, it was hopeless, but it seems we have fixed enough border conditions that it becomes feasible now. "

Thomas had an idea about how things could be improved:

Suppose diskfs/ufs/ext2fs needs to guarantee that block A is written before block B.

Right now, it just does a synchronous update of block A before proceeding to the modification of B. That guarantees it, at some cost. (It's the way BSD always worked, by the way.)

The ordered writes way is to make both modifications, but keep track of the dependency: "A must be written before B".

When a block gets written, you can delete all records saying that it must be written before other things. For example, if your table says "A must be written before B", and you are now writing A, you can drop that entry.

But if the table says "A must be written before B", and the pager is now presented with block B to be written, it must *first* go find the block A and have it written.

I think the best way to arrange both of these is the following algorithm (in the pageout routine):

 
  while (table contains a block A that must be written before this block)
    mark table "waiting for block A to be written"
    ask kernel to pageout block A
    wait on condition
  for each (block A such that this block must be written first)
    remove mark from table
    if (table marked "waiting for this block to be written")
      wakeup condition

This is not quite enough yet, however.

There are cases (as noted before) where the following sequence arises:

write block A
write block B
write block A again

and where the writes *must* occur in that sequence. (This often happens when block A contains two inodes, and block B must be written *after* the update of the first, and *before* the update of the second.)

If no actual pageouts happen until after all three writes, then the table will contain two records:
"A must be written before B"
"B must be written before A"
And as soon as a pageout happens, you'll get a deadlock.

And indeed, by that time, there's nothing you can do, because the intermediate state of block A is gone forever.

So the "add mark to table" routine must detect cycles. To say "block A must be written before block B" you must:

 
  while (table contains "block B must be written before block A"
      [TRANSITIVELY, even if not identically])
    mark table "waiting for block A to be written"
    ask kernel to pageout block A
    wait for condition
  mark table "block A must be written before block B"

Now, one final wrinkle. Suppose we are changing A, and then B, and the modification to A must get written first. Then which should you do:

modify A
mark table "A must be written before B"
modify B

or

modify A
modify B
mark table "A must be written before B"

The answer is clearly the former.

There was no discussion about coding any of this up, however.
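For the curious, here is one deliberately naive C sketch of the dependency table Thomas describes. It is illustrative only, not Hurd code; the pageout primitive is assumed, and a real implementation would need locking and a smarter data structure:

#include <stdbool.h>
#include <stddef.h>

#define MAX_DEPS 1024

/* One edge: FIRST must reach disk before SECOND.  */
struct dep
{
  void *first, *second;
  bool live;
};
static struct dep table[MAX_DEPS];

/* Does the table (transitively) require FROM to be written before TO?
   Terminates because add_dependency keeps the table acyclic.  */
static bool
must_precede (void *from, void *to)
{
  for (size_t i = 0; i < MAX_DEPS; i++)
    if (table[i].live && table[i].first == from
        && (table[i].second == to || must_precede (table[i].second, to)))
      return true;
  return false;
}

/* Assumed primitive: synchronously page out BLOCK, honouring existing
   ordering (prerequisite blocks are written first) and calling
   block_written for everything it flushes.  */
extern void pageout_block_and_wait (void *block);

/* On completion of a write of BLOCK, drop the edges saying BLOCK must
   be written before something else.  */
static void
block_written (void *block)
{
  for (size_t i = 0; i < MAX_DEPS; i++)
    if (table[i].live && table[i].first == block)
      table[i].live = false;
}

/* Record "A must be written before B".  If the table already requires
   B before A (the write-A, write-B, write-A-again case), force A out
   first, exactly as in Thomas's pseudocode, so no cycle ever enters
   the table.  */
static void
add_dependency (void *a, void *b)
{
  while (must_precede (b, a))
    pageout_block_and_wait (a);

  for (size_t i = 0; i < MAX_DEPS; i++)
    if (!table[i].live)
      {
        table[i].first = a;
        table[i].second = b;
        table[i].live = true;
        return;
      }
  /* Table full: a real implementation would flush something here.  */
}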

7. Binary compatibility chat

25 Mar 2002 - 26 Mar 2002 (7 posts) Archive Link: "GNU/Linux binary compatibility (Was: Re: memory_object_lock_request and memory_object_data_return fnord)"

People: Jeroen Dekkers, Jeff Bailey, Marcus Brinkmann

Jeroen Dekkers thought that it was not a good idea to have binary compatibility with GNU/Linux. "It looks like we are then bound to the ABI and can't change it if we want to keep compatibility. There are also other problems, for example a program compiled on GNU/Linux could happily use PATH_MAX but that would cause a buffer overflow on GNU/Hurd. Also programs on GNU/Linux can use the /proc filesystem and can't on GNU/Hurd."

However, thinking the other way round, Jeff Bailey said "Worse is the idea of what would happen if a GNU/Hurd binary were run on a GNU/Linux system. You can almost guarantee buffer overruns in that case."

Marcus Brinkmann was not sure that Jeff was right, however.

There was also a difference of opinion about the point of it, the argument going something like: "If it is Free Software then I can recompile it for GNU/Hurd. I don't care about non-Free software".
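Jeroen's PATH_MAX example is concrete enough to illustrate. GNU/Hurd deliberately leaves PATH_MAX undefined, since paths there have no fixed bound, so source code written against the Linux constant does not even compile, and a Linux binary carrying a PATH_MAX-sized buffer can overrun it. A sketch of both styles (getcwd (NULL, 0) is a GNU extension that allocates a buffer of the right size):

#include <limits.h>
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

int
main (void)
{
#ifdef PATH_MAX
  /* Typical GNU/Linux-ism: fine where PATH_MAX exists, but GNU/Hurd
     does not define it, and a binary carrying this fixed-size buffer
     could overrun on paths the Hurd supports.  */
  char buf[PATH_MAX];
  if (getcwd (buf, sizeof buf) != 0)
    printf ("%s\n", buf);
#else
  /* Portable on GNU/Hurd: let the library allocate.  */
  char *dir = getcwd (NULL, 0);
  if (dir != 0)
    {
      printf ("%s\n", dir);
      free (dir);
    }
#endif
  return 0;
}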

8. GNU extensions for 'ls'

28 Mar 2002 (4 posts) Archive Link: "[RFC] First batch of changes for ls to support GNU extensions."

People: Roland McGrath

Several times in the past, mention has been made of features of the Hurd that the GNU utilities do not support. This week Alfred Szmidt attempted to remedy that by adding features to 'ls', such as the ability to print the author field, translator information, and the permission bits for unknown users.

Roland McGrath had a few nits, such as what the defaults should be and how the feature testing in configure should work.

