The GIMP Homepage (http://www.gimp.org) | The GIMP News Archive (http://www.xach.com/gimp/news/index.html) | The GIMP Mailing Lists (http://www.gimp.org/mailing_list.html) | The GIMP FAQ (http://www.rru.com/~meo/gimp/)
Table Of Contents
|1.||15 Feb 2001 - 27 Feb 2001||(16 posts)||Layers, Dialogs and Other Bits of Love on Valentines Day|
|2.||17 Feb 2001 - 21 Feb 2001||(9 posts)||What's Going on in Gimp-Print Land|
|3.||20 Feb 2001 - 25 Feb 2001||(18 posts)||RGB vs RGBA|
|4.||20 Feb 2001 - 22 Feb 2001||(4 posts)||Preview Icons|
|5.||21 Feb 2001 - 22 Feb 2001||(5 posts)||Eliminating Tool Destruction and Adding Better Caching Support|
|6.||4 Mar 2001||(1 post)||ANNOUNCE Gimp-Print 4.1.5|
Mailing List Stats For This Week
We looked at 51 posts in 221K.
There were 21 different contributors. 12 posted more than once. 6 posted last week too.
The top posters of the week were:
1. Layers, Dialogs and Other Bits of Love on Valentines Day
15 Feb 2001 - 27 Feb 2001 (16 posts) Archive Link: "Layers, dialogs and other bits of love on Valentines Day"
People: Sven Neumann, Miguel de Icaza, Marc Lehmann, Ernst Lippe, Daniel Egger, Nick Lamb, Nathan C Summers
Emmanuel Mwangi asked if any work on nested layers was going to appear before GIMP 2.0, to help with layer organization. Sven Neumann thought that any major changes to the way layers were implemented would push back the release of Gimp-1.4. He stated that the layers section of the code is particularly messy, meaning the releases would go something like this: clean up the code, port to GTK+-2.0, release 1.4, add GEGL and PAPUS, then release 2.0. gimptek DESIGNS (presumably Emmanuel Mwangi) responded that the suggestion was cosmetic only: it was a way to crudely hide/show layers in groups, as opposed to individually. Emmanuel additionally asked how much of Gimp 2.0 would be bonoboized. Sven thought that layer viewing could be considered after the preview stuff was done. Sven also stated "We have not yet decided if we want to bonoboize something at all, but mostly due to the fact that no one I talked to so far could explain what this would mean in particular." Miguel de Icaza responded with a 60,000 foot picture of what "Bonoboization" might include:
Nathan C Summers had wanted to hear this for a while, citing that the Gimp plug-in protocol was highly asymmetric. The idea of keeping the plug-ins in a separate process space from the images would also provide some flexibility. This also suggested to Nathan the idea of using CORBA for Gimp farms. He wondered if CORBA could handle the large amounts of data transfer the Gimp required. Michael Meeks pointed to the CORBA shared memory interface for Vladimir's video editing toy. Marc Lehmann added that CORBA was not the only way to go; there are many CORBA alternatives, like DCOP and MCOP. Ernst Lippe thought these would work as interfaces between the Gimp and plug-ins, but countered that CORBA was a serious standard and would allow better integration and use by applications outside the Gimp. Marc Lehmann said he wasn't advocating MCOP over CORBA, but wanted to make sure that CORBA was actually what the Gimp needed. He added "For example, what has CORBA done to gnome so far? All I see is a bewildering multitude of apis that most people don't understand. Gnome really *has* become a mega-API with functions for each and everything and then some. In practise this leads to hard-to-factor components because nobody understands the dependencies anymore. A famous example of this is the annoying stubbornness of many gnome applications to start a whole bunch of other processes (the gnome-panel for example), without being asked for."
Nick Lamb didn't think this argument advanced the discussion, and wondered if the 1.3 core separation process would be able to strip the dependence on X and make a distributed object model possible. Marc Lehmann thought that CORBA would only be a different set of APIs, and that X was only required for fonts at the moment. Marc thought making the Gimp a proper library would help more than using CORBA. Ernst Lippe said "... i believe that distribution is a nice thing to have, e.g. to be able to communicate with scanners/printers that are not attached to your local machine or to be able to communicate with some 3D rendering application that is running on a powerful server somewhere. So I am very much in favor of a protocol that can handle network connections." Ernst also asked that a distinction be made between CORBA and the interfaces people describe in CORBA, since the implementation is not the thing itself.
Daniel Egger added "ORBit has the nice feature that it optimizes local transfers for maximum performance instead of going over TCP/UDP, which means that we could use it easily without having to worry about performance."
There were no more posts in this thread.
2. What's Going on in Gimp-Print Land
17 Feb 2001 - 21 Feb 2001 (9 posts) Archive Link: "What's going on in gimp-print land"
People: Robert L Krawitz, Roger Leigh, Austin Donnelly
Robert L Krawitz posted " Just thought I'd send a quick update on what we're up to with gimp-print. In 4.1, we've done a major reorganization of our code base, creating a new shared library (libgimpprint.so) that applications (such as the print plugin, Ghostscript, and CUPS drivers) can link against. The intent is that libgimpprint (which is not dependent on anything else, such as the Gimp, GTK, etc.) be installed as part of the underlying infrastructure that the print plugin can depend upon. This raises questions of how to package it with the Gimp. The Gimp could certainly distribute the whole package with it, but it's quite large. It could also be treated like JPEG and only built if the underlying libgimpprint exists on the system. Suggestions?"
Roger Leigh pointed out that this could be done by running configure for gimp-print with --disable-libgimpprint. Austin Donnelly thought the plugin should be statically linked against the library to avoid version skew, and that the shared library should be available to others to handle their own dependencies. Robert L Krawitz said this sounded like splitting gimp-print into four separate pieces: libgimpprint, the plugin, and the CUPS and Ghostscript drivers. Roger Leigh thought this would work, but that these libraries should not be shipped with the Gimp. Austin Donnelly said this sounded sane. This meant that the only people with statically linked print plug-ins would be the ones who built the Gimp from source.
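For readers building from source, the arrangement discussed would presumably look something like this. The --disable-libgimpprint flag is the one Roger named; the surrounding invocation is an assumed, generic autoconf-style sketch, not official gimp-print instructions.

```shell
# Hypothetical invocation, per Roger Leigh's suggestion: build gimp-print
# without its bundled copy of the library, linking instead against a
# libgimpprint already installed on the system.
./configure --disable-libgimpprint
make
```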
3. RGB vs RGBA
20 Feb 2001 - 25 Feb 2001 (18 posts) Archive Link: "RGB vs RGBA - why Add Alpha Channel?"
People: Raphael Quinet, Sven Neumann, Daniel Egger, Seth Burgess, Nick Lamb, Zachary Beane, Jens Lautenbacher
Raphael Quinet posted regarding RGB vs. RGBA images, and why the background layer was special and required "Add Alpha Channel". The results of a previous discussion yielded:
Sven Neumann had recently had this discussion with Mitch and decided that the Gimp should automatically add the alpha channel if
Raphael Quinet went back and reread his discussion on this topic from over a year ago and decided that it was a bad idea to add the alpha layer in every case. Jens Lautenbacher pointed out that this left things like clear selections on the background layer, which behave differently depending on where the background layer is positioned. Daniel Egger commented "Always having an alpha channel would allow us some nifty optimisations in the GIMP. Although this takes more memory on some images (4 instead of 3 resp. 2 instead of 1 byte per pixel) the code would be much simpler, many ifs in the source which trash the branch prediction of any processor would go away and the pixels would in most cases directly be on cache boundary, speeding the thing up even more. And for real graphics one needs an alpha channel anyway, so what's the deal?" Zachary Beane mentioned that always having an alpha layer would cause compatibility issues in 1.4 for users that didn't need alpha, but that Sven's list of implicit alpha operations was good. Seth Burgess thought that consistent results, with or without layers, were more important than the current handling of non-layered images. Jens Lautenbacher agreed, stating the current non-layered functionality was actually broken if one viewed it from a layered perspective.
Zachary Beane, in the same vein, pointed out that grayscale images allowed you to attempt to paint in color, yet only grayscale color was represented on the canvas. This is clearly wrong behavior, and the tool should therefore have two separate modes of operation for two different types of images.
Seth Burgess picked up that Zach was making a rhetorical point, but agreed that the grayscale image did not function correctly. He said "I think what it comes down to is that modes increase the load on the user of the tool. If they have to exist, there should always be a clear indicator of the current mode. Where possible, I'd like to see them go."
Raphael Quinet added flames to the fire by bringing up the handling of indexed images, which are also not to be handled as RGB images and therefore mandate a different set of tools. He returned to his original points by saying "Basically, we have to choose the lesser of two evils:"
Sven Neumann diverted the discussion again by asking how file formats that don't understand the alpha channel would handle the alpha being added to the image by default: "you actually don't want to save the alpha channel with the image at all if you never touched it. One way to solve this would be to introduce a function to check if the image's alpha channel is empty. This hint could be set from the already existing tile-row hints without too much overhead." Daniel Egger thought that special dirty flags would be useful and that the image could be flattened if it didn't support alpha. Nick Lamb added that COW (copy on write) tiles would make the overhead of not using the alpha channel zero. COW was lost due to the spaghetti code in 1.1.x, but might be a good candidate once this is cleaned up. Daniel Egger said "COW is indeed a good thing. However I assume you address the mentioned memory overhead with your answer, and I'm not sure how you would avoid it with copy-on-write. The 3 bytes will always be problematic because we always step over memory boundaries, which is a huge loss in performance on any modern architecture. However restructuring the code to have a special function for each of the possible cases could be a cure for the branch-prediction-smashing distinctions in the source of the performance-critical functions."
4. Preview Icons
20 Feb 2001 - 22 Feb 2001 (4 posts) Archive Link: "preview icons"
People: Sven Neumann, Simon Budig, Austin Donnelly
In response to preview icons being added to interface.c in CVS, Sven Neumann said "This is IMHO the most superfluous feature that has ever been thought of and overall a pretty bad idea. You don't want every single brush stroke to propagate a notification about an icon change to the window manager. Also it seems undesirable to create a new gc and preview for this all the time. The necessary task switches and the overhead caused in the X server and window manager are IMHO not worth the advantage you might get from those icons. I strongly doubt that the icons will be helpful at all since their size is so tiny on almost all window managers. Using slightly different but static icons for different kinds of gimp windows might prove to be helpful." Simon Budig agreed that it was a crazy idea, but said it was fun to implement. The rendering is done during the idle loop, so it shouldn't have too deep an impact on performance. Sven shuddered at having to constantly redraw the icon, but thought that if people liked it, it should be a preference option. Austin Donnelly thought it was a good idea.
5. Eliminating Tool Destruction and Adding Better Caching Support
21 Feb 2001 - 22 Feb 2001 (5 posts) Archive Link: "RFC: eliminating tool destruction and adding better caching support"
People: Nathan C Summers, Michael Natterer, Sven Neumann
Nathan C Summers posted
Problem: Many tools instruct the core to destroy themselves on certain kinds of state changes, such as a change of image or display. While some tools are quite good at handling these changes, others are quite unstable psychologically and commit hara-kiri for the smallest reasons. This is inefficient. It complicates the tool handling code, causes a lot of unnecessary frees, mallocs and initializations, and seems to me to be a lot like "Windows must restart for these changes to take effect." On the other hand, multiple instances of a tool should exist for different input devices so that they can be in different states.
Proposed Solution: Tools should just deal with having their state changed. We can introduce a function tool_manager_get_tool that takes a tool class and input device and returns the correct tool (creating it on the fly if needed). The toolbox would just call that function and set the current tool on that device to that. active_tool should just go away. (so should iterating over the list of registered tools, perhaps)
Problem: Some tools, such as iscissors, keep around a lot of cached data generated from the image they are attached to. Changing the image they are working on clears this cache. This can be slow when working on multiple images or layers.
Proposed Solution: a generic object, ToolCache, from which the specific kind of cache would be derived. A virtual function, compute_cache, would compute the value to be cached.
For efficiency reasons, the cache may either be generated on-the-fly when its values are requested, or whenever the target changes. However, if the cache is not accessed after a certain number of changes it automagically switches to on-the-fly mode to conserve CPU cycles.
Sven Neumann thought the first problem would go away once all tools were proper objects, and that object creation and destruction created little overhead. Mitch's GimpToolInfo, which tracks what tools are available, allows GimpTool to only exist when needed. Nathan C Summers thought the proper use of objects lessened the overall impact, but didn't cover the destruction of tools because of state changes. The GimpToolInfo was fine for interactive mode, but caused tools to be recreated several times in scripts. Changing tools can cause significant slowdowns on older machines if the cache is not kept around. He also suggested that the color selector was a tool, but contrast/brightness was not, since it did not directly use the image view. Michael Natterer (Mitch) thought tools should be able to handle their own destruction and that programmers should not need to decide if a tool is destroyed. He thought making the existing tools stable was more important than adding caches. He also argued that brightness/contrast should be a tool: "how else do you want to get the display events there?" He also said that the Gimp 1.4 interface should be hacked so that it will work well with 2.0. Nathan agreed that tools should worry about their own destruction, but they should also be able to handle changes of display. He thought that a consistent interface between 1.4 and 2.0 was a good idea, since it allowed for arbitrary things like macro recorders.
6. ANNOUNCE Gimp-Print 4.1.5
4 Mar 2001 (1 post) Archive Link: "ANNOUNCE: gimp-print 4.1.5 release"
People: Robert L Krawitz
Robert L Krawitz posted an announcement about gimp-print 4.1.5
This is gimp-print version 4.1.5, a development release on the 4.1 line.
This software includes the Print plug-in for the Gimp, and GhostScript and CUPS drivers. The support for printers in GhostScript and CUPS is identical to the support for these printers in the Print plugin -- they use the identical code base. Please read src/ghost/README and src/cups/README for more information on this.
The Gimp Print plugin requires the Gimp 1.2.
Gimp-Print 4.1.5 contains the following fixes and improvements over 4.1.4:
The following problem with 4.1.5 is known to exist: printing in at least some modes on the Canon BJC-8200 is incorrect.
Sharon And Joy
Kernel Traffic is grateful to be developed on a computer donated by Professor Greg Benson and Professor Allan Cruse in the Department of Computer Science at the University of San Francisco. This is the same department that invented FlashMob Computing. Kernel Traffic is hosted by the generous folks at kernel.org. All pages on this site are copyright their original authors, and distributed under the terms of the GNU General Public License version 2.0.