LINUX GAZETTE

October 2003, Issue 95       Published by Linux Journal



Linux Gazette Staff and The Answer Gang

TAG Editor: Heather Stern
Senior Contributing Editor: Jim Dennis
Contributing Editors: Ben Okopnik, Dan Wilder, Don Marti


Copyright © 1996-2003 Specialized Systems Consultants, Inc.

The MailBag


HELP WANTED : Article Ideas
Submit comments about articles, or articles themselves (after reading our guidelines) to The Editors of Linux Gazette, and technical answers and tips about Linux to The Answer Gang.


dual Booting xp @ suse8.2

Sat, 30 Aug 2003 15:44:39 -0700
Patrick B (ironman616 from hotmail.com)

I have two separate hard drives on my computer, hda and hdb. XP is on hda and SUSE 8.2 is on hdb. I'm booting SUSE with a floppy with LILO installed on it. Do you know of a LILO configuration that will boot my system? I tried the default installation that wrote the boot loader to the MBR. All I got when I tried booting was a blinking cursor in the upper left corner of the screen. If you know of a LILO configuration that works I would be most grateful. Any help is most appreciated.

ironman616
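For anyone who wants to experiment while waiting for a proper answer, a minimal lilo.conf for a two-disk XP/SUSE layout might look roughly like the sketch below. The kernel path and SUSE root partition are guesses - check yours, and re-run /sbin/lilo after every edit:

boot=/dev/hda            # write the loader to the MBR of the first disk
prompt
timeout=100

image=/boot/vmlinuz      # kernel path and name vary by install
    label=linux
    root=/dev/hdb2       # wherever SUSE's / really lives
    read-only

other=/dev/hda1          # the XP partition
    label=windows
    table=/dev/hda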


webdialer using http

Sat, 9 Aug 2003 11:39:22 +0100
Aengus Walton (smiley0 from myrealbox.com)

I have a server and a workstation, and when I use the workstation it's masqueraded behind the server; but when the rest of the family needs to get on the net, I have to get the server to log off the net so they can log on directly from the Windows workstation. So what I need is an HTTP interface to wvdial (if possible) that's compatible with IE.

I've already spent time installing webdialer (a project which does just this), but unfortunately it doesn't work too well with IE as its client, and changing the client isn't an option.

Any suggestions would be greatly appreciated

cheers

Aengus


booting linux from flash memory!

Wed, 30 Jul 2003 02:27:18 +0000
Devi Priya (ijpriya from hotmail.com)

Hello,

I am new to this list. I am involved in an embedded project. I have a system with Linux as its operating system. My system has external peripherals like SDRAM, Flash memory, etc.

I have to boot my Linux OS from Flash memory. I have BIOS code which does the minimal hardware initialization. I would like to know how to boot my OS from Flash memory.

Thanks in advance for any help!

Well, this fellow's just getting started and a google search probably helped him more than we could. But, if someone has their own tale of burning their own flash-based startup, and what they were really using it for, I think it'd make a great article. -- Heather


minicom related - help required

Tue, 5 Aug 2003 06:36:04 -0400 (EDT)
Sriram N.S. (sriram_ns from hotvoice.com)

hi,

(1) I have been using minicom v2.00.0 on Red Hat 7.3 to test my serial-port driver. While doing file transfers (with both flow controls disabled) I observe that minicom attempts to enable software flow control automatically. This happens even when hardware flow control has been enabled; I get to see the corresponding ioctl being issued to the driver. How can I overcome this particular problem? I have been attempting the transfer operation at baud rates of 230K and 115K using the ZModem protocol. Is there an undocumented limitation in minicom with respect to speed? This particular problem also affects the transfer of binary files, as minicom mistakes the content of the received file for control info.

(2) What are the possible causes for "Garbage Count exceeded"/"Bad CRC" messages in minicom?

Your help in this regard will be highly appreciated

Rgds, Sriram.

If you have more juicy things to say than "there may be a new version out" - any readers are welcome to chime in with real experiences on this one... -- Heather


Re:help with grub

Fri, 8 Aug 2003 12:33:39 -0500
cnuccio (cnuccio from ltpro.com)

hi

I saw your Linux tips about grub and am in a bit of a pickle, and I thought you could help.

I got a new Dell with XP preinstalled. I asked for FAT32 but they gave me NTFS. Anyway, I installed PartitionMagic, made a 5 GB partition and set up BootMagic (prepare for new OS).

I activated the partition and then booted to the Red Hat 9 (Shrike) CD and installed. Reboot, and all I see is grub, with only Linux as a choice.

Dell was no help, so I tried for a few days to get some info on editing grub.conf and getting XP booting again. I tweaked and trial-and-errored, and first got it to boot to XP, but there was an "unable to validate license" message or something. No dialog to log in.

More tweaking: I added inter-partition mapping (I assume the license key was on another partition, and it seemed to work even if I am wrong) and got it to the login box, but after typing in login and password, it just sat there, unable to start Explorer. I know I am close, but can't seem to get grub correct. I found a GUI grubconf utility, but it assumes you know what you are doing.

Here is my grub.conf as I left it when my brain melted. I did a thing or two more to it and broke it again, but it was late:

See attached melted.grub-conf.txt

and here is the output of fdisk -l to show my drive info:

See attached melted.fdisk-l.txt

Can you help me? I didn't make rescue floppies (XP nor PartitionMagic) and didn't back up my data (I have done this several times with no problems), and I really hate reinstalling (mainly losing lots of unread mail), but I know I am close...

Please help if you can. Thanks very much for your time.

chris nuccio

Anyone who's gotten their hands grubby with WinXP want to give it a shot? -- Heather
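In the meantime, for readers in the same pickle: a generic grub.conf stanza for chainloading XP usually looks something like this sketch. The (hd0,0) is an assumption - match it against your own fdisk -l output:

# Chainload XP from the first partition of the first disk.
title Windows XP
    rootnoverify (hd0,0)
    chainloader +1

# If XP lives on another disk, GRUB can swap the BIOS drive order first:
# title Windows XP (second disk)
#     map (hd0) (hd1)
#     map (hd1) (hd0)
#     rootnoverify (hd1,0)
#     chainloader +1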

GENERAL MAIL


The Mailbag: Article Idea: "Windows Defectors" column

Mon, 4 Aug 2003 12:45:15 -0400
grok (grok from sprint.ca)

Hi all:

I'm glad I came across this 'polemic' now (being, sadly, only a sometime reader of LG). My 2-cents'-worth:

Some LG contributors seem to consistently miss the point as (for that matter) do many GNU/Linux 'geeks': this isn't about what possible MS defectors should or shouldn't be doing -- it's about what they will do; and they WILL be staying away from GNU/Linux unnecessarily if they anticipate the least complication in 'switching over' -- as is already the case somewhat.

The original letter-writer has hit the nail on the head (again -- as this is not the first time this has come up, by any means). 'Turn-key' types need -- and should receive -- all the help and encouragement they can possibly get to switch over. To quibble ahead of time over the methodology or the feasibility or the desirability, even, of getting a significant percentage of Windows users to 'defect' to us, is more about confusing the politics of the matter with the mechanics of it all.

IMO 'geeks' seem to excel at being technically sophisticated about these issues -- but politically naive in the extreme. It ain't rocket science to understand that we simply are required to hold these people's hand a bit in helping them over the hump, if we entertain any hopes of freeing the planet from the thrall of Microsoft (and others). The details will take care of themselves AFAIC -- discussion here of the Knoppix solution being a case-in-point.

As a former long-time 'windows tips' reader and fairly experienced political activist -- and small-time GNU/Linux advocate/user of some years' experience (if not expertise) as well -- there is one thing that is ABUNDANTLY clear to me: there is absolutely NO fundamental contradiction between having a 'turn-key', 'idiot-proof' GNU/Linux install over top of the preferred geek wet-dream OS we all desire. We can have things both ways (when it comes to GNU/Linux, if not in Life). Geeks who object to 'dumbing things down', (for whatever reasons) are simply missing the Big (non-technical) Picture -- which does INDEED matter in the long term. Many GNU/Linux users won't settle for Free Software becoming yet another 'niche market'. Too many geeks have said as much that they wouldn't mind/care about such a state of affairs. They clearly do not understand that this attitude could (but I don't believe would) lead to the downfall of Free Software. It certainly doesn't help, and actually harms, IMO our 'Cause' [i.e. see my postscript].

I am looking forward to reading a forthcoming regular 'Windows Defectors' column monthly in Linux Gazette. ;>

P.S.: LG should do an article about the insulting little 'cliques' of geeks who inhabit the various #debian/#linux/#other channels on IRC, terrorizing and driving away newbies in droves. Clearly these 'experts' have one set of problems they themselves haven't yet 'defined'...

The staff here at LG had a mixed reaction to this. I've formatted the replies we got below in the layout TAG uses, so that you, the gentle reader, can still see the context of each reply. -- Thomas Adam

Some LG contributors seem to consistently miss the point as (for that matter) do many GNU/Linux 'geeks': this isn't about what possible MS defectors should or shouldn't be doing -- it's about what they will do; and they WILL be staying away from GNU/Linux unnecessarily if they anticipate the least complication in 'switching over' -- as is already the case somewhat.

[Thomas] Sigh, I think you're being too idealistic. I agree with you, but you have to remember that "will do" is the operative phrase in your sentence. Many people that write in asking questions such as "is Linux better than Windows" often don't know themselves whether or not it would be a viable alternative for them to switch, and so we ('we' as in the staff at LG) try to extrapolate what they might want to do, based on the really poor information that the querents send in.
Many people that want to run Linux, though, often have a pre-conceived notion as to what they want to use it for, i.e. a webserver, fileserver, mailserver, etc., and more often than not they've heard that Linux can do this, and so they concentrate their efforts on finding out how Linux can do that specific task -- which is great. This then usually gives them the incentive to explore Linux's capabilities further and to get Linux to do Other Things (tm).

The original letter-writer has hit the nail on the head (again -- as this is not the first time this has come up, by any means). 'Turn-key' types need -- and should receive -- all the help and encouragement they can possibly get to switch over.

[Thomas] Which we try as best we can to provide. You have to understand, though, that we can only help querents if they are willing to put the effort in themselves; there is only so much effort we can put into an answer, based on how far he/she is prepared to take our efforts. This is why joining a local LUG can be hugely beneficial for those who are just finding their feet, as it were.
I know of one querent (I shan't name names, although Heather will know who I am talking about (Hi, Heather!)) who continually e-mails us questions. This is great, since this is what we're here for in the first place, but it seems to me as though very little to no effort is first put into researching the question before it is sent. More often than not, we at the LG are a front-end to google/linux.

To quibble ahead of time over the methodology or the feasibility or the desirability, even, of getting a significant percentage of Windows users to 'defect' to us, is more about confusing the politics of the matter with the mechanics of it all.

[Thomas] Not at all, the two are virtually synonymous if you ask me, and often go hand in hand, since it a) depends on what (if any specific task) person X wants to do, and b) the mechanics are usually executed as a result of the purpose for defecting. I use the term 'purpose' in the loose sense, since there are some people who try Linux, just because they have heard a lot about it.

IMO 'geeks' seem to excel at being technically sophisticated about these issues -- but politically naive in the extreme. It ain't rocket science to understand that we simply are required to hold these people's hand a bit in helping them over the hump, if we entertain any hopes of freeing the planet from the thrall of Microsoft (and others). The details will take care of themselves AFAIC -- discussion here of the Knoppix solution being a case-in-point.

[Thomas] I believe you are creating a stereotype, to say nothing of making a sweeping generalisation. Granted, there is a small minority who have the attitude of "RTFM" each and every time a person asks a question (this is very common in IRC rooms), but most people are genuinely trying to help. Again, I stress the importance of LUGs here as a means of "holding their hands".
I disagree with the way you have phrased your sentence: "freeing the planet from the thrall of Microsoft (and others)." Remember that switching over is down to individual choice, or to the choice of the organisation/business/etc. that a person may well work for. In the latter case, training ought to be given, but for the former, it is again dependent on his/her needs from Linux.
There are some people who I recommend should stick to using MS-Windows, based upon their requirements. My parents, for example, would really not get on with Linux one bit, due to their needs; at this time, Linux still does not satisfy them.

As a former long-time 'windows tips' reader and fairly experienced political activist -- and small-time GNU/Linux advocate/user of some years' experience (if not expertise) as well -- there is one thing that is ABUNDANTLY clear to me: there is absolutely NO fundamental contradiction between having a 'turn-key', 'idiot-proof' GNU/Linux install over top of the preferred geek wet-dream OS we all desire.

[Thomas] Are you saying that Linux is a source of sexual satisfaction? I also completely refute your stereotype of "geek" (whatever you mean by that). No OS is 'idiot-proof', since it all comes down to how you as a user of the OS decide to manage it.

We can have things both ways (when it comes to GNU/Linux, if not in Life). Geeks who object to 'dumbing things down', (for whatever reasons) are simply missing the Big (non-technical) Picture -- which does INDEED matter in the long term.

[Thomas] Which is what?

Many GNU/Linux users won't settle for Free Software becoming yet another 'niche market'. Too many geeks have said as much that they wouldn't mind/care about such a state of affairs. They clearly do not understand that this attitude could (but I don't believe would) lead to the downfall of Free Software. It certainly doesn't help, and actually harms, IMO our 'Cause' [i.e. see my postscript].

[Thomas] We can perhaps thank RMS for his continual devotion to the FS cause here.

I am looking forward to reading a forthcoming regular 'Windows Defectors' column monthly in Linux Gazette. ;>

[Thomas] Assuming you would write one, but expect a lot of flame e-mails!

I am looking forward to reading a forthcoming regular 'Windows Defectors' column monthly in Linux Gazette. ;>

[Jimmy O'Regan] I volunteered to write about Wingate (it's still on my todo list; there are just one or two things I haven't gotten working the right way yet). I still have to use Windows to browse the net (winmodem), and most of the answers I've given have been on Windows-related subjects, so I volunteer to write a column about Windows stuff. Now, I'm not the most confident person in the world, so (to the rest of the Answer Gang :) is there someone I can send draft articles to for constructive criticism? To Jim/grok: are there any specific topics you think should be covered?
I had been thinking of starting with articles which show how Windows users can begin the transition by using free software under Windows, to lower the learning curve: Cygwin, OpenOffice(.org), Mozilla, &c. (I also have a couple of short scripts and aliases to convert (really) obscure Windows file formats to something useful, though I might just group a few and send them in as 2c tips.)
The quickest idea I could roll off is an article about Cygwin; how the standard tools are actually really useful, even if you can't just point and click to use them :) - I could roll a couple of the obscure file formats into that, by way of demonstration (using awk for CSV etc)
I could probably do an introduction to Open Office from the MS Office user's POV at the same time; provided I get enough time around my parents (my mother is/was an ECDL instructor, my Dad had to train the rest of the office staff at his last job), and having once been forced to use ASP, I'm pretty interested in trying out Arrowhead and giving my impressions.
On a level which leans more towards my personal interests, I've got some video editing to do in the next week, and I want to try out Ardour as a home studio solution, and as a guitarist, I want to see if Songwrite comes anywhere near GuitarPro as a way of representing tablature; but since on the distro I'm forced to use (Mandrake bloody 9.0) both gcc and python are broken, this may take a while.
Plus, the helicopter never came after the last time I volunteered :)

Some LG contributors seem to consistently miss the point as (for that matter) do many GNU/Linux 'geeks': this isn't about what possible MS defectors should or shouldn't be doing -- it's about what they will do; and they WILL be staying away from GNU/Linux unnecessarily if they anticipate the least complication in 'switching over' -- as is already the case somewhat.

[Ben] Och, that tired old refrain again. Why are you assuming that people are "missing" something here? What if, after sober and careful consideration, they have decided that the cons of doing what you ask for outweigh the pros? I am among the Linux "geeks" that have done so; many other people that I know are as well. Your assumption is poorly considered and rather offensive.

The original letter-writer has hit the nail on the head (again -- as this is not the first time this has come up, by any means). 'Turn-key' types need -- and should receive -- all the help and encouragement they can possibly get to switch over. To quibble ahead of time over the methodology or the feasibility or the desirability, even, of getting a significant percentage of Windows users to 'defect' to us, is more about confusing the politics of the matter with the mechanics of it all.

[Ben] Answer me one simple question here, if you would. Who pays? Conversely, who is it that owes the hundreds of thousands of hours of careful, exacting, difficult labor necessary to "convert" (quoted due to many unmerited assumptions behind the word) those would-be Wind0ws-to-Linux 'defectors'?

IMO 'geeks' seem to excel at being technically sophisticated about these issues -- but politically naive in the extreme.

[Ben] To put it plainly, you don't know what you're talking about. This myth has been propagated for so long that even people who should know better are affected by it - but a tiny bit of research would show you the cold, hard truth in just moments. Take a look at Kuro5hin, Slashdot, Linux.org, EFF.org, etc.; there are many, many highly politically-savvy folks there if you look for them.

It ain't rocket science to understand that we simply are required to hold these people's hand a bit in helping them over the hump, if we entertain any hopes of freeing the planet from the thrall of Microsoft (and others).

[Ben] Who are "we"? If you are willing to do the job - if you manage to hold up for even a week of providing the level of support you're talking about without any remuneration - kudos and my respects to you. I have no doubt that LG would be more than happy to advertise your services.
Until you're willing to do this, please don't assume that you can co-opt other people's services without any return. You don't own anyone else's efforts. If we're speaking of extreme political naivete, it is exactly this, often displayed by those who spend too much time in political bull sessions and not enough time in the real world - people are not pawns, their labor is not to be taken for granted.

The details will take care of themselves AFAIC -- discussion here of the Knoppix solution being a case-in-point.

[Ben] For those details to "take care of themselves", a fellow named Klaus Knopper had to put in a few thousand hours (my best guess) of hard work. I doubt that he'd appreciate his efforts being so dismissively classified; I certainly don't.

As a former long-time 'windows tips' reader and fairly experienced political activist -- and small-time GNU/Linux advocate/user of some years' experience (if not expertise) as well -- there is one thing that is ABUNDANTLY clear to me: there is absolutely NO fundamental contradiction between having a 'turn-key', 'idiot-proof' GNU/Linux install over top of the preferred geek wet-dream OS we all desire.

[Ben] As Thomas noted, I do not use OSes for my sexual satisfaction. Besides, there's no such thing as idiot-proof; idiots are far too ingenious. The point that you're missing is that using a computer requires intelligence, skill, and effort - and by its nature, always will. It's a *tool*: one that, in this respect, is no different from, say, a lathe... although a lathe is perhaps a little less physically forgiving. Idiots will never use either one well.
[Jason] The only way we could have an "idiot-proof", "turn-key" system would be for someone other than the users to make choices for the users. Sounds kind of like what a distribution does, doesn't it?

We can have things both ways (when it comes to GNU/Linux, if not in Life). Geeks who object to 'dumbing things down', (for whatever reasons) are simply missing the Big (non-technical) Picture -- which does INDEED matter in the long term.

[Ben] The Big Picture, in your perception, being that the skilled and the knowledgeable are the servants of the idiots and the clueless? Please... try that somewhere else. I grew up under a political system that was based on that premise (the former USSR); the current state of that entity, and the amount of suffering it created in this world, should give you a clue as to the success of that idea.
[Jason] You should read "In the Beginning was the Command Line", an essay by Neal Stephenson. (I don't have a link handy: Google for it.) It's about user interfaces, and how GUIs rely upon other people making choices for you.
Oh yes, it's Linus Skywalker vs. the death star! :-)
Okay, this is going to be harsh, but how will clueless Windows users help free software? They can't code. Bug reporting takes a certain skill.
That is highly inaccurate. Windows users can write software; it is just that they'll probably be used to a different language. -- Thomas Adam
How exactly is it that we can't live without these people?
But really, I want to see Linux popular as much as the next guy, but if I have to do it by making Linux look just like Windows, what's the point? Distros such as Mandrake, IMHO, are doing a great job of providing alternate configuration interfaces (i.e., a GUI) and leveraging automatic hardware detection.
Mandrake and Red Hat are trying to be too much like Windows, IMHO. The whole point of Linux should be that it is an alternative... not "How can we make Linux look more like Windows?" -- Thomas Adam

 



More 2-Cent Tips

See also: The Answer Gang's Knowledge Base and the LG Search Engine


cinelerra & libstdc++.so.3

Fri, 08 Aug 2003 23:54:45 -0700
Thomas Adam (The LG Weekend Mechanic)
Question by Brian

Well, I installed SuSE 8.2 Pro, and it's really nice. Found Kino and got my FireWire DV video camera to download some AVI files. Wow! To get a final-product SVCD of my little princess riding horses, I decided to use Cinelerra. It won't install, because libstdc++.so.3 isn't found. I have all the C++ packages from SuSE, but not this.

My questions: 1. I have the files from a Linux Format DVD "essentials" section. It looks "involved" to install. Do I need the one file, libstdc++.so.3, or the whole group?

2. Could you recommend a course of action? I would like to do some video editing again. I'm halfway there and really excited. Dang! I'm getting all shaky again!

This looks like a classic case of "I cannot find the symlink". I usually get annoyed at programs that do this, but the solution is simple:
1. Find the existing "libstdc++" library (possibly in /usr/lib/)
2. ln -s /usr/lib/libstdc++.so.<existing version> /usr/lib/libstdc++.so.3
(What you may find is that a file called "libstdc++.so" exists; should that point to a library file, symlink it as appropriate.)
3. run "ldconfig -X" (as root)
(Step 3 is there to keep the cache happy, although it is usually not needed.)
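A concrete sketch of those steps, assuming the library already on disk turns out to be libstdc++.so.5 (substitute whatever ls actually reports; note that faking a library version with a symlink can still fail if the ABI differs too much):

ls -l /usr/lib/libstdc++*                              # see what's really there
ln -s /usr/lib/libstdc++.so.5 /usr/lib/libstdc++.so.3  # fake the missing soname
ldconfig -X                                            # refresh the linker cache (as root)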
HTH,

Thank you very much. Cinelerra is working, the frame rate is up with my NVIDIA driver, and I have two weeks' vacation starting today! Whoo hoo!

I am requesting your permission to post your suggestion to the web chat sites where a few people are experiencing similar difficulties.

thanks a 1 EE6.

brian

You're more than welcome to do so -- Thomas Adam


problem in dns setting up

Wed, 6 Aug 2003 02:51:16 -0700 (PDT)
Kapil Hari Paranjape (The Answer Gang)
Question by Anil KP

Hi, we have a leased line from our ISP and they have given us 8 Ethernet IPs (public IPs) for our internal servers. The problem is that our ISP's DNS doesn't work properly, so I thought of setting up my own DNS.

I was able to set up DNS on the private network (192.168.1.1, first Ethernet card) successfully, but was not able to set up DNS properly on the public IPs (another Ethernet card). What would the reverse lookup zone file be in the case of the public IPs? (I was given only 8 public IPs by my ISP.) Anticipating your reply.

[Kapil] This can only be done by subnet assignment. The ISP needs to create entries for you in their reverse zone file which point to your server. Look for CIDR, or Classless Inter-Domain Routing, on Google.
I think this is only applicable if both you and the ISP use "bind". If you use D. J. Bernstein's domain name server programs, then things are different.
To repeat, this is only possible through co-operation with the entity (presumably your ISP) who has been authorised to provide reverse lookup to the entire Class C net to which your eight addresses belong.
As an example you can get for our domain:
$ host -t PTR 81.209.199.203.in-addr.arpa.
81.209.199.203.in-addr.arpa     CNAME   81.imsc.209.199.203.in-addr.arpa
81.imsc.209.199.203.in-addr.arpa        PTR     proxy.imsc.res.in

$ host -t NS 209.199.203.in-addr.arpa.
209.199.203.in-addr.arpa        NS      md3.vsnl.net.in
209.199.203.in-addr.arpa        NS      md2.vsnl.net.in

$ host -t NS imsc.209.199.203.in-addr.arpa.
imsc.209.199.203.in-addr.arpa   NS      ns1.imsc.res.in
imsc.209.199.203.in-addr.arpa   NS      ns2.imsc.res.in
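For illustration, here is roughly what that delegation looks like in BIND zone-file form. The names and the /29 block are hypothetical:

; --- ISP side, in the zone 2.0.192.in-addr.arpa ---
; delegate a sub-zone for the customer's eight addresses (192.0.2.8-15)
8-15      NS     ns1.customer.example.
8         CNAME  8.8-15.2.0.192.in-addr.arpa.
9         CNAME  9.8-15.2.0.192.in-addr.arpa.
; ...and so on, through 15

; --- customer side, in the (delegated) zone 8-15.2.0.192.in-addr.arpa ---
8         PTR    www.customer.example.
9         PTR    mail.customer.example.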


Home LAN setup question

Sat, 09 Aug 2003 00:50:59 -0600
Faber Fedor (xgen from softhome.net)
Question by xgen

Hi there dude,

I have a general question regarding home LAN setup on Linux.

I have 2 PCs to be networked and sharing a connection. Do I need 2 network cards - 1 leading out to the outside world, another leading to my internal LAN? Is this setup common? Will it help with LAN security?

Thanks a mil

-Xgen

[Faber] These days, the Dude has been upgraded to Dudes and a Dudette. We're now known as The Answer Gang. The Answer Guy is still around, but he's got help these days.
Generally speaking, yes - if you're using one of the Linux boxes as a router/firewall. If you're using, say, a Linksys router/firewall, then no, you don't need two NICs in one Linux box.
Check out www.tldp.org for various documents on setting up networks and routers using Linux.
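If you do go the two-NIC route on a 2.4 kernel, the classic masquerading recipe is only a couple of lines. Here eth0 facing the modem/Internet and eth1 facing the LAN are assumptions - adjust for your hardware:

# let the kernel forward packets between the two interfaces
echo 1 > /proc/sys/net/ipv4/ip_forward
# rewrite everything leaving via the external interface
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE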


Anyone for winRAID?

Mon Sep 29 16:39:55 BST 2003
Hugo Mills (Hugo@carfax.org.uk)

[hugo] I seem to be getting a small but steady flow of people
[hugo] asking me about the Adaptec Serial ATA RAID card and Linux.

[editorgal] hrmmmm
[editorgal] is there a distro being buggy about it?
[hugo] No, it just doesn't work right.
[editorgal] a pal tells me that aacraid code is busted in some of the gentoo kernel kits but is safe to use in their vanilla source kit
[hugo] It's not AAC at all.
[hugo] It's the AAR-1210SA.

[editorgal] is there a secret handshake for it or is it just Being Evil right now?
[hugo] It's based on the SiI3112 chip, but Adaptec mangled it.
[hugo] You can write stuff to the disk drive and read it back again,

[editorgal] since I'm working on the tips section anyway.... :D
[hugo] but every disk access just causes a DMA timeout,
[editorgal] ouch
[hugo] which takes anything up to about 45 seconds to clear.
[editorgal] urgh
[hugo] So it's basically worthless.
[editorgal] that's millennia in computer time
[hugo] I wrote a patch to the kernel to recognise the PCI ID of the card,
[hugo] which works (I've got code in the kernel! Woohoo!)
[hugo] but it has the unfortunate effect above.

[editorgal] this a private patch or submitted?
[hugo] Submitted.
[hugo] It went in 2.4.21-ac1, and 2.4.22 I think.
[hugo] (Or was it 2.4.20-ac1 and 2.4.21? I can't remember)

[editorgal] ok, it wasn't handled at all, but you provided code which tries to handle it, only DMA is still wicked?
[hugo] Yes, that's about the size of it.
[hugo] Adaptec provide Linux drivers for the card,
[hugo] but they're only for certain stock Red Hat kernel packages.
[hugo] and they're binary-only.

* Editorgal avoids ranting about RH's concept of "stock"
[hugo] I've tried asking moderately noisily on LKML about the problems with this card,
[hugo] but nobody seems to be able to give me any information at all.
[hugo] All I've achieved is having several email threads archived where I appear to be
[hugo] the font of all knowledge about getting the 1210SA working under Linux.

[tonytiger] heh
[hugo] As soon as this month's pay cheque clears, I'm buying an SIIG card instead,
[hugo] and selling the Adaptec on eBay.

[editorgal] what's the info you want to get, I could post a wanted note for you in LG?
[hugo] Why isn't it working, and how do you fix it? :)
[hugo] TBH, I can't be arsed at this point.

[editorgal] it isn't working because adaptec's too lame to cough up a source module instead of a binary for RH's heavily mangled kernels.
[hugo] Well, yes, that's about the size of it. :)
* Editorgal captures this thread for the 2c Tips column
[hugo] Also, I think they want to hide the fact that the RAID (0, 1, 0+1) part of the card is effectively done in software.
[hugo] (Or so it is rumoured)

That "rumor" is per a Linux Kernel Mailing List (LKML) post by Sam Flory. It's probably mirrored lots of places, but here's a pointer:
http://marc.theaimsgroup.com/?l=linux-kernel&m=105484662322837&w=2
In the rest of the thread, Hugo notes that he's only looking for basic drive-access features; but, above, he notes that he hasn't managed to code them up himself, and is giving up (his main contribution having been patching the PCI ID recognition). For fairness' sake, they have binary modules for a few other stock boxed-Linux kernels, but as soon as you stray off the beaten path - and possibly as soon as you upgrade, even staying within the distro's offered kernels - someone else will have to figure out why the SiI3112 chip hates Seagate SATA drives.

Meanwhile, if you're the sort who bristles about binary-only drivers going into your otherwise trustable kernel, look out for VIA's "support" for the MPEG2 hardware on their EPIA boards, too. -- Heather


up2date SSL error

Fri, 26 Sep 2003 10:24:42 -0400
Greg Anderson (Greg from FutureRealms.com)

In Red Hat 7.1 through 9, running up2date fails with this message:

SSL.Error: [('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')]

Then it tells you your system's clock may be so wrong it is causing the problem. What is really going on is that the certificate and the up2date program are out of date. You need to download the latest up2date from Red Hat.

...............

"The certificate used by up2date and rhn_register to communicate with the Red Hat Network reached its end of life on August 28th 2003. Users attempting to connect to Red Hat Network will see SSL connection or certificate verification failures."

"New versions of the up2date and rhn_register clients are now available which are required for continued access to Red Hat Network."

...............

RHSA-2003:267 for Red Hat Linux: https://rhn.redhat.com/errata/RHSA-2003-267.html

This solved it for me. -- Greg Anderson

[JimD] Might be good to try an rpm --rebuilddb command, too. Just in case rpm segfaults on a corrupted dbm/database.
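A sketch of the manual fix, assuming you have already downloaded the new packages from the errata page above (the exact file names vary by release):

rpm --rebuilddb                              # per Jim's suggestion, just in case
rpm -Fvh up2date-*.rpm rhn_register-*.rpm    # freshen the installed packages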


Snapshot of current window or desktop

24 Sep 2003 12:30:54 -0500
Dan Wilder, Thomas Adam, Ashwin N (the LG Answer Gang)
Question by Wes Hegge

How do I get a snapshot (preferably in gif or jpg format) of the current window in KDE? What I am looking for is the equivalent to MS's Alt-PrtScn then paste to paintbrush and then save to a file.

TIA -- Wes Hegge

[Dan] ksnapshot

Thanks,

I guess I am blind as a bat. Right in the "Graphics" submenu is "Screen Capture Program" (aka ksnapshot).

Thanks for the help.

[Thomas] I haven't run KDE for years (I'm an FVWM fan), but I do know that "ImageMagick" offers the "import" utility which does the same thing, as does "xwd".
[Ashwin N] It can be done using the Gimp. From the menu, choose:
File -> Acquire -> Screenshot
[Heather] For the ImageMagick method I keep a directory named prn and if I like what I captured I rename this image. This happens to also be where I keep documents I only have around to be printed. Here's my bash alias I've been using for awhile:
# capture an X display
# with thanks to the lazy folks at
# http://www.troubleshooters.com/linux/scrshot.htm
# who were kind enough to document how they do 'em.
screenshot ()
{
   import -window root ~/prn/screendump.png
}
Note that the man page for import is actually readable - you can take shots of specific things, not just the whole screen.
There's also an enlightenment epplet that will make screenshots.
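import also works interactively: run it with just a filename, and the pointer turns into crosshairs, letting you click the window (or drag out a region) you want:

import grabbed-window.png      # click a window to capture only it
import -frame grabbed.png      # same, but include the window manager's frame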


Errors while using rpm extension..... Cygwin

Sun, 28 Sep 2003 17:34:49 +0530
Ashwin N (yodha8 from yahoo.co.uk)
Question by Atiya Azim

Actually, I am not using Red Hat, although it is installed on my PC. Rather, I am using Cygwin (http://cygwin.com - a Linux-like environment for the Windows platform; my OS is Windows 2000) for these RPMs. It is working fine with .tar and .gz files, but giving problems with all the files of RPM format.

[Ashwin] That is the problem! I have used Cygwin before, but I didn't know that they had even ported over RPM! In any case, if your need is just for a Java SDK, you can download the one available for Windows from the Sun website. Install it and the java, javac and other commandline tools will be available to you under Cygwin. Just remember to update the PATH variable with the directory where the Java binaries are located.
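For example (the install directory here is made up - point it at wherever your JDK actually landed):

export PATH="/cygdrive/c/j2sdk1.4.2_01/bin:$PATH"
which javac    # should now find the Windows javac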

Thanks for the help... it is working this way...

[Ashwin] For other Linux utilities that are not available from the Cygwin mirrors, you will have the best chance with the .tar.gz or .tar.bz2 files of those applications.


Cool tool of the week: nntp//rss

Mon, 15 Sep 2003 11:17:32 -0600
Jason Creighton (The LG Answer Gang)

Hi,

nntp//rss (http://www.methodize.org/nntprss) is an RSS-to-NNTP gateway, so you can read RSS feeds in your favorite newsreader. Very nice.


multi-headed display

Sun, 14 Sep 2003 20:08:16 +0100
Neil Youngman (n.youngman from ntlworld.com)
Question by Affan Ahmed

Hello,

I have an NVIDIA GeForce2 Go 100 card that supports multi-headed display easily in Windows. Now I want to do the same in Linux. I have Red Hat 8.0. What do you suggest I do?

[Neil] I suggest using Google http://www.google.com/search?q=nvidia+linux+%22multiple+display%22&sourceid=opera&num=0&ie=utf-8&oe=utf-8
It throws up potentially useful stuff like
http://lists.suse.com/archive/suse-linux-e/2001-Jun/att-3012/01-TWINVIEW_README
I suppose you could try the Gentoo Unreal Tournament demo CD, which won't replace anything on your computer. At the very least it's tuned up for NVidia. Find it by typing "unreal" into the search gadget at Freshmeat.Net. -- Heather
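For the impatient, the TwinView route boils down to a few Option lines in the Device section of XF86Config. This is a hedged sketch from memory of the README linked above - the sync and refresh ranges below are placeholders that MUST be replaced with your second monitor's real specs:

Section "Device"
    Identifier  "GeForce2Go"
    Driver      "nvidia"
    Option      "TwinView" "true"
    Option      "SecondMonitorHorizSync"   "30-60"    # your monitor's specs!
    Option      "SecondMonitorVertRefresh" "60"
    Option      "MetaModes" "1024x768,1024x768"
    Option      "TwinViewOrientation" "RightOf"
EndSection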

 



The Answer Gang

Contents:

¶: Greetings From Heather Stern
(?)Efficiency regards running script in a subshell () and a separate shell
(?)force unmounting of CDROM
(?)IP config files on Red Hat 9
(?)a linux solution for the office
(?)Simple DNS solution with Red Hat 9
(?)Creating RAMDISK
(?)X server crash when starting up RH9 for the first time
(?)Converting from Win2k to Linux

(¶) Greetings from Heather Stern

Greetings everyone and welcome again to the world of The Answer Gang. It's been quite hectic for me and not all fun and games... among other things, I was ill last month around submission time :( and that meant that the TAG column wasn't submitted at all, as I missed the deadline, feeling a little too "dead" at the time. Oh well, I guess we all need a break now and then...

Thomas Adam valiantly threw a hand in to help out, and I have to admit he did more than half the work this time around. He says he's learning an awful lot about perl, too.

The peeve of this month is, without a doubt, the lack of information and the extremely poor descriptions in the e-mails sent in to TAG. On a positive note, there have been a record number of hits to:

http://www.linuxgazette.com/tag/ask-the-gang.html

Please, everyone - if you're thinking about asking a question, read that, and ask us what you need as clearly as you can. We understand it is difficult for those who do not speak English very well, but that's rarely been a problem - folks who are aware of their shaky language skills take a free moment, ask only what they need to ask, and say what they've tried so far. The point is, if you can't be bothered to ask a clear question, there are far too many messages for us to try to detangle yours.

Regular and attentive readers will note some of the messy messages we have answered. Yes, there's been worse. With a question such as

d00z, cn u h3lp me

... maybe you'll get some chuckles, but you sure won't get an answer. The same goes for you students out there with a take-home quiz. We can spot those a handful of kilometers away, give or take a mile. Maybe you should cc: your professor when you ask us the question, and he can give us the passing marks in your class. The point is to learn a few research skills - so for such questions, search Google. Search our Knowledge Base - it's part of what it's here for. Search TLDP.org and Freshmeat if the problem is really about Linux.

And now for what I'd really planned to say last month. I attended Linux World Expo, as I do every year here in the Silicon Valley area, looking forward to meeting friends from all over the open source world again. But this year, I was also taking a step back and looking around at how the trade show world has changed in its view of Linux. Now I get to write this with the additional perspective of having been invited to PC Expo - a more generic computer trade show.

The View From The Trade Show Floor

I can't comment on the view from the press room since all that's in there is pamphlets, a couple of spare computers, and coffee. Maybe a sandwich tray. The seminars are still seminars and the halls still suck your cellphone dry.

However, out in the exhibit hall, the world has changed a lot.

My first Linux World Expo was in San Jose. IDG had just taken over the project from a local group who wanted to create a Linux conference on the same order as regional conventions run for and by science fiction fans. IDG is a big player who runs a lot of trade shows. They invited big names and posted sponsor banners and the whole nine yards. Jim Dennis (our very own Answer Guy) was invited to speak about security for a half-day class, and if the next speaker hadn't shown up, the audience would have kept the staff from cleaning the room just so Jim could continue to talk. The hall was filled with a bunch of booths, many of them small companies, but a few names like Intel spring to mind. I recognized about half the listed speakers by face, and about a third of them would surely recognize me back. Geeks were everywhere; confused managerial and business types were, too. T-shirts were plentiful.

They also made quite a splash by having a platinum sponsor pay for a bunch of floor space outright and donate it to be used as a Dot Org pavilion, where projects and Linux user groups could have small kiosks and generally have a good time. Dot Org was much better laid out the second time I saw it.

As shows pressed on, toys were on the increase, but shirts and CDs with products on them were certainly around, too. We saw an increase in booths, and as the more "generic" presence grew, the total IQ on the floor was spread ever thinner - talking to vendors individually, it was definitely going down. Toys were getting insanely cool - drawings for VW Bugs and motorcycles. I volunteered for the FSF booth. I helped out the Gnome guys. When I worked at Tuxtops I went to both LWE/SF and LWE/NY in that year and had a great time at both places. But Dot Org was becoming a ghetto with hardly any color, and the main floor felt increasingly like any other computer show. Something was about to break down.

And it did. A lot of companies had picked up starter capital on the magic of the word Linux. Heck, some of them even were trying to be Linux companies. But VC enthusiasm is no excuse for a poor business plan, and when something in the economy turned sour - I'm not sure what, but we'll start at Asian money difficulties and work our way up from there - anyone who did not have their hand tightly on the finances watched it all start to head down the drain at tornado speed, and the VCs clamped down their pocketbooks. No more toys. T-shirts only if you sit through the spiel. (Booths that still had IQ points left around would also give them to people who seemed genuinely interested in their products, which, if you ask me, is the way it should have always been.) More feet hurt, because companies watching their dimes only used a single layer of carpet and not the nice padded layer under it which you can pay extra for. Lots of pamphlets, though.

I'm pleased to say that I'm seeing winter's end. There are still insipidly blank faces to be found at some booths. And there are what I'll call the "barely Linux" booths - hardware vendors selling server racks, RAID arrays, GPS, scanning printers, and other weird peripherals. But Linux products are getting their own booths, and that means they're affording them, I hope. The flavors of products are spreading around, and that's part of why it feels like a normal trade show to me - financial, games, diet plan calculators. Backup programs optimized for Linux, like Storix, are not only around but have competitors. All those things I'd see at a Windows-oriented show, I'm seeing.

The Dot Org space still looks plain, but it's not hiding so much anymore. There are a lot more projects than there used to be, and projects were sharing booths - I recommend a little more platinum get spilled on this next time. It's rather cool to see MBNA America (a large VISA card vendor) hanging out in Dot Org plugging their LinuxFund card. They give away big beach towels if you sign up for a card.

It's hilarious to see Microsoft booths in "the rookery" as if they're newly born open sourcerors. "Well we do let WinCE developers see some sources." Really? Can they compile Windows from scratch and pour it onto a handheld, then run WinCE office binaries out of the OEM private packages to test their build? "Um, no. They can look at selected drivers and structures." I see. Well, it's better than nothing :)

Yes, I like what I'm seeing. The frost is melting off the ground and not every seed has sprouted again - but things are getting good.

Over at the more ordinary trade show, PC Expo - for the second year running they had a "Linux Boot Camp" track. For the first year running it's calling itself "TechX>NY" instead, and I wonder if they are thinking that the PC is on its way out as the only platform to run desktops or servers. Last year "the Linux track" was one lonely guy presenting nonstop between water cups and snack breaks. This year they invited LPI, Novell (well, okay, Ximian - and maybe they invited themselves, since they're a sponsor for the show), and a few of the Answer Gang. Maybe in a few years they'll have the brass of various "Big Linux" companies knocking down their door trying to be on the speaker's list. I'm advised that our presentations were very well received, indeed.

On the show floor itself there was only a little Linux. Local computing groups knew about it and chatted merrily. Product vendors knew what it was, and generally whether they supported it or hadn't tested it. I didn't really get the "We don't support that! Our customers don't ask for that kind of stuff!" flamage which I'd been seeing a few years ago - in fact, quite the opposite: one vendor told me that he'd had requests for support and they were working on it, and he kind of hoped for BSD support too, if it wasn't too hard, though it wasn't on the official roadmap. Outsourcing World (sic) mostly didn't know or care what "A Linux" was, though there were a few "outsourced programming" firms I didn't think much of. Okay, the computing sector overall may still be heading downhill in places. But Linux is indeed looking up.


(?) Efficiency regards running script in a subshell () and a separate shell

From Nimish kamerkar

Answered By: Thomas Adam

Hi Answer Gang,

Which method is more efficient: running a command in ( ) or running it in a separate shell? Can the answer include differences in how the processes are spawned (i.e., fork, exec, etc.)?

(!) [Thomas] OK. This is usually application/situation dependent. If you have something like:
#!/bin/bash

echo $(whereis xterm)
What you are doing there is forcing the command "whereis xterm" to run in a subshell (denoted between ()'s). A "subshell" is just another instance of the command-line interpreter, running your program. Thus, a "subshell" in this instance means that the main shell script is its parent; i.e., assume that we called the script above "parent.sh" - then when the subshell executed you'd get:
|-parent.sh
  |-----subshell
This is also known as forking -- i.e. where the process breaks off from the main caller, to form another.
"exec()"'ing a process however, means that the currently running process is replaced by the program that is to be exec'ed. Thus unlike above where-by the subshell ran under a new instance, the following script exec's itself:

See attached exec-anything.bash.txt

What this does is echo the first two lines, sleep for a second, and then re-spawn itself. What you'll see is the same message, over and over. The final echo line will NEVER show, because it comes after the program ($0 denotes the program's name) is (re)exec'ed.

(?) My question is how invoking a program in a subshell () and invoking it in a separate shell differ, as regards fork & exec. I am asking this because the environment inherited by each is different. I started thinking about this really because I remembered reading somewhere, sometime, that invoking it in () is more efficient than in a separate shell. But as far as I can see, both processes should need fork and exec.

The only difference is that in (), both the local and global environment variables are initialised, while in a separate shell, only global environment variables are initialised. By that logic, actually, the separate process should be more efficient than the one in ()!

(!) [Thomas] Those variables that are exported are "global" anyway, so you don't need to describe them in this way.
It depends. A subshell can be efficient if you want to ensure that a task running under another shell script is carried out to completion before the next one is executed (a good example of this would be tarring files over ssh on a pipe).
Of course, invoking a subshell program means that if the parent dies or is kill -9'ed, whatever, then the child process is also killed along with it. That is something you might want to consider.
Normally, when you write a shell script, and you want to end the script by calling another one then you would "exec" the program name, since there is no need for the shell process to fork().
If you need two independent processes to communicate concurrently, then using a fork() would be best.
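To make the contrast concrete, a quick illustrative sketch (this is not the attached script, just a toy example):

#!/bin/bash
# The commands in ( ) run in a forked child shell, so its cd and
# variable assignment never touch the parent.
( cd /tmp && FOO=child && echo "inside subshell: PWD=$PWD FOO=$FOO" )
echo "back in parent:  PWD=$PWD FOO=${FOO:-unset}"

# exec does no fork at all -- this shell *becomes* ls, so the
# final echo is never reached.
exec ls /
echo "never printed"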

(?) Hello Thomas/Answer Gang,

Thanks for the patient unravelling of the intricacies!

With Warm Regards -Nimish.


(?) force unmounting of CDROM

From Chady Kassouf

Answered By: Thomas Adam, Mike Martin, Karl-Heinz Herrmann

(?) Hello Answer Gang,

I backed up most of my files on CD-Rs that later on appeared to be of very low quality; now none of my CD-ROM or CD-Writer drives manage to read from them, but that's not where the problem is.

(!) [Thomas] As long as you have (hopefully) learnt from the exercise, that's all that matters :)

(?) The fact that mounting a CDROM in Linux locks the drive is causing a problem with these CDs.

(!) [Thomas] How so? The whole idea of locking the drive is to do with the way the mount command works.

(?) OK, the CDs are bad, there's no way to read them, but there's the problem of not being able to unmount them. Linux will just keep on trying to chew on the bad CD, and killing `cp' will not make it give up. `umount' will either hang forever, waiting for the system to finish reading from the drive, or it will return but the drive will not be released. The same thing happens with `eject'.

(!) [Thomas] If you're trying to umount a /cdrom that is currently being read from or written to, then what you should do is:
ps waxf
to see, in the process tree, which program is sitting on top of "mount /cdrom", and do:
kill -9 $(pidof <program_name>)
(rude, I know). Then you'll be able to do:
umount /cdrom

(?) I'm using RedHat 7.3, and the guilty drives are a TEAC 40X CDROM and an HP 8200Plus CD-Writer.

(!) [Thomas] What you're describing here is not really a problem with the hardware per se, but merely a gripe with the way the kernel and mount handle a cdrom drive. Locking the drive is there to prevent the FS from getting screwed up, and to allow a clean change of disks via umount.

(?) It's good to note that while rebooting the machine, init will try to unmount the filesystems but will fail on the drive that's stuck trying to read; `umount2' will kick in and retry three times before finally giving up and letting the reboot continue.

(!) [Thomas] Unmounting drives is done for a reason. I suspect the reason why you cannot unmount /cdrom is due to zombie processes clogging up your kernel buffer, and the kernel doesn't realise that these have effectively stopped. Usually, I have found in situations like this that a lengthy wait of 30 minutes or so allows the kernel time to flush itself, and the locked drive is then accessible via an 'eject'.
fuser -m /cdrom
will also help you ascertain this information, as well as the classic:
ps wax

(?) My question is twofold; first, is there a way to tell Linux to give up reading and force unmounting of a CDROM drive, without having to use the safety pin (and, hence, lose access to the drive) OR reboot the machine?

(!) [Thomas] Yes, there is. I have done this on some severe occasions. The first thing I would try is to:
umount -f /cdrom
The "-f" flag says to mount to force umounting of it. If that does not work, then edit "/etc/mtab" and remove the entry pertaining to your cdrom drive. In case you are wondering, "/etc/fstab" holds information about drives that can and might be mounted, and "/etc/mtab" is there as a state file for those drives that are currently mounted. Editing the file in this instance is perhaps a good idea.
If you find that this is happening to you on each and every mount, try doing something like:
mount -n /cdrom
which will tell mount NOT to write to /etc/mtab. Typically I have used this on drives whereby "/" has been mounted ro, but I cannot see why it won't work here.

(?) Second, has anyone happened upon a similar situation? Or might that be a hardware problem?

(!) [Thomas] Reading my answer here will no doubt confirm that I have had experience with this sort of thing. I doubt this is hardware-related, but it could be. I would need to know more about which aspects of your cdrw work/don't work in order to help you further.
(!) [K.-H.] Better to use some of the suggestions by Thomas first, but it seems cdrecord does not care about the "lock" state of a CD. If you issue:
cdrecord -eject
it will simply eject the CD -- regardless of whether it's locked or mounted. In the best case the kernel recognises that the media is gone, and the errors given back to "cp" cause all concerned processes to stop. Then a umount should also be possible. Worst case, you get a kernel panic for a "damaged" filesystem on the now nonexistent CD (that didn't happen here; mostly it recovers gracefully).
(!) [MikeM] A final issue could be a process called fam locking the drive - sometimes if you kill fam, you can then umount the drive.
(!) [Thomas] Indeed, Mike. I had considered this, but then I realised that FAM isn't always loaded on some machines.
FAM (File Alteration Monitor) is also a Debian package, and an absolute cow to compile. It is not distro-specific, no. It is used to monitor directory/file changes, so I can see how it might be involved in this instance, but it is unlikely.
The querent reported back.... -- Thomas Adam

(?) I tried all the options presented by the kind people on this list; the only one that worked, though, was cdrecord -eject, although it took about 3 minutes to succeed. Thanks to K.-H. for the solution.

I was able to capture the error that was printed out after a file copy in KDE started to choke:

scsi0: ERROR on channel 0, id 0, lun 0, CDB: Request Sense 00 00 00 40 00
Info fld=0x1a4f7, Current sd0b:00: sense key Medium Error
Additional sense indicates No seek complete
 I/O error: dev 0b:00, sector 431068
(!) [K.-H.] Read errors from the CD - "Medium Error", as you already assumed in your original mail. Probably nothing to save anymore. I too had some CDs which came through the file comparison right after burning; some time later they gave me plenty of read errors. In my case the read attempts at some point broke off, but as I tried to get as much data back as possible, I made the reading process try harder.

(?) IP config files on Red Hat 9

From - E J -

Answered By: Mike Martin, Jay R. Ashworth, Jim Dennis, David Mandala, Kapil Hari Paranjape, Ben Okopnik, Thomas Adam

Please note I am trying to configure IP on a Red Hat 9 system. I would like to see about changing from the default DHCP configuration to a STATIC configuration.

I have tried a couple of things only to trash a test system, in one case I got the system up - but X no longer worked.

I know the hostname is saved in /etc/sysconfig/network.

What are the other files necessary to changing the configuration?

Thanks in Advance.

(!) [MikeM] First of all, for any problem, try to do it manually. So, once logged in (as root), do:
ifdown eth0
ifconfig eth0 <your IP address>
Then, if you access a router directly, do:
route add default gw <your default gateway>
If this works, add to /etc/hosts a line: <your hostname (machine name)> <your machine's IP address>
Also, this may be useful:
http://www.faqs.org/docs/linux_network
(!) [jra] But don't worry about that.
Use the RedHat program netconfig, which has a button for DHCP. Turn it off, and fill in the blanks.
(!) [Breen] That'll work, Jay. If EJ wants to know what's going on behind the curtain, the documentation is in
/usr/share/doc/initscripts-<version>/sysconfig.txt
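For reference, a static setup in that scheme looks roughly like the following (all the addresses here are placeholders to replace with your own):

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
GATEWAY=192.168.1.1
ONBOOT=yes

Then run "service network restart"; name servers, as usual, go in /etc/resolv.conf.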

The subject of this thread then changes.... -- Heather

(?) [Breen] /usr/share/doc/initscripts-<version>/sysconfig.txt

(!) [JimD] I felt stupid reading that, too. I've complained for years about the lack of documentation for those. Sometimes I just read through the rc scripts that source the /etc/sysconfig files to figure out which names they want to use. Sometimes I just run the silly GUI config tool (when I can remember which name it goes by in which version of Red Hat). I had never found this particular file.
I still think it's a bug. Red Hat should include a set of comments in each /etc/sysconfig file that list all of the valid variable names and show examples of the valid values.
Now that the "Red Hat Distribution" is a "community maintained project" rather than a shrink wrapped product ... heck with it; I'll still recommend that advanced Linux users switch to Debian and that businesses pressure Oracle and other ISVs to support Debian.
I think the "Red Hat community project" should set a goal for itself -- to provide a transition to Debian over the next several releases. They can write rpm command wrappers around apt-get/dpkg, add (or help Debian add) the -V and -K (verification and GnuPG key package signing) features to the .deb package format and the /var/lib/dpkg "database", etc.
(!) [DavidM] Gack, I hope that never happens; I dislike Debian intensely. I use it on some of my servers, and I grind my teeth every time I need to do something with it. Debian needs a lot more than just an rpm wrapper. I'll try not to start a religious war here and simply say: you stick with Debian and I'll stick with Red Hat. I hope they get more compatible, but I hope that happens by Debian getting closer to Red Hat and not the reverse (except apt-get, that is quite nice).
(!) [Ben] Yeeew. I sure hope that never happens either. I really hate the "papa knows best, just watch the blinkenlights" approach of RedHat, and money is what it takes to get me to play with it these days. :)
Fortunately, none of this is an issue since we _do_ have both of these distros plus a whole lot of others, which seems to satisfy a broad range of us picky techie types (say _that_ 3 times fast with a mouth full of marbles, and your dental problems will be a thing of the past.)
(!) [JimD] In other words, I think a convergence is in order. Let's have both major "community projects" try to consolidate (improving both).
(!) [DavidM] I strongly disagree here. The beauty of Linux and the various projects is there is no "one true way" and those who like it different can have it that way. Otherwise you are just like Microsoft and Apple each having a single "true way".
(!) [JimD] I have similar opinions on gentoo; why create a new distribution when they could have poured that energy into creating a Debian build system that could build your entire distribution from sources, with locally defined optimization and other flags? (BTW: the argument that this results in significant performance gains seems to have been somewhat refuted:
 http://articles.linmagau.org/modules.php?op=modload&name=Sections&file=index&req=viewarticle&artid=227
... though it's possible that this test was flawed.)
(That said I've been thinking of installing gentoo on a machine just to see what it's like).
(!) [JimD] I knew my statement was likely to rankle. However, without letting it degenerate into a religious war, what specific things do you dislike about Debian? /etc/sysconfig/network-scripts/ifcfg-* versus /etc/network/interfaces? The installer? Lack of Kickstart? The dpkg command? dpkg-reconfigure or dpkg-repack? The fact that many packages ask configuration questions during pre-install or post-install? The fact that you can't tell the package manager to exclude the docs during installation?
I want to know because I'd like to see Debian make the necessary changes or options available to make Red Hat users happier.
(!) [DavidM] This is important in the business world, where you need 50 machines exactly the same -- not close, but exactly the same. Also, unless you make your own Debian repository, the changes in the Debian tree make it all but impossible to do this if you are installing machines over the course of several months.
Another irritant is that Debian package maintainers patch the packages to run in funny places and have odd patches on them. We were running SAMBA on one machine and we were having a major problem that we could not explain. As luck would have it I was able to get tridge to take a look, and even he was mystified for several hours. As it turned out, the package maintainer had really messed up the package. There is not enough testing done on the edge cases by the Debian developers, so either leave the source alone or truly understand it and test all possible uses of the package -- which in the case of SAMBA is impossible. The SAMBA developers have spent a huge amount of time getting donated machines available to be able to test the edge cases; there is no way a Debian developer is going to have access to the same resources that the SAMBA developers have.
On tridge's advice -- and this is advice I've been given many times by others as well: "Use Debian as a base install, but for packages you really care about, make your own from source, as you never quite know what they've done to them." The problem with this, of course, is that if I can't trust what I'm installing, then why am I installing it at all?
Followed by the fact that stable is almost always too old, thus forcing me to use testing, which gets broken at times.
(!) [Thomas] Odd. I agree that sometimes stable can have packages that are a little too out of date, but then that won't stop you from doing a "dist-upgrade" at any point. And as for "testing" being a little too unstable, I am going to have to dispute that and ask you to give an example.
While the Debian BTS (Bug Tracking System): http://bugs.debian.org does a good job at fixing loose ends in Sarge, many of the problems that I have seen people encounter are with compilation of programs. I have been running Sarge (Debian testing) for ages now with absolutely no problems whatsoever.
I'm therefore going to guess that David meant to say that "unstable" causes problems.
(!) [DavidM] I don't think it's possible for Debian developers to make Red Hat users happier; they like to do things their way and the hell with the rest of the folks. An example: change the init levels to match Red Hat's -- make init 3 the normal run level sans X and level 5 the one with X. Virtually every other Linux distro is the same as Red Hat; very, very few are different. But as a business software developer I need to know that if my package is to be installed on both Red Hat and Debian, I need to install my startup scripts in different run levels or bad things happen. That is silly.
I agree that one does need to minimize gratuitous differences, so tackle the run level differences first. If you can get that changed (which I strongly doubt), perhaps there is a possibility that Debian can become a stronger player in the corporate marketplace. What I think you will get is many reasons why Debian is correct, but that is irrelevant; it's not about being technically correct, it's about minimizing gratuitous differences, and I don't think it's possible for the Debian community to change.
(!) [JimD] Well, "funny places" is the main sort of gratuitous change I also object to. FHS was supposed to reduce this (and probably has, somewhat).
We could argue that Red Hat and SuSE put their stuff in "funny" places; then we could both reach for copies of the FHS to bolster our arguments.
I agree about the runlevels and the way they run xdm as an rc script rather than from inittab. Those are gratuitous, and Debian should bow to the more widespread convention in both cases.
As for the Samba anecdote: so what. Red Hat applies over 100 patches to their kernels, and they apply patches to many packages, including core packages like Samba. We could exchange package-maintainer horror stories for hours. Usually the Debian packaging is better than the Red Hat packaging. Debian has a published policy that is derived from the consensus of its maintainers/developers. There are ongoing discussions on how to change that policy. Maintainers that don't conform to the policy see NMUs (non-maintainer uploads) of their packages and are eventually replaced by new maintainers.
Of course it's possible for them to change. It is easier for us to change Debian than it would be for us to change Red Hat Inc.
(!) [DavidM] Jim, here again we differ. I feel it's easier to change Red Hat: all it takes is cash, nothing more, nothing less. Changing Debian is akin to herding cats; you have to get a large portion of the entire development community into agreement, which is damned hard if not impossible to do. You need to join them in overwhelming force in order to out-vote them on policy. Any single person is doomed to failure; it was designed that way, and it works quite well. Debian is a hackers' playland. It was designed to be that, and that it will stay; the bylaws they live by are specifically designed to keep it that way, and human inertia and apathy will tend to keep it that way.
How long did it take to fix the networking init script that would only start networking and not stop or reset it? More than 3 years by my count. If you can't fix something like a simple init script, how do you expect to fix a fundamental difference in philosophy?
(!) [Thomas] That's not quite the point. It is my belief that Debian is very much a "hands-on" distribution -- that allows one to get one's hands dirty. If you thought that there was a problem, you should have submitted a patch :)
(!) [JimD] To effectively change Red Hat Inc. we have to either buy them, or represent a large enough portion of their revenues that they'll listen. To change Debian all we have to do is join them, work on the project and explain our reasons for recommending the change.
My problem with the Red Hat "distribution" is that it's been essentially abandoned by Red Hat Inc. Now, the community will maintain it; RH Inc. will take parts that they like and fold them into their proprietary AS/ES products. However, RH will also make changes that may make AS/ES diverge somewhat from the RH distro.
My comments are mostly intended to discourage people from creating yet another distro for broad consumer use.
I'm not saying everyone should just abandon the RH distro -- just try to transition some parts to the Debian base (where that makes sense) and to join Debian in sufficient numbers and quality to effect changes there.
Notice that I'm mostly a proponent of changing Debian in many of the specifics. The one thing that I would push on the other distros is the use of the same package/dependency infrastructure and granularity. I'm not talking about the commands used to manipulate the packages (apt-get, dpkg, aptitude, rpm, etc.) --- I personally would prefer to see a command named 'pkg' taking arguments somewhat like the rpm command (except for that idiocy of -U meaning "upgrade or install", thus necessitating the odd -F/--freshen switch to mean "upgrade only"). The difference would be that pkg -i foo would look at your local policy file (/etc/apt/sources.list) and know how to fetch foo from your preferred location or media.
(!) [DavidM] That is not the only problem. As far as I can tell, when a package changes (is updated) there is no history; the old package is removed and the new one inserted, replacing it. I could be wrong, but the gents at my last place were dedicated Debian devotees and they could not find where "outdated" packages were stored. Thus if a package was "updated" you were screwed and could not make an identical install from that point forward unless you went to the extreme of maintaining your own Debian repository.
(!) [JimD] If it happens at all, I expect it to take several years and releases of each of the two distributions. We've already seen lots of people implementing apt on RPM-based distributions. The problem with that superficial approach is that the underlying granularity and dependency-cycle problems remain. The real value of Debian is in that acyclic dependency tree.

(!) [Kapil] Joining the distro wars once more ...
Just some random points.
1. While Kickstart is a great idea, systemimager is not a bad replacement. For the more hardcore debian-ers, Kickstart seems too complicated when compared with "dpkg --get-selections" in combination with "dpkg --set-selections".
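As a sketch of that dpkg-based cloning (standard dpkg/apt usage; the file name is arbitrary):

dpkg --get-selections > selections.txt   # on the reference machine
dpkg --set-selections < selections.txt   # on the machine to be cloned
apt-get dselect-upgrade                   # install/remove packages to match the list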
(!) [Thomas] I have to agree here. I used RH's Kickstart once, when I first set up my little 386 server. It was a pain to begin with.
(!) [Kapil] 2. When comparing "rpm" with "dpkg" (or deb) what is lacking in dpkg is the automated signature/md5sum checking. On the other hand try to unpack an "rpm" on a non-RedHat machine...not all that automatic (you need to install "alien"). To unpack a "deb"...you will (most likely) find "ar" on any machine and so can unpack deb's quite easily.
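For instance, a .deb really is just an ar archive (the file name here is hypothetical):

ar x foo.deb          # yields debian-binary, control.tar.gz and data.tar.gz
tar xzf data.tar.gz   # unpacks the package's files under ./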
(!) [Thomas] I have to disagree slightly here. If one does not care for the "install" script that is inherent in either a .deb or .rpm (depending on the distro), then using "mc" is an efficient alternative to view the package in question and manually copy the files across to the specific directories. I don't recommend this for hugely dependent packages, but it is an effective means for smaller packages, where there are no dependencies.
(!) [Kapil] 3. I don't dislike RedHat as intensely as David seems to dislike Debian! However, the choice was made for me by RedHat when they decided to exclude text-mode tools for installation and management. Even "dselect" is better than nothing except "rpm". I used "purp" on RedHat systems for a while but it was always in "contrib" and often out-of-date.
(!) [Thomas] Using "dselect" under Debian is NOT a good idea -- it is extremely clunky, not to mention that it has a terrible UI. I would always recommend that people use "aptitude", both as a replacement for "dselect" AND for "apt-get". It functions as dselect (without any arguments) and as apt-get on invocation. It also handles dependencies and logging much better, in my opinion.
(!) [Kapil] 4. Re: xdm. I think all daemons were "meant to be" started from rc scripts so in fact the RedHat policy of starting "xdm" from inittab is strange. For example, in the old days it was possible for the console user to somehow "kill" the xdm/gdm/kdm on RedHat (possibly by killing the X server or something) causing all external Xterminal users to be logged out!
(!) [Thomas] Exactly! I have had this very gripe with RH and even tried e-mailing them to ask why they did not move xdm et al. to rc scripts. They never did reply to me. As Kapil notes, if something goes wrong with xdm then it can have consequences for other processes.
(!) [Kapil] 5. Finally, about building things from source. To some extent I agree. It would be nice if we had the following set-up:
A. All the system administrator had to do to was install a base system.
B. Most utility/application packages could be installed by users in (say) /opt/username/packagename/{bin,lib,...}.
(!) [Thomas] I disagree with this suggestion. Instead, installing applications to $HOME is still an acceptable solution, as $PATH usually includes $HOME/bin anyhow.
(!) [Kapil] C. There would be a user-level package management system that would allow users to mix and match their requirements by creating symlink farms in /home/username/{bin,lib,...} that point to different packages under /opt possibly installed by different users.
(!) [Thomas] Hmmm, installing applications system-wide and then allowing users to have custom settings in $HOME is not much different from the suggestion above.
(!) [Kapil] This way, those users who need "cutting edge" or even "bleeding edge" tools could install them. The system could remain "stable" and (hopefully) "secure". Such a setup may even be useful on single-user systems as the user could play with upgrades without messing up the running system. Disk space is not really an issue any more unless you use "KDE" or "Gnome" or "Openoffice" :-)

(?) a linux solution for the office

From amitava maity

Answered By: Ben Okopnik, Kapil Hari Paranjape, Heather Stern, Thomas Adam

Hello all!

We have three Windows machines at our office. These are shared by 10 to 12 persons. I am trying to find out if one of these machines could be converted into a Linux box. Hopefully this Linux box can then be configured to meet the demands of 8 terminals connected via serial cables and an appropriate 8-port serial adaptor. A GUI is necessary at all the terminals. Is this a feasible configuration? Can a Pentium-II, 233MHz machine with an IDE hard disk be used for this purpose?

Linux Gazette, you are doing a great job.

A Maity.

(!) [Ben] Take a look at the Linux Terminal Server Project <http://ltsp.org/>. It sounds like they'll fit your needs just fine.

(?) Is this a feasible configuration?

(!) [Kapil] There are two possible configurations:
  1. A server with a multi-port serial adapter, connected via serial cables to text terminals (such as vt100s).
  2. A server connected to an ethernet hub, to which GUI terminals (such as thin clients or diskless PCs running X) are connected.
In modern times, (2) is the way to go unless you already have the full infrastructure of (1). (Where will you find vt100 terminals nowadays?).
If you are going to buy some PCs, you can buy them with ethernet cards built in, plus a small 16-port ethernet hub. In fact, you can buy "thin clients" if you use a server---in particular, you can save on disk costs.
I would suggest that you do not buy 8 vt100 terminals! This is not cost effective nowadays. Second-hand vt100 terminals are likely to be a hardware headache (since it would appear from your message that you are not a seasoned sysadmin of the unix days) and (more importantly) will not satisfy those of your users who are used to the colorful graphical interface of Windows.
Instead, you can "share" the use of the GNU/Linux machine as you have been sharing the use of the Windows machines---in fact this is a more shareable system!
Advantages over your existing configuration:
  1. Users will be able to customize their desktops without interfering with one another.
  2. Users will be able to run jobs that can run unattended in the background.
  3. You can find and use a lot of useful software without attracting licensing costs.
  4. When you get some funds you can add some more Linux machines very easily.
(!) [Heather] Probably, especially if you use something a bit on the lightweight side for the basic window management details (such as fvwm instead of more weighty desktops, AbiWord with Gnumeric instead of StarOffice, etc.)
(!) [Thomas] Siag office (http://www.siag.org) isn't too bad.
(!) [Heather] Ben's right, LTSP is a good place to look. If these "terminals" are PCs but you'd rather they stay hard-diskless, consider keeping their CD bays and running Knoppix (a live-CD distro). Or maybe another live-CD distro (for instance, SuSE has a free one), but I'm advised most of them ask a few sysadminly questions during bootup, and Knoppix doesn't.

(?) Linux Gazette, you are doing a great job. A Maity.

(!) [Heather] Thanks, we love to hear that sort of thing!

(?) Simple DNS solution with Red Hat 9

From - E J -

Answered By: Faber Fedor, Kapil Hari Paranjape, Jim Dennis

[root@localhost sbin]# ./ifconfig eth0 address
192.168.1.103
address: Unknown host
ifconfig: `--help' gives usage information.
[root@localhost sbin]#

Please note I believe I need a DNS server (solution) for my home Red Hat Network. Is there a simple DNS solution I can establish?

I have checked some books - this does not seem to be as simple as editing /etc/host.conf

(!) [Faber] Okay, so? You mis-typed a command. I do it all the time.
(!) [Thomas] Hopefully, though, Faber doesn't mis-type too often as "root"
(!) [Faber] What are you attempting to do? What is "address"? Is it supposed to be an actual (IP) address? Is it an environment variable? What?
(!) [JimD] I'm going out on a limb here to guess that this was supposed to be rendered as:
[root@localhost sbin]# ./ifconfig eth0 address 192.168.1.103
address: Unknown host
ifconfig: `--help' gives usage information.
[root@localhost sbin]#
... cut and pasted from a root shell session.
You wanted to type:
[root@localhost sbin]# ./ifconfig eth0 192.168.1.103
... which would set your eth0 address to 192.168.1.103. That would also implicitly set the netmask to 255.255.255.0 and the broadcast address to 192.168.1.255. That netmask is the default for the traditional "Class C" network address blocks (all of 192.*.*.* among many others). The broadcast address is then calculated by taking the network part of the address (as determined by the netmask) and setting all of the remaining host bits to "on" (1).
In other words, you can often just specify the address without spelling out the other settings. You only have to specify the others when your network isn't following the "classical" parameters and defaults.
The problem with your command was that the word "address" was parsed as the name of a host. The ifconfig command then tried to resolve that name into an IP address (presumably via your /etc/hosts file, then DNS --- though that depends on the settings in your nsswitch.conf)
Here's a couple of other examples of ifconfig commands:
# ifconfig eth1 10.0.1.10 netmask 255.255.255.0 broadcast 10.0.1.255 up
# ifconfig eth2 123.45.67.8 netmask 255.255.255.224 broadcast 123.45.67.31
Notice that the address is the one argument that is not prefixed by a literal/keyword or label. It's usually the first argument, though it might work even if you don't follow that convention.
Notice in my last example that we're using a smaller netmask, like the kind you might get from an ISP that was only giving you a block of 30 IP addresses. Long ago I wrote an article on "subnetting and routing" which is still one of the most popular articles in LG/TAG history. I've been told it's used for some college TCP/IP fundamentals classes.
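Spelling out the arithmetic for that last example:

# 255.255.255.224 leaves 5 host bits (224 = 11100000), i.e. 32 addresses
# network   = 123.45.67.8 AND 255.255.255.224  ->  123.45.67.0
# broadcast = network with all 5 host bits set ->  123.45.67.31
# usable addresses: .1 through .30 -- hence a "block of 30"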

(?) Please note I believe I need a DNS server (solution) for my home Red Hat Network. Is there a simple DNS solution I can establish?

(!) [JimD] Perhaps you will need a DNS or other directory service (NIS or LDAP). However, in this case you just needed to look at the error and read the usage section of the --help and/or man page a little more carefully.
It also helps to think about the problem you're trying to solve. You're trying to configure a network interface. DNS and other directory services need to use that interface (or some interface) in order to resolve names into IP addresses. That would create a chicken-and-egg problem if the ifconfig command truly depended on name resolution. It would be unreasonable to assume that everyone has to run a network directory-services daemon on localhost --- and you'd still need to ifconfig the lo (localhost/loopback) interface first.
(That line of reasoning should alert you to the fact that there was something wrong with your premise --- that your conclusion was dubious.)
You almost certainly don't want to play with your /etc/host.conf.
However, it could be as simple (in this case) as editing /etc/hosts.
If you'd put an entry "192.168.1.103 address" as a line in your /etc/hosts file, then your command would have almost worked. It would complain about extra arguments --- but the lookup/resolution of the hostname "address" would have succeeded (assuming you have a normal /etc/nsswitch.conf).
If you put more reasonable address/name pairs in /etc/hosts and you securely distribute those (rsync -e ssh) to your other machines --- you have a working system of host name resolution without DNS, NIS, or LDAP.
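For example (names and addresses purely illustrative):

# /etc/hosts, identical on every machine on the home network
192.168.1.1     gateway
192.168.1.103   workstation

# push it to another machine over ssh
rsync -e ssh /etc/hosts root@192.168.1.103:/etc/hosts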
/etc/nsswitch.conf defines the list of services and methods used by glibc (C library) functions to resolve names (hostnames, network names, netgroup, user, group, and service) into numbers (IP addresses, lists of hosts, UIDs, GIDs, and TCP/UDP port numbers). Almost all of the programs on your system are dynamically linked against glibc (a.k.a. just libc). glibc implements resolvers that read /etc/nsswitch.conf and dynamically load /lib/libnss* modules as listed.
Such run-time linkage uses the dlopen() interface. There are two types of dynamic linking in Linux. One is link/compile time, such as the way that almost all programs are linked to libc and many programs are linked to libm (the C math functions library). These are listed by the ldd command (ld dump --- ld is a non-intuitive mnemonic for "linker"). Run-time linking is done via the dlopen() interface. Any time a program must read a configuration file, command-line option, environment setting, or any other run-time source of information to determine which modules to load --- it uses dlopen(). Obviously this is true of the NSS modules, since any program that uses any of these name services has to read /etc/nsswitch.conf to determine which libraries to load (NSS == "Name Service Switch").
dlopen() (run-time dynamic linking) is also used by PAM, for PERL and Python binary modules, Apache modules, and XFree86 version 4 and later. You can think of these as being a way to implement some object oriented features in normal C programs. The primary uses of these modules are to extend and/or modularize the functionality of a base binary program.
Thus you can get a custom authentication module (say one of those little electronic credit card PIN tokens) and drop it into your system; add one configuration line and all of the PAM linked programs have been extended to use this module. All without recompiling anything.
As another example you can install XFree86, as compiled by your distribution vendor, and you can install a driver module for your video card from some third party (perhaps even the manufacturer of the card).
I realize I've delved deeply under the hood here --- into details that you won't understand at first reading (and probably don't care about).
My point is that you don't need to run a network name service. Most of the NSS-linked programs check the local config files /etc/hosts, /etc/passwd, /etc/group, /etc/services, etc. first. They then check with other services as listed in /etc/nsswitch.conf. /etc/host.conf is still used, but its usage is somewhat superseded by /etc/nsswitch.conf.
(I'd love to see a good explanation of why we have both nsswitch.conf and host.conf on modern systems --- something at a higher level than their respective man pages).
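For reference, the relevant lines of a typical /etc/nsswitch.conf look something like this:

passwd:  files
group:   files
hosts:   files dns

... meaning: check the local files (/etc/passwd, /etc/group, /etc/hosts) first, and fall back to DNS for host names.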

(?) Creating RAMDISK

From Jose Nathaniel G. Nengasca

Answered By: Ben Okopnik, Kapil Hari Paranjape, Thomas Adam

(?) Hi there,

I just want to create a RAMDISK of 100MB to use as temporary storage for squid cache files. I am using Red Hat 8.0 with the GRUB bootloader and 750MB of RAM. The HOWTO on the LinuxFocus site is rather old (November 1999). Can someone help me with this?

Respectfully yours,

(!) [Kapil] I think you have something confused here. As I understand it:
Squid creates an object cache in memory and periodically saves objects to disk when it runs out of space in memory.
You want to create a virtual (RAM) disk for squid to use as its disk. Instead why don't you increase the amount of memory available to its in-memory object cache?
(!) [Ben] If you have the kernel sources installed, take a look at "/usr/src/kernel-source-<version>/Documentation/filesystems/tmpfs". It's a memory-based file system, and is created (by root) with "mount". Here's an example:

See attached ben-fstab.txt
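(The attachment isn't reproduced here; as a rough sketch, a 100MB tmpfs can be set up like this -- the mount point is purely illustrative:)

mount -t tmpfs -o size=100m tmpfs /var/cache/ramdisk

or, persistently, as a line in /etc/fstab:

tmpfs  /var/cache/ramdisk  tmpfs  size=100m  0  0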

(!) [Thomas] I think Ben has missed the point. Squid doesn't need a RAM disk at all: it already keeps its own in-memory object cache, and it is the size of that cache that determines how much RAM is used (see Kapil's answer above).
In /etc/squid.conf you can adjust this by changing:
cache_mem 50 MB
to something more appropriate. Don't forget to issue:
squid -k reconfigure
once changes have been made to the file.

(?) X server crash when starting up RH9 for the first time

From Claudiu Spataru

Answered By: Mike Martin, Ben Okopnik, Thomas Adam, Heather Stern

This is a multi-part message in MIME format.

This is naughty, please send e-mails in plain-text ONLY. -- Thomas Adam

(?) Hello

My X server crashes with the following error messages (see attached log).

My system is an Athlon 2500+, GeForce FX 5200 graphic card, A7N8X deluxe mainboard (not sure how relevant this info is, but added it anyway).

The font server can be stopped and restarted by using '/etc/init.d/xfs start|stop' without any problems. When eliminating the Fontpath line that points to unix/:7100 from the XF86Config file, it complains about a different fontpath that it cannot find and crashes once again. (There is also either no path set to run Xconfigurator or there is no such thing in my installation of RH9.)

Any known solution to the above problem?

(!) [MikeM] Don't do this -- put it back. If you don't use the xfs server, then you need to hardcode the font path into /etc/X11/XF86Config.
X won't start without xfs running. When X fails to start, is xfs running?
(!) [Ben] That's inaccurate. Either a font server _or_ a hard-coded font set is sufficient, and there's no advantage that I know of to a server if you're not doing X over a network. My system has run without "xfs" for years now.
(!) [MikeM] I know this is generally true; however, the querent is using Red Hat 9.
Red Hat does a fair bit with the font server, including dealing with easy adding of fonts, TTF fonts etc.
(!) [Thomas] That's inaccurate -- TrueType fonts are handled by the xtt server, NOT by xfs itself. They are two different servers. I still fail to see how RH finds running another process to handle fonts an advantage.
(!) [Heather] A smaller part to kick if it needs a restart? But within X this is often false economy; yanking live font servers is rather like pulling the rug out from under yourself. It can be done -- I've done it (read the 'xset' man page if you're crazy enough to mess with this) -- but I think the split is a holdover from an earlier time (only a year or two ago) when the X server did not speak TrueType on its own. You needed an external font server for it, and that was usually xfs patched for FreeType access (the way I did it), but there was a competitor from the Asian countries, who had a real need for readable letters.
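For the crazy-enough, the relevant xset incantations run along these lines (the directory is illustrative):

xset q                              # show current settings, including the font path
xset +fp /usr/share/fonts/local/    # prepend a directory to the font path
xset fp rehash                      # have the server re-read its font databases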
(!) [MikeM] So, to avoid other problems/questions, IMHO it is safe to say that the querent is better off using the installed font server (i.e., don't add unnecessary complications), especially as it sounds like it is actually working.
(!) [Thomas] I disagree with this -- X11 and fonts is rarely, if at all, distro specific. Just because RH uses a font server initially, doesn't mean to say that you have to continue using it. That is one of the ideals of Linux -- it is your OS. Do whatever you like :)
(!) [MikeM] Just did a quick google -- it's possible that the card is not recognised properly,
so try this (as root):
Edit the file /etc/X11/XF86Config and look for the device section, e.g.:
Section "Device"
	Identifier  "Videocard0"
	Driver      "nv"
	VendorName  "Videocard vendor"
	BoardName   "NVIDIA GeForce 4 (generic)"
	BUSID	    "PCI:1:0:0"
EndSection
and experiment with changing the Driver entry to vesa or nv, depending on what is there already.

(?) Thank you very much for the replies! It was indeed the fact that my graphic card did not get recognized properly and the generic vesa driver did not work in my case. After changing the values for 'BoardName' and 'Driver' as per Mike's suggestion, I was able to start X Windows.

(!) [MikeM] Generic X error problem solving
There are a few very common reasons for X not starting
1. Not enough disk space in / or /var
(!) [Thomas] If this is the case, then I doubt one would be able to log in anyhow, since /var/log/wtmp would have to be written to so that the "last(1)" command can keep a log of who has logged in.
(!) [MikeM] 2. Font server not starting (can be caused by 1.)
(!) [Heather] More accurately, "fixed" or some font needed by your window manager isn't available, so the session manager dies -- taking X with it.
This may be a dead font server, or some other buggy FontPath.
(!) [Thomas] RedHat like to use xfs/xfs-xtt to serve fonts. The truth is that a font server is NOT necessary, no matter which distribution one uses. The only time I can think of when you might want one is when you have to share fonts over the network.
In any case, if the font server fails to load, the default fonts listed in /etc/X11/XF86Config (under other distros: /etc/X11/XF86Config-4) are used as the "fallback".
(!) [MikeM] 3. Mouse not being initialised
(!) [Heather] This is actually quite rare; much more common is the mouse being incorrectly initialized, due to an incorrect protocol being specified for the input device -- which will get you into X, but with a mouse that doesn't move, or does something crazy as soon as you touch it, like race to one edge of the screen and flutter there like a trapped moth.
(!) [MikeM] 4. If you use the Nvidia binary module, the kernel module not being loaded.
(!) [Heather] i810s and other "shares memory with the CPU" video cards can have this problem too. It may also matter to embedded designers. In short, if you can't see the video without kernel help, the module needs to be loaded.
5. Generic error "No screens", often a problem with the video driver.
(!) [Thomas] Yes, either that or the wrong video driver has been selected. In that instance, a new one should be chosen. Under Debian, the fix for that would be to run:
dpkg-reconfigure xserver-xfree86
Commonly, the "no screens" error can also be caused by FrameBuffer options turned on. If one comments these out, the problem may also go away.
(!) [MikeM] points 3,4 and 5 will show an error in the X error log (or on the terminal that starts X)
(!) [Thomas] They'll only show a message assuming point 1 above is false; otherwise, how is X to write to the log file?
(!) [Heather] It will show the same message on the controlling terminal if you launch X as a command at a shell prompt instead of allowing anything automatic to try it. And while we're at it, disable any attempt whatsoever to launch X automatically if X isn't tuned up and happy. The infamous message "ID x respawning too fast" is a common symptom of that. The "ID" in that case is an /etc/inittab entry for your GUI login prompt.
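A sketch of that hands-on approach (the log path is the usual XFree86 4.x one):

# from a text console, start X by hand so errors land on your terminal
startx
# afterwards, read the server's own log
less /var/log/XFree86.0.log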
(!) [MikeM] What can often work as a quick fix is to run the script xf86config (all lower case)
This will wipe out the config for the font server though.
(!) [Thomas] In any case, adding the line:
FontPath    "unix/:7100"
as the first line under: Section "Files" should work.

(?) Converting from Win2k to Linux

From Tim Grossenbacher

Answered By: Faber Fedor, Jim Dennis

Gradually converting from a Windows 2000 server to Linux running Redhat 7.2.

(!) [Faber] First off, kudos on converting, but why 7.2? You should at least be on 7.3 (although I've found 9 to be nice and stable). You have patched the 7.2 box, haven't you?

(?) For many years, we have used social security numbers as login names within the Win2k domain to login.

(!) [Faber] My gawd, man! Are you mad? I certainly hope this domain is nowhere near the Internet! <Dr. Evil>But if it is, how do you translate between the login of the employee's SSN and his email name? Can you tell me the name of that file and which machine it is on??</Dr. Evil>
And you've never had a problem with identity theft? Amazing.

(?) Linux does not appear to allow me to create a user with numbers only as the user.

(!) [Faber] Correct. Linux (and every *nix I've seen) won't allow login names to start with a number. <Turns to the audience> Why is that? Anyone know?
(!) [JimD] Because any place in the code that's expecting a user token looks at the first character to determine if it's a UID or a name; then it looks up (getpwnam()) the username and translates it into a UID.
In other words "names" beginning with digits create an ambiguity between different representations of the same object (UID vs. name).
Now, granted, this could be changed. Programs could search the entire string for any non-digit and declare it to be a name rather than a UID. However, even then there'd be an ambiguity when the "name" consisted entirely of digits. Also, changing this would entail finding every piece of code that parses UIDs and user names anywhere (precisely the sort of change that is nearly impossible for an operating system that's been in use in hundreds of implementations for over thirty years).
You could certainly just use a letter prefix to your SSN as your user naming scheme. u123456789 (123-45-6789) would work just as well as 123456789.
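For instance, with a made-up number:

useradd -m u123456789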
As Faber has said, using SSNs in ANY visible way is an incredibly bad idea. Perusing the Privacy SSN FAQ:
http://www.faqs.org/faqs/privacy/ssn-faq/index.html
... would be a good idea.

(?) I have created test users with both alpha and numeric characters, and all works perfectly. Is there a work around?

(!) [JimD] Re-think your policy.
(!) [Faber] Well, you could always hack the source, of course, of course. But I assume there's a Good Reason why they don't allow it, I just don't know what it is.
(!) [JimD] Think ambiguity. Then think: millions of lines of code in thousands of programs. Then think: 30 years of books, education, and programmer experience --- hundreds of thousands of programmers who already know that usernames, like most identifiers in most languages, must start with an alpha or some suitable punctuation, and that leading digits signify a UID.
Sounds like a bad idea all around.
I suppose you could just modify the login programs to accept numerics and prefix them with some letter or even an _ (underscore) before logging the user in. This would keep the change focused just to a few programs and libraries (basically just the PAM and login suite).
However, this sort of hack has a way of causing more confusion later. Everyone at your site will then be "logging in" one way and getting a username that doesn't quite match the string they use to log in --- could cause lots of confusion.


Copyright © 2003
Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003
HTML script maintained by Heather Stern of Starshine Technical Services, http://www.starshine.org/

LINUX GAZETTE
...making Linux just a little more fun!
News Bytes
By Michael Conry

News Bytes

Contents:

Selected and formatted by Michael Conry

Submitters, send your News Bytes items in PLAIN TEXT format. Other formats may be rejected without reading. You have been warned! A one- or two-paragraph summary plus URL gets you a better announcement than an entire press release. Submit items to bytes@linuxgazette.com


 November 2003 Linux Journal

[issue 114 cover image] The November issue of Linux Journal is on newsstands now. This issue focuses on System Administration. Click here to view the table of contents, or here to subscribe.

All articles in issues 1-102 are available for public reading at http://www.linuxjournal.com/magazine.php. Recent articles are available on-line for subscribers only at http://interactive.linuxjournal.com/.


Legislation and More Legislation


 European Patents

Last month we reported on an upcoming vote in the European Parliament regarding the future of software patents in Europe. According to many Free and Open Software advocates, the proposals in front of the parliament would, probably permanently, establish the practice of software patenting in Europe. This, they argued, would lead to a reduction in innovation and give an unnecessary and crushing advantage to large software companies who could use their greater resources to legally elbow smaller competitors out of the market. You can read a cogent and well-argued discussion of these views in the open letter addressed by Linus Torvalds and Alan Cox to the members of the European Parliament.

There are many people out there who were not content simply to complain about the direction events were taking. Instead, they lobbied, and lobbied hard, to get the concerns of financially small (though numerically large) interest groups onto the agenda. An initial sign that these efforts might be having an effect was the announcement of a further postponement of the vote on the proposed directive. Though no reason was given for the delay, the first postponement was the result of widespread confusion regarding the issues at stake, and a feeling that the directive was being forced through with undue haste. It seems likely that similar sentiments led to this second deferment.

The ultimate, and welcome, result of this concerted lobbying process was that amendments were proposed to the directive which removed many of the most objectionable proposals. This amended directive was approved by the parliament with a margin of 364 votes to 153 with 33 abstentions. LWN have helpfully reproduced the directive online in a readable HTML format.

It is important to learn lessons from this success and to apply them in future struggles. Many Free Software enthusiasts have learnt valuable lobbying skills in the course of their advocacy, and these skills must be developed and shared. A particularly interesting account of this lobbying process has been published by NewsForge, and it gives useful information both on how to lobby, and on what level of understanding can be expected of politicians and their staff. Additionally, it is important to follow up on politicians who have been lobbied and to check how they actually voted. As pointed out by NTKnow, the UK Liberal Democrats made very positive noises, but ultimately voted in favour of patents. Of the UK parties, only the Greens and the UK Independence Party voted against software patents. If politicians realise that Free software advocates pay attention (and draw attention) to their voting records, they will be far more likely to heed future representations. These skills will be especially important since pro-patent interests are likely to try to get their way through the alternative route of national parliaments.

For more information on this story, if you are interested in an anti-patents, pro-free-software point of view you should look at FFII.org. Their account of the final amendments and vote is worth reading.


Linux Links

A comparison of four Linux Office suites and how well they handle random MS Word/Excel/PowerPoint documents.

Some links from NewsForge:

Bellevue's Linare sees Linux future, launches $19.95 operating system.

The age of corporate open source enlightenment

Some links from Linux Weekly News:

And since they generate so much media noise, let's list a few relating to SCO:

Some links from O'Reilly:

Some links via LinuxToday:

IRC Linux Help for beginners.


Upcoming conferences and events

Listings courtesy Linux Journal. See LJ's Events page for the latest goings-on.

IDUG 2003 - Europe
October 7-10, 2003
Nice, France
http://www.idug.org

Linux Clusters Institute Workshops
October 13-18, 2003
Montpellier, France
http://www.linuxclustersinstitute.org

Coast Open Source Software Technology (COSST) Symposium
October 18, 2003
Newport Beach, CA
http://cosst.ieee-occs.org

Enterprise Linux Forum
October 22-23, 2003
Washington, DC
http://www.enterpriselinuxforum.com/

Media Advisory/Open Source Software Chicago Forum
October 23, 2003
Chicago, IL
http://www.osschicago.com/

PhreakNic
October 24-26, 2003
Nashville, TN
http://www.phreaknic.info/

LISA (17th USENIX Systems Administration Conference)
October 26-30, 2003
San Diego, CA
http://www.usenix.org/events/lisa03/

Linux Open Alternative Days
October 30-31, 2003
Bucharest, Romania
http://www.load.ro/

O'Reilly Mac OS X Conference
October 27-30, 2003
Santa Clara, CA
http://conferences.oreillynet.com/macosx2003/

HiverCon 2003
November 6-7, 2003
Dublin, Ireland
http://www.hivercon.com/

COMDEX Fall
November 17-21, 2003
Las Vegas, NV
http://www.comdex.com/fall2003/

Southern California Linux Expo (SCALE)
November 22, 2003
Los Angeles, CA
http://socallinuxexpo.com/

Annual Computer Security Applications Conference (ACSAC)
December 8-12, 2003
Las Vegas, NV
http://www.acsac.org/

Linux Clusters Institute Workshops
December 8-12, 2003
Albuquerque, NM
http://www.linuxclustersinstitute.org

Storage Expo 2003, co-located with Infosecurity 2003
December 9-11, 2003
New York, NY
http://www.infosecurityevent.com/

Consumer Electronics Show
January 8-11, 2004
Las Vegas, NV
http://www.cesweb.org/

Linux.Conf.AU
January 12-17, 2004
Australia
http://conf.linux.org.au/

LinuxWorld Conference & Expo
January 20-23, 2004
New York, NY
http://linuxworldexpo.com/

O'Reilly Emerging Technology Conference
February 9-12, 2004
San Diego, CA
http://conferences.oreillynet.com/etcon/

SXSW
March 12-21, 2004
Austin, TX
http://sxsw.com/

SD West
March 15-19, 2004
Santa Clara, CA
http://www.sdexpo.com

CeBit Hannover
March 18-24, 2004
Hannover, Germany
http://www.cebit.de

COMDEX Canada
March 24-26, 2004
Toronto, Ontario
http://www.comdex.com

2004 USENIX/ACM Symposium on Networked Systems Design and Implementation (NSDI)
March 29-31, 2004
San Francisco, CA
http://www.usenix.org/events/nsdi04/

RealWorld Linux
April 13-15, 2004
Toronto, Ontario
http://www.realworldlinux.com

CeBit America
May 25-27, 2004
New York, NY
http://www.cebit-america.com/

Strictly Business Solutions Expo
June 9-10, 2004
Minneapolis, MN
http://www.strictlyebusiness.net/sb/mpls/index.po

USENIX Annual Technical Conference
June 27 - July 2, 2004
Boston, MA
http://www.usenix.com/events/usenix04/

O'Reilly Open Source Convention
July 26-30, 2004
Portland, OR
http://conferences.oreillynet.com/

LinuxWorld Conference & Expo
August 3-5, 2004
San Francisco, CA
http://www.linuxworldexpo.com/

USENIX Security Symposium
August 9-13, 2004
San Diego, CA
http://www.usenix.com/events/sec04/

USENIX Systems Administration Conference (LISA)
November 14-19, 2004
Atlanta, GA
http://www.usenix.com/events/


News in General


 Mobilix

In the final roll of the dice for Mobilix (a site providing information on mobile Linux systems), the highest German civil court has found in favour of Les Editions Albert René and has dismissed the appeal brought by Werner Heuser. The work that formerly took place under the Mobilix banner will still continue, thankfully, under the new name Tuxmobil.


Distro News


 Debian

Debian Weekly News reported that The Debian project has received full access to a Dual Opteron machine for porting efforts to the new amd64 architecture.


Also from Debian Weekly News: the debian-installer team has put together a HOWTO which guides you through the process of installing sarge.


 Morphix

Prakash Advani conducted an interview with Alex de Landgraaf, the founder and the lead maintainer of the Morphix project. [via DWN]


 Mandrake

Mandrake Linux is planning to sell advertising space in the upcoming release of Mandrake Linux 9.2. There is further information in the NewsForge story.


Software and Product News


 Python 2.3.1

The Python Software Foundation has announced the release of version 2.3.1 of the Python programming language. This minor release introduces a number of enhancements based on two months of experience since release of version 2.3.


 Scribus 1.1.0

Franz Schmid has announced the release of Scribus 1.1.0 - Linux Desktop Publishing, which builds upon the recently released Scribus 1.0, as well as the launching of an integrated Scribus Web site at www.scribus.org.uk.


 GNOME-Office 1.0 Released

The GNOME-Office team has announced the immediate availability of GNOME-Office 1.0. GNOME-Office is a suite of Free Software productivity applications that seamlessly blend with the GNOME Desktop Environment. GNOME-Office includes the AbiWord-2.0 Word Processor, GNOME-DB-1.0 Database Interface and Gnumeric-1.2.0 Spreadsheet.


 MuNAS

MuNAS is a piece of software which addresses the problem that the X Window System does not support the handling of audio data. It allows the thin-client/server computing model in Linux to handle multimedia applications: the audio data generated by Open Sound System (OSS/Free)-compatible audio applications running on the terminal server can be transferred to X-terminals. Thus, with MuNAS installed, you can execute multimedia applications on the terminal server and listen to the sound from your X-terminal. Currently, several manufacturers of Windows terminals are planning to install MuNAS in their X-terminals.


 XFce 4.0

The XFce Project has announced the release of version 4.0 of their desktop environment and development platform.

 

Mick is LG's News Bytes Editor.

[Picture] Born some time ago in Ireland, Michael is currently working on a PhD thesis in the Department of Mechanical Engineering, University College Dublin. The topic of this work is the use of Lamb waves in nondestructive testing. GNU/Linux has been very useful in this work, and Michael has a strong interest in applying free software solutions to other problems in engineering. When his thesis is completed, Michael plans to take a long walk.


Copyright © 2003, Michael Conry. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003

LINUX GAZETTE
...making Linux just a little more fun!
Ecol
By Javier Malonda

The Ecol comic strip is written for escomposlinux.org (ECOL), the web site that supports es.comp.os.linux, the Spanish USENET newsgroup for Linux. The strips are drawn in Spanish and then translated to English by the author.

These images are scaled down to minimize horizontal scrolling. To see a panel in all its clarity, click on it.

[cartoon]
[cartoon]
[cartoon]

All Ecol cartoons are at tira.escomposlinux.org (Spanish), comic.escomposlinux.org (English) and http://tira.puntbarra.com/ (Catalan). The Catalan version is translated by the people who run the site; only a few episodes are currently available.

These cartoons are copyright Javier Malonda. They may be copied, linked or distributed by any means. However, you may not distribute modifications. If you link to a cartoon, please notify Javier, who would appreciate hearing from you.

 


Copyright © 2003, Javier Malonda. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003

LINUX GAZETTE
...making Linux just a little more fun!
Quick and Dirty Data Extraction in AWK
By Phil Hughes

CC: Quick and Dirty Data Extraction in AWK

Many years ago, probably close to 20, there was a regular point made on the comp.* Usenet newsgroups about using the minimum tool to get the job done. That is, someone would ask for a quick and dirty way to do something, and the followups could include a C solution, followed by an AWK solution, followed by a sed solution, and so on.

Today, I still try to use this philosophy when addressing a problem. In this particular case, I picked AWK but if any of you old-timers are reading this I expect you will come up with a sed-based solution.

The Problem: Extracting Data from E-mail Messages

I signed up for a daily summary of currency exchange rates. It's free and you can subscribe too--just go here. Most days I take a quick look at how the $ is doing against the Euro and then save the e-mail. Some days I just save it. I have always thought that, someday, I would write a program to show me the trend but it has always been low priority.

Yesterday, as I was looking at a few of the saved mail messages, I realized that while writing a fancy graphing program was low-priority, writing a quick and dirty hack would take less time than the random sampling I was doing. What I wanted was dates and numbers along with a minimalist graphical display of the trend.

First step was to look at the data. Here is an extract of part of a message.


>From list@en.ucc.xe.net  Wed Sep 10 12:22:53 2003
...

XE.com's Currency Update Service writes:

Here is today's Currency Update, a service of XE.com. Please read the
copyright, terms of use agreement, and information sections at the
end of this message.  CUS5D0B3D5C16D9
____________________________________________________________________________

If you find our free currency e-mail updates useful, please forward this
message to a friend! Subscribe for free at: http://www.xe.com/cus/
____________________________________________________________________________
<PRE>

Rates as of 2003.09.09 20:46:35 UTC (GMT). Base currency is EUR.

Currency Unit                          EUR per Unit         Units per EUR
================================   ===================   ===================
USD United States Dollars                 0.890585              1.12286     
EUR Euro                                  1.00000               1.00000     
GBP United Kingdom Pounds                 1.41659               0.705920    
CAD Canada Dollars                        0.651411              1.53513     
...

</PRE>

For help reading this mailout, refer to: http://www.xe.com/cus/sample.htm

...
The ... lines just indicate that I tossed a lot of uninteresting lines.

There are three things I use to produce the report: the "Rates as of" line (which carries the date), the line starting with USD (which carries the rate), and the closing </PRE> line (which tells me when to print).

The Solution

The numeric part of the solution is really easy. Just grab the date info and the rate info. When I get the </PRE> line, print it out.

The graphical part is just done by printing a number of plus signs that corresponds to the rate. To get decent resolution I would either need a very wide printout or some sort of offset. I went for the offset assuming the Euro will not drop below $.90 which is pretty safe considering the direction it is going.

Finally, I wanted a heading. Using AWK's BEGIN block, I put in a couple of print statements. Not liking to count characters, I defined the variable over to hold the run of spaces that needed to be placed before the title info to align everything. This just meant that I had to run the program, see how far I was off, and adjust the variable.

Here is the code.


# "over" holds the spaces that line the heading up with the data lines
BEGIN		{
		over = "                 "
		print over, " Cost of Euros in $ by date"
		print over, ".9       1.0       1.1       1.2       1.3"
		print over, "|         |         |         |         |"
		}
# grab the date from the "Rates as of ..." line
/Rates as of/	{ date = $4 }
# grab the dollars-per-Euro rate from the USD line
/^USD/		{ rate = $6 }
# end of the rate table: print one report line, then reset
/^<\/PRE>/	{
		printf "%s %6.3f ", date, rate
		# offset by .895, so each "+" is one cent above $0.90
		rc = (rate - .895) * 100
		for (i=0; i < rc; i++) printf "+"
		printf "\n"
		date = "xxx"
		rate = 0
		}

Just running the program with the mail file as input prints all the result lines, but the order is that of the data in the mail file. The sort program to the rescue. The first field in the output is the date, and some careful choice of the first character of the title lines means everything sorts just right with no options. Thus, to run, use:

    awk -f cc.as messages | sort 
and you get your fancy report. Pipe the result through more if you have a lot of lines to look at.

Here is a sample of the output:


                   Cost of Euros in $ by date
                  .9       1.0       1.1       1.2       1.3
                  |         |         |         |         |
2003.01.02  1.036 +++++++++++++++
...
2003.08.28  1.087 ++++++++++++++++++++
2003.08.29  1.098 +++++++++++++++++++++
2003.08.31  1.099 +++++++++++++++++++++
2003.09.01  1.097 +++++++++++++++++++++
2003.09.02  1.081 +++++++++++++++++++
2003.09.04  1.094 ++++++++++++++++++++
2003.09.05  1.110 ++++++++++++++++++++++
2003.09.07  1.110 ++++++++++++++++++++++
2003.09.08  1.107 ++++++++++++++++++++++
2003.09.09  1.123 +++++++++++++++++++++++
2003.09.10  1.121 +++++++++++++++++++++++
2003.09.11  1.120 +++++++++++++++++++++++
2003.09.12  1.129 ++++++++++++++++++++++++
2003.09.14  1.127 ++++++++++++++++++++++++
2003.09.15  1.128 ++++++++++++++++++++++++
2003.09.16  1.117 +++++++++++++++++++++++
2003.09.17  1.129 ++++++++++++++++++++++++
2003.09.18  1.124 +++++++++++++++++++++++
2003.09.19  1.138 +++++++++++++++++++++++++

Ok sed experts, have at it. --

 

Phil Hughes is the publisher of Linux Journal, and thereby Linux Gazette. He dreams of permanently tele-commuting from his home on the Pacific coast of the Olympic Peninsula. As an employer, he is "Vicious, Evil, Mean, & Nasty, but kind of mellow" as a boss should be.


Copyright © 2003, Phil Hughes. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003

LINUX GAZETTE
...making Linux just a little more fun!
Integrating Tomcat and Apache on RedHat 9
By Mike Millson

Integrating Tomcat and Apache on RedHat 9.0



Mike Millson
Web Systems Engineer
mmillson@meritonlinesystems.com
August 26, 2003
Merit Online Systems, Inc.
www.meritonlinesystems.com

Introduction

Java servlets are a powerful tool for building websites and web based applications. One skill that every Java web developer should have is the ability to install and configure the Tomcat servlet engine. Many thanks to the Apache Software Foundation for providing this mature, stable, open source software. It was recently voted the Best Application Server of 2003 by InfoWorld readers.

This article discusses how to integrate Tomcat with the Apache web server on RedHat 9.0. The goal is to provide a simple, stable configuration that will allow users to gain confidence using Tomcat.

Please note all commands are issued as root unless otherwise noted.

Installing Apache

I chose to install Apache using the RedHat RPM. Using the RPM instead of compiling Apache from source simplifies system administration in the following ways:

I recommend using the RedHat up2date command line utility to install RedHat RPMs. Although up2date can be used without purchasing a RedHat Network subscription, a basic subscription is a great value. It eliminates a multitude of headaches by ensuring the software you install is the correct version and you have the right dependencies installed on your system.

RedHat RPMs that must be installed:

  1. httpd - the Apache web server itself
  2. httpd-devel - development files, including the apxs tool needed later to build the mod_jk connector

To install these packages using up2date, make sure you are connected to the Internet, and enter the following:

up2date -i httpd
up2date -i httpd-devel

You should now be able to start/stop/restart Apache as follows:

service httpd start
service httpd stop
service httpd restart

Verify that Apache is working by starting Apache and typing http://localhost/ into your browser. You should see the default Apache install page with links to documentation.

Installing Tomcat

The only requirements to run Tomcat are that a Java Development Kit (JDK), also called a Java Software Development Kit (SDK), be installed and that the JAVA_HOME environment variable be set.

Java SDK

I chose to install Sun's Java 2 Platform, Standard Edition, which can be downloaded from http://java.sun.com/j2se/. I chose the J2SE v1.4.2 SDK Linux self-extracting binary file.

Change to the directory where you downloaded the SDK and make the self-extracting binary executable:

chmod +x j2sdk-1_4_2-linux-i586.bin

Run the self-extracting binary:

./j2sdk-1_4_2-linux-i586.bin

There should now be a directory called j2sdk1.4.2 in the download directory. Move the SDK directory to where you want it to be installed. I chose to install it in /usr/java. Create /usr/java if it doesn't exist. Here is the command I used from inside the download directory:

mv j2sdk1.4.2 /usr/java

Set the JAVA_HOME environment variable, by modifying /etc/profile so it includes the following:

JAVA_HOME="/usr/java/j2sdk1.4.2"
export JAVA_HOME

Other environment variables are already set in /etc/profile, so you will probably be adding JAVA_HOME to an existing export statement. /etc/profile is executed when a user logs into the system.
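As a sketch, and assuming the stock RedHat 9 /etc/profile (the exact variables in the existing export line will differ on your system), the end of the file might then look like this:

JAVA_HOME="/usr/java/j2sdk1.4.2"
export PATH USER LOGNAME MAIL HOSTNAME HISTSIZE INPUTRC JAVA_HOME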

Tomcat Account

You will install and configure Tomcat as root; however, you should create a group and user account for Tomcat to run under as follows:

groupadd tomcat
useradd -g tomcat tomcat

This will create the /home/tomcat directory, where I will install my Tomcat applications.

Download Tomcat

Download the latest release build from http://www.apache.org/dist/jakarta/tomcat-4/binaries/. Since Tomcat runs directly on top of a standard JDK, I cannot think of any reason to build it from source.

The Tomcat binary is available in two different flavors:

  1. non-LE
    • Full binary distribution
    • Includes all optional libraries and an XML parser (Xerces)
    • Can be run on JDK 1.2+
  2. LE
    • Lightweight binary distribution
    • Designed to be run on JDK 1.4
    • Does not include an XML parser because one is included in JDK 1.4
    • Can be run on JDK 1.2 by adding an XML parser
    • All the components of this distribution are open source software
    • Does not include any of the following optional binaries: JavaMail, Java Activation Framework, Xerces, JNDI, or the JDBC Standard Extension

There are a number of different download formats. I chose the LE version gnu zipped tar file (jakarta-tomcat-4.1.27-LE-jdk14.tar.gz).

Tomcat Standalone

Unzip Tomcat by issuing the following command from your download directory:

tar xvzf jakarta-tomcat-4.1.27-LE-jdk14.tar.gz

This will create a directory called jakarta-tomcat-4.1.27-LE-jdk14. Move this directory to wherever you would like to install Tomcat. I chose /usr/local. Here is the command I issued from inside the download directory:

mv jakarta-tomcat-4.1.27-LE-jdk14 /usr/local/

The directory where Tomcat is installed is referred to as CATALINA_HOME in the Tomcat documentation. In this case CATALINA_HOME=/usr/local/jakarta-tomcat-4.1.27-LE-jdk14.

I recommend setting up a symbolic link to point to your current Tomcat version. This will save you from having to change your startup and shutdown scripts each time you upgrade Tomcat or set a CATALINA_HOME environment variable. It also allows you to keep several versions of Tomcat on your system and easily switch amongst them. Here is the command I issued from inside /usr/local to create a symbolic link called /usr/local/jakarta-tomcat that points to /usr/local/jakarta-tomcat-4.1.27-LE-jdk14:

ln -s jakarta-tomcat-4.1.27-LE-jdk14 jakarta-tomcat

Change the group and owner of the /usr/local/jakarta-tomcat and /usr/local/jakarta-tomcat-4.1.27-LE-jdk14 directories to tomcat:

chown tomcat.tomcat /usr/local/jakarta-tomcat
chown -R tomcat.tomcat /usr/local/jakarta-tomcat-4.1.27-LE-jdk14

It is not necessary to set the CATALINA_HOME environment variable. Tomcat is smart enough to figure out CATALINA_HOME on its own.

You should now be able to start and stop Tomcat from the CATALINA_HOME/bin directory by typing ./startup.sh and ./shutdown.sh respectively. Test that Tomcat is working by starting it and typing http://localhost:8080 into your browser. You should see the Tomcat welcome page with links to documentation and sample code. Verify Tomcat is working by clicking on some of the examples links.
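Spelled out, and assuming the symbolic link created above, that is:

cd /usr/local/jakarta-tomcat/bin
./startup.sh        # start Tomcat
./shutdown.sh       # stop Tomcat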

Selecting A Connector

At this point, Apache and Tomcat should be working separately in standalone mode. You can run Tomcat in standalone mode as an alternative to Apache. In fact, in some cases, it is said that Tomcat standalone is faster than serving static content from Apache and dynamic content from Tomcat. However, there are compelling reasons to use Apache as the front end. If you run Tomcat standalone:

  1. You will have to run Tomcat as root on port 80. This is a security concern.
  2. You will not be able to use a connector such as mod_jk to load balance amongst several Tomcat instances.
  3. You will not be able to take advantage of Apache features such as cgi and PHP.
  4. You will not be able to take advantage of Apache modules such as mod_rewrite.
  5. You will not be able to isolate virtual hosts in their own Tomcat instances.

I think the increased functionality obtained by using Apache on the front end far outweighs the effort required to install and configure a connector. With that said, I selected the tried and true mod_jk connector. It has been around a long while and is very stable. mod_jk2 is the wave of the future, but I'm holding off on that for now. In early 2002 I invested a considerable amount of time on the "wave of the future" connector at that time, mod_webapp, which is now no longer being developed. For that reason, I am being cautious about migrating to mod_jk2.

Building the mod_jk Connector

The mod_jk connector is the communication link between Apache and Tomcat. It listens on port 8009 for requests from Apache.

In my experience, it's safest to think of connectors as being version dependent. If you upgrade Tomcat and you have a connector issue, try compiling the connector using the version-specific connector source.

Download the connector source for your version of Tomcat from http://www.apache.org/dist/jakarta/tomcat-4/source/. I used jakarta-tomcat-connectors-4.1.27-src.tar.gz. The source for all the different connectors (mod_jk, mod_jk2, coyote, etc.) is distributed in this one file.

Unzip the contents of the file into your download directory as follows:

tar xvzf jakarta-tomcat-connectors-4.1.27-src.tar.gz

This will create a folder called jakarta-tomcat-connectors-4.1.27-src. Move this folder to wherever you store source files on your system. I chose /usr/src. Here is the command I issued from inside the download directory:

mv jakarta-tomcat-connectors-4.1.27-src /usr/src/

I refer to the folder where the connector source is installed as CONN_SRC_HOME. In my case CONN_SRC_HOME = /usr/src/jakarta-tomcat-connectors-4.1.27-src.

Run the buildconf script to create the CONN_SRC_HOME/jk/native/configure file:

CONN_SRC_HOME/jk/native/buildconf.sh

Change to the CONN_SRC_HOME/jk/native directory and run the configure script, passing the path to the apxs file on your system:

./configure --with-apxs=/usr/sbin/apxs

Build mod_jk with the following command:

make

If all went well, the mod_jk.so file was created in the apache-2.0 subdirectory. Manually copy it to Apache's shared object files directory:

cp CONN_SRC_HOME/jk/native/apache-2.0/mod_jk.so /etc/httpd/modules

Configuring Tomcat

workers.properties

The workers.properties file contains the information mod_jk needs in order to connect to the Tomcat worker processes.

Create a directory called CATALINA_HOME/conf/jk and place the workers.properties file found in the Appendix in this directory.
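For example, assuming the workers.properties file has been saved in the current directory and the symbolic link created earlier is in place:

mkdir /usr/local/jakarta-tomcat/conf/jk
cp workers.properties /usr/local/jakarta-tomcat/conf/jk/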

server.xml

The server.xml file contains Tomcat server configuration information. The default CATALINA_HOME/conf/server.xml file that comes with Tomcat contains so much information that I recommend saving it for future reference (e.g. server.xml.bak) and starting from scratch. The default server.xml is great for verifying that Tomcat works in standalone mode and for viewing the examples that come with the application, but I have found it is not the best starting point when you want to integrate Apache with Tomcat. Instead, create a bare bones server.xml file as follows:

<Server port="8005" shutdown="SHUTDOWN" debug="0">

	<Service name="Tomcat-Apache">

		<Connector className="org.apache.ajp.tomcat4.Ajp13Connector"
			port="8009" minProcessors="5" maxProcessors="75" 
			acceptCount="10" debug="0"/>   

		<Engine name="your_engine" debug="0" defaultHost="your_domain">
			<Logger className="org.apache.catalina.logger.FileLogger"
				prefix="apache_log." suffix=".txt" 
				timestamp="true"/>
			<Host name="your_domain" debug="0" appBase="webapps" 
				unpackWARs="true">
				
				<Context path="" docBase="/home/tomcat/your_application" 
				debug="0" reloadable="true" />
				
			</Host>
		</Engine>

	</Service>

</Server>
Notes:
  1. The setup assumes you will put your Tomcat applications in /home/tomcat, not CATALINA_HOME/webapps. This will allow you to easily upgrade Tomcat and back up your Tomcat applications.
  2. If you do keep the default server.xml, make sure you comment out any other connectors besides mod_jk that are listening on port 8009. The default file comes with the Coyote/JK2 connector enabled for the Tomcat-Standalone service. This will conflict with the mod_jk connector in your Tomcat-Apache service. You should comment this connector out. It isn't needed when you connect directly to Tomcat in standalone mode (port 8080), so I'm not sure why this connector is enabled by default.

Configuring Apache

httpd.conf

Apache is configured with directives placed in the Apache configuration file, /etc/httpd/conf/httpd.conf. You will notice that there are three sections labeled in the httpd.conf file supplied by RedHat: (1) Global Environment, (2) Main Server Configuration, and (3) Virtual Hosts.

Add the following to the bottom of the existing LoadModule directives in the Global Environment section:

LoadModule jk_module modules/mod_jk.so

Add the following to the bottom of the existing AddModule directives in the Global Environment section:

AddModule mod_jk.c

Add the following to the bottom of the Main Server Configuration section:

JkWorkersFile "/usr/local/jakarta-tomcat/conf/jk/workers.properties"
JkLogFile "/usr/local/jakarta-tomcat/logs/mod_jk.log"
JkLogLevel info
JkLogStampFormat "[%a %b %d %H:%M:%S %Y]"

The configuration above assumes you created a symbolic link /usr/local/jakarta-tomcat that points to the directory where your version of Tomcat is installed.

Set up a Virtual Host directive in the Virtual Hosts section of httpd.conf. Below is an example of how to set up the your_domain website to forward all URLs with "servlet" in the path to Tomcat:

NameVirtualHost *:80

<VirtualHost *:80>
	ServerAdmin webmaster@your_domain
	ServerName your_domain
	DocumentRoot /home/www/your_domain/html
	ErrorLog /home/www/your_domain/logs/error_log
	CustomLog /home/www/your_domain/logs/access_log common
	JkMount /servlet/* ajp13
</VirtualHost>

The configuration above assumes that your application's static html files will be served from the /home/www/your_domain/html directory.

You can test your Apache configuration by typing the following:

apachectl configtest

You will receive the response "Syntax OK" if there are no errors in httpd.conf.

Setting Up your_domain

your_domain does not need to be a real domain name with a DNS entry. For testing purposes, you can set up any domain you want in the /etc/hosts file of the machine that you will be using to access your_application.

The example below shows the entry for your_domain when running Apache and Tomcat on a single machine, typical for a development computer.

127.0.0.1	your_domain

Testing

We will now create and install a simple Hello World servlet so we can test our setup.

Hello World Servlet

Copy the following into a file called HelloWorld.java:

import java.io.*;
import javax.servlet.*;
import javax.servlet.http.*;

public class HelloWorld extends HttpServlet {

    public void doGet(HttpServletRequest request,
                      HttpServletResponse response)
            throws IOException, ServletException {

        response.setContentType("text/html");
        PrintWriter out = response.getWriter();

        out.println("Hello World");
    }
}

Compile the source into a class file as follows:

javac -classpath /usr/local/jakarta-tomcat/common/lib/servlet.jar HelloWorld.java

This will create a file called HelloWorld.class.

Tomcat Application

Create the following directories and files in /home/tomcat/your_application:

/home/tomcat/your_application/WEB-INF
/home/tomcat/your_application/WEB-INF/classes
/home/tomcat/your_application/WEB-INF/web.xml
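The directories can be created in one step (web.xml itself is created below):

mkdir -p /home/tomcat/your_application/WEB-INF/classes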

The web.xml file is where you map the name of your servlet to a URL pattern so Tomcat can run your servlet when requested. Below is the web.xml file that runs the HelloWorld servlet whenever the URL http://your_domain/servlet/HelloWorld is entered in the browser:

<?xml version="1.0" encoding="ISO-8859-1"?>

<!DOCTYPE web-app
    PUBLIC "-//Sun Microsystems, Inc.//DTD Web Application 2.3//EN"
    "http://java.sun.com/dtd/web-app_2_3.dtd">

<web-app>

	<servlet>
		<servlet-name>HelloWorld</servlet-name>
		<servlet-class>HelloWorld</servlet-class>
	</servlet>
	<servlet-mapping>
		<servlet-name>HelloWorld</servlet-name>
		<url-pattern>/servlet/HelloWorld</url-pattern>
	</servlet-mapping>
                
</web-app>

Copy the HelloWorld.class file to the /home/tomcat/your_application/WEB-INF/classes directory.
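For example:

cp HelloWorld.class /home/tomcat/your_application/WEB-INF/classes/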

Restart Tomcat as follows:

CATALINA_HOME/bin/shutdown.sh
CATALINA_HOME/bin/startup.sh

Restart Apache as follows:

service httpd restart

You should now be able to type http://your_domain/servlet/HelloWorld into your browser and see the always-exciting "Hello World" message.

Advanced Configuration

The following steps are not mandatory, but are suggested for a better, tighter Tomcat installation.

Tomcat Startup Script

If you want to automatically start Tomcat when your system boots and manage it using the service command as we do Apache, you must create an initialization script.

Create the following Tomcat initialization script as /etc/rc.d/init.d/tomcat

#!/bin/sh
#
# Startup script for Tomcat, the Apache Servlet Engine
#
# chkconfig: 345 80 20
# description: Tomcat is the Apache Servlet Engine
# processname: tomcat
# pidfile: /var/run/tomcat.pid
#
# Mike Millson <mmillson@meritonlinesystems.com>
#
# version 1.02 - Clear work directory on shutdown per John Turner suggestion.
# version 1.01 - Cross between RedHat Tomcat RPM and Chris Bush scripts

# Tomcat name :)
TOMCAT_PROG=tomcat
 
# if TOMCAT_USER is not set, use tomcat like Apache HTTP server
if [ -z "$TOMCAT_USER" ]; then
 TOMCAT_USER="tomcat"
fi

RETVAL=0

# start and stop functions
start() {
    echo -n "Starting tomcat: "

    chown -R $TOMCAT_USER:$TOMCAT_USER /usr/local/jakarta-tomcat/*    
    chown -R $TOMCAT_USER:$TOMCAT_USER /home/tomcat/*
    su -l $TOMCAT_USER -c '/usr/local/jakarta-tomcat/bin/startup.sh'
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && touch /var/lock/subsys/tomcat
    return $RETVAL
}

stop() {
    echo -n "Stopping tomcat: "
    su -l $TOMCAT_USER -c '/usr/local/jakarta-tomcat/bin/shutdown.sh'
    RETVAL=$?
    echo
    [ $RETVAL = 0 ] && rm -f /var/lock/subsys/tomcat /var/run/tomcat.pid
    # clear the work directory on shutdown (see version note above)
    rm -rf /usr/local/jakarta-tomcat/work/*
    return $RETVAL
}

# See how we were called.
case "$1" in
  start)
        start
        ;;
  stop)
        stop
        ;;
  restart)
        stop
	# Ugly hack
	# We should really make sure tomcat
	# is stopped before leaving stop
        sleep 2	
        start
        ;;
  *)
	echo "Usage: $0 {start|stop|restart}"
	exit 1
esac

exit $RETVAL

Add the startup script to your system as follows:

chkconfig --add tomcat

You will be able to start/stop/restart it using the following commands:

service tomcat start
service tomcat stop
service tomcat restart

If you want Tomcat to start automatically when your system boots, you need to add tomcat to your runlevel as follows:

chkconfig --level 5 tomcat on

Runlevel 5 is the X Window System, typical for a development computer. Runlevel 3 is typical for a dedicated web server.

The start order of Apache and Tomcat is very important. Tomcat must be started before you start Apache so Apache can attach itself to the Tomcat processes.
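When starting the two services by hand, that means:

service tomcat start
service httpd start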

Development Setup

During development, you will need access to your Tomcat application directory. Add the user account under which you will be doing development to the tomcat group in /etc/group. For example, this is what the tomcat entry might look like in /etc/group if you do development under the yourname account:

tomcat:x:502:yourname

Make sure the tomcat group has write permission to /home/tomcat so you can publish files (e.g. using ant) to your Tomcat application in /home/tomcat/your_application. Issue the following command as root:

chmod g+w /home/tomcat

Appendix

workers.properties

# workers.properties
#
# This file provides jk derived plugins with the needed information to
# connect to the different tomcat workers.  Note that the distributed
# version of this file requires modification before it is usable by a
# plugin.
#
# As a general note, the characters $( and ) are used internally to define
# macros. Do not use them in your own configuration!!!
#
# Whenever you see a set of lines such as:
# x=value
# y=$(x)\something
#
# the final value for y will be value\something
#
# Normally all you will need to do is un-comment and modify the first three
# properties, i.e. workers.tomcat_home, workers.java_home and ps.
# Most of the configuration is derived from these.
#
# When you are done updating workers.tomcat_home, workers.java_home and ps
# you should have four workers configured:
#
# - An ajp12 worker that connects to localhost:8007
# - An ajp13 worker that connects to localhost:8009
# - A jni inprocess worker.
# - A load balancer worker
#
# However by default the plugins will only use the ajp12 worker. To have
# the plugins use other workers you should modify the worker.list property.
#
# OPTIONS ( very important for jni mode )
#
# workers.tomcat_home should point to the location where you
# installed tomcat. This is where you have your conf, webapps and lib
# directories.
#
workers.tomcat_home=/usr/local/jakarta-tomcat
#
# workers.java_home should point to your Java installation. Normally
# you should have a bin and lib directories beneath it.
#
workers.java_home=$(JAVA_HOME)
#
# You should configure your environment slash... ps=\ on NT and / on UNIX
# and maybe something different elsewhere.
#
ps=/
#
#------ ADVANCED MODE ------------------------------------------------
#---------------------------------------------------------------------
#
#------ DEFAULT worker list ------------------------------------------
#---------------------------------------------------------------------
#
# The workers that your plugins should create and work with
#
worker.list=ajp12, ajp13
#
#------ DEFAULT ajp12 WORKER DEFINITION ------------------------------
#---------------------------------------------------------------------
#
#
# Defining a worker named ajp12 and of type ajp12
# Note that the name and the type do not have to match.
#
worker.ajp12.port=8007
worker.ajp12.host=localhost
worker.ajp12.type=ajp12
#
# Specifies the load balance factor when used with
# a load balancing worker.
# Note:
#  ----> lbfactor must be > 0
#  ----> Low lbfactor means less work done by the worker.
worker.ajp12.lbfactor=1
#
#------ DEFAULT ajp13 WORKER DEFINITION ------------------------------
#---------------------------------------------------------------------
#
# Defining a worker named ajp13 and of type ajp13
# Note that the name and the type do not have to match.
#
worker.ajp13.port=8009
worker.ajp13.host=localhost
worker.ajp13.type=ajp13
#
# Specifies the load balance factor when used with
# a load balancing worker.
# Note:
#  ----> lbfactor must be > 0
#  ----> Low lbfactor means less work done by the worker.
worker.ajp13.lbfactor=1
#
# Specify the size of the open connection cache.
#worker.ajp13.cachesize
#
#------ DEFAULT LOAD BALANCER WORKER DEFINITION ----------------------
#---------------------------------------------------------------------
#
# The loadbalancer (type lb) workers perform weighted round-robin
# load balancing with sticky sessions.
# Note:
#  ----> If a worker dies, the load balancer will check its state
#        once in a while. Until then all work is redirected to peer
#        workers.
worker.loadbalancer.type=lb
worker.loadbalancer.balanced_workers=ajp12, ajp13
#
#------ DEFAULT JNI WORKER DEFINITION---------------------------------
#---------------------------------------------------------------------
#
# Defining a worker named inprocess and of type jni
# Note that the name and the type do not have to match.
#
worker.inprocess.type=jni
#
#------ CLASSPATH DEFINITION -----------------------------------------
#---------------------------------------------------------------------
#
# Additional class path components.
#
worker.inprocess.class_path=$(workers.tomcat_home)$(ps)lib$(ps)tomcat.jar
#
# Setting the command line for tomcat.
# Note: The cmd_line string may not contain spaces.
#
worker.inprocess.cmd_line=start
#
# Not needed, but can be customized.
# worker.inprocess.cmd_line=-config
# worker.inprocess.cmd_line=$(workers.tomcat_home)$(ps)conf$(ps)server.xml
# worker.inprocess.cmd_line=-home
# worker.inprocess.cmd_line=$(workers.tomcat_home)
#
# The JVM that we are about to use
#
# This is for Java2
#
# Windows
# #worker.inprocess.jvm_lib=$(workers.java_home)$(ps)jre$(ps)bin$(ps)classic$(ps)jvm.dll
# IBM JDK1.3
# worker.inprocess.jvm_lib=$(workers.java_home)$(ps)jre$(ps)bin$(ps)classic$(ps)libjvm.so
# Unix - Sun VM or blackdown
#worker.inprocess.jvm_lib=$(workers.java_home)$(ps)jre$(ps)lib$(ps)i386$(ps)classic$(ps)libjvm.so
# RH + JDK1.4
worker.inprocess.jvm_lib=$(workers.java_home)$(ps)jre$(ps)lib$(ps)i386$(ps)server$(ps)libjvm.so
#
# And this is for jdk1.1.X
#
# worker.inprocess.jvm_lib=$(workers.java_home)$(ps)bin$(ps)javai.dll
#
# Setting the place for the stdout and stderr of tomcat
#
worker.inprocess.stdout=$(workers.tomcat_home)$(ps)logs$(ps)inprocess.stdout
worker.inprocess.stderr=$(workers.tomcat_home)$(ps)logs$(ps)inprocess.stderr
#
# Setting the tomcat.home Java property
#
# worker.inprocess.sysprops=tomcat.home=$(workers.tomcat_home)
#
# Java system properties
#
# worker.inprocess.sysprops=java.compiler=NONE
# worker.inprocess.sysprops=myprop=mypropvalue
#
# Additional path components.
#
# worker.inprocess.ld_path=d:$(ps)SQLLIB$(ps)bin

Related Linux Gazette Articles

Installing Tomcat on Linux by Allan Peda, August 2001




© 2003 Merit Online Systems, Inc.

 

[BIO] Mike is a Web Systems Engineer with Merit Online Systems in Atlanta, GA. His first computer experience came programming BASIC on an IBM PC in 1981. When he isn't wearing his propeller cap, he enjoys spending time with his wife, Debora, and spoiling his Golden Retriever, Belle.


Copyright © 2003, Mike Millson. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003

LINUX GAZETTE
...making Linux just a little more fun!
Linux Through an Oscilloscope
By Pramode C.E


Introduction

It had been some time since I had wired up a few circuits and watched them on my old 20MHz oscilloscope. I thought it might be interesting to observe how the complex, dynamic nature of a multitasking operating system influences the working of timing-sensitive code, by viewing the signals generated by such programs on the scope. This article describes a few experiments which I did, first with a `normal' 2.4.18 kernel and then with a kernel patched with the `real time extensions' provided by the RTAI project. The reader is assumed to have some background in simple kernel programming.

Experimental setup

I converted an old Cyrix CPU based system which was lying around unused into my `embedded Linux' experimentation platform. The motherboard was taken out of the cabinet; HDD, monitor, keyboard etc. were removed. Only the Ethernet card with a boot ROM remained, together with an ISA protoboard. This machine boots from a full-fledged Linux system situated just a few feet away. This way, I can conduct hardware experiments without worrying too much about damaging expensive hardware. I have the option of booting either a plain 2.4.18 kernel or an RTAI-patched one.

Simple waveform generation

Here is a little user space program which, when executed as the superuser, generates a waveform on the parallel port output pins - I can view this on the scope.


#include <asm/io.h>	/* outb() - this is why -O2 is needed */

#define ON 100000
#define OFF (ON*10)

void delay(unsigned int i)
{
	while (i--)
		;
}

int main(void)
{
	iopl(3);	/* raise the I/O privilege level - needs root */
	while (1) {
		outb(0xff, 0x378);	/* all eight data pins high (~5V) */
		delay(ON);
		outb(0x0, 0x378);	/* all eight data pins low (0V) */
		delay(OFF);
	}
	return 0;
}

The working of the program is simple. Parallel port pins 2 to 9 act as output pins; they can be accessed through an I/O port whose address is 0x378. Write 0xff to 0x378 and you turn on (i.e., put about 5V on) all of these pins; write 0x0 and you turn the voltage on these pins off. The program has to be compiled with the -O2 option and executed as the superuser (for outb to work, the iopl call, which raises the I/O privilege level, must succeed, and iopl only works for the superuser).
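For reference, a build-and-run session might look like the following (the source file name pulse.c is my assumption):

gcc -O2 -o pulse pulse.c    # -O2 so the outb inline from asm/io.h works
./pulse                     # run as root, or iopl(3) will fail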

On my system, I observe a waveform with an on time of about 2.5 to 2.7ms with my scope set at 1ms/division. The result will surely vary depending on the speed of your processor.

Why simple things are not so simple

Anybody who has done a basic course in microprocessors will know how to generate `delays' by writing loops. That's exactly what we have done here - absolute kid's stuff.

Just being curious, I log on to another console and run the `yes' command, which generates a continuous stream of the character `y' on the screen. I watch the scope and see that my nice looking signal has gone haywire. The ON and OFF periods have been so lengthened that what I see is mostly a continuous line which keeps jumping from 0V to 5V.

I do another experiment. I `flood ping' (the ping command with the -f option) the system from a faster machine - again, I notice that the signal on the scope gets wildly disturbed.

The reason behind this behaviour is not at all difficult to see. My program is now contesting with another one for CPU cycles. In between executing the delay loop, control can switch to the other program, thereby lengthening the delay perceived by the first program. Flood pinging results in lots of activity within the OS kernel, this too has a detrimental effect on the timing of my program.

The solution to the problem is simple - just don't disturb the program which generates the waveform. Let it have full control of the CPU. Then the question is why have a complex multitasking OS at all? Let's see.

I call the program which generates the signal a `realtime' program. Let's visualize the program as a `task' whose job is to `toggle' the parallel port pins at specified intervals. If the generated waveform is used to control a physical appliance like, say, a servo motor, variations in pulse length can have dramatic effects. (The rotation of a servo is controlled by the length of the `on' period of a pulse whose total on+off period is somewhere around 20ms; as the ON period varies from 1ms to 2ms, the servo rotates by about 180 degrees.) My Futaba S2003 servo swings wildly when it is controlled by a program like the one above and that program is perturbed by some other process. A realtime program has timing deadlines which it HAS to meet for correct operation. The classical solution for designing control applications has been to use dedicated microcontrollers and digital signal processors. But with PC hardware becoming so cheap, a very wide range of applications is cropping up where we require the ability to run programs with sensitive timing requirements correctly and, at the same time, also do things like communicate over the network, visualize data with graphical interfaces, and log data to secondary storage - jobs where timing deadlines are not an issue, so-called `non-realtime' jobs.

If it is possible to modify the Linux kernel in some way so that the timing constraints imposed on some tasks (which are created and executed in some special manner) are always met, even in the presence of other `non-realtime' tasks, then we have an ideal situation. We will see a bit later in this article that not one, but many such solutions are available.

Sleeping Vs Looping

Besides the fact that the timing of the program depends a lot on other activities going on in the system, we are burning up CPU cycles by executing a tight loop (also, on a complex microprocessor like the Pentium, it is difficult to compute delays by counting instructions). Why not let the program sleep?

By using functions like `nanosleep', we instruct the Operating System to put our process to sleep, to be woken up at a specified time. But, here again, there is a possibility that our process does not wake up and execute at the desired time because the Operating System was too busy executing some action in kernel mode (say, processing TCP/IP packets, or doing disk I/O) or another process got scheduled just before the kernel woke up our process.

Doing it in kernel space

What if we implement our signal generation code as a kernel space module?

#include <linux/module.h>
#include <linux/fs.h>
#include <linux/param.h>
#include <asm/uaccess.h>
#include <asm/io.h>

static char *name = "foo";
static int major;

#define ON 100000
#define OFF ON*10

void delay(unsigned int i)
{
	while(i--);
}

static int
foo_read(struct file* filp, char *buf, size_t count, loff_t *f_pos)
{
	while (1) {
		outb(0xff, 0x378);
		delay(ON);
		outb(0x0, 0x378);
		delay(OFF);
	}
	return 0;
}

static struct file_operations fops = {
	read: foo_read,
};

int init_module(void)
{
    major = register_chrdev(0, name, &fops);
    printk("Registered, got major = %d\n", major);
    return 0;
}

void cleanup_module(void)
{
    printk("Cleaning up...\n");
    unregister_chrdev(major, name);
}
Executing an infinite loop in the kernel has disastrous consequences as far as user processes are concerned. No user process will be able to execute until control comes out of kernel mode (this is the way the OS is designed). What we would like to have is a situation where realtime as well as non-realtime processes coexist.
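If you want to try it anyway, a typical 2.4-era session might look like this (the source file name foo.c, the kernel header location and the major number 254 are all assumptions - use the major number that init_module prints):

gcc -O2 -D__KERNEL__ -DMODULE -I/usr/src/linux/include -c foo.c
insmod foo.o               # printk reports the major number we were given
mknod /dev/foo c 254 0     # substitute the reported major number
cat /dev/foo               # invokes foo_read - this will lock the console!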

Although user space processes now can't disturb our program, it is still possible to generate interrupts on the network card by flood pinging. As interrupts are serviced even when kernel code is executing, the waveform displayed on the scope starts jumping around as usual.

It is possible to go to sleep within the kernel - this prevents the system from getting locked up - but then it does not solve our problem of peaceful coexistence of realtime as well as non realtime code.

Enter Real Time Linux

What if we slide in a `nano kernel' between Linux and our hardware? This kernel would be in control of both Linux as well as a set of `real time tasks'. Linux will be treated as a very low priority task which will be executed only when no other higher priority `real time' tasks are executing. The control of interrupts would be in the hands of this specialized kernel - requests by Linux to disable interrupts will be treated in such a way that interrupts don't really get disabled - only Linux won't be able to see those interrupts - the real time tasks will still be able to execute their interrupt handlers without too much delay.

This novel concept, introduced by Dr. Victor Yodaiken, led to the birth of RTLinux. Many other universities and research institutions have attempted their own implementations - one of the most promising (and completely non-proprietary) being RTAI, developed by researchers at the Dipartimento di Ingegneria Aerospaziale, Politecnico di Milano (DIAPM).

Getting and Installing RTAI

RTAI can be obtained from the project's home page. There are two major components: a patch which must be applied to the Linux kernel source, and the RTAI modules themselves.

Before patching and installing the new kernel, the instructions given in the README.INSTALL file should be read carefully, especially those regarding certain kernel configuration options: "Set version information on loadable modules" should be disabled, and since you are most probably using a uniprocessor system, don't forget to disable SMP support (maybe disable power management as well). Once you reboot with the new kernel, you can compile the main RTAI modules and examples. Before running any programs, you will need to load three modules: rtai.o, rtai_fifos.o and rtai_sched.o.
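Loading them is just a matter of insmod, run from the directory where the RTAI modules were built (the exact path depends on your RTAI version):

insmod rtai.o
insmod rtai_fifos.o
insmod rtai_sched.o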

Generating waveforms with RTAI tasks

Let's look at an RTAI program which creates a waveform on the parallel port output pins:


#include <linux/module.h>
#include <rtai.h>
#include <rtai_sched.h>

#define LPT1_BASE 0x378
#define STACK_SIZE 4096
#define TIMERTICKS 1000000 /*  1 milli second */

static RT_TASK my_task;

static void fun(int t)
{
	unsigned char c = 0x0;
	while(1) {
		outb(c, LPT1_BASE);
		c = ~c;
		rt_task_wait_period();
	}
}

int init_module(void)
{
	RTIME tick_period, now;

	rt_set_periodic_mode();
	rt_task_init(&my_task, fun, 0, STACK_SIZE, 0, 0, 0);
	tick_period = start_rt_timer(nano2count(TIMERTICKS));
	now = rt_get_time();
	rt_task_make_periodic(&my_task, now + tick_period, 2*tick_period);
	return 0;
}

void cleanup_module(void)
{
	stop_rt_timer();
	rt_busy_sleep(10000000);
	rt_task_delete(&my_task);
}

Let's look at the general idea before we examine specific details. First, we need a `task' to do anything useful. The `task' is simply a C function. The structure of most of our tasks would be something like this - perform some action, sleep for some time, perform some action again, repeat. One way to sleep is to call `rt_task_wait_period' - the question is how long do we sleep? We sleep for a certain fixed `period', which will be a multiple of a base `tick'. The system 8254 timer can be programmed to generate interrupts at a rate of say 1KHz (ie, 1000 times a second). The RTAI scheduler takes scheduling decisions at each tick - if we set the period of our task to be `2 ticks' and if the interval between each tick is 1ms, then the scheduler will wake up our task after 2ms.

We start with `init_module'. We first configure the timer as a `periodic timer' (another mode is available). The `rt_task_init' function accepts the address of an object of type RT_TASK, the address of our function and a stack size, besides some other values. Some kind of `initialization' is performed and information is stored in the object of type RT_TASK which can be later used for identifying this particular task.

Our TIMERTICKS value is 1000000 nanoseconds (1 millisecond). The nano2count function converts this time into internal `count' units. The timer is started with a tick period equal to 1ms (which is what the `start_rt_timer' function does).

What remains is to start our task and set its period (remember, the period is used by rt_task_wait_period to set the time at which the task is to be awakened). We set the period to 2 ticks and instruct the scheduler to start the task at the very next tick.

The body of our task is very simple - it simply writes a value to the parallel port output pins, complements the variable which stores that value and waits for the next period (which will be 2ms). After waking up, it performs the same sequence. Again and again and again... The end result is we observe a waveform on the scope whose on time is 2ms and off time also is 2ms.

I observed the waveform first on an unloaded system. I then resorted to flood pinging the system. The waveform on the scope remained steady. The promise that RTAI gives us is that it will always run Linux as a very low priority task - Linux will execute only when no real time tasks are to be serviced. A real time task waking up will result in control getting transferred to it immediately (of course, there are delays involved in preempting whatever is being done now, activating the real time scheduler and transferring control back to the task which just woke up - these delays also need not be constant). That is why we are able to observe a fairly steady signal even under load.

Here is a code segment which demonstrates the use of a function - `rt_sleep':


#define LPT1_BASE 0x378
#define STACK_SIZE 4096
#define TIMERTICKS 1000000 /*  1 milli second */

#define ON_TIME 3000000 /* 3 milli seconds */
#define OFF_TIME 1000000 /* 1 milli second */

static RT_TASK my_task;
RTIME on_time, off_time;

static void fun(int t)
{
	while(1) {
		outb(0xff, LPT1_BASE);
		rt_sleep(on_time);
		outb(0x0, LPT1_BASE);
		rt_sleep(off_time);
	}
}

int init_module(void)
{
	RTIME tick_period, now;

	rt_set_periodic_mode();
	rt_task_init(&my_task, fun, 0, STACK_SIZE, 0, 0, 0);
	tick_period = start_rt_timer(nano2count(TIMERTICKS));
	on_time = nano2count(ON_TIME);
	off_time = nano2count(OFF_TIME);
	now = rt_get_time();
	rt_task_make_periodic(&my_task, now + tick_period, 2*tick_period);
	return 0;
}
The basic tick period is 1ms. Our on and off times are integral multiples of this period (3ms and 1ms each). An invocation of `rt_sleep(on_time)' will put the task to sleep - it gets woken up after 3 tick periods. It does some action and again goes to sleep for one tick period.

Using FIFO's to communicate between real time and non real time tasks

It may be required to transmit data from a user space non-realtime program to an RTAI task (and back). This is very easily done with the use of FIFOs. For example, an RTAI task may be generating a PWM (pulse width modulated) signal and you may have to control the pulse width from user space. The RTAI installation creates several device files under /dev going by the names rtf0, rtf1 etc. The user program identifies each FIFO by its name, while the RTAI task does it with the numbers 0, 1, 2 etc.


#include <linux/module.h>
#include <linux/errno.h>
#include <rtai.h>
#include <rtai_sched.h>
#include <rtai_fifos.h>


#define STACK_SIZE 4096
#define COMMAND_FIFO 0
#define FIFO_SIZE 1024


int fifo_handler(unsigned int fifo)
{
	char buf[100];
	int r;
	
	r = rtf_get(COMMAND_FIFO, buf, sizeof(buf)-1);
	if (r <= 0) return r;
	rt_printk("handler called for fifo %d, get = %d\n", fifo, r);
	buf[r] = 0;
	rt_printk("data = %s\n", buf);
	return 0;
}

int init_module(void)
{
	/* Create fifo, set handler */
	rtf_create(COMMAND_FIFO, FIFO_SIZE);
	rtf_create_handler(COMMAND_FIFO, fifo_handler);
	
	return 0;
}

void cleanup_module(void)
{
	printk("cleaning up...\n");
}

In `init_module', we create a fifo and set `fifo_handler' as a function to be invoked when somebody writes to the fifo. The `rtf_get' function reads data from the fifo. After compiling and loading the module, if we do something like:

echo hello > /dev/rtf0

we will see the handler getting invoked and reading data from the fifo.

Further Reading

If you are interested in general real time programming issues, you should start with the excellent Real Time and Embedded Guide written by Herman Bruyninckx. RTAI programming is explained in detail in the RTAI manual and RTAI programming guide available for download from the project home page.

Conclusion

An Operating System which provides support for deterministic execution of tasks with stringent timing requirements is just one part of the realtime system design landscape. After playing with RTAI for a few days, I realized that this (realtime design) is something which can't be done as a hobby by a novice like me - you have to invest a lot of time, effort and patience in understanding your system thoroughly (hardware as well as software) and using the tools well. But then, that shouldn't stop you from experimenting and having a little bit of fun!

 

[BIO] I am an instructor working for IC Software in Kerala, India. I would have loved becoming an organic chemist, but I do the second best thing possible, which is play with Linux and teach programming!


Copyright © 2003, Pramode C.E. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003

LINUX GAZETTE
...making Linux just a little more fun!
Software Engineering
By Gustavo Rondina

Abstract

The so-called software crisis could have serious consequences for computing and IT in the future, especially where free software is concerned. This article is a quick introduction to some of the problems that contribute to this crisis, and perhaps an encouragement for users and programmers to adopt the measures necessary to avoid it.

1. Introduction

Nowadays hardware devices are becoming more powerful, expanding their capacity and features every day. But those devices are useless without software able to exploit those features, so it is fair to say that the discipline behind the software development process, software engineering, is one of the most important areas of computing.

To keep up with the rapid technological advance of the hardware industry, programmers and software engineers must keep in mind that it is not enough to create a product that merely works; it must be a product built with good software engineering practices, ensuring that neither the computer's resources nor the programmer's efforts are wasted.

2. Linux, free software and software engineering

You might be asking yourself: where does Linux fit into this scenario? The free software movement can answer that question. Linux has been a great and successful project that has helped to spread free software principles, concepts and philosophy.

Over the last decade the number of Linux users has grown significantly - users who have adopted the system at home, at work or even at school. Many of them came to Linux because of its source code availability: anyone in the world can get the source, read it, modify it, copy it and redistribute it. Most of these users already had some knowledge of computer programming and Unix-based operating systems.

There were also users who adopted Linux simply to satisfy their curiosity, to explore a new and different system and get an idea of how a Unix-like system works. Some of them didn't like Linux, or never got used to it, and soon dropped it; but many adopted Linux as a way of life and a philosophy.

These new Linux enthusiasts wanted to learn as much as possible about the system; with the source code open to anyone and written in C and assembly, they decided that learning how to program would be a good way to start understanding Linux. And that is what happened: a lot of new users started to program and hack code voluntarily, giving their time and effort to projects that contribute to the open source community. Today there are many successful projects showing the world that the free software development philosophy really works.

3. So, where is the problem ?

These new programmers learned the languages necessary for software development and put their knowledge into practice, developing software that fits their needs and produces the expected results. However, there is no guarantee that those products are reliable, or that the ideas behind them have been implemented in the best possible way.

This happens because of a lack of knowledge in the area of software engineering. Many projects are developed by people without much experience in software development and production, so these programmers are missing concepts and theory. Programming is not just launching a text editor, hacking some code and compiling it, even if you get the expected results.

Knowing the syntax of a programming language does not mean knowing how to program and develop good, quality software. Software development is a complex process, from the first algorithm design through to the debugging and testing phases. A programmer who does not grasp language paradigms and their differences (e.g. object-oriented versus procedural programming); who does not know the various data structures, such as stacks, queues, lists and binary trees, in depth; or who does not know how a hardware architecture performs arithmetic operations and how its logic circuits work, can produce software that, even though it works, has not been implemented as it should have been.

Software that is not implemented the right way can waste hardware resources such as CPU time and memory; it can waste the programmer's effort as well, when he solves trivial problems using complex and expensive methods because he doesn't know an easier way; and it can waste the features of the language by using it poorly or even incorrectly. All of these things increase the final cost of the project.

4. How to solve this problem ?

This kind of problem is known as "the software crisis". Everyone who starts to hack and produce new code without a proper knowledge of good software engineering contributes to the growth of this crisis.

In the future this could be very dangerous, especially for the free software community, since many projects are developed by volunteer programmers and, in some cases, nobody is in charge of monitoring and moderating the development phase of the project.

The solution to this problem should be clear: users and programmers must study software engineering and modeling, algorithm analysis, software testing, and the details of each paradigm and language in depth. Developers should read more scientific papers and technical books about software engineering. Many programmers just want their software to produce the expected results, but if a programmer wants to be a successful developer of high quality, reliable software, it is essential to know the theoretical foundations hidden under the practice. Theoretical knowledge is the basis of everything.

A lot of programs and projects start on paper; there is no shame in making a rough draft of an algorithm. Sometimes an idea reaches a level of abstraction that is easier to understand through a drawing. Testing software before it gets into the consumer's hands is also important, and there are many techniques and issues related to software testing. Knowing several programming languages and language paradigms gives the programmer more flexibility in choosing the best way to solve a problem, since each language has its own limitations. Even source code indentation matters, since it increases the readability of the sources. All of these things are part of good software engineering and improve the quality of your product.

5. Conclusion

This article does not intend to criticize or discourage hobbyist programmers, but to encourage them to go further and deeper into the issues related to the software development process. Only that way will the next generations have good and reliable software.

I hope you have enjoyed this article. Please forgive my poor English; it is not my native language, and perhaps it will improve a bit in a future article. Comments, questions and suggestions are always welcome. Feel free to email me at gustavorondina at uol dot com dot br

 

[BIO] I am Gustavo Rondina, and I am from Brazil. I am an undergraduate student in the 4th semester of a Computer Science course, but I have been in touch with computers and Linux for about 5 years.


Copyright © 2003, Gustavo Rondina. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003

LINUX GAZETTE
...making Linux just a little more fun!
An introduction to MultiTail
By Folkert van Heusden


Introduction

What is MultiTail?

MultiTail lets you view one or more files, like the original tail program. The difference is that it creates multiple windows on your console (with ncurses). Merging of two or even more logfiles is possible. It can also use colors while displaying the logfiles (through regular expressions), for faster recognition of what is important and what is not. It can also filter lines (again with regular expressions). It has interactive menus for editing the given regular expressions and for deleting and adding windows. One can also have windows with the output of shell scripts and other software. When viewing the output of external software, MultiTail can mimic the functionality of tools like 'watch'.

Why this text?

When you start multitail without any parameters, it clears the screen and shows a couple of keys you can press, together with a short explanation of what they do. You can then press any of those keys, or 'x', 'q' or 'CTRL'+'C' to exit the program. If you would like to know what command line parameters can be given, start multitail with the '-h' parameter.
The "help" given by the methods described above might not be sufficient; that's why this text was written. If anything is still unclear after reading it, do not hesitate to contact me at the following e-mail address: folkert@vanheusden.com

The Basics

The most trivial use of MultiTail is as follows:
multitail [-i] file
This shows the file 'file' in your terminal window. At the bottom, a line (the statusline) is displayed with the name of the file, its size and the date/time of when the file was last changed. You can make this statusline static (not updated) with the '-d' commandline parameter. With '-D' no statusline is displayed at all.
You only need to specify the '-i' when the filename starts with a dash ('-'). Something a little more complex is this:
multitail [-i] file1 [-i] file2
This splits your terminal window in two windows. In each window, one file is displayed. Both windows have a statusline giving info. The window with 'file1' is displayed above the window with 'file2'. Instead of above each other, you can also have them displayed side by side. For that, enter the parameter '-s' on the commandline or press the 'v' key while the program is running.

Scrolling

Of course you can scroll in the shown files. For that, press the 'b' key. When you're viewing multiple files, you'll first get a file selector. Then a window is displayed with the buffered contents of the selected file (= window). You can then scroll with the cursor keys and the page-up and page-down keys. Press 'x' or 'q' to exit this window. You cannot scroll through the whole file, only the last 100 lines. To raise (or lower) this limit, press the 'm' key. You will then be asked to enter a new value, e.g. 1000. This parameter can also be set from the commandline with the '-m value' parameter. With '-m' you set the limit for the next file; with '-M value' you set this parameter for all following files on the commandline. When you press the 'm' key, the current buffer is cleared. So it is also a replacement for pressing the enter key a few times when using 'tail -f' to view a file.

Merging Files

Then there's the '-I' commandline parameter. It is the same as '-i', only '-I' merges two or more files together. For example:
multitail [-i] file1 -I file2
a reallife example:
multitail /var/log/apache/access.log -I /var/log/apache/error.log
These two examples will merge the output of the given files into one window. This can be useful with, for example, the files given in the second example: you see what happened just before an error appeared in the Apache errorlog.

Viewing Output of External Programs

As I mentioned in the foreword, MultiTail can view not only files; the output of other programs (scripts/executables/etc.) can also be put inside one or more windows. To make this happen, you need to use the '-l' switch. For example:
multitail -l ls
another example:
multitail -l "ping localhost"
As you can see, you need to add double quotes around the command when it needs parameters; otherwise MultiTail would not be able to recognize which parameters are intended for the selected program and which for MultiTail itself.
You might have tried the example with the ls command. You then saw that MultiTail automatically closes the window when the external command has finished. There are a few options you can use to control this behaviour. For example the '-z' parameter: when given, the window is just closed, the screen redrawn, and MultiTail goes on without the popup window telling you that the program ended.
Another option is '-r interval': this will cause the command to be run every 'interval' seconds. Instead of '-r interval' there is also the '-R interval' option: when fed to MultiTail, it runs the next command with an interval of 'interval' seconds, displaying only the difference from the previous run of the command! So if you run MultiTail like this:
multitail -R 3 -l "netstat -p tcp"
you will see state-changes for every TCP-connection: new connections and connections getting closed.
As with '-I file', '-L command' also merges the output of the external command to the previous file or command. Yes: output of commands can be safely merged with logfiles. Multiple commands, multiple logfiles, most things you can think of are possible.

Colors

When you have been watching logfiles scrolling by, it can get a little tough after a while to still recognize what is important and what is not. Because of that, MultiTail has the ability to display logfiles in color. When you give the '-c' parameter, the next given file or command is shown in color. It decides what color to use by looking at the whole log-line. If you want it to look only at the name of the program that produced the log line (when monitoring syslog logfiles, for example), you can use the '-cs' switch. The last option is the '-cS colorscheme' switch. As parameter it needs the name of a colorscheme. The colorschemes are read from multitail.conf. In multitail.conf you set, by entering regular expressions, what color to use for what "patterns". By default, MultiTail looks for multitail.conf in the current directory and in the /etc directory. With the '-z' parameter you can explicitly define what file it should use.
An example:
colorscheme:postfix
cs_re:yellow:status=sent
cs_re:magenta:queue active
The first line names the current colorscheme. The 'cs_re' lines define combinations of a regular expression and a color. With the first 'cs_re' line you define that if MultiTail encounters the string 'status=sent' in a logline, it should print that line in yellow. The next line defines that the string 'queue active' must be printed in magenta. Another example, a little more complex:
colorscheme:syslog
cs_re:green:\[|\]
cs_re:blue:^... .. ..:..:..
These 'cs_re' lines set all occurrences of '[' or ']' to green, and print the date at the start of lines matching the format 'Mon DD HH:MM:SS' in blue. For more details on regular expressions: O'Reilly has a few books on this topic.
One last thing on colors: if you use '-C' (uppercase 'C') instead of '-c', all following files will use the parameters you specify at that time, unless you override them with a new '-cx' or '-Cx' parameter.

Filtering using regular expressions

For filtering, MultiTail uses regular expressions. To keep things simple, it uses them exactly the same way 'grep' does: '-e' says a regular expression follows, and '-v' says invert it.
Examples:
multitail -e "gnu-pop3d" /var/log/messages
multitail -v -e "ssh" -v -e "gnu-pop3d" -e "localhost" /var/log/messages
The first example shows only the lines from /var/log/messages that contain the string "gnu-pop3d". The second shows only the lines that do not contain "ssh", do not contain "gnu-pop3d", and DO contain "localhost".

Miscellaneous Options

There are a few other options that don't fit elsewhere:
-f : Makes MultiTail follow the file. If the original file gets renamed and a new file is created with the original filename, MultiTail starts watching the file with the original filename (the one you entered).
-u seconds : When using MultiTail over a slow link (a modem connection, or maybe even ham radio), you might want less frequent updates. With this parameter you set how often MultiTail redraws the screen; the default is immediately. Both '-f' and '-u' appear in the combined sketch after this list.
-H interval : If the connection to the host on which you're running MultiTail gets disconnected automatically when nothing happens for a while, you can use '-H'. MultiTail will then move the cursor around the screen, generating traffic and keeping your line up.
-V : In case you're wondering what version of MultiTail you're running, start it with '-V': it displays its version and exits. You can also press the 'i' key while it is running.
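Combining a couple of these, a minimal sketch (the filename is just an example):
multitail -f -u 10 /var/log/messages
This follows /var/log/messages across a rename (handy for rotated logs) while redrawing the screen only every 10 seconds, which is friendlier over a slow link.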

Is that all?

Not everything is covered in this article. For a complete list of options, and of the keys you can press while MultiTail runs, have a look at the man page, the output of the '-h' commandline parameter, and the built-in help you get by pressing the 'h' key while the program runs.
And let's not forget the source code!



The latest version of MultiTail can always be found here: http://www.vanheusden.com/multitail/

 

[BIO]


Copyright © 2003, Folkert van Heusden. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003

LINUX GAZETTE
...making Linux just a little more fun!
Mexico is conquered by FLOSS
By Felipe Barousse Boue

G A C E T A   D E   L I N U X ...Making Linux just a little more fun!

Veracruz, México is conquered by FLOSS
By: La Gaceta de Linux

 

During the week of 17-19 September 2003, an international crowd of free/libre and open source software experts gathered at the Mexican port of Veracruz for the third edition of the GULEV Linux Congress (GULEV is the Veracruz LUG).

Names such as Bruce Momjian of the PostgreSQL project, Miguel de Icaza of Novell/Ximian, Felipe Barousse of Piensa Technologies, Bdale Garbee of Debian, Gunnar Wolf, a leading Mexican software engineer and contributor to Debian, and many, many more top stars of the worldwide FLOSS community gave talks, tutorials and personal chat sessions during the three-day event in Veracruz.

The weather was warm, the attendees were excited, and you could see in everyone's face the joy of being able to share experiences and ideas and to learn more about the new libre technologies.

I could tell that most of the attendees were students and young programmers, in their twenties or early thirties, although in every session you could also spot a couple of business people and even some, let's say, seniors interested in Linux, when they stood up to ask a question or offer a comment or idea. I was glad to see this mix in the various audiences.

In this third year of the GULEV Congress, the venue was nothing less than the World Trade Center in Veracruz, México. What better place could Linux, free software and open source ask for to host an event like this? The place was indeed superb.

The first year the Congress was held on a university's premises, the second year in the ballrooms of a hotel in the city. The main organizers of the event, Miguel Angel López and Lucy Guzmán Mijangos, did a great job of making this year's show run smoothly, leaving everyone satisfied with the Congress programme and returning home with a heightened sense that the FLOSS movement in Mexico is really growing up, maturing, and getting ready to compete with any other technology out there. “We need to improve a lot more...” said Miguel Angel. I couldn't agree more; there is always room to improve an event like this, but I can also say that we have come a long way since the first one, two years ago.

Here are some anecdotes about the talks and topics, the kind of thing you could hear people commenting on in the hallways just after one session ended and while waiting for the next:

Bruce Momjian, who came with his son Mathew, gave a session on mastering PostgreSQL administration. The room was full and everyone was attentive to all the tips, hints and recipes Bruce had to offer. On the fun side of the Congress, I had a chance to chat briefly with Mathew, who said he really likes “... traveling with dad and liked Mexican food a lot... yummy!”

Bruce's second talk was a bit more on the visionary side of PostgreSQL and what to expect in the next versions.

Larry Wall was expected to be in Veracruz with us all; unfortunately he fell ill and couldn't make it. We hope that by the time you read this, he is well again.

Miguel de Icaza spoke about Mono and how it accelerates productivity for software development on Linux. He is clearly concerned with giving Linux programmers the most powerful tools and technologies, to keep and enhance their competitiveness while developing software.

Bdale Garbee, on the other hand, spoke about where Debian stands now; meanwhile the well-known technology research engineer Francisco de Urquijo Niembro gave a lecture on Mexico's current position in the ongoing digital revolution. “Open and free technologies are the future... we will have open specifications and standards in cars, home building, and everything else...” he said.

Felipe Barousse's talk was about Zope Corporation's Zope framework and how he has been using it to develop powerful business applications already in use at large companies. He concluded: “Zope is very inexpensive, great, powerful, easy to use and extremely scalable... in short, an ideal platform for web-based business applications.”

Federico Mena Quintero's session covered the concepts and ideas behind programming GTK+ applications that need drag-and-drop features.

In his talk, Fernando Romo discussed building an application's logic within the database itself rather than leaving all the logic at the application level. A nice collection of suggestions and experiences was shared in Fernando's session.

On yet another topic, Gunnar Wolf talked about object-oriented Perl.

Many more topics were addressed in Veracruz; 53 talks took place in all, with themes ranging from building clusters, to designing and installing WiFi networks, to programming in Perl and Python.

More than 500 confirmed attendees came to Veracruz just to take part in this great FLOSS show in Mexico. A great touch was the free WiFi Internet connectivity in all conference rooms throughout the event.

The day after the event ended, there was a small trip to the pre-Hispanic ruins of the “Tajín”, about a four-hour drive from Veracruz. Although I was supposed to get a phone call or a note telling me the meeting point and departure time, somehow I never got that note, so I missed the excursion. I really regret it, since I'm sure it was fun to be on the bus with all my fellow speakers... maybe next time; now I have a great excuse to go next year.

I really expect next year's congress to be even better and to see it grow in many more ways, as it has over these three editions. I can't wait for my 2004 trip to Veracruz.

In the meantime, we can say that the port of Veracruz, México was indeed conquered by the penguins and by FLOSS enthusiasts, just as it was taken by the conquistadores many centuries ago.

 

[BIO]


Copyright © 2003, Felipe Barousse Boue. Copying license http://www.linuxgazette.com/copying.html
Published in Issue 95 of Linux Gazette, October 2003