LINUX GAZETTE

June 2000, Issue 54       Published by Linux Journal

Front Page  |  Back Issues  |  FAQ  |  Mirrors  |  Search

Visit Our Sponsors:

Linux Journal
eLinux.com
LinuxCare
LinuxMall
Cygnus Solutions
VMware
InfoMagic

Table of Contents:

-------------------------------------------------------------

TWDT 1 (gzipped text file)
TWDT 2 (HTML file)
are files containing the entire issue: one in text format, one in HTML. They are provided strictly as a way to save the contents as one file for later printing in the format of your choice; there is no guarantee of working links in the HTML version.
Linux Gazette[tm], http://www.linuxgazette.com/
This page maintained by the Editor of Linux Gazette, gazette@ssc.com

Copyright © 1996-2000 Specialized Systems Consultants, Inc.

"Linux Gazette...making Linux just a little more fun!"


 The Mailbag!

Write the Gazette at gazette@ssc.com

Contents:


Help Wanted -- Article Ideas

Answers to these questions should be sent directly to the e-mail address of the inquirer with or without a copy to gazette@ssc.com. Answers that are copied to LG will be printed in the next issue in the Tips column.

Before asking a question, please check the Linux Gazette FAQ to see if it has been answered there.


Viddy these well, little bruthers, viddy well. It would seem that our friends 'ere, like, 'ave a problem with their Linux boxes. If thou wouldst be so kind as to, like, give them a little 'and, I'm sure they would love it, real horrorshow like. But first, me little droogies, an introduction maybe is necessary. My name is Michael Williams and I live in the UK (Wales). As of now, I will be helping to format the Mailbag's columns. What's with all the blurb, you ask? Do we all speak like that in Wales? No! I'm actually basing my character on Alex from "A Clockwork Orange".


Fri, 5 May 2000 12:49:43 +0200

From: Joseph Simushi <jsimushi@pulse.com.zm>
Subject: New User - Help

Hi,
I am a new user of Linux and am running the Red Hat version 6.1.1 operating system. I am asking for your help in finding materials (books, websites, CD write-ups) on Linux to help me in administering this system. Regards,

Simushi Joseph
LAN Administrator
PULSE Project
P.O. Box RW 51269
Lusaka
Zambia.



Fri, 5 May 2000 15:24:48 +1000

From: Eellis <abacus2@primus.com.au>
Subject: Prepress Rip software

Hi, I would like to find out whether there is any third-party or shareware RIP software to use on PostScript and PDF files, instead of using a Scitex or Adobe RIP.

Many Thanks From Down under.

Ezra Ellis



Thu, 04 May 2000 20:35:28 -0400
From: Raul Claros Urey <raul@upsaint.upsa.edu.bo>
Subject: Help

I'm using Red Hat Linux, kernel 2.5.5, and while booting it reports the following errors:

RPC: sendmesg return error 105
Unable to send; errno = no buffer space available
NFS mountd: neighbour table overflow
Unable to register (mountd, 1, udp)
portmap: server localhost not responding, time out

And I can't do anything; the message "neighbour table overflow" appears every time. Do you know something about it?

Atte.
Raul Claros Urey


Thu, 4 May 2000 04:34:58 EDT
From: <ERICSTMAUR@aol.com>
Subject: Password recovery for equinox database

Do you know where I can find software to recover my password on an Equinox database?


Mon, 01 May 2000 20:22:18 +0800
From: 61002144 <61002144@snetfr.cpg.com.au>
Subject: resolution

My computer running Red Hat Linux will only run X at 300x200 graphics. Even if I hit CTRL ALT +, it won't change. I have an SiS620 card with 8 MB. Can you please help? I have spent a lot of time on the Internet; it seems other people have the same problem but no one can help.

Rudy Martignago


Sat, 29 Apr 2000 01:09:12 +0100
From: Andy Blanchard <andyb@zocalo.demon.co.uk>
Subject: Help wanted - updating a Linux distro's ISO image

While downloading the newly posted kernel updates to Red Hat 6.2, a question arose in my mind whose answer might be of use to anyone who has to build numerous Linux boxes. If one were to replace arbitrary RPM (or DEB, or ...) files on the distro with updated versions and burn a new CD-ROM, would it still install cleanly? If I understand this correctly, the answer is "yes" if the installer runs:

rpm --install kernel-doc

but will be "no" (unless you can frig the install script) if it runs:

rpm --install kernel-doc-2.2.14-12.i386.rpm

Can anyone give me an answer? An inquiring mind wants to know...

Andy.
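
[A sketch of the rebuild half of the experiment, for anyone who wants to try it. After swapping updated packages into a working copy of the CD tree, a bootable ISO can be remade with mkisofs; the boot image path here assumes a Red Hat 6.2-style layout, so adjust for your distribution:

cd /mnt/redhat-copy
mkisofs -r -T -v -b images/boot.img -c boot.cat -o /tmp/updated.iso .

Whether the result installs cleanly is a separate matter, as Andy suspects: if the installer resolves exact file names, or reads a pre-built package index (on Red Hat this is RedHat/base/hdlist, which I believe the genhdlist utility regenerates), the index has to be rebuilt to match the new packages.]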


Sat, 6 May 2000 13:08:20 +0200
From: Drasko Saric <doktor@beotel.yu>
Subject: trouble with full partitions...

Hi, I have a problem and I hope you'll help me. I have SuSE Linux 6.1 and Windows 98 on one machine. My 800 MB Linux partition is full now, and I wish to add (if it's possible) an extra 800 MB to my existing Linux partition from the Windows 98 partition, but I don't know how. Can you help me?

Thanx in advance. Drasko,
Belgrade, Yugoslavia


Sun, 07 May 2000 00:35:12 -0500
From: edge or <edge-op@mailcity.com>
Subject: Help in setting up Red Hat as a dial-up server -- LG#53

I have searched and searched for two months now and cannot get any info on how to set up a server for customers to dial into and access the Internet, with mail accounts and such. I have been to every newsgroup and discussion I can find. No one will give any information on how to set this up. The ONLY help or answer I get is: "Why do you want to be an ISP? They are too expensive to set up." Could you publish a "How-To" for the beginner setting up an ISP for the first time?

Thanks in advance.

A reader writes:

First, I hope you've received better answers in the meantime. Second, I hope the following links help (apologies for the mailcity redirection stuff):

http://alpha.greenie.net/mgetty/

Notice about midway down the page there are links specifically related to
your question. This will get your callers connected to your box.

http://www.tuxedo.org/~esr/fetchmail/index.html

This is nice for grabbing the mail and handing it off to sendmail for
local delivery.

As for sendmail configuration, I'm clueless.

Alex comments:

You could also check out this HOWTO: http://www.linuxdocs.org/ISP-Setup-RedHat.html. It's meant for Red Hat systems, but I'm sure it could be adapted to another distribution with little difficulty.


Mon, 8 May 2000 11:22:19 +0100
From: Steven Cowling <steven.cowling@sonera.com>
Subject: bread in fat_access not found (error from Redhat 6)

The following error was scrolled down the screen:

bread in fat_access not found 

We are running Red Hat version 6 on a PC and it has been running fine for about six months. The latest work done has been to start using CVS, which was installed with the initial installation of Red Hat. CVS has been working fine for about a week. Since the bread error appeared we are unable to log in, either at the console or remotely using telnet from Windows. Every time we try to log in, the "login incorrect" error appears. We have tried all user names and root. The strange thing is that we can still use CVS from our Windows machines, using WinCVS 1.0.6, to log in and check files in and out. Basically we can't log in normally at all. Has anybody seen this before? Or know what 'bread' is? Any help or suggestions would be greatly appreciated.

Steve Cowling


Mon, 8 May 2000 13:50:15 -0500
From: <Stephen.W.Thomas@Nttc-Pen.Navy.Mil>
Subject: High Availability Hosting

In your latest issue of Linux Gazette you have an article titled "Penguin power credited for 100.000% network availability". This article mentions different classes of Web hosting based on uptime. Where on the net can I find a definitive source for these different classes?

Thanks,
Steve.


Wed, 10 May 2000 09:49:25 -0400
From: Ruven Gottlieb <igadget@earthlink.net>
Subject: Redirecting kdm output from console to file

Hi,

I've been trying to figure out how to redirect console output from tty1 to a file when starting kdm.

I use an alias:

alias startx="startx >& /root/startx.txt"

to send startx output to /root/startx.txt, but I can't figure out what to do to get the same thing to happen with kdm.

I'm surprised this isn't the default anyway. You can't read the console output when starting kdm, and if you have mc or something running on tty1, it gets trashed when kdm starts up.

Thanks for your help.

Ruven Gottlieb
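
[One hedged suggestion: on Red Hat-style systems the display manager is respawned by init from /etc/inittab, and init hands the command to a shell when it contains redirection characters, so a line like the following (the prefdm path is an assumption, check your own inittab) should capture kdm's console output:

x:5:respawn:/etc/X11/prefdm -nodaemon >/var/log/kdm.log 2>&1

After editing, "telinit q" makes init re-read the file.]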


Tue, 09 May 2000 23:22:52 +0530
From: "pundu" < pundu@mantraonline.com>
Subject: calculate cpu load

Hi,

I would like to know how one can calculate the CPU load and memory used by processes, as shown by the 'top' command. It would be nice if anyone could explain to me how to do this by writing your own programs, or by any other means.
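
[top reads everything from the /proc filesystem: /proc/stat for CPU time, /proc/meminfo for memory, and /proc/<pid>/stat and statm for per-process figures. As a minimal sketch of the CPU side, this compares two samples of the cumulative counters (assuming the classic user/nice/system/idle field layout of 2.2 kernels):

#!/bin/bash
# Sample the cpu line of /proc/stat twice, one second apart.
read cpu u1 n1 s1 i1 rest < /proc/stat
sleep 1
read cpu u2 n2 s2 i2 rest < /proc/stat
busy=$(( (u2 + n2 + s2) - (u1 + n1 + s1) ))
total=$(( busy + (i2 - i1) ))
echo "CPU busy over the last second: $(( 100 * busy / total ))%"

The proc(5) manual page documents the remaining fields and files.]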


Mon, 8 May 2000 10:32:25 +0000 (GMT)
From: Jimmy O'Regan <jimregan@litsu.ie>
Subject: Is There a Version of PC/NFS for Linux?


I have the O'Reilly book Managing NFS and NIS and there is a section in the back of the book called PC/NFS describing a Unix utility that enables a PC DOS machine to access a Unix machine using the NFS file system as an extended DOS file system. I am wondering if there is a Linux version of this available?

J.

[As far as I'm aware, that program is an NFS client for the PC - it runs
on DOS, and lets you use NFS from a remote UNIX box. If I'm right, the
standard version should work with Linux.

You'd be better off setting up Samba though. It does what you're looking
for - makes Linux look like an MS server. This would be better for Ghost,
as Ghost works on MS shares. -Alex.]


Mon, 1 May 2000 08:57:32 -0700
From: lisa simpson <rflores@pssi-intl.com>
Subject: Mandrake and tab

When you hit the TAB key in a shell under Mandrake, it instantly gives a list if there are several similar entries, unlike in Red Hat where you have to hit TAB twice. How do I disable that behavior?

[Look in the shell manual page under the options section. There are
options you can set that control this behavior. I don't remember
the names offhand, and it's different for each shell. -Ed.]
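
[For bash in particular this is a readline setting rather than a shell option; a line like the following in ~/.inputrc (or the system-wide /etc/inputrc, which is probably where Mandrake turns it on) restores the press-tab-twice behaviour:

set show-all-if-ambiguous off

The READLINE section of the bash manual page lists the other completion-related variables.]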


Thu, 11 May 2000 08:47:38 -0700
From: <agomez2@axtel.com.mx>
Subject: Installation of Linux using an HP 486/25NI

Hello,
I hope that you can help me. I'm new to Linux and I'm trying to install it on an HP 486 running at 25 MHz.
The BIOS does not have the capability to recognize a second IDE drive. (I have upgraded the BIOS to the latest version available from HP's support website.)
The motherboard has an integrated NIC (I also have a 3Com 3C509).
I cannot find a way to start the installation, even though I have Linux Mandrake as well as TurboLinux on CD-ROM.
I have tried doing it using the CD-ROM drive of my second PC running Windows 98, with the Cisco TFTP server and a local LAN between both PCs using a coax 10Base2 cable.
Where can I find a detailed explanation with some suggestions for my problem? The manuals included with my Linux flavours are not detailed enough; they assume that I have a CD-ROM drive to install Linux from.
How about an installation from an FTP site? Where can I find some DETAILED information about that?

Thanks in advance for your help!

Sincerely,

Alex


Thu, 11 May 2000 08:36:27 -0700
From: NANCY Philippe <Philippe.NANCY@UCB.FR>
Subject: Energy star support

Last year I bought one of these cheap(er) East Asian PC computers (like many of us?) with the Energy Star feature (i.e. no more need to press any button to power off).

But this feature is implemented with M$ Win... and I've no idea of the
way they manage the hardware behind this process.

So, as I recently installed a Corel distribution, I would like to know if there is any means to power off directly from Linux, and not Shutdown-And-Restart, Open-M$Windows and Quit-From-There (and Decrease-My-Coffee-Stock ;-} )

Thank you for your help.
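
[A sketch of what to check, assuming your kernel was built with APM support, as most stock 2.2 distribution kernels were:

# If this file exists, the kernel found your APM BIOS:
cat /proc/apm
# A normal shutdown should then drop power at the end:
shutdown -h now
# or tell halt explicitly to power off:
halt -p

If /proc/apm is missing, you may need a kernel rebuilt with CONFIG_APM enabled.]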


Fri, 12 May 2000 07:02:37 -0700 (PDT)
From: Surfer PR <SurferPR1@excite.com>
Subject: Help with Voodoo3

I have followed every instruction I could find on how to install the Voodoo3 and Mesa, and all the 3D tests run just fine... but when I try to run any game that uses Glide or Mesa (Quake II), I try all the renderers but it does not work and continues to use the very lame software rendering... I have all my resolutions set... I am mainly having problems with glide2x.so or something like that... everything else in Linux (Mandrake 7.0) is fine...

Please help me.
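
[Two guesses worth checking, since the details vary by setup: whether the dynamic linker can find the Glide library at all, and whether Mesa has been told to use the 3dfx driver instead of falling back to software rendering:

# Is libglide2x.so registered with the dynamic linker?
ldconfig -p | grep glide
# If it lives somewhere unusual, point the linker at it (example path):
export LD_LIBRARY_PATH=/usr/local/glide/lib:$LD_LIBRARY_PATH
# Ask Mesa for the fullscreen 3dfx driver before starting the game:
export MESA_GLX_FX=fullscreen

Remember to run ldconfig as root after installing the Glide libraries so the cache gets rebuilt.]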


Mon, 15 May 2000 08:53:49 -0700
From: "VanTuyl, George" <George.VanTuyl@voicestream.com>
Subject: Backup to a CD- Re-Writeable drive

I have been asked to put together a backup strategy for the company's Red Hat 6.1 Linux gateway server. The backup medium chosen (not by this individual) is an HP 9200i parallel-port CD-rewritable drive/burner.

I would like to hear some reflections and recommendations on this strategy, please.

Thanks gvt.
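
[One reflection, offered as a sketch rather than a recommendation: the usual pipeline is to pack the data, wrap it in an ISO image, and burn it. The SCSI device address below is a placeholder; substitute whatever "cdrecord -scanbus" reports for your drive:

tar czf /tmp/backup.tar.gz /etc /home /var/spool/mail
mkisofs -r -o /tmp/backup.iso /tmp/backup.tar.gz
cdrecord -v speed=2 dev=0,0,0 /tmp/backup.iso

Two caveats: a parallel-port burner needs the kernel's paride support loaded before cdrecord can see it, and at roughly 650 MB per disc a gateway server may outgrow the medium quickly, so weigh it against a cheap tape drive.]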


Mon, 15 May 2000 12:33:44 +0100
From: <marco.brouwer@nl.abb.com>
Subject: image

Hi,

Do you know where I can get the "Don't fear the Penguins" logo, named linux-dont-f.jpg? Or can you send it to me...

cu


Wed, 17 May 2000 13:15:28 -0400
From: "Jeff Houston" <jhouston42@worldspy.net>
Subject: Video help

Howdy, I have two problems. First and foremost, I believe, is that when I boot up Linux none of my window managers will work. On my old computer they did, but not on my new one. I think it is because my graphics card is not compatible, but I'm not sure about that. I know that it is not listed in setup, but neither was the graphics card in my other computer. Anyway, I went to the website of my graphics card's maker and they had a file to supposedly add support for my card to Linux, but how do I go about installing it? It is gzipped, and to be honest I have no clue where I am or what I am doing once I get logged in to Linux without any of the window managers. I have only had Red Hat about 3 days now :) Anyway, I have the file I supposedly need on a floppy, but don't have any idea what to do with it now. Also, after I installed Red Hat, for some reason Win98 became EXTREMELY slow and is giving me problems, with a lot of programs not responding. Any idea why this is?
Thanks for any and all help you can give me.

signed, NEWBIE

[It would seem that you are -extremely- confused here, Jeff. It would appear you have no idea how to use the bash prompt. Obviously, you need to read up on the subject - http://www.linuxdoc.org has a variety of tutorials and HOWTOs for Linux. Have you tried running 'Xconfigurator' (remember folks, it's case sensitive)? See if your graphics card is listed there. To unzip a file that's gzipped, use the 'gunzip' command. That's about as much as I can tell you, since you do not provide enough information. As for your Win98 slowdown problem, I really see no link between installing Linux and that type of problem. Maybe I'm wrong, or maybe it's just you being a bit paranoid :) -Alex.]


Wed, 17 May 2000 10:12:38 -0700
From: "Jeffrey X" <krixsoft@hotmail.com>
Subject: "run of input data" error

I recently compiled the Red Hat kernel 2.2.12-20. Everything
went well and I can start the new kernel from LILO. lilo.conf
looks like:

image=/boot/vmlinuz
    label=linux
    root=/dev/hda5

image=/usr/src/linux/arch/i386/boot/bzImage
    label=new
    root=/dev/hda5
The problem I ran into is that I copied "bzImage" to "/boot/vmlinuz", ran lilo, and rebooted the system. When I tried to start the new kernel with the label "linux", the system halted with the following messages:

"Loading linux....."
"Uncompressing Linux......"
"ran out of input data"
"-- System halted"

Why? Where is the problem? I have 128 MB of physical RAM and a 256 MB swap partition.
Please help out.

Thanks!

Linux Newbie
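
[The most common cause of "ran out of input data" is a stale LILO block map: LILO records the raw disk blocks of each kernel in its map file, so if a file named in lilo.conf changes after /sbin/lilo was last run, the loader reads old blocks and the decompressor runs off the end of them. Double-check the order of operations, and that lilo finishes without complaint:

cp /usr/src/linux/arch/i386/boot/bzImage /boot/vmlinuz
/sbin/lilo    # must run after every change to a file named in lilo.conf

If lilo really was run after the copy, suspect a truncated or corrupted copy of the image itself.]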


17 May 2000 13:12:55 -0000
From: "narender malhan" <malhan@rediffmail.com>
Subject: linuxsoftwareraid HELP

Dear Sir,

I want to configure my Linux box for mirroring (RAID 1) with SCSI cards. I
would like help or HOWTO documents regarding this.

Hope you'll reply soon,

waiting for an early reply,

yours,
singh.
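
[The Software-RAID HOWTO at http://www.linuxdoc.org covers this in detail. With the 2.2-era raidtools, a minimal /etc/raidtab for a two-disk mirror looks roughly like the following; the partition names are examples, so substitute your own SCSI devices:

raiddev /dev/md0
    raid-level            1
    nr-raid-disks         2
    nr-spare-disks        0
    persistent-superblock 1
    chunk-size            4
    device                /dev/sda1
    raid-disk             0
    device                /dev/sdb1
    raid-disk             1

Then "mkraid /dev/md0" initialises the array, after which /dev/md0 can be formatted and mounted like any other block device.]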


Mon, 22 May 2000 11:56:08 +0200
From: REVET Bernard <bmrevet@igr.fr>
Subject: VIRUSES on the Net !!!

Many articles have been written in the press concerning the "I Love You" virus and similar ones.
It would be appreciated to have a general article in the Linux Gazette about the problem of viruses, as many computers have both Microsoft Windows and Linux installed. What protections does Linux have against virus intrusions? What differentiates Microsoft OSes from Linux concerning this problem? Is it safe or reasonable to continue to use Microsoft Windows when it costs the community so much to get rid of these viruses? To these financial worries one can add updating systems: versions 95, 98, Millennium, plus WWW browsers, plus bugs, plus plus.

Bernard

[I'm sure it's been said before, many, many times. But, just for the point of clarity, I'll say it again. Viruses are virtually a non-issue in Linux, especially those like the Love Bug. I myself have never experienced that particular virus, but I've read about Linux users who have, and, after using a bit of common sense, I've come to the conclusion that it could not affect a Linux box. Why? The Love Bug is a Visual Basic script designed to run on Windows computers. Under Linux, you could just download the script and read it, without it doing any damage to your system. Most viruses will have little effect on Linux; most are Windows-centric and designed to run only under that OS. There are virus scanners available for Linux, and true, there are Linux-specific viruses. However, I wouldn't waste the time on the download if I were you - the odds of you getting one are -extremely low-. Thank you, and goodnight. -Alex.]


Sat, 20 May 2000 12:02:59 -0500 (COT)
From: Servicios Telematicos <servicios@r220h253.telecom.com.co>
Subject: Missing root password

Hello
I use Red Hat Linux 6.1, but my friend Fabian changed the LILO configuration and the root password. Please help me.

I need to change the LILO configuration and root password.

Thanks,

victoriano sierra
Barranquilla
Colombia
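
[If you have physical access to the machine, the standard recovery is single-user mode, assuming Fabian did not password-protect the LILO prompt itself:

boot: linux single
# passwd root         (set a new root password)
# vi /etc/lilo.conf   (undo the unwanted changes)
# /sbin/lilo          (rebuild the boot map)
# reboot

If the LILO prompt is locked too, boot from the Red Hat rescue floppy or CD instead, mount the root partition, and make the same changes from there.]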


 Thu, 11 May 2000 13:19:41 -0500
From: Juan Pablo <j_pablo18@yahoo.com>
Subject: Linux

Hello, I want to know if there are books, texts, etc. about Linux in Spanish that explain HOW TO USE it. Thanks!!!

[See below about an upcoming Spanish translation of the Gazette. Also, the Linux Journal site has a section listing the Linux Users Groups in many countries. Perhaps you can find one near you. Where are you located? http://www.linuxjournal.com/glue . -Ed.]


 Thu, 11 May 2000 13:19:41 -0500
From: Warren <warren@guano.org>
Subject: Spiro Linux

Were you ever contacted by someone at Spiro Linux? I am searching for information on the distribution, but the published website, http://www.spiro-linux.com, is not answering.

The Editor wrote:

No, I haven't. The domain doesn't exist now. You can try a search engine. I'm printing this in case one of our readers knows.

Warren responded:

I called the number for SPIRO-Linux, +1 (402) 375-4337, and an automated attendant identified the company as "Inventive Communications".

Web searches turn up a lot of reviews, but no news on what happened to the company.


 Thu, 11 May 2000 13:19:41 -0500
From: <jshock@linuxfreemail.com>
Subject: Windoze 98 under WINE

I know Wine is meant for running Windows applications, but is it also possible to just run Windows 98 from within Linux using Wine? I tried to run win.com with Wine, but I got a dosmod error of some sort. If it is possible to run Windoze 98 under Linux with Wine, then please tell me how; thanx in advance.


 Thu, 11 May 2000 13:19:41 -0500
From: Eric Ford <eford@eford.student.princeton.edu>
Subject: read-only -> read/write

Back when I ran NetBSD, there was a way I could mount (or link?) a directory from a read-only medium (CD-ROM, NFS that I only have read permission for, etc.) to a directory on my hard disk as read-write. If I added a file to the directory, it would be stored on my hd. If I modified a file, it would save my version on my hd and transparently use that version rather than the version on the ro medium. If I deleted a file, it stored something locally so it knew to make it appear as if that file wasn't there.

Can I do this in Linux? If so, how?


 Thu, 11 May 2000 13:19:41 -0500
From: Amrit Rajbansh <amrit_101@rediffmail.com>
Subject: remote login methods

My workstation presently has a damaged hard disk. Is there any provision for booting directly from the server using a Linux bootable floppy, instead of installing a new hard disk in the workstation?

waiting eagerly for your reply
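
[It can be done if the kernel on the floppy is built for it: enable CONFIG_ROOT_NFS, export a directory on the server as the workstation's root filesystem, and point the kernel at it. The command line looks roughly like this, with the address and path as placeholders:

root=/dev/nfs nfsroot=192.168.1.1:/export/ws1 ip=bootp

The details, including how to populate the exported root directory, are in the kernel's Documentation/nfsroot.txt and the Diskless-HOWTO.]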


General Mail


 Sat, 29 Apr 2000 13:02:46 +0200
From: Jan-Hendrik Terstegge <helge@jhterstegge.de>
Subject: Linux Gazette - German translation

Hi folks.

I love the Linux Gazette, but lately I think there are more and more Linux users in Germany who don't speak English (yes, it's possible to use Linux without speaking English - SuSE does a very good translation) or have real problems speaking it. I think most of them want to learn a lot about Linux, but there are not many German-language pages. So I think it would be nice if there were some people who speak English and German very well to help me translate the Linux Gazette.

[As you know, we very much like to see versions of the Gazette in other languages. If you can translate a few articles per issue and put them up on a web site, that will be a start. Perhaps seeing the articles there will encourage some other people to offer to help. Remember to add your site to our mirrors list using the form at the bottom of http://www.linuxgazette.com/mirrors.html. -Ed.]


 Wed, 3 May 2000 08:33:41 -0700
From: Karin Bakker
Subject: Re: Linux Gazette in a German version

How can I get the Gazette in a German version?

[A German-speaking reader or group will have to translate it and host it on their web site. This is how all our foreign-language mirrors work.

Just this week I got a letter from somebody who may be willing to translate part of it, but he's looking for others to do some of the work. Let's see if I can find his e-mail address... Here it is: Jan-Hendrik Terstegge <helge@jhterstegge.de>

Would you like to speak with him and see if you guys can figure out how to get a translation off the ground? -Ed.]


 Sun, 30 Apr 2000 10:14:36 -0400
From: usrloco <usrloco@userlocal.com>
Subject: userlocal.com

I just wanted to thank you for listing my site (userlocal.com) in the May issue of Linux Gazette.


 Mon, 1 May 2000 21:58:34 -0500
From: Brad Schrunk <schrunk@mediaone.net>
Subject: SuSE Linux and Microsoft medialess OS

Dear Linux Supporters:

I have started playing around with SuSE Linux and am impressed with the product. I have been a dyed-in-the-wool Microsoft user for the last eight years. I have seen them step on a lot of folks, and that is part of business. I have also put up with their mindless CD keys that make a network administrator's life miserable. "Not copy protected" is what it said on all of their software. That was until they controlled the market; now everything is copy protected.

But the latest rumored plan from Microsoft has put me over the edge. I read an article in the May 1, 2000 issue of InfoWorld reporting that Microsoft now wants to jam a "medialess OS" down our throats. The article, entitled "Users find Microsoft's medialess anti-piracy play hard to swallow", explains their latest attempt to stop software piracy. This is it for me.

I have been an ardent supporter up till this. I want to convert to something else. The problems are my Word, Access and other documents that depend on MS apps. Is there a way to continue to use these apps without a Microsoft OS? Or is there a way to emulate Windows apps, or are there other apps that transparently use their files? Any help would be greatly appreciated.


 Wed, 3 May 2000 21:02:05 +0200
From: Alan Ward <award@mypic.ad>
Subject: RE: Here comes another article

Just a line to mention that I liked the new look a lot. You also did well to put the programs in separate files.


 Sat, 06 May 2000 01:53:15 -0400
From: Charlie Robinson <crrobin@iglou.com>
Subject: New Logo

Sir,

I am very excited about Linux and the work that you and your staff perform. Because I am very much a "newbie", I turn to your web site religiously every month. Thanks for all of the hand holding and the impressive looking new logo - I like it.


 Thu, 18 May 2000 11:28:04 +0100
From: Paul Sims <psims@lombard.co.uk>
Subject: new logo

Nice new logo - well done!


 Mon, 08 May 2000 16:42:12 +1200
From: Linux Gazette <gazette@ssc.com>
Subject: Rsync

Ewen McNeill <ewen@catalyst.net.nz> and others wrote in about difficulties mirroring LG after we installed wu-ftpd. In response, we have installed anonymous rsync also. Many people find rsync more convenient to use than mirror, and it also has the advantage that it transfers only the changed portions of files, saving bandwidth.

Hints for using rsync with Linux Gazette are in the LG FAQ, question 14. -Ed.
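
[For the curious, the general shape of an anonymous rsync mirror run is below; the host and module names are placeholders, and the real ones are in the FAQ entry mentioned above:

# List the modules a server exports:
rsync rsync.example.com::
# Mirror one module into a local directory, deleting stale files:
rsync -av --delete rsync.example.com::lg /var/www/lg/

The -a flag preserves permissions and timestamps; --delete keeps the mirror exact.]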


 Tue, 09 May 2000 01:46:50 -0500
From: Felipe E. Barousse <fbarousse@piensa.com>
Subject: Spanish translation of Linux Gazette

Sirs:

I noticed on your mirrors list that there are "none known" translations to Spanish of Linux Gazette.

We are a Linux consulting firm based in Mexico City and with operations all across Latin America and the Caribbean.

We would like to take the task of translating LG into Spanish. We are able to coordinate a team of technical translators, Linux / Unix specialized and, eventually, when translated, host those pages in our web site.

I would like to know your opinion about this idea and, if approved, make all required arrangements for this to happen. We are also open to discuss any other outstanding issues to accomplish this project.

Hoping to hear from you soon.

[The translation is expected to go live on June 1 at http://www.piensa.com. The site has been added to the mirrors list. -Ed.]


 Wed, 10 May 2000 13:50:05 -0400
From: Aurelio Martínez Dalis <aureliomd@cantv.net>
Subject: Subscribing Information

My name is Aurelio Martinez (aureliomd@cantv.net). I am a Linux beginner, and I have access to the Internet only by e-mail. Is it possible to receive the Linux Gazette in HTML format by e-mail? Thanks.

[Quoting from the LG FAQ:

The Gazette is too big to send via e-mail. Issue #44 is 754 KB; the largest issue (#34) was 2.7 MB. Even the text-only version of #44 is 146 K compressed, 413 K uncompressed. If anybody wishes to distribute the text version via e-mail, be my guest. There is an announcement mailing list where I announce each issue; e-mail lg-announce-request@ssc.com with "subscribe" in the message body to subscribe. Or read the announcement on comp.os.linux.announce.

You'll have to either read the web version or download the FTP files.

I asked our sysadmin whether we could set up a mailing list for the Gazette issues themselves, and he was unwilling, again because of the size issue. Many mail transport systems are configured to reject messages larger than 1 or 1.5 MB. "And I don't want my sysadmin mailbox stuffed chock full of bounced 4M emails."

Note to mirrors:

We receive at least one request a month to send the Gazette via e-mail. So there is definitely reader demand for it. If you wish to offer Gazette via e-mail, you would need to send out the current issue's FTP file along with lg-base-new (the changed shared files) every month. Users would somehow need access to lg-base (all the shared files) when they subscribe and whenever they reinstall the Gazette. I don't know how you would handle changes to the FTP files later (i.e., corrections to back issues). -Ed.]


 Wed, 17 May 2000 10:23:37 +0300
From: Shelter-Afrique <info@shelterafrique.co.ke>
Subject: Compliments

Thanks 4 maintaining this great magazine - it's been really helpful!

D. S. Daju

[Thanks for the vote of encouragement. -Ed.]


 Mon, 22 May 2000 09:11:22 EDT
From: <LFessen106@aol.com>
Subject: Kudos

Hello! My name is Linc Fessenden and I first want to congratulate you on an outstanding magazine! I also happen to run a Linux User Group (Lehigh Valley Linux User Group) in eastern Pennsylvania. We were wondering if you might be willing to donate any promotional item(s) that we could give away at a meeting to help increase Linux enthusiasm and awareness, and also to promote the Gazette? Please let me know, and keep up the great work!

[Thanks for the feedback. We do not currently have any Gazette-specific merchandise. I have forwarded your request to our GLUE coordinator (GLUE = Groups of Linux Users Everywhere, http://www.linuxjournal.com/glue) who can give you further info. -Ed.]


 Sat, 20 May 2000 12:55:34 +0200
From: Maciej Jablonski <maciekj@pik-net.pl>
Subject: a comment about page

On the Polish version of the on-line Linux Gazette there are empty pages, for example: Accessing Linux from DOS!?

[Could you please send me some URLs that have the wrong behavior so I can see what the problem is? The master copy (all the way back in issue #1) is coming up fine.

Each mirror is responsible for its own site. We do not update the mirrors centrally. -Ed.]


 Sun, 30 Apr 2000 06:14:54 +0200
From: Meino Cramer <user@domain.nospam.com>
Subject: Moonlight-Atelier 3D ... sigh

Dear Editor!

In one of the articles in issue 53 of the Linux Gazette, Moonlight Atelier 3D is mentioned as a 3D modeller and raytracer.

Unfortunately this program has been taken off the Web, for whatever reason.

Please take a look at www.moonlight3D.org.

I used this program before its "shutdown" and I am really sad that there is neither support nor any updates any more.

Maybe you can find out some information about this case?

Thank you very much for your help and for the Linux Gazette!


This page written and maintained by the Editor of the Linux Gazette. Copyright © 2000, gazette@ssc.com
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


News Bytes

Contents:


 June 2000 Linux Journal

The June issue of Linux Journal is on newsstands now. This issue focuses on People Behind Linux.

Linux Journal has articles that appear "Strictly On-Line". Check out the Table of Contents at http://www.linuxjournal.com/issue74/index.html for articles in this issue as well as links to the on-line articles. To subscribe to Linux Journal, go to http://www.linuxjournal.com/subscribe/index.html.

For Subscribers Only: Linux Journal archives are available on-line at http://interactive.linuxjournal.com/


Distro News


 Best

Best Linux 2000 R2-Moscow is a Russian-language version of the Best Linux distribution, which is also available in English, Swedish and Finnish.


 Bluetooth

Las Vegas, NV - May 9, 2000 - Today at Networld+Interop (N+I), Axis Communications is demonstrating a new wireless solution that provides broadband access to the Internet and LANs for a wide range of emerging wireless devices. General availability is expected in the fourth quarter. The Bluetooth Access Point will be used to create local "hot spots," areas where instant wireless broadband access to the Internet or a network is available to Bluetooth-enabled devices, such as cell phones, PDAs, laptops and emerging Webpads. These hot spots will enable new and innovative services for a variety of user environments: in the office, home, hotels, retail establishments and other public places such as the airport.

In the hotel of the future, while you check into your room, your laptop checks into the office - retrieves e-mail and voicemail and accesses corporate Intranet services - all with broadband speed. Phone calls will be routed automatically via telephony services to your personal mobile phone, providing one-number simplicity and lower-cost phone bills. The hotel will offer new conveniences, such as easy wireless faxing and printing from anywhere in the hotel to the business center, poolside food-service ordering, and streamlined checkout payment, all from your PDA.

The Bluetooth Access Point from Axis is the first to support both data and voice services. The product platform is based on Axis' integrated system-on-a-chip technology and embedded Linux, which includes a Bluetooth stack for Linux developed by Axis and recently released under GNU General Public License (GPL) to the open source community.


 Newlix

OTTAWA, Ontario - May 2, 2000 - Newlix Corporation announced today a strategic relationship with 3D Microcomputers Wholesale and Distribution to market its Newlix OfficeServer, a Linux-based network operating system.

Newlix is focusing on building an outstanding array of 'set-and-forget' performance features into a reliable, cost-effective network operating system, which runs on standard Intel-based hardware. The company's flagship product, Newlix OfficeServer, is a robust network operating system which features plug-and-play software installation coupled with easy-to-use, web-based configuration tools.

3D Microcomputers is the largest Canadian-owned manufacturer of computer systems. The company provides products and services to 6,000 computer resellers across Canada.

Ottawa, ON - May 3, 2000 - Newlix Corporation and Look Communications Inc. today announced a marketing partnership to promote the use of Newlix OfficeServer, a turnkey Linux-based network operating system for small and mid-sized businesses looking for secure, company-wide Internet access.

Look Communications is a leading wireless broadband carrier and one of the largest Internet Service Providers in Canada. The Newlix OfficeServer will be included in a host of Web-based applications Look offers to support business Internet requirements.

Newlix Corporation (www.newlix.com) is a privately funded company headquartered in Ottawa, Ontario and founded in 1999. Corel Corporation (Nasdaq: CORL; TSE: COR) is an investor in the company. Newlix develops software for an easy-to-use Linux-based network operating system that meets the networking and internetworking needs of small to medium-sized businesses and provides OEMs, VARs and other partners with the essential building blocks to custom tailor networking solutions. The company's flagship product, Newlix OfficeServer, provides a robust, worry-free, 'set-and-forget' communications and networking platform, designed to be delivered in partnership with hardware vendors, connectivity providers and application service providers.


 Red Hat

Red Hat releases 64-bit Itanium Linux (ZDnet article)
( Official press release from Red Hat)

RESEARCH TRIANGLE PARK, N.C.--April 25, 2000--Red Hat, Inc., announced today that it is now taking orders for developer tools and services for the embedded Linux market. The Red Hat Embedded DevKit (EDK) begins shipping immediately and answers the demand for open source software and tools in the growing embedded space, which includes Internet appliances and handhelds.

The Red Hat EDK provides an integrated development environment (IDE) to deliver software developers everything needed to quickly and easily create embedded Linux applications on a wide spectrum of pervasive computing platforms. The targeted markets include manufacturers who are building Internet infrastructure appliances and consumer Internet appliances, as well as the traditional telecom, datacom, industrial and embedded enterprise markets.

The Red Hat Embedded DevKit is a completely open source software package and is sold via redhat.com with varying levels of services starting at $199.95.

A key advantage to the Red Hat Embedded DevKit is access to the premium support services that Red Hat has pioneered in the open source space. Red Hat Support customers receive assistance on the usage of the Embedded DevKit and response to questions about embedded Linux. In addition, customers are entitled to priority response on corrections to any EDK or kernel problems they submit. This ensures that customer projects stay on schedule.

For EDK, Red Hat offers two types of premium support:

Incident support for small workgroups, and Platinum Support for larger development teams. Incident packages provide the customer with priority response on a fixed number of requests. Platinum packages provide priority response on an unlimited number of requests, but are based on the number of software developers using the EDK.

www.redhat.com/store


 LuteLinux

A distribution from Vancouver, Canada.

www.lutelinux.com


News in General


 Filesystem Hierarchy Standard 2.1

FHS 2.1 is done!

I'm pleased to announce the release of FHS 2.1, an updated version of the Filesystem Hierarchy Standard for Linux and other Unix-like operating systems. FHS is part of the draft Linux Standard Base specification, which will soon be updated to reflect FHS 2.1.

FHS 2.1 supersedes both FSSTND 1.2 and FHS 2.0. There have been some significant improvements and bug fixes since FHS 2.0. Please see the FHS web site for details. (It has been a few years since the last official release, so check it out if you're using a previous version of FHS or FSSTND.)

What is FHS?

FHS defines a common arrangement of the many files and directories in Unix-like systems (the filesystem hierarchy) that many different developers and groups have agreed to use. See below for details on retrieving the standard.

The FHS specification is used by the implementors of Linux distributions and other Unix-like operating systems, application developers, and open-source writers. In addition, many system administrators and users have found it to be a useful resource.

FHS or its predecessor, FSSTND, is currently implemented by most major Linux distributions, including Debian, Red Hat, Caldera, SuSE, and more.

FHS 2.1 and other FHS-related information is available at http://www.pathname.com/fhs/

Information on the Linux Standard Base is available at http://www.linuxbase.org/

Daniel Quinlan <quinlan at pathname.com>
FHS editor
Linux Standard Base chair


 Upcoming conferences & events

Strictly eBusiness Solutions Expo
June 7 & 8, 2000
Minneapolis Convention Center
Minneapolis, MN
Visit www.strictlyebusinessexpo.com

USENIX
June 19-23, 2000
San Diego, CA
www.usenix.org/events/usenix2000/

LinuxFest
June 20-24, 2000
Kansas City, KS
www.linuxfest.com

PC Expo
June 27-29, 2000
New York, NY
www.pcexpo.com

LinuxConference
June 27-28, 2000
Zürich, Switzerland
www.linux-conference.ch

"Libre" Software Meeting #1
(Rencontres mondiales du logiciels libre)
, sponsored by ABUL (Linux Users Bordeaux Association)
July 5-9, 2000
Bordeaux, France
French: lsm.abul.org/lsm-fr.html
English: lsm.abul.org

Summer COMDEX
July 12-14, 2000
Toronto, Canada
www.zdevents.com/comdex

* O'Reilly/2000 Open Source Software Convention
July 17-20, 2000
Monterey, CA
conferences.oreilly.com/convention2000.html

Ottawa Linux Symposium
July 19-22, 2000
Ottawa, Canada
www.ottawalinuxsymposium.org

IEEE Computer Fair 2000
Focus: Open Source Systems
August 25-26, 2000
Huntsville, Alabama
www.ieee-computer-fair.org

Atlanta Linux Showcase
October 10-14, 2000
Atlanta, GA
www.linuxshowcase.org

Fall COMDEX
November 13-17, 2000
Las Vegas, NV
www.zdevents.com/comdex

USENIX Winter - LISA 2000
December 3-8, 2000
New Orleans, LA
www.usenix.org


 News from the E-Commerce Times

The E-Commerce Times has a Linux section: http://www.ecommercetimes.com/linux/

Caldera sponsors Linux Professional Institute's (LPI) exam-based certification program, TurboLinux partners with Computer Associates for Unicenter, WordPerfect hits the 1-million-download mark.
http://www.ecommercetimes.com/news/articles2000/000517-tc.shtml

One Year Ago: Penguin and Linux Taking Center-Stage. (An article originally published in May 1999.) For Linux, 1998 was kind of like the year its voice broke....
http://www.ecommercetimes.com/news/articles2000/000503-tc.shtml

The End of Linux Hysteria? 2000 could be the year that Linux comes fully into its own....
http://www.ecommercetimes.com/news/articles2000/000509-1.shtml

IBM and Linux: A Test of Metal
http://www.ecommercetimes.com/news/articles2000/000522-1.shtml


 Cobalt news

NAMPA, Idaho and MOUNTAIN VIEW, Calif. - May 17, 2000 - HostPro, Inc. (www.hostpro.net), a Web hosting subsidiary of Micron Electronics, and Cobalt Networks, Inc. (www.cobalt.com) today announced an alliance to expand HostPro's Web hosting programs by offering dedicated server solutions on Cobalt RaQ 3 server appliances. The arrangement enables HostPro to offer direct sales and support to its dedicated Web hosting customers by using a server appliance platform specifically designed by Cobalt Networks for dedicated hosting.

Orlando, Florida, May 22, 2000 - Cobalt Networks, Inc. today announced Cobalt StaQware, a high availability clustering solution that ensures the uptime of business critical Web sites and applications. StaQware, which runs on Cobalt's RaQ 3i server appliances, offers 99.99 percent availability and requires no customization or modification to applications.


 RAID solutions for Linux

Hello!

I read several of your Linux Gazette issues. Just to let you know: my company sells a line of RAID products that are Linux compatible.

Our address is www.raidweb.com

The advantages of our products are that we sell systems utilizing either SCSI or IDE hard drives. Also, our RAIDs are O/S independent--useful if your readers are utilizing multiple-boot or different operating systems.


 NetMax products

Ann Arbor, Michigan, May 8, 2000 - Cybernet Systems Corporation today announced two new product releases with enhanced features for its popular Linux-based NetMAX Internet appliance software line, providing consumers with more capabilities and flexibility at the same low cost and in the same easy, 15-minute installation format. The NetMAX Internet Server Suite now includes the ability to host multiple domains on a single IP address, and improvements to the NetMAX FireWall Suite include a proxy server with 100 MB of cached storage to speed network performance.


 Computer I/O streaming telecom server

Santa Clara, CA -- May 22, 2000 -- Computer I/O Corporation, a provider of communications servers, embedded software and services, announced the Easy I/O (TM) T1/E1 Streaming Server, a high-performance communications server specifically designed for data insertion, capture and analysis applications.

The Linux-based T1/E1 Streaming Server functions as a communications probe, enabling client applications to directly access T1/E1 DS0 channels from the LAN environment.

http://www.computerio.com


 LinuxMall and EBIZ (TheLinuxStore) to merge

DENVER -- LinuxMall.com Inc. and EBIZ Enterprises Inc. announced today that both parties have executed a letter of intent (LOI) to merge. The merger of LinuxMall.com and TheLinuxStore.com, a division of EBIZ Enterprises, will position the combined entity as the largest vendor-neutral Linux shopping mall and destination on the Internet. The resulting company will offer the most comprehensive selection of Linux products and solutions, information and services. The companies' combined prior fiscal year revenues were more than $25 million.

Under terms of the agreement, the new corporation will be known as LinuxMall.com. Today, LinuxMall.com is the No. 1 e-commerce site for the Linux community and was recently listed as the No. 1 shopping destination in Linux Magazine's "Top One Hundred Linux Sites." The rise of the Linux operating system has been one of the top technology stories of the year, as companies are adopting this system within their enterprises. The TheLinuxStore.com Web site will become a store within the LinuxMall.com collection of online stores.

The new Company intends to apply for NASDAQ listing after successful completion of the proposed merger.


 Software Carpentry Design Competition Finalists

The Software Carpentry Project is pleased to announce the selection of finalists in its first Open Source Design Competition. There were many strong entries, and we would like to thank everyone who took the time to participate.

We would also like to invite everyone who has been involved to contact the teams listed below, and see if there is any way to collaborate in the second round. Many of you had excellent ideas that deserve to be in the final tools, and the more involved you are in discussions over the next two months, the easier it will be for you to take part in the ensuing implementation effort.

The 12 entries that are going forward in the "Configuration", "Build", and "Track" categories are listed at the URL below. The four prize-winning entries in the "Test" category are also listed; we are putting this section of the competition on hold for a couple of months while we try to refine the requirements. You can inspect these entries on-line at http://www.software-carpentry.com/first-round-results.html


 Stalker: StrongARM version of CommuniGate Pro

From the Big Iron down to a Pocket Server. Stalker Announces a Linux StrongARM version of the CommuniGate Pro Mail Server

MILL VALLEY, CA - May 15, 2000 - Just two weeks after the successful release of the AS/400 version of CommuniGate Pro, Stalker Software, Inc. today announced the Linux StrongARM version of their highly scalable, carrier-grade messaging server.

CommuniGate Pro was initially designed to be a highly portable messaging system that can effectively use the resources of any operating system on any hardware platform. Current installations include small to mid-size ISPs on up to the extra large ISPs and Fortune 500 companies.

With this release, Stalker expands the number of supported Linux architectures: besides the "regular" Intel-based systems, CommuniGate Pro can be deployed on PowerPC, MIPS, Alpha, Sparc, and now StrongARM processors running the Linux(r) operating system.

The highly scalable messaging platform can support 100,000 accounts with an average ISP-type load on a single server, and CommuniGate Pro's unique clustering mechanisms allow it to support a virtually unlimited number of accounts.

For office environments and smaller ISPs, CommuniGate Pro makes an ideal Internet appliance when installed on MIPS-based Cobalt Cubes(r) and, now, Rebel.com's NetWinder(r) mini-servers.

The CommuniGate Pro Free Trial Version is available at http://www.stalker.com/CommuniGatePro/.


 First UK Linux Conference Set to Challenge IT in Business

SuSE Linux Ltd, Europe's leading Linux distributor, will be hosting the first UK Linux Conference on 1st June at the Olympia Conference Centre in London. The Conference, in association with IBM, is set to position Linux as a viable option for the corporate desktop, whilst preserving its traditional role of powering many corporate servers. Leading industry figures, including Larry Augustin of VA Linux, Alan Cox of Red Hat, Dirk Hohndel of SuSE Linux and Vice President of the XFree86 Project, and Jon Hall of Linux International, will discuss issues ranging from the origins and direction of Linux to the increasing relevance it has in the business environment today.

http://www.suse.com


 Magic Software news

IRVINE, CA (May 17, 2000) - Magic Software Enterprises announced the completion of two key acquisitions. Magic purchased a majority interest in Sintec Call Centers Ltd. (Sintec), a Magic Solutions Partner that is the developer of the leading call center management software in Israel. Magic plans to market and sell the Magic-based solution, which has already been implemented extensively in Israel, worldwide under the brand name "Magic eContact". Magic also has acquired ITM, another Magic Solutions Partner, with expertise in the development and implementation of e-commerce projects.

IRVINE, CA (May 22, 2000) - Magic Software Enterprises (Nasdaq: MGIC), a leading provider of state-of-the-art application development technology and business solutions, announced today that it has signed a deal with Compass Group PLC, a major worldwide foodservice organization, to deliver an e-procurement solution. The e-procurement solution, which is being developed and implemented by Magic's French subsidiary at Compass Group France, will be built using Magic's award-winning business-to-business e-commerce solution, Magic eMerchant. The new application is expected to become operational in June 2000.

"We chose Magic over Oracle and IBM because they were able to provide us a competitive, fixed-price solution that could be implemented much more quickly and efficiently than the other two, and would adhere exactly to our specific data model," said Ludovic Penin, Compass Group's IS director in France.

http://www.magic-sw.com


 Solemates chooses Lutris Technologies

SANTA CRUZ, Calif. - May 22, 2000 - Lutris Technologies, Inc., an Open Source enterprise software and services company, today announced that its Professional Services group was chosen to deliver the interactive customatix (www.customatix.com) Web site for Solemates. Customatix.com is an interactive E-commerce site that enables customers to design and build their own shoes click-by-click from the sole up.

Solemates, the company behind customatix.com, relied on Lutris Technologies' Professional Services group to develop a site capable of delivering the three billion trillion combinations of custom shoe designs that only a Web-based business could offer customers. Visitors to customatix.com can select from a vast assortment of shoe design elements, including sole heights, materials, colors, laces, and other options to build a uniquely individual pair of shoes.

Lutris made customatix.com come to life quickly. Using Enhydra (www.enhydra.org), a leading Open Source Java(tm)/XML application server, the Professional Services group built a complex, multi-faceted application, architecting a solution that integrates seamlessly with Solemates' partners, including UPS, Cybersource, and FaceTime. The Enhydra Open Source application server decreased Solemates' time-to-market to a fraction of what it could have been using closed source, proprietary software.

Using Enhydra XMLC, Lutris was able to deploy Solemates' business in five months, roughly half the time it would have taken without this innovation, and at a cost of approximately one-third of what a pioneering site typically costs, according to recent GartnerGroup survey data. Enhydra XMLC separates HTML design and coding from business logic, allowing interface designers and Java programmers to work simultaneously yet independently. Since a core benefit of customatix.com's vision lay in allowing customers to view their creations in real time, Enhydra XMLC provided the precise technology to support such an inventive business strategy.


 Linux Links

LinuxMall's Ask Linus forum.

Proton Media specializes in creating multimedia web presentations using Flash 4.0. The same presentation may be used as a trade-show kiosk and also given away as a "CD-ROM business card". (This URL requires Macromedia's Shockwave Flash plug-in. A link to the Linux version is available at the site.)

TheLinuxExperts.com sells Linux servers in North America and installs office LANs.

Firstlinux.com "I've installed Linux: What Next?" is a series of articles aimed at helping you realise the full potential of Linux.

Making the Palm/Linux Connection (O'Reilly article)

Universal Device Networking -- the Future is Here (LinuxDevices.com article)

AnchorDeskUK article about Red Hat's default login mishap

ZDNetUK article: "Linux took another major stride towards corporate acceptance last week, with IBM's announcement that IBM Global Services would support S/390 versions of Linux from SuSE and TurboLinux."

Can Linuxcare stay afloat? (ZDNetUK article) "The real story behind a potential open-source disaster."

Browser Wars: the Future Belongs to the Dinosaurs


 MS Kerberos, "medialess" OS, hypocrisy, Gnutella, and overenthusiastic open-source enthusiasts

Most of these are linked directly or indirectly from the indicated OSOpinion articles.

How to publish a trade secret (Microsoft's Kerberos specification)

Microsoft's Gamble of a Lifetime (switching from selling software to on-line subscription services).

Open source is (so far) a road to nowhere

Microsoft - The Penguin's Buddy (some more ways MS is shooting itself in the foot)

Napster and Gnutella

Infoworld article about the "medialess" OS


Software Announcements


 new release of BORG

We would like to announce that the next version (v0.2.20) of BORG (BMRT Ordinary Rendering GUI) is now available for download at www.project-borg.org. BORG now runs on most of the BMRT-supported platforms, including Linux, WinNT and Solaris. (Requires Java 1.1.7 or higher.)


 Linux Accounting

I would like to announce the availability of AccountiX for Linux. This is a full-featured, modular accounting package. The source code is available in order to allow customization to fit an end user's needs. Information on the package is located at www.accountixinc.com.

Frank Quirk, President
AccountiX, Inc.


 Loki: Heavy Gear II

With "Heavy Gear II" Loki Entertainment Software is opening the door to new dimensions in the Linux world: with 3D audio effects and joystick support, a further step has been taken towards the acceptance of Linux by the home user.

With the release of the first "big" Linux game, "Civilization: Call To Power" (awarded "Best End User Product of 1999" by Linux Journal), Loki Entertainment has already made a name for itself. Like its predecessor, "Heavy Gear II" makes optimal use of the qualities of Linux on the network, making both turn-based and real-time multi-player games possible. Given this success, it is no surprise that around a dozen more titles are planned to be ported to Linux in 2000.

Loki is currently placing its main emphasis on 3D sound support by means of OpenAL. "OpenAL represents a milestone for Linux," says Scott Draeker, president of Loki Entertainment Software. "Until now, 3D audio features in games were reserved for users of other platforms. This has all changed now."

OpenAL, entirely in the tradition of the Open Source community, is issued under the LGPL (GNU Library General Public License).

Loki released 7 front-line Linux game titles in 1999, and plans 16 titles for 2000. For more information, visit www.lokigames.com.


 Loki: Quake III Arena Editor

We proudly announce the beta release of Linux SDK for use with Quake III Arena.

The full version of Linux SDK will benefit Linux enthusiasts and aspiring game developers alike by allowing them to create maps and game code modifications under Linux. Windows users have had this capability since the release of the original Quake game.

Linux SDK offers Linux users a toolchain for content creation. It combines software for image processing, conversion and editing with a fully-featured map editor compatible with the Quake III engine. The features include custom texturing, lighting, patches, shaders, entities and more. It is based in part on the QERadiant code from id Software, Inc.

Download the unsupported beta version.


 Other software

Public demo of Kylix, Borland's Delphi for Linux.

Aestiva HTML/OS is a simple way to build a database designed for the web.

CRiSP 7.0 is a programmer's editor including file compare, FTP client, GUI and text modes, vi/emacs emulation, and much more. (21-day evaluation copy)

Cybozu Office 3 is an English version of Cybozu's Japanese office suite. It includes ten applications. Download the 60-day trial at http://cybozu.com/download/index.html

Canvas 7 Linux Beta 2 by Deneba provides vector drawing, diagramming, technical illustration, creative drawing, image editing, web graphics and page layout features in one powerful application. Download the beta from Deneba's web site.

MontaVista real-time scheduler for the Linux kernel. (For embedded applications.) Download source and documentation at http://www.mvista.com/realtime/rtsched.html

EiconCard Connections for Linux, when combined with an EiconCard network interface card, provides the wide area communications needs for an easy-to-use, low-cost, and easy-to-manage communications server. The flexibility of the EiconCard, when combined with this software, provides powerful IP Routing over various WAN protocols, making it ideal for applications such as Web Servers or Thin Server Appliances. In addition, many Linux-based embedded systems, such as point-of-sales, can use the X.25 connectivity built into the software. It will be available in June.

Opera has signed a deal with RSA to use BSAFE Crypto-C 1.0 encryption software in its Opera web browser.


This page written and maintained by the Editor of the Linux Gazette. Copyright © 2000, gazette@ssc.com
Published in Issue 54 of Linux Gazette, June 2000


(?) The Answer Guy (!)


By James T. Dennis, linux-questions-only@ssc.com
LinuxCare, http://www.linuxcare.com/


Contents:

(!)Greetings From Jim Dennis
plus ¶: Greetings From Heather Stern
(?)LILO hangs --or--
LILO Hangs in Switzerland
(?)question on rm command --or--
Homework Answer: All about 'rm'
(?)Suse Linux telnet problem --or--
Can't telnet to Linux server
(?)question on trees --or--
Another Homework Assignment from Hotmail
(?)Telnet --or--
Can't Telnet: Another possibility
(?)>>>> HELP ! Name of the 1st GUI used by original AOL software ??? --or--
A GEM of a Question?
(?)Win4Lin nand NT = nil --or--
Win4Lin's Limitations: VMWare's Strength?
(?)login script --or--
"Unary Command Expected" in Shell Script
(?)linux --or--
Step through a Program
(?)Question from a quasi Novice... --or--
Linux is {Now|Not} UNIX
(?)Thanx for -->Re: hanging the IFS to newline --or--
Embedding Newlines in Shell and Environment Values
(?)Home partition sizes --or--
Sizing the Home Directories: Quotas and Partitioning
(?)calculate cpu load --or--
Use the Sources, Dude!
(?)Scheduling 3rd party services --or--
Cron
(?)a quick question --or--
DIR /S
(?)Shared Libraries --or--
Limiting "Public Interfaces" on Share Libraries
(?)Passwords problems --or--
Corel Linux and Blank Passwords
(?)Dual Boot Questions. --or--
Windoze [sic] on 2nd Hard Drive

(!) Greetings from Jim Dennis

I had a great time this weekend at an annual science fiction conference named Baycon. Heather and I were staff in their first terminal room, sponsored by Red Hat, LinuxCare, and VA Linux Systems and it was a rousing success. Other SF conventions are looking forward to doing the same.

Good news: Heather, my wife and principal editor, will be taking over the Answer Blurb. She's refined her 'lgazmail' PERL script to the point where she can take up the slack and has graciously agreed to take over responsibility for the monthly blurb as well.

Long time readers may recall that early Answer Guy columns had no blurbs. They also had no HTML formatting. The URLs weren't even wrapped in links! I'd been frustrated by this for some time --- from about the time that I realized that Marjorie (then the editor of LG) was publishing my responses as a column, (and that she had dubbed me "The Answer Guy" --- a title that still makes me nervous).

Heather agreed to step up to the plate and do the formatting. She tried a few mail-to-web utilities like MHOnArc, and John Callendar's Babymail, etc. Then she decided to derive her own from a Babymail source. So her script reads "Jim's e-mail markup hints" and converts it to reasonable HTML.

Heather also designed and drew the new Answer Guy Wizard (TM?) with its distinctive Question Crystal Ball and Answer Speak Bubble --- which visually refer to the question and answer speak bubbles throughout the column. (She's also added the pilcrow bubble for editorial comments).

In other words, Heather went way beyond just "wrapping the URLs in links" and completely overhauled the visual look of our column.

I should also note that Heather is no slouch technically. She has often helped me find answers to questions --- including the answers that I've published here.

When we did that overhaul I also decided to add the "blurbs." The idea was to say things of interest that were not in response to any questions. (I suppose I could've used a shill to jury-rig the desired questions, but that would be cheating!).

The blurb has sometimes been editorial (commenting on the Mindcraft/Microsoft fiasco and the wonderful Linux community anti-FUD response). Sometimes it's been a summary and commentary on the sorts of questions we got in the previous month, feedback that we got from my answers, and any trends that we were seeing, etc.

For awhile I tried to identify a specific person and forum every month --- to recognize them with the "Answer Guy's Support Award." I wanted to point out individuals who were providing lots of technical support in various more specialized fora. For instance in May I wanted to recognize Ben Collins of the Debian-SPARC mailing list. He seems to respond to most of the questions that show up there. (Unfortunately I was too much of a flake to keep that up for long. It's hard to dig up a really good new selection every month).

Of course there have also been the two April Fool's hoax blurbs and a few others that weren't really there.

The sad fact is that I don't have enough time to conceive and compose articles for this column every month. It is much easier for me to answer questions (react) than to write from scratch. (I tend to digress enough when there IS a question at hand. I'm a regular attention deficit train wreck when left to my own devices!).

Let me reassure everyone that I'm not leaving the "Answer Guy" column. I'm somewhat compulsive about answering technical questions, and I used to make a hobby out of USENet netnews before the advent of LG ensured that I get 100 or so diverse Linux questions every month in my inbox. I sometimes still make it out to USENet --- though I dropped the uucp netnews feed that used to fill the disk on antares on a semi-regular basis! (Now I just telnet out to a shell account at my ISP, or use my $NNTPSERVER environment setting to get to his news server).

I'll also probably still insert a few comments to supplement Heather's.


(¶) Greetings from Heather Stern

Hi everybody. I suppose I don't have to introduce myself now. I will also be taking on some deeper organizational features -- in the next few months we'll see a revamp of how Tips, the Mailbag, and Answer Guy messages are formatted -- though I think they won't look all that different.

Also, we'll have more Wizards joining us. Jim had from the early days conceived of this as The Answer Gang -- he was just helping an editor with a few technical notes, a role which anyone can play. The Mailbag and Tips are popular, and more gurus are around now. If you'd like to join The Answer Gang as a regular, let us know what your specialties are.

I'll have something more "Blurb"ish next month. On to the answers!


(?) LILO Hangs in Switzerland

From Tom on Fri, 05 May 2000

Hi Jim (or James? Is Jim short for James?)

(!) Jim is short for James. I tend to go by Jim.

(?) First let me thank you for the work you're doing in the LG. I've read it for about 2 years now and have seen lots of tips. Even the AnswerGuy section is interesting, sometimes amusing... But let me come to the point now ;-)

I have Suse Linux 6.3, Kernel 2.2.13, with NCR SCSI and 2 disks. With fdisk I set Boot=Y on /dev/sda1.

mtab looks like:

/dev/sda1 /boot
/dev/sda2 /
/dev/sdb1 /home

But mtab will be processed after LILO has loaded the kernel, right?

(!) /etc/mtab is the file which contains a list of currently mounted filesystems. /etc/fstab is the list of filesystems which are "known" to the system. /proc/mounts is a virtual file, it is a dynamic representation of the kernel's mount table.
/etc/mtab might be out of sync with your /proc/mounts in cases where the system is in single user mode --- and the root filesystem is mounted read-only, or under other odd circumstances. /proc might not be mounted in some other cases. The structure of the two files is similar, but not quite identical. I've experimented with making /etc/mtab a symlink to /proc/mounts (and adjusting a few startup scripts to cope with that). It seems to work.
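(That experiment, for the curious, is just a one-liner --- though probably not something to try on a production box:

# ln -sf /proc/mounts /etc/mtab

... followed by hunting down any startup scripts that insist on rewriting /etc/mtab as a regular file).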
The main commands that use /etc/mtab are the 'mount' command (when used with no arguments, to display the list of currently mounted filesystems) and the 'df' command (which displays the currently available free space on each mounted fs). Personally I think these (and any others that need this info) should be adjusted to use /proc/mounts in preference to /etc/mtab --- since this would be one step that might allow us to mount / in read-only mode.
Of course that should be abstracted through a library and it should still be able to use /etc/mtab for cases where /proc isn't available (particularly on some sorts of embedded systems).
But I digress.

(?)lilo.conf looks like:

initrd = /boot/initrd    # exists
boot = /dev/sda          # put the Bootstrap code here
#-#-#-#-#
image = /boot/vmlinuz    # exists
root = /dev/sda2         # the device holding /
label = lx               # short but unique :-)

When running lilo, it shows
Added lx *

When rebooting the system, it hangs after printing LI. I've read the lilo-README. It says that this is caused by "geometry mismatch" or having moved "/boot/boot.b without running the map installer."

Uuuuh?!? What's the problem? I just don't get it ... Please help me. - Thank you!

Tom
Greez from Switzerland!

(!) Try adding the "linear" directive to the global section of your /etc/lilo.conf. That would be the part before the first "image=" directive.
Try running /sbin/lilo -v -v (should give more verbose output).
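For example, using Tom's own lilo.conf, the global section (everything above the first "image=" line) would simply gain one word:

linear                   # use linear sector addresses instead of C/H/S
initrd = /boot/initrd    # exists
boot = /dev/sda          # put the Bootstrap code here

(Remember to re-run /sbin/lilo afterwards so the map file gets rewritten).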

(?) LILO: linear Directive

From Tom on Mon, 08 May 2000

Hello Jim

Thank you for your quick response!

Try adding the "linear" directive to the global section of your /etc/lilo.conf. That would be the part before the first "image=" directive.

I've done that and ... it works! Why does it? Is there a general problem with SCSI-drive(r)s and the old style addressing C/H/S? AFAIK "linear" means that the sectors on a disk are counted from 0 to n, as the SCSI does itself on block devices. But now I'm digressing ;-)

Thanks again! Tom

(!) The failure mode you described (the LILO loader stops at just LI) is described in their documentation ("tech.dvi" or "tech.ps" depending on your distribution/source).
Basically the boot loader prints the letters LILO one at a time, each at a specific point in its boot process. This is useful for debugging and troubleshooting. LI says that the first stage boot loader completed, and the second stage boot loader was found, but the mapping data (used to find the kernels, etc) was not. This is usually due to a problem where the BIOS and the LILO loader are using incompatible block addressing modes. (One is using CHS --- cylinder/head/sector --- while the other is using LBA/linear).
Some SCSI controllers expect linear addressing; other SCSI controllers, or controller/drive combinations, emulate the old WD1003 (ST506) interface closely enough that CHS addresses will do.
Sometimes you need to switch your CMOS/BIOS to use UDMA/LBA modes and/or add the "linear" to your /etc/lilo.conf --- sometimes you need to just take the "linear" directive out of /etc/lilo.conf (and re-run /sbin/lilo, of course).

(?) Homework Answer: All about 'rm'

From The Phantom on Mon, 01 May 2000

Hello,

I'm wondering if you can answer a few questions on the UNIX rm command. I need a response before May 3rd if possible. Your assistance on this matter is greatly appreciated. Thank you for your time and service. Here are the questions

(!) Hmm. Wouldn't want this assignment to be late for the prof, heh?
Well, at least you had the brights to use a hotmail account rather than sending this from your flunkme@someuniv.edu address.

(?) The rm unix command lowers the link count of an inode. When the link count goes to zero the inode is made available to the system and cleared of extraneous information.

(!) The 'rm' command is basically a parser and wrapper around the unlink() system call.
BTW: This definition is an oversimplification. When the link count is less than 1 AND THERE ARE NO OPEN FILE DESCRIPTORS ON THAT FILE then the system does some sort of maintenance on the inode and any data blocks that were assigned to it.
Exactly what the filesystem does depends on what type of fs it is, and on how it was implemented for that version of that flavor of UNIX.
Usually the inode is marked as "available" in some way --- so that it can be re-used for new files. Usually the data blocks are added to a free list, so that they can be allocated to other files.
(It is possible for some implementations to mark and reserve these to allow for some sort of "undelete" process --- and it would certainly be possible to have "purge" and "salvage" features for some versions of UNIX).

(?) 1) Explain link count?

(!) The link count is one of the elements (fields) of the inode structure. An inode is a data structure that is used to manage most of the metadata for a file on a UNIX like filesystem.
On UNIX filesystems a directory entry is (usually) a link to an inode. (On some forms of UNIX, on some types of filesystems, there may be exceptions to this: some filesystems can store symbolic link data directly in their directory structures without dereferencing it through an inode; some of them can even store the contents of small files there.) However --- in most cases the directory entry is a link to an inode.
This allows one to have multiple links to a file. In other words you can have many different names for a file --- and you can have identical names in different directories.
It turns out that most filesystems use this feature extensively to support the directory structure. Directories are just inodes that are mostly just like files. Somewhere you have a parent directory. It contains a link to you. Each of your subdirectories contains a ".." link to its parent (you). Thus each directory must have a link count that is equal to its number of subdirectories plus two (one for its own "." entry, and another for its name in the parent --- ../somelink.to.me).
(Note: On most modern forms of UNIX there is a prohibition against creating additional named hard links to directories -- this is apparently enforced in order to make things easier for fsck).
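You can watch these link counts change for yourself (a hypothetical session; the second column of 'ls -l' output is the link count):

$ mkdir parent
$ ls -ld parent           # link count is 2: its name, plus its own "."
$ mkdir parent/child
$ ls -ld parent           # now 3: the child's ".." entry adds one
$ touch parent/file
$ ln parent/file parent/file.too
$ ls -l parent/file*      # both names show a link count of 2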

(?) 2) Explain why the name of the command is called remove (rm)?

(!) It seems pretty self explanatory to me. You're removing a link. If that link is the last one to that file, then you've removed the file as well.

(?) 3) What happens to the blocks referenced by the inode when the link count goes to zero?

(!) Normally the data block would be returned to the free list. The free list is another data structure on UNIX filesystems. I think it is usually implemented as a bitmap.
Note: On some forms of UNIX the filesystem driver might implement a secure delete feature, which might implement arbitrarily complex schemes for overwriting the data with NULs, with random data, etc. There is a special feature in Linux which is reserved for this use -- but which is not yet implemented. You might find similar features in your favorite form of UNIX.

(?) 4) What data is present in these blocks after the inode has been cleared?

(!) That depends on the filesystem implementation. It usually would still contain whatever data was laying around in those blocks at the time that they were freed.
If you're thinking: "Ooooh! That means I can peek at other people's data after they remove it!" Think again. Any decent UNIX implementation will ensure that those data blocks are clear (zero'd out) as they are re-allocated.

(?) 5) How does the removal of an inode which is a symbolic link change the answer to 3) and 4)?

(!) Symbolic links may be implemented by storing the "data" in the directory entry. In which case the unlink() simply zeros out that directory entry in whatever way is appropriate to the filesystem on which it is found.
Symbolic links may also be implemented by reference to an inode --- and by storing the target filename in the data blocks that are assigned to that inode. In which case they are treated just like any other file.
Note that removing a symbolic link with 'rm' should NEVER affect the target file links or inodes. The symbolic link is completely independent of the hard links to which they point and the inodes to which those refer.
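That's easy to verify (another hypothetical session):

$ touch target
$ ln -s target alias
$ rm alias                # unlinks only the symbolic link...
$ ls -l target            # ...the target's inode and data are untouched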

(?) Thank you for your help.

(!) As I'm sure you noticed this sounds to me like a "do my homework" message. However, I've decided to answer it since it is likely to be of interest to many of my readers.
You may also have noticed that I was a bit vague on a number of points. Keep in mind that quite a lot of this depends on which version of UNIX you're using, which filesystem you're talking about (Linux, for example, supports over a dozen different types of local filesystem), and how you've configured it.
Of course you could learn quite a bit more about it by reading the sources to a Linux or FreeBSD kernel ;)

(?) Can't telnet to Linux server

From kd on Mon, 01 May 2000

I recently installed Suse Linux on a machine to be a server, but I cannot telnet to the linux server from my other machines. can you help?

~kelly

(!) Short answer: Probably TCP Wrappers and the old "double reverse lookup problem." Try adding an entry in /etc/hosts to refer back to your client(s) and make sure that your /etc/nsswitch.conf and /etc/host.conf are configured to honor "files" over DNS and NIS.
You could have been a bit more vague. You could have left out the word "telnet" ;)?
When asking people technical support questions you have to ask:
How many possible causes are there to this problem? How many of them have I eliminated? How have I eliminated them? Can I eliminate some more? What is the error message I'm getting (if any)? What was I expecting? What happened that didn't match that/those expectation(s)?
For example: Can you ping the server from the client system? (That eliminates many IP addressing, routing, firewall and packet filtering questions). Can you telnet from that client to any other server? (That eliminates most of the questions that relate to proper client software/system configuration and function). Can I access any other service on this client? (Web server, file or print services, etc.)
Then you ask: What did I expect to happen when I telnetted to that system? I'd expect to get a set of responses something like:
Trying 123.45.67.89
Connected to myserver.mydomain.not
Escape character is '^]'.

Debian GNU/Linux 2.2 myserver.mydomain.not

myserver login:
So, what did you get? Did you see the "Trying" line? That would mean that the telnet DNS or other hostname lookup returned something. Did the IP address in the trying line match that of your new server? That would mean that your DNS is correct! Did you get the "connected to" line? That suggests that the routing is correct. Did it just sit there for a long time? How long? What if you wait for five or ten minutes? Does it eventually connect?
It sounds like you have the old "double reverse DNS" problem. You are probably using DNS and you probably don't have proper reverse DNS (PTR) records for your client system(s). Do a search in the Linux Gazette archives for several discussions on this.
When you are getting free software and free support, it's important to do your homework. I typically will put about 10 hours into trying to solve a problem before I'll write up a question to post to the newsgroups, mailing lists, authors/maintainers, etc.
Of course I can understand part of the problem you might be facing. It sounds like you have little or no Linux experience, or at least little or no experience in setting up Linux networking.
You probably don't know all of the elements that go into "telnetting into your server." Here's the basic rundown:
You have to have a client (telnet command). That has to be on a system with TCP/IP installed, configured and working. It must have an IP address and a route to your server.
You have to have a server (in.telnetd). It would normally be launched on demand by a dispatch program (inetd) which would be reading configuration out of a configuration file (/etc/inetd.conf).
On Linux systems the /etc/inetd.conf is usually configured to run most programs under an access control and logging utility called "TCP Wrappers" (/usr/sbin/tcpd). That utility refers to a couple of configuration files (/etc/hosts.allow, and /etc/hosts.deny) and it does some "paranoid" consistency checking to try and ensure that the client "is who he claims to be." The specifics of this paranoid checking are referred to as a "double reverse DNS lookup."
This requires that the client system's IP address somehow be registered in some sort of naming service that the server is configured to query. The easiest of these in most cases is to simply add the appropriate IP address (and some arbitrary name) in the /etc/hosts file. A better way is to add an appropriate PTR record to your DNS zone.
Linux uses a modular name services resolution system. Newer versions of Linux use the /etc/nsswitch.conf files to control the list of name services that are used for each name space (users/accounts, groups, hosts and networks, services, mail aliases, file server maps, etc). In most cases you wouldn't have to modify the nsswitch.conf to make it look at the /etc/hosts file. In other cases you might.
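A minimal sketch of the "files" approach, assuming the client is at 192.168.1.10 (the address and hostname here are invented):

# on the server, in /etc/hosts:
192.168.1.10    myclient.mydomain.not    myclient

# /etc/host.conf --- consult the hosts file before DNS:
order hosts,bind

# /etc/nsswitch.conf (glibc 2.x systems):
hosts:      files dns

With those in place, TCP Wrappers' double-reverse lookup can resolve the client's address locally.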
In previous months I've gone into greater detail about how to troubleshoot problems in accessing TCP services on Linux systems. Look for references to tcpdump and strace to find out more.
(Summary: You can replace the entry in /etc/inetd.conf with a wrapper script that runs 'strace' on the program, thus logging what the program is trying to do in great detail. You can also run 'tcpdump' on any machine on the local LAN segment, seeing the traffic between your client and server in great detail).
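Here's a rough sketch of that strace wrapper (the paths and log file name are just placeholders):

#!/bin/sh
# /usr/local/sbin/trace-telnetd --- log everything in.telnetd does
exec /usr/bin/strace -f -o /var/tmp/telnetd.trace /usr/sbin/in.telnetd "$@"

You'd point the telnet line in /etc/inetd.conf at this wrapper instead of in.telnetd, send inetd a SIGHUP, make one test connection, and read the trace at your leisure.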
Unfortunately these tools are rather advanced, very powerful and correspondingly difficult to use effectively. (You can probably get the information from them pretty easily -- the problem lies in configuring them to provide just the info you need, and in parsing and understanding what they tell you).
Hopefully I've guessed correctly on what your problem is. Otherwise search through my back issues and the FAQ and do lots of troubleshooting. Ask a more detailed question.

(?) Another Homework Assignment from Hotmail

From Milton bradley on Tue, 02 May 2000

Hello,

Don't really know if you'll answer my questions but it doesn't hurt to give it a try. If you can all I can say is thanks. Well here goes

the situation is this

(!) You and your friends have decided that e-mail is the easiest way to get your homework done for you?
[I got another question from a different address at Hotmail yesterday. It had a similarly "Do my homework for me" tone to it.]

(?) Directory trees can include large numbers of files. Referencing a file by full path name can be burdensome on the user. Consequently in UNIX there is an environment variable $PATH (e.g. .:/bin:/usr/bin) which directs the system to the directories it is to search for an executable file. All non-executable files are looked for only in the current working directory (.).

(!) Actually this set of propositions is full of minor inaccuracies. First the $PATH environment variable is not a feature of UNIX per se. It is not unique to UNIX, and it is not necessitated by UNIX. However it is a widely used convention --- and it's probably required by POSIX in the implementation of shells and possibly some standard libraries.
Non-executable files are found according to the semantics of the program doing the opening. Usually this is a path (either an absolute path from the root directory or one that is relative to the current working directory or $CWD).
The main flaw in your propositions is that the PATH exists primarily for convenience. There is actually a more important reason for things to use the PATH.

(?) questions are

1) Why shouldn't other non-executable file be referenced by this mechanism?

(!) Why should they?

(?) 2) SuperUsers are cautioned that the shell should not look in the current working directory first (e.g. /bin:/usr/bin:.) for security reasons. Why?

(!) All users are cautioned that adding . (CWD, the current working directory) to their PATH carries some risk.
Let's say that you put . on your path. If you put it at the beginning of your path you've implemented a policy that any executable in the current directory takes precedence over any other executables by that name. So I'm an evil user and I just create a program named 'ls' which does "bad things(TM)"
(I'll leave the exact nature of "bad things(TM)" to your imagination).
When 'root' or any other user then does a 'cd' into my directory and types 'ls' (a very common situation) then my program runs in their security context. I effectively can do anything that they could do. I can access any file they can access. I can completely subvert their account.
Doh!
So let's put that . at the end of the PATH. That solves the problem, right? Now the /bin/ls or /usr/bin/ls will be executed in preference to my copy of 'ls.'
So now the user "evil" has to get more clever. He makes a number of useful links to his "bad things(TM)" script. These are carefully crafted strings like: "sl" and "ls-al" (common typos that the hurried user might make make while visiting my directory).
Quod erat demonstrandum.

(?) 3) The c-shell creates a hash table of the files in $PATH on start-up. Give one advantage of this scheme:

(!)
The hash table is basically an index of all executables on the path. Thus one can quickly find --- in roughly constant time --- whether an executable exists and where it is. (Look up "big O" notation in any textbook on computational complexity analysis).

(?) 4) Give one disadvantage of the above mentioned scheme:

(!)
I'll give two. First, the table can go stale: install a new program (or add one to a directory on your PATH) and the shell won't find it until the table is rebuilt --- which is why csh users must run the 'rehash' built-in after installing software. Second, building the table adds time and memory overhead to every shell start-up, whether or not you ever run most of those commands.

(?) 5) Since the system can easily maintain a list of files referenced in the course of a login session, one could also maintain a REFERENCE FILE TABLE and use it as part of a scheme to locate files. Give one advantage of this scheme:

(!)
Hmm. MU!
Which "one" could do this? Would this be a new API? What programs would support it? How?
Ergo I unask your question.

(?) 6) Give one disadvantage of this scheme:

(!)
Commands with the same name are presumed to provide compatible semantics. Ambiguity among data files is likely to have severe consequences.
One could use expressions like `locate foo` in each case where one wished to refer to "the first file named 'foo' on my data search path." One could certainly implement an API that took filenames, perhaps of the form: ././foo and resolved them via a search mechanism.
(Note: GNU systems, such as Linux, often have the "updatedb" or "slocate" packages installed. These provide a hashed index of all files on the system which are linked through publicly readable directories. Thus the `locate` command expression could be used already --- though the user wouldn't be able to implement a policy over how many and in which order the file names were returned. It would be a simple matter of programming to write one's own shell function or script which read a DPATH environment variable, called the 'locate' command, and searched the returned list for matches in a preferential order).
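A shell function along those lines might look like this (just a sketch --- DPATH is an invented variable, and this leans on the 'locate' command described above):

dsearch () {
    # print the first 'locate' hit found under a DPATH directory,
    # honoring the left-to-right order of DPATH
    local dir hit
    for dir in $(echo "$DPATH" | tr ':' ' '); do
        for hit in $(locate "/$1" 2>/dev/null); do
            case "$hit" in
                "$dir"/*)  echo "$hit" ; return 0 ;;
            esac
        done
    done
    return 1
}

... which you'd use as something like: export DPATH="$HOME/data:/usr/share"; dsearch foo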
BTW: Some shells implement a CDPATH environment setting.
Here's an excerpt from the 'bash' man page:
       CDPATH The search path for the  cd  command.   This  is  a
              colon-separated  list  of  directories in which the
              shell looks for destination  directories  specified
              by the cd command.  A sample value is ".:~:/usr".
As I see it the main reason for UNIX to implement support for an executable search PATH is to allow scripts to be more portable, while allowing users and administrators to implement their own policies and preferences among multiple versions of executables by the same name.
Thus when I use 'awk' or 'sed' in a script I don't care which 'awk' or 'sed' it is and where this particular version of UNIX keeps its version of these utilities. All I care about is that these utilities provide the same semantics as the rest of my scripts and commands require.
If I find that the system default 'awk' or 'sed' is deficient in some way (and if I'm a "mere mortal user") I can still serve my needs by installing a personal copy of a better 'awk' (gawk or mawk) and/or a better 'sed' (such as the GNU version). PATHs are the easiest way to accomplish this.
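For example (assuming GNU awk is installed as /usr/bin/gawk --- adjust the paths to taste):

mkdir -p ~/bin
ln -s /usr/bin/gawk ~/bin/awk
# then, in ~/.profile:
PATH="$HOME/bin:$PATH" ; export PATH

Any script that invokes plain 'awk' will now get gawk when you run it.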
So, the disadvantage of implementing some sort of "data path" feature in the UNIX shells and libraries would basically be:
IT'S A STUPID IDEA!

(?) Can't Telnet: Another possibility

From Walter Ribeiro de Oliveira Jr. on Tue, 02 May 2000

I read a question about not being able to use telnet to connect to a linux box... you complained about getting very little information, and I agree with you, but I have a suggestion: isn't the problem that they were trying to telnet in as the root user, and the file /etc/securetty doesn't permit remote terminals to do so? I mean, to telnet in as the root user, you need to edit /etc/securetty to allow it... Hugs, see ya

(!) Of course that is a different possibility. However, editing /etc/securetty is a very bad way to do this. You'd have to add all of the possible pseudo-tty device nodes to that list --- which would be long and pretty silly.
If one really insists on thwarting the system policy of preventing direct root logins via telnet, then it's best to do so by editing the /etc/pam.d/login configuration file to comment out the "requisite pam_securetty.so" directive:
# Disallows root logins except on tty's listed in /etc/securetty
# (Replaces the `CONSOLE' setting from login.defs)
auth       requisite  pam_securetty.so
... assuming that you are using a PAM based authentication suite -- as most new Linux distributions do. As noted in the excerpted comments from my .../pam.d/login file (as installed by Debian/Potato) there is an applicable setting in /etc/login.defs if you're using JF Haugh's old shadow suite without PAM.
Better Answer: use 'ssh'!

(?) A GEM of a Question?

From John K. Straughan on Tue, 02 May 2000

I have a question stuck in my head which is keeping me up at night! What was the name of the very first GUI program that the original AOL software was based upon? This would have been around 1987,1988,1989. It was prior to MS Windows. AOL wasn't the only company to use it. It never really evolved, but there were some applications written for it. I'm thinking it was GEO something, or something GEO. It was for PC, MS-DOS systems. Please help so I can sleep!!! Thanks - John

(!) (Note: it's a bad idea to include HTML attachment/copies of your e-mail to most people. I'd suggest doing that only in cases where you know that the correspondent involved prefers that particular format).
I don't know what package you're thinking of. As far as I remember the original AOL client software was purely for Apple Macintosh systems.
However, it sounds like you're talking about some version that might have run on GeoWorks Ensemble. GeoWorks Ensemble was actually a predecessor of MS Windows --- but it did run on 8086 (XT class) computers on which MS Windows was never supported. If I recall correctly GeoWorks originally released GEOS, an operating system and graphical environment, for the Commodore 64?
Geoworks Inc. has gone on to focus on things like cell phones and WAP. There was a /. (http://www.slashdot.org) thread about their recent attempts to use U.S. and Japanese patents which may stifle the deployment of free WAP and WML packages.
Meanwhile the desktop software that was part of Geoworks Ensemble appears to have been licensed out or spun off to a company called "New Deal Inc." (http://www.newdealinc.com). They specifically mention compatibility with Linux DOSEMU on their web site. This might make an interesting application suite --- though a native version for Linux would be nicer.
There was also the GEM graphical environment by Digital Research. This was the GUI on which Ventura Publisher was originally based. I think that GEM was basically a clone of the Xerox PARC look-and-feel --- very similar in appearance and behavior to the Xerox 820 and to the original Macintosh finder software.
DR was eventually sold to Caldera by Novell, and spun off again as "Caldera Thin Clients." Meanwhile GEM was released under the GPL and it seems that the canonical site for ongoing GEM development on the net would be at: http://www.deltasoft.com/news.htm
Hope that helps.

(?) Win4Lin's Limitations: VMWare's Strength?

From Charles Hethcoat on Thu, 04 May 2000

I was excited to find out about Win4Lin and went straight to their web page for more details. There I read that they only work with Windows 95 and 98. They explain why in their white paper, and the reasoning is, well, reasonable. But I don't think I will be able to use Win4Lin where I work. Here's why.

My company sees to it that my computer runs NT. This was done because NT is far more stable than 9X. Not perfect, but pretty stable. But I would much prefer to use Linux, and I do have Debian installed on my computer. I boot it via a boot disk, and don't fool with lilo.

Since I don't have Windows 95, I can't use Win4Lin. Pity. I could make good use of it. I wonder how many other people are in my position?

Charles Hethcoat

(!) Try VMware (http://www.vmware.com) instead. It does run a full hardware system emulation and can run NT. It can even run a copy of Linux under Linux or Linux under NT (though that seems like a horrible waste).
You might also watch the free virtual machine project (which is not yet ready for production use) called Plex86 (at http://www.freemware.org). That's based on the work of Kevin Lawton (Bochs) and is apparently now sponsored by Mandrakesoft (http://www.linux-mandrake.com/en).
Of course there's still WINE (http://www.winehq.com). That will run some of your MS Windows applications natively under Linux. There's also still the opportunity to access some of your applications remotely through VNC. You'd run the VNC server on one of your (other) NT systems and access it via the Linux native VNC client (or the Java client, if you really wanted to).

(?) "Unary Command Expected" in Shell Script

From J.Keo Power on Fri, 05 May 2000

Hi Jim,

My name is Keo. I have been trying to write a script that provides a salutation to the user, though that is different depending on who logs in. There are only three of us logging in on the system, and I want to have a little fun with them by putting in some cool messages.

So far, I have attempted to write a script in vi named intro and placing the file in my home directory. I have "chmod to ugo+x intro". Then going to the /etc/bashrc file and putting in the path of the executable intro file in my home directory.

The bashrc is trying to run the executable, but is returning the message "unary command expected". I am not sure what that means!

If you could give me a little guidance on whether my methodology is correct as far as the files I am manipulating, and possibly an outline of the script to write, I'd appreciate it. Here is what I attempted (last time):

 #! intro
 # test of login script

 name=$LOGIN
 if [ $name = keo ]
 then
     echo "Whats up mano?"
 else
     if [ $name = dan ]
     then
         echo "Lifes a peach, homeboy."
     else
          if [ $name = $other ]
             then
                 exit
          fi
     fi
 fi
 exit

Thanks for any help. Keo

(!) I've been trying to clean out my inbox of the 500 messages that have been sitting unanswered and unsorted for months.
This is one of them that I just couldn't pass up.
First problem with this script is right at the first line. That should be a "she-bang" line --- like:
#!/bin/sh
... which is normally found at the beginning of all scripts.
The "she-bang" line is sometimes called "hash bang" -- so-called because the "#" is called a "hash" in some parts, and the "!" is often called a "bang" among hackers, it's also short for "shell-bang" according to some. It looks like a comment line --- but it is used by the system to determine where to find an interpreter that can handle the text of any script. Thus you might see 'awk' programs start with a line like:
#!/usr/bin/gawk -f
... or PERL programs with a she-bang like:
#!/usr/local/bin/perl
... and programs written using the 'expect' language (a derivative of TCL) would naturally start with something like:
#!/usr/local/bin/expect -f
After you fix that here are some other comments that I've inserted into your code (my comments start with the ## -- double hash):
 #! intro
 # test of login script

 name=$LOGIN
##  should be quoted:  name="$LOGIN" in case $LOGIN had
## any embedded whitespace.  It shouldn't, but your scripts
## will be more robust if you code them to accept the most
## likely forms of bad input.

## Also I don't think $LOGIN is defined on most forms of
## UNIX.  I know of $LOGNAME and $USER, but no $LOGIN

## Finally, why assign this to some local shell variable?
## Why not just use the environment variable directly
## since you're not modifying it?

 if [ $name = keo ]
 then
     echo "Whats up mano?"
## That can be replaced with:
##     [ "$name" = "keo" ] && echo "What's up mano?"
## Note the quotations, and the use of the && conditional
## execution operator
 else
## don't need an else, just let this test drop through to here
## (the else would be useful for cases where the tests were expensive
## or they had "side effects."
     if [ $name = dan ]
     then
         echo "Lifes a peach, homeboy."
## [ "$name" = "dan" ] && echo "Lifes a peach, homeboy."
     else
          if [ $name = $other ]
             then
                 exit
          fi
     fi
 fi
 exit
## $other is undefined.  Thus the [ ('test') here will give
## you a complaint.  If it was written as: [ "$name" = "$other" ]
## then the null expansion of the $other (undefined) variable
## would not be a problem for the 'test' command.  The argument
## would be there, albeit empty.  Otherwise the = operation
## to the 'test' command will not have its requisite TWO operands.

## All eight of these trailing lines are useless.  You can just
## drop out of the nested tests with just the two 'fi' delimiters
## (technically 'fi' is not a command, it's a delimiter).
Here's a more effective version of the script:
#!/bin/sh
case "$LOGNAME" in
     jon)
	echo "Whats up mano?" ;;
     dan)
	echo "Lifes a peach, homeboy."
       *)
       # Do whatever here for any other cases
       ;;
esac
This is pretty flexible. You can easily extend it for additional cases by inserting new "foo)" clauses with their own ";;" terminators. It also allows you to use shell globbing and some other pattern matching like:
#!/bin/sh
case "$LOGNAME" in
     jon|mary)
	echo "Whats up mano?" ;;
     dan)
	echo "Lifes a peach, homeboy."
     b*)
	echo "You bad, man!"
       ;;
esac
Note that this greets "jon" or "mary" in the first clause, "dan" in the second, and anyone whose login name starts with a "b" in the last case.
Any good book on shell scripting will help you with this.

(?) Step through a Program

From Bubba44hos on Fri, 05 May 2000

I have a question and I can't seem to find the answer anywhere. My question is "sttep through a p[rogram being loaded into the system". If you could help, that would be great. Thank you for your time, Brian

(!) Argh!
What does this mean? First "step through a program being loaded into the system" is not a question; it's a directive.
Does this (instructor?) want you to explain how to "single step" through a program (using a debugger like 'gdb')? Does he or she want you to explain the process of how programs get "loaded into" (installed and configured) a system? Does he or she want to hear about how programs are loaded (executed) by a shell?
Anyway those are all interesting and vastly different topics. None of them have simple answers since they depend a lot on what sort of system, who is doing the "loading," and what sort of "program" we are talking about.
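If it's the first of those, the short answer is a debugger. A minimal 'gdb' session looks something like this (a sketch --- 'myprog' is a made-up name, and the program must be compiled with -g):

$ gcc -g -o myprog myprog.c
$ gdb ./myprog
(gdb) break main          # stop at the top of main()
(gdb) run
(gdb) next                # execute one source line, stepping over calls
(gdb) step                # ... or step *into* function calls
(gdb) print some_var      # inspect a variable
(gdb) quit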

(?) Linux is {Now|Not} UNIX

From Mark Hugo on Fri, 05 May 2000

Jim:

I hope this doesn't make me sound too ignorant. Is it possible to get a Unix system (Not a Linux) on a PC?

I have a potential job opportunity if I have some Unix "experience". Is there a simulator available for a PC? Are Linux and Unix similar enough to learn on RedHat Linux? Or are they too different?

Mark Hugo, Mpls, MN

(!) Linux is the best UNIX "simulator" for the PC.
Linux is similar enough to other forms of UNIX for over 90% of the work you would do as a sysadmin and over 80% of what you'd be doing in the course of normal (applications level) programming.
You can also get a variety of other forms of UNIX for the PC: FreeBSD (http://www.freebsd.org) and its ilk (NetBSD http://www.netbsd.org, BSDI/OS http://www.bsdi.com, and OpenBSD http://www.openbsd.com), Solaris/x86 (http://www.sunsoft.com) and SCO "OpenDesktop" and Unixware (http://www.sco.com).
Most of these are free. All have versions that are "free for personal use."
BTW: The fact that your experience is limited to PCs is more likely to be a problem than the fact that you only have Linux experience. PCisms are worse in many regards than the differences between Linux and other forms of UNIX.
Also note that Linux is not just for PCs anymore. There are versions that run on Alpha, PowerPC (Macintosh and other), SPARC and other platforms.

(?) Embedding Newlines in Shell and Environment Values

From Mark Chitty on Mon, 08 May 2000

Hi Jim,

Thanks for the solution. I had gone down a different path but this has cleared up that little conundrum !! It seems obvious now that you point it out.

Your reply is much appreciated.

Oh yes, if you ever write a book let me know. I'll buy it !!

cheers, mark

(!) Actually I have written one book --- it's _Linux_System_Administration_ (New Riders Publishing, 1999, with M Carling and Stephen Degler). (http://www.linuxsa.com).
However, I might have to write a new one on shell scripting. Oddly enough it seems to be a topic of growing interest despite the ubiquity of PERL, Python, and many other scripting languages.
In fact, one thing I'd love to do is learn enough Python to write a book that covers all three (comparatively). Python seems to be a very good language for learning/teaching programming. I've heard several people refer to Python as "executable pseudo-code."
Despite the availability of other scripting languages, the basic shell, AWK, and related tools are compelling. They are what we use when we work at the command line. Often enough we just want our scripts to "do what we would do manually" --- and then to add just a bit of logic and error checking around that.
Extra tidbit:
I recently found a quirky difference between Korn shell ('93) and bash. Consider the following:
echo foo | read bar; echo $bar
... whenever you see a "|" operator in a shell command sequence you should understand that there is implicitly a subshell (new process) that is created (forked) on one side of it or the other.
Of course other processes (including subshells) cannot affect the values of your shell variables. So the sequence above consists of three commands (echo the string "foo", read something and assign it to a shell variable named "bar", and echo the value of (read the $ dereferencing operator as "the value of") the shell variable named "bar"). It consists of two processes. One on one side of the pipe, and the other on the other side of the pipe. At the semicolon the shell waits for the completion of any programs and commands that precede it, and then continues with a new command sequence in the current shell.
The question becomes whether the subshell was created on the left or the right of the | in this command. In bash it is clearly created on the right. The 'read' command executes in a subshell. That then exits (thus "forgetting" its variable and environment heaps). Thus $bar is unaffected after the semicolon.
In ksh '93 and in zsh the subshell seems to be created to the left of the pipe. The 'read' command is executed in the current shell and thus the local value of "bar" is affected. Then the subsequent access to that shell variable does reflect the new value.
As far as I know the POSIX spec is silent on this point. It may even be that ksh '93 and zsh are in violation of the spec. If so, the spec is wrong!
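You can see the difference right from the command line (a hypothetical session; if your 'ksh' is pdksh rather than a real ksh '93, it will behave like bash here):

$ bash -c 'echo foo | read bar; echo "bar=[$bar]"'
bar=[]
$ ksh -c 'echo foo | read bar; echo "bar=[$bar]"'
bar=[foo]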
It is very useful to be able to parse a set of command outputs into a local list of shell variables. Note that for a single variable this is easy:
bar=$(echo foo)
or:
bar=`echo foo`
... are equivalent expressions and they work just fine.
However, when we want to read the outputs into several variables, and especially when we want to do so using the IFS environment value to parse them, we have to resort to inordinate amounts of fussing in bash, while ksh '93 and newer versions of zsh allow us to do something like:
grep ^joe /etc/passwd | IFS=":" read login pw uid gid gecos home sh
(Note the form: 'VAR=val cmd' as shown here is also a bit obscure but handy. The value of VAR is only affected for the duration of the following command --- thus saving us the trouble of saving the old IFS value, executing our 'read' command and restoring the IFS).
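For comparison, here's one way to do that same job under bash --- the "inordinate fussing" I mentioned (a sketch; it assumes a "joe" entry actually exists):

line=$(grep '^joe:' /etc/passwd)
OLDIFS="$IFS"
IFS=:
set -- $line              # word-split the line on colons
IFS="$OLDIFS"
login=$1 pw=$2 uid=$3 gid=$4 gecos=$5 home=$6 shell=$7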
BTW: If you do need to save/restore something like IFS you must use proper quoting. For example:
OLDIFS="$IFS"
# MUST have double/soft quotes here!
IFS=:,
# do stuff parsing words on colons and commas
IFS="$OLDIFS"
# MUST also have double/soft quotes here!
Anyway, I would like to do some more teaching in the field of shell scripting. I also plan to get as good with C and Python as I currently am with 'sh'. That'll take at least another year or so, and a lot more practice!

(?) Sizing the Home Directories: Quotas and Partitioning

From Hank on Wed, 10 May 2000

I understand that under Linux you can set the home directories to a certain size. Either I am not looking in the right place or for the right thing, but I can't seem to find any info on this. I run Mandrake v7.0, and I am just trying to learn about Linux as best I can. I love the Linux-on-a-floppy distributions; I can show everyone I know how well Linux runs now.

Thanks for your help, Hank

(!) It depends on what you mean by "set ... to a certain size."
First, your home directories under Linux, or any form of UNIX, can be any normal directory tree. Normally the home directory for each account is set via a field in the /etc/passwd file (the main repository for all user account information --- ironically the one vital bit of account data that is normally no longer stored in /etc/passwd is the user's password hash; but that's a long story).
Under Linux it is common to have all of the user home directories located under /home. This should be on its own filesystem (partition) or it should be a symlink to some directory that is not on the root filesystem. Actually the whole issue of how filesystems should be laid out is fraught with controversy among sysadmins and techies. There is a relatively recent movement that says: "just make it all one big partition and forget about all this fussing with filesystems."
Anyway, you are free to configure your filesystems pretty much any way you want under Linux. You can have several hard drives: two per IDE channel (/dev/hda and /dev/hdb for the first controller, /dev/hdc and /dev/hdd for the next, and so on), 7 for each traditional SCSI host, and 15 for the "wide" controllers (/dev/sda, /dev/sdb, etc). Each hard drive can have up to four primary partitions (/dev/hda1, /dev/hda2, etc) one of which can be an "extended partition container" (actually there are apparently now TWO types of "extended container" partition types, so you can have one of each). The "extended container" partitions can hold a number of additional partitions. I've heard that you can have up to 12 partitions on a drive (I don't think I've ever gone beyond 10).
Unfortunately you have to make these decisions early on (when running 'fdisk' during your Linux installation). There is an 'ext2resize' program floating around the 'net. I haven't tried it yet (maybe on my next "sacrificial" system).
So, you can limit the size of the whole home directory tree by simply putting /home on its own filesystem (and sizing it as you need).
To limit how much space individual users can consume (under their home directories or on any other filesystems) you can use the Linux "quotas" support. This involves a few steps. You must ensure that the "quotas" feature is enabled in your kernel (I suspect that Mandrake ships with this setting). Then you want to read the instructions in the Quota mini-HOWTO at http://www.linuxdoc.org/HOWTO/mini/Quota.html
Once the kernel support is there basically you do the following:
*) Create a couple of (initially empty) files at the root of each partition (fs) on which you wish to enforce quotas (see the sketch after this list).
*) Edit your /etc/fstab file to add the usrquota and/or grpquota mount options to each of these filesystems.
*) Run the command 'edquota' (with the -u or -g option for user or group quotas respectively) and create a series of text entries to describe your quota policies in the appropriate syntax.
*) Ensure that the "quotaon" command is run by your system startup scripts (the init or "rc" scripts). (This is probably already being managed by your distribution).
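A minimal sketch of those first two steps, assuming /home is its own ext2 filesystem on /dev/sda3 (the device name here is just an example):

touch /home/quota.user /home/quota.group
chmod 600 /home/quota.user /home/quota.group

# and the matching /etc/fstab line:
/dev/sda3   /home   ext2   defaults,usrquota,grpquota   1 2

After a reboot (or a 'quotacheck -avug' followed by 'quotaon -a') you'd set per-user limits with 'edquota -u someuser'.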
Note that the mini-HOWTO is good, but you must follow it carefully. Be particularly careful about the syntax you use in these quota files.
The whole affair is further complicated by the existence of both hard and soft quotas. Basically you can set two different limits on each user or group's utilization of the space on each of your filesystems. The "soft quota" marks a point at which the users will start to get warnings, while the hard quota marks the point at which attempts to create files or allocate more blocks to existing files will fail.
Read Mr. Tam's mini-HOWTO --- it's pretty old, but it has the details you need. It also shows some techniques for using one user's quota configuration as a template --- so you can clone those settings to other users quickly and automatically without having to manually edit your quota files all the time.

(?) Use the Sources, Dude!

From pundu on Wed, 10 May 2000

Hi,

I would like to know how one can calculate the cpu load and memory used by processes, as shown by the 'top' command. It would be nice if anyone could explain to me how I could do this by writing my own programs, or by any other means.

(!) Why don't you download the sources to 'top' and 'uptime' and read them? On a reasonably modern Debian system you could just issue the command 'apt-get source procps' to have your system find, fetch, unpack and patch those. ('top', 'uptime', 'kill' and a number of other process management commands are in the "procps" package --- since these are all tools that implement process management and reporting using the /proc kernel support.)
(Technically there were/are other ways to do these sorts of process management things, in cases where you don't have /proc enabled --- but they are not widely used anymore. There is a /proc alternative that's implemented as a device driver --- for embedded systems, and there are some old techniques for doing it by reading some of the kernel's data structures through /dev/kmem --- basically by using root level read access to wander around the kernel's memory, extracting and parsing bits of it from all over).
Your distribution probably came with sources (maybe on an extra CD) or you could always wander around Metalab (formerly known as Sunsite) http://metalab.unc.edu/pub/Linux to find lots of source code for lots of Linux stuff. You might also look at Freshmeat (http://www.freshmeat.net), Appwatch (http://www.appwatch.com) and even ExecPC's LSM (Linux Software Map) at http://www.execpc.com/lsm (You can even get 'appindex', a little curses package which can help you find apps from Freshmeat and the LSM by downloading RSS files from each of them on demand).

[ As of publication time, there's another one, called IceWALKERS (www.icewalk.com) -- Heather ]

Another good site to find the sources to your free software is the "Official GNU Web site" (http://www.gnu.org) and at the old GNU master archive site: ftp://prep.ai.mit.edu/gnu
Of course you could always compare these sources to those from another free implementation of UNIX. Look at the FreeBSD web site (http://www.freebsd.org) and its ilk (OpenBSD http://www.openbsd.org and NetBSD http://www.netbsd.org).
Of course I realize that you might not have realized that the source code was available. That's one of the features of Linux that you may have heard touted in the press. That "open source" thing means you can look at the sources to any of the core systems and packages (from the kernel, and libraries, through the compilers and the rest of the tool chain, and down into most of the utilities and applications).
I also realize that many people have no idea how to find these sources. Obviously the first step is to find out what package the program you want to look at came from. Under any of the RPM based systems (S.u.S.E., Red Hat, TurboLinux, Caldera OpenLinux, etc) you can use a command like 'rpm -qf /usr/bin/top' to find out that 'top' is part of the procps package. Under Debian you could install the dlocate package, or use a command like 'grep /usr/bin/top /var/lib/dpkg/info/*.list' or one like 'dpkg -S bin/top' (note I don't need a full path in that case). All of these will give you a package name (procps in this case). Then you can use the techniques and web sites I've mentioned above to find the package sources.
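For example (hypothetical output --- your version numbers will differ):

$ rpm -qf /usr/bin/top
procps-2.0.6-5

$ dpkg -S bin/top
procps: /usr/bin/top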
Incidentally the canonical (master) URL for procps seems to be:
ftp://people.redhat.com/johnsonm/procps/procps-2.0.6.tar.gz
... according to the Appindex and LSM entries I read.

(?) Cron

From Drew Jackson on Sun, 14 May 2000

Dear sir:

I have recently installed an anti-virus software program that is executed from the command-line. I would like for this service to run at regular intervals (i.e. every 2 hours).

I am using a Red Hat 5.2 based platform without GUI support.

Thank you for your time and effort.

Sincerely, Drew Jackson

(!) Short answer: use cron (the UNIX/Linux scheduling daemon/service).
The easiest way to do this would be to add a text entry to the /etc/crontab file that would look something like:
0   */2	   *    *    *		root	/root/bin/vscan.sh
(Obviously you'd replace the filename /root/bin/vscan.sh with the actual command you need to run, or create a vscan.sh shell script to contain all of the commands that you want to run).
This table consists of the following fields: minute, hour, day of month, month, day of week, user, command. Each of the first five fields is filled with numeric values. So the minutes field is 0-59, from the first to the last minute within any hour. The "*" character means "every" (like in filename globbing, or "wildcard matching"). The hours are 0-23, the dates are from 1-31, etc. The syntax of this file allows one to specify ranges (9-17 for 9:00 am to 5:00 pm for example), lists (1,15 for the first and fifteenth --- presumably ones you'd use for dates within a month), and modulo patterns (such as the one in my example, which means "every other" or "every even"). So, to do something every fifteen minutes of every other day of every month I'd use a pattern like: '*/15 * */2 * * user command'.
The day of week and the months can use symbolic names and English abbreviations in the most common versions 'cron' utility (by Paul Vixie) that are included with Linux distributions.
You can read the crontab(5) man page for details. Note that there is a 'crontab' command which has its own man page in section one. Since section one (user commands) is generally searched first --- you have to use a command like: 'man 5 crontab' to read the section five manual page on the topic. (Section five is devoted to text file formats --- documenting the syntax of many UNIX configuration files).
This system is pretty flexible but cannot handle some date patterns that we intuitively use through natural language. For example: 2nd Tuesday of the month doesn't translate directly into any pattern in a crontab. Generally the easiest way to handle that is to have a crontab entry that goes off the minimal number of times that can be expressed in crontab patterns, and have a short stub of shell code that checks for the additional conditions.
For example, to get some activity on the second Tuesday of the month you might use a crontab entry like:
0 0 * * 2 joe /home/joe/bin/2ndtuesday.sh
which runs (once, at midnight) every Tuesday. If we used a pattern like:
0 0 8-14 * 2 joe /home/joe/bin/2ndtuesday.sh
... our command would run on every Tuesday and on each of the days of the second week of the month (from the 8th through the 14th), since cron fires when either the day-of-month or the day-of-week field matches. This is NOT what we want. So we use the former pattern and have a line near the beginning of our shell script that looks something like:
#!/bin/bash

# Which week is this?
weeknum=$[ ( $(date +%e) - 1 ) / 7 + 1 ]
## returns 1 through 5
[ "$weeknum" == 2 ] || exit 0

# Rest of script below this line:

Of course that could be shortened to one expression like:
[ "$[ $(date +%e) / 7 + 1 ]" == 2 ] || exit 0
... which works under 'bash' (the default Linux command shell) and should work under any recent version of ksh (the Korn shell). That might need adjustment to run under other shells. This also assumes that we have the FSF GNU 'date' command (which is also the default under Linux).
Of course, if you were going to do this more than a few times we'd be best off writing one script that used this logic and calling that in all of our crontab entries that needed it. For example we could have a script named 'week' that might look something like:
#!/bin/bash
## Week
##  Conditionally execute a command if it is issued
##  during a given week of the month.
##  weeks are numbered 1 through 5

[ $# -ge 2 ] || {
  echo "$0 requires least two args: week number and command" 1>&2
  exit 1
  }

[ "$(( $1 + 0 ))" == "$1"  ] &> /dev/null || {
  echo "$0: first argument must be a week number" 1>&2
  exit 1
  }

[ "$[ $(date +%e) / 7 + 1 ]" == "$1" ] || exit 0
shift
eval $@
... or something like that.
(syntax notes about this shell script: '[' is an alias for the 'test' command; '$#' is a shell scripting token that means "the number of arguments"; '||' is a shell "conditional execution operator" (means, if the last thing returned an error code, do this); '1>&2' is a shell redirection idiom that means "print this as an error message"; '$[ ... ]' and '$(( ... ))' enclose arithmetic expressions (a bash/ksh extension); '$@' is all of our (remaining) arguments; and the braces enclose groups of commands, so my error messages and exit commands are taken together in the cases I've shown here).
So this shell script basically translates to:
If there aren't at least 2 command line arguments here, complain and exit. If the first argument isn't a number (adding 0 to any number should yield the same number) then complain and exit. If the week number of today's date doesn't match the one given in the first argument then just exit (no complaint). Otherwise, forget that first argument and treat the rest of the arguments as a command.
(Note: cron automatically sends the owner of a job e-mail if the command exits with a non-zero (error) value or if it produces any output. Normally people write the cron job scripts to avoid generating any normal output --- they either pipe the output into an e-mail, redirect it to /dev/null or to some custom log file; and/or possibly add 'logger' commands to send messages to the system logging services ('syslog'). E-mail from 'cron' consists of some diagnostics information and any output from the job).
In some fairly rare cases it would be necessary to wrap the target command, or parts of it, in single quotes to get it to work as desired. Those cases involve subtleties of shell syntax that are well beyond the task at hand.
A more elaborate version of that shell script might allow one to have a first argument that consisted of more than one week number. The easiest way to do that would be to require that multiple week numbers be quoted and separated with spaces. Then we'd call it with a command like 'week "1 3" $cmd' (note the double quotes around 1 and 3).
That would add about five lines to my script. Anyway, I don't feel like it right now so it's left as an exercise to the reader.
Anyway, 'cron' is one of the most basic UNIX services. It and the related 'at' command (schedule "one time" events) are vital system administration and user tools. You should definitely read up on them in any good general book on using or administering UNIX or Linux. (I personally think that they are woefully underused judging from the number of "temporary" kludges that I have found on systems. Hint: every time you do something that's supposed to be a "temporary" change to your system --- submit an 'at' job to remind you when you should look at it again; maybe to remove it).
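For instance (a sketch --- the file and the wording of the reminder are just illustrations), you could submit such a reminder with:

echo 'echo "Time to re-check the temporary hack in /etc/fstab" | mail -s reminder joe' | at now + 2 weeks

The 'at' utility reads the job's commands from its standard input and accepts human-ish time specifications like 'now + 2 weeks' or 'teatime tomorrow'.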
BTW: I'd suggest that you seriously consider upgrading to a newer version of Linux. Red Hat 5.2 was one of the most stable releases of Red Hat. However, there have been many security enhancements to many of the packages therein over the years.

(?) DIR /S

From Romulus Gintautas on Sun, 14 May 2000

First off, thank you for your time.

I did a man on ls but did not find what I was looking for. I'm looking for a linux equivalent of dir /s (DOS). Basically, I am looking for a way to find how much data is stored in any specific dir in linux (red hat 6.0). As you know, in dos, all you do is enter the dir in question and just do dir /s.

(!) Under UNIX we use a separate command for that.
You want the 'du' (disk usage) command. So a command like:
du -sck foo bar
... will give you summaries of the disk usage of all the files listed under the foo and bar directories. It will also give a total, and the numbers will be in kilobytes. Actually "foo" and "bar" don't have to be directory names; you can list files and directories --- basically as many as you like. Of course you can mix and match these command line switches (-s -c -k, and many others).
To work with your free disk space you can use the 'df' (disk free) command. It also has lots of options. Just the command 'df' by itself will list the free disk space on all of your currently mounted regular filesystems. (There are about a half dozen pseudo-filesystems, like /proc, devpts, the new devfs and shmfs and some others, that are not listed by 'df' --- because the notion of "free space" doesn't apply to them).
Anyway, read the man pages for both of these utilities to understand them better. Read the 'info' pages to learn even more.
Incidentally --- if you want to get more detailed information about a list of files than 'ls' can provide, or you need the meta information in a custom format, then you usually want to use the UNIX/Linux 'find' command. This is basically a small programming language for "finding" a set of files that match a set of criteria and printing specific types of information about those files, or executing commands on each of them.
In other words 'find' is one of the most powerful tools on a UNIX system. As a simple example, if I want to find the average file sizes of all of the "regular" files under a pair of directories I can use a command like:
      find foo bar -type f -printf "%s\n" | awk '{ c++; t+= $1 }; END { print "Average: ", t/c }'
The 'find' command looks at the files/directories named "foo" and "bar" finds all of them that are of type "f" (regular files) and prints their sizes. It doesn't print ANYTHING else in this case, just one size in bytes, per line. The 'awk' command computes the average (awk is a little programming language, simpler than PERL).
To find all of the files older than one week in the current directory you can use a command like:
find . -ctime +7
... for those that are newer than a week:
find . -ctime -7
... (BTW: UNIX keeps three timestamps on its files: mtime is the timestamp on the file's data --- updated when the data blocks are modified; ctime is the timestamp on the "inode" --- updated when the file's meta-data OR its data are changed; and atime is the last "access" (read) time).
I think the current version of GNU 'find' has about 60 options and switches (including support for -and, -or, and -not for combining complex expressions) and the -printf and -fprintf directives support about 25 different "replaceable parameters" and a variety of formatting options within some of those.
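For example (a sketch using just a few of those directives), to list the owner, octal permissions, size and name of the regular files over a megabyte that were modified within the last day:

      find /var/log -type f -size +1024k -mtime -1 -printf "%u %m %s %p\n"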
About the only bit of 'stat' information I can't get from 'find' is the "device number" on which a file resides. (Under UNIX every file can be uniquely identified by the combination of device number and inode; inodes are unique within any given device). 'find' also doesn't (yet) give me the ability to print or test some special "flags" (BSD UFS) or "attributes" (Linux ext2).
I've been meaning to write a custom patch to add those features.

(?) I apologize if this is a simple question. I am just starting in Linux and hope to learn a lot more.

Rom

(!) That's O.K. I'm too tired to do hard questions at the moment.

(?) Limiting "Public Interfaces" on Share Libraries

From Rik Heywood on Mon, 15 May 2000

I am trying to create a shared library that my other programs can load and use. I have managed to get this to work and all is well. However, I am trying to limit the functions that the library exports to just the ones I want. On win32 there are numerous ways of achieving this (eg listing the functions you want to export in a .def file, adding __dllexport to the function definition). I feel sure it will be possible in Linux, but so far I have been unable to figure it out. Any ideas?

Rik Heywood.

(!) I don't know why you'd do such a thing. It can't possibly be used for any security purpose (either someone or some program has read/execute permission to the whole shared library, or not).
From what I gather you "export" a C function from a library by documenting its interface in a header (.h) file. Frankly, even if the feature exists I think it would be of dubious value. If you limit access to some function you force the programmer to re-implement it in their own code (which goes against code re-use). If they do that then they've forked the functionality, and any refinement of the function(s) must now be done in multiple places (bad for maintainability). If you are simply trying to discourage the use of some internal interfaces (since they may change and you don't want to be saddled with backward compatibility responsibilities in those particular areas) then just comment and document them as internal (in your sources) and separate their prototypes into a different set of header files (which are not installed into the public include directory tree).
However, I'm not an expert. In fact I don't even consider myself to be a professional programmer (though I've done a bit of it here and there). So it's certainly possible that everything I've just said is idiotic gibberish. (Of course that would be possible even if I were a recognized expert).
As for the fact that this "feature" exists in Microsoft DLLs and programming tools --- it sounds like it's probably primarily useful if you need to create binary products that take advantage of "hidden" (undocumented) private interfaces which you plan to keep from your competitors.

(?) Corel Linux and Blank Passwords

Repairing Lost and Broken Passwords: A Redux

From Charles Gratarolli on Mon, 15 May 2000

(?) Hi,

After a few crashes I managed to install Corel Linux on my machine (Pentium II 450, IDE drive, 96 MB of memory). When the system asked me for a login and password, it didn't recognize them and gave me the following message:

COREL LINUX 1.0(tty1)

Login: XXXXX (the one I gave at the beginning of the installation) Password: Incorrect login

(!) This is fairly difficult to read and I'm not sure of the context. I think you are saying: "After I installed Corel Linux and rebooted the system, I tried to enter the same name and password at its login prompt that I had entered during the installation process." (I've only installed Corel Linux a couple of times, so I don't remember the exact sequence of installation dialogs).
Normally you'd have been prompted to create a password for 'root' (the system administration account) and you would have been offered a chance to create one or more user accounts --- which involves selecting at least a user name and initial password for each account. (Usually there's also a chance to fill in a full name, change the account's "home directory" and "login shell" settings, set the account's primary group membership and possibly add that account to a list of other groups).
What happens if you use the name 'root' (all lower case, no capital letters) at the "Login:" prompt and enter your password for that account? (BTW: It's a good idea to keep those passwords different. It's a wretched idea to login as 'root' when you want to run "normal" applications like a web browser, mail program etc).

(?) I left the password blank, as was said in the manual

(!) Did the manual really suggest that you should leave a password blank? That's irresponsible.
For situations when you really want to have a service accessible from the console with no password, it is better to configure the system to skip the password request than to set the password to be empty. Basically a username/password combination can potentially be used to access any service on a Linux/UNIX system. Usernames are fairly easy to find for a system, so it is almost impossible to enforce any security policy on an account with no password. If you want a service or program to be accessible without a password it's almost certain that you want to limit the access to specific files (e.g. just your HTML files in your document root directory tree), through specific means (e.g. just through the web server, for read-only access), etc.
Anyway, many Linux systems are configured to forbid blank passwords. Thus, it may be that the installation program let you leave the password blank while the login program(s) are enforcing this common policy.

(?) How can I change it now? Considering I am a newbie.....

Thank you Charles G.

(!) It depends. Is this a user account? Does logging in as 'root' work? If so, then just login as the root user (and open a "terminal" or "xterm" window if you've logged into a GUI) so you can type in commands.
First you need to know if the account you created exists.
Let's say you created your account name using your initials "cg." So you might use a command like:
grep cg: /etc/passwd
... if that doesn't pop-up a line that looks something like:
cg:x:1000:1000:Charles G:/home/cg:/bin/bash
... then you don't have a user account (or you mistyped something --- possibly when you created the account, or whatever).
You can create a user account using a command like:
useradd -m cg
... the -m tells 'useradd' to "make" a home directory for the new account. There are many options to the 'useradd' command. You can read more than you want to know about them by typing:
man useradd
Once you've created the account you can set the password using a command like:
passwd cg
... which, if done as 'root' will simply prompt you for a new password and ask you to repeat it. If you can type in the same string twice consecutively --- you will have successfully changed or set the password for that account.
You can also use the passwd command to change your own password by simply typing it (with no parameters or arguments). In that case it will require you to type your old password, and then repeat your new password twice.
Note that sometimes the 'passwd' command will complain that a password is "too short" or "too weak" or that it is "based on a dictionary word." The Linux 'passwd' command tries to enforce some "best practice" policies about your users' password selections in order to make the system more secure. Basically anyone who cracks into a user account on a system has a pretty good chance of using that to take control of the whole system eventually. (Also they can do quite a bit of damage to that user's files and quite a bit of snooping about in that user's e-mail etc. even if they don't manage to disrupt other users or the system itself).
I realize that you may not care about all this "security stuff" as a new Linux user. After all, you're probably adopting Linux after years of using MS Windows, which has no real concept of users and makes no effort to protect the system from "normal users" or to protect any one user's stuff from anyone else's.
However, it's a good idea to take a lesson from Microsoft's mistakes. You may want to consider having one account on your system for reading mail, a different one for doing your web browsing, another for playing games, and yet another for any of your important work. (With a little practice it's possible for these to share data without too much inconvenience while limiting the damage that a trojan horse (such as the ILOVEYOU e-mail virus) could do to your other work.)
(Of course Linux systems are unaffected by ILOVEYOU, Melissa and all of the other e-mail trojan/viruses so far. However, such a problem might eventually affect some Linux users. Luckily there are many different e-mail packages in widespread use under Linux --- any bug that could be used to exploit one is very unlikely to affect more than a small fraction of the total population. This "technodiversity" (analogous to the "biodiversity" that we need in our ecosystems) does protect us somewhat --- since the infection can't spread quickly or easily unless there is a critically high percentage of "monoculture" applications users).
(I could write a long article on the pros and cons of technodiversity vs. standardization and code re-use. However, I have a feeling that it would not be of much immediate interest to you).
Getting back to your problem. If you don't have a working root password then the job is a little more difficult. Basically you need to boot up the system in "rescue mode" or from a "rescue disc or diskette", mount the root filesystem, possibly mount a "/usr" filesystem on top of that, run the 'passwd' command, unmount the filesystems that you brought up, and restart the system from its hard drive.
Whoa! Did you get all of that? I didn't think so. Here's the same sequence again, with a little more explanation:
If you see the "LILO:" prompt while you're booting up the system you can usually hit the [Caps Lock] or the [Scroll Lock] key or just start typing to force the boot loader to pause at this point.
From there you can tap the [Tab] key to see a list of boot image "labels" (usually one will be named "Linux" or "linux").
From this prompt you can type a command like:
linux init=/bin/sh rw
... to bring up the system in a "rescue mode."
This will bypass the whole normal startup sequence and prevent the system's normal initialization program (init) from spawning the 'getty' processes that take over the console and force you to login.
BTW: It's possible to set a password on your LILO boot loader (by adding a line to your /etc/lilo.conf) that would prevent this trick from working. That password, if set, would not convey any other access to the system; it would only be demanded of someone at the console who tries to select or override the boot settings during the boot-up cycle.
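The relevant /etc/lilo.conf fragment would look something like this (a sketch --- "some-secret" is obviously a placeholder, and you'd have to re-run /sbin/lilo afterwards for the change to take effect):

# in the global section (or a per-image section) of /etc/lilo.conf:
password = some-secret
# only demand the password when extra boot parameters are given:
restricted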
The "rw" at the end is a convenience to make sure that the main (root) filesystem is brought up (mounted) in a read/write mode. Normally a UNIX/Linux system comes up with the root filesystem mounted read-only so that it can be checked and repaired.
You might have been offered a chance to make a custom rescue diskette during your installation. If you were wise you did.
If your system can boot from a CD drive then your distribution's CD can usually act as a "rescue disc." So you act as though you're going to re-install, but you use the keys [Alt]+[F2] (hold down the [Alt] key and hit the [F2], second function, key).
If that doesn't work, boot the system up under some other operating system or use a different computer and look for a "rescue diskette" image. Hopefully the instructions for that will be listed somewhere in your manual or on the web site for your favorite distribution. (Of course Corel's site is basically impossible to navigate if you're looking for technical support information specifically about their product. It doesn't seem to have a search engine and I don't see a link to a simple "Corel Linux FAQ").
Failing that look at Tom Oehser's site for his "Root/Boot" floppy (http://www.toms.net/rb) Unfortunately this is NOT a package for newbies.
If you booted from a rescue diskette you'd normally be running from a RAM disk. So you have to find your main (root) filesystem and mount it up. On a typical Linux system that would involve a command like:
mount /dev/hda1 /mnt
You need to know what type of hard drive you have (/dev/hd* for IDE, /dev/sd* for SCSI), which one it is (a for the first drive on the primary controller, and letters b, c, d, etc. for others), and which partition it's on (1 through 4 for the primary partitions, and 5-12 or so for any logical drives in an extended partition).
Once you've done that you should change into that directory (/mnt in my example and in most cases) and make it the "virtual" root directory using the following commands:
cd /mnt
chroot . /bin/sh
Even if you booted from the hard drive using the init=/bin/sh trick, you may have to bring up another filesystem. The 'passwd' command is usually in the /usr/bin directory, and the /usr directory is often separated onto its own filesystem. (It's traditional, and there are good reasons for it as well).
Here's the command to do that:
mount /usr
Finally you should be able to run the 'passwd' command to set a new password for yourself.
If you get some sort of error about a "read-only" filesystem then you probably forgot the rw option at your LILO prompt. Use the following command:
mount -o remount,rw /
and try again.
If that was successful then you should be able to unmount any filesystem that you mounted:
umount /usr
... and if you were booted from a rescue diskette or CD:
exit; umount /mnt
... or if you were booted from the hard drive:
mount -o remount,ro /
This sets up all of the filesystems so that they are "clean" and can be used immediately after the next step without a time-consuming consistency check.
Finally you should be able to reboot. This is actually a bit trickier than you'd think when you've booted into this "rescue mode." (If you booted from a diskette or CD, just pull that out and hit the reset switch).
If you've booted from your hard drive using the init=/bin/sh trick (what I call "rescue mode") then you should shut down and restart the system with the following command:
exec /sbin/init 6
... this is because the various sorts of 'shutdown' and 'reboot' commands usually just send a "signal" and perform some IPC (interprocess communications) with the 'init' program. In other words, normally only the init program does a reboot or a system halt (or changes "runlevels" --- operational modes). However, we bypassed the normal process and we're running a command shell instead of init. The shell isn't programmed to respond to those signals or to read the /dev/initctl pipe (FIFO) for messages.
We can't just "run" init like a normal program. init detects what process ID it is running under and only assumes system control if it is process ID number 1 (PID ==1). If not then it acts as a messenger, trying to pass signals and commands to the "real" init process. However, our shell is running as PID 1 --- so we need to tell the shell to "chain over" or "replace its code with" that of init.
I realize that all of that was pretty complicated. You don't have to understand the inner workings of init in order to run this last command or to follow most of this procedure.
It won't even be the end of the world if you just hit the red switch and reboot the system. However, I've tried to make this set of instructions simple enough and general enough that it will work on most Linux systems.
If you get too stuck, call tech support. I see that Corel offers a fee-based North American telephone technical support option at about $50 per incident (I guess that would be in U.S. dollars). Of course my employer Linuxcare (http://www.Linuxcare.com) offers per-incident fee-based support as well. You could call them at 1-888-LIN-GURU for details.
There are also many Linux consultants that might be able to help you, possibly in person. Look at the Linux Consultants HOWTO (http://www.linuxports.com/howto/consultants/Consultants-HOWTO.html)

(?) Windoze [sic] on 2nd Hard Drive

From Anthony Kamau on Tue, 16 May 2000

I have Linux installed on the 1st hard drive and want to boot to windoze on the 2nd hard drive. I read somewhere that I could fool windoze into thinking that it is on the first hard drive by changing a few parameters in the "lilo.conf" file. Would you happen to know what I need to add to this file in order to have it dual boot?

Thanks, Anthony.

(!) I don't know. But I don't recommend this way of doing things. MS Windows and other Microsoft products are somewhat brittle and it's a bad idea to try to fool them. Even if it works for some situations, for a while, it can break as the situation changes and whenever you upgrade any of their products.
So, I'd really suggest putting Linux on the second drive and letting MS Windows have the first drive. Linux is very flexible and is far less likely to break something like this from one upgrade to the next (and you'll always have the Linux sources to fix anything that we do break).
Remember, if you have any problems with LILO on a system where you are running any sort of MS-DOS derivative --- take the easy way out and run LOADLIN.EXE. It's often much easier than fussing with boot records. In the worst case, use a SYSLINUX or LILO floppy and boot Linux from that.
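For reference, a LOADLIN invocation from a real-mode DOS prompt looks something like this (a sketch --- the kernel path and root device are assumptions you'd adjust to your own layout):

REM copy your kernel image to the DOS partition first:
LOADLIN C:\LINUX\VMLINUZ root=/dev/hda2 ro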

"Linux Gazette...making Linux just a little more fun!"


More 2¢ Tips!


Send Linux Tips and Tricks to gazette@ssc.com


Getting the most from multiple X servers - in the office and at home

Fri, 19 May 2000 10:31:27 +1000
From: Bob Hepple <bhepple@bit.net.au>

I wonder how many people know how to get the most from the power of X - it really sets Unix apart from simple windowing PCs. Here is a tip that I've been using for years - maybe it will be news to others, as it's not really documented anywhere for the average user; it's rather buried in the man pages.

To set the scene, poor old dad often has to stand aside to let the rest of the family read their email, do their homework etc. This is a bit of a fag on certain well known proprietary windowing systems as you would have to

save your work, exit all applications, log out, let them play through, log them out, log back in, restore all applications

Rather than do all this, I simply create a new X session with the following command:

X :1 -query raita &

where 'raita' is the name of my computer. A new X server starts up and the visitor can log in and do their stuff. We can flip between their session and my own with Ctrl-Alt-F8 and -F7. When they are finished, they simply hit Ctrl-Alt-BackSpace or log out and I warp back to my own workspace with Ctrl-Alt-F7.

No loss of data, no messy logging in and out.

You need to be running an XDMCP session manager (e.g. xdm, gdm or kdm) for this to work. You are using XDMCP if you get a graphical logon at bootup. If you have a text-mode logon and run X with startx then you might need to modify this approach.

I also use this neat feature of X at work - we have many Unix systems that I need to log into from time to time - Linux, Solaris and UnixWare. I could use rlogin, rsh or xrsh but for some jobs nothing beats a full X session.

I can flip from one system to another by creating new X sessions on my Linux workstation. Normally at work I use a slightly modified command:

X :1 -indirect dun &

... where dun is running an XDMCP server (like xdm, gdm or kdm). It then gives me a chooser that I can use to pick which system to log into.

I often have many such sessions at once - just increment the display number for each and they map to different 'hotkeys':

X :1 -indirect dun .... Ctrl-Alt-F8
X :2 -indirect dun .... Ctrl-Alt-F9
X :3 -indirect dun .... Ctrl-Alt-F10

with Ctrl-Alt-F7 being the default X display :0

Another ploy is to use Xnest in a similar way. Instead of getting an extra X server, Xnest runs a new X session in a window. I use this:

Xnest :1 -indirect dun &

or, if I want to use a full-sized screen I use:

Xnest -geometry 1280x1024+0+0 :1 -indirect dun &

There are some minor issues with font sizes when using a smaller window, but generally it's not too bad.


Starting and stopping daemons

Fri May 26 16:13:11 PDT 2000
From: Mike Orr <mso@mso.oz.net>

If you get tired of typing "/etc/init.d/apache reload" every time you change your Apache configuration, or if you frequently start and stop squid (e.g., to free up memory for extensive image editing), use shell functions to take the tedium out of typing.

The following functions allow you to type "start daemon", "stop daemon", "restart daemon", and "reload daemon" to accomplish the same thing. They should work on Debian or a similar system which has a script for each daemon in /etc/init.d/, where each script accepts start, stop, restart and reload as a command-line argument.

I used zsh, so I put the following in my /root/.zshrc:

function start stop restart reload {  /etc/init.d/$1 $0  }
This creates four functions, each with an identical body. $0 is the command name (e.g. "start"); $1 is the first argument (the name of the daemon).

The equivalent functions in bash look like this:

function start { /etc/init.d/$1 start; }
function stop { /etc/init.d/$1 stop; }
function restart { /etc/init.d/$1 restart; }
function reload { /etc/init.d/$1 reload; }
bash puts "-bash" into $0 instead of the command name. Perhaps there's another way to get at the command name, but I just chose to make four functions instead.

Debian actually puts the name of the package in /etc/init.d/; this may be different from the name of the daemon. For instance, the lpd daemon comes from a package called lprng. An enhancement to the functions would be to recognize lpd, lpr and lp as synonyms for the easily-forgotten lprng; a sketch of that follows.
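[That enhancement might look something like this in bash (a sketch --- the synonym list is just the lprng example above, and each of the four functions would want the same mapping):

function start {
    local svc=$1
    case $svc in
        lpd|lpr|lp) svc=lprng ;;    # map daemon names to the package name
    esac
    /etc/init.d/$svc start
}

-Ed.]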


Disabling the console screensaver

Fri May 26 16:13:11 PDT 2000
From: Jim Dennis <jimd@starshine.org>

Shane Kennedy <skenn@indigo.ie> asked the Answer Guy:

How do I switch off the shell screensaver?
setterm -blank 0

It's a feature of the Linux console driver, not the shell.


Tips in the following section are answers to questions printed in the Mail Bag column of previous issues. These tips were compiled with help from Michael Williams (Alex).


 

Linux Kernel Split

Thu, 04 May 2000 08:34:09 -0500
From: Christopher Browne <cbbrowne@hex.net>


This can refer to two things:

a) The fact that Linux kernel releases are split into "stable" and "experimental" releases.

Thus, versions numbered like 1.1.n, 1.3.n, 2.1.n, 2.3.n represent "experimental" versions, where addition of new functionality is solicited, whilst those numbered 1.0.n, 1.2.n, 2.0.n, 2.2.n, 2.4.n (even second components) represent "stable" versions, where changes are intended only to fix problems.

Occasionally, "experimental" functionality gets backported to the "stable" releases, but this is not the norm.

b) There is a theory that, at some point, development of Linux could "split" to multiple independent groups.

For instance, there are some people working on functionality intended to support big servers (e.g. - SMP, various filesystem efforts). And there are others building functionality supportive of tiny embedded systems (Lineo, Embeddix, ...)

The theory essentially goes that since their purposes are different, there may be some point at which the needs may diverge sufficiently that it will not make sense for there to be a single point of contact (e.g. Linus Torvalds) to decide the direction of development of _THE_ official Linux kernel.

What might happen is that a group would take a particular version of the Linux kernel source code, and start developing that quite independently of another.

For instance, there might be a "split" where the embedded developers start developing the kernel in a way attuned to their needs.

This is _essentially_ what happened when OpenBSD "split" off of the NetBSD project; the developers concluded that they could not work together, and so a new BSD variant came into being.

The use of the GNU General Public License on the Linux kernel does mean that it would be legally permissible for a person or a group to perform such a "split."

It would, however, be quite _costly_, in that it would mean that the new group of developers would no longer have much benefit from the efforts of people on the other side of the split. It is a costly enterprise (whether assessed in terms of money, or, better, time and effort) to keep independent sets of source code "in sync" once they are purposefully taken out of sync.

Hope this helps provide some answers to the question...

 


Incorrect Tip....

Date: Sat, 13 May 2000 15:57:49 -0400
From: Tony Arnett <lkp@bluemarble.net>

A tip was given for Linux systems that do not recognize the total amount of available RAM.

The tip given was to insert the following param into "lilo.conf"

append="ram=128M"

I had no such luck with this param. I think the proper param to use is:

append="mem=128M"

This worked for me on my Gentus Linux 1.0 system.

Here is my entire lilo.conf


boot = /dev/hda
timeout = 50
prompt
default = linux
vga = normal
read-only
map=/boot/map
install=/boot/boot.b
image = /boot/vmlinuz-2.2.13-13abit
label = linux
initrd = /boot/initrd-2.2.13-13abit.img
root = /dev/hda5
append="hdc=ide-scsi hdd=ide-scsi mem=128M"
other = /dev/hda1
label = win



I hope this will help someone.

Lost Kingdom Productions
Tony Arnett

[It is definitely append="mem=128M" as you say. I use it myself. The only instance of "ram=" I could find was in http://www.linuxgazette.com/issue44/tag/46.html, and it is quoted in part of the question, not as the answer. If there are any other places where it says "ram=128M", please let me know where and I'll fix them immediately.

I looked in the Bootprompt-HOWTO
http://www.ssc.com/mirrors/LDP/HOWTO/BootPrompt-HOWTO.html and did not see a "ram=" parameter. There are some "ramdisk_*=" parameters, though, but that's a different issue. -Ed.]


Re: Command line editing

Wed, 17 May 2000 08:38:09 +0200
From: Sebastian Schleussner
Sebastian.Schleussner@gmx.de

I have been trying to set command line editing (vi mode) as part of
my bash shell environment and have been unsuccessful so far. You might
think this is trivial - well so did I.
I am using Red Hat Linux 6.1 and wanted to use "set -o vi" in my
start up scripts. I have tried all possible combinations but it JUST DOES
NOT WORK. I inserted the line in /etc/profile , in my .bash_profile, in
my .bashrc etc but I cannot get it to work. How can I get this done? This
used to be a breeze in the korn shell. Where am I going wrong?

Hi!
I recently learned from the SuSE help that you have to put the line
set keymap vi
into your /etc/inputrc or ~/.inputrc file, in addition to what you did
('set -o vi' in ~/.bashrc or /etc/profile)!
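[For reference, a minimal ~/.inputrc combining that with readline's related 'editing-mode' variable might read:

set editing-mode vi
set keymap vi

-Ed.]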
I hope that will do the trick for you.

Cheers,
Sebastian Schleussner


This page written and maintained by the Editor of the Linux Gazette. Copyright © 2000, gazette@ssc.com
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


0800-LINUX: Creating a Free Linux-only ISP

By Carlos Betancourt


The Open ISP Project


Preface

Free Internet access. It's a phrase we hear everywhere. With the proliferation of ISPs, Internet access is getting hot... I mean, cool. Whatever. Prices are going down every day. But there's a limit. We always have to pay the Phone Company for our "free" Internet time. In countries where there is a PSTN monopoly, the end user is usually abused by the almighty Phone Company. And in countries where local phone calls are free, users always have to pay the ISP. Even if you are OK with that, we must all acknowledge that as Linux users we get marginal support from our ISPs. Yes, there are a lot of Linux-friendly ISPs, but what about the power features, like encrypted PPP sessions or serial load balancing? There's even a new modality: advertisement-sponsored ISPs. Just by loading a "bar" which displays ads while you use the ISP, you get "free" Internet access (phone call charges vary depending on your zone coverage or country). Of course, there is no Linux version of such services, and even if there were, would you agree to eat their ads? In this article, I want to begin a discussion of why the Linux community needs a truly zero-cost and feature-rich ISP, and how such a project would benefit the entire Linux community, our own countries, and the IT world in general. To reach these goals, I believe the zero-cost ISP project should be Linux-only. Keep on reading, and I'll explain why.

There have been efforts all around the world to bring Internet access costs down, namely "flat rates"; some have partially succeeded, others not. Why? Because THEY DON'T HAVE THE COMMUNITY SUPPORT/UNITY WE HAVE.

1. Why Linux needs a zero-cost ISP

The need is out there... here is what I believe a zero-cost ISP can do for Linux and for Nations:

1.1. Nurture the new Linux minds: as an intelligent species, we nurture the youth to become the next generation of leaders and supporters of our society. If we provide the means for our kids and teenagers to learn and develop themselves, we will be a successful society in the long run. Professional soccer/baseball clubs have junior leagues where kids grow up honing their skills. Those who invest in the young ones are the ones who survive. I'm not worried about Linux's survival, but it's certain that for now we are still a minority. And in this new IT era, we need people to support, ten years from now or even sooner, all the infrastructure we are building today. Who in the family has the lowest priority to use the Internet, the computer or the phone? The kids. As simple as that. If you pay for the phone by the minute, most parents won't let their kids spend hours online. And if the parents use the computer regularly, the kids must stay away from it. And most parents consider a computer too expensive a toy to buy a new one for the kids. If we, as a community, nourish our youth, success is inevitable. Many people argue, these days, about winning the desktop war. Give kids Linux and we'll see where we are five years from now.

1.2. Bring enlightenment: how to expand our user base: people use what they are given to use. If you buy a new computer, which OS do you get by default? I know this story is ending, with recent support from hardware integrators, or, for instance, the deal between Corel and PC Chips. OK, from now on people will have a choice, although not so soon. We as a community must develop a strategy to attract people to our OS. How? Give them free, I mean _free_, Internet access. We have to give people a reason to use Linux. We have lots of ISPs around, each one trying to win new customers using different strategies and features. They distribute Windows-only setup CD-ROMs to ease the subscription process. And most of them claim "Free Internet Access". What it really is is a half-truth. You still have to pay for the phone call. There are others that give you one "free month", and then charge you for a maximum amount of time online per month, with the phone call free. OK. But what they don't tell you is that you pay for the phone calls during this "free month". I just feel sick with all those half-truths, or should I call them "half-lies"? Isn't a half-lie a lie anyway? Now, imagine we provide truly free and unlimited (this point we have to discuss; remember, I'm just trying to build a discussion around this subject) Internet access to anyone who wants it, but only if they use Linux. I mean, like M$-Chap (fortunately pppd can deal with that), we could develop some Linux-Chap, but I don't think it's ethical; or is it? If it's not, maybe we could accept only clients capable of ppp encryption or bsd/slhc compression. We have to address all these technical details in a forum. But the main idea is: "Use Linux, and you'll have free unlimited Internet access. Just by using it on your computer." We already have everything to fill the needs of end users: web browsers, office suites, drawing tools, etc., and more is coming.

1.3. More bugs hunted - more eyes on the source code: if we bring more people to Linux, we'll get more people interested in studying its internals, learning to program, and developing programs. Not everyone, I know, but if we get just one out of a thousand, and we get some more millions of new users, it looks pretty sexy, eh? And if we give them a way to download more source code or binaries per unit of time, in the long run we'll have more developers and/or bug reports. Just by reporting bugs, or what they dislike/need from our OS, evolution is going to accelerate. And remember, we won't have Linus or Alan or thousands of others forever (what a sad life without them). We need to plant the seed for the generations to come. By giving users a free high-quality OS and free Internet access, don't you think that someday they will want to give something back to the community? That's how Linux works: we are all trying to give something back to the community. Those of you reading these lines, aren't you trying the same every day? That's why we have copy parties, mailing lists, newsgroups, etc. We are a gift community and a bazaar community.

1.4. Provide our community with a unified local repository of software - faster downloads: many countries have no unified national backbone. Academic networks and commercial ones lack a common backbone, or are in the process of building one. Around the world we have hundreds of mirrors of Linux repositories, but within a single country the user and the mirror may be on different networks, giving slow downloads even though the mirror is in the same country. I don't propose to abolish existing mirrors, but for the zero-cost ISP project to provide a nationwide ISP with all the necessary Linux resources. People won't _have_ to use it; it's just a choice, and a fast one. The 0800-LINUX ISP must be nationwide to achieve this goal. Besides, the PPP link can be established with extra compression (not just IP headers), giving phenomenal throughput. And let's add to this the chance to have two phone links using serial load balancing (an option in the kernel). Should this ISP include ISDN/xDSL service? In the beginning maybe not, due to increased costs, but it's just a matter of gauging the demand for it. It's another issue to discuss in this project.
And last but not least: faster downloads mean efficiency, and thus economy, for the Open ISP's budget!

1.5. Give privacy to the people: what is your ISP doing with your data? And your mail? Do you think your current ISP protects your privacy? I don't know for sure, but I don't think so. What about massive web tracking via ISPs, with technologies such as Predictive Networks? Have you ever heard of the Echelon project? "Big Brother is watching you," remember? The 0800-LINUX ISP project can help us reach a decent level of privacy. How? With encrypted PPP links, by educating our users to use PGP/GPG, by giving free web mail a la hushmail.com or through SSL. It's a very simple way to encourage users to use strong encryption. Which well-known free web mail server provides users with strong encryption? Remember what happened at Hotmail some time ago, when crackers published techniques/programs to read any account's mail? If we support strong encryption this can't happen again. I also think of, let's say, an encrypted /home filesystem. We can think endlessly of new applications. Well, let's not forget all this is subject to government permission. There are proposed laws in the UK, for example, that would make it illegal to refuse to hand over decryption keys to the government.

1.6. Open new business opportunities: with a large user base it's impossible not to mention the new (now not so new) and huge market it will bring. Books, commercial software (while we don't have free replacements we have to buy them; think about games), more distribution sales, support companies, huge demand for Linux-inside PCs, etc.; everything will grow exponentially, including unthinkable new businesses. We are in a new e-conomic era, and Linux is one of the driving forces in it. Look at the success of Linux IPOs. And we are a minority! We just have to pull the trigger. The results will overwhelm us.

1.7. Fill the demands of the IT world: lots of nations are now making plans to fill the huge demand for IT professionals. It's a problem for all developed and developing countries. The projected shortage of IT workers in the years to come is alarming. I think the Open-ISP project can play a major role in reversing this process: it will bring the free software community spirit to thousands of new individuals, stimulating collaborative development and user-to-user support. The more people get access to computers and the Internet, the more skilled the population.

1.8. Allow more nations to get involved in and profit from e-commerce: most European nations are worried about the advantage the US has taken in e-commerce. And in the end, the final customer is the one who benefits from competition. But to fill the gap they need the human resources to build and support the infrastructure. The European Union has launched the "eEurope Initiative" to chart a course of action for acquiring a competitive edge in e-commerce and new technologies.

2. Creating a zero-cost LINUX ISP

So if a zero-cost Linux ISP can benefit the Linux community, how can we raise the funds to achieve it?

2.1. Existing Linux/Open Source funds: the Open Source Equipment Exchange, the Linux Fund, or Open Source awards, like the Beanie Awards by Andover.net or the EFF Pioneer Awards.

2.2. Linux distributors: if we get this project to work, it is certain that the companies behind the Linux distributions are going to benefit. Nowadays you can see boxed Linux distributions in well-known stores around Europe and South America, whereas just a year and a half ago you couldn't. Now it's easy to find bright and shiny boxes of SuSE, Red Hat, Corel and Mandrake, to name a few. The main Linux distributions have shown over all these years firm and sincere support for a vast range of projects. And they know their success depends on the user base. We just have to develop a strong project and they surely are going to help. If this project comes to life, Linux distributors could advertise "Free Internet" bundled with the product. You just install Linux, and you have free Net access.

2.3. Linux publishers: lots of publishing houses are building a business around Linux these days. It's more and more common to see new Linux books on the shelves at major bookstores. If they donate a little fraction from the sale of each book to the project, then we have more funds. We just have to get more people into the community, and books are going to start flying off the shelves. It's inevitable. Houses like O'Reilly are well known for their support and sponsorship of projects.

2.4. Other UN*X companies: why did SUN give StarOffice away for free? If the Linux community succeeds, Un*x will get exposed to the general public and corporations. It will strengthen Un*x acceptance. Un*x vendors will stay alive in the game. Even SGI, which is now embracing Linux instead of IRIX, will win because hardware sales make more sense to them. If Linux in general has support from these companies, the 0800-LINUX project benefits indirectly from that support. Now we have a high-quality office suite to offer to the public for free, thanks to SUN. Maybe we can become a Sunsite partner, thus receiving hardware from SUN itself.

2.5. Hardware integrators: if you can sell a computer with free Internet access included, with no more setup headache for the end user than dialing 0800-LINUX, hey, it's a hell of a good strategy. And if you save users the cost of the OS, prices are going to be even more attractive. Hardware integrators can supply a machine with a free OS, free applications, a free office suite, and FREE Internet access... Again, the more users we attract, the more hardware gets sold. V.A. Systems, Penguin Computing, Compaq, Dell, to name just a few: all of them are in the game. They are just waiting, _waiting_, for the demand to supply Linux pre-installed. They are tired of paying the M$ tax. They could instead save that money and support this project with just a fraction of it. Whether it's hardware or money, we'll benefit.

2.6. The government: in highly developed countries kids have computers at school. They develop their understanding of and attraction to computers from an early age. Until now, the beginning of the 21st century, all countries had access to the same kind of technology and education. Technology was easy and cheap to replicate in every country, even the poorest. And education was more or less the same everywhere, with no specialization, or low-tech specialization. Every country has had, more or less, the same opportunities to develop itself. Now we enter a new era. The gap between developed nations and developing ones grows larger every day. Technology, services, specialization, high-tech industries, education and the Internet are the turning points of this new era. And I'm not saying anything new. The more people with access to technology, information, services, and communications, the wealthier the country becomes. And more developed, in general terms. Where do you think Linus is from? Finland; Cox? The UK; Stallman? The US. I know you see the path. Since the zero-cost Linux ISP is a non-profit project, the government can grant tax deductions for the funds private companies and/or individuals give to the project. The government itself could even help fund the project, given the importance of the results. It's not just Linux; it's the enlightenment of the population by means of Linux, and the long-run results it is going to bring.

2.7. United Nations: (please help me on this)

2.8. End user donations: we can't require our users to pay a fee for the Internet access; if we do, we'll just become YAISP (Yet Another ISP), and will add another level of complexity to the project (managing subscriptions, payment, etc.). Besides, the goal of our project is to provide an easy way for users to set up their Internet access: they just dial 0800-LINUX after installing Linux or buying a brand new computer. The distributions can even ship a pre-setup out of the box with a list of countries where the 0800-LINUX project is working. So users will be just one click away from the 'Net. In this project we have to develop all the policies and framework of the ISP, so it will be the same all around the world. Distributions can ship already set up. Therefore, when users want to give back to the community, they can just donate hardware and/or funds to the project. Just a tiny fraction of what they pay annually to their respective ISPs and/or Phone MonopoLIES will be enough.
 
 

Moving Forward

If you agree that a zero-cost Linux-only ISP can be beneficial for the growth of Linux, how do we as a community address the points I made above about creating such a project? I think that as a first step we should create a mailing list and run a poll to find out what percentage of the Linux users in our country use dial-up Internet access.

Is a Linux-only free ISP project even possible? The first thing one might think when reading about this project is that it is going to cost too much money. OK. You have a point. But think of it this way: if we raise the necessary money to have a 0800-LINUX ISP in our country, do you think it would be worth it? We have plenty of choices, and reasons, to find funds.

We have to find all these answers together. This is a project that must be born inside the community, not imposed from the outside. After we find consensus, we must prepare a complete proposal for all the Linux-related companies, to find out how much funding we can get.

And for the technical details of the ISP we could create an "Engineering Task Force". Please email me at carlos.betancourt@chello.be if you believe in the plausibility of this project and would like to participate.

LINKS
Hotmail Cracked Badly

UK Decryption Law Pushed Through

Surveillance bill under fire

'Echelon Study' Released by European Parliament

Echelon Watch

Echelon is Listening to You

Hotmail Hole Exposes Users to Security Exploits

UPDATES!

March 24th:
EU looks to e-job bonanza
EU to Push Growth in Innovation, Technology


Copyright © 2000, Carlos Betancourt
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


HelpDex

By Shane Collinge


racingstripes.jpg
arrrs.jpg
shoe.jpg
slashdot.jpg
beepboop.jpg

More HelpDex cartoons are on Shane's web site, http://mrbanana.hypermart.net/Linux.htm.


Copyright © 2000, Shane Collinge
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


What are Oracle's plans to boost and support the Linux platform?

By Fernando Ribeiro Corrêa
Originally published at http://www.olinux.com.br/interviews/14/en
(Follow this link for more OLinux interviews)


Mr. Pradeep Bhanot is the Senior Marketing Director for Linux at Oracle.

Olinux: Can you tell us about yourself and your career? How long have you been working for Oracle?

Pradeep Bhanot: I am the head of marketing for Linux at Oracle. I have been with Oracle 11 years, starting as a technical consultant in pre-sales, followed by post-sales. I have been in the US for 8 years: three years as a technical manager in the San Francisco Bay Area managing customers such as Wells Fargo Bank, Cisco, Netscape, SGI and HP. In recent years I did marketing with IBM followed by Dell. Prior to Oracle I worked in technical systems roles for the UK government and British Telecom for 10 years. I hold a BSc in Computer Science from Greenwich University in London.

Olinux: Briefly, what is the secret behind Oracle's success?

Pradeep Bhanot: Great people and products. Oracle's recent success is due to the focus on internet-based solutions and marketing.

Olinux: Tell us about the background, facts, visions, events, decisions and master moves (toward the internet) that explain Oracle's extreme success.

Pradeep Bhanot: Oracle led the shift to client-server computing on Unix in the 1990's. Three years ago Oracle bet its business on the internet. We moved all our products from character-based and client-server Windows GUIs to a web-browser-based UI. This bet paid off. Some companies such as IBM are really good at articulating customer requirements such as SQL databases, repository-based development, cross-platform common user interfaces and e-business. Oracle is really good at adapting its software products and people to deliver on that vision. Larry saw the emergence of Unix, Windows and the Internet and made sure we produced products that lead those markets.

Olinux: How has Larry Ellison's leadership guided Oracle?

Pradeep Bhanot: Larry is our visionary. Larry is very consistent in his vision. His goal has been to manage all kinds of information, both structured and unstructured, with very high integrity and in great volume. This has been delivered in various products in several forms over the years, including Oracle Video Server, Oracle Universal Server and today Oracle8i Release 2. On the business side Larry drives our $1B infrastructure savings, product consolidation and web-based sales programs.

Olinux: How does Oracle analyze Linux growth over the past few years? Is it consistent growth in your opinion? To what extent does Oracle want Linux to succeed as an alternative operating system for the server and the desktop?

Pradeep Bhanot: Oracle's primary focus is on Linux as a server OS. Oracle wants to see Linux succeed as an OS as it offers customers openness and superior TCO. There is strong developer and customer demand for Linux. There have been over 200,000 downloads of Oracle products on Linux from our developers' site at http://www.technet.oracle.com. Oracle has always been successful on open platforms. Highly proprietary platforms such as the AS/400, with built-in data management, do not enable Oracle to offer its benefits to the same extent as Linux.

Olinux: Oracle defines itself as an internet and e-commerce company. Linux was born and is currently maintained via the Internet. Is there any convergence/relation between Oracle's internet strategy and broad deployment and support of the Linux platform?

Pradeep Bhanot: Today, according to IDC, Linux is more of a middle-tier platform than a database platform. Oracle is working with distributors such as Red Hat to add database-friendly features to Linux, such as 4GB RAM support, raw I/O and 64-bit file addressing, to make it a better third-tier OS. I expect Oracle to increase its focus on Linux as a middle-tier OS for its internet-based solutions.

Olinux: Oracle8 was made available for Linux a long time ago. Can you describe Oracle's plans for making other products, such as Developer 2000, available to companies that already use Linux? Can you list those products, or say where users can find information about them?

Pradeep Bhanot: Oracle plans to make most of its products available on Linux. Today there is the Oracle8i R2 database, in its third revision on Linux, Oracle Application Server in the middle tier, and WebDB as the development environment. You can expect to see Developer6i, Oracle Parallel Server and the Oracle e-business suite become available this year. Product availability information is at http://www.platforms.oracle.com/linux

Olinux: What is Oracle's marketing strategy for Linux?

Pradeep Bhanot: Oracle wants to be the number one ISV on Linux as well as the number one e-business solution on Linux. Linux is an integral component of the majority of its marketing programs. You won't see Oracle advertising Linux solutions, as it gets better value investing in marketing solutions such as the dot-com suite and its e-business suite as deployment solutions on Linux.

Olinux: What are its key alliances, including Linux companies and organizations, to support this platform?

Pradeep Bhanot: Oracle has alliances with the major distributions such as Red Hat, SuSE, Caldera and Turbo. We work with Intel and major OEMs such as Compaq and Dell, as well as distributors such as Keylink and Hallmark.

Olinux: Can you detail the relation between Oracle and Linux International?

Pradeep Bhanot: We are on the board.

Olinux: What are the main sites that Oracle sponsors, or companies in which Oracle has investments?

Pradeep Banhot: Oracle is not sponsoring any Linux-specific sites today. The Oracle venture fund has invested in several Linux vendors, including Red Hat and Turbo Linux.

Olinux: We recently reported at OLinux that Oracle launched a new company in Japan called Miracle Linux Corporation. Why has Oracle chosen Japan, and what are the plans for this new company?

Pradeep Banhot: This is an independent initiative by Oracle Japan. Miracle sees the potential to maximize penetration of Linux in that region by having a domestic product with good national-language support. Partnering with NEC and Turbo Linux will minimize time to market. The demand for a national product is high, and Oracle has a great reputation for delivery, which makes Miracle a sound solution for that market.

Olinux: Officially, Oracle offers Linux support based upon the Red Hat distribution. What were the main technical and corporate aspects that led to this decision?

Pradeep Banhot: Oracle does its base build on Red Hat Linux. However, our products are certified on all the major Intel-based distributions, which include SuSE, Turbo and Caldera. It would be great to have a distribution-independent certification platform for all ISVs, as proposed by the LSB; that would make our certification easier.

Olinux: In a meeting at the Oracle office in Rio de Janeiro, Brazil, Olinux learned about the recent creation of your .COM department and about a multi-million-dollar venture capital fund to invest in internet companies. Are there any plans to invest aggressively in Linux companies or web sites?

Pradeep Banhot: We have a member of the venture fund team who is dedicated to Linux and the related space.

Olinux: An Oracle top executive said recently that Microsoft's old-fashioned desktop style is no longer needed. Now, President Bill Clinton's administration wants to break the company apart, but his top economists are studying the impact and financial consequences of this breakup. What is your opinion about the case, and what do you think will happen?

Pradeep Banhot: The government has considered options including a break-up into an OS company and an applications company, open-sourcing the Windows APIs, and creating smaller baby companies along the AT&T model. I would bet on the many smaller companies with the same charter and products, which would not kill the monopoly but would slow down Microsoft enough to enable real competition to exist. The real benefit I would be looking for is that a less arrogant Microsoft would be less likely to crush potential competitors. I would also be looking for better compatibility of MS-Office file formats across products like StarOffice and Applix.

Olinux: Which is technically faster and performs better: Oracle running on Linux or on NT? Can you give some results and point to resources on the internet for those test results?

Pradeep Banhot: Oracle has spent 8 years optimizing its NT technology. Today Linux is close on traditional benchmarks. We did some internal testing which showed Linux faster on some workloads and NT faster on others. We plan to do some Java-based testing that I would expect Linux to do better in. We have customers such as 1stUp.com that are doing over 40M database transactions per day on a single VA Linux box with excellent availability.

Olinux: Can you cite some new technologies and services that guide Oracle's plans for the future?

Pradeep Banhot: The most exciting technology area for Linux is IA-64. Oracle8i is a workload that really exploits what a 64-bit OS can offer. I am also looking forward to the Oracle Parallel Server option becoming available later this year in a 4-node configuration, which will reinforce the great availability Linux already has, coupled with transparent database scalability beyond a single box.


Copyright © 2000, Fernando Ribeiro Corrêa
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


General Graphics Interface Project (GGI)

By Fernando Ribeiro Corrêa
Originally published at http://www.olinux.com.br/interviews/14/en
(Follow this link for more OLinux interviews)


Olinux: Please, introduce yourself and your workgroup.

Steffen Seeger: I am working with the General Graphics Interface Project (GGI). The GGI Project is developing a suite of libraries that should allow applications to output graphics using a common Application Programming Interface, no matter what the actual underlying mechanism (target) is. For instance, you can run GGI applications directing output to an X window, but also run the same application fullscreen using output to SVGAlib, frame buffer devices, etc. I am mainly working on the design and implementation of the Kernel Graphics Interface (KGI), a part of GGI. KGI should provide the necessary Operating System services to allow several applications to share the graphics hardware safely.

Olinux: Where did you graduate, and where do you live?

Steffen Seeger: I received my Diplom in Physics from the Chemnitz University of Technology and live in Chemnitz.

Olinux: What is GGI?

Steffen Seeger: A general graphics interface. Strictly speaking it is a set of libraries that allow you to write programs that can do graphics output on any so-called target. A target is a generalized output device that is capable of doing graphics output. This way you may for instance run the same program in an X11 window, fullscreen using SVGAlib or fullscreen using fbdev. Also, there are so-called wrappers which translate other application programming interfaces into GGI, so that you may - for instance - run SVGAlib programs on any target GGI supports.

Olinux: When and how did it start?

Steffen Seeger: The first ideas were developed in 1995, at that time known as scrdrv; then it grew bigger and became what is known as the GGI project.

Olinux: Was it a group?

Steffen Seeger: Yes, it is. I can hardly imagine both GGI and KGI becoming what they are now without the input from all the people who contributed to this effort.

Olinux: Were you unsatisfied or willing to innovate?

Steffen Seeger: Both. In 1995 I got my first computer capable of running Linux. I had trouble getting XFree86 running: it crashed from time to time, preferably when I switched between X and virtual consoles or started an SVGAlib application. Mostly this resulted in hard-locking the machine, so I could only reset and reboot. I tried to understand why this happened, and found several implementation details that - if done differently - would have prevented the troubles I had. So I looked around for projects that had similar concepts, came across an early version of scrdrv, and got involved. However, I had no idea how difficult this effort would be.

Olinux: What are the most important innovations in the GGI projects, and what is their future?

Steffen Seeger: As far as I know, the ability to run the same application in any environment, from a simple terminal window to a full-screen video wall, is unique to GGI. As far as the kernel part is concerned, I think we can claim to have taken some important first steps towards a flexible console system. There have been a lot of controversial discussions about the approach we have taken, but the latest developments of the console subsystem show that these approaches were at least not as wrong as some people believed.

Olinux: What were the main ideas and plans involved?

Steffen Seeger: Originally, the basic idea to overcome the troubles mentioned was to have one and only one driver responsible for coordinating access to the hardware by different applications. Considering the traditional UNIX operating system design, the most natural place for this driver would be as a part of the kernel. However, this implies that one should not prescribe any drawing operations or hardware models specific to a particular API, but rather provide a 'virtual graphics card' that should have most or all capabilities of the real hardware. We soon realized that this in turn makes it necessary to have some library that converts the actual drawing routines into 'hardware commands', which is pretty much independent of the kernel part. This way libGGI and KGI became the two main components we are working on.

Olinux: How is the project divided?

Steffen Seeger: For the reasons mentioned above, development may be divided into the two main parts, KGI and libGGI, though all of it might be seen as an integrated effort.

Olinux: Does it involve universities or any other gnu/linux organizations?

Steffen Seeger: Except for the fact that some developers are students, and that the universities implicitly provide infrastructure to do development, there is no active involvement in terms of research projects etc. There is no GNU/Linux organization involved.

Olinux: How is GGI integrated with the GNU/Linux community?

Steffen Seeger: As much as the community likes it. All enhancements to existing GNU/Linux programs that we have made are available to the community, but we will not force anybody to use them.

Olinux: Are there any sponsors helping and funding the project?

Steffen Seeger: The Freiberger Linux Users Group, 3Dlabs Incorporated and AEON Technologies donated development hardware, but beside that there is no sponsoring or funding of the project except by the developers themselves - as much as we would appreciate it sometimes.

Olinux: How many people are involved?

Steffen Seeger: All in all probably 10-20 active developers.

Olinux: How far has the project gone?

Steffen Seeger: libGGI is pretty much usable already. Currently my main effort is to get the next generation of KGI delivered, so that we can work on an improved version of XGGI.

Olinux: What can be done to improve the project?

Steffen Seeger: There are lots of things that could be done; mainly, active developers are welcome. But we also need a new webmaster, documentation writers, etc. I think KGI could benefit most from being ported to another hardware platform, e.g., PowerPC or Alpha. Unfortunately, an Alpha-based system is beyond my budget.

Olinux: Are there any weaknesses that still disturb you?

Steffen Seeger: Speaking of KGI, it has advanced quite a lot, but there are still some deficiencies I would like to fix: mainly, it needs a specification and documentation, and further improved backward compatibility.

Olinux: Are there any parts of it already available for use?

Steffen Seeger: Yes. LibGGI is usable already. As far as KGI is concerned, the KGI console subsystem is quite usable already, though there are still some known bugs, so this part of KGI could be labeled as being in a beta-testing state. The KGI drivers, however, are still alpha or in early development.

Olinux: What are the main objectives for this year?

Steffen Seeger: To deliver the new KGI and get an accelerated X server running on top of it.

Olinux: Are there any deadlines for GGI apps releases?

Steffen Seeger: GGI applications are outside the main GGI development (except XGGI). As for KGI or GGI, we will announce when new versions become available.

Olinux: How is the GGI project related to Berlin?

Steffen Seeger: Berlin is a windowing environment, while GGI is a drawing kit. Berlin may use GGI to draw its windows; GGI provides its drawing to Berlin just as it provides its drawing to XGGI. Some GGI developers are also involved in Berlin.

Olinux: Are GGI applications compatible with X or svgalib?

Steffen Seeger: Sure. You can run GGI applications using the SVGAlib target, and you can run SVGAlib applications on GGI using the SVGAlib wrapper.

Olinux: How has the Linux graphical interface evolved since it began? How will GGI help improve the Linux interface?

Steffen Seeger: GGI is primarily about stable and fast graphics output, which is one thing required for a good user interface. There are many more areas where current user interfaces need to be improved.

Olinux: What about handheld devices: will GGI ever be used in this kind of technology?

Steffen Seeger: GGI has already been used on the Itsy, a little hand-held computer using the StrongARM processor. Whenever there is a need for graphical output without the overhead involved in running a full-featured X server, GGI can help to ease development.

Olinux: Pick the most interesting and promising new technology for the future, in your opinion.

Steffen Seeger: Optical information processing and storage.

Olinux: What are the web sites that you like most?

Steffen Seeger: I prefer well-structured sites with interesting content. I do not have that much time to surf the web, and I do not have particular sites I would prefer.

Olinux: Do you have any other ideas that you will pursue in the future?

Steffen Seeger: There are quite a few ideas I want to try, but first I want to finish my part of KGI such that it can stand on its own.

Olinux: What else do you imagine creating, or what projects do you imagine being involved with?

Steffen Seeger: Animation tools, creation of tools for film production and special effects...

Olinux: Are there any special personalities or organizations that you admire? Who are they?

Steffen Seeger: People who have the courage to think their own thoughts. Organizations that remain open to input from outside.

Olinux: Send a message to developers dedicated to FS/OS around the world.

Steffen Seeger: If you intend to write free software, write it to solve a problem, not to please a particular person or organization. Write it such that others can re-use your work easily; you can't know everything.

Abstract: Steffen Seeger is leading the GGI project to develop "a suite of libraries that should allow applications to output graphics using a common Application Programming Interface." He explains the related projects, such as KGI (the Kernel Graphics Interface), XGGI (the X server for GGI) and libGGI (its graphics library).


Copyright © 2000, Fernando Ribeiro Corrêa
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


Creating A Linux Certification Program, Part 9

By Ray Ferrari


A Call to Action

The Linux Professional Institute (LPI) invites all Linux enthusiasts and professionals to consider certification as a means of furthering their career or increasing their earnings potential. Anyone interested in Linux certification should visit LPI's web site at www.lpi.org. The volunteers and staff of LPI, along with their long list of sponsors, have spent thousands of man-hours, as well as dollars, to bring an unbiased, vendor-neutral testing procedure to the world.


LPI exams are now available globally in English through Virtual University Enterprises (VUE), which has 2,200 test centers; these centers are for testing only. Interested individuals should visit the LPI web site at www.lpi.org/c-index.html for more information. Persons interested in Linux-related training information should visit www.lintraining.com. The format for testing has been changed to simplify the process. Numerous questions arise, so we have posted the frequently asked ones at www.lpi.org/faq.html.


LPI is currently working on the development of its Level 2 certification. A survey has been organized by professional Linux system administrators from the U.S., Canada, and Germany, for the purpose of allowing as many volunteers as possible to participate in the structuring of the Level 2 tests. They need the responses of Level 2 system administrators to help them formulate the writing of these exams. Any individual who would like to participate in this phase should contact Kara Pritchard by e-mailing kara@lpi.org or scott@lpi.org.


Some exciting things have been happening all around the globe. This year, LPI has participated in events in France, Germany, Australia, Canada and the U.S. In April, LPI was in Chicago for the Comdex and Linux Business Expo; see www.comdex.com. At this show, almost 200 people took the Level One tests. These tests were made possible by the cooperation of Linux International, VUE, Linux Business Expo, and LPI. It was a great success, and the first time testing was performed at an exhibition. LPI is looking to further this success with its presence at future events. Other shows included Advanstar in Toronto, May 16-18 (see www.newmedia.ca), and Montreal Skyevents in April (www.skyevents.com).


Future events currently scheduled include a booth at the Olympia Convention Center, London, England on June 1-2 (www.itevents.co.uk), the Linux Business Expo in Toronto, July 12-14 (www.zdevents.com), and LinuxWorld in San Jose, California, August 15-17 (www.idg.com/events). Anyone interested in volunteering to help staff these booths should contact wilma@lpi.org.


The Linux Professional Institute (LPI) continues to attract the attention and sponsorship of some of the most influential companies and individuals in the world. We appreciate their guidance and support, and are pleased to welcome Hewlett-Packard, Mission Critical Linux, Psycmet Corporation, SmartForce Corporation, TeamLinux, and VA Linux Systems to our ever-growing list of contributors and sponsors. For a complete list of corporate and individual sponsors, please visit www.lpi.org/a-sponsors.html.


For anyone interested in viewing any presentations put on by LPI, you can visit www.lpi.org/library/. Also, for user group information, or to find a Linux group in your area, visit www.linux.org/users or http://lugs.linux.com.


At LPI, we continue to bring you the latest on testing fundamentals and certification. We invite you to stand up and be noticed. Put some credentials beside your knowledge, and create the future in Linux. This is "a call to action." We'll see you around the next bend.




*Linux is a trademark of Linus Torvalds; Linux Professional Institute is a trademark of the Linux Professional Institute Inc.


Copyright © 2000, Ray Ferrari
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


CAD Programs for Linux

By Keith Frost


A discussion on Slashdot in October would have you believe that there aren't any good CAD programs for Linux. In fact, nothing could be farther from the truth. The discussion started with the GPL release of a 2-D CAD package called Qcad; from there, it evolved into a debate over what makes a ``good CAD'' program and over who wanted which commercial package on Linux someday.

Once and for all, I would like to set the record straight: there are options out there today. Several different packages are available, each with a different level of power and capability, and each fits a different budget.

Qcad

Qcad is the first (to my knowledge) working GPL CAD package for Linux. There are several projects currently listed as works-in-progress, but Qcad is here now. Qcad has a simple 2-D editor and uses DXF as its native format. Qcad gets its name from the Qt tool kit. For those who do not use KDE, relax; it is not desktop-dependent. I have used it with both Xfce and AfterStep and have not seen any problems. With a simple icon menu, it is functional and easy to learn. After a few minutes, I was working on my daughter's new bed design. All the basic functions are at your fingertips.

Lines can be drawn by coordinates, by clicking, or by offsetting an existing line. Circles and arcs can be created just as easily. Construction geometry can then be trimmed or extended to clean up the drawing and ready it for detailing. The font selection did seem a little limited; if you are willing, however, there is a means to create new fonts by copying an existing font file to a new name and modifying it. I imported one of my ``old'' title blocks and found that it required very little fixing or tweaking. Again, a better selection of fonts would have helped here.

I also pulled up several NACA wing sections, none of which were corrupted in any way. For a final test, I edited one of the sections, saved it, then pulled it up and extruded it with AC3D. For those who use AC3D, Qcad makes a very nice flat-pattern editor.


Figure 1. Qcad

To find more information or download, the Qcad home page is at http://www.qcad.org/index.php3.

CAM Expert

CAM Expert is the commercial big brother of Qcad. It has an interface similar to Qcad's, but with extended features leaning more towards the creation of NC programs. These features include, but are not limited to: NC Import, NC Creation, Optimizing Way, Optimization for Cutting Machines (cutting contours from inside to outside), Individual Configuration of the NC Output Format, CAM Simulation, Regulating Simulation Speed, Smooth Simulation and Show Rapid Move. I would be interested in hearing from those who have put this software to use, as I do not have the proper equipment.

For more information or a trial download, the CAM Expert home page is at http://www.ribbonsoft.com/.


Figure 2. CAM Expert

SISCAD-P

SISCAD-P is a 2-D parametric CAD system from Staedtler. Installation was a little more complex than for some of the others (especially for non-SuSE users), but it is well worth the effort. For those familiar with Sketcher (the 2-D editor for CATIA), SISCAD-P reminds me of it, only with many more features and a bit easier to use. The features include: parametrics, variational geometry, inference sketching, a fully customizable user interface, constraint-based modeling and feature-based modeling. Also, if all of the smart geometry becomes too overwhelming, you can turn it off and just treat it like a simple 2-D CAD package, with all the standard line, arc, circle and text commands you'd expect to have at your disposal.

The downloadable version is a demo that is limited in the size of the file it will save. From the menu, I selected LOAD/DXF and imported the same bed design I had started in Qcad. After adding some more detail, I inserted the same title block as I had with Qcad, only to receive a message stating that I had exceeded the limit (of the demo). I would have liked a little more room to play, but it did give me enough time to see that I should have gotten this one going sooner.

My earlier attempts had been on a Mandrake and various Red Hats. This was my first try with SuSE and that seemed to make the difference. To download the demo go to: ftp://tsx-11.mit.edu/pub/linux/packages/cad/. There are instructions on how to register and get a full license in the documentation, but I've been told that Staedtler is no longer in the software business and will not support it.


Figure 3. SISCAD-P

ME10

ME10 is a 2-D parametric CAD package by CoCreate, a subsidiary of Hewlett-Packard. If an award were given for the fastest learning curve, this would be the winner. I've always preferred a text-based menu over icons; I think icons only make sense to the person who creates them. The oversized menu section takes up a lot of the screen, but it makes up for that with the ease with which you can move through the commands. Whatever you need, it's right there.

According to the web page, it features parametrics with a ``parts concept'': an assembly may contain multiple copies, or instances, of a part. When the part is modified, all the instances update as well. By the same concept, sub-assemblies may be inserted as instances in other assemblies. This can be repeated, creating an intelligent tree for your part structure.

ME10 has its own internal browser for previewing drawings and symbols. Also included are a parts library and engineering symbols. Although it does have an IGES translator, DXF would have been nice; I would have liked to bring in some of my older geometry, but it's all DXF. There is a demo available that is well worth the time to download. Again, the demo is limited in the size of the file it will save. For more information and the demo, check out the home page at http://www.cocreate.com/english/products/2d/index.htm.


Figure 4. ME10

CADDA

CADDA is from DAVEG. I did not find any kind of demo to try out on the home page, but it appears very nice. In response to my e-mails, I was given the following to share:

The CADDA software is a true CAD/CAM solution that offers CAD and CAM functionality within one user interface. CAD data can be imported as 3-D or 2-D models. The CADDA user selects, verifies and corrects the data during the preparation process. A postprocessor generates a ready-to-use CNC machine program.

CADDA supports the following technologies: 2 1/2-D milling/drilling, 3-D free-form milling, turning, erosion cutting, sink erosion and grinding. The newest branch of CADDA is the CAD/CAQ module. It works like CADDA CAD/CAM, but the preparation and post-processor system produces a ready-to-use program for a CNC measurement machine. The CADDA application extends 3-D CAD data to become directly processable by the CNC-equipped factory. If necessary, a direct connection between CADDA and the CNC controls is deliverable. As an option, CADDA CAD/CAM can include full 2-D drawing capability to support staff with limited modelling experience.

CADDA has been under continuous development by DAVEG for 15 years. HP-UX was the system basis up to 1998. In 1998, DAVEG offered the first version of a Linux-based CADDA on Pentium II hardware. Today DAVEG has 300 seats installed with Linux, and the results are extremely good; customers are impressed with the performance and stability.

For more information visit their web page at http://www.daveg.com/index_e.html

Varicad

Varicad offers 3-D solids and 2-D drafting at a very nice price. The user has the option of icon panels or pull-down menus. Although I like the text-based (pull-down) menus, I did find the ``Commands'' nested a bit too deep, which makes the pull-down menus slow. The icon panels work much faster, but the icons are not always obvious in their meaning. You can also enter commands at a command prompt.

Varicad is another one that has been around on Linux for many years. More people are probably familiar with Varicad than with any of the others; part of this is because of a very good article about it in LJ last year.

Varicad can import and export both DXF and IGS. You can extrude or revolve 2-D geometry. Other types of solids include: prisms, cylinders, filled elbows, truncated pyramids, truncated cones, cone pipe, helix and square-to-round transitions. In addition to the standard boolean add (union) and cut (subtraction), you also have cut save tool, save part, cut save part and tool, and add cut part. Other additional functions include fillet, chamfer, hole, milling and groove. A simple intersection would have been nice. One thing I did appreciate very much was a good undo/redo that is easy to find; in fact, it's hard to miss. Once the solids have been created, they can be analyzed for anything from the distance between objects to the center of mass and moment of inertia.


Figure 5. Varicad

There is a non-saving demo which can be downloaded for free. In addition, there is a 30-day trial key you can obtain that allows you to save during those 30 days. Varicad has announced that they are now a member of OpenDWG, which means that Varicad will import and export (read and write) the AutoCAD DWG format. For the demo and more information, go to the Varicad home page at http://www.varicad.com/.

Microstation

Bentley is well known for its Microstation line of CAD products. Although there is no commercial version for Linux, there is an academic version. If you venture to the home page, there is also a page where you can ``petition'' for a full commercial version; word is they will not go commercial unless there is more interest. The academic version seems to have most of the functionality of the regular UNIX version, except there are no Parasolid libraries. Modeler, TriForma and MS/J all use the Parasolid libraries, so if you're working in 3-D, it will be wireframe and surfaces. Once again, if there is enough interest to justify the port, this may change. All of the 2-D tools to create, edit and detail geometry are present.

One of the things that I have always liked about Microstation is that it creates a very nice RIB file for rendering with BMRT or other Renderman-compliant renderers. It also has the ability to render within the application itself. Try some of the sample files included to get a better idea of what can be done. There are no demos or downloads to my knowledge, but there is a wealth of information on the Bentley home page at http://www.bentley.com/academic/products/linux2.htm.


Figure 6. Microstation

Varimetrix

Varimetrix has been in the Linux CAD market for over three years. Their previous-generation product was renamed VX Classic; the newest product line from Varimetrix is called Vision. Both Vision and VX Classic are commercial applications whose prices are probably beyond what most people could afford for personal use. For this reason, the information I give here is based on their home page and an article in Cadence magazine. There is a demo disk for Vision, but don't get your hopes up. I sent off for it to help me with this article -- what I received was not what I consider a demo, but a presentation program that duplicated the information from the web page. If you do order a copy, don't panic when it says Windows 95 or better; it works well with Linux/Wine.

VX Classic is broken down into modules. The first module is VX Modeling. Using their own in-house modeling engine, called Unified Parametric Geometry (UPG), they did not have to wait for someone else to port it to the platforms they wish to support. VX Classic offers the choice of 3-D wireframe, surfaces and solids. In addition to having the choice of modeling methods, you also have the ability to transform geometry between types. Solids can be created from constrained/dimensioned geometry produced by its intelligent sketcher. In addition to the traditional boolean operations, you can also sculpt the solids with a collection of spatula functions. For the Perl buffs out there, guess what they use for user scripting? Hint: it starts with ``P'' and has four letters. There is also a C interface called OpenVX.

The second module, VX Assembly, allows intelligent positioning of the details, both in relation to other geometry and with respect to the bill of materials. Concurrent control of the assemblies is provided, so that multiple designers can work within the same project without splintering the design. BOMs can be created automatically, and a schematic representation of the BOM tree is also available. Parts can be analyzed to show CG, overall mass, moments of inertia, and collisions between parts. The third module, VX Drafting, takes the details and assemblies created and gives the user all the tools needed to turn them into engineering drawings. The Drafting module can also work independently of the other modules. You may use layout templates, arrays, blocks or multiple instancing of geometry. VX Drafting provides automatic hidden-line removal, and both automatic and interactive dimensioning. There is also a complete set of 2-D drafting utilities, all using constraint-based geometry. The list of features goes on and on.

The final module for VX Classic, VX Manufacturing, is a complete suite of CAM tools. VX Manufacturing uses the dataset from the modeling module. All forms of geometry can be used by this module: wireframe, surfaces and solids. Up to five axes are supported. Once again, the list of features goes on and on.

Vision for Linux should have been commercially available already, though you would never know it from the web page; the pages on Vision never even mention Linux. I mailed Varimetrix last year and received a reply saying, ``Our new product line called VX Vision will also be running under Linux soon (mid-summer). Actually, it runs now but we are still testing.'' There was an article on Vision in the July 1999 issue of Cadence magazine. Although the article was based on the NT version, it does mention that there is a Linux version. For those who wish to migrate from NT to Linux, this might be a good starting point.

You can find out more on VX Classic and VX Vision by going to their home page at http://www.vx.com/ and clicking on products.

Conclusion

As I have stated, there are options available, ranging from free GPL software to high-dollar commercial products. What may prove even more interesting are the other projects and products still in the works. Matra Datavision has released their Cascade libraries as open source; keep your eyes on this one. Check out their web page at http://opencascade.org/.

I think it is time we started to recognize and support both the GPL projects and the commercial CAD companies that are here and willing to support us today.

Glossary

AC3D: 3-D object/scene modeler for Linux

CAD: computer-aided design

CAM: computer-aided manufacturing

CATIA: family of 2-D and 3-D CAD programs from IBM

CNC: computerized numerical control

DXF: drawing exchange format used by AutoCAD

IGES: initial graphics exchange specification

NACA: National Advisory Committee for Aeronautics

NC: numerical control


Copyright © 2000, Keith Frost
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


First Attempt at Creating a Bootable Live Filesystem on a CDROM

By Mark Nielsen

If this document changes, it will be available at http://genericbooks.com/Literature/Articles/3/cdburn_2.html


Contents

  1. References
  2. Introduction to cdrom burning and bootable cdroms.
  3. Creating an EXT2 cdrom and a bootable floppy disk
  4. Creating a bootable installation CDROM using ISO9660 and Rock-Ridge extensions (for my MILAS project). This is the preferred way of making CDROMs.
  5. Configuring the boot-up process so that the computer is useable.
  6. Conclusions and future articles
  7. A crude Perl script to make a bootable iso9660 formatted cdrom of RedHat 5.1 from my computer.
  8. My rc.sysinit file for RedHat 6
  9. My example lilo.conf file.
  10. My example fstab.
  11. My old Install.pl script. This is where MILAS came from. This Perl script will eventually be integrated with the bootable cdrom.

References

  1. CD-Writing HOWTO by Winfried Trümper
  2. Lilo mini-Howto
  3. Xcdroast  -- read about cdrecord with "man cdrecord".

Introduction to cdrom burning.

First off, you should read the previous article, Creating Installation CDs from Various Linux Distributions. This article assumes you know how to make cdroms using cdrecord. Now then, the next step: why make a live Linux filesystem for a cdrom?
 
  1. You want to make an installation cdrom (like MILAS).
  2. You want to boot off the cdrom and use it for most of the core files for your operating system and use your hard drive for other stuff.
  3. You want to make it real easy to do upgrades. Use a rewritable cdrom and just swap out the old cdrom with a new one.
The long-term direction is to use a cdrom to create computers without hard drives: you use the cdrom for most of the core filesystem, a ramdisk for /tmp, and NFS for everything else. I really, really dislike professional network computers that are diskless, and I hope that creating your own diskless workstations will be the way of the future.

STRONG NOTE: The Perl scripts and methods I use to make bootable cdroms are NOT NOT NOT very clean yet. I am still working to perfect the process. I want to have it all run in Python or Perl (preferably Python). Once making bootable cdroms is well documented, I am going to merge this with my MILAS project.



Creating an EXT2 cdrom and a bootable floppy disk

For this exercise, we will do something a little strange: we will make an ext2-formatted cdrom and a floppy disk that is bootable. People who want to use the easier iso9660 format can skip this section.

What advantages do you gain by doing this my strange way? Well, first, realize something before I answer that question: a floppy, a cdrom, and a hard drive partition can be treated the same in most respects. Okay, now I will answer the question:

  1. You can use a spare partition on your hard drive to test the image you want to put on your cdrom. If you boot off a floppy disk, you can point it to the hard drive partition, and if it works out great, then on the next floppy disk you make bootable, have it point to your cdrom. Remember to mount the hard-drive partition read-only to simulate a read-only cdrom (a minimal mount command is sketched below). Hard drive partitions are a good way to test images before you put them on your cdrom (especially when it is a write-once cdrom).
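For instance, a sketch only - the partition (/dev/hda3) and mount point (/mnt/test) are assumptions, so adjust them to your system:

   ### Mount the spare partition read-only to simulate a read-only cdrom
mkdir -p /mnt/test
mount -t ext2 -o ro /dev/hda3 /mnt/test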
OPTIONAL: First, create an ext2 filesystem (using the hard drive for testing purposes):
  1. Have a spare partition on your hard drive to use for your image for the cdrom.
  2. Format the partition as an ext2 format. Example: "mkfs -t ext2 /dev/hda3". This formats the third partition on your primary hard drive. Make sure you change the number "3" to the correct number used for your spare partition.
  3. Copy over all critical directories and configure the files in ROOT/etc to correctly reflect your new installation.
  4. Use a ramdisk for "/tmp" and point "/var" to "/tmp/var".
  5. Make a bootable floppy disk, and either configure it to use the cdrom drive as "root" ("/"), or, if you get lilo installed on the floppy drive, type a command at boot time to use a different partition for "/". You can do this with the command

     lilo root=/dev/hdc

     if your cdrom drive is "/dev/hdc". Notice I did not specify a partition number; there is none.
     There are also other ways to make bootable floppies. For my Red Hat installation:
       ### Make a copy of the kernel
    cp /boot/vmlinuz-2.2.12-32 /tmp/Vmlinuz
       ### Make the copy boot from the cdrom on /dev/hdc
    rdev /tmp/Vmlinuz /dev/hdc
    ramsize /tmp/Vmlinuz 20000
       ### copy the kernel directly to the floppy disk, you might have to format it first, mkfs -t ext2 /dev/fd0
    dd if=/tmp/Vmlinuz of=/dev/fd0
For examples of how to copy over directories and files, look at the Perl script at the end of the document.
For examples on how to use a ramdisk, read the RamDisk article I wrote a while ago, and also "man lilo.conf".

Second, either using the files from a partition you were using for testing purposes, or starting from scratch, do the following:

Make an image that is 650 megs (the size used by the "dd" command below) using "dd" and a loopback device, then copy this image to your cdrom. How do you make an image?
Assume that "/mnt/Partition" is the directory holding all the files that you want to make an image out of.

  ## Create a blank file or image
dd if=/dev/zero of=/tmp/Image bs=1024k count=650
    ### Format this blank image
/sbin/mke2fs  -b 2048 /tmp/Image
   ### Answer "y" to mkfs if it says that it doesn't recognize the device
mkdir -p /mnt/CDROM_IMAGE
   ### Mount the blank formatted image to a directory
mount  -t ext2 -o loop=/dev/loop1 /tmp/Image /mnt/CDROM_IMAGE
   ### Copy over the stuff from your hard drive partition to your image for the cdrom.
tar -C /mnt/Partition -pc . | tar -C /mnt/CDROM_IMAGE -xvp
  ### Or just use rsync to copy it over
#    rsync -a /mnt/Partition/* /mnt/CDROM_IMAGE
  ### Umount the image
umount /mnt/CDROM_IMAGE

OR, if you don't mind using an ISO9660-formatted cdrom, which will work the same with Rock Ridge extensions, enter this command:
mkisofs -aJlL  -r -o /tmp/Image /mnt/Partition
NOTE: Making an iso9660 cdrom in one step is a lot easier and is described in the section below.

Now burn the image located at "/tmp/Image" to your cdrom.
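The previous article covers the burning step in detail; as a reminder, a minimal session might look like the following sketch (the dev=0,0,0 triple and speed=4 are assumptions - check your burner's address with "cdrecord -scanbus" first).

  ### Find the SCSI address (bus,target,lun) of your burner
cdrecord -scanbus
  ### Burn the image, substituting your own dev= triple and speed
cdrecord -v speed=4 dev=0,0,0 /tmp/Image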

Actually, I was thinking: you can probably just make a 600-meg hard drive partition and copy it over directly, without having to make an image. If your hard drive partition is "/dev/hda4", then do this:

  ### Note: I haven't tested this yet.
  ### Unmount our partition that we copied files to
umount /dev/hda4
  ### Make an image of the partition
dd if=/dev/hda4 of=/tmp/Image.raw

Now just take Image.raw and burn it to your cdrom.

For better examples of how to do this, look at my Perl script below. Anybody want to convert it into a Python script? Perhaps a Python/Tk script?


Creating an installation CDROM using ISO9660 and Rock-Ridge extensions (for my MILAS project)

The big deal about making ISO9660-formatted cdroms with Rock Ridge extensions is the fact that you can make the cdroms bootable. This is very useful for creating your own diskless workstations, creating bootable installation cdroms, creating a cdrom to fix hard drives, and probably other things.

With this section, you don't need to use a loopback device, and you don't need to use any partitions; you just need a directory somewhere on your computer and the program "mkisofs". This is probably the easiest way to create an image that you want to use for a cdrom.

     The key to making a bootable cdrom is the "mkisofs" program. Here is a typical command that I use:
mkisofs -aJlL  -r -o /tmp/Boot_Image /CDROM
    "/CDROM" is the directory that you want to burn onto a cdrom. To add the boot file, first copy it into that directory tree (mkisofs wants the boot image's path given relative to the root of the tree, and a boot catalog named with -c):
mkisofs -aJlL  -r -b Boot.image -c boot.catalog -o /tmp/Boot_Image /CDROM
    In the next section, we discuss how to make a bootable floppy disk image that you can put on your cdrom.

The key item to remember is that you need only a directory for this program. The nice thing is, it doesn't grab the empty space on a partition when it creates its image. You can use a spare directory anywhere on your system.


Configuring the boot-up process so that the computer is useable.

The toughest part about creating a live filesystem is copying over the critical files and configuring them. You should have the same directory structure as your Linux filesystem, except that the stuff under /usr should not be critical (though it is perhaps helpful). Remember to mount a ramdisk on /tmp, remember that you should point /var to /tmp/var, and remember to configure the files in /etc correctly. This could be a whole article in itself; I try to do it in the Perl script below. If you combine a live filesystem on a cdrom with a hard drive or NFS, you will have more options as to what you can do.

Here is an example of how to copy over files and configure the bootup process.
Assume the directory you are making an image of is, "/tmp/Boot_Image".

cd /tmp/Boot_Image
mkdir root
mkdir mnt
mkdir proc
mkdir tmp
mkdir home
mkdir misc
mkdir opt
### Yes, tmp/var doesn't exist, but it will after boot time
ln -s tmp/var var
mkdir dev
rsync -a /dev/* dev
mkdir lib
rsync -a /lib/* lib
mkdir bin
rsync -a /bin/* bin
mkdir sbin
rsync -a /sbin/* sbin
mkdir usr
mkdir etc
rsync -a /etc/* etc
mkdir boot
rsync -a /boot/* boot

Now, configure etc/inittab to boot at runlevel "1".
Change
id:5:initdefault:
to
id:1:initdefault:
in the file etc/inittab

Now, change your etc/fstab to this,

    #### change /dev/hdc to wherever your cdrom is located
/dev/hdc      /        ext2    defaults        1 1
/dev/fd0     /mnt/floppy             ext2    noauto,owner    0 0
none       /proc                   proc    defaults        0 0
none        /dev/pts                devpts  gid=5,mode=620  0 0
        ### Note, this is using a swap partition from a hard drive.
        #### Delete this or change it
/dev/hda6               swap                    swap    defaults

Now, add to the end of etc/rc.d/rc.local the following commands

mkfs -t ext2 /dev/ram0
mount /dev/ram0 /tmp
chmod 777 /tmp
chmod +t /tmp

Now you need to make a bootdisk with a larger ramdisk on it.
  ### This makes a bootdisk, put a floppy disk in
mkbootdisk `uname -r`
  ### This makes the directory to mount the floppy disk
mkdir  /mnt/floppy_test
  ### Mount the floppy disk
mount /dev/fd0 /mnt/floppy_test
  ### Edit the lilo.conf file on the floppy and put "ramdisk=35000" in it; mine looks like this:

boot=/dev/fd0
timeout=100
message=/boot/message
prompt
image=/vmlinuz-2.2.12-32
        label=linux
           ### Change /dev/hdc to /dev/hdb or /dev/hdd or wherever your cdrom is
        root=/dev/hdc
        ramdisk=35000
image=/vmlinuz-2.2.12-32
        label=rescue
        append="load_ramdisk=2 prompt_ramdisk=1"
        root=/dev/fd0

   ### Now execute the lilo command on the floppy drive
lilo -r /mnt/floppy_test
  ### Now umount the floppy disk
umount /dev/fd0

Now you have a bootable floppy disk that uses your cdrom as root.
If you are going to burn the floppy disk image onto your cdrom using mkisofs, then change lilo.conf to this,

boot=/dev/hdc
timeout=100
message=/boot/message
prompt
image=/vmlinuz-2.2.12-32
        label=linux
           ### Change /dev/hdc to /dev/hdb or /dev/hdd or wherever your cdrom is
        root=/dev/hdc
        ramdisk=35000

   ### After you umount the floppy disk, make an image of the floppy disk to burn on a cdrom
dd if=/dev/fd0 of=/tmp/Boot.image


Conclusions and future articles

I wanted to make it easier to create bootable cdroms with a live filesystem. From here, I will write articles on how to use a bootable cdrom to:
  1. Create installation cdroms to burn your image of an operating system onto your hard drive.
  2. Use a bootable cdrom with a hard drive and/or NFS.
  3. Finish up my MILAS project. My MILAS project started when I needed a way to configure the custom-made computers that I used to sell (and probably will again someday, to help force competitors to do cool things).
  4. Make a more accurate Perl script to take the version of Linux you have on your computer and put it on a cdrom. I will probably end up using the iso9660 format for the cdrom.
I apologize for the roughness of this article; it was a pain in the butt to figure out how to make bootable cdroms. I imagine other people have documented it much better than I have. In my next article, I will clean things up a lot.

Mark Nielsen works for The Computer Underground as a clerk and as a book binder at ZING. In his spare time, he does volunteer stuff, like writing articles for The Linux Gazette and developing ZING's website.


Copyright © 2000, Mark Nielsen
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


Introduction to Shell Scripting

By Ben Okopnik


Never write it in 'C' if you can do it in 'awk';
Never do it in 'awk' if 'sed' can handle it;
Never use 'sed' when 'tr' can do the job;
Never invoke 'tr' when 'cat' is sufficient;
Avoid using 'cat' whenever possible.
--Taylor's Laws of Programming

Last month, we looked at loops and conditional execution. This time around, we'll look at a few of the simpler "external" tools (i.e., GNU utilities) that are commonly used in shell scripts.

Something to keep in mind as you read this article: the tools available to you as a script writer, as you might have guessed from the above quote, are arranged in a rough sort of a "power hierarchy". It's important to remember this - if you find yourself continually being frustrated by the limitations of a specific tool, it may not have enough "juice" to do the job.

Some time ago, while writing a script that processed Clipper database files, I found myself pushed up against the wall by the limitations of arrays in "bash"; after a day and a half of fighting it, I swore a bitter oath, glued a "screw it" label over the original attempt, and rewrote it in "awk".

It took 15 minutes.

I didn't tell anyone at the time; even my good friends would have taken a "clue-by-4" to my head to make sure that the lesson stuck...

Don't be stubborn about changing tools when the original one proves under-powered.

cat

Strange as it may seem, 'cat' - which you've probably used on innumerable occasions - can do a number of useful things beyond simple catenation. As an example, 'cat -v file.txt' will print the contents of "file.txt" to the screen - and will also show you all the non-text characters that might normally be invisible (this excludes the standard textfile characters such as `end-of-line' and `tab'), in "'^' notation". This can be very useful when you've got something that is supposed to be a text file, but various utilities keep failing to process it and give errors like "This is a binary file!". This capability can also come in handy when converting files from one type to another (see the section on 'tr'). If you decide you'd like to see all the characters in the file, the `-A' switch will fill the bill - `$' signs will show the end-of-lines (the buck stops here?), and `^I' will show the tabs.
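For instance, given a file with a tab and a DOS-style line ending (a made-up example), 'cat -A' makes all three special characters visible:

  $ printf 'one\ttwo\r\n' > dos.txt
  $ cat -A dos.txt
  one^Itwo^M$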

'-n' is another useful option. This one will number all the lines (you can use `-b' to number only the non-blank lines) of a file - very useful when you want to create a `line selector', i.e., whenever you want to have a "handle" for a specific line which you would then pass to another utility, say, 'sed' (which is very happy with line numbers).
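A quick illustration (the file name is hypothetical):

  cat -n notes.txt     # number every line, blank or not
  cat -b notes.txt     # number only the non-blank lines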

'cat' can also serve as a "mini-editor", if you need to insert more than a line or two into a file during the execution of your script. In most cases, the built-in 'read' function of 'bash' will take care of that sort of thing - but it is designed as more of a "question/reply" mechanism; 'cat' is a bit more useful for file input.

Last, but not least, 'cat' is very useful for displaying formatted text, e.g., the error messages at the beginning of a shell script.
Here are two script "snippets" for comparison:


  ...
  echo "'guess' - a shell script that reads your mind"
  echo "and runs the program you're thinking about."
  echo
  echo "Syntax:"
  echo
  echo "guess [-fnrs]"
  echo
  echo "-f    Force mode: if no mental activity is detected,"
  echo "      take a Scientific Wild-Ass Guess (SWAG) and execute."
  echo "-n    Read your neighbor's mind; commonly used to retrieve"
  echo "      the URLs of really good porno sites."
  echo "-r    Reboot brain via TCP (Telepathic Control Protocol) - for
  echo "      those times when you're drawing a complete blank."
  echo "-s    Read the supervisor's mind; implies the '-f' option."
  echo
  exit
  ...



  ...
  cat << !
  'guess' - a shell script that reads your mind
  and runs the program you're thinking about.

  Syntax:

  guess [-fnrs]

  -f    Force mode: if no mental activity is detected,
        take a Scientific Wild-Ass Guess (SWAG) and execute.
  -n    Read your neighbor's mind; commonly used to retrieve
        the URLs of really good porno sites.
  -r    Reboot brain via TCP (Telepathic Control Protocol) - for
        those times when you're drawing a complete blank.
  -s    Read the supervisor's mind; implies the '-f' option.

  !
  exit
  ...



Note that everything between the two exclamation points will be printed to 'stdout' (the screen) as formatted; the only requirement for "closing" the printable text is that the "!" must be on a line by itself, which allows the delimiter to be used as part of the text. Delimiters other than "!" may also be used.

I tend to think of 'cat' as an "initial processor" for text that will be further worked on with other tools. That's not to say that it's unimportant - in some cases, it's almost irreplaceable. Indeed, your 'cat' can do tricks that are not only entertaining but useful... and you don't even need a litter box.

tr

When it comes to "one character at a time" processing, this utility, despite its oddities in certain respects (e.g., characters specified by their ASCII value have to be entered in octal), is one of the most useful ones in our toolbox. Here's a script using it that replaces those "DOS-text-to-Unix" conversion utilities:



  #!/bin/bash
  [ -z $1 ] && { echo "d2u - converts DOS text to Unix."; echo \
       "Syntax: d2u <file>"; exit; }

  cat $1|tr -d '\015'


<grin> I guess I'd better take time to explain; I can already hear the screams of rage from all those folks who just learned about 'if' constructs in last month's column.

"What happened to that nice `if' statement you said we needed at the beginning of the script? and what's that `&&' thing?"

Believe it or not, it's all still there - at least the mechanism that makes the "right stuff" happen. Now, though, instead of using the structure of the statement and fitting our commands into the "slots" in the syntax, we use the return value of the commands, and make the logic do the work. Let's take a look at this very important concept.

Whenever you use a command, it returns a code on exit - typically 0 for success, and 1 for failure (exceptions are things like the 'length' function, which returns a value). Some programs return a variety of numbers for specific types of exits, which is why you'd normally want to test for zero versus non-zero, rather than testing for `1' specifically. You can implement the same mechanism in your scripts (this is good coding policy): if your script generates a variety of messages on different exit conditions, use 'exit n' as the last statement, where `n' is the code to be returned. The plain 'exit' statement returns 0. These codes, by the way, are invisible - they're internal "flags"; there's nothing printed on the screen, so don't bother looking.
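If you want to act on a code explicitly, the special variable '$?' holds the exit code of the last command; here's a small sketch (the file and search string are arbitrary examples):

  grep -q root /etc/passwd      # '-q' asks grep for a code only, no output
  if [ $? -ne 0 ]; then
      echo "No 'root' entry found."
      exit 1
  fi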

To test for them, 'bash' provides a simple mechanism - the reserved words `&&' (logical AND) and `||' (logical OR). In the script above, the statement basically says "if $1 has a length of zero, then the following statements (echo... echo... exit) should be executed". If you're not familiar with binary logic, this may be confusing, so here's a quick rundown that will suffice for our purposes: in an 'A && B' statement, if 'A' is true, then 'B' will also be true (i.e., if 'B' is a command, it will be executed). In an 'A || B' statement, if 'A' is false, then 'B' will be true (i.e., executed). The converse of either statement is obvious (a.k.a. "is left as an exercise for the student"). <grin>

As a comparison, here are two script fragments that do much the same thing:



  if [ -z $1 ]; then
      echo "Enter a parameter."
  else
      echo "Parameter entered."
  fi


 [ -z $1 ] && echo "Enter a parameter." || echo "Parameter entered."

You have to be a bit cautious about using the second version for anything more complex than "echo" statements: if you use a command in the part after the `&&' which returns a failure code, both it and the statements after `||' will be executed! This in itself can be useful, if that's what you need - but you have to be aware of how the mechanism works.
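Here's a one-line demonstration of that caveat, safe to try at the prompt:

  # 'false' always returns a failure code, so the '||' branch fires
  # even though the test before the '&&' succeeded.
  true && false || echo "Surprise - this prints!"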
 

Back to the original "d2u" script - note the use of the `\' character at the end of the second line: this `escape' character "cancels" the `end-of-line' character, making the following line a continuation of the current one. This is a neat trick for enhancing readability in scripts with long lines, allowing you to visually break them while maintaining program continuity. I put it between the "echo"  statement and the text string for a reason: whitespace (here, spaces and tabs) makes no difference to a command string but stands out like a beacon in text, creating ugly formatting problems. Make sure your line breaks happen in reasonable places in the command string - i.e., not in the middle of text or quoted syntax.

The "active" part of the script, "cat $1|tr -d '\015'", pipes the original text into 'tr', which deletes DOS's "CR/Carriage Return" character (0x0D), shown here in octal (\015). That's the bit... err, _byte_ that makes DOS text different from Unix text - we use just the "LF/Newline" character (0x0A), while DOS uses both (CR/LF). This is why Unix text looks like

        This is line one*This is line two*This is line three*

  in DOS, and DOS text like

        This is line one^M
        This is line two^M
        This is line three^M

  in Unix.
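Going the other direction is a job for a tool one step up the hierarchy, since 'tr' can delete or translate characters but can't insert them. Here's a sketch using GNU 'sed' (the '\r' escape in the replacement is a GNU extension; the file names are examples):

  # u2d: append a CR to the end of every line
  sed 's/$/\r/' unixfile > dosfile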

"A word to the wise" applicable to any budding shell-script writer: close study of the "tr" man page will pay off handsomely. This is a tool that you will find yourself using again and again.

head/tail

A very useful pair of tools, with mostly identical syntax. By default they print, respectively, the first/last 10 lines of a given file; the number and the units are easily changed via syntax. Here's a snippet that shows how to read a specific line in a file, using its line number as a "handle" (you may recall this from the discussion on "cat"):


  ...
  handle=5
  line="$(head -$handle $1|tail -1)"
  ...


Having defined `$handle' as `5', we use "head -$handle" to read a file specified on the command line and print all lines from 1 to 5; we then use "tail -1" to read only the last line of that. This can, of course, be done with more powerful tools like "sed"... but we won't get to that for a bit - and Taylor's law, above, is often a sensible guideline.
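For comparison, here's the 'sed' version of the same trick, using the same `$handle' variable ('-n' suppresses the default output, and 'p' prints only the addressed line):

  line="$(sed -n "${handle}p" $1)"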

These programs can also be used to "identify" very large files without the necessity of reading the whole thing; if you know that one of a number of very large databases contains a unique field name that identifies it as the one you want, you can do something like this:



  ...
  for fname in *dbf
  do
      head -c10k $fname|grep -is "cost_in_sheckels_per_cubit" && echo $fname
  done
  ...

(Yes, I realize we haven't covered 'grep' yet. I trust those readers that aren't familiar with it will use their "man" pages wisely... or hold their water until we get to that part. :)

So - the above case is simple enough; we take the first 10k bytes (you'd adjust it to whatever size chunk is necessary to capture all the field names) off the top of each database by using 'head', then use 'grep' to look for the string. If it's found, we print the name of the file. Those of you who have to deal with large numbers of multi-gigabyte databases can really appreciate this capability.

'tail' is interesting in its own way; one of the syntax differences is the '+' switch, which answers the question of "how do I read everything after the first X characters/lines?" Believe it or not, that can be a very important question - and a very difficult one to answer in any other way... (Also sprach The Voice of Bitter Experience.)
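A quick sketch of the `+' form, with "logfile" as a stand-in filename (GNU versions also accept the longer "-n +11" spelling):

  # Print everything AFTER the first 10 lines, i.e., start at line 11:
  tail +11 logfile
  # The same idea in bytes - start at the 101st byte:
  tail -c +101 logfile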

cut/paste

In my experience, 'cut' comes in for a lot more usage than 'paste' - it's very good at dealing with fields in formatted data, allowing you to separate out the info you need. As an example, let's say that you have a directory where you need to get a list of all the files that are 100k or more in size, once a week (logfiles over a size limit, perhaps). You can set up a "cron" job to e-mail you:



  ...
  ls -lr --sort=size $dir|tr -s ' '|cut -d ' ' -f 5,9|grep \
      -E '^[1-9][0-9]{5,} '|mail -s "Logfile info" joe@thefarm.com
  ...

'ls -lr --sort=size $dir' gives us a long listing of `$dir' sorted by size in `reverse' order (smallest to largest; note the '-l' - we need the long format to get the size field). We pipe that through "tr -s ' '" to collapse all repeated spaces to a single space, then use "cut" with space as a delimiter (now that the spaces are singular, we can actually use them to separate the fields) to return fields 5 and 9 (size and filename). We then use 'grep' to look at the very beginning of the line (where the size is listed) and print every line that starts with a non-zero digit followed by at least five more digits and a space - that is, any size of 100,000 bytes or more. The lines that match are piped into 'mail' and sent off to the recipient.

'paste' can be useful at times. The simplest way of describing it that I can think of is a "vertical 'cat'" - it merges files line by line,
instead of "head to tail". As an example, I had a long list of songs followed by the names of the groups that performed them, and I wanted the song names to be in quotes. The songs were separated from the names by tabs. Here was the solution:


  #!/bin/bash
  # Single-use file; no error checking

  cut -f 1 $1 > groups      # 'Tab' is the default separator
  cut -f 2- $1 > songs
  for n in $(seq $(grep -c '$' songs))
  do
      echo '"'>>quotes
  done

  paste -d "" quotes songs quotes > list1
  paste list1 groups > list

  rm quotes songs groups list1


So - I split the file in two, with the first fields going into "groups" and all the rest into "songs". Then, I created a file called "quotes" that contained the same number of double quotation marks as there were lines in the "songs" file by using 'grep' to count `end-of-line' characters in "songs" (the `$' character stands for `EOL' in regular expressions). The next part was up to 'paste' - the standard delimiter for it is `tab', which I replaced with an empty string (I wanted the quotes right next to the song names). Then, I pasted the "groups" file into the result with the default 'tab' as the separator - and it was done, all except for cleaning up the temporary
files.

grep

The "Vise-Grips" of Unix. :) This utility, as well as its more specialized  relatives 'fgrep' and 'egrep', is used primarily for searching files for matching text strings, using the 'regexp' (Regular Expression) mechanism. (There are actually two of these, the 'basic' and the 'extended', either one of which can be used; the 'basic' is the default for 'grep'.)

"Let's see now; I know the quote that I want is in of these 400+ text files in this directory - something about "Who hath desired the Sea". What was it, again?..."

  Odin:~$ grep -iA 12 "who hath desired the sea" *

  Poems.txt:Who hath desired the Sea? - the sight of salt water unbounded -
  Poems.txt-The heave and the halt and the hurl and the crash of the comber
  Poems.txt-    wind-hounded?
  Poems.txt-The sleek-barrelled swell before storm, grey, foamless, enormous,
  Poems.txt-    and growing -
  Poems.txt-Stark calm on the lap of the Line or the crazy-eyed hurricane
  Poems.txt-    blowing -
  Poems.txt-His Sea in no showing the same - his Sea and the same 'neath each
  Poems.txt-    showing:
  Poems.txt-        His Sea as she slackens or thrills?
  Poems.txt-So and no otherwise - so and no otherwise - hillmen desire their
  Poems.txt-    Hills!

  Odin:~$
 

"Yep, that was the one; so, it's in `Poems.txt'..."

'grep' has a wide variety of switches (the "-A <n>" switch that I used above determines the number of lines of context after the matched line that will be printed; the "-i" switch means "ignore case") that allow precise searches within a single file or a group of files, as well as specifying the type of output when a match is found (or conversely, when no match is found). I've used 'grep' in several of the "example" scripts so far, and use it, on the average, about a dozen times a day, command line and script usage together: the search for the above Kipling quote (including my muttered comments) happened just a few minutes before I sharpened my cursor and scribbled this paragraph.

You can also use it to search binary files, with 'strings' (a utility that prints only the text strings found in binary files) as a useful companion: an occasionally handy "last-ditch" procedure for those programs where the author has hidden the help/syntax info behind some obscure switch, and 'man', 'info', and the '/usr/doc/' directory all come up empty.
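For example - with "mystery-tool" standing in for whatever undocumented binary you happen to be prodding at:

  strings /usr/local/bin/mystery-tool|grep -i usage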

Often, there is a requirement for performing some task the same number of times as there are lines in a given file, e.g., reading in each line of a configuration file and parsing it. 'grep' helps us here, too:



  ...
  for n in $(grep -n '$' ~/.scheduler|cut -d: -f 1)
  do
      LINE=$(head -$n ~/.scheduler|tail -1)
      DATE=$(echo "$LINE"|cut -d ' ' -f 1)

      ...
      ...

  done


This is a snippet from a scheduling program I wrote some time ago; whenever I log in, it reminds me of appointments, etc. for that day. 'grep', in this instance, numbers the lines by polling every one of them with the "end-of-line" metacharacter ('$'), which matches every line in the file; 'cut' then keeps just those line numbers for the loop (the numbering is also used in further processing - not shown). Each line is then parsed into the date and text variables, and the script executes an "alarm and display" routine if the appointment date matches today's date.

Wrapping it up

In order to produce good shell scripts, you need to be very familiar with how all of these tools work - at the very least, have a good idea
what a given tool can and cannot do (you can always look up the exact syntax via 'man'). There are many other, more complex tools that are available to us - but these six programs will get you started and keep you going for a long time, as well as giving you a broad field of possibilities for script experimentation of your own.

Until next month -

Happy Linuxing!

"Script quote" of the month

I used to program my IBM PC to make hideous noises to wake me up. I also made the conscious decision to hard-code the alarm time into the program, so as to make it more difficult for me to reset it. After I realised that I was routinely getting up, editing the source file, recompiling the program and rerunning it for 15 minutes extra sleep, before going back to bed, I gave up and made the alarm time a command-line option.
--B.M. Buck

Copyright © 2000, Ben Okopnik
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


Special Method Attributes in Python

By Pramode C E


C++ programmers use 'operator overloading' to apply built-in operators to user defined classes. Thus, a complex number class may have an addition operator which makes it possible for us to use two objects of type 'complex' in an arithmetic expression in the same way we use integers or floating point numbers. The Python programming language provides much of the same functionality in a simple and elegant manner - using special method attributes. This is a quick introduction to some of the special methods through code snippets which we wrote while trying to digest the Python language reference. The code has been tested on Python ver 1.5.1.

Classes and objects in Python

Let us look at a simple class definition in Python:


class foo:
	def __init__(self, n):
		print 'Constructor called'
		self.n = n
		
	def hello(self):
		print 'Hello world'
		
	def change(self, n):
		print 'Changing self.n'
		self.n = n

f = foo(10)  # create an instance of class foo with a data field 'n' whose value is 10.
print f.n    # prints 10. Python does not support member access control the way C++ does.
f.m = 20     # you can even add new fields!
f.hello()    # prints 'Hello world'
foo.change(f, 40)
print f.n    # prints 40

The method __init__ is similar to a 'constructor' in C++. It is a 'special method' which is automatically called when an object is being created.

We see that all member functions have a parameter called 'self'. When we call f.hello(), we are actually calling a method belonging to class foo with the object 'f' as the first parameter. This parameter is usually called 'self'; it can be given any other name, but convention strongly encourages 'self'. It is even possible to call f.hello() as foo.hello(f). C++ programmers will see a parallel between 'self' and the C++ keyword 'this', through which the hidden first parameter of a member function invocation can be accessed.

The special method __add__

Consider the following class definition:

class foo:
	def __init__(self, n):
		self.n = n
	def __add__(self, right):
		t = foo(0) 
		t.n = self.n + right.n
		return t
		
Now we create two objects f1 and f2 of type 'foo' and add them up:

f1 = foo(10)  # f1.n is 10
f2 = foo(20)  # f2.n is 20
f3 = f1 + f2  
print f3.n    # prints 30

What happens when Python executes f1+f2? The interpreter simply calls f1.__add__(f2). So 'self' in the function __add__ refers to f1, and 'right' refers to f2.

Another flavor of __add__

Let us look at another flavor of the __add__ special method:

class foo:
	def __init__(self, n):
		self.n = n
	def __radd__(self, left):
		t = foo(0) 
		t.n = self.n + left.n
		print 'left.n is', left.n
		return t
	
f1 = foo(10)
f2 = foo(20)
f3 = f1 + f2  # prints 'left.n is 10'

The difference in this case is that f1+f2 is converted into f2.__radd__(f1).

How objects print themselves - the __str__ special method


class foo:
	def __init__(self, n1, n2):
		self.n1 = n1
		self.n2 = n2
	
	def __str__(self):
		return 'foo instance: '+'n1='+`self.n1`+','+'n2='+`self.n2`
		
The class foo defines a special method called __str__. We will see it in action if we run the following test code:

f1 = foo(10,20)
print f1    # prints 'foo instance: n1=10,n2=20'

The reader is encouraged to look up the Python Language Reference and see how a similar function, __repr__ works.

Truth value testing with __nonzero__

__nonzero__ is called to implement truth-value testing; it should return 0 or 1. When __nonzero__ is not defined, __len__ is called instead, if it is defined. If a class defines neither __len__ nor __nonzero__, all its instances are considered true. (This is, in essence, how the language reference defines __nonzero__.)

Let us put this to the test:


class foo:
	def __nonzero__(self):
		return 0
		
class baz:
	def __len__(self):
		return 0
		
class abc:
	pass
	

f1 = foo()
f2 = baz()
f3 = abc()
		
if (not f1): print 'foo: false'  # prints 'foo: false'
if (not f2): print 'baz: false'  # prints 'baz: false'
if (not f3): print 'abc: false'  # does not print anything

The magic of __getitem__

How would you like your object to behave like a list? The distinguishing feature of a list (or tuple, or what is in general called a 'sequence' type) is that it supports indexing. That is, you are able to write things like 'print a[i]'. There is a special method called __getitem__ which has been designed to support indexing on user defined objects. Here is an example:

class foo:
	def __init__(self, limit):
		self.limit = limit
		
	def __getitem__(self, key):
		if ((key > self.limit-1) or (key < -(self.limit-1))):
			raise IndexError
		else:
			return 2*key
			
f = foo(10)       # f accepts indices from -9 to 9
print f[0], f[1]  # prints 0, 2
print f[-3]       # prints -6
print f[10]       # generates IndexError
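
As a side benefit, defining __getitem__ to raise IndexError past the end means that Python's 'for' loop can walk over our object: the loop simply calls f[0], f[1], f[2]... until IndexError stops it. A small sketch, reusing the class above:

f = foo(10)
for x in f:      # calls f[0], f[1], ... until IndexError
	print x,     # prints 0 2 4 6 8 10 12 14 16 18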

There are additional special methods available, like __setitem__, __delitem__, __getslice__, __setslice__ and __delslice__; a small sketch of __setitem__ follows.
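
Here is a minimal sketch of __setitem__ (a toy example of our own, not from the language reference). The assignment b[key] = value is translated into b.__setitem__(key, value):

class bag:
	def __init__(self):
		self.data = {}
	def __setitem__(self, key, value):
		self.data[key] = value
	def __getitem__(self, key):
		return self.data[key]

b = bag()
b['x'] = 42     # calls b.__setitem__('x', 42)
print b['x']    # calls b.__getitem__('x'); prints 42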

Attribute access with __getattr__

__getattr__(self,name) is called when attribute lookup fails to find the attribute whose name is given as the second argument. It should either raise an AttributeError exception or return a computed attribute value.

class foo:
	def __getattr__(self, name):
		return 'no such attribute'
		
f = foo()
f.n = 100
print f.n     # prints 100
print f.m     # prints 'no such attribute'

Note that we also have a builtin function getattr(object, name). Thus, getattr(f, 'n') returns 100 and getattr(f,'m') returns the string 'no such attribute'. It is easy to implement stuff like delegation using getattr. Here is an example:

class Boss:
	def __init__(self, delegate):
		self.d = delegate
		
	def credits(self):
		print 'I am the great boss, I did all that amazing stuff'
		
	def __getattr__(self, name):
		return getattr(self.d, name)
		
	
class Worker:
	def work(self):
		print 'Sigh, I am the worker, and I get to do all the work'
		
	
w = Worker()	
b = Boss(w)
b.credits()  # prints 'I am the great boss, I did all that amazing stuff'
b.work()     # prints 'Sigh, I am the worker, and I get to do all the work'

Further Reading

The Python distribution comes with excellent documentation, which includes a tutorial, language reference and library reference. If you are a beginner to Python, you should start by reading Guido's excellent tutorial. Then you can browse through the language reference and library reference. This article was written with the help of the language reference.


Copyright © 2000, Pramode C E
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


A Laptop with Linux Preinstalled

By Richard Sevenich


Those who use Linux and occasionally travel in their work are highly motivated to purchase a laptop. I did so in September of 1999 and thought it worthwhile to share my experience. It is now becoming easier to install Linux on a laptop, but at that time I was leery of purchasing a machine and finding some of the hardware to be incompatible with Linux. After some searching I found a reasonably priced laptop with Linux preinstalled, the Nflux, from theLinuxStore (now part of EBIZ). Here are the advertised specs:

CPU             AMD K6-2 3D 300
Display         13.3" XVGA (1024x768) TFT Color LCD
RAM             64 Mbyte
Graphics card   NeoMagic NM2160 128-bit, 2 MB RAM
Hard drive      4.3 Gbyte
Mouse           touchpad, ps/2-compatible
CD-ROM          24X
Floppy          3.5", 1.44 Mbyte
Sound           16-bit, microphone, speakers
External ports  monitor, parallel, fast IrDA, USB, etc.

I also ordered PCMCIA cards for modem and ethernet capabilities and an extra battery. By the time the dust cleared, I was still under $1800 - quite reasonable. The laptop was ordered on September 30, promised after 5 working days, and appeared in mid October, quite normal slippage for 'email order' hardware, in my experience. The delivered machine worked fine; I was very pleased with its capabilities and functionality. I separately purchased a cheap ps/2 mouse, preferring that to a touchpad. For home use, a printer was easily configured. I was up and running immediately, but there were two shortcomings.

The latter shortcoming I could and did deal with myself.

I called theLinuxStore for the missing 32 Mbyte of RAM and after a bit of a delay a RAM chip appeared, but it was the wrong one (adding 8 Mbyte, rather than 32 Mbyte). So I called, got an RMA number, and shipped it back. Eventually another chip appeared, 128 Mbyte this time - unfortunately it was incompatible with my machine. So I called, got an RMA number, and shipped it back (note how I could cut and paste that sentence from above). The third time was the charm - I received a compatible chip, installed it, and the laptop was finally up to spec. It was now mid March, 2000, approximately 6 months after the initial order. Part of the 6 month interval was due to my travel, but most to vendor latency.

However, during this entire time period I was able to happily use the machine, hand-carrying it carefully on two business trips. It was incredibly useful and productive to have it along. These were training course trips where I presented a low-level, beginning Linux device driver course. In the evenings after a day of training, I was able to check out various things that had come up during the day - really handy.

I received cheerful and responsive service through this rather disconcerting 32 Mbyte RAM scenario (thanks to Tiffany Johnson at theLinuxStore). Further, with 32 Mbyte of RAM the machine was serviceable enough for my needs. It would have been worse if the machine were not usable - unfortunately, that came next. The panel display went out! The laptop was now unusable! So I called, got an RMA number, and shipped it back (cut and paste, again). UPS tracking indicated that it was received by theLinuxStore on April 17. Several weeks later I called theLinuxStore and was able to determine that they had shipped the machine back to their hardware vendor, but could get no other information. Currently it is May 23, and my calls to theLinuxStore inquiring when the machine might be returned remain unanswered. I need it for an upcoming trip - I hope it shows up.

Laptop functionality is really wonderful to have, if you need it, and I do. This machine performed well, but was not delivered as specified and ultimately broke. In my case, the seller was not the ultimate hardware vendor - so there can be an extra step in the problem resolution process, adding time and uncertainty to returns. The verdict is really not in yet, but the summary is not encouraging.

I'll share the resolution in a subsequent submission to Linux Gazette.

[Readers, use the Talkback button below to discuss your experiences using Linux laptops from this vendor or other vendors. -Ed.]


Copyright © 2000, Richard Sevenich
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


Building a Secure Gateway System

By Chris Stoddard


Introduction

In issue 51 of the Linux Gazette, the article titled "Private Networks and RoadRunner using IP masquerading" explains how to set up a Linux-based gateway with good security in mind. The authors suggest starting with a clean install of Linux, which is an excellent idea - security starts with a secure install, and that is what this article is about. When finished, this will be a very lean install, weighing in at about 130 MB plus swap. There will be no X Windows, though I like to install Midnight Commander for file management.

I'm going to make a couple of assumptions here. First, you know how to install Linux and are familiar with its use. Second, you are setting up a gateway computer permanently attached to the internet - be it by cable modem, DSL or whatever - which will not be used for anything else, like an FTP, telnet or web server.

What you will need

My machine is an old Dell Optiplex 466/MXe: a 486 DX2 66 with 16 MB of RAM, a 512 MB hard drive, a sound card and a 4X IDE CDROM. I acquired it for $50 and upgraded it to a 486DX4 100 with 40 MB of RAM; I removed the sound card and added two network cards, a SCSI card and a 320 MB SCSI hard drive, all of which I had in spare parts. The minimum system you will need is a 486 (any flavor), 16 MB of RAM, a 200 MB hard drive, two network cards and either a CDROM or the ability to do a network install. You will also need a copy of RedHat Linux 6.x; although any distribution will work just fine, I will only cover RedHat. The system will only need a monitor during the install - after that it can run headless and be administered remotely using OpenSSH.

Before you begin, go to ftp://ftp.redhat.com, then download and copy to floppy disks the following:

If you are using RedHat 6.2, the previous files are unnecessary. Go to ftp://thermo.stat.ncsu.edu/pub/openssh-usa and again, download and copy to disk:

Installing and configuring Linux

I will only be covering the items which deviate from the default settings.

  1. Choose a custom install. When Disk Druid comes up, make the following partitions.
    Partition     Minimum size     % of total        Mine
    /                    40 MB            10%       75 MB
    /boot                 5 MB            5 MB       5 MB
    /home               100 MB            25%      200 MB
    /tmp                 40 MB            10%       75 MB
    /usr                220 MB            45%      320 MB [1]
    /var                 40 MB            10%       75 MB
    swap                 64 MB         2X RAM       80 MB [2]


    [1] For simplicity, I used the entire SCSI drive.

    [2] In reality, you could make the swap partition equal to your RAM size or even smaller. I suggest larger in case you want to set up a web or ftp site later.

    This chart shows roughly how to divide up your hard drive. The minimums are just that; if your hard drive is larger than 512 MB, use the percentages after the swap and /boot sizes have been taken out. If your drive is smaller than 512 MB, just make a swap partition and a root partition. By doing this, if an intruder does get in, he will not be able to fill up your hard drive by writing large files to either the /tmp or the /home directories. It also lets you do some interesting things in /etc/fstab, like setting nosuid and nodev on /tmp and /home (see the sketch after this list). Some people will ask why I dedicate such a large chunk of drive space to the /home partition when, in theory, this system won't have many, if any, real users. The answer is: room for transferring files to and from remote locations, like sharing MP3's or work files.

  2. When selecting the components to install, only choose Networked Workstation, Network Management Workstation, Utilities and Select Individual Packages. If you are using RedHat 6.2 and did not download the updated RPM's, select Lynx, so it is installed.

    Deselect the following packages: git, finger, ftp, fwhois, ncftp, rsh, rsync, talk, telnet, ghostscript, ghostscript-fonts, mpage, rhs-printfilters, arpwatch, bind-utils, knfsd-clients, procinfo, rdate, rdist, screen, ucd-snmp-utils, chkfontpath, yp-tools, XFree86-xfs, lpr, pidentd, portmap, routed, rusers, rwho, tftp, ucd-snmp, ypbind, XFree86-libs, libpng, XFree86-75dpi-fonts, urw-fonts

  3. After the system reboots, log in as root and type the following command line, to clean out the packages the install program doesn't let you deselect:

    rpm -e --nodeps pump mt-st eject bc mailcap apmd \
        kernel-pcmcia-cs getty_ps setconsole setserial raidtools \
        rmt sendmail


    You may also want to consider removing Linuxconf, kudzu, kbdconfig, authconfig, timeconfig, mouseconfig, ntsysv and setuptool, depending on your skill level. All of the above packages are either security risks, such as rsh, or not needed, like the XFree86 fonts.

  4. Copy all the rpm's you downloaded from RedHat to a couple of floppies, take them to the newly installed machine, mount the floppy drive with mount -t msdos /dev/fd0 /mnt/floppy, then install the files by typing rpm -Uvh /mnt/floppy/*.rpm

  5. Copy all the OpenSSH files to a floppy disk, again take it to the newly installed system, mount the floppy disk by typing mount -t msdos /dev/fd0 /mnt/floppy, and type rpm -ivh /mnt/floppy/open*. Change into the /etc/ssh directory, open sshd_config, look for "PermitRootLogin yes" and change it to "PermitRootLogin no". This will cause the system to deny access to anyone trying to log onto the system as root from a remote machine. If you need root access remotely, log on as a normal user, then use the su command.
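
    As promised back in step 1, here is a rough sketch of /etc/fstab entries with nosuid and nodev set on /tmp and /home (the device names here are my assumptions - match them to your own partition layout):

    /dev/hda5    /tmp     ext2    defaults,nosuid,nodev    1 2
    /dev/hda6    /home    ext2    defaults,nosuid,nodev    1 2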

Final Notes

I am not going to go into detail about setting up a good firewall, "Private Networks and RoadRunner using IP Masquerading" does an excellent job of that, however I have a couple of suggestions.

I believe that, for security purposes, DNS services should not be placed on the firewall system; either each client should be set up individually to use your internet service provider for DNS, or a different machine on the network should be configured to act as a DNS server. Further, I feel no inetd services should be run on the firewall machine either; the only port which should be open is port 22, the ssh port. As a rule, I delete the inetd.conf file and replace it with an empty one, using "touch /etc/inetd.conf".

If you have more than two or three users on the system, you may want to consider using Squid, which is a web proxy/caching program. This speeds things up by keeping copies of often-visited web sites on the local machine. It can also be used to block web sites, which can be useful if there are under-age users in the house. If you decide to use Squid, I recommend at least a 1 GB hard drive, 32 MB of RAM and a 486DX2/66 processor. Squid can be installed off the RedHat CD. Alternately, you can install Junkbuster, which is also a proxy program; it does not cache web sites and therefore will not require a larger hard drive, more RAM or a faster processor. What it does is block ad banners, which, depending on the sites you visit, will speed things up and keep these companies from gathering information about you. Junkbuster can be downloaded from http://www.waldherr.org/junkbuster.

For easy firewall construction, you should download either Seawall or pmfirewall; these are ipchains-based firewall programs designed for simplicity. I have tried both; they work as promised and will save you the trouble of learning ipchains. Seawall is harder to set up but has more configuration options; pmfirewall is easier to set up but has fewer options.
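
For the curious, the heart of what these tools generate is only a few lines of ipchains. Here is a bare-bones sketch, assuming your internal network is 192.168.1.0/24 (the real tools add many more rules and sanity checks):

  # Turn on kernel packet forwarding, deny forwarding by default,
  # then masquerade only the traffic from the internal network:
  echo 1 > /proc/sys/net/ipv4/ip_forward
  ipchains -P forward DENY
  ipchains -A forward -s 192.168.1.0/24 -j MASQ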

Finished

Now go back to "Private Networks and RoadRunner using IP Masquerading" and finish configuring the gateway. Please remember this is not the end-all and be-all of Linux security; it simply gives you a solid starting point. For a master's tutorial on Linux security, see http://packetstorm.securify.com/papers/unix/Securing-Optimizing-RH-Linux-1_2.pdf. This document is massive at 475 pages, but the first two chapters alone are worth the read.


Copyright © 2000, Chris Stoddard
Published in Issue 54 of Linux Gazette, June 2000

"Linux Gazette...making Linux just a little more fun!"


The Back Page


About This Month's Authors


Carlos Betancourt

Carlos is a young Venezuelan who has been a Linux and GNU philosophy advocate since 1996. Working as a research student for the University of Tachira's (UNET) network development laboratory (CETI) since 1995, he has been involved in the development of the University's network infrastructure and services. In a University with no Unix roots, Carlos was part of the research team of students in charge of learning, testing, teaching, supporting and implementing Unix technologies (by means of Linux, Solaris and HP-UX) for UNET's new computer infrastructure, as well as assistant administrator of internet services. His hobbies include astronomy, electronics, reading science fiction, classical and political literature, [astro]photography and poetry. He currently lives in Brussels, Belgium, where he recently married.

Shane Collinge

Part computer programmer, part cartoonist, part Mars Bar. At night, he runs around in a pair of colorful tights fighting criminals. During the day... well, he just runs around. He eats when he's hungry and sleeps when he's sleepy.

Fernando Correa

Fernando is a computer analyst just about to graduate from the Federal University of Rio de Janeiro. He and his staff have built the best Linux portal in Brazil, and he has further plans to improve services and content for their Internet users.

Ray Ferrari

I am a new Linux enthusiast who has been following the trend for over a year now. I have successfully installed Debian and participate in helping bring Linux to more people. I have been working with computers on my own for seven years, learning as much as possible. I am currently looking for a sales position within the Linux community; talks are under way with VALinux, my dream company. I have been a volunteer for both Debian and LPI.

Mark Nielsen

Mark founded The Computer Underground, Inc. in June of 1998. Since then, he has been working on Linux solutions for his customers ranging from custom computer hardware sales to programming and networking. Mark specializes in Perl, SQL, and HTML programming along with Beowulf clusters. Mark believes in the concept of contributing back to the Linux community which helped to start his company. Mark and his employees are always looking for exciting projects to do.

Krassimir Petrov

Krassimir has a PhD in Agricultural Economics from Ohio State University. He also has an MA in Economics and a BA in Business (Finance, Accounting, Management).

Ben Okopnik

A cyberjack-of-all-trades, Ben wanders the world in his 38' sailboat, building networks and hacking on hardware and software whenever he runs out of cruising money. He's been playing and working with computers since the Elder Days (anybody remember the Elf II?), and isn't about to stop any time soon.

Pramode C.E and Gopakumar C.E

Pramode works as a teacher and programmer while Gopakumar is an engineering student who likes to play with Linux and electronic circuits.

Richard Sevenich

Richard is a Professor of Computer Science at Eastern Washington University in Cheney, WA and also teaches occasional introductory Linux device driver courses in the commercial sector for UniForum. His computer science interests include device drivers, Fuzzy Logic, Application-Specific Languages, and State Languages for Industrial Control. He has been an enthusiastic user of Linux since 1993.

Chris Stoddard

I work for Dell Computer Corporation doing "not Linux" stuff. I have been using computers since 1979 and Linux since sometime in 1994 - exclusively since 1997. My main interests are networking implementations, servers, security, Beowulf clusters, etc. I hope someday to quit my day job and become the Shepherd of a Linux Farm.


Not Linux


I really should change the title of this column, because most of the material is about Linux....

I'm really happy with how the Answer Gang is turning out. Thanks also to Michael "Alex" Williams for helping to format the Mailbag and 2-Cent Tips starting this month.

Linux Gazette is now available via anonymous rsync. This is good news for our mirrors, because it decreases their update bandwidth. See the FAQ, question 14 for instructions.

I've got a new title now, Chief Gazetteer. This was invented by our sysadmin assistant Rory Krause.

Michael Orr
Editor, Linux Gazette, gazette@ssc.com


This page written and maintained by the Editor of the Linux Gazette.
Copyright © 2000, gazette@ssc.com
Published in Issue 54 of Linux Gazette, June 2000