The Mailbag
HELP WANTED : Article Ideas
Submit comments about articles, or articles themselves (after reading our guidelines) to The Editors of Linux Gazette, and technical answers and tips about Linux to The Answer Gang.
installing mandrake 10.0
Wed, 24 Mar 2004 20:58:16 +0000
spb (
t14497867 from netscape.net)
Greetings. I installed disks 1 and 2 of Linux Mandrake 10.0 as an upgrade
to my Linux Mandrake 9.2; immediately everything froze on screen. Need an
answer -- my 9.2 works OK. Thanks.
Wanted point number 1: clearer requests.
-- Heather
[Thomas]
Hi, unfortunately I often have trouble deciphering all that information
that you poured into such a well thought out e-mail. You obviously spent a
lot of time thinking about your problem, listing all the symptoms, etc.
You should give yourself credit for it.
Here's a tip though:
http://linuxgazette.net/tag/ask-the-gang.html
Oh, if you send HTML e-mail to this list again, there's a high chance
you'll get ignored. Seriously, we need more information about what is
freezing, whether or not you completed your upgrade, and whether or not your
kernel boots....etc.... Otherwise, how can we help you?
Greetings, Answer Gang. I am replying to your mail of yesterday.
Alas, he still sent text+html email. Wanted point number 2: I can't
use your webmail's HTML folks. Don't waste the bandwidth (about 3x
the space!) sending those extra bits.
-- Heather
The computer I am using today has the Microsoft ME OS; I cannot send any
info to you from my Linux PC. When I loaded the upgrade Mandrake 10.0 on
top of my Mandrake 9.2 (purchased from Mandrake Central USA), after some
time (I thought it had installed 10.0) it returned to the welcome desktop
screen, and from there it is completely frozen -- the mouse, the arrow
keys, the Tab key and the function keys. So I restart many times (by
powering off and on only, my only option at the moment) and I arrive each
time at this situation. I cannot get to the BIOS settings to change the
device priority for startup, to use a boot disk or floppy.
[Thomas]
Presumably you managed to boot off of a CD just fine. In that case, I
suggest (no, I urge you) to download knoppix [1] and boot from that. If
you don't have the facilities to burn CDs, then find someone who can. Why
exactly can't you access your BIOS?
That's where you come in, dear readers :D
-- Heather
I purchased Mandrake 10.0 from
Linuz.org in the USA; they sent two CDs by airmail without any other info
except the invoice.
[Thomas]
Also, as this is Mandrake, I suggest when you next boot into Linux, you issue:
linux 3
(at the LILO/Grub prompt) so that you're forced into a text-only mode.
Hopefully you will be able to report back to us whether you can log in
or not, and whether your keyboard is still locked, etc.
Please advise a good Linux tutorial book for home users;
I am not an IT boffin. Thank you. spbramwell.
[Thomas]
The book I used was "Running Linux" by Matt Welsh, et al., published by
O'Reilly. You'll find it on Amazon, no problem.
[Heather]
The book I got started with was "Unix as a Second Language" by Sobell,
which has now become several books for different flavors of UNIX. The
current one of the Linux flavor shows a penguin belly-flopping down a
snow bank. It also got renamed, though ... the name escapes me at this
hour ... Any of the series, though, will definitely make your starting
out a little more fun.
Readers: more good book suggestions?
More Cool Answers
Wed, 31 Mar 2004 11:35:38 -0800
Heather Stern (
The Answer Gang's Editor Gal)
There's some dark chocolate waiting in the Answer Gang's back lounge for
new folk inclined to send in their good tips or a nice long chat about
how some useful part of Linux really works. You don't have to join the
TAG list either -- just send your bits in to tag@linuxgazette.net.
GENERAL MAIL
My sig, and Linux Gazette... :)
Mon, 8 Mar 2004 02:18:52 -0800 (PST)
Dave Bechtel (
kingneutron from yahoo.com)
Question by carla (carla from bratgrrl.com)
re: http://linuxgazette.net/100/lg_mail.html
Glad you liked my .sig.
I have fond feelings for the original Muppets,
including the Sesame Street ones. Even have some mpeg's of them, such as two
aliens and a hippie singing "Manomonot" on the Muppet Show, as well as the
Intro to the Fraggle Rock show -- in Swedish (I think - it's called
"Fragglarna".)
IIRC, I saw a similar 1337 .sig on Slashdot or somewhere a few years ago,
and adapted it for the Muppets. I got a real charge out of seeing it posted
in LG.Net (twice now, no less!) and the response it generated. LOL.
(You have my permission to publish this letter or share with LG.Net colleagues
if you see fit.)
Best wishes. (And LG, please tell Thomas Adam I'm sorry for being so short
with him. I was going through a bad time, and ended up having to move in
order to get away from the situation.)
Linux, making Sesame Street more fun...
-- Heather
[cc] Looking for Stephen Bint
Thu, 1 Apr 2004 00:20:47 -0500
Ben Okopnik (
LG Technical Editor)
Question by Heather (star from starshine.org)
Hello Heather,
Hi, Gianfranco -
I'm not Heather, but I'm the fellow who receives mail at the editor@
address these days.
I lost contact to Stephen Bint who used
to be a member in the Answer Gang. Messages
to him are bouncing.
Please be so kind and let him know
that he should get in touch with me.
Thank you very much.
--
gianfranco accardo
gfa2c gmx.net
Stephen Bint was never a member of The Answer Gang; he wrote a couple of
articles for LG. I'm afraid we have no way to contact him beyond his
email, and given his own statement of his lifestyle:
Stephen is a homeless Englishman who lives in a tent in the woods. He eats out
of bins and smokes cigarette butts he finds on the road. Though he once worked
for a short time as a C programmer, he prefers to describe himself as a "keen
amateur".
- losing track of him is not an unlikely occurrence. I'll CC Heather on
this, but I doubt that she'll be able to help you any more than I could.
Presuming that he occasionally sneaks into a cybercafe to read LG and
write an article now and then, we'll pub the request in the Gazette and
see if he responds. To which end, I've left gianfranco's address
visible. I apologize in advance if the spambeasts find it too.
-- Heather
GAZETTE MATTERS
Back issues of lg as pdbs
Mon, 1 Mar 2004 11:56:12 -0000 (GMT)
Alan Pope (
alan from popey.com)
Hi all,
I spoke with Thomas on IRC last night about this but it was late and I'm
not sure I made myself clear.
I'd like to be able to read the lg on my palm device. Now I know I can
click the TWDT link on the front page of the site, whereupon some magic
cgi-bin foo generates a pdb file for me to download. I guess this is
taking the text version of TWDT and generating the pdb on the fly?
My question is this. That process is fine and dandy for the current
release, but I'd like to read older issues on my palm. Can the pdbs be
made available on the ftp site?
[Sluggo]
linuxgazette.net doesn't have an FTP site. It now uses a portion of
the website for tarball downloads.
The biggest issue with putting Palm-format files in that directory is
they will be picked up by the mirrors. We'd have to check with the
mirrors whether the bandwidth/size would be a hardship. We also look
at how widely used the files would be. The tarballs can be used on any
platforms with any OS. Palm files work only with certain brands of
palmtops and exclude everything else.
[Thomas]
Hence I agree that generating them on-the-fly (talking of that,
congratulations, Ben!!) is a far better thing to do.
[Ben]
Actually, AFAIK, the Palm Reader is available for several different
platforms; certainly for Wind0ws, WinCE and both of the Linux distros
made for the iPaq. However, I agree that we shouldn't clutter our
tarballs with these things; that was the point of doing this stuff "on
the fly" for the folks who want it.
I could convert the files myself, I suppose, which would probably involve
lynxing the TWDT text version of the file and then using "some tool" to
generate the pdb from the .txt file. However, I just wondered whether, as
that particular wheel has already been invented, it might save some work?
Cheers,
Al.
[Ben]
Nope, there's nothing set up. I'd suggest grabbing "bibelot" from
Freshmeat and converting whichever TWDT you'd like. Here's the simplest
way I can think of:
lynx -dump -nolist http://linuxgazette.net/issueXX/TWDT.html|bibelot -f -t twdtXX.pdb
where XX is the number of the issue. If you wanted to do a bunch of them
in one shot, you could even do a "for" loop:
for n in `seq $first $last`; do ... done
where $first and $last would delimit the range of issues that you want to convert.
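Putting those two pieces together, the whole loop might read like this (an
untested sketch; adjust the issue range to taste):
for n in `seq 90 100`; do
    lynx -dump -nolist http://linuxgazette.net/issue$n/TWDT.html | bibelot -f -t twdt$n.pdb
done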
Sure, and I wasn't suggesting that you should add pdbs to the tarball or
the ftp server -- just asking: if the process is already there to generate
pdbs, why not make it available for back issues too?
Hence me saying "why reinvent the wheel?"
I don't have a problem downloading the TWDTs and converting them; I just
thought it would be easier to use what's already there.
Cheers,
Al.
More 2 Cent Tips!
See also: The Answer Gang's
Knowledge Base
and the LG
Search Engine
Pushing Files To Multiple Hosts
Sun, 07 Mar 2004 20:46:45 -0500
Sean Johnson (
sean from gutenpress.org)
While it might be overkill for your situation, this is a perfect place
to use cfengine ( http://www.cfengine.org ).
Perhaps I should write up an article for Linux Gazette?
Cheers,
Sean
[Thomas]
You're more than welcome to do so. Author submission guidelines can be
found in the FAQ, found here:
http://linuxgazette.net/faq/author.html
Connecting Mac OS9/10 to a Linux Samba Domain
Thu, 05 Feb 2004 12:22:12 +0300
Thomas Adam, Breen Mullins (
The LG Answer Gang)
Question by JG Nasser Olwero (jgnasser from mpala.org)
I run Samba 2.2.3a on RH Linux 7.3 as the Domain Controller. I have all
my Windows clients connecting fine to it but have trouble with Mac
clients, no idea how to log them on. I also attempted to have the Mac
client connect to the Linux POP3 and SMTP server (Sendmail) to no avail
probably because the Mac is not welcome on the network. I am connecting
the Mac using wired ethernet to a Network switch.
[Thomas]
You need to ensure that you're using the 'appletalk' protocol. This has to
be enabled in the kernel. There are also userspace programs that are
needed for this.
http://ftp.linux.org.uk/pub/linux/appletalk
might be of interest.
Hope That Helps
[Breen]
You don't use AppleTalk to connect to a POP or SMTP server. That's pure
TCP/IP.
If we're talking about a Mac OS X client, that comes with Windows
filesharing built in. Classic Mac OS is of course a different problem.
Actually, MacOS X knows how to speak Samba/mswin sharing now too; their
client side tool is called DAVE, and it used to be third party software
for MacOS 9. If you want to install Mac style sharing on your Linux
box, the app you're looking for is called netatalk. I used it years ago
and it was a breeze to setup - had 'em working faster than their mswin
cousins in the same office.
-- Heather
[Breen]
If you're using OS X you can call up a connect dialog with Cmd-K and
enter the address smb://<ip_of_server>. You'll need an IP address, of course
-- check your network preferences pane to make sure.
Beyond that, you're probably looking at a Mac client issue. You might
try asking for help at a Mac specific site. (I recommend
http://forums.macosxhints.com, if you're using X.)
CDROM not seen by RH9
Thu, 11 Mar 2004 16:29:46 -0500
Thomas Adam (
The LG Weekend Mechanic)
Question by Joseph Lalingo (ah300 from torfree.net)
Hi,
I installed Red Hat Linux 9 via the CD-RW successfully, but the CD-ROM
was not seen. I know the CD-ROM is still connected internally, as I
haven't interfered with the system's insides (it came with a CD-ROM and
a CD-RW). The CD-RW is recognized as the CD-ROM, and there is now NO
/mnt/cdrom1 but there IS a /mnt/cdrom.
The CD-ROM door does not open, yet its light is on.
Joe
[Thomas]
"The lights are on but there's no one home". /mnt/cdrom is the mount-point
location of your cdrom drive. It is arbitrary and you can use anything you
like. These are defined (or should be) in /etc/fstab.
If you look in that file, you should have a line similar to:
/dev/cdrom /cdrom iso9660 ro,user,noauto 0 0
Here, /dev/cdrom is in turn a symlink that points to my main cdrom device:
/dev/hdd. This then gets mounted to /cdrom, when I issue the command:
mount /cdrom
I suspect your troubles come from you missing an entry in /etc/fstab. If
you wanted to mount the second drive as /mnt/cdrom1, then look at the
existing line in the /etc/fstab file, and modify it to reflect the new
drive.
The device name (/dev/xxx) can be found by viewing:
dmesg | less
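For example, if dmesg shows the second drive as /dev/hdc (only a guess --
yours may well differ), create the mount point with "mkdir /mnt/cdrom1"
and add a line along these lines:
/dev/hdc   /mnt/cdrom1   iso9660   ro,user,noauto   0 0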
In Short, Dig This
Wed, 31 Mar 2004 01:45:38 -0800
Jim Dennis (
the LG Answer Guy)
Possibly there's not a sysadmin around who hasn't needed to do a host
lookup now and then, to make sure they know what addresses are really
being found when a DNS lookup is made.
nslookup is deprecated, host can be confusing, dig is the nice tool for
the job - regardless of attempts to claim it is old too, it will be
around a long time. But who really wants to get a long listing full of
semicolon comments and things?
; <<>> DiG 9.2.3rc4 <<>> linuxgazette.net
;; global options: printcmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 605
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2
;; QUESTION SECTION:
;linuxgazette.net. IN A
;; ANSWER SECTION:
linuxgazette.net. 86400 IN A 64.246.26.120
;; AUTHORITY SECTION:
linuxgazette.net. 86400 IN NS ns1.linuxmafia.com.
linuxgazette.net. 86400 IN NS ns1.genetikayos.com.
;; ADDITIONAL SECTION:
ns1.linuxmafia.com. 61864 IN A 198.144.195.186
ns1.genetikayos.com. 61864 IN A 64.246.26.120
;; Query time: 153 msec
;; SERVER: 216.240.40.162#53(216.240.40.162)
;; WHEN: Wed Mar 31 01:39:18 2004
;; MSG SIZE rcvd: 144
Unless I'm tracking the path of authority rather than just checking the
address, I don't care either.
(jimd@phobos) ~$ dig +short a linuxgazette.net
64.246.26.120
Short, sweet, and to the point. Replace "a" with "mx" or "ns" as you
please, but this is a lot handier for scripting; I don't have to
invoke my talent for awk and grep one-liners on DNS checks anymore.
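For instance, a throwaway loop to check a handful of domains (a sketch;
the hostnames are just examples):
for h in linuxgazette.net linuxmafia.com; do
    echo "$h A:  $(dig +short a $h)"
    echo "$h MX: $(dig +short mx $h)"
done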
DNS proxy/cache (Tip)
Tue, 16 Mar 2004 00:11:33 +0100
Karl-Heinz Herrmann (
kh1dump from khherrmann.de)
Hi,
I had an annoying little problem: my home network has grown to 3 PCs --
one directly on the phone line, the others connected via WLAN. Usually I
would pick one dial-up provider and stick with that. Unfortunately, the
German ISPs are a big mess of call-by-call providers with constantly
changing tariffs.
The directly connected box is only the dial-in and firewall/NAT Router,
the other two are my Laptop and desktop.
The annoying problem: every time I changed providers I had to change
resolv.conf on all systems according to the new nameservers as
transmitted via the [i]ppp protocol.
-
My solution: dproxy
- http://dproxy.sourceforge.net
It serves as a proxy/cache for DNS lookups. It uses regular sys-calls
for name lookups and reacts instantly (no kill -HUP or similar) to new
entries in /etc/resolv.conf. This runs on the router, of course, and
every time pppd changes resolv.conf for the new provider, dproxy simply
uses the new values.
The other two machines have the router as their nameserver and always get
the correct information (even offline, though then an actual connection
is of course not possible). No manual changing anymore.
K.-H.
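For anyone copying this setup, the client side is nothing more than a
one-line /etc/resolv.conf pointing at the router (the address below is
only a placeholder for your router's LAN address):
# /etc/resolv.conf on the laptop and the desktop
nameserver 192.168.1.1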
Making filenames lowercase
Wed, 31 Mar 2004 10:05:38 -0500
Ben Okopnik (
LG Technical Editor)
Sometimes, despite our best efforts with "unzip -L", we end up with a
bunch of files the names of which are ALL IN CAPS. The easy way to deal
with these is with a simple utility that I call "lc". (Also, should you
ever need such a thing, creating a complementary "uc" would be an
obvious modification.)
#!/usr/bin/perl -w
# Created by Ben Okopnik on Fri Jul 25 09:13:22 EDT 2003
die "Usage: ", $0 =~ /([^\/]+)$/, " <FILE[S]_TO_LOWERCASE>\n" unless @ARGV;
rename $_, lc for @ARGV
Note that you can specify multiple files or even shell wildcards at the
command line; it's perfectly happy to chew on whatever you supply.
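And for the record, a sketch of the "uc" counterpart -- the same
one-liner with uc swapped in:
#!/usr/bin/perl -w
# "uc" - rename the given files to their uppercased names
die "Usage: ", $0 =~ /([^\/]+)$/, " <FILE[S]_TO_UPPERCASE>\n" unless @ARGV;
rename $_, uc for @ARGV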
measuring the temperature in your computer room
Wed, 17 Mar 2004 13:12:23 -0800
Yan-Fa Li (
yanfali from best.com)
Hi,
I've found a useful side effect of running smartd on my drives at home,
which I've used for a while now to monitor the temperature in my
apartment. A lot of newer IDE drives, especially IBM/Hitachi's, and SCSI
hard disks monitor the drive temperature. I've found this to be a useful
way to figure out how hot it is in my computer room at home :D
Assuming you've already installed smartmontools, this was tested with
version 5.26:
# smartctl -a /dev/hda | grep 194
194 Temperature_Celsius 0x0002 161 161 000 Old_age Always - 34 (Lifetime Min/Max 20/37)
As you can see, the drive is a toasty 34 degrees Celsius. At about 5-7
degrees above ambient, that means it's about 27-29C or 80-85F in that
room right now. Not great for the equipment, but survivable. Anyway, not
terribly useful, but interesting nonetheless :D
Yan
Troubleshooting mail delivery
Thu, 1 Apr 2004 22:33:59 -0500
Ben Okopnik (
LG Technical Editor)
There are times when the mail just won't go through, for any of a host
of reasons. Your ISP's server may be down, your own mail programs don't
work, whatever - and of course, this happens at the most critical times,
"when it absolutely, positively has to be there." Well, assuming that
your recipient's mail server is working, you can bypass most of the
chain - at least your end of it. This can also be a good testing tool.
It lacks a few refinements (e.g., there's no subject and the address you
supply is actually used as the normally hidden "From" header rather than
the friendlier and visible "From:"), but it will at least get the
content across.
ben@Fenrir:~/Docs$ telnet badabing.com 25
Trying badabing.com...
Connected to badabing.com.
Escape character is '^]'.
220 badabing.com ESMTP Postfix (Debian/GNU)
HELO myserver.net
250 badabing.com
MAIL FROM: me@myserver.net
250 Ok
RCPT TO: joe@badabing.com
250 Ok
DATA
354 End data with <CR><LF>.<CR><LF>
Hi, Joe - it's me!
.
250 Ok: queued as D1F8F160B4
QUIT
To recap - I connected to port 25 (SMTP) at badabing.com, identified my
server via HELO ("hello"), to which the server responded with its own
name. I then told it who the MAIL was FROM: and who the recipient (RCPT)
is supposed to be, and asked it to stand by for the actual DATA, which
it told me to end with a return, a period, and a return. When I was
done, I typed "QUIT" to exit.
This is not for everyday use, but can be a very handy tool for those
times when you've just got to get your mail across despite problems.
vmlinuz from when and where?
Wed, 31 Mar 2004 01:45:38 -0800
Heather Stern (
The Answer Gang's Editor Gal)
In my consulting I find myself running into an awful lot of systems
booting off of 'vmlinuz' in the root directory. What kernel is that?
How the heck would I know?
I'll tell you how I ask it :D
[root@somebox] /# strings vmlinuz | grep 200
2.6.0-test7-1-386 (herbert@gondolin) #1 Sun Oct 12 10:29:56 EST 2003
Why does this work? Because now that we're a few years into the
century, nearly all the kernels contain something with a year 2000 or
later, and we'll have 200n year numbers for a while yet. On an older
system, try 199 - Linux isn't old enough to have kernels from 1980
unless someone is playing serious games with their clock. You could
probably look for the @ sign, but chances are too good of finding one
alone somewhere in the binary portion of the code.
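If you find yourself doing this a lot, a tiny shell function saves the
typing -- a rough sketch (the name is made up, and it just prints the
first year-bearing string it finds):
kver () { strings "$1" | grep -E '(19|20)[0-9][0-9]' | head -n 1; }
# usage: kver /vmlinuz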
The Answer Gang
We have guidelines for asking and answering questions. Linux questions only, please.
We make no guarantees about answers, but you can be anonymous on request.
See also: The Answer Gang's
Knowledge Base
and the LG
Search Engine
Contents:
- ¶: Greetings From Heather Stern
- Compile on one, run on another machine
- Dedicated Linux application
- Diagnosing a Linux crash
- I blew out Fedora with yum and 2.62
- 2c tip: filtering in-place
- framebuffer colors
- Mirror 2 web servers
Greetings from Heather Stern
Greetings, folks, and welcome to the world of The Answer Gang. It's
been a rough and tumble time here... I've got some hardware in a
shambles. It's just not my week. Just looking at all these scattered
parts makes me wonder if I could build a robot out of them. I've got
enough backups that I certainly have my pick of Linux flavors...
Then some folks in an IRC channel got into discussing what sort of
mayhem we would see if various window managers were thrown into a
gladiatorial arena and forced to duke it out, mano a mano, claw
to saw. "Fight! Fight!" cried our Weekend Mechanic -- as he cheerfully
added parts to the house favorite --
and the battle was on.
WM-bot Wars
In our first round of the competition, we have the little guys. Heck,
you might have even heard of some of them. Aren't they so cute!
Minimal is the name of the game here. Let's weigh in, the advantage is
they're light, the disads are what features they left out - the
underpower crowd roars for these underdogs - here you have 'em
folks:
- Ratpoison - throw
that mouse away, kids.... and also menus.
- aewm -
aesthetic? Maybe. But its place in the very brief
limelight has been taken by its many children.
- sWM
- a failsafe, just enough to manage windows - meaning where the events
go, and moving them - forget everything else.
- windowlab. You
Amiga fans out there might like this one. Slam that mouse around. Zip
across the screen. Whee!
The fight begins, with these and crowds of other small fry swarming to the
freshmeat.
Amaterus
makes a pretty good start but it has a menu - just a sucky
one. Someone hops a pogo stick and virtual desktops sprout.
Apps being executed everywhere. A pypanel confers, Oroborus nearly trapped against the
obscurity wall when MacOSX
rescues it. And the first round goes to the happy Blackbox family, including
hacked, open, and flux box for creative menu
tricks ranking recently
used apps.
The bell rings and we clear out a few smoking ruins - now for the
midsize mayhem.
WindowMaker wades in - or
is that widowmaker - docks his jaws around AfterStep. Tiny Tom's wm escapes the
system spikes, but Claude has the
extra edge. IceWM is looking cool
until he hits the arena's flame trap "I wanna look like..." but won't
fall for that - escapes the pit! Menus click, swap thrashes, and catlike
fvwm takes control of the mouse,
scripting rings around the others.
Now for a page from the masters. We know those flames are tough,
and it's time to let the survivors here get a chance to commit before
the next round... the big noisy battle of them all... Desktops.
Gnome and KDE both extend their
hints against the competition, crushing smaller opponents. Enlightenment upgrades to 16.6 and
stands its ground. FVWM laughs and sprouts modules to extend itself, while
fast light wm joins it in
pushing the brutes into the arena's OOM killer. Parts are
flying everywhere! Chipping through the armor, flames are getting through
... ooh! FVWM escapes by shedding its modules again, while K is trapped.
Something's bound to overload... K's gears
grind slowly to a halt, while Gnome has the metacity to pick
on E's incomplete brother, which valiantly struggles to code up new
features (http://enlightenment.org/pages/news.html)
before timeout. XFce zooms
into the fray -- "wanna piece of me?" Then the commercial desktops enter
the fray, xig's CDE-like DeXtop rushing forward
only to wedge in the pit of interoperability. Athene constantly regenerates
but when the battle gets toe to toe, the obscurity spikes pin it down, the
theme of the day turns Fvwm's way -- and it looks
like we've a champion.
But what's this? The arena has been invaded. Who are these interlopers?
screen has taken to the
field, with twin
nipping at its heels. A growl from behind, but they squish splitvt
together, dtach another tiny
opponent, then turn back to each other -- only to wail
as emacs turns its
Gnu-like
head in their direction, establishing a sessions server...
Is it possible for there to be any more carnage than this? Probably.
Seen on alt.sysadmin.recovery:
I am now taking bets on when this planet will reach its window manager
event horizon. At some distant point in the future some sort of
alien life-form is going to land on this planet and find everything dead
except for a lone Sparcstation in an abandoned building waiting
for a consignment of small lemon-soaked Motif widgets to be loaded.
-- Peter Gutmann
Ok, ok. That was overkill. I'm sorry, really sorry that I had to pub
late this month, but as you can see now, my place is a shambles. But I
did find something cool while finishing up... hooray! Someone in Ireland
actually *CAUGHT* a spammer. Better yet, got them
hauled
away by the cops. (Alleged, hah. Caught with everything but a
patsy present in person and teary-eyed.) Not even April fooling. Enjoy
your month, folks. I know I will.
So that's it. Answers by Jim Dennis, Ben Okopnik, Thomas, Faber,
many others among The Answer Gang... and you! If you've got some great
Linux answers - send them to
us. Ideally the answers explain why things work in a friendly
manner and with some enthusiasm, thus Making Linux Just A Little
More Fun! Good short bits will probably go in Two Cent Tips, but
truly juicy explanations, especially those that get the Gang jumping in,
could end up here. We don't promise that we'll publish everything,
though. Also - you can be anonymous, either asking or answering - just
tell us so, and Tux will eat the herring we wrote your name on. We
swear.
Compile on one, run on another machine
From Ferenc-Jan
Answered By: Thomas Adam
Hi there!
I've got a question that may be of interest to the linuxgazette
community. Where does one go to find out more about cross compiling?
(I'm not even sure cross compiling is the correct term for this.)
[Thomas]
Cross-compiling refers to compiling an application on one computer for use
on a different computer, often because the target machine has a different
architecture.
The case is this: I often want to compile stuff on one machine but
run it on another. My permanently-in-disrepair, bleedingly fast &
incredibly messy test system is the machine of choice for compiling
all kinds of linux stuff. The barely alive, old laptop or the lean &
mean firewall box aren't - besides, I don't want a full blown gcc
environment on those. But I run into problems most of the time.
Approach A: compile it as per instructions. Then I have to go and
find all the nitty bitty parts of the package, that are now residing
everywhere on the test system's drive. This often fails because
missing parts don't always generate comprehensible error messages.
[Thomas]
This is not an approach, this is what usually happens when you compile a
program.
Approach B: try to install it to a different directory, i.e. /tmp/,
and then move it to the other machine. This often founders on hard-coded
file locations, i.e. when /tmp/lib/something is actually expected at
/usr/local/lib/something (like it should be).
[Thomas]
This is dependent upon how ldconfig registers where the libraries are
(possibly by way of $LD_LIBRARY_PATH).
I don't consider myself a beginner, but who does. I don't seem to be
able to google the answer to this problem, but surely I'm not the
only one running into this?
Any hints would be much appreciated, but I understand this must be
relevant to linuxgazette before you can spend your time on this.
[Thomas]
I'm not quite sure what it is you're asking. If it is "how can I compile
applications such that I minimise the number of (likely) errors", then
the answer is to compile statically, so that the application doesn't
have to use any external libraries. The only disadvantage with this
is that the resultant binary is often very large.
Dynamic libraries are the most popular -- much smaller, but it does mean
that it is up to the user to ensure that these libraries are installed.
'ldd' goes a long way towards checking and ensuring that is the case. But
at compilation time, the user is often told what libraries (if any) are
needed.
The querent then reported back that all was right with the world and
that he had found what he was after. I suppose we can all learn a lesson
here: being precise is important when asking questions, otherwise
your question may never get the correct answer!
-- Thomas Adam
Anyway, another helpful soul came up with the answer on usenet:
make prefix=/tmp/foo install.
That is exactly what I'm looking for. With this I can drop everything in
an empty dir, gzip it, move it to another machine and unpack it & run it.
Never thought it would be this simple. I had tried this with
configure --prefix=/tmp/foo, but that way /tmp/foo gets baked into the
compiled files.
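For the archives, the whole recipe then looks roughly like this -- a
sketch only, since exact behaviour depends on the package's Makefile, and
many packages support the cleaner DESTDIR variable for the same trick:
./configure                         # compiled-in paths keep their defaults
make
make prefix=/tmp/foo install        # but the installed files land under /tmp/foo
tar -C /tmp/foo -czf foo.tar.gz .   # bundle it up to unpack and run on the other box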
Dedicated Linux application
From Jon Aldrich
Answered By: Thomas Adam, Kapil Hari Paranjape, Jay R. Ashworth
I am in the process of developing a linux app (for gaming). I want the
final product to reside on a linux box that, after booting, automatically
runs the application. What is the preferred method for doing this?
1) An 'auto login' for a special user on one of the system consoles, whose
user profile starts the application.
[Thomas]
You could add something like this to the user's ~/.bashrc
[ "$(tac ~/.xsession | sed -n '2p')" = "the_name_of_my_game" ] && {
startx &
}
[Thomas]
which says that if the penultimate line matches your game's program name,
then launch X; otherwise don't bother. Why the penultimate line? Because
the last line in ~/.xsession should ALWAYS be an "exec" call to your window
manager. Of course, should you add anything to that file, the number
passed to sed will have to be changed.
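For illustration, the ~/.xsession being inspected above might look
something like this (a sketch only; "the_name_of_my_game" and fvwm are
placeholders, and the line layout has to match whatever the test in
~/.bashrc expects):
#!/bin/sh
# ~/.xsession -- start the game, then hand the session to the window manager
the_name_of_my_game &
exec fvwm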
2) Start the app with an inittab entry.
[Thomas]
No. Doing this is deprecated and will cause all kinds of weird errors.
What happens, say, if X crashes? Each time X tries to start (based on the
run-level X starts in), the game will also try to load, and so you get a
feedback loop. In the case of Debian, X is started throughout runlevels
2-5.
3) Something else?
The app will produce graphics output and get user input, so it can't run as
just a background-type daemon. It will run on a secure, dedicated network.
Is there an info source or HOWTO for this sort of "bringing to market,
implementation" kind of topic?
[Thomas]
I would just have it launched from within the user's ~/.xsession file (see
above).
[Kapil]
One of the lesser known methods. An "@reboot" entry in the crontab. This
will start a program just after "crond" is started.
[Thomas]
Not unless you have read: 'man 5 crontab', it isn't!!
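For reference, the crontab(5) syntax under discussion is simply this (the
path is a placeholder):
@reboot /usr/local/bin/my_game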
One other small question. Is it possible to display a splash screen of some
sort during the boot sequence and pipe the boot data to the bit bucket?
Sort of a "boot -quiet" option.
[Kapil]
I think there is a "bootsplash" patch to the kernel that does this.
[Thomas]
I think Jon was referring specifically to his game's splash screen.
[Jay]
You'll get lots of opinions, I suspect, but mine is 'put it in
inittab'. That and (optionally) an sshd should be about it: you should
know everything that's running in a ps, in this kind of environment.
Make sure the bios boots the HD (/flash image) and nothing else, and is
passworded. Yes, even if there isn't a hardware keyboard.
Diagnosing a Linux crash
From Tom Brown
Answered By: Thomas Adam, Karl-Heinz Herrmann
OK guys, here's a n00b question for you that probably crosses over into
Sys Admin territory.
What steps should someone follow after Linux crashes to figure out what
went wrong?
Where do I start, and where do I look for clues?
Are all the logs found in /var/log, or are there others?
In what order should I look at the logs, and what should I look for?
[Thomas]
It depends what you think went wrong. Essentially:
/var/log/messages
is where syslogd will dump all its data, and so is the best place to look.
But there may well be application-specific data in /var/log;
XFree86.0.log is one such example.
Any pro-active steps I should be taking to get more info, should it
happen again?
The specifics of my case: my file server (a 750 MHz Athlon running SuSE
9) simply locked up, and I couldn't get anything to display (GUI or
command line). I knew the machine was in trouble when it didn't respond
to pings. I had to hit the reset button to get it back (and deal with
fsck, naturally). Funny thing is, the system clock reset itself to 28
minutes after midnight (when it should have read the middle of the
afternoon), but didn't lose the date. Odd, that. The machine's been
running 24/7 for about three weeks now (I set it up around then), and no
sign of problems until now.
[Thomas]
This might be framebuffer related. At the lilo/grub prompt, type:
linux video=vga16:off
[Thomas]
to see if that has any effect.
There have been snippets of these effects mentioned in the past. The one
that springs to mind is:
http://linuxgazette.net/issue74/tag/9.html
[K.-H]
There are ways of still getting kernel info (pro-active steps):
- Plug an old printer into the lpX port and declare it the system console
(a kernel compile parameter; I don't know exactly how you activate it
-- maybe inittab).
- When running, switch to the system console (Alt-Ctrl-F10 on SuSE) and
leave it there. It might show a kernel oops/panic there on the next crash.
- Search the SuSE config for the Magic SysRequest keys -- the function
should be compiled into the kernel but has to be activated (a sketch of
enabling it follows this list). Then you can press weird key combinations
like Alt-Ctrl-SysRq-R for a register dump, ...S for a disk sync, ...
see /usr/src/linux/Documentation for details.
- File server? What hardware? I had SCSI disks locking my system for
various reasons (tagged-queuing incompatibilities of individual drives,
too long cables, ...)
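As promised above, activating the magic SysRq handling at runtime looks
like this -- a sketch, assuming the kernel was built with
CONFIG_MAGIC_SYSRQ (SuSE may also have its own sysconfig switch for it):
# as root: turn on magic SysRq key handling
echo 1 > /proc/sys/kernel/sysrq
# then e.g. Alt-SysRq-p dumps registers, Alt-SysRq-s syncs the disks
# (see /usr/src/linux/Documentation/sysrq.txt for the full key list)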
I'm going to keep your response handy -- several things to try.
Meantime, I realized I was booting the thing into runlevel 5 (rather
stupid, actually), so I've since changed it to 3. If it is, as someone
suggested, a framebuffer problem, maybe that will solve it for now. I'm
using a real old Voodoo 3 card I scrounged from my parts bin. If it
happens again, I'll have to tear the machine apart and start playing
with the memory, as someone else here suggested.
Learning to install and configure Linux is one thing. Learning how to do
an autopsy seems to be quite another!
[Thomas]
That's because generally one doesn't do it quite like that. Problem
diagnosis is situation-dependent. In any given situation there is often a
small set of files and related information that you can analyse without
having to worry about the rest of the system.
Granted, this is related to how much information one is told at the time
(if you've been on this list for as long as I have, you'll come to realise
that usually we don't get any), and whether or not the person has tried to
remedy it.
In general, though, poking around, taking an aspect of your system and
looking at what it does and how, is all related and helpful to you when
you come to diagnose anything.
Yes, well, I looked at the messages log, but saw only a gap time-wise
between cron processing around 4 in the morning, and the time of the
crash. I'm not sure which of the other logs are important in that case.
Where do I find the register dump (although I suspect it won't make much
sense to me, rather like those register dumps you get in Windows XP)?
[Thomas]
Syslogd might have logged it, if the problem was software related, and
indeed if the said program produced any errors. If hardware then it
might not have, depending on the severity of the hardware failure.
I'm using a real old Voodoo 3 card I scrounged from my parts bin. If it
happens again, I'll have to tear the machine apart and start playing
with the memory, as someone else here suggested.
[Thomas]
It might be memory, but as the link I gave you last time around said,
memory problems tend to be more 'visible', in the sense that you get a lot
of applications SEGFAULTing and SIGABRTing for no apparent reason. In such
instances, installing and running 'memtest86' is usually of help.
[K.-H]
Most of the time I had the great luck of oopses and
kernel crashes occurring in the SCSI layer, often hardware problems. If
the SCSI layer is in trouble, nothing will get written to disk. What's
software-related regarding the kernel? The kernel deals with hardware,
and it's supposed to handle error conditions gracefully, i.e. not just
freeze without a hint of what's gone wrong. But there are situations where
the kernel doesn't have a chance of leaving hints on the hard drive.
Then a few things might be useful (to Tom B):
- run the box without X, as you suggested yourself
- switch to konsole 10 (sys messages). Even if the kernel might not be
able to leave a trace on the HD, it might give a hint here.
- for any register dump on konsole 10 or in syslog, you need to run it
through ksymoops to make it useful. That's something nobody can take over,
because it has to be done on your system, with your kernel and kernel
symbols. I hope SuSE set everything needed up correctly.
As you mentioned WinXX reg dumps -- in Linux they are about as useful
as in WinXX, but Linux has the tools to decode them (ksymoops) to make
them useful.
- if you gain any information (and yes, you will have to note it down on
paper and give it to ksymoops after reboot) you can take it up with the
kernel people.
- this is an option to follow if you are interested in why your system
crashes. As it's crashing very irregularly, this is a rather difficult
situation and a very slow process. But "the machine was dead the next
morning" won't help you next time it happens. The above-mentioned things
(along with running a printer or a serial-line console) would also help
in getting the syslog right up to the crash.
suggested reading:
/usr/src/linux/Documentation/oops-tracing.txt
ksymoops man page
But I have to say that often enough I also do not try to hunt spurious
crashes which do occasionally happen -- either hardware causes or whatever.
You can always try a different kernel or simply hope for the best.
Still -- keeping the system on konsole 10 is not a difficult thing to do,
and it just might give you something useful next time (note it down for
ksymoops if it's an oops or panic).
SuSE has memtest as a boot option -- run it if you suspect the RAM; run it
long (several passes) and with the full test suite if you don't find any
errors on the first go.
Thanks Karl and Thomas. This is the starting point I needed. (For one
thing, I didn't even know about konsole10: looks helpful). I just wish
I had more from the crash than just a black screen, but that's what I
get for running X on bootup for a file server. Between the two of you, I
think I have the answers I was looking for when I started this thread:
not what went wrong exactly, but how to dig in, and try to figure it out
for myself.
Oh, Thomas, when I rebooted to runlevel 3, I entered that video setting
you suggested as well.
I just know I'll be back with more questions, though. One way or
another, I'll figure this Linux thing out.
Thanks again, guys. Your help, as always, is much appreciated.
I blew out Fedora with yum and 2.62
From Jack Sprat
Answered By: Benjamin A. Okopnik, Thomas Adam
I will try and balance brevity with information.
[Thomas]
Always a good thing
I was on dial-up with kppp on an old 266MHz PC with Fedora running the
2.6.2 kernel, and running yum for the first time. I did not like the large
number of not-small files being downloaded to my machine and did a "kill
-9" on the process. Bad, bad, bad. "ls" produced "Segmentation fault", as
did several other commands. The machine would not reboot. Booted from the
rescue CD-ROM, and "chroot" gave "Segmentation fault".
[Thomas]
This is indicative that glibc had yet to finish upgrading. You should never
halt the system mid-upgrade if you can help it. Debian does a good job of
recovery, but the point is that if you do, you'll have damaged and
incomplete packages which will invariably break your system. Then you have
to go down the route of rescue CDs and the like.
Read up on chroot and created a statically linked bash on a second machine,
and was able to chroot once this was in place. "ls" failed, as did "vi";
pwd worked. On a good machine I compiled a statically linked "ls"
[Ben]
*Ouch.*
ben@Fenrir:~$ ldd `which ls`
librt.so.1 => /lib/tls/librt.so.1 (0x4002a000)
libacl.so.1 => /lib/libacl.so.1 (0x40030000)
libc.so.6 => /lib/tls/libc.so.6 (0x40038000)
libpthread.so.0 => /lib/tls/libpthread.so.0 (0x40171000)
/lib/ld-linux.so.2 => /lib/ld-linux.so.2 (0x40000000)
libattr.so.1 => /lib/libattr.so.1 (0x40180000)
When any one of those libs isn't happy, ain't *nobody* happy; almost
all of them are critical core libraries from the "libc6" kit.
[Thomas]
pwd worked because you were using the bash builtin as opposed to the ELF
version.
(I had to add -static to the linker and it complained about one function
not being linked in, but it seems to work. Shouldn't one be able to make
statically linked copies of utilities/programs such as those in Fedora
coreutils?).
[Thomas]
No, the whole point of having dynamically linked files is that their size
is reduced, making them easy to put onto floppy diskettes, etc. Not only
that, it allows for greater portability.
With "ls" in place on the damaged machine, I was able to see what I was
doing and recovered /lib from a two week old backup after "chroot"ing to
the damaged file system. I have cowardly retreated to Fedora kernels and the system boots and I have discovered no problems with one exception.
[Thomas]
Usually something like:
mount /dev/abc /mnt && chroot /mnt /bin/bash
where /dev/abc is your damaged partition, is sufficient.
On my good machine, the route table when on dial-up has a route to the
server I am hooked up to, and the name is resolved. On my damaged machine,
"host" and "dig" work in resolving names or IPs, but the server name is
not resolved in the route table. The real problem is that neither Konqueror
nor Mozilla will go to any Internet site, regardless of whether IPs or
names are used. If an IP is used, such as 000.00.00.0, the name is
resolved in the error message stating that "www.aaaaaaa.aaaaa.com not
resolved.". Text browser lynx does work, although I am worse than a
[Thomas]
Sounds like you're missing:
/etc/resolv.conf
and valid entries in there.
Hint: man resolv.conf
newbie with it. I have checked network files in /etc and its
subdirectories and compared to my good machine until I am blue. I have
turned off the firewall and commented out hosts.<deny allow>. I have
done "rpm -V" on all installed rpms. Although the listing is not
perfectly clean, neither is it on my good machine. I suspect something
in the /usr/lib directory. I do not believe I have been hacked but of
course who knows? How would you proceed?
[Thomas]
This is not a /usr/lib issue, but more a question of configuration. As
well as checking for /etc/resolv.conf, ensure that in the file:
/etc/nsswitch.conf
you have the line:
hosts: files dns
That is, make sure that dns is listed after files, as above.
[Ben]
I'm amazed at your persistence and quite tickled by yet another
demonstration of how tough Linux is. In my experience and in the stories
that I've heard from others, Linux systems can be rescued from all sorts
of incredible damage - and you've (mostly) managed to pull it off.
I would back up all my data, clear out the partition, do a fresh
install, and put the data back on. Yeah, you could be macho about it and
try to trace everything down by function... which would leave you open
to problems with subsystems you're currently not using (this will, of
course, happen long after you've forgotten that this damage even
occurred.) Since you don't know exactly what you broke, you'll never know
everything you need to fix; this is one of the situations where I'd
apply my mechanic friend Ken's dictum of "keep removing stuff until you
get to something you know is good and build up from there." It's also
the most efficient approach at this point, IMO.
I hate to clog your mailbox, but I wanted to thank Ben O. and Thomas A.
for their rapid response to my 2/20 question and their insights.
I have decided to restore my backups (only two weeks old) and if that
fails, reinstall from scratch and reload my data.
2c tip: filtering in-place
From Kapil Hari Paranjape
Answered By: Kapil Hari Paranjape, Ben A. Okopnik, Thomas Adam, Jay R. Ashworth
Hello,
I have always thought that filtering files "in-place" was
not possible from the command line...
...until today---one lives and learns.
dd if=file bs=4k | filter | dd of=file bs=4k conv=notrunc
Where "file" is the file you want to filter and "filter"
is the filtering program you want to apply.
Examples:
- Use rot13 as the filter and you get (rather minimal) encryption
of the file contents.
- Use tr '[A-Z]' '[a-z]' as the filter and you can downcase a file.
http://www.itworld.com/nl/unix_sys_adm/09252002
has a Perl solution.
[Thomas]
Perl/sed/ruby all honour the '-i' flag, which is a start; then just
apply regexps to match anything but the filter expression.
[Ben]
The "buffer" program does exactly the same as the above; the process is
called "reblocking".
buffer < foo | filter > foo
[Ben]
If the file is bigger than 1MB, you'll need to specify a larger queue
with the "-m" option, but that's usually not an issue.
Conversely, as Thomas mentioned, you could use Perl's, etc. "in-place
edit" switch:
# rot13
perl -i -wpe'y/a-zA-Z/n-za-mN-ZA-M/' file
# lc everything
perl -i -wpe'$_=lc' file
[Ben]
buffer < foo | filter > foo
[Jay]
Oh, cause buffer reads the entire file before the '>' can stomp it?
Well, that's not exactly the same...
Doesn't that still depend on order of evaluation by the shell? Is that
defined?
[Thomas]
Well, yes....
- buffer < foo
- | filter is acted upon
- Resultant output to file
[Jay]
Well, not necessarily.
[Ben]
Well, yeah - just about as definitively as anything in Bash is.
Otherwise Kapil's method wouldn't work either. Neither would piping
anything through "sort". The left side of the pipe has to terminate
before the right side can do anything with the output; in many cases,
there is no output until just before the left side terminates.
[Jay]
In fact I think that's wrong: I don't think the dd method does depend
on order of eval; the writing copy of dd can't try to write a block
until it has it, so I believe that that method is guaranteed never
to stomp data.
[Jay]
A shell could (un)reasonably decide to evaluate the output redirection
(ie: stomp on the file) before the buffer program can read it. At
best, it might be a race condition between the two sides of the pipe.
I don't think, intuitively, that it's at all reliable, where as I think
the dd approach probably is.
[Ben]
Uh, not any shell that contains a working implementation of IPC. One
that's broken, certainly. Chances are that if time ran backwards, it
probably wouldn't work too well either...
Please state the mean and the average probabilities and the relevant
confidence levels for the accuracy of your intuitive approach. The data
generated in the course of your study may or may not be used as grounds
for questioning during your exam.
[Jay]
Every shell programming book I've read in 20 years warns against that
construct, precisely because most shells will set up the redirect
first and stomp the output file. As for the pipeline, I believe that
most shells exec the last component first. Maybe bash has changed
that; I remember a warning about it in the Bourne book.
The nature of the thread changes slightly
-- Thomas Adam
[Kapil]
Hi,
Just a few additional remarks:
(a) perl, python and vi/ex do offer alternate solutions ... but see below.
(b) I couldn't locate "buffer"---where do I find it?
[Thomas]
Oddly enough, under Debian it is in the 'buffer' package.
[Kapil]
(c) Just to defend the "dd" solution a bit:
When the "dd" command-line given in the earlier mail is terminated
(for any reason like a Control-C), it outputs the number of blocks
read/written. Thus, the intrepid user can restart the process by
modifying it with suitable "seek" and "skip" commands. Of course, this
assumes that the filter operates on data sizes less than 4k
independently.
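Concretely, if the first dd reported N full blocks before the
interruption, the resumed run would look something like this (a sketch; N
is whatever count was reported, and the same per-4k-block caveat applies):
dd if=file bs=4k skip=N | filter | dd of=file bs=4k seek=N conv=notrunc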
[Thomas]
See the "-S", "-s", and "-z" to buffer(1)
[Kapil]
I became aware of this "dd" procedure while trying to (yes I'm crazy)
encrypt one entire disk partition in-place. The problem with the other
solutions is that they require a lot of memory to run.
As far reading and writing to pipes is concerned, here is how I
understand it---please correct me if I am wrong. The kernel has its
own internal settings on how much data is buffered before a writing
process is put at the back of the queue and the reading process is woken
up. Thus killing any one process in the "dd" pipeline could only result
in less data being written than was read---an error from which one can
recover as described above.
[Ben]
Since the source and the target file are the same, wouldn't you end up
with some truncated version of your data (i.e., the source being gone no
matter what)? It seems to me that the difference between complete
destruction of data and truncation of it at some random point can only
matter theoretically, except in a vanishingly small number of
situations.
[Jay]
No, you wouldn't.
The target side dd is doing random access.
It writes the blocks sequentially, but it writes them into the
standing file, one at a time, without touching the blocks around them.
Likewise on the read side. The killer is the redirection, which his
approach does not use, at all. Not the pipe.
[Ben]
Ah. I hadn't realized that. In that case, I agree; there's a large
difference. I've just tried it on a 100MB file I've made up for the
purpose, and it seems that you're right.
framebuffer colors
From frank.n. dale
Answered By: Thomas Adam
Hello Gang,
Assume X is not running and your framebuffer
allows for 256 colors or more.
Please give manual instructions (or C code)
to write on the screen 64 characters in
different colors.
[Thomas]
Hi, there. Assume for a moment that we cared. Then realise that actually
this is The Answer Gang to do with Linux questions. Then assume that we do
not answer homework questions. Then assume that you want to look here:
http://linuxgazette.net/tag/ask-the-gang.html
If you assume all of these, then you have a tautology.
Of course your programming homework is quite easy to do, it just requires
that you do a little research first, as with any academic course you might
be studying. It takes work, and if you are not prepared to put any in, why
bother with it? If you do not understand it, then that is a different
matter, to be taken up with your lecturer.
Please do not forward this query to Thomas Adam, he
does not know.
[Thomas]
Unfortunately, I do.
How do you set and get colors outside the range 0-F hex
when using framebuffer in the Linux console?
[Thomas]
For this, you will have to look at vga.c in the kernel source, and at
fb.c. You will almost certainly have to hack the kernel source to allow
this.
This query is not a homework exercise. It is based on a
fruitless Google search of many hours over a span of
several months. The major framebuffer sites
http://linuxconsole.sourceforge.net/fbdev
http://linux-fbdev.sourceforge.net
http://www.directfb.org
do not deal with it, nor does the Framebuffer HOWTO.
Also the console escape sequences for colors only apply
to colors 0-F.
The only solution found so far involves the SVGALib
which is too much of an overhead. There must be a
simpler solution. Do you know more than I do?
[Thomas]
SVGALib is nothing to do with framebuffers, but it will allow you to
display console screen resolutions higher than the default.
[Heather] Other than the vague bit about it being able to use FBDev as its video
type, if you have a framebuffer enabled kernel and your monitor settings
are right in /etc/libvga.config.
The query is relevant for syntax coloring in the Linux
console (text, no X). With the standard 16 foreground
and 8 background colors, syntax coloring is a pain in
the eyes. With the 256 colors or more that you get from
framebuffer, syntax coloring could become pleasant and
effective.
[Thomas]
So why not run your console with a setting of:
vga=0x317
which will give you a 1024x768 console in 16-bit colour (0x305 is the
256-colour mode at that resolution -- see the table below).
Look here:
Colours 640x400 640x480 800x600 1024x768 1152x864 1280x1024 1600x1200
--------+--------------------------------------------------------------
4 bits | ? ? 0x302 ? ? ? ?
8 bits | 0x300 0x301 0x303 0x305 0x161 0x307 0x31C
15 bits | ? 0x310 0x313 0x316 0x162 0x319 0x31D
16 bits | ? 0x311 0x314 0x317 0x163 0x31A 0x31E
24 bits | ? 0x312 0x315 0x318 ? 0x31B 0x31F
32 bits | ? ? ? ? 0x164 ?
(this is for: vga=<foo> in /etc/lilo.conf).
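In lilo.conf itself that might be a stanza like the one below -- a sketch,
with the image name and label as placeholders; note that older lilo
versions want the mode in decimal (0x317 is 791):
image = /boot/vmlinuz
    label = linux
    read-only
    vga = 791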
Mirror 2 web servers
From Bradley Chapman, Thomas Adam, John Karns, Ben Okopnik
Hi folks at The Answer Gang,
I have two webservers running Linux (Red Hat).
One of them is the primary server and the other one acts as a backup.
The thing is that I want to be able to mirror these servers both ways,
including this:
- MySQL databases
- http folder
- httpd config
[Bradley]
Well, since you're using a MySQL database, have you considered moving the MySQL
database and DBMS onto a third machine which is more powerful than the two
webservers, and simply having the webservers query the database using your
favorite RDBMS protocol. Or you could use the MySQL mirroring and load balancing
features (Google turns up a lot of links).
Many online websites, such as discussion forums, often place the DB onto a
separate machine and the Web server software onto another one (or two, or any
value of N), and use DNS round robin to rotate amongst the Web servers, thus
lessening the risk that a single machine failure will kill the site.
I was thinking a shell script would do the trick!
But my competence ends just after #!/bin/bash!
I am not asking for a complete script,
just somewhere to turn for help.
[Bradley]
You can help us by supplying ASCII art defining your network topology - the
really smart people who know DBs and stuff on this list will probably need it
[Thomas]
Have a look at the 'rsync' utility -- that will fit your needs quite
happily.
[John]
Just thought I'd mention a few general suggestions.
To read up on bash scripting, you might want to take a look at the
"advanced bash scripting guide" at one of the following URL's.
"man bash" at the command prompt will yield the extensive bash man page,
assuming that you have it available on your system.
Also, there are some articles on the subject in the archives of the Linux
Gazette, some of which were authored by Ben Okopnik, one of the members of
the answer gang.
[Ben]
Yep. Just search for "shell scripting" (including the double-quotes) at
<http://linuxgazette.net/search.html> and you'll see it right at the top
of the stack.
[John]
The rsync utility is often used for mirroring and can save bandwidth. For
the MySQL side of it, you may need to get a bit more creative. I would
suggest fishing around on the 'net for DB mirroring with MySQL, as I
think rsync is intended more as a tool to deal with plain ASCII files,
although I could be off on that.
[Ben]
[grin] Just a little, John. "rsync" is intended for file transfer in
general, no restriction as to type. I use it with the "-e ssh" option
(actually, I have "RSYNC_RSH=/usr/bin/ssh" set in my "~/.bash_profile"),
so it uses SSH as the underlying mechanism, but "rsh", although
deprecated in most situations, is also type-independent (although it's
somewhat clunky - the STDIN/STDOUT method of transferring data does make
it look like a text-specific gadget.)
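For the non-database half of the question, a pair of rsync-over-ssh
commands run from cron gets you most of the way. This is a sketch only:
the hostname and paths are placeholders (Red Hat's defaults may differ),
and the MySQL data really wants mysqldump or MySQL replication rather than
a file-level copy:
# mirror the web content and the Apache configuration to the backup box
rsync -az --delete -e ssh /var/www/html/ backup.example.com:/var/www/html/
rsync -az --delete -e ssh /etc/httpd/conf/ backup.example.com:/etc/httpd/conf/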
This page edited and maintained by the Editors of Linux Gazette
Copyright © its authors, 2004
Published in Issue 101 of Linux Gazette, April 2004