Interview with Steve Bourne

August 26, 2010

ARNnet have an interview with Steve Bourne, in which he says of Bash:

I believe you can write shell scripts that will run either in the Bourne shell or Bash. It may have some additional features that aren’t in the Bourne shell. I believe Bash was intended as a strictly compatible open source version of the Bourne shell. Honestly I haven’t looked at it in any detail so I could be wrong. I have used Bash myself because I run a Linux/Gnu system at home and it appears to do what I would expect.

I have nearly finished reading Coders At Work – Steve Bourne could have been an interesting interviewee for that book.

When I first posted this link at urandom, I was not aware that I myself was quoted, at the top of page 5 of the 7-page interview:

Unix Specialist Steve Parker has posted ‘Steve’s Bourne / Bash scripting tutorial’ in which he writes: “Shell script programming has a bit of a bad press amongst some Unix systems administrators. This is normally because of one of two things: a) The speed at which an interpreted program will run as compared to a C program, or even an interpreted Perl program; b) Since it is easy to write a simple batch-job type shell script, there are a lot of poor quality shell scripts around.” Do you agree?

It would be hard to disagree because he probably knows more about it than I do. The truth of the matter is you can write bad code in any language, or most languages anyway, and so the shell is no exception to that. Just as you can write obfuscated C you can write obfuscated shell. It may be that it is easier to write obfuscated shell than it is to write obfuscated C. I don’t know. But that’s the first point.

The second point is that the shell is a string processing language and the string processing is fairly simple. So there is no fundamental reason why it shouldn’t run fairly efficiently for those tasks. I am not familiar with the performance of Bash and how that is implemented. Perhaps some of the people that he is talking about are running Bash versus the shell but again I don’t have any performance comparisons for them. But that is where I would go and look. I know when I wrote the original implementation of the shell I spent a lot of time making sure that it was efficient. And in particular with respect to the string processing but also just the reading of the command file. In the original implementation that I wrote, the command file was pre-loaded and pre-digested so when you executed it you didn’t have to do any processing except the string substitutions and any of the other semantics that would change values. So that was about as efficient as you could get in an interpretive language without generating code.

I think that the points were presented to Steve Bourne in reverse order; his answer to the first point seems to relate to “b” (quality of scripts), and his longer answer to the second point seems to relate to “a” (performance).

Regarding performance, as he says, the real cost is that of the Unix exec() call, which makes “cat /etc/hosts | grep localhost” roughly half as fast as “grep localhost /etc/hosts”: the pipeline has to fork and exec two processes instead of one. There is nothing that the shell itself can do about that.
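A rough way to see the cost for yourself (a sketch only – the loop count is arbitrary, and the timings will vary from system to system):

time sh -c 'for i in $(seq 1000); do cat /etc/hosts | grep -q localhost; done'
time sh -c 'for i in $(seq 1000); do grep -q localhost /etc/hosts; done'

The first loop forks and execs two processes per iteration, the second only one; the difference in wall-clock time is almost entirely process-creation overhead.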

Regarding quality, deliberately obfuscated C is an institution; my point was merely that it is easy to write a bad shell script simply by not knowing how to write a better one. As this quote was from the introduction to a shell scripting tutorial, it should hopefully be clear from the context that the tutorial aims to enable the reader to write better shell scripts.


Solaris 10 SMF Manifests

July 17, 2010

I have recently written a web service which creates Solaris 10 SMF manifests based on the information you give it.

It creates a ZIP file containing the XML manifest and a matching startup/shutdown script.

There is much more that SMF can do – create entirely new runlevels, and so on – but this covers the basic single-instance startup and shutdown work that /etc/init.d scripts used to do.
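As a sketch of how the generated files would then be used on the Solaris 10 host (“myservice” is a hypothetical name – yours will match whatever you entered in the form):

# Import the generated manifest into the SMF repository:
svccfg import myservice.xml
# Enable (start) the service, then check its state:
svcadm enable myservice
svcs -l myservice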

Feel free to go and check it out at sgpit.com/smf/


Useful GNU/Linux Commands

June 23, 2010

Pádraig Brady has some useful, if somewhat basic, hints at http://www.pixelbeat.org/cmdline.html. He has updated them to include more powerful commands at http://www.pixelbeat.org/docs/linux_commands.html.

Here are a few of my favourites (I have taken the liberty of slightly altering some of the code and/or descriptions):
From the original:
Search recursively for “expr” in all *.c and *.h files:
find . -name '*.[ch]' | xargs grep -E 'expr'

Concatenate lines with trailing backslash:
sed ':a; /\\$/N; s/\\\n//; ta'
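For example, this (the test input is made up):
printf 'foo \\\nbar\n' | sed ':a; /\\$/N; s/\\\n//; ta'
prints “foo bar” on a single line.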

Delete line 42 from known_hosts:
sed -i 42d ~/.ssh/known_hosts

From the new post:
Echo the path one item per line (assumes GNU tr):
echo "$PATH" | tr : '\n'

Top for Network:
iftop
Top for Input/Output (I/O):
iotop

Get SSL website Certificate:
openssl s_client -connect www.google.com:443 < /dev/null
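To extract details from the certificate itself, pipe the output into openssl x509 – for example, to show its validity dates:
openssl s_client -connect www.google.com:443 < /dev/null 2>/dev/null | openssl x509 -noout -dates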

List processes with Port 80 open:
lsof -i tcp:80

Edit a remote file directly in vim:
vim scp://user@remote//path/to/file

Add 20ms latency to loopback device (for testing):
tc qdisc add dev lo root handle 1:0 netem delay 20msec
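Verify the effect – the RTT to localhost should jump to roughly 40ms, since the 20ms delay applies in each direction:
ping -c 3 localhost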
Remove the latency:
tc qdisc del dev lo root


Ten Good Unix Habits

June 22, 2010

IBM’s DeveloperWorks has 10 Good Unix Habits, which apply to GNU/Linux at least as much as to Unix.

I would expect that most experienced admins can second-guess the content of 5-7 of these 10 points just from the title (for example, item 1 is a reference to “mkdir -p”, plus another related syntax available to Bash users; see the example after the list). I would be surprised if you knew all ten:

1. Make directory trees in a single swipe.
2. Change the path; do not move the archive.
3. Combine your commands with control operators.
4. Quote variables with caution.
5. Use escape sequences to manage long input.
6. Group your commands together in a list.
7. Use xargs outside of find .
8. Know when grep should do the counting — and when it should step aside.
9. Match certain fields in output, not just lines.
10. Stop piping cats.
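To illustrate item 1, here is a minimal example (the directory names are made up; the second command uses Bash brace expansion, the related syntax mentioned above):

mkdir -p /tmp/project/src/include
mkdir -p /tmp/project/{doc,bin,lib}

The first creates the whole tree in one swipe; the second creates three sibling directories in a single command.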

How many did you get?


Unix / Linux Training Courses in the UK

May 11, 2010

After a few customers requesting it, my consultancy firm, SGP IT, is planning to run some technical training courses this Summer; in the Manchester area initially, though any location is possible.

Now would be a very good time to get in touch (training@sgpit.com) as things are at a very early stage and very fluid – if you can bring a few people along, we can even run a bespoke course for you, and tailor everything to your need.

Depending on subject, duration, location and so on, it should be possible to run the first few courses for as little as £250 – £300 per person per day – much less than the £400 – £500 or so you’d pay for a corporate course where all you get is a trainer who has no experience of the actual situation you face at work, and who delivers PowerPoint slides to you, then doles out the free mousepads and t-shirts at the end of the course.

None of us have been overly impressed by many of the available training courses – we are hoping to redefine how personal IT training can be delivered. Here’s how:

The kind of training session I would envisage us providing would involve a fairly small class size (certainly fewer than 6 people), allowing us to focus on your current issues and tailor the course around the needs, interests and skills of the attendees. The courses are likely to be between 2 and 5 days, most being 2-3 day courses.

Of course, there will be no corners cut – we will insist on a great location and facilities, free internet access, PCs for all candidates (preinstalled with Linux, Solaris, *BSD, you name it – contact us before the course and we’ll build the PC to suit you), tons of good quality course notes, plus certificates and the obligatory full VAT receipts. I’m sure that we can find a few freebies to throw in, too!

If you have specific queries or concerns that you would like to be addressed in the course, let us know up-front and we can find a way to work them into the course.

If any of this sounds vaguely interesting, please do get in touch (training@sgpit.com) and we can mould things around your requirements.


Use of pipes, and other nifty tricks

December 18, 2009

http://www.tuxradar.com/content/command-line-tricks-smart-geeks has some useful tricks. A lot of it is presented as being bash-specific, but isn’t. Also, a lot seems Linux-specific, but isn’t. Lots of useful info for all Unix/Linux admins here. These hints go on and on; hardly any of them are the generic stuff you often see on Ubuntu forums, stumbleupon, and so on.


Flushing Cache to Disk under Linux

November 4, 2009

There are lots of well-written articles on page caching and pdflush, such as this [westnet.com] and especially this [kerneltrap], but RackerHacker (although the title says “reads”, it really seems to address lots of small writes) summarises it very well:

vm.dirty_ratio – The highest % of your memory that can be used to hold dirty data. If you set this to a low value, the kernel will flush small writes to the disk more often. Higher values allow the small writes to stack up in memory. They’ll go to the disk in bigger chunks.

vm.dirty_background_ratio – The lowest % of your memory where pdflush is told to stop when it is writing dirty data. You’ll want to keep this set as low as possible.
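Both can be inspected and changed at runtime with sysctl. A quick sketch (the values here are illustrative, not recommendations; add the settings to /etc/sysctl.conf to make them persistent across reboots):

# Show the current settings:
sysctl vm.dirty_ratio vm.dirty_background_ratio
# Change them on the running system:
sysctl -w vm.dirty_ratio=10
sysctl -w vm.dirty_background_ratio=5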

