
Strange wildcard behaviour in cmd.exe?

E:\hugh\music\TW_analysis>dir *22*wav
Volume in drive E has no label.
Volume Serial Number is 88A2-A44C

Directory of E:\hugh\music\TW_analysis

06/12/2010 12:42 4,030,672 section_60pc_11_12.wav
06/12/2010 12:44 1,587,244 section_80pc_12.wav
06/12/2010 12:41 4,675,324 section_50pc_12_13.wav
06/12/2010 12:45 1,508,344 section_80pc_17.wav
06/12/2010 12:43 1,665,924 section_70pc_7.wav
06/12/2010 12:47 1,164,912 section_100pc_21_22.wav
06/12/2010 12:41 2,329,780 section_50pc_21_22.wav
06/12/2010 12:42 1,941,488 section_60pc_21_22.wav
06/12/2010 12:43 1,664,140 section_70pc_21_22.wav
06/12/2010 12:45 1,456,128 section_80pc_21_22.wav
06/12/2010 12:46 1,294,344 section_90pc_21_22.wav
11 File(s) 23,318,300 bytes
0 Dir(s) 8,391,557,120 bytes free
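A likely explanation (my note, not tested exhaustively): cmd.exe matches wildcards against each file's auto-generated 8.3 short name as well as its long name – dir /x shows the short names – so files whose short alias happens to contain '22' match even when the long name doesn't. For contrast, here's a sketch of long-name-only matching as a POSIX shell would do it:

```sh
# POSIX-style glob matching looks only at the name you actually see:
for f in section_80pc_17.wav section_100pc_21_22.wav; do
  case "$f" in
    *22*wav) echo "$f: match" ;;
    *)       echo "$f: no match" ;;
  esac
done
# section_80pc_17.wav: no match
# section_100pc_21_22.wav: match
```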


Error adding theme to WordPress 2.8

I was getting this error:

PHP Fatal error: Call to a member function read() on a non-object in ... \wp-includes\theme.php on line 387

Fixed it by editing wp-includes/theme.php, adding an is_object check at line 387 as follows:

$template_dir = @ dir("$theme_root/$template");
if ( $template_dir ) {
	while ( ($file = $template_dir->read()) !== false ) {
		if ( preg_match('|^\.+$|', $file) )
			continue;
		if ( preg_match('|\.php$|', $file) ) {
			$template_files[] = "$theme_loc/$template/$file";
		} elseif ( is_dir("$theme_root/$template/$file") ) {
			$template_subdir = @ dir("$theme_root/$template/$file");
			if ( is_object($template_subdir) ) {
				while ( ($subfile = $template_subdir->read()) !== false ) {
					if ( preg_match('|^\.+$|', $subfile) )
						continue;
					if ( preg_match('|\.php$|', $subfile) )
						$template_files[] = "$theme_loc/$template/$file/$subfile";
				}
				@ $template_subdir->close();
			}
		}
	}
}
@ $template_dir->close();


Two more drum transcriptions

Philly Joe’s breaks + solo on ‘Pot Luck’: phillyjoe_potluck_analysis.pdf

and a somewhat rushed transcription of Art Taylor’s breaks on ‘Good Bait’: arttaylor_goodbait.pdf.


Two more drum transcriptions

The drum part from ‘Flim’, an Aphex Twin song as covered by the Bad Plus. This is from the album ‘These Are The Vistas’.

Flim drum part

And a Frankie Dunlop solo from ‘I Mean You’, recorded with Monk. This was done as an assignment for college and it has my analysis essay attached. Another transcription of this solo is available from my drum teacher’s website.

Frankie Dunlop – I Mean You solo

Here it is with no essay and no melody stave, much handier for practising: Frankie Dunlop – solo only


Ben Riley solo on Blue Monk

I was called for a standards gig in a restaurant a few years ago and they mentioned Blue Monk, which I didn’t know. So I found it on YouTube and liked it a lot. I ended up transcribing the drum solo; here it is.

Solo transcription

Only the snare, bass drum, and cymbals are used. The phrasing is very clear, playing across the bar lines and form divisions throughout.

Most of the rolls are played as singles. The triplets in bar 17 are played RRR LLL RRR LLL.


T-rex talks to God

I liked this:

T-Rex talks to God


scp-resume for downloading multiple files

You need to transfer a lot of files across a slightly temperamental ssh connection. You want something like a recursive scp command that supports resuming and will keep on trying until it gets the job done.

rsync is ideal for this purpose – however, I find it quite dodgy under cygwin, especially when transferring large files.

A sweet alternative is Unison, for synchronizing filesets over ssh.

However, I often find myself falling back on a nice script designed for resuming the transfer of large files using dd over ssh. We can invoke this script inside a loop to transfer lots of files at a time.

One problem with the script is the use of the construct below to determine file sizes:

localsize=`ls -l "${localfile}" | awk '{ print $5 }'`

This will fail if the file owner’s username contains a space, because the extra token shifts the size out of awk’s fifth field. Most likely you’ll get:

Resuming download of [file] at byte None
dd: invalid number `None'

where the group owning the file is reported by cygwin as ‘None’. The fix is to replace every instance of this ls -l construct with something like localsize=`ls -g "${localfile}" | awk '{ print $4 }'`. The -g option displays the file size but not the owner name, so you should be safe from spaces confusing awk. I don’t know if the -g option is POSIX, but it’s in GNU ls anyway.
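To see the field shift concretely, compare two mocked-up listing lines (hard-coded strings standing in for real ls output):

```sh
# Mocked-up `ls -l` line: the owner "Hugh Denman" contains a space,
# so awk's fifth field is the group, not the size:
printf '%s\n' '-rw-r--r-- 1 Hugh Denman None 19 Mar 28 17:59 asd.txt' \
    | awk '{ print $5 }'    # prints: None

# Mocked-up `ls -g` line: no owner column, so the size is field 4:
printf '%s\n' '-rw-r--r-- 1 None 19 Mar 28 17:59 asd.txt' \
    | awk '{ print $4 }'    # prints: 19
```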

You might be tempted to use ls -s, but this reports the amount of disk space used, rather than the actual length of the file (i.e. it will be a multiple of the allocation blocks). You can see the difference using ls -ls:

Hugh Denman@gpplap3 ~
$ cat > asd.txt

Hugh Denman@gpplap3 ~
$ ls -ls --block-size=1 ./asd.txt
1024 -rw-r--r-- 1 Hugh Denman None 19 Mar 28 17:59 ./asd.txt

Here my 19-byte text file is taking up 1024 bytes of disk space.

Two other possibilities, suggested by Erik Jan Taal, are perl -e "print -s '$filename'" and ls -l | sed -n 's/.* [^0-9]*\([0-9]\+\) .*/\1/ p'. These will work on FreeBSD, for example, which does not support ls -g.

To use the scp-resume script, we’ll need a text file containing the filenames to transfer from the remote machine. Here’s one way to generate this list.
$ ssh remote-user@remote.machine.ip.addr "/bin/find /cygdrive/d -type f" | grep -vi i386 > ./filelist.txt
In this example, the remote drive contains the OS installation files in /cygdrive/d/I386, which we don’t want to transfer.

With a fixed scp-resume script, and the list of files to transfer present, all that’s left to do is iterate over each file in the list and tell scp-resume to download it. We use the cat filelist.txt | while read FILE approach because it will preserve spaces in the filename (unlike for file in `cat filelist.txt`).

cat filelist.txt | while read FILE ; do
	DIR=`dirname "$FILE"` ;
	mkdir -p "./$DIR" ;
	./scp-resume -d "remote-user@remote.machine.ip.addr:$FILE" "./$FILE" ;
done
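The claim about spaces can be checked with a toy file list (the file names here are throwaway examples):

```sh
printf '%s\n' 'a file with spaces.txt' 'plain.txt' > demolist.txt

# while read: each line arrives intact
cat demolist.txt | while read FILE ; do echo "[$FILE]" ; done
# [a file with spaces.txt]
# [plain.txt]

# for + backticks: the shell word-splits on whitespace
for FILE in `cat demolist.txt` ; do echo "[$FILE]" ; done
# [a]
# [file]
# [with]
# [spaces.txt]
# [plain.txt]

rm demolist.txt
```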

This very nearly works – the only trouble is that it transfers the first file in the list and then stops, with no error! This difficulty arises whenever you use the cat [file] | while read VAR idiom with an ssh invocation inside the loop: ssh inherits the loop’s STDIN and drains the rest of the pipe (I found that out in a Usenet post). So we have to modify scp-resume one last time, redirecting the download command’s input from /dev/null:

ssh -C -c arcfour "$userhost" "dd bs=1 skip=$localsize \"if=${remotefile}\"" >> $localfile < /dev/null

With this change, you can no longer enter the ssh password manually – but you’d want automatic authentication set up anyway, as you don’t want to enter your password for every file. A simple way to set up automatic authentication is described here.
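You can reproduce the stdin-stealing behaviour without ssh at all; in the sketch below, cat > /dev/null stands in for any command inside the loop that reads its standard input:

```sh
printf '%s\n' one two three > list.txt

# Broken: the inner command inherits the pipe and drains it
cat list.txt | while read F ; do
  echo "processing $F"
  cat > /dev/null              # stand-in for ssh
done
# prints only: processing one

# Fixed: feed the inner command from /dev/null
cat list.txt | while read F ; do
  echo "processing $F"
  cat > /dev/null < /dev/null  # stand-in for ssh < /dev/null
done
# prints all three lines

rm list.txt
```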

Lastly, you can wrap the whole command above in a for loop with a few iterations so that if the connection is dropped on a few transfers, the file can be resumed in a subsequent pass:

for i in `seq 0 100`; do
	cat filelist.txt | while read FILE ; do
		DIR=`dirname "$FILE"` ;  mkdir -p "./$DIR" ;
		./scp-resume -d "remote-user@remote.machine.ip.addr:$FILE" "./$FILE" ;
	done
done

This whole process is hideously inefficient for large numbers of files, alas. But it seems to get the job done. Here’s my edited version of scp-resume, using the redirect from /dev/null for ssh and using ls -g instead of ls -l to query the file size. Note that I’ve only tested the downloading functionality, never the uploading bits.

#!/bin/sh
# scp-resume - by erik jan taal
# Speed improvements by using blocks
# Fixed by Hugh Denman to use ls -g (safe with usernames containing spaces)
#   this version assumes that ssh is set up for automatic authentication rather than manual password entry
# This script assumes that you have access to the 'dd' utility
# on both the local and remote host.

# dd transfer blocksize (8192 by default)
blocksize=8192

usage() {
  echo "Usage: `basename $0` -u(pload)   <localfile>  <remotefile> [<sshargs>]"
  echo "       `basename $0` -d(ownload) <remotefile> <localfile>  [<sshargs>]"
  echo "  <remotefile> should be in the scp format, i.e.: [user@]host:filename"
  echo "  <sshargs> are optional further ssh options such as a port specification"
  echo "     (-p 1234) or use of compression (-C)"
  echo "  -u:"
  echo "     <remotefile> may be [user@]host: for uploading to your remote home directory"
  echo "  -d:"
  echo "     <localfile> may be a period (.) when downloading a remote file to the"
  echo "       current working directory."
  exit 1
}

[ -z "$1" -o -z "$2" -o -z "$3" ] && usage

option="$1"

case $option in
  -u|--upload)
    localfile="$2"
    remote="$3"
    shift 3
    sshargs="$@"

    userhost="${remote%%:*}"
    remotefile="${remote#*:}"

    if [ ! -f "$localfile" ]; then
      echo "!! File not found: $localfile"
      exit 1
    fi
    if [ x"$userhost" = x"$remote" ]; then usage; fi
    if [ x"$remotefile" = x"$remote" -o -z "$remotefile" ]; then remotefile=`basename "$localfile"`; fi

    echo "==>> Getting size of remote file:"
    localsize=`ls -g "${localfile}" | awk '{ print $4 }'`
    remotesize=`ssh $sshargs "$userhost" "[ -f \"${remotefile}\" ] && ls -g \"${remotefile}\"" < /dev/null | awk '{ print $4 }'`

    [ -z "$remotesize" ] && remotesize=0
    echo "=> Remote filesize: $remotesize bytes"

    if [ $localsize -eq $remotesize ]; then
      echo "=> Local size equals remote size, nothing to transfer."
      exit 0
    fi

    remainder=$((remotesize % blocksize))
    restartpoint=$((remotesize - remainder))
    blockstransferred=$((remotesize / blocksize))

    echo "=> Resuming upload of '$localfile'"
    echo "  at byte: $restartpoint ($blockstransferred blocks x $blocksize bytes/block),"
    echo "  will overwrite the trailing $remainder bytes."

    # Here ssh must read dd's output from the pipe, so no /dev/null redirect
    dd bs=$blocksize skip=$blockstransferred "if=${localfile}" |
      ssh $sshargs "$userhost" "dd bs=$blocksize seek=$blockstransferred of=\"$remotefile\""

    echo "done."
    ;;

  -d|--download)
    remote="$2"
    localfile="$3"
    shift 3
    sshargs="$@"

    userhost="${remote%%:*}"
    remotefile="${remote#*:}"
    [ x"$remotefile" = x"$remote" ] && usage

    if [ x"$localfile" = x"." ]; then localfile=`basename "$remotefile"`; fi
    localsize=
    if [ -f "$localfile" ]; then
      localsize=`ls -g "${localfile}" | awk '{ print $4 }'`
    fi
    [ -z "$localsize" ] && localsize=0

    remainder=$((localsize % blocksize))
    restartpoint=$((localsize - remainder))
    blockstransferred=$((localsize / blocksize))

    echo "=> Resuming download of '$localfile'"
    echo "  at byte: $restartpoint ($blockstransferred blocks x $blocksize bytes/block)"
    echo "  filesize: $localsize; will overwrite the trailing $remainder bytes."
    ssh $sshargs "$userhost" "dd bs=$blocksize skip=$blockstransferred \"if=${remotefile}\"" < /dev/null |
      dd bs=$blocksize seek=$blockstransferred "of=$localfile"

    echo "done."
    ;;

  *)
    usage
    ;;
esac

Second real post exactly one year after the first! Prolific.


Debugging JSP pages on GoDaddy

I was recently hired to look into some problems with a fairly simple web-based software licensing application hosted on GoDaddy. The server application consists of a number of servlets in some JAR files, and some .jsp pages invoking those servlets.

One of the problems was that email messages originated by the server application weren’t being delivered. There were two issues affecting email delivery. Firstly, the SMTP server had been hard-coded as ‘localhost’; this was easily changed to GoDaddy’s mail server. After this change was made, the second problem appeared: the qmail server was complaining about raw linefeed characters (LF vs CRLF). The app uses a third-party email servlet (from a site that now appears defunct), which uses PrintWriter.println to issue each SMTP command, and the email messages constructed by the app used \n newline characters. Neither approach is correct: PrintWriter.println emits the system-specific newline, no Java IO function will ever do C-style translation of \n into \r\n, and SMTP requires CRLF line endings. A succinct little summary of newlines in programming languages can be had from Wikipedia.
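The one-byte difference qmail was complaining about is easy to inspect from a shell (printf here just stands in for what the Java code emits):

```sh
printf 'HELO\n'   | od -c | head -n 1   # bare LF:  H  E  L  O  \n
printf 'HELO\r\n' | od -c | head -n 1   # CRLF:     H  E  L  O  \r  \n
```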

The fix is to change every out.println(stuff) in the email servlet to out.print(stuff + "\r\n"), and similarly to construct email messages using \r\n instead of \n. If you need to make a similar fix, the source for the email servlet is, at the time of writing, available here.

So far, so straightforward. The only issue here is that GoDaddy’s Tomcat server only restarts once a day (1am Arizona time). And because of servlet caching, updates to a jar (or standalone servlet) only come into effect when the server is bounced. So if you’re trying to debug a server application that depends intimately on the host’s (GoDaddy’s) email server & MySQL databases, the naive MO is basically patch, compile, upload, wait til 1am AZ time, repeat until done. Nightmare. Other people have encountered similar GoDaddy Gotchas.

However, while an update to an existing servlet has to wait for the bounce, a new servlet (standalone or in a .jar, but not in a WAR) is available immediately on upload. So for a faster turnaround time, you can rename the class each time you upload it, and invoke it straight away. Of course, this gets cumbersome pretty quickly, as every reference to the updated class in every .java and .jsp file has to be updated as well.

So the fix is: automation by shell script. For the application I was working on, I had two servlet source trees: /WEB-INF/lib/com.consultant/ (consultant were the crowd that originally developed the application), and /WEB-INF/lib/com.coolservlets/. I also had a bunch of .jsp files in /jsp/. I figured the simplest thing was to update the package names, specifically changing com. to comTestNNN. Don’t use comN, as com1 and com2 are serial ports under Cygwin.

The shell script appears below. As it stands, you invoke it with the old and new package prefixes: com comTest1 after the first fix. If the problem’s not sorted, make another change and go comTest1 comTest2. And so on, with a new basename for the updated packages every time. You can test the updated application as soon as it’s uploaded.

The main flaw in the system is that you have to close all the source files in your editor before each build—even if the build fails—as the root directory name changes each time the script is invoked, so the file paths are invalidated. I suspect that this can easily be worked around using directory links, however, where the directory link is renamed each time while the code sits inside the ‘real’ directory. Also, it would be smarter to have the script generate the new name automatically, rather than have the user specify the old and new names.

The four stages are:

  • Rename the package root directory from $OLDNAME to $NEWNAME
  • Replace every occurrence of $OLDNAME with $NEWNAME in all .java and .jsp files, where it refers to one of the packages we’re updating
  • Package up the files in a .jar
  • Upload everything to the server

The script uses the Java tools (same version as GoDaddy, rather than the most recent), perl, and ncftp. It’s developed under cygwin, but should work on any bash-alike.


#!/bin/bash
set -o nounset
set -o errexit
set -o verbose

JAVAC="/cygdrive/c/Program Files/Java/jdk1.5.0_11/bin/javac.exe"
JAR="/cygdrive/c/Program Files/Java/jdk1.5.0_11/bin/jar.exe"

[ $# -lt 2 ] && echo "Need OLDNAME / NEWNAME arguments!" && exit -1

OLDNAME=$1
NEWNAME=$2

BASEDIR="e:/code/" #can't use /cygdrive/e for win32 java tools
JAVADIR="${BASEDIR}${NEWNAME}"  # servlet source tree (assumed layout)
JSPDIR="${BASEDIR}jsp"          # .jsp tree (assumed layout)

FTPUSER="your-ftp-user"         # fill these in for your hosting account
FTPPASS="your-ftp-password"
FTPHOST="your-ftp-host"

cd "$BASEDIR"

# Stage 1: rename the package root directory
mv "$OLDNAME" "$NEWNAME"

# Stage 2: update every reference to the renamed packages
perl -pi -w -e "s#${OLDNAME}.consultant#${NEWNAME}.consultant#g" `find $JAVADIR -name '*.java'`
perl -pi -w -e "s#${OLDNAME}.consultant#${NEWNAME}.consultant#g" `find $JSPDIR -name '*.jsp'`

perl -pi -w -e "s#${OLDNAME}.coolservlets#${NEWNAME}.coolservlets#g" `find $JAVADIR -name '*.java'`
perl -pi -w -e "s#${OLDNAME}.coolservlets#${NEWNAME}.coolservlets#g" `find $JSPDIR -name '*.jsp'`

# Stage 3: compile and package up the files in a .jar
"$JAVAC" -cp ".;servlet.jar" `find $JAVADIR -name '*.java'`
#Can't use $JAVADIR here; puts full path in the jar
"$JAR" -cf stuff.jar `find $NEWNAME -name '*.java' -o -name '*.class'` #-uvf: add files; verbose, see files as added

# Stage 4: upload everything to the server
ncftpput -u $FTPUSER -p $FTPPASS $FTPHOST WEB-INF/lib stuff.jar
ncftpput -u $FTPUSER -p $FTPPASS $FTPHOST jsp `find $JSPDIR -name '*.jsp'`

For more on debugging servlets, check here.


First Post

A shiny new blog on a shiny new website. This will mostly be a place to post things that I searched for on the web but didn’t find, and had to then figure out. I’ll also post occasionally on coding, drumming, trying to learn music & ear-training, and probably a few bits about FOREX trading.

The first thing I always do when setting up a new server (about once every 4 years) is migrate across my older websites. The bogeypage, from 1997-1998, is here. Around 1999, I got my own server, Flowerpot, in my college room, which hosted a newer site. Some years after I came to Trinity college, I set up a new site, which was mostly for playing with m4 as a tool for statically managing websites; this site is falling apart as I never updated the database bits, but the gist of it is still there.

If you’ve stumbled across the site looking for me, and you want to get in touch, I’m at hdenman .at. I am not a scholar of Yiddish literature.