Dojo examples from UC2009

I know, I know, loads of people have been waiting for these… So here we go: I’ve finally sorted out a downloadable version of the Dojo examples from the presentation I gave at the MySQL Users Conference 2009. There are three examples:

  • The auto-paging table example, which uses the functionality of the Dojo Toolkit and the QueryReadStore to automatically load content from a table.
  • The basic graphing example, which loads data dynamically and plots a graph.
  • And the zooming version of the same basic graph interface.
There’s a README in the download that contains instructions on getting everything up to speed, although it should be fairly obvious. It’s attached to the bottom of this post too. Any questions about getting this to work, please go ahead and ask!

Download the package here

UC2009 Working with MySQL and Dojo
==================================

MC Brown, 2009, and http://coalface.mcslp.com

Components:

There are three examples here:

- Dojo table with auto-paging (table_autopage.html and table_autopage.cgi)
- Dojo Basic Graph (graph_basic.html and graph_ajax.cgi)
- Dojo Zooming Graph (graph.html and graph_ajaz_zoom.cgi)

You will also need the Dojo/Dijit and DojoX toolkit package from here: download the combined package, then extract the contents and place them in the same directory as the CGI scripts.

Instructions for use:

For the Table example, you can create a table (or use an existing DB) from WordPress using the wp_posts table. In practice, to be compatible with the example without any more modifications, you need only create and populate a table with the following columns:

id (int)
user_nicename (char)
post_date (datetime)
post_title (char)

For the Graphing examples, you need a table as follows:

CREATE TABLE `currencies` (
  `currencyid` bigint(20) unsigned NOT NULL auto_increment,
  `currency` char(20) default NULL,
  `value` float default NULL,
  `datetime` int(11) default NULL,
  `new` int(11) default NULL,
  PRIMARY KEY (`currencyid`),
  UNIQUE KEY `currencyid` (`currencyid`),
  KEY `currencies_currency` (`currency`,`datetime`),
  KEY `currencies_datetime` (`datetime`)
) ENGINE=MyISAM AUTO_INCREMENT=170048 DEFAULT CHARSET=latin1

Then populate it with date/value data according to the information you want to store. Once done, make sure you change the database information in the CGI scripts so that they connect correctly during operation.
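The four columns for the Table example can be wrapped into a short script that emits a CREATE TABLE statement ready to feed to the mysql client. This is a minimal sketch: the table name posts and the column widths are my assumptions, not part of the download.

```shell
# Emit a minimal schema matching the columns listed in the README.
# The table name "posts" and the CHAR widths are assumptions.
sql='CREATE TABLE posts (
  id INT UNSIGNED NOT NULL AUTO_INCREMENT,
  user_nicename CHAR(50) DEFAULT NULL,
  post_date DATETIME DEFAULT NULL,
  post_title CHAR(200) DEFAULT NULL,
  PRIMARY KEY (id)
)'
printf '%s\n' "$sql"
```

Pipe the output into the mysql client for whichever database the CGI scripts are configured to use.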

DTrace in MySQL: Documentation and a MySQL University Session

DTrace has been something that I’ve been trying to get into the MySQL server for more than a year now. After a combination of my own patches and working with Mikael Ronstrom and Alexey Kopytov, we finally have a suite of probes in MySQL 6.0.8. Better still, after a short hiatus while I was busy working on a million-and-one other things, the documentation for those probes is now available: Tracing mysqld with DTrace.

The documentation is comparatively light and deep all at the same time. It’s lightweight from the perspective that I’ve added very little detail on the mechanics of DTrace itself, since there is no need to replicate the excellent guides that Sun already provide on the topic. At the same time, I’ve tried to provide at least one (and sometimes two) D script examples for each of the groups of probes in the 6.0.8 release.

So what next for MySQL DTrace probes? Well, the next version of the probes has already been worked on. I’ve been testing them for a month or so; a problem with the probes on SPARC meant I couldn’t approve the patch, but I managed to resolve that last week. The new patch extends the probes to enable a more detailed look at certain operations, and it enables us to place probes anywhere within the server, including individual engine-level row operations.

If you want a demonstration of DTrace in MySQL, including some of the things you can monitor without the user probes we’ve added, as well as those new probes I just mentioned, then you will want to attend the MySQL University session this Thursday (12th Feb) at 14:00 UTC, where I’ll be doing all of the above.

As a side note, because I know there are people interested: last week I also finished the patch for these probes to go into the MySQL 5.1.30 version that we will be putting into OpenSolaris. Sunanda is working on getting that release out there as I type this.

Multiple VCS Updates and Cleanups

I spend a lot of time updating a variety of different repositories of different varieties and denominations, and I hate having to do that all by hand – I’d rather just go up into a top-level directory and say update-all and let a script work out what to do, no matter what different repos are there. I do it with a function defined within my bash profile/rc scripts, and it covers git, bzr, svn, bk, and cvs. The trick is to identify what type of directory we are updating. I do this, lazily, for each type individually, rather than for each directory, but I’ve found this method to be more reliable.

update-all () {
    for file in `ls -d */.svn 2>/dev/null`; do
        realdir=`echo $file | cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; svn update )
    done
    for file in `ls -d */.bzr 2>/dev/null`; do
        realdir=`echo $file | cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; bzr pull )
    done
    for file in `ls -d */.git 2>/dev/null`; do
        realdir=`echo $file | cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; git pull )
    done
    for file in `ls -d */CVS 2>/dev/null`; do
        realdir=`echo $file | cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; cvs up )
    done
    for file in `ls -d */BitKeeper 2>/dev/null`; do
        realdir=`echo $file | cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; bk pull )
    done
    unset realdir
}

That’s it – a quick way to update any directory of repos.
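The same trick can also be turned inside out: walk each subdirectory once and pick the tool from the metadata directory it contains. This is just a sketch, shown as a dry run that only reports what it would do; swap each echo for the matching pull/update command to get the real thing.

```shell
# Per-directory variant of update-all (a sketch): inspect each
# subdirectory once and report which VCS it belongs to.
vcs_update_dir () {
    for dir in */; do
        if   [ -d "$dir/.git" ];      then echo "git: $dir"
        elif [ -d "$dir/.bzr" ];      then echo "bzr: $dir"
        elif [ -d "$dir/.svn" ];      then echo "svn: $dir"
        elif [ -d "$dir/CVS" ];       then echo "cvs: $dir"
        elif [ -d "$dir/BitKeeper" ]; then echo "bk: $dir"
        fi
    done
}
```

The per-type loops above make one pass per VCS; this version makes a single pass, which matters once a directory holds a few hundred checkouts.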

Mysterious crashes? – check your temporary directory settings

Just recently I seem to have noticed an increased number of mysterious crashes and terminations of applications. This is generally on brand new systems that I’m setting up, or on existing systems where I’m setting up a new or duplicate account. Initially everything is fine, but then all of a sudden, as I start syncing over my files, shell profile and so on, applications will stop working. I’ve experienced it in MySQL, and more recently when starting up Gnome on Solaris 10 9/07.

Sometimes the problem is obvious; other times it takes me a while to realize what is happening. But in all cases it’s the same problem: my TMPDIR environment variable points to a directory that doesn’t exist. That’s because, for historical reasons (mostly related to HP-UX, bad permissions and global tmp directories), I’ve always set TMPDIR to a directory within my home directory. It’s just one of those things I’ve had in my bash profile for as long as I can remember, probably 12 years or more at least.

This can be counterproductive on some systems. On Solaris, for example, the main /tmp directory is actually mounted on the swap space, which means that RAM will be used if it’s available, and that can make a big difference during compilation. But any setting is counterproductive if you point to a directory that doesn’t exist and then have an application that tries to create a temporary file, fails, and then never prints out a useful trace of why it had a problem (yes, I mean you, Gnome!). I’ve just reset my TMPDIR in .bash_vars to read:

case $OSTYPE in
    (solaris*)
        export TMPDIR=/tmp/mc
        mkdir -m 0700 -p $TMPDIR
        ;;
    (*)
        export TMPDIR=~/tmp
        mkdir -m 0700 -p $TMPDIR
        ;;
esac

Now I explicitly create a directory in a suitable location during startup, so I shouldn’t experience those crashes anymore.
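A defensive companion to that (a sketch, and the function name is mine): check that TMPDIR really exists and is creatable before any application needs it, falling back to the system /tmp if it can’t be made.

```shell
# Verify TMPDIR points somewhere usable; create it if missing,
# and fall back to /tmp when creation fails or TMPDIR is unset.
check_tmpdir () {
    if [ -z "$TMPDIR" ] || ! mkdir -m 0700 -p "$TMPDIR" 2>/dev/null; then
        export TMPDIR=/tmp
    fi
}
```

Calling this at the end of the profile means a stale setting degrades gracefully instead of causing the silent failures described above.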

Setting up the developer stack issues

There’s a great post on Coding Horror about Configuring the Stack. Basically the gripe is with the complexity of installing the typical developer stack, in this case on Windows, using Visual Studio. My VS setup isn’t vastly different from the one Jeff mentions, and I have similar issues with the other stacks I use.

I’ve just set up the Ultra3 mobile workstation again for building MySQL and other stuff on, and it took about 30 packages (from Sun Freeware) just to get the basics like gcc, binutils, gdb, flex, bison and the rest set up. It took the best part of a day to get everything downloaded, installed, and configured. I haven’t even started on modules for Perl yet.

The Eclipse stack is no better. On Windows you’ll need the JDK of your choice, plus Eclipse. Then you’ll have to update Eclipse. Then add in the plugins and modules you want. Even though some of that is automated (and, annoyingly, some of it is not although it could be), it generally takes me a few hours to get stuff installed. Admittedly on my Linux boxes it’s easier: I use Gentoo and copy around a suitable make.conf with everything I need in it, so I need only run emerge, but that can still take a day or so to get everything compiled.

Although I’m sure we can all think of easier ways to create the base systems (I use Parallels, for example, and copy VM folders to create new environments for development), even the updating can take a considerable amount of time. I suggest the new killer app is one that makes the whole process easier.

Setting a remote key through ssh

One of the steps I find myself doing a lot is distributing round an ssh key so that I can log in and use different machines automatically. To help in that process I created a small function in my bash profile script (actually, for me it’s in .bash_aliases):

function setremotekey {
    OLDDIR=`pwd`
    if [ -z "$1" ]
    then
        echo Need user@host info
        return 1
    fi
    cd $HOME
    if [ ! -e "./.ssh/" ]
    then
        ssh-keygen -t rsa
    fi
    cat ./.ssh/ | ssh $1 'mkdir -p -m 0700 .ssh && cat >> .ssh/authorized_keys'
    cd $OLDDIR
}

To use, whenever I want to copy my public key to a remote machine I just have to specify the login and machine:

$ setremotekey mc@narcissus

Then type in my password once, and the function does the rest. How? Well, it checks to make sure I’ve entered a user/host (or actually just a string of some kind). Then, if I haven’t created a public key before (which I might not have on a new machine), I run ssh-keygen to create it. Once the key is in place, I output the key text and use ssh to pipe-append it to the remote authorized_keys file, creating the directory along the way if it doesn’t exist. Short and sweet, but it saves me a lot of time.
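The remote half of the pipeline is the interesting bit, and it can be reproduced locally in a scratch directory rather than over ssh. In this sketch the key text and the directory are illustrative only:

```shell
# Simulate what the ssh side runs: create .ssh with tight permissions
# and append the piped-in key to authorized_keys.
scratch=$(mktemp -d)
key='ssh-rsa AAAAB3NzaC1yc2E...example mc@narcissus'
printf '%s\n' "$key" | ( cd "$scratch" && mkdir -p -m 0700 .ssh && cat >> .ssh/authorized_keys )
```

Note that because the key is appended (>>) rather than written, running the function twice against the same host leaves a duplicate line; harmless, but worth knowing.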

Extra bash improvements

If you’ve read my Getting the most out of bash article at IBM developerWorks then you may be interested in some further bash goodness and improvements. Juliet Kemp covers some additional tricks in Improving bash to make working with bash easier. Some of the stuff there I have already covered, but the completion extensions might be useful if you like to optimize your typing. Even better, one of the comments provides the hooks to change your prompt to include your current CVS branch, another to include your current platform, and a really cool way of simplifying your history searching.

Controlling OS X volume through Cron

One of the biggest annoyances of working from home is that, with the computers in the room next door, the volume of your computers can cause a problem if someone suddenly calls you on Skype, or your backup software suddenly kicks in and starts beeping. I never remember to mute the volume, so I started looking for a way to do this automatically through cron at specific times. I also wanted to be sure that, rather than setting a specific volume (and having to remember it), I could just use the OS X mute function.

The solution is to combine AppleScript, which you can run from the command line using the osascript command, with the scheduling abilities of cron. There are three components: the two AppleScripts that mute and unmute the volume, and the lines in a crontab to run the scripts. To mute the volume with AppleScript:

set volume with output muted

To unmute:

set volume without output muted

Save both of these as AppleScripts (use the AppleScript editor so they are compiled). Then we can just set the scripts to execute when required:

0 9 * * * osascript /usr/local/mcslp/volume-unmute.scpt
0 19 * * * osascript /usr/local/mcslp/volume-mute.scpt

I’ve set this on the three machines and now we get a silent night!

Making a single extractor

One of my new articles is on simplifying your command line (read more in System Administrators Toolkit: Standardizing your UNIX command-line tools), making your life easier as you move between different environments. The same principles can be applied just to make your life easier. Here’s a function I’ve had in my bash init script for years that gets round the issue of extracting a compressed archive file of various types, even if your tar isn’t aware of the compression type:

function uz () {
    file=$1
    case $file in
        (*gz)  gunzip -c $file | tar xf - ;;
        (*bz2) bunzip2 -c $file | tar xf - ;;
        (*Z)   tar zxf $file ;;
        (*zip) unzip $file ;;
    esac
}

Now I can extract any file with:

$ uz file.{gz|bz2|zip|Z}

And not worry that my Solaris tar isn’t bzip2 aware, even though it is gzip aware.
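The extension match can also be sidestepped entirely. This variant (a sketch, with a name of my own choosing) asks file(1) for the content type rather than trusting the filename, so a misnamed archive still extracts:

```shell
# Like uz, but detect the archive type from its contents via file -b,
# which prints e.g. "gzip compressed data" regardless of the extension.
uzf () {
    case $(file -b "$1") in
        (gzip*)  gunzip -c "$1" | tar xf - ;;
        (bzip2*) bunzip2 -c "$1" | tar xf - ;;
        (Zip*)   unzip "$1" ;;
        (*)      echo "uzf: unknown archive type: $1" >&2; return 1 ;;
    esac
}
```

The trade-off is a dependency on the file utility, which is near-universal but whose output strings can vary slightly between platforms.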

Copying multiple files with scp

I keep my .bash init scripts on one machine and copy them over to each machine on which I have a login. There’s various bits of logic in there to ensure that the right PATH and other values are set according to the host and/or platform. I then have a simple line that updates the local .bash scripts from the main box that holds the main copies, so that I can just run:


To update everything. I use scp and, depending on the system, use a preset key or require a password. For copying multiple files there are many solutions; I could just use .bash*, but I’d also get the history and backup files. The typical advice is to use separate entries:

scp mc@narcissus:.bashrc mc@narcissus:.bash_aliases ~

This is less than optimal for a number of reasons, the first being that each location is treated individually, which requires multiple connections and multiple password prompts. You can, though, use normal shell-like expansion; just make sure you use quotes to ensure that it isn’t parsed and expanded by the local shell instead of the remote one:

scp mc@narcissus:".bash{rc,_path,_aliases,_vars}" ~
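The difference the quotes make is easy to see with a plain echo standing in for scp:

```shell
# Quoted: the brace group survives untouched, for the remote shell to expand
echo 'mc@narcissus:.bash{rc,_aliases}'
# Unquoted: bash expands the braces locally into two separate arguments,
# and scp would then treat them as two independent remote sources
echo mc@narcissus:.bash{rc,_aliases}
```

With the quoted form, scp makes one connection and lets the remote side resolve the names; unquoted, you are back to one connection (and potentially one password prompt) per file.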