Dojo examples from UC2009

I know, I know, loads of people have been waiting for these… So here we go, I’ve finally sorted out a downloadable version of the Dojo examples from the presentation I gave at the MySQL Users Conference 2009. There are three examples:

  • The auto-paging table example, which uses the functionality of the Dojo Toolkit and the QueryReadStore to automatically load content from a table.
  • The basic graphing example, which loads data dynamically and plots a graph.
  • And the zooming version of the same basic graph interface.
  • There’s a README in the download that contains instructions on getting everything up to speed, although it should be fairly obvious. It’s attached to the bottom of this post too.

Any questions about getting this to work, please go ahead and ask!

Download the package here

UC2009 Working with MySQL and Dojo
==================================

MC Brown, 2009, http://mcslp.com and http://coalface.mcslp.com

Components:

There are three examples here:

- Dojo table with auto-paging (table_autopage.html and table_autopage.cgi)
- Dojo Basic Graph (graph_basic.html and graph_ajax.cgi)
- Dojo Zooming Graph (graph.html and graph_ajaz_zoom.cgi)

You will also need the Dojo/Dijit and DojoX toolkit package from here:

http://www.dojotoolkit.org/downloads

Download the combined package, then extract the contents and place them in the same directory as the CGI scripts.

Instructions for use:

For the Table example, you can create a table (or use an existing DB) from WordPress using the wp_posts table. In practice, to be compatible with the example without any further modifications, you need only create and populate a table with the following columns:

id (int)
user_nicename (char)
post_date (datetime)
post_title (char)

For the Graphing examples, you need a table as follows:

CREATE TABLE `currencies` (
  `currencyid` bigint(20) unsigned NOT NULL auto_increment,
  `currency` char(20) default NULL,
  `value` float default NULL,
  `datetime` int(11) default NULL,
  `new` int(11) default NULL,
  PRIMARY KEY (`currencyid`),
  UNIQUE KEY `currencyid` (`currencyid`),
  KEY `currencies_currency` (`currency`,`datetime`),
  KEY `currencies_datetime` (`datetime`)
) ENGINE=MyISAM AUTO_INCREMENT=170048 DEFAULT CHARSET=latin1

Then populate it with date/value data according to the information you want to store.

Once done, make sure you change the database information in the CGI scripts so that they can retrieve the data correctly during operation.
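
If you want to try the table example without a WordPress database to hand, something along the following lines should do. The column names come from the README above, but the table name, types and sizes are my own assumptions (and youruser/yourdatabase are placeholders), so adjust them, and whatever table name the CGI script expects, to suit:

# Sketch only: a minimal table compatible with the auto-paging example.
mysql -u youruser -p yourdatabase <<'SQL'
CREATE TABLE posts (
    id            INT UNSIGNED NOT NULL AUTO_INCREMENT,
    user_nicename CHAR(50)     DEFAULT NULL,
    post_date     DATETIME     DEFAULT NULL,
    post_title    CHAR(200)    DEFAULT NULL,
    PRIMARY KEY (id)
);
SQL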

DTrace in MySQL: Documentation and a MySQL University Session

DTrace has been something that I’ve been trying to get into the MySQL server for more than a year now. After a combination of my own patches and working with Mikael Ronstrom and Alexey Kopytov, we finally have a suite of probes in MySQL 6.0.8. Better still, after a short hiatus while I was busy working on a million-and-one other things, the documentation for those probes is now available: Tracing mysqld with DTrace.

The documentation is comparatively light and deep all at the same time. It’s lightweight from the perspective that I’ve added very little detail on the mechanics of DTrace itself, since there is no need to replicate the excellent guides that Sun already provide on the topic. At the same time, I’ve tried to provide at least one (and sometimes two) D script examples for each of the groups of probes in the 6.0.8 release.

So what next for MySQL DTrace probes? Well, the next version of the probes has already been worked on. I’ve been testing them for a month or so, and due to a problem with the probes on SPARC I couldn’t approve the patch, but I managed to resolve that last week. The new patch extends the probes to enable a more detailed look at certain operations, and it enables us to expand the probes to be placed anywhere within the server, including individual engine-level row operations.

If you want a demonstration of DTrace in MySQL, some of the things you can monitor without using the user probes we’ve added, and those new probes I just mentioned, then you will want to attend the MySQL University session this Thursday (12th Feb) at 14:00 UTC, where I’ll be doing all of the above.

As a side note, because I know there are people interested, last week I also finished the patch for these probes to go into the MySQL 5.1.30 version that we will be putting into OpenSolaris. Sunanda is working on getting that release out there as I type this.
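
If you want a quick taste before the session, the listing below is a minimal sketch (not one of the scripts from the documentation) that times each query using the query-start and query-done probes described in Tracing mysqld with DTrace. Run it as root against a 6.0.8 server built with the probes enabled:

# Sketch: per-query timing via the mysql provider probes (query-start/query-done).
dtrace -n '
mysql*:::query-start
{
    self->start = timestamp;            /* nanoseconds at the start of the query */
    self->query = copyinstr(arg0);      /* arg0 is the query string */
}

mysql*:::query-done
/self->start/
{
    printf("%d us  %s\n", (timestamp - self->start) / 1000, self->query);
    self->start = 0;
    self->query = 0;
}'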

Multiple VCS Updates and Cleanups

I spend a lot of time updating a variety of repositories of different types and denominations, and I hate having to do that all by hand – I’d rather just go up into a top-level directory, say update-all, and let a script work out what to do, no matter what different repos are there. I do it with a function defined within my bash profile/rc scripts, and it covers git, bzr, svn, bk, and cvs. The trick is to identify what type of directory we are updating. I do this, lazily, by iterating over each VCS type in turn rather than examining each directory individually, but I’ve found this method to be more reliable.

update-all () {
    for file in `ls -d */.svn 2>/dev/null`; do
        realdir=`echo $file|cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; svn update )
    done
    for file in `ls -d */.bzr 2>/dev/null`; do
        realdir=`echo $file|cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; bzr pull )
    done
    for file in `ls -d */.git 2>/dev/null`; do
        realdir=`echo $file|cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; git pull )
    done
    for file in `ls -d */CVS 2>/dev/null`; do
        realdir=`echo $file|cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; cvs up )
    done
    for file in `ls -d */BitKeeper 2>/dev/null`; do
        realdir=`echo $file|cut -d/ -f1`
        echo Updating in $realdir
        ( cd $realdir; bk pull )
    done
    unset realdir
}
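
To use it, I just source the function from my profile and run it from whatever top-level directory holds the checkouts (the directory name here is only an illustrative example):

$ cd ~/src       # hypothetical directory containing a mix of svn/bzr/git/cvs/bk checkouts
$ update-all     # each subdirectory is identified by its .svn/.bzr/.git/CVS/BitKeeper marker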

That’s it – a quick way to update any directory of repos.

Mysterious crashes? – check your temporary directory settings

Just recently I seem to have noticed an increased number of mysterious crashes and terminations of applications. This is generally on brand new systems that I’m setting up, or on existing systems where I’m setting up a new or duplicate account. Initially everything is fine, but then all of a sudden, as I start syncing over my files, shell profile and so on, applications will stop working. I’ve experienced it in MySQL, and more recently when starting up Gnome on Solaris 10 9/07.

Sometimes the problem is obvious, other times it takes me a while to realize what is happening and causing the problem. But in all cases it’s the same problem: my TMPDIR environment variable points to a directory that doesn’t exist. That’s because, for historical reasons (mostly related to HP-UX, bad permissions and global tmp directories), I’ve always set TMPDIR to a directory within my home directory. It’s just one of those things I’ve had in my bash profile for as long as I can remember. Probably 12 years or more at least.

This can be counterproductive on some systems. On Solaris, for example, the main /tmp directory is actually mounted on the swap space, which means that RAM will be used if it’s available, which can make a big difference during compilation. But any setting is counterproductive if you point to a directory that doesn’t exist and then have an application that tries to create a temporary file, fails, and then never prints out a useful trace of why it had a problem (yes, I mean you Gnome!). I’ve just reset my TMPDIR in .bash_vars to read:

case $OSTYPE in
    (solaris*)
        export TMPDIR=/tmp/mc
        mkdir -m 0700 -p $TMPDIR
        ;;
    (*)
        export TMPDIR=~/tmp
        mkdir -m 0700 -p $TMPDIR
        ;;
esac

Now I explicitly create a directory in a suitable location during startup, so I shouldn’t experience those crashes anymore.
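
If you want to be extra defensive, a quick check at the end of the profile will flag the problem straight away rather than leaving the next application to fail silently. This is just a sketch of the idea, not something from my actual .bash_vars:

# Sketch: warn and fall back to /tmp if the chosen TMPDIR isn't usable.
if ! mkdir -m 0700 -p "$TMPDIR" 2>/dev/null || ! touch "$TMPDIR/.tmpcheck" 2>/dev/null
then
    echo "warning: $TMPDIR is not writable; falling back to /tmp" >&2
    export TMPDIR=/tmp
else
    rm -f "$TMPDIR/.tmpcheck"
fi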

Setting up the developer stack issues

There’s a great post on Coding Horror about Configuring the Stack. Basically the gripe is with the complexity of installing the typical developer stack, in this case on Windows, using Visual Studio.

My VS setup isn’t vastly different to the one Jeff mentions, and I have similar issues with the other stacks I use. I’ve just set up the Ultra3 mobile workstation again for building MySQL and other stuff on, and it took about 30 packages (from Sun Freeware) just to get the basics like gcc, binutils, gdb, flex, bison and the rest set up. It took the best part of a day to get everything downloaded, installed, and configured. I haven’t even started on modules for Perl yet.

The Eclipse stack is no better. On Windows you’ll need the JDK of your choice, plus Eclipse. Then you’ll have to update Eclipse. Then add in the plugins and modules you want. Even though some of that is automated (and, annoyingly, some of it is not although it could be), it generally takes me a few hours to get stuff installed. Admittedly on my Linux boxes it’s easier – I use Gentoo and copy around a suitable make.conf with everything I need in it, so I need only run emerge – but that can still take a day or so to get everything compiled.

Although I’m sure we can all think of easier ways to create the base systems – I use Parallels, for example, and copy VM folders to create new environments for development – even the updating can take a considerable amount of time. I suggest the new killer app is one that makes the whole process easier.

Setting a remote key through ssh

One of the steps I find myself doing a lot is distributing an ssh key around so that I can log in and use different machines automatically. To help in that process I created a small function in my bash profile script (actually, for me it’s in .bash_aliases):

function setremotekey
{
    OLDDIR=`pwd`

    if [ -z "$1" ]
    then
        echo Need user@host info
        return 1
    fi

    cd $HOME

    if [ -e "./.ssh/id_rsa.pub" ]
    then
        cat ./.ssh/id_rsa.pub | ssh $1 'mkdir -p -m 0700 .ssh && cat >> .ssh/authorized_keys'
    else
        ssh-keygen -t rsa
        cat ./.ssh/id_rsa.pub | ssh $1 'mkdir -p -m 0700 .ssh && cat >> .ssh/authorized_keys'
    fi

    cd $OLDDIR
}

To use, whenever I want to copy my public key to a remote machine I just have to specify the login and machine:

$ setremotekey mc@narcissus

Then type in my password once, and the function does the rest. How? Well, it checks to make sure I’ve entered a user/host (or actually just a string of some kind). Then, if I haven’t created a public key before (which I might not have on a new machine), I run ssh-keygen to create it. Once the key is in place, I output the key text and pipe it over ssh, appending it to the remote authorized_keys file and creating the directory along the way if it doesn’t exist. Short and sweet, but saves me a lot of time.

Extra bash improvements

If you’ve read my Getting the most out of bash article at IBM developerWorks then you may be interested in some further bash goodness and improvements. Juliet Kemp covers some additional tricks on Improving bash to make working with bash easier. Some of the stuff there I have already covered, but the completion extensions might be useful if you like to optimize your typing. Even better, one of the comments provides the hooks to change your prompt to include your current CVS branch, another to include your current platform, and a really cool way of simplifying your history searching.
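
As a flavour of the sort of thing covered, the platform-in-the-prompt and history-search tricks come down to a couple of lines in your profile. This is my own minimal sketch of the idea rather than the exact snippets from the article or its comments:

# Show the platform, host and directory in the prompt, e.g. "SunOS narcissus:~ $ "
PS1="$(uname -s) \h:\w \$ "

# Let the up/down arrows search history for lines starting with what you've typed
bind '"\e[A": history-search-backward'
bind '"\e[B": history-search-forward'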