MySQL Conf 2009 Preview: Scalability and HA Tutorial

Like most people, and with just over a week to go before the conference, I’m putting the finishing touches on my various presentations. First up for me, on Monday afternoon, is my tutorial: Scale Up, Scale Out, and High Availability: Solutions and Combinations. What will we be doing? Very simply: looking at every potential solution for maximum scalability and availability for your database environment. If you are attending, be prepared to:

  • Expand your mind as we think about scaling up.
  • Expand your horizons as we think about scaling out.
  • Divide and conquer, as we think about high-availability.

We’re not hands-on in this session – but I will expect you to be brains-on!

Reverse Changelog

One of the pain points of upgrading any software package is knowing which version to upgrade to when the time comes. With software that is frequently updated, like MySQL, choosing a version that provides new features and addresses the issues you are aware of, without exposing you to other issues, is an added complexity. This week, I added another new feature to our documentation, spurred on by Baron Schwartz’s idea for a ‘reverse changelog’. As Baron noted, the idea is to report the bugs reported against a specific version, and then provide information about the version in which each bug was fixed, so that you can determine which version you need to choose when upgrading to avoid that bug. You can see the output for 5.1: Bugs Reported and Fixed in 5.1.

That page contains every version of 5.1, a list of the bugs known to have been reported against that specific version, the bug synopsis/description, and the version in which the bug is known to be fixed. The reported version information is extracted from our bugs database. The fixed version is extracted from the changelog information – another part of the dynamic changelog functionality is that it makes extracting information like this very easy when generating changelog-related output.

I’ve done a lot of work over the last couple of years to improve the quality and content of our changelog. Although the basic structure of the changelog hasn’t changed much on the outside, on the inside I’ve simplified and expanded the content and information that we track, and made the documentation team’s work processes easier to handle too. About 20% of our time in the docs team is tied up in keeping the changelogs up to date, with both the changelog content and any relevant changes to the manual, so keeping that time down enables us to focus on the content.

On the outside, for our users, we’ve been publishing the enhanced ‘key changes’ information for more than a year now. This uses tags that we add to the changelog entries when we write them, and allows us to pull out the ‘key changes’ – that is, bugs tagged with Important Change, Incompatible Change, and so on – into a single view. Hopefully both the reverse changelog and the key changes will be useful to everybody – and I thank (and will continue to thank) Baron for the inspiration for the idea, largely made possible through our dynamic changelog system. For those curious about the dynamic changelog and what it can do, you might also want to check out the Open Bugs for 5.1 list, which also uses the information provided by the dynamic changelog to tell you when known bugs are expected to be fixed.

If anybody has any further ideas about the sort of information they might want out of the changelog, please feel free to let me know. As a rough guide, reports based on specific tags (for example, all the bugs affecting a particular storage engine, partitioning, or a specific platform) are easy to produce, as are changes between specific versions, ranges, or combinations of the above.

MySQL University Needs You!!

MySQL University keeps rolling along. We’ve had some fantastic sessions just recently (not including my own, of course!), such as Lenz’s presentation on backing up MySQL with filesystem snapshots, Allan Packer’s presentation on optimizing MySQL and, going back a little further, David Van Couvering and Petr Pisl’s talk on using PHP and MySQL within NetBeans. Remember that all of these presentations can be viewed again online if you missed them first time round!

We’ve got some good topics coming up, but we need more! Got a hot topic that you want to tell the world about? Using MySQL in an interesting way? Got a good MySQL scalability story that you want to tell? Developed a great storage engine? Any and all of these topics are welcome for discussion and presentation at MySQL University. On the other hand, perhaps you’ve seen one of our existing presentations and would like an update or follow-on session that goes deeper, or that gives you the chance to ask more questions. Whatever it is you want to see on MySQL University, let me know here or through the contact form, and get these things scheduled before the slots all disappear!

LOSUG Presentation Slides Now Available

My presentation at LOSUG on Tuesday went down like a house on fire – I think it would be safe to say that the phrase for the evening was ‘It’s a cache!’. For that to make sense, you need to look at the slides, which are now available here. Attendance was great, but it seems the last-minute change of day meant that some people missed the session. We had 151 people register, and about 80 turned up on the night.

I love my Moleskine

Not remotely related to computing at all, but I’ve just been updating my diaries, and I use Moleskine. I go everywhere with a Moleskine of some description (they’ve recently released some really tiny notebooks, which make it easier to carry your life around with you). Despite having computers, organization tools, and email, there is something decidedly comforting about writing, by hand, into a physical notebook. Of course, you write in pencil so that the contents don’t smudge, and somehow that just feels even better.

MySQL on OpenSolaris Presentation/Transcript Now Available

As I mentioned earlier this week, I did a presentation on MySQL in OpenSolaris today. The presentation (audio and slides) is now viewable online (and downloadable), and you can also get hold of the transcript of the questions: here (or download). The original presentation is here. One minor difference from the presentation is that we have upgraded MySQL to 5.0.67 in 2008.11; I had forgotten we’d agreed to do this after the 5.1 pushback. Thanks to Matt Lord for the heads up. And thanks to everybody for attending. Up next week, memcached!

MySQL University: MySQL and OpenSolaris

On Thursday, November 13, 2008 (14:00 UTC / 14:00 GMT / 15:00 CET), I’ll be presenting a MySQL University session on MySQL and OpenSolaris. The presentation will be similar to the one I gave at the London OpenSolaris User Group in July; you can see that presentation by visiting the LOSUG: July 2008 page. The presentation on Thursday will be slightly different – I’ll be providing a bit more hands-on information about how to install MySQL, how to configure it and change the configuration, and some more detail on solutions like the Webstack and Coolstack distributions. I’ll also cover our plans for the inclusion of MySQL 5.1 in OpenSolaris, which will happen next year, and provide some examples of the new DTrace probes that we have been adding to MySQL generally. Of course, if there’s anything specific you want me to talk about, comment here and I’ll see if I can squeeze it into the presentation before Thursday.

ZFS Replication for MySQL data

At the European Customer Conference a couple of weeks back, one of the topics was the use of DRBD. DRBD is a kernel-based block device that replicates the data blocks of a device from one machine to another. The documentation I developed for DRBD and MySQL is available here. Fundamentally, with DRBD, you set up a physical device, configure DRBD on top of that, and write to the DRBD device. In the background, on the primary, the DRBD device writes the data to the physical disk and replicates the changed blocks to the secondary, which in turn writes the data to its physical device. The result is a block-level copy of the source data. In an HA solution, this means that you can switch over from your primary host to your secondary host in the event of a system failure and be pretty certain that the data on the primary and secondary are the same. In short, DRBD simplifies one of the more complex aspects of a typical HA solution by copying the data needed during the switch.

Because DRBD is a Linux kernel module you can’t use it on other platforms, like Mac OS X or Solaris. But there is another solution: ZFS. ZFS supports filesystem snapshots. You can create a snapshot at any time, and you can create as many snapshots as you like.

Let’s take a look at a typical example. Below I have a simple OpenSolaris system running with two pools, the root pool and another pool mounted at /opt:

Filesystem             size   used  avail capacity  Mounted on
rpool/ROOT/opensolaris-1
                       7.3G   3.6G   508M    88%    /
/devices                 0K     0K     0K     0%    /devices
/dev                     0K     0K     0K     0%    /dev
ctfs                     0K     0K     0K     0%    /system/contract
proc                     0K     0K     0K     0%    /proc
mnttab                   0K     0K     0K     0%    /etc/mnttab
swap                   465M   312K   465M     1%    /etc/svc/volatile
objfs                    0K     0K     0K     0%    /system/object
sharefs                  0K     0K     0K     0%    /etc/dfs/sharetab
/usr/lib/libc/libc_hwcap1.so.1
                       4.1G   3.6G   508M    88%    /lib/libc.so.1
fd                       0K     0K     0K     0%    /dev/fd
swap                   466M   744K   465M     1%    /tmp
swap                   465M    40K   465M     1%    /var/run
rpool/export           7.3G    19K   508M     1%    /export
rpool/export/home      7.3G   1.5G   508M    75%    /export/home
rpool                  7.3G    60K   508M     1%    /rpool
rpool/ROOT             7.3G    18K   508M     1%    /rpool/ROOT
opt                    7.8G   1.0G   6.8G    14%    /opt

I’ll store my data in a directory on /opt. To help demonstrate some of the basic replication stuff, I have other things stored in /opt as well:

total 17
drwxr-xr-x  31 root     bin           50 Jul 21 07:32 DTT/
drwxr-xr-x   4 root     bin            5 Jul 21 07:32 SUNWmlib/
drwxr-xr-x  14 root     sys           16 Nov  5 09:56 SUNWspro/
drwxrwxrwx  19 1000     1000          40 Nov  6 19:16 emacs-22.1/
lrwxrwxrwx   1 root     root          48 Nov  5 09:56 uninstall_Sun_Studio_12.class -> SUNWspro/installer/uninstall_Sun_Studio_12.class

To create a snapshot of the filesystem, you use zfs snapshot, and then specify the pool and the snapshot name:

# zfs snapshot opt@snap1

To get a list of snapshots you’ve already taken:

# zfs list -t snapshot
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
opt@snap1                                       0      -  1.03G  -
rpool@install                               19.5K      -    55K  -
rpool/ROOT@install                            15K      -    18K  -
rpool/ROOT/opensolaris-1@install            59.8M      -  2.22G  -
rpool/ROOT/opensolaris-1@opensolaris-1       100M      -  2.29G  -
rpool/ROOT/opensolaris-1/opt@install            0      -  3.61M  -
rpool/ROOT/opensolaris-1/opt@opensolaris-1      0      -  3.61M  -
rpool/export@install                          15K      -    19K  -
rpool/export/home@install                     20K      -    21K  -

The snapshots themselves are stored within the filesystem metadata, and the space required to keep them will vary over time because of the way the snapshots are created. The initial creation of a snapshot is really quick, because instead of taking an entire copy of the data and metadata required to hold the entire snapshot, ZFS merely records the point in time and the metadata of when the snapshot was created.

As you make more changes to the original filesystem, the size of the snapshot increases, because more space is required to keep the record of the old blocks. Furthermore, if you create lots of snapshots, say one per day, and then delete the snapshots from earlier in the week, the size of the newer snapshots may also increase, as the changes that make up the newer state have to be included in the more recent snapshots, rather than being spread over the seven snapshots that make up the week. The result is that creating snapshots is generally very fast, and storing snapshots is very efficient. As an example, creating a snapshot of a 40GB filesystem takes less than 20ms on my machine.

The only issue, from a backup perspective, is that snapshots exist within the confines of the original filesystem. To get a snapshot out into a format that you can copy to another filesystem, tape, etc., you use the zfs send command to create a stream version of the snapshot. For example, to write out the snapshot to a file:

# zfs send opt@snap1 >/backup/opt-snap1

Or tape, if you are still using it:

# zfs send opt@snap1 >/dev/rmt/0

You can also write out the incremental changes between two snapshots using zfs send:

# zfs send -i opt@snap1 opt@snap2 >/backup/opt-changes

To recover a snapshot, you use zfs recv, which applies the snapshot information either to a new filesystem or to an existing one. I’ll skip the demo of this for the moment, because it will make more sense in the context of what we’ll do next. Both zfs send and zfs recv work on streams of the snapshot information, in the same way as cat or sed do. We’ve already seen some examples of that when we used standard redirection to write the information out to a file. Because they are stream based, you can use them to replicate information from one system to another by combining zfs send, ssh, and zfs recv. For example, let’s say I’ve created a snapshot of my opt filesystem and want to copy that data to a new system, into a pool called slavepool:

# zfs send opt@snap1 |ssh mc@slave pfexec zfs recv -F slavepool

The first part, zfs send opt@snap1, streams the snapshot; the second, ssh mc@slave, pipes the stream to the slave machine over SSH; and the third, pfexec zfs recv -F slavepool, receives the streamed snapshot data and writes it to slavepool. In this instance, I’ve specified the -F option, which forces the snapshot data to be applied and is therefore destructive. This is fine, as I’m creating the first version of my replicated filesystem. On the slave machine, if I look at the replicated filesystem:

# ls -al /slavepool/
total 23
drwxr-xr-x   6 root     root           7 Nov  8 09:13 ./
drwxr-xr-x  29 root     root          34 Nov  9 07:06 ../
drwxr-xr-x  31 root     bin           50 Jul 21 07:32 DTT/
drwxr-xr-x   4 root     bin            5 Jul 21 07:32 SUNWmlib/
drwxr-xr-x  14 root     sys           16 Nov  5 09:56 SUNWspro/
drwxrwxrwx  19 1000     1000          40 Nov  6 19:16 emacs-22.1/
lrwxrwxrwx   1 root     root          48 Nov  5 09:56 uninstall_Sun_Studio_12.class -> SUNWspro/installer/uninstall_Sun_Studio_12.class

Wow – that looks familiar! Once you’ve taken that initial snapshot, synchronizing the filesystem again just means creating a new snapshot and using the incremental feature of zfs send to send the changes over to the slave machine:

# zfs send -i opt@snap1 opt@snap2 |ssh mc@192.168.0.93 pfexec zfs recv slavepool

Actually, this operation will fail. The reason is that the filesystem on the slave machine can currently be modified, and you can’t apply the incremental changes to a destination filesystem that has changed. What’s changed? The metadata about the filesystem, like the last time it was accessed – in this case, it will have been our ls that caused the problem. To fix that, set the filesystem on the slave to be read-only:

# zfs set readonly=on slavepool

Setting readonly means that we can’t change the filesystem on the slave by normal means – that is, I can’t change the files or the metadata (modification times and so on). It also means that operations that would normally update metadata (like our ls) silently perform their function without attempting to update the filesystem state. In essence, our slave filesystem is nothing but a static copy of our original filesystem. However, even when set to readonly, a filesystem can have snapshots applied to it. Now that it’s read-only, re-run the initial copy:

# zfs send opt@snap1 |ssh mc@slave pfexec zfs recv -F slavepool

Now we can make changes to the original and replicate them over. Since we’re dealing with MySQL, let’s initialize a database on the original pool. I’ve updated the configuration file to use /opt/mysql-data as the data directory, and now I can initialize the tables:

# mysql_install_db --defaults-file=/etc/mysql/5.0/my.cnf --user=mysql
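
For reference, the data-directory change amounts to a single setting in the configuration file; something like the following excerpt (a sketch only – the surrounding options depend on your installation):

# /etc/mysql/5.0/my.cnf (excerpt)
[mysqld]
datadir = /opt/mysql-data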

Now, we can synchronize the information to our slave machine and filesystem by creating another snapshot and then doing an incremental zfs send:

# zfs snapshot opt@snap2

Just to demonstrate the efficiency of the snapshots, the size of the data created during initialization is 39K:

# du -sh /opt/mysql-data/
  39K        /opt/mysql-data

If I check the size used by the snapshots:

# zfs list -t snapshot
NAME                                         USED  AVAIL  REFER  MOUNTPOINT
opt@snap1                                     47K      -  1.03G  -
opt@snap2                                       0      -  1.05G  -

The size of the snapshot is 47K. Note, by the way, that it is 47K for snap1 because, right now, snap2 should be more or less identical to the current filesystem state. Now, let’s synchronize this over:

# zfs send -i opt@snap1 opt@snap2|ssh mc@192.168.0.93 pfexec zfs recv slavepool

Note that we don’t have to force the operation this time – we’re synchronizing the incremental changes between what are now identical filesystems, just on different systems. And to double-check that the slave has it:

# ls -al /slavepool/mysql-data/

Now we can start up MySQL, create some data, and then synchronize the information over again, replicating the changes. To do that, you have to create a new snapshot, then do the send/recv to the slave to synchronize the changes. The rate at which you do it is entirely up to you, but keep in mind that if you have a lot of changes, then doing it as frequently as once a minute may lead to your data falling behind because of the time taken to transfer the filesystem changes over the network – taking a snapshot with MySQL running in the background still takes comparatively little time. To demonstrate that, here’s the time taken to create a snapshot mid-way through a 4-million-row insert into an InnoDB table:

# time zfs snapshot opt@snap3
real    0m0.142s
user    0m0.006s
sys     0m0.027s

I told you it was quick :) However, the send/recv operation took a few minutes to complete, with about 212MB of data transferred over a very slow network connection, while the machine was busy writing those additional records.

Ideally you want to set up a simple script that will handle that sort of snapshot/replication for you and run it from cron (a rough sketch of such a script appears at the end of this post). You might also want to try ready-made tools like Tim Foster’s ZFS replication tool, which you can find out about here. Tim’s system works through SMF to handle the replication and is very configurable; it even handles automatic deletion of old, synchronized snapshots.

Of course, all of this is useless unless, once replicated from one machine to another, we can actually use the databases. Let’s assume that there was a failure and we need to fail over to the slave machine. To do that:

  1. Stop the script on the master, if it’s still up and running.
  2. Set the slave filesystem to be read/write:
    # zfs set readonly=off slavepool
  3. Start up mysqld on the slave. If you are using InnoDB, Falcon or Maria you should get auto-recovery, if it’s needed, to make sure the table data is correct, as shown here when I started up from our mid-INSERT snapshot:
    InnoDB: The log sequence number in ibdata files does not match
    InnoDB: the log sequence number in the ib_logfiles!
    081109 15:59:59  InnoDB: Database was not shut down normally!
    InnoDB: Starting crash recovery.
    InnoDB: Reading tablespace information from the .ibd files...
    InnoDB: Restoring possible half-written data pages from the doublewrite
    InnoDB: buffer...
    081109 16:00:03  InnoDB: Started; log sequence number 0 1142807951
    081109 16:00:03 [Note] /slavepool/mysql-5.0.67-solaris10-i386/bin/mysqld: ready for connections.
    Version: '5.0.67'  socket: '/tmp/mysql.sock'  port: 3306  MySQL Community Server (GPL)

Yay – we’re back up and running. With MyISAM, or other table types, you need to run REPAIR TABLE, and you might even have lost some information, but it should be minor. The point is that a mid-INSERT ZFS snapshot, combined with replication, could be a good way of supporting a hot backup of your system on Mac OS X or Solaris/OpenSolaris. Probably the most critical part is finding the sweet spot between the snapshot replication time and how up to date you want to be in a failure situation. It’s also worth pointing out that you can replicate to as many different hosts as you like, so if you wanted to replicate your ZFS data to two or three hosts, you could.
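
Finally, here is the kind of cron-driven replication script I mentioned above. This is strictly a sketch: the pool name, slave address, snapshot-naming scheme and state file are assumptions based on the examples in this post, and Tim Foster’s tool remains the more robust option:

#!/bin/bash
# Sketch: snapshot the master filesystem and send the incremental changes
# to the slave. Assumes the slave pool was seeded with an initial full
# zfs send | zfs recv, has readonly=on set, and that the name of the last
# replicated snapshot has been written to the state file (e.g. "opt@snap1").
set -e

POOL=opt                        # filesystem to replicate
SLAVE=mc@192.168.0.93           # SSH target for the slave machine
SLAVEPOOL=slavepool             # destination filesystem on the slave
STATE=/var/run/zfs-rep-last     # records the last snapshot sent

LAST=$(cat "$STATE")
NEW="$POOL@rep-$(date +%Y%m%d%H%M%S)"

# Take a new snapshot on the master (needs ZFS privileges, e.g. root's crontab).
zfs snapshot "$NEW"

# Send only the changes since the last replicated snapshot.
zfs send -i "$LAST" "$NEW" | ssh "$SLAVE" pfexec zfs recv "$SLAVEPOOL"

# Remember the snapshot we just sent, ready for the next run.
echo "$NEW" > "$STATE"

Run from cron every few minutes, something like that gives you a rolling, near-line copy of the master; how often you run it is exactly the sweet-spot trade-off discussed above.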

Using the MySQL Doc source tree

I’ve mentioned a number of times that the documentation repositories that we use to build the docs are freely available, and so they are, but how do you go about using them? More and more people are getting interested in being able to work with the MySQL docs, judging by the queries we get, and internally we sometimes get specialized requests. There are some limitations – although you can download the docs and generate your own versions in various formats, you are not allowed to distribute or supply that information; it can only be used for personal purposes. The reasons and the disclaimer for that are available on the main page for each of the docs, such as the one on the 5.1 Manual. Those issues aside, if you want to use and generate your own docs from the Subversion source tree then you’ll need the following:

  • Subversion to download the sources
  • XML processors to convert the DocBook XML into various target formats; we include DocBook XML/XSLT files you’ll need.
  • Perl for some of the checking scripts and the ID mapping parts of the build process
  • Apache’s FOP if you want to generate PDFs; if not, you can ignore it.
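
A quick way to confirm those tools are actually on your path before you start (a sketch only – exact version requirements aside):

$ svn --version --quiet
$ xsltproc --version
$ perl -v
$ fop -version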

To get you started, you must download the DocBook XML source from the public Subversion repository. We recently split the single Subversion tree containing the English-language version into two different repositories, one containing the pure content, and the other the tools required to build the docs. The reason for that is consistency across all of our repositories, internally and externally, for the reference manual in all its different versions. Therefore, to get started, you need both repositories, and you need to check them out into the same directory:

$ svn checkout http://svn.mysql.com/svnpublic/mysqldoc
$ svn checkout http://svn.mysql.com/svnpublic/mysqldoc-toolset

Assuming you have downloaded the XML toolkit already, make sure you have the necessary Perl modules installed. You’ll need the Expat library, and the following Perl modules:

  • Digest::MD5
  • XML::Parser::PerlSAX
  • IO::File
  • IO::String
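
A quick way to check whether the modules are already installed is to try loading each one from the command line; each command stays silent if the module is present and complains if it is not (a sketch):

$ perl -MDigest::MD5 -e 1
$ perl -MXML::Parser::PerlSAX -e 1
$ perl -MIO::File -e 1
$ perl -MIO::String -e 1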

If you have CPAN installed, you can install any missing modules automatically using perl -MCPAN -e 'install modulename', or use your respective package management system to install the modules for you. You’ll also get an error message during the build if something is missing.

OK, with everything in place you are ready to try building the documentation. You can change into most directories and convert the XML files there into a final document. For example, to build the Workbench documentation, change into the workbench directory. We use make to build the various files and dependencies. To build the full Workbench documentation, specify the main file, workbench, as the target, and the file format you want to produce as the extension. For example, to build a single HTML file, the extension is html. I’ve included the full output here so that you can see the exact output you will get:

make workbench.html
set -e;
../../mysqldoc-toolset/tools/dynxml-parser.pl
  --infile=news-workbench-core.xml --outfile=dynxml-local-news-workbench.xml-tmp-$$ --srcdir=../dynamic-docs --srclangdir=../dynamic-docs;
mv dynxml-local-news-workbench.xml-tmp-$$ dynxml-local-news-workbench.xml
make -C ../refman-5.1 metadata/introduction.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en introduction.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/partitioning.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en partitioning.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/se-merge.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en se-merge.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/se-myisam-core.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en se-myisam-core.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../refman-5.1 metadata/sql-syntax-data-definition.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/idmap.pl refman/5.1/en sql-syntax-data-definition.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
make -C ../workbench metadata/documenting-database.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en documenting-database.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/foreign-key-relationships.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en foreign-key-relationships.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/forward-engineering.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en forward-engineering.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/grt-shell.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en grt-shell.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/images.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en images.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/installing.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en installing.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/layers.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en layers.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/notes.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en notes.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/plugins.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en plugins.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/printing.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en printing.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/reference.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en reference.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/reverse-engineering.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en reverse-engineering.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/server-connection-wizard.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en server-connection-wizard.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/stored-procedures.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en stored-procedures.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/tables.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en tables.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/text-objects.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en text-objects.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/tutorial.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en tutorial.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/validation-plugins.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en validation-plugins.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
make -C ../workbench metadata/views.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
../../mysqldoc-toolset/tools/idmap.pl workbench//en views.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/workbench'
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid
  --stringparam repository.revision "`../../mysqldoc-toolset/tools/get-svn-revision`"
  --param map.remark.to.para 0
  --stringparam qandaset.style ""
  ../../mysqldoc-toolset/xsl.d/dbk-prep.xsl workbench.xml > workbench-prepped.xml.tmp2
../../mysqldoc-toolset/tools/bug-prep.pl workbench-prepped.xml.tmp
../../mysqldoc-toolset/tools/idremap.pl --srcpath="../workbench ../gui-common ../refman-5.1 ../refman-common ../refman-5.0" --prefix="workbench-" workbench-prepped.xml.tmp > workbench-prepped.xml.tmp2
mv workbench-prepped.xml.tmp2 workbench-prepped.xml
rm -f workbench-prepped.xml.tmp
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid
  --stringparam l10n.gentext.default.language en
  --output workbench.html-tmp
  ../../mysqldoc-toolset/xsl.d/mysql-html.xsl
  workbench-prepped.xml
../../mysqldoc-toolset/tools/add-index-navlinks.pl workbench.html-tmp
mv workbench.html-tmp workbench.html

There’s a lot in the output above, and I’ll describe the content as best I can without going into too much detail in this piece.

First off, make triggers some dependencies, which are the creation of a number of ‘IDMap’ files. These files contain information about the content of the files and are used to help produce valid links into other parts of the documentation. I’ll talk about ID mapping more in a later post. The next stage is to build the ‘prepped’ version of the documentation, which combines all of the individual files into one large file and does some pre-processing to ensure that we get the output that we want. The next is the remapping. This uses the IDMap information built in the first stage and ensures that any links in the documentation that go to a document we know about, like the reference manual, point to the correct online location. It is the ID mapping (and remapping) that allows us to effectively link between documents (such as the Workbench docs and the Refman) without us having to worry about creating a complex URL link. Instead, we just include a link to the correct ID within the other document and let the ID mapping system do the rest. The final stage takes our prepped, remapped DocBook XML source and converts it into the final output (HTML in this case) using the standard DocBook XSL templates.

One of the benefits of using make is that, because we build the different stages of the process separately, when we build another target we don’t have to repeat the full process. For example, to build a PDF version of the same document, the prepping, remapping and other stages are fundamentally the same, which is why we keep the intermediate file, workbench-prepped.xml. Building the PDF only requires us to build the FO (Formatting Objects) output, and then use fop to turn this into PDF:

$ make workbench.pdf
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid
  --output - ../../mysqldoc-toolset/xsl.d/strip-remarks.xsl workbench-prepped.xml
  | XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid
  --stringparam l10n.gentext.default.language en
  --output workbench.fo-tmp ../../mysqldoc-toolset/xsl.d/mysql-fo.xsl -
Making portrait pages on USletter paper (8.5inx11in)
mv workbench.fo-tmp workbench.fo
set -e;
if [ -f ../../mysqldoc-toolset/xsl.d/userconfig.xml ]; then
  ../../mysqldoc-toolset/tools/fixup-multibyte.pl workbench.fo workbench.fo.multibyte;
  mv workbench.fo.multibyte workbench.fo;
  fop -q -c ../../mysqldoc-toolset/xsl.d/userconfig.xml workbench.fo workbench.pdf-tmp > workbench.pdf-err;
else
  fop -q workbench.fo workbench.pdf-tmp > workbench.pdf-err;
fi
mv workbench.pdf-tmp workbench.pdf
sed -e '/hyphenation/d'

You can see in this output that the prepping and remapping processes don’t even take place – the process immediately converts the prepped file into FO and then calls fop. That completes our whirlwind tour of the basics of building the MySQL documentation; I’ll look at some more detailed aspects of the process in future blog posts. Until then, you might want to read our metadocs on the internals in the MySQL Guide to MySQL Documentation.