Using the Continuent Docs

As you may have noticed, the Continuent documentation is reaching a good critical mass. The content of the documentation is always the most important consideration. Secondary to that is making sure the information can be found, and that while reading you can hover and click to reach related material, so that you understand the content being provided even better.

We’ve got a few different solutions and tips that I think are worth highlighting so that people can use the documentation more effectively.

Searching

When you want to look for something in the documentation, use the search bar right up at the top. The search is available both on the Documentation Library page and within individual documents.

[Screenshot: the search bar at the top of the documentation page]

When used on the Documentation Library page, search shows you potential matches across all the documentation for the word or item you are searching for; for example, here I've searched for FAQ. Entries are ranked by manual according to release:

[Screenshot: search results for 'FAQ' on the Documentation Library page]

When searching within a document, you are shown the matches within that document first, followed by matches within other documents:

[Screenshot: in-document search results, with matches from other documents below]

The search content itself is heavily indexed and designed so that the right item should appear first in the list.

It works both on broad terms, for example Filters, and on commands, command-line arguments, and options within a typical command. For example, type 'trepctl status' and you will get not only the key command, but all its derivatives. Type in an option, such as '-at-event', and you'll get the explicit entry for that item.

[Screenshot: search results for the '-at-event' option]

Note that the search is very deliberately not a free-text search. This is to ensure that you get to exactly the right page, rather than all the pages that might mention ‘trepctl status’.
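Conceptually, this behaves more like a keyed lookup than a text search. The following Python sketch illustrates the idea only; the index keys and page names are invented for illustration and are not Continuent's actual implementation:

```python
# Hypothetical sketch: an exact-match documentation index, as opposed to
# free-text search. Each key is a complete command, option, or term, and
# maps to the single canonical page for that item.
index = {
    "trepctl": "cmdline-tools-trepctl.html",
    "trepctl status": "cmdline-tools-trepctl-status.html",
    "-at-event": "cmdline-tools-trepctl-options.html#at-event",
}

def lookup(query):
    """Return the canonical page for an exact query, plus any longer
    entries that extend it (e.g. derivative commands)."""
    exact = index.get(query)
    related = sorted(k for k in index if k.startswith(query) and k != query)
    return exact, related

page, derivatives = lookup("trepctl")
# 'trepctl' resolves to one canonical page; 'trepctl status' is a derivative
```

Because every key is a complete item rather than a bag of words, a query such as 'trepctl status' lands on exactly one page instead of every page that mentions it.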

Hover Highlights

When reading the documentation you might come across some terms or information that you are not familiar with. In this case, hover over the item and you’ll get a definition.

[Screenshot: hovering over a term shows its definition]

Click the highlighted item, and you’ll get taken to the reference page for that specific item.

Deep Linking

I mentioned the mechanics of this process recently; the use case within the documentation is that virtually everything of significance is automatically linked to the right canonical page for the information.

For example, in the image below, there are links to the various ONLINE and OFFLINE states that can be clicked on, and the same is true for nearly all filenames, options, commands, and all combinations thereof.

[Screenshot: linked ONLINE and OFFLINE state names within a page]

Related Pages

In certain sections, the sidebar lists links to other pages that may be useful to the current discussion but are not directly linked from the text itself.

This is supported for related pages:

[Screenshot: related pages listed in the sidebar]

FAQ entries:

[Screenshot: FAQ entries listed in the sidebar]

We don’t have entries yet, but release notes and Error/Cause/Solution (troubleshooting) links are supported too. Note that these links appear only on pages that have the related items.

Table of Contents Navigation

Immediately above the related pages is the basic navigation section. These are divided into:

  • Parent Sections – these are sections at the same level as the current page that you might want to jump to. For example, you can easily jump from Fan-In to Star deployments.
  • Navigate Up – Goes up to the parent section.
  • Chapters – A list of all the chapters and appendices in this manual.

Other Manuals

For each page in each manual we also provide a link to the same page in the other manuals. There are two reasons for this: the first is so that you can compare or jump to differences in other versions of the same manual; the second is so that you can jump between the Tungsten Replicator and Continuent Tungsten manuals if you find yourself on the right page, but in the wrong product manual.

[Screenshot: links to the same page in the other manuals]

So as you can see, there’s a lot more to the docs than just the content (critical though that is), and hopefully this has helped to explain how usable the documentation is and, more importantly, how easy it should be to find the information you need.

Intelligent Linking and Indexing in DocBook

One of the issues I have with DocBook XML is that the links are a little forced and manual. 

By that, I mean that if I have a command, such as trepctl, and use it in a sentence or description, then to link trepctl back to the corresponding trepctl page I have to manually add it like this:

<link linkend="cmdline-tools-trepctl"><command>trepctl</command></link>

Not only is that a mouthful to say, it’s a lot of keys to type. 

So I’ve fixed that. 

What I do instead is add a <remark> block into the documentation for that command-line page:

<section id="cmdline-tools-trepctl">
<title>The trepctl Command</title>
<remark role="index:canonical" condition="command:trepctl"/>

The ‘role’ attribute specifies that this is an index entry, and that it is canonical, i.e., this is the canonical page for <command>trepctl</command>.

The entry itself is what is defined in the ‘condition’ attribute.

This means that when I put <command>trepctl</command>, during post-processing, the command is automatically linked to that page without me having to manually do that. 

You can see the effect of this at the top of this page.

It works for anything, and for longer fragments too, so I can write ‘Use the <command>trepctl status</command> command’, and the post-processing will automatically link to the canonical page for the trepctl status command. On that same page, you can see links to the field names in the output.

This uses an extension to the original index reference format: 

<remark role="index:canonical" condition="parameter:appliedLastSeqno:thl"/>

That third argument to the condition attribute gives a ‘hint’ as to what it might apply to. This means that we can link using a commonly used DocBook element, such as parameter, and tag it to link to this canonical page, just by adding a condition attribute to the parameter element, like this:

<parameter condition="thl">appliedLastSeqno</parameter>

OK, so it’s still long, but it’s less complex than writing out <link> or <xref> elements, and it means that I don’t have to know the ID where the information is held; that’s entirely driven from marking up the content with the canonical index entry.

Finally, and perhaps most importantly, you can go to any of the Continuent documentation pages and type either the partial or full command, and it will take you to the canonical page for that command, option, etc. Not only is the content heavily linked (making it more useful), it is also more easily searchable to exactly the right place.
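To make the two-pass post-processing more concrete, here is a minimal Python sketch of the idea: harvest the canonical <remark> markers, then resolve inline markup (including the hint-qualified form) to the section it should link to. The sample markup mirrors the snippets above, but the code itself is illustrative, not the actual toolchain:

```python
import xml.etree.ElementTree as ET

# Pass 1: harvest canonical index entries from <remark> markers.
# Pass 2: resolve inline markup to the canonical section it should link to.
doc = ET.fromstring("""
<book>
  <section id="cmdline-tools-trepctl">
    <title>The trepctl Command</title>
    <remark role="index:canonical" condition="command:trepctl"/>
    <remark role="index:canonical" condition="parameter:appliedLastSeqno:thl"/>
    <para>Run <command>trepctl</command>; check
      <parameter condition="thl">appliedLastSeqno</parameter>.</para>
  </section>
</book>""")

canonical = {}
for section in doc.iter("section"):
    for remark in section.iter("remark"):
        if remark.get("role") == "index:canonical":
            canonical[remark.get("condition")] = section.get("id")

def linkend_for(elem):
    """Build the index key for an inline element; a condition attribute
    on the element acts as the disambiguating 'hint'."""
    key = "%s:%s" % (elem.tag, elem.text)
    if elem.get("condition"):
        key += ":" + elem.get("condition")
    return canonical.get(key)

for elem in doc.iter():
    if elem.tag in ("command", "parameter") and linkend_for(elem):
        # In the real pipeline, this occurrence would be wrapped in a
        # <link linkend="..."> element during post-processing.
        print(elem.text, "->", linkend_for(elem))
```

The author only marks up content with ordinary DocBook elements; the linkend is discovered from the canonical index at build time.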

Customizing Chunking in DocBook

I love DocBook XML. No, really. But one thing I hate is the way you have to set a global chunking level for your HTML and then live with it. For most documentation, you want to be able to choose whether a section within a chapter gets its own conveniently addressable page, or is combined with its parent into one page to make it easier to read.

For example, consider this page in the Continuent docs. Technically it’s deep enough (I use a default chunking depth of 4) to be chunked, but I want the subsections on one page to make it easier to read.

Custom chunking in DocBook is clunky, so here’s an alternative.

Create a custom copy of your DocBook XSL chunking stylesheet.

Find the main chunk template definition (around line 996).

Add these two lines to the <xsl:choose> block:

<xsl:when test="$node[@condition='nochunk']">0</xsl:when>
<xsl:when test="$node[@condition='forcechunk']">1</xsl:when>

These two override the implied chunking decision based on object type or depth.

Now in your Docbook XML you can choose whether an item should be chunked or not by adding a condition attribute to your section. To chunk it:

<section id="chunkme" condition="forcechunk">...</section>

To prevent a section from being chunked:

<section id="dontchunkme" condition="nochunk">...</section>
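The decision order introduced by those two <xsl:when> lines can be sketched as follows. This hypothetical Python function mirrors the modified template: an explicit condition attribute wins, otherwise the normal depth-based rule applies (simplified here):

```python
# Sketch of the chunking decision after the two overrides are added.
def should_chunk(condition, depth, max_depth=4):
    """An explicit condition attribute overrides the implied,
    depth-based chunking decision."""
    if condition == "nochunk":
        return False       # matches <xsl:when ...'nochunk'>0</xsl:when>
    if condition == "forcechunk":
        return True        # matches <xsl:when ...'forcechunk'>1</xsl:when>
    return depth <= max_depth  # simplified stand-in for the default rule

assert should_chunk("forcechunk", depth=6)   # chunked despite its depth
assert not should_chunk("nochunk", depth=2)  # kept on the parent page
assert should_chunk(None, depth=3)           # default rule applies
```

The ordering matters: because the two <xsl:when> tests come first in the <xsl:choose> block, they short-circuit the stock logic only when a condition attribute is actually present.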

Using the MySQL Doc source tree

I’ve mentioned a number of times that the documentation repositories we use to build the docs are freely available, but how do you go about using them? More and more people are getting interested in working with the MySQL docs, judging by the queries we get, and internally we sometimes get specialized requests.

There are some limitations: although you can download the docs and generate your own versions in various formats, you are not allowed to distribute or supply that information; it can be employed only for personal use. The reasons and disclaimer for that are available on the main page for each of the docs, such as the one on the 5.1 Manual.

Those issues aside, if you want to generate your own docs from the Subversion source tree, you’ll need the following:

  • Subversion to download the sources
  • XML processors to convert the DocBook XML into various target formats; we include the DocBook XML/XSLT files you’ll need.
  • Perl for some of the checking scripts and the ID mapping parts of the build process
  • Apache’s FOP, if you want to generate PDFs; if not, you can skip it.

To get started, you must download the DocBook XML source from the public Subversion repository. We recently split the single Subversion tree containing the English-language version into two repositories: one containing the pure content, and the other the tools required to build the docs. The reason for that is consistency across all of our repositories, internally and externally, for the reference manual in all its different versions. Therefore, to get started, you need both repositories. You should check them out into the same directory:

$ svn checkout
$ svn checkout

Assuming you have downloaded the XML toolkit already, make sure you have the necessary Perl modules installed. You’ll need the Expat library, and the following Perl modules:

  • Digest::MD5
  • XML::Parser::PerlSAX
  • IO::File
  • IO::String

If you have CPAN installed, you can install them automatically using perl -MCPAN -e 'install modulename', or use your package management system to install the modules for you. You’ll get an error message if anything is missing.

OK, with everything in place you are ready to try building the documentation. You can change into most directories and convert the XML files there into a final document. For example, to build the Workbench documentation, change into the Workbench directory.

We use make to build the various files and dependencies. To build the full Workbench documentation, specify the main file, workbench, as the target, and the file format you want to produce as the extension. For example, to build a single HTML file, the extension is html. I’ve included the full output here so that you can see the exact output you will get:

$ make workbench.html
set -e; \
  ... --infile=news-workbench-core.xml --outfile=dynxml-local-news-workbench.xml-tmp-$$ \
  --srcdir=../dynamic-docs --srclangdir=../dynamic-docs
mv dynxml-local-news-workbench.xml-tmp-$$ dynxml-local-news-workbench.xml
make -C ../refman-5.1 metadata/introduction.idmap
make[1]: Entering directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
../../mysqldoc-toolset/tools/ refman/5.1/en introduction.xml
make[1]: Leaving directory `/nfs/mysql-live/mysqldocs/working/Docs/mysqldoc/refman-5.1'
[... the same idmap step repeats for partitioning, se-merge, se-myisam-core,
and sql-syntax-data-definition in ../refman-5.1, and for documenting-database,
foreign-key-relationships, forward-engineering, grt-shell, images, installing,
layers, notes, plugins, printing, reference, reverse-engineering,
server-connection-wizard, stored-procedures, tables, text-objects, tutorial,
validation-plugins, and views in ../workbench ...]
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
  --stringparam repository.revision "`../../mysqldoc-toolset/tools/get-svn-revision`" \
  --param 0 \
  --stringparam "" \
  ../../mysqldoc-toolset/xsl.d/dbk-prep.xsl workbench.xml > workbench-prepped.xml.tmp2
../../mysqldoc-toolset/tools/ workbench-prepped.xml.tmp
../../mysqldoc-toolset/tools/ --srcpath="../workbench ../gui-common ../refman-5.1 ../refman-common ../refman-5.0" \
  --prefix="workbench-" workbench-prepped.xml.tmp > workbench-prepped.xml.tmp2
mv workbench-prepped.xml.tmp2 workbench-prepped.xml
rm -f workbench-prepped.xml.tmp
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
  --stringparam l10n.gentext.default.language en \
  --output workbench.html-tmp \
  workbench-prepped.xml
../../mysqldoc-toolset/tools/ workbench.html-tmp
mv workbench.html-tmp workbench.html

There’s a lot in the output above, and I’ll describe the content as best I can without going into too much detail in this piece.

First off, make triggers some dependencies, which are the creation of a number of ‘IDMap’ files. These files contain information about the content of the files and are used to help produce valid links into other parts of the documentation. I’ll talk about ID mapping more in a later post.

The next stage is to build the ‘prepped’ version of the documentation, which combines all of the individual files into one large file and does some pre-processing to ensure that we get the output that we want.

The next is the remapping. This uses the IDMap information built in the first stage and ensures that any links in the documentation that go to a document we know about, like the reference manual, point to the correct online location. It is the ID mapping (and remapping) that allows us to effectively link between documents (such as the Workbench and Refman) without having to worry about creating a complex URL link. Instead, we just include a link to the correct ID within the other document and let the ID mapping system do the rest.

The final stage takes our prepped, remapped DocBook XML source and converts it into the final output using the standard DocBook XSL templates.

One of the benefits of using make is that because we build different stages in the build process, we don’t have to repeat the full process when we build another target. For example, to build a PDF version of the same document, the prepping, remapping and other stages are fundamentally the same, which is why we keep the file workbench-prepped.xml. Building the PDF only requires us to build the FO (Formatting Objects) output, and then use fop to turn this into PDF:

$ make workbench.pdf
XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
  --output - ../../mysqldoc-toolset/xsl.d/strip-remarks.xsl workbench-prepped.xml \
  | XML_CATALOG_FILES="../../mysqldoc-toolset//catalog.xml" xsltproc --xinclude --novalid \
  --stringparam l10n.gentext.default.language en \
  --output ../../mysqldoc-toolset/xsl.d/mysql-fo.xsl -
Making portrait pages on USletter paper (8.5inx11in)
mv workbench.fo
set -e; \
if [ -f ../../mysqldoc-toolset/xsl.d/userconfig.xml ]; then \
  fop -q -c ../../mysqldoc-toolset/xsl.d/userconfig.xml workbench.pdf-tmp > workbench.pdf-err; \
else \
  fop -q workbench.pdf-tmp > workbench.pdf-err; \
fi
mv workbench.pdf-tmp workbench.pdf
sed -e '/hyphenation/d'

You can see in this output that the prepping and remapping stages don’t even take place; the process immediately converts the prepped file into FO and then calls fop. That completes our whirlwind tour of the basics of building the MySQL documentation. I’ll look at some more detailed aspects of the process in future blog posts. Until then, you might want to read our metadocs on the internals in the MySQL Guide to MySQL Documentation.
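The idmap and remapping stages described earlier can be sketched in a few lines. This is a hypothetical illustration only: the IDs, document names, and example.com URLs are invented, and the real toolset uses its own Perl scripts and file formats:

```python
# Hypothetical sketch of idmap/remap: an 'IDMap' records which document
# (and online location) owns each ID; remapping rewrites cross-document
# links against that table. All entries here are invented examples.
idmap = {
    # id           : (owning document, online base URL -- illustrative)
    "connecting":    ("refman-5.1", "https://example.com/refman/5.1/en/"),
    "wb-tutorial":   ("workbench",  "https://example.com/workbench/en/"),
}

def remap(linkend, current_doc):
    """Turn a linkend into a URL when it lives in another document, so
    the author never writes the cross-document URL by hand."""
    doc, base = idmap[linkend]
    if doc == current_doc:
        return "#" + linkend           # ordinary intra-document link
    return base + linkend + ".html"    # cross-document link via the idmap

print(remap("connecting", current_doc="workbench"))
# prints https://example.com/refman/5.1/en/connecting.html
```

The author writes only the ID; whether it becomes a local anchor or a full URL to another manual is decided entirely by the idmap at build time.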