Feels like a long time ago, but my talk at the MySQL Users Conference back in April 2009 on running MySQL multiple times to get better performance is now available on YouTube. The original PDF of the presentation is available here. View it here: YouTube – Improving Performance by Running MySQL Multiple Times
I know, I know, loads of people have been waiting for these… So here we go: I’ve finally sorted out a downloadable version of the Dojo examples from the presentation I gave at the MySQL Users Conference 2009. There are three examples:
- The auto-paging table example, which uses the functionality of the Dojo Toolkit and the QueryReadStore to automatically load content from a table.
- The basic graphing example, which loads data dynamically and plots a graph.
- The zooming version of the same basic graph interface.
There’s a README in the download that contains instructions on getting everything up to speed, although it should be fairly obvious. It’s attached to the bottom of this post too. Any questions about getting this to work? Please go ahead and ask! Download the package here.
UC2009 Working with MySQL and Dojo
==================================

MC Brown, 2009, http://mcslp.com and http://coalface.mcslp.com

Components:

There are three examples here:

- Dojo table with auto-paging (table_autopage.html and table_autopage.cgi)
- Dojo Basic Graph (graph_basic.html and graph_ajax.cgi)
- Dojo Zooming Graph (graph.html and graph_ajaz_zoom.cgi)

You will also need the Dojo/Dijit and DojoX toolkit package from here:

http://www.dojotoolkit.org/downloads

Download the combined package, then extract the contents and place them in the same directory as the CGI scripts.

Instructions for use:

For the table example, you can create a table (or use an existing database) from WordPress using the wp_posts table. In practice, to be compatible with the example without any further modification, you need only create and populate a table with the following columns:

id (int)
user_nicename (char)
post_date (datetime)
post_title (char)

For the graphing examples, you need a table as follows:

CREATE TABLE `currencies` (
  `currencyid` bigint(20) unsigned NOT NULL auto_increment,
  `currency` char(20) default NULL,
  `value` float default NULL,
  `datetime` int(11) default NULL,
  `new` int(11) default NULL,
  PRIMARY KEY (`currencyid`),
  UNIQUE KEY `currencyid` (`currencyid`),
  KEY `currencies_currency` (`currency`,`datetime`),
  KEY `currencies_datetime` (`datetime`)
) ENGINE=MyISAM AUTO_INCREMENT=170048 DEFAULT CHARSET=latin1;

Then populate it with date/value data according to the information you want to store.

Once done, make sure you change the database information in the CGI scripts so that the data is read correctly during operation.
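As a quick sketch of the table-example setup described in the README, the following writes the minimal schema to a file and shows how you might load it. The database name `dojo_demo`, the table name `posts`, the column sizes, and the sample row are all placeholders of my choosing, not part of the shipped examples; adjust them to match what your CGI scripts expect.

```shell
#!/bin/sh
# Write the minimal schema for the auto-paging table example to a file.
# Only the four columns listed in the README are required; the sizes for
# the char columns here are arbitrary assumptions.
cat > table_autopage.sql <<'EOF'
CREATE TABLE posts (
  id            INT NOT NULL AUTO_INCREMENT,
  user_nicename CHAR(50),
  post_date     DATETIME,
  post_title    CHAR(200),
  PRIMARY KEY (id)
);
INSERT INTO posts (user_nicename, post_date, post_title)
  VALUES ('mc', NOW(), 'A sample post');
EOF
grep -c 'CREATE TABLE' table_autopage.sql   # → 1
# Load it with placeholder credentials and database name:
#   mysql -u USER -p dojo_demo < table_autopage.sql
```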
I continue to try to find new ways to make it easier for people to get to the right place in the documentation for the information they are looking for. This week, I’ve added some experimental custom indexes. These use information from the ID mapping process that we use in many parts of the documentation to do some interesting things. The indexes are an extension of the other areas where we already provide summaries (like the list of options/variables, or functions). Those summaries provide quick access to the definition of a function, but no links to where that function might be mentioned elsewhere in the documentation. With the indexes, we’ve extended that so that you can now see all of the places within the reference manual where we mention a specific function. This goes beyond the standard index (which relies on us adding specific tags), and simply lists everywhere that we mention a specific option.

Currently we only provide the new indexes in the 5.1 manual, in all formats, including online, downloadable and PDF. The new indexes are:
- Standard Index – a version of the standard index, with some minor improvements, extended see-also support, and deeper index links.
- C Function Index – indexes all of the C functions used in the MySQL C API.
- Command Index – an index of all the commands (mysqladmin and so on). We are aware of a few aberrations in the index.
- Function Index – an index of all the SQL functions we support.
- INFORMATION_SCHEMA Index – points to all of the uses of the INFORMATION_SCHEMA tables.
- Transaction Isolation Level Index – pointers to all the places we quote different isolation levels.
- JOIN Types Index – examples of the different join types supported in SQL statements.
- Operator Index – SQL operators.
- Option Index – options, both for the server, and client commands.
- Privileges Index – all the different privileges supported through the GRANT statement.
- SQL Modes Index – where we quote different SQL modes.
- Status Variable Index – pointers to all the status variables.
- Statement/Syntax Index – where we use or provide examples of different SQL syntax and statements.
- System Variable Index – pointers to all the different system variables.
I am of course inviting comments and input to find out how useful you find this information. Assuming nobody has any objections, we’ll start rolling the indexes out to the other reference manuals and documents. Are there any other types of information presented in an index that our readers would like to see? If so, let me know!
If you’ve attended just one of my recent talks, whether at the UC, LOSUG or MySQL University, you should know that MySQL 5.1.30 will be in the next official drop of OpenSolaris. In fact, you can find MySQL 5.1 in the current pre-release builds – I just downloaded build 111 of the future 2009.06 release. Key things about the new MySQL 5.1 in OpenSolaris:
- Contains the same set of DTrace probes that exists in MySQL 5.4 (see DTrace Documentation)
- Like the 5.0 packages, we have SMF integration, so you can start, stop, monitor and change some of the core configuration through SMF
- Directory layout is similar to 5.0, with a version-specific directory (/usr/mysql/5.1); the two can coexist if you want to handle a migration from 5.0 to 5.1
To install MySQL 5.1, use the pkg tool. We’ve split the components into three packages:
- SUNWmysql51 contains the server and client binaries, and associated scripts.
- SUNWmysql51libs contains the client libraries, which you’ll need for external tools that use the MySQL client API (like DBD::mysql for Perl connectivity).
- SUNWmysql51test contains the MySQL test suite.

To install the main package:
$ pfexec pkg install SUNWmysql51
Once installed, you can start MySQL server through SMF:
$ pfexec svcadm enable mysql:version_51
You can set properties, like the data directory, by using svccfg:
$ svccfg
svc:> select mysql:version_51
svc:/application/database/mysql:version_51> setprop mysql/data=/data0/mysql
svc:/application/database/mysql:version_51> setprop mysql/enable_64bit=1
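One step that’s easy to forget after setting properties with svccfg: the service has to be refreshed so it re-reads the repository, and then restarted so mysqld picks up the new values. Here’s a dry-run sketch – the leading `echo` just prints each command so it’s safe to run anywhere; drop it to execute for real on an OpenSolaris host.

```shell
#!/bin/sh
# Dry-run sketch: push newly set svccfg properties live.
# Drop the leading 'echo' to execute on a real OpenSolaris host.
svc="mysql:version_51"
echo "pfexec svcadm refresh $svc"    # re-read the property repository
echo "pfexec svcadm restart $svc"    # restart mysqld with the new settings
echo "svcs -l $svc"                  # confirm the service came back online
```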
Any questions, please ask!
We got an interesting question on the documentation list the other day, which basically asked whether we provide a service that lists changes to specific pages in the manual. While I like the idea of such a service, the mechanics of making it work are very difficult. To start with, and to reiterate something I have to explain again and again:
the MySQL documentation is rebuilt up to 10 times *every* day
We don’t have set schedules for when we release changes. We don’t bundle changes up and then produce a new reference manual on a set day of the month or week. If I make a change to the documentation right now, there is every chance that you would see that change in the live docs at http://dev.mysql.com/doc within 3 hours, and for the PDF format, possibly even within the hour. Yep, it happens that quickly. And that happens every day of the year – even at weekends and on holidays.

It’s also worth pointing out some statistics about the documentation to help explain why such a system might be impractical, though not impossible. Our current mysqldoc repository, in existence since we made the move to DocBook, has just hit revision 14955, which means that over the last 3 years and 7 months we’ve made on average 11.45 commits every single day. On some days we make many more than that; sometimes those changes are minor (like a typo correction), and other times they are a huge reorganization or rewrite.

But therein lies the other problem with any kind of monitoring of changes – our documentation is big and complex. Any ‘page watcher’ to notify you of changes would be limited by the fact that we split the reference manual into just over 2,000 different HTML pages. And of course we do that for each manual (4.1, 5.0, 5.1, 5.1-maria, 5.4, and 6.0). For those keeping score, that’s about 12,000 different HTML pages for the reference manual alone. If you want to talk printed pages (and why not), the Letter-sized PDF manuals for the main documents in English (reference manuals, GUI docs, the MySQL version reference – 15 docs in all) constitute about 17,300 pages of content. We actually create 91 different documents from our English-language repository, and 234 documents across all languages, in up to 14 different output types (PDF, HTML, texinfo, txt, etc.).

So to return to the original question, would a page monitoring service be possible? Probably. Would it be useful?
I’m in two minds about this. I can see the value of having a way to see changes, but if what you want is to monitor changes in behavior, then most of the time that is documented in the changelog, and we have a number of different outputs available for viewing the changelog information in more useful ways. Other, more minor changes – and in some cases major rewrites – wouldn’t feature in the changelog (because they don’t relate to a change in the product we are documenting). Whether you would find such changes useful is up to you. A rewrite normally means we’ve decided to change the content to make it clearer, or we’ve reorganized the information to make it flow better. Again, it doesn’t necessarily constitute a change to the product that wouldn’t otherwise be in the changelog.

Would it be practical? Probably not – the sheer size of our documentation means that just providing the data on the changes would probably double the number of pages, and it wouldn’t always be clear whether a change was just a typo correction or a major rewrite. And if we change a page ID (which we do, from time to time), it would be difficult to correctly identify changes across different pages that really had the same content.

Of course, I’m open to suggestions here, and we are doing things to further improve and enhance the content, so maybe this is something we will consider. But I’m open to input on whether this is needed.
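Incidentally, the commit-rate figure quoted above is easy to sanity-check: 3 years and 7 months is roughly 1,305 days, and 14,955 revisions over that span works out to just over 11 commits a day.

```shell
#!/bin/sh
# Rough check of the commits-per-day figure quoted above.
revisions=14955
days=$(( (3 * 365) + (7 * 30) ))   # 3 years 7 months, ignoring leap days
rate=$(awk -v r="$revisions" -v d="$days" 'BEGIN { printf "%.2f", r / d }')
echo "$rate commits/day"           # → 11.46 commits/day
```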
Sorry for the (relatively) short notice, but I will be talking at Sun’s CommunityOne conference in San Francisco on June 1st. I’ll be talking about, and demonstrating, the DTrace probes we have put into MySQL, in a joint presentation with Robert Lor, who will be doing the same for Postgres. Our presentation is on the Monday afternoon. Check out the CommunityOne West conference site for more details and registration.
OK, so this has now bitten me twice on a new install. Basically, put Solaris 10u5 on certain machines, and it will work fine until you edit /etc/vfstab and forget to add a terminating newline to one of your entries. Upon reboot you will get something like this:
Error: svc:/system/filesystem/root:default failed to mount /boot (see 'svcs -x' for details)
[ system/filesystem/root:default failed fatally (see 'svcs -x' for details) ]
Requesting System Maintenance Mode
Console login service(s) cannot run

At the top of the output from svcs -x, you’ll see:

svc:/system/filesystem/root:default (root file system mount)
Reason: Start method exited with $SMF_EXIT_ERR_FATAL
   See: http://sun.com/msg/SMF-8000-KS
   See: /etc/svc/volatile/system-filesystem-root:default.log
Impact: 44 dependent services are not running. (use -v for list.)

The problem is that missing newline, which means the mount table is never parsed correctly. To fix it, enter your root password to get into maintenance mode. You’ll first need to remount the root filesystem read/write:
# mount -o rw,remount /

And then add that offending missing line end:
# echo >> /etc/vfstab

Be careful with that second line; miss a > symbol and you’ll wipe out your vfstab altogether. Now reboot, and things should be back to normal.
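To avoid a repeat, you can check for the trailing newline before you reboot: if the last byte of the file isn’t a newline, append one. The sketch below demonstrates the check on a scratch file with a made-up vfstab entry; point it at /etc/vfstab on a real system.

```shell
#!/bin/sh
# Check that a vfstab-style file ends with a newline, appending one if not.
# Entries without a terminating newline break the mount-table parse at boot.
# Demonstrated on a scratch file with an invented entry.
f=$(mktemp)
printf '/dev/dsk/c0t0d0s7 - /export ufs - yes -' > "$f"   # no trailing newline
if [ -n "$(tail -c 1 "$f")" ]; then
  echo "missing trailing newline - appending one"
  echo >> "$f"    # note the double >>; a single > would truncate the file
fi
last=$(tail -c 1 "$f")    # empty once the file ends in a newline
rm -f "$f"
```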
I’ve just received a copy of Cloud Application Architectures by George Reese for review, and on my first glance through it this morning I have been very impressed by what I’ve read. It’s written in a nice conversational style, and so far the technical material I have glanced over has been very cleanly laid out. I’ll continue reading and hope to have a full review up soon.
The presentation file for the Dojo toolkit presentation at the Users Conference is now available. You can find it on the conference session page. I’ll be uploading the example scripts used to produce the examples I gave once I’m back in the office after the conference.