Continuent at Hadoop Summit

I’m pleased to say that Continuent will be at the Hadoop Summit in San Jose next week (3-5 June). Sadly I will not be attending as I’m taking an exam next week, but my colleagues Robert Hodges, Eero Teerikorpi and Petri Versunen will be there to answer any questions you have about Continuent products, and, of course, Hadoop replication support built into Tungsten Replicator 3.0.

If you are at the conference, please go along and say hi to the team. And, as always, if there are any questions please let them or me know.

Real-Time Data Movement: The Key to Enabling Live Analytics With Hadoop

An article about moving data into Hadoop in real-time has just been published over at DBTA, written by me and my CEO Robert Hodges.

In the article I talk about one of the major issues for anyone deploying databases in the modern heterogeneous world – how do we move and migrate data effectively between entirely different database systems in a way that is efficient and usable? How do you get the data you need into the database where you need it? If your source is a transactional database, how does that data get moved into Hadoop in a way that makes the data usable for querying through Hive, Impala or HBase?

You can read the full article here: Real-Time Data Movement: The Key to Enabling Live Analytics With Hadoop

Cross your Fingers for Tech14, see you at OSCON

So I’ve submitted my talks for the Tech14 UK Oracle User Group conference which is in Liverpool this year. I’m not going to give away the topics, but you can imagine they are going to be about data translation and movement and how to get your various databases talking together.

I can also say, after having seen other submissions for talks this year (as I’m helping to judge), that the conference is shaping up to be very interesting. There’s a good spread of different topics this year, but I know from having talked to the organisers that they are looking for more submissions in the areas of Operating Systems, Engineered Systems and Development (mobile and cloud).

If you’ve got a paper, presentation, or idea for one that you think would be useful, please go ahead and submit your idea.

I’m also pleased to say that I’ll be at OSCON in Oregon in July, handling a Birds of a Feather (BOF) session on the topic of exchanging data between MySQL, Oracle and Hadoop. I’ll be there with my good friend Eric Herman from Booking.com where we’ll be providing advice, guidance, experiences, and hoping to exchange more ideas, wishes and requirements for heterogeneous environments.

It’d be great to meet you if you want to come along to either conference.

Passion for Newspaper Comics? Watch Stripped

I’m a big fan of comics – and although I am a fan of Spiderman, Superman, and my personal favourite, 2000AD – what I’m really talking about is the newspaper comics featuring stars like Garfield, Dilbert, and Calvin and Hobbes.

Unfortunately, being in the UK before the internet existed in its current form, finding these comics, particularly the US ones, was difficult. We don’t have many US comics in UK newspapers, and to be honest, very few papers in the UK carry a good variety of any comics. That made feeding the habit difficult, as I would literally trawl around the humour sections of bookstores to find the books I needed.

Garfield was my first foray into the market, and I bought one of the first books not long after it came out. Then, as I started looking around a little more, I came across others, like Luann and For Better or For Worse, before finding the absolute joy that was Calvin and Hobbes, and ultimately getting hold of Foxtrot, Sherman’s Lagoon and many, many more.

Of course, the Internet has made these hugely accessible, and indeed not only do I read many comics every day, but I very deliberately subscribe (and by that, I mean pay money) to both Comics Kingdom (43 daily comics in my subscription) and GoComics.com (72 daily comics). I also continue to buy the books. Why?

Because at the end of a day of looking at screens and taxing the brain, what I really want to do is chill and read some content that is still intelligent, but not mentally taxing, and that means reading my comic books. They give me a break and a giggle, and I find that a nice way to go to sleep.

The more important reason, though, is because I enjoy these comics and believe these people should be rewarded for their efforts. Honestly, these guys work their laughter muscles harder than most people I know, creating new jokes, every day, that make me laugh. They don’t just do this regularly, or even frequently. They do it *every day*.

As a writer I know how hard it is to create new content every day, and keep it interesting. I cannot imagine how hard it is to keep doing it, and making it funny and enjoyable for people to read.

Over the years, I’ve also bought a variety of interesting things, including the massive Dilbert, Calvin & Hobbes and Far Side collectibles. I own complete collections of all the books for my favourite authors, and I’ve even contacted the authors directly when I haven’t been able to get them from the mighty Amazon. To people like Hilary B Price (Rhymes with Orange), Tony Carillo (F-Minus), Scott Hilburn (The Argyle Sweater), Leigh Rubin (Rubes) and Dave Blazek (Loose Parts), I thank you for your help in feeding my addiction. To Mark Leiknes (of the now defunct Cow & Boy), I thank you for the drawings from your drawing board and notebook, and I’m sorry it didn’t work out.

But to Dave Kellett & Fred Schroeder I owe a debt of special gratitude. Of course, Dave Kellett writes the excellent Sheldon, and not only do I have the full set, but Dave signed them first. I’ve also got one of the limited editions Arthur’s…

But together, they produced the wonderful Stripped! which I funded through Kickstarter along with so many others (you can even see my name in the credits!). If you have any interest in how comics are drawn, where the ideas come from, and how difficult the whole process is, you should watch it. Even more, you should watch it if you want to know what these people look like.

Comic artists are people whose names we mostly don’t know, and even when we do know the name, very few of us ever get to see what they look like. Yet these people are superstars. Really. Think about it: they write the screenplay, direct it, produce it, provide all the special effects, act all the parts, and do all the voices. And despite wearing all of these different hats, every day, they can still be funny and, like all good comedy, thought provoking.

For me there is one poignant moment in the film too: understanding how, in a world where newspapers and comic syndication are dwindling fast, these people expect to make a living. The Internet is a great way for comic artists to get exposure to an ever-growing army of fans, but I think there’s going to be an interesting crossover period for those comics that started out in the papers.

The film itself is great. Not only do you get to see these comic artist gods, but you get to understand their passion and interest, and why they do what they do. That goes a long way to helping you empathise with them and their passion, which lines up with yours – reading them.

If you like comics, find a way of giving some money back to these people, whether it’s a subscription, buying their books or buying merchandise.

Revisiting ZFS and MySQL

While at Percona Live this year I was reminded about ZFS and running MySQL on top of a ZFS-based storage platform.

Now I’m a big fan of ZFS (although sadly I don’t get to use it as much as I used to after I shut down my home server farm), and I did a lot of different testing back while at MySQL to ensure that MySQL, InnoDB and ZFS worked correctly together.

Of course, today we have a completely new range of ZFS-compatible environments, not least of which are FreeBSD and ZFS on Linux, so I think it’s time to revisit some of my original advice on using this combination.

Unfortunately the presentations and MySQL University sessions back then have all been taken down. But that doesn’t mean the advice is any less valid.

Some of the core advice for using InnoDB on ZFS:

  • Configure a single InnoDB tablespace, rather than configuring multiple tablespaces across different disks, and then let ZFS manage the underlying disks using stripes, mirrors or whatever configuration you want. This avoids you having to restart or reconfigure your tablespaces as your data grows, and moves that work out to ZFS, which can do it much more easily while the filesystem and database remain online. That means we can do:
innodb_data_file_path = /zpool/data/ibdatafile:10G:autoextend
  • While we’re talking about the InnoDB data files, the best optimisation you can do is to set the ZFS block size to match the InnoDB block size. You should do this *before* you start writing data. That means creating the filesystem and then setting the block size:
zfs set recordsize=8K zpool/data
  • You can also configure a separate filesystem for the InnoDB logs with a ZFS record size of 128K. That’s less relevant in later versions of ZFS, but it does no harm.
  • Switch on I/O compression. Within ZFS this improves I/O times (because less data is physically read from and written to disk). The compression is lightweight enough to handle the load while still reducing the overall time.
  • Disable the double-write buffer. The transactional nature of ZFS helps to ensure the validity of data written down to disk, so we don’t need two copies of the data to be written to ensure valid recovery from the failures normally caused by partial writes of the record data. The performance gain is small, but worth it.
  • Using direct I/O (O_DIRECT in your my.cnf) also improves performance for similar reasons. We can be sure with direct writes in ZFS that the information is written down to the right place. EDIT: Thanks to Yves for pointing out that this is not currently supported on ZFS on Linux.
  • Limit the Adjustable Replacement Cache (ARC); without doing this you can end up with ZFS using a lot of cache memory that would be better used at the database level for caching record information. We don’t need the block data cache as well.
  • Configure a separate ZFS Intent Log (ZIL), really a Separate Intent Log (SLOG). If you are not using SSD throughout, this is a great place to use SSD to speed up your overall disk I/O performance. A SLOG stores immediate writes on SSD, enabling ZFS to batch the more efficient block writes of information out to slower spinning disks. The real difference is that this lowers disk writes, lowers latency, and lowers overall spinning disk activity, meaning the disks will probably last longer, not to mention making your system quieter in the process. For the sake of $200 of SSD, you could double your performance and get an extra year or so out of the disks.
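Pulling the advice above together, here is a rough setup sketch. The pool name (zpool), device path (/dev/ssd0) and the 2 GB ARC cap are illustrative assumptions, not values from my original testing, and the my.cnf fragment matches the settings discussed above:

```shell
# Create a data filesystem and set the record size BEFORE writing any data
zfs create zpool/data
zfs set recordsize=8K zpool/data        # match the InnoDB block size
zfs set compression=on zpool/data       # switch on I/O compression

# Separate filesystem for the InnoDB logs, with a larger record size
zfs create zpool/logs
zfs set recordsize=128K zpool/logs

# Limit the ARC so the RAM goes to the InnoDB buffer pool instead
# (ZFS on Linux shown; on Solaris, set zfs:zfs_arc_max in /etc/system)
echo 2147483648 > /sys/module/zfs/parameters/zfs_arc_max

# Add an SSD as a separate intent log (SLOG) device
zpool add zpool log /dev/ssd0

# Matching my.cnf fragment
cat >> /etc/my.cnf <<'EOF'
[mysqld]
innodb_data_file_path    = /zpool/data/ibdatafile:10G:autoextend
innodb_log_group_home_dir = /zpool/logs
innodb_doublewrite       = 0         # ZFS makes the doublewrite buffer redundant
innodb_flush_method      = O_DIRECT  # omit on ZFS on Linux (not supported)
EOF
```

Remember that recordsize only affects blocks written after it is set, which is why the filesystem must be created and tuned before MySQL writes its first byte.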

Surprisingly, not much has changed in these key rules; perhaps the biggest difference is the change in the price of SSD between when I wrote the original advice and today. SSD is cheap(er) now, so many people can afford SSD as their main disk rather than just a supporting tier, especially if you are building serious machines.