To round off my memoirs from the 2011 MySQL conference, I'll write down my own activities here for the historical record.
MySQL awards
With the community picking up tasks that used to be handled by MySQL AB, it has somehow fallen into my lap to drive the selection of winners for the annual MySQL awards. This was the second year we did it, and we have settled on a format where the winners are chosen by a community panel consisting of the previous two years' winners plus the conference chair(s). I think having the community nominate and vote for the winners has brought forward some truly deserving and sometimes surprising winners, and it has been a pleasure to be involved in this process. I feel privileged to be part of a process that channels so much goodwill and respect from the MySQL community to the winners.
This year's winners were already published here previously.
Xtrabackup Manager BoF
Together with Lachlan I did a BoF on Xtrabackup Manager. A good group of people turned up. I didn't write down the name, but someone offered to participate by creating a browser-based user interface, which XBM doesn't have yet. Peter Zaitsev stopped by for a word of encouragement, even though he was headed to another BoF at the same time.
An interesting discussion we had both before and at the BoF was how to create a verification procedure for the backups. You run xtrabackup and it produces a file; how do you know your data is really in there? Especially with the --stream option, all that happens on the receiving end is a simple untar, or just saving the tar file. I want to verify that the result is actually a complete and correct copy of the database.
Talking with Lachlan, Stewart Smith and people at the BoF, we came up with the following scheme:
- Use the innochecksum utility or CHECK TABLE. This checks that all pages in the InnoDB tablespace are intact; however, it doesn't tell you whether all pages or all tablespace files are actually there.
- To verify that you actually have all the data, one way is simply to read the table plus all secondary indexes from beginning to end. Since they are BTREE structures, reading each of them from the root page guarantees you will visit every page and notice if any are missing. How to do this most conveniently is an interesting question: you could generate SQL statements that use FORCE INDEX, or use the HANDLER statement to be more explicit (a minimal SQL sketch follows this list). Stewart advocated using HailDB (Embedded InnoDB) instead of a full-blown SQL server. In any case you need to generate a list of all databases, tables and indexes at the time the backup is taken, so you can then traverse that inventory. This could be done with SHOW DATABASES, SHOW TABLES and SHOW CREATE TABLE statements, or probably more conveniently from the INFORMATION_SCHEMA.
- Using Maatkit's mk-table-checksum was deemed a non-solution if the server to be backed up is online and data is being updated - which will be the case. mk-table-checksum does some interesting magic to do a non-blocking online check for a master-slave pair. The documentation doesn't even explain how this works, but it uses the replication stream to propagate the checks in the same order as updates happen on the master, so that any check on the slave runs against the same state, and it just works.
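To make the first two points a bit more concrete, here is a minimal SQL sketch of what such a verification pass could look like against the restored copy. The table and index names (test.t, idx_a) are made up for illustration; in practice you would generate the statements from the inventory rather than type them by hand.

    -- Build an inventory of InnoDB tables and their indexes at backup time
    SELECT TABLE_SCHEMA, TABLE_NAME
      FROM INFORMATION_SCHEMA.TABLES
     WHERE ENGINE = 'InnoDB';

    SELECT TABLE_SCHEMA, TABLE_NAME, INDEX_NAME
      FROM INFORMATION_SCHEMA.STATISTICS
     WHERE TABLE_SCHEMA NOT IN ('mysql', 'information_schema');

    -- Verify page checksums on the restored copy
    CHECK TABLE test.t;

    -- Walk a secondary index from root to leaves, either by forcing a
    -- full read through it...
    SELECT COUNT(*) FROM test.t FORCE INDEX (idx_a);

    -- ...or by stepping through it explicitly with HANDLER
    HANDLER test.t OPEN;
    HANDLER test.t READ idx_a FIRST;
    HANDLER test.t READ idx_a NEXT;   -- repeat until no more rows are returned
    HANDLER test.t CLOSE;

The FORCE INDEX variant is trivial to generate automatically from the inventory; the HANDLER variant gives more explicit control over which index is actually being read.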
But it is not obvious to me that it could work for checking a backup, where we do not and cannot use replication. Possibly one could do the same thing with a copy of the binlog and replay it against the backup. I have to think about this a bit more.
In any case, I don't think mk-table-checksum does any checks to verify that secondary indexes are intact, which is still a concern.
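One cheap sanity check in that direction, using the same FORCE INDEX trick as above (again with made-up names), is to compare the row count seen through the primary key with the count seen through each secondary index on the restored copy; a missing chunk of a secondary index should show up as a mismatch:

    -- On an intact copy these counts should agree
    SELECT COUNT(*) FROM test.t FORCE INDEX (PRIMARY);
    SELECT COUNT(*) FROM test.t FORCE INDEX (idx_a);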
As you can see, checking that your backups are really intact is not as easy as you'd think!
Severalnines ClusterControl BoF
I helped Johan out with two BoFs, on ClusterControl for MySQL Cluster and ClusterControl for MySQL replication. Johan did all the work; I mainly helped him schedule the BoFs and was there to help. But hey, that's a contribution too! Those of us who have worked with MySQL Cluster really appreciate these tools for making our work so much easier, so it is nice to help out now that Johan is starting his own business at severalnines.com.
The first BoF had an OK turnout for a MySQL Cluster event. (I remember a MySQL Cluster BoF a few years ago with one participant, but he turned into a user, so it was still a success!)
The second BoF, about the new ClusterControl for MySQL replication, was less well attended because it collided with the MariaDB BoF. If you don't know, "MariaDB BoF" is code for free Finnish vodka. So we went to party with everyone else in the MariaDB room instead (followed by the Oracle reception). That's the nature of BoFs.
Drizzle developer day
I already mentioned the Drizzle developer day in my previous post. In addition to being part of a warm and friendly atmosphere, plus some deeply technical cross-fork discussions about group commit solutions, I gave an introductory talk during the day. I put together a talk based on Jonathan Levin's How to make MySQL cool again and my own additions to it. The slides are available as an attachment below.
As for embedding a JSON parser into Drizzle, I couldn't resist putting out a teaser and said "this is something I can see Stewart doing over a weekend". I was wrong. He did it the next Thursday.
So I owe it to Stewart to try out the code and play with my ideas for using MySQL/MariaDB/Drizzle as a JSON document database. (And Stewart, I'll be needing a JSON equivalent for MySQL's ExtractValue()...)
In fact he did more than I proposed: he also embedded an HTTP server and added a simple HTML/JavaScript GUI to Drizzle. So now we have to reimplement phpMyAdmin in JavaScript and serve it directly from within Drizzle. My head is exploding with ideas...
Thanks for the mention!
Thanks for the mention in your blog and presentation :)
Well thank YOU for writing it
Well, thank YOU for writing it up in the first place. Our twin blogs are already bearing fruit, such as in Stewart's work. Goes to show that asking for features is useful in its own right.