How to make MySQL cool again

Jonathan Levin has an excellent blog post titled How to make MySQL cool again. It is almost word for word something I've wanted to write for a long time. Now I don't need to. Thanks, Jonathan!

Once again Blogger failed to post my comments to his site, so I will make my comments as a new post of my own instead. Jonathan mostly lists things that already exist but aren't being used enough. My list also contains a few things that I don't know whether they exist or not.

Hi Jonathan

This is an excellent post! I've been thinking of writing exactly these points for a while; thanks to you, now I don't have to. Everything you list is low-hanging fruit that already exists in some form.

I have some things to add to this list:

Lightweight MapReduce = bashreduce:
https://blog.last.fm/2009/04/06/mapreduce-bash-script
br -r "sort" < input > output
Great for one-liners against a set of servers, MySQL or otherwise. But of course use Gearman for serious work.

Transparent sharding.
MySQL Cluster does exactly this, but we need it for InnoDB too. Spider Engine should do this? I hadn't previously heard of Scalebase, which was mentioned in an earlier comment (on Jonathan's blog). Both Ulf Wendel and Justin Swanhart have separately been blogging about experiments with the MySQL connectors (in PHP; the former with a plugin, the latter with a wrapper) that could be used for this.

Transparent re-sharding, aka online add node.
MySQL Cluster does this. We need it for InnoDB too. Spider Engine could probably do this with "very little downtime" as an ALTER TABLE? Implementing an automated solution for this would be kinda cool! The connector-based experiments should easily lend themselves to this.

Several replication improvements: parallel replication, multi-source replication. As you say, Tungsten has these, so it is a good add-on to standard MySQL replication. But we also want:
- synchronous master-master replication
- global transaction IDs (as in Tungsten), which give you easier deployment of new slaves and easier promotion of a new master
There are various solutions out there: in the Google patch, via Galera replication, and so on.

Better support for schema-less development. With HandlerSocket you can already get a very fast key-value store: CREATE TABLE t (`key` INT, `value` BLOB) and there you go. But MongoDB and some others take this further: you can actually have something like secondary indexes into that BLOB. MariaDB's virtual columns and dynamic columns are interesting here. (I've blogged about this as a concept: https://openlife.cc/blogs/2010/october/mariadb-52-using-mariadb-column-…. Roland Bouman made some interesting comments on those blog posts.)
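
As a sketch of what that could look like today, a persistent virtual column (available since MariaDB 5.2) can expose a piece of the BLOB as an indexable column. The table and the fixed-offset layout below are made up for illustration:

  CREATE TABLE kv (
    `key` INT PRIMARY KEY,  -- `key` is a reserved word, hence the backticks
    `value` BLOB            -- the opaque document, e.g. written via HandlerSocket
  ) ENGINE=InnoDB;

  -- Hypothetical layout: if the application stores a fixed-width customer
  -- id in bytes 1-8 of the value, a PERSISTENT virtual column can extract
  -- it with SUBSTR() and make it indexable like a normal column.
  ALTER TABLE kv
    ADD COLUMN customer_id CHAR(8) AS (SUBSTR(`value`, 1, 8)) PERSISTENT,
    ADD INDEX (customer_id);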

The thing still completely missing is embedding a JSON parser into MySQL/MariaDB, which would allow you to create the secondary indexes in a user-friendly way. My experiments with XML documents suggest that the performance penalty for using XPath and the existing XML parser is very high compared to using SUBSTR(). Unless the performance of an XML/JSON parser can be improved to the level of SUBSTR(), this again suggests that the task of maintaining the secondary indexes could be pushed to the client side instead. (E.g. when inserting or updating a record, the client extracts the values needed for the secondary indexes and writes them into separate columns that are indexed. From the database's point of view these are then just normal indexed columns, even if they are derived from the JSON document stored in the BLOB.)
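
A minimal sketch of that client-side approach, with made-up table and field names:

  -- The full JSON document lives in doc; email and age are extracted by
  -- the client application before the INSERT, so they can be indexed
  -- like any normal columns.
  CREATE TABLE users (
    id INT PRIMARY KEY,
    doc BLOB,
    email VARCHAR(255),
    age INT,
    INDEX (email),
    INDEX (age)
  ) ENGINE=InnoDB;

  -- The client parses the document itself and supplies the derived values:
  INSERT INTO users (id, doc, email, age)
  VALUES (1, '{"email": "foo@example.com", "age": 30}', 'foo@example.com', 30);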

I also maintain Shard-Key-Mapper (http://code.google.com/p/shard-key-mapper), which can be used to map rows to shards based on a "shard key", the column on which the data is sharded. You can add new nodes and immediately map rows to them. Shard-Key-Mapper also includes functionality for moving rows between shards.
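
For illustration only (this is not Shard-Key-Mapper's actual schema), a directory-based mapper essentially boils down to a lookup table like this:

  -- Hypothetical directory table: each shard key value is mapped to the
  -- shard that owns its rows.
  CREATE TABLE shard_directory (
    shard_key INT PRIMARY KEY,  -- e.g. a customer_id, the column the data is sharded on
    shard_id  INT NOT NULL      -- which shard holds the rows for this key
  );

  -- Routing: look up the shard first, then send the actual query there.
  SELECT shard_id FROM shard_directory WHERE shard_key = 42;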

The mapper includes a move_key() method which moves the rows between the shards and then updates the directory. Technically this should be done via XA, but MySQL's XA is broken in multiple ways, so I did not take on the added complexity of using it. There is an example at http://code.google.com/p/shard-key-mapper/wiki/DirectoryShardKeyMapper which shows how to use the move_key() method. This works best if you quiesce writes for the key being moved.
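
Conceptually (made-up table names, not the actual Shard-Key-Mapper code), a key move is a sequence of separate commits against separate servers, which is exactly why quiescing writes for the key matters in the absence of XA:

  -- 1. On the source shard: read the rows for the key being moved.
  SELECT * FROM customers WHERE customer_id = 42;

  -- 2. On the target shard: insert those rows.
  INSERT INTO customers (customer_id, name) VALUES (42, 'Alice');

  -- 3. In the directory: repoint the key at the new shard.
  UPDATE shard_directory SET shard_id = 2 WHERE shard_key = 42;

  -- 4. Back on the source shard: remove the moved rows.
  DELETE FROM customers WHERE customer_id = 42;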

A full implementation could use FlexCDC/Flexviews to capture the changes which happen after the move begins, then apply them to the new shard after the move. This is similar to how the Facebook 'online schema change' tool works, except that the Facebook tool collects changes with triggers, which is another possible implementation.
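
A minimal sketch of that trigger-based alternative, again with made-up names (the real Facebook tool is considerably more involved):

  -- Changes to the source table are recorded in a change-log table.
  CREATE TABLE customers_changes (
    change_id   INT AUTO_INCREMENT PRIMARY KEY,
    op          CHAR(1) NOT NULL,  -- 'I', 'U' or 'D'
    customer_id INT NOT NULL
  );

  -- One trigger per operation; the INSERT case is shown here, and the
  -- UPDATE and DELETE triggers are analogous.
  CREATE TRIGGER customers_ai AFTER INSERT ON customers
  FOR EACH ROW
    INSERT INTO customers_changes (op, customer_id)
    VALUES ('I', NEW.customer_id);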

Finally, Shard-Query includes a loader which uses the mapper to determine which shard to load each row onto. This can also be used to spread data for new keys over new shards.

Hi,

Yes, most of the problem is solved, but it could use an easier administration interface :) Of course this also needs additional testing in the real world and some blog posts to popularize it. It is easily extensible to work with an existing directory too.

Most of these projects are useful as examples in the classes I teach. I can describe a shard directory in class and then direct my students to an existing project that they can use as an implementation reference.

I also like to solve hard problems; it keeps me from getting bored. The many projects are interconnected. Shard-Query needed a way to figure out where to route queries, and a reusable implementation seemed to be the way to go. I also need a way to load data onto shards, so a loader is a natural extension of Shard-Query too, but perhaps it actually belongs in Shard-Key-Mapper; I'll think about that. Shard-Query also needed a real SQL parser, hence PHP-SQL-Parser. Lastly, Flexviews needed change data capture. There is a Java tool that I could possibly have used (Tungsten), but I'm not a Java programmer by trade, so I wrote FlexCDC in PHP.

Like Tungsten, Flexviews can also be used for multi-master replication, since with a little hacking you can create remote snapshots. It needs some data dictionary modifications to make this easy, though, which is why I haven't popularized the idea yet.

Eventually you should be able to combine all the parts into an easily distributable compute cluster for the cloud. I plan on doing this in a "distribution" called Clutch-DB. I already have a Google code project reserved for it, but it isn't ready for beta testing yet.

See you at the MySQL conference!
