Schema changes should be lazy

We’re in the middle of upgrading Drupal.org, and many of the longest-running upgrades involve schema changes. Unfortunately, MySQL with InnoDB has a very unfriendly method of doing schema changes: it rebuilds the table and blocks all writes until the new table is ready. A more sensible approach would be schema versioning that allows different parts of a table to have different schema versions. This would minimize blocking, allowing schema changes to happen without downtime.

David’s rules for efficient schema changes:

  • Don’t block: Take no immediate action when the admin requests the schema change, except for recording the change that needs to occur.
  • Be flexible: Have a versioning model in place that allows different rows to have different on-disk formats. This versioning need not be per-row, but it should be granular enough that upgrading the smallest versioned unit does not cause troublesome lock delays on a live system.
  • Be lazy: Upgrade the on-disk data format as rows are written. Transform row formats as they are read. (See the sketch after this list.)
  • Hide the mess: Despite having rows in various formats, the application should read from and write to the tables as if, from the moment the change occurs, every row were already in the latest format. Combined with the “be lazy” rule, this abstraction should be possible with minimal performance impact.
  • Be thorough: Actively upgrade on-disk data in the background during periods of low load. Tables should converge on having all rows in the latest format.
  • Stay correct and atomic: Check that the conversion will succeed on a current table snapshot (you’re using MVCC, right?) and on any incoming writes that happen during the check. Before the atomic moment of the schema change, the change should fail if any rows won’t convert, including ones written during the check. After that moment, writes should fail if they don’t match the new schema.
  • Play loose with indexes: Indexes are optimizations. A database shouldn’t block while adding, removing, or modifying indexes; there’s no need to. Just make sure an index is complete before queries use it. Even MyISAM follows this rule. (Granted, the external InnoDB plugin handles this.)
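
Here’s a minimal sketch, in Python, of how these rules fit together. Everything in it is hypothetical (no real storage engine exposes a VersionedTable API): rows carry the schema version they were written under, reads and writes transparently present the latest format, and a background sweep converges the table.

```python
from typing import Any, Callable, Dict, Tuple

Row = Dict[str, Any]

class VersionedTable:
    """Hypothetical table whose rows may sit at different schema versions."""

    def __init__(self) -> None:
        self.latest_version = 1
        # Upgrade functions keyed by the version they convert *from*.
        self.upgrades: Dict[int, Callable[[Row], Row]] = {}
        # Storage: primary key -> (schema version, row as stored).
        self.rows: Dict[Any, Tuple[int, Row]] = {}

    def add_schema_change(self, upgrade: Callable[[Row], Row]) -> None:
        # "Don't block": record the change; touch no rows yet.
        self.upgrades[self.latest_version] = upgrade
        self.latest_version += 1

    def _to_latest(self, version: int, row: Row) -> Row:
        # Replay recorded upgrades until the row reaches the latest format.
        while version < self.latest_version:
            row = self.upgrades[version](dict(row))
            version += 1
        return row

    def read(self, key: Any) -> Row:
        # "Hide the mess": readers always see the latest format.
        version, row = self.rows[key]
        return self._to_latest(version, row)

    def write(self, key: Any, row: Row) -> None:
        # "Be lazy": anything written lands in the latest format.
        self.rows[key] = (self.latest_version, row)

    def background_sweep(self, batch_size: int = 100) -> None:
        # "Be thorough": upgrade a batch of stale rows during low load.
        stale = [k for k, (v, _) in self.rows.items() if v < self.latest_version]
        for key in stale[:batch_size]:
            version, row = self.rows[key]
            self.rows[key] = (self.latest_version, self._to_latest(version, row))

# Example: split a "name" column into "first"/"last" with no table rebuild.
table = VersionedTable()
table.write("u1", {"name": "Ada Lovelace"})

def split_name(row: Row) -> Row:
    first, _, last = row["name"].partition(" ")
    return {"first": first, "last": last}

table.add_schema_change(split_name)  # returns immediately; blocks nothing
print(table.read("u1"))              # {'first': 'Ada', 'last': 'Lovelace'}
```

The same structure extends to the “stay correct and atomic” rule: before bumping the latest version, the engine would run each pending upgrade against a snapshot plus any writes that arrived during the check, and abort the whole change if any row fails to convert.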

Is there any free, open-source database that observes all of these rules? I know PostgreSQL has transactional DDL, which leads me to believe it implements something similar.

I’m aware there may be objections to delaying the atomic schema changeover until after tables have been checked, because it doesn’t perfectly preserve transaction sequencing. But there’s no sensible alternative: if we assume the schema change will succeed and immediately reject illegal writes, we risk falsely rejecting those writes should the schema change later fail. That would violate isolation and atomicity for the schema change, a far greater ACID sin. (Give penance to the Oracle?)

While the delay before the schema change in my model may be considerable, administrators would have a choice: take the application offline (as they do now) or make the application aware of the upcoming schema change (which cannot easily happen now). Such awareness could involve as little as a check on each request to see whether the change has occurred, choosing between code paths for before and after, as in the sketch below. The next application update then removes the old code path. Properly handled, there would be no downtime.
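
For instance, the awareness could be as simple as the following (schema_version() and the version constant are hypothetical stand-ins for whatever cheap, per-request lookup the site would actually use):

```python
NAME_SPLIT_VERSION = 2  # hypothetical: version where "name" became "first"/"last"

def schema_version() -> int:
    """Hypothetical cheap check, e.g. a value cached once per request."""
    return 1  # would query or read a cached flag in a real application

def display_name(row: dict) -> str:
    if schema_version() >= NAME_SPLIT_VERSION:
        return f"{row['first']} {row['last']}"  # post-change code path
    return row["name"]  # pre-change path, deleted in the next app update
```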

This “lazy schema change” model isn’t just applicable to relational behemoths. It maps well to document databases like CouchDB where records do not generally get cluster-wide updates, even as formats change. Document database systems can be aware of what changes need to occur when reading data in old formats, and they can update documents to the latest formats while reading and writing them.
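
A rough Python sketch of that document-store behavior (not any real CouchDB API): documents carry a format marker, readers upgrade stale documents on the fly, and the upgraded form is written back so the store converges.

```python
def upgrade_doc(doc: dict) -> dict:
    """Bring a document to format 2; a no-op if it is already current."""
    if doc.get("format", 1) < 2:
        first, _, last = doc.pop("name").partition(" ")
        doc.update(format=2, first=first, last=last)
    return doc

class DocStore:
    """Hypothetical document store that upgrades formats lazily."""

    def __init__(self) -> None:
        self.docs: dict = {}

    def get(self, doc_id: str) -> dict:
        doc = upgrade_doc(self.docs[doc_id])
        self.docs[doc_id] = doc  # persist the upgraded format on read
        return doc

store = DocStore()
store.docs["u1"] = {"name": "Ada Lovelace"}  # old-format document
print(store.get("u1"))  # {'format': 2, 'first': 'Ada', 'last': 'Lovelace'}
```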