Patrick Galbraith (capttofu) wrote,
  • Mood: awake

innodb_file_per_table Revisited

In a previous post, I was trying to figure out the best way to switch from two large InnoDB tablespace files to using innodb_file_per_table, to take advantage of some of the benefits of this setting. I had one part of it solved: stop MySQL, add innodb_file_per_table to the my.cnf, restart, then perform a "no-op" alter of "ALTER TABLE t1 ENGINE=InnoDB", which causes the table to be re-created in its own .ibd file. The remaining problem was how to resize the huge tablespace files to a smaller size (in my case from 10GB to 10MB) after converting all the tables.

Someone suggested a better way:

1. Alter all InnoDB tables to MyISAM
2. Stop the server
3. Add innodb_file_per_table to my.cnf
4. Change innodb_data_file_path to the new settings (10MB tablespaces) in my.cnf
5. Move all InnoDB files (logs, data) to a backup directory
6. Restart MySQL
7. Alter all the tables converted to MyISAM back to InnoDB
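For reference, the my.cnf changes in steps 3 and 4 might look something like this (the file name and sizes are examples; adjust them to your setup):

```ini
# Sketch of the my.cnf changes for steps 3 and 4 (example values)
[mysqld]
innodb_file_per_table = 1
innodb_data_file_path = ibdata1:10M:autoextend
```

With these settings, new InnoDB tables get their own .ibd files and the shared tablespace starts out at only 10MB.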

This works great! Using ALTER rather than dumping and reloading the data was the fastest approach, at least for me.

To make this easier on myself, I wrote a perl script to automate steps 1 and 7, which is available at:

UPDATED: Thanks to Tobias Asplund, who just reminded me that mysql_convert_table_format does something similar to what my little script does. The difference:

My script uses the information schema, and *only* converts tables of the engine supplied in --from= to the engine specified in --to=, whereas mysql_convert_table_format uses 'SHOW TABLES' and converts all tables in a schema to a given engine en masse, regardless of their previous engine type. In effect, my script has a limiting factor in the --from argument.

"" lets you specify a schema, and engine to convert from, and an engine to convert to. The help page is:

./ --help
-d, --dry-run    Dry run, don't actually perform conversion
-f, --from=      The storage engine converting from
-p, --password=  DB password for user who has ALTER privs
-t, --to=        The storage engine converting to
-s, --schema     Schema name
-u, --user=      DB username for user who has ALTER privs
-v, --verbose    Verbose flag

This handy little script simply uses the information schema to obtain a list of tables with the engine specified in the --from argument, loops through each, and performs:

ALTER TABLE <table_name> ENGINE=<engine specified in --to>
At the end, the script prints how long the ALTER of all the tables took (I should add per-table timing too).
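The script's core logic can be sketched in a few lines of Python (the actual script is Perl; the function names and the hypothetical table list here are my own, not the script's exact code):

```python
# Sketch of the conversion script's core logic (the real script is Perl).
# Build an information_schema query for tables using the --from engine,
# then generate a no-op-style ALTER for each, targeting the --to engine.

def build_table_query(schema, from_engine):
    # information_schema.TABLES exposes TABLE_SCHEMA, TABLE_NAME, and ENGINE
    return ("SELECT TABLE_NAME FROM information_schema.TABLES "
            f"WHERE TABLE_SCHEMA = '{schema}' AND ENGINE = '{from_engine}'")

def build_alters(tables, to_engine):
    # One ALTER per table; only tables matched by --from are ever touched
    return [f"ALTER TABLE `{t}` ENGINE={to_engine}" for t in tables]

if __name__ == "__main__":
    # Hypothetical table names, as if returned by the query above
    tables = ["feeds", "feed_items"]
    for stmt in build_alters(tables, "MyISAM"):
        print(stmt)
```

Because the table list comes from information_schema filtered on ENGINE, tables already in the target engine (or any other engine) are simply never selected, which is the limiting factor mysql_convert_table_format lacks.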

The reason I think ALTER is the best route: with about 35GB of data (feeds and feed items), it took 1 hour to convert all tables to MyISAM, then 2.5 hours to convert back to InnoDB, for a total of about 3.5 hours. Not bad at all!


Caveat #1: Very important - make sure you don't have any remaining InnoDB tables before you move your tablespace files to the backup directory, otherwise you'll lose them. If this is a production box, make a backup!

Caveat #2: If using my tool, make a note (either with a schema dump or by creating an ALTER file) of things such as foreign keys, which MyISAM doesn't support, and FULLTEXT indexes, which InnoDB doesn't support. You will have to re-add these after the switch.
Tags: innodb, innodb_file_per_table, myisam, storage engines