How to Import and Export Large MySQL Databases via the Command Line in CentOS
Import/Export Large MySQL Databases
When working with MySQL, people tend to use phpMyAdmin, which is a nice GUI way to manipulate a database. But some operations won’t work in phpMyAdmin when the database is too large; in particular, you can’t import or export really large databases through it. So sometimes you need to do things on the command line.
In the following, replace [USERNAME] with your MySQL username, [DBNAME] with your database name, [/path_to_file/DBNAME] with the path and name of the file used for the database dump, and [/path_to_mysql/] with the path to the mysql binaries (like /Applications/MAMP/Library/bin/). If you already have a file.sql backup of your MySQL database, created with cPanel or some other backup/export procedure, you can skip the first section: upload that file.sql to your server and go straight to “Part 2” to import it.
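If you’re not sure where the mysql binaries live on your CentOS server, you can usually find them with which (on a stock CentOS install they are typically in /usr/bin/, in which case you can drop the [/path_to_mysql/] prefix entirely):
which mysql
which mysqldump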
Copy/Export a Large Database
MySQL has no ‘Copy’ function. You create a copy by dumping the database with mysqldump.
To dump the database and gzip it at the same time, use the following. This will prompt you for your password.
mysqldump -u [USERNAME] -p [DBNAME] | gzip > [/path_to_file/DBNAME].sql.gz
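If you want a quick sanity check that the dump finished and the gzipped file isn’t truncated, something like this should do it (gzip -t just tests the archive’s integrity and prints nothing if it is fine):
gzip -t [/path_to_file/DBNAME].sql.gz
ls -lh [/path_to_file/DBNAME].sql.gz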
Import a Large Database
If you want to replace the database with a fresh dump created by the above process, do the following.
First, unzip the file.
gzip -d [/path_to_file/DBNAME].sql.gz
Get to a mysql prompt (you will be asked for your password):
[/path_to_mysql/]mysql -u [USERNAME] -p
Part 2
Then do the following to wipe out the old database and replace it with the new dump:
SHOW DATABASES;
DROP DATABASE [DBNAME];
CREATE DATABASE [DBNAME];
USE [DBNAME];
SOURCE [/path_to_file/DBNAME].sql;
That’s it. After you run the last line, you should see a bunch of “ok” messages as it imports all of the tables into your database.
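As a side note (not required for the steps above), if disk space is tight you can skip the separate unzip step and stream the compressed dump straight into mysql instead, assuming the (empty) database already exists:
gunzip < [/path_to_file/DBNAME].sql.gz | [/path_to_mysql/]mysql -u [USERNAME] -p [DBNAME]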
Conditional Dumps
Sometimes the search index is huge and you want to omit it from the dump. Do so with:
mysqldump -u [USERNAME] -p [DBNAME] --ignore-table=[DBNAME].search_index | gzip > [/path_to_file/DBNAME].sql.gz
There are actually a number of tables you could exclude, like the sessions table, the watchdog table and all the cache* tables.
But if you use the above technique to destroy and recreate the database after doing this, you will be missing all those excluded tables. So you will want to do a two-step process instead:
First, create a backup with ONLY the table information, no data.
mysqldump -u [USERNAME] -p [DBNAME] --no-data | gzip > [/path_to_file/DBNAME].info.sql.gz
Then create a backup, including only data from the tables you need.
[/path_to_mysql/]mysqldump -u [USERNAME] -p [DBNAME] --no-create-info --ignore-table=[DBNAME].search_index --ignore-table=[DBNAME].cache --ignore-table=[DBNAME].cache_block --ignore-table=[DBNAME].cache_content --ignore-table=[DBNAME].cache_filter --ignore-table=[DBNAME].cache_form --ignore-table=[DBNAME].cache_menu --ignore-table=[DBNAME].cache_mollom --ignore-table=[DBNAME].cache_page --ignore-table=[DBNAME].cache_pathdst --ignore-table=[DBNAME].cache_pathsrc --ignore-table=[DBNAME].cache_views | gzip > [/path_to_file/DBNAME].data.sql.gz
Well, that’s a lot of typing. Wouldn’t it be nice if there were a wildcard we could use instead of typing out all those cache_ tables? Well, there is! You can do:
[/path_to_mysql/]mysqldump -u [USERNAME] -p [DBNAME] --no-create-info --ignore-table=[DBNAME].search_index --ignore-table=[DBNAME].cache% | gzip > [/path_to_file/DBNAME].data.sql.gz
After doing this, just import the two files as above, first the one with only the table info, and then the data. The result: a (relatively) small database with all the optional tables emptied out.
Note that the wildcard trick above is not documented anywhere that I can see, so you’ll want to test that it works in your setup.
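If the wildcard turns out not to work with your version of mysqldump, one fallback (a rough sketch, not tested here, assuming a bash shell) is to build the list of --ignore-table flags yourself from information_schema:
CACHE_TABLES=$(mysql -u [USERNAME] -p -N -e "SELECT table_name FROM information_schema.tables WHERE table_schema='[DBNAME]' AND table_name LIKE 'cache%'")
IGNORE=""
for t in $CACHE_TABLES; do IGNORE="$IGNORE --ignore-table=[DBNAME].$t"; done
mysqldump -u [USERNAME] -p [DBNAME] --no-create-info --ignore-table=[DBNAME].search_index $IGNORE | gzip > [/path_to_file/DBNAME].data.sql.gz
(You will be prompted for your password twice, once for the mysql query and once for mysqldump.)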