How to move tables and data between two MySQL databases
There are several ways to move or copy data between two MySQL databases; which method to pick depends on the amount of data and the number of indexes involved. Let's go through them.
Prerequisites
Of course, a MySQL database must be installed on your machine; if you're missing it, get one here. The following examples were made on openSUSE Linux, so the OS commands won't work as-is on a Windows machine, but the principle is the same. In the examples below we're going to create a new database called "test1", and we'll also create a procedure that inserts random data into the table that is about to be moved.
Prepare new database and table data
First of all, we need to issue the following commands so that we have a new database and a testing table in an existing database called "test".
#ensure that the MySQL server is running by executing the command below
ps -ef | grep mysql
#we should see more than one line; if the process hasn't been started yet,
#execute the command below (valid for openSUSE/SUSE Linux)
service mysql start
#login to mysql console and enter password when prompted
mysql -u root -p -A
--when we’re in, we’re going to create our new database called test1
CREATE DATABASE `test1` /*!40100 DEFAULT CHARACTER SET utf8 */;
--now let’s switch to database test and create a procedure that inserts data
use test;
delimiter $$
CREATE PROCEDURE `test_table`()
begin
declare i int Default 0 ;
declare random char(20) ;
myloop: loop
set random=conv(floor(rand() * 99999999999999), 10, 36) ; -- decimal to base 36, i.e. a random alphanumeric string
INSERT INTO `test_table` VALUES ( NOW() , random );
set i=i+1;
if i=1000 then
SHOW VARIABLES LIKE 'datadir';
leave myloop;
end if;
end loop myloop;
end $$
delimiter ;
--also create our testing table
CREATE TABLE `test_table` (
`datecol` datetime NOT NULL,
`name` varchar(25) NOT NULL,
KEY `datecol` (`datecol`),
KEY `name` (`name`)
) ENGINE=MyISAM DEFAULT CHARSET=utf8;
--now fill it by calling the procedure created above
call test_table();
--to be continued below...
The first method - Copy files on the OS level
Even though some people keep saying that copying the files directly will corrupt the data, that doesn't have to be the case; I have done this successfully on a running database with large tables to be imported into it. The point is simple: importing a mysqldump can take too long, because the import becomes very slow when, say, 100M rows with lots of indexes, a primary key and no table partitions have to be inserted, so copying the files at the OS level is definitely faster. The technique below is simple and can also be used when copying files between two hosts; in that case scp shall be used instead of dd.
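Before touching the real datadir, the copy, verify and fix-permissions sequence can be rehearsed on throwaway files. A minimal sketch; all paths here are made up, the real files live under the MySQL datadir (e.g. /var/lib/mysql/test):

```shell
# stand-ins for the "test" and "test1" database directories
src=$(mktemp -d)
dst=$(mktemp -d)
printf 'dummy frm\n'   > "$src/test_table.frm"
printf 'dummy rows\n'  > "$src/test_table.MYD"
printf 'dummy index\n' > "$src/test_table.MYI"

# copy the three MyISAM files, as done with dd below
for f in test_table.frm test_table.MYD test_table.MYI; do
  dd if="$src/$f" of="$dst/$f" bs=1M 2>/dev/null
done

# verify the copies byte-for-byte and apply the usual permissions
cmp "$src/test_table.MYD" "$dst/test_table.MYD" && echo "copy OK"
chmod 660 "$dst"/test_table.*
stat -c '%a' "$dst/test_table.MYD"   # prints 660
```

The same cmp/stat checks can be run against the real copies before the table is unlocked.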
--the task continues...
#the procedure above printed the path to the data directory ("datadir"), so open a new terminal as root and execute
cd /var/lib/mysql/ #path may differ in your installation, check the output of the procedure above
#now return to the MySQL console and lock the table, flushing its buffers to disk at the same time
FLUSH TABLES test_table WITH READ LOCK;
--our table is now locked; try to execute the following:
truncate table test_table;
--output will be this:
ERROR 1099 (HY000): Table 'test_table' was locked with a READ lock and can't be updated
#now return to the terminal where we're logged in as root and execute:
ll test/test_table*
#these are the files to be copied; execute the following OS commands to copy the table to database test1
dd if=test/test_table.frm of=test1/test_table.frm bs=1M
dd if=test/test_table.MYI of=test1/test_table.MYI bs=1M
dd if=test/test_table.MYD of=test1/test_table.MYD bs=1M
#bear in mind we have to fix ownership and permissions, so execute these commands:
find ./test1/ -name "test_table.*" -exec chown mysql:mysql {} \; #owner and group are usually mysql
find ./test1/ -name "test_table.*" -exec chmod 660 {} \; #usual permissions are 660
#great, the files have been copied; let's unlock the table in our MySQL console
UNLOCK tables;
--now switch to the test1 database and check the health of the copied table
use test1;
CHECK table test_table;
--find out whether any data is missing (an empty result set means nothing is):
SELECT datecol FROM test1.test_table WHERE datecol NOT IN (SELECT datecol FROM test.test_table);
--and select the number of equal rows:
SELECT COUNT(*) FROM test1.test_table a, test.test_table b WHERE a.datecol = b.datecol AND a.name = b.name;
--output should look like this:
+----------+
| COUNT(*) |
+----------+
|     1000 |
+----------+
--now make sure the indexes of the new table work properly by rebuilding it:
ALTER table test_table engine=MyISAM;
--or alternatively repair the indexes (not needed if the ALTER TABLE above was already run)
REPAIR table test_table;
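For the two-host case mentioned above, scp copies the files one by one; another common pattern is a tar pipe, which preserves file names and can be pointed at ssh for the remote leg. A local sketch using stand-in files (directory names are made up):

```shell
# stand-in source and destination directories; on real hosts these
# would be the datadir subdirectories test/ and test1/
src=$(mktemp -d); dst=$(mktemp -d)
echo "rows"  > "$src/test_table.MYD"
echo "index" > "$src/test_table.MYI"
echo "def"   > "$src/test_table.frm"

# local tar pipe; the remote variant would be:
#   (cd test && tar cf - test_table.*) | ssh user@host 'cd /var/lib/mysql/test1 && tar xf -'
(cd "$src" && tar cf - test_table.*) | (cd "$dst" && tar xf -)

# confirm the transfer is byte-identical
cmp "$src/test_table.MYI" "$dst/test_table.MYI" && echo "transfer OK"
```

The chown/chmod step from the walkthrough above still has to be applied on the receiving side.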
Moving data with mysqldump
This method works well for tables that aren't very large or that don't contain many indexes. mysqldump locks the tables unless it is told not to lock them, but in that case we may get inconsistent data. The commands are below:
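The locking behaviour is controlled by mysqldump options. A sketch of the common variants (not runnable without a live server; database and table names are the ones used throughout this article):

```shell
# default: locks the tables being dumped for the duration of the dump
mysqldump -u root -p test --tables test_table > test_table.dump

# skip locking entirely: faster, but the dump may be inconsistent
# if the table is written to while the dump runs
mysqldump -u root -p --skip-lock-tables test --tables test_table > test_table.dump

# for InnoDB tables, a consistent snapshot without read locks
mysqldump -u root -p --single-transaction test --tables test_table > test_table.dump
```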
--once database test1 exists, open a new terminal and execute:
mysqldump -u root -p test --tables test_table > test_table.dump
#the command above will ask for the MySQL root password
#when it finishes, we can find a new file "test_table.dump" in our current directory
#now execute the following command to import the table into database test1
mysql -u root -p test1 < test_table.dump
#once the root password is given, our table is imported into database test1
#now switch back to the MySQL console and execute:
use test1;
show tables;
--output will look like this
+-----------------+
| Tables_in_test1 |
+-----------------+
| test_table      |
+-----------------+
--also compare the new and original tables by issuing the following command:
SELECT COUNT(*) FROM test1.test_table a, test.test_table b WHERE a.datecol = b.datecol AND a.name = b.name;
--the result should look like this:
+----------+
| COUNT(*) |
+----------+
|     1000 |
+----------+
Cleanup
If everything checks out, execute the following commands to clean up everything created above.
--drop the database test1 (including its copy of the table) and the original table
DROP database test1;
DROP table test.test_table;
--remove the procedure
DROP procedure test.test_table;
--now switch to the terminal containing the test_table.dump file and remove it
rm -f test_table.dump
Conclusion
Import and export can also be done via synchronisation or replication; however, the two techniques described above are the most suitable for creating duplicates of particular tables or whole databases. mysqldump can be used for anything up to roughly 1 GB; larger files should be moved via scp or dd.
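For the large-file case, compressing the dump before moving it across the network usually pays off, and a checksum confirms the round trip was lossless. A local sketch with a stand-in dump file (contents made up):

```shell
# stand-in for a real mysqldump output file
work=$(mktemp -d)
printf 'INSERT INTO `test_table` VALUES (NOW(), "abcdef");\n' > "$work/test_table.dump"
before=$(md5sum < "$work/test_table.dump")

# compress before transfer (then scp the .gz file), decompress on the target
gzip -c "$work/test_table.dump" > "$work/test_table.dump.gz"
gunzip -c "$work/test_table.dump.gz" > "$work/restored.dump"

# the restored dump must be byte-identical to the original
after=$(md5sum < "$work/restored.dump")
[ "$before" = "$after" ] && echo "round-trip OK"
```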
Further reading & help
If you need help with your MySQL database, drop me a line. You’re also welcome to visit other articles devoted to MySQL - they’re listed below.
- Performing regular backups and point-in-time recovery
- Enabling logging of issued SQL statements
- Understanding indexes
- Memory tuning & filesystem tuning & data optimisation
- Moving datafiles between two databases
- Difference between MYISAM and InnoDB engines
- Partitioning
- Enabling remote access to a MySQL database
- How to setup replication
- Rebuilding indexes and checking health of tables