MySQL backup and restore

Backing up and restoring a MySQL database is straightforward, but a plain restore is very slow and becomes nearly unusable once the database grows to gigabytes in size.

Traditional way

If your database is not large, there is no reason to optimize beyond gzipping the dump.

mysqldump -u <username> -p <database_name> | gzip > <backup_directory/backup_file.sql.gz>
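
For example, assuming a database named mydb, a backup user named backupuser, and a /var/backups directory (all hypothetical names), a date-stamped backup could look like this:

mysqldump -u backupuser -p mydb | gzip > /var/backups/mydb-$(date +%F).sql.gz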

If problems occur, try the following options (see the combined example after the list):

  • --max-allowed-packet=<maximum_packet_size> to allow larger packets, for example 1G = 1073741824
  • --lock-tables=false to disable table locking while dumping
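
For example, with both options applied (names in angle brackets are placeholders, as above):

mysqldump -u <username> -p --max-allowed-packet=1073741824 --lock-tables=false <database_name> | gzip > <backup_directory/backup_file.sql.gz>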

Restore the database with:

gzip -dc <backup_directory/backup_file.sql.gz> | mysql -u root -p <database_name>
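
Before restoring, it can be worth verifying that the archive is intact; gzip -t tests the compressed file without extracting it. Using the same hypothetical names as above:

gzip -t /var/backups/mydb-backup.sql.gz
gzip -dc /var/backups/mydb-backup.sql.gz | mysql -u root -p mydb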

Traditional way tuned for larger databases

This should help you back up and restore databases several gigabytes in size.

Backing up works the same way as above:

mysqldump -u <username> -p <database_name> | gzip > <backup_directory/backup_file.sql.gz>

But when restoring, we need to do it in the mysql shell: decompress the dump, log in, raise the limits, select the database, and source the decompressed .sql file (note that source must point at the .sql file, not the .gz archive):

gzip -d <backup_directory/backup_file.sql.gz>
mysql -u root -p
set global net_buffer_length=1000000;
set global max_allowed_packet=1073741824;
set foreign_key_checks = 0;
use <database_name>;
source <backup_directory/backup_file.sql>;
set foreign_key_checks = 1;
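
If you would rather not decompress the dump to disk, a roughly equivalent non-interactive variant is to stream it through the client. A sketch, assuming the server-side globals above have already been raised (an ordinary client session cannot change them mid-import):

gzip -dc <backup_directory/backup_file.sql.gz> | mysql -u root -p --max_allowed_packet=1073741824 --init-command="SET SESSION foreign_key_checks=0" <database_name>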

Export / Import for huge databases

I think the most efficient solution for really big databases is to export and import comma-separated (CSV) files.

Exporting can be done with the mysqldump command. The -T (--tab) option makes the MySQL server itself write a <table>.txt data file per table into the given directory, so the directory must be writable by the server; and since -t (--no-create-info) skips the CREATE TABLE statements, the tables must already exist (or be recreated from a separate schema-only dump) before importing:

chown mysql:mysql <backup_directory>
mysqldump -u root -p -t -T<backup_directory> <database_name> --fields-enclosed-by=\" --fields-terminated-by=,
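
On newer MySQL versions the server will only write export files to (and LOAD DATA INFILE read from) the directory named by the secure_file_priv variable, so it is worth checking it first. An empty value means no restriction; NULL means the server refuses file import and export entirely:

mysql -u root -p -e "SHOW VARIABLES LIKE 'secure_file_priv';"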

Importing needs you to log in with the mysql client and issue a LOAD DATA statement per table. The field options must match those used for the export, and the data files written by -T are named <table>.txt:

LOAD DATA INFILE '</path/to/backup/directory/table.txt>' INTO TABLE <table> FIELDS TERMINATED BY ',' ENCLOSED BY '"';
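
To load every table in one pass, a small shell loop can issue these statements. A rough sketch, assuming the hypothetical paths above, that the tables already exist, and that client credentials are supplied via ~/.my.cnf so the loop does not prompt once per table:

# loop over the exported data files and bulk-load each one into its table
for f in /path/to/backup/directory/*.txt; do
  table=$(basename "$f" .txt)
  mysql <database_name> -e "SET foreign_key_checks=0; LOAD DATA INFILE '$f' INTO TABLE \`$table\` FIELDS TERMINATED BY ',' ENCLOSED BY '\"';"
done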

Try these out and leave a comment below.
