MySQL dump by query

Is it possible to do mysqldump by single SQL query?

I mean to dump the whole database, like phpmyadmin does when you do export to SQL



Solution 1:[1]

Not mysqldump, but the mysql CLI...

mysql -e "select * from myTable" -u myuser -pxxxxxxxxx mydatabase

You can redirect the output to a file if you want (note that this produces tab-separated values, not SQL INSERT statements):

mysql -e "select * from myTable" -u myuser -pxxxxxxxx mydatabase > mydumpfile.txt

Update: The original post asked whether it was possible to dump from the database by query. What was asked and what was meant were different; the poster really just wanted to mysqldump all tables. mysqldump can also filter rows with --where, e.g.:

mysqldump mydatabase --tables myTable --where="id < 1000"

Solution 2:[2]

This should work (the text given to --where is injected after WHERE, so the query becomes WHERE 1 limit 1000000, effectively capping the dump at a million rows per table):

mysqldump --databases X --tables Y --where="1 limit 1000000"

Solution 3:[3]

Dump a table filtered by a WHERE condition:

mysqldump mydatabase mytable --where="mycolumn = myvalue" --no-create-info > data.sql

Dump an entire table:

mysqldump mydatabase mytable > data.sql

Notes:

  • Replace mydatabase, mytable, and the WHERE condition with your desired values.
  • By default, mysqldump includes DROP TABLE and CREATE TABLE statements in its output. If you do not want the restore to drop and recreate the table (wiping its existing data), make sure you use the --no-create-info option.
  • You may need to add the appropriate -h, -u, and -p options to the example commands above in order to specify your desired database host, user, and password, respectively (see the sketch after this list).
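
A sketch that combines these options, using the same placeholder names (mydatabase, mytable, myuser) plus a placeholder host db.example.com:

mysqldump -h db.example.com -u myuser -p mydatabase mytable --where="mycolumn = myvalue" --no-create-info > data.sql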

Solution 4:[4]

You can dump a query result as CSV like this:

SELECT * from myTable
INTO OUTFILE '/tmp/querydump.csv'
FIELDS TERMINATED BY ','
ENCLOSED BY '"'
LINES TERMINATED BY '\n';
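
Note that INTO OUTFILE writes the file on the database server host (not the client), the MySQL account needs the FILE privilege, and the secure_file_priv setting may restrict or disable the target directory. You can check that setting like this:

SHOW VARIABLES LIKE 'secure_file_priv';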

Solution 5:[5]

You can use the --where option of mysqldump to produce the output you are after:

mysqldump -u root -p test t1 --where="1=1 limit 100" > arquivo.sql

At most 100 rows will be dumped from table t1 in database test.

Solution 6:[6]

If you want to export the last n records into a file, you can run the following:

mysqldump -u user -p -h localhost --where "1=1 ORDER BY id DESC LIMIT 100" database table > export_file.sql

The above will save the last 100 records into export_file.sql, assuming the table you're exporting from has an auto-incremented id column.

You will need to alter the user, localhost, database and table values. You may optionally alter the id column and export file name.
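
To load such a dump back in later, a minimal sketch using the same placeholder names would be:

mysql -u user -p -h localhost database < export_file.sql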

Solution 7:[7]

MySQL Workbench also has this feature in the GUI. Simply run a query and click the save icon next to Export/Import.


Then choose "SQL INSERT statements (*.sql)" in the list.


Enter a name, click Save, confirm the table name, and you will have your dump file.

Solution 8:[8]

Combining much of the above, here is my real practical example, selecting records based on both meter ID and timestamp. I have needed this command for years, and it executes really quickly.

mysqldump -uuser -ppassword main_dbo trHourly --where="MeterID =5406 AND TIMESTAMP<'2014-10-13 05:00:00'" --no-create-info --skip-extended-insert | grep  '^INSERT' > 5406.sql
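
Because grep keeps only the INSERT lines, the resulting file contains no CREATE TABLE statement, so the target table must already exist when you load it back. A minimal restore sketch, using the same placeholder credentials:

mysql -uuser -ppassword main_dbo < 5406.sql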

Solution 9:[9]

Export query results from the mysql command line:

mysql -h120.26.133.63 -umiyadb -proot123 miya -e "select * from user where id=1" > mydumpfile.txt

Solution 10:[10]

If you want to dump specific fields from a table, this can be handy:

1/ Create a temporary table with your query.

create table tmptable select field1, field2, field3 from mytable where filter1 and filter2;

2/ Dump the whole temporary table. Then you have a dump file with your specific fields.

mysqldump -u user -p mydatabase tmptable > my-quick-dump.sql
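
Once the dump file is written, you will probably want to remove the temporary table again, for example:

drop table tmptable;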

Solution 11:[11]

To dump a specific table (note that -t is short for --no-create-info, so the CREATE TABLE statement is omitted):

mysqldump -u root -p dbname -t tablename --where="id<30" > post.sql

Solution 12:[12]

Here is my mysqldump command to select the same relation IDs from different tables:

 mysqldump --defaults-file=~/.mysql/datenbank.rc -Q -t -c --hex-blob \
 --default-character-set=utf8 --where="`cat where-relation-ids-in.sql`" \
 datenbank table01 table02 table03 table04 > recovered-data.sql

where-relation-ids-in.sql:

relation_id IN (6384291, 6384068, 6383414)

~/.mysql/datenbank.rc

[client]
user=db_user
password=db_password
host=127.0.0.1

Remark: If your relation_id list is huge, the WHERE clause comment in the dump file will be truncated, but all data is selected correctly ;-)

I hope it helps someone ;-)