Fastest way to import thousands of big CSV files into a MySQL database?

I have around 1000 big datasets, each with 2-3 million rows of data. I want to import each of them into MySQL so that analysis becomes easier. I wrote the code below for this purpose, but the processing is very slow: it takes around 5-6 minutes per file. Is there a faster way to import all the CSVs into the database?

from sqlalchemy import create_engine
import os
import pandas as pd

data_dir = 'D:\\All_Tick_Data\\Tick_dataset\\'

# Create the engine once, outside the loop, instead of once per file
engine = create_engine("mysql://root:rocky@localhost/options")

with engine.connect() as con:
    for file in os.listdir(data_dir):
        print(file)
        df = pd.read_csv(os.path.join(data_dir, file))
        # index=False avoids writing the meaningless RangeIndex as a column;
        # chunksize bounds the size of each INSERT batch
        df.to_sql(name='options_data', con=con, if_exists='append',
                  index=False, chunksize=10000)


Solution 1:[1]

Have your code generate a batch of LOAD DATA SQL commands, then feed them to the command-line mysql tool. LOAD DATA bypasses the per-row overhead of individual INSERT statements and is typically the fastest way to bulk-load CSVs into MySQL.

Also consider adding secondary indexes (anything other than the PRIMARY KEY) only after loading the table(s); maintaining indexes during the load slows it down considerably.
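The approach above can be sketched as a small script that writes one LOAD DATA LOCAL INFILE statement per CSV into a .sql file. This is a minimal sketch, not the answerer's exact code: the directory and table name are taken from the question, the helper names are my own, and the FIELDS/LINES clauses assume a plain comma-separated file with a single header row — adjust them to match your actual CSV format.

```python
import os

# Directory and table name from the question
DATA_DIR = 'D:\\All_Tick_Data\\Tick_dataset\\'
TABLE = 'options_data'

def load_data_sql(csv_path, table):
    """Build a LOAD DATA LOCAL INFILE statement for one CSV file."""
    # Forward slashes avoid backslash-escaping issues in MySQL string literals
    path = csv_path.replace('\\', '/')
    return (
        f"LOAD DATA LOCAL INFILE '{path}' "
        f"INTO TABLE {table} "
        "FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '\"' "
        "LINES TERMINATED BY '\\n' "
        "IGNORE 1 LINES;"  # skip the CSV header row
    )

def write_script(file_names, out_path='load_all.sql'):
    """Write one LOAD DATA statement per CSV into a single SQL script."""
    with open(out_path, 'w') as f:
        for name in file_names:
            f.write(load_data_sql(os.path.join(DATA_DIR, name), TABLE) + '\n')
```

The generated script can then be piped through the client, e.g. `mysql --local-infile=1 -u root -p options < load_all.sql` (the server must also have `local_infile` enabled).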

Sources

This article follows the attribution requirements of Stack Overflow and is licensed under CC BY-SA 3.0.

Source: Stack Overflow

Solution Source
Solution 1 Rick James