I am trying to import a very large CSV file into a MySQL database (using DBeaver, by the way), and every time I run the import it fails with a new error, for example "X has no default value". Each time this happens I have to abort the import (which takes A LOT of time), fix the table-creation SQL, and restart the import from scratch. On every restart I also have to manually re-map almost all of the columns from the CSV to the target table in DBeaver.
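For context, I believe these errors come from NOT NULL columns that have no default, so a row with a missing value aborts the whole load. The DDL I keep patching looks roughly like this (column names here are placeholders, not my real schema); making columns nullable with an explicit default is the kind of fix I apply after each failure:

```sql
-- Illustrative only; the real table has many more columns.
CREATE TABLE my_table (
    id INT AUTO_INCREMENT PRIMARY KEY,
    col_a VARCHAR(255) NULL DEFAULT NULL,  -- nullable so empty CSV fields don't abort the import
    col_b INT NULL DEFAULT NULL
);
```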
I would like to know if there are any smart ways to streamline this chore, in particular whether there is any way to save the column mapping I am doing by hand. I tried modifying the headers directly in the CSV file; that spared me from mapping some of the columns, but it didn't work for all of them. I would really like to avoid repeating the same work for days. Is there any way I can optimize this workflow? Thank you in advance.
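One idea I am considering is skipping the DBeaver import wizard entirely and loading the file with a SQL script, so the column mapping lives in the statement itself and can simply be re-run after every schema fix. A rough sketch of what I have in mind (the file path and column names are placeholders, and I assume local_infile is enabled on both server and client):

```sql
-- Maps CSV columns to table columns explicitly, so the mapping is saved as a re-runnable script.
LOAD DATA LOCAL INFILE '/path/to/data.csv'
INTO TABLE my_table
FIELDS TERMINATED BY ',' OPTIONALLY ENCLOSED BY '"'
LINES TERMINATED BY '\n'
IGNORE 1 LINES                 -- skip the CSV header row
(col_a, col_b);                -- column list = the mapping I currently redo by hand
```

But I'm not sure whether this is the right direction, or whether DBeaver itself can persist the mapping between runs.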