I have 4 million IDs in a .csv file. I am trying to insert these values into a blank table in SSMS. Due to SQL Server's limit of 1,000 rows per `INSERT ... VALUES` statement, I will need to use Python to build an INSERT statement for every 900 rows. The end result will be roughly 4,445 INSERT statements, each containing up to 900 values.
So far I have:
```python
import pandas as pd

csvfile = pd.read_csv('my_file_path')
print(csvfile)

statement = "INSERT INTO Table(ID) VALUES"
```
`statement` will hold the start of each INSERT statement.
My ID file looks like this:

```
ID
124
456
345
3489
43
568
654
678
6453
098
566
43455
556
```

... continuing to 4 million rows of IDs.
I need Python to read the .csv, group the IDs into batches of 900, and emit them in INSERT statements like this:
Output:
INSERT INTO Table(ID) VALUES (124), (456), (345), ... -- from rows 2-901 of IDs
INSERT INTO Table(ID) VALUES (877), (6565456), (23235), ... -- from rows 902-1801 of IDs
etc.
I am not sure where to start, as none of the examples I have found online do something similar. It seems there is a grouping function, but I'm not sure it is applicable here.