Every day, I migrate data from a MySQL database to BigQuery. For most processes, I use `LoadJobConfig()` with `write_disposition=WRITE_APPEND`. However, for my `raw_stock` table, `WRITE_APPEND` doesn't work because I need to update the existing rows instead.
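For context, the daily loads look roughly like this (a minimal sketch; the project, dataset, table, and file names are placeholders):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Daily append load -- fine for most tables, but not for raw_stock,
# where existing rows must be updated rather than duplicated.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,  # the CSV files have a header row
    write_disposition=bigquery.WriteDisposition.WRITE_APPEND,
)

with open("daily_export.csv", "rb") as f:  # placeholder file name
    load_job = client.load_table_from_file(
        f, "my_project.my_dataset.some_table", job_config=job_config
    )
load_job.result()  # wait for the load to finish
```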
I have a local file in .csv format with a header, containing only the rows that need updating. They are identified by `id_store` and `id_product` (which together form a composite primary key). The columns to be updated are `product_amount` and `date_update`.
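For illustration, the file looks something like this (real header, made-up values):

```
id_store,id_product,product_amount,date_update
12,10001,45,2024-06-01
12,10002,7,2024-06-01
```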
The entire process is done through Python. The necessary libraries and credentials are already set up.
My attempts (sketched below) resulted in either deleting and replacing the data or adding new rows, but I specifically need the updates and nothing else. I searched the documentation provided by GCP but couldn't find the right approach.
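Roughly, this is what those attempts for `raw_stock` looked like (a sketch with the same placeholder names as above):

```python
from google.cloud import bigquery

client = bigquery.Client()

# Attempt: overwrite instead of append. This replaced the WHOLE
# raw_stock table with just the rows from the CSV, deleting everything
# that didn't need updating.
job_config = bigquery.LoadJobConfig(
    source_format=bigquery.SourceFormat.CSV,
    skip_leading_rows=1,
    write_disposition=bigquery.WriteDisposition.WRITE_TRUNCATE,
)

with open("raw_stock_updates.csv", "rb") as f:  # placeholder file name
    client.load_table_from_file(
        f, "my_project.my_dataset.raw_stock", job_config=job_config
    ).result()

# Switching back to WRITE_APPEND instead just added the CSV rows as
# new rows, duplicating the existing (id_store, id_product) keys.
```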
I’m still a novice in this area and unsure how to handle the updates. Could someone please help me out?
Thank you in advance.