I’m trying to load files from SAP ECC using Azure Data Factory: I have a Data Flow and a pipeline that executes it. Everything goes smoothly until the file is loaded into a SQL Server table, because schema drift is occurring between ECC and Blob Storage. When I run the pipeline, I get an error saying the column set I’m trying to load is larger than the column set in the destination table.
A co-worker suggested just adding a few extra columns to the destination table, but that change would have to be replicated across 70+ tables and stored procedures, so I’d much rather handle it in the pipeline.
The files have a .txt extension, but the content is comma-separated, with each column enclosed in double quotes. Here is an example of a value that’s causing the issue:
"2358UD ECO LINE 40",25NM,RED"
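If I’m reading the file right, that whole string is supposed to be a single column value (40-inch, 25 NM, red), but the unescaped quote after the 40 ends the field early. To illustrate, here’s a minimal reproduction with Python’s csv module; I’m assuming ADF’s delimited-text parser splits it the same way:

```python
import csv
import io

# The raw line as it appears in the extract (straight double quotes).
raw = '"2358UD ECO LINE 40",25NM,RED"'

for row in csv.reader(io.StringIO(raw)):
    print(len(row), row)

# Output: 3 ['2358UD ECO LINE 40', '25NM', 'RED"']
# The unescaped quote after "40" closes the field early, so the commas
# that follow are treated as delimiters and one value becomes three
# columns -- which matches the "column set is greater than the
# destination" error I'm seeing.
```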
Here are the dataset settings I’m using:

[screenshot of the dataset settings]
I suppose the source should really be putting an escape character (a backslash, say) before the embedded quote, and perhaps before the comma as well, but it doesn’t, and I’m at a loss for how to ingest this data as-is.
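One idea I’ve been toying with is inserting a pre-processing step (e.g., an Azure Function or a custom activity) between the extract and the Data Flow that repairs each line before ADF parses it, doubling any quote that can’t be a field boundary. This is only a sketch, not production code: `escape_interior_quotes` is a hypothetical helper of my own, and it leans on the assumptions that every real column is quoted and that the source never escapes anything itself.

```python
def escape_interior_quotes(line: str, delim: str = ",") -> str:
    """Double any quote that cannot be a field boundary, assuming every
    field in the line is fully enclosed in double quotes and the source
    applies no escaping of its own."""
    out = []
    n = len(line)
    for i, ch in enumerate(line):
        if ch != '"':
            out.append(ch)
            continue
        at_start = i == 0
        at_end = i == n - 1
        # If all fields are quoted, an opening quote directly follows '",'
        # and a closing quote is directly followed by ',"'. Any other
        # quote must be data.
        opens = i >= 2 and line[i - 1] == delim and line[i - 2] == '"'
        closes = i + 2 < n and line[i + 1] == delim and line[i + 2] == '"'
        if at_start or at_end or opens or closes:
            out.append('"')   # real field boundary: keep as-is
        else:
            out.append('""')  # embedded quote: double it so parsers keep it

    return "".join(out)


print(escape_interior_quotes('"2358UD ECO LINE 40",25NM,RED"'))
# "2358UD ECO LINE 40"",25NM,RED"  -> parses back to a single column
```

The obvious holes are values that legitimately contain `","` and values with embedded newlines, which per-line processing would mangle. If there’s a way to make ADF’s quote/escape settings on the delimited-text dataset cope with this directly, I’d much rather do that.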
I am happy to provide more details as well. Any help would be greatly appreciated!