I have a large CSV file that I pulled from an online database, and it doesn't read in cleanly. Commas inside the values of a couple of the columns get treated as delimiters, so the data in those rows gets split across extra columns.
The site is very old and buggy and hasn't been updated in a while. It only lets me export the data as a CSV, and I can't even scrape the data from the HTML because the site glitches when I switch pages, so I'm basically stuck with what I have.
Is there a way to fix this when importing into R or within R?
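A quick check from within R (assuming the export is saved locally as "export.csv") at least shows how many lines are affected:

```r
## count.fields() counts comma-separated fields per line, so any line with more
## than the expected 11 fields is one where embedded commas became delimiters.
n_fields <- count.fields("export.csv", sep = ",")
table(n_fields)          # how many lines have 11, 12, 13, 14 fields
which(n_fields > 11)     # positions of the broken lines (line 1 is the header)
```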
CSV FILE SPECS:
Rows: 26009 Columns: 11
── Column specification ───────────────────────────────────────────────────────────────────────────────
Delimiter: ","
chr (10): Status, Customer, Location, City, Description, PO, Date, Total, Accepted, Rejected
dbl (1): Number
I have 11 columns, but a few rows have data shifted into a 12th, 13th, and even a 14th column.
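To keep those shifted values around while I work on them, my plan is to read everything in as plain character with three overflow columns so nothing gets dropped. Something like this, where the file name and the extra1/extra2/extra3 names are placeholders and the column order is my best guess:

```r
## Read everything as character into 14 columns so the spilled values are kept;
## fill = TRUE pads the normal 11-field rows with "" in the overflow columns.
raw <- read.csv(
  "export.csv",
  header = FALSE, skip = 1,          # skip the original header line
  col.names = c("Number", "Status", "Customer", "Location", "City",
                "Description", "PO", "Date", "Total", "Accepted", "Rejected",
                "extra1", "extra2", "extra3"),
  colClasses = "character",
  fill = TRUE
)

## Rows where anything landed in an overflow column are the shifted ones
bad <- raw$extra1 != "" | raw$extra2 != "" | raw$extra3 != ""
sum(bad)
```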
Here’s a sample of the data:
This is how the data should look:
The most problematic columns seem to be Customer and Location (mostly Location). The last two columns (Accepted, Rejected) aren't particularly important because that information is already in the 'Status' column.
I'm still just brainstorming, but I have a unique list of cities that I've pulled, so I thought about using the 'City' column as a reference for how far the data has shifted, at least as a first pass. For example, in row 9 the city 'Ray City' ends up in the 'PO' column, so all of the cells between 'Location' and that point should be concatenated back onto the 'Location' string, commas included, e.g. "LDS – Ray City 1, 4, 10".
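Here's a rough sketch of that idea, building on the raw and bad objects above and assuming my unique city names are in a character vector called city_list. It only handles the case where Location is the column that split; rows where Customer split instead would need the same trick applied again with a different anchor:

```r
## For one shifted row: find which column the city actually landed in, glue the
## split Location pieces back together (restoring the commas), then slide the
## remaining fields back into place. Positions assume the column order used
## above (Location is the 4th column, City the 5th).
fix_row <- function(vals, city_list, n_cols = 11, city_pos = 5) {
  vals <- as.character(vals)
  hit  <- which(vals %in% city_list)[1]                       # where the city really is
  if (is.na(hit) || hit <= city_pos) return(vals[1:n_cols])   # nothing to fix
  shift <- hit - city_pos                                     # how far things slid right
  loc   <- paste(vals[(city_pos - 1):(city_pos - 1 + shift)], collapse = ", ")
  c(vals[1:(city_pos - 2)], loc, vals[hit:(hit + n_cols - city_pos)])
}

## Apply only to the shifted rows, then drop the overflow columns
fixed <- raw
fixed[bad, 1:11] <- t(apply(raw[bad, ], 1, fix_row, city_list = city_list))
fixed <- fixed[, 1:11]
```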
This method is likely going to require a lot of corrections, so if anyone has a better idea, I'm all ears.