I’m trying to scrape the Schedule of Investments from many 10-Qs using Puppeteer. I’ll link example 1 and example 2. I’m scraping every schedule table into a map, with categories as keys and the scraped data as values. Those maps are then stored in an array. The idea is that each piece of data must stay tied to its category.
map.set(category, cellData);
I’m already successfully finding the table handles and parsing categories across variations of the 10-Q. The problem is parsing the rows for data.
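For context, the working part looks roughly like this (simplified; in reality the category row is detected per 10-Q variation rather than assumed to be the first row):

```js
const tables = await page.$$('table');
for (const table of tables) {
  // Category headers for this schedule table.
  const categories = await table.$$eval('tr:first-child td', tds =>
    tds.map(td => td.textContent.trim()).filter(Boolean)
  );
  // ...parsing the data rows is where I'm stuck.
}
```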
The first idea I had was to find the indices of the TDs containing the categories within the table. Then I’d go row by row and simply store data whenever I hit those indices. Unfortunately, the number of TDs varies from row to row for no apparent reason. Sometimes it’s to add a dollar sign; other times empty TDs are thrown in seemingly at random.
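In code, idea one looked roughly like this (simplified sketch of my approach, not the exact code):

```js
// Idea 1: remember which TD indices held the categories, then read
// the same indices on every following row.
const rows = await table.$$eval('tr', trs =>
  trs.map(tr => Array.from(tr.querySelectorAll('td'), td => td.textContent.trim()))
);
const categoryIndices = rows[0]
  .map((text, i) => (text ? i : -1))
  .filter(i => i !== -1);

for (const row of rows.slice(1)) {
  // Breaks whenever a row gains or loses TDs, because the indices
  // no longer line up with the header row.
  const values = categoryIndices.map(i => row[i]);
}
```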
My second idea built on the first. The approach was to handle rows where unwanted strings were found by splicing the array according to different cases. Say a single $ character appeared where content should be: I’d delete that row, and throw an error whenever a case came up that the code didn’t know. This has caused me to miss data.
My third idea was to find all the strings in a row, throw them into an array, and count them. If the length equaled the number of categories, I’d just store left to right. If it was greater, I’d remove all irrelevant strings (‘$’ and spaces) and compare the lengths again. And if the content count was less than the number of categories, I’d go case by case. However, this seems highly inefficient, and I suspect it will hurt the accuracy of the data, just like my second approach.
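Roughly, idea three looks like this (the `handleSpecialCases` helper is hypothetical, standing in for the case-by-case logic):

```js
// Idea 3: collect every non-empty string per row, then reconcile
// the count against the number of categories.
const rows = await table.$$eval('tr', trs =>
  trs.map(tr => Array.from(tr.querySelectorAll('td'), td => td.textContent.trim()))
);
const categories = rows[0].filter(Boolean);
const map = new Map();

for (const row of rows.slice(1)) {
  let cells = row.filter(Boolean);
  if (cells.length > categories.length) {
    // Strip strings that carry no data.
    cells = cells.filter(text => text !== '$');
  }
  if (cells.length === categories.length) {
    categories.forEach((cat, i) => map.set(cat, cells[i]));
  } else {
    // Fewer strings than categories: case-by-case fallback.
    handleSpecialCases(row); // hypothetical helper
  }
}
```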
Is there a way to scrape data by its position on the X axis? Can child handles be found according to their X coordinates? If not, is there another approach you’d recommend? The only consistency across documents seems to be that the data always sits directly underneath its category. I thought of using an image-to-text library like Tesseract, but that would make parsing footnotes unreliable, and footnotes are vital to the project.
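For what it’s worth, here’s the kind of thing I’m imagining: read each cell’s getBoundingClientRect() inside the page and assign it to the category header whose horizontal span it falls under (untested sketch; the 5px tolerance is a guess):

```js
// Hypothetical: group data cells under the category header whose
// horizontal span they sit beneath, via getBoundingClientRect.
const columns = await table.evaluate(tableEl => {
  const headers = Array.from(tableEl.querySelectorAll('tr:first-child td'))
    .filter(td => td.textContent.trim())
    .map(td => {
      const { left, right } = td.getBoundingClientRect();
      return { name: td.textContent.trim(), left, right, values: [] };
    });

  for (const tr of Array.from(tableEl.querySelectorAll('tr')).slice(1)) {
    for (const td of tr.querySelectorAll('td')) {
      const text = td.textContent.trim();
      if (!text || text === '$') continue;
      const box = td.getBoundingClientRect();
      const mid = box.left + box.width / 2;
      // Match the cell's midpoint to a header span, with a small tolerance.
      const hit = headers.find(h => mid >= h.left - 5 && mid <= h.right + 5);
      if (hit) hit.values.push(text);
    }
  }
  return headers.map(({ name, values }) => ({ name, values }));
});
```

Would something along those lines be viable, or is there a cleaner way to query handles by coordinates?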
Thank you!