Context
I’m currently indexing documents into a vector database, and to process PDF files I ideally need to extract the text, tables and images from each PDF. I know there is a range of libraries for this kind of extraction, such as pypdf, pdfplumber (which I’m currently using) or PyMuPDF.
Problem
I want to create a final text file with the extracted content, but I want to keep the order of each element. I also need to format the tables as HTML. The problem comes when I try to extract text and tables, because when extracting the text, the text contained in the tables is extracted as well.
For example, using pdfplumber:
import pdfplumber

if __name__ == "__main__":
    with pdfplumber.open("docs/test.pdf") as pdf:  # pdf file containing text and tables
        for page in pdf.pages:
            text = page.extract_text()
            tables = page.extract_tables()
            print(f"{text =}")
            print(f"{tables =}")
The text contained in tables is also present in text.
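For the HTML part mentioned above, this is roughly the kind of conversion I have in mind (a minimal sketch; the table_to_html name is just for illustration, and I assume cells returned by extract_tables can be None):

from html import escape

def table_to_html(table):
    """Convert a pdfplumber table (a list of rows of cells) into an HTML string."""
    rows = []
    for row in table:
        # cells can be None, so replace them with an empty string before escaping
        cells = "".join(f"<td>{escape(cell or '')}</td>" for cell in row)
        rows.append(f"<tr>{cells}</tr>")
    return "<table>" + "".join(rows) + "</table>"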
How can I generate a text file with both results combined? Am I even investigating the right path?
Attempts
I have tried to go a little more “low level”. In pdfplumber, you can extract every object present in the file. For instance, each character and each image is considered an individual object. You can access them with page.chars for the characters. Each object contains coordinates (“top”, which is the distance to the top of the page, and “x0”, which is the distance to the left of the page).
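For example, this is the kind of per-character information you get (just an illustration on my test file; the keys shown are the ones I rely on below):

import pdfplumber

with pdfplumber.open("docs/test.pdf") as pdf:
    page = pdf.pages[0]
    for char in page.chars[:5]:  # look at the first few character objects
        # each char is a dict describing one glyph and its position on the page
        print(char["text"], char["top"], char["x0"])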
So I came up with the following function, which checks every table, then checks every line of text: if the line belongs to a table it is skipped, otherwise it is added as a text line:
import pdfplumber
from collections import defaultdict

def _extract_raw_elements_from_pdf(
    pdf_path: str
) -> list[dict]:
    _elements = []  # list where all the content will be stored (text lines, tables and images)
    with pdfplumber.open(pdf_path) as pdf:
        for page_num, page in enumerate(pdf.pages):  # iterate through every page
            table_start = len(_elements)
            # Extract tables
            table_content_history = []  # stores the tables seen on this page
            for table in page.extract_tables():  # table is a list[list[str | None]]
                _elements.append({
                    "type": "table",
                    "raw_content": table,
                    "x0": 0,  # not used
                    "top": 1e20,  # simulate an infinite value, refined below
                    "page": page_num
                })
                table_content_history.append(
                    # cells can be None, so replace them with an empty string
                    "".join("".join(cell or "" for cell in row) for row in table)
                )
            # Extract characters and group them into lines by vertical position
            char_lines = defaultdict(list)
            for char in page.chars:
                char_lines[round(char['top'], 1)].append(char)
            # Extract text lines
            for top, chars in char_lines.items():
                chars = sorted(chars, key=lambda c: c['x0'])
                line_text = "".join(char['text'] for char in chars)
                in_a_table = False
                for i, table_text in enumerate(table_content_history, start=table_start):
                    in_curr_table = True
                    for word in line_text.split(' '):
                        if word.strip(' \n') and word not in table_text:
                            in_curr_table = False
                    if in_curr_table:
                        # the line belongs to this table: update the table's coordinates
                        in_a_table = True
                        if top < _elements[i]['top']:
                            _elements[i]['top'] = top
                if not in_a_table:  # if the line is not part of a table, keep it as plain text
                    x0 = chars[0]['x0']
                    _elements.append({
                        "type": "text",
                        "raw_content": line_text,
                        "x0": x0,
                        "top": top,
                        "page": page_num
                    })
            # Extract images
            for img in page.images:
                _elements.append({
                    "type": "image",
                    "raw_content": img["stream"],
                    "x0": img["x0"],
                    "top": img["top"],
                    "page": page_num,
                    "shape": img["srcsize"],
                    "bits": img["bits"]
                })
    return _elements
I would then sort this list based on its elements' attributes:
def _reorder_elements(_elements):
    _elements.sort(key=lambda e: (e['page'], e['top'], e['x0']))
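And finally write everything to the output file, roughly like this (a rough sketch; table_to_html is the hypothetical helper sketched in the Problem section, and images only get a placeholder for now):

def _write_elements_to_file(_elements, output_path):
    lines = []
    for element in _elements:
        if element["type"] == "text":
            lines.append(element["raw_content"])
        elif element["type"] == "table":
            # table_to_html is the hypothetical HTML conversion sketched above
            lines.append(table_to_html(element["raw_content"]))
        elif element["type"] == "image":
            lines.append("[image]")  # placeholder, image handling is a separate problem
    with open(output_path, "w", encoding="utf-8") as f:
        f.write("\n".join(lines))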
This method has some problems, especially when the text is split into several columns.
If you have any simpler suggestion, I'm really interested; I feel my solution is overcomplicated for the task I want to perform.