I’m working on a Python script to merge PDF files, and I need some help with de-duplicating images on each page. I’m using the pypdf
library and running into an issue where multiple instances of the same image are detected on each page. I want to achieve functionality similar to Adobe Acrobat’s “Compress” feature, which ensures only one instance of each image per page.
Here is the version of the code I am currently using:
```python
from pypdf import PdfReader, PdfWriter
import logging
import io


def count_images_in_page(page):
    # Helper (omitted from my original snippet): pypdf's page.images
    # yields one entry per image XObject the page references.
    return len(page.images)


def process_pdf(buffer, pdf_writer, logger):
    try:
        pdf_reader = PdfReader(buffer)
        if len(pdf_reader.pages) > 0:
            for page_num, page in enumerate(pdf_reader.pages, start=1):
                pdf_writer.add_page(page)
                print(f"Added page {page_num} from input PDF to writer")
        else:
            logger.error("PDF has no pages")
            return False
    except Exception as e:
        buffer.seek(0)
        logger.error(f"Error processing PDF: {e}")
        return False
    return True


if __name__ == "__main__":
    logging.basicConfig(level=logging.ERROR)
    logger = logging.getLogger(__name__)

    input_pdf_paths = ["merged_by_adobe.pdf"]
    output_pdf_path = "custom_merge.pdf"

    pdf_writer = PdfWriter()
    for input_pdf_path in input_pdf_paths:
        with open(input_pdf_path, "rb") as f:
            buffer = io.BytesIO(f.read())
        if process_pdf(buffer, pdf_writer, logger):
            print(f"Processed {input_pdf_path}")
        else:
            print(f"Failed to process {input_pdf_path}")

    for pg_indx, page in enumerate(pdf_writer.pages, start=1):
        image_count = count_images_in_page(page)
        if image_count > 0:
            print(f"Found {image_count} image(s) on page {pg_indx}")

    with open(output_pdf_path, "wb") as f:
        pdf_writer.write(f)
    print(f"PDFs merged and saved to {output_pdf_path}")
```
Issue:
When I merge PDFs, the log reports multiple instances of the same image on each page. This is a problem because in my real application I'm processing the images to reduce their quality until the output fits under a certain file-size threshold, and the number of images directly affects how long that processing takes.
One observation: the image count reported per page roughly corresponds to the number of pages in the source PDF. What I don't understand is why each individual page would report that many images.
For example, consider this log trace from the source code:
```
Found 3 image(s) on page 1
Found 3 image(s) on page 2
Found 3 image(s) on page 3
Found 34 image(s) on page 4
etc...
```
This results in an unexpectedly high number of images per page, which shouldn’t be the case.
Objective:
I want to de-duplicate the images on each page, ensuring only one instance of each image per page, similar to what the compress functionality in Adobe Acrobat does.
For example, consider this log trace from the source code when using a PDF compressed with Adobe Acrobat:
```
Found 1 image(s) on page 1
Found 1 image(s) on page 2
Found 1 image(s) on page 3
Found 1 image(s) on page 4
etc...
```
Questions:
- Is there a way to detect and remove duplicate images per page using pypdf?
- How can I modify my script to ensure only one instance of each image per page?
Any guidance or suggestions on how to achieve this would be greatly appreciated!
Thank you in advance!