I’ve been trying to pansharpen some multispectral images using Python within a Docker environment, but the output image quality is very poor. The pansharpened image appears degraded, and I’m not sure what I might be doing wrong. I would appreciate any guidance or suggestions.
1- Locate the Panchromatic Image: I first search for the panchromatic file in the directory using the following function:
import os
import subprocess

def find_panchro_images(directory):
    """Split the .tif files under a directory into panchromatic and other images,
    based on the BandName tag reported by exiftool."""
    panchro_images = []
    other_images = []
    for root, _, files in os.walk(directory):
        for file in sorted(files):  # Sort the files before processing
            if file.lower().endswith(".tif"):
                file_path = os.path.join(root, file)
                try:
                    result = subprocess.run(
                        ["exiftool", "-BandName", file_path], stdout=subprocess.PIPE, stderr=subprocess.PIPE, text=True
                    )
                    if "Panchro" in result.stdout:
                        panchro_images.append(file_path)
                    else:
                        other_images.append(file_path)
                except Exception as e:
                    print(f"Error processing {file_path}: {e}")
    return panchro_images, other_images
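For context, this is roughly how I sanity-check that function on a single folder before running the full pipeline (the path below is only a placeholder, not my real data directory):

# Hypothetical example: verify the panchromatic/other split on one folder.
pan_files, ms_files = find_panchro_images("/data/photo_1")
print(f"Found {len(pan_files)} panchromatic and {len(ms_files)} other TIFFs")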
2- Stack the Multispectral Channels Together: After identifying the multispectral bands, I stack them using this function:
import numpy as np
import rasterio

def read_and_stack_bands(files):
    """Read band 1 of each file and stack the results into a (rows, cols, n_bands) array."""
    bands = []
    for file in files:
        with rasterio.open(file) as src:
            band = src.read(1)
            bands.append(band)
    stacked_image = np.dstack(bands)
    return stacked_image
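To rule out obvious problems at this step, I can at least check the shape, dtype, and value range of what comes back. A minimal check, assuming ms_files holds the four multispectral band paths found in step 1:

# Quick check of the stacked multispectral array (ms_files is assumed from step 1).
stacked = read_and_stack_bands(ms_files)
print(stacked.shape, stacked.dtype)  # expected: (rows, cols, 4) with a single consistent dtype
print(stacked.min(), stacked.max())  # the value range matters if anything rescales later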
3- Pansharpening: I then use the pansharpen function from the GitHub repository https://github.com/ThomasWangWeiHong/Simple-Pansharpening-Algorithms for the pansharpening process:
def pansharpen(m_files, pan, psh, R=3, G=2, B=1, NIR=4, method="esri", W=0.05):
    # Function implementation...
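For what it's worth, my understanding of the "esri" option is that it adds (pan minus a weighted mean of the multispectral bands) back onto every band. The sketch below is only my reading of that idea, not the repository's actual code; the function name is made up, and it assumes the multispectral array has already been resampled to the panchromatic resolution:

import numpy as np

def esri_like_pansharpen(ms, pan, nir_weight=0.05):
    """Rough sketch of ESRI-style additive pansharpening (not the repo's exact code).

    Assumes `ms` is a float array of shape (rows, cols, 4) with the NIR band in the
    last position, already resampled to the panchromatic resolution, and `pan` is a
    float array of shape (rows, cols).
    """
    weights = np.array([1.0, 1.0, 1.0, nir_weight])
    # Weighted mean of the multispectral bands, then the per-pixel adjustment.
    weighted_mean = np.tensordot(ms, weights, axes=([2], [0])) / weights.sum()
    adjustment = pan - weighted_mean
    # Add the same adjustment onto every band.
    return ms + adjustment[:, :, np.newaxis]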
4- Finally, I execute the following script to apply the pansharpening:
if __name__ == "__main__":
    # yasmina_dir is my base data directory, defined earlier in the script.
    for folder in ["photo_1", "photo_2"]:
        folder_path = os.path.join(yasmina_dir, folder)
        output_folder = os.path.join(yasmina_dir, "output", folder)
        os.makedirs(output_folder, exist_ok=True)

        panchro_images, other_images = find_panchro_images(folder_path)
        print(f"\nIn folder '{folder}':")

        if panchro_images:
            print("Panchromatic Image:")
            for image in panchro_images:
                print(image)
            print("Other Images:")
            for image in other_images:
                print(image)

            if len(panchro_images) == 1:
                other_images = sorted(other_images)[:4]  # Use only [B, G, R, NIR]
                output_image_name = os.path.basename(other_images[0]).replace("_1.tif", "_pansharpened.tif")
                psh_output_path = os.path.join(output_folder, output_image_name)
                pansharpen(other_images, panchro_images[0], psh_output_path)
            else:
                print("Multiple Panchromatic images found. Skipping pansharpening.")
        else:
            print("No Panchromatic images found. Skipping pansharpening.")
When I check the output image, it looks very bad: it is overly pixelated and appears to be grayscale. I'm not sure whether the issue is the pansharpening method I'm using, a problem with how I'm reading the images, or something else.
Has anyone encountered a similar issue or can suggest what might be going wrong? I’m particularly interested in understanding if there’s a flaw in my image processing pipeline or if the problem lies with how the images are being rescaled or processed before pansharpening.
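For reference, this is the kind of check I can run on the result (psh_output_path is the output path produced by the script above); if the dtype or value range looks off, that might explain the grayscale or washed-out appearance in my viewer:

import rasterio

# Inspect the pansharpened output: band count, dtypes, dimensions, and value range.
with rasterio.open(psh_output_path) as src:
    print(src.count, src.dtypes, src.width, src.height)
    data = src.read()
    print(data.min(), data.max())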