I am using the boto3.client.download_file() and boto3.client.upload_file() functions. In a Python 3.8 environment everything works fine, but after upgrading to Python 3.10 I get "RuntimeError: cannot schedule new futures after interpreter shutdown". I am on the latest version of boto3, and I have also tried several other versions (1.18, 1.17, 1.26, and the latest 1.34); all of them give the same error.
Traceback (most recent call last):
  File "/opt/gravity/cop-backup-restore/.venv/lib/python3.10/site-packages/awswrangler/s3/_fs.py", line 585, in open_s3_object
    yield s3obj
  File "/opt/gravity/cop-backup-restore/.venv/lib/python3.10/site-packages/awswrangler/s3/_upload.py", line 74, in upload
    s3_f.write(local_f.read())  # type: ignore[arg-type]
  File "/opt/gravity/cop-backup-restore/.venv/lib/python3.10/site-packages/awswrangler/s3/_fs.py", line 550, in write
    self.flush()
  File "/opt/gravity/cop-backup-restore/.venv/lib/python3.10/site-packages/awswrangler/s3/_fs.py", line 427, in flush
    self._upload_proxy.upload(
  File "/opt/gravity/cop-backup-restore/.venv/lib/python3.10/site-packages/awswrangler/s3/_fs.py", line 133, in upload
    future = self._exec.submit(
  File "/usr/lib/python3.10/concurrent/futures/thread.py", line 169, in submit
    raise RuntimeError('cannot schedule new futures after '
RuntimeError: cannot schedule new futures after interpreter shutdown
One solution I found while researching this error was to disable the use of threads. That worked, but we want to keep using threads: our file sizes can grow significantly large, and single-threaded transfers would slow down both downloads and uploads.
I also tried the thread.join() approach suggested in another similar question, but I get the same error.
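My understanding (which may be wrong) is that the error is raised when work is submitted to a ThreadPoolExecutor after interpreter shutdown has begun, e.g. from a daemon thread. The join-based workaround amounts to draining the pool on the main thread before exit, roughly:

```python
from concurrent.futures import ThreadPoolExecutor

def transfer_part(part_number: int) -> int:
    # Stand-in for uploading one multipart chunk
    return part_number * 2

def run_transfers(num_parts: int) -> list[int]:
    # The with-block waits for every future before the interpreter can
    # begin shutting down, so submit() is never called too late
    with ThreadPoolExecutor(max_workers=4) as pool:
        futures = [pool.submit(transfer_part, i) for i in range(num_parts)]
        return [f.result() for f in futures]

results = run_transfers(8)
```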
Another thing I tried was the awswrangler library. It works fine for smaller files, but it starts to give the same error when I increase the file size.