I need to handle extremely large matrices (shape (150000, 150000) or larger) and perform matrix operations on them, mainly matrix multiplication and matrix inversion. This needs a lot of memory (more than 183 GB), so the calculations can fail. Is there any way to reduce the memory usage by sacrificing runtime?
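For scale, a single dense float64 matrix of that shape already accounts for roughly 180 GB on its own, before any temporaries created during multiplication or inversion:

```python
n = 150_000
gb = n * n * 8 / 1e9   # 8 bytes per float64 element
print(gb)              # 180.0 GB for a single matrix
```
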
The programming language I am using is Python, but I am also open to other languages like C++ and MATLAB.
I have tried changing the matrix dtype from float64 to float16, but numpy.linalg does not support float16. I have also tried changing the dtype from float64 to float32, but NumPy still seems to treat the matrices as float64, so the memory usage stays the same.
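For what it's worth, on a small stand-in matrix (the real ones are 150000 x 150000) NumPy does keep float32 through matmul and inversion when the arrays are cast at creation, and the arrays then take half the bytes, so I may be upcasting somewhere without noticing:

```python
import numpy as np

n = 1000  # stand-in size; the real matrices are ~150000 x 150000
a = np.random.rand(n, n).astype(np.float32)  # cast once, at creation
b = np.random.rand(n, n).astype(np.float32)

c = a @ b                  # matmul stays float32
inv_a = np.linalg.inv(a)   # single-precision LAPACK routine, stays float32
print(c.dtype, inv_a.dtype)  # float32 float32
print(a.nbytes)              # 4 bytes per element, half of float64
```
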
Another thing I have tried is using Dask to chunk the matrix, but the memory usage is still very large, so the program cannot run properly.
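My Dask attempt looked roughly like the sketch below (the sizes and chunk shapes here are stand-ins, not my real values); the blocked matmul is lazy and only materializes chunks at `compute()` time, which is the behavior I was hoping would cap the memory:

```python
import numpy as np
import dask.array as da

n = 2000       # stand-in; the real case is 150000
chunk = 500    # chunks must be small enough that a few fit in RAM at once

x = da.random.random((n, n), chunks=(chunk, chunk)).astype(np.float32)
y = da.random.random((n, n), chunks=(chunk, chunk)).astype(np.float32)

z = x @ y                # lazy, blocked matrix multiplication
result = z.compute()     # evaluation proceeds chunk by chunk
print(result.shape, result.dtype)
```
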
I can accept a somewhat longer runtime as long as the program runs properly. How can I do that?