I currently have code that downloads a big CSV file and processes it with Dask (multiple transformations).
I've been informed that I won't be able to download the CSV file anymore; instead, I must query the same information from a SQL database (IBM DB2).
My question is: how does Dask's read_sql handle big tables? Does it read until the data fits in memory (like pandas), or is there some optimization?
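
For reference, this is a rough sketch of what I'm planning to try. The connection string, table name, and index column are placeholders, and I'm assuming the ibm_db_sa SQLAlchemy dialect for DB2:

```python
import dask.dataframe as dd

# Hypothetical connection string; assumes the ibm_db_sa SQLAlchemy dialect is installed
con = "ibm_db_sa://user:password@host:50000/MYDB"

# read_sql_table splits the table into partitions along an indexed column,
# so each partition can be loaded lazily instead of pulling everything at once
ddf = dd.read_sql_table(
    "MY_TABLE",       # hypothetical table name
    con,
    index_col="ID",   # hypothetical indexed numeric column used for partition boundaries
    npartitions=20,   # number of chunks to split the query into
)
```

Is this the right approach for a table that is much larger than memory, or does Dask still end up materializing the whole result?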