Does dask bag preserve order when using sequential dask.bag.map operations?
The documentation states that dask bags do not preserve order. However, the example given for dask.bag.map at https://docs.dask.org/en/stable/generated/dask.bag.map.html does something that implies order is preserved, or at least predictable.
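As a small experiment (a sketch, not a documented guarantee), one can chain two `map` calls over a multi-partition bag and inspect the computed result; `compute()` concatenates the partitions in their original order here, which is why the docs' example looks order-preserving:

```python
# A minimal sketch probing whether element order survives chained
# dask.bag.map calls. The docs warn not to rely on bag ordering,
# but in this simple case partitions come back in input order.
import dask.bag as db

b = db.from_sequence([1, 2, 3, 4, 5], npartitions=2)
result = b.map(lambda x: x * 10).map(lambda x: x + 1).compute()
print(result)  # matches input order in practice: [11, 21, 31, 41, 51]
```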
Cannot create a storer when reading an HDF5 file with `dd.read_hdf`
I want to load a pandas dataframe into a dask dataframe using the dd.read_hdf() method. I create a very basic pandas dataframe, then I separate the values from the column headers and index and save them in an HDF5 file. I can read the HDF5 file back and recreate the original dataframe, and that looks ok.
dask compute function, compute method and Client
I am trying to understand the difference between the compute function, the compute method, and Client.
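A minimal sketch of the three, under the default scheduler (the Client part is shown commented out, since it starts a local cluster): the `.compute()` method evaluates one collection, the `dask.compute` function evaluates several collections in one pass sharing common work, and `Client.compute` submits work to a distributed scheduler and returns a Future:

```python
# Contrast the compute method, the dask.compute function, and Client.
import dask
import dask.array as da

x = da.ones((1000,), chunks=100)
y = x + 1
z = x * 2

total = y.sum().compute()                 # method: one result, one graph
s1, s2 = dask.compute(y.sum(), z.sum())   # function: shared graph, one pass

# With a distributed Client, the same work runs on its workers instead
# of the default threaded scheduler (sketch only, not run here):
# from dask.distributed import Client
# client = Client()                # local cluster
# fut = client.compute(y.sum())    # returns a Future immediately
# fut.result()                     # blocks for the value
```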
Error with tuple indices when calling compute_chunk_sizes() on dask.array.argwhere() result
I am trying to slice the output of dask.array.argwhere(), but the result has unknown chunk sizes. The suggested solution, calling compute_chunk_sizes(), returns the error
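A minimal sketch of the intended workflow: `argwhere` returns an array whose first dimension has unknown chunk sizes, and `compute_chunk_sizes()` materializes them so positional slicing becomes legal. This succeeds on recent Dask versions; if it still errors on yours, computing the argwhere result to NumPy first and slicing there is a fallback:

```python
# Slice argwhere output after resolving its unknown chunk sizes.
import numpy as np
import dask.array as da

x = da.from_array(np.array([0, 1, 0, 2, 0, 3]), chunks=3)
idx = da.argwhere(x > 0)          # shape (nan, 1): unknown chunks
idx = idx.compute_chunk_sizes()   # eagerly resolve chunk sizes
first_two = idx[:2].compute()     # positions of the first two nonzeros
print(first_two)
```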
Dask Read_SQL over big tables
I currently have code that downloads a BIG csv file and processes it with Dask (multiple transformations).