I’m trying out some things with Dask for the first time. It ran fine a few weeks ago, but now I can’t get the LocalCluster to initialize: at one point I cut it off after it had been running for 30 minutes, and I still can’t get it started. Does anybody know what the issue might be? The code I’m trying to run is below; it never gets past the LocalCluster line.
<code>import dask.dataframe as dd
from dask.distributed import LocalCluster, Client
from dask.diagnostics import ResourceProfiler
import multiprocessing as mp
import time
from globals import * # Custom file with some global variables
# /questions/53394935/what-is-the-right-way-to-close-a-dask-localcluster
cluster = LocalCluster(n_workers=int(0.9 * mp.cpu_count()), processes=True, threads_per_worker=1, memory_limit="2GB")
print(cluster)
client = Client(cluster)
print(client.dashboard_link)
</code>