I am doing an independent study on clustering with a GNN method. Here is the GitHub link of the paper. I want to use the GPU to run all the .ipynb files in Jupyter Notebook; I use Main.ipynb as the example. My computer environment is:
- CPU: Intel(R) Core(TM) i7-10700 CPU @ 2.90GHz
- Memory: 16 GB (15.9 GB available)
- Operating system: Windows 10 (version 22H2)
- Graphics card: NVIDIA GeForce RTX 3060
Following the guide How to Use GPUs from a Docker Container, I downloaded NVIDIA CUDA, cuDNN, and the NVIDIA Container Toolkit. Since the Container Toolkit is only available in a Linux environment, I set up an Ubuntu environment under WSL2. Here are the validation results from the Ubuntu command line:
- Ubuntu version: 18.04.6
~$ hostnamectl
Static hostname: pc24
Icon name: computer-container
Chassis: container
Machine ID: f87dcf9800cb4680ad72a5f48a54e2cb
Boot ID: 0db53b32bd3648dbb2d356a0b961543c
Virtualization: wsl
Operating System: Ubuntu 18.04.6 LTS
Kernel: Linux 5.15.146.1-microsoft-standard-WSL2
Architecture: x86-64
- CUDA version: 10.1.243
~$ nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2019 NVIDIA Corporation
Built on Sun_Jul_28_19:07:16_PDT_2019
Cuda compilation tools, release 10.1, V10.1.243
- cuDNN version: 7.6.5
~$ cat /usr/local/cuda/include/cudnn.h | grep CUDNN_MAJOR -A 2
#define CUDNN_MAJOR 7
#define CUDNN_MINOR 6
#define CUDNN_PATCHLEVEL 5
--
#define CUDNN_VERSION (CUDNN_MAJOR * 1000 + CUDNN_MINOR * 100 + CUDNN_PATCHLEVEL)
#include "driver_types.h"
- Nvidia container toolkit version: 1.15.0
~$ dpkg -l | grep nvidia-container-toolkit
ii nvidia-container-toolkit 1.15.0-1 amd64 NVIDIA Container toolkit
ii nvidia-container-toolkit-base 1.15.0-1 amd64 NVIDIA Container Toolkit Base
It seems that everything is ready according to the guide. However, the GPU is not utilized while the notebook is running. The following is how I start the Docker container and run the .ipynb files.
- Start the container from the Ubuntu (WSL2) command line:
~$ sudo docker run --runtime=nvidia -it -v ~/graph-sc-master:/workspace/graph-sc -p 8888:8888 graph-sc
- Open Jupyter Notebook: paste the URL into the Edge browser:
http://localhost:8888/tree/graph-sc/notebooks
During processing, I first use a code cell in the Jupyter notebook to check the GPU environment:
!nvidia-smi
import torch
if torch.cuda.is_available():
    device = torch.device("cuda")
    print("d: ", device)
    print("GPU: ", torch.cuda.get_device_name(0))
Sat May 25 11:19:06 2024
+-----------------------------------------------------------------------------------------+
| NVIDIA-SMI 550.76.01 Driver Version: 552.22 CUDA Version: 12.4 |
|-----------------------------------------+------------------------+----------------------+
| GPU Name Persistence-M | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap | Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|=========================================+========================+======================|
| 0 NVIDIA GeForce RTX 3060 On | 00000000:01:00.0 On | N/A |
| 41% 31C P8 13W / 170W | 456MiB / 12288MiB | 4% Default |
| | | N/A |
+-----------------------------------------+------------------------+----------------------+
+-----------------------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=========================================================================================|
| 0 N/A N/A 930 G /Xwayland N/A |
+-----------------------------------------------------------------------------------------+
d: cuda
GPU: NVIDIA GeForce RTX 3060
The `CUDA Version: 12.4` shown by nvidia-smi appears to be only a display value; I believe it reflects the driver's maximum supported CUDA version rather than the CUDA 10.1 toolkit installed in the container.
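To double-check that CUDA work launched from inside the container actually reaches the card, I can run a small stress cell like the one below (my own sketch; the matrix size and loop count are arbitrary) and watch nvidia-smi while it loops:

```python
import time
import torch

# Repeated large matrix multiplications on the GPU; while this cell runs,
# nvidia-smi should report clearly non-zero GPU utilization.
device = torch.device("cuda")
a = torch.randn(4096, 4096, device=device)
b = torch.randn(4096, 4096, device=device)
torch.cuda.synchronize()
t0 = time.time()
for _ in range(100):
    c = a @ b
torch.cuda.synchronize()
print(f"100 matmuls took {time.time() - t0:.2f} s on {torch.cuda.get_device_name(0)}")
```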
Here is the Dockerfile content:
FROM pytorch/pytorch:1.4-cuda10.1-cudnn7-runtime
RUN apt update \
    && apt install -y \
        nodejs \
        npm \
    && rm -rf /var/lib/apt/lists/*
RUN pip install setuptools==45.0.0 \
        jupyterlab==2.1.4 \
        notebook==6.0.3 \
        scikit-learn==0.24.2 \
        lmdb \
        attrdict \
        h5py \
        scipy==1.6.0 \
        ipywidgets==7.5.1 \
        keras==2.3.1 \
        tensorflow-gpu==1.15.0 \
        tensorboard==1.15.0 \
        tensorboardX \
        scanpy==1.5.1 \
        jgraph \
        louvain \
        openpyxl \
        pandas==1.2.1 \
        dgl-cu101==0.5.3 \
        xlrd==1.2.0 \
        leidenalg
RUN pip install markupsafe==2.0.1 \
        traitlets==5.3.0 \
        jinja2==3.0.0 \
        ipython==7.23.1 \
        numpy==1.18.5 \
        get_version==2.1 \
        legacy_api_wrap==1.2 \
        protobuf==3.20.3 \
        umap-learn==0.4.3 \
        numba==0.49.1 \
        gnn
# Jupyter notebook configuration
RUN pip install yapf==0.30.0
RUN pip install jupyter_contrib_nbextensions==0.5.1
RUN pip install jupyter_highlight_selected_word==0.2.0
RUN apt-get update
RUN apt-get install -y libglib2.0-0 libsm6 libxext6 libxrender-dev
RUN jupyter contrib nbextension install --user
RUN jupyter nbextension install https://github.com/jfbercher/code_prettify/archive/master.zip --user
RUN jupyter nbextension enable code_prettify-master/code_prettify
RUN jupyter nbextension install --py jupyter_highlight_selected_word
RUN jupyter nbextension enable highlight_selected_word/main
EXPOSE 8080 8888 6006
CMD ["jupyter", "notebook", "--port=8888", "--no-browser", "--ip=0.0.0.0", "--allow-root", "--NotebookApp.token=''"]
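Inside the running container, a quick sanity-check cell like this (my addition, not part of the repo) can confirm that the CUDA builds pinned in the Dockerfile are the ones actually importable:

```python
import torch
import dgl

# The Dockerfile pins pytorch 1.4 + cuda10.1 and dgl-cu101==0.5.3;
# this prints what is actually installed inside the container.
print("torch:", torch.__version__, "| CUDA build:", torch.version.cuda)
print("dgl  :", dgl.__version__)
print("CUDA available:", torch.cuda.is_available())
```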
Start running Main.ipynb:
- Open Main.ipynb
- Kernel – Restart & Run All
- Open Task Manager
I found that the GPU column in Task Manager never goes above 7%, and I do not touch the keyboard or mouse during processing. One thing I noticed: the first execution of Main.ipynb is slow (about 10 minutes), but the second one takes only about 1 minute. Here are the warnings that appeared during processing:
- In[2]
import sys
sys.path.append("..")
import argparse
import numpy as np
import dgl
from dgl import DGLGraph
import torch
import torch.nn.functional as F
import time
import matplotlib.pyplot as plt
import pandas as pd
from tqdm import tqdm
from collections import Counter
from sklearn.manifold import TSNE
import pickle
import h5py
import random
import glob2
import seaborn as sns
import train
import models
%load_ext autoreload
%autoreload 2
random.seed(42)
np.random.seed(42)
torch.manual_seed(42)
torch.cuda.manual_seed(42)
device = train.get_device()
import warnings
warnings.filterwarnings("ignore", category=FutureWarning)
path= "../"
# check available files
!ls ../real_data
Using backend: pytorch
/opt/conda/lib/python3.7/site-packages/dgl/base.py:45: DGLWarning: Detected an old version of PyTorch. Suggest using torch>=1.5.0 for the best experience.
return warnings.warn(message, category=category, stacklevel=1)
/opt/conda/lib/python3.7/site-packages/scanpy/api/__init__.py:7: FutureWarning:
In a future version of Scanpy, `scanpy.api` will be removed.
Simply use `import scanpy as sc` and `import scanpy.external as sce` instead.
FutureWarning,
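Right after In[2], I can also confirm what the repo's train.get_device() returned (a tiny check of my own):

```python
# `device` is set in In[2] via train.get_device(); it should print cuda (or cuda:0).
print("device from train.get_device():", device)
```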
- In[5]
# remove less variable genes
genes_idx, cells_idx = train.filter_data(X, highly_genes=nb_genes)
X = X[cells_idx][:, genes_idx]
Y = Y[cells_idx]
n_clusters = len(np.unique(Y))

# create graph
graph = train.make_graph(
    X,
    Y,  # Pass None if Y is not available for validation
    dense_dim=pca_size,
    normalize_weights=normalize_weights,
)
labels = graph.ndata["label"]
train_ids = np.where(labels != -1)[0]

# create training data loader
sampler = dgl.dataloading.MultiLayerFullNeighborSampler(n_layers)
dataloader = dgl.dataloading.NodeDataLoader(
    graph,
    train_ids,
    sampler,
    batch_size=batch_size,
    shuffle=True,
    drop_last=False,
    num_workers=1,
)

# create model
model = models.GCNAE(
    in_feats=pca_size,
    n_hidden=hidden_dim,
    n_layers=n_layers,
    activation=activation,
    dropout=0.1,
    hidden=hidden,
).to(device)
optim = torch.optim.Adam(model.parameters(), lr=1e-5)

# train model
results = train.train(model,
                      optim,
                      epochs,
                      dataloader,
                      n_clusters,
                      plot=False,
                      save=True,
                      cluster=["KMeans", "Leiden"])
train.py
import dgl
...some code...
graph = dgl.graph(([],[]))
/opt/conda/lib/python3.7/site-packages/dgl/base.py:45: DGLWarning: Recommend creating graphs by `dgl.graph(data)` instead of `dgl.DGLGraph(data)`.
return warnings.warn(message, category=category, stacklevel=1)
and
/opt/conda/lib/python3.7/site-packages/umap/spectral.py:4: NumbaDeprecationWarning: No direct replacement for 'numba.targets' available. Visit https://gitter.im/numba/numba-dev to request help. Thanks!
import numba.targets
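The DGLWarning above only points to the newer construction API; as a minimal sketch (with hypothetical edge lists, not the ones train.py builds), the recommended form would be:

```python
import dgl
import torch

# dgl.graph(data) is the constructor the warning recommends over dgl.DGLGraph(data).
src = torch.tensor([0, 1, 2])
dst = torch.tensor([1, 2, 0])
g = dgl.graph((src, dst))
```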
I also tried Benchmark_real_data.ipynb and encountered a new warning.
- In[3]
results = pd.DataFrame()
model_name = "GraphConv"
normalize_weights = "log_per_cell"
node_features = "scale"
same_edge_values = False
edge_norm = True
hidden_relu = False
hidden_bn = False
n_layers = 1
hidden_dim = 200
hidden = [300]
nb_genes = 3000
activation = F.relu

for dataset in files:
    print(f">> {dataset}")
    data_mat = h5py.File(f"{path}/real_data/{dataset}.h5", "r")
    Y = np.array(data_mat['Y'])
    X = np.array(data_mat['X'])
    n_clusters = len(np.unique(Y))

    genes_idx, cells_idx = train.filter_data(X, highly_genes=nb_genes)
    X = X[cells_idx][:, genes_idx]
    Y = Y[cells_idx]

    t0 = time.time()
    graph = train.make_graph(
        X,
        Y,
        dense_dim=pca_size,
        node_features=node_features,
        normalize_weights=normalize_weights,
    )
    labels = graph.ndata["label"]
    train_ids = np.where(labels != -1)[0]
    sampler = dgl.dataloading.MultiLayerFullNeighborSampler(n_layers)
    dataloader = dgl.dataloading.NodeDataLoader(
        graph,
        train_ids,
        sampler,
        batch_size=batch_size,
        shuffle=True,
        drop_last=False,
        num_workers=1,
    )
    print(
        f"INPUT: {model_name} {hidden_dim}, {hidden}, {same_edge_values}, {edge_norm}"
    )
    t1 = time.time()

    for run in range(3):
        t_start = time.time()
        torch.manual_seed(run)
        torch.cuda.manual_seed_all(run)
        np.random.seed(run)
        random.seed(run)
        model = models.GCNAE(
            in_feats=pca_size,
            n_hidden=hidden_dim,
            n_layers=n_layers,
            activation=activation,
            dropout=0.1,
            hidden=hidden,
            hidden_relu=hidden_relu,
            hidden_bn=hidden_bn,
        ).to(device)
        if run == 0:
            print(f">", model)
        optim = torch.optim.Adam(model.parameters(), lr=1e-5)
        scores = train.train(model,
                             optim,
                             epochs,
                             dataloader,
                             n_clusters,
                             plot=False,
                             cluster=["KMeans", "Leiden"])
        scores["dataset"] = dataset
        scores["run"] = run
        scores["nb_genes"] = nb_genes
        scores["hidden"] = str(hidden)
        scores["hidden_dim"] = str(hidden_dim)
        scores["tot_kmeans_time"] = (t1 - t0) + (
            scores['ae_end'] - t_start) + scores['kmeans_time']
        scores["tot_leiden_time"] = (t1 - t0) + (
            scores['ae_end'] - t_start) + scores['leiden_time']
        scores["time_graph"] = t1 - t0
        scores["time_training"] = (scores['ae_end'] - t_start)
        results = results.append(scores, ignore_index=True)

    # results.to_pickle(
    #     f"../output/pickle_results/{category}/{category}_gae.pkl")
    # print("Done")

results.mean()
train.py
import scanpy as sc
...some code...
def filter_data(X, highly_genes=500):
    X = np.ceil(X).astype(np.int_)
    adata = sc.AnnData(X)
../train.py:42: FutureWarning: X.dtype being converted to np.float32 from int64. In the next version of anndata (0.9) conversion will not be automatic. Pass dtype explicitly to avoid this warning. Pass `AnnData(X, dtype=X.dtype, ...)` to get the future behavour.
adata = sc.AnnData(X)
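Following the warning's own suggestion, the call in train.py could pass the dtype explicitly; this is a one-line sketch I have not applied, and whether to keep int64 or request float32 depends on what the rest of train.py expects:

```python
# Makes the dtype conversion explicit, which silences the anndata FutureWarning.
adata = sc.AnnData(X, dtype=X.dtype)
```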
I am not sure whether these warnings or other problems are affecting GPU usage, but I believe I have finished preparing the environment.
I tested my GPU using Maththew-x83; the result is 11840. According to the result table, I believe this is a typical score for an RTX 3060:
| System | Benchmark Result |
| --- | --- |
| NVIDIA RTX 3070 Ti, 8GB | ~20000 Points |
| NVIDIA GeForce GTX 1080 Max-Q, 8GB | ~6000 Points |
The GPU is also not utilized in other .ipynb projects. I am not sure what GPU utilization percentage I should expect, but mostly 0–1% seems unusual to me.
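To get a utilization number that does not depend on Task Manager, I could also poll nvidia-smi while a notebook is running (a sketch of my own; the interval and sample count are arbitrary):

```python
import subprocess
import time

# Prints GPU utilization and memory use once per second for 60 samples;
# run this in a separate terminal or cell while Main.ipynb is executing.
for _ in range(60):
    out = subprocess.run(
        ["nvidia-smi",
         "--query-gpu=utilization.gpu,memory.used",
         "--format=csv,noheader"],
        capture_output=True, text=True,
    )
    print(out.stdout.strip())
    time.sleep(1)
```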