On a Debian/WSL2 setup without a GPU, I try to run the following code, but it prints errors. I also installed PyTorch, hoping that would resolve a TensorFlow error, but it did not fully fix the issue. I just want to complete the Hugging Face tutorial on my laptop. Any ideas on how to debug this?
from transformers import pipeline
data = ["I love you", "I hate you"]
specific_model = pipeline(model="finiteautomata/bertweet-base-sentiment-analysis")
specific_model(data)
Output:
$ python3 HelloWorld/test.py
2024-12-23 09:33:35.046787: E external/local_xla/xla/stream_executor/cuda/cuda_fft.cc:477] Unable to register cuFFT factory: Attempting to register factory for plugin cuFFT when one has already been registered
WARNING: All log messages before absl::InitializeLog() is called are written to STDERR
E0000 00:00:1734942815.063438 1495 cuda_dnn.cc:8310] Unable to register cuDNN factory: Attempting to register factory for plugin cuDNN when one has already been registered
E0000 00:00:1734942815.068390 1495 cuda_blas.cc:1418] Unable to register cuBLAS factory: Attempting to register factory for plugin cuBLAS when one has already been registered
2024-12-23 09:33:35.084911: I tensorflow/core/platform/cpu_feature_guard.cc:210] This TensorFlow binary is optimized to use available CPU instructions in performance-critical operations.
To enable the following instructions: AVX2 FMA, in other operations, rebuild TensorFlow with the appropriate compiler flags.
emoji is not installed, thus not converting emoticons or emojis into text. Install emoji: pip3 install emoji==0.6.0
Device set to use cpu
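(For reference: the noisy startup lines come from TensorFlow's C++ side. They can usually be reduced with an environment variable; this is a common workaround I've seen, not something the tutorial requires, and it must run before TensorFlow, or anything that imports it such as transformers, is loaded.)

```python
import os

# Sketch: silence TensorFlow's C++ startup logs. Must be set *before*
# TensorFlow (or transformers, which may import it) is imported.
# Levels: 0 = all, 1 = hide INFO, 2 = hide WARNING, 3 = hide ERROR.
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
```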
My setup:
$ pip list
Package Version
---------------------------- ----------
absl-py 2.1.0
astunparse 1.6.3
certifi 2024.12.14
charset-normalizer 3.4.0
filelock 3.16.1
flatbuffers 24.3.25
fsspec 2024.12.0
gast 0.6.0
google-pasta 0.2.0
grpcio 1.68.1
h5py 3.12.1
huggingface-hub 0.27.0
idna 3.10
Jinja2 3.1.5
keras 3.7.0
libclang 18.1.1
Markdown 3.7
markdown-it-py 3.0.0
MarkupSafe 3.0.2
mdurl 0.1.2
ml-dtypes 0.4.1
mpmath 1.3.0
namex 0.0.8
networkx 3.4.2
numpy 2.0.2
nvidia-cublas-cu12 12.4.5.8
nvidia-cuda-cupti-cu12 12.4.127
nvidia-cuda-nvrtc-cu12 12.4.127
nvidia-cuda-runtime-cu12 12.4.127
nvidia-cudnn-cu12 9.1.0.70
nvidia-cufft-cu12 11.2.1.3
nvidia-curand-cu12 10.3.5.147
nvidia-cusolver-cu12 11.6.1.9
nvidia-cusparse-cu12 12.3.1.170
nvidia-nccl-cu12 2.21.5
nvidia-nvjitlink-cu12 12.4.127
nvidia-nvtx-cu12 12.4.127
opt_einsum 3.4.0
optree 0.13.1
packaging 24.2
pillow 11.0.0
pip 23.0.1
protobuf 5.29.2
Pygments 2.18.0
PyYAML 6.0.2
regex 2024.11.6
requests 2.32.3
rich 13.9.4
safetensors 0.4.5
sentencepiece 0.2.0
setuptools 66.1.1
six 1.17.0
sympy 1.13.1
tensorboard 2.18.0
tensorboard-data-server 0.7.2
tensorflow 2.18.0
tensorflow-io-gcs-filesystem 0.37.1
termcolor 2.5.0
tf_keras 2.18.0
tokenizers 0.21.0
torch 2.5.1
torchaudio 2.5.1
torchvision 0.20.1
tqdm 4.67.1
transformers 4.47.1
triton 3.1.0
typing_extensions 4.12.2
urllib3 2.3.0
Werkzeug 3.1.3
wheel 0.45.1
wrapt 1.17.0
$ python3 --version
Python 3.11.2
$ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 12 (bookworm)"
NAME="Debian GNU/Linux"
VERSION_ID="12"
VERSION="12 (bookworm)"
VERSION_CODENAME=bookworm
Edit: As Nick ODell suggested, these are not errors but warnings. The model loads fine; to actually see the result when running as a script, the call must be wrapped in a print statement: print(specific_model(data)).
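A minimal, transformers-free illustration of why the print is needed: in a script (unlike an interactive REPL), the value of a bare expression is silently discarded. The fake_pipeline function below is just a stand-in for the real pipeline call.

```python
def fake_pipeline(data):
    # Stand-in for specific_model(data): one result dict per input string.
    return [{"label": "POS", "score": 0.99} for _ in data]

data = ["I love you", "I hate you"]
fake_pipeline(data)         # evaluated, but the result is discarded in a script
print(fake_pipeline(data))  # only this line writes to stdout
```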