Hi, I tried to train a quantized model that fits in my VRAM, since I have a GTX 1070 Ti, but I got the following error, which did not occur on a friend's computer with an RTX 2070 (same amount of VRAM, but a more recent architecture):
(https://i.sstatic.net/2fmD9pTM.png)
Yet the GitHub main page of Unsloth (https://github.com/unslothai/unsloth) states that it should still work, only more slowly, on a GTX 1070 or 1080.
So I am wondering if I can change something in the torch or triton files to report a CUDA compute capability of 7.0 or above instead of 6.1, which would make my code work, because I don't want to buy a new GPU just for this.
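For reference, here is a minimal sketch of how I understand the check that fails. The compute capability is a hardware property of the GPU (6.1 for a GTX 1070 Ti, 7.5 for an RTX 2070), so I assume it cannot simply be raised by editing files; the threshold values here are my assumption based on the error:

```python
def meets_min_capability(cap, minimum=(7, 0)):
    """Return True if a (major, minor) compute capability tuple meets the minimum."""
    return cap >= minimum

# On a machine with PyTorch and a CUDA GPU, the actual capability can be read with:
#   import torch
#   cap = torch.cuda.get_device_capability()  # e.g. (6, 1) on a GTX 1070 Ti
print(meets_min_capability((6, 1)))  # GTX 1070 Ti -> False
print(meets_min_capability((7, 5)))  # RTX 2070   -> True
```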
Thanks for your help.