I am wondering how much GPU memory (VRAM) would be required to fine-tune Llama3-Instruct-70B. This can be using 4-bit quantisation and QLoRA to minimise the memory used.
Thanks in advance!
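For context, here is the kind of back-of-envelope estimate I have been working from (a rough sketch only; the per-parameter byte counts and the overhead figure are my assumptions, not measurements):

```python
# Rough VRAM estimate for QLoRA fine-tuning of a 70B-parameter model.
# All per-component figures below are assumptions for a ballpark, not measurements.

def qlora_vram_estimate_gb(n_params: float,
                           quant_bits: int = 4,
                           lora_params: float = 2e8,
                           overhead_gb: float = 10.0) -> float:
    """Estimate GPU memory in GB for QLoRA fine-tuning.

    n_params:    total base-model parameters (e.g. 70e9)
    quant_bits:  bits per base-model weight after quantisation
    lora_params: assumed number of trainable LoRA adapter parameters
    overhead_gb: assumed headroom for activations, KV cache, CUDA context, etc.
    """
    # Quantised base weights: n_params * (bits / 8) bytes.
    base_weights_gb = n_params * quant_bits / 8 / 1e9
    # LoRA adapters stay in higher precision; with Adam that is roughly
    # ~2 bytes (bf16 weight) + ~8 bytes (fp32 optimizer states) per param.
    adapters_gb = lora_params * (2 + 8) / 1e9
    return base_weights_gb + adapters_gb + overhead_gb

print(round(qlora_vram_estimate_gb(70e9), 1))  # ~47 GB under these assumptions
```

Under these assumptions the 4-bit base weights alone are about 35 GB, so I would guess something in the region of a single 48 GB card or 2×24 GB, before accounting for sequence length and batch size. Does that match people's experience?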