
Tag Archive for: large-language-model, llama, fine-tuning

GPU memory required to fine-tune Llama 3 70B

I am wondering how much GPU memory would be required to fine-tune Llama-3-70B-Instruct. This could use 4-bit quantisation and QLoRA to minimise the GPU memory needed.
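
For reference, below is a minimal sketch of the kind of setup the question describes, assuming the Hugging Face transformers, peft, and bitsandbytes stack; the model ID, LoRA rank, and target modules are illustrative choices, not a definitive recipe.

```python
# Sketch: load Llama-3-70B-Instruct in 4-bit (NF4) and attach a QLoRA adapter.
# Assumes transformers, peft, and bitsandbytes are installed and the model weights
# are accessible; LoRA hyperparameters below are illustrative.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

model_id = "meta-llama/Meta-Llama-3-70B-Instruct"

# 4-bit NF4 quantisation with double quantisation keeps the frozen base weights
# at roughly 0.5 bytes per parameter (~35 GB for a 70B model), before counting
# activations, adapter weights, and optimizer state.
bnb_config = BitsAndBytesConfig(
    load_in_4bit=True,
    bnb_4bit_quant_type="nf4",
    bnb_4bit_use_double_quant=True,
    bnb_4bit_compute_dtype=torch.bfloat16,
)

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    quantization_config=bnb_config,
    device_map="auto",  # shard the quantised weights across available GPUs
)
model = prepare_model_for_kbit_training(model)

# QLoRA: only the small low-rank adapter matrices are trained;
# the 4-bit base model stays frozen.
lora_config = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    bias="none",
    task_type="CAUSAL_LM",
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()
```

As a rough sizing guide only: the 4-bit base weights of a 70B model are about 35 GB on their own, and QLoRA fine-tuning is commonly reported to fit on roughly 48-80 GB of total VRAM depending on sequence length, batch size, and gradient checkpointing.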