I'm new to finetuning. I tried running the finetuning scripts mentioned in llama-recipes, specifically this one:
python finetuning.py --use_peft --peft_method lora --quantization --use_fp16 --model_name /path_of_model_folder/8B --output_dir Path/to/save/PEFT/model
Every time I use the 7b model it seems to fail, and I have to use the 7b-hf model (Hugging Face weights) instead. Do all the finetuning scripts in the recipes work only with the Hugging Face weights?
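For context, my guess is that the recipes load models with transformers' from_pretrained, so the original Meta checkpoint would first need converting to the Hugging Face format. I believe transformers ships a conversion script for this; a sketch of what I mean (script path and flags taken from the transformers repo, all paths are placeholders):

```shell
# Convert an original Meta 7B checkpoint (consolidated.*.pth + params.json)
# into the Hugging Face format that from_pretrained() expects.
# The script lives in the transformers source tree; adjust paths to your setup.
python src/transformers/models/llama/convert_llama_weights_to_hf.py \
    --input_dir /path/to/original/llama/weights \
    --model_size 7B \
    --output_dir /path/to/7b-hf
```

Is that the intended workflow, or is there a way to point the recipes at the original weights directly?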