I’ve been looking for information but haven’t found anything. Is it known which PEFT technique is used when fine-tuning models? Is it something like low-rank adaptation or p-tuning? Or something else?
I am interpreting your question as:
“When someone finetunes a model with PEFT, how do I know which PEFT technique is being used?”
The answer is that you have to look at the code that was used to fine-tune the model with PEFT. PEFT can fine-tune a model with several methods, such as LoRA, X-LoRA, LoHa, LoKr, OFT, BOFT, and others (see HuggingFace’s PEFT Adapters page for the list of adapter methods supported by HuggingFace’s PEFT library). For example, if you are looking at some fine-tuned model called ABC, and the code that was used to fine-tune ABC includes lines that look like:
from peft import LoraConfig, TaskType, get_peft_model
# LoRA configuration: a rank-8 low-rank adapter applied to the base model
peft_config = LoraConfig(task_type=TaskType.SEQ_2_SEQ_LM, inference_mode=False, r=8, lora_alpha=32, lora_dropout=0.1)
model = get_peft_model(model_ABC, peft_config)
Then model ABC was most likely fine-tuned using LoRA, since LoraConfig is the configuration class PEFT uses for the LoRA method.
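Since you also mention p-tuning: each PEFT method has its own configuration class, so the class name in the fine-tuning code is the giveaway. As a minimal sketch (the num_virtual_tokens and encoder_hidden_size values are just illustrative, and model_ABC stands in for the base model as in the example above), a p-tuning setup would instead look something like:

from peft import PromptEncoderConfig, TaskType, get_peft_model
# P-tuning: trains a small prompt encoder that produces continuous prompt embeddings
# (the hyperparameter values below are illustrative placeholders)
peft_config = PromptEncoderConfig(task_type=TaskType.SEQ_2_SEQ_LM, num_virtual_tokens=20, encoder_hidden_size=128)
model = get_peft_model(model_ABC, peft_config)

As a side note, if you only have the saved adapter rather than the training code, PEFT usually records the method in the adapter’s adapter_config.json (its peft_type field), which you can read with PeftConfig.from_pretrained.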