Which statement is true about Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT)?
Correct Answer: C
Fine-tuning and Parameter-Efficient Fine-Tuning (PEFT) are two techniques for adapting pre-trained LLMs to specific tasks.
Fine-tuning:
Updates all model parameters, which requires substantial compute and memory.
Can cause catastrophic forgetting, where the model loses prior general knowledge.
Example: training GPT on medical texts to build healthcare-specific knowledge (see the sketch below).
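To make "modifies all model parameters" concrete, here is a minimal PyTorch sketch of full fine-tuning. The tiny model, data, and hyperparameters are all placeholder assumptions (a stand-in for a real pre-trained LLM, not Oracle's API); the point is that every parameter receives gradients:

```python
import torch
import torch.nn as nn

# Hypothetical stand-in for a pre-trained LLM (a placeholder, not a real GPT).
model = nn.Sequential(
    nn.Embedding(1000, 64),   # vocabulary of 1000 tokens, hidden size 64
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1000),      # project back to vocabulary logits
)

# Full fine-tuning: the optimizer receives ALL parameters, so every weight updates.
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()

# Dummy "medical text" batch: token ids in, next-token ids out (fabricated data).
inputs = torch.randint(0, 1000, (8, 16))    # batch of 8 sequences, length 16
targets = torch.randint(0, 1000, (8, 16))

logits = model(inputs)                                          # (8, 16, 1000)
loss = loss_fn(logits.reshape(-1, 1000), targets.reshape(-1))
loss.backward()                              # gradients flow to all parameters
optimizer.step()

print(sum(p.numel() for p in model.parameters() if p.requires_grad),
      "trainable parameters")                # every parameter is trainable
```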
Parameter-Efficient Fine-Tuning (PEFT):
Updates only a small subset of model parameters, making training far cheaper.
Uses techniques such as LoRA (Low-Rank Adaptation) and Adapters to modify small parts of the model.
Avoids retraining the full model, preserving general-purpose knowledge while adding task-specific expertise (see the sketch below).
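For contrast, here is a minimal LoRA-style sketch in plain PyTorch (hand-rolled for illustration, not the `peft` library's API): the pre-trained weight is frozen and only two small low-rank matrices are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Wraps a frozen linear layer with a trainable low-rank update: W + B @ A."""
    def __init__(self, base: nn.Linear, rank: int = 4, alpha: float = 8.0):
        super().__init__()
        self.base = base
        self.base.weight.requires_grad_(False)        # freeze pre-trained weight
        if self.base.bias is not None:
            self.base.bias.requires_grad_(False)
        # Only these two small matrices are trainable.
        self.lora_a = nn.Parameter(torch.randn(rank, base.in_features) * 0.01)
        self.lora_b = nn.Parameter(torch.zeros(base.out_features, rank))
        self.scale = alpha / rank

    def forward(self, x):
        # Frozen base path plus the small trainable low-rank correction.
        return self.base(x) + (x @ self.lora_a.T @ self.lora_b.T) * self.scale

layer = LoRALinear(nn.Linear(64, 64))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable} / {total}")   # only the rank-4 A and B matrices train
```

In practice, libraries such as Hugging Face's `peft` apply this kind of wrapping to a model's attention projections automatically; the takeaway is simply that only a tiny fraction of the parameters receive gradients, while the frozen base weights preserve the model's general knowledge.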
Why Other Options Are Incorrect:
(A) is incorrect because fine-tuning does not train a model from scratch; it updates the weights of an existing pre-trained model.
(B) is incorrect because both techniques modify model parameters.
(D) is incorrect because PEFT does not replace the model architecture; it adds or updates a small number of parameters within it.
🔹 Oracle Generative AI Reference:
Oracle's Generative AI services support both full fine-tuning and PEFT methods, letting teams balance model quality against cost efficiency and scalability.