
From Evaluation to Deployment — The Complete Fine-tuning Guide

Evaluate with Perplexity, KoBEST, and ROUGE-L; merge adapters with merge_and_unload(), convert to GGUF, and deploy via vLLM or Ollama. Also covers overfitting prevention, data quality, and a hyperparameter guide.


Series: Part 1: LoRA Theory | Part 2: QLoRA + Korean | Part 3 (this post)

In Part 1 we covered LoRA fundamentals and ran our first fine-tuning. In Part 2 we tackled QLoRA and Korean dataset construction. Training is done. Now two questions remain:
  1. Did the model actually improve? (Evaluation)
  2. How do we serve it to users? (Deployment)
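As a quick preview of the evaluation side, perplexity (the first metric listed above) is just the exponential of the mean per-token negative log-likelihood. A minimal sketch, where the helper name and the NLL values are hypothetical stand-ins for a real model's losses:

```python
import math

def perplexity(token_nlls):
    """Perplexity = exp(mean negative log-likelihood) over evaluation tokens."""
    return math.exp(sum(token_nlls) / len(token_nlls))

# Hypothetical per-token NLLs (in nats); lower NLL -> lower perplexity -> better fit.
token_nlls = [2.1, 1.8, 2.4, 1.9]
print(round(perplexity(token_nlls), 2))  # 7.77
```

In practice a library computes the NLLs for you from the model's logits; the formula above is what those libraries reduce to.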