Download this tutorial as a Jupyter notebook
Full SFT Customization#
Learn how to fine-tune all model weights using supervised fine-tuning (SFT) to customize LLM behavior for your specific tasks.
About#
Supervised Fine-Tuning (SFT) customizes model behavior, injects new knowledge, and optimizes performance for specific domains and tasks. Full SFT modifies all model weights during training, providing maximum customization flexibility.
What you can achieve with SFT:
🎯 Specialize for domains: Fine-tune models on legal texts, medical records, or financial data
💡 Inject knowledge: Add new information not present in the base model
📈 Improve accuracy: Optimize for specific tasks like sentiment analysis, summarization, or code generation
SFT vs LoRA: Understanding the Trade-offs#
Full SFT trains all model parameters (e.g., all 70 billion weights in Llama 70B):
✅ Maximum model adaptation and knowledge injection
✅ Can fundamentally change model behavior
✅ Best for significant domain shifts or specialized tasks
❌ Requires substantial GPU resources (4-8x more than LoRA)
❌ Produces full model weights (~140GB for Llama 70B)
❌ Longer training time
LoRA trains only ~1% of the weights by adding small low-rank matrices alongside the existing weights:
✅ 75-95% less memory required
✅ Faster training (2-4x speedup)
✅ Produces small adapter files (~100-500MB)
✅ Multiple adapters can share one base model
❌ Limited adaptation capability compared to full fine-tuning
When to choose Full SFT:
Training small models (1B-8B) where resource cost is manageable
Need fundamental behavior changes (e.g., medical diagnosis, legal reasoning)
Injecting substantial new knowledge not in the base model
When to choose LoRA: See the LoRA tutorial for most use cases, especially with large models (70B+) or limited GPU resources.
Prerequisites#
Before starting this tutorial, ensure you have:
Completed the Quickstart to install and deploy NeMo Microservices locally
Installed the Python SDK (included with pip install nemo-microservices)
Set up organizational entities (namespaces and projects) if you’re new to the platform
Quick Start#
1. Initialize SDK#
The SDK needs to know your NMP server URL. By default, http://localhost:8080 is used in accordance with the Quickstart guide. If NMP is running at a custom location, you can override the URL by setting the NMP_BASE_URL environment variable:
export NMP_BASE_URL=<YOUR_NMP_BASE_URL>
import os
from nemo_microservices import NeMoMicroservices, ConflictError
NMP_BASE_URL = os.environ.get("NMP_BASE_URL", "http://localhost:8080")
sdk = NeMoMicroservices(
base_url=NMP_BASE_URL,
workspace="default"
)
2. Prepare Dataset#
Create your data in JSONL format—one JSON object per line. The platform auto-detects your data format. Supported dataset formats are listed below.
Flexible Data Setup:
No validation file? The platform automatically creates a 10% validation split
Multiple files? Upload to training/ or validation/ subdirectories—they’ll be automatically merged
Format detection: Your data format is auto-detected at training time
In this tutorial, the following dataset directory structure is used:
sft-dataset
|-- training.jsonl
`-- validation.jsonl
Simple Prompt/Completion Format#
The simplest format with input prompt and expected completion:
prompt: The input prompt for the model
completion: The expected output response
{"prompt": "Write an email to confirm our hotel reservation.", "completion": "Dear Hotel Team, I am writing to confirm our reservation for two guests..."}
Chat Format (for conversational models)#
For multi-turn conversations, use the messages format:
messages: List of message objects with role and content fields
Roles: system, user, assistant
{"messages": [{"role": "system", "content": "You are a helpful assistant."}, {"role": "user", "content": "What is AI?"}, {"role": "assistant", "content": "AI is..."}]}
Custom Format (specify columns in job)#
You can use custom field names and map them during job creation:
Define your own field names
Map them to prompt/completion in the job configuration
{"question": "What is 2+2?", "answer": "4"}
3. Create Dataset FileSet and Upload Training Data#
Install the Hugging Face datasets package to download the public rajpurkar/squad dataset if it’s not already installed in your Python environment:
pip install datasets
Download rajpurkar/squad Dataset#
SQuAD (Stanford Question Answering Dataset) is a reading comprehension dataset consisting of questions posed on Wikipedia articles, where the answer is a segment of text from the corresponding passage.
from pathlib import Path
from datasets import load_dataset, DatasetDict
import json
# Load the SQuAD dataset from Hugging Face
print("Loading dataset rajpurkar/squad")
raw_dataset = load_dataset("rajpurkar/squad")
if not isinstance(raw_dataset, DatasetDict):
raise ValueError("Dataset does not contain expected splits")
print("Loaded dataset")
# Configuration
VALIDATION_PROPORTION = 0.05
SEED = 1234
# For the purpose of this tutorial, we'll use a subset of the dataset
# Larger datasets generally improve model quality but increase training time
training_size = 3000
validation_size = 300
DATASET_PATH = Path("sft-dataset").absolute()
# Create directory if it doesn't exist
os.makedirs(DATASET_PATH, exist_ok=True)
# Get the train split and create a validation split from it
train_set = raw_dataset["train"]
split_dataset = train_set.train_test_split(test_size=VALIDATION_PROPORTION, seed=SEED)
# Select subsets for the tutorial
train_ds = split_dataset['train'].select(range(min(training_size, len(split_dataset['train']))))
validation_ds = split_dataset['test'].select(range(min(validation_size, len(split_dataset['test']))))
# Convert SQuAD format to prompt/completion format and save to JSONL
def convert_squad_to_sft_format(example):
"""Convert SQuAD format to prompt/completion format for SFT training."""
prompt = f"Context: {example['context']} Question: {example['question']} Answer:"
completion = example["answers"]["text"][0] # Take the first answer
return {"prompt": prompt, "completion": completion}
# Save training data
with open(f"{DATASET_PATH}/training.jsonl", "w", encoding="utf-8") as f:
for example in train_ds:
converted = convert_squad_to_sft_format(example)
f.write(json.dumps(converted) + "\n")
# Save validation data
with open(f"{DATASET_PATH}/validation.jsonl", "w", encoding="utf-8") as f:
for example in validation_ds:
converted = convert_squad_to_sft_format(example)
f.write(json.dumps(converted) + "\n")
print(f"Saved training.jsonl with {len(train_ds)} rows")
print(f"Saved validation.jsonl with {len(validation_ds)} rows")
# Show a sample from the training data
print("\nSample from training data:")
with open(f"{DATASET_PATH}/training.jsonl", 'r') as f:
first_line = f.readline()
sample = json.loads(first_line)
print(f"Prompt: {sample['prompt'][:200]}...")
print(f"Completion: {sample['completion']}")
# Create fileset to store SFT training data
DATASET_NAME = "sft-dataset"
try:
sdk.filesets.create(
workspace="default",
name=DATASET_NAME,
description="SFT training data"
)
print(f"Created fileset: {DATASET_NAME}")
except ConflictError:
print(f"Fileset '{DATASET_NAME}' already exists, continuing...")
# Upload the training data directory recursively, preserving the file structure
sdk.filesets.fsspec.put(
lpath=DATASET_PATH, # Local directory with your JSONL files
rpath=f"default/{DATASET_NAME}/",
recursive=True
)
# Validate training data is uploaded correctly
print("Training data:")
print(sdk.filesets.list_files(name=DATASET_NAME, workspace="default").model_dump_json(indent=2))
4. Secrets Setup#
If you plan to use NGC or HuggingFace models, you’ll need to configure authentication:
NGC models (ngc:// URIs): Requires an NGC API key
HuggingFace models (hf:// URIs): Requires an HF token for gated/private models
Configure these as secrets in your platform. See Managing Secrets for detailed instructions.
Get your credentials to access base models:
NGC API Key (Setup → Generate API Key)
HuggingFace Token (Create token with Read access)
Quick Setup Example#
In this tutorial we will work with the meta-llama/Llama-3.2-1B-Instruct model from HuggingFace. Ensure that you have sufficient permissions to download the model. If you cannot see the files on the meta-llama/Llama-3.2-1B-Instruct Hugging Face page, request access there.
HuggingFace Authentication:
For gated models (Llama, Gemma), you must provide a HuggingFace token via the token_secret parameter
Get your token from HuggingFace Settings (requires Read access)
Accept the model’s terms on the HuggingFace model page before using it. Example: meta-llama/Llama-3.2-1B-Instruct
For public models, you can omit the token_secret parameter when creating the model fileset in the next step
# Export the HF_TOKEN and NGC_API_KEY environment variables if they are not already set
HF_TOKEN = os.getenv("HF_TOKEN")
NGC_API_KEY = os.getenv("NGC_API_KEY")
def create_or_get_secret(name: str, value: str | None, label: str):
if not value:
raise ValueError(f"{label} is not set")
try:
secret = sdk.secrets.create(
name=name,
workspace="default",
data=value,
)
print(f"Created secret: {name}")
return secret
except ConflictError:
print(f"Secret '{name}' already exists, continuing...")
return sdk.secrets.retrieve(name=name, workspace="default")
# Create HuggingFace token secret
hf_secret = create_or_get_secret("hf-token", HF_TOKEN, "HF_TOKEN")
print("HF_TOKEN secret:")
print(hf_secret.model_dump_json(indent=2))
# Create NGC API key secret
# Uncomment the line below if you have NGC API Key and want to finetune NGC models
# ngc_api_key = create_or_get_secret("ngc-api-key", NGC_API_KEY, "NGC_API_KEY")
5. Create Base Model FileSet#
Create a fileset pointing to the meta-llama/Llama-3.2-1B-Instruct model on HuggingFace that we will fine-tune with SFT. The model download takes place at SFT fine-tuning job creation time; this step only creates a pointer to the Hugging Face repository and does not download the model.
Note: for public models, you can omit the token_secret parameter when creating a model fileset.
# Create a fileset pointing to the desired HuggingFace model
from nemo_microservices.types.filesets import HuggingfaceStorageConfigParam
HF_REPO_ID = "meta-llama/Llama-3.2-1B-Instruct"
MODEL_NAME = "llama-3-2-1b-base"
# Ensure you have a HuggingFace token secret created
try:
base_model = sdk.filesets.create(
workspace="default",
name=MODEL_NAME,
description="Llama 3.2 1B base model from HuggingFace",
storage=HuggingfaceStorageConfigParam(
type="huggingface",
# repo_id is the full model name from Hugging Face
repo_id=HF_REPO_ID,
repo_type="model",
# we use the secret created in the previous step
token_secret=hf_secret.name
)
)
except ConflictError:
print(f"Base model fileset already exists. Skipping creation.")
base_model = sdk.filesets.retrieve(
workspace="default",
name="llama-3-2-1b-base",
)
print(f"Base model fileset: fileset://default/{base_model.name}")
print("Base model fileset files list:")
print(sdk.filesets.list_files(name=MODEL_NAME, workspace="default").model_dump_json(indent=2))
6. Create SFT Finetuning Job#
Create a customization job with an inline target referencing the base model and dataset filesets created in previous steps.
Target model_uri Format:
Currently, model_uri must reference a FileSet:
FileSet:
fileset://workspace/fileset-name
Support for direct HuggingFace (hf://) and NGC (ngc://) URIs is coming soon. For now, create a fileset that references your base model from these sources, as shown in step 5.
GPU Requirements:
1B models: 1 GPU (24GB+ VRAM)
3B models: 1-2 GPUs
8B models: 2-4 GPUs
70B models: 8+ GPUs
Adjust num_gpus_per_node and tensor_parallel_size based on your model size.
import uuid
from nemo_microservices.types.customization import (
CustomizationJobInputParam,
CustomizationTargetParamParam,
HyperparametersParam,
)
job_suffix = uuid.uuid4().hex[:4]
JOB_NAME = f"my-sft-job-{job_suffix}"
job = sdk.customization.jobs.create(
name=JOB_NAME,
workspace="default",
spec=CustomizationJobInputParam(
target=CustomizationTargetParamParam(
workspace="default",
model_uri=f"fileset://default/{base_model.name}"
),
dataset=f"fileset://default/{DATASET_NAME}",
hyperparameters=HyperparametersParam(
training_type="sft",
finetuning_type="all_weights",
epochs=2,
batch_size=64,
learning_rate=0.00005,
max_seq_length=2048,
# GPU and parallelism settings
num_gpus_per_node=1,
num_nodes=1,
tensor_parallel_size=1,
pipeline_parallel_size=1,
micro_batch_size=1,
)
)
)
print(f"Job ID: {job.name}")
print(f"Output model: {job.spec.output_model}")
7. Track Training Progress#
import time
from IPython.display import clear_output
# Poll job status every 10 seconds until completed
while True:
status = sdk.audit.jobs.get_status(
name=job.name,
workspace="default"
)
clear_output(wait=True)
print(f"Job Status: {status.status}")
# Extract training progress from nested steps structure
step: int | None = None
max_steps: int | None = None
training_phase: str | None = None
for job_step in status.steps or []:
if job_step.name == "customization-training-job":
for task in job_step.tasks or []:
task_details = task.status_details or {}
step = task_details.get("step")
max_steps = task_details.get("max_steps")
training_phase = task_details.get("phase")
break
break
if step is not None and max_steps is not None:
progress_pct = (step / max_steps) * 100
print(f"Training Progress: Step {step}/{max_steps} ({progress_pct:.1f}%)")
if training_phase:
print(f"Training Phase: {training_phase}")
else:
print("Training step not started yet or progress info not available")
# Exit loop when job is completed (or failed/cancelled)
if status.status in ("completed", "failed", "cancelled"):
print(f"\nJob finished with status: {status.status}")
break
time.sleep(10)
Interpreting SFT Training Metrics:
Monitor the relationship between training and validation loss curves:
| Scenario | Interpretation | Action |
|---|---|---|
| Both decreasing together | Model is learning well | Continue training |
| Training decreases, validation flat/increasing | Overfitting | Reduce epochs, add data |
| Both flat/not decreasing | Underfitting | Increase LR, check data |
| Sudden spikes | Training instability | Lower learning rate |
Note: Training metrics measure optimization progress, not final model quality. Always evaluate the deployed model on your specific use case.
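Plotting both curves makes the scenarios in the table easier to spot. An illustrative sketch with placeholder loss values (not real training output) that requires matplotlib:

import matplotlib.pyplot as plt

# Placeholder loss values for illustration only; substitute your exported metrics
steps = list(range(0, 100, 10))
train_loss = [2.0, 1.6, 1.3, 1.1, 0.9, 0.8, 0.7, 0.65, 0.6, 0.58]
val_loss = [2.1, 1.7, 1.4, 1.2, 1.1, 1.05, 1.05, 1.1, 1.15, 1.2]  # flattens then rises: overfitting

plt.plot(steps, train_loss, label="training loss")
plt.plot(steps, val_loss, label="validation loss")
plt.xlabel("step")
plt.ylabel("loss")
plt.legend()
plt.show()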
8. Deploy Fine-Tuned Model#
Once training completes, deploy using the Deployment Management Service:
# Validate model entity exists
model_entity = sdk.models.retrieve(workspace='default', name=job.spec.output_model)
print(model_entity.model_dump_json(indent=2))
from nemo_microservices.types.inference import NIMDeploymentParam
# Create deployment config
deploy_suffix = uuid.uuid4().hex[:4]
DEPLOYMENT_CONFIG_NAME = f"sft-model-deployment-cfg-{deploy_suffix}"
DEPLOYMENT_NAME = f"sft-model-deployment-{deploy_suffix}"
deployment_config = sdk.inference.deployment_configs.create(
workspace="default",
name=DEPLOYMENT_CONFIG_NAME,
nim_deployment=NIMDeploymentParam(
image_name="nvcr.io/nim/nvidia/llm-nim",
image_tag="1.13.1",
gpu=1,
model_name=job.spec.output_model, # ModelEntity name from training,
model_namespace="default", # Workspace where ModelEntity lives
)
)
# Deploy model using deployment_config created above
deployment = sdk.inference.deployments.create(
workspace="default",
name=DEPLOYMENT_NAME,
config=deployment_config.name
)
# Check deployment status
deployment_status = sdk.inference.deployments.retrieve(
name=deployment.name,
workspace="default"
)
print(f"Deployment name: {deployment.name}")
print(f"Deployment status: {deployment_status.status}")
The deployment service automatically:
Downloads model weights from the Files service
Provisions storage (PVC) for the weights
Configures and starts the NIM container
Multi-GPU Deployment:
For larger models requiring multiple GPUs, configure parallelism with environment variables:
deployment_config = sdk.inference.deployment_configs.create(
workspace="default",
name="sft-model-config-multigpu",
nim_deployment={
"image_name": "nvcr.io/nim/nvidia/llm-nim",
"image_tag": "1.13.1",
"gpu": 2, # Total GPUs
"additional_envs": {
"NIM_TENSOR_PARALLEL_SIZE": "2", # Tensor parallelism
"NIM_PIPELINE_PARALLEL_SIZE": "1" # Pipeline parallelism
}
}
)
Single-Node Constraint: Model deployments are limited to a single node. The maximum gpu value depends on the total GPUs available on a single node in your cluster. Multi-node deployments are not supported.
GPU Parallelism#
By default, NIM uses all GPUs for tensor parallelism (TP). You can customize this behavior using the NIM_TENSOR_PARALLEL_SIZE and NIM_PIPELINE_PARALLEL_SIZE environment variables.
| Strategy | Description | Best For |
|---|---|---|
| Tensor Parallel (TP) | Splits each layer’s weights across GPUs | Lowest latency |
| Pipeline Parallel (PP) | Splits the model’s layers (depth) across GPUs | Highest throughput |
Formula: gpu = NIM_TENSOR_PARALLEL_SIZE × NIM_PIPELINE_PARALLEL_SIZE
Example Configurations#
Default (TP=8, PP=1) — Lowest Latency
"gpu": 8
# NIM automatically sets NIM_TENSOR_PARALLEL_SIZE=8
Balanced (TP=4, PP=2)
"gpu": 8,
"additional_envs": {
"NIM_TENSOR_PARALLEL_SIZE": "4",
"NIM_PIPELINE_PARALLEL_SIZE": "2"
}
Throughput Optimized (TP=2, PP=4)
"gpu": 8,
"additional_envs": {
"NIM_TENSOR_PARALLEL_SIZE": "2",
"NIM_PIPELINE_PARALLEL_SIZE": "4"
}
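Before creating a deployment config, you can sanity-check the formula above with plain Python arithmetic (no SDK calls involved):

# Verify gpu == tensor parallel size x pipeline parallel size for a planned config
planned = {"gpu": 8, "NIM_TENSOR_PARALLEL_SIZE": 2, "NIM_PIPELINE_PARALLEL_SIZE": 4}
assert planned["gpu"] == planned["NIM_TENSOR_PARALLEL_SIZE"] * planned["NIM_PIPELINE_PARALLEL_SIZE"], \
    "gpu must equal TP x PP"
print("Parallelism configuration is consistent")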
Track deployment status#
import time
from IPython.display import clear_output
# Poll deployment status every 15 seconds until ready
TIMEOUT_MINUTES = 30
start_time = time.time()
timeout_seconds = TIMEOUT_MINUTES * 60
print(f"Monitoring deployment '{deployment.name}'...")
print(f"Timeout: {TIMEOUT_MINUTES} minutes\n")
while True:
deployment_status = sdk.inference.deployments.retrieve(
name=deployment.name,
workspace="default"
)
elapsed = time.time() - start_time
elapsed_min = int(elapsed // 60)
elapsed_sec = int(elapsed % 60)
clear_output(wait=True)
print(f"Deployment: {deployment.name}")
print(f"Status: {deployment_status.status}")
print(f"Elapsed time: {elapsed_min}m {elapsed_sec}s")
# Check if deployment is ready
if deployment_status.status == "READY":
print("\nDeployment is ready!")
break
# Check for failure states
if deployment_status.status in ("FAILED", "ERROR", "TERMINATED", "LOST"):
print(f"\nDeployment failed with status: {deployment_status.status}")
break
# Check timeout
if elapsed > timeout_seconds:
print(f"\nTimeout reached ({TIMEOUT_MINUTES} minutes). Deployment may still be in progress.")
print("You can continue to check status manually or wait longer.")
break
time.sleep(15)
9. Evaluate Your Model#
After training, evaluate whether your model meets your requirements:
Quick Manual Evaluation#
# Wait for deployment to be ready, then test
# Test the fine-tuned model with a question answering prompt
context = "The Apollo 11 mission was the first manned mission to land on the Moon. It was launched on July 16, 1969, and Neil Armstrong became the first person to walk on the lunar surface on July 20, 1969. Buzz Aldrin joined him shortly after, while Michael Collins remained in lunar orbit."
question = "Who was the first person to walk on the Moon?"
messages = [
{"role": "user", "content": f"Based on the following context, answer the question.\n\nContext: {context}\n\nQuestion: {question}"}
]
response = sdk.inference.gateway.provider.post(
"v1/chat/completions",
name=deployment.name,
workspace="default",
body={
"model": f"default/{job.spec.output_model}",
"messages": messages,
"temperature": 0,
"max_tokens": 128
}
)
print("=" * 60)
print("MODEL EVALUATION")
print("=" * 60)
print(f"Question: {question}")
print(f"Expected: Neil Armstrong")
print(f"Model output: {response['choices'][0]['message']['content']}")
Evaluation Best Practices#
Manual Evaluation (Recommended)
Test with real-world examples from your use case
Compare responses to base model and expected outputs
Verify the model exhibits desired behavior changes
Check edge cases and error handling
What to look for:
✅ Model follows your desired output format
✅ Applies domain knowledge correctly
✅ Maintains general language capabilities
✅ Avoids unwanted behaviors or biases
❌ Doesn’t hallucinate facts not in training data
❌ Doesn’t produce repetitive or nonsensical outputs
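To go beyond a single prompt, you can reuse the validation split and the same chat completions call to compute a rough containment-rate score. A minimal sketch, assuming the deployment is ready and the response has the same structure as in the manual test above; this is a loose heuristic, not a substitute for the Evaluator service:

import json

# Rough automatic check: does the model's answer contain the reference answer?
hits, total = 0, 0
with open(f"{DATASET_PATH}/validation.jsonl", "r", encoding="utf-8") as f:
    samples = [json.loads(line) for line in f][:20]  # small subset to keep it quick

for sample in samples:
    response = sdk.inference.gateway.provider.post(
        "v1/chat/completions",
        name=deployment.name,
        workspace="default",
        body={
            "model": f"default/{job.spec.output_model}",
            "messages": [{"role": "user", "content": sample["prompt"]}],
            "temperature": 0,
            "max_tokens": 64,
        },
    )
    answer = response["choices"][0]["message"]["content"]
    total += 1
    if sample["completion"].strip().lower() in answer.strip().lower():
        hits += 1

print(f"Containment rate on {total} validation samples: {hits / total:.1%}")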
Hyperparameters#
For detailed information on all available hyperparameters, recommended values, and tuning guidance, see the Hyperparameter Reference.
Troubleshooting#
Job fails during model download:
Verify authentication secrets are configured (see Managing Secrets)
For gated HuggingFace models (Llama, Gemma), accept the license on the model page
Check the model_uri format is correct (fileset://)
Ensure you have accepted the model’s terms of service on HuggingFace
Check job status and logs:
sdk.customization.jobs.retrieve(name=job.name, workspace="default")
Job fails with OOM (Out of Memory) error:
First try: Reduce micro_batch_size from 2 to 1
Still OOM: Reduce batch_size from 4 to 2
Still OOM: Reduce max_seq_length from 2048 to 1024 or 512
Last resort: Increase GPU count and use tensor_parallel_size for model sharding
Loss curves not decreasing (underfitting):
Increase training duration: epochs: 5-10 instead of 3
Adjust learning rate: Try 1e-5 to 1e-4
Add warmup: Set warmup_steps to ~10% of total training steps
Check data quality: Verify formatting, remove duplicates, ensure diversity
Training loss decreases but validation loss increases (overfitting):
Reduce epochs: Try epochs: 1-2 instead of 5+
Lower learning rate: Use 2e-5 or 1e-5
Increase dataset size and diversity
Verify train/validation split has no data leakage
Model output quality is poor despite good training metrics:
Training metrics optimize for loss, not your actual task—evaluate on real use cases
Review data quality, format, and diversity—metrics can be misleading with poor data
Try a different base model size or architecture
Adjust learning rate and batch size
Compare to baseline: Test base model to ensure fine-tuning improved performance
Deployment fails:
Verify output model exists: sdk.models.retrieve(name=job.spec.output_model, workspace="default")
Check deployment logs: sdk.inference.deployments.get_logs(name=deployment.name, workspace="default")
Ensure sufficient GPU resources are available for the model size
Verify the NIM image tag 1.13.1 is compatible with your model
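The two calls referenced above can be combined into a quick diagnostic cell. A sketch using the calls named in the bullets; treat the exact log format as implementation-dependent:

# Quick deployment diagnostics using the calls named above
model_check = sdk.models.retrieve(name=job.spec.output_model, workspace="default")
print(f"Output model found: {model_check.name}")

logs = sdk.inference.deployments.get_logs(name=deployment.name, workspace="default")
print(logs)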
Next Steps#
Monitor training metrics in detail
Evaluate your fine-tuned model using the Evaluator service
Learn about LoRA customization for resource-efficient fine-tuning
Explore knowledge distillation to compress larger models