# Parameters Reference

This page provides a complete reference for all configuration parameters available when creating NeMo Safe Synthesizer jobs. These schemas are automatically extracted from the authoritative OpenAPI specification, ensuring they are always in sync with the API.

## Top-Level Configuration

The `SafeSynthesizerParametersInput` schema defines the main configuration structure for Safe Synthesizer jobs.

| Parameter | Type | Description |
|---|---|---|
| `data` | object | Data parameters. |
| `evaluation` | object | Evaluation parameters. |
| `enable_synthesis` | boolean | Default: `true`. Enable synthesizing new data by training a model. |
| `enable_replace_pii` | boolean | Default: `true`. Enable replacing PII in the data. |
| `training` | object | Training parameters. |
| `generation` | object | Generation parameters. |
| `privacy` | object | Privacy parameters. Optional. |
| `time_series` | object | Time series parameters. |
| `replace_pii` | object | PII replacement parameters. Optional. |
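
Taken together, these fields form a nested object. The following is a minimal sketch of that shape as a Python dict, using field names from the tables on this page; the values are illustrative placeholders rather than recommended settings, and the exact wire format is defined by the OpenAPI schema.

```python
# Illustrative nesting of SafeSynthesizerParametersInput fields.
# Values are placeholders; see the per-section tables for defaults.
safe_synthesizer_params = {
    "enable_synthesis": True,              # train a model and synthesize new data
    "enable_replace_pii": True,            # run PII replacement on the input data
    "data": {"holdout": 0.05},             # Data Parameters
    "training": {"batch_size": 1},         # Training Parameters
    "generation": {"num_records": 1000},   # Generation Parameters
    "privacy": {"dp_enabled": False},      # Differential Privacy Parameters
    "evaluation": {"enabled": True},       # Evaluation Parameters
    "replace_pii": {"steps": []},          # PII Replacement Configuration
}
```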

## Data Parameters

Configuration for how to shape or use the input data, including grouping, ordering, and holdout settings.

| Parameter | Type | Description |
|---|---|---|
| `group_training_examples_by` | string | Column to group training examples by. This is useful when you want the model to learn inter-record correlations for a given grouping of records. |
| `order_training_examples_by` | string | Column to order training examples by. This is useful when you want the model to learn sequential relationships for a given ordering of records. If you provide this parameter, you must also provide `group_training_examples_by`. |
| `max_sequences_per_example` | string \| integer | Default: `auto`. If specified, adds at most this number of sequences per example; otherwise, fills up the context. Supports `auto`, which chooses 1 if differential privacy is enabled and `None` otherwise. Required for differential privacy to limit the contribution of each example. |
| `holdout` | number | Default: 0.05. Amount of records to hold out. A float between 0 and 1 holds out that ratio of records; an integer greater than 1 holds out that number of records. If the value is 0, no holdout is performed. |
| `max_holdout` | integer | Default: 2000. Maximum number of records to hold out. Overrides any behavior set by the `holdout` parameter. |
| `random_state` | integer | Random state used for the holdout split, to ensure reproducibility. |
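
For example, to have the model learn per-customer event sequences while holding out 5% of records, the data section could be shaped as in the sketch below. The column names `customer_id` and `event_time` are hypothetical placeholders for columns in your own dataset.

```python
# Hypothetical data section: group rows by customer and order them by time
# so the model can learn sequential structure within each customer's records.
data_params = {
    "group_training_examples_by": "customer_id",  # hypothetical column name
    "order_training_examples_by": "event_time",   # requires group_training_examples_by
    "holdout": 0.05,      # hold out 5% of records
    "max_holdout": 2000,  # cap the number of held-out records
    "random_state": 42,   # make the holdout split reproducible
}
```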

## Training Parameters

Hyperparameters for model fine-tuning, including learning rate, batch size, and LoRA configuration.

| Parameter | Type | Description |
|---|---|---|
| `num_input_records_to_sample` | string \| integer | Default: `auto`. Number of records the model sees during training. This parameter is a proxy for training time: a value equal to the input dataset size is like training for a single epoch, a larger value is like training for multiple (possibly fractional) epochs, and a smaller value is like training for a fraction of an epoch. With `auto`, a reasonable value is chosen based on the other config parameters and the data. |
| `batch_size` | integer | Default: 1. The per-device batch size for training. |
| `gradient_accumulation_steps` | integer | Default: 8. Number of update steps to accumulate gradients for before performing a backward/update pass. This technique increases the effective batch size that fits into GPU memory. |
| `weight_decay` | number | Default: 0.01. The weight decay to apply (if not zero) to all layers except bias and LayerNorm weights in the AdamW optimizer. |
| `warmup_ratio` | number | Default: 0.05. Ratio of total training steps used for a linear warmup from 0 to the learning rate. |
| `lr_scheduler` | string | Default: `cosine`. The scheduler type to use. See the Hugging Face documentation for `SchedulerType` for all possible values. |
| `learning_rate` | number | Default: 0.0005. The initial learning rate for the `AdamW` optimizer. |
| `lora_r` | integer | Default: 32. The rank of the LoRA update matrices. A lower rank results in smaller update matrices with fewer trainable parameters. |
| `lora_alpha_over_r` | number | Default: 1.0. The ratio of the LoRA scaling factor (alpha) to the LoRA rank. Empirically, this parameter works well when set to 0.5, 1, or 2. |
| `lora_target_modules` | string[] | Default: `['q_proj', 'k_proj', 'v_proj', 'o_proj']`. The list of transformer modules to apply LoRA to. Possible modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`, `gate_proj`, `up_proj`, `down_proj`. |
| `use_unsloth` | string \| boolean | Default: `auto`. Whether to use Unsloth. |
| `rope_scaling_factor` | string \| integer | Default: `auto`. Scale the base LLM's context length by this factor using RoPE scaling. |
| `validation_ratio` | number | Default: 0.0. The fraction of the training data used for validation, in the range 0 to 1. If set to 0, no validation is performed. If set larger than 0, validation loss is computed and reported throughout training. |
| `validation_steps` | integer | Default: 15. The number of steps between validation checks for the HF Trainer arguments. |
| `pretrained_model` | string | Default: `TinyLlama/TinyLlama-1.1B-Chat-v1.0`. Pretrained model to use for fine-tuning. |
| `quantize_model` | boolean | Default: `false`. Whether to quantize the model during training. This can reduce memory usage and potentially speed up training, but may also impact model accuracy. |
| `quantization_bits` | integer | Default: 8. Allowed: 4, 8. The number of bits to use for quantization if `quantize_model` is true. |
| `peft_implementation` | string | Default: `QLORA`. The PEFT (Parameter-Efficient Fine-Tuning) implementation to use. Options are LoRA (Low-Rank Adaptation) and QLoRA (Quantized LoRA). Each method has its own trade-offs in performance and resource requirements. |
| `max_vram_fraction` | number | Default: 0.8. The fraction of total VRAM to use for training. Modify this to allow longer sequences to be used. |
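
In the Python SDK these values are passed as keyword arguments to the job builder's `with_train` call (see the full example at the end of this page). A brief sketch, assuming `with_train` also accepts the LoRA-related fields as keyword arguments:

```python
# Fragment: `builder` is a SafeSynthesizerJobBuilder, as constructed in the
# full example at the end of this page.
builder = builder.with_train(
    num_input_records_to_sample=10000,  # proxy for training time
    learning_rate=0.0005,
    batch_size=1,
    gradient_accumulation_steps=8,      # effective batch size = 1 * 8
    lora_r=32,                          # assumed keyword, matching the schema field
    lora_alpha_over_r=1.0,              # assumed keyword, matching the schema field
)
```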

## Generation Parameters

Configuration for synthetic data generation after training, including number of records, temperature, and structured generation options.

| Parameter | Type | Description |
|---|---|---|
| `num_records` | integer | Default: 1000. Number of records to generate. |
| `temperature` | number | Default: 0.9. Sampling temperature. |
| `repetition_penalty` | number | Default: 1.0. The value used to control the likelihood of the model repeating the same token. |
| `top_p` | number | Default: 1.0. Nucleus sampling probability. |
| `patience` | integer | Default: 3. Number of consecutive generations in which the `invalid_fraction_threshold` is reached before generation stops. |
| `invalid_fraction_threshold` | number | Default: 0.8. The fraction of invalid records that stops generation once the `patience` limit is reached. |
| `use_structured_generation` | boolean | Default: `false`. Use structured generation. |
| `structured_generation_backend` | string | Default: `auto`. Allowed: `auto`, `xgrammar`, `guidance`, `outlines`, `lm-format-enforcer`. The backend used by vLLM when `use_structured_generation` is true. `auto` lets vLLM choose the backend. |
| `structured_generation_schema_method` | string | Default: `regex`. Allowed: `regex`, `json_schema`. The method used to generate the schema from your dataset and pass it to the generation backend. `auto` usually defaults to `json_schema`. Use `regex` for the custom regex construction method, which tends to be more comprehensive than `json_schema` at the cost of speed. |
| `enforce_timeseries_fidelity` | boolean | Default: `false`. Enforce time series fidelity by enforcing the time series order, intervals, and start and end times of the records. |
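
These map onto the builder's `with_generate` call in the SDK example at the end of this page. A sketch, assuming the structured-generation fields are accepted as keyword arguments alongside `num_records` and `temperature`:

```python
# Fragment: enable structured generation so outputs conform to the inferred
# schema, at some cost in generation speed.
builder = builder.with_generate(
    num_records=5000,
    temperature=0.9,
    use_structured_generation=True,               # assumed keyword, matching the schema field
    structured_generation_backend="auto",         # let vLLM choose the backend
    structured_generation_schema_method="regex",  # assumed keyword, matching the schema field
)
```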

## Differential Privacy Parameters

Hyperparameters for differential privacy during training using DP-SGD. Enable these for formal privacy guarantees.

| Parameter | Type | Description |
|---|---|---|
| `dp_enabled` | boolean | Default: `false`. Enable differentially private training with DP-SGD. |
| `epsilon` | number | Default: 8.0. Target for epsilon when training completes. |
| `delta` | string \| number | Default: `auto`. Probability of accidentally leaking information. Setting to `auto` uses a delta of 1/n^1.2, where n is the number of training records. |
| `per_sample_max_grad_norm` | number | Default: 1.0. Maximum L2 norm of per-sample gradients. |
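
These correspond to the builder's `with_differential_privacy` call in the SDK example at the end of this page. A sketch, assuming `delta` and `per_sample_max_grad_norm` are accepted as keyword arguments alongside the two fields shown in the full example:

```python
# Fragment: DP-SGD training targeting an epsilon of 8 when training completes.
builder = builder.with_differential_privacy(
    dp_enabled=True,
    epsilon=8.0,
    delta="auto",                  # 1/n^1.2, where n is the number of training records
    per_sample_max_grad_norm=1.0,  # clip each sample's gradient to this L2 norm
)
```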

## Evaluation Parameters

Configuration for synthetic data quality and privacy assessment, including MIA, AIA, and PII replay detection.

| Parameter | Type | Description |
|---|---|---|
| `mia_enabled` | boolean | Default: `true`. Enable membership inference attack (MIA) evaluation. |
| `aia_enabled` | boolean | Default: `true`. Enable attribute inference attack (AIA) evaluation. |
| `sqs_report_columns` | integer | Default: 250. |
| `sqs_report_rows` | integer | Default: 5000. |
| `mandatory_columns` | integer | - |
| `enabled` | boolean | Default: `true`. Enable evaluation. |
| `quasi_identifier_count` | integer | Default: 3. Number of quasi-identifiers to sample. |
| `pii_replay_enabled` | boolean | Default: `true`. Enable PII replay detection. |
| `pii_replay_entities` | string[] | List of entities for PII replay. If not provided, default entities are used. |
| `pii_replay_columns` | string[] | List of columns for PII replay. If not provided, only entities are used. |
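
The SDK example at the end of this page does not set evaluation options explicitly. Expressed as a dict, the evaluation section could look like the sketch below; the entity names in `pii_replay_entities` are hypothetical placeholders.

```python
# Evaluation section: privacy attack simulations (MIA/AIA) plus PII replay detection.
evaluation_params = {
    "enabled": True,
    "mia_enabled": True,
    "aia_enabled": True,
    "quasi_identifier_count": 3,
    "pii_replay_enabled": True,
    "pii_replay_entities": ["name", "email"],  # hypothetical entity names
}
```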

## PII Replacement Configuration

Configuration for PII detection and replacement. See PII Replacement for conceptual documentation.

| Parameter | Type | Description |
|---|---|---|
| `globals` | object | Global config options. |
| `steps` (required) | object[] | List of transform steps to perform on the input. |
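
A structural skeleton only: the contents of `globals` and of each transform step are covered in the PII Replacement documentation, so the body below uses placeholders rather than a definitive schema.

```python
# Skeleton of the PII replacement config: global options plus an ordered list
# of transform steps applied to the input. Step contents are placeholders;
# see the PII Replacement documentation for the actual step schema.
replace_pii_params = {
    "globals": {},  # global config options
    "steps": [
        # one or more transform steps
    ],
}
```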

## Example Configuration

Here’s an example showing a complete job configuration using the Python SDK:

```python
import os
import pandas as pd

from nemo_microservices import NeMoMicroservices
from nemo_microservices.beta.safe_synthesizer.sdk.job_builder import SafeSynthesizerJobBuilder

# Placeholders: load your own tabular data and point the client at your deployment.
df: pd.DataFrame = pd.DataFrame()
client = NeMoMicroservices(
    base_url=os.environ.get("NMP_BASE_URL", "http://localhost:8080")
)

builder = (
    SafeSynthesizerJobBuilder(client)
    .with_data_source(df)
    .with_train(                     # training hyperparameters
        num_input_records_to_sample=10000,
        learning_rate=0.0005,
        batch_size=1,
    )
    .with_generate(                  # generation settings
        num_records=5000,
        temperature=0.9,
    )
    .with_differential_privacy(      # DP-SGD with a target epsilon of 8
        dp_enabled=True,
        epsilon=8.0,
    )
    .with_replace_pii()              # PII replacement with default settings
    .synthesize()
)
job = builder.create_job(name="my-job", project="my-project")
```