Tutorials

Learn how to deploy models and run inference.

Prerequisites

  • NMP_BASE_URL environment variable set to the base URL of your platform deployment

  • Appropriate API credentials (NGC API key, HuggingFace token, or external provider keys)
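A quick way to confirm both prerequisites before starting a tutorial is a small preflight check. The sketch below is illustrative, not part of the platform: the credential variable names (NGC_API_KEY, HF_TOKEN) are common conventions and may differ in your deployment; only NMP_BASE_URL comes from the list above.

```python
import os

def check_prerequisites(base_url: str, credentials: dict) -> list:
    """Return a list of human-readable problems; an empty list means ready."""
    problems = []
    # NMP_BASE_URL must be a usable http(s) URL.
    if not base_url.startswith(("http://", "https://")):
        problems.append("NMP_BASE_URL must be set to an http(s) URL")
    # At least one credential is needed: NGC key, HuggingFace token,
    # or an external provider key.
    if not any(credentials.values()):
        problems.append("no API credential found (NGC, HuggingFace, or provider key)")
    return problems

if __name__ == "__main__":
    base_url = os.environ.get("NMP_BASE_URL", "")
    # Variable names below are assumptions; adjust to your environment.
    credentials = {
        "ngc": os.environ.get("NGC_API_KEY"),
        "huggingface": os.environ.get("HF_TOKEN"),
    }
    print(check_prerequisites(base_url, credentials))
```

Running this with everything set prints an empty list; otherwise it names what is missing.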

Guides

  • Deploy Models — Deploy from NGC, HuggingFace, Customizer, or external providers

  • Run Inference — Route requests via model entity, provider, or OpenAI routing
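For the OpenAI-routing case, a request is typically an OpenAI-style chat completion sent against the platform base URL. The sketch below only builds the request; the `/v1/chat/completions` path and the example model name are assumptions, not confirmed details of this platform, so check the Run Inference guide for the exact endpoint.

```python
import json
import os

def build_chat_request(base_url: str, model: str, prompt: str):
    """Return (url, body) for an OpenAI-style chat completion request.

    The /v1/chat/completions path is an assumption based on the common
    OpenAI-compatible convention; it is not sent anywhere here.
    """
    url = base_url.rstrip("/") + "/v1/chat/completions"
    body = json.dumps({
        "model": model,  # the model entity name used for routing
        "messages": [{"role": "user", "content": prompt}],
    }).encode()
    return url, body

if __name__ == "__main__":
    # "meta/llama-3.1-8b-instruct" is a placeholder model name.
    url, body = build_chat_request(
        os.environ.get("NMP_BASE_URL", "https://nmp.example.com"),
        "meta/llama-3.1-8b-instruct",
        "Hello!",
    )
    print(url)
```

The returned URL and JSON body can then be sent with any HTTP client, passing your API credential as a bearer token.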