Developer Documentation

Everything you need to integrate Tagion's AI platform into your applications, from quick starts to advanced guides.

Getting Started

Quick Start Guide

Get up and running in 5 minutes with your first API call

Start Tutorial →

API Reference

Complete API documentation with request/response examples

View API Docs →

SDK & CLI

Python SDK, JavaScript SDK, and command-line tools

Install SDK →

Tutorials

Step-by-step guides for common use cases and workflows

Browse Tutorials →

Code Examples

Production-ready code snippets in multiple languages

View Examples →

Security Best Practices

Learn how to secure your API keys and production deployments

Security Guide →

Documentation by Topic

Data Labeling API

Create labeling projects, manage annotators, and retrieve labeled data programmatically

REST API • Python SDK • Webhooks
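
The sketch below shows what a labeling workflow could look like from the Python SDK; the client.labeling method names and parameters are illustrative assumptions, so check the Data Labeling API reference for the actual calls.

# Hypothetical sketch: create a labeling project and pull finished labels.
# Method names under client.labeling are assumptions, not the documented interface.
from tagion import TagionClient

client = TagionClient(api_key="YOUR_API_KEY")

project = client.labeling.create_project(
    name="sentiment-v1",
    task_type="text_classification",
    labels=["positive", "negative", "neutral"],
)
client.labeling.upload_items(project.id, items=["Great product!", "Arrived broken."])

# Later, once annotators have finished:
labeled = client.labeling.export(project.id, format="jsonl")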

Model Training

Submit training jobs, monitor progress, and manage model checkpoints and artifacts

PyTorch • TensorFlow • Distributed Training
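
A rough sketch of submitting and monitoring a job from the Python SDK; the client.training methods and job fields are assumptions for illustration, not the confirmed interface.

# Hypothetical sketch: submit a training job and check its status.
# client.training.* names are assumptions; see the Model Training docs.
from tagion import TagionClient

client = TagionClient(api_key="YOUR_API_KEY")

job = client.training.submit(
    framework="pytorch",
    script="train.py",
    dataset_id="ds_123",
    gpus=4,                       # distributed training across 4 GPUs
)

status = client.training.get(job.id).status        # e.g. "queued", "running", "succeeded"
checkpoints = client.training.list_checkpoints(job.id)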

Inference API

Deploy models and make real-time predictions with low-latency inference endpoints

REST API • gRPC • Batch Inference
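
The API Quick Start below shows the raw REST call with curl; an equivalent Python call using the requests library looks like this (the /v1/inference path comes from that example, while the response shape is an assumption).

# Call the inference endpoint over REST with the requests library.
# The response field layout is an assumption for illustration.
import requests

resp = requests.post(
    "https://api.tagion.ai/v1/inference",
    headers={"Authorization": "Bearer YOUR_API_KEY"},
    json={"model": "gpt-4", "prompt": "Hello"},
    timeout=30,
)
resp.raise_for_status()
print(resp.json())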

GPU Cloud

Provision GPU instances, manage environments, and run custom workloads

CLI • Docker • SSH Access
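
GPU Cloud workflows are driven primarily through the CLI; purely as an illustration, the same operations through the Python SDK might look like the sketch below, where every client.gpu method and parameter name is an assumption.

# Hypothetical sketch: provision a GPU instance and connect to it.
# client.gpu.* is an assumed interface; the documented path is the CLI.
from tagion import TagionClient

client = TagionClient(api_key="YOUR_API_KEY")

instance = client.gpu.create_instance(
    gpu_type="a100",
    count=1,
    image="pytorch/pytorch:latest",   # Docker image for the environment
)
print(instance.ssh_command)           # connect over SSH once the instance is ready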

LLM Integration

Integrate trained language models into business workflows and applications

OpenAI Compatible • Streaming • Function Calling
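
Because the endpoints are OpenAI-compatible, an existing openai Python client can typically be pointed at Tagion by overriding the base URL. The base URL and model name in this sketch are assumptions; substitute the values from your deployment.

# Use the OpenAI-compatible endpoint with the openai Python client.
# base_url and model name are assumptions; check your deployment details.
from openai import OpenAI

client = OpenAI(base_url="https://api.tagion.ai/v1", api_key="YOUR_API_KEY")

stream = client.chat.completions.create(
    model="your-deployed-model",
    messages=[{"role": "user", "content": "Summarize this quarter's sales."}],
    stream=True,                      # token-by-token streaming
)
for chunk in stream:
    if chunk.choices and chunk.choices[0].delta.content:
        print(chunk.choices[0].delta.content, end="")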

API Quick Start

Authentication

All API requests require authentication using your API key in the Authorization header:

# Example API Request
curl https://api.tagion.ai/v1/inference \
  -H "Authorization: Bearer YOUR_API_KEY" \
  -H "Content-Type: application/json" \
  -d '{"model": "gpt-4", "prompt": "Hello"}'

Python SDK Installation

pip install tagion

# Initialize client
from tagion import TagionClient
client = TagionClient(api_key="YOUR_API_KEY")
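
A first call with the initialized client might look like the sketch below; the inference method name and response attribute are assumptions, so check the SDK reference for the exact signature.

# Hypothetical sketch of a first SDK call; method and field names are assumptions.
response = client.inference(model="gpt-4", prompt="Hello")
print(response.text)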

Rate Limits

Free tier: 100 requests/minute • Professional: 1,000 requests/minute • Enterprise: Custom limits
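
Requests over your tier's limit are typically rejected with HTTP 429. The sketch below retries with exponential backoff using the requests library; it assumes a 429 status code and an optional Retry-After header, which is standard practice but not confirmed here.

# Retry on HTTP 429 with exponential backoff.
# Assumes the API signals rate limiting with status 429 and may set Retry-After.
import time
import requests

def post_with_backoff(url, headers, payload, max_retries=5):
    for attempt in range(max_retries):
        resp = requests.post(url, headers=headers, json=payload, timeout=30)
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        wait = float(resp.headers.get("Retry-After", 2 ** attempt))
        time.sleep(wait)
    raise RuntimeError("Rate limit retries exhausted")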

Need Help?

Can't find what you're looking for? Our support team is here to help.