What you can do with the SDK
- Run AI inference — chat with hosted LLMs, generate images, transcribe speech
- Fine-tune models — upload datasets, train LoRA adapters, deploy to inference GPUs
- Verify computation — validate TEE attestations and signed responses on-chain
- Manage funds — deposit, transfer, and refund prepaid balances per provider
- Issue API keys — create persistent, revocable tokens for server applications
Supported service types
- Chatbot services — conversational AI with OpenAI-compatible endpoints
- Text-to-image — generate images from text prompts
- Image editing — modify existing images
- Speech-to-text — transcribe audio to text
- Fine-tuning — train custom LoRA adapters on your own datasets
How it works
Acknowledge a provider
One-time on-chain action that records the provider’s TEE signer to your account.
Why 0G Compute
- Up to 90% cheaper — pay only for the compute you use, with no monthly minimums, versus $0.03+ per request on managed APIs
- Instantly available — access thousands of GPUs globally with 50–100ms latency
- OpenAI-compatible — drop-in replacement at the HTTP layer; works with the OpenAI Python SDK
- TEE-verified — every response is cryptographically signed by a Trusted Execution Environment
- Decentralized — providers are matched on-chain, no single vendor controls access
Next steps
Install the SDK
Set up your environment and create a broker in under 5 minutes.
Run your first inference
Fund an account and make a chat completion request.
For protocol-level details and ecosystem docs, see the official 0G Labs documentation.