Every other inference provider is running your workloads on repurposed gaming hardware. We're not. Purpose-built ASICs, 1,000 tokens per second, 7x faster inference.
Hand this prompt to any autonomous coding agent. It'll handle the whole signup flow and return with an API key ready to use.
Sign me up for a General Compute API account and get an API key. Fetch instructions from https://docs.generalcompute.com/agent-signup and follow them.
Both responses below run GPT OSS 120B. One on your old GPU infrastructure. One on ours. Run the test and watch the gap.
Ready to compare
Try preset prompts or enter your own to compare inference speed in real time
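If you'd rather script the comparison than click through the demo, the same measurement can be sketched with the OpenAI Python SDK. This is a sketch under assumptions: `measure_stream` and `tokens_per_second` are illustrative helper names (not part of any SDK), and counting one token per streamed content chunk is only an approximation of true token throughput.

```python
import time


def tokens_per_second(n_tokens: int, elapsed_s: float) -> float:
    """Decode throughput in tokens/sec; returns 0.0 if no time elapsed."""
    return n_tokens / elapsed_s if elapsed_s > 0 else 0.0


def measure_stream(base_url: str, api_key: str, prompt: str,
                   model: str = "gpt-oss-120b") -> float:
    """Stream one completion and return the observed tokens/sec."""
    from openai import OpenAI  # deferred so the pure helper above has no deps
    client = OpenAI(base_url=base_url, api_key=api_key)
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    start, n = time.monotonic(), 0
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            n += 1  # rough proxy: one content chunk is roughly one token
    return tokens_per_second(n, time.monotonic() - start)
```

Run it once against each provider's base URL with the same prompt and compare the two numbers.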
GPUs carry decades of legacy architecture — designed for rendering pixels, adapted for training, and now pressed into inference. We skipped all of that.
MiniMax M2.5 model comparison
Throughput (tokens/sec)*
Higher is better
Energy Usage*
Lower is better
Energy Cost
Lower is better
*Projected on next-generation racks. NVIDIA throughput via Together AI benchmarks. Energy cost compares the US commercial average electricity rate with our rate.
Whether you're prototyping with our models or deploying your own weights at scale — same hardware, same speed, your choice of setup.
REST API with OpenAI-compatible endpoints. Access the fastest models with a single API key.
Get API Key
Dedicated infrastructure with SLAs, custom scaling, and guaranteed capacity for your workloads.
Contact Sales
Deploy any model on our optimized infrastructure. Same speed, your weights.
Learn More
Faster Inference
Time to First Token
Uptime SLA
Tokens per Second
*Performance varies by model and geography.
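Time to first token is easy to measure yourself from a streaming response: it is the gap between sending the request and receiving the first content chunk. A minimal sketch, assuming the OpenAI Python SDK; `measure_ttft` and `first_token_latency` are hypothetical helper names introduced here for illustration.

```python
import time


def first_token_latency(arrivals: list, start: float) -> float:
    """TTFT: gap from request start to the first chunk's arrival time."""
    return arrivals[0] - start if arrivals else float("inf")


def measure_ttft(base_url: str, api_key: str, prompt: str,
                 model: str = "gpt-oss-120b") -> float:
    """Send one streaming request and return the time to first token."""
    from openai import OpenAI  # deferred: only needed for the live call
    client = OpenAI(base_url=base_url, api_key=api_key)
    start = time.monotonic()
    stream = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        stream=True,
    )
    arrivals = []
    for chunk in stream:
        if chunk.choices and chunk.choices[0].delta.content:
            arrivals.append(time.monotonic())
            break  # only the first token matters for TTFT
    return first_token_latency(arrivals, start)
```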
OpenAI-compatible API. Change your base URL, swap your key, and you're running on ASIC infrastructure. Your existing code doesn't change.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.generalcompute.com",
    api_key="your-api-key",
)

# Stream the response and print tokens as they arrive
response = client.chat.completions.create(
    model="gpt-oss-120b",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in response:
    print(chunk.choices[0].delta.content or "", end="")

Get your API key in seconds. OpenAI-compatible — just change your base URL. $5 free credit to see the difference yourself.