Migrate from Claude to typed
For Claude Pro / Claude Max customers evaluating typed as a replacement.
1. The two-line config change
typed speaks the Anthropic API. Any client that already talks to Claude works against typed after changing two environment variables (plus a third to pick a model):
export ANTHROPIC_BASE_URL=https://api.typed.cloud
export ANTHROPIC_API_KEY=<your typed key>
export ANTHROPIC_MODEL=typed-xhigh
Note the third line: typed uses its own model identifiers (typed-max, typed-xhigh, typed-high, typed-medium, typed-low, typed-minimal). If you forget to set ANTHROPIC_MODEL and your client sends an Anthropic model name like claude-3.5-sonnet, typed returns a 400 with the full list of valid identifiers in the error body. See section 1a below for how to choose.
We run our own day-to-day development in Claude Code against typed. Other Anthropic-API-compatible clients (Cursor, Cline, Roo Code, OpenClaw, and any tool that reads ANTHROPIC_API_KEY from the environment) should work the same way, since the API is wire-compatible. If your client doesn't pick up the env vars, set the same values in its config file.
You can flip back and forth between Claude and typed by swapping these variables. Nothing else changes.
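With the variables above exported, you can sanity-check the wiring before launching a full client. A minimal sketch using the standard Anthropic Messages endpoint shape (path and headers follow the Anthropic API; the response text will of course vary):

```shell
# One-shot request through typed's Anthropic-compatible endpoint.
# Uses the env vars set above; expects Messages-API JSON with a
# "content" array on success.
curl -s "$ANTHROPIC_BASE_URL/v1/messages" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "typed-xhigh",
    "max_tokens": 64,
    "messages": [{"role": "user", "content": "Reply with one word: ok"}]
  }'
```

If the key and base URL are right, you get a normal Messages-API response; swap the variables back and the identical command hits Claude.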
1a. Choosing a model
typed exposes six effort tiers, mirroring the agent-effort vocabulary Claude Code already uses. Set ANTHROPIC_MODEL to one of these:
| Model ID | Use when | Quality | Latency | Cost per request |
|---|---|---|---|---|
| typed-max | Genuinely hard problems: architectural design, complex multi-file refactors, debugging that needs deep reasoning, anything irreversible | Highest | Slowest (often 10-40s on hard prompts) | Most expensive per request. Highest reasoning effort against the most capable model in our text fleet. |
| typed-xhigh (default) | Production coding default. Most Claude Code sessions. Routine refactors, feature work, code review | High (cache-reliable) | Fast (3-8s) | The default. A different model than typed-max -- the one that engages our prompt cache reliably. Much cheaper per request in practice because most coding-client traffic gets cache discounts. |
| typed-high | One notch below default. Use when you want serious quality but don't need maximum reasoning depth | High (heavier reasoning) | Medium (8-25s) | Same underlying model as typed-max at lower reasoning effort -- fewer reasoning tokens generated, so cheaper per request than typed-max. |
| typed-medium | Routine edits, simple refactors, quick lookups where you do not need deep reasoning | Mid | Fast on routine work; reasoning-heavy prompts may stretch to ~15s | Lighter (and faster) model than typed-high. Significantly cheaper per request -- typical workloads land around ~10x less than typed-max. |
| typed-low | Single-line fixes, syntax help, quick questions | Mid (less reasoning depth) | Faster than typed-medium | Same underlying model as typed-medium at lower reasoning effort. Cheaper still per request. |
| typed-minimal | Trivial throwaway work: echo, classify, very short edits | Lowest | Fastest | Same underlying model as typed-low at minimal reasoning effort. Cheapest tier; reserve for trivial throwaway work. |
Default is typed-xhigh. If you set ANTHROPIC_MODEL=typed-xhigh (or omit ANTHROPIC_MODEL entirely, for clients that let typed's server-side default apply), you get the production tier that routes to our cache-reliable coding model. This is the right choice for ~90% of Claude Code sessions.
Bump to typed-max when you have a hard problem that's worth the latency. Drop to typed-medium or typed-low when you want a quick answer on routine work and can accept slightly less depth.
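Because the model is just an environment variable, tier switching can be per-invocation rather than a config edit. A sketch, assuming a client such as the Claude Code CLI (`claude`) that reads ANTHROPIC_MODEL at startup:

```shell
# Keep typed-xhigh as the default in your shell profile...
export ANTHROPIC_MODEL=typed-xhigh

# ...and override per session when the task warrants it.
ANTHROPIC_MODEL=typed-max claude   # hard multi-file refactor, worth the latency
ANTHROPIC_MODEL=typed-low claude   # quick syntax question
```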
Legacy aliases that work for backward compatibility:
| Alias | Resolves to |
|---|---|
| typed-pro | typed-xhigh (the default tier) |
| typed-fast | typed-medium |
| typed-long-context | typed-xhigh (all text tiers handle 1M context; the dedicated long-context ID is redundant) |
Any other model ID -- claude-* and gpt-* names, typos, anything unrecognized -- returns a 400 with the full list of valid IDs in the error body.
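The rejection is easy to observe directly. A sketch using the Anthropic Messages endpoint shape (`-w` prints the HTTP status, which per the behavior described above is 400 for an unknown model ID):

```shell
# Send a Claude model name without remapping; print only the status code.
curl -s -o /dev/null -w '%{http_code}\n' "$ANTHROPIC_BASE_URL/v1/messages" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{"model": "claude-3.5-sonnet", "max_tokens": 16,
       "messages": [{"role": "user", "content": "hi"}]}'
```

Since the error body carries the full list of valid IDs, running the non-silent variant through a JSON pretty-printer is a quick way to see every tier your key accepts.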
2. What works identically
- Most coding workflows: refactoring, debugging, code generation, code explanation, test writing, doc writing.
- Image input: paste screenshots, design mockups, error messages. typed accepts the same multimodal content shape Claude does.
- Long-context text up to 1M tokens on every text tier (typed-max, typed-xhigh, typed-high, typed-medium, typed-low, typed-minimal). Claude's flagship API models support a comparable 1M context window via an anthropic-beta: context-1m-2025-08-07 opt-in header that prices long-context input at 2x input / 1.5x output above 200K. typed delivers 1M on every text route at the normal per-token rate -- no beta header, no premium long-context pricing tier. Multimodal requests (any message containing an image) cap at 262K tokens, matching the underlying multimodal model's published ceiling.
- MCP servers: if your client already mounts MCP servers against Claude, the same configuration works against typed.
- Prompt caching: available. It engages most reliably on the default typed-xhigh tier; other tiers cache on a best-effort basis.
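Prompt caching rides on the same request shape as Anthropic's prompt caching: mark the end of a large, stable prefix with a cache_control breakpoint and keep that prefix byte-identical across requests. A hedged sketch on the default tier (the block shape is the Anthropic API's; the placeholder system text is illustrative):

```shell
# Cache a large stable prefix (system prompt / repo conventions) and
# vary only the trailing user turn between requests.
curl -s "$ANTHROPIC_BASE_URL/v1/messages" \
  -H "x-api-key: $ANTHROPIC_API_KEY" \
  -H "anthropic-version: 2023-06-01" \
  -H "content-type: application/json" \
  -d '{
    "model": "typed-xhigh",
    "max_tokens": 256,
    "system": [
      {
        "type": "text",
        "text": "You are reviewing code for ExampleCo. <large stable style guide here>",
        "cache_control": {"type": "ephemeral"}
      }
    ],
    "messages": [{"role": "user", "content": "Review the following diff."}]
  }'
```

Repeated requests with the same prefix should show cache-read token counts in the response's usage block, which is where the per-request discount comes from.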
3. What's different -- be honest
typed is a different model than Claude.
- Different model = different responses on edge cases. Most coding work will feel identical; some won't. We encourage spot-checking the workflows that matter to you.
- Knowledge cutoff varies by underlying model. For recent libraries, frameworks, and APIs published after the model's training, paste relevant docs into the prompt.
- No Claude artifacts (browser-rendered code previews). Keep Claude for that surface if you rely on artifacts heavily.
- Billing structure differs. typed bills monthly. By default, requests past quota return a 429 with a one-click top-up prompt; opt in to auto-top-up in your dashboard if you'd rather be charged automatically than prompted. Claude's quota resets on rolling 5-hour windows plus a weekly cap. Different shapes; the monthly shape is a better fit for some workflows.
- Coding-only product surface. typed is tuned for coding workflows. General chat and creative writing will work but are not the design target.
4. The numbers
| | Claude Pro | typed Pro | Claude Max 5x | typed Max |
|---|---|---|---|---|
| Monthly | $20 | $20 (same) | $100 | $100 (same) |
| Annual | $200 | $200 (same) | not offered | $1,000 |
| Monthly usage | Claude Pro standard tier | At least Claude Pro's capacity | Claude Max 5x standard tier | At least Claude Max 5x's capacity |
| Image input | yes | yes | yes | yes |
| Context window | 200K (Sonnet 4.5, Haiku 4.5); 1M beta (Sonnet 4.6, Opus 4.6/4.7) | 1M (text); 262K (multimodal) | 200K (Sonnet 4.5, Haiku 4.5); 1M beta (Sonnet 4.6, Opus 4.6/4.7) | 1M (text); 262K (multimodal) |
| Prompt caching | yes | yes (best on typed-xhigh tier) | yes | yes (best on typed-xhigh tier) |
| Overage available | yes (Extra Usage) | yes (top-ups) | yes (Extra Usage) | yes (top-ups) |
| Overage rate (input) | $1-5 / M tokens (Haiku to Opus, current gen) | $1.67 / M (~45-67% cheaper vs Sonnet/Opus) | $1-5 / M tokens (Haiku to Opus, current gen) | $1.67 / M (~45-67% cheaper vs Sonnet/Opus) |
| Overage rate (output) | $5-25 / M tokens (Haiku to Opus, current gen) | $8.33 / M (~45-67% cheaper vs Sonnet/Opus) | $5-25 / M tokens (Haiku to Opus, current gen) | $8.33 / M (~45-67% cheaper vs Sonnet/Opus) |
| Billing structure | 5-hour windows + weekly | Monthly with auto-top-up at quota | 5-hour windows + weekly | Monthly with auto-top-up at quota |
The pitch in one sentence: same monthly price as Claude, with 1M context (vs Claude's 200K default), overage roughly half what Claude charges, monthly billing instead of rolling 5-hour windows, and an annual Max plan Anthropic does not offer.
Claude's "1M beta" context applies to API requests sent with the anthropic-beta: context-1m-2025-08-07 header, and long-context input above 200K is priced at 2x input / 1.5x output. typed delivers 1M on every text tier with no beta header and no premium long-context pricing -- the rates in the table apply at any context size up to the cap. Multimodal requests on typed cap at 262K (the underlying multimodal model's published ceiling). See Claude pricing and context windows for source-of-truth details on Claude's side.
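To make the flat overage rates concrete, here is a back-of-envelope cost for one large-context request at the table's rates ($1.67/M input, $8.33/M output); the token counts are an illustrative assumption:

```shell
# Overage cost of a single 500K-input / 4K-output request at typed's rates.
awk 'BEGIN {
  in_tok  = 500000          # input tokens (assumed)
  out_tok = 4000            # output tokens (assumed)
  cost = in_tok / 1e6 * 1.67 + out_tok / 1e6 * 8.33
  printf "$%.3f\n", cost    # 0.835 input + 0.033 output
}'
```

The same request above 200K context on Claude's 1M beta would additionally carry the 2x input / 1.5x output long-context multiplier described above.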
Equivalence note: monthly usage equivalence is estimated against Anthropic's standard tier capacities. Anthropic does not officially publish per-tier token quotas, so the comparison is approximate. typed's internal quotas (a technical detail for migration evaluators -- customer-facing pricing copy uses Claude-equivalent framing per our pricing policy): Starter = 15M input + 3M output tokens per month, Pro = 45M input + 9M output, Max = 225M input + 45M output. Cache-eligible repeated-prefix workflows effectively stretch these significantly. Full methodology at typed.cloud/pricing/details.
5. Sales-final policy (clearly disclosed)
All sales final. Cancellations take effect at the next renewal -- the remainder of your paid period is served normally, and your API key keeps working until the period ends.
We process discretionary refunds on a case-by-case basis for billing errors or extended service outages on our side -- email support@typed.cloud. We do not promise an automatic refund window because cost-of-goods scales with usage, and we would rather be honest about the economics than build the refund into the price.
If you are not sure typed will work for your workflow, the right thing to do is pay for one month, try it for a week, and cancel before renewal if it does not suit you. The remaining three weeks of that month still serve normally. That is the trial.
6. Getting started
- Sign up at app.typed.cloud and pick a plan.
- Copy your API key from the dashboard.
- Set the env vars above in your shell or your client's config.
- Run a normal Claude Code session. Usage will appear in your dashboard within a few seconds of the first request.
If you hit problems, email support@typed.cloud. Include your account email and the approximate UTC timestamp of the failed request -- that is enough for us to find the trace.
Last updated 2026-05-11.