# How to Add ATXP to Mastra
Mastra’s step-based workflow model is one of the cleanest execution primitives in the TypeScript agent ecosystem. Each step is typed, isolated, and composable — which means each step is also a discrete cost unit. The moment your Mastra agent starts calling paid APIs, that property becomes critical for payment tracking and cost control.
The problem: Mastra doesn’t have a native payment layer. You can wire tools that call external APIs, but tracking what each step costs, enforcing per-step spend limits, and preventing runaway workflows requires you to build that infrastructure yourself — or use ATXP.
This guide covers the complete integration: installing ATXP alongside Mastra, defining payment-gated tools with TypeScript types, mapping ATXP credits onto Mastra’s workflow steps, and setting budget limits that fire before your workflow drains a credit card.
## What Makes Mastra Different From Other Agent Frameworks?
Mastra is TypeScript-native in a way that matters architecturally, not just syntactically. Its core primitives are:
- Agents — LLM-backed entities with tools, memory, and a model config
- Tools — strongly-typed functions with Zod-validated inputs and outputs
- Workflows — directed step graphs where each step has explicit inputs, outputs, and transitions
- Integrations — first-class connectors to external services
The step graph model is the key distinction. In frameworks like LangChain or LlamaIndex, execution is more chain-like — you compose a sequence and run it. In Mastra, a workflow is a named graph of named steps, each with its own typed context. You can suspend a step, retry a step, and branch on step outputs.
That structure maps perfectly onto per-step cost tracking. When step fetch-competitor-prices calls a paid scraping API, you want to know what it cost — separately from what step generate-summary cost at the LLM layer. ATXP handles both layers through a single credential.
## How Does ATXP’s Credit Model Map Onto Mastra’s Step Execution?
ATXP operates on a prepaid credit balance. Your agent carries a wallet; every API call debits from it. The balance is scoped: you can fund a single workflow run, a single agent session, or a shared pool across your entire application.
That scope flexibility is what makes it work well with Mastra’s execution model:
| Scope | ATXP configuration | Mastra mapping |
|---|---|---|
| Per workflow run | Pass a run-scoped API key or budget cap | One ATXPClient instance per workflow execute() call |
| Per step | Track credit delta before/after each step | Read creditsRemaining from ATXP response in each tool |
| Per agent session | Shared client across steps | Instantiate ATXPClient once, pass through context |
| Hard cap | Set maxCredits on the client | Workflow suspends when cap is hit |
The ATXP credit model is documented in detail in How ATXP’s IOU Model Caps Agent Spending — the short version is that credits are pre-purchased and consumed at cost with a small platform margin. No credit card exposed to individual API vendors.
## Setting Up ATXP in a Mastra Project
Install the ATXP SDK alongside your Mastra dependencies:
```bash
npm install @mastra/core @mastra/openai atxp zod
```
Initialize the ATXP client and configure it with your API key. For Mastra workflows, the right pattern is to create the client once and pass it through workflow context — not to instantiate it inside every tool.
```typescript
// lib/atxp.ts
import { ATXPClient } from 'atxp';

export function createATXPClient(options?: { maxCredits?: number }) {
  return new ATXPClient({
    apiKey: process.env.ATXP_API_KEY!,
    maxCredits: options?.maxCredits,
    onCreditWarning: (remaining) => {
      console.warn(`[ATXP] Credits low: ${remaining} remaining`);
    },
    onCapReached: () => {
      throw new Error('ATXP budget cap reached — workflow suspended');
    },
  });
}
```
The maxCredits parameter is the critical control. Set it per workflow run and ATXP will throw before debiting past the limit. Your Mastra workflow catches that error like any other step failure.
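The enforcement logic is worth understanding even if the SDK handles it for you. Here is a minimal sketch of a pre-debit cap check, assuming the `BudgetCapError` name used later in this guide; this is an illustration of the mechanism, not the real SDK internals:

```typescript
// Hypothetical sketch of a pre-debit cap check. Illustrates the mechanism only;
// the actual ATXP SDK internals may differ.
class BudgetCapError extends Error {
  constructor(public attempted: number, public remaining: number) {
    super(`Budget cap reached: tried to debit ${attempted}, only ${remaining} left`);
  }
}

class CreditMeter {
  private spent = 0;
  constructor(private maxCredits: number) {}

  // Throws BEFORE the debit that would cross the cap, so no partial charge lands.
  debit(credits: number): void {
    if (this.spent + credits > this.maxCredits) {
      throw new BudgetCapError(credits, this.maxCredits - this.spent);
    }
    this.spent += credits;
  }

  get remaining(): number {
    return this.maxCredits - this.spent;
  }
}
```

The key property is that the check happens before the charge: a workflow that trips the cap fails cleanly at a step boundary with the balance intact.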
## How Do You Define Payment-Gated Tools in Mastra?
Mastra tools are defined with createTool from @mastra/core. Each tool has an input schema, output schema, and execute function. ATXP integrates at the execute layer — the tool calls the ATXP-powered endpoint instead of the raw vendor API.
Here’s a web search tool that routes through ATXP:
```typescript
// tools/web-search.ts
import { createTool } from '@mastra/core';
import { z } from 'zod';
import type { ATXPClient } from 'atxp';

const WebSearchInput = z.object({
  query: z.string().min(1),
  maxResults: z.number().default(5),
});

const WebSearchOutput = z.object({
  results: z.array(
    z.object({
      title: z.string(),
      url: z.string(),
      snippet: z.string(),
    }),
  ),
  creditsUsed: z.number(),
  creditsRemaining: z.number(),
});

export function createWebSearchTool(atxp: ATXPClient) {
  return createTool({
    id: 'web-search',
    description: 'Search the web. Returns ranked results with source URLs.',
    inputSchema: WebSearchInput,
    outputSchema: WebSearchOutput,
    execute: async ({ context }) => {
      const result = await atxp.search({
        query: context.query,
        maxResults: context.maxResults,
      });
      return {
        results: result.results,
        creditsUsed: result.creditsUsed,
        creditsRemaining: result.creditsRemaining,
      };
    },
  });
}
```
The pattern — factory function that takes an ATXPClient — is intentional. It keeps the tool pure and testable while letting you swap clients per workflow run (different budgets, different scopes).
The same pattern applies to every ATXP tool: image generation, email send, LLM inference via non-default models. See How to Build an AI Agent Without API Keys for a broader treatment of why consolidating to a single credential layer matters at scale.
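One concrete payoff of the factory pattern is testability: anything that satisfies the client's `search` shape can stand in for the real `ATXPClient` in unit tests. A minimal sketch, where the `SearchClient` interface and the stub are our own illustrations rather than part of either SDK:

```typescript
// Structural interface covering only the method the web-search tool uses.
// Illustrative; the real ATXPClient exposes more than this.
interface SearchClient {
  search(args: { query: string; maxResults: number }): Promise<{
    results: { title: string; url: string; snippet: string }[];
    creditsUsed: number;
    creditsRemaining: number;
  }>;
}

// Stub that records queries and returns canned results: no network, no credits.
function createStubClient(): SearchClient & { calls: string[] } {
  const calls: string[] = [];
  return {
    calls,
    async search({ query }) {
      calls.push(query); // runs synchronously on call, before the promise resolves
      return {
        results: [{ title: 'stub', url: 'https://example.com', snippet: '...' }],
        creditsUsed: 0,
        creditsRemaining: 500,
      };
    },
  };
}
```

In a test you pass `createStubClient()` where the factory expects a client, then assert on `calls` to verify what the tool actually queried.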
## Wiring Tools Into a Mastra Agent
```typescript
// agents/research-agent.ts
import { Agent } from '@mastra/core';
import { openai } from '@mastra/openai';
import { createATXPClient } from '../lib/atxp';
import { createWebSearchTool } from '../tools/web-search';

const atxp = createATXPClient({ maxCredits: 500 }); // 500 credits per session

export const researchAgent = new Agent({
  name: 'ResearchAgent',
  instructions: `You are a research assistant. Use web search to find current information.
Always cite your sources. Stop if you exhaust your credit budget.`,
  model: openai('gpt-4o'),
  tools: {
    webSearch: createWebSearchTool(atxp),
  },
});
```
The maxCredits: 500 limit here is session-scoped. Every tool call debits from the same pool. If the agent loops and fires 50 searches, it hits the cap and stops — no surprise invoice.
## How Do You Track Spend Per Step in a Mastra Workflow?
This is where Mastra’s step model earns its keep. Because each step has explicit typed outputs, you can surface ATXP credit consumption as a first-class output and inspect it in the workflow transition logic.
```typescript
// workflows/competitor-analysis.ts
import { createWorkflow, createStep } from '@mastra/core';
import { z } from 'zod';
import { createATXPClient } from '../lib/atxp';
import { createWebSearchTool } from '../tools/web-search';

const WORKFLOW_CREDIT_CAP = 1000;
const atxp = createATXPClient({ maxCredits: WORKFLOW_CREDIT_CAP });
const webSearch = createWebSearchTool(atxp);

const fetchCompetitorPrices = createStep({
  id: 'fetch-competitor-prices',
  inputSchema: z.object({ competitors: z.array(z.string()) }),
  outputSchema: z.object({
    prices: z.record(z.number()),
    stepCreditsUsed: z.number(),
  }),
  execute: async ({ inputData }) => {
    let stepCredits = 0;
    const prices: Record<string, number> = {};
    for (const competitor of inputData.competitors) {
      const result = await webSearch.execute({
        context: { query: `${competitor} pricing 2026`, maxResults: 3 },
      });
      stepCredits += result.creditsUsed;
      // parsePrice is an app-specific helper that extracts a price from search results
      prices[competitor] = parsePrice(result.results);
    }
    return { prices, stepCreditsUsed: stepCredits };
  },
});

const generateReport = createStep({
  id: 'generate-report',
  inputSchema: z.object({
    prices: z.record(z.number()),
    stepCreditsUsed: z.number(),
  }),
  outputSchema: z.object({
    report: z.string(),
    totalWorkflowCredits: z.number(),
  }),
  execute: async ({ inputData }) => {
    // buildReport is an app-specific helper; its LLM call is also billed through ATXP
    const balance = await atxp.getBalance();
    return {
      report: buildReport(inputData.prices),
      totalWorkflowCredits: WORKFLOW_CREDIT_CAP - balance.creditsRemaining,
    };
  },
});

export const competitorAnalysisWorkflow = createWorkflow({
  name: 'competitor-analysis',
  triggerSchema: z.object({ competitors: z.array(z.string()) }),
})
  .then(fetchCompetitorPrices)
  .then(generateReport)
  .commit();
```
The stepCreditsUsed field in each step output makes cost attribution explicit in your workflow logs. When this workflow runs in production, you know exactly what fetch-competitor-prices cost versus generate-report.
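Once every step emits `stepCreditsUsed`, rolling the numbers up into a run-level summary for your logs is a small reduction. A sketch, assuming a simple log record shape of our own (this is not a Mastra API):

```typescript
// Assumed shape of one per-step cost record pulled from workflow step outputs.
interface StepCost {
  stepId: string;
  stepCreditsUsed: number;
}

// Aggregate per-step credit usage into a run-level summary for logging.
function summarizeRunCost(steps: StepCost[]): {
  total: number;
  byStep: Record<string, number>;
} {
  const byStep: Record<string, number> = {};
  let total = 0;
  for (const s of steps) {
    byStep[s.stepId] = (byStep[s.stepId] ?? 0) + s.stepCreditsUsed;
    total += s.stepCreditsUsed;
  }
  return { total, byStep };
}
```

Emit the summary once per run and you can chart cost per step across deployments without touching the ATXP server-side logs.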
## Ready to Add Payments to Your Mastra Agent?
ATXP handles the payment layer so you don’t have to manage vendor credentials, billing accounts, or per-API rate limits. One API key, 14+ tools, and a hard cap that protects your budget.
Get your ATXP API key at atxp.ai →
Setup takes about ten minutes. The maxCredits parameter alone is worth it.
## Mastra vs. Other TypeScript Frameworks: Payment Integration Complexity
If you’re evaluating Mastra against other TypeScript options, the payment integration story varies significantly:
| Framework | Language | Step isolation | ATXP integration pattern | Typed tool outputs |
|---|---|---|---|---|
| Mastra | TypeScript | Named steps, explicit graph | Factory pattern, per-step tracking | Yes — Zod schemas |
| Vercel AI SDK | TypeScript | Implicit (tool call loop) | Per-tool-call injection | Partial |
| LangChain.js | TypeScript/JS | Chain steps, less typed | Chain middleware | Partial |
| OpenAI Agents SDK | Python/TS | Handoffs and tool calls | Context injection | Yes |
Mastra’s explicit step graph and strong Zod typing make per-step cost attribution cleaner than any other TypeScript framework in this list. The typed output schema means your creditsUsed field is never any — it’s validated at runtime and surfaced in your workflow logs.
For comparison on how ATXP integrates with the OpenAI Agents SDK, see How to Add ATXP to the OpenAI Agents SDK. The patterns are similar but Mastra’s step isolation gives you finer-grained attribution.
## What Does a Mastra Workflow With Budget Controls Actually Cost?
To ground the numbers: here’s a realistic cost breakdown for the competitor analysis workflow above, assuming ATXP standard pricing.
| Step | API calls | Estimated credits | Notes |
|---|---|---|---|
| fetch-competitor-prices (5 competitors) | 5 web searches | ~25 credits | ~5 credits per search |
| generate-report (GPT-4o, 2K tokens) | 1 LLM call | ~8 credits | Routed through ATXP |
| Total per run | 6 API calls | ~33 credits | Well under 1000-credit cap |
At $0.01/credit (approximate), a full competitor analysis workflow runs under $0.35. The 1000-credit cap gives you ~30 runs before it fires. That’s the right mental model: set your cap at 3-5x your expected single-run cost, so you have headroom for retries and parallel runs without exposure to runaway loops.
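That sizing rule is easy to encode. A sketch of a cap-sizing helper built on the 3-5x guidance above; the function is ours for illustration, not part of the ATXP SDK:

```typescript
// Suggest a maxCredits cap from the expected single-run cost.
// multiplier defaults to 4, the middle of the 3-5x headroom guidance.
function suggestCap(expectedRunCredits: number, multiplier = 4): number {
  if (expectedRunCredits <= 0) {
    throw new Error('expected run cost must be positive');
  }
  return Math.ceil(expectedRunCredits * multiplier);
}
```

For the ~33-credit workflow above, `suggestCap(33)` yields a 132-credit cap: enough headroom for retries, tight enough that a runaway loop dies after a handful of extra runs.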
For a deeper look at how the numbers compound across a real API stack, The Real Cost of Managing 7 AI APIs Yourself breaks down credential overhead vs. per-call cost.
## Frequently Asked Questions
### Does ATXP work with Mastra’s built-in memory and storage?
Yes. ATXP operates at the tool execution layer — it doesn’t touch Mastra’s memory or storage subsystems. You can use LibSQL, PostgreSQL, or any other Mastra-supported storage backend without changes. The ATXP client is stateless between steps; the only shared state is the credit balance on the ATXP server side.
### What happens when a Mastra workflow hits the ATXP credit cap mid-step?
The ATXPClient throws a BudgetCapError before making the API call that would exceed the limit. In a Mastra workflow, that surfaces as a step error. You can catch it in the step’s onError handler, emit a budget-exhausted event, and either suspend the workflow or return a partial result. The workflow does not silently continue — no charges happen after the cap.
### Can I use different credit budgets for different workflow runs?
Yes. Because the ATXPClient is instantiated with maxCredits per workflow execution (using the factory pattern shown above), you can vary the budget per run. A nightly batch workflow might get 10,000 credits; a real-time user-triggered workflow gets 200. The per-run isolation is complete — one run’s spending doesn’t affect another’s cap.
### Does this work with Mastra agents running on Cloudflare Workers or edge runtimes?
Yes. The ATXP SDK uses the Fetch API for all HTTP calls and has no Node.js-specific dependencies. It runs in any V8-based edge runtime. The one caveat: the ATXPClient should be instantiated per-request in edge contexts (not at module scope), since edge workers don’t share process state between invocations.
### How do I audit what each workflow step spent?
Surface creditsUsed in each step’s output schema (as shown in the workflow example above). ATXP also exposes a /v1/usage endpoint that returns a timestamped log of every credit debit against your key. In production, pipe that to your observability stack alongside your Mastra workflow logs and you have full per-step cost attribution without building any custom instrumentation.
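If you pull the server-side usage log, attributing debits back to tools is another small reduction. A sketch, where the `UsageEntry` fields are an assumed shape for illustration; check the actual endpoint response before relying on them:

```typescript
// Assumed shape of one entry from a usage log; verify against the real
// ATXP /v1/usage response before using in production.
interface UsageEntry {
  timestamp: string;
  tool: string;
  credits: number;
}

// Total credits consumed per tool across a usage window.
function creditsByTool(entries: UsageEntry[]): Record<string, number> {
  return entries.reduce<Record<string, number>>((acc, e) => {
    acc[e.tool] = (acc[e.tool] ?? 0) + e.credits;
    return acc;
  }, {});
}
```

Cross-referencing this server-side view against the `stepCreditsUsed` fields in your workflow logs catches any drift between what your steps reported and what was actually debited.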
The full payment story for autonomous agents — why per-call accounting matters and what breaks without it — is in How Agents Pay for API Calls. Mastra gives you the best execution model in the TypeScript ecosystem. ATXP gives you the payment layer it’s missing.