# How to Build a Research Agent With ATXP
Research agents are the most common first agent project. They’re also where most tutorials fall short — they show you how to call a search API, not how to build a real research pipeline that reads pages, handles errors, and does something useful with the output.
This is the full build, with real cost numbers.
## What a Research Agent Does
A research agent takes a question or topic and produces a structured answer or summary by:
- Searching for relevant sources
- Reading the most relevant pages
- Synthesizing findings with an LLM
- Storing or delivering the result
The key word is “reading.” A research agent that only looks at search snippets produces shallow output. The quality jump comes from reading full page content — which requires a browsing tool, not just a search tool.
Most developers underestimate how many tool calls a real research run requires. A thorough investigation of a topic might involve 10–15 searches, 6–8 full page reads, and 2–3 LLM synthesis calls. Each of those costs something.
## The Tools It Needs
| Tool | Purpose | Approx Cost |
|---|---|---|
| Web search | Find relevant URLs, get snippets | $0.003–$0.005 per query |
| Web browsing | Read full page content | $0.01–$0.02 per page |
| LLM | Synthesize findings, summarize | $0.002–$0.008 per call |
| File storage | Persist research output | ~$0.001 per save |
| Email (optional) | Deliver report to a human | ~$0.001 per send |
The cost structure matters. If you use a flat-rate research API, you pay whether you run 1 query or 1,000. With ATXP’s per-call billing, you pay for exactly what you use. At research-agent scale — dozens to hundreds of runs per day — the difference adds up fast.
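Those call counts and per-call prices are easy to put together. Here is a back-of-envelope sketch of what a single thorough run costs, using the price ranges from the table above (the `estimate_run_cost` helper is illustrative, not part of the ATXP SDK):

```python
# Back-of-envelope cost estimate for one research run, using the
# per-call price ranges from the tools table above.
PRICES = {                       # (low, high) USD per call
    "search": (0.003, 0.005),
    "browse": (0.01, 0.02),
    "llm":    (0.002, 0.008),
}

def estimate_run_cost(calls: dict) -> tuple:
    """Return (low, high) total cost in USD for a dict of {tool: num_calls}."""
    low = sum(PRICES[tool][0] * n for tool, n in calls.items())
    high = sum(PRICES[tool][1] * n for tool, n in calls.items())
    return round(low, 3), round(high, 3)

# A "thorough" run: 15 searches, 8 page reads, 3 synthesis calls
low, high = estimate_run_cost({"search": 15, "browse": 8, "llm": 3})
print(f"${low}-${high} per run")
```

Even the heavy end of a thorough investigation lands in the tens of cents, which is the scale the rest of this article assumes.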
## Building It With ATXP
Provision your account:

```bash
npx atxp
```

Install the SDK:

```bash
pip install atxp
```
Basic research agent:
```python
import atxp

client = atxp.Client()

def research(topic: str, depth: int = 3) -> dict:
    """
    Research a topic using web search + browsing + LLM synthesis.
    depth: number of pages to read (more = thorough but slower)
    """
    # Step 1: Search
    search_results = client.tools.web_search(
        query=topic,
        num_results=10
    )

    # Step 2: Read top pages
    page_contents = []
    for result in search_results[:depth]:
        try:
            page = client.tools.browse(url=result["url"])
            page_contents.append({
                "url": result["url"],
                "title": result["title"],
                "content": page["content"][:3000]  # cap per page
            })
        except Exception:
            continue  # skip pages that fail to load

    # Step 3: Synthesize
    source_text = "\n\n---\n\n".join([
        f"Source: {p['title']} ({p['url']})\n\n{p['content']}"
        for p in page_contents
    ])
    synthesis = client.tools.llm(
        prompt=f"""
You are a research analyst. Synthesize the following sources into a
structured research brief on: {topic}

Format:
- Executive summary (2-3 sentences)
- Key findings (bullet list)
- Sources reviewed

Sources:
{source_text}
""",
        model="claude-3-5-sonnet"
    )

    return {
        "topic": topic,
        "summary": synthesis["text"],
        "sources": [p["url"] for p in page_contents],
        "pages_read": len(page_contents)
    }

# Run it
result = research("multi-agent payment infrastructure 2026", depth=5)
print(result["summary"])
```
Once you've run `npx atxp`, web search, browsing, LLM access, and all other tools are available immediately, with no separate API keys needed.
## Example: A Competitive Intelligence Agent
Here’s a more production-ready version that monitors competitors and delivers weekly reports:
```python
import atxp
from datetime import datetime

client = atxp.Client()

COMPETITORS = ["competitor-a.com", "competitor-b.com", "competitor-c.com"]
REPORT_RECIPIENT = "strategy@yourcompany.com"

def competitive_intel_report():
    findings = {}

    for competitor in COMPETITORS:
        # Search for recent news and updates
        news = client.tools.web_search(
            query=f"site:{competitor} OR \"{competitor}\" news updates 2026",
            num_results=8
        )

        # Read their homepage + any recent blog posts
        pages = []
        urls_to_read = [f"https://{competitor}"] + [r["url"] for r in news[:3]]
        for url in urls_to_read:
            try:
                page = client.tools.browse(url=url)
                pages.append(page["content"][:2000])
            except Exception:
                continue

        # Summarize
        if pages:
            summary = client.tools.llm(
                prompt=f"Summarize recent activity and any notable changes for {competitor}:\n\n" + "\n\n".join(pages),
                model="claude-3-5-haiku"
            )
            findings[competitor] = summary["text"]
        else:
            findings[competitor] = "No data retrieved."

    # Compile report
    report = f"Competitive Intelligence Report — {datetime.now().strftime('%B %d, %Y')}\n\n"
    for competitor, summary in findings.items():
        report += f"## {competitor}\n{summary}\n\n"

    # Save and deliver
    client.tools.file_storage.save(
        name=f"competitive-intel-{datetime.now().strftime('%Y-%m-%d')}.md",
        content=report
    )
    client.tools.email.send(
        to=REPORT_RECIPIENT,
        subject=f"Competitive Intel — {datetime.now().strftime('%b %d')}",
        body=report
    )

competitive_intel_report()
```
Schedule this weekly. Your strategy team gets a structured competitive overview every Monday. No manual research. No subscriptions to competitive intelligence SaaS tools.
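If the script is already triggered daily by cron or a platform scheduler, a weekday guard is enough to get the Monday cadence. A minimal sketch (the `should_run_today` helper is illustrative, not an ATXP feature):

```python
from datetime import date

def should_run_today(today: date, run_weekday: int = 0) -> bool:
    """Return True only on the chosen weekday (0 = Monday)."""
    return today.weekday() == run_weekday

# In the daily entry point:
# if should_run_today(date.today()):
#     competitive_intel_report()
```

The guard is cheap to call every day, and keeping the schedule logic out of `competitive_intel_report()` means you can still run the report manually whenever you want one off-cycle.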
## What It Costs
| Scenario | Tool Calls | Estimated Cost |
|---|---|---|
| Quick lookup (1 topic, 3 pages) | 3 searches + 3 reads + 1 LLM | ~$0.04–$0.08 |
| Thorough brief (1 topic, 8 pages) | 8 searches + 8 reads + 2 LLM | ~$0.12–$0.22 |
| Competitive sweep (3 competitors) | 12 searches + 9 reads + 3 LLM | ~$0.18–$0.30 |
| Daily news digest (10 topics) | 30 searches + 15 reads + 10 LLM | ~$0.30–$0.55 |
Running a daily competitive digest costs less than a dollar per day. The comparable SaaS tools — Crayon, Klue, Similarweb — run $500–$2,000/month for team plans. You’re not getting everything those tools offer, but for basic monitoring and summarization, the cost difference is one to two orders of magnitude.
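That ratio is easy to check from the ranges above (daily digest at $0.30–$0.55/day over a 30-day month, against the quoted SaaS plans):

```python
# Monthly cost of the daily digest vs. SaaS plans, using the ranges above.
digest_low, digest_high = 0.30 * 30, 0.55 * 30   # roughly $9 to $16.50 / month
saas_low, saas_high = 500, 2000                  # $500 to $2,000 / month

ratio_low = saas_low / digest_high    # most conservative comparison
ratio_high = saas_high / digest_low   # most generous comparison
print(f"{ratio_low:.0f}x to {ratio_high:.0f}x cheaper")
```

Even the most conservative pairing is roughly 30x cheaper; the generous one approaches 220x.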
That’s the research agent case. Do the math for your own use case, and run `npx atxp` to get started.
See also: What is an AI agent? · How to build a content creation agent · Virtual cards vs IOU tokens