# How to Build a Content Creation Agent With ATXP
A content creation agent that actually works does more than run a prompt. It researches before it writes. It stores drafts. It delivers output to the right place without requiring a human to babysit each run. Building that end-to-end pipeline is the part most tutorials skip.
This is the full build.
## What a Content Creation Agent Actually Does
A production content agent isn’t just a “write me a blog post” prompt. It’s a multi-step pipeline:
- Research — pull current information on the topic (web search + page reads)
- Outline — structure the piece based on research findings
- Draft — write the full content using an LLM
- Store — save the draft somewhere accessible
- Deliver — send to a human editor or publish endpoint
Each step requires a different tool. The quality of the final output depends on how well those tools are connected.
## What Tools It Needs
| Step | Tool | What It Does |
|---|---|---|
| Research | Web search | Find recent articles, data, competitor content |
| Research | Web browsing | Read full page content from search results |
| Drafting | LLM | Generate structured content from research + prompt |
| Storage | File storage | Persist draft between agent steps |
| Delivery | Email | Send draft to editor or distribution list |
| (Optional) | Image gen | Create featured image or inline visuals |
Without all five layers, your agent either produces shallow content (no research), loses drafts (no storage), or requires manual extraction (no delivery). ATXP provides every one of these under a single account and a single IOU balance.
## Building the Agent With ATXP
Start by provisioning your ATXP account:
```bash
npx atxp
```
This creates your account, provisions your @atxp.email address, and gives you credentials for all tools. No separate accounts for search, browsing, or email.
Install the ATXP SDK:
```bash
pip install atxp
```
Here’s the core agent loop:
```python
import atxp

client = atxp.Client()  # uses credentials from npx atxp setup

def run_content_agent(topic: str, recipient_email: str):
    # Step 1: Research
    search_results = client.tools.web_search(
        query=f"{topic} latest insights 2026",
        num_results=8
    )

    # Step 2: Read top pages
    pages = []
    for result in search_results[:3]:
        page = client.tools.browse(url=result["url"])
        pages.append(page["content"])

    # Step 3: Draft
    research_context = "\n\n".join(pages)
    draft = client.tools.llm(
        prompt=f"""
        You are a content writer. Based on the research below, write a
        1,000-word blog post about: {topic}

        Research:
        {research_context}

        Format: H2 headings, clear sections, no fluff.
        """,
        model="claude-3-5-sonnet"
    )

    # Step 4: Store
    file_id = client.tools.file_storage.save(
        name=f"draft-{topic.replace(' ', '-')}.md",
        content=draft["text"]
    )

    # Step 5: Deliver
    client.tools.email.send(
        to=recipient_email,
        subject=f"Draft ready: {topic}",
        body=f"Your draft is ready.\n\n{draft['text']}\n\nFile ID: {file_id}"
    )

    return {"draft": draft["text"], "file_id": file_id}
```
All tool calls above bill to your IOU balance; no separate API keys are required.
## Example: A Blog Research and Draft Agent
Here’s a more complete example that generates weekly content for a SaaS blog:
```python
import atxp
from datetime import datetime

client = atxp.Client()

TOPICS = [
    "AI agent infrastructure trends",
    "pay-per-use API pricing models",
    "multi-agent system design patterns",
]

def weekly_content_pipeline():
    drafts = []
    for topic in TOPICS:
        print(f"Researching: {topic}")

        # Research phase
        searches = client.tools.web_search(
            query=topic,
            num_results=10
        )
        pages = []
        for r in searches[:4]:
            try:
                p = client.tools.browse(url=r["url"])
                pages.append(p["content"][:2000])  # cap per page
            except Exception:
                continue

        # Write phase
        draft = client.tools.llm(
            prompt=f"Write a 900-word blog post on: {topic}\n\nContext:\n" + "\n".join(pages),
            model="gpt-4o"
        )

        # Store
        file_id = client.tools.file_storage.save(
            name=f"{datetime.now().strftime('%Y-%m-%d')}-{topic[:30]}.md",
            content=draft["text"]
        )
        drafts.append({"topic": topic, "file_id": file_id})

    # Send summary email
    summary = "\n".join(f"- {d['topic']} (file: {d['file_id']})" for d in drafts)
    client.tools.email.send(
        to="editor@yourcompany.com",
        subject=f"Weekly drafts ready — {datetime.now().strftime('%b %d')}",
        body=f"This week's drafts:\n\n{summary}"
    )

weekly_content_pipeline()
```
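One rough edge in the filename logic above: `topic.replace(' ', '-')` and `topic[:30]` leave punctuation and mixed case in filenames. A small slug helper tightens that up; this is a hypothetical addition, not part of the ATXP SDK:

```python
import re

def slugify(text: str, max_len: int = 40) -> str:
    """Lowercase, collapse non-alphanumeric runs to single hyphens, trim."""
    slug = re.sub(r"[^a-z0-9]+", "-", text.lower()).strip("-")
    return slug[:max_len].rstrip("-")

# Drop-in for the name= argument, e.g.:
#   name=f"{datetime.now().strftime('%Y-%m-%d')}-{slugify(topic)}.md"
```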
This pipeline runs unattended. It researches, writes, stores, and notifies. Your editor gets an email. You get your Monday morning back.
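To actually run unattended, something has to trigger the pipeline on a schedule. A cron entry is the simplest option; if you prefer a pure-Python runner, the scheduling math is the only fiddly part. A sketch (Monday 08:00 is an assumed schedule, and `weekly_content_pipeline` is the function defined above):

```python
from datetime import datetime, timedelta

def next_monday_8am(now: datetime) -> datetime:
    """Return the next Monday 08:00 strictly after `now` (Monday == weekday 0)."""
    days_ahead = (0 - now.weekday()) % 7
    candidate = (now + timedelta(days=days_ahead)).replace(
        hour=8, minute=0, second=0, microsecond=0
    )
    if candidate <= now:  # it's already Monday past 08:00
        candidate += timedelta(days=7)
    return candidate

# A long-running process would sleep until this time, call
# weekly_content_pipeline(), then repeat. The equivalent cron entry:
#   0 8 * * 1  python /path/to/pipeline.py
```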
## Cost Breakdown
| Action | Tool | Approx Cost |
|---|---|---|
| 10 web searches | Web search | ~$0.03–$0.05 |
| 4 page reads | Web browsing | ~$0.04–$0.08 |
| 1 LLM write (1,000 words) | LLM | ~$0.002–$0.008 |
| 1 file save | File storage | ~$0.001 |
| 1 email sent | Email | ~$0.001 |
| **Per post total** | | ~$0.07–$0.14 |
At 50 posts per month, that’s $3.50–$7.00 in tool costs. No subscriptions. No per-seat pricing. You pay for what runs.
The economics of this model only make sense with per-call billing. Monthly flat-rate AI writing tools charge $50–$200/month regardless of output volume. If you’re running a high-frequency content pipeline, ATXP’s pay-as-you-go model is significantly cheaper at scale.
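The per-post math is easy to sanity-check in code. Summing the low and high bounds from the table:

```python
# Per-post cost bounds from the table above, as (low, high) in dollars
COSTS = {
    "web_search_x10": (0.03, 0.05),
    "page_reads_x4": (0.04, 0.08),
    "llm_write": (0.002, 0.008),
    "file_save": (0.001, 0.001),
    "email_send": (0.001, 0.001),
}

low = sum(lo for lo, _ in COSTS.values())
high = sum(hi for _, hi in COSTS.values())
posts_per_month = 50

print(f"Per post: ${low:.3f}-${high:.3f}")
print(f"Monthly:  ${low * posts_per_month:.2f}-${high * posts_per_month:.2f}")
```

The low bound sums to $0.074 per post, which the table rounds to ~$0.07; the $3.50/month figure comes from that rounded value.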
Build your content agent today. Run npx atxp to get started — your account, email address, and all tools are provisioned in a single command.