The Mastra adapter lets you use AgentMark prompts with Mastra's agentic workflow framework.
Installation
```shell
npm install @agentmark-ai/mastra-v0-adapter @mastra/core
```
Setup
Create your AgentMark client with a MastraModelRegistry. Use .registerModels() to map model names to Mastra model configurations:

```typescript
import { createAgentMarkClient, MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";

const modelRegistry = new MastraModelRegistry()
  .registerModels(["claude-3-5-sonnet-20241022"], (name) => ({
    provider: "ANTHROPIC",
    name,
    apiKey: process.env.ANTHROPIC_API_KEY!,
  }))
  .registerModels(["gpt-4o"], (name) => ({
    provider: "OPENAI",
    name,
    apiKey: process.env.OPENAI_API_KEY!,
  }));

export const client = createAgentMarkClient({
  loader: fileLoader, // your prompt loader, created elsewhere
  modelRegistry,
});
```
Running Prompts
AgentMark prompts return Mastra agents via formatAgent(). Call formatMessages() on the agent to get the formatted messages and generation options, then pass both to generate():

```typescript
import { client } from "./agentmark.client";

const prompt = await client.loadTextPrompt("greeting.prompt.mdx");
const agent = await prompt.formatAgent({
  props: { name: "Alice" },
});

const [messages, options] = agent.formatMessages();
const result = await agent.generate(messages, options);

console.log(result.text);
```
Object Generation
For structured output, use object prompts:

```typescript
import { client } from "./agentmark.client";
import { z } from "zod";

const prompt = await client.loadObjectPrompt("extract.prompt.mdx", {
  schema: z.object({
    sentiment: z.enum(["positive", "negative", "neutral"]),
    confidence: z.number(),
  }),
});

const agent = await prompt.formatAgent({
  props: { text: "This product is amazing!" },
});

const [messages, options] = agent.formatMessages();
const result = await agent.generate(messages, options);

console.log(result.object);
// { sentiment: 'positive', confidence: 0.95 }
```
Streaming
Stream responses using agent.stream():

```typescript
const prompt = await client.loadTextPrompt("story.prompt.mdx");
const agent = await prompt.formatAgent({
  props: { topic: "space exploration" },
});

const [messages, options] = agent.formatMessages();
const stream = await agent.stream(messages, options);

for await (const chunk of stream) {
  process.stdout.write(chunk);
}
```
Tools
Configure tools in your client:

```typescript
import { createAgentMarkClient, MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";
import { createTool } from "@mastra/core/tools";
import { z } from "zod";

const weatherTool = createTool({
  id: "weather",
  description: "Get current weather for a location",
  inputSchema: z.object({
    location: z.string(),
  }),
  execute: async ({ context }) => {
    const { location } = context;
    return `The weather in ${location} is sunny and 72°F`;
  },
});

export const client = createAgentMarkClient({
  loader: fileLoader, // your prompt loader, created elsewhere
  modelRegistry,      // see Setup above
  tools: {
    weather: weatherTool,
  },
});
```
Then reference tools in your prompts:

```mdx
---
name: weather
text_config:
  model_name: claude-3-5-sonnet-20241022
  tools:
    - weather
---
<System>You are a helpful weather assistant.</System>
<User>What's the weather in {props.location}?</User>
```
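Running a tool-enabled prompt follows the same pattern as a plain text prompt; the agent invokes the registered tool as needed. A minimal sketch, assuming the prompt above is saved as `weather.prompt.mdx` and takes a `location` prop (both names are illustrative):

```typescript
import { client } from "./agentmark.client";

// Hypothetical file name and props; adjust to match your prompt.
const prompt = await client.loadTextPrompt("weather.prompt.mdx");
const agent = await prompt.formatAgent({
  props: { location: "San Francisco" },
});

const [messages, options] = agent.formatMessages();
const result = await agent.generate(messages, options);

// The model may call the `weather` tool before producing its final answer.
console.log(result.text);
```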
MCP Servers
Configure MCP servers for extended capabilities:

```typescript
import { createAgentMarkClient, MastraModelRegistry } from "@agentmark-ai/mastra-v0-adapter";

export const client = createAgentMarkClient({
  loader: fileLoader,
  modelRegistry,
  mcpServers: {
    filesystem: {
      command: "npx",
      args: ["-y", "@modelcontextprotocol/server-filesystem", "/path/to/files"],
    },
  },
});
```
Limitations
Next Steps
- Prompts: learn about prompt syntax
- Testing: test your prompts with datasets
- Observability: monitor your agents in production
- Other Integrations: explore other AI frameworks