This guide walks you through creating a new AgentMark project connected to the cloud platform. By the end, you’ll have a working project with example prompts, cloud sync, and the ability to run prompts directly from the dashboard.

Prerequisites

  • Node.js 18+
  • An LLM provider API key (OpenAI or Anthropic, depending on your adapter choice)
1. Create Your AgentMark App

Run the interactive setup:
npm create agentmark@latest -- --cloud
The CLI will guide you through the following prompts:
| Prompt | Description |
| --- | --- |
| Project folder | Where to create your project (default: my-agentmark-app) |
| Language | TypeScript or Python |
| Adapter | Your preferred AI framework (AI SDK, Claude Agent SDK, Mastra, or Pydantic AI) |
| API key | Your OpenAI or Anthropic API key (can be skipped and added later) |
| Deployment mode | Choose AgentMark Cloud to sync with the platform |
| IDE | Optionally configure MCP servers for your editor |
2. Connect to the Platform

To sync your files with the AgentMark platform:
  1. Commit and push your project to a Git repository
  2. In the AgentMark platform, navigate to your app
  3. Add your LLM provider API key (e.g. OPENAI_API_KEY or ANTHROPIC_API_KEY) in Settings > Environment Variables
  4. Connect your repository
Once connected, the platform automatically syncs your prompt files and deploys your handler code with your configured environment variables. You can edit prompts in the platform's visual editor, run them from the dashboard, and changes deploy automatically on every push. See Deployment for details on the deployment pipeline.
3. Run Your First Prompt

Open a prompt in the dashboard and click Run. The platform executes it on your deployed handler and streams results back in real time.

What’s in Your Project

| File / Directory | Purpose |
| --- | --- |
| agentmark/ | Prompt templates (.prompt.mdx) and test datasets (.jsonl) |
| agentmark.client.ts | Client configuration: models, tools, and loader setup |
| agentmark.json | Project configuration (models, evals, schema) |
| agentmark.types.ts | Auto-generated TypeScript types for your prompts |
| handler.ts | Handler for cloud deployment: executes prompts on the platform |
| dev-entry.ts | Development server entry point (customizable) |
| index.ts | Example application entry point |
| .env | Environment variables (API keys, credentials) |
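For reference, the .env file in your project root typically holds the provider key you entered (or skipped) during setup. A minimal sketch, assuming the OpenAI adapter; the key values are placeholders, and this file should never be committed:

```shell
# .env (sketch; replace placeholders with your real keys, keep out of Git)
OPENAI_API_KEY=sk-your-key-here
# Or, if you chose an Anthropic-based adapter:
# ANTHROPIC_API_KEY=sk-ant-your-key-here
```

These are the same variable names you add under Settings > Environment Variables on the platform, so local runs and cloud deployments stay consistent.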

Available Scripts

| Script | Command | Description |
| --- | --- | --- |
| dev | npm run agentmark dev | Start the local development server with dashboard |
| prompt | npm run agentmark prompt <file> | Run a single prompt with test props |
| experiment | npm run agentmark experiment <file> | Run a prompt against its test dataset |
| build | npm run agentmark build | Compile prompts for standalone use |
| demo | npm run demo | Run the example application (requires a prior build) |
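Putting the scripts together, a typical local loop might look like the following sketch. The prompt file path is hypothetical; substitute one of the .prompt.mdx files from your own agentmark/ directory:

```shell
# Start the local development server with the dashboard
npm run agentmark dev

# In another terminal: run a single prompt with its test props
# (agentmark/example.prompt.mdx is a placeholder path)
npm run agentmark prompt agentmark/example.prompt.mdx

# Run the same prompt against its .jsonl test dataset
npm run agentmark experiment agentmark/example.prompt.mdx

# Compile prompts, then run the example application
npm run agentmark build && npm run demo
```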

IDE Integration

If you selected an IDE during setup, your project includes MCP server configuration that gives your AI assistant access to AgentMark documentation and trace debugging. Supported editors: Claude Code, Cursor, VS Code, and Zed.

Next Steps

Core Concepts

Understand organizations, apps, and branches

Writing Prompts

Learn how to create and configure prompts

Testing & Evals

Test prompts with datasets and evaluations

Observability

Monitor traces, costs, and performance

Have Questions?

We’re here to help! Choose the best way to reach us: