AgentMark’s CLI guides you through setting up your AgentMark application. It will:
  • Create the agentmark.json configuration file.
  • Create your AgentMark client.
  • Create your AgentMark Cloud SDK.
  • Create example prompts, datasets, and evals.
  • Set up an example application that you can use to test your platform.
  • Set up the basic API keys for your application.

Initializing AgentMark

You can initialize AgentMark by running the following command:
npx @agentmark/cli@latest init -t cloud

agentmark.json

The agentmark.json file is used to configure your AgentMark Cloud application.

Basic Example

agentmark.json
{
  "agentmarkPath": "/",
  "version": "2.0.0",
  "mdxVersion": "1.0",
  "builtInModels": ["gpt-4"]
}

Configuration Properties

$schema (optional)

Provides a reference to the JSON schema definition for IDE autocompletion and validation.
"$schema": "https://agentmark.dev/schema.json"

agentmarkPath (required)

The base directory where AgentMark will look for your application’s resources, including:
  • Prompt templates
  • Evaluation configurations
  • Datasets
  • Other AgentMark-specific resources
The default is "/", meaning AgentMark looks for the “agentmark” directory in your project root. Change this path if you want to organize resources in a different directory structure (e.g., in a monorepo).
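As a sketch, in a monorepo you might point AgentMark at a package directory instead of the repository root. The path below is hypothetical; assuming the default convention carries over, AgentMark would then look for the “agentmark” directory under packages/my-app:
agentmark.json
{
  "agentmarkPath": "packages/my-app",
  "version": "2.0.0"
}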

version (required)

Specifies the version of the AgentMark configuration. Useful for tracking changes and ensuring compatibility.

mdxVersion (optional)

Specifies the version of the MDX format. Can be either "1.0" or "0.0".

builtInModels (optional)

An array of model names that are supported by AgentMark Cloud. These models are pre-configured and ready to use without additional setup.
"builtInModels": ["gpt-4", "gpt-4o-mini", "claude-3-5-sonnet"]
Add more models using: npx @agentmark/cli@latest pull-models

evals (optional)

An array of evaluation names that correspond to evaluations registered in your EvalRegistry. Listing evaluations here makes them available in the editor for selection when configuring prompts.
"evals": [
  "correctness",
  "hallucination",
  "relevance"
]
These names must match the evaluations you register in your code:
evalRegistry.register("correctness", (params) => {
  return { score: 0.9, label: "correct", reason: "..." };
});
Learn more: Evaluations guide
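To illustrate the pairing, here is a minimal sketch that registers the remaining names from the "evals" list above. Only the register(name, fn) call and the { score, label, reason } result shape are taken from the example; the scores and labels are placeholders you would replace with your own scoring logic:
// Each name listed under "evals" in agentmark.json needs a matching registration.
evalRegistry.register("hallucination", (params) => {
  // Placeholder scoring logic.
  return { score: 1.0, label: "grounded", reason: "..." };
});

evalRegistry.register("relevance", (params) => {
  // Placeholder scoring logic.
  return { score: 0.8, label: "relevant", reason: "..." };
});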

modelSchemas (optional)

Define custom model configurations with settings, pricing, and UI controls. This allows you to add models with custom parameters and cost tracking.
"modelSchemas": {
  "my-custom-model": {
    "label": "My Custom Model",
    "cost": {
      "inputCost": 0.01,
      "outputCost": 0.03,
      "unitScale": 1000000
    },
    "settings": {
      "temperature": {
        "label": "Temperature",
        "order": 1,
        "default": 0.7,
        "minimum": 0,
        "maximum": 2,
        "multipleOf": 0.1,
        "type": "slider"
      }
    }
  }
}
Learn more: Model Schemas guide
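A rough reading of the cost block above, assuming inputCost and outputCost are priced per unitScale tokens (an assumption, not a documented formula): with unitScale set to 1000000, a request with 500,000 input tokens and 200,000 output tokens would be tracked as roughly
input:  500,000 / 1,000,000 × 0.01 = 0.005
output: 200,000 / 1,000,000 × 0.03 = 0.006
total:  0.011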

mcpServers (optional)

Configure Model Context Protocol (MCP) servers for your application. Listing servers here makes them available in the editor for selection when configuring prompts. Supports both URL/SSE and stdio server types.
"mcpServers": {
  "docs": {
    "url": "https://mcp.example.com",
    "headers": {
      "Authorization": "Bearer token"
    }
  },
  "local": {
    "command": "npx",
    "args": ["-y", "@mastra/mcp-docs-server"]
  }
}
The server names ("docs", "local") can then be referenced in your prompts using mcp://server-name/tool-name. Learn more: MCP Integration guide
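For example, if the "docs" and "local" servers above exposed tools named search-docs and list-examples (hypothetical tool names, not defined on this page), you would reference them with URIs like:
mcp://docs/search-docs
mcp://local/list-examples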

Have Questions?

We’re here to help! Choose the best way to reach us: