This vignette covers the many ways to configure an agent: system prompts, settings, skills, sessions, and result inspection.
## System Prompts
Every agent has a system prompt that shapes its behaviour. Pass it directly at construction:
```r
library(deputy)

agent <- Agent$new(
  chat = ellmer::chat_anthropic(),
  system_prompt = "You are a helpful data analyst. Always show your
    reasoning step by step. Use R code when calculations are needed."
)
```

The system prompt is prepended to the conversation. If you also load skills (see below), their prompts are appended after the system prompt.
## Claude-Style Settings
deputy can load settings from `.claude/` directories, following the same conventions as Claude Code:
```r
# Load from project and user directories
settings <- claude_settings_load(
  setting_sources = c("project", "user"),
  working_dir = getwd()
)

# Apply to an agent
claude_settings_apply(agent, settings)
```

Settings sources:
- `"project"` – Reads from `.claude/` in the working directory
- `"user"` – Reads from `~/.claude/`
- A file path – Reads a specific `.json` settings file
This is useful when you want an agent that mirrors your Claude Code setup.
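The third source type above takes a path to a single settings file. A minimal sketch, assuming `setting_sources` accepts a bare file path as described in the list (the path itself is illustrative):

```r
# Load one specific .json settings file instead of a directory
settings <- claude_settings_load(
  setting_sources = "path/to/settings.json"
)
claude_settings_apply(agent, settings)
```

This is handy for keeping a project-independent settings file under version control.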
## Skills
Skills are modular extensions that add tools and system prompt segments to an agent. They live in directories with a `SKILL.yaml` (or `SKILL.md`) file.
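For orientation, such a file might look like the following. This is a hypothetical sketch: the field names are guessed from the arguments of `skill_create()` below and may not match deputy's actual schema:

```yaml
# Hypothetical SKILL.yaml -- field names mirror skill_create()'s
# arguments; check deputy's documentation for the real schema
name: data_analysis
description: Helps with data analysis tasks
prompt: |
  When analysing data, always check for missing values first.
version: 1.0.0
```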
### Loading a Skill
```r
skill <- skill_load("path/to/skill/directory")
agent$load_skill(skill)
```

### Creating a Skill Programmatically
```r
skill <- skill_create(
  name = "data_analysis",
  description = "Helps with data analysis tasks",
  prompt = "When analysing data, always check for missing values first.",
  tools = tools_data(),
  version = "1.0.0"
)

agent$load_skill(skill)
```

### Listing Available Skills
```r
skills_list("path/to/skills/directory")
```

## Session Management
Agents can save and restore their conversation state:
```r
# Save current state
agent$save_session("my_session.rds")

# Later, restore it
agent2 <- Agent$new(chat = ellmer::chat_anthropic())
agent2$load_session("my_session.rds")

# Continue the conversation
result <- agent2$run_sync("What were we working on?")
```

Sessions persist the conversation turns, so the agent can pick up where it left off.
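Since sessions are ordinary files, base R file helpers apply. A small sketch using `tempfile()` (base R) for a disposable session, with the agent methods used exactly as above:

```r
# Save to a throwaway path, restore into a fresh agent, then clean up
path <- tempfile(fileext = ".rds")
agent$save_session(path)

agent2 <- Agent$new(chat = ellmer::chat_anthropic())
agent2$load_session(path)
unlink(path)
```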
## Working with AgentResult
Every call to `run_sync()` returns an `AgentResult` with rich metadata:
```r
library(deputy)

chat <- ellmer::chat_anthropic(model = "claude-sonnet-4-20250514")
agent <- Agent$new(chat = chat, tools = tools_file())
result <- agent$run_sync("How many files are in the current directory?")
```

### Response Text
```r
cat(result$response)
```

## Provider Support
deputy works with any provider that ellmer supports:
| Provider | Constructor | Notes |
|---|---|---|
| Anthropic | `chat_anthropic()` | Native web tools |
| OpenAI | `chat_openai()` | Structured output |
| Google | `chat_google()` | Native web search |
| Ollama | `chat_ollama()` | Local models |
| Azure OpenAI | `chat_azure_openai()` | Enterprise |
```r
# Any ellmer chat works
Agent$new(chat = ellmer::chat_openai())
Agent$new(chat = ellmer::chat_anthropic())
Agent$new(chat = ellmer::chat_google())
Agent$new(chat = ellmer::chat_ollama(model = "llama3.1"))
```