This guide helps you diagnose and fix common issues when using dsprrr.
## Quick Diagnostic
When something goes wrong, start here:
```r
library(dsprrr)

# 1. Check your configuration
dsprrr_sitrep()

# 2. If a call failed, inspect the last prompt
get_last_prompt()

# 3. View recent prompt history
inspect_history(n = 5)
```

## Setup & Configuration Issues
### “No default Chat available”
Error:

```
Error: No default Chat available
ℹ Set an API key environment variable, or call `dsp_configure()`
ℹ Supported: OPENAI_API_KEY, ANTHROPIC_API_KEY, GOOGLE_API_KEY
```
Cause: dsprrr couldn’t find an LLM provider to use.
Solutions:
- Set an API key (recommended):

  ```r
  # In your .Renviron file (run usethis::edit_r_environ())
  OPENAI_API_KEY=sk-your-key-here

  # Or set in R session
  Sys.setenv(OPENAI_API_KEY = "sk-your-key-here")
  ```

- Configure explicitly:

  ```r
  dsp_configure(provider = "openai", model = "gpt-4o-mini")
  ```

- Pass a Chat object directly:

  ```r
  chat <- ellmer::chat_openai()
  chat |> dsp("question -> answer", question = "What is 2+2?")
  ```

### “Invalid provider”
Error:

```
Error: Invalid provider
✖ Provider must be one of: openai, anthropic, google
```
Solution: Use a supported provider name:
```r
dsp_configure(provider = "openai")    # OpenAI
dsp_configure(provider = "anthropic") # Anthropic Claude
dsp_configure(provider = "google")    # Google Gemini
```

### “chat must be an ellmer Chat object”
Error:

```
Error: `chat` must be an ellmer Chat object
```
Cause: You passed something other than an ellmer Chat to a function expecting one.
Solution:
```r
# Wrong - passing a string
mod <- module(sig, type = "predict", chat = "openai")

# Correct - pass an actual Chat object
chat <- ellmer::chat_openai()
mod <- module(sig, type = "predict", chat = chat)
```

## Signature Parsing Errors
### “Signature string cannot be empty”
Error:

```
Error: Signature string cannot be empty
✖ Provide a signature like "question -> answer"
```
Solution: Provide a valid signature string:
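```r
# Wrong
signature("")

# Correct
sig <- signature("question -> answer")
```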
### “Missing ‘->’ separator”
Error:

```
Error: Missing '->' separator in signature
ℹ Did you mean: 'question -> answer'?
```
Cause: Your signature is missing the arrow that separates inputs from outputs.
Solution: Add '->' between your inputs and outputs:
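```r
# Wrong - no '->' separator
signature("question answer")

# Correct
signature("question -> answer")
```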
### “Use ‘->’ not ‘=>’” (or ‘-->’, ‘<-’)
Error:

```
Error: Invalid signature format
✖ Use '->' not '=>'
ℹ Corrected: 'question -> answer'
```
Cause: You used the wrong arrow type.
Solution: Replace the arrow with '->':
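```r
# Wrong - '=>' is not recognized
signature("question => answer")

# Correct
signature("question -> answer")
```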
### “Multiple ‘->’ separators found”
Error:

```
Error: Multiple '->' separators found in signature
✖ Only one '->' is allowed
```
Solution: Use commas to separate multiple outputs, not additional arrows:
```r
# Wrong
signature("question -> reasoning -> answer")

# Correct - multiple outputs use commas
signature("question -> reasoning, answer")
```

## Input Validation Errors
### “Missing required inputs”
Error:

```
Error: Missing required inputs
✖ Missing: question
ℹ Signature requires: context, question
```
Cause: You didn’t provide all the inputs the signature requires.
Solution: Supply every input the signature names:
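```r
# The signature requires both context and question
dsp(
  "context, question -> answer",
  context = "Paris is the capital of France.",
  question = "What is the capital of France?"
)
```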
### “Did you mean: …?” (typo in input name)
Error:

```
Error: Missing required inputs
✖ Missing: question
Did you mean: qeustion?
```
Cause: You likely misspelled an input name.
Solution: Fix the typo:
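```r
# Wrong - input name is misspelled
dsp("question -> answer", qeustion = "What is 2+2?")

# Correct
dsp("question -> answer", question = "What is 2+2?")
```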
### “Ignoring unknown input”
Warning:

```
Warning: Ignoring unknown input: `extra_field`
ℹ Available fields: question
```
Cause: You passed an input that’s not in the signature.
Solution: Either remove the extra input or add it to your signature:
```r
# This warns because 'context' isn't in the signature
dsp("question -> answer", question = "Hi", context = "extra")

# Option 1: Remove extra input
dsp("question -> answer", question = "Hi")

# Option 2: Add to signature
dsp("context, question -> answer", context = "extra", question = "Hi")
```

## LLM & API Errors
### “Rate limit exceeded”
Error:

```
Error: LLM call failed
✖ Error code: 429 - Too many requests
ℹ Model: gpt-4o-mini via OpenAI
! Rate limit exceeded
ℹ Suggestion: Wait a few seconds and try again, or use a different model
```
Solutions:
- Wait and retry:
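  A minimal sketch with base R, assuming a fixed three-attempt, five-second backoff:

  ```r
  result <- NULL
  for (attempt in 1:3) {
    result <- tryCatch(
      dsp("question -> answer", question = "What is 2+2?"),
      error = function(e) NULL
    )
    if (!is.null(result)) break
    Sys.sleep(5) # Wait before retrying
  }
  ```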
- Use a different model:

  ```r
  dsp_configure(provider = "openai", model = "gpt-3.5-turbo")
  ```

- For batch processing, add delays:
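  A sketch with a plain loop (`questions` here is a hypothetical character vector; tune the pause to your rate limit):

  ```r
  results <- vector("list", length(questions))
  for (i in seq_along(questions)) {
    results[[i]] <- dsp("question -> answer", question = questions[i])
    Sys.sleep(1) # Small pause between calls
  }
  ```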
### “Authentication failed”
Error:

```
Error: LLM call failed
✖ Invalid API key
! Authentication failed
ℹ Check that your API key is set correctly
ℹ Run `dsprrr_sitrep()` to check configuration
```
Solutions:
- Verify your API key is set:

  ```r
  dsprrr_sitrep()

  # Check the raw value (careful - don't share this!)
  Sys.getenv("OPENAI_API_KEY")
  ```

- Re-set your API key:

  ```r
  Sys.setenv(OPENAI_API_KEY = "sk-your-actual-key")
  dsp_configure() # Re-initialize
  ```

- Check for invisible characters (copy-paste issues):
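  A quick base R check; trailing whitespace or a newline often sneaks in when pasting:

  ```r
  key <- Sys.getenv("OPENAI_API_KEY")
  nchar(key)                # Compare against your key's expected length
  grepl("[[:space:]]", key) # TRUE means whitespace snuck in
  Sys.setenv(OPENAI_API_KEY = trimws(key))
  ```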
### “Request timed out”
Error:

```
Error: LLM call failed
✖ Connection timed out
! Request timed out
ℹ Try reducing prompt length or using a faster model
```
Solutions:
- Simplify your prompt:

  ```r
  # Instead of a huge context, summarize first
  summary <- dsp("text -> brief_summary: string[100]", text = long_text)
  answer <- dsp("context, question -> answer", context = summary, question = q)
  ```

- Use a faster model:

  ```r
  dsp_configure(provider = "openai", model = "gpt-4o-mini") # Faster than gpt-4o
  ```

### “Prompt too long” / “Context length exceeded”
Error:

```
Error: LLM call failed
✖ This model's maximum context length is 8192 tokens
! Prompt too long (15000 characters)
ℹ Reduce input size or use a model with larger context window
```
Solutions:
- Truncate your input:

  ```r
  max_chars <- 10000
  truncated_context <- substr(long_context, 1, max_chars)
  ```

- Use a model with larger context:

  ```r
  # GPT-4o supports 128k tokens
  dsp_configure(provider = "openai", model = "gpt-4o")

  # Claude supports 200k tokens
  dsp_configure(provider = "anthropic", model = "claude-3-5-sonnet-latest")
  ```

- Chunk and summarize:
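  A sketch using the summarize-first pattern from above (`long_text` and `q` are placeholders; the 8000-character chunk size is arbitrary):

  ```r
  # Split the text into fixed-size chunks
  starts <- seq(1, nchar(long_text), by = 8000)
  chunks <- substring(long_text, starts, starts + 7999)

  # Summarize each chunk, then answer over the combined summaries
  summaries <- lapply(chunks, function(ch) {
    dsp("text -> brief_summary: string[200]", text = ch)
  })
  answer <- dsp(
    "context, question -> answer",
    context = paste(unlist(summaries), collapse = "\n"),
    question = q
  )
  ```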
### “Response parsing failed”
Error:

```
Error: LLM call failed
✖ Failed to parse JSON response
! Response parsing failed
ℹ The LLM returned invalid JSON. Try simplifying the output type
ℹ Check `get_last_prompt()` to see the raw response
```
Cause: The LLM didn’t return valid structured output.
Solutions:
- Inspect what happened:

  ```r
  get_last_prompt() # See the prompt and response
  ```

- Simplify your output type:

  ```r
  # Complex nested types can confuse some models
  # Instead of:
  sig <- signature("text -> data: dict[string, list[dict[string, int]]]")

  # Try simpler:
  sig <- signature("text -> items: list[string]")
  ```

- Use a more capable model:

  ```r
  # GPT-4o is better at structured output than GPT-3.5
  dsp_configure(provider = "openai", model = "gpt-4o")
  ```

- Add explicit instructions:

  ```r
  sig <- signature(
    "text -> result",
    instructions = "Return only valid JSON. No explanations or markdown."
  )
  ```

### “Content was blocked by safety filters”
Error:

```
Error: LLM call failed
✖ Content blocked by safety system
! Content was blocked by safety filters
ℹ Rephrase your input to avoid triggering content filters
```
Solution: Rephrase your input to be less likely to trigger safety filters. Consider the context and framing of your request.
## Module & Optimization Errors
### “First argument must be a Signature object”
Error:

```
Error: First argument must be a Signature object
```
Solution: Build the Signature with `signature()` before creating the module:
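```r
# Wrong - passing the raw string
mod <- module("question -> answer", type = "predict")

# Correct - parse the signature first
sig <- signature("question -> answer")
mod <- module(sig, type = "predict")
```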
### “Unknown module type”
Error:

```
Error: Unknown module type: 'chain'
ℹ Available types: predict, react
```
Solution: Use a supported module type:
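```r
mod <- module(sig, type = "predict") # Supported
mod <- module(sig, type = "react")   # Supported
```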
### “devset must contain at least one row”
Error:

```
Error: devset must contain at least one row
```
Solution: Ensure your development set has data:
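An upstream filter that removed every row is a common cause. The column names below are illustrative and should match your signature's fields:

```r
nrow(devset) # Must be at least 1

devset <- data.frame(
  question = c("What is 2+2?", "What is the capital of France?"),
  answer   = c("4", "Paris")
)
```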
### “No Chat provided and no stored Chat”
Error:

```
Error: No Chat provided and no stored Chat in module
ℹ Either provide `.llm` or configure default via `dsp_configure()`
```
Solution:
```r
# Option 1: Pass .llm explicitly
result <- run(mod, question = "Hi", .llm = chat_openai())

# Option 2: Store Chat in module
mod <- module(sig, type = "predict", chat = chat_openai())

# Option 3: Configure default
dsp_configure(provider = "openai")
result <- run(mod, question = "Hi")
```

## Debugging Workflow
When you encounter an issue, follow this systematic approach:
### Step 1: Check Configuration
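Run the situation report:

```r
dsprrr_sitrep()
```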
This shows:

- Whether a default Chat is configured
- Which provider and model are active
- API key status
### Step 2: Inspect the Last Call
```r
# See the full prompt that was sent
get_last_prompt()

# Get the trace with all details
trace <- get_last_trace()
trace$prompt # The prompt
trace$output # The raw output
trace$model  # Which model was used
```

### Step 3: View History
```r
# See recent prompts and responses
inspect_history(n = 5)

# Include full prompts
inspect_history(n = 3, include_prompts = TRUE)
```

### Step 4: Test Incrementally
```r
# 1. Test the signature parses correctly
sig <- signature("question -> answer")
sig # Should print without error

# 2. Test the module creates correctly
mod <- module(sig, type = "predict")
mod # Should print module info

# 3. Test a simple call
result <- run(mod, question = "What is 1+1?", .llm = chat_openai())

# 4. If that works, test your actual use case
result <- run(mod, question = your_complex_question, .llm = chat_openai())
```

### Step 5: Check for Known Issues
```r
# Ensure ellmer is up to date
packageVersion("ellmer")

# Ensure dsprrr is up to date
packageVersion("dsprrr")

# Check for any warnings during package load
library(dsprrr)
```

## Getting Help
If you’re still stuck:
- Check the documentation:

  ```r
  ?dsp
  ?signature
  vignette("getting-started", package = "dsprrr")
  vignette("cheatsheet", package = "dsprrr")
  ```

- Gather diagnostic info:
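  Useful things to include (`sessionInfo()` is base R; the rest appear earlier in this guide):

  ```r
  dsprrr_sitrep()          # Configuration and API key status
  packageVersion("dsprrr")
  packageVersion("ellmer")
  sessionInfo()            # R version and attached packages
  ```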
- Create a minimal reproducible example that shows the issue
- Report issues at: https://github.com/JamesHWade/dsprrr/issues
