What is dsprrr?

dsprrr treats LLM prompts as programs that can be optimized, not strings to be tweaked by hand. You declare what you want with signatures, wrap them in modules for reuse, and let teleprompters find the best prompts automatically.

library(dsprrr)
library(ellmer)

# Declare what you want
chat_openai() |> dsp("question -> answer", question = "What is 2+2?")
#> "4"

That one-liner handles prompt construction, structured output parsing, and type validation. No prompt engineering required.
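
For example, a signature can declare an output type, and the result is validated against it. Here is a quick preview using the same enum() syntax shown in the 5-Minute Taste below:

library(dsprrr)
library(ellmer)

# The annotation after the colon constrains the output to one of three labels
chat_openai() |> dsp("text -> sentiment: enum('positive', 'negative', 'neutral')",
                     text = "The battery died after a week.")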

Choose Your Path

New to dsprrr?

Start with the tutorial sequence—hands-on lessons that build on each other:

  1. Your First LLM Call — Make structured calls with dsp() (10 min)
  2. Building a Classifier — Create reusable modules (20 min)
  3. Extracting Structured Data — Multi-field outputs (25 min)
  4. Improving with Examples — Few-shot learning (25 min)
  5. Finding Best Configuration — Grid search (30 min)
  6. Taking to Production — Save and deploy (30 min)

Already know the basics?

Jump to what you need:

Want to understand the “why”?

Read the conceptual guides:

Building something specific?

Check the how-to guides:

5-Minute Taste

Here’s dsprrr in action, from a simple call to an optimized module:

library(dsprrr)
library(ellmer)

# 1. Quick call
chat <- chat_openai()
chat |> dsp("text -> sentiment: enum('positive', 'negative', 'neutral')",
            text = "Love this product!")
#> "positive"

# 2. Reusable module
classifier <- chat |> as_module("text -> sentiment: enum('positive', 'negative', 'neutral')")
classifier$predict(text = c("Great!", "Awful", "Meh"))
#> [1] "positive" "negative" "neutral"

# 3. Optimized module
trainset <- dsp_trainset(
  text = c("Amazing!", "Terrible!", "It's okay"),
  sentiment = c("positive", "negative", "neutral")
)

optimized <- compile_module(
  program = classifier,
  teleprompter = LabeledFewShot(k = 2),
  trainset = trainset
)
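
The compiled module is used just like the original. A minimal sketch, assuming the object returned by compile_module() exposes the same $predict() method as classifier:

# 4. Use the optimized module the same way as the original
optimized$predict(text = "Best purchase I've made all year")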

Prerequisites

Before you begin:

  1. Install R (4.1 or later)

  2. Install the packages:

install.packages("pak")
pak::pak("JamesHWade/dsprrr")
pak::pak("tidyverse/ellmer")

  3. Set your API key:

# In your .Renviron file
OPENAI_API_KEY=sk-your-key-here
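
To confirm the key is visible to R (restart your R session after editing .Renviron), check it from the console; this sketch uses only base R:

# Should return TRUE once the key is set and the session restarted
nzchar(Sys.getenv("OPENAI_API_KEY"))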

Learning Path

┌─────────────────────────────────────────────────────────────────────┐
│                         TUTORIALS                                    │
│  ┌──────────┐   ┌──────────┐   ┌──────────┐   ┌──────────┐          │
│  │ 1. Hello │ → │ 2. Build │ → │ 3. Struct│ → │ 4. Demos │          │
│  │   World  │   │Classifier│   │ Outputs  │   │          │          │
│  └──────────┘   └──────────┘   └──────────┘   └──────────┘          │
│                                                      ↓               │
│                                ┌──────────┐   ┌──────────┐          │
│                                │ 5.Optim- │ → │ 6. Prod- │          │
│                                │   ize    │   │  uction  │          │
│                                └──────────┘   └──────────┘          │
│                                                      ↓               │
│                         ADVANCED TUTORIALS                           │
│                    ┌────────────────┐   ┌────────────────┐          │
│                    │ Text Adventure │   │  llms.txt Gen  │          │
│                    └────────────────┘   └────────────────┘          │
└─────────────────────────────────────────────────────────────────────┘
                                   ↓
                 ┌─────────────────┴─────────────────┐
                 ↓                                   ↓
        ┌────────────────┐                 ┌────────────────┐
        │   HOW-TO       │                 │   CONCEPTS     │
        │   GUIDES       │                 │                │
        └────────────────┘                 └────────────────┘
                 ↓                                   ↓
        ┌────────────────┐                 ┌────────────────┐
        │   REFERENCE    │ ←───────────────│   (cheatsheet) │
        └────────────────┘                 └────────────────┘

Start at the top and work your way down. Each tutorial builds on the previous one.

What’s Next?

Ready to begin? Start with Tutorial 1: Your First LLM Call.