A module that transforms context from "input" to "environment", enabling LLMs to programmatically explore large contexts through a REPL interface rather than embedding them in prompts.
Details
Instead of llm(prompt, context=huge_document), RLM stores context as R
variables that the LLM can peek, slice, search, and recursively query.
The execution flow is:

1. Context is made available as variables in an R execution environment.
2. The LLM generates R code to explore and analyze the context.
3. The code is executed in an isolated subprocess via RCodeRunner.
4. Results are fed back to the LLM for the next iteration.
5. The process continues until SUBMIT() is called or max_iterations is reached.
6. If max_iterations is reached without a SUBMIT() call, fallback extraction is used.
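The loop above can be sketched in base R. Every name below (run_rlm_loop, generate_code, run_code, and the toy stand-ins) is a hypothetical illustration of the control flow, not the package's actual internals:

```r
# Sketch of the iterate-until-SUBMIT loop. generate_code() stands in for the
# LLM proposing R code; run_code() stands in for subprocess execution.
run_rlm_loop <- function(generate_code, run_code, max_iterations = 5) {
  feedback <- NULL
  for (i in seq_len(max_iterations)) {
    code <- generate_code(feedback)     # LLM writes exploration code
    result <- run_code(code)            # code runs in isolation
    if (!is.null(result$submit)) {
      return(result$submit)             # SUBMIT() ends the loop
    }
    feedback <- result$output           # otherwise feed output back in
  }
  "fallback extraction"                 # max_iterations exhausted
}

# Toy stand-ins: the "LLM" peeks once, then submits on the next turn.
gen <- local({
  n <- 0
  function(feedback) {
    n <<- n + 1
    if (n >= 2) "SUBMIT()" else "peek(document, 1, 1000)"
  }
})
exec <- function(code) {
  if (code == "SUBMIT()") list(submit = "final answer")
  else list(output = "first 1000 chars ...")
}

run_rlm_loop(gen, exec)
```

Terminating on SUBMIT() while capping iterations is what lets the fallback branch guarantee an answer even when the model never submits.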
Available REPL tools:

- SUBMIT(...): Terminate and return the final output value
- peek(var, start, end): View a slice of a variable (default: first 1000 chars)
- search(var, pattern): Regex search in a variable
- llm_query(query, context_slice): Recursive LLM call (requires sub_lm)
- llm_query_batched(queries, slices): Batched recursive LLM calls
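To make the peek/search semantics concrete, here is a self-contained sketch with toy stand-in implementations. The real tools are injected by the RLM execution environment; these definitions are illustrative assumptions only:

```r
# Toy stand-ins for two REPL tools so this sketch runs on its own.
# (This `search` deliberately shadows base::search for the illustration.)
peek <- function(var, start = 1, end = 1000) substr(var, start, end)
search <- function(var, pattern) regmatches(var, gregexpr(pattern, var))[[1]]

# A fake large context variable, as the RLM environment would expose it.
document <- paste(
  rep("The theme of memory recurs in chapter after chapter.", 40),
  collapse = " "
)

peek(document, 1, 80)       # view the first 80 characters of the variable
search(document, "theme")   # every regex match for "theme" in the variable
```

Code like this is what the LLM emits each iteration: cheap slicing and searching first, with llm_query() reserved for slices that need semantic interpretation.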
Security: Code execution requires explicit opt-in via a runner parameter. The runner provides subprocess isolation but is NOT a security sandbox. For untrusted inputs, use OS-level sandboxing (containers, AppArmor).
Examples
if (FALSE) { # \dontrun{
# Create a runner (required for code execution)
runner <- r_code_runner(timeout = 30)

# Create an RLM module for exploring large documents
rlm <- rlm_module(
  signature = "document, question -> answer",
  runner = runner
)

# Use it for context exploration
long_doc <- paste(readLines("large_file.txt"), collapse = "\n")
result <- run(rlm, document = long_doc, question = "What are the main themes?", .llm = llm)

# Enable recursive LLM calls for complex reasoning
rlm_recursive <- rlm_module(
  signature = "document -> summary",
  runner = runner,
  sub_lm = ellmer::chat_openai(model = "gpt-4o-mini"),
  max_llm_calls = 10
)
} # }