Convenience wrapper that creates a runner and a module, then executes an RLM in a single call. Equivalent to:

runner <- r_code_runner(timeout = .timeout)
mod <- rlm_module(signature, runner = runner, ...)
run(mod, ..., .llm = .llm)

For repeated use or optimization, prefer creating a module with
rlm_module() and calling run() separately.
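A minimal sketch of that reuse pattern, assuming the rlm_module()/run() API described on this page (`doc` stands in for an existing character string):

```r
# Build the runner and module once, then reuse them across calls --
# this avoids re-creating the module for every question.
runner <- r_code_runner(timeout = 30)
mod <- rlm_module("document, question -> answer", runner = runner)

ans1 <- run(mod, document = doc, question = "Summarise the introduction.",
            .llm = ellmer::chat_openai())
ans2 <- run(mod, document = doc, question = "List the key terms.",
            .llm = ellmer::chat_openai())
```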
Usage
rlm(
signature,
...,
.llm = NULL,
.timeout = 30,
.max_iterations = 20L,
.max_llm_calls = 50L,
.sub_lm = NULL,
.tools = list(),
.verbose = FALSE
)

Arguments

- signature
  A Signature object or string notation defining inputs/outputs (e.g., "question -> answer").
- ...
  Named arguments matching the signature's inputs. These are passed to run().
- .llm
  An ellmer Chat object. If NULL, uses the default Chat from get_default_chat().
- .timeout
  Numeric. Maximum execution time in seconds per code evaluation. Default 30.
- .max_iterations
  Integer. Maximum REPL iterations before fallback. Default 20.
- .max_llm_calls
  Integer. Maximum recursive LLM calls allowed. Default 50.
- .sub_lm
  Optional ellmer Chat for recursive llm_query() calls. NULL disables recursive queries.
- .tools
  Named list of user-defined R functions available in the REPL.
- .verbose
  Logical. Print execution progress. Default FALSE.
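The .tools argument can be sketched as follows, assuming the rlm() interface above; `word_count` and `my_text` are hypothetical user-defined objects, not part of the package:

```r
# Sketch: expose a custom helper to the code-running REPL via .tools.
# word_count is a hypothetical user-defined function.
word_count <- function(x) lengths(strsplit(x, "\\s+"))

result <- rlm(
  "document -> summary",
  document = my_text,
  .tools = list(word_count = word_count),  # available inside the REPL by name
  .verbose = TRUE
)
```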
See also

- rlm_module() for creating reusable RLM modules
- r_code_runner() for configuring the code execution backend
- run() for executing modules
- dsp() for simple one-shot LLM calls (no code execution)
Examples
if (FALSE) { # \dontrun{
# One-liner RLM call
result <- rlm(
"document, question -> answer",
document = readLines("big_file.txt") |> paste(collapse = "\n"),
question = "What are the main themes?",
.llm = ellmer::chat_openai()
)
# With recursive sub-queries
result <- rlm(
"codebase, question -> answer",
codebase = source_code,
question = "How does auth work?",
.llm = ellmer::chat_openai(),
.sub_lm = ellmer::chat_openai(model = "gpt-4o-mini")
)
} # }