The primary function for creating executable LLM modules. Supports "predict" for standard structured prediction, "react" for ReAct-style tool-using modules, "chain_of_thought" for step-by-step reasoning, "multichain" for multi-chain comparison, "program_of_thought" for code-execution modules, and "codeact" for hybrid agents that combine tool use with code execution.

Usage

module(
  signature,
  type = "predict",
  tools = NULL,
  max_iterations = 10L,
  M = 3L,
  temperature = 0.7,
  runner = NULL,
  max_iters = 3L,
  extract_answer = TRUE,
  template = "",
  demos = list(),
  config = list(),
  chat = NULL,
  ...
)

Arguments

signature

A Signature object defining the module's interface

type

Character string specifying the module type:

  • "predict" (default): Standard prediction module

  • "react": ReAct-style module with tool support

  • "chain_of_thought": Adds step-by-step reasoning to the signature

  • "multichain": MultiChainComparison module for ensemble reasoning

  • "program_of_thought": Code execution module (requires runner)

  • "codeact": Hybrid agent with tools + code execution (requires runner)

tools

Optional list of ellmer ToolDef objects for react and codeact modules. If tools are provided with type = "predict", the module is automatically upgraded to "react".

max_iterations

Maximum number of ReAct iterations (default: 10; used only by react modules)

M

Number of reasoning chains for multichain (default: 3)

temperature

Temperature for multichain diversity (default: 0.7)

runner

An RCodeRunner object used to execute generated code. Required for the code-execution types ("program_of_thought" and "codeact"). Create one with r_code_runner().

max_iters

Maximum code repair iterations for program_of_thought (default: 3)

extract_answer

Logical. For program_of_thought, whether to use the LLM to extract the final answer from the execution result (default: TRUE)

template

Optional glue template for prompt generation

demos

Optional list of demonstration examples

config

Optional configuration list

chat

Optional ellmer Chat object for LLM operations. If provided, the module will use this Chat for all predictions unless overridden with .llm.

...

Additional arguments for future module types

Value

A module object (R6) that can be executed with run()

Examples

# Create a simple prediction module
classifier <- signature("text -> sentiment") |>
  module(type = "predict", template = "Analyze: {text}")

# With demonstrations
qa <- signature("context, question -> answer") |>
  module(
    type = "predict",
    demos = list(
      list(
        inputs = list(context = "...", question = "..."),
        output = "..."
      )
    )
  )

# Create a multichain comparison module
mcc <- signature("question -> answer") |>
  module(type = "multichain", M = 5, temperature = 0.8)
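A chain-of-thought module follows the same pattern; per the type argument, "chain_of_thought" adds a step-by-step reasoning field to the signature:

```r
# Create a chain-of-thought module (reasoning happens before the answer)
cot <- signature("question -> answer") |>
  module(type = "chain_of_thought")
```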

if (FALSE) { # \dontrun{
# Execute the module (requires an llm object)
llm <- ellmer::chat_openai()
result <- classifier |> run(text = "Great package!", .llm = llm)

# Or create module with Chat attached
classifier <- signature("text -> sentiment") |>
  module(type = "predict", chat = ellmer::chat_openai())
result <- classifier |> run(text = "Great package!")  # No .llm needed

# Create a ReAct module with tools
search_tool <- ellmer::tool(
  search_fn,
  description = "Search for information",
  arguments = list(query = ellmer::type_string())
)
agent <- signature("question -> answer") |>
  module(type = "react", tools = list(search_tool), chat = ellmer::chat_openai())
} # }
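The code-execution types need a runner. A minimal sketch, assuming r_code_runner() with its defaults as described under the runner argument (the question text is illustrative):

```r
if (FALSE) { # \dontrun{
# Create a program-of-thought module (requires a runner and an LLM)
pot <- signature("question -> answer") |>
  module(
    type = "program_of_thought",
    runner = r_code_runner(),
    chat = ellmer::chat_openai()
  )
result <- pot |> run(question = "What is the mean of 1:10?")
} # }
```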