Save evaluation results from a vitals Task run to a pins board. This enables tracking model performance over time and across experiments.

Usage

pin_vitals_log(
  board,
  name,
  eval_result,
  module = NULL,
  description = NULL,
  ...
)

Arguments

board

A pins board object (e.g. from pins::board_folder())

name

Character name for the pin

eval_result

An evaluation result returned by evaluate(), or a vitals Task object

module

Optional module that was evaluated (for additional metadata)

description

Optional description for the pin

...

Additional arguments passed to pins::pin_write()

Value

The pin name (invisibly)

Examples

if (FALSE) { # \dontrun{
board <- pins::board_folder("pins")

# Evaluate module on test set
eval_result <- evaluate(mod, test_data, metric = exact_match)

# Pin the evaluation results
pin_vitals_log(board, "sentiment-eval-v1", eval_result,
               module = mod,
               description = "Test set evaluation")
} # }
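Once results are pinned, they can be read back and compared across runs with the standard pins API. The sketch below assumes a pin named "sentiment-eval-v1" was written earlier (as in the example above) and that the board is versioned; pin_read() and pin_versions() are standard pins functions.

```r
if (FALSE) { # \dontrun{
# A versioned board keeps every write, enabling comparison over time
board <- pins::board_folder("pins", versioned = TRUE)

# Read the most recent pinned evaluation back from the board
eval_result <- pins::pin_read(board, "sentiment-eval-v1")

# List stored versions to track model performance across experiments
pins::pin_versions(board, "sentiment-eval-v1")
} # }
```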