Save evaluation results from a vitals Task run to a pins board. This enables tracking model performance over time and across experiments.
Arguments
- board
A pins board object
- name
Character name for the pin
- eval_result
Evaluation result from evaluate() or a vitals Task
- module
Optional module that was evaluated (for additional metadata)
- description
Optional description for the pin
- ...
Additional arguments passed to pins::pin_write() (see the sketch after this list)
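Since the dots are forwarded to pins::pin_write(), any of its options can be supplied. A minimal sketch under that assumption; the metadata values shown are hypothetical:

# Forwarded to pins::pin_write(): keep every run as a new version and
# attach custom metadata alongside the evaluation result.
# (The model and dataset names below are placeholders.)
pin_vitals_log(
  board, "sentiment-eval-v1", eval_result,
  versioned = TRUE,
  metadata = list(model = "my-model", dataset = "test-2024")
)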
See also
Other orchestration:
orchestration,
pin_module_config(),
pin_trace(),
restore_module_config(),
use_dsprrr_template(),
validate_workflow()
Examples
if (FALSE) { # \dontrun{
board <- pins::board_folder("pins")
# Evaluate module on test set
eval_result <- evaluate(mod, test_data, metric = exact_match)
# Pin the evaluation results
pin_vitals_log(
  board, "sentiment-eval-v1", eval_result,
  module = mod,
  description = "Test set evaluation"
)
} # }
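Because pins boards can be versioned, earlier runs can be read back for comparison over time. A minimal sketch, assuming a versioned board and that the pin stores the eval_result object as written:

board <- pins::board_folder("pins", versioned = TRUE)

# List all stored versions of the evaluation pin
versions <- pins::pin_versions(board, "sentiment-eval-v1")

# Read the latest result, plus an earlier one for comparison
# (assumes at least two versions have been written)
latest   <- pins::pin_read(board, "sentiment-eval-v1")
previous <- pins::pin_read(board, "sentiment-eval-v1",
                           version = versions$version[[2]])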
