Releases: KennispuntTwente/tidyprompt

tidyprompt 0.4.0

22 Apr 20:10

  • New prompt wrap answer_as_dataframe() for extracting tabular results via
    structured output, with support for row schemas, array-of-row schemas,
    and optional row-count validation.

  • New prompt wrap answer_as_numeric() for extracting numeric responses,
    including optional minimum and maximum value validation.
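A minimal usage sketch of the two new wraps. The `min`/`max` argument names follow the description above, but the prompts, provider, and other details are illustrative, not verified signatures:

```r
library(tidyprompt)

# Hypothetical sketch: extract a small table via structured output
"List the three largest countries by area, with columns 'country' and 'area_km2'" |>
  answer_as_dataframe() |>
  send_prompt(llm_provider_openai())

# Extract a number, with optional bounds validation
"What percentage of the Earth's surface is covered by water?" |>
  answer_as_numeric(min = 0, max = 100) |>
  send_prompt(llm_provider_openai())
```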

  • send_prompt() can now directly accept an 'ellmer' chat object as the
    llm_provider used to evaluate the prompt (an llm_provider_ellmer() is
    built under the hood).
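A minimal sketch of passing an ellmer chat object directly (the model name is illustrative):

```r
library(tidyprompt)
library(ellmer)

# An ellmer chat object can now be passed directly as the llm_provider;
# tidyprompt builds an llm_provider_ellmer() around it under the hood
chat <- ellmer::chat_openai(model = "gpt-4o-mini")

"What is 2 + 2?" |>
  answer_as_numeric() |>
  send_prompt(chat)
```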

  • llm_provider_ellmer() was improved to better synchronize with
    the native 'ellmer' state, for instance for streaming, multimodal/image
    content, and persistent chats, with clearer warnings when settings need to
    be configured on the underlying ellmer chat object.

  • answer_as_json() and answer_using_tools() have broader ellmer
    compatibility, including ellmer::type_from_schema(), ellmer built-in tools,
    and better handling of optional or ignored tool arguments.

  • Chat history handling is more robust for tool and ellmer-native workflows:
    tool rows are supported, non-replayable native rows (tool call and thinking rows)
    are kept for inspection but not re-sent to the LLM provider, and related
    metadata is normalized more reliably.

  • Updated the maintainer's e-mail address in the DESCRIPTION file
    (changed to a personal e-mail address due to leaving the organization).

tidyprompt 0.3.0

30 Nov 18:28

  • llm_provider-class: can now take a stream_callback function, which can be
    used to intercept streamed tokens as they arrive from the LLM provider.
    This may be used to build custom streaming behavior, for instance showing
    a live response in a Shiny app (see the new vignette("streaming_shiny_ipc")
    for an example).
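A hypothetical sketch of a streaming callback. How the callback is attached to the provider (constructor argument versus field on the object) and its exact signature are assumptions here; consult the vignette for the actual interface:

```r
library(tidyprompt)

# Assumed interface: a stream_callback function set on the provider,
# invoked with each token as it arrives from the LLM
provider <- llm_provider_openai()
provider$stream_callback <- function(token) {
  cat(token)  # e.g., push tokens into a Shiny reactive value instead
}

"Tell me a short story" |>
  send_prompt(provider)
```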

  • llm_provider_ellmer(): now supports streaming responses

  • add_image(): new prompt wrap to add an image to a prompt, for use
    with multimodal LLMs
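A sketch of attaching an image. The file path is a placeholder, and the exact argument name of add_image() is not shown in these notes:

```r
library(tidyprompt)

# Hypothetical: add a local image to the prompt for a multimodal model
"What is shown in this picture?" |>
  add_image("path/to/picture.png") |>
  send_prompt(llm_provider_openai())
```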

  • answer_using_r(): fixed error with unsafe conversion of resulting object
    to character

tidyprompt 0.2.0

25 Aug 10:26

  • Add provider-level prompt wraps (provider_prompt_wrap()). These are prompt
    wraps which can be attached to an LLM provider object; they are then
    applied to any prompt sent through that LLM provider, either before or
    after the prompt-specific prompt wraps. This is useful when you want
    certain behavior across many prompts, without having to re-apply the same
    prompt wrap to each one.
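A hypothetical sketch of a provider-level wrap. Whether provider_prompt_wrap() is piped onto the provider like this, and its argument names (e.g., modify_fn), are assumptions based on the description above:

```r
library(tidyprompt)

# Attach a wrap at the provider level, so every prompt sent through
# this provider is modified the same way (assumed interface)
provider <- llm_provider_openai() |>
  provider_prompt_wrap(
    modify_fn = function(prompt_text) {
      paste(prompt_text, "Please reply concisely.")
    }
  )

"Explain what a prompt wrap is" |>
  send_prompt(provider)
```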

  • answer_as_json(): support 'ellmer' definitions of structured output
    (e.g., ellmer::type_object()). answer_as_json() can convert between ellmer
    definitions and the previous R list objects which represent JSON schemas; thus,
    'ellmer' and R list object definitions work with both regular and 'ellmer'
    LLM providers. When using an llm_provider_ellmer(), answer_as_json() will
    ensure the native 'ellmer' functions for obtaining structured output are used
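A sketch combining an ellmer structured-output definition with answer_as_json(); the argument through which the schema is passed is an assumption here:

```r
library(tidyprompt)
library(ellmer)

# An ellmer type definition describing the expected JSON
person_type <- ellmer::type_object(
  name = ellmer::type_string(),
  age = ellmer::type_integer()
)

# With an ellmer provider, the native ellmer structured-output
# machinery is used under the hood
"Describe a fictional person" |>
  answer_as_json(schema = person_type) |>
  send_prompt(llm_provider_ellmer(ellmer::chat_openai()))
```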

  • answer_using_tools(): support 'ellmer' definitions of tools (from
    ellmer::tool()). answer_using_tools() can convert between 'ellmer' tool
    definitions and the previous R function objects with documentation from
    tools_add_docs(); thus, 'ellmer' and tools_add_docs() definitions work
    with both regular and 'ellmer' LLM providers. When using an
    llm_provider_ellmer(), answer_using_tools() will ensure the native 'ellmer'
    functions for registering tools are used.
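A sketch of handing an ellmer tool definition to answer_using_tools(). The ellmer::tool() signature has varied between ellmer versions, so the call below is indicative rather than exact; check the documentation of your installed version:

```r
library(tidyprompt)
library(ellmer)

# Define a tool the ellmer way (signature indicative)
get_time <- ellmer::tool(
  function() format(Sys.time()),
  name = "get_time",
  description = "Returns the current system time"
)

"What time is it right now?" |>
  answer_using_tools(get_time) |>
  send_prompt(llm_provider_ellmer(ellmer::chat_openai()))
```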

  • answer_using_tools(): because of the above, and because package 'mcptools'
    returns 'ellmer' tool definitions from mcptools::mcp_tools(),
    answer_using_tools() can now also be used with tools from Model Context
    Protocol (MCP) servers.

  • send_prompt() can now return an updated 'ellmer' chat object when using an
    llm_provider_ellmer() (containing, for instance, the history of 'ellmer'
    turns and tool calls). Additionally, fixed issues with how turn history is
    handled in 'ellmer' chat objects.

  • send_prompt()'s clean_chat_history argument now defaults to FALSE, as it
    may be confusing for users to see cleaned chat histories without having
    actively requested this. If return_mode = "full", $clean_chat_history is
    also no longer included when clean_chat_history = FALSE.

  • llm_provider_openai() now supports the OpenAI Responses API by default,
    which allows setting parameters like 'reasoning_effort' and 'verbosity'
    (relevant for gpt-5). The OpenAI Chat Completions API is also still
    supported.
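A hypothetical sketch of setting Responses-API parameters. The parameter names come from the note above, but passing them via a 'parameters' list is an assumption:

```r
library(tidyprompt)

# Assumed interface: Responses API parameters passed as provider
# parameters (names taken from the note above)
provider <- llm_provider_openai(
  parameters = list(
    model = "gpt-5",
    reasoning_effort = "low",
    verbosity = "low"
  )
)

"Summarize the plot of Hamlet in one sentence" |>
  send_prompt(provider)
```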

  • llm_provider_google_gemini() has been superseded by
    llm_provider_ellmer(ellmer::chat_google_gemini())

  • Add json_type and tool_type fields to LLM provider objects. When
    automatically determining the route towards structured output (in
    answer_as_json()) and tool use (in answer_using_tools()), these fields can
    override the type decided by the api_type field. For example, a user can
    force the text-based type when using an OpenAI-type LLM provider with a
    model that does not support the typical OpenAI API parameters for
    structured output.
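A hypothetical sketch of overriding the route; both the direct field access and the value "text-based" are assumptions here, not verified against the package:

```r
library(tidyprompt)

# Force text-based JSON extraction on an OpenAI-type provider whose
# model lacks native structured-output support (assumed field/value)
provider <- llm_provider_openai()
provider$json_type <- "text-based"

"Give me a JSON object with fields 'name' and 'age'" |>
  answer_as_json() |>
  send_prompt(provider)
```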

  • Update how responses are streamed (with httr2::req_perform_connection(),
    since httr2::req_perform_stream() is being deprecated)

  • Fix bug where the LLM provider object was not properly passed on to
    modify_fn in prompt_wrap(), which could lead to errors when dynamically
    constructing prompt text based on the LLM provider type

tidyprompt 0.1.0

18 Aug 09:19

  • New prompt wraps answer_as_category() and answer_as_multi_category()

  • New llm_break_soft() interrupts prompt evaluation without error

  • New experimental provider llm_provider_ellmer() for ellmer chat objects

  • Ollama provider gains num_ctx parameter to control context window size

  • set_option() and set_options() are now available for the Ollama provider
    to configure options
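A hypothetical sketch of configuring the Ollama context window. Whether set_option() is called as a method on the provider object, and its exact argument form, are assumptions here:

```r
library(tidyprompt)

# Assumed interface: set an Ollama option such as num_ctx
# to enlarge the model's context window
ollama <- llm_provider_ollama()
ollama$set_option("num_ctx", 8192)

"Summarize this text: ..." |>
  send_prompt(ollama)
```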

  • Error messages are more informative when an LLM provider cannot be reached

  • Google Gemini provider now works without errors in affected cases

  • Chat history handling is safer; rows with NA values no longer cause errors
    in specific cases

  • Final-answer extraction in chain-of-thought prompts is more flexible

  • Printed LLM responses now use message() instead of cat()

  • Moved repository to https://github.com/KennispuntTwente/tidyprompt

tidyprompt 0.0.1

13 Aug 07:29

  • Initial CRAN release