# Releases: KennispuntTwente/tidyprompt
## tidyprompt 0.4.0

- New prompt wrap `answer_as_dataframe()` for extracting tabular results via structured output, with support for row schemas, array-of-row schemas, and optional row-count validation.
- New prompt wrap `answer_as_numeric()` for extracting numeric responses, including optional minimum and maximum value validation.
- `send_prompt()` can now directly use an 'ellmer' chat object as the `llm_provider` to evaluate the prompt with (an `llm_provider_ellmer()` is built under the hood).
- `llm_provider_ellmer()` was improved to better synchronize with the native 'ellmer' state, for instance for streaming, multimodal/image content, and persistent chats, with clearer warnings when settings need to be configured on the underlying 'ellmer' chat object.
- `answer_as_json()` and `answer_using_tools()` have broader 'ellmer' compatibility, including `ellmer::type_from_schema()`, 'ellmer' built-in tools, and better handling of optional or ignored tool arguments.
- Chat history handling is more robust for tool and 'ellmer'-native workflows: `tool` rows are supported, non-replayable native rows (tool call and thinking rows) are kept for inspection but not re-sent to the LLM provider, and related metadata is normalized more reliably.
- Updated the maintainer's e-mail address in the DESCRIPTION file (changed to a personal e-mail address due to the maintainer leaving the organization).
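The new numeric extraction and the direct use of 'ellmer' chat objects can be combined in the usual tidyprompt pipe style. A minimal sketch (the argument names `min` and `max` for `answer_as_numeric()` are assumptions, not confirmed by these notes; a live OpenAI-compatible backend is required):

```r
library(tidyprompt)

# Sketch: constrain the reply to a number within a range, then evaluate
# the prompt directly against an 'ellmer' chat object (built into an
# llm_provider_ellmer() under the hood, per the notes above)
answer <- "What is the boiling point of water in degrees Celsius?" |>
  answer_as_numeric(min = 0, max = 1000) |>
  send_prompt(ellmer::chat_openai())
```

Because the response is an LLM output, the returned value is not deterministic; the wrap's role is to validate and coerce it to a numeric within the requested bounds.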
## tidyprompt 0.3.0

- `llm_provider` class: can now take a `stream_callback` function, which can be used to intercept streamed tokens as they arrive from the LLM provider. This may be used to build custom streaming behavior, for instance to show a live response in a Shiny app (see the new `vignette("streaming_shiny_ipc")` for an example).
- `llm_provider_ellmer()`: now supports streaming responses.
- `add_image()`: new prompt wrap to add an image to a prompt, for use with multimodal LLMs.
- `answer_using_r()`: fixed an error with unsafe conversion of the resulting object to character.
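One plausible shape for the new `stream_callback` hook is sketched below; the exact way the callback is attached to the provider object is an assumption (consult the `llm_provider` class documentation), but the callback-per-token idea is as described above:

```r
library(tidyprompt)

# Sketch: intercept streamed tokens as they arrive (attachment mechanism
# is an assumption; only the `stream_callback` name comes from the notes)
provider <- llm_provider_ollama()
provider$stream_callback <- function(token) {
  cat(token)  # e.g., forward each token to a live Shiny output instead
}
```

This is the hook the `streaming_shiny_ipc` vignette builds on to show a live response in a Shiny app.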
## tidyprompt 0.2.0

- Added provider-level prompt wraps (`provider_prompt_wrap()`); these are prompt wraps which can be attached to an LLM provider object. They are applied to any prompt which is sent through this LLM provider, either before or after prompt-specific prompt wraps. This is useful when you want to achieve certain behavior for various prompts, without having to re-apply the same prompt wrap to each prompt.
- `answer_as_json()`: supports 'ellmer' definitions of structured output (e.g., `ellmer::type_object()`). `answer_as_json()` can convert between 'ellmer' definitions and the previous R list objects which represent JSON schemas; thus, 'ellmer' and R list object definitions work with both regular and 'ellmer' LLM providers. When using an `llm_provider_ellmer()`, `answer_as_json()` will ensure the native 'ellmer' functions for obtaining structured output are used.
- `answer_using_tools()`: supports 'ellmer' definitions of tools (from `ellmer::tool()`). `answer_using_tools()` can convert between 'ellmer' tool definitions and the previous R function objects with documentation from `tools_add_docs()`; thus, 'ellmer' and `tools_add_docs()` definitions work with both regular and 'ellmer' LLM providers. When using an `llm_provider_ellmer()`, `answer_using_tools()` will ensure the native 'ellmer' functions for registering tools are used.
- `answer_using_tools()`: because of the above, and the fact that package 'mcptools' returns 'ellmer' tool definitions with `mcptools::mcp_tools()`, `answer_using_tools()` can now also be used with tools from Model Context Protocol (MCP) servers.
- `send_prompt()` can now return an updated 'ellmer' chat object when using an `llm_provider_ellmer()` (containing, for instance, the history of 'ellmer' turns and tool calls). Additionally, fixed issues with how turn history is handled in 'ellmer' chat objects.
- `send_prompt()`'s `clean_chat_history` argument now defaults to `FALSE`, as it may be confusing for users to see cleaned chat histories without having actively requested this. If `return_mode = "full"`, `$clean_chat_history` is also no longer included when `clean_chat_history = FALSE`.
- `llm_provider_openai()` now supports (as default) the OpenAI Responses API, which allows setting parameters like 'reasoning_effort' and 'verbosity' (relevant for gpt-5). The OpenAI Chat Completions API is also still supported.
- `llm_provider_google_gemini()` has been superseded by `llm_provider_ellmer(ellmer::chat_google_gemini())`.
- Added `json_type` & `tool_type` fields to LLM provider objects; when automatically determining the route towards structured output (in `answer_as_json()`) and tool use (in `answer_using_tools()`), these fields can override the type decided by the `api_type` field (e.g., a user can force the text-based type, for instance when using an OpenAI-type LLM provider with a model which does not support the typical OpenAI API parameters for structured output).
- Updated how responses are streamed (now with `httr2::req_perform_connection()`, since `httr2::req_perform_stream()` is being deprecated).
- Fixed a bug where the LLM provider object was not properly passed on to `modify_fn` in `prompt_wrap()`, which could lead to errors when dynamically constructing prompt text based on the LLM provider type.
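A provider-level prompt wrap attaches once to the provider and then applies to every prompt sent through it. A hedged sketch of the idea (whether `provider_prompt_wrap()` takes the provider as its first argument, and the `modify_fn` argument name, are assumptions based on the pipe style and `prompt_wrap()` described elsewhere in these notes):

```r
library(tidyprompt)

# Sketch: attach a wrap at the provider level, so every prompt sent
# through this provider gets the same suffix without re-wrapping each one
provider <- llm_provider_openai() |>
  provider_prompt_wrap(
    modify_fn = function(prompt_text) {
      paste(prompt_text, "Reply concisely, in at most two sentences.")
    }
  )

# Any prompt evaluated with this provider now carries the instruction
"Explain what a prompt wrap is" |> send_prompt(provider)
```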
## tidyprompt 0.1.0

- New prompt wraps `answer_as_category()` and `answer_as_multi_category()`.
- New `llm_break_soft()` interrupts prompt evaluation without error.
- New experimental provider `llm_provider_ellmer()` for 'ellmer' chat objects.
- Ollama provider gains a `num_ctx` parameter to control context window size.
- `set_option()` and `set_options()` are now available for the Ollama provider to configure options.
- Error messages are more informative when an LLM provider cannot be reached.
- Google Gemini provider now works without errors in affected cases.
- Chat history handling is safer; rows with `NA` values no longer cause errors in specific cases.
- Final-answer extraction in chain-of-thought prompts is more flexible.
- Printed LLM responses now use `message()` instead of `cat()`.
- Moved repository to https://github.com/KennispuntTwente/tidyprompt.
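The categorization wraps introduced here follow the same pipe pattern as the other `answer_as_*()` wraps. A sketch of intended usage (the argument name `categories` is an assumption, not confirmed by these notes):

```r
library(tidyprompt)

# Sketch: constrain the LLM's answer to one of a fixed set of categories
"I love this product!" |>
  answer_as_category(categories = c("positive", "negative", "neutral")) |>
  send_prompt(llm_provider_ollama())
```

`answer_as_multi_category()` would serve the analogous case where several categories may apply to one input.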
## tidyprompt 0.0.1

- Initial CRAN release.