diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index cca13f96..d36f9484 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -9,7 +9,7 @@ on:
jobs:
build-lint-test:
strategy:
- fail-fast: true
+ fail-fast: false
matrix:
# TODO(cretz): Enable Linux ARM. It's not natively supported with setup-ruby (see
# https://github.com/ruby/setup-ruby#supported-platforms and https://github.com/ruby/setup-ruby/issues/577).
@@ -24,7 +24,7 @@ jobs:
# https://github.com/temporalio/sdk-ruby/issues/172
os: [ubuntu-latest, macos-latest]
# Earliest and latest supported
- rubyVersion: ["3.1", "3.3"]
+ rubyVersion: ["3.2", "3.3"]
include:
- os: ubuntu-latest
@@ -73,4 +73,6 @@ jobs:
- name: Lint, compile, test Ruby
working-directory: ./temporalio
+ # Timeout just in case there's a hanging part in rake
+ timeout-minutes: 20
run: bundle exec rake TESTOPTS="--verbose"
diff --git a/README.md b/README.md
index 81ee43a4..63e868c9 100644
--- a/README.md
+++ b/README.md
@@ -1,4 +1,4 @@
-
+

[](LICENSE)
@@ -17,13 +17,7 @@ Also see:
⚠️ UNDER ACTIVE DEVELOPMENT
This SDK is under active development and has not released a stable version yet. APIs may change in incompatible ways
-until the SDK is marked stable. The SDK has undergone a refresh from a previous unstable version. The last tag before
-this refresh is [v0.1.1](https://github.com/temporalio/sdk-ruby/tree/v0.1.1). Please reference that tag for the
-previous code if needed.
-
-Notably missing from this SDK:
-
-* Workflow workers
+until the SDK is marked stable.
**NOTE: This README is for the current branch and not necessarily what's released on RubyGems.**
@@ -35,15 +29,32 @@ Notably missing from this SDK:
- [Quick Start](#quick-start)
- [Installation](#installation)
- - [Implementing an Activity](#implementing-an-activity)
- - [Running a Workflow](#running-a-workflow)
+ - [Implementing a Workflow and Activity](#implementing-a-workflow-and-activity)
+ - [Running a Worker](#running-a-worker)
+ - [Executing a Workflow](#executing-a-workflow)
- [Usage](#usage)
- [Client](#client)
- [Cloud Client Using mTLS](#cloud-client-using-mtls)
+ - [Cloud Client Using API Key](#cloud-client-using-api-key)
- [Data Conversion](#data-conversion)
- [ActiveRecord and ActiveModel](#activerecord-and-activemodel)
- [Workers](#workers)
- [Workflows](#workflows)
+ - [Workflow Definition](#workflow-definition)
+ - [Running Workflows](#running-workflows)
+ - [Invoking Activities](#invoking-activities)
+ - [Invoking Child Workflows](#invoking-child-workflows)
+ - [Timers and Conditions](#timers-and-conditions)
+ - [Workflow Fiber Scheduling and Cancellation](#workflow-fiber-scheduling-and-cancellation)
+ - [Workflow Futures](#workflow-futures)
+ - [Workflow Utilities](#workflow-utilities)
+ - [Workflow Exceptions](#workflow-exceptions)
+ - [Workflow Logic Constraints](#workflow-logic-constraints)
+ - [Workflow Testing](#workflow-testing)
+ - [Automatic Time Skipping](#automatic-time-skipping)
+ - [Manual Time Skipping](#manual-time-skipping)
+ - [Mocking Activities](#mocking-activities)
+ - [Workflow Replay](#workflow-replay)
- [Activities](#activities)
- [Activity Definition](#activity-definition)
- [Activity Context](#activity-context)
@@ -65,6 +76,8 @@ Notably missing from this SDK:
### Installation
+The Ruby SDK works with Ruby 3.1, 3.2, and 3.3. Support for Ruby 3.4 will be added soon, and support for Ruby 3.1 will be dropped soon.
+
Can require in a Gemfile like:
```
@@ -85,58 +98,87 @@ information.
**NOTE**: Due to [an issue](https://github.com/temporalio/sdk-ruby/issues/162), fibers (and `async` gem) are only
supported on Ruby versions 3.3 and newer.
-### Implementing an Activity
+### Implementing a Workflow and Activity
-Implementing workflows is not yet supported in the Ruby SDK, but implementing activities is.
+Activities are classes. Here is an example of a simple activity that can be put in `say_hello_activity.rb`:
-For example, if you have a `SayHelloWorkflow` workflow in another Temporal language that invokes `SayHello` activity on
-`my-task-queue` in Ruby, you can have the following Ruby script:
```ruby
require 'temporalio/activity'
-require 'temporalio/cancellation'
-require 'temporalio/client'
-require 'temporalio/worker'
# Implementation of a simple activity
-class SayHelloActivity < Temporalio::Activity
+class SayHelloActivity < Temporalio::Activity::Definition
def execute(name)
"Hello, #{name}!"
end
end
+```
+
+Workflows are also classes. To create the workflow, put the following in `say_hello_workflow.rb`:
+
+```ruby
+require 'temporalio/workflow'
+require_relative 'say_hello_activity'
+
+class SayHelloWorkflow < Temporalio::Workflow::Definition
+ def execute(name)
+ Temporalio::Workflow.execute_activity(
+ SayHelloActivity,
+ name,
+ schedule_to_close_timeout: 300
+ )
+ end
+end
+```
+
+This is a simple workflow that executes the `SayHelloActivity` activity.
+
+### Running a Worker
+
+To run this in a worker, put the following in `worker.rb`:
+
+```ruby
+require 'temporalio/client'
+require 'temporalio/worker'
+require_relative 'say_hello_activity'
+require_relative 'say_hello_workflow'
# Create a client
client = Temporalio::Client.connect('localhost:7233', 'my-namespace')
-# Create a worker with the client and activities
+# Create a worker with the client, activities, and workflows
worker = Temporalio::Worker.new(
client:,
task_queue: 'my-task-queue',
- # There are various forms an activity can take, see specific section for details.
- activities: [SayHelloActivity]
+ workflows: [SayHelloWorkflow],
+ # There are various forms an activity can take, see "Activities" section for details
+ activities: [SayHelloActivity],
+ # During the beta period, this must be provided explicitly, see "Workers" section for details
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
)
-# Run the worker until SIGINT. This can be done in many ways, see specific
-# section for details.
+# Run the worker until SIGINT. This can be done in many ways, see "Workers" section for details.
worker.run(shutdown_signals: ['SIGINT'])
```
-Running that will run the worker until Ctrl+C pressed.
+Running that will run the worker until Ctrl+C is pressed.
-### Running a Workflow
+### Executing a Workflow
-Assuming that `SayHelloWorkflow` just calls this activity, it can be run like so:
+To start and wait on the workflow result, with the worker program running elsewhere, put the following in
+`execute_workflow.rb`:
```ruby
require 'temporalio/client'
+require_relative 'say_hello_workflow'
# Create a client
client = Temporalio::Client.connect('localhost:7233', 'my-namespace')
# Run workflow
result = client.execute_workflow(
- 'SayHelloWorkflow',
- 'Temporal',
+ SayHelloWorkflow,
+ 'Temporal', # This is the input to the workflow
id: 'my-workflow-id',
task_queue: 'my-task-queue'
)
@@ -163,8 +205,8 @@ client = Temporalio::Client.connect('localhost:7233', 'my-namespace')
# Start a workflow
handle = client.start_workflow(
- 'SayHelloWorkflow',
- 'Temporal',
+ MyWorkflow,
+ 'arg1', 'arg2',
id: 'my-workflow-id',
task_queue: 'my-task-queue'
)
@@ -179,6 +221,8 @@ Notes about the above code:
* Temporal clients are not explicitly closed.
* To enable TLS, the `tls` option can be set to `true` or a `Temporalio::Client::Connection::TLSOptions` instance.
* Instead of `start_workflow` + `result` above, `execute_workflow` shortcut can be used if the handle is not needed.
+* Both `start_workflow` and `execute_workflow` accept either the workflow class or the string/symbol name of the
+ workflow.
* The `handle` above is a `Temporalio::Client::WorkflowHandle` which has several other operations that can be performed
on a workflow. To get a handle to an existing workflow, use `workflow_handle` on the client.
* Clients are thread safe and are fiber-compatible (but fiber compatibility only supported for Ruby 3.3+ at this time).
@@ -201,6 +245,22 @@ client = Temporalio::Client.connect(
))
```
+#### Cloud Client Using API Key
+
+Assuming the API key is 'my-api-key', this is how to connect to Temporal Cloud:
+
+```ruby
+require 'temporalio/client'
+
+# Create a client
+client = Temporalio::Client.connect(
+ 'my-namespace.a1b2c.tmprl.cloud:7233',
+ 'my-namespace.a1b2c',
+  api_key: 'my-api-key',
+ tls: true
+)
+```
+
#### Data Conversion
Data converters are used to convert raw Temporal payloads to/from actual Ruby types. A custom data converter can be set
@@ -215,11 +275,12 @@ which supports the following types:
* `nil`
* "bytes" (i.e. `String` with `Encoding::ASCII_8BIT` encoding)
* `Google::Protobuf::MessageExts` instances
-* [`JSON` module](https://docs.ruby-lang.org/en/master/JSON.html) for everything else
+* [JSON module](https://docs.ruby-lang.org/en/master/JSON.html) for everything else
This means that normal Ruby objects will use `JSON.generate` when serializing and `JSON.parse` when deserializing (with
-`create_additions: true` set by default). So a Ruby object will often appear as a hash when deserialized. While
-"JSON Additions" are supported, it is not cross-SDK-language compatible since this is a Ruby-specific construct.
+`create_additions: true` set by default). So a Ruby object will often appear as a hash when deserialized. Also, hashes
+that are passed in with symbol keys end up with string keys when deserialized. While "JSON Additions" are supported, it
+is not cross-SDK-language compatible since this is a Ruby-specific construct.
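
The behaviors described above can be illustrated with plain Ruby using only the standard library `JSON` module (the `Point` class here is hypothetical, purely for illustration):

```ruby
require 'json'

# Hashes passed in with symbol keys come back with string keys
round_tripped = JSON.parse(JSON.generate({ name: 'Temporal' }))
# round_tripped == { 'name' => 'Temporal' }

# A class opting in to "JSON Additions" round-trips as itself, via the
# Ruby-specific JSON.create_id ("json_class") marker
class Point
  attr_reader :x, :y

  def initialize(x, y)
    @x = x
    @y = y
  end

  def to_json(*args)
    { JSON.create_id => self.class.name, 'x' => x, 'y' => y }.to_json(*args)
  end

  def self.json_create(hash)
    new(hash['x'], hash['y'])
  end
end

point = JSON.parse(JSON.generate(Point.new(1, 2)), create_additions: true)
```

The `json_class` marker that makes the second round trip work is what makes JSON Additions Ruby-specific: other SDK languages would just see it as an ordinary object field.
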
The default payload converter is a collection of "encoding payload converters". On serialize, each encoding converter
will be tried in order until one accepts (default falls through to the JSON one). The encoding converter sets an
@@ -280,8 +341,7 @@ well.
### Workers
-Workers host workflows and/or activities. Workflows cannot yet be written in Ruby, but activities can. Here's how to run
-an activity worker:
+Workers host workflows and/or activities. Here's how to run a worker:
```ruby
require 'temporalio/client'
@@ -291,12 +351,15 @@ require 'my_module'
# Create a client
client = Temporalio::Client.connect('localhost:7233', 'my-namespace')
-# Create a worker with the client and activities
+# Create a worker with the client, activities, and workflows
worker = Temporalio::Worker.new(
client:,
task_queue: 'my-task-queue',
- # There are various forms an activity can take, see specific section for details.
- activities: [MyModule::MyActivity]
+ workflows: [MyModule::MyWorkflow],
+ # There are various forms an activity can take, see "Activities" section for details
+ activities: [MyModule::MyActivity],
+ # During the beta period, this must be provided explicitly, see below for details
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
)
# Run the worker until block complete
@@ -309,21 +372,495 @@ Notes about the above code:
* A worker uses the same client that is used for other Temporal things.
* This just shows providing an activity class, but there are other forms, see the "Activities" section for details.
+* The `workflow_executor` defaults to `Temporalio::Worker::WorkflowExecutor::Ractor.instance`, which intentionally does
+  not work during this beta period. Therefore, explicitly opting in to
+  `Temporalio::Worker::WorkflowExecutor::ThreadPool.default` is currently required.
* The worker `run` method accepts an optional `Temporalio::Cancellation` object that can be used to cancel instead of or in
addition to providing a block that waits for completion.
-* The worker `run` method accepts an `shutdown_signals` array which will trap the signal and start shutdown when
+* The worker `run` method accepts a `shutdown_signals` array which will trap the signal and start shutdown when
received.
* Workers work with threads or fibers (but fiber compatibility only supported for Ruby 3.3+ at this time). Fiber-based
activities (see "Activities" section) only work if the worker is created within a fiber.
* The `run` method does not return until the worker is shut down. This means even if shutdown is triggered (e.g. via
`Cancellation` or block completion), it may not return immediately. Activities not completing may hang worker
shutdown, see the "Activities" section.
-* Workers can have many more options not shown here (e.g. data converters and interceptors).
+* Workers can have many more options not shown here (e.g. tuners and interceptors).
* The `Temporalio::Worker.run_all` class method is available for running multiple workers concurrently.
### Workflows
-⚠️ Workflows cannot yet be implemented Ruby.
+#### Workflow Definition
+
+Workflows are defined as classes that extend `Temporalio::Workflow::Definition`. The entry point for a workflow is
+`execute` and must be defined. Methods for handling signals, queries, and updates are marked with `workflow_signal`,
+`workflow_query`, and `workflow_update` just before the method is defined. Here is an example of a workflow definition:
+
+```ruby
+require 'temporalio/workflow'
+
+class GreetingWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :current_greeting
+
+ def execute(params)
+ loop do
+ # Call activity called CreateGreeting to create greeting and store as attribute
+ @current_greeting = Temporalio::Workflow.execute_activity(
+ CreateGreeting,
+ params,
+ schedule_to_close_timeout: 300
+ )
+ Temporalio::Workflow.logger.debug("Greeting set to #{@current_greeting}")
+
+ # Wait for param update or complete signal. Note, cancellation can occur by default
+ # on wait_condition calls, so Cancellation object doesn't need to be passed
+ # explicitly.
+ Temporalio::Workflow.wait_condition { @greeting_params_update || @complete }
+
+ # If there was an update, exchange and rerun. If it's _only_ a complete, finish
+ # workflow with the greeting.
+ if @greeting_params_update
+ params, @greeting_params_update = @greeting_params_update, nil
+ else
+ return @current_greeting
+ end
+ end
+ end
+
+ workflow_update
+ def update_greeting_params(greeting_params_update)
+ @greeting_params_update = greeting_params_update
+ end
+
+ workflow_signal
+ def complete_with_greeting
+ @complete = true
+ end
+end
+```
+
+Notes about the above code:
+
+* `execute` is the primary entrypoint and its result/exception represents the workflow result/failure.
+* `workflow_signal`, `workflow_query` (and the shortcut seen above, `workflow_query_attr_reader`), and `workflow_update`
+ implicitly create class methods usable by callers/clients. A workflow definition with no methods actually implemented
+ can even be created for use by clients if the workflow is implemented elsewhere and/or in another language.
+* Workflow code must be deterministic. See the "Workflow Logic Constraints" section below.
+* `execute_activity` accepts either the activity class or the string/symbol for the name.
+
+The following protected class methods are available on `Temporalio::Workflow::Definition` to customize the overall
+workflow definition/behavior:
+
+* `workflow_name` - Accepts a string or symbol to change the name. Otherwise the name is defaulted to the unqualified
+ class name.
+* `workflow_dynamic` - Marks a workflow as dynamic. Dynamic workflows do not have names and handle any workflow that is
+ not otherwise registered. A worker can only have one dynamic workflow. It is often useful to use `workflow_raw_args`
+ with this.
+* `workflow_raw_args` - Delivers workflow arguments to `execute` (and `initialize` if `workflow_init` is in use) as
+ `Temporalio::Converters::RawValue`s. These are wrappers for the raw payloads that have not been decoded. They can be
+ decoded with `Temporalio::Workflow.payload_converter`. Using this with `*args` splat can be helpful in dynamic
+ situations.
+* `workflow_failure_exception_type` - Accepts one or more exception classes that will be considered workflow failure
+ instead of task failure. See the "Exceptions" section later on what this means. This can be called multiple times.
+* `workflow_query_attr_reader` - A helper that accepts one or more symbols for attributes to expose as `attr_reader`
+ _and_ `workflow_query`. This means it is a superset of `attr_reader` and will not work if also using `attr_reader` or
+ `attr_accessor`. If a writer is needed alongside this, use `attr_writer`.
+
+The following protected class methods can be called just before defining instance methods to customize the
+definition/behavior of the method:
+
+* `workflow_init` - Mark an `initialize` method as needing the workflow start arguments. Otherwise, `initialize` must
+ accept no required arguments. This must be placed above the `initialize` method or it will fail.
+* `workflow_signal` - Mark the next method as a workflow signal. The signal name is defaulted to the method name but can
+ be customized by the `name` kwarg. See the API documentation for more kwargs that can be set. Return values for
+ signals are discarded and exceptions raised in signal handlers are treated as if they occurred in the primary workflow
+ method. This also defines a class method of the same name to return the definition for use by clients.
+* `workflow_query` - Mark the next method as a workflow query. The query name is defaulted to the method name but can
+ be customized by the `name` kwarg. See the API documentation for more kwargs that can be set. The result of the method
+ is the result of the query. Queries must never have any side effects, meaning they should never mutate state or try to
+ wait on anything. This also defines a class method of the same name to return the definition for use by clients.
+* `workflow_update` - Mark the next method as a workflow update. The update name is defaulted to the method name but can
+ be customized by the `name` kwarg. See the API documentation for more kwargs that can be set. The result of the method
+ is the result of the update. This also defines a class method of the same name to return the definition for use by
+ clients.
+* `workflow_update_validator` - Mark the next method as a validator to an update. This accepts a symbol for the
+ `workflow_update` method it validates. Validators are used to do early rejection of updates and must never have any
+ side effects, meaning they should never mutate state or try to wait on anything.
+
+Workflows can be inherited, but subclass workflow-level decorators override superclass ones, and the same method can't
+be decorated with different handler types/names in the hierarchy.
+
+#### Running Workflows
+
+To start a workflow from a client, you can `start_workflow` and use the resulting handle:
+
+```ruby
+# Start the workflow
+handle = my_client.start_workflow(
+ GreetingWorkflow,
+ { salutation: 'Hello', name: 'Temporal' },
+ id: 'my-workflow-id',
+ task_queue: 'my-task-queue'
+)
+
+# Check current greeting via query
+puts "Current greeting: #{handle.query(GreetingWorkflow.current_greeting)}"
+
+# Change the params via update
+handle.execute_update(
+ GreetingWorkflow.update_greeting_params,
+ { salutation: 'Aloha', name: 'John' }
+)
+
+# Tell it to complete via signal
+handle.signal(GreetingWorkflow.complete_with_greeting)
+
+# Wait for workflow result
+puts "Final greeting: #{handle.result}"
+```
+
+Some things to note about the above code:
+
+* This uses the `GreetingWorkflow` workflow from the previous section.
+* The output of this code is "Current greeting: Hello, Temporal!" and "Final greeting: Aloha, John!".
+* ID and task queue are required for starting a workflow.
+* Signal, query, and update calls here use the class methods created on the definition for safety. So if the
+ `update_greeting_params` method didn't exist or wasn't marked as an update, the code will fail client side before even
+ attempting the call. Static typing tooling may also take advantage of this for param/result type checking.
+* A helper `execute_workflow` method is available on the client that is just `start_workflow` + handle `result`.
+
+#### Invoking Activities
+
+* Activities are executed with `Temporalio::Workflow.execute_activity`, which accepts the activity class or a
+ string/symbol activity name.
+* Activity options are kwargs on the `execute_activity` method. Either `schedule_to_close_timeout` or
+ `start_to_close_timeout` must be set.
+* Other options like `retry_policy`, `cancellation_type`, etc can also be set.
+* The `cancellation` can be set to a `Cancellation` to send a cancel request to the activity. By default, the
+ `cancellation` is the overall `Temporalio::Workflow.cancellation` which is the overarching workflow cancellation.
+* Activity failures are raised from the call as `Temporalio::Error::ActivityError`.
+* `execute_local_activity` exists with mostly the same options for local activities.
+
+#### Invoking Child Workflows
+
+* Child workflows are started with `Temporalio::Workflow.start_child_workflow`, which accepts the workflow class or
+ string/symbol name, arguments, and other options.
+* Result for `start_child_workflow` is a `Temporalio::Workflow::ChildWorkflowHandle` which has the `id`, the ability to
+ wait on the `result`, and the ability to `signal` the child.
+* The `start_child_workflow` call does not complete until the start has been accepted by the server.
+* A helper `execute_child_workflow` method is available that is just `start_child_workflow` + handle `result`.
+
+#### Timers and Conditions
+
+* A timer is represented by `Temporalio::Workflow.sleep`.
+ * Timers are also started on `Temporalio::Workflow.timeout`.
+ * _Technically_ `Kernel.sleep` and `Timeout.timeout` also delegate to the above calls, but the more explicit workflow
+ forms are encouraged because they accept more options and are not subject to Ruby standard library implementation
+ changes.
+ * Each timer accepts a `Cancellation`, but if none is given, it defaults to `Temporalio::Workflow.cancellation`.
+* `Temporalio::Workflow.wait_condition` accepts a block that waits until the evaluated block result is truthy, then
+ returns the value.
+  * The block is invoked on each iteration of the internal event loop. This means it cannot have any side effects.
+ * This is commonly used for checking if a variable is changed from some other part of a workflow (e.g. a signal
+ handler).
+  * Each wait condition accepts a `Cancellation`, but if none is given, it defaults to
+ `Temporalio::Workflow.cancellation`.
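
For reference, here is the plain standard-library behavior that the workflow `timeout` form mirrors. Outside a workflow, `Timeout.timeout` interrupts the block after the given number of seconds and raises `Timeout::Error` (this snippet uses only stdlib, no Temporal APIs):

```ruby
require 'timeout'

# Plain standard-library behavior: the block is interrupted after the
# given number of seconds and Timeout::Error is raised at the call site
result =
  begin
    Timeout.timeout(0.05) do
      sleep 1
      'finished'
    end
  rescue Timeout::Error
    'timed out'
  end
# result == 'timed out'
```
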
+
+#### Workflow Fiber Scheduling and Cancellation
+
+Workflows are backed by a custom, deterministic `Fiber::Scheduler`. All fiber calls inside a workflow use this scheduler
+to ensure coroutines run deterministically.
+
+Every workflow contains a `Temporalio::Cancellation` at `Temporalio::Workflow.cancellation`. This is canceled when the
+workflow is canceled. For all workflow calls that accept a cancellation token, this is the default. So if a workflow is
+waiting on `execute_activity` and the workflow is canceled, that cancellation will propagate to the waiting activity.
+
+`Cancellation`s may be created to perform cancellation more specifically. A `Cancellation` token derived from the
+workflow one can be created via `my_cancel, my_cancel_proc = Cancellation.new(Temporalio::Workflow.cancellation)`. Then
+`my_cancel` can be passed as `cancellation` to cancel something more specifically when `my_cancel_proc.call` is invoked.
+
+`Cancellation`s don't have to be derived from the workflow one; they can be created standalone or "detached". This
+is useful for executing, say, a cleanup activity in an `ensure` block that needs to run even on cancel. If the cleanup
+activity had instead used the workflow cancellation or one derived from it, then on cancellation it would be cancelled
+before it even started.
+
+#### Workflow Futures
+
+`Temporalio::Workflow::Future` can be used for running things in the background or concurrently. This is basically a
+safe wrapper around `Fiber.schedule` for starting and `Workflow.wait_condition` for waiting.
+
+Nothing uses futures by default, but they work with all workflow code/constructs. For instance, to run 3 activities and
+wait for them all to complete, something like this can be written:
+
+```ruby
+# Start 3 activities in background
+fut1 = Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.execute_activity(MyActivity1, schedule_to_close_timeout: 300)
+end
+fut2 = Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.execute_activity(MyActivity2, schedule_to_close_timeout: 300)
+end
+fut3 = Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.execute_activity(MyActivity3, schedule_to_close_timeout: 300)
+end
+
+# Wait for them all to complete
+Temporalio::Workflow::Future.all_of(fut1, fut2, fut3).wait
+
+Temporalio::Workflow.logger.debug("Got: #{fut1.result}, #{fut2.result}, #{fut3.result}")
+```
+
+Or, say, to wait on the first of 5 activities or a timeout to complete:
+
+```ruby
+# Start 5 activities
+act_futs = 5.times.map do |i|
+ Temporalio::Workflow::Future.new do
+    Temporalio::Workflow.execute_activity(MyActivity, "my-arg-#{i}", schedule_to_close_timeout: 300)
+ end
+end
+# Start a timer
+sleep_fut = Temporalio::Workflow::Future.new { Temporalio::Workflow.sleep(30) }
+
+# Wait for first act result or sleep fut
+act_result = Temporalio::Workflow::Future.any_of(sleep_fut, *act_futs).wait
+# Fail if timer done first
+raise Temporalio::Error::ApplicationError, 'Timer expired' if sleep_fut.done?
+# Print act result otherwise
+puts "Act result: #{act_result}"
+```
+
+There are several other details not covered here about futures, such as how exceptions are handled, how to use a setter
+proc instead of a block, etc. See the API documentation for details.
+
+#### Workflow Utilities
+
+In addition to the pieces documented above, additional methods are available on `Temporalio::Workflow` that can be used
+from workflows including:
+
+* `in_workflow?` - Returns `true` if in the workflow or `false` otherwise. This is the only method on the class that can
+ be called outside of a workflow without raising an exception.
+* `info` - Immutable workflow information.
+* `logger` - A Ruby logger that adds contextual information and takes care not to log on replay.
+* `metric_meter` - A metric meter for making custom metrics that adds contextual information and takes care not to
+ record on replay.
+* `random` - A deterministic `Random` instance.
+* `memo` - A read-only hash of the memo (updated via `upsert_memo`).
+* `search_attributes` - A read-only `SearchAttributes` collection (updated via `upsert_search_attributes`).
+* `now` - Current, deterministic UTC time for the workflow.
+* `all_handlers_finished?` - Returns true when all signal and update handlers are done. Useful as
+ `Temporalio::Workflow.wait_condition { Temporalio::Workflow.all_handlers_finished? }` for making sure not to return
+ from the primary workflow method until all handlers are done.
+* `patched` and `deprecate_patch` - Support for patch-based versioning inside the workflow.
+* `continue_as_new_suggested` - Returns true when the server recommends performing a continue as new.
+* `current_update_info` - Returns `Temporalio::Workflow::UpdateInfo` if the current code is inside an update, or nil
+ otherwise.
+* `external_workflow_handle` - Obtains a handle to an external workflow for signalling or cancelling.
+* `payload_converter` - Payload converter if needed for converting raw args.
+* `signal_handlers`, `query_handlers`, and `update_handlers` - Hashes for the current set of handlers keyed by name (or
+ nil key for dynamic). `[]=` or `store` can be called on these to update the handlers, though defined handlers are
+ encouraged over runtime-set ones.
+
+`Temporalio::Workflow::ContinueAsNewError` can be raised to continue-as-new the workflow. It accepts positional args and
+defaults the workflow to the same as the current, though it can be changed with the `workflow` kwarg. See API
+documentation for other details.
+
+#### Workflow Exceptions
+
+* Workflows can raise exceptions to fail the workflow/update or the "workflow task" (i.e. suspend the workflow, retrying
+ until code update allows it to continue).
+* By default, exceptions that are instances of `Temporalio::Error::Failure` (or `Timeout::Error`) will fail the
+ workflow/update with that exception.
+ * For failing the workflow/update explicitly with a user exception, explicitly raise
+ `Temporalio::Error::ApplicationError`. This can be marked non-retryable or include details as needed.
+ * Other exceptions that come from activity execution, child execution, cancellation, etc are already instances of
+ `Temporalio::Error::Failure` and will fail the workflow/update if uncaught.
+* By default, all other exceptions fail the "workflow task" which means the workflow/update will continually retry until
+ the code is fixed. This is helpful for bad code or other non-predictable exceptions. To actually fail the
+ workflow/update, use `Temporalio::Error::ApplicationError` as mentioned above.
+* By default, all non-deterministic exceptions that are detected internally fail the "workflow task".
+
+The default behavior can be customized at the worker level for all workflows via the
+`workflow_failure_exception_types` worker option or per workflow via the `workflow_failure_exception_type` definition
+method on the workflow itself. When a workflow encounters a "workflow task" failure (i.e. suspension), it first checks
+these collections to see whether the exception is an instance of any of the types and, if so, turns it into a
+workflow/update failure. As a special case, when a non-deterministic exception occurs and
+`Temporalio::Workflow::NondeterminismError` is assignable to any of the types in the collection, that too
+will turn into a workflow/update failure. However, unlike other exceptions, non-deterministic exceptions that match
+during update handlers become workflow failures, not update failures, because a non-deterministic exception is an
+entire-workflow-failure situation.
+
+#### Workflow Logic Constraints
+
+Temporal Workflows [must be deterministic](https://docs.temporal.io/workflows#deterministic-constraints), which includes
+Ruby workflows. This means there are several things workflows cannot do such as:
+
+* Perform IO (network, disk, stdio, etc)
+* Access/alter external mutable state
+* Do any threading
+* Do anything using the system clock (e.g. `Time.now`)
+* Make any random calls
+* Make any not-guaranteed-deterministic calls
+
+#### Workflow Testing
+
+Workflow testing can be done in an integration-test fashion against a real server. However, it is hard to simulate
+timeouts and other long time-based code. Using the time-skipping workflow test environment can help there.
+
+A non-time-skipping `Temporalio::Testing::WorkflowEnvironment` can be started via `start_local` which supports all
+standard Temporal features. It is actually a real Temporal server lazily downloaded on first use and run as a
+subprocess in the background.
+
+A time-skipping `Temporalio::Testing::WorkflowEnvironment` can be started via `start_time_skipping` which is a
+reimplementation of the Temporal server with special time skipping capabilities. This too lazily downloads the process
+to run when first called. Note, this class is neither thread safe nor safe for use with independent tests. It can be reused,
+but only for one test at a time because time skipping is locked/unlocked at the environment level. Note, the
+time-skipping test server does not work on ARM-based processors at this time, though macOS ARM users can use it via the
+built-in x64 translation in macOS.
+
+##### Automatic Time Skipping
+
+Anytime a workflow result is waited on, the time-skipping server automatically advances to the next event it can. To
+manually advance time before waiting on the result of the workflow, the `WorkflowEnvironment.sleep` method can be used
+on the environment itself. If an activity is running, time-skipping is disabled.
+
+Here's a simple example of a workflow that sleeps for 24 hours:
+
+```ruby
+require 'temporalio/workflow'
+
+class WaitADayWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.sleep(1 * 24 * 60 * 60)
+ 'all done'
+ end
+end
+```
+
+A regular integration test of this workflow on a normal server would be way too slow. However, the time-skipping server
+automatically skips to the next event when we wait on the result. Here's a minitest for that workflow:
+
+```ruby
+class MyTest < Minitest::Test
+ def test_wait_a_day
+ Temporalio::Testing::WorkflowEnvironment.start_time_skipping do |env|
+ worker = Temporalio::Worker.new(
+ client: env.client,
+ task_queue: "tq-#{SecureRandom.uuid}",
+ workflows: [WaitADayWorkflow],
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
+ )
+ worker.run do
+ result = env.client.execute_workflow(
+ WaitADayWorkflow,
+ id: "wf-#{SecureRandom.uuid}",
+ task_queue: worker.task_queue
+ )
+ assert_equal 'all done', result
+ end
+ end
+ end
+end
+```
+
+This test will run almost instantly. This is because calling `execute_workflow` on the client is really
+`start_workflow` followed by waiting on the handle's `result`, and `result` automatically skips time as much as it can
+(basically until the end of the workflow or until an activity is run).
+
+To disable automatic time-skipping while waiting for a workflow result, run code inside a block passed to
+`auto_time_skipping_disabled`.
+
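For example, a fragment (reusing an `env` and workflow `handle` like those in the tests below) might look like:

```ruby
# Within a start_time_skipping environment, with a workflow handle in hand:
result = env.auto_time_skipping_disabled do
  # Waiting on the result here does not advance the test server's clock
  handle.result
end
```
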
+##### Manual Time Skipping
+
+Until a workflow is waited on, all time skipping in the time-skipping environment is done manually via
+`WorkflowEnvironment.sleep`.
+
+Here's a workflow that waits for a signal or times out:
+
+```ruby
+require 'temporalio/workflow'
+
+class SignalWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.timeout(45) do
+ Temporalio::Workflow.wait_condition { @signal_received }
+ 'got signal'
+ rescue Timeout::Error
+ 'got timeout'
+ end
+ end
+
+ workflow_signal
+ def some_signal
+ @signal_received = true
+ end
+end
+```
+
+To test a normal signal, you might:
+
+```ruby
+class MyTest < Minitest::Test
+ def test_signal_workflow_success
+ Temporalio::Testing::WorkflowEnvironment.start_time_skipping do |env|
+ worker = Temporalio::Worker.new(
+ client: env.client,
+ task_queue: "tq-#{SecureRandom.uuid}",
+ workflows: [SignalWorkflow],
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
+ )
+ worker.run do
+ handle = env.client.start_workflow(
+ SignalWorkflow,
+ id: "wf-#{SecureRandom.uuid}",
+ task_queue: worker.task_queue
+ )
+ handle.signal(SignalWorkflow.some_signal)
+ assert_equal 'got signal', handle.result
+ end
+ end
+ end
+end
+```
+
+But how would you test the timeout part? Like so:
+
+```ruby
+class MyTest < Minitest::Test
+ def test_signal_workflow_timeout
+ Temporalio::Testing::WorkflowEnvironment.start_time_skipping do |env|
+ worker = Temporalio::Worker.new(
+ client: env.client,
+ task_queue: "tq-#{SecureRandom.uuid}",
+ workflows: [SignalWorkflow],
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
+ )
+ worker.run do
+ handle = env.client.start_workflow(
+ SignalWorkflow,
+ id: "wf-#{SecureRandom.uuid}",
+ task_queue: worker.task_queue
+ )
+ env.sleep(50)
+ assert_equal 'got timeout', handle.result
+ end
+ end
+ end
+end
+```
+
+This test will run almost instantly. The `env.sleep(50)` call manually skips 50 seconds of time, allowing the
+45-second timeout in the workflow to trigger without any real-time waiting.
+
+##### Mocking Activities
+
+When testing workflows, often you don't want to actually run the activities. Activities are just classes that extend
+`Temporalio::Activity::Definition`. Simply write different/empty/fake/asserting ones and pass those to the worker to
+have different activities called during the test. You may need to use `activity_name :MyRealActivityClassName` inside
+the mock activity class to make it appear as the real name.
+
+#### Workflow Replay
+
+TODO: Workflow replayer not yet implemented
### Activities
@@ -334,16 +871,16 @@ Activities can be defined in a few different ways. They are usually classes, but
Here is a common activity definition:
```ruby
-class FindUserActivity < Temporalio::Activity
+class FindUserActivity < Temporalio::Activity::Definition
def execute(user_id)
User.find(user_id)
end
end
```
-Activities are defined as classes that extend `Temporalio::Activity` and provide an `execute` method. When this activity
-is provided to the worker as a _class_ (e.g. `activities: [FindUserActivity]`), it will be instantiated for
-_every attempt_. Many users may prefer using the same instance across activities, for example:
+Activities are defined as classes that extend `Temporalio::Activity::Definition` and provide an `execute` method. When
+this activity is provided to the worker as a _class_ (e.g. `activities: [FindUserActivity]`), it will be instantiated
+for _every attempt_. Many users may prefer using the same instance across activities, for example:
```ruby
-class FindUserActivity < Temporalio::Activity
+class FindUserActivity < Temporalio::Activity::Definition
@@ -367,8 +904,8 @@ Some notes about activity definition:
* Long running activities should heartbeat regularly, see "Activity Heartbeating and Cancellation" later.
* By default every activity attempt is executed in a thread on a thread pool, but fibers are also supported. See
"Activity Concurrency and Executors" section later for more details.
-* Technically an activity definition can be created manually via `Temporalio::Activity::Definition.new` that accepts a
- proc or a block, but the class form is recommended.
+* Technically an activity definition can be created manually via `Temporalio::Activity::Definition::Info.new` that
+ accepts a proc or a block, but the class form is recommended.
#### Activity Context
diff --git a/temporalio/.rubocop.yml b/temporalio/.rubocop.yml
index 309e3bbc..57e93820 100644
--- a/temporalio/.rubocop.yml
+++ b/temporalio/.rubocop.yml
@@ -28,7 +28,8 @@ Layout/LeadingCommentSpace:
# Don't need super for activities
Lint/MissingSuper:
AllowedParentClasses:
- - Temporalio::Activity
+ - Temporalio::Activity::Definition
+ - Temporalio::Workflow::Definition
# Allow tests to nest methods
Lint/NestedMethodDefinition:
@@ -61,7 +62,7 @@ Metrics/ModuleLength:
# The default is too small
Metrics/PerceivedComplexity:
- Max: 25
+ Max: 40
# We want classes to be documented
Style/Documentation:
@@ -80,6 +81,10 @@ Style/GlobalVars:
Exclude:
- test/**/*
+# We're ok with two compound comparisons before doing Array.include?
+Style/MultipleComparison:
+ ComparisonsThreshold: 3
+
# We want our require lists to be in order
Style/RequireOrder:
Enabled: true
diff --git a/temporalio/.yardopts b/temporalio/.yardopts
new file mode 100644
index 00000000..0e4f7448
--- /dev/null
+++ b/temporalio/.yardopts
@@ -0,0 +1,2 @@
+--readme README.md
+--protected
\ No newline at end of file
diff --git a/temporalio/Gemfile b/temporalio/Gemfile
index bca320ea..e7a4e980 100644
--- a/temporalio/Gemfile
+++ b/temporalio/Gemfile
@@ -17,7 +17,7 @@ group :development do
gem 'rbs', '~> 3.5.3'
gem 'rb_sys', '~> 0.9.63'
gem 'rubocop'
- gem 'sqlite3', '~> 1.4'
+ gem 'sqlite3'
gem 'steep', '~> 1.7.1'
gem 'yard'
end
diff --git a/temporalio/Rakefile b/temporalio/Rakefile
index c2866a55..d99065c7 100644
--- a/temporalio/Rakefile
+++ b/temporalio/Rakefile
@@ -1,6 +1,6 @@
# frozen_string_literal: true
-# rubocop:disable Metrics/BlockLength, Lint/MissingCopEnableDirective, Style/DocumentationMethod
+# rubocop:disable Lint/MissingCopEnableDirective, Style/DocumentationMethod
require 'bundler/gem_tasks'
require 'rb_sys/cargo/metadata'
@@ -55,8 +55,9 @@ module CustomizeYardWarnings # rubocop:disable Style/Documentation
super
rescue YARD::Parser::UndocumentableError
# We ignore if it's an API warning
- raise unless statement.last.file.start_with?('lib/temporalio/api/') ||
- statement.last.file.start_with?('lib/temporalio/internal/bridge/api/')
+ last_file = statement.last.file
+ raise unless (last_file.start_with?('lib/temporalio/api/') && last_file.count('/') > 3) ||
+ last_file.start_with?('lib/temporalio/internal/bridge/api/')
end
end
@@ -64,301 +65,15 @@ YARD::Handlers::Ruby::ConstantHandler.prepend(CustomizeYardWarnings)
YARD::Rake::YardocTask.new { |t| t.options = ['--fail-on-warning'] }
-require 'fileutils'
-require 'google/protobuf'
+Rake::Task[:yard].enhance([:copy_parent_files]) do
+ rm ['LICENSE', 'README.md']
+end
namespace :proto do
desc 'Generate API and Core protobufs'
task :generate do
- # Remove all existing
- FileUtils.rm_rf('lib/temporalio/api')
-
- def generate_protos(api_protos)
- # Generate API to temp dir and move
- FileUtils.rm_rf('tmp-proto')
- FileUtils.mkdir_p('tmp-proto')
- sh 'bundle exec grpc_tools_ruby_protoc ' \
- '--proto_path=ext/sdk-core/sdk-core-protos/protos/api_upstream ' \
- '--proto_path=ext/sdk-core/sdk-core-protos/protos/api_cloud_upstream ' \
- '--proto_path=ext/additional_protos ' \
- '--ruby_out=tmp-proto ' \
- "#{api_protos.join(' ')}"
-
- # Walk all generated Ruby files and cleanup content and filename
- Dir.glob('tmp-proto/temporal/api/**/*.rb') do |path|
- # Fix up the import
- content = File.read(path)
- content.gsub!(%r{^require 'temporal/(.*)_pb'$}, "require 'temporalio/\\1'")
- File.write(path, content)
-
- # Remove _pb from the filename
- FileUtils.mv(path, path.sub('_pb', ''))
- end
-
- # Move from temp dir and remove temp dir
- FileUtils.cp_r('tmp-proto/temporal/api', 'lib/temporalio')
- FileUtils.rm_rf('tmp-proto')
- end
-
- # Generate from API with Google ones removed
- generate_protos(Dir.glob('ext/sdk-core/sdk-core-protos/protos/api_upstream/**/*.proto').reject do |proto|
- proto.include?('google')
- end)
-
- # Generate from Cloud API
- generate_protos(Dir.glob('ext/sdk-core/sdk-core-protos/protos/api_cloud_upstream/**/*.proto'))
-
- # Generate additional protos
- generate_protos(Dir.glob('ext/additional_protos/**/*.proto'))
-
- # Write files that will help with imports. We are requiring the
- # request_response and not the service because the service depends on Google
- # API annotations we don't want to have to depend on.
- File.write(
- 'lib/temporalio/api/cloud/cloudservice.rb',
- <<~TEXT
- # frozen_string_literal: true
-
- require 'temporalio/api/cloud/cloudservice/v1/request_response'
- TEXT
- )
- File.write(
- 'lib/temporalio/api/workflowservice.rb',
- <<~TEXT
- # frozen_string_literal: true
-
- require 'temporalio/api/workflowservice/v1/request_response'
- TEXT
- )
- File.write(
- 'lib/temporalio/api/operatorservice.rb',
- <<~TEXT
- # frozen_string_literal: true
-
- require 'temporalio/api/operatorservice/v1/request_response'
- TEXT
- )
- File.write(
- 'lib/temporalio/api.rb',
- <<~TEXT
- # frozen_string_literal: true
-
- require 'temporalio/api/cloud/cloudservice'
- require 'temporalio/api/common/v1/grpc_status'
- require 'temporalio/api/errordetails/v1/message'
- require 'temporalio/api/operatorservice'
- require 'temporalio/api/workflowservice'
-
- module Temporalio
- # Raw protocol buffer models.
- module Api
- end
- end
- TEXT
- )
-
- # Write the service classes that have the RPC calls
- def write_service_file(qualified_service_name:, file_name:, class_name:, service_enum:)
- # Do service lookup
- desc = Google::Protobuf::DescriptorPool.generated_pool.lookup(qualified_service_name)
- raise 'Failed finding service descriptor' unless desc
-
- # Open file to generate Ruby code
- File.open("lib/temporalio/client/connection/#{file_name}.rb", 'w') do |file|
- file.puts <<~TEXT
- # frozen_string_literal: true
-
- # Generated code. DO NOT EDIT!
-
- require 'temporalio/api'
- require 'temporalio/client/connection/service'
- require 'temporalio/internal/bridge/client'
-
- module Temporalio
- class Client
- class Connection
- # #{class_name} API.
- class #{class_name} < Service
- # @!visibility private
- def initialize(connection)
- super(connection, Internal::Bridge::Client::#{service_enum})
- end
- TEXT
-
- desc.each do |method|
- # Camel case to snake case
- rpc = method.name.gsub(/([A-Z])/, '_\1').downcase.delete_prefix('_')
- file.puts <<-TEXT
-
- # Calls #{class_name}.#{method.name} API call.
- #
- # @param request [#{method.input_type.msgclass}] API request.
- # @param rpc_options [RPCOptions, nil] Advanced RPC options.
- # @return [#{method.output_type.msgclass}] API response.
- def #{rpc}(request, rpc_options: nil)
- invoke_rpc(
- rpc: '#{rpc}',
- request_class: #{method.input_type.msgclass},
- response_class: #{method.output_type.msgclass},
- request:,
- rpc_options:
- )
- end
- TEXT
- end
-
- file.puts <<~TEXT
- end
- end
- end
- end
- TEXT
- end
-
- # Open file to generate RBS code
- # TODO(cretz): Improve this when RBS proto is supported
- File.open("sig/temporalio/client/connection/#{file_name}.rbs", 'w') do |file|
- file.puts <<~TEXT
- # Generated code. DO NOT EDIT!
-
- module Temporalio
- class Client
- class Connection
- class #{class_name} < Service
- def initialize: (Connection) -> void
- TEXT
-
- desc.each do |method|
- # Camel case to snake case
- rpc = method.name.gsub(/([A-Z])/, '_\1').downcase.delete_prefix('_')
- file.puts <<-TEXT
- def #{rpc}: (
- untyped request,
- ?rpc_options: RPCOptions?
- ) -> untyped
- TEXT
- end
-
- file.puts <<~TEXT
- end
- end
- end
- end
- TEXT
- end
- end
-
- require './lib/temporalio/api/workflowservice/v1/service'
- write_service_file(
- qualified_service_name: 'temporal.api.workflowservice.v1.WorkflowService',
- file_name: 'workflow_service',
- class_name: 'WorkflowService',
- service_enum: 'SERVICE_WORKFLOW'
- )
- require './lib/temporalio/api/operatorservice/v1/service'
- write_service_file(
- qualified_service_name: 'temporal.api.operatorservice.v1.OperatorService',
- file_name: 'operator_service',
- class_name: 'OperatorService',
- service_enum: 'SERVICE_OPERATOR'
- )
- require './lib/temporalio/api/cloud/cloudservice/v1/service'
- write_service_file(
- qualified_service_name: 'temporal.api.cloud.cloudservice.v1.CloudService',
- file_name: 'cloud_service',
- class_name: 'CloudService',
- service_enum: 'SERVICE_CLOUD'
- )
-
- # Generate Rust code
- def generate_rust_match_arm(file:, qualified_service_name:, service_enum:, trait:)
- # Do service lookup
- desc = Google::Protobuf::DescriptorPool.generated_pool.lookup(qualified_service_name)
- file.puts <<~TEXT
- #{service_enum} => match call.rpc.as_str() {
- TEXT
-
- desc.to_a.sort_by(&:name).each do |method|
- # Camel case to snake case
- rpc = method.name.gsub(/([A-Z])/, '_\1').downcase.delete_prefix('_')
- file.puts <<~TEXT
- "#{rpc}" => rpc_call!(self, callback, call, #{trait}, #{rpc}),
- TEXT
- end
- file.puts <<~TEXT
- _ => Err(error!("Unknown RPC call {}", call.rpc)),
- },
- TEXT
- end
- File.open('ext/src/client_rpc_generated.rs', 'w') do |file|
- file.puts <<~TEXT
- // Generated code. DO NOT EDIT!
-
- use magnus::{Error, Ruby};
- use temporal_client::{CloudService, OperatorService, WorkflowService};
-
- use super::{error, rpc_call};
- use crate::{
- client::{Client, RpcCall, SERVICE_CLOUD, SERVICE_OPERATOR, SERVICE_WORKFLOW},
- util::AsyncCallback,
- };
-
- impl Client {
- pub fn invoke_rpc(&self, service: u8, callback: AsyncCallback, call: RpcCall) -> Result<(), Error> {
- match service {
- TEXT
- generate_rust_match_arm(
- file:,
- qualified_service_name: 'temporal.api.workflowservice.v1.WorkflowService',
- service_enum: 'SERVICE_WORKFLOW',
- trait: 'WorkflowService'
- )
- generate_rust_match_arm(
- file:,
- qualified_service_name: 'temporal.api.operatorservice.v1.OperatorService',
- service_enum: 'SERVICE_OPERATOR',
- trait: 'OperatorService'
- )
- generate_rust_match_arm(
- file:,
- qualified_service_name: 'temporal.api.cloud.cloudservice.v1.CloudService',
- service_enum: 'SERVICE_CLOUD',
- trait: 'CloudService'
- )
- file.puts <<~TEXT
- _ => Err(error!("Unknown service")),
- }
- }
- }
- TEXT
- end
- sh 'cargo', 'fmt', '--', 'ext/src/client_rpc_generated.rs'
-
- # Generate core protos
- FileUtils.rm_rf('lib/temporalio/internal/bridge/api')
- # Generate API to temp dir
- FileUtils.rm_rf('tmp-proto')
- FileUtils.mkdir_p('tmp-proto')
- sh 'bundle exec grpc_tools_ruby_protoc ' \
- '--proto_path=ext/sdk-core/sdk-core-protos/protos/api_upstream ' \
- '--proto_path=ext/sdk-core/sdk-core-protos/protos/local ' \
- '--ruby_out=tmp-proto ' \
- "#{Dir.glob('ext/sdk-core/sdk-core-protos/protos/local/**/*.proto').join(' ')}"
- # Walk all generated Ruby files and cleanup content and filename
- Dir.glob('tmp-proto/temporal/sdk/**/*.rb') do |path|
- # Fix up the imports
- content = File.read(path)
- content.gsub!(%r{^require 'temporal/(.*)_pb'$}, "require 'temporalio/\\1'")
- content.gsub!(%r{^require 'temporalio/sdk/core/(.*)'$}, "require 'temporalio/internal/bridge/api/\\1'")
- File.write(path, content)
-
- # Remove _pb from the filename
- FileUtils.mv(path, path.sub('_pb', ''))
- end
- # Move from temp dir and remove temp dir
- FileUtils.mkdir_p('lib/temporalio/internal/bridge/api')
- FileUtils.cp_r(Dir.glob('tmp-proto/temporal/sdk/core/*'), 'lib/temporalio/internal/bridge/api')
- FileUtils.rm_rf('tmp-proto')
+ require_relative 'extra/proto_gen'
+ ProtoGen.new.run
end
end
@@ -379,6 +94,7 @@ Rake::Task[:build].enhance([:copy_parent_files]) do
end
task :rust_lint do
+ # TODO(cretz): Add "-- -Dwarnings" to clippy when SDK core passes with it
sh 'cargo', 'clippy'
sh 'cargo', 'fmt', '--check'
end
diff --git a/temporalio/ext/src/client_rpc_generated.rs b/temporalio/ext/src/client_rpc_generated.rs
index de7a3cab..f862e074 100644
--- a/temporalio/ext/src/client_rpc_generated.rs
+++ b/temporalio/ext/src/client_rpc_generated.rs
@@ -1,11 +1,11 @@
// Generated code. DO NOT EDIT!
use magnus::{Error, Ruby};
-use temporal_client::{CloudService, OperatorService, WorkflowService};
+use temporal_client::{CloudService, OperatorService, TestService, WorkflowService};
use super::{error, rpc_call};
use crate::{
- client::{Client, RpcCall, SERVICE_CLOUD, SERVICE_OPERATOR, SERVICE_WORKFLOW},
+ client::{Client, RpcCall, SERVICE_CLOUD, SERVICE_OPERATOR, SERVICE_TEST, SERVICE_WORKFLOW},
util::AsyncCallback,
};
@@ -516,6 +516,27 @@ impl Client {
}
_ => Err(error!("Unknown RPC call {}", call.rpc)),
},
+ SERVICE_TEST => match call.rpc.as_str() {
+ "get_current_time" => {
+ rpc_call!(self, callback, call, TestService, get_current_time)
+ }
+ "lock_time_skipping" => {
+ rpc_call!(self, callback, call, TestService, lock_time_skipping)
+ }
+ "sleep" => rpc_call!(self, callback, call, TestService, sleep),
+ "sleep_until" => rpc_call!(self, callback, call, TestService, sleep_until),
+ "unlock_time_skipping" => {
+ rpc_call!(self, callback, call, TestService, unlock_time_skipping)
+ }
+ "unlock_time_skipping_with_sleep" => rpc_call!(
+ self,
+ callback,
+ call,
+ TestService,
+ unlock_time_skipping_with_sleep
+ ),
+ _ => Err(error!("Unknown RPC call {}", call.rpc)),
+ },
_ => Err(error!("Unknown service")),
}
}
diff --git a/temporalio/ext/src/testing.rs b/temporalio/ext/src/testing.rs
index 8ca72897..1e5fcb82 100644
--- a/temporalio/ext/src/testing.rs
+++ b/temporalio/ext/src/testing.rs
@@ -4,6 +4,7 @@ use magnus::{
use parking_lot::Mutex;
use temporal_sdk_core::ephemeral_server::{
self, EphemeralExe, EphemeralExeVersion, TemporalDevServerConfigBuilder,
+ TestServerConfigBuilder,
};
use crate::{
@@ -23,6 +24,10 @@ pub fn init(ruby: &Ruby) -> Result<(), Error> {
"async_start_dev_server",
function!(EphemeralServer::async_start_dev_server, 3),
)?;
+ class.define_singleton_method(
+ "async_start_test_server",
+ function!(EphemeralServer::async_start_test_server, 3),
+ )?;
class.define_method("target", method!(EphemeralServer::target, 0))?;
class.define_method(
"async_shutdown",
@@ -51,24 +56,7 @@ impl EphemeralServer {
// Build options
let mut opts_build = TemporalDevServerConfigBuilder::default();
opts_build
- .exe(
- if let Some(existing_path) =
- options.member::>(id!("existing_path"))?
- {
- EphemeralExe::ExistingPath(existing_path)
- } else {
- EphemeralExe::CachedDownload {
- version: match options.member::(id!("download_version"))? {
- ref v if v == "default" => EphemeralExeVersion::SDKDefault {
- sdk_name: options.member(id!("sdk_name"))?,
- sdk_version: options.member(id!("sdk_version"))?,
- },
- download_version => EphemeralExeVersion::Fixed(download_version),
- },
- dest_dir: options.member(id!("download_dest_dir"))?,
- }
- },
- )
+ .exe(EphemeralServer::exe_from_options(&options)?)
.namespace(options.member::(id!("namespace"))?)
.ip(options.member::(id!("ip"))?)
.port(options.member::>(id!("port"))?)
@@ -81,7 +69,39 @@ impl EphemeralServer {
.extra_args(options.member(id!("extra_args"))?);
let opts = opts_build
.build()
- .map_err(|err| error!("Invalid Temporalite config: {}", err))?;
+ .map_err(|err| error!("Invalid dev server config: {}", err))?;
+
+ // Start
+ let callback = AsyncCallback::from_queue(queue);
+ let runtime_handle = runtime.handle.clone();
+ runtime.handle.spawn(
+ async move { opts.start_server().await },
+ move |_, result| match result {
+ Ok(core) => callback.push(EphemeralServer {
+ target: core.target.clone(),
+ core: Mutex::new(Some(core)),
+ runtime_handle,
+ }),
+ Err(err) => callback.push(new_error!("Failed starting server: {}", err)),
+ },
+ );
+ Ok(())
+ }
+
+ pub fn async_start_test_server(
+ runtime: &Runtime,
+ options: Struct,
+ queue: Value,
+ ) -> Result<(), Error> {
+ // Build options
+ let mut opts_build = TestServerConfigBuilder::default();
+ opts_build
+ .exe(EphemeralServer::exe_from_options(&options)?)
+ .port(options.member:: >(id!("port"))?)
+ .extra_args(options.member(id!("extra_args"))?);
+ let opts = opts_build
+ .build()
+ .map_err(|err| error!("Invalid test server config: {}", err))?;
// Start
let callback = AsyncCallback::from_queue(queue);
@@ -100,6 +120,23 @@ impl EphemeralServer {
Ok(())
}
+ fn exe_from_options(options: &Struct) -> Result {
+ if let Some(existing_path) = options.member::>(id!("existing_path"))? {
+ Ok(EphemeralExe::ExistingPath(existing_path))
+ } else {
+ Ok(EphemeralExe::CachedDownload {
+ version: match options.member::(id!("download_version"))? {
+ ref v if v == "default" => EphemeralExeVersion::SDKDefault {
+ sdk_name: options.member(id!("sdk_name"))?,
+ sdk_version: options.member(id!("sdk_version"))?,
+ },
+ download_version => EphemeralExeVersion::Fixed(download_version),
+ },
+ dest_dir: options.member(id!("download_dest_dir"))?,
+ })
+ }
+ }
+
pub fn target(&self) -> &str {
&self.target
}
diff --git a/temporalio/ext/src/worker.rs b/temporalio/ext/src/worker.rs
index 8a36939a..6883bb08 100644
--- a/temporalio/ext/src/worker.rs
+++ b/temporalio/ext/src/worker.rs
@@ -1,4 +1,9 @@
-use std::{cell::RefCell, sync::Arc, time::Duration};
+use std::{
+ cell::RefCell,
+ collections::{HashMap, HashSet},
+ sync::Arc,
+ time::Duration,
+};
use crate::{
client::Client,
@@ -7,8 +12,8 @@ use crate::{
util::{AsyncCallback, Struct},
ROOT_MOD,
};
-use futures::StreamExt;
use futures::{future, stream};
+use futures::{stream::BoxStream, StreamExt};
use magnus::{
class, function, method, prelude::*, typed_data, DataTypeFunctions, Error, IntoValue, RArray,
RString, RTypedData, Ruby, TypedData, Value,
@@ -18,7 +23,8 @@ use temporal_sdk_core::{
ResourceBasedSlotsOptions, ResourceBasedSlotsOptionsBuilder, ResourceSlotOptions,
SlotSupplierOptions, TunerHolder, TunerHolderOptionsBuilder, WorkerConfigBuilder,
};
-use temporal_sdk_core_api::errors::PollActivityError;
+use temporal_sdk_core_api::errors::{PollActivityError, PollWfError, WorkflowErrorType};
+use temporal_sdk_core_protos::coresdk::workflow_completion::WorkflowActivationCompletion;
use temporal_sdk_core_protos::coresdk::{ActivityHeartbeat, ActivityTaskCompletion};
pub fn init(ruby: &Ruby) -> Result<(), Error> {
@@ -40,6 +46,10 @@ pub fn init(ruby: &Ruby) -> Result<(), Error> {
"record_activity_heartbeat",
method!(Worker::record_activity_heartbeat, 1),
)?;
+ class.define_method(
+ "async_complete_workflow_activation",
+ method!(Worker::async_complete_workflow_activation, 3),
+ )?;
class.define_method("replace_client", method!(Worker::replace_client, 1))?;
class.define_method("initiate_shutdown", method!(Worker::initiate_shutdown, 0))?;
Ok(())
@@ -54,18 +64,28 @@ pub struct Worker {
core: RefCell>>,
runtime_handle: RuntimeHandle,
activity: bool,
- _workflow: bool,
+ workflow: bool,
}
+#[derive(Copy, Clone)]
enum WorkerType {
Activity,
+ Workflow,
+}
+
+struct PollResult {
+ worker_index: usize,
+ worker_type: WorkerType,
+ result: Result >, String>,
}
impl Worker {
pub fn new(client: &Client, options: Struct) -> Result {
enter_sync!(client.runtime_handle);
+
let activity = options.member::(id!("activity"))?;
- let _workflow = options.member::(id!("workflow"))?;
+ let workflow = options.member::(id!("workflow"))?;
+
// Build config
let config = WorkerConfigBuilder::default()
.namespace(options.member::(id!("namespace"))?)
@@ -107,8 +127,20 @@ impl Worker {
.child(id!("tuner"))?
.ok_or_else(|| error!("Missing tuner"))?,
)?))
- // TODO(cretz): workflow_failure_errors
- // TODO(cretz): workflow_types_to_failure_errors
+ .workflow_failure_errors(
+ if options.member::(id!("nondeterminism_as_workflow_fail"))? {
+ HashSet::from([WorkflowErrorType::Nondeterminism])
+ } else {
+ HashSet::new()
+ },
+ )
+ .workflow_types_to_failure_errors(
+ options
+ .member::>(id!("nondeterminism_as_workflow_fail_for_types"))?
+ .into_iter()
+ .map(|s| (s, HashSet::from([WorkflowErrorType::Nondeterminism])))
+ .collect::>>(),
+ )
.build()
.map_err(|err| error!("Invalid worker options: {}", err))?;
@@ -122,10 +154,55 @@ impl Worker {
core: RefCell::new(Some(Arc::new(worker))),
runtime_handle: client.runtime_handle.clone(),
activity,
- _workflow,
+ workflow,
})
}
+ // Helper that turns a worker + type into a poll stream
+ fn stream_poll<'a>(
+ worker: Arc,
+ worker_index: usize,
+ worker_type: WorkerType,
+ ) -> BoxStream<'a, PollResult> {
+ stream::unfold(Some(worker.clone()), move |worker| async move {
+ // We return no worker so the next streamed item closes
+ // the stream with a None
+ if let Some(worker) = worker {
+ let result = match worker_type {
+ WorkerType::Activity => {
+ match temporal_sdk_core_api::Worker::poll_activity_task(&*worker).await {
+ Ok(res) => Ok(Some(res.encode_to_vec())),
+ Err(PollActivityError::ShutDown) => Ok(None),
+ Err(err) => Err(format!("Poll error: {}", err)),
+ }
+ }
+ WorkerType::Workflow => {
+ match temporal_sdk_core_api::Worker::poll_workflow_activation(&*worker)
+ .await
+ {
+ Ok(res) => Ok(Some(res.encode_to_vec())),
+ Err(PollWfError::ShutDown) => Ok(None),
+ Err(err) => Err(format!("Poll error: {}", err)),
+ }
+ }
+ };
+ let shutdown_next = matches!(result, Ok(None));
+ Some((
+ PollResult {
+ worker_index,
+ worker_type,
+ result,
+ },
+ // No more work if shutdown
+ if shutdown_next { None } else { Some(worker) },
+ ))
+ } else {
+ None
+ }
+ })
+ .boxed()
+ }
+
pub fn async_poll_all(workers: RArray, queue: Value) -> Result<(), Error> {
// Get the first runtime handle
let runtime = workers
@@ -133,42 +210,35 @@ impl Worker {
.runtime_handle
.clone();
- // Create stream of poll calls
- // TODO(cretz): Map for workflow pollers too
+ // Create streams of poll calls
let worker_streams = workers
.into_iter()
.enumerate()
- .filter_map(|(index, worker_val)| {
+ .flat_map(|(index, worker_val)| {
let worker_typed_data = RTypedData::from_value(worker_val).expect("Not typed data");
let worker_ref = worker_typed_data.get::().expect("Not worker");
+ let worker = worker_ref
+ .core
+ .borrow()
+ .as_ref()
+ .expect("Unable to borrow")
+ .clone();
+ let mut streams = Vec::with_capacity(2);
if worker_ref.activity {
- let worker = Some(
- worker_ref
- .core
- .borrow()
- .as_ref()
- .expect("Unable to borrow")
- .clone(),
- );
- Some(Box::pin(stream::unfold(worker, move |worker| async move {
- // We return no worker so the next streamed item closes
- // the stream with a None
- if let Some(worker) = worker {
- let res =
- temporal_sdk_core_api::Worker::poll_activity_task(&*worker).await;
- let shutdown_next = matches!(res, Err(PollActivityError::ShutDown));
- Some((
- (index, WorkerType::Activity, res),
- // No more worker if shutdown
- if shutdown_next { None } else { Some(worker) },
- ))
- } else {
- None
- }
- })))
- } else {
- None
+ streams.push(Self::stream_poll(
+ worker.clone(),
+ index,
+ WorkerType::Activity,
+ ));
}
+ if worker_ref.workflow {
+ streams.push(Self::stream_poll(
+ worker.clone(),
+ index,
+ WorkerType::Workflow,
+ ));
+ }
+ streams
})
.collect::>();
let mut worker_stream = stream::select_all(worker_streams);
@@ -184,24 +254,24 @@ impl Worker {
runtime.spawn(
async move {
// Get next item from the stream
- while let Some((worker_index, worker_type, result)) = worker_stream.next().await {
- // Encode result and send callback to Ruby
- let result = result.map(|v| v.encode_to_vec());
+ while let Some(poll_result) = worker_stream.next().await {
+ // Send callback to Ruby
let callback = callback.clone();
let _ = async_command_tx.send(AsyncCommand::RunCallback(Box::new(move || {
// Get Ruby in callback
let ruby = Ruby::get().expect("Ruby not available");
- let worker_type = match worker_type {
+ let worker_type = match poll_result.worker_type {
WorkerType::Activity => id!("activity"),
+ WorkerType::Workflow => id!("workflow"),
};
// Call block
- let result: Value = match result {
- Ok(val) => RString::from_slice(&val).as_value(),
- Err(PollActivityError::ShutDown) => ruby.qnil().as_value(),
+ let result: Value = match poll_result.result {
+ Ok(Some(val)) => RString::from_slice(&val).as_value(),
+ Ok(None) => ruby.qnil().as_value(),
Err(err) => new_error!("Poll failure: {}", err).as_value(),
};
callback.push(ruby.ary_new_from_values(&[
- worker_index.into_value(),
+ poll_result.worker_index.into_value(),
worker_type.into_value(),
result,
]))
@@ -316,6 +386,37 @@ impl Worker {
Ok(())
}
+ pub fn async_complete_workflow_activation(
+ &self,
+ run_id: String,
+ proto: RString,
+ queue: Value,
+ ) -> Result<(), Error> {
+ let callback = AsyncCallback::from_queue(queue);
+ let worker = self.core.borrow().as_ref().unwrap().clone();
+ let completion = WorkflowActivationCompletion::decode(unsafe { proto.as_slice() })
+ .map_err(|err| error!("Invalid proto: {}", err))?;
+ self.runtime_handle.spawn(
+ async move {
+ temporal_sdk_core_api::Worker::complete_workflow_activation(&*worker, completion)
+ .await
+ },
+ move |ruby, result| {
+ callback.push(ruby.ary_new_from_values(&[
+ (-1).into_value_with(&ruby),
+ run_id.into_value_with(&ruby),
+ match result {
+ Ok(()) => ruby.qnil().into_value_with(&ruby),
+ Err(err) => {
+ new_error!("Completion failure: {}", err).into_value_with(&ruby)
+ }
+ },
+ ]))
+ },
+ );
+ Ok(())
+ }
+
pub fn replace_client(&self, client: &Client) -> Result<(), Error> {
enter_sync!(self.runtime_handle);
let worker = self.core.borrow().as_ref().unwrap().clone();
diff --git a/temporalio/extra/payload_visitor_gen.rb b/temporalio/extra/payload_visitor_gen.rb
new file mode 100644
index 00000000..eb85bd09
--- /dev/null
+++ b/temporalio/extra/payload_visitor_gen.rb
@@ -0,0 +1,236 @@
+# frozen_string_literal: true
+
+require_relative '../lib/temporalio/api'
+require_relative '../lib/temporalio/api/cloud/cloudservice/v1/service'
+require_relative '../lib/temporalio/api/operatorservice/v1/service'
+require_relative '../lib/temporalio/api/workflowservice/v1/service'
+require_relative '../lib/temporalio/internal/bridge/api'
+
+# Generator for the payload visitor.
+class PayloadVisitorGen
+ DESCRIPTORS = [
+ 'temporal.api.workflowservice.v1.WorkflowService',
+ 'temporal.api.operatorservice.v1.OperatorService',
+ 'temporal.api.cloud.cloudservice.v1.CloudService',
+ 'temporal.api.export.v1.WorkflowExecutions',
+ 'coresdk.workflow_activation.WorkflowActivation',
+ 'coresdk.workflow_completion.WorkflowActivationCompletion'
+ ].freeze
+
+ # Generate file code.
+ #
+ # @return [String] File code.
+ def gen_file_code
+ # Collect all the methods of all the classes
+ methods = {}
+ DESCRIPTORS.each do |name|
+ desc = Google::Protobuf::DescriptorPool.generated_pool.lookup(name) or raise "Unknown name: #{name}"
+ walk_desc(desc:, methods:)
+ end
+
+ # Build the code for each method
+ method_bodies = methods.map do |_, method_hash|
+ # Do nothing if no fields
+ next if method_hash[:fields].empty?
+
+ body = "def #{method_name_from_desc(method_hash[:desc])}(value)\n"
+ # Ignore if search attributes are ignored
+ if method_hash[:desc].name == 'temporal.api.common.v1.SearchAttributes'
+ body += " return if @skip_search_attributes\n"
+ end
+ body += " @on_enter&.call(value)\n"
+ method_hash[:fields].each do |field_hash|
+ field_name = field_hash[:field].name
+ other_method_name = method_name_from_desc(field_hash[:type])
+ body += case field_hash[:form]
+ when :map
+ # Skip this when skip_search_attributes is set and the field name is search_attributes, because
+ # Core protos do not always use the SearchAttributes message type.
+ suffix = field_name == 'search_attributes' ? ' unless @skip_search_attributes' : ''
+ " value.#{field_name}.values.each { |v| #{other_method_name}(v) }#{suffix}\n"
+ when :repeated_payload
+ " api_common_v1_payload_repeated(value.#{field_name}) unless value.#{field_name}.empty?\n"
+ when :repeated
+ " value.#{field_name}.each { |v| #{other_method_name}(v) }\n"
+ else
+ " #{other_method_name}(value.#{field_name}) if value.has_#{field_name}?\n"
+ end
+ end
+ "#{body} @on_exit&.call(value)\nend"
+ end.compact.sort
+
+ # Build the class
+ <<~TEXT
+ # frozen_string_literal: true
+
+ # Generated code. DO NOT EDIT!
+
+ require 'temporalio/api'
+ require 'temporalio/internal/bridge/api'
+
+ module Temporalio
+ module Api
+ # Visitor for payloads within the protobuf structure. This visitor is thread safe and can be used multiple
+ # times since it stores no mutable state.
+ #
+ # @note WARNING: This class is not considered stable for external use and may change as needed for internal
+ # reasons.
+ class PayloadVisitor
+ # Create a new visitor, calling the block on every {Common::V1::Payload} or
+ # {Google::Protobuf::RepeatedField} encountered.
+ #
+ # @param on_enter [Proc, nil] Proc called at the beginning of the processing for every protobuf value
+ # _except_ the ones calling the block.
+ # @param on_exit [Proc, nil] Proc called at the end of the processing for every protobuf value _except_ the
+ # ones calling the block.
+ # @param skip_search_attributes [Boolean] If true, payloads within search attributes do not call the block.
+ # @param traverse_any [Boolean] If true, when a [Google::Protobuf::Any] is encountered, it is unpacked,
+ # visited, then repacked.
+ # @yield [value] Block called with the visited payload value.
+ # @yieldparam [Common::V1::Payload, Google::Protobuf::RepeatedField] Payload or payload list.
+ def initialize(
+ on_enter: nil,
+ on_exit: nil,
+ skip_search_attributes: false,
+ traverse_any: false,
+ &block
+ )
+ raise ArgumentError, 'Block required' unless block_given?
+ @on_enter = on_enter
+ @on_exit = on_exit
+ @skip_search_attributes = skip_search_attributes
+ @traverse_any = traverse_any
+ @block = block
+ end
+
+ # Visit the given protobuf message.
+ #
+ # @param value [Google::Protobuf::MessageExts] Message to visit.
+ def run(value)
+ return unless value.is_a?(Google::Protobuf::MessageExts)
+ method_name = method_name_from_proto_name(value.class.descriptor.name)
+ send(method_name, value) if respond_to?(method_name, true)
+ nil
+ end
+
+ # @!visibility private
+ def _run_activation(value)
+ coresdk_workflow_activation_workflow_activation(value)
+ end
+
+ # @!visibility private
+ def _run_activation_completion(value)
+ coresdk_workflow_completion_workflow_activation_completion(value)
+ end
+
+ private
+
+ def method_name_from_proto_name(name)
+ name
+ .sub('temporal.api.', 'api_')
+ .gsub('.', '_')
+ .gsub(/([a-z])([A-Z])/, '\\1_\\2')
+ .downcase
+ end
+
+ def api_common_v1_payload(value)
+ @block.call(value)
+ end
+
+ def api_common_v1_payload_repeated(value)
+ @block.call(value)
+ end
+
+ def google_protobuf_any(value)
+ return unless @traverse_any
+ desc = Google::Protobuf::DescriptorPool.generated_pool.lookup(value.type_name)
+ unpacked = value.unpack(desc.msgclass)
+ run(unpacked)
+ value.pack(unpacked)
+ end
+
+ ### Generated method bodies below ###
+
+ #{method_bodies.join("\n\n").gsub("\n", "\n ")}
+ end
+ end
+ end
+ TEXT
+ end
+
+ private
+
+ def walk_desc(desc:, methods:)
+ case desc
+ when Google::Protobuf::ServiceDescriptor
+ walk_service_desc(desc:, methods:)
+ when Google::Protobuf::Descriptor
+ walk_message_desc(desc:, methods:)
+ when Google::Protobuf::EnumDescriptor
+ # Ignore
+ else
+ raise "Unrecognized descriptor: #{desc}"
+ end
+ end
+
+ def walk_service_desc(desc:, methods:)
+ desc.each do |method|
+ walk_desc(desc: method.input_type, methods:)
+ walk_desc(desc: method.output_type, methods:)
+ end
+ end
+
+ def walk_message_desc(desc:, methods:)
+ return if methods[desc.name]
+
+ methods[desc.name] = {
+ desc:,
+ fields: desc.map do |field|
+ next unless field.subtype && desc_has_payloads_or_any(field.subtype)
+
+ {
+ field:,
+ type: if field.subtype.options.map_entry
+ field.subtype.lookup('value').subtype
+ else
+ field.subtype
+ end,
+ form: if field.subtype.options.map_entry
+ :map
+ elsif field.label == :repeated && field.subtype.msgclass == Temporalio::Api::Common::V1::Payload
+ :repeated_payload
+ elsif field.label == :repeated
+ :repeated
+ else
+ :message
+ end
+ }
+ end.compact
+ }
+
+ desc.each do |field|
+ type = field.subtype
+ # If the subtype is a map entry, only walk the value of the subtype
+ type = type.lookup('value').subtype if type.is_a?(Google::Protobuf::Descriptor) && type.options.map_entry
+ walk_desc(desc: type, methods:) if type.is_a?(Google::Protobuf::Descriptor)
+ end
+ end
+
+ def desc_has_payloads_or_any(desc, parents: [])
+ return false if !desc.is_a?(Google::Protobuf::Descriptor) || parents.include?(desc)
+
+ desc.msgclass == Temporalio::Api::Common::V1::Payload ||
+ desc.msgclass == Google::Protobuf::Any ||
+ desc.any? do |field|
+ field.subtype && desc_has_payloads_or_any(field.subtype, parents: parents + [desc])
+ end
+ end
+
+ def method_name_from_desc(desc)
+ desc.name
+ .sub('temporal.api.', 'api_')
+ .gsub('.', '_')
+ .gsub(/([a-z])([A-Z])/, '\1_\2')
+ .downcase
+ end
+end
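The visitor method names emitted above come from a small, purely mechanical name mangling. As a standalone sketch mirroring `method_name_from_desc`/`method_name_from_proto_name` in the generator (illustrative only, not part of the patch):

```ruby
# Convert a fully qualified proto name into the snake_case Ruby method
# name used by the generated PayloadVisitor.
def method_name_from_proto_name(name)
  name
    .sub('temporal.api.', 'api_')     # shorten the common API prefix
    .gsub('.', '_')                   # package separators to underscores
    .gsub(/([a-z])([A-Z])/, '\1_\2')  # CamelCase boundaries to snake_case
    .downcase
end

puts method_name_from_proto_name('temporal.api.common.v1.Payload')
# api_common_v1_payload
puts method_name_from_proto_name('coresdk.workflow_activation.WorkflowActivation')
# coresdk_workflow_activation_workflow_activation
```

Because the conversion is deterministic, the generator and the generated visitor's `method_name_from_proto_name` stay in lockstep without any shared lookup table.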
diff --git a/temporalio/extra/proto_gen.rb b/temporalio/extra/proto_gen.rb
new file mode 100644
index 00000000..68ac2206
--- /dev/null
+++ b/temporalio/extra/proto_gen.rb
@@ -0,0 +1,338 @@
+# frozen_string_literal: true
+
+require 'fileutils'
+require 'google/protobuf'
+
+# Generator for the proto files.
+class ProtoGen
+ # Run the generator
+ def run
+ FileUtils.rm_rf('lib/temporalio/api')
+
+ generate_api_protos(Dir.glob('ext/sdk-core/sdk-core-protos/protos/api_upstream/**/*.proto').reject do |proto|
+ proto.include?('google')
+ end)
+ generate_api_protos(Dir.glob('ext/sdk-core/sdk-core-protos/protos/api_cloud_upstream/**/*.proto'))
+ generate_api_protos(Dir.glob('ext/sdk-core/sdk-core-protos/protos/testsrv_upstream/**/*.proto'))
+ generate_api_protos(Dir.glob('ext/additional_protos/**/*.proto'))
+ generate_import_helper_files
+ generate_service_files
+ generate_rust_client_file
+ generate_core_protos
+ generate_payload_visitor
+ end
+
+ private
+
+ def generate_api_protos(api_protos)
+ # Generate API to temp dir and move
+ FileUtils.rm_rf('tmp-proto')
+ FileUtils.mkdir_p('tmp-proto')
+ system(
+ 'bundle',
+ 'exec',
+ 'grpc_tools_ruby_protoc',
+ '--proto_path=ext/sdk-core/sdk-core-protos/protos/api_upstream',
+ '--proto_path=ext/sdk-core/sdk-core-protos/protos/api_cloud_upstream',
+ '--proto_path=ext/sdk-core/sdk-core-protos/protos/testsrv_upstream',
+ '--proto_path=ext/additional_protos',
+ '--ruby_out=tmp-proto',
+ *api_protos,
+ exception: true
+ )
+
+ # Walk all generated Ruby files and clean up content and filenames
+ Dir.glob('tmp-proto/temporal/api/**/*.rb') do |path|
+ # Fix up the import
+ content = File.read(path)
+ content.gsub!(%r{^require 'temporal/(.*)_pb'$}, "require 'temporalio/\\1'")
+ File.write(path, content)
+
+ # Remove _pb from the filename
+ FileUtils.mv(path, path.sub('_pb', ''))
+ end
+
+ # Move from temp dir and remove temp dir
+ FileUtils.cp_r('tmp-proto/temporal/api', 'lib/temporalio')
+ FileUtils.rm_rf('tmp-proto')
+ end
+
+ def generate_import_helper_files
+ # Write helper files for imports. We require request_response instead of the
+ # service because the service depends on Google API annotations we don't
+ # want to depend on.
+ File.write(
+ 'lib/temporalio/api/cloud/cloudservice.rb',
+ <<~TEXT
+ # frozen_string_literal: true
+
+ require 'temporalio/api/cloud/cloudservice/v1/request_response'
+ TEXT
+ )
+ File.write(
+ 'lib/temporalio/api/workflowservice.rb',
+ <<~TEXT
+ # frozen_string_literal: true
+
+ require 'temporalio/api/workflowservice/v1/request_response'
+ TEXT
+ )
+ File.write(
+ 'lib/temporalio/api/operatorservice.rb',
+ <<~TEXT
+ # frozen_string_literal: true
+
+ require 'temporalio/api/operatorservice/v1/request_response'
+ TEXT
+ )
+ File.write(
+ 'lib/temporalio/api.rb',
+ <<~TEXT
+ # frozen_string_literal: true
+
+ require 'temporalio/api/cloud/cloudservice'
+ require 'temporalio/api/common/v1/grpc_status'
+ require 'temporalio/api/errordetails/v1/message'
+ require 'temporalio/api/export/v1/message'
+ require 'temporalio/api/operatorservice'
+ require 'temporalio/api/workflowservice'
+
+ module Temporalio
+ # Raw protocol buffer models.
+ module Api
+ end
+ end
+ TEXT
+ )
+ end
+
+ def generate_service_files
+ require './lib/temporalio/api/workflowservice/v1/service'
+ generate_service_file(
+ qualified_service_name: 'temporal.api.workflowservice.v1.WorkflowService',
+ file_name: 'workflow_service',
+ class_name: 'WorkflowService',
+ service_enum: 'SERVICE_WORKFLOW'
+ )
+ require './lib/temporalio/api/operatorservice/v1/service'
+ generate_service_file(
+ qualified_service_name: 'temporal.api.operatorservice.v1.OperatorService',
+ file_name: 'operator_service',
+ class_name: 'OperatorService',
+ service_enum: 'SERVICE_OPERATOR'
+ )
+ require './lib/temporalio/api/cloud/cloudservice/v1/service'
+ generate_service_file(
+ qualified_service_name: 'temporal.api.cloud.cloudservice.v1.CloudService',
+ file_name: 'cloud_service',
+ class_name: 'CloudService',
+ service_enum: 'SERVICE_CLOUD'
+ )
+ require './lib/temporalio/api/testservice/v1/service'
+ generate_service_file(
+ qualified_service_name: 'temporal.api.testservice.v1.TestService',
+ file_name: 'test_service',
+ class_name: 'TestService',
+ service_enum: 'SERVICE_TEST'
+ )
+ end
+
+ def generate_service_file(qualified_service_name:, file_name:, class_name:, service_enum:)
+ # Do service lookup
+ desc = Google::Protobuf::DescriptorPool.generated_pool.lookup(qualified_service_name)
+ raise 'Failed finding service descriptor' unless desc
+
+ # Open file to generate Ruby code
+ File.open("lib/temporalio/client/connection/#{file_name}.rb", 'w') do |file|
+ file.puts <<~TEXT
+ # frozen_string_literal: true
+
+ # Generated code. DO NOT EDIT!
+
+ require 'temporalio/api'
+ require 'temporalio/client/connection/service'
+ require 'temporalio/internal/bridge/client'
+
+ module Temporalio
+ class Client
+ class Connection
+ # #{class_name} API.
+ class #{class_name} < Service
+ # @!visibility private
+ def initialize(connection)
+ super(connection, Internal::Bridge::Client::#{service_enum})
+ end
+ TEXT
+
+ desc.each do |method|
+ # Camel case to snake case
+ rpc = method.name.gsub(/([A-Z])/, '_\1').downcase.delete_prefix('_')
+ file.puts <<-TEXT
+
+ # Calls #{class_name}.#{method.name} API call.
+ #
+ # @param request [#{method.input_type.msgclass}] API request.
+ # @param rpc_options [RPCOptions, nil] Advanced RPC options.
+ # @return [#{method.output_type.msgclass}] API response.
+ def #{rpc}(request, rpc_options: nil)
+ invoke_rpc(
+ rpc: '#{rpc}',
+ request_class: #{method.input_type.msgclass},
+ response_class: #{method.output_type.msgclass},
+ request:,
+ rpc_options:
+ )
+ end
+ TEXT
+ end
+
+ file.puts <<~TEXT
+ end
+ end
+ end
+ end
+ TEXT
+ end
+
+ # Open file to generate RBS code
+ # TODO(cretz): Improve this when RBS proto is supported
+ File.open("sig/temporalio/client/connection/#{file_name}.rbs", 'w') do |file|
+ file.puts <<~TEXT
+ # Generated code. DO NOT EDIT!
+
+ module Temporalio
+ class Client
+ class Connection
+ class #{class_name} < Service
+ def initialize: (Connection) -> void
+ TEXT
+
+ desc.each do |method|
+ # Camel case to snake case
+ rpc = method.name.gsub(/([A-Z])/, '_\1').downcase.delete_prefix('_')
+ file.puts <<-TEXT
+ def #{rpc}: (
+ untyped request,
+ ?rpc_options: RPCOptions?
+ ) -> untyped
+ TEXT
+ end
+
+ file.puts <<~TEXT
+ end
+ end
+ end
+ end
+ TEXT
+ end
+ end
+
+ def generate_rust_client_file
+ File.open('ext/src/client_rpc_generated.rs', 'w') do |file|
+ file.puts <<~TEXT
+ // Generated code. DO NOT EDIT!
+
+ use magnus::{Error, Ruby};
+ use temporal_client::{CloudService, OperatorService, TestService, WorkflowService};
+
+ use super::{error, rpc_call};
+ use crate::{
+ client::{Client, RpcCall, SERVICE_CLOUD, SERVICE_OPERATOR, SERVICE_TEST, SERVICE_WORKFLOW},
+ util::AsyncCallback,
+ };
+
+ impl Client {
+ pub fn invoke_rpc(&self, service: u8, callback: AsyncCallback, call: RpcCall) -> Result<(), Error> {
+ match service {
+ TEXT
+ generate_rust_match_arm(
+ file:,
+ qualified_service_name: 'temporal.api.workflowservice.v1.WorkflowService',
+ service_enum: 'SERVICE_WORKFLOW',
+ trait: 'WorkflowService'
+ )
+ generate_rust_match_arm(
+ file:,
+ qualified_service_name: 'temporal.api.operatorservice.v1.OperatorService',
+ service_enum: 'SERVICE_OPERATOR',
+ trait: 'OperatorService'
+ )
+ generate_rust_match_arm(
+ file:,
+ qualified_service_name: 'temporal.api.cloud.cloudservice.v1.CloudService',
+ service_enum: 'SERVICE_CLOUD',
+ trait: 'CloudService'
+ )
+ generate_rust_match_arm(
+ file:,
+ qualified_service_name: 'temporal.api.testservice.v1.TestService',
+ service_enum: 'SERVICE_TEST',
+ trait: 'TestService'
+ )
+ file.puts <<~TEXT
+ _ => Err(error!("Unknown service")),
+ }
+ }
+ }
+ TEXT
+ end
+ system('cargo', 'fmt', '--', 'ext/src/client_rpc_generated.rs', exception: true)
+ end
+
+ def generate_rust_match_arm(file:, qualified_service_name:, service_enum:, trait:)
+ # Do service lookup
+ desc = Google::Protobuf::DescriptorPool.generated_pool.lookup(qualified_service_name)
+ file.puts <<~TEXT
+ #{service_enum} => match call.rpc.as_str() {
+ TEXT
+
+ desc.to_a.sort_by(&:name).each do |method|
+ # Camel case to snake case
+ rpc = method.name.gsub(/([A-Z])/, '_\1').downcase.delete_prefix('_')
+ file.puts <<~TEXT
+ "#{rpc}" => rpc_call!(self, callback, call, #{trait}, #{rpc}),
+ TEXT
+ end
+ file.puts <<~TEXT
+ _ => Err(error!("Unknown RPC call {}", call.rpc)),
+ },
+ TEXT
+ end
+
+ def generate_core_protos
+ FileUtils.rm_rf('lib/temporalio/internal/bridge/api')
+ # Generate API to temp dir
+ FileUtils.rm_rf('tmp-proto')
+ FileUtils.mkdir_p('tmp-proto')
+ system(
+ 'bundle',
+ 'exec',
+ 'grpc_tools_ruby_protoc',
+ '--proto_path=ext/sdk-core/sdk-core-protos/protos/api_upstream',
+ '--proto_path=ext/sdk-core/sdk-core-protos/protos/local',
+ '--ruby_out=tmp-proto',
+ *Dir.glob('ext/sdk-core/sdk-core-protos/protos/local/**/*.proto'),
+ exception: true
+ )
+ # Walk all generated Ruby files and clean up content and filenames
+ Dir.glob('tmp-proto/temporal/sdk/**/*.rb') do |path|
+ # Fix up the imports
+ content = File.read(path)
+ content.gsub!(%r{^require 'temporal/(.*)_pb'$}, "require 'temporalio/\\1'")
+ content.gsub!(%r{^require 'temporalio/sdk/core/(.*)'$}, "require 'temporalio/internal/bridge/api/\\1'")
+ File.write(path, content)
+
+ # Remove _pb from the filename
+ FileUtils.mv(path, path.sub('_pb', ''))
+ end
+ # Move from temp dir and remove temp dir
+ FileUtils.mkdir_p('lib/temporalio/internal/bridge/api')
+ FileUtils.cp_r(Dir.glob('tmp-proto/temporal/sdk/core/*'), 'lib/temporalio/internal/bridge/api')
+ FileUtils.rm_rf('tmp-proto')
+ end
+
+ def generate_payload_visitor
+ require_relative 'payload_visitor_gen'
+ File.write('lib/temporalio/api/payload_visitor.rb', PayloadVisitorGen.new.gen_file_code)
+ end
+end
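The Ruby, RBS, and Rust generators above all derive RPC method names from the proto service's CamelCase names the same way. A minimal standalone sketch of that conversion (the `rpc_name` helper name is illustrative, not part of the patch):

```ruby
# Convert a CamelCase proto RPC name into the snake_case name used for
# the generated Ruby methods and Rust match arms.
def rpc_name(method_name)
  method_name
    .gsub(/([A-Z])/, '_\1') # prefix every capital with an underscore
    .downcase
    .delete_prefix('_')     # drop the leading underscore from the first capital
end

puts rpc_name('StartWorkflowExecution') # start_workflow_execution
puts rpc_name('GetSystemInfo')          # get_system_info
```

Keeping the conversion identical across all three outputs is what lets the Ruby `invoke_rpc(rpc: '...')` string match the generated Rust `match call.rpc.as_str()` arms.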
diff --git a/temporalio/lib/temporalio.rb b/temporalio/lib/temporalio.rb
index 14e7b6c8..c168d3d7 100644
--- a/temporalio/lib/temporalio.rb
+++ b/temporalio/lib/temporalio.rb
@@ -4,4 +4,8 @@
# Temporal Ruby SDK. See the README at https://github.com/temporalio/sdk-ruby.
module Temporalio
+ # @!visibility private
+ def self._root_file_path
+ __FILE__
+ end
end
diff --git a/temporalio/lib/temporalio/activity.rb b/temporalio/lib/temporalio/activity.rb
index 8abeb35d..57d55d86 100644
--- a/temporalio/lib/temporalio/activity.rb
+++ b/temporalio/lib/temporalio/activity.rb
@@ -6,64 +6,7 @@
require 'temporalio/activity/info'
module Temporalio
- # Base class for all activities.
- #
- # Activities can be given to a worker as instances of this class, which will call execute on the same instance for
- # each execution, or given to the worker as the class itself which instantiates the activity for each execution.
- #
- # All activities must implement {execute}. Inside execute, {Activity::Context.current} can be used to access the
- # current context to get information, issue heartbeats, etc.
- #
- # By default, the activity is named as its unqualified class name. This can be customized with {activity_name}.
- #
- # By default, the activity uses the `:default` executor which is usually the thread-pool based executor. This can be
- # customized with {activity_executor}.
- #
- # By default, upon cancellation {::Thread.raise} or {::Fiber.raise} is called with a {Error::CanceledError}. This can
- # be disabled by passing `false` to {activity_cancel_raise}.
- #
- # See documentation for more detail on activities.
- class Activity
- # Override the activity name which is defaulted to the unqualified class name.
- #
- # @param name [String, Symbol] Name to use.
- def self.activity_name(name)
- raise ArgumentError, 'Activity name must be a symbol or string' if !name.is_a?(Symbol) && !name.is_a?(String)
-
- @activity_name = name.to_s
- end
-
- # Override the activity executor which is defaulted to `:default`.
- #
- # @param executor_name [Symbol] Executor to use.
- def self.activity_executor(executor_name)
- raise ArgumentError, 'Executor name must be a symbol' unless executor_name.is_a?(Symbol)
-
- @activity_executor = executor_name
- end
-
- # Override whether the activity uses Thread/Fiber raise for cancellation which is defaulted to true.
- #
- # @param cancel_raise [Boolean] Whether to raise.
- def self.activity_cancel_raise(cancel_raise)
- raise ArgumentError, 'Must be a boolean' unless cancel_raise.is_a?(TrueClass) || cancel_raise.is_a?(FalseClass)
-
- @activity_cancel_raise = cancel_raise
- end
-
- # @!visibility private
- def self._activity_definition_details
- {
- activity_name: @activity_name || name.to_s.split('::').last,
- activity_executor: @activity_executor || :default,
- activity_cancel_raise: @activity_cancel_raise.nil? ? true : @activity_cancel_raise
- }
- end
-
- # Implementation of the activity. The arguments should be positional and this should return the value on success or
- # raise an error on failure.
- def execute(*args)
- raise NotImplementedError, 'Activity did not implement "execute"'
- end
+ # All activity related classes.
+ module Activity
end
end
diff --git a/temporalio/lib/temporalio/activity/complete_async_error.rb b/temporalio/lib/temporalio/activity/complete_async_error.rb
index dc0267e9..1b40584c 100644
--- a/temporalio/lib/temporalio/activity/complete_async_error.rb
+++ b/temporalio/lib/temporalio/activity/complete_async_error.rb
@@ -3,7 +3,7 @@
require 'temporalio/error'
module Temporalio
- class Activity
+ module Activity
# Error raised inside an activity to mark that the activity will be completed asynchronously.
class CompleteAsyncError < Error
end
diff --git a/temporalio/lib/temporalio/activity/context.rb b/temporalio/lib/temporalio/activity/context.rb
index 3475465f..9d51769a 100644
--- a/temporalio/lib/temporalio/activity/context.rb
+++ b/temporalio/lib/temporalio/activity/context.rb
@@ -3,7 +3,7 @@
require 'temporalio/error'
module Temporalio
- class Activity
+ module Activity
# Context accessible only within an activity. Use {current} to get the current context. Contexts are fiber or thread
# local so may not be available in a newly started thread from an activity and may have to be propagated manually.
class Context
diff --git a/temporalio/lib/temporalio/activity/definition.rb b/temporalio/lib/temporalio/activity/definition.rb
index abccc755..c107ca87 100644
--- a/temporalio/lib/temporalio/activity/definition.rb
+++ b/temporalio/lib/temporalio/activity/definition.rb
@@ -1,76 +1,140 @@
# frozen_string_literal: true
module Temporalio
- class Activity
- # Definition of an activity. Activities are usually classes/instances that extend {Activity}, but definitions can
- # also be manually created with a proc/block.
+ module Activity
+ # Base class for all activities.
+ #
+ # Activities can be given to a worker as instances of this class, which will call execute on the same instance for
+ # each execution, or given to the worker as the class itself which instantiates the activity for each execution.
+ #
+ # All activities must implement {execute}. Inside execute, {Activity::Context.current} can be used to access the
+ # current context to get information, issue heartbeats, etc.
+ #
+ # By default, the activity is named as its unqualified class name. This can be customized with {activity_name}.
+ #
+ # By default, the activity uses the `:default` executor which is usually the thread-pool based executor. This can be
+ # customized with {activity_executor}.
+ #
+ # By default, upon cancellation {::Thread.raise} or {::Fiber.raise} is called with a {Error::CanceledError}. This
+ # can be disabled by passing `false` to {activity_cancel_raise}.
+ #
+ # See documentation for more detail on activities.
class Definition
- # @return [String, Symbol] Name of the activity.
- attr_reader :name
-
- # @return [Proc] Proc for the activity.
- attr_reader :proc
-
- # @return [Symbol] Name of the executor. Default is `:default`.
- attr_reader :executor
-
- # @return [Boolean] Whether to raise in thread/fiber on cancellation. Default is `true`.
- attr_reader :cancel_raise
-
- # Obtain a definition representing the given activity, which can be a class, instance, or definition.
- #
- # @param activity [Activity, Class, Definition] Activity to get definition for.
- # @return Definition Obtained definition.
- def self.from_activity(activity)
- # Class means create each time, instance means just call, definition
- # does nothing special
- case activity
- when Class
- raise ArgumentError, "Class '#{activity}' does not extend Activity" unless activity < Activity
-
- details = activity._activity_definition_details
- new(
- name: details[:activity_name],
- executor: details[:activity_executor],
- cancel_raise: details[:activity_cancel_raise],
- # Instantiate and call
- proc: proc { |*args| activity.new.execute(*args) }
- )
- when Activity
- details = activity.class._activity_definition_details
- new(
- name: details[:activity_name],
- executor: details[:activity_executor],
- cancel_raise: details[:activity_cancel_raise],
- # Just call
- proc: proc { |*args| activity.execute(*args) }
- )
- when Activity::Definition
- activity
- else
- raise ArgumentError, "#{activity} is not an activity class, instance, or definition"
+ class << self
+ protected
+
+ # Override the activity name which is defaulted to the unqualified class name.
+ #
+ # @param name [String, Symbol] Name to use.
+ def activity_name(name)
+ if !name.is_a?(Symbol) && !name.is_a?(String)
+ raise ArgumentError,
+ 'Activity name must be a symbol or string'
+ end
+
+ @activity_name = name.to_s
+ end
+
+ # Override the activity executor which is defaulted to `:default`.
+ #
+ # @param executor_name [Symbol] Executor to use.
+ def activity_executor(executor_name)
+ raise ArgumentError, 'Executor name must be a symbol' unless executor_name.is_a?(Symbol)
+
+ @activity_executor = executor_name
end
+
+ # Override whether the activity uses Thread/Fiber raise for cancellation which is defaulted to true.
+ #
+ # @param cancel_raise [Boolean] Whether to raise.
+ def activity_cancel_raise(cancel_raise)
+ unless cancel_raise.is_a?(TrueClass) || cancel_raise.is_a?(FalseClass)
+ raise ArgumentError,
+ 'Must be a boolean'
+ end
+
+ @activity_cancel_raise = cancel_raise
+ end
+ end
+
+ # @!visibility private
+ def self._activity_definition_details
+ {
+ activity_name: @activity_name || name.to_s.split('::').last,
+ activity_executor: @activity_executor || :default,
+ activity_cancel_raise: @activity_cancel_raise.nil? ? true : @activity_cancel_raise
+ }
end
- # Manually create activity definition. Most users will use an instance/class of {Activity}.
- #
- # @param name [String, Symbol] Name of the activity.
- # @param proc [Proc, nil] Proc for the activity, or can give block.
- # @param executor [Symbol] Name of the executor.
- # @param cancel_raise [Boolean] Whether to raise in thread/fiber on cancellation.
- # @yield Use this block as the activity. Cannot be present with `proc`.
- def initialize(name:, proc: nil, executor: :default, cancel_raise: true, &block)
- @name = name
- if proc.nil?
- raise ArgumentError, 'Must give proc or block' unless block_given?
-
- proc = block
- elsif block_given?
- raise ArgumentError, 'Cannot give proc and block'
+ # Implementation of the activity. The arguments should be positional and this should return the value on success
+ # or raise an error on failure.
+ def execute(*args)
+ raise NotImplementedError, 'Activity did not implement "execute"'
+ end
+
+ # Definition info of an activity. Activities are usually classes/instances that extend {Definition}, but
+ # definitions can also be manually created with a block via {initialize} here.
+ class Info
+ # @return [String, Symbol] Name of the activity.
+ attr_reader :name
+
+ # @return [Proc] Proc for the activity.
+ attr_reader :proc
+
+ # @return [Symbol] Name of the executor. Default is `:default`.
+ attr_reader :executor
+
+ # @return [Boolean] Whether to raise in thread/fiber on cancellation. Default is `true`.
+ attr_reader :cancel_raise
+
+ # Obtain definition info representing the given activity, which can be a class, instance, or definition info.
+ #
+ # @param activity [Definition, Class, Info] Activity to get info for.
+ # @return Info Obtained definition info.
+ def self.from_activity(activity)
+ # Class means create each time, instance means just call, definition
+ # info does nothing special
+ case activity
+ when Class
+ unless activity < Definition
+ raise ArgumentError,
+ "Class '#{activity}' does not extend Temporalio::Activity::Definition"
+ end
+
+ details = activity._activity_definition_details
+ new(
+ name: details[:activity_name],
+ executor: details[:activity_executor],
+ cancel_raise: details[:activity_cancel_raise]
+ ) { |*args| activity.new.execute(*args) } # Instantiate and call
+ when Definition
+ details = activity.class._activity_definition_details
+ new(
+ name: details[:activity_name],
+ executor: details[:activity_executor],
+ cancel_raise: details[:activity_cancel_raise]
+ ) { |*args| activity.execute(*args) } # Just call
+ when Info
+ activity
+ else
+ raise ArgumentError, "#{activity} is not an activity class, instance, or definition info"
+ end
+ end
+
+ # Manually create activity definition info. Most users will use an instance/class of {Definition}.
+ #
+ # @param name [String, Symbol] Name of the activity.
+ # @param executor [Symbol] Name of the executor.
+ # @param cancel_raise [Boolean] Whether to raise in thread/fiber on cancellation.
+ # @yield Use this block as the activity.
+ def initialize(name:, executor: :default, cancel_raise: true, &block)
+ @name = name
+ raise ArgumentError, 'Must give block' unless block_given?
+
+ @proc = block
+ @executor = executor
+ @cancel_raise = cancel_raise
end
- @proc = proc
- @executor = executor
- @cancel_raise = cancel_raise
end
end
end
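The class-level attribute pattern used by `Activity::Definition` above (an explicit `activity_name` overriding the unqualified-class-name default, resolved per subclass) can be sketched without the gem. `MiniDefinition` and `_details` are illustrative names, and the real `activity_name` is a protected class method:

```ruby
# Minimal sketch of Definition's naming default: an explicitly set
# activity_name wins, otherwise the unqualified class name is used.
class MiniDefinition
  class << self
    def activity_name(name)
      @activity_name = name.to_s
    end

    def _details
      # `name` here is Module#name, e.g. "MyApp::Greet"
      { activity_name: @activity_name || name.to_s.split('::').last }
    end
  end
end

module MyApp
  class Greet < MiniDefinition; end

  class Custom < MiniDefinition
    activity_name 'renamed'
  end
end

puts MyApp::Greet._details[:activity_name]  # Greet
puts MyApp::Custom._details[:activity_name] # renamed
```

Because `@activity_name` is a class-level instance variable, each subclass carries its own override without affecting siblings.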
diff --git a/temporalio/lib/temporalio/activity/info.rb b/temporalio/lib/temporalio/activity/info.rb
index 64cb6475..808063c4 100644
--- a/temporalio/lib/temporalio/activity/info.rb
+++ b/temporalio/lib/temporalio/activity/info.rb
@@ -1,7 +1,7 @@
# frozen_string_literal: true
module Temporalio
- class Activity
+ module Activity
# Information about an activity.
#
# @!attribute activity_id
diff --git a/temporalio/lib/temporalio/api.rb b/temporalio/lib/temporalio/api.rb
index cf0e6742..621b1a18 100644
--- a/temporalio/lib/temporalio/api.rb
+++ b/temporalio/lib/temporalio/api.rb
@@ -3,6 +3,7 @@
require 'temporalio/api/cloud/cloudservice'
require 'temporalio/api/common/v1/grpc_status'
require 'temporalio/api/errordetails/v1/message'
+require 'temporalio/api/export/v1/message'
require 'temporalio/api/operatorservice'
require 'temporalio/api/workflowservice'
diff --git a/temporalio/lib/temporalio/api/payload_visitor.rb b/temporalio/lib/temporalio/api/payload_visitor.rb
new file mode 100644
index 00000000..a7605ba3
--- /dev/null
+++ b/temporalio/lib/temporalio/api/payload_visitor.rb
@@ -0,0 +1,1440 @@
+# frozen_string_literal: true
+
+# Generated code. DO NOT EDIT!
+
+require 'temporalio/api'
+require 'temporalio/internal/bridge/api'
+
+module Temporalio
+ module Api
+ # Visitor for payloads within the protobuf structure. This visitor is thread safe and can be used multiple
+ # times since it stores no mutable state.
+ #
+ # @note WARNING: This class is not considered stable for external use and may change as needed for internal
+ # reasons.
+ class PayloadVisitor
+ # Create a new visitor, calling the block on every {Common::V1::Payload} or
+ # {Google::Protobuf::RepeatedField} encountered.
+ #
+ # @param on_enter [Proc, nil] Proc called at the beginning of the processing for every protobuf value
+ # _except_ the ones calling the block.
+ # @param on_exit [Proc, nil] Proc called at the end of the processing for every protobuf value _except_ the
+ # ones calling the block.
+ # @param skip_search_attributes [Boolean] If true, payloads within search attributes do not call the block.
+ # @param traverse_any [Boolean] If true, when a [Google::Protobuf::Any] is encountered, it is unpacked,
+ # visited, then repacked.
+ # @yield [value] Block called with the visited payload value.
+ # @yieldparam [Common::V1::Payload, Google::Protobuf::RepeatedField] Payload or payload list.
+ def initialize(
+ on_enter: nil,
+ on_exit: nil,
+ skip_search_attributes: false,
+ traverse_any: false,
+ &block
+ )
+ raise ArgumentError, 'Block required' unless block_given?
+ @on_enter = on_enter
+ @on_exit = on_exit
+ @skip_search_attributes = skip_search_attributes
+ @traverse_any = traverse_any
+ @block = block
+ end
+
+ # Visit the given protobuf message.
+ #
+ # @param value [Google::Protobuf::MessageExts] Message to visit.
+ def run(value)
+ return unless value.is_a?(Google::Protobuf::MessageExts)
+ method_name = method_name_from_proto_name(value.class.descriptor.name)
+ send(method_name, value) if respond_to?(method_name, true)
+ nil
+ end
+
+ # @!visibility private
+ def _run_activation(value)
+ coresdk_workflow_activation_workflow_activation(value)
+ end
+
+ # @!visibility private
+ def _run_activation_completion(value)
+ coresdk_workflow_completion_workflow_activation_completion(value)
+ end
+
+ private
+
+ def method_name_from_proto_name(name)
+ name
+ .sub('temporal.api.', 'api_')
+ .gsub('.', '_')
+ .gsub(/([a-z])([A-Z])/, '\1_\2')
+ .downcase
+ end
+
+ def api_common_v1_payload(value)
+ @block.call(value)
+ end
+
+ def api_common_v1_payload_repeated(value)
+ @block.call(value)
+ end
+
+ def google_protobuf_any(value)
+ return unless @traverse_any
+ desc = Google::Protobuf::DescriptorPool.generated_pool.lookup(value.type_name)
+ unpacked = value.unpack(desc.msgclass)
+ run(unpacked)
+ value.pack(unpacked)
+ end
+
+ ### Generated method bodies below ###
+
+ def api_batch_v1_batch_operation_signal(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_header(value.header) if value.has_header?
+ @on_exit&.call(value)
+ end
+
+ def api_batch_v1_batch_operation_termination(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_add_namespace_region_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_create_api_key_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_create_namespace_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_create_service_account_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_create_user_group_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_create_user_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_delete_api_key_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_delete_namespace_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_delete_service_account_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_delete_user_group_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_delete_user_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_failover_namespace_region_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_get_async_operation_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_rename_custom_search_attribute_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_set_user_group_namespace_access_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_set_user_namespace_access_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_update_api_key_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_update_namespace_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_update_service_account_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_update_user_group_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_cloudservice_v1_update_user_response(value)
+ @on_enter&.call(value)
+ api_cloud_operation_v1_async_operation(value.async_operation) if value.has_async_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_cloud_operation_v1_async_operation(value)
+ @on_enter&.call(value)
+ google_protobuf_any(value.operation_input) if value.has_operation_input?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_cancel_workflow_execution_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_command(value)
+ @on_enter&.call(value)
+ api_sdk_v1_user_metadata(value.user_metadata) if value.has_user_metadata?
+ api_command_v1_schedule_activity_task_command_attributes(value.schedule_activity_task_command_attributes) if value.has_schedule_activity_task_command_attributes?
+ api_command_v1_complete_workflow_execution_command_attributes(value.complete_workflow_execution_command_attributes) if value.has_complete_workflow_execution_command_attributes?
+ api_command_v1_fail_workflow_execution_command_attributes(value.fail_workflow_execution_command_attributes) if value.has_fail_workflow_execution_command_attributes?
+ api_command_v1_cancel_workflow_execution_command_attributes(value.cancel_workflow_execution_command_attributes) if value.has_cancel_workflow_execution_command_attributes?
+ api_command_v1_record_marker_command_attributes(value.record_marker_command_attributes) if value.has_record_marker_command_attributes?
+ api_command_v1_continue_as_new_workflow_execution_command_attributes(value.continue_as_new_workflow_execution_command_attributes) if value.has_continue_as_new_workflow_execution_command_attributes?
+ api_command_v1_start_child_workflow_execution_command_attributes(value.start_child_workflow_execution_command_attributes) if value.has_start_child_workflow_execution_command_attributes?
+ api_command_v1_signal_external_workflow_execution_command_attributes(value.signal_external_workflow_execution_command_attributes) if value.has_signal_external_workflow_execution_command_attributes?
+ api_command_v1_upsert_workflow_search_attributes_command_attributes(value.upsert_workflow_search_attributes_command_attributes) if value.has_upsert_workflow_search_attributes_command_attributes?
+ api_command_v1_modify_workflow_properties_command_attributes(value.modify_workflow_properties_command_attributes) if value.has_modify_workflow_properties_command_attributes?
+ api_command_v1_schedule_nexus_operation_command_attributes(value.schedule_nexus_operation_command_attributes) if value.has_schedule_nexus_operation_command_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_complete_workflow_execution_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_continue_as_new_workflow_execution_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ api_common_v1_payloads(value.last_completion_result) if value.has_last_completion_result?
+ api_common_v1_header(value.header) if value.has_header?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_fail_workflow_execution_command_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_modify_workflow_properties_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_memo(value.upserted_memo) if value.has_upserted_memo?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_record_marker_command_attributes(value)
+ @on_enter&.call(value)
+ value.details.values.each { |v| api_common_v1_payloads(v) }
+ api_common_v1_header(value.header) if value.has_header?
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_schedule_activity_task_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_header(value.header) if value.has_header?
+ api_common_v1_payloads(value.input) if value.has_input?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_schedule_nexus_operation_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.input) if value.has_input?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_signal_external_workflow_execution_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_header(value.header) if value.has_header?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_start_child_workflow_execution_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_header(value.header) if value.has_header?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_command_v1_upsert_workflow_search_attributes_command_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_common_v1_header(value)
+ @on_enter&.call(value)
+ value.fields.values.each { |v| api_common_v1_payload(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_common_v1_memo(value)
+ @on_enter&.call(value)
+ value.fields.values.each { |v| api_common_v1_payload(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_common_v1_payloads(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.payloads) unless value.payloads.empty?
+ @on_exit&.call(value)
+ end
+
+ def api_common_v1_search_attributes(value)
+ return if @skip_search_attributes
+ @on_enter&.call(value)
+ value.indexed_fields.values.each { |v| api_common_v1_payload(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_export_v1_workflow_execution(value)
+ @on_enter&.call(value)
+ api_history_v1_history(value.history) if value.has_history?
+ @on_exit&.call(value)
+ end
+
+ def api_export_v1_workflow_executions(value)
+ @on_enter&.call(value)
+ value.items.each { |v| api_export_v1_workflow_execution(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_failure_v1_application_failure_info(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_failure_v1_canceled_failure_info(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_failure_v1_failure(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.encoded_attributes) if value.has_encoded_attributes?
+ api_failure_v1_failure(value.cause) if value.has_cause?
+ api_failure_v1_application_failure_info(value.application_failure_info) if value.has_application_failure_info?
+ api_failure_v1_timeout_failure_info(value.timeout_failure_info) if value.has_timeout_failure_info?
+ api_failure_v1_canceled_failure_info(value.canceled_failure_info) if value.has_canceled_failure_info?
+ api_failure_v1_reset_workflow_failure_info(value.reset_workflow_failure_info) if value.has_reset_workflow_failure_info?
+ @on_exit&.call(value)
+ end
+
+ def api_failure_v1_reset_workflow_failure_info(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.last_heartbeat_details) if value.has_last_heartbeat_details?
+ @on_exit&.call(value)
+ end
+
+ def api_failure_v1_timeout_failure_info(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.last_heartbeat_details) if value.has_last_heartbeat_details?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_activity_task_canceled_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_activity_task_completed_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_activity_task_failed_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_activity_task_scheduled_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_header(value.header) if value.has_header?
+ api_common_v1_payloads(value.input) if value.has_input?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_activity_task_started_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.last_failure) if value.has_last_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_activity_task_timed_out_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_child_workflow_execution_canceled_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_child_workflow_execution_completed_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_child_workflow_execution_failed_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_child_workflow_execution_started_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_header(value.header) if value.has_header?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_history(value)
+ @on_enter&.call(value)
+ value.events.each { |v| api_history_v1_history_event(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_history_event(value)
+ @on_enter&.call(value)
+ api_sdk_v1_user_metadata(value.user_metadata) if value.has_user_metadata?
+ api_history_v1_workflow_execution_started_event_attributes(value.workflow_execution_started_event_attributes) if value.has_workflow_execution_started_event_attributes?
+ api_history_v1_workflow_execution_completed_event_attributes(value.workflow_execution_completed_event_attributes) if value.has_workflow_execution_completed_event_attributes?
+ api_history_v1_workflow_execution_failed_event_attributes(value.workflow_execution_failed_event_attributes) if value.has_workflow_execution_failed_event_attributes?
+ api_history_v1_workflow_task_failed_event_attributes(value.workflow_task_failed_event_attributes) if value.has_workflow_task_failed_event_attributes?
+ api_history_v1_activity_task_scheduled_event_attributes(value.activity_task_scheduled_event_attributes) if value.has_activity_task_scheduled_event_attributes?
+ api_history_v1_activity_task_started_event_attributes(value.activity_task_started_event_attributes) if value.has_activity_task_started_event_attributes?
+ api_history_v1_activity_task_completed_event_attributes(value.activity_task_completed_event_attributes) if value.has_activity_task_completed_event_attributes?
+ api_history_v1_activity_task_failed_event_attributes(value.activity_task_failed_event_attributes) if value.has_activity_task_failed_event_attributes?
+ api_history_v1_activity_task_timed_out_event_attributes(value.activity_task_timed_out_event_attributes) if value.has_activity_task_timed_out_event_attributes?
+ api_history_v1_activity_task_canceled_event_attributes(value.activity_task_canceled_event_attributes) if value.has_activity_task_canceled_event_attributes?
+ api_history_v1_marker_recorded_event_attributes(value.marker_recorded_event_attributes) if value.has_marker_recorded_event_attributes?
+ api_history_v1_workflow_execution_signaled_event_attributes(value.workflow_execution_signaled_event_attributes) if value.has_workflow_execution_signaled_event_attributes?
+ api_history_v1_workflow_execution_terminated_event_attributes(value.workflow_execution_terminated_event_attributes) if value.has_workflow_execution_terminated_event_attributes?
+ api_history_v1_workflow_execution_canceled_event_attributes(value.workflow_execution_canceled_event_attributes) if value.has_workflow_execution_canceled_event_attributes?
+ api_history_v1_workflow_execution_continued_as_new_event_attributes(value.workflow_execution_continued_as_new_event_attributes) if value.has_workflow_execution_continued_as_new_event_attributes?
+ api_history_v1_start_child_workflow_execution_initiated_event_attributes(value.start_child_workflow_execution_initiated_event_attributes) if value.has_start_child_workflow_execution_initiated_event_attributes?
+ api_history_v1_child_workflow_execution_started_event_attributes(value.child_workflow_execution_started_event_attributes) if value.has_child_workflow_execution_started_event_attributes?
+ api_history_v1_child_workflow_execution_completed_event_attributes(value.child_workflow_execution_completed_event_attributes) if value.has_child_workflow_execution_completed_event_attributes?
+ api_history_v1_child_workflow_execution_failed_event_attributes(value.child_workflow_execution_failed_event_attributes) if value.has_child_workflow_execution_failed_event_attributes?
+ api_history_v1_child_workflow_execution_canceled_event_attributes(value.child_workflow_execution_canceled_event_attributes) if value.has_child_workflow_execution_canceled_event_attributes?
+ api_history_v1_signal_external_workflow_execution_initiated_event_attributes(value.signal_external_workflow_execution_initiated_event_attributes) if value.has_signal_external_workflow_execution_initiated_event_attributes?
+ api_history_v1_upsert_workflow_search_attributes_event_attributes(value.upsert_workflow_search_attributes_event_attributes) if value.has_upsert_workflow_search_attributes_event_attributes?
+ api_history_v1_workflow_execution_update_accepted_event_attributes(value.workflow_execution_update_accepted_event_attributes) if value.has_workflow_execution_update_accepted_event_attributes?
+ api_history_v1_workflow_execution_update_rejected_event_attributes(value.workflow_execution_update_rejected_event_attributes) if value.has_workflow_execution_update_rejected_event_attributes?
+ api_history_v1_workflow_execution_update_completed_event_attributes(value.workflow_execution_update_completed_event_attributes) if value.has_workflow_execution_update_completed_event_attributes?
+ api_history_v1_workflow_properties_modified_externally_event_attributes(value.workflow_properties_modified_externally_event_attributes) if value.has_workflow_properties_modified_externally_event_attributes?
+ api_history_v1_workflow_properties_modified_event_attributes(value.workflow_properties_modified_event_attributes) if value.has_workflow_properties_modified_event_attributes?
+ api_history_v1_workflow_execution_update_admitted_event_attributes(value.workflow_execution_update_admitted_event_attributes) if value.has_workflow_execution_update_admitted_event_attributes?
+ api_history_v1_nexus_operation_scheduled_event_attributes(value.nexus_operation_scheduled_event_attributes) if value.has_nexus_operation_scheduled_event_attributes?
+ api_history_v1_nexus_operation_completed_event_attributes(value.nexus_operation_completed_event_attributes) if value.has_nexus_operation_completed_event_attributes?
+ api_history_v1_nexus_operation_failed_event_attributes(value.nexus_operation_failed_event_attributes) if value.has_nexus_operation_failed_event_attributes?
+ api_history_v1_nexus_operation_canceled_event_attributes(value.nexus_operation_canceled_event_attributes) if value.has_nexus_operation_canceled_event_attributes?
+ api_history_v1_nexus_operation_timed_out_event_attributes(value.nexus_operation_timed_out_event_attributes) if value.has_nexus_operation_timed_out_event_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_marker_recorded_event_attributes(value)
+ @on_enter&.call(value)
+ value.details.values.each { |v| api_common_v1_payloads(v) }
+ api_common_v1_header(value.header) if value.has_header?
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_nexus_operation_canceled_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_nexus_operation_completed_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_nexus_operation_failed_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_nexus_operation_scheduled_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.input) if value.has_input?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_nexus_operation_timed_out_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_signal_external_workflow_execution_initiated_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_header(value.header) if value.has_header?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_start_child_workflow_execution_initiated_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_header(value.header) if value.has_header?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_upsert_workflow_search_attributes_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_canceled_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_completed_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_continued_as_new_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ api_common_v1_payloads(value.last_completion_result) if value.has_last_completion_result?
+ api_common_v1_header(value.header) if value.has_header?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_failed_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_signaled_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_header(value.header) if value.has_header?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_started_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_failure_v1_failure(value.continued_failure) if value.has_continued_failure?
+ api_common_v1_payloads(value.last_completion_result) if value.has_last_completion_result?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ api_common_v1_header(value.header) if value.has_header?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_terminated_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_update_accepted_event_attributes(value)
+ @on_enter&.call(value)
+ api_update_v1_request(value.accepted_request) if value.has_accepted_request?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_update_admitted_event_attributes(value)
+ @on_enter&.call(value)
+ api_update_v1_request(value.request) if value.has_request?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_update_completed_event_attributes(value)
+ @on_enter&.call(value)
+ api_update_v1_outcome(value.outcome) if value.has_outcome?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_execution_update_rejected_event_attributes(value)
+ @on_enter&.call(value)
+ api_update_v1_request(value.rejected_request) if value.has_rejected_request?
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_properties_modified_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_memo(value.upserted_memo) if value.has_upserted_memo?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_properties_modified_externally_event_attributes(value)
+ @on_enter&.call(value)
+ api_common_v1_memo(value.upserted_memo) if value.has_upserted_memo?
+ @on_exit&.call(value)
+ end
+
+ def api_history_v1_workflow_task_failed_event_attributes(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_nexus_v1_endpoint(value)
+ @on_enter&.call(value)
+ api_nexus_v1_endpoint_spec(value.spec) if value.has_spec?
+ @on_exit&.call(value)
+ end
+
+ def api_nexus_v1_endpoint_spec(value)
+ @on_enter&.call(value)
+ api_sdk_v1_user_metadata(value.metadata) if value.has_metadata?
+ @on_exit&.call(value)
+ end
+
+ def api_nexus_v1_request(value)
+ @on_enter&.call(value)
+ api_nexus_v1_start_operation_request(value.start_operation) if value.has_start_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_nexus_v1_response(value)
+ @on_enter&.call(value)
+ api_nexus_v1_start_operation_response(value.start_operation) if value.has_start_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_nexus_v1_start_operation_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.payload) if value.has_payload?
+ @on_exit&.call(value)
+ end
+
+ def api_nexus_v1_start_operation_response(value)
+ @on_enter&.call(value)
+ api_nexus_v1_start_operation_response_sync(value.sync_success) if value.has_sync_success?
+ @on_exit&.call(value)
+ end
+
+ def api_nexus_v1_start_operation_response_sync(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.payload) if value.has_payload?
+ @on_exit&.call(value)
+ end
+
+ def api_operatorservice_v1_create_nexus_endpoint_request(value)
+ @on_enter&.call(value)
+ api_nexus_v1_endpoint_spec(value.spec) if value.has_spec?
+ @on_exit&.call(value)
+ end
+
+ def api_operatorservice_v1_create_nexus_endpoint_response(value)
+ @on_enter&.call(value)
+ api_nexus_v1_endpoint(value.endpoint) if value.has_endpoint?
+ @on_exit&.call(value)
+ end
+
+ def api_operatorservice_v1_get_nexus_endpoint_response(value)
+ @on_enter&.call(value)
+ api_nexus_v1_endpoint(value.endpoint) if value.has_endpoint?
+ @on_exit&.call(value)
+ end
+
+ def api_operatorservice_v1_list_nexus_endpoints_response(value)
+ @on_enter&.call(value)
+ value.endpoints.each { |v| api_nexus_v1_endpoint(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_operatorservice_v1_update_nexus_endpoint_request(value)
+ @on_enter&.call(value)
+ api_nexus_v1_endpoint_spec(value.spec) if value.has_spec?
+ @on_exit&.call(value)
+ end
+
+ def api_operatorservice_v1_update_nexus_endpoint_response(value)
+ @on_enter&.call(value)
+ api_nexus_v1_endpoint(value.endpoint) if value.has_endpoint?
+ @on_exit&.call(value)
+ end
+
+ def api_protocol_v1_message(value)
+ @on_enter&.call(value)
+ google_protobuf_any(value.body) if value.has_body?
+ @on_exit&.call(value)
+ end
+
+ def api_query_v1_workflow_query(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.query_args) if value.has_query_args?
+ api_common_v1_header(value.header) if value.has_header?
+ @on_exit&.call(value)
+ end
+
+ def api_query_v1_workflow_query_result(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.answer) if value.has_answer?
+ @on_exit&.call(value)
+ end
+
+ def api_schedule_v1_schedule(value)
+ @on_enter&.call(value)
+ api_schedule_v1_schedule_action(value.action) if value.has_action?
+ @on_exit&.call(value)
+ end
+
+ def api_schedule_v1_schedule_action(value)
+ @on_enter&.call(value)
+ api_workflow_v1_new_workflow_execution_info(value.start_workflow) if value.has_start_workflow?
+ @on_exit&.call(value)
+ end
+
+ def api_schedule_v1_schedule_list_entry(value)
+ @on_enter&.call(value)
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_sdk_v1_user_metadata(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.summary) if value.has_summary?
+ api_common_v1_payload(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_update_v1_input(value)
+ @on_enter&.call(value)
+ api_common_v1_header(value.header) if value.has_header?
+ api_common_v1_payloads(value.args) if value.has_args?
+ @on_exit&.call(value)
+ end
+
+ def api_update_v1_outcome(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.success) if value.has_success?
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_update_v1_request(value)
+ @on_enter&.call(value)
+ api_update_v1_input(value.input) if value.has_input?
+ @on_exit&.call(value)
+ end
+
+ def api_workflow_v1_callback_info(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.last_attempt_failure) if value.has_last_attempt_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_workflow_v1_new_workflow_execution_info(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ api_common_v1_header(value.header) if value.has_header?
+ api_sdk_v1_user_metadata(value.user_metadata) if value.has_user_metadata?
+ @on_exit&.call(value)
+ end
+
+ def api_workflow_v1_nexus_operation_cancellation_info(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.last_attempt_failure) if value.has_last_attempt_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_workflow_v1_pending_activity_info(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.heartbeat_details) if value.has_heartbeat_details?
+ api_failure_v1_failure(value.last_failure) if value.has_last_failure?
+ @on_exit&.call(value)
+ end
+
+ def api_workflow_v1_pending_nexus_operation_info(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.last_attempt_failure) if value.has_last_attempt_failure?
+ api_workflow_v1_nexus_operation_cancellation_info(value.cancellation_info) if value.has_cancellation_info?
+ @on_exit&.call(value)
+ end
+
+ def api_workflow_v1_workflow_execution_config(value)
+ @on_enter&.call(value)
+ api_sdk_v1_user_metadata(value.user_metadata) if value.has_user_metadata?
+ @on_exit&.call(value)
+ end
+
+ def api_workflow_v1_workflow_execution_info(value)
+ @on_enter&.call(value)
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_count_workflow_executions_response(value)
+ @on_enter&.call(value)
+ value.groups.each { |v| api_workflowservice_v1_count_workflow_executions_response_aggregation_group(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_count_workflow_executions_response_aggregation_group(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.group_values) unless value.group_values.empty?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_create_schedule_request(value)
+ @on_enter&.call(value)
+ api_schedule_v1_schedule(value.schedule) if value.has_schedule?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_describe_schedule_response(value)
+ @on_enter&.call(value)
+ api_schedule_v1_schedule(value.schedule) if value.has_schedule?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_describe_workflow_execution_response(value)
+ @on_enter&.call(value)
+ api_workflow_v1_workflow_execution_config(value.execution_config) if value.has_execution_config?
+ api_workflow_v1_workflow_execution_info(value.workflow_execution_info) if value.has_workflow_execution_info?
+ value.pending_activities.each { |v| api_workflow_v1_pending_activity_info(v) }
+ value.callbacks.each { |v| api_workflow_v1_callback_info(v) }
+ value.pending_nexus_operations.each { |v| api_workflow_v1_pending_nexus_operation_info(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_execute_multi_operation_request(value)
+ @on_enter&.call(value)
+ value.operations.each { |v| api_workflowservice_v1_execute_multi_operation_request_operation(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_execute_multi_operation_request_operation(value)
+ @on_enter&.call(value)
+ api_workflowservice_v1_start_workflow_execution_request(value.start_workflow) if value.has_start_workflow?
+ api_workflowservice_v1_update_workflow_execution_request(value.update_workflow) if value.has_update_workflow?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_execute_multi_operation_response(value)
+ @on_enter&.call(value)
+ value.responses.each { |v| api_workflowservice_v1_execute_multi_operation_response_response(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_execute_multi_operation_response_response(value)
+ @on_enter&.call(value)
+ api_workflowservice_v1_start_workflow_execution_response(value.start_workflow) if value.has_start_workflow?
+ api_workflowservice_v1_update_workflow_execution_response(value.update_workflow) if value.has_update_workflow?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_get_workflow_execution_history_response(value)
+ @on_enter&.call(value)
+ api_history_v1_history(value.history) if value.has_history?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_get_workflow_execution_history_reverse_response(value)
+ @on_enter&.call(value)
+ api_history_v1_history(value.history) if value.has_history?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_list_archived_workflow_executions_response(value)
+ @on_enter&.call(value)
+ value.executions.each { |v| api_workflow_v1_workflow_execution_info(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_list_closed_workflow_executions_response(value)
+ @on_enter&.call(value)
+ value.executions.each { |v| api_workflow_v1_workflow_execution_info(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_list_open_workflow_executions_response(value)
+ @on_enter&.call(value)
+ value.executions.each { |v| api_workflow_v1_workflow_execution_info(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_list_schedules_response(value)
+ @on_enter&.call(value)
+ value.schedules.each { |v| api_schedule_v1_schedule_list_entry(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_list_workflow_executions_response(value)
+ @on_enter&.call(value)
+ value.executions.each { |v| api_workflow_v1_workflow_execution_info(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_poll_activity_task_queue_response(value)
+ @on_enter&.call(value)
+ api_common_v1_header(value.header) if value.has_header?
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_payloads(value.heartbeat_details) if value.has_heartbeat_details?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_poll_nexus_task_queue_response(value)
+ @on_enter&.call(value)
+ api_nexus_v1_request(value.request) if value.has_request?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_poll_workflow_execution_update_response(value)
+ @on_enter&.call(value)
+ api_update_v1_outcome(value.outcome) if value.has_outcome?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_poll_workflow_task_queue_response(value)
+ @on_enter&.call(value)
+ api_history_v1_history(value.history) if value.has_history?
+ api_query_v1_workflow_query(value.query) if value.has_query?
+ value.queries.values.each { |v| api_query_v1_workflow_query(v) }
+ value.messages.each { |v| api_protocol_v1_message(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_query_workflow_request(value)
+ @on_enter&.call(value)
+ api_query_v1_workflow_query(value.query) if value.has_query?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_query_workflow_response(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.query_result) if value.has_query_result?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_record_activity_task_heartbeat_by_id_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_record_activity_task_heartbeat_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_activity_task_canceled_by_id_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_activity_task_canceled_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_activity_task_completed_by_id_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_activity_task_completed_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_activity_task_failed_by_id_request(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ api_common_v1_payloads(value.last_heartbeat_details) if value.has_last_heartbeat_details?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_activity_task_failed_by_id_response(value)
+ @on_enter&.call(value)
+ value.failures.each { |v| api_failure_v1_failure(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_activity_task_failed_request(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ api_common_v1_payloads(value.last_heartbeat_details) if value.has_last_heartbeat_details?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_activity_task_failed_response(value)
+ @on_enter&.call(value)
+ value.failures.each { |v| api_failure_v1_failure(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_nexus_task_completed_request(value)
+ @on_enter&.call(value)
+ api_nexus_v1_response(value.response) if value.has_response?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_query_task_completed_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.query_result) if value.has_query_result?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_workflow_task_completed_request(value)
+ @on_enter&.call(value)
+ value.commands.each { |v| api_command_v1_command(v) }
+ value.query_results.values.each { |v| api_query_v1_workflow_query_result(v) }
+ value.messages.each { |v| api_protocol_v1_message(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_workflow_task_completed_response(value)
+ @on_enter&.call(value)
+ api_workflowservice_v1_poll_workflow_task_queue_response(value.workflow_task) if value.has_workflow_task?
+ value.activity_tasks.each { |v| api_workflowservice_v1_poll_activity_task_queue_response(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_respond_workflow_task_failed_request(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ value.messages.each { |v| api_protocol_v1_message(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_scan_workflow_executions_response(value)
+ @on_enter&.call(value)
+ value.executions.each { |v| api_workflow_v1_workflow_execution_info(v) }
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_signal_with_start_workflow_execution_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_payloads(value.signal_input) if value.has_signal_input?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ api_common_v1_header(value.header) if value.has_header?
+ api_sdk_v1_user_metadata(value.user_metadata) if value.has_user_metadata?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_signal_workflow_execution_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_header(value.header) if value.has_header?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_start_batch_operation_request(value)
+ @on_enter&.call(value)
+ api_batch_v1_batch_operation_termination(value.termination_operation) if value.has_termination_operation?
+ api_batch_v1_batch_operation_signal(value.signal_operation) if value.has_signal_operation?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_start_workflow_execution_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.input) if value.has_input?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ api_common_v1_header(value.header) if value.has_header?
+ api_failure_v1_failure(value.continued_failure) if value.has_continued_failure?
+ api_common_v1_payloads(value.last_completion_result) if value.has_last_completion_result?
+ api_sdk_v1_user_metadata(value.user_metadata) if value.has_user_metadata?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_start_workflow_execution_response(value)
+ @on_enter&.call(value)
+ api_workflowservice_v1_poll_workflow_task_queue_response(value.eager_workflow_task) if value.has_eager_workflow_task?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_terminate_workflow_execution_request(value)
+ @on_enter&.call(value)
+ api_common_v1_payloads(value.details) if value.has_details?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_update_schedule_request(value)
+ @on_enter&.call(value)
+ api_schedule_v1_schedule(value.schedule) if value.has_schedule?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_update_workflow_execution_request(value)
+ @on_enter&.call(value)
+ api_update_v1_request(value.request) if value.has_request?
+ @on_exit&.call(value)
+ end
+
+ def api_workflowservice_v1_update_workflow_execution_response(value)
+ @on_enter&.call(value)
+ api_update_v1_outcome(value.outcome) if value.has_outcome?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_activity_result_activity_resolution(value)
+ @on_enter&.call(value)
+ coresdk_activity_result_success(value.completed) if value.has_completed?
+ coresdk_activity_result_failure(value.failed) if value.has_failed?
+ coresdk_activity_result_cancellation(value.cancelled) if value.has_cancelled?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_activity_result_cancellation(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_activity_result_failure(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_activity_result_success(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_child_workflow_cancellation(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_child_workflow_child_workflow_result(value)
+ @on_enter&.call(value)
+ coresdk_child_workflow_success(value.completed) if value.has_completed?
+ coresdk_child_workflow_failure(value.failed) if value.has_failed?
+ coresdk_child_workflow_cancellation(value.cancelled) if value.has_cancelled?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_child_workflow_failure(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_child_workflow_success(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_cancel_workflow(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.details) unless value.details.empty?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_do_update(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.input) unless value.input.empty?
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_initialize_workflow(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.arguments) unless value.arguments.empty?
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ api_failure_v1_failure(value.continued_failure) if value.has_continued_failure?
+ api_common_v1_payloads(value.last_completion_result) if value.has_last_completion_result?
+ api_common_v1_memo(value.memo) if value.has_memo?
+ api_common_v1_search_attributes(value.search_attributes) if value.has_search_attributes?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_query_workflow(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.arguments) unless value.arguments.empty?
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_resolve_activity(value)
+ @on_enter&.call(value)
+ coresdk_activity_result_activity_resolution(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_resolve_child_workflow_execution(value)
+ @on_enter&.call(value)
+ coresdk_child_workflow_child_workflow_result(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_resolve_child_workflow_execution_start(value)
+ @on_enter&.call(value)
+ coresdk_workflow_activation_resolve_child_workflow_execution_start_cancelled(value.cancelled) if value.has_cancelled?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_resolve_child_workflow_execution_start_cancelled(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_resolve_request_cancel_external_workflow(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_resolve_signal_external_workflow(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_signal_workflow(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.input) unless value.input.empty?
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_workflow_activation(value)
+ @on_enter&.call(value)
+ value.jobs.each { |v| coresdk_workflow_activation_workflow_activation_job(v) }
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_activation_workflow_activation_job(value)
+ @on_enter&.call(value)
+ coresdk_workflow_activation_initialize_workflow(value.initialize_workflow) if value.has_initialize_workflow?
+ coresdk_workflow_activation_query_workflow(value.query_workflow) if value.has_query_workflow?
+ coresdk_workflow_activation_cancel_workflow(value.cancel_workflow) if value.has_cancel_workflow?
+ coresdk_workflow_activation_signal_workflow(value.signal_workflow) if value.has_signal_workflow?
+ coresdk_workflow_activation_resolve_activity(value.resolve_activity) if value.has_resolve_activity?
+ coresdk_workflow_activation_resolve_child_workflow_execution_start(value.resolve_child_workflow_execution_start) if value.has_resolve_child_workflow_execution_start?
+ coresdk_workflow_activation_resolve_child_workflow_execution(value.resolve_child_workflow_execution) if value.has_resolve_child_workflow_execution?
+ coresdk_workflow_activation_resolve_signal_external_workflow(value.resolve_signal_external_workflow) if value.has_resolve_signal_external_workflow?
+ coresdk_workflow_activation_resolve_request_cancel_external_workflow(value.resolve_request_cancel_external_workflow) if value.has_resolve_request_cancel_external_workflow?
+ coresdk_workflow_activation_do_update(value.do_update) if value.has_do_update?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_complete_workflow_execution(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.result) if value.has_result?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_continue_as_new_workflow_execution(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.arguments) unless value.arguments.empty?
+ value.memo.values.each { |v| api_common_v1_payload(v) }
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ value.search_attributes.values.each { |v| api_common_v1_payload(v) } unless @skip_search_attributes
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_fail_workflow_execution(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_modify_workflow_properties(value)
+ @on_enter&.call(value)
+ api_common_v1_memo(value.upserted_memo) if value.has_upserted_memo?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_query_result(value)
+ @on_enter&.call(value)
+ coresdk_workflow_commands_query_success(value.succeeded) if value.has_succeeded?
+ api_failure_v1_failure(value.failed) if value.has_failed?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_query_success(value)
+ @on_enter&.call(value)
+ api_common_v1_payload(value.response) if value.has_response?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_schedule_activity(value)
+ @on_enter&.call(value)
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ api_common_v1_payload_repeated(value.arguments) unless value.arguments.empty?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_schedule_local_activity(value)
+ @on_enter&.call(value)
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ api_common_v1_payload_repeated(value.arguments) unless value.arguments.empty?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_signal_external_workflow_execution(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.args) unless value.args.empty?
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_start_child_workflow_execution(value)
+ @on_enter&.call(value)
+ api_common_v1_payload_repeated(value.input) unless value.input.empty?
+ value.headers.values.each { |v| api_common_v1_payload(v) }
+ value.memo.values.each { |v| api_common_v1_payload(v) }
+ value.search_attributes.values.each { |v| api_common_v1_payload(v) } unless @skip_search_attributes
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_update_response(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.rejected) if value.has_rejected?
+ api_common_v1_payload(value.completed) if value.has_completed?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_upsert_workflow_search_attributes(value)
+ @on_enter&.call(value)
+ value.search_attributes.values.each { |v| api_common_v1_payload(v) } unless @skip_search_attributes
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_commands_workflow_command(value)
+ @on_enter&.call(value)
+ coresdk_workflow_commands_schedule_activity(value.schedule_activity) if value.has_schedule_activity?
+ coresdk_workflow_commands_query_result(value.respond_to_query) if value.has_respond_to_query?
+ coresdk_workflow_commands_complete_workflow_execution(value.complete_workflow_execution) if value.has_complete_workflow_execution?
+ coresdk_workflow_commands_fail_workflow_execution(value.fail_workflow_execution) if value.has_fail_workflow_execution?
+ coresdk_workflow_commands_continue_as_new_workflow_execution(value.continue_as_new_workflow_execution) if value.has_continue_as_new_workflow_execution?
+ coresdk_workflow_commands_start_child_workflow_execution(value.start_child_workflow_execution) if value.has_start_child_workflow_execution?
+ coresdk_workflow_commands_signal_external_workflow_execution(value.signal_external_workflow_execution) if value.has_signal_external_workflow_execution?
+ coresdk_workflow_commands_schedule_local_activity(value.schedule_local_activity) if value.has_schedule_local_activity?
+ coresdk_workflow_commands_upsert_workflow_search_attributes(value.upsert_workflow_search_attributes) if value.has_upsert_workflow_search_attributes?
+ coresdk_workflow_commands_modify_workflow_properties(value.modify_workflow_properties) if value.has_modify_workflow_properties?
+ coresdk_workflow_commands_update_response(value.update_response) if value.has_update_response?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_completion_failure(value)
+ @on_enter&.call(value)
+ api_failure_v1_failure(value.failure) if value.has_failure?
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_completion_success(value)
+ @on_enter&.call(value)
+ value.commands.each { |v| coresdk_workflow_commands_workflow_command(v) }
+ @on_exit&.call(value)
+ end
+
+ def coresdk_workflow_completion_workflow_activation_completion(value)
+ @on_enter&.call(value)
+ coresdk_workflow_completion_success(value.successful) if value.has_successful?
+ coresdk_workflow_completion_failure(value.failed) if value.has_failed?
+ @on_exit&.call(value)
+ end
+ end
+ end
+end
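The generated visitor methods above all share the same shape: invoke an enter hook, descend into any nested message fields, then invoke an exit hook. A minimal plain-Ruby sketch of that enter/descend/exit pattern (`Node` and `Visitor` here are illustrative names, not SDK code):

```ruby
# Illustrative tree node; the real visitor walks protobuf messages instead.
Node = Struct.new(:name, :children)

class Visitor
  def initialize(on_enter: nil, on_exit: nil)
    @on_enter = on_enter
    @on_exit = on_exit
  end

  # Mirrors the generated methods: enter hook, recurse into children, exit hook.
  def visit(node)
    @on_enter&.call(node)
    node.children.each { |c| visit(c) }
    @on_exit&.call(node)
  end
end

tree = Node.new('root', [Node.new('a', []), Node.new('b', [])])
seen = []
Visitor.new(on_enter: ->(n) { seen << "+#{n.name}" },
            on_exit: ->(n) { seen << "-#{n.name}" }).visit(tree)
# seen now records a depth-first enter/exit trace of the tree
```

The `&.` safe-navigation on the hooks matches the generated code: either callback may be nil, in which case that side of the traversal is simply skipped.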
diff --git a/temporalio/lib/temporalio/api/testservice/v1/request_response.rb b/temporalio/lib/temporalio/api/testservice/v1/request_response.rb
new file mode 100644
index 00000000..8d2564dd
--- /dev/null
+++ b/temporalio/lib/temporalio/api/testservice/v1/request_response.rb
@@ -0,0 +1,31 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/api/testservice/v1/request_response.proto
+
+require 'google/protobuf'
+
+require 'google/protobuf/duration_pb'
+require 'google/protobuf/timestamp_pb'
+
+
+descriptor_data = "\n2temporal/api/testservice/v1/request_response.proto\x12\x1btemporal.api.testservice.v1\x1a\x1egoogle/protobuf/duration.proto\x1a\x1fgoogle/protobuf/timestamp.proto\"\x19\n\x17LockTimeSkippingRequest\"\x1a\n\x18LockTimeSkippingResponse\"\x1b\n\x19UnlockTimeSkippingRequest\"\x1c\n\x1aUnlockTimeSkippingResponse\"B\n\x11SleepUntilRequest\x12-\n\ttimestamp\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.Timestamp\";\n\x0cSleepRequest\x12+\n\x08\x64uration\x18\x01 \x01(\x0b\x32\x19.google.protobuf.Duration\"\x0f\n\rSleepResponse\"B\n\x16GetCurrentTimeResponse\x12(\n\x04time\x18\x01 \x01(\x0b\x32\x1a.google.protobuf.TimestampB\xaa\x01\n\x1eio.temporal.api.testservice.v1B\x14RequestResponseProtoP\x01Z-go.temporal.io/api/testservice/v1;testservice\xaa\x02\x1dTemporalio.Api.TestService.V1\xea\x02 Temporalio::Api::TestService::V1b\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+ module Api
+ module TestService
+ module V1
+ LockTimeSkippingRequest = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("temporal.api.testservice.v1.LockTimeSkippingRequest").msgclass
+ LockTimeSkippingResponse = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("temporal.api.testservice.v1.LockTimeSkippingResponse").msgclass
+ UnlockTimeSkippingRequest = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("temporal.api.testservice.v1.UnlockTimeSkippingRequest").msgclass
+ UnlockTimeSkippingResponse = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("temporal.api.testservice.v1.UnlockTimeSkippingResponse").msgclass
+ SleepUntilRequest = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("temporal.api.testservice.v1.SleepUntilRequest").msgclass
+ SleepRequest = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("temporal.api.testservice.v1.SleepRequest").msgclass
+ SleepResponse = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("temporal.api.testservice.v1.SleepResponse").msgclass
+ GetCurrentTimeResponse = ::Google::Protobuf::DescriptorPool.generated_pool.lookup("temporal.api.testservice.v1.GetCurrentTimeResponse").msgclass
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/api/testservice/v1/service.rb b/temporalio/lib/temporalio/api/testservice/v1/service.rb
new file mode 100644
index 00000000..d7e838ea
--- /dev/null
+++ b/temporalio/lib/temporalio/api/testservice/v1/service.rb
@@ -0,0 +1,23 @@
+# frozen_string_literal: true
+# Generated by the protocol buffer compiler. DO NOT EDIT!
+# source: temporal/api/testservice/v1/service.proto
+
+require 'google/protobuf'
+
+require 'temporalio/api/testservice/v1/request_response'
+require 'google/protobuf/empty_pb'
+
+
+descriptor_data = "\n)temporal/api/testservice/v1/service.proto\x12\x1btemporal.api.testservice.v1\x1a\x32temporal/api/testservice/v1/request_response.proto\x1a\x1bgoogle/protobuf/empty.proto2\xc2\x05\n\x0bTestService\x12\x81\x01\n\x10LockTimeSkipping\x12\x34.temporal.api.testservice.v1.LockTimeSkippingRequest\x1a\x35.temporal.api.testservice.v1.LockTimeSkippingResponse\"\x00\x12\x87\x01\n\x12UnlockTimeSkipping\x12\x36.temporal.api.testservice.v1.UnlockTimeSkippingRequest\x1a\x37.temporal.api.testservice.v1.UnlockTimeSkippingResponse\"\x00\x12`\n\x05Sleep\x12).temporal.api.testservice.v1.SleepRequest\x1a*.temporal.api.testservice.v1.SleepResponse\"\x00\x12j\n\nSleepUntil\x12..temporal.api.testservice.v1.SleepUntilRequest\x1a*.temporal.api.testservice.v1.SleepResponse\"\x00\x12v\n\x1bUnlockTimeSkippingWithSleep\x12).temporal.api.testservice.v1.SleepRequest\x1a*.temporal.api.testservice.v1.SleepResponse\"\x00\x12_\n\x0eGetCurrentTime\x12\x16.google.protobuf.Empty\x1a\x33.temporal.api.testservice.v1.GetCurrentTimeResponse\"\x00\x42\xa2\x01\n\x1eio.temporal.api.testservice.v1B\x0cServiceProtoP\x01Z-go.temporal.io/api/testservice/v1;testservice\xaa\x02\x1dTemporalio.Api.TestService.V1\xea\x02 Temporalio::Api::TestService::V1b\x06proto3"
+
+pool = Google::Protobuf::DescriptorPool.generated_pool
+pool.add_serialized_file(descriptor_data)
+
+module Temporalio
+ module Api
+ module TestService
+ module V1
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/cancellation.rb b/temporalio/lib/temporalio/cancellation.rb
index faddc977..6d5416d5 100644
--- a/temporalio/lib/temporalio/cancellation.rb
+++ b/temporalio/lib/temporalio/cancellation.rb
@@ -1,6 +1,7 @@
# frozen_string_literal: true
require 'temporalio/error'
+require 'temporalio/workflow'
module Temporalio
# Cancellation representation, often known as a "cancellation token". This is used by clients, activities, and
@@ -19,7 +20,7 @@ def initialize(*parents)
@canceled_reason = nil
@canceled_mutex = Mutex.new
@canceled_cond_var = nil
- @cancel_callbacks = []
+ @cancel_callbacks = {} # Keyed by sentinel object for O(1) removal; value iteration is still insertion-ordered (deterministic)
@shield_depth = 0
@shield_pending_cancel = nil # When pending, set as single-reason array
parents.each { |p| p.add_cancel_callback { on_cancel(reason: p.canceled_reason) } }
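The switch from an Array to a Hash keyed by fresh `Object.new` sentinels works because Ruby Hashes preserve insertion order while allowing O(1) removal by key. A standalone illustration of the pattern (not SDK code):

```ruby
# Callback registry keyed by unique sentinel objects, as in Cancellation.
callbacks = {}

key1 = Object.new
key2 = Object.new
callbacks[key1] = proc { 'first' }
callbacks[key2] = proc { 'second' }

# Ruby Hashes iterate values in insertion order, so invocation order is
# deterministic even though the keys are opaque sentinels.
order = callbacks.values.map(&:call)

# Removal by key is O(1) and leaves the remaining order intact.
callbacks.delete(key1)
remaining = callbacks.values.map(&:call)
```

Each `Object.new` is only equal to itself, so a returned key cannot collide with or accidentally remove another caller's callback, which an index into an Array could.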
@@ -59,15 +60,24 @@ def to_ary
[self, proc { |reason: nil| on_cancel(reason:) }]
end
- # Wait on this to be canceled. This is backed by a {::ConditionVariable}.
+ # Wait on this to be canceled. This is backed by a {::ConditionVariable} outside of workflows and by
+ # {Workflow.wait_condition} inside of workflows.
def wait
+ # If this is in a workflow, just wait on the canceled flag. We cannot use ConditionVariable here because it
+ # issues a no-duration kernel_sleep to the fiber scheduler, and the workflow implementation of kernel_sleep
+ # relies on cancellation by default, which would recurse back into this method.
+ if Workflow.in_workflow?
+ Workflow.wait_condition(cancellation: nil) { @canceled }
+ return
+ end
+
@canceled_mutex.synchronize do
break if @canceled
# Add cond var if not present
if @canceled_cond_var.nil?
@canceled_cond_var = ConditionVariable.new
- @cancel_callbacks.push(proc { @canceled_mutex.synchronize { @canceled_cond_var.broadcast } })
+ @cancel_callbacks[Object.new] = proc { @canceled_mutex.synchronize { @canceled_cond_var.broadcast } }
end
# Wait on it
@@ -105,21 +115,29 @@ def shield
#
# @note WARNING: This is advanced API, users should use {wait} or similar.
#
- # @param proc [Proc, nil] Proc to invoke, or nil to use block.
- # @yield Accepts block if not using `proc`.
+ # @yield Block to invoke when this cancellation is canceled.
- def add_cancel_callback(proc = nil, &block)
- raise ArgumentError, 'Must provide proc or block' unless proc || block
- raise ArgumentError, 'Cannot provide both proc and block' if proc && block
- raise ArgumentError, 'Parameter not a proc' if proc && !proc.is_a?(Proc)
+ # @return [Object, nil] Key that can be used with {remove_cancel_callback}, or `nil` if the callback was run immediately.
+ def add_cancel_callback(&block)
+ raise ArgumentError, 'Must provide block' unless block_given?
- callback_to_run_immediately = @canceled_mutex.synchronize do
- callback = proc || block
- @cancel_callbacks.push(proc || block)
- break nil unless @canceled
+ callback_to_run_immediately, key = @canceled_mutex.synchronize do
+ break [block, nil] if @canceled
- callback
+ key = Object.new
+ @cancel_callbacks[key] = block
+ [nil, key]
end
callback_to_run_immediately&.call
+ key
+ end
+
+ # Remove a cancel callback using the key returned from {add_cancel_callback}.
+ #
+ # @param key [Object] Key returned from {add_cancel_callback}.
+ def remove_cancel_callback(key)
+ @canceled_mutex.synchronize do
+ @cancel_callbacks.delete(key)
+ end
nil
end
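The register-or-run-immediately semantics above can be sketched in a standalone class (simplified; the real `Cancellation` also tracks a cancel reason, shielding, and a condition variable, so this is a pattern sketch, not the SDK implementation):

```ruby
class MiniCancellation
  def initialize
    @canceled = false
    @callbacks = {} # sentinel key => callback block
    @mutex = Mutex.new
  end

  # Returns a removal key, or nil if already canceled (block runs immediately,
  # outside the lock, matching the SDK's approach).
  def add_cancel_callback(&block)
    key = nil
    run_now = @mutex.synchronize do
      if @canceled
        block
      else
        key = Object.new
        @callbacks[key] = block
        nil
      end
    end
    run_now&.call
    key
  end

  def remove_cancel_callback(key)
    @mutex.synchronize { @callbacks.delete(key) }
    nil
  end

  def cancel
    # Drain callbacks under the lock, invoke them outside it.
    to_run = @mutex.synchronize do
      @canceled = true
      drained = @callbacks.values
      @callbacks.clear
      drained
    end
    to_run.each(&:call)
  end
end

c = MiniCancellation.new
fired = []
key = c.add_cancel_callback { fired << :a }
c.add_cancel_callback { fired << :b }
c.remove_cancel_callback(key)             # :a is unregistered before cancel
c.cancel                                  # fires :b
late_key = c.add_cancel_callback { fired << :late } # runs immediately, key is nil
```

Invoking drained callbacks outside the lock avoids deadlock when a callback itself calls back into the cancellation (as `wait`'s broadcast callback does via `@canceled_mutex.synchronize`).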
@@ -144,7 +162,9 @@ def prepare_cancel(reason:)
@canceled = true
@canceled_reason = reason
- @cancel_callbacks.dup
+ to_return = @cancel_callbacks.dup
+ @cancel_callbacks.clear
+ to_return.values
end
end
end
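The hunk above swaps the callback array for a hash keyed by `Object.new` identities so individual callbacks can be removed later. A minimal standalone sketch of the pattern (hypothetical `MiniCancellation` class, not the SDK's actual implementation):

```ruby
class MiniCancellation
  def initialize
    @mutex = Mutex.new
    @canceled = false
    @callbacks = {}
  end

  # Returns a key usable with #remove_callback, or nil if the block ran immediately.
  def add_callback(&block)
    to_run, key = @mutex.synchronize do
      # Already canceled: run the block now, no key to return
      break [block, nil] if @canceled

      # Object.new serves as a unique identity key for later removal
      key = Object.new
      @callbacks[key] = block
      [nil, key]
    end
    to_run&.call
    key
  end

  def remove_callback(key)
    @mutex.synchronize { @callbacks.delete(key) }
    nil
  end

  def cancel
    to_run = @mutex.synchronize do
      @canceled = true
      callbacks = @callbacks.dup
      @callbacks.clear
      callbacks.values
    end
    # Invoke outside the lock, mirroring prepare_cancel returning the dup'd values
    to_run.each(&:call)
  end
end
```

Invoking callbacks outside the mutex matches the diff's `prepare_cancel`, which clears and returns the callback list under the lock but leaves invocation to the caller.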
diff --git a/temporalio/lib/temporalio/client.rb b/temporalio/lib/temporalio/client.rb
index 45cf472c..b2559e45 100644
--- a/temporalio/lib/temporalio/client.rb
+++ b/temporalio/lib/temporalio/client.rb
@@ -19,6 +19,7 @@
require 'temporalio/retry_policy'
require 'temporalio/runtime'
require 'temporalio/search_attributes'
+require 'temporalio/workflow/definition'
module Temporalio
# Client for accessing Temporal.
@@ -185,7 +186,7 @@ def operator_service
# Start a workflow and return its handle.
#
- # @param workflow [Workflow, String] Name of the workflow
+ # @param workflow [Class, String, Symbol] Workflow definition class or workflow name.
# @param args [Array] Arguments to the workflow.
# @param id [String] Unique identifier for the workflow execution.
# @param task_queue [String] Task queue to run the workflow on.
@@ -199,7 +200,7 @@ def operator_service
# is set to terminate if running.
# @param retry_policy [RetryPolicy, nil] Retry policy for the workflow.
# @param cron_schedule [String, nil] Cron schedule. Users should use schedules instead of this.
- # @param memo [Hash, nil] Memo for the workflow.
+ # @param memo [Hash{String, Symbol => Object}, nil] Memo for the workflow.
# @param search_attributes [SearchAttributes, nil] Search attributes for the workflow.
# @param start_delay [Float, nil] Amount of time in seconds to wait before starting the workflow. This does not work
# with `cron_schedule`.
@@ -251,7 +252,7 @@ def start_workflow(
# Start a workflow and wait for its result. This is a shortcut for {start_workflow} + {WorkflowHandle.result}.
#
- # @param workflow [Workflow, String] Name of the workflow
+ # @param workflow [Class, Symbol, String] Workflow definition class or workflow name.
# @param args [Array] Arguments to the workflow.
# @param id [String] Unique identifier for the workflow execution.
# @param task_queue [String] Task queue to run the workflow on.
@@ -265,7 +266,7 @@ def start_workflow(
# is set to terminate if running.
# @param retry_policy [RetryPolicy, nil] Retry policy for the workflow.
# @param cron_schedule [String, nil] Cron schedule. Users should use schedules instead of this.
- # @param memo [Hash, nil] Memo for the workflow.
+ # @param memo [Hash{String, Symbol => Object}, nil] Memo for the workflow.
# @param search_attributes [SearchAttributes, nil] Search attributes for the workflow.
# @param start_delay [Float, nil] Amount of time in seconds to wait before starting the workflow. This does not work
# with `cron_schedule`.
diff --git a/temporalio/lib/temporalio/client/connection/test_service.rb b/temporalio/lib/temporalio/client/connection/test_service.rb
new file mode 100644
index 00000000..a186719a
--- /dev/null
+++ b/temporalio/lib/temporalio/client/connection/test_service.rb
@@ -0,0 +1,111 @@
+# frozen_string_literal: true
+
+# Generated code. DO NOT EDIT!
+
+require 'temporalio/api'
+require 'temporalio/client/connection/service'
+require 'temporalio/internal/bridge/client'
+
+module Temporalio
+ class Client
+ class Connection
+ # TestService API.
+ class TestService < Service
+ # @!visibility private
+ def initialize(connection)
+ super(connection, Internal::Bridge::Client::SERVICE_TEST)
+ end
+
+ # Calls TestService.LockTimeSkipping API call.
+ #
+ # @param request [Temporalio::Api::TestService::V1::LockTimeSkippingRequest] API request.
+ # @param rpc_options [RPCOptions, nil] Advanced RPC options.
+ # @return [Temporalio::Api::TestService::V1::LockTimeSkippingResponse] API response.
+ def lock_time_skipping(request, rpc_options: nil)
+ invoke_rpc(
+ rpc: 'lock_time_skipping',
+ request_class: Temporalio::Api::TestService::V1::LockTimeSkippingRequest,
+ response_class: Temporalio::Api::TestService::V1::LockTimeSkippingResponse,
+ request:,
+ rpc_options:
+ )
+ end
+
+ # Calls TestService.UnlockTimeSkipping API call.
+ #
+ # @param request [Temporalio::Api::TestService::V1::UnlockTimeSkippingRequest] API request.
+ # @param rpc_options [RPCOptions, nil] Advanced RPC options.
+ # @return [Temporalio::Api::TestService::V1::UnlockTimeSkippingResponse] API response.
+ def unlock_time_skipping(request, rpc_options: nil)
+ invoke_rpc(
+ rpc: 'unlock_time_skipping',
+ request_class: Temporalio::Api::TestService::V1::UnlockTimeSkippingRequest,
+ response_class: Temporalio::Api::TestService::V1::UnlockTimeSkippingResponse,
+ request:,
+ rpc_options:
+ )
+ end
+
+ # Calls TestService.Sleep API call.
+ #
+ # @param request [Temporalio::Api::TestService::V1::SleepRequest] API request.
+ # @param rpc_options [RPCOptions, nil] Advanced RPC options.
+ # @return [Temporalio::Api::TestService::V1::SleepResponse] API response.
+ def sleep(request, rpc_options: nil)
+ invoke_rpc(
+ rpc: 'sleep',
+ request_class: Temporalio::Api::TestService::V1::SleepRequest,
+ response_class: Temporalio::Api::TestService::V1::SleepResponse,
+ request:,
+ rpc_options:
+ )
+ end
+
+ # Calls TestService.SleepUntil API call.
+ #
+ # @param request [Temporalio::Api::TestService::V1::SleepUntilRequest] API request.
+ # @param rpc_options [RPCOptions, nil] Advanced RPC options.
+ # @return [Temporalio::Api::TestService::V1::SleepResponse] API response.
+ def sleep_until(request, rpc_options: nil)
+ invoke_rpc(
+ rpc: 'sleep_until',
+ request_class: Temporalio::Api::TestService::V1::SleepUntilRequest,
+ response_class: Temporalio::Api::TestService::V1::SleepResponse,
+ request:,
+ rpc_options:
+ )
+ end
+
+ # Calls TestService.UnlockTimeSkippingWithSleep API call.
+ #
+ # @param request [Temporalio::Api::TestService::V1::SleepRequest] API request.
+ # @param rpc_options [RPCOptions, nil] Advanced RPC options.
+ # @return [Temporalio::Api::TestService::V1::SleepResponse] API response.
+ def unlock_time_skipping_with_sleep(request, rpc_options: nil)
+ invoke_rpc(
+ rpc: 'unlock_time_skipping_with_sleep',
+ request_class: Temporalio::Api::TestService::V1::SleepRequest,
+ response_class: Temporalio::Api::TestService::V1::SleepResponse,
+ request:,
+ rpc_options:
+ )
+ end
+
+ # Calls TestService.GetCurrentTime API call.
+ #
+ # @param request [Google::Protobuf::Empty] API request.
+ # @param rpc_options [RPCOptions, nil] Advanced RPC options.
+ # @return [Temporalio::Api::TestService::V1::GetCurrentTimeResponse] API response.
+ def get_current_time(request, rpc_options: nil)
+ invoke_rpc(
+ rpc: 'get_current_time',
+ request_class: Google::Protobuf::Empty,
+ response_class: Temporalio::Api::TestService::V1::GetCurrentTimeResponse,
+ request:,
+ rpc_options:
+ )
+ end
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/client/schedule.rb b/temporalio/lib/temporalio/client/schedule.rb
index 1bcba178..bf80ff0a 100644
--- a/temporalio/lib/temporalio/client/schedule.rb
+++ b/temporalio/lib/temporalio/client/schedule.rb
@@ -224,7 +224,7 @@ def self._from_proto(raw_info, data_converter)
# Create start-workflow schedule action.
#
- # @param workflow [String] Workflow.
+ # @param workflow [Class, Symbol, String] Workflow.
# @param args [Array] Arguments to the workflow.
# @param id [String] Unique identifier for the workflow execution.
# @param task_queue [String] Task queue to run the workflow on.
@@ -251,7 +251,7 @@ def initialize(
)
# steep:ignore:start
super(
- workflow:,
+ workflow: Workflow::Definition._workflow_type_from_workflow_parameter(workflow),
args:,
id:,
task_queue:,
@@ -958,7 +958,8 @@ def initialize(raw_info)
State = Struct.new(
:note,
- :paused
+ :paused,
+ keyword_init: true
)
# State of a listed schedule.
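The `keyword_init: true` addition above changes how the `State` struct is constructed. A standalone illustration with throwaway struct names (not the SDK's):

```ruby
# Without keyword_init, arguments are positional and a hash is just the first
# member's value; with keyword_init, keys map to members by name.
PositionalState = Struct.new(:note, :paused)
KeywordState = Struct.new(:note, :paused, keyword_init: true)

positional = PositionalState.new({ paused: true }) # whole hash lands in :note
keyword = KeywordState.new(paused: true)           # :paused set, :note left nil
```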
diff --git a/temporalio/lib/temporalio/client/workflow_handle.rb b/temporalio/lib/temporalio/client/workflow_handle.rb
index 743a1ff4..5b2c2d67 100644
--- a/temporalio/lib/temporalio/client/workflow_handle.rb
+++ b/temporalio/lib/temporalio/client/workflow_handle.rb
@@ -100,13 +100,13 @@ def result(follow_runs: true, rpc_options: nil)
raise Error::WorkflowFailedError.new, cause: @client.data_converter.from_failure(attrs.failure)
when :EVENT_TYPE_WORKFLOW_EXECUTION_CANCELED
attrs = event.workflow_execution_canceled_event_attributes
- raise Error::WorkflowFailedError.new, cause: Error::CanceledError.new(
+ raise Error::WorkflowFailedError.new, 'Workflow execution canceled', cause: Error::CanceledError.new(
'Workflow execution canceled',
details: @client.data_converter.from_payloads(attrs&.details)
)
when :EVENT_TYPE_WORKFLOW_EXECUTION_TERMINATED
attrs = event.workflow_execution_terminated_event_attributes
- raise Error::WorkflowFailedError.new, cause: Error::TerminatedError.new(
+ raise Error::WorkflowFailedError.new, 'Workflow execution terminated', cause: Error::TerminatedError.new(
Internal::ProtoUtils.string_or(attrs.reason, 'Workflow execution terminated'),
details: @client.data_converter.from_payloads(attrs&.details)
)
@@ -115,7 +115,7 @@ def result(follow_runs: true, rpc_options: nil)
hist_run_id = attrs.new_execution_run_id
next if follow_runs && hist_run_id && !hist_run_id.empty?
- raise Error::WorkflowFailedError.new, cause: Error::TimeoutError.new(
+ raise Error::WorkflowFailedError.new, 'Workflow execution timed out', cause: Error::TimeoutError.new(
'Workflow execution timed out',
type: Api::Enums::V1::TimeoutType::TIMEOUT_TYPE_START_TO_CLOSE,
last_heartbeat_details: []
@@ -210,7 +210,7 @@ def fetch_history_events(
# Send a signal to the workflow. This will signal for {run_id} if present. To use a different run ID, create a new
# handle via {Client.workflow_handle}.
#
- # @param signal [String] Signal name.
+ # @param signal [Workflow::Definition::Signal, Symbol, String] Signal definition or name.
# @param args [Array] Signal arguments.
# @param rpc_options [RPCOptions, nil] Advanced RPC options.
#
@@ -232,7 +232,7 @@ def signal(signal, *args, rpc_options: nil)
# Query the workflow. This will query for {run_id} if present. To use a different run ID, create a new handle via
# {Client.workflow_handle}.
#
- # @param query [String] Query name.
+ # @param query [Workflow::Definition::Query, Symbol, String] Query definition or name.
# @param args [Array] Query arguments.
# @param reject_condition [WorkflowQueryRejectCondition, nil] Condition for rejecting the query.
# @param rpc_options [RPCOptions, nil] Advanced RPC options.
@@ -265,7 +265,7 @@ def query(
# Send an update request to the workflow and return a handle to it. This will target the workflow with {run_id} if
# present. To use a different run ID, create a new handle via {Client.workflow_handle}.
#
- # @param update [String] Update name.
+ # @param update [Workflow::Definition::Update, Symbol, String] Update definition or name.
# @param args [Array] Update arguments.
# @param wait_for_stage [WorkflowUpdateWaitStage] Required stage to wait until returning. ADMITTED is not
# currently supported. See https://docs.temporal.io/workflows#update for more details.
@@ -280,7 +280,6 @@ def query(
#
# @note Handles created as a result of {Client.start_workflow} will send updates the latest workflow with the same
# workflow ID even if it is unrelated to the started workflow.
- # @note WARNING: This API is experimental.
def start_update(
update,
*args,
@@ -303,7 +302,7 @@ def start_update(
# Send an update request to the workflow and wait for it to complete. This will target the workflow with {run_id}
# if present. To use a different run ID, create a new handle via {Client.workflow_handle}.
#
- # @param update [String] Update name.
+ # @param update [Workflow::Definition::Update, Symbol, String] Update definition or name.
# @param args [Array] Update arguments.
# @param id [String] ID of the update.
# @param rpc_options [RPCOptions, nil] Advanced RPC options.
@@ -317,7 +316,6 @@ def start_update(
#
# @note Handles created as a result of {Client.start_workflow} will send updates the latest workflow with the same
# workflow ID even if it is unrelated to the started workflow.
- # @note WARNING: This API is experimental.
def execute_update(update, *args, id: SecureRandom.uuid, rpc_options: nil)
start_update(
update,
@@ -335,8 +333,6 @@ def execute_update(update, *args, id: SecureRandom.uuid, rpc_options: nil)
# users will not need to set this and instead use the one on the class.
#
# @return [WorkflowUpdateHandle] The update handle.
- #
- # @note WARNING: This API is experimental.
def update_handle(id, specific_run_id: run_id)
WorkflowUpdateHandle.new(
client: @client,
diff --git a/temporalio/lib/temporalio/common_enums.rb b/temporalio/lib/temporalio/common_enums.rb
index eff9512d..7772d387 100644
--- a/temporalio/lib/temporalio/common_enums.rb
+++ b/temporalio/lib/temporalio/common_enums.rb
@@ -7,18 +7,35 @@ module Temporalio
#
# @see https://docs.temporal.io/workflows#workflow-id-reuse-policy
module WorkflowIDReusePolicy
+ # Allow starting a workflow execution using the same workflow ID.
ALLOW_DUPLICATE = Api::Enums::V1::WorkflowIdReusePolicy::WORKFLOW_ID_REUSE_POLICY_ALLOW_DUPLICATE
+ # Allow starting a workflow execution using the same workflow ID, only when the last execution's final state is one
+ # of terminated, canceled, timed out, or failed.
ALLOW_DUPLICATE_FAILED_ONLY =
Api::Enums::V1::WorkflowIdReusePolicy::WORKFLOW_ID_REUSE_POLICY_ALLOW_DUPLICATE_FAILED_ONLY
+ # Do not permit re-use of the workflow ID for this workflow. Future start workflow requests could potentially change
+ # the policy, allowing re-use of the workflow ID.
REJECT_DUPLICATE = Api::Enums::V1::WorkflowIdReusePolicy::WORKFLOW_ID_REUSE_POLICY_REJECT_DUPLICATE
+ # This option is equivalent to {WorkflowIDConflictPolicy::TERMINATE_EXISTING} and remains here only for backwards
+ # compatibility. If specified, it acts like {ALLOW_DUPLICATE}, but the {WorkflowIDConflictPolicy} on the request is
+ # also treated as {WorkflowIDConflictPolicy::TERMINATE_EXISTING}. If no workflow is running, the behavior is the
+ # same as {ALLOW_DUPLICATE}.
+ #
+ # @deprecated Use {WorkflowIDConflictPolicy::TERMINATE_EXISTING} instead.
TERMINATE_IF_RUNNING = Api::Enums::V1::WorkflowIdReusePolicy::WORKFLOW_ID_REUSE_POLICY_TERMINATE_IF_RUNNING
end
# How already-running workflows of the same ID are handled on start.
+ #
+ # @see https://docs.temporal.io/workflows#workflow-id-conflict-policy
module WorkflowIDConflictPolicy
+ # Unset.
UNSPECIFIED = Api::Enums::V1::WorkflowIdConflictPolicy::WORKFLOW_ID_CONFLICT_POLICY_UNSPECIFIED
+ # Don't start a new workflow; instead, fail with an already-started error.
FAIL = Api::Enums::V1::WorkflowIdConflictPolicy::WORKFLOW_ID_CONFLICT_POLICY_FAIL
+ # Don't start a new workflow; instead, return a workflow handle for the running workflow.
USE_EXISTING = Api::Enums::V1::WorkflowIdConflictPolicy::WORKFLOW_ID_CONFLICT_POLICY_USE_EXISTING
+ # Terminate the running workflow before starting a new one.
TERMINATE_EXISTING = Api::Enums::V1::WorkflowIdConflictPolicy::WORKFLOW_ID_CONFLICT_POLICY_TERMINATE_EXISTING
end
end
diff --git a/temporalio/lib/temporalio/converters/failure_converter.rb b/temporalio/lib/temporalio/converters/failure_converter.rb
index 570da9de..c2c78afe 100644
--- a/temporalio/lib/temporalio/converters/failure_converter.rb
+++ b/temporalio/lib/temporalio/converters/failure_converter.rb
@@ -85,7 +85,7 @@ def to_failure(error, converter)
)
else
failure.application_failure_info = Api::Failure::V1::ApplicationFailureInfo.new(
- type: error.class.name.split('::').last
+ type: error.class.name
)
end
diff --git a/temporalio/lib/temporalio/converters/payload_converter/composite.rb b/temporalio/lib/temporalio/converters/payload_converter/composite.rb
index 9add8e6a..d5cadabe 100644
--- a/temporalio/lib/temporalio/converters/payload_converter/composite.rb
+++ b/temporalio/lib/temporalio/converters/payload_converter/composite.rb
@@ -2,6 +2,7 @@
require 'temporalio/api'
require 'temporalio/converters/payload_converter'
+require 'temporalio/converters/raw_value'
module Temporalio
module Converters
@@ -34,6 +35,9 @@ def initialize(*converters)
# @return [Api::Common::V1::Payload] Converted payload.
# @raise [ConverterNotFound] If no converters can process the value.
def to_payload(value)
+ # As a special case, raw values just return the payload within
+ return value.payload if value.is_a?(RawValue)
+
converters.each_value do |converter|
payload = converter.to_payload(value)
return payload unless payload.nil?
diff --git a/temporalio/lib/temporalio/converters/raw_value.rb b/temporalio/lib/temporalio/converters/raw_value.rb
new file mode 100644
index 00000000..bf5b7d88
--- /dev/null
+++ b/temporalio/lib/temporalio/converters/raw_value.rb
@@ -0,0 +1,20 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Converters
+ # Raw value wrapper that has the raw payload. When raw args are configured at implementation time, the inbound
+ # arguments will be instances of this class. When instances of this class are sent outbound or returned from
+ # inbound calls, the raw payload will be serialized instead of applying traditional conversion.
+ class RawValue
+ # @return [Api::Common::V1::Payload] Payload.
+ attr_reader :payload
+
+ # Create a raw value.
+ #
+ # @param payload [Api::Common::V1::Payload] Payload.
+ def initialize(payload)
+ @payload = payload
+ end
+ end
+ end
+end
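The composite-converter special case above short-circuits conversion for `RawValue`. A simplified standalone sketch, with lambdas standing in for real payload converters and demo names (`DemoRawValue`, `demo_to_payload`) that are not the SDK's:

```ruby
# RawValue wraps an already-serialized payload; conversion is bypassed for it.
DemoRawValue = Struct.new(:payload)

def demo_to_payload(value, converters)
  # As a special case, raw values just return the payload within
  return value.payload if value.is_a?(DemoRawValue)

  # Each converter returns nil to pass the value on to the next one,
  # mirroring the each_value loop in the diff
  converters.each do |converter|
    payload = converter.call(value)
    return payload unless payload.nil?
  end
  raise 'Converter not found for value'
end
```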
diff --git a/temporalio/lib/temporalio/error.rb b/temporalio/lib/temporalio/error.rb
index 735bb945..d23daf1b 100644
--- a/temporalio/lib/temporalio/error.rb
+++ b/temporalio/lib/temporalio/error.rb
@@ -35,8 +35,8 @@ def self._with_backtrace_and_cause(err, backtrace:, cause:)
# Error that is raised when a workflow is unsuccessful.
class WorkflowFailedError < Error
# @!visibility private
- def initialize
- super('Workflow failed')
+ def initialize(message = 'Workflow execution failed')
+ super
end
end
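The default-message change above works together with the explicit messages now passed at the raise sites in `workflow_handle.rb`. A standalone sketch of both pieces (demo class, not the SDK's error hierarchy):

```ruby
# Bare `super` forwards `message` (or its default) to StandardError.
class DemoWorkflowFailedError < StandardError
  def initialize(message = 'Workflow execution failed')
    super
  end
end

default_err = DemoWorkflowFailedError.new

# `raise exc, message, cause:` overrides the message and attaches a cause,
# mirroring the raise sites in the handle's result loop.
explicit_err = begin
  raise DemoWorkflowFailedError.new, 'Workflow execution canceled',
        cause: RuntimeError.new('cancel details')
rescue DemoWorkflowFailedError => e
  e
end
```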
diff --git a/temporalio/lib/temporalio/error/failure.rb b/temporalio/lib/temporalio/error/failure.rb
index 4da19915..82d40799 100644
--- a/temporalio/lib/temporalio/error/failure.rb
+++ b/temporalio/lib/temporalio/error/failure.rb
@@ -17,7 +17,7 @@ class WorkflowAlreadyStartedError < Failure
# @return [String] Workflow type name of the already-started workflow.
attr_reader :workflow_type
- # @return [String] Run ID of the already-started workflow if this was raised by the client.
+ # @return [String, nil] Run ID of the already-started workflow if this was raised by the client.
attr_reader :run_id
# @!visibility private
diff --git a/temporalio/lib/temporalio/internal/bridge/testing.rb b/temporalio/lib/temporalio/internal/bridge/testing.rb
index 02cc4985..846c4fb3 100644
--- a/temporalio/lib/temporalio/internal/bridge/testing.rb
+++ b/temporalio/lib/temporalio/internal/bridge/testing.rb
@@ -24,6 +24,17 @@ class EphemeralServer
keyword_init: true
)
+ StartTestServerOptions = Struct.new(
+ :existing_path, # Optional
+ :sdk_name,
+ :sdk_version,
+ :download_version,
+ :download_dest_dir, # Optional
+ :port, # Optional
+ :extra_args,
+ keyword_init: true
+ )
+
def self.start_dev_server(runtime, options)
queue = Queue.new
async_start_dev_server(runtime, options, queue)
@@ -33,6 +44,15 @@ def self.start_dev_server(runtime, options)
result
end
+ def self.start_test_server(runtime, options)
+ queue = Queue.new
+ async_start_test_server(runtime, options, queue)
+ result = queue.pop
+ raise result if result.is_a?(Exception)
+
+ result
+ end
+
def shutdown
queue = Queue.new
async_shutdown(queue)
diff --git a/temporalio/lib/temporalio/internal/bridge/worker.rb b/temporalio/lib/temporalio/internal/bridge/worker.rb
index e1b6598d..e7f9c71b 100644
--- a/temporalio/lib/temporalio/internal/bridge/worker.rb
+++ b/temporalio/lib/temporalio/internal/bridge/worker.rb
@@ -26,6 +26,8 @@ class Worker
:max_task_queue_activities_per_second,
:graceful_shutdown_period,
:use_worker_versioning,
+ :nondeterminism_as_workflow_fail,
+ :nondeterminism_as_workflow_fail_for_types,
keyword_init: true
)
diff --git a/temporalio/lib/temporalio/internal/client/implementation.rb b/temporalio/lib/temporalio/internal/client/implementation.rb
index da28bc0f..d315b6a1 100644
--- a/temporalio/lib/temporalio/internal/client/implementation.rb
+++ b/temporalio/lib/temporalio/internal/client/implementation.rb
@@ -19,6 +19,7 @@
require 'temporalio/internal/proto_utils'
require 'temporalio/runtime'
require 'temporalio/search_attributes'
+require 'temporalio/workflow/definition'
module Temporalio
module Internal
@@ -49,7 +50,9 @@ def start_workflow(input)
req = Api::WorkflowService::V1::StartWorkflowExecutionRequest.new(
request_id: SecureRandom.uuid,
namespace: @client.namespace,
- workflow_type: Api::Common::V1::WorkflowType.new(name: input.workflow.to_s),
+ workflow_type: Api::Common::V1::WorkflowType.new(
+ name: Workflow::Definition._workflow_type_from_workflow_parameter(input.workflow)
+ ),
workflow_id: input.workflow_id,
task_queue: Api::TaskQueue::V1::TaskQueue.new(name: input.task_queue.to_s),
input: @client.data_converter.to_payloads(input.args),
@@ -65,7 +68,7 @@ def start_workflow(input)
search_attributes: input.search_attributes&._to_proto,
workflow_start_delay: ProtoUtils.seconds_to_duration(input.start_delay),
request_eager_execution: input.request_eager_start,
- header: Internal::ProtoUtils.headers_to_proto(input.headers, @client.data_converter)
+ header: ProtoUtils.headers_to_proto(input.headers, @client.data_converter)
)
# Send request
@@ -135,7 +138,7 @@ def count_workflows(input)
resp.groups.map do |group|
Temporalio::Client::WorkflowExecutionCount::AggregationGroup.new(
group.count,
- group.group_values.map { |payload| SearchAttributes.value_from_payload(payload) }
+ group.group_values.map { |payload| SearchAttributes._value_from_payload(payload) }
)
end
)
@@ -188,7 +191,7 @@ def signal_workflow(input)
workflow_id: input.workflow_id,
run_id: input.run_id || ''
),
- signal_name: input.signal,
+ signal_name: Workflow::Definition::Signal._name_from_parameter(input.signal),
input: @client.data_converter.to_payloads(input.args),
header: Internal::ProtoUtils.headers_to_proto(input.headers, @client.data_converter),
identity: @client.connection.identity,
@@ -209,7 +212,7 @@ def query_workflow(input)
run_id: input.run_id || ''
),
query: Api::Query::V1::WorkflowQuery.new(
- query_type: input.query,
+ query_type: Workflow::Definition::Query._name_from_parameter(input.query),
query_args: @client.data_converter.to_payloads(input.args),
header: Internal::ProtoUtils.headers_to_proto(input.headers, @client.data_converter)
),
@@ -252,7 +255,7 @@ def start_workflow_update(input)
identity: @client.connection.identity
),
input: Api::Update::V1::Input.new(
- name: input.update,
+ name: Workflow::Definition::Update._name_from_parameter(input.update),
args: @client.data_converter.to_payloads(input.args),
header: Internal::ProtoUtils.headers_to_proto(input.headers, @client.data_converter)
)
diff --git a/temporalio/lib/temporalio/internal/metric.rb b/temporalio/lib/temporalio/internal/metric.rb
index 00b9d516..f67755bf 100644
--- a/temporalio/lib/temporalio/internal/metric.rb
+++ b/temporalio/lib/temporalio/internal/metric.rb
@@ -1,5 +1,6 @@
# frozen_string_literal: true
+require 'singleton'
require 'temporalio/internal/bridge'
require 'temporalio/metric'
diff --git a/temporalio/lib/temporalio/internal/proto_utils.rb b/temporalio/lib/temporalio/internal/proto_utils.rb
index 6315f438..2c20cd0b 100644
--- a/temporalio/lib/temporalio/internal/proto_utils.rb
+++ b/temporalio/lib/temporalio/internal/proto_utils.rb
@@ -5,11 +5,11 @@
module Temporalio
module Internal
module ProtoUtils
- def self.seconds_to_duration(seconds_float)
- return nil if seconds_float.nil?
+ def self.seconds_to_duration(seconds_numeric)
+ return nil if seconds_numeric.nil?
- seconds = seconds_float.to_i
- nanos = ((seconds_float - seconds) * 1_000_000_000).round
+ seconds = seconds_numeric.to_i
+ nanos = ((seconds_numeric - seconds) * 1_000_000_000).round
Google::Protobuf::Duration.new(seconds:, nanos:)
end
@@ -41,7 +41,13 @@ def self.timestamp_to_time(timestamp)
def self.memo_to_proto(hash, converter)
return nil if hash.nil? || hash.empty?
- Api::Common::V1::Memo.new(fields: hash.transform_values { |val| converter.to_payload(val) })
+ Api::Common::V1::Memo.new(fields: memo_to_proto_hash(hash, converter))
+ end
+
+ def self.memo_to_proto_hash(hash, converter)
+ return nil if hash.nil? || hash.empty?
+
+ hash.transform_keys(&:to_s).transform_values { |val| converter.to_payload(val) }
end
def self.memo_from_proto(memo, converter)
@@ -53,7 +59,13 @@ def self.memo_from_proto(memo, converter)
def self.headers_to_proto(headers, converter)
return nil if headers.nil? || headers.empty?
- Api::Common::V1::Header.new(fields: headers.transform_values { |val| converter.to_payload(val) })
+ Api::Common::V1::Header.new(fields: headers_to_proto_hash(headers, converter))
+ end
+
+ def self.headers_to_proto_hash(headers, converter)
+ return nil if headers.nil? || headers.empty?
+
+ headers.transform_values { |val| converter.to_payload(val) }
end
def self.headers_from_proto(headers, converter)
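The renames and new `*_to_proto_hash` helpers above center on two small transformations: splitting a Numeric seconds value into a Duration's parts, and normalizing memo keys. A standalone sketch of both without the protobuf types (helper names here are illustrative only):

```ruby
# Split a Numeric (Integer or Float) number of seconds into the whole-second
# and nanosecond parts a protobuf Duration expects.
def duration_parts(seconds_numeric)
  return nil if seconds_numeric.nil?

  seconds = seconds_numeric.to_i
  nanos = ((seconds_numeric - seconds) * 1_000_000_000).round
  [seconds, nanos]
end

# Memo keys may be Strings or Symbols; they are normalized to Strings before
# each value is converted to a payload.
def memo_hash(hash, converter)
  return nil if hash.nil? || hash.empty?

  hash.transform_keys(&:to_s).transform_values { |val| converter.call(val) }
end
```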
diff --git a/temporalio/lib/temporalio/internal/worker/activity_worker.rb b/temporalio/lib/temporalio/internal/worker/activity_worker.rb
index 3e61e589..b727c48b 100644
--- a/temporalio/lib/temporalio/internal/worker/activity_worker.rb
+++ b/temporalio/lib/temporalio/internal/worker/activity_worker.rb
@@ -11,12 +11,14 @@
module Temporalio
module Internal
module Worker
+ # Worker for handling activity tasks. Upon overarching worker shutdown, {wait_all_complete} should be used to wait
+ # for the activities to complete.
class ActivityWorker
LOG_TASKS = false
attr_reader :worker, :bridge_worker
- def initialize(worker, bridge_worker)
+ def initialize(worker:, bridge_worker:)
@worker = worker
@bridge_worker = bridge_worker
@runtime_metric_meter = worker.options.client.connection.options.runtime.metric_meter
@@ -31,7 +33,7 @@ def initialize(worker, bridge_worker)
@activities = worker.options.activities.each_with_object({}) do |act, hash|
# Class means create each time, instance means just call, definition
# does nothing special
- defn = Activity::Definition.from_activity(act)
+ defn = Activity::Definition::Info.from_activity(act)
# Confirm name not in use
raise ArgumentError, "Multiple activities named #{defn.name}" if hash.key?(defn.name)
@@ -181,7 +183,7 @@ def execute_activity(task_token, defn, start)
).freeze
# Build input
- input = Temporalio::Worker::Interceptor::ExecuteActivityInput.new(
+ input = Temporalio::Worker::Interceptor::Activity::ExecuteInput.new(
proc: defn.proc,
args: ProtoUtils.convert_from_payload_array(
@worker.options.client.data_converter,
@@ -230,9 +232,9 @@ def execute_activity(task_token, defn, start)
def run_activity(activity, input)
result = begin
# Build impl with interceptors
- # @type var impl: Temporalio::Worker::Interceptor::ActivityInbound
+ # @type var impl: Temporalio::Worker::Interceptor::Activity::Inbound
impl = InboundImplementation.new(self)
- impl = @worker._all_interceptors.reverse_each.reduce(impl) do |acc, int|
+ impl = @worker._activity_interceptors.reverse_each.reduce(impl) do |acc, int|
int.intercept_activity(acc)
end
impl.init(OutboundImplementation.new(self))
@@ -307,7 +309,10 @@ def initialize( # rubocop:disable Lint/MissingSuper
def heartbeat(*details)
raise 'Implementation not set yet' if _outbound_impl.nil?
- _outbound_impl.heartbeat(Temporalio::Worker::Interceptor::HeartbeatActivityInput.new(details:))
+ # No-op if local
+ return if info.local?
+
+ _outbound_impl.heartbeat(Temporalio::Worker::Interceptor::Activity::HeartbeatInput.new(details:))
end
def metric_meter
@@ -321,7 +326,7 @@ def metric_meter
end
end
- class InboundImplementation < Temporalio::Worker::Interceptor::ActivityInbound
+ class InboundImplementation < Temporalio::Worker::Interceptor::Activity::Inbound
def initialize(worker)
super(nil) # steep:ignore
@worker = worker
@@ -339,7 +344,7 @@ def execute(input)
end
end
- class OutboundImplementation < Temporalio::Worker::Interceptor::ActivityOutbound
+ class OutboundImplementation < Temporalio::Worker::Interceptor::Activity::Outbound
def initialize(worker)
super(nil) # steep:ignore
@worker = worker
diff --git a/temporalio/lib/temporalio/internal/worker/multi_runner.rb b/temporalio/lib/temporalio/internal/worker/multi_runner.rb
index 0c5090c3..b4f7586b 100644
--- a/temporalio/lib/temporalio/internal/worker/multi_runner.rb
+++ b/temporalio/lib/temporalio/internal/worker/multi_runner.rb
@@ -6,6 +6,8 @@
module Temporalio
module Internal
module Worker
+ # Primary worker (re)actor-style event handler. This manages multiple workers, receives events from the bridge,
+ # and runs the user-supplied block.
class MultiRunner
def initialize(workers:, shutdown_signals:)
@workers = workers
@@ -47,6 +49,16 @@ def apply_thread_or_fiber_block(&)
end
end
+ def apply_workflow_activation_decoded(workflow_worker:, activation:)
+ @queue.push(Event::WorkflowActivationDecoded.new(workflow_worker:, activation:))
+ end
+
+ def apply_workflow_activation_complete(workflow_worker:, activation_completion:, encoded:)
+ @queue.push(Event::WorkflowActivationComplete.new(
+ workflow_worker:, activation_completion:, encoded:, completion_complete_queue: @queue
+ ))
+ end
+
def raise_in_thread_or_fiber_block(error)
@thread_or_fiber&.raise(error)
end
@@ -80,22 +92,25 @@ def next_event
# * [worker index, :activity/:workflow, error] - poll fail
# * [worker index, :activity/:workflow, nil] - worker shutdown
# * [nil, nil, nil] - all pollers done
+ # * [-1, run_id_string, error_or_nil] - workflow activation completion complete
result = @queue.pop
if result.is_a?(Event)
result
else
- worker_index, worker_type, poll_result = result
- if worker_index.nil? || worker_type.nil?
+ first, second, third = result
+ if first.nil? || second.nil?
Event::AllPollersShutDown.instance
+ elsif first == -1
+ Event::WorkflowActivationCompletionComplete.new(run_id: second, error: third)
else
- worker = @workers[worker_index]
- case poll_result
+ worker = @workers[first]
+ case third
when nil
- Event::PollerShutDown.new(worker:, worker_type:)
+ Event::PollerShutDown.new(worker:, worker_type: second)
when Exception
- Event::PollFailure.new(worker:, worker_type:, error: poll_result)
+ Event::PollFailure.new(worker:, worker_type: second, error: third)
else
- Event::PollSuccess.new(worker:, worker_type:, bytes: poll_result)
+ Event::PollSuccess.new(worker:, worker_type: second, bytes: third)
end
end
end
@@ -122,6 +137,35 @@ def initialize(worker:, worker_type:, error:) # rubocop:disable Lint/MissingSupe
end
end
+ class WorkflowActivationDecoded < Event
+ attr_reader :workflow_worker, :activation
+
+ def initialize(workflow_worker:, activation:) # rubocop:disable Lint/MissingSuper
+ @workflow_worker = workflow_worker
+ @activation = activation
+ end
+ end
+
+ class WorkflowActivationComplete < Event
+ attr_reader :workflow_worker, :activation_completion, :encoded, :completion_complete_queue
+
+ def initialize(workflow_worker:, activation_completion:, encoded:, completion_complete_queue:) # rubocop:disable Lint/MissingSuper
+ @workflow_worker = workflow_worker
+ @activation_completion = activation_completion
+ @encoded = encoded
+ @completion_complete_queue = completion_complete_queue
+ end
+ end
+
+ class WorkflowActivationCompletionComplete < Event
+ attr_reader :run_id, :error
+
+ def initialize(run_id:, error:) # rubocop:disable Lint/MissingSuper
+ @run_id = run_id
+ @error = error
+ end
+ end
+
class PollerShutDown < Event
attr_reader :worker, :worker_type
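The tuple shapes documented in `next_event` above can be decoded with a small dispatch; a simplified standalone sketch using symbolic event tags instead of the real `Event` classes:

```ruby
# Decode the 3-element arrays the bridge pushes onto the queue.
def decode_bridge_tuple(result)
  first, second, third = result
  if first.nil? || second.nil?
    # [nil, nil, nil] - all pollers done
    [:all_pollers_shut_down]
  elsif first == -1
    # [-1, run_id_string, error_or_nil] - activation completion complete
    [:completion_complete, second, third]
  else
    # [worker index, :activity/:workflow, poll result]
    case third
    when nil then [:poller_shut_down, first, second]
    when Exception then [:poll_failure, first, second, third]
    else [:poll_success, first, second, third]
    end
  end
end
```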
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance.rb
new file mode 100644
index 00000000..e660a08d
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance.rb
@@ -0,0 +1,730 @@
+# frozen_string_literal: true
+
+require 'json'
+require 'temporalio'
+require 'temporalio/activity/definition'
+require 'temporalio/api'
+require 'temporalio/converters/raw_value'
+require 'temporalio/error'
+require 'temporalio/internal/bridge/api'
+require 'temporalio/internal/proto_utils'
+require 'temporalio/internal/worker/workflow_instance/child_workflow_handle'
+require 'temporalio/internal/worker/workflow_instance/context'
+require 'temporalio/internal/worker/workflow_instance/details'
+require 'temporalio/internal/worker/workflow_instance/externally_immutable_hash'
+require 'temporalio/internal/worker/workflow_instance/handler_execution'
+require 'temporalio/internal/worker/workflow_instance/handler_hash'
+require 'temporalio/internal/worker/workflow_instance/illegal_call_tracer'
+require 'temporalio/internal/worker/workflow_instance/inbound_implementation'
+require 'temporalio/internal/worker/workflow_instance/outbound_implementation'
+require 'temporalio/internal/worker/workflow_instance/replay_safe_logger'
+require 'temporalio/internal/worker/workflow_instance/replay_safe_metric'
+require 'temporalio/internal/worker/workflow_instance/scheduler'
+require 'temporalio/retry_policy'
+require 'temporalio/scoped_logger'
+require 'temporalio/worker/interceptor'
+require 'temporalio/workflow/info'
+require 'temporalio/workflow/update_info'
+require 'timeout'
+
+module Temporalio
+ module Internal
+ module Worker
+ # Instance of a user workflow. This is the instance with all state needed to run the workflow and is expected to
+ # be cached by the worker for sticky execution.
+ class WorkflowInstance
+ def self.new_completion_with_failure(run_id:, error:, failure_converter:, payload_converter:)
+ Bridge::Api::WorkflowCompletion::WorkflowActivationCompletion.new(
+ run_id: run_id,
+ failed: Bridge::Api::WorkflowCompletion::Failure.new(
+ failure: begin
+ failure_converter.to_failure(error, payload_converter)
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ Api::Failure::V1::Failure.new(
+ message: "Failed converting error to failure: #{e.message}, " \
+ "original error message: #{error.message}",
+ application_failure_info: Api::Failure::V1::ApplicationFailureInfo.new
+ )
+ end
+ )
+ )
+ end
+
+ attr_reader :context, :logger, :info, :scheduler, :disable_eager_activity_execution, :pending_activities,
+ :pending_timers, :pending_child_workflow_starts, :pending_child_workflows,
+ :pending_external_signals, :pending_external_cancels, :in_progress_handlers, :payload_converter,
+ :failure_converter, :cancellation, :continue_as_new_suggested, :current_history_length,
+ :current_history_size, :replaying, :random, :signal_handlers, :query_handlers, :update_handlers,
+ :context_frozen
+
+ def initialize(details)
+ # Initialize general state
+ @context = Context.new(self)
+ if details.illegal_calls && !details.illegal_calls.empty?
+ @tracer = IllegalCallTracer.new(details.illegal_calls)
+ end
+ @logger = ReplaySafeLogger.new(logger: details.logger, instance: self)
+ @logger.scoped_values_getter = proc { scoped_logger_info }
+ @runtime_metric_meter = details.metric_meter
+ @scheduler = Scheduler.new(self)
+ @payload_converter = details.payload_converter
+ @failure_converter = details.failure_converter
+ @disable_eager_activity_execution = details.disable_eager_activity_execution
+ @pending_activities = {} # Keyed by sequence, value is fiber to resume with proto result
+ @pending_timers = {} # Keyed by sequence, value is fiber to resume with proto result
+ @pending_child_workflow_starts = {} # Keyed by sequence, value is fiber to resume with proto result
+ @pending_child_workflows = {} # Keyed by sequence, value is ChildWorkflowHandle to resolve with proto result
+ @pending_external_signals = {} # Keyed by sequence, value is fiber to resume with proto result
+ @pending_external_cancels = {} # Keyed by sequence, value is fiber to resume with proto result
+ @buffered_signals = {} # Keyed by signal name, value is array of signal jobs
+ # TODO(cretz): Should these be sets instead? Both should be fairly low counts.
+ @in_progress_handlers = [] # Value is HandlerExecution
+ @patches_notified = []
+ @definition = details.definition
+ @interceptors = details.interceptors
+ @cancellation, @cancellation_proc = Cancellation.new
+ @continue_as_new_suggested = false
+ @current_history_length = 0
+ @current_history_size = 0
+ @replaying = false
+ @failure_exception_types = details.workflow_failure_exception_types + @definition.failure_exception_types
+ @signal_handlers = HandlerHash.new(
+ details.definition.signals,
+ Workflow::Definition::Signal
+ ) do |defn|
+ # New definition, drain buffer. If it's dynamic (i.e. no name) drain them all.
+ to_drain = if defn.name.nil?
+ all_signals = @buffered_signals.values.flatten
+ @buffered_signals.clear
+ all_signals
+ else
+ @buffered_signals.delete(defn.name)
+ end
+ to_drain&.each { |job| apply_signal(job) }
+ end
+ @query_handlers = HandlerHash.new(details.definition.queries, Workflow::Definition::Query)
+ @update_handlers = HandlerHash.new(details.definition.updates, Workflow::Definition::Update)
+
+ # Create all things needed from initial job
+ @init_job = details.initial_activation.jobs.find { |j| !j.initialize_workflow.nil? }&.initialize_workflow
+ raise 'Missing init job from first activation' unless @init_job
+
+ illegal_call_tracing_disabled do
+ @info = Workflow::Info.new(
+ attempt: @init_job.attempt,
+ continued_run_id: ProtoUtils.string_or(@init_job.continued_from_execution_run_id),
+ cron_schedule: ProtoUtils.string_or(@init_job.cron_schedule),
+ execution_timeout: ProtoUtils.duration_to_seconds(@init_job.workflow_execution_timeout),
+ last_failure: if @init_job.continued_failure
+ @failure_converter.from_failure(@init_job.continued_failure, @payload_converter)
+ end,
+ last_result: if @init_job.last_completion_result
+ @payload_converter.from_payloads(@init_job.last_completion_result).first
+ end,
+ namespace: details.namespace,
+ parent: if @init_job.parent_workflow_info
+ Workflow::Info::ParentInfo.new(
+ namespace: @init_job.parent_workflow_info.namespace,
+ run_id: @init_job.parent_workflow_info.run_id,
+ workflow_id: @init_job.parent_workflow_info.workflow_id
+ )
+ end,
+ retry_policy: (RetryPolicy._from_proto(@init_job.retry_policy) if @init_job.retry_policy),
+ run_id: details.initial_activation.run_id,
+ run_timeout: ProtoUtils.duration_to_seconds(@init_job.workflow_run_timeout),
+ start_time: ProtoUtils.timestamp_to_time(details.initial_activation.timestamp) || raise,
+ task_queue: details.task_queue,
+ task_timeout: ProtoUtils.duration_to_seconds(@init_job.workflow_task_timeout) || raise,
+ workflow_id: @init_job.workflow_id,
+ workflow_type: @init_job.workflow_type
+ ).freeze
+
+ @random = Random.new(@init_job.randomness_seed)
+ end
+ end
+
+ def activate(activation)
+ # Run inside of scheduler
+ run_in_scheduler { activate_internal(activation) }
+ end
+
+ def add_command(command)
+ raise Workflow::InvalidWorkflowStateError, 'Cannot add commands in this context' if @context_frozen
+
+ @commands << command
+ end
+
+ def instance
+ @instance or raise 'Instance accessed before created'
+ end
+
+ def search_attributes
+ # Lazy on first access
+ @search_attributes ||= SearchAttributes._from_proto(
+ @init_job.search_attributes, disable_mutations: true, never_nil: true
+ ) || raise
+ end
+
+ def memo
+ # Lazy on first access
+ @memo ||= ExternallyImmutableHash.new(ProtoUtils.memo_from_proto(@init_job.memo, payload_converter) || {})
+ end
+
+ def now
+ # Create each time
+ ProtoUtils.timestamp_to_time(@now_timestamp) or raise 'Time unexpectedly not present'
+ end
+
+ def illegal_call_tracing_disabled(&)
+ @tracer.disable(&)
+ end
+
+ def patch(patch_id:, deprecated:)
+ # Use the memoized result if present. Even if this patch is being deprecated, we can still use the memoized
+ # result and skip the command.
+ patch_id = patch_id.to_s
+ @patches_memoized ||= {}
+ @patches_memoized.fetch(patch_id) do
+ patched = !replaying || @patches_notified.include?(patch_id)
+ @patches_memoized[patch_id] = patched
+ if patched
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ set_patch_marker: Bridge::Api::WorkflowCommands::SetPatchMarker.new(patch_id:, deprecated:)
+ )
+ )
+ end
+ patched
+ end
+ end
+
+ def metric_meter
+ @metric_meter ||= ReplaySafeMetric::Meter.new(
+ @runtime_metric_meter.with_additional_attributes(
+ {
+ namespace: info.namespace,
+ task_queue: info.task_queue,
+ workflow_type: info.workflow_type
+ }
+ )
+ )
+ end
+
+ private
+
+ def run_in_scheduler(&)
+ Fiber.set_scheduler(@scheduler)
+ if @tracer
+ @tracer.enable(&)
+ else
+ yield
+ end
+ ensure
+ Fiber.set_scheduler(nil)
+ end
+
+ def activate_internal(activation)
+ # Reset some activation state
+ @commands = []
+ @current_activation_error = nil
+ @continue_as_new_suggested = activation.continue_as_new_suggested
+ @current_history_length = activation.history_length
+ @current_history_size = activation.history_size_bytes
+ @replaying = activation.is_replaying
+ @now_timestamp = activation.timestamp
+
+ # Apply jobs and run event loop
+ begin
+ # Create instance if it doesn't already exist
+ @instance ||= with_context_frozen { create_instance }
+
+ # Apply jobs
+ activation.jobs.each { |job| apply(job) }
+
+ # Schedule primary 'execute' if not already running (i.e. this is
+ # the first activation)
+ @primary_fiber ||= schedule(top_level: true) { run_workflow }
+
+ # Run the event loop
+ @scheduler.run_until_all_yielded
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ on_top_level_exception(e)
+ end
+
+ # If we are not replaying and the workflow completed without failure
+ # (i.e. success, continue-as-new, or cancel), warn about any
+ # unfinished handlers.
+ if !@replaying && @commands.any? do |c|
+ !c.complete_workflow_execution.nil? ||
+ !c.continue_as_new_workflow_execution.nil? ||
+ !c.cancel_workflow_execution.nil?
+ end
+ warn_on_any_unfinished_handlers
+ end
+
+ # Return success or failure
+ if @current_activation_error
+ @logger.replay_safety_disabled do
+ @logger.warn('Failed activation')
+ @logger.warn(@current_activation_error)
+ end
+ WorkflowInstance.new_completion_with_failure(
+ run_id: activation.run_id,
+ error: @current_activation_error,
+ failure_converter: @failure_converter,
+ payload_converter: @payload_converter
+ )
+ else
+ Bridge::Api::WorkflowCompletion::WorkflowActivationCompletion.new(
+ run_id: activation.run_id,
+ successful: Bridge::Api::WorkflowCompletion::Success.new(commands: @commands)
+ )
+ end
+ ensure
+ @commands = nil
+ @current_activation_error = nil
+ end
+
+ def create_instance
+ # Convert workflow arguments
+ @workflow_arguments = convert_args(payload_array: @init_job.arguments,
+ method_name: :execute,
+ raw_args: @definition.raw_args)
+
+ # Initialize interceptors
+ @inbound = @interceptors.reverse_each.reduce(InboundImplementation.new(self)) do |acc, int|
+ int.intercept_workflow(acc)
+ end
+ @inbound.init(OutboundImplementation.new(self))
+
+ # Create the user instance
+ if @definition.init
+ @definition.workflow_class.new(*@workflow_arguments)
+ else
+ @definition.workflow_class.new
+ end
+ end
+
+ def apply(job)
+ case job.variant
+ when :initialize_workflow
+ # Ignore
+ when :fire_timer
+ pending_timers.delete(job.fire_timer.seq)&.resume
+ when :update_random_seed
+ @random = illegal_call_tracing_disabled { Random.new(job.update_random_seed.randomness_seed) }
+ when :query_workflow
+ apply_query(job.query_workflow)
+ when :cancel_workflow
+ # TODO(cretz): Use the details somehow?
+ @cancellation_proc.call(reason: 'Workflow canceled')
+ when :signal_workflow
+ apply_signal(job.signal_workflow)
+ when :resolve_activity
+ pending_activities.delete(job.resolve_activity.seq)&.resume(job.resolve_activity.result)
+ when :notify_has_patch
+ @patches_notified << job.notify_has_patch.patch_id
+ when :resolve_child_workflow_execution_start
+ pending_child_workflow_starts.delete(job.resolve_child_workflow_execution_start.seq)&.resume(
+ job.resolve_child_workflow_execution_start
+ )
+ when :resolve_child_workflow_execution
+ pending_child_workflows.delete(job.resolve_child_workflow_execution.seq)&._resolve(
+ job.resolve_child_workflow_execution.result
+ )
+ when :resolve_signal_external_workflow
+ pending_external_signals.delete(job.resolve_signal_external_workflow.seq)&.resume(
+ job.resolve_signal_external_workflow
+ )
+ when :resolve_request_cancel_external_workflow
+ pending_external_cancels.delete(job.resolve_request_cancel_external_workflow.seq)&.resume(
+ job.resolve_request_cancel_external_workflow
+ )
+ when :do_update
+ apply_update(job.do_update)
+ else
+ raise "Unrecognized activation job variant: #{job.variant}"
+ end
+ end
+
+ def apply_signal(job)
+ defn = signal_handlers[job.signal_name] || signal_handlers[nil]
+ handler_exec =
+ if defn
+ HandlerExecution.new(name: job.signal_name, update_id: nil, unfinished_policy: defn.unfinished_policy)
+ end
+ # Process as a top-level handler so that errors are treated as if raised in the primary workflow method
+ schedule(top_level: true, handler_exec:) do
+ # Send to interceptor if there is a definition, buffer otherwise
+ if defn
+ @inbound.handle_signal(
+ Temporalio::Worker::Interceptor::Workflow::HandleSignalInput.new(
+ signal: job.signal_name,
+ args: begin
+ convert_handler_args(payload_array: job.input, defn:)
+ rescue StandardError => e
+ # Signal argument conversion failures must not fail the task
+ @logger.error("Failed converting signal input arguments for #{job.signal_name}, dropping signal")
+ @logger.error(e)
+ next
+ end,
+ definition: defn,
+ headers: ProtoUtils.headers_from_proto_map(job.headers, @payload_converter) || {}
+ )
+ )
+ else
+ buffered = @buffered_signals[job.signal_name]
+ buffered = @buffered_signals[job.signal_name] = [] if buffered.nil?
+ buffered << job
+ end
+ end
+ end
+
+ def apply_query(job)
+ # TODO(cretz): __temporal_workflow_metadata
+ defn = case job.query_type
+ when '__stack_trace'
+ Workflow::Definition::Query.new(
+ name: '__stack_trace',
+ to_invoke: proc { scheduler.stack_trace }
+ )
+ else
+ query_handlers[job.query_type] || query_handlers[nil]
+ end
+ schedule do
+ unless defn
+ raise "Query handler for #{job.query_type} expected but not found, " \
+ "known queries: [#{query_handlers.keys.compact.sort.join(', ')}]"
+ end
+
+ result = with_context_frozen do
+ @inbound.handle_query(
+ Temporalio::Worker::Interceptor::Workflow::HandleQueryInput.new(
+ id: job.query_id,
+ query: job.query_type,
+ args: begin
+ convert_handler_args(payload_array: job.arguments, defn:)
+ rescue StandardError => e
+ raise "Failed converting query input arguments: #{e}"
+ end,
+ definition: defn,
+ headers: ProtoUtils.headers_from_proto_map(job.headers, @payload_converter) || {}
+ )
+ )
+ end
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ respond_to_query: Bridge::Api::WorkflowCommands::QueryResult.new(
+ query_id: job.query_id,
+ succeeded: Bridge::Api::WorkflowCommands::QuerySuccess.new(
+ response: @payload_converter.to_payload(result)
+ )
+ )
+ )
+ )
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ respond_to_query: Bridge::Api::WorkflowCommands::QueryResult.new(
+ query_id: job.query_id,
+ failed: @failure_converter.to_failure(e, @payload_converter)
+ )
+ )
+ )
+ end
+ end
+
+ def apply_update(job)
+ defn = update_handlers[job.name] || update_handlers[nil]
+ handler_exec =
+ (HandlerExecution.new(name: job.name, update_id: job.id, unfinished_policy: defn.unfinished_policy) if defn)
+ schedule(handler_exec:) do
+ # Until this is accepted, all errors are rejections
+ accepted = false
+
+ # Set update info
+ Fiber[:__temporal_update_info] = Workflow::UpdateInfo.new(id: job.id, name: job.name).freeze
+
+ # Reject if not present
+ unless defn
+ raise "Update handler for #{job.name} expected but not found, " \
+ "known updates: [#{update_handlers.keys.compact.sort.join(', ')}]"
+ end
+
+ # To match other SDKs, we only call the validation interceptor if there is a validator. Also to match
+ # other SDKs, we re-convert the args between validate and update so that user mutation in the
+ # validator/interceptor does not carry over to the handler.
+ if job.run_validator && defn.validator_to_invoke
+ with_context_frozen do
+ @inbound.validate_update(
+ Temporalio::Worker::Interceptor::Workflow::HandleUpdateInput.new(
+ id: job.id,
+ update: job.name,
+ args: begin
+ convert_handler_args(payload_array: job.input, defn:)
+ rescue StandardError => e
+ raise "Failed converting update input arguments: #{e}"
+ end,
+ definition: defn,
+ headers: ProtoUtils.headers_from_proto_map(job.headers, @payload_converter) || {}
+ )
+ )
+ end
+ end
+
+ # We build the input before marking accepted so a conversion exception rejects the update instead of failing the task
+ input = Temporalio::Worker::Interceptor::Workflow::HandleUpdateInput.new(
+ id: job.id,
+ update: job.name,
+ args: begin
+ convert_handler_args(payload_array: job.input, defn:)
+ rescue StandardError => e
+ raise "Failed converting update input arguments: #{e}"
+ end,
+ definition: defn,
+ headers: ProtoUtils.headers_from_proto_map(job.headers, @payload_converter) || {}
+ )
+
+ # Accept
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ update_response: Bridge::Api::WorkflowCommands::UpdateResponse.new(
+ protocol_instance_id: job.protocol_instance_id,
+ accepted: Google::Protobuf::Empty.new
+ )
+ )
+ )
+ accepted = true
+
+ # Issue update
+ result = @inbound.handle_update(input)
+
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ update_response: Bridge::Api::WorkflowCommands::UpdateResponse.new(
+ protocol_instance_id: job.protocol_instance_id,
+ completed: @payload_converter.to_payload(result)
+ )
+ )
+ )
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ # Re-raise to fail the task if the update was accepted and this is not a failure exception
+ raise if accepted && !failure_exception?(e)
+
+ # Reject
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ update_response: Bridge::Api::WorkflowCommands::UpdateResponse.new(
+ protocol_instance_id: job.protocol_instance_id,
+ rejected: @failure_converter.to_failure(e, @payload_converter)
+ )
+ )
+ )
+ end
+ end
+
+ def run_workflow
+ result = @inbound.execute(
+ Temporalio::Worker::Interceptor::Workflow::ExecuteInput.new(
+ args: @workflow_arguments,
+ headers: ProtoUtils.headers_from_proto_map(@init_job.headers, @payload_converter) || {}
+ )
+ )
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ complete_workflow_execution: Bridge::Api::WorkflowCommands::CompleteWorkflowExecution.new(
+ result: @payload_converter.to_payload(result)
+ )
+ )
+ )
+ end
+
+ def schedule(
+ top_level: false,
+ handler_exec: nil,
+ &
+ )
+ in_progress_handlers << handler_exec if handler_exec
+ Fiber.schedule do
+ yield
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ if top_level
+ on_top_level_exception(e)
+ else
+ @current_activation_error ||= e
+ end
+ ensure
+ in_progress_handlers.delete(handler_exec) if handler_exec
+ end
+ end
+
+ def on_top_level_exception(err)
+ if err.is_a?(Workflow::ContinueAsNewError)
+ @logger.debug('Workflow requested continue as new')
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ continue_as_new_workflow_execution: Bridge::Api::WorkflowCommands::ContinueAsNewWorkflowExecution.new(
+ workflow_type: if err.workflow
+ Workflow::Definition._workflow_type_from_workflow_parameter(err.workflow)
+ end,
+ task_queue: err.task_queue,
+ arguments: ProtoUtils.convert_to_payload_array(payload_converter, err.args),
+ workflow_run_timeout: ProtoUtils.seconds_to_duration(err.run_timeout),
+ workflow_task_timeout: ProtoUtils.seconds_to_duration(err.task_timeout),
+ memo: ProtoUtils.memo_to_proto_hash(err.memo, payload_converter),
+ headers: ProtoUtils.headers_to_proto_hash(err.headers, payload_converter),
+ search_attributes: err.search_attributes&._to_proto,
+ retry_policy: err.retry_policy&._to_proto
+ )
+ )
+ )
+ elsif @cancellation.canceled? && Error.canceled?(err)
+ # If cancel was ever requested and this is a cancellation or an activity/child cancellation, we add a
+ # cancel command. Technically this means that a swallowed cancel followed by, say, an activity cancel
+ # later on will show the workflow as canceled. But this is a Temporal limitation in that cancellation is
+ # a state, not an event.
+ @logger.debug('Workflow requested to cancel and properly raised cancel')
+ @logger.debug(err)
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ cancel_workflow_execution: Bridge::Api::WorkflowCommands::CancelWorkflowExecution.new
+ )
+ )
+ elsif failure_exception?(err)
+ @logger.debug('Workflow raised failure')
+ @logger.debug(err)
+ add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ fail_workflow_execution: Bridge::Api::WorkflowCommands::FailWorkflowExecution.new(
+ failure: @failure_converter.to_failure(err, @payload_converter)
+ )
+ )
+ )
+ else
+ @current_activation_error ||= err
+ end
+ end
+
+ def failure_exception?(err)
+ err.is_a?(Error::Failure) || err.is_a?(Timeout::Error) || @failure_exception_types.any? do |cls|
+ err.is_a?(cls)
+ end
+ end
+
+ def with_context_frozen(&)
+ @context_frozen = true
+ yield
+ ensure
+ @context_frozen = false
+ end
+
+ def convert_handler_args(payload_array:, defn:)
+ convert_args(
+ payload_array:,
+ method_name: defn.to_invoke.is_a?(Symbol) ? defn.to_invoke : nil,
+ raw_args: defn.raw_args,
+ ignore_first_param: defn.name.nil? # Dynamic
+ )
+ end
+
+ def convert_args(payload_array:, method_name:, raw_args:, ignore_first_param: false)
+ # Just in case it is not an array
+ payload_array = payload_array.to_ary
+
+ # We want to discard extra arguments when we can. If there is a method
+ # name, try to look it up. Then, assuming there's no :rest parameter,
+ # trim the args to the number of :req and :opt parameters.
+ if method_name && @definition.workflow_class.method_defined?(method_name)
+ count = 0
+ req_count = 0
+ @definition.workflow_class.instance_method(method_name).parameters.each do |(type, _)|
+ if type == :rest
+ count = nil
+ break
+ elsif %i[req opt].include?(type)
+ count += 1
+ req_count += 1 if type == :req
+ end
+ end
+ # Fail if there are too few required param values; trim off the excess if there are too many. A nil count means the method has a splat.
+ if count
+ if ignore_first_param
+ count -= 1
+ req_count -= 1
+ end
+ if req_count > payload_array.size
+ # We have to fail here instead of letting Ruby fail the invocation because some handlers, such as signals,
+ # want to log and ignore invalid arguments instead of failing. If we relied on Ruby's failure, we couldn't
+ # differentiate a too-few-params error raised by us from one raised elsewhere by user code.
+ raise ArgumentError, "wrong number of required arguments for #{method_name} " \
+ "(given #{payload_array.size}, expected #{req_count})"
+ end
+ payload_array = payload_array.take(count)
+ end
+ end
+
+ # Convert
+ if raw_args
+ payload_array.map { |p| Converters::RawValue.new(p) }
+ else
+ ProtoUtils.convert_from_payload_array(@payload_converter, payload_array)
+ end
+ end
+
+ def scoped_logger_info
+ @scoped_logger_info ||= {
+ attempt: info.attempt,
+ namespace: info.namespace,
+ run_id: info.run_id,
+ task_queue: info.task_queue,
+ workflow_id: info.workflow_id,
+ workflow_type: info.workflow_type
+ }
+ # Append update info if there is any
+ update_info = Fiber[:__temporal_update_info]
+ return @scoped_logger_info unless update_info
+
+ @scoped_logger_info.merge({ update_id: update_info.id, update_name: update_info.name })
+ end
+
+ def warn_on_any_unfinished_handlers
+ updates, signals = in_progress_handlers.select do |h|
+ h.unfinished_policy == Workflow::HandlerUnfinishedPolicy::WARN_AND_ABANDON
+ end.partition(&:update_id)
+
+ unless updates.empty?
+ updates_str = JSON.generate(updates.map { |u| { name: u.name, id: u.update_id } })
+ warn(
+ "[TMPRL1102] Workflow #{info.workflow_id} finished while update handlers are still running. This may " \
+ 'have interrupted work that the update handler was doing, and the client that sent the update will ' \
+ "receive a 'workflow execution already completed' RPCError instead of the update result. You can wait " \
+ 'for all update and signal handlers to complete by using ' \
+ '`Temporalio::Workflow.wait_condition { Temporalio::Workflow.handlers_finished? }`. ' \
+ 'Alternatively, if both you and the clients sending the update are okay with interrupting running ' \
+ 'handlers when the workflow finishes, and causing clients to receive errors, then you can disable this ' \
+ 'warning via the update handler definition: ' \
+ '`workflow_update unfinished_policy: Temporalio::Workflow::HandlerUnfinishedPolicy.ABANDON`. ' \
+ "The following updates were unfinished (and warnings were not disabled for their handler): #{updates_str}"
+ )
+ end
+
+ return if signals.empty?
+
+ signals_str = JSON.generate(signals.group_by(&:name)
+ .transform_values(&:size).sort_by { |_, v| -v }.map { |name, count| { name:, count: } })
+ warn(
+ "[TMPRL1102] Workflow #{info.workflow_id} finished while signal handlers are still running. This may " \
+ 'have interrupted work that the signal handler was doing. You can wait for all update and signal ' \
+ 'handlers to complete by using ' \
+ '`Temporalio::Workflow.wait_condition { Temporalio::Workflow.handlers_finished? }`. ' \
+ 'Alternatively, if both you and the clients sending the signal are okay with interrupting running ' \
+ 'handlers when the workflow finishes, then you can disable this warning via the signal handler ' \
+ 'definition: ' \
+ '`workflow_signal unfinished_policy: Temporalio::Workflow::HandlerUnfinishedPolicy.ABANDON`. ' \
+ "The following signals were unfinished (and warnings were not disabled for their handler): #{signals_str}"
+ )
+ end
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/child_workflow_handle.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/child_workflow_handle.rb
new file mode 100644
index 00000000..e6c5656e
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/child_workflow_handle.rb
@@ -0,0 +1,54 @@
+# frozen_string_literal: true
+
+require 'temporalio/cancellation'
+require 'temporalio/workflow'
+require 'temporalio/workflow/child_workflow_handle'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Implementation of the child workflow handle.
+ class ChildWorkflowHandle < Workflow::ChildWorkflowHandle
+ attr_reader :id, :first_execution_run_id
+
+ def initialize(id:, first_execution_run_id:, instance:, cancellation:, cancel_callback_key:) # rubocop:disable Lint/MissingSuper
+ @id = id
+ @first_execution_run_id = first_execution_run_id
+ @instance = instance
+ @cancellation = cancellation
+ @cancel_callback_key = cancel_callback_key
+ @resolution = nil
+ end
+
+ def result
+ # Note that we deliberately provide a detached cancellation here instead of defaulting to workflow
+ # cancellation because we don't want workflow cancellation (or a user-provided cancellation for this result
+ # call) to interrupt waiting on a child that may be processing the cancellation.
+ Workflow.wait_condition(cancellation: Cancellation.new) { @resolution }
+
+ case @resolution.status
+ when :completed
+ @instance.payload_converter.from_payload(@resolution.completed.result)
+ when :failed
+ raise @instance.failure_converter.from_failure(@resolution.failed.failure, @instance.payload_converter)
+ when :cancelled
+ raise @instance.failure_converter.from_failure(@resolution.cancelled.failure, @instance.payload_converter)
+ else
+ raise "Unrecognized resolution status: #{@resolution.status}"
+ end
+ end
+
+ def _resolve(resolution)
+ @cancellation.remove_cancel_callback(@cancel_callback_key)
+ @resolution = resolution
+ end
+
+ def signal(signal, *args, cancellation: Workflow.cancellation)
+ @instance.context._signal_child_workflow(id:, signal:, args:, cancellation:)
+ end
+ end
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/context.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/context.rb
new file mode 100644
index 00000000..6736bd4f
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/context.rb
@@ -0,0 +1,329 @@
+# frozen_string_literal: true
+
+require 'temporalio/cancellation'
+require 'temporalio/error'
+require 'temporalio/internal/bridge/api'
+require 'temporalio/internal/proto_utils'
+require 'temporalio/internal/worker/workflow_instance'
+require 'temporalio/internal/worker/workflow_instance/external_workflow_handle'
+require 'temporalio/worker/interceptor'
+require 'temporalio/workflow'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Context for all workflow calls. All calls in the {Workflow} class should call a method on this class and then
+ # this class can delegate the call as needed to other parts of the workflow instance system.
+ class Context
+ def initialize(instance)
+ @instance = instance
+ end
+
+ def all_handlers_finished?
+ @instance.in_progress_handlers.empty?
+ end
+
+ def cancellation
+ @instance.cancellation
+ end
+
+ def continue_as_new_suggested
+ @instance.continue_as_new_suggested
+ end
+
+ def current_history_length
+ @instance.current_history_length
+ end
+
+ def current_history_size
+ @instance.current_history_size
+ end
+
+ def current_update_info
+ Fiber[:__temporal_update_info]
+ end
+
+ def deprecate_patch(patch_id)
+ @instance.patch(patch_id:, deprecated: true)
+ end
+
+ def execute_activity(
+ activity,
+ *args,
+ task_queue:,
+ schedule_to_close_timeout:,
+ schedule_to_start_timeout:,
+ start_to_close_timeout:,
+ heartbeat_timeout:,
+ retry_policy:,
+ cancellation:,
+ cancellation_type:,
+ activity_id:,
+ disable_eager_execution:
+ )
+ @outbound.execute_activity(
+ Temporalio::Worker::Interceptor::Workflow::ExecuteActivityInput.new(
+ activity:,
+ args:,
+ task_queue: task_queue || info.task_queue,
+ schedule_to_close_timeout:,
+ schedule_to_start_timeout:,
+ start_to_close_timeout:,
+ heartbeat_timeout:,
+ retry_policy:,
+ cancellation:,
+ cancellation_type:,
+ activity_id:,
+ disable_eager_execution: disable_eager_execution || @instance.disable_eager_activity_execution,
+ headers: {}
+ )
+ )
+ end
+
+ def execute_local_activity(
+ activity,
+ *args,
+ schedule_to_close_timeout:,
+ schedule_to_start_timeout:,
+ start_to_close_timeout:,
+ retry_policy:,
+ local_retry_threshold:,
+ cancellation:,
+ cancellation_type:,
+ activity_id:
+ )
+ @outbound.execute_local_activity(
+ Temporalio::Worker::Interceptor::Workflow::ExecuteLocalActivityInput.new(
+ activity:,
+ args:,
+ schedule_to_close_timeout:,
+ schedule_to_start_timeout:,
+ start_to_close_timeout:,
+ retry_policy:,
+ local_retry_threshold:,
+ cancellation:,
+ cancellation_type:,
+ activity_id:,
+ headers: {}
+ )
+ )
+ end
+
+ def external_workflow_handle(workflow_id, run_id: nil)
+ ExternalWorkflowHandle.new(id: workflow_id, run_id:, instance: @instance)
+ end
+
+ def illegal_call_tracing_disabled(&)
+ @instance.illegal_call_tracing_disabled(&)
+ end
+
+ def info
+ @instance.info
+ end
+
+ def initialize_continue_as_new_error(error)
+ @outbound.initialize_continue_as_new_error(
+ Temporalio::Worker::Interceptor::Workflow::InitializeContinueAsNewErrorInput.new(error:)
+ )
+ end
+
+ def logger
+ @instance.logger
+ end
+
+ def memo
+ @instance.memo
+ end
+
+ def metric_meter
+ @instance.metric_meter
+ end
+
+ def now
+ @instance.now
+ end
+
+ def patched(patch_id)
+ @instance.patch(patch_id:, deprecated: false)
+ end
+
+ def payload_converter
+ @instance.payload_converter
+ end
+
+ def query_handlers
+ @instance.query_handlers
+ end
+
+ def random
+ @instance.random
+ end
+
+ def replaying?
+ @instance.replaying
+ end
+
+ def search_attributes
+ @instance.search_attributes
+ end
+
+ def signal_handlers
+ @instance.signal_handlers
+ end
+
+ def sleep(duration, summary:, cancellation:)
+ @outbound.sleep(
+ Temporalio::Worker::Interceptor::Workflow::SleepInput.new(
+ duration:,
+ summary:,
+ cancellation:
+ )
+ )
+ end
+
+ def start_child_workflow(
+ workflow,
+ *args,
+ id:,
+ task_queue:,
+ cancellation:,
+ cancellation_type:,
+ parent_close_policy:,
+ execution_timeout:,
+ run_timeout:,
+ task_timeout:,
+ id_reuse_policy:,
+ retry_policy:,
+ cron_schedule:,
+ memo:,
+ search_attributes:
+ )
+ @outbound.start_child_workflow(
+ Temporalio::Worker::Interceptor::Workflow::StartChildWorkflowInput.new(
+ workflow:,
+ args:,
+ id:,
+ task_queue:,
+ cancellation:,
+ cancellation_type:,
+ parent_close_policy:,
+ execution_timeout:,
+ run_timeout:,
+ task_timeout:,
+ id_reuse_policy:,
+ retry_policy:,
+ cron_schedule:,
+ memo:,
+ search_attributes:,
+ headers: {}
+ )
+ )
+ end
+
+ def timeout(duration, exception_class, *exception_args, summary:, &)
+ raise 'Block required for timeout' unless block_given?
+
+ # Run timer in background and block in foreground. This gives better stack traces than a future any-of race.
+ # We make a detached cancellation because we don't want to link to workflow cancellation.
+ sleep_cancel, sleep_cancel_proc = Cancellation.new
+ fiber = Fiber.current
+ Workflow::Future.new do
+ Workflow.sleep(duration, summary:, cancellation: sleep_cancel)
+ fiber.raise(exception_class, *exception_args) if fiber.alive? # steep:ignore
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ # Re-raise in fiber
+ fiber.raise(e) if fiber.alive?
+ end
+
+ begin
+ yield
+ ensure
+ sleep_cancel_proc.call
+ end
+ end
+
+ def update_handlers
+ @instance.update_handlers
+ end
+
+ def upsert_memo(hash)
+ # Convert to a memo, apply the updates, then add the command (so the command is only added after validation)
+ upserted_memo = ProtoUtils.memo_to_proto(hash, payload_converter)
+ memo._update do |new_hash|
+ hash.each do |key, val|
+ # Nil means delete
+ if val.nil?
+ new_hash.delete(key.to_s)
+ else
+ new_hash[key.to_s] = val
+ end
+ end
+ end
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ modify_workflow_properties: Bridge::Api::WorkflowCommands::ModifyWorkflowProperties.new(
+ upserted_memo:
+ )
+ )
+ )
+ end
+
+ def upsert_search_attributes(*updates)
+ # Apply the updates, then add the command (so the command is only added after validation)
+ search_attributes._disable_mutations = false
+ search_attributes.update!(*updates)
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ upsert_workflow_search_attributes: Bridge::Api::WorkflowCommands::UpsertWorkflowSearchAttributes.new(
+ search_attributes: updates.to_h(&:_to_proto_pair)
+ )
+ )
+ )
+ ensure
+ search_attributes._disable_mutations = true
+ end
+
+ def wait_condition(cancellation:, &)
+ @instance.scheduler.wait_condition(cancellation:, &)
+ end
+
+ def _cancel_external_workflow(id:, run_id:)
+ @outbound.cancel_external_workflow(
+ Temporalio::Worker::Interceptor::Workflow::CancelExternalWorkflowInput.new(id:, run_id:)
+ )
+ end
+
+ def _outbound=(outbound)
+ @outbound = outbound
+ end
+
+ def _signal_child_workflow(id:, signal:, args:, cancellation:)
+ @outbound.signal_child_workflow(
+ Temporalio::Worker::Interceptor::Workflow::SignalChildWorkflowInput.new(
+ id:,
+ signal:,
+ args:,
+ cancellation:,
+ headers: {}
+ )
+ )
+ end
+
+ def _signal_external_workflow(id:, run_id:, signal:, args:, cancellation:)
+ @outbound.signal_external_workflow(
+ Temporalio::Worker::Interceptor::Workflow::SignalExternalWorkflowInput.new(
+ id:,
+ run_id:,
+ signal:,
+ args:,
+ cancellation:,
+ headers: {}
+ )
+ )
+ end
+ end
+ end
+ end
+ end
+end
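The `timeout` helper above runs a timer in the background and raises into the waiting fiber, cancelling the timer on the way out. The same shape can be sketched with plain threads (illustrative only; the real code uses the workflow's deterministic fiber scheduler and a detached `Cancellation`, and `with_timeout` is a hypothetical name):

```ruby
# Sketch of the background-timer-raises-into-foreground pattern, using plain
# threads instead of workflow fibers.
def with_timeout(seconds, exception_class, message)
  main = Thread.current
  timer = Thread.new do
    sleep seconds
    # Raise into the waiting caller, like fiber.raise in the workflow version
    main.raise(exception_class, message) if main.alive?
  end
  begin
    yield
  ensure
    # Equivalent of invoking the detached cancellation proc
    timer.kill
  end
end

# Completes before the timer fires
fast = with_timeout(5, RuntimeError, 'too slow') { 1 + 1 }
```

As in the workflow version, the `ensure` guarantees the timer is torn down whether the block completes or raises.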
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/details.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/details.rb
new file mode 100644
index 00000000..caf043fe
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/details.rb
@@ -0,0 +1,44 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Details needed to instantiate a {WorkflowInstance}.
+ class Details
+ attr_reader :namespace, :task_queue, :definition, :initial_activation, :logger, :metric_meter,
+ :payload_converter, :failure_converter, :interceptors, :disable_eager_activity_execution,
+ :illegal_calls, :workflow_failure_exception_types
+
+ def initialize(
+ namespace:,
+ task_queue:,
+ definition:,
+ initial_activation:,
+ logger:,
+ metric_meter:,
+ payload_converter:,
+ failure_converter:,
+ interceptors:,
+ disable_eager_activity_execution:,
+ illegal_calls:,
+ workflow_failure_exception_types:
+ )
+ @namespace = namespace
+ @task_queue = task_queue
+ @definition = definition
+ @initial_activation = initial_activation
+ @logger = logger
+ @metric_meter = metric_meter
+ @payload_converter = payload_converter
+ @failure_converter = failure_converter
+ @interceptors = interceptors
+ @disable_eager_activity_execution = disable_eager_activity_execution
+ @illegal_calls = illegal_calls
+ @workflow_failure_exception_types = workflow_failure_exception_types
+ end
+ end
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/external_workflow_handle.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/external_workflow_handle.rb
new file mode 100644
index 00000000..c880f2c3
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/external_workflow_handle.rb
@@ -0,0 +1,32 @@
+# frozen_string_literal: true
+
+require 'temporalio/cancellation'
+require 'temporalio/workflow'
+require 'temporalio/workflow/external_workflow_handle'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Implementation of the external workflow handle.
+ class ExternalWorkflowHandle < Workflow::ExternalWorkflowHandle
+ attr_reader :id, :run_id
+
+ def initialize(id:, run_id:, instance:) # rubocop:disable Lint/MissingSuper
+ @id = id
+ @run_id = run_id
+ @instance = instance
+ end
+
+ def signal(signal, *args, cancellation: Workflow.cancellation)
+ @instance.context._signal_external_workflow(id:, run_id:, signal:, args:, cancellation:)
+ end
+
+ def cancel
+ @instance.context._cancel_external_workflow(id:, run_id:)
+ end
+ end
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/externally_immutable_hash.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/externally_immutable_hash.rb
new file mode 100644
index 00000000..08533314
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/externally_immutable_hash.rb
@@ -0,0 +1,22 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Delegator to a hash that does not allow external mutations. Used for memo.
+ class ExternallyImmutableHash < SimpleDelegator
+ def initialize(initial_hash)
+ super(initial_hash.freeze)
+ end
+
+ def _update(&)
+ new_hash = __getobj__.dup
+ yield new_hash
+ __setobj__(new_hash.freeze)
+ end
+ end
+ end
+ end
+ end
+end
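`ExternallyImmutableHash` keeps the delegated hash frozen at all times and applies internal updates copy-on-write via `_update`. A minimal standalone sketch of the same idea (class name is illustrative):

```ruby
require 'delegate'

# Copy-on-write hash delegator: external callers see a frozen hash, while
# internal code mutates through a dup-modify-freeze cycle.
class ImmutableView < SimpleDelegator
  def initialize(initial_hash)
    super(initial_hash.freeze)
  end

  def _update
    new_hash = __getobj__.dup
    yield new_hash
    __setobj__(new_hash.freeze)
  end
end

memo = ImmutableView.new({ 'a' => 1 })
memo._update { |h| h['b'] = 2 } # internal mutation succeeds
```

External mutation attempts such as `memo['c'] = 3` hit the frozen underlying hash and raise `FrozenError`.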
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/handler_execution.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/handler_execution.rb
new file mode 100644
index 00000000..a1ac62b9
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/handler_execution.rb
@@ -0,0 +1,25 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Representation of a currently-executing handler. Used to track whether any handlers are still running so
+ # a warning can be issued at workflow completion if needed.
+ class HandlerExecution
+ attr_reader :name, :update_id, :unfinished_policy
+
+ def initialize(
+ name:,
+ update_id:,
+ unfinished_policy:
+ )
+ @name = name
+ @update_id = update_id
+ @unfinished_policy = unfinished_policy
+ end
+ end
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/handler_hash.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/handler_hash.rb
new file mode 100644
index 00000000..890b8dc0
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/handler_hash.rb
@@ -0,0 +1,41 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Hash for handlers that notifies when one is added. Only `[]=` and `store` can be used to mutate it.
+ class HandlerHash < SimpleDelegator
+ def initialize(initial_frozen_hash, definition_class, &on_new_definition)
+ super(initial_frozen_hash)
+ @definition_class = definition_class
+ @on_new_definition = on_new_definition
+ end
+
+ def []=(name, definition)
+ store(name, definition)
+ end
+
+ # steep:ignore:start
+ def store(name, definition)
+ raise ArgumentError, 'Name must be a string or nil' unless name.nil? || name.is_a?(String)
+
+ unless definition.nil? || definition.is_a?(@definition_class)
+ raise ArgumentError,
+ "Value must be a #{@definition_class.name} or nil"
+ end
+ raise ArgumentError, 'Name does not match one in definition' if definition && name != definition.name
+
+ # Do a copy-on-write op on the underlying frozen hash
+ new_hash = __getobj__.dup
+ new_hash[name] = definition
+ __setobj__(new_hash.freeze)
+ @on_new_definition&.call(definition) unless definition.nil?
+ definition
+ end
+ # steep:ignore:end
+ end
+ end
+ end
+ end
+end
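`HandlerHash` combines the copy-on-write approach with a notification hook so the instance can react when a handler is registered at runtime. The hook mechanism in isolation (names are illustrative, and validation is omitted):

```ruby
require 'delegate'

# Hash delegator that only mutates through #store, copy-on-write style, and
# fires a callback for each newly stored non-nil value.
class NotifyingHash < SimpleDelegator
  def initialize(initial_frozen_hash, &on_store)
    super(initial_frozen_hash)
    @on_store = on_store
  end

  def store(name, value)
    new_hash = __getobj__.dup
    new_hash[name] = value
    __setobj__(new_hash.freeze)
    @on_store&.call(value) unless value.nil?
    value
  end

  def []=(name, value)
    store(name, value)
  end
end

seen = []
handlers = NotifyingHash.new({}.freeze) { |v| seen << v }
handlers['greet'] = :greet_handler # callback fires with :greet_handler
```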
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/illegal_call_tracer.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/illegal_call_tracer.rb
new file mode 100644
index 00000000..5b6432cd
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/illegal_call_tracer.rb
@@ -0,0 +1,97 @@
+# frozen_string_literal: true
+
+require 'temporalio/workflow'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Class that installs {::TracePoint} to disallow illegal calls.
+ class IllegalCallTracer
+ def self.frozen_validated_illegal_calls(illegal_calls)
+ illegal_calls.to_h do |key, val|
+ raise TypeError, 'Invalid illegal call map, top-level key must be a String' unless key.is_a?(String)
+
+ # @type var fixed_val: :all | Hash[Symbol, bool]
+ fixed_val = case val
+ when Array
+ val.to_h do |sub_val|
+ unless sub_val.is_a?(Symbol)
+ raise TypeError,
+ 'Invalid illegal call map, each value must be a Symbol'
+ end
+
+ [sub_val, true]
+ end.freeze
+ when :all
+ :all
+ else
+ raise TypeError, 'Invalid illegal call map, top-level value must be an Array or :all'
+ end
+
+ [key.frozen? ? key : key.dup.freeze, fixed_val]
+ end.freeze
+ end
+
+ # Illegal calls are Hash[String, :all | Hash[Symbol, Bool]]
+ def initialize(illegal_calls)
+ @tracepoint = TracePoint.new(:call, :c_call) do |tp|
+ # Manual check for the proper thread since we have seen issues in Ruby 3.2 where the tracepoint leaks to other threads
+ next unless Thread.current == @enabled_thread
+
+ cls = tp.defined_class
+ next unless cls.is_a?(Module)
+
+ # Extract the class name from the defined class. This is more difficult than it seems because you have to
+ # resolve the attached object of the singleton class. But in older Ruby (at least <= 3.1), the singleton
+ # class of things like `Date` does not have `attached_object` so you have to fall back in these rare cases
+ # to parsing the string output. Reaching the string parsing component is rare, so this should not have
+ # significant performance impact.
+ cls_name = if cls.singleton_class?
+ if cls.respond_to?(:attached_object)
+ cls = cls.attached_object # steep:ignore
+ next unless cls.is_a?(Module)
+
+ cls.name.to_s
+ else
+ cls.to_s.delete_prefix('#')
+ end
+ else
+ cls.name.to_s
+ end
+
+ # Check if the call is considered illegal
+ vals = illegal_calls[cls_name]
+ if vals == :all || vals&.[](tp.callee_id) # steep:ignore
+ raise Workflow::NondeterminismError,
+ "Cannot access #{cls_name} #{tp.callee_id} from inside a " \
+ 'workflow. If this is known to be safe, the code can be run in ' \
+ 'a Temporalio::Workflow::Unsafe.illegal_call_tracing_disabled block.'
+ end
+ end
+ end
+
+ def enable(&block)
+ # We've seen leaking issues in Ruby 3.2 where the TracePoint inadvertently remains enabled even for threads
+ # that it was not started on. So we will check the thread ourselves.
+ @enabled_thread = Thread.current
+ @tracepoint.enable do
+ block.call
+ ensure
+ @enabled_thread = nil
+ end
+ end
+
+ def disable(&block)
+ previous_thread = @enabled_thread
+ @tracepoint.disable do
+ block.call
+ ensure
+ @enabled_thread = previous_thread
+ end
+ end
+ end
+ end
+ end
+ end
+end
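The tracer above hinges on `TracePoint` firing for every `:call`/`:c_call` and being able to raise at the offending call site. A reduced sketch of that mechanism, without the singleton-class resolution (the `Database` class and error message are illustrative, not part of the SDK):

```ruby
# Reduced illegal-call tracer: raises when a listed class/method pair is
# called while the trace is enabled.
class Database
  def query
    'rows'
  end
end

ILLEGAL = { 'Database' => { query: true } }.freeze

TRACER = TracePoint.new(:call) do |tp|
  cls = tp.defined_class
  next unless cls.is_a?(Module)

  vals = ILLEGAL[cls.name.to_s]
  raise "Illegal call: #{cls.name}##{tp.callee_id}" if vals&.[](tp.callee_id)
end

caught = nil
TRACER.enable do
  Database.new.query
rescue RuntimeError => e
  # The raise from the tracepoint surfaces at the traced call site
  caught = e.message
end
```

Outside the `enable` block the tracepoint is inactive, so the same call succeeds normally.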
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/inbound_implementation.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/inbound_implementation.rb
new file mode 100644
index 00000000..997d4f0e
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/inbound_implementation.rb
@@ -0,0 +1,62 @@
+# frozen_string_literal: true
+
+require 'temporalio/internal/worker/workflow_instance'
+require 'temporalio/worker/interceptor'
+require 'temporalio/workflow'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Root implementation of the inbound interceptor.
+ class InboundImplementation < Temporalio::Worker::Interceptor::Workflow::Inbound
+ def initialize(instance)
+ super(nil) # steep:ignore
+ @instance = instance
+ end
+
+ def init(outbound)
+ @instance.context._outbound = outbound
+ end
+
+ def execute(input)
+ @instance.instance.execute(*input.args)
+ end
+
+ def handle_signal(input)
+ invoke_handler(input.signal, input)
+ end
+
+ def handle_query(input)
+ invoke_handler(input.query, input)
+ end
+
+ def validate_update(input)
+ invoke_handler(input.update, input, to_invoke: input.definition.validator_to_invoke)
+ end
+
+ def handle_update(input)
+ invoke_handler(input.update, input)
+ end
+
+ private
+
+ def invoke_handler(name, input, to_invoke: input.definition.to_invoke)
+ args = input.args
+ # Add name as first param if dynamic
+ args = [name] + args if input.definition.name.nil?
+ # Assume symbol or proc
+ case to_invoke
+ when Symbol
+ @instance.instance.send(to_invoke, *args)
+ when Proc
+ to_invoke.call(*args)
+ else
+ raise "Unrecognized invocation type #{to_invoke.class}"
+ end
+ end
+ end
+ end
+ end
+ end
+end
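`invoke_handler` dispatches to either a method name on the workflow instance or a proc. The dispatch rule in isolation (all names here are illustrative):

```ruby
# Handler dispatch: a Symbol is sent to the receiver, a Proc is called
# directly; anything else is an error.
def invoke(receiver, to_invoke, *args)
  case to_invoke
  when Symbol
    receiver.send(to_invoke, *args)
  when Proc
    to_invoke.call(*args)
  else
    raise "Unrecognized invocation type #{to_invoke.class}"
  end
end

class Greeter
  def hello(name)
    "hello #{name}"
  end
end

by_symbol = invoke(Greeter.new, :hello, 'world')
by_proc = invoke(nil, ->(name) { "hi #{name}" }, 'world')
```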
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/outbound_implementation.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/outbound_implementation.rb
new file mode 100644
index 00000000..5cf46c25
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/outbound_implementation.rb
@@ -0,0 +1,411 @@
+# frozen_string_literal: true
+
+require 'temporalio/activity/definition'
+require 'temporalio/cancellation'
+require 'temporalio/error'
+require 'temporalio/internal/bridge/api'
+require 'temporalio/internal/proto_utils'
+require 'temporalio/internal/worker/workflow_instance'
+require 'temporalio/worker/interceptor'
+require 'temporalio/workflow'
+require 'temporalio/workflow/child_workflow_handle'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Root implementation of the outbound interceptor.
+ class OutboundImplementation < Temporalio::Worker::Interceptor::Workflow::Outbound
+ def initialize(instance)
+ super(nil) # steep:ignore
+ @instance = instance
+ @activity_counter = 0
+ @timer_counter = 0
+ @child_counter = 0
+ @external_signal_counter = 0
+ @external_cancel_counter = 0
+ end
+
+ def cancel_external_workflow(input)
+ # Add command
+ seq = (@external_cancel_counter += 1)
+ cmd = Bridge::Api::WorkflowCommands::RequestCancelExternalWorkflowExecution.new(
+ seq:,
+ workflow_execution: Bridge::Api::Common::NamespacedWorkflowExecution.new(
+ namespace: @instance.info.namespace,
+ workflow_id: input.id,
+ run_id: input.run_id
+ )
+ )
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(request_cancel_external_workflow_execution: cmd)
+ )
+ @instance.pending_external_cancels[seq] = Fiber.current
+
+ # Wait
+ resolution = Fiber.yield
+
+ # Raise if resolution has failure
+ return unless resolution.failure
+
+ raise @instance.failure_converter.from_failure(resolution.failure, @instance.payload_converter)
+ end
+
+ def execute_activity(input)
+ if input.schedule_to_close_timeout.nil? && input.start_to_close_timeout.nil?
+ raise ArgumentError, 'Activity must have schedule_to_close_timeout or start_to_close_timeout'
+ end
+
+ activity_type = case input.activity
+ when Class
+ Activity::Definition::Info.from_activity(input.activity).name
+ when Symbol, String
+ input.activity.to_s
+ else
+ raise ArgumentError, 'Activity must be a definition class, or a symbol/string'
+ end
+ execute_activity_with_local_backoffs(local: false, cancellation: input.cancellation) do
+ seq = (@activity_counter += 1)
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ schedule_activity: Bridge::Api::WorkflowCommands::ScheduleActivity.new(
+ seq:,
+ activity_id: input.activity_id || seq.to_s,
+ activity_type:,
+ task_queue: input.task_queue,
+ headers: ProtoUtils.headers_to_proto_hash(input.headers, @instance.payload_converter),
+ arguments: ProtoUtils.convert_to_payload_array(@instance.payload_converter, input.args),
+ schedule_to_close_timeout: ProtoUtils.seconds_to_duration(input.schedule_to_close_timeout),
+ schedule_to_start_timeout: ProtoUtils.seconds_to_duration(input.schedule_to_start_timeout),
+ start_to_close_timeout: ProtoUtils.seconds_to_duration(input.start_to_close_timeout),
+ heartbeat_timeout: ProtoUtils.seconds_to_duration(input.heartbeat_timeout),
+ retry_policy: input.retry_policy&._to_proto,
+ cancellation_type: input.cancellation_type,
+ do_not_eagerly_execute: input.disable_eager_execution
+ )
+ )
+ )
+ seq
+ end
+ end
+
+ def execute_local_activity(input)
+ if input.schedule_to_close_timeout.nil? && input.start_to_close_timeout.nil?
+ raise ArgumentError, 'Activity must have schedule_to_close_timeout or start_to_close_timeout'
+ end
+
+ activity_type = case input.activity
+ when Class
+ Activity::Definition::Info.from_activity(input.activity).name
+ when Symbol, String
+ input.activity.to_s
+ else
+ raise ArgumentError, 'Activity must be a definition class, or a symbol/string'
+ end
+ execute_activity_with_local_backoffs(local: true, cancellation: input.cancellation) do |do_backoff|
+ seq = (@activity_counter += 1)
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ schedule_local_activity: Bridge::Api::WorkflowCommands::ScheduleLocalActivity.new(
+ seq:,
+ activity_id: input.activity_id || seq.to_s,
+ activity_type:,
+ headers: ProtoUtils.headers_to_proto_hash(input.headers, @instance.payload_converter),
+ arguments: ProtoUtils.convert_to_payload_array(@instance.payload_converter, input.args),
+ schedule_to_close_timeout: ProtoUtils.seconds_to_duration(input.schedule_to_close_timeout),
+ schedule_to_start_timeout: ProtoUtils.seconds_to_duration(input.schedule_to_start_timeout),
+ start_to_close_timeout: ProtoUtils.seconds_to_duration(input.start_to_close_timeout),
+ retry_policy: input.retry_policy&._to_proto,
+ cancellation_type: input.cancellation_type,
+ local_retry_threshold: ProtoUtils.seconds_to_duration(input.local_retry_threshold),
+ attempt: do_backoff&.attempt || 0,
+ original_schedule_time: do_backoff&.original_schedule_time
+ )
+ )
+ )
+ seq
+ end
+ end
+
+ def execute_activity_with_local_backoffs(local:, cancellation:, &)
+ # We do not even want to schedule if the cancellation is already canceled. We choose to raise a canceled
+ # failure instead of wrapping it in an activity failure, which is similar to what other SDKs do, with the
+ # accepted tradeoff that it makes rescue more difficult (hence the presence of the Error.canceled? helper).
+ raise Error::CanceledError, 'Activity canceled before scheduled' if cancellation.canceled?
+
+ # This has to be done in a loop for local activity backoff
+ last_local_backoff = nil
+ loop do
+ result = execute_activity_once(local:, cancellation:, last_local_backoff:, &)
+ return result unless result.is_a?(Bridge::Api::ActivityResult::DoBackoff)
+
+ # @type var result: untyped
+ last_local_backoff = result
+ # Have to sleep the amount of the backoff, which can be canceled with the same cancellation
+ # TODO(cretz): What should this cancellation raise?
+ Workflow.sleep(ProtoUtils.duration_to_seconds(result.backoff_duration), cancellation:)
+ end
+ end
+
+ # If this doesn't raise, it returns the success result or a DoBackoff
+ def execute_activity_once(local:, cancellation:, last_local_backoff:, &)
+ # Add to pending activities (removed by the resolver)
+ seq = yield last_local_backoff
+ @instance.pending_activities[seq] = Fiber.current
+
+ # Add cancellation hook
+ cancel_callback_key = cancellation.add_cancel_callback do
+ # Only if the activity is present still
+ if @instance.pending_activities.include?(seq)
+ if local
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ request_cancel_local_activity: Bridge::Api::WorkflowCommands::RequestCancelLocalActivity.new(seq:)
+ )
+ )
+ else
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ request_cancel_activity: Bridge::Api::WorkflowCommands::RequestCancelActivity.new(seq:)
+ )
+ )
+ end
+ end
+ end
+
+ # Wait
+ resolution = Fiber.yield
+
+ # Remove cancellation callback
+ cancellation.remove_cancel_callback(cancel_callback_key)
+
+ case resolution.status
+ when :completed
+ @instance.payload_converter.from_payload(resolution.completed.result)
+ when :failed
+ raise @instance.failure_converter.from_failure(resolution.failed.failure, @instance.payload_converter)
+ when :cancelled
+ raise @instance.failure_converter.from_failure(resolution.cancelled.failure, @instance.payload_converter)
+ when :backoff
+ resolution.backoff
+ else
+ raise "Unrecognized resolution status: #{resolution.status}"
+ end
+ end
+
+ def initialize_continue_as_new_error(input)
+ # Do nothing
+ end
+
+ def signal_child_workflow(input)
+ _signal_external_workflow(
+ id: input.id,
+ run_id: nil,
+ child: true,
+ signal: input.signal,
+ args: input.args,
+ cancellation: input.cancellation,
+ headers: input.headers
+ )
+ end
+
+ def signal_external_workflow(input)
+ _signal_external_workflow(
+ id: input.id,
+ run_id: input.run_id,
+ child: false,
+ signal: input.signal,
+ args: input.args,
+ cancellation: input.cancellation,
+ headers: input.headers
+ )
+ end
+
+ def _signal_external_workflow(id:, run_id:, child:, signal:, args:, cancellation:, headers:)
+ raise Error::CanceledError, 'Signal canceled before scheduled' if cancellation.canceled?
+
+ # Add command
+ seq = (@external_signal_counter += 1)
+ cmd = Bridge::Api::WorkflowCommands::SignalExternalWorkflowExecution.new(
+ seq:,
+ signal_name: Workflow::Definition::Signal._name_from_parameter(signal),
+ args: ProtoUtils.convert_to_payload_array(@instance.payload_converter, args),
+ headers: ProtoUtils.headers_to_proto_hash(headers, @instance.payload_converter)
+ )
+ if child
+ cmd.child_workflow_id = id
+ else
+ cmd.workflow_execution = Bridge::Api::Common::NamespacedWorkflowExecution.new(
+ namespace: @instance.info.namespace,
+ workflow_id: id,
+ run_id:
+ )
+ end
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(signal_external_workflow_execution: cmd)
+ )
+ @instance.pending_external_signals[seq] = Fiber.current
+
+ # Add a cancellation callback
+ cancel_callback_key = cancellation.add_cancel_callback do
+ # Add the command but do not raise, we will let resolution do that
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ cancel_signal_workflow: Bridge::Api::WorkflowCommands::CancelSignalWorkflow.new(seq:)
+ )
+ )
+ end
+
+ # Wait
+ resolution = Fiber.yield
+
+ # Remove cancellation callback
+ cancellation.remove_cancel_callback(cancel_callback_key)
+
+ # Raise if resolution has failure
+ return unless resolution.failure
+
+ raise @instance.failure_converter.from_failure(resolution.failure, @instance.payload_converter)
+ end
+
+ def sleep(input)
+ # If already cancelled, raise as such
+ if input.cancellation.canceled?
+ raise Error::CanceledError,
+ input.cancellation.canceled_reason || 'Timer canceled before started'
+ end
+
+ # Disallow negative durations
+ raise ArgumentError, 'Sleep duration cannot be less than 0' if input.duration&.negative?
+
+ # If the duration is infinite, just wait for cancellation
+ if input.duration.nil?
+ input.cancellation.wait
+ raise Error::CanceledError, input.cancellation.canceled_reason || 'Timer canceled'
+ end
+
+ # If the duration is zero, we make it one millisecond. A zero duration still creates a timer so that
+ # determinism is preserved if a timer's duration is changed from non-zero to zero or vice versa.
+ duration = input.duration
+ duration = 0.001 if duration.zero?
+
+ # Add command
+ seq = (@timer_counter += 1)
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ start_timer: Bridge::Api::WorkflowCommands::StartTimer.new(
+ seq:,
+ start_to_fire_timeout: ProtoUtils.seconds_to_duration(duration)
+ )
+ )
+ )
+ @instance.pending_timers[seq] = Fiber.current
+
+ # Add a cancellation callback
+ cancel_callback_key = input.cancellation.add_cancel_callback do
+ # Only if the timer is still present
+ fiber = @instance.pending_timers.delete(seq)
+ if fiber
+ # Add the command for cancel then raise
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ cancel_timer: Bridge::Api::WorkflowCommands::CancelTimer.new(seq:)
+ )
+ )
+ if fiber.alive?
+ fiber.raise(Error::CanceledError.new(input.cancellation.canceled_reason || 'Timer canceled'))
+ end
+ end
+ end
+
+ # Wait
+ Fiber.yield
+
+ # Remove cancellation callback (only needed on success)
+ input.cancellation.remove_cancel_callback(cancel_callback_key)
+ end
+
+ def start_child_workflow(input)
+ raise Error::CanceledError, 'Child canceled before scheduled' if input.cancellation.canceled?
+
+ # Add the command
+ seq = (@child_counter += 1)
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ start_child_workflow_execution: Bridge::Api::WorkflowCommands::StartChildWorkflowExecution.new(
+ seq:,
+ namespace: @instance.info.namespace,
+ workflow_id: input.id,
+ workflow_type: Workflow::Definition._workflow_type_from_workflow_parameter(input.workflow),
+ task_queue: input.task_queue,
+ input: ProtoUtils.convert_to_payload_array(@instance.payload_converter, input.args),
+ workflow_execution_timeout: ProtoUtils.seconds_to_duration(input.execution_timeout),
+ workflow_run_timeout: ProtoUtils.seconds_to_duration(input.run_timeout),
+ workflow_task_timeout: ProtoUtils.seconds_to_duration(input.task_timeout),
+ parent_close_policy: input.parent_close_policy,
+ workflow_id_reuse_policy: input.id_reuse_policy,
+ retry_policy: input.retry_policy&._to_proto,
+ cron_schedule: input.cron_schedule,
+ headers: ProtoUtils.headers_to_proto_hash(input.headers, @instance.payload_converter),
+ memo: ProtoUtils.memo_to_proto_hash(input.memo, @instance.payload_converter),
+ search_attributes: input.search_attributes&._to_proto_hash,
+ cancellation_type: input.cancellation_type
+ )
+ )
+ )
+
+ # Set as pending start and register cancel callback
+ @instance.pending_child_workflow_starts[seq] = Fiber.current
+ cancel_callback_key = input.cancellation.add_cancel_callback do
+ # Send cancel if in start or pending
+ if @instance.pending_child_workflow_starts.include?(seq) ||
+ @instance.pending_child_workflows.include?(seq)
+ @instance.add_command(
+ Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ cancel_child_workflow_execution: Bridge::Api::WorkflowCommands::CancelChildWorkflowExecution.new(
+ child_workflow_seq: seq
+ )
+ )
+ )
+ end
+ end
+
+ # Wait for start
+ resolution = Fiber.yield
+
+ case resolution.status
+ when :succeeded
+ # Create handle, passing along the cancel callback key, and set it as pending
+ handle = ChildWorkflowHandle.new(
+ id: input.id,
+ first_execution_run_id: resolution.succeeded.run_id,
+ instance: @instance,
+ cancellation: input.cancellation,
+ cancel_callback_key:
+ )
+ @instance.pending_child_workflows[seq] = handle
+ handle
+ when :failed
+ # Remove cancel callback and handle failure
+ input.cancellation.remove_cancel_callback(cancel_callback_key)
+ if resolution.failed.cause == :START_CHILD_WORKFLOW_EXECUTION_FAILED_CAUSE_WORKFLOW_ALREADY_EXISTS
+ raise Error::WorkflowAlreadyStartedError.new(
+ workflow_id: resolution.failed.workflow_id,
+ workflow_type: resolution.failed.workflow_type,
+ run_id: nil
+ )
+ end
+ raise "Unknown child start fail cause: #{resolution.failed.cause}"
+ when :cancelled
+ # Remove cancel callback and handle cancel
+ input.cancellation.remove_cancel_callback(cancel_callback_key)
+ raise @instance.failure_converter.from_failure(resolution.cancelled.failure, @instance.payload_converter)
+ else
+ raise "Unknown resolution status: #{resolution.status}"
+ end
+ end
+ end
+ end
+ end
+ end
+end
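Every outbound operation above follows the same shape: emit a command with a fresh sequence number, park `Fiber.current` in a pending map keyed by that number, `Fiber.yield`, and let the activation handler later resume the fiber with the resolution. The skeleton of that pattern (all names illustrative):

```ruby
# Command/resolution skeleton: the fiber suspends after emitting its command
# and is resumed later with the resolution for its sequence number.
pending = {}
commands = []
seq_counter = 0

work = Fiber.new do
  seq = (seq_counter += 1)
  commands << [seq, :schedule_activity]
  pending[seq] = Fiber.current
  resolution = Fiber.yield # parked until the resolver resumes us
  "result: #{resolution}"
end

work.resume # runs until Fiber.yield
# Later, when the activation carrying the resolution arrives:
resolved = pending.delete(1).resume('ok')
```

The cancellation callbacks in the real code slot into the parked window: they add a cancel command for the sequence number, and the eventual resolution (resumed the same way) carries the cancellation failure.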
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/replay_safe_logger.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/replay_safe_logger.rb
new file mode 100644
index 00000000..deb69738
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/replay_safe_logger.rb
@@ -0,0 +1,37 @@
+# frozen_string_literal: true
+
+require 'temporalio/scoped_logger'
+require 'temporalio/workflow'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Wrapper for a scoped logger that does not log on replay.
+ class ReplaySafeLogger < ScopedLogger
+ def initialize(logger:, instance:)
+ @instance = instance
+ @replay_safety_disabled = false
+ super(logger)
+ end
+
+ def replay_safety_disabled(&)
+ @replay_safety_disabled = true
+ yield
+ ensure
+ @replay_safety_disabled = false
+ end
+
+ def add(...)
+ if !@replay_safety_disabled && Temporalio::Workflow.in_workflow? && Temporalio::Workflow::Unsafe.replaying?
+ return true
+ end
+
+ # Disable illegal call tracing for the log call
+ @instance.illegal_call_tracing_disabled { super }
+ end
+ end
+ end
+ end
+ end
+end
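`ReplaySafeLogger` works because every `Logger` severity helper (`info`, `warn`, etc.) funnels through `add`, so overriding `add` alone is enough to gate output. A standalone sketch using a plain `Logger` subclass, with a `replaying` flag standing in for the workflow instance check:

```ruby
require 'logger'
require 'stringio'

# Logger whose output is suppressed while `replaying` is true. Overriding
# #add is sufficient because the severity helpers all delegate to it.
class ReplaySkippingLogger < Logger
  attr_accessor :replaying

  def add(severity, message = nil, progname = nil, &block)
    return true if replaying

    super
  end
end

out = StringIO.new
log = ReplaySkippingLogger.new(out)
log.replaying = true
log.info('during replay') # suppressed
log.replaying = false
log.info('live')          # written
```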
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/replay_safe_metric.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/replay_safe_metric.rb
new file mode 100644
index 00000000..c9ba6057
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/replay_safe_metric.rb
@@ -0,0 +1,40 @@
+# frozen_string_literal: true
+
+require 'temporalio/scoped_logger'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Wrapper for a metric that does not record on replay.
+ class ReplaySafeMetric < SimpleDelegator
+ def record(value, additional_attributes: nil)
+ return if Temporalio::Workflow.in_workflow? && Temporalio::Workflow::Unsafe.replaying?
+
+ super
+ end
+
+ def with_additional_attributes(additional_attributes)
+ ReplaySafeMetric.new(super)
+ end
+
+ class Meter < SimpleDelegator
+ def create_metric(
+ metric_type,
+ name,
+ description: nil,
+ unit: nil,
+ value_type: :integer
+ )
+ ReplaySafeMetric.new(super)
+ end
+
+ def with_additional_attributes(additional_attributes)
+ Meter.new(super)
+ end
+ end
+ end
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_instance/scheduler.rb b/temporalio/lib/temporalio/internal/worker/workflow_instance/scheduler.rb
new file mode 100644
index 00000000..b09abb97
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_instance/scheduler.rb
@@ -0,0 +1,163 @@
+# frozen_string_literal: true
+
+require 'temporalio'
+require 'temporalio/cancellation'
+require 'temporalio/error'
+require 'temporalio/internal/worker/workflow_instance'
+require 'temporalio/workflow'
+require 'timeout'
+
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ # Deterministic {::Fiber::Scheduler} implementation.
+ class Scheduler
+ def initialize(instance)
+ @instance = instance
+ @fibers = []
+ @ready = []
+ @wait_conditions = {}
+ @wait_condition_counter = 0
+ end
+
+ def context
+ @instance.context
+ end
+
+ def run_until_all_yielded
+ loop do
+ # Run all fibers until all yielded
+ while (fiber = @ready.shift)
+ fiber.resume
+ end
+
+ # Find the _first_ resolvable wait condition; if one is found, resolve it and loop again, otherwise return.
+ # It is important that we both let fibers get all settled _before_ this and only allow a _single_ wait
+ # condition to be satisfied before looping. This allows wait condition users to trust that the line of
+ # code after the wait condition still has the condition satisfied.
+ # @type var cond_fiber: Fiber?
+ cond_fiber = nil
+ cond_result = nil
+ @wait_conditions.each do |seq, cond|
+ next unless (cond_result = cond.first.call)
+
+ cond_fiber = cond[1]
+ @wait_conditions.delete(seq)
+ break
+ end
+ return unless cond_fiber
+
+ cond_fiber.resume(cond_result)
+ end
+ end
+
+ def wait_condition(cancellation:, &block)
+ raise Workflow::InvalidWorkflowStateError, 'Cannot wait in this context' if @instance.context_frozen
+
+ if cancellation&.canceled?
+ raise Error::CanceledError,
+ cancellation.canceled_reason || 'Wait condition canceled before started'
+ end
+
+ seq = (@wait_condition_counter += 1)
+ @wait_conditions[seq] = [block, Fiber.current]
+
+ # Add a cancellation callback
+ cancel_callback_key = cancellation&.add_cancel_callback do
+ # Only if the condition is still present
+ cond = @wait_conditions.delete(seq)
+ if cond&.last&.alive?
+ cond&.last&.raise(Error::CanceledError.new(cancellation&.canceled_reason || 'Wait condition canceled'))
+ end
+ end
+
+ # This blocks until a resume is called on this fiber
+ result = Fiber.yield
+
+ # Remove cancellation callback (only needed on success)
+ cancellation&.remove_cancel_callback(cancel_callback_key) if cancel_callback_key
+
+ result
+ end
+
+ def stack_trace
+ # Collect backtraces of known fibers, separating with a blank line. We make sure to remove any lines that
+ # reference Temporal paths, and we remove any empty backtraces.
+ dir_path = @instance.illegal_call_tracing_disabled { File.dirname(Temporalio._root_file_path) }
+ @fibers.map do |fiber|
+ fiber.backtrace.reject { |s| s.start_with?(dir_path) }.join("\n")
+ end.reject(&:empty?).join("\n\n")
+ end
+
+ ###
+ # Fiber::Scheduler methods
+ #
+ # Note: we intentionally do not implement many methods here, such as
+ # io_read. While it might seem to make sense to implement them and
+ # raise, we actually want to default to the blocking behavior of them
+ # not being present. This is so advanced things like logging still
+ # work inside of workflows. So we only implement the bare minimum.
+ ###
+
+ def block(_blocker, timeout = nil)
+ # TODO(cretz): Make the blocker visible in the stack trace?
+
+ # We just yield because unblock will resume this. We will just wrap in timeout if needed.
+ if timeout
+ begin
+ Timeout.timeout(timeout) { Fiber.yield }
+ true
+ rescue Timeout::Error
+ false
+ end
+ else
+ Fiber.yield
+ true
+ end
+ end
+
+ def close
+ # Nothing to do here, lifetime of scheduler is controlled by the instance
+ end
+
+ def fiber(&block)
+ if @instance.context_frozen
+ raise Workflow::InvalidWorkflowStateError, 'Cannot schedule fibers in this context'
+ end
+
+ fiber = Fiber.new do
+ block.call # steep:ignore
+ ensure
+ @fibers.delete(Fiber.current)
+ end
+ @fibers << fiber
+ @ready << fiber
+ fiber
+ end
+
+ def io_wait(io, events, timeout)
+ # TODO(cretz): Should this be implemented in a blocking fashion?
+ raise NotImplementedError, 'TODO'
+ end
+
+ def kernel_sleep(duration = nil)
+ Workflow.sleep(duration)
+ end
+
+ def process_wait(pid, flags)
+ raise NotImplementedError, 'Cannot wait on other processes in workflows'
+ end
+
+ def timeout_after(duration, exception_class, *exception_arguments, &)
+ context.timeout(duration, exception_class, *exception_arguments, summary: 'Timeout timer', &)
+ end
+
+ def unblock(_blocker, fiber)
+ @ready << fiber
+ end
+ end
+ end
+ end
+ end
+end
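The heart of `run_until_all_yielded` is a two-phase loop: drain every ready fiber first, then resolve at most one satisfied wait condition before looping, so code following a wait condition can trust the condition still holds when it resumes. A minimal standalone sketch of that loop, with illustrative names and no SDK dependencies:

```ruby
# Miniature run_until_all_yielded: a ready queue of fibers plus a map of
# seq => [predicate, parked fiber] wait conditions.
ready = []
conditions = {}
log = []
flag = false
seq = 0

waiter = Fiber.new do
  seq += 1
  conditions[seq] = [-> { flag }, Fiber.current]
  result = Fiber.yield # park until the scheduler resumes us with the result
  log << [:woke, result]
end

setter = Fiber.new do
  flag = true
  log << :set
end

ready << waiter << setter

loop do
  # Phase 1: run all ready fibers until each has yielded or finished
  while (fiber = ready.shift)
    fiber.resume
  end

  # Phase 2: resolve at most one satisfied condition, then loop again
  cond = conditions.find { |_, (pred, _)| pred.call }
  break unless cond

  conditions.delete(cond.first)
  cond.last.last.resume(true)
end
```

Here `waiter` parks on its condition, `setter` flips the flag, and only then does the loop wake `waiter` — mirroring how the scheduler lets all fibers settle before satisfying a single condition per pass.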
diff --git a/temporalio/lib/temporalio/internal/worker/workflow_worker.rb b/temporalio/lib/temporalio/internal/worker/workflow_worker.rb
new file mode 100644
index 00000000..99d08269
--- /dev/null
+++ b/temporalio/lib/temporalio/internal/worker/workflow_worker.rb
@@ -0,0 +1,196 @@
+# frozen_string_literal: true
+
+require 'temporalio/api/payload_visitor'
+require 'temporalio/error'
+require 'temporalio/internal/worker/workflow_instance'
+require 'temporalio/scoped_logger'
+require 'temporalio/workflow'
+require 'temporalio/workflow/definition'
+require 'timeout'
+
+module Temporalio
+ module Internal
+ module Worker
+ # Worker for handling workflow activations. Most activation work is delegated to the workflow executor.
+ class WorkflowWorker
+ def self.workflow_definitions(workflows)
+ workflows.each_with_object({}) do |workflow, hash|
+ # Load definition
+ defn = begin
+ if workflow.is_a?(Workflow::Definition::Info)
+ workflow
+ else
+ Workflow::Definition::Info.from_class(workflow)
+ end
+ rescue StandardError
+ raise ArgumentError, "Failed loading workflow #{workflow}"
+ end
+
+ # Confirm name not in use
+ raise ArgumentError, "Multiple workflows named #{defn.name || ''}" if hash.key?(defn.name)
+
+ hash[defn.name] = defn
+ end
+ end
+
+ def initialize(worker:, bridge_worker:, workflow_definitions:)
+ @executor = worker.options.workflow_executor
+
+ payload_codec = worker.options.client.data_converter.payload_codec
+ @workflow_payload_codec_thread_pool = worker.options.workflow_payload_codec_thread_pool
+ if !Fiber.current_scheduler && payload_codec && !@workflow_payload_codec_thread_pool
+ raise ArgumentError, 'Must have workflow payload codec thread pool if providing codec and not using fibers'
+ end
+
+ # If there is a payload codec, we need to build encoding and decoding visitors
+ if payload_codec
+ @payload_encoding_visitor = Api::PayloadVisitor.new(skip_search_attributes: true) do |payload_or_payloads|
+ apply_codec_on_payload_visit(payload_or_payloads) { |payloads| payload_codec.encode(payloads) }
+ end
+ @payload_decoding_visitor = Api::PayloadVisitor.new(skip_search_attributes: true) do |payload_or_payloads|
+ apply_codec_on_payload_visit(payload_or_payloads) { |payloads| payload_codec.decode(payloads) }
+ end
+ end
+
+ @state = State.new(
+ workflow_definitions:,
+ bridge_worker:,
+ logger: worker.options.logger,
+ metric_meter: worker.options.client.connection.options.runtime.metric_meter,
+ data_converter: worker.options.client.data_converter,
+ deadlock_timeout: worker.options.debug_mode ? nil : 2.0,
+ # TODO(cretz): Make this more performant for the default set?
+ illegal_calls: WorkflowInstance::IllegalCallTracer.frozen_validated_illegal_calls(
+ worker.options.illegal_workflow_calls || {}
+ ),
+ namespace: worker.options.client.namespace,
+ task_queue: worker.options.task_queue,
+ disable_eager_activity_execution: worker.options.disable_eager_activity_execution,
+ workflow_interceptors: worker._workflow_interceptors,
+ workflow_failure_exception_types: worker.options.workflow_failure_exception_types.map do |t|
+ unless t.is_a?(Class) && t < Exception
+ raise ArgumentError, 'All failure types must be classes inheriting from Exception'
+ end
+
+ t
+ end.freeze
+ )
+
+ # Validate worker
+ @executor._validate_worker(worker, @state)
+ end
+
+ def handle_activation(runner:, activation:, decoded:)
+ # Decode in background if not decoded but it needs to be
+ if @payload_encoding_visitor && !decoded
+ if Fiber.current_scheduler
+ Fiber.schedule { decode_activation(runner, activation) }
+ else
+ @workflow_payload_codec_thread_pool.execute { decode_activation(runner, activation) }
+ end
+ else
+ @executor._activate(activation, @state) do |activation_completion|
+ runner.apply_workflow_activation_complete(workflow_worker: self, activation_completion:, encoded: false)
+ end
+ end
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ # Should never happen, executors are expected to trap things
+ @state.logger.error("Failed issuing activation on workflow run ID: #{activation.run_id}")
+ @state.logger.error(e)
+ end
+
+ def handle_activation_complete(runner:, activation_completion:, encoded:, completion_complete_queue:)
+ if @payload_encoding_visitor && !encoded
+ if Fiber.current_scheduler
+ Fiber.schedule { encode_activation_completion(runner, activation_completion) }
+ else
+ @workflow_payload_codec_thread_pool.execute do
+ encode_activation_completion(runner, activation_completion)
+ end
+ end
+ else
+ @state.bridge_worker.async_complete_workflow_activation(
+ activation_completion.run_id, activation_completion.to_proto, completion_complete_queue
+ )
+ end
+ end
+
+ def on_shutdown_complete
+ @state.evict_all
+ end
+
+ private
+
+ def decode_activation(runner, activation)
+ @payload_decoding_visitor.run(activation)
+ runner.apply_workflow_activation_decoded(workflow_worker: self, activation:)
+ end
+
+ def encode_activation_completion(runner, activation_completion)
+ @payload_encoding_visitor.run(activation_completion)
+ runner.apply_workflow_activation_complete(workflow_worker: self, activation_completion:, encoded: true)
+ end
+
+ def apply_codec_on_payload_visit(payload_or_payloads, &)
+ case payload_or_payloads
+ when Temporalio::Api::Common::V1::Payload
+ new_payloads = yield [payload_or_payloads]
+ payload_or_payloads.metadata = new_payloads.first.metadata
+ payload_or_payloads.data = new_payloads.first.data
+ when Enumerable
+ payload_or_payloads.replace(yield payload_or_payloads) # steep:ignore
+ else
+ raise 'Unrecognized visitor type'
+ end
+ end
+
+ class State
+ attr_reader :workflow_definitions, :bridge_worker, :logger, :metric_meter, :data_converter, :deadlock_timeout,
+ :illegal_calls, :namespace, :task_queue, :disable_eager_activity_execution,
+ :workflow_interceptors, :workflow_failure_exception_types
+
+ def initialize(
+ workflow_definitions:, bridge_worker:, logger:, metric_meter:, data_converter:, deadlock_timeout:,
+ illegal_calls:, namespace:, task_queue:, disable_eager_activity_execution:,
+ workflow_interceptors:, workflow_failure_exception_types:
+ )
+ @workflow_definitions = workflow_definitions
+ @bridge_worker = bridge_worker
+ @logger = logger
+ @metric_meter = metric_meter
+ @data_converter = data_converter
+ @deadlock_timeout = deadlock_timeout
+ @illegal_calls = illegal_calls
+ @namespace = namespace
+ @task_queue = task_queue
+ @disable_eager_activity_execution = disable_eager_activity_execution
+ @workflow_interceptors = workflow_interceptors
+ @workflow_failure_exception_types = workflow_failure_exception_types
+
+ @running_workflows = {}
+ @running_workflows_mutex = Mutex.new
+ end
+
+ # This can never be called at the same time for the same run ID on the same state object
+ def get_or_create_running_workflow(run_id, &)
+ instance = @running_workflows_mutex.synchronize { @running_workflows[run_id] }
+ # If the instance is not there, we create it outside the lock, then store it under the lock
+ unless instance
+ instance = yield
+ @running_workflows_mutex.synchronize { @running_workflows[run_id] = instance }
+ end
+ instance
+ end
+
+ def evict_running_workflow(run_id)
+ @running_workflows_mutex.synchronize { @running_workflows.delete(run_id) }
+ end
+
+ def evict_all
+ @running_workflows_mutex.synchronize { @running_workflows.clear }
+ end
+ end
+ end
+ end
+ end
+end
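`State#get_or_create_running_workflow` reads under the mutex, runs the (potentially slow) creation block outside it, then stores the result under the mutex — safe only because of the documented guarantee that the same run ID is never requested concurrently. A generic sketch of that pattern with an illustrative `Registry` class:

```ruby
# Lock-read / unlocked-create / lock-store registry. Assumes, as in the
# worker state above, that get_or_create is never called concurrently for
# the same key, so the unlocked creation cannot race with itself.
class Registry
  def initialize
    @items = {}
    @mutex = Mutex.new
  end

  def get_or_create(key)
    item = @mutex.synchronize { @items[key] }
    unless item
      item = yield # creation happens outside the lock
      @mutex.synchronize { @items[key] = item }
    end
    item
  end

  def evict(key)
    @mutex.synchronize { @items.delete(key) }
  end
end

reg = Registry.new
created = 0
2.times { reg.get_or_create('run-1') { created += 1; Object.new } }
# Second call reuses the stored instance instead of creating again
```

Keeping the creation block out of the critical section means a slow workflow-instance construction never blocks lookups for other run IDs.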
diff --git a/temporalio/lib/temporalio/search_attributes.rb b/temporalio/lib/temporalio/search_attributes.rb
index ab36dff9..bb3a9458 100644
--- a/temporalio/lib/temporalio/search_attributes.rb
+++ b/temporalio/lib/temporalio/search_attributes.rb
@@ -7,7 +7,7 @@ module Temporalio
#
# This is represented as a mapping of {SearchAttributes::Key} to object values. This is not a hash though it does have
# a few hash-like methods and can be converted to a hash via {#to_h}. In some situations, such as in workflows, this
- # class is frozen.
+ # class is immutable for outside use.
class SearchAttributes
# Key for a search attribute.
class Key
@@ -20,7 +20,7 @@ class Key
def initialize(name, type)
raise ArgumentError, 'Invalid type' unless Api::Enums::V1::IndexedValueType.lookup(type)
- @name = name
+ @name = name.to_s
@type = type
end
@@ -104,28 +104,54 @@ def initialize(key, value)
@key = key
@value = value
end
+
+ # @!visibility private
+ def _to_proto_pair
+ SearchAttributes._to_proto_pair(key, value)
+ end
end
# @!visibility private
- def self._from_proto(proto)
- return nil unless proto
- raise ArgumentError, 'Expected proto search attribute' unless proto.is_a?(Api::Common::V1::SearchAttributes)
-
- SearchAttributes.new(proto.indexed_fields.map do |key_name, payload| # rubocop:disable Style/MapToHash
- key = Key.new(key_name, IndexedValueType::PROTO_VALUES[payload.metadata['type']])
- value = value_from_payload(payload)
- [key, value]
- end.to_h)
+ def self._from_proto(proto, disable_mutations: false, never_nil: false)
+ return nil unless proto || never_nil
+
+ attrs = if proto
+ unless proto.is_a?(Api::Common::V1::SearchAttributes)
+ raise ArgumentError, 'Expected proto search attribute'
+ end
+
+ SearchAttributes.new(proto.indexed_fields.map do |key_name, payload| # rubocop:disable Style/MapToHash
+ key = Key.new(key_name, IndexedValueType::PROTO_VALUES[payload.metadata['type']])
+ value = _value_from_payload(payload)
+ [key, value]
+ end.to_h)
+ else
+ SearchAttributes.new
+ end
+ attrs._disable_mutations = disable_mutations
+ attrs
end
# @!visibility private
- def self.value_from_payload(payload)
+ def self._value_from_payload(payload)
value = Converters::PayloadConverter.default.from_payload(payload)
# Time needs to be converted
value = Time.iso8601(value) if payload.metadata['type'] == 'DateTime' && value.is_a?(String)
value
end
+ # @!visibility private
+ def self._to_proto_pair(key, value)
+ # We use a default converter, but if type is a time, we need ISO format
+ value = value.iso8601 if key.type == IndexedValueType::TIME && value.is_a?(Time)
+
+ # Convert to payload
+ payload = Converters::PayloadConverter.default.to_payload(value)
+ payload.metadata['type'] = IndexedValueType::PROTO_NAMES[key.type]
+
+ [key.name, payload]
+ end
+
# Create a search attribute collection.
#
# @param existing [SearchAttributes, Hash, nil] Existing collection. This can be another
@@ -149,6 +175,7 @@ def initialize(existing = nil)
# @param key [Key] A key to set. This must be a {Key} and the value must be proper for the {Key#type}.
# @param value [Object, nil] The value to set. If `nil`, the key is removed. The value must be proper for the `key`.
def []=(key, value)
+ _assert_mutations_enabled
# Key must be a Key
raise ArgumentError, 'Key must be a key' unless key.is_a?(Key)
@@ -162,33 +189,37 @@ def []=(key, value)
# Get a search attribute value for a key.
#
- # @param key [Key, String] The key to find. If this is a {Key}, it will use key equality (i.e. name and type) to
- # search. If this is a {::String}, the type is not checked when finding the proper key.
+ # @param key [Key, String, Symbol] The key to find. If this is a {Key}, it will use key equality (i.e. name and
+ # type) to search. If this is a {::String}, the type is not checked when finding the proper key.
# @return [Object, nil] Value if found or `nil` if not.
def [](key)
# Key must be a Key or a string
- if key.is_a?(Key)
+ case key
+ when Key
@raw_hash[key]
- elsif key.is_a?(String)
- @raw_hash.find { |hash_key, _| hash_key.name == key }&.last
+ when String, Symbol
+ @raw_hash.find { |hash_key, _| hash_key.name == key.to_s }&.last
else
- raise ArgumentError, 'Key must be a key or string'
+ raise ArgumentError, 'Key must be a key or string/symbol'
end
end
# Delete a search attribute key
#
- # @param key [Key, String] The key to delete. Regardless of whether this is a {Key} or a {::String}, the key with
- # the matching name will be deleted. This means a {Key} with a matching name but different type may be deleted.
+ # @param key [Key, String, Symbol] The key to delete. Regardless of whether this is a {Key} or a {::String}, the key
+ # with the matching name will be deleted. This means a {Key} with a matching name but different type may be
+ # deleted.
def delete(key)
+ _assert_mutations_enabled
# Key must be a Key or a string, but we delete all values for the
# name no matter what
- name = if key.is_a?(Key)
+ name = case key
+ when Key
key.name
- elsif key.is_a?(String)
- key
+ when String, Symbol
+ key.to_s
else
- raise ArgumentError, 'Key must be a key or string'
+ raise ArgumentError, 'Key must be a key or string/symbol'
end
@raw_hash.delete_if { |hash_key, _| hash_key.name == name }
end
@@ -205,7 +236,9 @@ def to_h
# @return [SearchAttributes] Copy of the search attributes.
def dup
- SearchAttributes.new(self)
+ attrs = SearchAttributes.new(self)
+ attrs._disable_mutations = false
+ attrs
end
# @return [Boolean] Whether the set of attributes is empty.
@@ -225,6 +258,7 @@ def length
# @param updates [Update] Updates created via {Key#value_set} or {Key#value_unset}.
# @return [SearchAttributes] New collection.
def update(*updates)
+ _assert_mutations_enabled
attrs = dup
attrs.update!(*updates)
attrs
@@ -234,27 +268,36 @@ def update(*updates)
#
# @param updates [Update] Updates created via {Key#value_set} or {Key#value_unset}.
def update!(*updates)
+ _assert_mutations_enabled
updates.each do |update|
raise ArgumentError, 'Update must be an update' unless update.is_a?(Update)
- self[update.key] = update.value
+ if update.value.nil?
+ delete(update.key)
+ else
+ self[update.key] = update.value
+ end
end
end
# @!visibility private
def _to_proto
- Api::Common::V1::SearchAttributes.new(
- indexed_fields: @raw_hash.to_h do |key, value|
- # We use a default converter, but if type is a time, we need ISO format
- value = value.iso8601 if key.type == IndexedValueType::TIME
+ Api::Common::V1::SearchAttributes.new(indexed_fields: _to_proto_hash)
+ end
+
+ # @!visibility private
+ def _to_proto_hash
+ @raw_hash.to_h { |key, value| SearchAttributes._to_proto_pair(key, value) }
+ end
- # Convert to payload
- payload = Converters::PayloadConverter.default.to_payload(value)
- payload.metadata['type'] = IndexedValueType::PROTO_NAMES[key.type]
+ # @!visibility private
+ def _assert_mutations_enabled
+ raise 'Search attribute mutations disabled' if @disable_mutations
+ end
- [key.name, payload]
- end
- )
+ # @!visibility private
+ def _disable_mutations=(value)
+ @disable_mutations = value
end
# Type for a search attribute key/value.
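The `delete` semantics above match on key name alone, so a `Key` with the same name but a different type is removed too, while `[]` with a `Key` compares name and type. A self-contained sketch of the deletion behavior, using a plain `Struct` as an illustrative stand-in for `SearchAttributes::Key`:

```ruby
# Stand-in for SearchAttributes::Key: struct equality covers name and type
Key = Struct.new(:name, :type)

attrs = {
  Key.new('region', :text) => 'us-east',
  Key.new('region', :keyword) => 'use1',
  Key.new('count', :integer) => 3
}

# Deleting by name removes every key with that name, regardless of type,
# mirroring SearchAttributes#delete
name = 'region'
attrs.delete_if { |k, _| k.name == name }
```

After the delete, only the `count` attribute remains — both `region` entries are gone even though their types differ.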
diff --git a/temporalio/lib/temporalio/testing/activity_environment.rb b/temporalio/lib/temporalio/testing/activity_environment.rb
index 60e420c6..68fc2c72 100644
--- a/temporalio/lib/temporalio/testing/activity_environment.rb
+++ b/temporalio/lib/temporalio/testing/activity_environment.rb
@@ -67,11 +67,11 @@ def initialize(
# Run an activity and returns its result or raises its exception.
#
- # @param activity [Activity, Class, Activity::Definition] Activity to run.
+ # @param activity [Activity::Definition, Class, Activity::Definition::Info] Activity to run.
# @param args [Array] Arguments to the activity.
# @return Activity result.
def run(activity, *args)
- defn = Activity::Definition.from_activity(activity)
+ defn = Activity::Definition::Info.from_activity(activity)
executor = @activity_executors[defn.executor]
raise ArgumentError, "Unknown executor: #{defn.executor}" if executor.nil?
diff --git a/temporalio/lib/temporalio/testing/workflow_environment.rb b/temporalio/lib/temporalio/testing/workflow_environment.rb
index e426fe77..ea221bbb 100644
--- a/temporalio/lib/temporalio/testing/workflow_environment.rb
+++ b/temporalio/lib/temporalio/testing/workflow_environment.rb
@@ -1,8 +1,14 @@
# frozen_string_literal: true
+require 'delegate'
+require 'temporalio/api'
+require 'temporalio/api/testservice/v1/request_response'
require 'temporalio/client'
+require 'temporalio/client/connection/test_service'
+require 'temporalio/client/workflow_handle'
require 'temporalio/converters'
require 'temporalio/internal/bridge/testing'
+require 'temporalio/internal/proto_utils'
require 'temporalio/runtime'
require 'temporalio/version'
@@ -63,7 +69,8 @@ def self.start_local(
dev_server_log_level: 'warn',
dev_server_download_version: 'default',
dev_server_download_dest_dir: nil,
- dev_server_extra_args: []
+ dev_server_extra_args: [],
+ &
)
server_options = Internal::Bridge::Testing::EphemeralServer::StartDevServerOptions.new(
existing_path: dev_server_existing_path,
@@ -80,7 +87,96 @@ def self.start_local(
log_level: dev_server_log_level,
extra_args: dev_server_extra_args
)
- core_server = Internal::Bridge::Testing::EphemeralServer.start_dev_server(runtime._core_runtime, server_options)
+ _with_core_server(
+ core_server: Internal::Bridge::Testing::EphemeralServer.start_dev_server(
+ runtime._core_runtime, server_options
+ ),
+ namespace:,
+ data_converter:,
+ interceptors:,
+ logger:,
+ default_workflow_query_reject_condition:,
+ runtime:,
+ supports_time_skipping: false,
+ & # steep:ignore
+ )
+ end
+
+ # Start a time-skipping test server. This server can skip time but may not have all of the Temporal features of
+ # the {start_local} form. By default, the server is downloaded to tmp if not already present. The test server is
+ # run as a child process. All options that start with +test_server_+ are for this specific implementation and
+ # therefore are not stable and may be changed as the underlying implementation changes.
+ #
+ # If a block is given, it is passed the environment and the environment is shut down after the block returns. If
+ # a block is not given, the environment is returned and {shutdown} needs to be called manually.
+ #
+ # @param data_converter [Converters::DataConverter] Data converter for the client.
+ # @param interceptors [Array] Interceptors for the client.
+ # @param logger [Logger] Logger for the client.
+ # @param default_workflow_query_reject_condition [WorkflowQueryRejectCondition, nil] Default rejection condition
+ # for the client.
+ # @param port [Integer, nil] Port to bind on, or +nil+ for random.
+ # @param runtime [Runtime] Runtime for the server and client.
+ # @param test_server_existing_path [String, nil] Existing CLI path to use instead of downloading and caching to
+ # tmp.
+ # @param test_server_download_version [String] Version of test server to download and cache.
+ # @param test_server_download_dest_dir [String, nil] Where to download. Defaults to tmp.
+ # @param test_server_extra_args [Array] Any extra arguments for the test server.
+ #
+ # @yield [environment] If a block is given, it is called with the environment and upon completion the
+ # environment is shut down.
+ # @yieldparam environment [WorkflowEnvironment] Environment that is shut down upon block completion.
+ #
+ # @return [WorkflowEnvironment, Object] Started local server environment with client if there was no block given,
+ # or block result if block was given.
+ def self.start_time_skipping(
+ data_converter: Converters::DataConverter.default,
+ interceptors: [],
+ logger: Logger.new($stdout, level: Logger::WARN),
+ default_workflow_query_reject_condition: nil,
+ port: nil,
+ runtime: Runtime.default,
+ test_server_existing_path: nil,
+ test_server_download_version: 'default',
+ test_server_download_dest_dir: nil,
+ test_server_extra_args: [],
+ &
+ )
+ server_options = Internal::Bridge::Testing::EphemeralServer::StartTestServerOptions.new(
+ existing_path: test_server_existing_path,
+ sdk_name: 'sdk-ruby',
+ sdk_version: VERSION,
+ download_version: test_server_download_version,
+ download_dest_dir: test_server_download_dest_dir,
+ port:,
+ extra_args: test_server_extra_args
+ )
+ _with_core_server(
+ core_server: Internal::Bridge::Testing::EphemeralServer.start_test_server(
+ runtime._core_runtime, server_options
+ ),
+ namespace: 'default',
+ data_converter:,
+ interceptors:,
+ logger:,
+ default_workflow_query_reject_condition:,
+ runtime:,
+ supports_time_skipping: true,
+ & # steep:ignore
+ )
+ end
+
+ # @!visibility private
+ def self._with_core_server(
+ core_server:,
+ namespace:,
+ data_converter:,
+ interceptors:,
+ logger:,
+ default_workflow_query_reject_condition:,
+ runtime:,
+ supports_time_skipping:
+ )
# Try to connect, shutdown if we can't
begin
client = Client.connect(
@@ -92,8 +188,8 @@ def self.start_local(
default_workflow_query_reject_condition:,
runtime:
)
- server = Ephemeral.new(client, core_server)
- rescue StandardError
+ server = Ephemeral.new(client, core_server, supports_time_skipping:)
+ rescue Exception # rubocop:disable Lint/RescueException
core_server.shutdown
raise
end
@@ -120,18 +216,167 @@ def shutdown
# Do nothing by default
end
+ # @return [Boolean] Whether this environment supports time skipping.
+ def supports_time_skipping?
+ false
+ end
+
+ # Advance time.
+ #
+ # If this server supports time skipping, this will immediately advance time and return. If it does not, this is
+ # a standard {::sleep}.
+ #
+ # @param duration [Float] Duration seconds.
+ def sleep(duration)
+ Kernel.sleep(duration)
+ end
+
+ # Current time of the environment.
+ #
+ # If this server supports time skipping, this will be the current time as known to the environment. If it does
+ # not, this is a standard {::Time.now}.
+ #
+ # @return [Time] Current time.
+ def current_time
+ Time.now
+ end
+
+ # Run a block with automatic time skipping disabled. This just runs the block for environments that don't support
+ # time skipping.
+ #
+ # @yield Block to run.
+ # @return [Object] Result of the block.
+ def auto_time_skipping_disabled(&)
+ raise 'Block required' unless block_given?
+
+ yield
+ end
+
# @!visibility private
class Ephemeral < WorkflowEnvironment
- def initialize(client, core_server)
+ def initialize(client, core_server, supports_time_skipping:)
+ # Add our time-skipping interceptor at the end of the existing interceptors
+ client_options = client.options.dup
+ client_options.interceptors += [TimeSkippingClientInterceptor.new(self)]
+ client = Client.new(**client_options.to_h) # steep:ignore
super(client)
+
+ @auto_time_skipping = true
@core_server = core_server
+ @test_service = Client::Connection::TestService.new(client.connection) if supports_time_skipping
end
# @!visibility private
def shutdown
@core_server.shutdown
end
+
+ # @!visibility private
+ def supports_time_skipping?
+ !@test_service.nil?
+ end
+
+ # @!visibility private
+ def sleep(duration)
+ return super unless supports_time_skipping?
+
+ @test_service.unlock_time_skipping_with_sleep(
+ Api::TestService::V1::SleepRequest.new(duration: Internal::ProtoUtils.seconds_to_duration(duration))
+ )
+ end
+
+ # @!visibility private
+ def current_time
+ return super unless supports_time_skipping?
+
+ resp = @test_service.get_current_time(Google::Protobuf::Empty.new)
+ Internal::ProtoUtils.timestamp_to_time(resp.time) or raise 'Time missing'
+ end
+
+ # @!visibility private
+ def auto_time_skipping_disabled(&)
+ raise 'Block required' unless block_given?
+ return super unless supports_time_skipping?
+
+ already_disabled = @auto_time_skipping
+ @auto_time_skipping = false
+ begin
+ yield
+ ensure
+ @auto_time_skipping = true unless already_disabled
+ end
+ end
+
+ # @!visibility private
+ def time_skipping_unlocked(&)
+ # If disabled or unsupported, no locking/unlocking, just run and return
+ return yield if !supports_time_skipping? || !@auto_time_skipping
+
+ # Unlock to start time skipping, lock again to stop it
+ @test_service.unlock_time_skipping(Api::TestService::V1::UnlockTimeSkippingRequest.new)
+ user_code_success = false
+ begin
+ result = yield
+ user_code_success = true
+ result
+ ensure
+ # Lock it back
+ begin
+ @test_service.lock_time_skipping(Api::TestService::V1::LockTimeSkippingRequest.new)
+ rescue StandardError => e
+ # Re-raise if user code succeeded, otherwise swallow
+ raise if user_code_success
+
+ client.options.logger.error('Failed locking time skipping after error')
+ client.options.logger.error(e)
+ end
+ end
+ end
end
+
+ private_constant :Ephemeral
+
+ # @!visibility private
+ class TimeSkippingClientInterceptor
+ include Client::Interceptor
+
+ def initialize(env)
+ @env = env
+ end
+
+ # @!visibility private
+ def intercept_client(next_interceptor)
+ Outbound.new(next_interceptor, @env)
+ end
+
+ # @!visibility private
+ class Outbound < Client::Interceptor::Outbound
+ def initialize(next_interceptor, env)
+ super(next_interceptor)
+ @env = env
+ end
+
+ # @!visibility private
+ def start_workflow(input)
+ TimeSkippingWorkflowHandle.new(super, @env)
+ end
+ end
+
+ # @!visibility private
+ class TimeSkippingWorkflowHandle < SimpleDelegator
+ def initialize(handle, env)
+ super(handle) # steep:ignore
+ @env = env
+ end
+
+ # @!visibility private
+ def result(follow_runs: true, rpc_options: nil)
+ @env.time_skipping_unlocked { super(follow_runs:, rpc_options:) }
+ end
+ end
+ end
+
+ private_constant :TimeSkippingClientInterceptor
end
end
end
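`time_skipping_unlocked` above has a subtle error-handling rule in its `ensure`: if re-locking time skipping fails, that failure is re-raised only when the user code succeeded; otherwise it is logged and swallowed so the original user error is not masked. A standalone sketch of that pattern — `with_unlock` and its lambdas are illustrative, not SDK API:

```ruby
# Unlock, run the block, then always lock back. A lock-back failure only
# propagates when the user code itself succeeded, so it never masks the
# user's own error.
def with_unlock(unlock:, lock:)
  unlock.call
  success = false
  begin
    result = yield
    success = true
    result
  ensure
    begin
      lock.call
    rescue StandardError
      raise if success
      # else swallow so the original user error propagates
    end
  end
end

events = []
begin
  with_unlock(
    unlock: -> { events << :unlock },
    lock: -> { events << :lock; raise 'lock failed' }
  ) { raise 'user error' }
rescue StandardError => e
  events << e.message
end
# The caller sees 'user error', not the secondary 'lock failed'
```

This mirrors how the SDK logs the lock-back failure via `client.options.logger` instead of raising when user code already failed.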
diff --git a/temporalio/lib/temporalio/worker.rb b/temporalio/lib/temporalio/worker.rb
index 3e05f77f..73998ffd 100644
--- a/temporalio/lib/temporalio/worker.rb
+++ b/temporalio/lib/temporalio/worker.rb
@@ -8,9 +8,13 @@
require 'temporalio/internal/bridge/worker'
require 'temporalio/internal/worker/activity_worker'
require 'temporalio/internal/worker/multi_runner'
+require 'temporalio/internal/worker/workflow_instance'
+require 'temporalio/internal/worker/workflow_worker'
require 'temporalio/worker/activity_executor'
require 'temporalio/worker/interceptor'
+require 'temporalio/worker/thread_pool'
require 'temporalio/worker/tuner'
+require 'temporalio/worker/workflow_executor'
module Temporalio
# Worker for processing activities and workflows on a task queue.
@@ -24,8 +28,10 @@ class Worker
:client,
:task_queue,
:activities,
- :activity_executors,
+ :workflows,
:tuner,
+ :activity_executors,
+ :workflow_executor,
:interceptors,
:build_id,
:identity,
@@ -42,6 +48,11 @@ class Worker
:max_task_queue_activities_per_second,
:graceful_shutdown_period,
:use_worker_versioning,
+ :disable_eager_activity_execution,
+ :illegal_workflow_calls,
+ :workflow_failure_exception_types,
+ :workflow_payload_codec_thread_pool,
+ :debug_mode,
keyword_init: true
)
@@ -121,16 +132,34 @@ def self.run_all(
block_result = nil
loop do
event = runner.next_event
+ # TODO(cretz): Consider improving performance instead of this case statement
case event
when Internal::Worker::MultiRunner::Event::PollSuccess
# Successful poll
- event.worker._on_poll_bytes(event.worker_type, event.bytes)
+ event.worker._on_poll_bytes(runner, event.worker_type, event.bytes)
when Internal::Worker::MultiRunner::Event::PollFailure
# Poll failure, this causes shutdown of all workers
- logger.error('Poll failure (beginning worker shutdown if not alaredy occurring)')
+ logger.error('Poll failure (beginning worker shutdown if not already occurring)')
logger.error(event.error)
first_error ||= event.error
runner.initiate_shutdown
+ when Internal::Worker::MultiRunner::Event::WorkflowActivationDecoded
+ # Came back from a codec as decoded
+ event.workflow_worker.handle_activation(runner:, activation: event.activation, decoded: true)
+ when Internal::Worker::MultiRunner::Event::WorkflowActivationComplete
+ # An activation is complete
+ event.workflow_worker.handle_activation_complete(
+ runner:,
+ activation_completion: event.activation_completion,
+ encoded: event.encoded,
+ completion_complete_queue: event.completion_complete_queue
+ )
+ when Internal::Worker::MultiRunner::Event::WorkflowActivationCompletionComplete
+ # Completion complete, only need to log error if it occurs here
+ if event.error
+ logger.error("Activation completion failed to record on run ID #{event.run_id}")
+ logger.error(event.error)
+ end
when Internal::Worker::MultiRunner::Event::PollerShutDown
# Individual poller shut down. Nothing to do here until we support
# worker status or something.
@@ -186,6 +215,9 @@ def self.run_all(
end
end
+ # Notify each worker we're done with it
+ workers.each(&:_on_shutdown_complete)
+
+ # If there was a shutdown-causing error, we raise it
if !first_error.nil?
raise first_error
@@ -194,6 +226,53 @@ def self.run_all(
end
end
+ # @return [Hash<String, [:all, Array<Symbol>]>] Default, immutable set of illegal calls used for the
+ # `illegal_workflow_calls` worker option. See the documentation of that option for more details.
+ def self.default_illegal_workflow_calls
+ @default_illegal_workflow_calls ||= begin
+ hash = {
+ 'BasicSocket' => :all,
+ 'Date' => %i[initialize today],
+ 'DateTime' => %i[initialize now],
+ 'Dir' => :all,
+ 'Fiber' => [:set_scheduler],
+ 'File' => :all,
+ 'FileTest' => :all,
+ 'FileUtils' => :all,
+ 'Find' => :all,
+ 'GC' => :all,
+ 'IO' => [
+ :read
+ # Intentionally leaving out write so puts will work. We don't want to add heavy logic replacing stdout or
+ # trying to derive whether it's file vs stdout write.
+ #:write
+ ],
+ 'Kernel' => %i[abort at_exit autoload autoload? eval exec exit fork gets load open rand readline readlines
+ spawn srand system test trap],
+ 'Net::HTTP' => :all,
+ 'Pathname' => :all,
+ # TODO(cretz): Investigate why clock_gettime called from Timeout thread affects this code at all. Stack trace
+ # test executing activities inside a timeout will fail if clock_gettime is blocked.
+ 'Process' => %i[abort argv0 daemon detach exec exit exit! fork kill setpriority setproctitle setrlimit setsid
+ spawn times wait wait2 waitall warmup],
+ # TODO(cretz): Allow Ractor.current since exception formatting in error_highlight references it
+ # 'Ractor' => :all,
+ 'Random::Base' => [:initialize],
+ 'Resolv' => :all,
+ 'SecureRandom' => :all,
+ 'Signal' => :all,
+ 'Socket' => :all,
+ 'Tempfile' => :all,
+ 'Thread' => %i[abort_on_exception= exit fork handle_interrupt ignore_deadlock= kill new pass
+ pending_interrupt? report_on_exception= start stop initialize join name= priority= raise run
+ terminate thread_variable_set wakeup],
+ 'Time' => %i[initialize now]
+ } #: Hash[String, :all | Array[Symbol]]
+ hash.each_value(&:freeze)
+ hash.freeze
+ end
+ end
+
# @return [Options] Frozen options for this client which has the same attributes as {initialize}.
attr_reader :options
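The `default_illegal_workflow_calls` accessor above uses a common Ruby pattern: memoize with `||=`, freeze each value, then freeze the container so every caller shares one immutable default. A standalone sketch of the same pattern (class and entries hypothetical):

```ruby
class Defaults
  # Returns the same frozen hash on every call; the values are frozen too,
  # so callers cannot mutate the shared default in place.
  def self.illegal_calls
    @illegal_calls ||= begin
      hash = {
        'File' => :all,                 # any use of the class is illegal
        'Kernel' => %i[exit fork rand]  # only these methods are illegal
      }
      hash.each_value(&:freeze)
      hash.freeze
    end
  end
end

Defaults.illegal_calls.frozen?                        # => true
Defaults.illegal_calls['Kernel'].frozen?              # => true
Defaults.illegal_calls.equal?(Defaults.illegal_calls) # => true (memoized, same object)
```

A caller wanting to customize the set would `dup` the hash first rather than mutate it.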
@@ -201,20 +280,25 @@ def self.run_all(
#
# @param client [Client] Client for this worker.
# @param task_queue [String] Task queue for this worker.
- # @param activities [Array<Class<Activity::Definition>, Activity::Definition>] Activities for this worker.
- # @param activity_executors [Hash<Symbol, ActivityExecutor>] Executors that activities can run within.
+ # @param activities [Array<Activity::Definition, Class<Activity::Definition>, Activity::Definition::Info>]
+ # Activities for this worker.
+ # @param workflows [Array<Class<Workflow::Definition>>] Workflows for this worker.
# @param tuner [Tuner] Tuner that controls the amount of concurrent activities/workflows that run at a time.
- # @param interceptors [Array<Interceptor>] Interceptors specific to this worker. Note, interceptors set on the
- # client that include the {Interceptor} module are automatically included here, so no need to specify them again.
+ # @param activity_executors [Hash<Symbol, ActivityExecutor>] Executors that activities can run within.
+ # @param workflow_executor [WorkflowExecutor] Workflow executor that workflow tasks run within.
+ # @param interceptors [Array<Interceptor::Activity, Interceptor::Workflow>] Interceptors specific to this worker.
+ # Note, interceptors set on the client that include the {Interceptor::Activity} or {Interceptor::Workflow} module
+ # are automatically included here, so no need to specify them again.
# @param build_id [String] Unique identifier for the current runtime. This is best set as a unique value
# representing all code and should change only when code does. This can be something like a git commit hash. If
# unset, default is hash of known Ruby code.
# @param identity [String, nil] Override the identity for this worker. If unset, client identity is used.
+ # @param logger [Logger] Logger to override client logger with. Default is the client logger.
# @param max_cached_workflows [Integer] Number of workflows held in cache for use by sticky task queue. If set to 0,
# workflow caching and sticky queuing are disabled.
# @param max_concurrent_workflow_task_polls [Integer] Maximum number of concurrent poll workflow task requests we
# will perform at a time on this worker's task queue.
- # @param nonsticky_to_sticky_poll_ratio [Float] `max_concurrent_workflow_task_polls`` * this number = the number of
+ # @param nonsticky_to_sticky_poll_ratio [Float] `max_concurrent_workflow_task_polls` * this number = the number of
# max pollers that will be allowed for the nonsticky queue when sticky tasks are enabled. If both defaults are
# used, the sticky queue will allow 4 max pollers while the nonsticky queue will allow one. The minimum for either
# poller is 1, so if `max_concurrent_workflow_task_polls` is 1 and sticky queues are enabled, there will be 2
@@ -239,12 +323,35 @@ def self.run_all(
# @param use_worker_versioning [Boolean] If true, the `build_id` argument must be specified, and this worker opts
# into the worker versioning feature. This ensures it only receives workflow tasks for workflows which it claims
# to be compatible with. For more information, see https://docs.temporal.io/workers#worker-versioning.
+ # @param disable_eager_activity_execution [Boolean] If true, disables eager activity execution. Eager activity
+ # execution is an optimization on some servers that sends activities back to the same worker as the calling
+ # workflow if they can run there. This should be set to true for `max_task_queue_activities_per_second` to work
+ # and in a future version of this API may be implied as such (i.e. this setting will be ignored if that setting is
+ # set).
+ # @param illegal_workflow_calls [Hash<String, [:all, Array<Symbol>]>] Set of illegal workflow calls that are
+ # considered unsafe/non-deterministic and will raise if seen. The key of the hash is the fully qualified string
+ # class name (no leading `::`). The value is either `:all` which means any use of the class, or an array of
+ # symbols for methods on the class that cannot be used. The methods refer to either instance or class methods;
+ # there is no way to differentiate at this time.
+ # @param workflow_failure_exception_types [Array<Class<Exception>>] Workflow failure exception types. This is the
+ # set of exception types that, if a workflow-thrown exception extends, will cause the workflow/update to fail
+ # instead of suspending the workflow via task failure. These are applied in addition to the
+ # `workflow_failure_exception_type` on the workflow definition class itself. If {::Exception} is set, it
+ # effectively will fail a workflow/update in all user exception cases.
+ # @param workflow_payload_codec_thread_pool [ThreadPool, nil] Thread pool to run payload codec encode/decode within.
+ # This is required if a payload codec exists and the worker is not fiber based. Codecs can potentially block
+ # execution which is why they need to be run in the background.
+ # @param debug_mode [Boolean] If true, deadlock detection is disabled. Deadlock detection will fail workflow tasks
+ # if they block the thread for too long. This defaults to true if the `TEMPORAL_DEBUG` environment variable is
+ # `true` or `1`.
def initialize(
client:,
task_queue:,
activities: [],
- activity_executors: ActivityExecutor.defaults,
+ workflows: [],
tuner: Tuner.create_fixed,
+ activity_executors: ActivityExecutor.defaults,
+ workflow_executor: WorkflowExecutor::Ractor.instance,
interceptors: [],
build_id: Worker.default_build_id,
identity: nil,
@@ -260,17 +367,23 @@ def initialize(
max_activities_per_second: nil,
max_task_queue_activities_per_second: nil,
graceful_shutdown_period: 0,
- use_worker_versioning: false
+ use_worker_versioning: false,
+ disable_eager_activity_execution: false,
+ illegal_workflow_calls: Worker.default_illegal_workflow_calls,
+ workflow_failure_exception_types: [],
+ workflow_payload_codec_thread_pool: nil,
+ debug_mode: %w[true 1].include?(ENV['TEMPORAL_DEBUG'].to_s.downcase)
)
- # TODO(cretz): Remove when workflows come about
- raise ArgumentError, 'Must have at least one activity' if activities.empty?
+ raise ArgumentError, 'Must have at least one activity or workflow' if activities.empty? && workflows.empty?
@options = Options.new(
client:,
task_queue:,
activities:,
- activity_executors:,
+ workflows:,
tuner:,
+ activity_executors:,
+ workflow_executor:,
interceptors:,
build_id:,
identity:,
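The `debug_mode` default in the signature above derives a boolean from an environment variable, treating `"true"` and `"1"` (case-insensitively) as truthy; `to_s` makes an unset variable safe. The expression in isolation, wrapped in a hypothetical helper:

```ruby
# Hypothetical helper mirroring the debug_mode default expression
def debug_mode?(value)
  # to_s turns nil into "", so an unset variable is simply false
  %w[true 1].include?(value.to_s.downcase)
end

debug_mode?('true')  # => true
debug_mode?('TRUE')  # => true
debug_mode?('1')     # => true
debug_mode?('0')     # => false
debug_mode?(nil)     # => false
```

In the real signature the value comes from `ENV['TEMPORAL_DEBUG']`, evaluated once when the worker is constructed.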
@@ -286,15 +399,36 @@ def initialize(
max_activities_per_second:,
max_task_queue_activities_per_second:,
graceful_shutdown_period:,
- use_worker_versioning:
+ use_worker_versioning:,
+ disable_eager_activity_execution:,
+ illegal_workflow_calls:,
+ workflow_failure_exception_types:,
+ workflow_payload_codec_thread_pool:,
+ debug_mode:
).freeze
+ # Preload workflow definitions and some workflow settings for the bridge
+ workflow_definitions = Internal::Worker::WorkflowWorker.workflow_definitions(workflows)
+ nondeterminism_as_workflow_fail = workflow_failure_exception_types.any? do |t|
+ t.is_a?(Class) && t >= Workflow::NondeterminismError
+ end
+ nondeterminism_as_workflow_fail_for_types = workflow_definitions.values.map do |defn|
+ next unless defn.failure_exception_types.any? { |t| t.is_a?(Class) && t >= Workflow::NondeterminismError }
+
+ # If they tried to do this on a dynamic workflow and haven't already set worker-level option, warn
+ unless defn.name || nondeterminism_as_workflow_fail
+ warn('Note, dynamic workflows cannot trap non-determinism errors, so worker-level ' \
+ 'workflow_failure_exception_types should be set to capture that if that is the intention')
+ end
+ defn.name
+ end.compact
+
# Create the bridge worker
@bridge_worker = Internal::Bridge::Worker.new(
client.connection._core_client,
Internal::Bridge::Worker::Options.new(
activity: !activities.empty?,
- workflow: false,
+ workflow: !workflows.empty?,
namespace: client.namespace,
task_queue:,
tuner: Internal::Bridge::Worker::TunerOptions.new(
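The `nondeterminism_as_workflow_fail` check above relies on `Module#>=`: `t >= SomeError` is true when `t` is `SomeError` itself or one of its ancestors, i.e. when a raised `SomeError` would be caught by `rescue t`. Note that for unrelated classes `>=` returns `nil` (falsy), which is why the guard still works inside `any?`. A sketch with hypothetical exception classes:

```ruby
# Hypothetical exception hierarchy standing in for Workflow::NondeterminismError
class AppError < StandardError; end
class NondeterminismError < AppError; end

types = [ArgumentError, AppError]
# True if any configured type would rescue a NondeterminismError
matches = types.any? { |t| t.is_a?(Class) && t >= NondeterminismError }
matches # => true (AppError is an ancestor of NondeterminismError)

StandardError >= NondeterminismError # => true
ArgumentError >= NondeterminismError # => nil (unrelated classes compare as nil)
```

The `t.is_a?(Class)` guard matters because the configured list may contain non-class entries, and `>=` on a non-module would raise.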
@@ -308,26 +442,42 @@ def initialize(
max_concurrent_workflow_task_polls:,
nonsticky_to_sticky_poll_ratio:,
max_concurrent_activity_task_polls:,
- no_remote_activities:,
+ # For shutdown to work properly, we must disable remote activities
+ # ourselves if there are no activities
+ no_remote_activities: no_remote_activities || activities.empty?,
sticky_queue_schedule_to_start_timeout:,
max_heartbeat_throttle_interval:,
default_heartbeat_throttle_interval:,
max_worker_activities_per_second: max_activities_per_second,
max_task_queue_activities_per_second:,
graceful_shutdown_period:,
- use_worker_versioning:
+ use_worker_versioning:,
+ nondeterminism_as_workflow_fail:,
+ nondeterminism_as_workflow_fail_for_types:
)
)
# Collect interceptors from client and params
- @all_interceptors = client.options.interceptors.select { |i| i.is_a?(Interceptor) } + interceptors
+ @activity_interceptors = (client.options.interceptors + interceptors).select do |i|
+ i.is_a?(Interceptor::Activity)
+ end
+ @workflow_interceptors = (client.options.interceptors + interceptors).select do |i|
+ i.is_a?(Interceptor::Workflow)
+ end
# Cancellation for the whole worker
@worker_shutdown_cancellation = Cancellation.new
# Create workers
- # TODO(cretz): Make conditional when workflows appear
- @activity_worker = Internal::Worker::ActivityWorker.new(self, @bridge_worker)
+ unless activities.empty?
+ @activity_worker = Internal::Worker::ActivityWorker.new(worker: self,
+ bridge_worker: @bridge_worker)
+ end
+ unless workflows.empty?
+ @workflow_worker = Internal::Worker::WorkflowWorker.new(worker: self,
+ bridge_worker: @bridge_worker,
+ workflow_definitions:)
+ end
# Validate worker
@bridge_worker.validate
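Splitting the combined client-plus-worker interceptor list by module, as the `select` calls above do, works because `include`-ing a module makes instances answer `is_a?(TheModule)`. A minimal sketch with hypothetical interceptor modules:

```ruby
# Hypothetical marker modules standing in for Interceptor::Activity / Interceptor::Workflow
module ActivityInterceptor; end
module WorkflowInterceptor; end

class LoggingInterceptor
  include ActivityInterceptor
  include WorkflowInterceptor # one class may serve both roles
end

class MetricsInterceptor
  include ActivityInterceptor
end

all = [LoggingInterceptor.new, MetricsInterceptor.new]
activity = all.select { |i| i.is_a?(ActivityInterceptor) }
workflow = all.select { |i| i.is_a?(WorkflowInterceptor) }
[activity.size, workflow.size] # => [2, 1]
```

An interceptor that includes both modules lands in both filtered lists, which is why the two `select` calls scan the same combined array.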
@@ -387,16 +537,35 @@ def _bridge_worker
end
# @!visibility private
- def _all_interceptors
- @all_interceptors
+ def _activity_interceptors
+ @activity_interceptors
end
# @!visibility private
- def _on_poll_bytes(worker_type, bytes)
- # TODO(cretz): Workflow workers
- raise "Unrecognized worker type #{worker_type}" unless worker_type == :activity
+ def _workflow_interceptors
+ @workflow_interceptors
+ end
- @activity_worker.handle_task(Internal::Bridge::Api::ActivityTask::ActivityTask.decode(bytes))
+ # @!visibility private
+ def _on_poll_bytes(runner, worker_type, bytes)
+ case worker_type
+ when :activity
+ @activity_worker.handle_task(Internal::Bridge::Api::ActivityTask::ActivityTask.decode(bytes))
+ when :workflow
+ @workflow_worker.handle_activation(
+ runner:,
+ activation: Internal::Bridge::Api::WorkflowActivation::WorkflowActivation.decode(bytes),
+ decoded: false
+ )
+ else
+ raise "Unrecognized worker type #{worker_type}"
+ end
+ end
+
+ # @!visibility private
+ def _on_shutdown_complete
+ @workflow_worker&.on_shutdown_complete
+ @workflow_worker = nil
end
private
diff --git a/temporalio/lib/temporalio/worker/activity_executor.rb b/temporalio/lib/temporalio/worker/activity_executor.rb
index fa9bd863..b6cf4731 100644
--- a/temporalio/lib/temporalio/worker/activity_executor.rb
+++ b/temporalio/lib/temporalio/worker/activity_executor.rb
@@ -21,7 +21,7 @@ def self.defaults
# allows executor implementations to do eager validation based on the definition. This does not have to be
# implemented and the default is a no-op.
#
- # @param defn [Activity::Definition] Activity definition.
+ # @param defn [Activity::Definition::Info] Activity definition info.
def initialize_activity(defn)
# Default no-op
end
@@ -29,7 +29,7 @@ def initialize_activity(defn)
# Execute the given block in the executor. The block is built to never raise and need no arguments. Implementers
# must implement this.
#
- # @param defn [Activity::Definition] Activity definition.
+ # @param defn [Activity::Definition::Info] Activity definition info.
# @yield Block to execute.
def execute_activity(defn, &)
raise NotImplementedError
@@ -45,7 +45,7 @@ def activity_context
# {execute_activity} with a context before user code is executed and with nil after user code is complete.
# Implementers must implement this.
#
- # @param defn [Activity::Definition] Activity definition.
+ # @param defn [Activity::Definition::Info] Activity definition info.
# @param context [Activity::Context, nil] The value to set.
def set_activity_context(defn, context)
raise NotImplementedError
diff --git a/temporalio/lib/temporalio/worker/activity_executor/thread_pool.rb b/temporalio/lib/temporalio/worker/activity_executor/thread_pool.rb
index 0ccb4e75..04ad7616 100644
--- a/temporalio/lib/temporalio/worker/activity_executor/thread_pool.rb
+++ b/temporalio/lib/temporalio/worker/activity_executor/thread_pool.rb
@@ -1,54 +1,27 @@
# frozen_string_literal: true
-# Much of this logic taken from
-# https://github.com/ruby-concurrency/concurrent-ruby/blob/044020f44b36930b863b930f3ee8fa1e9f750469/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb,
-# see MIT license at
-# https://github.com/ruby-concurrency/concurrent-ruby/blob/044020f44b36930b863b930f3ee8fa1e9f750469/LICENSE.txt
+require 'temporalio/worker/thread_pool'
module Temporalio
class Worker
class ActivityExecutor
- # Activity executor for scheduling activities in their own thread. This implementation is a stripped down form of
- # Concurrent Ruby's `CachedThreadPool`.
+ # Activity executor for scheduling activities in their own thread using {Worker::ThreadPool}.
class ThreadPool < ActivityExecutor
- # @return [ThreadPool] Default/shared thread pool executor instance with unlimited max threads.
+ # @return [ThreadPool] Default/shared thread pool executor using default thread pool.
def self.default
@default ||= new
end
- # @!visibility private
- def self._monotonic_time
- Process.clock_gettime(Process::CLOCK_MONOTONIC)
- end
-
- # Create a new thread pool executor that creates threads as needed.
+ # Create a new thread pool executor.
#
- # @param max_threads [Integer, nil] Maximum number of thread workers to create, or nil for unlimited max.
- # @param idle_timeout [Float] Number of seconds before a thread worker with no work should be stopped. Note,
- # the check of whether a thread worker is idle is only done on each new activity.
- def initialize(max_threads: nil, idle_timeout: 20) # rubocop:disable Lint/MissingSuper
- @max_threads = max_threads
- @idle_timeout = idle_timeout
-
- @mutex = Mutex.new
- @pool = []
- @ready = []
- @queue = []
- @scheduled_task_count = 0
- @completed_task_count = 0
- @largest_length = 0
- @workers_counter = 0
- @prune_interval = @idle_timeout / 2
- @next_prune_time = ThreadPool._monotonic_time + @prune_interval
+ # @param thread_pool [Worker::ThreadPool] Thread pool to use.
+ def initialize(thread_pool = Worker::ThreadPool.default) # rubocop:disable Lint/MissingSuper
+ @thread_pool = thread_pool
end
# @see ActivityExecutor.execute_activity
- def execute_activity(_defn, &block)
- @mutex.synchronize do
- locked_assign_worker(&block) || locked_enqueue(&block)
- @scheduled_task_count += 1
- locked_prune_pool if @next_prune_time < ThreadPool._monotonic_time
- end
+ def execute_activity(_defn, &)
+ @thread_pool.execute(&)
end
# @see ActivityExecutor.activity_context
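After this change the executor owns no threads itself; `execute_activity` just forwards its block to a shared pool using the anonymous block parameter `&` (Ruby 3.1+). A self-contained sketch with a toy single-thread pool (all names hypothetical, not the real `Worker::ThreadPool` API):

```ruby
# Minimal stand-in for a shared thread pool with an execute method
class TinyPool
  def initialize
    @queue = Queue.new
    @thread = Thread.new do
      # nil is the stop sentinel; jobs run FIFO on this one thread
      while (job = @queue.pop)
        job.call
      end
    end
  end

  def execute(&block)
    @queue << block
  end

  def shutdown
    @queue << nil
    @thread.join
  end
end

# Executor that only delegates, forwarding the block with anonymous `&`
class PoolBackedExecutor
  def initialize(pool)
    @pool = pool
  end

  def execute_activity(_defn, &)
    @pool.execute(&)
  end
end

pool = TinyPool.new
results = Queue.new
executor = PoolBackedExecutor.new(pool)
3.times { |i| executor.execute_activity(:defn) { results << i * 10 } }
pool.shutdown
results.size # => 3
```

This mirrors the design choice in the diff: pooling, pruning, and idle-timeout logic move into one shared `Worker::ThreadPool`, and the activity executor shrinks to a thin adapter.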
@@ -67,187 +40,6 @@ def set_activity_context(defn, context)
thread.raise(Error::CanceledError.new('Activity canceled')) if thread[:temporal_activity_context] == context
end
end
-
- # @return [Integer] The largest number of threads that have been created in the pool since construction.
- def largest_length
- @mutex.synchronize { @largest_length }
- end
-
- # @return [Integer] The number of tasks that have been scheduled for execution on the pool since construction.
- def scheduled_task_count
- @mutex.synchronize { @scheduled_task_count }
- end
-
- # @return [Integer] The number of tasks that have been completed by the pool since construction.
- def completed_task_count
- @mutex.synchronize { @completed_task_count }
- end
-
- # @return [Integer] The number of threads that are actively executing tasks.
- def active_count
- @mutex.synchronize { @pool.length - @ready.length }
- end
-
- # @return [Integer] The number of threads currently in the pool.
- def length
- @mutex.synchronize { @pool.length }
- end
-
- # @return [Integer] The number of tasks in the queue awaiting execution.
- def queue_length
- @mutex.synchronize { @queue.length }
- end
-
- # Gracefully shutdown each thread when it is done with its current task. This should not be called until all
- # workers using this executor are complete. This does not need to be called at all on program exit (e.g. for the
- # global default).
- def shutdown
- @mutex.synchronize do
- # Stop all workers
- @pool.each(&:stop)
- end
- end
-
- # Kill each thread. This should not be called until all workers using this executor are complete. This does not
- # need to be called at all on program exit (e.g. for the global default).
- def kill
- @mutex.synchronize do
- # Kill all workers
- @pool.each(&:kill)
- @pool.clear
- @ready.clear
- end
- end
-
- # @!visibility private
- def _remove_busy_worker(worker)
- @mutex.synchronize { locked_remove_busy_worker(worker) }
- end
-
- # @!visibility private
- def _ready_worker(worker, last_message)
- @mutex.synchronize { locked_ready_worker(worker, last_message) }
- end
-
- # @!visibility private
- def _worker_died(worker)
- @mutex.synchronize { locked_worker_died(worker) }
- end
-
- # @!visibility private
- def _worker_task_completed
- @mutex.synchronize { @completed_task_count += 1 }
- end
-
- private
-
- def locked_assign_worker(&block)
- # keep growing if the pool is not at the minimum yet
- worker, = @ready.pop || locked_add_busy_worker
- if worker
- worker << block
- true
- else
- false
- end
- end
-
- def locked_enqueue(&block)
- @queue << block
- end
-
- def locked_add_busy_worker
- return if @max_threads && @pool.size >= @max_threads
-
- @workers_counter += 1
- @pool << (worker = Worker.new(self, @workers_counter))
- @largest_length = @pool.length if @pool.length > @largest_length
- worker
- end
-
- def locked_prune_pool
- now = ThreadPool._monotonic_time
- stopped_workers = 0
- while !@ready.empty? && (@pool.size - stopped_workers).positive?
- worker, last_message = @ready.first
- break unless now - last_message > @idle_timeout
-
- stopped_workers += 1
- @ready.shift
- worker << :stop
-
- end
-
- @next_prune_time = ThreadPool._monotonic_time + @prune_interval
- end
-
- def locked_remove_busy_worker(worker)
- @pool.delete(worker)
- end
-
- def locked_ready_worker(worker, last_message)
- block = @queue.shift
- if block
- worker << block
- else
- @ready.push([worker, last_message])
- end
- end
-
- def locked_worker_died(worker)
- locked_remove_busy_worker(worker)
- replacement_worker = locked_add_busy_worker
- locked_ready_worker(replacement_worker, ThreadPool._monotonic_time) if replacement_worker
- end
-
- # @!visibility private
- class Worker
- def initialize(pool, id)
- @queue = Queue.new
- @thread = Thread.new(@queue, pool) do |my_queue, my_pool|
- catch(:stop) do
- loop do
- case block = my_queue.pop
- when :stop
- pool._remove_busy_worker(self)
- throw :stop
- else
- begin
- block.call
- my_pool._worker_task_completed
- my_pool._ready_worker(self, ThreadPool._monotonic_time)
- rescue StandardError => e
- # Ignore
- warn("Unexpected activity block error: #{e}")
- rescue Exception => e # rubocop:disable Lint/RescueException
- warn("Unexpected activity block exception: #{e}")
- my_pool._worker_died(self)
- throw :stop
- end
- end
- end
- end
- end
- @thread.name = "activity-thread-#{id}"
- end
-
- # @!visibility private
- def <<(block)
- @queue << block
- end
-
- # @!visibility private
- def stop
- @queue << :stop
- end
-
- # @!visibility private
- def kill
- @thread.kill
- end
- end
-
- private_constant :Worker
end
end
end
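The surviving `set_activity_context` above stores the context in `thread[:temporal_activity_context]` and later scans `Thread.list` to find the thread running a given activity (e.g. to deliver a cancel via `Thread#raise`). `Thread#[]`/`Thread#[]=` are fiber-local storage, which is equivalent here since each activity block runs at the top level of its thread. A deterministic sketch of the lookup, omitting the `raise` delivery (key name hypothetical):

```ruby
ctx = Object.new
ready = Queue.new
release = Queue.new

worker = Thread.new do
  Thread.current[:activity_context] = ctx # set on entry to user code
  ready << true
  release.pop                             # simulate the activity doing work
  Thread.current[:activity_context] = nil # clear on exit
end

ready.pop # wait until the context is set
found = Thread.list.find { |t| t[:activity_context] == ctx }
found == worker # => true
release << true
worker.join
```

The two queues make the example deterministic: the main thread only scans `Thread.list` after the worker has published its context.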
diff --git a/temporalio/lib/temporalio/worker/interceptor.rb b/temporalio/lib/temporalio/worker/interceptor.rb
index cc0d0f54..9a6c7808 100644
--- a/temporalio/lib/temporalio/worker/interceptor.rb
+++ b/temporalio/lib/temporalio/worker/interceptor.rb
@@ -2,85 +2,373 @@
module Temporalio
class Worker
- # Mixin for intercepting worker work. Clases that `include` may implement their own {intercept_activity} that
- # returns their own instance of {ActivityInbound}.
- #
- # @note Input classes herein may get new required fields added and therefore the constructors of the Input classes
- # may change in backwards incompatible ways. Users should not try to construct Input classes themselves.
module Interceptor
- # Method called when intercepting an activity. This is called when starting an activity attempt.
+ # Mixin for intercepting activity worker work. Classes that `include` may implement their own {intercept_activity}
+ # that returns their own instance of {Inbound}.
#
- # @param next_interceptor [ActivityInbound] Next interceptor in the chain that should be called. This is usually
- # passed to {ActivityInbound} constructor.
- # @return [ActivityInbound] Interceptor to be called for activity calls.
- def intercept_activity(next_interceptor)
- next_interceptor
- end
-
- # Input for {ActivityInbound.execute}.
- ExecuteActivityInput = Struct.new(
- :proc,
- :args,
- :headers,
- keyword_init: true
- )
-
- # Input for {ActivityOutbound.heartbeat}.
- HeartbeatActivityInput = Struct.new(
- :details,
- keyword_init: true
- )
-
- # Inbound interceptor for intercepting inbound activity calls. This should be extended by users needing to
- # intercept activities.
- class ActivityInbound
- # @return [ActivityInbound] Next interceptor in the chain.
- attr_reader :next_interceptor
-
- # Initialize inbound with the next interceptor in the chain.
+ # @note Input classes herein may get new required fields added and therefore the constructors of the Input classes
+ # may change in backwards incompatible ways. Users should not try to construct Input classes themselves.
+ module Activity
+ # Method called when intercepting an activity. This is called when starting an activity attempt.
#
- # @param next_interceptor [ActivityInbound] Next interceptor in the chain.
- def initialize(next_interceptor)
- @next_interceptor = next_interceptor
+ # @param next_interceptor [Inbound] Next interceptor in the chain that should be called. This is usually passed
+ # to {Inbound} constructor.
+ # @return [Inbound] Interceptor to be called for activity calls.
+ def intercept_activity(next_interceptor)
+ next_interceptor
end
- # Initialize the outbound interceptor. This should be extended by users to return their own {ActivityOutbound}
- # implementation that wraps the parameter here.
- #
- # @param outbound [ActivityOutbound] Next outbound interceptor in the chain.
- # @return [ActivityOutbound] Outbound activity interceptor.
- def init(outbound)
- @next_interceptor.init(outbound)
+ # Input for {Inbound.execute}.
+ ExecuteInput = Struct.new(
+ :proc,
+ :args,
+ :headers,
+ keyword_init: true
+ )
+
+ # Inbound interceptor for intercepting inbound activity calls. This should be extended by users needing to
+ # intercept activities.
+ class Inbound
+ # @return [Inbound] Next interceptor in the chain.
+ attr_reader :next_interceptor
+
+ # Initialize inbound with the next interceptor in the chain.
+ #
+ # @param next_interceptor [Inbound] Next interceptor in the chain.
+ def initialize(next_interceptor)
+ @next_interceptor = next_interceptor
+ end
+
+ # Initialize the outbound interceptor. This should be extended by users to return their own {Outbound}
+ # implementation that wraps the parameter here.
+ #
+ # @param outbound [Outbound] Next outbound interceptor in the chain.
+ # @return [Outbound] Outbound activity interceptor.
+ def init(outbound)
+ @next_interceptor.init(outbound)
+ end
+
+ # Execute an activity and return result or raise exception. Next interceptor in chain (i.e. `super`) will
+ # perform the execution.
+ #
+ # @param input [ExecuteInput] Input information.
+ # @return [Object] Activity result.
+ def execute(input)
+ @next_interceptor.execute(input)
+ end
end
- # Execute an activity and return result or raise exception. Next interceptor in chain (i.e. `super`) will
- # perform the execution.
- #
- # @param input [ExecuteActivityInput] Input information.
- # @return [Object] Activity result.
- def execute(input)
- @next_interceptor.execute(input)
+ # Input for {Outbound.heartbeat}.
+ HeartbeatInput = Struct.new(
+ :details,
+ keyword_init: true
+ )
+
+ # Outbound interceptor for intercepting outbound activity calls. This should be extended by users needing to
+ # intercept activity calls.
+ class Outbound
+ # @return [Outbound] Next interceptor in the chain.
+ attr_reader :next_interceptor
+
+ # Initialize outbound with the next interceptor in the chain.
+ #
+ # @param next_interceptor [Outbound] Next interceptor in the chain.
+ def initialize(next_interceptor)
+ @next_interceptor = next_interceptor
+ end
+
+ # Issue a heartbeat.
+ #
+ # @param input [HeartbeatInput] Input information.
+ def heartbeat(input)
+ @next_interceptor.heartbeat(input)
+ end
end
end
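The inbound/outbound classes above form a classic interceptor chain: each wraps the next, user subclasses override a method and call `super` to continue, and the innermost link performs the real work. A hypothetical sketch of composing such a chain (names are illustrative, not the SDK's wiring):

```ruby
# Hypothetical input struct mirroring the ExecuteInput shape above
ExecInput = Struct.new(:args, keyword_init: true)

# Base inbound interceptor: delegates to the next in the chain
class BaseInbound
  attr_reader :next_interceptor

  def initialize(next_interceptor)
    @next_interceptor = next_interceptor
  end

  def execute(input)
    @next_interceptor.execute(input)
  end
end

# Terminal handler standing in for actual activity execution
class RootHandler
  def execute(input)
    input.args.sum
  end
end

# User interceptor: wraps execution, calling `super` to continue the chain
class TracingInterceptor < BaseInbound
  def initialize(next_interceptor, trace)
    super(next_interceptor)
    @trace = trace
  end

  def execute(input)
    @trace << :before
    result = super
    @trace << :after
    result
  end
end

trace = []
chain = TracingInterceptor.new(RootHandler.new, trace)
result = chain.execute(ExecInput.new(args: [1, 2, 3]))
[result, trace] # => [6, [:before, :after]]
```

This is why the docs say "Next interceptor in chain (i.e. `super`) will perform the execution": overriding plus `super` threads a call through every registered interceptor down to the root.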
- # Outbound interceptor for intercepting outbound activity calls. This should be extended by users needing to
- # intercept activity calls.
- class ActivityOutbound
- # @return [ActivityInbound] Next interceptor in the chain.
- attr_reader :next_interceptor
-
- # Initialize outbound with the next interceptor in the chain.
+ # Mixin for intercepting workflow worker work. Classes that `include` may implement their own {intercept_workflow}
+ # that returns their own instance of {Inbound}.
+ #
+ # @note Input classes herein may get new required fields added and therefore the constructors of the Input classes
+ # may change in backwards incompatible ways. Users should not try to construct Input classes themselves.
+ module Workflow
+ # Method called when intercepting a workflow. This is called when creating a workflow instance.
#
- # @param next_interceptor [ActivityOutbound] Next interceptor in the chain.
- def initialize(next_interceptor)
- @next_interceptor = next_interceptor
+ # @param next_interceptor [Inbound] Next interceptor in the chain that should be called. This is usually passed
+ # to {Inbound} constructor.
+ # @return [Inbound] Interceptor to be called for workflow calls.
+ def intercept_workflow(next_interceptor)
+ next_interceptor
end
- # Issue a heartbeat.
- #
- # @param input [HeartbeatActivityInput] Input information.
- def heartbeat(input)
- @next_interceptor.heartbeat(input)
+ # Input for {Inbound.execute}.
+ ExecuteInput = Struct.new(
+ :args,
+ :headers,
+ keyword_init: true
+ )
+
+ # Input for {Inbound.handle_signal}.
+ HandleSignalInput = Struct.new(
+ :signal,
+ :args,
+ :definition,
+ :headers,
+ keyword_init: true
+ )
+
+ # Input for {Inbound.handle_query}.
+ HandleQueryInput = Struct.new(
+ :id,
+ :query,
+ :args,
+ :definition,
+ :headers,
+ keyword_init: true
+ )
+
+ # Input for {Inbound.validate_update} and {Inbound.handle_update}.
+ HandleUpdateInput = Struct.new(
+ :id,
+ :update,
+ :args,
+ :definition,
+ :headers,
+ keyword_init: true
+ )
+
+ # Inbound interceptor for intercepting inbound workflow calls. This should be extended by users needing to
+ # intercept workflows.
+ class Inbound
+ # @return [Inbound] Next interceptor in the chain.
+ attr_reader :next_interceptor
+
+ # Initialize inbound with the next interceptor in the chain.
+ #
+ # @param next_interceptor [Inbound] Next interceptor in the chain.
+ def initialize(next_interceptor)
+ @next_interceptor = next_interceptor
+ end
+
+ # Initialize the outbound interceptor. This should be extended by users to return their own {Outbound}
+ # implementation that wraps the parameter here.
+ #
+ # @param outbound [Outbound] Next outbound interceptor in the chain.
+ # @return [Outbound] Outbound workflow interceptor.
+ def init(outbound)
+ @next_interceptor.init(outbound)
+ end
+
+ # Execute a workflow and return result or raise exception. Next interceptor in chain (i.e. `super`) will
+ # perform the execution.
+ #
+ # @param input [ExecuteInput] Input information.
+ # @return [Object] Workflow result.
+ def execute(input)
+ @next_interceptor.execute(input)
+ end
+
+ # Handle a workflow signal. Next interceptor in chain (i.e. `super`) will perform the handling.
+ #
+ # @param input [HandleSignalInput] Input information.
+ def handle_signal(input)
+ @next_interceptor.handle_signal(input)
+ end
+
+ # Handle a workflow query and return result or raise exception. Next interceptor in chain (i.e. `super`) will
+ # perform the handling.
+ #
+ # @param input [HandleQueryInput] Input information.
+ # @return [Object] Query result.
+ def handle_query(input)
+ @next_interceptor.handle_query(input)
+ end
+
+ # Validate a workflow update. Next interceptor in chain (i.e. `super`) will perform the validation.
+ #
+ # @param input [HandleUpdateInput] Input information.
+ def validate_update(input)
+ @next_interceptor.validate_update(input)
+ end
+
+ # Handle a workflow update and return result or raise exception. Next interceptor in chain (i.e. `super`) will
+ # perform the handling.
+ #
+ # @param input [HandleUpdateInput] Input information.
+ # @return [Object] Update result.
+ def handle_update(input)
+ @next_interceptor.handle_update(input)
+ end
+ end
+
+ # Input for {Outbound.cancel_external_workflow}.
+ CancelExternalWorkflowInput = Struct.new(
+ :id,
+ :run_id,
+ keyword_init: true
+ )
+
+ # Input for {Outbound.execute_activity}.
+ ExecuteActivityInput = Struct.new(
+ :activity,
+ :args,
+ :task_queue,
+ :schedule_to_close_timeout,
+ :schedule_to_start_timeout,
+ :start_to_close_timeout,
+ :heartbeat_timeout,
+ :retry_policy,
+ :cancellation,
+ :cancellation_type,
+ :activity_id,
+ :disable_eager_execution,
+ :headers,
+ keyword_init: true
+ )
+
+ # Input for {Outbound.execute_local_activity}.
+ ExecuteLocalActivityInput = Struct.new(
+ :activity,
+ :args,
+ :schedule_to_close_timeout,
+ :schedule_to_start_timeout,
+ :start_to_close_timeout,
+ :retry_policy,
+ :local_retry_threshold,
+ :cancellation,
+ :cancellation_type,
+ :activity_id,
+ :headers,
+ keyword_init: true
+ )
+
+ # Input for {Outbound.initialize_continue_as_new_error}.
+ InitializeContinueAsNewErrorInput = Struct.new(
+ :error,
+ keyword_init: true
+ )
+
+ # Input for {Outbound.signal_child_workflow}.
+ SignalChildWorkflowInput = Struct.new(
+ :id,
+ :signal,
+ :args,
+ :cancellation,
+ :headers,
+ keyword_init: true
+ )
+
+ # Input for {Outbound.signal_external_workflow}.
+ SignalExternalWorkflowInput = Struct.new(
+ :id,
+ :run_id,
+ :signal,
+ :args,
+ :cancellation,
+ :headers,
+ keyword_init: true
+ )
+
+ # Input for {Outbound.sleep}.
+ SleepInput = Struct.new(
+ :duration,
+ :summary,
+ :cancellation,
+ keyword_init: true
+ )
+
+ # Input for {Outbound.start_child_workflow}.
+ StartChildWorkflowInput = Struct.new(
+ :workflow,
+ :args,
+ :id,
+ :task_queue,
+ :cancellation,
+ :cancellation_type,
+ :parent_close_policy,
+ :execution_timeout,
+ :run_timeout,
+ :task_timeout,
+ :id_reuse_policy,
+ :retry_policy,
+ :cron_schedule,
+ :memo,
+ :search_attributes,
+ :headers,
+ keyword_init: true
+ )
+
+ # Outbound interceptor for intercepting outbound workflow calls. This should be extended by users needing to
+ # intercept workflow calls.
+ class Outbound
+ # @return [Outbound] Next interceptor in the chain.
+ attr_reader :next_interceptor
+
+ # Initialize outbound with the next interceptor in the chain.
+ #
+ # @param next_interceptor [Outbound] Next interceptor in the chain.
+ def initialize(next_interceptor)
+ @next_interceptor = next_interceptor
+ end
+
+ # Cancel external workflow.
+ #
+ # @param input [CancelExternalWorkflowInput] Input.
+ def cancel_external_workflow(input)
+ @next_interceptor.cancel_external_workflow(input)
+ end
+
+ # Execute activity.
+ #
+ # @param input [ExecuteActivityInput] Input.
+ # @return [Object] Activity result.
+ def execute_activity(input)
+ @next_interceptor.execute_activity(input)
+ end
+
+ # Execute local activity.
+ #
+ # @param input [ExecuteLocalActivityInput] Input.
+ # @return [Object] Activity result.
+ def execute_local_activity(input)
+ @next_interceptor.execute_local_activity(input)
+ end
+
+ # Initialize continue as new error.
+ #
+ # @param input [InitializeContinueAsNewErrorInput] Input.
+ def initialize_continue_as_new_error(input)
+ @next_interceptor.initialize_continue_as_new_error(input)
+ end
+
+ # Signal child workflow.
+ #
+ # @param input [SignalChildWorkflowInput] Input.
+ def signal_child_workflow(input)
+ @next_interceptor.signal_child_workflow(input)
+ end
+
+ # Signal external workflow.
+ #
+ # @param input [SignalExternalWorkflowInput] Input.
+ def signal_external_workflow(input)
+ @next_interceptor.signal_external_workflow(input)
+ end
+
+ # Sleep.
+ #
+ # @param input [SleepInput] Input.
+ def sleep(input)
+ @next_interceptor.sleep(input)
+ end
+
+ # Start child workflow.
+ #
+ # @param input [StartChildWorkflowInput] Input.
+ # @return [Workflow::ChildWorkflowHandle] Child workflow handle.
+ def start_child_workflow(input)
+ @next_interceptor.start_child_workflow(input)
+ end
end
end
end
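The `Outbound` interceptor above follows a chain-of-responsibility pattern: each interceptor holds the next one and delegates by default, so a user subclass overrides only the calls it cares about and invokes `super` to continue the chain. A minimal self-contained sketch of that pattern (class names here are illustrative, not the SDK's actual classes):

```ruby
# Simplified base interceptor: holds the next link and delegates by default.
class BaseOutbound
  attr_reader :next_interceptor

  def initialize(next_interceptor)
    @next_interceptor = next_interceptor
  end

  def execute_activity(input)
    @next_interceptor.execute_activity(input)
  end
end

# Terminal link that does the "real" work (stands in for SDK internals).
class Terminal
  def execute_activity(input)
    "result for #{input}"
  end
end

# A user interceptor: do work before/after, then delegate via `super`.
class LoggingOutbound < BaseOutbound
  def execute_activity(input)
    puts "executing activity: #{input}"
    result = super
    puts "activity done"
    result
  end
end

chain = LoggingOutbound.new(Terminal.new)
chain.execute_activity('my-activity') # => "result for my-activity"
```

Because every method on the base delegates unchanged, stacking multiple interceptors is just nested construction: `A.new(B.new(Terminal.new))`.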
diff --git a/temporalio/lib/temporalio/worker/thread_pool.rb b/temporalio/lib/temporalio/worker/thread_pool.rb
new file mode 100644
index 00000000..d9ffe829
--- /dev/null
+++ b/temporalio/lib/temporalio/worker/thread_pool.rb
@@ -0,0 +1,237 @@
+# frozen_string_literal: true
+
+# Much of this logic taken from
+# https://github.com/ruby-concurrency/concurrent-ruby/blob/044020f44b36930b863b930f3ee8fa1e9f750469/lib/concurrent-ruby/concurrent/executor/ruby_thread_pool_executor.rb,
+# see MIT license at
+# https://github.com/ruby-concurrency/concurrent-ruby/blob/044020f44b36930b863b930f3ee8fa1e9f750469/LICENSE.txt
+
+module Temporalio
+ class Worker
+ # Implementation of a thread pool. This implementation is a stripped down form of Concurrent Ruby's
+ # `CachedThreadPool`.
+ class ThreadPool
+ # @return [ThreadPool] Default/shared thread pool instance with unlimited max threads.
+ def self.default
+ @default ||= new
+ end
+
+ # @!visibility private
+ def self._monotonic_time
+ Process.clock_gettime(Process::CLOCK_MONOTONIC)
+ end
+
+ # Create a new thread pool that creates threads as needed.
+ #
+ # @param max_threads [Integer, nil] Maximum number of thread workers to create, or nil for unlimited max.
+ # @param idle_timeout [Float] Number of seconds before a thread worker with no work should be stopped. Note,
+ # the check of whether a thread worker is idle is only done on each new {execute} call.
+ def initialize(max_threads: nil, idle_timeout: 20)
+ @max_threads = max_threads
+ @idle_timeout = idle_timeout
+
+ @mutex = Mutex.new
+ @pool = []
+ @ready = []
+ @queue = []
+ @scheduled_task_count = 0
+ @completed_task_count = 0
+ @largest_length = 0
+ @workers_counter = 0
+ @prune_interval = @idle_timeout / 2
+ @next_prune_time = ThreadPool._monotonic_time + @prune_interval
+ end
+
+      # Execute the given block in a thread. The block must take no arguments and should be written to never raise.
+ #
+ # @yield Block to execute.
+ def execute(&block)
+ @mutex.synchronize do
+ locked_assign_worker(&block) || locked_enqueue(&block)
+ @scheduled_task_count += 1
+ locked_prune_pool if @next_prune_time < ThreadPool._monotonic_time
+ end
+ end
+
+ # @return [Integer] The largest number of threads that have been created in the pool since construction.
+ def largest_length
+ @mutex.synchronize { @largest_length }
+ end
+
+ # @return [Integer] The number of tasks that have been scheduled for execution on the pool since construction.
+ def scheduled_task_count
+ @mutex.synchronize { @scheduled_task_count }
+ end
+
+ # @return [Integer] The number of tasks that have been completed by the pool since construction.
+ def completed_task_count
+ @mutex.synchronize { @completed_task_count }
+ end
+
+ # @return [Integer] The number of threads that are actively executing tasks.
+ def active_count
+ @mutex.synchronize { @pool.length - @ready.length }
+ end
+
+ # @return [Integer] The number of threads currently in the pool.
+ def length
+ @mutex.synchronize { @pool.length }
+ end
+
+ # @return [Integer] The number of tasks in the queue awaiting execution.
+ def queue_length
+ @mutex.synchronize { @queue.length }
+ end
+
+ # Gracefully shutdown each thread when it is done with its current task. This should not be called until all
+ # workers using this executor are complete. This does not need to be called at all on program exit (e.g. for the
+ # global default).
+ def shutdown
+ @mutex.synchronize do
+ # Stop all workers
+ @pool.each(&:stop)
+ end
+ end
+
+ # Kill each thread. This should not be called until all workers using this executor are complete. This does not
+ # need to be called at all on program exit (e.g. for the global default).
+ def kill
+ @mutex.synchronize do
+ # Kill all workers
+ @pool.each(&:kill)
+ @pool.clear
+ @ready.clear
+ end
+ end
+
+ # @!visibility private
+ def _remove_busy_worker(worker)
+ @mutex.synchronize { locked_remove_busy_worker(worker) }
+ end
+
+ # @!visibility private
+ def _ready_worker(worker, last_message)
+ @mutex.synchronize { locked_ready_worker(worker, last_message) }
+ end
+
+ # @!visibility private
+ def _worker_died(worker)
+ @mutex.synchronize { locked_worker_died(worker) }
+ end
+
+ # @!visibility private
+ def _worker_task_completed
+ @mutex.synchronize { @completed_task_count += 1 }
+ end
+
+ private
+
+ def locked_assign_worker(&block)
+        # Reuse a ready worker if one is available, otherwise try to grow the pool
+ worker, = @ready.pop || locked_add_busy_worker
+ if worker
+ worker << block
+ true
+ else
+ false
+ end
+ end
+
+ def locked_enqueue(&block)
+ @queue << block
+ end
+
+ def locked_add_busy_worker
+ return if @max_threads && @pool.size >= @max_threads
+
+ @workers_counter += 1
+ @pool << (worker = Worker.new(self, @workers_counter))
+ @largest_length = @pool.length if @pool.length > @largest_length
+ worker
+ end
+
+ def locked_prune_pool
+ now = ThreadPool._monotonic_time
+ stopped_workers = 0
+ while !@ready.empty? && (@pool.size - stopped_workers).positive?
+ worker, last_message = @ready.first
+ break unless now - last_message > @idle_timeout
+
+ stopped_workers += 1
+ @ready.shift
+ worker << :stop
+ end
+
+ @next_prune_time = ThreadPool._monotonic_time + @prune_interval
+ end
+
+ def locked_remove_busy_worker(worker)
+ @pool.delete(worker)
+ end
+
+ def locked_ready_worker(worker, last_message)
+ block = @queue.shift
+ if block
+ worker << block
+ else
+ @ready.push([worker, last_message])
+ end
+ end
+
+ def locked_worker_died(worker)
+ locked_remove_busy_worker(worker)
+ replacement_worker = locked_add_busy_worker
+ locked_ready_worker(replacement_worker, ThreadPool._monotonic_time) if replacement_worker
+ end
+
+ # @!visibility private
+ class Worker
+ def initialize(pool, id)
+ @queue = Queue.new
+ @thread = Thread.new(@queue, pool) do |my_queue, my_pool|
+ catch(:stop) do
+ loop do
+ case block = my_queue.pop
+ when :stop
+                pool._remove_busy_worker(self)
+                my_pool._remove_busy_worker(self)
+ throw :stop
+ else
+ begin
+ block.call
+ my_pool._worker_task_completed
+ my_pool._ready_worker(self, ThreadPool._monotonic_time)
+ rescue StandardError => e
+                  # Blocks are not expected to raise; warn and keep the worker alive
+ warn("Unexpected execute block error: #{e.full_message}")
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ warn("Unexpected execute block exception: #{e.full_message}")
+ my_pool._worker_died(self)
+ throw :stop
+ end
+ end
+ end
+ end
+ end
+ @thread.name = "temporal-thread-#{id}"
+ end
+
+ # @!visibility private
+ def <<(block)
+ @queue << block
+ end
+
+ # @!visibility private
+ def stop
+ @queue << :stop
+ end
+
+ # @!visibility private
+ def kill
+ @thread.kill
+ end
+ end
+
+ private_constant :Worker
+ end
+ end
+end
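The worker loop above boils down to popping blocks off a `Queue` and running them until a `:stop` sentinel arrives. A toy single-worker version in plain Ruby (the real pool additionally grows on demand, prunes idle workers, and replaces dead ones):

```ruby
queue = Queue.new
results = []

# Single worker thread: pop and run blocks until the :stop sentinel arrives.
worker = Thread.new do
  loop do
    block = queue.pop
    break if block == :stop

    block.call
  end
end

3.times { |i| queue << -> { results << i * 2 } }
queue << :stop
worker.join
results # => [0, 2, 4]
```

`Queue#pop` blocks when empty, so an idle worker simply parks inside `pop` until more work (or the sentinel) is enqueued.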
diff --git a/temporalio/lib/temporalio/worker/workflow_executor.rb b/temporalio/lib/temporalio/worker/workflow_executor.rb
new file mode 100644
index 00000000..b8cb48c1
--- /dev/null
+++ b/temporalio/lib/temporalio/worker/workflow_executor.rb
@@ -0,0 +1,27 @@
+# frozen_string_literal: true
+
+require 'temporalio/worker/workflow_executor/ractor'
+require 'temporalio/worker/workflow_executor/thread_pool'
+
+module Temporalio
+ class Worker
+ # Workflow executor that executes workflow tasks. Unlike {ActivityExecutor}, this class is not meant for user
+ # implementation. Instead, either {WorkflowExecutor::ThreadPool} or {WorkflowExecutor::Ractor} should be used.
+ class WorkflowExecutor
+ # @!visibility private
+ def initialize
+ raise 'Cannot create custom executors'
+ end
+
+ # @!visibility private
+ def _validate_worker(worker, worker_state)
+ raise NotImplementedError
+ end
+
+ # @!visibility private
+ def _activate(activation, worker_state, &)
+ raise NotImplementedError
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/worker/workflow_executor/ractor.rb b/temporalio/lib/temporalio/worker/workflow_executor/ractor.rb
new file mode 100644
index 00000000..0301d6ef
--- /dev/null
+++ b/temporalio/lib/temporalio/worker/workflow_executor/ractor.rb
@@ -0,0 +1,69 @@
+# frozen_string_literal: true
+
+require 'temporalio/internal/worker/workflow_instance'
+require 'temporalio/worker/workflow_executor'
+
+module Temporalio
+ class Worker
+ class WorkflowExecutor
+ # Ractor-based implementation of {WorkflowExecutor}.
+ #
+ # @note WARNING: This is not currently implemented. Do not try to use this class at this time.
+ class Ractor < WorkflowExecutor
+ include Singleton
+
+ def initialize # rubocop:disable Lint/MissingSuper
+ # Do nothing
+ end
+
+ # @!visibility private
+ def _validate_worker(_worker, _worker_state)
+ raise 'Ractor support is not currently working, please set ' \
+ 'workflow_executor to Temporalio::Worker::WorkflowExecutor::ThreadPool'
+ end
+
+ # @!visibility private
+ def _activate(activation, worker_state, &)
+ raise NotImplementedError
+ end
+
+ # TODO(cretz): This does not work with Google Protobuf
+ # steep:ignore:start
+
+ # @!visibility private
+ class Instance
+ def initialize(initial_details)
+ initial_details = ::Ractor.make_shareable(initial_details)
+
+ @ractor = ::Ractor.new do
+ # Receive initial details and create the instance
+ details = ::Ractor.receive
+ instance = Internal::Worker::WorkflowInstance.new(details)
+                ::Ractor.yield(nil)
+
+ # Now accept activations in a loop
+ loop do
+ activation = ::Ractor.receive
+ completion = instance.activate(activation)
+ ::Ractor.yield(completion)
+ end
+ end
+
+ # Send initial details and wait until yielded
+ @ractor.send(initial_details)
+ @ractor.take
+ end
+
+ # @!visibility private
+ def activate(activation)
+ @ractor.send(activation)
+ @ractor.take
+ end
+ end
+
+ private_constant :Instance
+ # steep:ignore:end
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/worker/workflow_executor/thread_pool.rb b/temporalio/lib/temporalio/worker/workflow_executor/thread_pool.rb
new file mode 100644
index 00000000..cf57f679
--- /dev/null
+++ b/temporalio/lib/temporalio/worker/workflow_executor/thread_pool.rb
@@ -0,0 +1,230 @@
+# frozen_string_literal: true
+
+require 'etc'
+require 'temporalio/internal/bridge/api'
+require 'temporalio/internal/proto_utils'
+require 'temporalio/internal/worker/workflow_instance'
+require 'temporalio/scoped_logger'
+require 'temporalio/worker/thread_pool'
+require 'temporalio/worker/workflow_executor'
+require 'temporalio/workflow'
+require 'temporalio/workflow/definition'
+require 'timeout'
+
+module Temporalio
+ class Worker
+ class WorkflowExecutor
+ # Thread pool implementation of {WorkflowExecutor}.
+ #
+ # Users should use {default} unless they have specific needs to change the thread pool or max threads.
+ class ThreadPool < WorkflowExecutor
+ # @return [ThreadPool] Default executor that lazily constructs an instance with default values.
+ def self.default
+ @default ||= ThreadPool.new
+ end
+
+ # Create a thread pool executor. Most users may prefer {default}.
+ #
+ # @param max_threads [Integer] Maximum number of threads to use concurrently.
+ # @param thread_pool [Worker::ThreadPool] Thread pool to use.
+ def initialize(max_threads: [4, Etc.nprocessors].max, thread_pool: Temporalio::Worker::ThreadPool.default) # rubocop:disable Lint/MissingSuper
+ @max_threads = max_threads
+ @thread_pool = thread_pool
+ @workers_mutex = Mutex.new
+ @workers = []
+ @workers_by_worker_state_and_run_id = {}
+ end
+
+ # @!visibility private
+ def _validate_worker(worker, worker_state)
+ # Do nothing
+ end
+
+ # @!visibility private
+ def _activate(activation, worker_state, &)
+ # Get applicable worker
+ worker = @workers_mutex.synchronize do
+ run_key = [worker_state, activation.run_id]
+ @workers_by_worker_state_and_run_id.fetch(run_key) do
+              # If not found, create a new worker if under the max, otherwise pick the one with the fewest workflows.
+              new_worker = if @workers.size < @max_threads
+                             created_worker = Worker.new(self)
+                             @workers << created_worker
+                             created_worker
+ else
+ @workers.min_by(&:workflow_count)
+ end
+ @workers_by_worker_state_and_run_id[run_key] = new_worker
+ new_worker.workflow_count += 1
+ new_worker
+ end
+ end
+ raise "No worker for run ID #{activation.run_id}" unless worker
+
+ # Enqueue activation
+ worker.enqueue_activation(activation, worker_state, &)
+ end
+
+ # @!visibility private
+ def _thread_pool
+ @thread_pool
+ end
+
+ # @!visibility private
+ def _remove_workflow(worker_state, run_id)
+ @workers_mutex.synchronize do
+ worker = @workers_by_worker_state_and_run_id.delete([worker_state, run_id])
+ if worker
+ worker.workflow_count -= 1
+ # Remove worker from array if done. The array should be small enough that the delete being O(N) is not
+ # worth using a set or a map.
+ if worker.workflow_count.zero?
+ @workers.delete(worker)
+ worker.shutdown
+ end
+ end
+ end
+ end
+
+ # @!visibility private
+ class Worker
+ LOG_ACTIVATIONS = false
+
+ attr_accessor :workflow_count
+
+ def initialize(executor)
+ @executor = executor
+ @workflow_count = 0
+ @queue = Queue.new
+ executor._thread_pool.execute { run }
+ end
+
+ # @!visibility private
+ def enqueue_activation(activation, worker_state, &completion_block)
+ @queue << [:activate, activation, worker_state, completion_block]
+ end
+
+ # @!visibility private
+ def shutdown
+ @queue << [:shutdown]
+ end
+
+ private
+
+ def run
+ loop do
+ work = @queue.pop
+ if work.is_a?(Exception)
+ Warning.warn("Failed activation: #{work}")
+ elsif work.is_a?(Array)
+ case work.first
+ when :shutdown
+ return
+ when :activate
+ activate(work[1], work[2], &work[3])
+ end
+ end
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ Warning.warn("Unexpected failure during run: #{e.full_message}")
+ end
+ end
+
+ def activate(activation, worker_state, &)
+ worker_state.logger.debug("Received workflow activation: #{activation}") if LOG_ACTIVATIONS
+
+ # Check whether it has eviction
+ cache_remove_job = activation.jobs.find { |j| !j.remove_from_cache.nil? }&.remove_from_cache
+
+ # If it's eviction only, just evict inline and do nothing else
+ if cache_remove_job && activation.jobs.size == 1
+ evict(worker_state, activation.run_id)
+ worker_state.logger.debug('Sending empty workflow completion') if LOG_ACTIVATIONS
+ yield Internal::Bridge::Api::WorkflowCompletion::WorkflowActivationCompletion.new(
+ run_id: activation.run_id,
+ successful: Internal::Bridge::Api::WorkflowCompletion::Success.new
+ )
+ return
+ end
+
+ completion = Timeout.timeout(
+ worker_state.deadlock_timeout,
+ DeadlockError,
+ # TODO(cretz): Document that this affects all running workflows on this worker
+ # and maybe test to see how that is mitigated
+ "[TMPRL1101] Potential deadlock detected: workflow didn't yield " \
+ "within #{worker_state.deadlock_timeout} second(s)."
+ ) do
+ # Get or create workflow
+ instance = worker_state.get_or_create_running_workflow(activation.run_id) do
+ create_instance(activation, worker_state)
+ end
+
+ # Activate. We expect most errors in here to have been captured inside.
+ instance.activate(activation)
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ worker_state.logger.error("Failed activation on workflow run ID: #{activation.run_id}")
+ worker_state.logger.error(e)
+ Internal::Worker::WorkflowInstance.new_completion_with_failure(
+ run_id: activation.run_id,
+ error: e,
+ failure_converter: worker_state.data_converter.failure_converter,
+ payload_converter: worker_state.data_converter.payload_converter
+ )
+ end
+
+ # Go ahead and evict if there is an eviction job
+ evict(worker_state, activation.run_id) if cache_remove_job
+
+ # Complete the activation
+ worker_state.logger.debug("Sending workflow completion: #{completion}") if LOG_ACTIVATIONS
+ yield completion
+ end
+
+ def create_instance(initial_activation, worker_state)
+ # Extract start job
+ init_job = initial_activation.jobs.find { |j| !j.initialize_workflow.nil? }&.initialize_workflow
+ raise 'Missing initialize job in initial activation' unless init_job
+
+ # Obtain definition
+ definition = worker_state.workflow_definitions[init_job.workflow_type] ||
+ worker_state.workflow_definitions[nil]
+ unless definition
+ raise Error::ApplicationError.new(
+ "Workflow type #{init_job.workflow_type} is not registered on this worker, available workflows: " +
+ worker_state.workflow_definitions.keys.compact.sort.join(', '),
+ type: 'NotFoundError'
+ )
+ end
+
+ Internal::Worker::WorkflowInstance.new(
+ Internal::Worker::WorkflowInstance::Details.new(
+ namespace: worker_state.namespace,
+ task_queue: worker_state.task_queue,
+ definition:,
+ initial_activation:,
+ logger: worker_state.logger,
+ metric_meter: worker_state.metric_meter,
+ payload_converter: worker_state.data_converter.payload_converter,
+ failure_converter: worker_state.data_converter.failure_converter,
+ interceptors: worker_state.workflow_interceptors,
+ disable_eager_activity_execution: worker_state.disable_eager_activity_execution,
+ illegal_calls: worker_state.illegal_calls,
+ workflow_failure_exception_types: worker_state.workflow_failure_exception_types
+ )
+ )
+ end
+
+ def evict(worker_state, run_id)
+ worker_state.evict_running_workflow(run_id)
+ @executor._remove_workflow(worker_state, run_id)
+ end
+ end
+
+ private_constant :Worker
+
+        # Error raised when processing a workflow task takes more than the expected amount of time.
+ class DeadlockError < Exception; end # rubocop:disable Lint/InheritException
+ end
+ end
+ end
+end
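The deadlock detection above relies on `Timeout.timeout` accepting a custom exception class and message. A stripped-down, runnable sketch of that mechanism (`FakeDeadlockError` and `run_with_deadline` are illustrative names, not SDK API):

```ruby
require 'timeout'

# Inherit from Exception (not StandardError) so a blanket
# `rescue StandardError` inside the timed block cannot swallow the timeout.
class FakeDeadlockError < Exception; end # rubocop:disable Lint/InheritException

def run_with_deadline(seconds, &block)
  Timeout.timeout(
    seconds,
    FakeDeadlockError,
    "Potential deadlock detected: block didn't finish within #{seconds} second(s)",
    &block
  )
end

# A fast block completes normally and its value is returned.
run_with_deadline(1) { 21 * 2 } # => 42

# A block that never finishes in time is interrupted.
begin
  run_with_deadline(0.05) { sleep 1 }
rescue FakeDeadlockError => e
  puts e.message
end
```

Note that `Timeout` interrupts via `Thread#raise`, which only works for code that yields to the scheduler; a block stuck in native code without releasing the GVL would not be interrupted, which is one reason the SDK documents the timeout as a best-effort safeguard.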
diff --git a/temporalio/lib/temporalio/workflow.rb b/temporalio/lib/temporalio/workflow.rb
new file mode 100644
index 00000000..2b7e9d6f
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow.rb
@@ -0,0 +1,523 @@
+# frozen_string_literal: true
+
+require 'random/formatter'
+require 'temporalio/error'
+require 'temporalio/workflow/activity_cancellation_type'
+require 'temporalio/workflow/child_workflow_cancellation_type'
+require 'temporalio/workflow/child_workflow_handle'
+require 'temporalio/workflow/definition'
+require 'temporalio/workflow/external_workflow_handle'
+require 'temporalio/workflow/future'
+require 'temporalio/workflow/handler_unfinished_policy'
+require 'temporalio/workflow/info'
+require 'temporalio/workflow/parent_close_policy'
+require 'temporalio/workflow/update_info'
+require 'timeout'
+
+module Temporalio
+  # Module with all class methods that can be called from a workflow. Methods on this module cannot be used outside of
+ # workflow with the obvious exception of {in_workflow?}. This module is not meant to be included or mixed in.
+ module Workflow
+ # @return [Boolean] Whether all update and signal handlers have finished executing. Consider waiting on this
+ # condition before workflow return or continue-as-new, to prevent interruption of in-progress handlers by workflow
+    # return: `Temporalio::Workflow.wait_condition { Temporalio::Workflow.all_handlers_finished? }`
+ def self.all_handlers_finished?
+ _current.all_handlers_finished?
+ end
+
+ # @return [Cancellation] Cancellation for the workflow. This is canceled when a workflow cancellation request is
+ # received. This is the default cancellation for most workflow calls.
+ def self.cancellation
+ _current.cancellation
+ end
+
+ # @return [Boolean] Whether continue as new is suggested. This value is the current continue-as-new suggestion up
+    # until the current task. Note, this value may not be up to date when accessed in a query. Whether continue-as-new
+    # is suggested is based on server-side configuration.
+ def self.continue_as_new_suggested
+ _current.continue_as_new_suggested
+ end
+
+ # @return [Integer] Current number of events in history. This value is the current history event count up until the
+ # current task. Note, this value may not be up to date when accessed in a query.
+ def self.current_history_length
+ _current.current_history_length
+ end
+
+ # @return [Integer] Current history size in bytes. This value is the current history size up until the current task.
+ # Note, this value may not be up to date when accessed in a query.
+ def self.current_history_size
+ _current.current_history_size
+ end
+
+ # @return [UpdateInfo] Current update info if this code is running inside an update. This is set via a Fiber-local
+ # storage so it is only visible to the current handler fiber.
+ def self.current_update_info
+ _current.current_update_info
+ end
+
+ # Mark a patch as deprecated.
+ #
+ # This marks a workflow that had {patched} in a previous version of the code as no longer applicable because all
+    # workflows that use the old code path are done and will never be queried again. The old code path can therefore
+    # be removed as well.
+ #
+ # @param patch_id [Symbol, String] Patch ID.
+ def self.deprecate_patch(patch_id)
+ _current.deprecate_patch(patch_id)
+ end
+
+ # Execute an activity and return its result. Either `start_to_close_timeout` or `schedule_to_close_timeout` _must_
+ # be set. The `heartbeat_timeout` should be set for any non-immediately-completing activity so it can receive
+ # cancellation. To run an activity in the background, use a {Future}.
+ #
+ # @note Using an already-canceled cancellation may give a different exception than canceling after started. Use
+ # {Error.canceled?} to check if the exception is a cancellation either way.
+ #
+ # @param activity [Class, Symbol, String] Activity definition class or activity name.
+ # @param args [Array] Arguments to the activity.
+ # @param task_queue [String] Task queue to run the activity on. Defaults to the current workflow's task queue.
+ # @param schedule_to_close_timeout [Float, nil] Max amount of time the activity can take from first being scheduled
+ # to being completed before it times out. This is inclusive of all retries.
+ # @param schedule_to_start_timeout [Float, nil] Max amount of time the activity can take to be started from first
+ # being scheduled.
+ # @param start_to_close_timeout [Float, nil] Max amount of time a single activity run can take from when it starts
+ # to when it completes. This is per retry.
+ # @param heartbeat_timeout [Float, nil] How frequently an activity must invoke heartbeat while running before it is
+ # considered timed out. This also affects how heartbeats are throttled, see general heartbeating documentation.
+ # @param retry_policy [RetryPolicy] How an activity is retried on failure. If unset, a server-defined default is
+ # used. Set maximum attempts to 1 to disable retries.
+ # @param cancellation [Cancellation] Cancellation to apply to the activity. How cancellation is treated is based on
+ # `cancellation_type`. This defaults to the workflow's cancellation, but may need to be overridden with a
+ # new/detached one if an activity is being run in an `ensure` after workflow cancellation.
+ # @param cancellation_type [ActivityCancellationType] How the activity is treated when it is canceled from the
+ # workflow.
+ # @param activity_id [String, nil] Optional unique identifier for the activity. This is an advanced setting that
+ # should not be set unless users are sure they need to. Contact Temporal before setting this value.
+ # @param disable_eager_execution [Boolean] Whether eager execution is disabled. Eager activity execution is an
+ # optimization on some servers that sends activities back to the same worker as the calling workflow if they can
+ # run there. If `false` (the default), eager execution may still be disabled at the worker level or may not be
+ # requested due to lack of available slots.
+ #
+ # @return [Object] Result of the activity.
+ # @raise [Error::ActivityError] Activity failed (and retry was disabled or exhausted).
+ # @raise [Error::CanceledError] Activity was canceled before started. When canceled after started (and not
+ # waited-then-swallowed), instead this canceled error is the cause of a {Error::ActivityError}.
+ def self.execute_activity(
+ activity,
+ *args,
+ task_queue: info.task_queue,
+ schedule_to_close_timeout: nil,
+ schedule_to_start_timeout: nil,
+ start_to_close_timeout: nil,
+ heartbeat_timeout: nil,
+ retry_policy: nil,
+ cancellation: Workflow.cancellation,
+ cancellation_type: ActivityCancellationType::TRY_CANCEL,
+ activity_id: nil,
+ disable_eager_execution: false
+ )
+ _current.execute_activity(
+ activity, *args,
+ task_queue:, schedule_to_close_timeout:, schedule_to_start_timeout:, start_to_close_timeout:,
+ heartbeat_timeout:, retry_policy:, cancellation:, cancellation_type:, activity_id:, disable_eager_execution:
+ )
+ end
+
+ # Shortcut for {start_child_workflow} + {ChildWorkflowHandle.result}. See those two calls for more details.
+ def self.execute_child_workflow(
+ workflow,
+ *args,
+ id: random.uuid,
+ task_queue: info.task_queue,
+ cancellation: Workflow.cancellation,
+ cancellation_type: ChildWorkflowCancellationType::WAIT_CANCELLATION_COMPLETED,
+ parent_close_policy: ParentClosePolicy::TERMINATE,
+ execution_timeout: nil,
+ run_timeout: nil,
+ task_timeout: nil,
+ id_reuse_policy: WorkflowIDReusePolicy::ALLOW_DUPLICATE,
+ retry_policy: nil,
+ cron_schedule: nil,
+ memo: nil,
+ search_attributes: nil
+ )
+ start_child_workflow(
+ workflow, *args,
+ id:, task_queue:, cancellation:, cancellation_type:, parent_close_policy:, execution_timeout:, run_timeout:,
+ task_timeout:, id_reuse_policy:, retry_policy:, cron_schedule:, memo:, search_attributes:
+ ).result
+ end
+
+    # Execute an activity locally in this same workflow task and return its result. This should usually only be used
+    # for short/simple activities where invocation overhead matters. Either `start_to_close_timeout` or
+ # `schedule_to_close_timeout` _must_ be set. To run an activity in the background, use a {Future}.
+ #
+ # @note Using an already-canceled cancellation may give a different exception than canceling after started. Use
+ # {Error.canceled?} to check if the exception is a cancellation either way.
+ #
+ # @param activity [Class, Symbol, String] Activity definition class or name.
+ # @param args [Array] Arguments to the activity.
+ # @param schedule_to_close_timeout [Float, nil] Max amount of time the activity can take from first being scheduled
+ # to being completed before it times out. This is inclusive of all retries.
+ # @param schedule_to_start_timeout [Float, nil] Max amount of time the activity can take to be started from first
+ # being scheduled.
+ # @param start_to_close_timeout [Float, nil] Max amount of time a single activity run can take from when it starts
+ # to when it completes. This is per retry.
+ # @param retry_policy [RetryPolicy] How an activity is retried on failure. If unset, a server-defined default is
+ # used. Set maximum attempts to 1 to disable retries.
+ # @param local_retry_threshold [Float, nil] If the activity is retrying and backoff would exceed this value, a timer
+ # is scheduled and the activity is retried after. Otherwise, backoff will happen internally within the task.
+ # Defaults to 1 minute.
+ # @param cancellation [Cancellation] Cancellation to apply to the activity. How cancellation is treated is based on
+ # `cancellation_type`. This defaults to the workflow's cancellation, but may need to be overridden with a
+ # new/detached one if an activity is being run in an `ensure` after workflow cancellation.
+ # @param cancellation_type [ActivityCancellationType] How the activity is treated when it is canceled from the
+ # workflow.
+ # @param activity_id [String, nil] Optional unique identifier for the activity. This is an advanced setting that
+ # should not be set unless users are sure they need to. Contact Temporal before setting this value.
+ #
+ # @return [Object] Result of the activity.
+ # @raise [Error::ActivityError] Activity failed (and retry was disabled or exhausted).
+ # @raise [Error::CanceledError] Activity was canceled before started. When canceled after started (and not
+ # waited-then-swallowed), instead this canceled error is the cause of a {Error::ActivityError}.
+ def self.execute_local_activity(
+ activity,
+ *args,
+ schedule_to_close_timeout: nil,
+ schedule_to_start_timeout: nil,
+ start_to_close_timeout: nil,
+ retry_policy: nil,
+ local_retry_threshold: nil,
+ cancellation: Workflow.cancellation,
+ cancellation_type: ActivityCancellationType::TRY_CANCEL,
+ activity_id: nil
+ )
+ _current.execute_local_activity(
+ activity, *args,
+ schedule_to_close_timeout:, schedule_to_start_timeout:, start_to_close_timeout:,
+ retry_policy:, local_retry_threshold:, cancellation:, cancellation_type:, activity_id:
+ )
+ end
+
+ # Get a handle to an external workflow for canceling and issuing signals.
+ #
+ # @param workflow_id [String] Workflow ID.
+ # @param run_id [String, nil] Optional, specific run ID.
+ #
+ # @return [ExternalWorkflowHandle] External workflow handle.
+ def self.external_workflow_handle(workflow_id, run_id: nil)
+ _current.external_workflow_handle(workflow_id, run_id:)
+ end
+
+ # @return [Boolean] Whether the current code is executing in a workflow.
+ def self.in_workflow?
+ _current_or_nil != nil
+ end
+
+ # @return [Info] Information about the current workflow.
+ def self.info
+ _current.info
+ end
+
+ # @return [Logger] Logger for the workflow. This is a scoped logger that automatically appends workflow details to
+ # every log and takes care not to log during replay.
+ def self.logger
+ _current.logger
+ end
+
+ # @return [Hash{String, Symbol => Object}] Memo for the workflow. This is a read-only view of the memo. To update
+ # the memo, use {upsert_memo}. This always returns the same instance and updates are reflected on the returned
+ # instance, so it is not technically frozen.
+ def self.memo
+ _current.memo
+ end
+
+ # @return [Metric::Meter] Metric meter to create metrics on. This metric meter already contains some
+ # workflow-specific attributes and takes care not to apply metrics during replay.
+ def self.metric_meter
+ _current.metric_meter
+ end
+
+ # @return [Time] Current UTC time for this workflow. This creates and returns a new {::Time} instance every time it
+ # is invoked, it is not the same instance continually mutated.
+ def self.now
+ _current.now
+ end
+
+ # Patch a workflow.
+ #
+ # When called, this will only return true if code should take the newer path which means this is either not
+ # replaying or is replaying and has seen this patch before. Results for successive calls to this function for the
+ # same ID and workflow are memoized. Use {deprecate_patch} when all workflows are done and will never be queried
+ # again. The old code path can be removed at that time too.
+ #
+ # @param patch_id [Symbol, String] Patch ID.
+ # @return [Boolean] True if this should take the newer patch, false if it should take the old path.
+ def self.patched(patch_id)
+ _current.patched(patch_id)
+ end
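The memoization contract described above can be sketched in plain Ruby. Everything here (`PatchTracker`, its constructor arguments) is a hypothetical stand-in for the SDK's internal replay state; it only illustrates that the first call per patch ID fixes the answer for that ID.

```ruby
# Hypothetical sketch of the memoization contract of Workflow.patched:
# the first call for a patch ID records a decision; later calls for the
# same ID return the recorded decision unchanged.
class PatchTracker
  def initialize(replaying:, seen_patches: [])
    @replaying = replaying
    @seen = seen_patches.map(&:to_s)
    @memo = {}
  end

  # Returns true when code should take the newer path: either we are not
  # replaying, or we are replaying history that already contains this patch.
  def patched(patch_id)
    @memo.fetch(patch_id.to_s) do |id|
      @memo[id] = !@replaying || @seen.include?(id)
    end
  end
end

live = PatchTracker.new(replaying: false)
live.patched(:"my-change")      # => true (new executions take the new path)

replay = PatchTracker.new(replaying: true, seen_patches: ['my-change'])
replay.patched('my-change')     # => true (patch marker present in history)
replay.patched('other-change')  # => false (old history, take old path)
```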
+
+ # @return [Converters::PayloadConverter] Payload converter for the workflow.
+ def self.payload_converter
+ _current.payload_converter
+ end
+
+ # @return [Hash<String, Definition::Query>] Query handlers for this workflow. This hash is mostly immutable except
+ # for `[]=` (and `store`) which can be used to set a new handler, or can be set with `nil` to remove a handler.
+ # For most use cases, defining a handler as a `workflow_query` method is best.
+ def self.query_handlers
+ _current.query_handlers
+ end
+
+ # @return [Random] Deterministic instance of {::Random} for use in a workflow. This instance should be accessed each
+ # time needed, not stored. This instance may be recreated with a different seed in special cases (e.g. workflow
+ # reset). Do not use any other randomization inside workflow code.
+ def self.random
+ _current.random
+ end
+
+ # @return [SearchAttributes] Search attributes for the workflow. This is a read-only view of the attributes. To
+ # update the attributes, use {upsert_search_attributes}. This always returns the same instance and updates are
+ # reflected on the returned instance, so it is not technically frozen.
+ def self.search_attributes
+ _current.search_attributes
+ end
+
+ # @return [Hash<String, Definition::Signal>] Signal handlers for this workflow. This hash is mostly immutable except
+ # for `[]=` (and `store`) which can be used to set a new handler, or can be set with `nil` to remove a handler.
+ # For most use cases, defining a handler as a `workflow_signal` method is best.
+ def self.signal_handlers
+ _current.signal_handlers
+ end
+
+ # Sleep in a workflow for the given time.
+ #
+ # @param duration [Float, nil] Time to sleep in seconds. `nil` represents infinite, which does not start a timer and
+ # just waits for cancellation. `0` is assumed to be 1 millisecond and still results in a server-side timer. This
+ # value cannot be negative. Since Temporal timers are server-side, timer resolution may not end up as precise as
+ # system timers.
+ # @param summary [String, nil] A simple string identifying this timer that may be visible in UI/CLI. While it can be
+ # normal text, it is best to treat it as a timer ID.
+ # @param cancellation [Cancellation] Cancellation for this timer.
+ # @raise [Error::CanceledError] Sleep canceled.
+ def self.sleep(duration, summary: nil, cancellation: Workflow.cancellation)
+ _current.sleep(duration, summary:, cancellation:)
+ end
+
+ # Start a child workflow and return the handle.
+ #
+ # @param workflow [Class, Symbol, String] Workflow definition class or workflow name.
+ # @param args [Array] Arguments to the workflow.
+ # @param id [String] Unique identifier for the workflow execution. Defaults to a new UUID from {random}.
+ # @param task_queue [String] Task queue to run the workflow on. Defaults to the current workflow's task queue.
+ # @param cancellation [Cancellation] Cancellation to apply to the child workflow. How cancellation is treated is
+ # based on `cancellation_type`. This defaults to the workflow's cancellation.
+ # @param cancellation_type [ChildWorkflowCancellationType] How the child workflow will react to cancellation.
+ # @param parent_close_policy [ParentClosePolicy] How to handle the child workflow when the parent workflow closes.
+ # @param execution_timeout [Float, nil] Total workflow execution timeout in seconds including retries and continue
+ # as new.
+ # @param run_timeout [Float, nil] Timeout of a single workflow run in seconds.
+ # @param task_timeout [Float, nil] Timeout of a single workflow task in seconds.
+ # @param id_reuse_policy [WorkflowIDReusePolicy] How already-existing IDs are treated.
+ # @param retry_policy [RetryPolicy, nil] Retry policy for the workflow.
+ # @param cron_schedule [String, nil] Cron schedule. Users should use schedules instead of this.
+ # @param memo [Hash{String, Symbol => Object}, nil] Memo for the workflow.
+ # @param search_attributes [SearchAttributes, nil] Search attributes for the workflow.
+ #
+ # @return [ChildWorkflowHandle] Workflow handle to the started workflow.
+ # @raise [Error::WorkflowAlreadyStartedError] Workflow already exists for the ID.
+ # @raise [Error::CanceledError] Starting of the child was canceled.
+ def self.start_child_workflow(
+ workflow,
+ *args,
+ id: random.uuid,
+ task_queue: info.task_queue,
+ cancellation: Workflow.cancellation,
+ cancellation_type: ChildWorkflowCancellationType::WAIT_CANCELLATION_COMPLETED,
+ parent_close_policy: ParentClosePolicy::TERMINATE,
+ execution_timeout: nil,
+ run_timeout: nil,
+ task_timeout: nil,
+ id_reuse_policy: WorkflowIDReusePolicy::ALLOW_DUPLICATE,
+ retry_policy: nil,
+ cron_schedule: nil,
+ memo: nil,
+ search_attributes: nil
+ )
+ _current.start_child_workflow(
+ workflow, *args,
+ id:, task_queue:, cancellation:, cancellation_type:, parent_close_policy:, execution_timeout:, run_timeout:,
+ task_timeout:, id_reuse_policy:, retry_policy:, cron_schedule:, memo:, search_attributes:
+ )
+ end
+
+ # Run the block until the timeout is reached. This is backed by {sleep}. This does not accept cancellation because
+ # it is expected the block within will properly handle/bubble cancellation.
+ #
+ # @param duration [Float, nil] Duration for the timeout. This is backed by {sleep} so see that method for details.
+ # @param exception_class [Class] Exception to raise on timeout. Defaults to {::Timeout::Error} like
+ # {::Timeout.timeout}. Note that {::Timeout::Error} is considered a workflow failure exception, not a task failure
+ # exception.
+ # @param message [String] Message to use for timeout exception. Defaults to "execution expired" like
+ # {::Timeout.timeout}.
+ # @param summary [String] Timer summary for the timer created by this timeout. This is backed by {sleep} so see that
+ # method for details.
+ #
+ # @yield Block to run with a timeout.
+ # @return [Object] The result of the block.
+ # @raise [Exception] Upon timeout, raises whichever class is set in `exception_class` with the message of `message`.
+ def self.timeout(
+ duration,
+ exception_class = Timeout::Error,
+ message = 'execution expired',
+ summary: 'Timeout timer',
+ &
+ )
+ _current.timeout(duration, exception_class, message, summary:, &)
+ end
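For comparison, stdlib `::Timeout.timeout` already accepts the same positional `exception_class` and `message` arguments whose defaults `timeout` mirrors. A small sketch of that call shape (`TooSlowError` and `fetch_with_timeout` are invented names, and stdlib `Timeout` is thread-based rather than deterministic and timer-backed like the workflow version):

```ruby
require 'timeout'

# Custom exception class to raise on timeout instead of Timeout::Error
class TooSlowError < StandardError; end

def fetch_with_timeout(duration)
  # (seconds, exception class, message) mirrors the workflow API's defaults
  Timeout.timeout(duration, TooSlowError, 'stage one expired') do
    sleep 0.2 # stand-in for real work
    :done
  end
end

fetch_with_timeout(1)    # => :done
begin
  fetch_with_timeout(0.05)
rescue TooSlowError => e
  e.message              # => "stage one expired"
end
```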
+
+ # @return [Hash<String, Definition::Update>] Update handlers for this workflow. This hash is mostly immutable except
+ # for `[]=` (and `store`) which can be used to set a new handler, or can be set with `nil` to remove a handler.
+ # For most use cases, defining a handler as a `workflow_update` method is best.
+ def self.update_handlers
+ _current.update_handlers
+ end
+
+ # Issue updates to the workflow memo.
+ #
+ # @param hash [Hash{String, Symbol => Object, nil}] Updates to apply. Value can be `nil` to effectively remove the
+ # memo value.
+ def self.upsert_memo(hash)
+ _current.upsert_memo(hash)
+ end
+
+ # Issue updates to the workflow search attributes.
+ #
+ # @param updates [Array<SearchAttributes::Update>] Updates to apply. Note these are {SearchAttributes::Update}
+ # objects which are created via {SearchAttributes::Key.value_set} and {SearchAttributes::Key.value_unset} methods.
+ def self.upsert_search_attributes(*updates)
+ _current.upsert_search_attributes(*updates)
+ end
+
+ # Wait for the given block to return a "truthy" value (i.e. any value other than `false` or `nil`). The block must
+ # be side-effect free since it may be invoked frequently during event loop iteration. To timeout a wait, {timeout}
+ # can be used. This cannot be used in side-effect-free contexts such as `initialize`, queries, or update validators.
+ #
+ # This is very commonly used to wait on a value to be set by a handler, e.g.
+ # `Temporalio::Workflow.wait_condition { @some_value }`. Special care was taken to only wake up a single wait
+ # condition when it evaluates to true. Therefore if multiple wait conditions are waiting on the same thing, only one
+ # is awoken at a time, which means the code immediately following that wait condition can change the variable before
+ # other wait conditions are evaluated. This is a useful property for building mutexes/semaphores.
+ #
+ # @param cancellation [Cancellation, nil] Cancellation to cancel the wait. This defaults to the workflow's
+ # cancellation.
+ # @yield Block that is run many times to test for truthiness.
+ # @yieldreturn [Object] Value to check whether truthy or falsy.
+ #
+ # @return [Object] Truthy value returned from the block.
+ # @raise [Error::CanceledError] Wait was canceled.
+ def self.wait_condition(cancellation: Workflow.cancellation, &)
+ raise 'Block required' unless block_given?
+
+ _current.wait_condition(cancellation:, &)
+ end
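The single-wakeup behavior called out above can be imitated with a toy fiber loop. `ToyLoop`, `drain`, and the permit counter are all hypothetical illustration, not the SDK's scheduler; the point is that a woken waiter mutates state before the next waiter's condition is re-evaluated, which is what makes mutex/semaphore patterns safe.

```ruby
# Toy event loop sketching wait_condition's single-wakeup property.
class ToyLoop
  def initialize
    @waiters = []
  end

  def wait_condition(&cond)
    @waiters << [cond, Fiber.current]
    Fiber.yield until cond.call # suspend; re-check condition on each resume
  end

  # Wake at most one satisfied waiter at a time, re-evaluating conditions
  # after each wake so a woken waiter can invalidate the others'.
  def drain
    while (entry = @waiters.find { |(cond, fib)| fib.alive? && cond.call })
      @waiters.delete(entry)
      entry[1].resume
    end
  end
end

loop_ = ToyLoop.new
permits = 0
acquired = []
workers = %w[a b].map do |name|
  Fiber.new do
    loop_.wait_condition { permits.positive? }
    permits -= 1 # claim before any other waiter is re-evaluated
    acquired << name
  end
end
workers.each(&:resume) # both block: no permits yet

permits += 1
loop_.drain
# acquired == ["a"]; "b" stays blocked because "a" claimed the permit first

permits += 1
loop_.drain
# acquired == ["a", "b"]
```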
+
+ # @!visibility private
+ def self._current
+ current = _current_or_nil
+ raise Error, 'Not in workflow environment' if current.nil?
+
+ current
+ end
+
+ # @!visibility private
+ def self._current_or_nil
+ # We choose to use Fiber.scheduler instead of Fiber.current_scheduler here because the constructor of the class is
+ # not scheduled on this scheduler and so current_scheduler is nil during class construction.
+ sched = Fiber.scheduler
+ return sched.context if sched.is_a?(Internal::Worker::WorkflowInstance::Scheduler)
+
+ nil
+ end
+
+ # Unsafe module contains only-in-workflow methods that are considered unsafe. These should not be used unless the
+ # consequences are understood.
+ module Unsafe
+ # @return [Boolean] True if the workflow is replaying, false otherwise. Most code should not check this value.
+ def self.replaying?
+ Workflow._current.replaying?
+ end
+
+ # Run a block of code with illegal call tracing disabled. Users should be cautious about using this as it can
+ # often signify unsafe code.
+ #
+ # @yield Block to run with call tracing disabled
+ #
+ # @return [Object] Result of the block.
+ def self.illegal_call_tracing_disabled(&)
+ Workflow._current.illegal_call_tracing_disabled(&)
+ end
+ end
+
+ # Error that is raised by a workflow out of the primary workflow method to issue a continue-as-new.
+ class ContinueAsNewError < Error
+ attr_accessor :args, :workflow, :task_queue, :run_timeout, :task_timeout,
+ :retry_policy, :memo, :search_attributes, :headers
+
+ # Create a continue as new error.
+ #
+ # @param args [Array] Arguments for the new workflow.
+ # @param workflow [Class, String, Symbol, nil] Workflow definition class or workflow name.
+ # If unset/nil, the current workflow is used.
+ # @param task_queue [String, nil] Task queue for the workflow. If unset/nil, the current workflow task queue is
+ # used.
+ # @param run_timeout [Float, nil] Timeout of a single workflow run in seconds. The default is _not_ carried over
+ # from the current workflow.
+ # @param task_timeout [Float, nil] Timeout of a single workflow task in seconds. The default is _not_ carried over
+ # from the current workflow.
+ # @param retry_policy [RetryPolicy, nil] Retry policy for the workflow. If unset/nil, the current workflow retry
+ # policy is used.
+ # @param memo [Hash{String, Symbol => Object}, nil] Memo for the workflow. If unset/nil, the current workflow memo
+ # is used.
+ # @param search_attributes [SearchAttributes, nil] Search attributes for the workflow. If unset/nil, the current
+ # workflow search attributes are used.
+ # @param headers [Hash] Headers for the workflow. The default is _not_ carried over from the
+ # current workflow.
+ def initialize(
+ *args,
+ workflow: nil,
+ task_queue: nil,
+ run_timeout: nil,
+ task_timeout: nil,
+ retry_policy: nil,
+ memo: nil,
+ search_attributes: nil,
+ headers: {}
+ )
+ super('Continue as new')
+ @args = args
+ @workflow = workflow
+ @task_queue = task_queue
+ @run_timeout = run_timeout
+ @task_timeout = task_timeout
+ @retry_policy = retry_policy
+ @memo = memo
+ @search_attributes = search_attributes
+ @headers = headers
+ Workflow._current.initialize_continue_as_new_error(self)
+ end
+ end
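Structurally, continue-as-new is control flow via an exception object that carries the next run's inputs. A toy runner showing the shape (`ToyContinueAsNew` and `run_to_completion` are invented names; in the real SDK the worker reports the error and the server starts the new run):

```ruby
# Exception-as-control-flow: the error object carries the next run's args.
class ToyContinueAsNew < StandardError
  attr_reader :args

  def initialize(*args)
    super('Continue as new')
    @args = args
  end
end

# Re-invoke the callable with the carried args until it completes normally
def run_to_completion(callable, *args)
  callable.call(*args)
rescue ToyContinueAsNew => e
  args = e.args
  retry
end

# Count down across "runs", continuing as new until the counter hits zero
runs = []
countdown = lambda do |n|
  runs << n
  raise ToyContinueAsNew, n - 1 if n.positive?
  :done
end
run_to_completion(countdown, 3) # => :done
runs                            # => [3, 2, 1, 0]
```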
+
+ # Error raised when a workflow does something with a side effect in an improper context. In `initialize`, query
+ # handlers, and update validators, a workflow cannot do anything that would generate a command (e.g. starting an
+ # activity) or anything that could wait (e.g. scheduling a fiber, running a future, or using a wait condition).
+ class InvalidWorkflowStateError < Error; end
+
+ # Error raised when a workflow does something potentially non-deterministic such as making an illegal call. Note,
+ # non-deterministic errors during replay do not raise an error that can be caught, those happen internally. But this
+ # error can still be used with configuring workflow failure exception types to change non-deterministic errors from
+ # task failures to workflow failures.
+ class NondeterminismError < Error; end
+ end
+end
diff --git a/temporalio/lib/temporalio/workflow/activity_cancellation_type.rb b/temporalio/lib/temporalio/workflow/activity_cancellation_type.rb
new file mode 100644
index 00000000..383a7c3e
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/activity_cancellation_type.rb
@@ -0,0 +1,20 @@
+# frozen_string_literal: true
+
+require 'temporalio/internal/bridge/api'
+
+module Temporalio
+ module Workflow
+ # Cancellation types for activities.
+ module ActivityCancellationType
+ # Initiate a cancellation request and immediately report cancellation to the workflow.
+ TRY_CANCEL = Internal::Bridge::Api::WorkflowCommands::ActivityCancellationType::TRY_CANCEL
+ # Wait for activity cancellation completion. Note that the activity must heartbeat to receive a cancellation
+ # notification. This can block the cancellation for a long time if the activity doesn't heartbeat or chooses to
+ # ignore the cancellation request.
+ WAIT_CANCELLATION_COMPLETED =
+ Internal::Bridge::Api::WorkflowCommands::ActivityCancellationType::WAIT_CANCELLATION_COMPLETED
+ # Do not request cancellation of the activity and immediately report cancellation to the workflow.
+ ABANDON = Internal::Bridge::Api::WorkflowCommands::ActivityCancellationType::ABANDON
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/workflow/child_workflow_cancellation_type.rb b/temporalio/lib/temporalio/workflow/child_workflow_cancellation_type.rb
new file mode 100644
index 00000000..def79a89
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/child_workflow_cancellation_type.rb
@@ -0,0 +1,21 @@
+# frozen_string_literal: true
+
+require 'temporalio/internal/bridge/api'
+
+module Temporalio
+ module Workflow
+ # Cancellation types for child workflows.
+ module ChildWorkflowCancellationType
+ # Do not request cancellation of the child workflow if already scheduled.
+ ABANDON = Internal::Bridge::Api::ChildWorkflow::ChildWorkflowCancellationType::ABANDON
+ # Initiate a cancellation request and immediately report cancellation to the parent.
+ TRY_CANCEL = Internal::Bridge::Api::ChildWorkflow::ChildWorkflowCancellationType::TRY_CANCEL
+ # Wait for child cancellation completion.
+ WAIT_CANCELLATION_COMPLETED =
+ Internal::Bridge::Api::ChildWorkflow::ChildWorkflowCancellationType::WAIT_CANCELLATION_COMPLETED
+ # Request cancellation of the child and wait for confirmation that the request was received.
+ WAIT_CANCELLATION_REQUESTED =
+ Internal::Bridge::Api::ChildWorkflow::ChildWorkflowCancellationType::WAIT_CANCELLATION_REQUESTED
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/workflow/child_workflow_handle.rb b/temporalio/lib/temporalio/workflow/child_workflow_handle.rb
new file mode 100644
index 00000000..12cf8119
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/child_workflow_handle.rb
@@ -0,0 +1,43 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Workflow
+ # Handle for interacting with a child workflow.
+ #
+ # This is created via {Workflow.start_child_workflow}, it is never instantiated directly.
+ class ChildWorkflowHandle
+ # @!visibility private
+ def initialize
+ raise NotImplementedError, 'Cannot instantiate a child handle directly'
+ end
+
+ # @return [String] ID for the workflow.
+ def id
+ raise NotImplementedError
+ end
+
+ # @return [String] Run ID for the workflow.
+ def first_execution_run_id
+ raise NotImplementedError
+ end
+
+ # Wait for the result.
+ #
+ # @return [Object] Result of the child workflow.
+ #
+ # @raise [Error::ChildWorkflowError] Workflow failed with +cause+ as the cause.
+ def result
+ raise NotImplementedError
+ end
+
+ # Signal the child workflow.
+ #
+ # @param signal [Workflow::Definition::Signal, Symbol, String] Signal definition or name.
+ # @param args [Array] Signal args.
+ # @param cancellation [Cancellation] Cancellation for canceling the signalling.
+ def signal(signal, *args, cancellation: Workflow.cancellation)
+ raise NotImplementedError
+ end
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/workflow/definition.rb b/temporalio/lib/temporalio/workflow/definition.rb
new file mode 100644
index 00000000..d926395a
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/definition.rb
@@ -0,0 +1,566 @@
+# frozen_string_literal: true
+
+require 'temporalio/workflow'
+require 'temporalio/workflow/handler_unfinished_policy'
+
+module Temporalio
+ module Workflow
+ # Base class for all workflows.
+ #
+ # Workflows are instances of this class and must implement {execute}. Inside the workflow code, class methods on
+ # {Workflow} can be used.
+ #
+ # By default, the workflow is named as its unqualified class name. This can be customized with {workflow_name}.
+ class Definition
+ class << self
+ protected
+
+ # Customize the workflow name. By default the workflow is named the unqualified class name of the class provided
+ # to the worker.
+ #
+ # @param workflow_name [String, Symbol] Name to use.
+ def workflow_name(workflow_name)
+ if !workflow_name.is_a?(Symbol) && !workflow_name.is_a?(String)
+ raise ArgumentError,
+ 'Workflow name must be a symbol or string'
+ end
+
+ @workflow_name = workflow_name.to_s
+ end
+
+ # Set a workflow as dynamic. Dynamic workflows do not have names and handle any workflow that is not otherwise
+ # registered. A worker can only have one dynamic workflow. It is often useful to use {workflow_raw_args} with
+ # this.
+ #
+ # @param value [Boolean] Whether the workflow is dynamic.
+ def workflow_dynamic(value = true) # rubocop:disable Style/OptionalBooleanParameter
+ @workflow_dynamic = value
+ end
+
+ # Have workflow arguments delivered to `execute` (and `initialize` if {workflow_init} in use) as
+ # {Converters::RawValue}s. These are wrappers for the raw payloads that have not been decoded. They can be
+ # decoded with {Workflow.payload_converter}.
+ #
+ # @param value [Boolean] Whether the workflow accepts raw arguments.
+ def workflow_raw_args(value = true) # rubocop:disable Style/OptionalBooleanParameter
+ @workflow_raw_args = value
+ end
+
+ # Configure workflow failure exception types. This sets the types of exceptions that, if a
+ # workflow-thrown exception extends, will cause the workflow/update to fail instead of suspending the workflow
+ # via task failure. These are applied in addition to the worker option. If {::Exception} is set, it effectively
+ # will fail a workflow/update in all user exception cases.
+ #
+ # @param types [Array<Class<Exception>>] Exception types to turn into workflow failures.
+ def workflow_failure_exception_type(*types)
+ types.each do |t|
+ raise ArgumentError, 'All types must be classes inheriting Exception' unless t.is_a?(Class) && t < Exception
+ end
+ @workflow_failure_exception_types ||= []
+ @workflow_failure_exception_types.concat(types)
+ end
+
+ # Expose an attribute as a method and as a query. A `workflow_query_attr_reader :foo` is the equivalent of:
+ # ```
+ # workflow_query
+ # def foo
+ # @foo
+ # end
+ # ```
+ # This means it is a superset of `attr_reader` and will not work if also using `attr_reader` or
+ # `attr_accessor`. If a writer is needed alongside this, use `attr_writer`.
+ #
+ # @param attr_names [Array] Attributes to expose.
+ def workflow_query_attr_reader(*attr_names)
+ @workflow_queries ||= {}
+ attr_names.each do |attr_name|
+ raise 'Expected attr to be a symbol' unless attr_name.is_a?(Symbol)
+
+ if method_defined?(attr_name, false)
+ raise 'Method already defined for this attr name. ' \
+ 'Note that a workflow_query_attr_reader includes attr_reader behavior. ' \
+ 'If you also want a writer for this attribute, use a separate attr_writer.'
+ end
+
+ # Just run this as if done manually
+ workflow_query
+ define_method(attr_name) { instance_variable_get("@#{attr_name}") }
+ end
+ end
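The reader-defining half of this macro is ordinary `define_method` metaprogramming. A standalone sketch with invented names (`Readers`, `tracked_attr_reader`, `tracked`), not the SDK's implementation:

```ruby
# A macro that both registers the attribute name and defines a plain
# reader, mirroring the attr_reader-superset behavior described above.
class Readers
  def self.tracked_attr_reader(*names)
    @tracked ||= []
    names.each do |name|
      # Refuse to double-define, as the real macro does
      raise 'already defined' if method_defined?(name, false)

      @tracked << name
      define_method(name) { instance_variable_get("@#{name}") }
    end
  end

  def self.tracked = @tracked

  tracked_attr_reader :status, :result

  def initialize
    @status = 'running'
  end
end

r = Readers.new
r.status        # => "running"
r.result        # => nil (never assigned)
Readers.tracked # => [:status, :result]
```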
+
+ # Mark an `initialize` as needing the workflow start arguments. Otherwise, `initialize` must accept no required
+ # arguments. This must be placed above the `initialize` method or it will fail.
+ #
+ # @param value [Boolean] Whether the start parameters will be passed to `initialize`.
+ def workflow_init(value = true) # rubocop:disable Style/OptionalBooleanParameter
+ self.pending_handler_details = { type: :init, value: }
+ end
+
+ # Mark the next method as a workflow signal with a default name as the name of the method. Signals cannot return
+ # values.
+ #
+ # @param name [String, Symbol, nil] Override the default name.
+ # @param dynamic [Boolean] If true, make the signal dynamic. This means it receives all other signals without
+ # handlers. This cannot have a name override since it is nameless. The first parameter will be the name. Often
+ # it is useful to have the second parameter be `*args` and `raw_args` be true.
+ # @param raw_args [Boolean] If true, does not convert arguments, but instead provides each argument as
+ # {Converters::RawValue} which is a raw payload wrapper, convertible with {Workflow.payload_converter}.
+ # @param unfinished_policy [HandlerUnfinishedPolicy] How to treat unfinished handlers if they are still running
+ # when the workflow ends. The default warns, but this can be disabled.
+ def workflow_signal(
+ name: nil,
+ dynamic: false,
+ raw_args: false,
+ unfinished_policy: HandlerUnfinishedPolicy::WARN_AND_ABANDON
+ )
+ raise 'Cannot provide name if dynamic is true' if name && dynamic
+
+ self.pending_handler_details = { type: :signal, name:, dynamic:, raw_args:, unfinished_policy: }
+ end
+
+ # Mark the next method as a workflow query with a default name as the name of the method. Queries can not have
+ # any side effects, meaning they should never mutate state or try to wait on anything.
+ #
+ # @param name [String, Symbol, nil] Override the default name.
+ # @param dynamic [Boolean] If true, make the query dynamic. This means it receives all other queries without
+ # handlers. This cannot have a name override since it is nameless. The first parameter will be the name. Often
+ # it is useful to have the second parameter be `*args` and `raw_args` be true.
+ # @param raw_args [Boolean] If true, does not convert arguments, but instead provides each argument as
+ # {Converters::RawValue} which is a raw payload wrapper, convertible with {Workflow.payload_converter}.
+ def workflow_query(
+ name: nil,
+ dynamic: false,
+ raw_args: false
+ )
+ raise 'Cannot provide name if dynamic is true' if name && dynamic
+
+ self.pending_handler_details = { type: :query, name:, dynamic:, raw_args: }
+ end
+
+ # Mark the next method as a workflow update with a default name as the name of the method. Updates can return
+ # values. Separate validation methods can be provided via {workflow_update_validator}.
+ #
+ # @param name [String, Symbol, nil] Override the default name.
+ # @param dynamic [Boolean] If true, make the update dynamic. This means it receives all other updates without
+ # handlers. This cannot have a name override since it is nameless. The first parameter will be the name. Often
+ # it is useful to have the second parameter be `*args` and `raw_args` be true.
+ # @param raw_args [Boolean] If true, does not convert arguments, but instead provides each argument as
+ # {Converters::RawValue} which is a raw payload wrapper, convertible with {Workflow.payload_converter}.
+ # @param unfinished_policy [HandlerUnfinishedPolicy] How to treat unfinished handlers if they are still running
+ # when the workflow ends. The default warns, but this can be disabled.
+ def workflow_update(
+ name: nil,
+ dynamic: false,
+ raw_args: false,
+ unfinished_policy: HandlerUnfinishedPolicy::WARN_AND_ABANDON
+ )
+ raise 'Cannot provide name if dynamic is true' if name && dynamic
+
+ self.pending_handler_details = { type: :update, name:, dynamic:, raw_args:, unfinished_policy: }
+ end
+
+ # Mark the next method as a workflow update validator to the given update method. The validator is expected to
+ # have the exact same parameter signature. It will run before an update and if it raises an exception, the
+ # update will be rejected, possibly before even reaching history. Validators cannot have any side effects or do
+ # any waiting, and they do not return values.
+ #
+ # @param update_method [Symbol] Name of the update method.
+ def workflow_update_validator(update_method)
+ self.pending_handler_details = { type: :update_validator, update_method: }
+ end
+
+ private
+
+ attr_reader :pending_handler_details
+
+ def pending_handler_details=(value)
+ if value.nil?
+ @pending_handler_details = value
+ return
+ elsif @pending_handler_details
+ raise "Previous #{@pending_handler_details[:type]} handler was not put on method before this handler"
+ end
+
+ @pending_handler_details = value
+ end
+ end
+
+ # @!visibility private
+ def self.method_added(method_name)
+ super
+
+ # Nothing to do if there are no pending handler details
+ handler = pending_handler_details
+ return unless handler
+
+ # Reset details
+ self.pending_handler_details = nil
+
+ # Initialize class variables if not done already
+ @workflow_signals ||= {}
+ @workflow_queries ||= {}
+ @workflow_updates ||= {}
+ @workflow_update_validators ||= {}
+ @defined_methods ||= []
+
+ defn, hash, other_hashes =
+ case handler[:type]
+ when :init
+ raise "workflow_init was applied to #{method_name} instead of initialize" if method_name != :initialize
+
+ @workflow_init = handler[:value]
+ return
+ when :update_validator
+ other = @workflow_update_validators[handler[:update_method]]
+ if other && (other[:method_name] != method_name || other[:update_method] != handler[:update_method])
+ raise "Workflow update validator on #{method_name} for #{handler[:update_method]} defined separately " \
+ "on #{other[:method_name]} for #{other[:update_method]}"
+ end
+
+ # Just store this, we'll apply validators to updates at definition
+ # building time
+ @workflow_update_validators[handler[:update_method]] = { method_name:, **handler }
+ return
+ when :signal
+ [Signal.new(
+ name: handler[:dynamic] ? nil : (handler[:name] || method_name).to_s,
+ to_invoke: method_name,
+ raw_args: handler[:raw_args],
+ unfinished_policy: handler[:unfinished_policy]
+ ), @workflow_signals, [@workflow_queries, @workflow_updates]]
+ when :query
+ [Query.new(
+ name: handler[:dynamic] ? nil : (handler[:name] || method_name).to_s,
+ to_invoke: method_name,
+ raw_args: handler[:raw_args]
+ ), @workflow_queries, [@workflow_signals, @workflow_updates]]
+ when :update
+ [Update.new(
+ name: handler[:dynamic] ? nil : (handler[:name] || method_name).to_s,
+ to_invoke: method_name,
+ raw_args: handler[:raw_args],
+ unfinished_policy: handler[:unfinished_policy]
+ ), @workflow_updates, [@workflow_signals, @workflow_queries]]
+ else
+ raise "Unrecognized handler type #{handler[:type]}"
+ end
+
+ # We only allow dupes with the same method name (override/redefine)
+ # TODO(cretz): Should we also check that everything else is the same?
+ other = hash[defn.name]
+ if other && other.to_invoke != method_name
+ raise "Workflow #{handler[:type].name} #{defn.name || ''} defined on " \
+ "different methods #{other.to_invoke} and #{method_name}"
+ elsif defn.name && other_hashes.any? { |h| h.include?(defn.name) }
+ raise "Workflow signal #{defn.name} already defined as a different handler type"
+ end
+ hash[defn.name] = defn
+
+ # Define class method for referencing the definition only if non-dynamic
+ return unless defn.name
+
+ define_singleton_method(method_name) { defn }
+ @defined_methods.push(method_name)
+ end
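The pending-handler pattern used here (a class macro stashes metadata, and `method_added` attaches it to whichever method is defined next) can be reduced to a standalone sketch; `MacroBase`, `mark`, and `handlers` are invented names for illustration only.

```ruby
# Minimal version of the macro-then-method_added handshake.
class MacroBase
  def self.handlers = (@handlers ||= {})

  # Stash metadata for the *next* method definition
  def self.mark(type)
    raise 'Previous marker not consumed by a method' if @pending

    @pending = type
  end

  # Ruby calls this hook on every instance method definition
  def self.method_added(method_name)
    super
    return unless @pending

    handlers[method_name] = @pending
    @pending = nil
  end
end

class MyClass < MacroBase
  mark :signal
  def greet = 'hi'

  mark :query
  def status = 'ok'
end

MyClass.handlers # => { greet: :signal, status: :query }
```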
+
+ # @!visibility private
+ def self.singleton_method_added(method_name)
+ super
+ # We need to ensure class methods are not added after we have defined a method
+ return unless @defined_methods&.include?(method_name)
+
+ raise 'Attempting to override Temporal-defined class definition method'
+ end
+
+ # @!visibility private
+ def self._workflow_definition
+ @workflow_definition ||= _build_workflow_definition
+ end
+
+ # @!visibility private
+ def self._workflow_type_from_workflow_parameter(workflow)
+ case workflow
+ when Class
+ unless workflow < Definition
+ raise ArgumentError, "Class '#{workflow}' does not extend Temporalio::Workflow::Definition"
+ end
+
+ info = Info.from_class(workflow)
+ info.name || raise(ArgumentError, 'Cannot pass dynamic workflow to start')
+ when Info
+ workflow.name || raise(ArgumentError, 'Cannot pass dynamic workflow to start')
+ when String, Symbol
+ workflow.to_s
+ else
+ raise ArgumentError, 'Workflow is not a workflow class or string/symbol'
+ end
+ end
+
+ # @!visibility private
+ def self._build_workflow_definition
+ # Make sure there isn't dangling pending handler details
+ if pending_handler_details
+ raise "Leftover #{pending_handler_details&.[](:type)} handler not applied to a method"
+ end
+
+ # Apply all update validators before merging with super
+ updates = @workflow_updates&.dup || {}
+ @workflow_update_validators&.each_value do |validator|
+ update = updates.values.find { |u| u.to_invoke == validator[:update_method] }
+ unless update
+ raise "Unable to find update #{validator[:update_method]} pointed to by " \
+ "validator on #{validator[:method_name]}"
+ end
+ if instance_method(validator[:method_name])&.parameters !=
+ instance_method(validator[:update_method])&.parameters
+ raise "Validator on #{validator[:method_name]} does not have " \
+ "exact parameter signature of #{validator[:update_method]}"
+ end
+
+ updates[update.name] = update._with_validator_to_invoke(validator[:method_name])
+ end
+
+ # If there is a superclass, apply some values and check others
+ override_name = @workflow_name
+ dynamic = @workflow_dynamic
+ init = @workflow_init
+ raw_args = @workflow_raw_args
+ signals = @workflow_signals || {}
+ queries = @workflow_queries || {}
+ if superclass && superclass != Temporalio::Workflow::Definition
+ # @type var super_info: Temporalio::Workflow::Definition::Info
+ super_info = superclass._workflow_definition # steep:ignore
+
+ # Override values if not set here
+ override_name = super_info.override_name if override_name.nil?
+ dynamic = super_info.dynamic if dynamic.nil?
+ init = super_info.init if init.nil?
+ raw_args = super_info.raw_args if raw_args.nil?
+
+ # Make sure handlers on the same method at least have the same name
+ # TODO(cretz): Need to validate any other handler override details?
+ # Probably not because we only care that caller-needed values remain
+ # unchanged (method and name), implementer-needed values can be
+ # overridden/changed.
+ self_handlers = signals.values + queries.values + updates.values
+ super_handlers = super_info.signals.values + super_info.queries.values + super_info.updates.values
+ super_handlers.each do |super_handler|
+ self_handler = self_handlers.find { |h| h.to_invoke == super_handler.to_invoke }
+ next unless self_handler
+
+ if super_handler.class != self_handler.class
+ raise "Superclass handler on #{self_handler.to_invoke} is a #{super_handler.class} " \
+ "but current class expects #{self_handler.class}"
+ end
+ if super_handler.name != self_handler.name
+ raise "Superclass handler on #{self_handler.to_invoke} has name #{super_handler.name} " \
+ "but current class expects #{self_handler.name}"
+ end
+ end
+
+ # Merge handlers. We will merge such that handlers defined here
+ # override ones from superclass by _name_ (not method to invoke).
+ signals = super_info.signals.merge(signals)
+ queries = super_info.queries.merge(queries)
+ updates = super_info.updates.merge(updates)
+ end
+
+ # If init is true, validate that initialize and execute have the same parameter counts
+ if init && instance_method(:initialize)&.parameters&.size != instance_method(:execute)&.parameters&.size
+ raise 'workflow_init present, so parameter count of initialize and execute must be the same'
+ end
+
+ raise 'Workflow cannot be given a name and be dynamic' if dynamic && override_name
+
+ Info.new(
+ workflow_class: self,
+ override_name:,
+ dynamic: dynamic || false,
+ init: init || false,
+ raw_args: raw_args || false,
+ failure_exception_types: @workflow_failure_exception_types || [],
+ signals:,
+ queries:,
+ updates:
+ )
+ end
+
+ # Execute the workflow. This is the primary workflow method. The workflow is completed when this method completes.
+ # This must be implemented by all workflows.
+ def execute(*args)
+ raise NotImplementedError, 'Workflow did not implement "execute"'
+ end
+
+ # Information about the workflow definition. This is usually not used directly.
+ class Info
+ attr_reader :workflow_class, :override_name, :dynamic, :init, :raw_args,
+ :failure_exception_types, :signals, :queries, :updates
+
+ # Derive the workflow definition info from the class.
+ #
+ # @param workflow_class [Class] Workflow class.
+ # @return [Info] Built info.
+ def self.from_class(workflow_class)
+ unless workflow_class.is_a?(Class) && workflow_class < Definition
+ raise "Workflow '#{workflow_class}' must be a class and must extend Temporalio::Workflow::Definition"
+ end
+
+ workflow_class._workflow_definition
+ end
+
+ # Create a definition info. This should usually not be used directly, but instead a class that extends
+ # {Workflow::Definition} should be used.
+ def initialize(
+ workflow_class:,
+ override_name: nil,
+ dynamic: false,
+ init: false,
+ raw_args: false,
+ failure_exception_types: [],
+ signals: {},
+ queries: {},
+ updates: {}
+ )
+ @workflow_class = workflow_class
+ @override_name = override_name
+ @dynamic = dynamic
+ @init = init
+ @raw_args = raw_args
+ @failure_exception_types = failure_exception_types.dup.freeze
+ @signals = signals.dup.freeze
+ @queries = queries.dup.freeze
+ @updates = updates.dup.freeze
+ end
+
+ # @return [String, nil] Workflow name, or nil if the workflow is dynamic.
+ def name
+ dynamic ? nil : (override_name || workflow_class.name.to_s.split('::').last)
+ end
+ end
+
+ # A signal definition. This is usually built as a result of a {Definition.workflow_signal} method, but can be
+ # manually created and set at runtime on {Workflow.signal_handlers}.
+ class Signal
+ attr_reader :name, :to_invoke, :raw_args, :unfinished_policy
+
+ # @!visibility private
+ def self._name_from_parameter(signal)
+ case signal
+ when Workflow::Definition::Signal
+ signal.name || raise(ArgumentError, 'Cannot call dynamic signal directly')
+ when String, Symbol
+ signal.to_s
+ else
+ raise ArgumentError, 'Signal is not a definition or string/symbol'
+ end
+ end
+
+ # Create a signal definition manually. See {Definition.workflow_signal} for more details on some of the
+ # parameters.
+ #
+ # @param name [String, nil] Name or nil if dynamic.
+ # @param to_invoke [Symbol, Proc] Method name or proc to invoke.
+ # @param raw_args [Boolean] Whether the parameters should be raw values.
+ # @param unfinished_policy [HandlerUnfinishedPolicy] How the workflow reacts when this handler is still running
+ # on workflow completion.
+ def initialize(
+ name:,
+ to_invoke:,
+ raw_args: false,
+ unfinished_policy: HandlerUnfinishedPolicy::WARN_AND_ABANDON
+ )
+ @name = name
+ @to_invoke = to_invoke
+ @raw_args = raw_args
+ @unfinished_policy = unfinished_policy
+ end
+ end
+
+ # A query definition. This is usually built as a result of a {Definition.workflow_query} method, but can be
+ # manually created and set at runtime on {Workflow.query_handlers}.
+ class Query
+ attr_reader :name, :to_invoke, :raw_args
+
+ # @!visibility private
+ def self._name_from_parameter(query)
+ case query
+ when Workflow::Definition::Query
+ query.name || raise(ArgumentError, 'Cannot call dynamic query directly')
+ when String, Symbol
+ query.to_s
+ else
+ raise ArgumentError, 'Query is not a definition or string/symbol'
+ end
+ end
+
+ # Create a query definition manually. See {Definition.workflow_query} for more details on some of the
+ # parameters.
+ #
+ # @param name [String, nil] Name or nil if dynamic.
+ # @param to_invoke [Symbol, Proc] Method name or proc to invoke.
+ # @param raw_args [Boolean] Whether the parameters should be raw values.
+ def initialize(
+ name:,
+ to_invoke:,
+ raw_args: false
+ )
+ @name = name
+ @to_invoke = to_invoke
+ @raw_args = raw_args
+ end
+ end
+
+ # An update definition. This is usually built as a result of a {Definition.workflow_update} method, but can be
+ # manually created and set at runtime on {Workflow.update_handlers}.
+ class Update
+ attr_reader :name, :to_invoke, :raw_args, :unfinished_policy, :validator_to_invoke
+
+ # @!visibility private
+ def self._name_from_parameter(update)
+ case update
+ when Workflow::Definition::Update
+ update.name || raise(ArgumentError, 'Cannot call dynamic update directly')
+ when String, Symbol
+ update.to_s
+ else
+ raise ArgumentError, 'Update is not a definition or string/symbol'
+ end
+ end
+
+ # Create an update definition manually. See {Definition.workflow_update} for more details on some of the
+ # parameters.
+ #
+ # @param name [String, nil] Name or nil if dynamic.
+ # @param to_invoke [Symbol, Proc] Method name or proc to invoke.
+ # @param raw_args [Boolean] Whether the parameters should be raw values.
+ # @param unfinished_policy [HandlerUnfinishedPolicy] How the workflow reacts when this handler is still running
+ # on workflow completion.
+ # @param validator_to_invoke [Symbol, Proc, nil] Method name or proc validator to invoke.
+ def initialize(
+ name:,
+ to_invoke:,
+ raw_args: false,
+ unfinished_policy: HandlerUnfinishedPolicy::WARN_AND_ABANDON,
+ validator_to_invoke: nil
+ )
+ @name = name
+ @to_invoke = to_invoke
+ @raw_args = raw_args
+ @unfinished_policy = unfinished_policy
+ @validator_to_invoke = validator_to_invoke
+ end
+
+ # @!visibility private
+ def _with_validator_to_invoke(validator_to_invoke)
+ Update.new(
+ name:,
+ to_invoke:,
+ raw_args:,
+ unfinished_policy:,
+ validator_to_invoke:
+ )
+ end
+ end
+ end
+ end
+end
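The validator check in `_build_workflow_definition` above relies on Ruby's `Method#parameters`, which exposes each parameter's kind and name, so an exact match means identical signatures. A minimal plain-Ruby sketch (no Temporal dependency; the `Handlers` class and method names are hypothetical):

```ruby
# Sketch of the validator signature check: a validator must mirror the update
# method's exact parameter list, which Method#parameters makes comparable.
class Handlers # hypothetical example class
  def my_update(amount, note: nil); end     # update handler
  def my_validator(amount, note: nil); end  # validator with matching signature
  def bad_validator(amount); end            # validator missing the keyword arg
end

def same_signature?(klass, first, second)
  klass.instance_method(first).parameters == klass.instance_method(second).parameters
end

p Handlers.instance_method(:my_update).parameters # => [[:req, :amount], [:key, :note]]
p same_signature?(Handlers, :my_update, :my_validator)  # => true
p same_signature?(Handlers, :my_update, :bad_validator) # => false
```

Comparing the full `parameters` arrays (rather than just counts) catches mismatched keyword arguments and optional parameters as well.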
diff --git a/temporalio/lib/temporalio/workflow/external_workflow_handle.rb b/temporalio/lib/temporalio/workflow/external_workflow_handle.rb
new file mode 100644
index 00000000..37e22100
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/external_workflow_handle.rb
@@ -0,0 +1,41 @@
+# frozen_string_literal: true
+
+require 'temporalio/workflow'
+
+module Temporalio
+ module Workflow
+ # Handle for interacting with an external workflow.
+ #
+ # This is created via {Workflow.external_workflow_handle}; it is never instantiated directly.
+ class ExternalWorkflowHandle
+ # @!visibility private
+ def initialize
+ raise NotImplementedError, 'Cannot instantiate an external handle directly'
+ end
+
+ # @return [String] ID for the workflow.
+ def id
+ raise NotImplementedError
+ end
+
+ # @return [String, nil] Run ID for the workflow.
+ def run_id
+ raise NotImplementedError
+ end
+
+ # Signal the external workflow.
+ #
+ # @param signal [Workflow::Definition::Signal, Symbol, String] Signal definition or name.
+ # @param args [Array] Signal args.
+ # @param cancellation [Cancellation] Cancellation for canceling the signaling.
+ def signal(signal, *args, cancellation: Workflow.cancellation)
+ raise NotImplementedError
+ end
+
+ # Cancel the external workflow.
+ def cancel
+ raise NotImplementedError
+ end
+ end
+ end
+end
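The handle above uses a common Ruby pattern for abstract public classes: `#initialize` raises so users cannot construct one, while an internal subclass supplies the real implementation. A small sketch of that pattern (`AbstractHandle` and `InternalHandle` are hypothetical names, not SDK classes):

```ruby
# The public class blocks direct construction; the internal subclass skips the
# guard by intentionally not calling super.
class AbstractHandle
  def initialize
    raise NotImplementedError, 'Cannot instantiate an external handle directly'
  end

  def id
    raise NotImplementedError
  end
end

class InternalHandle < AbstractHandle
  def initialize(id) # intentionally does not call super
    @id = id
  end

  attr_reader :id
end

begin
  AbstractHandle.new
rescue NotImplementedError => e
  puts e.message # => Cannot instantiate an external handle directly
end
puts InternalHandle.new('wf-123').id # => wf-123
```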
diff --git a/temporalio/lib/temporalio/workflow/future.rb b/temporalio/lib/temporalio/workflow/future.rb
new file mode 100644
index 00000000..3d9b2a4a
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/future.rb
@@ -0,0 +1,151 @@
+# frozen_string_literal: true
+
+require 'temporalio/workflow'
+
+module Temporalio
+ module Workflow
+ # Asynchronous future for use in workflows to do concurrent and background work. This can only be used inside
+ # workflows.
+ class Future
+ # Return a future that completes when any of the given futures complete. The returned future will return the first
+ # completed future's value or raise the first completed future's exception. To avoid raising the exception, see
+ # {try_any_of}.
+ #
+ # @param futures [Array<Future<Object>>] Futures to wait for the first to complete.
+ # @return [Future<Object>] Future that relays the first completed future's result/failure.
+ def self.any_of(*futures)
+ Future.new do
+ Workflow.wait_condition(cancellation: nil) { futures.any?(&:done?) }
+ # We know a future is always returned from find, the & just helps type checker
+ (futures.find(&:done?) || raise).wait
+ end
+ end
+
+ # Return a future that completes when all of the given futures complete or any future fails. The returned future
+ # will return nil on success or raise an exception if any of the futures failed. This means if any future fails,
+ # this will not wait for the other futures to complete. To wait for all futures to complete no matter what, see
+ # {try_all_of}.
+ #
+ # @param futures [Array<Future<Object>>] Futures to wait for all to complete (or first to fail).
+ # @return [Future<nil>] Future that completes successfully with nil when all futures complete, or raises on first
+ # future failure.
+ def self.all_of(*futures)
+ Future.new do
+ Workflow.wait_condition(cancellation: nil) { futures.all?(&:done?) || futures.any?(&:failure?) }
+ # Raise on error if any
+ futures.find(&:failure?)&.wait
+ nil
+ end
+ end
+
+ # Return a future that completes when the first future completes. The result of the future is the future from the
+ # list that completed first. The future returned will never raise even if the first completed future fails.
+ #
+ # @param futures [Array<Future<Object>>] Futures to wait for the first to complete.
+ # @return [Future<Future<Object>>] Future with the first completing future regardless of success/fail.
+ def self.try_any_of(*futures)
+ Future.new do
+ Workflow.wait_condition(cancellation: nil) { futures.any?(&:done?) }
+ futures.find(&:done?) || raise
+ end
+ end
+
+ # Return a future that completes when all of the given futures complete regardless of success/fail. The returned
+ # future will return nil when all futures are complete.
+ #
+ # @param futures [Array<Future<Object>>] Futures to wait for all to complete (regardless of success/fail).
+ # @return [Future<nil>] Future that completes successfully with nil when all futures complete.
+ def self.try_all_of(*futures)
+ Future.new do
+ Workflow.wait_condition(cancellation: nil) { futures.all?(&:done?) }
+ nil
+ end
+ end
+
+ # @return [Object, nil] Result if the future is done or nil if it is not. This will return nil if the result is
+ # nil too. Users can use {done?} to differentiate the situations.
+ attr_reader :result
+
+ # @return [Exception, nil] Failure if this future failed or nil if it didn't or hasn't yet completed.
+ attr_reader :failure
+
+ # Create a new future. If created with a block, the block is started in the background, and its return value or
+ # raised exception becomes the result of the future. If created without a block, the result or failure can be set
+ # on it.
+ def initialize(&block)
+ @done = false
+ @result = nil
+ @failure = nil
+ @block_given = block_given?
+ return unless block_given?
+
+ @fiber = Fiber.schedule do
+ @result = block.call # steep:ignore
+ rescue Exception => e # rubocop:disable Lint/RescueException
+ @failure = e
+ ensure
+ @done = true
+ end
+ end
+
+ # @return [Boolean] True if the future is done, false otherwise.
+ def done?
+ @done
+ end
+
+ # @return [Boolean] True if done and not a failure, false if still running or failed.
+ def result?
+ done? && !failure
+ end
+
+ # Mark the future as done and set the result. Does nothing if the future is already done. This cannot be invoked
+ # if the future was constructed with a block.
+ #
+ # @param result [Object] The result, which can be nil.
+ def result=(result)
+ Kernel.raise 'Cannot set result if block given in constructor' if @block_given
+ return if done?
+
+ @result = result
+ @done = true
+ end
+
+ # @return [Boolean] True if done and failed, false if still running or succeeded.
+ def failure?
+ done? && !failure.nil?
+ end
+
+ # Mark the future as done and set the failure. Does nothing if the future is already done. This cannot be invoked
+ # if the future was constructed with a block.
+ #
+ # @param failure [Exception] The failure.
+ def failure=(failure)
+ Kernel.raise 'Cannot set failure if block given in constructor' if @block_given
+ Kernel.raise 'Cannot set nil failure' if failure.nil?
+ return if done?
+
+ @failure = failure
+ @done = true
+ end
+
+ # Wait on the future to complete. This will return the result on success or raise the failure. To avoid raising,
+ # use {wait_no_raise}.
+ #
+ # @return [Object] Result on success.
+ # @raise [Exception] Failure if occurred.
+ def wait
+ Workflow.wait_condition(cancellation: nil) { done? }
+ Kernel.raise failure if failure? # steep:ignore
+
+ result #: untyped
+ end
+
+ # Wait on the future to complete. This will return the result on success or nil on failure; it will not raise.
+ #
+ # @return [Object, nil] Result on success or nil on failure.
+ def wait_no_raise
+ Workflow.wait_condition(cancellation: nil) { done? }
+ result
+ end
+ end
+ end
+end
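The real `Future` above runs on workflow fibers (`Fiber.schedule` plus `Workflow.wait_condition`) and must only be used inside workflows. To illustrate just the `done?`/`failure?`/`wait` semantics outside the SDK, here is a thread-based stand-in (`SketchFuture` is a hypothetical name, not part of the SDK):

```ruby
# Thread-based sketch of the Future semantics: the block runs in the background,
# its return value or raised exception becomes the result, and wait relays it.
class SketchFuture
  attr_reader :result, :failure

  def initialize(&block)
    @done = false
    @thread = Thread.new do
      @result = block.call
    rescue StandardError => e
      @failure = e
    ensure
      @done = true
    end
  end

  def done?
    @thread.join # real Future yields to the workflow scheduler instead
    @done
  end

  def failure? = done? && !@failure.nil?

  def wait
    done?
    raise failure if failure?

    result
  end

  def wait_no_raise
    done?
    result
  end
end

ok   = SketchFuture.new { 1 + 1 }
boom = SketchFuture.new { raise 'nope' }
p ok.wait            # => 2
p boom.failure?      # => true
p boom.wait_no_raise # => nil
```

Note the blocking `Thread#join` is purely for the sketch; in a workflow, waiting cooperatively via `Workflow.wait_condition` is what keeps execution deterministic.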
diff --git a/temporalio/lib/temporalio/workflow/handler_unfinished_policy.rb b/temporalio/lib/temporalio/workflow/handler_unfinished_policy.rb
new file mode 100644
index 00000000..c6e046d5
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/handler_unfinished_policy.rb
@@ -0,0 +1,13 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Workflow
+ # Actions taken if a workflow completes with running handlers.
+ module HandlerUnfinishedPolicy
+ # Issue a warning in addition to abandoning.
+ WARN_AND_ABANDON = 1
+ # Abandon the handler with no warning.
+ ABANDON = 2
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/workflow/info.rb b/temporalio/lib/temporalio/workflow/info.rb
new file mode 100644
index 00000000..cb5ba1d4
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/info.rb
@@ -0,0 +1,82 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Workflow
+ Info = Struct.new(
+ :attempt,
+ :continued_run_id,
+ :cron_schedule,
+ :execution_timeout,
+ :last_failure,
+ :last_result,
+ :namespace,
+ :parent,
+ :retry_policy,
+ :run_id,
+ :run_timeout,
+ :start_time,
+ :task_queue,
+ :task_timeout,
+ :workflow_id,
+ :workflow_type,
+ keyword_init: true
+ )
+
+ # Information about the running workflow. This is immutable for the life of the workflow run.
+ #
+ # @!attribute attempt
+ # @return [Integer] Current workflow attempt.
+ # @!attribute continued_run_id
+ # @return [String, nil] Run ID if this was continued.
+ # @!attribute cron_schedule
+ # @return [String, nil] Cron schedule if applicable.
+ # @!attribute execution_timeout
+ # @return [Float, nil] Execution timeout for the workflow.
+ # @!attribute last_failure
+ # @return [Exception, nil] Failure if this workflow run is a continuation of a failure.
+ # @!attribute last_result
+ # @return [Object, nil] Successful result if this workflow is a continuation of a success.
+ # @!attribute namespace
+ # @return [String] Namespace for the workflow.
+ # @!attribute parent
+ # @return [ParentInfo, nil] Parent information for the workflow if this is a child.
+ # @!attribute retry_policy
+ # @return [RetryPolicy, nil] Retry policy for the workflow.
+ # @!attribute run_id
+ # @return [String] Run ID for the workflow.
+ # @!attribute run_timeout
+ # @return [Float, nil] Run timeout for the workflow.
+ # @!attribute start_time
+ # @return [Time] Time when the workflow started.
+ # @!attribute task_queue
+ # @return [String] Task queue for the workflow.
+ # @!attribute task_timeout
+ # @return [Float] Task timeout for the workflow.
+ # @!attribute workflow_id
+ # @return [String] ID for the workflow.
+ # @!attribute workflow_type
+ # @return [String] Workflow type name.
+ #
+ # @note WARNING: This class may have required parameters added to its constructor. Users should not instantiate this
+ # class or it may break in incompatible ways.
+ class Info
+ # Information about a parent of a workflow.
+ #
+ # @!attribute namespace
+ # @return [String] Namespace for the parent.
+ # @!attribute run_id
+ # @return [String] Run ID for the parent.
+ # @!attribute workflow_id
+ # @return [String] Workflow ID for the parent.
+ #
+ # @note WARNING: This class may have required parameters added to its constructor. Users should not instantiate
+ # this class or it may break in incompatible ways.
+ ParentInfo = Struct.new(
+ :namespace,
+ :run_id,
+ :workflow_id,
+ keyword_init: true
+ )
+ end
+ end
+end
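The `Info` above is a `Struct` built with `keyword_init: true`, so every field is passed by name and omitted fields default to nil. A toy equivalent using a few of the same field names:

```ruby
# keyword_init structs take named arguments; unknown keys raise ArgumentError,
# and omitted members read back as nil.
Info = Struct.new(:attempt, :workflow_id, :task_queue, keyword_init: true)

info = Info.new(attempt: 1, workflow_id: 'wf-123')
p info.attempt            # => 1
p info.task_queue         # => nil
p info.to_h[:workflow_id] # => "wf-123"
```

This is also why the WARNING note above matters: adding a member to the struct changes what constructors accept, so user code should read these structs rather than build them.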
diff --git a/temporalio/lib/temporalio/workflow/parent_close_policy.rb b/temporalio/lib/temporalio/workflow/parent_close_policy.rb
new file mode 100644
index 00000000..214f21ce
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/parent_close_policy.rb
@@ -0,0 +1,19 @@
+# frozen_string_literal: true
+
+require 'temporalio/internal/bridge/api'
+
+module Temporalio
+ module Workflow
+ # How a child workflow should be handled when the parent closes.
+ module ParentClosePolicy
+ # Unset.
+ UNSPECIFIED = Internal::Bridge::Api::ChildWorkflow::ParentClosePolicy::PARENT_CLOSE_POLICY_UNSPECIFIED
+ # The child workflow will also terminate.
+ TERMINATE = Internal::Bridge::Api::ChildWorkflow::ParentClosePolicy::PARENT_CLOSE_POLICY_TERMINATE
+ # The child workflow will do nothing.
+ ABANDON = Internal::Bridge::Api::ChildWorkflow::ParentClosePolicy::PARENT_CLOSE_POLICY_ABANDON
+ # Cancellation will be requested of the child workflow.
+ REQUEST_CANCEL = Internal::Bridge::Api::ChildWorkflow::ParentClosePolicy::PARENT_CLOSE_POLICY_REQUEST_CANCEL
+ end
+ end
+end
diff --git a/temporalio/lib/temporalio/workflow/update_info.rb b/temporalio/lib/temporalio/workflow/update_info.rb
new file mode 100644
index 00000000..a1318912
--- /dev/null
+++ b/temporalio/lib/temporalio/workflow/update_info.rb
@@ -0,0 +1,20 @@
+# frozen_string_literal: true
+
+module Temporalio
+ module Workflow
+ # Information about a workflow update.
+ #
+ # @!attribute id
+ # @return [String] Update ID.
+ # @!attribute name
+ # @return [String] Update name.
+ #
+ # @note WARNING: This class may have required parameters added to its constructor. Users should not instantiate this
+ # class or it may break in incompatible ways.
+ UpdateInfo = Struct.new(
+ :id,
+ :name,
+ keyword_init: true
+ )
+ end
+end
diff --git a/temporalio/sig/common.rbs b/temporalio/sig/common.rbs
new file mode 100644
index 00000000..b20a99c5
--- /dev/null
+++ b/temporalio/sig/common.rbs
@@ -0,0 +1 @@
+type duration = Integer | Float
\ No newline at end of file
diff --git a/temporalio/sig/temporalio.rbs b/temporalio/sig/temporalio.rbs
index 66cbc4af..6841eea9 100644
--- a/temporalio/sig/temporalio.rbs
+++ b/temporalio/sig/temporalio.rbs
@@ -1,2 +1,3 @@
module Temporalio
+ def self._root_file_path: -> String
end
diff --git a/temporalio/sig/temporalio/activity.rbs b/temporalio/sig/temporalio/activity.rbs
index 63e4ee31..a0a60500 100644
--- a/temporalio/sig/temporalio/activity.rbs
+++ b/temporalio/sig/temporalio/activity.rbs
@@ -1,15 +1,4 @@
module Temporalio
- class Activity
- def self.activity_name: (String | Symbol name) -> void
- def self.activity_executor: (Symbol executor_name) -> void
- def self.activity_cancel_raise: (bool cancel_raise) -> void
-
- def self._activity_definition_details: -> {
- activity_name: String | Symbol,
- activity_executor: Symbol,
- activity_cancel_raise: bool
- }
-
- def execute: (?) -> untyped
+ module Activity
end
end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/activity/complete_async_error.rbs b/temporalio/sig/temporalio/activity/complete_async_error.rbs
index c0be3b60..c7f81534 100644
--- a/temporalio/sig/temporalio/activity/complete_async_error.rbs
+++ b/temporalio/sig/temporalio/activity/complete_async_error.rbs
@@ -1,5 +1,5 @@
module Temporalio
- class Activity
+ module Activity
class CompleteAsyncError < Error
end
end
diff --git a/temporalio/sig/temporalio/activity/context.rbs b/temporalio/sig/temporalio/activity/context.rbs
index 0122e785..5aec9089 100644
--- a/temporalio/sig/temporalio/activity/context.rbs
+++ b/temporalio/sig/temporalio/activity/context.rbs
@@ -1,5 +1,5 @@
module Temporalio
- class Activity
+ module Activity
class Context
def self.current: -> Context
def self.current_or_nil: -> Context?
diff --git a/temporalio/sig/temporalio/activity/definition.rbs b/temporalio/sig/temporalio/activity/definition.rbs
index b1477454..e3bdd324 100644
--- a/temporalio/sig/temporalio/activity/definition.rbs
+++ b/temporalio/sig/temporalio/activity/definition.rbs
@@ -1,19 +1,32 @@
module Temporalio
- class Activity
+ module Activity
class Definition
- attr_reader name: String | Symbol
- attr_reader proc: Proc
- attr_reader executor: Symbol
- attr_reader cancel_raise: bool
+ def self.activity_name: (String | Symbol name) -> void
+ def self.activity_executor: (Symbol executor_name) -> void
+ def self.activity_cancel_raise: (bool cancel_raise) -> void
+
+ def self._activity_definition_details: -> {
+ activity_name: String | Symbol,
+ activity_executor: Symbol,
+ activity_cancel_raise: bool
+ }
+
+ def execute: (*untyped) -> untyped
- def self.from_activity: (Activity | singleton(Activity) | Definition activity) -> Definition
+ class Info
+ attr_reader name: String | Symbol
+ attr_reader proc: Proc
+ attr_reader executor: Symbol
+ attr_reader cancel_raise: bool
- def initialize: (
- name: String | Symbol,
- ?proc: Proc?,
- ?executor: Symbol,
- ?cancel_raise: bool
- ) ?{ (?) -> untyped } -> void
+ def self.from_activity: (Definition | singleton(Definition) | Info activity) -> Info
+
+ def initialize: (
+ name: String | Symbol,
+ ?executor: Symbol,
+ ?cancel_raise: bool
+ ) { (?) -> untyped } -> void
+ end
end
end
end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/activity/info.rbs b/temporalio/sig/temporalio/activity/info.rbs
index 93559bb9..07b6465b 100644
--- a/temporalio/sig/temporalio/activity/info.rbs
+++ b/temporalio/sig/temporalio/activity/info.rbs
@@ -1,5 +1,5 @@
module Temporalio
- class Activity
+ module Activity
class Info
attr_reader activity_id: String
attr_reader activity_type: String
diff --git a/temporalio/sig/temporalio/cancellation.rbs b/temporalio/sig/temporalio/cancellation.rbs
index 84a8f18a..7b73fbb2 100644
--- a/temporalio/sig/temporalio/cancellation.rbs
+++ b/temporalio/sig/temporalio/cancellation.rbs
@@ -10,7 +10,8 @@ module Temporalio
def to_ary: -> [Cancellation, Proc]
def wait: -> void
def shield: [T] { (?) -> untyped } -> T
- def add_cancel_callback: (?Proc proc) ?{ -> untyped } -> void
+ def add_cancel_callback: { -> untyped } -> Object
+ def remove_cancel_callback: (Object key) -> void
private def on_cancel: (reason: Object?) -> void
private def prepare_cancel: (reason: Object?) -> Array[Proc]?
diff --git a/temporalio/sig/temporalio/client.rbs b/temporalio/sig/temporalio/client.rbs
index 6570bfcb..c3ae4539 100644
--- a/temporalio/sig/temporalio/client.rbs
+++ b/temporalio/sig/temporalio/client.rbs
@@ -16,6 +16,8 @@ module Temporalio
logger: Logger,
default_workflow_query_reject_condition: WorkflowQueryRejectCondition::enum?
) -> void
+
+ def to_h: -> Hash[Symbol, untyped]
end
def self.connect: (
@@ -54,39 +56,39 @@ module Temporalio
def operator_service: -> Connection::OperatorService
def start_workflow: (
- String workflow,
+ singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String workflow,
*Object? args,
id: String,
task_queue: String,
- ?execution_timeout: Float?,
- ?run_timeout: Float?,
- ?task_timeout: Float?,
+ ?execution_timeout: duration?,
+ ?run_timeout: duration?,
+ ?task_timeout: duration?,
?id_reuse_policy: WorkflowIDReusePolicy::enum,
?id_conflict_policy: WorkflowIDConflictPolicy::enum,
?retry_policy: RetryPolicy?,
?cron_schedule: String?,
- ?memo: Hash[String, Object?]?,
+ ?memo: Hash[String | Symbol, Object?]?,
?search_attributes: SearchAttributes?,
- ?start_delay: Float?,
+ ?start_delay: duration?,
?request_eager_start: bool,
?rpc_options: RPCOptions?
) -> WorkflowHandle
def execute_workflow: (
- String workflow,
+ singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String workflow,
*Object? args,
id: String,
task_queue: String,
- ?execution_timeout: Float?,
- ?run_timeout: Float?,
- ?task_timeout: Float?,
+ ?execution_timeout: duration?,
+ ?run_timeout: duration?,
+ ?task_timeout: duration?,
?id_reuse_policy: WorkflowIDReusePolicy::enum,
?id_conflict_policy: WorkflowIDConflictPolicy::enum,
?retry_policy: RetryPolicy?,
?cron_schedule: String?,
- ?memo: Hash[String, Object?]?,
+ ?memo: Hash[String | Symbol, Object?]?,
?search_attributes: SearchAttributes?,
- ?start_delay: Float?,
+ ?start_delay: duration?,
?request_eager_start: bool,
?rpc_options: RPCOptions?
) -> Object?
diff --git a/temporalio/sig/temporalio/client/connection/test_service.rbs b/temporalio/sig/temporalio/client/connection/test_service.rbs
new file mode 100644
index 00000000..cef27296
--- /dev/null
+++ b/temporalio/sig/temporalio/client/connection/test_service.rbs
@@ -0,0 +1,35 @@
+# Generated code. DO NOT EDIT!
+
+module Temporalio
+ class Client
+ class Connection
+ class TestService < Service
+ def initialize: (Connection) -> void
+ def lock_time_skipping: (
+ untyped request,
+ ?rpc_options: RPCOptions?
+ ) -> untyped
+ def unlock_time_skipping: (
+ untyped request,
+ ?rpc_options: RPCOptions?
+ ) -> untyped
+ def sleep: (
+ untyped request,
+ ?rpc_options: RPCOptions?
+ ) -> untyped
+ def sleep_until: (
+ untyped request,
+ ?rpc_options: RPCOptions?
+ ) -> untyped
+ def unlock_time_skipping_with_sleep: (
+ untyped request,
+ ?rpc_options: RPCOptions?
+ ) -> untyped
+ def get_current_time: (
+ untyped request,
+ ?rpc_options: RPCOptions?
+ ) -> untyped
+ end
+ end
+ end
+end
diff --git a/temporalio/sig/temporalio/client/interceptor.rbs b/temporalio/sig/temporalio/client/interceptor.rbs
index fecb8ca8..07f59933 100644
--- a/temporalio/sig/temporalio/client/interceptor.rbs
+++ b/temporalio/sig/temporalio/client/interceptor.rbs
@@ -4,39 +4,39 @@ module Temporalio
def intercept_client: (Outbound next_interceptor) -> Outbound
class StartWorkflowInput
- attr_accessor workflow: String
+ attr_accessor workflow: singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String
attr_accessor args: Array[Object?]
attr_accessor workflow_id: String
attr_accessor task_queue: String
- attr_accessor execution_timeout: Float?
- attr_accessor run_timeout: Float?
- attr_accessor task_timeout: Float?
+ attr_accessor execution_timeout: duration?
+ attr_accessor run_timeout: duration?
+ attr_accessor task_timeout: duration?
attr_accessor id_reuse_policy: WorkflowIDReusePolicy::enum
attr_accessor id_conflict_policy: WorkflowIDConflictPolicy::enum
attr_accessor retry_policy: RetryPolicy?
attr_accessor cron_schedule: String?
- attr_accessor memo: Hash[String, Object?]?
+ attr_accessor memo: Hash[String | Symbol, Object?]?
attr_accessor search_attributes: SearchAttributes?
- attr_accessor start_delay: Float?
+ attr_accessor start_delay: duration?
attr_accessor request_eager_start: bool
attr_accessor headers: Hash[String, Object?]
attr_accessor rpc_options: RPCOptions?
def initialize: (
- workflow: String,
+ workflow: singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String,
args: Array[Object?],
workflow_id: String,
task_queue: String,
- execution_timeout: Float?,
- run_timeout: Float?,
- task_timeout: Float?,
+ execution_timeout: duration?,
+ run_timeout: duration?,
+ task_timeout: duration?,
id_reuse_policy: WorkflowIDReusePolicy::enum,
id_conflict_policy: WorkflowIDConflictPolicy::enum,
retry_policy: RetryPolicy?,
cron_schedule: String?,
- memo: Hash[String, Object?]?,
+ memo: Hash[String | Symbol, Object?]?,
search_attributes: SearchAttributes?,
- start_delay: Float?,
+ start_delay: duration?,
request_eager_start: bool,
headers: Hash[String, Object?],
rpc_options: RPCOptions?
@@ -96,7 +96,7 @@ module Temporalio
class SignalWorkflowInput
attr_accessor workflow_id: String
attr_accessor run_id: String?
- attr_accessor signal: String
+ attr_accessor signal: Workflow::Definition::Signal | Symbol | String
attr_accessor args: Array[Object?]
attr_accessor headers: Hash[String, Object?]
attr_accessor rpc_options: RPCOptions?
@@ -104,7 +104,7 @@ module Temporalio
def initialize: (
workflow_id: String,
run_id: String?,
- signal: String,
+ signal: Workflow::Definition::Signal | Symbol | String,
args: Array[Object?],
headers: Hash[String, Object?],
rpc_options: RPCOptions?
@@ -114,7 +114,7 @@ module Temporalio
class QueryWorkflowInput
attr_accessor workflow_id: String
attr_accessor run_id: String?
- attr_accessor query: String
+ attr_accessor query: Workflow::Definition::Query | Symbol | String
attr_accessor args: Array[Object?]
attr_accessor reject_condition: WorkflowQueryRejectCondition::enum?
attr_accessor headers: Hash[String, Object?]
@@ -123,7 +123,7 @@ module Temporalio
def initialize: (
workflow_id: String,
run_id: String?,
- query: String,
+ query: Workflow::Definition::Query | Symbol | String,
args: Array[Object?],
reject_condition: WorkflowQueryRejectCondition::enum?,
headers: Hash[String, Object?],
@@ -135,7 +135,7 @@ module Temporalio
attr_accessor workflow_id: String
attr_accessor run_id: String?
attr_accessor update_id: String
- attr_accessor update: String
+ attr_accessor update: Workflow::Definition::Update | Symbol | String
attr_accessor args: Array[Object?]
attr_accessor wait_for_stage: WorkflowUpdateWaitStage::enum
attr_accessor headers: Hash[String, Object?]
@@ -145,7 +145,7 @@ module Temporalio
workflow_id: String,
run_id: String?,
update_id: String,
- update: String,
+ update: Workflow::Definition::Update | Symbol | String,
args: Array[Object?],
wait_for_stage: WorkflowUpdateWaitStage::enum,
headers: Hash[String, Object?],
@@ -204,7 +204,7 @@ module Temporalio
attr_accessor schedule: Schedule
attr_accessor trigger_immediately: bool
attr_accessor backfills: Array[Schedule::Backfill]
- attr_accessor memo: Hash[String, Object?]?
+ attr_accessor memo: Hash[String | Symbol, Object?]?
attr_accessor search_attributes: SearchAttributes?
attr_accessor rpc_options: RPCOptions?
@@ -213,7 +213,7 @@ module Temporalio
schedule: Schedule,
trigger_immediately: bool,
backfills: Array[Schedule::Backfill],
- memo: Hash[String, Object?]?,
+ memo: Hash[String | Symbol, Object?]?,
search_attributes: SearchAttributes?,
rpc_options: RPCOptions?
) -> void
diff --git a/temporalio/sig/temporalio/client/schedule.rbs b/temporalio/sig/temporalio/client/schedule.rbs
index 1ee9b682..2d22ac91 100644
--- a/temporalio/sig/temporalio/client/schedule.rbs
+++ b/temporalio/sig/temporalio/client/schedule.rbs
@@ -60,13 +60,13 @@ module Temporalio
class StartWorkflow
include Action
- attr_accessor workflow: String
+ attr_accessor workflow: singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String
attr_accessor args: Array[Object?]
attr_accessor id: String
attr_accessor task_queue: String
- attr_accessor execution_timeout: Float?
- attr_accessor run_timeout: Float?
- attr_accessor task_timeout: Float?
+ attr_accessor execution_timeout: duration?
+ attr_accessor run_timeout: duration?
+ attr_accessor task_timeout: duration?
attr_accessor retry_policy: RetryPolicy?
attr_accessor memo: Hash[String, Object?]?
attr_accessor search_attributes: SearchAttributes?
@@ -78,13 +78,13 @@ module Temporalio
) -> StartWorkflow
def initialize: (
- String workflow,
+ singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String workflow,
*Object? args,
id: String,
task_queue: String,
- ?execution_timeout: Float?,
- ?run_timeout: Float?,
- ?task_timeout: Float?,
+ ?execution_timeout: duration?,
+ ?run_timeout: duration?,
+ ?task_timeout: duration?,
?retry_policy: RetryPolicy?,
?memo: Hash[String, Object?]?,
?search_attributes: SearchAttributes?,
@@ -191,14 +191,14 @@ module Temporalio
end
class Interval
- attr_accessor every: Float
- attr_accessor offset: Float?
+ attr_accessor every: duration
+ attr_accessor offset: duration?
def self._from_proto: (untyped raw_int) -> Interval
def initialize: (
- every: Float,
- ?offset: Float?
+ every: duration,
+ ?offset: duration?
) -> void
def _to_proto: -> untyped
@@ -225,14 +225,14 @@ module Temporalio
class Policy
attr_accessor overlap: OverlapPolicy::enum
- attr_accessor catchup_window: Float
+ attr_accessor catchup_window: duration
attr_accessor pause_on_failure: bool
def self._from_proto: (untyped raw_policies) -> Policy
def initialize: (
?overlap: OverlapPolicy::enum,
- ?catchup_window: Float,
+ ?catchup_window: duration,
?pause_on_failure: bool
) -> void
diff --git a/temporalio/sig/temporalio/client/workflow_handle.rbs b/temporalio/sig/temporalio/client/workflow_handle.rbs
index e20b5f81..a0ad22fb 100644
--- a/temporalio/sig/temporalio/client/workflow_handle.rbs
+++ b/temporalio/sig/temporalio/client/workflow_handle.rbs
@@ -37,20 +37,20 @@ module Temporalio
) -> Enumerator[untyped, untyped]
def signal: (
- String signal,
+ Workflow::Definition::Signal | Symbol | String signal,
*Object? args,
?rpc_options: RPCOptions?
) -> void
def query: (
- String query,
+ Workflow::Definition::Query | Symbol | String query,
*Object? args,
?reject_condition: WorkflowQueryRejectCondition::enum?,
?rpc_options: RPCOptions?
) -> Object?
def start_update: (
- String update,
+ Workflow::Definition::Update | Symbol | String update,
*Object? args,
wait_for_stage: WorkflowUpdateWaitStage::enum,
?id: String,
@@ -58,7 +58,7 @@ module Temporalio
) -> WorkflowUpdateHandle
def execute_update: (
- String update,
+ Workflow::Definition::Update | Symbol | String update,
*Object? args,
?id: String,
?rpc_options: RPCOptions?
diff --git a/temporalio/sig/temporalio/converters/raw_value.rbs b/temporalio/sig/temporalio/converters/raw_value.rbs
new file mode 100644
index 00000000..1e9867f0
--- /dev/null
+++ b/temporalio/sig/temporalio/converters/raw_value.rbs
@@ -0,0 +1,9 @@
+module Temporalio
+ module Converters
+ class RawValue
+ attr_reader payload: untyped
+
+ def initialize: (untyped payload) -> void
+ end
+ end
+end
diff --git a/temporalio/sig/temporalio/error/failure.rbs b/temporalio/sig/temporalio/error/failure.rbs
index 35ba7f36..6cbcdc0b 100644
--- a/temporalio/sig/temporalio/error/failure.rbs
+++ b/temporalio/sig/temporalio/error/failure.rbs
@@ -6,23 +6,23 @@ module Temporalio
class WorkflowAlreadyStartedError < Failure
attr_reader workflow_id: String
attr_reader workflow_type: String
- attr_reader run_id: String
+ attr_reader run_id: String?
- def initialize: (workflow_id: String, workflow_type: String, run_id: String) -> void
+ def initialize: (workflow_id: String, workflow_type: String, run_id: String?) -> void
end
class ApplicationError < Failure
attr_reader details: Array[Object?]
attr_reader type: String?
attr_reader non_retryable: bool
- attr_reader next_retry_delay: Float?
+ attr_reader next_retry_delay: duration?
def initialize: (
String message,
*Object? details,
?type: String?,
?non_retryable: bool,
- ?next_retry_delay: Float?
+ ?next_retry_delay: duration?
) -> void
def retryable?: -> bool
diff --git a/temporalio/sig/temporalio/internal/bridge/testing.rbs b/temporalio/sig/temporalio/internal/bridge/testing.rbs
index da264d6d..56a36c77 100644
--- a/temporalio/sig/temporalio/internal/bridge/testing.rbs
+++ b/temporalio/sig/temporalio/internal/bridge/testing.rbs
@@ -35,8 +35,30 @@ module Temporalio
) -> void
end
+ class StartTestServerOptions
+ attr_accessor existing_path: String?
+ attr_accessor sdk_name: String
+ attr_accessor sdk_version: String
+ attr_accessor download_version: String
+ attr_accessor download_dest_dir: String?
+ attr_accessor port: Integer?
+ attr_accessor extra_args: Array[String]
+
+ def initialize: (
+ existing_path: String?,
+ sdk_name: String,
+ sdk_version: String,
+ download_version: String,
+ download_dest_dir: String?,
+ port: Integer?,
+ extra_args: Array[String]
+ ) -> void
+ end
+
def self.start_dev_server: (Runtime runtime, StartDevServerOptions options) -> EphemeralServer
+ def self.start_test_server: (Runtime runtime, StartTestServerOptions options) -> EphemeralServer
+
def shutdown: -> void
# Defined in Rust
@@ -47,6 +69,12 @@ module Temporalio
Queue queue
) -> void
+ def self.async_start_test_server: (
+ Runtime runtime,
+ StartTestServerOptions options,
+ Queue queue
+ ) -> void
+
def target: -> String
def async_shutdown: (Queue queue) -> void
diff --git a/temporalio/sig/temporalio/internal/bridge/worker.rbs b/temporalio/sig/temporalio/internal/bridge/worker.rbs
index 2a543e3e..2701f8cd 100644
--- a/temporalio/sig/temporalio/internal/bridge/worker.rbs
+++ b/temporalio/sig/temporalio/internal/bridge/worker.rbs
@@ -22,6 +22,8 @@ module Temporalio
attr_accessor max_task_queue_activities_per_second: Float?
attr_accessor graceful_shutdown_period: Float
attr_accessor use_worker_versioning: bool
+ attr_accessor nondeterminism_as_workflow_fail: bool
+ attr_accessor nondeterminism_as_workflow_fail_for_types: Array[String]
def initialize: (
activity: bool,
@@ -42,7 +44,9 @@ module Temporalio
max_worker_activities_per_second: Float?,
max_task_queue_activities_per_second: Float?,
graceful_shutdown_period: Float,
- use_worker_versioning: bool
+ use_worker_versioning: bool,
+ nondeterminism_as_workflow_fail: bool,
+ nondeterminism_as_workflow_fail_for_types: Array[String]
) -> void
end
@@ -110,6 +114,8 @@ module Temporalio
def async_complete_activity_task: (String proto, Queue queue) -> void
+ def async_complete_workflow_activation: (String run_id, String proto, Queue queue) -> void
+
def record_activity_heartbeat: (String proto) -> void
def replace_client: (Client client) -> void
diff --git a/temporalio/sig/temporalio/internal/proto_utils.rbs b/temporalio/sig/temporalio/internal/proto_utils.rbs
index 590841a3..56258f3d 100644
--- a/temporalio/sig/temporalio/internal/proto_utils.rbs
+++ b/temporalio/sig/temporalio/internal/proto_utils.rbs
@@ -1,7 +1,7 @@
module Temporalio
module Internal
module ProtoUtils
- def self.seconds_to_duration: (Float? seconds_float) -> untyped?
+ def self.seconds_to_duration: (duration? seconds_numeric) -> untyped?
def self.duration_to_seconds: (untyped? duration) -> Float?
@@ -10,10 +10,15 @@ module Temporalio
def self.timestamp_to_time: (untyped? timestamp) -> Time?
def self.memo_to_proto: (
- Hash[String, Object?]? hash,
+ Hash[String | Symbol, Object?]? hash,
Converters::DataConverter | Converters::PayloadConverter converter
) -> untyped?
+ def self.memo_to_proto_hash: (
+ Hash[String | Symbol, Object?]? hash,
+ Converters::DataConverter | Converters::PayloadConverter converter
+ ) -> Hash[String, untyped]?
+
def self.memo_from_proto: (
untyped? memo,
Converters::DataConverter | Converters::PayloadConverter converter
@@ -24,6 +29,11 @@ module Temporalio
Converters::DataConverter | Converters::PayloadConverter converter
) -> untyped?
+ def self.headers_to_proto_hash: (
+ Hash[String, Object?]? hash,
+ Converters::DataConverter | Converters::PayloadConverter converter
+ ) -> Hash[String, untyped]?
+
def self.headers_from_proto: (
untyped? headers,
Converters::DataConverter | Converters::PayloadConverter converter
diff --git a/temporalio/sig/temporalio/internal/worker/activity_worker.rbs b/temporalio/sig/temporalio/internal/worker/activity_worker.rbs
index 57ad3872..af6364e2 100644
--- a/temporalio/sig/temporalio/internal/worker/activity_worker.rbs
+++ b/temporalio/sig/temporalio/internal/worker/activity_worker.rbs
@@ -6,8 +6,8 @@ module Temporalio
attr_reader bridge_worker: Bridge::Worker
def initialize: (
- Temporalio::Worker worker,
- Bridge::Worker bridge_worker,
+ worker: Temporalio::Worker,
+ bridge_worker: Bridge::Worker
) -> void
def set_running_activity: (String task_token, RunningActivity? activity) -> void
@@ -19,14 +19,14 @@ module Temporalio
def handle_start_task: (String task_token, untyped start) -> void
def handle_cancel_task: (String task_token, untyped cancel) -> void
- def execute_activity: (String task_token, Activity::Definition defn, untyped start) -> void
+ def execute_activity: (String task_token, Activity::Definition::Info defn, untyped start) -> void
def run_activity: (
RunningActivity activity,
- Temporalio::Worker::Interceptor::ExecuteActivityInput input
+ Temporalio::Worker::Interceptor::Activity::ExecuteInput input
) -> void
class RunningActivity < Activity::Context
- attr_accessor _outbound_impl: Temporalio::Worker::Interceptor::ActivityOutbound?
+ attr_accessor _outbound_impl: Temporalio::Worker::Interceptor::Activity::Outbound?
attr_accessor _server_requested_cancel: bool
def initialize: (
@@ -39,11 +39,11 @@ module Temporalio
) -> void
end
- class InboundImplementation < Temporalio::Worker::Interceptor::ActivityInbound
+ class InboundImplementation < Temporalio::Worker::Interceptor::Activity::Inbound
def initialize: (ActivityWorker worker) -> void
end
- class OutboundImplementation < Temporalio::Worker::Interceptor::ActivityOutbound
+ class OutboundImplementation < Temporalio::Worker::Interceptor::Activity::Outbound
def initialize: (ActivityWorker worker) -> void
end
end
diff --git a/temporalio/sig/temporalio/internal/worker/multi_runner.rbs b/temporalio/sig/temporalio/internal/worker/multi_runner.rbs
index ac8b663b..547e78f5 100644
--- a/temporalio/sig/temporalio/internal/worker/multi_runner.rbs
+++ b/temporalio/sig/temporalio/internal/worker/multi_runner.rbs
@@ -6,6 +6,17 @@ module Temporalio
def apply_thread_or_fiber_block: ?{ (?) -> untyped } -> void
+ def apply_workflow_activation_decoded: (
+ workflow_worker: WorkflowWorker,
+ activation: untyped
+ ) -> void
+
+ def apply_workflow_activation_complete: (
+ workflow_worker: WorkflowWorker,
+ activation_completion: untyped,
+ encoded: bool
+ ) -> void
+
def raise_in_thread_or_fiber_block: (Exception error) -> void
def initiate_shutdown: -> void
@@ -27,6 +38,40 @@ module Temporalio
) -> void
end
+ class WorkflowActivationDecoded < Event
+ attr_reader workflow_worker: WorkflowWorker
+ attr_reader activation: untyped
+
+ def initialize: (
+ workflow_worker: WorkflowWorker,
+ activation: untyped
+ ) -> void
+ end
+
+ class WorkflowActivationComplete < Event
+ attr_reader workflow_worker: WorkflowWorker
+ attr_reader activation_completion: untyped
+ attr_reader encoded: bool
+ attr_reader completion_complete_queue: Queue
+
+ def initialize: (
+ workflow_worker: WorkflowWorker,
+ activation_completion: untyped,
+ encoded: bool,
+ completion_complete_queue: Queue
+ ) -> void
+ end
+
+ class WorkflowActivationCompletionComplete < Event
+ attr_reader run_id: String
+ attr_reader error: Exception
+
+ def initialize: (
+ run_id: String,
+ error: Exception
+ ) -> void
+ end
+
class PollFailure < Event
attr_reader worker: Temporalio::Worker
attr_reader worker_type: Symbol
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance.rbs
new file mode 100644
index 00000000..f4504ad0
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance.rbs
@@ -0,0 +1,99 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ def self.new_completion_with_failure: (
+ run_id: String,
+ error: Exception,
+ failure_converter: Converters::FailureConverter,
+ payload_converter: Converters::PayloadConverter
+ ) -> untyped
+
+ attr_reader context: Context
+ attr_reader logger: ReplaySafeLogger
+ attr_reader info: Workflow::Info
+ attr_reader scheduler: Scheduler
+ attr_reader disable_eager_activity_execution: bool
+ attr_reader pending_activities: Hash[Integer, Fiber]
+ attr_reader pending_timers: Hash[Integer, Fiber]
+ attr_reader pending_child_workflow_starts: Hash[Integer, Fiber]
+ attr_reader pending_child_workflows: Hash[Integer, ChildWorkflowHandle]
+ attr_reader pending_external_signals: Hash[Integer, Fiber]
+ attr_reader pending_external_cancels: Hash[Integer, Fiber]
+ attr_reader in_progress_handlers: Array[HandlerExecution]
+ attr_reader payload_converter: Converters::PayloadConverter
+ attr_reader failure_converter: Converters::FailureConverter
+ attr_reader cancellation: Cancellation
+ attr_reader continue_as_new_suggested: bool
+ attr_reader current_history_length: Integer
+ attr_reader current_history_size: Integer
+ attr_reader replaying: bool
+ attr_reader random: Random
+ attr_reader signal_handlers: Hash[String?, Workflow::Definition::Signal]
+ attr_reader query_handlers: Hash[String?, Workflow::Definition::Query]
+ attr_reader update_handlers: Hash[String?, Workflow::Definition::Update]
+ attr_reader context_frozen: bool
+
+ def initialize: (Details details) -> void
+
+ def activate: (untyped activation) -> untyped
+
+ def add_command: (untyped command) -> void
+
+ def instance: -> Object
+
+ def search_attributes: -> SearchAttributes
+
+ def memo: -> ExternallyImmutableHash[String, Object?]
+
+ def now: -> Time
+
+ def illegal_call_tracing_disabled: [T] { -> T } -> T
+
+ def patch: (patch_id: Symbol | String, deprecated: bool) -> bool
+
+ def metric_meter: -> Temporalio::Metric::Meter
+
+ def run_in_scheduler: [T] { -> T } -> T
+
+ def activate_internal: (untyped activation) -> untyped
+
+ def create_instance: -> Object
+
+ def apply: (untyped job) -> void
+
+ def apply_signal: (untyped job) -> void
+
+ def apply_query: (untyped job) -> void
+
+ def apply_update: (untyped job) -> void
+
+ def run_workflow: -> void
+
+ def schedule: (?top_level: bool, ?handler_exec: HandlerExecution?) { -> untyped } -> Fiber
+
+ def on_top_level_exception: (Exception err) -> void
+
+ def failure_exception?: (Exception err) -> bool
+
+ def with_context_frozen: [T] { -> T } -> T
+
+ def convert_handler_args: (
+ payload_array: Array[untyped],
+ defn: Workflow::Definition::Signal | Workflow::Definition::Query | Workflow::Definition::Update
+ ) -> Array[Object?]
+
+ def convert_args: (
+ payload_array: Array[untyped],
+ method_name: Symbol?,
+ raw_args: bool,
+ ?ignore_first_param: bool
+ ) -> Array[Object?]
+
+ def scoped_logger_info: -> Hash[Symbol, Object?]
+
+ def warn_on_any_unfinished_handlers: -> void
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/child_workflow_handle.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/child_workflow_handle.rbs
new file mode 100644
index 00000000..ce0be17c
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/child_workflow_handle.rbs
@@ -0,0 +1,19 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class ChildWorkflowHandle < Workflow::ChildWorkflowHandle
+ def initialize: (
+ id: String,
+ first_execution_run_id: String,
+ instance: WorkflowInstance,
+ cancellation: Cancellation,
+ cancel_callback_key: Object
+ ) -> void
+
+ def _resolve: (untyped resolution) -> void
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/context.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/context.rbs
new file mode 100644
index 00000000..0dbf5eae
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/context.rbs
@@ -0,0 +1,137 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class Context
+ def initialize: (WorkflowInstance instance) -> void
+
+ def all_handlers_finished?: -> bool
+
+ def cancellation: -> Cancellation
+
+ def continue_as_new_suggested: -> bool
+
+ def current_history_length: -> Integer
+
+ def current_history_size: -> Integer
+
+ def current_update_info: -> Workflow::UpdateInfo?
+
+ def deprecate_patch: (Symbol | String patch_id) -> void
+
+ def execute_activity: (
+ singleton(Activity::Definition) | Symbol | String activity,
+ *Object? args,
+ task_queue: String,
+ schedule_to_close_timeout: duration?,
+ schedule_to_start_timeout: duration?,
+ start_to_close_timeout: duration?,
+ heartbeat_timeout: duration?,
+ retry_policy: RetryPolicy?,
+ cancellation: Cancellation,
+ cancellation_type: Workflow::ActivityCancellationType::enum,
+ activity_id: String?,
+ disable_eager_execution: bool
+ ) -> Object?
+
+ def execute_local_activity: (
+ singleton(Activity::Definition) | Symbol | String activity,
+ *Object? args,
+ schedule_to_close_timeout: duration?,
+ schedule_to_start_timeout: duration?,
+ start_to_close_timeout: duration?,
+ retry_policy: RetryPolicy?,
+ local_retry_threshold: duration?,
+ cancellation: Cancellation,
+ cancellation_type: Workflow::ActivityCancellationType::enum,
+ activity_id: String?
+ ) -> Object?
+
+ def external_workflow_handle: (String workflow_id, ?run_id: String?) -> ExternalWorkflowHandle
+
+ def illegal_call_tracing_disabled: [T] { -> T } -> T
+
+ def info: -> Workflow::Info
+
+ def initialize_continue_as_new_error: (Workflow::ContinueAsNewError error) -> void
+
+ def logger: -> ReplaySafeLogger
+
+ def memo: -> ExternallyImmutableHash[String, Object?]
+
+ def metric_meter: -> Temporalio::Metric::Meter
+
+ def now: -> Time
+
+ def patched: (Symbol | String patch_id) -> bool
+
+ def payload_converter: -> Converters::PayloadConverter
+
+ def query_handlers: -> HandlerHash[Workflow::Definition::Query]
+
+ def random: -> Random
+
+ def replaying?: -> bool
+
+ def search_attributes: -> SearchAttributes
+
+ def signal_handlers: -> HandlerHash[Workflow::Definition::Signal]
+
+ def sleep: (duration? duration, summary: String?, cancellation: Cancellation) -> void
+
+ def start_child_workflow: (
+ singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String workflow,
+ *Object? args,
+ id: String,
+ task_queue: String,
+ cancellation: Cancellation,
+ cancellation_type: Workflow::ChildWorkflowCancellationType::enum,
+ parent_close_policy: Workflow::ParentClosePolicy::enum,
+ execution_timeout: duration?,
+ run_timeout: duration?,
+ task_timeout: duration?,
+ id_reuse_policy: WorkflowIDReusePolicy::enum,
+ retry_policy: RetryPolicy?,
+ cron_schedule: String?,
+ memo: Hash[String | Symbol, Object?]?,
+ search_attributes: SearchAttributes?
+ ) -> ChildWorkflowHandle
+
+ def timeout: [T] (
+ duration? duration,
+ singleton(Exception) exception_class,
+ *Object? exception_args,
+ summary: String?
+ ) { -> T } -> T
+
+ def update_handlers: -> HandlerHash[Workflow::Definition::Update]
+
+ def upsert_memo: (Hash[Symbol | String, Object?] hash) -> void
+
+ def upsert_search_attributes: (*SearchAttributes::Update updates) -> void
+
+ def wait_condition: [T] (cancellation: Cancellation?) { -> T } -> T
+
+ def _cancel_external_workflow: (id: String, run_id: String?) -> void
+
+ def _outbound=: (Temporalio::Worker::Interceptor::Workflow::Outbound outbound) -> void
+
+ def _signal_child_workflow: (
+ id: String,
+ signal: Workflow::Definition::Signal | Symbol | String,
+ args: Array[Object?],
+ cancellation: Cancellation
+ ) -> void
+
+ def _signal_external_workflow: (
+ id: String,
+ run_id: String?,
+ signal: Workflow::Definition::Signal | Symbol | String,
+ args: Array[Object?],
+ cancellation: Cancellation
+ ) -> void
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/details.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/details.rbs
new file mode 100644
index 00000000..731e41cd
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/details.rbs
@@ -0,0 +1,37 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class Details
+ attr_reader namespace: String
+ attr_reader task_queue: String
+ attr_reader definition: Workflow::Definition::Info
+ attr_reader initial_activation: untyped
+ attr_reader logger: Logger
+ attr_reader metric_meter: Temporalio::Metric::Meter
+ attr_reader payload_converter: Converters::PayloadConverter
+ attr_reader failure_converter: Converters::FailureConverter
+ attr_reader interceptors: Array[Temporalio::Worker::Interceptor::Workflow]
+ attr_reader disable_eager_activity_execution: bool
+ attr_reader illegal_calls: Hash[String, :all | Hash[Symbol, bool]]
+ attr_reader workflow_failure_exception_types: Array[singleton(Exception)]
+
+ def initialize: (
+ namespace: String,
+ task_queue: String,
+ definition: Workflow::Definition::Info,
+ initial_activation: untyped,
+ logger: Logger,
+ metric_meter: Temporalio::Metric::Meter,
+ payload_converter: Converters::PayloadConverter,
+ failure_converter: Converters::FailureConverter,
+ interceptors: Array[Temporalio::Worker::Interceptor::Workflow],
+ disable_eager_activity_execution: bool,
+ illegal_calls: Hash[String, :all | Hash[Symbol, bool]],
+ workflow_failure_exception_types: Array[singleton(Exception)]
+ ) -> void
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/external_workflow_handle.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/external_workflow_handle.rbs
new file mode 100644
index 00000000..a75a288e
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/external_workflow_handle.rbs
@@ -0,0 +1,15 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class ExternalWorkflowHandle < Workflow::ExternalWorkflowHandle
+ def initialize: (
+ id: String,
+ run_id: String?,
+ instance: Object
+ ) -> void
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/externally_immutable_hash.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/externally_immutable_hash.rbs
new file mode 100644
index 00000000..0ee2dd30
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/externally_immutable_hash.rbs
@@ -0,0 +1,16 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class ExternallyImmutableHash[K, V] < Hash[K, V]
+ def initialize: (Hash[K, V] initial_hash) -> void
+
+ def _update: { (Hash[K, V]) -> void } -> void
+
+ def __getobj__: -> Hash[K, V]
+ def __setobj__: (Hash[K, V] value) -> void
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/handler_execution.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/handler_execution.rbs
new file mode 100644
index 00000000..fb1342ef
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/handler_execution.rbs
@@ -0,0 +1,19 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class HandlerExecution
+ attr_reader name: String
+ attr_reader update_id: String?
+ attr_reader unfinished_policy: Workflow::HandlerUnfinishedPolicy::enum
+
+ def initialize: (
+ name: String,
+ update_id: String?,
+ unfinished_policy: Workflow::HandlerUnfinishedPolicy::enum
+ ) -> void
+ end
+ end
+ end
+ end
+end
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/handler_hash.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/handler_hash.rbs
new file mode 100644
index 00000000..acb6cd38
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/handler_hash.rbs
@@ -0,0 +1,14 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class HandlerHash[D] < Hash[String?, D]
+ def initialize: (
+ Hash[String?, D] initial_frozen_hash,
+ singleton(Workflow::Definition::Signal) | singleton(Workflow::Definition::Query) | singleton(Workflow::Definition::Update) definition_class
+ ) ?{ (D) -> void } -> void
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/illegal_call_tracer.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/illegal_call_tracer.rbs
new file mode 100644
index 00000000..fe15476c
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/illegal_call_tracer.rbs
@@ -0,0 +1,18 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class IllegalCallTracer
+ def self.frozen_validated_illegal_calls: (
+ Hash[String, :all | Array[Symbol]] illegal_calls
+ ) -> Hash[String, :all | Hash[Symbol, bool]]
+
+ def initialize: (Hash[String, :all | Hash[Symbol, bool]] illegal_calls) -> void
+
+ def enable: [T] { -> T } -> T
+ def disable: [T] { -> T } -> T
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/inbound_implementation.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/inbound_implementation.rbs
new file mode 100644
index 00000000..80c5fc8a
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/inbound_implementation.rbs
@@ -0,0 +1,19 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class InboundImplementation < Temporalio::Worker::Interceptor::Workflow::Inbound
+ def initialize: (WorkflowInstance instance) -> void
+
+ def invoke_handler: (
+ String name,
+ Temporalio::Worker::Interceptor::Workflow::HandleSignalInput |
+ Temporalio::Worker::Interceptor::Workflow::HandleQueryInput |
+ Temporalio::Worker::Interceptor::Workflow::HandleUpdateInput input,
+ ?to_invoke: Symbol | Proc | nil
+ ) -> Object?
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/outbound_implementation.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/outbound_implementation.rbs
new file mode 100644
index 00000000..ae78b428
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/outbound_implementation.rbs
@@ -0,0 +1,32 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class OutboundImplementation < Temporalio::Worker::Interceptor::Workflow::Outbound
+ def initialize: (WorkflowInstance instance) -> void
+
+ def execute_activity_with_local_backoffs: (
+ local: bool,
+ cancellation: Cancellation
+ ) { (untyped?) -> Integer } -> Object?
+
+ def execute_activity_once: (
+ local: bool,
+ cancellation: Cancellation,
+ last_local_backoff: untyped?
+ ) { (untyped?) -> Integer } -> Object?
+
+ def _signal_external_workflow: (
+ id: String,
+ run_id: String?,
+ child: bool,
+ signal: Workflow::Definition::Signal | Symbol | String,
+ args: Array[Object?],
+ cancellation: Cancellation,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/replay_safe_logger.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/replay_safe_logger.rbs
new file mode 100644
index 00000000..2c142330
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/replay_safe_logger.rbs
@@ -0,0 +1,16 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class ReplaySafeLogger < ScopedLogger
+ def initialize: (
+ logger: Logger,
+ instance: WorkflowInstance
+ ) -> void
+
+ def replay_safety_disabled: [T] { -> T } -> T
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/replay_safe_metric.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/replay_safe_metric.rbs
new file mode 100644
index 00000000..6f94e2b8
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/replay_safe_metric.rbs
@@ -0,0 +1,15 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class ReplaySafeMetric < Temporalio::Metric
+ def initialize: (Temporalio::Metric) -> void
+
+ class Meter < Temporalio::Metric::Meter
+ def initialize: (Temporalio::Metric::Meter) -> void
+ end
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_instance/scheduler.rbs b/temporalio/sig/temporalio/internal/worker/workflow_instance/scheduler.rbs
new file mode 100644
index 00000000..e7745568
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_instance/scheduler.rbs
@@ -0,0 +1,26 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowInstance
+ class Scheduler
+ def initialize: (WorkflowInstance instance) -> void
+
+ def context: -> Context
+
+ def run_until_all_yielded: -> void
+
+ def wait_condition: [T] (cancellation: Cancellation?) { -> T } -> T
+
+ def stack_trace: -> String
+
+ # Only defined to declare that the block is required
+ def timeout_after: [T] (
+ duration? duration,
+ singleton(Exception) exception_class,
+ *Object? exception_args
+ ) { -> T } -> T
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/internal/worker/workflow_worker.rbs b/temporalio/sig/temporalio/internal/worker/workflow_worker.rbs
new file mode 100644
index 00000000..193d43df
--- /dev/null
+++ b/temporalio/sig/temporalio/internal/worker/workflow_worker.rbs
@@ -0,0 +1,70 @@
+module Temporalio
+ module Internal
+ module Worker
+ class WorkflowWorker
+ def self.workflow_definitions: (
+ Array[singleton(Workflow::Definition) | Workflow::Definition::Info] workflows
+ ) -> Hash[String?, Workflow::Definition::Info]
+
+ def initialize: (
+ worker: Temporalio::Worker,
+ bridge_worker: Bridge::Worker,
+ workflow_definitions: Hash[String?, Workflow::Definition::Info]
+ ) -> void
+
+ def handle_activation: (
+ runner: MultiRunner,
+ activation: untyped,
+ decoded: bool
+ ) -> void
+
+ def handle_activation_complete: (
+ runner: MultiRunner,
+ activation_completion: untyped,
+ encoded: bool,
+ completion_complete_queue: Queue
+ ) -> void
+
+ def on_shutdown_complete: -> void
+
+ def decode_activation: (MultiRunner runner, untyped activation) -> void
+ def encode_activation_completion: (MultiRunner runner, untyped activation_completion) -> void
+ def apply_codec_on_payload_visit: (untyped payload_or_payloads) { (untyped) -> Enumerable[untyped] } -> void
+
+ class State
+ attr_reader workflow_definitions: Hash[String?, Workflow::Definition::Info]
+ attr_reader bridge_worker: Bridge::Worker
+ attr_reader logger: Logger
+ attr_reader metric_meter: Temporalio::Metric::Meter
+ attr_reader data_converter: Converters::DataConverter
+ attr_reader deadlock_timeout: Float?
+ attr_reader illegal_calls: Hash[String, :all | Hash[Symbol, bool]]
+ attr_reader namespace: String
+ attr_reader task_queue: String
+ attr_reader disable_eager_activity_execution: bool
+ attr_reader workflow_interceptors: Array[Temporalio::Worker::Interceptor::Workflow]
+ attr_reader workflow_failure_exception_types: Array[singleton(Exception)]
+
+ def initialize: (
+ workflow_definitions: Hash[String?, Workflow::Definition::Info],
+ bridge_worker: Bridge::Worker,
+ logger: Logger,
+ metric_meter: Temporalio::Metric::Meter,
+ data_converter: Converters::DataConverter,
+ deadlock_timeout: Float?,
+ illegal_calls: Hash[String, :all | Hash[Symbol, bool]],
+ namespace: String,
+ task_queue: String,
+ disable_eager_activity_execution: bool,
+ workflow_interceptors: Array[Temporalio::Worker::Interceptor::Workflow],
+ workflow_failure_exception_types: Array[singleton(Exception)]
+ ) -> void
+
+ def get_or_create_running_workflow: [T] (String run_id) { -> T } -> T
+ def evict_running_workflow: (String run_id) -> void
+ def evict_all: -> void
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/retry_policy.rbs b/temporalio/sig/temporalio/retry_policy.rbs
index 48ec00c4..f357e26a 100644
--- a/temporalio/sig/temporalio/retry_policy.rbs
+++ b/temporalio/sig/temporalio/retry_policy.rbs
@@ -1,17 +1,17 @@
module Temporalio
class RetryPolicy
- attr_accessor initial_interval: Float
- attr_accessor backoff_coefficient: Float
- attr_accessor max_interval: Float?
+ attr_accessor initial_interval: duration
+ attr_accessor backoff_coefficient: Float
+ attr_accessor max_interval: duration?
attr_accessor max_attempts: Integer
attr_accessor non_retryable_error_types: Array[String]?
def self._from_proto: (untyped raw_policy) -> RetryPolicy
def initialize: (
- ?initial_interval: Float,
- ?backoff_coefficient: Float,
- ?max_interval: Float?,
+ ?initial_interval: duration,
+ ?backoff_coefficient: Float,
+ ?max_interval: duration?,
?max_attempts: Integer,
?non_retryable_error_types: Array[String]?
) -> void
diff --git a/temporalio/sig/temporalio/search_attributes.rbs b/temporalio/sig/temporalio/search_attributes.rbs
index d2087600..aeeb15fb 100644
--- a/temporalio/sig/temporalio/search_attributes.rbs
+++ b/temporalio/sig/temporalio/search_attributes.rbs
@@ -17,21 +17,28 @@ module Temporalio
attr_reader value: Object?
def initialize: (Key key, Object? value) -> void
+
+ def _to_proto_pair: -> [String, untyped]
end
- def self._from_proto: (untyped proto) -> SearchAttributes?
+ def self._from_proto: (
+ untyped proto,
+ ?disable_mutations: bool,
+ ?never_nil: bool
+ ) -> SearchAttributes?
+
+ def self._value_from_payload: (untyped payload) -> Object?
- def self.value_from_payload: (untyped payload) -> Object?
+ def self._to_proto_pair: (Key key, Object? value) -> [String, untyped]
def initialize: (SearchAttributes existing) -> void
| (Hash[Key, Object] existing) -> void
| -> void
- def []=: (Key key, Object? value) -> void
+ def []=: (Key | String | Symbol key, Object? value) -> void
def []: (Key key) -> Object?
- def delete: (Key key) -> void
- | (String key) -> void
+ def delete: (Key | String | Symbol key) -> void
def each: { (Key key, Object value) -> void } -> self
@@ -51,6 +58,12 @@ module Temporalio
def _to_proto: -> untyped
+ def _to_proto_hash: -> Hash[String, untyped]
+
+ def _assert_mutations_enabled: -> void
+
+ def _disable_mutations=: (bool value) -> void
+
module IndexedValueType
TEXT: Integer
KEYWORD: Integer
diff --git a/temporalio/sig/temporalio/testing/activity_environment.rbs b/temporalio/sig/temporalio/testing/activity_environment.rbs
index 31410dcc..5480c6df 100644
--- a/temporalio/sig/temporalio/testing/activity_environment.rbs
+++ b/temporalio/sig/temporalio/testing/activity_environment.rbs
@@ -14,7 +14,7 @@ module Temporalio
) -> void
def run: (
- Activity | singleton(Activity) | Activity::Definition activity,
+ Activity::Definition | singleton(Activity::Definition) | Activity::Definition::Info activity,
*Object? args
) -> untyped
end
diff --git a/temporalio/sig/temporalio/testing/workflow_environment.rbs b/temporalio/sig/temporalio/testing/workflow_environment.rbs
index 31627f8b..779bb1de 100644
--- a/temporalio/sig/temporalio/testing/workflow_environment.rbs
+++ b/temporalio/sig/temporalio/testing/workflow_environment.rbs
@@ -8,6 +8,7 @@ module Temporalio
?data_converter: Converters::DataConverter,
?interceptors: Array[Client::Interceptor],
?logger: Logger,
+ ?default_workflow_query_reject_condition: Client::WorkflowQueryRejectCondition::enum?,
?ip: String,
?port: Integer?,
?ui: bool,
@@ -20,10 +21,12 @@ module Temporalio
?dev_server_download_dest_dir: String?,
?dev_server_extra_args: Array[String]
) -> WorkflowEnvironment
- | [T] (
+ | [T] (
?namespace: String,
?data_converter: Converters::DataConverter,
?interceptors: Array[Client::Interceptor],
+ ?logger: Logger,
+ ?default_workflow_query_reject_condition: Client::WorkflowQueryRejectCondition::enum?,
?ip: String,
?port: Integer?,
?ui: bool,
@@ -37,12 +40,89 @@ module Temporalio
?dev_server_extra_args: Array[String]
) { (WorkflowEnvironment) -> T } -> T
+ def self.start_time_skipping: (
+ ?data_converter: Converters::DataConverter,
+ ?interceptors: Array[Client::Interceptor],
+ ?logger: Logger,
+ ?default_workflow_query_reject_condition: Client::WorkflowQueryRejectCondition::enum?,
+ ?port: Integer?,
+ ?runtime: Runtime,
+ ?test_server_existing_path: String?,
+ ?test_server_download_version: String,
+ ?test_server_download_dest_dir: String?,
+ ?test_server_extra_args: Array[String]
+ ) -> WorkflowEnvironment
+ | [T] (
+ ?data_converter: Converters::DataConverter,
+ ?interceptors: Array[Client::Interceptor],
+ ?logger: Logger,
+ ?default_workflow_query_reject_condition: Client::WorkflowQueryRejectCondition::enum?,
+ ?port: Integer?,
+ ?runtime: Runtime,
+ ?test_server_existing_path: String?,
+ ?test_server_download_version: String,
+ ?test_server_download_dest_dir: String?,
+ ?test_server_extra_args: Array[String]
+ ) { (WorkflowEnvironment) -> T } -> T
+
+ def self._with_core_server: (
+ core_server: Internal::Bridge::Testing::EphemeralServer,
+ namespace: String,
+ data_converter: Converters::DataConverter,
+ interceptors: Array[Client::Interceptor],
+ logger: Logger,
+ default_workflow_query_reject_condition: Client::WorkflowQueryRejectCondition::enum?,
+ runtime: Runtime,
+ supports_time_skipping: bool
+ ) -> WorkflowEnvironment
+ | [T] (
+ core_server: Internal::Bridge::Testing::EphemeralServer,
+ namespace: String,
+ data_converter: Converters::DataConverter,
+ interceptors: Array[Client::Interceptor],
+ logger: Logger,
+ default_workflow_query_reject_condition: Client::WorkflowQueryRejectCondition::enum?,
+ runtime: Runtime,
+ supports_time_skipping: bool
+ ) { (WorkflowEnvironment) -> T } -> T
+
def initialize: (Client client) -> void
def shutdown: -> void
+ def supports_time_skipping?: -> bool
+
+ def sleep: (duration duration) -> void
+
+ def current_time: -> Time
+
+ def auto_time_skipping_disabled: [T] { -> T } -> T
+
class Ephemeral < WorkflowEnvironment
- def initialize: (Client client, untyped core_server) -> void
+ def initialize: (
+ Client client,
+ Internal::Bridge::Testing::EphemeralServer core_server,
+ supports_time_skipping: bool
+ ) -> void
+
+ def time_skipping_unlocked: [T] { -> T } -> T
+ end
+
+ class TimeSkippingClientInterceptor
+ include Client::Interceptor
+
+ def initialize: (WorkflowEnvironment env) -> void
+
+ class Outbound < Client::Interceptor::Outbound
+ def initialize: (
+ Client::Interceptor::Outbound next_interceptor,
+ WorkflowEnvironment env
+ ) -> void
+ end
+
+ class TimeSkippingWorkflowHandle < Client::WorkflowHandle
+ def initialize: (Client::WorkflowHandle handle, WorkflowEnvironment env) -> void
+ end
end
end
end
diff --git a/temporalio/sig/temporalio/worker.rbs b/temporalio/sig/temporalio/worker.rbs
index 9756164f..5f8cdbff 100644
--- a/temporalio/sig/temporalio/worker.rbs
+++ b/temporalio/sig/temporalio/worker.rbs
@@ -3,10 +3,12 @@ module Temporalio
class Options
attr_accessor client: Client
attr_accessor task_queue: String
- attr_accessor activities: Array[Activity | singleton(Activity) | Activity::Definition]
- attr_accessor activity_executors: Hash[Symbol, Worker::ActivityExecutor]
+ attr_accessor activities: Array[Activity::Definition | singleton(Activity::Definition) | Activity::Definition::Info]
+ attr_accessor workflows: Array[singleton(Workflow::Definition) | Workflow::Definition::Info]
attr_accessor tuner: Tuner
- attr_accessor interceptors: Array[Interceptor]
+ attr_accessor activity_executors: Hash[Symbol, Worker::ActivityExecutor]
+ attr_accessor workflow_executor: Worker::WorkflowExecutor
+ attr_accessor interceptors: Array[Interceptor::Activity | Interceptor::Workflow]
attr_accessor build_id: String
attr_accessor identity: String
attr_accessor logger: Logger
@@ -22,14 +24,21 @@ module Temporalio
attr_accessor max_task_queue_activities_per_second: Float?
attr_accessor graceful_shutdown_period: Float
attr_accessor use_worker_versioning: bool
+ attr_accessor disable_eager_activity_execution: bool
+ attr_accessor illegal_workflow_calls: Hash[String, :all | Array[Symbol]]
+ attr_accessor workflow_failure_exception_types: Array[singleton(Exception)]
+ attr_accessor workflow_payload_codec_thread_pool: ThreadPool?
+ attr_accessor debug_mode: bool
def initialize: (
client: Client,
task_queue: String,
- activities: Array[Activity | singleton(Activity) | Activity::Definition],
- activity_executors: Hash[Symbol, Worker::ActivityExecutor],
+ activities: Array[Activity::Definition | singleton(Activity::Definition) | Activity::Definition::Info],
+ workflows: Array[singleton(Workflow::Definition) | Workflow::Definition::Info],
tuner: Tuner,
- interceptors: Array[Interceptor],
+ activity_executors: Hash[Symbol, Worker::ActivityExecutor],
+ workflow_executor: Worker::WorkflowExecutor,
+ interceptors: Array[Interceptor::Activity | Interceptor::Workflow],
build_id: String,
identity: String?,
logger: Logger,
@@ -44,7 +53,12 @@ module Temporalio
max_activities_per_second: Float?,
max_task_queue_activities_per_second: Float?,
graceful_shutdown_period: Float,
- use_worker_versioning: bool
+ use_worker_versioning: bool,
+ disable_eager_activity_execution: bool,
+ illegal_workflow_calls: Hash[String, :all | Array[Symbol]],
+ workflow_failure_exception_types: Array[singleton(Exception)],
+ workflow_payload_codec_thread_pool: ThreadPool?,
+ debug_mode: bool
) -> void
end
@@ -59,15 +73,19 @@ module Temporalio
?wait_block_complete: bool
) ?{ -> T } -> T
+ def self.default_illegal_workflow_calls: -> Hash[String, :all | Array[Symbol]]
+
attr_reader options: Options
def initialize: (
client: Client,
task_queue: String,
- ?activities: Array[Activity | singleton(Activity) | Activity::Definition],
- ?activity_executors: Hash[Symbol, Worker::ActivityExecutor],
+ ?activities: Array[Activity::Definition | singleton(Activity::Definition) | Activity::Definition::Info],
+ ?workflows: Array[singleton(Workflow::Definition) | Workflow::Definition::Info],
?tuner: Tuner,
- ?interceptors: Array[Interceptor],
+ ?activity_executors: Hash[Symbol, Worker::ActivityExecutor],
+ ?workflow_executor: Worker::WorkflowExecutor,
+ ?interceptors: Array[Interceptor::Activity | Interceptor::Workflow],
?build_id: String,
?identity: String?,
?logger: Logger,
@@ -82,7 +100,12 @@ module Temporalio
?max_activities_per_second: Float?,
?max_task_queue_activities_per_second: Float?,
?graceful_shutdown_period: Float,
- ?use_worker_versioning: bool
+ ?use_worker_versioning: bool,
+ ?disable_eager_activity_execution: bool,
+ ?illegal_workflow_calls: Hash[String, :all | Array[Symbol]],
+ ?workflow_failure_exception_types: Array[singleton(Exception)],
+ ?workflow_payload_codec_thread_pool: ThreadPool?,
+ ?debug_mode: bool
) -> void
def task_queue: -> String
@@ -98,8 +121,10 @@ module Temporalio
def _initiate_shutdown: -> void
def _wait_all_complete: -> void
def _bridge_worker: -> Internal::Bridge::Worker
- def _all_interceptors: -> Array[Interceptor]
- def _on_poll_bytes: (Symbol worker_type, String bytes) -> void
+ def _activity_interceptors: -> Array[Interceptor::Activity]
+ def _workflow_interceptors: -> Array[Interceptor::Workflow]
+ def _on_poll_bytes: (Internal::Worker::MultiRunner runner, Symbol worker_type, String bytes) -> void
+ def _on_shutdown_complete: -> void
private def to_bridge_slot_supplier_options: (
Tuner::SlotSupplier slot_supplier
diff --git a/temporalio/sig/temporalio/worker/activity_executor.rbs b/temporalio/sig/temporalio/worker/activity_executor.rbs
index 60d182fb..38bfe967 100644
--- a/temporalio/sig/temporalio/worker/activity_executor.rbs
+++ b/temporalio/sig/temporalio/worker/activity_executor.rbs
@@ -3,10 +3,10 @@ module Temporalio
class ActivityExecutor
def self.defaults: -> Hash[Symbol, ActivityExecutor]
- def initialize_activity: (Activity::Definition defn) -> void
- def execute_activity: (Activity::Definition defn) { -> void } -> void
+ def initialize_activity: (Activity::Definition::Info defn) -> void
+ def execute_activity: (Activity::Definition::Info defn) { -> void } -> void
def activity_context: -> Activity::Context?
- def set_activity_context: (Activity::Definition defn, Activity::Context? context) -> void
+ def set_activity_context: (Activity::Definition::Info defn, Activity::Context? context) -> void
end
end
end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/worker/activity_executor/thread_pool.rbs b/temporalio/sig/temporalio/worker/activity_executor/thread_pool.rbs
index ae1cd918..037f93b7 100644
--- a/temporalio/sig/temporalio/worker/activity_executor/thread_pool.rbs
+++ b/temporalio/sig/temporalio/worker/activity_executor/thread_pool.rbs
@@ -4,40 +4,7 @@ module Temporalio
class ThreadPool < ActivityExecutor
def self.default: -> ThreadPool
- def self._monotonic_time: -> Float
-
- def initialize: (
- ?max_threads: Integer?,
- ?idle_timeout: Float
- ) -> void
-
- def largest_length: -> Integer
- def scheduled_task_count: -> Integer
- def completed_task_count: -> Integer
- def active_count: -> Integer
- def length: -> Integer
- def queue_length: -> Integer
- def shutdown: -> void
- def kill: -> void
-
- def _remove_busy_worker: (Worker worker) -> void
- def _ready_worker: (Worker worker, Float last_message) -> void
- def _worker_died: (Worker worker) -> void
- def _worker_task_completed: -> void
- private def locked_assign_worker: { (?) -> untyped } -> void
- private def locked_enqueue: { (?) -> untyped } -> void
- private def locked_add_busy_worker: -> Worker?
- private def locked_prune_pool: -> void
- private def locked_remove_busy_worker: (Worker worker) -> void
- private def locked_ready_worker: (Worker worker, Float last_message) -> void
- private def locked_worker_died: (Worker worker) -> void
-
- class Worker
- def initialize: (ThreadPool pool, Integer id) -> void
- def <<: (Proc block) -> void
- def stop: -> void
- def kill: -> void
- end
+ def initialize: (?Temporalio::Worker::ThreadPool thread_pool) -> void
end
end
end
diff --git a/temporalio/sig/temporalio/worker/interceptor.rbs b/temporalio/sig/temporalio/worker/interceptor.rbs
index 9d8462b6..5c929e47 100644
--- a/temporalio/sig/temporalio/worker/interceptor.rbs
+++ b/temporalio/sig/temporalio/worker/interceptor.rbs
@@ -1,42 +1,306 @@
module Temporalio
class Worker
module Interceptor
- def intercept_activity: (ActivityInbound next_interceptor) -> ActivityInbound
-
- class ExecuteActivityInput
- attr_accessor proc: Proc
- attr_accessor args: Array[Object?]
- attr_accessor headers: Hash[String, Object?]
-
- def initialize: (
- proc: Proc,
- args: Array[Object?],
- headers: Hash[String, Object?]
- ) -> void
- end
+ module Activity
+ def intercept_activity: (Inbound next_interceptor) -> Inbound
- class HeartbeatActivityInput
- attr_accessor details: Array[Object?]
+ class ExecuteInput
+ attr_accessor proc: Proc
+ attr_accessor args: Array[Object?]
+ attr_accessor headers: Hash[String, Object?]
- def initialize: (details: Array[Object?]) -> void
- end
+ def initialize: (
+ proc: Proc,
+ args: Array[Object?],
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class Inbound
+ attr_reader next_interceptor: Inbound
+
+ def initialize: (Inbound next_interceptor) -> void
+
+ def init: (Outbound outbound) -> Outbound
+
+ def execute: (ExecuteInput input) -> Object?
+ end
- class ActivityInbound
- attr_reader next_interceptor: ActivityInbound
+ class HeartbeatInput
+ attr_accessor details: Array[Object?]
- def initialize: (ActivityInbound next_interceptor) -> void
+ def initialize: (details: Array[Object?]) -> void
+ end
- def init: (ActivityOutbound outbound) -> ActivityOutbound
+ class Outbound
+ attr_reader next_interceptor: Outbound
- def execute: (ExecuteActivityInput input) -> Object?
+ def initialize: (Outbound next_interceptor) -> void
+
+ def heartbeat: (HeartbeatInput input) -> void
+ end
end
- class ActivityOutbound
- attr_reader next_interceptor: ActivityOutbound
+ module Workflow
+ def intercept_workflow: (Inbound next_interceptor) -> Inbound
+
+ class ExecuteInput
+ attr_accessor args: Array[Object?]
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ args: Array[Object?],
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class HandleSignalInput
+ attr_accessor signal: String
+ attr_accessor args: Array[Object?]
+ attr_accessor definition: Temporalio::Workflow::Definition::Signal
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ signal: String,
+ args: Array[Object?],
+ definition: Temporalio::Workflow::Definition::Signal,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class HandleQueryInput
+ attr_accessor id: String
+ attr_accessor query: String
+ attr_accessor args: Array[Object?]
+ attr_accessor definition: Temporalio::Workflow::Definition::Query
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ id: String,
+ query: String,
+ args: Array[Object?],
+ definition: Temporalio::Workflow::Definition::Query,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class HandleUpdateInput
+ attr_accessor id: String
+ attr_accessor update: String
+ attr_accessor args: Array[Object?]
+ attr_accessor definition: Temporalio::Workflow::Definition::Update
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ id: String,
+ update: String,
+ args: Array[Object?],
+ definition: Temporalio::Workflow::Definition::Update,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class Inbound
+ attr_reader next_interceptor: Inbound
+
+ def initialize: (Inbound next_interceptor) -> void
+
+ def init: (Outbound outbound) -> Outbound
+
+ def execute: (ExecuteInput input) -> Object?
+
+ def handle_signal: (HandleSignalInput input) -> void
+
+ def handle_query: (HandleQueryInput input) -> Object?
+
+ def validate_update: (HandleUpdateInput input) -> void
+
+ def handle_update: (HandleUpdateInput input) -> Object?
+ end
+
+ class CancelExternalWorkflowInput
+ attr_accessor id: String
+ attr_accessor run_id: String?
+
+ def initialize: (
+ id: String,
+ run_id: String?
+ ) -> void
+ end
+
+ class ExecuteActivityInput
+ attr_accessor activity: singleton(Temporalio::Activity::Definition) | Symbol | String
+ attr_accessor args: Array[Object?]
+ attr_accessor task_queue: String
+ attr_accessor schedule_to_close_timeout: duration?
+ attr_accessor schedule_to_start_timeout: duration?
+ attr_accessor start_to_close_timeout: duration?
+ attr_accessor heartbeat_timeout: duration?
+ attr_accessor retry_policy: RetryPolicy?
+ attr_accessor cancellation: Cancellation
+ attr_accessor cancellation_type: Temporalio::Workflow::ActivityCancellationType::enum
+ attr_accessor activity_id: String?
+ attr_accessor disable_eager_execution: bool
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ activity: singleton(Temporalio::Activity::Definition) | Symbol | String,
+ args: Array[Object?],
+ task_queue: String,
+ schedule_to_close_timeout: duration?,
+ schedule_to_start_timeout: duration?,
+ start_to_close_timeout: duration?,
+ heartbeat_timeout: duration?,
+ retry_policy: RetryPolicy?,
+ cancellation: Cancellation,
+ cancellation_type: Temporalio::Workflow::ActivityCancellationType::enum,
+ activity_id: String?,
+ disable_eager_execution: bool,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class ExecuteLocalActivityInput
+ attr_accessor activity: singleton(Temporalio::Activity::Definition) | Symbol | String
+ attr_accessor args: Array[Object?]
+ attr_accessor schedule_to_close_timeout: duration?
+ attr_accessor schedule_to_start_timeout: duration?
+ attr_accessor start_to_close_timeout: duration?
+ attr_accessor retry_policy: RetryPolicy?
+ attr_accessor local_retry_threshold: duration?
+ attr_accessor cancellation: Cancellation
+ attr_accessor cancellation_type: Temporalio::Workflow::ActivityCancellationType::enum
+ attr_accessor activity_id: String?
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ activity: singleton(Temporalio::Activity::Definition) | Symbol | String,
+ args: Array[Object?],
+ schedule_to_close_timeout: duration?,
+ schedule_to_start_timeout: duration?,
+ start_to_close_timeout: duration?,
+ retry_policy: RetryPolicy?,
+ local_retry_threshold: duration?,
+ cancellation: Cancellation,
+ cancellation_type: Temporalio::Workflow::ActivityCancellationType::enum,
+ activity_id: String?,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class InitializeContinueAsNewErrorInput
+ attr_accessor error: Temporalio::Workflow::ContinueAsNewError
+
+ def initialize: (
+ error: Temporalio::Workflow::ContinueAsNewError
+ ) -> void
+ end
+
+ class SignalChildWorkflowInput
+ attr_accessor id: String
+ attr_accessor signal: Temporalio::Workflow::Definition::Signal | Symbol | String
+ attr_accessor args: Array[Object?]
+ attr_accessor cancellation: Cancellation
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ id: String,
+ signal: Temporalio::Workflow::Definition::Signal | Symbol | String,
+ args: Array[Object?],
+ cancellation: Cancellation,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class SignalExternalWorkflowInput
+ attr_accessor id: String
+ attr_accessor run_id: String?
+ attr_accessor signal: Temporalio::Workflow::Definition::Signal | Symbol | String
+ attr_accessor args: Array[Object?]
+ attr_accessor cancellation: Cancellation
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ id: String,
+ run_id: String?,
+ signal: Temporalio::Workflow::Definition::Signal | Symbol | String,
+ args: Array[Object?],
+ cancellation: Cancellation,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class SleepInput
+ attr_accessor duration: duration?
+ attr_accessor summary: String?
+ attr_accessor cancellation: Cancellation
+
+ def initialize: (
+ duration: duration?,
+ summary: String?,
+ cancellation: Cancellation
+ ) -> void
+ end
+
+ class StartChildWorkflowInput
+ attr_accessor workflow: singleton(Temporalio::Workflow::Definition) | Temporalio::Workflow::Definition::Info | Symbol | String
+ attr_accessor args: Array[Object?]
+ attr_accessor id: String
+ attr_accessor task_queue: String
+ attr_accessor cancellation: Cancellation
+ attr_accessor cancellation_type: Temporalio::Workflow::ChildWorkflowCancellationType::enum
+ attr_accessor parent_close_policy: Temporalio::Workflow::ParentClosePolicy::enum
+ attr_accessor execution_timeout: duration?
+ attr_accessor run_timeout: duration?
+ attr_accessor task_timeout: duration?
+ attr_accessor id_reuse_policy: WorkflowIDReusePolicy::enum
+ attr_accessor retry_policy: RetryPolicy?
+ attr_accessor cron_schedule: String?
+ attr_accessor memo: Hash[String | Symbol, Object?]?
+ attr_accessor search_attributes: SearchAttributes?
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ workflow: singleton(Temporalio::Workflow::Definition) | Temporalio::Workflow::Definition::Info | Symbol | String,
+ args: Array[Object?],
+ id: String,
+ task_queue: String,
+ cancellation: Cancellation,
+ cancellation_type: Temporalio::Workflow::ChildWorkflowCancellationType::enum,
+ parent_close_policy: Temporalio::Workflow::ParentClosePolicy::enum,
+ execution_timeout: duration?,
+ run_timeout: duration?,
+ task_timeout: duration?,
+ id_reuse_policy: WorkflowIDReusePolicy::enum,
+ retry_policy: RetryPolicy?,
+ cron_schedule: String?,
+ memo: Hash[String | Symbol, Object?]?,
+ search_attributes: SearchAttributes?,
+ headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class Outbound
+ attr_reader next_interceptor: Outbound
+
+ def initialize: (Outbound next_interceptor) -> void
+
+ def cancel_external_workflow: (CancelExternalWorkflowInput input) -> void
+
+ def execute_activity: (ExecuteActivityInput input) -> Object?
+
+ def execute_local_activity: (ExecuteLocalActivityInput input) -> Object?
+
+ def initialize_continue_as_new_error: (InitializeContinueAsNewErrorInput input) -> void
+
+ def signal_child_workflow: (SignalChildWorkflowInput input) -> void
+
+ def signal_external_workflow: (SignalExternalWorkflowInput input) -> void
- def initialize: (ActivityOutbound next_interceptor) -> void
+ def sleep: (SleepInput input) -> void
- def heartbeat: (HeartbeatActivityInput input) -> void
+ def start_child_workflow: (StartChildWorkflowInput input) -> Temporalio::Workflow::ChildWorkflowHandle
+ end
end
end
end
diff --git a/temporalio/sig/temporalio/worker/thread_pool.rbs b/temporalio/sig/temporalio/worker/thread_pool.rbs
new file mode 100644
index 00000000..66986c9a
--- /dev/null
+++ b/temporalio/sig/temporalio/worker/thread_pool.rbs
@@ -0,0 +1,44 @@
+module Temporalio
+ class Worker
+ class ThreadPool
+ def self.default: -> ThreadPool
+
+ def self._monotonic_time: -> Float
+
+ def initialize: (
+ ?max_threads: Integer?,
+ ?idle_timeout: Float
+ ) -> void
+
+ def execute: { -> void } -> void
+
+ def largest_length: -> Integer
+ def scheduled_task_count: -> Integer
+ def completed_task_count: -> Integer
+ def active_count: -> Integer
+ def length: -> Integer
+ def queue_length: -> Integer
+ def shutdown: -> void
+ def kill: -> void
+
+ def _remove_busy_worker: (Worker worker) -> void
+ def _ready_worker: (Worker worker, Float last_message) -> void
+ def _worker_died: (Worker worker) -> void
+ def _worker_task_completed: -> void
+ private def locked_assign_worker: { (?) -> untyped } -> void
+ private def locked_enqueue: { (?) -> untyped } -> void
+ private def locked_add_busy_worker: -> Worker?
+ private def locked_prune_pool: -> void
+ private def locked_remove_busy_worker: (Worker worker) -> void
+ private def locked_ready_worker: (Worker worker, Float last_message) -> void
+ private def locked_worker_died: (Worker worker) -> void
+
+ class Worker
+ def initialize: (ThreadPool pool, Integer id) -> void
+ def <<: (Proc block) -> void
+ def stop: -> void
+ def kill: -> void
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/worker/workflow_executor.rbs b/temporalio/sig/temporalio/worker/workflow_executor.rbs
new file mode 100644
index 00000000..e8aada6c
--- /dev/null
+++ b/temporalio/sig/temporalio/worker/workflow_executor.rbs
@@ -0,0 +1,15 @@
+module Temporalio
+ class Worker
+ class WorkflowExecutor
+ def _validate_worker: (
+ Internal::Worker::WorkflowWorker worker,
+ Internal::Worker::WorkflowWorker::State worker_state
+ ) -> void
+
+ def _activate: (
+ untyped activation,
+ Internal::Worker::WorkflowWorker::State worker_state
+ ) { (untyped completion) -> void } -> void
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/worker/workflow_executor/ractor.rbs b/temporalio/sig/temporalio/worker/workflow_executor/ractor.rbs
new file mode 100644
index 00000000..4f3c5dd7
--- /dev/null
+++ b/temporalio/sig/temporalio/worker/workflow_executor/ractor.rbs
@@ -0,0 +1,9 @@
+module Temporalio
+ class Worker
+ class WorkflowExecutor
+ class Ractor < WorkflowExecutor
+ def self.instance: -> Ractor
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/worker/workflow_executor/thread_pool.rbs b/temporalio/sig/temporalio/worker/workflow_executor/thread_pool.rbs
new file mode 100644
index 00000000..409ca50b
--- /dev/null
+++ b/temporalio/sig/temporalio/worker/workflow_executor/thread_pool.rbs
@@ -0,0 +1,56 @@
+module Temporalio
+ class Worker
+ class WorkflowExecutor
+ class ThreadPool < WorkflowExecutor
+ def self.default: -> ThreadPool
+
+ def initialize: (
+ ?max_threads: Integer,
+ ?thread_pool: Temporalio::Worker::ThreadPool
+ ) -> void
+
+ def _thread_pool: -> Temporalio::Worker::ThreadPool
+
+ def _remove_workflow: (
+ Internal::Worker::WorkflowWorker::State worker_state,
+ String run_id
+ ) -> void
+
+ class Worker
+ LOG_ACTIVATIONS: bool
+
+ attr_accessor workflow_count: Integer
+
+ def initialize: (ThreadPool executor) -> void
+
+ def enqueue_activation: (
+ untyped activation,
+ Internal::Worker::WorkflowWorker::State worker_state
+ ) { (untyped completion) -> void } -> void
+
+ def shutdown: -> void
+
+ def run: -> void
+
+ def activate: (
+ untyped activation,
+ Internal::Worker::WorkflowWorker::State worker_state
+ ) { (untyped completion) -> void } -> void
+
+ def create_instance: (
+ untyped initial_activation,
+ Internal::Worker::WorkflowWorker::State worker_state
+ ) -> Internal::Worker::WorkflowInstance
+
+ def evict: (
+ Internal::Worker::WorkflowWorker::State worker_state,
+ String run_id
+ ) -> void
+ end
+
+ class DeadlockError < Exception
+ end
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow.rbs b/temporalio/sig/temporalio/workflow.rbs
new file mode 100644
index 00000000..9a55ff72
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow.rbs
@@ -0,0 +1,160 @@
+module Temporalio
+ module Workflow
+ def self.all_handlers_finished?: -> bool
+
+ def self.cancellation: -> Cancellation
+
+ def self.continue_as_new_suggested: -> bool
+
+ def self.current_history_length: -> Integer
+
+ def self.current_history_size: -> Integer
+
+ def self.current_update_info: -> UpdateInfo?
+
+ def self.execute_activity: (
+ singleton(Activity::Definition) | Symbol | String activity,
+ *Object? args,
+ ?task_queue: String,
+ ?schedule_to_close_timeout: duration?,
+ ?schedule_to_start_timeout: duration?,
+ ?start_to_close_timeout: duration?,
+ ?heartbeat_timeout: duration?,
+ ?retry_policy: RetryPolicy?,
+ ?cancellation: Cancellation,
+ ?cancellation_type: ActivityCancellationType::enum,
+ ?activity_id: String?,
+ ?disable_eager_execution: bool
+ ) -> Object?
+
+ def self.execute_child_workflow: (
+ singleton(Definition) | Definition::Info | Symbol | String workflow,
+ *Object? args,
+ ?id: String,
+ ?task_queue: String,
+ ?cancellation: Cancellation,
+ ?cancellation_type: ChildWorkflowCancellationType::enum,
+ ?parent_close_policy: ParentClosePolicy::enum,
+ ?execution_timeout: duration?,
+ ?run_timeout: duration?,
+ ?task_timeout: duration?,
+ ?id_reuse_policy: WorkflowIDReusePolicy::enum,
+ ?retry_policy: RetryPolicy?,
+ ?cron_schedule: String?,
+ ?memo: Hash[String | Symbol, Object?]?,
+ ?search_attributes: SearchAttributes?
+ ) -> Object?
+
+ def self.execute_local_activity: (
+ singleton(Activity::Definition) | Symbol | String activity,
+ *Object? args,
+ ?schedule_to_close_timeout: duration?,
+ ?schedule_to_start_timeout: duration?,
+ ?start_to_close_timeout: duration?,
+ ?retry_policy: RetryPolicy?,
+ ?local_retry_threshold: duration?,
+ ?cancellation: Cancellation,
+ ?cancellation_type: ActivityCancellationType::enum,
+ ?activity_id: String?
+ ) -> Object?
+
+ def self.external_workflow_handle: (String workflow_id, ?run_id: String?) -> ExternalWorkflowHandle
+
+ def self.in_workflow?: -> bool
+
+ def self.info: -> Info
+
+ def self.logger: -> ScopedLogger
+
+ def self.memo: -> Hash[String, Object?]
+
+ def self.metric_meter: -> Metric::Meter
+
+ def self.now: -> Time
+
+ def self.patched: (String patch_id) -> bool
+
+ def self.payload_converter: -> Converters::PayloadConverter
+
+ def self.query_handlers: -> Hash[String?, Workflow::Definition::Query]
+
+ def self.random: -> Random
+
+ def self.search_attributes: -> SearchAttributes
+
+ def self.signal_handlers: -> Hash[String?, Workflow::Definition::Signal]
+
+ def self.sleep: (duration? duration, ?summary: String?, ?cancellation: Cancellation) -> void
+
+ def self.start_child_workflow: (
+ singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String workflow,
+ *Object? args,
+ ?id: String,
+ ?task_queue: String,
+ ?cancellation: Cancellation,
+ ?cancellation_type: Workflow::ChildWorkflowCancellationType::enum,
+ ?parent_close_policy: Workflow::ParentClosePolicy::enum,
+ ?execution_timeout: duration?,
+ ?run_timeout: duration?,
+ ?task_timeout: duration?,
+ ?id_reuse_policy: WorkflowIDReusePolicy::enum,
+ ?retry_policy: RetryPolicy?,
+ ?cron_schedule: String?,
+ ?memo: Hash[String | Symbol, Object?]?,
+ ?search_attributes: SearchAttributes?
+ ) -> ChildWorkflowHandle
+
+ def self.timeout: [T] (
+ duration? duration,
+ ?singleton(Exception) exception_class,
+ ?String message,
+ ?summary: String?
+ ) { -> T } -> T
+
+ def self.update_handlers: -> Hash[String?, Workflow::Definition::Update]
+
+ def self.upsert_memo: (Hash[Symbol | String, Object?] hash) -> void
+
+ def self.upsert_search_attributes: (*SearchAttributes::Update updates) -> void
+
+ def self.wait_condition: [T] (?cancellation: Cancellation?) { -> T } -> T
+
+ def self._current: -> Internal::Worker::WorkflowInstance::Context
+ def self._current_or_nil: -> Internal::Worker::WorkflowInstance::Context?
+
+ module Unsafe
+ def self.replaying?: -> bool
+
+ def self.illegal_call_tracing_disabled: [T] { -> T } -> T
+ end
+
+ class ContinueAsNewError < Error
+ attr_accessor args: Array[Object?]
+ attr_accessor workflow: singleton(Workflow::Definition) | String | Symbol | nil
+ attr_accessor task_queue: String?
+ attr_accessor run_timeout: duration?
+ attr_accessor task_timeout: duration?
+ attr_accessor retry_policy: RetryPolicy?
+ attr_accessor memo: Hash[String | Symbol, Object?]?
+ attr_accessor search_attributes: SearchAttributes?
+ attr_accessor headers: Hash[String, Object?]
+
+ def initialize: (
+ *Object? args,
+ ?workflow: singleton(Workflow::Definition) | String | Symbol | nil,
+ ?task_queue: String?,
+ ?run_timeout: duration?,
+ ?task_timeout: duration?,
+ ?retry_policy: RetryPolicy?,
+ ?memo: Hash[String | Symbol, Object?]?,
+ ?search_attributes: SearchAttributes?,
+ ?headers: Hash[String, Object?]
+ ) -> void
+ end
+
+ class InvalidWorkflowStateError < Error
+ end
+ class NondeterminismError < Error
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/activity_cancellation_type.rbs b/temporalio/sig/temporalio/workflow/activity_cancellation_type.rbs
new file mode 100644
index 00000000..7d817337
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/activity_cancellation_type.rbs
@@ -0,0 +1,11 @@
+module Temporalio
+ module Workflow
+ module ActivityCancellationType
+ type enum = Integer
+
+ TRY_CANCEL: enum
+ WAIT_CANCELLATION_COMPLETED: enum
+ ABANDON: enum
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/child_workflow_cancellation_type.rbs b/temporalio/sig/temporalio/workflow/child_workflow_cancellation_type.rbs
new file mode 100644
index 00000000..7d00764b
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/child_workflow_cancellation_type.rbs
@@ -0,0 +1,12 @@
+module Temporalio
+ module Workflow
+ module ChildWorkflowCancellationType
+ type enum = Integer
+
+ ABANDON: enum
+ TRY_CANCEL: enum
+ WAIT_CANCELLATION_COMPLETED: enum
+ WAIT_CANCELLATION_REQUESTED: enum
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/child_workflow_handle.rbs b/temporalio/sig/temporalio/workflow/child_workflow_handle.rbs
new file mode 100644
index 00000000..b818abf7
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/child_workflow_handle.rbs
@@ -0,0 +1,16 @@
+module Temporalio
+ module Workflow
+ class ChildWorkflowHandle
+ def id: -> String
+ def first_execution_run_id: -> String
+
+ def result: -> Object?
+
+ def signal: (
+ Workflow::Definition::Signal | Symbol | String signal,
+ *Object? args,
+ ?cancellation: Cancellation
+ ) -> void
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/definition.rbs b/temporalio/sig/temporalio/workflow/definition.rbs
new file mode 100644
index 00000000..5fd87d15
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/definition.rbs
@@ -0,0 +1,126 @@
+module Temporalio
+ module Workflow
+ class Definition
+ def self.workflow_name: (String | Symbol workflow_name) -> void
+ def self.workflow_dynamic: (?bool value) -> void
+ def self.workflow_raw_args: (?bool value) -> void
+ def self.workflow_failure_exception_type: (*singleton(Exception) types) -> void
+ def self.workflow_query_attr_reader: (*Symbol attr_names) -> void
+
+ def self.workflow_init: (?bool value) -> void
+
+ def self.workflow_signal: (
+ ?name: String | Symbol | nil,
+ ?dynamic: bool,
+ ?raw_args: bool,
+ ?unfinished_policy: HandlerUnfinishedPolicy::enum
+ ) -> void
+
+ def self.workflow_query: (
+ ?name: String | Symbol | nil,
+ ?dynamic: bool,
+ ?raw_args: bool
+ ) -> void
+
+ def self.workflow_update: (
+ ?name: String | Symbol | nil,
+ ?dynamic: bool,
+ ?raw_args: bool,
+ ?unfinished_policy: HandlerUnfinishedPolicy::enum
+ ) -> void
+
+ def self.workflow_update_validator: (Symbol update_method) -> void
+
+ def self.pending_handler_details: -> Hash[Symbol, untyped]?
+ def self.pending_handler_details=: (Hash[Symbol, untyped]? value) -> void
+
+ def self._workflow_definition: -> Info
+
+ def self._workflow_type_from_workflow_parameter: (
+ singleton(Workflow::Definition) | Workflow::Definition::Info | Symbol | String workflow
+ ) -> String
+
+ def self._build_workflow_definition: -> Info
+
+ def execute: (*Object? args) -> Object?
+
+ class Info
+ attr_reader workflow_class: singleton(Workflow::Definition)
+ attr_reader override_name: String?
+ attr_reader dynamic: bool
+ attr_reader init: bool
+ attr_reader raw_args: bool
+ attr_reader failure_exception_types: Array[singleton(Exception)]
+ attr_reader signals: Hash[String?, Signal]
+ attr_reader queries: Hash[String?, Query]
+ attr_reader updates: Hash[String?, Update]
+
+ def self.from_class: (singleton(Definition) workflow_class) -> Info
+
+ def initialize: (
+ workflow_class: singleton(Workflow::Definition),
+ ?override_name: String?,
+ ?dynamic: bool,
+ ?init: bool,
+ ?raw_args: bool,
+ ?failure_exception_types: Array[singleton(Exception)],
+ ?signals: Hash[String, Signal],
+ ?queries: Hash[String, Query],
+ ?updates: Hash[String, Update]
+ ) -> void
+
+ def name: -> String?
+ end
+
+ class Signal
+ attr_reader name: String?
+ attr_reader to_invoke: Symbol | Proc
+ attr_reader raw_args: bool
+ attr_reader unfinished_policy: HandlerUnfinishedPolicy::enum
+
+ def self._name_from_parameter: (Workflow::Definition::Signal | String | Symbol) -> String
+
+ def initialize: (
+ name: String?,
+ to_invoke: Symbol | Proc,
+ ?raw_args: bool,
+ ?unfinished_policy: HandlerUnfinishedPolicy::enum
+ ) -> void
+ end
+
+ class Query
+ attr_reader name: String?
+ attr_reader to_invoke: Symbol | Proc
+ attr_reader raw_args: bool
+
+ def self._name_from_parameter: (Workflow::Definition::Query | String | Symbol) -> String
+
+ def initialize: (
+ name: String?,
+ to_invoke: Symbol | Proc,
+ ?raw_args: bool
+ ) -> void
+ end
+
+ class Update
+ attr_reader name: String?
+ attr_reader to_invoke: Symbol | Proc
+ attr_reader raw_args: bool
+ attr_reader unfinished_policy: HandlerUnfinishedPolicy::enum
+ attr_reader validator_to_invoke: Symbol | Proc | nil
+
+ def self._name_from_parameter: (Workflow::Definition::Update | String | Symbol) -> String
+
+ def initialize: (
+ name: String?,
+ to_invoke: Symbol | Proc,
+ ?raw_args: bool,
+ ?unfinished_policy: HandlerUnfinishedPolicy::enum,
+ ?validator_to_invoke: Symbol | Proc | nil
+ ) -> void
+
+ def _with_validator_to_invoke: (Symbol | Proc | nil validator_to_invoke) -> Update
+ end
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/external_workflow_handle.rbs b/temporalio/sig/temporalio/workflow/external_workflow_handle.rbs
new file mode 100644
index 00000000..4142fefb
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/external_workflow_handle.rbs
@@ -0,0 +1,16 @@
+module Temporalio
+ module Workflow
+ class ExternalWorkflowHandle
+ def id: -> String
+ def run_id: -> String?
+
+ def signal: (
+ Workflow::Definition::Signal | Symbol | String signal,
+ *Object? args,
+ ?cancellation: Cancellation
+ ) -> void
+
+ def cancel: -> void
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/future.rbs b/temporalio/sig/temporalio/workflow/future.rbs
new file mode 100644
index 00000000..59b731af
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/future.rbs
@@ -0,0 +1,24 @@
+module Temporalio
+ module Workflow
+ class Future[unchecked out T]
+ def self.any_of: [T] (*Future[T] futures) -> Future[T]
+ def self.all_of: (*Future[untyped] futures) -> Future[nil]
+ def self.try_any_of: [T] (*Future[T] futures) -> Future[Future[T]]
+ def self.try_all_of: (*Future[untyped] futures) -> Future[nil]
+
+ attr_reader result: T?
+ attr_reader failure: Exception?
+
+ def initialize: ?{ -> T } -> void
+
+ def done?: -> bool
+ def result?: -> bool
+ def result=: (T result) -> void
+ def failure?: -> bool
+ def failure=: (Exception failure) -> void
+
+ def wait: -> T
+ def wait_no_raise: -> T?
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/handler_unfinished_policy.rbs b/temporalio/sig/temporalio/workflow/handler_unfinished_policy.rbs
new file mode 100644
index 00000000..d6880f3f
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/handler_unfinished_policy.rbs
@@ -0,0 +1,10 @@
+module Temporalio
+ module Workflow
+ module HandlerUnfinishedPolicy
+ type enum = Integer
+
+ WARN_AND_ABANDON: enum
+ ABANDON: enum
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/info.rbs b/temporalio/sig/temporalio/workflow/info.rbs
new file mode 100644
index 00000000..0bd8af92
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/info.rbs
@@ -0,0 +1,57 @@
+module Temporalio
+ module Workflow
+ class Info
+ attr_reader attempt: Integer
+ attr_reader continued_run_id: String?
+ attr_reader cron_schedule: String?
+ attr_reader execution_timeout: Float?
+ attr_reader last_failure: Exception?
+ attr_reader last_result: Object?
+ attr_reader namespace: String
+ attr_reader parent: ParentInfo?
+ attr_reader retry_policy: RetryPolicy?
+ attr_reader run_id: String
+ attr_reader run_timeout: Float?
+ attr_reader start_time: Time
+ attr_reader task_queue: String
+ attr_reader task_timeout: Float
+ attr_reader workflow_id: String
+ attr_reader workflow_type: String
+
+ def initialize: (
+ attempt: Integer,
+ continued_run_id: String?,
+ cron_schedule: String?,
+ execution_timeout: Float?,
+ last_failure: Exception?,
+ last_result: Object?,
+ namespace: String,
+ parent: ParentInfo?,
+ retry_policy: RetryPolicy?,
+ run_id: String,
+ run_timeout: Float?,
+ start_time: Time,
+ task_queue: String,
+ task_timeout: Float,
+ workflow_id: String,
+ workflow_type: String
+ ) -> void
+
+ def to_h: -> Hash[Symbol, untyped]
+
+ class ParentInfo
+ attr_reader namespace: String
+ attr_reader run_id: String
+ attr_reader workflow_id: String
+
+ def initialize: (
+ namespace: String,
+ run_id: String,
+ workflow_id: String
+ ) -> void
+
+ def to_h: -> Hash[Symbol, untyped]
+ end
+ end
+ end
+end
diff --git a/temporalio/sig/temporalio/workflow/parent_close_policy.rbs b/temporalio/sig/temporalio/workflow/parent_close_policy.rbs
new file mode 100644
index 00000000..8fd7e753
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/parent_close_policy.rbs
@@ -0,0 +1,12 @@
+module Temporalio
+ module Workflow
+ module ParentClosePolicy
+ type enum = Integer
+
+ UNSPECIFIED: enum
+ TERMINATE: enum
+ ABANDON: enum
+ REQUEST_CANCEL: enum
+ end
+ end
+end
\ No newline at end of file
diff --git a/temporalio/sig/temporalio/workflow/update_info.rbs b/temporalio/sig/temporalio/workflow/update_info.rbs
new file mode 100644
index 00000000..7d98f83e
--- /dev/null
+++ b/temporalio/sig/temporalio/workflow/update_info.rbs
@@ -0,0 +1,15 @@
+module Temporalio
+ module Workflow
+ class UpdateInfo
+ attr_reader id: String
+ attr_reader name: String
+
+ def initialize: (
+ id: String,
+ name: String
+ ) -> void
+
+ def to_h: -> Hash[Symbol, untyped]
+ end
+ end
+end
diff --git a/temporalio/temporalio.gemspec b/temporalio/temporalio.gemspec
index 9f0b96a1..fa6fe7ee 100644
--- a/temporalio/temporalio.gemspec
+++ b/temporalio/temporalio.gemspec
@@ -16,7 +16,8 @@ Gem::Specification.new do |spec|
spec.metadata['homepage_uri'] = spec.homepage
spec.metadata['source_code_uri'] = 'https://github.com/temporalio/sdk-ruby'
- spec.files = Dir['lib/**/*.rb', 'LICENSE', 'README.md', 'Cargo.*', 'temporalio.gemspec', 'Gemfile', 'Rakefile']
+ spec.files = Dir['lib/**/*.rb', 'LICENSE', 'README.md', 'Cargo.*',
+ 'temporalio.gemspec', 'Gemfile', 'Rakefile', '.yardopts']
spec.bindir = 'exe'
spec.executables = spec.files.grep(%r{\Aexe/}) { |f| File.basename(f) }
diff --git a/temporalio/test/api/payload_visitor_test.rb b/temporalio/test/api/payload_visitor_test.rb
new file mode 100644
index 00000000..40ebfdc8
--- /dev/null
+++ b/temporalio/test/api/payload_visitor_test.rb
@@ -0,0 +1,136 @@
+# frozen_string_literal: true
+
+require 'temporalio/api/payload_visitor'
+require 'test'
+
+module Api
+ class PayloadVisitorTest < Test
+ def test_basics
+ # Make protos that have:
+ # * single payload
+ # * obj payloads
+ # * repeated payloads
+ # * map value
+ # * search attributes obj
+ # * search attributes map
+ act = Temporalio::Internal::Bridge::Api::WorkflowActivation::WorkflowActivation.new(
+ jobs: [
+ Temporalio::Internal::Bridge::Api::WorkflowActivation::WorkflowActivationJob.new(
+ initialize_workflow: Temporalio::Internal::Bridge::Api::WorkflowActivation::InitializeWorkflow.new(
+ arguments: [
+ Temporalio::Api::Common::V1::Payload.new(data: 'repeated1'),
+ Temporalio::Api::Common::V1::Payload.new(data: 'repeated2')
+ ],
+ headers: {
+ 'header' => Temporalio::Api::Common::V1::Payload.new(data: 'map')
+ },
+ last_completion_result: Temporalio::Api::Common::V1::Payloads.new(
+ payloads: [
+ Temporalio::Api::Common::V1::Payload.new(data: 'obj1'),
+ Temporalio::Api::Common::V1::Payload.new(data: 'obj2')
+ ]
+ ),
+ search_attributes: Temporalio::Api::Common::V1::SearchAttributes.new(
+ indexed_fields: { 'sakey' => Temporalio::Api::Common::V1::Payload.new(data: 'saobj') }
+ )
+ )
+ )
+ ]
+ )
+ succ = Temporalio::Internal::Bridge::Api::WorkflowCompletion::Success.new(
+ commands: [
+ Temporalio::Internal::Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ complete_workflow_execution:
+ Temporalio::Internal::Bridge::Api::WorkflowCommands::CompleteWorkflowExecution.new(
+ result: Temporalio::Api::Common::V1::Payload.new(data: 'single')
+ )
+ ),
+ Temporalio::Internal::Bridge::Api::WorkflowCommands::WorkflowCommand.new(
+ upsert_workflow_search_attributes:
+ Temporalio::Internal::Bridge::Api::WorkflowCommands::UpsertWorkflowSearchAttributes.new(
+ search_attributes: { 'sakey' => Temporalio::Api::Common::V1::Payload.new(data: 'samap') }
+ )
+ )
+ ]
+ )
+ mutator = proc do |value|
+ case value
+ when Temporalio::Api::Common::V1::Payload
+ value.data += '-single' # steep:ignore
+ when Enumerable
+ value.replace( # steep:ignore
+ [Temporalio::Api::Common::V1::Payload.new(data: "#{value.map(&:data).join('-')}-repeated")]
+ )
+ else
+ raise 'Unrecognized type'
+ end
+ end
+
+ # Basic check including search attributes
+ visitor = Temporalio::Api::PayloadVisitor.new(&mutator)
+ mutated_act = act.class.decode(act.class.encode(act))
+ mutated_succ = succ.class.decode(succ.class.encode(succ))
+ visitor.run(mutated_act)
+ visitor.run(mutated_succ)
+ mutated_init = mutated_act.jobs.first.initialize_workflow
+ assert_equal 'repeated1-repeated2-repeated', mutated_init.arguments.first.data
+ assert_equal 'map-single', mutated_init.headers['header'].data
+ assert_equal 'obj1-obj2-repeated', mutated_init.last_completion_result.payloads.first.data
+ assert_equal 'saobj-single', mutated_init.search_attributes.indexed_fields['sakey'].data
+ assert_equal 'single-single', mutated_succ.commands.first.complete_workflow_execution.result.data
+ assert_equal 'samap-single',
+ mutated_succ.commands.last.upsert_workflow_search_attributes.search_attributes['sakey'].data
+
+ # Skip search attributes
+ visitor = Temporalio::Api::PayloadVisitor.new(skip_search_attributes: true, &mutator)
+ mutated_act = act.class.decode(act.class.encode(act))
+ mutated_succ = succ.class.decode(succ.class.encode(succ))
+ visitor.run(mutated_act)
+ visitor.run(mutated_succ)
+ mutated_init = mutated_act.jobs.first.initialize_workflow
+ assert_equal 'map-single', mutated_init.headers['header'].data
+ assert_equal 'saobj', mutated_init.search_attributes.indexed_fields['sakey'].data
+ assert_equal 'single-single', mutated_succ.commands.first.complete_workflow_execution.result.data
+ assert_equal 'samap', mutated_succ.commands.last.upsert_workflow_search_attributes.search_attributes['sakey'].data
+
+ # On enter/exit
+ entered = []
+ exited = []
+ Temporalio::Api::PayloadVisitor.new(
+ on_enter: proc { |v| entered << v.class.descriptor.name },
+ on_exit: proc { |v| exited << v.class.descriptor.name }
+ ) do
+ # Do nothing
+ end.run(act)
+ assert_equal entered.sort, exited.sort
+ assert_includes entered, 'coresdk.workflow_activation.InitializeWorkflow'
+ assert_includes entered, 'temporal.api.common.v1.Payloads'
+ end
+
+ def test_any
+ protocol = Temporalio::Api::Protocol::V1::Message.new(
+ body: Google::Protobuf::Any.pack(
+ Temporalio::Api::History::V1::WorkflowExecutionStartedEventAttributes.new(
+ input: Temporalio::Api::Common::V1::Payloads.new(
+ payloads: [Temporalio::Api::Common::V1::Payload.new(data: 'payload1')]
+ ),
+ search_attributes: Temporalio::Api::Common::V1::SearchAttributes.new(
+ indexed_fields: { 'foo' => Temporalio::Api::Common::V1::Payload.new(data: 'payload2') }
+ )
+ )
+ )
+ )
+ Temporalio::Api::PayloadVisitor.new(skip_search_attributes: true, traverse_any: true) do |p|
+ case p
+ when Temporalio::Api::Common::V1::Payload
+ p.data += '-visited'
+ when Enumerable
+ p.each { |v| v.data += '-visited' }
+ end
+ end.run(protocol)
+ attrs = protocol.body.unpack(Temporalio::Api::History::V1::WorkflowExecutionStartedEventAttributes)
+ assert_equal 'payload1-visited', attrs.input.payloads.first.data
+ assert_equal 'payload2', attrs.search_attributes.indexed_fields['foo'].data
+ end
+ end
+end
diff --git a/temporalio/test/base64_codec.rb b/temporalio/test/base64_codec.rb
new file mode 100644
index 00000000..17306bd6
--- /dev/null
+++ b/temporalio/test/base64_codec.rb
@@ -0,0 +1,23 @@
+# frozen_string_literal: true
+
+require 'temporalio/api'
+require 'temporalio/converters/payload_codec'
+
+class Base64Codec < Temporalio::Converters::PayloadCodec
+ def encode(payloads)
+ payloads.map do |p|
+ Temporalio::Api::Common::V1::Payload.new(
+ metadata: { 'encoding' => 'test/base64' },
+ data: Base64.strict_encode64(p.to_proto)
+ )
+ end
+ end
+
+ def decode(payloads)
+ payloads.map do |p|
+ Temporalio::Api::Common::V1::Payload.decode(
+ Base64.strict_decode64(p.data)
+ )
+ end
+ end
+end
diff --git a/temporalio/test/client_workflow_test.rb b/temporalio/test/client_workflow_test.rb
index 76524c29..7c5db812 100644
--- a/temporalio/test/client_workflow_test.rb
+++ b/temporalio/test/client_workflow_test.rb
@@ -132,8 +132,7 @@ def test_describe
assert_equal handle.result_run_id, desc.run_id
assert_instance_of Time, desc.start_time
assert_equal Temporalio::Client::WorkflowExecutionStatus::COMPLETED, desc.status
- # @type var attrs: Temporalio::SearchAttributes
- attrs = desc.search_attributes
+ attrs = desc.search_attributes #: Temporalio::SearchAttributes
assert_equal 'some text', attrs[ATTR_KEY_TEXT]
assert_equal 'some keyword', attrs[ATTR_KEY_KEYWORD]
assert_equal 123, attrs[ATTR_KEY_INTEGER]
diff --git a/temporalio/test/converters/data_converter_test.rb b/temporalio/test/converters/data_converter_test.rb
index 5f46e45b..21106c1f 100644
--- a/temporalio/test/converters/data_converter_test.rb
+++ b/temporalio/test/converters/data_converter_test.rb
@@ -1,5 +1,6 @@
# frozen_string_literal: true
+require 'base64_codec'
require 'temporalio/api'
require 'temporalio/converters/data_converter'
require 'temporalio/converters/payload_codec'
@@ -8,25 +9,6 @@
module Converters
class DataConverterTest < Test
- class Base64Codec
- def encode(payloads)
- payloads.map do |p|
- Temporalio::Api::Common::V1::Payload.new(
- metadata: { 'encoding' => 'test/base64' },
- data: Base64.strict_encode64(p.to_proto)
- )
- end
- end
-
- def decode(payloads)
- payloads.map do |p|
- Temporalio::Api::Common::V1::Payload.decode(
- Base64.strict_decode64(p.data)
- )
- end
- end
- end
-
def test_with_codec
converter = Temporalio::Converters::DataConverter.new(
failure_converter: Ractor.make_shareable(
diff --git a/temporalio/test/converters/failure_converter_test.rb b/temporalio/test/converters/failure_converter_test.rb
index 660ee5c1..3dec681a 100644
--- a/temporalio/test/converters/failure_converter_test.rb
+++ b/temporalio/test/converters/failure_converter_test.rb
@@ -67,8 +67,7 @@ def test_failure_with_causes
assert_equal 'RuntimeError', failure.cause.cause.cause.application_failure_info.type
# Confirm deserialized as expected
- # @type var new_err: untyped
- new_err = Temporalio::Converters::DataConverter.default.from_failure(failure)
+ new_err = Temporalio::Converters::DataConverter.default.from_failure(failure) #: untyped
assert_instance_of Temporalio::Error::ChildWorkflowError, new_err
assert_equal orig_err.backtrace, new_err.backtrace
assert_equal 'Child error', new_err.message
diff --git a/temporalio/test/scoped_logger_test.rb b/temporalio/test/scoped_logger_test.rb
index ba962cc8..83b27df9 100644
--- a/temporalio/test/scoped_logger_test.rb
+++ b/temporalio/test/scoped_logger_test.rb
@@ -6,7 +6,7 @@
class ScopedLoggerTest < Test
def test_logger_with_values
# Default doesn't change anything
- out, = capture_io do
+ out, = safe_capture_io do
logger = Temporalio::ScopedLogger.new(Logger.new($stdout, level: Logger::INFO))
logger.info('info1')
logger.error('error1')
@@ -23,7 +23,7 @@ def test_logger_with_values
# With a getter that returns some values
extra_vals = { some_key: { foo: 'bar', 'baz' => 123 } }
- out, = capture_io do
+ out, = safe_capture_io do
logger = Temporalio::ScopedLogger.new(Logger.new($stdout, level: Logger::INFO))
logger.scoped_values_getter = proc { extra_vals }
logger.add(Logger::WARN, 'warn1')
diff --git a/temporalio/test/sig/test.rbs b/temporalio/test/sig/test.rbs
index 2b05d1f5..919aff25 100644
--- a/temporalio/test/sig/test.rbs
+++ b/temporalio/test/sig/test.rbs
@@ -1,5 +1,6 @@
class Test < Minitest::Test
include ExtraAssertions
+ include WorkflowUtils
ATTR_KEY_TEXT: Temporalio::SearchAttributes::Key
ATTR_KEY_KEYWORD: Temporalio::SearchAttributes::Key
@@ -11,6 +12,9 @@ class Test < Minitest::Test
def self.also_run_all_tests_in_fiber: -> void
+ def skip_if_fibers_not_supported!: -> void
+ def skip_if_not_x86!: -> void
+
def env: -> TestEnvironment
def run_in_background: { (?) -> untyped } -> (Thread | Fiber)
@@ -19,6 +23,8 @@ class Test < Minitest::Test
def assert_no_schedules: -> void
def delete_schedules: (*String ids) -> void
+ def safe_capture_io: { (?) -> untyped } -> [String, String]
+
class TestEnvironment
include Singleton
diff --git a/temporalio/test/sig/worker_activity_test.rbs b/temporalio/test/sig/worker_activity_test.rbs
index 27d13859..903c25f5 100644
--- a/temporalio/test/sig/worker_activity_test.rbs
+++ b/temporalio/test/sig/worker_activity_test.rbs
@@ -12,7 +12,7 @@ class WorkerActivityTest < Test
?cancellation: Temporalio::Cancellation,
?raise_in_block_on_shutdown: bool,
?activity_executors: Hash[Symbol, Temporalio::Worker::ActivityExecutor],
- ?interceptors: Array[Temporalio::Worker::Interceptor],
+ ?interceptors: Array[Temporalio::Worker::Interceptor::Activity],
?client: Temporalio::Client
) ?{ (Temporalio::Client::WorkflowHandle, Temporalio::Worker) -> T } -> T | (
untyped activity,
@@ -27,7 +27,7 @@ class WorkerActivityTest < Test
?cancellation: Temporalio::Cancellation,
?raise_in_block_on_shutdown: bool,
?activity_executors: Hash[Symbol, Temporalio::Worker::ActivityExecutor],
- ?interceptors: Array[Temporalio::Worker::Interceptor],
+ ?interceptors: Array[Temporalio::Worker::Interceptor::Activity],
?client: Temporalio::Client
) -> Object?
diff --git a/temporalio/test/sig/workflow/definition_test.rbs b/temporalio/test/sig/workflow/definition_test.rbs
new file mode 100644
index 00000000..bbc5284c
--- /dev/null
+++ b/temporalio/test/sig/workflow/definition_test.rbs
@@ -0,0 +1,11 @@
+module Workflow
+ class DefinitionTest < Test
+ class ValidWorkflowAdvancedBase < Temporalio::Workflow::Definition
+ end
+
+ def assert_invalid_workflow_code: (
+ String message_contains,
+ String code_to_eval
+ ) -> void
+ end
+end
\ No newline at end of file
diff --git a/temporalio/test/sig/workflow_utils.rbs b/temporalio/test/sig/workflow_utils.rbs
new file mode 100644
index 00000000..813f33b6
--- /dev/null
+++ b/temporalio/test/sig/workflow_utils.rbs
@@ -0,0 +1,45 @@
+module WorkflowUtils
+ def execute_workflow: (
+ singleton(Temporalio::Workflow::Definition) workflow,
+ *Object? args,
+ ?activities: Array[Temporalio::Activity::Definition | singleton(Temporalio::Activity::Definition)],
+ ?more_workflows: Array[singleton(Temporalio::Workflow::Definition)],
+ ?task_queue: String,
+ ?id: String,
+ ?search_attributes: Temporalio::SearchAttributes?,
+ ?memo: Hash[String | Symbol, Object?]?,
+ ?retry_policy: Temporalio::RetryPolicy?,
+ ?workflow_failure_exception_types: Array[singleton(Exception)],
+ ?max_cached_workflows: Integer,
+ ?logger: Logger?,
+ ?client: Temporalio::Client,
+ ?workflow_payload_codec_thread_pool: Temporalio::Worker::ThreadPool?,
+ ?id_conflict_policy: Temporalio::WorkflowIDConflictPolicy::enum,
+ ?max_heartbeat_throttle_interval: Float,
+ ?task_timeout: duration?
+ ) -> Object? |
+ [T] (
+ singleton(Temporalio::Workflow::Definition) workflow,
+ *Object? args,
+ ?activities: Array[Temporalio::Activity::Definition | singleton(Temporalio::Activity::Definition)],
+ ?more_workflows: Array[singleton(Temporalio::Workflow::Definition)],
+ ?task_queue: String,
+ ?id: String,
+ ?search_attributes: Temporalio::SearchAttributes?,
+ ?memo: Hash[String | Symbol, Object?]?,
+ ?retry_policy: Temporalio::RetryPolicy?,
+ ?workflow_failure_exception_types: Array[singleton(Exception)],
+ ?max_cached_workflows: Integer,
+ ?logger: Logger?,
+ ?client: Temporalio::Client,
+ ?workflow_payload_codec_thread_pool: Temporalio::Worker::ThreadPool?,
+ ?id_conflict_policy: Temporalio::WorkflowIDConflictPolicy::enum,
+ ?max_heartbeat_throttle_interval: Float,
+ ?task_timeout: duration?
+ ) { (Temporalio::Client::WorkflowHandle, Temporalio::Worker) -> T } -> T
+
+ def assert_eventually_task_fail: (
+ handle: Temporalio::Client::WorkflowHandle,
+ ?message_contains: String?
+ ) -> void
+end
\ No newline at end of file
diff --git a/temporalio/test/test.rb b/temporalio/test/test.rb
index 0d08c43d..e2e6a4d6 100644
--- a/temporalio/test/test.rb
+++ b/temporalio/test/test.rb
@@ -10,6 +10,7 @@
require 'temporalio/internal/bridge'
require 'temporalio/testing'
require 'timeout'
+require 'workflow_utils'
# require 'memory_profiler'
# MemoryProfiler.start
@@ -20,6 +21,7 @@
class Test < Minitest::Test
include ExtraAssertions
+ include WorkflowUtils
ATTR_KEY_TEXT = Temporalio::SearchAttributes::Key.new('ruby-key-text',
Temporalio::SearchAttributes::IndexedValueType::TEXT)
@@ -72,6 +74,10 @@ def skip_if_fibers_not_supported!
skip('Fibers not supported in this Ruby version')
end
+ def skip_if_not_x86!
+ skip('Test only supported on x86') unless RbConfig::CONFIG['host_cpu'] == 'x86_64'
+ end
+
def env
TestEnvironment.instance
end
@@ -120,13 +126,26 @@ def delete_schedules(*ids)
end
end
+ def safe_capture_io(&)
+ out, err = capture_io(&)
+ out.encode!('UTF-8', invalid: :replace)
+ err.encode!('UTF-8', invalid: :replace)
+ [out, err]
+ end
+
class TestEnvironment
include Singleton
attr_reader :server
def initialize
- @server = Temporalio::Testing::WorkflowEnvironment.start_local(logger: Logger.new($stdout))
+ @server = Temporalio::Testing::WorkflowEnvironment.start_local(
+ logger: Logger.new($stdout),
+ dev_server_extra_args: [
+        # Allow continue-as-new and workflow ID reuse to be immediate
+ '--dynamic-config-value', 'history.workflowIdReuseMinimalInterval="0s"'
+ ]
+ )
Minitest.after_run do
@server.shutdown
end
diff --git a/temporalio/test/testing/activity_environment_test.rb b/temporalio/test/testing/activity_environment_test.rb
index ea95aa35..e1d66e07 100644
--- a/temporalio/test/testing/activity_environment_test.rb
+++ b/temporalio/test/testing/activity_environment_test.rb
@@ -8,7 +8,7 @@ module Testing
class ActivityEnvironmentTest < Test
also_run_all_tests_in_fiber
- class SimpleActivity < Temporalio::Activity
+ class SimpleActivity < Temporalio::Activity::Definition
def initialize(init_arg = 'init-arg')
@init_arg = init_arg
end
@@ -28,7 +28,7 @@ def test_defaults
env.run(SimpleActivity.new('init-arg2'), 'arg2')
assert_equal 'exec arg: arg3, id: test',
env.run(
- Temporalio::Activity::Definition.new(name: 'SimpleActivity') do |arg|
+ Temporalio::Activity::Definition::Info.new(name: 'SimpleActivity') do |arg|
"exec arg: #{arg}, id: #{Temporalio::Activity::Context.current.info.activity_id}"
end,
'arg3'
@@ -37,7 +37,7 @@ def test_defaults
assert_equal 'Intentional error', err.message
end
- class WaitCancelActivity < Temporalio::Activity
+ class WaitCancelActivity < Temporalio::Activity::Definition
def execute
Temporalio::Activity::Context.current.cancellation.wait
end
@@ -56,7 +56,7 @@ def test_cancellation
assert_instance_of Temporalio::Error::CanceledError, err_queue.pop
end
- class WaitFiberCancelActivity < Temporalio::Activity
+ class WaitFiberCancelActivity < Temporalio::Activity::Definition
activity_executor :fiber
def execute
@@ -78,7 +78,7 @@ def test_fiber_cancellation
assert_instance_of Temporalio::Error::CanceledError, err_queue.pop
end
- class HeartbeatingActivity < Temporalio::Activity
+ class HeartbeatingActivity < Temporalio::Activity::Definition
def execute
Temporalio::Activity::Context.current.heartbeat(123, '456')
Temporalio::Activity::Context.current.heartbeat(Temporalio::Activity::Context.current.info.activity_id)
diff --git a/temporalio/test/testing/workflow_environment_test.rb b/temporalio/test/testing/workflow_environment_test.rb
new file mode 100644
index 00000000..24472559
--- /dev/null
+++ b/temporalio/test/testing/workflow_environment_test.rb
@@ -0,0 +1,141 @@
+# frozen_string_literal: true
+
+require 'securerandom'
+require 'temporalio/activity'
+require 'temporalio/client'
+require 'temporalio/testing/workflow_environment'
+require 'temporalio/worker'
+require 'temporalio/workflow'
+require 'test'
+require 'workflow_utils'
+
+module Testing
+ class WorkflowEnvironmentTest < Test
+ include WorkflowUtils
+
+ class SlowWorkflow < Temporalio::Workflow::Definition
+ TWO_DAYS = 2 * 24 * 60 * 60
+
+ def execute
+ sleep(TWO_DAYS)
+ 'all done'
+ end
+
+ workflow_query
+ def current_timestamp
+ Temporalio::Workflow.now.to_i
+ end
+
+ workflow_signal
+ def some_signal
+ # Do nothing
+ end
+ end
+
+ def test_time_skipping_auto
+ skip_if_not_x86!
+ Temporalio::Testing::WorkflowEnvironment.start_time_skipping(logger: Logger.new($stdout)) do |env|
+ worker = Temporalio::Worker.new(
+ client: env.client,
+ task_queue: "tq-#{SecureRandom.uuid}",
+ workflows: [SlowWorkflow],
+ # TODO(cretz): Ractor support not currently working
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
+ )
+ worker.run do
+        # Check that the environment's current time is around now
+ assert_in_delta Time.now, env.current_time, 30.0
+
+ # Run workflow
+ assert_equal 'all done',
+ env.client.execute_workflow(SlowWorkflow,
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue)
+
+        # Check that the environment's time has now skipped ahead about two days
+ assert_in_delta Time.now + SlowWorkflow::TWO_DAYS, env.current_time, 30.0
+ end
+ end
+ end
+
+ def test_time_skipping_manual
+ skip_if_not_x86!
+ Temporalio::Testing::WorkflowEnvironment.start_time_skipping(logger: Logger.new($stdout)) do |env|
+ worker = Temporalio::Worker.new(
+ client: env.client,
+ task_queue: "tq-#{SecureRandom.uuid}",
+ workflows: [SlowWorkflow],
+ # TODO(cretz): Ractor support not currently working
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
+ )
+ worker.run do
+ # Start workflow
+ handle = env.client.start_workflow(SlowWorkflow,
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue)
+
+ # Send signal then check query is around now
+ handle.signal(SlowWorkflow.some_signal)
+ assert_in_delta Time.now, Time.at(handle.query(SlowWorkflow.current_timestamp)), 30.0 # steep:ignore
+
+ # Sleep for two hours, then signal and query again
+ two_hours = 2 * 60 * 60
+ env.sleep(two_hours)
+ handle.signal(SlowWorkflow.some_signal)
+ assert_in_delta(
+ Time.now + two_hours,
+ Time.at(handle.query(SlowWorkflow.current_timestamp)), # steep:ignore
+ 30.0
+ )
+ end
+ end
+ end
+
+ class HeartbeatTimeoutActivity < Temporalio::Activity::Definition
+ def initialize(env)
+ @env = env
+ end
+
+ def execute
+ # Sleep for twice the heartbeat timeout
+ timeout = Temporalio::Activity::Context.current.info.heartbeat_timeout or raise 'No timeout'
+ @env.sleep(timeout * 2)
+ 'all done'
+ end
+ end
+
+ class HeartbeatTimeoutWorkflow < Temporalio::Workflow::Definition
+ def execute
+ # Run activity with 20 second heartbeat timeout
+ Temporalio::Workflow.execute_activity(
+ HeartbeatTimeoutActivity,
+ schedule_to_close_timeout: 1000,
+ heartbeat_timeout: 20,
+ retry_policy: Temporalio::RetryPolicy.new(max_attempts: 1)
+ )
+ end
+ end
+
+ def test_time_skipping_heartbeat_timeout
+ skip_if_not_x86!
+ Temporalio::Testing::WorkflowEnvironment.start_time_skipping(logger: Logger.new($stdout)) do |env|
+ worker = Temporalio::Worker.new(
+ client: env.client,
+ task_queue: "tq-#{SecureRandom.uuid}",
+ workflows: [HeartbeatTimeoutWorkflow],
+ activities: [HeartbeatTimeoutActivity.new(env)],
+ # TODO(cretz): Ractor support not currently working
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
+ )
+ worker.run do
+ # Run workflow and confirm it got heartbeat timeout
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ env.client.execute_workflow(HeartbeatTimeoutWorkflow,
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue)
+ end
+ assert_instance_of Temporalio::Error::ActivityError, err.cause
+ assert_instance_of Temporalio::Error::TimeoutError, err.cause.cause
+ assert_equal Temporalio::Error::TimeoutError::TimeoutType::HEARTBEAT, err.cause.cause.type
+ end
+ end
+ end
+ end
+end
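The time-skipping tests above rely on the test server advancing a virtual clock to the next pending timer instead of waiting in real time, which is why `SlowWorkflow`'s two-day sleep completes immediately. A minimal pure-Ruby sketch of that idea (illustrative only — `VirtualClock` is an invented name, not part of the SDK or test server):

```ruby
# Virtual clock: timers "fire" by jumping time forward to the next
# deadline rather than by actually waiting.
class VirtualClock
  attr_reader :now

  def initialize(start = 0.0)
    @now = start
    @timers = [] # [fire_at, callback] pairs
  end

  # Register a callback to fire +delay+ seconds of virtual time from now
  def after(delay, &callback)
    @timers << [@now + delay, callback]
  end

  # Advance directly to the earliest pending deadline and fire it
  def run_next
    fire_at, callback = @timers.min_by(&:first)
    return if fire_at.nil?

    @timers.delete([fire_at, callback])
    @now = fire_at
    callback.call
  end
end
```

With this model, a two-day timer and a one-minute timer both resolve in microseconds of real time; only the virtual `now` reflects the skipped duration.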
diff --git a/temporalio/test/worker/activity_executor/thread_pool_test.rb b/temporalio/test/worker/activity_executor/thread_pool_test.rb
deleted file mode 100644
index e2c53cec..00000000
--- a/temporalio/test/worker/activity_executor/thread_pool_test.rb
+++ /dev/null
@@ -1,111 +0,0 @@
-# frozen_string_literal: true
-
-require 'temporalio/activity'
-require 'test'
-
-module Worker
- module ActivityExecutor
- class ThreadPoolTest < Test
- DO_NOTHING_ACTIVITY = Temporalio::Activity::Definition.new(name: 'ignore') do
- # Empty
- end
-
- def test_unlimited_max_with_idle
- pool = Temporalio::Worker::ActivityExecutor::ThreadPool.new(idle_timeout: 0.3)
-
- # Start some activities
- pending_activity_queues = Queue.new
- 20.times do
- pool.execute_activity(DO_NOTHING_ACTIVITY) do
- queue = Queue.new
- pending_activity_queues << queue
- queue.pop
- end
- end
-
- # Wait for all to be waiting
- assert_eventually { assert_equal 20, pending_activity_queues.size }
-
- # Confirm some values
- assert_equal 20, pool.largest_length
- assert_equal 20, pool.scheduled_task_count
- assert_equal 0, pool.completed_task_count
- assert_equal 20, pool.active_count
- assert_equal 20, pool.length
- assert_equal 0, pool.queue_length
-
- # Complete 7 of the activities
- 7.times { pending_activity_queues.pop << nil }
-
- # Confirm values have changed
- assert_eventually do
- assert_equal 20, pool.largest_length
- assert_equal 20, pool.scheduled_task_count
- assert_equal 7, pool.completed_task_count
- assert_equal 13, pool.active_count
- assert_equal 0, pool.queue_length
- end
-
- # Wait twice as long as the idle timeout and send an immediately
- # completing activity and confirm pool length trimmed down
- sleep(0.6)
- pool.execute_activity(DO_NOTHING_ACTIVITY) { nil }
- assert_eventually do
- assert pool.length == 13 || pool.length == 14, "Pool length: #{pool.length}"
- end
-
- # Finish the rest, shutdown, confirm eventually all done
- pending_activity_queues.pop << nil until pending_activity_queues.empty?
- pool.shutdown
- assert_eventually do
- assert_equal 20, pool.largest_length
- assert_equal 21, pool.scheduled_task_count
- assert_equal 21, pool.completed_task_count
- assert_equal 0, pool.length
- end
- end
-
- def test_limited_max
- pool = Temporalio::Worker::ActivityExecutor::ThreadPool.new(max_threads: 7)
-
- # Start some activities
- pending_activity_queues = Queue.new
- 20.times do
- pool.execute_activity(DO_NOTHING_ACTIVITY) do
- queue = Queue.new
- pending_activity_queues << queue
- queue.pop
- end
- end
-
- # Wait for 7 to be waiting
- assert_eventually { assert_equal 7, pending_activity_queues.size }
-
- # Confirm some values
- assert_equal 7, pool.largest_length
- assert_equal 20, pool.scheduled_task_count
- assert_equal 0, pool.completed_task_count
- assert_equal 7, pool.active_count
- assert_equal 7, pool.length
- assert_equal 13, pool.queue_length
-
- # Complete 9 of the activities and confirm some values
- 9.times { pending_activity_queues.pop << nil }
- assert_eventually do
- assert_equal 9, pool.completed_task_count
- assert_equal 7, pool.active_count
- assert_equal 7, pool.length
- # Only 4 left because 9 completed and 7 are running
- assert_equal 4, pool.queue_length
- end
-
- # Complete the rest
- 11.times { pending_activity_queues.pop << nil }
- assert_eventually do
- assert_equal 20, pool.completed_task_count
- assert_equal 0, pool.queue_length
- end
- end
- end
- end
-end
diff --git a/temporalio/test/worker/thread_pool_test.rb b/temporalio/test/worker/thread_pool_test.rb
new file mode 100644
index 00000000..669f5e23
--- /dev/null
+++ b/temporalio/test/worker/thread_pool_test.rb
@@ -0,0 +1,105 @@
+# frozen_string_literal: true
+
+require 'temporalio/worker/thread_pool'
+require 'test'
+
+module Worker
+ class ThreadPoolTest < Test
+ def test_unlimited_max_with_idle
+ pool = Temporalio::Worker::ThreadPool.new(idle_timeout: 0.3)
+
+ # Start some blocks
+ pending_queues = Queue.new
+ 20.times do
+ pool.execute do
+ queue = Queue.new
+ pending_queues << queue
+ queue.pop
+ end
+ end
+
+ # Wait for all to be waiting
+ assert_eventually { assert_equal 20, pending_queues.size }
+
+ # Confirm some values
+ assert_equal 20, pool.largest_length
+ assert_equal 20, pool.scheduled_task_count
+ assert_equal 0, pool.completed_task_count
+ assert_equal 20, pool.active_count
+ assert_equal 20, pool.length
+ assert_equal 0, pool.queue_length
+
+ # Complete 7 of the blocks
+ 7.times { pending_queues.pop << nil }
+
+ # Confirm values have changed
+ assert_eventually do
+ assert_equal 20, pool.largest_length
+ assert_equal 20, pool.scheduled_task_count
+ assert_equal 7, pool.completed_task_count
+ assert_equal 13, pool.active_count
+ assert_equal 0, pool.queue_length
+ end
+
+ # Wait twice the idle timeout, then submit an immediately-completing
+ # block and confirm the pool length has been trimmed down
+ sleep(0.6)
+ pool.execute { nil }
+ assert_eventually do
+ assert pool.length == 13 || pool.length == 14, "Pool length: #{pool.length}"
+ end
+
+ # Finish the rest, shutdown, confirm eventually all done
+ pending_queues.pop << nil until pending_queues.empty?
+ pool.shutdown
+ assert_eventually do
+ assert_equal 20, pool.largest_length
+ assert_equal 21, pool.scheduled_task_count
+ assert_equal 21, pool.completed_task_count
+ assert_equal 0, pool.length
+ end
+ end
+
+ def test_limited_max
+ pool = Temporalio::Worker::ThreadPool.new(max_threads: 7)
+
+ # Start some blocks
+ pending_queues = Queue.new
+ 20.times do
+ pool.execute do
+ queue = Queue.new
+ pending_queues << queue
+ queue.pop
+ end
+ end
+
+ # Wait for 7 to be waiting
+ assert_eventually { assert_equal 7, pending_queues.size }
+
+ # Confirm some values
+ assert_equal 7, pool.largest_length
+ assert_equal 20, pool.scheduled_task_count
+ assert_equal 0, pool.completed_task_count
+ assert_equal 7, pool.active_count
+ assert_equal 7, pool.length
+ assert_equal 13, pool.queue_length
+
+ # Complete 9 of the blocks and confirm some values
+ 9.times { pending_queues.pop << nil }
+ assert_eventually do
+ assert_equal 9, pool.completed_task_count
+ assert_equal 7, pool.active_count
+ assert_equal 7, pool.length
+ # Only 4 left because 9 completed and 7 are running
+ assert_equal 4, pool.queue_length
+ end
+
+ # Complete the rest
+ 11.times { pending_queues.pop << nil }
+ assert_eventually do
+ assert_equal 20, pool.completed_task_count
+ assert_equal 0, pool.queue_length
+ end
+ end
+ end
+end
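The semantics these tests pin down — work queued beyond a thread cap, plus counters for scheduled and completed tasks — can be illustrated with a small pure-Ruby pool (an illustrative sketch under simplified assumptions, not the SDK's implementation; `SimplePool` and its accessors are invented names, and idle-timeout trimming is omitted):

```ruby
# Minimal fixed-capacity thread pool: workers block on a shared queue,
# and counters track scheduled vs. completed blocks.
class SimplePool
  attr_reader :scheduled_task_count, :completed_task_count

  def initialize(max_threads:)
    @queue = Queue.new
    @mutex = Mutex.new
    @scheduled_task_count = 0
    @completed_task_count = 0
    @threads = Array.new(max_threads) do
      Thread.new do
        # Queue#pop returns nil once the queue is closed and drained,
        # which ends the worker loop
        while (block = @queue.pop)
          block.call
          @mutex.synchronize { @completed_task_count += 1 }
        end
      end
    end
  end

  def execute(&block)
    @mutex.synchronize { @scheduled_task_count += 1 }
    @queue << block
  end

  def queue_length = @queue.size

  def shutdown
    @queue.close
    @threads.each(&:join)
  end
end
```

With `max_threads: 2` and ten submitted blocks, `queue_length` reflects the backlog while two run, and after `shutdown` both counters reach ten.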
diff --git a/temporalio/test/worker_activity_test.rb b/temporalio/test/worker_activity_test.rb
index c4459139..6dcf971e 100644
--- a/temporalio/test/worker_activity_test.rb
+++ b/temporalio/test/worker_activity_test.rb
@@ -12,7 +12,7 @@
class WorkerActivityTest < Test
also_run_all_tests_in_fiber
- class ClassActivity < Temporalio::Activity
+ class ClassActivity < Temporalio::Activity::Definition
def execute(name)
"Hello, #{name}!"
end
@@ -22,7 +22,7 @@ def test_class
assert_equal 'Hello, Class!', execute_activity(ClassActivity, 'Class')
end
- class InstanceActivity < Temporalio::Activity
+ class InstanceActivity < Temporalio::Activity::Definition
def initialize(greeting)
@greeting = greeting
end
@@ -37,11 +37,11 @@ def test_instance
end
def test_block
- activity = Temporalio::Activity::Definition.new(name: 'BlockActivity') { |name| "Greetings, #{name}!" }
+ activity = Temporalio::Activity::Definition::Info.new(name: 'BlockActivity') { |name| "Greetings, #{name}!" }
assert_equal 'Greetings, Block!', execute_activity(activity, 'Block')
end
- class FiberActivity < Temporalio::Activity
+ class FiberActivity < Temporalio::Activity::Definition
attr_reader :waiting_notification, :result_notification
activity_executor :fiber
@@ -78,7 +78,7 @@ def test_fiber
end
end
- class LoggingActivity < Temporalio::Activity
+ class LoggingActivity < Temporalio::Activity::Definition
def execute
# Log and then raise only on first attempt
Temporalio::Activity::Context.current.logger.info('Test log')
@@ -89,7 +89,7 @@ def execute
end
def test_logging
- out, = capture_io do
+ out, = safe_capture_io do
# New logger each time since stdout is replaced
execute_activity(LoggingActivity, retry_max_attempts: 2, logger: Logger.new($stdout))
end
@@ -98,7 +98,7 @@ def test_logging
assert(lines.one? { |l| l.include?('Test log') && l.include?(':attempt=>2') })
end
- class CustomNameActivity < Temporalio::Activity
+ class CustomNameActivity < Temporalio::Activity::Definition
activity_name 'my-activity'
def execute
@@ -115,10 +115,10 @@ def test_custom_name
end
end
- class DuplicateNameActivity1 < Temporalio::Activity
+ class DuplicateNameActivity1 < Temporalio::Activity::Definition
end
- class DuplicateNameActivity2 < Temporalio::Activity
+ class DuplicateNameActivity2 < Temporalio::Activity::Definition
activity_name :DuplicateNameActivity1
end
@@ -133,7 +133,7 @@ def test_duplicate_name
assert_equal 'Multiple activities named DuplicateNameActivity1', error.message
end
- class UnknownExecutorActivity < Temporalio::Activity
+ class UnknownExecutorActivity < Temporalio::Activity::Definition
activity_executor :some_unknown
end
@@ -159,10 +159,10 @@ def test_not_an_activity
activities: [NotAnActivity]
)
end
- assert error.message.end_with?('does not extend Activity')
+ assert error.message.end_with?('does not extend Temporalio::Activity::Definition')
end
- class FailureActivity < Temporalio::Activity
+ class FailureActivity < Temporalio::Activity::Definition
def execute(form)
case form
when 'simple'
@@ -205,7 +205,7 @@ def test_failure
assert_equal 1.23, error.cause.cause.next_retry_delay
end
- class UnimplementedExecuteActivity < Temporalio::Activity
+ class UnimplementedExecuteActivity < Temporalio::Activity::Definition
end
def test_unimplemented_execute
@@ -222,7 +222,7 @@ def test_not_found
)
end
- class MultiParamActivity < Temporalio::Activity
+ class MultiParamActivity < Temporalio::Activity::Definition
def execute(arg1, arg2, arg3)
"Args: #{arg1}, #{arg2}, #{arg3}"
end
@@ -232,7 +232,7 @@ def test_multi_param
assert_equal 'Args: {"foo"=>"bar"}, 123, baz', execute_activity(MultiParamActivity, { foo: 'bar' }, 123, 'baz')
end
- class InfoActivity < Temporalio::Activity
+ class InfoActivity < Temporalio::Activity::Definition
def execute
# Task token is non-utf8 safe string, so we need to base64 it
info_hash = Temporalio::Activity::Context.current.info.to_h # steep:ignore
@@ -262,7 +262,7 @@ def test_info
assert_equal 'kitchen_sink', info.workflow_type
end
- class CancellationActivity < Temporalio::Activity
+ class CancellationActivity < Temporalio::Activity::Definition
attr_reader :canceled
def initialize(swallow: false)
@@ -328,7 +328,7 @@ def test_cancellation_swallowed
end
end
- class HeartbeatDetailsActivity < Temporalio::Activity
+ class HeartbeatDetailsActivity < Temporalio::Activity::Definition
def execute
# First attempt sends a heartbeat with details and fails,
# next attempt just returns the first attempt's details
@@ -346,7 +346,7 @@ def test_heartbeat_details
execute_activity(HeartbeatDetailsActivity, retry_max_attempts: 2, heartbeat_timeout: 0.8)
end
- class ShieldingActivity < Temporalio::Activity
+ class ShieldingActivity < Temporalio::Activity::Definition
attr_reader :canceled, :levels_reached
def initialize
@@ -401,7 +401,7 @@ def test_activity_shielding
end
end
- class NoRaiseCancellationActivity < Temporalio::Activity
+ class NoRaiseCancellationActivity < Temporalio::Activity::Definition
activity_cancel_raise false
attr_reader :canceled
@@ -446,7 +446,7 @@ def test_no_raise_cancellation
end
end
- class WorkerShutdownActivity < Temporalio::Activity
+ class WorkerShutdownActivity < Temporalio::Activity::Definition
attr_reader :canceled
def initialize
@@ -511,7 +511,7 @@ def test_worker_shutdown
assert_equal 'WorkerShutdown', error.cause.cause.type
end
- class AsyncCompletionActivity < Temporalio::Activity
+ class AsyncCompletionActivity < Temporalio::Activity::Definition
def initialize
@task_token = Queue.new
end
@@ -614,7 +614,7 @@ def set_activity_context(_defn, context)
end
end
- class CustomExecutorActivity < Temporalio::Activity
+ class CustomExecutorActivity < Temporalio::Activity::Definition
activity_executor :my_executor
def execute
@@ -627,7 +627,7 @@ def test_custom_executor
execute_activity(CustomExecutorActivity, activity_executors: { my_executor: CustomExecutor.new })
end
- class ConcurrentActivity < Temporalio::Activity
+ class ConcurrentActivity < Temporalio::Activity::Definition
def initialize
@started = Queue.new
@continue = Queue.new
@@ -744,7 +744,7 @@ def test_concurrent_single_worker_fiber_activities
end
class TrackCallsInterceptor
- include Temporalio::Worker::Interceptor
+ include Temporalio::Worker::Interceptor::Activity
# Also include client interceptor so we can test worker interceptors at a
# client level
include Temporalio::Client::Interceptor
@@ -759,7 +759,7 @@ def intercept_activity(next_interceptor)
Inbound.new(self, next_interceptor)
end
- class Inbound < Temporalio::Worker::Interceptor::ActivityInbound
+ class Inbound < Temporalio::Worker::Interceptor::Activity::Inbound
def initialize(root, next_interceptor)
super(next_interceptor)
@root = root
@@ -776,7 +776,7 @@ def execute(input)
end
end
- class Outbound < Temporalio::Worker::Interceptor::ActivityOutbound
+ class Outbound < Temporalio::Worker::Interceptor::Activity::Outbound
def initialize(root, next_interceptor)
super(next_interceptor)
@root = root
@@ -789,7 +789,7 @@ def heartbeat(input)
end
end
- class InterceptorActivity < Temporalio::Activity
+ class InterceptorActivity < Temporalio::Activity::Definition
def execute(name)
Temporalio::Activity::Context.current.heartbeat('heartbeat-val')
"Hello, #{name}!"
@@ -822,56 +822,6 @@ def test_interceptor_from_client
assert_equal ['heartbeat-val'], interceptor.calls[2][1].details
end
- class CustomMetricActivity < Temporalio::Activity
- def execute
- counter = Temporalio::Activity::Context.current.metric_meter.create_metric(
- :counter, 'my-counter'
- ).with_additional_attributes({ someattr: 'someval' })
- counter.record(123, additional_attributes: { anotherattr: 'anotherval' })
- 'done'
- end
- end
-
- def test_activity_metric
- # Create a client w/ a Prometheus-enabled runtime
- prom_addr = "127.0.0.1:#{find_free_port}"
- runtime = Temporalio::Runtime.new(
- telemetry: Temporalio::Runtime::TelemetryOptions.new(
- metrics: Temporalio::Runtime::MetricsOptions.new(
- prometheus: Temporalio::Runtime::PrometheusMetricsOptions.new(
- bind_address: prom_addr
- )
- )
- )
- )
- conn_opts = env.client.connection.options.dup
- conn_opts.runtime = runtime
- client_opts = env.client.options.dup
- client_opts.connection = Temporalio::Client::Connection.new(**conn_opts.to_h) # steep:ignore
- client = Temporalio::Client.new(**client_opts.to_h) # steep:ignore
-
- assert_equal 'done', execute_activity(CustomMetricActivity, client:)
-
- dump = Net::HTTP.get(URI("http://#{prom_addr}/metrics"))
- lines = dump.split("\n")
-
- # Confirm we have the regular activity metrics
- line = lines.find { |l| l.start_with?('temporal_activity_task_received{') }
- assert_includes line, 'activity_type="CustomMetricActivity"'
- assert_includes line, 'task_queue="'
- assert_includes line, 'namespace="default"'
- assert line.end_with?(' 1')
-
- # Confirm custom metric has the tags we expect
- line = lines.find { |l| l.start_with?('my_counter{') }
- assert_includes line, 'activity_type="CustomMetricActivity"'
- assert_includes line, 'task_queue="'
- assert_includes line, 'namespace="default"'
- assert_includes line, 'someattr="someval"'
- assert_includes line, 'anotherattr="anotherval"'
- assert line.end_with?(' 123')
- end
-
# steep:ignore
def execute_activity(
activity,
@@ -889,7 +839,7 @@ def execute_activity(
interceptors: [],
client: env.client
)
- activity_defn = Temporalio::Activity::Definition.from_activity(activity)
+ activity_defn = Temporalio::Activity::Definition::Info.from_activity(activity)
extra_worker_args = {}
extra_worker_args[:activity_executors] = activity_executors if activity_executors
worker = Temporalio::Worker.new(
diff --git a/temporalio/test/worker_test.rb b/temporalio/test/worker_test.rb
index 54ebc78c..7b4e7984 100644
--- a/temporalio/test/worker_test.rb
+++ b/temporalio/test/worker_test.rb
@@ -8,7 +8,7 @@
class WorkerTest < Test
also_run_all_tests_in_fiber
- class SimpleActivity < Temporalio::Activity
+ class SimpleActivity < Temporalio::Activity::Definition
def execute(name)
"Hello, #{name}!"
end
diff --git a/temporalio/test/worker_workflow_activity_test.rb b/temporalio/test/worker_workflow_activity_test.rb
new file mode 100644
index 00000000..3106d15e
--- /dev/null
+++ b/temporalio/test/worker_workflow_activity_test.rb
@@ -0,0 +1,262 @@
+# frozen_string_literal: true
+
+require 'securerandom'
+require 'temporalio/client'
+require 'temporalio/testing'
+require 'temporalio/worker'
+require 'temporalio/workflow'
+require 'test'
+
+class WorkerWorkflowActivityTest < Test
+ class SimpleActivity < Temporalio::Activity::Definition
+ def execute(value)
+ "from activity: #{value}"
+ end
+ end
+
+ class SimpleWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario)
+ case scenario.to_sym
+ when :remote
+ Temporalio::Workflow.execute_activity(SimpleActivity, 'remote', start_to_close_timeout: 10)
+ when :remote_symbol_name
+ Temporalio::Workflow.execute_activity(:SimpleActivity, 'remote', start_to_close_timeout: 10)
+ when :remote_string_name
+ Temporalio::Workflow.execute_activity('SimpleActivity', 'remote', start_to_close_timeout: 10)
+ when :local
+ Temporalio::Workflow.execute_local_activity(SimpleActivity, 'local', start_to_close_timeout: 10)
+ when :local_symbol_name
+ Temporalio::Workflow.execute_local_activity(:SimpleActivity, 'local', start_to_close_timeout: 10)
+ when :local_string_name
+ Temporalio::Workflow.execute_local_activity('SimpleActivity', 'local', start_to_close_timeout: 10)
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_simple
+ assert_equal 'from activity: remote',
+ execute_workflow(SimpleWorkflow, :remote, activities: [SimpleActivity])
+ assert_equal 'from activity: remote',
+ execute_workflow(SimpleWorkflow, :remote_symbol_name, activities: [SimpleActivity])
+ assert_equal 'from activity: remote',
+ execute_workflow(SimpleWorkflow, :remote_string_name, activities: [SimpleActivity])
+ assert_equal 'from activity: local',
+ execute_workflow(SimpleWorkflow, :local, activities: [SimpleActivity])
+ assert_equal 'from activity: local',
+ execute_workflow(SimpleWorkflow, :local_symbol_name, activities: [SimpleActivity])
+ assert_equal 'from activity: local',
+ execute_workflow(SimpleWorkflow, :local_string_name, activities: [SimpleActivity])
+ end
+
+ class FailureActivity < Temporalio::Activity::Definition
+ def execute
+ raise Temporalio::Error::ApplicationError.new('Intentional error', 'detail1', 'detail2', non_retryable: true)
+ end
+ end
+
+ class FailureWorkflow < Temporalio::Workflow::Definition
+ def execute(local)
+ if local
+ Temporalio::Workflow.execute_local_activity(FailureActivity, start_to_close_timeout: 10)
+ else
+ Temporalio::Workflow.execute_activity(FailureActivity, start_to_close_timeout: 10)
+ end
+ end
+ end
+
+ def test_failure
+ # Most activity failure behavior is covered by the activity tests; this just checks that failures propagate
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(FailureWorkflow, false, activities: [FailureActivity])
+ end
+ assert_instance_of Temporalio::Error::ActivityError, err.cause
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause.cause
+ assert_equal %w[detail1 detail2], err.cause.cause.details
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(FailureWorkflow, true, activities: [FailureActivity])
+ end
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause
+ assert_equal %w[detail1 detail2], err.cause.details
+ end
+
+ class CancellationSleepActivity < Temporalio::Activity::Definition
+ def execute(amount)
+ sleep(amount)
+ end
+ end
+
+ class CancellationActivity < Temporalio::Activity::Definition
+ attr_reader :started, :done
+
+ def initialize
+ # Can't use a Queue because we need to heartbeat while waiting, and Queue#pop's timeout parameter is not available in Ruby 3.1
+ @force_complete = false
+ @force_complete_mutex = Mutex.new
+ end
+
+ def execute
+ @started = true
+ # Heartbeat every 100ms
+ loop do
+ Temporalio::Activity::Context.current.heartbeat
+ # Check the flag, or sleep briefly and loop
+ val = @force_complete_mutex.synchronize { @force_complete }
+ if val
+ @done = :success
+ return val
+ end
+ sleep(0.1)
+ end
+ rescue Temporalio::Error::CanceledError
+ @done ||= :canceled
+ sleep(0.1)
+ 'cancel swallowed'
+ ensure
+ @done ||= :failure # rubocop:disable Naming/MemoizedInstanceVariableName
+ end
+
+ def force_complete(value)
+ @force_complete_mutex.synchronize { @force_complete = value }
+ end
+ end
+
+ class CancellationWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_update
+ def run(scenario, local)
+ cancellation_type = case scenario.to_sym
+ when :try_cancel
+ Temporalio::Workflow::ActivityCancellationType::TRY_CANCEL
+ when :wait_cancel
+ Temporalio::Workflow::ActivityCancellationType::WAIT_CANCELLATION_COMPLETED
+ when :abandon
+ Temporalio::Workflow::ActivityCancellationType::ABANDON
+ else
+ raise NotImplementedError
+ end
+ # Start
+ cancellation, cancel_proc = Temporalio::Cancellation.new
+ fut = Temporalio::Workflow::Future.new do
+ if local
+ Temporalio::Workflow.execute_local_activity(CancellationActivity,
+ schedule_to_close_timeout: 10,
+ cancellation:,
+ cancellation_type:)
+ else
+ Temporalio::Workflow.execute_activity(CancellationActivity,
+ schedule_to_close_timeout: 10,
+ heartbeat_timeout: 5,
+ cancellation:,
+ cancellation_type:)
+ end
+ end
+
+ # Wait a bit then cancel
+ if local
+ Temporalio::Workflow.execute_local_activity(CancellationSleepActivity, 0.1,
+ schedule_to_close_timeout: 10)
+ else
+ sleep(0.1)
+ end
+ cancel_proc.call
+
+ fut.wait
+ end
+ end
+
+ def test_cancellation
+ [true, false].each do |local|
+ # Try cancel
+ # TODO(cretz): This is not working for local because worker shutdown hangs when local activity completes after
+ # shutdown started
+ unless local
+ act = CancellationActivity.new
+ execute_workflow(CancellationWorkflow, activities: [act, CancellationSleepActivity],
+ max_heartbeat_throttle_interval: 0.2,
+ task_timeout: 3) do |handle|
+ update_handle = handle.start_update(
+ CancellationWorkflow.run, :try_cancel, local,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED
+ )
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) { update_handle.result }
+ assert_instance_of Temporalio::Error::ActivityError, err.cause
+ assert_instance_of Temporalio::Error::CanceledError, err.cause.cause
+ assert_eventually { assert_equal :canceled, act.done }
+ end
+ end
+
+ # Wait cancel
+ act = CancellationActivity.new
+ execute_workflow(CancellationWorkflow, activities: [act, CancellationSleepActivity],
+ max_heartbeat_throttle_interval: 0.2,
+ task_timeout: 3) do |handle|
+ update_handle = handle.start_update(
+ CancellationWorkflow.run, :wait_cancel, local,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED
+ )
+ # assert_eventually { assert act.started }
+ # handle.signal(CancellationWorkflow.cancel)
+ assert_equal 'cancel swallowed', update_handle.result
+ assert_equal :canceled, act.done
+ end
+
+ # Abandon cancel
+ act = CancellationActivity.new
+ execute_workflow(CancellationWorkflow, activities: [act, CancellationSleepActivity],
+ max_heartbeat_throttle_interval: 0.2,
+ task_timeout: 3) do |handle|
+ update_handle = handle.start_update(
+ CancellationWorkflow.run, :abandon, local,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED
+ )
+ # assert_eventually { assert act.started }
+ # handle.signal(CancellationWorkflow.cancel)
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) { update_handle.result }
+ assert_instance_of Temporalio::Error::ActivityError, err.cause
+ assert_instance_of Temporalio::Error::CanceledError, err.cause.cause
+ assert_nil act.done
+ sleep(0.2)
+ act.force_complete 'manually complete'
+ assert_eventually { assert_equal :success, act.done }
+ end
+ end
+ end
+
+ class LocalBackoffActivity < Temporalio::Activity::Definition
+ def execute
+ # Succeed on the third attempt
+ return 'done' if Temporalio::Activity::Context.current.info.attempt == 3
+
+ raise 'Intentional failure'
+ end
+ end
+
+ class LocalBackoffWorkflow < Temporalio::Workflow::Definition
+ def execute
+ # Use a fixed 200ms retry interval with a 100ms local retry threshold
+ Temporalio::Workflow.execute_local_activity(
+ LocalBackoffActivity,
+ schedule_to_close_timeout: 30,
+ local_retry_threshold: 0.1,
+ retry_policy: Temporalio::RetryPolicy.new(initial_interval: 0.2, backoff_coefficient: 1)
+ )
+ end
+ end
+
+ def test_local_backoff
+ execute_workflow(LocalBackoffWorkflow, activities: [LocalBackoffActivity]) do |handle|
+ assert_equal 'done', handle.result
+ # Make sure there were two 200ms timers
+ assert_equal(2, handle.fetch_history_events.count do |e|
+ e.timer_started_event_attributes&.start_to_fire_timeout&.to_f == 0.2 # rubocop:disable Lint/FloatComparison
+ end)
+ end
+ end
+end
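`CancellationActivity` above polls a mutex-guarded flag with `sleep(0.1)` between heartbeats because `Queue#pop`'s `timeout:` keyword only arrived in Ruby 3.2. The same wait-with-timeout can be built portably from `Mutex` and `ConditionVariable` (a standalone sketch; `TimedFlag` is an invented name, not an SDK type):

```ruby
# A one-shot flag that can be awaited with a per-wait timeout, built
# from Mutex + ConditionVariable (available on all supported Rubies).
class TimedFlag
  def initialize
    @mutex = Mutex.new
    @cond = ConditionVariable.new
    @set = false
  end

  def set!
    @mutex.synchronize do
      @set = true
      @cond.broadcast
    end
  end

  # Returns true if the flag was set within +timeout+ seconds.
  # The until loop guards against spurious wakeups.
  def wait(timeout)
    deadline = Process.clock_gettime(Process::CLOCK_MONOTONIC) + timeout
    @mutex.synchronize do
      until @set
        remaining = deadline - Process.clock_gettime(Process::CLOCK_MONOTONIC)
        return false if remaining <= 0

        @cond.wait(@mutex, remaining)
      end
      true
    end
  end
end
```

An activity loop could then call `flag.wait(0.1)` between heartbeats in place of the `sleep(0.1)` plus a separate synchronized flag read.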
diff --git a/temporalio/test/worker_workflow_child_test.rb b/temporalio/test/worker_workflow_child_test.rb
new file mode 100644
index 00000000..3a4cc322
--- /dev/null
+++ b/temporalio/test/worker_workflow_child_test.rb
@@ -0,0 +1,279 @@
+# frozen_string_literal: true
+
+require 'securerandom'
+require 'temporalio/client'
+require 'temporalio/testing'
+require 'temporalio/worker'
+require 'temporalio/workflow'
+require 'test'
+
+class WorkerWorkflowChildTest < Test
+ class SimpleChildWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario, arg = nil)
+ case scenario.to_sym
+ when :return
+ arg
+ when :fail
+ raise Temporalio::Error::ApplicationError.new('Intentional failure', 'detail1', 'detail2')
+ when :wait
+ Temporalio::Workflow.wait_condition { false }
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ class SimpleParentWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario, arg = nil)
+ case scenario.to_sym
+ when :success
+ [
+ Temporalio::Workflow.execute_child_workflow(SimpleChildWorkflow, :return, arg),
+ Temporalio::Workflow.execute_child_workflow(:SimpleChildWorkflow, :return, arg),
+ Temporalio::Workflow.execute_child_workflow('SimpleChildWorkflow', :return, arg)
+ ]
+ when :fail
+ Temporalio::Workflow.execute_child_workflow(SimpleChildWorkflow, :fail)
+ when :already_exists
+ handle = Temporalio::Workflow.start_child_workflow(SimpleChildWorkflow, :wait)
+ Temporalio::Workflow.start_child_workflow(SimpleChildWorkflow, :wait, id: handle.id)
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_simple
+ # Success
+ result = execute_workflow(SimpleParentWorkflow, :success, 'val', more_workflows: [SimpleChildWorkflow])
+ assert_equal %w[val val val], result
+
+ # Fail
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(SimpleParentWorkflow, :fail, more_workflows: [SimpleChildWorkflow])
+ end
+ assert_instance_of Temporalio::Error::ChildWorkflowError, err.cause
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause.cause
+ assert_equal %w[detail1 detail2], err.cause.cause.details
+
+ # Already exists
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(SimpleParentWorkflow, :already_exists, more_workflows: [SimpleChildWorkflow])
+ end
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause
+ assert_includes err.cause.message, 'already started'
+ end
+
+ class CancelChildWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ rescue Temporalio::Error::CanceledError
+ 'done'
+ end
+ end
+
+ class CancelParentWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario)
+ case scenario.to_sym
+ when :cancel_wait
+ cancellation, cancel_proc = Temporalio::Cancellation.new
+ handle = Temporalio::Workflow.start_child_workflow(CancelChildWorkflow, cancellation:)
+ sleep(0.1)
+ cancel_proc.call
+ handle.result
+ when :cancel_try
+ cancellation, cancel_proc = Temporalio::Cancellation.new
+ handle = Temporalio::Workflow.start_child_workflow(
+ CancelChildWorkflow,
+ cancellation:,
+ cancellation_type: Temporalio::Workflow::ChildWorkflowCancellationType::TRY_CANCEL
+ )
+ sleep(0.1)
+ cancel_proc.call
+ handle.result
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_cancel
+ # Cancel wait
+ assert_equal 'done', execute_workflow(CancelParentWorkflow, :cancel_wait, more_workflows: [CancelChildWorkflow])
+
+ # Cancel try
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(CancelParentWorkflow, :cancel_try, more_workflows: [CancelChildWorkflow])
+ end
+ assert_instance_of Temporalio::Error::ChildWorkflowError, err.cause
+ assert_instance_of Temporalio::Error::CanceledError, err.cause.cause
+ end
+
+ class SignalChildWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :signals
+
+ def execute(scenario)
+ case scenario.to_sym
+ when :wait
+ Temporalio::Workflow.wait_condition { false }
+ when :finish
+ 'done'
+ else
+ raise NotImplementedError
+ end
+ end
+
+ workflow_signal
+ def signal(value)
+ (@signals ||= []) << value
+ end
+ end
+
+ class SignalDoNothingActivity < Temporalio::Activity::Definition
+ def execute
+ # Do nothing
+ end
+ end
+
+ class SignalParentWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario)
+ case scenario.to_sym
+ when :signal
+ handle = Temporalio::Workflow.start_child_workflow(SignalChildWorkflow, :wait)
+ handle.signal(SignalChildWorkflow.signal, :foo)
+ handle.signal(:signal, :bar)
+ handle.id
+ when :signal_but_done
+ handle = Temporalio::Workflow.start_child_workflow(SignalChildWorkflow, :finish)
+ handle.result
+ handle.signal(SignalChildWorkflow.signal, :foo)
+ when :signal_then_cancel
+ handle = Temporalio::Workflow.start_child_workflow(SignalChildWorkflow, :wait)
+ cancellation, cancel_proc = Temporalio::Cancellation.new
+ Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.execute_local_activity(SignalDoNothingActivity, start_to_close_timeout: 10)
+ cancel_proc.call
+ end
+ handle.signal(SignalChildWorkflow.signal, :foo, cancellation:)
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_signal
+ # Successful signals
+ execute_workflow(SignalParentWorkflow, :signal, more_workflows: [SignalChildWorkflow]) do |handle|
+ child_id = handle.result #: String
+ assert_equal %w[foo bar], env.client.workflow_handle(child_id).query(SignalChildWorkflow.signals)
+ end
+
+ # Signaling a workflow that is already done
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(SignalParentWorkflow, :signal_but_done, more_workflows: [SignalChildWorkflow])
+ end
+ assert_includes err.cause.message, 'not found'
+
+ # Send a signal, then cancel it in the same task
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(SignalParentWorkflow, :signal_then_cancel,
+ activities: [SignalDoNothingActivity], more_workflows: [SignalChildWorkflow])
+ end
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ assert_includes err.cause.message, 'Signal was cancelled'
+ end
+
+ class ParentClosePolicyChildWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.wait_condition { @finish }
+ end
+
+ workflow_signal
+ def finish
+ @finish = true
+ end
+ end
+
+ class ParentClosePolicyParentWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario)
+ case scenario.to_sym
+ when :parent_close_terminate
+ handle = Temporalio::Workflow.start_child_workflow(ParentClosePolicyChildWorkflow)
+ handle.id
+ when :parent_close_request_cancel
+ handle = Temporalio::Workflow.start_child_workflow(
+ ParentClosePolicyChildWorkflow, parent_close_policy: Temporalio::Workflow::ParentClosePolicy::REQUEST_CANCEL
+ )
+ handle.id
+ when :parent_close_abandon
+ handle = Temporalio::Workflow.start_child_workflow(
+ ParentClosePolicyChildWorkflow, parent_close_policy: Temporalio::Workflow::ParentClosePolicy::ABANDON
+ )
+ handle.id
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_parent_close_policy
+ # Terminate
+ execute_workflow(ParentClosePolicyParentWorkflow, :parent_close_terminate,
+ more_workflows: [ParentClosePolicyChildWorkflow]) do |handle|
+ child_id = handle.result #: String
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ env.client.workflow_handle(child_id).result
+ end
+ assert_instance_of Temporalio::Error::TerminatedError, err.cause
+ end
+
+ # Request cancel
+ execute_workflow(ParentClosePolicyParentWorkflow, :parent_close_request_cancel,
+ more_workflows: [ParentClosePolicyChildWorkflow]) do |handle|
+ child_id = handle.result #: String
+ assert_eventually do
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ env.client.workflow_handle(child_id).result
+ end
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ end
+ end
+
+ # Abandon
+ execute_workflow(ParentClosePolicyParentWorkflow, :parent_close_abandon,
+ more_workflows: [ParentClosePolicyChildWorkflow]) do |handle|
+ child_id = handle.result #: String
+ child_handle = env.client.workflow_handle(child_id)
+ child_handle.signal(ParentClosePolicyChildWorkflow.finish)
+ child_handle.result
+ end
+ end
+
+ class SearchAttributesChildWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.search_attributes.to_h.transform_keys(&:name)
+ end
+ end
+
+ class SearchAttributesParentWorkflow < Temporalio::Workflow::Definition
+ def execute
+ search_attributes = Temporalio::Workflow.search_attributes.dup
+ search_attributes[Test::ATTR_KEY_TEXT] = 'changed-text'
+ Temporalio::Workflow.execute_child_workflow(SearchAttributesChildWorkflow, search_attributes:)
+ end
+ end
+
+ def test_search_attributes
+ env.ensure_common_search_attribute_keys
+
+ # Unchanged
+ results = execute_workflow(
+ SearchAttributesParentWorkflow, :unchanged,
+ more_workflows: [SearchAttributesChildWorkflow],
+ search_attributes: Temporalio::SearchAttributes.new(
+ { ATTR_KEY_TEXT => 'some-text', ATTR_KEY_KEYWORD => 'some-keyword' }
+ )
+ )
+ assert_equal({ ATTR_KEY_TEXT.name => 'changed-text', ATTR_KEY_KEYWORD.name => 'some-keyword' }, results)
+ end
+end
diff --git a/temporalio/test/worker_workflow_external_test.rb b/temporalio/test/worker_workflow_external_test.rb
new file mode 100644
index 00000000..534c00e4
--- /dev/null
+++ b/temporalio/test/worker_workflow_external_test.rb
@@ -0,0 +1,97 @@
+# frozen_string_literal: true
+
+require 'securerandom'
+require 'temporalio/client'
+require 'temporalio/testing'
+require 'temporalio/worker'
+require 'temporalio/workflow'
+require 'test'
+
+class WorkerWorkflowExternalTest < Test
+ class ExternalWaitingWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :signals
+
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_signal
+ def signal(value)
+ (@signals ||= []) << value
+ end
+ end
+
+ class ExternalDoNothingActivity < Temporalio::Activity::Definition
+ def execute
+ # Do nothing
+ end
+ end
+
+ class ExternalWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario, external_id)
+ handle = Temporalio::Workflow.external_workflow_handle(external_id)
+ case scenario.to_sym
+ when :signal
+ handle.signal(ExternalWaitingWorkflow.signal, :foo)
+ handle.signal(:signal, :bar)
+ when :signal_then_cancel
+ cancellation, cancel_proc = Temporalio::Cancellation.new
+ Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.execute_local_activity(ExternalDoNothingActivity, start_to_close_timeout: 10)
+ cancel_proc.call
+ end
+ handle.signal(ExternalWaitingWorkflow.signal, :foo, cancellation:)
+ when :cancel
+ handle.cancel
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_signal
+ # Start an external workflow
+ execute_workflow(ExternalWaitingWorkflow,
+ more_workflows: [ExternalWorkflow], activities: [ExternalDoNothingActivity]) do |handle, worker|
+ # Now run workflow to send the signals
+ env.client.execute_workflow(ExternalWorkflow, :signal, handle.id,
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue)
+ # Confirm they were sent
+ assert_equal %w[foo bar], handle.query(ExternalWaitingWorkflow.signals)
+
+ # Check ID that does not exist
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ env.client.execute_workflow(ExternalWorkflow, :signal, 'does-not-exist',
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue)
+ end
+ assert_includes err.cause.message, 'not found'
+
+ # Send a signal, then cancel it in the same task
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ env.client.execute_workflow(ExternalWorkflow, :signal_then_cancel, handle.id,
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue)
+ end
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ assert_includes err.cause.message, 'Signal was cancelled'
+ end
+ end
+
+ def test_cancel
+ # Start an external workflow
+ execute_workflow(ExternalWaitingWorkflow, more_workflows: [ExternalWorkflow]) do |handle, worker|
+ # Now run workflow to perform cancel
+ env.client.execute_workflow(ExternalWorkflow, :cancel, handle.id,
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue)
+ # Confirm canceled
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+
+ # Check ID that does not exist
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ env.client.execute_workflow(ExternalWorkflow, :cancel, 'does-not-exist',
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue)
+ end
+ assert_includes err.cause.message, 'not found'
+ end
+ end
+end
diff --git a/temporalio/test/worker_workflow_handler_test.rb b/temporalio/test/worker_workflow_handler_test.rb
new file mode 100644
index 00000000..cadfe970
--- /dev/null
+++ b/temporalio/test/worker_workflow_handler_test.rb
@@ -0,0 +1,620 @@
+# frozen_string_literal: true
+
+require 'temporalio/client'
+require 'temporalio/testing'
+require 'temporalio/worker'
+require 'temporalio/workflow'
+require 'test'
+
+class WorkerWorkflowHandlerTest < Test
+ class SimpleWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :my_signal_result
+
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_signal
+ def my_signal(arg)
+ @my_signal_result = arg
+ end
+
+ workflow_query
+ def my_query(arg)
+ "Hello from query, #{arg}!"
+ end
+
+ workflow_update
+ def my_update(arg)
+ "Hello from update, #{arg}!"
+ end
+ end
+
+ def test_simple
+ execute_workflow(SimpleWorkflow) do |handle|
+ handle.signal(SimpleWorkflow.my_signal, 'signal arg')
+ assert_equal 'signal arg', handle.query(SimpleWorkflow.my_signal_result)
+ assert_equal 'Hello from query, Temporal!', handle.query(SimpleWorkflow.my_query, 'Temporal')
+ assert_equal 'Hello from update, Temporal!', handle.execute_update(SimpleWorkflow.my_update, 'Temporal')
+ end
+ end
+
+ class ManualDefinitionWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :signal_values, :dynamic_signal_values
+
+ def execute
+ Temporalio::Workflow.query_handlers['my_query'] = Temporalio::Workflow::Definition::Query.new(
+ name: 'my_query',
+ to_invoke: proc { |arg1, arg2| [arg1, arg2] }
+ )
+ Temporalio::Workflow.update_handlers['my_update'] = Temporalio::Workflow::Definition::Update.new(
+ name: 'my_update',
+ to_invoke: proc { |arg1, arg2| [arg1, arg2] }
+ )
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_signal
+ def define_signal_handler
+ # Make a new signal definition and expect it to process the buffered signals
+ Temporalio::Workflow.signal_handlers['my_signal'] = Temporalio::Workflow::Definition::Signal.new(
+ name: 'my_signal',
+ to_invoke: proc { |arg1, arg2| (@signal_values ||= []) << [arg1, arg2] }
+ )
+ end
+
+ workflow_signal
+ def define_dynamic_signal_handler
+ Temporalio::Workflow.signal_handlers[nil] = Temporalio::Workflow::Definition::Signal.new(
+ name: nil,
+ to_invoke: proc { |arg1, *arg2| (@dynamic_signal_values ||= []) << [arg1, arg2] }
+ )
+ end
+ end
+
+ def test_manual_definition
+ # Test regular
+ execute_workflow(ManualDefinitionWorkflow) do |handle|
+ # Send two signals to the not-yet-defined handler, then send a signal to define the handler
+ handle.signal(:my_signal, 'sig1-arg1', 'sig1-arg2')
+ handle.signal(:my_signal, 'sig2-arg1', 'sig2-arg2')
+ handle.signal(ManualDefinitionWorkflow.define_signal_handler)
+
+ # Confirm buffer processed
+ expected = [%w[sig1-arg1 sig1-arg2], %w[sig2-arg1 sig2-arg2]]
+ assert_equal expected, handle.query(ManualDefinitionWorkflow.signal_values)
+
+ # Send another and confirm
+ handle.signal(:my_signal, 'sig3-arg1', 'sig3-arg2')
+ expected << %w[sig3-arg1 sig3-arg2]
+ assert_equal expected, handle.query(ManualDefinitionWorkflow.signal_values)
+
+ # Send a couple of signals with unknown names, then define the dynamic handler
+ assert_nil handle.query(ManualDefinitionWorkflow.dynamic_signal_values)
+ handle.signal(:my_other_signal1, 'sig4-arg1', 'sig4-arg2')
+ handle.signal(:my_other_signal2, 'sig5-arg1', 'sig5-arg2')
+ handle.signal(ManualDefinitionWorkflow.define_dynamic_signal_handler)
+
+ # Confirm buffer processed
+ expected = [['my_other_signal1', %w[sig4-arg1 sig4-arg2]], ['my_other_signal2', %w[sig5-arg1 sig5-arg2]]]
+ assert_equal expected, handle.query(ManualDefinitionWorkflow.dynamic_signal_values)
+
+ # Send another and confirm
+ handle.signal(:my_other_signal3, 'sig6-arg1', 'sig6-arg2')
+ expected << ['my_other_signal3', %w[sig6-arg1 sig6-arg2]]
+ assert_equal expected, handle.query(ManualDefinitionWorkflow.dynamic_signal_values)
+
+ # Query and update
+ assert_equal %w[q1 q2], handle.query('my_query', 'q1', 'q2')
+ assert_equal %w[u1 u2], handle.execute_update('my_update', 'u1', 'u2')
+ end
+ end
+
+ class CustomNameWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.wait_condition { @finish_with }
+ end
+
+ workflow_signal name: :custom_name1
+ def my_signal(finish_with)
+ @finish_with = finish_with
+ end
+
+ workflow_query name: 'custom_name2'
+ def my_query(arg)
+ "query result for: #{arg}"
+ end
+
+ workflow_update name: '5'
+ def my_update(arg)
+ "update result for: #{arg}"
+ end
+ end
+
+ def test_custom_name
+ execute_workflow(CustomNameWorkflow) do |handle|
+ assert_equal 'query result for: arg1', handle.query(CustomNameWorkflow.my_query, 'arg1')
+ assert_equal 'query result for: arg2', handle.query('custom_name2', 'arg2')
+ assert_equal 'query result for: arg3', handle.query(:custom_name2, 'arg3')
+ assert_equal 'update result for: arg4', handle.execute_update('5', 'arg4')
+ handle.signal(:custom_name1, 'done')
+ assert_equal 'done', handle.result
+ end
+ end
+
+ class ArgumentsWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :signals
+
+ def initialize
+ @signals = []
+ end
+
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_signal
+ def some_signal(single_arg)
+ @signals << single_arg
+ end
+
+ workflow_query
+ def some_query(single_arg)
+ "query done: #{single_arg}"
+ end
+
+ workflow_update
+ def some_update(single_arg)
+ "update done: #{single_arg}"
+ end
+
+ workflow_signal
+ def some_signal_with_defaults(single_arg = 'default signal arg')
+ @signals << single_arg
+ end
+
+ workflow_query
+ def some_query_with_defaults(single_arg = 'default query arg')
+ "query done: #{single_arg}"
+ end
+
+ workflow_update
+ def some_update_with_defaults(single_arg = 'default update arg')
+ "update done: #{single_arg}"
+ end
+ end
+
+ def test_arguments
+ # Too few/many args
+ execute_workflow(ArgumentsWorkflow) do |handle|
+ # For signals, too few args are just dropped; too many are trimmed
+ handle.signal(ArgumentsWorkflow.some_signal)
+ handle.signal(ArgumentsWorkflow.some_signal, 'one')
+ handle.signal(ArgumentsWorkflow.some_signal, 'one', 'two')
+ assert_equal %w[one one], handle.query(ArgumentsWorkflow.signals)
+
+ # For queries, too few args fail the query; too many are trimmed
+ err = assert_raises(Temporalio::Error::WorkflowQueryFailedError) { handle.query(ArgumentsWorkflow.some_query) }
+ assert_includes err.message, 'wrong number of required arguments for some_query (given 0, expected 1)'
+ assert_equal 'query done: one', handle.query(ArgumentsWorkflow.some_query, 'one')
+ assert_equal 'query done: one', handle.query(ArgumentsWorkflow.some_query, 'one', 'two')
+
+ # For updates, too few args fail the update; too many are trimmed
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ handle.execute_update(ArgumentsWorkflow.some_update)
+ end
+ assert_includes err.cause.message, 'wrong number of required arguments for some_update (given 0, expected 1)'
+ assert_equal 'update done: one', handle.execute_update(ArgumentsWorkflow.some_update, 'one')
+ assert_equal 'update done: one', handle.execute_update(ArgumentsWorkflow.some_update, 'one', 'two')
+ end
+
+ # Default parameters
+ execute_workflow(ArgumentsWorkflow) do |handle|
+ handle.signal(ArgumentsWorkflow.some_signal_with_defaults)
+ handle.signal(ArgumentsWorkflow.some_signal_with_defaults, 'one')
+ handle.signal(ArgumentsWorkflow.some_signal_with_defaults, 'one', 'two')
+ assert_equal ['default signal arg', 'one', 'one'], handle.query(ArgumentsWorkflow.signals)
+
+ assert_equal 'query done: default query arg', handle.query(ArgumentsWorkflow.some_query_with_defaults)
+ assert_equal 'query done: one', handle.query(ArgumentsWorkflow.some_query_with_defaults, 'one')
+ assert_equal 'query done: one', handle.query(ArgumentsWorkflow.some_query_with_defaults, 'one', 'two')
+
+ assert_equal 'update done: default update arg', handle.execute_update(ArgumentsWorkflow.some_update_with_defaults)
+ assert_equal 'update done: one', handle.execute_update(ArgumentsWorkflow.some_update_with_defaults, 'one')
+ assert_equal 'update done: one', handle.execute_update(ArgumentsWorkflow.some_update_with_defaults, 'one', 'two')
+ end
+ end
+
+ class DynamicWorkflow < Temporalio::Workflow::Definition
+ def execute(manual_override)
+ if manual_override
+ Temporalio::Workflow.signal_handlers[nil] = Temporalio::Workflow::Definition::Signal.new(
+ name: nil,
+ raw_args: true,
+ to_invoke: proc do |name, *args|
+ arg_str = args.map do |v|
+ Temporalio::Workflow.payload_converter.from_payload(v.payload)
+ end.join(' -- ')
+ @finish_with = "manual dyn signal: #{name} - #{arg_str}"
+ end
+ )
+ Temporalio::Workflow.query_handlers[nil] = Temporalio::Workflow::Definition::Query.new(
+ name: nil,
+ raw_args: true,
+ to_invoke: proc do |name, *args|
+ arg_str = args.map { |v| Temporalio::Workflow.payload_converter.from_payload(v.payload) }.join(' -- ')
+ "manual dyn query: #{name} - #{arg_str}"
+ end
+ )
+ Temporalio::Workflow.update_handlers[nil] = Temporalio::Workflow::Definition::Update.new(
+ name: nil,
+ raw_args: true,
+ to_invoke: proc do |name, *args|
+ arg_str = args.map { |v| Temporalio::Workflow.payload_converter.from_payload(v.payload) }.join(' -- ')
+ "manual dyn update: #{name} - #{arg_str}"
+ end
+ )
+ end
+ Temporalio::Workflow.wait_condition { @finish_with }
+ end
+
+ workflow_signal dynamic: true, raw_args: true
+ def dynamic_signal(name, *args)
+ arg_str = args.map { |v| Temporalio::Workflow.payload_converter.from_payload(v.payload) }.join(' -- ')
+ @finish_with = "dyn signal: #{name} - #{arg_str}"
+ end
+
+ workflow_query dynamic: true, raw_args: true
+ def dynamic_query(name, *args)
+ arg_str = args.map { |v| Temporalio::Workflow.payload_converter.from_payload(v.payload) }.join(' -- ')
+ "dyn query: #{name} - #{arg_str}"
+ end
+
+ workflow_update dynamic: true, raw_args: true
+ def dynamic_update(name, *args)
+ arg_str = args.map { |v| Temporalio::Workflow.payload_converter.from_payload(v.payload) }.join(' -- ')
+ "dyn update: #{name} - #{arg_str}"
+ end
+
+ workflow_signal
+ def non_dynamic_signal(*)
+ # Do nothing
+ end
+
+ workflow_query
+ def non_dynamic_query(*)
+ 'non-dynamic'
+ end
+
+ workflow_update
+ def non_dynamic_update(*)
+ 'non-dynamic'
+ end
+ end
+
+ def test_dynamic
+ [true, false].each do |manual_override|
+ prefix = manual_override ? 'manual ' : ''
+ execute_workflow(DynamicWorkflow, manual_override) do |handle|
+ # Non-dynamic
+ handle.signal('non_dynamic_signal', 'signalarg1', 'signalarg2')
+ assert_equal 'non-dynamic', handle.query('non_dynamic_query', 'queryarg1', 'queryarg2')
+ assert_equal 'non-dynamic', handle.execute_update('non_dynamic_update', 'updatearg1', 'updatearg2')
+
+ # Dynamic
+ assert_equal "#{prefix}dyn query: non_dynamic_query_typo - queryarg1 -- queryarg2",
+ handle.query('non_dynamic_query_typo', 'queryarg1', 'queryarg2')
+ assert_equal "#{prefix}dyn update: non_dynamic_update_typo - updatearg1 -- updatearg2",
+ handle.execute_update('non_dynamic_update_typo', 'updatearg1', 'updatearg2')
+ handle.signal('non_dynamic_signal_typo', 'signalarg1', 'signalarg2')
+ assert_equal "#{prefix}dyn signal: non_dynamic_signal_typo - signalarg1 -- signalarg2", handle.result
+ end
+ end
+ end
+
+ class UpdateValidatorWorkflow < Temporalio::Workflow::Definition
+ def initialize
+ Temporalio::Workflow.update_handlers['manual-update'] = Temporalio::Workflow::Definition::Update.new(
+ name: 'manual-update',
+ to_invoke: proc { |arg| "manual result for: #{arg}" },
+ validator_to_invoke: proc { |arg| raise 'Bad manual arg' if arg == 'bad' }
+ )
+ end
+
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_update
+ def some_update(arg)
+ "result for: #{arg}"
+ end
+
+ workflow_update_validator :some_update
+ def some_update_validator(arg)
+ raise 'Bad arg' if arg == 'bad'
+ end
+
+ workflow_update dynamic: true
+ def some_dynamic_update(_name, arg)
+ "dyn result for: #{arg}"
+ end
+
+ workflow_update_validator :some_dynamic_update
+ def some_dynamic_update_validator(_name, arg)
+ raise 'Bad dyn arg' if arg == 'bad'
+ end
+ end
+
+ def test_update_validator
+ execute_workflow(UpdateValidatorWorkflow) do |handle|
+ assert_equal 'manual result for: good', handle.execute_update('manual-update', 'good')
+ assert_equal 'result for: good', handle.execute_update(UpdateValidatorWorkflow.some_update, 'good')
+ assert_equal 'dyn result for: good', handle.execute_update('some_update_typo', 'good')
+
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ handle.execute_update('manual-update', 'bad')
+ end
+ assert_equal 'Bad manual arg', err.cause.message
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ handle.execute_update(UpdateValidatorWorkflow.some_update, 'bad')
+ end
+ assert_equal 'Bad arg', err.cause.message
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ handle.execute_update('some_update_typo', 'bad')
+ end
+ assert_equal 'Bad dyn arg', err.cause.message
+ end
+ end
+
+ class UnfinishedHandlersWorkflow < Temporalio::Workflow::Definition
+ def initialize
+ @finish = {}
+ end
+
+ def execute
+ Temporalio::Workflow.wait_condition { @finish[:workflow] }
+ end
+
+ workflow_update
+ def some_update1
+ Temporalio::Workflow.wait_condition { @finish[:some_update1] }
+ end
+
+ workflow_update
+ def some_update2
+ Temporalio::Workflow.wait_condition { @finish[:some_update2] }
+ end
+
+ workflow_update unfinished_policy: Temporalio::Workflow::HandlerUnfinishedPolicy::ABANDON
+ def some_update_abandon
+ Temporalio::Workflow.wait_condition { @finish[:some_update_abandon] }
+ end
+
+ workflow_signal
+ def some_signal1
+ Temporalio::Workflow.wait_condition { @finish[:some_signal1] }
+ end
+
+ workflow_signal
+ def some_signal2
+ Temporalio::Workflow.wait_condition { @finish[:some_signal2] }
+ end
+
+ workflow_signal unfinished_policy: Temporalio::Workflow::HandlerUnfinishedPolicy::ABANDON
+ def some_signal_abandon
+ Temporalio::Workflow.wait_condition { @finish[:some_signal_abandon] }
+ end
+
+ workflow_query
+ def all_handlers_finished?
+ Temporalio::Workflow.all_handlers_finished?
+ end
+
+ workflow_signal
+ def finish(thing)
+ @finish[thing.to_sym] = true
+ end
+ end
+
+ def test_unfinished_handlers_warn
+ # Workflow completing with unfinished handlers logs warnings
+ _, err = safe_capture_io do
+ execute_workflow(UnfinishedHandlersWorkflow, logger: Logger.new($stdout)) do |handle|
+ # Send updates and signals
+ handle.start_update(UnfinishedHandlersWorkflow.some_update1,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED,
+ id: 'my-update-1')
+ handle.start_update(UnfinishedHandlersWorkflow.some_update1,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED,
+ id: 'my-update-2')
+ handle.start_update(UnfinishedHandlersWorkflow.some_update2,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED,
+ id: 'my-update-3')
+ handle.start_update(UnfinishedHandlersWorkflow.some_update_abandon,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED,
+ id: 'my-update-4')
+ handle.signal(UnfinishedHandlersWorkflow.some_signal1)
+ handle.signal(UnfinishedHandlersWorkflow.some_signal1)
+ handle.signal(UnfinishedHandlersWorkflow.some_signal2)
+ handle.signal(UnfinishedHandlersWorkflow.some_signal_abandon)
+
+ # Finish workflow
+ handle.signal(UnfinishedHandlersWorkflow.finish, :workflow)
+ handle.result
+ end
+ end
+ lines = err.split("\n")
+
+ # Check update
+ update_lines = lines.select { |l| l.include?('update handlers are still running') }
+ assert_equal 1, update_lines.size
+ trailing_arr = update_lines.first[update_lines.first.rindex('[')..] # steep:ignore
+ assert_equal [
+ { 'name' => 'some_update1', 'id' => 'my-update-1' },
+ { 'name' => 'some_update1', 'id' => 'my-update-2' },
+ { 'name' => 'some_update2', 'id' => 'my-update-3' }
+ ], JSON.parse(trailing_arr)
+
+ # Check signal
+ signal_lines = lines.select { |l| l.include?('signal handlers are still running') }
+ assert_equal 1, signal_lines.size
+ trailing_arr = signal_lines.first[signal_lines.first.rindex('[')..] # steep:ignore
+ assert_equal [{ 'name' => 'some_signal1', 'count' => 2 }, { 'name' => 'some_signal2', 'count' => 1 }],
+ JSON.parse(trailing_arr)
+ end
+
+ def test_unfinished_handlers_all_finished
+ execute_workflow(UnfinishedHandlersWorkflow) do |handle|
+ assert handle.query(UnfinishedHandlersWorkflow.all_handlers_finished?)
+ handle.signal(UnfinishedHandlersWorkflow.some_signal1)
+ refute handle.query(UnfinishedHandlersWorkflow.all_handlers_finished?)
+ handle.signal(UnfinishedHandlersWorkflow.finish, :some_signal1)
+ assert handle.query(UnfinishedHandlersWorkflow.all_handlers_finished?)
+ end
+ end
+
+ class UpdateAndWorkflowCompletionWorkflow < Temporalio::Workflow::Definition
+ def initialize
+ @counter = 0
+ end
+
+ def execute(scenario, workflow_first)
+ @workflow_finish = workflow_first
+ @update_finish = true unless workflow_first
+ case scenario.to_sym
+ when :wait
+ Temporalio::Workflow.wait_condition { @finish && @workflow_finish }
+ "done: #{@counter += 1}"
+ when :raise
+ Temporalio::Workflow.wait_condition { @finish && @workflow_finish }
+ raise Temporalio::Error::ApplicationError, "Intentional failure: #{@counter += 1}"
+ else
+ raise NotImplementedError
+ end
+ ensure
+ @update_finish = true
+ end
+
+ workflow_update
+ def some_update(scenario)
+ case scenario.to_sym
+ when :wait
+ Temporalio::Workflow.wait_condition { @finish && @update_finish }
+ "done: #{@counter += 1}"
+ when :raise
+ Temporalio::Workflow.wait_condition { @finish && @update_finish }
+ raise Temporalio::Error::ApplicationError, "Intentional failure: #{@counter += 1}"
+ else
+ raise NotImplementedError
+ end
+ ensure
+ @workflow_finish = true
+ end
+
+ workflow_signal
+ def finish
+ @finish = true
+ end
+ end
+
+ def test_update_and_workflow_completion
+ # Normal complete, workflow done first
+ execute_workflow(UpdateAndWorkflowCompletionWorkflow, :wait, true) do |handle|
+ update_handle = handle.start_update(
+ UpdateAndWorkflowCompletionWorkflow.some_update, :wait,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED
+ )
+ handle.signal(UpdateAndWorkflowCompletionWorkflow.finish)
+ assert_equal 'done: 1', handle.result
+ assert_equal 'done: 2', update_handle.result
+ end
+ # Normal complete, update done first
+ execute_workflow(UpdateAndWorkflowCompletionWorkflow, :wait, false) do |handle|
+ update_handle = handle.start_update(
+ UpdateAndWorkflowCompletionWorkflow.some_update, :wait,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED
+ )
+ handle.signal(UpdateAndWorkflowCompletionWorkflow.finish)
+ assert_equal 'done: 2', handle.result
+ assert_equal 'done: 1', update_handle.result
+ end
+ # Fail, workflow done first
+ execute_workflow(UpdateAndWorkflowCompletionWorkflow, :raise, true) do |handle|
+ update_handle = handle.start_update(
+ UpdateAndWorkflowCompletionWorkflow.some_update, :raise,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED
+ )
+ handle.signal(UpdateAndWorkflowCompletionWorkflow.finish)
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_equal 'Intentional failure: 1', err.cause.message
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) { update_handle.result }
+ assert_equal 'Intentional failure: 2', err.cause.message
+ end
+ # Fail, update done first
+ execute_workflow(UpdateAndWorkflowCompletionWorkflow, :raise, false) do |handle|
+ update_handle = handle.start_update(
+ UpdateAndWorkflowCompletionWorkflow.some_update, :raise,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED
+ )
+ handle.signal(UpdateAndWorkflowCompletionWorkflow.finish)
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_equal 'Intentional failure: 2', err.cause.message
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) { update_handle.result }
+ assert_equal 'Intentional failure: 1', err.cause.message
+ end
+ end
+
+ class UpdateInfoWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.logger.info('In workflow')
+ Temporalio::Workflow.wait_condition { @finish }
+ end
+
+ workflow_update
+ def some_update
+ Temporalio::Workflow.logger.info('In update')
+ Temporalio::Workflow.wait_condition { @finish }
+ Temporalio::Workflow.current_update_info.to_h
+ end
+
+ workflow_signal
+ def finish
+ @finish = true
+ end
+ end
+
+ def test_update_info
+ out, = safe_capture_io do
+ execute_workflow(UpdateInfoWorkflow, logger: Logger.new($stdout)) do |handle|
+ update1 = handle.start_update(
+ UpdateInfoWorkflow.some_update,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED,
+ id: 'update-1'
+ )
+ update2 = handle.start_update(
+ UpdateInfoWorkflow.some_update,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED,
+ id: 'update-2'
+ )
+ handle.signal(UpdateInfoWorkflow.finish)
+ handle.result
+ assert_equal({ 'id' => 'update-1', 'name' => 'some_update' }, update1.result)
+ assert_equal({ 'id' => 'update-2', 'name' => 'some_update' }, update2.result)
+ end
+ end
+ # Confirm logs for workflow and updates
+ lines = out.split("\n")
+ assert(lines.any? do |l|
+ l.include?('In workflow') && l.include?(':workflow_type=>"UpdateInfoWorkflow"') && !l.include?('update_id')
+ end)
+ assert(lines.any? do |l|
+ l.include?('In update') && l.include?(':workflow_type=>"UpdateInfoWorkflow"') &&
+ l.include?(':update_id=>"update-1"')
+ end)
+ assert(lines.any? do |l|
+ l.include?('In update') && l.include?(':workflow_type=>"UpdateInfoWorkflow"') &&
+ l.include?(':update_id=>"update-2"')
+ end)
+ end
+end
diff --git a/temporalio/test/worker_workflow_test.rb b/temporalio/test/worker_workflow_test.rb
new file mode 100644
index 00000000..af710fa9
--- /dev/null
+++ b/temporalio/test/worker_workflow_test.rb
@@ -0,0 +1,1760 @@
+# frozen_string_literal: true
+
+require 'base64_codec'
+require 'net/http'
+require 'temporalio/client'
+require 'temporalio/testing'
+require 'temporalio/worker'
+require 'temporalio/workflow'
+require 'test'
+require 'timeout'
+
+class WorkerWorkflowTest < Test
+ class SimpleWorkflow < Temporalio::Workflow::Definition
+ def execute(name)
+ "Hello, #{name}!"
+ end
+ end
+
+ def test_simple
+ assert_equal 'Hello, Temporal!', execute_workflow(SimpleWorkflow, 'Temporal')
+ end
+
+ class IllegalCallsWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario)
+ case scenario.to_sym
+ when :argv
+ ARGV
+ when :date_new
+ Date.new
+ when :date_today
+ Date.today
+ when :env
+ ENV.fetch('foo', nil)
+ when :file_directory
+ File.directory?('.')
+ when :file_read
+ File.read('Rakefile')
+ when :http_get
+ Net::HTTP.get('https://example.com')
+ when :kernel_rand
+ Kernel.rand
+ when :random_new
+ Random.new.rand
+ when :thread_new
+ Thread.new { 'wut' }.join
+ when :time_new
+ Time.new
+ when :time_now
+ Time.now
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_illegal_calls
+ exec = lambda do |scenario, method|
+ execute_workflow(IllegalCallsWorkflow, scenario) do |handle|
+ if method
+ assert_eventually_task_fail(handle:, message_contains: "Cannot access #{method} from inside a workflow")
+ else
+ handle.result
+ end
+ end
+ end
+
+ exec.call(:argv, nil) # Cannot reasonably prevent
+ exec.call(:date_new, 'Date initialize')
+ exec.call(:date_today, 'Date today')
+ exec.call(:env, nil) # Cannot reasonably prevent
+ exec.call(:file_directory, 'File directory?')
+ exec.call(:file_read, 'IO read')
+ exec.call(:http_get, 'Net::HTTP get')
+ exec.call(:kernel_rand, 'Kernel rand')
+ exec.call(:random_new, 'Random::Base initialize')
+ exec.call(:thread_new, 'Thread new')
+ exec.call(:time_new, 'Time initialize')
+ exec.call(:time_now, 'Time now')
+ end
+
+ class WorkflowInitWorkflow < Temporalio::Workflow::Definition
+ workflow_init
+ def initialize(arg1, arg2)
+ @args = [arg1, arg2]
+ end
+
+ def execute(_ignore1, _ignore2)
+ @args
+ end
+ end
+
+ def test_workflow_init
+ assert_equal ['foo', 123], execute_workflow(WorkflowInitWorkflow, 'foo', 123)
+ end
+
+ class RawValueWorkflow < Temporalio::Workflow::Definition
+ workflow_raw_args
+
+ workflow_init
+ def initialize(arg1, arg2)
+ raise 'Expected raw' unless arg1.is_a?(Temporalio::Converters::RawValue)
+ raise 'Expected raw' unless arg2.is_a?(Temporalio::Converters::RawValue)
+ end
+
+ def execute(arg1, arg2)
+ raise 'Expected raw' unless arg1.is_a?(Temporalio::Converters::RawValue)
+ raise 'Bad value' unless Temporalio::Workflow.payload_converter.from_payload(arg1.payload) == 'foo'
+ raise 'Expected raw' unless arg2.is_a?(Temporalio::Converters::RawValue)
+ raise 'Bad value' unless Temporalio::Workflow.payload_converter.from_payload(arg2.payload) == 123
+
+ Temporalio::Converters::RawValue.new(
+ Temporalio::Api::Common::V1::Payload.new(
+ metadata: { 'encoding' => 'json/plain' },
+ data: '{"foo": "bar"}'.b
+ )
+ )
+ end
+ end
+
+ def test_raw_value
+ assert_equal({ 'foo' => 'bar' }, execute_workflow(RawValueWorkflow, 'foo', 123))
+ end
+
+ class ArgCountWorkflow < Temporalio::Workflow::Definition
+ def execute(arg1, arg2)
+ [arg1, arg2]
+ end
+ end
+
+ def test_arg_count
+ # Extra arguments are allowed and just discarded; too few are not allowed
+ execute_workflow(ArgCountWorkflow) do |handle|
+ assert_eventually_task_fail(
+ handle:,
+ message_contains: 'wrong number of required arguments for execute (given 0, expected 2)'
+ )
+ end
+ assert_equal %w[one two], execute_workflow(ArgCountWorkflow, 'one', 'two')
+ assert_equal %w[three four], execute_workflow(ArgCountWorkflow, 'three', 'four', 'five')
+ end
+
+ class InfoWorkflow < Temporalio::Workflow::Definition
+ def execute
+ Temporalio::Workflow.info.to_h
+ end
+ end
+
+ def test_info
+ execute_workflow(InfoWorkflow) do |handle, worker|
+ info = handle.result #: Hash[String, untyped]
+ assert_equal 1, info['attempt']
+ assert_nil info.fetch('continued_run_id')
+ assert_nil info.fetch('cron_schedule')
+ assert_nil info.fetch('execution_timeout')
+ assert_nil info.fetch('last_failure')
+ assert_nil info.fetch('last_result')
+ assert_equal env.client.namespace, info['namespace']
+ assert_nil info.fetch('parent')
+ assert_nil info.fetch('retry_policy')
+ assert_equal handle.result_run_id, info['run_id']
+ assert_nil info.fetch('run_timeout')
+ refute_nil info['start_time']
+ assert_equal worker.task_queue, info['task_queue']
+ assert_equal 10.0, info['task_timeout']
+ assert_equal handle.id, info['workflow_id']
+ assert_equal 'InfoWorkflow', info['workflow_type']
+ end
+ end
+
+ class HistoryInfoWorkflow < Temporalio::Workflow::Definition
+ def execute
+ # Start 30 100ms timers and wait on them all
+ Temporalio::Workflow::Future.all_of(
+ *30.times.map { Temporalio::Workflow::Future.new { sleep(0.1) } }
+ ).wait
+
+ [
+ Temporalio::Workflow.continue_as_new_suggested,
+ Temporalio::Workflow.current_history_length,
+ Temporalio::Workflow.current_history_size
+ ]
+ end
+ end
+
+ def test_history_info
+ can_suggested, hist_len, hist_size = execute_workflow(HistoryInfoWorkflow) #: [bool, Integer, Integer]
+ refute can_suggested
+ assert hist_len > 60
+ assert hist_size > 1500
+ end
+
+ class WaitConditionWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :waiting
+
+ def execute(scenario)
+ case scenario.to_sym
+ when :stages
+ @stages = ['one']
+ Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.wait_condition { @stages.last != 'one' }
+ raise 'Invalid stage' unless @stages.last == 'two'
+
+ @stages << 'three'
+ end
+ Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.wait_condition { !@stages.empty? }
+ raise 'Invalid stage' unless @stages.last == 'one'
+
+ @stages << 'two'
+ end
+ Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.wait_condition { !@stages.empty? }
+ raise 'Invalid stage' unless @stages.last == 'three'
+
+ @stages << 'four'
+ end
+ Temporalio::Workflow.wait_condition { @stages.last == 'four' }
+ @stages
+ when :workflow_cancel
+ @waiting = true
+ Temporalio::Workflow.wait_condition { false }
+ when :timeout
+ Timeout.timeout(0.1) do
+ Temporalio::Workflow.wait_condition { false }
+ end
+ when :manual_cancel
+ my_cancel, my_cancel_proc = Temporalio::Cancellation.new
+ Temporalio::Workflow::Future.new do
+ sleep(0.1)
+ my_cancel_proc.call(reason: 'my cancel reason')
+ end
+ Temporalio::Workflow.wait_condition(cancellation: my_cancel) { false }
+ when :manual_cancel_before_wait
+ my_cancel, my_cancel_proc = Temporalio::Cancellation.new
+ my_cancel_proc.call(reason: 'my cancel reason')
+ Temporalio::Workflow.wait_condition(cancellation: my_cancel) { false }
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_wait_condition
+ assert_equal %w[one two three four], execute_workflow(WaitConditionWorkflow, :stages)
+
+ execute_workflow(WaitConditionWorkflow, :workflow_cancel) do |handle|
+ assert_eventually { assert handle.query(WaitConditionWorkflow.waiting) }
+ handle.cancel
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_equal 'Workflow execution canceled', err.message
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ end
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_workflow(WaitConditionWorkflow, :timeout) }
+ assert_equal 'execution expired', err.cause.message
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(WaitConditionWorkflow, :manual_cancel)
+ end
+ assert_equal 'Workflow execution failed', err.message
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ assert_equal 'my cancel reason', err.cause.message
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(WaitConditionWorkflow, :manual_cancel_before_wait)
+ end
+ assert_equal 'Workflow execution failed', err.message
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ assert_equal 'my cancel reason', err.cause.message
+ end
+
+ class TimerWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :waiting
+
+ def execute(scenario)
+ case scenario.to_sym
+ when :sleep_stdlib
+ sleep(0.11)
+ when :sleep_workflow
+ Temporalio::Workflow.sleep(0.12, summary: 'my summary')
+ when :sleep_stdlib_workflow_cancel
+ sleep(1000)
+ when :sleep_workflow_cancel
+ Temporalio::Workflow.sleep(1000)
+ when :sleep_explicit_cancel
+ my_cancel, my_cancel_proc = Temporalio::Cancellation.new
+ Temporalio::Workflow::Future.new do
+ sleep(0.1)
+ my_cancel_proc.call(reason: 'my cancel reason')
+ end
+ Temporalio::Workflow.sleep(1000, cancellation: my_cancel)
+ when :sleep_cancel_before_start
+ my_cancel, my_cancel_proc = Temporalio::Cancellation.new
+ my_cancel_proc.call(reason: 'my cancel reason')
+ Temporalio::Workflow.sleep(1000, cancellation: my_cancel)
+ when :timeout_stdlib
+ Timeout.timeout(0.16) do
+ Temporalio::Workflow.wait_condition { false }
+ end
+ when :timeout_workflow
+ Temporalio::Workflow.timeout(0.17) do
+ Temporalio::Workflow.wait_condition { false }
+ end
+ when :timeout_custom_info
+ Temporalio::Workflow.timeout(0.18, Temporalio::Error::ApplicationError, 'some message') do
+ Temporalio::Workflow.wait_condition { false }
+ end
+ when :timeout_infinite
+ @waiting = true
+ Temporalio::Workflow.timeout(nil) do
+ Temporalio::Workflow.wait_condition { @interrupt }
+ end
+ when :timeout_negative
+ Temporalio::Workflow.timeout(-1) do
+ Temporalio::Workflow.wait_condition { false }
+ end
+ when :timeout_workflow_cancel
+ Timeout.timeout(1000) do
+ Temporalio::Workflow.wait_condition { false }
+ end
+ when :timeout_not_reached
+ Timeout.timeout(1000) do
+ Temporalio::Workflow.wait_condition { @return_value }
+ end
+ @waiting = true
+ Temporalio::Workflow.wait_condition { @interrupt }
+ @return_value
+ else
+ raise NotImplementedError
+ end
+ end
+
+ workflow_signal
+ def interrupt
+ @interrupt = true
+ end
+
+ workflow_signal
+ def return_value(value)
+ @return_value = value
+ end
+ end
+
+ def test_timer
+ event = execute_workflow(TimerWorkflow, :sleep_stdlib) do |handle|
+ handle.result
+ handle.fetch_history_events.find(&:timer_started_event_attributes)
+ end
+ assert_equal 0.11, event.timer_started_event_attributes.start_to_fire_timeout.to_f
+
+ event = execute_workflow(TimerWorkflow, :sleep_workflow) do |handle|
+ handle.result
+ handle.fetch_history_events.find(&:timer_started_event_attributes)
+ end
+ assert_equal 0.12, event.timer_started_event_attributes.start_to_fire_timeout.to_f
+ # TODO(cretz): Assert summary
+
+ execute_workflow(TimerWorkflow, :sleep_stdlib_workflow_cancel) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:timer_started_event_attributes) }
+ handle.cancel
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ end
+
+ execute_workflow(TimerWorkflow, :sleep_workflow_cancel) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:timer_started_event_attributes) }
+ handle.cancel
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ end
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(TimerWorkflow, :sleep_explicit_cancel)
+ end
+ assert_equal 'Workflow execution failed', err.message
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ assert_equal 'my cancel reason', err.cause.message
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(TimerWorkflow, :sleep_cancel_before_start)
+ end
+ assert_equal 'Workflow execution failed', err.message
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ assert_equal 'my cancel reason', err.cause.message
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_workflow(TimerWorkflow, :timeout_stdlib) }
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause
+ assert_equal 'execution expired', err.cause.message
+ assert_equal 'Timeout::Error', err.cause.type
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_workflow(TimerWorkflow, :timeout_workflow) }
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause
+ assert_equal 'execution expired', err.cause.message
+ assert_equal 'Timeout::Error', err.cause.type
+
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(TimerWorkflow, :timeout_custom_info)
+ end
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause
+ assert_equal 'some message', err.cause.message
+ assert_nil err.cause.type
+
+ execute_workflow(TimerWorkflow, :timeout_infinite) do |handle|
+ assert_eventually { assert handle.query(TimerWorkflow.waiting) }
+ handle.signal(TimerWorkflow.interrupt)
+ handle.result
+ refute handle.fetch_history_events.any?(&:timer_started_event_attributes)
+ end
+
+ execute_workflow(TimerWorkflow, :timeout_negative) do |handle|
+ assert_eventually_task_fail(handle:, message_contains: 'Sleep duration cannot be less than 0')
+ end
+
+ execute_workflow(TimerWorkflow, :timeout_workflow_cancel) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:timer_started_event_attributes) }
+ handle.cancel
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_instance_of Temporalio::Error::CanceledError, err.cause
+ end
+
+ execute_workflow(TimerWorkflow, :timeout_not_reached) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:timer_started_event_attributes) }
+ handle.signal(TimerWorkflow.return_value, 'some value')
+ assert_eventually { assert handle.query(TimerWorkflow.waiting) }
+ assert_eventually { assert handle.fetch_history_events.any?(&:timer_canceled_event_attributes) }
+ handle.signal(TimerWorkflow.interrupt)
+ assert_equal 'some value', handle.result
+ end
+ end
+
+ class SearchAttributeMemoWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario)
+ case scenario.to_sym
+ when :search_attributes
+ # Collect original, upsert (update one, delete another), collect updated
+ orig = Temporalio::Workflow.search_attributes.to_h.transform_keys(&:name)
+ Temporalio::Workflow.upsert_search_attributes(
+ Test::ATTR_KEY_TEXT.value_set('another-text'),
+ Test::ATTR_KEY_KEYWORD.value_unset
+ )
+ updated = Temporalio::Workflow.search_attributes.to_h.transform_keys(&:name)
+ { orig:, updated: }
+ when :memo
+ # Collect original, upsert (update one, delete another), collect updated
+ orig = Temporalio::Workflow.memo.dup
+ Temporalio::Workflow.upsert_memo({ key1: 'new-val1', key2: nil })
+ updated = Temporalio::Workflow.memo.dup
+ { orig:, updated: }
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_search_attributes_memo
+ env.ensure_common_search_attribute_keys
+
+ execute_workflow(
+ SearchAttributeMemoWorkflow,
+ :search_attributes,
+ search_attributes: Temporalio::SearchAttributes.new(
+ { ATTR_KEY_TEXT => 'some-text', ATTR_KEY_KEYWORD => 'some-keyword', ATTR_KEY_INTEGER => 123 }
+ )
+ ) do |handle|
+ result = handle.result #: Hash[String, untyped]
+
+ # Check result attrs
+ assert_equal 'some-text', result['orig'][ATTR_KEY_TEXT.name]
+ assert_equal 'some-keyword', result['orig'][ATTR_KEY_KEYWORD.name]
+ assert_equal 123, result['orig'][ATTR_KEY_INTEGER.name]
+ assert_equal 'another-text', result['updated'][ATTR_KEY_TEXT.name]
+ assert_nil result['updated'][ATTR_KEY_KEYWORD.name]
+ assert_equal 123, result['updated'][ATTR_KEY_INTEGER.name]
+
+ # Check describe
+ desc = handle.describe
+ attrs = desc.search_attributes || raise
+ assert_equal 'another-text', attrs[ATTR_KEY_TEXT]
+ assert_nil attrs[ATTR_KEY_KEYWORD]
+ assert_equal 123, attrs[ATTR_KEY_INTEGER]
+ end
+
+ execute_workflow(
+ SearchAttributeMemoWorkflow,
+ :memo,
+ memo: { key1: 'val1', key2: 'val2', key3: 'val3' }
+ ) do |handle|
+ result = handle.result #: Hash[String, untyped]
+
+ # Check result attrs
+ assert_equal({ 'key1' => 'val1', 'key2' => 'val2', 'key3' => 'val3' }, result['orig'])
+ assert_equal({ 'key1' => 'new-val1', 'key3' => 'val3' }, result['updated'])
+
+ # Check describe
+ assert_equal({ 'key1' => 'new-val1', 'key3' => 'val3' }, handle.describe.memo)
+ end
+ end
+
+ class ContinueAsNewWorkflow < Temporalio::Workflow::Definition
+ def execute(past_run_ids)
+ raise 'Incorrect memo' unless Temporalio::Workflow.memo['past_run_id_count'] == past_run_ids.size
+ unless Temporalio::Workflow.info.retry_policy&.max_attempts == past_run_ids.size + 1000
+ raise 'Incorrect retry policy'
+ end
+
+ # CAN until 5 run IDs, updating memo and retry policy on the way
+ return past_run_ids if past_run_ids.size == 5
+
+ past_run_ids << Temporalio::Workflow.info.continued_run_id if Temporalio::Workflow.info.continued_run_id
+ raise Temporalio::Workflow::ContinueAsNewError.new(
+ past_run_ids,
+ memo: { past_run_id_count: past_run_ids.size },
+ retry_policy: Temporalio::RetryPolicy.new(max_attempts: past_run_ids.size + 1000)
+ )
+ end
+ end
+
+ def test_continue_as_new
+ execute_workflow(
+ ContinueAsNewWorkflow,
+ [],
+ # Set initial memo and retry policy, which we expect the workflow will update in CAN
+ memo: { past_run_id_count: 0 },
+ retry_policy: Temporalio::RetryPolicy.new(max_attempts: 1000)
+ ) do |handle|
+ result = handle.result #: Array[String]
+ assert_equal 5, result.size
+ assert_equal handle.result_run_id, result.first
+ end
+ end
+
+ class DeadlockWorkflow < Temporalio::Workflow::Definition
+ def execute
+ loop do
+ # Do nothing
+ end
+ end
+ end
+
+ def test_deadlock
+ # TODO(cretz): Do we need more tests? This attempts to interrupt the workflow via a raise on the thread, but do we
+ # need to concern ourselves with what happens if that's accidentally swallowed?
+ # TODO(cretz): Decrease deadlock detection timeout to make test faster? It is 4s now because shutdown waits on
+ # second task.
+ execute_workflow(DeadlockWorkflow) do |handle|
+ assert_eventually_task_fail(handle:, message_contains: 'Potential deadlock detected')
+ end
+ end
+
+ class StackTraceWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :expected_traces
+
+ def initialize
+ @expected_traces = []
+ end
+
+ def execute
+ # Wait forever two coroutines deep
+ Temporalio::Workflow::Future.new do
+ Temporalio::Workflow::Future.new do
+ @expected_traces << ["#{__FILE__}:#{__LINE__ + 1}"]
+ Temporalio::Workflow.wait_condition { false }
+ end
+ end
+
+ # Inside a coroutine and timeout, execute an activity forever
+ Temporalio::Workflow::Future.new do
+ Timeout.timeout(nil) do
+ @expected_traces << ["#{__FILE__}:#{__LINE__ + 1}", "#{__FILE__}:#{__LINE__ - 1}"]
+ Temporalio::Workflow.execute_activity('does-not-exist',
+ task_queue: 'does-not-exist',
+ start_to_close_timeout: 1000)
+ end
+ end
+
+ # Wait forever inside a workflow timeout
+ Temporalio::Workflow.timeout(nil) do
+ @expected_traces << ["#{__FILE__}:#{__LINE__ + 1}", "#{__FILE__}:#{__LINE__ - 1}"]
+ Temporalio::Workflow.wait_condition { false }
+ end
+ end
+
+ workflow_signal
+ def wait_signal
+ added_trace = ["#{__FILE__}:#{__LINE__ + 2}"]
+ @expected_traces << added_trace
+ Temporalio::Workflow.wait_condition { @resume_waited_signal }
+ @expected_traces.delete(added_trace)
+ end
+
+ workflow_update
+ def wait_update
+ do_recursive_thing(times_remaining: 5, lines: ["#{__FILE__}:#{__LINE__}"]) # steep:ignore
+ end
+
+ def do_recursive_thing(times_remaining:, lines:)
+ unless times_remaining.zero?
+ do_recursive_thing( # steep:ignore
+ times_remaining: times_remaining - 1,
+ lines: lines << "#{__FILE__}:#{__LINE__ - 2}"
+ )
+ end
+ @expected_traces << (lines << "#{__FILE__}:#{__LINE__ + 1}")
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_signal
+ def resume_waited_signal
+ @resume_waited_signal = true
+ end
+ end
+
+ def test_stack_trace
+ execute_workflow(StackTraceWorkflow) do |handle|
+ # Start a signal and an update
+ handle.signal(StackTraceWorkflow.wait_signal)
+ handle.start_update(StackTraceWorkflow.wait_update,
+ wait_for_stage: Temporalio::Client::WorkflowUpdateWaitStage::ACCEPTED)
+ assert_expected_traces = lambda do
+ actual_traces = handle.query('__stack_trace').split("\n\n").map do |lines| # steep:ignore
+ # Trim off non-this-class things and ":in ..."
+ lines.split("\n").select { |line| line.include?('worker_workflow_test') }.map do |line|
+ line, = line.partition(':in')
+ line
+ end.sort
+ end.sort
+ expected_traces = handle.query(StackTraceWorkflow.expected_traces).map(&:sort).sort # steep:ignore
+ assert_equal expected_traces, actual_traces
+ end
+
+ # Wait for there to be 5 expected traces and confirm proper trace
+ assert_eventually { assert_equal 5, handle.query(StackTraceWorkflow.expected_traces).size } # steep:ignore
+ assert_expected_traces.call
+
+ # Now complete the waited handle and confirm again
+ handle.signal(StackTraceWorkflow.resume_waited_signal)
+ assert_equal 4, handle.query(StackTraceWorkflow.expected_traces).size # steep:ignore
+ assert_expected_traces.call
+ end
+ end
+
+ class TaskFailureError1 < StandardError; end
+ class TaskFailureError2 < StandardError; end
+ class TaskFailureError3 < StandardError; end
+ class TaskFailureError4 < TaskFailureError3; end
+
+ class TaskFailureWorkflow < Temporalio::Workflow::Definition
+ workflow_failure_exception_type TaskFailureError2, TaskFailureError3
+
+ def execute(arg)
+ case arg
+ when 1
+ raise TaskFailureError1, 'one'
+ when 2
+ raise TaskFailureError2, 'two'
+ when 3
+ raise TaskFailureError3, 'three'
+ when 4
+ raise TaskFailureError4, 'four'
+ when 'arg'
+ raise ArgumentError, 'arg'
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_task_failure
+ # Normally just fails task
+ execute_workflow(TaskFailureWorkflow, 1) do |handle|
+ assert_eventually_task_fail(handle:, message_contains: 'one')
+ end
+
+ # Fails workflow when configured on worker
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(TaskFailureWorkflow, 1, workflow_failure_exception_types: [TaskFailureError1])
+ end
+ assert_equal 'one', err.cause.message
+ assert_equal 'WorkerWorkflowTest::TaskFailureError1', err.cause.type
+
+ # Fails workflow when configured on workflow, including inherited
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_workflow(TaskFailureWorkflow, 2) }
+ assert_equal 'two', err.cause.message
+ assert_equal 'WorkerWorkflowTest::TaskFailureError2', err.cause.type
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { execute_workflow(TaskFailureWorkflow, 4) }
+ assert_equal 'four', err.cause.message
+ assert_equal 'WorkerWorkflowTest::TaskFailureError4', err.cause.type
+
+ # Also supports stdlib errors
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(TaskFailureWorkflow, 'arg', workflow_failure_exception_types: [ArgumentError])
+ end
+ assert_equal 'arg', err.cause.message
+ assert_equal 'ArgumentError', err.cause.type
+ end
+
+ class NonDeterminismErrorWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :waiting
+
+ def execute
+ # Do a timer only on non-replay
+ sleep(0.01) unless Temporalio::Workflow::Unsafe.replaying?
+ Temporalio::Workflow.wait_condition { @finish }
+ end
+
+ workflow_signal
+ def finish
+ @finish = true
+ end
+ end
+
+ class NonDeterminismErrorSpecificAsFailureWorkflow < NonDeterminismErrorWorkflow
+ # @type module: Temporalio::Workflow::Definition.class
+
+ workflow_failure_exception_type Temporalio::Workflow::NondeterminismError
+ end
+
+ class NonDeterminismErrorGenericAsFailureWorkflow < NonDeterminismErrorWorkflow
+ workflow_failure_exception_type StandardError
+ end
+
+ def test_non_determinism_error
+ # Task failure by default
+ execute_workflow(NonDeterminismErrorWorkflow, max_cached_workflows: 0) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.signal(NonDeterminismErrorWorkflow.finish)
+ assert_eventually_task_fail(handle:, message_contains: 'Nondeterminism')
+ end
+
+ # Specifically set on worker turns to failure
+ execute_workflow(NonDeterminismErrorWorkflow,
+ max_cached_workflows: 0,
+ workflow_failure_exception_types: [Temporalio::Workflow::NondeterminismError]) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.signal(NonDeterminismErrorWorkflow.finish)
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_includes err.cause.message, 'Nondeterminism'
+ end
+
+ # Generically set on worker turns to failure
+ execute_workflow(NonDeterminismErrorWorkflow,
+ max_cached_workflows: 0,
+ workflow_failure_exception_types: [StandardError]) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.signal(NonDeterminismErrorWorkflow.finish)
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_includes err.cause.message, 'Nondeterminism'
+ end
+
+ # Specifically set on workflow turns to failure
+ execute_workflow(NonDeterminismErrorSpecificAsFailureWorkflow, max_cached_workflows: 0) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.signal(NonDeterminismErrorSpecificAsFailureWorkflow.finish)
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_includes err.cause.message, 'Nondeterminism'
+ end
+
+ # Generically set on workflow turns to failure
+ execute_workflow(NonDeterminismErrorGenericAsFailureWorkflow, max_cached_workflows: 0) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.signal(NonDeterminismErrorGenericAsFailureWorkflow.finish)
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_includes err.cause.message, 'Nondeterminism'
+ end
+ end
+
+ class LoggerWorkflow < Temporalio::Workflow::Definition
+ def initialize
+ @bad_logger = Logger.new($stdout)
+ end
+
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_update
+ def update
+ Temporalio::Workflow.logger.info('some-log-1')
+ Temporalio::Workflow::Unsafe.illegal_call_tracing_disabled { @bad_logger.info('some-log-2') }
+ sleep(0.01)
+ end
+
+ workflow_signal
+ def cause_task_failure
+ raise 'Some failure'
+ end
+ end
+
+ def test_logger
+ # Have to create the logger after capturing starts so it writes to the captured stdout
+ out, = safe_capture_io do
+ execute_workflow(LoggerWorkflow, max_cached_workflows: 0, logger: Logger.new($stdout)) do |handle|
+ handle.execute_update(LoggerWorkflow.update)
+ # Send signal which causes replay when cache disabled
+ handle.signal(:some_signal)
+ end
+ end
+ lines = out.split("\n")
+
+ # Confirm there is only one good line and it has contextual info
+ good_lines = lines.select { |l| l.include?('some-log-1') }
+ assert_equal 1, good_lines.size
+ assert_includes good_lines.first, ':workflow_type=>"LoggerWorkflow"'
+
+ # Confirm there are two bad lines, and they don't have contextual info
+ bad_lines = lines.select { |l| l.include?('some-log-2') }
+ assert bad_lines.size >= 2
+ refute_includes bad_lines.first, ':workflow_type=>"LoggerWorkflow"'
+
+ # Confirm task failure logs
+ out, = safe_capture_io do
+ execute_workflow(LoggerWorkflow, logger: Logger.new($stdout)) do |handle|
+ handle.signal(LoggerWorkflow.cause_task_failure)
+ assert_eventually_task_fail(handle:)
+ end
+ end
+ lines = out.split("\n").select { |l| l.include?(':workflow_type=>"LoggerWorkflow"') }
+ assert(lines.any? { |l| l.include?('Failed activation') && l.include?(':workflow_type=>"LoggerWorkflow"') })
+ assert(lines.any? { |l| l.include?('Some failure') && l.include?(':workflow_type=>"LoggerWorkflow"') })
+ end
+
+ class CancelWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario)
+ case scenario.to_sym
+ when :swallow
+ begin
+ Temporalio::Workflow.wait_condition { false }
+ rescue Temporalio::Error::CanceledError
+ 'done'
+ end
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_cancel
+ execute_workflow(CancelWorkflow, :swallow) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.cancel
+ assert_equal 'done', handle.result
+ end
+ end
+
+ class FutureWorkflowError < StandardError; end
+
+ class FutureWorkflow < Temporalio::Workflow::Definition
+ def execute(scenario)
+ case scenario.to_sym
+ when :any_of
+ # Basic any of
+ result = Temporalio::Workflow::Future.any_of(
+ Temporalio::Workflow::Future.new { sleep(0.01) },
+ Temporalio::Workflow::Future.new { 'done' }
+ ).wait
+ raise unless result == 'done'
+
+ # Any of with exception
+ begin
+ Temporalio::Workflow::Future.any_of(
+ Temporalio::Workflow::Future.new { sleep(0.01) },
+ Temporalio::Workflow::Future.new { raise FutureWorkflowError }
+ ).wait
+ raise
+ rescue FutureWorkflowError
+ # Do nothing
+ end
+
+ # Try any of
+ result = Temporalio::Workflow::Future.try_any_of(
+ Temporalio::Workflow::Future.new { sleep(0.01) },
+ Temporalio::Workflow::Future.new { 'done' }
+ ).wait.wait
+ raise unless result == 'done'
+
+ # Try any of with exception
+ try_any_of = Temporalio::Workflow::Future.try_any_of(
+ Temporalio::Workflow::Future.new { sleep(0.01) },
+ Temporalio::Workflow::Future.new { raise FutureWorkflowError }
+ ).wait
+ begin
+ try_any_of.wait
+ raise
+ rescue FutureWorkflowError
+ # Do nothing
+ end
+ when :all_of
+ # Basic all of
+ fut1 = Temporalio::Workflow::Future.new { 'done1' }
+ fut2 = Temporalio::Workflow::Future.new { 'done2' }
+ Temporalio::Workflow::Future.all_of(fut1, fut2).wait
+ raise unless fut1.done? && fut2.done?
+
+ # All of with exception
+ fut1 = Temporalio::Workflow::Future.new { 'done1' }
+ fut2 = Temporalio::Workflow::Future.new { raise FutureWorkflowError }
+ begin
+ Temporalio::Workflow::Future.all_of(fut1, fut2).wait
+ raise
+ rescue FutureWorkflowError
+ # Do nothing
+ end
+
+ # Try all of
+ fut1 = Temporalio::Workflow::Future.new { 'done1' }
+ fut2 = Temporalio::Workflow::Future.new { 'done2' }
+ Temporalio::Workflow::Future.try_all_of(fut1, fut2).wait
+ raise unless fut1.done? && fut2.done?
+
+ # Try all of with exception
+ fut1 = Temporalio::Workflow::Future.new { 'done1' }
+ fut2 = Temporalio::Workflow::Future.new { raise FutureWorkflowError }
+ Temporalio::Workflow::Future.try_all_of(fut1, fut2).wait
+ begin
+ fut2.wait
+ raise
+ rescue FutureWorkflowError
+ # Do nothing
+ end
+ when :set_result
+ fut = Temporalio::Workflow::Future.new
+ fut.result = 'some result'
+ raise unless fut.wait == 'some result'
+ when :set_failure
+ fut = Temporalio::Workflow::Future.new
+ fut.failure = FutureWorkflowError.new
+ begin
+ fut.wait
+ raise
+ rescue FutureWorkflowError
+ # Do nothing
+ end
+ raise unless fut.wait_no_raise.nil?
+ raise unless fut.failure.is_a?(FutureWorkflowError)
+ when :cancel
+ # Cancel does not affect future
+ fut = Temporalio::Workflow::Future.new do
+ Temporalio::Workflow.wait_condition { false }
+ rescue Temporalio::Error::CanceledError
+ 'done'
+ end
+ fut.wait
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_future
+ execute_workflow(FutureWorkflow, :any_of)
+ execute_workflow(FutureWorkflow, :all_of)
+ execute_workflow(FutureWorkflow, :set_result)
+ execute_workflow(FutureWorkflow, :set_failure)
+ execute_workflow(FutureWorkflow, :cancel) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.cancel
+ assert_equal 'done', handle.result
+ end
+ end
+
+ class FiberYieldWorkflow < Temporalio::Workflow::Definition
+ def execute
+ @fiber = Fiber.current
+ Fiber.yield
+ end
+
+ workflow_signal
+ def finish_workflow(value)
+ Temporalio::Workflow.wait_condition { @fiber }.resume(value)
+ end
+ end
+
+ def test_fiber_yield
+ execute_workflow(FiberYieldWorkflow) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.signal(FiberYieldWorkflow.finish_workflow, 'some-value')
+ assert_equal 'some-value', handle.result
+ end
+ end
+
+ class PayloadCodecActivity < Temporalio::Activity::Definition
+ def execute(should_fail)
+ raise Temporalio::Error::ApplicationError.new('Oh no', 'some err detail') if should_fail
+
+ 'some activity output'
+ end
+ end
+
+ class PayloadCodecWorkflow < Temporalio::Workflow::Definition
+ def execute(should_fail)
+ # Activity
+ act_res = Temporalio::Workflow.execute_activity(
+ PayloadCodecActivity, should_fail,
+ start_to_close_timeout: 10,
+ retry_policy: Temporalio::RetryPolicy.new(max_attempts: 1)
+ )
+ raise 'Bad act result' if act_res != 'some activity output'
+
+ # SA
+ raise 'Bad SA' if Temporalio::Workflow.search_attributes[Test::ATTR_KEY_TEXT] != 'some-sa'
+
+ Temporalio::Workflow.upsert_search_attributes(Test::ATTR_KEY_TEXT.value_set('new-sa'))
+
+ # Memo
+ raise 'Bad memo' if Temporalio::Workflow.memo['some-memo-key'] != 'some-memo'
+
+ Temporalio::Workflow.upsert_memo({ 'some-memo-key' => 'new-memo' })
+
+ Temporalio::Workflow.wait_condition { @finish_with }
+ end
+
+ workflow_signal
+ def some_signal(finish_with)
+ @finish_with = finish_with
+ end
+
+ workflow_query
+ def some_query(input)
+ "query output from input: #{input}"
+ end
+
+ workflow_update
+ def some_update(input)
+ "update output from input: #{input}"
+ end
+ end
+
+ def test_payload_codec
+ env.ensure_common_search_attribute_keys
+
+ # Create a new client with the base64 codec
+ new_options = env.client.options.dup
+ new_options.data_converter = Temporalio::Converters::DataConverter.new(payload_codec: Base64Codec.new)
+ client = Temporalio::Client.new(**new_options.to_h)
+ assert_encoded = lambda do |payload|
+ assert_equal 'test/base64', payload.metadata['encoding']
+ Base64.strict_decode64(payload.data)
+ end
+
+ # Workflow success and many common payload paths
+ execute_workflow(
+ PayloadCodecWorkflow, false,
+ activities: [PayloadCodecActivity],
+ search_attributes: Temporalio::SearchAttributes.new({ ATTR_KEY_TEXT => 'some-sa' }),
+ memo: { 'some-memo-key' => 'some-memo' },
+ client:,
+ workflow_payload_codec_thread_pool: Temporalio::Worker::ThreadPool.default
+ ) do |handle|
+ # Check query, update, signal, and workflow result
+ query_result = handle.query(PayloadCodecWorkflow.some_query, 'query-input')
+ assert_equal 'query output from input: query-input', query_result
+ update_result = handle.execute_update(PayloadCodecWorkflow.some_update, 'update-input')
+ assert_equal 'update output from input: update-input', update_result
+ handle.signal(PayloadCodecWorkflow.some_signal, 'some-workflow-result')
+ assert_equal 'some-workflow-result', handle.result
+
+ # Now check that history has encoded values, with the exception of search attributes
+ events = handle.fetch_history_events
+
+ # Start
+ attrs = events.map(&:workflow_execution_started_event_attributes).compact.first
+ assert_encoded.call(attrs.input.payloads.first)
+ assert_encoded.call(attrs.memo.fields['some-memo-key'])
+ assert_equal 'json/plain', attrs.search_attributes.indexed_fields[ATTR_KEY_TEXT.name].metadata['encoding']
+
+ # Activity
+ attrs = events.map(&:activity_task_scheduled_event_attributes).compact.first
+ assert_encoded.call(attrs.input.payloads.first)
+ attrs = events.map(&:activity_task_completed_event_attributes).compact.first
+ assert_encoded.call(attrs.result.payloads.first)
+
+ # Upserts
+ attrs = events.map(&:upsert_workflow_search_attributes_event_attributes).compact.first
+ assert_equal 'json/plain', attrs.search_attributes.indexed_fields[ATTR_KEY_TEXT.name].metadata['encoding']
+ attrs = events.map(&:workflow_properties_modified_event_attributes).compact.first
+ assert_encoded.call(attrs.upserted_memo.fields['some-memo-key'])
+
+ # Signal and update
+ attrs = events.map(&:workflow_execution_signaled_event_attributes).compact.first
+ assert_encoded.call(attrs.input.payloads.first)
+ attrs = events.map(&:workflow_execution_update_accepted_event_attributes).compact.first
+ assert_encoded.call(attrs.accepted_request.input.args.payloads.first)
+ attrs = events.map(&:workflow_execution_update_completed_event_attributes).compact.first
+ assert_encoded.call(attrs.outcome.success.payloads.first)
+
+ # Check SA and memo on describe
+ desc = handle.describe
+ assert_equal 'new-sa', desc.search_attributes[ATTR_KEY_TEXT]
+ assert_equal 'new-memo', desc.memo['some-memo-key']
+ end
+
+ # Workflow failure
+ execute_workflow(
+ PayloadCodecWorkflow, true,
+ activities: [PayloadCodecActivity],
+ client:,
+ workflow_payload_codec_thread_pool: Temporalio::Worker::ThreadPool.default
+ ) do |handle|
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_instance_of Temporalio::Error::ActivityError, err.cause
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause.cause
+ assert_equal 'Oh no', err.cause.cause.message
+ assert_equal 'some err detail', err.cause.cause.details.first
+
+ # Error message not encoded, but details are
+ events = handle.fetch_history_events
+ attrs = events.map(&:activity_task_failed_event_attributes).compact.first
+ assert_equal 'Oh no', attrs.failure.message
+ assert_encoded.call(attrs.failure.application_failure_info.details.payloads.first)
+ end
+
+ # Workflow failure with failure encoding
+ new_options = env.client.options.dup
+ new_options.data_converter = Temporalio::Converters::DataConverter.new(
+ failure_converter: Ractor.make_shareable(
+ Temporalio::Converters::FailureConverter.new(encode_common_attributes: true)
+ ),
+ payload_codec: Base64Codec.new
+ )
+ client = Temporalio::Client.new(**new_options.to_h)
+ execute_workflow(
+ PayloadCodecWorkflow, true,
+ activities: [PayloadCodecActivity],
+ client:,
+ workflow_payload_codec_thread_pool: Temporalio::Worker::ThreadPool.default
+ ) do |handle|
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_instance_of Temporalio::Error::ActivityError, err.cause
+ assert_instance_of Temporalio::Error::ApplicationError, err.cause.cause
+ assert_equal 'Oh no', err.cause.cause.message
+ assert_equal 'some err detail', err.cause.cause.details.first
+
+ # Error message is encoded
+ events = handle.fetch_history_events
+ attrs = events.map(&:activity_task_failed_event_attributes).compact.first
+ assert_equal 'Encoded failure', attrs.failure.message
+ end
+ end
+
+ class DynamicWorkflow < Temporalio::Workflow::Definition
+ workflow_dynamic
+ workflow_raw_args
+
+ def execute(*raw_args)
+ raise 'Bad arg' unless raw_args.all? { |v| v.is_a?(Temporalio::Converters::RawValue) }
+
+ res = raw_args.map { |v| Temporalio::Workflow.payload_converter.from_payload(v.payload) }.join(' -- ')
+ res = "#{Temporalio::Workflow.info.workflow_type} - #{res}"
+ # Wrap result in raw arg to test that too
+ Temporalio::Converters::RawValue.new(Temporalio::Workflow.payload_converter.to_payload(res))
+ end
+ end
+
+ class NonDynamicWorkflow < Temporalio::Workflow::Definition
+ def execute(input)
+ "output for input: #{input}"
+ end
+ end
+
+ def test_dynamic
+ worker = Temporalio::Worker.new(
+ client: env.client,
+ task_queue: "tq-#{SecureRandom.uuid}",
+ workflows: [DynamicWorkflow, NonDynamicWorkflow],
+ # TODO(cretz): Ractor support not currently working
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default
+ )
+ worker.run do
+ # Non-dynamic
+ res = env.client.execute_workflow(
+ NonDynamicWorkflow, 'some-input1',
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue
+ )
+ assert_equal 'output for input: some-input1', res
+ res = env.client.execute_workflow(
+ 'NonDynamicWorkflow', 'some-input2',
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue
+ )
+ assert_equal 'output for input: some-input2', res
+
+ # Dynamic directly fails
+ err = assert_raises(ArgumentError) do
+ env.client.execute_workflow(
+ DynamicWorkflow, 'some-input3',
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue
+ )
+ end
+ assert_includes err.message, 'Cannot pass dynamic workflow to start'
+
+ # Dynamic
+ res = env.client.execute_workflow(
+ 'NonDynamicWorkflowTypo', 'some-input4', 'some-input5',
+ id: "wf-#{SecureRandom.uuid}", task_queue: worker.task_queue
+ )
+ assert_equal 'NonDynamicWorkflowTypo - some-input4 -- some-input5', res
+ end
+ end
+
+ class ContextFrozenWorkflow < Temporalio::Workflow::Definition
+ workflow_init
+ def initialize(scenario = :do_nothing)
+ do_bad_thing(scenario)
+ end
+
+ def execute(_scenario = :do_nothing)
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_query
+ def some_query(scenario)
+ do_bad_thing(scenario)
+ end
+
+ workflow_update
+ def some_update(scenario)
+ # Do nothing inside the update itself
+ end
+
+ workflow_update_validator :some_update
+ def some_update_validator(scenario)
+ do_bad_thing(scenario)
+ end
+
+ def do_bad_thing(scenario)
+ case scenario.to_sym
+ when :make_command
+ Temporalio::Workflow.upsert_memo({ foo: 'bar' })
+ when :fiber_schedule
+ Fiber.schedule { 'foo' }
+ when :wait_condition
+ Temporalio::Workflow.wait_condition { true }
+ when :do_nothing
+ # Do nothing
+ else
+ raise NotImplementedError
+ end
+ end
+ end
+
+ def test_context_frozen
+ # Init
+ execute_workflow(ContextFrozenWorkflow, :make_command) do |handle|
+ assert_eventually_task_fail(handle:, message_contains: 'Cannot add commands in this context')
+ end
+ execute_workflow(ContextFrozenWorkflow, :fiber_schedule) do |handle|
+ assert_eventually_task_fail(handle:, message_contains: 'Cannot schedule fibers in this context')
+ end
+ execute_workflow(ContextFrozenWorkflow, :wait_condition) do |handle|
+ assert_eventually_task_fail(handle:, message_contains: 'Cannot wait in this context')
+ end
+
+ # Query
+ execute_workflow(ContextFrozenWorkflow) do |handle|
+ err = assert_raises(Temporalio::Error::WorkflowQueryFailedError) do
+ handle.query(ContextFrozenWorkflow.some_query, :make_command)
+ end
+ assert_includes err.message, 'Cannot add commands in this context'
+ err = assert_raises(Temporalio::Error::WorkflowQueryFailedError) do
+ handle.query(ContextFrozenWorkflow.some_query, :fiber_schedule)
+ end
+ assert_includes err.message, 'Cannot schedule fibers in this context'
+ err = assert_raises(Temporalio::Error::WorkflowQueryFailedError) do
+ handle.query(ContextFrozenWorkflow.some_query, :wait_condition)
+ end
+ assert_includes err.message, 'Cannot wait in this context'
+ end
+
+ # Update
+ execute_workflow(ContextFrozenWorkflow) do |handle|
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ handle.execute_update(ContextFrozenWorkflow.some_update, :make_command)
+ end
+ assert_includes err.cause.message, 'Cannot add commands in this context'
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ handle.execute_update(ContextFrozenWorkflow.some_update, :fiber_schedule)
+ end
+ assert_includes err.cause.message, 'Cannot schedule fibers in this context'
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ handle.execute_update(ContextFrozenWorkflow.some_update, :wait_condition)
+ end
+ assert_includes err.cause.message, 'Cannot wait in this context'
+ end
+ end
+
+ class InitializerFailureWorkflow < Temporalio::Workflow::Definition
+ workflow_init
+ def initialize(scenario)
+ case scenario.to_sym
+ when :workflow_failure
+ raise Temporalio::Error::ApplicationError, 'Intentional workflow failure'
+ when :task_failure
+ raise 'Intentional task failure'
+ else
+ raise NotImplementedError
+ end
+ end
+
+ def execute(_scenario)
+ 'done'
+ end
+ end
+
+ def test_initializer_failure
+ execute_workflow(InitializerFailureWorkflow, :workflow_failure) do |handle|
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) { handle.result }
+ assert_equal 'Intentional workflow failure', err.cause.message
+ end
+ execute_workflow(InitializerFailureWorkflow, :task_failure) do |handle|
+ assert_eventually_task_fail(handle:, message_contains: 'Intentional task failure')
+ end
+ end
+
+ class QueueWorkflow < Temporalio::Workflow::Definition
+ def initialize
+ @queue = Queue.new
+ end
+
+ def execute(timeout = nil)
+ # Timeout only works on 3.2+
+ if timeout
+ @queue.pop(timeout:)
+ else
+ @queue.pop
+ end
+ end
+
+ workflow_signal
+ def enqueue(value)
+ @queue.push(value)
+ end
+ end
+
+ def test_queue
+ execute_workflow(QueueWorkflow) do |handle|
+ # Make sure it has started first so we're not inadvertently testing signal-with-start
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.signal(QueueWorkflow.enqueue, 'some-value')
+ assert_equal 'some-value', handle.result
+ end
+
+    # Queue#pop did not support the timeout: keyword until Ruby 3.2, so stop
+    # the test here on older versions
+    return if Gem::Version.new(RUBY_VERSION) < Gem::Version.new('3.2')
+
+ # High timeout not reached
+ execute_workflow(QueueWorkflow, 20) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ handle.signal(QueueWorkflow.enqueue, 'some-value2')
+ assert_equal 'some-value2', handle.result
+ handle.result
+ end
+
+ # Low timeout reached
+ execute_workflow(QueueWorkflow, 1) do |handle|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ assert_nil handle.result
+ handle.result
+ end
+
+    # Timeout firing at the same time the signal is sent. We accomplish this by waiting for first task completion,
+    # stopping the worker (ensuring the timer has not yet fired), sending the signal, waiting for both the timer-fired
+    # and signaled events to be present, then starting the worker again. Hopefully 2 seconds is enough to catch the
+    # window between the timer starting and firing.
+ orig_handle, task_queue = execute_workflow(QueueWorkflow, 2, max_cached_workflows: 0) do |handle, worker|
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ [handle, worker.task_queue]
+ end
+ # Confirm timer not fired
+ refute orig_handle.fetch_history_events.any?(&:timer_fired_event_attributes)
+ # Send signal and wait for both timer fired and signaled
+ orig_handle.signal(QueueWorkflow.enqueue, 'some-value3')
+ assert_eventually { assert orig_handle.fetch_history_events.any?(&:timer_fired_event_attributes) }
+ assert_eventually { assert orig_handle.fetch_history_events.any?(&:workflow_execution_signaled_event_attributes) }
+    # Start a worker again (the existing workflow is reused via the conflict policy, not started anew)
+ execute_workflow(
+ QueueWorkflow, 2,
+ task_queue:, id: orig_handle.id,
+ id_conflict_policy: Temporalio::WorkflowIDConflictPolicy::USE_EXISTING, max_cached_workflows: 0
+ ) do |handle|
+ assert_equal orig_handle.result_run_id, handle.result_run_id
+ assert_equal 'some-value3', handle.result
+ end
+ end
+
+ class MutexActivity < Temporalio::Activity::Definition
+ def initialize(queue)
+ @queue = queue
+ end
+
+ def execute
+ @queue.pop
+ end
+ end
+
+ class MutexWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :results
+
+ def initialize
+ @mutex = Mutex.new
+ @results = []
+ end
+
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ end
+
+ workflow_signal
+ def run_activity
+ @mutex.synchronize do
+ @results << Temporalio::Workflow.execute_activity(MutexActivity, start_to_close_timeout: 100)
+ end
+ end
+ end
+
+ def test_mutex
+ queue = Queue.new
+ execute_workflow(MutexWorkflow, activities: [MutexActivity.new(queue)]) do |handle|
+ # Send 3 signals and make sure all are in history
+ 3.times { handle.signal(MutexWorkflow.run_activity) }
+ assert_eventually do
+ assert_equal 3, handle.fetch_history_events.count(&:workflow_execution_signaled_event_attributes)
+ end
+
+ # Now finish 3 activities, checking result each time
+ queue << 'one'
+ assert_eventually { assert_equal ['one'], handle.query(MutexWorkflow.results) }
+ queue << 'two'
+ assert_eventually { assert_equal %w[one two], handle.query(MutexWorkflow.results) }
+ queue << 'three'
+ assert_eventually { assert_equal %w[one two three], handle.query(MutexWorkflow.results) }
+ end
+ end
+
+ class UtilitiesWorkflow < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :result
+
+ def execute
+ @result = [
+ Temporalio::Workflow.random.rand(100),
+ Temporalio::Workflow.random.uuid,
+ Temporalio::Workflow.now
+ ]
+ end
+ end
+
+ def test_utilities
+ # Run the workflow with no cache, then query the workflow, confirm the values
+ execute_workflow(UtilitiesWorkflow, max_cached_workflows: 0) do |handle|
+ result = handle.result
+ assert_equal result, handle.query(UtilitiesWorkflow.result)
+ end
+ end
+
+ class PatchPreActivity < Temporalio::Activity::Definition
+ def execute
+ 'pre-patch'
+ end
+ end
+
+ class PatchPostActivity < Temporalio::Activity::Definition
+ def execute
+ 'post-patch'
+ end
+ end
+
+ class PatchWorkflowBase < Temporalio::Workflow::Definition
+ workflow_query_attr_reader :activity_result
+ attr_writer :activity_result
+ end
+
+ class PatchPreWorkflow < PatchWorkflowBase
+ workflow_name :PatchWorkflow
+
+ def execute
+ self.activity_result = Temporalio::Workflow.execute_activity(PatchPreActivity, schedule_to_close_timeout: 100)
+ end
+ end
+
+ class PatchWorkflow < PatchWorkflowBase
+ def execute
+ self.activity_result = if Temporalio::Workflow.patched('my-patch')
+ Temporalio::Workflow.execute_activity(PatchPostActivity, schedule_to_close_timeout: 100)
+ else
+ Temporalio::Workflow.execute_activity(PatchPreActivity, schedule_to_close_timeout: 100)
+ end
+ end
+ end
+
+ class PatchDeprecateWorkflow < PatchWorkflowBase
+ workflow_name :PatchWorkflow
+
+ def execute
+ Temporalio::Workflow.deprecate_patch('my-patch')
+ self.activity_result = Temporalio::Workflow.execute_activity(PatchPostActivity, schedule_to_close_timeout: 100)
+ end
+ end
+
+ class PatchPostWorkflow < PatchWorkflowBase
+ workflow_name :PatchWorkflow
+
+ def execute
+ self.activity_result = Temporalio::Workflow.execute_activity(PatchPostActivity, schedule_to_close_timeout: 100)
+ end
+ end
+
+ def test_patch
+ task_queue = "tq-#{SecureRandom.uuid}"
+ activities = [PatchPreActivity, PatchPostActivity]
+
+ # Run pre-patch workflow
+ pre_patch_id = "wf-#{SecureRandom.uuid}"
+ execute_workflow(PatchPreWorkflow, activities:, id: pre_patch_id, task_queue:) do |handle|
+ handle.result
+ assert_equal 'pre-patch', handle.query(PatchPreWorkflow.activity_result)
+ end
+
+ # Patch workflow and confirm pre-patch and patched work
+ patched_id = "wf-#{SecureRandom.uuid}"
+ execute_workflow(PatchWorkflow, activities:, id: patched_id, task_queue:) do |handle|
+ handle.result
+ assert_equal 'post-patch', handle.query(PatchWorkflow.activity_result)
+ assert_equal 'pre-patch', env.client.workflow_handle(pre_patch_id).query(PatchWorkflow.activity_result)
+ end
+
+ # Deprecate patch and confirm patched and deprecated work, but not pre-patch
+ deprecate_patch_id = "wf-#{SecureRandom.uuid}"
+ execute_workflow(PatchDeprecateWorkflow, activities:, id: deprecate_patch_id, task_queue:) do |handle|
+ handle.result
+ assert_equal 'post-patch', handle.query(PatchWorkflow.activity_result)
+ assert_equal 'post-patch', env.client.workflow_handle(patched_id).query(PatchWorkflow.activity_result)
+ err = assert_raises(Temporalio::Error::WorkflowQueryFailedError) do
+ env.client.workflow_handle(pre_patch_id).query(PatchWorkflow.activity_result)
+ end
+ assert_includes err.message, 'Nondeterminism'
+ end
+
+ # Remove patch and confirm post patch and deprecated work, but not pre-patch or patched
+ post_patch_id = "wf-#{SecureRandom.uuid}"
+ execute_workflow(PatchPostWorkflow, activities:, id: post_patch_id, task_queue:) do |handle|
+ handle.result
+ assert_equal 'post-patch', handle.query(PatchWorkflow.activity_result)
+ assert_equal 'post-patch', env.client.workflow_handle(deprecate_patch_id).query(PatchWorkflow.activity_result)
+ err = assert_raises(Temporalio::Error::WorkflowQueryFailedError) do
+ env.client.workflow_handle(pre_patch_id).query(PatchWorkflow.activity_result)
+ end
+ assert_includes err.message, 'Nondeterminism'
+ err = assert_raises(Temporalio::Error::WorkflowQueryFailedError) do
+ env.client.workflow_handle(patched_id).query(PatchWorkflow.activity_result)
+ end
+ assert_includes err.message, 'Nondeterminism'
+ end
+ end
+
+ class CustomMetricsActivity < Temporalio::Activity::Definition
+ def execute
+ counter = Temporalio::Activity::Context.current.metric_meter.create_metric(
+ :counter, 'my-activity-counter'
+ ).with_additional_attributes({ someattr: 'someval1' })
+ counter.record(123, additional_attributes: { anotherattr: 'anotherval1' })
+ 'done'
+ end
+ end
+
+ class CustomMetricsWorkflow < Temporalio::Workflow::Definition
+ def execute
+ histogram = Temporalio::Workflow.metric_meter.create_metric(
+ :histogram, 'my-workflow-histogram', value_type: :duration
+ ).with_additional_attributes({ someattr: 'someval2' })
+ histogram.record(4.56, additional_attributes: { anotherattr: 'anotherval2' })
+ Temporalio::Workflow.execute_activity(CustomMetricsActivity, schedule_to_close_timeout: 10)
+ end
+ end
+
+ def test_custom_metrics
+ # Create a client w/ a Prometheus-enabled runtime
+ prom_addr = "127.0.0.1:#{find_free_port}"
+ runtime = Temporalio::Runtime.new(
+ telemetry: Temporalio::Runtime::TelemetryOptions.new(
+ metrics: Temporalio::Runtime::MetricsOptions.new(
+ prometheus: Temporalio::Runtime::PrometheusMetricsOptions.new(
+ bind_address: prom_addr
+ )
+ )
+ )
+ )
+ conn_opts = env.client.connection.options.dup
+ conn_opts.runtime = runtime
+ client_opts = env.client.options.dup
+ client_opts.connection = Temporalio::Client::Connection.new(**conn_opts.to_h) # steep:ignore
+ client = Temporalio::Client.new(**client_opts.to_h) # steep:ignore
+
+ assert_equal 'done', execute_workflow(
+ CustomMetricsWorkflow,
+ activities: [CustomMetricsActivity],
+ client:
+ )
+
+ dump = Net::HTTP.get(URI("http://#{prom_addr}/metrics"))
+ lines = dump.split("\n")
+
+ # Confirm we have the regular activity metrics
+ line = lines.find { |l| l.start_with?('temporal_activity_task_received{') }
+ assert_includes line, 'activity_type="CustomMetricsActivity"'
+ assert_includes line, 'task_queue="'
+ assert_includes line, 'namespace="default"'
+ assert line.end_with?(' 1')
+
+ # Confirm we have the regular workflow metrics
+ line = lines.find { |l| l.start_with?('temporal_workflow_completed{') }
+ assert_includes line, 'workflow_type="CustomMetricsWorkflow"'
+ assert_includes line, 'task_queue="'
+ assert_includes line, 'namespace="default"'
+ assert line.end_with?(' 1')
+
+ # Confirm custom activity metric has the tags we expect
+ line = lines.find { |l| l.start_with?('my_activity_counter{') }
+ assert_includes line, 'activity_type="CustomMetricsActivity"'
+ assert_includes line, 'task_queue="'
+ assert_includes line, 'namespace="default"'
+ assert_includes line, 'someattr="someval1"'
+ assert_includes line, 'anotherattr="anotherval1"'
+ assert line.end_with?(' 123')
+
+ # Confirm custom workflow metric has the tags we expect
+ line = lines.find { |l| l.start_with?('my_workflow_histogram_sum{') }
+ assert_includes line, 'workflow_type="CustomMetricsWorkflow"'
+ assert_includes line, 'task_queue="'
+ assert_includes line, 'namespace="default"'
+ assert_includes line, 'someattr="someval2"'
+ assert_includes line, 'anotherattr="anotherval2"'
+ assert line.end_with?(' 4560')
+ end
+
+ class FailWorkflowPayloadConverter < Temporalio::Converters::PayloadConverter
+ def to_payload(value)
+ if value == 'fail-on-this-result'
+ raise Temporalio::Error::ApplicationError.new('Intentional error', type: 'IntentionalError')
+ end
+
+ Temporalio::Converters::PayloadConverter.default.to_payload(value)
+ end
+
+ def from_payload(payload)
+ value = Temporalio::Converters::PayloadConverter.default.from_payload(payload)
+ if value == 'fail-on-this'
+ raise Temporalio::Error::ApplicationError.new('Intentional error', type: 'IntentionalError')
+ end
+
+ value
+ end
+ end
+
+ class FailWorkflowPayloadConverterWorkflow < Temporalio::Workflow::Definition
+ def execute(arg)
+ if arg == 'fail'
+ "#{arg}-on-this-result"
+ else
+ Temporalio::Workflow.wait_condition { false }
+ end
+ end
+
+ workflow_update
+ def do_update(arg)
+ "#{arg}-on-this-result"
+ end
+ end
+
+ def test_fail_workflow_payload_converter
+ new_options = env.client.options.dup
+ new_options.data_converter = Temporalio::Converters::DataConverter.new(
+ payload_converter: Ractor.make_shareable(FailWorkflowPayloadConverter.new)
+ )
+ client = Temporalio::Client.new(**new_options.to_h)
+
+ # As workflow argument
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(FailWorkflowPayloadConverterWorkflow, 'fail-on-this', client:)
+ end
+ assert_equal 'IntentionalError', err.cause.type
+
+ # As workflow result
+ err = assert_raises(Temporalio::Error::WorkflowFailedError) do
+ execute_workflow(FailWorkflowPayloadConverterWorkflow, 'fail', client:)
+ end
+ assert_equal 'IntentionalError', err.cause.type
+
+ # As an update argument
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ execute_workflow(FailWorkflowPayloadConverterWorkflow, 'do-nothing', client:) do |handle|
+ handle.execute_update(FailWorkflowPayloadConverterWorkflow.do_update, 'fail-on-this')
+ end
+ end
+    # We do an extra `.cause` because the error is wrapped in a RuntimeError noting that update arg parsing failed
+ assert_equal 'IntentionalError', err.cause.cause.type
+
+ # As an update result
+ err = assert_raises(Temporalio::Error::WorkflowUpdateFailedError) do
+ execute_workflow(FailWorkflowPayloadConverterWorkflow, 'do-nothing', client:) do |handle|
+ handle.execute_update(FailWorkflowPayloadConverterWorkflow.do_update, 'fail')
+ end
+ end
+ assert_equal 'IntentionalError', err.cause.type
+ end
+
+ class ConfirmGarbageCollectWorkflow < Temporalio::Workflow::Definition
+ @initialized_count = 0
+ @finalized_count = 0
+
+ class << self
+ attr_accessor :initialized_count, :finalized_count
+
+ def create_finalizer
+ proc { @finalized_count += 1 }
+ end
+ end
+
+ def initialize
+ self.class.initialized_count += 1
+ ObjectSpace.define_finalizer(self, self.class.create_finalizer)
+ end
+
+ def execute
+ Temporalio::Workflow.wait_condition { false }
+ end
+ end
+
+ def test_confirm_garbage_collect
+ execute_workflow(ConfirmGarbageCollectWorkflow) do |handle|
+ # Wait until it is started
+ assert_eventually { assert handle.fetch_history_events.any?(&:workflow_task_completed_event_attributes) }
+ # Confirm initialized but not finalized
+ assert_equal 1, ConfirmGarbageCollectWorkflow.initialized_count
+ assert_equal 0, ConfirmGarbageCollectWorkflow.finalized_count
+ end
+
+ # Now with worker shutdown, GC and confirm finalized
+ assert_eventually do
+ GC.start
+ assert_equal 1, ConfirmGarbageCollectWorkflow.finalized_count
+ end
+ end
+
+ # TODO(cretz): To test
+ # * Common
+ # * Ractor with global state
+ # * Eager workflow start
+ # * Unawaited futures that have exceptions, need to log warning like Java does
+ # * Enhanced stack trace?
+ # * Separate abstract/interface demonstration
+ # * Replace worker client
+ # * Reset update randomness seed
+ # * Confirm thread pool does not leak, meaning thread/worker goes away after last workflow
+ # * Test workflow cancel causing other cancels at the same time but in different coroutines
+ # * 0-sleep timers vs nil timers
+ # * Interceptors
+ # * Handler
+ # * Signal/update with start
+ # * Activity
+ # * Local activity cancel (currently broken)
+end
diff --git a/temporalio/test/workflow/definition_test.rb b/temporalio/test/workflow/definition_test.rb
new file mode 100644
index 00000000..470ccc90
--- /dev/null
+++ b/temporalio/test/workflow/definition_test.rb
@@ -0,0 +1,303 @@
+# frozen_string_literal: true
+
+require 'temporalio/workflow/definition'
+require 'test'
+
+module Workflow
+ class DefinitionTest < Test
+ class ValidWorkflowSimple < Temporalio::Workflow::Definition
+ workflow_signal
+ def my_signal(some_arg); end
+
+ workflow_query
+ def my_query(some_arg); end
+
+ workflow_update
+ def my_update(some_arg); end
+ end
+
+ def test_valid_simple
+ defn = Temporalio::Workflow::Definition::Info.from_class(ValidWorkflowSimple)
+
+ assert_equal 'ValidWorkflowSimple', defn.name
+ assert_equal ValidWorkflowSimple, defn.workflow_class
+ refute defn.init
+ refute defn.raw_args
+
+ assert_equal 1, defn.signals.size
+ assert_equal 'my_signal', defn.signals['my_signal'].name
+ assert_equal :my_signal, defn.signals['my_signal'].to_invoke
+ refute defn.signals['my_signal'].raw_args
+ assert_equal Temporalio::Workflow::HandlerUnfinishedPolicy::WARN_AND_ABANDON,
+ defn.signals['my_signal'].unfinished_policy
+ assert_same defn.signals['my_signal'], ValidWorkflowSimple.my_signal
+
+ assert_equal 1, defn.queries.size
+ assert_equal 'my_query', defn.queries['my_query'].name
+ assert_equal :my_query, defn.queries['my_query'].to_invoke
+ refute defn.queries['my_query'].raw_args
+ assert_same defn.queries['my_query'], ValidWorkflowSimple.my_query
+
+ assert_equal 1, defn.updates.size
+ assert_equal 'my_update', defn.updates['my_update'].name
+ assert_equal :my_update, defn.updates['my_update'].to_invoke
+ refute defn.updates['my_update'].raw_args
+ assert_equal Temporalio::Workflow::HandlerUnfinishedPolicy::WARN_AND_ABANDON,
+ defn.updates['my_update'].unfinished_policy
+ assert_nil defn.updates['my_update'].validator_to_invoke
+      # Note: this would fail if there were a validator, since adding a
+      # validator creates a new definition
+ assert_same defn.updates['my_update'], ValidWorkflowSimple.my_update
+ end
+
+ class ValidWorkflowAdvancedBase < Temporalio::Workflow::Definition
+ workflow_signal name: 'custom-signal-name-1'
+ def my_base_signal1; end
+
+ workflow_signal name: 'custom-signal-name-2'
+ def my_base_signal2; end
+
+ workflow_signal
+ def my_base_signal3; end
+ end
+
+ class ValidWorkflowAdvanced1 < ValidWorkflowAdvancedBase
+ workflow_name 'custom-workflow-name'
+ workflow_raw_args
+
+ workflow_init
+ def initialize(arg1, arg2); end # rubocop:disable Lint/MissingSuper
+
+ def execute(arg1, arg2); end
+
+ workflow_update dynamic: true,
+ raw_args: true,
+ unfinished_policy: Temporalio::Workflow::HandlerUnfinishedPolicy::ABANDON
+ def my_dynamic_update(*args); end
+
+ workflow_update_validator :my_dynamic_update
+ def my_dynamic_update_validator(*args); end
+
+ workflow_update_validator :another_update
+ def another_update_validator(arg1, arg2); end
+
+ workflow_update name: 'custom-update-name'
+ def another_update(arg1, arg2); end
+ end
+
+ class ValidWorkflowAdvanced2 < ValidWorkflowAdvancedBase
+ workflow_dynamic
+
+ workflow_signal name: 'custom-signal-name-1'
+ def my_base_signal1; end
+
+ workflow_signal name: 'custom-signal-name-2'
+ def my_renamed_signal; end
+
+ workflow_signal
+ def my_new_signal; end
+
+ workflow_update name: 'custom-update-name'
+ def another_update; end
+ end
+
+ def test_valid_advanced
+ defn = Temporalio::Workflow::Definition::Info.from_class(ValidWorkflowAdvanced1)
+
+ assert_equal 'custom-workflow-name', defn.name
+ assert_equal ValidWorkflowAdvanced1, defn.workflow_class
+ refute defn.dynamic
+ assert defn.init
+ assert defn.raw_args
+ assert_equal 3, defn.signals.size
+ assert_equal 2, defn.updates.size
+ assert_equal :my_dynamic_update, defn.updates[nil].to_invoke
+ assert defn.updates[nil].raw_args
+ assert_equal Temporalio::Workflow::HandlerUnfinishedPolicy::ABANDON, defn.updates[nil].unfinished_policy
+ assert_equal :my_dynamic_update_validator, defn.updates[nil].validator_to_invoke
+ refute ValidWorkflowAdvanced1.respond_to?(:my_dynamic_update)
+ assert_equal :another_update, defn.updates['custom-update-name'].to_invoke
+ refute defn.updates['custom-update-name'].raw_args
+ assert_equal Temporalio::Workflow::HandlerUnfinishedPolicy::WARN_AND_ABANDON,
+ defn.updates['custom-update-name'].unfinished_policy
+ assert_equal :another_update_validator, defn.updates['custom-update-name'].validator_to_invoke
+ assert_equal 'custom-update-name', ValidWorkflowAdvanced1.another_update.name
+
+ defn = Temporalio::Workflow::Definition::Info.from_class(ValidWorkflowAdvanced2)
+
+ assert_nil defn.name
+ assert_equal ValidWorkflowAdvanced2, defn.workflow_class
+ assert defn.dynamic
+ refute defn.init
+ refute defn.raw_args
+
+ assert_equal 4, defn.signals.size
+ assert_equal :my_base_signal1, defn.signals['custom-signal-name-1'].to_invoke
+ assert_equal :my_renamed_signal, defn.signals['custom-signal-name-2'].to_invoke
+ assert_equal :my_base_signal3, defn.signals['my_base_signal3'].to_invoke
+ assert_equal :my_new_signal, defn.signals['my_new_signal'].to_invoke
+ end
+
+ def assert_invalid_workflow_code(message_contains, code_to_eval)
+ # Eval, which may fail, then try to get definition from last class
+ err = assert_raises(StandardError) do
+ before_classes = ObjectSpace.each_object(Class).to_a
+ eval(code_to_eval) # rubocop:disable Security/Eval
+ (ObjectSpace.each_object(Class).to_a - before_classes).each do |new_class|
+ Temporalio::Workflow::Definition::Info.from_class(new_class) if new_class < Temporalio::Workflow::Definition
+ end
+ end
+ assert_includes err.message, message_contains
+ end
+
+ def test_invalid_dynamic_and_name
+ assert_invalid_workflow_code 'cannot be given a name and be dynamic', <<~CODE
+ class TestInvalidDynamicAndName < Temporalio::Workflow::Definition
+ workflow_name 'my-name'
+ workflow_dynamic
+ end
+ CODE
+ end
+
+ def test_invalid_duplicate_handlers
+ assert_invalid_workflow_code 'signal my_signal_1 defined on different methods', <<~CODE
+ class TestInvalidDuplicateHandlers < Temporalio::Workflow::Definition
+ workflow_signal
+ def my_signal_1; end
+
+ workflow_signal name: 'my_signal_1'
+ def my_signal_2; end
+ end
+ CODE
+ end
+
+ def test_invalid_duplicate_handlers_different_type
+ assert_invalid_workflow_code 'my-name already defined as a different handler type', <<~CODE
+ class TestInvalidDuplicateHandlersDifferentType < Temporalio::Workflow::Definition
+ workflow_signal name: 'my-name'
+ def my_signal; end
+
+ workflow_update name: 'my-name'
+ def my_update; end
+ end
+ CODE
+ end
+
+ def test_invalid_init_not_on_initialize
+ assert_invalid_workflow_code 'was applied to not_initialize instead of initialize', <<~CODE
+ class TestInvalidInitNotOnInitialize < Temporalio::Workflow::Definition
+ workflow_init
+ def not_initialize; end
+ end
+ CODE
+ end
+
+ def test_invalid_init_not_match_execute
+ assert_invalid_workflow_code 'parameter count of initialize and execute must be the same', <<~CODE
+ class TestInvalidInitNotMatchExecute < Temporalio::Workflow::Definition
+ workflow_init
+ def initialize(arg1, arg2); end
+
+ def execute(arg3, arg4, arg5); end
+ end
+ CODE
+ end
+
+ def test_invalid_shadow_class_method
+ assert_invalid_workflow_code 'Attempting to override Temporal-defined class definition method', <<~CODE
+ class TestInvalidShadowClassMethod < Temporalio::Workflow::Definition
+ workflow_signal
+ def my_signal_1; end
+
+ def self.my_signal_1; end
+ end
+ CODE
+ end
+
+ def test_invalid_two_handler_decorators
+ assert_invalid_workflow_code 'Previous signal handler was not put on method before this handler', <<~CODE
+ class TestInvalidTwoHandlerDecorators < Temporalio::Workflow::Definition
+ workflow_signal
+ workflow_update
+ def my_update; end
+ end
+ CODE
+ end
+
+ def test_invalid_leftover_decorator
+ assert_invalid_workflow_code 'Leftover signal handler not applied to a method', <<~CODE
+ class TestInvalidLeftoverDecorator < Temporalio::Workflow::Definition
+ workflow_signal
+ end
+ CODE
+ end
+
+ def test_invalid_update_validator_no_update
+ assert_invalid_workflow_code 'Unable to find update does_not_exist', <<~CODE
+ class TestInvalidUpdateValidatorNoUpdate < Temporalio::Workflow::Definition
+ workflow_update
+ def my_update; end
+
+ workflow_update_validator :does_not_exist
+ def my_update_validator; end
+ end
+ CODE
+ end
+
+ def test_invalid_update_validator_param_mismatch
+ assert_invalid_workflow_code 'my_update_validator does not have exact parameter signature of my_update', <<~CODE
+ class TestInvalidUpdateValidatorParamMismatch < Temporalio::Workflow::Definition
+ workflow_update
+ def my_update(arg1, arg2); end
+
+ workflow_update_validator :my_update
+ def my_update_validator(arg2, arg3); end
+ end
+ CODE
+ end
+
+ def test_invalid_multiple_dynamic
+ assert_invalid_workflow_code 'Workflow signal defined on different methods', <<~CODE
+ class TestInvalidMultipleDynamic < Temporalio::Workflow::Definition
+ workflow_signal dynamic: true
+ def my_signal_1; end
+
+ workflow_signal dynamic: true
+ def my_signal_2; end
+ end
+ CODE
+ end
+
+ def test_invalid_override_different_name
+ assert_invalid_workflow_code 'Superclass handler on my_signal has name foo but current class expects bar', <<~CODE
+ class TestInvalidOverrideDifferentNameBase < Temporalio::Workflow::Definition
+ workflow_signal name: 'foo'
+ def my_signal; end
+ end
+
+ class TestInvalidOverrideDifferentName < TestInvalidOverrideDifferentNameBase
+ workflow_signal name: 'bar'
+ def my_signal; end
+ end
+ CODE
+ end
+
+ def test_invalid_override_different_type
+ assert_invalid_workflow_code(
+ 'Superclass handler on do_thing is a Temporalio::Workflow::Definition::Update ' \
+ 'but current class expects Temporalio::Workflow::Definition::Signal',
+ <<~CODE
+ class TestInvalidOverrideDifferentTypeBase < Temporalio::Workflow::Definition
+ workflow_update
+ def do_thing; end
+ end
+
+ class TestInvalidOverrideDifferentType < TestInvalidOverrideDifferentTypeBase
+ workflow_signal
+ def do_thing; end
+ end
+ CODE
+ )
+ end
+ end
+end
diff --git a/temporalio/test/workflow_utils.rb b/temporalio/test/workflow_utils.rb
new file mode 100644
index 00000000..382520ca
--- /dev/null
+++ b/temporalio/test/workflow_utils.rb
@@ -0,0 +1,73 @@
+# frozen_string_literal: true
+
+require 'securerandom'
+require 'temporalio/client'
+require 'temporalio/testing'
+require 'temporalio/worker'
+require 'temporalio/workflow'
+require 'test'
+
+module WorkflowUtils
+ # @type instance: Test
+
+ def execute_workflow(
+ workflow,
+ *args,
+ activities: [],
+ more_workflows: [],
+ task_queue: "tq-#{SecureRandom.uuid}",
+ id: "wf-#{SecureRandom.uuid}",
+ search_attributes: nil,
+ memo: nil,
+ retry_policy: nil,
+ workflow_failure_exception_types: [],
+ max_cached_workflows: 1000,
+ logger: nil,
+ client: env.client,
+ workflow_payload_codec_thread_pool: nil,
+ id_conflict_policy: Temporalio::WorkflowIDConflictPolicy::UNSPECIFIED,
+ max_heartbeat_throttle_interval: 60.0,
+ task_timeout: nil
+ )
+ worker = Temporalio::Worker.new(
+ client:,
+ task_queue:,
+ activities:,
+ workflows: [workflow] + more_workflows,
+ # TODO(cretz): Ractor support not currently working
+ workflow_executor: Temporalio::Worker::WorkflowExecutor::ThreadPool.default,
+ workflow_failure_exception_types:,
+ max_cached_workflows:,
+ logger: logger || client.options.logger,
+ workflow_payload_codec_thread_pool:,
+ max_heartbeat_throttle_interval:
+ )
+ worker.run do
+ handle = client.start_workflow(
+ workflow,
+ *args,
+ id:,
+ task_queue: worker.task_queue,
+ search_attributes:,
+ memo:,
+ retry_policy:,
+ id_conflict_policy:,
+ task_timeout:
+ )
+ if block_given?
+ yield handle, worker
+ else
+ handle.result
+ end
+ end
+ end
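+
+  # Hypothetical usage sketch (illustrative only, not part of this change): the
+  # helper above is intended to be called from a Minitest case that includes
+  # this module, e.g.:
+  #
+  #   class MyWorkflowTest < Test
+  #     include WorkflowUtils
+  #
+  #     def test_simple
+  #       # Spins up a worker on a fresh task queue, starts the workflow, and
+  #       # returns its result (or yields the handle and worker if given a block)
+  #       result = execute_workflow(SimpleWorkflow, 'some-input')
+  #       assert_equal 'expected-output', result
+  #     end
+  #   end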
+
+ def assert_eventually_task_fail(handle:, message_contains: nil)
+ assert_eventually do
+ event = handle.fetch_history_events.find(&:workflow_task_failed_event_attributes)
+ refute_nil event
+ assert_includes(event.workflow_task_failed_event_attributes.failure.message, message_contains) if message_contains
+ event
+ end
+ end
+end