latest--partial-step.schema
A single step in a workflow. The 'step' property determines the type and schema to use.
One of
Definitions
Breaks out of the current loop (e.g., WHILE or FOR) based on the given condition.
The condition to evaluate. Loop will be broken when condition evaluates to true. If not provided, breaks unconditionally.
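As an illustration, a BREAK step might be used inside a loop like this. This is a hypothetical sketch: the step keywords come from this schema, but the surrounding YAML shape and property placement are assumptions, not confirmed syntax.

```yaml
# Hypothetical sketch; exact flow syntax may differ.
- step: WHILE
  condition: "{{ hasMoreWork }}"
  steps:
    - step: BREAK
      condition: "{{ counter > 10 }}"   # breaks out when this evaluates to true
```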
Logs debug information about the current flow context and step configuration. Optionally triggers a debugger breakpoint.
Optional identifier for the debug step, included in the log message.
If true, skips triggering the debugger breakpoint.
Executes a set of steps for a fixed number of iterations (numeric) or iterates over an array (items).
The name of the variable to set with the current iteration value.
The starting value of the loop variable (numeric).
The ending value of the loop variable (numeric, inclusive).
Step increment for numeric loops (can be negative). If omitted, inferred from start/end (1 or -1).
An array to iterate over (e.g., an expression producing an array). If provided, numeric start/end are ignored.
Steps to execute in each iteration of the loop.
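A numeric FOR loop might be sketched as follows. The property names (`variable`, `start`, `end`, `steps`) are inferred from the descriptions above and should be treated as assumptions:

```yaml
# Hypothetical sketch; property names inferred, not confirmed.
- step: FOR
  variable: i      # loop variable set on each iteration
  start: 1
  end: 5           # inclusive; increment of 1 inferred from start/end
  steps:
    - step: DEBUG
      id: "iteration {{ i }}"
```

Passing an array via the items property would instead iterate over its elements, ignoring the numeric bounds.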
Execute a conditional branch based on a condition
The condition to evaluate. Can be a template expression with {{ }} syntax.
Legacy alternative to condition. The condition to evaluate using JavaScript syntax.
Steps to execute if the condition is true
Steps to execute if the condition is false
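An IF step might look like the sketch below. The branch keys (`then`, `else`) are assumed names for the true/false step lists described above:

```yaml
# Hypothetical sketch; branch property names are assumptions.
- step: IF
  condition: "{{ user.isAdmin }}"
  then:
    - step: SHOW_NOTIFICATION
      message: "Admin mode enabled"
  else:
    - step: END
```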
Ends the execution of a flow based on the given condition, with optional notification. (Note: Aliases END_IF are deprecated, use 'END' instead)
The condition to evaluate. Flow will be terminated gracefully when condition evaluates to true.
Optional notification to show when ending the flow.
Runs multiple step lists in parallel and merges their flow contexts upon completion.
An array of step arrays, where each sub-array represents a branch of steps to execute in parallel.
Sets variables in the flow context. Can execute steps, evaluate expressions, or set key-value pairs.
Steps to execute before setting the variable. The result is stored in the flow context.
JavaScript expression to evaluate and set as the variable value.
Text to resolve (with variables) and set as the variable value.
Raw text to set as the variable value without resolution.
Name of the variable to set in the flow context.
Convert the value to a specific type (e.g., "number").
Output mappings from step outputs to flow context
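Two SET variants might be sketched as follows; the property names (`name`, `expression`, `text`) mirror the descriptions above but are not confirmed syntax:

```yaml
# Hypothetical sketch; property names inferred from the descriptions.
- step: SET
  name: total
  expression: "items.length * 2"        # JavaScript expression variant

- step: SET
  name: greeting
  text: "Hello, {{ user.name }}"        # resolved-text variant
```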
Evaluates a switch value and executes the steps for the matching case. Supports both array and object syntax for cases.
The cases to match against the switch value. Each case key should match a possible switch value, with the value being an array of steps to execute. Use "default" as a fallback case.
The condition to evaluate for the switch value. Can be a template expression with {{ }} syntax.
Legacy alternative to condition. The switch value to evaluate using JavaScript syntax.
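Using the object syntax for cases, a SWITCH step might be sketched like this (case keys and the `cases` property name are assumptions based on the descriptions above):

```yaml
# Hypothetical sketch; exact property names not confirmed.
- step: SWITCH
  condition: "{{ document.type }}"
  cases:
    article:
      - step: SHOW_NOTIFICATION
        message: "Editing an article"
    default:                 # fallback case
      - step: END
```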
Calls a predefined template of steps by name and executes it.
The name of the template to call from the TEMPLATES: section.
Executes a set of steps in a try block, and if an error occurs, executes catch steps.
Steps to execute in the try block.
Steps to execute if an error occurs in the try block.
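A TRY step might be sketched as below; the `steps` and `catch` property names are assumed from the descriptions above:

```yaml
# Hypothetical sketch; property names are assumptions.
- step: TRY
  steps:
    - step: PARSE_JSON       # may throw on invalid input
  catch:
    - step: SHOW_NOTIFICATION
      message: "Could not parse the response"
```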
Repeatedly executes a set of steps while a condition evaluates to true.
The condition to evaluate for the loop. Can be a template expression with {{ }} syntax.
Legacy alternative to condition. The test expression to evaluate using JavaScript syntax.
Steps to execute in each iteration of the loop.
Retrieves text content from the document or a specific location and sets it as output. (Note: Aliases CTX_GET_TEXT_CONTENT are deprecated, use 'GET_TEXT_CONTENT' instead)
Location to read text from. Options: CURSOR, CURSOR_PARAGRAPH, XPATH, REPORT.
XPath expression for reading text at a specific location.
Output mappings from step outputs to flow context
Retrieves XML content from the document or a specific location and sets it as output. (Note: Aliases CTX_GET_XML_CONTENT are deprecated, use 'GET_XML_CONTENT' instead)
Location to read XML from. Options: CURSOR, CURSOR_PARAGRAPH, XPATH, REPORT.
XPath expression for reading XML at a specific location.
Output mappings from step outputs to flow context
Retrieves information about the currently selected content object and sets it as output. (Note: Aliases CTX_SELECTED_OBJECT are deprecated, use 'SELECTED_OBJECT' instead)
Output mappings from step outputs to flow context
Set a field value in the object panel using a selector or XPath. Supports tagsinput and plain fields. (Note: Aliases CTX_SET_METADATA are deprecated, use 'SET_METADATA' instead)
CSS selector of the input element to update.
Input mappings from flow context to step inputs
Shows a response to the user in a modal. Can run inline steps before showing and supports HTML mode. (Note: Aliases CTX_SHOW_RESPONSE are deprecated, use 'SHOW_RESPONSE' instead)
Steps to execute while showing an in-place loading UI before the response is displayed.
Render mode for the response. Use "html" to render HTML; otherwise plain text is used.
Input mappings from flow context to step inputs
Shows a notification to the user with optional title and resolved message. (Note: Aliases CTX_SHOW_NOTIFICATION are deprecated, use 'SHOW_NOTIFICATION' instead)
The message to show. Supports variable resolution.
Type of notification (info, warning, error, message, task, success).
Optional title for the notification.
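A notification step might be sketched as below; the property name for the notification type (`messageType`) is an assumption based on the description above:

```yaml
# Hypothetical sketch; the type property name is assumed.
- step: SHOW_NOTIFICATION
  title: "Done"
  message: "Saved {{ count }} items"   # supports variable resolution
  messageType: success
```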
Insert text at the given location (cursor, xpath or report). If the step reads text from the flowContext, you can use a preceding SET step to provide the content. (Note: Aliases CTX_INSERT_TEXT are deprecated, use 'INSERT_TEXT' instead)
Optional path or variable name to read the content from the flowContext. If omitted, the step uses the content from the default text input.
Where to insert the text. Use CURSOR (default), XPATH, or REPORT.
XPath expression used when at is XPATH.
Write content even if the document is considered read-only.
Input mappings from flow context to step inputs
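Pairing a SET step with INSERT_TEXT, as the description suggests, might look like this sketch (the `in` and `at` property names come from the descriptions above; the overall YAML shape is assumed):

```yaml
# Hypothetical sketch; exact flow syntax may differ.
- step: SET
  name: summary
  text: "Summary for {{ document.title }}"
- step: INSERT_TEXT
  in: summary      # read content from this flow-context variable
  at: CURSOR       # default insertion location
```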
Insert XML content at the given location. The content can be provided via the flowContext (using in) or via the default text input. (Note: Aliases CTX_INSERT_XML are deprecated, use 'INSERT_XML' instead)
Optional path or variable name to read the XML content from the flowContext. If omitted, the step uses the default text input.
Where to insert the XML. Use CURSOR (default) or XPATH.
XPath expression used when at is XPATH.
Position mode for insertion when using cursor (e.g. insertInline, insertBefore, insertAfter).
Write content even if the document is considered read-only.
Replace existing text at the given location (cursor paragraph, cursor or xpath) with the provided content. Content can come from a flow variable (using in) or the default text input. (Note: Aliases CTX_REPLACE_TEXT are deprecated, use 'REPLACE_TEXT' instead)
Optional text or variable name to read the content from the flowContext. If omitted, the step uses the default text input.
Where to replace the text. Options: XPATH, CURSOR_PARAGRAPH, CURSOR.
XPath expression used when at is XPATH.
Write content even if the document is considered read-only.
Replace XML content at the given location. Content can be sourced from a flow variable (in) or the default text input. (Note: Aliases CTX_REPLACE_XML are deprecated, use 'REPLACE_XML' instead)
Optional text or variable name to read the XML content from the flowContext.
Where to replace the XML. Options: XPATH, CURSOR_PARAGRAPH, CURSOR.
XPath expression used when at is XPATH.
Write content even if the document is considered read-only.
Insert a list of items into the document. The items can be provided as a list input or be generated from a text input (converted to a list using ProcessToList). (Note: Aliases CTX_INSERT_LIST are deprecated, use 'INSERT_LIST' instead)
Optional text or variable name to read the content (text) from the flowContext. If omitted, the step uses the default text input.
Container tag to use for the list (default 'ul').
List item tag to use for entries (default 'li').
When true, inserts items one-by-one instead of a full container.
XPath expression used when at is XPATH.
Where to insert the list. Use CURSOR (default) or XPATH.
Position mode for insertion when using cursor (e.g. insertInline, insertBefore, insertAfter).
Write content even if the document is considered read-only.
Input mappings from flow context to step inputs
Sanitizes text input. Can strip markdown code blocks and validate/repair XML content before forwarding the result to the flow context.
If true, removes fenced markdown code blocks from the input before further processing.
If true, attempts to validate and repair XML fragments found in the input.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Executes inline JavaScript (script) or a JS-style template (template). Use with caution — only in trusted flows.
A JavaScript expression or block to be executed in the step context. If mode: "async" the script is treated as async.
A template string that will be resolved using the step context and then parsed as JSON. Use this when you want to build structured output.
Execution mode for scripts. Use "async" to run the script using the asynchronous evaluator.
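A SCRIPT step might be sketched as follows; the `script` property name comes from the description above, while the multi-line YAML form is an assumption:

```yaml
# Hypothetical sketch; use only in trusted flows.
- step: SCRIPT
  script: |
    // executed in the step context
    return items.filter(i => i.active).length;
```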
Convert a textual AI response into a list of strings. Detects JSON arrays, ordered/unordered lists, comma-separated values, or falls back to a single-item array. (Note: Aliases PROCESS_TO_LIST are deprecated, use 'TO_LIST' instead)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Parses JSON content from the text input and stores the resulting object in the flow context. (Note: Aliases PROCESS_PARSE_JSON are deprecated, use 'PARSE_JSON' instead)
Optional path or variable name to read the JSON string from the flowContext. If omitted, the default text input is used.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Converts markdown text to HTML and sets it as text output.
Text input or flow variable to convert. If omitted, the default text input is used.
Converts a base64-encoded string to a Blob and stores it in the flow context as a blob output.
Optional path or variable name to read the base64 string from the flowContext. If omitted, the default text input is used.
Optional content type for the resulting Blob (e.g. "image/png").
Output mappings from step outputs to flow context
Upload an image to BrighterAI, poll for processing completion and download the anonymized image. The resulting anonymized image blob is written to the configured output. (Note: Aliases SERVICE_BRIGHTER_AI are deprecated, use 'BRIGHTER_AI' instead)
Optional service name override. If not specified, defaults to 'BRIGHTER_AI'.
Service operation name (e.g. blur, dnat, mask). If not provided, blur is used.
Optional url parameters object passed to the BrighterAI upload endpoint.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Translate text using DeepL. Supports many optional DeepL parameters and returns the translated text as the default text output. (Note: Aliases SERVICE_DEEPL_TRANSLATE are deprecated, use 'DEEPL_TRANSLATE' instead)
Text to translate (can include templates). If omitted, the default text input is used.
Target language code for the translation (e.g. EN, DE, FR).
Optional service name override. If not specified, defaults to 'DEEPL'.
Optional: replace xml:lang attributes in the translated output with this language code.
How to handle tags; default is "xml".
Language of the text to be translated. If this parameter is omitted, the API will attempt to detect the language of the text and translate it.
Additional context that can influence a translation but is not translated itself.
Sets whether the translation engine should first split the input into sentences.
Sets whether the translation engine should respect the original formatting, even if it would usually correct some aspects.
Sets whether the translated text should lean towards formal or informal language.
Specify the glossary to use for the translation.
Disable the automatic detection of the XML structure.
Comma-separated list of XML tags which never split sentences.
Comma-separated list of XML tags which always cause splits.
Comma-separated list of XML tags that indicate text not to be translated.
Specifies which DeepL model should be used for translation.
When true, the response will include the billed_characters parameter.
Output mappings from step outputs to flow context
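A minimal translate step might be sketched like this; the `targetLang` property name is an assumption modeled on the target-language description above, and the language codes follow DeepL conventions:

```yaml
# Hypothetical sketch; property names not confirmed by the schema dump.
- step: DEEPL_TRANSLATE
  text: "{{ paragraph }}"    # supports templates
  targetLang: DE
  formality: prefer_more     # lean toward formal language
```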
Rephrase/write text using DeepL Write endpoint. Returns the rephrased text in the default text output. (Note: Aliases SERVICE_DEEPL_WRITE are deprecated, use 'DEEPL_WRITE' instead)
Optional service name override. If not specified, defaults to 'DEEPL_WRITE'.
Text to be rephrased or written. If omitted, default text input is used.
Optional target language for writing/rephrasing.
Optional writing style parameter.
Optional tone parameter for the writing API.
Output mappings from step outputs to flow context
Fetch object content from EDAPI and return it as blob or configured outputs. The content id can be provided via cfg.contentId or via a text input.
Explicit content id to fetch. If omitted, the step will try to read the id from the default text input.
Format of the content to fetch.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Upload an image to EDAPI by providing an image object (url or blob). Returns the created object metadata as output. (Note: Aliases CTX_UPLOAD_IMAGE are deprecated, use 'UPLOAD_IMAGE' instead)
Type of object to create (default: Image).
Creation mode, e.g. AUTO_RENAME.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Update binary content of an existing object in EDAPI using the provided blob. Returns updated object metadata. (Note: Aliases CTX_UPDATE_BINARY_CONTENT are deprecated, use 'UPDATE_BINARY_CONTENT' instead)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Upload content to EDAPI by providing a blob or specifying parameters (filename, type). Returns created object metadata.
Options data for the upload.
Filename to use for the uploaded object
BaseType - Used to identify upload location when using basefolder configuration.
Object type to create (e.g. File, Image)
Creation mode (e.g. AUTO_RENAME)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Generate speech audio from text using ElevenLabs TTS. Text is taken from the default text input or a flow variable via in.
Optional service name override. If not specified, defaults to 'ELEVENLABS'.
Override API key from service configuration
Override endpoint URL from service configuration
Voice identifier to use for synthesis
TTS model id
Optional voice settings object
Desired audio output format (e.g. mp3, wav, pcm)
Language code for the synthesis
Pronunciation dictionary locators (array or string) - resolved
Seed for deterministic generation
Optional previous_text context
Optional next_text context
Apply normalization to text
Apply language specific normalization
MIME type to use for the produced blob (e.g. audio/mpeg)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Transcribe audio using ElevenLabs STT. Provides the transcribed text and optional JSON result.
Optional service name override. If not specified, defaults to 'ELEVENLABS'.
Override API key from service configuration
Override endpoint URL from service configuration
Model id to use for transcription - default: scribe_v1
Language code for transcription
Tag audio events option
Number of speakers to detect
Timestamps granularity
Enable diarization
Additional output formats
Input file format (e.g. wav, mp3)
URL to fetch input from cloud
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Initiate OAuth authorization flow for YouTube (requires ProActions-Hub). Opens the auth URL when available unless cfg.omitOpenAuth is true.
Optional service name override. If not specified, defaults to 'HUB'.
If true, do not automatically open the authorization URL in a new window; return the URL in the response instead.
Optional account identifier used by the Hub API.
Output mappings from step outputs to flow context
Check the status of a previously initiated YouTube OAuth authorization flow.
Optional service name override. If not specified, defaults to 'HUB'.
Optional account identifier used by the Hub API.
Output mappings from step outputs to flow context
Log out the configured Hub YouTube account. Uses POST to logout and returns the resulting object.
Optional service name override. If not specified, defaults to 'HUB'.
Account identifier to log out (resolved).
Output mappings from step outputs to flow context
Upload a video to YouTube via the Hub service. Supports video file, optional thumbnail, metadata fields (title, description, privacyStatus, categoryId), tags and additionalMetadata. Progress can be shown when cfg.updateProgress is true.
Optional service name override. If not specified, defaults to 'HUB'.
Video title
Video description
Privacy status: public, unlisted, or private
YouTube category id
Array of tags to set on the uploaded video
Optional object with additional metadata to attach
If true, progress will be reported to ProgressBar during upload
Account identifier for the Hub target (resolved)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Extract textual content from a URL using the ProActions-Hub extraction tool. The URL can be provided via the default text input or the in/url configuration.
Optional service name override. If not specified, defaults to 'HUB'.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Create or reuse an OpenAI assistant. If cfg.assistantId is provided the assistant will be retrieved; otherwise a new assistant is created. The created or retrieved assistant is written to the flow context.
Optional service name override. If not specified, defaults to 'OPENAI_COMPLETION'.
If provided, load the assistant with this ID instead of creating a new one.
If true and storeIn is provided, try to reuse a persisted assistant id.
Storage location for persisting assistant id: page/session/browser.
Initial instructions / system prompt for the assistant (supports templates).
Model to use when creating the assistant
Output mappings from step outputs to flow context
Create or reuse an OpenAI thread. If cfg.reuse and cfg.storeIn are provided the thread id may be reused. The created thread is written to the flow context.
Optional service name override. If not specified, defaults to 'OPENAI_COMPLETION'.
If true and storeIn provided, try to reuse a persisted thread id
Storage location for persisting thread id: page/session/browser.
Output mappings from step outputs to flow context
Delete a thread using its id. Provide the thread object or id via inputs.
Optional service name override. If not specified, defaults to 'OPENAI_COMPLETION'.
Input mappings from flow context to step inputs
Create a vector store for thread file tools and optionally upload files. Writes created store to flow context and optional uploaded file list to an optional output.
Optional service name override. If not specified, defaults to 'OPENAI_COMPLETION'.
Name of the vector store
If true replace existing stores instead of appending
Expiration configuration for the vector store
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Send a message to a thread/assistant. The assistant and thread inputs are required. The message can be provided via cfg.instruction or via default text input. The assistant response text is stored in the default text output.
Optional service name override. If not specified, defaults to 'OPENAI_COMPLETION'.
Optional message instruction; if omitted the default text input is used.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Chat/completion step that calls OpenAI (or Hub/Azure variants). Supports messages, images, audio, attachments, tools (functions), structured outputs (json_schema/json_object/list), reasoning and audio output. Many step-level configuration options are supported via cfg.*. (Note: Aliases SERVICE_OPENAI_COMPLETION are deprecated, use 'OPENAI_COMPLETION' instead)
Optional service name override. If not specified, defaults to 'OPENAI_COMPLETION'.
Model id or deployment id to use
Id of a stored prompt configuration to reuse
The user prompt to send to the LLM (supports templates)
The system prompt to send to the LLM (supports templates)
Raw options that will be merged into the request payload
Structured response format. Can be a string ("json_object" or "list") or an object with json_schema definition
Configuration object to request audio output (voice/format)
Optional tool selection policy
Maximum number of tool iterations to perform
Behavior after a tool result (none|auto)
Array of function descriptors for tool calling
Whether to reuse full flowContext when executing function templates
Optional safety identifier for the request
Array of prior messages to include in the conversation (overrides flowContext.messages)
Array or single image input(s) defined inline via cfg.images or cfg.image
Single inline image config
Single inline audio config
Array of audio inputs
Reasoning configuration object to pass to the API (for models supporting reasoning)
Image detail level for image inputs (low, high, auto)
Audio format for audio inputs (e.g., wav, mp3)
Array of generic attachments (images, audio, text) to include in the user message
Output mappings from step outputs to flow context
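A basic completion step might be sketched as follows. The `prompt`/`systemPrompt` and `responseFormat` property names are assumptions modeled on the descriptions above, and the model id is illustrative:

```yaml
# Hypothetical sketch; property names and model id are assumptions.
- step: OPENAI_COMPLETION
  model: gpt-4o
  systemPrompt: "You are a careful copy editor."
  prompt: "Improve this text: {{ selection }}"
  responseFormat: json_object   # request structured output
```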
Chat/completion step that calls OpenAI (or Hub/Azure variants). Supports messages, images, audio, attachments, tools (functions), structured outputs (json_schema/json_object/list), reasoning and audio output. Many step-level configuration options are supported via cfg.*.
Optional service name override. If not specified, defaults to 'HUB'.
Model id or deployment id to use
Id of a stored prompt configuration to reuse
The user prompt to send to the LLM (supports templates)
The system prompt to send to the LLM (supports templates)
Raw options that will be merged into the request payload
Structured response format. Can be a string ("json_object" or "list") or an object with json_schema definition
Configuration object to request audio output (voice/format)
Optional tool selection policy
Maximum number of tool iterations to perform
Behavior after a tool result (none|auto)
Array of function descriptors for tool calling
Whether to reuse full flowContext when executing function templates
Optional safety identifier for the request
Array of prior messages to include in the conversation (overrides flowContext.messages)
Array or single image input(s) defined inline via cfg.images or cfg.image
Single inline image config
Single inline audio config
Array of audio inputs
Reasoning configuration object to pass to the API (for models supporting reasoning)
Image detail level for image inputs (low, high, auto)
Audio format for audio inputs (e.g., wav, mp3)
Array of generic attachments (images, audio, text) to include in the user message
Override the target used on service configuration level.
Output mappings from step outputs to flow context
Chat/completion step that calls OpenAI (or Hub/Azure variants). Supports messages, images, audio, attachments, tools (functions), structured outputs (json_schema/json_object/list), reasoning and audio output. Many step-level configuration options are supported via cfg.*. (Note: Aliases SERVICE_AZURE_OPENAI_COMPLETION are deprecated, use 'AZURE_OPENAI_COMPLETION' instead)
Optional service name override. If not specified, defaults to 'AZURE_OPENAI_COMPLETION'.
Model id or deployment id to use
Id of a stored prompt configuration to reuse
The user prompt to send to the LLM (supports templates)
The system prompt to send to the LLM (supports templates)
Raw options that will be merged into the request payload
Structured response format. Can be a string ("json_object" or "list") or an object with json_schema definition
Configuration object to request audio output (voice/format)
Optional tool selection policy
Maximum number of tool iterations to perform
Behavior after a tool result (none|auto)
Array of function descriptors for tool calling
Whether to reuse full flowContext when executing function templates
Optional safety identifier for the request
Array of prior messages to include in the conversation (overrides flowContext.messages)
Array or single image input(s) defined inline via cfg.images or cfg.image
Single inline image config
Single inline audio config
Array of audio inputs
Reasoning configuration object to pass to the API (for models supporting reasoning)
Image detail level for image inputs (low, high, auto)
Audio format for audio inputs (e.g., wav, mp3)
Array of generic attachments (images, audio, text) to include in the user message
Output mappings from step outputs to flow context
Generate images using OpenAI image models (DALL·E and GPT-Image-1). Supports model, size, quality, count (n), response_format and other parameters. The generated images are stored in the configured output key or default imageList. (Note: Aliases SERVICE_OPENAI_IMAGE_GENERATION are deprecated, use 'OPENAI_IMAGE_GENERATION' instead)
Optional service name override. If not specified, defaults to 'OPENAI_IMAGE_GENERATION'.
Image model to use (gpt-image-1, dall-e-3, dall-e-2)
The user prompt to send to the LLM (supports templates)
Desired image size (model-dependent)
Image quality setting
Number of images to generate (1-4; DALL·E 3 supports only 1)
Image style for supported models (e.g. natural, vivid)
DALL·E response format: 'url' or 'b64_json' (gpt-image-1 returns b64_json)
Output format for gpt-image-1 (png|jpeg|webp)
Background setting (e.g. transparent)
Random seed for reproducible results
User identifier for attribution and policy monitoring
Request timeout in milliseconds
Maximum number of retries for recoverable errors
FlowContext key where images will be stored (default: imageList)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
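A hypothetical image-generation step, assuming the config keys mirror the parameter names above (`size`, `quality`, `n`, `output_format`; the output-key property name is an assumption):

```json
{
  "step": "OPENAI_IMAGE_GENERATION",
  "model": "gpt-image-1",
  "prompt": "A watercolor lighthouse at dusk",
  "size": "1024x1024",
  "quality": "high",
  "n": 2,
  "output_format": "png",
  "out": "imageList"
}
```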
Image generation using ProActions Hub.
Image generation using ProActions Hub.
Optional service name override. If not specified, defaults to 'HUB'.
Image model to use (gpt-image-1, dall-e-3, dall-e-2)
The user prompt to send to the LLM (supports templates)
Desired image size (model-dependent)
Image quality setting
Number of images to generate (1-4; DALL·E 3 supports only 1)
Image style for supported models (e.g. natural, vivid)
DALL·E response format: 'url' or 'b64_json' (gpt-image-1 returns b64_json)
Output format for gpt-image-1 (png|jpeg|webp)
Background setting (e.g. transparent)
Random seed for reproducible results
User identifier for attribution and policy monitoring
Request timeout in milliseconds
Maximum number of retries for recoverable errors
FlowContext key where images will be stored (default: imageList)
Override the target used on service configuration level.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Image generation using Azure OpenAI.
Image generation using Azure OpenAI. (Note: Aliases SERVICE_AZURE_OPENAI_IMAGE_GENERATION are deprecated, use 'AZURE_OPENAI_IMAGE_GENERATION' instead)
Optional service name override. If not specified, defaults to 'AZURE_OPENAI_IMAGE_GENERATION'.
Image model to use (gpt-image-1, dall-e-3, dall-e-2)
The user prompt to send to the LLM (supports templates)
Desired image size (model-dependent)
Image quality setting
Number of images to generate (1-4; DALL·E 3 supports only 1)
Image style for supported models (e.g. natural, vivid)
DALL·E response format: 'url' or 'b64_json' (gpt-image-1 returns b64_json)
Output format for gpt-image-1 (png|jpeg|webp)
Background setting (e.g. transparent)
Random seed for reproducible results
User identifier for attribution and policy monitoring
Request timeout in milliseconds
Maximum number of retries for recoverable errors
FlowContext key where images will be stored (default: imageList)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Generate speech audio using the OpenAI Speech API (or corresponding Hub/Azure variants). Reads text from the default text input and returns an audio blob.
Generate speech audio using the OpenAI Speech API (or corresponding Hub/Azure variants). Reads text from the default text input and returns an audio blob. (Note: Aliases SERVICE_OPENAI_SPEECH are deprecated, use 'OPENAI_SPEECH' instead)
Optional service name override. If not specified, defaults to 'OPENAI_SPEECH'.
Model id to use for TTS (e.g. tts-1)
Voice id to use for synthesis
Response audio format (e.g. mp3, wav, pcm)
Playback speed multiplier (e.g. 1.0)
Optional MIME type for the produced blob (e.g. audio/mpeg)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
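A minimal speech-synthesis sketch, assuming the config keys match the property names above (`model`, `voice`, `format`, `speed`) and that `inputs` maps the default text input from a flow-context variable:

```json
{
  "step": "OPENAI_SPEECH",
  "model": "tts-1",
  "voice": "alloy",
  "format": "mp3",
  "speed": 1.0,
  "inputs": { "text": "articleSummary" }
}
```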
ProActions Hub-compatible speech generation step. Inherits behavior from OPENAI_SPEECH.
ProActions Hub-compatible speech generation step. Inherits behavior from OPENAI_SPEECH.
Optional service name override. If not specified, defaults to 'HUB'.
Model id to use for TTS (e.g. tts-1)
Voice id to use for synthesis
Response audio format (e.g. mp3, wav, pcm)
Playback speed multiplier (e.g. 1.0)
Optional MIME type for the produced blob (e.g. audio/mpeg)
Override the target used on service configuration level.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Azure OpenAI speech step alias.
Azure OpenAI speech step alias. (Note: Aliases SERVICE_AZURE_OPENAI_SPEECH are deprecated, use 'AZURE_OPENAI_SPEECH' instead)
Optional service name override. If not specified, defaults to 'AZURE_OPENAI_SPEECH'.
Model id to use for TTS (e.g. tts-1)
Voice id to use for synthesis
Response audio format (e.g. mp3, wav, pcm)
Playback speed multiplier (e.g. 1.0)
Optional MIME type for the produced blob (e.g. audio/mpeg)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Transcribe audio using OpenAI (or Azure/Hub variants). Reads an audio file/blob from inputs and writes the transcription text and optional segment list.
Transcribe audio using OpenAI (or Azure/Hub variants). Reads an audio file/blob from inputs and writes the transcription text and optional segment list. (Note: Aliases SERVICE_OPENAI_TRANSCRIPTION are deprecated, use 'OPENAI_TRANSCRIPTION' instead)
Optional service name override. If not specified, defaults to 'OPENAI_TRANSCRIPTION'.
Optional prompt/instruction to bias transcription.
Optional language code hinting the spoken language for transcription.
Optional temperature (randomness) parameter for the transcription model.
Optional model override (e.g. 'whisper-1').
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
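A hypothetical transcription step: the `inputs`/`outputs` mapping keys (`audio`, `text`, `segments`) are assumptions based on the descriptions above, not confirmed names:

```json
{
  "step": "OPENAI_TRANSCRIPTION",
  "model": "whisper-1",
  "language": "en",
  "prompt": "Technical vocabulary: ProActions, flowContext",
  "inputs": { "audio": "recordedAudio" },
  "outputs": { "text": "transcript", "segments": "transcriptSegments" }
}
```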
Hub-compatible transcription step (delegates to hub service). Writes transcription text and optional segments list.
Hub-compatible transcription step (delegates to hub service). Writes transcription text and optional segments list. (Note: Aliases SERVICE_OPENAI_TRANSCRIPTION are deprecated, use 'HUB_TRANSCRIPTION' instead)
Optional service name override. If not specified, defaults to 'HUB'.
Optional prompt/instruction to bias transcription.
Optional language code hinting the spoken language for transcription.
Optional temperature (randomness) parameter for the transcription model.
Optional model override (e.g. 'whisper-1').
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Azure OpenAI transcription alias (delegates to the OpenAI transcription implementation).
Azure OpenAI transcription alias (delegates to the OpenAI transcription implementation). (Note: Aliases SERVICE_AZURE_OPENAI_TRANSCRIPTION are deprecated, use 'AZURE_OPENAI_TRANSCRIPTION' instead)
Optional service name override. If not specified, defaults to 'AZURE_OPENAI_TRANSCRIPTION'.
Optional prompt/instruction to bias transcription.
Optional language code hinting the spoken language for transcription.
Optional temperature (randomness) parameter for the transcription model.
Optional model override (e.g. 'whisper-1').
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Generic REST step. Builds a Request from the given configuration (url, method, headers, parameters, body or formData) and executes it using the host HTTP helper. Results are written to cfg.outputs when provided or to the default text output.
Generic REST step. Builds a Request from the given configuration (url, method, headers, parameters, body or formData) and executes it using the host HTTP helper. Results are written to cfg.outputs when provided or to the default text output. (Note: Aliases SERVICE_REST are deprecated, use 'REST' instead)
The URL to call. Supports template expressions and variable resolution.
HTTP method to use (GET, POST, PUT, DELETE, ...). Default: GET.
Optional headers object. Values will be resolved against the flow context.
Optional query parameters object. Values will be resolved against the flow context.
Optional request body. May be a string or object. Objects are JSON-stringified by the step.
Optional formData object. Supports File/Blob entries and arrays. Values are resolved against the flow context.
Optional low-level RequestInit overrides passed to the fetch helper.
Output mappings from step outputs to flow context
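The REST step's config keys (`url`, `method`, `headers`, `parameters`, `body`, `formData`, `outputs`) are named above; as a sketch (the `outputs` mapping shape and the placeholder variables are assumptions), a POST call might look like:

```json
{
  "step": "REST",
  "url": "https://api.example.com/items/{{itemId}}",
  "method": "POST",
  "headers": { "Authorization": "Bearer {{apiToken}}" },
  "parameters": { "expand": "details" },
  "body": { "name": "{{itemName}}" },
  "outputs": { "text": "apiResponse" }
}
```

Per the description, object bodies are JSON-stringified by the step, and header/parameter values are resolved against the flow context before the request is built.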
Upscale an image using StabilityAI upscale endpoint. Uploads a provided image blob and returns the upscaled image blob.
Upscale an image using StabilityAI upscale endpoint. Uploads a provided image blob and returns the upscaled image blob. (Note: Aliases SERVICE_STABILITY_AI_UPSCALE are deprecated, use 'STABILITY_AI_UPSCALE' instead)
Optional service name override. If not specified, defaults to 'STABILITY_AI'.
Optional prompt describing desired changes to the image
Desired output format (e.g. jpeg, png)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Outpaint an image region using StabilityAI. Supports coordinates (left,right,up,down) to define the outpainting area.
Outpaint an image region using StabilityAI. Supports coordinates (left,right,up,down) to define the outpainting area. (Note: Aliases SERVICE_STABILITY_AI_OUTPAINT are deprecated, use 'STABILITY_AI_OUTPAINT' instead)
Optional service name override. If not specified, defaults to 'STABILITY_AI'.
Pixels to extend on the left edge (default 0)
Pixels to extend on the right edge (default 0)
Pixels to extend on the top edge (default 0)
Pixels to extend on the bottom edge (default 0)
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
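An outpainting sketch, using the coordinate names from the description (left, right, up, down); the `inputs`/`outputs` mapping keys are assumptions:

```json
{
  "step": "STABILITY_AI_OUTPAINT",
  "left": 256,
  "right": 256,
  "up": 0,
  "down": 128,
  "inputs": { "image": "sourceImage" },
  "outputs": { "image": "outpaintedImage" }
}
```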
Search and replace regions or objects within an image using StabilityAI. Provide a prompt and a search_prompt to locate and replace visual elements.
Search and replace regions or objects within an image using StabilityAI. Provide a prompt and a search_prompt to locate and replace visual elements. (Note: Aliases SERVICE_STABILITY_AI_SEARCH_AND_REPLACE are deprecated, use 'STABILITY_AI_SEARCH_AND_REPLACE' instead)
Optional service name override. If not specified, defaults to 'STABILITY_AI'.
Prompt describing replacement target or desired result
Prompt describing what to search for in the image
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Search and recolor elements in an image using StabilityAI. Provide select_prompt and prompt to find and recolor elements.
Search and recolor elements in an image using StabilityAI. Provide select_prompt and prompt to find and recolor elements. (Note: Aliases SERVICE_STABILITY_AI_SEARCH_AND_RECOLOR are deprecated, use 'STABILITY_AI_SEARCH_AND_RECOLOR' instead)
Optional service name override. If not specified, defaults to 'STABILITY_AI'.
Prompt to select elements to recolor
Prompt describing the recoloring operation
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
Reads content from the user clipboard. Prefers HTML content when available, falls back to plain text.
Reads content from the user clipboard. Prefers HTML content when available, falls back to plain text.
Output mappings from step outputs to flow context
Writes text to the user clipboard. Reads the content from the configured input (cfg.text) or from the default text input in the flow context.
Writes text to the user clipboard. Reads the content from the configured input (cfg.text) or from the default text input in the flow context.
Text to write to clipboard. If omitted, uses the default text input from flow context.
Downloads a file constructed from text or an existing Blob. The filename is required.
Downloads a file constructed from text or an existing Blob. The filename is required. (Note: Aliases DOWNLOAD_TEXT are deprecated, use 'DOWNLOAD' instead)
Name of the file to download (required).
The content-type to set for the download of the file.
Input mappings from flow context to step inputs
Opens a file upload dialog (modal) and stores selected file(s) into the flow context. Supports output mapping via the outputs config or default file output.
Opens a file upload dialog (modal) and stores selected file(s) into the flow context. Supports output mapping via the outputs config or default file output. (Note: Aliases USER_FILE_UPLOAD are deprecated, use 'FILE_UPLOAD' instead)
Prompt text shown in the upload dialog.
Comma-separated list of accepted MIME types for upload (e.g. "image/png,application/pdf").
Array of accepted MIME types for the upload (e.g. ["audio/mp3","audio/wav"]).
Maximum file size allowed in bytes.
Output mappings from step outputs to flow context
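A hypothetical file-upload step, assuming config keys `prompt`, `accept`, and `maxFileSize` matching the descriptions above (the `outputs` key name `file` is an assumption):

```json
{
  "step": "FILE_UPLOAD",
  "prompt": "Select a PDF to analyze",
  "accept": "application/pdf",
  "maxFileSize": 10485760,
  "outputs": { "file": "uploadedPdf" }
}
```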
Shows a simple prompt to the user and stores the entered text into the default text output.
Shows a simple prompt to the user and stores the entered text into the default text output. (Note: Aliases USER_PROMPT are deprecated, use 'PROMPT' instead)
Text displayed in the prompt. Can contain template placeholders resolved against the flowContext.
Optional placeholder shown inside the input field.
Output mappings from step outputs to flow context
Displays a configurable form to the user (via FormBuilder). The form definition is provided in cfg.form. When submitted, the returned values are merged into the flowContext.
Displays a configurable form to the user (via FormBuilder). The form definition is provided in cfg.form. When submitted, the returned values are merged into the flowContext.
Form field definitions. Each key is the field name, and the value is a FormComponent object defining the field type and properties. See the comprehensive form documentation for all available field types and properties.
Title of the form modal.
Optional buttons configuration for the form modal. Array of button objects defining submit/cancel buttons.
Optional modal-level configuration (width, height, fullScreen, typography). See FormModalConfig for all options.
Bootstrap dialog size: "sm", "md", "lg", "xl"
Modal width (e.g., "720px", "80vw")
Maximum modal width
Modal height (e.g., "600px", "80vh")
Maximum modal height
Force modal to cover entire viewport
Font size for modal body
Line height for modal body
Font size for form labels
Font size for form inputs
Line height for form inputs
Font size for diff components
Line height for diff components
Additional CSS class for modal
Optional steps to execute before showing the form while an in-place loading UI is shown.
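A sketch of a form step, hedged heavily: the FormComponent shape (`type`, `label`, `options`) and the modal-config property name are assumptions based on the descriptions above; consult the form documentation for the actual field types:

```json
{
  "step": "FORM",
  "title": "Review details",
  "form": {
    "customerName": { "type": "text", "label": "Customer name" },
    "priority": { "type": "select", "label": "Priority", "options": ["low", "high"] }
  },
  "modal": { "size": "lg", "fullScreen": false }
}
```

On submit, the returned field values (here `customerName` and `priority`) are merged into the flowContext, per the description above.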
Plays audio in a lightweight floating player. Audio source can be provided as a data URL, http(s) URL, or raw base64 via cfg.in or via the default text input. The step mounts a closed Shadow DOM audio player and resolves when the player is closed by the user.
Plays audio in a lightweight floating player. Audio source can be provided as a data URL, http(s) URL, or raw base64 via cfg.in or via the default text input. The step mounts a closed Shadow DOM audio player and resolves when the player is closed by the user.
Optional input path or variable name containing the audio source (data URL, http(s) URL or raw base64). If omitted, the default text input is used.
Attempt autoplay on load. Note: browsers may block autoplay; in that case the player will show a hint and wait for user interaction.
Show progress / seek bar in the player.
Show volume controls in the player.
Show current time and duration labels.
Theme of the player: 'dark' | 'light' or any valid CSS color string.
Player anchoring position: 'top-left' | 'top-right' | 'bottom-left' | 'bottom-right' | 'top-center' | 'bottom-center' | 'center'.
If true, loop playback.
Optional title shown in the player title bar.
Optional MIME type hint used when the provided input is raw base64 without a data URL header (e.g. "audio/mpeg").
Input mappings from flow context to step inputs
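An audio-player sketch using the documented `in` config key; the remaining key names (`autoplay`, `theme`, `position`, `title`) are inferred from the descriptions and not confirmed:

```json
{
  "step": "AUDIO_PLAYER",
  "in": "generatedSpeech",
  "autoplay": true,
  "theme": "dark",
  "position": "bottom-right",
  "title": "Preview narration"
}
```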
Displays a selection modal allowing the user to pick one item from a list. The list may be provided directly or produced by a pre-processing step (inlineSteps). The selected value is written to the default text output.
Displays a selection modal allowing the user to pick one item from a list. The list may be provided directly or produced by a pre-processing step (inlineSteps). The selected value is written to the default text output.
Optional steps to execute before showing the selection modal. Useful to prepare or transform input.
Prompt or title text shown at the top of the selection modal (supports variable resolution).
Optional title shown above additional info in the selection modal.
Optional descriptive text shown in the selection modal (supports variable resolution).
Enable keyboard navigation in the selection modal.
Input mappings from flow context to step inputs
Output mappings from step outputs to flow context
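A selection-modal sketch; the step name and the `inputs` mapping key for the list are assumptions (the descriptions above do not name them):

```json
{
  "step": "SELECTION",
  "prompt": "Pick a headline variant",
  "description": "Generated from the draft above.",
  "keyboardNavigation": true,
  "inputs": { "list": "headlineVariants" },
  "outputs": { "text": "chosenHeadline" }
}
```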
Shows an image picker modal allowing the user to pick one image from a provided list of image objects ({ url, previewUrl?, label? }). The selected image object is stored as an output.
Shows an image picker modal allowing the user to pick one image from a provided list of image objects ({ url, previewUrl?, label? }). The selected image object is stored as an output. (Note: Aliases USER_IMAGE_PICKER are deprecated, use 'IMAGE_PICKER' instead)
Prompt text shown in the image picker modal.
List of image objects to show. Each item should contain at least a url and optionally previewUrl and label.
Output mappings from step outputs to flow context
Show a progress bar with optional status text. Use position to place it and autoHide to auto-hide after completion.
Show a progress bar with optional status text. Use position to place it and autoHide to auto-hide after completion.
Status text to display next to the progress bar. Supports variable resolution.
Position of the progress bar. Common values: 'bottom', 'top'.
Whether the progress bar should auto-hide when completed.
Update an existing progress bar. Provide percentage (0-100) and optional status text.
Update an existing progress bar. Provide percentage (0-100) and optional status text.
Progress percentage (0-100). Can be a number or a string that resolves to a number.
Status text to update on the progress bar. Supports variable resolution.
Hide any shown progress bar.
Hide any shown progress bar.
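The three progress steps above can be sequenced as a sketch; the step names shown here (`PROGRESS_SHOW`, `PROGRESS_UPDATE`, `PROGRESS_HIDE`) are placeholders inferred from the descriptions, not confirmed identifiers:

```json
[
  { "step": "PROGRESS_SHOW", "status": "Starting...", "position": "bottom", "autoHide": true },
  { "step": "PROGRESS_UPDATE", "percentage": 50, "status": "Halfway there" },
  { "step": "PROGRESS_HIDE" }
]
```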
Pause execution for a given delay (ms). Useful for pacing flows or waiting for external state changes.
Pause execution for a given delay (ms). Useful for pacing flows or waiting for external state changes.
Delay in milliseconds to sleep. Defaults to 1000 ms.
Clears the current text selection and optionally repositions the cursor to the start or end of the selection. Available in Swing editor context only.
Clears the current text selection and optionally repositions the cursor to the start or end of the selection. Available in Swing editor context only.
Where to move the cursor after clearing selection. Use "anchorStart" to move to the start or "anchorEnd" to move to the end.
Requests a change of the viewport size. Useful for resizing the command palette view in Prime 8.
Requests a change of the viewport size. Useful for resizing the command palette view in Prime 8.
Target width in pixels. Can be a number or an expression resolved against the flowContext.
Target height in pixels. Can be a number or an expression resolved against the flowContext.
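A one-line viewport-resize sketch; the step name is a placeholder inferred from the description:

```json
{ "step": "VIEWPORT_RESIZE", "width": 480, "height": 640 }
```

Both dimensions may also be expressions resolved against the flowContext, per the descriptions above.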