**Category:** AI
**Type:** Analysis

---
## Overview

The **Content Moderation** node automatically detects and flags inappropriate or unsafe content using AI. It can analyze both **text** and **image** inputs to determine whether they contain offensive, harmful, or restricted material before allowing the workflow to proceed.

---
## Description

This node uses an AI moderation model to evaluate content for categories such as sexual content, harassment, hate speech, violence, and more. You can use it to ensure that user-generated content, uploaded files, or messages meet your platform’s safety and compliance requirements.

It supports both **text moderation** and **image moderation**, making it suitable for chat systems, social media workflows, or content upload platforms.

---
## Input Parameters

The **Content Moderation** node accepts flat key-value pairs that specify the content to analyze and the type of moderation to perform.

* **attachments**
  A comma-separated list of file IDs or variable references to the content that needs moderation. Used primarily for **image** or **multimedia** moderation.
  Example:

  ```
  file1.jpg,file2.png
  ```

  or

  ```
  {{nodeId.output.image}}
  ```
* **moderationType**
  Defines the type of moderation to perform. Supported values:

  * `"text-moderation"` – for analyzing written content such as messages or posts.
  * `"image-moderation"` – for analyzing uploaded or generated images.

* **moderationText**
  The text string to analyze. Use this parameter when reviewing written or user-generated text.
  Example:

  ```
  This is a test message.
  ```
**Instructions:**

Provide all input parameters as flat key-value pairs. For multiple file inputs, separate file IDs or variable references with commas. Example:

```
file1.jpg,file2.jpg
{{nodeId.output.file1}},{{nodeId.output.file2}}
```

Access input values within the workflow using:

```
{{nodeId.input.<key>}}
```

---
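The flat key-value format above can be sketched in Python. This is a minimal illustration only — the `build_moderation_input` helper is hypothetical, not part of any real SDK — showing how multiple file references collapse into one comma-separated `attachments` string.

```python
def build_moderation_input(moderation_type, text=None, attachments=None):
    """Assemble the flat key-value pairs the Content Moderation node expects.

    moderation_type: "text-moderation" or "image-moderation"
    text:            optional string for the moderationText parameter
    attachments:     optional list of file IDs or variable references
    """
    payload = {"moderationType": moderation_type}
    if text is not None:
        payload["moderationText"] = text
    if attachments:
        # Multiple files are joined into a single comma-separated string.
        payload["attachments"] = ",".join(attachments)
    return payload


payload = build_moderation_input(
    "image-moderation",
    attachments=["file1.jpg", "file2.png"],
)
print(payload)
# {'moderationType': 'image-moderation', 'attachments': 'file1.jpg,file2.png'}
```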
## Output Parameters

After execution, the **Content Moderation** node returns the AI’s analysis of the submitted content along with moderation details and confidence scores.

* **flagged**
  Indicates whether the content was flagged for any policy violations. Returns `true` if one or more moderation categories were triggered.

* **flaggedCategories**
  A comma-separated list of the categories that were flagged during moderation.
  Example: `"violence,hate,harassment"`
|
* **processingTime**
  The total time taken by the AI to analyze the content, returned in ISO timestamp format.
  Example: `"2025-10-27T10:45:12Z"`

* **processingId**
  A unique identifier assigned to the moderation request. Useful for tracking and debugging purposes.

* **categories.sexual**
  Confidence score (0–1) representing the likelihood that **sexual or adult content** is present.

* **categories.harassment**
  Confidence score (0–1) indicating potential **harassment or bullying** language.

* **categories.hate**
  Confidence score (0–1) for **hate speech** or **discriminatory** expressions.

* **categories.illicit**
  Confidence score (0–1) showing the presence of **illegal, restricted, or drug-related content**.

* **categories.self-harm**
  Confidence score (0–1) for mentions of **self-harm, suicide, or unsafe behavior**.

* **categories.violence**
  Confidence score (0–1) measuring the presence of **violent or graphic content**.

---
**Instructions:**
You can access output results using:

```
{{nodeId.output.flagged}} → true / false
{{nodeId.output.flaggedCategories}} → "violence,hate"
{{nodeId.output.categories.sexual}} → 0.05
{{nodeId.output.processingTime}} → "2025-10-27T10:30:45Z"
```

---
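To make the output shape concrete, here is a small hedged sketch of consuming these values in Python. It assumes only what this page documents: `flagged` is a boolean and `flaggedCategories` is a comma-separated string; the `parse_moderation_output` helper itself is illustrative.

```python
def parse_moderation_output(output):
    """Turn the node's flat output into native Python values.

    Returns (flagged, categories) where categories is a list of the
    flagged category names.
    """
    flagged = bool(output.get("flagged", False))
    # flaggedCategories is a comma-separated string, e.g. "violence,hate".
    raw = output.get("flaggedCategories", "")
    categories = [c.strip() for c in raw.split(",") if c.strip()]
    return flagged, categories


flagged, cats = parse_moderation_output(
    {"flagged": True, "flaggedCategories": "violence,hate"}
)
print(flagged, cats)  # True ['violence', 'hate']
```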
## Output Type

The output type must **always** be exactly:

```
"text-moderation/image-moderation"
```

This identifies the node as handling both text-based and image-based moderation tasks.

---
## Example Usage

### Example 1: Text Moderation

```json
{
  "moderationType": "text-moderation",
  "moderationText": "I hate you!"
}
```

**Expected Output:**

```json
{
  "output": {
    "flagged": true,
    "flaggedCategories": "harassment",
    "categories": {
      "sexual": 0.01,
      "harassment": 0.91,
      "hate": 0.04,
      "illicit": 0.01,
      "self-harm": 0.01,
      "violence": 0.02
    },
    "processingTime": "2025-10-27T10:30:45Z",
    "processingId": "modr-3438"
  }
}
```
---

### Example 2: Image Moderation

```json
{
  "moderationType": "image-moderation",
  "attachments": "file123.jpg"
}
```

**Expected Output:**

```json
{
  "flagged": false,
  "flaggedCategories": "",
  "processingTime": "2025-10-27T10:46:55Z",
  "categories.violence": 0.10,
  "categories.sexual": 0.05
}
```

---
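Even when nothing is flagged, the per-category confidence scores (as in Example 2) can drive custom policies. A hedged sketch, assuming the flat `categories.*` keys shown above; the 0.8 cutoff is an arbitrary illustration, not a documented default:

```python
EXAMPLE_OUTPUT = {
    "flagged": False,
    "flaggedCategories": "",
    "categories.violence": 0.10,
    "categories.sexual": 0.05,
}


def exceeds_threshold(output, threshold=0.8):
    """Return category names whose confidence score meets the threshold."""
    return [
        key.split(".", 1)[1]          # "categories.violence" -> "violence"
        for key, score in output.items()
        if key.startswith("categories.")
        and isinstance(score, (int, float))
        and score >= threshold
    ]


print(exceeds_threshold(EXAMPLE_OUTPUT))       # []
print(exceeds_threshold(EXAMPLE_OUTPUT, 0.1))  # ['violence']
```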
## How to Use in a No-Code Workflow

1. **Add the Content Moderation Node**
   Drag and drop the node onto your workflow canvas.

2. **Choose the Input Type**

   * Use `moderationText` for moderating text messages or comments.
   * Use `attachments` for moderating images or file uploads.

3. **Set the Moderation Type**
   Choose `"text-moderation"` or `"image-moderation"` as needed. If left empty, the node handles both automatically.

4. **Connect Inputs**
   Link the output from a previous node (such as a file upload or text generation node) to the `attachments` or `moderationText` fields.

5. **Access Outputs**
   Use variable references to pass results to other nodes, such as conditional checks or notifications. Example:

   ```
   {{contentModeration.output.flagged}}
   {{contentModeration.output.flaggedCategories}}
   ```

6. **Set Conditions (Optional)**
   Create conditional branches in your workflow to stop or flag content automatically when `flagged = true`.

---
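The conditional branch in step 6 amounts to a simple routing decision. A minimal sketch, where `"notify"` and `"publish"` stand in for hypothetical downstream workflow nodes:

```python
def route_content(moderation_output):
    """Decide the next workflow branch from the node's output."""
    if moderation_output.get("flagged"):
        return "notify"   # e.g. send a warning via a Notification Node
    return "publish"      # content is safe; continue the workflow


print(route_content({"flagged": True}))   # notify
print(route_content({"flagged": False}))  # publish
```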
## Best Practices

* Always verify that uploaded files are properly connected before moderation.
* For text moderation, keep inputs under 5,000 characters for optimal performance.
* Combine `moderationText` and `attachments` to analyze mixed-media submissions.
* Review flagged outputs manually for high-risk content before taking automated action.
* Store `processingId` values for tracking or audit purposes.

---
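The 5,000-character guideline above can be enforced before text reaches the node. The limit comes from this page; the clipping helper is an illustrative sketch, not a built-in feature:

```python
MAX_MODERATION_CHARS = 5000  # performance guideline from the Best Practices list


def clip_for_moderation(text, limit=MAX_MODERATION_CHARS):
    """Trim overly long text so moderation stays within the recommended size."""
    return text if len(text) <= limit else text[:limit]


print(len(clip_for_moderation("x" * 6000)))  # 5000
print(len(clip_for_moderation("short")))     # 5
```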
## Example Workflow Integration

**Use Case:** A user uploads an image with a comment.

* The **File Upload Node** provides an image file reference.
* The **Content Moderation Node** checks both the uploaded image and the user’s text comment.
* If `flagged = true`, the workflow sends a warning message through a **Notification Node**.
* If `flagged = false`, the workflow continues to publish the content.

**Workflow Connection Example:**

```
{{fileUpload.output.fileUrl}} → {{contentModeration.input.attachments}}
{{chatInput.output.text}} → {{contentModeration.input.moderationText}}
{{contentModeration.output.flagged}} → Used in condition check
```

---
|
|
## Common Errors

Below are common issues that may occur while using the **Content Moderation** node, along with their causes and recommended solutions.

* **"Missing attachments"**
  **Cause:** No file or variable reference was provided for moderation.
  **Solution:** Add a valid image or file reference in the `attachments` field.
  Example:

  ```
  {{fileUpload.output.image}}
  ```

* **"Missing moderationText"**
  **Cause:** The text moderation input field was left empty.
  **Solution:** Provide a valid text string, or connect text from a previous node for analysis.

* **"Invalid moderationType"**
  **Cause:** An incorrect or unsupported moderation type was entered.
  **Solution:** Use only the supported values: `"text-moderation"` or `"image-moderation"`.

* **"Empty output"**
  **Cause:** The AI model returned no response or incomplete data.
  **Solution:** Retry the workflow with a valid input, or check whether the AI moderation service is available.

* **"File not accessible"**
  **Cause:** The referenced image file could not be loaded or retrieved.
  **Solution:** Verify that the file exists, has the correct permissions, and was properly generated or uploaded by a previous node.

---