An RPA integration solution that connects Robot Framework-based and pure Python automations with ProcessCube workflow engines.
The ProcessCube Robot Agent project is a comprehensive solution for integrating Robotic Process Automation (RPA) with ProcessCube, a Business Process Management (BPM) system. It supports two flexible approaches for robot development:
- RCC-based Robots (Robot Framework) - For UI automation and text-driven processes
- UV-based Robots (Pure Python) - For APIs, data processing, and Python libraries

The repository consists of three main parts:

- processcube_robot_agent - A Python-based microservice that manages and executes RPA robots
- robots - A collection of automation tasks (RCC and UV)
- studio_extension - A TypeScript/React extension for the 5Minds Studio IDE
The system supports both approaches in parallel without requiring migration:
| Approach | Type | Best For | Learn More |
|---|---|---|---|
| RCC | Robot Framework (Text) | UI automation, web scraping | README.md - RCC Guide |
| UV | Pure Python | APIs, data processing, microservices | UV_ROBOT_CREATION_GUIDE.md |
Unsure which approach? → See QUICK_START.md - Comparison Table
┌───────────────────────────────────┐
│  ProcessCube Engine               │   (Workflow Engine)
│  (BPMN Processes)                 │
└─────────────┬─────────────────────┘
              │
        External Tasks
              │
┌─────────────┴─────────────────────┐
│  Robot Agent Service (Python)     │   (REST API Port 42042)
│   ├── Task Handler                │
│   ├── RCC Runner                  │
│   └── File Watcher (Auto-reload)  │
└─────────────┬─────────────────────┘
              │
              ├──► RPA Robots (.zip packages)
              │     ├── Web UI Automation
              │     ├── Windows UI Automation
              │     └── Custom Tasks
              │
              └──► RCC (Robot Code Compiler)
                    └── Robocorp's Robot Packaging Tool
- Installation & Setup
- Quick Start
- Configuration
- Project Structure
- Robot Development
- Studio Extension
- API Documentation
- Development & Debugging
- Troubleshooting
- Contributing
- Python 3.8 or higher
- Node.js 14.x or higher (for Studio extension)
- npm 6.x or higher
- RCC (Robocorp Command Center) - Download: https://github.com/robocorp/rcc
- Git
git clone https://github.com/5minds/processcube-robot-agent.git
cd processcube-robot-agent

# Option A: With npm scripts (recommended)
npm install
# Option B: Directly with pip
pip install -r requirements.txt

# Download RCC and place in PATH
# https://github.com/robocorp/rcc/releases
# Verification:
rcc version

npm install

cd studio_extension
npm ci
npm run build
cd ..

# Start the Robot Agent Service
npm run processcube_robot_agent
# Output should look similar to:
# INFO: Started server process [12345]
# INFO: Waiting for application startup.
# 2025-11-17 18:49:34,089 - processcube.external_tasks - INFO - Starting external task worker for topic 'win.test'
# 2025-11-17 18:49:34,089 - processcube.external_tasks - INFO - Starting external task worker for topic 'win.webui'
# ...
# INFO: Application startup complete

The project is production-ready with the following quality metrics:
| Metric | Status | Details |
|---|---|---|
| Total Tests | ✅ 360/360 | 100% Pass Rate |
| Python Tests | ✅ 279/279 | 100% Pass Rate (216 unit + 63 integration) |
| TypeScript Tests | ✅ 81/81 | 100% Pass Rate |
| Type Hints | ✅ 85% | Python Code Coverage |
| Security | ✅ Safe | Shell injection fixes, 0 npm vulnerabilities |
| Dependencies | ✅ Modern | 20 packages updated |
# Check system requirements
python --version # >= 3.8
node --version # >= 14.x
npm --version # >= 6.x
rcc version # Installed
# Install dependencies
npm install
pip install -r requirements.txt

# Production configuration (config.prod.json)
cat > config.prod.json << 'EOF'
{
"debugging": {
"enabled": false,
"hostname": "localhost",
"port": 5678,
"wait_for_client": false
},
"engine": {
"url": "http://processcube-engine:56100"
},
"rcc": {
"topic_prefix": "robot",
"wrap_dir": "robots/installed/rcc",
"unwrap_dir": "temp/robots/rcc/unwrapped",
"start_watch_project_dir": false,
"project_dir": "robots/src/rcc"
},
"rest_api": {
"port": 42042,
"host": "0.0.0.0"
}
}
EOF

# Prepare all robots (before deployment)
npm run pack
# Output: Robots in robots/installed/rcc/*.zip
# Verification:
ls -lh robots/installed/rcc/

# All tests (before production release)
npm test
# Or separately:
npm run test:python # Python tests (98 tests)
npm run test:typescript # TypeScript tests (81 tests)
# With coverage:
npm run test:coverage

# Option A: Direct (simple)
CONFIG_FILE=$(pwd)/config.prod.json npm run processcube_robot_agent
# Option B: Docker (if available)
docker run -d \
-e CONFIG_FILE=/app/config.prod.json \
-p 42042:42042 \
-v $(pwd)/config.prod.json:/app/config.prod.json \
-v $(pwd)/robots:/app/robots \
processcube-robot-agent:latest
# Option C: Systemd service (Linux)
sudo systemctl start processcube-robot-agent
sudo systemctl enable processcube-robot-agent

# 1. Health check: Is the service reachable?
curl -s http://localhost:42042/robot_agents/robots | jq .
# Expected:
# {
# "topics": [
# { "name": "...", "topic": "robot/..." },
# ...
# ]
# }
# 2. Are robots registered?
curl -s http://localhost:42042/robot_agents/robots | jq '.topics | length'
# Should be > 0
# 3. Is ProcessCube engine reachable?
# Check agent URL in ProcessCube engine configuration
# External task workers should be connected to engine
# 4. Check logs
tail -f /var/log/processcube-robot-agent/service.log

# Production logging (config.prod.json):
{
"logging": {
"level": "INFO",
"format": "json",
"output": "/var/log/processcube-robot-agent/service.log"
}
}

# Follow service logs
tail -100f ~/.processcube/robot-agent/logs.txt
# Errors only
grep ERROR ~/.processcube/robot-agent/logs.txt
# Robot executions
grep "Starting external task" ~/.processcube/robot-agent/logs.txt

# Service resource usage
top -p $(pgrep -f "processcube_robot_agent")
# Processed tasks
curl http://localhost:42042/metrics # If Prometheus is integrated
# Open connections
netstat -an | grep 42042

# Backup: Installed robots
tar -czf robots-backup-$(date +%Y%m%d).tar.gz robots/installed/
# Backup: Source robots
tar -czf robots-source-backup-$(date +%Y%m%d).tar.gz robots/src/
# Restore:
tar -xzf robots-backup-20251117.tar.gz
npm run pack

# Backup
cp config.prod.json config.prod.json.backup
# Restore
cp config.prod.json.backup config.prod.json
systemctl restart processcube-robot-agent

# 1. Check logs
journalctl -u processcube-robot-agent -n 50
# 2. Validate configuration
python -m json.tool config.prod.json
# 3. Check dependencies
pip check
npm audit
# 4. Is port available?
netstat -tuln | grep 42042

# 1. Is ProcessCube URL reachable?
curl -v http://processcube-engine:56100/health
# 2. Are robots present?
curl http://localhost:42042/robot_agents/robots
# 3. Restart service
systemctl restart processcube-robot-agent
# 4. Check logs for errors
journalctl -u processcube-robot-agent -p err

# 1. Restart service
systemctl restart processcube-robot-agent
# 2. Clear temp directory
rm -rf temp/robots/rcc/unwrapped/*
# 3. Repack robot caches
npm run pack
# 4. Enable monitoring
CONFIG_FILE=config.prod.json DEBUG=true npm run processcube_robot_agent

# Agent 1 (Port 42042)
CONFIG_FILE=config.prod-1.json npm run processcube_robot_agent &
# Agent 2 (Port 42043)
CONFIG_FILE=config.prod-2.json npm run processcube_robot_agent &
# Load balancer (nginx.conf)
upstream robot_agents {
server localhost:42042;
server localhost:42043;
}
server {
listen 42040;
location / {
proxy_pass http://robot_agents;
}
}

# In rest_api_command.py
from datetime import datetime

@webapp.get("/health")
async def health_check():
return {
"status": "healthy",
"timestamp": datetime.now().isoformat(),
"robots_registered": len(get_registered_robots())
}

# 1. Back up current code
git stash
# 2. Pull new code
git pull origin main
# 3. Update dependencies
npm install
pip install -r requirements.txt
# 4. Run tests
npm test
# 5. Restart service
systemctl restart processcube-robot-agent
# 6. Verify
curl http://localhost:42042/robot_agents/robots

# 1. Stop service
systemctl stop processcube-robot-agent
# 2. Revert code
git revert HEAD
# 3. Start service
systemctl start processcube-robot-agent
# 4. Verify
journalctl -u processcube-robot-agent -n 20

The project includes a Dockerfile for containerized deployment. Docker images are automatically built by GitHub Actions and pushed to GitHub Container Registry (ghcr.io).
# Build locally
docker build -t processcube-robot-agent:latest .
# With version tag
docker build -t processcube-robot-agent:0.1.0 .
# With multi-architecture support (ARM64/AMD64)
docker buildx build --platform linux/amd64,linux/arm64 \
-t processcube-robot-agent:latest .

# Basic: With configuration file and robots directory
docker run -d \
--name robot-agent \
-p 42042:42042 \
-e CONFIG_FILE=/app/config.json \
-v $(pwd)/config.json:/app/config.json \
-v $(pwd)/robots:/app/robots \
processcube-robot-agent:latest
# With ProcessCube engine URL
docker run -d \
--name robot-agent \
-p 42042:42042 \
-e CONFIG_FILE=/app/config.json \
-e PROCESSCUBE_ENGINE_URL=http://processcube-engine:56100 \
-v $(pwd)/config.json:/app/config.json \
-v $(pwd)/robots:/app/robots \
processcube-robot-agent:latest
# With Docker Compose
docker-compose up -d

version: '3.8'
services:
robot-agent:
image: ghcr.io/5minds/processcube-robot-agent:latest
container_name: processcube-robot-agent
ports:
- "42042:42042"
environment:
CONFIG_FILE: /app/config.json
PROCESSCUBE_ENGINE_URL: http://processcube-engine:56100
LOG_LEVEL: INFO
volumes:
- ./config.json:/app/config.json
- ./robots:/app/robots
- robot-agent-logs:/var/log/processcube-robot-agent
depends_on:
- processcube-engine
restart: unless-stopped
healthcheck:
test: ["CMD", "curl", "-f", "http://localhost:42042/robot_agents/robots"]
interval: 30s
timeout: 10s
retries: 3
start_period: 40s
processcube-engine:
image: processcube/engine:latest
container_name: processcube-engine
ports:
- "56100:56100"
environment:
DATABASE_URL: postgresql://postgres:postgres@postgres:5432/processcube
depends_on:
- postgres
restart: unless-stopped
postgres:
image: postgres:15-alpine
container_name: processcube-postgres
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgres
POSTGRES_DB: processcube
volumes:
- postgres-data:/var/lib/postgresql/data
restart: unless-stopped
volumes:
robot-agent-logs:
postgres-data:

GitHub Actions automatically pushes the following tags:
# After git push main
ghcr.io/5minds/processcube-robot-agent:main
ghcr.io/5minds/processcube-robot-agent:latest
ghcr.io/5minds/processcube-robot-agent:<commit-sha>
# After release tag (e.g., v0.1.0)
ghcr.io/5minds/processcube-robot-agent:0.1.0
ghcr.io/5minds/processcube-robot-agent:0.1
ghcr.io/5minds/processcube-robot-agent:<commit-sha>

The following environment variables are supported:
| Variable | Default | Description |
|---|---|---|
| `CONFIG_FILE` | `/app/config.json` | Path to configuration file |
| `PROCESSCUBE_ENGINE_URL` | - | ProcessCube engine URL (optional) |
| `LOG_LEVEL` | `INFO` | Logging level (DEBUG, INFO, WARNING, ERROR) |
| `ROBOT_TIMEOUT` | `300` | Robot execution timeout (seconds) |
| `RCC_DEBUG` | `false` | Enable RCC debug output |
| `PYTHONUNBUFFERED` | `1` | Disable Python buffering |
# Robots from host
-v /path/to/robots:/app/robots
# Configuration from host
-v /path/to/config.json:/app/config.json:ro
# Persist logs
-v robot-agent-logs:/var/log/processcube-robot-agent
# Temp directory (for RCC unwrapped)
-v robot-agent-temp:/app/temp

# Run as non-root user (automatic in image)
docker run -u 1000:1000 \
-v $(pwd)/robots:/app/robots \
processcube-robot-agent:latest
# With read-only filesystem (except /tmp, /var)
docker run --read-only \
--tmpfs /tmp \
--tmpfs /var/tmp \
-v $(pwd)/robots:/app/robots:ro \
processcube-robot-agent:latest

The standard image is ~500MB with all dependencies. For smaller images:
# Production image (multi-stage build)
# Usage: docker build -f Dockerfile.prod -t processcube-robot-agent:prod .

# View container logs
docker logs robot-agent
docker logs -f robot-agent # Live logs
# SSH into container
docker exec -it robot-agent sh
# Container status
docker ps | grep robot-agent
docker inspect robot-agent | jq '.[0].State'
# Health check
docker inspect --format='{{.State.Health.Status}}' robot-agent
# Check port
docker port robot-agent

# Development build with additional tools
docker build -f Dockerfile.dev -t processcube-robot-agent:dev .
# With mounted source code for live reload
docker run -d \
-v $(pwd):/app \
-v /app/venv \
processcube-robot-agent:dev  # the anonymous /app/venv volume excludes the host venv

# Terminal 1: Start the Robot Agent Service
npm run processcube_robot_agent
# Output should look similar to:
# INFO: Started server process [12345]
# INFO: Waiting for application startup.
# INFO: Starting external task worker for topic 'rcc.webui'
# INFO: Starting external task worker for topic 'rcc.test'
# ...
# INFO: Application startup complete
# Service runs at http://localhost:42042

# Option A: In the same terminal (Terminal 1)
# Press: Ctrl+C
# Option B: From another terminal (Terminal 2)
npm run stop
# Option C: Force stop (if hung)
npm run stop:force

# Terminal 2: List all robots
curl http://localhost:42042/robot_agents/robots
# Output:
# {
# "topics": [
# {"name": "webui", "topic": "rcc/webui"}
# ]
# }

# ProcessCube engine must be able to register with a known service
# The agent is then available at the configured URL
# (Default: http://localhost:42042)

# 1. Create a new robot folder
mkdir robots/src/rcc/my-robot
cd robots/src/rcc/my-robot
# 2. Create robot template
cat > robot.yaml << 'EOF'
tasks:
MyTask:
robotTaskName: My Custom Task
condaConfigFile: conda.yaml
artifactsDir: output
PATH: [.]
PYTHONPATH: [.]
EOF
# 3. Define tasks
cat > tasks.robot << 'EOF'
*** Settings ***
Library RPA.Browser.Selenium
*** Tasks ***
MyTask
Log Hello World!
EOF
# 4. Define conda environment
cat > conda.yaml << 'EOF'
channels:
- conda-forge
dependencies:
- python=3.9
- pip
- pip:
- rpaframework>=15.1.4
EOF
# The robot will be automatically packed and registered on the next service start

Service configuration is done through JSON files in the root directory:
- config.dev.json - Linux/macOS development
- config.dev-win.json - Windows development
- Environment variable: `CONFIG_FILE` defines which file is loaded
{
"debugging": {
"enabled": true,
"hostname": "localhost",
"port": 5678,
"wait_for_client": false
},
"engine": {
"url": "http://localhost:56100"
},
"rcc": {
"topic_prefix": "rcc",
"wrap_dir": "robots/installed/rcc",
"unwrap_dir": "temp/robots/rcc/unwrapped",
"start_watch_project_dir": true,
"project_dir": "robots/src/rcc"
},
"rest_api": {
"port": 42042,
"host": "0.0.0.0"
}
}

| Parameter | Description | Default |
|---|---|---|
| `debugging.enabled` | Enable debug mode (port 5678) | false |
| `engine.url` | ProcessCube engine URL | - |
| `rcc.topic_prefix` | Prefix for robot topics | robot_task |
| `rcc.wrap_dir` | Output directory for packed robots | robots/installed/rcc |
| `rcc.unwrap_dir` | Temp directory during unpacking | temp/robots/rcc/unwrapped |
| `rcc.start_watch_project_dir` | Auto-reload on file changes | true |
| `rcc.project_dir` | Robot source directory | robots/src/rcc |
| `rest_api.port` | Service port | 42042 |
| `rest_api.host` | Listen address | 0.0.0.0 |
# Select configuration file
export CONFIG_FILE=/path/to/config.json
# Python path for imports
export PYTHONPATH=/path/to/processcube-robot-agent
# RCC debug output
export RCC_DEBUG=true

For different environments (dev, staging, prod):
# Development
CONFIG_FILE=./config.dev.json npm run processcube_robot_agent
# Production
CONFIG_FILE=./config.prod.json npm run processcube_robot_agent

processcube-robot-agent/
│
├── processcube_robot_agent/          # Backend microservice (Python)
│   ├── __main__.py                   # CLI entry point
│   ├── rest_api_command.py           # REST API server
│   ├── pack_robots_command.py        # Packaging command
│   ├── watch_robots_command.py       # File watcher command
│   │
│   ├── robot_agent/                  # Agent logic
│   │   ├── base_agent.py             # Abstract base class
│   │   ├── builder.py                # Factory builder
│   │   ├── error.py                  # Custom exceptions
│   │   │
│   │   └── rcc/                      # RCC implementation
│   │       ├── robot_agent.py        # Main execution engine
│   │       ├── rcc_runner.py         # RCC validator
│   │       ├── robot_task_handler_factory.py  # Factory pattern
│   │       ├── project_packer.py     # Robot packaging
│   │       └── project_watcher.py    # Hot-reload watcher
│   │
│   ├── external_task/                # ProcessCube integration
│   │   └── robot_task_handler.py     # External task handler
│   │
│   └── rest_api/                     # HTTP endpoints
│       └── robots.py                 # Robot list endpoint
│
├── robots/                           # RPA robot definitions
│   ├── src/rcc/                      # Source robots
│   │   ├── webui/                    # Web UI automation
│   │   ├── windows/
│   │   │   └── ui/                   # Windows UI automation
│   │   ├── windows-example-calculator/
│   │   └── web-example-rpa-challenge/
│   │
│   ├── installed/                    # Packed, ready-to-run robots
│   │   └── rcc/                      # RCC-packed .zip files
│   │
│   └── backup/                       # Robot backups
│       └── readexcel/                # Excel read example
│
├── studio_extension/                 # TypeScript/React IDE extension
│   ├── index.ts                      # Extension entry point
│   ├── robotServiceType/             # Robot service type UI
│   │   ├── initializeServiceTypeRobot.ts
│   │   ├── PropertiesRobotTaskPane.tsx
│   │   ├── PropertiesRobotTaskPaneContent.tsx
│   │   ├── fetchRobots.ts
│   │   └── PropertiesRobotServiceTask.md
│   │
│   ├── agentSettings/                # Agent configuration UI
│   │   ├── initializeAgentSettingsEditor.ts
│   │   ├── getRobotAgents.ts
│   │   ├── RobotAgentsConfigDocument.ts
│   │   └── RobotAgentsConfigEditor.tsx
│   │
│   ├── package.json
│   ├── tsconfig.json
│   ├── webpack.config.js
│   └── README.md
│
├── processes/                        # Example BPMN processes
│   ├── RobotTask.bpmn
│   └── .processcube/
│
├── package.json                      # Root NPM configuration
├── requirements.txt                  # Python dependencies
├── config.dev.json                   # Linux/macOS dev configuration
├── config.dev-win.json               # Windows dev configuration
├── start_on_windows.sh               # Windows startup script
│
└── README.md                         # This file
Both approaches solve automation tasks, but with different strengths:
*** Settings ***
Library RPA.Browser.Selenium
Library RPA.HTTP
*** Tasks ***
Login And Process
Open Browser https://example.com chrome
Input Text id:username admin
Input Text id:password pw123
Click Button xpath://button[@type='submit']
${response}= Get Request https://api.example.com/process
Log ${response.status_code}
Close Browser

import requests
from robocorp.workitems import inputs, outputs
from selenium import webdriver
def main():
for input_item in inputs:
payload = input_item.payload
# Web Automation
driver = webdriver.Chrome()
driver.get("https://example.com")
driver.find_element("id", "username").send_keys("admin")
driver.find_element("id", "password").send_keys("pw123")
driver.find_element("xpath", "//button[@type='submit']").click()
# API Request
response = requests.post("https://api.example.com/process")
driver.quit()
# Output
result = {
"status": response.status_code,
"data": payload,
"processed": True
}
outputs.create(result).save()
if __name__ == "__main__":
main()When to use which approach?
| Scenario | RCC | UV | Reason |
|---|---|---|---|
| Web UI Automation | ✅ Better | | RPA.Browser optimized for UI automation |
| REST APIs | ✅ Possible | ✅ Better | Python Requests/httpx are native |
| Data Processing | ✅ Good | ✅ Better | Pandas, NumPy, etc. are Python-native |
| Legacy System RPA | ✅ Better | ❌ Difficult | Windows UI, SAP, etc. require RPA Framework |
| Microservices | | ✅ Better | Lightweight, fast, easy to deploy |
| Complex Logic | | ✅ Better | Python is more understandable for developers |
Robot Framework is a Python-based, text-driven automation tool with human-readable syntax:
*** Settings ***
Library Collections
Library RPA.Browser.Selenium
*** Variables ***
${USERNAME} admin
${PASSWORD} pw123
*** Tasks ***
Login And Verify
Open Browser https://example.com/login chrome
Input Text id:username ${USERNAME}
Input Text id:password ${PASSWORD}
Click Button xpath://button[@type='submit']
Page Should Contain Welcome
*** Keywords ***
Login As User
[Arguments] ${user} ${pass}
Input Text id:username ${user}
Input Text id:password ${pass}
Click Button xpath://button[@type='submit']

A minimal robot with the required files:
my-robot/
├── robot.yaml           # Robot metadata
├── tasks.robot          # Task definitions
├── conda.yaml           # Dependencies
├── locators.json        # Optional: UI element locators
└── output/              # Output directory (system-created)
    ├── output.xml       # Test results
    ├── log.html         # HTML log
    └── report.html      # Test report
# Task definitions
tasks:
TaskName:
robotTaskName: Display name for ProcessCube
SecondTask:
robotTaskName: Second Task
# Conda environment configuration
condaConfigFile: conda.yaml
# Output directory
artifactsDir: output
# Path variables
PATH: [.]
PYTHONPATH: [.]

*** Settings ***
Library RPA.Browser.Selenium
Library RPA.HTTP
Library Collections
Resource common.robot
*** Tasks ***
WebUI Example
Open Available Browser https://www.example.com
Click Element When Visible css=.cookie-accept
Take Screenshot full
Close Browser
Process Data
${data}= Get Request https://api.example.com/data
${json}= Evaluate ${data.text}
Log ${json}[0][name]

channels:
- conda-forge
- defaults
dependencies:
- python=3.9
- pip
- chromium
- pip:
- rpaframework>=15.1.4
- robotframework>=5.0.1
- robotframework-tidy
- selenium>=4.0.0

Robot Framework works with Work Items for structured data management:
*** Settings ***
Library RPA.Robocorp.Process
*** Tasks ***
Process Purchase Order
${order_id}= Get Work Item Variable order_id
${customer}= Get Work Item Variable customer_name
Log Processing order ${order_id} for ${customer}
# ... further processing ...
Set Work Item Variable status completed
Set Work Item Variable result_data ${result}

{
"order_id": "ORD-12345",
"customer_name": "Acme Corp",
"items": [
{"sku": "ITEM-001", "qty": 5}
]
}

# Execute single task
cd robots/src/rcc/my-robot
robot --task TaskName tasks.robot
# All tasks
robot tasks.robot
# With RCC (as in production)
rcc robot run --task TaskName
# Check outputs
open output/log.html

- Clear task names - Meaningful names in robot.yaml
- Error handling - Run Keyword If and error handling
- Logging - Sufficient log output for debugging
- Modularity - Keywords for reusable code
- Separate locators - locators.json for maintainability
- Timeouts - Explicit timeouts for stability
- Screenshots - On errors for debugging
For APIs, data processing, and modern Python-based automations, the system also offers UV Robots - pure Python implementations without Robot Framework overhead.
# 1. Create directory
mkdir robots/src/uv/my-api-robot
cd robots/src/uv/my-api-robot
# 2. Create pyproject.toml
cat > pyproject.toml << 'EOF'
[project]
name = "my-api-robot"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
"robocorp-workitems>=1.0.0",
"requests>=2.31.0",
"pandas>=2.0.0",
]
EOF
# 3. Create main.py with business logic
cat > main.py << 'EOF'
import logging
import requests
from robocorp.workitems import inputs, outputs
logger = logging.getLogger(__name__)
def main():
for input_item in inputs:
try:
payload = input_item.payload
# API call
response = requests.get(
f"https://api.example.com/data/{payload.get('id')}",
timeout=10
)
result = {
"status": "success",
"data": response.json(),
"code": response.status_code
}
except Exception as e:
logger.error(f"Error: {e}")
result = {
"status": "error",
"error": str(e)
}
finally:
outputs.create(result).save()
if __name__ == "__main__":
main()
EOF
# 4. Test locally
uv run main.py

- ⚡ Fast - 20-40x faster than RCC through direct Python execution
- 📦 Lightweight - Minimal dependencies, simple packaging
- 🐍 Modern - Access to all Python libraries (requests, pandas, etc.)
- ☁️ Cloud-Ready - Optimal for microservices and serverless deployment
- 👨‍💻 Dev-Friendly - Standard Python, not Robot Framework syntax
✅ Ideal for:
- REST API integration
- Data processing and ETL
- Microservices and backend tasks
- Python libraries (pandas, requests, httpx)
- Cloud deployment
❌ Not ideal for:
- Windows UI automation (requires RPA Framework)
- Legacy SAP/Mainframe systems
- Visual web automation with complex locators
For detailed guidance on UV robot development, see:
- Complete guide: UV_ROBOT_CREATION_GUIDE.md
- Quick Start: QUICK_START.md - Create new UV robot
- Architecture Details: ARCHITECTURE.md - Robot Execution Engines
- Migration & Updates: MIGRATION_GUIDE.md - UV Robot Support
The system provides specialized execution tools for robots without boilerplate main.py wrapper. These tools are defined via [project.scripts] entry points in pyproject.toml and enable direct robot execution.
Purpose: Standalone execution of Robot Framework .robot files as CLI tool
# Define in pyproject.toml:
[project.scripts]
robot_runner = "processcube_robot_agent.tools.robot_runner:main"
# Or with UV robot config:
[tool.processcube]
robot_file = "my_robot.robot"  # Default robot file

# Option 1: Pass robot file directly
robot_runner my_robot.robot
# Option 2: With variables
robot_runner my_robot.robot \
--variable USER=admin \
--variable PASSWORD=secret
# Option 3: With tags
robot_runner my_robot.robot \
--tag smoke \
--tag critical
# Option 4: From configuration (pyproject.toml)
robot_runner # Uses robot_file from [tool.processcube]
# Option 5: With Python directly
python -m processcube_robot_agent.tools.robot_runner my_robot.robot
# Show help
robot_runner --help

[project]
name = "my-robot"
version = "0.1.0"
[project.scripts]
robot_runner = "processcube_robot_agent.tools.robot_runner:main"
[tool.processcube]
# Optional: Defaults for robot_runner
robot_file = "main.robot"
variables = { "USER" = "admin", "TIMEOUT" = "30" }
tags = ["smoke", "production"]

The robot_runner integrates seamlessly with ProcessCube:
- Reads input work items from environment variables
- Executes Robot Framework
- Writes output work items
- Signals errors for ProcessCube error handling
# With ProcessCube work items
RPA_WORKITEMS_PATH=/tmp/workitems.json robot_runner task.robot
# Output is written to:
# $RPA_OUTPUT_WORKITEM_PATH/output.json

| Aspect | robot_runner | Manual (main.py) |
|---|---|---|
| Boilerplate | ✅ None | ❌ Lots |
| Configurable | ✅ TOML-based | |
| Variables | ✅ CLI + TOML | ❌ Hardcoded |
| Tags Support | ✅ Yes | ❌ No |
| Work Items | ✅ Automatic | ❌ Manual |
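The actual `robot_runner` implementation lives inside the package; purely as an illustration, a minimal runner with a similar shape could read its defaults from `[tool.processcube]` and delegate to Robot Framework's `robot.run()` API. All names and behavior below are assumptions, not the project's code:

```python
import argparse
import tomllib                      # Python 3.11+ standard library
from pathlib import Path

import robot                        # provided by the robotframework package


def main() -> int:
    parser = argparse.ArgumentParser(description="Illustrative robot_runner sketch")
    parser.add_argument("robot_file", nargs="?", help="Path to a .robot file")
    parser.add_argument("--variable", action="append", default=[], help="NAME=value")
    parser.add_argument("--tag", action="append", default=[], help="Run only tasks with this tag")
    args = parser.parse_args()

    # Fall back to [tool.processcube] defaults from pyproject.toml, if present.
    defaults = {}
    pyproject = Path("pyproject.toml")
    if pyproject.exists():
        defaults = tomllib.loads(pyproject.read_text()).get("tool", {}).get("processcube", {})

    robot_file = args.robot_file or defaults.get("robot_file", "main.robot")
    cli_vars = [v.replace("=", ":", 1) for v in args.variable]     # robot.run() expects NAME:value
    toml_vars = [f"{k}:{v}" for k, v in defaults.get("variables", {}).items()]
    tags = args.tag or defaults.get("tags", [])

    # robot.run() returns a process-style return code (0 = all tasks passed).
    return robot.run(robot_file, variable=cli_vars or toml_vars, include=tags, outputdir="output")


if __name__ == "__main__":
    raise SystemExit(main())
```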
# 1. Robot structure
robots/src/rcc/my-task/
├── robot.yaml
├── main.robot
├── conda.yaml
└── pyproject.toml
# 2. pyproject.toml
[project]
name = "my-task"
requires-python = ">=3.11"
[project.scripts]
robot_runner = "processcube_robot_agent.tools.robot_runner:main"
[tool.processcube]
robot_file = "main.robot"
# 3. Direct execution
cd robots/src/rcc/my-task
robot_runner main.robot
# Or with variables
robot_runner main.robot --variable API_KEY=secret123

For UV-based robots, the UV package manager is automatically used.
# Structure for UV robot
robots/src/uv/my-api-robot/
├── pyproject.toml       # with dependencies
├── main.py              # Entry point
└── requirements.txt     # Optional fallback
# Automatic execution via UV:
# uv run --directory robots/src/uv/my-api-robot main.py

For RCC-based robots, the Robot Code Compiler is used:
# RCC robots are automatically packed
npm run pack
# Execution
rcc robot run --task TaskName --directory robots/src/rcc/webui

The studio extension enables graphical configuration of robot agents and tasks in the 5Minds Studio IDE.
- Robot Service Type Registration - "Robot" as task type in BPMN
- Agent Management - Management of robot agent instances
- Topic Selection - Selection of robot to execute
- Visual Feedback - Robot icon on BPMN diagrams
- Properties Panel - Configuration in Studio
cd studio_extension
# Install dependencies
npm ci
# Create build
npm run build
# Install via npm script (must be configured in studio_extension/package.json)
npm run install_studio_extension

- Open Studio: `5minds-studio`
- Menu: Settings → Extensions
- Select "processcube.robot.extension"
- Specify path to `studio_extension/out/index.js`
- Restart Studio
- Studio: Menu → Settings → "Configure Robot Agents"
- Add agent: `{ "name": "Local Robot Agent", "url": "http://localhost:42042", "uuid": "robot-agent-001" }`
- Save in `~/.processcube/robot-agent/agents.json`
- Open BPMN editor
- Add service task
- Properties → Type: Select "Robot"
- Properties → Agent: Select configured agent
- Properties → Topic: Select available robot from list
  - `rcc/webui`
  - `rcc/windows/ui`
  - etc.
// Task Properties in BPMN
{
"taskConfig": {
"agent": "robot-agent-001",
"topic": "rcc/webui",
"timeout": 300,
"retry": 3
}
}

- PropertiesRobotTaskPane.tsx - Main UI for robot tasks
- PropertiesRobotTaskPaneContent.tsx - Task properties
- RobotAgentsConfigEditor.tsx - Agent management UI
# Start Studio with extension dev mode
5minds-studio --extension-development-dir=./studio_extension
# VSCode Debugger: F5 to debug TypeScript

# Development build
npm run build
# Outputs to studio_extension/out/index.js
# For beta
npm run copy_release_to_beta

GET /robot_agents/robots
Response:
{
"topics": [
{
"name": "webui",
"topic": "rcc/webui"
},
{
"name": "windows/ui",
"topic": "rcc/windows/ui"
}
]
}

cURL Example:
curl -X GET http://localhost:42042/robot_agents/robots
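The same endpoint can also be queried from Python, for example with the requests library:

```python
import requests

# List the robots registered with a locally running agent.
response = requests.get("http://localhost:42042/robot_agents/robots", timeout=5)
response.raise_for_status()
for entry in response.json()["topics"]:
    print(f"{entry['name']} -> {entry['topic']}")
```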
The agent registers with ProcessCube as an external task subscriber:

# Example task execution
task_handler.execute(
payload={
"order_id": "ORD-123",
"customer": "Acme"
},
task={
"id": "task-123",
"topic": "rcc/webui"
}
)

{
"error": {
"code": "unwrap_failed",
"message": "Failed to unwrap robot package",
"details": {
"return_code": 1,
"robot_path": "robots/installed/rcc/webui.zip"
}
}
}

# 1. Clone repository
git clone <repo-url>
cd processcube-robot-agent
# 2. Create Python venv
python -m venv venv
source venv/bin/activate # Linux/macOS
# or
venv\Scripts\activate # Windows
# 3. Install dependencies
pip install -r requirements.txt
# 4. Check RCC
rcc version

# Enable debug mode in config.dev.json:
{
"debugging": {
"enabled": true,
"hostname": "localhost",
"port": 5678,
"wait_for_client": true
}
}
# Start service
npm run processcube_robot_agent
# Connect debugger (PyCharm/VSCode)
# Connect to localhost:5678
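How the `debugging` section is wired to a debug server is an internal detail of the service; conceptually it could map onto `debugpy` roughly like this sketch (illustrative only, not the project's actual code):

```python
import debugpy


def enable_remote_debugging(config: dict) -> None:
    """Start a debugpy server if the 'debugging' config section enables it (sketch)."""
    dbg = config.get("debugging", {})
    if not dbg.get("enabled", False):
        return
    debugpy.listen((dbg.get("hostname", "localhost"), dbg.get("port", 5678)))
    if dbg.get("wait_for_client", False):
        # Block startup until PyCharm/VSCode attaches to the port above.
        debugpy.wait_for_client()
```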
# In processcube_robot_agent/__main__.py
import logging
logging.basicConfig(
level=logging.DEBUG,
format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

# Service with verbose output
LOG_LEVEL=DEBUG npm run processcube_robot_agent
# RCC debug output
RCC_DEBUG=true npm run processcube_robot_agent
# Follow service logs live
tail -f ~/.processcube/robot-agent/logs.txt

With automatic reload on file changes:
# config.dev.json:
{
"rcc": {
"start_watch_project_dir": true,
"project_dir": "robots/src/rcc"
}
}
# Start service - change a robot and watch auto-reload
npm run processcube_robot_agent

File changes are detected (see the sketch after this list):
- New robot.yaml → Robot is packed and registered
- Changed robot.yaml → Repack and re-register
- New tasks.robot → Automatically repack
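A minimal sketch of how such a watcher could be implemented with the `watchdog` library follows; the project's own `project_watcher.py` may work differently, and the repack call is only indicated as a comment:

```python
import time

from watchdog.events import FileSystemEventHandler
from watchdog.observers import Observer


class RobotSourceHandler(FileSystemEventHandler):
    """React to changes below the robot source directory (illustrative sketch)."""

    def on_any_event(self, event):
        if event.is_directory:
            return
        if event.src_path.endswith((".robot", ".yaml")):
            print(f"Change detected: {event.src_path} -> repack and re-register robots")
            # The real watcher would trigger packing (the equivalent of `npm run pack`) here.


observer = Observer()
observer.schedule(RobotSourceHandler(), path="robots/src/rcc", recursive=True)
observer.start()
try:
    while True:
        time.sleep(1)
finally:
    observer.stop()
    observer.join()
```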
# 1. Test robot locally
cd robots/src/rcc/webui
robot --task WebUIExample tasks.robot
# 2. Pack robot with RCC
rcc robot wrap -z robots/src/rcc/webui
# 3. Unpack robot and inspect
rcc robot unwrap -z robots/installed/rcc/webui.zip -d temp/webui
# 4. Execute with RCC
rcc run -c robots/src/rcc/webui

# Dependencies
pip install pytest pytest-asyncio pytest-cov
# Write tests (not yet created!)
mkdir tests
cat > tests/test_robot_agent.py << 'EOF'
import pytest
from unittest.mock import Mock, patch
from processcube_robot_agent.robot_agent.rcc.robot_agent import RobotAgent
def test_execute_missing_robot():
agent = RobotAgent()
with pytest.raises(Exception):
agent.execute({}, {"task_id": "test"})
EOF
# Run tests
pytest tests/ -v --cov

# Installation
pip install mypy types-all
# Type checking
mypy processcube_robot_agent/
# With configuration
cat > mypy.ini << 'EOF'
[mypy]
python_version = 3.9
warn_return_any = True
warn_unused_configs = True
disallow_untyped_defs = True
EOF
mypy processcube_robot_agent/

Problem: ModuleNotFoundError: No module named 'processcube_robot_agent'
# Solution 1: Set PYTHONPATH
export PYTHONPATH=$(pwd)
npm run processcube_robot_agent
# Solution 2: Reinstall dependencies
pip install -r requirements.txt --force-reinstall
# Solution 3: Use virtual environment
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
npm run processcube_robot_agent

Problem: rcc: command not found
# Download RCC
cd /tmp
wget https://github.com/robocorp/rcc/releases/download/v12.x.x/rcc-linux-64bit
chmod +x rcc
sudo mv rcc /usr/local/bin/
# Or add to PATH
export PATH=$PATH:/path/to/rcc/directory
# Verify
rcc version

Problem: Robot not visible in list after adding
# 1. Watch mode enabled?
# config.dev.json: "start_watch_project_dir": true
# 2. Pack robots manually
npm run pack
# 3. Restart service
npm run processcube_robot_agent
# 4. List robots
curl http://localhost:42042/robot_agents/robots

Problem: Robot fails, output not visible
# 1. Check logs
tail -100 ~/.processcube/robot-agent/logs.txt
# 2. Test robot locally
cd robots/src/rcc/my-robot
robot tasks.robot
# 3. Check output
open output/log.html
# 4. Read RCC output
rcc run -c robots/src/rcc/my-robot
# 5. Check debug JSON
cat robots/installed/rcc/my-robot/output.xml

Problem: Service running, but ProcessCube can't find it
# 1. Check service URL
curl http://localhost:42042/robot_agents/robots
# Should succeed
# 2. Check configuration
cat config.dev.json
# rest_api.host and rest_api.port correct?
# 3. Firewall/Network
# Port 42042 open from ProcessCube?
netstat -tuln | grep 42042
# 4. Check ProcessCube config
# ProcessCube should be configured with:
# http://<agent-host>:42042

Problem: "Robot" task type not available in Studio
# 1. Extension built?
cd studio_extension
npm run build
# 2. out/index.js exists?
ls -la out/index.js
# 3. Studio debug mode
5minds-studio --extension-development-dir=./studio_extension
# 4. Check browser console (F12)
# Look for errors in Extensions

Problem: Address already in use
# Find and kill process
lsof -i :42042
kill -9 <PID>
# Or use different port
cat config.dev.json | sed 's/42042/42043/' > config.dev.json.new
mv config.dev.json.new config.dev.json
npm run processcube_robot_agent

Current Status: ✅ PRODUCTION READY
- All critical security issues fixed
- 360 tests with 100% pass rate (279 Python + 81 TypeScript)
- 85% type hints coverage
- 90% docstring coverage
- 0 npm vulnerabilities
- 20 packages modernized
Completed Improvements:
- ✅ Shell injection vulnerabilities closed
- ✅ Unit tests added (114 tests)
- ✅ Dependencies updated (20 packages)
- ✅ Type hints added (85%)
- ✅ Docstrings added (90%)
- ✅ Production build for Studio extension (Webpack, 0 errors)
Detailed Analysis: See ANALYSIS.md and PROJECT_STATUS.md
- Backend service with RPA executor
- Main file: `processcube_robot_agent/__main__.py`
- Documentation: See Robot Development Details
- Robot Framework projects
- Structure: `robots/src/rcc/`
- Packaging: Automatically via RCC or `npm run pack`
- TypeScript/React IDE integration
- Build: `npm run build` in `studio_extension/` directory
- Installation: In 5Minds Studio
- Write tests - Build up the `tests/` directory
- Update dependencies - Keep modern
- Error handling - Increase robustness
- Documentation - Add docstrings
- Logging - Improve debuggability
# Feature
git commit -m "feat: add robot auto-discovery"
# Bug fix
git commit -m "fix: shell injection vulnerability in subprocess calls"
# Documentation
git commit -m "docs: add testing guide"
# Refactor
git commit -m "refactor: simplify factory builder"
# Tests
git commit -m "test: add unit tests for robot_agent.py"

- Fork repository
- Create feature branch: `git checkout -b feature/amazing-feature`
- Commit changes: `git commit -m "feat: ..."`
- Push branch: `git push origin feature/amazing-feature`
- Open pull request
- Tests must pass
- Perform code review
Apache 2.0 License - See LICENSE for details
- Check logs - See Troubleshooting
- Simple example - Test webui robot
- Isolate - Reproduce problem
- GitHub Issue - https://github.com/5minds/processcube-robot-agent/issues
- Robot Framework: https://robotframework.org/
- RPA Framework: https://rpaframework.org/
- ProcessCube: https://processcube.io/
- 5Minds Studio: https://docs.5minds.de/
- Robocorp (RCC): https://robocorp.com/
*** Settings ***
Library RPA.Browser.Selenium
Library Collections
*** Variables ***
${BROWSER} Chrome
${URL} https://app.example.com
*** Tasks ***
Automated Login
Open Available Browser ${URL} ${BROWSER}
Input Text id:username myuser@example.com
Input Text id:password SecurePassword123
Click Button id:login-btn
Wait Until Page Contains Dashboard
Close Browser

*** Settings ***
Library RPA.Excel.Files
Library RPA.HTTP
Library Collections
*** Tasks ***
Process Excel Data
Open Workbook data.xlsx
${data}= Read Worksheet sheet=Orders as_table=True
Close Workbook
FOR ${row} IN @{data}
Log Processing order ${row}[order_id]
${status}= Process Order ${row}[order_id] ${row}[customer]
Log Order status: ${status}
END
*** Keywords ***
Process Order
[Arguments] ${order_id} ${customer}
Log Custom processing logic here
RETURN completed

*** Settings ***
Library RPA.HTTP
Library RPA.JSON
*** Tasks ***
Fetch And Process API Data
${response}= Get Request https://api.example.com/customers
${json}= Evaluate ${response.text}
Set Work Item Variable customer_count ${json.__len__()}
FOR ${customer} IN @{json}
Log Customer: ${customer}[name]
Set Work Item Variable last_customer ${customer}[name]
END

Documentation Version: 1.0
Last Updated: November 2025
Status: Production Ready (with upcoming improvements)