Skip to content

ProcessCube integration for robot-based RPA automation with dual Robot Framework and Python execution

License

Notifications You must be signed in to change notification settings

5minds/processcube-robot-agent

Folders and files

NameName
Last commit message
Last commit date

Latest commit

Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 
Β 

Repository files navigation

ProcessCube Robot Agent

An RPA integration solution that connects Robot Framework-based and pure Python automations with ProcessCube workflow engines.

🎯 Overview

The ProcessCube Robot Agent project is a comprehensive solution for integrating Robotic Process Automation (RPA) with ProcessCube, a Business Process Management (BPM) system. It supports two flexible approaches for robot development:

  1. RCC-based Robots (Robot Framework) - For UI automation and text-driven processes
  2. UV-based Robots (Pure Python) - For APIs, data processing, and Python libraries

Components

  1. processcube_robot_agent - A Python-based microservice that manages and executes RPA robots
  2. robots - A collection of automation tasks (RCC and UV)
  3. studio_extension - A TypeScript/React extension for the 5Minds Studio IDE

Two Approaches to Robot Development

The system supports both approaches in parallel without requiring migration:

Approach Type Best For Learn More
RCC Robot Framework (Text) UI automation, web scraping README.md - RCC Guide
UV Pure Python APIs, data processing, microservices UV_ROBOT_CREATION_GUIDE.md

Unsure which approach? β†’ See QUICK_START.md - Comparison Table

Architecture

β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚    ProcessCube Engine           β”‚ (Workflow Engine)
β”‚    (BPMN Processes)             β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
           β”‚
      External Tasks
           β”‚
β”Œβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β–Όβ”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”
β”‚  Robot Agent Service (Python)   β”‚ (REST API Port 42042)
β”‚  β”œβ”€β”€ Task Handler               β”‚
β”‚  β”œβ”€β”€ RCC Runner                 β”‚
β”‚  └── File Watcher (Auto-reload) β”‚
β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”¬β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
           β”‚
           β”œβ”€β–Ί RPA Robots (.zip packages)
           β”‚   β”œβ”€β”€ Web UI Automation
           β”‚   β”œβ”€β”€ Windows UI Automation
           β”‚   └── Custom Tasks
           β”‚
           └─► RCC (Robot Code Compiler)
               └── Robocorp's Robot Packaging Tool

πŸ“‹ Table of Contents

  1. Installation & Setup
  2. Quick Start
  3. Configuration
  4. Project Structure
  5. Robot Development
  6. Studio Extension
  7. API Documentation
  8. Development & Debugging
  9. Troubleshooting
  10. Contributing

πŸš€ Installation & Setup

Prerequisites

  • Python 3.8 or higher
  • Node.js 14.x or higher (for Studio extension)
  • npm 6.x or higher
  • RCC (Robocorp Command Center) - Download: https://github.com/robocorp/rcc
  • Git

Installation

1. Clone the repository

git clone https://github.com/5minds/processcube-robot-agent.git
cd processcube-robot-agent

2. Install Python dependencies

# Option A: With npm scripts (recommended)
npm install

# Option B: Directly with pip
pip install -r requirements.txt

3. Install and validate RCC

# Download RCC and place in PATH
# https://github.com/robocorp/rcc/releases

# Verification:
rcc version

4. Install Node dependencies

npm install

5. Build Studio extension (optional)

cd studio_extension
npm ci
npm run build
cd ..

First Execution

# Start the Robot Agent Service
npm run processcube_robot_agent

# Output should look similar to:
# INFO:     Started server process [12345]
# INFO:     Waiting for application startup.
# 2025-11-17 18:49:34,089 - processcube.external_tasks - INFO - Starting external task worker for topic 'win.test'
# 2025-11-17 18:49:34,089 - processcube.external_tasks - INFO - Starting external task worker for topic 'win.webui'
# ...
# INFO:     Application startup complete

🚒 Deployment Guide

Production Readiness

The project is production-ready with the following quality metrics:

Metric Status Details
Total Tests βœ… 360/360 100% Pass Rate
Python Tests βœ… 279/279 100% Pass Rate (216 unit + 63 integration)
TypeScript Tests βœ… 81/81 100% Pass Rate
Type Hints βœ… 85% Python Code Coverage
Security βœ… Safe Shell injection fixes, 0 npm vulnerabilities
Dependencies βœ… Modern 20 packages updated

Deployment Steps

1. Verify prerequisites

# Check system requirements
python --version          # >= 3.8
node --version           # >= 14.x
npm --version            # >= 6.x
rcc version              # Installed

# Install dependencies
npm install
pip install -r requirements.txt

2. Create configuration

# Production configuration (config.prod.json)
cat > config.prod.json << 'EOF'
{
    "debugging": {
        "enabled": false,
        "hostname": "localhost",
        "port": 5678,
        "wait_for_client": false
    },
    "engine": {
        "url": "http://processcube-engine:56100"
    },
    "rcc": {
        "topic_prefix": "robot",
        "wrap_dir": "robots/installed/rcc",
        "unwrap_dir": "temp/robots/rcc/unwrapped",
        "start_watch_project_dir": false,
        "project_dir": "robots/src/rcc"
    },
    "rest_api": {
        "port": 42042,
        "host": "0.0.0.0"
    }
}
EOF

3. Pack robots

# Prepare all robots (before deployment)
npm run pack

# Output: Robots in robots/installed/rcc/*.zip
# Verification:
ls -lh robots/installed/rcc/

4. Run tests

# All tests (before production release)
npm test

# Or separately:
npm run test:python          # Python tests (98 tests)
npm run test:typescript      # TypeScript tests (81 tests)

# With coverage:
npm run test:coverage

5. Start service

# Option A: Direct (simple)
CONFIG_FILE=$(pwd)/config.prod.json npm run processcube_robot_agent

# Option B: Docker (if available)
docker run -d \
  -e CONFIG_FILE=/app/config.prod.json \
  -p 42042:42042 \
  -v $(pwd)/config.prod.json:/app/config.prod.json \
  -v $(pwd)/robots:/app/robots \
  processcube-robot-agent:latest

# Option C: Systemd service (Linux)
sudo systemctl start processcube-robot-agent
sudo systemctl enable processcube-robot-agent

Deployment Verification

# 1. Health check: Is the service reachable?
curl -s http://localhost:42042/robot_agents/robots | jq .

# Expected:
# {
#   "topics": [
#     { "name": "...", "topic": "robot/..." },
#     ...
#   ]
# }

# 2. Are robots registered?
curl -s http://localhost:42042/robot_agents/robots | jq '.topics | length'
# Should be > 0

# 3. Is ProcessCube engine reachable?
# Check agent URL in ProcessCube engine configuration
# External task workers should be connected to engine

# 4. Check logs
tail -f /var/log/processcube-robot-agent/service.log

Monitoring & Logging

Enable logging

# Production logging (config.prod.json):
{
  "logging": {
    "level": "INFO",
    "format": "json",
    "output": "/var/log/processcube-robot-agent/service.log"
  }
}

Live logs

# Follow service logs
tail -100f ~/.processcube/robot-agent/logs.txt

# Errors only
grep ERROR ~/.processcube/robot-agent/logs.txt

# Robot executions
grep "Starting external task" ~/.processcube/robot-agent/logs.txt

Performance monitoring

# Service resource usage
top -p $(pgrep -f "processcube_robot_agent")

# Processed tasks
curl http://localhost:42042/metrics  # If Prometheus is integrated

# Open connections
netstat -an | grep 42042

Backup & Recovery

Back up robots

# Backup: Installed robots
tar -czf robots-backup-$(date +%Y%m%d).tar.gz robots/installed/

# Backup: Source robots
tar -czf robots-source-backup-$(date +%Y%m%d).tar.gz robots/src/

# Restore:
tar -xzf robots-backup-20251117.tar.gz
npm run pack

Back up configuration

# Backup
cp config.prod.json config.prod.json.backup

# Restore
cp config.prod.json.backup config.prod.json
systemctl restart processcube-robot-agent

Production Troubleshooting

Service won't start

# 1. Check logs
journalctl -u processcube-robot-agent -n 50

# 2. Validate configuration
python -m json.tool config.prod.json

# 3. Check dependencies
pip check
npm audit

# 4. Is port available?
netstat -tuln | grep 42042

External tasks not registered

# 1. Is ProcessCube URL reachable?
curl -v http://processcube-engine:56100/health

# 2. Are robots present?
curl http://localhost:42042/robot_agents/robots

# 3. Restart service
systemctl restart processcube-robot-agent

# 4. Check logs for errors
journalctl -u processcube-robot-agent -p err

Memory leak / Performance issues

# 1. Restart service
systemctl restart processcube-robot-agent

# 2. Clear temp directory
rm -rf temp/robots/rcc/unwrapped/*

# 3. Repack robot caches
npm run pack

# 4. Enable monitoring
CONFIG_FILE=config.prod.json DEBUG=true npm run processcube_robot_agent

Scaling & High Availability

Multiple agent instances

# Agent 1 (Port 42042)
CONFIG_FILE=config.prod-1.json npm run processcube_robot_agent &

# Agent 2 (Port 42043)
CONFIG_FILE=config.prod-2.json npm run processcube_robot_agent &

# Load balancer (nginx.conf)
upstream robot_agents {
    server localhost:42042;
    server localhost:42043;
}

server {
    listen 42040;
    location / {
        proxy_pass http://robot_agents;
    }
}

Health check endpoint

# In rest_api_command.py
@webapp.get("/health")
async def health_check():
    return {
        "status": "healthy",
        "timestamp": datetime.now().isoformat(),
        "robots_registered": len(get_registered_robots())
    }

Update & Rollback

Perform update

# 1. Back up current code
git stash

# 2. Pull new code
git pull origin main

# 3. Update dependencies
npm install
pip install -r requirements.txt

# 4. Run tests
npm test

# 5. Restart service
systemctl restart processcube-robot-agent

# 6. Verify
curl http://localhost:42042/robot_agents/robots

Rollback on error

# 1. Stop service
systemctl stop processcube-robot-agent

# 2. Revert code
git revert HEAD

# 3. Start service
systemctl start processcube-robot-agent

# 4. Verify
journalctl -u processcube-robot-agent -n 20

🐳 Docker Image Configuration & Usage

The project includes a Dockerfile for containerized deployment. Docker images are automatically built by GitHub Actions and pushed to GitHub Container Registry (ghcr.io).

Build Docker image

# Build locally
docker build -t processcube-robot-agent:latest .

# With version tag
docker build -t processcube-robot-agent:0.1.0 .

# With multi-architecture support (ARM64/AMD64)
docker buildx build --platform linux/amd64,linux/arm64 \
  -t processcube-robot-agent:latest .

Start Docker container

# Basic: With configuration file and robots directory
docker run -d \
  --name robot-agent \
  -p 42042:42042 \
  -e CONFIG_FILE=/app/config.json \
  -v $(pwd)/config.json:/app/config.json \
  -v $(pwd)/robots:/app/robots \
  processcube-robot-agent:latest

# With ProcessCube engine URL
docker run -d \
  --name robot-agent \
  -p 42042:42042 \
  -e CONFIG_FILE=/app/config.json \
  -e PROCESSCUBE_ENGINE_URL=http://processcube-engine:56100 \
  -v $(pwd)/config.json:/app/config.json \
  -v $(pwd)/robots:/app/robots \
  processcube-robot-agent:latest

# With Docker Compose
docker-compose up -d

Docker Compose (docker-compose.yml)

version: '3.8'

services:
  robot-agent:
    image: ghcr.io/5minds/processcube-robot-agent:latest
    container_name: processcube-robot-agent
    ports:
      - "42042:42042"
    environment:
      CONFIG_FILE: /app/config.json
      PROCESSCUBE_ENGINE_URL: http://processcube-engine:56100
      LOG_LEVEL: INFO
    volumes:
      - ./config.json:/app/config.json
      - ./robots:/app/robots
      - robot-agent-logs:/var/log/processcube-robot-agent
    depends_on:
      - processcube-engine
    restart: unless-stopped
    healthcheck:
      test: ["CMD", "curl", "-f", "http://localhost:42042/robot_agents/robots"]
      interval: 30s
      timeout: 10s
      retries: 3
      start_period: 40s

  processcube-engine:
    image: processcube/engine:latest
    container_name: processcube-engine
    ports:
      - "56100:56100"
    environment:
      DATABASE_URL: postgresql://postgres:postgres@postgres:5432/processcube
    depends_on:
      - postgres
    restart: unless-stopped

  postgres:
    image: postgres:15-alpine
    container_name: processcube-postgres
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
      POSTGRES_DB: processcube
    volumes:
      - postgres-data:/var/lib/postgresql/data
    restart: unless-stopped

volumes:
  robot-agent-logs:
  postgres-data:

Docker image tags on ghcr.io

GitHub Actions automatically pushes the following tags:

# After git push main
ghcr.io/5minds/processcube-robot-agent:main
ghcr.io/5minds/processcube-robot-agent:latest
ghcr.io/5minds/processcube-robot-agent:<commit-sha>

# After release tag (e.g., v0.1.0)
ghcr.io/5minds/processcube-robot-agent:0.1.0
ghcr.io/5minds/processcube-robot-agent:0.1
ghcr.io/5minds/processcube-robot-agent:<commit-sha>

Docker image configuration

The following environment variables are supported:

Variable Default Description
CONFIG_FILE /app/config.json Path to configuration file
PROCESSCUBE_ENGINE_URL - ProcessCube engine URL (optional)
LOG_LEVEL INFO Logging level (DEBUG, INFO, WARNING, ERROR)
ROBOT_TIMEOUT 300 Robot execution timeout (seconds)
RCC_DEBUG false Enable RCC debug output
PYTHONUNBUFFERED 1 Disable Python buffering

Docker container mounting

# Robots from host
-v /path/to/robots:/app/robots

# Configuration from host
-v /path/to/config.json:/app/config.json:ro

# Persist logs
-v robot-agent-logs:/var/log/processcube-robot-agent

# Temp directory (for RCC unwrapped)
-v robot-agent-temp:/app/temp

Docker image security

# Run as non-root user (automatic in image)
docker run -u 1000:1000 \
  -v $(pwd)/robots:/app/robots \
  processcube-robot-agent:latest

# With read-only filesystem (except /tmp, /var)
docker run --read-only \
  --tmpfs /tmp \
  --tmpfs /var/tmp \
  -v $(pwd)/robots:/app/robots:ro \
  processcube-robot-agent:latest

Optimize Docker image

The standard image is ~500MB with all dependencies. For smaller images:

# Production image (multi-stage build)
# Usage: docker build -f Dockerfile.prod -t processcube-robot-agent:prod .

Docker troubleshooting

# View container logs
docker logs robot-agent
docker logs -f robot-agent  # Live logs

# SSH into container
docker exec -it robot-agent sh

# Container status
docker ps | grep robot-agent
docker inspect robot-agent | jq '.[0].State'

# Health check
docker inspect --format='{{.State.Health.Status}}' robot-agent

# Check port
docker port robot-agent

Docker image for development

# Development build with additional tools
docker build -f Dockerfile.dev -t processcube-robot-agent:dev .

# With mounted source code for live reload
docker run -d \
  -v $(pwd):/app \
  -v /app/venv  # Exclude venv
  processcube-robot-agent:dev


⚑ Quick Start

1. Start the service

# Terminal 1: Start the Robot Agent Service
npm run processcube_robot_agent

# Output should look similar to:
# INFO:     Started server process [12345]
# INFO:     Waiting for application startup.
# INFO:     Starting external task worker for topic 'rcc.webui'
# INFO:     Starting external task worker for topic 'rcc.test'
# ...
# INFO:     Application startup complete

# Service runs at http://localhost:42042

2. Stop the service

# Option A: In the same terminal (Terminal 1)
# Press: Ctrl+C

# Option B: From another terminal (Terminal 2)
npm run stop

# Option C: Force stop (if hung)
npm run stop:force

3. Check available robots

# Terminal 2: List all robots
curl http://localhost:42042/robot_agents/robots

# Output:
# {
#   "topics": [
#     {"name": "webui", "topic": "rcc/webui"}
#   ]
# }

4. Connect with ProcessCube

# ProcessCube engine must be able to register with a known service
# The agent is then available at the configured URL
# (Default: http://localhost:42042)

5. Create your own robot

# 1. Create a new robot folder
mkdir robots/src/rcc/my-robot
cd robots/src/rcc/my-robot

# 2. Create robot template
cat > robot.yaml << 'EOF'
tasks:
  MyTask:
    robotTaskName: My Custom Task
condaConfigFile: conda.yaml
artifactsDir: output
PATH: [.]
PYTHONPATH: [.]
EOF

# 3. Define tasks
cat > tasks.robot << 'EOF'
*** Settings ***
Library    RPA.Browser.Selenium

*** Tasks ***
MyTask
    Log    Hello World!
EOF

# 4. Define conda environment
cat > conda.yaml << 'EOF'
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pip
  - pip:
    - rpaframework>=15.1.4
EOF

# The robot will be automatically packed and registered on the next service start

βš™οΈ Configuration

Configuration File

Service configuration is done through JSON files in the root directory:

  • config.dev.json - Linux/macOS development
  • config.dev-win.json - Windows development
  • Environment variable: CONFIG_FILE defines which file is loaded

Configuration Structure

{
  "debugging": {
    "enabled": true,
    "hostname": "localhost",
    "port": 5678,
    "wait_for_client": false
  },
  "engine": {
    "url": "http://localhost:56100"
  },
  "rcc": {
    "topic_prefix": "rcc",
    "wrap_dir": "robots/installed/rcc",
    "unwrap_dir": "temp/robots/rcc/unwrapped",
    "start_watch_project_dir": true,
    "project_dir": "robots/src/rcc"
  },
  "rest_api": {
    "port": 42042,
    "host": "0.0.0.0"
  }
}

Configuration Parameters Explained

Parameter Description Default
debugging.enabled Enable debug mode (port 5678) false
engine.url ProcessCube engine URL -
rcc.topic_prefix Prefix for robot topics robot_task
rcc.wrap_dir Output directory for packed robots robots/installed/rcc
rcc.unwrap_dir Temp directory during unpacking temp/robots/rcc/unwrapped
rcc.start_watch_project_dir Auto-reload on file changes true
rcc.project_dir Robot source directory robots/src/rcc
rest_api.port Service port 42042
rest_api.host Listen address 0.0.0.0

Environment Variables

# Select configuration file
export CONFIG_FILE=/path/to/config.json

# Python path for imports
export PYTHONPATH=/path/to/processcube-robot-agent

# RCC debug output
export RCC_DEBUG=true

Multiple Configurations

For different environments (dev, staging, prod):

# Development
CONFIG_FILE=./config.dev.json npm run processcube_robot_agent

# Production
CONFIG_FILE=./config.prod.json npm run processcube_robot_agent

πŸ“ Project Structure

processcube-robot-agent/
β”‚
β”œβ”€β”€ processcube_robot_agent/          # Backend microservice (Python)
β”‚   β”œβ”€β”€ __main__.py                   # CLI entry point
β”‚   β”œβ”€β”€ rest_api_command.py           # REST API server
β”‚   β”œβ”€β”€ pack_robots_command.py        # Packaging command
β”‚   β”œβ”€β”€ watch_robots_command.py       # File watcher command
β”‚   β”‚
β”‚   β”œβ”€β”€ robot_agent/                  # Agent logic
β”‚   β”‚   β”œβ”€β”€ base_agent.py             # Abstract base class
β”‚   β”‚   β”œβ”€β”€ builder.py                # Factory builder
β”‚   β”‚   β”œβ”€β”€ error.py                  # Custom exceptions
β”‚   β”‚   β”‚
β”‚   β”‚   └── rcc/                      # RCC implementation
β”‚   β”‚       β”œβ”€β”€ robot_agent.py        # Main execution engine
β”‚   β”‚       β”œβ”€β”€ rcc_runner.py         # RCC validator
β”‚   β”‚       β”œβ”€β”€ robot_task_handler_factory.py  # Factory pattern
β”‚   β”‚       β”œβ”€β”€ project_packer.py     # Robot packaging
β”‚   β”‚       └── project_watcher.py    # Hot-reload watcher
β”‚   β”‚
β”‚   β”œβ”€β”€ external_task/                # ProcessCube integration
β”‚   β”‚   └── robot_task_handler.py     # External task handler
β”‚   β”‚
β”‚   └── rest_api/                     # HTTP endpoints
β”‚       └── robots.py                 # Robot list endpoint
β”‚
β”œβ”€β”€ robots/                           # RPA robot definitions
β”‚   β”œβ”€β”€ src/rcc/                      # Source robots
β”‚   β”‚   β”œβ”€β”€ webui/                    # Web UI automation
β”‚   β”‚   β”œβ”€β”€ windows/
β”‚   β”‚   β”‚   └── ui/                   # Windows UI automation
β”‚   β”‚   β”œβ”€β”€ windows-example-calculator/
β”‚   β”‚   └── web-example-rpa-challenge/
β”‚   β”‚
β”‚   β”œβ”€β”€ installed/                    # Packed, ready-to-run robots
β”‚   β”‚   └── rcc/                      # RCC-packed .zip files
β”‚   β”‚
β”‚   └── backup/                       # Robot backups
β”‚       └── readexcel/                # Excel read example
β”‚
β”œβ”€β”€ studio_extension/                 # TypeScript/React IDE extension
β”‚   β”œβ”€β”€ index.ts                      # Extension entry point
β”‚   β”œβ”€β”€ robotServiceType/             # Robot service type UI
β”‚   β”‚   β”œβ”€β”€ initializeServiceTypeRobot.ts
β”‚   β”‚   β”œβ”€β”€ PropertiesRobotTaskPane.tsx
β”‚   β”‚   β”œβ”€β”€ PropertiesRobotTaskPaneContent.tsx
β”‚   β”‚   β”œβ”€β”€ fetchRobots.ts
β”‚   β”‚   └── PropertiesRobotServiceTask.md
β”‚   β”‚
β”‚   β”œβ”€β”€ agentSettings/                # Agent configuration UI
β”‚   β”‚   β”œβ”€β”€ initializeAgentSettingsEditor.ts
β”‚   β”‚   β”œβ”€β”€ getRobotAgents.ts
β”‚   β”‚   β”œβ”€β”€ RobotAgentsConfigDocument.ts
β”‚   β”‚   └── RobotAgentsConfigEditor.tsx
β”‚   β”‚
β”‚   β”œβ”€β”€ package.json
β”‚   β”œβ”€β”€ tsconfig.json
β”‚   β”œβ”€β”€ webpack.config.js
β”‚   └── README.md
β”‚
β”œβ”€β”€ processes/                        # Example BPMN processes
β”‚   β”œβ”€β”€ RobotTask.bpmn
β”‚   └── .processcube/
β”‚
β”œβ”€β”€ package.json                      # Root NPM configuration
β”œβ”€β”€ requirements.txt                  # Python dependencies
β”œβ”€β”€ config.dev.json                   # Linux/macOS dev configuration
β”œβ”€β”€ config.dev-win.json               # Windows dev configuration
β”œβ”€β”€ start_on_windows.sh               # Windows startup script
β”‚
└── README.md                         # This file

πŸ€– Robot Development

πŸ”€ Side-by-Side Comparison: RCC vs UV

Both approaches solve automation tasks, but with different strengths:

RCC (Robot Framework)

*** Settings ***
Library    RPA.Browser.Selenium
Library    RPA.HTTP

*** Tasks ***
Login And Process
    Open Browser    https://example.com    chrome
    Input Text    id:username    admin
    Input Text    id:password    pw123
    Click Button   xpath://button[@type='submit']

    ${response}=    Get Request    https://api.example.com/process
    Log    ${response.status_code}
    Close Browser

UV (Pure Python)

import requests
from robocorp.workitems import inputs, outputs
from selenium import webdriver

def main():
    for input_item in inputs:
        payload = input_item.payload

        # Web Automation
        driver = webdriver.Chrome()
        driver.get("https://example.com")
        driver.find_element("id", "username").send_keys("admin")
        driver.find_element("id", "password").send_keys("pw123")
        driver.find_element("xpath", "//button[@type='submit']").click()

        # API Request
        response = requests.post("https://api.example.com/process")

        driver.quit()

        # Output
        result = {
            "status": response.status_code,
            "data": payload,
            "processed": True
        }
        outputs.create(result).save()

if __name__ == "__main__":
    main()

When to use which approach?

Scenario RCC UV Reason
Web UI Automation βœ… Better ⚠️ Possible RPA.Browser optimized for UI automation
REST APIs βœ… Possible βœ… Better Python Requests/httpx are native
Data Processing βœ… Good βœ… Better Pandas, NumPy, etc. are Python-native
Legacy System RPA βœ… Better ❌ Difficult Windows UI, SAP, etc. require RPA Framework
Microservices ⚠️ Overhead βœ… Better Lightweight, fast, easy to deploy
Complex Logic ⚠️ Verbose βœ… Better Python is more understandable for developers

Robot Framework Basics

Robot Framework is a Python-based, text-driven automation tool with robot-readable syntax:

*** Settings ***
Library    Collections
Library    RPA.Browser.Selenium

*** Variables ***
${USERNAME}    admin
${PASSWORD}    pw123

*** Tasks ***
Login And Verify
    Open Browser    https://example.com/login    chrome
    Input Text    id:username    ${USERNAME}
    Input Text    id:password    ${PASSWORD}
    Click Button   xpath://button[@type='submit']
    Page Should Contain    Welcome

*** Keywords ***
Login As User
    [Arguments]    ${user}    ${pass}
    Input Text    id:username    ${user}
    Input Text    id:password    ${pass}
    Click Button   xpath://button[@type='submit']

Robot Project Structure

A minimal robot with the required files:

my-robot/
β”œβ”€β”€ robot.yaml              # Robot metadata
β”œβ”€β”€ tasks.robot             # Task definitions
β”œβ”€β”€ conda.yaml              # Dependencies
β”œβ”€β”€ locators.json           # Optional: UI element locators
└── output/                 # Output directory (system-created)
    β”œβ”€β”€ output.xml          # Test results
    β”œβ”€β”€ log.html            # HTML log
    └── report.html         # Test report

robot.yaml - Configuration

# Task definitions
tasks:
  TaskName:
    robotTaskName: Display name for ProcessCube
  SecondTask:
    robotTaskName: Second Task

# Conda environment configuration
condaConfigFile: conda.yaml

# Output directory
artifactsDir: output

# Path variables
PATH: [.]
PYTHONPATH: [.]

tasks.robot - Task Definition

*** Settings ***
Library    RPA.Browser.Selenium
Library    RPA.HTTP
Library    Collections
Resource   common.robot

*** Tasks ***
WebUI Example
    Open Available Browser    https://www.example.com
    Click Element When Visible    css=.cookie-accept
    Take Screenshot    full
    Close Browser

Process Data
    ${data}=    Get Request    https://api.example.com/data
    ${json}=    Evaluate    ${data.text}
    Log    ${json}[0][name]

conda.yaml - Dependencies

channels:
  - conda-forge
  - defaults

dependencies:
  - python=3.9
  - pip
  - chromium

  - pip:
    - rpaframework>=15.1.4
    - robotframework>=5.0.1
    - robotframework-tidy
    - selenium>=4.0.0

Work Items (Input/Output)

Robot Framework works with Work Items for structured data management:

Input in tasks.robot

*** Settings ***
Library    RPA.Robocorp.Process

*** Tasks ***
Process Purchase Order
    ${order_data}=    Get Work Item Variable    order_id
    ${customer}=      Get Work Item Variable    customer_name

    Log    Processing order ${order_id} for ${customer}
    # ... further processing ...

    Set Work Item Variable    status    completed
    Set Work Item Variable    result_data    ${result}

JSON Input from ProcessCube

{
  "order_id": "ORD-12345",
  "customer_name": "Acme Corp",
  "items": [
    {"sku": "ITEM-001", "qty": 5}
  ]
}

Test robots locally

# Execute single task
cd robots/src/rcc/my-robot
robot --task TaskName tasks.robot

# All tasks
robot tasks.robot

# With RCC (as in production)
rcc robot run --task TaskName

# Check outputs
open output/log.html

Best Practices for Robot Development

  1. Clear task names - Meaningful names in robot.yaml
  2. Error handling - Run Keyword If and error handling
  3. Logging - Sufficient log output for debugging
  4. Modularity - Keywords for reusable code
  5. Separate locators - locators.json for maintainability
  6. Timeouts - Explicit timeouts for stability
  7. Screenshots - On errors for debugging

🐍 UV Robot Development (Pure Python)

For APIs, data processing, and modern Python-based automations, the system also offers UV Robots - pure Python implementations without Robot Framework overhead.

Quick Start - Create UV Robot

# 1. Create directory
mkdir robots/src/uv/my-api-robot
cd robots/src/uv/my-api-robot

# 2. Create pyproject.toml
cat > pyproject.toml << 'EOF'
[project]
name = "my-api-robot"
version = "0.1.0"
requires-python = ">=3.11"
dependencies = [
    "robocorp-workitems>=1.0.0",
    "requests>=2.31.0",
    "pandas>=2.0.0",
]
EOF

# 3. Create main.py with business logic
cat > main.py << 'EOF'
import logging
import requests
from robocorp.workitems import inputs, outputs

logger = logging.getLogger(__name__)

def main():
    for input_item in inputs:
        try:
            payload = input_item.payload

            # API call
            response = requests.get(
                f"https://api.example.com/data/{payload.get('id')}",
                timeout=10
            )

            result = {
                "status": "success",
                "data": response.json(),
                "code": response.status_code
            }
        except Exception as e:
            logger.error(f"Error: {e}")
            result = {
                "status": "error",
                "error": str(e)
            }
        finally:
            outputs.create(result).save()

if __name__ == "__main__":
    main()
EOF

# 4. Test locally
uv run main.py

Benefits of UV Robots

  • ⚑ Fast - 20-40x faster than RCC through direct Python execution
  • πŸ“¦ Lightweight - Minimal dependencies, simple packaging
  • πŸ”Œ Modern - Access to all Python libraries (requests, pandas, etc.)
  • ☁️ Cloud-Ready - Optimal for microservices and serverless deployment
  • πŸ‘¨β€πŸ’» Dev-Friendly - Standard Python, not Robot Framework syntax

When to Use UV?

βœ… Ideal for:

  • REST API integration
  • Data processing and ETL
  • Microservices and backend tasks
  • Python libraries (pandas, requests, httpx)
  • Cloud deployment

❌ Not ideal for:

  • Windows UI automation (requires RPA Framework)
  • Legacy SAP/Mainframe systems
  • Visual web automation with complex locators

Further UV Documentation

For detailed guidance on UV robot development, see:


πŸ› οΈ Robot Execution Tools & Entry Points

The system provides specialized execution tools for robots without boilerplate main.py wrapper. These tools are defined via [project.scripts] entry points in pyproject.toml and enable direct robot execution.

robot_runner - Robot Framework Entry Point

Purpose: Standalone execution of Robot Framework .robot files as CLI tool

Installation & Configuration

# Define in pyproject.toml:
[project.scripts]
robot_runner = "processcube_robot_agent.tools.robot_runner:main"

# Or with UV robot config:
[tool.processcube]
robot_file = "my_robot.robot"  # Default robot file

Usage

# Option 1: Pass robot file directly
robot_runner my_robot.robot

# Option 2: With variables
robot_runner my_robot.robot \
  --variable USER=admin \
  --variable PASSWORD=secret

# Option 3: With tags
robot_runner my_robot.robot \
  --tag smoke \
  --tag critical

# Option 4: From configuration (pyproject.toml)
robot_runner  # Uses robot_file from [tool.processcube]

# Option 5: With Python directly
python -m processcube_robot_agent.tools.robot_runner my_robot.robot

# Show help
robot_runner --help

Configuration in pyproject.toml

[project]
name = "my-robot"
version = "0.1.0"

[project.scripts]
robot_runner = "processcube_robot_agent.tools.robot_runner:main"

[tool.processcube]
# Optional: Defaults for robot_runner
robot_file = "main.robot"
variables = { "USER" = "admin", "TIMEOUT" = "30" }
tags = ["smoke", "production"]

ProcessCube Integration

The robot_runner integrates seamlessly with ProcessCube:

  • Reads input work items from environment variables
  • Executes Robot Framework
  • Writes output work items
  • Signals errors for ProcessCube error handling
# With ProcessCube work items
RPA_WORKITEMS_PATH=/tmp/workitems.json robot_runner task.robot

# Output is written to:
# $RPA_OUTPUT_WORKITEM_PATH/output.json

Benefits vs. Manual

Aspect robot_runner Manual (main.py)
Boilerplate ❌ None βœ… Lots
Configurable βœ… TOML-based ⚠️ Hardcoded
Variables βœ… CLI + TOML ❌ Hardcoded
Tags Support βœ… Yes ❌ No
Work Items βœ… Automatic ❌ Manual

Example: RCC Robot with robot_runner

# 1. Robot structure
robots/src/rcc/my-task/
β”œβ”€β”€ robot.yaml
β”œβ”€β”€ main.robot
β”œβ”€β”€ conda.yaml
└── pyproject.toml

# 2. pyproject.toml
[project]
name = "my-task"
requires-python = ">=3.11"

[project.scripts]
robot_runner = "processcube_robot_agent.tools.robot_runner:main"

[tool.processcube]
robot_file = "main.robot"

# 3. Direct execution
cd robots/src/rcc/my-task
robot_runner main.robot

# Or with variables
robot_runner main.robot --variable API_KEY=secret123

UV-Runner - Python Execution Engine

For UV-based robots, the UV package manager is automatically used.

# Structure for UV robot
robots/src/uv/my-api-robot/
β”œβ”€β”€ pyproject.toml  # with dependencies
β”œβ”€β”€ main.py         # Entry point
└── requirements.txt # Optional fallback

# Automatic execution via UV:
# uv run --directory robots/src/uv/my-api-robot main.py

RCC-Runner - Robot Framework Compiler

For RCC-based robots, the Robot Code Compiler is used:

# RCC robots are automatically packed
npm run pack

# Execution
rcc robot run --task TaskName --directory robots/src/rcc/webui

πŸ“¦ Studio Extension

The studio extension enables graphical configuration of robot agents and tasks in the 5Minds Studio IDE.

Features

  • Robot Service Type Registration - "Robot" as task type in BPMN
  • Agent Management - Management of robot agent instances
  • Topic Selection - Selection of robot to execute
  • Visual Feedback - Robot icon on BPMN diagrams
  • Properties Panel - Configuration in Studio

Installation in Studio

cd studio_extension

# Install dependencies
npm ci

# Create build
npm run build

# Install via npm script (must be configured in studio_extension/package.json)
npm run install_studio_extension

Manual Installation

  1. Open Studio: 5minds-studio
  2. Menu: Settings β†’ Extensions
  3. Select "processcube.robot.extension"
  4. Specify path to studio_extension/out/index.js
  5. Restart Studio

Usage in Studio

1. Configure agent

  1. Studio: Menu β†’ Settings β†’ "Configure Robot Agents"
  2. Add agent:
    {
      "name": "Local Robot Agent",
      "url": "http://localhost:42042",
      "uuid": "robot-agent-001"
    }
  3. Save in ~/.processcube/robot-agent/agents.json

2. Create Robot Task in BPMN

  1. Open BPMN editor
  2. Add service task
  3. Properties β†’ Type: Select "Robot"
  4. Properties β†’ Agent: Select configured agent
  5. Properties β†’ Topic: Select available robot from list
    • rcc/webui
    • rcc/windows/ui
    • etc.

3. Configure Work Items

// Task Properties in BPMN
{
  "taskConfig": {
    "agent": "robot-agent-001",
    "topic": "rcc/webui",
    "timeout": 300,
    "retry": 3
  }
}

Studio Extension Development

Components

  • PropertiesRobotTaskPane.tsx - Main UI for robot tasks
  • PropertiesRobotTaskPaneContent.tsx - Task properties
  • RobotAgentsConfigEditor.tsx - Agent management UI

Debugging

# Start Studio with extension dev mode
5minds-studio --extension-development-dir=./studio_extension

# VSCode Debugger: F5 to debug TypeScript

Build & Release

# Development build
npm run build

# Outputs to studio_extension/out/index.js

# For beta
npm run copy_release_to_beta

πŸ”Œ API Documentation

REST API Endpoints

Get Robot List

GET /robot_agents/robots

Response:

{
  "topics": [
    {
      "name": "webui",
      "topic": "rcc/webui"
    },
    {
      "name": "windows/ui",
      "topic": "rcc/windows/ui"
    }
  ]
}

cURL Example:

curl -X GET http://localhost:42042/robot_agents/robots

ProcessCube External Task Integration

The agent registers with ProcessCube as an external task subscriber:

# Example task execution
task_handler.execute(
    payload={
        "order_id": "ORD-123",
        "customer": "Acme"
    },
    task={
        "id": "task-123",
        "topic": "rcc/webui"
    }
)

Error Responses

{
  "error": {
    "code": "unwrap_failed",
    "message": "Failed to unwrap robot package",
    "details": {
      "return_code": 1,
      "robot_path": "robots/installed/rcc/webui.zip"
    }
  }
}

πŸ”§ Development & Debugging

Project Setup for Development

# 1. Clone repository
git clone <repo-url>
cd processcube-robot-agent

# 2. Create Python venv
python -m venv venv
source venv/bin/activate  # Linux/macOS
# or
venv\Scripts\activate  # Windows

# 3. Install dependencies
pip install -r requirements.txt

# 4. Check RCC
rcc version

Start Service with Debug Mode

# Enable debug mode in config.dev.json:
{
  "debugging": {
    "enabled": true,
    "hostname": "localhost",
    "port": 5678,
    "wait_for_client": true
  }
}

# Start service
npm run processcube_robot_agent

# Connect debugger (PyCharm/VSCode)
# Connect to localhost:5678

Logging & Debugging

Enable Console Logging

# In processcube_robot_agent/__main__.py
import logging

logging.basicConfig(
    level=logging.DEBUG,
    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s'
)

Track Log Output

# Service with verbose output
LOG_LEVEL=DEBUG npm run processcube_robot_agent

# RCC debug output
RCC_DEBUG=true npm run processcube_robot_agent

# Follow service logs live
tail -f ~/.processcube/robot-agent/logs.txt

Watch Mode for Development

With automatic reload on file changes:

# config.dev.json:
{
  "rcc": {
    "start_watch_project_dir": true,
    "project_dir": "robots/src/rcc"
  }
}

# Start service - change a robot and watch auto-reload
npm run processcube_robot_agent

File changes are detected:

  • New robot.yaml β†’ Robot is packed and registered
  • Changed robot.yaml β†’ Repack and re-register
  • New tasks.robot β†’ Automatically repack

Manual Testing

# 1. Test robot locally
cd robots/src/rcc/webui
robot --task WebUIExample tasks.robot

# 2. Pack robot with RCC
rcc robot wrap -z robots/src/rcc/webui

# 3. Unpack robot and inspect
rcc robot unwrap -z robots/installed/rcc/webui.zip -d temp/webui

# 4. Execute with RCC
rcc run -c robots/src/rcc/webui

Add Unit Testing

# Dependencies
pip install pytest pytest-asyncio pytest-cov

# Write tests (not yet created!)
mkdir tests
cat > tests/test_robot_agent.py << 'EOF'
import pytest
from unittest.mock import Mock, patch
from processcube_robot_agent.robot_agent.rcc.robot_agent import RobotAgent

def test_execute_missing_robot():
    agent = RobotAgent()
    with pytest.raises(Exception):
        agent.execute({}, {"task_id": "test"})
EOF

# Run tests
pytest tests/ -v --cov

Type Checking (mypy)

# Installation
pip install mypy types-all

# Type checking
mypy processcube_robot_agent/

# With configuration
cat > mypy.ini << 'EOF'
[mypy]
python_version = 3.9
warn_return_any = True
warn_unused_configs = True
disallow_untyped_defs = True
EOF

mypy processcube_robot_agent/

⚠️ Troubleshooting

Service won't start

Problem: ModuleNotFoundError: No module named 'processcube_robot_agent'

# Solution 1: Set PYTHONPATH
export PYTHONPATH=$(pwd)
npm run processcube_robot_agent

# Solution 2: Reinstall dependencies
pip install -r requirements.txt --force-reinstall

# Solution 3: Use virtual environment
python -m venv venv
source venv/bin/activate
pip install -r requirements.txt
npm run processcube_robot_agent

RCC not found

Problem: rcc: command not found

# Download RCC
cd /tmp
wget https://github.com/robocorp/rcc/releases/download/v12.x.x/rcc-linux-64bit
chmod +x rcc
sudo mv rcc /usr/local/bin/

# Or add to PATH
export PATH=$PATH:/path/to/rcc/directory

# Verify
rcc version

Robot not registered

Problem: Robot not visible in list after adding

# 1. Watch mode enabled?
# config.dev.json: "start_watch_project_dir": true

# 2. Pack robots manually
npm run pack

# 3. Restart service
npm run processcube_robot_agent

# 4. List robots
curl http://localhost:42042/robot_agents/robots

Error running robot

Problem: Robot fails, output not visible

# 1. Check logs
tail -100 ~/.processcube/robot-agent/logs.txt

# 2. Test robot locally
cd robots/src/rcc/my-robot
robot tasks.robot

# 3. Check output
open output/log.html

# 4. Read RCC output
rcc run -c robots/src/rcc/my-robot

# 5. Check debug JSON
cat robots/installed/rcc/my-robot/output.xml

ProcessCube won't connect

Problem: Service running, but ProcessCube can't find it

# 1. Check service URL
curl http://localhost:42042/robot_agents/robots
# Should succeed

# 2. Check configuration
cat config.dev.json
# rest_api.host and rest_api.port correct?

# 3. Firewall/Network
# Port 42042 open from ProcessCube?
netstat -tuln | grep 42042

# 4. Check ProcessCube config
# ProcessCube should be configured with:
# http://<agent-host>:42042

Studio Extension won't load

Problem: "Robot" task type not available in Studio

# 1. Extension built?
cd studio_extension
npm run build

# 2. out/index.js exists?
ls -la out/index.js

# 3. Studio debug mode
5minds-studio --extension-development-dir=./studio_extension

# 4. Check browser console (F12)
# Look for errors in Extensions

Port 42042 already in use

Problem: Address already in use

# Find and kill process
lsof -i :42042
kill -9 <PID>

# Or use different port
cat config.dev.json | sed 's/42042/42043/' > config.dev.json.new
mv config.dev.json.new config.dev.json
npm run processcube_robot_agent

πŸ“Š Project Status & Quality

Code Quality Summary

Current Status: βœ… PRODUCTION READY

  • All critical security issues fixed
  • 360 tests with 100% pass rate (279 Python + 81 TypeScript)
  • 85% type hints coverage
  • 90% docstring coverage
  • 0 npm vulnerabilities
  • 20 packages modernized

Completed Improvements:

  • βœ… Shell injection vulnerabilities closed
  • βœ… Unit tests added (114 tests)
  • βœ… Dependencies updated (20 packages)
  • βœ… Type hints added (85%)
  • βœ… Docstrings added (90%)
  • βœ… Production build for Studio extension (Webpack 0 errors)

Detailed Analysis: See ANALYSIS.md and PROJECT_STATUS.md


πŸ“š Further Documentation

Related Components

processcube_robot_agent

  • Backend service with RPA executor
  • Main file: processcube_robot_agent/__main__.py
  • Documentation: See Robot Development Details

robots

  • Robot Framework projects
  • Structure: robots/src/rcc/
  • Packaging: Automatically via RCC or npm run pack

studio_extension

  • TypeScript/React IDE integration
  • Build: npm run build in studio_extension/ directory
  • Installation: In 5Minds Studio

🀝 Contributing

Areas for Improvement

  1. Write tests - Build up tests/ directory
  2. Update dependencies - Keep modern
  3. Error handling - Increase robustness
  4. Documentation - Add docstrings
  5. Logging - Improve debuggability

Commit Conventions

# Feature
git commit -m "feat: add robot auto-discovery"

# Bug fix
git commit -m "fix: shell injection vulnerability in subprocess calls"

# Documentation
git commit -m "docs: add testing guide"

# Refactor
git commit -m "refactor: simplify factory builder"

# Tests
git commit -m "test: add unit tests for robot_agent.py"

Pull Request Process

  1. Fork repository
  2. Create feature branch: git checkout -b feature/amazing-feature
  3. Commit changes: git commit -m "feat: ..."
  4. Push branch: git push origin feature/amazing-feature
  5. Open pull request
  6. Tests must pass
  7. Perform code review

πŸ“ License

Apache 2.0 License - See LICENSE for details


πŸ†˜ Support & Contact

Fix Error Messages

  1. Check logs - See Troubleshooting
  2. Simple example - Test webui robot
  3. Isolate - Reproduce problem
  4. GitHub Issue - https://github.com/5minds/processcube-robot-agent/issues

Additional Resources


πŸŽ“ Examples & Tutorials

Example 1: Automate Web Login

*** Settings ***
Library    RPA.Browser.Selenium
Library    Collections

*** Variables ***
${BROWSER}    Chrome
${URL}        https://app.example.com

*** Tasks ***
Automated Login
    Open Available Browser    ${URL}    ${BROWSER}
    Input Text    id:username    myuser@example.com
    Input Text    id:password    SecurePassword123
    Click Button    id:login-btn
    Wait Until Page Contains    Dashboard
    Close Browser

Example 2: Data Processing

*** Settings ***
Library    RPA.Excel.Files
Library    RPA.HTTP
Library    Collections

*** Tasks ***
Process Excel Data
    Open Workbook    data.xlsx
    ${data}=    Read Worksheet    sheet=Orders    as_table=True
    Close Workbook

    FOR    ${row}    IN    @{data}
        Log    Processing order ${row}[order_id]
        ${status}=    Process Order    ${row}[order_id]    ${row}[customer]
        Log    Order status: ${status}
    END

*** Keywords ***
Process Order
    [Arguments]    ${order_id}    ${customer}
    Log    Custom processing logic here
    RETURN    completed

Example 3: API Integration

*** Settings ***
Library    RPA.HTTP
Library    RPA.JSON

*** Tasks ***
Fetch And Process API Data
    ${response}=    Get Request    https://api.example.com/customers
    ${json}=        Evaluate    ${response.text}

    Set Work Item Variable    customer_count    ${json.__len__()}

    FOR    ${customer}    IN    @{json}
        Log    Customer: ${customer}[name]
        Set Work Item Variable    last_customer    ${customer}[name]
    END

Documentation Version: 1.0 Last Updated: November 2025 Status: Production Ready (with upcoming improvements)