107 changes: 107 additions & 0 deletions .github/workflows/deploy.yml
@@ -0,0 +1,107 @@
# Sample workflow for building and deploying a Next.js site to GitHub Pages
#
# To get started with Next.js see: https://nextjs.org/docs/getting-started
#
name: Deploy Next.js site to Pages

on:
  # Runs on pushes targeting any branch; restrict to ["main"] for production
  push:
    branches: ["*"]

# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:

# Sets permissions of the GITHUB_TOKEN to allow deployment to GitHub Pages
permissions:
contents: read
pages: write
id-token: write

# Allow only one concurrent deployment, skipping runs queued between the run in-progress and latest queued.
# However, do NOT cancel in-progress runs as we want to allow these production deployments to complete.
concurrency:
group: "pages"
cancel-in-progress: false

jobs:
# Build job
build:
runs-on: ubuntu-latest
steps:
- name: Checkout
uses: actions/checkout@v4

      - name: Detect package manager
        id: detect-package-manager
        run: |
          cd frontend
          # Check lockfiles relative to frontend/, not the workspace root
          if [ -f "bun.lockb" ]; then
            echo "manager=bun" >> $GITHUB_OUTPUT
            echo "command=install" >> $GITHUB_OUTPUT
            echo "runner=bun" >> $GITHUB_OUTPUT
            exit 0
          elif [ -f "package.json" ]; then
            echo "manager=bun" >> $GITHUB_OUTPUT
            echo "command=ci" >> $GITHUB_OUTPUT
            echo "runner=bun --no-install" >> $GITHUB_OUTPUT
            exit 0
          else
            echo "Unable to determine package manager"
            exit 1
          fi

- name: Setup Bun
uses: oven-sh/setup-bun@v1
with:
bun-version: latest
cache: ${{ steps.detect-package-manager.outputs.manager }}

- name: Setup Pages
uses: actions/configure-pages@v4

- name: Restore cache
uses: actions/cache@v4
with:
path: |
frontend/.next/cache
          # Generate a new cache whenever packages or source files change.
          key: ${{ runner.os }}-nextjs-${{ hashFiles('**/bun.lockb', '**/package-lock.json', '**/yarn.lock') }}-${{ hashFiles('**.[jt]s', '**.[jt]sx') }}
          # If source files changed but packages didn't, rebuild from a prior cache.
          restore-keys: |
            ${{ runner.os }}-nextjs-${{ hashFiles('**/bun.lockb', '**/package-lock.json', '**/yarn.lock') }}-

- name: Check .env variable
run: |
cd frontend
echo "NEXT_PUBLIC_SOCKET_URL=${{ vars.NEXT_PUBLIC_SOCKET_URL }}"

- name: Install dependencies
run: |
cd frontend
bun install

- name: Build with Next.js
run: |
cd frontend
bun run build
env:
NEXT_PUBLIC_SOCKET_URL: ${{ vars.NEXT_PUBLIC_SOCKET_URL }}

- name: Upload artifact
uses: actions/upload-pages-artifact@v3
with:
path: ./frontend/out

# Deployment job
deploy:
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
needs: build
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
72 changes: 61 additions & 11 deletions backend/interview_case/README.md
@@ -8,7 +8,13 @@ git clone https://github.com/akhatua2/aact-openhands
cd aact-openhands
```

2. Start the AACT runtime instance:
2. Install dependencies using uv:
```bash
cd backend
uv pip install -e .
```

3. Start the AACT runtime instance:
```bash
cd openhands; poetry install; cd ..
poetry install
```
@@ -18,22 +24,66 @@
Keep this running in a separate terminal window.

## Running the Interview

### Method 1: Direct TOML File

Run the interview using the provided script:
```bash
uv run aact run-dataflow interview.toml
```

### Method 2: REST API

1. Start the Flask API server:
```bash
uv run start-api
```
This will start the server on port 9000.

2. Use the API endpoint to initialize agents:
Check `test_curl.sh` for an example curl command.
> You MUST update line 84 in `test_curl.sh` to point to the correct log directory.

## Directory Structure

```
backend/
├── pyproject.toml           # Project configuration and dependencies
└── interview_case/
    ├── interview.toml       # Main dataflow configuration
    ├── interview_agent.py   # Interview agent implementation
    ├── base_agent.py        # Base agent class
    ├── app.py               # Flask API server
    └── nodes/               # Node implementations
```

## API Documentation

### POST /init-agents

Initializes the interview agents using provided configuration.

**Request Body:**
- `redis_url` (string, required): Redis connection URL
- `extra_modules` (array, required): List of Python modules to import
- `nodes` (array, required): List of node configurations
- Each node requires:
- `node_name` (string)
- `node_class` (string)
- `node_args` (object): Configuration specific to the node type
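
A client payload matching this schema can be assembled and checked locally before posting it. This is a sketch: the module path, node class, and node arguments below are illustrative placeholders, not a known-good configuration.

```python
import json

# Hypothetical payload for POST /init-agents -- values are placeholders
payload = {
    "redis_url": "redis://localhost:6379/0",
    "extra_modules": ["interview_case.interview_agent"],
    "nodes": [
        {
            "node_name": "interviewer",
            "node_class": "llm_agent",
            "node_args": {"agent_name": "Jack", "model_name": "gpt-4o"},
        }
    ],
}

# Mirror the server-side validation performed in app.py
required_fields = ["redis_url", "extra_modules", "nodes"]
missing = [field for field in required_fields if field not in payload]
assert not missing, f"Missing required field: {missing}"

body = json.dumps(payload)
```

The resulting `body` string is what `test_curl.sh` sends as the request body.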

**Response:**
```json
{
"config_file": "......./interview_37489.toml",
"message": "Interview process started",
"pid": 37502,
"status": "success"
}
```

**Error Response:**
```json
{
"error": "Error message",
"details": "Error details..."
}
```
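
Since a successful response includes the `pid` of the detached dataflow process, a client can poll whether the interview is still alive. A minimal POSIX-only sketch, illustrated with the current process's own pid:

```python
import os

def is_running(pid: int) -> bool:
    """Return True if a process with the given pid exists (POSIX only)."""
    try:
        os.kill(pid, 0)  # signal 0: existence/permission check, sends nothing
    except ProcessLookupError:
        return False
    except PermissionError:
        return True  # process exists but is owned by another user
    return True

print(is_running(os.getpid()))  # prints True
```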
102 changes: 102 additions & 0 deletions backend/interview_case/app.py
@@ -0,0 +1,102 @@
from flask import Flask, request, jsonify, Response
import toml
import subprocess
import os
import sys
from typing import Union, Tuple

app = Flask(__name__)


@app.route('/health', methods=['GET'])
def health() -> str:
return 'OK'

@app.route('/init-agents', methods=['POST'])
def init_agents() -> Union[Response, Tuple[Response, int]]:
try:
data = request.get_json()
if not data:
return jsonify({'error': 'No JSON data received'}), 400

required_fields = ['redis_url', 'extra_modules', 'nodes']
for field in required_fields:
if field not in data:
return jsonify({'error': f'Missing required field: {field}'}), 400

current_dir = os.path.dirname(os.path.abspath(__file__))

# Create a directory for temporary files if it doesn't exist
temp_dir = os.path.join(current_dir, 'temp')
os.makedirs(temp_dir, exist_ok=True)

# Create a unique filename
temp_file = os.path.join(temp_dir, f'interview_{os.getpid()}.toml')

# Write the TOML file
with open(temp_file, 'w') as f:
toml_str = toml.dumps(data)
f.write(toml_str)
f.flush()
os.fsync(f.fileno()) # Ensure file is written to disk

try:
# Run the command in background
process = subprocess.Popen(
['uv', 'run', 'aact', 'run-dataflow', temp_file],
stdout=subprocess.PIPE,
stderr=subprocess.PIPE,
cwd=current_dir,
start_new_session=True # This ensures the process continues running
)

# Check if process started successfully
if process.poll() is None: # None means process is still running
return jsonify({
'status': 'success',
'message': 'Interview process started',
'pid': process.pid,
'config_file': temp_file
})
else:
# Process failed to start
return jsonify({
'error': 'Process failed to start',
'details': f'Exit code: {process.poll()}'
}), 500

except Exception as e:
# Clean up file if process fails to start
if os.path.exists(temp_file):
os.unlink(temp_file)
return jsonify({
'error': 'Failed to start interview process',
'details': str(e)
}), 500

except Exception as e:
return jsonify({'error': str(e)}), 500

def run_interview() -> int:
"""Run the interview directly using the default TOML configuration"""
current_dir = os.path.dirname(os.path.abspath(__file__))
toml_path = os.path.join(current_dir, 'interview.toml')

try:
# Run in foreground for direct execution
subprocess.run(
['uv', 'run', 'aact', 'run-dataflow', toml_path],
check=True,
cwd=current_dir
)
return 0
    except subprocess.CalledProcessError as e:
        # e.stderr is None when output is not captured, so report the exception itself
        print(f"Error: {e}", file=sys.stderr)
        return 1

def main() -> None:
"""Entry point for the application script"""
app.run(host='0.0.0.0', port=9000)

if __name__ == '__main__':
main()
4 changes: 2 additions & 2 deletions backend/interview_case/interview.toml
@@ -12,7 +12,7 @@ input_text_channels = ["Jane:Jack"]
input_env_channels = ["Scene:Jack", "Runtime:Agent"]
input_tick_channel = "tick/secs/1"
goal = "Your goal is to effectively test Jane's technical ability and finally decide if she has passed the interview. Make sure to also evaluate her communication skills, problem-solving approach, and enthusiasm."
model_name = "gpt-4o-mini"
model_name = "gpt-4o"
agent_name = "Jack"

[[nodes]]
@@ -26,7 +26,7 @@ input_text_channels = ["Jack:Jane"]
input_env_channels = ["Scene:Jane", "Runtime:Agent"]
input_tick_channel = "tick/secs/1"
goal = "Your goal is to perform well in the interview by demonstrating your technical skills, clear communication, and enthusiasm for the position. Stay calm, ask clarifying questions when needed, and confidently explain your thought process."
model_name = "gpt-4o-mini"
model_name = "gpt-4o"
agent_name = "Jane"

[[nodes]]