🚀 Comprehensive Playwright-based automation framework for testing AI features across multiple languages in the Betterworks platform
- Overview
- Features
- Architecture
- Installation
- Quick Start
- Configuration
- AI Features Tested
- Multi-language Support
- Output Structure
- Screenshots
- Testing
- Troubleshooting
- Contributing
- Documentation
This automation framework validates AI features in the Betterworks platform across multiple languages and user accounts. It uses Playwright for browser automation and captures both functional behavior and visual evidence through automated screenshots.
- ✅ Multi-language AI Testing: Validates AI responses in 25+ languages
- ✅ Screenshot Documentation: Automatic capture of UI states and AI interactions
- ✅ Comprehensive Reporting: Detailed CSV reports with language detection results
- ✅ Error Handling: Robust retry mechanisms and error recovery
- ✅ Network Monitoring: Tracks API errors and response times
- ✅ Iframe Support: Handles complex iframe-based UI interactions
- Goal Assist - AI-powered goal suggestions with multiple recommendations
- Feedback Summary - Automated feedback summarization across time periods
- Feedback Writing Assist - AI-enhanced feedback composition
- Conversation Assist - AI-supported conversation guidance
- Language Detection: Automated validation using the langdetect library
- Visual Validation: Full-page screenshots at key interaction points
- Multi-user Testing: Supports testing across different user accounts
- Retry Logic: Built-in error recovery and API failure handling
- Performance Monitoring: Network response tracking and error logging
AI_features_Automation/
├── main.py # Core automation framework
├── users.csv # Test user configuration
├── test/ # Testing utilities
│ └── test_goal_assist.py # Goal Assist selector validation
├── Documents/ # Comprehensive documentation
│ ├── ARCHITECTURE.md # System architecture details
│ ├── API_DOCUMENTATION.md # API and function references
│ ├── CONFIGURATION.md # Configuration guide
│ └── QUICK_START.md # Getting started guide
├── Regression_AI_Feature_Screenshots/ # Screenshot outputs
└── Reports/ # CSV output files
- Python 3.9+
- Node.js (for Playwright browsers)
- macOS/Linux/Windows
- Clone the repository:
  git clone <repository-url>
  cd AI_features_Automation

- Create and activate a virtual environment:
  python3 -m venv venv
  source venv/bin/activate  # On Windows: venv\Scripts\activate

- Install dependencies (asyncio ships with the Python standard library and does not need to be installed separately):
  pip install playwright pandas langdetect

- Install Playwright browsers:
  playwright install chromium

- Verify the installation:
  python test/test_goal_assist.py
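As an additional sanity check that Playwright and the Chromium browser are installed correctly, a standalone snippet like the following can be run (illustrative only, not part of the framework):

import asyncio
from playwright.async_api import async_playwright

async def main():
    # Launch Chromium, load a public page, and print its title
    async with async_playwright() as p:
        browser = await p.chromium.launch(headless=True)
        page = await browser.new_page()
        await page.goto("https://example.com")
        print("Playwright OK, page title:", await page.title())
        await browser.close()

asyncio.run(main())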
# Run all AI features for all supported languages
python main.py
# Test specific feature (edit main.py to uncomment desired features)
# Available: run_goal_assist_flow, validate_feedback_summary,
# validate_feedback_writing_assist, validate_conversation_assist

users.csv - Test user configuration:
email,expected_lang,supported,expected_lang_name,AI_input
alice.taylor33665@pharma.com,en,True,English (US),Sample AI input text
julia.smith16586@pharma.com,de,True,German,Deutsche Beispieleingabe
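A minimal sketch of how users.csv could be loaded and filtered before the feature flows run (column names match the file above; run_features_for_user is a hypothetical hook, not the framework's actual entry point):

import pandas as pd

users = pd.read_csv("users.csv")
# Keep only accounts whose language is currently enabled for testing;
# the string comparison covers both boolean and text values of "supported"
enabled_users = users[users["supported"].astype(str).str.lower() == "true"]

for _, user in enabled_users.iterrows():
    print(f"Testing {user['email']} ({user['expected_lang_name']})")
    # run_features_for_user(user)  # hypothetical hook into the feature flows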
Example output row (Regression_AI_Feature.csv):
email,feature,expected_lang,detected_lang,status,suggestions_count,suggestion_1_title,suggestion_1_source,suggestion_1_full
alice.taylor33665@pharma.com,Goal Assist,en,en,✅ PASS,6,"Enhance Product Development Efficiency","Based on product department feedback","Full suggestion text..."

- Functionality: AI-powered goal creation suggestions
- Implementation: Iframe-based interaction with multiple suggestion capture
- Output: Up to 6 goal suggestions with title, source, and full text
- Languages: All supported languages (25+)
Key Selectors:
iframe = page.frame_locator('iframe[title="Betterworks"]')
await iframe.locator('[data-test="goals-assist-button"]').click()
suggestions = iframe.locator('.rounded.bg-surface-0.p-4')

- Functionality: Automated feedback period summarization
- Implementation: Time-based summary generation
- Output: AI-generated summary text with language detection
- Screenshots: Timeline selection, generation button, loading, completed
- Functionality: AI-enhanced feedback composition
- Implementation: Text input with AI suggestion generation
- Output: AI-improved feedback text
- Error Handling: API retry logic for 500 errors
- Functionality: AI-supported conversation guidance
- Implementation: Conversation thread enhancement
- Output: AI-generated conversation improvements
- Multi-retry: Built-in API error recovery
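Both assist flows above recover from transient AI/API failures by retrying. A generic sketch of that pattern, assuming an arbitrary async action (the helper name, retry count, and delay are illustrative, not the framework's actual implementation):

import asyncio

async def with_retries(action, max_retries=5, delay_seconds=2):
    """Run an async action, retrying on failure up to max_retries times."""
    for attempt in range(1, max_retries + 1):
        try:
            return await action()
        except Exception as exc:
            if attempt == max_retries:
                raise  # Give up after the final attempt
            print(f"[RETRY] Attempt {attempt} failed: {exc}")
            await asyncio.sleep(delay_seconds)

# Usage (illustrative):
# await with_retries(lambda: iframe.locator('[data-test="goals-assist-button"]').click())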
Legend: ✅ Active, 🔄 Available for testing (set supported=True in users.csv)
- Library: langdetect (port of Google's language-detection library)
- Validation: Automatic comparison of expected vs. detected languages (see the sketch after this list)
- Accuracy: High precision across supported language sets
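A minimal sketch of that comparison, assuming the AI response text has already been captured (function and variable names are illustrative):

from langdetect import detect, DetectorFactory
from langdetect.lang_detect_exception import LangDetectException

DetectorFactory.seed = 0  # make langdetect deterministic across runs

def validate_language(ai_text, expected_lang):
    """Compare the detected language of the AI response against the expected code."""
    try:
        detected_lang = detect(ai_text)
    except LangDetectException:
        return "❌ FAIL", None  # text too short or ambiguous to classify
    status = "✅ PASS" if detected_lang == expected_lang else "❌ FAIL"
    return status, detected_lang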
# Main Results: Regression_AI_Feature.csv
email,feature,expected_lang,detected_lang,status,suggestions_count,
suggestion_1_title,suggestion_1_source,suggestion_1_full,
suggestion_2_title,suggestion_2_source,suggestion_2_full,...
# Error Log: Regression_AI_Feature_Error.csv
email,status,url,error,timestamp

Regression_AI_Feature_Screenshots/
├── English (US)/
│ ├── Goal_Assist/
│ │ ├── alice.taylor_before_assist.png
│ │ ├── alice.taylor_assist_loading.png
│ │ └── alice.taylor_assist_completed.png
│ ├── Feedback_Summary/
│ └── Feedback_Writing/
├── German/
├── Spanish (Spain)/
└── [other languages]/
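A sketch of how a path in this layout could be assembled (the helper and the email-sanitizing rule are illustrative, not the framework's actual code):

import os
import re

SCREENSHOT_ROOT = "Regression_AI_Feature_Screenshots"

def screenshot_path(language_name, feature, email, stage):
    """Build a path like Regression_AI_Feature_Screenshots/<language>/<feature>/<user>_<stage>.png."""
    user = re.sub(r"\d+$", "", email.split("@")[0])  # e.g. alice.taylor33665 -> alice.taylor
    directory = os.path.join(SCREENSHOT_ROOT, language_name, feature)
    os.makedirs(directory, exist_ok=True)
    return os.path.join(directory, f"{user}_{stage}.png")

# Usage (illustrative):
# path = screenshot_path("English (US)", "Goal_Assist", "alice.taylor33665@pharma.com", "before_assist")
# await page.screenshot(path=path, full_page=True)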
# Validate Goal Assist selectors
python test/test_goal_assist.py
# Test specific user language
# Edit users.csv to enable/disable specific test cases

# Enable detailed debugging in main.py
browser = await p.chromium.launch(headless=False) # Visual browser
print(f"[DEBUG] Current step: {step_description}") # Debug output1. Selector Not Found
# Run the test script to validate selectors
python test/test_goal_assist.py

2. Language Detection Failures
# Check AI text length and content
print(f"AI text length: {len(ai_text)}")
print(f"AI text sample: {ai_text[:100]}")3. Screenshot Timing Issues
# Increase wait times in main.py
await page.wait_for_timeout(5000) # Wait 5 seconds
await wait_for_iframe_loader_to_disappear(iframe)  # Custom wait function

4. API Errors (500)
- Built-in retry mechanisms handle temporary API failures
- Check Regression_AI_Feature_Error.csv for persistent issues
# Adjust timeouts for slower networks
timeout=120000 # 2 minutes
max_retries=15  # Increase retry attempts

# Add to users.csv
new.user@company.com,fr,True,French (France),Texte d'exemple en français

# Edit the feature_results loop in main.py
for feature_func in [
run_goal_assist_flow, # Goal suggestions
# validate_feedback_summary, # Uncomment as needed
# validate_feedback_writing_assist,
# validate_conversation_assist,
]:

# Modify screenshot parameters
await page.screenshot(
    path=screenshot_path,
    full_page=True,  # Full page capture
    # clip={'x': 0, 'y': 0, 'width': 1200, 'height': 800}  # Or clip a custom area instead of full_page
)

# Multiple suggestions capture
ai_suggestions = [] # Structured data: title, source, full_text
suggestion_1_title = "Enhance Product Development Efficiency"
suggestion_1_source = "Based on product department feedback"
suggestion_1_full = "Complete suggestion with context"
# Language-specific suggestions
# English: Business-focused goal suggestions
# German: Qualitätssicherung (Quality Assurance) goals
# Spanish: Desarrollo de tecnologías (Technology development)
# French: Améliorer les processus (Process improvement)

# Wait for iframe availability
iframe = page.frame_locator('iframe[title="Betterworks"]')
# Multi-step interaction
await iframe.locator('[data-test="goals-create-button"]').click()
await iframe.locator('[data-test="goals-assist-button"]').click()
# Content validation with retry
for attempt in range(max_retries):
    suggestions = iframe.locator('.rounded.bg-surface-0.p-4')
    if await suggestions.count() > 0:
        # Process suggestions
        break
    await page.wait_for_timeout(5000)

# API retry logic
while retry_count < max_retries:
    try:
        # Attempt the API call here
        if not api_error_occurred:
            break  # Success: stop retrying
    except Exception as e:
        print(f"[RETRY] API call failed: {e}")
    retry_count += 1
    await page.wait_for_timeout(2000)  # Back off before the next attempt
# Network monitoring
async def handle_response(response, email):
    if "/accelerators/llm/assistant/" in response.url:
        if response.status >= 500:
            network_errors.append({
                "email": email,
                "status": response.status,
                "url": response.url,
                "error": await response.text(),
            })

# Registered per page, e.g.:
# page.on("response", lambda response: asyncio.create_task(handle_response(response, email)))

Detailed documentation is available in the Documents/ directory:
- ARCHITECTURE.md - System design and technical details
- API_DOCUMENTATION.md - Function references and APIs
- CONFIGURATION.md - Advanced configuration options
- QUICK_START.md - Step-by-step getting started guide
- Fork the repository
- Create a feature branch (git checkout -b feature/amazing-feature)
- Commit your changes (git commit -m 'Add amazing feature')
- Push to the branch (git push origin feature/amazing-feature)
- Open a Pull Request
- Follow Python PEP 8 style guidelines
- Add comprehensive docstrings to new functions
- Include test cases for new features
- Update documentation for significant changes
# Core dependencies
playwright>=1.54.0   # Browser automation
pandas>=1.3.0        # Data processing
langdetect>=1.0.9    # Language detection
# asyncio ships with the Python standard library (no install required)

# Chromium with custom viewport
browser = await p.chromium.launch(headless=False)
context = await browser.new_context(viewport={"width": 1520, "height": 1080})

# Multi-stage capture
await page.screenshot(path=before_screenshot) # Initial state
await page.screenshot(path=loading_screenshot) # During AI generation
await page.screenshot(path=completed_screenshot)  # Final result

# CSV generation with pandas
results_df = pd.DataFrame(results)
results_df.to_csv(OUTPUT_CSV, index=False)
# Error tracking
error_df = pd.DataFrame(network_errors)
error_df.to_csv(ERROR_LOG_CSV, index=False)

- Average runtime per user: 2-3 minutes
- Screenshot capture time: ~500ms per screenshot
- Language detection accuracy: 95%+
- API retry success rate: 90%+
- Memory: ~200MB during execution
- Network: ~50MB per full test run
- Storage: Screenshots ~5MB per user per feature
- Rate Limiting: May encounter API rate limits with many concurrent users
- Network Dependencies: Requires stable internet connection
- Browser Resources: Memory usage increases with longer test runs
- Language Coverage: Some languages may have lower AI content accuracy
For questions, issues, or contributions:
- 📧 Email: [your-email@company.com]
- 🐛 Issues: [Repository Issues Page]
- 📖 Documentation: Documents/ directory
- 🧪 Testing: test/ directory scripts
Last Updated: August 12, 2025
Version: 2.0.0
Maintained by: AI Automation Team