Commit a08f51d

Add AI test generator, .env setup, prompt templates, and documentation updates

1 parent b338e39

24 files changed: 750 additions & 492 deletions

.env.example

Lines changed: 1 addition & 0 deletions

```diff
@@ -0,0 +1 @@
+OPENAI_API_KEY=
```

.github/workflows/playwright.yml

Lines changed: 2 additions & 0 deletions

```diff
@@ -19,6 +19,8 @@ jobs:
       run: npx playwright install --with-deps
     - name: Run Playwright tests
       run: npx playwright test
+    - name: Run Playwright Component Tests (CT)
+      run: npx playwright test -c playwright-ct.config.js
     - uses: actions/upload-artifact@v4
       if: ${{ !cancelled() }}
       with:
```

.gitignore

Lines changed: 3 additions & 5 deletions

```diff
@@ -19,7 +19,8 @@ Thumbs.db
 
 # VS Code
 .vscode/
-
+*.db
+!testdata/*.csv
 # Misc
 *.swp
 *.swo
@@ -34,7 +35,4 @@ dist/
 # IDE
 .idea/
 
-# Ignore Excel/DB test data
-*.xlsx
-*.csv
-*.db
+
```

README-ai.md

Lines changed: 64 additions & 0 deletions

# AI Test Generator (OpenAI-powered)

This module enables AI-driven Playwright test generation using OpenAI's API and prompt templates.

## Features

- Generates Playwright tests from prompt templates and config data
- Supports CSV-driven test data, API endpoints, and DB config
- Uses the OpenAI API for intelligent test creation
- Secure API key management via `.env`

## Usage

1. **Set up the OpenAI API key:**
   - Copy `.env.example` to `.env` and add your OpenAI API key:

     ```
     OPENAI_API_KEY=your-openai-api-key-here
     ```

2. **Install dependencies:**

   ```
   npm install
   ```

3. **Configure appConfig:**
   - Edit `config/appConfig.js` to set paths for test data, API endpoints, and DB config.

4. **Edit prompt templates:**
   - Update or create prompt files in `prompts/` (e.g., `playwright_login_test_generation.txt`).

5. **Run the generator:**

   ```
   node ai/generate_generic_tests_ai.js
   ```

   The generated test is saved to `tests/mab/AI_GeneratedTest.spec.js`.

## Customization

- Add new prompt templates for different test scenarios.
- Update `ai/generate_generic_tests_ai.js` to use your desired prompt file.

## Security

- `.env` is gitignored by default.
- Never commit your API key.

## Troubleshooting

- Ensure your OpenAI API key is valid and set in `.env`.
- If you see `Missing credentials` errors, check your `.env` setup.
- For OpenAI API errors, verify your network connection and API quota.

## Example Prompt Template

See `prompts/playwright_login_test_generation.txt` for a sample login test prompt.

## License

MIT

**Experimental stage:** This AI test generator is currently in an experimental phase. Features and workflows may change, and reliability is not guaranteed for production use. Feedback and contributions are welcome!

## Planned Enhancements

- Support for Playwright API and DB test generation
- Multi-prompt scenario chaining
- Improved error handling and reporting
- Integration with CI/CD pipelines
- Customizable output file locations
- Enhanced prompt templating and variable injection
- Support for other AI providers (Azure, Anthropic, etc.)
- Automated test validation and coverage analysis
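For illustration, a login prompt template such as `prompts/playwright_login_test_generation.txt` might look like the sketch below. The placeholder names (`{{config}}`, `{{csv}}`, `{{api}}`, `{{db}}`) match the ones substituted by `ai/generate_generic_tests_ai.js`; the wording itself is an assumption, not the committed file:

```
You are a Playwright test generator.

Application config:
{{config}}

Login test data (CSV):
{{csv}}

Generate a Playwright test in JavaScript that logs in with each row of
credentials and asserts the expected outcome for each. Output only
runnable code.
```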

ai/generate_generic_tests_ai.js

Lines changed: 49 additions & 0 deletions

```js
// Generic AI-powered Test Generator
// Place this in ai/
// Requires: npm install openai dotenv
require('dotenv').config(); // load OPENAI_API_KEY from .env

const fs = require('fs');
const path = require('path');
const { OpenAI } = require('openai');

// Example input files

const appConfig = require('../config/appConfig');
const csvPath = path.join(__dirname, '..', appConfig.testDataPath);
const apiPath = path.join(__dirname, '..', appConfig.apiEndpointsPath);
const dbConfigPath = path.join(__dirname, '..', appConfig.dbConfigPath);

const configData = JSON.stringify(appConfig, null, 2);
const csvData = fs.existsSync(csvPath) ? fs.readFileSync(csvPath, 'utf8') : '';
const apiData = fs.existsSync(apiPath) ? fs.readFileSync(apiPath, 'utf8') : '';
const dbConfigData = fs.existsSync(dbConfigPath) ? fs.readFileSync(dbConfigPath, 'utf8') : '';

const outputPath = path.join(__dirname, '../tests/mab/AI_GeneratedTest.spec.js');

// ...existing code...
// To generate login tests, use the login prompt file:
const promptTemplatePath = path.join(__dirname, '../prompts/playwright_login_test_generation.txt');
let prompt = fs.readFileSync(promptTemplatePath, 'utf8');
// replaceAll handles placeholders that appear more than once in the template
prompt = prompt.replaceAll('{{config}}', configData)
  .replaceAll('{{csv}}', csvData)
  .replaceAll('{{api}}', apiData)
  .replaceAll('{{db}}', dbConfigData);

const openai = new OpenAI({
  apiKey: process.env.OPENAI_API_KEY,
});

async function generateTest() {
  // text-davinci-003 has been retired; use the Chat Completions API instead
  const response = await openai.chat.completions.create({
    model: 'gpt-4o-mini',
    messages: [{ role: 'user', content: prompt }],
    max_tokens: 1500,
    temperature: 0.2,
  });
  const testCode = response.choices[0].message.content;
  fs.writeFileSync(outputPath, testCode);
  console.log('AI-generated test file:', outputPath);
}

generateTest().catch((err) => {
  console.error('Test generation failed:', err);
  process.exit(1);
});
```
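The generator fills `{{…}}` placeholders by chained string replacement. A small generic helper (a sketch, not part of this commit) can inject any number of template variables in one pass:

```javascript
// Sketch: generic {{name}} placeholder injection for prompt templates.
// Illustrative only; the variable names below are assumptions.
function fillTemplate(template, vars) {
  // Replace every {{key}} occurrence; unknown placeholders are left intact
  // so missing data stays visible in the generated prompt.
  return template.replace(/\{\{(\w+)\}\}/g, (match, key) =>
    Object.prototype.hasOwnProperty.call(vars, key) ? vars[key] : match
  );
}

// Example usage with the same placeholder names the generator uses:
const prompt = fillTemplate(
  'Config:\n{{config}}\nData:\n{{csv}}\nUnknown: {{other}}',
  { config: '{ "baseUrl": "..." }', csv: 'user,pass\nadmin,secret' }
);
console.log(prompt);
```

Leaving unknown placeholders untouched (rather than substituting an empty string) makes a typo in a template name easy to spot in the generated prompt.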

config/appConfig.js

Lines changed: 4 additions & 1 deletion

```diff
@@ -1,3 +1,6 @@
 module.exports = {
-  baseUrl: 'https://k11softwaresolutions.com'
+  baseUrl: 'https://k11softwaresolutions.com',
+  testDataPath: '../testdata/login_data.csv',
+  apiEndpointsPath: '../testdata/api_endpoints.json',
+  dbConfigPath: '../testdata/db_config.json'
 };
```

doc/ai-vs-user-representation.md

Lines changed: 43 additions & 0 deletions

# AI-Driven Testing vs Accurate User Representation

## Balancing AI-Augmented Automation and Real User Flows

AI-driven testing (such as AI-powered locators, auto-healing selectors, and smart test generation) is transforming how we approach test automation. However, maintaining accurate user representation—ensuring tests reflect real user behavior and business intent—is critical for meaningful quality assurance.

---

## How We Balance Both in K11TechLab

### 1. AI-Augmented Features
- **AI Locators:** Use AI to identify robust selectors, reducing flaky tests.
- **Auto-Healing:** Automatically update selectors when the UI changes, minimizing maintenance.
- **Smart Test Generation:** Leverage AI to suggest test cases based on user flows and analytics.

### 2. User-Centric Test Design
- **Page Object Model (POM):** Abstracts the UI into business-readable tests.
- **Manual Review:** All AI-generated tests are reviewed for business logic and user intent.
- **Realistic Data:** Use real or production-like data for test scenarios.
- **Assertions:** Focus on user-visible outcomes, not just DOM changes.

### 3. Hybrid Approach
- **AI for Maintenance:** Use AI to keep tests stable and up to date.
- **Human for Intent:** QA engineers ensure tests reflect real user journeys, edge cases, and business rules.
- **Feedback Loop:** AI suggestions are validated and refined by human testers.

---

## Practical Example
- AI locators identify the elements of the login form.
- The test flow is designed to match real user login steps.
- Assertions check for dashboard visibility, not just form submission.
- If the UI changes, auto-healing updates the selectors, but the test logic remains user-focused.

---

## Summary

AI-driven testing accelerates automation and reduces maintenance, but human oversight is essential to ensure tests represent real user actions and business value. K11TechLab combines both for robust, meaningful quality engineering.

---

*This approach enables rapid, resilient automation while keeping user experience and business goals at the center of testing.*
Lines changed: 54 additions & 0 deletions

# Context-Aware Testing with MCP and Playwright: Real-World Benefits

## What Is Context-Aware Testing?

Context-aware testing means your tests adapt to the application's state, user flows, and runtime conditions. By using a Model Context Protocol (MCP) and tools like XState with Playwright, you capture and leverage runtime metadata, making tests smarter and more resilient.

---

## Real-Life Advantages

### 1. Robustness Against UI Changes
- Tests use dynamic context and locator maps, so they are less brittle when the UI changes.
- Auto-healing selectors and context logs help quickly identify and fix broken tests.

### 2. Better Coverage of User Journeys
- State machines (XState) model real user flows, ensuring tests reflect business logic.
- Context logs show which paths were exercised, helping QA teams spot gaps.

### 3. Smarter Failure Analysis
- When a test fails, context logs and saved reports show the exact state transitions and outcomes.
- This makes debugging faster and more actionable.

### 4. Data-Driven and Adaptive Testing
- Tests can adapt based on runtime metadata (e.g., feature flags, user roles, app state).
- This enables more realistic, production-like test scenarios.

### 5. Automated Reporting and Traceability
- Test results and context logs are saved as JSON artifacts for traceability.
- Teams can review historical runs, analyze trends, and improve test reliability.

---

## Example: Login Flow
- The MCP test models the login flow as a state machine.
- It logs every state transition (e.g., start → enteringCredentials → loggingIn → dashboard/error).
- If the login fails, the test checks for the error element and logs the outcome.
- The result is saved in `reports/mcp` for audit and analysis.

---

## Why It Matters
- **Faster Debugging:** Context logs pinpoint where and why a test failed.
- **Resilience:** Tests adapt to app changes, reducing maintenance.
- **Business Alignment:** State machines ensure tests match real user journeys.
- **Traceability:** Saved reports provide evidence for compliance and quality audits.
- **Continuous Improvement:** Teams can use context data to refine tests and coverage.

---

## Conclusion

Context-aware testing with MCP and Playwright enables smarter, more reliable, and business-aligned automation. It helps teams deliver quality faster, with less maintenance and more actionable insights.

---

*Adopt context-aware testing to future-proof your QA and accelerate digital transformation!*
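The login-flow example above can be sketched without any framework. The snippet below is an illustrative sketch, not code from this repository; the state and event names are assumptions, and in practice XState or a real MCP layer would replace the hand-rolled transition table:

```javascript
// Sketch: a minimal state machine for the login flow described above,
// logging every state transition the way the doc describes.
const transitions = {
  start: { BEGIN: 'enteringCredentials' },
  enteringCredentials: { SUBMIT: 'loggingIn' },
  loggingIn: { SUCCESS: 'dashboard', FAILURE: 'error' },
};

function runFlow(events) {
  let state = 'start';
  const log = [state];
  for (const event of events) {
    const next = (transitions[state] || {})[event];
    if (!next) break; // unknown event in this state: stop, keep the log
    state = next;
    log.push(state);
  }
  return log; // the context log: every state transition, in order
}

// Happy path ends in 'dashboard'; a failed login ends in 'error'.
console.log(runFlow(['BEGIN', 'SUBMIT', 'SUCCESS']));
console.log(runFlow(['BEGIN', 'SUBMIT', 'FAILURE']));
```

In a real MCP test, each entry of the returned log would be written to a JSON artifact under `reports/mcp` for audit and analysis.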
Lines changed: 53 additions & 0 deletions

# Context-Aware Testing Infrastructure: Roadmap

## 1. Evaluate Current Test Infrastructure
- Review existing Playwright and Cypress test suites for context-awareness.
- Identify gaps: are tests aware of app state, user flows, and runtime conditions?
- Document current metadata capture (e.g., logs, screenshots, traces).

## 2. Pilot MCP-Style Testing
- Prototype Model Context Protocol (MCP) testing using Playwright or Cypress.
- Integrate XState for state machine modeling of user flows and app states.
- Run pilot tests to validate context-driven scenarios.

## 3. Build Minimal Context Engine
- Develop a lightweight engine to capture runtime metadata:
  - App state (XState)
  - User actions
  - Network requests
  - UI changes
- Store metadata alongside test artifacts for analysis.

## 4. Integrate AI Decision Logic / Rule Engines
- Add AI or rule-based logic to:
  - Select relevant tests based on app state
  - Adapt test flows dynamically
  - Flag tests for review based on context changes
- Use open-source AI libraries or custom rule engines.

## 5. Iterate & Measure
- Track test relevance: are tests covering meaningful user flows?
- Monitor reliability: are flaky tests reduced?
- Measure performance: is test execution faster and more targeted?
- Refine the context engine and AI logic based on feedback.

---

## Example Workflow
1. A test starts; the context engine captures app state.
2. XState models the user journey; AI logic selects relevant tests.
3. Tests adapt to runtime conditions (e.g., feature flags, user roles).
4. Metadata is stored for traceability and analysis.
5. Results are reviewed and improvements tracked.

---

## Benefits
- Smarter, more resilient tests
- Better coverage of real user scenarios
- Reduced maintenance and flakiness
- Actionable insights for QA and dev teams

---

*This roadmap enables context-aware, AI-augmented testing for modern JavaScript apps using Playwright or Cypress.*
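As a sketch of step 4 of the roadmap, a minimal rule engine for context-based test selection could look like the following. All rule conditions, context fields, and test file names here are invented for illustration:

```javascript
// Sketch: rule-based test selection driven by runtime context.
// Rule conditions and spec names are illustrative assumptions.
const rules = [
  { when: (ctx) => ctx.featureFlags.includes('newCheckout'), tests: ['checkout.spec.js'] },
  { when: (ctx) => ctx.userRole === 'admin', tests: ['admin-panel.spec.js'] },
  { when: () => true, tests: ['smoke.spec.js'] }, // always run the smoke suite
];

function selectTests(context) {
  // Collect the tests of every matching rule, deduplicated, in rule order.
  const selected = rules.filter((r) => r.when(context)).flatMap((r) => r.tests);
  return [...new Set(selected)];
}

// A guest session with the checkout flag on selects checkout + smoke tests.
console.log(selectTests({ featureFlags: ['newCheckout'], userRole: 'guest' }));
```

A rule engine like this keeps the selection logic declarative: adding a new context dimension (e.g., a device type) is a new rule, not a change to the engine.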
