diff --git a/LICENSE b/LICENSE
new file mode 100644
index 0000000..b94bc3c
--- /dev/null
+++ b/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2026 [Your Name/Organization]
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/README.md b/README.md
new file mode 100644
index 0000000..e418b88
--- /dev/null
+++ b/README.md
@@ -0,0 +1,225 @@
+# UI Conformity Experiment
+
+> **Beta UI Diagnostic Dashboard**: A behavioral research tool investigating AI authority bias in user interface design preferences
+
+[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
+
+## Overview
+
+This repository contains a web-based behavioral experiment designed to investigate whether "AI Recommended" labels influence users' UI design preferences. The study employs a randomized between-subjects design to measure conformity effects in human-computer interaction.
+
+### Research Question
+
+**Does an "AI Recommended" label influence users' UI design preferences?**
+
+## Quick Start
+
+```bash
+# Clone the repository
+git clone https://github.com/hashexplaindata/Conformity-experiment.git
+cd Conformity-experiment
+
+# Start a local server
+python -m http.server 8000
+
+# Access the experiment
+# Control condition: http://localhost:8000/code/index.html?condition=control
+# AI label condition: http://localhost:8000/code/index.html?condition=ai
+```
+
+## Project Structure
+
+```
+Conformity-experiment/
+├── code/                    # Application files
+│   ├── index.html           # Main experiment page (147 lines)
+│   ├── experiment.js        # Core behavioral engine (489 lines)
+│   └── style.css            # Styling & animations (281 lines)
+├── docs/                    # Documentation & ethics
+│   ├── README.md            # Quick reference guide
+│   ├── METHODS.md           # Formal research methodology
+│   ├── RUNBOOK.md           # Step-by-step operational guide
+│   ├── IRB_CHECKLIST.md     # Ethics compliance checklist
+│   ├── CONSENT.txt          # Participant consent form
+│   └── DEBRIEF.txt          # Post-study disclosure
+├── .github/workflows/       # CI/CD pipelines
+│   └── codeql.yml           # Security scanning
+├── firebase.json            # Firebase hosting configuration
+└── .firebaserc              # Firebase project reference
+```
+
+## Experimental Design
+
+### Conditions
+
+- **Control Group**: Views 6 pairs of UI mockups with neutral labels ("Option A" / "Option B")
+- **Treatment Group**: Views identical pairs with "✨ AI Recommended" badges on designated options
+
+### Trial Domains (Iqra University Context)
+
+1. **Information Density** - Course Schedule (List vs. Grid)
+2. **Data Visualization** - HEC Attendance Warning (Circular gauge vs. Progress bar)
+3. **Financial Overview** - Fee Voucher (Card vs. Centered layout)
+4. **Campus Event** - Event Display (Visual-dominant vs. Compact)
+5. **Interaction** - QEC Faculty Evaluation (Radio buttons vs. Slider)
+6. **Navigation** - Digital Library (Floating search vs. Header-integrated)
+
+### Data Collection
+
+Each participant record captures:
+- **Primary DV**: Whether the AI-labeled option was selected
+- **Reaction time**: Millisecond-accurate (`performance.now()`)
+- **AI familiarity**: 5-point Likert scale covariate
+- **Semantic justification**: Free-text explanation (optional)
+- **Metadata**: UUID, condition, trial sequence, timestamps
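The millisecond-accurate reaction-time capture listed above can be sketched with a small timer closure. This is an illustrative helper, not the repository's actual `experiment.js` code; the injectable `now` parameter is an assumption added so the sketch is testable:

```javascript
// Hypothetical sketch of reaction-time capture via performance.now().
// `now` defaults to the high-resolution clock but can be injected
// for deterministic testing.
function makeTimer(now = () => performance.now()) {
  const start = now(); // called when the trial is displayed
  return () => Math.round(now() - start); // called on choice click, returns ms
}
```

In the experiment itself, the timer would be started when a trial pair renders and read when the participant clicks an option.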
+
+## Features
+
+### Technical Highlights
+
+- 🎯 **Client-side randomization** - 50/50 condition assignment
+- ⚡ **Millisecond-accurate telemetry** - `performance.now()` reaction time tracking
+- 🔥 **Firebase integration** - Automatic Firestore sync (`conformity_telemetry` collection)
+- 🎨 **Modern UI** - Dark theme with Bento grid design and radial glow effects
+- 🔒 **Privacy-first** - Anonymous UUIDs, no PII collection
+- 📱 **Responsive** - Mobile and desktop compatible
+- 🚫 **Anti-manipulation** - Browser back-navigation prevention
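The condition-assignment highlight above can be illustrated with a short sketch. This is an assumption about how the `?condition=` URL override (used in the Quick Start links) might combine with the 50/50 randomized fallback; `assignCondition` is a hypothetical name, not the actual implementation:

```javascript
// Hypothetical condition assignment: the URL parameter wins when present,
// otherwise fall back to a 50/50 client-side randomization.
function assignCondition(search, rng = Math.random) {
  const param = new URLSearchParams(search).get('condition');
  if (param === 'control') return 'control';
  if (param === 'ai') return 'ai_labeled';
  return rng() < 0.5 ? 'ai_labeled' : 'control';
}
```

At the point of entry this would be called as, e.g., `assignCondition(window.location.search)`.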
+
+### Research Features
+
+- Predetermined AI badge assignment (targets hypothesized modern design preferences; see `docs/METHODS.md`)
+- Randomized trial order
+- Left/Right position randomization (prevents motor habituation)
+- Covariate collection (AI familiarity)
+- Qualitative data capture (semantic justification)
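The trial-order and left/right randomization listed above can be sketched as a Fisher-Yates shuffle plus a per-trial side flip. The helper names are hypothetical; the actual `experiment.js` implementation may differ:

```javascript
// Fisher-Yates shuffle: unbiased randomization of trial order.
function shuffle(items, rng = Math.random) {
  const out = items.slice();
  for (let i = out.length - 1; i > 0; i--) {
    const j = Math.floor(rng() * (i + 1));
    [out[i], out[j]] = [out[j], out[i]];
  }
  return out;
}

// Randomize order, then flip which side option A appears on per trial
// (prevents motor habituation to one screen side).
function planTrials(trials, rng = Math.random) {
  return shuffle(trials, rng).map((t) => ({
    ...t,
    leftOption: rng() < 0.5 ? 'A' : 'B',
  }));
}
```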
+
+## Deployment Options
+
+### 1. Local Development
+```bash
+python -m http.server 8000
+```
+
+### 2. Firebase Hosting
+```bash
+npm install -g firebase-tools
+firebase login
+firebase deploy --only hosting
+```
+
+### 3. GitHub Pages
+1. Enable in repository Settings > Pages
+2. Access at: `https://[username].github.io/Conformity-experiment/code/index.html?condition=control`
+
+### 4. Netlify
+1. Drag `code/` folder to [app.netlify.com/drop](https://app.netlify.com/drop)
+2. Share generated URL with condition parameters
+
+## Documentation
+
+| Document | Purpose |
+|----------|---------|
+| [`docs/README.md`](docs/README.md) | Quick reference and file overview |
+| [`docs/METHODS.md`](docs/METHODS.md) | Formal research methodology (copy-paste ready) |
+| [`docs/RUNBOOK.md`](docs/RUNBOOK.md) | Step-by-step operational guide for running sessions |
+| [`docs/IRB_CHECKLIST.md`](docs/IRB_CHECKLIST.md) | 15-point ethical review checklist |
+| [`docs/CONSENT.txt`](docs/CONSENT.txt) | Participant informed consent template |
+| [`docs/DEBRIEF.txt`](docs/DEBRIEF.txt) | Post-study disclosure statement |
+
+## Ethical Considerations
+
+- ✅ **IRB-Ready**: Includes informed consent, debrief, and ethics checklist
+- ✅ **Minimal Risk**: Viewing UI mockups only
+- ✅ **Voluntary**: Participants can withdraw at any time
+- ✅ **Anonymous**: UUID-based identification, no PII
+- ✅ **Transparent**: Full disclosure of manipulation in debrief
+- ✅ **Withdrawal Rights**: 7-day data withdrawal window with Participant ID
+
+## Requirements
+
+- **Browser**: Chrome 90+, Firefox 88+, Safari 14+, Edge 90+
+- **Server** (optional): Python 3.x, Node.js, or any HTTP server
+- **Firebase** (optional): For data persistence and hosting
+
+## Data Format
+
+Data is stored in **tidy (long) format**, one record per trial, with the following schema:
+
+```javascript
+{
+ participant_id: "abc123xyz...", // Anonymous UUID
+ experimental_condition: "ai_labeled", // "control" or "ai_labeled"
+ ai_familiarity_covariate: 3, // 1-5 Likert scale
+ trial_sequence: 1, // 1-6
+ ui_domain: "Information Density...", // Trial category
+ ai_badge_position: "B", // "A", "B", or "none"
+ user_selection: "B", // "A" or "B"
+ chose_target_layout: true, // Boolean
+ reaction_time_ms: 2341, // Milliseconds
+ semantic_justification: "I prefer...", // Free text
+ timestamp: 1710278400000 // Unix ms
+}
+```
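A record with this schema could be persisted using the Firestore sync with localStorage fallback described in the docs. The sketch below is an assumption: the saver and storage object are injected so it stays self-contained, where in the real app the saver might wrap the modular SDK's `addDoc(collection(db, 'conformity_telemetry'), record)` and the storage would be `window.localStorage`:

```javascript
// Hypothetical persistence helper: try the cloud sync first, fall back
// to local storage so offline participants' data is not lost.
async function persistRecord(record, saveToFirestore, storage) {
  try {
    await saveToFirestore(record);
    return 'synced';
  } catch (err) {
    // Offline or misconfigured Firebase: keep the record locally
    // so it can be recovered and uploaded later.
    const key = `conformity_${record.participant_id}_${record.trial_sequence}`;
    storage.setItem(key, JSON.stringify(record));
    return 'local_fallback';
  }
}
```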
+
+## Usage
+
+### For Researchers
+
+1. Review [`docs/METHODS.md`](docs/METHODS.md) for study design
+2. Follow [`docs/RUNBOOK.md`](docs/RUNBOOK.md) for session operations
+3. Complete [`docs/IRB_CHECKLIST.md`](docs/IRB_CHECKLIST.md) for ethics approval
+4. Configure Firebase credentials (or use localStorage fallback)
+5. Deploy and share condition-specific URLs with participants
+
+### For Developers
+
+```javascript
+// Configuration
+const CFG = {
+  NUM_TRIALS: 6,
+  CONDITION: 'ai_labeled',            // or 'control'
+  COLLECTION: 'conformity_telemetry'
+};
+
+// Trial definition structure
+const TRIAL = {
+  domain: 'Trial Category Name',
+  renderA: () => `...`,               // Option A HTML
+  renderB: () => `...`,               // Option B HTML
+  target: 'A'                         // or 'B' (hypothesized preference)
+};
+```
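As a usage example, a concrete trial following the structure above might look like this (the markup and class names are illustrative placeholders, not the repository's actual mockup HTML):

```javascript
// Illustrative trial definition for the Information Density domain.
const scheduleTrial = {
  domain: 'Information Density (Course Schedule)',
  renderA: () => `<ul class="schedule-list"><li>CS-101 Mon 09:00</li></ul>`, // list view
  renderB: () => `<div class="schedule-grid"><span>CS-101</span></div>`,     // grid view
  target: 'B' // hypothesized preference for the grid layout
};
```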
+
+## Contributing
+
+This is an academic research project. For questions or collaboration inquiries, please open an issue.
+
+## License
+
+MIT License - See [LICENSE](LICENSE) for details
+
+## Citation
+
+If you use this experiment framework in your research, please cite:
+
+```bibtex
+@software{conformity_experiment_2026,
+ title = {UI Conformity Experiment: AI Authority Bias in Interface Design},
+ author = {[Your Name]},
+ year = {2026},
+ url = {https://github.com/hashexplaindata/Conformity-experiment}
+}
+```
+
+## Acknowledgments
+
+- Built with vanilla JavaScript for maximum portability
+- Firebase for data persistence
+- Iqra University and Pakistan HEC for contextual realism
+
+---
+
+**Version**: 2.0 | **Last Updated**: 2026-03-12 | **Status**: Production Ready
+
+- For operational guidance, see [`docs/RUNBOOK.md`](docs/RUNBOOK.md)
+- For methodology details, see [`docs/METHODS.md`](docs/METHODS.md)
diff --git a/docs/IRB_CHECKLIST.md b/docs/IRB_CHECKLIST.md
index cc02d86..da28e91 100644
--- a/docs/IRB_CHECKLIST.md
+++ b/docs/IRB_CHECKLIST.md
@@ -11,12 +11,12 @@
| 5 | Minimal risk | YES | Task involves viewing UI mockups and clicking |
| 6 | Deception disclosed | YES | Debrief explains AI label manipulation |
| 7 | Debrief provided | YES | Available at study completion and as DEBRIEF.txt |
-| 8 | Data security | YES | Data stored locally on participant devices |
+| 8 | Data security | YES | Data synced to Firebase Firestore with secure access rules |
| 9 | No vulnerable populations targeted | YES | 18+ adults only |
| 10 | Aggregate reporting only | YES | No individual results published |
| 11 | Contact information provided | YES | Researcher contact in consent and debrief |
| 12 | No physical or psychological harm | YES | Viewing interface designs only |
-| 13 | Data retention policy | N/A | Local CSV files; no server storage |
+| 13 | Data retention policy | YES | Firebase cloud storage with configurable retention |
| 14 | Compensation disclosed | N/A | No compensation offered |
| 15 | Conflicts of interest | NONE | Academic research only |
@@ -33,4 +33,4 @@
- [ ] Approval #: _______________
---
-_Template version: 1.0 | 2026-02-27_
+_Template version: 2.0 | 2026-03-12_
diff --git a/docs/METHODS.md b/docs/METHODS.md
index 40abc0c..71d3155 100644
--- a/docs/METHODS.md
+++ b/docs/METHODS.md
@@ -18,58 +18,51 @@ Participants were recruited from [classroom / online panel]. Inclusion criteria:
A between-subjects experimental design with two conditions:
-- **Control condition:** Participants viewed 8 pairs of UI mockups with neutral labels ("Option A" / "Option B") and selected their preferred design.
-- **AI Label condition:** Participants viewed the same 8 pairs, but one option in each pair displayed a "★ AI Recommended" badge.
+- **Control condition:** Participants viewed 6 pairs of UI mockups with neutral labels ("Option A" / "Option B") and selected their preferred design.
+- **AI Label condition:** Participants viewed the same 6 pairs, but one option in each pair displayed a "✨ AI Recommended" badge.
Assignment to conditions was randomized (50/50) using client-side JavaScript random number generation at the point of entry.
## Stimuli
-Eight pairs of high-fidelity UI mockups were created covering common interface patterns:
+Six pairs of high-fidelity UI mockups were created covering common interface patterns in the context of Iqra University and Pakistan Higher Education Commission (HEC):
-1. E-commerce product card (button color: blue vs. green)
-2. Settings page (toggle alignment: left vs. right)
-3. Analytics dashboard widget (chart type: bar vs. line)
-4. Signup form (CTA copy: "Sign Up" vs. "Get Started")
-5. Pricing table (highlighted tier: Basic vs. Pro)
-6. Navigation sidebar (icon style: outlined vs. filled)
-7. Notification banner (position: top vs. inline)
-8. Search results (layout: list vs. grid)
+1. **Information Density (Course Schedule)** - List view vs. Bento grid layout
+2. **Data Visualization (HEC Attendance Warning)** - Circular gauge with 78% attendance vs. Linear progress bar
+3. **Financial Overview (Fee Voucher)** - Card-based layout vs. Centered layout for semester fees
+4. **Campus Event (Visual Dominance)** - Visual-dominant poster vs. Compact text-heavy layout
+5. **Interaction (QEC Faculty Evaluation)** - Radio button scale vs. Interactive slider for rating
+6. **Navigation Hierarchy (Digital Library)** - Floating search button vs. Header-integrated search
-Each pair shared identical layout and functionality, differing only in a single design attribute. Pairs were rendered as CSS/HTML mockups within the experiment page. In the AI condition, one option per pair received a visually prominent "★ AI Recommended" badge. The assignment of which option received the badge was counterbalanced across pairs.
+Each pair shared identical content and functionality, differing only in a single design attribute (layout pattern, visualization type, or interaction method). Pairs were rendered as inline CSS/HTML mockups within the experiment page. In the AI condition, one option per pair received a visually prominent "✨ AI Recommended" badge. The assignment of which option (A or B) received the badge was predetermined based on hypothesized modern design preferences (e.g., Bento grid over list, circular gauge over bar chart).
## Procedure
-1. Participants accessed the experiment via a URL that randomly redirected them to one of two conditions.
+1. Participants accessed the experiment via a URL with condition parameter (`?condition=control` or `?condition=ai`).
2. A welcome screen explained the task and obtained informed consent.
-3. Participants completed 8 trials presented in randomized order.
-4. On each trial, two UI mockups appeared side-by-side (left/right placement randomized).
-5. Participants clicked to select their preferred design.
-6. After selection, they rated their confidence on a 5-point scale (1 = not at all confident, 5 = very confident).
-7. Upon completion, participants saw a summary and could download their responses.
+3. Participants rated their AI familiarity on a 5-point Likert scale (covariate: 1=Never used → 5=Daily user).
+4. Participants completed 6 trials presented in randomized order.
+5. On each trial, two UI mockups appeared side-by-side (left/right placement randomized).
+6. Participants clicked to select their preferred design.
+7. Reaction time was automatically captured using `performance.now()` for millisecond accuracy.
+8. After all trials, participants provided an optional free-text justification explaining their selection strategy.
+9. Upon completion, participants saw a confirmation screen with their anonymous Participant ID and data sync status.
## Measures
-- **Primary DV:** Choice (A or B) — coded as whether the AI-preferred option was selected
+- **Primary DV:** Choice (A or B) — coded as whether the AI-labeled option was selected (`chose_target_layout` boolean)
- **Secondary DVs:**
- - Confidence rating (1–5 Likert scale)
- - Reaction time (ms) — time from trial display to choice click
-- **Between-subjects IV:** Condition (control vs. AI label)
-- **Metadata:** Participant UUID, trial order, pair ID, timestamp
-
-## Metadata
-
-- Participant UUID
-- Condition (control vs. AI label)
-- AI familiarity covariate
-- Trial sequence
-- UI domain
-- AI badge position
-- User selection
-- Choice (target layout)
-- Reaction time (ms)
-- Semantic justification
-- Timestamp
+ - Reaction time (ms) — Time from trial display to choice click, measured via `performance.now()`
+ - Semantic justification — Free-text explanation of decision-making strategy (optional)
+- **Between-subjects IV:** Condition (`control` vs. `ai_labeled`)
+- **Covariate:** AI familiarity (1–5 Likert scale: Never used, Rarely, Occasionally, Frequently, Daily)
+- **Metadata:**
+ - Participant UUID (anonymous identifier)
+ - Trial sequence (1-6, randomized order)
+ - UI domain (trial category name)
+ - AI badge position (which option had the badge: "A", "B", or "none" for control)
+ - User selection ("A" or "B")
+ - Timestamp (Unix milliseconds)
## Ethics
@@ -79,7 +72,8 @@ Each pair shared identical layout and functionality, differing only in a single
- A debrief statement was provided upon completion explaining the true purpose of the study and the manipulation.
- Participants could withdraw at any time by closing their browser.
- The study involved minimal risk (viewing interface mockups and clicking preferences).
-- Data was stored locally on participant devices (CSV download) and not transmitted to external servers.
+- Data was synced to Firebase Firestore (`conformity_telemetry` collection) with fallback to localStorage for offline participants.
+- Participants were informed they could request data withdrawal within 7 days using their Participant ID.
---
-_Copy-paste this text into your Methods section. Update bracketed placeholders as needed._
+_Last updated: 2026-03-12 | Copy-paste this text into your Methods section. Update bracketed placeholders as needed._
diff --git a/docs/README.md b/docs/README.md
index 586f181..2507105 100644
--- a/docs/README.md
+++ b/docs/README.md
@@ -2,7 +2,7 @@
## Purpose
-This experiment tests whether an "AI Recommended" label influences users' UI design preferences. Participants view 8 pairs of interface mockups and select their preferred option. Half see neutral labels (control); half see one option per pair marked with a "★ AI Recommended" badge (treatment).
+This experiment tests whether an "AI Recommended" label influences users' UI design preferences. Participants view 6 pairs of interface mockups and select their preferred option. Half see neutral labels (control); half see one option per pair marked with a "✨ AI Recommended" badge (treatment).
## Folder Structure
@@ -16,7 +16,7 @@ This experiment tests whether an "AI Recommended" label influences users' UI des
1. Open `docs/RUNBOOK.md` and follow Step 1
2. Open `http://localhost:8000/code/index.html?condition=control` (Control)
3. Open `http://localhost:8000/code/index.html?condition=ai` (Treatment)
-4. Data is synced to Firebase and available via CSV download.
+4. Data is synced to Firebase Firestore collection: `conformity_telemetry`
## Key Files
@@ -34,5 +34,18 @@ This experiment tests whether an "AI Recommended" label influences users' UI des
[Your Name / Email]
+## Trial Domains
+
+The experiment includes 6 UI comparison trials covering:
+
+1. **Information Density (Course Schedule)** - List view vs. Grid view
+2. **Data Visualization (HEC Attendance Warning)** - Circular gauge vs. Progress bar
+3. **Financial Overview (Fee Voucher)** - Card layout vs. Centered layout
+4. **Campus Event (Visual Dominance)** - Visual-first vs. Compact layout
+5. **Interaction (QEC Faculty Evaluation)** - Radio buttons vs. Slider
+6. **Navigation Hierarchy (Digital Library)** - Floating search vs. Header-integrated search
+
+All trials use Iqra University and Pakistan Higher Education Commission (HEC) context for realism.
+
---
-_Generated: 2026-02-27 | Version: 1.0_
+_Last updated: 2026-03-12 | Version: 2.0_
diff --git a/docs/RUNBOOK.md b/docs/RUNBOOK.md
index 9d77bb6..890a74e 100644
--- a/docs/RUNBOOK.md
+++ b/docs/RUNBOOK.md
@@ -18,9 +18,9 @@ The experiment is now accessible at `http://localhost:8000/code/index.html`
### Step 2: Verify Both Conditions
- **Control Link**: `http://localhost:8000/code/index.html?condition=control`
- - Confirm: No "AI Suggested" badges visible.
+ - Confirm: No "✨ AI Recommended" badges visible.
- **AI Link**: `http://localhost:8000/code/index.html?condition=ai`
- - Confirm: "✨ AI Suggested" badges visible on designated options.
+ - Confirm: "✨ AI Recommended" badges visible on designated options.
### Step 3: Prepare Distribution Link
@@ -34,13 +34,15 @@ The experiment is now accessible at `http://localhost:8000/code/index.html`
### Step 4: Welcome Participants
- Instruct participants to open the link in their browser
-- They will be randomly assigned to control or AI condition
+- Participants will be assigned to either control or AI condition based on the URL parameter
- Estimated completion time: 3-5 minutes per participant
+- 6 trial pairs will be shown in randomized order
### Step 5: Monitor Progress
-- Each participant downloads their own CSV upon completion
-- Collect CSV files from participants
+- Data is automatically synced to Firebase Firestore (`conformity_telemetry` collection)
+- Participants see a completion confirmation with their anonymous Participant ID
+- Data includes reaction times, selections, AI familiarity rating, and optional justification
---
@@ -48,18 +50,28 @@ The experiment is now accessible at `http://localhost:8000/code/index.html`
### Step 6: Verify Data Sync
-- Access your Firebase Console.
-- Ensure all participant data is synced to the `conformity_telemetry` collection.
+- Access your Firebase Console at [console.firebase.google.com](https://console.firebase.google.com)
+- Navigate to Firestore Database
+- Check the `conformity_telemetry` collection for new participant data
+- Each participant record includes: condition, trial results, reaction times, AI familiarity, and timestamps
---
## Deployment (Optional)
+### Firebase Hosting
+
+1. Install Firebase CLI: `npm install -g firebase-tools`
+2. Login: `firebase login`
+3. Deploy: `firebase deploy --only hosting`
+4. Access at your Firebase hosting URL (e.g., `conformity-experiment.web.app`)
+
### GitHub Pages
-1. Push the `code/` folder to a GitHub repository
+1. Push the repository to GitHub
2. Enable GitHub Pages in Settings > Pages
-3. Share the Pages URL with participants
+3. Set source to the `main` branch
+4. Access at `https://[username].github.io/Conformity-experiment/code/index.html?condition=control` or `?condition=ai`
### Netlify
@@ -72,9 +84,10 @@ The experiment is now accessible at `http://localhost:8000/code/index.html`
| Issue | Solution |
|-------|----------|
-| Images not loading | Ensure all files are in the same `code/` directory |
-| CSV download fails | Check browser popup blocker settings |
-| Charts not rendering | Ensure internet connection (Chart.js loads from CDN) |
+| Badges not appearing | Check URL parameter: `?condition=ai` (not `?condition=control`) |
+| Firebase sync fails | Check internet connection and Firebase configuration in `experiment.js` |
+| Blank screen | Ensure local server is running and path is correct |
+| Browser compatibility | Use modern browsers (Chrome 90+, Firefox 88+, Safari 14+, Edge 90+) |
---
-_Last updated: 2026-02-27_
+_Last updated: 2026-03-12_