A data science approach to employee wellbeing - using statistical simulation and power analysis to quantify the impact of 15-minute naps on productivity and energy levels.
Tech companies invest heavily in employee perks (free food, gyms, game rooms), but how do we quantify ROI on workplace wellness initiatives? This project applies rigorous data science methodology to answer: Can a simple 15-minute nap intervention improve productivity metrics?
Using Monte Carlo simulation and statistical power analysis, we tested whether structured napping could move the needle on two key employee metrics:
- Productivity (measured via Occupational Self-Efficacy Scale)
- Energy levels (subjective ratings)
Productivity:
- Effect Size: +10 points on the OSS scale (14% improvement)
- Statistical Power: 99.3% detection rate
- 95% CI: [5.68, 14.40]
- Sample Size: 39 per group (78 total)
Energy:
- Effect Size: +1.5 points (30% improvement from baseline)
- Statistical Power: 90.5% detection rate
- 95% CI: [0.60, 2.39]
- Sample Size: 39 per group (78 total)
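As a sketch of where a confidence interval like the one above comes from: each simulated study is two Gaussian samples, and `t.test()` returns the 95% CI for the difference in means directly. The parameters below follow the energy model (baseline 5.0, σ = 2, Δ = 1.5, n = 39 per group); the seed is only for reproducibility.

```r
# One simulated energy study: draw both groups and extract the 95% CI
set.seed(1)

control   <- rnorm(39, mean = 5.0, sd = 2)   # no-nap group
treatment <- rnorm(39, mean = 6.5, sd = 2)   # 15-min nap group

fit <- t.test(treatment, control)
fit$conf.int   # 95% CI for the mean difference (treatment - control)
```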
- Scalable intervention: Low cost, high impact
- Data-backed: 10,000 simulations validate effectiveness
- Quick wins: 15 minutes = measurable productivity gains
- ROI: Minimal infrastructure investment for significant output improvement
```r
# Core libraries
library(ggplot2)        # Data visualization
library(kableExtra)     # Table formatting
stats::power.t.test()   # Statistical power calculation (stats ships with base R)
```
Statistical methods:
- Monte Carlo simulation (10,000 iterations)
- Two-sample t-tests
- Power analysis
- Confidence interval estimation

Analysis pipeline:

```
Data Generation → Statistical Testing → Effect Size Calculation → Visualization
      ↓                    ↓                      ↓                     ↓
   rnorm()             t.test()               mean(Δ)               ggplot2
   (n=39)          (α=0.05, β=0.10)       (distributions)
```
Productivity Model:

```
# Control group (no nap)
μ_control = 70 (OSS score)
σ = 10

# Treatment group (15-min nap)
μ_treatment = 80 (OSS score)
σ = 10

# Power calculation
Δ = 10 points
n = 39 per group
power = 0.90
```

Energy Model:

```
# Control group
μ_control = 5.0
σ = 2

# Treatment group
μ_treatment = 6.5
σ = 2

# Power calculation
Δ = 1.5 points
n = 39 per group
power = 0.90
```

Calculated the minimum detectable effect size and the sample size required to achieve 90% statistical power:
- Productivity: n=23 per group (actual: 39 for safety margin)
- Energy: n=34 per group (actual: 39 for consistency)
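These sample-size figures can be checked with `stats::power.t.test()`; the inputs below are exactly the parameters stated in the two models above (two-sided test at α = 0.05, 90% power).

```r
# Sample size needed per group for 90% power at α = 0.05 (two-sided)

# Productivity: detect a 10-point OSS shift with sd = 10
n_prod <- power.t.test(delta = 10, sd = 10, sig.level = 0.05, power = 0.90)$n
ceiling(n_prod)    # ~23 per group

# Energy: detect a 1.5-point shift with sd = 2
n_energy <- power.t.test(delta = 1.5, sd = 2, sig.level = 0.05, power = 0.90)$n
ceiling(n_energy)
```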
```r
simulate_study <- function(n_per_group, effect_present,
                           baseline = 70, effect = 10, std_dev = 10) {
  # Defaults follow the productivity model; under the null
  # (effect_present = FALSE) both groups share the same mean
  true_effect <- if (effect_present) effect else 0
  control   <- rnorm(n_per_group, mean = baseline, sd = std_dev)
  treatment <- rnorm(n_per_group, mean = baseline + true_effect, sd = std_dev)
  test <- t.test(control, treatment)
  list(p_value = test$p.value, effect = mean(treatment) - mean(control))
}

# Run 10,000 simulations per scenario
results <- replicate(10000, simulate_study(39, TRUE))
```

- False positive rate: 4.9-5.1% (expected ~5% at α=0.05)
- True positive rate: 90.5-99.3% (exceeds 90% power target)
- Effect size confidence: Narrow CIs indicate robust estimates
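The detection rates above can be reproduced with a self-contained sketch (parameters assumed from the productivity model: baseline 70, σ = 10, true effect +10, n = 39 per group):

```r
# Monte Carlo check of false positive rate and power
set.seed(42)

run_once <- function(n, effect) {
  control   <- rnorm(n, mean = 70, sd = 10)
  treatment <- rnorm(n, mean = 70 + effect, sd = 10)
  t.test(control, treatment)$p.value   # p-value of one simulated study
}

n_sims <- 10000
p_null <- replicate(n_sims, run_once(39, 0))    # no real effect
p_alt  <- replicate(n_sims, run_once(39, 10))   # true 10-point effect

mean(p_null < 0.05)   # false positive rate, expected ~0.05
mean(p_alt  < 0.05)   # true positive rate (power), expected near 0.99
```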
Generated two key plots showing distribution overlap:

Productivity Distribution
- Blue: null hypothesis (no effect)
- Orange: alternative hypothesis (10-point increase)
- Minimal overlap = high power to detect true effects

Energy Distribution
- Red: null hypothesis
- Green: alternative hypothesis (1.5-point increase)
- Demonstrates 90.5% detection capability
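A minimal sketch of such an overlap plot in ggplot2, using the productivity-model parameters (μ = 70 vs. 80, σ = 10); the exact colors, axis ranges, and labels here are assumptions, not the report's styling:

```r
# Overlay the null and alternative score distributions
library(ggplot2)

x  <- seq(40, 110, length.out = 400)
df <- rbind(
  data.frame(x = x, density = dnorm(x, mean = 70, sd = 10),
             hypothesis = "Null (no effect)"),
  data.frame(x = x, density = dnorm(x, mean = 80, sd = 10),
             hypothesis = "Alternative (+10 points)")
)

p <- ggplot(df, aes(x, density, fill = hypothesis)) +
  geom_area(alpha = 0.4, position = "identity") +
  labs(x = "OSS score", y = "Density",
       title = "Productivity: null vs. alternative distributions")
p
```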
- Data-driven wellness programs: Move beyond anecdotal evidence
- A/B testing framework: Template for testing other interventions
- Metric selection: Demonstrates choosing measurable outcomes
- Productivity metrics: OSS as proxy for self-reported efficiency
- Statistical rigor: Power analysis ensures meaningful conclusions
- Scalable measurement: Survey-based approach works for large orgs
- Developer productivity: Energy → Focus → Code quality
- Meeting optimization: Post-lunch naps vs. afternoon meetings
- Remote work policies: Async schedules enabling optimal rest
- Product features: Nap tracking, reminder notifications
- Impact measurement: Benchmarking framework for customers
- Data storytelling: Model for presenting wellness ROI
- Google: Nap pods in offices
- NASA: Endorsed 26-minute naps for astronauts (26% boost in alertness)
- Nike: Designated quiet rooms
- Uber: Nap-friendly policies for drivers
Phase 1: Pilot (4 weeks)
- Recruit 80 volunteers from engineering teams
- Randomized control trial with OSS surveys
- Track: productivity, energy, meeting performance
Phase 2: Analysis (2 weeks)
- Statistical testing following this methodology
- A/B test results presentation to leadership
Phase 3: Scale (if positive results)
- Install nap spaces in offices
- Update remote work guidelines
- Integrate into wellness app
```
.
├── README.md                     # This file
├── Group_6_R_Simulation.Rmd      # Reproducible R analysis
├── Group_6_Final_Report.html     # Full technical report
├── Group_6_PPT_Slides.pdf        # Executive presentation
└── docs/
    ├── methodology.md            # Detailed statistical methods
    └── implementation_guide.md   # Practical rollout steps
```
```r
# R version 4.0+
install.packages(c("ggplot2", "kableExtra"))
```

```shell
# Clone repository
git clone https://github.com/traceyho59/workplace-napping-productivity.git
```

```r
# Render the R Markdown analysis
rmarkdown::render("Group_6_R_Simulation.Rmd")
# Outputs:
# - Statistical tables
# - Distribution plots
# - Power analysis results
```

```r
# Modify parameters in the simulation functions
productivity_baseline <- 70   # Your org's baseline OSS
productivity_target   <- 80   # Target improvement
sample_size           <- 50   # Your pilot size
# Re-run simulations with your parameters
```

- Reproducibility: All code available, fully documented
- Transparency: Simulation parameters clearly stated
- Validation: 10,000 iterations ensure robust conclusions
- Scalability: Framework works for any wellness intervention
- Generalizability: Study focused on 22-26 age group in finance/consulting/tech
- Long-term effects: Current study simulates 4-week intervention
- Individual differences: Some employees may benefit more than others
- Optimal duration: 15 minutes chosen based on literature; could A/B test 10/15/20 min
- Sleep inertia: Some individuals experience grogginess; need mitigation strategies
- Multi-arm testing: Compare 10/15/20/30 minute naps
- Time-of-day effects: Morning vs afternoon naps
- Job function analysis: Engineering vs sales vs operations
- Longitudinal tracking: 3/6/12 month impact studies
Project Team:
- Tracey Ho (Lead Analyst)
- Victor Zhan
- Yi (Billy) Chen
- Yi Ching (Sunny) Wang
- Samantha Yung
Academic Context: Columbia University, Master of Science in Applied Analytics (Dec 2025)
Course Focus: Experimental design, statistical simulation, applied data science
- Monte Carlo simulation
- Statistical hypothesis testing
- Power analysis & sample size calculation
- Data visualization (ggplot2)
- Reproducible research (R Markdown)
- Problem framing (wellness ROI)
- Metric selection (OSS, SUDS)
- Stakeholder communication
- Implementation planning
- GitHub documentation
- Executive presentations
- Technical reports
- Code documentation
GitHub: @traceyho59
Other Projects:
- NYC Rent Analysis - Real estate market analytics (Python)
- CTR Prediction - Machine learning for marketing (R/XGBoost)
This project was completed as part of Columbia University's Applied Analytics program. Code and methodology are available for educational and commercial use.
Thoughts on workplace napping policies? Open an issue or start a discussion!
Want to replicate this study at your company? Feel free to fork this repo and adapt the methodology.
Questions about the statistical approach? Check out the full technical report or reach out!
Keywords: employee productivity, workplace wellness, statistical simulation, A/B testing, people analytics, data science, R programming, Monte Carlo methods, power analysis, tech company benefits, developer productivity, employee experience
Tech Stack: R · ggplot2 · Statistical Modeling · Monte Carlo Simulation · Hypothesis Testing · Data Visualization · Reproducible Research