Facebook has introduced a new bidding strategy called "average bidding" as an alternative to the existing "maximum bidding" method.
A client, bombabomba.com, wants to evaluate whether this new method leads to higher conversion rates.
To achieve this:
- An A/B test has been conducted for one month
- The goal is to determine if there is a statistically significant difference between the two bidding methods
The key success metric is **Purchase**.
The dataset contains website interaction and performance data, including user engagement with ads and resulting revenue.
There are two separate groups:
- Control Group → Maximum Bidding
- Test Group → Average Bidding
These datasets are stored in different sheets of the ab_testing.xlsx file.
| Variable | Description |
|---|---|
| Impression | Number of ad impressions |
| Click | Number of clicks on ads |
| Purchase | Number of purchases after clicks |
| Earning | Revenue generated from purchases |
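As a minimal sketch, the two sheets can be loaded with pandas. The sheet names in the commented lines are assumptions, so adjust them to the actual workbook; synthetic stand-in data keeps the snippet self-contained and runnable without the file.

```python
import numpy as np
import pandas as pd

# Real load (sheet names are an assumption; check the workbook):
# control = pd.read_excel("ab_testing.xlsx", sheet_name="Control Group")
# test = pd.read_excel("ab_testing.xlsx", sheet_name="Test Group")

# Synthetic stand-in data so the snippet runs without the file:
rng = np.random.default_rng(42)
control = pd.DataFrame({
    "Impression": rng.normal(100_000, 20_000, 40),
    "Click": rng.normal(5_000, 1_300, 40),
    "Purchase": rng.normal(550, 130, 40),
    "Earning": rng.normal(1_900, 300, 40),
})
# Test group as a slightly perturbed copy, purely for illustration:
test = control + rng.normal(0, 20, size=control.shape)

# Quick look at both groups on the key metric:
print(control["Purchase"].describe())
print(test["Purchase"].describe())
```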
The objectives of this analysis are to:
- Compare conversion performance between the two bidding strategies
- Determine whether any difference is statistically significant
- Support decision-making with data-driven insights
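To make the comparison concrete, a per-group conversion rate can be computed. The purchases-per-click definition used here is one common choice (purchases per impression is another); the small frames below are hypothetical stand-ins for the two sheets.

```python
import pandas as pd

# Hypothetical stand-in rows; in practice these come from the two
# sheets of ab_testing.xlsx.
control = pd.DataFrame({"Click": [5000, 4800], "Purchase": [550, 600]})
test = pd.DataFrame({"Click": [4000, 3900], "Purchase": [560, 590]})

def conversion_rate(df: pd.DataFrame) -> float:
    """Purchases per click, pooled over all observations."""
    return df["Purchase"].sum() / df["Click"].sum()

print(f"control: {conversion_rate(control):.4f}")
print(f"test:    {conversion_rate(test):.4f}")
```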
The analysis proceeds in four stages:
- Exploratory data analysis (EDA)
- Hypothesis testing:
  - H₀ (null hypothesis): there is no difference in mean Purchase between the two groups
  - H₁ (alternative hypothesis): there is a difference in mean Purchase between the two groups
- Assumption checks:
  - Normality (Shapiro-Wilk test)
  - Homogeneity of variance (Levene's test)
- Statistical testing:
  - Independent two-sample t-test (if both assumptions hold)
  - Mann-Whitney U test (if either assumption is violated)
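The decision flow above can be sketched with SciPy. The arrays here are synthetic placeholders for the two groups' Purchase columns, and a significance level of α = 0.05 is assumed throughout.

```python
import numpy as np
from scipy import stats

# Placeholder samples standing in for the Purchase columns:
rng = np.random.default_rng(0)
control = rng.normal(550, 130, 40)
test = rng.normal(580, 130, 40)

ALPHA = 0.05

# 1) Normality check (Shapiro-Wilk) for each group
p_control = stats.shapiro(control).pvalue
p_test = stats.shapiro(test).pvalue

if p_control > ALPHA and p_test > ALPHA:
    # 2) Homogeneity of variance (Levene's test)
    equal_var = stats.levene(control, test).pvalue > ALPHA
    # 3) Independent two-sample t-test (Welch's t-test if variances differ)
    p_value = stats.ttest_ind(control, test, equal_var=equal_var).pvalue
else:
    # Fall back to the non-parametric Mann-Whitney U test
    p_value = stats.mannwhitneyu(control, test, alternative="two-sided").pvalue

print("Reject H0" if p_value < ALPHA else "Fail to reject H0")
```

Note that `ttest_ind(..., equal_var=False)` is Welch's t-test, which is the standard fallback when Levene's test rejects equal variances.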
Findings:
- No statistically significant difference was found between the control and test groups on the Purchase metric.
- This suggests that average bidding does not demonstrably outperform maximum bidding in terms of conversions.
This analysis delivers:
- A clear evaluation of bidding strategy performance
- A data-backed recommendation for marketing optimization
- Improved decision-making for advertising investments