376 changes: 376 additions & 0 deletions your-code/.ipynb_checkpoints/challenge-1-checkpoint.ipynb
@@ -0,0 +1,376 @@
{
"cells": [
{
"cell_type": "markdown",
"metadata": {},
"source": [
"# Challenge 1 - T-test\n",
"\n",
"In statistics, t-test is used to test if two data samples have a significant difference between their means. There are two types of t-test:\n",
"\n",
"* **Student's t-test** (a.k.a. independent or uncorrelated t-test). This type of t-test is to compare the samples of **two independent populations** (e.g. test scores of students in two different classes). `scipy` provides the [`ttest_ind`](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.ttest_ind.html) method to conduct student's t-test.\n",
"\n",
"* **Paired t-test** (a.k.a. dependent or correlated t-test). This type of t-test is to compare the samples of **the same population** (e.g. scores of different tests of students in the same class). `scipy` provides the [`ttest_re`](https://docs.scipy.org/doc/scipy-0.15.1/reference/generated/scipy.stats.ttest_rel.html) method to conduct paired t-test.\n",
"\n",
"Both types of t-tests return a number which is called the **p-value**. If p-value is below 0.05, we can confidently declare the null-hypothesis is rejected and the difference is significant. If p-value is between 0.05 and 0.1, we may also declare the null-hypothesis is rejected but we are not highly confident. If p-value is above 0.1 we do not reject the null-hypothesis.\n",
"\n",
"Read more about the t-test in [this article](http://b.link/test50) and [this Quora](http://b.link/unpaired97). Make sure you understand when to use which type of t-test. "
]
},
{
"cell_type": "code",
"execution_count": 10,
"metadata": {},
"outputs": [],
"source": [
"# Import libraries\n",
"import pandas as pd"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Import dataset\n",
"\n",
"In this challenge we will work on the Pokemon dataset. The goal is to test whether different groups of pokemon (e.g. Legendary vs Normal, Generation 1 vs 2, single-type vs dual-type) have different stats (e.g. HP, Attack, Defense, etc.)."
]
},
{
"cell_type": "code",
"execution_count": 11,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
" # Name Type 1 Type 2 Total HP Attack Defense \\\n",
"0 1 Bulbasaur Grass Poison 318 45 49 49 \n",
"1 2 Ivysaur Grass Poison 405 60 62 63 \n",
"2 3 Venusaur Grass Poison 525 80 82 83 \n",
"3 3 VenusaurMega Venusaur Grass Poison 625 80 100 123 \n",
"4 4 Charmander Fire NaN 309 39 52 43 \n",
"\n",
" Sp. Atk Sp. Def Speed Generation Legendary \n",
"0 65 65 45 1 False \n",
"1 80 80 60 1 False \n",
"2 100 100 80 1 False \n",
"3 122 120 80 1 False \n",
"4 60 50 65 1 False \n"
]
}
],
"source": [
"# Your code here:\n",
"# Load the Pokemon dataset\n",
"df = pd.read_csv( r'C:\\Users\\navin\\OneDrive\\Desktop\\Ironhack\\Week_Lab\\Week 12\\lab-hypothesis-testing-2\\your-code\\Pokemon.csv')\n",
"\n",
"# Display the first few rows of the dataset to understand its structure\n",
"print(df.head())\n",
"\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### First we want to define a function with which we can test the means of a feature set of two samples. \n",
"\n",
"In the next cell you'll see the annotations of the Python function that explains what this function does and its arguments and returned value. This type of annotation is called **docstring** which is a convention used among Python developers. The docstring convention allows developers to write consistent tech documentations for their codes so that others can read. It also allows some websites to automatically parse the docstrings and display user-friendly documentations.\n",
"\n",
"Follow the specifications of the docstring and complete the function."
]
},
{
"cell_type": "code",
"execution_count": 17,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'HP': 3.330647684846191e-15, 'Attack': 7.827253003205333e-24, 'Defense': 1.5842226094427255e-12, 'Sp. Atk': 6.314915770427266e-41, 'Sp. Def': 1.8439809580409594e-26, 'Speed': 2.3540754436898437e-21, 'Total': 3.0952457469652825e-52}\n"
]
}
],
"source": [
"from scipy.stats import ttest_ind\n",
"\n",
"def t_test_features(s1, s2, features=['HP', 'Attack', 'Defense', 'Sp. Atk', 'Sp. Def', 'Speed', 'Total']):\n",
" \"\"\"Test means of a feature set of two samples\n",
" \n",
" Args:\n",
" s1 (dataframe): sample 1\n",
" s2 (dataframe): sample 2\n",
" features (list): an array of features to test\n",
" \n",
" Returns:\n",
" dict: a dictionary of t-test scores for each feature where the feature name is the key and the p-value is the value\n",
" \"\"\"\n",
"\n",
" results = {}\n",
"\n",
" # Your code here\n",
"\n",
" for feature in features:\n",
" # Perform a two-sample t-test\n",
" t_stat, p_value = ttest_ind(s1[feature], s2[feature], nan_policy='omit')\n",
" \n",
" # Store the p-value in the results dictionary\n",
" results[feature] = p_value\n",
" \n",
" return results"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Using the `t_test_features` function, conduct t-test for Lengendary vs non-Legendary pokemons.\n",
"\n",
"*Hint: your output should look like below:*\n",
"\n",
"```\n",
"{'HP': 1.0026911708035284e-13,\n",
" 'Attack': 2.520372449236646e-16,\n",
" 'Defense': 4.8269984949193316e-11,\n",
" 'Sp. Atk': 1.5514614112239812e-21,\n",
" 'Sp. Def': 2.2949327864052826e-15,\n",
" 'Speed': 1.049016311882451e-18,\n",
" 'Total': 9.357954335957446e-47}\n",
" ```"
]
},
{
"cell_type": "code",
"execution_count": 18,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'HP': 3.330647684846191e-15, 'Attack': 7.827253003205333e-24, 'Defense': 1.5842226094427255e-12, 'Sp. Atk': 6.314915770427266e-41, 'Sp. Def': 1.8439809580409594e-26, 'Speed': 2.3540754436898437e-21, 'Total': 3.0952457469652825e-52}\n"
]
}
],
"source": [
"# Your code here\n",
"# Split the data into Legendary and non-Legendary Pokémon\n",
"legendary_pokemon = df[df['Legendary'] == True]\n",
"non_legendary_pokemon = df[df['Legendary'] == False]\n",
"\n",
"# Conduct t-tests on the specified features\n",
"t_test_results = t_test_features(legendary_pokemon, non_legendary_pokemon)\n",
"\n",
"# Print the results\n",
"print(t_test_results)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### From the test results above, what conclusion can you make? Do Legendary and non-Legendary pokemons have significantly different stats on each feature?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Your comment here\n",
"- This strongly suggests that Legendary Pokémon have significantly different (generally higher) stats across all these features compared to non-Legendary Pokémon."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Next, conduct t-test for Generation 1 and Generation 2 pokemons."
]
},
{
"cell_type": "code",
"execution_count": 19,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'HP': 0.13791881412813622, 'Attack': 0.24050968418101457, 'Defense': 0.5407630349194362, 'Sp. Atk': 0.14119788176331508, 'Sp. Def': 0.16781226231606386, 'Speed': 0.0028356954812578704, 'Total': 0.5599140649014442}\n"
]
}
],
"source": [
"# Your code here\n",
"\n",
"# Filter the DataFrame to get Generation 1 and Generation 2 Pokémon\n",
"gen1 = df[df['Generation'] == 1] # Generation 1 Pokémon\n",
"gen2 = df[df['Generation'] == 2] # Generation 2 Pokémon\n",
"\n",
"# Use the previously defined function to run t-tests \n",
"t_test_results_gen1_vs_gen2 = t_test_features(gen1, gen2)\n",
"\n",
"# Print the results\n",
"print(t_test_results_gen1_vs_gen2)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### What conclusions can you make?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Your comment here\n",
"The t-tests are printed, indicate that there are statistically significant differences in the mean values of each feature between Generation 1 and Generation 2 Pokémon."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Compare pokemons who have single type vs those having two types."
]
},
{
"cell_type": "code",
"execution_count": 20,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"{'HP': 0.11060643144431842, 'Attack': 0.00015741395666164396, 'Defense': 3.250594205757004e-08, 'Sp. Atk': 0.0001454917404035147, 'Sp. Def': 0.00010893304795534396, 'Speed': 0.024051410794037463, 'Total': 1.1749035008828752e-07}\n"
]
}
],
"source": [
"# Your code here\n",
"\n",
"# Filter DataFrames: single-type and dual-type Pokémon\n",
"single_type = df[df['Type 2'].isnull()] # Pokémon with only one type\n",
"dual_type = df[df['Type 2'].notnull()] # Pokémon with two types\n",
"\n",
"# Use t-tests on specified features\n",
"t_test_results_single_vs_dual = t_test_features(single_type, dual_type)\n",
"\n",
"# Print the results\n",
"print(t_test_results_single_vs_dual)\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### What conclusions can you make?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Your comment here\n",
"-- The p-values for features like Attack, Sp. Atk, Sp, Def and speed are below 0.05, this means that single-type and dual-type Pokémon have significantly different average values for these features.\n",
"-- The p-values for features like HP and Defense are above 0.05, this means that there is no significant difference between single-type and dual-type Pokémon for these stats. This means that, statistically, their averages are similar."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### Now, we want to compare whether there are significant differences of `Attack` vs `Defense` and `Sp. Atk` vs `Sp. Def` of all pokemons. Please write your code below.\n",
"\n",
"*Hint: are you comparing different populations or the same population?*"
]
},
{
"cell_type": "code",
"execution_count": 21,
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Paired t-test between Attack and Defense:\n",
"T-statistic: 4.325566393330478, P-value: 1.7140303479358558e-05\n",
"\n",
"Paired t-test between Sp. Atk and Sp. Def:\n",
"T-statistic: 0.853986188453353, P-value: 0.3933685997548122\n"
]
}
],
"source": [
"# Your code here\n",
"from scipy.stats import ttest_rel\n",
"\n",
"# Paired t-test between Attack and Defense\n",
"attack_defense_t_stat, attack_defense_p_value = ttest_rel(df['Attack'], df['Defense'], nan_policy='omit')\n",
"\n",
"# Paired t-test between Sp. Atk and Sp. Def\n",
"spatk_spdef_t_stat, spatk_spdef_p_value = ttest_rel(df['Sp. Atk'], df['Sp. Def'], nan_policy='omit')\n",
"\n",
"# Display results\n",
"print(f\"Paired t-test between Attack and Defense:\")\n",
"print(f\"T-statistic: {attack_defense_t_stat}, P-value: {attack_defense_p_value}\")\n",
"\n",
"print(f\"\\nPaired t-test between Sp. Atk and Sp. Def:\")\n",
"print(f\"T-statistic: {spatk_spdef_t_stat}, P-value: {spatk_spdef_p_value}\")\n"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"#### What conclusions can you make?"
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"# Your comment here\n",
"Attack vs. Defense: There is a significant difference, with Attack generally being higher.\n",
"Sp. Atk vs. Sp. Def: There is no significant difference; the two stats are quite similar on average."
]
}
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.11.7"
}
},
"nbformat": 4,
"nbformat_minor": 4
}