diff --git a/your-code/.ipynb_checkpoints/Loading Datasets into Scikit-learn-checkpoint.ipynb b/your-code/.ipynb_checkpoints/Loading Datasets into Scikit-learn-checkpoint.ipynb
new file mode 100644
index 0000000..33c103c
--- /dev/null
+++ b/your-code/.ipynb_checkpoints/Loading Datasets into Scikit-learn-checkpoint.ipynb
@@ -0,0 +1,780 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Loading Datasets into Scikit-learn\n",
+ "\n",
+ "\n",
+ "**Lesson Goals**\n",
+ "\n",
+ "In this lesson you will learn how to:\n",
+ "\n",
+ "- Load Scikit-learn's bundled datasets.\n",
+ "- Load other external datasets of the most relevant formats.\n",
+ "- Visualize your dataset.\n",
+ "\n",
+ "**Introduction**\n",
+ "\n",
+ "In the Machine Learning workflow presented in previous lessons, extracting, transforming, and loading your dataset are the first stages. When you read a dataset from your Python application, you load it into a data object: typically a dataframe, an ndarray, a dictionary, or a list. The process of loading a dataset with scikit-learn depends on the type of dataset: whether or not it is bundled with scikit-learn, and if not, on the format of the dataset. We will cover these cases separately, providing code snippets you can reuse in your own Machine Learning workflows.\n",
+ "\n",
+ "**Load Bundled Dataset**\n",
+ "\n",
+ "As mentioned in the lesson introducing Scikit-learn, it comes with several datasets bundled that you can load quickly from your Python application. There are three datasets representing regression problems:\n",
+ "\n",
+ "- Boston house prices.\n",
+ "- Diabetes.\n",
+ "- Linnerud.\n",
+ "\n",
+ "In these datasets, the domain of the target attribute is numeric.\n",
+ "\n",
+ "There are also four datasets representing classification problems:\n",
+ "\n",
+ "- Iris.\n",
+ "- Digits.\n",
+ "- Wine.\n",
+ "- Breast cancer.\n",
+ "\n",
+ "In these datasets, the target attribute might be categorical or an integer with a limited number of values (for example, 0 or 1).\n",
+ "\n",
+ "All of them are public open datasets that you can use to test your Machine Learning workflows.\n",
+ "\n",
+ "You can load one of these datasets and have a look at the structure using the following Python code:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "dict_keys(['data', 'target', 'DESCR', 'feature_names', 'data_filename', 'target_filename'])"
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from sklearn import datasets\n",
+ "\n",
+ "# dictionary-like object\n",
+ "diabetesDataset = datasets.load_diabetes()\n",
+ "\n",
+ "# Print all attributes\n",
+ "diabetesDataset.keys()"
+ ]
+ },
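The dict-like object returned above can be combined into a single pandas DataFrame for easier inspection; a minimal sketch using the `data`, `feature_names`, and `target` attributes shown by `keys()`:

```python
from sklearn import datasets
import pandas as pd

diabetes = datasets.load_diabetes()

# combine the feature matrix and the target into one DataFrame
df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
df["target"] = diabetes.target
print(df.shape)  # 442 instances, 10 features plus the target column
```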
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "By printing the DESCR attribute of the dataset, you get its documentation: what the dataset describes, the number of instances and attributes, the target attribute, any preprocessing already performed on the dataset, and a citation of its original source. Note that this only works for the bundled datasets.\n",
+ "\n",
+ "\n",
+ "**Load External Dataset**\n",
+ "\n",
+ "The datasets that come bundled with Scikit-learn are very convenient to get you started quickly with building Machine Learning workflows and testing your code. But most of the time you will be working with external datasets that you will download from the web and load from your computer storage device (e.g. your hard disk) into your Python program.\n",
+ "\n",
+ "\n",
+ "# CSV Format\n",
+ "\n",
+ "CSV stands for \"Comma Separated Values.\" In a CSV file, data is saved in tabular format: each line is a row, and within a row the columns (or features) are separated by commas.\n",
+ "\n",
+ "Typically, we read a CSV dataset into a pandas DataFrame, which is a good choice due to its flexibility and versatility. In this lesson, we will use the census data to demonstrate loading."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ " CensusId State County TotalPop Men Women Hispanic White Black \\\n",
+ "0 1001 Alabama Autauga 55221 26745 28476 2.6 75.8 18.5 \n",
+ "1 1003 Alabama Baldwin 195121 95314 99807 4.5 83.1 9.5 \n",
+ "2 1005 Alabama Barbour 26932 14497 12435 4.6 46.2 46.7 \n",
+ "3 1007 Alabama Bibb 22604 12073 10531 2.2 74.5 21.4 \n",
+ "4 1009 Alabama Blount 57710 28512 29198 8.6 87.9 1.5 \n",
+ "\n",
+ " Native ... Walk OtherTransp WorkAtHome MeanCommute Employed \\\n",
+ "0 0.4 ... 0.5 1.3 1.8 26.5 23986 \n",
+ "1 0.6 ... 1.0 1.4 3.9 26.4 85953 \n",
+ "2 0.2 ... 1.8 1.5 1.6 24.1 8597 \n",
+ "3 0.4 ... 0.6 1.5 0.7 28.8 8294 \n",
+ "4 0.3 ... 0.9 0.4 2.3 34.9 22189 \n",
+ "\n",
+ " PrivateWork PublicWork SelfEmployed FamilyWork Unemployment \n",
+ "0 73.6 20.9 5.5 0.0 7.6 \n",
+ "1 81.5 12.3 5.8 0.4 7.5 \n",
+ "2 71.8 20.8 7.3 0.1 17.6 \n",
+ "3 76.8 16.1 6.7 0.4 8.3 \n",
+ "4 82.0 13.5 4.2 0.4 7.7 \n",
+ "\n",
+ "[5 rows x 37 columns]"
+ ]
+ },
+ "execution_count": 2,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import pandas as pd\n",
+ "\n",
+ "census = pd.read_csv('../census.csv')\n",
+ "census.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "It is often a good idea to check the shape of the resulting DataFrame:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(3220, 37)"
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "census.shape"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can also look at the columns and their types using the dtypes attribute:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "CensusId int64\n",
+ "State object\n",
+ "County object\n",
+ "TotalPop int64\n",
+ "Men int64\n",
+ "Women int64\n",
+ "Hispanic float64\n",
+ "White float64\n",
+ "Black float64\n",
+ "Native float64\n",
+ "Asian float64\n",
+ "Pacific float64\n",
+ "Citizen int64\n",
+ "Income float64\n",
+ "IncomeErr float64\n",
+ "IncomePerCap int64\n",
+ "IncomePerCapErr int64\n",
+ "Poverty float64\n",
+ "ChildPoverty float64\n",
+ "Professional float64\n",
+ "Service float64\n",
+ "Office float64\n",
+ "Construction float64\n",
+ "Production float64\n",
+ "Drive float64\n",
+ "Carpool float64\n",
+ "Transit float64\n",
+ "Walk float64\n",
+ "OtherTransp float64\n",
+ "WorkAtHome float64\n",
+ "MeanCommute float64\n",
+ "Employed int64\n",
+ "PrivateWork float64\n",
+ "PublicWork float64\n",
+ "SelfEmployed float64\n",
+ "FamilyWork float64\n",
+ "Unemployment float64\n",
+ "dtype: object"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "census.dtypes"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Note that not all CSV files contain the column names in the first row. If that is the case, pass the header=None argument to the read_csv function and supply the names yourself:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# We name the columns ourselves by creating a list of column names.\n",
+ "# For example, for a dataset with student name and age:\n",
+ "# column_list = ['name', 'age']\n",
+ "# df = pd.read_csv(path, header=None, names=column_list)"
+ ]
+ },
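A concrete, self-contained illustration of the same call, with the file contents inlined via io.StringIO (a hypothetical headerless students file; with a real file you would pass its path instead):

```python
import io
import pandas as pd

# a headerless CSV with student name and age, inlined for the example
raw = "Alice,21\nBob,23\n"
column_list = ["name", "age"]
df = pd.read_csv(io.StringIO(raw), header=None, names=column_list)
print(df)
```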
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# JSON Format\n",
+ "\n",
+ "JSON (JavaScript Object Notation) is a popular data format documented at www.json.org. A JSON object is a sequence of key:value pairs, conceptually similar to a Python dictionary or a database record. To read a JSON dataset into a pandas DataFrame, you can use this code:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "#df = pd.read_json(path)"
+ ]
+ },
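A self-contained sketch of read_json, with a small JSON document inlined through io.StringIO (hypothetical student records, not part of the lesson's datasets):

```python
import io
import pandas as pd

raw = '[{"name": "Ana", "age": 21}, {"name": "Ben", "age": 23}]'
df = pd.read_json(io.StringIO(raw))
print(df)
```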
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Sometimes our data might be nested. To flatten the data, we can use the json_normalize function provided by pandas. Here is an example of flattening an online JSON file containing information about different Pokemon.\n",
+ "\n",
+ "Since we are using a file that is stored online rather than saved locally, we will load the data from the web using the urllib library and then parse the JSON object using the json library.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "import json\n",
+ "from urllib.request import urlopen\n",
+ "import pandas as pd\n",
+ "from pandas import json_normalize\n",
+ "\n",
+ "url = 'https://raw.githubusercontent.com/Biuni/PokemonGO-Pokedex/master/pokedex.json'\n",
+ "#In Python3 we will need to decode the data as well, hence the use of the decode function\n",
+ "url_read = urlopen(url).read().decode()\n",
+ "url_json = json.loads(url_read)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To confirm that we are flattening at the right level, let's first look at the data:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "{'id': 1,\n",
+ " 'num': '001',\n",
+ " 'name': 'Bulbasaur',\n",
+ " 'img': 'http://www.serebii.net/pokemongo/pokemon/001.png',\n",
+ " 'type': ['Grass', 'Poison'],\n",
+ " 'height': '0.71 m',\n",
+ " 'weight': '6.9 kg',\n",
+ " 'candy': 'Bulbasaur Candy',\n",
+ " 'candy_count': 25,\n",
+ " 'egg': '2 km',\n",
+ " 'spawn_chance': 0.69,\n",
+ " 'avg_spawns': 69,\n",
+ " 'spawn_time': '20:00',\n",
+ " 'multipliers': [1.58],\n",
+ " 'weaknesses': ['Fire', 'Ice', 'Flying', 'Psychic'],\n",
+ " 'next_evolution': [{'num': '002', 'name': 'Ivysaur'},\n",
+ " {'num': '003', 'name': 'Venusaur'}]}"
+ ]
+ },
+ "execution_count": 8,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "url_json['pokemon'][0]"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "It looks like there is one key in this dictionary with many nested values; therefore, flattening at this level will not produce great results. We can see this by looking at the keys in this dictionary:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "dict_keys(['pokemon'])"
+ ]
+ },
+ "execution_count": 9,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "url_json.keys()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Therefore, we will flatten one level down into the nesting of this JSON object."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ " avg_spawns candy candy_count egg height id \\\n",
+ "0 69.0 Bulbasaur Candy 25.0 2 km 0.71 m 1 \n",
+ "1 4.2 Bulbasaur Candy 100.0 Not in Eggs 0.99 m 2 \n",
+ "2 1.7 Bulbasaur Candy NaN Not in Eggs 2.01 m 3 \n",
+ "3 25.3 Charmander Candy 25.0 2 km 0.61 m 4 \n",
+ "4 1.2 Charmander Candy 100.0 Not in Eggs 1.09 m 5 \n",
+ "\n",
+ " img multipliers name \\\n",
+ "0 http://www.serebii.net/pokemongo/pokemon/001.png [1.58] Bulbasaur \n",
+ "1 http://www.serebii.net/pokemongo/pokemon/002.png [1.2, 1.6] Ivysaur \n",
+ "2 http://www.serebii.net/pokemongo/pokemon/003.png None Venusaur \n",
+ "3 http://www.serebii.net/pokemongo/pokemon/004.png [1.65] Charmander \n",
+ "4 http://www.serebii.net/pokemongo/pokemon/005.png [1.79] Charmeleon \n",
+ "\n",
+ " next_evolution num \\\n",
+ "0 [{'num': '002', 'name': 'Ivysaur'}, {'num': '0... 001 \n",
+ "1 [{'num': '003', 'name': 'Venusaur'}] 002 \n",
+ "2 NaN 003 \n",
+ "3 [{'num': '005', 'name': 'Charmeleon'}, {'num':... 004 \n",
+ "4 [{'num': '006', 'name': 'Charizard'}] 005 \n",
+ "\n",
+ " prev_evolution spawn_chance spawn_time \\\n",
+ "0 NaN 0.690 20:00 \n",
+ "1 [{'num': '001', 'name': 'Bulbasaur'}] 0.042 07:00 \n",
+ "2 [{'num': '001', 'name': 'Bulbasaur'}, {'num': ... 0.017 11:30 \n",
+ "3 NaN 0.253 08:45 \n",
+ "4 [{'num': '004', 'name': 'Charmander'}] 0.012 19:00 \n",
+ "\n",
+ " type weaknesses weight \n",
+ "0 [Grass, Poison] [Fire, Ice, Flying, Psychic] 6.9 kg \n",
+ "1 [Grass, Poison] [Fire, Ice, Flying, Psychic] 13.0 kg \n",
+ "2 [Grass, Poison] [Fire, Ice, Flying, Psychic] 100.0 kg \n",
+ "3 [Fire] [Water, Ground, Rock] 8.5 kg \n",
+ "4 [Fire] [Water, Ground, Rock] 19.0 kg "
+ ]
+ },
+ "execution_count": 10,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "pokemon = json_normalize(url_json['pokemon'])\n",
+ "pokemon.head()"
+ ]
+ },
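The same flattening works on any nested records; a tiny sketch with hypothetical data showing how json_normalize turns nested keys into dotted column names:

```python
import pandas as pd

records = [
    {"name": "Bulbasaur", "stats": {"height": 0.71, "weight": 6.9}},
    {"name": "Ivysaur", "stats": {"height": 0.99, "weight": 13.0}},
]
flat = pd.json_normalize(records)
print(flat.columns.tolist())  # ['name', 'stats.height', 'stats.weight']
```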
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Dataset Generator\n",
+ "\n",
+ "Instead of finding, downloading, and reading an appropriate dataset to test your workflow, you can also create a synthetic dataset tailored to your needs using Scikit-learn. For instance, you can use sklearn.datasets.make_classification() to generate a synthetic dataset to test your classification workflow:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn import datasets\n",
+ "\n",
+ "# n_classes must be passed as a keyword; the third positional parameter is n_informative\n",
+ "# feature_vectors, targets = datasets.make_classification(n_samples=..., n_features=..., n_classes=...)"
+ ]
+ },
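A runnable version of the generator call above (argument names per scikit-learn's make_classification; note that n_classes must be given as a keyword, since the third positional parameter is n_informative):

```python
from sklearn.datasets import make_classification

# generate 200 instances with 20 features and 2 classes
X, y = make_classification(n_samples=200, n_features=20, n_classes=2, random_state=0)
print(X.shape, y.shape)  # (200, 20) (200,)
```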
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.8"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/your-code/.ipynb_checkpoints/Supervised Learning with Scikit-Learn-checkpoint.ipynb b/your-code/.ipynb_checkpoints/Supervised Learning with Scikit-Learn-checkpoint.ipynb
new file mode 100755
index 0000000..a4d5e23
--- /dev/null
+++ b/your-code/.ipynb_checkpoints/Supervised Learning with Scikit-Learn-checkpoint.ipynb
@@ -0,0 +1,1004 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Supervised Learning with Scikit-Learn\n",
+ "\n",
+ "\n",
+ "**Lesson Goals**\n",
+ "\n",
+ "This lesson will serve as an introduction to supervised learning using Scikit-learn. Two important algorithms will be covered along with implementation and examples.\n",
+ "\n",
+ "\n",
+ "**Introduction**\n",
+ "\n",
+ "Supervised learning is an extremely important part of machine learning, because a large portion of machine learning algorithms are used for classification and regression. Scikit-learn has implementations of a large number of supervised learning algorithms. In this lesson we will explore two algorithms in depth.\n",
+ "\n",
+ "\n",
+ "**Linear Regression**\n",
+ "\n",
+ "Definition\n",
+ "\n",
+ "Linear regression is one of the most used models in statistics. The general idea behind this model is that we have predictor (or independent) variables and one or more response (also known as target or dependent) variables. We would like to predict our response variable using a linear combination of the predictor variables. Typically, for a set of predictor variables $X_1, X_2, \\ldots, X_n$ and a response variable $Y$, we construct the following model:\n",
+ "\n",
+ "$$Y = \\beta_0 + \\beta_1 X_1 + \\beta_2 X_2 + \\cdots + \\beta_n X_n$$\n",
+ "\n",
+ "Where $\\beta_0, \\beta_1, \\ldots, \\beta_n$ are constants that we compute. We find the optimal values of these constants for each model based on the data, and then generate predictions using the fitted model. The difference between the observed values and the predicted values is called the error (or residual). Our goal is to minimize the error.\n",
+ "\n",
+ "**Linear Regression with Scikit-learn**\n",
+ "\n",
+ "Linear regression in scikit-learn is performed with the LinearRegression class from the linear_model submodule. To demonstrate a linear model with scikit-learn, we will use the beer dataset.\n",
+ "\n",
+ "First we import the dataset using Pandas."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 1,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ " beer tpc ma dsa asa orac rp mca\n",
+ "0 1 148.23 13.37 0.66 0.81 3.81 0.45 10.65\n",
+ "1 2 160.38 10.96 0.63 0.64 2.85 0.41 15.47\n",
+ "2 3 170.41 9.22 0.62 0.81 3.34 0.48 15.70\n",
+ "3 4 208.65 9.65 0.90 1.01 3.34 0.50 76.65\n",
+ "4 5 146.03 11.72 0.64 0.90 3.18 0.47 9.39"
+ ]
+ },
+ "execution_count": 1,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "import pandas as pd\n",
+ "from sklearn.linear_model import LinearRegression\n",
+ "\n",
+ "beer = pd.read_csv('../lager_antioxidant_reg.csv')\n",
+ "beer.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The dataset contains 7 variables:\n",
+ "\n",
+ "- tpc - Total phenolic content\n",
+ "- ma - Melanoidin content\n",
+ "- dsa - DPPH radical scavenging activity\n",
+ "- asa - ABTS radical cation scavenging activity\n",
+ "- orac - Oxygen radical absorbance activity\n",
+ "- rp - Reducing power\n",
+ "- mca - Metal chelating activity\n",
+ "\n",
+ "The next step for scikit-learn is to separate the dataset into two parts: the predictor variables and the response variable. In this case we would like to predict the level of total phenolic content using the remaining variables."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 2,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "x_columns = [col for col in beer.columns.values if col != \"tpc\"]\n",
+ "beer_x = beer[x_columns]\n",
+ "beer_y = beer[\"tpc\"]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 3,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "18.83038391314807"
+ ]
+ },
+ "execution_count": 3,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "beer_model = LinearRegression()\n",
+ "# fit the model to the data\n",
+ "beer_model.fit(beer_x, beer_y)\n",
+ "# now we print the model intercept\n",
+ "beer_model.intercept_"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 4,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "array([ 5.84731786e-02, 1.28827809e+00, 1.27650959e+02, -6.14737240e-01,\n",
+ " -1.09375291e+00, 7.35403422e+01, 3.76892085e-01])"
+ ]
+ },
+ "execution_count": 4,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "beer_model.coef_"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 5,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.8219280156188545"
+ ]
+ },
+ "execution_count": 5,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# score returns the coefficient of determination, or R squared.\n",
+ "# This number tells us what proportion of the variation in the data is explained by the model.\n",
+ "beer_model.score(beer_x, beer_y)"
+ ]
+ },
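Once fitted, a LinearRegression model generates predictions through predict(); a self-contained sketch on synthetic data (the beer CSV itself is not included here), showing that the fit recovers known coefficients:

```python
import numpy as np
from sklearn.linear_model import LinearRegression

# synthetic data following y = 3*x + 2 with a little noise
rng = np.random.RandomState(0)
X = rng.rand(50, 1)
y = 3 * X[:, 0] + 2 + 0.01 * rng.randn(50)

model = LinearRegression()
model.fit(X, y)
# coef_ and intercept_ should be close to 3 and 2
pred = model.predict([[0.5]])
print(model.coef_[0], model.intercept_, pred[0])
```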
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "What these coefficients mean is that our fitted linear model is:\n",
+ "\n",
+ "tpc = 18.830 + 0.058 * beer + 1.288 * ma + 127.651 * dsa + (-0.615) * asa + (-1.094) * orac + 73.540 * rp + 0.377 * mca\n",
+ "\n",
+ "Typically, we perform a few diagnostic tests to ensure that a linear model is an appropriate choice for the data. The model assumes that:\n",
+ "\n",
+ "1. The predictor variables are linearly independent.\n",
+ "2. There is a linear relationship between predictors and response.\n",
+ "3. The errors have a constant variance.\n",
+ "4. The errors are normally distributed.\n",
+ "\n",
+ "As far as testing assumptions, we will focus on the last two. We will plot the residuals vs. the fitted values to diagnose a problem with assumption number 3. A model that meets this assumption shows a random pattern of points in this plot, meaning there is no trend in the variance of the residuals.\n",
+ "\n",
+ "This plot exists in the yellowbrick library. We will install this library and then use our existing linear model to plot the residual vs. fit graph.\n",
+ "\n",
+ "#!pip install yellowbrick"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 6,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "0.8219280156188545\n"
+ ]
+ },
+ {
+ "data": {
+ "text/plain": [
+ ""
+ ]
+ },
+ "metadata": {},
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "from yellowbrick.regressor import ResidualsPlot\n",
+ "\n",
+ "visualizer = ResidualsPlot(beer_model, hist=False)\n",
+ "visualizer.fit(beer_x, beer_y) # Fit the training data to the model\n",
+ "print (visualizer.score(beer_x, beer_y)) \n",
+ "visualizer.poof()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We see that, except for one outlier, we have a fairly random pattern, so the assumption is met.\n",
+ "\n",
+ "Now we will look at the 4th assumption. To examine the distribution of the residuals, we can plot a normal QQ plot of the residuals, which compares them with a theoretical normal distribution. If the plot of the actual vs. the theoretical quantiles produces a linear pattern, the residuals are approximately normally distributed.\n",
+ "\n",
+ "To do this, we use the statsmodels library.\n",
+ "\n",
+ "#!pip install patsy\n",
+ "#!pip install statsmodels"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 7,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAYYAAAEGCAYAAABhMDI9AAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAH7FJREFUeJzt3XmcVPWZ7/FPA9IqRsAlQFDT7fYk0BeDHY0oRkk7GL3kmtFEGcXEUWIWnauvXCM6cZ9EXKMzY67RYNQRtyRmGdxCxERFiMOtkEhDfFwCiREkZExj3Jqt7h/nFNQpqk6f6q7lVPf3/Xrxsvt3Tp3zcNTz1G9vymaziIiI5AyqdwAiIpIuSgwiIhKhxCAiIhFKDCIiEjGk3gGUK5PJNAOHAGuAzXUOR0SkUQwGxgBL2tvbu+NObLjEQJAUnql3ECIiDepIYGHcCY2YGNYAHHjggQwdOhSAzs5O2tra6hpUTxRjZSjGykh7jGmPDxovxg0bNvDiiy9C+A6N04iJYTPA0KFDaW5u3lqY/3NaKcbKUIyVkfYY0x4fNGyMPTbBq/NZREQilBhERCRCiUFERCKUGEREJKIRO59FRKSIB5au5JoFnaxYu55xo4ZzSusw2tvLv44Sg4hIP/DA0pWcNnfb9IRla7pYtqaL1n1XMn1ia1nXUlOSiEg/cM2CzqLl1y5YXva1lBhERPqBFWvXlyjvKvtaSgwiIlX2wNKVfOSGeQz92lw+csM8Hli6suL3GDdqeInyEWVfS4lBRKSKcm3/y9Z0sXlLlmVrujht7sKKJ4eLOoovzzGrY3zZ11Lns4hIjMKRPhd1tJXVmRvX9l9up3Cc3LWuXbCcFWu7GDdqBCe37tyreygxiIiUUGykT+73AxJeo5Jt/z2ZPrE1kggymUyvrqOmJBGREiox0qeSbf+1osQgIlJCJb7tV7Ltv1bUlCQiA0o5fQbjRg1n2Zrtk0A53/aLtf3P6hhf0f6FSqtLYjCz6wh2ERoCzAaWAPcQbD23Bjjd3WO3nhMRyUn6so/rMyh2/kUdbZHzc2Z1jIctbySOr7DtP+1q3pRkZlOANnefBHwSuBm4Cvi2ux8JvAycWeu4RKQxlTMctNw+g+kTW7l3xmQmjBnJkEFNTBgzkntnTG6ol3xv1KPG8DTwX+HPXcAw4GjgS2HZPOAC4NaaRyYiDaec4aC96TNotG/7ldCUzWbrdnMzO5ugSelYd39/WLYfcI+7H17sM5lMpgWo/LRBEWlIk+5fweYir7HBTbD4H8ZFyk599BVe7tq+lXr/Ec3cd/x+1QoxbVrb29tXxZ1Qt85nMzsBOAuYCryUd6gpyefb2tq27mWayWRo783asjWkGCtDMVZG2mMsJ75xv1hdtIN4/OiR213jykG7Fe0zuHLaIbSXWStI+zOEaIzd3d10dhavXRWqy3BVMzsW+DpwnLuvB94ys53Cw2OB1fWIS0QaTznDQQdqn0G5al5jMLPhwPXAMe6e69Z/AjgJmBv+8/FaxyUijanc4aADsc+gXPVoSjoF2AP4vpnlyj4PzDGzLwJ/AO6uQ1wikkJJhqLqZV9ZNU8M7n47cHuRQ39X61hEpL56eumXO+9AKkMzn0Wkqkq9/JMsUFerlUklSolBRKom7uUf99KfM2UMUNuVSWUbLaInIlUT9/JP8tJvxJVJ+wMlBhFJJG57ylLH4l7+SV76jbgyaX+gpiQR6VFckxBQ8ljc6qSzOsb3uEBdI65M2h8oMYhIj+KahLIUX1bn2gXLY1cnjXvpZzLbVi7VUNTaU2IQkR7FNQmVWm1txdquHr/x66WfTkoMIhJRbHhpXJNQlmzsZjZ6+TcedT6LyFal9jY4ar9RRc+f1TFeHcT9kGoMIrJVqb6Ep1/5M/fOmBzbCawO4v5DiUFEtorrS4hrElJzUf+ixCAyQJXblyADh/oYRAag+avWl92XIAOHagwiDarY
N36g5IJ1+eV/Xv9W0Wsm6UuQ/k+JQSTlSiWAuJnI+WWLV63jloUeKS+lp74EGRiUGERSrNRSFHuP2DnxNeb86uXE56ovQUB9DCKpVmr46Ktd7yS+xnubNic+V30JAkoMIqlWavhoOXYcMrho+T4jhjFhzEiGDGpiwpiR3DtjspqQBFBTkkiqlRo+us+IYfyx6+1E15h52P6RPoac2dMmKhFIUUoMIilS2NF81H6jiiaG2dMmAtvPNi5WNn1iK5Na9oyUn9y6s5KClKTEIJISxTqal63p4tzJxtOv/Lnk6qSFSpXll2cymSr8DaS/UGIQSYm4dYqWXjCtxtHIQKbOZ5GU0Mb3khZKDCIpoY3vJS2UGERSQvsaSFqoj0EkJbTxvaSFEoNIimidIkkDNSWJiEiEagwidVJs1VTVFiQNlBhE6qDUqqlQfIKaSC2pKUmkDkpNZrt2wfIaRyKyPdUYRGqgsNlo+evFJ61pMpukgRKDSJUVazYqRZPZJA1SkxjM7CbgMCALnOfuS+ockkivzF+1nrN+MW9r7aDr3Q2JP6vJbJIGqUgMZnYUcIC7TzKzDwPfAybVOSyRHhVbJvuWRa9tPR5XOxjUBG2jR2oym6ROKhID0AH8BMDdf2dmI81sV3d/s85xiWxVNAnkbYCTWyY7qbbRI7VqqqRSUzabrXcMmNntwCPu/tPw92eAs9z9xcJzM5lMC7CythHKQDd/1XouyasJVMI3Dh/L1JbiC+eJVFFre3v7qrgT0lJjKNTU0wltbW00NzcDwaYj7e3tVQ+qLxRjZdQqxsLaQTn9BMXsM2IYI3Yamppmo7T/u057fNB4MXZ3d9PZWXyYdKG0JIbVwOi83z8ArKlTLDLAlTOKKCntryyNJC0T3OYDnwEws4OB1e7+t/qGJANVqclnSZ184EgmjBnJkEFNTBgzkntnTFZSkIbSY43BzFqAse7+rJl9gWBI6Q3u/rtKBeHui8wsY2aLgC3AOZW6tki5Su2kVkyx/ZgP2PJG6psYROIkaUq6E7jQzCYCM4ErgX8D/q6Sgbj7RZW8nkhShf0JH9h1J17teme785L2E2Qyb9QibJGqSZIYsu6+xMyuAm5x90fN7KvVDkykFsrpT1A/gQwUSRLDLmZ2CEEfwFFm1gyMrG5YItWRdLRR2kYRidRSksRwI/Bd4DZ3X2dms4H7qhuWSOWVUztY/eY7rLz0xFqEJZI6PSYGd3/QzH4I7BkWfd3dt1Q3LJG+68tcBC1mJwNZj8NVzewTwMvAL8OiG81M8/gl1XK1g2Vruti8JcuyNV1FO5RL0WJ2MpAlaUq6mmCI6gPh798EHg7/iNRdsS0yy5mLoP4EkagkieEtd19rZgC4+1/MrG/rA4hUSKktMgf1uKjKNhptJBKVJDG8Gy6L3WRmI4HpwHvVDUukuKR7HQwdPJj3Nm3erly1A5GeJUkMXwFuBQ4BXgGeAc6uZlAixTywdGVkhdO4UUUbNm+fFEC1A5EkkoxKehVQZ7PUXTn9Bm2jRzKrYzzXLliu2oFImUomhnBPhJKbNbj7x6sSkQwoxTqOp09sLVpezhpGuSSgRCBSvrgawyU1i0IaSqmXedyxYuVA0Y7jxavWbbcz2mlzF7L3iJ37tIaRiCQTlxjedPel4TwGEaD0KKCccl/0xcz51ctFy5tK7N+kfgORyopLDKcDS4FLixzLAk9WJSJJtVLt/NcuWE62RMtjqRd9qQlnxUYTQbBMxTcOH8v3V76j2oFIFZVMDO6eW0H1Knf/Rf4xM/t0VaOS1CrVzr9ibVfJDqlSL/pSdhxSfKjpuFEjmNoynItPUiVWpJriOp9bgP2AG8JltnP1+B2Am4GfVD06SZ1xo4YXHSY6btQIsmSLHiv1ot9nxDD+2PX2duUzD9s/0vSUM6tjPGzRXgci1Ra3VtIY4BSgBbiMoEnpUuBC4DtVj0xSKddpXGhWx/iSx2Yetn/R8tnTJnLvjMnbbYP5r39/aNFyNRmJ
1EZcU9JiYLGZPeruqh0IwNaXc9z8gGLHJrXsWfIzxV74GmoqUj9JZj6/ZGY3AbuxrTkJd/9c1aKSVIt7aZc6phe9SONIkhi+DzwI/KbKsYiISAokSQyvu/tVVY9EUiNuApuI9H9JEsNjZjaVYKOeTblC7eLWP8VNYFNyEBkYetzBjWBpjMcJltreFP7ZWM2gpH7iJrCJyMCQZHXV7Ta/NbMDqhOO1FvcBDYRGRh6TAxmNhg4FtgjLGoGvk4wv0EaUFwfQtwENhEZGJL0McwFRgIHAQsJ9n++vJpBSfXE9SEcQDCBLf94zqyO8bUKUUTqLEkfw17u/knA3f2zwGSC3dykAfXUhzB9YqtmHYsMcElqDFvPNbMd3f0PZqavjw0qvg9hDKDJaCIDXZIaw5NmdiHBonm/NrNHEn5OUmjcqOElytWHICKBHl/w7n45cKO73wDMBOYQdEZLA4pbBE9EBJKNSjoz/Gd+8SnA96oUk/RBT7OW4xbBy2S0pLWIJOtjODLv56HAx4BnUWKomaRLVCSdtaw+BBGJk2SC2z/m/25mOwN3Vi0iiShniYq4EUdKBCKSVDmjkgBw93fMrPjOKz0wsyHAHQQ7ww0BLnD3hWZ2EHArwV7Sz7v7l3tz/f6onJe9Zi2LSCUk6WN4BiLb+Y4Fnu/l/U4H3nb3yeGQ1zuBQwm2Cj3P3ZeY2X1mdpy7P9bLe/Qr5bzsNWtZRCohSY3hkryfs8CbwG97eb+5wP3hz+uA3c1sKNDq7kvC8nnAMYASA+W97DVrWUQqoSmbzcaeYGZHAx8lWF31eXd/uhI3NrOrgc0ETUiPuPvEsLwDOMvdTy32uUwm0wKsrEQM9TB/1XruWvEXVq7vpnV4M2eM24OpLcXnFuTOv2TRa9uVf+PwsUU/V+71RWTAaW1vb18Vd0LJGoOZ7Q48BAwDnguLTzezDcDx7v43M/uiu99W4vMzCeY95Lvc3X9mZucABwOfAvYsOKeJBNra2mhubgYgk8nQ3t6e5GN1k8lkeGnQblyyaMXWspe7urlk0Wu07lt6lFB7O7TuuzJ2j+XC8y8+qfcxNsJzVIx9l/YY0x4fNF6M3d3ddHYW77MsFNeUdD0wz91vzC80s3OBbwFfAM4GiiYGd59DMBkuwszOIkgIn3b3jWa2Dtg975SxwOpE0TeY3o4a0vBSEamluJnPBxcmBQB3vwU4zMweB5Kln5CZ7Qt8CTjR3d8Lr7cReMHMJoennUiwMVC/o1FDItII4moM78Yc6wYeAP6jzPvNJKgdPJo3k3oqcD5wm5kNAp5z9yfKvG5D0KghEWkEcYlhkJmNcfc1+YVmtjewk7vfVe7N3P2fgX8ucmgF0RnWqZR0BnIpGjUkIo0gLjFcDjxuZl8DlhA0O00CrgOuqH5o6VLODORS4tYpEhFJi5KJwd0fN7PNBAliIkHTUidwobs/XKP4UqNSy02oI1lE0i52gpu7/xz4eY1iSTV1HIvIQKENdxLSBjciMlAoMSSkDW5EZKAoe3XVgUodxyIyUCRZXfUggqWyd3H3D5nZpcB8d3+uh4/2O+o4FpGBIElT0i3AmUBuPsODBEtiiIhIP5QkMWx09637L7j7i8Cm6oUkIiL1lCQxbDKzVsLNeszsOBKugCoiIo0nSefzBcBPATOz9cAq4PPVDEpEROqnx8QQNiNNMLM9gW53f7P6YYmISL3EbdRzD9G9nnPlALj756oXloiI1EtcjaFfLn0tIiLx4hbRuzv3s5m1AeMIahDPu7vXIDYREamDHkclmdn1wI+BTwMnEWyy8y/VDkxEROojyaikTwDjwi04MbNmYBFwaTUDExGR+kgyj+F1ohPaNhAMWRURkX4oSY3hL8ASM3uSIJF8HPi9mV0F4O6XVTE+ERGpsSSJ4ffhn5xHqhSLiIikQJIJblfWIhAREUmHJMtuXwxcCOwaFjUBWXcfXM3ARESkPpJ0Pn8O+AgwNPyzQ/hPERHph5L0MSwH/uTum6sd
jIiI1F+SxHA38LyZZcgbturuZ1YtKhERqZskieEm4B7gT1WORUREUiBJYnhZI5NERAaOJInhOTO7EniWaFPSk1WLSkRE6iZJYvh4wT8hWGVViUFEpB9KMsFtSmGZmZ1UnXBERKTekkxw2wc4F9gjLGomWHH1oSrGJSIidZJkgts9wBvAJCAD7AmcXs2gRESkfpIkhk3ufg2w1t2/Dfwv4Jy+3NTMRpnZX83s6PD3g8xskZk9a2a39uXaIiLSN0kSw05mthewxcz2BTYCLX287/VEV2y9GTjP3Y8AhpvZcX28voiI9FKSxHAd0EHwMv8Nwf4Mi3p7QzP7BPA3YFn4+1Cg1d2XhKfMA47p7fVFRKRvmrLZbOKTzWwI8D53/2tvbhYmgZ8DJxDUEu4CXgQecfeJ4TkdwFnufmqxa2QymRZgZW/uLyIitLa3t6+KO6HkqCQz25XgBX1T+PsXgS8DL5vZOe6+Nu7CZjYTmFlQ/BjwXXfvMrNSH22Ku25OW1sbzc3NAGQyGdrb25N8rG4UY2UoxspIe4xpjw8aL8bu7m46OzsTfS5uuOpthHs7m9mBwGzgZGA/4F+B6XEXdvc5wJz8MjN7FhhsZueG1zkU+Adg97zTxgKrE0UvIiIVF9fHsK+7Xxz+/BngB+7+hLvfBozuzc3c/Qh3P8zdDyPYIvQr7v5b4AUzmxyediLweG+uLyIifRdXY3gr7+ejgTvyft9S4TjOB24zs0HAc+7+RIWvLyIiCcUlhiFm9n7gfQST204BMLNdgGF9vbG7n5H38wrgyL5eU0RE+i4uMVwDrAB2Bq5w97+a2U7AQuC7tQhORERqr2Qfg7s/BowBRrv7dWHZu8CF4QxoERHph2IX0XP3jQQznfPL5lc1IhERqaskM59FRGQAUWIQEZEIJQYREYlQYhARkQglBhERiVBiEBGRCCUGERGJUGIQEZEIJQYREYlQYhARkQglBhERiVBiEBGRCCUGERGJUGIQEZEIJQYREYlQYhARkQglBhERiVBiEBGRCCUGERGJUGIQEZEIJQYREYlQYhARkQglBhERiVBiEBGRCCUGERGJUGIQEZEIJQYREYlQYhARkQglBhERiVBiEBGRiCG1vqGZXQDMADYCX3H3JWZ2EHArkAWed/cv1zouEREJ1LTGYGbjgenAR4EvAtPCQzcD57n7EcBwMzuulnGJiMg2ta4xTAO+7+6bgF8DvzazoUCruy8Jz5kHHAM8VuPYREQEaMpmszW7mZndCmwG9gd2AL4KrAMecfeJ4TkdwFnufmqxa2QymRZgZU0CFhHpf1rb29tXxZ1QtRqDmc0EZhYUjwIeB44DjgDmACcUnNOU5PptbW00NzcDkMlkaG9v71O81aYYK0MxVkbaY0x7fNB4MXZ3d9PZ2Znoc1VLDO4+h+DFv5WZXQm84O5ZYKGZtRDUGHbPO20ssLpacYmISLxaD1d9DDgWwMw+BLzq7huBF8xscnjOiQS1ChERqYOaJgZ3/xXwBzNbDNwJnBMeOh+YbWbPAq+4+xO1jEtERLap+TwGd78cuLygbAVwZK1jERGR7Wnms4iIRCgxiIhIhBKDiIhEKDGIiEiEEoOIiEQoMYiISIQSg4iIRCgxiIhIhBKDiIhEKDGIiEiEEoOIiETUfK2kanlg6UquWdDJirXrGTdqOBd1tDF9Ymu9wxIRaTj9IjHMX7WeSxat2Pr7sjVdnDZ3IYCSg4hImfpFU9JdK/5StPzaBctrHImISOPrF4lh5fruouUr1nbVOBIRkcbXLxJD6/DmouXjRo2ocSQiIo2vXySGM8btUbR8Vsf4GkciItL4+kVimNoynHtnTGbCmJEMGdTEhDEjuXfGZHU8i4j0Qr8YlQTB6CMlAhGRvusXNQYREakcJQYREYlQYhARkQglBhERiWjEzufBABs2bIgUdncXn+SWJoqxMhRjZaQ9xrTHB40VY947c3BPn2nKZrNVDKnyMpnMZOCZeschItKgjmxv
b18Yd0Ij1hiWAEcCa4DNdY5FRKRRDAbGELxDYzVcjUFERKpLnc8iIhKhxCAiIhFKDCIiEqHEICIiEUoMIiIS0YjDVTGzIcAdwH4Ef4cL3H1hwTmnAecDW4Db3f2OOsR5FPAD4Ex3f7jI8Y3As3lFHe5e0yG4CWKs23M0sx2Au4APEgxN/kd3/33BOXV7hmZ2E3AYkAXOc/cleceOAa4miPtRd/+XWsRUZoyrgFfZNuz7NHd/rQ4xtgE/BW5y91sKjqXlOcbFuIp0PMfrCIbyDwFmu/uP8o6V9RwbMjEApwNvu/tkMxsP3AkcmjtoZsOAy8KyDcASM/uxu79RqwDNbD/gq0RfWoXWu/vRtYloez3FmILneCrQ5e6nmdlUYDZwSsE5dXmGYUI9wN0nmdmHge8Bk/JO+TfgWOA14Ckze8jdV6QsRoDj3P2tWsaVL/xv7N+BBSVOScNz7ClGqP9znAK0hf+udweWAj/KO6Ws59ioTUlzCV5oAOuA3QuOfwxY4u7r3f1dghffETWMD4IJeCcC62t833L0FGO9n2MH8OPw5ydqfO+edAA/AXD33wEjzWxXADPbF3jD3V919y3Ao+H5qYkxRbqB44HVhQdS9BxLxpgiTwOfDX/uAoaZ2WDo3XNsyBqDu28ENoa/ng/cV3DKaIKEkfNnghl/NePu7wCYWdxpO5rZfQRNJQ+5+7dqEVtOghjr/Ry33t/dt5hZ1syGunv+Qln1eoajgUze7+vCsjcp/tz2q1Fc+eJizPmOmbUAC4GL3b2mM17dfROwqcR/g6l4jj3EmFPv57gZeDv89SyC5qJc01bZzzH1icHMZgIzC4ovd/efmdk5wMHAp3q4TFNVggvFxdjDRy8gqP1kgafN7Gl3/38pizFf1Z5jifg+luD+NXuGPYh7NlX9768MhXFcBjwOvEFQszgJ+GGtgypDWp5jodQ8RzM7gSAxTI05rcfnmPrE4O5zgDmF5WZ2FkFC+HRYg8i3miBL5owFflXrGBN87ju5n81sAfA/gKq81HoZY82eY7H4zOyu8P6/DTuimwpqCzV9hgUKn80HCJrmih0bS32aIeJixN3/I/ezmT1K8OzSlBjS8hxjpeU5mtmxwNeBT7p7fvNw2c+xIfsYwjazLwEnuvt7RU55DjjEzEaY2S4EbdOpWpHVAveZWVM4yuoIYHm94ypQ7+c4n23tpp8CfpF/sM7PcD7wmTCOg4HV7v43AHdfBexqZi1hXNPC82utZIxmNtzMfmZmQ8NzjwI66xBjSSl6jiWl5Tma2XDgemBa4eCQ3jzHhlxEz8yuBqYDf8wrnkrQIf2Uuy82s88AXyNoYvh3d7+3xjH+z/D+HyJo31vj7lPN7KK8GK8FPkEwFPQ/3f2bKYyxbs8x7DybAxxA0AF4hru/mpZnaGbXAB8P730OMJFglNSPzezjwLXhqQ+5+w21iquMGM8DPg+8SzCK5Z9q3TZuZu3AjUALQb/ha8B/AivT8hwTxJiG53g2cAXwYl7xk8Cy3jzHhkwMIiJSPQ3ZlCQiItWjxCAiIhFKDCIiEqHEICIiEUoMIiISkfoJbtL/hatCHgrsSDCccnF4aOsKuu5+SQ3jmeHuc81sNMEQ3c/2+KHtr5EFdgiXU8gvPxC4mW0TjtYBs9z9N32Nu+A+HwA+5O5PmtkZwGB3v6NUXCL5lBik7tz9QoDcWjP5q6Wa2RW1jCWcO3EZMNfdX2fbBLtKXHtPgsUAT3P3Z8Kyo4B5ZvYRd//vSt0LmAJ8GHjS3e+q4HVlAFBikEawl5n9kGAi3i/d/VzYOtHxCGAn4CngQnfPmtklBLM7NxLMQv3fBMsAzAOWAZ3ufnWxzxMsTf1BM5sPnE2QqPYys/cTLO8+nGBN+3PcvdPMrmLbSpV/AmYUWaIl53zg/lxSAHD3p8zsB8B5wGX53+jDb/rHuPsMM/v7ML73CP6/Pd3dV5nZLwmSzeHAgcDlwCLgm0CTmb0B7EpBrSucqfttYH/g
fWFcN1qw78DtBBMKdwaucvdHev5XJP2J+hikEexPMNP9o8DnzWx3M/ssMNbdj3L3Q8NzppnZJIJFzI509yOBPQn2dYDgG/SVYVIo+nmCF+s6dy9chGw2wYqVkwlqFKeHywu8E97rCGAEwZr3pUwE/qtI+eLw7xZnBHCKu08hWDb53Lxju7j78QSLp13o7isJNji6J2a12fMIlsiYQrBY4XQzmwB8AfhpWP4ptl/SXgYA1RikESzMW/r4vwleklOASeE3Zgi+ybcS9Ek8lfet/ZfAIQQ1gjfc3cPyUp9fViKGjwHfguBbfng9zGwz8IyZbSKo0ewR8/d4j9Jfxoqt+ZVvLXC3mQ0i6J9YnHcs93f4A7BbD9fJmUJQEzsq/H1HguT4EHCXmX0QeBi4J+H1pB9RYpBGUNhR2kTQ1HF74Zov4bo1hefm1n3JX5m11OdbSsSQpeClbmZHAGcCH3X3t8PmrjidBDuo/aCg/BDg+SLnDw3vswPwIHCwu79kZucSrWHkP5+kS1N3EzQTbRdz2JzUAZwBzGBbjUsGCDUlSaNaCJwYNudgZpeZ2QEEy4JPCV+mELzgii0VXurzW4Adipy/CPhkeO5kM7sbGAWsCpPCBwn2Vm6Oifn/AqdZsA0j4bUOJ9iu9Naw6E1g7/Dn3HnvC+NaZWY7Aif0cB9i/h45C4GTwxgGmdm3zGw3M/snYC93n0fQNFW4J4YMAKoxSKP6EcGLeFHYnPNr4PfhN+oHCJp3cuX3A/sk+TzBN+7XzSwDfC7v/EuBO80stynUueH5/8fMFhIs930FQQdyZHnwHHd/3YK9q2+zYB/hLMEKwce7+9rwtGuA+Wb2EvBbYG93f8OCXeqWEDQXXQ/cE/aTlPIM8KCZbWDbJvX5vg2MN7PFwGDg4fA+LwD3m9mbYflFMfeQfkqrq4rUQVg7eRoY5+5/rXc8IvnUlCRSB+7+EsFIp0Vmdlu94xHJpxqDiIhEqMYgIiIRSgwiIhKhxCAiIhFKDCIiEqHEICIiEf8fhSEAL0u7zgUAAAAASUVORK5CYII=\n",
+ "text/plain": [
+ "
"
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "import statsmodels.api as sm\n",
+ "\n",
+ "predictions = beer_model.predict(beer_x)\n",
+ "residuals = beer_y - predictions\n",
+ "plot=sm.qqplot(residuals)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Since we have a linear relationship, we can assume that the residuals are normally distributed.\n",
+ "\n",
+ "\n",
+ "**Logistic Regression**\n",
+ "\n",
+ "While linear regression is used for predicting a numeric variable, logistic regression is used for classification. Logistic is used to explain a relationship between the predictor variables and a response variable(s) that can take values of either 0 or 1. Logistic regression does not need to satisfy the same assumptions as linear regression. The only assumptions we need to satisfy are that the predictor variables are independent of each other and not correlated with each other. We also need the response variable to be binary (meaning, have only two possible values) and the residuals to be independent of each other.\n",
+ "\n",
+ "Our regression equation is:\n",
+ " \n",
+ "\n",
+ "\n",
+ "Where p̂ (pronounced p hat) is the predicted probability of success. Notice that we have our regression equation in the exponent.\n",
+ "Logistic Regression with Scikit-learn\n",
+ "\n",
+ "Here we use the linear_model submodule from scikit-learn as well. We will be applying the logistic regression model to the famous Titanic dataset from Kaggle.\n",
+ "\n",
+ "Before we apply the model to the data, we must do some essential munging.\n",
+ "\n",
+ "First, let's look at the data using the head function. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 8,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "
\n",
+ "\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
PassengerId
\n",
+ "
Survived
\n",
+ "
Pclass
\n",
+ "
Name
\n",
+ "
Sex
\n",
+ "
Age
\n",
+ "
SibSp
\n",
+ "
Parch
\n",
+ "
Ticket
\n",
+ "
Fare
\n",
+ "
Cabin
\n",
+ "
Embarked
\n",
+ "
\n",
+ " \n",
+ " \n",
+ "
\n",
+ "
0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
3
\n",
+ "
Braund, Mr. Owen Harris
\n",
+ "
male
\n",
+ "
22.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
A/5 21171
\n",
+ "
7.2500
\n",
+ "
NaN
\n",
+ "
S
\n",
+ "
\n",
+ "
\n",
+ "
1
\n",
+ "
2
\n",
+ "
1
\n",
+ "
1
\n",
+ "
Cumings, Mrs. John Bradley (Florence Briggs Th...
\n",
+ "
female
\n",
+ "
38.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
PC 17599
\n",
+ "
71.2833
\n",
+ "
C85
\n",
+ "
C
\n",
+ "
\n",
+ "
\n",
+ "
2
\n",
+ "
3
\n",
+ "
1
\n",
+ "
3
\n",
+ "
Heikkinen, Miss. Laina
\n",
+ "
female
\n",
+ "
26.0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
STON/O2. 3101282
\n",
+ "
7.9250
\n",
+ "
NaN
\n",
+ "
S
\n",
+ "
\n",
+ "
\n",
+ "
3
\n",
+ "
4
\n",
+ "
1
\n",
+ "
1
\n",
+ "
Futrelle, Mrs. Jacques Heath (Lily May Peel)
\n",
+ "
female
\n",
+ "
35.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
113803
\n",
+ "
53.1000
\n",
+ "
C123
\n",
+ "
S
\n",
+ "
\n",
+ "
\n",
+ "
4
\n",
+ "
5
\n",
+ "
0
\n",
+ "
3
\n",
+ "
Allen, Mr. William Henry
\n",
+ "
male
\n",
+ "
35.0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
373450
\n",
+ "
8.0500
\n",
+ "
NaN
\n",
+ "
S
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
"
+ ],
+ "text/plain": [
+ " PassengerId Survived Pclass \\\n",
+ "0 1 0 3 \n",
+ "1 2 1 1 \n",
+ "2 3 1 3 \n",
+ "3 4 1 1 \n",
+ "4 5 0 3 \n",
+ "\n",
+ " Name Sex Age SibSp \\\n",
+ "0 Braund, Mr. Owen Harris male 22.0 1 \n",
+ "1 Cumings, Mrs. John Bradley (Florence Briggs Th... female 38.0 1 \n",
+ "2 Heikkinen, Miss. Laina female 26.0 0 \n",
+ "3 Futrelle, Mrs. Jacques Heath (Lily May Peel) female 35.0 1 \n",
+ "4 Allen, Mr. William Henry male 35.0 0 \n",
+ "\n",
+ " Parch Ticket Fare Cabin Embarked \n",
+ "0 0 A/5 21171 7.2500 NaN S \n",
+ "1 0 PC 17599 71.2833 C85 C \n",
+ "2 0 STON/O2. 3101282 7.9250 NaN S \n",
+ "3 0 113803 53.1000 C123 S \n",
+ "4 0 373450 8.0500 NaN S "
+ ]
+ },
+ "execution_count": 8,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "titanic = pd.read_csv('../titanic.csv')\n",
+ "titanic.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We see that there is a number of columns that convey information that cannot be modeled. Particularly the Name and Ticket columns. We will delete these features from the dataset. Additionally, the PassengerId column contains a number that is simply incremented with every row and contains no information about the data. We will drop this column as well.\n",
+ "\n",
+ "We also see that there are quite a few NaNs in the Cabin column. Let's investigate how many NaNs we have in each column to evaluate how to address the missing data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 9,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "Survived 0\n",
+ "Pclass 0\n",
+ "Sex 0\n",
+ "Age 177\n",
+ "SibSp 0\n",
+ "Parch 0\n",
+ "Fare 0\n",
+ "Cabin 687\n",
+ "Embarked 2\n",
+ "dtype: int64"
+ ]
+ },
+ "execution_count": 9,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "titanic_drop = titanic.drop(columns=['Name', 'Ticket', 'PassengerId'])\n",
+ "titanic_drop.isnull().sum(axis = 0)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can see the NaN count for each column. The Cabin column has 687 NaNs. With so much missing data, we are better off just dropping this column all together.\n",
+ "\n",
+ "We have identified 4 columns for dropping. Let's drop them using the drop function in Pandas. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 10,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "titanic_drop = titanic.drop(columns=['Name', 'Ticket', 'PassengerId', 'Cabin'])"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "To address the remaining missing data, we will drop all rows that contain at least one NaN."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 11,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "
\n",
+ "\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
Survived
\n",
+ "
Pclass
\n",
+ "
Sex
\n",
+ "
Age
\n",
+ "
SibSp
\n",
+ "
Parch
\n",
+ "
Fare
\n",
+ "
Embarked
\n",
+ "
\n",
+ " \n",
+ " \n",
+ "
\n",
+ "
0
\n",
+ "
0
\n",
+ "
3
\n",
+ "
male
\n",
+ "
22.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
7.2500
\n",
+ "
S
\n",
+ "
\n",
+ "
\n",
+ "
1
\n",
+ "
1
\n",
+ "
1
\n",
+ "
female
\n",
+ "
38.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
71.2833
\n",
+ "
C
\n",
+ "
\n",
+ "
\n",
+ "
2
\n",
+ "
1
\n",
+ "
3
\n",
+ "
female
\n",
+ "
26.0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
7.9250
\n",
+ "
S
\n",
+ "
\n",
+ "
\n",
+ "
3
\n",
+ "
1
\n",
+ "
1
\n",
+ "
female
\n",
+ "
35.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
53.1000
\n",
+ "
S
\n",
+ "
\n",
+ "
\n",
+ "
4
\n",
+ "
0
\n",
+ "
3
\n",
+ "
male
\n",
+ "
35.0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
8.0500
\n",
+ "
S
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
"
+ ],
+ "text/plain": [
+ " Survived Pclass Sex Age SibSp Parch Fare Embarked\n",
+ "0 0 3 male 22.0 1 0 7.2500 S\n",
+ "1 1 1 female 38.0 1 0 71.2833 C\n",
+ "2 1 3 female 26.0 0 0 7.9250 S\n",
+ "3 1 1 female 35.0 1 0 53.1000 S\n",
+ "4 0 3 male 35.0 0 0 8.0500 S"
+ ]
+ },
+ "execution_count": 11,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "titanic_missing = titanic_drop.dropna()\n",
+ "titanic_missing.head()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 12,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "\n",
+ "Int64Index: 712 entries, 0 to 890\n",
+ "Data columns (total 8 columns):\n",
+ "Survived 712 non-null int64\n",
+ "Pclass 712 non-null int64\n",
+ "Sex 712 non-null object\n",
+ "Age 712 non-null float64\n",
+ "SibSp 712 non-null int64\n",
+ "Parch 712 non-null int64\n",
+ "Fare 712 non-null float64\n",
+ "Embarked 712 non-null object\n",
+ "dtypes: float64(2), int64(4), object(2)\n",
+ "memory usage: 50.1+ KB\n"
+ ]
+ }
+ ],
+ "source": [
+ "titanic_missing.info()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We now have 712 rows and 8 columns\n",
+ "\n",
+ "As we can see, there is still one more step before we can model the data, we need to create dummy variables out of the Pclass, Sex, and Embarked columns."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 13,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "
\n",
+ "\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
Survived
\n",
+ "
Age
\n",
+ "
SibSp
\n",
+ "
Parch
\n",
+ "
Fare
\n",
+ "
Pclass_2
\n",
+ "
Pclass_3
\n",
+ "
Sex_male
\n",
+ "
Embarked_Q
\n",
+ "
Embarked_S
\n",
+ "
\n",
+ " \n",
+ " \n",
+ "
\n",
+ "
0
\n",
+ "
0
\n",
+ "
22.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
7.2500
\n",
+ "
0
\n",
+ "
1
\n",
+ "
1
\n",
+ "
0
\n",
+ "
1
\n",
+ "
\n",
+ "
\n",
+ "
1
\n",
+ "
1
\n",
+ "
38.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
71.2833
\n",
+ "
0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
\n",
+ "
\n",
+ "
2
\n",
+ "
1
\n",
+ "
26.0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
7.9250
\n",
+ "
0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
0
\n",
+ "
1
\n",
+ "
\n",
+ "
\n",
+ "
3
\n",
+ "
1
\n",
+ "
35.0
\n",
+ "
1
\n",
+ "
0
\n",
+ "
53.1000
\n",
+ "
0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
1
\n",
+ "
\n",
+ "
\n",
+ "
4
\n",
+ "
0
\n",
+ "
35.0
\n",
+ "
0
\n",
+ "
0
\n",
+ "
8.0500
\n",
+ "
0
\n",
+ "
1
\n",
+ "
1
\n",
+ "
0
\n",
+ "
1
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
"
+ ],
+ "text/plain": [
+ " Survived Age SibSp Parch Fare Pclass_2 Pclass_3 Sex_male \\\n",
+ "0 0 22.0 1 0 7.2500 0 1 1 \n",
+ "1 1 38.0 1 0 71.2833 0 0 0 \n",
+ "2 1 26.0 0 0 7.9250 0 1 0 \n",
+ "3 1 35.0 1 0 53.1000 0 0 0 \n",
+ "4 0 35.0 0 0 8.0500 0 1 1 \n",
+ "\n",
+ " Embarked_Q Embarked_S \n",
+ "0 0 1 \n",
+ "1 0 0 \n",
+ "2 0 1 \n",
+ "3 0 1 \n",
+ "4 0 1 "
+ ]
+ },
+ "execution_count": 13,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "titanic_with_dummies = pd.get_dummies(titanic_missing, columns=['Pclass', 'Sex', 'Embarked'], drop_first=True)\n",
+ "titanic_with_dummies.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "At this point, we can perform the logistic regression. We start, as before, by separating the data into predictor and response variables. Then we create a model. We look at the r squared for the model using the score function. This number explains what percent of the variation in the data is explained by our model. The more variation our model can explain, the better it is at producing predictions.\n",
+ "\n"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 14,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.8047752808988764"
+ ]
+ },
+ "execution_count": 14,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "from sklearn.linear_model import LogisticRegression\n",
+ "\n",
+ "x_columns = [col for col in titanic_with_dummies.columns.values if col != \"Survived\"]\n",
+ "titanic_x = titanic_with_dummies[x_columns]\n",
+ "titanic_y = titanic_with_dummies[\"Survived\"]\n",
+ "titanic_model = LogisticRegression(solver='lbfgs', max_iter=400)\n",
+ "titanic_model.fit(titanic_x, titanic_y)\n",
+ "titanic_model.score(titanic_x, titanic_y)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Our model predicts almost 80% of the variation in the data.\n",
+ "\n",
+ "\n",
+ "**ROC Curve**\n",
+ "\n",
+ "The ROC (or Receiving Operator Characteristic) curve is a graph that gives us more information about how well our classification algorithm classifies our data. The goal is to increase the area under the curve as much as possible. If the area under the curve is below the y = x line, this means that our algorithm is worse than a coin flip. Therefore, we must aspire to be at least above that line. However, what we really aspire to is an area of 0.9 or higher.\n",
+ "\n",
+ "This plot utilizes matplotlib. Additionally, we will compute the true positive rate and false positive rate (tpr, fpr) to generate this plot."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 15,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "image/png": "iVBORw0KGgoAAAANSUhEUgAAAXIAAAD4CAYAAADxeG0DAAAABHNCSVQICAgIfAhkiAAAAAlwSFlzAAALEgAACxIB0t1+/AAAADl0RVh0U29mdHdhcmUAbWF0cGxvdGxpYiB2ZXJzaW9uIDMuMC4zLCBodHRwOi8vbWF0cGxvdGxpYi5vcmcvnQurowAAGq9JREFUeJzt3Xt01OWdx/H3TELCHQOC3EUUHsEowihykYuituulri3W7tq67mqPWnrWXtZW19au2+3adlVOqbXqtrvunq2t1opV6wWrVlEQcVAkAl9EBIRwiRCSQMLkMrN/zCSGXGYmyVzym/m8zvH0d3l+v/k+Tvrp02d+F18kEkFERLzLn+0CRESkZxTkIiIepyAXEfE4BbmIiMcVZvLDgsFgMXA2sAdoyuRni4h4WAEwClgbCARCbXdmNMiJhvjKDH+miEiumAe83nZjpoN8D8DkyZMpKirq8sFlZWWUlpamvKjeTH3OD+pzfuhun+vr69myZQvEMrStTAd5E0BRURHFxcXdOkF3j/My9Tk/qM/5oYd97nBKWj92ioh4nIJcRMTjFOQiIh6nIBcR8bikfux0zpUCfwSWmtl9bfZdAPw70Un4Z83shymvUkREOpVwRO6cGwD8HHipkybLgC8Ac4GLnHNTU1eeiIgkksyIPARcDHy37Q7n3ETgoJl9HFt/FlgEbExlkSIimVBZG+KO59dTfbQh5ef2+2Dh0DCBlJ85iSA3s0ag0TnX0e6RQEWr9f3AyYnOWVZWlmx97QSDwW4f61Xqc35Qn3umoSnChgO1NIW7dlxVfRO3v76LfoV+ahu7eHAX9Sk9ntI0fM+pviHIl0yj0tLSbl0UHwwGCQTS8b9nvZf6nB/U55677Zl1/PSVHd0+vrYxzPQxQ4lEIvzksgCThw9OWW0QHZHv3bqpW30OhUJxB8A9DfJyoqPyZmNi20RE4moMR3hn10Fe3FLOva9upKigILmRYCfKq+sAuGnOZE4Y1K9LxxYV+Pm7s09m5OCuHddV+3w96WHnehTkZrbdOTfYOTcB2AVcClydisJEJDccqqvngVVGbf2xd5c/9vY2Pji06ZhtE4cN7PbnnDpiMD/4zDQWTzux2+fwqoRB7pwLAPcAE4AG59xi4CngIzNbDtwE/DbW/FEz25KmWkXEQ5rCYW78/Rr+662tnba5dOpYJg0fxIiBffn2wqkU+HVrS3ck82NnEFgYZ/9rwOwU1iQiHvf4+h3c8dy7WEU1AH0LC/j552cyafiglja7tm3lS4vm4EvTdEM+yfTTD0UkBzSFwxysrT9mW0NTmK8+tpqqugZW74hezPbXp4/ji9MmcNX0Ce3OEazcpRBPEQW5iCRUc7SB5WU7CcUuz7vx92/GbV/g9/GN+VP46WX5dSVOtijIRSShh1Zv4TvPrGu3/QtnjD9mvdDv59sLpxIYNyxTpQkKchFJYH9NXUuI37aolCknDAFg9oThTBw2KN6hkiEKchFhX00ddzz/LodDje32HTjy6bt+v3t+KYP69slkaZIEBblInqs4fJS7X9nIr97s/DJBgF8uPkch3kspyEXyyDMbd/Hsxt3HbHtw9ae3fjxw5SwumTKm3XF9CvwMH9g37fVJ9yjIRTxu074qqo7Wx22zu6qWrz66mqo4T/VbdsXZXD3jJPoXKRa8Rt+YSC+1ce8hfrPuI8LhSKdtVm7b33LNdrIuLx3Hjy6efsy2E0sGKMA9TN+cSA/V1jeyo/JIwnZ3vrCe9eWVHT4Y6mjoKH1f3HXMtua7IpNxyvGD+Pzp4+O26dungJvmTGZEFx8oJb2fglwkCY+v38HWTzoO1tuffbdL5zphUPu55oaGJo5Gjp0eGTGwLwV+H49eM58Cf+d3QA4oKqR05HG6SzKPKchFWqk4fJTGcJgn3tvJb4IfESHCwdp6tn5Sk/DYG2ZPTtjmQjeKKzoYOefj88gldRTkktee2biL
j2PTIg+u3sKGPYda9hX4ffTx+wlHIhT6ffzVlDHcNKfDN2UxY+xQXdUhWaMgl5wVamyiKRzhBSvnP9/8gKY2PxqWV9WycV9Vu+Nmjh/GtNFDuW1RKScO7f7zsUUyRUEuOaOxKcxzm3dTfbSBp97fxePrk3vt18KTT+CGOdFpkVGD+zFv4gnpLFMk5RTkkjPm3fc8b+08cMy2kn5FnDdpJN89v5QzRh3X7hifz0efAr3MQLxNQS6eFolEWPvxAbYfPNwS4ledOYHzJo1kWP9i/rp0HP44V3yI5AIFuXjSwdoQ//bie2w7cJin3//0+utLpo7hka/My2JlIpmnIBdPemDVFn722mYg+tLdf5h5Cn6/j8tLx2W5MpHMU5CLp+ysPMLdr7zPL94wAO69/Cy+NtdpnlvymoJcep1dNfUU7alst/1gbT3n37+iZb3Q7+MfZp6iEJe8pyCXrKmsDfHrNVupa2hq2bZ8w07Wl1fC0/Gfjf3nmy5k3kkjKFSIiyjIJTOqj9ZzqK6BuoZGHn7rQ176YA/BXQc7bX/6qOOY38H13MWFBfzjvFMZVzIgneWKeIqCXNLmhc3lbKmoYm/NUX78Utkx+4oL/QwsLuRIfSMPXjmLk1rdQVm+/UOuvmCuHgIlkiQFufRYJBKhus0LC2obGrns1y+3uy3+y4GJzJ4wnGvPPpm+fQo6PF+wulwhLtIFCnJJaE91LS9u2UOkk/cbfP2JNdTWN3W475zxx/NP551GnwIfiyaN0ssLRNJA/62SDgU/PsCPXy6jsSnMU+/vSnwAcNlpY49Z9wFLzj2VCyaPSkOFItJMQZ6HPjpQE70ypJUbfv8mnxwJxT3uwStndfqCg+ljhnLmmKEpq1FEkqcgzyP3vPI+ZXsP8b9vb+u0zYKTP71SZEjfPjz0xdn0KfAzsKhQl/qJ9FIK8hzT2BRm1fYKLv3Vy/h9Ppp/MzxS39juh8d7Lz/rmPV5E0cwY+ywTJUqIimiIM8x972+mW8/FWxZP3N0SctyUyTC30w/iaumT2DskP4aYYvkiKSC3Dm3FJgFRICbzWxtq31LgC8DTcDbZvaNdBQqyVm3O3qTzXXnnMIt553GpOGDs1yRiKRbwiGZc24BMMnMZgPXActa7RsM3ALMM7NzganOuVnpKlY69155JVf/30p+E/wIgOtnTVKIi+SJZEbki4AnAcxsk3OuxDk32MyqgfrYPwOdc4eB/kDn911Lyt28/C1e2bqXbQcOtzyzZHzJAAJjdQWJSL5IJshHAsFW6xWxbdVmdtQ5dyewDagDfmdmWxKdsKysLFGTTgWDwcSNckxHfW4KR/jqi9spO1AHwKgBffjWjBGcPKSYKUP78e4772S6zJTS95wf1OfU6M6PnS0XEsemVv4ZmAxUAy8756aZ2fp4JygtLaW4uLjLHxwMBgkEAl0+zsta9/lQXT2NTWEAVmzZQ9mBTQD84KIzuOMz07JWY6rl+/ecL9Tn5IVCobgD4GSCvJzoCLzZaGBPbHkKsM3MPgFwzq0EAkDcIJeu++Uq4+t/eKvd9u9deHpOhbiIdF0yQb4CuBN40Dk3Ayg3s5rYvu3AFOdcPzOrA84Cnk1LpXnsUF19S4gvmjSSIf2KAPjcaeP4ylkTs1maiPQCCYPczFY554LOuVVAGFjinLsWqDKz5c65/wBecc41AqvMbGV6S84fq7dX8OqOKmY+8mjLtt9dM5+h/bs+LSUiuSupOXIzu7XNpvWt9j0IPJjKovJFZW2I7z33LlV19e32lVfX8eqH+47Ztv6fLlWIi0g7urMzS5rCYY7//mMJ240bWMR3LjyTz5w6mpOPH5SBykTEaxTkWfBx5RF+8vKnv0D/+aYLmdzBzTuFfh+7tmwkEHCZLE9EPEZBngX/F9zGL1dFL7e/ac5kzjtlZKdtk3sSuIjkMwV5FjTGnkL466vm8JWzTspyNSLidXr8XRaNL+lPgV9fgYj0jFJERMTjFOQZtr+mjppQQ+KG
IiJJ0hx5Bv3rC+u5c8V7Let+X8fvvxQR6QoFeQZEIhG+9ce3eezdHQBcOnUs40sGcM6Jx2e5MhHJBQryNHhzRwUfHTjcsn64vpFlKzcDcGLJAB69Zj59+xRkqzwRyTEK8hR4ZeteHly1hQjR2+5f+mBvh+2uDpzEw1+ai9+vKRURSR0FeQ/sra7j/jeMH/15Q7t940sGcOui0pZ1v8/HJVPGKMRFJOUU5N20eV8Vp/30qZb1IX37sOnWy/ERDe3jB/bNXnEiklcU5EmqrA2xcV9Vy/ovXreW5fsXn8O1Z59McaHmvUUk8xTkCdQ1NHLvXzZyx/Mdv/Qo+K1LOHOMXnQsItmjII/jF69v5h+Xrz1m222t5r1HD+7PtNElmS5LROQYCvI4lm/YCcCw/sXc+dlpfDkwkUF9+2S5KhGRYynIO7G+/CCvbI2+oWfvnVfqahMR6bUU5K28aOXY/moA/vzBHgD69SlAd9KLSG+mII+prA1x6a9ebnlWeLMVN1yAT0kuIr2Yghx44r2dXPW/rxGORDhr3DC+c/5pAAztX8zsCcOzXJ2ISHx5H+S/fMP4xRtGOBJh9onD+f5FZ/CZU0dnuywRkaTldZDft3IzNz8ZvbywX58C/udv5+pN9SLiOXkd5Gt2fgLAjXMmc/fnAvTrk9f/OkTEo/L2DUGRSIR3dh8Eojf5KMRFxKvyNshX2B42xZ6dUlSQt/8aRCQH5G2Cbdp3CIB5E0cwYlC/LFcjItJ9eRnkRxua+PZTQQCuPfuULFcjItIzeRnkR+obW5YvLx2bxUpERHou74L8YG2I/1qzFYArTh9PSf/iLFckItIzSV2q4ZxbCswCIsDNZra21b5xwG+BImCdmd2YjkJTofpoPcO//1jLekm/oixWIyKSGglH5M65BcAkM5sNXAcsa9PkHuAeM5sJNDnnxqe+zNSY8MMnWpYfuHIWd38ukMVqRERSI5kR+SLgSQAz2+ScK3HODTazauecH5gH/E1s/5L0ldpzTZHoA7E23HIZU0cel+VqRERSI5kgHwkEW61XxLZVA8OBGmCpc24GsNLMbkt0wrKysm6UGhUMBhM3aqMxHGHt3iM0NDbhSvpSt/tDgru7XULGdafPXqc+5wf1OTW6czujr83yGOBnwHbgT865S8zsT/FOUFpaSnFx139kDAaDBAJdnw75w3s7uPkvmwAYftzgbp0jW7rbZy9Tn/OD+py8UCgUdwCczFUr5URH4M1GA3tiy58AO8zsQzNrAl4CTutylWlWVdcAwNWBk3jgyllZrkZEJLWSCfIVwGKA2PRJuZnVAJhZI7DNOTcp1jYAWDoKTYULJo9iyglDsl2GiEhKJZxaMbNVzrmgc24VEAaWOOeuBarMbDnwDeDh2A+fG4Cn01mwiIgcK6k5cjO7tc2m9a32bQXOTWVRIiKSvJy/s7OhKcy+mrpslyEikjY5/RDu/TV1jPqXx1vWC/QSZRHJQTk9Ir//jS0ty99cMIWLp4zJYjUiIumRsyPycDjCI+s+AuB318znymknZrkiEZH0yNkR+eL/eZUPD9QAcLpuxxeRHJazQf7Mxl0ALJnrOFXXjotIDsvZqRW/z8e0sSUs+/zMbJciIpJWOTsi96GXKotIfsjJpFuzo4L6pnC2yxARyYicDPK7/7IRgKF6jZuI5IGcC/LK2hBPvLcTgP/+0pwsVyMikn45F+S/enNry/Kgvn2yWImISGbkVJBX1dVz65/WAbDsirMpLizIckUiIumXU0FeWVffsvyVsyZmsRIRkczJqSBvds1ZExnctyjbZYiIZEROBflDq7ckbiQikmNyKsibr1YJjB2W5UpERDInp4Lc7/Nx/IBivj7v1GyXIiKSMTkT5HUNjVhFNX69PEJE8kzOBPl/vPw+gG7NF5G84/mnH977l43c9dIGDocaAfjRxdOzXJGISGZ5OsgrDh/llqeDAJw2cghjhgzg787W9eMikl88HeSPr9/RsrzuW5dSqMfW
ikge8nTyNcTmwx+4cpZCXETylqfTrzrUAMDxA/S4WhHJX54N8sraED94fj0ABbrkUETymGeD/HvPvduyvPCUE7JYiYhIdnk2yJ/btBuAB6+cpQdkiUhe82yQ+30+xh3Xn+tnTcp2KSIiWeXZIBcRkSgFuYiIxyV1Q5BzbikwC4gAN5vZ2g7a3AXMNrOFKa1QRETiSjgid84tACaZ2WzgOmBZB22mAvNTX17HbH8VHx08TCSSqU8UEem9kplaWQQ8CWBmm4AS59zgNm3uAW5PcW2duv8NA8Dv1/XjIiLJTK2MBIKt1iti26oBnHPXAq8C25P90LKysqQLbCsYDLJn334A/nXmCILBYIIjvC8f+tiW+pwf1OfU6M5Ds1qGwc65ocDfAxcAY5I9QWlpKcXFXb+tPhgMEggEGL59DXxQSeCMUqaOPK7L5/GS5j7nE/U5P6jPyQuFQnEHwMlMrZQTHYE3Gw3siS2fDwwHVgLLgRmxH0ZFRCRDkgnyFcBiAOfcDKDczGoAzOxxM5tqZrOAK4B1ZvbNtFUrIiLtJAxyM1sFBJ1zq4hesbLEOXetc+6KtFcnIiIJJTVHbma3ttm0voM224GFPS9JRES6Qnd2ioh4nIJcRMTjFOQiIh6nIBcR8TgFuYiIxynIRUQ8TkEuIuJxCnIREY9TkIuIeJyCXETE4xTkIiIe57kgD4cjfFBRDYDPpzcEiYh4Lshf27aPlz7Yy8zxw5g8fFC2yxERyTrPBXn10QYAvnjmBAr8nitfRCTllIQiIh6nIBcR8TgFuYiIxynIRUQ8TkEuIuJxCnIREY9TkIuIeJyCXETE4xTkIiIepyAXEfE4BbmIiMd5Lsh/vnJztksQEelVPBXkjeEIL2/dC8CEoQOzXI2ISO/gqSBvdsaoEq44fXy2yxAR6RU8GeTDBxZnuwQRkV7Dk0EuIiKfKkymkXNuKTALiAA3m9naVvvOA+4CmgADrjezcBpqFRGRDiQckTvnFgCTzGw2cB2wrE2Th4DFZjYXGAR8NuVViohIp5KZWlkEPAlgZpuAEufc4Fb7A2a2K7ZcAQxLbYkiIhJPMlMrI4Fgq/WK2LZqADOrBnDOjQIuAr6f6IRlZWVdLrS1mpoagsFg4oY5Ip/62kx9zg/qc2okNUfehq/tBufcCOBp4GtmdiDRCUpLSyku7vqVJ2vWvg3AoEGDCAQCXT7ei4LBYN70tZn6nB/U5+SFQqG4A+Bkgryc6Ai82WhgT/NKbJrlOeB2M1vR5QpFRKRHkpkjXwEsBnDOzQDKzaym1f57gKVm9nwa6hMRkQQSjsjNbJVzLuicWwWEgSXOuWuBKuAF4BpgknPu+tghj5jZQ+kqWEREjpXUHLmZ3dpm0/pWy7rNUkQki3Rnp4iIxynIRUQ8TkEuIuJxCnIREY/zVJBHItmuQESk9/FUkP/07eh9SH5fu5tLRUTylqeCfPfhegC+uWBqlisREek9PBXkzS6YPDJxIxGRPOHJIBcRkU8pyEVEPE5BLiLicQpyERGPU5CLiHicglxExOMU5CIiHqcgFxHxOAW5iIjHKchFRDxOQS4i4nEKchERj1OQi4h4nIJcRMTjFOQiIh6nIBcR8TgFuYiIxynIRUQ8TkEuIuJxCnIREY9TkIuIeJyCXETE4zwV5E2RbFcgItL7FCbTyDm3FJgFRICbzWxtq30XAP8ONAHPmtkP01HoC5vLeWd/bTpOLSLiaQlH5M65BcAkM5sNXAcsa9NkGfAFYC5wkXNuasqrBDbsqQRg+pihFPg99X8kRETSKplEXAQ8CWBmm4AS59xgAOfcROCgmX1sZmHg2Vj7tLnzs9PSeXoREc9JZmplJBBstV4R21Yd+8+KVvv2AycnOmFZWVkXSowaFz7K/LGDKK7cTTC4r8vHe1kwGEzcKMeoz/lBfU6NpObI2/B1c1+L0tJSiouLu/ShAeCU44IEAoEuHed1waD6nA/U5/zQ3T6HQqG4A+BkplbKiY68m40G
9nSyb0xsm4iIZEgyQb4CWAzgnJsBlJtZDYCZbQcGO+cmOOcKgUtj7UVEJEMSTq2Y2SrnXNA5twoIA0ucc9cCVWa2HLgJ+G2s+aNmtiVt1YqISDtJzZGb2a1tNq1vte81YHYqixIRkeTpgmwREY9TkIuIeJyCXETE47pzHXlPFADU19d3+wShUChlxXiF+pwf1Of80J0+t8rMgo72+yKRzD1SMBgMnguszNgHiojklnmBQOD1thszPSJfC8wjekNRU4Y/W0TEqwqAUUQztJ2MjshFRCT19GOniIjHKchFRDxOQS4i4nEKchERj1OQi4h4XKYvP0xab3jhc6Yl6PN5wF1E+2zA9bHX63lavD63anMXMNvMFma4vJRL8B2PI/ok0SJgnZndmJ0qUytBn5cAXyb6d/22mX0jO1WmnnOuFPgjsNTM7muzL6UZ1itH5L3lhc+ZlESfHwIWm9lcYBDw2QyXmHJJ9JnYdzs/07WlQxL9vQe4x8xmAk3OufGZrjHV4vU59u7fW4B5ZnYuMNU5Nys7laaWc24A8HPgpU6apDTDemWQ08te+JwhnfY5JmBmu2LLFcCwDNeXDon6DNFwuz3ThaVJvL9rP9Gb5Z6K7V9iZjuzVWgKxfuO62P/DIy9mKY/cDArVaZeCLiYDt6Ylo4M661B3valzs0vfO5o336idzx5Xbw+Y2bVAM65UcBFRL98r4vb59gLTF4Ftme0qvSJ19/hQA2w1Dn3emw6KRd02mczOwrcCWwDdgBrcuXFNGbWaGZ1nexOeYb11iBvq8cvfPagdv1yzo0Anga+ZmYHMl9S2rX02Tk3FPh7oiPyXOVrszwG+BmwAJjunLskK1WlV+vveDDwz8Bk4CTgHOfctGwVlkU9zrDeGuT5+MLneH1u/qN/DviemeXKe1Hj9fl8oqPUlcByYEbsRzMvi9ffT4AdZvahmTURnVs9LcP1pUO8Pk8BtpnZJ2ZWT/S77vor5r0n5RnWW4M8H1/43GmfY+4h+uv389koLk3ifc+Pm9lUM5sFXEH0Ko5vZq/UlIjX30Zgm3NuUqxtgOjVSV4X7+96OzDFOdcvtn4W8EHGK8ywdGRYr31olnPux0SvVggDS4DpxF747JybD/wk1vQPZnZ3lspMqc76DLwAVAKrWzV/xMweyniRKRbve27VZgLwcI5cfhjv7/oU4GGiA6wNwE05colpvD7fQHQKrRFYZWbfyV6lqeOcCxAdfE0AGoDdRH/I/igdGdZrg1xERJLTW6dWREQkSQpyERGPU5CLiHicglxExOMU5CIiHqcgFxHxOAW5iIjH/T8ZqK0rQwjXJgAAAABJRU5ErkJggg==\n",
+ "text/plain": [
+ "
"
+ ]
+ },
+ "metadata": {
+ "needs_background": "light"
+ },
+ "output_type": "display_data"
+ }
+ ],
+ "source": [
+ "from sklearn import metrics\n",
+ "import matplotlib.pyplot as plt\n",
+ "\n",
+ "y_pred_proba = titanic_model.predict_proba(titanic_x)[::,1]\n",
+ "fpr, tpr, _ = metrics.roc_curve(titanic_y, y_pred_proba)\n",
+ "auc = metrics.roc_auc_score(titanic_y, y_pred_proba)\n",
+ "plt.plot(fpr,tpr);"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We can see that the area under the curve is larger than the x = y diagonal. In fact, we have computed it to be over 0.85."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 16,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.859096567085954"
+ ]
+ },
+ "execution_count": 16,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "auc"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": []
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.8"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/your-code/.ipynb_checkpoints/main-checkpoint.ipynb b/your-code/.ipynb_checkpoints/main-checkpoint.ipynb
new file mode 100755
index 0000000..bcd6cda
--- /dev/null
+++ b/your-code/.ipynb_checkpoints/main-checkpoint.ipynb
@@ -0,0 +1,2771 @@
+{
+ "cells": [
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Before your start:\n",
+ "- Read the README.md file\n",
+ "- Comment as much as you can and use the resources in the README.md file\n",
+ "- Happy learning!"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 45,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Import your libraries:\n",
+ "import pandas as pd\n",
+ "import numpy as np"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Challenge 1 - Explore the Scikit-Learn Datasets\n",
+ "\n",
+ "Before starting to work on our own datasets, let's first explore the datasets that are included in this Python library. These datasets have been cleaned and formatted for use in ML algorithms."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "First, we will load the diabetes dataset. Do this in the cell below by importing the datasets and then loading the dataset to the `diabetes` variable using the `load_diabetes()` function ([documentation](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.load_diabetes.html))."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 46,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "from sklearn import datasets"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 47,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "diabetes = datasets.load_diabetes()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Let's explore this variable by looking at the different attributes (keys) of `diabetes`. Note that the `load_diabetes` function does not return dataframes. It returns a `Bunch` object, which behaves like a Python dictionary."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 48,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "dict_keys(['data', 'target', 'frame', 'DESCR', 'feature_names', 'data_filename', 'target_filename'])"
+ ]
+ },
+ "execution_count": 48,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "diabetes.keys()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### The next step is to read the description of the dataset. \n",
+ "\n",
+ "Print the description in the cell below using the `DESCR` attribute of the `diabetes` variable. Read the data description carefully to fully understand what each column represents.\n",
+ "\n",
+ "*Hint: If your output is ill-formatted by displaying linebreaks as `\\n`, it means you are not using the `print` function.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 49,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'.. _diabetes_dataset:\\n\\nDiabetes dataset\\n----------------\\n\\nTen baseline variables, age, sex, body mass index, average blood\\npressure, and six blood serum measurements were obtained for each of n =\\n442 diabetes patients, as well as the response of interest, a\\nquantitative measure of disease progression one year after baseline.\\n\\n**Data Set Characteristics:**\\n\\n :Number of Instances: 442\\n\\n :Number of Attributes: First 10 columns are numeric predictive values\\n\\n :Target: Column 11 is a quantitative measure of disease progression one year after baseline\\n\\n :Attribute Information:\\n - age age in years\\n - sex\\n - bmi body mass index\\n - bp average blood pressure\\n - s1 tc, T-Cells (a type of white blood cells)\\n - s2 ldl, low-density lipoproteins\\n - s3 hdl, high-density lipoproteins\\n - s4 tch, thyroid stimulating hormone\\n - s5 ltg, lamotrigine\\n - s6 glu, blood sugar level\\n\\nNote: Each of these 10 feature variables have been mean centered and scaled by the standard deviation times `n_samples` (i.e. the sum of squares of each column totals 1).\\n\\nSource URL:\\nhttps://www4.stat.ncsu.edu/~boos/var.select/diabetes.html\\n\\nFor more information see:\\nBradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) \"Least Angle Regression,\" Annals of Statistics (with discussion), 407-499.\\n(https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)'"
+ ]
+ },
+ "execution_count": 49,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "diabetes.DESCR"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 50,
+ "metadata": {
+ "scrolled": false
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']"
+ ]
+ },
+ "execution_count": 50,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "diabetes.feature_names"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Based on the data description, answer the following questions:\n",
+ "\n",
+ "1. How many attributes are there in the data? What do they mean?\n",
+ "\n",
+ "1. What is the relation between `diabetes['data']` and `diabetes['target']`?\n",
+ "\n",
+ "1. How many records are there in the data?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 51,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Enter your answer here:\n",
+ "# How many attributes are there in the data? What do they mean?\n",
+ "# 11\n",
+ "\n",
+ "# What is the relation between diabetes['data'] and diabetes['target']?\n",
+ "# The value of target is dependent of data\n",
+ "\n",
+ "# How many records are there in the data?\n",
+ "# 442"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Now explore what are contained in the *data* portion as well as the *target* portion of `diabetes`. \n",
+ "\n",
+ "Scikit-learn typically takes in 2D numpy arrays as input (though pandas dataframes are also accepted). Inspect the shape of `data` and `target`. Confirm they are consistent with the data description."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 52,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(442, 10)"
+ ]
+ },
+ "execution_count": 52,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "diabetes.data.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 53,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(442,)"
+ ]
+ },
+ "execution_count": 53,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "diabetes.target.shape"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Challenge 2 - Perform Supervised Learning on the Dataset"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "The data have already been split to predictor (*data*) and response (*target*) variables. Given this information, we'll apply what we have previously learned about linear regression and apply the algorithm to the diabetes dataset.\n",
+ "\n",
+ "#### Let's briefly revisit the linear regression formula:\n",
+ "\n",
+ "```\n",
+ "y = β0 + β1X1 + β2X2 + ... + βnXn + ϵ\n",
+ "```\n",
+ "\n",
+ "...where:\n",
+ "\n",
+ "- X1-Xn: data \n",
+ "- β0: intercept \n",
+ "- β1-βn: coefficients \n",
+ "- ϵ: error (cannot explained by model)\n",
+ "- y: target\n",
+ "\n",
+ "Also take a look at the `sklearn.linear_model.LinearRegression` [documentation](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html).\n",
+ "\n",
+ "#### In the cell below, import the `linear_model` class from `sklearn`. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 54,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "from sklearn.linear_model import LinearRegression"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Create a new instance of the linear regression model and assign the new instance to the variable `diabetes_model`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 55,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "diabetes_model = LinearRegression()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Next, let's split the training and test data.\n",
+ "\n",
+ "Define `diabetes_data_train`, `diabetes_target_train`, `diabetes_data_test`, and `diabetes_target_test`. Use the last 20 records for the test data and the rest for the training data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 56,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn.model_selection import train_test_split"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 124,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "X = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 126,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(442, 10)"
+ ]
+ },
+ "execution_count": 126,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "X.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 125,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "y = pd.DataFrame(diabetes.target, columns=['target'])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 127,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(442, 1)"
+ ]
+ },
+ "execution_count": 127,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "y.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 120,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "X = X.values.reshape(-1, 10)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 133,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Fit the training data and target to `diabetes_model`. Print the *intercept* and *coefficients* of the model."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 134,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "LinearRegression()"
+ ]
+ },
+ "execution_count": 134,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "diabetes_model.fit(X_train, y_train)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 135,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[152.53813352]\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(diabetes_model.intercept_)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 136,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[[ -35.55683674 -243.1692265 562.75404632 305.47203008 -662.78772128\n",
+ " 324.27527477 24.78193291 170.33056502 731.67810787 43.02846824]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(diabetes_model.coef_)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 161,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.5539285357415583"
+ ]
+ },
+ "execution_count": 161,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "diabetes_model.score(X_train, y_train)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 162,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.3322220326906514"
+ ]
+ },
+ "execution_count": 162,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "diabetes_model.score(X_test, y_test)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Inspecting the results\n",
+ "\n",
+ "From the outputs you should have seen:\n",
+ "\n",
+ "- The intercept is a float number.\n",
+ "- The coefficients are an array containing 10 float numbers.\n",
+ "\n",
+ "This is the linear regression model fitted to your training dataset.\n",
+ "\n",
+ "#### Using your fitted linear regression model, predict the *y* of `diabetes_data_test`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 154,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "89"
+ ]
+ },
+ "execution_count": 154,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "\n",
+ "y_pred = diabetes_model.predict(X_test)\n",
+ "len(y_pred)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Print your `diabetes_target_test` and compare with the prediction. "
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 156,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "target [238.47145247]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "for i, j in zip(y_test, y_pred):\n",
+ " print(i, j)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 157,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn.metrics import mean_squared_error as mse"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 158,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "3424.3166882137343"
+ ]
+ },
+ "execution_count": 158,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "mse(y_test, y_pred)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Is `diabetes_target_test` exactly the same as the model prediction? Explain."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your explanation here:\n",
+ "# Not it isn't since the model didn't have a great performance "
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Bonus Challenge 1 - Hypothesis Testing with `statsmodels`\n",
+ "\n",
+ "After generating the linear regression model from the dataset, you probably wonder: then what? What is the statistical way to know if my model is reliable or not?\n",
+ "\n",
+ "Good question. We'll discuss that using Scikit-Learn in Challenge 5. But for now, let's use a fool-proof way by using the ([Linear Regression class of StatsModels](https://www.statsmodels.org/dev/regression.html)) which can also conduct linear regression analysis plus much more such as calcuating the F-score of the linear model as well as the standard errors and t-scores for each coefficient. The F-score and t-scores will tell you whether you can trust your linear model.\n",
+ "\n",
+ "To understand the statistical meaning of conducting hypothesis testing (e.g. F-test, t-test) for slopes, read [this webpage](https://onlinecourses.science.psu.edu/stat501/node/297/) at your leisure time. We'll give you a brief overview next.\n",
+ "\n",
+ "* The F-test of your linear model is to verify whether at least one of your coefficients is significantly different from zero. Translating that into the *null hypothesis* and *alternative hypothesis*, that is:\n",
+ "\n",
+ " ```\n",
+ " H0 : β1 = β2 = ... = β10 = 0\n",
+ " HA : At least one βj ≠ 0 (for j = 1, 2, ..., 10)\n",
+ " ```\n",
+ "\n",
+ "* The t-tests on each coefficient is to check whether the confidence interval for the variable contains zero. If the confidence interval contains zero, it means the null hypothesis for that variable is not rejected. In other words, this particular vaiable is not contributing to your linear model and you can remove it from your formula.\n",
+ "\n",
+ "Read the documentations of [StatsModels Linear Regression](https://www.statsmodels.org/dev/regression.html) as well as its [`OLS` class](https://www.statsmodels.org/dev/generated/statsmodels.regression.linear_model.OLS.html) which stands for *ordinary least squares*.\n",
+ "\n",
+ "#### In the next cell, analyze `diabetes_data_train` and `diabetes_target_train` with the linear regression model of `statsmodels`. Print the fit summary.\n",
+ "\n",
+ "Your output should look like:\n",
+ "\n",
+ ""
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Interpreting hypothesis testing results\n",
+ "\n",
+ "Answer the following questions in the cell below:\n",
+ "\n",
+ "1. What is the F-score of your linear model and is the null hypothesis rejected?\n",
+ "\n",
+ "1. Does any of the t-tests of the coefficients produce a confidence interval containing zero? What are they?\n",
+ "\n",
+ "1. How will you modify your linear reguression model according to the test results above?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": null,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your answers here:"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Challenge 3 - Peform Supervised Learning on a Pandas Dataframe"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now that we have dealt with data that has been formatted for scikit-learn, let's look at data that we will need to format ourselves.\n",
+ "\n",
+ "In the next cell, load the `auto-mpg.csv` file included in this folder and assign it to a variable called `auto`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 191,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "auto = pd.read_csv('../auto-mpg.csv')"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Look at the first 5 rows using the `head()` function:"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 192,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "
"
+ ],
+ "text/plain": [
+ " cylinders\n",
+ "4 199\n",
+ "8 103\n",
+ "6 83\n",
+ "3 4\n",
+ "5 3"
+ ]
+ },
+ "execution_count": 201,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "auto.cylinders.value_counts().to_frame()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "We would like to generate a linear regression model that will predict mpg. To do this, first drop the `car_name` column since it does not contain any quantitative data. Next separate the dataframe to predictor and response variables. Separate those into test and training data with 80% of the data in the training set and the remainder in the test set. \n",
+ "\n",
+ "Assign the predictor and response training data to `X_train` and `y_train` respectively. Similarly, assign the predictor and response test data to `X_test` and `y_test`.\n",
+ "\n",
+ "*Hint: To separate data for training and test, use the `train_test_split` method we used in previous labs.*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 202,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "auto.drop('car_name', axis=1, inplace=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 203,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "y = auto.mpg"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 204,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "auto.drop('mpg', axis=1, inplace=True)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 206,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "X = auto"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 207,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Now we will processed and peform linear regression on this data to predict the mpg for each vehicle. \n",
+ "\n",
+ "#### In the next cell, create an instance of the linear regression model and call it `auto_model`. Fit `auto_model` with your training data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 208,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "auto_model = LinearRegression()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 209,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "LinearRegression()"
+ ]
+ },
+ "execution_count": 209,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "auto_model.fit(X_train, y_train)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 210,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.8088490656511089"
+ ]
+ },
+ "execution_count": 210,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "auto_model.score(X_train, y_train)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Challenge 4 - Evaluate the Model\n",
+ "\n",
+ "In addition to evaluating your model with F-test and t-test, you can also use the *Coefficient of Determination* (a.k.a. *r squared score*). This method does not simply tell *yes* or *no* about the model fit but instead indicates how much variation can be explained by the model. Based on the r squared score, you can decide whether to improve your model in order to obtain a better fit.\n",
+ "\n",
+ "You can learn about the r squared score [here](). Its formula is:\n",
+ "\n",
+ "\n",
+ "\n",
+ "...where:\n",
+ "\n",
+ "* yi is an actual data point.\n",
+ "* ŷi is the corresponding data point on the estimated regression line.\n",
+ "\n",
+ "By adding the squares of the difference between all yi-ŷi pairs, we have a measure called SSE (*error sum of squares*) which is an application of the r squared score to indicate the extent to which the estimated regression model is different from the actual data. And we attribute that difference to the random error that is unavoidable in the real world. Obviously, we want the SSE value to be as small as possible.\n",
+ "\n",
+ "#### In the next cell, compute the predicted *y* based on `X_train` and call it `y_pred`. Then calcualte the r squared score between `y_pred` and `y_train` which indicates how well the estimated regression model fits the training data.\n",
+ "\n",
+ "*Hint: r squared score can be calculated using `sklearn.metrics.r2_score` ([documentation](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.html)).*"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 211,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn.metrics import r2_score"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 214,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "y_pred = auto_model.predict(X_train)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 215,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.8088490656511089"
+ ]
+ },
+ "execution_count": 215,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "r2_score(y_train, y_pred)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Our next step is to evaluate the model using the test data. \n",
+ "\n",
+ "We would like to ensure that our model is not overfitting the data. This means that our model was made to fit too closely to the training data by being overly complex. If a model is overfitted, it is not generalizable to data outside the training data. In that case, we need to reduce the complexity of the model by removing certain features (variables).\n",
+ "\n",
+ "In the cell below, use the model to generate the predicted values for the test data and assign them to `y_test_pred`. Compute the r squared score of the predicted `y_test_pred` and the oberserved `y_test` data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 216,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "y_test_pred = auto_model.predict(X_test)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 217,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.8088938602131777"
+ ]
+ },
+ "execution_count": 217,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "r2_score(y_test, y_test_pred)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Explaining the results\n",
+ "\n",
+ "The r squared scores of the training data and the test data are pretty close (0.8146 vs 0.7818). This means our model is not overfitted. However, there is still room to improve the model fit. Move on to the next challenge."
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Challenge 5 - Improve the Model Fit\n",
+ "\n",
+ "While the most common way to improve the fit of a model is by using [regularization](https://datanice.github.io/machine-learning-101-what-is-regularization-interactive.html), there are other simpler ways to improve model fit. The first is to create a simpler model. The second is to increase the train sample size.\n",
+ "\n",
+ "Let us start with the easier option and increase our train sample size to 90% of the data. Create a new test train split and name the new predictors and response variables `X_train09`, `X_test09`, `y_train09`, `y_test09`."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 218,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "X_train09, X_test09, y_train09, y_test09 = train_test_split(X, y, test_size=0.1, random_state=0)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Initialize a new linear regression model. Name this model `auto_model09`. Fit the model to the new sample (training) data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 219,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "auto_model09 = LinearRegression()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 220,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "LinearRegression()"
+ ]
+ },
+ "execution_count": 220,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "auto_model09.fit(X_train09, y_train09)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 224,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "y_pred09 = auto_model09.predict(X_train09)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Compute the predicted values and r squared score for our new model and new sample data."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 225,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.8102752620591411"
+ ]
+ },
+ "execution_count": 225,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "r2_score(y_train09, y_pred09)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Compute the r squared score for the smaller test set. Is there an improvement in the test r squared?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 226,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "y_pred09 = auto_model09.predict(X_test09)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 227,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.7971768846093389"
+ ]
+ },
+ "execution_count": 227,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "r2_score(y_test09, y_pred09)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "# Bonus Challenge 2 - Backward Elimination \n",
+ "\n",
+ "The main way to produce a simpler linear regression model is to reduce the number of variables used in the model. In scikit-learn, we can do this by using recursive feature elimination. You can read more about RFE [here](https://scikit-learn.org/stable/modules/generated/sklearn.feature_selection.RFE.html).\n",
+ "\n",
+ "In the next cell, we will import RFE"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 228,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn.feature_selection import RFE"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Follow the documentation and initialize an RFE model using the `auto_model` linear regression model. Set `n_features_to_select=3`"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 229,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "auto_model = LinearRegression()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 230,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "selector = RFE(auto_model, n_features_to_select=3)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Fit the model and print the ranking"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 232,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "selector = selector.fit(X, y)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 233,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "array([1, 2, 4, 3, 1, 1])"
+ ]
+ },
+ "execution_count": 233,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "selector.ranking_"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 234,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "
\n",
+ "\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
\n",
+ "
cylinders
\n",
+ "
displacement
\n",
+ "
horse_power
\n",
+ "
weight
\n",
+ "
acceleration
\n",
+ "
model_year
\n",
+ "
\n",
+ " \n",
+ " \n",
+ "
\n",
+ "
0
\n",
+ "
8
\n",
+ "
307.0
\n",
+ "
130.0
\n",
+ "
3504.0
\n",
+ "
12.0
\n",
+ "
70
\n",
+ "
\n",
+ "
\n",
+ "
1
\n",
+ "
8
\n",
+ "
350.0
\n",
+ "
165.0
\n",
+ "
3693.0
\n",
+ "
11.5
\n",
+ "
70
\n",
+ "
\n",
+ "
\n",
+ "
2
\n",
+ "
8
\n",
+ "
318.0
\n",
+ "
150.0
\n",
+ "
3436.0
\n",
+ "
11.0
\n",
+ "
70
\n",
+ "
\n",
+ "
\n",
+ "
3
\n",
+ "
8
\n",
+ "
304.0
\n",
+ "
150.0
\n",
+ "
3433.0
\n",
+ "
12.0
\n",
+ "
70
\n",
+ "
\n",
+ "
\n",
+ "
4
\n",
+ "
8
\n",
+ "
302.0
\n",
+ "
140.0
\n",
+ "
3449.0
\n",
+ "
10.5
\n",
+ "
70
\n",
+ "
\n",
+ " \n",
+ "
\n",
+ "
"
+ ],
+ "text/plain": [
+ " cylinders displacement horse_power weight acceleration model_year\n",
+ "0 8 307.0 130.0 3504.0 12.0 70\n",
+ "1 8 350.0 165.0 3693.0 11.5 70\n",
+ "2 8 318.0 150.0 3436.0 11.0 70\n",
+ "3 8 304.0 150.0 3433.0 12.0 70\n",
+ "4 8 302.0 140.0 3449.0 10.5 70"
+ ]
+ },
+ "execution_count": 234,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "auto.head()"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Feature importance is ranked from most important (1) to least important (4). Generate a model with the three most important features. The features correspond to variable names. For example, feature 1 is `cylinders` and feature 2 is `displacement`.\n",
+ "\n",
+ "Perform a test-train split on this reduced column data and call the split data `X_train_reduced`, `X_test_reduced`, `y_test_reduced`, `y_train_reduced`. Use an 80% split."
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 254,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "X = auto[['cylinders', 'displacement', 'acceleration', 'model_year']]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 255,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "X1 = X.values.reshape(-1, len(X.columns))"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 256,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "array([[ 8. , 307. , 12. , 70. ],\n",
+ " [ 8. , 350. , 11.5, 70. ],\n",
+ " [ 8. , 318. , 11. , 70. ]])"
+ ]
+ },
+ "execution_count": 256,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "X1[:3]"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 249,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "y = y.values.reshape(-1, 1)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 262,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "X_train_reduced, X_test_reduced, y_train_reduced, y_test_reduced = train_test_split(X1, y, test_size=0.2, random_state=0)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "Generate a new model called `auto_model_reduced` and fit this model. Then proceed to compute the r squared score for the model. Did this cause an improvement in the r squared score?"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 264,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "auto_model_reduced = LinearRegression()"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 265,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "LinearRegression()"
+ ]
+ },
+ "execution_count": 265,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here: \n",
+ "auto_model_reduced.fit(X_train_reduced, y_train_reduced)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 267,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "y_pred_reduced = auto_model_reduced.predict(X_test_reduced)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 268,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.7692545641513725"
+ ]
+ },
+ "execution_count": 268,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "r2_score(y_test_reduced, y_pred_reduced)"
+ ]
+ },
+ {
+ "cell_type": "markdown",
+ "metadata": {},
+ "source": [
+ "#### Conclusion\n",
+ "\n",
+ "You may obtain the impression from this lab that without knowing statistical methods in depth, it is difficult to make major progress in machine learning. That is correct. If you are motivated to become a data scientist, statistics is the subject you must be proficient in and there is no shortcut. \n",
+ "\n",
+ "Completing these labs is not likely to make you a data scientist. But you will have a good sense about what are there in machine learning and what are good for you. In your future career, you can choose one of the three tracks:\n",
+ "\n",
+ "* Data scientists who need to be proficient in statistical methods.\n",
+ "\n",
+ "* Data engineers who need to be good at programming.\n",
+ "\n",
+ "* Data integration specialists who are business or content experts but also understand data and programming. This cross-disciplinary track brings together data, technology, and business and will be in high demands in the next decade."
+ ]
+ }
+ ],
+ "metadata": {
+ "kernelspec": {
+ "display_name": "Python 3",
+ "language": "python",
+ "name": "python3"
+ },
+ "language_info": {
+ "codemirror_mode": {
+ "name": "ipython",
+ "version": 3
+ },
+ "file_extension": ".py",
+ "mimetype": "text/x-python",
+ "name": "python",
+ "nbconvert_exporter": "python",
+ "pygments_lexer": "ipython3",
+ "version": "3.8.8"
+ }
+ },
+ "nbformat": 4,
+ "nbformat_minor": 2
+}
diff --git a/your-code/Loading Datasets into Scikit-learn.ipynb b/your-code/Loading Datasets into Scikit-learn.ipynb
index edc7e19..33c103c 100755
--- a/your-code/Loading Datasets into Scikit-learn.ipynb
+++ b/your-code/Loading Datasets into Scikit-learn.ipynb
@@ -772,7 +772,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.2"
+ "version": "3.8.8"
}
},
"nbformat": 4,
diff --git a/your-code/Supervised Learning with Scikit-Learn.ipynb b/your-code/Supervised Learning with Scikit-Learn.ipynb
index 77d0cea..a4d5e23 100755
--- a/your-code/Supervised Learning with Scikit-Learn.ipynb
+++ b/your-code/Supervised Learning with Scikit-Learn.ipynb
@@ -996,7 +996,7 @@
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
- "version": "3.7.2"
+ "version": "3.8.8"
}
},
"nbformat": 4,
diff --git a/your-code/main.ipynb b/your-code/main.ipynb
index 0102ef9..bcd6cda 100755
--- a/your-code/main.ipynb
+++ b/your-code/main.ipynb
@@ -12,11 +12,13 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 45,
"metadata": {},
"outputs": [],
"source": [
- "# Import your libraries:\n"
+ "# Import your libraries:\n",
+ "import pandas as pd\n",
+ "import numpy as np"
]
},
{
@@ -37,11 +39,21 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 46,
"metadata": {},
"outputs": [],
"source": [
- "# Your code here:\n"
+ "# Your code here:\n",
+ "from sklearn import datasets"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 47,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "diabetes = datasets.load_diabetes()"
]
},
{
@@ -53,11 +65,23 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 48,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "dict_keys(['data', 'target', 'frame', 'DESCR', 'feature_names', 'data_filename', 'target_filename'])"
+ ]
+ },
+ "execution_count": 48,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
- "# Your code here:\n"
+ "# Your code here:\n",
+ "diabetes.keys()"
]
},
{
@@ -73,13 +97,45 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 49,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "'.. _diabetes_dataset:\\n\\nDiabetes dataset\\n----------------\\n\\nTen baseline variables, age, sex, body mass index, average blood\\npressure, and six blood serum measurements were obtained for each of n =\\n442 diabetes patients, as well as the response of interest, a\\nquantitative measure of disease progression one year after baseline.\\n\\n**Data Set Characteristics:**\\n\\n :Number of Instances: 442\\n\\n :Number of Attributes: First 10 columns are numeric predictive values\\n\\n :Target: Column 11 is a quantitative measure of disease progression one year after baseline\\n\\n :Attribute Information:\\n - age age in years\\n - sex\\n - bmi body mass index\\n - bp average blood pressure\\n - s1 tc, T-Cells (a type of white blood cells)\\n - s2 ldl, low-density lipoproteins\\n - s3 hdl, high-density lipoproteins\\n - s4 tch, thyroid stimulating hormone\\n - s5 ltg, lamotrigine\\n - s6 glu, blood sugar level\\n\\nNote: Each of these 10 feature variables have been mean centered and scaled by the standard deviation times `n_samples` (i.e. the sum of squares of each column totals 1).\\n\\nSource URL:\\nhttps://www4.stat.ncsu.edu/~boos/var.select/diabetes.html\\n\\nFor more information see:\\nBradley Efron, Trevor Hastie, Iain Johnstone and Robert Tibshirani (2004) \"Least Angle Regression,\" Annals of Statistics (with discussion), 407-499.\\n(https://web.stanford.edu/~hastie/Papers/LARS/LeastAngle_2002.pdf)'"
+ ]
+ },
+ "execution_count": 49,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "diabetes.DESCR"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 50,
"metadata": {
"scrolled": false
},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "['age', 'sex', 'bmi', 'bp', 's1', 's2', 's3', 's4', 's5', 's6']"
+ ]
+ },
+ "execution_count": 50,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
- "# Your code here:\n"
+ "diabetes.feature_names"
]
},
{
@@ -97,11 +153,19 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 51,
"metadata": {},
"outputs": [],
"source": [
- "# Enter your answer here:\n"
+ "# Enter your answer here:\n",
+ "# How many attributes are there in the data? What do they mean?\n",
+ "# 11\n",
+ "\n",
+ "# What is the relation between diabetes['data'] and diabetes['target']?\n",
+ "# The value of target is dependent of data\n",
+ "\n",
+ "# How many records are there in the data?\n",
+ "# 442"
]
},
{
@@ -115,11 +179,43 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 52,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(442, 10)"
+ ]
+ },
+ "execution_count": 52,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "diabetes.data.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 53,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(442,)"
+ ]
+ },
+ "execution_count": 53,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
"source": [
- "# Your code here:\n"
+ "diabetes.target.shape"
]
},
{
@@ -156,11 +252,12 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 54,
"metadata": {},
"outputs": [],
"source": [
- "# Your code here:\n"
+ "# Your code here:\n",
+ "from sklearn.linear_model import LinearRegression"
]
},
{
@@ -172,11 +269,12 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 55,
"metadata": {},
"outputs": [],
"source": [
- "# Your code here:\n"
+ "# Your code here:\n",
+ "diabetes_model = LinearRegression()"
]
},
{
@@ -190,11 +288,88 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 56,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "from sklearn.model_selection import train_test_split"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 124,
"metadata": {},
"outputs": [],
"source": [
- "# Your code here:\n"
+ "X = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 126,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(442, 10)"
+ ]
+ },
+ "execution_count": 126,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "X.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 125,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "y = pd.DataFrame(diabetes.target, columns=['target'])"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 127,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "(442, 1)"
+ ]
+ },
+ "execution_count": 127,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "y.shape"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 120,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "X = X.values.reshape(-1, 10)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 133,
+ "metadata": {},
+ "outputs": [],
+ "source": [
+ "# Your code here:\n",
+ "X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)"
]
},
{
@@ -206,11 +381,100 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 134,
"metadata": {},
- "outputs": [],
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "LinearRegression()"
+ ]
+ },
+ "execution_count": 134,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "diabetes_model.fit(X_train, y_train)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 135,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[152.53813352]\n"
+ ]
+ }
+ ],
"source": [
- "# Your code here:\n"
+ "print(diabetes_model.intercept_)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 136,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "[[ -35.55683674 -243.1692265 562.75404632 305.47203008 -662.78772128\n",
+ " 324.27527477 24.78193291 170.33056502 731.67810787 43.02846824]]\n"
+ ]
+ }
+ ],
+ "source": [
+ "print(diabetes_model.coef_)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 161,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.5539285357415583"
+ ]
+ },
+ "execution_count": 161,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "diabetes_model.score(X_train, y_train)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 162,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "0.3322220326906514"
+ ]
+ },
+ "execution_count": 162,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "diabetes_model.score(X_test, y_test)"
]
},
{
@@ -231,11 +495,25 @@
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Your code here:\n"
+ "execution_count": 154,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "89"
+ ]
+ },
+ "execution_count": 154,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "\n",
+ "y_pred = diabetes_model.predict(X_test)\n",
+ "len(y_pred)"
]
},
{
@@ -247,11 +525,50 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 156,
+ "metadata": {},
+ "outputs": [
+ {
+ "name": "stdout",
+ "output_type": "stream",
+ "text": [
+ "target [238.47145247]\n"
+ ]
+ }
+ ],
+ "source": [
+ "# Your code here:\n",
+ "for i, j in zip(y_test, y_pred):\n",
+ " print(i, j)"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 157,
"metadata": {},
"outputs": [],
"source": [
- "# Your code here:\n"
+ "from sklearn.metrics import mean_squared_error as mse"
+ ]
+ },
+ {
+ "cell_type": "code",
+ "execution_count": 158,
+ "metadata": {},
+ "outputs": [
+ {
+ "data": {
+ "text/plain": [
+ "3424.3166882137343"
+ ]
+ },
+ "execution_count": 158,
+ "metadata": {},
+ "output_type": "execute_result"
+ }
+ ],
+ "source": [
+ "mse(y_test, y_pred)"
]
},
{
@@ -267,7 +584,8 @@
"metadata": {},
"outputs": [],
"source": [
- "# Your explanation here:\n"
+ "# Your explanation here:\n",
+ "# Not it isn't since the model didn't have a great performance "
]
},
{
@@ -300,15 +618,6 @@
""
]
},
- {
- "cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Your code here:\n"
- ]
- },
{
"cell_type": "markdown",
"metadata": {},
@@ -351,11 +660,12 @@
},
{
"cell_type": "code",
- "execution_count": null,
+ "execution_count": 191,
"metadata": {},
"outputs": [],
"source": [
- "# Your code here:\n"
+ "# Your code here:\n",
+ "auto = pd.read_csv('../auto-mpg.csv')"
]
},
{
@@ -367,11 +677,247 @@
},
{
"cell_type": "code",
- "execution_count": null,
- "metadata": {},
- "outputs": [],
- "source": [
- "# Your code here:\n"
+ "execution_count": 192,
+ "metadata": {
+ "scrolled": true
+ },
+ "outputs": [
+ {
+ "data": {
+ "text/html": [
+ "