A mini ML competition: who can produce the best model to predict pass/fail?
- Download the Open University Learning Analytics dataset from here
- Import the studentVle.csv, studentAssessment.csv and studentInfo.csv files into R
- Calculate the average daily number of clicks (site interactions) for each student from the studentVle dataset
- Calculate the average assessment score for each student from the studentAssessment dataset
- Merge your click and assessment score average values into the studentInfo dataset (a base-R sketch of the import, aggregation and merge steps appears after this list)
- Split your data into two new datasets, TRAINING and TEST, by randomly selecting 20% of the students for the TEST set
- Generate summary statistics for the variable final_result
- Ensure that the final_result variable is binary (remove all students who withdrew from a course and convert all students who received distinctions to pass); see the cleanup-and-split sketch after this list
- Visualize the distributions of each of the variables for insight
- Visualize relationships between variables for insight (a few example plots follow this list)
- You will be allocated one of the following models to test: CART, Neural Network, Genetic Algorithm, Naive Bayes, K-nearest neighbors
- Using the trainControl command in the caret package, create a 10-fold cross-validation harness: control <- trainControl(method="cv", number=10)
- Using the standard caret syntax, fit your model and measure accuracy: fit <- train(final_result ~ ., data=TRAINING, method="YOUR MODEL", metric="Accuracy", trControl=control), where "YOUR MODEL" is the caret method string for your allocated model (e.g. "rpart" for CART); a worked sketch follows this list
- Generate a summary of your results and create a visualization of the accuracy scores for your ten cross-validation folds
- Make any tweaks to your model to try to improve its performance (one possible tweak is sketched after this list)
- Use the predict function to test your model: predictions <- predict(fit, TEST)
- Generate a confusion matrix for your model test: confusionMatrix(predictions, TEST$final_result)
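
A minimal base-R sketch of the import, aggregation and merge steps. It assumes the three CSV files sit in the working directory and use the standard OULAD column names (id_student, date, sum_click, score); adjust paths and names as needed.

```r
studentVle        <- read.csv("studentVle.csv")
studentAssessment <- read.csv("studentAssessment.csv")
studentInfo       <- read.csv("studentInfo.csv")

# Total clicks per student per day, then the average of those daily totals
daily_clicks <- aggregate(sum_click ~ id_student + date, data = studentVle, FUN = sum)
avg_clicks   <- aggregate(sum_click ~ id_student, data = daily_clicks, FUN = mean)
names(avg_clicks)[2] <- "avg_daily_clicks"

# Mean assessment score per student
avg_score <- aggregate(score ~ id_student, data = studentAssessment, FUN = mean)
names(avg_score)[2] <- "avg_score"

# Attach both averages to the student-level table (inner joins: students
# with no VLE or assessment records are dropped)
studentInfo <- merge(studentInfo, avg_clicks, by = "id_student")
studentInfo <- merge(studentInfo, avg_score,  by = "id_student")
```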
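For the outcome summary, binarisation and train/test split, one possible sketch. It recodes final_result before splitting so the recoding lives in one place, and assumes the OULAD values Pass, Fail, Distinction and Withdrawn; the seed is arbitrary.

```r
summary(factor(studentInfo$final_result))        # summary statistics for the outcome

# Make the outcome binary: drop withdrawals, fold distinctions into passes
studentInfo <- subset(studentInfo, final_result != "Withdrawn")
studentInfo$final_result[studentInfo$final_result == "Distinction"] <- "Pass"
studentInfo$final_result <- factor(studentInfo$final_result)   # levels: Fail, Pass

# Hold out a random 20% of students as the TEST set
set.seed(42)                                     # for a reproducible split
test_idx <- sample(nrow(studentInfo), size = round(0.2 * nrow(studentInfo)))
TEST     <- studentInfo[test_idx, ]
TRAINING <- studentInfo[-test_idx, ]
```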
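For the two visualization bullets, a few base-graphics plots like these are enough to get started; the column names avg_daily_clicks and avg_score come from the merge sketch above.

```r
# Distributions of the engineered predictors
hist(studentInfo$avg_daily_clicks, main = "Average daily clicks", xlab = "Clicks per day")
hist(studentInfo$avg_score,        main = "Average assessment score", xlab = "Score")

# Relationships with the outcome and between the two predictors
boxplot(avg_daily_clicks ~ final_result, data = studentInfo,
        main = "Daily clicks by final result")
boxplot(avg_score ~ final_result, data = studentInfo,
        main = "Assessment score by final result")
plot(studentInfo$avg_daily_clicks, studentInfo$avg_score,
     col = studentInfo$final_result,
     xlab = "Average daily clicks", ylab = "Average assessment score")
```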
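A worked version of the caret steps, using "rpart" (CART) purely as a stand-in for whichever model you were allocated. The formula is restricted to the two engineered predictors as a simplification; the assignment's final_result ~ . also works once identifier columns such as id_student are removed.

```r
library(caret)

control <- trainControl(method = "cv", number = 10)     # 10-fold CV harness
fit <- train(final_result ~ avg_daily_clicks + avg_score,
             data      = TRAINING,
             method    = "rpart",                        # swap in your allocated model
             metric    = "Accuracy",
             trControl = control)

print(fit)                         # cross-validated summary
fit$resample                       # accuracy for each of the ten folds
boxplot(fit$resample$Accuracy,     # quick visual of the fold-level accuracies
        main = "10-fold CV accuracy", ylab = "Accuracy")
```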
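One common tweak, sketched with standard train() arguments: let caret search a larger tuning grid via tuneLength and centre/scale the numeric predictors via preProcess. Whether either helps depends on the model you were allocated.

```r
fit2 <- train(final_result ~ avg_daily_clicks + avg_score,
              data       = TRAINING,
              method     = "rpart",                # again a stand-in for your model
              metric     = "Accuracy",
              trControl  = control,
              tuneLength = 10,                     # try 10 values of the tuning parameter
              preProcess = c("center", "scale"))   # centre and scale numeric predictors
fit2$results                                       # compare against fit$results
```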