73 changes: 61 additions & 12 deletions Assignment7.Rmd
@@ -11,62 +11,90 @@
In the following assignment you will be looking at data from one level of an online course.

#Upload data
```{r}
data <- read.csv("online.data.csv", header = TRUE)

```
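A quick structural check after the import can catch header or column-type problems early. A minimal sketch; it assumes only that the CSV loaded into `data` as above:

```{r}
# Confirm the columns and types imported as expected
str(data)
summary(data)
```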

#Visualization
#According to the pairs plot below, the numeric variables are positively and roughly linearly related.
```{r}
library(ggplot2)
library(tidyr)
library(dplyr)
# Gather the six per-student measures into long format, then facet one histogram per measure
data2 <- gather(data, "measure", "score", 2:7)
ggplot(data2, aes(score)) + facet_wrap(~measure, scales = "free") + geom_histogram(stat = "count")

#Start by creating histograms of the distributions for all variables (#HINT: look up "facet" in the ggplot documentation)

#Then visualize the relationships between variables

pairs(data[, sapply(data, is.numeric)]) # scatterplot matrix of the numeric variables
#Try to capture an intuition about the data and the relationships


```
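Since the comment above appeals to correlations, a numeric correlation matrix makes that claim checkable. A minimal sketch, assuming every column except the categorical level.up is numeric:

```{r}
# Correlation matrix of the numeric variables; level.up is categorical, so it is dropped
numeric.vars <- data[, sapply(data, is.numeric)]
round(cor(numeric.vars), 2)
```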
#Classification tree
```{r}
#Create a classification tree that predicts whether a student "levels up" in the online course using three variables of your choice (As we did last time, set all controls to their minimums)
library(rpart)
rp <- rpart(as.factor(level.up) ~ forum.posts + pre.test.score + av.assignment.score + messages, method = "class", data = data) # note: four predictors are included here, not three

#Plot and generate a CP table for your tree

printcp(rp)
post(rp, file = "tree.ps", title = "levels up")
#Generate a probability value that represents the probability that a student levels up based on your classification tree

data$pred <- predict(rp, type = "prob")[,2] # Last class we used type = "class", which returned the predicted classification; type = "prob" returns the probability that classification is based on.
#According to the classification tree, students with av.assignment.score >= 0.255 and messages >= 77.5 are predicted to level up.
```
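post() writes the tree to a PostScript file on disk; for an inline plot of the same tree, one common option is the rpart.plot package (an optional sketch, assuming rpart.plot is installed):

```{r}
# Draw the fitted tree inline; nodes show the predicted class and class probability
library(rpart.plot)
rpart.plot(rp)
```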
## Part II
#Now you can generate the ROC curve for your model. You will need to install the package ROCR to do this.
```{r}
library(pROC)
library(ROCR)

#Plot the curve (convert level.up to a binary indicator first)
data$level.up <- ifelse(data$level.up == "yes", 1, 0)
pred.detail <- prediction(data$pred, data$level.up)
plot(performance(pred.detail, "tpr", "fpr"))
abline(0, 1, lty = 2)
roc(data$level.up, data$pred, plot = TRUE, legacy.axes = TRUE, print.auc = TRUE)

#Calculate the Area Under the Curve
unlist(slot(performance(pred.detail, "auc"), "y.values")) # unlist extracts the AUC value from the "performance" object created by ROCR
#According to the output above, the area under the curve is close to 1.


#Now repeat this process, but using the variables you did not use for the previous model and compare the plots & results of your two models. Which one do you think was the better model? Why?

#Here forum.posts is scored directly rather than fitting a second tree
pred.detail2 <- prediction(data$forum.posts, data$level.up)
plot(performance(pred.detail2, "tpr", "fpr"))
abline(0, 1, lty = 2)
unlist(slot(performance(pred.detail2,"auc"), "y.values"))

#The first model is better than the second because its AUC is higher: close to 1 for the first model versus about 0.64 for the second. A higher AUC means a better trade-off between the true positive rate and the false positive rate across all thresholds.
```
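The comparison above scores forum.posts directly rather than fitting a second tree, so the two models are not built the same way. A more parallel sketch would fit a second rpart model on the variable left out of the first model (assumed here to be post.test.score) and compute its AUC with the same ROCR calls:

```{r}
# Fit a second classification tree on the variable model 1 did not use
rp2 <- rpart(as.factor(level.up) ~ post.test.score, method = "class", data = data)
data$pred2 <- predict(rp2, type = "prob")[, 2]

# ROC curve and AUC for the second tree, computed exactly as for model 1
pred.detail3 <- prediction(data$pred2, data$level.up)
plot(performance(pred.detail3, "tpr", "fpr"))
abline(0, 1, lty = 2)
unlist(slot(performance(pred.detail3, "auc"), "y.values"))
```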
## Part III
#Thresholds
```{r}
#Look at the ROC plot for your first model. Based on this plot choose a probability threshold that balances capturing the most correct predictions against false positives. Then generate a new variable in your data set that classifies each student according to your chosen threshold.

data$threshold.pred1 <- ifelse(data$pred >= 0.7, 1, 0)


#Now generate three diagnostics:

accuracy.model1 <- mean(ifelse(data$level.up == data$threshold.pred1, 1, 0))

data$TP <- ifelse(data$level.up == 1 & data$threshold.pred1 == 1, 1, 0)
data$FP <- ifelse(data$level.up == 0 & data$threshold.pred1 == 1, 1, 0)
data$FN <- ifelse(data$level.up == 1 & data$threshold.pred1 == 0, 1, 0)
data$TN <- ifelse(data$level.up == 0 & data$threshold.pred1 == 0, 1, 0)
precision.model1 <- sum(data$TP)/(sum(data$TP) + sum(data$FP))
recall.model1 <- sum(data$TP)/(sum(data$TP) + sum(data$FN))

#Finally, calculate Kappa for your model according to:

#First generate the table of comparisons
table1 <- table(data$level.up, data$threshold.pred1)

#Convert to matrix
matrix1 <- as.matrix(table1)
@@ -76,6 +104,27 @@
kappa(matrix1, exact = TRUE)/kappa(matrix1)

#Now choose a different threshold value and repeat these diagnostics. What conclusions can you draw about your two thresholds?

data$threshold.pred2 <- ifelse(data$pred >= 0.8, 1, 0)

data$TP2 <- ifelse(data$level.up == 1 & data$threshold.pred2 == 1, 1, 0)
data$FP2 <- ifelse(data$level.up == 0 & data$threshold.pred2 == 1, 1, 0)
data$FN2 <- ifelse(data$level.up == 1 & data$threshold.pred2 == 0, 1, 0)
data$TN2 <- ifelse(data$level.up == 0 & data$threshold.pred2 == 0, 1, 0)
precision.model2 <- sum(data$TP2)/(sum(data$TP2) + sum(data$FP2))
recall.model2 <- sum(data$TP2)/(sum(data$TP2) + sum(data$FN2))

#Finally, calculate Kappa for your model according to:
#First generate the table of comparisons
table2 <- table(data$level.up, data$threshold.pred2)

#Convert to matrix
matrix2 <- as.matrix(table2)

#Calculate kappa
kappa(matrix2, exact = TRUE)/kappa(matrix2)

#The second threshold gives a slightly higher kappa than the first, so by this metric the second threshold (0.8) performs slightly better.

```
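One caveat on the kappa calculation above: base R's kappa() estimates the condition number of a matrix, not Cohen's kappa, so the ratio kappa(matrix, exact = TRUE)/kappa(matrix) is not an agreement statistic. A minimal sketch of Cohen's kappa computed directly from the confusion tables (assuming both tables are 2 x 2, i.e. both classes occur in the labels and the predictions):

```{r}
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)
cohens.kappa <- function(tab) {
  tab <- as.matrix(tab)
  n <- sum(tab)
  observed <- sum(diag(tab)) / n                      # proportion of exact agreement
  expected <- sum(rowSums(tab) * colSums(tab)) / n^2  # agreement expected by chance
  (observed - expected) / (1 - expected)
}
cohens.kappa(table1)
cohens.kappa(table2)
```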

### To Submit Your Assignment