# Week 6 Activity

In the Week 6 activity, we were asked to calculate several metrics for assessing models, which were discussed in Ryan Baker's unit on Behavior Detection and Model Assessment. Two data sets, classifier-data-asgn2.csv and regressor-data-asgn2.csv, were given.
I used Excel for these calculations, and for the last metric (A' or AUC) I downloaded a plugin called XLSTAT from http://www.xlstat.com/en/, since SPSS did not give the correct answer. Below I detail the steps I followed to complete this activity of 11 questions. I urge you to save your work at each step, since you may need the answer from a previous step to continue with the next. To better understand the steps I've described, refer to the lecture videos 🙂

#### Q1) Using regressor-data-asgn2.csv, what is the Pearson correlation between data and predicted (model)? (Round to three significant digits; e.g. 0.24675 should be written as 0.247) (Hint: this is easy to compute in Excel)

Use the Excel function CORREL or PEARSON to calculate the Pearson correlation between the two given columns (actual data and predicted model). Round the number you get, rather than truncating it, to get the correct answer.
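If you want to double-check the Excel result outside of Excel, the same calculation can be sketched in plain Python. The sample numbers below are made up for illustration, not taken from the actual data set:

```python
import math

# Pearson correlation, equivalent to Excel's CORREL/PEARSON.
def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

data = [1.0, 2.0, 3.0, 4.0]    # toy "actual" column
model = [1.1, 1.9, 3.2, 3.9]   # toy "predicted" column
print(round(pearson(data, model), 3))  # → 0.993
```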

#### Q2) Using regressor-data-asgn2.csv, what is the RMSE between data and predicted (model)? (Round to three significant digits; e.g. 0.24675 should be written as 0.247) (Hint: this is easy to compute in Excel)

Calculate the residuals (the differences between the actual data and the predicted model values) and use that array in the formula below:
=SQRT(SUMSQ(A2:A1001)/COUNTA(A2:A1001))
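The Excel formula above can be sketched in Python as well; the toy values here are only to show the shape of the calculation:

```python
import math

# RMSE, equivalent to =SQRT(SUMSQ(residuals)/COUNTA(residuals)) in Excel.
def rmse(actual, predicted):
    residuals = [a - p for a, p in zip(actual, predicted)]
    return math.sqrt(sum(r * r for r in residuals) / len(residuals))

# Toy example: residuals are 3 and 4, so RMSE = sqrt((9 + 16) / 2)
print(round(rmse([3.0, 0.0], [0.0, -4.0]), 3))  # → 3.536
```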

#### Q3) Using regressor-data-asgn2.csv, what is the MAD between data and predicted (model)? (Round to three significant digits; e.g. 0.24675 should be written as 0.247) (Hint: this is easy to compute in Excel)

Take the absolute values of the residuals from the previous step, for example with the array formula =ABS(RMSE!A2:A1001), and average them.
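The same idea in Python, again with made-up example values:

```python
# Mean Absolute Deviation: average of the absolute residuals.
def mad(actual, predicted):
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Toy example: absolute residuals are 1 and 2, so MAD = 1.5
print(mad([1.0, 2.0], [2.0, 4.0]))  # → 1.5
```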

#### Q4) Using classifier-data-asgn2.csv, what is the accuracy of the predicted (model)? Assume a threshold of 0.5. (Just give a rounded value rather than including the decimal; e.g. write 57.213% as 57) (Hint: this is easy to compute in Excel)

Create a column of predicted labels based on the given threshold of 0.5 (if the model's confidence is greater than 0.5, predict Y; otherwise N). Compare these predicted labels with the actual labels in the data to find the number of agreements. Then calculate "= number of agreements / total count" for the accuracy.
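This thresholding-and-counting step can be sketched in Python; the labels and confidences below are invented examples:

```python
# Accuracy of a thresholded classifier: fraction of predicted labels
# that agree with the actual labels.
def accuracy(labels, confidences, threshold=0.5):
    predicted = ["Y" if c > threshold else "N" for c in confidences]
    agreements = sum(1 for p, actual in zip(predicted, labels) if p == actual)
    return agreements / len(labels)

# Toy example: predictions are Y, N, N, Y → two agree with the data.
print(accuracy(["Y", "N", "Y", "N"], [0.9, 0.2, 0.4, 0.6]))  # → 0.5
```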

#### Q5) Using classifier-data-asgn2.csv, how well would a detector perform, if it always picked the majority (most common) class? (Just give a rounded value rather than including the decimal; e.g. write 57.213% as 57) (Hint: this is easy to compute in Excel)

A detector that always picks the majority class is right exactly as often as that class appears, so calculate "= count of the most common class / total count". You can reuse the label counts from the previous step.
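In Python, this majority-class baseline is a one-line count over the actual labels (example labels invented):

```python
from collections import Counter

# Accuracy of a detector that always predicts the most common label.
def majority_class_accuracy(labels):
    most_common_count = Counter(labels).most_common(1)[0][1]
    return most_common_count / len(labels)

# Toy example: "N" appears 3 times out of 4.
print(majority_class_accuracy(["N", "N", "N", "Y"]))  # → 0.75
```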

#### Q6) Is this detector’s performance better than chance, according to the accuracy and the frequency of the most common class?

Answer Yes or No based on the values you got in the previous two steps.

#### Q7) What is this detector’s value for Cohen’s Kappa? Assume a threshold of 0.5. (Just round to the first two decimal places; e.g. write 0.74821 as 0.75).

I counted the agreements and disagreements between the data and the prediction model to form the confusion matrix: True Negatives (TN: data N, predicted N), True Positives (TP: data Y, predicted Y), False Positives (FP: data N, predicted Y), and False Negatives (FN: data Y, predicted N). I listed these in cells O5 to O8 respectively, and then used this formula:
=((O5+O6)-((((O6+O7)*(O6+O8))/SUM(O5:O8))+(((O5+O7)*(O5+O8))/SUM(O5:O8))))/((SUM(O5:O8))-((((O6+O7)*(O6+O8))/SUM(O5:O8))+(((O5+O7)*(O5+O8))/SUM(O5:O8))))

Alternatively, you may apply the values from your confusion matrix to any online calculator for Cohen’s Kappa.
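The long Excel formula is easier to read when written out: Kappa compares the observed agreement against the agreement expected by chance. Here is a sketch in Python, with the four cell references replaced by named counts (the toy numbers are invented):

```python
# Cohen's Kappa from confusion-matrix counts.
# Mirrors the Excel formula: O5=TN, O6=TP, O7=FP, O8=FN.
def cohens_kappa(tn, tp, fp, fn):
    n = tn + tp + fp + fn
    observed = tp + tn                      # agreements (O5 + O6)
    expected = ((tp + fp) * (tp + fn) +     # chance agreement on Y
                (tn + fn) * (tn + fp)) / n  # chance agreement on N
    return (observed - expected) / (n - expected)

# Toy example: 80 agreements out of 100, 50 expected by chance.
print(cohens_kappa(40, 40, 10, 10))  # → 0.6
```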

#### Q8) What is this detector’s precision, assuming we are trying to predict “Y” and assuming a threshold of 0.5? (Just round to the first two decimal places; e.g. write 0.74821 as 0.75).

Use the formula: Precision = TP / (TP + FP)

#### Q9) What is this detector’s recall, assuming we are trying to predict “Y” and assuming a threshold of 0.5? (Just round to the first two decimal places; e.g. write 0.74821 as 0.75).

Use the formula: Recall = TP / (TP + FN)
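Both formulas reuse the confusion-matrix counts from Q7. A quick Python sketch, with invented counts for illustration:

```python
# Precision: of the examples the detector labeled "Y", how many really were Y?
def precision(tp, fp):
    return tp / (tp + fp)

# Recall: of the examples that really were "Y", how many did the detector catch?
def recall(tp, fn):
    return tp / (tp + fn)

# Toy example: 30 true positives, 10 false positives, 20 false negatives.
print(precision(30, 10))  # → 0.75
print(recall(30, 20))     # → 0.6
```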

#### Q10) Based on the precision and recall, should this detector be used for strong interventions that have a high cost if mis-applied, or fail-soft interventions with low benefit and a low cost if mis-applied?

Select the correct option from the list of options.

#### Q11) What is this detector’s value for A’? (Hint: There are some data points with the exact same detector confidence, so it is probably preferable to use a tool that computes A’, such as http://www.columbia.edu/~rsb2162/computeAPrime.zip — rather than a tool that computes the area under the ROC curve).

I used the ROC Curve tool from the XLSTAT plugin to compute the Area Under the Curve (AUC) in Excel.
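For intuition: A' is the probability that a randomly chosen positive example gets a higher detector confidence than a randomly chosen negative one, with ties counting as half — which is exactly why tied confidences (mentioned in the hint) need careful handling. A brute-force pairwise sketch in Python, with invented labels and confidences:

```python
# A' via pairwise comparison of confidences; ties between a positive and a
# negative example count as 0.5 rather than a full win.
def a_prime(labels, confidences, positive="Y"):
    pos = [c for label, c in zip(labels, confidences) if label == positive]
    neg = [c for label, c in zip(labels, confidences) if label != positive]
    wins = 0.0
    for p in pos:
        for q in neg:
            if p > q:
                wins += 1.0
            elif p == q:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Toy example: one tied pair (0.5 vs 0.5) contributes half a win.
print(a_prime(["Y", "Y", "N", "N"], [0.9, 0.5, 0.5, 0.1]))  # → 0.875
```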

To compute A’ without an ROC curve, you may follow the steps our co-learner listed in his blog:

Hope this helps you to reach this screen! 🙂
