## ML-CDS 2019: Challenge

### Evaluation

The evaluation in this challenge will be based on the area under the curve (AUC) measure derived from the receiver operating characteristic (ROC) curve.

In the binary problem of detecting the presence of lines and tubes, the AUC for the positive class (lines/tubes present) will be used for comparison.
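As a quick illustration, the binary AUC for the positive class can be computed directly with scikit-learn's `roc_auc_score`. The labels and scores below are made-up toy values, not challenge data:

```
import numpy as np
from sklearn.metrics import roc_auc_score

# Toy example only: 1 = lines/tubes present, 0 = absent
y_true = np.array([0, 0, 1, 1, 1, 0])
# Predicted probabilities for the positive class
y_score = np.array([0.1, 0.4, 0.35, 0.8, 0.9, 0.3])

# AUC for the positive (lines/tubes present) class
auc_present = roc_auc_score(y_true, y_score)
print("Binary AUC (positive class):", auc_present)
```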

In the multi-label lines and tubes type detection problem, the AUC measure will be computed for each of the 15 labels, and the micro average AUC across all 15 classes will be used for evaluation and comparison of the submitted models.
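For the multi-label case, the micro average pools every (sample, class) decision into a single binary problem before computing the AUC. The sketch below uses random placeholder matrices purely to show the call shape; the `y_test` and `y_score` arrays are stand-ins, not challenge data:

```
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
num_classes = 15
# Random stand-ins for the real ground-truth and prediction matrices
y_test = rng.integers(0, 2, size=(200, num_classes))
y_score = rng.random((200, num_classes))

# Micro-averaging: flatten both matrices and score one binary problem
micro_auc = roc_auc_score(y_test.ravel(), y_score.ravel())
# Equivalent multi-label call
micro_auc_direct = roc_auc_score(y_test, y_score, average='micro')
```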

To compute the AUC, we recommend using the scikit-learn Python library. To evaluate and plot your results, you can follow the examples below. Note that the AUC values in these examples are for illustration purposes only and are not related to the actual classification problem.

```
from sklearn.metrics import roc_curve, auc

num_classes = 15
# Compute the ROC curve and ROC area for each class. 'y_test' is the
# ground-truth matrix of size (num_samples, num_classes) and 'y_score'
# is the predictions matrix with the same dimensions as y_test.
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(num_classes):
    fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i], pos_label=1)
    roc_auc[i] = auc(fpr[i], tpr[i])

# Compute the micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
# Print the micro-average AUC score
print("Micro average AUC: " + str(roc_auc["micro"]))
```
```
Micro average AUC: 0.722630173564753
```
```
import matplotlib.pyplot as plt

lines_and_tubes_present = ['ET tube', 'Enteric tube', 'Tracheostomy tube',
                           'Sternotomy wires', 'Overlying EKG wires',
                           'Left IJ line', 'Right IJ line', 'Left PICC',
                           'Right PICC', 'Left subclavian line',
                           'Right subclavian line', 'Left central line',
                           'Right central line', 'Left chest tube',
                           'Right chest tube']

# Plot the ROC curve for a single class
plt.figure()
class_num = 2
lw = 1
plt.plot(fpr[class_num], tpr[class_num], color='red', lw=lw,
         label='%s present (area = %0.2f)'
               % (lines_and_tubes_present[class_num], roc_auc[class_num]))
plt.plot([0, 1], [0, 1], color='navy', lw=lw, linestyle='--')
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('One class ROC curve')
plt.legend(loc="lower right")
plt.show()

```

```
# Plot all ROC curves
from itertools import cycle

plt.figure()

plt.plot(fpr["micro"], tpr["micro"],
         label='micro-average ROC curve (area = {0:0.2f})'
               .format(roc_auc["micro"]),
         color='deeppink', linestyle=':', linewidth=4)

lw = 1
colors = cycle(['red', 'black', 'green', 'yellow', 'cyan', 'aqua',
                'darkorange', 'cornflowerblue'])
for i, color in zip(range(num_classes), colors):
    plt.plot(fpr[i], tpr[i], color=color, lw=lw,
             label=lines_and_tubes_present[i]
                   + ' (area = {0:0.2f})'.format(roc_auc[i]))

plt.plot([0, 1], [0, 1], 'k--', lw=lw)
plt.xlim([0.0, 1.0])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title('Lines and Tubes Multi-Class ROC')
plt.legend(loc="center right", bbox_to_anchor=(1.8, 0.5))
plt.show()
```