Confusion Matrix — How It Connects to Cybersecurity
In the field of machine learning, and specifically in the problem of statistical classification, the confusion matrix is a fairly common term. Today I will try to relate it to cybersecurity.
The confusion matrix is a classification metric that tells us how well our model is performing, and its underlying ideas show up in many places that never mention it by name.
It is a performance measurement for machine learning classification problems where the output can be two or more classes. For a binary problem, it is a table with 4 different combinations of predicted and actual values.
True Positive: This cell holds the number of cases, out of the total, that are positive in the actual data and that the machine correctly predicted as positive.
False Positive: This cell holds the number of cases, out of the total, that are negative in the actual data but that the machine incorrectly predicted as positive (a Type I error).
False Negative: This cell holds the number of cases, out of the total, that are positive in the actual data but that the machine incorrectly predicted as negative (a Type II error).
True Negative: This cell holds the number of cases, out of the total, that are negative in the actual data and that the machine correctly predicted as negative, i.e., a correct prediction.
So this gives an idea of what the four boxes in the confusion matrix represent.
What makes the confusion matrix so useful is that it separates and distinguishes Type I and Type II errors.
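The four counts above can be computed directly from paired lists of actual and predicted labels. Here is a minimal sketch in plain Python; the label lists are made up purely for illustration:

```python
# Count the four confusion-matrix cells for a binary problem,
# where 1 = positive (e.g. "file is a virus") and 0 = negative.
def confusion_counts(actual, predicted):
    tp = fp = fn = tn = 0
    for a, p in zip(actual, predicted):
        if a == 1 and p == 1:
            tp += 1  # true positive: actual positive, predicted positive
        elif a == 0 and p == 1:
            fp += 1  # false positive: actual negative, predicted positive
        elif a == 1 and p == 0:
            fn += 1  # false negative: actual positive, predicted negative
        else:
            tn += 1  # true negative: actual negative, predicted negative
    return tp, fp, fn, tn

# Hypothetical labels, just to show the layout of the counts.
actual    = [1, 0, 1, 1, 0, 0, 1, 0]
predicted = [1, 1, 1, 0, 0, 0, 1, 0]
print(confusion_counts(actual, predicted))  # (3, 1, 1, 3)
```

Libraries such as scikit-learn provide the same table via `sklearn.metrics.confusion_matrix`, but the counting itself is no more than the four branches above.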
High accuracy is always the goal, be it in machine learning or any other field. But does high accuracy always mean better results? In most cases the answer is yes, but let me give an example where we have to go beyond the common notion that higher accuracy is blindly better.
Let's say an anti-virus company released an AI-based anti-virus that flags suspicious files, and this model gives 97 percent accuracy. Suppose the model is running on your PC while you work on the next big thing. You just created an executable script that is crucial to you, but the AI model gave a "FALSE POSITIVE" and flagged your file as a virus.
On the other hand, suppose you downloaded a few music videos that contained a malicious package, but the model was unable to detect it and gave a "FALSE NEGATIVE".
So now you have a choice: which type of model would you prefer? The mere existence of a choice here means that accuracy alone doesn't suffice in some cases, because in both scenarios the accuracy remained the same.
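The point that identical accuracy can hide very different error profiles is easy to make concrete. The two prediction lists below are invented so that both models misclassify exactly 3 of 100 files, but one only raises false alarms while the other only misses malware:

```python
# 100 files: the first 50 are malicious (1), the rest benign (0).
actual = [1] * 50 + [0] * 50

# Model A: catches all malware, but flags 3 benign files (3 false positives).
model_a = [1] * 50 + [1, 1, 1] + [0] * 47

# Model B: never flags benign files, but misses 3 malicious ones
# (3 false negatives).
model_b = [0, 0, 0] + [1] * 47 + [0] * 50

def accuracy(actual, predicted):
    return sum(a == p for a, p in zip(actual, predicted)) / len(actual)

print(accuracy(actual, model_a))  # 0.97
print(accuracy(actual, model_b))  # 0.97
# Same 97% accuracy, yet Model A only annoys the user with false alarms,
# while Model B lets real malware through.
```

Accuracy cannot distinguish the two; only the confusion matrix, which counts false positives and false negatives separately, exposes the difference.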
So you should now have a sense of the importance of the two types of error in the confusion matrix and what they mean.
Cybercrime can take many forms, such as:
- Stealing personal data
- Identity theft
- Stealing organizational data
- Stealing bank card details
- Hacking emails to gain information
The trade-off between Type I and Type II errors is very critical in cybersecurity. Let's take another example. Consider a face recognition system installed at the entrance of a data warehouse that holds critical data. The manager arrives and the recognition system fails to recognize him. He tries to log in again and is allowed in.
This seems like a fairly normal scenario. But consider another situation: a new person arrives and tries to log himself in. The recognition system makes an error and allows him in. Now, this is very dangerous. An unauthorized person has made an entry, which could be very damaging to the whole company.
In both cases, the security system made an error. But if we treat "positive" as a recognized, authorized match, then the rejected manager is a false negative and the admitted intruder is a false positive. The tolerance for false positives here is 0, although we can still bear false negatives.
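Because plain accuracy cannot express this asymmetric tolerance, metrics derived from the confusion matrix are used instead. Treating "positive" as a recognized, authorized match (so an admitted intruder counts as a false positive), precision drops when false acceptances occur and recall drops when false rejections occur. A small sketch with invented counts:

```python
def precision(tp, fp):
    # Of everyone the system admitted, how many were truly authorized?
    # Every false acceptance (false positive) drags this down.
    return tp / (tp + fp)

def recall(tp, fn):
    # Of all truly authorized people, how many were admitted?
    # Every false rejection (false negative) drags this down.
    return tp / (tp + fn)

# Hypothetical counts for the door system over a month:
# 97 authorized entries admitted, 1 intruder admitted, 2 managers rejected.
tp, fp, fn = 97, 1, 2
print(round(precision(tp, fp), 3))  # 0.99
print(round(recall(tp, fn), 3))     # 0.98
# For this door, precision is the number to push toward 1.0,
# since the tolerance for false acceptances is zero.
```

For the antivirus example earlier, the priorities reverse: a missed infection (false negative) is the costly error, so recall becomes the metric to maximize.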
This shows how the trade-off we want between the two types of error varies critically from use case to use case.