Many in the cybersecurity industry use the terms vulnerability, threat, and risk as though they were synonymous, but they are not. Understanding the differences, and using each term correctly and consistently, is an important part of building a more systematic and defensible cybersecurity strategy.
A vulnerability is a weakness or flaw in a computing device. Vulnerabilities can arise from flaws in the way the hardware is designed, such as those dubbed Spectre and Meltdown; from flaws in the software, such as those creating SQL injection vulnerabilities; or from simply being connected to a network, such as denial-of-service vulnerabilities.
An exploit is a tool or technique an attacker uses to take advantage of a vulnerability in order to achieve a goal. Such goals can include causing commands to be executed by the victim’s computer, retrieving data from a database without authorization, and causing a device to stop providing service to others.
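For instance, a SQL injection vulnerability and the exploit that takes advantage of it can be sketched in a few lines of Python. The table, column names, and inputs below are illustrative assumptions, not taken from any real system:

```python
import sqlite3

# Hypothetical users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def find_user_unsafe(username):
    # Vulnerable: attacker-controlled input is concatenated into the SQL text,
    # so the input can change the meaning of the query itself.
    query = f"SELECT username FROM users WHERE username = '{username}'"
    return conn.execute(query).fetchall()

# Normal input behaves as expected.
print(find_user_unsafe("alice"))        # [('alice',)]

# The exploit: a crafted input rewrites the WHERE clause so it is always
# true, and the query returns every row without authorization.
print(find_user_unsafe("' OR '1'='1"))  # [('alice',), ('bob',)]
```

Here the weakness (string concatenation) is the vulnerability, and the crafted input string is the exploit.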
A patch is a software-based update that fixes a vulnerability. A properly patched vulnerability cannot be exploited.
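As a sketch of what fixing such a flaw looks like in code, consider a SQL injection vulnerability again (the table and column names below are hypothetical). Replacing string concatenation with a parameterized query closes the hole, because the database driver treats the input strictly as data, never as SQL:

```python
import sqlite3

# Hypothetical users table, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT)")
conn.executemany("INSERT INTO users VALUES (?)", [("alice",), ("bob",)])

def find_user_safe(username):
    # Patched: the ? placeholder binds the input as a value, so no input
    # string can alter the structure of the query.
    return conn.execute(
        "SELECT username FROM users WHERE username = ?", (username,)
    ).fetchall()

print(find_user_safe("alice"))        # [('alice',)]
print(find_user_safe("' OR '1'='1"))  # [] -- the injection no longer works
```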
A threat is a vulnerability that can be exploited. It is important to note that the mere existence of an exploit is not enough for a vulnerability to become a threat. The threat actor (i.e., criminal or hacker) must have the ability to use the exploit on the vulnerability before the combination can become a threat.
Risk is usually expressed as the product of the likelihood that a vulnerability will be exploited and the severity or impact of that exploitation. Risks can be expressed in a variety of ways, including simple ordinals (e.g., low, medium, and high) or as a quantity (e.g., using techniques described by the FAIR Institute).
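As a minimal sketch of that product, assuming an arbitrary 0–1 likelihood scale, a 1–10 impact scale, and example thresholds for the ordinal labels (all of these scales and cutoffs are illustrative assumptions, not a standard):

```python
def risk_score(likelihood, impact):
    """Risk as the product of likelihood (0-1) and impact (1-10)."""
    return likelihood * impact

def risk_level(score, low=2.0, high=5.0):
    # Map the quantity onto simple ordinals; thresholds are arbitrary examples.
    if score < low:
        return "low"
    if score < high:
        return "medium"
    return "high"

score = risk_score(likelihood=0.6, impact=8)
print(score)              # 4.8
print(risk_level(score))  # medium
```

A quantitative approach like FAIR refines both factors considerably (e.g., modeling them as distributions rather than point values), but the underlying likelihood-times-impact structure is the same.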