Hermeneutics, a hodge-podge of psychology, sociology, anthropology and philosophy, with a dose of linguistics thrown in for good measure, examines the variables around which we construct and impute meaning to our world. It is more commonly known as interpretation theory. Today, the data points that can inform any interpretation are growing exponentially, outpacing our ability to cognitively wrap our minds around them. This is a challenge in whatever sphere we operate, be it physical or cyber. Mathematical model (algorithmic) bias is one example of how this kind of interpretive error, long experienced in the physical world, raises its head in cyberspace. In these instances, the possibility of committing what cognitive research calls a Group Attribution Error looms large as one of the most pervasive and potentially consequential risks of our day.

Group Attribution Error describes the proclivity of people to believe either that a group’s decision or way of thinking is shared by each member of that group, or that the preferences and characteristics of an individual are representative of the group as a whole. And while the term originated within the discipline of cognitive science, the phenomenon shows up with increasing frequency in daily newscasts and in many aspects of cybersecurity.

The possibility of falling prey to Group Attribution Error is exacerbated both by the blurring of the distinctions we could once confidently draw between an individual and their associated group, and by the cognitive limits of rationality that we as humans carry around with us. There are only so many data points a person can comprehend. Group Attribution Error occurs when we take a convenient shortcut, an abstraction, and formulate an interpretation from a quick glance, failing to consider all the data possibly in play.

To appreciate the potential consequences of such an error, we need look no further than the destruction and violence that ensued when the qualities of one police officer were recently imputed to all police officers. The potential consequences in cybersecurity are no less sobering. Mathematical model, or algorithmic, bias in artificial intelligence (AI) can still occur, at least until we approach a sample size of “all,” i.e. where theoretically N = All. With a sprawling threat landscape that continues to grow exponentially as we move toward a more digitally hyperconnected world, the propensity for Group Attribution Error has never been higher. In our efforts to combat the most error-prone aspects of our human nature, turning to AI, with its growing ability to approach sample sizes of N = All, can potentially save us from the worst parts of ourselves and more robustly secure the cyber realm.
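To make the N = All intuition concrete, here is a toy simulation (purely illustrative; the 3 percent trait rate and the visibility probabilities are invented assumptions, not figures from any real dataset). It estimates how common a trait is within a group, first from the slice of the group that happens to be visible and then from the group as a whole.

```python
import numpy as np

rng = np.random.default_rng(seed=42)

# Illustrative population of 100,000 members of some group, of whom
# only 3% actually exhibit a given trait.
has_trait = rng.random(100_000) < 0.03

# A "quick glance": we only observe the cases that come to our attention,
# and trait-bearers are far more likely to be noticed than everyone else.
noticed = rng.random(100_000) < np.where(has_trait, 0.9, 0.01)
glance_estimate = has_trait[noticed].mean()

# N = All: examine every member instead of just the visible slice.
full_estimate = has_trait.mean()

print(f"rate inferred from the visible slice: {glance_estimate:.1%}")
print(f"rate across the whole group (N=All):  {full_estimate:.1%}")
```

Because the visible slice over-represents the trait, the quick-glance estimate is wildly inflated; only as the sample approaches N = All does the group-level inference match reality, which is the statistical shape of Group Attribution Error.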

As I’ve written in the past, utilizing AI-driven predictive capabilities and technologies will further augment cyber defenses and help identify and prevent cyber intrusions before they become unacceptably consequential. As malware advances and mutates daily, it’s imperative to appreciate that while what we knew yesterday can help us proactively predict and prevent, pre-execution (that is, take yesterday’s data and apply it to interpreting related instances of malware moving forward), Group Attribution Error can remain a concern because of the learning bias that limited sample sizes can let creep into our math models. We cannot, consequently, rest on our laurels and become cavalier in applying what we think we know about the known and its predictive relationship to the unknown; that is a job better suited to the algorithms of AI. This technology can respond in real time and take in many more data points than humans can cognitively process. And because AI doesn’t suffer from the innately human effects of stress, fatigue or burnout, variables that often push us into Group Attribution Error, it can better manage a threat landscape under a constant onslaught of cyber assaults. AI remains, however, a human invention, and as such we must not allow our biases to creep unconsciously into its operations. This calls for a stronger commitment within our cybersecurity community to a new consideration of diversity, one that stands outside the historic biases unduly influenced by Group Attribution Error.
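To sketch how that learning bias plays out in a model (a hedged, synthetic example; the “packed” and “suspicious API” features, the class sizes and the scikit-learn classifier are my own assumptions for illustration, not the workings of any particular product), consider a detector trained on yesterday’s corpus, in which nearly every packed file happened to be malware:

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(seed=7)

# Two illustrative features per file: [uses_packer, suspicious_api_count].
def make(n, packed, api_rate):
    packer = np.full(n, packed)
    apis = rng.poisson(api_rate, n)
    return np.column_stack([packer, apis])

# "Yesterday's" corpus: benign packed files are badly under-represented,
# so the data teaches the model that the *group* (packed files) is bad.
X_train = np.vstack([
    make(900, packed=0, api_rate=1),   # benign, unpacked
    make(20,  packed=1, api_rate=1),   # benign, packed (scarce in the sample)
    make(400, packed=1, api_rate=8),   # malware, packed
])
y_train = np.array([0] * 900 + [0] * 20 + [1] * 400)   # 0 = benign, 1 = malicious

# A deliberately shallow model: it can learn only one rule, and the cheapest
# rule in this skewed corpus is "packed means malicious."
model = DecisionTreeClassifier(max_depth=1, random_state=0).fit(X_train, y_train)

# In the wild, plenty of legitimate installers are packed. The model,
# having learned the group-level shortcut, flags them anyway.
benign_packed = make(1_000, packed=1, api_rate=1)
false_positive_rate = model.predict(benign_packed).mean()
print(f"false-positive rate on benign packed files: {false_positive_rate:.1%}")
```

The model imputes the group’s reputation to every individual packed file it meets, legitimate installers included. Broader, more representative sampling, and scrutiny of the features we allow our models to lean on, is what keeps this machine version of Group Attribution Error in check.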