Skills and competencies that support the function of intelligence analysis within security organizations have always been in high demand. Today these capabilities are even more sought after, as the use of artificial intelligence (AI) becomes commonplace in organizations.

The ability to quickly assess whether an identified risk is real or a figment of a computer’s imagination is a recent addition to the already full portfolio of services provided by security departments. Security leaders are seeking candidates who can analyze and investigate within this fluid AI environment.


Working with AI in intelligence

The security function is increasingly viewed as an organizational partner that works to align the elements of security and risk programs with the enterprise’s strategic initiatives. Intelligence programs within security portfolios continue to gain attention as an invaluable information source for leadership. These programs are key drivers of resilience and critical to the development of mitigation programs.

The continued success of these efforts will depend on the thoughtful evaluation and implementation of new tools that aid not only in the collection of large amounts of AI-generated information, but also in its accurate evaluation and assessment. Reliable analysis depends heavily on the ability to do both.


Examine your information sources

The “GPT” in ChatGPT is an acronym for Generative Pre-trained Transformer. Used online via the AI provider’s stand-alone interface or integrated within specialty program tools, it can sift through massive quantities of collected information to provide answers to user questions without, in theory, being influenced by commercial advertising.

While this is an extremely valuable tool for research, it is based on information provided by individuals, organizations and governments, some of whose intentions may be misleading. Compounding this, verifying the accuracy of gathered data is becoming more difficult as the speed and sophistication of societal communication patterns increase.

Information contained in these AI bodies of research can be willfully false, misleading, polarizing, narcissistic and reckless, with little regard for verification. Security leaders will need to drive heightened organizational awareness of this risk. Vetting the accuracy of the underlying sources of decision-critical information is paramount.
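One practical way to begin vetting is to check whether a claim surfaced during collection is corroborated by more than one independent source before it informs a decision. The sketch below illustrates that idea in minimal form; the function name, data and source labels are all hypothetical, not part of any real intelligence tool or feed.

```python
# Hypothetical sketch of a source-corroboration check: a claim that
# appears in only one collected source is flagged for manual vetting.
# All names and sample data are illustrative.
from collections import defaultdict

def corroboration_report(findings):
    """findings: list of (claim, source) pairs gathered during collection."""
    sources_by_claim = defaultdict(set)
    for claim, source in findings:
        sources_by_claim[claim].add(source)
    # A claim backed by two or more distinct sources is treated as
    # corroborated; a single-source claim is routed to an analyst.
    return {
        claim: ("corroborated" if len(srcs) > 1 else "needs vetting")
        for claim, srcs in sources_by_claim.items()
    }

findings = [
    ("Facility breach reported at Site A", "regional news wire"),
    ("Facility breach reported at Site A", "internal incident log"),
    ("Executive travel itinerary leaked", "anonymous forum post"),
]
report = corroboration_report(findings)
```

A real program would weight sources by reliability rather than simply counting them, but even this coarse triage separates claims worth acting on from ones that may be an AI hallucination.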


Fight disinformation

AI-created disinformation has the potential to negatively impact all departments in an organization. This influence and the weaponization of false data mean security leaders will be seeking emotionally mature talent who possess the critical thinking skills necessary to put programs and processes in place to provide appropriate levels of due diligence.

The need for counterintelligence programs will also increase. AI-created disinformation means organizations must guard against reputational damage; the elicitation, insertion, manipulation or misappropriation of company data; and intentionally engineered employee violence or involvement in violent groups.

Disinformation has always been a tool for influence, marketing and politics. The rapidity with which it spreads will only increase. It is more critical than ever for security leaders to understand the potential threats and develop the skills and competencies to manage the risks of AI hallucinations.