Large language models and generative AI, such as ChatGPT, can be helpful tools but can also open organizations up to unintended risks if used improperly. 

Here, we talk to Christopher Hodson, Chief Security Officer for Cyberhaven.

Security magazine: Tell us about your title and background.

Hodson: I am currently Chief Security Officer for Cyberhaven, a leader in Data Detection and Response (DDR). I oversee all facets of security, both for our customers and for our own company and employees.

I am also an executive member of the Comprehensive Cyber Capabilities Working Group (C3WG), which last year published the first Data Security Maturity Model (DSMM). The DSMM aligns with the NIST Cybersecurity Framework and the Cyber Defense Matrix, and is designed to give security leaders a comprehensive list of the capabilities needed to secure data across their organizations.

I am a Fellow of the Chartered Institute of Security Professionals and hold accreditations for the Certified Information Systems Security Professional (CISSP), Certified Ethical Hacker (CEH), BCS certificates for Enterprise and Solution Architecture, and CompTIA Advanced Security Practitioner (CASP+) status. I’m also a Certified Blockchain Professional (C|BP). I started my journey into systems engineering with Microsoft Certified Systems Engineer (MCSE) and CompTIA A+ and Network+ accreditations.

In the last 20 years, I have held a number of security and IT roles at companies such as Contentful, Zscaler, Tanium and Visa. I love working with early-stage security start-ups and provide board advisory and product development services to various companies.

As for academia, I earned a master’s degree in cybersecurity from Royal Holloway, a public research-focused university that is part of the University of London. In Europe, Royal Holloway is considered somewhat analogous to MIT in the US.

Security magazine: In remote work environments, how can CISOs address the increased potential for insider threats?

Hodson: The pandemic and the Great Resignation prompted a massive wave of data exfiltration incidents in the last few years. Research conducted last year revealed shocking proof that companies are hemorrhaging data through their own employees. For example, we found that nearly one in ten employees will exfiltrate sensitive data in a six-month period.

The fact is most organizations have a hard time understanding their data flows — i.e., where data resides, how employees are sharing information, whether it’s leaving company walls, etc. 

This was made worse by the shift away from the data center: legacy, data-center-centric security only accounts for data that remains within company walls.

To mitigate this problem, CISOs first need to identify every sensitive repository of information. They then need to ensure that data protection controls apply irrespective of network location; this last point is critical.

These controls should be implemented at the endpoint in order to mitigate the data visibility issues brought about by ubiquitous encryption.
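
To make that concrete, below is a minimal sketch of the kind of check an endpoint agent might run when a user pastes content into a browser. Everything here is illustrative: the patterns, the domain list and the function names are invented, and real endpoint tooling relies on much richer signals, such as data lineage and content fingerprinting, rather than regexes alone.

```python
import re

# Illustrative patterns only; real products use lineage, fingerprinting
# and ML classifiers rather than a handful of regexes.
SENSITIVE_PATTERNS = {
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "aws_access_key": re.compile(r"\bAKIA[0-9A-Z]{16}\b"),
    "private_key": re.compile(r"-----BEGIN (?:RSA )?PRIVATE KEY-----"),
}

# Destinations treated as uncontrolled generative AI endpoints (hypothetical list).
GENAI_DOMAINS = {"chat.openai.com", "chatgpt.com"}

def inspect_paste(content: str, destination_domain: str) -> dict:
    """Classify a paste event: which sensitive markers it contains and
    whether the destination is an uncontrolled generative AI tool."""
    matches = [name for name, pattern in SENSITIVE_PATTERNS.items()
               if pattern.search(content)]
    return {
        "destination": destination_domain,
        "matches": matches,
        "block": bool(matches) and destination_domain in GENAI_DOMAINS,
    }

if __name__ == "__main__":
    event = inspect_paste(
        "debug log: customer SSN 123-45-6789, key AKIAABCDEFGHIJKLMNOP",
        "chat.openai.com",
    )
    print(event)
    # {'destination': 'chat.openai.com', 'matches': ['ssn', 'aws_access_key'], 'block': True}
```

Because a check like this runs on the endpoint, it sees the content before TLS encryption hides it from network-based inspection, which is exactly the visibility gap described above.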

Taking a broader view, organizations need to raise awareness that, in many cases, insider threats are more insidious than external threats. It always starts with awareness, and it has to be at the business level. Then organizations need to define a set of incident response playbooks that include insider risk use cases, not just external threats. After that comes adopting the right technologies.

Security magazine: What are the potential risks associated with using chat platforms, such as ChatGPT, in terms of data and IP leaks? 

Hodson: The widespread adoption of ChatGPT since its debut late last year has been nothing short of impressive. But it’s also accelerated data exfiltration incidents. According to research, as of June 1 of this year, 10.8% of employees had used ChatGPT in the workplace and 8.6% had pasted company data into it since it launched.

This problem is particularly relevant for companies that allow employees to freely use the public version of ChatGPT with no controls. But it isn’t the only problem.

We now have virtually every company adopting some form of large language model, from OpenAI’s off-the-shelf models to open-source models like Llama 2. Many organizations are also building their own custom models. And all these models need to be trained, which raises the question: can organizations be certain those models aren’t being trained on sensitive data?
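
One practical mitigation is to gate training data on its provenance. The sketch below is hypothetical (the record fields and labels are invented) but shows the shape of the idea: only records with documented consent and a non-sensitive classification are allowed into a fine-tuning corpus.

```python
from dataclasses import dataclass

@dataclass
class Record:
    """One candidate training example plus its provenance metadata.
    Field names are illustrative, not from any particular product."""
    text: str
    source: str          # where the text came from
    consent: bool        # was collection consented to?
    classification: str  # e.g. "public", "internal", "restricted"

def eligible_for_training(record: Record) -> bool:
    # Keep only consented, publicly classified content.
    return record.consent and record.classification == "public"

corpus = [
    Record("Product FAQ text...", "docs-site", consent=True, classification="public"),
    Record("Customer support transcript...", "meeting-recording", consent=False, classification="internal"),
]

train_set = [r.text for r in corpus if eligible_for_training(r)]
print(f"{len(train_set)} of {len(corpus)} records eligible for training")
```

The hard part in practice is populating that consent metadata honestly, which is where the recent terms-of-service disputes come in.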

The lines are currently blurred. For example, Zoom recently sparked outrage over terms of service that essentially allowed the company to use some customer content to train its AI. Zoom walked that back after public complaints, but this won’t be the last such case.

Security magazine: How can CISOs ensure that security policies and procedures adequately address the unique risks posed by these chat interactions?

Hodson: The major challenge here is supply chain assurance. Your organization can have the tightest controls, but you can’t be certain that all your subsidiaries and partners follow similar guidelines.

Every company today is leveraging Generative AI in some way. How do we know the models they are using have been properly evaluated? How do we know they are not prone to hallucinations? How do we know they are trained on data obtained with consent? Unfortunately, most organizations can’t answer these questions.

It is the Wild West in this respect. Companies need to start by considering Generative AI usage as part of their supply chain due diligence. Security teams need to ask their suppliers how and where they’re using AI solutions, and what data privacy practices surround them.

Both OWASP and NIST have created guidelines on how to operate and risk-manage AI solutions; see, for example, the OWASP Top 10 for Large Language Model Applications and the NIST AI Risk Management Framework. Security practitioners need to stay on top of compliance frameworks and industry baselines for AI.

Security magazine: What can CISOs do to help prevent insider threats, especially in the context of ChatGPT?

Hodson: Start by implementing policies around data use, supported by continuous, relevant security education and awareness.

Next, look for solutions that consider data risk from cradle to grave. These need to be able to map data flows and understand the context of the data in question. This way, security teams can understand risk and prioritize what to focus on. Not all data is created equal; you need to know what the crown jewels are, where they reside, and how they are moving through the organization. Only then can organizations stop sensitive data from leaving company walls.
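
As a sketch of what "mapping data flows" means in practice, the toy example below (file names, destinations and labels are all invented) tags a crown-jewel document, propagates that label to derived copies, and flags any hop that crosses the company boundary.

```python
from collections import defaultdict

# Hypothetical set of destinations outside company control.
EXTERNAL_DESTINATIONS = {"personal-gmail", "chat.openai.com", "usb-drive"}

class LineageGraph:
    """Toy data lineage tracker: who derived what from what."""

    def __init__(self):
        self.edges = defaultdict(list)  # item -> destinations it flowed to
        self.labels = {}                # item -> sensitivity label

    def tag(self, item: str, label: str) -> None:
        self.labels[item] = label

    def record_flow(self, src: str, dst: str) -> None:
        self.edges[src].append(dst)
        # Derived copies inherit the source's sensitivity label.
        self.labels.setdefault(dst, self.labels.get(src, "unlabeled"))

    def exfiltration_events(self):
        for src, destinations in self.edges.items():
            for dst in destinations:
                if self.labels.get(src) == "crown-jewel" and dst in EXTERNAL_DESTINATIONS:
                    yield (src, dst)

graph = LineageGraph()
graph.tag("q3-roadmap.docx", "crown-jewel")
graph.record_flow("q3-roadmap.docx", "roadmap-summary.txt")   # derived copy
graph.record_flow("roadmap-summary.txt", "chat.openai.com")   # boundary crossing
print(list(graph.exfiltration_events()))
# [('roadmap-summary.txt', 'chat.openai.com')]
```

The point of the label propagation is that risk follows the content, not the file name: the summary never appeared in any inventory, but it inherits the sensitivity of the roadmap it was derived from.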

Security magazine: What role does employee training and awareness play in preventing insider threats?

Hodson: As I mentioned previously, training and education play a massive role. Employees need to understand what cyber hygiene means in the context of data security and what their responsibility is in terms of protecting their employer’s data. 

Also, ongoing communication with employees about policies can paint a fuller picture for security teams, covering not only the what (i.e., the types of data at risk) but also the why (i.e., the root cause) of data security incidents.

Ultimately, winning the hearts and minds of users is a vital step in keeping your company safe from malicious and accidental threats.