As another year comes to a close, cybersecurity leaders are not only looking back and reviewing the top trends of 2023, but considering what the future holds for 2024. Topics top of mind for many in the industry is passwordless authentication adoption, generative AI, training, phishing, and more.

Here, cybersecurity leaders share some of their thoughts.

Adoption of passwordless authentication

Joseph Carson, chief security scientist and Advisory CISO at Delinea:

Multi-Factor Authentication (MFA) will become a standard requirement for most online services and applications. Traditional methods like SMS-based MFA will decline in favor of more secure options, such as time-based one-time passwords (TOTP) generated by authenticator apps. The move toward passwordless authentication will continue, reducing reliance on traditional passwords. Methods like passkeys, biometrics, hardware tokens, or public-key cryptography will replace or supplement passwords for access to accounts and systems.

Ricardo Amper, Founder and CEO at Incode:

Both enterprises and consumers are increasingly adopting passwordless solutions across various sectors, and Google's recent policy change underscores the growing demand for seamless and highly secure authentication methods. This transition from traditional passwords empowers individuals to take greater control of their data, especially in response to the ever-evolving landscape of cyber threats.

Passkeys offer several advantages, starting with the elimination of the need to remember passwords, as users can log into accounts and apps using their unique biometrics rather than an easily stolen, traditional password. This biometric verification method not only enhances security but also simplifies the login process.

Furthermore, the versatility of passkeys allows users to employ the same biometric verification method across multiple devices and accounts. This unified approach creates a seamless and efficient means of unlocking devices and accessing various accounts with ease.

Transitioning to a passwordless mindset may appear unconventional, as it requires users to change their habits. However, the enhanced security and the seamless experience it offers reduce the learning curve, making the transition more user-friendly.

Cybersecurity will be a higher priority for law firms

Michael Mumcuoglu, CEO and co-founder at CardinalOps:

More than a quarter of law firms in a 2022 American Bar Association survey said they had experienced a data breach — and a recent report published by the UK’s National Cyber Security Centre (NCSC) found that nearly 75% of the UK's top-100 law firms have been affected by cyberattacks. Today’s cybersecurity realities are increasingly recognized by professionals at law firms: highly sensitive data, a continuously evolving threat landscape and an ever-increasing attack surface in corporate environments.

For nearly any law firm, part of the ‘big picture’ approach to cybersecurity includes an ability to scale detection and response capabilities. Being able to evaluate and optimize their detection posture is key towards building a successful cybersecurity operation. Other areas of focus that law firms will prioritize in 2024 include their improvement of threat detection coverage for sensitive internal and client data — while reducing risk and vulnerabilities for systems specific to how they do business — for example, document and file sharing software.   

Artificial intelligence and large language models

Patrick Harr, CEO at SlashNext:

Phishing and BEC attacks are becoming more sophisticated because attackers are using personal information pulled from the Dark Web (stolen financial information, social security numbers, addresses, etc.), LinkedIn and other internet sources to create targeted personal profiles that are highly detailed and convincing. They also use trusted services such as Outlook.com or Gmail for greater credibility and legitimacy. And finally, cybercriminals have moved to more multi-stage attacks in which they first engage by email, but then convince victims to speak or message with them over the phone where they can create more direct verbal trust, foster a greater sense of urgency, and where victims have less protection. They are using AI to generate these attacks, but often with the goal to get you on the phone with a live person.  

We should also expect the rise of 3D attacks, meaning not just text but also voice and video. This will be the new frontier of phishing. We are already seeing highly realistic deep fakes or video impersonations of celebrities and executive leadership. As this technology becomes more widely available and less expensive, criminals will leverage to impersonate trusted contacts of their intended victims. In 2024 we will assuredly see a rise of 3D phishing and social engineering that combines the immersion of voice, video, and text-based messages.

Drew Perry, Chief Innovation Officer at Ontinue:

I expect to see a major breach of an AI company’s training data exposing the dark side of large language models (LLM) and the personal data they hold that were scraped from open sources. Likely left in an exposed S3 bucket or user error. Comparable to the data brokers of the advertising industry leading to new regulation and privacy controls.

Dr. Ian Pratt, Global Head of Security for Personal Systems at HP Inc.:

One of the big trends we expect to see in 2024 is a surge in use of generative AI to make phishing lures much harder to detect, leading to more endpoint compromise. Attackers will be able to automate the drafting of emails in minority languages, scrape information from public sites — such as LinkedIn — to pull information on targets and create highly-personalized social engineering attacks en masse. Once threat actors have access to an email account, they will be able to automatically scan threads for important contacts and conversations, and even attachments, sending back updated versions of documents with malware implanted, making it almost impossible for users to identify malicious actors. Personalizing attacks used to require humans, so having the capability to automate such tactics is a real challenge for security teams. Beyond this, continued use of ML-driven fuzzing, where threat actors can probe systems to discover new vulnerabilities. We may also see ML-driven exploit creation emerge, which could reduce the cost of creating zero day exploits, leading their greater use in the wild.

Simultaneously, we will see a rise in ‘AI PC’s’, which will revolutionize how people interact with their endpoint devices. With advanced compute power, AI PCs will enable the use of “local Large Language Models (LLMs)” — smaller LLMs running on-device, enabling users to leverage AI capabilities independently from the Internet. These local LLMs are designed to better understand the individual user’s world, acting as personalized assistants. But as devices gather vast amounts of sensitive user data, endpoints will be a higher risk target for threat actors.  

As many organizations rush to use LLMs for their chatbots to boost convenience, they open themselves up to users abusing chatbots to access data they previously wouldn’t have been able to. Threat actors will be able to socially engineer corporate LLMs with targeted prompts to trick them into overriding its controls and giving up sensitive information — leading to data breaches.  

And, at a time when risks are increasing, the industry is also facing a skills crisis — with the latest figures showing 4 million open vacancies in cybersecurity; the highest level in five years. Security teams will have to find ways to do more with less, while protecting against both known and unknown threats. Key to this will be protecting the endpoint and reducing the attack surface. Having strong endpoint protection that aligns to Zero Trust principles straight out-of-the-box will be essential. By focusing on protecting against all threats — known and unknown – organizations will be much better placed in the new age of AI.”

Piyush Pandey, CEO at Pathlock:

With the increase in regulatory and security requirements, GRC data volumes continue to grow at what will eventually be an unmanageable rate. Because of this, AI and ML will increasingly be used to identify real-time trends, automate compliance processes, and predict risks.

Continuous, automated monitoring of compliance posture using AI can, and will, drastically reduce manual efforts and errors. More granular, sophisticated risk assessments will be available via ML algorithms, which can process vast amounts of data to identify subtle risk patterns, offering a more predictive approach to reducing risk and financial losses.   

Prioritize training

Paul Baird, Field Chief Technical Security Officer at Qualys:

Insider threats are a leading problem for IT/security teams — many attacks stem from internal stakeholders stealing and/or exploiting sensitive data, which succeed because they use accepted services to do so. In 2024, IT leaders will need to help teams understand their responsibilities and how they can prevent credential and data exploitation.  

On the developer side, management will need to assess their identity management strategies to secure credentials from theft, either from a code repository hosted publicly or within internal applications and systems that have those credentials coded in. On the other hand, end users need to understand how to protect themselves from common targeted methods of attack, such as business email compromise, social engineering and phishing attacks.  

Security teams need to prioritize collaboration with other departments within their organization to make internal security training more effective and impactful. Rather than requiring training emails/videos to be completed with little to no attention to their contents, security executives need to better understand how people outside of their department think on a daily basis. Using techniques like humor, memorable tropes and simple examples will all help to solve the problem around insufficient and ineffective security training — creating a better line of defense against insider threats.