The rise of generative AI has led to a variety of security concerns. According to research by Grammarly and Forrester, most companies still don't have a clear strategy for deploying generative AI across their organizations at scale.

The study finds organizations are turning to the technology to address challenges like improving writing quality (47%), increasing revenue (46%), and speeding up execution (42%) — and 43% are moving more quickly than they have with past innovations. Still, companies lag behind employees on adoption, and only 45% have an enterprise-wide strategy to ensure secure, aligned deployment across the entire organization. That leaves them vulnerable to security threats and technical consolidation challenges from disjointed, ungoverned use of generative AI — putting their business, customers, and employees at risk and jeopardizing their ability to realize the technology's benefits down the line.

According to the report, generative AI is a critical or important priority for 89% of respondents' companies, and by 2025, nearly all (97%) will be using the technology to support communication. Companies' top concern about not using generative AI is falling behind competitors (35%) — but hurdles like security concerns (32%), the lack of a cohesive AI strategy (30%), and the lack of internal policies to govern generative AI (27%) hold back adoption.

The findings reinforce that generative AI will change how work gets done: 62% of respondents expect it to transform workflows across their entire company within a year. The study points to a pressing need to build stronger privacy and security practices and greater literacy around generative AI. Respondents cited enterprise data security as both the most critical criterion in their investments and the top technical challenge to adoption. Yet 64% of respondents' companies don't know how to evaluate the security of potential generative AI partners.