A recent report by ExtraHop analyzed the security concerns of generative artificial intelligence (AI) use. According to the report, 73% of IT and security leaders admit their employees use generative AI tools or large language models (LLMs) sometimes or frequently at work, yet they are unsure how to appropriately address the associated security risks.
The report also found that IT and security leaders are more concerned about receiving inaccurate or nonsensical responses (40%) than about security-centric issues such as exposure of customer and employee personally identifiable information (PII) (36%), exposure of trade secrets (33%), and financial loss (25%).