Nearly half (48.5 percent) of C-suite and other executives at organizations that use artificial intelligence (AI) expect to increase AI use for risk management and compliance efforts in the year ahead, according to a recent Deloitte poll.

The findings are based on responses from more than 565 C-suite and other executives at organizations using AI, polled online during a Deloitte webcast.

Only 21.1 percent of respondents report that their organizations have an ethical framework in place for AI use within risk management and compliance programs. While AI ethics frameworks for risk management and compliance may be scarce, there is a silver lining, says the report: companies are more likely than not to involve top leaders in developing ethical AI practices. More than half of respondents (53.5 percent) indicated that AI ethics responsibilities in their organizations extend to the C-suite. Just under one-fifth (19 percent) indicate that the C-suite in their organizations has no AI ethics responsibilities, notes the report.

"C-suite and board executives need to ask questions early and often about ethical use of technology and data—inclusive of and beyond AI—to mitigate unintended and unethical consequences. As data and technology uses evolve, tying efforts directly to organizational mission statements and corporate conduct policies can help organizations ensure that future advancements start with a strong ethical foundation. Further, a board-level data committee should be established to discuss enterprisewide AI use, monitoring and modeling with appropriate C-suite leaders," says Maureen Mohlenkamp, Deloitte Risk & Financial Advisory principal specializing in ethics and compliance services.