Lately, it seems conversations about artificial intelligence (AI) are everywhere. There are constant discussions about the potential for ChatGPT, the popular AI chatbot developed by OpenAI, to take over jobs ranging from media to analysis to the tech industry, and perhaps even to carry out malicious phishing attacks.
But can AI really replace humans? That’s what recent research from Hoxhunt, a cybersecurity behavior change software company, set out to explore by analyzing the effectiveness of ChatGPT-generated phishing attacks.