A new report by Professor Gabriel Weimann of the University of Haifa underscores the many ways terrorists can exploit AI language platforms, such as ChatGPT, to educate, recruit, and fundraise.
"AI has been able to exploit newer technologies for individuals and groups, making the threat of cyberattacks and espionage more pervasive than ever before. It has the potential to be both a tool and a threat in the context of terrorist and extremist groups," the study, titled "Terror: The Risks of Generative AI Exploitation," said.
According to the authors, AI has the potential to serve as a powerful tool for terrorist activity: it can generate and distribute propaganda at rapid speed, act as a mechanism for interactive recruitment, mount automated attacks by taking over drones or autonomous vehicles, exploit social media, and carry out cyberattacks.
"The findings of this initial exploration into how terrorists or other violent extremist actors could make use of these platforms offer interesting and deeply concerning insights into the vulnerabilities of these platforms," Weimann said. "Overall, AI holds great potential as both a tool and a threat in the context of extremist actors. Governments and developers must proactively monitor and anticipate these developments to negate the harmful utilization of AI."