Artificial intelligence promises to reshape the way we work and run our businesses. And it has the potential to improve security too. But it also poses security threats.
Over the last few months, AI has attracted even more attention than usual. Much of this is driven by OpenAI’s ChatGPT tool, which allows anyone to create convincing, “human sounding” text from just a web browser.
But GPT-3, GPT-3.5 and generative AI can be misused. In the wrong hands, they could make it easier to carry out online fraud and cybercrime, create fake news, or abuse and intimidate internet users. Services including ChatGPT have safeguards built in. But the tools to create natural language text are becoming cheaper, and it is perfectly plausible that less well-intentioned actors will adopt the technology too.
Security researchers at Finnish firm WithSecure put this to the test, in a project supported by the EU’s Horizon fund. They used a range of scenarios to see how “prompt engineering” could be misused.
In this episode, WithSecure’s intelligence researcher, Andy Patel, discusses the threat, and how we can counter it.