2024-09-13, Main hall
There is currently no bigger disruptor in most areas of technology than artificial intelligence (AI). Businesses worldwide seem to be in a rush to adopt and integrate AI technology with the goal of improving their operations and, ultimately, their revenue.
Malicious actors have been doing the same.
But this is not a talk about deepfakes. Granted, deepfakes will eventually claim their place in the world of cyber crime and social engineering, but realistically, there are more pressing, newfound capabilities in the here and now that cybersecurity professionals will have to defend against.
Artificial intelligence algorithms are already part of the cyber criminal's toolkit, the most popular type being large language models (LLMs) like ChatGPT and their by-products, such as FraudGPT and other "BadGPTs".
What tactics are cyber criminals observed to be following right now to improve their social engineering attacks and success rates? Are we adequately prepared to defend against these new capabilities and what is to come?
The presentation will provide insights into how threat actors are currently exploiting LLMs to research and identify targets, uncover physical security vulnerabilities, and supercharge their social engineering tactics, as well as the types of enhancements we have been observing.
We will also discuss some myths, supported by examples and demonstrations from our own research.
Christina Lekati is a psychologist and a social engineer.
She works with Cyber Risk GmbH as a social engineering consultant and trainer.
Christina is the leading developer of the social engineering training programs provided by Cyber Risk GmbH. She also conducts vulnerability assessments on corporations and high-value targets; these reports are based on Open-Source Intelligence (OSINT) and aim to help organizations identify and manage risks related to human or physical vulnerabilities.