Generative AI is transforming the security landscape, offering both new attack vectors and innovative defenses in the battle for cognitive security.

This course equips students with the skills to deploy Generative AI in Red and Blue team operations, exploring how adversaries use AI-generated text, voice, and video to manipulate perception, erode trust, and exploit cognitive biases.


Attendees will learn how to:

  • Construct AI-powered scam bots that leverage large language models (LLMs) for high-impact phishing and persuasion campaigns.

  • Develop countermeasures such as "honeybots": AI-powered deception detectors designed to neutralize these threats.

  • Understand and build synthetic and manipulated media attacks, including AI-generated audio and video deception and AI-enhanced information warfare tactics.

  • Understand how AI tools can be weaponized to fabricate identities, impersonate trusted figures, and orchestrate social engineering attacks at scale.

  • Master deception and counterdeception frameworks that have been used in military and political deception contexts and are now being applied in the cyber arms race.

4 days, in person!
Sat. Aug 2 - Tue. Aug 5
Las Vegas, NV

Early Bird Pricing!

$3,500 (regular price $4,000)

Masterclass Instructors

Perry Carpenter

Perry Carpenter is a multi-award-winning author, podcaster, and speaker with a lifelong fascination with both deception and technology. As a cybersecurity professional, human factors expert, and deception researcher, Perry has spent over two decades at the forefront of exploring how cybercriminals exploit human behavior.

Perry’s career has been a relentless pursuit of understanding how bad actors exploit human nature. His fascination with the art and science of deception began in childhood with magic tricks and mental manipulations, evolving into a mission to protect others from digital threats. As the Chief Human Risk Management Strategist at KnowBe4, Perry helps organizations and individuals build robust defenses against the ever-evolving landscape of online deceptions.

In his latest book, FAIK: A Practical Guide to Living in a World of Deepfakes, Disinformation, and AI-Generated Deceptions (Wiley, Oct 2024), Perry tackles the fascinating and often daunting world of artificial intelligence, exploring both AI’s potential benefits and the darker side of its application in deception and misinformation. Through engaging storytelling and practical advice, he equips readers with the knowledge and tools needed to navigate the complexities of AI-driven deception, demystifying complex concepts for a general audience while offering actionable strategies for protecting oneself in the digital age.

Perry’s contributions to the field are widely recognized. His first book, Transformational Security Awareness, was inducted into the Cybersecurity Canon Hall of Fame. He also hosts two award-winning podcasts, 8th Layer Insights and Digital Folklore, where he explores the intersection of technology and humanity in an entertaining and thought-provoking manner.

Whether speaking on stage, writing, or podcasting, Perry empowers his audience to stay vigilant, think critically, and harness the power of technology responsibly. His work has not only educated but also inspired countless professionals and individuals to take proactive steps in safeguarding their digital lives and helping others do the same. As Perry often says, “The fight against AI-driven deception won’t be won by technology alone. Our greatest weapon is exactly what bad actors are trying to exploit… it’s our humanity and our minds.”

Cameron Malin

Cameron Malin, JD, CISSP, Co-founder and Director of Behavioral Profiling, is a former Supervisory Special Agent/Behavioral Profiler with the Federal Bureau of Investigation (FBI); he has over twenty-two years of experience investigating, analyzing, and profiling cyber adversaries across the spectrum of criminal to national security attacks.

During his tenure in the FBI, he founded both the FBI Behavioral Analysis Unit’s (BAU) Cyber Behavioral Analysis Center (CBAC), which applies science-based behavioral profiling and assessment to national security and criminal cyber offenders, and the BAU’s Deception and Influence Group (DIG), a uniquely trained and experienced cadre of Behavioral Profilers specializing in analyses of, and countermeasures to, adversary cyber deception campaigns and influence operations. He is a co-author of the authoritative cyber deception book Deception in the Digital Age: Exploiting and Defending Human Targets Through Computer-Mediated Communications (Academic Press, an imprint of Elsevier) and of the Malware Forensics book series: Malware Forensics: Investigating and Analyzing Malicious Code, Malware Forensics Field Guide for Windows Systems, and Malware Forensics Field Guide for Linux Systems (all published by Syngress, an imprint of Elsevier).

Dr. Matthew Canham

Dr. Matthew Canham is the Executive Director of the Cognitive Security Institute and a former Supervisory Special Agent with the Federal Bureau of Investigation (FBI). He has a combined twenty-one years of experience conducting research in cognitive security and human-technology integration. He built his first multi-layer perceptron in 2003 and has been working in the cognitive security and artificial intelligence domain ever since.

He currently holds an affiliated faculty appointment with George Mason University, where his research focuses on the cognitive factors in synthetic media social engineering and online influence campaigns. He was previously a research professor in the University of Central Florida School of Modeling, Simulation, and Training’s Behavioral Cybersecurity program. His work has been funded by the National Institute of Standards and Technology (NIST), the Defense Advanced Research Projects Agency (DARPA), and the US Army Research Institute. He has provided cognitive security awareness training to the NASA Kennedy Space Center, DARPA, MIT, US Army DEVCOM, the NATO Cognitive Warfare Working Group, the Voting and Misinformation Villages at DEF CON, and the Black Hat USA security conference. He holds a PhD in Cognition, Perception, and Cognitive Neuroscience from the University of California, Santa Barbara, and SANS certifications in mobile device analysis (GMOB), security auditing of wireless networks (GAWN), digital forensic examination (GCFE), and GIAC Security Essentials (GSEC).

Dr. Cameron Jones

Dr. Cameron Jones is a postdoctoral scholar in the Cognitive Science department at UC San Diego investigating the risks that LLMs and other AI technologies pose through persuasion and deception. He completed his PhD in Cognitive Science; his dissertation established that LLMs achieve parity with humans on social cognition tasks and that people cannot distinguish between humans and LLMs in a Turing test. His current research focuses on how advances in capabilities, through reinforcement learning, agent architectures, and the development of trust and rapport in social relationships, could lead to a rapid increase in risks from persuasive and deceptive AI.

Dr. Fred Heiding

Dr. Fred Heiding is a research fellow at the Harvard Kennedy School focusing on computer security at the intersection of technical capabilities, business implications, and policy remediations. Fred is a member of the World Economic Forum's Cybercrime Center and a teaching fellow for the Generative AI and the National & International Security courses at Harvard. He leads the cybersecurity division of the Harvard AI Safety Student Team (HAISST). His work has been presented at leading conferences including Black Hat, DEF CON, and BSides, and published in academic journals such as IEEE Access and professional outlets such as Harvard Business Review. In early 2022, Dr. Heiding drew media attention for hacking the King of Sweden and the Swedish European Commissioner. He has assisted in the discovery of more than 45 critical vulnerabilities (CVEs).

Don't miss out on this exclusive opportunity!


© 2025 Cognitive Security Institute. All rights reserved.
