In this episode, David Bombal sits down with vulnerability researcher Vladimir Tokarev (joined by Dawid van Straaten) to show what AI-assisted vulnerability research looks like when it actually works.
Vladimir walks through two real vulnerability case studies and uses them to explain a practical workflow for finding bugs faster with LLMs, without pretending the AI is "fully autonomous."
Demo 1: Gemini CLI command injection
Vladimir demonstrates a command injection issue in Gemini CLI tied to the IDE / VS Code extension install flow. He shows how a malicious VSIX file name or path can be crafted so that when the install command is executed, the system ends up running an attacker-controlled command (the demo uses a harmless calculator launch to prove execution). The conversation then breaks down what a VSIX is, what the realistic attack paths are (user tricked into installing a malicious extension or placing it in the right directory), and why this class of bug matters for endpoints running local AI agents.
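The pattern behind this class of bug can be sketched in a few lines. This is an illustrative Python example, not the actual Gemini CLI code: it contrasts interpolating an attacker-controlled path into a shell command string with passing it as a plain argument vector.

```python
def install_extension_unsafe(vsix_path: str) -> str:
    # Vulnerable pattern (illustrative only): the file path is pasted
    # straight into a shell command string, so shell metacharacters in
    # the name (";", "&&", "$(...)") become attacker-controlled commands.
    return f"code --install-extension {vsix_path}"


def install_extension_safe(vsix_path: str) -> list[str]:
    # Safer pattern: build an argument vector and run it without a shell,
    # so the path is only ever treated as data.
    return ["code", "--install-extension", vsix_path]
```

A file named something like `ext.vsix; calc` survives intact in the safe argument vector, but in the unsafe string it terminates the install command and starts a new one, which is exactly the behavior the demo's harmless calculator launch proves.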
Demo 2: VirtualBox integer overflow and VM escape class impact
Next, Vladimir switches to a VirtualBox vulnerability involving an integer overflow that can lead to out-of-bounds read/write in the host process. Because of architecture constraints, he shows the exploit behavior via a recorded clip, then explains the bug using source code. The key teaching moment is the mismatch between 32-bit arithmetic used in bounds checking and 64-bit pointer arithmetic used during the actual memory move, creating a pathway to bypass checks and copy memory outside the intended buffer.
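The 32-bit/64-bit mismatch can be simulated directly. This is a hedged sketch of the bug class, not VirtualBox source: the bounds check wraps in 32-bit arithmetic while the "copy" uses full-width arithmetic, so a huge length slips past the check.

```python
MASK32 = 0xFFFFFFFF

def copy_with_mismatched_check(offset: int, length: int, buf_size: int = 0x1000) -> str:
    # Bounds check performed in 32-bit arithmetic: offset + length wraps.
    end32 = (offset + length) & MASK32
    if end32 > buf_size:
        return "rejected"
    # The actual memory move uses 64-bit pointer arithmetic: no wrap,
    # so the real end address can lie far outside the buffer.
    end64 = offset + length
    if end64 > buf_size:
        return "OUT-OF-BOUNDS copy"
    return "in-bounds copy"
```

With `offset=0x100` and `length=0xFFFFFFFF`, the 32-bit sum wraps to `0xFF` and passes the check, while the 64-bit sum points well past the buffer, giving the out-of-bounds read/write pathway described above.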
Vladimir also explains why having both read and write primitives is powerful for exploitation, and how modern mitigations make “blind” exploitation unrealistic without memory disclosure.
How the bugs were found with AI
Vladimir then explains the workflow he uses in real engagements:
• Run static analysis to generate leads at scale
• Use an LLM to triage and filter out noise
• Validate the remaining findings by tracing code paths and checking exploitability
• Use AI again to accelerate setup, debugging, reverse engineering, and iteration
He shares a key insight: the win is not “AI finds everything for you,” it is that AI helps you spend your time on the hardest parts—validation, exploit logic, and decision-making—instead of drowning in thousands (or millions) of findings.
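The triage step of that workflow can be sketched as a simple filter loop. All names here (`findings`, `ask_llm`) are hypothetical placeholders, not tools Vladimir names; the point is that the LLM prunes noise so human time goes to validation.

```python
def triage(findings: list[dict], ask_llm) -> list[dict]:
    """Keep only the static-analysis findings an LLM flags as plausible.

    findings: list of dicts with a 'snippet' of the flagged code.
    ask_llm:  callable that takes a prompt string and returns the model's reply.
    """
    kept = []
    for finding in findings:
        verdict = ask_llm(
            "Is this static-analysis finding likely a real, reachable bug? "
            "Answer YES or NO.\n" + finding["snippet"]
        )
        # Everything the model rejects is dropped; everything it keeps
        # still goes to a human for path tracing and exploitability checks.
        if verdict.strip().upper().startswith("YES"):
            kept.append(finding)
    return kept
```

The human-validation step stays in the loop on purpose: the LLM is a noise filter over thousands of leads, not the final judge of exploitability.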
Why there is no fully autonomous vuln-research agent yet
Finally, Vladimir lays out four practical blockers:
1. Depth reasoning (long multi-step exploit chains)
2. Context limits (missing system-level constraints and assumptions)
3. Learning from failure (repeating bad leads)
4. Exploration (poor goal-driven search without strong reinforcement learning)
The interview ends with advice for newcomers: start by reproducing known CVEs, learn by doing, and use AI with guardrails to avoid hallucinations and time-wasting rabbit holes. Vladimir also mentions workshops where he teaches these workflows at conferences.
// Vladimir Tokarev’s SOCIAL //
X: https://x.com/G1ND1L4
LinkedIn: / vladimir-eliezer-tokarev
// Dawid van Straaten’s SOCIAL //
LinkedIn: / dawid-van-straaten-31a3742b
X: https://x.com/nullaxiom?s=21
// David’s Social //
================
Connect with me:
================
Discord: http://discord.davidbombal.com
X: https://www.x.com/davidbombal
Instagram: https://www.instagram.com/davidbombal
LinkedIn: https://www.linkedin.com/in/davidbombal
Facebook: https://www.facebook.com/davidbombal.co
TikTok: http://tiktok.com/@davidbombal
YouTube Main https://www.youtube.com/davidbombal
YouTube Tech: https://www.youtube.com/channel/UCZTIRrENWr_rjVoA7BcUE_A
YouTube Clips: https://www.youtube.com/channel/UCbY5wGxQgIiAeMdNkW5wM6Q
YouTube Emerging Technologies: https://www.youtube.com/channel/UCbY5wGxQgIiAeMdNkW5wM6Q
YouTube Shorts: https://www.youtube.com/channel/UCEyCubIF0e8MYi1jkgVepKg
Apple Podcast: https://davidbombal.wiki/applepodcast
Spotify Podcast: https://open.spotify.com/show/3f6k6gERfuriI96efWWLQQ
SoundCloud: / davidbombal
================
Support me:
================
Buy my CCNA course and support me:
DavidBombal.com: CCNA ($10): http://bit.ly/yt999ccna
Udemy CCNA Course: https://bit.ly/ccnafor10dollars
GNS3 CCNA Course: CCNA ($10): https://bit.ly/gns3ccna10
// MY STUFF //
https://www.amazon.com/shop/davidbombal
// SPONSORS //
Interested in sponsoring my videos? Reach out to my team here: sponsors@davidbombal.com
// MENU //
0:00 – Coming up
01:14 – Vladimir Tokarev introduction and AI hacking tools
03:27 – Gemini CLI hacking demo
09:03 – Vulnerability code explained
10:08 – Integer overflow vulnerability demo
14:28 – Source code vulnerability explained
22:24 – How the vulnerabilities were discovered
26:34 – AI speeding up the process // Issues with AI
34:06 – How much does AI help with workflow
35:43 – AI for defenders and for attackers
38:12 – How to become like Vladimir
42:57 – AI accelerating workflow
44:33 – How to prevent AI hallucinations
46:23 – AI assistant tools
48:13 – Conclusion
Please note that links listed may be affiliate links and provide me with a small percentage/kickback should you use them to purchase any of the items listed or recommended. Thank you for supporting me and this channel!
Disclaimer: This video is for educational purposes only.
#vulnerabilityresearch #ai #virtualbox