The Growing Danger of AI-Generated Videos: My Encounter with a Deepfake Victor Davis Hanson Interview
- Norman Fenton
- 14 hours ago
- 3 min read

I regularly watch the excellent conservative commentator Victor Davis Hanson (VDH) on YouTube. On 23 November, I clicked on what I assumed was his channel and watched what appeared to be his latest video https://youtu.be/Cj9MP_3ZYBs?si=W3gK3AUdIdeLzCCS. It covered the case of Mark Kelly, the Democratic Senator from Arizona and retired military officer, who had recently been accused of sedition.
At first glance, everything seemed normal. The video looked and sounded exactly like VDH — the cadence, the tone, even the presentation style. But after several minutes, I began to suspect something was seriously wrong.
It became increasingly clear that what ‘he’ was saying didn’t match VDH’s opinions or manner of speech. The tone was unusually legalistic and dry, arguing points that aligned precisely with pro-Kelly talking points, essentially defending him against the sedition accusations. In fact, the script read exactly like something an AI would generate if prompted: “Provide a defence of Mark Kelly with respect to the sedition accusation against him.”
The sophistication of the video was frightening. For someone like me, who watches almost all of VDH’s content, the fact that it almost fooled me shows just how dangerous these AI-generated fakes can be. The production was highly professional, replicating both his voice and his face convincingly, and considerable effort must have been invested in recreating the look of VDH’s YouTube channel. It’s not hard to imagine that this was a deliberate, highly professional attempt to discredit a prominent commentator, presumably done by (or at the request of) senior Democrat activists.
I shared the link with my colleague Martin Neil, noting my suspicions:
"This is so odd. I watch Victor Davis Hanson all the time and I’m 99% sure this is totally AI generated. It’s his voice and face but it’s not how he talks and what he is saying is 100% AI generated boring text. https://youtu.be/Cj9MP_3ZYBs?si=W3gK3AUdIdeLzCCS"
However, by the time he clicked on it the next day, the link produced a screen saying:
“This video is no longer available because the YouTube account associated with this video has been terminated.”
It was clear that the video was designed to deceive, yet there was no publicly accessible copy to warn others about it.
I tried to find archived versions, and while some discussions confirmed this was not the first time VDH had been targeted by AI-generated content, no reliable copy of this particular clip existed. Even AI tools like Grok could not retrieve it. Grok provided the following related Wayback Machine link:
“https://web.archive.org/web/20251127000000/https://x.com/2001Tricks/status/1993687135707136460 (captures the embedded YouTube short: https://youtu.be/Cj9MP_3ZYBs—AI VDH on "sedition backfire," ~2 minutes; channel since deleted).”
But this also failed, demonstrating just how fleeting and difficult to track these deepfakes can be.
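For readers who want to check for themselves whether an archived copy of a suspect video exists, the Wayback Machine exposes a public Availability API. The sketch below (Python, standard library only) builds a query for a given page and returns the closest snapshot if one exists; the video URL shown is the one from this article, and a live lookup of course requires network access.

```python
import json
import urllib.parse
import urllib.request

# Public Wayback Machine Availability API endpoint.
AVAILABILITY_API = "https://archive.org/wayback/available"


def availability_query_url(target_url: str) -> str:
    """Build the Availability API query URL for a target page."""
    return AVAILABILITY_API + "?" + urllib.parse.urlencode({"url": target_url})


def closest_snapshot(target_url: str):
    """Return the closest archived snapshot record, or None if none exists."""
    with urllib.request.urlopen(availability_query_url(target_url), timeout=10) as resp:
        data = json.load(resp)
    # The API returns {"archived_snapshots": {"closest": {...}}} when a capture exists,
    # and an empty "archived_snapshots" object when it does not.
    return data.get("archived_snapshots", {}).get("closest")


# Example (requires network access):
#   snap = closest_snapshot("https://youtu.be/Cj9MP_3ZYBs")
#   print(snap["url"] if snap else "No archived copy found")
```

In my case such lookups came back empty, which is exactly the problem: once the channel was terminated, there was nothing left to point to.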
This incident underscores a chilling reality: deepfake technology has advanced to the point where it can convincingly mimic respected public figures and spread disinformation in politically charged contexts. The danger is not just theoretical. When these videos are crafted with precision, they can deceive even well-informed audiences, potentially influencing public opinion, discrediting credible voices, and eroding trust in media.
We are entering a world where seeing is no longer believing. As AI-generated videos become more sophisticated, the public must develop new tools and critical thinking skills to discern fact from fabrication. Transparency, education, and proactive archiving of potential deepfakes are essential to prevent these manipulations from shaping political discourse.
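Proactive archiving need not be difficult. The Wayback Machine's "Save Page Now" feature can be triggered by a simple GET request to https://web.archive.org/save/ followed by the page URL. The sketch below illustrates this; note the caveat that the service applies rate limits, and some captures may require a logged-in session, so treat this as an illustration rather than a guaranteed workflow. The User-Agent string is my own placeholder.

```python
import urllib.request

# Wayback Machine "Save Page Now" endpoint: GET this URL with a target
# page appended to request a fresh capture.
SAVE_ENDPOINT = "https://web.archive.org/save/"


def save_request_url(target_url: str) -> str:
    """Build the Save Page Now request URL for a page to be archived."""
    return SAVE_ENDPOINT + target_url


def archive_now(target_url: str) -> int:
    """Ask the Wayback Machine to capture the page; return the HTTP status code."""
    req = urllib.request.Request(
        save_request_url(target_url),
        headers={"User-Agent": "deepfake-archiver/0.1"},  # placeholder identifier
    )
    with urllib.request.urlopen(req, timeout=30) as resp:
        return resp.status


# Example (requires network access):
#   archive_now("https://youtu.be/Cj9MP_3ZYBs")
```

Had someone run something like this the moment the fake appeared, a copy would have survived the channel's termination and could have been used as evidence.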
The VDH deepfake was a small glimpse into what could become a massive problem for democracy. If this could almost fool someone very familiar with the source material, imagine the impact on the wider public. We must act before seeing and hearing can no longer be trusted.