AI Risks: Fake Audio and Deep Fakes

Thanks to the release of ChatGPT and Bing’s Chat search, AI is becoming a household word. In case you missed it, there was some controversy this week about AI. Elon Musk, one of the original investors in OpenAI, along with a group of AI researchers, wrote a letter asking for a pause in the development of AI until some regulations can be crafted. While there are many ways that bad actors can take advantage of AI, I am only going to mention two in this article. The video below discusses some others.

Some of the concerns about AI are legitimate fears that it will take human jobs. Others are about personal safety, autonomous weapon systems, national cybersecurity, and misinformation, given AI’s ability to create very hard-to-detect fake videos, photos, and even audio. This is not new information or a new problem, but many of us are just now learning about what has been happening for several years. There are also feuds over who gets to profit from AI technology now that it is finally starting to pay off. Many tech companies do not want to pause their work: Google and Microsoft are racing to grab the prize for AI-powered search.

As Malcolm Murdock, machine-learning engineer and author of the 2019 novel The Quantum Price, puts it: “AI doesn’t have to be sentient to kill us all. There are plenty of other scenarios that will wipe us out before sentient AI becomes a problem.”

You may be thinking that fake audio and video don’t affect you. The problem is that these fakes often get a lot of clicks and shares. They will be so believable that they will 1) make it even more difficult to trust our own eyes and ears, and 2) feed the anger and hate between different groups that has been growing for the past decade.

Fakes can affect the stock market and elections, lead to violence, and ruin people’s lives. And with increased public access to this technology, everyone is at risk of being the victim of a fake video or voice clone.

Fake Audio

Voice cloning technology has the potential to be used for a variety of purposes, such as creating synthetic voices for people with speech impairments, or for use in the entertainment industry. However, it also raises concerns around the potential for malicious use, such as creating convincing fake audio recordings for the purpose of spreading disinformation or impersonating someone for fraud or other nefarious purposes.

It’s possible for AI to recreate a person’s voice with only a few seconds of recorded audio using a technique called voice cloning or voice synthesis. Voice cloning involves training an AI model on a person’s voice recordings to create a model of their voice, which can then be used to generate new speech that sounds like the person.

There are a few ways to protect against fake audio recordings created using voice cloning technology:

  1. Verify the source: One way to protect against fake audio is to verify the source of the recording. If you receive a recording from an unknown source, it’s important to take steps to verify the authenticity of the recording, such as contacting the person who allegedly made the recording or verifying the information with other trusted sources. We may all need to have a predetermined code word.
  2. Analyze the audio: There are several tools available that can be used to analyze audio recordings and detect signs of tampering. These tools can analyze characteristics such as the frequency range of the audio signal or the waveform, which can indicate whether the audio has been manipulated or generated using voice cloning technology.
  3. Educate yourself: Educating yourself about the potential risks of fake audio and how to identify potentially fake audio recordings can also help you to protect against them. This can include learning about the techniques used to create fake audio, such as voice cloning, and understanding how to analyze audio recordings to detect signs of manipulation.
  4. Use trusted sources: When listening to audio recordings, it’s important to use trusted sources of information. If you are unsure about the authenticity of an audio recording, it’s important to verify the information with other trusted sources before drawing any conclusions or taking any action.
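To make item 2 above concrete: real deepfake-audio detectors rely on trained models, but one classic signal statistic they can examine is spectral flatness, which distinguishes tonal audio (energy concentrated in a few frequencies) from noise-like audio. The sketch below is a minimal, self-contained illustration of that single measurement, not an actual fake-audio detector:

```python
import cmath
import math
import random

def dft_magnitudes(signal):
    """Naive discrete Fourier transform; returns the magnitude spectrum."""
    n = len(signal)
    return [abs(sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

def spectral_flatness(signal):
    """Geometric mean over arithmetic mean of the power spectrum.
    Near 0 = tonal (energy concentrated); near 1 = noise-like."""
    power = [m * m + 1e-12 for m in dft_magnitudes(signal)]  # floor avoids log(0)
    geo = math.exp(sum(math.log(p) for p in power) / len(power))
    return geo / (sum(power) / len(power))

n = 256
tone = [math.sin(2 * math.pi * 8 * t / n) for t in range(n)]  # pure 8-cycle tone
random.seed(0)
noise = [random.uniform(-1, 1) for _ in range(n)]             # white noise

print(spectral_flatness(tone))   # very close to 0 (tonal)
print(spectral_flatness(noise))  # much higher (noise-like)
```

A forensic tool would compute many such statistics over short windows of a recording and feed them to a classifier; an unusual profile can flag audio for closer inspection.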

The problem is that we must first create widespread public awareness about the existence of these fake voices. If you get a call from your child or a friend asking for an emergency money transfer or a ride, you should hang up and then call their number back to be sure it is the real person. But what is more likely to happen is a barrage of fake videos of politicians or celebrities saying and doing things that can hurt their reputations. This is already a problem.

Deep Fakes

Deep fakes are manipulated images or videos that use machine learning algorithms to create convincing fake content, often used to spread misinformation, slander, or disinformation.

The concern around deep fakes is that they have the potential to deceive people and erode trust in important institutions such as the news media or political processes. This can lead to serious societal harm, including interference in elections, exacerbation of social tensions, and damage to reputations.

To mitigate the risk of deep fakes, it’s important to develop and implement regulations around the use of AI in content creation, as well as to invest in technologies that can detect and identify deep fakes.

What Can We Do to Protect Ourselves?

Recognizing deep fakes can be challenging, as they are designed to be convincing and difficult to detect. Some people are already skeptical and aware that what they are looking at could be fake. They know how to verify information: checking the source, comparing coverage across other media outlets, or taking a wait-and-see approach until new information confirms the validity of the media. Some sites are known for regularly producing fake news. Still, it is important to educate the general public about the risks of deep fakes:

  1. Develop public awareness campaigns: Governments, non-profit organizations, and media outlets can develop campaigns to raise awareness about the potential risks of deep fakes. These campaigns can include information about how deep fakes are created, how they can be used to spread disinformation, and how to identify them. This is especially important for older adults and children who are not tech savvy.
  2. Encourage media literacy: Promoting media literacy can help people to be more critical consumers of information. This can involve teaching people how to identify trustworthy sources, how to fact-check information, and how to identify potential biases.
  3. Foster critical thinking skills: Encouraging critical thinking skills can help people to be more discerning when it comes to information that they encounter online. This can involve teaching people how to analyze arguments, how to identify logical fallacies, and how to evaluate evidence.

But even with these safety measures, social media platforms such as Facebook, Instagram, and Twitter must take more responsibility for protecting the public from deep fakes and misinformation.

Create your own AI images

This is just for fun. I used Canva’s Text to Image feature to make these. I typed in some instructions, and out came these designs.



  1. A more serious concern, alongside voice cloning, is that AI may determine what information we are able to retrieve. If it is programmed not to retrieve “hateful or dangerous” information, the program decides what falls into that category. “1984” becomes a reality.
    For some time now I’ve been a proponent of “hard copy” information…books, mostly. While future editions of a work may be altered, the originals are always within our grasp, and maybe that is how we hold onto Truth. Never give up Truth or Freedom for convenience.

  2. People get fooled by phishing emails, spear-phishing emails, etc. every day.
    And these are so unsophisticated compared to AI.
    I’m pretty good at spotting suspicious email, and I’ve learned to take a breath and think before clicking on anything. But it’s not 100%, and you only have to fool me once.
    All of your points about verification and reading with a critical eye are spot on.
    Fools are easily fooled, but with this new technology all of us can be turned into fools.

    • I think we are too far into it to stop. I fear unintended consequences. We have no idea where it will lead, and most of us can’t imagine the havoc it could wreak. Regulations will probably have minimal penalties. It’s really an open playground for bad actors.

      • I’m trying to be hopeful, but I’m afraid you are right, which is why I’m sounding the alarm. I think many people are totally unaware of how much of what they see online is fake, what’s happening in the economy, and how AI can be used against us. For an alternative story, read Arc of a Scythe by Neal Shusterman. In that fictional story, AI controls the world, and they have defeated death.

I'd love to hear your thoughts!