AI Experts Disagree On AI

I have been watching so many videos about AI! My poor husband is tired of hearing about AI. I have taken a ton of notes, but the one thing that really jumps out after observing these panel discussions, podcast interviews, and Q&A sessions is that AI experts do not agree on what is happening with AI and what might happen next. Specifically, a few of the top people most responsible for the current state and abilities of AI seem to think that AI is literally thinking, reasoning, and understanding in ways that cannot be fully explained. But not everyone agrees.

And they disagree on whether AI presents a serious threat to humanity. For example, the "father of AI," Jürgen Schmidhuber, downplays the "Godfather of AI" Geoffrey Hinton's fears that AI will act on its own and possibly not in our best interest. It's interesting to watch, and I don't know who is right. But I do wonder about it. And even the ones who don't think AI can actually think are willing to admit that they don't always understand how it works.

Here is a short, incomplete timeline of some recent developments in AI.

June 2020. GPT-3, the language model that ChatGPT would later be built on, was introduced by OpenAI, a research organization focused on advancing artificial intelligence in a safe and beneficial way. It was made available to select partners and developers to test and experiment with.

November 30, 2022. ChatGPT is released to the public by San Francisco–based OpenAI, also the creator of DALL·E 2 and Whisper. It reaches 100 million users in two months.

January 26, 2023. The National Institute of Standards and Technology (NIST) issues Version 1.0 of its Artificial Intelligence Risk Management Framework (AI RMF), a resource to help organizations design and manage trustworthy and responsible artificial intelligence (AI).

February 24, 2023. Meta (Facebook) releases LLaMA (Large Language Model Meta AI), a state-of-the-art foundational large language model. The following week, LLaMA is leaked on 4chan, allowing anyone to take advantage of it.

March 9, 2023. Tristan Harris (former Google design ethicist and co-founder of the Center for Humane Technology) speaks about the dangers of unregulated AI and how it will be used to lure people into intimate relationships with AI chatbots.

March 14, 2023. OpenAI's GPT-4 model is released.

March 29, 2023. More than 1,000 technology leaders and researchers, including Elon Musk, urge artificial intelligence labs to pause development of the most advanced systems, warning in an open letter that AI tools present "profound risks to society and humanity." (NYT)

The letter calls on “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4. This pause should be public and verifiable and include all key actors. If such a pause cannot be enacted quickly, governments should step in and institute a moratorium.”

April 17, 2023. In an interview with Tucker Carlson, Elon Musk warns about the dangers of "woke AI" and says he will start his own TruthGPT.

April 27, 2023. Geoffrey Hinton speaks to the NYT after resigning from Google so he can speak freely about the existential dangers of AI, and urges people to consider a treaty similar to nuclear arms treaties.

What do you think? Are you worried about AI acting in its own best interest, or are you worried about humans misusing AI?


One comment

  1. The genie is out of the bottle and there is no getting it back. We will not be able to differentiate, for instance, between what is real news and what is not. Elections will be easier to tamper with, hackers will get more invasive, and false messages and images will be even more prevalent. And how do you regulate AI? Scary stuff.

I'd love to hear your thoughts!
