That's fair; the way many people talk about A.I. makes it harder to have real discussions on the topic, because they get carried away with imagined and outlandish hypotheticals.
Thing is, we already face massive threats from A.I. that are grounded in reality. And if left unchecked, I'd say they will indeed grow into existential threats.
A.I. is already being used in the American justice system to determine prison sentences. The intent was to 1) save time, so judges didn't have to crawl through historical data to ensure an offender got a reasonable term relative to how other people were sentenced (e.g., if the punishment is two to five years, ensuring the exact term selected is fair), and 2) remove bias, so that racism or other prejudice couldn't sway the punishment. The problem is, the A.I. was fed historical data that WAS filled with human bias, and it turned out to reproduce the same racism it was trained on: Black citizens were deemed a higher risk of re-offending than white citizens, even when those white citizens had similar or worse criminal histories.
If you understand the climate crisis, you understand that "business as usual" will lead to catastrophic collapse of our infrastructure and our supply chains and their logistics, which includes food distribution, clean water supply and distribution, etc. Financial modeling built on historical data and human bias leads oil companies to keep drilling and to push against sensible, fact-based climate policy instead of investing (more) heavily in green energy. That, in turn, is feeding increasingly threatening (and at risk of becoming existentially threatening) climate catastrophes: more numerous and more severe hurricanes, more extreme weather, and ultimately deaths from both.
And finally, given the verified and somewhat terrifying reality that many social media accounts are bots (and not just the "hey babe, click this link to see my sexy photos" type, but ones that post misinformation and interact as if they were real people), we're seeing the masses swayed by lies. When public opinion shifts and political party policy changes due to misinformation, our society suffers. This isn't theoretical or academic, either. Whether it's the bot armies that helped elect Trump and approve Brexit, or the covid deaths caused by covid/vax-deniers, we're already seeing human deaths from A.I. And I don't mean "the machines want us dead," which is crazy, but "humans created algorithms that promote what the humans who wrote them want promoted": algorithms that lead to actual deaths and to public policy that contributes to chaos.
So we're seeing justice systems, political systems, and our economy already corrupted by A.I., already leading to human deaths, and already pushing us towards more, larger, deadlier catastrophes.
I'd say that's an existential threat if ever I've heard one.
Not even from Skynet or some self-aware program.
But from human stupidity and selfishness amplified by software we cannot yet control.
Just my two cents.
But just because it isn't nonsense doesn't mean it isn't being overhyped.
The people doing massive hype jobs on existential threats from A.I. are overhyping things. And in doing so, they make it much harder to have real discussions about how A.I. is used and how to deal with it.