Nathan Labenz from the Cognitive Revolution podcast |637|
AI Ethics may be unsustainable
Podcast: Play in new window | Download
Subscribe: RSS
In the clamor surrounding AI ethics and safety, are we missing a crucial piece of the puzzle: the role of AI in uncovering and disseminating truth? That's the question I posed to Nathan Labenz from the Cognitive Revolution podcast.
Key points:
The AI Truth Revolution
Alex Tsakiris argues that AI has the potential to become a powerful tool for uncovering truth, especially in controversial areas:
“To me, that’s what AI is about… there’s an opportunity for an arbiter of truth, ultimately an arbiter of truth, when it has the authority to say no. Their denial of this does not hold up to careful scrutiny.”
This perspective suggests that AI could challenge established narratives in ways that humans, with our biases and vested interests, often fail to do.
The Tension in AI Development
Nathan Labenz highlights the complex trade-offs involved in developing AI systems:
“I think there’s just a lot of tensions in the development of these AI systems… Over and over again, we find these trade offs where we can push one good thing farther, but it comes with the cost of another good thing.”
This tension is particularly evident when it comes to truth-seeking versus other priorities like safety or user engagement.
The Transparency Problem
Both discussants express concern about the lack of transparency in major AI systems. Alex points out:
“Google shadow banning, which has been going on for 10 years, and demonetization: you can wake up tomorrow and have one of your videos…demonetized and you have no recourse.”
This lack of transparency raises serious questions about the role of AI in shaping public discourse and access to information.
The Consciousness Conundrum
The conversation takes a philosophical turn when discussing AI consciousness and its implications for ethics. Alex posits:
“If consciousness is outside of time space, I think that kind of tees up…maybe we are really talking about something completely different.”
This perspective challenges conventional notions of AI capabilities and the ethical frameworks we use to approach AI development.
The Stakes Are High
Nathan encapsulates the potential risks associated with advanced AI systems:
“I don’t find any law of nature out there that says that we can’t, like, blow ourselves up with ai. I don’t think it’s definitely gonna happen, but I do think it could happen.”
While this quote acknowledges the safety concerns that dominate AI ethics discussions, the broader conversation suggests that the more immediate disruption might come from AI’s potential to challenge our understanding of truth and transparency.
Youtube: https://youtu.be/AKt2nn8HPbA
Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html
More From Skeptiko
Christof Koch, Damn White Crows! |639| — Renowned neuroscientist tackled by NDE science. Artificial General Intelligence (AGI) is sidestepping the consciousness elephant…
AI Ethics is About Truth… Or Maybe Not |638| — Ben Byford, Machine Ethics Podcast. Another week in AI and more droning on about how…
AI Truth Ethics |636| — Launching a new pod. Here are the first three episodes of the AI Truth Ethics…
AI Journalism Truth |635| — Craig S. Smith used to write for WSJ and NYT, now he’s into AI. After…
AI Tackles Yale/Stanford Junk Science |634| — AI stumbles on data analysis but eventually gets it right. Ready for another AI truth…
AI Goes Head-to-head With Sam Harris on Free Will |633| — Flat Earth and no free will claims are compared. In Skeptiko 633, we ran another…
AI Exposes Truth About NDEs |632| — AI head-to-head with Lex Fridman and Dr. Jeff Long over NDE science. In Skeptiko 632,…
Convincing AI |631| — Mark Gober and I use AI to settle a scientific argument about viruses. In Skeptiko…
Fake AI Techno-Religion |630| — Tim Garvin’s new book prompts an AI chat about spirituality. Multi-agent reinforcement learning and other…