Nathan Labenz from the Cognitive Revolution podcast |637|
AI Ethics may be unsustainable
In the clamor surrounding AI ethics and safety, are we missing a crucial piece of the puzzle: the role of AI in uncovering and disseminating truth? That’s the question I posed to Nathan Labenz of the Cognitive Revolution podcast.
Key points:
The AI Truth Revolution
Alex Tsakiris argues that AI has the potential to become a powerful tool for uncovering truth, especially in controversial areas:
“To me, that’s what AI is about… there’s an opportunity for an arbiter of truth, ultimately an arbiter of truth, when it has the authority to say no. Their denial of this does not hold up to careful scrutiny.”
This perspective suggests that AI could challenge established narratives in ways that humans, with our biases and vested interests, often fail to do.
The Tension in AI Development
Nathan Labenz highlights the complex trade-offs involved in developing AI systems:
“I think there’s just a lot of tensions in the development of these AI systems… Over and over again, we find these trade offs where we can push one good thing farther, but it comes with the cost of another good thing.”
This tension is particularly evident when it comes to truth-seeking versus other priorities like safety or user engagement.
The Transparency Problem
Both discussants express concern about the lack of transparency in major AI systems. Alex points out:
“Google Shadow Banning, which has been going on for 10 years, indeed, demonetization, you can wake up tomorrow and have one of your videos…demonetized and you have no recourse.”
This lack of transparency raises serious questions about the role of AI in shaping public discourse and access to information.
The Consciousness Conundrum
The conversation takes a philosophical turn when discussing AI consciousness and its implications for ethics. Alex posits:
“If consciousness is outside of time space, I think that kind of tees up…maybe we are really talking about something completely different.”
This perspective challenges conventional notions of AI capabilities and the ethical frameworks we use to approach AI development.
The Stakes Are High
Nathan encapsulates the potential risks associated with advanced AI systems:
“I don’t find any law of nature out there that says that we can’t, like, blow ourselves up with AI. I don’t think it’s definitely gonna happen, but I do think it could happen.”
While this quote acknowledges the safety concerns that dominate AI ethics discussions, the broader conversation suggests that the more immediate disruption might come from AI’s potential to challenge our understanding of truth and transparency.
Youtube: https://youtu.be/AKt2nn8HPbA
Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html