Toby Walsh: AI Ethics and the Quest for Unbiased Truth |645|
The tension between AI safety and truth-seeking isn’t what you think it is.
In Skeptiko episode 645, AI ethics expert Dr. Toby Walsh and author Alex Tsakiris explore a fundamental tension in narratives about AI safety. While much of today’s discourse focuses on preventing AI-generated hate speech or controlling future AGI risks, are we missing a more immediate and crucial problem? The conversation reveals how current AI systems, under the guise of safety measures, may be actively participating in narrative control and information suppression – all while claiming to protect us from misinformation.
1. The Political Nature of Language vs. The Quest for Objective Truth
Dr. Walsh takes a stance that many AI ethicists share: bias is inevitable and perhaps even necessary. As he puts it:
“Language is political. You cannot not be political in expressing ideas in the way that you use language… there are a lot of political choices being made.”
But is this view itself limiting our pursuit of truth? Alex Tsakiris challenges this assumption:
“I think we want something more. And I think AI [can] help get us there. Our bias is not a strength.”
2. The Shadow Banning Problem: When “Safety” Becomes Censorship
The conversation takes a revealing turn when discussing Google’s Gemini refusing to provide information about former astronaut Harrison Schmitt’s views on climate change. This isn’t just about differing opinions – it’s about active suppression of information.
As Alex pointedly observes:
“This is a misdirect… It’s dishonest… The main issue with regard to AI ethics right now is not AI generating hate speech. It’s about Gemini shadow banning and censoring people.”
3. The False Dichotomy of Protection vs. Truth
Even Walsh acknowledges the crude nature of current solutions:
“At the moment the technology is so immature that the tools that we have to actually design these systems are really crude… the way not to say anything wrong about climate change is to say nothing about climate change, which of course is as wrong as saying the wrong things about climate change.”
4. The Promise of AI as Truth Arbiter
Despite these challenges, there’s hope. As Tsakiris notes:
“AI is a tool right now today that can mediate [polarized debates] in an effective way because so much of that discussion is about bias, it’s about preconceived ideas, about political agendas that don’t really have a role in science.”
The Way Forward
The conversation reveals a critical paradox in current AI development: while tech companies claim to be protecting us from misinformation through content restrictions and “safety” measures, they may be creating systems that are fundamentally biased and less capable of helping us discover truth.
This raises important questions:
- Are our current AI ethics frameworks actually serving their intended purpose?
- Have we confused protecting users with controlling narratives?
- Could AI’s greatest potential lie not in being “safe” but in being truly unbiased?
The answers to these questions might determine whether AI becomes another tool for narrative control or fulfills its promise as an arbiter of truth.
What do you think about the tension between AI safety and truth-seeking? Share your thoughts in the comments below.
[Author’s note: This piece was inspired by a recent conversation between Dr. Toby Walsh and Alex Tsakiris. Dr. Walsh is a renowned AI ethics expert and author of “Faking It: The Artificial in Artificial Intelligence.”]
Transcript: https://docs.google.com/document/d/e/2PACX-1vSagBZCZM0GJxKyyrMcfN-nzB-dRfE2V-PAkJ7pF46MoQNb-bwbuDdKJsx701E8_KAtmZTdmbO_-VSa/pub
Youtube: https://youtu.be/ZsX0Di6ADyk
Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html
More From Skeptiko
- Andrew Paquette: Rigged! Mathematical Patterns Reveal Election Database Manipulation |646| - Painstaking analysis of algorithms designed to manage and obscure elections. In Skeptiko episode 646 Dr...
- Alex Gomez-Marin: Science Has Died, Can We Resurrect it? |644| - Consciousness, precognition, near-death experience, and the future of science. Dr. Alex Gomez-Marin is a world-class...
- Bernardo Kastrup on AI Consciousness |643| - Consciousness, AI, and the future of science: A spirited debate. If we really are on...
- The Spiritual Journey of Compromise and Doubt |642| - Insights from Howard Storm. In the realm of near-death experiences (NDEs) and Christianity, few voices...
- Why Humans Suck at AI? |641| - Craig Smith from the Eye on AI Podcast. Human bias and illogical thinking allows AI to...
- How AI is Humanizing Work |640| - Dan Tuchin uses AI to enrich the workplace. Forget about...
- Christof Koch, Damn White Crows! |639| - Renowned neuroscientist tackled by NDE science. Artificial General Intelligence (AGI) is sidestepping the consciousness elephant...
- AI Ethics is About Truth… Or Maybe Not |638| - Ben Byford, Machine Ethics Podcast. Another week in AI and more droning on about how...
- Nathan Labenz from the Cognitive Revolution podcast |637| - AI Ethics may be unsustainable. In the clamor surrounding AI ethics and safety are...