Christof Koch, Damn White Crows! |639|
Renowned neuroscientist tackled by NDE science.
Artificial General Intelligence (AGI) is sidestepping the consciousness elephant that isn’t in the room, the brain, or anywhere else. As we push the boundaries of machine intelligence, we will inevitably come back to the most fundamental questions about our own experience. And as AGI inches closer to reality, these questions become not just philosophical musings, but practical imperatives.
This interview with neuroscience heavyweight Christof Koch brings this tension into sharp focus. While Koch’s work on the neural correlates of consciousness has been groundbreaking, his stance on consciousness research outside his immediate field raises critical questions about the nature of consciousness – questions that AGI developers can’t afford to ignore.
Four key takeaways from this conversation:
1. The Burden of Proof in Consciousness Studies
Koch argues for a high standard of evidence when it comes to claims about consciousness existing independently of the brain. However, this stance raises questions about scientific objectivity:
“Extraordinary claims require extraordinary evidence… I haven’t seen any [white crows]. So far, in all the data I’ve looked at, and I’ve looked at a lot of data, I’ve never seen a white crow.”
Key Question: Does the demand for “extraordinary evidence” have a place in unbiased scientific inquiry, especially with regard to published peer-reviewed work?
2. The Challenge of Interdisciplinary Expertise
Despite Koch’s eminence in neuroscience, the interview reveals potential gaps in his knowledge of near-death experience (NDE) research:
“I work with humans, I work with animals. I know what it is. EEG, I know the SNR, right? So I, I know all these issues.”
Key Question: How do we balance respect for expertise in one field with the need for deep thinking about contradictory data sets? Should Koch have degraded gracefully?
3. The Limitations of “Agree to Disagree” in Scientific Discourse
When faced with contradictory evidence, Koch resorts to a diplomatic but potentially unscientific stance:
“I guess we just have to disagree.”
Key Question: “Agreeing to disagree” doesn’t carry much weight in scientific debates, so why did my AI assistant go there?
4. The “White Crow” Dilemma in Consciousness Research
The interview touches on William James’ famous “white crow” metaphor, highlighting the tension between individual cases and cumulative evidence:
“One instance of it would violate it. One, two instances of it, yeah, I totally agree. But we… I haven’t seen any…”
Key Question: Can AI outperform humans in dealing with contradictory evidence?
Thoughts?
-=-=
Another week in AI and more droning on about how superintelligence is just around the corner and human morals and ethics are out the window. Maybe not. In this episode, Alex Tsakiris of Skeptiko/AI Truth Ethics and Ben Byford of the Machine Ethics podcast engage in a thought-provoking dialogue that challenges our assumptions about AI’s role in discerning truth, the possibility of machine consciousness, and the future of human agency in an increasingly automated world. Their discussion offers a timely counterpoint to the AGI hype cycle.
Key Points:
- AI as an Arbiter of Truth: Promise or Peril? Alex posits that AI can serve as an unbiased arbiter of truth, while Ben cautions against potential dogmatism.
Alex: “AI does not bullshit their way out of stuff. AI gives you the logical flow of how the pieces fit together.”
Implication for AGI: If AI can indeed serve as a reliable truth arbiter, it could revolutionize decision-making processes in fields from science to governance. However, the risk of encoded biases becoming amplified at an AGI scale is significant.
- The Consciousness Conundrum: A Barrier to True AGI? The debate touches on whether machine consciousness is possible or if it’s fundamentally beyond computational reach.
Alex: “The best evidence suggests that AI will not be sentient because consciousness in some way we don’t understand is outside of time space, and we can prove that experimentally.”
AGI Ramification: If consciousness is indeed non-computational, it could represent a hard limit to AGI capabilities, challenging the notion of superintelligence as commonly conceived.
- Universal Ethics vs. Cultural Relativism in AI Systems: They clash over the existence of universal ethical principles and their implementability in AI.
Alex: “There is an underlying moral imperative.” Ben: “I don’t think there needs to be…”
Superintelligence Consideration: The resolution of this debate has profound implications for how we might align a superintelligent AI with human values – is there a universal ethical framework we can encode, or are we limited to culturally relative implementations?
- AI’s Societal Role: Tool for Progress or Potential Hindrance? The discussion explores how AI should be deployed and its potential impacts on human agency and societal evolution.
Ben: “These are the sorts of things we don’t want AI running because we actually want to change and evolve.”
Future of AGI: This point raises critical questions about the balance between leveraging AGI capabilities and preserving human autonomy in shaping our collective future.
Youtube: https://youtu.be/AKt2nn8HPbA
Rumble: https://rumble.com/v5bzr0x-ai-truth-ethics-636.html