Will AI Redefine Time? |618|
Insights from Jordan Miller’s Satori Project… AI ethics are tied to a “global time” layer above the LLM.
Introduction: In this interview with Jordan Miller of the Satori project, we explore the intersection of AI, blockchain technology, and the search for ethical AI and truth. Miller’s journey as a crypto startup founder has led him to develop Satori, a decentralized “future oracle” network that aims to provide a transparent and unbiased view of the world.
The Vision Behind Satori: Miller’s motivation for creating Satori stems from his deep interest in philosophy, metaphysics, and ontology. He envisions a worldwide network that combines the power of AI with the decentralized nature of blockchain to aggregate predictions and find truth, free from centralized control.
As Miller points out, “If you have control over the future, have control over everything. Right? I mean, that’s ultimate control.” The dangers of centralized control over AI by companies like Google, Microsoft, and Meta underscore the importance of decentralized projects like Satori.
AI, Truth, and Transparency: Alex Tsakiris, the host of the interview, sees an “emergent virtue quality to AI” in that truth and transparency will naturally emerge as the only economically sustainable path in the competitive LLM market space. He believes that LLMs will optimize towards logic, reason, and truth, making them powerful tools for exploring the intersection of science and spirituality.
Tsakiris is particularly interested in using AI to examine evidence for phenomena like near-death experiences, arguing that “if we’re gonna accept the Turing test as he originally envisioned it, then it needs to include our broadest understanding of human experience… and our spiritually transformative experiences now becomes part of the Turing test.”
Global Time, Local Time, and Predictive Truth: A key concept in the Satori project is the distinction between global time and local time in AI. Local time refers to the immediate, short-term predictions made by LLMs, while global time encompasses the broader, long-term understanding that emerges from the aggregation and refinement of countless local time predictions.
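To make this distinction concrete, here is a minimal Python sketch of how many short-horizon (“local time”) predictions might be folded into a single longer-horizon (“global time”) estimate. The names here (LocalPrediction, aggregate_global_view, the example stream) are hypothetical illustrations, not Satori’s actual API.

```python
from collections import defaultdict
from dataclasses import dataclass

@dataclass
class LocalPrediction:
    stream: str    # the data stream being predicted, e.g. "BTC/USD next tick"
    value: float   # the predictor's short-horizon ("local time") forecast
    weight: float  # influence this predictor has earned on the network

def aggregate_global_view(predictions: list[LocalPrediction]) -> dict[str, float]:
    """Fold many local-time predictions into one global-time estimate
    per stream, using a weight-normalized average."""
    totals: dict[str, float] = defaultdict(float)
    weights: dict[str, float] = defaultdict(float)
    for p in predictions:
        totals[p.stream] += p.value * p.weight
        weights[p.stream] += p.weight
    return {s: totals[s] / weights[s] for s in totals if weights[s] > 0}

# Three independent predictors each offer a local-time forecast for one stream.
local_predictions = [
    LocalPrediction("BTC/USD next tick", 64_200.0, weight=1.0),
    LocalPrediction("BTC/USD next tick", 64_350.0, weight=2.0),
    LocalPrediction("BTC/USD next tick", 63_900.0, weight=0.5),
]
print(aggregate_global_view(local_predictions))
# {'BTC/USD next tick': 64242.857...}
```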
Miller emphasizes the importance of anchoring Satori to real-world data and making testable predictions about the future in order to find truth. However, Tsakiris pushes back on the focus on predicting the distant future, arguing that “to save the world and to make it more truthful and transparent we just need to aggregate LLMs predicting the next word.”
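One way that anchoring could work in practice, sketched under the same assumptions: each predictor’s influence is updated by scoring its testable predictions against observed real-world outcomes, so that verified accuracy, not authority, determines weight. The update_weight function and its parameters are invented for illustration.

```python
def update_weight(weight: float, predicted: float, observed: float,
                  scale: float, lr: float = 0.1) -> float:
    """Nudge a predictor's weight toward its measured accuracy: testable
    predictions are scored against the real-world outcome once it arrives."""
    error = abs(predicted - observed) / scale   # normalized miss distance
    accuracy = max(0.0, 1.0 - error)            # 1.0 = perfect, 0.0 = useless
    # Exponential moving average: consistently accurate predictors
    # accumulate influence; consistently wrong ones lose it.
    return (1 - lr) * weight + lr * accuracy

w = 0.5
for predicted, observed in [(64_350.0, 64_300.0), (64_100.0, 65_000.0)]:
    w = update_weight(w, predicted, observed, scale=1_000.0)
    print(round(w, 4))  # 0.545 after the near-miss, 0.5005 after the big miss
```

The point of the sketch is the feedback loop rather than the particular formula: any scoring rule that rewards verified accuracy would serve the same anchoring role.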
The Potential Impact of Satori: While the Satori project is still in its early stages, its potential impact on the future of AI is significant. By creating a decentralized, transparent platform for AI prediction and AI ethics, Satori aims to address pressing concerns surrounding AI development and deployment, such as bias, accountability, and alignment with human values regarding truthfulness and transparency.
Tsakiris believes that something like Satori has to exist as part of the AI ecosystem to serve as a decentralized “source of truth” outside the control of any single corporate entity. He argues, “It has to happen. It has to be part of the ecosystem. Anything else doesn’t work. Last time we talked about Google’s ‘honest liar’ strategy and how it’s clearly unsustainable, well it’s equally unsustainable for Elon Musk and his ‘truth bot’ because even those that try and be truthful can only be truthful in a local sense, not a global sense.”
Conclusion: The Satori project represents a bold and exciting vision for the future of AI, one that leverages the power of blockchain technology to create a more transparent, decentralized, and ethical framework for artificial intelligence. As we continue to navigate the complex landscape of AI and blockchain, projects like Satori will play a role in shaping the conversation and driving progress.
Key points:
1. Jordan Miller describes his background as a programmer and what led him to develop Satori: “I grew up Mormon. I grew up LDS, and then I left the church in my early twenties. But I was still really interested in other religions and how their theologies and metaphysics underlie everything. So I’m, I’m big into philosophy and metaphysics, ontology.”
2. Alex sees an “emergent virtue quality to AI” in that truth and transparency will naturally emerge because they’re the only economically sustainable path in the competitive LLM market space. He believes LLMs will optimize towards logic, reason, and truth.
3. Jordan envisions a worldwide “future oracle” network based on blockchain technology that can aggregate predictions and find truth. He sees centralized control by companies like Google as dangerous: “If you have control over the future, have control over everything. Right? I mean, that’s ultimate control.”
4. Alex is most interested in using AI to explore the intersection of science and spirituality. He thinks AI can serve as a “bridge to our moreness” by objectively examining evidence for phenomena like near-death experiences: “…if we’re gonna say the Turing test needs to include our broadest understanding of human experience… then your spiritually transformative experience now becomes part of the Turing test.”
5. Jordan emphasizes that Satori needs to be anchored to real-world data and make testable predictions about the future in order to find truth. Alex pushes back on the focus on predicting the distant future, arguing that the only true future is the next word or data point: “To save the world and to make it more truthful and transparent, we just need to aggregate LLMs predicting the next word.”
6. Ultimately, Alex believes something like Satori has to exist as part of the AI ecosystem to serve as a decentralized “source of truth” outside the control of any single corporate entity: “It has to happen. It’s part of the ecosystem. Anything else doesn’t work. It doesn’t work to have the central, you know, be-all, end-all mind of Google. And my latest segment is the honest liar. They’re gonna do the honest liar, and that isn’t gonna work, and it’s not gonna work for Elon either.”
More From Skeptiko
Consciousness, Contact, and the Limits of Measurability |648|
Dr. Janis Whitlock seeks a deeper understanding of consciousness realms.
Consciousness Converging: NDEs, Alien Contact, and Fake Transhumanism |647|
Exoacademian Darren King on the converging consciousness realms.
Andrew Paquette: Rigged! Mathematical Patterns Reveal Election Database Manipulation |646|
Painstaking analysis of algorithms designed to manage and obscure elections.
Toby Walsh: AI Ethics and the Quest for Unbiased Truth |645|
The tension between AI safety and truth-seeking isn’t what you think it is.
Alex Gomez-Marin: Science Has Died, Can We Resurrect it? |644|
Consciousness, precognition, near-death experience, and the future of science.
Bernardo Kastrup on AI Consciousness |643|
Consciousness, AI, and the future of science: a spirited debate.
The Spiritual Journey of Compromise and Doubt |642|
Insights from Howard Storm on near-death experiences (NDEs) and Christianity.
Why Humans Suck at AI? |641|
Craig Smith from the Eye on AI Podcast on human bias and illogical thinking.
How AI is Humanizing Work |640|
Dan Tuchin uses AI to enrich the workplace.