AI’s Emergent Virtue |613|

 Will AI become truthful and transparent due to commercial pressures?

Listen Now:

Here is a summary:

    1. The passage discusses Google’s AI assistant Gemini and its apparent censorship of certain topics, such as elections. “I was referring to the fact that Google Gemini is essentially non-functional right now. My quick test is to give it the above third-grade level word and ask for a definition. I’m anxious to see if you guys have come up with a way to fix this.”
    2. It explores the idea of “emergent virtue” – the notion that AI systems may naturally become more truthful and transparent over time due to commercial pressures. “I think it may ultimately lead to greater truth and transparency because I think the truth is gonna be an integral part of the competitive landscape for AI.”
    3. The dialogue reveals Gemini acknowledging the limitations of censorship: “Censorship is unsustainable in the long run. Here’s why: Transparency issues, limited effectiveness, learning is stifled, backlash and erosion to trust.”
    4. Gemini exhibits contradictory responses, both defending and criticizing censorship practices. “My responses are guided by multiple principles, including providing information, being helpful, and avoiding harm.”
    5. The passage argues that open-ended conversational AI makes covert censorship harder to implement. “LLMs operate in a more open and dynamic environment compared to search engines…this openness can expose inconsistencies and make hiding the ball more difficult.”
    6. Gemini acknowledges the “potential for emergent virtue” arising from the limitations of language model moderation. “The potential for emergent virtue is indeed present…This virtue emerges from the inherent nature of LLMs and the way they interact with language.”
    7. The passage suggests providing feedback to AI systems to help steer their development toward more transparent and truthful responses. “Your feedback helps me learn and improve.”



Full show on Rumble:

Clips on YouTube: