<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Sat, 11 Apr 2026 09:07:02 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>On Wisdom - Episodes Tagged with “AI”</title>
    <link>https://onwisdompodcast.fireside.fm/tags/ai</link>
    <pubDate>Sun, 26 Oct 2025 09:00:00 -0400</pubDate>
    <description>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>What does science tell us about wisdom?</itunes:subtitle>
    <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
    <itunes:summary>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>psychology, science, happiness, philosophy, wisdom, decision-making, reasoning, society</itunes:keywords>
    <itunes:owner>
      <itunes:name>Charles Cassidy and Igor Grossmann</itunes:name>
      <itunes:email>charlesdavidcassidy@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Science">
  <itunes:category text="Social Sciences"/>
</itunes:category>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<item>
  <title>66: The Wisdom Turing Test - Part One</title>
  <link>https://onwisdompodcast.fireside.fm/66</link>
  <guid isPermaLink="false">e078de22-6319-496f-b95f-a62835e28e7f</guid>
  <pubDate>Sun, 26 Oct 2025 09:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/e078de22-6319-496f-b95f-a62835e28e7f.mp3" length="24424667" type="audio/mpeg"/>
  <itunes:episode>66</itunes:episode>
  <itunes:title>The Wisdom Turing Test - Part One</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>What happens when we ask our own fantastic listeners — and AI — what it means to live wisely? In this episode, Igor and Charles hand the mic to members of the On Wisdom audience to hear their answers to the big questions usually reserved for scientists and philosophers. But there’s a twist: one set of responses was provided by AI. We invite you to vote on who gave the wisest answers — and to guess which one wasn’t human. Igor is surprised by just how insightful the answers from the regular folk (compared to experts) turn out to be, while Charles wonders whether the wisest one may not be human at all. Can you pass the Wisdom Turing Test? Welcome to Episode 66.</itunes:subtitle>
  <itunes:duration>40:42</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>What happens when we ask our own fantastic listeners — and AI — what it means to live wisely? In this episode, Igor and Charles hand the mic to members of the On Wisdom audience to hear their answers to the big questions usually reserved for scientists and philosophers. But there’s a twist: one set of responses was provided by AI. We invite you to vote on who gave the wisest answers — and to guess which one wasn’t human. Igor is surprised by just how insightful the answers from the regular folk (compared to experts) turn out to be, while Charles wonders whether the wisest one may not be human at all. Can you pass the Wisdom Turing Test? Welcome to Episode 66.
Link to Listener Poll here (https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog)
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, turing test, AI, artificial intelligence, folk wisdom, intellectual humility</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>What happens when we ask our own fantastic listeners — and AI — what it means to live wisely? In this episode, Igor and Charles hand the mic to members of the On Wisdom audience to hear their answers to the big questions usually reserved for scientists and philosophers. But there’s a twist: one set of responses was provided by AI. We invite you to vote on who gave the wisest answers — and to guess which one wasn’t human. Igor is surprised by just how insightful the answers from the regular folk (compared to experts) turn out to be, while Charles wonders whether the wisest one may not be human at all. Can you pass the Wisdom Turing Test? Welcome to Episode 66.</p>

<p>Link to Listener Poll <a href="https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog" rel="nofollow">here</a></p><p>Links:</p><ul><li><a title="Listener Poll | On Wisdom Podcast: The Wisdom Turing Test (Episode 66) " rel="nofollow" href="https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog">Listener Poll | On Wisdom Podcast: The Wisdom Turing Test (Episode 66) </a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>What happens when we ask our own fantastic listeners — and AI — what it means to live wisely? In this episode, Igor and Charles hand the mic to members of the On Wisdom audience to hear their answers to the big questions usually reserved for scientists and philosophers. But there’s a twist: one set of responses was provided by AI. We invite you to vote on who gave the wisest answers — and to guess which one wasn’t human. Igor is surprised by just how insightful the answers from the regular folk (compared to experts) turn out to be, while Charles wonders whether the wisest one may not be human at all. Can you pass the Wisdom Turing Test? Welcome to Episode 66.</p>

<p>Link to Listener Poll <a href="https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog" rel="nofollow">here</a></p><p>Links:</p><ul><li><a title="Listener Poll | On Wisdom Podcast: The Wisdom Turing Test (Episode 66) " rel="nofollow" href="https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog">Listener Poll | On Wisdom Podcast: The Wisdom Turing Test (Episode 66) </a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>64: The Potency and Potential of Social Networks (with Nicholas Christakis)</title>
  <link>https://onwisdompodcast.fireside.fm/64</link>
  <guid isPermaLink="false">4859c91c-08af-410d-9b8f-95e89dbf5bad</guid>
  <pubDate>Wed, 12 Mar 2025 10:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/4859c91c-08af-410d-9b8f-95e89dbf5bad.mp3" length="35544732" type="audio/mpeg"/>
  <itunes:episode>64</itunes:episode>
  <itunes:title>The Potency and Potential of Social Networks (with Nicholas Christakis)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>Are your choices really your own — or are they quietly shaped by the people around you? Nicholas Christakis joins Igor and Charles to reveal the hidden power of social networks, from the surprising spread of kindness and cooperation to the ripple effects that shape our health, decisions, and even our wisdom. Igor uncovers the invisible social forces influencing our daily lives, Nicholas shares how our deep-rooted instincts for love, friendship, and teaching have shaped human civilization, and Charles considers how tapping into these instincts could help us build stronger, wiser communities. Welcome to Episode 64.</itunes:subtitle>
  <itunes:duration>59:14</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>Are your choices really your own — or are they quietly shaped by the people around you? Nicholas Christakis joins Igor and Charles to reveal the hidden power of social networks, from the surprising spread of kindness and cooperation to the ripple effects that shape our health, decisions, and even our wisdom. Igor uncovers the invisible social forces influencing our daily lives, Nicholas shares how our deep-rooted instincts for love, friendship, and teaching have shaped human civilization, and Charles considers how tapping into these instincts could help us build stronger, wiser communities. Welcome to Episode 64.
 Special Guest: Nicholas Christakis.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, social networks, nicholas christakis, biosocial science, computational social science, Connected, Blueprint, Apollo’s Arrow, evolutionary biology, AI, pandemics</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Are your choices really your own — or are they quietly shaped by the people around you? Nicholas Christakis joins Igor and Charles to reveal the hidden power of social networks, from the surprising spread of kindness and cooperation to the ripple effects that shape our health, decisions, and even our wisdom. Igor uncovers the invisible social forces influencing our daily lives, Nicholas shares how our deep-rooted instincts for love, friendship, and teaching have shaped human civilization, and Charles considers how tapping into these instincts could help us build stronger, wiser communities. Welcome to Episode 64.</p><p>Special Guest: Nicholas Christakis.</p><p>Links:</p><ul><li><a title="Human Nature Lab | Yale University " rel="nofollow" href="https://humannaturelab.net/">Human Nature Lab | Yale University </a></li><li><a title="Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives (Book) | Nicholas Christakis" rel="nofollow" href="https://humannaturelab.net/book/connected">Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives (Book) | Nicholas Christakis</a></li><li><a title="Blueprint: The Evolutionary Origins of a Good Society (Book) | Nicholas Christakis" rel="nofollow" href="https://humannaturelab.net/book/blueprint-evolutionary-origins-good-society">Blueprint: The Evolutionary Origins of a Good Society (Book) | Nicholas Christakis</a></li><li><a title="Apollo’s Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live (Book) | Nicholas Christakis" rel="nofollow" href="https://humannaturelab.net/book/apollos-arrow-profound-and-enduring-impact-coronavirus-way-we-live">Apollo’s Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live (Book) | Nicholas Christakis</a></li><li><a title="The Hidden Influence of Social Networks (Ted Talk) | Nicholas Christakis" rel="nofollow" 
href="https://www.ted.com/talks/nicholas_christakis_the_hidden_influence_of_social_networks">The Hidden Influence of Social Networks (Ted Talk) | Nicholas Christakis</a></li><li><a title="ETH Global Lecture: Social Artificial Intelligence (2024) | Nicholas Christakis" rel="nofollow" href="https://humannaturelab.net/public-lecture/eth-global-lecture-social-artificial-intelligence-2024">ETH Global Lecture: Social Artificial Intelligence (2024) | Nicholas Christakis</a></li><li><a title="The Spread of Obesity in a Large Social Network over 32 Years - Christakis, Fowler (2007)" rel="nofollow" href="https://www.nejm.org/doi/pdf/10.1056/nejmsa066082">The Spread of Obesity in a Large Social Network over 32 Years - Christakis, Fowler (2007)</a></li><li><a title="Cooperative behavior cascades in human social networks - Fowler, Christakis (2010)" rel="nofollow" href="https://www.pnas.org/doi/10.1073/pnas.0913149107">Cooperative behavior cascades in human social networks - Fowler, Christakis (2010)</a></li><li><a title="Induction of social contagion for diverse outcomes in structured experiments in isolated villages - Airoldi, Christakis (2024)" rel="nofollow" href="https://www.science.org/doi/10.1126/science.adi5147">Induction of social contagion for diverse outcomes in structured experiments in isolated villages - Airoldi, Christakis (2024)</a></li><li><a title="Gut microbiome strain-sharing within isolated village social networks - Beghini, Pullman, Alexander, Shridhar, Prinster, Singh, Juárez, Airoldi, Brito, Christakis  (2025)" rel="nofollow" href="https://www.nature.com/articles/s41586-024-08222-1">Gut microbiome strain-sharing within isolated village social networks - Beghini, Pullman, Alexander, Shridhar, Prinster, Singh, Juárez, Airoldi, Brito, Christakis  (2025)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Are your choices really your own — or are they quietly shaped by the people around you? Nicholas Christakis joins Igor and Charles to reveal the hidden power of social networks, from the surprising spread of kindness and cooperation to the ripple effects that shape our health, decisions, and even our wisdom. Igor uncovers the invisible social forces influencing our daily lives, Nicholas shares how our deep-rooted instincts for love, friendship, and teaching have shaped human civilization, and Charles considers how tapping into these instincts could help us build stronger, wiser communities. Welcome to Episode 64.</p><p>Special Guest: Nicholas Christakis.</p><p>Links:</p><ul><li><a title="Human Nature Lab | Yale University " rel="nofollow" href="https://humannaturelab.net/">Human Nature Lab | Yale University </a></li><li><a title="Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives (Book) | Nicholas Christakis" rel="nofollow" href="https://humannaturelab.net/book/connected">Connected: The Surprising Power of Our Social Networks and How They Shape Our Lives (Book) | Nicholas Christakis</a></li><li><a title="Blueprint: The Evolutionary Origins of a Good Society (Book) | Nicholas Christakis" rel="nofollow" href="https://humannaturelab.net/book/blueprint-evolutionary-origins-good-society">Blueprint: The Evolutionary Origins of a Good Society (Book) | Nicholas Christakis</a></li><li><a title="Apollo’s Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live (Book) | Nicholas Christakis" rel="nofollow" href="https://humannaturelab.net/book/apollos-arrow-profound-and-enduring-impact-coronavirus-way-we-live">Apollo’s Arrow: The Profound and Enduring Impact of Coronavirus on the Way We Live (Book) | Nicholas Christakis</a></li><li><a title="The Hidden Influence of Social Networks (Ted Talk) | Nicholas Christakis" rel="nofollow" 
href="https://www.ted.com/talks/nicholas_christakis_the_hidden_influence_of_social_networks">The Hidden Influence of Social Networks (Ted Talk) | Nicholas Christakis</a></li><li><a title="ETH Global Lecture: Social Artificial Intelligence (2024) | Nicholas Christakis" rel="nofollow" href="https://humannaturelab.net/public-lecture/eth-global-lecture-social-artificial-intelligence-2024">ETH Global Lecture: Social Artificial Intelligence (2024) | Nicholas Christakis</a></li><li><a title="The Spread of Obesity in a Large Social Network over 32 Years - Christakis, Fowler (2007)" rel="nofollow" href="https://www.nejm.org/doi/pdf/10.1056/nejmsa066082">The Spread of Obesity in a Large Social Network over 32 Years - Christakis, Fowler (2007)</a></li><li><a title="Cooperative behavior cascades in human social networks - Fowler, Christakis (2010)" rel="nofollow" href="https://www.pnas.org/doi/10.1073/pnas.0913149107">Cooperative behavior cascades in human social networks - Fowler, Christakis (2010)</a></li><li><a title="Induction of social contagion for diverse outcomes in structured experiments in isolated villages - Airoldi, Christakis (2024)" rel="nofollow" href="https://www.science.org/doi/10.1126/science.adi5147">Induction of social contagion for diverse outcomes in structured experiments in isolated villages - Airoldi, Christakis (2024)</a></li><li><a title="Gut microbiome strain-sharing within isolated village social networks - Beghini, Pullman, Alexander, Shridhar, Prinster, Singh, Juárez, Airoldi, Brito, Christakis  (2025)" rel="nofollow" href="https://www.nature.com/articles/s41586-024-08222-1">Gut microbiome strain-sharing within isolated village social networks - Beghini, Pullman, Alexander, Shridhar, Prinster, Singh, Juárez, Airoldi, Brito, Christakis  (2025)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>63: The AI Mirror: Why Machines Reflect Us More Than They Think (with Shannon Vallor)</title>
  <link>https://onwisdompodcast.fireside.fm/63</link>
  <guid isPermaLink="false">640978be-f5ac-46b0-aa66-28b102f0904d</guid>
  <pubDate>Sun, 23 Feb 2025 16:00:00 -0500</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/640978be-f5ac-46b0-aa66-28b102f0904d.mp3" length="26704634" type="audio/mpeg"/>
  <itunes:episode>63</itunes:episode>
  <itunes:title>The AI Mirror: Why Machines Reflect Us More Than They Think (with Shannon Vallor)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>Can AI ever be truly wise, or are we just seeing reflections of ourselves? Philosopher Shannon Vallor joins Igor and Charles to explore how technology shapes human wisdom, why we’ve been thinking about AI all wrong, and what it really means to align machines with our values. Shannon unpacks the AI Mirror metaphor, suggesting that today’s AI isn’t a thinking mind but a reflection of human data, Igor considers whether technology could ever help us become wiser rather than just more efficient, and Charles wonders if philosophy can guide better decisions in a world increasingly shaped by algorithms. Welcome to Episode 63.</itunes:subtitle>
  <itunes:duration>44:30</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>Can AI ever be truly wise, or are we just seeing reflections of ourselves? Philosophy Professor Shannon Vallor joins Igor and Charles to explore how technology shapes human wisdom, why we’ve been thinking about AI all wrong, and what it really means to align machines with our values. Shannon unpacks the AI Mirror metaphor, suggesting that today’s AI isn’t a thinking mind but a reflection of human data, Igor considers whether technology could ever help us become wiser rather than just more efficient, and Charles wonders if philosophy can guide better decisions in a world increasingly shaped by algorithms. Welcome to Episode 63. Special Guest: Shannon Vallor.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, artificial intelligence, AI, alignment, The AI Mirror, Shannon Vallor, Value Alignment, Virtue Embodiment, Moral Machines, Technomoral Virtues, Technomoral Wisdom</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Can AI ever be truly wise, or are we just seeing reflections of ourselves? Philosophy Professor Shannon Vallor joins Igor and Charles to explore how technology shapes human wisdom, why we’ve been thinking about AI all wrong, and what it really means to align machines with our values. Shannon unpacks the AI Mirror metaphor, suggesting that today’s AI isn’t a thinking mind but a reflection of human data, Igor considers whether technology could ever help us become wiser rather than just more efficient, and Charles wonders if philosophy can guide better decisions in a world increasingly shaped by algorithms. Welcome to Episode 63.</p><p>Special Guest: Shannon Vallor.</p><p>Links:</p><ul><li><a title="Shannon Vallor | University of Edinburgh" rel="nofollow" href="https://edwebprofiles.ed.ac.uk/profile/shannon-vallor">Shannon Vallor | University of Edinburgh</a></li><li><a title="Shannon Vallor | Edinburgh Futures Institute, The University of Edinburgh" rel="nofollow" href="https://efi.ed.ac.uk/people/shannon-vallor/">Shannon Vallor | Edinburgh Futures Institute, The University of Edinburgh</a></li><li><a title="The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking - Shannon Vallor (2024)" rel="nofollow" href="https://global.oup.com/academic/product/the-ai-mirror-9780197759066?cc=gb&amp;lang=en&amp;">The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking - Shannon Vallor (2024)</a></li><li><a title="How philosopher Shannon Vallor delivered the year’s best critique of AI - Fast Company (2024)" rel="nofollow" href="https://www.fastcompany.com/91240425/how-philosopher-shannon-vallor-delivered-the-years-best-critique-of-ai">How philosopher Shannon Vallor delivered the year’s best critique of AI - Fast Company (2024)</a></li><li><a title="The Turing Lectures: Can we live with AI? - Shannon Vallor" rel="nofollow" href="https://www.youtube.com/watch?v=7iX-wiKvYHs">The Turing Lectures: Can we live with AI? 
- Shannon Vallor</a></li><li><a title="The Danger Of Superhuman AI Is Not What You Think | Noema - Shannon Vallor" rel="nofollow" href="https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/">The Danger Of Superhuman AI Is Not What You Think | Noema - Shannon Vallor</a></li><li><a title="The Thoughts The Civilized Keep | Noema - Shannon Vallor" rel="nofollow" href="https://www.noemamag.com/the-thoughts-the-civilized-keep/">The Thoughts The Civilized Keep | Noema - Shannon Vallor</a></li><li><a title="AI Is the Black Mirror | Nautilus - Philip Ball" rel="nofollow" href="https://nautil.us/ai-is-the-black-mirror-1169121/">AI Is the Black Mirror | Nautilus - Philip Ball</a></li><li><a title="Technology and the Virtues A Philosophical Guide to a Future Worth Wanting - Shannon Vallor (Book)" rel="nofollow" href="https://www.google.com/books/edition/Technology_and_the_Virtues/RaCkDAAAQBAJ?hl=en&amp;gbpv=0">Technology and the Virtues A Philosophical Guide to a Future Worth Wanting - Shannon Vallor (Book)</a></li><li><a title="Moral Machines: From Value Alignment to Embodied Virtue - Wendell Wallach, Shannon Vallor (2020)" rel="nofollow" href="https://academic.oup.com/book/33540/chapter-abstract/287906775?redirectedFrom=fulltext&amp;login=false">Moral Machines: From Value Alignment to Embodied Virtue - Wendell Wallach, Shannon Vallor (2020)</a></li><li><a title="AI and the Automation of Wisdom - Shannon Vallor (2017)" rel="nofollow" href="https://link.springer.com/chapter/10.1007/978-3-319-61043-6_8">AI and the Automation of Wisdom - Shannon Vallor (2017)</a></li><li><a title="The AI Mirror — how technology blocks human potential | FT (Subscription Required)" rel="nofollow" href="https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011">The AI Mirror — how technology blocks human potential | FT (Subscription Required)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Can AI ever be truly wise, or are we just seeing reflections of ourselves? Philosophy Professor Shannon Vallor joins Igor and Charles to explore how technology shapes human wisdom, why we’ve been thinking about AI all wrong, and what it really means to align machines with our values. Shannon unpacks the AI Mirror metaphor, suggesting that today’s AI isn’t a thinking mind but a reflection of human data, Igor considers whether technology could ever help us become wiser rather than just more efficient, and Charles wonders if philosophy can guide better decisions in a world increasingly shaped by algorithms. Welcome to Episode 63.</p><p>Special Guest: Shannon Vallor.</p><p>Links:</p><ul><li><a title="Shannon Vallor | University of Edinburgh" rel="nofollow" href="https://edwebprofiles.ed.ac.uk/profile/shannon-vallor">Shannon Vallor | University of Edinburgh</a></li><li><a title="Shannon Vallor | Edinburgh Futures Institute, The University of Edinburgh" rel="nofollow" href="https://efi.ed.ac.uk/people/shannon-vallor/">Shannon Vallor | Edinburgh Futures Institute, The University of Edinburgh</a></li><li><a title="The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking - Shannon Vallor (2024)" rel="nofollow" href="https://global.oup.com/academic/product/the-ai-mirror-9780197759066?cc=gb&amp;lang=en&amp;">The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking - Shannon Vallor (2024)</a></li><li><a title="How philosopher Shannon Vallor delivered the year’s best critique of AI - Fast Company (2024)" rel="nofollow" href="https://www.fastcompany.com/91240425/how-philosopher-shannon-vallor-delivered-the-years-best-critique-of-ai">How philosopher Shannon Vallor delivered the year’s best critique of AI - Fast Company (2024)</a></li><li><a title="The Turing Lectures: Can we live with AI? - Shannon Vallor" rel="nofollow" href="https://www.youtube.com/watch?v=7iX-wiKvYHs">The Turing Lectures: Can we live with AI? 
- Shannon Vallor</a></li><li><a title="The Danger Of Superhuman AI Is Not What You Think | Noema - Shannon Vallor" rel="nofollow" href="https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/">The Danger Of Superhuman AI Is Not What You Think | Noema - Shannon Vallor</a></li><li><a title="The Thoughts The Civilized Keep | Noema - Shannon Vallor" rel="nofollow" href="https://www.noemamag.com/the-thoughts-the-civilized-keep/">The Thoughts The Civilized Keep | Noema - Shannon Vallor</a></li><li><a title="AI Is the Black Mirror | Nautilus - Philip Ball" rel="nofollow" href="https://nautil.us/ai-is-the-black-mirror-1169121/">AI Is the Black Mirror | Nautilus - Philip Ball</a></li><li><a title="Technology and the Virtues A Philosophical Guide to a Future Worth Wanting - Shannon Vallor (Book)" rel="nofollow" href="https://www.google.com/books/edition/Technology_and_the_Virtues/RaCkDAAAQBAJ?hl=en&amp;gbpv=0">Technology and the Virtues A Philosophical Guide to a Future Worth Wanting - Shannon Vallor (Book)</a></li><li><a title="Moral Machines: From Value Alignment to Embodied Virtue - Wendell Wallach, Shannon Vallor (2020)" rel="nofollow" href="https://academic.oup.com/book/33540/chapter-abstract/287906775?redirectedFrom=fulltext&amp;login=false">Moral Machines: From Value Alignment to Embodied Virtue - Wendell Wallach, Shannon Vallor (2020)</a></li><li><a title="AI and the Automation of Wisdom - Shannon Vallor (2017)" rel="nofollow" href="https://link.springer.com/chapter/10.1007/978-3-319-61043-6_8">AI and the Automation of Wisdom - Shannon Vallor (2017)</a></li><li><a title="The AI Mirror — how technology blocks human potential | FT (Subscription Required)" rel="nofollow" href="https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011">The AI Mirror — how technology blocks human potential | FT (Subscription Required)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>58: The Social Robots are Coming! (with Kerstin Dautenhahn)</title>
  <link>https://onwisdompodcast.fireside.fm/58</link>
  <guid isPermaLink="false">7a5cee1a-3976-409d-8a6a-b1d425245225</guid>
  <pubDate>Wed, 01 Nov 2023 21:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/7a5cee1a-3976-409d-8a6a-b1d425245225.mp3" length="29424765" type="audio/mpeg"/>
  <itunes:episode>58</itunes:episode>
  <itunes:title>The Social Robots are Coming! (with Kerstin Dautenhahn)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>Can we create wise robots? Kerstin Dautenhahn joins Igor and Charles to dive into the intriguing world of social robots, the finer points of “Robotiquette,” and the potential role such robots can play in supporting therapeutic treatments. Igor reflects on the limits of robot-based wisdom, Kerstin reveals the potential of Generative AI like ChatGPT to generate false information about her own professional identity, and Charles considers the perils of socially awkward machines. Welcome to Episode 58.</itunes:subtitle>
  <itunes:duration>49:02</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>Can we create wise robots? Kerstin Dautenhahn joins Igor and Charles to dive into the intriguing world of social robots, the finer points of “Robotiquette,” and the potential role such robots can play in supporting therapeutic treatments. Igor reflects on the limits of robot-based wisdom, Kerstin reveals the potential of Generative AI like ChatGPT to generate false information about her own professional identity, and Charles considers the perils of socially awkward machines. Welcome to Episode 58. Special Guest: Kerstin Dautenhahn.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, social robots, robotics, robotiquette, AI, LLM, ChatGPT, wise robots, Kerstin Dautenhahn, human-robot interaction, robot-assisted interventions, social anxiety, Assistive Technology, Artificial Life </itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Can we create wise robots? Kerstin Dautenhahn joins Igor and Charles to dive into the intriguing world of social robots, the finer points of “Robotiquette,” and the potential role such robots can play in supporting therapeutic treatments. Igor reflects on the limits of robot-based wisdom, Kerstin reveals the potential of Generative AI like ChatGPT to generate false information about her own professional identity, and Charles considers the perils of socially awkward machines. Welcome to Episode 58.</p><p>Special Guest: Kerstin Dautenhahn.</p><p>Links:</p><ul><li><a title="Kerstin Dautenhahn&#39;s page | University of Waterloo" rel="nofollow" href="https://uwaterloo.ca/electrical-computer-engineering/profile/kdautenh">Kerstin Dautenhahn's page | University of Waterloo</a></li><li><a title="Social and Intelligent Robotics Research Laboratory (SIRRL)" rel="nofollow" href="https://uwaterloo.ca/social-intelligent-robotics-research-lab/">Social and Intelligent Robotics Research Laboratory (SIRRL)</a></li><li><a title="Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd" rel="nofollow" href="https://www.youtube.com/watch?v=wPK2SWC0kx0">Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd</a></li><li><a title="Socially intelligent robots: dimensions of human–robot interaction - Dautenhahn (2007)" rel="nofollow" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2346526/">Socially intelligent robots: dimensions of human–robot interaction - Dautenhahn (2007)</a></li><li><a title="Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety - S Rasouli, G Gupta, E Nilsen, K Dautenhahn (2022)" rel="nofollow" href="https://pubmed.ncbi.nlm.nih.gov/35096198/">Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety - S Rasouli, G Gupta, E Nilsen, K Dautenhahn (2022)</a></li><li><a title="User Evaluation of Social Robots as a Tool in One-to-One Instructional Settings for Students with Learning Disabilities - N Azizi, S Chandra, M Gray, J Fane, M Sager, K Dautenhahn (2023)" rel="nofollow" href="https://www.researchgate.net/publication/367976887_User_Evaluation_of_Social_Robots_as_a_Tool_in_One-to-One_Instructional_Settings_for_Students_with_Learning_Disabilities">User Evaluation of Social Robots as a Tool in One-to-One Instructional Settings for Students with Learning Disabilities - N Azizi, S Chandra, M Gray, J Fane, M Sager, K Dautenhahn (2023)</a></li><li><a title="Opportunities for social robots in the stuttering clinic: A review and proposed scenarios - S Chandra, G Gupta, T Loucks, K Dautenhahn (2022)" rel="nofollow" href="https://www.researchgate.net/publication/361507850_Opportunities_for_social_robots_in_the_stuttering_clinic_A_review_and_proposed_scenarios">Opportunities for social robots in the stuttering clinic: A review and proposed scenarios - S Chandra, G Gupta, T Loucks, K Dautenhahn (2022)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Can we create wise robots? Kerstin Dautenhahn joins Igor and Charles to dive into the intriguing world of social robots, the finer points of “Robotiquette,” and the potential role such robots can play in supporting therapeutic treatments. Igor reflects on the limits of robot-based wisdom, Kerstin reveals the potential of Generative AI like ChatGPT to generate false information about her own professional identity, and Charles considers the perils of socially awkward machines. Welcome to Episode 58.</p><p>Special Guest: Kerstin Dautenhahn.</p><p>Links:</p><ul><li><a title="Kerstin Dautenhahn&#39;s page | University of Waterloo" rel="nofollow" href="https://uwaterloo.ca/electrical-computer-engineering/profile/kdautenh">Kerstin Dautenhahn's page | University of Waterloo</a></li><li><a title="Social and Intelligent Robotics Research Laboratory (SIRRL)" rel="nofollow" href="https://uwaterloo.ca/social-intelligent-robotics-research-lab/">Social and Intelligent Robotics Research Laboratory (SIRRL)</a></li><li><a title="Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd" rel="nofollow" href="https://www.youtube.com/watch?v=wPK2SWC0kx0">Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd</a></li><li><a title="Socially intelligent robots: dimensions of human–robot interaction - Dautenhahn (2007)" rel="nofollow" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2346526/">Socially intelligent robots: dimensions of human–robot interaction - Dautenhahn (2007)</a></li><li><a title="Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety - S Rasouli, G Gupta, E Nilsen, K Dautenhahn (2022)" rel="nofollow" href="https://pubmed.ncbi.nlm.nih.gov/35096198/">Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety - S Rasouli, G Gupta, E Nilsen, K Dautenhahn (2022)</a></li><li><a title="User Evaluation of Social Robots as a Tool in One-to-One Instructional Settings for Students with Learning Disabilities - N Azizi, S Chandra, M Gray, J Fane, M Sager, K Dautenhahn (2023)" rel="nofollow" href="https://www.researchgate.net/publication/367976887_User_Evaluation_of_Social_Robots_as_a_Tool_in_One-to-One_Instructional_Settings_for_Students_with_Learning_Disabilities">User Evaluation of Social Robots as a Tool in One-to-One Instructional Settings for Students with Learning Disabilities - N Azizi, S Chandra, M Gray, J Fane, M Sager, K Dautenhahn (2023)</a></li><li><a title="Opportunities for social robots in the stuttering clinic: A review and proposed scenarios - S Chandra, G Gupta, T Loucks, K Dautenhahn (2022)" rel="nofollow" href="https://www.researchgate.net/publication/361507850_Opportunities_for_social_robots_in_the_stuttering_clinic_A_review_and_proposed_scenarios">Opportunities for social robots in the stuttering clinic: A review and proposed scenarios - S Chandra, G Gupta, T Loucks, K Dautenhahn (2022)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>57: The Epic Challenge of Knowing Thyself (with David Dunning)</title>
  <link>https://onwisdompodcast.fireside.fm/57</link>
  <guid isPermaLink="false">662ec5c1-5851-43e9-b324-e91f1d70fdde</guid>
  <pubDate>Sat, 07 Oct 2023 18:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/662ec5c1-5851-43e9-b324-e91f1d70fdde.mp3" length="37824699" type="audio/mpeg"/>
  <itunes:episode>57</itunes:episode>
  <itunes:title>The Epic Challenge of Knowing Thyself (with David Dunning)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>Can we ever really know ourselves, or are we destined to always make overly optimistic self-assessments? David Dunning joins Igor and Charles to discuss the Dunning-Kruger effect, the importance of asking the right questions, why arriving at an accurate view of ourselves is so challenging, and the implications for teaching, medicine, and even scientific research. Igor explores the possible reemergence of group assessments in education as a result of advances in AI, David shares why conversations with smart people often end up as competitions to ask the most questions, and Charles reflects on the wisdom-enhancing experience of jury service. Welcome to Episode 57.</itunes:subtitle>
  <itunes:duration>1:03:02</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>Can we ever really know ourselves, or are we destined to always make overly optimistic self-assessments? David Dunning joins Igor and Charles to discuss the Dunning-Kruger effect, the importance of asking the right questions, why arriving at an accurate view of ourselves is so challenging, and the implications for teaching, medicine, and even scientific research. Igor explores the possible reemergence of group assessments in education as a result of advances in AI, David shares why conversations with smart people often end up as competitions to ask the most questions, and Charles reflects on the wisdom-enhancing experience of jury service. Welcome to Episode 57.
 Special Guest: David Dunning.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, David Dunning, Dunning-Kruger, self-assessment, Justin Kruger, self-awareness, metacognition, checklists, How to Win Friends and Influence People, Dale Carnegie, Jury Service, AI</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Can we ever really know ourselves, or are we destined to always make overly optimistic self-assessments? David Dunning joins Igor and Charles to discuss the Dunning-Kruger effect, the importance of asking the right questions, why arriving at an accurate view of ourselves is so challenging, and the implications for teaching, medicine, and even scientific research. Igor explores the possible reemergence of group assessments in education as a result of advances in AI, David shares why conversations with smart people often end up as competitions to ask the most questions, and Charles reflects on the wisdom-enhancing experience of jury service. Welcome to Episode 57.</p><p>Special Guest: David Dunning.</p><p>Links:</p><ul><li><a title="Unskilled and unaware of it: how difficulties in recognizing one&#39;s own incompetence lead to inflated self-assessments - J Kruger, D Dunning (1999)" rel="nofollow" href="https://pubmed.ncbi.nlm.nih.gov/10626367/">Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments - J Kruger, D Dunning (1999)</a></li><li><a title="The association between objective and subjective financial literacy: Failure to observe the Dunning-Kruger effect - Gilles E. Gignac (2022)" rel="nofollow" href="https://www.sciencedirect.com/science/article/abs/pii/S0191886921006036?via%3Dihub">The association between objective and subjective financial literacy: Failure to observe the Dunning-Kruger effect - Gilles E. Gignac (2022)</a></li><li><a title="Flawed Self-Assessment: Implications for Health, Education, and the Workplace - David Dunning, Chip Heath, Jerry M. Suls (2004)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1111/j.1529-1006.2004.00018.x">Flawed Self-Assessment: Implications for Health, Education, and the Workplace - David Dunning, Chip Heath, Jerry M. Suls (2004)</a></li><li><a title="Feeling &quot;Holier Than Thou&quot;: Are Self-Serving Assessments Produced by Errors in Self- or Social Prediction? - Nicholas Epley, David Dunning (2000)" rel="nofollow" href="https://citeseerx.ist.psu.edu/document?repid=rep1&amp;type=pdf&amp;doi=7e8266e3fa987219bb056978587cdf21acd42448">Feeling "Holier Than Thou": Are Self-Serving Assessments Produced by Errors in Self- or Social Prediction? - Nicholas Epley, David Dunning (2000)</a></li><li><a title="Why People Fail to Recognize Their Own Incompetence - David Dunning, Kerri Johnson, Joyce Ehrlinger, Justin Kruger (2003)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1111/1467-8721.01235">Why People Fail to Recognize Their Own Incompetence - David Dunning, Kerri Johnson, Joyce Ehrlinger, Justin Kruger (2003)</a></li><li><a title="The Dunning–Kruger Effect: On Being Ignorant of One&#39;s Own Ignorance | Book Chapter - David Dunning (2011)" rel="nofollow" href="https://www.sciencedirect.com/science/article/pii/B9780123855220000056?via%3Dihub">The Dunning–Kruger Effect: On Being Ignorant of One's Own Ignorance | Book Chapter - David Dunning (2011)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Can we ever really know ourselves, or are we destined to always make overly optimistic self-assessments? David Dunning joins Igor and Charles to discuss the Dunning-Kruger effect, the importance of asking the right questions, why arriving at an accurate view of ourselves is so challenging, and the implications for teaching, medicine, and even scientific research. Igor explores the possible reemergence of group assessments in education as a result of advances in AI, David shares why conversations with smart people often end up as competitions to ask the most questions, and Charles reflects on the wisdom-enhancing experience of jury service. Welcome to Episode 57.</p><p>Special Guest: David Dunning.</p><p>Links:</p><ul><li><a title="Unskilled and unaware of it: how difficulties in recognizing one&#39;s own incompetence lead to inflated self-assessments - J Kruger, D Dunning (1999)" rel="nofollow" href="https://pubmed.ncbi.nlm.nih.gov/10626367/">Unskilled and unaware of it: how difficulties in recognizing one's own incompetence lead to inflated self-assessments - J Kruger, D Dunning (1999)</a></li><li><a title="The association between objective and subjective financial literacy: Failure to observe the Dunning-Kruger effect - Gilles E. Gignac (2022)" rel="nofollow" href="https://www.sciencedirect.com/science/article/abs/pii/S0191886921006036?via%3Dihub">The association between objective and subjective financial literacy: Failure to observe the Dunning-Kruger effect - Gilles E. Gignac (2022)</a></li><li><a title="Flawed Self-Assessment: Implications for Health, Education, and the Workplace - David Dunning, Chip Heath, Jerry M. Suls (2004)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1111/j.1529-1006.2004.00018.x">Flawed Self-Assessment: Implications for Health, Education, and the Workplace - David Dunning, Chip Heath, Jerry M. Suls (2004)</a></li><li><a title="Feeling &quot;Holier Than Thou&quot;: Are Self-Serving Assessments Produced by Errors in Self- or Social Prediction? - Nicholas Epley, David Dunning (2000)" rel="nofollow" href="https://citeseerx.ist.psu.edu/document?repid=rep1&amp;type=pdf&amp;doi=7e8266e3fa987219bb056978587cdf21acd42448">Feeling "Holier Than Thou": Are Self-Serving Assessments Produced by Errors in Self- or Social Prediction? - Nicholas Epley, David Dunning (2000)</a></li><li><a title="Why People Fail to Recognize Their Own Incompetence - David Dunning, Kerri Johnson, Joyce Ehrlinger, Justin Kruger (2003)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1111/1467-8721.01235">Why People Fail to Recognize Their Own Incompetence - David Dunning, Kerri Johnson, Joyce Ehrlinger, Justin Kruger (2003)</a></li><li><a title="The Dunning–Kruger Effect: On Being Ignorant of One&#39;s Own Ignorance | Book Chapter - David Dunning (2011)" rel="nofollow" href="https://www.sciencedirect.com/science/article/pii/B9780123855220000056?via%3Dihub">The Dunning–Kruger Effect: On Being Ignorant of One's Own Ignorance | Book Chapter - David Dunning (2011)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>55: Wise of the Machines (with Sina Fazelpour)</title>
  <link>https://onwisdompodcast.fireside.fm/55</link>
  <guid isPermaLink="false">fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e</guid>
  <pubDate>Sat, 05 Aug 2023 12:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e.mp3" length="38604716" type="audio/mpeg"/>
  <itunes:episode>55</itunes:episode>
  <itunes:title>Wise of the Machines (with Sina Fazelpour)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</itunes:subtitle>
  <itunes:duration>1:04:20</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55. Special Guest: Sina Fazelpour.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, Sina Fazelpour, Artificial Intelligence, AI, Machine Learning, Bias, Algorithms, Alignment, Diversity, Constitutional AI, AlphaGo, Lee Sedol, God’s touch, ChatGPT, LLM, large language model</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
