<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 19 Apr 2026 23:30:24 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>On Wisdom - Episodes Tagged with “Artificial Intelligence”</title>
    <link>https://onwisdompodcast.fireside.fm/tags/artificial%20intelligence</link>
    <pubDate>Sun, 16 Nov 2025 14:00:00 -0500</pubDate>
    <description>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>What does science tell us about wisdom?</itunes:subtitle>
    <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
    <itunes:summary>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>psychology, science, happiness, philosophy, wisdom, decision-making, reasoning, society</itunes:keywords>
    <itunes:owner>
      <itunes:name>Charles Cassidy and Igor Grossmann</itunes:name>
      <itunes:email>charlesdavidcassidy@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Science">
  <itunes:category text="Social Sciences"/>
</itunes:category>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<item>
  <title>67: The Wisdom Turing Test - Part Two (with Steve Rathje)</title>
  <link>https://onwisdompodcast.fireside.fm/67</link>
  <guid isPermaLink="false">f4899082-8f1e-4252-805b-4fc889eb1313</guid>
  <pubDate>Sun, 16 Nov 2025 14:00:00 -0500</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/f4899082-8f1e-4252-805b-4fc889eb1313.mp3" length="29984830" type="audio/mpeg"/>
  <itunes:episode>67</itunes:episode>
  <itunes:title>The Wisdom Turing Test - Part Two (with Steve Rathje)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>What can insights from the psychology of technology teach us about wisdom in the age of AI? In this special follow-up episode, Igor and Charles are joined by Steve Rathje to explore how classic ideas like the Turing Test hold up now that AI can talk compellingly about human wisdom. Steve unpacks what today’s generative models are actually capable of, Igor is intrigued by how quickly the line between human and machine reasoning seems to be blurring, and Charles realises that telling human insight from machine insight isn’t nearly as straightforward as he'd hoped. The trio also reveal the results of our listener poll — who sounded the wisest, and was the audience able to spot the AI? Welcome to Episode 67.</itunes:subtitle>
  <itunes:duration>49:58</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>What can insights from the psychology of technology teach us about wisdom in the age of AI? In this special follow-up episode, Igor and Charles are joined by Steve Rathje to explore how classic ideas like the Turing Test hold up now that AI can talk compellingly about human wisdom. Steve unpacks what today’s generative models are actually capable of, Igor is intrigued by how quickly the line between human and machine reasoning seems to be blurring, and Charles realises that telling human insight from machine insight isn’t nearly as straightforward as he'd hoped. The trio also reveal the results of our listener poll — who sounded the wisest, and was the audience able to spot the AI? Welcome to Episode 67. Special Guest: Steve Rathje.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, Steve Rathje, Turing Test, artificial intelligence, The Chinese Room, psychology of technology, AI sycophancy, social media, listener poll, wisdom turing test, Alan Turing, Benedict Cumberbatch</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>What can insights from the psychology of technology teach us about wisdom in the age of AI? In this special follow-up episode, Igor and Charles are joined by Steve Rathje to explore how classic ideas like the Turing Test hold up now that AI can talk compellingly about human wisdom. Steve unpacks what today’s generative models are actually capable of, Igor is intrigued by how quickly the line between human and machine reasoning seems to be blurring, and Charles realises that telling human insight from machine insight isn’t nearly as straightforward as he&#39;d hoped. The trio also reveal the results of our listener poll — who sounded the wisest, and was the audience able to spot the AI? Welcome to Episode 67.</p><p>Special Guest: Steve Rathje.</p><p>Links:</p><ul><li><a title="Steve Rathje&#39;s Site: " rel="nofollow" href="https://stevenrathje.com/">Steve Rathje's Site: </a></li><li><a title="Sycophantic AI increases attitude extremity and overconfidence (Preprint) - Steve Rathje, Meryl Ye, Laura K. Globig, Raunak M. Pillai, Victoria Oldemburgo de Mello, Jay J Van Bavel (2025)" rel="nofollow" href="https://osf.io/preprints/psyarxiv/vmyek">Sycophantic AI increases attitude extremity and overconfidence (Preprint) - Steve Rathje, Meryl Ye, Laura K. Globig, Raunak M. Pillai, Victoria Oldemburgo de Mello, Jay J Van Bavel (2025)</a></li><li><a title="Imagining and building wise machines: The centrality of AI metacognition - Johnson, Karimi, Bengio, Chater, Gerstenberg, Larson, Levine, Mitchell, Rahwan, Schölkopf, Grossmann (2024)" rel="nofollow" href="https://arxiv.org/abs/2411.02478">Imagining and building wise machines: The centrality of AI metacognition - Johnson, Karimi, Bengio, Chater, Gerstenberg, Larson, Levine, Mitchell, Rahwan, Schölkopf, Grossmann (2024)</a></li><li><a title="The Turing test: Can a computer pass for a human? 
| TedEd Video - Alex Gendler " rel="nofollow" href="https://youtu.be/3wLqsRLvV-c?si=MKb7UvaO79hurYvW">The Turing test: Can a computer pass for a human? | TedEd Video - Alex Gendler </a></li><li><a title="The Chinese Room Experiment | The Hunt for AI | BBC Studios" rel="nofollow" href="https://youtu.be/D0MD4sRHj1M?si=h_Fq9-W6a86NbdI8">The Chinese Room Experiment | The Hunt for AI | BBC Studios</a></li><li><a title="The Chinese Room Argument | Stanford Encyclopedia of Philosophy" rel="nofollow" href="https://plato.stanford.edu/entries/chinese-room/">The Chinese Room Argument | Stanford Encyclopedia of Philosophy</a></li><li><a title="Her | Movie Trailer (2013)" rel="nofollow" href="https://youtu.be/dJTU48_yghs?si=QUO-pjnXrd-ibg8a">Her | Movie Trailer (2013)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>What can insights from the psychology of technology teach us about wisdom in the age of AI? In this special follow-up episode, Igor and Charles are joined by Steve Rathje to explore how classic ideas like the Turing Test hold up now that AI can talk compellingly about human wisdom. Steve unpacks what today’s generative models are actually capable of, Igor is intrigued by how quickly the line between human and machine reasoning seems to be blurring, and Charles realises that telling human insight from machine insight isn’t nearly as straightforward as he&#39;d hoped. The trio also reveal the results of our listener poll — who sounded the wisest, and was the audience able to spot the AI? Welcome to Episode 67.</p><p>Special Guest: Steve Rathje.</p><p>Links:</p><ul><li><a title="Steve Rathje&#39;s Site: " rel="nofollow" href="https://stevenrathje.com/">Steve Rathje's Site: </a></li><li><a title="Sycophantic AI increases attitude extremity and overconfidence (Preprint) - Steve Rathje, Meryl Ye, Laura K. Globig, Raunak M. Pillai, Victoria Oldemburgo de Mello, Jay J Van Bavel (2025)" rel="nofollow" href="https://osf.io/preprints/psyarxiv/vmyek">Sycophantic AI increases attitude extremity and overconfidence (Preprint) - Steve Rathje, Meryl Ye, Laura K. Globig, Raunak M. Pillai, Victoria Oldemburgo de Mello, Jay J Van Bavel (2025)</a></li><li><a title="Imagining and building wise machines: The centrality of AI metacognition - Johnson, Karimi, Bengio, Chater, Gerstenberg, Larson, Levine, Mitchell, Rahwan, Schölkopf, Grossmann (2024)" rel="nofollow" href="https://arxiv.org/abs/2411.02478">Imagining and building wise machines: The centrality of AI metacognition - Johnson, Karimi, Bengio, Chater, Gerstenberg, Larson, Levine, Mitchell, Rahwan, Schölkopf, Grossmann (2024)</a></li><li><a title="The Turing test: Can a computer pass for a human? 
| TedEd Video - Alex Gendler " rel="nofollow" href="https://youtu.be/3wLqsRLvV-c?si=MKb7UvaO79hurYvW">The Turing test: Can a computer pass for a human? | TedEd Video - Alex Gendler </a></li><li><a title="The Chinese Room Experiment | The Hunt for AI | BBC Studios" rel="nofollow" href="https://youtu.be/D0MD4sRHj1M?si=h_Fq9-W6a86NbdI8">The Chinese Room Experiment | The Hunt for AI | BBC Studios</a></li><li><a title="The Chinese Room Argument | Stanford Encyclopedia of Philosophy" rel="nofollow" href="https://plato.stanford.edu/entries/chinese-room/">The Chinese Room Argument | Stanford Encyclopedia of Philosophy</a></li><li><a title="Her | Movie Trailer (2013)" rel="nofollow" href="https://youtu.be/dJTU48_yghs?si=QUO-pjnXrd-ibg8a">Her | Movie Trailer (2013)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>66: The Wisdom Turing Test - Part One</title>
  <link>https://onwisdompodcast.fireside.fm/66</link>
  <guid isPermaLink="false">e078de22-6319-496f-b95f-a62835e28e7f</guid>
  <pubDate>Sun, 26 Oct 2025 09:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/e078de22-6319-496f-b95f-a62835e28e7f.mp3" length="24424667" type="audio/mpeg"/>
  <itunes:episode>66</itunes:episode>
  <itunes:title>The Wisdom Turing Test - Part One</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>What happens when we ask our own fantastic listeners — and AI — what it means to live wisely? In this episode, Igor and Charles hand the mic to members of the On Wisdom audience to hear their answers to the big questions usually reserved for scientists and philosophers. But there’s a twist: one set of responses was provided by AI. We invite you to vote on who gave the wisest answers — and to guess which one wasn’t human. Igor is surprised by just how insightful the answers from the regular folk (compared to experts) turn out to be, while Charles wonders if the wisest one may not be human at all. Can you pass the Wisdom Turing Test? Welcome to Episode 66.</itunes:subtitle>
  <itunes:duration>40:42</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>What happens when we ask our own fantastic listeners — and AI — what it means to live wisely? In this episode, Igor and Charles hand the mic to members of the On Wisdom audience to hear their answers to the big questions usually reserved for scientists and philosophers. But there’s a twist: one set of responses was provided by AI. We invite you to vote on who gave the wisest answers — and to guess which one wasn’t human. Igor is surprised by just how insightful the answers from the regular folk (compared to experts) turn out to be, while Charles wonders if the wisest one may not be human at all. Can you pass the Wisdom Turing Test? Welcome to Episode 66.
Link to Listener Poll here (https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog)
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, turing test, AI, artificial intelligence, folk wisdom, intellectual humility</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>What happens when we ask our own fantastic listeners — and AI — what it means to live wisely? In this episode, Igor and Charles hand the mic to members of the On Wisdom audience to hear their answers to the big questions usually reserved for scientists and philosophers. But there’s a twist: one set of responses was provided by AI. We invite you to vote on who gave the wisest answers — and to guess which one wasn’t human. Igor is surprised by just how insightful the answers from the regular folk (compared to experts) turn out to be, while Charles wonders if the wisest one may not be human at all. Can you pass the Wisdom Turing Test? Welcome to Episode 66.</p>

<p>Link to Listener Poll <a href="https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog" rel="nofollow">here</a></p><p>Links:</p><ul><li><a title="Listener Poll | On Wisdom Podcast: The Wisdom Turing Test (Episode 66) " rel="nofollow" href="https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog">Listener Poll | On Wisdom Podcast: The Wisdom Turing Test (Episode 66) </a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>What happens when we ask our own fantastic listeners — and AI — what it means to live wisely? In this episode, Igor and Charles hand the mic to members of the On Wisdom audience to hear their answers to the big questions usually reserved for scientists and philosophers. But there’s a twist: one set of responses was provided by AI. We invite you to vote on who gave the wisest answers — and to guess which one wasn’t human. Igor is surprised by just how insightful the answers from the regular folk (compared to experts) turn out to be, while Charles wonders if the wisest one may not be human at all. Can you pass the Wisdom Turing Test? Welcome to Episode 66.</p>

<p>Link to Listener Poll <a href="https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog" rel="nofollow">here</a></p><p>Links:</p><ul><li><a title="Listener Poll | On Wisdom Podcast: The Wisdom Turing Test (Episode 66) " rel="nofollow" href="https://docs.google.com/forms/d/e/1FAIpQLSePLVkKDHKButOmx7ApJ2hR0bvwsOFdgpHDI_R6RDBZNovH8Q/viewform?usp=dialog">Listener Poll | On Wisdom Podcast: The Wisdom Turing Test (Episode 66) </a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>63: The AI Mirror: Why Machines Reflect Us More Than They Think (with Shannon Vallor)</title>
  <link>https://onwisdompodcast.fireside.fm/63</link>
  <guid isPermaLink="false">640978be-f5ac-46b0-aa66-28b102f0904d</guid>
  <pubDate>Sun, 23 Feb 2025 16:00:00 -0500</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/640978be-f5ac-46b0-aa66-28b102f0904d.mp3" length="26704634" type="audio/mpeg"/>
  <itunes:episode>63</itunes:episode>
  <itunes:title>The AI Mirror: Why Machines Reflect Us More Than They Think (with Shannon Vallor)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>Can AI ever be truly wise, or are we just seeing reflections of ourselves? Philosopher Shannon Vallor joins Igor and Charles to explore how technology shapes human wisdom, why we’ve been thinking about AI all wrong, and what it really means to align machines with our values. Shannon unpacks the AI Mirror metaphor, suggesting that today’s AI isn’t a thinking mind but a reflection of human data, Igor considers whether technology could ever help us become wiser rather than just more efficient, and Charles wonders if philosophy can guide better decisions in a world increasingly shaped by algorithms. Welcome to Episode 63.</itunes:subtitle>
  <itunes:duration>44:30</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>Can AI ever be truly wise, or are we just seeing reflections of ourselves? Philosophy Professor Shannon Vallor joins Igor and Charles to explore how technology shapes human wisdom, why we’ve been thinking about AI all wrong, and what it really means to align machines with our values. Shannon unpacks the AI Mirror metaphor, suggesting that today’s AI isn’t a thinking mind but a reflection of human data, Igor considers whether technology could ever help us become wiser rather than just more efficient, and Charles wonders if philosophy can guide better decisions in a world increasingly shaped by algorithms. Welcome to Episode 63. Special Guest: Shannon Vallor.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, artificial intelligence, AI, alignment, The AI Mirror, Shannon Vallor, Value Alignment, Virtue Embodiment, Moral Machines, Technomoral Virtues, Technomoral Wisdom</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Can AI ever be truly wise, or are we just seeing reflections of ourselves? Philosophy Professor Shannon Vallor joins Igor and Charles to explore how technology shapes human wisdom, why we’ve been thinking about AI all wrong, and what it really means to align machines with our values. Shannon unpacks the AI Mirror metaphor, suggesting that today’s AI isn’t a thinking mind but a reflection of human data, Igor considers whether technology could ever help us become wiser rather than just more efficient, and Charles wonders if philosophy can guide better decisions in a world increasingly shaped by algorithms. Welcome to Episode 63.</p><p>Special Guest: Shannon Vallor.</p><p>Links:</p><ul><li><a title="Shannon Vallor | University of Edinburgh" rel="nofollow" href="https://edwebprofiles.ed.ac.uk/profile/shannon-vallor">Shannon Vallor | University of Edinburgh</a></li><li><a title="Shannon Vallor | Edinburgh Futures Institute, The University of Edinburgh" rel="nofollow" href="https://efi.ed.ac.uk/people/shannon-vallor/">Shannon Vallor | Edinburgh Futures Institute, The University of Edinburgh</a></li><li><a title="The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking - Shannon Vallor (2024)" rel="nofollow" href="https://global.oup.com/academic/product/the-ai-mirror-9780197759066?cc=gb&amp;lang=en&amp;">The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking - Shannon Vallor (2024)</a></li><li><a title="How philosopher Shannon Vallor delivered the year’s best critique of AI - Fast Company (2024)" rel="nofollow" href="https://www.fastcompany.com/91240425/how-philosopher-shannon-vallor-delivered-the-years-best-critique-of-ai">How philosopher Shannon Vallor delivered the year’s best critique of AI - Fast Company (2024)</a></li><li><a title="The Turing Lectures: Can we live with AI? - Shannon Vallor" rel="nofollow" href="https://www.youtube.com/watch?v=7iX-wiKvYHs">The Turing Lectures: Can we live with AI? 
- Shannon Vallor</a></li><li><a title="The Danger Of Superhuman AI Is Not What You Think | Noema - Shannon Vallor" rel="nofollow" href="https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/">The Danger Of Superhuman AI Is Not What You Think | Noema - Shannon Vallor</a></li><li><a title="The Thoughts The Civilized Keep | Noema - Shannon Vallor" rel="nofollow" href="https://www.noemamag.com/the-thoughts-the-civilized-keep/">The Thoughts The Civilized Keep | Noema - Shannon Vallor</a></li><li><a title="AI Is the Black Mirror | Nautilus - Philip Ball" rel="nofollow" href="https://nautil.us/ai-is-the-black-mirror-1169121/">AI Is the Black Mirror | Nautilus - Philip Ball</a></li><li><a title="Technology and the Virtues A Philosophical Guide to a Future Worth Wanting - Shannon Vallor (Book)" rel="nofollow" href="https://www.google.com/books/edition/Technology_and_the_Virtues/RaCkDAAAQBAJ?hl=en&amp;gbpv=0">Technology and the Virtues A Philosophical Guide to a Future Worth Wanting - Shannon Vallor (Book)</a></li><li><a title="Moral Machines: From Value Alignment to Embodied Virtue - Wendell Wallach, Shannon Vallor (2020)" rel="nofollow" href="https://academic.oup.com/book/33540/chapter-abstract/287906775?redirectedFrom=fulltext&amp;login=false">Moral Machines: From Value Alignment to Embodied Virtue - Wendell Wallach, Shannon Vallor (2020)</a></li><li><a title="AI and the Automation of Wisdom - Shannon Vallor (2017)" rel="nofollow" href="https://link.springer.com/chapter/10.1007/978-3-319-61043-6_8">AI and the Automation of Wisdom - Shannon Vallor (2017)</a></li><li><a title="The AI Mirror — how technology blocks human potential | FT (Subscription Required)" rel="nofollow" href="https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011">The AI Mirror — how technology blocks human potential | FT (Subscription Required)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Can AI ever be truly wise, or are we just seeing reflections of ourselves? Philosophy Professor Shannon Vallor joins Igor and Charles to explore how technology shapes human wisdom, why we’ve been thinking about AI all wrong, and what it really means to align machines with our values. Shannon unpacks the AI Mirror metaphor, suggesting that today’s AI isn’t a thinking mind but a reflection of human data, Igor considers whether technology could ever help us become wiser rather than just more efficient, and Charles wonders if philosophy can guide better decisions in a world increasingly shaped by algorithms. Welcome to Episode 63.</p><p>Special Guest: Shannon Vallor.</p><p>Links:</p><ul><li><a title="Shannon Vallor | University of Edinburgh" rel="nofollow" href="https://edwebprofiles.ed.ac.uk/profile/shannon-vallor">Shannon Vallor | University of Edinburgh</a></li><li><a title="Shannon Vallor | Edinburgh Futures Institute, The University of Edinburgh" rel="nofollow" href="https://efi.ed.ac.uk/people/shannon-vallor/">Shannon Vallor | Edinburgh Futures Institute, The University of Edinburgh</a></li><li><a title="The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking - Shannon Vallor (2024)" rel="nofollow" href="https://global.oup.com/academic/product/the-ai-mirror-9780197759066?cc=gb&amp;lang=en&amp;">The AI Mirror: How to Reclaim Our Humanity in an Age of Machine Thinking - Shannon Vallor (2024)</a></li><li><a title="How philosopher Shannon Vallor delivered the year’s best critique of AI - Fast Company (2024)" rel="nofollow" href="https://www.fastcompany.com/91240425/how-philosopher-shannon-vallor-delivered-the-years-best-critique-of-ai">How philosopher Shannon Vallor delivered the year’s best critique of AI - Fast Company (2024)</a></li><li><a title="The Turing Lectures: Can we live with AI? - Shannon Vallor" rel="nofollow" href="https://www.youtube.com/watch?v=7iX-wiKvYHs">The Turing Lectures: Can we live with AI? 
- Shannon Vallor</a></li><li><a title="The Danger Of Superhuman AI Is Not What You Think | Noema - Shannon Vallor" rel="nofollow" href="https://www.noemamag.com/the-danger-of-superhuman-ai-is-not-what-you-think/">The Danger Of Superhuman AI Is Not What You Think | Noema - Shannon Vallor</a></li><li><a title="The Thoughts The Civilized Keep | Noema - Shannon Vallor" rel="nofollow" href="https://www.noemamag.com/the-thoughts-the-civilized-keep/">The Thoughts The Civilized Keep | Noema - Shannon Vallor</a></li><li><a title="AI Is the Black Mirror | Nautilus - Philip Ball" rel="nofollow" href="https://nautil.us/ai-is-the-black-mirror-1169121/">AI Is the Black Mirror | Nautilus - Philip Ball</a></li><li><a title="Technology and the Virtues A Philosophical Guide to a Future Worth Wanting - Shannon Vallor (Book)" rel="nofollow" href="https://www.google.com/books/edition/Technology_and_the_Virtues/RaCkDAAAQBAJ?hl=en&amp;gbpv=0">Technology and the Virtues A Philosophical Guide to a Future Worth Wanting - Shannon Vallor (Book)</a></li><li><a title="Moral Machines: From Value Alignment to Embodied Virtue - Wendell Wallach, Shannon Vallor (2020)" rel="nofollow" href="https://academic.oup.com/book/33540/chapter-abstract/287906775?redirectedFrom=fulltext&amp;login=false">Moral Machines: From Value Alignment to Embodied Virtue - Wendell Wallach, Shannon Vallor (2020)</a></li><li><a title="AI and the Automation of Wisdom - Shannon Vallor (2017)" rel="nofollow" href="https://link.springer.com/chapter/10.1007/978-3-319-61043-6_8">AI and the Automation of Wisdom - Shannon Vallor (2017)</a></li><li><a title="The AI Mirror — how technology blocks human potential | FT (Subscription Required)" rel="nofollow" href="https://www.ft.com/content/67d38081-82d3-4979-806a-eba0099f8011">The AI Mirror — how technology blocks human potential | FT (Subscription Required)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>55: Wise of the Machines (with Sina Fazelpour)</title>
  <link>https://onwisdompodcast.fireside.fm/55</link>
  <guid isPermaLink="false">fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e</guid>
  <pubDate>Sat, 05 Aug 2023 12:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e.mp3" length="38604716" type="audio/mpeg"/>
  <itunes:episode>55</itunes:episode>
  <itunes:title>Wise of the Machines (with Sina Fazelpour)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</itunes:subtitle>
  <itunes:duration>1:04:20</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55. Special Guest: Sina Fazelpour.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, Sina Fazelpour, Artificial Intelligence, AI, Machine Learning, Bias, Algorithms, Alignment, Diversity, Constitutional AI, AlphaGo, Lee Sedol, God’s touch, ChatGPT, LLM, large language model</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A.
Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>47: Charting Pandemic Waters: A Common Wisdom Model for Uncertain Times (with Howard Nusbaum) - Rebroadcast</title>
  <link>https://onwisdompodcast.fireside.fm/47</link>
  <guid isPermaLink="false">c6066877-c59c-401d-9833-66b59aaa6102</guid>
  <pubDate>Wed, 20 Jul 2022 17:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/c6066877-c59c-401d-9833-66b59aaa6102.mp3" length="37224144" type="audio/mpeg"/>
  <itunes:episode>47</itunes:episode>
  <itunes:title>Charting Pandemic Waters: A Common Wisdom Model for Uncertain Times (with Howard Nusbaum) - Rebroadcast</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>(First Broadcast - 21st June 2020)

What is the value of wisdom in the time of the global pandemic? Does the community of behavioural scientists studying wisdom agree on anything about the nature of wisdom? Can we say what we now know about wisdom and, conversely, what do we know we don’t yet know? Howard Nusbaum joins Igor and Charles to discuss the recently assembled Toronto Wisdom Task Force and the resulting Common Wisdom Model, meta-cognition, the thorny issue of moral-grounding, and sage advice regarding how to measure wisdom in the lab. Igor stresses the importance of building solid theoretical foundations for the field in the context of the pandemic, Howard reflects on the viability of evil wisdom, and Charles learns that we had better pay close attention today to the values we program into the decision-making robots of tomorrow. </itunes:subtitle>
  <itunes:duration>1:02:02</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>(First Broadcast - 21st June 2020)
What is the value of wisdom in the time of the global pandemic? Does the community of behavioural scientists studying wisdom agree on anything about the nature of wisdom? Can we say what we now know about wisdom and, conversely, what do we know we don’t yet know? Howard Nusbaum joins Igor and Charles to discuss the recently assembled Toronto Wisdom Task Force and the resulting Common Wisdom Model, meta-cognition, the thorny issue of moral-grounding, and sage advice regarding how to measure wisdom in the lab. Igor stresses the importance of building solid theoretical foundations for the field in the context of the pandemic, Howard reflects on the viability of evil wisdom, and Charles learns that we had better pay close attention today to the values we program into the decision-making robots of tomorrow.  Special Guest: Howard Nusbaum.
</description>
  <itunes:keywords>adversity, alfred binet, artificial intelligence, balance of self- and other-oriented interests, candace vogler, centre for practical wisdom, common wisdom model, cortex-adaptability, dialectical thinking, emotions, epistemic humility, happiness, howard nusbaum, iq, jingle-jangle fallacy, keith stanovich, meaning, metacognition, moral-grounding, nancy snow, perspectival insight, perspectivism, philosophy, propositional logic, psychology, purpose, pursuit of truth, reasoning, shared humanity, social science, social-cognitive processing, toronto wisdom task force, university of chicago, valerie tiberius, value-action gap, values, well being, wisdom, wisdom measurement</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>(First Broadcast - 21st June 2020)</p>

<p>What is the value of wisdom in the time of the global pandemic? Does the community of behavioural scientists studying wisdom agree on anything about the nature of wisdom? Can we say what we now know about wisdom and, conversely, what do we know we don’t yet know? Howard Nusbaum joins Igor and Charles to discuss the recently assembled Toronto Wisdom Task Force and the resulting Common Wisdom Model, meta-cognition, the thorny issue of moral-grounding, and sage advice regarding how to measure wisdom in the lab. Igor stresses the importance of building solid theoretical foundations for the field in the context of the pandemic, Howard reflects on the viability of evil wisdom, and Charles learns that we had better pay close attention today to the values we program into the decision-making robots of tomorrow. </p><p>Special Guest: Howard Nusbaum.</p><p>Links:</p><ul><li><a title="Original Broadcast: Episode 29 - Charting Pandemic Waters: A Common Wisdom Model for Uncertain Times (with Howard Nusbaum)" rel="nofollow" href="https://onwisdompodcast.fireside.fm/29">Original Broadcast: Episode 29 - Charting Pandemic Waters: A Common Wisdom Model for Uncertain Times (with Howard Nusbaum)</a></li><li><a title="The Science of Wisdom (AEON)" rel="nofollow" href="https://aeon.co/essays/how-psychological-scientists-found-the-empirical-path-to-wisdom">The Science of Wisdom (AEON)</a></li><li><a title="The Science of Wisdom in a Polarized World: Knowns and Unknowns: Psychological Inquiry: Vol 31, No 2" rel="nofollow" href="https://www.tandfonline.com/doi/abs/10.1080/1047840X.2020.1750917?journalCode=hpli20">The Science of Wisdom in a Polarized World: Knowns and Unknowns: Psychological Inquiry: Vol 31, No 2</a></li><li><a title="A Common Model Is Essential for a Cumulative Science of Wisdom: Psychological Inquiry: Vol 31, No 2" rel="nofollow" href="https://www.tandfonline.com/doi/abs/10.1080/1047840X.2020.1750920?journalCode=hpli20">A Common Model Is Essential for a Cumulative 
Science of Wisdom: Psychological Inquiry: Vol 31, No 2</a></li><li><a title="University of Chicago Center for Practical Wisdom | Center for Practical Wisdom | The University of Chicago" rel="nofollow" href="https://wisdomcenter.uchicago.edu/">University of Chicago Center for Practical Wisdom | Center for Practical Wisdom | The University of Chicago</a></li><li><a title="Wisdom in Context - Igor Grossmann, 2017" rel="nofollow" href="https://journals.sagepub.com/doi/abs/10.1177/1745691616672066">Wisdom in Context - Igor Grossmann, 2017</a></li><li><a title="Toronto Wisdom Task Force Meeting 2019 (edited) - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=9tGxVBEoebU">Toronto Wisdom Task Force Meeting 2019 (edited) - YouTube</a></li><li><a title="AI Open Letter - Future of Life Institute" rel="nofollow" href="https://futureoflife.org/2015/10/27/ai-open-letter/?cn-reloaded=1">AI Open Letter - Future of Life Institute</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>(First Broadcast - 21st June 2020)</p>

<p>What is the value of wisdom in the time of the global pandemic? Does the community of behavioural scientists studying wisdom agree on anything about the nature of wisdom? Can we say what we now know about wisdom and, conversely, what do we know we don’t yet know? Howard Nusbaum joins Igor and Charles to discuss the recently assembled Toronto Wisdom Task Force and the resulting Common Wisdom Model, meta-cognition, the thorny issue of moral-grounding, and sage advice regarding how to measure wisdom in the lab. Igor stresses the importance of building solid theoretical foundations for the field in the context of the pandemic, Howard reflects on the viability of evil wisdom, and Charles learns that we had better pay close attention today to the values we program into the decision-making robots of tomorrow. </p><p>Special Guest: Howard Nusbaum.</p><p>Links:</p><ul><li><a title="Original Broadcast: Episode 29 - Charting Pandemic Waters: A Common Wisdom Model for Uncertain Times (with Howard Nusbaum)" rel="nofollow" href="https://onwisdompodcast.fireside.fm/29">Original Broadcast: Episode 29 - Charting Pandemic Waters: A Common Wisdom Model for Uncertain Times (with Howard Nusbaum)</a></li><li><a title="The Science of Wisdom (AEON)" rel="nofollow" href="https://aeon.co/essays/how-psychological-scientists-found-the-empirical-path-to-wisdom">The Science of Wisdom (AEON)</a></li><li><a title="The Science of Wisdom in a Polarized World: Knowns and Unknowns: Psychological Inquiry: Vol 31, No 2" rel="nofollow" href="https://www.tandfonline.com/doi/abs/10.1080/1047840X.2020.1750917?journalCode=hpli20">The Science of Wisdom in a Polarized World: Knowns and Unknowns: Psychological Inquiry: Vol 31, No 2</a></li><li><a title="A Common Model Is Essential for a Cumulative Science of Wisdom: Psychological Inquiry: Vol 31, No 2" rel="nofollow" href="https://www.tandfonline.com/doi/abs/10.1080/1047840X.2020.1750920?journalCode=hpli20">A Common Model Is Essential for a Cumulative 
Science of Wisdom: Psychological Inquiry: Vol 31, No 2</a></li><li><a title="University of Chicago Center for Practical Wisdom | Center for Practical Wisdom | The University of Chicago" rel="nofollow" href="https://wisdomcenter.uchicago.edu/">University of Chicago Center for Practical Wisdom | Center for Practical Wisdom | The University of Chicago</a></li><li><a title="Wisdom in Context - Igor Grossmann, 2017" rel="nofollow" href="https://journals.sagepub.com/doi/abs/10.1177/1745691616672066">Wisdom in Context - Igor Grossmann, 2017</a></li><li><a title="Toronto Wisdom Task Force Meeting 2019 (edited) - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=9tGxVBEoebU">Toronto Wisdom Task Force Meeting 2019 (edited) - YouTube</a></li><li><a title="AI Open Letter - Future of Life Institute" rel="nofollow" href="https://futureoflife.org/2015/10/27/ai-open-letter/?cn-reloaded=1">AI Open Letter - Future of Life Institute</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>29: Charting Pandemic Waters: A Common Wisdom Model for Uncertain Times (with Howard Nusbaum)</title>
  <link>https://onwisdompodcast.fireside.fm/29</link>
  <guid isPermaLink="false">d7ca46f8-22e1-417d-9ab2-8565fbd42c48</guid>
  <pubDate>Sun, 21 Jun 2020 14:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/d7ca46f8-22e1-417d-9ab2-8565fbd42c48.mp3" length="32644620" type="audio/mpeg"/>
  <itunes:episode>29</itunes:episode>
  <itunes:title>Charting Pandemic Waters: A Common Wisdom Model for Uncertain Times (with Howard Nusbaum)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>What is the value of wisdom in the time of the global pandemic? Does the community of behavioural scientists studying wisdom agree on anything about the nature of wisdom? Can we say what we now know about wisdom and, conversely, what do we know we don’t yet know? Howard Nusbaum joins Igor and Charles to discuss the recently assembled Toronto Wisdom Task Force and the resulting Common Wisdom Model, meta-cognition, the thorny issue of moral-grounding, and sage advice regarding how to measure wisdom in the lab. Igor stresses the importance of building solid theoretical foundations for the field in the context of the pandemic, Howard reflects on the viability of evil wisdom, and Charles learns that we had better pay close attention today to the values we program into the decision-making robots of tomorrow. Welcome to Episode 29.</itunes:subtitle>
  <itunes:duration>1:08:00</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>What is the value of wisdom in the time of the global pandemic? Does the community of behavioural scientists studying wisdom agree on anything about the nature of wisdom? Can we say what we now know about wisdom and, conversely, what do we know we don’t yet know? Howard Nusbaum joins Igor and Charles to discuss the recently assembled Toronto Wisdom Task Force and the resulting Common Wisdom Model, meta-cognition, the thorny issue of moral-grounding, and sage advice regarding how to measure wisdom in the lab. Igor stresses the importance of building solid theoretical foundations for the field in the context of the pandemic, Howard reflects on the viability of evil wisdom, and Charles learns that we had better pay close attention today to the values we program into the decision-making robots of tomorrow. Welcome to Episode 29. Special Guest: Howard Nusbaum.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, howard nusbaum, centre for practical wisdom, university of chicago, common wisdom model, Toronto wisdom task force, moral-grounding, social-cognitive processing, balance of self- and other-oriented interests, pursuit of truth, shared humanity, metacognition, cortex-adaptability, perspectivism, dialectical thinking, epistemic humility, propositional logic, perspectival insight, IQ, Alfred Binet, wisdom measurement, jingle-jangle fallacy, adversity, artificial intelligence, keith stanovich, values, valerie tiberius, nancy snow, candace vogler, value-action gap</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>What is the value of wisdom in the time of the global pandemic? Does the community of behavioural scientists studying wisdom agree on anything about the nature of wisdom? Can we say what we now know about wisdom and, conversely, what do we know we don’t yet know? Howard Nusbaum joins Igor and Charles to discuss the recently assembled Toronto Wisdom Task Force and the resulting Common Wisdom Model, meta-cognition, the thorny issue of moral-grounding, and sage advice regarding how to measure wisdom in the lab. Igor stresses the importance of building solid theoretical foundations for the field in the context of the pandemic, Howard reflects on the viability of evil wisdom, and Charles learns that we had better pay close attention today to the values we program into the decision-making robots of tomorrow. Welcome to Episode 29.</p><p>Special Guest: Howard Nusbaum.</p><p>Links:</p><ul><li><a title="The Science of Wisdom in a Polarized World: Knowns and Unknowns: Psychological Inquiry: Vol 31, No 2" rel="nofollow" href="https://www.tandfonline.com/doi/full/10.1080/1047840X.2020.1750917">The Science of Wisdom in a Polarized World: Knowns and Unknowns: Psychological Inquiry: Vol 31, No 2</a></li><li><a title="A Common Model Is Essential for a Cumulative Science of Wisdom: Psychological Inquiry: Vol 31, No 2" rel="nofollow" href="https://www.tandfonline.com/doi/full/10.1080/1047840X.2020.1750920">A Common Model Is Essential for a Cumulative Science of Wisdom: Psychological Inquiry: Vol 31, No 2</a></li><li><a title="University of Chicago Center for Practical Wisdom | Center for Practical Wisdom | The University of Chicago" rel="nofollow" href="https://wisdomcenter.uchicago.edu/">University of Chicago Center for Practical Wisdom | Center for Practical Wisdom | The University of Chicago</a></li><li><a title="Wisdom in Context - Igor Grossmann, 2017" rel="nofollow" href="https://journals.sagepub.com/doi/abs/10.1177/1745691616672066">Wisdom in Context - Igor 
Grossmann, 2017</a></li><li><a title="Toronto Wisdom Task Force Meeting 2019 (edited) - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=9tGxVBEoebU">Toronto Wisdom Task Force Meeting 2019 (edited) - YouTube</a></li><li><a title="AI Open Letter - Future of Life Institute" rel="nofollow" href="https://futureoflife.org/ai-open-letter/?cn-reloaded=1">AI Open Letter - Future of Life Institute</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>What is the value of wisdom in the time of the global pandemic? Does the community of behavioural scientists studying wisdom agree on anything about the nature of wisdom? Can we say what we now know about wisdom and, conversely, what do we know we don’t yet know? Howard Nusbaum joins Igor and Charles to discuss the recently assembled Toronto Wisdom Task Force and the resulting Common Wisdom Model, meta-cognition, the thorny issue of moral-grounding, and sage advice regarding how to measure wisdom in the lab. Igor stresses the importance of building solid theoretical foundations for the field in the context of the pandemic, Howard reflects on the viability of evil wisdom, and Charles learns that we had better pay close attention today to the values we program into the decision-making robots of tomorrow. Welcome to Episode 29.</p><p>Special Guest: Howard Nusbaum.</p><p>Links:</p><ul><li><a title="The Science of Wisdom in a Polarized World: Knowns and Unknowns: Psychological Inquiry: Vol 31, No 2" rel="nofollow" href="https://www.tandfonline.com/doi/full/10.1080/1047840X.2020.1750917">The Science of Wisdom in a Polarized World: Knowns and Unknowns: Psychological Inquiry: Vol 31, No 2</a></li><li><a title="A Common Model Is Essential for a Cumulative Science of Wisdom: Psychological Inquiry: Vol 31, No 2" rel="nofollow" href="https://www.tandfonline.com/doi/full/10.1080/1047840X.2020.1750920">A Common Model Is Essential for a Cumulative Science of Wisdom: Psychological Inquiry: Vol 31, No 2</a></li><li><a title="University of Chicago Center for Practical Wisdom | Center for Practical Wisdom | The University of Chicago" rel="nofollow" href="https://wisdomcenter.uchicago.edu/">University of Chicago Center for Practical Wisdom | Center for Practical Wisdom | The University of Chicago</a></li><li><a title="Wisdom in Context - Igor Grossmann, 2017" rel="nofollow" href="https://journals.sagepub.com/doi/abs/10.1177/1745691616672066">Wisdom in Context - Igor 
Grossmann, 2017</a></li><li><a title="Toronto Wisdom Task Force Meeting 2019 (edited) - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=9tGxVBEoebU">Toronto Wisdom Task Force Meeting 2019 (edited) - YouTube</a></li><li><a title="AI Open Letter - Future of Life Institute" rel="nofollow" href="https://futureoflife.org/ai-open-letter/?cn-reloaded=1">AI Open Letter - Future of Life Institute</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
