<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" encoding="UTF-8" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web01.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 12 Apr 2026 08:47:43 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>On Wisdom - Episodes Tagged with “Bias”</title>
    <link>https://onwisdompodcast.fireside.fm/tags/bias</link>
    <pubDate>Sat, 05 Aug 2023 12:00:00 -0400</pubDate>
    <description>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>What does science tell us about wisdom?</itunes:subtitle>
    <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
    <itunes:summary>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>psychology, science, happiness, philosophy, wisdom, decision-making, reasoning, society</itunes:keywords>
    <itunes:owner>
      <itunes:name>Charles Cassidy and Igor Grossmann</itunes:name>
      <itunes:email>charlesdavidcassidy@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Science">
  <itunes:category text="Social Sciences"/>
</itunes:category>
<itunes:category text="Society &amp; Culture"/>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<item>
  <title>55: Wise of the Machines (with Sina Fazelpour)</title>
  <link>https://onwisdompodcast.fireside.fm/55</link>
  <guid isPermaLink="false">fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e</guid>
  <pubDate>Sat, 05 Aug 2023 12:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e.mp3" length="38604716" type="audio/mpeg"/>
  <itunes:episode>55</itunes:episode>
  <itunes:title>Wise of the Machines (with Sina Fazelpour)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</itunes:subtitle>
  <itunes:duration>1:04:20</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55. Special Guest: Sina Fazelpour.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, Sina Fazelpour, Artificial Intelligence, AI, Machine Learning, Bias, Algorithms, Alignment, Diversity, Constitutional AI, AlphaGo, Lee Sedol, God’s touch, ChatGPT, LLM, large language model</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>18: The End of the World is Nigh: Polarised Tribes, Passionate Words, and the Partisan Brain (with Jay Van Bavel)</title>
  <link>https://onwisdompodcast.fireside.fm/18</link>
  <guid isPermaLink="false">7704fc91-c204-4189-81fe-8f135ddfc9d2</guid>
  <pubDate>Sat, 29 Jun 2019 06:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/7704fc91-c204-4189-81fe-8f135ddfc9d2.mp3" length="30708633" type="audio/mp3"/>
  <itunes:episode>18</itunes:episode>
  <itunes:title>The End of the World is Nigh: Polarised Tribes, Passionate Words, and the Partisan Brain (with Jay Van Bavel)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>How did politics get so damn polarised? Jay Van Bavel joins Igor and Charles to discuss political polarisation, the partisan brain, the inexorable rise of superheroes in dark times, the misperceptions of polarisation levels, and how to reach out to other tribes. Igor highlights the partisanship-transcending benefits of a Watchmen-style alien invasion, Jay proposes the judicious use of ‘off-ramps’ when engaging with loved ones from across the political divide, and Charles learns that even the abstract purity of Mathematics is not immune from the tentacles of partisanship when guns are involved. Welcome to Episode 18.</itunes:subtitle>
  <itunes:duration>1:03:58</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>How did politics get so damn polarised? Jay Van Bavel joins Igor and Charles to discuss political polarisation, the partisan brain, the inexorable rise of superheroes in dark times, the misperceptions of polarisation levels, and how to reach out to other tribes. Igor highlights the partisanship-transcending benefits of a Watchmen-style alien invasion, Jay proposes the judicious use of ‘off-ramps’ when engaging with loved ones from across the political divide, and Charles learns that even the abstract purity of Mathematics is not immune from the tentacles of partisanship when guns are involved. Welcome to Episode 18. Special Guest: Jay Van Bavel.
</description>
  <itunes:keywords>culture, psychology, social psychology, wisdom, partisanship, polarisation, off-ramps, echo chambers, moral-emotional language, social media, bias, politics, mathematics, motivated reasoning, superheroes, perception, neuroscience, Jay Van Bavel</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>How did politics get so damn polarised? Jay Van Bavel joins Igor and Charles to discuss political polarisation, the partisan brain, the inexorable rise of superheroes in dark times, the misperceptions of polarisation levels, and how to reach out to other tribes. Igor highlights the partisanship-transcending benefits of a Watchmen-style alien invasion, Jay proposes the judicious use of ‘off-ramps’ when engaging with loved ones from across the political divide, and Charles learns that even the abstract purity of Mathematics is not immune from the tentacles of partisanship when guns are involved. Welcome to Episode 18.</p><p>Special Guest: Jay Van Bavel.</p><p>Links:</p><ul><li><a title="Social Perception and Evaluation Lab" rel="nofollow" href="http://www.psych.nyu.edu/vanbavel/lab/index.html">Social Perception and Evaluation Lab</a></li><li><a title="The dangers of the partisan brain | Jay Van Bavel | TEDxSkoll - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=NOkFWZLJk8I">The dangers of the partisan brain | Jay Van Bavel | TEDxSkoll - YouTube</a></li><li><a title="The Partisan Brain: An Identity-Based Model of Political Belief - ScienceDirect" rel="nofollow" href="https://www.sciencedirect.com/science/article/abs/pii/S1364661318300172">The Partisan Brain: An Identity-Based Model of Political Belief - ScienceDirect</a></li><li><a title="Emotion shapes the diffusion of moralized content in social networks - Brady, Wills, Jost, Tucker and Van Bavel (2016)" rel="nofollow" href="http://www.psych.nyu.edu/vanbavel/lab/documents/Brady.etal.2017.PNAS.pdf">Emotion shapes the diffusion of moralized content in social networks - Brady, Wills, Jost, Tucker and Van Bavel (2016)</a></li><li><a title="An Ideological Asymmetry in the Diffusion of Moralized Content on Social Media Among Political Leaders - Brady, Wills, Burkart, Jost, Van Bavel (2018)" rel="nofollow" href="http://www.psych.nyu.edu/vanbavel/lab/documents/Brady.etal.2019.JEPG.pdf">An Ideological Asymmetry in the Diffusion of Moralized Content on Social Media Among Political Leaders - Brady, Wills, Burkart, Jost, Van Bavel (2018)</a></li><li><a title="How to go viral: stick to your morals but add a hint of emotion | WIRED UK" rel="nofollow" href="https://www.wired.co.uk/article/moral-emotional-content-is-the-key-to-going-viral">How to go viral: stick to your morals but add a hint of emotion | WIRED UK</a></li><li><a title="What Brexit can teach us about the psychology of fear - Vox" rel="nofollow" href="https://www.vox.com/2016/6/25/12023768/brexit-psychology-fear">What Brexit can teach us about the psychology of fear - Vox</a></li><li><a title="Letters to Young Scientists | Science | AAAS" rel="nofollow" href="https://www.sciencemag.org/tags/letters-young-scientists">Letters to Young Scientists | Science | AAAS</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>How did politics get so damn polarised? Jay Van Bavel joins Igor and Charles to discuss political polarisation, the partisan brain, the inexorable rise of superheroes in dark times, the misperceptions of polarisation levels, and how to reach out to other tribes. Igor highlights the partisanship-transcending benefits of a Watchmen-style alien invasion, Jay proposes the judicious use of ‘off-ramps’ when engaging with loved ones from across the political divide, and Charles learns that even the abstract purity of Mathematics is not immune from the tentacles of partisanship when guns are involved. Welcome to Episode 18.</p><p>Special Guest: Jay Van Bavel.</p><p>Links:</p><ul><li><a title="Social Perception and Evaluation Lab" rel="nofollow" href="http://www.psych.nyu.edu/vanbavel/lab/index.html">Social Perception and Evaluation Lab</a></li><li><a title="The dangers of the partisan brain | Jay Van Bavel | TEDxSkoll - YouTube" rel="nofollow" href="https://www.youtube.com/watch?v=NOkFWZLJk8I">The dangers of the partisan brain | Jay Van Bavel | TEDxSkoll - YouTube</a></li><li><a title="The Partisan Brain: An Identity-Based Model of Political Belief - ScienceDirect" rel="nofollow" href="https://www.sciencedirect.com/science/article/abs/pii/S1364661318300172">The Partisan Brain: An Identity-Based Model of Political Belief - ScienceDirect</a></li><li><a title="Emotion shapes the diffusion of moralized content in social networks - Brady, Wills, Jost, Tucker and Van Bavel (2016)" rel="nofollow" href="http://www.psych.nyu.edu/vanbavel/lab/documents/Brady.etal.2017.PNAS.pdf">Emotion shapes the diffusion of moralized content in social networks - Brady, Wills, Jost, Tucker and Van Bavel (2016)</a></li><li><a title="An Ideological Asymmetry in the Diffusion of Moralized Content on Social Media Among Political Leaders - Brady, Wills, Burkart, Jost, Van Bavel (2018)" rel="nofollow" href="http://www.psych.nyu.edu/vanbavel/lab/documents/Brady.etal.2019.JEPG.pdf">An Ideological Asymmetry in the Diffusion of Moralized Content on Social Media Among Political Leaders - Brady, Wills, Burkart, Jost, Van Bavel (2018)</a></li><li><a title="How to go viral: stick to your morals but add a hint of emotion | WIRED UK" rel="nofollow" href="https://www.wired.co.uk/article/moral-emotional-content-is-the-key-to-going-viral">How to go viral: stick to your morals but add a hint of emotion | WIRED UK</a></li><li><a title="What Brexit can teach us about the psychology of fear - Vox" rel="nofollow" href="https://www.vox.com/2016/6/25/12023768/brexit-psychology-fear">What Brexit can teach us about the psychology of fear - Vox</a></li><li><a title="Letters to Young Scientists | Science | AAAS" rel="nofollow" href="https://www.sciencemag.org/tags/letters-young-scientists">Letters to Young Scientists | Science | AAAS</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
