<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" encoding="UTF-8" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Wed, 15 Apr 2026 12:26:42 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>On Wisdom - Episodes Tagged with “Machine Learning”</title>
    <link>https://onwisdompodcast.fireside.fm/tags/machine%20learning</link>
    <pubDate>Sat, 05 Aug 2023 12:00:00 -0400</pubDate>
    <description>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>What does science tell us about wisdom?</itunes:subtitle>
    <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
    <itunes:summary>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>psychology, science, happiness, philosophy, wisdom, decision-making, reasoning, society</itunes:keywords>
    <itunes:owner>
      <itunes:name>Charles Cassidy and Igor Grossmann</itunes:name>
      <itunes:email>charlesdavidcassidy@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Science">
  <itunes:category text="Social Sciences"/>
</itunes:category>
<itunes:category text="Society &amp; Culture"/>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<item>
  <title>55: Wise of the Machines (with Sina Fazelpour)</title>
  <link>https://onwisdompodcast.fireside.fm/55</link>
  <guid isPermaLink="false">fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e</guid>
  <pubDate>Sat, 05 Aug 2023 12:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e.mp3" length="38604716" type="audio/mpeg"/>
  <itunes:episode>55</itunes:episode>
  <itunes:title>Wise of the Machines (with Sina Fazelpour)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</itunes:subtitle>
  <itunes:duration>1:04:20</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55. Special Guest: Sina Fazelpour.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, Sina Fazelpour, Artificial Intelligence, AI, Machine Learning, Bias, Algorithms, Alignment, Diversity, Constitutional AI, AlphaGo, Lee Sedol, God’s touch, ChatGPT, LLM, large language model</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
