<?xml version="1.0" encoding="UTF-8"?>
<rss version="2.0" encoding="UTF-8" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:sy="http://purl.org/rss/1.0/modules/syndication/" xmlns:admin="http://webns.net/mvcb/" xmlns:atom="http://www.w3.org/2005/Atom/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:googleplay="http://www.google.com/schemas/play-podcasts/1.0" xmlns:itunes="http://www.itunes.com/dtds/podcast-1.0.dtd" xmlns:fireside="http://fireside.fm/modules/rss/fireside">
  <channel>
    <fireside:hostname>web02.fireside.fm</fireside:hostname>
    <fireside:genDate>Sun, 26 Apr 2026 07:38:51 -0500</fireside:genDate>
    <generator>Fireside (https://fireside.fm)</generator>
    <title>On Wisdom - Episodes Tagged with “LLM”</title>
    <link>https://onwisdompodcast.fireside.fm/tags/llm</link>
    <pubDate>Wed, 01 Nov 2023 21:00:00 -0400</pubDate>
    <description>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</description>
    <language>en-us</language>
    <itunes:type>episodic</itunes:type>
    <itunes:subtitle>What does science tell us about wisdom?</itunes:subtitle>
    <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
    <itunes:summary>On Wisdom features a social and cognitive scientist in Toronto and an educator in London discussing the latest empirical science regarding the nature of wisdom. Igor Grossmann runs the Wisdom &amp; Culture Lab at the University of Waterloo in Canada. Charles Cassidy runs the Evidence-Based Wisdom project in London, UK. The podcast thrives on a diet of freewheeling conversation on wisdom, decision-making, wellbeing, and society, and includes regular guest spots with leading behavioral scientists from the field of wisdom research and beyond. Welcome to The On Wisdom Podcast.
</itunes:summary>
    <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
    <itunes:explicit>no</itunes:explicit>
    <itunes:keywords>psychology, science, happiness, philosophy, wisdom, decision-making, reasoning, society</itunes:keywords>
    <itunes:owner>
      <itunes:name>Charles Cassidy and Igor Grossmann</itunes:name>
      <itunes:email>charlesdavidcassidy@gmail.com</itunes:email>
    </itunes:owner>
<itunes:category text="Science">
  <itunes:category text="Social Sciences"/>
</itunes:category>
<itunes:category text="Society &amp; Culture"/>
<itunes:category text="Society &amp; Culture">
  <itunes:category text="Philosophy"/>
</itunes:category>
<item>
  <title>58: The Social Robots are Coming! (with Kerstin Dautenhahn)</title>
  <link>https://onwisdompodcast.fireside.fm/58</link>
  <guid isPermaLink="false">7a5cee1a-3976-409d-8a6a-b1d425245225</guid>
  <pubDate>Wed, 01 Nov 2023 21:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/7a5cee1a-3976-409d-8a6a-b1d425245225.mp3" length="29424765" type="audio/mpeg"/>
  <itunes:episode>58</itunes:episode>
  <itunes:title>The Social Robots are Coming! (with Kerstin Dautenhahn)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>Can we create wise robots? Kerstin Dautenhahn joins Igor and Charles to dive into the intriguing world of social robots, the finer points of “Robotiquette,” and the potential role such robots can play in supporting therapeutic treatments. Igor reflects on the limits of robot-based wisdom, Kerstin reveals the potential of Generative AI like ChatGPT to generate false information about her own professional identity, and Charles considers the perils of socially awkward machines. Welcome to Episode 58.</itunes:subtitle>
  <itunes:duration>49:02</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>Can we create wise robots? Kerstin Dautenhahn joins Igor and Charles to dive into the intriguing world of social robots, the finer points of “Robotiquette,” and the potential role such robots can play in supporting therapeutic treatments. Igor reflects on the limits of robot-based wisdom, Kerstin reveals the potential of Generative AI like ChatGPT to generate false information about her own professional identity, and Charles considers the perils of socially awkward machines. Welcome to Episode 58. Special Guest: Kerstin Dautenhahn.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, social robots, robotics, robotiquette, AI, LLM, ChatGPT, wise robots, Kerstin Dautenhahn, human-robot interaction, robot-assisted interventions, social anxiety, Assistive Technology, Artificial Life</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>Can we create wise robots? Kerstin Dautenhahn joins Igor and Charles to dive into the intriguing world of social robots, the finer points of “Robotiquette,” and the potential role such robots can play in supporting therapeutic treatments. Igor reflects on the limits of robot-based wisdom, Kerstin reveals the potential of Generative AI like ChatGPT to generate false information about her own professional identity, and Charles considers the perils of socially awkward machines. Welcome to Episode 58.</p><p>Special Guest: Kerstin Dautenhahn.</p><p>Links:</p><ul><li><a title="Kerstin Dautenhahn&#39;s page | University of Waterloo" rel="nofollow" href="https://uwaterloo.ca/electrical-computer-engineering/profile/kdautenh">Kerstin Dautenhahn's page | University of Waterloo</a></li><li><a title="Social and Intelligent Robotics Research Laboratory (SIRRL)" rel="nofollow" href="https://uwaterloo.ca/social-intelligent-robotics-research-lab/">Social and Intelligent Robotics Research Laboratory (SIRRL)</a></li><li><a title="Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd" rel="nofollow" href="https://www.youtube.com/watch?v=wPK2SWC0kx0">Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd</a></li><li><a title="Socially intelligent robots: dimensions of human–robot interaction - Dautenhahn (2007)" rel="nofollow" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2346526/">Socially intelligent robots: dimensions of human–robot interaction - Dautenhahn (2007)</a></li><li><a title="Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety - S Rasouli, G Gupta, E Nilsen, K Dautenhahn (2022)" rel="nofollow" href="https://pubmed.ncbi.nlm.nih.gov/35096198/">Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety - S Rasouli, G Gupta, E Nilsen, K Dautenhahn (2022)</a></li><li><a title="User Evaluation of Social Robots as a Tool in One-to-One Instructional Settings for Students with Learning Disabilities - N Azizi, S Chandra, M Gray, J Fane, M Sager, K Dautenhahn (2023)" rel="nofollow" href="https://www.researchgate.net/publication/367976887_User_Evaluation_of_Social_Robots_as_a_Tool_in_One-to-One_Instructional_Settings_for_Students_with_Learning_Disabilities">User Evaluation of Social Robots as a Tool in One-to-One Instructional Settings for Students with Learning Disabilities - N Azizi, S Chandra, M Gray, J Fane, M Sager, K Dautenhahn (2023)</a></li><li><a title="Opportunities for social robots in the stuttering clinic: A review and proposed scenarios - S Chandra, G Gupta, T Loucks, K Dautenhahn (2022)" rel="nofollow" href="https://www.researchgate.net/publication/361507850_Opportunities_for_social_robots_in_the_stuttering_clinic_A_review_and_proposed_scenarios">Opportunities for social robots in the stuttering clinic: A review and proposed scenarios - S Chandra, G Gupta, T Loucks, K Dautenhahn (2022)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>Can we create wise robots? Kerstin Dautenhahn joins Igor and Charles to dive into the intriguing world of social robots, the finer points of “Robotiquette,” and the potential role such robots can play in supporting therapeutic treatments. Igor reflects on the limits of robot-based wisdom, Kerstin reveals the potential of Generative AI like ChatGPT to generate false information about her own professional identity, and Charles considers the perils of socially awkward machines. Welcome to Episode 58.</p><p>Special Guest: Kerstin Dautenhahn.</p><p>Links:</p><ul><li><a title="Kerstin Dautenhahn&#39;s page | University of Waterloo" rel="nofollow" href="https://uwaterloo.ca/electrical-computer-engineering/profile/kdautenh">Kerstin Dautenhahn's page | University of Waterloo</a></li><li><a title="Social and Intelligent Robotics Research Laboratory (SIRRL)" rel="nofollow" href="https://uwaterloo.ca/social-intelligent-robotics-research-lab/">Social and Intelligent Robotics Research Laboratory (SIRRL)</a></li><li><a title="Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd" rel="nofollow" href="https://www.youtube.com/watch?v=wPK2SWC0kx0">Robots are not human, even if we want them to be | Kerstin Dautenhahn | TEDxEastEnd</a></li><li><a title="Socially intelligent robots: dimensions of human–robot interaction - Dautenhahn (2007)" rel="nofollow" href="https://www.ncbi.nlm.nih.gov/pmc/articles/PMC2346526/">Socially intelligent robots: dimensions of human–robot interaction - Dautenhahn (2007)</a></li><li><a title="Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety - S Rasouli, G Gupta, E Nilsen, K Dautenhahn (2022)" rel="nofollow" href="https://pubmed.ncbi.nlm.nih.gov/35096198/">Potential Applications of Social Robots in Robot-Assisted Interventions for Social Anxiety - S Rasouli, G Gupta, E Nilsen, K Dautenhahn (2022)</a></li><li><a title="User Evaluation of Social Robots as a Tool in One-to-One Instructional Settings for Students with Learning Disabilities - N Azizi, S Chandra, M Gray, J Fane, M Sager, K Dautenhahn (2023)" rel="nofollow" href="https://www.researchgate.net/publication/367976887_User_Evaluation_of_Social_Robots_as_a_Tool_in_One-to-One_Instructional_Settings_for_Students_with_Learning_Disabilities">User Evaluation of Social Robots as a Tool in One-to-One Instructional Settings for Students with Learning Disabilities - N Azizi, S Chandra, M Gray, J Fane, M Sager, K Dautenhahn (2023)</a></li><li><a title="Opportunities for social robots in the stuttering clinic: A review and proposed scenarios - S Chandra, G Gupta, T Loucks, K Dautenhahn (2022)" rel="nofollow" href="https://www.researchgate.net/publication/361507850_Opportunities_for_social_robots_in_the_stuttering_clinic_A_review_and_proposed_scenarios">Opportunities for social robots in the stuttering clinic: A review and proposed scenarios - S Chandra, G Gupta, T Loucks, K Dautenhahn (2022)</a></li></ul>]]>
  </itunes:summary>
</item>
<item>
  <title>55: Wise of the Machines (with Sina Fazelpour)</title>
  <link>https://onwisdompodcast.fireside.fm/55</link>
  <guid isPermaLink="false">fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e</guid>
  <pubDate>Sat, 05 Aug 2023 12:00:00 -0400</pubDate>
  <author>Charles Cassidy and Igor Grossmann</author>
  <enclosure url="https://aphid.fireside.fm/d/1437767933/6e7bd116-2782-4422-a140-42f329164842/fdc73ee1-e7d8-47ad-9d27-9ff1aadc7f2e.mp3" length="38604716" type="audio/mpeg"/>
  <itunes:episode>55</itunes:episode>
  <itunes:title>Wise of the Machines (with Sina Fazelpour)</itunes:title>
  <itunes:episodeType>full</itunes:episodeType>
  <itunes:author>Charles Cassidy and Igor Grossmann</itunes:author>
  <itunes:subtitle>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</itunes:subtitle>
  <itunes:duration>1:04:20</itunes:duration>
  <itunes:explicit>no</itunes:explicit>
  <itunes:image href="https://media24.fireside.fm/file/fireside-images-2024/podcasts/images/6/6e7bd116-2782-4422-a140-42f329164842/cover.jpg?v=1"/>
  <description>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55. Special Guest: Sina Fazelpour.
</description>
  <itunes:keywords>wisdom, psychology, philosophy, social science, happiness, well being, meaning, reasoning, emotions, purpose, Sina Fazelpour, Artificial Intelligence, AI, Machine Learning, Bias, Algorithms, Alignment, Diversity, Constitutional AI, AlphaGo, Lee Sedol, God’s touch, ChatGPT, LLM, large language model</itunes:keywords>
  <content:encoded>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </content:encoded>
  <itunes:summary>
    <![CDATA[<p>How can we make AI wiser? And could AI make us wiser in return? Sina Fazelpour joins Igor and Charles to discuss the problem of bias in algorithms, how we might make machine learning systems more diverse, and the thorny challenge of alignment. Igor considers whether interacting with AIs might help us achieve higher levels of understanding, Sina suggests that setting up AIs to promote certain values may be problematic in a pluralistic society, and Charles is intrigued to learn about the opportunities offered by teaming up with our machine friends. Welcome to Episode 55.</p><p>Special Guest: Sina Fazelpour.</p><p>Links:</p><ul><li><a title="Sina Fazelpour&#39;s Website" rel="nofollow" href="https://sinafazelpour.com/">Sina Fazelpour's Website</a></li><li><a title="AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)" rel="nofollow" href="https://www.science.org/stoken/author-tokens/ST-1256/full">AI and the transformation of social science research | Science - Igor Grossmann, Matthew Feinberg, Dawn C. Parker, Nicholas A. Christakis, Philip E. Tetlock, William A. Cunningham (2023)</a></li><li><a title="Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)" rel="nofollow" href="https://dl.acm.org/doi/pdf/10.1145/3375627.3375828">Algorithmic Fairness from a Non-ideal Perspective - Sina Fazelpour, Zachary C. Lipton (2020)</a></li><li><a title="Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)" rel="nofollow" href="https://journals.sagepub.com/doi/10.1177/20539517221082027">Diversity in sociotechnical machine learning systems - Sina Fazelpour, Maria De-Arteaga (2022)</a></li><li><a title="Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)" rel="nofollow" href="https://arxiv.org/abs/2211.13972">Picking on the Same Person: Does Algorithmic Monoculture lead to Outcome Homogenization? - Rishi Bommasani, Kathleen A. Creel, Ananya Kumar, Dan Jurafsky, Percy Liang (2022)</a></li><li><a title="Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)" rel="nofollow" href="https://compass.onlinelibrary.wiley.com/doi/full/10.1111/phc3.12760">Algorithmic bias: Senses, sources, solutions - Sina Fazelpour, David Danks (2021)</a></li><li><a title="Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)" rel="nofollow" href="https://arxiv.org/abs/2212.08073">Constitutional AI: Harmlessness from AI Feedback - Yuntao Bai et al. (2022)</a></li><li><a title="Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3531146.3533088">Taxonomy of Risks posed by Language Models - Laura Weidinger et al. (2022)</a></li><li><a title="Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)" rel="nofollow" href="https://www.nature.com/articles/s42256-022-00458-8">Large pre-trained language models contain human-like biases of what is right and wrong to do - Patrick Schramowski, Cigdem Turan, Nico Andersen, Constantin A. Rothkopf &amp; Kristian Kersting (2022)</a></li><li><a title="On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)" rel="nofollow" href="https://dl.acm.org/doi/10.1145/3442188.3445922">On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? - Emily M. Bender, Timnit Gebru, Angelina McMillan-Major, Shmargaret Shmitchell (2021)</a></li><li><a title="In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)" rel="nofollow" href="https://www.wired.com/2016/03/two-moves-alphago-lee-sedol-redefined-future/">In Two Moves, AlphaGo and Lee Sedol Redefined the Future | Wired Magazine (2016)</a></li></ul>]]>
  </itunes:summary>
</item>
  </channel>
</rss>
