🎯 Outsider's Guide to AI Risk Management Frameworks: NIST Generative AI | irResponsible AI EP5S01

Released Tuesday, 4th June 2024

Got questions, comments, or topics you want us to cover? Text us!

In this episode, we discuss AI Risk Management Frameworks (RMFs), focusing on NIST's Generative AI Profile:
✅ Demystify misunderstandings about AI RMFs: what they are for and what they are not for
✅ Unpack the challenges of evaluating AI frameworks
✅ Explain how inert knowledge in frameworks needs to be activated through processes and user-centered design to bridge the gap between theory and practice

What can you do?
🎯 Two simple things: like and subscribe. You have no idea how much it will annoy the wrong people if this series gains traction.  

🎙️ Who are your hosts and why should you even bother to listen?
Upol Ehsan makes AI systems explainable and responsible so that people who aren’t at the table don’t end up on the menu. He is currently at Georgia Tech and had past lives at {Google, IBM, Microsoft} Research. His work pioneered the field of Human-centered Explainable AI. 

Shea Brown is an astrophysicist turned AI auditor, working to ensure companies protect ordinary people from the dangers of AI. He’s the Founder and CEO of BABL AI, an AI auditing firm.

All opinions expressed here are strictly the hosts’ personal opinions and do not represent their employers' perspectives. 

Follow us for more Responsible AI and the occasional sh*tposting:
Upol: https://twitter.com/UpolEhsan 
Shea: https://www.linkedin.com/in/shea-brown-26050465/ 

CHAPTERS:
00:00 - What will we discuss in this episode?
01:22 - What are AI Risk Management Frameworks?
03:03 - Understanding NIST's Generative AI Profile
04:00 - What's the difference between NIST's AI RMF and the GenAI Profile?
08:38 - What are other equivalent AI RMFs?
10:00 - How do we engage with AI Risk Management Frameworks?
14:28 - Evaluating the Effectiveness of Frameworks
17:20 - Challenges of Framework Evaluation
21:05 - Evaluation Metrics are NOT always quantitative
22:32 - Frameworks are inert: they need to be activated
24:40 - The Gap of Implementing a Framework in Practice
26:45 - User-centered Design solutions to address the gap
28:36 - Consensus-based framework creation is a chaotic process
30:40 - A tip for small businesses to amplify their RAI profile
31:30 - Takeaways


#ResponsibleAI #ExplainableAI #podcasts #aiethics

Support the Show.


