MindSET

Confronting AI’s Trust Issues with Ron Keesing

Episode Summary

When technology works, humans trust it very quickly. But you can’t go from zero to building a really trusted AI system overnight. It takes time for technology to get to the stage where people are ready to develop a trust relationship with it. Sharing how we can confront AI’s trust issues is Ron Keesing, Head of AI at Leidos.

Key takeaways:

The consequences if AI isn’t trusted
From self-driving cars to the Sea Hunter platform
The 4AI methodology for developing trust in AI
Navigating the ethics of AI
What’s next for trusted AI?

Episode Notes

When technology works, humans trust it very quickly. But you can’t go from zero to building a really trusted AI system overnight. It takes time for technology to get to the stage where people are ready to develop a trust relationship with it. 

We’ve come so far in the last decade developing AI at Leidos, and as a company we’re incredibly passionate about our trusted AI mission. While research has shown we trust self-driving cars within 10 minutes, there’s still a tremendous need for trusted AI across the government, in combat, and in matters of national security.

In this episode of the Leidos MindSET podcast, Ron Keesing, Head of AI at Leidos, talks about confronting AI’s trust issues. He shares how the AI we view today as an easily trusted system is in fact the result of extensive development, testing, expertise, and interaction over a long period of time.

On today’s podcast: