August 2, 2017

Explainable AI & Machine Teaching: Mark Hammond at O’Reilly 2017

At the O’Reilly AI conference in New York this past June, I had a chance to speak on an important topic: the latest techniques and research underway to build Explainable AI.

Interpretability is crucial to broader adoption of applied AI, yet today’s most popular approaches to building AI models offer little of it. Explainability of intelligent systems has run the gamut from traditional expert systems, which are fully explainable but inflexible and hard to use, to deep neural networks, which are effective but virtually impossible to see inside.

In this talk, I examine two approaches to building explainability into AI models, learning deep explanations and model induction, and discuss how effective each is at explaining classification tasks.

I also look at how a third category, learning more interpretable models with recomposability, uses composable building blocks to bring explainability to control tasks. This last approach, along with Machine Teaching, is a cornerstone of the Bonsai Platform. At the end of the talk, I demonstrate how this works in the Platform by building an AI model that beats the game Lunar Lander.
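For readers unfamiliar with the benchmark: Lunar Lander is a standard reinforcement learning control task in which an agent repeatedly observes the lander's state, picks an engine action, and collects a reward. The minimal sketch below uses OpenAI Gym purely as an illustrative assumption (it is not the Bonsai Platform's interface, and the random policy stands in for whatever trained model you'd plug in) to show the observe-act-reward loop the model must master.

```python
# Generic sketch of the Lunar Lander control loop using OpenAI Gym
# (illustrative only; not the Bonsai Platform's approach).
import gym

env = gym.make("LunarLander-v2")   # discrete actions: noop, fire left/main/right engine
observation = env.reset()          # 8-dim state: position, velocity, angle, leg contacts

total_reward = 0.0
done = False
while not done:
    action = env.action_space.sample()                  # placeholder random policy;
    observation, reward, done, info = env.step(action)  # a trained model would choose here
    total_reward += reward

print(f"Episode finished with total reward {total_reward:.1f}")
env.close()
```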

I invite you to watch the talk in its entirety below or view the slides here. To learn more about how Bonsai is building Explainable AI with Machine Teaching, visit our How It Works page or check out the Bonsai Early Access Program.

