January 27, 2017

Explainable AI: Bonsai Speaks on Deep Learning at SF Meetup

First of all, thank you to Mattermark for hosting us and to the SF Bay Area Machine Learning Meetup for inviting Bonsai to speak last week. They were a friendly bunch of folks, and Sarah Catanzaro from Canvas Ventures was a force to be reckoned with in her talk on the pitfalls of machine intelligence startups. Keen Browne here at Bonsai spoke about the Recomposability and Explainability of Deep Learning Systems, and judging by the fantastic questions from the audience, everyone was deeply engaged in both talks.

Explainability is about trust. It's important to know why our self-driving car decided to slam on the brakes, or, perhaps someday, why the IRS auto-audit bot decided it was your turn. Whether the decision was good or bad, we need visibility into how it was made, so that we can bring human expectations in line with how the algorithm actually behaves. In an earlier blog post on explainability, we touched on its importance in driving broader AI market adoption; today we're going to explore the concept from a research perspective.

DARPA defines the goal of Explainable Artificial Intelligence (or XAI, as they call it) as being to "create a suite of new or modified machine learning techniques that produce explainable models that, when combined with effective explanation techniques, enable end users to understand, appropriately trust, and effectively manage the emerging generation of Artificial Intelligence (AI) systems." That is quite a mouthful! To help us understand the concept, Keen unpacks what explainability is and looks at how some notable universities around the country are approaching the problem.

[Embedded video of Keen's talk at the meetup]

For those of you who don't have time to watch the video (the TL;DW, if you will), I'll go over the main topics Keen covered at the meetup, with the relevant links from the presentation.

Learning Semantic Associations

The first paper addressed covers learning semantic associations in video, from a whitepaper titled Multimedia Event Detection and Recounting, out of SRI International Sarnoff and others. This research also appeared in a presentation on XAI (pictured below) by David Gunning at Georgia Tech. The idea is to pick out what the authors call "concepts" in a video from its audio, its visuals, or even text found in the imagery or captions. The end result is rich metadata that lets you search through the video for whatever the computer thought it saw: a "groom", a "bride", "they are looking this way", "the crowd is looking that way", and so on, in a wedding video.
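To make that concrete, here is a minimal Python sketch of what that kind of concept metadata might look like and how it makes a video searchable. The detector output below is entirely hypothetical; the actual MED/MER system is far richer than this.

```python
# Illustrative sketch only: the MED/MER system itself is not public, so the
# detections below are stand-ins. The point is the shape of the output:
# concepts tagged with timestamps and confidences that make a video searchable.
from dataclasses import dataclass

@dataclass
class ConceptDetection:
    concept: str      # e.g. "bride", "groom", "crowd cheering"
    start_sec: float  # where in the video the evidence appears
    end_sec: float
    score: float      # detector confidence
    evidence: str     # which modality fired: "visual", "audio", or "text"

def search(detections, query, min_score=0.5):
    """Return the time spans where a concept was detected with enough confidence."""
    return [(d.start_sec, d.end_sec, d.score)
            for d in detections
            if d.concept == query and d.score >= min_score]

# Hypothetical output for a wedding video.
detections = [
    ConceptDetection("bride", 12.0, 18.5, 0.91, "visual"),
    ConceptDetection("groom", 14.0, 20.0, 0.87, "visual"),
    ConceptDetection("crowd cheering", 95.0, 102.0, 0.78, "audio"),
]
print(search(detections, "bride"))  # -> [(12.0, 18.5, 0.91)]
```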

Generating Visual Explanations

The second method, Generating Visual Explanations, comes from a paper out of UC Berkeley that distinguishes introspection from justification. You may not be able to introspect on what the algorithm is doing based on its weights, but you can justify, after the fact, why the classifier did what it did. The aim is to generate accurate, and more importantly discriminative, sentences about the image subject based on the visual features and text descriptions.
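The paper trains a caption generator with a discriminative, reinforcement-style loss, which is well beyond a blog snippet, but the scoring trade-off can be sketched: a good explanation should both fit the image and point at the predicted class. The candidate sentences and scores below are made up purely for illustration.

```python
# Toy illustration of the "relevance vs. discriminativeness" trade-off from
# Generating Visual Explanations. In the paper both scores come from learned
# models (a caption model and a sentence classifier) trained jointly; here
# they are invented numbers.
candidates = [
    # (sentence, p(sentence fits this image), p(predicted class | sentence))
    ("This is a bird.",                               0.95, 0.05),
    ("This bird has a yellow belly and black wings.", 0.80, 0.70),
    ("This bird is perched on a branch.",             0.85, 0.10),
]

def explanation_score(relevance, discriminativeness, lam=0.5):
    # lam trades off "describes the image" against "justifies the class".
    return (1 - lam) * relevance + lam * discriminativeness

best = max(candidates, key=lambda c: explanation_score(c[1], c[2]))
print(best[0])  # -> the sentence that is both descriptive and discriminative
```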

Local Interpretable Model-Agnostic Explanations (LIME)

The third, and one of our favorites, is LIME out of the University of Washington, from a paper aptly titled "Why Should I Trust You?". They did something pretty cool: instead of trying to apply a textual label, they highlight the parts of a picture or body of text that made it an exemplar. A perfectly natural dog playing an acoustic guitar was used for the paper and is shown below. This works by taking "super-pixels" (contiguous patches of similar pixels), graying some of them out, and feeding the resulting image back through the classifier to see whether the classification changes. The image below presents the super-pixels with the highest positive weights as the explanation, graying out everything else.
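Here is roughly what that loop looks like in code. This is a simplified from-scratch sketch, not the authors' open-source lime package; classifier_fn (a function from a batch of images to class probabilities) is assumed to come from whatever model you are explaining, and the real implementation weights samples with a proper distance kernel rather than the crude fraction-kept weight used here.

```python
# Simplified LIME-style image explanation: segment into super-pixels, randomly
# gray some out, re-classify, and fit a weighted linear surrogate whose
# coefficients say which patches mattered. Not the official `lime` package.
import numpy as np
from skimage.segmentation import slic
from sklearn.linear_model import Ridge

def explain_image(image, classifier_fn, target_class,
                  n_segments=50, n_samples=500, top_k=5, rng=None):
    rng = np.random.default_rng(0) if rng is None else rng
    segments = slic(image, n_segments=n_segments)          # super-pixel labels
    seg_ids = np.unique(segments)
    fill = image.mean(axis=(0, 1))                         # "gray out" with the mean color

    masks = rng.integers(0, 2, size=(n_samples, len(seg_ids)))  # on/off per super-pixel
    probs = []
    for mask in masks:
        perturbed = image.copy()
        for off in seg_ids[mask == 0]:                     # gray out switched-off patches
            perturbed[segments == off] = fill
        probs.append(classifier_fn(perturbed[np.newaxis])[0, target_class])

    # Weight samples by how close they are to the original image (fraction of
    # patches kept; LIME proper uses an exponential distance kernel), then fit
    # a linear surrogate: its coefficients are the explanation.
    weights = masks.mean(axis=1)
    surrogate = Ridge(alpha=1.0).fit(masks, np.array(probs), sample_weight=weights)
    top = np.argsort(surrogate.coef_)[::-1][:top_k]
    return seg_ids[top]                                    # super-pixels with highest positive weight
```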

When working with text from the 20 Newsgroups data set, the researchers removed groups of similar words instead of super-pixels to test whether the classification changed. The task was to classify whether a post came from a Christian newsgroup or an atheist newsgroup, two classes that are hard to distinguish because they share many words. Keen goes through the sample code in the video, and you can follow along on GitHub.
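If you want to reproduce the newsgroups example yourself, the authors' lime package makes it a few lines on top of a standard scikit-learn pipeline. The snippet below is a close approximation of that tutorial; the exact model and preprocessing in the notebook on GitHub may differ slightly.

```python
# Roughly the 20 Newsgroups example from the LIME paper/tutorial: train a
# simple atheism-vs-christianity classifier, then ask LIME which words drove
# one prediction. Requires scikit-learn and the `lime` package.
from sklearn.datasets import fetch_20newsgroups
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline
from lime.lime_text import LimeTextExplainer

categories = ["alt.atheism", "soc.religion.christian"]
train = fetch_20newsgroups(subset="train", categories=categories)
test = fetch_20newsgroups(subset="test", categories=categories)

pipeline = make_pipeline(TfidfVectorizer(lowercase=False), MultinomialNB(alpha=0.01))
pipeline.fit(train.data, train.target)

explainer = LimeTextExplainer(class_names=["atheism", "christian"])
exp = explainer.explain_instance(test.data[0], pipeline.predict_proba, num_features=6)
print(exp.as_list())  # [(word, weight), ...] -- which words pushed the prediction
```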

Rationalizing Neural Predictions

The fourth paper considered is research out of MIT's Computer Science and Artificial Intelligence Laboratory titled Rationalizing Neural Predictions. In this research, two neural networks, a generator and an encoder, are trained together. Similar to the research done with LIME, the goal is to identify which part of a paragraph, in this case a beer review, corresponds to the rating it was given. The generator's job is to select the most probable short, continuous chunk of text (the rationale), and the encoder's job is to predict the rating from only that selected text, so the two networks jointly learn which words justify the score.
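For readers who want the shape of the model, here is a minimal PyTorch sketch of the generator/encoder pair. Note that the paper samples hard binary rationales and trains them with a REINFORCE-style gradient, whereas this toy keeps the selection soft so it stays short, differentiable, and runnable on fake data; all dimensions and data below are made up.

```python
# Stripped-down sketch of the generator/encoder pair from Rationalizing
# Neural Predictions. The real model uses hard, sampled rationales; this toy
# uses soft selection probabilities to keep everything differentiable.
import torch
import torch.nn as nn

VOCAB, EMB, HID = 1000, 64, 64

class Generator(nn.Module):
    """Reads the review and outputs, per token, a probability of being selected."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True, bidirectional=True)
        self.out = nn.Linear(2 * HID, 1)

    def forward(self, tokens):
        h, _ = self.rnn(self.emb(tokens))
        return torch.sigmoid(self.out(h)).squeeze(-1)    # (batch, seq_len) selection probs

class Encoder(nn.Module):
    """Predicts the rating from only the (soft-)selected words."""
    def __init__(self):
        super().__init__()
        self.emb = nn.Embedding(VOCAB, EMB)
        self.rnn = nn.LSTM(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, 1)

    def forward(self, tokens, selection):
        masked = self.emb(tokens) * selection.unsqueeze(-1)  # zero out unselected words
        h, _ = self.rnn(masked)
        return self.out(h[:, -1]).squeeze(-1)                # predicted rating

gen, enc = Generator(), Encoder()
tokens = torch.randint(0, VOCAB, (8, 40))    # fake batch of 8 reviews, 40 tokens each
ratings = torch.rand(8)                      # fake beer-aspect ratings in [0, 1]

selection = gen(tokens)
pred = enc(tokens, selection)
# Loss = prediction error + sparsity (select few words) + coherence (select contiguous chunks)
loss = ((pred - ratings) ** 2).mean() \
     + 0.01 * selection.mean() \
     + 0.01 * (selection[:, 1:] - selection[:, :-1]).abs().mean()
loss.backward()                              # in practice, wrap this in an optimizer loop
```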

Explainable Reinforcement Learning

Lastly, let's take a look at what Bonsai is doing to bring explainability to reinforcement learning. When you're building a model, what if, instead of focusing on how to build it, you focused on how to train it? In Bonsai's programming language, Inkling, you describe a curriculum for training a learning system to perform a task, broken down into concepts and lessons, and the backend handles building models and running them through your curriculum.

By decomposing behaviors into sub-behaviors and organizing them into a hierarchy of concepts, you can see which behavior is being chosen. Once you have a hierarchy that breaks down the task the system will perform, you can ask which of those concepts contributed to any given behavior the system generates. For instance, you can ask whether the car turned right because it was executing its pedestrian avoidance concept, or because its traffic sign detection concept reported a no-left-turn sign. Explainability means expressing how a concept contributed to the result and how much it contributed.
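Inkling is where that hierarchy is actually expressed, but as a language-agnostic illustration, here is a small Python sketch of what it means to ask which concept contributed to an action and by how much. The concepts, weights, and arbitration rule are invented for the example and are not Bonsai's implementation.

```python
# Illustrative only -- not Bonsai's implementation or Inkling syntax. A tiny
# model of "decompose behavior into concepts, then attribute the action to
# them": each concept votes for an action with a weight, an arbiter combines
# the votes, and the explanation is each concept's share of the decision.
from dataclasses import dataclass

@dataclass
class ConceptVote:
    concept: str
    action: str
    weight: float   # how strongly this concept argued for its action

def decide_and_explain(votes):
    totals = {}
    for v in votes:
        totals[v.action] = totals.get(v.action, 0.0) + v.weight
    action = max(totals, key=totals.get)
    # Explanation: which concepts contributed to the chosen action, and how much.
    contributions = {v.concept: v.weight / totals[action]
                     for v in votes if v.action == action}
    return action, contributions

votes = [
    ConceptVote("pedestrian_avoidance",   "turn_right", 0.7),
    ConceptVote("traffic_sign_detection", "turn_right", 0.2),  # saw a "no left turn" sign
    ConceptVote("lane_keeping",           "go_straight", 0.4),
]
action, why = decide_and_explain(votes)
print(action, why)  # -> turn_right, with pedestrian_avoidance ~0.78, traffic_sign_detection ~0.22
```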

Our approach is focused on four things that, taken together, amount to our strategy for explainable reinforcement learning.

If you take away only one thing from this blog, it's that explainability in AI is important! And not only is it important, it is possible. A human can explain what they are doing when approaching a problem, and your AI should be able to as well. So consider the different techniques out there, and if you are frustrated by the lack of explainability in your current approach, or are embarking on a quest where explainability is required, I would love to hear from you.

This is Katherine, Developer Advocate here at Bonsai, signing off. Next week I’ll talk a little about what I do here at Bonsai, and what you can expect from me in the future. You can find me @KatMcCat or over on our forums until then.
