Scope is a measure of the key concepts, both in number and complexity, that you can teach an AI model through training. Most AI platforms today struggle to incorporate more than a single concept in their training process, due to the time required and the general complexity of low-level algorithms and techniques.
The inner workings of AI model decision-making processes today are predominantly black boxes, providing very little insight into what contributed to a prediction or action. Explainability is the ability of an AI platform to convey which inputs contributed to a specific prediction, so that producers and consumers of AI can better debug, inspect, and trust a system's decision-making process.
Reusability in the context of AI is the ability to reuse, share, and collaborate on the development of AI models just as development teams do with traditional code. Most AI models produced today are difficult to reuse because they don't properly separate concerns, intermixing the specifics of algorithmic approaches, data flow, training sources, and training techniques. Moreover, with current approaches, any time previously spent training a model is usually discarded when revisions are made.
As deep learning and other machine learning algorithms, architectures, and topologies evolve, it is important to preserve your development investments. Most machine learning models are hard-coded to a specific technique and cannot easily be adapted to take advantage of advances in the latest low-level algorithms or in the underlying high-performance computing architectures.
Current solutions force unnecessary tradeoffs
The Bonsai platform enables developers to add intelligence to a specific system or business process without requiring extensive expertise in complex AI algorithms.