Where some businesses are employing artificial intelligence to sell you more, IBM is using it to sell you less.
Specifically, it’s employing one set of AI tools to minimise the amount of compute time on its cloud services you need to buy in order to train another set of AI tools to run your business.
That will also allow IBM’s customers to make the most of another scarce and expensive resource, AI expertise, according to Ruchir Puri, chief architect for IBM Watson and an IBM Fellow.
“We’re lowering the barrier to entry for machine learning capabilities for enterprise,” Puri said.
The barrier Puri is talking of is the scarcity of human expertise in deep learning, a way of training an artificial intelligence in a particular domain of expertise.
The process of training an AI is computationally intensive, and typically requires staff with two kinds of expertise: knowledge of the business domain concerned, which most companies will have, and skill in developing and tuning deep learning models, which they may not.
“It’s becoming the bottleneck for enterprises, not everyone can afford an AI expert,” he said.
IBM incorporated deep learning tools into its Watson suite of AI technologies a few years ago, and has now amassed enough experience of fine-tuning the deep learning process that it has used that experience to train an AI to fine-tune the training of others.
This fine tuning concerns the choice of “hyper-parameters” used in the AI training process. A deep learning expert will have an instinctive feel for what’s right for a particular task, allowing them to minimise the amount of computing resources needed to develop a model, whereas a beginner might be forced to plod through all possible combinations of parameters in order to find the right one.
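IBM hasn't published the internals of its tuner, but the compute savings it describes can be sketched by contrasting an exhaustive grid search with a budgeted, guided search. The hyperparameter names and values below are illustrative assumptions, not IBM's actual tuning space, and random sampling stands in for whatever smarter search the service uses:

```python
import itertools
import random

# Hypothetical search space for two common deep learning hyperparameters.
search_space = {
    "learning_rate": [0.1, 0.01, 0.001, 0.0001],
    "batch_size": [16, 32, 64, 128],
}

# Exhaustive grid search: every combination must be trained and evaluated.
grid = list(itertools.product(*search_space.values()))
print(len(grid))     # 16 full training runs

# A guided search (here crudely modelled as random sampling) evaluates
# only a budgeted subset, trading exhaustiveness for far less compute.
random.seed(0)
budget = 5
sampled = random.sample(grid, budget)
print(len(sampled))  # 5 training runs
```

Even in this toy case the budgeted search cuts the number of training runs by more than two thirds; with more hyperparameters the grid grows multiplicatively, so the gap widens quickly.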
“Our job is to help you through the process by automated tuning, thereby narrowing the compute time and resources you might otherwise have used,” Puri said.
This tuning process is part of IBM’s latest Watson Studio offering, Deep Learning-as-a-Service, for which the company unveiled pricing on Tuesday.
There are three tiers, the first being a free one for businesses wanting to see how good IBM's model-tuning AI is.
The Enterprise (v2) plan is the most expensive, with a minimum spend of US$6,000 per month. This includes access to the environment for five authorised users and unlimited viewer collaborators, and 5,000 “capacity unit-hours.” (Additional hours are $0.50 each.)
One capacity unit consists of two virtual CPUs with 8GB of RAM; bigger and smaller virtual server instances are available, with a corresponding variation in their capacity unit cost.
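Using the published figures, a monthly bill on the Enterprise (v2) plan can be worked out as follows. This is a sketch that assumes usage beyond the included allowance is simply billed per additional capacity unit-hour, with no other charges:

```python
# Enterprise (v2) plan figures quoted above.
MINIMUM_SPEND = 6000.0        # US$ per month
INCLUDED_UNIT_HOURS = 5000    # capacity unit-hours included in the minimum spend
OVERAGE_RATE = 0.50           # US$ per additional capacity unit-hour

def monthly_cost(unit_hours_used: int) -> float:
    """Return the assumed monthly bill for a given number of capacity unit-hours.

    One capacity unit-hour corresponds to a 2-vCPU/8GB instance running
    for one hour; larger instances consume proportionally more units.
    """
    overage = max(0, unit_hours_used - INCLUDED_UNIT_HOURS)
    return MINIMUM_SPEND + overage * OVERAGE_RATE

print(monthly_cost(4000))  # 6000.0 -- under the allowance, minimum spend applies
print(monthly_cost(7000))  # 7000.0 -- 2,000 extra unit-hours at $0.50 each
```

The main wrinkle for budgeting is that usage below the allowance still costs the full $6,000 minimum, so the effective per-hour price falls the closer a customer gets to the 5,000-hour ceiling.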
While IBM’s goal is obviously to drive usage of its cloud AI tools (making them cheaper and simpler to use makes it more likely that businesses will try them out), the company is also making efforts to grow the overall market.
One of these involves an expansion of IBM’s Spark Technology Center, which focused on applying Apache Spark, an open source big data processing engine, to deep learning and other applications.
Under its new name, the Center for Open Source Data and AI Technologies, it will aim to make it easier for enterprises to create, deploy and manage AI models, and has already launched its first two initiatives.
One, the Fabric for Deep Learning (FfDL, or “fiddle”), uses the containerised application management system Kubernetes to simplify the management of computing resources in deep learning frameworks. It can orchestrate TensorFlow, Caffe, Caffe2, PyTorch, and Keras workloads across cloud computing fabrics composed of CPUs and GPUs.
The other, Model Asset eXchange (MAX), is a kind of app store for trained models, providing a standardised way for businesses to deploy deep learning models that they, or others, have already built.
If you want to incorporate an AI into your existing cloud-based business processes, “You can get it trained within Watson Studio then take it outside,” Puri said.