Some years ago, monitoring software vendor Dynatrace made the decision to pivot to an AI-first approach. The decision was based, as the story goes, on the belief that it was better "to be the disruptors rather than the disrupted", in the words of current CEO John Van Siclen. Moving from a traditional monitoring and logging vendor to a 'software intelligence' company would involve a massive leap of faith, though, and a radical restructuring of the company from the inside out.
Five years on from the initial decision, Computerworld UK sat down with Van Siclen during Dynatrace's annual Perform conference in Las Vegas, to talk about how he transformed the company and went about building the underlying AI engine that powers its software today.
The decision to embrace AI at all costs was a simple one, according to Van Siclen.
"The monitoring business was about gathering different kinds of data together and presenting it to humans to figure out what to do with it," he says. "We wanted to change this paradigm. What people really wanted was faster access to answers, so why not just surface the answers, as opposed to making people wade through more and more data?"
He says this was the only logical response to today's reality, where data volumes are growing by a factor of 10 or even 100 every year. AI, he argues, is the only smart way to deal with such a colossal collection of data.
Before the business developed its AI core, called Davis, Van Siclen says it sought out other companies in the field that claimed to be developing intelligence or correlation engines.
"We found out they were really weak and we figured out what not to do," he says.
Instead, the company decided to take its own approach, allowing it to build its AI models from the ground up, but only after answering a vital question: would it be a deterministic or a deep learning system?
The answer, to Van Siclen, was clear. "In the world of cloud, you can't choose a [deep] learning model because it's infinitely variable," he says. "The way I describe it to people is: if learning systems could deal with infinite variability, IBM Watson would be predicting Wall Street - it can't. It can play chess really well because there is a finite number of variables.
"The cloud is like Wall Street which is why our AI engine is built on top of a dependency map." This map lays out every piece of the jigsaw that interacts at throughout the technology stack.
Van Siclen says the model's greatest value is its ability to eliminate noise in the data and serve up actionable insights to IT teams. The system also intelligently prioritises issues according to whether customers are affected.
The programme is based on algorithms that were developed in-house by mathematicians and data scientists, with domain knowledge layered on top. One part of the programme determines what the issue is, and another surfaces the corresponding context for the incident that is useful to tailor a response.
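Dynatrace has not published Davis's internals, but the deterministic idea described here can be sketched in rough outline: given a dependency map and a set of unhealthy entities, walk down the failure chain and report only the deepest failing dependency, then rank the results by customer impact. A minimal sketch, with entirely hypothetical names and logic:

```python
from collections import defaultdict

class DependencyMap:
    """Maps each service to the services it depends on."""

    def __init__(self):
        self.depends_on = defaultdict(set)

    def add_dependency(self, service, dependency):
        self.depends_on[service].add(dependency)

    def root_causes(self, unhealthy):
        """Unhealthy entities with no unhealthy dependency of their own -
        the deepest points in the failure chain, not the symptoms above them."""
        return {
            entity for entity in unhealthy
            if not any(dep in unhealthy for dep in self.depends_on[entity])
        }

def prioritise(roots, customer_facing):
    """Rank root causes so customer-affecting issues come first."""
    return sorted(roots, key=lambda e: (e not in customer_facing, e))

# A database failure cascades up through the stack; the analysis
# flags only the database, not the two services failing above it.
dmap = DependencyMap()
dmap.add_dependency("web-frontend", "checkout-api")
dmap.add_dependency("checkout-api", "orders-db")

unhealthy = {"web-frontend", "checkout-api", "orders-db"}
roots = dmap.root_causes(unhealthy)  # {"orders-db"}
```

The point of the deterministic approach is visible even at this toy scale: three alerts collapse into one answer, with no training data required.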
Van Siclen notes that he has talked to a lot of CIOs and CTOs - from major banks, hospitality and healthcare corporations - about how Dynatrace modelled its AI engine.
"If they're technical, they get it, because they look at the problem and say 'that's what we've figured out is the exact way to do it,'" he says.
Van Siclen asserts that Dynatrace competitors - if they're seriously working with AI at all - are experimenting with learning rather than deterministic models, which he insists will only lead to dead ends.
"They're never going to get there, it's a waste of time," he says. "The longer they keep trying, the more advanced we get," he laughs.
He believes that these efforts stem from a lack of real commitment to AI: "I don't think they've spent a lot of time with their best engineers worrying about it. I think that it's a project off to the side, and they're trying to just bolt it on and make a marketing statement rather than a real business value statement."
"Instead of doing the marketing statement, we make sure it works, but then we forget to market it sometimes," he jokes.
Dynatrace also firmly put its money where its mouth is, making a huge bet on its belief in the future of AI in software intelligence. "It was a good $100 million investment that went into that platform before it was ready for enterprise-class workloads," Van Siclen says.
Is he worried about other companies in this space suddenly pivoting to a deterministic model approach too? "Well they could do it," he admits, "but there's lots of pieces to that puzzle."
"You need to have all the data sources to create a dependency map that's rich enough to solve these problems," he says.
How do you go about obtaining all of these data sources within a dynamic cloud environment? "Well, you're going to need instrumentation that can deal with continuous discovery of everything that's changing," says Van Siclen. "You can't do it manually, so you have to change your instrumentation."
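The continuous-discovery idea can be illustrated with a trivial sketch (hypothetical names, not Dynatrace's actual instrumentation): periodically snapshot the running topology and diff it against what the dependency map already knows, so the map keeps pace with an environment that changes underneath it.

```python
def diff_topology(known, discovered):
    """Diff the known dependency map's entities against a fresh
    topology snapshot, returning what appeared and what vanished."""
    added = discovered - known      # new entities to instrument
    removed = known - discovered    # entities to retire from the map
    return added, removed

# One discovery cycle: a replica and a cache have spun up,
# and the original database has been decommissioned.
known = {"web-frontend", "checkout-api", "orders-db"}
discovered = {"web-frontend", "checkout-api", "orders-db-replica", "cache"}

added, removed = diff_topology(known, discovered)
```

Run on a schedule, a loop like this is what keeps a dependency map "rich enough" in an environment where manual bookkeeping cannot keep up.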
This is exactly what Dynatrace did five years ago. "We had to solve the whole spectrum to create the AI agent," he says. "That's why we had to reinvent everything."
Van Siclen says that the launch of the AI engine wasn't met with as much fanfare as you might imagine. "The AI engine is a huge leap forward for our customers; they're all a little sceptical about it until they try it," he says. However, he adds that it has proved itself - SAP, the sprawling software corporation, has yet to prove the Dynatrace AI insights wrong.
But the deterministic basis of Dynatrace's core AI doesn't mean that Van Siclen denies the efficacy of learning AI models, and he agrees that they can be valuable in other use cases: "Maybe one day we'll use a learning model around something like user experience, for example. It wouldn't be for root cause, but it could be for some outcome-oriented AI with the same dataset."
Van Siclen says that in future the company will examine how to unite business KPIs, such as conversion rates or revenue, with technical software performance - marrying both sides of the equation: how the software performs and its overall business impact. "A much more intelligent business approach compared to the technically oriented approach we take right now," he adds.
He says this will be a key area for investigation in 2019, and although at first the team will be experimenting with deterministic AI models, they are "looking at what other patterns and what other algorithms we should be doing".
He uses the example of the newly released Session Replay technology that will allow corporations to isolate unique web sessions and replay them to see what the customer sees, as well as what was happening at every level of the stack, through to code level.
While useful to businesses, it creates the problem of yet more data to wade through. "I can't look at all that stuff if I'm Walmart, so what should I look at?" says Van Siclen. Of course, a more useful system would only flag up the sessions that truly require attention.
"That's where a learning system can maybe over time, do some things that are different from a problem pattern that is more deterministic," says Van Siclen. "So we're looking at all of those things."