If Cumulus Networks has its way, companies will use its Cumulus Linux to decouple the network operating system from the hardware and break free of the integrated approach that has driven the industry for decades. Network World Editor in Chief John Dix talked about the vision with Co-Founder and CEO JR Rivers.
Where do you see opportunity for Cumulus?
The industry has gotten to the point where standard silicon supports every network function, whether it's routing or load balancing or security. And we recognized the industry was likely to change from being built around appliances -- whether it's a Cisco network switch or an F5 load balancer -- to being a software business that leveraged industry-standard hardware.
We decided a company like Cisco would find it very hard to change their business model to reflect the new perspective, and that you needed to start a company to get there because it would mean making different decisions around software, around your sales channel, your cost strategies and everything. And that's why we started Cumulus.
You describe your product as a Linux operating system for network equipment. Expand on that.
A Linux distribution like Red Hat comes with a whole set of tools and utilities and Red Hat makes sure they all work together -- everything is compiled with the right libraries and the build tools all work and all of that stuff. Then they release and maintain that, make sure patches make it out to the customer base, etc., and then also enhance the foundation to inch the whole movement along.
That's what we do with Cumulus Linux, but specifically with a network slant. So, for instance, the version of the kernel we started our distribution with was based on version 3.2.20-something. And a little bit later the Linux community added VXLAN to the kernel, which is an SDN transport technology. So we took that technology, we worked with the kernel community to add it into mainstream Linux, and then also back-ported it to our released version so customers could enjoy that functionality earlier than they would if they had to wait for our next big major release.
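Part of VXLAN's appeal is how simple the encapsulation is: an 8-byte header carrying a 24-bit network identifier (the VNI) is placed in front of the original Ethernet frame, and the whole thing rides inside UDP. A minimal sketch of that header format, per RFC 7348 (illustrative only, not Cumulus's kernel code):

```python
import struct

VXLAN_FLAG_VNI = 0x08  # "I" flag: the VNI field is valid (RFC 7348)

def vxlan_header(vni: int) -> bytes:
    """Build the 8-byte VXLAN header for a 24-bit VNI."""
    if not 0 <= vni < 2**24:
        raise ValueError("VNI must fit in 24 bits")
    # Byte 0: flags; bytes 1-3: reserved; bytes 4-6: VNI; byte 7: reserved.
    # The VNI occupies the top 24 bits of the final 32-bit word.
    return struct.pack("!B3xI", VXLAN_FLAG_VNI, vni << 8)

def parse_vni(header: bytes) -> int:
    """Extract the VNI from an 8-byte VXLAN header."""
    flags, word = struct.unpack("!B3xI", header)
    if not flags & VXLAN_FLAG_VNI:
        raise ValueError("VNI-valid flag not set")
    return word >> 8

hdr = vxlan_header(5000)
print(len(hdr), parse_vni(hdr))  # 8 5000
```

The 24-bit VNI is what lets overlays scale past the 4,094-segment ceiling of 802.1Q VLANs, which is why it matters for the large Layer 2 domains discussed later in this interview.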
We're focused on data center Layer 2, Layer 3 networking, including routing protocols like OSPF and BGP, bridging functionality with VLANs and multi-chassis link aggregation, SDN technologies like VXLAN, and monitoring technologies like sFlow. So we either pull in open source packages or develop them as necessary to enhance the network functionality of the Linux distribution.
As a distribution, everything we do that involves an open-source package gets pushed back upstream to that package maintainer, so the next time around it's available to all comers.
Another place where we do work that is not necessarily network-focused but helps our network customers is around automation. So Puppet, Chef, CFEngine, Nagios, Ansible -- they compile their agents and build for the latest Linux distributions, and we make sure they are also built for our Linux distribution.
You only deliver software?
We don't sell any hardware. We've worked hard to help hardware partners build supply chains so the customers are able to buy hardware from them directly or through their resellers.
And what do the resultant products end up being competitive with?
If you look at a Cisco product, it would be something like a Nexus 9000 or a Nexus 3132, an Arista 7050 class product.
How do you compare performance-wise given you are on generic silicon?
The thing that's important to recognize is most of the technology customers are consuming today is based on industry-standard silicon, like the Broadcom Trident or Broadcom Trident II. For instance, the Cisco Nexus 9000 is based on Broadcom Trident II. Sometime in the future, customers will be able to buy a plug-in module that will add some Cisco secret sauce to the product, but they're selling a ton of those right now, and what companies are buying today is 100% industry-standard silicon. So from a performance and function perspective, all of those behaviors are available to our customers as well. The same thing is true of Arista and Juniper and Brocade and everybody else.
So your products are competitive performance-wise but presumably cost less?
Exactly. And in fairness, which features get enabled and which features we think matter are going to be a little different between ourselves and a Cisco or a Juniper, and that's what differentiates us.
How do you go to market?
From a business perspective it's easier to think of us like Red Hat. Let's use Dell as an easy-to-describe model. Back in January Dell decided to open up their hardware to third-party network operating systems, like Cumulus Linux. So now customers can order a piece of Dell hardware that either has Cumulus Linux on it or comes with no operating system on it. That's the same model they use for servers. If you buy a Dell server you can buy it bare metal or with Windows Server or Red Hat Enterprise Linux or SUSE or even VMware installed. So it is up to the customer in terms of how they want to acquire the technology.
Most of our customers are service provider types. Whether they're small or large, they have an Internet-facing footprint and a service provider type business. We also have a bunch of enterprises that are in various proofs of concept or trials -- none in production right now, but all paying customers. They'll typically buy through a reseller.
Are your enterprise customers typically larger organizations?
Actually even some smaller shops. Acquiring Cumulus Linux is very similar to the way people acquire and consume servers and operating systems today. So we have really small shops that look at it, and we have big shops that look at it. On the enterprise side, right now we're not doing much outbound marketing. Everything is kind of inbound. We're talking to people that are looking for new technology that might help their businesses, like the financial houses and some of the pharmaceutical companies.
Some people use the term "white box" to describe the approach you're talking about, picking software to run on an industry standard box, but you haven't mentioned the term.
I specifically don't use the term white box because what we do doesn't fit the broad description of white box. The term white box means it doesn't matter who you buy the hardware from -- it's just a random spot-market problem, a big commodities play. We use the term "bare metal" instead, because we found people care about who they buy their hardware from, they know what silicon is inside, and the supply chain is important to them. And that's why we don't call it white box.
Going back to those customers that showed early interest, the pharmaceutical guys and financial houses, how would you characterize their interest? Is it simply the cost factor or are they trying to break the shackles of their legacy suppliers?
It's kind of hard to call it one thing. What we've seen, in general, is they're looking at their current IT infrastructure and recognizing that it's not going to work for them going forward. They have made the decision that they need to own and maintain and manage some portion of their IT resources, but the old way isn't working.
So they start to look around to see who might step up as new partners. Is it going to be VMware for the virtualization space? Is it going to be OpenStack? Who is going to be my server vendor? Who do I want to use for networking? And many feel they have been trapped by Cisco for a very long time and they don't look at Cisco as being a trusted partner going forward. So they're trying to figure out when and how they can decouple themselves from lock-in.
So it's more than the cost appeal?
Well, that's what gets them started looking. Then when they decide to look at something different, they ask, what do I care about? I care about quality, I care about functionality, I care about cost, I care about automation, the types of suppliers I want to be working with. And we're lucky in that we can help in every one of those dimensions. So that's why it's an easy discussion to have with them. We help them on cost, help them on automation, and they like working with us.
You mentioned VXLAN and software-defined networking in passing. Has that changed your game at all?
I would say it helps. A lot of customers have built very brittle infrastructure around large Layer 2 domains based on decades-old Cisco technology and now want to break free from it. So they want to use network virtualization and network function virtualization as a way to make their data centers less brittle, more fluid and easier to automate. And because VXLAN is an important construct and because it's built into Linux and it becomes really easy to automate around, it means we're able to partner with the overlay providers so they're able to use our equipment.
I noticed VMware is a partner of yours. Are you taking from them or are they taking from you, or is it a two-way street?
Kind of a two-way street. They're focused on the workflow orchestration, managing customers as a business, deploying an application along with VM storage, network interconnect, firewalls, load balancers, routers, all that sort of thing. And so VXLAN is a key construct for them. They set up a partner program and reached out to pretty much everybody in the network industry, and said, "Hey, this is how we want to interact with your VXLAN-capable devices."
So in our case we ended up with an agent that lives on the platform and terminates communications with the VMware controller and creates and destroys VXLAN tunnels under its control. So it's a pretty easy-to-use protocol. That partnership was, on the technical side, like a three-week exercise, and then the legal and go-to-market side was longer because they were waiting for their launch. We had a similar relationship with Midokura, and we're working on the same thing with Nuage. It's really pretty easy to automate around VXLAN using this platform.
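The shape of such an agent is easy to sketch: it holds a table of the VXLAN tunnels currently on the switch and reconciles that table against whatever the controller says should exist. A toy model, with hypothetical names -- the real agent speaks the controller's own protocol and programs tunnels into the switch, not a Python dict:

```python
class VxlanAgent:
    """Toy model of a switch-side agent that mirrors a controller's
    desired set of VXLAN tunnels. Hypothetical sketch, not the actual
    Cumulus/VMware integration."""

    def __init__(self):
        self.tunnels = {}  # VNI -> remote VTEP IP currently configured

    def reconcile(self, desired: dict):
        """Create missing or changed tunnels, destroy stale ones."""
        for vni, vtep in desired.items():
            if self.tunnels.get(vni) != vtep:
                # Real agent would configure the kernel/silicon here,
                # e.g. the equivalent of `ip link add ... type vxlan`.
                self.tunnels[vni] = vtep
        for vni in list(self.tunnels):
            if vni not in desired:
                # Equivalent of `ip link del` for a withdrawn segment.
                del self.tunnels[vni]

agent = VxlanAgent()
agent.reconcile({5000: "10.0.0.2", 5001: "10.0.0.3"})
agent.reconcile({5000: "10.0.0.2"})  # controller withdrew VNI 5001
print(sorted(agent.tunnels))         # [5000]
```

The reconcile-against-desired-state loop is the design choice that makes the integration a "three-week exercise": the controller never issues imperative commands, it just publishes the state it wants.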
Who do you compete with?
Our biggest competitor is Cisco, second biggest is Arista and third biggest is Juniper.
What's your biggest challenge going forward? Is it convincing people that you're a realistic alternative?
If you had to pick the biggest one, it would be that. And obviously there's a lot of nuance to that statement. There's longevity, there's knowing somebody that's been using the technology for a year and has no problems with it. There are people singing your praises, people talking about their operational cost savings, people talking about how easy it's been to work with you. It's earning your spot on the team.
Some of your story sounds similar to the story told by Vyatta. Do you see them in the market?
They were acquired by Brocade and we do see them. They seem to be very focused on the network function virtualization business at this point. Early on they had a somewhat similar perspective, but it was a little bit twisted in that they started off looking at the branch router space, recognizing you could use an x86 server as a branch router.
So they took Linux, which had all the right functionality, but then put a Cisco [CLI] around the outside of Linux so someone that was comfortable with a Cisco branch router could just pick up a Vyatta router and go off and run. We did it almost the other way around. We said in the modern data center, Linux is the lingua franca, so we want to make sure people are able to leverage and use Linux as much as humanly possible and not hide it from them.
I suppose you guys had the opportunity to position yourselves as an SDN player as that started to heat up, but you resisted. Why?
There was no clear definition of what SDN really meant. Everybody was talking about two things. One was network virtualization, which is the ability to partition a segment of connectivity in a programmatic and provisioning way. And the other was network function virtualization, which is this concept of taking a router or a firewall and, instead of it being a big metal appliance, essentially making it a VM that can be moved around throughout the system. And we're neither of those. So tying ourselves to that crowd just didn't really make sense.
Google's big mantra has been complex applications on generic infrastructure. And that's really become the current idea of the software-defined data center. So as a customer I can go buy a bunch of hardware from Dell and install it today to do job X, and tomorrow it can do job Y. It gives customers the ability to pick and choose what's happening in their data center at any given point in time.
Compare that to the vendor lock in they're used to today in networking. But customers are starting to get savvy because they see the difference on the server side of the world. They can deploy VMware today and if they don't like VMware they can switch to Red Hat, use KVM as their hypervisor, or if they want to use workflow orchestration they can change to OpenStack. So it really isn't a lock-in on the hardware. You don't have to replace your hardware to switch between any of those things.
And many customers want to get to the point where it's the same on the network side. They can just buy the gear, cable it up, and then everything after that is simply a matter of software.
Will overlays, the network virtualization flavor of SDN, be all we'll really need going forward, or do you think a bigger picture SDN will emerge that involves more legacy equipment?
I think you're going to find customers going towards overlays over the next two or three years. It's starting now. We see it happening, and it's going to go pretty quickly, because it's so much easier to automate, diagnose, debug and provision than the old way. And we know that's true because look at all the big players -- the Googles, Amazons, Microsofts, the Facebooks -- they all use some form of overlay when they need to virtualize their networks. None of them use Layer 2 constructs anymore, and they haven't for a very long time.