Despite all the hype around containers, the application packaging technology is still evolving, especially as it relates to networking.
In the past year, though, there have been significant advancements in Docker's container networking functionality. At the same time, Docker has built a plug-in architecture that allows more advanced network management tools to control containers.
Meanwhile, startups have developed custom platforms for managing containers, while traditional vendors such as Cisco and VMware have enabled their network management tools to control containers. So, the earliest container networking challenges are beginning to be solved, but there’s still more work to be done.
There have always been container networking issues. Containers hosted on the same physical server can interact with one another and share data. But Docker developers didn’t initially build in the ability to migrate a container from one host to another, or connect one container to another on a different host.
“The biggest challenges have been in cross-container communications,” says Keith Townsend, a technology analyst and blogger. “From one container to another, that’s the biggest frustration that most networking professionals will encounter.”
Engineers at Docker, the company that develops the open source project of the same name, quickly realized they needed to fix this.
Batteries included, but swappable
The networking issues led Docker in March 2015 to buy startup SocketPlane, which aimed to bring software-defined networking capabilities natively to Docker. In June, Docker announced the integration of SocketPlane technology into the open source project. The new networking capabilities use basic Linux bridging features and VXLAN (Virtual Extensible LAN) tunnels to allow containers to communicate with other containers in the same Swarm, which is Docker’s moniker for a cluster of Docker hosts. Container networking across hosts had been solved.
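At the command line, the cross-host overlay networking described above looks roughly like this (a sketch; the network, image and container names are illustrative, and the commands assume hosts already joined to a Swarm cluster):

```shell
# Create an overlay network that spans the hosts in the Swarm.
# Under the hood, Docker stitches the hosts together with VXLAN tunnels.
docker network create -d overlay my-overlay-net

# On one host in the Swarm, attach a container to that network...
docker run -d --name web --net my-overlay-net nginx

# ...and a container launched on a different host can reach it by name,
# even though the two containers live on separate physical machines.
docker run --rm --net my-overlay-net busybox ping -c 1 web
```

The `-d overlay` flag selects the built-in multi-host driver that came out of the SocketPlane work; the default `bridge` driver still handles single-host networking.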
At the same time, Docker also released libnetwork, an open source project that allows third-party network management products to be “plugged in” to replace the built-in Docker networking functionality. Virtual networking products like VMware’s NSX, Cisco’s ACI and more than a half-dozen others were the first supported third-party network tools.
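From the user's perspective, swapping in a third-party backend through libnetwork mostly comes down to naming a different network driver. A rough sketch (the vendor driver name below is hypothetical; each vendor's plugin registers its own name):

```shell
# The drivers that ship with Docker ("batteries included")...
docker network create -d bridge  local-net    # single-host networking
docker network create -d overlay cluster-net  # multi-host networking

# ...but once a vendor's plugin is installed, its driver can be
# selected the same way ("swappable"). "some-vendor-driver" is a
# placeholder for whatever name the plugin registers.
docker network create -d some-vendor-driver vendor-net
```

Containers attached to `vendor-net` are then managed by the third-party tool rather than Docker's built-in networking, without any change to how the containers themselves are launched.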
“It sets up an abstraction,” says Docker Senior Vice President of Product Scott Johnston. “It’s a Layer 3 network overlay that allows containers to be attached to it.”
Docker now has two flavors of network management. There is native, out-of-the-box functionality supplied by Docker, thanks to the SocketPlane acquisition, that allows for networking across hosts. If users want more advanced network functionality (such as spinning up new networks programmatically, setting network policies, or installing firewalls, load balancers or other virtual appliances on the network) then a variety of network management products can be used. Docker calls its approach “batteries included, but swappable.” Johnston says he hopes to have a similar plug-in model for container storage soon, too.
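The "spinning up new networks programmatically" piece maps onto a small set of `docker network` subcommands. A minimal sketch of the lifecycle (names are illustrative):

```shell
# Create a new network on demand rather than by hand-configuring hosts.
docker network create -d bridge app-net

# Connect an already-running container to it on the fly...
docker network connect app-net my-container

# ...inspect its configuration and attached containers as JSON...
docker network inspect app-net

# ...and tear it down when the application goes away.
docker network disconnect app-net my-container
docker network rm app-net
```

Because these are plain CLI calls backed by an API, orchestration and network management tools can drive the same operations without a human in the loop.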
Technology is the easy part
Johnston says these technology capabilities are the easy part. Getting the developers who build apps in containers and the IT shops that will run them on the same page is an even bigger challenge.
Containerized apps have very different characteristics from traditional enterprise apps. Whereas in the past IT’s goal was to provide resilient systems that would not fail, now the priority is to provide instant-on capacity and agile, flexible networks.
“From a networking perspective, application delivery and performance is tied to how well the network infrastructure is able to support these new apps and use cases,” says Ken Owens, CTO of Cisco’s Cloud Infrastructure Services. “The role of the network engineer is to think about how things like programmable networking, software-defined networking and network function virtualization can help.”
These tools allow network resources to be provisioned automatically rather than manually, a capability that could soon be table stakes for organizations that truly embrace these new application paradigms.