GE is rethinking many aspects of IT, including its internal reporting structure, where and how it supports apps, and how it networks its 4,500 offices. Network World Editor in Chief John Dix got an update from Chris Drumgoole, Chief Technology Officer of IT.
As Chief Technology Officer for GE IT (GE also has a CTO on the product side), how do you work with the IT teams in the business units?
We’re in the middle of a big transition that GE CEO Jeff Immelt talked about in our last investor call. If you know us, you know we’re a matrix organization, so historically each one of the businesses had a CTO/CIO responsible for that business, with essentially a dotted line in the matrix to me or my boss, our CIO Jim Fowler. This helped us drive consistency, but there would still be distinct roles within the businesses.
On that call Jeff gave a high-level overview of how we’re unifying the digital technology vision into a single team. It’s all about how we approach things like our IoT strategy, our field service strategy, how we predict downtime in our equipment. There’s a lot of value to unifying that and making it one big team that has specialties around the individual product lines, as opposed to keeping it a loose coalition of different teams.
It’s a really big shift for us, but it’s great, because IT today looks nothing like it did 35 years ago, yet most corporate IT departments are still organized the way they were back then.
Presumably these shifts will add up to some impressive savings?
$700 million is the number Jeff put out there in terms of expected productivity savings over the next year, and realigning our work horizontally across IT will help us meet these goals.
I understand you’re also about a year into a three-year plan to migrate 9,000 applications to the cloud?
Yes, although there’s a bigger focus now on decommissioning applications, because part of the savings Jeff talked about comes from Aviation and Power and Digital all having different applications doing roughly the same business function. We’re going to consolidate those, so for a given function you’d get anywhere from two to four applications decommissioned. Let’s just say we’ll have a significantly smaller portfolio of applications two years from now than we do right now.
The construct is cloud-first; we don’t want to be in the infrastructure business. If you’re writing anything you’re writing for the cloud, and if you’re moving anything then move it to the cloud. Over 95% of our new application deployments are going to cloud, and that’s been the case for over a year now and we’ll just continue to double down. The cloud share of our wallet continues to increase and the hardware vendors’ share of our wallet continues to decrease, and not in small terms. It’s a pretty dramatic shift.
What percentage of your applications are cloud-based now?
As a portion of our total estate, broadly speaking I would say it’s about 30%, depending on how you’re counting. We try to stay away from counting and look more at resource utilization.
That being said, if you look at our mainline businesses, one of them is already running ERP in the cloud and there’s another one that’s in the process of moving over.
You say you’re cloud first, but do you have a preference for Infrastructure-as-a-Service or Software-as-a-Service?
The more SaaS we can buy the better off we are, especially for non-differentiated applications like HR, scheduling, administrative, bill paying, taxes, compliance, customs, etc. The world can’t get to SaaS fast enough for us. The core applications that make GE different -- how we do field services better, how we sell better, how we do inventory, planning and predictive analysis better -- that stuff we don’t want as SaaS because there is differentiation there for us. Our software and our analytics allow us to do better than our competitors. That’s where we invest.
Our feedback to the vendors that want to come in and sell us infrastructure as a service … skip that. We can already run stuff pretty cheap. We’ve got a great cloud strategy and we’ll move when we need to. Give me SaaS, that’s what I really want.
Your three-year plan also calls for closing some data centers. Have you shuttered any yet?
We closed six in 2016 and we will close more.
Let’s turn to the network. I understand your plans also call for shifting some GE offices off of your MPLS backbone to the Internet?
The first phase of the pilot is done, and we’re very, very happy with it. It achieved exactly what we hoped, and now we’re rolling it out to a wider group and have actually expanded the project a bit, too. When we started, we were looking at what we spend on MPLS and long-haul circuits and the complexity, and considered the fact that security has traditionally been a network-heavy role but applications and identity now carry a piece of security, so we asked ourselves, “Why have a network for the branches at all?”
We have this massive cost, and when you try to line it up against a benefit, it’s just not there. In fact, the reality is it’s probably hurting us. You’ve got a worse experience in the branch because we’re backhauling traffic all over the world so we can inspect it, when we don’t really care about it anyway because it’s not core to GE. So we asked, can we pull the network out of the branches and replace it with Internet? That’s what we started to do. We hit some bumps, but it has been very successful.
Are you going to a specific type of Internet access?
We’re using a bit of everything, actually. Our network is not the largest in the world by volume, but it’s probably one of the widest. We have about 4,500 offices in 180-plus countries. It really depends on what’s available and where we are in the world. We brought the provisioning function in-house because we realized we were missing out by sticking to the big telcos and cable companies. There is a whole tier of smaller but important players. For example, in one building where we have a sales office we went from paying several thousand on a monthly loop charge for a two-gig circuit, to buying a three-gig circuit for less than half that because the provider had worked out a deal with the building.
Were all the branches on MPLS before?
Predominantly. There’s always an exception, but as a rule, yes. Most of those links go away. Of the 4,500 locations, around 500 are factories, and they’ll probably maintain some sort of MPLS connectivity because the legacy applications in factories tend to be very latency sensitive.
I’m hedging a little bit because we’re still working through this and that’s the point of doing the pilot.
You said you’re expanding the pilot, in what way?
We started thinking, “If we’re going to simplify the WAN to the branch, why not simplify the branch itself?” Should we even have wired links anymore? Should we have desk phones anymore? Why are we investing in this massive Wi-Fi access point infrastructure that is fully managed and resilient when there are up-and-comers in the market whose gear I can buy for literally 10% of the price? Yes, it’s unmanaged, but it’s 10% of the price. If a unit breaks, we keep a few in stock and replace it, and we get completely out of this endless cycle of vendor updates and support contracts and all that stuff.
So the second phase of the pilot is saying, let’s continue to do the circuit stuff, but let’s also see what we can do to simplify the branch further, make the experience better, and drive costs down. We have about 100 sites where we changed out the network, and we’ve got about another 50 sites where we’re doing that added simplification right now. Assuming that goes as planned, we hit it like gangbusters in the second or third quarter and start massive projects to swing over the offices.
Beyond Wi-Fi and desk phones, does the simplification effort include stripping out appliances?
That’s the stuff we’re working through now. How do you handle the badging systems, how do you handle printing, because all that stuff is typically done by some sort of onsite server, and if you get rid of that, what do we do? We think that’s another big cost cutting and simplification opportunity. What we’re really aiming for is driving a better experience. We want to make printing easier. We want to make everything better, and we think we can make it cheaper at the same time. Our users at the early pilot sites are happy as clams because suddenly they’re getting fast connectivity and it’s cheaper for us.
How will you deal with branch security?
That’s the question I always get whenever I talk about this in technical circles. It’s not a wide open network because there are firewalls built into those Wi-Fi devices, but they are consumer-grade firewalls. We’re reaping the benefits of years of investment into improving our security stance on all of our devices and in all of our applications. We knew we were losing some network security, but we actually think we’re going to be more secure when we’re done and, again, everything will be much simpler because there is no more edge device.
For the factories where we’re still using MPLS, we think we’re going to use some sort of white box supporting a virtualized network function, but we don’t know exactly what it’s going to look like yet. It will not be a traditional router.
A white box controlled and managed by you folks, or do you think you’ll add a managed service for that?
That’s a loaded question. My opening position would be that we’ll probably manage it – unless we find a way to do it cheaper. We’re trying to embrace elegant simplicity. Not bare-bones; simple because simple is beautiful and works better, and to enterprise managed service providers, complexity is beautiful. They want to tell us about all the management stuff, and we’re like no, we just want you to fix it when it breaks. My gut would be we’ll probably do it ourselves, but I’m open.
Can you share what savings you’re expecting to realize from the shift away from MPLS?
We’re not going to break out specific numbers, but suffice it to say that the network is a key part of the strategy that Jeff outlined. The network piece will be measured in millions, not thousands.
Ok. Let’s shift to the Internet of Things. Obviously IoT is a product opportunity for GE, but it is also potentially a way to improve operations, so how do you go about approaching it?
A big part of our strategy going forward is what we call Predix, which is our operating system, if you will, for Industrial IoT. There are massive GE teams, thousands of people, invested in building the tool set, the analytics and the platform to run IoT for the industrial world. And part of that strategy is eating our own dog food, being our own best customer. That’s why almost every one of my teams is working on ways to use Predix to make our own operation run better.
I’ll give you some examples. One is around predictive analytics. As we collect data we can build what we call a digital twin of an asset out in the field -- essentially an electronic model of that asset -- and keep it up to date and use our Predix platform to figure out when that, say, turbine, is going to break before it breaks, leading to a better operating outcome for the owner of that turbine. Great technology. But why does it have to be limited to an asset like a turbine? I run a massive network with servers all over the place, with 350,000-plus laptops out there. Why don’t I use those same capabilities to collect performance data from those devices and start using those same analytics to predict problems? That’s one area we’re studying.
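The laptop-and-server idea he describes -- collect performance readings from a fleet of devices and flag trouble before it becomes an outage -- boils down to anomaly detection on a telemetry stream. A minimal Python sketch of that pattern (the function name, window size, and threshold are illustrative inventions, not GE’s Predix analytics):

```python
from statistics import mean, stdev

def predict_failure(readings, window=10, threshold=3.0):
    """Flag indices where a reading deviates more than `threshold`
    standard deviations from the trailing window's mean."""
    alerts = []
    for i in range(window, len(readings)):
        history = readings[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(readings[i] - mu) > threshold * sigma:
            alerts.append(i)
    return alerts

# Steady temperatures, then a spike a rule like this would catch early.
temps = [60.0, 60.2, 59.9, 60.1, 60.0, 59.8, 60.3, 60.1, 59.9, 60.2, 75.0]
print(predict_failure(temps))  # [10] -- the anomalous reading
```

Real digital-twin analytics model the asset’s physics and full history rather than a single rolling statistic, but the flag-it-before-it-breaks pattern is the same.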
Another is our buildings. As we sensor-enable everything in those buildings -- the lighting, the heating, the cooling, etc. – we can bring that data back into Predix and make decisions that lessen our environmental footprint and drive down costs. Besides potentially saving money, our investigation into internal uses like that helps make the Predix platform better.
The last one though is where the real opportunity is, and that’s taking the same IoT strategy into our factories. We have an initiative called Brilliant Factory, which is essentially IoT-enabling our factory machines. Looking at all the data inside a factory gives you two things:
* It gives the factory leaders the ability to run that factory better. Instead of having to walk the floor and wait for someone to press a red button when they notice an anomaly, they’re seeing it before a human can even detect it. The factory can reposition, we get much better material planning, much lower cost of quality, a much higher inventory turn just driving real efficiency.
* The other part is we can take the digital twin of, again, that turbine and track data for components all the way back to, say, the piece of metal that came into the factory that was machined to make the hub. It gives us a much richer data set to drive a better outcome for our customer.
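The component tracking described in the second point is essentially a lineage graph: each part records the inputs it was made from, so you can walk from a finished turbine back to the raw material lot. A toy sketch of that walk (all part and lot IDs here are made up):

```python
# Hypothetical traceability records: each item lists its direct inputs.
lineage = {
    "turbine-7001": {"made_from": ["hub-3302", "blade-set-88"]},
    "hub-3302": {"made_from": ["metal-lot-A17"]},
    "blade-set-88": {"made_from": ["alloy-lot-B42"]},
    "metal-lot-A17": {"made_from": []},
    "alloy-lot-B42": {"made_from": []},
}

def trace_back(part, records):
    """Walk the lineage graph and return every upstream input of `part`."""
    upstream = []
    stack = list(records[part]["made_from"])
    while stack:
        item = stack.pop()
        upstream.append(item)
        stack.extend(records[item]["made_from"])
    return upstream

print(sorted(trace_back("turbine-7001", lineage)))
# ['alloy-lot-B42', 'blade-set-88', 'hub-3302', 'metal-lot-A17']
```

A production system would keep this lineage in a database keyed by serial number rather than an in-memory dict, but the traversal stays the same.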
That’s about as deep as I go on that, but it’s really cool. It’s really interesting.