Are Our New Technologies Too Complex?

Image: Anna Berkut - Alamy Stock Photo
The whole may or may not be greater than the sum of its parts, but it’s becoming clear that the whole is more complicated than it first appeared. Mathematicians will tell you that the complexity of an interrelated system grows exponentially with the number of pieces, and we seem to be proving that with things like virtualization and application componentization. Enterprises are starting to realize that this growth in complexity can erode the benefits of new technologies and even destabilize their IT and network operations.
 
An old friend in the financial industry, now head of application development for a banking consortium, recently made an interesting, slightly frightening comment. “Twenty years ago, my main application was a single software package that ran on a single mainframe. Today it’s fifty-seven components that run partly in my own server pool of a couple of hundred systems and partly in a cloud with who-knows-how-many [systems]. Is it any surprise my operations are more complicated?”
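His arithmetic checks out. As a rough sketch (my own illustration, using the component counts from his anecdote rather than any measured data), consider just the potential pairwise interactions among components, and the possible joint up/down states an operations team might have to reason about:

    # Illustrative arithmetic only: how interaction paths and joint
    # component states grow as an application is decomposed.
    def pairwise_interactions(n: int) -> int:
        """Distinct component pairs that could interact: n choose 2."""
        return n * (n - 1) // 2

    for n in (1, 10, 57):
        print(f"{n:>2} components: {pairwise_interactions(n):>5,} possible pairings, "
              f"{2**n:,} possible up/down states")

    #  1 components:     0 possible pairings, 2 possible up/down states
    # 10 components:    45 possible pairings, 1,024 possible up/down states
    # 57 components: 1,596 possible pairings, 144,115,188,075,855,872 possible up/down states

Real systems exercise only a fraction of those pairings, but someone in operations still has to know which fraction, and that is knowledge a single-package mainframe application never demanded.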
 
This prompted me to look back at data I’d collected since 2000. What I found was this: enterprises report that their IT operations costs today are almost triple what they were in 2000. Meanwhile, the number of operations errors causing application outages has risen by 800%, and the time it takes to find a suitable operations professional has stretched from two months to almost eight. Every enterprise ranked “increased complexity” as the number one or number two cause of its problems.
 
CIOs and CFOs alike find this surprising, because both groups say the whole point of resource pools and componentized applications is that they’re more resilient and more efficient. Some, though not all, believe that modern technologies like virtualization and componentization have kept total IT costs contained, with the new operations challenges offset by savings elsewhere. Yet even those who believe their IT costs have gone down say they’re uncomfortable with how complexity is threatening their operations. What’s more interesting is what tech professionals think is causing this, and what might fix it.
 
According to enterprises, the primary reason for exploding IT and networking complexity is the lack of broad systemization. Virtualization and cloud computing, the Internet, security, rapid development, and other fast-moving tech areas all seem to be evolving in their own direction, at their own pace, even though they all have to somehow coalesce into a company’s IT plan. My old friend puts it like this: “I know what a mainframe compute model was supposed to look like, and how networking related to it. What is a cloud model? It’s a work in progress—but meanwhile my applications have to work in the real world.”
 
He recounts the story of a software company that gave his team a presentation on “Cloud-Native Development in the Financial Industry.” The presentation showed how cloud-native versions of key banking systems could be developed, but there were two problems. First, as a bank, his company already had “key banking systems.” Was he supposed to convince management to build new ones? Second, when he (and a representative of the CFO, who also attended) pressed the vendor for specific benefits, what they got were comments like “this is the way of the future” and “this will ensure your systems evolve smoothly for the next decade.” Try translating those into a business case.
 
The second-most-cited reason for increased complexity was that tools intended to reduce or manage complexity were adding to it instead. Here, enterprises recognized that they were part of the problem. An investment in IT or network hardware or software requires a business case, and a business case requires some projection of useful life. Because existing tools have to stay in service long enough to justify their cost, enterprises respond to changes in their IT platforms and applications by layering new tools on top of old ones, each creating a new layer of operations management. Over 20 years, enterprises reported a shift from two primary management tools to five, and two-thirds expected to add two or more by the end of next year.
 
The specific area where enterprises said they faced the greatest complexity challenge was security. Every enterprise said it had experienced at least one hacking incident per year, and the average number reported rose from nine in 2020 to 37 in 2021. Enterprises said they now spend a full quarter of their IT operations time, and a fifth of their overall operations budget, on security-related tools and activities. That figure doesn’t even include the cost of the security devices and software themselves, only the operations staff effort to use them.
 
OK, this is bad. What do we do to fix it? Enterprises want to return to that notion of a systemic approach. Vendors, they say, need an overall operations model they can articulate; they need to fit their products into that model, and they need to evolve products within it rather than continually adding new ones.
 
I can’t disagree with enterprises on that point, but I’m not confident that vendors will do what’s needed. A shift to a systemic operations model would surely involve a major product refresh, new enterprise spending on the resulting products, and staff retraining. Tech is under pressure right now, and any salesperson will tell you that you can’t make quota by confronting customers with a big new cost. Will some vendor figure out another way? I hope so, and I think enterprises hope so too. In the meantime, they need to think harder about growing complexity whenever they adopt new technologies.