Why do we seem to be having such a problem with IT security? Part of the reason is that more people are trying to breach it, but another part is that we’ve changed how we build and use applications. In the old days, say thirty years ago, we ran nice big comfortable monolithic applications. They had one user interface, and everything else was totally opaque. In modern terms, those applications had a very limited attack surface, and our recent reliance on remote and hybrid work has made that surface a lot bigger.
Attack surface! Ah, there’s a term you’ve heard a lot about. An “attack surface” is the set of interfaces that are visible, meaning accessible, from the outside. Anything a user wants to access is part of the attack surface, because attackers will work hard to mimic authorized users. But an attack surface can be expanded, accidentally, by a bunch of different things.
One such thing is your own VPN. Many companies have a range of IP addresses assigned to them, public IP addresses that nobody else can use. Suppose you assign these, as many companies do, by organization or type of application, so that your order-entry system gets a piece of this range. One of those addresses might be the one you expect real orders to come in on, but here’s the rub: all of these addresses are public, visible on the Internet, and an attacker can scan the whole block, not just the one address you intended anyone to use. Your attack surface was bigger than you believed it to be.
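A minimal sketch of the problem, using Python’s standard ipaddress module. The block and the order-entry address here are hypothetical (a documentation range stands in for a real public allocation), but the arithmetic makes the point: everything in an assigned public block is scannable, not just the address you planned around.

```python
import ipaddress

# Hypothetical public block assigned to the company (an RFC 5737
# documentation range stands in for a real allocation).
company_block = ipaddress.ip_network("203.0.113.0/24")

# The one address we *meant* to be reachable: the order-entry front end.
intended = {ipaddress.ip_address("203.0.113.10")}

# Every other address in the block is just as visible to a scanner.
unintended = [addr for addr in company_block.hosts() if addr not in intended]
print(f"{len(unintended)} addresses are exposed beyond the one we planned for")
```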
This problem of multiplying IP addresses, and the expanded attack surface they introduce, was recognized by developers when Docker, containers, and Kubernetes became popular means of deploying multi-component applications on virtual servers for efficiency. We saw an important innovation here: the widespread use of private IP addresses, which have the important advantage that they cannot be routed over the Internet at all. The new container hosting systems assigned private, non-routable addresses to everything, and then “exposed” only specific addresses to the corporate VPN. This created what we might call a “visibility surface”: only selected addresses would poke through to become part of the attack surface. The same strategy was used in the cloud.
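Here’s a small sketch of that idea in Python. The component names and RFC 1918 addresses are invented for illustration; the point is that everything lives on non-routable addresses and only a chosen subset is exposed through the visibility surface.

```python
import ipaddress

# Hypothetical container components on a private (RFC 1918) network.
containers = {
    "orders-web": ipaddress.ip_address("10.0.5.3"),
    "orders-db":  ipaddress.ip_address("10.0.5.4"),
    "inventory":  ipaddress.ip_address("10.0.5.5"),
}

# These addresses cannot be routed over the public Internet.
assert all(addr.is_private for addr in containers.values())

# Only one component is "exposed" through to the corporate VPN;
# the rest stay invisible behind the visibility surface.
visibility_surface = {"orders-web"}
exposed = {name: addr for name, addr in containers.items()
           if name in visibility_surface}
print("Reachable from the VPN:", exposed)
```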
Of course, attackers knew about all this too, and their response was to plant hostile outposts inside the company network via strategies like phishing. Compromise a single computer system on the network and you control something inside its ring of protection, because an attack from it looks like legitimate access. The good news is that even phishing outposts can’t see behind the visibility surface. The bad news is that they can get at anything that is exposed, so the next step in reducing the attack surface was to extend access control, limiting a given user system’s access to only those exposed addresses the worker legitimately needs.
Some of the most advanced security strategies today are based on this form of connection control. A given user system, with a given IP address, should be sending to and receiving from only the addresses the worker is entitled to connect with, and explicit connection permission is always required. By identifying these relationships, you limit attack potential. And since a phishing attack probably won’t know what permissions a compromised system has, its attempts to connect outside the permitted set can flag the system for isolation before it damages or steals anything.
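In its simplest form, connection control is just an explicit allow-list, as in this sketch. All the addresses and pairings are hypothetical; real products do this in network gear or agents, but the logic is the same.

```python
# Explicit allow-list: (source, destination, port) tuples that are
# permitted. Everything else is denied. Addresses are hypothetical.
ALLOWED = {
    ("192.168.20.11", "10.0.5.3", 443),  # Ann's desktop -> orders-web
    ("192.168.20.12", "10.0.6.7", 443),  # Charlie's desktop -> inventory
}

def permit(src: str, dst: str, port: int) -> bool:
    """Permit a connection only if it is explicitly allowed."""
    return (src, dst, port) in ALLOWED

# A compromised system probing outside its permitted set flags itself.
attempt = ("192.168.20.11", "10.0.6.7", 443)
if not permit(*attempt):
    print(f"ALERT: unexpected connection attempt {attempt}; isolate source")
```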
Of course, all of this is vulnerable to carelessness, which is the biggest security challenge today. Three issues rooted in carelessness regularly emerge, and any of them can lead to a security breach.
The first challenge is overexposure. Cloud and data center components that aren’t exposed into the VPN address space can’t be accessed by users at all, so there’s a tendency to expose anything that might at some point need to be accessed. There’s also a problem with the popular concept of “observability”: the probes and logs that monitoring needs have to be addressable by the monitoring software, and that means some level of exposure, which means protection can be lost. Probes are often simple elements with minimal security capability, like IoT devices. Smart enterprises gather probe and log data inside their visibility surface, and expose only the address of the collected material, not the sources.
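The aggregation pattern looks roughly like the sketch below: a collector running inside the visibility surface pulls from the private probes, and only the collector’s address is exposed to the VPN. The probe URLs and addresses are hypothetical.

```python
import json
import urllib.request

# Hypothetical probes on private, unexposed addresses.
PRIVATE_PROBES = [
    "http://10.0.7.21/metrics",
    "http://10.0.7.22/metrics",
    "http://10.0.7.23/metrics",
]

def collect() -> str:
    """Pull data from each private probe; runs inside the visibility surface."""
    readings = []
    for url in PRIVATE_PROBES:
        with urllib.request.urlopen(url, timeout=5) as resp:
            readings.append(json.load(resp))
    return json.dumps(readings)

# Only the collector's own address is exposed to the VPN to serve this
# aggregate; the probes themselves never poke through.
```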
The second challenge is overpermission. Ann and Charlie need access to a dozen applications, but maybe not the same dozen. Add another dozen workers and you could end up with a lot of separate connection control policies to maintain, so why not just give everyone a fixed set of permissions and save the work? Or an organizational change moves people around, so connection controls have to be updated; or there’s a desk that a lot of branch workers, each using different applications, need to share. Why not just open things up a little rather than spend the time getting everyone the right control policies? Now we have systems with broader access than they’re entitled to, and if compromised they can attack more of a company’s applications and data resources.
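One way to keep the maintenance burden down without handing out blanket access is to group permissions by role rather than by worker, as in this sketch. The roles, users, and application names are hypothetical.

```python
# Permissions grouped by role, so a reorganization changes one mapping
# instead of dozens of per-user policies. Names are hypothetical.
ROLE_PERMISSIONS = {
    "order-entry": {"orders-web", "customer-lookup"},
    "warehouse":   {"inventory", "shipping"},
}

USER_ROLES = {
    "ann":     ["order-entry"],
    "charlie": ["order-entry", "warehouse"],
}

def allowed_targets(user: str) -> set[str]:
    """Union of the permissions granted by each of the user's roles."""
    targets: set[str] = set()
    for role in USER_ROLES.get(user, []):
        targets |= ROLE_PERMISSIONS[role]
    return targets

print(allowed_targets("ann"))      # order-entry applications only
print(allowed_targets("charlie"))  # both sets, but nothing extra
```

The design choice here is that moving Ann to the warehouse means editing one line in USER_ROLES, which removes the temptation to “open things up a little” instead.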
The final challenge is overdistribution. Everyone gets excited by new technology ideas, and with the cloud we got things like scaling and redeployment. In hybrid and multi-cloud, enterprises often move components between cloud providers and the data center, and any component that moves has to stay connected to all its usual workflow partners. Often this means exposing those components on the VPN so they can be reached from anywhere, which pushes them through the visibility surface and onto the attack surface, where they can be attacked.
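A simple audit can catch this kind of drift: flag anything exposed on the VPN whose only known consumers are other internal components. The inventory data in this sketch is hypothetical.

```python
# Flag components exposed on the VPN that only internal components use.
# "vpn-users" marks a component users genuinely reach from the VPN.
COMPONENTS = {
    "orders-web":    {"exposed": True,  "consumers": {"vpn-users"}},
    "orders-db":     {"exposed": True,  "consumers": {"orders-web"}},
    "inventory-api": {"exposed": False, "consumers": {"orders-web"}},
}

for name, info in COMPONENTS.items():
    internal_only = "vpn-users" not in info["consumers"]
    if info["exposed"] and internal_only:
        print(f"WARNING: {name} is exposed but only internal components use it")
```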
WFH and hybrid work make all these issues critical, because workers are now connecting via the Internet, which is certainly the home of a lot of bad actors. Most enterprises divide up their security strategies, so in many cases there’s no single place where all the safeguards are managed, or even known. Have a security meeting of all the players, organize your approach to look for the three challenges, and plug any holes, before your new distributed enterprise becomes the next sad story of a big hack.