
Implications of Socializing the Network

Some things never change, and some old things get reinvented or reintroduced with slick modifications. That's as true in communications as anywhere else.

Some No Jitter readers probably don't remember what happened in the early 1960s when a few select telcos in small towns test-marketed AT&T central office switches offering a new feature: Touch-Tone service. Touch-Tone -- i.e., push-button -- dialing sped up call completion and made service delivery better, faster, and easier, but the telcos didn't make phone service any cheaper for users. Full Touch-Tone market penetration didn't even occur until more than 30 years later.

Thinking back to the old days, how many school papers did you retype because of typing, spelling, or grammatical errors? The word processor made life better, and then it gave way to the PC revolution, with software that helped users avoid the same mistakes they made on typewriters. And that amazing QWERTY keyboard is still with us today, reduced to smartphone size.

Both wired and wireless communications networks are enabling millions of device-toting users to connect to something or someplace on demand. Transactional processing is an everyday event with smartphones. Exchanges of money for goods or services happen in mass quantities without much thought, more naturally and easily than ever. Mass marketing engages users with customized messages from apps that monitor and learn consumer behavior with GPS accuracy.

For network managers and architects, achieving and maintaining the right balance between speed, security, reliability, and cost in this networking environment is a seemingly elusive goal. Availability and immediate access present challenges, too.

Getting to Know Your Traffic
Better use of existing bandwidth translates to different things. Enforcing acceptable use and traffic prioritization is a start, but the effort shouldn't end there. Knowing what data is being uploaded, downloaded, and stored, and where and when, could lead to changes in network topologies. Knowing your traffic and then determining the impact of cloud use, whether private or public, requires analytical tools. The ideal is being able to predict shifts in complex traffic patterns and integrate automated processes, removing human latency from the time it takes to react to them.
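
As a rough illustration of that kind of automation, here's a minimal sketch in Python. The flow records, window size, and threshold are all invented for the example rather than taken from any real collector or product; the point is simply that a tool can compare each interval's per-application volume against a rolling baseline and flag a shift before a human notices it.

    # Sketch: flag traffic-pattern shifts against a rolling baseline.
    # Flow records, window size, and threshold are illustrative placeholders.
    from collections import defaultdict, deque
    from statistics import mean

    WINDOW = 6          # number of past intervals kept as the baseline
    THRESHOLD = 2.0     # flag when current volume exceeds 2x the baseline average

    history = defaultdict(lambda: deque(maxlen=WINDOW))  # app -> recent byte counts

    def check_interval(flows):
        """flows: list of (app_name, bytes_transferred) for one polling interval."""
        totals = defaultdict(int)
        for app, nbytes in flows:
            totals[app] += nbytes

        alerts = []
        for app, nbytes in totals.items():
            baseline = history[app]
            if len(baseline) == WINDOW and nbytes > THRESHOLD * mean(baseline):
                alerts.append(f"{app}: {nbytes} bytes vs. recent average {mean(baseline):.0f}")
            baseline.append(nbytes)
        return alerts

    # Synthetic example: a sudden surge in backup traffic gets flagged.
    for i in range(8):
        surge = 500_000_000 if i == 7 else 50_000_000
        flows = [("voip", 20_000_000), ("backup", surge), ("web", 80_000_000)]
        for alert in check_interval(flows):
            print(f"interval {i}: unusual volume for {alert}")

In practice the same comparison would feed an alerting or orchestration system rather than a print statement, but the baseline-and-deviation pattern is the part that removes the waiting-for-a-human step.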

But automated traffic flow and fulfillment doesn't really exist. The movement of data across the Internet isn't as efficient as it could be, and making better use of existing bandwidth is a global challenge. Is it really practical to treat bandwidth as infinite? Without dampening technological improvement, an approach built on optimization and efficiency seems better. How traffic is routed, and where and how data is consumed, are challenges worth exploring, I think -- and that means complex traffic engineering and automated routing to achieve on-the-fly results in a practical way.
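
To make the "on-the-fly" idea concrete, here's a small hedged sketch in Python of the sort of decision an automated traffic-engineering process might make. The path names, weights, and measurements are made up for illustration; real implementations would pull utilization and latency from live telemetry.

    # Sketch: pick an egress path from live measurements instead of a static route.
    # Path names, weights, and measurements are illustrative placeholders.
    from dataclasses import dataclass

    @dataclass
    class Path:
        name: str
        utilization: float   # fraction of capacity in use, 0.0 - 1.0
        latency_ms: float    # most recent probe result

    def score(path, util_weight=100.0, latency_weight=1.0):
        # Lower is better: penalize busy links and slow probes.
        return util_weight * path.utilization + latency_weight * path.latency_ms

    def choose_path(paths):
        return min(paths, key=score)

    paths = [
        Path("primary-fiber", utilization=0.85, latency_ms=12.0),
        Path("secondary-cable", utilization=0.30, latency_ms=25.0),
    ]
    best = choose_path(paths)
    print(f"steering new flows via {best.name} (score {score(best):.1f})")

The interesting engineering is in how often you re-score and how you keep flows from flapping between paths, not in the scoring arithmetic itself.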

Troubleshooting and reducing mean time-to-repair/restore is demanding, and having the right tools in the right places doesn't always guarantee success. While the locations and certain assets of a network are static, the traffic it carries is dynamic and changes by the second. Relying on help desk reports is akin to the old telco approach: everything is fine unless you dare to call and report otherwise -- and you'd better be right or else you pay the toll for dispatch.
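
Even a modest active probe beats waiting for tickets. The sketch below (Python, with placeholder hostnames, ports, and a timeout I picked for the example) just attempts TCP connections to a few critical services and reports anything unreachable; scheduled and wired into alerting, that alone trims minutes from time-to-recognition.

    # Sketch: actively probe critical services rather than waiting for user reports.
    # Hostnames, ports, and the timeout are placeholders for illustration.
    import socket
    import time

    TARGETS = [
        ("sip-gateway.example.com", 5060),
        ("mail.example.com", 443),
        ("dns-resolver.example.com", 53),
    ]
    TIMEOUT_S = 2.0

    def probe(host, port):
        """Return True if a TCP connection succeeds within the timeout."""
        try:
            with socket.create_connection((host, port), timeout=TIMEOUT_S):
                return True
        except OSError:
            return False

    def run_checks():
        for host, port in TARGETS:
            start = time.monotonic()
            ok = probe(host, port)
            elapsed = (time.monotonic() - start) * 1000
            status = "up" if ok else "DOWN"
            print(f"{host}:{port} {status} ({elapsed:.0f} ms)")

    if __name__ == "__main__":
        run_checks()   # in practice, run on a schedule and feed an alerting system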

Unearthing Human Latency
Human latency is buried within many networks and network management methods. Automated processes that reduce the time it takes to recognize a problem and identify its root cause are imperative -- but fixing the problem correctly is often another human latency issue.

I think about a situation I encountered when a high-availability firewall started kicking out alarms that traffic was failing over to a secondary route, signaling a failover state of the primary fiber carrier. SIP phones randomly lost connection for a minute or two. A visual inspection of the closet revealed that the carrier's router was not connected to an uninterruptible power supply, a situation corrected immediately. Still, days later, SIP phones again lost connection randomly, and the alerts in the firewall log again indicated a failover state of the primary fiber carrier. Ultimately we discovered the root cause: errant programming rules caused probes to detect a DNS failure and invoke failover from the primary to the secondary carrier when, in fact, no failure had occurred.
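
The lesson was to make failover probes more skeptical. As a hedged illustration -- Python, with hypothetical resolver addresses and thresholds, not the firewall vendor's actual logic -- a probe might require several consecutive misses against more than one target before declaring the primary circuit failed:

    # Sketch: require multiple consecutive probe failures, across more than one
    # target, before triggering a carrier failover. Addresses and thresholds are
    # hypothetical; real firewalls expose similar knobs in their own configuration.
    import socket

    PROBE_TARGETS = [("8.8.8.8", 53), ("1.1.1.1", 53)]   # public resolvers, for illustration
    FAILS_REQUIRED = 3        # consecutive all-target failures before failing over
    consecutive_failures = 0

    def target_reachable(host, port, timeout=2.0):
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    def probe_cycle():
        """One probe pass; fail over only after several fully failed passes."""
        global consecutive_failures
        if any(target_reachable(h, p) for h, p in PROBE_TARGETS):
            consecutive_failures = 0        # one healthy target resets the count
            return "primary"
        consecutive_failures += 1
        if consecutive_failures >= FAILS_REQUIRED:
            return "secondary"              # sustained outage: fail over
        return "primary"                    # transient miss: do not flap

Had our probes demanded agreement across targets and across passes, a single errant DNS check would never have flipped the circuit.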

Securing stationary and mobile endpoints, along with the perimeter, still largely relies on keeping client software updated, systems patched, and subscriptions for firewalls and security appliances current. These are all moving targets, and patching often corrects issues while introducing new problems and/or new network behaviors. Hence IT's resistance to change -- itself another form of human latency that, I might add, isn't without merit. The other latent concerns are reactions to new exploits and the time it takes to counter them. This is the cycle, and the process, and perhaps it's due for change.

Today, workers don't necessarily clog their desks with paper files but may jam their desktops with content files, email messages, and other data. While I don't miss typewriters, I definitely appreciate the Bell System and favor the technological improvements that we've gained. We've created a socialized network employing huge resources to extend access and improve availability. Without adding bandwidth, memory or processing power, what three things can be made more efficient?

Follow Matt Brunk on Twitter and Google+!
@telecomworx
Matt Brunk on Google+