When then-Facebook announced it would focus its future on a virtual-reality world, the “metaverse,” and even changed its company name to “Meta,” it launched both a wave of genuine technical promise and a wave of hype. One example of the latter was the comment that the metaverse would be the giant computer of the future. One example of the former is that if the metaverse is to work, whoever offers it will have to transform the world into a distributed computer. Starting in 2022, how that happens will revolutionize both computing and networking, and it may even justify a new acronym, which I’ll describe below.
A metaverse is comparable to a human-populated game, an alternate reality where an avatar represents everyone and everything. The sense of reality, the extent to which you could “live” in the metaverse without feeling like you’re moving in slow motion or seeing a simple block-figure world, depends on keeping what happens in the real world fully synchronized with what happens in the metaverse. You move, and your avatar moves. I move, and it’s the same. We could talk without satellite-or-worse latencies thrown in, shake hands, hug and kiss, or even box, and it would be realistic. That’s intuitively true, but from a technical perspective it’s difficult, to say the least, for three reasons.
First, syncing your avatar with your behavior will require either small wearable tags that detect movement or real-time image processing that generates a model of your motions to apply to the avatar. The second option (real-time image processing) will almost certainly be necessary to capture your avatar’s realistic behavior in a metaverse, because it can pick up subtle gestures and expressions. You would likely have to host this process close to your current location. Perhaps an avatar appliance supporting avatar chips? NVIDIA already does some avatar synthesizing on its high-end GPUs.
Second, you must synthesize your avatar’s place in the metaverse. What would you “see” there? Who else is nearby, and what are they doing? If a practical metaverse were composed of a large collection of “locales” where avatars congregate to interact, each locale would need its own compute resources to construct a 3D model of its contents. The locale could then present that model to its virtual inhabitants for processing by their local resources (yes, another mission for that appliance).
You can see by now that the infrastructure needed to host a metaverse is a kind of extreme distributed computing, distributed to each user/inhabitant. You’d have personal resources as part of this, plus edge/metro computing resources for locale hosting. There would also likely be regional/central resources used to support movement or interaction between locales. If we think of this metaverse as a mission for an evolved cloud, we’re coming close to the real model. The software needed would be distributed over this structure, and as avatars moved around, their “locale” would change, and so would their committed resources.
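To make that resource-commitment idea concrete, here is a minimal sketch in Python. It is my own illustration, not any real metaverse platform’s API; the names (`Locale`, `edge_site`, `move`) are assumptions. The point it shows is the one above: each locale is tied to an edge/metro hosting pool, and when an avatar crosses locales, its committed hosting changes with it.

```python
# A toy model of locales as units of edge hosting. All names here are
# invented for illustration; no real metaverse platform is implied.
from dataclasses import dataclass, field

@dataclass
class Locale:
    name: str
    edge_site: str                       # metro/edge pool hosting this locale
    avatars: set = field(default_factory=set)

    def admit(self, avatar: str) -> None:
        self.avatars.add(avatar)

    def release(self, avatar: str) -> None:
        self.avatars.discard(avatar)

def move(avatar: str, src: Locale, dst: Locale) -> None:
    """Re-commit the avatar's hosting as it crosses locale boundaries."""
    src.release(avatar)
    dst.admit(avatar)

plaza = Locale("plaza", "metro-east")
arena = Locale("arena", "metro-west")
plaza.admit("alice")                     # alice's avatar is hosted by metro-east
move("alice", plaza, arena)              # now it's committed to metro-west
print(arena.edge_site, sorted(arena.avatars))
```

In a real system the `move` step is where the hard engineering lives: state and rendering responsibility must shift between edge sites without a visible hitch to anyone in either locale.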
The third and most complex reason for metaverse difficulty is latency. One of the primary missions of a metaverse is to erase the limitations that the real world puts on interaction among people and things. One such limitation is distance, so the next person/avatar we meet, greet, hug, or fight with could be half a world away. When latency is introduced into a locale, it creates a lag between what a person does and what others in the locale see their avatar doing. Any significant latency translates to a loss of realism in the interaction, and enough latency would make it impossible to synchronize the behaviors of everyone/every avatar within a locale.
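Some back-of-envelope numbers show why distance is so unforgiving. The figures below are my assumptions, not from this piece: light in optical fiber covers roughly 200 km per millisecond, and that is propagation delay alone, before any switching, queuing, or processing is added.

```python
# Propagation delay in fiber, ignoring all switching and processing.
# The ~200 km/ms figure (light at roughly 2/3 of c in glass) is an
# illustrative assumption, not a measurement.
C_FIBER_KM_PER_MS = 200.0

def one_way_latency_ms(distance_km: float) -> float:
    """Fiber propagation delay for a given path length."""
    return distance_km / C_FIBER_KM_PER_MS

for label, km in [("same metro", 50),
                  ("cross-country", 4000),
                  ("half a world away", 20000)]:
    print(f"{label:>18}: {one_way_latency_ms(km):6.2f} ms one-way")
# same metro: 0.25 ms, cross-country: 20 ms, half a world away: 100 ms
```

At 100 ms one-way (200 ms round trip) to an avatar half a world away, physics alone eats the entire interactivity budget, which is why the hosting and the locale model have to be engineered around distance rather than simply buying faster access links.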
It’s not enough to give the personal processor managing our avatar a low-latency path to our metro/edge, as many say 5G could do. Our local metro/edge might not host our locale, and even if it does, it’s not local to the others who share that locale with us. All our metro/edge sites will have to be linked together with extremely low-latency paths, perhaps with minimal-hop optically switched, or even direct, connections. Our metaverse hosting would create a vast distributed computer that spans the planet.
Why would that massive global computing resource stop at metaverse-hosting? The same kind of resource could revolutionize massive multiplayer games. Internet of Things (IoT) applications could be transformed if low-latency control loops could span thousands of miles. While it’s not clear whether having avatars collaborate would create a useful simulation of human collaboration, it’s possible to build collaboration tools onto the basic metaverse framework, and likely that businesses would adapt their practices to use them optimally as their workers became metaverse-literate.
Metaverse literacy includes recognizing that the avatar-centric metaverse vision and the sensor-centric IoT and “digital twin” vision are two faces of the same thing. When we apply metaverse thinking to IoT, we end up with a wholly new concept (and our new acronym, as promised above): a metaverse of things (MoT). The convergence would be more than just nomenclature; I think a common platform-as-a-service software model will end up serving both applications, as it should. We can’t afford silos of resources and development tools if we expect to make these critical edge computing applications practical.
MoT is what makes the world a computer. It’s what could abstract both people and things down to a model that synchronizes with the real world, a model that we can use to control real behaviors, make real predictions and simulations, as well as socialize, collaborate, and entertain. It’s the computer of the future, and at the same time, what will drive computing and networking to a new level. We can expect to see clear signs of this revolution in the coming year. Happy New Year!