The Observable Metaverse
When someone says “the metaverse,” what do they mean? At the moment, the observable metaverse is a continuum of experiences that runs from virtual reality (VR) through mixed reality (MR) to augmented reality (AR). The experiences are different, but some of the most successful metaverse projects share a number of creative and technological attributes. What are they? To help figure that out, we’ve been developing a framework we call the Standard Model of the Metaverse.
Hundreds of Technologies, Unlimited Creativity
While organizing the various landscapes that describe the range of technologies required to design, create, and interact with the observable metaverse, we have identified many foundational elements. We have also found gaps in the model: underpowered wireless networks, insufficient computational resources, unsolved human-interface problems, limited battery life, and so on.
It also became clear that the arrival and deployment of this missing infrastructure will have a significant impact on the speed and scale at which we can explore and enjoy the benefits of blended physical and digital experiences. Predicting how these gaps will be filled may help us better understand how the metaverse will evolve. Of course, some gaps will never be filled; figuring out workarounds (if there are any) will be helpful as well.
What’s the Metaverse Made Of?
The goal of our project is to create a Standard Model of the Metaverse that organizes the foundational technologies and makes predictions about which technologies are missing and which are likely to evolve.
At the moment, the experiences we can observe range from avatars in virtual worlds (cartoon people in cartoon universes such as Roblox, Decentraland, and The Sandbox) to experiences that superimpose real-time data (text, graphics, video, audio, and haptics) on our physical world, such as Pokémon GO or the heads-up display in a car. To help us figure out and visualize what these metaverses are made of, we’re going to categorize and classify as many technologies as is practical. It’s a daunting task, and we will need your help to accomplish it. (More about that later.)
Goal and Definitions
While there is no agreed-upon definition of the metaverse (or Web3, for that matter), we need some shared terms. You may have heard other definitions, but for the sake of clarity, the Standard Model of the Metaverse will use the following.
The Observable Metaverse
We are starting with the idea that the metaverse exists as a bridge between our physical and digital worlds. At one edge, the metaverse is superimposed on the physical world; at the other, it is blended into the digital world. The technologies that empower us to transition seamlessly between the physical and digital worlds are AR, MR, and VR. We will use “extended reality” (XR) as a catch-all term for all three.
Basic Definitions
Virtual Reality (VR) is a group of immersive technologies that provide the ability, through a human/machine interface, to interact with computer-generated objects (including computer-generated avatars of other people or bots) in computer-generated environments. These environments can range artistically from primitive blocks of color to photorealistic animation. VR is generally experienced by wearing headgear that replaces your entire field of vision with computer-generated imagery and provides related audio. VR can also be experienced in purpose-built rooms (although these are usually found in commercial training applications).
Virtual Worlds can be experienced using VR systems, but VR systems are not required. Many virtual worlds are represented on 2D screens, including Roblox, The Sandbox, and Decentraland, to name a few. Video games are often played in virtual worlds.
Augmented Reality (AR) is created by a group of technologies that superimpose computer-generated images on a user’s view of the real world, creating a composite view. AR experiences can be placed using human/machine interfaces such as smartphones or headsets, but these are not always required; AR can also be projected into many environments.
Human/Machine Interfaces are the means by which humans interact with data. These include a web browser, smartphone, computer, game console, handheld device, headset, helmet, or any other means by which data can be surfaced in real time and consumed by a human being.
Human Fusions are devices, embedded in the body or connected directly to the human nervous system, that allow us to experience the metaverse without using our traditional biological sense organs (eyes, ears, nose, mouth, skin) as endpoints. Human Fusion sensors can be surgically connected, but they do not have to be; powerful external sensors that can be incorporated into fabric are being tested. The entire field is in its infancy. However, companies such as Neuralink and new technologies such as those being studied at Case Western Reserve University’s Human Fusions Institute are starting to test our understanding of what it means to experience the world around us.
Mixed Reality (MR) is experienced as a combination of VR and AR seamlessly woven into the fabric of our daily physical-world lives.
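To show how such a catalog might be organized, here is a minimal sketch, in TypeScript, of the definitions above encoded as a machine-readable taxonomy. It is purely illustrative: the type names, fields, and example entries (including the gaps listed) are our own assumptions for demonstration, not part of the Standard Model itself.

```typescript
// Illustrative sketch only: names, fields, and entries are assumptions,
// not an official schema for the Standard Model of the Metaverse.

// Where an experience sits on the physical-to-digital continuum
// (collectively, "XR").
type RealityMode = "AR" | "MR" | "VR";

// How a human reaches the experience (see Human/Machine Interfaces above).
type InterfaceKind =
  | "web-browser"
  | "smartphone"
  | "game-console"
  | "headset"
  | "projection"
  | "human-fusion"; // direct nervous-system connection

interface MetaverseTechnology {
  name: string;
  modes: RealityMode[];        // which parts of the continuum it serves
  interfaces: InterfaceKind[]; // how users reach it
  gaps?: string[];             // known missing infrastructure, if any
}

// Example entries drawn from the experiences named in this article.
const examples: MetaverseTechnology[] = [
  {
    name: "Roblox",
    modes: ["VR"],
    interfaces: ["web-browser", "smartphone", "game-console"],
  },
  {
    name: "Pokémon GO",
    modes: ["AR"],
    interfaces: ["smartphone"],
    gaps: ["battery life", "wireless bandwidth"], // hypothetical examples
  },
];

// Group technologies by where they fall on the continuum.
function byMode(techs: MetaverseTechnology[], mode: RealityMode): MetaverseTechnology[] {
  return techs.filter((t) => t.modes.includes(mode));
}

console.log(byMode(examples, "AR").map((t) => t.name)); // ["Pokémon GO"]
```

Even a toy structure like this makes the gaps visible: once every technology carries a list of the infrastructure it is waiting on, the missing pieces of the model can be tallied rather than guessed at.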
Where This Is Going
The basic definitions above cover just a few of the general categories we have identified; there are many more. The Standard Model of the Metaverse is a grand experiment, and we need your help to complete it. If you are interested in contributing to the project, please fill in the form at the bottom of the page here.
Disclosure: This is not a sponsored post. I am the author of this article and it expresses my own opinions. I am not, nor is my company, receiving compensation for it.