(Dis)connected: the paradox of isolation in a virtually connected world

Our ancestors once huddled in small, isolated communities, their faces illuminated by flickering fires. 

The first evidence of controlled campfires for cooking and warmth dates back some 700,000 years, though hominids had already existed for several million years by then.

Archaeological evidence suggests that it wasn’t until the last 20,000 years or so that humans began to ‘settle down’ and engage in increasingly complex societal and cultural practices.

Over 100,000 years before that, Homo erectus started to live in small social groups, at which point we can observe changes in the vocal tract that indicate primitive forms of communication.

This is when early humans began to translate and share their internal states, essentially building a primitive worldview in which someone and something existed beyond the self.

Early forms of communication and social bonding brought about a cascade of changes that thrust human evolution forward, culminating in the emergence and dominance of modern humans, Homo sapiens.

Little did early hominids know that the fire around which they gathered was but a pale reflection of the fire that burned within them – the fire of consciousness illuminating them on the path to becoming human. 

And little did they know that countless generations later, their descendants would find themselves gathered around a different kind of fire – the bright, electric glow of their screens.

The primal roots of human thought

To understand the nature of this primitive mind, we must look to the work of evolutionary psychologists and anthropologists who have sought to reconstruct the cognitive world of our distant ancestors.

One of modern evolutionary psychology’s key insights is that the human mind is not a blank slate but a collection of specialized cognitive modules shaped by natural selection to solve specific adaptive problems. 

This is not exclusive to humans. In his early evolutionary research, Darwin observed that the Galapagos finches, for example, had evolved highly specialized beaks that enabled them to occupy different ecological niches.

These varied tools correlated with diverse behaviors. One finch might crack nuts with its large, broad beak, whereas another might pry berries from a bush using its razor-like bill. 

Darwin’s finches illustrate the importance of domain specialization in evolution. Source: Wikimedia Commons.

As psychologist Leda Cosmides and colleagues such as Steven Pinker have argued in what is now known as ‘evolutionary psychology,’ the brain’s modules operate largely independently of one another, each processing domain-specific information.

In the context of primitive history, this modular architecture would have been highly adaptive. 

In a world where survival depended on the ability to quickly detect and respond to threats and opportunities in the environment, a mind composed of specialized, domain-specific modules would have been more efficient than a single general-purpose system.

Our distant ancestors inhabited this world. It was a world of immediate sensations, largely unconnected by any overarching narrative or sense of self.

However, over the course of thousands of years, hominid brains became more broadly interconnected, enabling tool use, protolanguage, language, and social interaction.

Timeline of human development. Source: ResearchGate.

Today, we know that different parts of the brain are heavily integrated from birth. fMRI studies such as Raichle et al. (2001) show that different brain regions continually share information with one another, even at rest.

For example, Holloway’s research (1996) on early hominid brains indicates that changes in brain architecture over time supported enhanced integration. Stout and Chaminade (2007) explored how tool-making activities correlate with neural integration, suggesting that the demands of these tasks may have driven the development of broader integrative neural capacities.

The need for complex communication and abstract reasoning increased as humans progressed from small-scale groups where individuals were intimately familiar with one another’s experiences to larger groups that often included individuals from different geographies, backgrounds, and appearances. 

Language was perhaps the most powerful catalyst for this cognitive revolution, creating shared meaning by providing a means of encoding and transmitting complex ideas and experiences across minds and generations.

Humans who could efficiently communicate and work with others gained advantages over those less able. And gradually, humans started to vocalize and communicate simply because they could, rather than for any specific adaptive or survival value.

Entering the age of hyper-personalized realities

Let us return to the present day, where technology may once again be presenting humanity with highly individualized worlds, in what might be described as a strange circling back to the ancient mind.

AI and VR, for example, are tailoring worlds to our individual preferences and desires. By generating highly realistic and context-aware text, images, and even 3D models, these systems let us create immersive environments, characters, and narratives that extend beyond the natural world.

Moreover, recent breakthroughs in edge computing and on-device AI processing have enabled VR devices to run sophisticated AI algorithms locally without relying on cloud-based servers.

This enables real-time, low-latency AI applications within VR environments, such as dynamic object recognition, gesture tracking, and speech interfaces.
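To make the idea concrete, here is a minimal, hypothetical sketch (in Python) of what an on-device inference loop might look like: a stand-in gesture model is queried once per tracking frame, and any frame that would blow a roughly 11 ms budget (about 90 Hz) is flagged. The LocalGestureModel class, the frame format, and the budget are assumptions for illustration, not a real headset API.

```python
import time

# A minimal, hypothetical sketch of an on-device inference loop for a VR
# headset. LocalGestureModel is a stand-in for a small model shipped with
# the device; it is an assumption for illustration, not a real API.

class LocalGestureModel:
    """Stub for a quantized gesture classifier running locally."""

    def predict(self, hand_keypoints):
        # A real model would run a small neural network over the keypoints;
        # this stub just maps "keypoints present" to a fixed gesture.
        return "pinch" if hand_keypoints else "none"


def run_frame_loop(model, frames, budget_ms=11.0):
    """Classify each tracking frame locally and flag frames that would
    miss a ~90 Hz refresh budget (roughly 11 ms per frame)."""
    for keypoints in frames:
        start = time.perf_counter()
        gesture = model.predict(keypoints)      # inference never leaves the device
        elapsed_ms = (time.perf_counter() - start) * 1000
        if elapsed_ms > budget_ms:
            print(f"frame overran its budget: {elapsed_ms:.2f} ms")
        yield gesture


if __name__ == "__main__":
    # Three fake frames: 21 hand keypoints, an empty frame, 21 keypoints again.
    fake_frames = [[(0.1, 0.2, 0.3)] * 21, [], [(0.4, 0.5, 0.6)] * 21]
    for gesture in run_frame_loop(LocalGestureModel(), fake_frames):
        print("gesture:", gesture)
```

The point of the sketch is locality: because the prediction call never leaves the device, the loop pays no network round trip, which is what makes real-time gesture and speech interfaces feasible.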

In parallel, the latest VR hardware, exemplified by the Apple Vision Pro, has seen huge improvements in display resolution, refresh rates, and field of view, enhancing visual fidelity and immersion.

Advanced eye-tracking and hand-tracking technologies, powered by depth-sensing cameras and machine learning algorithms, now enable more natural and intuitive interactions within virtual environments.

In 2016, Mark Zuckerberg strode through a Mobile World Congress event as rows of attendees donned Samsung Gear VR headsets, and the resulting image became an iconic warning of VR’s potential to isolate people in their personal worlds.

VR didn’t take off back then; with the Apple Vision Pro, however, it may be on the cusp of a new era of mass adoption.

Many who have tried the Apple Vision Pro say it will ‘change everything,’ and wearing a headset in public is no longer the rare sight it once was.

In this world, the concept of a shared reality, a common ground of experience and understanding, could become tenuous and fragmented.

VR’s impact on the brain

What sets the hyper-personalized realities of the AI and VR age apart is their scope and granularity. 

With machine learning, it’s now possible to create virtual worlds that are not just superficially tailored to our tastes but fundamentally shaped by our cognitive quirks and idiosyncrasies. 
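As a toy illustration of what ‘fundamentally shaped’ could mean in practice, the hypothetical Python sketch below ranks elements of a virtual scene against a per-user profile of learned affinities. Every name, tag, and weight here is invented; a real system would infer the profile from behavioral signals rather than hard-code it.

```python
from dataclasses import dataclass

# Hypothetical sketch: preference-weighted personalization of a virtual scene.
# All names, tags, and weights below are invented for illustration.

@dataclass
class SceneItem:
    name: str
    tags: tuple


def affinity_score(item, profile):
    """Sum the user's learned affinity for each of the item's tags."""
    return sum(profile.get(tag, 0.0) for tag in item.tags)


def personalize(scene, profile, top_k=2):
    """Return the top-k scene items most aligned with the user's profile."""
    return sorted(scene, key=lambda item: affinity_score(item, profile),
                  reverse=True)[:top_k]


if __name__ == "__main__":
    # Weights a real system might infer from gaze, dwell time, or choices.
    profile = {"calm": 0.9, "novelty": 0.6, "social": 0.1}
    scene = [
        SceneItem("quiet forest clearing", ("calm", "novelty")),
        SceneItem("crowded virtual plaza", ("social",)),
        SceneItem("abstract art gallery", ("novelty",)),
    ]
    for item in personalize(scene, profile):
        print(item.name)
```

Even in this trivial form, two users with different profiles are already handed two different worlds; scale the same logic up to generative models and the divergence compounds.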

As signaled by the Apple Vision Pro, we’re not just content consumers but active participants in our own private realities.

But what are the impacts? Is it all just a novelty?

Several studies have investigated the potential effects of VR on cognitive processes, mental health, and social interactions.

A study by Madary and Metzinger (2016), for example, raised ethical concerns about the use of VR, particularly regarding its impact on personal identity and autonomy.

The authors argued that VR’s immersive nature could lead to a “loss of perspective” and a blurring of the boundaries between virtual and real experiences, potentially affecting an individual’s sense of self and decision-making processes.

A systematic review by Spiegel (2018) examined the potential risks and side effects of VR use. The findings suggested that prolonged exposure to VR environments could lead to symptoms such as eye strain, headaches, and nausea, collectively known as “cybersickness.”

Cybersickness induced by VR. Source: Chandra, Jamiy, and Reza (2022)

Among the stranger impacts of VR, a study by Yee and Bailenson (2007) explored the concept of the “Proteus Effect,” which refers to the phenomenon where an individual’s behavior in a virtual environment is influenced by their avatar’s appearance. 

The study found that participants assigned taller avatars exhibited more confident and assertive behavior in subsequent virtual interactions, demonstrating the potential for VR to alter behavior and self-perception.

The positive case for VR

While it’s important to acknowledge and address the risks associated with VR, it’s equally crucial to recognize this technology’s numerous benefits and opportunities. 

One of the most promising applications of VR is in the field of education. Immersive virtual environments can provide students with engaging and interactive learning experiences, allowing them to explore complex concepts and phenomena in ways that traditional teaching methods cannot replicate. 

For example, a study by Parong and Mayer (2018) found that students who learned through a VR simulation exhibited better retention and knowledge transfer than those who learned through a desktop simulation or slideshow.

VR also holds massive potential in the realm of healthcare, particularly in the areas of therapy and rehabilitation.

For example, a meta-analysis by Fodor et al. (2018) examined the effectiveness of VR interventions for various mental health conditions, including anxiety disorders, phobias, and post-traumatic stress disorder (PTSD). The findings indicated that VR-based treatments effectively reduced symptoms and improved patient outcomes, highlighting this technology’s therapeutic potential.

Moreover, VR can serve as a powerful tool for fostering empathy and social connection. VR can promote understanding, compassion, and inclusivity by allowing individuals to experience the world from different perspectives and walk in others’ shoes. 

An intriguing study by Herrera et al. (2018) investigated the impact of a VR experience designed to promote empathy toward homeless individuals. Participants who took a homeless person’s perspective in VR showed more positive, longer-lasting attitudes toward homeless people than those who completed a traditional perspective-taking exercise.

The paradox of connectivity

At first glance, the prospect of living in a world of hyper-personalized realities may seem like the ultimate fulfillment of a solipsistic dream – a chance to finally inhabit a universe that is perfectly tailored to our own individual needs and desires.

It might also be a world we can live in forever, saving and loading checkpoints as we roam digital environments perpetually. 

But left unchecked, this ultimate form of autonomy has another side.

The notion of “reality” as a stable and objective ground of experience depends on a common perceptual and conceptual framework—a set of shared assumptions, categories, and norms that allow us to communicate and coordinate our actions with others.

If we become enveloped in our individualized virtual worlds where each individual inhabits their own bespoke reality, this common ground might become increasingly fragmented. 

When your virtual world radically differs from mine, not just in its surface details but in its deepest ontological and epistemological foundations, mutual understanding and collaboration risk fraying at the edges. 

In this sense, the hyper-personalized realities of the AI age represent a paradox: even as they promise to connect us in ever-more intimate and immersive ways, they threaten to drive us further apart.

Steering a hyper-personalized future

The advent of AI-powered, hyper-personalized virtual realities oddly mirrors our distant ancestors’ isolated, individualized worlds.

The more time humanity spends in these hyper-personalized realities, the more our thoughts, emotions, and behaviors could become attuned to their own unique logic and structure.

So, how can we adopt the advantages of next-gen VR without losing sight of our shared humanity? 

Vigilance, awareness, and respect are critical. The future will see some who embrace living in VR worlds, augmenting themselves with brain implants, cybernetics, and so on. It will also see those who reject all that in favor of a more traditional lifestyle.

We must respect both perspectives.

This means being mindful of the algorithms and interfaces that mediate our experience of the world and actively seeking experiences that challenge our assumptions and biases. Hopefully, it will become intuitive to keep one foot outside of the virtual world.

So, as we gather around the flickering screens of our digital campfires, let us not forget the lessons of our ancestors, the importance of intersubjectivity, and the perils of retreating into isolation.

Beneath the surface of our differences and idiosyncrasies, we share a fundamental cognitive architecture shaped by millions of years of evolutionary history.