Forged On Olympus

A Real World Engine: Metasynthesis

Information Pyramid: What is the Capstone?
If you labeled every piece of information in the world, and ranked them based on importance, what would your order be? What would your Information Pyramid look like?

This idea sprang from an interesting essay about the hierarchy of information. The author’s breakdown:

My mental hierarchy of information value is:

1. Actual secrets
2. In-house operational data (of companies & govts)
3. Up-to-date expert research papers
4. Books
5. Wikipedia
6. Paywalled news (WSJ)
7. Non-paywalled news (huffpo)
8. Hokey free content like this blog
9. Gibberish farms like wikihow and about.com

A great initial list, but it’s missing important information: medical records, search histories, ineffable cultural norms, genetic data, etc.

I’ll explore this more deeply in future essays, but for now I want to highlight two information streams I believe will be most important this decade: measurements and generated content.

Age of Exploration
It took about 70,000 years for us to explore and settle all corners of the Earth. A little over a century ago, powered flight was barely possible; now technology enables us not only to fly across the globe, but also to the Moon.

With this exploration came a desire to map our surroundings. Each year, our ability to measure and map the world increased, oftentimes dramatically so. From the phones in our pockets to the satellites in orbit, the world is being measured and mapped at greater detail than ever before.

The impulse to explore is as old as humanity itself, predating written history. It’s what humans have been doing since we first left the cave to see what was over the next hill.

As we’ve explored farther, our ability to capture reality has progressed commensurately. Reality has always been mappable; the level of detail was the bottleneck. Those details are coming into sharper focus, most notably in biology.

Sea of Simulation
At the same time, our ability to simulate reality has grown exponentially. Nowhere is this more evident than in video games.

Kobe Bryant: NBA 2K vs NBA 2K18

Gaming hardware, consoles, and games have all advanced in tandem with Moore’s Law. Their revenue growth has followed a similar trajectory. Combined, the processing capabilities and profitability fuel a powerful flywheel that leads to further technical breakthroughs and growth.

Exponential processing capabilities, paired with ballooning budgets, have made video games, and their engines specifically, the most powerful generative tools in the world. Local processing improved, while cloud computing unlocked even more speed and performance. Today, these engines are the most sophisticated simulation tools in the world. Urban planning, architecture, automotive engineering, music entertainment, filmmaking, and more have either partially adopted these engines or shifted wholesale to workflows built around them.

Both The Mandalorian and The Lion King were shot using these tools. Hong Kong designed its International Airport in Unity to simulate changes in passenger volume, weather, and natural disasters. Militaries are using Unreal.

While nascent, these workflow changes highlight how important these capabilities are. Their utility outside traditional gaming increases every year, and their value will grow commensurately. The clip below shows how physically realistic these engines have become.

Rope, acoustics, and physics. Longer video here.

Matthew Ball, a renowned media tech analyst whose six-part primer on Epic Games encompasses many of these changes, explains:

In this case, “game” engines were built for games, but now they’re often the best at running any virtual simulation. And that’s a BIG opportunity.

The Middle Layer
There’s a popular photo on Reddit showing Alaskan glacial sediment water meeting ocean water.

A river filled with clay and sediment from Alaska’s glaciers (left) meeting the ocean (right). Credit: Ken Bruland, professor of ocean sciences at the University of California, Santa Cruz

The gap between simulation and reality mirrors this. Initially, there’s a stark difference: you can tell which is which. As this article points out, though, the glacial sediment eventually mixes with the salt water, and you can’t tell the difference anymore.

We’re approaching this moment, and the reason has less to do with the graphical performance of game engines (though that is certainly an important factor) and more to do with the computational middle layer.

The compute layer sits between the engine and reality, making predictions non-stop. Each surprise triggers learning, which strengthens the compute layer’s model and increases the accuracy of its next prediction. Autonomous vehicles do this. So do robots, satellites, and drones.
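
To make that loop concrete, here’s a minimal sketch in Python. The model, sensor stream, and surprise threshold are all invented for illustration; real systems use far richer models:

```python
# A toy predict-compare-learn loop for the compute middle layer.
# Everything here is illustrative, not any specific vendor's API.

class Model:
    """Toy online model: an exponential moving average of observations."""
    def __init__(self, alpha=0.5):
        self.estimate = 0.0
        self.alpha = alpha

    def predict(self):
        return self.estimate

    def update(self, observation):
        # Nudge the estimate toward reality, in proportion to the error.
        self.estimate += self.alpha * (observation - self.estimate)

def run(sensor_stream, surprise_threshold=1.0):
    model = Model()
    for observation in sensor_stream:
        error = abs(observation - model.predict())
        if error > surprise_threshold:
            # Each surprise triggers learning, sharpening the next prediction.
            model.update(observation)
    return model.estimate

print(run([1.0, 1.1, 5.0, 5.2, 5.1]))  # the estimate converges toward ~5
```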

What’s different here is that these general-purpose engines open an even bigger door, one that some companies have already started to explore.

We saw this with Google’s Pixel smartphone line; their computational photography championed doing less with hard-wired circuitry and more with code. By capturing and combining multiple pictures to create a single, better picture, they began marching towards the software-defined camera. They fed these images into Google Photos, which fueled their machine learning algorithms, leading to industry-leading portrait, night sight, and astrophotography modes.
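
The core trick is easier to see in code. Here’s a toy version of the capture-and-combine step, assuming a burst of already-aligned frames; real pipelines also handle alignment, ghost rejection, and tone mapping:

```python
import numpy as np

def merge_burst(frames):
    """Average a burst of aligned frames to cut sensor noise.

    Noise that is random per frame shrinks roughly with the square root
    of the number of frames, while the true scene reinforces itself.
    """
    stack = np.stack([f.astype(np.float64) for f in frames])
    return stack.mean(axis=0)

# Simulate an 8-frame burst of a noisy gray scene.
rng = np.random.default_rng(0)
scene = np.full((64, 64), 128.0)
burst = [scene + rng.normal(0, 20, scene.shape) for _ in range(8)]
merged = merge_burst(burst)

print(round(np.abs(burst[0] - scene).mean(), 1))  # single-frame noise, ~16
print(round(np.abs(merged - scene).mean(), 1))    # merged noise, ~16/sqrt(8)
```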

Microsoft’s Flight Simulator 2020 exhibits an even grander version of this blend of capture, computation, and (re)creation. From Protocol’s write up:

The algorithms and data — including OpenStreetMap — were then fed into Microsoft's vast Azure computing cloud to generate Flight Simulator's 2.5-petabyte model, which includes 2 trillion trees, 1.5 billion buildings, 117 million lakes and just about every road, mountain, city and airport on the planet.

These engines not only process enormous amounts of real-world data, they do so in real time. Over time, these systems will use deep learning to teach themselves how to improve.

This also illustrates how traditional camera companies like Canon, Nikon, and Sony, as well as lens companies like Leica Camera, Zeiss Group, and Olympus Corporation, will be disrupted. None are prepared for this software-based world of capture and creation.

As these capabilities converge, we approach the holy grail: an engine that captures data down to each atom, generates its own model, and computes the two together. This middle layer is where computational photography, flight simulators, and video game engines blur together. This is where the real magic will happen.

Metasynthesis
This decade, we will see the proliferation of computational engines that act increasingly like a digital sandbox, allowing us to experiment using reality-informed rules. These engines will incorporate our understanding of physics, biology, and mathematics to produce virtual reproductions of physical environments. They will be calibrated with real-world data via the compute middle layer, creating a reality-informed engine: metasynthesis. From there, we can conduct research at an unprecedented pace.

Data + Engine = Metasynthesis
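
One way to picture that equation: tune a simulator’s free parameter until its output matches real measurements. Here’s a deliberately tiny sketch; the falling-object model, the “observed” data, and the grid search are all stand-ins for far richer machinery:

```python
# Calibrate a toy physics engine against measurements: pick the drag
# coefficient whose simulated trajectory best matches the observed one.

def simulate_fall(drag, steps=50, dt=0.1, g=9.81):
    """Velocity over time for a falling object under linear drag."""
    v, trajectory = 0.0, []
    for _ in range(steps):
        v += (g - drag * v) * dt
        trajectory.append(v)
    return trajectory

def calibrate(observed, candidates):
    """Return the drag value minimizing squared error against observations."""
    def loss(drag):
        sim = simulate_fall(drag, steps=len(observed))
        return sum((s - o) ** 2 for s, o in zip(sim, observed))
    return min(candidates, key=loss)

# Pretend these measurements came from the real world (drag was 0.5).
observed = simulate_fall(0.5)
best = calibrate(observed, candidates=[d / 10 for d in range(1, 21)])
print(best)  # -> 0.5: the engine is now reality-informed
```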

The brilliance of engines is that they allow us to experiment without having to worry about physical repercussions. Limited externalities accelerate experimentation.

An example: once Usain Bolt has been successfully digitized, Puma could design digital shoe mockups and test them in-engine on Bolt’s avatar. Bolt would race on a digital track while the engine measures the incremental improvements or deteriorations in performance. This repeats until a final design is selected. From there, the prototype would be physically created (3D printed, perhaps), and then tested again. What would normally take two months could be accomplished in two days.
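
A hypothetical sketch of that loop, with every name and number invented; the surrogate function below stands in for a full biomechanical engine:

```python
import random

def sprint_time(stiffness):
    """Toy surrogate for the engine's simulated 100m time, in seconds.

    Invented relationship: an optimal sole stiffness near 0.7 minimizes
    the time, with a little simulation noise on top.
    """
    return 9.58 + (stiffness - 0.7) ** 2 + random.gauss(0, 0.005)

def best_design(n_candidates=200):
    """Sweep candidate designs in-engine and keep the fastest."""
    designs = [random.uniform(0.0, 1.0) for _ in range(n_candidates)]
    return min(designs, key=sprint_time)

random.seed(42)
winner = best_design()
print(f"stiffness={winner:.2f}, simulated time={sprint_time(winner):.3f}s")
```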

22 years of progress: 1996 to 2018. Imagine what’s next.

Other industries are interested in this too. Architects want to model their designs in different environments, evaluate structural weaknesses, and experiment with generative design. Militaries want these technologies to enhance reconnaissance drones, threat identification systems, and training techniques.

But the greatest promise lies in healthcare. Doctors and scientists want these models for drug discovery, robotic surgery, and precision medicine. As COVID-19 has illustrated, everyone’s body reacts differently to sickness, medication, and pathogens. We have plenty of different drugs for heart disease, yet we don’t know which one(s) will work best for each patient.

This is the endgame — digitized, personalized models of everyone — running experiments on them while we rest, eat and sleep, so that, if something happens, we know what to do. Imagine drugs ensconced in a virtual sandbox, testing themselves against your digital biomarkers while you carry on your day.
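
In code, that vision might look something like this toy ranking; the response function, parameters, and drugs are invented stand-ins for a real pharmacological model fed by personal biomarkers:

```python
# Illustrative only: rank candidate drugs against a patient-specific model.

def predicted_benefit(drug, patient):
    """Toy response: efficacy scaled by how well the drug matches the
    patient's metabolic rate, minus a side-effect penalty."""
    fit = 1.0 - abs(drug["target_rate"] - patient["metabolic_rate"])
    return drug["efficacy"] * max(fit, 0.0) - drug["side_effects"]

patient = {"metabolic_rate": 0.8}  # from the patient's digital biomarkers

drugs = [
    {"name": "A", "efficacy": 0.90, "target_rate": 0.5, "side_effects": 0.10},
    {"name": "B", "efficacy": 0.70, "target_rate": 0.8, "side_effects": 0.05},
    {"name": "C", "efficacy": 0.95, "target_rate": 1.0, "side_effects": 0.20},
]

best = max(drugs, key=lambda d: predicted_benefit(d, patient))
print(best["name"])  # -> "B": the best fit for this particular patient
```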

At the opening of this essay, I detailed examples of our continued progress in capturing reality. We’re on the cusp of trajectory-changing discoveries across all of healthcare. Biology is undergoing a resolution revolution, providing an atomic, in vivo level of detail previously deemed infeasible. The granularity of this data will supercharge metasynthesis, elevating model accuracy and propelling us forward.

Computational modeling — metasynthesis — will become an integral part of the personalization of medicine. We will reconstruct a patient’s anatomy, predict their biological response, and eventually choose specific drugs and procedures for that specific person. Scientists are already modeling molecules, drugs, the heart, and the brain. More will emerge.

The Living Heart Project: an interactive virtual heart

This is the medicine of the future. One that we’ll see in our lifetimes.

New Worlds to Explore
From birth, we build a mental representation of the world. Every new piece of information, every surprise, recalibrates our brain. We’ve long had the ability to capture and replicate, and now we’re building the tools to recalibrate our creations.

We’re digitizing the world and populating it with more realism over time. Pac-Man doesn’t look much like reality, but Metal Gear Solid does.

At its root, synthesis means “composition, a putting together.” Meta is Greek, meaning “changed, between; beyond.” Metasynthesis combines real data with the engine, and each sense of meta evokes a distinct aspect of what’s happening. We’ve been digitally synthesizing for a few decades now. The computational middle layer lies “between” reality and simulation. It also represents new, “changed” tools and the directionality of these trends; hence “beyond.”

It also evokes metaphysics, the branch of philosophy concerned with the nature of reality. Metasynthesis extends this exploration of reality to bits and bytes.

Directionally, the gap between reality and simulation is shrinking. We can capture the world in increasing resolution, and at the same time we can replicate the world with similar clarity.

The increasingly blurry boundary between games and life is a testament to our species’ exploration and discovery. The sci-fi digital world of Tron may become a reality, one we use to explore amazing new opportunities.

The desire to explore is in our DNA. Now, we get to explore DNA in incredible detail.