As a consultancy, 1P understands the need for strategic, resonant content in any customer journey. Our collaborative Discovery as a Service process solves complex business problems at a company-wide level and often results in the creation of a Unified Narrative Framework.
Once all company stakeholders are united under this shared understanding, we put on our agency hat to drive more effective—and more targeted—storytelling experiences for our clients. So we're building new workflows with emerging technology and experimenting with new tools to do just that.
Part of that could be building an interactive storytelling experience for your upcoming conference. It’s tough to plan because the entire thing has to fit within a specifically designated area. To maximize story potential, you want to know where people will be standing, and how they’ll be taking in auditory and visual stimuli from every angle. You need an accurate 3D model of the space, and building that from scratch takes a lot of time and money. Until now.
We all know Rome wasn’t built in a day:
It was built in two days—from a variety of scans—with Neural Radiance Fields (NeRF), a technology that’s closely related to photogrammetry. Let’s look at both.
Photogrammetry is a way of generating geometry and constructing a 3D mesh from a series of overlapping still images, often captured at several frames per second. It’s closely related to the LiDAR scanning you already see in the wild: self-driving cars, for example, constantly scan the road for obstacles (pedestrians, potholes, etc.).
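If you’re curious what that looks like under the hood, here’s a minimal Python sketch of the very first step a photogrammetry pipeline performs: finding the features that two overlapping photos share, so the software can work out where the cameras were and triangulate geometry. It uses the open-source OpenCV library, and the filenames are placeholders for any two overlapping shots.

```python
# Minimal sketch of photogrammetry's first step: matching features
# between two overlapping photos. Filenames are placeholders.
import cv2

img_a = cv2.imread("scan_frame_001.jpg", cv2.IMREAD_GRAYSCALE)
img_b = cv2.imread("scan_frame_002.jpg", cv2.IMREAD_GRAYSCALE)

# Detect distinctive keypoints and descriptors in each image.
orb = cv2.ORB_create(nfeatures=5000)
kp_a, des_a = orb.detectAndCompute(img_a, None)
kp_b, des_b = orb.detectAndCompute(img_b, None)

# Match descriptors between the two photos. Strong matches are points
# both photos see, which structure-from-motion later uses to recover
# camera positions and triangulate 3D geometry.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des_a, des_b), key=lambda m: m.distance)
print(f"{len(matches)} shared features found between the two frames")
```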
Building photoreal 3D environments can be expensive and difficult, but using photogrammetry, Quixel developed an entire photoreal real-time world in just 12 weeks. By hand, this might have taken several months. Even approached procedurally, it wouldn’t look as realistic as this final result:
But photogrammetry requires a scan of every 3D surface, while NeRF uses AI to fill in the gaps, inferring what the unseen surfaces look like from the images you give it. Sometimes you don’t even need to scan anything; you can get a 3D world from a collection of still images:
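For the technically curious, here’s a toy PyTorch sketch of the idea at NeRF’s core (not any particular product’s implementation): a small neural network that takes a 3D position and a viewing direction and predicts a color and a density, which a renderer then composites along camera rays to produce brand-new views of the scene.

```python
# Toy sketch of a NeRF-style model: position + view direction in,
# color + density out. Real NeRFs add positional encoding and are
# trained against the source photos; this only shows the model's shape.
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    def __init__(self, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden),   # x, y, z position + 3D view direction
            nn.ReLU(),
            nn.Linear(hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 4),   # R, G, B color + density
        )

    def forward(self, position, direction):
        out = self.net(torch.cat([position, direction], dim=-1))
        rgb = torch.sigmoid(out[..., :3])    # color in [0, 1]
        density = torch.relu(out[..., 3:])   # how "solid" this point is
        return rgb, density

# Query the scene at 1,024 sample points along some camera rays.
model = TinyNeRF()
points = torch.rand(1024, 3)
view_dirs = torch.rand(1024, 3)
colors, densities = model(points, view_dirs)
```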
Through the power of AI, this technology is taking giant leaps forward, like being able to approximate what a person might look like in 3D—even when you only have one angle available. AI is progressing so fast, in fact, that even movement can be transferred to the generated 3D model:
In the past, people had to make these kinds of 3D models point-by-point, by hand:
With photogrammetry, this process can be more automated. This environment was scanned in about seven minutes and uploaded to the cloud, on the spot. It has around 22 million points and was meshed automatically:
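As a rough illustration of what “meshed automatically” means, here’s a sketch using the open-source Open3D library (the scanner shown uses its own pipeline, and the filenames are placeholders): a scanned point cloud goes in, a triangle mesh comes out, with no hand modeling required.

```python
# Rough sketch of automatic meshing from a scanned point cloud
# using Open3D. Filenames are placeholders.
import open3d as o3d

# Load the scanned point cloud and estimate surface normals,
# which the meshing step needs to orient its triangles.
pcd = o3d.io.read_point_cloud("venue_scan.ply")
pcd.estimate_normals()

# Poisson surface reconstruction turns millions of points into a
# triangle mesh with no manual modeling.
mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
    pcd, depth=9
)
o3d.io.write_triangle_mesh("venue_mesh.ply", mesh)
```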
Photogrammetry produces millions upon millions of polygons, but people are building techniques to reduce those millions into something more usable, which opens the door to more real-time applications.
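Here’s one way that reduction can look in practice, again sketched with Open3D and with illustrative numbers rather than our production workflow: quadric decimation collapses a dense scan down to a triangle budget a real-time engine can actually handle.

```python
# Sketch of polygon reduction with Open3D: decimating a dense
# photogrammetry mesh for real-time use. Filenames and the target
# triangle count are illustrative.
import open3d as o3d

mesh = o3d.io.read_triangle_mesh("venue_mesh.ply")
print(f"Scanned mesh: {len(mesh.triangles):,} triangles")

# Quadric decimation collapses edges while preserving overall shape,
# trading millions of polygons for something a game engine can render live.
lite_mesh = mesh.simplify_quadric_decimation(target_number_of_triangles=100_000)
o3d.io.write_triangle_mesh("venue_mesh_realtime.ply", lite_mesh)
```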
Photogrammetry scans and digital restoration lay the foundation for augmented reality, so you can walk around and see historical artifacts in the original environments where they no longer physically exist.
Or maybe you’re considering hosting your virtual event in an unusual outdoor spot that has no pre-existing reference: Photogrammetry can map that space and help designers understand how to place things in a set.
On a purely practical level, we no longer have to rely on event hall CAD models to previsualize a stage show or setup, like in this interactive opening number we created for AdobeMAX:
The walls separating cutting-edge tech from accessibility are quickly coming down, and we’re taking advantage of that. While photogrammetry is the technology that’s most readily available now, NeRF is a quickly developing technology we’re keeping our eye on.
The Story at the Center blog shares insights and strategies that have helped organizations—from startups to Fortune 100s—harness the power of storytelling to navigate complexities and dominate their markets.
Subscribe to receive more insights, analysis, and perspectives from First Person.