In the first two articles of our "Lessons from Sundance" series, we looked at how filmmakers and experience designers use augmented reality (AR) to tell stories. We conclude the series by looking at how three Wunderkind engineers created an AR experience that allowed two people to see and interact with the same holograms, and how they overcame some difficult challenges inherent to making collaborative AR experiences. In this part, we chat through their creative process and get a feel for what it has been like working on a Sundance installation. In Part 2 (coming out next week), the engineers share their tips and recommended resources for those looking to make collaborative AR experiences.
The creative direction and experience design considerations that went into making the Sundance Film Festival's first collaborative augmented reality (AR) experience tell only part of the story. To get to the meat of the experience, though, you have to look at the technical aspects – the code and graphics work underneath it all – to truly understand the breadth and depth of what it's like creating a Sundance-worthy project using Unity.
If you're developing and designing for AR, or are thinking about doing so, you're essentially creating an experience for just one person (the end user) who is wearing the AR headset. Everything you design is then produced from the perspective of that user: How will the holograms look? Where will holographic buttons and screens appear in their field of view when they're sitting and standing? In essence, the question that guides your work is "How can I translate what has traditionally been a flat 2D experience seen on computer screens into a 3D experience seen in front of and around people?" While it isn't as simple or as straightforward as porting existing apps and programs from one operating system to another, taking an app or experience that has lived in a 2D environment (e.g. Excel spreadsheets) and placing it into the real world is feasible with some patience and elbow grease.
But what if you want to create shareable and collaborative AR experiences where two or more people can work on the same holographic info? How do you ensure that multiple people seeing the same holograms from different angles actually see the holograms correctly, and not just as flat or blocked images?
It's an entirely new set of challenges when you're creating an AR experience that allows two people to simultaneously interact with the same holograms. For example, you have to ensure that through each person's headset, they see the holographic object as it would appear if it were a real-world object occupying physical space from their respective viewpoints.
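To make that idea concrete, here is a minimal sketch of the underlying math, written in plain Python with NumPy. None of these names come from the Meta SDK or Unity – it's a hypothetical illustration of the core principle: when both headsets track the same shared world coordinate frame, a hologram anchored at one world position can be transformed into each user's view space through the inverse of that user's head pose, so each person sees it from their own angle.

```python
import numpy as np

def translation(x, y, z):
    """Build a 4x4 rigid transform that only translates (no rotation)."""
    m = np.eye(4)
    m[:3, 3] = [x, y, z]
    return m

def view_from_world(head_pose):
    """Invert a world-from-head pose to map world points into view space."""
    return np.linalg.inv(head_pose)

# One hologram anchored at a single shared world position (homogeneous coords).
hologram_world = np.array([0.0, 1.0, 2.0, 1.0])

# Two headsets standing apart, both tracking the same world frame.
# (Positions here are made up for illustration.)
user_a_pose = translation(-1.0, 1.0, 0.0)  # one metre to the left
user_b_pose = translation(1.0, 1.0, 0.0)   # one metre to the right

# Each renderer maps the SAME world position into its OWN view space.
a_view = view_from_world(user_a_pose) @ hologram_world
b_view = view_from_world(user_b_pose) @ hologram_world

# Same object, but user A sees it offset to the right of centre and
# user B sees it offset to the left - exactly as a physical object would be.
```

The design point is that the hologram's pose lives in the shared world frame, not in either user's head frame; only the per-user view transform differs.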
Enter the "Sundance Kids": Charles, Garrett, and Paulo. They answered those questions over the course of their work, and the entire Meta team bestowed the nickname on them as a nod both to their work at Sundance and to Robert Redford, the original Sundance Kid.
The Sundance Kids' Creative Process
Victor: Guys, thank you for chatting with me. I know you have been busy, but I want people to know what the Sundance Kids have accomplished! I'm curious to know what has it been like working on a Sundance-worthy piece? And please, no "alternative facts" on what it's been like.
Charles: It was a lot of fun, but also really tiring. Sundance is known for showcasing such high-caliber experiences that for us to make something that would even measure on the same scale required [tons] of dedication and coordination.
Garrett: Rewarding. We worked very hard to make every detail line up with the vision outlined by [creative] directors Daniel and Eran.
Paulo: It was a great opportunity to be part of the amazing team working on the project. I think it was a great experience for learning and improving, too.
Y'all are being too modest here. Combined with your ingenuity, and the computer vision and software teams' doggedness and dedication to improving the Meta 2, I think you all deserve a huge round of applause, and then some! It really has been a "team effort" in every sense of the word.
Paulo, Garrett, and Charles: Thanks, Victor.
So what was your creative process behind making the scenes “come alive”, i.e., how did you guys know and figure out how certain scenes should be illustrated?
Charles: We worked closely with the creative directors, Eran and Daniel, for this project. They actually produced a full video of the experience that served as our central reference. On top of that, our tech art wizards, Garrett and Hal, pulled off a lot of magic to force Unity’s rendering engine to do what we wanted from it. Special shaders were written for our effects and animations that did things from real-time explosion simulation to animating a beating heart.
Garrett: I'd say a huge resource was referencing the vision of Daniel and Eran. They produced a complete concept video of the experience with narration, sound effects, and prototype animations. Although some things changed in the transfer from 2D to 3D, almost everything lined up.
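The interview doesn't include the shader source, but the beating-heart animation Charles mentions is the kind of effect that is typically driven by a simple periodic scale factor evaluated every frame. Below is a hedged, illustrative sketch in Python – in an actual Unity project this logic would live in an HLSL/ShaderLab shader or a C# script, and the function name and constants here are invented for illustration:

```python
import math

def heartbeat_scale(t, bpm=60.0, amplitude=0.08):
    """Illustrative pulse: a sharpened sine so each 'beat' is a quick
    swell followed by a longer relaxation, like a heart contraction.

    t         -- time in seconds
    bpm       -- beats per minute
    amplitude -- how much the mesh swells at the peak of a beat
    """
    phase = (t * bpm / 60.0) % 1.0          # position within one beat, 0..1
    pulse = math.sin(math.pi * phase) ** 4  # raising to a power narrows the bump
    return 1.0 + amplitude * pulse          # uniform scale applied to the mesh

# At rest (between beats) the scale is 1.0; at the peak of a beat the
# mesh swells by `amplitude`, then eases back down.
```

Raising the sine to a higher power is one common trick to make the swell feel like a sharp contraction rather than a gentle breathing motion; the real shader work described in the interview would involve considerably more than this.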
I'm impressed you guys pushed Unity to its limits. I know it can handle a lot, and it's breathtaking to see technology pushed that far to produce a beautiful work of art – similar to how Unity transformed technology into breathtaking art with Adam [a short film made with the Unity engine].
Paulo, Garrett, and Charles: Thanks, Victor!
As you can see from the first half of the interview, it takes creativity and a willingness to push the proverbial envelope when it comes to making collaborative AR experiences. Next week, the Sundance Kids reveal how it took some outside-of-the-box thinking (and quite a bit of trial and error) to figure out how holographic objects should appear from two people's perspectives. Part 2 will feature their in-depth tips and resources, so keep an eye out to catch their learnings!