Meta's AR Design Guideline #1 - Think Spatial 

Place Tools and Content in Space

In this post, we dive deeper into Guideline #1: Think Spatial. As you may have seen in the scientific framework for spatial interfaces, these guidelines are rooted in our philosophy of minimizing the time and effort needed to understand an interface and take action. We'll continue our deep dives into each guideline over the coming weeks, but we highly recommend downloading all nine guidelines (see the blue box entitled "Get All of Meta's AR Design Guidelines") to get a better sense of how they tie together toward the ultimate goal of enhancing our abilities to create, communicate, and collaborate.


Think Spatial: Place Tools and Content in Space

Replace flat layouts of windows, menus, and buttons, which generally favor only half of the visual system, with a Spatial Interface that arranges tools and content in 3D space around the user. This leverages the user’s full perception of form and depth.

Do: Arrange volumetric tools in space



Don't: Cram traditional GUI elements (windows, icons, and menus) into 2D panels
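As a rough illustration of "arrange tools in space" rather than "stack them on a panel," here is a minimal Python sketch (function name, coordinates, and defaults are all hypothetical, not part of any Meta API) that computes evenly spaced positions along an arc in front of the user:

```python
import math

def arc_placements(n_items, radius=0.6, arc_deg=90.0, height=1.2):
    """Place n_items evenly along a horizontal arc in front of the user.

    Returns (x, y, z) positions in meters, with the user at the origin
    and -z pointing forward. Hypothetical layout helper for illustration.
    """
    if n_items == 1:
        angles = [0.0]
    else:
        half = math.radians(arc_deg) / 2
        step = math.radians(arc_deg) / (n_items - 1)
        angles = [-half + i * step for i in range(n_items)]
    # Sweep left to right across the arc at a comfortable reach and height.
    return [(radius * math.sin(a), height, -radius * math.cos(a)) for a in angles]
```

The point of the sketch is only that each tool gets its own position in the user's reachable space, engaging depth perception, instead of a cell in a flat grid.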




UI Design Suggestions

Reconsider traditional windows- and screen-based GUI conventions for AR
For example, don’t place a “start menu” in space, since it was designed for a small screen and presents a cluttered mess of menus, buttons, abstract icons, and tools. Instead...


...separate constituent elements into two buckets: volumetric tools and content
An application designed with spatial design guidelines should maintain a clear separation between volumetric tools and content. Tools, in particular, must conform to the users’ priors—the mental model they bring with them from the real world into AR—in a way that suggests their use and function (see The Neural Path of Least Resistance above). Content is more likely to vary in its form between abstract flatness and realistic volume. Text and video, for instance, are inherently flat, and should be contained within tangible 3D structures with their own sense of affordance. 3D models, on the other hand, may exist as free-standing objects of their own.


Distinguishing between tools and content

Regardless of appearance, tools and content differ in one significant way: content is an experience in and of itself, conveying information or some kind of sensory experience simply by existing. Tools, on the other hand, serve to create, modify, or otherwise interact with content.
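The distinction can be captured in a minimal sketch (class names are hypothetical; Python is used only for illustration): content carries data and is meaningful on its own, while a tool is meaningful only through what it does to content:

```python
from dataclasses import dataclass
from typing import Any, Optional

@dataclass
class Content:
    # Content is an experience in itself: it conveys information by existing.
    name: str
    data: Optional[Any]

class Tool:
    # Tools exist only to create, modify, or otherwise act on content.
    def apply(self, content: Content) -> Content:
        raise NotImplementedError

class Eraser(Tool):
    # Hypothetical example tool: it acts on content, but carries no
    # information of its own.
    def apply(self, content: Content) -> Content:
        return Content(content.name, data=None)
```

A piece of content can simply sit in the scene and be experienced; a tool without content to act on has nothing to do.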


Avoid expandable or hidden menus in AR

UI elements such as drop-downs are no longer needed: there's plenty of open space for tools and content around the user, and the Dorsal Pathway of the visual system makes them easy to process quickly. Take inspiration from real workspaces, like art studios and workshops, rather than mimicking the flat world of traditional UIs. Think in terms of tools the user can grab, rather than menus and buttons. For instance, if you need a variant of a paintbrush, why not offer a paintbrush holder where the user physically grabs the brush, rather than a menu?
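To make the paintbrush-holder idea concrete, here is a small Python sketch (names, layout, and coordinates are hypothetical, not a real AR framework): each brush variant occupies its own graspable position, and "grabbing" resolves to whichever brush the hand is nearest, rather than a menu selection:

```python
import math

class PaintbrushHolder:
    """Hypothetical holder: each brush variant is a graspable object
    in space, replacing a drop-down of brush options."""

    def __init__(self, brushes):
        # Lay the brushes out in a row, 10 cm apart, at arm's reach.
        self.slots = {name: (0.1 * i, 0.9, -0.5)
                      for i, name in enumerate(brushes)}

    def nearest_brush(self, hand_pos):
        # The brush the user's hand is closest to is the one they grab.
        return min(self.slots,
                   key=lambda name: math.dist(self.slots[name], hand_pos))
```

Reaching toward a particular brush selects it by proximity, the same way you would pick a brush out of a jar at a real workbench.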


The Neuroscience Behind the Guideline
Although we tend to imagine visual perception as a linear process, it’s actually the product of two distinct but complementary systems. One is the Ventral Pathway, primarily concerned with recognizing and classifying objects, answering the question of “what” we see. The other is the Dorsal Pathway, which identifies relationships through space and movement, addressing questions of “how” and “where” to interact with objects and the environment.

Ideally, these systems work in tandem to achieve comprehensive awareness of one’s surroundings. Screen-based UIs, however, present a flat plane of dense objects and icons, which tends to favor the Ventral Pathway over the Dorsal Pathway. By arranging volumetric interface elements in true 3D space around the user, we can engage both pathways, giving the user a deeper, fuller understanding.


Further Study

Insights into the complementary nature of the two visual pathways can be found in Kravitz DJ, Saleem KS, Baker CI, Mishkin M. (2011). A new neural framework for visuospatial processing. Nature Reviews Neuroscience, 12(4), 217–230. doi: 10.1038/nrn3008


The next post in our series will take a closer look at the second design guideline: Minimize Abstractions. Subscribe to this series to be among the first to know when new guidelines are published on the blog! And download the full set of nine guidelines if you want to get rockin' and rollin' with AR development and design (see the blue box up top entitled "Get All of Meta's AR Design Guidelines").
