In a previous post, I shared some of my initial thoughts on what appears to be a new, more rigorous discipline for designing interfaces called Cognitive Engineering. Cognitive Engineering draws on many different areas of science to feed an engineering practice aimed at producing interfaces with minimal impedance mismatch between a user’s mental model and the form and function of the application.

One of these many areas of science involves something called spatial schemas. Our minds are incredibly good at dealing with the concept of space. From the moment our minds first begin processing sensory input, they are constantly constructing spatial models. These spatial schemas aid in everything from memory to locomotion to pathfinding and even to logic.

We take for granted how pervasive spatial schemas and referential spatial models are in our daily lives. For example, when we compare two abstract concepts, we often do so by gesturing with one hand and then saying, “… on the other hand, …”. Without a built-in spatial schema representing the difference between the left and right sides of our bodies, would a comparison like this come so easily?

Another fascinating example of how spatial schemas help us involves models that have been so thoroughly burned into our minds that we can compare stimuli against them before performing the relatively slow step of conscious cognition.

For example, suppose you’re a soldier (or play one in a video game) and someone near you shouts “on your 5!”. Your body will have already started the 150-degree turn necessary to face “your 5”. This happens in around 10 milliseconds, whereas it takes over 300 ms for a stimulus to make a round trip through your conscious mind. Why is this possible?

It’s because the stuff that happens unconsciously (the high-performance part of your mind, the part that has spent hundreds of millions of years evolving to help you survive long enough to continue the species) can do a rapid comparison against a retained spatial schema… that of an analog clock.

Our brains are actually quite good at determining direction (angles); even if we don’t know how or why, we can perceive subtle differences in angle, down to about 1 degree. So when someone shouts “on your 5”, the high-speed portion of your brain is already encoding the mapping from 5 on an analog clock to a 150-degree turn into signals for your muscles… and it does all of this before your conscious mind has processed the words, because from an evolutionary standpoint, if your conscious mind were responsible for mapping spatial schemas to survival, you’d be dead. It takes less than 300 ms for an animal to jump out “from your 5” and start mauling your dumb, hairless butt.
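To make that mapping concrete, here’s a minimal sketch (in TypeScript; the function name is my own invention, not anything from the post) of the arithmetic the unconscious mind appears to perform instantly: each clock position is 30 degrees, and the shorter of the two possible turns wins.

```typescript
// Map a clock position (1-12) to a signed turn angle in degrees.
// Positive = clockwise (to your right), negative = counter-clockwise.
function clockToTurnDegrees(hour: number): number {
  const clockwise = (hour % 12) * 30;                     // 12 o'clock = 0 degrees
  return clockwise <= 180 ? clockwise : clockwise - 360;  // prefer the shorter turn
}

console.log(clockToTurnDegrees(5));  //  150 -> turn right 150 degrees
console.log(clockToTurnDegrees(9));  //  -90 -> turn left 90 degrees
console.log(clockToTurnDegrees(12)); //    0 -> straight ahead
```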

So what does this have to do with interface design?

It seems obvious when you think about it, but you need something (like this informative, albeit dry, textbook) to get you thinking about it. Knowing how the brain constructs spatial models, and how those models aid memory, logic, reasoning, and comparison, will help you build interfaces that take advantage of them.

Interfaces built this way can “cheat” … if you differentiate things spatially, a user won’t have to waste valuable cognitive time processing that differentiation. For example, if a user sees “A > B” on the screen, that little blurb has to go all the way through their slow brain (cognition) before they work out that A is greater than B, at which point their mind constructs a spatial model representing the two objects.

How much work do you think your brain has to do if object A simply appears higher on the screen than object B? None. That display maps exactly onto the spatial model the brain would otherwise have constructed after the slow round trip of parsing “A > B”.
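As a rough illustration, here’s one way an interface might encode that comparison spatially instead of textually. This is a hypothetical sketch assuming a browser DOM; the helper name and element ids are mine:

```typescript
// Hypothetical sketch: position each element vertically by its value so the
// "greater than" relationship is visible before any text is read.
function placeByValue(el: HTMLElement, value: number, maxValue: number): void {
  const trackHeight = 200; // vertical pixels available for placement
  el.style.position = "absolute";
  // Larger values get a smaller `top` offset, i.e. they sit higher on screen.
  el.style.top = `${trackHeight * (1 - value / maxValue)}px`;
}

// Assumed markup: <div id="item-a">A</div> <div id="item-b">B</div>
placeByValue(document.getElementById("item-a")!, 80, 100); // A renders higher
placeByValue(document.getElementById("item-b")!, 30, 100); // B renders lower
```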

What if one object is bigger than another one, visually? What if one object appears far to the left while others appear far to the right? What if an object is rotated ever so slightly? All of these spatial representations of entities can be designed to match the spatial model we expect a human brain to build from more abstract, less informative interfaces … so why not save the user the effort and create that model ahead of time, right there in the interface?

Assign meaning to space, size, spatial relationships, directionality, curvature, velocity … all of these things can be used to create amazing interfaces, not because they will be pretty or shiny, but because they will not slow the user down and will not force their minds to do work they don’t need to be doing.
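One way to read that advice as code: treat each spatial property as a channel and bind a datum’s attributes to it. The sketch below is purely illustrative; the `Item` fields and scaling factors are assumptions of mine, not a real library:

```typescript
// Illustrative only: bind data attributes to spatial channels so relationships
// are pre-encoded in the layout rather than parsed from text.
interface Item {
  importance: number; // 0..1, 1 = most important
  recency: number;    // 0..1, 1 = newest
}

function spatialStyle(item: Item): Partial<CSSStyleDeclaration> {
  return {
    left: `${item.recency * 300}px`,                       // newer items drift right
    fontSize: `${12 + item.importance * 12}px`,            // important items are bigger
    transform: `rotate(${(1 - item.importance) * 3}deg)`,  // minor items tilt slightly
  };
}
```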

Anyway, as I said in the other blog post, I haven’t yet tried to use any of these techniques to build user interfaces, but I have seen the results of similarly engineered interfaces, and it is really powerful stuff. Hopefully reading this post has made you think a little more about the role interfaces play and how they interact with our users at a chemical, biological, and neurological level.