This week I’ve been receiving some training in what appears to be the relatively new field of Cognitive Engineering. This training comes from Brad Paley of didi.com.

Hopefully all of us computer professionals are aware of the importance of the user interface. However, many designers confuse “prettiness” with “usefulness,” and many developers think a simple grid view or spreadsheet should suffice because rows and columns are a perfectly acceptable model for the data.

I’ve known for quite some time now that spreadsheets are an evil abomination that force our brains to do things they should not have to do for eight hours a day, but I haven’t really had the tools, machinery, or vocabulary to articulate why. Thanks to this class, I believe I now have the vocabulary to articulate why one interface design is better than another.

The core of this discipline is the notion that an expert at a particular job function has a mental model of the entities and tasks involved in accomplishing that job. Our responsibility as designers using Cognitive Engineering is to extract that mental model and use techniques and findings from a half dozen fields of science completely unrelated to computers to produce a design whose user experience model has little to no impedance mismatch with that mental model.

To summarize: the user should be able to look through their user interface, and the widgets they interact with on screen should not only come as close as possible to their mental model but, perhaps more importantly, the interface should be designed in such a way that as much of the user’s processing as possible can happen pre-cognitively.

We know which tasks take the brain a long time to perform. We know which tasks increase cognitive load. We know what human cognition, the human eye, our ears, and our spatial processing system are each adept at. The key to this discipline is using all of that information to guide the design process. If we can design an interface such that a user can rapidly categorize, sort, associate, and parse most of it in the tens of milliseconds the brain spends before involving conscious thought, then users of software designed this way will be able to plow through their job tasks with insane productivity.
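To make the pre-cognitive idea concrete, here is a toy sketch of my own (not anything from the class): rather than making a user read and compare due dates in a grid, urgency is mapped onto color and size, visual variables the eye picks up in those first tens of milliseconds. The Order shape, thresholds, and colors below are all invented for illustration; the sketch happens to be in TypeScript.

```typescript
// Toy sketch: encode order urgency with pre-attentive visual variables
// (color and size) instead of forcing the user to read and compare dates.
// All names, thresholds, and colors here are invented for illustration.

interface Order {
  id: string;
  description: string;
  dueDate: Date;
}

interface VisualEncoding {
  color: string;   // hue is picked up pre-attentively
  radius: number;  // so is relative size
}

// Map "how urgent is this?" onto color and size, so scanning a wall of
// orders surfaces the urgent ones without conscious reading or sorting.
function encodeUrgency(order: Order, now: Date = new Date()): VisualEncoding {
  const msPerDay = 24 * 60 * 60 * 1000;
  const daysLeft = (order.dueDate.getTime() - now.getTime()) / msPerDay;

  if (daysLeft < 0) return { color: "#c0392b", radius: 14 }; // overdue: red, large
  if (daysLeft < 2) return { color: "#e67e22", radius: 10 }; // due soon: orange, medium
  return { color: "#7f8c8d", radius: 6 };                    // comfortable: gray, small
}

// Usage: render each order as a dot whose color and size carry the urgency,
// with the text available on demand rather than as the primary channel.
const now = new Date();
const orders: Order[] = [
  { id: "A-1", description: "Canned tuna, pallet 3",
    dueDate: new Date(now.getTime() - 1 * 24 * 60 * 60 * 1000) }, // already overdue
  { id: "A-2", description: "Shipping labels reprint",
    dueDate: new Date(now.getTime() + 7 * 24 * 60 * 60 * 1000) }, // a week out
];
for (const o of orders) {
  console.log(o.id, encodeUrgency(o, now));
}
```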

They will feel less fatigued after using software designed this way and probably won’t even know why. Because they are experts in their domain, the screen should so closely resemble their mental model of their job that they can sit down in front of the software and just work.

They won’t be thinking about clicking, mouse dragging, typing, searching, or right-clicking. They’ll be thinking about completing orders, shipping boxes, canning tuna – the software disappears in service of their job.

At least, that’s the promise of this discipline. I’ve seen much of this thinking in Cooper’s Interaction Design field, and there is a lot of overlap here. The focus of Cognitive Engineering is really on the physical and chemical mechanics of the brain’s information processing and reaction capabilities, and on using that science to inform design.

I haven’t yet used this discipline to produce a UI by extracting an expert’s mental model but, for the first time in a long time, I can’t wait to try it out and produce a design. It’s a rare thing that a few quick sessions can inspire this much interest in me, but this has, and only time and experience will tell whether it produces a better product.