UI animations in a digital product can be a double-edged sword. When applied thoughtfully, motion is a powerful tool to direct users’ attention. It can connect interactions and help weave together a single fluid experience. But excessive or unrelated animations are visually distracting, leading to confusion and increased mental effort.
An important step in the development of our design system at HubSpot was coming up with a small set of guiding principles for motion design. How should our product move? Starting with clear, focused principles gives us a framework for evaluating individual component animations in a way that scales across product teams.
A little respect
With that in mind, one of the key motion design principles we came up with in HubSpot Canvas (there are only three) relates directly to attention:
Motion in the product should respect users’ attention
Motion is not flashy or superfluous. It is used intentionally, subtly, and with thought given to the story it’s meant to tell in the context of a task. Animations respect users’ attention, and don’t steal it from their current task unless that is the explicit goal (as in an alert).
It may seem obvious. Of course too much motion is distracting. But as working designers it’s important to explore deeper reasons behind the best practices we follow. Anyone can tell you too many things bouncing around the screen will be distracting. But why does motion draw attention? And how can knowing that help us make better design decisions?
So let’s dig deeper into vision, visual attention, and how motion plays a role.
Hungry eyes
Human vision is an incredibly complex system, made up of several neural organs and a series of processing stages. There’s a lot involved with turning light into meaning. But at a high level it’s actually pretty straightforward. Visual attention, not surprisingly, all starts with your eyes.
Light enters your eye through the pupil and hits a layer of about 130 million specialized light-sensing cells on the back surface called the retina (traveling through all kinds of fluid and blood vessels along the way). There are two types of these photoreceptor cells, with names based roughly on their shape: rods and cones. At the very center of the retina is the fovea. The fovea is an important structure because it’s where light hits most directly based on where you’re looking. Remember: fovea = focus.
The differences between rods and cones, especially in how they’re distributed across the retina, are important to understanding vision and attention.
Even though there are about 20x more rods, cones are the cells dedicated to fine detail and color vision. The fovea is packed with cones. But cones are also directionally sensitive and not great in low light. It takes a lot more photons to activate a cone.
Rods, on the other hand, are directionally insensitive: they respond to light coming in at all angles. They’re also far more sensitive than cones (it takes fewer photons to activate them), and they’re much better at picking up differences in contrast.
Rods make up the majority of the retina’s photoreceptors, but they sit almost entirely outside the fovea (in the parafovea and beyond). You probably know this region as your peripheral vision. Rods in your peripheral vision play a critical role in drawing your attention to things you’re not directly looking at.
Now I’ve got you in my sights
The rest of the story of vision happens mostly in the brain. There are several stages of visual processing, with names based on the regions of the brain where they take place: the lateral geniculate nucleus (LGN) and visual cortex areas V1 through V6, for example.
Those names aren’t super important to remember for this discussion, but what is important is that these stages take time and filter as they go. The time between light entering the eye and being processed by your brain can be in the hundreds of milliseconds, and you aren’t consciously aware of what you’re looking at until the third or fourth stage.
There’s even a way we can “see” some of this delay.
Think back to a time (or try it now) when you glanced quickly at a clock with a ticking second hand. You might notice that the current second, the one right when you look at the clock, seemed to last longer than the following ones. That’s not your imagination. It’s called chronostasis and it’s a thing your brain does to compensate for delays in visual processing.
Every moment your eyes are open they’re taking in an enormous amount of information. Luckily, our brains are powerful heuristic-based input filtering machines. By the time all this processing happens, long before you’re aware of what you’re looking at, your brain has already made lots of decisions about what’s important. Conscious attention on any particular object in a scene is only possible once that object has made it through those gates.
And it probably doesn’t surprise you to learn that moving things get special treatment during this whole process.
Preattentive cues: Attentional shortcuts
Even though our brain is actively processing all the information it’s taking in through the retina, that information isn’t all treated the same. Some, like what’s currently falling on the fovea, is given more weight for deeper cognitive processing. After all, that’s the part of the scene you’re focusing on at any given moment. That’s generally what we say we’re paying attention to (visually).
On the other hand, visual information coming in through the periphery usually isn’t as important for the immediate task at hand. Your brain usually doesn’t bother surfacing any of that to your conscious attention.
That is, until it does.
When objects or events in the periphery are flagged by your brain as important enough to pay attention to, we call that attentional capture. Your eye pivots to bring that region into your foveal view, and it’s given higher weight for cognitive processing. You start paying attention to it.
So your brain is always paying a kind of “unconscious attention” to a visual region that’s larger than what you’re consciously aware of. Interestingly, the size of that region (that is, the amount of your visual field that’s available for attentional capture) can change. One name for this behavior is the zoom-lens model.
When a person is focused on a complex task, the size of the “lens” (not to be confused with the physical lens of your eye) might be small. But if they’re not as engaged, or if they’ve been trained to expect interruptions from the periphery, the lens gets bigger. Even more critical in this model: the size of the lens is inversely correlated with the efficiency of cognitive processing.
In other words, the more distracted you are by potential events diverting attention, the less cognitively efficient you are at whatever you’re focused on. Again, this may seem like common sense but at least now we have some fundamental insight on why it’s happening.
So what exactly triggers attentional capture? What are the rules your brain uses to decide which objects in a scene to pay attention to?
Preattentive attentional cues
As it turns out the whole filtering system is tuned to pay attention to a specific set of attributes in a scene. Because these attributes are used as filters before information is surfaced to conscious attention, they’re known as preattentive cues. It’s likely these will already be familiar to designers. They include qualities like orientation, size, color hue and intensity, and of course motion.
In a visual scene, strong variations in preattentive cues will always capture attention.
A classic example of preattentive cues affecting the ease of a task is counting the number of 3s in a random grid of numbers.
Presented in a single uniform color, it’s difficult and requires a lot of conscious effort; you probably have to scan all the numbers sequentially. But if we change a single preattentive attribute, hue, by rendering every 3 in red, the task becomes much easier.
A person with normal vision can count all the red 3s almost immediately. That’s because color is preattentively processed. It’s made available for attentional capture faster and without the need for slower sequential processing.
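If you want to recreate the demo yourself, here’s a minimal sketch in TypeScript against the plain DOM (the #digit-grid id and the function names are just placeholders, not from the original demo): it renders a grid of random digits, and flipping one flag changes a single preattentive attribute, hue, by coloring every 3 red.

```typescript
// Minimal sketch of the counting-3s demo (hypothetical names, plain DOM APIs).
// Rendered in one color, the 3s must be found by scanning sequentially;
// coloring them red lets the preattentive system surface them almost instantly.

function renderDigitGrid(container: HTMLElement, count = 200, highlightThrees = false): void {
  container.textContent = ""; // clear any previous grid
  for (let i = 0; i < count; i++) {
    const digit = Math.floor(Math.random() * 10);
    const span = document.createElement("span");
    span.textContent = `${digit} `;
    // A single preattentive attribute (hue) is the only thing that changes.
    span.style.color = highlightThrees && digit === 3 ? "red" : "black";
    container.appendChild(span);
  }
}

// Usage: render the hard version first, then flip one attribute.
const grid = document.querySelector<HTMLElement>("#digit-grid");
if (grid) {
  renderDigitGrid(grid);               // all black: slow, serial search
  // renderDigitGrid(grid, 200, true); // red 3s: near-instant attentional capture
}
```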
More examples of attentional capture through motion
Let’s look at a couple of examples of motion capturing attention in real-world web interfaces:
The spinners in this search filtering UI on Ford’s vehicle browser are a good example of motion used well. They help tell you the filter you just requested is being applied, and they capture attention so subtly that you might miss them the first time. But that’s not a bad thing.
A small bit of motion meant to reinforce one specific action and convey more information about the task that was just performed can be very effective. The placement here, appearing right where the user just clicked, is perfect to complement that task rather than distract from it.
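As a rough sketch of that pattern (not Ford’s actual code; the helper and the .inline-spinner class are hypothetical, with the spinner’s styling assumed to live in CSS), a small spinner can be inserted right next to the control the user clicked and animated with the standard Web Animations API:

```typescript
// Hypothetical sketch: show a quiet spinner beside the filter control the user
// just clicked, so the motion reinforces that action instead of pulling
// attention elsewhere. Uses the standard Web Animations API (element.animate).

function showInlineSpinner(nearElement: HTMLElement): () => void {
  const spinner = document.createElement("span");
  spinner.className = "inline-spinner"; // assumed to be styled as a small circle in CSS
  nearElement.insertAdjacentElement("afterend", spinner);

  // A subtle, continuous rotation: no scaling, no color flashes.
  const animation = spinner.animate(
    [{ transform: "rotate(0deg)" }, { transform: "rotate(360deg)" }],
    { duration: 800, iterations: Infinity, easing: "linear" }
  );

  // Return a cleanup function to call once the filtered results arrive.
  return () => {
    animation.cancel();
    spinner.remove();
  };
}

// Usage (hypothetical filter handler):
// const done = showInlineSpinner(filterCheckboxLabel);
// await applyFilter(filter);
// done();
```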
The animated transitions applied to MailChimp’s primary dropdown menu are quite well executed. They are snappy and subtle, but still complex enough to effectively convey spatial orientation (especially for the submenu screens). The slight bounce on the submenu dismissal is a great example of conveying their brand’s style in the motion of the product while not distracting from the task. Small details like this are where effective motion design can really shine.
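Here’s an illustrative sketch of how that kind of dismissal motion might be expressed with the Web Animations API. The keyframes, duration, and easing are guesses standing in for MailChimp’s actual implementation, with a small pull-back playing the role of the bounce.

```typescript
// Illustrative only: a submenu panel that pulls back slightly, then slides out.
// The exact keyframes, duration, and easing are guesses; the point is that the
// "bounce" is small and quick enough to add personality without stealing focus.

function dismissSubmenu(submenu: HTMLElement): void {
  const animation = submenu.animate(
    [
      { transform: "translateX(0)" },
      { transform: "translateX(-6px)", offset: 0.25 }, // tiny pull-back (the bounce)
      { transform: "translateX(110%)" },               // slide off to the right
    ],
    { duration: 250, easing: "ease-in", fill: "forwards" }
  );

  // Hide the element for real once the motion finishes.
  animation.finished.then(() => submenu.setAttribute("hidden", ""));
}
```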
Bringing it back to principles
Effectively applying motion in a digital product is largely about considering attention and your user’s goals. When a user is engaged in a task, even small visual distractions can pile up and increase the overall mental effort required to get through it.
The flip side is that more intentionally designed motion can be a powerful tool to guide users’ attention exactly where we want it. Motion directly related to the current task is a strong reinforcement mechanism. If objects or events are closely related to each other, using animation can help convey that relationship. Motion is also powerful in directing attention away from the current task in those (rare!) times when it’s necessary. Alerting and notification systems are great examples of this.
So take an inventory of the motion in your products. Get rid of animations that are excessive or that distract from the task at hand, and make sure you understand why they were distracting. For whatever’s left, to quote HubSpot’s third principle for motion design: when in doubt, make it faster.
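One lightweight way to bake that last principle into a codebase, sketched here with hypothetical token names and values rather than HubSpot’s actual ones, is to centralize a few short durations and make the fastest one the default:

```typescript
// Hypothetical motion tokens: short durations by default, so the easy path for
// any new animation is also the fast one.

const MOTION = {
  fast: 100,    // ms: hovers, toggles, small state changes
  normal: 150,  // ms: most component transitions
  slow: 250,    // ms: larger spatial transitions; reserve for rare cases
} as const;

// A generic helper that defaults to the fastest duration unless a caller has a
// good reason to slow things down.
function fadeIn(el: HTMLElement, duration: number = MOTION.fast): Animation {
  return el.animate([{ opacity: 0 }, { opacity: 1 }], {
    duration,
    easing: "ease-out",
    fill: "forwards",
  });
}
```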
Further reading
Both Information Visualization: Perception for Design by Colin Ware and Information Dashboard Design by Stephen Few discuss design considerations for preattentive processing. The former goes into much more depth on visual perception.
Two newsletters I highly recommend with consistently amazing resources on motion design and animation for the web are Val Head’s UI Animation Newsletter and Rachel Nabors’ Web Animation Weekly.
I’m a product designer at HubSpot with a deep interest in interaction design and perception. I'm passionate about the application of technology and the potential of thoughtful design to improve everyday life.