At The Mobile Majority, we help brands gain exposure through mobile advertising.
The mobile advertising industry is growing fast. Running a single mobile campaign today can have you working with up to 12 different vendors, and managing all of them is a time-consuming, exhausting process. In short, it takes a lot of pieces to make one campaign go.
Fortunately, we’ve been able to build a single platform that accomplishes all of this. To that end, our platform needs to be robust while remaining simple and intuitive to use. Ensuring this keeps our product, UX, and engineering teams on their toes as we figure out how to integrate new features seamlessly.
The problem
During our user research, we discovered that we needed to give our users a simple way to generate highly customizable reports for forecasting mobile ad campaigns.
In other words, we needed a tool that would allow our users to determine how much it would cost to run a mobile ad campaign with specific targeting parameters.
“Take time with your team to find every conceivable way you can design your product.”
For example, if a brand like Nike spent $5,000 on a mobile campaign targeting women who live in Washington and are interested in a healthy lifestyle, how many impressions would the brand get over a one-month period? 500,000? A million?
This type of tool is crucial for generating business: our sales team uses it to tell potential clients what they can expect from a campaign.
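At its core, this kind of forecast is simple arithmetic on the campaign budget and the effective CPM (cost per 1,000 impressions). The sketch below illustrates the idea only; the base CPM and the targeting multipliers are made-up numbers for illustration, not actual rates or the real forecasting model.

```python
def forecast_impressions(budget_usd, base_cpm, targeting_multipliers=()):
    """Estimate impressions for a given budget and CPM.

    CPM is the cost per 1,000 impressions, so:
        impressions = budget / effective_cpm * 1000

    Each targeting parameter (gender, geo, interest, ...) narrows the
    audience, which we model here as a multiplier on the base CPM.
    All numbers below are hypothetical.
    """
    effective_cpm = base_cpm
    for multiplier in targeting_multipliers:
        effective_cpm *= multiplier
    return int(budget_usd / effective_cpm * 1000)


# Nike example: $5,000 budget at an assumed $4.00 base CPM, with a
# hypothetical 25% premium for layering on targeting parameters.
untargeted = forecast_impressions(5000, 4.00)
targeted = forecast_impressions(5000, 4.00, targeting_multipliers=(1.25,))
```

With these assumed numbers, the untargeted estimate comes out to 1.25 million impressions and the targeted one to 1 million, which is exactly the kind of answer the sales team needs to hand a prospective client.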
The process
To shake things up and try to find a solution, we decided to run a design sprint inspired by the Google Ventures Design Sprint (a 5-day process you can read about here).
A design sprint is a compressed process of exploring business strategy, innovation, and behavior science by building and testing prototypes with real users. So instead of waiting months to build and release a minimum viable product (MVP), companies can get real, immediate feedback through a prototype in just days.
We couldn’t allocate an entire 5 days to this, so we fit it into 3 days, for only 2 hours each day. Our head of product, an intern, 2 engineers, and an ad operations specialist joined me for this 3-day design sprint.
Our agenda:
- Day 1: Digging into the problem
- Day 2: Explore, explore, explore
- Day 3: User study
Day 1: Digging into the problem
On the first day, we needed to identify our users’ needs and come to a common understanding.
These exercises took us only an hour; I kept things on track by making sure we didn’t spend too much time on any one of them:
- First, our Head of Product, Marcos Escalante, briefed us on the business opportunity
- Then, our Ad Ops person, Alex Steady, and intern Chris Cohen walked us through competitor tools
- Finally, we completed team interviews to understand the context of this tool we needed to create
As we listened, we wrote questions on Post-it notes. These questions were “How Might We” questions, which are questions specifically phrased to inspire creativity rather than inhibit it. We’d use them for a later exercise.
“Use ‘How Might We’ questions to inspire creativity rather than inhibit it.”
We finished the day by drawing out a complete storyboard and determining exactly what part of the story to tackle. Looking back, we should have chosen a smaller part to focus on.
We also scheduled 2 users to test on Thursday (just 2 days away), which was crazy because we didn’t have any designs yet. (Five user tests are ideal, but we had to cut that number based on our time and resources.)
Day 2: Explore, explore, explore
This was the fun day. It took 3 hours, and I was exhausted by the end. The purpose of Day 2 was to find every conceivable way we could design our product from all the different perspectives involved in the design sprint.
We also emphasized our desire to test out-of-the-box ideas, as this design sprint was the best opportunity to do so. We didn’t necessarily want to test ideas we knew would work—we wanted to find something innovative that could really help our users.
To get as many new concepts on the table as possible, we did all of the following on Day 2 (just like Google!):
- We reviewed the previous day’s work as a refresher
- We each created a mind map to assist us in our design explorations
- We did the Crazy Eights drawing exercise (twice)
- We each created our own storyboard on paper
- We silently critiqued each design, then critiqued each one out loud
After these exercises, it was time to decide on which design to prototype for user testing. After reviewing the critiques and having a short discussion, we knew which concept to prototype.
I spent the following afternoon, late evening, and next morning creating a prototype with Sketch and InVision. Thanks to our pattern library, coming up with new components and styles was easy.
Day 3: User study
Well, this day started off badly. Our prototype got erased the night before, so I had to start over at 2am and finish on the morning of user testing. I know; I also wonder how people can erase files on computers with auto-save and redundancies. But trust me, it can happen.
“Don’t test ideas that you know are going to work.”
After staying up late and hustling to get the prototype done (again), the internet in our office went down on the morning of the user study sessions, which actually gave us a little more time to refine the prototype.
Startup life.
The next day, we completed our user testing remotely. Users called in from San Francisco, Ohio, and Los Angeles, and we recorded the audio and the users’ screens via Zoom. About 5 observers joined each call and took notes as I led the session; afterwards, they sent me their notes and feedback.
Conclusions
After reviewing our consolidated notes, here’s what we learned:
- Next time we should focus on a smaller section of the user story. Based on our bandwidth and resources, this will help us get more done in less time.
- Our users understood the flow of our concept very well. This means we can keep the overall flow and focus on other parts, like how the interactions of certain components work.
- Most of the time, our users knew exactly what kind of report they wanted and exactly what they wanted to target. This means we don’t need to give them a browsing experience; we need to give them a searching experience.
- Our users trusted our smart suggestions. We discovered this process is not an exact science; users have to use their best judgment most of the time. If we can build smart suggestions that are genuinely helpful, we’ll be onto something big.
“Focus on a smaller section of the user story.”
Based on these findings, we decided to explore a forecasting tool built around a smart search feature with smart suggestions. This is different from the average report builder made up of checkboxes and form fields.
We know that what we came up with has potential: users understood the main flow and were able to accomplish their tasks during the user study. If we decide to implement such a tool in our current platform, another design sprint might give us that last bit of feedback to determine whether to build it or set it aside.
This article was originally published on The Mobile Majority.
Michael is Director of User Experience at Lucidity, which was awarded Blockchain Startup of 2018 at The Blocks Awards and Best Marketing Analytics Attribution Platform at The DigiDay Tech Awards. He is an award-winning user experience designer, having founded and advised several companies ranging from the #1 paid social networking app in the iTunes store to the most used hospice software in the country. He loves turning ideas into tangible working products and believes in a holistic user experience that also includes elements outside the confines of the digital screen. He has helped companies like Grindr, Honda, EXOS, MC & Saatchi, Daily Associates, Ohio State University, HCHB, DivX, and Entertainment Arts. Michael has taught at UCLA and Cal State Long Beach and received his BA from UC Irvine.