Timeline: May 2024 (4 weeks)

My role: I served as both team lead and individual contributor (IC) on a remote design team of two

Scope: Persona Creation, High-Fidelity Prototyping, Usability Testing, Visual Design

COMPANY OVERVIEW

Mindtech Guru (MTG) assists employees in finding the right mental health solutions by providing a specialized questionnaire and a curated collection of tools tailored to their specific mental health and wellness challenges. It also offers information on whether these solutions can be covered by health insurance benefits.

PROJECT IMPACT

  1. Created proto-personas to increase empathy and enable targeted design solutions in future iterations

  2. Delivered design solutions for key screens in the questionnaire flow that needed to be optimized for mobile before beta launch and testing

  3. Conducted initial user testing before the beta launch

1. Laying the Groundwork

Before we began, the client sent us background material on the project, along with their desired goals. During the kick-off meeting, we confirmed alignment between the design team and the client on the project scope and deliverables.

2. Diving into Research

To better understand the product and market landscape, we reviewed the existing research, which included some competitive analysis, as well as a design audit of the existing questionnaire flow by the previous design team.

🔑 However, I realized there was a gap in the documentation: there was no record of how UI/UX decisions had been made. I thought it might be useful to develop proto-personas as a tangible reference point for stakeholders when making future design decisions. The client agreed, and based on client input and additional research, we developed four proto-personas with an emphasis on their stories (as written in the scenario section):

Proto-personas

Next, we divided up the specific UI elements and corresponding screens that needed to be designed and optimized for the mobile web version of the questionnaire flow. I focused on the impact scale and progress tracker, conducting research and ideating potential solutions, while my partner worked on the pop-ups and the results screen.

One challenge was that we had to work from screenshots, since the questionnaire flow was only available on the client’s local server. This meant we had to be meticulous in asking follow-up questions about details such as the UI of interactive states.

UI Element 1: Impact Scale


UI Element 2: Progress Tracker

3. Crafting the Experience

After conferring with the client, I finalized the relevant UI elements and screens. One of the driving factors behind the design decisions was simplicity, since the client’s priority was preparing the questionnaire flow for the beta launch as soon as possible. As we built trust with the client, they also started asking for design feedback and updates on several other key screens, which I was able to provide.

Updated impact scale screen, also showing the progress tracker in context

Other design updates: Updated summary of preferences with a visual design more consistent with the other screens in the flow

Other design updates: The new selected state (purple border) avoids the awkward-looking white square that surrounded the illustration in the previous selected state (purple highlight)

Style Guide/UI Element Library

I also realized that the previous design team had not built a style guide, so as I progressed through the high-fidelity designs, I created a simple one with a UI element library that the client could share with future collaborators.
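Since the style guide itself lived in Figma, here is a rough sketch of the kind of tokens a UI element library like this captures, expressed in code for illustration; every name and value below is a hypothetical stand-in, not MTG’s actual palette:

```typescript
// Hypothetical design tokens mirroring a simple style guide.
// All names and values are illustrative, not MTG's actual system.
export const tokens = {
  color: {
    primary: "#6B4EFF",      // purple used for selected states and CTAs
    surface: "#FFFFFF",
    textPrimary: "#1F1F1F",
    textSecondary: "#6E6E6E",
  },
  spacing: { xs: 4, sm: 8, md: 16, lg: 24 }, // px
  radius: { card: 12, button: 8 },           // px
  font: {
    family: "'Inter', sans-serif",
    size: { body: 16, heading: 24 },         // px
  },
} as const;

// Example usage: deriving a reusable button style from the tokens,
// so future collaborators stay consistent with the style guide.
export const primaryButtonStyle = {
  backgroundColor: tokens.color.primary,
  color: tokens.color.surface,
  borderRadius: `${tokens.radius.button}px`,
  padding: `${tokens.spacing.sm}px ${tokens.spacing.md}px`,
  fontFamily: tokens.font.family,
  fontSize: `${tokens.font.size.body}px`,
};
```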

High-Fidelity Design (Desktop Web)

The client also wanted Figma files for the desktop version, which we created. For some screens, the client asked us to integrate the designs we had optimized and updated for mobile back into the desktop web version.

High-Fidelity Design (Mobile Web)

4. Gathering User Insights

Usability Testing

Next, we recruited five participants for remote usability testing of the questionnaire flow, aiming to identify any issues in the primary red route and uncover other potential opportunities for improvement.

The questions that guided us were:

1. Could users successfully complete the questionnaire? (i.e. were they empowered to achieve their high-level goal?)

2. What did they think of the content of the questionnaire?

3. What did they think of the results page? Did they have enough information to select a relevant solution?

4. What general opinions did they have on the UI? (desirability)

At a high level, we discovered that:

🤔   2 out of 5 users needed help finding the navigation buttons to move forward on the second screen of the questionnaire

🤔   Users thought the questionnaire was thorough but struggled to understand a couple of the questions

😊   Users thought the results page was comprehensive and appreciated the information on insurance coverage and cost

😊   Users liked that the health-focused app had a casual and clear visual feel that wasn’t clinical

For our first 🤔 finding, where users struggled to find the navigation buttons on the second screen (they were located below the fold), we recommended that the client monitor the issue during the beta launch. One solution we had considered was making the navigation buttons sticky at the bottom of the viewport, but since it wasn’t a common UI pattern for mobile web, we decided it would be wisest for the client to gather more data before implementing a potential change.
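For illustration, here is a minimal sketch of the sticky-button pattern we considered and deferred, written as a React component in TypeScript; the component name, props, and styles are hypothetical stand-ins, not the client’s implementation:

```tsx
import React from "react";

// Hypothetical sketch of the sticky navigation approach we considered
// but deferred pending beta data. Names and styles are illustrative.
const stickyBarStyle: React.CSSProperties = {
  position: "sticky", // pins the bar to the bottom of the viewport
  bottom: 0,          // while its parent content is in view
  display: "flex",
  justifyContent: "space-between",
  padding: "12px 16px",
  background: "#fff",
  boxShadow: "0 -2px 8px rgba(0, 0, 0, 0.08)", // separates bar from content
};

type Props = {
  onBack: () => void;
  onNext: () => void;
  nextDisabled?: boolean;
};

// Renders Back/Next controls that stay visible even when a long
// question pushes the buttons' natural position below the fold.
export function QuestionnaireNav({ onBack, onNext, nextDisabled }: Props) {
  return (
    <nav style={stickyBarStyle} aria-label="Questionnaire navigation">
      <button onClick={onBack}>Back</button>
      <button onClick={onNext} disabled={nextDisabled}>
        Next
      </button>
    </nav>
  );
}
```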

For our second 🤔 finding, where users struggled to comprehend some of the questions, we recommended specific wording changes, such as updating “Are you interested in any of these specialized paths?” to “Do you want to prioritize solutions that target/focus on…”, where the answer options were Women, BIPOC, LGBTQ+, and None.

We also supported the client in preparing for the next phase of the project (beta launch and testing) by presenting our findings and recommendations. A couple of standout recommendations are noted below.

Recommendation 1

While our prototype of the questionnaire flow wasn’t set up to test the 40+ options within the four parent categories, we had enough user feedback to suspect that finding a desired option among them might pose a challenge. We recommended that the client conduct a card sorting exercise if users struggled to find their desired option in the beta launch.

Recommendation 2

On the results screen (designed by my partner), users commented that:

  1. They wanted the ability to identify what type of tool (e.g. podcast, app) each result was, as well as to filter the results by tool type

  2. They weren’t sure how to interpret cost (was it per subscription? session? something else?)

  3. They weren’t sure what would show up if they selected “Click to Find Your Insurance”

We recommended:

  1. Adding a tag under “Features” with the tool type, along with a filter by tool type

  2. Clarifying that the cost would be per month (a data point already available on the back end)

  3. Updating the language to “Check If Insurance Coverage is Available” for clarity

5. Reflecting on the Journey

This project gave me an opportunity to leverage my skills in client communication and project management, and to practice adaptability: we had to adjust from a team of three to a two-person design duo when a member dropped out unexpectedly 20% of the way through the project.

This experience also pushed me to hone my skills in explaining the value of the design process, as well as the rationale behind every design decision, in order to build trust with the client. I felt this was achieved when the client became open to expanding the initial scope of the project from designing a couple of screens to creating prototypes for the entire questionnaire flow for both mobile and desktop web, conducting usability testing, and even asking for suggestions on how to adapt the design for tablet screens.

Many thanks to MindTech Guru for trusting us with this process and to my great design partner, Katie Moeller.