This “case study” is a little different

Being the first product designer on a team affords a unique opportunity to do some broad system-level thinking alongside specific product work.

So, instead of focusing on a single project with a nicely defined goal, this is an overview of my tenure at this early-stage startup, where the challenge was figuring out what to work on, as much as it was shipping quality experiences.

It’s broken out into three main sections—each its own “act” within the larger story.


01

Intro

Company & Product Overview

Fresh off raising a Series A, this ed-tech startup was quickly hiring to grow its nascent platform. This is the story of scrappy design at a fast-growth startup just hitting its stride.

I was the first designer on a team of about 30. Hired initially as a part-time UX consultant, I transitioned to Head of Design to help them:

  • introduce a design system and update existing product UI
  • define design and UX processes and hire the first product design team
  • research, design, and launch dozens of new product features
  • refine product marketing and launch a new website

During the ~20 months I was there, the whole team’s hard work enabled AdmitHub to expand quickly, and it paid off:

+5x growth in learners served year-over-year, with double the bot interactions per user

+2x new institutional customers added year-over-year, with avg. 3x end user growth

$19M series B raised during early uncertainty of the pandemic

More importantly, the product helped millions of students navigate their college journey: applying to school and for financial aid, and thriving while enrolled with customized assistance.


AdmitHub in a nutshell

The SaaS platform combines a helpful and customizable chatbot with outbound behavioral-science-based campaigns, which educational institutions use to "nudge" students to more successful outcomes.

There are two main types of users:

Students are the end user, but they might not know it. They get messages via SMS or their school’s website, so we call this a “No UI” solution. Though they have to opt in, they generally just know they’re using a service provided by their school.

School admins use the web-based suite you mainly see in this case study to author campaigns that are sent to students and maintain the Knowledge Base that the bot uses to respond to students.

more about the platform
Above: Overview slide from fundraising deck

This slide from a fundraising deck I designed is a good representation of the platform. Character illustrations by the talented Gabi Homonoff (lightly art-directed by me) as part of our marketing re-brand effort.


Role & team

I was the company's sole designer for about 6 months, working as a consultant for about 20 hours per week while I had other clients. I transitioned to Head of Design and hired our first full-time Product Designer, Roya Rakhshan, soon after. She reported to me and we collaborated on everything thereafter.

Product design work isn't done in a vacuum, of course. In addition to Roya, I collaborated closely with the product, engineering, AI, and research teams—especially Becky Sacks, VP Product, and Caroline Alexander, Product Manager, who were there from the start.

Though all of the projects were highly collaborative, for all selected work here, I was the lead product designer and researcher.


02

Laying the UX Foundation:
Design Systems & Process

Early teams want to "ship, ship, ship". It might not feel like you have the time for these two things at the start, but in my experience, taking the time for them actually helps you ship faster, and pays dividends for future designers:

  1. Empower the whole company to work with you by establishing the role of design.
  2. Document an early design system of patterns and components for re-use.

*Note: this is presented linearly here, but in practice this work happens in a cycle alongside feature projects.


Establish the role of design

On early teams, it's important for you to learn about a company, but it's also important for them to learn about you. And if you're the first designer on the team, it's important that they understand what you do and how you can help them. If you're an early design leader, you have a unique opportunity—and perhaps responsibility—to define the role of design generally.

Educate on the myriad roles of design

I always advocate for giving a quick presentation at a company all-hands, or at the very least sending an intro email with links to shared internal documentation about design's role on the team with the goal of inviting the rest of the company to be part of the design process.

Slides from my "Design + Research" talk

This presentation is an overview of the many functions that design and UX research can serve, given at an all-hands meeting to get the whole team on the same page.

slides from my design+research talk

Recruiting internal champions

Who perked up at your presentation? You'll quickly find the folks that are interested in helping to deliver a better experience for your customers, and these folks—regardless of their role or team—can be your internal champions for user research and delivering great experiences.

Find them, and give them an easy way to find you. Then, make them part of the process—officially or unofficially—and recognize their contributions.

At AdmitHub, this took a few forms:

  • I ended up taking on some "UX interns" who took time away from their actual jobs to help out with research and sit in on user interviews. This really helped me on a resource-constrained design team, and they learned a lot!

  • I also regularly scheduled time with a handful of UX enthusiasts across the org, from education specialists to partner success reps. These informal conversations helped inform my product thinking, and helped me communicate back the progress the team was making on their feedback.

None of these takes much time or has to feel like a huge lift—but they can really pay off in furthering the impact of design. Plus, taking the time to teach people even helped launch a few budding UX research careers!


Building a design system as you go

I'm always thinking about design as a system—how does this current decision fit into the larger whole? If you make note of these things from the get-go, it's actually pretty easy to start to flesh out a design system or pattern/component library to draw from.

A system-wide UI “refresh” to start

Sometimes, a fresh coat of paint alone can make a huge impact on a product. The "aesthetic-usability effect" is a real phenomenon, where better-looking interfaces are perceived as working better, all else being equal.

If you've done the intro prioritization work mentioned in part 01, then getting this type of work on the roadmap should be easier.

This can seem like a massive undertaking, but sometimes, as was the case with AdmitHub, the changes can be evolutionary instead of revolutionary.

before the UI refresh
Screenshot before refresh

Product screenshot (with blurred personal info) of the conversations view before the UI refresh.

Most of the information was there; it just needed a little information hierarchy and UI polish to get it into shape.

after the UI refresh
Designs post UI refresh

Mock of the refreshed version that launched, making use of new design system components.

Since there was front-end refactoring to do, we took the opportunity to add the new app-wide sidebar convention that made navigating between views—previously hidden in a “more” menu—much more prominent.

A scrappy design system

Even if you're the solo designer, there are huge benefits to documenting your work. You make it easier for engineering and product teams to reference the latest designs, and if you're sharing your progress, you sort of force yourself into being more organized with versioning, etc.

So, just start documenting—it’ll lead to a proper design system faster than keeping it all in your head!

Design system in Figma

Screenshot of the design system file in Figma, used to create a shared component library for the design team.

The latest approved version of designs makes it onto this page, which powers the prototype shared with the product team (below), while new design ideas, commentary, and refinements happen on other pages hidden to the larger team.

screenshot of design system in Figma
Some advice for early-stage product designers
  • It's okay for your file to be messy and disorganized—things will change rapidly, just get it all in one place.
  • Don't over-plan a "system" naming convention (e.g. atoms, molecules, etc.). You'll get hung up on terms that don't matter yet. Just group them logically—it will change.
  • Do start component and pattern naming that is shared with engineering so you have a single vocabulary for referencing the parts of your experience. (But don't be precious about the actual nouns.) This means working closely with front-end engineers to ensure the system works for them.
  • Figma team libraries and prototypes are a real game-changer. By defining your colors, fonts, spacing, etc. up front, you force the early design team to make decisions about introducing new variables—that alone is helpful (see the token sketch after this list). By creating shared components that you just drag onto your artboard, you accelerate the process, and again force consideration of pattern proliferation.
  • (Really, Figma is awesome, you should use it if you’re making a design system.)
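
To make the "define your variables up front" advice concrete, here's a minimal sketch of what those tokens can look like once they reach the front end, written as CSS custom properties. Every name and value below is hypothetical, not AdmitHub's actual system:

```css
/* Design tokens declared once at the root (illustrative values only). */
:root {
  /* Color */
  --color-brand: #3d5afe;
  --color-text: #1f2933;
  --color-surface: #ffffff;

  /* Type */
  --font-family-base: "Inter", sans-serif;
  --font-size-body: 1rem;

  /* Spacing scale: introducing a new step becomes a deliberate decision */
  --space-1: 4px;
  --space-2: 8px;
  --space-3: 16px;
}
```

The specific values don't matter; the point is that adding a new color or spacing step now means touching this one file, which keeps the palette honest and mirrors the shared styles in your Figma library.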

Below: some of the pages from our early design system.

detail snapshot of design system pages
A note on designers and coding

Do you, or does somebody on your design team, code? Or conversely, is there a developer who loves design? Super! Find the combination that works for your team and collaborate closely to implement components to the initial design spec. You can save a ton of time by not statically designing all of the transitions, hover states, responsive design breakpoints, etc., and instead working closely with your trusted partner to build the coded components in a way that reflects all of your design intent.

At AdmitHub, we put together a small working group to accomplish this, and they set me up with an environment so I could actually finalize much of the tricky CSS myself, once they gave me an established baseline. This was the fastest for us at the time, but every team is different—the important part is to start documenting and seek out enthusiastic engineering partners early.
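
As an illustration of what that working group enabled, here's a rough sketch of a coded component built on the hypothetical tokens from the design system sketch above. The hover, focus, transition, and breakpoint behavior lives once in the code rather than in dozens of static mocks:

```css
/* A hypothetical primary button; names and values are illustrative. */
.button-primary {
  font-family: var(--font-family-base);
  font-size: var(--font-size-body);
  color: var(--color-surface);
  background: var(--color-brand);
  padding: var(--space-2) var(--space-3);
  border: none;
  border-radius: 4px;
  transition: background 150ms ease, box-shadow 150ms ease;
}

/* Interaction states defined once, in code, per the design intent */
.button-primary:hover,
.button-primary:focus-visible {
  box-shadow: 0 1px 3px rgba(0, 0, 0, 0.25);
}

@media (max-width: 600px) {
  /* One place to decide how the component behaves on small screens */
  .button-primary {
    width: 100%;
  }
}
```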

Design system website

Below: the clickable version generated by the file above, seen in Chrome.

The engineering and product teams can reference this link right in their browser without needing to learn new software. Here, they can see the latest design specs without the cruft of the design team's commentary and rough work.

Figma makes the “design system website” super easy. Highly recommended.

screenshot of design system website in browser

Foundation established, tools in the belt. Now we're ready to use the tools and plan we've created to build new features.


03

Product Design In Depth:
Iteration, Prototyping & Testing

How often do things go exactly to plan? I suppose I could write about a project tied up neatly with a bow that shows a flawless process with perfect metrics—but that’s not always how things play out, especially on early-stage teams.

The reality is more interesting anyway. I worked on dozens of different product features while at AdmitHub, but selected the following few to show how every project—even those that don't work out—is an opportunity to learn, and to delight customers.

Hey! Please note that some content below—especially user insights—is intentionally vague, and some images have been modified to protect proprietary information. I don't think this detracts from the story about the process, and it protects this company's IP. You'd want me to do the same for you.


Project

The Bot Trainer

A failed attempt to turn an internal tool into a customer feature still yields benefits

The first project that I was hired to work on never actually came to fruition—at least not in the form initially envisioned. Though it didn't launch, I think what follows is a success story about working on a truly "agile" team. Read on to see how this project’s research informed the roadmap and subsequent projects to come.

Problems


For the business: to scale, we needed to reduce the internal person-hours per end user required to train the AI model.


For customers: some customers are frustrated by a lack of control over their bot's logic, which they assume leads to a higher rate of incorrect responses. (Because we don't expose this information to users, this looks like the classic "black box of AI" problem—"why did the bot respond the way it did?")

Assumptions

We thought we could address these two issues with a single tool, designed to:

  1. reduce internal team hours per end user by leveraging partners’ input to add bot training data
  2. give partners the control they crave over tuning bot responses
  3. bonus: deprecate internal tool in favor of a single interface shared with partners

Discovery

In my experience, it's a good idea to have a shared visual frame of reference for complex products. A simple visual aids future design conversations and serves as a learning tool for the team. I'm often surprised when it also reveals discrepancies between teams in their understanding of the existing product.

This is especially true when multiple teams are involved, and more so when AI is part of it. So, I spent lots of time with our AI and engineering teams to make a visual model of how the bot worked so the whole product team had a shared understanding of the status quo, and where designs would alter it.

Workflow Chart

Below: A decision tree on how a user might "train" the bot. Some details intentionally blurred.

bot training decision tree

User Interviews

As part of my initial discovery (see part 01, above), I had already spent time shadowing our internal training team and had exploratory conversations with users. Now it was time for more pointed interviews. As our questioning got more specific, we segmented users by how they utilized the existing tools.

We were able to identify a segment of users for whom changes to this particular tool would likely make the highest impact. These “power users” became our target audience.


Interviews revealed how different sets of users had created their own teams, workflows, and external processes to track updates they needed to make to their bot’s logic, either (1) elsewhere in our platform, or (2) by reporting them to our internal team.


Some customers would be interested in a service to keep their bot's knowledge base up-to-date for them.

User Interview Summary

Below: research output is always shared across the team in a project folder everyone can reference and link to. This example is a re-creation of the research summary with generalized findings for this public case study.

user summary interview doc

Solution Concept

We thought we could give the Power Users a tool right in context of the conversation view that would allow them to update bot behavior immediately, instead of having to track updates externally and apply them later.

As a broad overview, this tool needed to allow users to:

  • update bot responses (the actual message text)
  • identify why a given bot response was incorrect
  • add additional training data to help improve question identification—especially if the bot “couldn't understand” the question

All of the above may sound simple enough, but when you're using the data to update a natural-language-processing (NLP) model, the information required is nuanced and must be accurate in order to improve the model. Our internal team was trained on collecting this data. Could end users correctly tag the data without training?

The workflow we identified earlier had all of the necessary steps, but an intuitive UX that set up guardrails was needed to integrate it into the product, so we envisioned a step-by-step “wizard” type of interaction.

Wireframes

I started by putting together some low-fidelity wireframes in order to work out the user flow for conceptual discussions across the product team and feasibility review with the engineering team.

Low-fidelity wireframes

Below: Simple boxes that suggest or approximate functionality help internal teams discuss concepts early in feature development without getting bogged down by differences in aesthetic preferences.

low fidelity wireframes
Mid-fidelity wireframes

Below: after a few rounds of quick input from the product, engineering, and AI teams, we settled on a concept with enough detail to turn into a presentation for users.

mid fidelity wireframes

Gut-check with users

In order to ensure our concept made sense to users, I went back to the target user group—Power Users—as well as our internal team, to review concepts.

Power Users were universally excited about the concept, expressing enthusiasm about the additional control that they’d get to customize their bots. Internal team users agreed that the workflow captured their current tasks correctly. So, I continued on the path towards fleshing out the wireframes into full fidelity designs.

In hindsight, this was incomplete information, as you'll see below. I probably should have created a prototype at this stage, or tested at even lower fidelity. That said, it didn't take too much effort to turn wireframes into high-fidelity designs, and sometimes the high-fi versions give you more realistic feedback.

UI Design

If you've structured your wireframes the right way, then you're a long way towards your final designs already. I start by fleshing out a finalized design for a single instance of the screen, applying assets from my library, and then getting feedback from the product team. After the team agrees on that one screen, it's time to apply those styles to all of your wireframes.

During this phase I work closely with engineers and PMs to ensure we address all use cases, in order to turn the designs into a final spec for development.

High-fidelity designs and Figma presentation artboard

Below: the annotated Figma artboard helps the internal team have asynchronous reviews, and everyone can utilize Figma comments to take part in shaping the UX.

high fidelity designs and Figma artboard

The image below shows a selection of frames from the high-fidelity design at this stage, which were presented to users in a clickable prototype. You can see the “thumbs up/down” icons that open the “bot trainer panel” concept as an overlay, as well as some screens that walk users through the process.

high-fidelity designs in prototype phase

Prototype Testing

Once the designs are close to complete, I turn them into a clickable prototype using Figma. The logic for this whole thing was very complex, and I wanted to ensure that users would be able to understand what was happening and where they could enter and exit the process. So, I went back to the same users I'd been speaking with, plus our internal team, to test the prototype as I watched.

Figma prototype layout

Below: a look at the layout of the prototype in Figma, and screenshot of the final clickable prototype in a browser. This was shared with users so they could navigate themselves while narrating what they thought about it.

figma artboard for prototype

The results were pretty surprising: the power users—previously the most enthusiastic bunch—said they wouldn’t use it!

This is why prototyping is so important—despite the strongly desired new functionality, some problematic UX would have hampered or even prevented adoption among the target users had we launched with this version.

Interestingly, this wasn't an issue of confusing usability—all of the testers could complete all of the tasks and understood the workflow; they just thought it took too long to complete. Couldn't we just take some steps out?

In other words, users had assumed that training the AI would be much simpler than it actually is. Their mental model was far more simplistic than the reality of what was required.

Given this new feedback, I went back to brainstorm with the team.

User Testing Results Summary

Below: surprising results from the testers documented in the project's shared research doc.

user testing results research summary

Back to the drawing board?

We wondered if there was a way to streamline the UX to remove some steps or checks. Some of the testers even had ideas for things we might be able to remove.

Unfortunately, the answer was no—the team wasn't comfortable removing any of the steps because they were designed as guardrails to minimize human error and ensure proper data collection.

At an impasse?

With users asking for a simpler experience, and technical requirements that must be maintained—what can we do?

While nothing could be removed, I thought there were probably UX interventions to make it seem faster by breaking actions into different workflows.

I reviewed notes and recordings from user interviews and testing sessions to dig deeper, looking for specific points of frustration or delight. Interestingly, all of the testers really appreciated a detail at the very beginning of the prototype flows. Initially I had assumed that users were commenting on this particular feature out of excitement about the overall concept. Now I wondered if we were on to something else.

Once users clicked the "thumbs down" to indicate a poor response, we revealed a quick view of the "bot logic" atop the trainer panel. Specifically, it showed what the bot thought was the intent of the chatter's last message, in order to produce the answer it sent.

Below: a closer look at the beginning of the workflow, focusing on the feature that users found compelling while testing the prototype.

before new idea

Are users getting tired of me yet?

I circled back to a few of the testers to find out why this specific feature resonated with them.

As it turned out, simply revealing why the bot responded the way it did was enough to:

  1. make users more forgiving of erroneous or inadequate replies, since they could see where it went wrong
  2. help address their desire to have more control over bot logic

Importantly, this new feature often highlighted that users themselves hadn't input the information the bot needed to respond correctly.

It looked like launching part of this feature might be a step in the right direction after all.

Meanwhile, in engineering...

I had been working closely with the engineering and AI teams all along to keep estimates up-to-date. During a review, they realized that if we waited a few months for some new infrastructure work to be completed, they could actually do the work faster and remove the need for a chunk of the UX process that was meant to introduce some safeguards—which was a chief complaint amongst testers.

Armed with this information, it looked like this project as initially envisioned was going to be paused.

Is incremental change better than none at all?

But the new insights about simply revealing the bot's intent made me wonder if there was merit in releasing just the part of this feature set not impacted by the engineering work. So, I went back into the designs, removed all of the "training", and focused on making just the "information" more helpful.

Forcing focus on a small, seemingly simple interaction can be a really fruitful design exercise.

In this case, I realized we could eliminate the slide-out "bot trainer panel" altogether, since we were just showing additional information, not the full wizard workflow. This drastically simplified the experience, but I wondered if it were too simple to deliver real value. Potentially worse, would users be frustrated that they had more information, but no tools to act upon it?

Below: a look at the revised design concept that placed the logic in context on the conversations view.

after new idea

User feedback, again

I took the revised concept back to some users as a prototype for review. This time, everyone overwhelmingly agreed it was a step in the right direction.

Not only that, but users had a bunch of new ideas on how to act on this information.

By simplifying the workflow to be purely informational, we reinforced the idea that different tasks were going to be handled by different types of users.

Instead of forcing all users to complete a lengthy, required workflow, we now allowed one type of user—the "responders"—to simply make note of issues, while a different type of user—the "trainers"—could make use of this data at a later stage.


Different "types" of users don't necessarily need to be different people—it's more like a different "mode" of working. Sometimes, a single person will do both, but during different parts of their day or week.


Interestingly, even if users didn't have all the tools needed to improve responses, simply understanding why the bot responded the way it did made them more forgiving.

Final design solution, plus design system updates

With positive feedback, we decided to fully flesh out this concept and ship it. While users couldn't directly take action, they were armed with more information than they previously had, and a direct link to take partial action elsewhere in the platform.

As a bonus, the UI treatment that I arrived at once we focused on a drastically scaled-back approach ended up being a novel pattern that I realized could be applied to a few other features we had planned. So, I detailed the pattern as an update to a flexible front-end component in our design system, making note of potential future uses.

animated gif showing opening reveal

Shipping and measuring success

The team shipped the re-envisioned feature to a small subset of power users and, after positive feedback, released it into the wild for everybody.

Measuring success on a feature like this is tricky—there isn't necessarily a completion action that a user would take to indicate success. We could have tracked clicks—how many times users expanded the new section, or clicked on a link to view the related Knowledge Base entry—but that wouldn't give us a great indication of whether or not simply seeing the additional information was useful.

Being a scrappy startup with a small team, we decided to listen for qualitative feedback instead of relying on quantitative tracking that wouldn't tell the full story, and would take additional engineering time to implement.

So, we prepped our Customer Success team to be on the lookout for feedback about the new feature, and we asked some of the testers ourselves.

Overwhelmingly, the feature was positively received, and we heard anecdotal evidence that users had greater appreciation of the complexity of bot training now that part of the "black box of AI" was revealed.

Additionally, we heard that some users updated their Knowledge Base entries more often because we revealed the data. And, while some went in wanting full control, many now felt this simple intervention was a step in the right direction.

Project Wrapup

Did we solve the project problems? Well, not as initially defined. But we did learn a lot about addressing them. Let's look:



For the business: to scale, we needed to reduce the internal person-hours per end user required to train the AI model.


New insight: We didn't end up releasing a tool that would address this, but our research revealed that customers would potentially be interested in different services to take care of training for them. That means we might not need to reduce team time after all.



For customers: some customers—"power users"—are frustrated by a lack of fine-grained control over their bot's logic, which they assume leads to a higher rate of incorrect responses.


Partially solved, plus new insights: as we saw, simply revealing more information to users went a long way towards reducing frustration and created greater empathy for the process. While we didn't completely solve this, we gained valuable insights that moved us forward in solving the problem in a different way, which ended up being a better overall experience for users.


The project never launched as originally envisioned—but it was still hugely successful in advancing our understanding of user needs.

I think it’s worth noting that by the time I left AdmitHub the originally envisioned tool still hadn't launched—some 20 months later—and this was a good thing! Why? By continually talking to users and incrementally shipping features that addressed the most pressing user need at the time, the tool simply became less pressing.

This helps illustrate the value of inviting users into the process continually, iterating in the design phase, and working on an agile team.

Had any parts of the process been left out, we might have spent time building an overly cumbersome tool that ultimately didn't serve users' needs as well as a much simpler intervention. While this project alone didn't actually solve the problems we set out to address, it gave us a better understanding of how to solve them and added new features to the roadmap that we'd go on to release—making the overall experience better than the initial feature set could have been.

As this illustrates, every project and user test is an opportunity to answer the question "what should we build next?"

Read on below to see how we used the insights gained in this project to define the next few projects on the roadmap.


Next Steps

Using insights to define the product roadmap

This case study is probably long enough already, but I wanted to briefly highlight how user research from this project informed the next projects on the roadmap.

As we learned, organizations were creating specialized user roles (or modes) on their teams to handle different types of tasks in AdmitHub, separating “responding to students” from “improving the bot”.

In order to fully solve the initial problem of fine-grained control over bot logic, we decided to learn more about the "improving the bot" user type and release tools specifically for them. You can see a snapshot of some of this design work in the Knowledge Base project, below.

And since the above Bot Trainer project revealed more information to the "respond to students" users, we knew we wanted to make that information more actionable, somehow. We dug in for more twists and turns in the Inbox and Conversations project, which you can preview below.


Project

The Inbox

Where humans and bots collaborate to help students

The research insights from the previous Bot Trainer project revealed to us that organizations were creating specialized user roles on their teams to handle different types of tasks in AdmitHub, separating “responding to students” from “improving the bot”. The Inbox focuses on giving the former group better tools.


Project

The Knowledge Base

The “brains” behind a bot get an update so users can make them more personal

The research insights from the previous Bot Trainer project revealed to us that organizations were creating specialized user roles on their teams to handle different types of tasks in AdmitHub, separating “responding to students” from “improving the bot”. The Knowledge Base focuses on giving the latter group better tools.


04

Takeaways

I may return to add more information to the projects mentioned above, but they follow the same overarching process as the bot trainer:

Define problems → Initial discovery → Talk to users → Design iteration → Repeat talking to users and design iteration until it makes sense → Build → Launch → Measure → Define next steps.

There are different twists and turns with better or worse outcomes for every project, but as long as you're talking to the people you're designing for and observing how your features are being used, I don't think an iterative approach will fail you.

If you're interested in learning more, please feel free to drop me a line and let me know, or contact me if you think this type of process would be useful on your team.