This “case study” is a little different
Being the first product designer on a team affords a unique opportunity to do some broad system-level thinking alongside specific product work.
So, instead of focusing on a single project with a nicely defined goal, this is an overview of my tenure at this early-stage startup, where the challenge was figuring out what to work on, as much as it was shipping quality experiences.
It’s broken out into three main sections—each its own “act” within the larger story.
Company & Product Overview
Fresh off of raising a Series A, this ed-tech startup was quickly hiring to grow their nascent platform. This is the story of scrappy design at a fast-growth startup just hitting its stride.
I was the first designer on the team of about 30. Hired initially as a part-time UX consultant, I transitioned to Head of Design to help them:
- introduce a design system and update existing product UI
- define design and UX processes and hire the first product design team
- research, design, and launch dozens of new product features
- refine product marketing and launch a new website
During the ~20 months I was there, the whole team’s hard work enabled AdmitHub to expand quickly, and it paid off:
+5x growth in learners served year-over-year, with double the bot interactions per user
+2x new institutional customers added year-over-year, with avg. 3x end user growth
$19M series B raised during early uncertainty of the pandemic
More importantly, the product helped millions of students navigate their college journey: applying to schools and for financial aid, and thriving while enrolled thanks to customized assistance.
AdmitHub in a nutshell
The SaaS platform combines a helpful and customizable chatbot with outbound behavioral-science-based campaigns, which educational institutions use to "nudge" students to more successful outcomes.
There are two main types of users:
Students are the end user, but they might not know it. They get messages via SMS or their school’s website, so we call this a “No UI” solution. Though they have to opt in, they generally just know they’re using a service provided by their school.
School admins use the web-based suite you mainly see in this case study to author campaigns that are sent to students and maintain the Knowledge Base that the bot uses to respond to students.
Above: Overview slide from fundraising deck
Role & team
I was the company's sole designer for about 6 months, working as a consultant for about 20 hours per week while I had other clients. I transitioned to Head of Design and hired our first full-time Product Designer, Roya Rakhshan, soon after. She reported to me and we collaborated on everything thereafter.
Product design work isn't done in a vacuum, of course. In addition to Roya, I collaborated closely with the product, engineering, AI, and research teams—especially Becky Sacks, VP Product, and Caroline Alexander, Product Manager, who were there from the start.
Though all of the projects were highly collaborative, for all selected work here, I was the lead product designer and researcher.
Getting Started at Startups:
Research & Prioritization
If you've never worked at a Series A-stage startup, I'll tell you that it's like drinking from a firehose while deciding which fires are even worth putting out. The path forward is often ambiguous, UX research is necessarily scrappy, and deciding what to work on is just as important as knowing when to give up on an idea. As a solo UX designer, where do you start?
This section turned into a lengthy article about framing product goals. Since it applies primarily to very early-stage products, I’ve taken the liberty of hiding the bulk of the article text. Keep on scrollin’ if you’re a skimmer.
Or, if you want to learn about:
- early-stage UX discovery
- exploratory user interviews
- magic wands, maybe
Still with me? Cool, thanks!
A note about startups
Perhaps the most important thing to know about early-stage startups is that they are, by definition, resource-constrained. Good startups may have the ambition to tackle many problems related to their core goal, but great startups know the value of focus. Leading projects effectively—especially as a design leader—comes down to resource management.
I always find myself and other product folks asking questions like these:
- What’s “good enough”?
- What’s the real MVP?
- Does this even need visual design?
- When can I justify a new hire?
This section attempts to give you a process to answer them.
Framing the initial problems
At this stage, the AdmitHub product team was myself, a single PM, and the Director of Product. They both had their hands full with technical AI projects and overall company strategy, which left me to really focus on the experience and interface of the platform.
Previously, the engineering-heavy team had released TONS of features to make the product quite robust. But despite a pretty UX-conscious team, without a designer on board to shepherd the experience, usability was inconsistent and the overall experience suffered.
The problems:
- overall inconsistent and confusing usability
- not sure which issues are important to users
- no defined user personas or segments—not sure which issues apply to whom
- unclear how to prioritize UX concerns alongside new feature development
The goals:
- identify usability issues and user pain points
- prioritize work on the highest-impact issues
First, fresh eyes: a quick overview and an audit
Regardless of the stage of product you're walking into, a set of fresh eyes experiencing the product for the first time is bound to identify issues that the team building it is too close to see. And if you're an early (or the first!) designer on the team, don't underestimate how much your knowledge of best practices alone can clear up the "low-hanging fruit" usability issues.
So whenever starting on a new project, I like to spend some time using it—preferably on a demo account where I can use all of the features without fear of screwing up real data—without set goals or preconceived notions of product needs.
This is your chance to:
- identify usability issues
- explore "discoverability" of features
- form an initial impression of navigation
- create a mental model of the current information architecture (to compare with intent)
When this is shared, it helps the team understand:
- can the platform be used without training?
- what features were buried, or missed entirely?
- is the information architecture organized as intended?
At this point, I make careful notes and sketches to document what I learned, but I don't share them with the team until I can put some prioritization context around them.
Customer Interviews & Observation
For initial conversations I try to find users who are committed to the product and willing to spend a good amount of time helping improve it. I keep my questions pretty broad and let the conversation dictate where to go.
I like to start with this prompt:
“pretend that I’m totally new to the product: teach me how you use it to do your job in 10-15 minutes.”
I’ll follow along via a screen share, or looking over their shoulder. This naturally helps focus observations on what is most useful to them (and what isn't, by omission), and it allows me to see any usability issues without prompting a specific task.
That walk-through usually transitions naturally into Q&A, where I try to cover the following:
- What are we doing well?
- What can we improve?
- How does this make your job easier? Harder?
By focusing the conversation around their “job,” I find that you hear more about the important issues instead of minor usability or design gripes.
I like to keep the general “jobs to be done” premise in mind when talking to users. Rather than sticking to a prescribed JTBD framework, I find that simply focusing on the idea of “jobs” users are trying to accomplish when using your software helps align research sessions with the most important feedback.
And remember, a “job” isn’t a simple task—it’s the core reason that a user sought out your product in the first place. For example, the “task” might be to send an SMS campaign to remind a student about a deadline, but the larger “job” here might be improving second-semester enrollment rate for a specific cohort of students.
It’s important to remember to frame the job as the user’s larger objective for using your product, lest you start designing only to make your existing solution better.
While that might lead to a good outcome, we don't want to exclude other possible solutions just yet and we'd be missing the full picture.
To end interviews, I like to borrow a question from an old colleague,
“if you could wave a magic wand and change one thing about the product right now, what would it be?”
This has the effect of homing in on the most critical point, or giving them the space to bring up anything else that's really important to them. I find that it's a nice way to let users know that their individual opinion matters to you.
During this whole process, either I or another person takes quick notes, and I try to record the sessions if the user is comfortable with it. Immediately afterwards, I try to give myself an extra 15-20 minutes to summarize the most important parts of the interview.
- New interview documentation—including my summary, all rough notes, and screen recordings/audio—is uploaded and cataloged in the team's research repository.
- For all UX research, I think it's important to make deliverables as readily available and searchable as possible, so I document in the product team's preferred system. For some, that's something like Confluence or Notion; for AdmitHub, it was Google Docs in a shared Google Drive. If an organizational structure doesn't exist, I start one, and "tag" each session with keywords that relate back to product features.
- I'll either add to, or start, a contact directory of users who are willing to keep working with you—this helps with additional research and with forming a customer advisory board in the future.
- Did you promise the user anything? If so—put a deadline on your calendar or task list! Make them feel heard, even if you can't give them exactly what they want, and they'll usually turn into an asset that will continue to help you drive product improvement.
- After they're all done, I'll summarize the main findings and trends in a single document that links out to the specifics. This will be included in the plan (below).
Internal User Interviews & Shadowing
After the initial customer interviews (which are unbiased by your team's opinions), I like to spend time with any internal users. There are usually internal team members who spend a good deal of time onboarding or supporting users, and they usually have a unique perspective on what's important.
At AdmitHub, there was actually a whole team using internal-facing tools to train the proprietary AI model and help customers manage incoming messages. One of the first projects I was assigned—rethinking the Bot Trainer (below)—relied heavily on information from this team, so that's where I started.
In addition to informal conversations, I spend time shadowing different team members while they do their job and also have them train me, so I can get a real feel for using the product.
Regardless of how well you can internalize how your product works, nothing gives you the same perspective as using the tool for an extended period to actually accomplish a team member's job, especially if that job entails working directly with customers. The customers you interact with—and the interactions you have with them—are often very different in support and training roles than in UX.
- Summaries and notes are uploaded alongside customer research.
- Set up a Slack channel to collect ongoing feedback from this team specifically, where each person can provide ideas and everyone else can upvote or downvote the ones they think are important.
Once I understood the state of the world for users, it was time to be schooled by the engineering and AI teams in how the product actually works. Since this was my first formal foray into designing for AI, I asked the team for their recommended "intro" resources on AI and scheduled some deep-dive sessions on the tool's inner workings.
If it doesn't already exist, I create the beginnings of a site architecture map, as well as user workflows for common tasks, based on how the product actually works. I previously mentioned a "mental map" in my upfront audit—now is when it comes in handy. I'll take my sketch from that, plus any related feedback from users, and use it to compare how the product technically works with how users perceive it to work. This becomes our baseline for "now" versus whatever "future" state we identify, and seeing a visual representation usually makes conversations about how to get from here to there a bit easier.
- Visual sitemap, decision tree, or other user flow that the team (especially product and engineering) can easily reference to ensure we have a shared understanding of where we're at.
Basic sitemap snapshot from Figma
Making a plan: a picture’s worth a thousand words
After all of that background, I make a fast "readout" and presentation of what I learned, identifying weak points, areas of opportunity, and a first pass at a plan of attack to present to product management.
This usually includes product screenshots, accompanied by annotations and very low-fidelity wireframes describing improvement ideas.
Importantly, this is all presented to the team as a fast, rough collection of "chunks" of ideas that need further exploration. It's better if it's not polished—I like to present it as a "sketchbook" of notes to explore with the team. Still, some visual representation is better than none: it's very hard for people—even the people building the tool—to follow written suggestions that lack the context of usage.
- Shareable rough product "chunks", somewhere the team can comment
- Presentation to the larger team with time scheduled for discussions with smaller breakout teams
Example project "chunks" draft
Plan into practice
While every team does their agile and sprint process differently, I've found almost universally that what works well for engineers doesn't translate to an effective design process. I'll cover more of this later, but I wanted to point out that design should be part of the prioritization process—including the often tedious documentation and agile tooling—from the get-go.
By working in the same tools as the rest of the team, you're more aware of the "state of the world," which lets you more effectively advocate for projects that are important to improving the UX.
- Roughly prioritized discovery tickets for your UX backlog
- Epics, tickets, and more details added to your agile process tool
- Linked detailed design and engineering specs for any UX-led projects
Agile process prioritization
Now that you're organized and understand UX priorities, it's time to get to the fun stuff.
Laying the UX Foundation: Design Systems & Process
Early teams want to "ship, ship, ship." It might not feel like you have time for these two things at the start, but in my experience, taking the time for them actually helps you ship faster, and pays dividends for future designers:
- Empower the whole company to work with you by establishing the role of design.
- Document an early design system of patterns and components for re-use.
*Note: this is presented linearly here, but in practice this work happens in a cycle alongside feature projects.
Establish the role of design
On early teams, it's important for you to learn about a company, but it's also important for them to learn about you. And if you're the first designer on the team, it's important that they understand what you do and how you can help them. If you're an early design leader, you sort of have the unique opportunity—and perhaps responsibility—to define the role of design generally.
Educate on the myriad roles of design
I always advocate for giving a quick presentation at a company all-hands—or at the very least sending an intro email with links to shared internal documentation about design's role on the team—with the goal of inviting the rest of the company to be part of the design process.
Slides from my "Design + Research" talk
Recruiting internal champions
Who perked up at your presentation? You'll quickly find the folks that are interested in helping to deliver a better experience for your customers, and these folks—regardless of their role or team—can be your internal champions for user research and delivering great experiences.
Find them, and give them an easy way to find you. Then, make them part of the process—officially or unofficially—and recognize their contributions.
At AdmitHub, this took a few forms:
- I ended up taking on some "UX interns" who took time away from their actual jobs to help out with research and sit in on user interviews. This really helped a resource-constrained design team, and they learned a lot!
- I also regularly scheduled time with a handful of UX enthusiasts across the org, from education specialists to partner success reps. These informal conversations helped to inform my product thinking, and helped me communicate outward what progress the team was making towards their feedback.
None of these takes much time or has to feel like a huge lift—but they can really pay off in furthering the impact of design. Plus, taking the time to teach people even helped launch a few budding UX research careers!
Building a design system as you go
I'm always thinking about design as a system—how does this current decision fit into the larger whole? If you make note of these things from the get-go, it's actually pretty easy to start fleshing out a design system or pattern/component library to draw from.
A system-wide UI “refresh” to start
Sometimes, a fresh coat of paint alone can make a huge impact on a product. The "aesthetic-usability effect" is a real phenomenon: better-looking interfaces are perceived as working better, all else being equal.
If you've done the intro prioritization work mentioned in part 01, then getting this type of work on the roadmap should be easier.
This can seem like a massive undertaking, but sometimes, as was the case with AdmitHub, the changes can be evolutionary instead of revolutionary.
Screenshot before refresh
Designs post UI refresh
A scrappy design system
Even if you're the solo designer, there are huge benefits to documenting your work. You make it easier for engineering and product teams to reference the latest designs, and if you're sharing your progress, you sort of force yourself into being more organized with versioning, etc.
So, just start documenting—it’ll lead to a proper design system faster than keeping it all in your head!
Design system in Figma
Some advice for early-stage product designers
- It's okay for your file to be messy and disorganized—things will change rapidly, just get it all in one place.
- Don't over-plan a "system" naming convention (e.g. atoms, molecules, etc.). You'll get hung up on terms that don't matter yet. Just group things logically—it will change.
- Do start a component and pattern naming convention that's shared with engineering, so you have a single vocabulary for referencing the parts of your experience. (But don't be precious about the actual nouns.) This means working closely with front-end engineers to make sure the system works for them, too.
- Figma team libraries and prototypes are a real game-changer. By defining your colors, fonts, spacing, etc. up front, you force the early design team to make deliberate decisions about introducing new variables—that alone is helpful. And by creating shared components you can just drag onto an artboard, you accelerate the process and, again, force consideration of pattern proliferation.
- (Really, Figma is awesome, you should use it if you’re making a design system.)
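To make the "define your variables up front" advice concrete, here's a minimal sketch of what a shared token module might look like once the Figma styles are mirrored in code. All names and values here are hypothetical placeholders, not AdmitHub's actual system:

```typescript
// Hypothetical design tokens mirroring a Figma team library.
// The point is the shared vocabulary, not these particular values —
// agree on the names with engineering before anything else.

export const color = {
  primary: "#4353ff",   // placeholder brand color
  surface: "#ffffff",
  textBody: "#1f2933",
} as const;

// A single spacing scale: wanting a step that isn't on the scale
// forces a deliberate conversation, which is exactly the point.
const SPACE_BASE_PX = 4;

export function space(step: number): string {
  return `${SPACE_BASE_PX * step}px`;
}

export const typography = {
  body: { size: "16px", lineHeight: 1.5 },
  caption: { size: "13px", lineHeight: 1.4 },
} as const;
```

Components then reference `space(2)` or `color.primary` instead of raw values, so a later UI refresh means changing one file rather than hunting through every screen.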
A note on designers and coding
Do you, or does somebody on your design team, code? Or conversely, is there a developer who loves design? Super! Find the combination that works for your team and collaborate closely to implement components to the initial design spec. You can save a ton of time by not statically designing every transition, hover state, and responsive breakpoint, and instead working closely with your trusted partner to build coded components in a way that reflects all of your design intent.
At AdmitHub, we put together a small working group to accomplish this, and they set me up with an environment so I could actually finalize much of the tricky CSS myself, once they gave me an established baseline. This was the fastest for us at the time, but every team is different—the important part is to start documenting and seek out enthusiastic engineering partners early.
Design system website
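As a small illustration of that working-group approach, the shared vocabulary can live in code as well as in Figma. This sketch is hypothetical (the names aren't AdmitHub's), but it shows how variant and state nouns agreed on with engineering let hover, disabled, and loading permutations live in one place instead of dozens of static mockups:

```typescript
// Sketch of a shared component vocabulary: the same nouns used in the
// Figma library ("Button / Primary", "Button / Danger") map directly to
// code, so designers and engineers reference one set of names.
// All names and class conventions here are hypothetical.

type ButtonVariant = "primary" | "secondary" | "danger";

interface ButtonState {
  disabled?: boolean;
  loading?: boolean;
}

// Build a class list from variant + state, instead of designing every
// disabled/loading permutation as a separate static mockup.
export function buttonClasses(
  variant: ButtonVariant,
  state: ButtonState = {}
): string {
  const classes = ["btn", `btn--${variant}`];
  if (state.disabled) classes.push("is-disabled");
  if (state.loading) classes.push("is-loading");
  return classes.join(" ");
}
```

For example, `buttonClasses("danger", { disabled: true })` yields `"btn btn--danger is-disabled"`, matching a "Button / Danger / Disabled" variant in the Figma library.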
Foundation established, tools in the belt. Now we’re ready to utilize the tools and plan we've created to build new features.
Product Design In Depth:
Iteration, Prototyping & Testing
How often do things go exactly to plan? I suppose I could write about a project tied up neatly with a bow that shows a flawless process with perfect metrics—but that’s not always how things play out, especially on early-stage teams.
The reality is more interesting anyway. I worked on dozens of different product features while at AdmitHub, but selected the following few to show how every project—even one that doesn't work out—is an opportunity to learn and to delight customers.
The Bot Trainer
A failed attempt to turn an internal tool into a customer feature still yields benefits
The first project that I was hired to work on never actually came to fruition—at least not in the form initially envisioned. Though it didn't launch, I think what follows is a success story about working on a truly "agile" team. Read on to see how this project’s research informed the roadmap and subsequent projects to come.
For the business: to scale, we needed to reduce the internal person-hours per end user required to train the AI model.
For customers: some customers are frustrated by a lack of control over their bot’s logic, which they assume leads to a higher rate of incorrect responses. (Because we don’t expose this information to users, this seems like the “black box” of AI problem—“why did the bot respond the way it did?”)
We thought we could address these two issues with a single tool, designed to:
- reduce internal team hours per end user by leveraging partners’ input to add bot training data
- give partners the control they crave over tuning bot responses
- bonus: deprecate internal tool in favor of a single interface shared with partners
In my experience, it’s a good idea to have a shared visual frame of reference for complex products. Having a simple visual aids future design conversations, and serves as a learning tool for the team. I’m often surprised when it also helps to illustrate discrepancies in understanding about existing products between teams.
This is especially true when multiple teams are involved, and more so when AI is part of the mix. So I spent lots of time with our AI and engineering teams making a visual model of how the bot worked, so the whole product team had a shared understanding of the status quo—and of where the designs would alter it.
As part of my initial discovery (see part 01, above), I had already spent time shadowing our internal training team and had exploratory conversations with users. Now it was time for more pointed interviews. As our questioning got more specific, we segmented users by how they utilized the existing tools.
We were able to identify a segment of users for whom changes to this particular tool would likely make the highest impact. These “power users” became our target audience.
Interviews revealed how different sets of users had created their own teams, workflows, and external processes to track updates they needed to make to their bot’s logic, either (1) elsewhere in our platform, or (2) by reporting them to our internal team.
Some customers would be interested in a service to keep their bot's knowledge base up-to-date for them.
User Interview Summary
We thought we could give the Power Users a tool right in context of the conversation view that would allow them to update bot behavior immediately, instead of having to track updates externally and apply them later.
As a broad overview, this tool needed to allow users to:
- update bot responses (the actual message text)
- identify why a given bot response was incorrect
- add additional training data to help improve question identification—especially if the bot “couldn't understand” the question
All of the above may sound simple enough, but when you're using the data to update a natural-language-processing (NLP) model, the information needed is nuanced and needs to be accurate in order to improve the model. Our internal team was trained on collecting this data. Could end users correctly tag the data without training?
The workflow we identified earlier had all of the necessary steps, but an intuitive UX that set up guardrails was needed to integrate it into the product, so we envisioned a step-by-step “wizard” type of interaction.
I started by putting together some low-fidelity wireframes in order to work out the user flow for conceptual discussions across the product team and feasibility review with the engineering team.
Gut-check with users
In order to ensure our concept made sense to users, I went back to the target user group—Power Users—as well as our internal team, to review concepts.
Power Users were universally excited about the concept, expressing enthusiasm about the additional control they'd get to customize their bots. Internal team users agreed that the workflow captured their current tasks correctly. So, I continued down the path of fleshing out the wireframes into high-fidelity designs.
In hindsight, this was incomplete information, as you'll see below. I probably should have created a prototype at this stage, even a low-fidelity one. That said, it didn't take too much effort to turn wireframes into high-fidelity designs, and sometimes the high-fi versions get you more realistic feedback.
If you’ve structured your wireframes the right way, you’re already a long way toward your final designs. I start by fleshing out a finalized design for a single instance of a screen, applying assets from my library, and getting feedback from the product team. Once the team agrees on that one screen, it’s time to apply those styles to all of the wireframes.
During this phase I work closely with engineers and PMs to ensure we address all use cases, in order to turn the designs into a final spec for development.
High-fidelity designs and Figma presentation artboard
The image below shows a selection of frames from the high-fidelity design at this stage, which were presented to users in a clickable prototype. You can see the “thumbs up/down” icons that open the “bot trainer panel” concept as an overlay, as well as some screens that walk users through the process.
Once the designs are close to complete, I turn them into a clickable prototype using Figma. The logic for this whole thing was very complex, and I wanted to ensure that users would be able to understand what was happening and where they could enter and exit the process. So, I went back to the same users I'd been speaking with, plus our internal team, to test the prototype as I watched.
Figma prototype layout
The results were pretty surprising: the power users—previously the most enthusiastic bunch—said they wouldn’t use it!
This is why prototyping is so important—despite the strongly desired new functionality, problematic UX would have hampered or even prevented adoption among the target users had we launched this version.
Interestingly, this wasn’t an issue of confusing usability—all of the testers could complete all of the tasks and understood the workflow, they just thought it took too long to complete. Couldn’t we just take some steps out?
In other words, users had assumed that training the AI would be much simpler than it actually is—their mental model was far simpler than the reality of what was required.
Given this new feedback, I went back to brainstorm with the team.
User Testing Results Summary
Back to the drawing board?
We wondered if there was a way to streamline the UX to remove some steps or checks. Some of the testers even had ideas for things we might be able to remove.
Unfortunately, the answer was no—the team wasn't comfortable removing any of the steps, because they were designed as guardrails to minimize human error and ensure proper data collection.
At an impasse?
With users asking for a simpler experience, and technical requirements that had to be maintained—what could we do?
While nothing could be removed, I thought there were probably UX interventions to make it seem faster by breaking actions into different workflows.
I reviewed notes and recordings from user interviews and testing sessions to dig deeper, looking for specific points of frustration or delight. Interestingly, all of the testers really appreciated a detail at the very beginning of the prototype flows. Initially I had assumed that users were commenting on this particular feature out of excitement about the overall concept. Now I wondered if we were on to something else.
Once users clicked the "thumbs down" to indicate a poor response, we revealed a quick view of the "bot logic" atop the trainer panel. Specifically, it showed what the bot thought the intent of the chatter's last message was, in order to produce the answer it sent.
Are users getting tired of me yet?
I circled back to a few of the testers to find out why this specific feature resonated with them.
As it turned out, simply revealing why the bot responded the way it did was enough to:
- make users more forgiving of erroneous or inadequate replies, since they could see where it went wrong
- help address their desire to have more control over bot logic
Importantly, this new feature often highlighted that users themselves hadn’t input the information the bot needed to respond correctly.
It looked like launching part of this feature might be a step in the right direction after all.
Meanwhile, in engineering...
I had been working closely with the engineering and AI teams all along to keep estimates up-to-date. During a review, they realized that if we waited a few months for some new infrastructure work to be completed, they could actually do the work faster and remove the need for a chunk of the UX process meant to introduce safeguards—which was a chief complaint amongst testers.
Armed with this information, it looked like this project as initially envisioned was going to be paused.
Is incremental change better than none at all?
But the new insight—that simply revealing the bot's intent was valuable—made me wonder if there was merit in releasing just the part of this feature set not impacted by the engineering work. So, I went back into the designs, removed all of the "training," and focused on making just the "information" more helpful.
Forcing focus on a small, seemingly simple interaction can be a really fruitful design exercise.
In this case, I realized we could eliminate the slide-out "bot trainer panel" altogether, since we were just showing additional information, not the full wizard workflow. This drastically simplified the experience—but I wondered if it was too simple to deliver real value. Potentially worse, would users be frustrated that they had more information, but no tools to act on it?
User feedback, again
I took the revised concept back to some users as a prototype for review. This time, everyone overwhelmingly agreed it was a step in the right direction.
Not only that, but users had a bunch of new ideas on how to act on this information.
By simplifying the workflow to be purely informational, we reinforced the idea that different tasks were going to be handled by different types of users.
Instead of forcing all users to complete a lengthy, required workflow, we now allowed one type of user, the "responders," to simply make note of issues, while a different type of user, the "trainers," could make use of this data at a later stage.
Different "types" of users don't necessarily need to be different people—it's more like a different "mode" of working. Sometimes, a single person will do both, but during different parts of their day or week.
Interestingly—even if users didn't have all the tools needed to improve responses, simply understanding why the bot responded the way it did made them more forgiving.
Final design solution, plus design system updates
With positive feedback, we decided to fully flesh out this concept and ship it. While users couldn't directly take action, they were armed with more information than they previously had, and a direct link to take partial action elsewhere in the platform.
As a bonus, the UI treatment that I arrived at once we focused on a drastically scaled-back approach ended up being a novel pattern that I realized could be applied to a few other features we had planned. So, I detailed the pattern as an update to a flexible front-end component in our design system, making note of potential future uses.
Shipping and measuring success
The team shipped the re-envisioned feature to a small subset of power users and, after positive feedback, released it into the wild for everybody.
Measuring success on a feature like this is tricky—there isn't necessarily a completion action that a user would take to indicate success. We could have tracked clicks—how many times users expanded the new section, or clicked through to view the related Knowledge Base entry—but that wouldn't give us a great indication of whether or not simply seeing the additional information was useful.
Being a scrappy startup with a small team, we decided to listen for qualitative feedback instead of relying on quantitative tracking that wouldn't tell the full story, and would take additional engineering time to implement.
So, we prepped our Customer Success team to be on the lookout for feedback about the new feature, and we asked some of the testers ourselves.
Overwhelmingly, the feature was positively received, and we heard anecdotal evidence that users had greater appreciation of the complexity of bot training now that part of the "black box of AI" was revealed.
Additionally, we heard that some users were more likely to update their Knowledge Base entries more often because we revealed the data. And, while some went in wanting full control, many now felt this simple intervention was a step in the right direction.
Did we solve the project problems? Well, not as initially defined. But we did learn a lot about addressing them. Let's look:
For the business: to scale, we needed to reduce the internal person-hours per end user required to train the AI model.
New insight: We didn't end up releasing a tool that would address this, but our research revealed that customers would potentially be interested in different services to take care of training for them. That means we might not need to reduce team time after all.
For customers: some customers—“power users”—are frustrated by a lack of fine-tune control over their bot’s logic, which they assume leads to a higher rate of incorrect responses.
Partially solved, plus new insights: as we saw, simply revealing more information to users went a long way toward reducing frustration and created greater empathy for the process. While we didn't completely solve the problem, we gained valuable insights that moved us toward solving it in a different way, which ended up being a better overall experience for users.
The project never launched as originally envisioned—but it was still hugely successful in advancing our understanding of user needs.
I think it’s worth noting that by the time I left AdmitHub the originally envisioned tool still hadn't launched—some 20 months later—and this was a good thing! Why? By continually talking to users and incrementally shipping features that addressed the most pressing user need at the time, the tool simply became less pressing.
This helps illustrate the value of inviting users into the process continually, iterating in the design phase, and working on an agile team.
Had any part of the process been left out, we might have spent time building an overly cumbersome tool that ultimately didn't serve users' needs as well as a much simpler intervention. While this project alone didn't solve the problems we set out to address, it gave us a better understanding of how to solve them, and it added new features to the roadmap that we'd go on to release, making the overall experience better than the initial feature set could have been.
As this illustrates, every project and user test is an opportunity to answer the question "what should we build next?"
Read on below to see how we used the insights gained in this project to define the next few projects on the roadmap.
Using insights to define the product roadmap
This case study is probably long enough already, but I wanted to briefly highlight how user research from this project informed the next projects on the roadmap.
As we learned, organizations were creating specialized user roles (or modes) on their teams to handle different types of tasks in AdmitHub, separating “responding to students” from “improving the bot”.
In order to fully solve the initial problem of fine-tune control over bot logic, we decided to learn more about the "improving the bot" user type and release tools specifically for them. You can see a snapshot of some of this design work in the Knowledge Base project, below.
And since the above Bot Trainer project revealed more information to the "responding to students" users, we knew we wanted to make that information more actionable, somehow. We dug in for more twists and turns in another project covering the Inbox and Conversations, which you can preview below.
Where humans and bots collaborate to help students
The research insights from the previous Bot Trainer project revealed to us that organizations were creating specialized user roles on their teams to handle different types of tasks in AdmitHub, separating “responding to students” from “improving the bot”. The Inbox focuses on giving the former group better tools.
The Knowledge Base
The “brains” behind a bot get an update so users can make them more personal
As with the Inbox, the research insights from the Bot Trainer project showed organizations separating "responding to students" from "improving the bot." The Knowledge Base focuses on giving the latter group better tools.
I may return to add more information to the projects mentioned above, but they follow the same overarching process as the bot trainer:
Define problems --> Initial discovery --> Talk to users --> Design iteration --> Repeat talking to users and design iteration until it makes sense --> Build --> Launch --> Measure --> Define next steps.
There are different twists and turns, with better or worse outcomes, in every project, but as long as you're talking to the people you're designing for and observing how your features are being used, I don't think an iterative approach will fail you.
If you're interested in learning more, please feel free to drop me a line and let me know, or contact me if you think this type of process would be useful on your team.