Building a SaaS Platform for a Support Center


Research type: Continuous Discovery
UX skills: Concept Testing, Contextual Inquiry, Diary Study, Focus Group, Journey Mapping, Personas, Usability Testing
My role: UX Research Lead
Cross-functional team: UX Research, UX/UI Design, Product Management, Engineering, Support Center Operations, Service Design
Timeline: Nine months for MVP build


Context

A national nonprofit that connects eligible people to government programs was using proprietary support center software to screen people for public benefits and submit their applications.

The software was outdated, inefficient, and not user-friendly, leading to expensive staff training and time-consuming workarounds. The team decided to build new software that would be efficient, user-friendly, and scalable. I wanted the users themselves, the support center agents, to have substantial input into the development process to ensure the software met their needs, so I chose a continuous discovery approach.


Research Process: Continuous Discovery

RECRUITMENT

The support center was under-resourced, and agents had strict schedules with limited time off the phones. Scheduling ad hoc research sessions with agents proved very difficult.

I developed a continuous discovery process for the duration of the build: two group sessions each week dedicated to research and testing, with five rotating agents drawn from different campaigns.

This format let operations plan resourcing across campaigns ahead of time. Cross-functional partners (PMs, developers, designers) were invited as “optional” attendees, and we held a debrief session on key insights every week.

RESEARCH GOALS

Foundational/discovery research: Understand support center agents’ experiences with the current software and map their journey, highlighting pain points and opportunities.

METHODS: Interviews, Focus Groups, Contextual Inquiry, Diary Study (conducted outside of weekly sessions)

Evaluative research: Test new software designs iteratively with support center agents throughout the build.

METHODS: Concept Testing, Usability Testing


Analysis & Synthesis

I used thematic analysis and affinity mapping throughout the project to synthesize and analyze the large volume of qualitative data collected through the various research methods.

Diary study: daily “thorn” (pain point) affinity mapping
Every week, I added new insights to a Mural board, eventually synthesizing the data into primary user needs.

Outputs

USER JOURNEY

I partnered with our UX Designer to create a journey map of agents’ experience with the current software, drawing on insights from the discovery research. We mapped support center agents’ emotional journey, pain points, and opportunities to help guide the product roadmap.

PERSONAS

I ran a series of workshops with our cross-functional team and stakeholders to create personas based on the research insights. Involving the whole team and stakeholders helped socialize the personas as useful artifacts throughout the build process.

INSIGHTS & ITERATIONS

Every week’s debrief covered the insights from the user sessions. These insights guided feature creation and design iteration, from concept to wireframe to prototype, as shown below.

Design Iterations

Concept for Client Profile
Wireframe for Client Profile
Prototype for Client Profile


Reflections & Next Steps

Overall, the continuous discovery process was a success for this project and helped the team settle into a solid cadence of research, testing, and design iteration.

It was, however, difficult to analyze the data and deliver useful insights to the rest of the team every couple of days for nine months. Doing this again, I would build in more time for synthesis and analysis, for example by running design sprints a full cadence ahead of dev sprints.

After launch, I gathered feedback from agents through a dedicated Slack feedback channel, watched call recordings of agents using the new software, and ran 1-on-1 feedback sessions. I also partnered with the data analytics team to track metrics such as the number of errors (noted during quality control call monitoring), call length, and the number of applications submitted. Using all this data, I worked with product management to add bug tickets, enhancements, and new feature requests to the backlog.