Establishing a UX Analytics Framework

Kevin Shih, Ph.D.
Senior UX Researcher

Regardless of what government service you are building or modernizing, decisions should be grounded in how users actually experience your offering. Without systematically tracking your service’s user engagement and performance, it is hard for your team to identify opportunities for improvement and assess how improvements are performing.

Establishing an analytics framework that is informed by user experience (UX) thinking will give you a more holistic picture of your service’s user journeys. Traditional analytics (like page views, completion rates, and drop-off points) can show you patterns in how your users interact with your service, whereas UX research reveals the circumstances, goals, and frustrations behind those patterns — making your data more actionable.

Many government teams already track analytics through tools like Google Analytics or DataDog, but these efforts often live separately from UX work. Bridging this gap doesn't require starting from scratch — it simply means bringing UX thinking into the frameworks you already have.

Here are three high-level steps to establish a UX analytics framework that will empower your team to deliver in a data-informed manner.

Step 1: Align on service goals and establish KPIs

First, work with your program and team leadership to align on goals for your service. These goals should capture both 1) what successful engagement looks like for the people who use your service, and 2) the program outcomes you need to achieve. It is crucial that these goals are not developed in a vacuum and that you have buy-in from your team.

Once the goals have been established, list the key performance indicators (KPIs) that would allow you to measure how your team is doing with each goal. For example, if your team is modernizing your state's online driver's license renewal and has a goal of encouraging users to review their applications before submitting to reduce user errors, you might add a review page where users can check their answers before submitting. A relevant KPI for this goal could be: What percentage of users who complete the application actually review their answers on the summary page?
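To make this concrete, here is a minimal sketch of how that KPI could be computed once you are capturing events. The event names (application_submitted, review_page_viewed) and the record shape are illustrative assumptions rather than features of any particular analytics platform.

```typescript
// Minimal sketch: "% of submitters who viewed the review page".
// Event names and the record shape are illustrative assumptions.
interface AnalyticsEvent {
  userId: string;
  name: "review_page_viewed" | "application_submitted";
  timestamp: number;
}

function reviewBeforeSubmitRate(events: AnalyticsEvent[]): number {
  const submitters = new Set(
    events.filter((e) => e.name === "application_submitted").map((e) => e.userId)
  );
  const reviewers = new Set(
    events.filter((e) => e.name === "review_page_viewed").map((e) => e.userId)
  );
  if (submitters.size === 0) return 0;

  // Count submitters who also viewed the review page at least once.
  let reviewed = 0;
  for (const id of submitters) {
    if (reviewers.has(id)) reviewed += 1;
  }
  return (reviewed / submitters.size) * 100; // percentage
}
```

However you ultimately calculate it, the important part is that the KPI maps directly back to the goal your team agreed on.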

Step 2: Establish your baseline, starting with qualitative research

After aligning on your goals and KPIs with your team, you will need to start capturing how well your service is currently performing. This serves two purposes: it reveals which parts of your service need iteration most urgently, and it establishes a baseline to measure whether your improvements are working. Start by reviewing any existing qualitative research on how users engage with your service — particularly where they succeed or struggle with their goals and why. If no prior research exists, conduct moderated usability studies to capture these patterns. Focus on understanding the “why” behind user behavior, as these insights will directly inform your improvements. While small-sample research won't be statistically generalizable, it provides a valuable baseline to gauge whether your iterations are moving in the right direction.

Why start with qualitative research? Qualitative and quantitative research serve different purposes: qualitative research reveals why users behave the way they do through methods like interviews and usability testing, while quantitative research measures what they do and how often through surveys and behavioral data. This is why working with qualitative findings before setting up analytics platforms is so valuable — it helps you identify what's actually worth measuring. For example, building on the driver’s license renewal scenario, let’s say your team wants to understand whether users need receipts after submitting. You might add a download link on the confirmation page and track engagement. If few users click it, traditional analytics alone might suggest users don't care about receipts.

But qualitative research could reveal a different story: maybe your users expect receipts to arrive automatically via email and have no habit of downloading one from a confirmation page, because other websites have conditioned them to expect emailed receipts. This insight would prevent your team from misinterpreting low download rates as a lack of interest in receipts, when the real issue is a mismatch between user expectations and your design pattern. Armed with this understanding, you might instead track email delivery rates and survey users about receipt preferences, metrics that actually reflect user needs.
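For reference, the click tracking described in this scenario is usually a one-line custom event. The sketch below uses Google Analytics 4's gtag("event", …) call; the element ID, event name, and parameters are made up for this example, and you would pair the click event with the email-delivery and survey metrics described above rather than relying on it alone.

```typescript
// Illustrative sketch: fire a GA4 custom event when the receipt link is clicked.
// Assumes the gtag.js snippet is already loaded on the page; the element ID,
// event name, and parameters are assumptions for this example.
declare function gtag(
  command: "event",
  eventName: string,
  params?: Record<string, unknown>
): void;

document.getElementById("receipt-download-link")?.addEventListener("click", () => {
  gtag("event", "receipt_download_click", {
    page: "renewal_confirmation",
  });
});
```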

Once you have worked through your qualitative insights, you are ready to set up your actual analytics tracking in platforms like Google Analytics and DataDog RUM. Some possible quantitative metrics to track include (a brief computation sketch follows the list):

  1. Task completion rate

  2. Task time

  3. Error rate

  4. Bounce rate

  5. Conversion rate

  6. Drop-off rate

  7. Frustration signals, like rage clicks
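As noted above, here is a minimal sketch of how several of these metrics, such as conversion and drop-off rates, could be derived from a simple step funnel. The step names and user counts are hypothetical.

```typescript
// Hypothetical funnel for the license renewal flow: number of users who
// reached each step, in order. Step names and counts are illustrative.
const funnel: { step: string; users: number }[] = [
  { step: "start_application", users: 1000 },
  { step: "upload_documents", users: 720 },
  { step: "review_answers", users: 610 },
  { step: "submit", users: 580 },
];

// Drop-off rate between each pair of adjacent steps.
for (let i = 1; i < funnel.length; i++) {
  const prev = funnel[i - 1];
  const curr = funnel[i];
  const dropOff = ((prev.users - curr.users) / prev.users) * 100;
  console.log(`${prev.step} -> ${curr.step}: ${dropOff.toFixed(1)}% drop-off`);
}

// Overall conversion: users who finished relative to users who started.
const conversion = (funnel[funnel.length - 1].users / funnel[0].users) * 100;
console.log(`Overall conversion rate: ${conversion.toFixed(1)}%`);
```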

Which metrics you use depends entirely on your team's goals and how your service is designed. For example, if you want to know whether users review their answers on a driver's license renewal application before submitting, you could measure this as a task completion rate: Of those who submitted, how many completed the review step?

But if you're interested in the quality of the review experience itself — especially if the review page is a standalone step — other metrics become more informative. You might track task time (How long do users spend reviewing?) to understand engagement depth, or rage clicks (Where are users repeatedly clicking in frustration?) to identify confusing elements on the review page.
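Tools like DataDog RUM surface frustration signals such as rage clicks out of the box, but the underlying idea is simple: several clicks on the same element within a short window. A hand-rolled sketch, with an assumed threshold of three clicks within one second, might look like this.

```typescript
// Naive rage-click detector: flags 3+ clicks on the same element within 1 second.
// The thresholds and the reporting step are assumptions for illustration.
const CLICK_WINDOW_MS = 1000;
const CLICK_THRESHOLD = 3;

let lastTarget: EventTarget | null = null;
let clickTimes: number[] = [];

document.addEventListener("click", (event) => {
  const now = Date.now();
  if (event.target !== lastTarget) {
    lastTarget = event.target;
    clickTimes = [];
  }
  clickTimes = [...clickTimes.filter((t) => now - t < CLICK_WINDOW_MS), now];
  if (clickTimes.length >= CLICK_THRESHOLD) {
    // Report to your analytics platform however you normally send custom events.
    console.warn("Possible rage click detected on", event.target);
    clickTimes = [];
  }
});
```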

Step 3: Iterate and keep tracking KPIs (rinse, repeat)

Now, your team is set up to make data-informed decisions. You've set up analytics to track the right quantitative metrics, and you understand where improvement opportunities exist based on your qualitative research.

Every time your team iterates on your service, track your metrics against the established baselines to assess performance. Keep gathering qualitative feedback, especially in the form of contextual inquiries, moderated user interviews, and usability studies. Quantitative metrics are great at describing who, what, when, where, and how your users are engaging with your service, but they are often insufficient at explaining why. Both quantitative and qualitative data are needed to drive meaningful improvement.
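One lightweight way to keep yourself honest during these iterations is to record the baseline value for each KPI and flag regressions automatically. The sketch below is a generic comparison, not tied to any particular platform; the metric names, values, and 5% tolerance are assumptions.

```typescript
// Compare current KPI readings against recorded baselines and flag regressions.
// Metric names, values, and the 5% tolerance are illustrative assumptions.
interface KpiReading {
  metric: string;
  baseline: number; // value captured before the iteration
  current: number; // value after the iteration
  higherIsBetter: boolean;
}

const TOLERANCE = 0.05; // ignore changes smaller than 5% of the baseline

function flagRegressions(readings: KpiReading[]): string[] {
  return readings
    .filter(({ baseline, current, higherIsBetter }) => {
      const change = (current - baseline) / baseline;
      return higherIsBetter ? change < -TOLERANCE : change > TOLERANCE;
    })
    .map(({ metric }) => metric);
}

const regressions = flagRegressions([
  { metric: "review_step_completion", baseline: 62, current: 71, higherIsBetter: true },
  { metric: "median_task_time_seconds", baseline: 240, current: 300, higherIsBetter: false },
]);
console.log("KPIs that regressed:", regressions); // ["median_task_time_seconds"]
```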

This framework isn't a one-time setup — it's an ongoing practice. As your service evolves and user needs shift, your metrics and research focus should evolve too. The government services we build have real impact on people's lives, and this approach helps ensure we're improving them with confidence rather than guesswork.

Kevin supports the Department of Veterans Affairs’ Disability Benefits Crew as a senior UX researcher. If you are interested in learning more about how Aquia can help your agency modernize its systems and processes, contact us at federal@aquia.us.

Aquia

Securing The Digital Transformation ®

Aquia is a cloud and cybersecurity digital services firm and “2024 Service-Disabled, Veteran-Owned Small Business (SDVOSB) of the Year” awardee. We empower mission owners in the U.S. government and public sector to achieve secure, efficient, and compliant digital transformation.

As strategic advisors and engineers, we help our customers develop and deploy innovative cloud and cybersecurity technologies quickly, adopt and implement digital transformation initiatives effectively, and navigate complex regulatory landscapes expertly. We provide multi-cloud engineering and advisory expertise for secure software delivery; security automation; SaaS security; cloud-native architecture; and governance, risk, and compliance (GRC) innovation.

Founded in 2021 by United States veterans, we are passionate about making our country digitally capable and secure, and driving transformational change across the public and private sectors. Aquia is an Amazon Web Services (AWS) Advanced Tier partner and member of the Google Cloud Partner Advantage Program.
