UX designer / Product designer
UX Metrics Strategy
A strategy plan for the UX team to quantitatively track progress prior to product release


Background
The idea to work with a UX Metrics Strategy was born during a two-week online course with Jared Spool called “Persuasive UX Metrics for UX Leaders”, which I attended a month before this project started. This was my final project of the UX Designer program, and I worked on it alongside my ordinary UX tasks at Consafe Logistics.
Rationale
Because product development stretched over several years, it was important to establish a continuous testing strategy: measuring usability data prior to deployment to make sure our design fulfilled the technical requirements and delivered an efficient, effective, and enjoyable product to our users.
Goal
The goal of this project was to outline the first draft of the UX Metrics Strategy for the UX team at Consafe Logistics and establish ways to start measuring the usability of our product prior to release. The purpose was to give us arguments to persuade our key stakeholders of the value of UX design work by showing results and progress communicated in numbers.
Consafe Logistics is a software company that provides solutions for warehouse management operations, administration, and control. Warehouse operators, the target group of users, use the product called Astro WMS®, which is extremely powerful but also complex and requires a great deal of technical understanding. New users make mistakes easily, lose time figuring out the system, and often need assistance to be able to work. The UX team was tasked with designing a modern user interface for warehouse operations that builds on Consafe Logistics’ strong technical platform and sharp product functionality to improve the current user experience.
My process
Tasks I worked with during the two-week training with Jared Spool included prototyping, forming a test plan, usability testing, data analysis and visualization, report writing, and presenting results to the R&D department. After the UX metrics course, I could not stop thinking about how to ground our design efforts in numbers. I read plenty of articles on the topic; however, the situation seemed bleak. We had no access to quantitative data from customer servers running the current product and had not yet delivered the new version to any customer, so there was no way to compare or track any conventional Google Analytics-style data. As I read on, I found an instructional book by Albert & Tullis describing how to measure usability based on the performance of a prototype during usability testing. Goldmine!



While I kept cracking Albert & Tullis’ “manual”, I co-created a prototype for two common scenarios in which operators pick orders at the warehouse. Afterward, I planned usability sessions and did my best to establish contact with our existing customers to recruit test participants. To prepare for the sessions, my colleague and I carefully crafted the questions we wanted to ask and the actions we wanted to observe the user performing. I divided the flow into sections based on the task and a) established what counts as successful completion of each task, and b) formulated a hypothesis for what could go wrong in each one.
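
To give a sense of how such a plan can be structured, here is a minimal sketch in Python; the task names, success criteria, and hypotheses are hypothetical placeholders, not the actual plan used in the sessions.

```python
# A sketch of how each scenario could be broken down in the test plan.
# All task names, success criteria, and hypotheses are invented examples.
test_plan = [
    {
        "task": "Start a picking round",
        "success": "Operator opens the correct pick list without assistance",
        "hypothesis": "Operator may not find the entry point in the menu",
    },
    {
        "task": "Confirm a picked item",
        "success": "Confirmed quantity matches the order line",
        "hypothesis": "Operator may confirm the wrong quantity",
    },
]

for step in test_plan:
    print(f"{step['task']} -> success criterion: {step['success']}")
```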



The next step was to perform individual remote usability testing with 18 warehouse operators who had experience using our current software. Establishing contact and finding them took me a couple of weeks, but the effort laid the ground for a solid list of customer contacts. I prepared a table where my colleague could note down the results and observations for each informant while I moderated every test. See an example of a filled-out table above (Figure D). When the tests were done, I went through the recordings and filled out the rest of the table, which gave us data to analyze. I then applied the theory of measuring usability to the test data to visualize the first iteration of task success and error metrics, letting us track progress in the usability of the company’s future product.
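
As an illustration of the kind of analysis behind those visualizations, the sketch below computes a binary task success rate and an error count per task, in the spirit of Albert & Tullis. The figures are invented for the example; the real numbers came from the 18 recorded sessions.

```python
# Minimal sketch: task success rate and error counts per task.
# One entry per participant; 1 = task completed successfully, 0 = failure.
# All values below are illustrative, not measured data.
import matplotlib.pyplot as plt

results = {
    "Task 1": [1, 1, 0, 1, 1, 1, 0, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 0],
    "Task 2": [1, 0, 0, 1, 1, 0, 1, 1, 0, 1, 1, 1, 0, 1, 1, 0, 1, 1],
}
errors = {"Task 1": 7, "Task 2": 13}  # total errors observed per task

success_rate = {t: sum(r) / len(r) for t, r in results.items()}

fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(8, 3))
ax1.bar(list(success_rate), [v * 100 for v in success_rate.values()])
ax1.set_ylabel("Task success (%)")
ax2.bar(list(errors), list(errors.values()))
ax2.set_ylabel("Errors observed")
plt.tight_layout()
plt.show()
```

Keeping the success metric binary makes it easy to track between test rounds: each new prototype iteration can be plotted against the previous one.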


Result
The work resulted in a quantitative visualization of the data from usability testing, which helped us target our improvement efforts and prioritize tasks. Documenting the strategy gave us the opportunity to start tracking our usability metrics over time. Presenting the results gave us arguments for the value the UX design team adds to the company’s product development, with results and progress communicated in numbers.


Learnings
I learned that there are two thresholds for successfully implementing a UX metrics strategy within a UX design team. The first is realizing that it does not have to be hard to start evaluating software performance quantitatively, even prior to release. The second is that putting together the strategy must be a collaborative effort that solidifies a way of thinking, rather than a static manifesto called “UX Strategy”. I therefore see a UX metrics strategy as a living guide that every UX team should strive to maintain in order to see and understand the success of development efforts over time. Fully implementing the strategy requires a stable design process in which continuous usability testing is not skipped due to “time constraints”. Reflecting on the work, there has been much less follow-up on the strategy than anticipated, due to the lack of usability testing.
Methods
Hi-fi prototyping and remote usability testing. Albert & Tullis’ UX metrics approach: task success and error metrics to measure the solution’s effectiveness, and time-on-task and learnability metrics to measure its efficiency.
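
As a rough sketch of how these efficiency metrics can be computed, assuming invented timings rather than our measured data:

```python
# Efficiency metrics sketch: mean time-on-task, and learnability as the
# change in time-on-task across repeated trials. Values are illustrative.
from statistics import mean

# Seconds per participant for one picking task (invented values)
time_on_task = [74, 61, 88, 57, 69, 80, 66, 73]
print(f"Mean time-on-task: {mean(time_on_task):.0f} s")

# Learnability: the same task repeated across trials; a downward trend
# suggests operators are learning the new interface
trials = {1: [90, 95, 102], 2: [70, 74, 81], 3: [58, 60, 66]}
for trial, times in trials.items():
    print(f"Trial {trial}: mean time-on-task {mean(times):.0f} s")
```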
Tools
Sketch, MIRO, MS Office suite
Duration
Approximately 200 hours spread across 3 months, 2022.