
Beta testing

Analytics Beta

Planning and carrying out the research to define the next steps for my company's Analytics solution, giving our clients the power to make the right decisions.

Topics:

UX Research, Diary study, User interviews, Priority matrix, Stakeholder management

My company developed an advanced analytics software product designed to deliver comprehensive insights from SharePoint, empowering clients to craft data-driven business and communication strategies.

 

I led the beta testing phase of the product, devising a research plan that combined diary studies with user interviews. My responsibilities included identifying and documenting bugs that were undetected during earlier testing, uncovering and prioritising potential new features for future iterations, and fostering strong, collaborative relationships with the clients participating in the study.


By aligning user feedback with actionable improvements, my work was pivotal in refining the product, ensuring its reliability, and positioning it as an indispensable tool for our clients' strategic planning needs.

To avoid breaching confidentiality clauses, I will not disclose the names of the companies involved in this case study.


Details

Previous research


WHEN

Initial Research phase (early 2023) and Design phase (spring and summer of 2023).

AUTHORS

Sonia Bensouda, Product Designer

Lucia Rios, UX Consultant

Celia Fiallos, UX Consultant

METHODS

  • User interviews

  • Moderated usability testing

RESEARCH GOALS

Initial Research phase: understanding the needs of our clients, their experiences with Microsoft's built-in analytics solution, and the problems to be solved by our new product.

Design phase: validating and refining the initial design decisions.

Key insights

Gathering insights is made difficult by overwhelming interfaces. Users would like to view data in a more visual, efficient, and concise manner.

Content: Users want clear and succinct text content requiring minimal scrolling.

Interface: Users want a clear and simple layout to make navigation easier.

Dashboard conciseness: Users want to view metrics at a glance.


Users want a personalised experience of their analytics dashboard, allowing them to filter data and understand their audiences.

Timeframe: Users need to be able to view data over more flexible time frames than the current 90-day limit.


Filtering: Users want the ability to filter data in order to gather different insights.


Audience segmentation: Users want metrics on the different audiences in their organisation.

Users unanimously expressed that their main goal with the analytics dashboard is to track engagement.

User journey: Dashboard owners want to see the user journey to determine if the content is discoverable.


Content interaction: Dashboard owners want to track if people are viewing the shared content.

Community & Campaigns: Dashboard owners want to see engagement in these categories.

Users want to be able to visualise data graphically and gather insights in real time while being able to share reports and metrics with others.

Visualisation: The overall results show that people want to view data and metrics in a visual way.


Frequency: Users want to view updates in real time, with no delays beyond an hour.


Efficient sharing: Users want to be able to share reports efficiently without having to manually track changes in their metrics.

Beta testing approach

Participants

12 users from 5 different companies.

Structure

Week 0 

  • Introduction to the study and to the Page tier.

Week 1 

  • User interview.

  • Introduction to the Site tier.

Week 2 

  • User interview.

  • Introduction to the Hub tier.

Week 3 

  • User interview.

  • Introduction to the SP tier.

Week 4 

  • User interview.

  • Close-out.

Challenges

The main obstacle was that the product was large and complex. To ensure we got a good understanding of its overall performance, I devised a research plan that combined diary studies and interviews over a period of 4 weeks, focusing on the functionalities of one tier (or product area) each week.


Our users' limited availability, as busy professionals from a variety of industries, was another concern and a reason for devising an asynchronous, unmoderated method that they could fit around their schedules.

Methodology
  • Diary studies

  • User interviews

Why?

The diary studies were meant to capture users' habits and everyday engagement with the product, and to log the problems they experienced in a real context. The interviews were aimed at assessing their overall satisfaction while giving us the chance to ask follow-up questions. The study was structured over 4 weeks with one 45-minute session per week, which allowed us to gather granular information about each product functionality. Users were usually interviewed in groups of 2-3.

Analysis


Page tier
  • Mark given by users: 7.2

  • Comments: 7/12 users deemed the Page tier intuitive and easy to use.

Site tier
  • Mark given by users: 7.4

  • Comments: 5 users deemed the tier useful for understanding how news content performs.

Hub tier
  • Mark given by users: 7

  • Comments: the number of errors was higher than in the previous tiers and was a source of frustration.

SP tier
  • Mark given by users: 7

  • Comments: the tier was deemed especially useful for audit purposes, but more bugs were reported.

Themes
  • 6/12 users said it would be very useful to be able to export the data from the platform, so that they could create reports to include in their presentations.

  • 5/12 users thought it would be useful to be able to pick a date window in the filters to narrow down the data.

  • 5/12 users questioned the reliability of the data and where it came from.

  • 6/12 users found the loading time excessive, particularly in the Hub and SP tiers.

  • 5/12 users experienced some kind of technical problem in their use of the platform.

  • 6/12 users did not have a good experience with the Department and Location filters for their audiences, because their company directory was not up to date and the data was therefore unreliable.

  • Answers about the usefulness of certain charts or tools over others were mixed, with no particular feature emerging as the most useful; the value of each depended on the specific user's needs.

  • There was a correlation between the mark each tier received and the number of technical errors experienced by the users: errors were 3 times more frequent in the Hub and SP tiers than in the Page and Site tiers, as the sketch below illustrates.
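
To make that last theme concrete, here is a minimal sketch of the mark-vs-errors check in Python. The average marks are the ones reported above; the error counts are hypothetical stand-ins that preserve the roughly 3:1 ratio, since the exact per-tier counts are not reproduced in this case study.

```python
# Minimal sketch: checking the relationship between the average mark
# each tier received and the number of errors users reported there.
# Marks are the averages quoted above; error counts are HYPOTHETICAL
# illustrations preserving the ~3:1 Hub/SP vs Page/Site ratio.
from statistics import correlation  # Python 3.10+

tiers = ["Page", "Site", "Hub", "SP"]
marks = [7.2, 7.4, 7.0, 7.0]
errors = [2, 2, 6, 6]  # hypothetical counts, not the study's raw data

r = correlation(marks, errors)
print(f"Pearson r between mark and error count: {r:.2f}")  # ~ -0.90
# A strongly negative r is consistent with the theme:
# more errors in a tier, lower mark from users.
```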

Insights
  • Users would appreciate being able to export the information from the platform in PDF format, so that they can include it in their presentations to colleagues and stakeholders.

  • Users would like to select specific dates to increase the flexibility and accuracy of their results.

  • Users would like to know the origin of the information on their screen, so that they feel reassured about its accuracy.

  • Users would like to see the data filtered by their audience's department and location, but the current system is inefficient because the data directory it relies on is not kept up to date.

Priorities

Once I had aggregated and analysed the results of the studies, the next step was to define the tasks and features that would be incorporated into the backlog and the order of priority they should hold. I consulted with the development team throughout the process and defined three priority groups based on each item's potential to disrupt the core use of the product, its desirability for users, and its feasibility.


Priority 0 includes the issues that prevent users from getting a full, satisfactory experience from the product, like bugs.

Priority 1 corresponds with the features that are desirable for users and have a good balance of value and feasibility.

Priority 2 corresponds with the features that are desirable for users but, due to their complexity or the limited value they bring to the product, can be postponed in the backlog for later iterations.

Priority 0 - Bugs and other technical problems

Errors per tier

The errors-per-tier metric shows that the Hub and SharePoint tiers produced the highest number of errors, some of them critical to the correct functioning of the product, as detailed below.

Type of error

The most frequently reported error (5 reports) was missing data in certain charts, which in the Hub and SharePoint tiers sometimes extended to the whole tier (4 reports).

Three instances were also reported of discrepancies between the product's data and the data shown by SharePoint.


These two kinds of error were persistent and the most frustrating for users, as they affected their use of the product as a whole and damaged their opinion of it.

Priority 1 & 2 - Features

I used a priority matrix to visualise the features mentioned by users in relation to the added value they would bring to the product and their feasibility in terms of design and production effort, as sketched below.
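
As an illustration of the technique, the sketch below plots such a matrix in Python. The feasibility and value scores are hypothetical stand-ins chosen to mirror the groupings described below, not the actual scores from the study.

```python
# Minimal sketch of a value/feasibility priority matrix.
# Scores (1-10) are HYPOTHETICAL illustrations, not study data.
import matplotlib.pyplot as plt

features = {
    # name: (feasibility, value to users)
    "Data export": (8, 9),
    "Loading time": (7, 8),
    "Custom dates": (8, 7),
    "Sankey re-design": (7, 3),
    "Alternative data source": (2, 6),
}

fig, ax = plt.subplots()
for name, (feasibility, value) in features.items():
    ax.scatter(feasibility, value)
    ax.annotate(name, (feasibility, value),
                xytext=(5, 5), textcoords="offset points")

# Quadrant guides: top-right (high value, high feasibility) = Priority 1;
# the remaining desirable-but-costly features fall into Priority 2.
ax.axhline(5, linestyle="--", linewidth=0.8)
ax.axvline(5, linestyle="--", linewidth=0.8)
ax.set_xlim(0, 10)
ax.set_ylim(0, 10)
ax.set_xlabel("Feasibility")
ax.set_ylabel("Value to users")
ax.set_title("Feature priority matrix (illustrative)")
plt.show()
```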

Priority 1

The three features that presented the best value/feasibility ratio were:

  • Data export: Allowing users to export the product's data in a PDF/JPG/PNG format that they could use in their presentations.

  • Loading time: Reducing the time it takes for the product to scan the clients' tenant and provide its results.

  • Custom dates: Allowing users to select custom dates to visualise the data associated with a particular time window.

Priority 2

The two features that presented more complications or less added value were:

  • Sankey re-design: Some users reported that the design of one of the graphic charts in the product was confusing. Re-designing this chart would be feasible, but the small number of users who reported confusion diminished its value.

  • Alternative data source: Some users' companies did not keep an organised, up-to-date directory in Microsoft Azure, which impacted the usefulness of some of the data related to locations or departments. Engineering the product to collect its information from alternative data sources in these specific cases would require considerable resources with no certainty of it ultimately being possible.

Conclusions

The research study that I conducted in this project allowed me to make recommendations for the improvement of the product and had an impact on users' satisfaction. My reports directly influenced the refinement of the product, reducing its technical errors and making it better adapted to the day-to-day needs of our end users. The main technical problems detected were addressed within the next two iterations, and loading times were reduced by 40% over the following two months.

 

This work was instrumental in driving the product's growth and success, improving adoption rates and reinforcing its value for our clients. By bridging the gap between user needs and technical implementation, my research ensured the product delivered tangible results, solidifying its value proposition and supporting the company's broader strategic objectives.
