OTT APP USABILITY STUDY
One of the largest broadcast television companies wanted to understand the impressions, pain points, and overall experience of a realistic user base navigating its OTT application. A competitor's app was also tested for comparison.
*The client is anonymous for confidentiality reasons
Role: Lead Researcher
Method: In-Person Moderated Usability Test
Product: OTT News Streaming App
Insights
Users preferred the overall experience of the company app to the competitor's app.
This preference was based on content, overall appearance, and the top navigation bar.
Participants liked that the video starts playing where they left off.
Users struggled with being unable to control some of the app's automated features.
Users had to wait, or hunt for the video, to return the background video to full screen.
The auto-full-screen behavior sometimes interrupted browsing or reading.
Participants expected features found in other apps (such as Hulu and Netflix):
Recently-watched section.
Preview window when fast-forwarding.
The interrupter proved useful for keeping users in the app.
It prevented users from accidentally exiting the app on Roku.
A glitch on Fire TV, however, bypassed the interrupter.
The “Watch” carousel was intuitive to use, but users had mixed feelings about the automated features.
Auto-play and auto-full screen did not match every user's preference.
I created an "at-home" experience:
I simulated how a user would actually experience the application on their TV at home. The testing room was equipped with audio and visual capture devices and streamed live to stakeholders.
Participants
13 participants were recruited for this study:
7 Roku users + 6 Fire TV users
Age range: 20-53
Local to Boise Metro Area
7 Females; 6 Males
1-hour-long observation sessions
Methods
I created a series of tasks for users to perform that tested the following features on Roku and Fire TV, for both the company app and the competitor app:
App discovery
Home page navigation
24/7 stream and replays
Background video player
In-App navigation
Fast forward/pause
I also created a companion facilitator guide to take notes and ask task-specific follow-up questions.
Analysis
I took notes during each session and reviewed the recordings afterward.
This allowed me to ensure I was presenting an accurate depiction of the test and to find insights I missed during the sessions.
For each session I analyzed:
the ease of each task
pain points
user impressions of the UI and functionality
The final report was organized by the features tested in the app rather than by task.
This structure allowed me to run a competitive analysis of two different-looking apps with similar functionality and gave stakeholders a clear point of comparison.
I included clips from the associated session and quotes from users.
Design Recommendations (these are just a few based on the most obvious pain points)
Users should be able to easily return the background video to full screen. Users intuitively hit the back arrow or tried to click “now playing.”
The interrupter keeps users from accidentally leaving the app and should be consistent across both Roku and Fire TV versions of the app.
Seeing where the cursor was in the video while fast-forwarding made the experience feel quicker and easier in the competitor's app; this feature should be incorporated.
Reflections
Wait until the end to take observer questions and set my expectations from the start.
Because all the tests were streamed live via Zoom to the company's stakeholders, observers could ask questions and comment during the sessions. Some of the observers were the app's designers, and this was a great chance for cross-team collaboration. Unfortunately, their questions and comments sometimes interfered with the natural flow of the tasks and were not always phrased in ways that avoided biasing the data. Going forward, I will be more deliberate about how I receive live feedback from stakeholders: I will set clear expectations from the beginning to limit questions, or hold them until the end of the session.
Take notes immediately after each session when memory is the freshest.
Note-taking is a crucial part of a usability test, but it is a balance between capturing all the useful information you observe and staying present. In these tests, I took brief pen-and-paper notes on what users did and said. These notes seemed valuable at first, but when I watched the session recordings afterward, I found they had missed critical moments and important insights the camera captured. Memory is typically freshest right after an event, so instead of focusing on note-taking during the session, watching the playback immediately afterward and taking notes then would let me be more present in the session while still producing high-quality notes.