I helped Mux generate new design solutions for presenting video analytics to content providers like IGN and Disney. The driving goal was to increase the fidelity of data presented within the same visual space. We did this by deconstructing the data relationships, identifying a more effective visualization framework, and defining a new language for presentation.
Role Product Designer
Team Product Design, Engineering
Date 2016
Existing product
Existing analytics were presented in two scales, 'Player Time' and 'Relative Time,' communicated within one single-axis timeline. You can see why this approach challenged user comprehension of the story behind the data.
Data deconstruction
By exploring the raw data powering the existing visualizations, we can understand the current visualization logic and identify better solutions. Taking the example data set below, I set out to understand User Events and Player Time in the context of Session Time.
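The original data set isn't reproduced here, but a minimal sketch of the raw event shape we worked from, using hypothetical field names and made-up values, might look like this:

```typescript
// A minimal sketch of one raw view's event log, with hypothetical field names.
// Each event carries two clocks: playerTime (position in the video) and
// sessionTime (wall-clock time since the view started).
interface RawEvent {
  type: string;        // e.g. "viewstart", "play", "seeking", "viewend"
  playerTime: number;  // seconds into the video ("Player Time")
  sessionTime: number; // seconds into the viewing session ("Session Time")
}

const exampleView: RawEvent[] = [
  { type: "viewstart", playerTime: 0,   sessionTime: 0 },
  { type: "play",      playerTime: 0,   sessionTime: 1.2 },
  { type: "playing",   playerTime: 0,   sessionTime: 2.5 },
  { type: "pause",     playerTime: 30,  sessionTime: 32.5 },
  { type: "seeking",   playerTime: 30,  sessionTime: 40.1 },
  { type: "seeked",    playerTime: 95,  sessionTime: 41.0 },
  { type: "viewend",   playerTime: 160, sessionTime: 106.0 },
];
```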
Combining sequences
An opportunity to simplify our communicative logic presented itself when we explored combining raw events into sequences, reducing the amount of ink required to communicate the data; a sketch of this grouping follows the two lists below.
From the 11 original Unique Events
- viewstart
- adstart
- adend
- play
- playing
- pause
- seeking
- seeked
- play
- playing
- viewend
We can refine these into 6 Event Sequences
- Startup
- Playback
- Paused
- Rebuffering
- Seeked
- Error
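A minimal sketch of that grouping, assuming the raw event shape above and simplified opening/closing rules rather than Mux's production logic, could look like:

```typescript
// RawEvent as sketched in the Data deconstruction section.
interface RawEvent { type: string; playerTime: number; sessionTime: number; }

type SequenceType =
  | "Startup" | "Playback" | "Paused" | "Rebuffering" | "Seeked" | "Error";

interface EventSequence {
  type: SequenceType;
  startSessionTime: number;
  endSessionTime: number;
  startPlayerTime: number;
  endPlayerTime: number;
}

// Map each raw event type that can open a sequence to its sequence type.
// These pairings are simplified assumptions, not Mux's production rules.
const openers: Record<string, SequenceType> = {
  viewstart: "Startup",
  playing: "Playback",
  pause: "Paused",
  waiting: "Rebuffering",
  seeking: "Seeked",
  error: "Error",
};

// Collapse a raw event log into sequences: a sequence stays open until the
// next opening event (or viewend) arrives, which also marks its end point.
function toSequences(events: RawEvent[]): EventSequence[] {
  const sequences: EventSequence[] = [];
  let open: { type: SequenceType; start: RawEvent } | null = null;

  for (const event of events) {
    const nextType = openers[event.type];
    if (open && (nextType !== undefined || event.type === "viewend")) {
      sequences.push({
        type: open.type,
        startSessionTime: open.start.sessionTime,
        endSessionTime: event.sessionTime,
        startPlayerTime: open.start.playerTime,
        endPlayerTime: event.playerTime,
      });
      open = null;
    }
    if (nextType !== undefined) open = { type: nextType, start: event };
  }
  return sequences;
}
```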
Visualizing two dimensions
It was time to explore a dual-axis timeline solution to improve the clarity of Events in the context of two scales: Video Time (y-axis) and Session Time (x-axis).
Combining this strategy with Event Sequencing iconography allows us to tell a more descriptive story with very little text or numbers. We end up with a (hopefully) up-and-to-the-right visualization that allows our end users to grasp the full story of a Viewer's experience, a massive improvement over the visually-ambiguous single-axis timeline.
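A minimal sketch of the dual-axis mapping, assuming simple pixels-per-second scale factors on each axis, might look like:

```typescript
// A minimal sketch of the dual-axis mapping: Session Time drives x and
// Video Time drives y, so Playback slopes up and to the right, Paused and
// Rebuffering run flat, and Seeked jumps vertically. The scale factors are
// assumed pixels-per-second values.
interface Segment {
  x1: number;
  y1: number;
  x2: number;
  y2: number;
}

function toSegment(
  seq: {
    startSessionTime: number;
    endSessionTime: number;
    startPlayerTime: number;
    endPlayerTime: number;
  },
  pxPerSecondX: number,
  pxPerSecondY: number
): Segment {
  return {
    x1: seq.startSessionTime * pxPerSecondX,
    y1: seq.startPlayerTime * pxPerSecondY,
    x2: seq.endSessionTime * pxPerSecondX,
    y2: seq.endPlayerTime * pxPerSecondY,
  };
}
```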
Defining a Visual Language
Creating a distinct visual identity for each Event Sequence allows us to represent data more efficiently without the need for labels.
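One way to think of that visual language is as a lookup table keyed by sequence type; the colors and icon names below are illustrative assumptions, not the shipped palette:

```typescript
// A minimal sketch of a per-sequence visual identity. Every color and icon
// name here is an illustrative assumption.
const sequenceStyles: Record<string, { color: string; icon: string }> = {
  Startup:     { color: "#9b59b6", icon: "bolt" },
  Playback:    { color: "#2ecc71", icon: "play" },
  Paused:      { color: "#f1c40f", icon: "pause" },
  Rebuffering: { color: "#e67e22", icon: "spinner" },
  Seeked:      { color: "#3498db", icon: "arrows-h" },
  Error:       { color: "#e74c3c", icon: "alert" },
};
```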
Edge-case solutions
Recognizing edge cases that threaten our display model, we incorporated visual abbreviations for exaggerated graph distances, such as long Seeking and Pausing, allowing us to display more actionable data in one view.
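A minimal sketch of the abbreviation idea, with an assumed 30-second cap standing in for whatever threshold the real display uses:

```typescript
// A minimal sketch of visual abbreviation: cap the drawn Session Time width
// of long Paused or Seeked sequences so one extreme gap doesn't dominate the
// view. The 30-second cap and the sequence shape are illustrative assumptions.
const MAX_DRAWN_SECONDS = 30;

function drawnDuration(seq: {
  type: string;
  startSessionTime: number;
  endSessionTime: number;
}): number {
  const actual = seq.endSessionTime - seq.startSessionTime;
  const canAbbreviate = seq.type === "Paused" || seq.type === "Seeked";
  return canAbbreviate ? Math.min(actual, MAX_DRAWN_SECONDS) : actual;
}
```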
Presentation format
Each Session Event contains context for Video and Session Time. In a sense, each Event has its very own graph. By orienting our UI on individual Events rather than a universal time graph, we are able to incorporate conditional Event formatting without the constraint of a universal time scale.
Hover and Fade
Hovering over individual Session Events exposes exact Video and Session Time context for that single Event, allowing the user journey to be explored one event at a time. In addition to employing hover interactions, we can also fade all unselected Events to improve legibility, as shown in the next example.
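A minimal sketch of the hover-and-fade behavior in plain DOM terms, where the container id and class names are assumptions rather than the real markup:

```typescript
// A minimal sketch of hover-and-fade, assuming each Event Sequence renders
// as an element with class "event" inside a container with id "timeline".
const timeline = document.getElementById("timeline")!;
const eventEls = Array.from(timeline.querySelectorAll<HTMLElement>(".event"));

eventEls.forEach((el) => {
  el.addEventListener("mouseenter", () => {
    // Fade every sequence except the hovered one; a tooltip showing the
    // exact Video / Session Time context would be populated here as well.
    eventEls.forEach((other) => other.classList.toggle("faded", other !== el));
  });
  el.addEventListener("mouseleave", () => {
    eventEls.forEach((other) => other.classList.remove("faded"));
  });
});
```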
Gross Time
In addition to the Video and Session Time duration of each Event, we can also communicate overall Video and Session Time using references along the x and y axes. Exposing these totals as the user hovers over any given Event allows us to respect the time leaps compressed in conditionally formatted Events.
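A minimal sketch of how that axis readout could report true elapsed time even when some Events are drawn abbreviated, using assumed field names:

```typescript
// A minimal sketch of a gross-time axis readout. Each drawn sequence keeps
// both its actual and its abbreviated duration (assumed field names), so the
// hovered Event's axis label can sum real Session Time rather than pixels.
interface DrawnSequence {
  actualSeconds: number; // real Session Time elapsed in this sequence
  drawnSeconds: number;  // abbreviated duration used only for layout
}

function grossSessionTime(sequences: DrawnSequence[], hoveredIndex: number): number {
  // Total real Session Time elapsed up to and including the hovered Event.
  return sequences
    .slice(0, hoveredIndex + 1)
    .reduce((total, seq) => total + seq.actualSeconds, 0);
}
```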