Sherry Liao

UXD
3 years in B2E, B2C product
Resume
Playground

UXR / HFE Intern @ Pegatron Corp.

Transforming a BMC Monitoring System to Support Diverse Diagnostic Workflows

Designed a configurable BMC monitoring dashboard that adapts to diverse engineering workflows, reducing diagnostic friction in complex system environments.

Human Factors Engineering · UX Design · B2B · Business
Overview

ROLE

UXR / HFE Intern · Taiwan

DURATION

1 month (2025/12 - 2026/01)

DOMAIN

AI server infrastructure · Cross-market (TW / US)

This project involved designing a BMC (Baseboard Management Controller) dashboard used by server administrators and engineers in enterprise environments. Unlike consumer-facing products, the system needed to support engineers with significantly different diagnostic workflows, technical expertise levels, and market expectations.

The problem

A single rigid dashboard was forcing diverse users into one workflow

The existing system presented a fundamental mismatch: the dashboard was structured around system architecture, not around how engineers actually diagnose issues. Three specific tensions were identified:

Workflow rigidity

The navigation assumed all engineers follow the same diagnostic path. In reality, some start with alerts, others begin with logs or metrics — and this varies by experience level and role.

Terminology gaps

Technical terms used in the interface did not have consistent meaning across teams or between Taiwan and U.S. markets, leading to different mental models for the same UI element.

Information architecture misalignment

System data was grouped by machine logic rather than user task logic, requiring engineers to mentally re-map information before they could act on it.

Research

Conducted interviews with server administrators and engineers across Taiwan and the U.S. to understand how they actually interpret system signals. Key finding:

Engineers prioritize different signals based on experience, role, and system familiarity. Senior engineers rely on pattern recognition across logs; junior engineers need structured, sequential guidance. Neither group could work efficiently in the same rigid layout.

Supporting findings from interviews:

Signal priority varies by role

Hardware-focused engineers prioritized alert states. Software engineers relied more heavily on log sequences and metric trends. The same dashboard created friction for each group, for different reasons.

Feature importance is task-dependent

The same feature (e.g. CPU temperature) carried different urgency depending on whether the engineer was doing routine monitoring or diagnosing a live incident.

Cross-market terminology mismatch

U.S.-based engineers used different vocabulary for the same system states compared to Taiwan-based teams, revealing a need for adaptable labeling and contextual help.

Design process

Four-stage process from system deconstruction to validated wireframes

1. System deconstruction

Reverse-engineered existing system functions and mapped the relationships between machine-level data and user-level tasks. This step produced a structured understanding of what the system could do vs. what users needed to accomplish.

2. Information architecture synthesis

Reorganized fragmented system data into user-centered groupings based on diagnostic task goals. Established an information hierarchy prioritizing critical signals over secondary monitoring data.

3. Multi-path workflow mapping

Defined distinct diagnostic paths for different user types (alert-first, log-first, metric-first). Identified key decision points where the interface needed to accommodate diverging user behaviors without forcing a single entry point.

4. Iterative validation with engineers

Created wireframes reflecting real-world diagnostic flows and conducted validation sessions with core users. Refined the IA and interaction patterns based on expert feedback across multiple cycles.
Key design decisions

Three decisions that shaped the final system

Decision 1 · Why configurable layouts?

Alternative considered: role-based fixed templates (one layout per user type).

Why we chose configurable: roles don't fully predict behavior. Same-role engineers had different habits. User-controlled layouts reduced the need to predict all future states.

Decision 2 · Unified view over separate tabs

Alternative considered: separate tabs for alerts, logs, and metrics.

Why we chose unified: cross-referencing across tabs created cognitive load during time-sensitive diagnostics. A unified view reduced context switching and supported pattern recognition.

Decision 3 · IA based on task goals, not system structure

Original structure: grouped by system component (CPU, memory, fans, etc.).

Redesigned structure: grouped by diagnostic goal (system health overview → anomaly detection → root cause investigation), matching real task sequences.
Impact

Reduced diagnostic friction across role types and markets

Quantitative usability metrics are not disclosed due to NDA restrictions.
Improved navigation alignment

Validation sessions showed engineers could locate critical information without reordering their workflow to match the interface — a core failure of the original system.

Reduced cross-referencing

The unified view and task-based IA reduced the number of navigation steps required to correlate alerts with their root-cause signals.

Configurable layout adoption

Engineers in validation sessions reported setting up personalized views immediately, and senior engineers in particular valued the ability to surface log patterns alongside metric graphs.

Reflection

This project pushed me to engage with system design at a depth beyond typical UI work — navigating ambiguity, technical domain knowledge, and users whose mental models diverged significantly from each other. The core lesson: when designing for expert users in high-complexity environments, the design problem is rarely about the interface itself. It's about whether the underlying information architecture reflects how real people actually think.

In addition to the dashboard, I contributed to HFE research for an AR exhibition installation and an EEG-controlled hand rehabilitation device — extending my ability to apply human factors principles across different interaction modalities.

Let's get to know each other.

Get in touch.

© Sherry Liao 2026
Resume · LinkedIn · Medium