︎

︎daniela
︎aka “dani”

︎ aka badguy studio





daniela friedson-trujillo


XR/UX Design





HUD with complete voice UI for the HoloLens 2 MR headset

Confidential



Related Projects

MASTERY
Gamified eLearning platform for desktop and mobile devices


MAGPIE
Framework for developing intelligent VR training systems

HUD for Army Field Medics

Wearable MR display. Roles: UX researcher, visual and audio UI designer

This project was designed for the Microsoft HoloLens 2 to enable military medical personnel to efficiently and effectively input critical medical information across a range of operational environments.

I worked as a researcher and designer on a small team of six, using optical character recognition and natural language processing techniques for passive, context-aware data capture. My role was to design the heads-up display (HUD) interfaces that keep the medic continuously aware of documentation status.
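The production system is confidential, so the sketch below uses entirely hypothetical names (DocField, PatientDoc, needs_attention) to illustrate the core idea of documentation-status awareness: each TCCC field tracks what has been captured and whether the medic has confirmed it, so the HUD knows what to flag.

# A minimal sketch, with hypothetical names, of a documentation-status model
# behind a HUD like this one: each field knows whether any value has been
# captured and whether the medic has confirmed it.
from dataclasses import dataclass, field

@dataclass
class DocField:
    value: str | None = None   # data captured so far (OCR, NLP, or manual)
    confirmed: bool = False    # True once the medic reviews it via voice UI

@dataclass
class PatientDoc:
    battle_roster_id: str
    fields: dict[str, DocField] = field(default_factory=dict)

    def needs_attention(self) -> list[str]:
        """Field names the HUD should flag for the medic."""
        return [name for name, f in self.fields.items()
                if f.value is None or not f.confirmed]

doc = PatientDoc("A1234", {n: DocField() for n in ("blood_type", "vitals", "treatments")})
doc.fields["blood_type"] = DocField(value="O+")   # passively captured, still unconfirmed
print(doc.needs_attention())  # -> ['blood_type', 'vitals', 'treatments']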

*HUD DISPLAY CONFIDENTIAL*
The main HUD presents vital information in an at-a-glance view without distracting the medic from administering life-saving treatments. Top left: patient list and selected patient data. Top middle: main documentation inputs and status. Top right: user-defined timers. Left: sub-display headers. Bottom right: user feedback icons for voice recognition and data capture.

Traditionally, medical personnel are required to complete a Tactical Combat Casualty Care card (or "TCCC card") after treating each patient. However, due to the overwhelming nature of treating multiple patients in active war-fighting environments, important patient data is often missing or incorrect. This can lead to critical issues during patient hand-off and continuing care.

The traditional TCCC card used by Army field medics to document patient care. Because medics treat multiple patients hastily in dangerous environments, they often cannot recall all the details needed to fill out these cards completely. Our challenge was to ensure the XR product captured all the information these cards require.

We developed a multimodal interface that supports complex medical documentation requirements while the user is engaged in rigorous life-saving tasks in challenging operational environments. While providing patient care, the medic can monitor the HUD and gain an at-a-glance understanding of patient status and remaining documentation needs.
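As one illustration of passive capture, the sketch below routes a transcribed utterance to a TCCC card section using simple keyword spotting. This is a toy stand-in: the actual system used far more capable natural language processing, and the sections and keywords shown are illustrative.

# Hypothetical sketch: route a transcribed utterance to the TCCC card
# section it most likely documents, via keyword spotting.
import re

SECTION_KEYWORDS = {
    "mechanism_of_injury": ["ied", "gsw", "blast", "burn", "fall"],
    "signs_and_symptoms": ["pulse", "blood pressure", "respiration", "avpu"],
    "treatments": ["tourniquet", "dressing", "airway", "needle", "morphine"],
}

def route_utterance(transcript: str) -> str | None:
    """Return the section with the most keyword hits, or None if no match."""
    words = transcript.lower()
    scores = {
        section: sum(bool(re.search(rf"\b{kw}\b", words)) for kw in kws)
        for section, kws in SECTION_KEYWORDS.items()
    }
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else None

print(route_utterance("tourniquet applied to the left leg at 1430"))
# -> 'treatments'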

User Journey
︎
New patient added
Optical character recognition results in passive context-aware data capture from a scanned Battle Roster ID. A new patient is added to the HUD, and corresponding personal data (name, blood type, allergies, etc.) is shown.
︎
HUD Updates as medic performs patient care
As the medic works, patient information such as injury locations, recorded vital signs, and administered treatments are passively captured by the system and recorded on the HUD.
︎
Medic reviews HUD and manually adds information
The medic glances at the HUD interface to check for any documentation or assessment procedures that are remaining. By using the voice UI, the medic can manually add documentation information, confirm and/or edit passively captured information, and capture audio notes and visual photographs.
︎
Documentation used for reports and hand-off
Once the medic is in a safe environment, all interactions and documentation is available for review on the HUD or can be exported to another device. Patient history can also be transferred to other medical staff to facilitate patient hand-off and continuing care.
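The voice-command dispatch referenced in step 3 could look something like the following. The command grammar (confirm, note, photo) and handler names are hypothetical, not the production design; the point is that a recognized phrase either matches a documentation command or falls through to passive capture.

# Hypothetical voice-UI dispatch: match the first word of a recognized
# phrase to a documentation action; unmatched phrases fall through to
# passive capture.
from typing import Callable

def confirm_field(arg: str) -> None: print(f"confirmed: {arg}")
def add_note(arg: str) -> None: print(f"audio note attached: {arg}")
def take_photo(arg: str) -> None: print("photo captured and linked to patient record")

COMMANDS: dict[str, Callable[[str], None]] = {
    "confirm": confirm_field,
    "note": add_note,
    "photo": take_photo,
}

def dispatch(transcript: str) -> bool:
    """Run the first matching voice command; return False if none matched."""
    verb, _, rest = transcript.lower().partition(" ")
    handler = COMMANDS.get(verb)
    if handler is None:
        return False          # not a command; hand off to passive capture
    handler(rest)
    return True

dispatch("confirm blood type")   # -> confirmed: blood type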

Additionally, I designed a component library to universally support qualitative and quantitative documentation throughout the interface. High contrast, intuitive color schemes, and consistent placement let users easily identify components in all lighting conditions and also support color-blindness accessibility.

Component Key:
Orange box = no documentation
Orange line = partial documentation
Blue line = passive data capture
Checkbox = complete documentation
White circle = linked media attachment
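Expressed as a lookup, the key above might map documentation states to visual components like this; the names are illustrative, not the production code.

# Illustrative sketch of the component key: each documentation state maps
# to one high-contrast visual component, with an optional media marker.
from enum import Enum

class DocStatus(Enum):
    NONE = "orange box"        # no documentation
    PARTIAL = "orange line"    # partial documentation
    PASSIVE = "blue line"      # passive data capture, unconfirmed
    COMPLETE = "checkbox"      # complete documentation

def components_for(status: DocStatus, has_media: bool = False) -> list[str]:
    """Components the HUD renders for a field in the given state."""
    parts = [status.value]
    if has_media:
        parts.append("white circle")  # linked media attachment
    return parts

print(components_for(DocStatus.PASSIVE, has_media=True))
# -> ['blue line', 'white circle']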

Using outputs from a work domain analysis and a user-centered iterative design approach, I designed a set of multimodal interfaces and context-aware support tools that more efficiently and effectively support the information-capture activities of medical personnel across operational contexts and echelons of care. These interfaces included augmented-reality wearable glasses for visual image capture and heads-up peripheral information display, advanced natural language processing technologies, and a range of voice-based input methods for natural, more robust audio capture in noisy contexts. This work will inform extended design, development, and integration efforts and directly support demonstration and evaluation in follow-on phases.


daniela@badguy.studio