Case study #1

Conveying the right information to doctors

2020-2021 Diagnostic Robotics Ltd.

Overview

The Diagnostic Robotics system delivers medical summaries generated by patient questionnaires to MDs in emergency departments and home-care settings.

The goal was to improve the low accuracy of medical summaries, which, through the research, I found to be caused mainly by patients selecting incorrect symptoms.

This realization shifted the focus of the research toward helping users choose symptoms correctly.

Roles and Responsibilities

  • Product design - Myself, with Shani Brusilovsky, Head of Design
  • Medical team - MDs from the Diagnostic Robotics team
  • Product team - Janna Tenenbaum-Katan, Orad Weisberg

Scope & Unique factors

  • The overarching scope of the research was to improve the medical summary score metric.
  • During the research, the scope shifted towards improving the symptom selection match score (to improve the medical summary score metric).
  • After this shift, the scope was not to prevent wrong selections or to invent a new interface, but to improve the current one.
  • Limited data (due to early test phase)
  • No option for user interviews abroad (interviews were possible only in the Rambam ER)
  • No option to record sessions (due to medical privacy restrictions)

The pain

When testing the platform, we noticed inaccuracies in patients' medical summaries and set out to understand the reason for them.

Problem statement

A medical summary is the output of a patient-filled medical questionnaire, provided to doctors before treatment. This study emerged from the low accuracy score of the Diagnostic Robotics medical summary.

The business goal

  • Inaccurate medical summaries disqualify the product itself.
  • Inaccurate medical summaries feed wrong data into the ML models, which are the company's main objective.

The patient meets the medical questionnaire in the ER or in home care (telemedicine)

The product high level flow

The patient answers demographic questions, chooses a symptom they suffer from, and completes a medical questionnaire created by a medical team; their answers then go through an ML process and are served to the MD as a medical summary.
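The flow above can be sketched as a small pipeline. This is an illustrative Python sketch, not the actual system: the class and function names are hypothetical, and the real ML step is stood in for by simple formatting.

```python
from dataclasses import dataclass, field

@dataclass
class PatientIntake:
    """Hypothetical container for one patient's questionnaire session."""
    demographics: dict
    symptom: str                      # the symptom selected by the patient
    answers: dict = field(default_factory=dict)  # questionnaire answers

def build_summary(intake: PatientIntake) -> str:
    """Stand-in for the ML step: turn the intake into a textual summary.

    The real system runs a learned model over the answers; this sketch
    just formats them, to show where the symptom selection feeds in."""
    lines = [f"Chief complaint: {intake.symptom}"]
    lines += [f"- {question}: {answer}" for question, answer in intake.answers.items()]
    return "\n".join(lines)

# Example session: note that a wrong symptom choice here propagates
# straight into the summary the MD reads.
intake = PatientIntake(
    demographics={"age": 42},
    symptom="shoulder pain",
    answers={"Onset": "after first vaccine dose", "Severity": "moderate"},
)
print(build_summary(intake))
```

The sketch makes the case study's core point concrete: the summary is only as accurate as the symptom selected at the top of the pipeline.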

Example of medical summary inaccuracy

In a face-to-face meeting with their doctor, a patient asked about getting a second dose of a vaccine, in light of shoulder pain that appeared after the first dose.

In the digital questionnaire, the patient selected the symptoms "shoulder injury" and "arm weakness".

Focus on symptom selection phase

After analyzing the medical summaries, we found that some of the inaccuracy stems from wrong choices during the symptom selection phase, which lead to the wrong questionnaire being served and a false medical summary output.

The process

  • Understand the low score of medical summaries
  • Shift the research to understand low symptom match
  • Research symptom match
  • Assume symptom match issues and ideate solutions
  • Test the solutions
  • Implement the solutions that improved symptom match

Research

The low medical summary accuracy results from patients' choices in the symptom selection phase: improved symptom matching would raise the medical summary score.

Current symptom selection:

Analyzing cases scored as "indirect match" and "no match".

After realizing that "indirect match" and "no match" scores were among the leading causes of low medical summary accuracy, we divided the cases into groups and gave each group a title:

Following the division into four groups, I decided to focus on the group "I chose a symptom, but it is not accurate" to try to improve the symptom match.
For this purpose, I performed four different studies:

Research methods

1. User research in the emergency department

I did user research in the emergency department of Rambam Hospital during its clinical trial, studying the doctors and patients using the system and comparing them to face-to-face triage (the process of sorting people based on their need for medical treatment). These visits helped me understand the gap between what patients think they have and what doctors think needs to be treated first. For example, patients usually talk about their most significant pain, even if it's something they have had for years, rather than what brought them to the ER, which is much more relevant to the MD.

2. Analysis of incorrect medical summaries

This included going through the medical summaries with the medical team and identifying problems in symptom selection. For example, we could see a choice of "arm injury" instead of "arm pain", where the patient didn't understand the difference.

3. Understanding “Diagnostic Robotics” medical team’s state of mind

Another area of research was the questionnaire-writing process: talking to the doctors who write the questions and understanding why they ask specific questions and what they are trying to learn from them.

One thing I learned that is very relevant for injury cases is that the doctor often asks situation-related questions. For example, for a patient whose headache started with an injury from an external source, the doctor would want to rule out things like a concussion.

4. User testing wrong selection

I decided to better understand users with the help of a testing platform: I let different users choose symptoms on the existing interface and mapped selection problems from there.

A patient who didn't know which body part to choose for "high blood pressure" didn't think to use the search field, because the body figure is so dominant.

Assume and ideate

After the research, I came up with a list of reasons why users choose incorrect and inaccurate symptoms; based on these assumptions, I began to ideate new UX solutions for the symptom selection phase.

Come up with assumptions for choosing incorrect and inaccurate symptoms

  • Users will choose the search more if they don't see the body figure
  • Symptoms not tied to a specific body part should have a flow without the body figure
  • A search-only flow will get better results
  • Preparation or explanations beforehand will help users get better results
  • Separating trauma and mental-health symptoms will get better results

Ideate designs based on assumptions

1. Current

Explanation: I presented the user with both the search and the body figure on the same screen.

Value: Current status benchmark

The current flow:

2. Search only

Explanation: I presented the user with only a search.

Value: Understand whether search alone is better than the body figure for finding specific body-part symptoms, whether it produces better results, and whether the body figure is misleading.

The Search only:

3. Search or body

Explanation: I presented the user with a screen containing two options (search function or body figure) with explanations and examples.

Value: Understand whether separating the body figure and the search function onto different screens, with explanations and examples shown beforehand, helps the user select the correct mechanism for their issue.

The search or body flow:

4. Pre menu

Explanation: I presented the user with a three-option screen, where “injuries” and “mental condition” led to single-answer screens and “symptoms’ appearance” led to current flow.

Value: Understand whether separating injuries and mental conditions into their own options helps users find them.

The pre menu flow:

The pre menu design:

Test the ideations

After ideating the four options, I wanted to test them on a usability testing platform. With the help of the medical staff, we wrote ten cases of different symptom selections that would test the assumptions from the previous step.

Test which ideations led to better symptom match

I divided users into four groups and showed each group a different design from the ideation phase.
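A between-subjects test like this can be sketched as random assignment of users to one of the four designs, followed by a per-design symptom-match rate. This is a hypothetical Python sketch, not the actual testing platform; all names and data here are illustrative.

```python
import random
from collections import defaultdict

# Illustrative labels for the four designs from the ideation phase.
DESIGNS = ["current", "search_only", "search_or_body", "pre_menu"]

def assign_groups(user_ids, seed=0):
    """Randomly assign each user to one design (hypothetical helper)."""
    rng = random.Random(seed)  # fixed seed so the split is reproducible
    groups = defaultdict(list)
    for uid in user_ids:
        groups[rng.choice(DESIGNS)].append(uid)
    return groups

def match_rate(results):
    """Fraction of test cases where the selected symptom matched the
    symptom the medical team expected for that case (1 = match, 0 = not)."""
    return sum(results) / len(results)

# Toy outcome data for two of the designs, across four test cases each.
toy_results = {"current": [1, 0, 0, 1], "pre_menu": [1, 1, 1, 0]}
rates = {design: match_rate(r) for design, r in toy_results.items()}
```

Comparing `rates` across designs is the quantitative version of "which ideation led to a better symptom match"; with real data, a significance test would be applied before drawing conclusions.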

Test results

After the examination, several of the assumptions stood out as producing better match results.

Assumptions that stood out:

  • Users will choose the search more if they don't see the body figure
  • Symptoms not tied to a specific body part should have a flow without the body figure
  • A search-only flow will get better results
  • Preparation or explanations beforehand will help users get better results
  • Separating trauma and mental-health symptoms will get better results

Example #1: Current flow leads users to use the body figure much more often

Example #2: When we moved injury to a separate menu, the selected symptom was more accurate.

When presenting users with the case "While cleaning, you fell from the top of a ladder, and you now have severe pain in your right arm" across the different flows, the results were clear: the "injury" pre-menu (design 4) led users to select the "Fall" symptom, whereas with the other designs users searched for "arm pain" and "arm injury".

According to interviews with the medical teams, in trauma cases the selection "fall" is more relevant than "arm pain" or "arm injury". In trauma cases, the system asks additional relevant questions; for example, when "fall" is selected, the questionnaire rules out a concussion.

Implementation

After summarizing the results and understanding the places where symptom selection could be clearly improved, we implemented the pre-menu in the clinical trial to see the effect.

Improvement in clinical trials

Improvement of 20%

We saw a 20% improvement in symptom match in the clinical trial, along with a decrease in "indirect match" symptom selections. There was also an increase in "no match" selections for which no explanation was found; these require further investigation.

21% decrease in "indirect match" for injury cases

When analyzing the "indirect match" cases, we saw a 21% decrease in injury cases. This improvement resulted from the new trauma sub-menu, which led users to select the correct symptom in case of an injury.