The Diagnostic Robotics system delivers medical summaries, generated from patient questionnaires, to MDs in emergency departments and home-care settings.
The goal was to improve the low accuracy of the medical summaries, which my research found to be caused mainly by patients selecting incorrect symptoms.
This realization shifted the focus of the research toward helping users choose symptoms correctly.
When testing the platform, we noticed inaccuracies in patients' medical summaries and set out to understand their cause.
A medical summary is the output of a patient-filled medical questionnaire, provided to doctors before treatment. This study emerged from the low accuracy score of the Diagnostic Robotics medical summary.

The patient answers demographic questions, chooses the symptom they suffer from, and completes a medical questionnaire created by a medical team; their answers then go through an ML process and are served to the MD as a medical summary.

In a face-to-face meeting with their doctor, a patient asked about getting a second dose of a vaccine, in light of shoulder pain that appeared after the first dose.
In the digital questionnaire, the patient selected the symptoms "shoulder injury" and "arm weakness".
After analyzing the medical summaries, we found that some of the inaccuracy stems from wrong choices in the symptom selection phase, which lead to the wrong questionnaire and, in turn, an incorrect medical summary.
Our hypothesis: the low medical summary accuracy results from the patient's choice in the symptom selection phase, and improved symptom matching would raise the medical summary score.

After realizing that "indirect match" and "no match" were among the leading causes of low medical summary accuracy, we divided the cases into groups and gave each group a title:


Following the division into four groups, I decided to focus on the group "I chose a symptom, but it is not accurate" to try to improve the symptom match.
For this purpose, I performed four different studies:
I conducted user research during the clinical trial at Rambam Hospital's emergency department, studying the doctors and patients using the system and comparing it to face-to-face triage (the process of sorting people based on their need for medical treatment). These visits helped me understand the gap between what patients think they have and what doctors think needs to be treated first. For example, patients usually talk about their most significant pain, even if it's something they have had for years, rather than what actually brought them to the ER, which is much more relevant to the MD.

This included going through the medical summaries with the medical team and identifying problems in symptom selection. For example, we saw patients choose "arm injury" instead of "arm pain" because they didn't understand the difference.
Another area of research was questionnaire writing: talking to the doctors who write the questions to understand why they ask specific questions and what they aim to learn from them.
One insight, especially relevant for injury cases, is that doctors often ask situation-related questions. For example, for a patient whose headache started after an injury from an external source, the doctor would want to rule out a concussion.
I decided to better understand users with the help of a testing platform: I let different users choose symptoms on the existing interface and mapped selection problems from there.
For example, one patient didn't know which body part to choose for "high blood pressure" and never thought of using the search field, because the body figure is so visually dominant.
After the research, I came up with a list of reasons why users choose incorrect and inaccurate symptoms. Based on these assumptions, I began to ideate new UX solutions for the symptom selection phase.
Design 1 – Explanation: I presented the user with both the search function and the body figure on the same screen.
Value: Benchmark of the current status.

Design 2 – Explanation: I presented the user with only a search field.
Value: Understand whether search alone gets better results for finding specific body-part symptoms, and whether the body figure is misleading.

Design 3 – Explanation: I presented the user with a screen containing two options (search function or body figure), with explanations and examples.
Value: Understand whether separating the body figure and search function into different screens, with explanations and examples shown beforehand, helps the user select the correct mechanism for their issue.

Design 4 – Explanation: I presented the user with a three-option screen, where "injuries" and "mental condition" led to single-answer screens and "symptoms' appearance" led to the current flow.
Value: Understand whether separating injury/mental-condition cases helps users find them.


After ideating the four options, I wanted to test them on a usability testing platform. With the help of the medical staff, we wrote ten cases of different symptom selections that would test the assumptions from the previous step.
I divided users into four groups and showed each group a different design from the ideation phase.
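The measurement behind this between-subjects test can be sketched in a few lines. This is an illustrative sketch only: the session records, design names, and match categories below are hypothetical, not the study's actual data.

```python
from collections import Counter

# Hypothetical test sessions: (design shown, resulting match category).
# Categories mirror the study's labels: "direct", "indirect", "no_match".
sessions = [
    ("design_1", "direct"), ("design_1", "indirect"), ("design_1", "no_match"),
    ("design_2", "direct"), ("design_2", "direct"), ("design_2", "indirect"),
    ("design_3", "direct"), ("design_3", "indirect"), ("design_3", "direct"),
    ("design_4", "direct"), ("design_4", "direct"), ("design_4", "direct"),
]

def match_rates(records):
    """Share of each match category, per design group."""
    totals = Counter(design for design, _ in records)
    counts = Counter(records)
    return {
        (design, category): counts[(design, category)] / totals[design]
        for design, category in counts
    }

rates = match_rates(sessions)
print(f"design_4 direct-match rate: {rates[('design_4', 'direct')]:.0%}")
```

Comparing the per-design rates side by side is what lets one design (here, the pre-menu variant) stand out against the benchmark.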

After the testing, four of the six assumptions stood out as producing better match results.


When presenting users with the case "While cleaning, you fell from the top of a ladder, and you now have severe pain in your right arm" across the different flows, the results were clear: the "injury" pre-menu (Design 4) led users to select the "Fall" symptom, whereas in the other designs users searched for "arm pain" and "arm injury" symptoms.
According to interviews with medical teams, in trauma cases the selection "Fall" is more relevant than "arm pain" or "arm injury", because it triggers additional relevant questions; for example, selecting "Fall" lets the questionnaire rule out a concussion.
After summarizing the results and understanding the places where symptom selection could be clearly improved, we implemented the pre-menu in the clinical trial to see the effect.
In the clinical trial, we saw a 20% improvement in symptom match, along with a decrease in "indirect match" selections. There was also an increase in "no match" selections for which no explanation was found; this requires further investigation.
When analyzing the "indirect match" cases, we saw a 21% decrease in mis-selected injury cases. This improvement came from the new trauma pre-menu, which led users to select the correct symptom in cases of injury.
