
Empowering public health communicators through AI.

Client: S2P

Role: UX Researcher
Team: Researchers, Designers, Manager, Strategists
Research type: Discovery/exploratory, Mixed methods
Timeline: 11 weeks

Background / Context

S2P is a health science nonprofit whose mission is to develop technologies that make health information more accessible and understandable to public audiences.

The company built an AI chatbot to help social media content creators generate clear, accurate health content. After S2P pitched the tool to various organizations, the Public Health Communicators Collaborative (PHCC) expressed interest in adopting a customized version for their organization.

Opportunity

This partnership created an opportunity for S2P to scale its impact by developing a white-labeled version of its AI tool, one tailored for public health communicators. First, however, we needed to understand PHCC's expectations to ensure product fit and adoption.

"How might we identify the unmet needs and opportunities in PHCC’s content creation process?"

Research Objectives

1. Who is PHCC?

  • Roles

  • Responsibilities

  • Workflow

2. What challenges do they face?

  • Content creation

  • Content adaptation

  • Content sharing

3. How does AI fit in?

  • AI tools used

  • Attitudes & behavior with AI

  • AI confidence & literacy

Discovery & Approach

Secondary Research

We conducted desk research to understand PHCC’s mission, priorities, and operations, which helped shape the questions we asked in our first round of interviews. By reviewing PHCC's website, we identified focus areas and topics that matter most to them, such as:

  • Types of resources they provide

  • Communication tools used

  • Partner organizations they collaborate with

This research provided a solid foundation for forming assumptions about their workflows, needs, and challenges. We then considered the implications if these assumptions were true or false, which helped us craft clear, targeted questions for our upcoming user interviews.


Participant Sampling


Because our target audience was highly specific, we used snowball sampling to recruit participants. We collaborated with a PHCC member, who shared our study in a newsletter to reach professionals who might be interested. Members opted in through a provided link, allowing us to connect directly with participants engaged in content creation and dissemination.

Given the niche nature of the study, we decided not to apply strict screening criteria. Our goal during this early discovery phase was to gather diverse perspectives on participants' daily work.

Research Methods

Round A: Interviews & Focus Groups
  • Sample size: 41 participants 

  • Format:

    • Remote sessions via Google Meet

    • Semi-structured question set to guide both formats

  • Primarily 1:1 interviews, though a few sessions evolved into small focus groups
Round B: Hybrid Surveys (Interview + Ranking Activity)
  • Sample size: 16 participants

  • Format:

    • Moderated remote sessions via Google Meet & OpinionX

    • Short interviews in the first half of each session

    • Rank-order activity in the second half

  • Participants were asked to prioritize public health communication activities across five criteria: importance, difficulty, personal control, time investment, and openness to AI.

Round C: Usability (Concept) Tests
  • Sample size: 7 participants (5 completed full sessions; 2 participated in interview style discussions)

  • Format:

    • Remote & moderated sessions hosted on Maze

    • Participants interacted with a prototype, performing predefined tasks

    • Each task was followed by ease-of-use ratings and open-ended reflection questions

    • The session concluded with a System Usability Scale (SUS) questionnaire and an emotional response question (“I feel good when using this tool”)

Rationale

  • Interviews provided an in-depth understanding of PHCC members’ workflows, pain points, and perceptions of AI’s relevance in their day-to-day work. The semi-structured format allowed flexibility, enabling spontaneous discussion and the discovery of context-specific needs that would have been missed in a fully structured format.

  • Surveys emerged from a stakeholder request to validate qualitative themes quantitatively. Although a large-scale survey wasn’t feasible, combining a ranking activity with interviews provided directional quantitative insight into the tasks where assistive features and workflow design could deliver the most impact.

  • Usability tests evaluated early design concepts, measuring their usability, emotional resonance, and perceived usefulness.

Key Insights

Overarching themes

  • Trust must be earned

    • Public health professionals are skeptical of using AI for high-stakes communication.

    • They need tools that cite authoritative sources and credible evidence, while allowing human experts to stay in the loop for fact-checking and final approval.

    • Trust isn’t just about transparency; it’s about ensuring public accountability.

  • Collaboration and approval processes are essential, but also time-consuming

    • Communications often require input and sign-off from multiple stakeholders across departments, which slows down urgent messaging.

    • Users would value solutions that streamline collaboration and help them move through complex approval processes more efficiently without losing visibility or control.

  • PHCC's audiences are diverse, and so is their messaging

    • PHCC members are tasked with delivering content across cultural, linguistic, and literacy boundaries.

    • They’re often addressing controversial or sensitive topics.

    • They need flexible tools that help them adjust tone, format, language, and style so the message remains accurate and accessible.

PHCC User Persona


Learnings from Ranking Activity


Participants ranked activities across five criteria: importance, difficulty, personal control, time investment, and openness to AI.

Note: Participants were told they could leave items unranked if an item did not apply to their role or experience.

Participants completed five separate rankings, one prompt per criterion:

  • Importance: “How important is this task to your public health related daily or strategic work?”

  • Difficulty: “To what extent do you feel this task is an unmet need, something that’s hard to do, or current tools fall short?”

  • Control: “How much control do you personally have over this task?”

  • Time spent: “Roughly how much time do you spend on this task in a typical week?”

  • AI Openness: “How likely would you be to use AI to support this task?”

  • Note: Each criterion produced an independent ranked list of the same 12 activities.

Scoring & Analysis

We used a Borda-count variant, the Dowdall method, with 100-point scaling. For a participant’s list, the score for an item at rank r is 100 / r (see the sketch after the example below).

For example:

  • 1st = 100

  • 2nd = 50

  • 3rd ≈ 33.33

  • 4th = 25
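
To make the aggregation concrete, here is a minimal sketch of how these per-rank scores can be summed across participants. The activity names in the example are hypothetical, and (per the note above) items a participant left unranked are simply absent from their list:

```python
from collections import defaultdict

def dowdall_scores(rankings):
    """Sum Dowdall points (100 / rank) for each activity across participants."""
    totals = defaultdict(float)
    for ranking in rankings:                    # one ordered list per participant
        for rank, activity in enumerate(ranking, start=1):
            totals[activity] += 100 / rank      # 1st = 100, 2nd = 50, 3rd ≈ 33.33 ...
    # Return activities sorted from highest to lowest aggregate score
    return dict(sorted(totals.items(), key=lambda kv: kv[1], reverse=True))

# Hypothetical example: two participants, the second of whom left one item unranked
rankings = [
    ["Drafting content", "Fact-checking", "Translation"],
    ["Fact-checking", "Translation"],
]
print(dowdall_scores(rankings))
# {'Fact-checking': 150.0, 'Drafting content': 100.0, 'Translation': 83.33...}
```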


Results by Task 


Usability Through the User’s Eyes

Through concept testing, we found that users approached the prototype as a conversational tool (e.g., ChatGPT), gravitating toward the text input box and missing key feature buttons.

  • Navigation challenges, missed interactions, and confusion over task flow all pointed to mismatched mental models.

  • We shared these findings with the design team, who implemented early design changes to align the interface more closely with user expectations and behaviors.


Usability challenges

Accessibility/Readability

The prototype’s ease of use depends heavily on users’ familiarity with digital tools.

  • While some users navigate effortlessly, others, particularly those with lower digital literacy, experience accessibility challenges.

Recommendations:

  • Consider ways to enhance visual readability and clarity of instructions to support users who may struggle with small text or dense layouts.

“Print is fairly small, so it would be hard for me to use. There's a big space between the top and the little boxes. We see three boxes. Are there more than three?”

-- P59

“So the one that I wasn't able to figure out, I think that was like the generating the press release one, I didn't find that intuitive if there was supposed to be a button or I may have completely missed it.”

-- P68

Navigation & Discoverability

Users had difficulty finding and interacting with certain features on the platform.

  • Specifically, they were unsure about the functionality of the "Mind map" feature.

  • They were also unable to type in the "Ask the research" box, suggesting issues with the placement and intuitiveness of these elements.

  • Additionally, users expressed uncertainty about the overall purpose and content of the platform, indicating a need for clearer navigation and labeling.

Recommendations:

  • Consider solutions to help users quickly understand where to start and what each feature does to reduce confusion during first time use.

    • For example, improving labeling, visual hierarchy, or onboarding cues to clarify each feature’s purpose.

Prototype Positives

Overall, users express positive impressions of the prototype's features and design, emphasizing its user-friendliness, credibility, and potential to enhance productivity and decision-making. Their comments highlight how straightforward and simple the platform is to navigate and explore.

Users appreciate the clean, modern design and lack of distractions.

"Um, not too much, but I like how clean it is...not a lot of distractions, looks modern, and um, the suggestions seem pretty useful."

--P60

“Yeah, I think people would learn to use it pretty quickly. You know, it's just kind of like ChatGPT. You just jump in, you're like, 'All right, what do I do?'”

-- P67

“I think I do admire how reliable it is. It's also pretty um it was pretty straightforward. And I think I like how it wasn't one thing I found unique about this is that it wasn't long. It wasn't lengthy.”

--P65

Users highlight the benefit of having sources readily available rather than having to search for them separately.

Users also note the platform's versatility, finding it suitable for different roles, such as government workers, community partners, and local community members.

Quantitative Insights

To complement qualitative findings, we analyzed behavioral and attitudinal data using heat maps, post-task surveys, and usability metrics. These methods helped us assess where users struggled, how they perceived the prototype, and how efficiently they completed tasks. 

Behavioral Analysis (Heat Maps)

Heat map data revealed where users clicked during tasks, uncovering several usability blind spots.

  • Key functions like the “Use Style” and “Translate with AI” buttons were largely overlooked, suggesting issues with visual hierarchy and affordance.

  • Participants intuitively began typing into the text box (similar to ChatGPT), reinforcing that the product’s mental model aligns with conversational AI rather than a content management tool.

  • Feature buttons in the Content Studio received minimal attention, showing low discoverability.

User Feedback (Post-Task & SUS Surveys)

Survey and SUS results showed mixed but promising perceptions of usability (SUS scoring is sketched after this list).

  • Average SUS score: 75.7, above the commonly cited benchmark of 68 (a "good" rating)

  • Most participants reported tasks as moderately easy, though Task 3 stood out as the most difficult.

  • Variation in scores reflected differences in technical confidence: users less familiar with digital tools found the prototype harder to use.

  • Emotional satisfaction scores aligned with usability ratings: those who felt “good” about the tool also rated it higher on the SUS.
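
As background for the scores above, the standard SUS formula converts each participant's ten 1-5 responses into a 0-100 score: odd-numbered items contribute (response - 1), even-numbered items contribute (5 - response), and the sum is multiplied by 2.5. A minimal sketch, with a made-up response set (individual item responses aren't reported here):

```python
def sus_score(responses):
    """Standard SUS scoring: ten 1-5 Likert responses -> one 0-100 score."""
    assert len(responses) == 10
    total = 0
    for item, r in enumerate(responses, start=1):
        total += (r - 1) if item % 2 == 1 else (5 - r)  # odd items: r - 1; even items: 5 - r
    return total * 2.5

# Hypothetical response set for illustration only
print(sus_score([4, 2, 5, 1, 4, 2, 5, 2, 4, 1]))  # -> 85.0
```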

Performance Metrics (Effectiveness & Efficiency)

  • Average task success rate: 78%

  • Efficiency varied by task, with longer durations signaling points of confusion or unclear task flow.

  • Task 3 and Task 4.1 had the lowest completion rates (0% and 25%, respectively), reinforcing qualitative findings about unclear navigation and instructions.

Recommendations

  • Improve discoverability and intuitiveness of key features

    • Consider solutions that draw attention to core functions like “Use Style,” “Translate with AI,” and sharing options through placement, visual hierarchy, or interaction cues, ensuring users can easily locate and understand their purpose.

  • Streamline complex tasks

    • Simplify the process of drafting and revising content, especially for tasks requiring adjustments to tone or format (e.g., press releases).

      • Consider clearer task flows, contextual guidance, or supportive visual aids to reduce confusion and enhance task efficiency.

  • Enhance accessibility and readability

    • How might we ensure the interface remains readable and accessible across varying visual abilities and devices?

      • Design solutions should address font size, color contrast, and layout structure to improve overall legibility and user comfort.

  • Integrate with existing workflows

    • Consider integration opportunities with popular cloud platforms like Google Drive or OneDrive to streamline collaboration and sharing. This will make the tool feel like a natural extension of users’ existing work environments.

  • Provide comprehensive onboarding and support

    • Consider solutions like interactive demos, tutorials, or contextual hints that introduce key features as users explore. This will help new or less tech-confident users quickly understand and navigate the platform’s capabilities.

  • Refine prompt-based interactions

    • Think of ways to make natural prompt typing behaviors feel more cohesive with the platform’s other features.

      • Allow users to trigger actions (e.g., translation, style adjustment) through text-based inputs while maintaining clarity between prompt and button interactions.

  • Incorporate location-specific relevance

    • How might we personalize user experience by surfacing content and insights relevant to users’ regions or contexts?

      • Consider leveraging location-based trends or localized data to enhance relevance and engagement across diverse audiences.

Impact

Based on research insights, we identified critical user needs and opportunities, which directly informed:

  • Research-based personas

  • User journey map

  • Prioritized roadmap features for continued product development:

    • Content generation

    • UI/UX redesign

    • Recommendations for improved clarity, flexibility, and content trust-building

  • Guided long-term collaboration: insights now serve as a reference point for any future initiatives involving PHCC, from content tools to training programs.

Future success indicators (post-launch):

📈 Adoption and frequency of use among PHCC members.

💬 Qualitative feedback on message clarity, accessibility, and relevance.

⏱ Reduction in time spent creating or revising communication materials.
