
What Happens When AI Tells Someone with Depression to ‘Just Try Harder’?

Published May 28, 2025

By Caylee Southland, Digital Marketing Manager at Civilian

AI has opened countless doors in marketing. For industries selling skincare, sparkling drinks, or new tech gadgets, AI can optimize ads, target audiences, and even write copy. The quality of AI's output varies wildly, and it sometimes goes awry. There are also environmental and ethical concerns to weigh: AI is energy-intensive to run and is trained on work created by humans.

That said, its potential goes far beyond consumer products. In public and mental health, AI can be a powerful tool for reaching the right people, tailoring messages to diverse communities, and uncovering insights that might otherwise go unnoticed.

We see this potential every day. Much of our work focuses on some of the most sensitive and high-stakes issues — youth suicide prevention, Adverse Childhood Experiences (ACEs), substance use, and other public health challenges. In these areas, AI isn’t just a marketing tool. It’s a force that must be used with robust human oversight, care, and empathy. The consequences of getting it wrong can be profound.

The same systems that make our work more efficient can also backfire, generating messaging that's tone-deaf, stigmatizing, or even harmful. That's why, in sensitive campaigns, using AI isn't just about what it can do; it's about how we use it responsibly.

AI Doesn't Do Feelings

The internet is a vast, chaotic place, full of unfiltered data, bias, and misinformation — and AI pulls from all of it. When AI generates content, it’s not considering the credibility of where those ideas came from. It might pull phrasing or perspectives rooted in subtle (or not-so-subtle) racism, classism, or ableism. It can unintentionally reflect or amplify stigma, especially when dealing with mental health or trauma-related topics.

As humans, we don’t believe everything we hear. We’re skeptical. We interpret tone, context, and intent. We understand that a phrase like “just try harder” can be devastating to someone experiencing depression. But AI lacks that emotional depth. It doesn’t feel. It doesn’t understand. And without human oversight, it can generate messages that are tone-deaf at best, and harmful at worst.

AI Has a Stigma Problem

One of the biggest risks in using AI for mental health campaigns is the potential to reinforce stigma. AI can oversimplify or default to harmful tropes, presenting mental illness as something shameful or extreme, rather than nuanced and human. These missteps don't just miss the mark; they can undo years of progress in normalizing mental health conversations.

We believe AI must be paired with critical thinking, ethical review, and — above all — a human touch. Public health campaigns require emotional intelligence, cultural awareness, and lived experience. No algorithm can replicate that. But used thoughtfully, with input from subject matter experts (SMEs) and affected communities, AI can play a meaningful role.

For example, when developing messaging for a public health campaign, we might use AI to analyze existing research, identify key themes, and surface common questions or misconceptions. This gives us a foundation, not a final product. We then use those insights to draft initial messaging. From there, we collaborate with SMEs and community members to review the content for accuracy, tone, and cultural relevance. AI can help us generate possibilities, but real people with lived experience should shape and validate what actually goes out into the world.
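To make that first step more concrete, here is a minimal sketch of what surfacing themes with an AI model might look like in practice. It assumes an OpenAI-style chat API; the model name, the sample excerpts, and the prompt wording are all illustrative placeholders, and whatever the model returns would still go to SMEs and community reviewers before shaping any real messaging.

```python
# Minimal sketch: asking a language model to surface recurring themes and
# common misconceptions from a handful of research excerpts.
# Assumptions: the OpenAI Python client is installed, OPENAI_API_KEY is set,
# and the excerpts below are placeholder text, not real study findings.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

excerpts = [
    "Placeholder excerpt from a youth suicide prevention study.",
    "Placeholder excerpt from a community survey on mental health stigma.",
    "Placeholder excerpt from an ACEs literature review.",
]

prompt = (
    "You are helping a public health team review research.\n"
    "From the excerpts below, list recurring themes and common "
    "misconceptions, noting which excerpt each point comes from.\n\n"
    + "\n\n".join(f"Excerpt {i + 1}: {text}" for i, text in enumerate(excerpts))
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[{"role": "user", "content": prompt}],
)

# The output is raw material for human review, never final copy.
print(response.choices[0].message.content)
```

Anything a sketch like this produces is a foundation only: it gets checked against the source research and reviewed for accuracy, tone, and cultural relevance before it informs a single draft.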

Making Sure No One’s Left Out

AI is also a powerful tool for identifying outreach gaps and connecting with communities that are underserved, whether due to geography, income, or systemic inequities. It can analyze patterns, surface needs, and efficiently distribute content to a wide range of audiences. But its capabilities have limits.

Underserved communities are often the most affected by the digital divide — the gap between those with reliable access to the internet and digital tools, and those without. AI relies on data, but what happens when that data is missing, incomplete, or outdated? If people don't have consistent access to the internet, or if their communities have historically been excluded from datasets, the AI's insights will be skewed. That means campaigns might overlook the very people who need support most.

To close these gaps, we can't rely on AI alone. A more equitable approach means expanding beyond digital channels and investing in offline outreach like printed materials, community events, and partnerships with trusted local organizations such as community-based organizations (CBOs) that already have deep relationships in underserved areas. While AI can help generate content or identify trends, community collaboration is essential to making those messages relevant and respectful. Combining AI with local expertise helps ensure that campaigns reach and truly resonate with those most often left out.

There’s also the issue of trust. When communicating with vulnerable groups, we have to ensure our messages aren’t manipulative. AI can be optimized for engagement, but not always for ethics. We must be intentional about how and why we use it, especially with audiences who may already feel unseen, underserved, exploited, or mistrustful of public systems.

To use AI ethically, marketers need to lead with responsibility. That includes:

  • Transparency and Accountability: Be clear about when AI is being used, and take responsibility for what it produces.
  • Bias Mitigation: Review content critically, involving SMEs and community voices to identify and correct bias.
  • Human Oversight: AI can assist, but should never replace human insight, especially when it comes to sensitive topics.
  • Ethics at the Core: From privacy to inclusion, ethics should be at the heart of every step. Clear guidelines are needed to ensure AI respects the dignity of those it aims to serve.

AI is a tool, not a solution. It can amplify reach, generate insight, and support creative processes. But in conversations around mental health, trauma, and equity, the stakes are higher.

At Civilian, we’ve learned the most impactful campaigns don’t just rely on data — they rely on thoughtful, deeply felt, human-centered strategy. That’s how we build trust. That’s how we drive change.
