How Headspace scaled clinical QA from 2% to 100% with Brellium
As Headspace prepared to grow to a network of 700 therapists, they turned to Brellium's AI-powered compliance platform to achieve full clinical documentation coverage, reduce compliance risk, and scale quality care.
"Transitioning to Brellium was significantly more cost-effective than hiring additional staff, while also allowing us to expand the scope of our QA efforts. It enabled our clinical QA specialists to operate at the top of their license and fully leverage their expertise."
Headspace offers therapy, psychiatry, coaching, and AI-powered mental health support to members across the country. As Director of Care Quality, Alexis Hernandez-Hons is responsible for making sure that the organization's therapy care — and the clinical documentation that supports it — meets the highest standards.
Before Brellium, that responsibility fell on a team of three clinical QA specialists who reviewed notes manually. With a growing provider network, the team was able to audit just 2% of all notes per year, roughly six members per provider, twice annually. Auditing consumed roughly 85–90% of each specialist's time; it was essentially a full-time job.
"It would be difficult for any organization to confidently say they're capturing all documentation errors by reviewing just 2% of notes. That gap is what ultimately led us to seek a solution like Brellium."
The coverage gap created a visibility problem. With such a narrow sample, thematic issues across the provider network were nearly impossible to detect. Subtle patterns in diagnostic accuracy, CPT coding errors, or gaps in safety planning could persist unnoticed.
The inflection point came as Headspace began planning to scale its therapy network. At roughly 150 clinicians, leadership faced a choice: continue hiring QA specialists and maintain the same inadequate coverage, or find a fundamentally different approach.
From sample-based audits to complete visibility
Headspace evaluated their options and quickly focused on Brellium, drawn to its ability to QA at scale while supporting full customization of review criteria. That last point mattered enormously.
"We didn't want to rely on a generic set of criteria. It was important for us to maintain control over how we define and assess quality."
The team built out a comprehensive review framework tailored to Headspace's care model — covering clinical quality, safety and risk, billing compliance, and operational and cultural responsiveness standards.
Today, Headspace has used Brellium to review over 187,000 intake and follow-up session notes. Their coverage went from 2% to 100%, all with the same three-person QA team.
Evidence-based feedback
The shift from sample-based to comprehensive review entirely changed what the quality team could do.
"Today, we're able to conduct QA on 100% of notes and drill into specific quality domains to understand where providers may be struggling. This gives us a comprehensive view of network performance and allows us to identify targeted areas needing additional support."
For example, analyzing notes with Brellium surfaced an unexpected insight about clinicians' use of Headspace's AI notetaker. The team found that clinicians using the notetaker scored meaningfully higher on diagnostic accuracy measures than those who didn't. That finding became a data-driven case for provider adoption, one the team could share with their therapist network based on real aggregate evidence, not gut instinct or anecdote.
Alexis says Brellium has been popular with therapists, too. Although there was initially some skepticism about using AI to review notes, she says therapists now appreciate that Brellium sees the full picture of their documentation quality, instead of judging their performance off a single encounter.
Headspace has even begun to use Brellium to award spot bonuses and company swag to therapists who consistently submit high-quality notes.
The decision to tie incentives to Brellium scores was deliberate. Headspace wanted to intentionally recognize strong provider performance and create meaningful motivation for improvement across the network. Because Brellium evaluates individual providers' documentation quality and identifies specific areas for improvement, it made the link between incentives and performance clear and defensible.
Now, high performance in clearly defined quality areas gets rewarded, helping to reinforce documentation practices that align with both clinical standards and member care expectations. This initiative wouldn't have been possible without Brellium's 100% documentation visibility.
Clinical specialists working at the top of their license
With Brellium handling the volume, the QA team's role evolved from note reviewers to program builders. They are able to provide clinical oversight of Headspace's AI products, design and test new QA criteria as the care model evolves, and, in partnership with the training team, launch a weekly micro-learning program based on the Brellium data.
Each week, the micro-learnings address a different diagnostic category where provider scores indicate a need for support. They are delivered through weekly training emails sent to the full provider network, each with a brief overview of the topic and a direct link to the module in Headspace's LMS, so clinicians can access and revisit content anytime. Each micro-learning is intentionally brief, highly targeted, and tied directly to quality opportunities surfaced through the Brellium QA process. Headspace will formally assess the training program's impact on providers' diagnostic accuracy scores later this year.

The QA team also now contributes to accreditation work that had previously been slow to progress due to capacity limitations. According to Alexis, the accreditation work is a strong signal of how Brellium's value compounds: it wasn't a project born from Brellium, it already existed in concept, but it had struggled to move forward without the capacity to support it.
Advice for other practices
When asked what she'd tell another clinical organization considering an AI-powered QA solution, Alexis frames it as a question of honest self-assessment: what does quality actually mean to you, and how confidently can you measure it today?
"Take a close look at where your gaps are — there are always opportunities for continuous improvement. Then explore how a solution like Brellium can help address those areas."
For rollout, she recommends being transparent with providers about how the AI works, what data is stored and how, and, critically, how the output will be used to support them rather than penalize them. The framing matters: clinical QA at scale should be positioned as an investment in clinician development.

A year and a half in, Headspace continues to find new ways to use Brellium. New questions are being tested. Criteria are evolving alongside the care model. And the most surprising finding may be the simplest one: the platform's value keeps growing. "Over a year later, we're still finding new ways to leverage the system, which has been very exciting," Alexis said. "We didn't initially anticipate just how innovatively we'd be able to use it."
Ready to achieve similar results?
See how Brellium can transform your clinical compliance operations.
Get a Demo