A California startup, Kintsugi, has shut down after seven years of development, releasing its AI-based depression and anxiety detection software as open-source software. The company failed to secure FDA clearance, highlighting the challenges of navigating medical regulation for rapidly evolving AI technologies. The shutdown is a setback for early AI in mental health, but the open-source release leaves a path open for continued research, and for potential misuse outside clinical settings.

The Promise and Hurdles of AI-Driven Mental Health Screening

Kintsugi’s technology analyzed speech patterns, including pauses, sentence structure, and speaking speed, to identify subtle shifts indicative of mental health conditions. Unlike traditional assessments, which rely on questionnaires, the AI aimed to provide a more objective signal, expanding screening capacity for health systems, insurers, and employers. However, the FDA’s “De Novo” approval pathway for novel medical devices proved slow and inflexible.
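As a rough illustration of the approach, the sketch below extracts the kinds of vocal features described above, pause ratio and speaking rate, from an audio clip. The library choice (librosa) and the specific features and thresholds are assumptions for illustration, not Kintsugi’s actual pipeline.

```python
# Illustrative sketch only: these features and thresholds are assumed,
# not taken from Kintsugi's model.
import librosa
import numpy as np

def extract_speech_features(path: str) -> dict:
    # Load the clip at a fixed sample rate
    y, sr = librosa.load(path, sr=16000)
    duration = len(y) / sr

    # Detect voiced segments; the gaps between them approximate pauses
    intervals = librosa.effects.split(y, top_db=30)
    voiced_s = sum((end - start) for start, end in intervals) / sr
    pause_ratio = 1.0 - voiced_s / duration

    # Crude proxy for speaking rate: voiced segments per second
    segments_per_sec = len(intervals) / duration

    return {
        "duration_s": duration,
        "pause_ratio": pause_ratio,
        "segments_per_sec": segments_per_sec,
    }
```

In a full system, features like these would feed a trained classifier that maps them to a screening signal; the sketch stops at feature extraction.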

The regulatory framework, designed for traditional devices such as implants and pacemakers, struggles to accommodate AI systems that learn and improve continuously. While the Trump administration sought to streamline AI approvals, Kintsugi’s founder, Grace Chang, said regulatory inertia and government shutdowns stalled progress, and the company ran out of funding with its final submission still pending.

Open-Source Release Raises Ethical Concerns

Rather than accept unfavorable funding offers, Kintsugi chose to open-source its core technology. The decision carries risks: the software could be deployed outside healthcare settings, for example by employers or insurers, without appropriate safeguards. While logistical barriers make misuse unlikely, the potential remains.

Nicholas Cummins, a speech analysis expert at King’s College London, cautions that open-source releases often lack the documentation regulators require for approval, making future FDA clearance difficult. Companies may use the model as a starting point but will need their own validation processes.

From Mental Health to Deepfake Detection: A Silver Lining

Kintsugi’s research unexpectedly revealed another capability: detecting synthetic or manipulated voices. While refining its mental health models, the team found that the AI could distinguish human speech from AI-generated speech. Unlike mental health screening, this application does not require FDA oversight and presents a potentially lucrative opportunity in security.
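To give a sense of what such a detector might look like, here is a minimal, hypothetical sketch that trains a classifier to separate human from synthetic speech using generic spectral features. This is a common baseline approach, not Kintsugi’s model, and the labeled file lists are placeholders.

```python
# Hypothetical baseline for human-vs-synthetic voice classification;
# feature choice and model are generic assumptions, not Kintsugi's.
import librosa
import numpy as np
from sklearn.linear_model import LogisticRegression

def mfcc_embedding(path: str) -> np.ndarray:
    # Summarize a clip as per-coefficient MFCC means and deviations
    y, sr = librosa.load(path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=20)
    return np.concatenate([mfcc.mean(axis=1), mfcc.std(axis=1)])

def train_detector(human_paths, synthetic_paths):
    # human_paths / synthetic_paths: placeholder lists of labeled clips
    X = np.stack([mfcc_embedding(p) for p in human_paths + synthetic_paths])
    y = np.array([0] * len(human_paths) + [1] * len(synthetic_paths))
    return LogisticRegression(max_iter=1000).fit(X, y)
```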

Kintsugi’s failure underscores a broader tension between startup timelines and the pace of medical regulation. Without systemic changes, similar cases may follow. Even so, the company hopes others will build on its work, despite a regulatory climate that discourages founders from pursuing similar paths.