California Unions Lead Fight Against AI in Healthcare, Raising Concerns for ABA

California's powerful healthcare union, NUHW, is pushing back against Kaiser Permanente's use of artificial intelligence, citing fears of job displacement and compromised patient safety. This labor dispute highlights broader legislative efforts to regulate AI's rapidly expanding presence in the behavioral health sector.

The Policy Change

Workers represented by the National Union of Healthcare Workers (NUHW) at Kaiser Permanente, one of California’s largest private employers, are at the forefront of a significant labor dispute concerning the integration of artificial intelligence (AI) into healthcare operations. The union contends that Kaiser’s increasing reliance on AI tools could lead to widespread job losses for human clinicians and administrative staff, while simultaneously posing risks to patient health and privacy. As part of ongoing negotiations, NUHW is demanding explicit protections, including pledges that AI will only serve as an assistive technology, not a replacement for human workers.

Kaiser Permanente, which employs over 180,000 individuals and nearly 19,000 physicians across multiple states, views AI as a beneficial tool to enhance efficiency and improve patient care. A Kaiser spokesperson indicated that AI can streamline tedious tasks such as note-taking and paperwork, allowing clinicians to focus more on direct patient interaction. The organization asserts that AI is intended to support better diagnostics, optimize clinician time, and ensure equitable care experiences, explicitly stating that “AI does not replace human assessment and care.”

The debate extends beyond Kaiser’s bargaining table, influencing broader legislative efforts in California. The California Federation of Labor Unions is sponsoring a package of bills aimed at protecting workers from AI-related risks, including job displacement and surveillance. Concurrently, the California Psychological Association (CPA) is advocating for legislation that would mandate clear, written consent before any therapy session is recorded or transcribed, especially when AI tools are involved. This proposed bill also seeks to prohibit individuals or companies, including those leveraging AI, from offering therapy services in California without proper licensure, underscoring a growing concern about unregulated AI chatbots providing mental health advice.

Impact on ABA

The policy discussions and labor actions at Kaiser Permanente have direct implications for the Applied Behavior Analysis (ABA) industry. ABA professionals, including Board Certified Behavior Analysts (BCBAs) and Registered Behavior Technicians (RBTs), are clinicians providing essential behavioral health services. The concerns raised by mental health workers at Kaiser—such as the use of AI for transcribing sensitive patient conversations and privacy issues—are highly relevant to ABA practice, where detailed and confidential client data is routinely collected and managed.

The potential for AI to automate administrative tasks, while presented by Kaiser as a benefit, raises similar fears within ABA clinics about job security for administrative staff and even RBTs whose roles include data entry or scheduling. The proliferation of AI-powered mental health apps and chatbots, some of which Kaiser already offers to enrollees, also poses a challenge to the regulated practice of ABA. The source article highlights instances in which AI chatbots, despite not being licensed therapists, have been used for mental health advice, leading to lawsuits alleging harm against AI companies. These cases underscore the need for clear guidelines and regulatory oversight so that AI tools in behavioral health settings, including those used alongside ABA, are ethically implemented and do not inadvertently compromise client well-being or the integrity of evidence-based practice.

The legislative push for explicit patient consent before therapy sessions are recorded, along with the requirement that therapy be delivered by licensed professionals, directly affects how ABA services are documented and delivered. ABA providers must closely monitor these policy developments to ensure compliance and to advocate for frameworks that support the responsible integration of technology while safeguarding professional standards and client privacy.

Next Steps

Negotiations between Kaiser Permanente and the National Union of Healthcare Workers are ongoing, with the union pushing for a commitment that AI will not replace human workers. Kaiser has indicated a willingness to bargain over changes to working conditions resulting from new AI technologies. In the legislative arena, California lawmakers are being urged to pass bills that would establish guardrails for AI use, with State Senator Steve Padilla emphasizing the limited window to implement necessary regulations for this rapidly evolving technology.

Beyond policy, the mental health community is actively working to understand and mitigate the risks associated with AI. Dr. John Torous, director of digital psychiatry at Beth Israel Deaconess Medical Center, is collaborating with the National Alliance on Mental Illness (NAMI) to develop benchmarks for evaluating how different AI tools respond to mental health inquiries. This research is crucial for informing future best practices and ensuring that AI integration in behavioral health, including ABA, is guided by evidence and ethical considerations.

Fast Facts

Key Point: Kaiser Permanente employs over 180,000 individuals.
Why It Matters for ABA: Illustrates the scale of AI’s potential impact on healthcare jobs, including behavioral health roles.

Key Point: Close to half of Kaiser behavioral health professionals in Northern California report being uncomfortable with AI tools.
Why It Matters for ABA: Reflects significant clinician apprehension about AI, likely mirrored among ABA professionals.

Key Point: An estimated 12% of adults are likely to use AI chatbots for mental healthcare in the next six months.
Why It Matters for ABA: Highlights growing public reliance on AI for mental health support, posing both opportunities and risks for regulated ABA services.

Expert Perspective

“AI is not the savior, and the nuances of human interaction can fall through the cracks, potentially leading to catastrophic outcomes in mental health care.”

Source: latimes.com