The Force Is Strong with This One: Why ChatGPT Isn’t Your Supervisor (Or Your Therapist)
Understanding AI helps psychologists protect clients – and themselves – from the ‘dark side’ of technology.
TL;DR:
As ChatGPT becomes more common in mental health spaces, psychologists need to understand how it works, its ethical risks, and its limitations. Whether or not you use AI tools directly, digital literacy is now part of ethical practice under AHPRA and the APS Code.
The Allure of Instant Answers (and Why It’s Risky)
ChatGPT feels helpful – fast, fluent, and non-judgemental. But it’s not a therapist or supervisor. Without context, its answers can mirror bias, reinforce poor reasoning, or create false confidence.
In Melbourne and across Australia, we’re seeing this more often in practice:
Clients self-diagnosing or using chatbots to manage anxiety.
Clinicians seeking ‘free supervision’ online.
Abusive partners using AI outputs to justify coercive control.
The appeal is understandable – but using ChatGPT in therapy or supervision without clinical oversight risks breaching ethical boundaries and undermining client safety.
Why Understanding AI Is an Ethical Requirement for Psychologists
Even if you’re not using AI in your practice, you still need to understand it.
For psychologists, digital literacy falls under Core Competency 6 – and it's now impossible to separate from ethical decision-making.
Ask yourself:
Was this tool built for clinical use, and do I understand its dataset and biases?
Who can access data I input – and can I guarantee confidentiality?
Is AI use covered under my indemnity insurance?
Have I obtained informed consent before using AI-generated text in notes or interventions?
Understanding these factors isn’t optional – it’s about protecting your clients, your registration, and the profession’s integrity.
ChatGPT vs Custom Clinical AI: A Supervision Case Study
At 12 Points Psychology in Melbourne, we compared free ChatGPT responses with those from Sophia P. Supervisor, a custom AI trained on supervision models, psychotherapy frameworks, and Australian ethical standards.
The difference was stark.
Psychologists using Sophia reported feeling grounded and ethically guided – not just reassured. Generic ChatGPT, in contrast, offered polished but potentially unsafe advice.
The takeaway? Not all AI is created equal. Understanding how and why a tool works is essential before integrating it into clinical reflection or supervision. And as clinicians, our role is not to reject or revere AI, but to use it with critical awareness.
Danielle Graber
Clinical Psychologist & Director
Interested in AI-competent supervision or therapist-specific AI tools?
Book a Professional Supervision Session with 12 Points Director, Clinical Psychologist & Board-Approved Supervisor, Danielle Graber – focused on safe, ethical integration of digital tools in practice.
Or purchase one of our ReadyMade Therapist-Safe Custom Bots or on-demand webinars.