
Health & Medicine
The promise and peril of AI chatbots in healthcare
We need to ensure that AI coaches can be challenged, are responsive to human feedback and accountable to the communities they serve
Published 20 October 2025
From wellbeing apps to workplace assistants, AI is being used to guide, encourage and comfort people.
For young people, the promise is compelling: 24/7 access, personalised advice and scalable support in a world where human services are stretched thin.
But this raises urgent questions: What makes an AI coach trustworthy? How can we protect privacy, autonomy and wellbeing? And where does responsibility lie between human and machine?
These are some of the issues we are exploring in the Digital Wellbeing Communities Research Hub.
We recently hosted an industry and academic expert panel discussion, where we were joined by Dr Samantha-Kaye Johnston (Assessment and Evaluation Research Centre, University of Melbourne), Associate Professor Shane Cross (Orygen Digital), Dr Lara Mossman (Centre for Wellbeing Science, University of Melbourne) and Mr Zane Harris (CEO of Neuro).
The discussion prompt was ‘What does an ethical AI coach for young people look like?’ The panel examined the ethical barriers and opportunities of AI coaching and gave us much food for thought.
Here are our key takeaways from the discussion:
Many AI platforms function more like recommendation engines than genuine coaches, telling users what to do rather than supporting them to make decisions for themselves.
Sharing preliminary research results, Dr Samantha-Kaye Johnston told us that teachers are reporting that students who use AI coaches become unwilling to think and are more prone to intellectual dishonesty.
Most AI ‘personalisation’ narrows our options. It drives efficiency but discourages deep thinking: students learn to take shortcuts, cheating becomes easier, and critical thinking and autonomy can be weakened.
AI coaches should expand horizons. They should support reflection. They should empower shared agency across students, teachers, families and tools.
Safety cannot rely on surveillance. Drawing on her ongoing human coaching research, Dr Lara Mossman noted that monitoring undermines autonomy and breeds compliance rather than motivation.
Data security, hacking and deepfakes remain real risks, so trust requires transparency.
AI should never pretend to be human; it should remain transparent about its limits, with clear pathways to human oversight.
There needs to be some way of regularly reminding the young person that the AI is not human, and of guiding them back to people when problems arise.
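To make this concrete, here is a minimal Python sketch of how a coaching session might enforce both requirements: a periodic reminder that the coach is an AI, and a prompt back towards human support when trouble appears. The interval, keywords, messages and class names are illustrative assumptions, not features of any existing product.

```python
from datetime import datetime, timedelta

# Illustrative placeholders only - a real system would need clinically
# informed escalation logic, not a keyword list.
REMINDER_INTERVAL = timedelta(minutes=30)
ESCALATION_KEYWORDS = {"self-harm", "hopeless", "can't cope"}
HUMAN_SUPPORT_MESSAGE = (
    "It sounds like this is getting heavy. I'm an AI, not a person - "
    "would you like me to connect you with your coach or a counsellor?"
)

class CoachSession:
    def __init__(self) -> None:
        self.last_reminder = datetime.min

    def respond(self, user_message: str, ai_reply: str) -> str:
        parts = []
        # Guide the user back to people when problems arise.
        if any(kw in user_message.lower() for kw in ESCALATION_KEYWORDS):
            parts.append(HUMAN_SUPPORT_MESSAGE)
        parts.append(ai_reply)
        # Regularly remind the young person that the AI is not human.
        if datetime.now() - self.last_reminder > REMINDER_INTERVAL:
            parts.append("(Reminder: I'm an AI coach, not a human.)")
            self.last_reminder = datetime.now()
        return "\n\n".join(parts)
```

The point of the sketch is structural: the reminder and the hand-off to humans sit outside the model's reply, so they cannot be 'talked out of' by the conversation itself.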
Young people are adopting AI companions quickly, but are they using the right ones? And what about data security and liability?
We should be wary of using generalist tools for specialist applications like coaching, a use case where you want a tool that is good at that one thing.
Poorly designed AI risks replicating the problem we have with social media algorithms by reinforcing rather than challenging users’ thinking.
Coaching by humans is already poorly regulated, and adding AI magnifies the risks.
Coaching can cover everything from lifestyle and education through to support for mental health difficulties.
Associate Professor Shane Cross, with two decades of clinical psychology experience, advises that we first need to understand the purpose of the coaching, and then ask: What are the features of high-quality coaching for this purpose?
Then we should ask: what can AI systems do well now, and what can’t they?
AI seems to be good at helping with structured goal support, reminders, psychoeducation and conversational encouragement.
But it cannot fully replicate nuanced human skills like questioning, reframing or holding someone accountable, and it cannot repair ruptures in the relationship when they appear.
There have also been emerging cases of ‘AI psychosis’, where poorly configured chatbots have inadvertently reinforced delusional thinking and intensified unhealthy patterns of thought.
The best approaches are hybrid models that combine AI with human oversight. With the mental health system stretched and often unaffordable, hybrid models can help scale care to more people.
But humans should remain in the loop, and safety and quality regulation will be key.
Marco Almada at the University of Luxembourg has proposed the principle of projected contestability – designing AI with built-in opportunities for people to question, contest and reshape it.
Rather than fixing culture and norms into static code, projected contestability requires transparency, space for feedback and responsiveness over time.
For young people, this means AI coaches should not only offer advice but also prompt reflection, seek human input at critical junctures, and remain open to contestation by students, teachers and parents alike.
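As a rough illustration of what projected contestability could look like in practice, the sketch below attaches a rationale to each piece of advice and gives students, teachers and parents an explicit channel to contest it, with contested advice flagged for human review. The class and method names are hypothetical, not drawn from Almada’s proposal or any particular system.

```python
from dataclasses import dataclass, field

@dataclass
class CoachingAdvice:
    text: str
    rationale: str  # transparency: why the coach suggested this
    contestations: list = field(default_factory=list)

    def contest(self, who: str, reason: str) -> None:
        """Record a challenge from a student, teacher or parent."""
        self.contestations.append({"who": who, "reason": reason})

    def needs_human_review(self, threshold: int = 1) -> bool:
        """Escalate to human oversight once advice has been contested."""
        return len(self.contestations) >= threshold


advice = CoachingAdvice(
    text="Try a 25-minute focus block before checking messages.",
    rationale="You said notifications are breaking your concentration.",
)
advice.contest(who="teacher", reason="Student has an exam clash this week.")
assert advice.needs_human_review()
```

The design choice is that contestation is a first-class part of the advice itself, rather than a complaint form bolted on afterwards, so feedback can reshape the system over time.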
Industry alone cannot define ethical AI coaching; it tends to focus on profitability and overlook safety.
Nor can this be left to government alone, which is likely to commit fewer resources and produce something that is safe but unengaging.
Industry, governments, clinicians, educators, researchers and young people themselves all need to be part of its development.
Another question is whose cultural and ethical frameworks we should rely on to guide development.
Rather than debating abstract ‘ethics’, the best way forward may be to build trust through transparency about the values we embed in these products.
The panellists converged on a shared principle: AI should not substitute for human care but should elevate what humans do best.
With Australia’s Productivity Commission recently seeking submissions on AI regulation, and international frameworks beginning to emerge, the moment is ripe to ensure that young people’s voices, safety and critical capacities are placed at the centre of design.
Projected contestability offers one path forward: designing AI coaches that remain open to challenge, responsive to human feedback and accountable to the communities they serve.
As Zane Harris, CEO of Neuro Group, put it: “There’s what AI can do, what humans can do and what humans can do with AI.”
“The goal is to build partnership, not replacement.”