Interview

Beyond the Technology: Human Values in the Age of AI - A conversation with Eva M. Ellis

Guest: Eva M. Ellis · Topic: AI ethics, mental health, and human-centred technology
Format: Written Interview Q&A · Published: 30/3/2026

As artificial intelligence (AI) becomes increasingly present in education, healthcare, and everyday life, questions surrounding its ethical use are becoming harder to ignore. In this interview, I spoke with Eva M. Ellis, an AI Ethics and Mental Health Partner whose work explores the intersection between technology, human wellbeing, and ethical reflection. Our conversation focused not only on how AI may support mental health work, but also on why psychology, adaptability, and human relationships remain central in this rapidly changing field.

How Eva’s Journey into AI Ethics Began

Interestingly, Eva’s path into AI ethics did not begin with technology itself, but with human relationships. Her early work in mental health coaching, particularly with young adults and families, showed her that growth and healing rarely happen only within formal sessions. Instead, meaningful change often takes place in everyday social moments, family interactions, and the rebuilding of relationships over time.

As Eva expanded into ADHD coaching and in-family communication work, she began to notice how people’s relationships with technology changed when they started questioning why they were using it. This sparked her interest in the ethical use of technology long before AI became the centre of her work. Later, during the COVID-19 pandemic, writing her novel CarIAm #7days prompted her to think more deeply about the relationship between AI and humans, eventually leading her into formal study in AI ethics.


Why Psychology Is Highly Transferable into AI Ethics

One of the most striking points Eva raised was that psychology and AI ethics are not as separate as they may first appear. In her view, mental health training already provides many of the ethical foundations required in AI-related work.

She drew a clear parallel between long-established human ethics and contemporary AI ethics: informed consent relates closely to transparency, confidentiality links to data privacy, and the professional principle of “do no harm” mirrors AI safety. For Eva, this means that mental health professionals and educators already carry the ethical language of AI adoption within their existing training, even if they have not yet been shown how it translates into technological contexts.

She also emphasised the importance of adaptability. Alongside IQ and EQ, Eva highlighted AQ (Adaptability Quotient) as an increasingly important skill for professionals navigating AI. The ability to learn, unlearn, and respond thoughtfully to uncertainty may become one of the most valuable strengths in this field.


Eva’s Advice for Students Interested in This Field

For students hoping to explore AI ethics and mental health, Eva’s advice was surprisingly grounded: start with your passion, not AI.

Rather than building a career around technology alone, she encouraged students to first think about the population they want to serve, the issues they genuinely care about, and the professional foundation they want to build. In her view, once that human focus is clear, the connection to AI will emerge more naturally and meaningfully.

Eva also noted that mental health already intersects with AI in several important areas, including digital therapeutics, workplace wellbeing, crisis intervention, and AI-supported clinical tools. Across all of these spaces, she suggested that ethical and psychological insight will remain just as important as technical knowledge.


How AI Can Support Rather than Harm Mental Wellbeing

Eva sees genuine potential in AI as a mental health support tool, especially between human sessions. She described how AI can assist with check-ins, mood tracking, reflection prompts, habit-building exercises, and session preparation, allowing support to continue outside the therapy or coaching room.

At the same time, she made a clear distinction: AI may function as a companion to care, but not as a replacement for human therapy itself. For Eva, therapy and coaching are fundamentally human processes built around mutual vulnerability, emotional presence, and a relationship that AI cannot fully replicate.

This means responsible use depends not only on what AI can do, but on whether humans maintain ethical oversight and clear boundaries around its role.


The Thinking Behind Her Newsletter

Eva’s newsletter, AI Ethics-Mental Health Studio, grew out of a simple but powerful realisation: the ideas she had been developing through study and reflection needed a space beyond private notes.

As she explored AI ethics more deeply, she began using AI as a thinking partner to help organise, challenge, and expand her ideas. Rather than seeing AI only as a tool for editing or efficiency, she experienced it as something that could support the thinking and writing process itself.

Through this process, the newsletter became more than a publication. It evolved into what she describes as a thinking studio: a living space for reflection, writing, and dialogue about the ethical relationship between humans and technology.


The Kind of Conversations Eva Hopes to Spark

Throughout the interview, Eva returned repeatedly to one core idea: the most important conversation is not simply about what AI is, but about what AI means for us as humans.

She described strong parallels between the mental health journey and the AI ethics journey. Both require self-awareness before action, regulation before reaction, and reflection on values that cannot be fully reduced to systems or rules.

In this sense, her platform is not only about informing people about AI developments, but about helping professionals, educators, and students remain grounded in their humanity while navigating technological change.


How Students Can Stay Critically Engaged

When asked how students can remain informed in such a rapidly evolving field, Eva did not offer a single formula. Instead, she encouraged students to begin with self-reflection: why does this field matter to you, and what does mental health mean to you personally?

From there, she suggested approaching AI critically and experientially: testing tools, researching the ethics behind them, observing their effects, and then stepping back to reflect on what one’s own values and professional instincts are saying.

For Eva, staying engaged is not just about consuming information. It is about building personal ethical filters, practical skills, and the courage to think independently in a field that is still being written.

Kumi’s Reflections on the Discussion

What stood out to me most in this interview was Eva’s advice to start with passion rather than AI itself.

Initially, I assumed that entering this field would require focusing heavily on understanding AI systems and keeping up with rapid technological developments. However, Eva’s perspective shifted this assumption. Instead of centring technology, she emphasised the importance of first understanding who we want to help and why we are drawn to the field in the first place.

I found this particularly meaningful because it reframes AI not as the starting point, but as something that should align with an already established purpose. Psychology students do not need to “chase” AI; rather, they need to develop strong foundations in human understanding, allowing the connection to technology to emerge naturally.

Another key point I took away was her emphasis on adaptability. Beyond academic knowledge or technical skills, the ability to learn, unlearn, and adjust to uncertainty is becoming one of the most important competencies in a rapidly evolving field like AI ethics.

This conversation made me reflect on how preparing for the future of psychology is not only about gaining new knowledge, but also about developing clarity in one’s motivations and the flexibility to navigate change while staying grounded in core human values.


Note: This interview is shared for educational and portfolio purposes. It does not constitute professional or clinical advice.

Written by Cheng U (Kumi) Lam for Psych with Kumi.
© 2026 Cheng U (Kumi) Lam. All rights reserved.