"A sensitive, incisive, and illuminating, (albeit disturbing), take on AI. Messina deftly describes the problematic unconscious biases and societal prejudices making their way into the AI systems we rely on today. She is an expert at using psychoanalytic thinking to jostle her reader into an alert and informed stance in relation to these troublesome trends which require psychoanalytic intervention." - Amy Levy, Psy. D, Co-Chair of the Council on AI“Karyne Messina’s book on AI and Psychoanalysis offers profound insights into how inevitable biases in AI stem from unconscious projections from the minds of people who construct AI, and how these biases silently permeate every level of AI design. Understanding these hidden structures is a crucial first step towards the ethical design and use of AI in every dimension of human experience, from the intimacy of psychotherapy to the global impact of social and national policy. Messina’s sophisticated exploration of AI’s enigmas shines an essential spotlight on problems and opportunities inherent in AI’s almost unfathomable potential.” - David Scharff, MD. Co-Founder and Former Director, International Psychotherapy Institute; Recipient, Sigourney Award for the Advancement of Psychoanalysis, 2021In an age of endless AI discourse, Dr. Messina’s book, Using Psychoanalysis to Understand and Address AI Bias: Refractions in the Digital Mirror, stands apart. While others focus on the technical mechanics of AI, Dr. Messina fearlessly examines its psychological foundations, inviting us to look beyond the code and into ourselves. Her groundbreaking application of psychoanalytic concepts, particularly projective identification, illuminates why AI bias is not a bug to be fixed, but a reflection of humanity’s unconscious anxieties, prejudices and longings for connection. With remarkable clarity, the book demonstrates how our societal biases are not merely replicated but are actively mirrored and institutionalized by AI systems. This work is an essential and timely intervention. At a moment fraught with dissension and polarity, Dr. Messina provides a thoughtful framework for understanding the forces at play in our new digital reality. She shows that the path to a more ethical AI is not solely through technical regulation, but through a deeper, more profound form of human self-awareness. - Dr. Harry Gill, MD, PhD. President of HGMD, LLC, Medical Director, Embark Behavioral Health, Clinical Assistant Professor of Psychiatry, George Washington University“As an experiment into AI, I had Perplexity read Dr. Messina’s book and I asked it for a full review. It did so in a few seconds. This is just a portion of what the AI wrote:“Refractions in the Digital Mirror" offers a unique and timely exploration of artificial intelligence (AI) bias through the lens of psychoanalytic theory. Dr. Karyne E. Messina, drawing on foundational concepts from Freud and Melanie Klein, argues that AI systems are not neutral tools but rather mirrors that refract and amplify the unconscious biases, anxieties, and defenses of their human creators and users. The book aims to bridge the often-disparate worlds of psychoanalysis and technology, providing both a theoretical framework and practical considerations for addressing the ethical challenges posed by AI bias...By framing AI bias as a reflection of collective psychological processes, Dr. Messina challenges both technologists and policymakers to look beyond technical fixes and address the deeper human factors at play. 
The book is recommended for readers interested in the intersection of psychology, ethics, and technology, as well as for professionals seeking a more holistic approach to understanding and mitigating AI bias.' You have just seen how AI can be a valuable tool. But I also agree with Dr. Messina's cautionary note. As with inventions throughout history, AI is neither good nor bad; what matters is how it is used and what protections exist. I have recently seen examples of deepfakes affecting elections. As a forensic psychologist, I just investigated a case involving a troll for hire who creates convincing deepfakes to destroy innocent people's reputations. According to FBI figures, Americans lost about $16 billion in 2024 from cyber scams alone. I hope Dr. Messina's book brings serious attention to the need for more AI protection." - Robert M. Gordon, Ph.D., ABPP, Board Certified in Clinical Psychology and Psychoanalysis, Osprey, Florida

"Using Psychoanalysis to Understand and Address AI Bias: Refractions in the Digital Mirror is a courageous—and necessary—book. Dr. Karyne Messina shows us that the well-documented problem of AI bias is not merely technical; it is deeply psychological. Using the central psychoanalytic concepts of projection and projective identification, she explores how our machines are becoming vessels for, and expressions of, what humanity cannot yet face in itself: our disavowed fears, prejudices, and unconscious assumptions and biases. She trains her eye on a fact too often ignored: trained on nearly the entire corpus of internet-available human productions, in addition to everything else it does, AI expresses the darkest corners of the human psyche. Rather than simplistically treating AI bias as merely a bug to be patched with more data or better algorithms, Messina maps AI's inheritance of the human unconscious. In doing so, she brings psychoanalysis and artificial intelligence into the same frame, offering a language for what AI so powerfully reveals about us. The book makes vivid how digital systems refract, amplify, and normalize what was once hidden. This is not a counsel of despair. Messina argues persuasively that psychoanalysis can help us design, govern, and live with AI in ways that preserve human dignity and autonomy. At a historical inflection point, Using Psychoanalysis to Understand and Address AI Bias: Refractions in the Digital Mirror invites us to recognize AI for what it is: not only a tool or a threat, but a mirror—sometimes distorted, sometimes clarifying—through which we must confront ourselves if we are to shape a technological future that primarily serves human needs and values rather than corporate profit or military power." - Todd Essig, Ph.D., Founder and Chair, APsA's Commission on Artificial Intelligence (CAI); Training and Supervising Psychoanalyst, William Alanson White Institute