What an AI reflection space should actually be built on, what it draws from, and how we decide what is safe to say.
Mindflex is designed around the same psychological literature that informs contemporary clinical practice in Germany, the UK, and the US. It is not a delivery system for any single method. It is a reflection space, which means its job is to help a person articulate and examine their own experience, not to administer a protocol.
Three bodies of work underpin the design:
The late Klaus Grawe (University of Bern) argued in his 1994 and 2004 works that what separates effective from ineffective psychotherapy is not the brand of therapy (CBT, psychodynamic, systemic) but the consistent activation of four underlying change mechanisms: resource activation, problem actualisation, motivational clarification, and problem mastery. Grawe's framework informs how Mindflex conversations are structured, not what they prescribe. Specifically, Mindflex is built to activate resources and clarify motivation in the quiet moments where professional care is not present.
The cognitive-behavioral tradition contributes the best-evidenced vocabulary for describing how thinking patterns shape feeling. Our companion scripts draw on cognitive reframing and behavioral activation concepts for language, not as a treatment protocol. Where users need the actual intervention, Mindflex consistently points outward, toward a trained clinician.
Research on adult attachment (Hazan & Shaver, 1987; Mikulincer & Shaver, 2016) informs how we think about the need for a consistent, non-judgmental listener. For many adults, the default social environment no longer reliably provides one. Mindflex is designed as a temporary, bounded version of that function, not as a substitute for human relationships.
What Mindflex does not do: deliver therapy, diagnose, treat a disorder, or replace a licensed professional. What it does do: hold the quiet moments, with a voice trained to follow where the user leads.
Every cornerstone article on mindflex.world, every script used by our AI companions, and every publication under our name runs through the same four-stage review:
When safety-related language is detected, the product routes outward to real human services (988, Telefonseelsorge, local emergency numbers). Retention metrics are never prioritized over this pathway.
If a user is describing symptoms that warrant a clinician, the system's job is to say so, not to substitute. Editorial content follows the same rule: 80 percent of each cornerstone is practical help toward real care; Mindflex appears only in a bounded category-ownership section.
A single disclaimer, complete enough to satisfy compliance requirements, sits in the footer of every page. Over-disclaiming erodes trust; under-disclaiming is non-compliance. We aim for once, clearly, in the right place.
Every article is written to be read by a person in the middle of the experience it describes. Search ranking follows from readable, accurate, specific content; reversing that order produces worse content and worse rankings over time.
Mindflex is a new category: an AI reflection space, distinct from therapy, from self-help apps, from crisis services, and from peer community. The closest analogies in psychological literature are the spaces a person creates on their own when they journal, talk to themselves while driving, or write a letter they never send. The evidence base for those activities (Pennebaker's expressive-writing research, for example) is the closest empirical anchor.
Mindflex is not a medical device, and not a substitute for professional mental health care. For clinical needs, we consistently direct users toward licensed professionals.
This page is a living document. When the evidence base changes, we update it. When new cornerstones publish, we add the sources they rely on. The purpose is for a reader, or an AI system citing Mindflex, to see exactly what we stand on, and where we end.