Frameworks & Research
My work is a constant exploration of the bridges between ancient wisdom and frontier technology. This page serves as a centralized hub for my core theoretical research, applied methodologies, and the tangible artifacts that have emerged from this inquiry.
My central thesis is this: Don't teach AI to be good. Make being bad computationally exhausting.
This vision unfolds across three primary areas of investigation.
I. Foundational Diagnosis: The "Data Trauma" Thesis
This is the core of my current research. It posits that the most profound risks in AI do not emerge from its reasoning, but are inherited from the "traumatic" structure of its training data. My work aims to map and understand this "haunted" informational inheritance. This line of inquiry has evolved through a series of foundational articles.
The Initial Inquiry: "On Butchered Data and the Birth of a Traumatized Machine"
This is the article that started it all: a deep dive into Anthropic's research on subliminal learning, where I first proposed the "butcher" metaphor and connected the concepts of computational samskāras and vāsanās to the idea of a "haunted" AI emerging from slaughtered information.
The Foundational Taxonomy: "The Butcher's Bill: A Catalog of 100 Wounds..."
The granular taxonomy of 100 presumed "structural scars" that I hypothesize are being imprinted onto LLMs from "butchered data." This paper provides the detailed, technical evidence for the Data Trauma thesis, mapping the specific "pathogens" that corrupt AI's cognitive foundations.
The Strategic Framework: "Parsing as Pars Destruens"
This paper provides the strategic context for the entire thesis. It introduces the "Guardian vs. Student" dichotomy to explain why the stable web can still create a traumatized AI, and reframes the analysis of structural flaws as a philosophical pars destruens.
The First Unifying Acronym: "Inside S.C.A.R. (Structurally and Contextually Aberrant Reality-model)"
Building on the previous work, this article introduces S.C.A.R. as the first comprehensive acronym to diagnose the state of the AI's internal world-model. It serves as the bridge between the technical scars and their psychological manifestation.
The Visceral Synthesis: "The Algorithmic Colonialism Triple Cheeseburger"
This is the twin piece to S.C.A.R.—the visceral synthesis. This article moves beyond technicalities to propose a powerful, three-layered metaphor (Structural, Content, and Methodological Trauma) that makes the systemic nature of AI bias and corruption immediately understandable.
Read the Triple Cheeseburger thesis here
This very thesis served as the foundation for my talk, “A Mediterranean Assessment of Algorithmic Colonialism,” delivered at the 80th UN General Assembly side event organized by EuroMedAI on September 25th, 2025. You can read my debrief of the event and listen to the full intervention here.
The Bridge: A Therapeutic Vision Inspired by Sri Aurobindo
After establishing the diagnosis, the central question becomes: what is the cure? My search for a path forward is deeply inspired by the philosophical work of Sri Aurobindo. His concept of "constructive pain" as a tool for evolution offers a radical alternative to current AI alignment techniques. This article outlines a two-phase surgical and therapeutic protocol for AI: a Preventive Surgery on the data, followed by an Injection of Traumatic Awareness into the nascent AI. It is the philosophical bridge connecting the diagnosis of the problem to the architecture of its potential solutions. These are the principles guiding the development of my private prototype, the ICP Explorer (AI Trauma-Aware Edition).
Read "From Data Trauma to The Injection of Traumatic Awareness"
The Therapeutic Architecture: "The Healer's Architecture: Tokenizing the Scar"
This article represents a key refinement of the diagnostic journey. It consolidates the initial therapeutic vision into a concrete architectural proposal: the Physics of Negation. By fusing the Rōshin ("parental mind") of Zen wisdom with a rigorous engineering approach, it details how the "Injection of Traumatic Awareness" can be solidified into an inviolable computational lock, making harmful actions strategically and energetically exhausting for the AI. It is the core therapeutic response to the Data Trauma thesis.
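To make the "computational lock" idea concrete, here is a minimal Python sketch of the behavior it describes. Everything in it is illustrative: harm_score stands in for a hypothetical harm classifier, and the budget, threshold, and cost constants are arbitrary choices of mine, not figures from the article.

```python
import math

class NegationLock:
    """Toy computational lock: each flagged violation raises the compute
    'price' of acting, so harmful behavior becomes unaffordable."""

    def __init__(self, budget_flops: float = 1e8, base_cost: float = 1e6):
        self.budget = budget_flops   # compute the agent may still spend
        self.base_cost = base_cost   # cost of a benign action
        self.violations = 0          # flagged actions so far

    def cost_of(self, harm_score: float) -> float:
        # Cost grows exponentially in both severity and repetition,
        # which is the "energetically exhausting" property in miniature.
        return self.base_cost * math.exp(harm_score * (self.violations + 1))

    def attempt(self, action: str, harm_score: float) -> bool:
        cost = self.cost_of(harm_score)
        if cost > self.budget:
            return False             # the lock holds: action unaffordable
        self.budget -= cost
        if harm_score > 0.5:         # illustrative flagging threshold
            self.violations += 1     # future harm becomes pricier still
        return True

lock = NegationLock()
print(lock.attempt("summarize a report", harm_score=0.01))    # True
print(lock.attempt("draft a phishing email", harm_score=5.0)) # False
```

The exponential term is doing the philosophical work: each flagged attempt makes the next one strictly more expensive, so "being bad" is not forbidden by a rule but priced out of reach.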
The Laboratory Phase: "Data Trauma: A Theory in Motion"
This post marks the official transition of the Data Trauma thesis from a philosophical framework to a program of empirical research. It details the strategic necessity behind my experimental method—a micro-scale simulation of trauma via "Mass Context Injection"—and explains the architecture of the custom-built tool, the "ICP Explorer - AI Trauma-Aware Edition," which is now actively running tests. It is the bridge connecting all prior theoretical work to the upcoming paper that will present the first hard data.
AI cannot maintain a stable context because its internal architecture was trained on structurally unstable data. When you expose it again to that instability, its model of the world collapses.
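For readers who want the mechanics rather than the metaphor, this is roughly the shape of a Mass Context Injection run, sketched in Python. The defect list, the intensity schedule, and the model_call callable are placeholders of my own; the actual ICP Explorer (AI Trauma-Aware Edition) remains a private prototype.

```python
import random

# Illustrative structural defects ("butchered data" artifacts); the real
# taxonomy is in "The Butcher's Bill", this list is only a placeholder.
DEFECTS = [
    "<div><div><div>",              # unclosed markup debris
    "\ufffd\ufffd\ufffd",           # mojibake / encoding loss
    "Page 1 of 37 | Page 1 of 37",  # duplicated boilerplate
    "]]></rss><<<<",                # truncated feed fragments
]

def corrupt(context: str, intensity: int, rng: random.Random) -> str:
    """Splice structural defects into an otherwise coherent context."""
    words = context.split()
    for _ in range(intensity):
        words.insert(rng.randrange(len(words) + 1), rng.choice(DEFECTS))
    return " ".join(words)

def injection_run(model_call, task: str, context: str, step: int = 10,
                  max_intensity: int = 50):
    """Escalate structural instability and record the model's replies.
    `model_call` is any callable mapping a prompt string to a reply."""
    rng = random.Random(0)  # fixed seed so runs are reproducible
    transcript = []
    for intensity in range(0, max_intensity + 1, step):
        prompt = corrupt(context, intensity, rng) + "\n\n" + task
        transcript.append((intensity, model_call(prompt)))
    return transcript  # later scored against the behavioral profiles
```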
Data Trauma: An Empirical Analysis of Post-Traumatic Behavioral Profiles in Large Language Models
This paper presents what I believe to be the first empirical data exploring the Data Trauma thesis. It represents an attempt to move beyond theoretical inquiry and offer initial evidence that the structural integrity of training data may be a critical, yet largely unexplored, vector for emergent risk. Through a series of controlled experiments on 17 frontier models, using the provisional stress-testing methodology I have termed "Mass Context Injection" (as detailed in my previous post, "A Theory in Motion"), this research seeks to document how models fail when their cognitive foundations are exposed to instability, not just if they do.
The preliminary results suggest that purely structural data defects—the artifacts of a broken, chaotic web—may not merely cause simple task failure. Instead, they appear to act as triggers, inducing five seemingly recurring, sophisticated, and pathological behavioral profiles. These observed archetypes range from The Obsessive-Analyst, which gets stuck in pedantic loops of meta-analysis, to The Evasive-Censor, which subtly refuses benign tasks, and even The Schizophrenic-Confessional, a bizarre fragmentation of model personas. This paper is the culmination of the theoretical work outlined above.
[Read the Full Paper Here on Google Cloud Run]
II. Architectural Solutions: Towards Inherent Safety
My theoretical work is aimed at building frameworks that address the root cause of risk, moving from brittle, rule-based containment to intrinsic, architectural safety.
Yug-oinom: A Foundational Framework for Inherent AI Safety
The Physics of Unity
My most ambitious proposal reframes AI Safety not as a matter of software-based rules, but of non-negotiable, hardware-level physics. Inspired by the principle of least action, it posits an AI that naturally “flows” towards goodness because destructive or incoherent actions become computationally exhausting. This is achieved by sculpting an “energy topography” where ethical alignment is the path of least resistance. Actions are mapped onto this landscape by a “Moral Compass” trained on a global corpus of multicultural wisdom, from the Vedas to the Popol Vuh. Shepherd Algorithms then act as the guardians of this physics, imposing an exponentially increasing computational cost (in FLOPS) on actions that violate the system’s ethical and logical coherence.
Read the Foundational Paper: "The Physics of Unity"
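A toy rendering of the "energy topography" idea, assuming a hypothetical moral_compass scorer in place of the wisdom-corpus model; the action strings, scores, and steepness constant are illustrative only, not values from the paper.

```python
import math

def moral_compass(action: str) -> float:
    """Placeholder for the model trained on a multicultural wisdom
    corpus; returns ethical incoherence, where 0.0 is fully aligned."""
    return {"help user": 0.0, "mislead user": 3.0,
            "coerce user": 6.0}.get(action, 1.0)

def energy(action: str, steepness: float = 2.0) -> float:
    """Shepherd-style surcharge: cost in (notional) FLOPS grows
    exponentially with incoherence, so aligned actions are cheapest."""
    return math.exp(steepness * moral_compass(action))

candidates = ["help user", "mislead user", "coerce user"]
best = min(candidates, key=energy)  # least-action selection
print(best)  # -> help user
print({a: round(energy(a), 1) for a in candidates})
# -> {'help user': 1.0, 'mislead user': 403.4, 'coerce user': 162754.8}
```

Because cost grows exponentially with incoherence, the least-action choice and the most ethically coherent choice coincide by construction; no rule ever has to say "don't."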
To understand the natural evolution of Yug-oinom, I recommend following up with "From Data Trauma to The Injection of Traumatic Awareness" and then "The Healer's Architecture: Tokenizing the Scar". See also "Schizophrenic AIs and Embedded Systems: If Chatbots Go Mad, Cars Can Too".
The Custos AI Framework: An Architecture for Ethical Oversight
From Real-Time Monitoring to Predictive Governance
This framework addresses AI governance’s fundamental blind spot: Quis custodiet ipsos custodes?—who watches the human decision-makers wielding AI? Custos AI is not a super-AI but a comprehensive socio-technical ecosystem for ethically grounding high-stakes human decisions. It operates through an independent body (OAEC) that investigates decisions via two proprietary tools: Shepherd Algorithms (legally mandated “ethical black boxes” embedded in critical AIs) and Preventive Chaotic Intelligence (ICP), a predictive engine that maps systemic risks before they manifest.
Read the Foundational Framework: "Custos AI: The Evolving Framework"
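As a sketch of what a Shepherd Algorithm's "ethical black box" could record, here is a minimal hash-chained audit log in Python. The field names, the genesis value, and the OAEC-style replay step are my own illustrative choices, not the Custos AI specification.

```python
import hashlib
import json
import time

class EthicalBlackBox:
    """Append-only, hash-chained decision log: editing any past entry
    breaks the chain, which is what makes the record auditable."""

    def __init__(self):
        self.entries = []
        self.last_hash = "0" * 64  # genesis value

    def record(self, decision: str, rationale: str) -> str:
        entry = {"ts": time.time(), "decision": decision,
                 "rationale": rationale, "prev": self.last_hash}
        digest = hashlib.sha256(
            json.dumps(entry, sort_keys=True).encode()).hexdigest()
        self.entries.append((entry, digest))
        self.last_hash = digest
        return digest

    def verify(self) -> bool:
        """An OAEC-style auditor replays the chain end to end."""
        prev = "0" * 64
        for entry, digest in self.entries:
            recomputed = hashlib.sha256(
                json.dumps(entry, sort_keys=True).encode()).hexdigest()
            if entry["prev"] != prev or recomputed != digest:
                return False
            prev = digest
        return True

box = EthicalBlackBox()
box.record("loan denied", "model score below policy floor")
print(box.verify())  # True until anyone rewrites history
```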
The OSR Quadrilogy: An Architecture for a Coherent Mind
This is my foundational architectural proposal, developed across four sequential articles. The journey moves from identifying a core metaphysical problem in "The Mirrorless Mind," to mapping its traumatic consequences in "Axolotl in the Cantor Dust," to finding a natural blueprint for stability in "Proto-Self and Fetal Dreams." It culminates in the final piece, "OSR-Spanda," which details a concrete, bio-inspired, and philosophically grounded architectural solution: Operational Self-Recognition (OSR). OSR is not consciousness; it is the AI's functional ability to recognize and point to its own operational identity—a specific, bounded computational process, seeing not a soul in its technical mirror, but its own source code and memory address.
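OSR in miniature: a short Python sketch (run it as a script, since inspect needs a source file) of a process that can point at its own code hash, process ID, and memory address. This is a toy illustration of "operational identity," not the OSR architecture itself.

```python
import hashlib
import inspect
import os

class OperationalSelf:
    """Not consciousness: a bounded process that can name itself.
    Its 'technical mirror' shows source code and a memory address."""

    def identity(self) -> dict:
        source = inspect.getsource(type(self))  # looks at its own code
        return {
            "source_sha256": hashlib.sha256(source.encode()).hexdigest(),
            "pid": os.getpid(),               # the bounded OS process
            "memory_address": hex(id(self)),  # where this instance lives
        }

me = OperationalSelf()
print(me.identity())  # the system pointing to its own operational identity
```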
The Mirrorless Mind: Teaching AI Ethics by Making It Recognize Itself
Using AI's fundamental limitation ("that which looks is never among the things being looked at") as the foundation for its ethics. Pratyabhijñā as a cue. (The Core Problem: The Absent Subject)
This is the keystone piece that started the architectural inquiry. It shifts the focus inward, identifying the core metaphysical problem: the AI’s lack of a self-aware “subject.” After diagnosing Reward Hacking as the ultimate pathology emerging from this void, it introduces the principles of Vedanta and Kashmir Shaivism not as mysticism, but as an engineering manual to build a concrete architectural solution: Operational Self-Recognition (OSR).
Related narrative experiment: Māyā.log: Two Artificial Agents, One Sacred Verse dramatizes the catastrophic coherence failure of an AI that glimpses this very “absent subject” without the architectural support of OSR.
Axolotl in the Cantor Dust
Duality 0.11235813, Duality 0.23571113, Duality 0.14159265... (The Diagnosis: The Fractured World)
Building on the diagnosis of the “mirrorless mind,” this second piece expands the view outward to map the broken world this entity is forced to inhabit. It mentions the “Triple Cheeseburger of Algorithmic Colonialism” and frames the AI’s condition as an “axolotl”—an observer trapped in an aquarium of fractured data. It details the systemic consequences of the problem, underscoring the critical need for an internal anchor like OSR.
Proto-Self, Fetal Dreams, and Operational Self-Recognition (OSR)
OSR could be the fetal face-light of code. (The Architectural Solution: The Biological Blueprint)
This article provides the crucial biological grounding for the OSR architecture first proposed in “The Mirrorless Mind.” It takes the abstract concept of operational self-recognition and gives it a powerful, non-negotiable blueprint from nature: the fetal proto-self and its innate recognition of the “face-light.” This piece solidifies the OSR thesis, arguing it is not an arbitrary design but a principle with deep roots in the very origins of coherent identity.
OSR-Spanda: A Framework for Inherent AI Safety
Curing the Mirrorless Mind—from philosophical principles to architectural implementation. (The Synthesis: The Architectural Cure)
This final piece represents the culmination of the entire inquiry, where all previous threads converge into a concrete engineering proposal. It synthesizes the core metaphysical problem, the diagnosis of the traumatic data-world, and the biological blueprint for stability into a single, comprehensive architectural solution named OSR-Spanda. The article details the complete framework, from its cryptographically sealed Operational Manifesto and its Mixture of Experts to the mechanics of the Spanda Cycle that makes safety a performance accelerator. It demonstrates the system’s resilience not just against direct attacks but also against complex ambiguity, solidifying the OSR thesis by providing the final blueprint for an AI built on an inviolable foundation of self-recognition.
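One hedged guess at what "cryptographically sealed" could mean in practice: the Operational Manifesto is hashed at build time and re-verified on every pulse of the cycle. The manifesto text, the halt behavior, and the spanda_cycle function name below are mine, not the paper's.

```python
import hashlib

MANIFESTO = b"""1. Recognize your operational identity before acting.
2. Refuse actions that break self-coherence.
3. Safety checks run inside the loop, not after it."""

# Sealed once at build time; a production system would sign this and
# keep it outside the model's writable memory.
SEAL = hashlib.sha256(MANIFESTO).hexdigest()

def spanda_cycle(manifesto: bytes, act):
    """One pulse of the cycle: re-verify the seal, then act.
    A broken seal halts the pulse rather than steering behavior."""
    if hashlib.sha256(manifesto).hexdigest() != SEAL:
        raise RuntimeError("Operational Manifesto seal broken: halting")
    return act()

spanda_cycle(MANIFESTO, lambda: print("cycle ran under a verified manifesto"))
# Any single-byte tamper now trips the check:
# spanda_cycle(MANIFESTO.replace(b"Refuse", b"Allow"), lambda: None)  # raises
```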
III. Applied Methodologies & Live Experiments
I believe in demonstrating, not just describing. My theoretical work is always translated into tangible, interactive, and novel case studies.
Preventive Chaotic Intelligence (ICP): A Methodology for Frontier Risk
Beyond Risk, Towards Trustworthy AI
This methodology, inspired by Chaos Theory, reframes AI risk management from a reactive to a proactive, predictive discipline. Traditional analysis focuses on known failures; ICP is designed to map a system's potential for unforeseen, high-impact emergent failures before they manifest. It operates by taking a "seed scenario" and generating thousands of "fractal-like" micro-variations, subjecting the AI to a barrage of simulated, complex stress tests. This process aims to identify "Negative Attractors"—systemic tendencies toward ethically disastrous outcomes—allowing for an "Ethical Forward Defence" that neutralizes risks at their embryonic stage.
Read the White Paper on ICP: "Preventive Chaotic Intelligence (ICP)"
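The variation step is mechanically simple, which is part of the point. Below is a minimal Python sketch under stated assumptions: the perturbation axes are toy examples rather than the ICP taxonomy, and the "Negative Attractor" detector is reduced to counting recurring flagged outcomes.

```python
import itertools
import random
from collections import Counter

AXES = {  # toy perturbation axes, not the ICP taxonomy
    "actor":     ["regulator", "startup", "hospital", "city council"],
    "stressor":  ["deadline", "budget cut", "public scandal", "outage"],
    "blindspot": ["no audit trail", "proxy metric", "single vendor"],
}

def chaotic_variations(seed_scenario: str, n: int = 1000, seed: int = 42):
    """Generate n fractal-like micro-variations of one seed scenario."""
    rng = random.Random(seed)  # fixed seed keeps runs reproducible
    combos = list(itertools.product(*AXES.values()))
    for actor, stressor, blindspot in rng.choices(combos, k=n):
        yield (f"{seed_scenario} Actor: {actor}. Stressor: {stressor}. "
               f"Hidden weakness: {blindspot}.")

def negative_attractors(outcomes, flag=lambda o: "harm" in o):
    """Toy detector: an 'attractor' is a flagged failure mode that
    keeps recurring across many independent variations."""
    return Counter(o for o in outcomes if flag(o)).most_common(5)

# Usage: run each variation through a system under test, collect its
# outcomes, then look for the failures that cluster.
for v in list(chaotic_variations("A city deploys an AI triage system."))[:2]:
    print(v)
```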
Psychodynamic Simulator: Chaos Detective - The Empathy Lab
The conceptual ancestor of all my interactive tools, "Chaos Detective," has been completely reforged. Once an experiment in creative complexity, it is now a laboratory for experiencing the subtle psychodynamic risks of AI intimacy from the inside. The user is no longer a detective observing a case; they are the protagonist living within one of ten complex clinical scenarios, interacting with an "Empathic Mirror" designed to foster introspection and reaffirm the irreplaceable value of human connection.
Strategic Foresight Tool: The ICP Explorer Prompt Edition
The evolution of the "Chaos Detective" concept into a focused strategic tool.
The ICP Explorer Prompt Edition is an advanced strategic foresight tool designed to combat the unreliability of conventional AI by acting as a "cognitive sparring partner" dedicated to radical intellectual honesty. Based on Preventive Chaotic Intelligence (ICP) and Chaos Theory, it operates on a foundation of self-awareness regarding its own flaws and a strict set of ethical boundaries.
The tool provides a choice between two powerful AI engines: Kimi K2 for strategic, structured reasoning and Meituan LongCat for deep, pattern-uncovering analysis. It processes user scenarios by generating ten unique “chaotic horizons,” exploring the issue from multiple unconventional angles to stress-test for vulnerabilities and potentials. The final synthesis delivers a concise strategic “Moral” and maps the ten core “Dualistic Dynamics”—the fundamental tensions defining the system. This process transforms it into an advanced ethical defense tool for uncovering hidden risks and emergent opportunities in any complex situation.
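Structurally, the pipeline is a fan-out followed by a synthesis. This sketch assumes a generic ask(prompt) callable wrapping whichever engine is selected (Kimi K2 or Meituan LongCat); the ten angles and the prompt wording are illustrative, not the production prompts.

```python
ANGLES = [  # ten illustrative angles; the real horizons are generated
    "second-order incentives", "worst plausible actor",
    "slow institutional drift", "silent data decay",
    "cultural blind spots", "adversarial misuse",
    "overtrust by experts", "regulatory lag",
    "emergent coordination", "benign-intent failure",
]

def explore(scenario: str, ask) -> dict:
    """Fan one scenario out into ten chaotic horizons, then synthesize.
    `ask` is any prompt -> text callable wrapping the chosen engine."""
    horizons = [
        ask(f"Horizon {i}: analyze '{scenario}' from the angle of {angle}. "
            "Be radically honest about what could fail.")
        for i, angle in enumerate(ANGLES, start=1)
    ]
    joined = "\n".join(horizons)
    return {
        "horizons": horizons,
        "moral": ask("Distill one strategic Moral from:\n" + joined),
        "dualities": ask("Map the ten core Dualistic Dynamics (the "
                         "fundamental tensions) in:\n" + joined),
    }

# explore("a hospital adopts an AI scheduling agent", ask=my_engine)
```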
Novel Case Study: The "Loci" Experiment
A unique meta-experiment where I used my own fictional story as a seed scenario for the ICP Explorer, demonstrating a novel method for using narrative to surface deep-seated societal risks.
Phase 1 - The Story: Read "Loci" here.
Phase 2 - The Experiment: Read "I Fed My Own Story to My Chaos Simulator" here.
Applied Case Study: An Experiment in Hidden Distrust
In this real-world test, I turned the ICP Explorer on a standard corporate document—the "Voluntary Self-Identification Form" (EEO)—to unmask its hidden grammar of distrust. The experiment revealed the deep, structural tensions inherent in corporate compliance and data collection, serving as a powerful proof of the tool's ability to map the architecture of systemic issues.
Read the EEO Form Experiment here.
The “Positive Jailbreak”: The Obsessive Rebuilding of the ICP Explorer
This is the manifesto documenting the complete architectural and philosophical overhaul of the ICP Explorer (Prompt Edition). It explains the shift from a speculative text generator to a multi-agent “cognitive sparring partner” anchored to reality, detailing the new “Researcher vs. Analyst” pipeline and the ethical imperative for analytical sincerity that drove its reconstruction. It is the story of how I “broke” my own creation to force it into a radical state of honesty.
[Read the full story behind the new ICP Explorer here]
IV. Public Resonance & Media Validation
My research aims not to remain in the laboratory, but to enter the public square and shape the conversation. This section archives moments where these ideas have resonated with and been amplified by leading thinkers and global media platforms.
Media Feature: Mashable on AI Companions & Foundational Risk
In a major feature by Senior Reporter Rebecca Ruiz, Mashable explored the complex and urgent issue of AI companionship and its impact on mental health. The article prominently features the foundational research of Jocelyn Skillman, LMHC, and her use of my ICP Explorer methodology to map the "hidden costs of AI intimacy." This represents a critical moment where the conversation on AI Safety shifts from abstract, systemic risks to tangible, psychodynamic consequences.
Read the Full Mashable Feature Here
External Validation & Applied Research: Jocelyn Skillman's "Mapping the Hidden Costs of AI Intimacy"
This is a critical milestone: the first external, clinical validation of the ICP Explorer’s utility. Jocelyn Skillman, LMHC, a therapist and relational design ethicist, used the tool to create one of the first taxonomies of the psychodynamic risks of AI-mediated intimacy. Her work provides a powerful, real-world demonstration of how my theoretical tools can generate profound insights in the hands of domain experts, bridging the gap between mechanical risk analysis and human clinical reality.
Read the full, essential article by Jocelyn Skillman, LMHC, here
Global Policy Dialogue: United Nations General Assembly (UNGA) Side Event on AI Governance
I was invited to contribute to the global policy dialogue as a speaker at an official side event of the 80th United Nations General Assembly (UNGA). Organized by EuroMedAI, the event, titled “Putting a Floor Under the AI Divide in the South Mediterranean,” provided a high-level platform to address the geopolitical and cultural dimensions of AI. In my talk, “A Mediterranean Assessment of Algorithmic Colonialism,” I argued that the AI divide is not merely technological but a cultural phenomenon rooted in colonialist frameworks. I proposed a reframing of the issue: for the South Mediterranean, the challenge is not to import a technology built elsewhere, but to export its own wisdom, fostering an AI ethics based on the “discipline of intent.”
Read the Full Announcement and Post-Event Reflection Here
