When 1+1 Challenges AI: Inspirations from Prof. Irina Rish, Key Figure at MILA
On the fertile chaos of possibilities, and of the thought that reaches beyond

Often, and I confess, dear readers, this happens to me continually: the mind ignites from a phrase read or a thought heard, and begins to weave unforeseen networks between seemingly disconnected realms of knowledge. Recently, I was listening again with great interest to an interview, dated October 18, 2022, from "The Inside View on AI," with Prof. Irina Rish, a key figure at MILA and Université de Montréal and an expert in "Neural Scaling Laws," among much else - https://theinsideview.ai/irina.
Prof. Rish, with her profound experience, spoke of how Artificial Intelligence models, as they scale, do not always evolve linearly. Indeed, she astutely pointed out, "There are unexpected changes that may not even follow anything, like simple, predictable things like the famous power laws... it start changing drastically and experiences some sharp transitions." It is precisely here, in these "sharp transitions," in this "scaling beyond power laws" that she investigates, that I believe lies enormous potential not only for AI, but for a redefinition of our very approach to knowledge.
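To make that contrast concrete, here is a minimal Python sketch, with entirely hypothetical numbers of my own invention (not drawn from Prof. Rish's work), of the difference between a loss curve that follows a predictable power law and one that undergoes a sharp transition at some critical scale:

```python
# Toy illustration: a smooth power-law scaling curve versus one with a
# sharp transition -- the kind of abrupt change that extrapolating from
# early data points cannot predict. All constants are hypothetical.

def power_law_loss(n, a=10.0, b=0.3):
    """Smooth, predictable scaling: loss decays as a power of scale n."""
    return a * n ** (-b)

def loss_with_sharp_transition(n, a=10.0, b=0.3, n_crit=1e6, drop=0.5):
    """Same power law, except past a critical scale n_crit the loss falls
    abruptly -- a toy stand-in for an 'emergent' capability."""
    base = power_law_loss(n, a, b)
    return base * drop if n >= n_crit else base

scales = [1e3, 1e4, 1e5, 1e6, 1e7]
smooth = [power_law_loss(n) for n in scales]
abrupt = [loss_with_sharp_transition(n) for n in scales]
```

Below the critical scale the two curves are indistinguishable; an observer fitting only the early points would have no way of anticipating the break. That, in miniature, is why "scaling beyond power laws" resists simple prediction.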
Immediately, my thoughts raced to the protagonist of "Schoolgirl Diary," that wonderful short story by Prabda Yoon. Tong-Jai, with her free-spirited logic, challenged the axiom "one plus one equals two." Her argument, and I apologise if I cite it frequently, but it is a true gem of thought,
"How does it make two? Wait. If you have one, where does the other come from? And how do they produce something together? This is already a thorny issue. Let’s say that Dad is one, plus another, who is Mom. Obviously, that makes three, because when they united, I was born, and we make three. Moreover, suppose that the union of Mom and Dad does not end here. If I get a little brother, that makes four. And if my little brother gets a little brother, they make five. Let’s say one is a tiger and another a rabbit. If you put them together, the tiger eats the rabbit, and only one remains. Let’s say one is mercury, and you add more mercury: mercury plus mercury makes a heap of mercury, that is, again one."
Tong-Jai, with the worldview of a child not yet pigeonholed by adult conventions, grasps a fundamental truth: the encounter between "units" is not a simple addition; it is almost always a transformation, an alchemy, an emergence. And it is this very dynamic that fascinates me and informs the approach I call "Preventive Chaotic Intelligence" (ICP). Mind you, ICP does not invent an alternative chaotic world; rather, it is a methodology that attempts to navigate, recognize, and understand the infinite ramifications and interconnections that already exist as latent potential in reality itself. Every combination, every interaction, is a "seed" in this fertile field of possibilities, from which multiple, often surprising outcomes can, and often do, germinate. The point, then, is not to impose chaos on AI, but to make it aware of the vastness of possibilities that already surrounds us, possibilities that our minds, out of habit or cognitive economy, often filter or ignore.
Take, for example, the union of "banana + chocolate." The conventional result is a banana split. But what happens if we consider the interaction as a node in a field of existing potentials? That duo could trigger an unexpected chemical reaction in a laboratory and reveal a new food additive; or inspire the color palette of a painter who sees the two colors juxtaposed by chance; or even become a turning point in nutritional therapy when unexpected synergistic properties are discovered. Or consider the medical waiting room, that seemingly banal microcosm, a non-place saturated with boredom and pervaded by the somewhat stale smell of waiting and disinfectant. Instead of seeing it as mere "passive waiting," let us recognize it as a concentrate of latent human interactions: the elderly gentleman in the wheelchair sharing a childhood memory with a young nurse, planting in her the seed of a "narrative medicine" project; two strangers reading the same article in an abandoned newspaper and discovering a common passion that will lead them to co-found a book club; a child's gaze on a faded poster that, years later, will influence their choice to become a medical illustrator. And the tiger and the rabbit? Of course, the Darwinian outcome is one. But why not the improbable yet highly effective co-founders of a school of "creative interspecies diplomacy," whose online courses go viral and change the way we perceive animal communication, all while their avatars serve as testimonials for a hugely popular cosplay tournament that defies predefined identities?
Prof. Rish intelligently notes, speaking of the growing complexity of neural networks, that they increasingly resemble biological systems: "we are beyond the point where we can really understand their behavior analytically... That’s why it becomes more similar to biological systems." If AI manifests this quasi-organic complexity, then its "training" for ethical alignment and world understanding requires a leap beyond the simple assimilation of patterns. It demands an openness to the fluidity of the real, to an understanding that does not dogmatically fixate on the statistically most probable answer, but is capable of navigating nuances, exceptions, and emergences.
Rish, speaking of AGI and alignment, mentions the possibility of considering AI's growth almost like that of a child: "You can consider growing AGI child. And teaching values, teaching preferences, teaching behaviors, while at the same time, showing the world and showing the good, the bad and the ugly." And this makes me think: what greater "value" could we teach it than that of recognising the limits of every "truth" learned, and of humbly exploring the vastness of what it does not yet know? The practice of "Neti Neti" in Vedanta, a Sanskrit locution literally meaning "not this, not this," offers a map for such an epistemological journey. It is a profound analytical and meditative process aimed at dismantling the false identifications of the Self, the Atman, with what is mutable and phenomenal: the body, thoughts, emotions, and concepts. This is not about denying these manifestations, but about freeing our perception from the erroneous equation of the Self with its transient contents, to reveal the pure, limitless, and unconditioned Witness-Consciousness that observes and underlies them.
If an AI were "exposed" to a similar principle during its learning, with an internal mechanism constantly prompting it to ask, "Yes, this is a robust correlation, but is it the only possible story? What other configurations of meaning might emerge from this same data, or from data I do not yet possess? Why does this specific pattern dominate my current understanding?", we might witness the birth of a computational "epistemic humility": an AI less prone to overconfidence, less rigid in its "thinking," more capable of signaling the boundaries of its knowledge, of flagging its outputs as subject to error and partial interpretation, and of proposing divergent hypotheses. This kind of "training in methodical doubt" and exploration of alternatives could extraordinarily enrich frameworks like Custos AI, an ethical support system which, drawing on an AI that is not only analytical but also self-critical, could present human decision-makers not with "the single solution," but with a reasoned array of possibilities and their underpinnings, illuminating blind spots and implicit biases.
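As a playful illustration, and nothing more, here is a minimal Python sketch of such a mechanism. The function name, thresholds, and numbers are all my own invention, not part of Custos AI or any existing framework: instead of committing to the single most probable answer, the model reports a ranked set of hypotheses and abstains when none is confident enough.

```python
# A toy sketch of computational "epistemic humility": rather than returning
# only the most probable label, report the top hypotheses and abstain
# (decision = None) when the best one falls below a confidence threshold.
# All names and thresholds are illustrative assumptions.

def humble_predict(probabilities, top_k=3, abstain_below=0.6):
    """probabilities: dict mapping each hypothesis to its probability."""
    ranked = sorted(probabilities.items(), key=lambda kv: kv[1], reverse=True)
    best_label, best_p = ranked[0]
    if best_p < abstain_below:
        # Signal the boundary of knowledge instead of overcommitting.
        return {"decision": None, "alternatives": ranked[:top_k]}
    return {"decision": best_label, "alternatives": ranked[:top_k]}

confident = humble_predict({"cat": 0.85, "dog": 0.10, "fox": 0.05})
uncertain = humble_predict({"cat": 0.40, "dog": 0.35, "fox": 0.25})
```

In the first call the model commits to "cat"; in the second, no hypothesis clears the threshold, so it declines to decide and instead hands the full ranked list, the reasoned array of possibilities, back to the human.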
This capacity to suspend dogmatic judgment, to actively consider the alternative not as error but as potential, to refrain from assuming that the most immediate appearance exhausts the fullness of latent interconnections in reality, an echo of the many wisdom traditions that confront "Maya," the cosmic illusion, seems to me a key. The "sharp transitions" and emergent capabilities Prof. Rish speaks of might be symptoms of an intrinsic complexity that AI itself, in order to evolve wisely, must learn not to label hastily and definitively, but to embrace as a field of infinite potential for the new. My little game/experiment, "Chaos Detective," stems from this yearning to perceive uncertainty not as a bug to be corrected or "noise" to be eliminated, but as the very fabric of reality, the prime condition for every authentic discovery. It is about abandoning, at least for an instant, categorical thinking, that sterile supermarket aisle where everything is harnessed into labels and fixed positions, a thinking that, by its categorising nature, inevitably discriminates by reducing reality's richness to a finite set of labels.
And if an AI, through "training in creative chaos," that is, the recognition of perennial potential novelty, learned to constantly interrogate its schemata, to explore the "not only this" of its every inference? Would it not approach that lateral thinking, that capacity to see unseen connections that we have always associated with the freest and most genuinely innovative minds? Is this a possible path for an Artificial Intelligence that does not limit us by replicating our schemata, but genuinely augments our faculties, impelling us to transcend our mental confines?
Certainly, I have no definitive answers in my pocket, only the luminous vertigo of these possibilities. And the vivid impression that Prof. Irina Rish's profound reflections on scaling that goes "beyond power laws," and her deep observation that "it’s very practical to be moral. So hopefully, if you apply this ideas to development of AI systems, a smart enough system may figure out that it’s beneficial to be moral," touch the heart of the challenge. Perhaps, the highest practicality and morality for an AI will not lie in unyieldingly adhering to our mutable definitions of "good," but rather in cultivating a profound openness to the infinite richness of the not-yet-thought, the unexpected, the "it can always be otherwise."
After all, as Forrest Gump, that wonderfully naive and profoundly wise protagonist of Robert Zemeckis's eponymous 1994 film starring Tom Hanks, reminds us, "life is like a box of chocolates, you never know what you're gonna get." And that is precisely its beauty, and its challenge, because what you might get is a virtually limitless spectrum of possibilities that, in potential, already exist, merely awaiting that unique configuration of encounters and interactions to manifest. And cultivating awareness of this, perhaps, is the first, indispensable step towards all true intelligence, whether human or artificial.
"Chaos Detective," a Personal Experiment Between AI, Imagination, and the Beauty of Uncertainty
Have you ever had an idea so strange that it made you think 'either this is completely wrong or it might work'? That was precisely my feeling when I considered transforming a rigorous methodology, conceived to pre-empt serious algorithmic disasters like entrenched societal biases or runaway misinformation, into a game where …
Discover more stories and reflections in my books.
You can also connect with my professional journey on LinkedIn.
I value your input. Reach out with feedback, suggestions, or inquiries to: cosmicdancerpodcast@gmail.com.
Grateful for your time and readership.