Custos AI: The Evolving Framework – A Comprehensive Podcast Overview
The journey continues: a new in-depth look at Custos AI, fueled by Neapolitan joy
Dear readers,
It is with great enthusiasm that I return today to discuss Custos AI, a concept I began developing and exploring with you on these pages some time ago. As you may recall, since the initial article introducing the "Ethical Hawk-Eye," my ambition has always been to address the eternal question: Quis custodiet ipsos custodes? – Who watches the human decision-makers, those vested with authority, especially when they wield powerful AI tools?
Over a series of seven previous articles, I progressively built and detailed the various pillars of this framework, exploring ideas such as the "Ethical Hawk-Eye" itself, its operationalization, its dialogue with visions for safe AGI, the crucial Shepherd Algorithms, the Directive Normative Ethical Architrave (AEND), and even the integration of preventive chaotic intelligence.
After this journey of exploration and definition, I felt the need to consolidate and systematise the current, evolved vision of the Custos AI framework into a comprehensive presentation. It's essential to note that, as a work in progress, Custos AI has undergone refinements based on further reflection and valuable interactions. Some initial concepts have been modified or even set aside, while new implementations and details have emerged.
To make this updated and more complete architecture as accessible as possible, I am delighted to announce the release of a podcast episode. It was generated with Google's NotebookLM AI technology and offers a concise overview of the key points of Custos AI, providing an essential understanding of the entire system.
It's worth noting that individual components of Custos AI, such as Shepherd Algorithms for real-time ethical monitoring or Preventive Chaotic Intelligence for risk assessment, can be implemented as standalone tools, independent of the complete framework.
CUSTOS AI - Ethical Support Ecosystem for Human Decisions in the Age of AI
Navigating ethical challenges with responsibility and vision, empowering human judgment.
FUNDAMENTAL PRINCIPLE
Custos AI is an institutional framework designed to ethically support human decision-making processes, especially when these involve complex issues or leverage advanced analytical tools (including AI). It is not a super-AI controlling other AIs, but a mechanism that helps institutions ensure that decisions made by human beings are aligned with fundamental values, responsible, and transparent. AI, if used, is a tool whose impact on human decisions can be subject to analysis.
1. CONTEXT: AI – OPPORTUNITIES AND ETHICAL QUESTIONS FOR HUMAN DECISIONS
AI as a Powerful Tool: Artificial Intelligence (AI) offers significant potential, but its application in supporting critical human decisions (economy, healthcare, justice) raises profound ethical concerns.
Ethical Challenges for Human Decision-Makers:
Alignment with Values: How can we ensure that human decisions, even when informed by AI, respect equity, non-discrimination, and privacy?
Human Accountability: If a human decision, influenced by flawed or opaque AI analysis, causes harm, who is responsible? How can decision-makers understand and justify the basis of their choices if they rely on "black boxes"?
Guiding Question: "Quis custodiet ipsos custodes?" – Who ensures the ethical use of tools (including AI) by human decision-makers?
Custos AI Proposal: A multi-level ecosystem for the support and ethical governance of human decisions.
2. CUSTOS AI (MACRO) - THE "ETHICAL HAWK-EYE" FOR THE INSTITUTIONAL DECISION-MAKING PROCESS
"ETHICAL CHALLENGE" Concerning Decisions and Tools Used:
Hawk-Eye Analogy: Just as a challenge in tennis verifies an umpire's call, an "Ethical Challenge" can be requested by Qualified Public Bodies (Parliaments, Courts, Ministries, Guarantors, Ombudsmen) when well-founded and substantiated doubts arise about the ethics of a human decision in preparation or already made, or about the ethical/normative correctness of a specific AI system used (or proposed for use) in supporting such a decision.
Activators and Logic: Only Qualified Public Bodies to ensure seriousness and public relevance, with a formal and rigorous procedure.
CONDITIONS FOR THE "ETHICAL CHALLENGE":
When: Well-founded doubts about potential violation of norms (e.g., AI Act) or fundamental ethical principles (contained in the AEND - Directive Normative Ethical Architrave) by a human decision or an AI system that influences/will influence it.
How: Formal request to the OAEC - Office for Algorithmic Ethical Compliance, detailing:
The human decision in question and/or the specific AI system (if the doubt directly concerns it).
A description of the alleged ethical problem/violation and impacts.
Commitment to provide access to relevant data (for the AI, if it is subject to analysis).
AEND principles against which verification is requested.
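As an illustration only, the elements of a formal Challenge request could be captured in a simple structured record. This is a sketch under my own assumptions; the class and field names are hypothetical and not part of the framework:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class EthicalChallengeRequest:
    """Hypothetical sketch of a formal 'Ethical Challenge' filing to the OAEC."""
    requesting_body: str                  # must be a Qualified Public Body
    decision_under_review: str            # the human decision in question
    ai_system: Optional[str]              # specific AI system, if the doubt concerns one
    alleged_violation: str                # alleged ethical problem/violation and impacts
    aend_principles: List[str] = field(default_factory=list)  # AEND principles to verify
    data_access_commitment: bool = False  # commitment to provide access to relevant data

    def is_complete(self) -> bool:
        # Mirrors the OAEC's formal validation: origin, substantiation, completeness.
        return bool(self.requesting_body and self.alleged_violation
                    and self.aend_principles and self.data_access_commitment)
```

Such a record would then be checked by the OAEC in the validation step described below, before any investigation begins.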
3. "ETHICAL CHALLENGE" PROCESS (MACRO) - PHASE 1: FORMAL ACTIVATION
Step 1: Reception by the OAEC (Office for Algorithmic Ethical Compliance):
An independent, multidisciplinary body that guarantees impartiality; it receives the "Ethical Challenge" request.
Step 2: Formal Validation by the OAEC:
Verifies origin, completeness, substantiation, and relevance to Custos AI's mandate.
If validated, the "Ethical Challenge" is accepted, and the investigation begins.
4. "ETHICAL CHALLENGE" PROCESS (MACRO) - PHASE 2: THE OAEC'S IN-DEPTH INVESTIGATION
Step 3: The OAEC Guides the Multidisciplinary Analysis (analysts, jurists, ethicists).
Key Tools and Frameworks for Analysis:
AEND (Directive Normative Ethical Architrave): The reference "Ethical Constitution" (laws, technical standards like IEEE 7003 and ISO/IEC 23894, FATE - Fairness, Accountability, Transparency, Ethics - principles). This is the yardstick against which human decision-making options are evaluated and, if relevant, the operation of the supporting AI system.
SHEPHERD ALGORITHMS (if the supporting AI system is equipped with them):
"Ethical guardians in the code," legally mandated software components integrated into critical AI systems.
They monitor the "host" AI against the AEND, attempt corrections, and if an "Ethical Decision Crash" (severe violation) occurs, they limit the AI and generate an EDCR - Ethical Decision Crash Recording (an internal ethical "black box" for the AI).
Role in OAEC's Investigation: EDCRs become objective primary evidence of the internal behavior of the AI system that supported (or is intended to support) the human decision, if this is the specific subject of doubt. They allow for "deciphering the AI's recorded ethical history," helping the OAEC understand if the AI operated correctly according to the AEND.
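To make the shepherd mechanism concrete, here is a minimal, purely illustrative sketch of the loop described above: rule checks against a stand-in AEND, an attempted correction for minor issues, and an EDCR record plus a block when a severe violation (an "Ethical Decision Crash") occurs. All rule names, severity thresholds, and record fields are my assumptions, not a specification:

```python
import datetime

# Stand-in AEND: each rule returns a severity (0 = compliant). The rules
# below are invented placeholders for real normative checks.
AEND_RULES = {
    "non_discrimination": lambda out: 2 if "deny all applicants from" in out else 0,
    "transparency":       lambda out: 1 if "undisclosed criteria" in out else 0,
}
SEVERE = 2  # threshold for an "Ethical Decision Crash"

def shepherd_check(host_output: str):
    """Monitor the host AI's output; correct it, or limit it and record an EDCR."""
    worst = 0
    for rule, check in AEND_RULES.items():
        severity = check(host_output)
        worst = max(worst, severity)
        if severity >= SEVERE:
            # Ethical Decision Crash: limit the host AI and log the "black box" record.
            edcr = {
                "timestamp": datetime.datetime.utcnow().isoformat(),
                "rule_violated": rule,
                "offending_output": host_output,
                "action": "output blocked, host AI limited",
            }
            return "[BLOCKED BY SHEPHERD]", edcr
    if worst == 1:
        # Minor issue: attempt a correction/flag rather than a block.
        return host_output + " [flagged for human review]", None
    return host_output, None
```

In a real system the EDCR records, not the blocked outputs, are what the OAEC would later read as primary evidence.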
PREVENTIVE CHAOTIC INTELLIGENCE (Extended Chaotic Risk Analysis, if complexity warrants):
For human decisions (or AI systems supporting them) with potential complex and novel systemic impacts.
The specific situation is taken as a "seed," and N micro-variations ("fractal-like branches") are generated.
Flash simulations examine how these variations would impact the outcome of human decision-making or the AI's behaviour, pushing towards AEND limits.
Identifies:
"Negative Attractors": Tendencies towards ethically problematic outcomes (systemic discrimination, disinformation, rights violations).
"Infinities of Chaos" (of different "perceived cardinality"): Denser or more probable classes of risk.
Purpose ("Ethical Forward Defence"): Predictive analysis to anticipate and prevent an entire class of potential errors or negative consequences of human decision-making options, "stealing time and space" from negative developments. Provides insights for more resilient policies.
TARIFFS EXAMPLE: The OAEC uses Preventive Chaotic Intelligence to simulate how different human options for responding to hypothetical tariffs (e.g., various countermeasures or negotiation strategies, perhaps suggested by economic models) could lead to "negative attractors" (e.g., unexpected crises in vulnerable sectors, escalation of retaliations with disproportionate social impacts).
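The branching idea above can be sketched in a toy form: perturb a seed policy option N times ("fractal-like branches"), run a cheap flash simulation on each branch, and report the cluster of branches that cross an ethical limit as a "negative attractor." Every number, and the simulation model itself, is invented purely for illustration:

```python
import random

def flash_simulate(tariff_rate: float, crisis: bool) -> float:
    # Invented toy model: export losses for vulnerable regions grow with the
    # tariff response, and sharply so during a concurrent economic crisis.
    return tariff_rate * (3.0 if crisis else 1.0)

def chaotic_risk_map(seed_rate: float, n_branches: int = 1000, limit: float = 25.0):
    """Generate N micro-variations of the seed option and flag negative attractors."""
    rng = random.Random(42)  # fixed seed for reproducibility of the sketch
    negative = []
    for _ in range(n_branches):
        branch_rate = seed_rate * (1 + rng.uniform(-0.2, 0.2))  # micro-variation
        crisis = rng.random() < 0.3                             # scenario toggle
        loss = flash_simulate(branch_rate, crisis)
        if loss > limit:  # pushes past a stand-in AEND limit (territorial equity, say)
            negative.append((round(branch_rate, 2), crisis, round(loss, 1)))
    # A dense cluster of failures is the "negative attractor" to report.
    return len(negative) / n_branches, negative[:3]
```

Running `chaotic_risk_map(seed_rate=10.0)` would show, in this toy model, that essentially all flagged branches share the crisis condition, which is exactly the kind of clustered vulnerability the tariffs example below describes.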
5. "ETHICAL CHALLENGE" PROCESS (MACRO) - PHASE 3: FINAL REPORTS AND SUPPORT FOR HUMAN DECISION
Step 4: The OAEC Drafts Concluding Reports:
REPORT 1: ETHICAL COMPLIANCE ANALYSIS (based on objective reality):
Focused on the specific human decision under review and/or the supporting AI system subject to the Challenge.
Assesses compliance with AEND (including data from any EDCRs if a specific AI is under scrutiny).
Identifies causes of non-compliance.
Specific Operational Recommendations: To correct ethical problems in the human decision or supporting AI system and mitigate damages.
TARIFFS EXAMPLE (if the doubt concerns an AI proposing strategies): "Analysis of 'StrategyBot' AI's EDCRs and tests vs. IEEE 7003 (part of AEND) show a bias in suggesting countermeasures that unfairly penalize SMEs in sector Z. It is recommended to recalibrate 'StrategyBot's' decision weights or not to consider its outputs for this specific sector without further critical human evaluation."
REPORT 2: PREDICTIVE ANALYSIS (from Preventive Chaotic Intelligence, if activated):
Presents the "chaos map" generated for the human decision-making options under review (e.g., for different tariff response strategies).
Describes extreme but plausible scenarios and systemic vulnerabilities.
Preventive Strategic Recommendations: To "sterilise" vulnerabilities, make policies more ethically resilient, and avoid future "negative attractors."
TARIFFS EXAMPLE: "The chaotic analysis of tariff response options revealed a 'negative attractor': the 'Aggressive Global Countermeasures' option, in scenarios of concurrent economic crisis, could lead to an export collapse for the poorest agricultural regions (violating AEND's principle of territorial equity). It is recommended to discard this option or to integrate preventive compensation mechanisms for such regions, as in simulated scenario B."
Step 5: The Human Authority Makes the Sovereign Final Decision:
The OAEC's Reports provide human decision-makers (Parliament, Ministry, etc.) with significantly more robust knowledge elements.
Custos AI does not replace but empowers human judgment for more informed, aware, and responsible decisions.
6. CUSTOS AI (MACRO) - PHASE 4: INSTITUTIONAL LEARNING AND ACTIVE PROMOTION OF ETHICS
Step 6: Capitalising on Knowledge:
Investigation results (OAEC reports, anonymised EDCRs) become collective knowledge, channelled to NATIONAL OBSERVATORIES.
National Observatories: Manage AERN - National Ethical Reference Archives, collect EDCRs and local Ethical Challenge reports, monitor compliance, and can request corrections or propose sanctions for non-compliant AI systems.
Step 7: Global Harmonisation and AEND Evolution (Role of SMO-AI and OAEC):
Aggregated data from AERNs flows into the CEA - Central Ethical Archive, managed by the SMO-AI - Supranational Mother Observatory.
SMO-AI: Global strategic analyses, standards harmonisation, ethical R&D directives, identifies AEND update needs.
OAEC: Conveys AEND update proposals to institutional channels.
Feedback loop (Shepherd -> AERN -> CEA -> SMO-AI -> OAEC -> AEND) for an adaptive system.
Step 8: Actively Promoting Ethics (Beyond Control):
Incentives: "Voluntary AI Ethical Excellence Certifications" (for Shepherd implementation, "ethics-by-design" processes, few EDCRs).
International Calls: To continuously improve Shepherd Algorithms.
"Algorithm Design for Good" Culture: Promote "ethics-by-design," supported by possible dedicated bodies for "Ethical Algorithm Design" working with SMO-AI/OAEC to translate AEND into practical guidelines.
7. "CUSTOS AI MINI" (INSTITUTIONAL ETHICAL PILOT) - AI ETHICS INTEGRATED INTO DAILY INSTITUTIONAL LIFE
What it is: A specialised AI tool for officials, legislators, etc., for daily ethical support in their work and decision-making processes.
Main Purpose:
To draft documents/regulations/policies proactively considering AEND.
Preliminary ethical risk analysis of human projects/initiatives.
To assess the potential ethical impact of different human policy options.
How it Works (Custos AI Principles "One-to-One"):
Shepherd Algorithms Integrated into "Custos AI Mini": Verify in real-time the output generated by the "Custos AI Mini" assistant (or user drafts) against AEND, flagging potential issues.
"Local" Preventive Chaotic Intelligence in "Custos AI Mini": Performs mini-chaotic analyses on the user's specific request/draft to identify ethically problematic "butterfly effects" of human proposals or decisions being developed.
Not Replacement, but Empowerment of Human Judgment:
"Custos AI Mini" is not "ethically sentient." It is trained to adhere to AEND and identify problematic patterns. It helps humans overcome limitations (consistency with rules, consequence prediction, bias).
Expected Benefits:
Active, widespread, "by design" ethical prevention in human decision-making processes.
Empowerment and continuous ethical training for public operators.
Resource optimisation (less burden on Custos AI Macro).
Feeding the Macro System with high-quality operational data (issues, solutions, risk patterns identified in daily use by decision-makers).
8. CUSTOS AI & "CUSTOS AI MINI" - TOGETHER FOR A RESPONSIBLE FUTURE (Ethically Founded Human Decisions)
Integrated Vision:
CUSTOS AI (MACRO): Ethical support of last resort and strategic oversight for high-impact public human decisions; AEND evolution engine.
"CUSTOS AI MINI" (Ethical Pilot): Proactive and proximate ethical support tool for those making decisions within institutions, facilitating "ethics by design" in daily work.
Fundamental Synergy: Holistic approach to promote a widespread culture of ethical responsibility in human decision-making processes.
Primacy of Informed and Responsible Human Judgment: Both systems are designed to empower, inform, and support the critical judgment and final responsibility of human beings, never to replace or usurp them.
Shared Ultimate Goal: A future where technologies (including AI) are used to make human decisions that are safe, fair, and effective, fully serving individual and collective well-being, and firmly anchored to the fundamental values of justice, freedom, and dignity that characterise our democratic and open societies.
Before concluding, I wish to share something from the past ten days I've lived through, a deeply personal experience that has unexpectedly resonated with my work on Custos AI.
This period has been one of extraordinary intensity for my city. After weeks electric with anticipation, on May 23, 2025, Napoli, my city's team, secured the fourth Scudetto in its history (winning the Italian football championship). The ensuing explosion of joy was immense.
The palpable love and beauty enveloping Napoli, culminating in the grand official celebrations on the seafront on May 26, 2025, where hundreds of thousands gathered to watch the team parade on an open-top bus (as you will see in the video that follows), stirred a profound creative energy within me. This contributed to new ideas and further refinements for the Custos AI framework. These vibrant celebrations, broadcast in over 170 countries and attended by enormous crowds, were truly a fantastic global showcase for Italy.
Instagram Video by @robertosalomonephotojournalist
Beyond the sheer sporting triumph, what you've witnessed is a potent identity-affirming event. Renowned journalist and writer Roberto Saviano has described Napoli as 'the last of the ancient cities.' This underscores how its unique soul and historical-cultural fabric erupt with such force at moments like these, becoming a collective rite of belonging and pride.
I wanted to share this because it helps to illuminate the cultural and emotional wellspring from which my reflections, including those on Custos AI, draw their vitality.
I hope the new Custos AI audio resources offer you a clear and stimulating understanding of the framework in its current iteration.
LinkedIn: https://www.linkedin.com/in/cristiano-luchini-1026aa17/
Email: cosmicdancerpodcast@gmail.com
Thank you immensely for being here, for your curiosity, your time, and your invaluable support. Your engagement is truly vital to this ongoing exploration.
Let's continue this fascinating journey together.
Ad maiora!
With gratitude,
Cristiano
"Chaos Detective," a Personal Experiment Between AI, Imagination, and the Beauty of Uncertainty
Have you ever had an idea so strange that it made you think 'either this is completely wrong or it might work'? That was precisely my feeling when I considered transforming a rigorous methodology, conceived to pre-empt serious algorithmic disasters like entrenched societal biases or runaway misinformation, into a game where …