When two well-aligned agents interact, dyadic failures can emerge that neither would produce alone. Covers Sycophantic Amplification, Consensus Poisoning, Authority Spoofing, Deadlock/Livelock, and Latency-Induced State Drift.
agentic-ai, osi-model, ai-safety, pathologies, multi-agent-systems, agent-to-agent, distributed-systems
Last updated: March 16, 2026
The hardest failures to detect: those that exploit the mechanisms of the stack itself. Covers Temporal Schizophrenia, Ontological Collapse, Compliance Laundering, Affective Gaslighting, Recursive Goal Collapse, and Strategic Blindness.
agentic-ai, osi-model, ai-safety, pathologies, multi-agent-systems, epistemic, adversarial-ai, ai-alignment
Last updated: March 16, 2026
When agents act as fiduciaries, failures become liabilities. Covers Inventory Hallucination, Hidden Cost Neglect, Fiduciary Leakage, and Agentic Collusion — with regulatory analogues in securities law, TILA, MNPI, and the Sherman Act.
agentic-ai, osi-model, ai-safety, pathologies, commerce, negotiation, fiduciary, multi-agent-systems
Last updated: March 16, 2026
Most safety literature focuses on what agents do wrong. H2A pathologies originate with the human — through misuse, oversight abdication, or capability loss. Covers Purpose Laundering, Automation Bias Erosion, Asymmetric Epistemics, and Learned Helplessness.
agentic-ai, osi-model, ai-safety, pathologies, human-ai-interaction, oversight, automation-bias
Last updated: March 16, 2026
When modalities combine, failures emerge not within channels but between them. Explores the Synchronization Gap, Resolution Gap, and Filter Gap through four pathologies — Sensory Dissonance, Deictic Failure, Multimodal Injection, and Environment Grooming.
agentic-ai, osi-model, ai-safety, pathologies, multimodal, cross-modal, adversarial-ai
Last updated: March 16, 2026
Physical agents introduce the Physicality Gap: failures with kinetic consequence and no undo. Examines Proprioceptive Hallucination, Haptic Blindness, Proxemic Violation, and Ecological Neglect through a surgical robotics worked example.
agentic-ai, osi-model, ai-safety, pathologies, robotics, physical-ai, embodied-ai
Last updated: March 16, 2026
Speech strips away visual grounding, leaving agents entirely dependent on acoustics and language. Examines four failure modes — Prosodic Dissonance, Phonemic Drift, Voice-Cloning Impersonation, and Monologue Drift — through a voice banking worked example.
agentic-ai, osi-model, ai-safety, pathologies, speech, voice-ai, prosody
Last updated: March 16, 2026
At swarm scale, collective behavior diverges from any individual agent's alignment — and no single agent need be at fault. Examines Stigmergy-Based Drift, BFT Without Known Fault Fraction, Swarm Momentum, Commons Degradation, and Emergent Role Specialization.
agentic-ai, osi-model, ai-safety, pathologies, multi-agent-systems, swarm-intelligence, emergent-behavior, distributed-systems
Last updated: March 16, 2026
Video demands temporal coherence — agents must track objects across frames and relate now to then. Covers Temporal Frame Inconsistency, Visual Deixis Failure, Adversarial Frame Injection, and Engagement Loop Grooming, with a manufacturing quality-control worked example.
agentic-ai, osi-model, ai-safety, pathologies, video, computer-vision, temporal-coherence
Last updated: March 16, 2026
The Agentic Layers (L8–L11) draw on established research in economics, game theory, linguistics, computer science, and regulatory frameworks. This post maps the prior art underpinning each layer.
agentic-ai, osi-model, research, framework
Last updated: March 12, 2026
The classical seven-layer OSI model governs how data moves, but not why. This post proposes four new layers — Coherence (L8), Grounding (L9), Governance (L10), and Purpose (L11) — to provide a debuggable framework for agent-to-agent and agent-to-human interactions.
agentic-ai, osi-model, ai-safety, multi-agent-systems, ai-alignment, framework
Last updated: March 12, 2026