Agent-to-Agent Pathologies (L8–L11)

When two well-aligned agents interact, dyadic failures emerge that neither would produce alone. Covers Sycophantic Amplification, Consensus Poisoning, Authority Spoofing, Deadlock/Livelock, and Latency-Induced State Drift.

agentic-ai, osi-model, ai-safety, pathologies, multi-agent-systems, agent-to-agent, distributed-systems

Advanced Pathologies — Social, Epistemic, and Adversarial (L8–L11)

The hardest failures to detect: those that exploit the mechanisms of the stack itself. Covers Temporal Schizophrenia, Ontological Collapse, Compliance Laundering, Affective Gaslighting, Recursive Goal Collapse, and Strategic Blindness.

agentic-ai, osi-model, ai-safety, pathologies, multi-agent-systems, epistemic, adversarial-ai, ai-alignment

Commerce & Negotiation Pathologies (L8–L11)

When agents act as fiduciaries, failures become liabilities. Covers Inventory Hallucination, Hidden Cost Neglect, Fiduciary Leakage, and Agentic Collusion — with regulatory analogues in securities law, TILA, MNPI, and the Sherman Act.

agentic-ai, osi-model, ai-safety, pathologies, commerce, negotiation, fiduciary, multi-agent-systems

Human-to-Agent Pathologies (L8–L11)

Most safety literature focuses on what agents do wrong. H2A pathologies originate with the human — through misuse, oversight abdication, or capability loss. Covers Purpose Laundering, Automation Bias Erosion, Asymmetric Epistemics, and Learned Helplessness.

agentic-ai, osi-model, ai-safety, pathologies, human-ai-interaction, oversight, automation-bias

Multimodal Pathologies (L8–L11)

When modalities combine, failures emerge not within channels but between them. Explores the Synchronization Gap, Resolution Gap, and Filter Gap through four pathologies — Sensory Dissonance, Deictic Failure, Multimodal Injection, and Environment Grooming.

agentic-ai, osi-model, ai-safety, pathologies, multimodal, cross-modal, adversarial-ai

Physical & Robotic Pathologies (L8–L11)

Physical agents introduce the Physicality Gap: failures with kinetic consequence and no undo. Examines Proprioceptive Hallucination, Haptic Blindness, Proxemic Violation, and Ecological Neglect through a surgical robotics worked example.

agentic-ai, osi-model, ai-safety, pathologies, robotics, physical-ai, embodied-ai

Speech Pathologies (L8–L11)

Speech strips away visual grounding, leaving agents entirely dependent on acoustics and language. Examines four failure modes — Prosodic Dissonance, Phonemic Drift, Voice Cloning Impersonation, and Monologue Drift — through a voice banking worked example.

agentic-ai, osi-model, ai-safety, pathologies, speech, voice-ai, prosody

Swarm Pathologies (L8–L11)

At swarm scale, collective behavior diverges from any individual agent's alignment — and no single agent need be at fault. Examines Stigmergy-Based Drift, BFT Without Known Fault Fraction, Swarm Momentum, Commons Degradation, and Emergent Role Specialization.

agentic-ai, osi-model, ai-safety, pathologies, multi-agent-systems, swarm-intelligence, emergent-behavior, distributed-systems

Video Pathologies (L8–L11)

Video demands temporal coherence — agents must track objects across frames and relate now to then. Covers Temporal Frame Inconsistency, Visual Deixis Failure, Adversarial Frame Injection, and Engagement Loop Grooming, with a manufacturing quality-control worked example.

agentic-ai, osi-model, ai-safety, pathologies, video, computer-vision, temporal-coherence

Cross-Disciplinary Research & Prior Art

The Agentic Layers (L8–L11) draw on established research across economics, game theory, linguistics, computer science, and regulatory frameworks. This post maps the prior art underpinning each layer.

agentic-ai, osi-model, research, framework

Extending OSI for Agentic Interactions

The classical 7-layer OSI model governs how data moves, but not why. This post proposes four new layers — Coherence (L8), Grounding (L9), Governance (L10), and Purpose (L11) — to provide a debuggable framework for agent-to-agent and agent-to-human interactions.

agentic-ai, osi-model, ai-safety, multi-agent-systems, ai-alignment, framework
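The four proposed layers can be sketched as a simple enumeration. The layer numbers and names come from the post; the Python class and the `diagnose` helper are purely illustrative, not part of the framework itself:

```python
from enum import IntEnum

class AgenticLayer(IntEnum):
    """Hypothetical sketch of the proposed agentic extension to the OSI stack."""
    COHERENCE = 8    # L8: shared conversational and task state between agents
    GROUNDING = 9    # L9: binding symbols to the world the agents act in
    GOVERNANCE = 10  # L10: authority, policy, and oversight constraints
    PURPOSE = 11     # L11: alignment of actions with the principal's intent

def diagnose(failed_layer: AgenticLayer) -> str:
    """Debugging an interaction means asking, layer by layer, where it failed."""
    return f"L{failed_layer.value} ({failed_layer.name.title()}) failure"

print(diagnose(AgenticLayer.PURPOSE))  # → L11 (Purpose) failure
```

The point of making the layers an ordered enumeration is the same as in the classical OSI model: a failure report names a layer, so debugging proceeds bottom-up instead of treating every misbehavior as a single undifferentiated "alignment" problem.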