The Protocol Gap
The AI coding tools market hit $7.37 billion in 2025 and is accelerating toward $24 billion by 2030. Developer adoption has crossed the tipping point: 84-91% of developers now use AI coding tools, and 75% of enterprises have integrated them into production workflows. The question is no longer whether teams will use AI agents to write code. The question is how many agents they will use, and whether those agents can work together.
Right now, they cannot. Not without friction that compounds with every new tool added to the stack.
Every coding agent speaks a different dialect. Cursor built its own IDE. Claude Code runs in the terminal. Cline is locked to VS Code. Devin ships an entirely separate cloud environment. For developers, this means choosing an agent means choosing an editor, or accepting the cost of context-switching between incompatible interfaces. For agent developers, it means building and maintaining N editor integrations instead of implementing one protocol. For IDE teams, it means negotiating M separate partnerships instead of adopting a single standard. The integration cost scales as O(n*m), and in February 2026, when every major player shipped multi-agent capabilities in the same two-week window, that cost multiplied overnight.
ACP is the missing protocol layer. Just as the Language Server Protocol turned an O(n*m) language intelligence problem into O(n+m), ACP standardises the connection between editors and coding agents. MCP solved how agents talk to tools. A2A addresses how agents talk to each other. ACP solves the remaining gap: how editors talk to agents. Together, these three protocols form the complete integration stack.
Without ACP, the industry is building a tower of custom integrations that will become increasingly expensive to maintain as the agent population grows.
Market Analysis
The competitive landscape for AI coding tools has consolidated into four distinct layers, but beneath the surface concentration, the market is fracturing in a way that creates a structural opening for protocol standardisation.
The top three players control over 70% of the market, with GitHub Copilot alone commanding ~42% share and crossing $2 billion in ARR. But the fracture is vertical. IDE-integrated tools like Copilot, Cursor, and Windsurf each bundle their own agent capabilities inside proprietary editor experiences. Autonomous agents like Claude Code, Devin, and Aider each require their own runtime environment. Orchestration frameworks like LangGraph, CrewAI, and AutoGen each define their own workflow abstractions. No horizontal layer connects these verticals.
Market Sizing and Signals
| Signal | Data Point | Source |
|---|---|---|
| Market size (2025) | $7.37B | Mordor Intelligence |
| Projected market (2030) | $23.97B | Mordor Intelligence |
| CAGR | 26.6% | Mordor Intelligence |
| GitHub Copilot ARR | $2B+ | Microsoft |
| GitHub Copilot users | 20M+ | GitHub (July 2025) |
| Developer adoption rate | 84-91% | Stack Overflow, DX |
| Enterprise adoption | 75% | Industry surveys |
| Top-3 market concentration | 70%+ | CB Insights |
The Multi-Agent Inflection Point
February 2026 marked a watershed moment: every major tool shipped multi-agent capabilities in the same two-week window.
- Grok Build: 8 parallel agents
- Windsurf: 5 parallel agents
- Claude Code: Agent Teams
- Codex CLI: Agents SDK
- Devin: Parallel sessions
Running multiple agents simultaneously on different parts of a codebase is now table stakes. This has a direct consequence for protocol design: when agents proliferate, the cost of proprietary integration multiplies. Without a standard protocol, each new agent-IDE pairing requires its own custom integration work. ACP turns an O(n*m) integration problem into an O(n+m) one.
Competitive Landscape
Where ACP Fits in the Stack
| Layer | Protocol | Function | Status |
|---|---|---|---|
| Agent-to-tool | MCP | Connects agents to external tools/data | Established |
| Editor-to-agent | ACP | Connects editors to coding agents | Live |
| Agent-to-agent | A2A | Inter-agent communication | Emerging |
ACP occupies the critical middle layer that no other protocol addresses. It is the missing piece that connects the rapidly growing population of autonomous coding agents to the editors where developers actually work.
The market is large ($7.37B and growing at 26.6% CAGR), adoption is near-universal (84-91% of developers), and the shift to multi-agent workflows is already happening. But fragmentation is the dominant structural risk. ACP does not compete with any player in the landscape. It connects them. The protocol sits at the critical middle layer between editors and agents, the one layer no existing standard addresses.
The network effect is straightforward: every agent that implements ACP makes every ACP-supporting editor more valuable, and vice versa. First-mover advantage belongs to the protocol that achieves sufficient breadth of adoption before the market consolidates around proprietary integrations.
UX Analysis
Current AI coding tool UX falls into four paradigms, each optimised for a different workflow. None handles multi-agent orchestration well.
The friction analysis reveals six critical pain points that compound when developers try to use multiple agents: agent selection is a guessing game, permission models are incompatible, progress reporting is fragmented, result review varies wildly between tools, context is lost when switching agents, and there are no cross-agent metrics.
These are not minor usability issues. They represent structural barriers to multi-agent adoption. A team lead trying to run five agents simultaneously today must monitor five different interfaces, manage five different permission systems, compare results in five different formats, and re-explain context every time they switch between tools.
Interaction Paradigms
The four paradigms: chat-based, dashboard-based, IDE-integrated, and CLI-based. The table below compares them across eight dimensions.
| Dimension | Chat-Based | Dashboard | IDE-Integrated | CLI-Based |
|---|---|---|---|---|
| Discoverability | High | Medium | Very High | Low |
| Control Granularity | Medium | High | Low | Very High |
| Progress Feedback | Low | High | Medium | Medium |
| Context Switching | Medium | High | Very Low | Medium |
| Multi-Agent Support | Poor | Good | Poor | Medium |
| Async Workflow | Poor | Excellent | Poor | Good |
| Scriptability | Low | Low | Low | Excellent |
| Best For | Quick edits | Complex projects | Inline coding | Automation |
Critical Friction Points
Agent Selection Is a Guessing Game
Developers cannot easily compare agent capabilities. Each tool describes itself differently. There is no standard way to express what an agent can do, what languages it supports, what context window it needs, or how much it costs per task.
Impact: Teams default to a single agent even when another would be better for specific tasks. This leads to suboptimal results and wasted compute.
ACP Opportunity: Define a standard capability schema that every agent publishes. Include supported languages, max context, average task time, cost per token, supported operations, and quality benchmarks.
Incompatible Permission Models
Every agent handles permissions differently. Some agents get full filesystem access. Others require explicit file-by-file approval. There is no middle ground, and no standard way to express scoped permissions like "read src/, write to tests/, never touch config files."
Impact: Security-conscious teams either over-restrict agents (limiting usefulness) or under-restrict them (creating risk). Enterprise adoption stalls because compliance teams cannot audit agent access patterns.
ACP Opportunity: Implement a permission specification language that defines scoped access controls per agent, per task. Think OAuth scopes for coding agents: read:src, write:tests, execute:lint, deny:*.env.
Fragmented Progress Reporting
Each agent reports progress differently. Some stream tokens, some show step counts, some provide no feedback until completion. When running multiple agents concurrently, there is no unified view of what is happening.
Impact: Developers either anxiously watch idle terminals or miss important intermediate results. Team leads cannot assess whether agents are stuck, making progress, or have silently failed.
ACP Opportunity: Define a standard progress protocol with structured events: task_started, step_completed, file_modified, test_run, error_encountered, task_completed.
Inconsistent Result Review
Agent outputs appear in different formats across tools. Some show diffs, some show full files, some stream raw output. Comparing results from two agents on the same task requires manual effort. There is no standard approval/rejection workflow.
Impact: Code review for agent-generated code is slower than it should be. Teams cannot A/B test agents effectively.
ACP Opportunity: Standardise a result envelope that wraps every agent output in a consistent format: structured diffs, test results, confidence scores, resource usage, and an approval/rejection API.
Lost Context Between Agents
When switching between agents or resuming a task with a different agent, context is lost. The new agent starts from scratch. File state, conversation history, and partial progress do not transfer.
Impact: Developers waste time re-explaining context. Multi-agent workflows are effectively impossible because agents cannot build on each other's work.
ACP Opportunity: Define a context transfer protocol that packages task state (files modified, decisions made, constraints discovered) in a portable format any ACP-compatible agent can consume.
No Cross-Agent Metrics
There is no standard way to measure and compare agent performance. Speed, accuracy, cost, and code quality metrics are siloed within each tool. Teams cannot make data-driven decisions about which agent to use for which task type.
Impact: Agent selection remains gut-feel rather than data-driven. Organisations cannot optimise their AI tooling spend.
ACP Opportunity: Define a metrics schema capturing standardized performance data across all agents: tokens consumed, time to completion, test pass rate, lines changed, reviewer approval rate.
Proposed UX: Wireframe Mockups
Unified Task Queue
Replace fragmented per-tool task management with a single task queue that spans all connected agents. Tasks show status, assigned agent, priority, progress, estimated completion, and cost so far.
+-----------------------------------------------------------------------------+
| ACP Orchestrator                                   [Search...] [+ New Task] |
+-----------------------------------------------------------------------------+
| Status: [All]  Agent: [All]  Priority: [All]               [List] [Kanban]  |
+-----------------------------------------------------------------------------+
|                                                                             |
|  * ACP-14  Add OAuth2 endpoint               ============..  78%            |
|    Claude Code . 3 files changed . 2m 14s elapsed                           |
|    +-- Done: Created auth middleware                                        |
|    +-- Done: Added token validation                                         |
|    +-- Now: Writing integration tests...                                    |
|    +-- Next: Update API documentation                                       |
|                                                                             |
|  * ACP-15  Refactor user model               ====........    35%            |
|    Codex . 1 file changed . 45s elapsed                                     |
|    +-- Now: Migrating fields to new schema...                               |
|                                                                             |
|  ! ACP-16  Fix CORS headers                  ==========..    80%  BLOCKED   |
|    Claude Code . Needs clarification                                        |
|    "Should CORS allow credentials? Current config is ambiguous."            |
|    [Reply to agent] [View context]                                          |
|                                                                             |
|  o ACP-17  Add rate limiting                 ............    Queued         |
|    Waiting for agent . Priority: Medium                                     |
|                                                                             |
|  Done ACP-13  Database indexing              ============    Done           |
|    Codex . 2 files changed . Completed 4m ago                               |
|    [Review diff] [Approve] [Request changes]                                |
|                                                                             |
+-----------------------------------------------------------------------------+
| Showing 5 of 12 tasks . 2 active . 1 blocked . 3 queued . 6 complete        |
+-----------------------------------------------------------------------------+
Key interactions: Drag-and-drop priority reordering, one-click agent reassignment, bulk operations (pause all, cancel by tag), filter by status/agent/project/author.
Smart Agent Selection
Present a capability-aware agent picker when creating tasks. ACP queries all connected agents for their capability manifests and recommends the best match based on task type, language, complexity, cost tolerance, and historical performance.
+-----------------------------------------------------------------------------+
| Select Agent                                                      [X Close] |
+-----------------------------------------------------------------------------+
|                                                                             |
|  Task: "Add OAuth2 endpoint with token refresh"                             |
|  Suggested: Claude Code (best match for backend API work)                   |
|                                                                             |
|  [Search agents...]                          [Capability] [Status] [Cost]   |
|                                                                             |
|  +------------------------------------------------------------------+      |
|  | * Claude Code                                        [Available] |      |
|  |   Capabilities: backend, api, testing, refactoring, debugging    |      |
|  |   Languages: Python, TypeScript, Go, Rust                        |      |
|  |   Quality: ========.. 4.2/5     Speed: ======.... Fast           |      |
|  |   Cost: ~$0.12/task             Sessions today: 8                |      |
|  |   Match score: 94%                                      [Select] |      |
|  +------------------------------------------------------------------+      |
|                                                                             |
|  +------------------------------------------------------------------+      |
|  |   Codex                                              [Available] |      |
|  |   Capabilities: backend, migrations, scripting                   |      |
|  |   Languages: Python, JavaScript, SQL                             |      |
|  |   Quality: ======.... 3.5/5     Speed: ========.. Fast           |      |
|  |   Cost: ~$0.08/task             Sessions today: 3                |      |
|  |   Match score: 71%                                      [Select] |      |
|  +------------------------------------------------------------------+      |
|                                                                             |
|                               [Cancel] [Auto-assign best match]             |
+-----------------------------------------------------------------------------+
Key interactions: Task description auto-suggests matching agents, side-by-side comparison (speed, cost, quality), "auto-route" mode for hands-off selection, favorite agents and custom routing rules.
Granular Permission Builder
Replace binary allow/deny with a visual permission builder that lets users define scoped access per agent per task. Uses a familiar file-tree UI with checkboxes for read/write/execute.
+-----------------------------------------------------------------------------+
| Permissions: Claude Code on "Add OAuth2 endpoint"                 [X Close] |
+-----------------------------------------------------------------------------+
|                                                                             |
|  Template: [Development - Standard]                      [Save as template] |
|                                                                             |
|  ORG POLICY (locked)                                                        |
|  x No access to .env, secrets/, credentials/                                |
|  x No network requests to external APIs                                     |
|  x No git push or branch deletion                                           |
|  These restrictions are set by your organization.                           |
|                                                                             |
|  FILE ACCESS                                                                |
|  Read access                                                                |
|  [x] All project files                                                      |
|  [x] node_modules/ (read-only, for reference)                               |
|  [ ] Files outside project root                                             |
|                                                                             |
|  Write access                                                               |
|  [x] src/                                                    [+ Add path]   |
|  [x] tests/                                                                 |
|  [ ] package.json (locked by org policy)                                    |
|  [ ] Configuration files (*.config.js, *.rc)                                |
|                                                                             |
|  EXECUTION                                                                  |
|  [x] npm test / npm run test:*                                              |
|  [x] npx tsc --noEmit (type checking)                                       |
|  [x] eslint, prettier (linting/formatting)                                  |
|  [ ] npm install (dependency changes)                                       |
|  [ ] Arbitrary shell commands                                               |
|                                                                             |
|  APPROVAL MODE                                                              |
|  ( ) Manual: approve every action                                           |
|  (*) Smart: auto-approve within permissions, ask for exceptions             |
|  ( ) Autonomous: execute freely, log all actions                            |
|                                                                             |
|                                              [Cancel] [Apply permissions]   |
+-----------------------------------------------------------------------------+
Key interactions: Permission templates for common scenarios, visual diff of permission changes, audit log of all agent file access, one-click emergency revoke.
Structured Result Review
A dedicated review interface that presents agent results consistently regardless of which agent produced them. Shows diffs, test results, and confidence scores side by side.
+-----------------------------------------------------------------------------+
| Review: ACP-14 "Add OAuth2 endpoint" [X Close] |
| Agent: Claude Code . Duration: 3m 42s . Files: 4 . Cost: $0.14 |
+-----------------------------------------------------------------------------+
| |
| SUMMARY |
| Added OAuth2 token endpoint with refresh token support. |
| Created auth middleware, token validation, and integration tests. |
| All 12 tests passing. No lint errors. |
| |
| FILES CHANGED |
| + src/auth/oauth2.ts +87 lines (new file) |
| ~ src/middleware/auth.ts +23 / -4 lines |
| ~ src/routes/api.ts +12 / -1 lines |
| + tests/auth/oauth2.test.ts +145 lines (new file) |
| |
| DIFF: src/auth/oauth2.ts (new file) [Unified] [Side-by-side] |
| + 1 import { Request, Response } from 'express'; |
| + 2 import { TokenService } from '../services/token'; |
| + 3 import { UserRepository } from '../repos/user'; |
| + 5 export interface OAuth2Config { |
| + 6 accessTokenTTL: number; |
| ... |
| |
| TEST RESULTS |
| Done 12/12 tests passing |
| +-- Done OAuth2Handler.handleToken: issues access token (4ms) |
| +-- Done OAuth2Handler.handleRefresh: refreshes valid token (3ms) |
| +-- Done OAuth2Handler.handleRefresh: rejects expired token (2ms) |
| +-- ... 9 more tests |
| |
| [Approve all] [Request changes] [Approve with comments] |
+-----------------------------------------------------------------------------+
Key interactions: Unified diff view with syntax highlighting, approve/reject/request-changes workflow (like GitHub PR review), compare results from multiple agents, one-click apply to working directory, keyboard navigation (j/k between files, a to approve).
User Journey: Solo Developer
Goal: Use multiple AI coding agents to ship features faster without context-switching between tools.
| Stage | Action | Emotion | Pain Point |
|---|---|---|---|
| Trigger | Developer identifies a task too large for manual work | Motivated | Unclear which agent is best for the job |
| Spawn | Selects an agent and defines the task with context | Hopeful | Task specification is verbose, no templates |
| Handoff | ACP routes the task to the chosen agent with scoped permissions | Neutral | Permissions are all-or-nothing in current tools |
| Monitor | Watches progress via live output stream | Anxious | No standardised progress reporting across agents |
| Review | Examines agent output: diffs, generated files, test results | Critical | Context switching between agent output and codebase |
| Iterate | Provides feedback, requests changes, or approves the result | Varied | Feedback loops are slow, no structured way to guide agents |
| Merge | Integrates approved changes into the codebase | Relieved | Manual merge conflicts when multiple agents touch same files |
The UX gap is not about individual tool quality. Each paradigm (chat, dashboard, IDE-integrated, CLI) works well for its target use case. The gap is between tools: no standard for capability discovery, no unified progress reporting, no portable context, no cross-agent metrics.
ACP's UX value proposition is not a better chat interface or a better dashboard. It is the connective tissue that makes multi-agent workflows practical. The five proposed UX patterns (unified queue, smart selection, permission builder, live dashboard, structured review) map directly to the six friction points identified in the analysis. Any ACP implementation should prioritise these patterns in this order: task queue and progress reporting first, then agent selection and permissions, then result review and metrics.
UI Analysis
Design system and interactive mockups for the ACP orchestrator interface.
The UI design system draws from the Linear/Vercel/Raycast aesthetic: dark mode default, high information density, zero visual clutter. The specifications cover the complete ACP orchestrator interface across five core views: dashboard overview, task queue, agent selection panel, permission management dialog, and result review interface.
The wireframe specifications demonstrate a UI that respects developer cognitive patterns. The task queue surfaces exactly the information developers need at each stage. The agent selection modal introduces capability-aware matching with match scores. The permission dialog layers organisational policy over user-configurable scopes. The result review interface borrows from GitHub's PR review pattern.
Interactive Mockups
(Interactive panels: File Changes, Agents, Live Logs.)
Design System
(Design system reference: color palette, typography, components.)
The UI specifications solve a critical design problem: how do you present a multi-agent orchestrator without overwhelming the developer? The answer is progressive disclosure and familiar patterns. The dashboard shows four stats and two feeds. The task queue shows one list with expandable rows. The agent picker shows ranked cards with a single "auto-assign" escape hatch. At no point does the UI require the developer to understand ACP's protocol internals.
The design system's token-based architecture ensures that a production implementation can maintain visual consistency at scale. The wireframes are implementation-ready: every component has defined dimensions, states, interactions, and responsive breakpoints.
Recommendations
Seven strategic priorities for ACP, derived from the market, UX, and UI analyses.
Ship the task queue and progress protocol first
The unified task queue is ACP's most visible value proposition. Pair it with a standardised progress protocol (task_started, step_completed, file_modified, error_encountered, task_completed) so that every connected agent reports status consistently. This is the minimum viable product that demonstrates ACP's value to developers on day one.
Define a standard capability schema before scaling agent adoption
Agent selection today is a guessing game because there is no standard way to express what an agent can do. Before onboarding more agents to the ACP registry, define a capability manifest format covering supported languages, operation types, context window requirements, cost per token, and quality benchmarks. This schema powers the smart agent selection UI.
Implement OAuth-style scoped permissions
The permission model should follow the OAuth scopes pattern: read:src, write:tests, execute:lint, deny:*.env. Layer organisational policies (immutable) over user-configurable scopes (flexible per task). This addresses the enterprise adoption blocker where security teams cannot audit agent access patterns.
Target platform engineers as the initial adoption wedge
Platform engineers are the highest-leverage target. They build internal tooling, they feel the integration tax most acutely (building N connectors for N agents), and their adoption creates top-down pull within organisations. Build the SDK experience and documentation to serve this persona first.
Build the cross-agent metrics layer early
Define a metrics schema capturing tokens consumed, time to completion, test pass rate, lines changed, reviewer approval rate. Expose this through an analytics dashboard. This transforms ACP from a pure protocol into a decision-making platform, creating defensible value difficult for any single agent vendor to replicate.
Invest in the conflict resolution UX
Multi-agent workflows inevitably produce conflicts when two agents modify overlapping files. The side-by-side conflict resolution panel with four resolution options (keep A, keep B, manual merge, re-run both with awareness) is novel and high-value. Getting it right differentiates ACP from basic task routing.
Maintain strict protocol neutrality
ACP is co-developed by JetBrains and Zed, two IDE makers with different architectures and business models. This vendor neutrality is the protocol's most important strategic asset. Governance should follow the model established by MCP's Agentic AI Foundation: multi-stakeholder, open specification, shared registry.
Closing Synthesis
The ACP spike reveals a market that is simultaneously consolidating and fragmenting. The top layer is consolidating: a handful of large players dominate market share, and developer adoption of AI coding tools is near-universal. But the integration layer is fragmenting: every new agent, every new editor, every new multi-agent capability creates another custom integration that must be built and maintained. ACP resolves this tension by standardising the horizontal connection between editors and agents, just as LSP standardised the connection between editors and language servers a decade ago.
The analyses converge on a clear product direction. The market analysis shows a $7.37B market growing at 26.6% CAGR with a structural gap at the editor-agent layer. The UX analysis identifies six friction points that block multi-agent adoption and proposes five interaction patterns to address them. The UI design system provides implementation-ready specifications for a dashboard that surfaces multi-agent orchestration through familiar, progressive-disclosure patterns. The pieces fit together: ACP is not a new tool. It is the connective layer that makes existing tools composable.
The 90-day plan is straightforward. Weeks 1-4: ship the task queue view and progress protocol with two reference agents to demonstrate cross-agent orchestration. Weeks 5-8: add the capability schema, agent selection panel, and permission builder to enable team-scale deployment. Weeks 9-12: launch the metrics dashboard and conflict resolution UI to establish ACP as the data-driven orchestration layer. At every stage, the priority is breadth of agent adoption over depth of features. The network effect is the moat: every agent that implements ACP makes every editor more valuable, and every editor that adopts ACP makes every agent more accessible. The protocol wins by being the shortest path between any agent and any editor.