ACP Product Spike

The Protocol Gap

The AI coding tools market hit $7.37 billion in 2025 and is accelerating toward $24 billion by 2030. Developer adoption has crossed the tipping point: 84-91% of developers now use AI coding tools, and 75% of enterprises have integrated them into production workflows. The question is no longer whether teams will use AI agents to write code. The question is how many agents they will use, and whether those agents can work together.

Right now, they cannot. Not without friction that compounds with every new tool added to the stack.

Every coding agent speaks a different dialect. Cursor built its own IDE. Claude Code runs in the terminal. Cline is locked to VS Code. Devin ships an entirely separate cloud environment. For developers, choosing an agent means choosing an editor, or accepting the cost of context-switching between incompatible interfaces. For agent developers, it means building and maintaining N editor integrations instead of implementing one protocol. For IDE teams, it means negotiating M separate partnerships instead of adopting a single standard. The integration cost scales as O(n*m), and in February 2026, when every major player shipped multi-agent capabilities in the same two-week window, that cost multiplied overnight.

ACP is the missing protocol layer. Just as the Language Server Protocol turned an O(n*m) language intelligence problem into O(n+m), ACP standardises the connection between editors and coding agents. MCP solved how agents talk to tools. A2A addresses how agents talk to each other. ACP solves the remaining gap: how editors talk to agents. Together, these three protocols form the complete integration stack:

IDE / Editor <--[ACP]--> Coding Agent <--[MCP]--> Tools / Data

Without ACP, the industry is building a tower of custom integrations that will become increasingly expensive to maintain as the agent population grows.
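
The integration arithmetic is worth making concrete. A minimal sketch (the function names are ours, purely illustrative):

```typescript
// Integrations needed to connect n agents to m editors.
// Without a shared protocol, every agent-editor pair needs bespoke glue code;
// with one, each side implements the protocol exactly once.
function integrationsWithoutProtocol(agents: number, editors: number): number {
  return agents * editors; // O(n*m): one custom integration per pair
}

function integrationsWithProtocol(agents: number, editors: number): number {
  return agents + editors; // O(n+m): one protocol implementation per participant
}

// 10 agents x 5 editors: 50 bespoke integrations vs 15 protocol implementations.
console.log(integrationsWithoutProtocol(10, 5)); // 50
console.log(integrationsWithProtocol(10, 5)); // 15
```

Adding an eleventh agent costs five new integrations in the first model and exactly one in the second, which is why the gap widens as the agent population grows.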

Market Analysis

The competitive landscape for AI coding tools has consolidated into four distinct layers, but beneath the surface concentration, the market is fracturing in a way that creates a structural opening for protocol standardisation.

The top three players control over 70% of the market, with GitHub Copilot alone commanding ~42% share and crossing $2 billion in ARR. But the fracture is vertical. IDE-integrated tools like Copilot, Cursor, and Windsurf each bundle their own agent capabilities inside proprietary editor experiences. Autonomous agents like Claude Code, Devin, and Aider each require their own runtime environment. Orchestration frameworks like LangGraph, CrewAI, and AutoGen each define their own workflow abstractions. No horizontal layer connects these verticals.

Market Sizing and Signals

| Signal | Data Point | Source |
|---|---|---|
| Market size (2025) | $7.37B (+26.6% CAGR) | Mordor Intelligence |
| Projected market (2030) | $23.97B (3.3x growth) | Mordor Intelligence |
| CAGR | 26.6% | Mordor Intelligence |
| GitHub Copilot ARR | $2B+ (4.7M paid subscribers) | Microsoft |
| GitHub Copilot users | 20M+ | GitHub (July 2025) |
| Developer adoption rate | 84-91% (near-universal) | Stack Overflow, DX |
| Enterprise adoption | 75% | Industry surveys |
| Top-3 market concentration | 70%+ | CB Insights |

The Multi-Agent Inflection Point

February 2026 marked a watershed moment: every major tool shipped multi-agent capabilities in the same two-week window.

  • Grok Build: 8 parallel agents
  • Windsurf: 5 parallel agents
  • Claude Code: Agent Teams
  • Codex CLI: Agents SDK
  • Devin: Parallel sessions

Running multiple agents simultaneously on different parts of a codebase is now table stakes. This has a direct consequence for protocol design: when agents proliferate, the cost of proprietary integration multiplies. Each new agent-IDE pairing requires custom integration work without a standard protocol. ACP turns an O(n*m) integration problem into an O(n+m) one.

Competitive Landscape

GitHub Copilot (IDE-Integrated)
Code generation, chat, Workspace, security fixes. ~42% market share among paid tools. 200M+ GitHub accounts feeding context.
Limitations: VS Code/GitHub ecosystem lock-in; limited model choice

Cursor (IDE-Integrated)
Most sophisticated agent workflows via Composer mode. Multi-file awareness. Autonomous coding capabilities. $20-40/month.
Limitations: Forked VS Code, proprietary; premium pricing

Windsurf (IDE-Integrated)
Cascade Flow, SWE-1.5 model, Codemaps for deep context. Acquired by OpenAI (May 2025). Enterprise security features.
Limitations: Acquired by OpenAI, future direction uncertain

Tabnine (IDE-Integrated)
Privacy-first. Cloud, on-premises, or fully air-gapped deployment. Now ships agents on top of completion engine. Key play for regulated industries.
Limitations: Smaller model ecosystem; enterprise-focused pricing

Amazon Q Developer (IDE-Integrated)
AWS-native AI integration. Code and infrastructure suggestions shaped by AWS best practices. Security scanning built-in.
Limitations: AWS ecosystem lock-in; weaker outside AWS workflows

Sourcegraph Cody (IDE-Integrated)
Leverages Sourcegraph code intelligence and indexing. Enterprise-scale codebase understanding across multi-repo projects.
Limitations: Requires Sourcegraph indexing infrastructure

Claude Code (Coding Agent)
Terminal-native agent with 200K token context. 80.9% SWE-bench score: highest real-world coding capability. Agent Teams for multi-agent workflows.
Limitations: Terminal-only interface; requires API access

Devin (Coding Agent)
"Full AI software engineer." Hybrid chatbot/IDE in the cloud. Per-ACU pricing. Parallel sessions for concurrent task execution.
Limitations: Expensive ACU pricing; cloud-only; proprietary

Aider (Coding Agent)
Free, open-source terminal pair programming. BYOM (bring your own model): users pay LLM provider rates directly. Git-native workflow.
Limitations: Terminal-only; depends on external model quality

Cline (Coding Agent)
Free, open-source VS Code extension. BYOM agentic coding. Part of the ecosystem alongside Kilo Code and OpenCode.
Limitations: VS Code-only; depends on external model quality

SWE-Agent (Coding Agent)
Research-grade autonomous agent (Princeton NLP). Turns LLMs into software engineering agents. Important as a benchmark tool.
Limitations: Research tool, not production-ready; requires setup

Replit Agent (Coding Agent)
Full-stack scaffolding inside Replit. "Ghost Mode" (2026) enables headless AI runs. Strongest for rapid prototyping and non-engineer users.
Limitations: Replit platform lock-in; limited for large codebases

LangGraph (Orchestration)
Graph-based architecture treating agent steps as DAG nodes. Reached v1.0 in late 2025. Default runtime for all LangChain agents.
Limitations: Steep learning curve; tightly coupled to LangChain

CrewAI (Orchestration)
Role-based multi-agent collaboration. Each agent has a distinct skillset. "Crew" container coordinates workflows with shared context.
Limitations: Opinionated structure; less flexible for custom flows

AutoGen (Orchestration)
Conversational agent workflows (Microsoft). Merging with Semantic Kernel into "Microsoft Agent Framework". GA expected Q1 2026.
Limitations: Merging into Microsoft Agent Framework; API instability

smolagents (Orchestration)
Lightweight, code-driven patterns (Hugging Face). ReAct-style prompting. Can generate Python code on the fly. Minimal setup.
Limitations: Minimal tooling; less suitable for complex workflows

MCP (Protocol)
Agent-to-tool communication (Anthropic). The vertical connection linking agents to databases, APIs, and data sources. Now under the Agentic AI Foundation with Anthropic, Google, Microsoft, OpenAI, and AWS.
Notes: Established standard; agent-to-tool only

ACP (Protocol)
Client-to-agent communication (JetBrains + Zed). The horizontal connection between editors and agents. Agent Registry launched January 2026. SDKs in Python, TypeScript, Kotlin, Rust.
Notes: Early stage; adoption breadth still growing

A2A (Protocol)
Agent-to-agent communication (Google). Inter-agent protocol. IBM's ACP (Agent Communication Protocol) merged with A2A under the Linux Foundation in August 2025.
Notes: Inter-agent only; Linux Foundation governance

Where ACP Fits in the Stack

| Layer | Protocol | Function | Status |
|---|---|---|---|
| Agent-to-tool | MCP | Connects agents to external tools/data | Established |
| Editor-to-agent | ACP | Connects editors to coding agents | Live |
| Agent-to-agent | A2A | Inter-agent communication | Merging |

ACP occupies the critical middle layer that no other protocol addresses. It is the missing piece that connects the rapidly growing population of autonomous coding agents to the editors where developers actually work.

So What?

The market is large ($7.37B and growing at 26.6% CAGR), adoption is near-universal (84-91% of developers), and the shift to multi-agent workflows is already happening. But fragmentation is the dominant structural risk. ACP does not compete with any player in the landscape. It connects them. The protocol sits at the critical middle layer between editors and agents, the one layer no existing standard addresses.

The network effect is straightforward: every agent that implements ACP makes every ACP-supporting editor more valuable, and vice versa. First-mover advantage belongs to the protocol that achieves sufficient breadth of adoption before the market consolidates around proprietary integrations.

UX Analysis

Current AI coding tool UX falls into four paradigms, each optimised for a different workflow. None handles multi-agent orchestration well.

The friction analysis reveals six critical pain points that compound when developers try to use multiple agents: agent selection is a guessing game, permission models are incompatible, progress reporting is fragmented, result review varies wildly between tools, context is lost when switching agents, and there are no cross-agent metrics.

These are not minor usability issues. They represent structural barriers to multi-agent adoption. A team lead trying to run five agents simultaneously today must monitor five different interfaces, manage five different permission systems, compare results in five different formats, and re-explain context every time they switch between tools.

Interaction Paradigms

Chat-Based

Cursor, Cline, Aider
Strengths: Low learning curve, feels natural, good for exploratory tasks.
Weaknesses: Poor for complex multi-step tasks, no task queue, limited visibility into agent reasoning.

Dashboard-Based

Devin, Replit Agent
Strengths: Clear task tracking, good for async workflows, supports concurrent tasks.
Weaknesses: Context switching between dashboard and IDE, heavier cognitive load.

IDE-Integrated

GitHub Copilot, Windsurf
Strengths: Minimal context switching, seamless workflow, low friction for simple tasks.
Weaknesses: Limited to local context, poor multi-agent coordination, no task management.

CLI-Based

Claude Code, Aider, Codex CLI
Strengths: Scriptable, composable, fast for power users, CI/CD integration.
Weaknesses: Steep learning curve, poor discoverability, no visual feedback for complex outputs.

| Dimension | Chat-Based | Dashboard | IDE-Integrated | CLI-Based |
|---|---|---|---|---|
| Discoverability | High | Medium | Very High | Low |
| Control Granularity | Medium | High | Low | Very High |
| Progress Feedback | Low | High | Medium | Medium |
| Context Switching | Medium | High | Very Low | Medium |
| Multi-Agent Support | Poor | Good | Poor | Medium |
| Async Workflow | Poor | Excellent | Poor | Good |
| Scriptability | Low | Low | Low | Excellent |
| Best For | Quick edits | Complex projects | Inline coding | Automation |

Critical Friction Points

Agent selection is a guessing game

Developers cannot easily compare agent capabilities. Each tool describes itself differently. There is no standard way to express what an agent can do, what languages it supports, what context window it needs, or how much it costs per task.

Impact: Teams default to a single agent even when another would be better for specific tasks. This leads to suboptimal results and wasted compute.

ACP Opportunity: Define a standard capability schema that every agent publishes. Include supported languages, max context, average task time, cost per token, supported operations, and quality benchmarks.
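
One possible shape for such a manifest, sketched in TypeScript (the field names are our assumptions, not the actual ACP schema; the sample values echo figures cited elsewhere in this document):

```typescript
// Hypothetical capability manifest an agent might publish for discovery.
interface CapabilityManifest {
  agentId: string;
  languages: string[];                // supported programming languages
  operations: string[];               // task types the agent handles
  maxContextTokens: number;           // context window requirement
  avgTaskSeconds: number;             // historical average task time
  costPerTask: number;                // rough USD estimate per task
  benchmarks: Record<string, number>; // published quality benchmarks
}

// Example manifest (values illustrative).
const claudeCode: CapabilityManifest = {
  agentId: "claude-code",
  languages: ["python", "typescript", "go", "rust"],
  operations: ["backend", "api", "testing", "refactoring", "debugging"],
  maxContextTokens: 200_000,
  avgTaskSeconds: 140,
  costPerTask: 0.12,
  benchmarks: { "swe-bench": 0.809 },
};
```

Because every agent publishes the same shape, an orchestrator can filter and rank agents mechanically instead of relying on marketing copy.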

Permission models are incompatible

Every agent handles permissions differently. Some agents get full filesystem access. Others require explicit file-by-file approval. There is no middle ground, and no standard way to express scoped permissions like "read src/, write to tests/, never touch config files."

Impact: Security-conscious teams either over-restrict agents (limiting usefulness) or under-restrict them (creating risk). Enterprise adoption stalls because compliance teams cannot audit agent access patterns.

ACP Opportunity: Implement a permission specification language that defines scoped access controls per agent, per task. Think OAuth scopes for coding agents: read:src, write:tests, execute:lint, deny:*.env.
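
A minimal sketch of how such scopes could be evaluated, assuming a simple `verb:glob` syntax with deny rules taking precedence (illustrative only, not the ACP specification):

```typescript
// A scope is "verb:pattern", e.g. "read:src/*", "write:tests/*", "deny:*.env".
type Scope = string;

// Convert a minimal glob (only "*" wildcards) into an anchored regex.
function matchesGlob(pattern: string, path: string): boolean {
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  return new RegExp("^" + escaped.replace(/\*/g, ".*") + "$").test(path);
}

function isAllowed(scopes: Scope[], action: string, target: string): boolean {
  // Deny rules always win over grants.
  for (const scope of scopes) {
    const [verb, pattern] = scope.split(":");
    if (verb === "deny" && matchesGlob(pattern, target)) return false;
  }
  // Otherwise, any matching grant for this action permits it.
  return scopes.some((scope) => {
    const [verb, pattern] = scope.split(":");
    return verb === action && matchesGlob(pattern, target);
  });
}

const scopes: Scope[] = ["read:src/*", "write:tests/*", "execute:lint", "deny:*.env"];
console.log(isAllowed(scopes, "read", "src/auth.ts")); // true
console.log(isAllowed(scopes, "write", "src/auth.ts")); // false: no write grant for src/
console.log(isAllowed(scopes, "read", "prod.env")); // false: deny rule wins
```

The deny-wins ordering is what lets an organisation layer immutable policy over per-task grants.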

Progress reporting is fragmented

Each agent reports progress differently. Some stream tokens, some show step counts, some provide no feedback until completion. When running multiple agents concurrently, there is no unified view of what is happening.

Impact: Developers either anxiously watch idle terminals or miss important intermediate results. Team leads cannot assess whether agents are stuck, making progress, or have silently failed.

ACP Opportunity: Define a standard progress protocol with structured events: task_started, step_completed, file_modified, test_run, error_encountered, task_completed.
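
Modeled as a discriminated union, such an event stream might look like this (the payload fields beyond the event names are our assumptions):

```typescript
// Structured progress events; the "type" values are the ones named above.
type ProgressEvent =
  | { type: "task_started"; taskId: string; agent: string }
  | { type: "step_completed"; taskId: string; description: string }
  | { type: "file_modified"; taskId: string; path: string; added: number; removed: number }
  | { type: "test_run"; taskId: string; passed: number; failed: number }
  | { type: "error_encountered"; taskId: string; message: string }
  | { type: "task_completed"; taskId: string; success: boolean };

// A unified view can fold any agent's stream into a single status line.
function summarize(events: ProgressEvent[]): string {
  const last = events[events.length - 1];
  switch (last.type) {
    case "task_completed":
      return last.success ? "done" : "failed";
    case "error_encountered":
      return "error: " + last.message;
    default:
      return "in progress (" + events.length + " events)";
  }
}

const stream: ProgressEvent[] = [
  { type: "task_started", taskId: "ACP-14", agent: "claude-code" },
  { type: "file_modified", taskId: "ACP-14", path: "src/auth/oauth2.ts", added: 87, removed: 0 },
  { type: "test_run", taskId: "ACP-14", passed: 12, failed: 0 },
  { type: "task_completed", taskId: "ACP-14", success: true },
];
console.log(summarize(stream)); // "done"
```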

Result review varies between tools

Agent outputs appear in different formats across tools. Some show diffs, some show full files, some stream raw output. Comparing results from two agents on the same task requires manual effort. There is no standard approval/rejection workflow.

Impact: Code review for agent-generated code is slower than it should be. Teams cannot A/B test agents effectively.

ACP Opportunity: Standardise a result envelope that wraps every agent output in a consistent format: structured diffs, test results, confidence scores, resource usage, and an approval/rejection API.
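
A hypothetical envelope shape (field names are assumptions, not the actual ACP schema):

```typescript
// Uniform wrapper for any agent's output, whatever tool produced it.
interface ResultEnvelope {
  taskId: string;
  agent: string;
  diffs: { path: string; added: number; removed: number }[];
  tests: { passed: number; failed: number };
  confidence: number; // 0..1, agent's self-estimate
  costUsd: number;
  status: "pending_review" | "approved" | "rejected";
}

// Approval is a uniform state transition regardless of the producing agent.
function approve(result: ResultEnvelope): ResultEnvelope {
  return { ...result, status: "approved" };
}

// Example envelope, echoing the review mockup later in this document.
const result: ResultEnvelope = {
  taskId: "ACP-14",
  agent: "claude-code",
  diffs: [{ path: "src/auth/oauth2.ts", added: 87, removed: 0 }],
  tests: { passed: 12, failed: 0 },
  confidence: 0.9,
  costUsd: 0.14,
  status: "pending_review",
};
```

Because two agents running the same task return the same envelope shape, A/B comparison becomes a simple field-by-field diff.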

Context is lost when switching agents

When switching between agents or resuming a task with a different agent, context is lost. The new agent starts from scratch. File state, conversation history, and partial progress do not transfer.

Impact: Developers waste time re-explaining context. Multi-agent workflows are effectively impossible because agents cannot build on each other's work.

ACP Opportunity: Define a context transfer protocol that packages task state (files modified, decisions made, constraints discovered) in a portable format any ACP-compatible agent can consume.
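
A sketch of what a portable context package could contain (field names are assumptions); because the payload is plain JSON, any agent supporting the format could deserialize it and resume:

```typescript
// Portable task state handed from one agent to the next.
interface TaskContext {
  taskId: string;
  goal: string;
  filesModified: string[]; // what has already been touched
  decisions: string[];     // choices already made, so the next agent keeps them
  constraints: string[];   // limits discovered along the way
}

// Serialize on handoff, deserialize on resume.
function packContext(ctx: TaskContext): string {
  return JSON.stringify(ctx);
}
function unpackContext(payload: string): TaskContext {
  return JSON.parse(payload) as TaskContext;
}

const ctx: TaskContext = {
  taskId: "ACP-14",
  goal: "Add OAuth2 endpoint with token refresh",
  filesModified: ["src/auth/oauth2.ts", "src/middleware/auth.ts"],
  decisions: ["refresh tokens stored server-side"],
  constraints: ["never touch config files"],
};
```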

No cross-agent metrics

There is no standard way to measure and compare agent performance. Speed, accuracy, cost, and code quality metrics are siloed within each tool. Teams cannot make data-driven decisions about which agent to use for which task type.

Impact: Agent selection remains gut-feel rather than data-driven. Organisations cannot optimise their AI tooling spend.

ACP Opportunity: Define a metrics schema capturing standardized performance data across all agents: tokens consumed, time to completion, test pass rate, lines changed, reviewer approval rate.
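
A possible per-task metrics record, with a simple cross-agent rollup (the schema is our assumption):

```typescript
// Standardised per-task performance record.
interface TaskMetrics {
  agent: string;
  tokensConsumed: number;
  secondsToCompletion: number;
  testPassRate: number; // 0..1
  linesChanged: number;
  reviewerApproved: boolean;
}

// Fraction of an agent's tasks that reviewers approved.
function approvalRate(metrics: TaskMetrics[], agent: string): number {
  const rows = metrics.filter((m) => m.agent === agent);
  if (rows.length === 0) return 0;
  return rows.filter((m) => m.reviewerApproved).length / rows.length;
}

// Sample history (values illustrative).
const history: TaskMetrics[] = [
  { agent: "claude-code", tokensConsumed: 41000, secondsToCompletion: 222, testPassRate: 1.0, linesChanged: 267, reviewerApproved: true },
  { agent: "claude-code", tokensConsumed: 18000, secondsToCompletion: 95, testPassRate: 0.92, linesChanged: 40, reviewerApproved: false },
  { agent: "codex", tokensConsumed: 12000, secondsToCompletion: 60, testPassRate: 1.0, linesChanged: 31, reviewerApproved: true },
];
console.log(approvalRate(history, "claude-code")); // 0.5
```

Once every agent emits the same record, routing decisions can be driven by this history instead of gut feel.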

Proposed UX: Wireframe Mockups

Unified Task Queue

Replace fragmented per-tool task management with a single task queue that spans all connected agents. Tasks show status, assigned agent, priority, progress, estimated completion, and cost so far.

+-----------------------------------------------------------------------------+
|  ACP Orchestrator                              [Search...]  [+ New Task]    |
+-----------------------------------------------------------------------------+
|  Status: [All]  Agent: [All]  Priority: [All]           [List] [Kanban]     |
+-----------------------------------------------------------------------------+
|                                                                             |
|  * ACP-14  Add OAuth2 endpoint             ============..  78%             |
|    Claude Code . 3 files changed . 2m 14s elapsed                          |
|    +-- Done: Created auth middleware                                        |
|    +-- Done: Added token validation                                         |
|    +-- Now:  Writing integration tests...                                   |
|    +-- Next: Update API documentation                                       |
|                                                                             |
|  * ACP-15  Refactor user model              ====........  35%              |
|    Codex . 1 file changed . 45s elapsed                                    |
|    +-- Now:  Migrating fields to new schema...                              |
|                                                                             |
|  ! ACP-16  Fix CORS headers                 ==========..  80%  BLOCKED    |
|    Claude Code . Needs clarification                                        |
|    "Should CORS allow credentials? Current config is ambiguous."            |
|    [Reply to agent]  [View context]                                         |
|                                                                             |
|  o ACP-17  Add rate limiting                 ............  Queued           |
|    Waiting for agent . Priority: Medium                                     |
|                                                                             |
|  Done ACP-13  Database indexing              ============  Done            |
|    Codex . 2 files changed . Completed 4m ago                              |
|    [Review diff]  [Approve]  [Request changes]                              |
|                                                                             |
+-----------------------------------------------------------------------------+
|  Showing 5 of 12 tasks . 2 active . 1 blocked . 3 queued . 6 complete      |
+-----------------------------------------------------------------------------+

Key interactions: Drag-and-drop priority reordering, one-click agent reassignment, bulk operations (pause all, cancel by tag), filter by status/agent/project/author.

Smart Agent Selection

Present a capability-aware agent picker when creating tasks. ACP queries all connected agents for their capability manifests and recommends the best match based on task type, language, complexity, cost tolerance, and historical performance.

+-----------------------------------------------------------------------------+
|  Select Agent                                                    [X Close]  |
+-----------------------------------------------------------------------------+
|                                                                             |
|  Task: "Add OAuth2 endpoint with token refresh"                             |
|  Suggested: Claude Code (best match for backend API work)                   |
|                                                                             |
|  [Search agents...]          [Capability]  [Status]  [Cost]                 |
|                                                                             |
|  +------------------------------------------------------------------+      |
|  |  * Claude Code                                     [Available]   |      |
|  |  Capabilities: backend, api, testing, refactoring, debugging     |      |
|  |  Languages:    Python, TypeScript, Go, Rust                      |      |
|  |  Quality:  ========..  4.2/5    Speed: ======....  Fast          |      |
|  |  Cost:     ~$0.12/task          Sessions today: 8                |      |
|  |  Match score: 94%                                     [Select]   |      |
|  +------------------------------------------------------------------+      |
|                                                                             |
|  +------------------------------------------------------------------+      |
|  |  Codex                                              [Available]   |      |
|  |  Capabilities: backend, migrations, scripting                    |      |
|  |  Languages:    Python, JavaScript, SQL                           |      |
|  |  Quality:  ======....  3.5/5    Speed: ========..  Fast          |      |
|  |  Cost:     ~$0.08/task          Sessions today: 3                |      |
|  |  Match score: 71%                                     [Select]   |      |
|  +------------------------------------------------------------------+      |
|                                                                             |
|  [Cancel]                                      [Auto-assign best match]     |
+-----------------------------------------------------------------------------+

Key interactions: Task description auto-suggests matching agents, side-by-side comparison (speed, cost, quality), "auto-route" mode for hands-off selection, favorite agents and custom routing rules.
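
The match score shown in the mockup could be computed many ways; one toy formulation weights capability coverage by historical quality (purely illustrative, not a proposed algorithm):

```typescript
// Toy match score: how much of the task's required capability set the agent
// covers, scaled by its historical quality rating (0..5), as a percentage.
function matchScore(
  required: string[],
  agentCapabilities: string[],
  quality: number
): number {
  const covered = required.filter((r) => agentCapabilities.includes(r)).length;
  const coverage = required.length === 0 ? 1 : covered / required.length;
  return Math.round(coverage * (quality / 5) * 100);
}

// Full coverage at quality 4.2/5 yields 84%.
console.log(matchScore(
  ["backend", "api", "testing"],
  ["backend", "api", "testing", "refactoring"],
  4.2
)); // 84
```

A production ranker would also fold in cost, availability, and historical performance on similar tasks, but the principle is the same: scores come from the published capability manifests, not hand-tuned tool knowledge.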

Granular Permission Builder

Replace binary allow/deny with a visual permission builder that lets users define scoped access per agent per task. Uses a familiar file-tree UI with checkboxes for read/write/execute.

+-----------------------------------------------------------------------------+
|  Permissions: Claude Code on "Add OAuth2 endpoint"               [X Close]  |
+-----------------------------------------------------------------------------+
|                                                                             |
|  Template: [Development - Standard]  [Save as template]                     |
|                                                                             |
|  ORG POLICY (locked)                                                        |
|  x No access to .env, secrets/, credentials/                                |
|  x No network requests to external APIs                                     |
|  x No git push or branch deletion                                           |
|  These restrictions are set by your organization.                           |
|                                                                             |
|  FILE ACCESS                                                                |
|  Read access                                                                |
|  [x] All project files                                                      |
|  [x] node_modules/ (read-only, for reference)                               |
|  [ ] Files outside project root                                             |
|                                                                             |
|  Write access                                                               |
|  [x] src/                                          [+ Add path]            |
|  [x] tests/                                                                 |
|  [ ] package.json (locked by org policy)                                    |
|  [ ] Configuration files (*.config.js, *.rc)                                |
|                                                                             |
|  EXECUTION                                                                  |
|  [x] npm test / npm run test:*                                              |
|  [x] npx tsc --noEmit (type checking)                                       |
|  [x] eslint, prettier (linting/formatting)                                  |
|  [ ] npm install (dependency changes)                                       |
|  [ ] Arbitrary shell commands                                               |
|                                                                             |
|  APPROVAL MODE                                                              |
|  ( ) Manual: approve every action                                           |
|  (*) Smart: auto-approve within permissions, ask for exceptions             |
|  ( ) Autonomous: execute freely, log all actions                            |
|                                                                             |
|  [Cancel]                                          [Apply permissions]      |
+-----------------------------------------------------------------------------+

Key interactions: Permission templates for common scenarios, visual diff of permission changes, audit log of all agent file access, one-click emergency revoke.

Structured Result Review

A dedicated review interface that presents agent results consistently regardless of which agent produced them. Shows diffs, test results, and confidence scores side by side.

+-----------------------------------------------------------------------------+
|  Review: ACP-14 "Add OAuth2 endpoint"                            [X Close]  |
|  Agent: Claude Code . Duration: 3m 42s . Files: 4 . Cost: $0.14            |
+-----------------------------------------------------------------------------+
|                                                                             |
|  SUMMARY                                                                    |
|  Added OAuth2 token endpoint with refresh token support.                    |
|  Created auth middleware, token validation, and integration tests.           |
|  All 12 tests passing. No lint errors.                                      |
|                                                                             |
|  FILES CHANGED                                                              |
|  + src/auth/oauth2.ts              +87 lines (new file)                     |
|  ~ src/middleware/auth.ts           +23 / -4 lines                          |
|  ~ src/routes/api.ts               +12 / -1 lines                          |
|  + tests/auth/oauth2.test.ts       +145 lines (new file)                    |
|                                                                             |
|  DIFF: src/auth/oauth2.ts (new file)          [Unified] [Side-by-side]     |
|  + 1  import { Request, Response } from 'express';                          |
|  + 2  import { TokenService } from '../services/token';                     |
|  + 3  import { UserRepository } from '../repos/user';                       |
|  + 5  export interface OAuth2Config {                                       |
|  + 6    accessTokenTTL: number;                                             |
|  ...                                                                        |
|                                                                             |
|  TEST RESULTS                                                               |
|  Done 12/12 tests passing                                                   |
|  +-- Done OAuth2Handler.handleToken: issues access token (4ms)              |
|  +-- Done OAuth2Handler.handleRefresh: refreshes valid token (3ms)          |
|  +-- Done OAuth2Handler.handleRefresh: rejects expired token (2ms)          |
|  +-- ... 9 more tests                                                       |
|                                                                             |
|  [Approve all]  [Request changes]  [Approve with comments]                  |
+-----------------------------------------------------------------------------+

Key interactions: Unified diff view with syntax highlighting, approve/reject/request-changes workflow (like GitHub PR review), compare results from multiple agents, one-click apply to working directory, keyboard navigation (j/k between files, a to approve).

User Journey: Solo Developer

Goal: Use multiple AI coding agents to ship features faster without context-switching between tools.

| Stage | Action | Emotion | Pain Point |
|---|---|---|---|
| Trigger | Developer identifies a task too large for manual work | Motivated | Unclear which agent is best for the job |
| Spawn | Selects an agent and defines the task with context | Hopeful | Task specification is verbose, no templates |
| Handoff | ACP routes the task to the chosen agent with scoped permissions | Neutral | Permissions are all-or-nothing in current tools |
| Monitor | Watches progress via live output stream | Anxious | No standardised progress reporting across agents |
| Review | Examines agent output: diffs, generated files, test results | Critical | Context switching between agent output and codebase |
| Iterate | Provides feedback, requests changes, or approves the result | Varied | Feedback loops are slow, no structured way to guide agents |
| Merge | Integrates approved changes into the codebase | Relieved | Manual merge conflicts when multiple agents touch same files |

So What?

The UX gap is not about individual tool quality. Each paradigm (chat, dashboard, IDE-integrated, CLI) works well for its target use case. The gap is between tools: no standard for capability discovery, no unified progress reporting, no portable context, no cross-agent metrics.

ACP's UX value proposition is not a better chat interface or a better dashboard. It is the connective tissue that makes multi-agent workflows practical. The five proposed UX patterns (unified queue, smart selection, permission builder, live dashboard, structured review) map directly to the six friction points identified in the analysis. Any ACP implementation should prioritise these patterns in this order: task queue and progress reporting first, then agent selection and permissions, then result review and metrics.

UI Analysis

Design system and interactive mockups for the ACP orchestrator interface, drawing from the Linear/Vercel/Raycast aesthetic.

The design system defaults to dark mode, with high information density and zero visual clutter. The design specifications cover the complete ACP orchestrator interface across five core views: dashboard overview, task queue, agent selection panel, permission management dialog, and result review interface.

The wireframe specifications demonstrate a UI that respects developer cognitive patterns. The task queue surfaces exactly the information developers need at each stage. The agent selection modal introduces capability-aware matching with match scores. The permission dialog layers organisational policy over user-configurable scopes. The result review interface borrows from GitHub's PR review pattern.

Interactive Mockups

ACP Dashboard

Stats: Active Agents 4/6 · Queued Tasks 12 · Completed Today 47 · Error Rate 2.1%

Agents:
  • Claude Opus · Working · Refactor auth module
  • Claude Sonnet · Idle · Waiting for task
  • GPT-4o · Working · Fix API endpoint
  • Codestral · Offline · Disconnected

Active Tasks (View All):
  • #127 Refactor auth module · Claude Opus
  • #125 Fix API rate limiting · GPT-4o
  • #130 Add unit tests for parser · Queued
  • #131 Update API documentation · Queued

Task #127: Refactor auth module
Working · P0 Critical · Claude Opus · 12m elapsed

Live Output (Pause / Clear):
10:42:03 INFO Analyzing auth module structure...
10:42:04 INFO Found 3 files to refactor
10:42:06 INFO Refactoring auth/middleware.ts...
10:42:08 INFO File saved: middleware.ts (+24 -8)
10:42:10 INFO Refactoring auth/session.ts...
10:42:12 WARN Deprecated API usage found in session.ts:45
10:42:14 INFO File saved: session.ts (+12 -3)
10:42:15 INFO Running test suite...
10:42:18 INFO Tests passed: 24/24

File Changes:
  • auth/middleware.ts +24 -8
  • auth/session.ts +12 -3
  • auth/types.ts +5 -0
Agent Configuration

  • Claude Opus 4 (Anthropic) · Connected · code-edit, test, debug, refactor · Tasks: 47 · Avg time: 2.3m · Success: 94%
  • Claude Sonnet 4 (Anthropic) · Connected · code-edit, review, docs · Tasks: 31 · Avg time: 1.1m · Success: 97%
  • GPT-4o (OpenAI) · Connected · code-edit, test, explain · Tasks: 28 · Avg time: 1.8m · Success: 91%
Live Logs

Filters: All Agents · All Levels

10:42:03 INFO  [claude-opus] Analyzing file structure...
10:42:04 INFO  [claude-opus] Found 12 files matching pattern
10:42:04 DEBUG [sonnet] Idle, waiting for task...
10:42:05 WARN  [gpt-4o] Rate limit approaching (80%)
10:42:06 INFO  [claude-opus] Refactoring middleware.ts...
10:42:07 ERROR [codestral] Connection timeout
10:42:08 INFO  [claude-opus] File saved: middleware.ts
10:42:09 INFO  [gpt-4o] Starting API endpoint analysis
10:42:10 INFO  [claude-opus] Running test suite on auth module
10:42:12 INFO  [claude-opus] Tests passed: 24/24 in 1.8s
10:42:13 INFO  [gpt-4o] Found 3 rate limiting gaps
10:42:14 WARN  [gpt-4o] Deprecated express.json() usage at line 42

Live · Showing 247 entries · Pause / Clear / Export

Design System

Color Palette

  • Backgrounds: bg-primary (Primary BG), bg-secondary (Secondary BG), bg-tertiary (Tertiary BG)
  • Accents: accent-blue, accent-green, accent-amber, accent-red, accent-purple

Typography

  • size-3xl / bold: Page Heading
  • size-xl / semi: Section Title
  • size-md / medium: Subsection Heading
  • size-base / normal: Body text for content areas and descriptions.
  • size-sm / mono: const agent = new ACPAgent();
  • size-xs / upper: Section Label

Components

  • Buttons
  • Badges: Blue, Green, Amber, Red, Purple, Gray
  • Status Indicators: Working, Idle, Success, Error, Offline

So What?

The UI specifications solve a critical design problem: how do you present a multi-agent orchestrator without overwhelming the developer? The answer is progressive disclosure and familiar patterns. The dashboard shows four stats and two feeds. The task queue shows one list with expandable rows. The agent picker shows ranked cards with a single "auto-assign" escape hatch. At no point does the UI require the developer to understand ACP's protocol internals.

The design system's token-based architecture ensures that a production implementation can maintain visual consistency at scale. The wireframes are implementation-ready: every component has defined dimensions, states, interactions, and responsive breakpoints.
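As an illustration of what "token-based" means in practice, the sketch below resolves component styles through named tokens rather than raw values. The token names mirror the design system above, but the hex values and the `cssVar` helper are placeholders for illustration, not the spec's actual palette or API:

```typescript
// Hypothetical token map. Names follow the design system's palette and
// type scale; the concrete values here are invented placeholders.
const tokens = {
  color: {
    "bg-primary": "#0d1117",
    "bg-secondary": "#161b22",
    "accent-blue": "#58a6ff",
    "accent-red": "#f85149",
  },
  size: { xs: "11px", sm: "12px", base: "14px", md: "16px", xl: "20px", "3xl": "28px" },
} as const;

// Components reference tokens by name, never by raw value, so a theme
// change is a single edit to the token map.
function cssVar(group: "color" | "size", name: string): string {
  return `var(--${group}-${name})`;
}

const blue = cssVar("color", "accent-blue"); // "var(--color-accent-blue)"
```

The indirection is the point: a production implementation can emit these as CSS custom properties and re-skin every component without touching component code.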

Recommendations #

The market, UX, and UI analyses converge on seven strategic priorities for ACP.

Ship the task queue and progress protocol first

The unified task queue is ACP's most visible value proposition. Pair it with a standardised progress protocol (task_started, step_completed, file_modified, error_encountered, task_completed) so that every connected agent reports status consistently. This is the minimum viable product that demonstrates ACP's value to developers on day one.
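As a sketch of what that progress protocol could look like on the wire, the discriminated union below models the five event kinds as typed JSON events. All type and field names are illustrative assumptions, not a published ACP schema:

```typescript
// Hypothetical progress events -- one variant per event kind named above.
type ProgressEvent =
  | { type: "task_started"; taskId: string; agent: string; ts: string }
  | { type: "step_completed"; taskId: string; step: string; ts: string }
  | { type: "file_modified"; taskId: string; path: string; added: number; removed: number; ts: string }
  | { type: "error_encountered"; taskId: string; message: string; ts: string }
  | { type: "task_completed"; taskId: string; success: boolean; ts: string };

// Because every agent emits the same shapes, an editor can render any
// agent's stream with one function.
function renderEvent(e: ProgressEvent): string {
  switch (e.type) {
    case "task_started": return `${e.ts} START ${e.taskId} (${e.agent})`;
    case "step_completed": return `${e.ts} STEP ${e.step}`;
    case "file_modified": return `${e.ts} FILE ${e.path} (+${e.added} -${e.removed})`;
    case "error_encountered": return `${e.ts} ERROR ${e.message}`;
    case "task_completed": return `${e.ts} DONE ${e.taskId} success=${e.success}`;
  }
}

const line = renderEvent({
  type: "file_modified", taskId: "127", path: "auth/middleware.ts",
  added: 24, removed: 8, ts: "10:42:08",
}); // "10:42:08 FILE auth/middleware.ts (+24 -8)"
```

This is exactly the property the live output panels depend on: one renderer, many agents.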

Define a standard capability schema before scaling agent adoption

Agent selection today is a guessing game because there is no standard way to express what an agent can do. Before onboarding more agents to the ACP registry, define a capability manifest format covering supported languages, operation types, context window requirements, cost per token, and quality benchmarks. This schema powers the smart agent selection UI.
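A minimal sketch of such a manifest and the selection logic it enables, assuming a flat JSON format; every field and agent name here is hypothetical, not part of any ACP registry:

```typescript
// Hypothetical capability manifest covering the dimensions listed above:
// languages, operation types, context requirements, cost, and quality.
interface CapabilityManifest {
  agent: string;
  languages: string[];          // e.g. ["typescript", "python"]
  operations: string[];         // e.g. ["code-edit", "test", "refactor"]
  contextWindowTokens: number;  // minimum context the agent needs
  costPerMTokensUsd: number;    // rough cost signal for ranking
  benchmarkPassRate: number;    // 0..1, from a shared quality suite
}

// Simple selector: filter agents that can do the job, rank by quality.
function selectAgent(
  task: { language: string; operation: string },
  agents: CapabilityManifest[],
): string | undefined {
  return agents
    .filter(a => a.languages.includes(task.language) && a.operations.includes(task.operation))
    .sort((a, b) => b.benchmarkPassRate - a.benchmarkPassRate)[0]?.agent;
}

const registry: CapabilityManifest[] = [
  { agent: "claude-opus", languages: ["typescript"], operations: ["refactor", "test"], contextWindowTokens: 200_000, costPerMTokensUsd: 15, benchmarkPassRate: 0.94 },
  { agent: "gpt-4o", languages: ["typescript"], operations: ["test", "explain"], contextWindowTokens: 128_000, costPerMTokensUsd: 5, benchmarkPassRate: 0.91 },
];

const picked = selectAgent({ language: "typescript", operation: "test" }, registry);
```

A production ranker would weigh cost and context fit as well, but even this filter-and-sort turns agent selection from a guess into a query.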

Implement OAuth-style scoped permissions

The permission model should follow the OAuth scopes pattern: read:src, write:tests, execute:lint, deny:*.env. Layer organisational policies (immutable) over user-configurable scopes (flexible per task). This addresses the enterprise adoption blocker where security teams cannot audit agent access patterns.
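The layering rule can be sketched as follows. The scope strings follow the examples above, while the matching semantics (a tiny glob where deny always overrides allow) are assumptions for illustration:

```typescript
// Hypothetical scope checker. Scopes are "<action>:<pattern>" strings;
// "deny:" entries win regardless of which layer granted access.
function patternMatches(pattern: string, path: string): boolean {
  // Tiny glob: "*" is a wildcard, and a bare directory name ("src")
  // matches anything beneath it.
  const glob = pattern.includes("*") ? pattern : pattern + "/*";
  const re = new RegExp(
    "^" + glob.split("*").map(s => s.replace(/[.+?^${}()|[\]\\]/g, "\\$&")).join(".*") + "$",
  );
  return re.test(path) || path === pattern;
}

function isAllowed(
  action: "read" | "write" | "execute",
  path: string,
  orgPolicy: string[],   // immutable, set by the security team
  userScopes: string[],  // flexible per task
): boolean {
  const all = [...orgPolicy, ...userScopes];
  if (all.some(s => s.startsWith("deny:") && patternMatches(s.slice(5), path))) return false;
  return all.some(s => s.startsWith(action + ":") && patternMatches(s.slice(action.length + 1), path));
}

const orgPolicy = ["deny:*.env"];
const userScopes = ["read:src", "write:tests"];

const a = isAllowed("write", "tests/auth.test.ts", orgPolicy, userScopes); // true
const b = isAllowed("write", "src/index.ts", orgPolicy, userScopes);       // false
const c = isAllowed("read", ".env", orgPolicy, userScopes);                // false
```

The two-layer evaluation is what makes the model auditable: security teams review one immutable policy list, not per-task grants.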

Target platform engineers as the initial adoption wedge

Platform engineers are the highest-leverage target. They build internal tooling, they feel the integration tax most acutely (building N connectors for N agents), and their adoption creates top-down pull within organisations. Build the SDK experience and documentation to serve this persona first.

Build the cross-agent metrics layer early

Define a metrics schema capturing tokens consumed, time to completion, test pass rate, lines changed, reviewer approval rate. Expose this through an analytics dashboard. This transforms ACP from a pure protocol into a decision-making platform, creating defensible value difficult for any single agent vendor to replicate.
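A sketch of what that schema and a first aggregation could look like; the field names are assumptions for illustration, not a defined ACP metrics spec:

```typescript
// Hypothetical per-task metrics record, one row per completed task.
interface TaskMetrics {
  agent: string;
  tokensConsumed: number;
  secondsToCompletion: number;
  testPassRate: number;   // 0..1
  linesChanged: number;
  reviewerApproved: boolean;
}

// Aggregate per-agent stats for the analytics dashboard.
function summarize(rows: TaskMetrics[]) {
  const byAgent = new Map<string, TaskMetrics[]>();
  for (const r of rows) byAgent.set(r.agent, [...(byAgent.get(r.agent) ?? []), r]);
  return [...byAgent.entries()].map(([agent, rs]) => ({
    agent,
    tasks: rs.length,
    avgSeconds: rs.reduce((s, r) => s + r.secondsToCompletion, 0) / rs.length,
    approvalRate: rs.filter(r => r.reviewerApproved).length / rs.length,
  }));
}

const summary = summarize([
  { agent: "claude-opus", tokensConsumed: 90_000, secondsToCompletion: 120, testPassRate: 1.0, linesChanged: 32, reviewerApproved: true },
  { agent: "claude-opus", tokensConsumed: 40_000, secondsToCompletion: 60, testPassRate: 0.9, linesChanged: 15, reviewerApproved: false },
]);
```

Because the schema is agent-neutral, the same aggregation compares any two agents, which is precisely what no single-vendor dashboard can offer.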

Invest in the conflict resolution UX

Multi-agent workflows inevitably produce conflicts when two agents modify overlapping files. The side-by-side conflict resolution panel with four resolution options (keep A, keep B, manual merge, re-run both with awareness) is novel and high-value. Getting it right differentiates ACP from basic task routing.
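Before any resolution panel can appear, the orchestrator has to detect the conflict. A minimal sketch, assuming change sets are reported as per-file line ranges (an illustrative simplification of real diff hunks):

```typescript
// Hypothetical change-set shape: each entry is one modified line range.
interface ChangeSet { agent: string; file: string; startLine: number; endLine: number }

// Two change sets conflict when they touch overlapping ranges in the
// same file; each hit feeds the side-by-side resolution panel.
function findConflicts(
  a: ChangeSet[],
  b: ChangeSet[],
): { file: string; agents: [string, string] }[] {
  const conflicts: { file: string; agents: [string, string] }[] = [];
  for (const ca of a) {
    for (const cb of b) {
      const overlaps =
        ca.file === cb.file && ca.startLine <= cb.endLine && cb.startLine <= ca.endLine;
      if (overlaps) conflicts.push({ file: ca.file, agents: [ca.agent, cb.agent] });
    }
  }
  return conflicts;
}

const conflicts = findConflicts(
  [{ agent: "claude-opus", file: "auth/session.ts", startLine: 10, endLine: 40 }],
  [{ agent: "gpt-4o", file: "auth/session.ts", startLine: 35, endLine: 60 }],
); // one conflict in auth/session.ts
```

Range overlap is deliberately conservative: non-overlapping edits to the same file can merge automatically, and only true collisions interrupt the developer.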

Maintain strict protocol neutrality

ACP is co-developed by JetBrains and Zed, two IDE makers with different architectures and business models. This vendor neutrality is the protocol's most important strategic asset. Governance should follow the model established by MCP's Agentic AI Foundation: multi-stakeholder, open specification, shared registry.

Closing Synthesis

The ACP spike reveals a market that is simultaneously consolidating and fragmenting. The top layer is consolidating: a handful of large players dominate market share, and developer adoption of AI coding tools is near-universal. But the integration layer is fragmenting: every new agent, every new editor, every new multi-agent capability creates another custom integration that must be built and maintained. ACP resolves this tension by standardising the horizontal connection between editors and agents, just as LSP standardised the connection between editors and language servers a decade ago.

The analyses converge on a clear product direction. The market analysis shows a $7.37B market growing at 26.6% CAGR with a structural gap at the editor-agent layer. The UX analysis identifies six friction points that block multi-agent adoption and proposes five interaction patterns to address them. The UI design system provides implementation-ready specifications for a dashboard that surfaces multi-agent orchestration through familiar, progressive-disclosure patterns. The pieces fit together: ACP is not a new tool. It is the connective layer that makes existing tools composable.

The 90-day plan is straightforward. Weeks 1-4: ship the task queue view and progress protocol with two reference agents to demonstrate cross-agent orchestration. Weeks 5-8: add the capability schema, agent selection panel, and permission builder to enable team-scale deployment. Weeks 9-12: launch the metrics dashboard and conflict resolution UI to establish ACP as the data-driven orchestration layer. At every stage, the priority is breadth of agent adoption over depth of features. The network effect is the moat: every agent that implements ACP makes every editor more valuable, and every editor that adopts ACP makes every agent more accessible. The protocol wins by being the shortest path between any agent and any editor.