Commerce Doesn't Have a Protocol (Yet): Inside UCP
Estimated reading time: 20-25 minutes | ~4,100 words
TL;DR
- I think UCP is the best-designed commerce protocol we have seen, and it still might fail unless Google forces adoption through AI Mode and Gemini.
- UCP is a layered protocol (Shopping Service, Capabilities, Extensions) that gives AI agents a standard way to discover merchants and complete purchases over plain HTTP.
- Discovery works through a `/.well-known/ucp` manifest, similar to OpenID Connect discovery, and capability negotiation computes the intersection of what both sides support.
- The checkout state machine has six states, not three: `incomplete`, `requires_escalation`, `ready_for_complete`, `complete_in_progress`, `completed`, and `canceled`. The `requires_escalation` state is the most interesting, because it acknowledges that not everything can be automated.
- Extensions compose on capabilities via JSON Schema `allOf`, and namespaces use reverse-domain naming (`dev.ucp.shopping.*`, `com.vendor.*`) so anyone who owns a domain can publish extensions without a central registry.
- Three previous commerce protocols (SET, IOTP, OBI) all died from complexity. UCP is simpler, but the variable that matters is distribution, not protocol quality.
Table of Contents
- Commerce Has No Protocol
- The Architecture
- Discovery and Negotiation
- A Checkout on the Wire
- The State Machine
- What’s Missing
- UCP vs ACP
- The Bootstrap Problem
Commerce Has No Protocol
Three commerce-specific protocols have launched and died since 1996. SET in 1996, backed by Visa, Mastercard, Microsoft, and IBM. IOTP in 1999, from the IETF. OBI in 1997, from a Fortune 500 consortium. All failed from complexity. The “protocol” that won for internet commerce was just HTML pages in web browsers.
UCP (Universal Commerce Protocol) is the fourth attempt. The problem it targets is real: HTTP gave the web a universal language for documents, SMTP did it for email, but if an AI agent wants to buy something today, there is no standard way to do it. Every merchant has a different API shape, different authentication, different checkout flow. An agent that can buy from Shopify cannot automatically buy from WooCommerce or a custom-built storefront. It is the N-times-N integration problem: every agent needs a bespoke integration with every merchant.
After reading the spec cover to cover, I think UCP is the best-designed of these attempts, and it still might fail unless Google forces adoption through AI Mode and Gemini. That tension between protocol quality and adoption risk is what this post is about.
UCP is co-developed by Google and Shopify, announced January 2026 at NRF, and backed by Visa, Mastercard, Walmart, Target, Stripe, Adyen, and 20+ other partners. The spec is Apache 2.0. As of February 2026, Etsy and Wayfair are the first merchants live in Google’s AI Mode.
Full disclosure: I work at Shopify, one of UCP’s co-developers. What follows is my take as a systems person, not a company position.
The Architecture
The marketing calls UCP “the TCP/IP of commerce.” That framing is partially useful and partially misleading, so let me start with what UCP actually is: schema plus capability modularity on top of HTTP.
UCP has three layers. Each layer can version and evolve independently:
Layer 1: Shopping Service. The core transaction primitives: checkout session, line items, totals, buyer info, messages, and status. This lives under the dev.ucp.shopping namespace and defines the basic vocabulary every UCP implementation shares. Monetary amounts are in minor currency units (cents). Buyer phone numbers use E.164. Sessions have a default 6-hour TTL via an expires_at timestamp. None of this is surprising, and that is the point.
Layer 2: Capabilities. Independently versioned functional domains that build on the Shopping Service:
- Checkout (`dev.ucp.shopping.checkout`): cart management, session creation, payment orchestration
- Identity Linking (`dev.ucp.shopping.identity_linking`): OAuth 2.0-based authorization for platform actions on behalf of users
- Order (`dev.ucp.shopping.order`): webhook-driven lifecycle notifications (shipping, delivery, returns)
Layer 3: Extensions. Domain-specific schemas that compose on top of capabilities using the extends field:
- Discounts (`dev.ucp.shopping.discount`, extends checkout): discount code application
- Fulfillment (`dev.ucp.shopping.fulfillment`, extends checkout): shipping methods, addresses, carrier info
- AP2 Mandates (`dev.ucp.shopping.ap2_mandate`, extends checkout): cryptographic proof of user authorization for high-value transactions
Extensions compose via JSON Schema allOf. The discount extension does not modify the checkout schema directly; it adds a discounts property alongside checkout’s existing fields. This means the checkout capability can stay stable while extensions evolve around it.
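Concretely, the composed schema the platform validates against looks roughly like this (simplified for illustration; the real `$ref` targets are the schema URLs advertised in the manifest):

```json
{
  "allOf": [
    { "$ref": "https://ucp.dev/schemas/checkout/schema.json" },
    {
      "type": "object",
      "properties": {
        "discounts": {
          "type": "object",
          "properties": {
            "codes": { "type": "array", "items": { "type": "string" } }
          }
        }
      }
    }
  ]
}
```

A document must satisfy every branch of the `allOf`, so the base checkout contract is enforced unchanged while the extension's `discounts` property is validated alongside it.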
Where the TCP/IP analogy holds: separation of concerns, composability, independent versioning. Just as adding HTTPS did not require changing TCP, adding the discount extension does not require changing checkout.
Where the analogy breaks: UCP is an application-layer protocol. It sits on top of HTTP, not below it. The layers in UCP are closer to HTTP/REST/GraphQL distinctions than to TCP/IP/Ethernet distinctions. The analogy overstates UCP’s fundamentality.
The real benefit is simpler than the marketing suggests: the core Shopping Service can stay stable while capabilities and extensions evolve independently. That is genuinely useful for a domain as messy as commerce.
I think this layered decomposition is the single design decision that separates UCP from its predecessors. SET and IOTP tried to standardize everything in one schema. UCP lets the core stabilize while the edges evolve. That is the difference between a protocol that can adapt and one that collapses under its own weight.
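To make the Layer 1 vocabulary concrete, here is a minimal Go sketch of the session shape. The field names follow the wire examples shown later in this post; this is illustrative, not the normative schema (which lives at ucp.dev):

```go
package main

import (
	"fmt"
	"time"
)

// Totals are in minor currency units (cents), per the spec.
type Totals struct {
	Subtotal int64 `json:"subtotal"`
	Total    int64 `json:"total"`
}

type Item struct {
	ID    string `json:"id"`
	Title string `json:"title"`
	Price int64  `json:"price"` // cents
}

type LineItem struct {
	ID       string `json:"id"`
	Item     Item   `json:"item"`
	Quantity int64  `json:"quantity"`
	Totals   Totals `json:"totals"`
}

// Message is how the merchant tells the agent what is wrong or missing.
type Message struct {
	Type     string `json:"type"`
	Code     string `json:"code"`
	Path     string `json:"path"` // JSONPath into the session
	Content  string `json:"content"`
	Severity string `json:"severity"`
}

type Buyer struct {
	FullName string `json:"full_name"`
	Email    string `json:"email"`
}

type CheckoutSession struct {
	ID        string     `json:"id"`
	Status    string     `json:"status"`
	Messages  []Message  `json:"messages,omitempty"`
	LineItems []LineItem `json:"line_items"`
	Buyer     *Buyer     `json:"buyer,omitempty"`
	Totals    Totals     `json:"totals"`
	ExpiresAt time.Time  `json:"expires_at"` // default 6-hour TTL
}

func main() {
	li := LineItem{
		ID:       "li_001",
		Item:     Item{ID: "prod_roses_001", Title: "Bouquet of Red Roses", Price: 3500},
		Quantity: 2,
	}
	li.Totals = Totals{Subtotal: li.Item.Price * li.Quantity, Total: li.Item.Price * li.Quantity}
	fmt.Println(li.Totals.Subtotal)
}
```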
Transport Bindings
UCP is transport-agnostic. The same checkout capability maps to four different transport bindings:
- REST (the primary binding): five endpoints (`POST /checkout-sessions`, `GET`, `PUT`, `POST .../complete`, `POST .../cancel`). HTTPS required, JSON bodies. This is what most implementations will use.
- MCP (Model Context Protocol): capabilities map 1:1 to MCP tools (`create_checkout`, `get_checkout`, `update_checkout`, `complete_checkout`, `cancel_checkout`). This is for LLM-native agents that already speak MCP.
- A2A (Agent-to-Agent): for peer agent communication using Google's A2A protocol. Agents exchange structured `DataPart` objects.
- Embedded Checkout: JSON-RPC 2.0 bidirectional messaging for UI-embedded experiences.
The multi-transport approach is a hedge against uncertainty about which agent communication pattern will win. I suspect most real-world traffic will use REST for the foreseeable future, with MCP gaining share as more agents are built natively on top of it.
The REST binding also supports request signing, but I am deliberately not going into the mechanics here. Different spec pages describe the signing mechanism differently (some reference a detached-JWS style, others point toward HTTP message signatures), and I would rather not risk getting a security detail wrong. If you are implementing, follow the authoritative spec for the binding you are using.
Namespaces
UCP uses reverse-domain naming for everything: dev.ucp.shopping.checkout is governed by ucp.dev, com.google.pay is governed by google.com, and a loyalty provider could publish com.loyaltyprovider.rewards under their own domain. Own the domain, own the namespace. No central registry needed.
The spec requires that capability spec URLs originate from matching namespace authorities and that platforms validate this binding. It is DNS as a governance mechanism, which is elegant and has a clear failure mode: if a domain changes hands, the new owner controls the namespace. The spec does not address domain governance disputes, but this is a well-understood tradeoff.
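That binding check is simple enough to sketch. The version below assumes two-label registrable domains (`ucp.dev`, `google.com`); a real implementation would consult the public suffix list rather than this heuristic:

```go
package main

import (
	"fmt"
	"net/url"
	"strings"
)

// authorityFor returns the DNS authority implied by a reverse-domain
// namespace: "dev.ucp.shopping.checkout" -> "ucp.dev". It assumes the
// registrable domain is exactly two labels, which is a simplification.
func authorityFor(namespace string) (string, error) {
	labels := strings.Split(namespace, ".")
	if len(labels) < 2 {
		return "", fmt.Errorf("namespace %q too short", namespace)
	}
	return labels[1] + "." + labels[0], nil
}

// specURLMatchesNamespace checks that a capability's spec URL is served
// by the namespace's authority or one of its subdomains.
func specURLMatchesNamespace(namespace, specURL string) (bool, error) {
	authority, err := authorityFor(namespace)
	if err != nil {
		return false, err
	}
	u, err := url.Parse(specURL)
	if err != nil {
		return false, err
	}
	host := u.Hostname()
	return host == authority || strings.HasSuffix(host, "."+authority), nil
}

func main() {
	ok, _ := specURLMatchesNamespace("dev.ucp.shopping.checkout", "https://ucp.dev/specification/checkout-rest/")
	fmt.Println(ok)
}
```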
Discovery and Negotiation
Every UCP-compliant merchant publishes a JSON manifest at /.well-known/ucp. If you have seen .well-known/openid-configuration, this will feel familiar, but it is more ambitious: the manifest declares not just endpoints but capabilities, extensions, and payment handlers. I think this is the right discovery model. It is decentralized (no registry, no approval process) and self-describing (the agent can figure out what the merchant supports without documentation).
Here is a trimmed example based on the spec:
{
"ucp": {
"version": "2026-01-11",
"services": {
"dev.ucp.shopping": {
"rest": {
"schema": "https://ucp.dev/schemas/shopping/openapi.yaml",
"endpoint": "https://merchant.example.com/api/v2"
}
}
},
"capabilities": [
{
"name": "dev.ucp.shopping.checkout",
"version": "2026-01-11",
"spec": "https://ucp.dev/specification/checkout-rest/",
"schema": "https://ucp.dev/schemas/checkout/schema.json"
},
{
"name": "dev.ucp.shopping.fulfillment",
"version": "2026-01-11",
"extends": "dev.ucp.shopping.checkout",
"spec": "https://ucp.dev/specification/fulfillment/",
"schema": "https://ucp.dev/schemas/fulfillment/schema.json"
},
{
"name": "dev.ucp.shopping.discount",
"version": "2026-01-11",
"extends": "dev.ucp.shopping.checkout",
"spec": "https://ucp.dev/specification/discount/",
"schema": "https://ucp.dev/schemas/discount/schema.json"
}
]
},
"payment": {
"handlers": [
{
"id": "handler_gpay_001",
"name": "com.google.pay",
"version": "2026-01-23",
"config": {
"environment": "PRODUCTION",
"merchantInfo": { "merchantName": "Flower Shop" }
}
}
]
}
}
Both the platform (agent) and the merchant declare their capabilities. The negotiation algorithm computes the intersection:
1. Include business capabilities where the platform has a matching `name`.
2. Remove extensions whose parent capability is not in the intersection.
3. Repeat step 2 until stable (this handles transitive extension chains).
The result is the set of mutually supported capabilities with satisfied dependency chains. It is a server-selects architecture: the business controls which capabilities are active, and includes them in every response so the agent always knows the current state.
The tradeoff is that capability adoption will be gated by the slowest merchants, not driven by the most ambitious agents. Server-selects is the conservative choice, and I suspect it will slow extension adoption in practice.
What makes this more than “just add fields to a JSON API” is the composition step. After negotiation, the platform fetches the base schema and every active extension schema, then composes them client-side via allOf chains. The platform validates its own requests and the merchant’s responses against the composed schema. That gives you open-world extensibility (anyone can publish an extension under their own namespace) with machine-checkable contracts (the composed schema is the source of truth). Most “extensible JSON APIs” skip the composition step entirely and rely on documentation. UCP makes composition a first-class protocol requirement. That is, I think, the protocol’s most underappreciated technical contribution.
This does shift real complexity to the platform side. Schema fetching, caching, composition, and validation are nontrivial engineering. Google and Shopify can absorb that cost because they build it once and amortize it across millions of merchants. A smaller agent startup would need to get this right from scratch, which is a meaningful barrier to entry.
A Go agent that discovers and negotiates with a merchant might look like this:
func discover(merchantURL string, agentCaps []string) (*Profile, []Capability, error) {
resp, err := http.Get(merchantURL + "/.well-known/ucp")
if err != nil {
return nil, nil, err
}
defer resp.Body.Close()
var profile Profile
if err := json.NewDecoder(resp.Body).Decode(&profile); err != nil {
return nil, nil, err
}
agentSet := make(map[string]bool, len(agentCaps))
for _, c := range agentCaps {
agentSet[c] = true
}
// Step 1: intersection by name.
var active []Capability
for _, bc := range profile.UCP.Capabilities {
if agentSet[bc.Name] {
active = append(active, bc)
}
}
// Steps 2-3: prune orphaned extensions to fixed point.
for changed := true; changed; {
changed = false
names := make(map[string]bool, len(active))
for _, c := range active {
names[c.Name] = true
}
var pruned []Capability
for _, c := range active {
if c.Extends == "" || names[c.Extends] {
pruned = append(pruned, c)
} else {
changed = true
}
}
active = pruned
}
return &profile, active, nil
}
If the agent supports checkout, fulfillment, and discounts but the merchant only supports checkout and fulfillment, the discount extension gets pruned from the intersection. The agent can still complete a checkout; it just cannot apply promo codes.
A Checkout on the Wire
This is the core of the protocol. I will walk through a complete checkout flow, request by request, using the REST binding. The base URL comes from the manifest’s rest.endpoint field. I built a toy merchant and agent in Go to see this on the wire; the code snippets below are from that implementation.
Step 1: Create the session
The agent creates a checkout session with line items:
type CheckoutRequest struct {
LineItems []LineItem `json:"line_items"`
Currency string `json:"currency"`
}
func createCheckout(endpoint string, items []LineItem) (*Checkout, error) {
	body := CheckoutRequest{LineItems: items, Currency: "USD"}
	data, err := json.Marshal(body)
	if err != nil {
		return nil, err
	}
	req, err := http.NewRequest("POST", endpoint+"/checkout-sessions", bytes.NewReader(data))
	if err != nil {
		return nil, err
	}
	req.Header.Set("Content-Type", "application/json")
	req.Header.Set("UCP-Agent", `profile="http://localhost:8080/profile"`)
	req.Header.Set("Idempotency-Key", fmt.Sprintf("%d", time.Now().UnixNano()))
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	var checkout Checkout
	return &checkout, json.NewDecoder(resp.Body).Decode(&checkout)
}
The response comes back with status: "incomplete" and a messages array telling the agent exactly what is missing:
{
"id": "chk_1234567890",
"status": "incomplete",
"messages": [
{
"type": "error",
"code": "missing_field",
"path": "$.buyer.email",
"content": "Buyer email is required",
"severity": "recoverable"
}
],
"line_items": [
{
"id": "li_001",
"item": { "id": "prod_roses_001", "title": "Bouquet of Red Roses", "price": 3500 },
"quantity": 2,
"totals": { "subtotal": 7000, "total": 7000 }
}
],
"totals": { "subtotal": 7000, "total": 7000 }
}
That path: "$.buyer.email" is a JSONPath reference pointing at the exact field the agent needs to fill. The severity: "recoverable" tells the agent it can fix this programmatically (as opposed to requires_buyer_input, which would mean the human needs to get involved). I like this: it is the first time a commerce protocol has made missing data truly machine-fixable, with structured error codes an agent can act on without parsing prose.
Step 2: Add buyer and shipping info
PUT replaces the entire session state. The agent sends everything: the original line items plus the new buyer and fulfillment data.
{
"id": "chk_1234567890",
"line_items": [
{
"id": "li_001",
"item": { "id": "prod_roses_001", "title": "Bouquet of Red Roses", "price": 3500 },
"quantity": 2
}
],
"buyer": {
"full_name": "Jane Smith",
"email": "jane@example.com",
"first_name": "Jane",
"last_name": "Smith"
},
"fulfillment": {
"methods": ["shipping"],
"destinations": [
{
"street_address": "123 Main St",
"locality": "San Francisco",
"region": "CA",
"postal_code": "94105",
"country": "US"
}
]
},
"currency": "USD"
}
The merchant returns the session with shipping options and tax calculations. If everything is satisfied, the status transitions to ready_for_complete. This is the right separation: the merchant remains the authority on when a checkout is ready for payment, not the agent.
Step 3: Apply a discount (optional)
If the discount extension was negotiated, the agent can include a discount code in the update:
{
"discounts": { "codes": ["10OFF"] }
}
In practice, the agent would include the full session state in the PUT body. I am showing only the discount fields here for brevity.
The response shows the applied discount, including where it was allocated:
{
"status": "ready_for_complete",
"discounts": {
"codes": ["10OFF"],
"applied": [
{
"code": "10OFF",
"title": "10% Off",
"amount": 700,
"allocations": [{ "path": "$.totals.subtotal", "amount": 700 }]
}
]
},
"totals": {
"subtotal": 7000,
"discount": 700,
"tax": 504,
"fulfillment": 500,
"total": 7304
}
}
All monetary amounts are in cents. The allocations array tells you exactly where the discount was applied. Allocations are boring but essential: without them, agents cannot explain price math to humans, and unexplained price changes are the fastest way to lose trust in an automated checkout.
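Since every amount is an integer number of cents, an agent can cheaply verify the merchant's arithmetic before showing a price to a human. A sketch, where the total formula (subtotal minus discount plus tax plus fulfillment) is inferred from the example response above rather than quoted from the spec:

```go
package main

import "fmt"

// Totals mirrors the totals object in the checkout response, in cents.
type Totals struct {
	Subtotal    int64
	Discount    int64
	Tax         int64
	Fulfillment int64
	Total       int64
}

// consistent reports whether the grand total matches its components.
// The formula is an assumption based on the example response; the
// composed schema is the real source of truth.
func consistent(t Totals) bool {
	return t.Subtotal-t.Discount+t.Tax+t.Fulfillment == t.Total
}

func main() {
	t := Totals{Subtotal: 7000, Discount: 700, Tax: 504, Fulfillment: 500, Total: 7304}
	fmt.Println(consistent(t))
}
```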
Step 4: Complete the checkout
Once the status is ready_for_complete, the agent submits payment data:
{
"payment_data": {
"id": "pi_001",
"handler_id": "handler_gpay_001",
"type": "card",
"brand": "visa",
"last_digits": "4242",
"billing_address": {
"street_address": "123 Main St",
"locality": "San Francisco",
"region": "CA",
"postal_code": "94105",
"country": "US"
},
"credential": {
"type": "PAYMENT_GATEWAY",
"token": "tok_visa_4242..."
}
}
}
The handler_id references a payment handler from the manifest. The credential is opaque to the protocol; it could be a Google Pay encrypted token, a Stripe token, or anything the handler specification defines. Credentials flow unidirectionally (platform to business only), which keeps the merchant’s PCI scope minimal.
A successful completion returns the order:
{
"status": "completed",
"order": {
"id": "ord_9876543210",
"permalink_url": "https://merchant.example.com/orders/ord_9876543210"
}
}
The merchant side
On the merchant side, the state machine logic that drives status transitions might look like this:
func (s *Server) computeStatus(checkout *Checkout) string {
if checkout.Buyer == nil || checkout.Buyer.Email == "" {
return "incomplete"
}
if needsFulfillment(checkout) && checkout.Fulfillment == nil {
return "incomplete"
}
if requiresEscalation(checkout) {
return "requires_escalation"
}
return "ready_for_complete"
}
func requiresEscalation(checkout *Checkout) bool {
// 3DS challenge, age verification, custom selling terms,
// anything the agent cannot handle programmatically.
return checkout.Flags.Requires3DS || checkout.Flags.RequiresTermsAcceptance
}
The merchant controls all status transitions. The agent reads the status and reacts. This is a deliberate design choice: the merchant is the authority on what is needed to complete a transaction.
The State Machine
The computeStatus logic above drives a broader state machine. The checkout has six states. The spec defines them clearly, but marketing materials often simplify to three. Here is the full picture:
incomplete --> requires_escalation --> ready_for_complete --> complete_in_progress --> completed
    |                  |                       |                       |
    +------------------+-----------------------+-----------------------+--> canceled
incomplete: the session is missing required information. The messages array tells the agent what to fix. The agent resolves issues by calling PUT /checkout-sessions/{id}.
requires_escalation: this is the most honest part of the spec. It means the agent has hit something it cannot handle programmatically: a 3DS challenge, address validation, terms acceptance, age verification, custom selling terms. The spec requires that any requires_escalation response include a continue_url and at least one message with an escalation severity. The agent hands the user off to the merchant’s UI via that URL.
I think requires_escalation is where UCP acknowledges the reality boundary between automation and the physical world. Commerce is not a pure API problem. Sometimes a human needs to look at a screen and click “I agree.” The spec does not pretend otherwise, and I respect that.
If most real-world checkouts hit escalation (and in Europe, where PSD2/SCA mandates multi-factor authentication for most payments, they will), then the “happy path” of fully autonomous checkout is actually the minority case. That tension between the automation promise and the escalation reality is, I think, the most important unresolved question in the protocol.
When escalation happens, the merchant provides a continue_url. For embedded experiences, this triggers UCP’s Embedded Checkout Protocol (ECP), a bidirectional JSON-RPC 2.0 channel between the agent’s host and the merchant’s checkout UI running in an iframe. ECP draws from Shopify’s embedded checkout work and was distilled into an open protocol. The host loads the continue_url with ECP parameters (ec_version, optional ec_delegate for payment and address delegation), and the embedded checkout sends lifecycle messages: ec.ready for the handshake, ec.start when visible, state change notifications as the buyer interacts, and ec.complete when the order is placed.
The delegation model is particularly interesting: the merchant’s embedded UI can request that the host show a native payment picker or address selector, keeping the user experience smooth while the merchant retains control of the checkout flow.
ready_for_complete: all required data is present. The agent can call POST /checkout-sessions/{id}/complete with payment data.
complete_in_progress: the agent calls POST /checkout-sessions/{id}/complete with payment data, and the merchant transitions to this state while processing the payment asynchronously. Once the merchant confirms the order, it moves to completed. This intermediate state exists because payment authorization (3DS challenges, bank round-trips, fraud checks) can take nontrivial time.
completed: the order was placed. The response includes an order object with an id and permalink_url. This state is immutable.
canceled: the session was abandoned or expired. Sessions have a default 6-hour TTL via expires_at, which prevents stuck sessions from accumulating.
The transitions are not strictly linear. incomplete can jump directly to ready_for_complete if no buyer interaction is needed. requires_escalation can return to incomplete after the buyer provides input via the escalated UI. Cancellation is possible from any non-terminal state.
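Those transitions can be encoded as a small validity check. This is a sketch based on the transitions described in this section, not the spec's normative transition table; the `ready_for_complete` back to `incomplete` edge (an update that invalidates the session) is my assumption:

```go
package main

import "fmt"

// transitions maps each state to the states it may move to.
var transitions = map[string][]string{
	"incomplete":           {"requires_escalation", "ready_for_complete", "canceled"},
	"requires_escalation":  {"incomplete", "ready_for_complete", "canceled"},
	"ready_for_complete":   {"incomplete", "complete_in_progress", "canceled"}, // "incomplete" edge assumed
	"complete_in_progress": {"completed", "canceled"},
	"completed":            {}, // terminal
	"canceled":             {}, // terminal
}

func canTransition(from, to string) bool {
	for _, next := range transitions[from] {
		if next == to {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(canTransition("incomplete", "ready_for_complete"))
	fmt.Println(canTransition("completed", "incomplete"))
}
```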
The error processing algorithm is severity-partitioned. When the agent gets a response, it filters the messages array and partitions by severity: recoverable errors get fixed programmatically, while requires_buyer_input and requires_buyer_review errors trigger escalation via continue_url. Error codes are specific: missing, invalid, out_of_stock, payment_declined, requires_sign_in, requires_3ds.
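A minimal sketch of that partitioning step in Go (message fields follow the wire examples earlier in this post):

```go
package main

import "fmt"

// Message mirrors an entry in the checkout response's messages array.
type Message struct {
	Code     string
	Path     string
	Severity string
}

// partition splits messages into what the agent can fix itself and
// what requires handing the buyer off via continue_url.
func partition(msgs []Message) (recoverable, escalate []Message) {
	for _, m := range msgs {
		switch m.Severity {
		case "recoverable":
			recoverable = append(recoverable, m)
		case "requires_buyer_input", "requires_buyer_review":
			escalate = append(escalate, m)
		}
	}
	return
}

func main() {
	msgs := []Message{
		{Code: "missing", Path: "$.buyer.email", Severity: "recoverable"},
		{Code: "requires_3ds", Severity: "requires_buyer_input"},
	}
	r, e := partition(msgs)
	fmt.Println(len(r), len(e))
}
```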
What’s Missing
The spec is strong on interoperability but deliberately scoped to checkout for its initial release. I think the scoping is correct for a v1, but the gaps are worth naming because they will shape whether the protocol gains traction beyond early adopters. Several things are not yet covered:
Fraud prevention. The spec acknowledges this: “future extensions MAY standardize fraud signal schemas.” For now, fraud handling is entirely the merchant’s responsibility. Given how different fraud approaches are across merchants, this is probably the right call for v1.
Rate limiting. The spec mentions HTTP 429 status codes but defines no standard quotas, rate limit headers, or backoff strategies. If agent traffic grows as projected, merchants will need something more structured than “return 429 and hope the agent backs off.”
Catalog and search. UCP launched checkout-first. Product discovery is on the roadmap but not yet specified. For now, agents discover products through existing mechanisms (Google Shopping Graph, merchant APIs, web crawling) and use UCP only for the transaction.
Returns and refunds. Not in the initial capability set. The Order capability covers post-purchase notifications (shipping, delivery), but the return flow is not yet standardized.
Concurrency and optimistic locking. PUT replaces the entire session state, but the spec does not define an ETag or version token for optimistic concurrency control. If a buyer changes their shipping method in an embedded checkout while the agent simultaneously updates line items, last writer wins. For a v1 this is acceptable, but as multi-actor checkout flows become common, I expect this to be the first thing that needs a spec amendment.
Long-running error recovery. If complete_in_progress hangs, the spec does not define timeout behavior beyond the session’s expires_at TTL. The idempotency key system helps (retrying with the same key returns the cached result), but there is no explicit guidance on what agents should do when payment processing is taking unusually long.
I think the missing rate limiting specification will cause the most real-world pain. If agentic traffic grows, merchants will face bot-like request patterns from legitimate agents. Without rate limit standards in the protocol, every merchant will invent their own, and agents will need per-merchant rate limit handling. That recreates the N-times-N problem UCP is supposed to solve.
UCP vs ACP
ACP (Agentic Commerce Protocol) launched in September 2025, co-developed by OpenAI and Stripe. It powers Instant Checkout in ChatGPT.
The two protocols make different architectural bets:
| | UCP | ACP |
|---|---|---|
| Creators | Google + Shopify | OpenAI + Stripe |
| Discovery | Decentralized (/.well-known/ucp) | Platform-mediated (merchants onboard per platform) |
| Scope | Checkout + post-purchase (expanding) | Checkout-focused |
| Transport | REST, MCP, A2A, Embedded Checkout | REST, MCP |
| Payment | Decoupled handlers, processor-agnostic | Delegated payment tokens (Stripe SPT) |
| License | Apache 2.0 | Apache 2.0 |
The meaningful difference is architectural. UCP optimizes for ecosystem interoperability: any merchant can publish a manifest on their own domain, any agent can discover it, and payment flows through whatever processor the merchant already uses. ACP optimizes for a controlled experience inside a single platform: merchants apply through OpenAI, transactions use Stripe’s payment rails, and the experience is tightly integrated with ChatGPT.
Both are corporate bets disguised as open standards. Both are Apache 2.0. Both have major backing: Stripe, Shopify, Etsy, and Walmart appear in both ecosystems. Competitive landscape analyses from Ad Age and Checkout.com both conclude that merchants will need to support both protocols.
The question I keep asking is whether merchants actually have the engineering budget to do that. If neither protocol drives obvious volume in the next 18 months, we may end up with a third outcome: both fade and the “standard” remains bespoke REST APIs, the same way it has been for 25 years. The coexistence outcome resembles RSS and Atom more than VHS and Betamax, but “both survive at low adoption” is not a victory for either.
The Bootstrap Problem
I have spent time with the spec, the GitHub repo, the sample implementations, and the Shopify engineering blog post. Here is my honest assessment.
The protocol design is genuinely good. The layered architecture is sound. Capability negotiation with fixed-point extension pruning is elegant. Reverse-domain namespace ownership is the right governance model. The separation of payment instruments (what the consumer uses to pay) from payment handlers (the specification for how a processor handles it) is clean; it means a merchant can accept Google Pay and Shop Pay through different processors without the protocol caring. Date-based versioning (YYYY-MM-DD) with per-capability version granularity is practical. These are real engineering decisions that reflect experience building commerce systems.
The “TCP/IP of commerce” marketing oversells it. The real value is independent versioning plus schema composition. That is less catchy but more accurate. UCP is a good application-layer protocol, not a new fundamental layer of the internet.
requires_escalation is the most honest part of the spec. It acknowledges that autonomous AI purchasing has hard limits. 3DS challenges, terms acceptance, age verification, custom selling terms: these all require a human. Rather than pretending the agent can handle everything, the spec builds a structured handoff into the state machine. I think this design decision will age well.
The cold start problem is real. UCP is simpler than SET or IOTP, but more complex than “just use REST APIs with better schemas.” The question is whether the complexity buys enough interoperability to justify the adoption cost. For a small merchant already running on Shopify, UCP support will come automatically through Agentic Storefronts. For a merchant building a custom implementation, it is a real investment.
Google’s distribution is the bootstrap mechanism. AI Mode in Google Search and Gemini provide immediate consumer surfaces where UCP transactions can happen. Etsy and Wayfair are already live. If Google routes meaningful purchase volume through UCP for early adopters, that creates the economic incentive for others to follow. Whether that volume materializes at scale is the open question.
The spec is young. The GitHub repo is early and fast-moving, with two releases so far (2026-01-11 and 2026-01-23). The governance is primarily Google-led, with Shopify as co-developer. It is Apache 2.0 and genuinely open-source, but “open” in the way Android is open: the license is permissive, but the primary contributor controls the direction. Neither UCP nor ACP has been donated to the Agentic AI Foundation under the Linux Foundation, though the transport layers they build on (MCP, A2A) are now under neutral governance there.
I think UCP solves a real problem and solves it well at the protocol level. The harder questions are not technical. They are about incentives, adoption curves, and whether merchants want to participate in a protocol designed by the platforms that intermediate their customer relationships.
The pattern I keep coming back to is OAuth. OAuth succeeded because Google and Facebook made it the required protocol for their APIs, and the value of accessing billions of users was enough to get developers to adopt it despite the complexity. OAuth had existential leverage: access to user bases that developers could not reach any other way. UCP offers incremental convenience: a slightly easier way to do something merchants can already do with their existing checkout systems. That is a fundamentally weaker bootstrap. Google can push UCP through AI Mode and Gemini, but “buy things through Google’s AI” is not as compelling as “log in with Google.” The use case is a convenience improvement, not a fundamentally new capability.
If I check the commit log in six months and see meaningful extensions from contributors outside Google and Shopify, I will believe this is becoming a real standard. If it is still just checkout with the same handful of contributors, the historical pattern will hold. The protocol quality is not the variable. The distribution is.