Designing webhook payloads

When you build a webhook system, one of the first decisions you face is what to put in the payload. This choice affects everything from bandwidth costs to how easy your webhooks are to consume. Get it wrong and you will frustrate developers, create unnecessary API load, or paint yourself into a corner when requirements change.

This article covers the three core decisions in webhook payload design: how much data to include, how to handle versioning as your API evolves, and how to structure your schema for clarity and reliability.

Fat payloads vs thin payloads

The fundamental tradeoff is between sending all the data a consumer might need and sending just enough to identify what changed.

A fat payload includes the complete resource. When a user updates their profile, you send the entire user object with all its fields. Consumers have everything they need to react to the event without making additional API calls. This approach reduces latency, simplifies consumer logic, and works well when webhooks need to function even if your API is temporarily unavailable.

A thin payload contains only identifiers and metadata. For that same profile update, you might send just the user ID, the event type, and a timestamp. Consumers must call your API to fetch the current state. This approach uses less bandwidth, avoids sending stale data, and gives consumers the freshest information when they process the event.
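For the profile-update example, a thin payload might look like the following. The field names here are illustrative, chosen to match the envelope example later in this article, not a prescribed format:

```json
{
  "id": "evt_1234567890",
  "type": "user.updated",
  "created_at": "2024-01-15T10:30:00Z",
  "data": {
    "user_id": "usr_abc123"
  }
}
```

Everything except the user ID is event metadata; the consumer calls `GET /users/usr_abc123` (or your equivalent endpoint) to fetch the current state.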

Most webhook providers land somewhere in between. Stripe sends fat payloads with complete objects, which makes sense because payment processing often happens in contexts where an extra API call adds unacceptable latency. GitHub sends moderately fat payloads with the relevant objects embedded. Slack tends toward thinner payloads that reference resources by ID.

The right choice depends on your use case. Fat payloads work well when events are infrequent, payloads are small, consumers need to process events offline, or latency is critical. Thin payloads make sense when resources are large, data changes rapidly between event and processing, or you want to guarantee consumers always see the latest state.

One hybrid approach is to send a thin payload with a snapshot URL. The payload contains identifiers plus a signed URL that returns the resource state at the moment the event occurred. Consumers can choose whether to fetch the snapshot or call your regular API for the current state.
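A hybrid payload of this kind might look like the sketch below. The `snapshot_url` field and the signature query parameter are hypothetical names for illustration:

```json
{
  "id": "evt_1234567890",
  "type": "user.updated",
  "created_at": "2024-01-15T10:30:00Z",
  "data": {
    "user_id": "usr_abc123",
    "snapshot_url": "https://api.example.com/snapshots/evt_1234567890?sig=..."
  }
}
```

Because the snapshot URL is signed and tied to a specific event, it can be served from cheap immutable storage rather than your live API.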

Versioning strategies

Your webhook payloads will change over time. Fields get added, renamed, or removed. The structure evolves as your product grows. Without a versioning strategy, these changes break consumer integrations.

The simplest approach is to version the entire webhook API. Consumers subscribe to a specific version, and you maintain multiple payload formats simultaneously. When you release v2, existing v1 subscribers continue receiving v1 payloads until they migrate. This mirrors how REST APIs handle versioning and gives consumers control over when they adopt changes.

You can embed the version in the payload itself, in a header, or in the subscription configuration. Stripe uses API versions that consumers set at the account level. When you create a Stripe account, it locks to the current API version, and all webhooks use that version's payload format until you explicitly upgrade.
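If you embed the version in the payload itself, a date-based version string is one common convention (the field name and format here are illustrative):

```json
{
  "id": "evt_1234567890",
  "type": "user.updated",
  "api_version": "2024-01-15",
  "created_at": "2024-01-15T10:30:00Z",
  "data": {
    "user_id": "usr_abc123"
  }
}
```

Putting the version in the payload means consumers can dispatch on it even when the delivery headers are lost, for example when events are replayed from a queue or log.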

A more granular approach is additive-only changes. You commit to never removing or renaming fields, only adding new ones. Consumers ignore fields they do not recognize, and new fields appear alongside existing ones. This works well for simple schemas but becomes unwieldy as your API accumulates legacy fields.
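Under an additive-only policy, a rename becomes an addition. Suppose you want a shorter display name alongside the full name; instead of renaming `name`, you add a new field (`display_name` here is hypothetical):

```json
{
  "id": "evt_1234567890",
  "type": "user.updated",
  "created_at": "2024-01-15T10:30:00Z",
  "data": {
    "user": {
      "id": "usr_abc123",
      "email": "user@example.com",
      "name": "Jane Doe",
      "display_name": "Jane"
    }
  }
}
```

Existing consumers keep reading `name` and ignore `display_name`; new consumers can prefer the new field. The cost is that both fields must now be maintained indefinitely.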

Some providers use schema evolution with explicit compatibility rules. You define which changes are backward compatible, such as adding optional fields, and which require a new version. Tools like JSON Schema or Avro can enforce these rules automatically.
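As a sketch, a JSON Schema for the envelope described in the next section might pin down the metadata fields while leaving room for additive changes (setting `additionalProperties` to `true` means consumers tolerate fields they do not recognize):

```json
{
  "$schema": "https://json-schema.org/draft/2020-12/schema",
  "type": "object",
  "required": ["id", "type", "created_at", "data"],
  "properties": {
    "id": { "type": "string" },
    "type": { "type": "string" },
    "created_at": { "type": "string", "format": "date-time" },
    "data": { "type": "object" }
  },
  "additionalProperties": true
}
```

Validating every outgoing payload against a schema like this in CI catches accidental breaking changes before they reach consumers.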

Whatever strategy you choose, communicate it clearly. Document your compatibility guarantees, provide migration guides when versions change, and give consumers adequate time to update their integrations before deprecating old versions.

Schema design for clarity

A well-designed payload schema makes webhooks easier to consume and debug. Start with consistent naming conventions. If your API uses camelCase, your webhooks should too. If you abbreviate "identifier" as "id" in one place, do not spell it out as "identifier" elsewhere.

Include metadata that helps consumers process events correctly. A timestamp indicates when the event occurred, not when it was delivered. An event ID enables idempotent processing by letting consumers detect duplicates. An event type tells consumers how to interpret the payload without parsing the data itself.

Structure your payloads predictably. Many providers use an envelope pattern where metadata lives at the top level and the actual data sits in a nested object. This separates webhook infrastructure concerns from business data and makes it easier to add metadata fields without affecting the data schema.

{
  "id": "evt_1234567890",
  "type": "user.updated",
  "created_at": "2024-01-15T10:30:00Z",
  "data": {
    "user": {
      "id": "usr_abc123",
      "email": "user@example.com",
      "name": "Jane Doe"
    }
  }
}

Consider including a "previous" object for update events. Knowing what changed, not just the current state, helps consumers decide how to react. If only the user's name changed, a consumer might skip an expensive sync operation that an email change would require.
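One convention is a "previous" object that contains only the fields that changed, alongside the full current state. Extending the envelope example above:

```json
{
  "id": "evt_1234567890",
  "type": "user.updated",
  "created_at": "2024-01-15T10:30:00Z",
  "data": {
    "user": {
      "id": "usr_abc123",
      "email": "user@example.com",
      "name": "Jane Smith"
    },
    "previous": {
      "name": "Jane Doe"
    }
  }
}
```

A consumer can inspect the keys of `previous` to decide whether the change is one it cares about, without diffing the whole object itself.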

Finally, document your schema thoroughly. Provide JSON Schema definitions, example payloads for each event type, and clear descriptions of what each field contains. Good documentation reduces support burden and helps consumers build reliable integrations.