Posting to X from Claude: What the 2026 API Changes Mean for MCP
X's February 2026 API shakeup — pay-per-use replacing self-serve tiers, reply restrictions, PKCE-mandatory OAuth — changes what a good X MCP server has to do. Here's the honest reference.
X's API changed materially in February 2026. If you are evaluating an X MCP server, or building one, the rules you may have learned a year ago do not all hold. This post is the honest 2026 reference on what the X API does now, what Model Context Protocol standardizes on top of it, and what a credible X MCP server has to handle.
What Changed on X's API in 2026
Three shifts matter:
1. Pay-per-use replaced most self-serve sign-ups. In February 2026, coverage from We Are Founders and others reported X directing new developers to a pay-per-use credit model by default, with Basic and Pro tiers effectively closed to new customers. Existing subscribers kept their tiers. Pay-per-use has a 2M post-read ceiling before Enterprise kicks in.
2. Reply restrictions. As of February 2026, API-based replies are only permitted when the original author has mentioned your account or quoted one of your posts. Developers hitting the endpoint without that condition get a 403 with the message "Reply to this conversation is not allowed because you have not been mentioned or otherwise engaged by the author of the post you are replying to." Original posting and quote-posting are unaffected. This kills most "reply bot" automation.
3. OAuth 2.0 with PKCE is the user-context auth. For posting on behalf of a user (as opposed to app-only endpoints), X requires OAuth 2.0 with PKCE — you generate a code verifier, derive a SHA-256 code challenge, and carry state through the redirect dance. This has been required for a while, but it catches first-time integrators.
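The PKCE handshake in item 3 is mechanical once you see it. A minimal sketch of the verifier/challenge derivation (the S256 method from RFC 7636); the function name is illustrative:

```python
import base64
import hashlib
import secrets

def pkce_pair() -> tuple[str, str]:
    # code_verifier: high-entropy URL-safe string, 43-128 chars per RFC 7636
    verifier = base64.urlsafe_b64encode(secrets.token_bytes(32)).rstrip(b"=").decode()
    # code_challenge: base64url(SHA-256(verifier)) with padding stripped ("S256")
    digest = hashlib.sha256(verifier.encode("ascii")).digest()
    challenge = base64.urlsafe_b64encode(digest).rstrip(b"=").decode()
    return verifier, challenge

# The challenge goes in the authorization URL; the verifier is sent later
# with the token request, proving both came from the same client.
verifier, challenge = pkce_pair()
```

The state parameter carried through the redirect is a separate concern: it defends against CSRF, not against authorization-code interception.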
Rate limits remain per-endpoint, per-15-minute-window, with monthly post quotas on top. The exact numbers depend on your tier or pay-per-use allocation.
What MCP Actually Standardizes
Model Context Protocol is not an X SDK. It standardizes three things for the client–server relationship:
- Tool discovery (tools/list), so Claude or ChatGPT can enumerate what the server offers without custom code.
- Authorization. MCP's authorization spec requires OAuth 2.1 with PKCE between the AI client and the MCP server. That is a separate auth layer from X's OAuth — the X token lives inside the server and is used on your behalf.
- Response shape. Servers can return structured content so clients can render preview cards, not just text.
Everything X-specific — PKCE handshake with X, token refresh, the reply-restriction-aware posting logic, rate-limit handling — lives in the server. MCP just gives the server a standardized front door the model can call.
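For orientation, here is roughly what a tools/list result looks like on the wire (MCP rides on JSON-RPC 2.0). The create_post tool and its fields below are illustrative, not any particular server's actual schema:

```python
import json

# Illustrative tools/list response: each MCP tool entry carries a name,
# a description, and a JSON Schema for its arguments (inputSchema).
tools_list_result = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [
            {
                "name": "create_post",
                "description": "Draft a post for a given platform",
                "inputSchema": {
                    "type": "object",
                    "properties": {
                        "platform": {"type": "string"},
                        "text": {"type": "string"},
                    },
                    "required": ["platform", "text"],
                },
            }
        ]
    },
}

wire = json.dumps(tools_list_result)  # what actually crosses the transport
```

The client never sees X credentials in any of this; it only sees tool names and schemas.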
The X-Specific Things a Good MCP Server Must Handle
This is the short list I would check before depending on any X MCP server:
- OAuth 2.0 with PKCE, end to end. The server should do the code-verifier/code-challenge handshake with X correctly. If it asks you to paste in a developer API key instead of redirecting you through X's OAuth flow, it is using app-only credentials and cannot post as you.
- Token refresh. X access tokens expire; refresh tokens rotate. A server that silently drops users when tokens rotate is a bad server.
- Rate-limit awareness. The server should surface rate-limit status back to the model, not just silently fail. Hitting the 15-minute window during a scheduled burst should be visible, not mysterious.
- Reply-restriction clarity. A 2026-aware server should not attempt API replies outside the "author mentioned you or quoted you" condition, or it should at least warn you. Quiet failures on this are the common failure mode.
- Character limit enforcement. X's base character limit is 280 on free and basic tiers, higher with X Premium. Good servers surface the limit rather than silently truncating.
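Two of those checks reduce to inspecting the HTTP response. A sketch of how a server might classify outcomes — the x-rate-limit-* header names are X's documented ones, but the detail field and the category labels are assumptions of this sketch:

```python
def rate_limit_status(headers: dict) -> dict:
    """Pull X's per-endpoint 15-minute-window headers into something the model can see."""
    return {
        "limit": int(headers.get("x-rate-limit-limit", 0)),
        "remaining": int(headers.get("x-rate-limit-remaining", 0)),
        "reset_epoch": int(headers.get("x-rate-limit-reset", 0)),
    }

def classify_post_response(status: int, body: dict) -> str:
    """Map an X posting response to a category the MCP server can surface."""
    if status in (200, 201):
        return "ok"
    if status == 429:
        return "rate_limited"  # surface reset_epoch; do not retry blindly
    if status == 403 and "not been mentioned" in body.get("detail", ""):
        return "reply_restricted"  # the 2026 reply rule, not a transient error
    return "error"
```

The point is that both conditions become visible to the model instead of dying as generic failures inside the server.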
What the Flow Looks Like from the Chat
With a server configured in Claude Code:
```json
{
  "mcpServers": {
    "feedsquad": {
      "type": "http",
      "url": "https://feedsquad.com/api/mcp"
    }
  }
}
```
First tool call redirects you to authenticate with the MCP server. connect_platform for X kicks off the second OAuth hop — the one with X — and the server stores your per-user tokens. After that, create_post builds a draft, runs content checks, and returns a structured response. publish_post (or schedule_post) completes the work.
When you describe what you want, a well-designed server's create_post tool accepts the platform as an argument, so you can say "post this to X" or "create cross-platform versions for LinkedIn, X, and Threads" and get adapted drafts. That adaptation is the server's job, not MCP's.
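The content checks mentioned above include the character limit. A minimal sketch, assuming the 280-character free/basic limit and counting plain Python characters (X actually counts weighted characters, with URLs normalized to a fixed length, so a real server needs that logic):

```python
X_CHAR_LIMIT = 280  # free/basic tier; higher with X Premium

def check_x_draft(text: str) -> dict:
    """Flag an over-limit draft instead of silently truncating it."""
    over = len(text) - X_CHAR_LIMIT
    return {"ok": over <= 0, "length": len(text), "over_by": max(over, 0)}
```

A good server returns this kind of structured verdict to the model, which can then shorten the draft itself.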
What You Cannot Do (Regardless of Server)
Because of the February 2026 API changes:
- Automated replies to arbitrary posts. Not a server limitation — an X policy. Reply-growth automation is effectively dead on self-serve tiers.
- High-volume read operations cheaply. Pay-per-use is metered and capped at 2M post reads per month before Enterprise is required.
- Streaming endpoints on free/basic. Persistent streams are Pro/Enterprise.
A server that claims otherwise is either lying or running on Pro/Enterprise access and passing those costs to you.
Comparing X MCP Servers
A few categories you will see:
- Open-source, self-hosted single-platform servers. Useful if you want to own the stack and are fine managing your own X developer account and tokens. More work.
- Managed single-platform servers. A handful of X-only MCP services exist; they typically wrap their own X app credentials.
- Managed multi-platform servers. LinkedIn + X + Threads (and sometimes more) under one OAuth flow. Less control; less overhead.
The right choice depends on whether you want to own the X side of the stack. For most marketers and founders, managed is the pragmatic answer; for engineers building a product, self-hosted gives more room.
What This Changes for X Content Strategy
Practically: the 2026 API restrictions push programmatic use toward original posting and quote-posting, not reply outreach. If your strategy depended on reply-at-scale automation, you need a new strategy. If your strategy was "post consistently, quote interesting posts in your niche, let engagement come from that" — the same strategy you should have had — nothing changes.
And with Threads now passing X in daily mobile active users, it is increasingly reasonable to treat X as a co-equal short-form channel with Threads rather than the dominant one. MCP servers that cover both are pragmatic for that reason.
If you want a managed MCP server that handles PKCE, token refresh, rate limits, and the 2026 reply-restriction behavior, FeedSquad's MCP server covers X alongside LinkedIn and Threads. Free tier available.
Sources:
- Roboin — X limits API-based automated replies (Feb 2026)
- We Are Founders — X API Pricing in 2026: Every Tier Explained (pay-per-use shift)
- modelcontextprotocol.io — Authorization (OAuth 2.1)
- TechCrunch — Threads edges out X in daily mobile users (Jan 2026)
Ready to create content that sounds like you?
Get started with FeedSquad — 5 free posts, no credit card required.