How to Set Up an Employee Advocacy Program: A Step-by-Step First 90 Days
A concrete setup plan for an employee advocacy program — pilot group selection, tool evaluation, rollout phases. Names specific enterprise tools and what actually matters when choosing between them.
I've helped a couple of friends' companies scope employee advocacy programs from scratch. The pattern is consistent: the founders assume the hard part is picking the tool. It almost never is. The hard part is designing the program so it doesn't collapse in month three, and then choosing a tool that doesn't actively sabotage that design.
Here's the setup plan I'd use today, in the order the decisions actually need to happen.
Week 0: pick one outcome
Before anything else, pick the outcome you'll measure at day ninety. Not all of them — one. Most programs fail because they try to optimize for pipeline, employer brand, thought leadership, and reach simultaneously, and end up producing middling results on all four.
Pick the primary:
- Pipeline. You'll instrument UTM tracking, CRM source tagging, and inbound DM capture. The program will lean on sales and customer success as advocates.
- Employer brand. You'll track applications per role, how often candidates mention an employee's post in interviews, and referral rate. The program will lean on engineering, product, and hiring managers.
- Thought leadership / category share of voice. You'll track branded search, speaking invitations, podcast appearances. The program will lean on leadership.
- Direct reach. You'll compare advocacy-post impressions and engagement to company-page baseline. This is the easiest to measure and the weakest signal.
This decision drives tier selection and content focus. Don't skip it.
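If pipeline is your outcome, the UTM instrumentation can be as simple as generating one tagged link per advocate so your CRM source tagging can attribute inbound traffic to the individual post. A minimal sketch in Python; the `utm_campaign` value and the convention of putting the advocate in `utm_content` are my assumptions, not a standard:

```python
from urllib.parse import urlencode, urlparse, urlunparse, parse_qsl

def advocacy_link(base_url: str, advocate: str, campaign: str = "employee-advocacy") -> str:
    """Append UTM parameters so CRM source tagging can attribute
    inbound traffic to a specific advocate's post."""
    parts = urlparse(base_url)
    query = dict(parse_qsl(parts.query))  # preserve any existing query params
    query.update({
        "utm_source": "linkedin",
        "utm_medium": "employee-advocacy",
        "utm_campaign": campaign,
        "utm_content": advocate,  # identifies the individual advocate
    })
    return urlunparse(parts._replace(query=urlencode(query)))

# advocacy_link("https://example.com/launch", "jane-doe")
# → https://example.com/launch?utm_source=linkedin&utm_medium=employee-advocacy&utm_campaign=employee-advocacy&utm_content=jane-doe
```

Generate these links once per advocate per campaign and drop them into the weekly prompt email; if advocates paste raw URLs instead, the pipeline attribution quietly disappears.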
Week 1: write the one-page policy
Not the ten-page policy. One page. It covers:
- What you can always post about.
- What needs a second look.
- What you can never post.
- Who to ask and what the response time is.
I wrote a dedicated post on the compliance side covering FTC Endorsement Guides, FINRA Rule 2210 (if you're in financial services), GDPR (if you have EU employees), and Reg FD (if you're public). Get that right before you sign anyone up, especially in regulated industries.
Week 2: pick the pilot group
Five to ten people. No more. Criteria:
- At least one Tier-1 participant (a leader who will actually post, not just approve budget).
- Three to five Tier-2 candidates: people who already post occasionally, or who have said they want to build their personal brand.
- Roles that span your target outcome. If you're chasing pipeline, include sales and CS. If you're chasing employer brand, include engineering and product. If you're chasing thought leadership, include the specific subject-matter expert on the topic.
Skip anyone who's enthusiastic but wants scripts. The people who ask "just tell me what to post" are not your pilot.
Week 3: evaluate tools (not before the pilot is defined)
The tool decision is downstream of the program design. Here's the actual enterprise landscape in 2026 and what differentiates them:
Sociabble and EveryoneSocial — content-library-centric platforms. Marketing curates and uploads content; employees share it with one click. These are built around the model the ~8× engagement research from LinkedIn Marketing Solutions describes — but that model has aged. LinkedIn now actively throttles duplicate content, so a program where twenty employees post the same caption is fighting the algorithm. These tools remain the default for large enterprises because they handle compliance and retention well, which matters in regulated industries.
Hootsuite Amplify — similar content-library approach, tighter integration if you already use Hootsuite for the main publishing function. Strong reporting, weaker on personalization.
GaggleAmp — heavier emphasis on activity suggestions (like-this-post, comment-on-this-post) versus raw sharing, which actually maps better to a Tier-3 amplifier model.
Bambu by Sprout Social — discontinued as a standalone product; Sprout has merged its advocacy features into the main suite. Verify the current packaging before you shortlist it in 2026.
DSMN8 — advocacy platform with better support for personalization and individual voice than most. Useful if the tool-lock-in of a larger suite isn't a factor.
FeedSquad (which I build) — individual voice matching and drafting, designed around the premise that per-employee voice matters more than library size. More on this in the CTA below; I'll keep this section honest about the others.
The real question when evaluating: does the tool preserve individual voice, or does it force everyone into the same template? If it's the latter, the best tool in the world will still run into algorithmic throttling and employee drop-off.
Weeks 3-4: onboarding session
One 90-minute session with the pilot group. Covers:
- Why this matters for them personally. Their career, their network, their visibility. Not primarily the company's goals.
- What good LinkedIn posts actually look like — show three real examples from people in similar roles.
- What the company will provide: topic suggestions, data they can reference, writing feedback on request.
- What the company will not do: write their posts, require approval before posting, dictate a content calendar.
The tone of this session sets the tone of the whole program. If it feels like a mandatory training, you've already lost.
Weeks 5-8: the first cadence
Target one to two posts per week per Tier-2 participant; two to three per week for Tier 1. More than that and people treat it like a second job. Less than one per week and momentum never builds.
Concrete supports to provide each week:
A weekly topic prompt email. Three to five topics with brief context. "We just launched X — here's the raw data if you want to riff on it." "A customer win we can talk about — here's the story." Not pre-written posts. Prompts.
A peer channel. A private Slack or Teams space where participants drop drafts and comment on each other's posts. Peer feedback beats marketing feedback because there's no implicit approval power.
Office hours. Thirty minutes a week where someone is available to help with a draft if asked. Unsolicited editing feels like surveillance, so wait to be asked.
Early metrics. Within the first month, show participants their own results — profile views, new connections, engagement rate. LinkedIn's own analytics are good enough for this.
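For the early-metrics step, a spreadsheet is enough, but if you're pulling numbers weekly a small script keeps the comparison honest across participants. A sketch assuming you hand-export per-post impressions and engagements from LinkedIn's analytics; the data shape here is my assumption, not an API:

```python
from dataclasses import dataclass

@dataclass
class Post:
    author: str
    impressions: int
    engagements: int  # reactions + comments + reshares

def engagement_rate(posts: list[Post]) -> dict[str, float]:
    """Per-participant engagement rate: total engagements / total impressions,
    aggregated across all of a participant's posts in the period."""
    totals: dict[str, list[int]] = {}
    for p in posts:
        t = totals.setdefault(p.author, [0, 0])
        t[0] += p.impressions
        t[1] += p.engagements
    return {
        author: round(eng / imp, 4) if imp else 0.0
        for author, (imp, eng) in totals.items()
    }
```

Aggregating before dividing matters: averaging per-post rates lets one tiny-reach post with three enthusiastic comments distort a participant's number.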
Weeks 9-12: decide whether to scale
At day ninety you should be able to answer two questions:
- Is the content that's being produced actually good? Does it read like the individual people writing it, or has it started drifting toward brand voice?
- Is the primary outcome metric moving?
If yes to both, grow the pilot to fifteen to twenty people. Reuse the pilot participants as onboarding peers for the new cohort — their experience is worth more than any marketing presentation.
If the content has drifted toward brand voice, stop and fix that before scaling. You're about to scale a problem.
If the primary outcome isn't moving, don't scale. Figure out why it isn't moving first. Usually the answer is one of: the pilot didn't include the right roles for the outcome, the tools are enforcing templated content, or the metric requires more time than ninety days.
A note on incentives
Don't pay per post. Don't tie advocacy to performance reviews. Both of these turn authentic posting into compliance behavior, which is immediately visible on the feed and — under FTC Endorsement Guides — potentially creates additional disclosure requirements.
Recognition, not compensation, is the right lever. Weekly shoutouts in the peer channel. Quarterly "advocate of the quarter" framing that rewards people visibly. Speaker opportunities, conference attendance, and executive mentorship for standout participants. These produce the motivation without the corruption.
The mistake I see most
Launching company-wide on day one. I've watched three separate companies try it. Each time, the first month looks great because everyone is new and excited. By month three, participation has collapsed to 10%, the content has homogenized because marketing started rewriting drafts to enforce brand standards, and the program gets quietly shelved.
Start with five. Build something that works for five. Then scale what actually worked.
If you want individual voice matching and a drafting surface that preserves each employee's actual writing style rather than flattening it, FeedSquad's team features are built around that constraint. Free tier to evaluate with a pilot.
Sources:
- FTC — Endorsement Guides
- FINRA — Rule 2210
- LinkedIn Marketing Solutions — The real value of your employees' social media reach
- Richard van der Blom — Algorithm Insights Report 2025
Ready to create content that sounds like you?
Get started with FeedSquad — 5 free posts, no credit card required.