Brands Under Pressure

I Binged 6 LinkedIn Algorithm Reports: Here's What Matters

6 reports. 5 million posts analyzed. Here's what LinkedIn actually changed.

Kara Redman
Mar 21, 2026

I spent the last few months buying, reading, and cross-referencing six independent LinkedIn algorithm studies:

  • Richard van der Blom's analysis of 600,000+ posts

  • Trust Insights' technical breakdown of LinkedIn's own engineering papers

  • Saywhat's dataset of 329,000+ posts

  • AuthoredUp's analysis of 3 million+ posts

  • Ocean Labs' strategy report

  • Socialinsider's study of 1 million posts

I also went to the primary source: LinkedIn’s published arXiv research papers and their March 2026 Engineering Blog announcement confirming the new architecture.

Here’s what I found: LinkedIn didn’t tweak the algorithm. They tore it down and rebuilt it from scratch. In March 2026, they publicly confirmed it.

Most of the advice you’re seeing online about how LinkedIn works is either outdated or made up. So I did the annoying thing and compared what the reports actually say: where they agree, where they don’t, and what it means for all of us.

Here’s everything.


TL;DR

Your reach is down. 98% of LinkedIn users experienced a decline (AuthoredUp). Views dropped 50-66% depending on the source.

LinkedIn replaced its old algorithm with a two-stage AI system that reads your content for meaning and matches it to audiences by topic — not by who you’re connected to.

Your profile now directly determines whether your posts even enter the competition. Topic consistency matters more than timing. Saves matter more than likes. And if your content sounds like ChatGPT wrote it, the algorithm knows (thank god).

The old LinkedIn playbook—golden hour, hashtag stacking, engagement pods, “Comment YES if you agree”—is dead. What replaced it rewards niche expertise and substantive engagement (as opposed to “congrats!” comments).


What every report agrees on

Your reach has tanked (and it’s structural, not temporary). Views down 50% (van der Blom, 600K+ posts). Median impressions down 66% from the 2023 peak (Saywhat, 329K posts). AuthoredUp’s tracking of 3 million+ posts found 98% of users experienced a decline, with median impressions falling 47% between mid-2024 and mid-2025. Socialinsider’s analysis of 1 million posts confirms the same direction. If you have 50K+ followers, it’s worse: reach down 62%, engagement down 67%, follower growth down 83% (van der Blom). The 15K-25K follower tier is actually the sweet spot right now.

LinkedIn replaced its algorithm with AI that actually reads your content. This is the biggest change. Trust Insights’ technical analysis of LinkedIn’s own engineering papers documented the full architecture: a two-stage pipeline consisting of a Causal LLM (built on LLaMA-3) that acts as a retrieval gate—deciding if your content even enters competition—and a Generative Recommender that decides where it ranks. In March 2026, LinkedIn’s Engineering Blog publicly confirmed this architecture (authored by engineering lead Hristo Danchev, backed by two arXiv research papers). Van der Blom, Saywhat, and Ocean Labs all describe the behavioral shift this creates.

Content now spreads by topic, not by connections. The old system: your connections liked your post, it showed up in their connections’ feeds. The new system: LinkedIn groups people by professional interests and matches content to topics. Van der Blom calls it the shift from social graph to interest graph. Trust Insights explains the mechanism: a unified embedding system replaced five or more separate content retrieval systems. Saywhat confirms niche content is now being rewarded while broad content declines. Every source agrees on this one.

Your profile is now a direct input to the algorithm. Trust Insights documented that the Causal LLM reads your headline, About section, and Experience as text to decide your retrieval eligibility. A separate model (Qwen3 0.6B) converts your profile into embeddings for ranking—making profile quality doubly important, affecting both whether your content enters competition and how it ranks once it does. Van der Blom and Ocean Labs both confirm the practical implication: if your profile says “Healthcare CFO” and you’re posting about crypto, the algorithm doesn’t know what to do with you. 80%+ of your posts should match your stated expertise.

Topic consistency compounds over time. Sticking to 2-3 core topics for 90 days correlates with +27% reach and +41% topic-based follows (van der Blom). Trust Insights explains why: scattered content literally dilutes your position in the system’s understanding of who you are—you’re authoring the document that positions you at a specific coordinate in concept-space, and inconsistency weakens that signal. Saywhat’s data shows the same pattern from the content side: niche winners are emerging while generic career and leadership content loses ground. Ocean Labs recommends the same 2-3 topic discipline. Think in sequences, not standalone posts—a 2-3 post series on the same theme shows +38% profile visits (van der Blom).

Engagement bait is actively suppressed. “Comment YES if you agree” hurts your reach now. Van der Blom, Saywhat, Trust Insights, and Ocean Labs all confirm. Saywhat adds that posts ending with generic questions actually perform slightly worse than posts without questions—users are tired of fake conversation starters. What works instead: specific scenario questions, either/or debates, provocative statements. Van der Blom’s data shows open reflections (“Curious how others handle this?”) outperform direct CTAs (“What do you think?”).

Hashtags are dead. LinkedIn shifted to scanning the actual words in your post for topic classification. Hashtags are navigation at best, spam signals at worst. Posts with more than 3 hashtags performed 70% worse than posts with none (Saywhat). Van der Blom agrees. Ocean Labs caps it at 3 maximum, if any. Saywhat’s co-founder noted that AI-generated content tends to default to hashtags—which may be part of why it gets deranked.

Comments are the #1 engagement signal. Not all engagement is equal. Thoughtful comments (3+ sentences) are the most valuable signal. Top 1% creators leave 286 replies per week versus 34 for average creators—that’s 741% more (Saywhat). There’s a strong correlation (0.545) between commenting frequency and follower growth. Van der Blom ranks the full hierarchy: thoughtful comments first, then saves, shares with commentary, reposts, and reactions/likes last. Ocean Labs recommends the A3 framework for commenting: Add insight, Ask a specific question, Anchor with a takeaway. Comments of 35-120 words from 4-7 distinct profiles per day, across different roles and industries, are the pattern that moves the needle (van der Blom).

Saves are the new metric that matters. LinkedIn added Saves and Sends to post analytics in late 2025—they’re telling you what they value. AuthoredUp’s data is the most specific here: saves drive 5x more reach than a like and 2x more than a comment. Posts that receive saves and substantive comments 24-72 hours after publishing perform 4-6x better because the system interprets late engagement as a signal of lasting value. Van der Blom found a save-to-comment ratio of 0.4 or higher signals strong rediscovery potential, with each save increasing the chance of appearing in suggested feeds by over 60%. Carousels have the highest save rate (29.2%) of any format (van der Blom). The takeaway across every source: create content people want to come back to (frameworks, checklists, reference material).

AI-generated content gets punished. Praise be. 44% of LinkedIn content now shows AI signals (van der Blom). Fully AI-generated posts get 2.8x less reach and nearly 5x less engagement. Detection triggers: semantic duplication, synthetic tone, repetitive comment patterns. Saywhat independently confirms. LinkedIn CEO Ryan Roslansky told Bloomberg that even LinkedIn’s own AI writing features haven’t caught on because “the barrier is much higher” to post on LinkedIn—the professional stakes make authenticity critical. If you use AI to draft, fine. Rewrite it so it doesn’t sound like the rest of the AI slop on there.

Older posts can resurface (within limits). LinkedIn confirmed in mid-2025 that posts 2-3 weeks old can reappear if they’re relevant to someone’s interests. Trust Insights adds the hard constraint: FishDB enforces a 30-day window for connection-based content. Beyond 30 days, it can’t resurface through your network regardless of quality. The out-of-network retrieval path (via the Causal LLM) operates under different constraints. AuthoredUp’s data confirms the practical effect: posts that receive saves and comments days after publishing can see massive late surges in reach.

Daily posting hurts. 2-3x per week is the sweet spot. Daily posting shows a -45% reach penalty (van der Blom). Ocean Labs adds tactical detail: post at least 4 hours apart, don’t schedule on round hours (:00)—use :04 or :07 to avoid automation detection signals, and use LinkedIn’s native scheduler rather than third-party tools. Every source agrees: consistency matters more than frequency, and the algorithm now penalizes volume without quality.


Where the reports disagree

These are the data points I find most interesting because if you’re making strategy decisions based on one report, you might be getting it wrong.

External links: the great debate. This is the most contested data point across every source I read. Van der Blom’s data shows a modest 5% reach gain for posts with links—a reversal from years of penalty. Saywhat’s data is much more aggressive: posts with 1-3 links saw 43% higher reach, and posts with 3+ links saw 210% higher reach. AuthoredUp found the opposite: posts with a single link perform worst of all, but resource-heavy posts with multiple links do better. Meanwhile, other sources still claim a ~60% penalty. LinkedIn confirmed to Greg Isenberg that editing links in after posting doesn’t affect reach. Ocean Labs says lead with value, put links at the end. The “never post links” era appears to be over but nobody agrees on what replaced it.

Optimal post length. Van der Blom says 900-1,200 characters (about 150-200 words) is the sweet spot, and penalizes posts with more than 40% blank lines or fewer than 4 lines. Saywhat says longer wins: 1,250-3,000 characters performed 31% better than short posts, and posts with 14+ paragraphs outperformed posts with fewer than 7 paragraphs by 71%. AuthoredUp’s earlier data aligned closer to van der Blom (800-1,000 characters). All three agree longer beats shorter. They disagree on how long. My suspicion: Saywhat’s dataset skews toward creator and influencer accounts whose audiences expect depth, so your sweet spot depends on your niche.

When to post. Van der Blom says 8-9 AM and 2-3 PM in your audience’s local time zone, but emphasizes the real variable is whether you can engage for 90 minutes after posting. Saywhat’s data shows weekend posts outperform weekdays: Sunday gets 1,531 median impressions versus ~1,075 on weekdays. Less competition from corporate content and ads, more leisure browsing time. Ocean Labs focuses less on timing and more on the pre-and-post engagement window (15-45 minutes of commenting before you post, 90 minutes of active replies after). All sources agree timing matters less than it used to because relevance now outweighs recency in the ranking system.

Video performance. Van der Blom: reach down -35.57%, engagement down -23.13%. Only works under specific conditions—under 60 seconds, manual subtitles, visual hook in first 2 seconds. AuthoredUp reported video content fell 72% (the steepest decline of any format). But Saywhat found the opposite on the high end: video is the most likely format to go viral at 2.0% probability, beating infographics (1.4%) and carousels (1.1%). Saywhat also found horizontal video outperforms vertical by 36%—possibly because people use LinkedIn at work and watching vertical video at your desk isn’t socially acceptable. The consensus: video gets low average reach but has a high ceiling. Don’t use it for consistent impressions; use it for trust-building and demos.

Format rankings. The sources mostly agree on what works but rank them differently. Van der Blom: documents/carousels lead with +6.92% reach and +46.73% engagement, followed by articles/newsletters (+47.92% reach—the biggest surprise). Saywhat: carousels get 4.1x the reach of text posts, infographics 3.4x. Socialinsider confirms document posts at 6.6% engagement rate as the highest of any format, with 15-20 seconds of average dwell time. Where they split: van der Blom shows articles surging; Saywhat barely mentions them. Van der Blom shows text+image as the safest consistent format; Saywhat shows infographics dominating the top 1% of posts (27.4% of outliers are infographics). The takeaway: carousels and documents are the consensus workhorse, but the specific format that works best depends on your audience and what you can produce consistently.


The myths that need to die

I want to be super clear about what is NOT confirmed because people keep posting about these as fact.

  • “A comment is worth 12x a like.” No confirmed weighting formula exists. LinkedIn has never published specific engagement multipliers. Every source I read either ignores this claim or debunks it.

  • “The algorithm changed last Tuesday.” LinkedIn rolls changes gradually through testing waves. There’s no single switch-flip moment. Trust Insights estimates 360Brew is 40-100% deployed. LinkedIn hasn’t confirmed the exact percentage.

  • “Post at exactly 8:07 AM on Tuesdays.” Your audience is not everyone else’s audience. And the data now shows timing is secondary to relevance anyway.

  • “Depth Score” is an official LinkedIn metric. It’s industry shorthand that commentators started treating as a product feature. LinkedIn has not published this as a named metric.

  • “Never post links.” Overstated and possibly wrong. The data is genuinely mixed (see above) but the blanket prohibition doesn’t hold up across any of the major studies.

  • “Engagement pods still work.” LinkedIn detects artificial interaction patterns (van der Blom, Ocean Labs). Pods with similar phrasing, timing, and reciprocal patterns get flagged. The system now looks for meaningful engagement from diverse profiles, not coordinated reactions from the same 20 people.

Behind the paywall: what I'd actually change, a format cheat sheet with real numbers, and the uncomfortable truth the data reveals.

If you want the full breakdown, lemme upgrade ya.

© 2026 Kara Redman · Privacy ∙ Terms ∙ Collection notice