Unless you’ve been living in a quiet corner of LinkedIn recently, you can’t have missed the increasing volume of discussion about whether your gender affects your post visibility.

The issue first dropped into my feed thanks to the incredible Felice Ayling (see this post as one example). Since then, I’ve been watching the story unfold with increasing unease.

LinkedIn say gender isn’t something their algorithm looks at when deciding which posts to surface.

However, lots of hyper-smart folks have switched their gender label from female to male and seen almost instant results.

So I wanted to try it for myself. Over the past three weeks, I have run a small experiment. The findings are below.

A few notes:

    • This is only my experience. It isn’t meant to represent the whole of LinkedIn, or the experience of all users regardless of gender.

    • I have changed my gender label only for the purposes of this experiment. There is nothing to read into this about my own expression and experience of gender, nor about my unswerving support for those who do not live within the commonly accepted gender binary.

The Experiment – Structure and Parameters

I have purposefully kept this experiment simple.

What have I changed?

    • Only one variable has been changed. This is the ‘back end’ gender label within LinkedIn.

    • I have not changed my pronouns, profile image or any other gender identifiers that are visible on my profile.

    • I have also not posted about taking part in the experiment, in case that (by perverse algorithm logic) counts against me or my content.

How am I tracking any changes?

    • I changed my gender label on each Saturday of the experiment, at around 10am.

    • I then used LinkedIn’s own built-in analytics to measure content performance from Sunday through to the following Saturday (see the sketch below for how the weekly numbers can be pulled together).

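For anyone who wants to replicate the tracking step, here is a minimal sketch of how the weekly per-post averages could be pulled together. It assumes you have copied each post’s metrics out of LinkedIn’s analytics into a CSV – the file name and column names below (linkedin_posts.csv; week, impressions, members_reached, engagements) are hypothetical, so adjust them to whatever your own export looks like.

```python
import csv
from collections import defaultdict
from statistics import mean

# Hypothetical input: one row per post, copied out of LinkedIn's analytics.
# Assumed columns: week ("adhoc" / "male" / "female"), impressions,
# members_reached, engagements.
def weekly_averages(path):
    posts_by_week = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            posts_by_week[row["week"]].append(row)

    # Report averages per post rather than weekly totals, so weeks
    # containing different numbers of posts can still be compared.
    for week, posts in posts_by_week.items():
        print(f"{week}: {len(posts)} posts")
        for metric in ("impressions", "members_reached", "engagements"):
            avg = mean(float(p[metric]) for p in posts)
            print(f"  average {metric}: {avg:.1f}")

weekly_averages("linkedin_posts.csv")
```

Averaging per post, rather than totalling, matters once the weeks contain different numbers of posts – which, as it turned out, mine did.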

Experiment pattern

    • Mon 24 – Sat 29 Nov 2025: Control week. Normal posting – i.e. ad hoc, when something specific caught my eye.

    • Sat 29 Nov – Sat 6 Dec 2025: Structured ‘male’ week.

    • Sat 6 – Sat 13 Dec 2025: Structured ‘female’ week.

In each of the gender experiment weeks, I scheduled posts using LinkedIn’s own scheduling tool for 9am each morning, with other ad hoc posts if anything caught my eye.  

I have not consciously changed my writing style, added more or fewer hashtags, or posted about anything wildly different from usual. I’ve purposefully tried not to think about which week I was in, in case that influenced my story choice or writing style.

Virtually all of my reshared content each week has come from my Feedly library, which I’ve been using for years to stay abreast of the latest news.

The Experiment – Results

The table below shows the cumulative performance across the experiment, followed by the posts from each week.

Discussion

I’m irritated with myself. I didn’t realise that I’d slipped in five extra posts during the ‘male’ week. Still, it is what it is. While I can’t now do a direct week-to-week comparison, I can look at the average performance per post.

This is an interesting set of numbers.

Average impressions have risen across the experiment, as has the average number of members reached by the posts.

Across all three weeks, posts going back a long time – up to two years! – have been viewed, really underscoring LinkedIn’s long tail. I don’t know whether this is much different to how content would have surfaced prior to the latest algorithmic changes, however.

What has caught my eye from these numbers is the engagement data.  Granted, it’s not huge – my audience evidently isn’t that engaged (although I feel as though I get a steady stream of reactions and comments).

However, looking at the numbers, the ‘female’ week is the least engaging of all. Its average engagement is half that of the ‘ad hoc’ week, in fact, and 28.5% lower than the ‘male’ week’s.
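For clarity, those percentage comparisons are simple relative differences between the weekly averages. The figures in the sketch below are hypothetical placeholders rather than my real numbers; the arithmetic is the same either way.

```python
def percent_lower(baseline, comparison):
    """How much lower `comparison` is than `baseline`, as a percentage."""
    return (baseline - comparison) / baseline * 100

# Hypothetical placeholder averages (engagements per post), not my real data.
male_week = 7.0
female_week = 5.0

print(f"'female' week is {percent_lower(male_week, female_week):.1f}% lower")
# -> 'female' week is 28.6% lower
```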

Is that because the content is less engaging?  Average impressions and reach did go up after all, so perhaps there’s something in the content which just didn’t resonate as strongly in the final week of the experiment.

I need to acknowledge that this is a small dataset. The posts were all different: you can’t necessarily compare my reaction to a Mark Ritson article with a promo for my contracting work as if they were the same thing. And we’re in the run-up to Christmas, so who knows whether that influences my audience’s engagement on the platform. There are a lot of potential variables which I don’t feel a short-term experiment with a fairly small audience can iron out.

Conclusion

As it stands, I don’t feel that my data supports the hypothesis that LinkedIn’s algorithm suppresses women’s voices on the platform.

These are purely my findings and they do not negate the experiences many are reporting.

Some of the women reporting significant changes updated other gender markers – for example, changing their name from Samantha to Sam, or Simone to Simon. Some also changed their profile photographs to appear more male. I purposefully restricted my experiment to a single ‘back end’ gender marker so that only one variable was in play at any one time.

In my opinion, equity should be the highest goal of society.  As users of different tools and technologies, I believe it’s our duty to ensure that they are not replicating historic biases or minoritisations; and if they are, we should not be afraid to call it out.

If that means running our own experiments every now and again, so be it.  It’s worth the extra effort if it means we’re moving towards a more equitable world.
