In November, a product strategist, referred to here as Michelle, ran an experiment: she changed the gender on her LinkedIn profile to male and adopted the name Michael, as reported by TechCrunch.
This was part of the #WearthePants initiative, in which women tested the theory that LinkedIn’s updated algorithm might be disadvantaging female users.
For several months, some frequent LinkedIn users observed a decline in their engagement and impression rates on the professional networking site. This downturn coincided with an August announcement by Tim Jurka, LinkedIn’s vice president of engineering, who stated that the platform had recently begun using large language models (LLMs) to enhance content relevance for users.
Michelle, whose identity TechCrunch has verified, grew suspicious of these changes because, despite her having over 10,000 followers and her husband only about 2,000, the posts she wrote for both accounts (she ghostwrites his) typically received comparable impressions.
She asserted, “The only significant variable was gender.”
Marilynn Joyner, a founder, also modified her profile’s gender. After consistently posting on LinkedIn for two years, she noticed a drop in her posts’ visibility over recent months. “I changed my gender on my profile from female to male, and my impressions jumped 238% within a day,” she informed TechCrunch.
Megan Cornish and many others, including Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, and Lucy Ferguson, reported similar outcomes.
LinkedIn, however, has stated that its “algorithm and AI systems do not use demographic information such as age, race, or gender as a signal to determine the visibility of content, profile, or posts in the Feed” and that “a side-by-side snapshot of your own feed updates that are not perfectly representative, or equal in reach, do not automatically imply unfair treatment or bias.”
Experts in social algorithms agree that while overt sexism may not be the cause, implicit bias could be at play.
Platforms function as “an intricate symphony of algorithms that pull specific mathematical and social levers, simultaneously and constantly,” Brandeis Marshall, a data ethics consultant, explained to TechCrunch.
Marshall noted that “The changing of one’s profile photo and name is just one such lever,” adding that a user’s past and current interactions with other content also influence the algorithm.
“What we don’t know of is all the other levers that make this algorithm prioritize one person’s content over another. This is a more complicated problem than people assume,” Marshall commented.
Algorithmic Bias Concerns
The #WearthePants experiment was initiated by entrepreneurs Cindy Gallop and Jane Evans.
They enlisted two men to post content identical to their own, aiming to determine whether gender accounted for the dip in engagement many women were experiencing. Gallop and Evans had a combined following of more than 150,000, compared with roughly 9,400 for the two men at the time.
Gallop reported that her post reached only 801 people, while the man who posted the identical content reached 10,408, more than his entire follower count. This prompted other women, including Joyner, who relies on LinkedIn to market her business, to participate and voice concern.
“I’d really love to see LinkedIn take accountability for any bias that may exist within its algorithm,” Joyner stated.
However, LinkedIn, like other LLM-driven search and social media platforms, offers limited details on how its content-selection models are trained.
Marshall pointed out that most of these platforms “innately have embedded a white, male, Western-centric viewpoint” due to the backgrounds of those who trained the models. Researchers have found evidence of human biases such as sexism and racism in popular LLMs because the models are trained on human-generated content, and humans are frequently involved in post-training and reinforcement learning.
Nevertheless, the specific implementation of AI systems by any given company remains concealed within the algorithmic black box.
LinkedIn maintains that the #WearthePants experiment could not definitively demonstrate gender bias against women. Jurka’s August statement, echoed by Sakshi Jain, LinkedIn’s Head of Responsible AI and Governance, in a November post, reiterates that the company’s systems do not use demographic data as a signal for content visibility.
Instead, LinkedIn informed TechCrunch that it evaluates millions of posts to connect users with opportunities. The company clarified that demographic data is used exclusively for testing purposes, such as ensuring posts “from different creators compete on equal footing and that the scrolling experience, what you see in the feed, is consistent across audiences.”
LinkedIn has a history of conducting research and making adjustments to its algorithm in an effort to provide a more equitable user experience.
Marshall suggests that unknown variables likely account for why some women observed increased impressions after switching their profile gender to male. Participating in a trending topic, for instance, can lead to an engagement surge; some accounts that had been inactive for a while may have been rewarded by the algorithm for posting again.
Tone and writing style might also contribute. Michelle, for example, noted that during the week she posted as “Michael,” she shifted to a simpler, more direct tone, similar to how she writes for her husband. She reported that impressions subsequently rose by 200% and engagements by 27%.
She concluded that the system was not “explicitly sexist” but seemed to implicitly treat communication styles typically linked with women as “a proxy for lower value.”
Stereotypically male writing is often perceived as more concise, while stereotypically female writing is imagined to be softer and more emotional. If an LLM is trained to prioritize writing that conforms to male stereotypes, that introduces a subtle, implicit bias. And as noted above, researchers have found that most LLMs carry such biases.
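To make that mechanism concrete, consider a minimal, hypothetical sketch in Python. This is not LinkedIn’s actual system; the “directness” feature, the group labels, and every number here are invented for illustration. It shows how a ranker that never sees gender can still produce gender-correlated scores when a style feature it does use correlates with gender:

    # Hypothetical illustration of "proxy bias" -- not LinkedIn's system.
    import random
    import statistics

    random.seed(0)

    def make_post(group):
        # Invented assumption: group "A" tends toward the "direct" style
        # that historical engagement data rewarded; group "B" less so.
        directness = random.gauss(0.7 if group == "A" else 0.4, 0.15)
        engagement = 1 if random.random() < directness else 0
        return directness, engagement, group

    history = [make_post(random.choice("AB")) for _ in range(10_000)]

    # "Train" a one-feature ranker: estimated P(engagement) per style bucket.
    buckets = {}
    for directness, engagement, _ in history:
        buckets.setdefault(round(directness, 1), []).append(engagement)
    model = {b: statistics.mean(v) for b, v in buckets.items()}

    def score(directness):
        # The ranker consumes only the style feature; group is never an input.
        return model.get(round(directness, 1), 0.0)

    # Average predicted reach still differs by group: style acts as a proxy.
    for group in "AB":
        scores = [score(d) for d, _, g in history if g == group]
        print(group, round(statistics.mean(scores), 3))

In this toy setup, group A’s posts earn higher average predicted reach even though the group label is never an input to the model, which is exactly the kind of demographic-blind-but-correlated behavior the experts describe.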
Sarah Dean, an assistant professor of computer science at Cornell, noted that platforms like LinkedIn frequently leverage entire profiles, alongside user behavior, to decide which content to amplify. This encompasses job titles listed on a user’s profile and the types of content they typically engage with.
“Someone’s demographics can affect ‘both sides’ of the algorithm — what they see and who sees what they post,” Dean explained.
LinkedIn informed TechCrunch that its AI systems analyze hundreds of signals to determine what content is presented to a user, including insights from their profile, network, and activity.
The spokesperson added, “We run ongoing tests to understand what helps people find the most relevant, timely content for their careers. Member behavior also shapes the feed, what people click, save, and engage with changes daily, and what formats they like or don’t like. This behavior also naturally shapes what shows up in feeds alongside any updates from us.”
Chad Johnson, a sales expert active on LinkedIn, characterized the changes as a deprioritization of likes, comments, and reposts. He wrote in a post that the LLM-based system “no longer cares how often you post or at what time of day. It cares whether your writing shows understanding, clarity, and value.”
All these factors make it challenging to pinpoint the definitive cause of any observed #WearthePants results.
User Dissatisfaction with the Algorithm
Regardless, it appears that many users, across all genders, either disapprove of or misunderstand LinkedIn’s new algorithm.
Shailvi Wakhlu, a data science consultant, told TechCrunch that she has averaged at least one post daily for five years and used to see thousands of impressions, but now she and her husband rarely see more than a few hundred. She described this as “demotivating for content creators with a large loyal following.”
One male user reported a roughly 50% drop in engagement over the past few months. Conversely, another male user noted his post impressions and reach had increased by over 100% within a similar timeframe. He attributed this to focusing his writing on specific topics for specific audiences, which he believes the new algorithm favors, adding that his clients are experiencing similar improvements.
However, Marshall, who is Black, has personally observed that posts about her general experiences tend to perform worse than those related to her race. She stated, “If Black women only get interactions when they talk about black women but not when they talk about their particular expertise, then that’s a bias.”
Dean speculates that the algorithm might simply be amplifying “whatever signals there already are.” It could be rewarding certain posts not due to the writer’s demographics, but because those types of posts have historically garnered more response across the platform. While Marshall’s anecdotal evidence may point to another area of implicit bias, it’s not sufficient to draw a definitive conclusion.
LinkedIn provided some insights into what currently performs well. The company noted that its user base has grown, leading to a 15% year-over-year increase in posting and a 24% year-over-year rise in comments. “This means more competition in the feed,” the company explained. Posts offering professional insights and career lessons, industry news and analysis, and educational or informative content related to work, business, and the economy are all performing strongly.
Ultimately, users largely express confusion. Michelle articulated this sentiment: “I want transparency.”
However, companies consistently treat content-ranking algorithms as closely guarded trade secrets, in part because transparency could enable manipulation, so that is a substantial request that is unlikely to be fulfilled.
This article was updated to correct the spelling of Wakhlu’s name.