In November, a product strategist, whom we’ll call Michelle (a pseudonym), changed her LinkedIn profile to identify as male and changed her name to Michael, she told TechCrunch.
This was part of the #WearthePants experiment, in which women set out to test the theory that LinkedIn’s updated algorithm was biased against female users.
For several months, numerous active LinkedIn users had reported a decline in engagement and impressions on the professional networking site. This decline followed an August announcement by Tim Jurka, the company’s vice president of engineering, stating that LinkedIn had recently incorporated Large Language Models (LLMs) to enhance content visibility for users.
Michelle, whose identity has been verified by TechCrunch, grew suspicious of these changes. She has over 10,000 followers and ghostwrites posts for her husband, who has only about 2,000, yet she observed that their accounts typically received similar post impressions despite her significantly larger audience.
She stated, “The sole notable factor was gender.”
Founder Marilynn Joyner also modified her profile’s gender. After two years of consistent posting on LinkedIn and observing a recent decrease in her posts’ visibility, she informed TechCrunch, “I switched my profile gender from female to male, and my impressions soared by 238% within 24 hours.”
Similar outcomes were reported by Megan Cornish, Rosie Taylor, Jessica Doyle Mekkes, Abby Nydam, Felicity Menzies, Lucy Ferguson, and several others.
LinkedIn asserted that its “algorithm and AI systems do not utilize demographic data like age, race, or gender to influence the visibility of content, profiles, or posts in the Feed,” adding that “direct comparisons of your own feed updates, which may not be perfectly representative or show equal reach, do not inherently indicate unfair treatment or bias” within the Feed.
Experts in social algorithms concur that while overt sexism might not be the direct cause, implicit bias could nonetheless be a contributing factor.
Brandeis Marshall, a data ethics consultant, explained to TechCrunch that platforms operate as “an intricate symphony of algorithms that constantly and simultaneously engage specific mathematical and social levers.”
She further noted that “changing one’s profile photo and name represents merely one of these levers,” emphasizing that the algorithm is additionally shaped by a user’s past and present interactions with other content, among other factors.
Marshall concluded, “What remains unknown are all the other mechanisms influencing this algorithm to prioritize one person’s content over another. This issue is more complex than generally perceived.”
Male-Centric Programming
The #WearthePants initiative was launched by two entrepreneurs, Cindy Gallop and Jane Evans.
They engaged two men to publish identical content, seeking to determine whether gender was the root cause of the decline in engagement many women had observed. Gallop and Evans have a substantial combined following of more than 150,000, far exceeding the roughly 9,400 followers the two men had at the time.
Gallop noted that her post reached merely 801 people, whereas an identical post by one of the male participants garnered 10,408 views, more than his entire follower count. Other women subsequently joined the experiment, prompting concern among some, such as Joyner, who relies on LinkedIn to market her business.
Joyner expressed, “I genuinely wish to see LinkedIn assume responsibility for any potential bias present in its algorithm.”
However, LinkedIn, much like other search and social media platforms relying on LLMs, provides very little information regarding the training methodologies of its content selection models.
Marshall suggested that most of these platforms inherently “possess an embedded white, male, Western-centric viewpoint” due to the demographics of those who trained the models. Researchers have uncovered human biases, including sexism and racism, within widely used LLMs, as these models learn from human-created content, and humans frequently participate directly in post-training or reinforcement learning processes.
Nonetheless, the specific implementation of AI systems by any given company remains concealed within the confidential “algorithmic black box.”
LinkedIn asserts that the #WearthePants experiment could not have conclusively proven gender bias against women. Both Jurka’s August statement and a subsequent November post by Sakshi Jain, LinkedIn’s Head of Responsible AI and Governance, reiterated that their systems do not use demographic data as a factor for content visibility.
Conversely, LinkedIn informed TechCrunch that it evaluates millions of posts to link users with relevant opportunities. The company specified that demographic data is solely utilized for testing purposes, such as ensuring that posts “from diverse creators compete fairly and that the scrolling experience, or what appears in the feed, remains consistent across various audiences.”
LinkedIn has garnered attention for its efforts in researching and modifying its algorithm with the aim of delivering a more equitable user experience.
Marshall suggested that unidentified variables likely account for the increased impressions some women observed after altering their profile gender to male. She explained that participating in a viral trend, for instance, can enhance engagement; additionally, some accounts, having been inactive for extended periods, might have been inadvertently favored by the algorithm upon resuming posts.
Tone and writing style could also be influential. Michelle, for instance, recounted that during the week she posted as “Michael,” she subtly altered her writing style to be more direct and simple, similar to how she ghostwrites for her husband. During this period, she reported a 200% surge in impressions and a 27% increase in engagements.
She deduced that while the system wasn’t “explicitly sexist,” it appeared to interpret communication styles typically linked with women as “a substitute for lesser value.”
Common perceptions suggest that male writing styles are more concise, whereas female writing styles are often characterized as softer and more emotional. Should an LLM be trained to promote writing that aligns with male stereotypes, this would represent a subtle, implicit bias. As previously highlighted, researchers have confirmed that most LLMs contain numerous such biases.
Sarah Dean, an assistant professor of computer science at Cornell, noted that platforms like LinkedIn frequently consider entire user profiles, alongside user behavior, when deciding which content to amplify. This encompasses job titles listed on a profile and the kinds of content a user typically interacts with.
Dean stated that a person’s demographics can influence “both facets” of the algorithm: the content they view and the visibility of their own posts.
LinkedIn informed TechCrunch that its AI systems analyze hundreds of signals, including details from a user’s profile, network, and activity, to determine what content is delivered to them.
A spokesperson explained, “We conduct continuous tests to ascertain what best assists individuals in discovering the most pertinent, timely content for their professional lives.” They added, “User behavior also influences the feed; what people click, save, and interact with varies daily, as do their preferred and disliked formats. This ongoing behavior inherently helps shape what appears in feeds, alongside any updates from us.”
Chad Johnson, a sales expert active on LinkedIn, characterized the changes as a shift away from prioritizing likes, comments, and reposts. Johnson stated in a post that the LLM system “no longer considers the frequency or time of day of your posts,” but rather “values whether your writing demonstrates comprehension, lucidity, and substance.”
Consequently, pinpointing the precise cause behind any of the #WearthePants experiment’s outcomes becomes challenging.
Algorithm Discontent Widespread
Nonetheless, it appears that a considerable number of users, regardless of gender, are either dissatisfied with LinkedIn’s updated algorithm or do not fully understand it, whatever it actually does.
Data science consultant Shailvi Wakhlu informed TechCrunch that after consistently posting at least once daily for five years and routinely achieving thousands of impressions, she and her husband now rarely exceed a few hundred. She commented, “This is disheartening for content creators who have built a substantial, loyal audience.”
One male user reported a roughly 50% decrease in engagement over recent months to TechCrunch. Conversely, another man indicated that his post impressions and reach had more than doubled within the same period. He explained to TechCrunch that this was “largely due to my focus on writing about specific topics for niche audiences, which the new algorithm appears to favor,” noting that his clients were experiencing comparable growth.
However, in Marshall’s personal experience as a Black woman, she perceives that posts showcasing her professional expertise garner fewer interactions than those about her experience as a Black woman. She stated, “If Black women receive engagement only when discussing Black women, but not when showcasing their specific expertise, then that constitutes a bias.”
Dean, the researcher, speculates that the algorithm might merely be amplifying “existing signals.” It could be boosting particular posts not due to the writer’s demographics, but rather because those posts have historically generated more responses across the platform. While Marshall’s observations might point to another form of implicit bias, her anecdotal evidence is insufficient for a definitive conclusion.
LinkedIn provided some clarity on currently effective content strategies. The company noted user base expansion has led to a 15% year-over-year increase in posting and a 24% year-over-year rise in comments. “This indicates increased competition in the feed,” LinkedIn stated. The company also reported that posts featuring professional insights and career lessons, industry news and analysis, and educational or informative content pertaining to work, business, and the economy are performing strongly.
Ultimately, many users are simply perplexed. As Michelle put it, “I desire transparency.”
Nevertheless, given that content selection algorithms are typically closely guarded company secrets, and that full transparency could enable exploitation, this is a significant demand, and one that is unlikely to ever be met.
An update was made to this article to rectify the spelling of Wakhlu’s name.