Bluesky sees 17x increase in moderation reports in 2024 after rapid growth


Bluesky on Friday published its moderation report for the past year, noting the huge growth the social network experienced in 2024 and how it affected the work of its Trust & Safety team. It also noted that the largest number of reports came from users reporting accounts or posts for harassment, trolling, or intolerance, an issue that plagued Bluesky as it grew and at times even led to widespread protests over individual moderation decisions.

The company’s report did not address individual users or explain why it did or did not take action against them, including those on the most-blocked list.

The company added more than 23 million users in 2024, as Bluesky became a new destination for former Twitter/X users for various reasons. Throughout the year, the social network benefited from several changes at X, including its decisions to change how blocking works and to train AI on user data. Other users left X after the results of the US presidential election, as the politics of X owner Elon Musk began to dominate the platform. The app also surged in users while X was temporarily banned in Brazil back in September.

To meet the demands of this growth, Bluesky increased its moderation team to roughly 100 moderators, it said, and is continuing to hire. The company also began offering team members psychological counseling to help them with the difficult work of regular exposure to graphic content. (An area we hope AI will one day address, as humans are not built to handle this kind of work.)

In total, there were 6.48 million reports to Bluesky’s moderation service, a 17x increase from 2023, when there were only 358,000 reports.

Starting this year, Bluesky will begin accepting moderation reports directly from its app, as X does, making it easier for users to track actions and updates. Later, it will also support in-app appeals.

When Brazilian users flooded Bluesky in August, the company saw as many as 50,000 reports per day at the peak. This led to a backlog of moderation reports and required Bluesky to hire additional Portuguese-speaking staff, including through a contract vendor.

In addition, Bluesky began automating more categories of reports beyond spam to help it deal with the influx, though this sometimes leads to false positives. Still, automation helped cut processing time to just “seconds” for “high-certainty” accounts. Before automation, most reports were handled within 40 minutes. Today, human moderators are kept in the loop to address false positives and appeals, even if they don’t always handle the initial decision.

Bluesky says 4.57% of its active users (1.19 million) made at least one moderation report in 2024, down from 5.6% in 2023. Most of these, 3.5 million reports, were for individual posts. Account profiles were reported 47,000 times, mostly for a profile picture or banner picture. Lists were reported 45,000 times; DMs were reported 17,700 times, with feeds and Starter Packs receiving 5,300 and 1,900 reports, respectively.

Most of the reports involved anti-social behavior, such as trolling and harassment – a signal from Bluesky users that they want a less toxic social network, compared with X.

Other reports fell into the following categories, Bluesky said:

  • Misleading content (plagiarism, misinformation, or false claims about identity or affiliations): 1.20 million
  • Spam (excessive mentions, replies, or repetitive content): 1.40 million
  • Unwanted sexual content (nudity or adult content not properly labeled): 630,000
  • Illegal or urgent issues (clear violations of the law or Bluesky’s terms of service): 933,000
  • Other (issues that do not fit into the above categories): 726,000

The company also shared an update on its labeling service, which covers labels applied to posts and accounts. Human labelers added 55,422 “sexual figure” labels, followed by 22,412 “rude” labels, 13,201 “spam” labels, 11,341 “intolerant” labels, and 3,046 “threat” labels.

In 2024, 93,076 users submitted a total of 205,000 appeals of Bluesky’s moderation decisions.

There were also 66,308 account takedowns by moderators and 35,842 automated account takedowns. Bluesky additionally received 238 requests from law enforcement, governments, and legal firms. The company responded to 182 of them and complied with 146. Most of the requests were law enforcement requests from Germany, the US, Brazil, and Japan, it said.

Bluesky’s full report also examines other types of issues, including trademark and copyright claims and child safety/CSAM reports. The company noted that it has submitted 1,154 confirmed CSAM reports to the National Center for Missing & Exploited Children (NCMEC).


