Reddit announced on Tuesday that it’s updating its Robots Exclusion Protocol (robots.txt file), which tells automated web bots whether they are permitted to crawl a site.
Historically, the robots.txt file was used to let search engines crawl a site and then direct people to the content. With the rise of AI, however, websites are being scraped to train models without acknowledging the actual source of the content.
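For context, here is a minimal sketch of how a well-behaved crawler consults robots.txt before fetching a page, using Python's standard urllib.robotparser module. The URL and the "ExampleBot" user-agent string are purely illustrative, and, as noted below, nothing technically forces a bot to respect the answer.

```python
from urllib import robotparser

# Fetch and parse the site's robots.txt (URL shown is illustrative).
rp = robotparser.RobotFileParser()
rp.set_url("https://www.reddit.com/robots.txt")
rp.read()

# A compliant crawler checks permission for its user agent before
# requesting a page; a non-compliant bot can simply ignore this step.
if rp.can_fetch("ExampleBot/1.0", "https://www.reddit.com/r/technology/"):
    print("Allowed to crawl")
else:
    print("Disallowed by robots.txt")
```

The protocol is purely advisory: robots.txt states the site's preferences, but compliance depends entirely on the crawler honoring them, which is why Reddit pairs the update with rate limiting and blocking.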
Along with the updated robots.txt file, Reddit will continue rate-limiting and blocking unknown bots and crawlers from accessing its platform. The company told TechCrunch that bots and crawlers will be rate-limited or blocked if they don’t abide by Reddit’s Public Content Policy and don’t have an agreement with the platform.
Reddit says the update shouldn’t affect the majority of users or good faith actors, like researchers and organizations, such as the Internet Archive. Instead, the update is designed to deter AI companies from training their large language models on Reddit content. Of course, AI crawlers could ignore Reddit’s robots.txt file.
The announcement comes a few days after a Wired investigation found that AI-powered search startup Perplexity has been stealing and scraping content. Wired found that Perplexity appears to ignore requests not to scrape its website, even though Wired had blocked the startup in its robots.txt file. Perplexity CEO Aravind Srinivas responded to the claims, saying that the robots.txt file is not a legal framework.
Reddit’s upcoming changes won’t affect companies that it has an agreement with. For instance, Reddit has a $60 million deal with Google that allows the search giant to train its AI models on the social platform’s content. With these changes, Reddit is signaling to other companies that want to use Reddit’s data for AI training that they will have to pay.
“Anyone accessing Reddit content must abide by our policies, including those in place to protect redditors,” Reddit said in its blog post. “We are selective about who we work with and trust with large-scale access to Reddit content.”
The announcement doesn’t come as a surprise: a few weeks ago, Reddit released a new policy designed to guide how its data is accessed and used by commercial entities and other partners.