One of Spez’s answers in the infamous Reddit AMA struck me:
Two things happened at the same time: the LLM explosion put all Reddit data use at the forefront, and our continuing efforts to reign in costs…
I am beginning to think all they wanted to do was get their share of the AI pie, since we know Reddit’s data is one of the major datasets for training conversational models. But they are such a bunch of bumbling fools, as well as being chronically understaffed, that the whole thing exploded in their faces. At this stage their only chance of survival may well be to be bought out by OpenAI…
Yes, but it could have been handled better. If AI was the problem, they could have gone the route of allowing API access only after an application process, so they know who is using it; everyone else trying to use it would get denied until they were assigned a key.
100%, and they also didn’t need to be total tools about it: giving a one-month window is a joke, being snarky assholes answering AMAs, telling their user base that profitability is the only thing that matters to them.
Surprising nobody, Reddit continues to make really awful business decisions. This is just another nail in their coffin.
This right here. They could have made a licensing agreement based on the classification your use falls into. Apps have one pricing model, LLMs have another. This is just lazy and greedy.
I’m thinking that they want to sell the generated data to AI companies as training data, and AI-generated content would nullify that.
edit: and obviously, currently everyone can suck up their data for free, although I don’t know how that would be any different with their changes if I just use a web scraper
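To the point above that API changes don’t stop scraping: pulling content straight out of a page’s HTML takes only the standard library. A minimal sketch (the HTML snippet and the `h3`-per-post layout are hypothetical stand-ins, not Reddit’s actual markup):

```python
from html.parser import HTMLParser

# Hypothetical snippet standing in for a fetched listing page.
SAMPLE_HTML = """
<div class="Post"><h3>Why is the sky blue?</h3></div>
<div class="Post"><h3>TIL about §44b UrhG</h3></div>
"""

class TitleScraper(HTMLParser):
    """Collects the text inside every <h3> tag."""

    def __init__(self):
        super().__init__()
        self.in_h3 = False
        self.titles = []

    def handle_starttag(self, tag, attrs):
        if tag == "h3":
            self.in_h3 = True

    def handle_endtag(self, tag):
        if tag == "h3":
            self.in_h3 = False

    def handle_data(self, data):
        if self.in_h3 and data.strip():
            self.titles.append(data.strip())

scraper = TitleScraper()
scraper.feed(SAMPLE_HTML)
print(scraper.titles)
```

Whether doing this at scale survives a site’s terms of service is exactly the legal question argued further down the thread.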
Could they have something to do with it? Yes, for sure. But the thing is that they didn’t have to do any of this the way they did. They could have made an API plan that allowed third-party apps to still exist and thrive, and also charged big companies that just want to use Reddit to train LLMs, changing the pricing and terms around this idea. They deliberately went after third-party apps, and then doubled and tripled down on it in the face of massive backlash. If spez were competent, he would have been able to pivot this conversation and make it about training LLMs for megacorps, but he didn’t, and even then it would have still been bullshit that is easily seen through.
Charging for their API is a reasonable answer to the LLM data scrapers. The amount they’re charging, and the speed of the changes, are not reasonable, however, IMO.
The original announcement said they were making exceptions for applications that gave back to Reddit. I and many others hoped that meant basically everyone who wasn’t AI scraping. But it seems like they got greedy while they were at it and decided to kill everything.
Reddit data is public and can be easily web scraped. Reddit doesn’t own it. Spez is just throwing random memes in to distract people.
I am sorry, but you don’t know what you are talking about. These things are regulated by legal documents; you don’t just wake up one morning and say “trust me bro, their data is public.”
If you go and read their T&Cs, they explicitly state that scraping is forbidden without prior written consent. They only allow access to their data via APIs, which of course they charge for.
The fact that it can be easily scraped is neither here nor there; if they catch you, they can sue you.
99% of LLMs have pirated content and will continue to regurgitate pirated content until there is enough money at stake for a big lawsuit.
Getty is already suing the Dall-E creators, and someone is suing MS for Copilot; so it’s already started
Again, big money users will get sued, everyone else will scrape with impunity.
Sure but I’m not sure why you are bringing this up. What’s the wider point you are trying to make?
Nah, Terms of Service are not enforceable through a browsewrap agreement in the US and most of the EU. You can’t implicitly agree to a legal document just by looking at something.
Check out the hiQ Labs v. LinkedIn case, which went to the 9th Circuit and set the precedent for this. LinkedIn lost.
Unless I’m mistaken and something is different, this hasn’t been a problem for tools like newpipe, YouTube vanced, and fritter.
I’m very sure that this is the case. Reddit is pissed they gave away all the content as training data for free while struggling to monetize their platform adequately.
But I suspect the damage is already done. There are projects like “Orca” from Microsoft that largely skip learning from source data by using ChatGPT and GPT-4 instead.
They missed the timing but are too stubborn, and doubled down on it.
What’s more, GPT-4 is near the upper bound of what you can collect on the web in that way. They basically took everywhere you’d look for information and grabbed it along with as much structure as they could… There’s plenty more information on the Internet, but the structure and quality are much lower: data-poor, unstructured interactions between humans.
Moving forward, everyone is talking about synthetic datasets - you can’t go bigger without some system to generate (or refine) training data - and if you have to generate the data anyway, you’re not going to pay much for a dataset that is merely decent.
So yeah, Reddit most definitely missed the timing.
I think Elon’s claims that he’s made Twitter profitable (despite a lot of evidence to the contrary) are also creating pressure for the other social networks to chase overly aggressive monetization schemes.
Why not both? I think they see this as an opportunity to kill two birds with one stone.
Training data gets gathered with scrapers
IF the owners of the data agree, or, if they disagree, until they take you to court. Getty Images is taking the creators of DALL-E to court, and some tech company is taking MS to court over Copilot.
No, the law says that if content is not supposed to be used as training data, that reservation has to be machine-readable. And for scientific purposes it’s basically irrelevant. You can take whoever you want to court; that doesn’t change anything.
What “law” says that? That’s not how copyright works at all. If you don’t have an explicit license to use content you don’t own, you can’t legally use it.
https://www.gesetze-im-internet.de/urhg/__44b.html
German law, and that’s where many of the data-mining companies are located.
Is there an English translation available? That’s a hell of a departure from international copyright agreements that I wasn’t aware of if it’s true.
Act on Copyright and Related Rights (Copyright Act), § 44b Text and Data Mining
(1) Text and data mining is the automated analysis of single or multiple digital or digitized works in order to extract information from them, in particular about patterns, trends and correlations.
(2) Reproductions of legally accessible works for text and data mining are permitted. The reproductions shall be deleted when they are no longer required for text and data mining.
(3) Uses according to paragraph 2, sentence 1 are only permitted if the right holder has not reserved them. A reservation of use in the case of works accessible online shall only be effective if it is made in machine-readable form.
There is no official English translation, but DeepL does a good job to my knowledge. If you have further questions, just ask. German law is very complicated and very dependent on interpretation; it’s sometimes just barely understandable even for our lawyers…
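On the “machine-readable reservation” point in paragraph (3): one form such a reservation is commonly argued to take in practice is a robots.txt entry (whether that suffices under §44b is itself debated, so treat this as an illustration, not legal advice). A minimal sketch using Python’s stdlib robots.txt parser, with a hypothetical robots.txt that blocks GPTBot, OpenAI’s published crawler user agent:

```python
from urllib import robotparser

# Hypothetical robots.txt reserving the site against an AI crawler
# while leaving it open to everyone else.
ROBOTS_TXT = """\
User-agent: GPTBot
Disallow: /

User-agent: *
Allow: /
"""

rp = robotparser.RobotFileParser()
rp.parse(ROBOTS_TXT.splitlines())

# The AI crawler is told to stay out; other agents may fetch.
print(rp.can_fetch("GPTBot", "https://example.com/r/all"))       # False
print(rp.can_fetch("SomeBrowser", "https://example.com/r/all"))  # True
```

Of course, nothing in the file technically stops a crawler that ignores it; the machine-readable reservation matters because it changes the legal position, not the technical one.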
Interesting. Do you have a link to the specifics of the law you are talking about?
And lots of proxies.
At least seven proxies.
Yeah that as well.
Honestly, I think so. It looks like all the big tech companies have collected enough data from us that they can now create AI models from it. Like a snapshot of humanity over some years.
Yup. AI consumers are more profitable than third-party apps. Why bother with tiered pricing when you can just name a price point everyone has to pay that only huge AI companies are willing to pay?
Reddit gets their content for free. Reselling it at a high price to AI/ML consumers is an easy way to turn free content into profit with almost no effort.
The value of LLMs has shifted drastically in favor of open source since the Meta weights leak. The proprietary model looks pretty much wrecked now, at least as far as I understand the leaked internal memo from a Google researcher last month.
https://www.semianalysis.com/p/google-we-have-no-moat-and-neither
Oh I’m not saying they are doing the right thing or that it was the correct decision. Just speculating whether LLMs is what kicked off the whole thing
I’m saying the premise that LLMs have anything to do with it is either an incompetent failure to keep up with LLM developments, or a pack of lies.
I disagree, it’s still too early and a bit presumptuous to make such conclusive statements
This is a fascinating read, thank you very much for sharing.
It is, but Reddit doesn’t own the content on their site according to their TOS; posters merely grant them a license to redistribute it. So it’s not really their call to shut off ChatGPT scraping; it should be a community decision.
“Merely” - the TOS basically grant Reddit the ability to do what the hell they want with it, LOL
When Your Content is created with or submitted to the Services, you grant us a worldwide, royalty-free, perpetual, irrevocable, non-exclusive, transferable, and sublicensable license to use, copy, modify, adapt, prepare derivative works of, distribute, store, perform, and display Your Content and any name, username, voice, or likeness provided in connection with Your Content in all media formats and channels now known or later developed anywhere in the world. This license includes the right for us to make Your Content available for syndication, broadcast, distribution, or publication by other companies, organizations, or individuals who partner with Reddit.
And furthermore
You also agree that we may remove metadata associated with Your Content, and you irrevocably waive any claims and assertions of moral rights or attribution with respect to Your Content.
Surprisingly tough question. On one hand, I don’t think every ex-Reddit user should go “Nah, it’s too late, fam” because then it wouldn’t even make sense for the devs to make any changes if they had no chance of regaining their userbase. On the other hand, I feel like even if they made really good changes, I would still always be on edge waiting for the bad thing to happen (pretty much what I imagine an abusive relationship to be like).
Reddit’s business model was not founded on selling LLM data. Reddit got greedy and decided to change their business model to cash in on an unexpected revenue stream. What was also unexpected (to Reddit) is that you cannot cater to social media users and monetize their data for LLM training effectively at the same time. And now Reddit will have neither, and will die just like all other businesses that adopt enshittification as a core operating procedure.
Let this be a lesson to them and all that follow: do not let your greed make you blind to the consequences of your actions.
Does it matter what Reddit’s business model was founded on? Businesses respond to changing conditions all the time and pivot.
“They got greedy” seems like a really naive way of looking at it. They are a business; that’s what businesses are all about. Additionally, they are a business which is NOT profitable, and they need to change things to survive now that the era of low interest rates has come to an end. The real issue is that they are so inept, IMHO.
I find the word “enshittification” so cringe.
Agreed and what this shows is the marketplace of ideas cannot be owned by a corporation. The interests of capital will always come first.
Yes, but nothing’s stopping the scraping of Reddit content from the front end.
Technically not (well, they can make it harder), but they can sue them for doing it
Sure, but they could do the same thing with an API. Make scraping for LLMs against the TOS; not personal use. I really do think (as the OP says) it’s two birds with one stone.
I think the LLM wave hit, they saw dollar signs, and they made a change without thinking it through. Then they were backed into a corner between money and avoiding outrage, and greed won out.