Calling what attention-based transformers do “memorization” is wildly inaccurate.
*Unless we’re talking about semantic memory.
It honestly blows my mind that people look at a neural network that’s even capable of recreating short works it was trained on without having access to that text during generation… and choose to focus on IP law.
The issue is that, alongside the transformed output, the untransformed input is being used in a commercial product.
Are you only talking about the word repetition glitch?
How do you imagine those works are used?
It’s called learning, and I wish people did more of it.
This is an inaccurate understanding of what’s going on. Under the hood is a neural network with weights and biases, not a database of copyrighted work. That neural network was trained on a HEAVILY filtered training set (as mentioned above, 45 terabytes was reduced to 570 GB for GPT-3). Getting it to bug out and generate full sections of training data from its neural network is a fun parlor trick, but you’re not going to use it to pirate a book. People do that the old-fashioned way by just adding type:pdf to their web search.
You’ve made a lot of confident assertions without supporting them. Just like an LLM! :)
Just taking GPT-3 as an example, its training set was 45 terabytes, yes. But that set was filtered and processed down to about 570 GB, and GPT-3 was only actually trained on that 570 GB. The model itself is about 700 GB. Much of the generalized intelligence of an LLM comes from abstraction to other contexts.
> Table 2.2 shows the final mixture of datasets that we used in training. The CommonCrawl data was downloaded from 41 shards of monthly CommonCrawl covering 2016 to 2019, constituting 45TB of compressed plaintext before filtering and 570GB after filtering, roughly equivalent to 400 billion byte-pair-encoded tokens.

(*Language Models are Few-Shot Learners*)
*Did some more looking, and that model size estimate assumes 32-bit floats. It’s actually 16-bit, so the model size is 350 GB… technically some compression after all!
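For anyone who wants to sanity-check those figures, it’s simple arithmetic (a sketch; the 175-billion-parameter count is from the GPT-3 paper, the rest is just parameters times bytes per weight):

```python
# Back-of-envelope model size for GPT-3 (175 billion parameters, per the paper).
PARAMS = 175e9  # GPT-3 parameter count

def model_size_gb(bytes_per_param: float) -> float:
    """Approximate on-disk model size in decimal gigabytes at a given precision."""
    return PARAMS * bytes_per_param / 1e9

print(model_size_gb(4))  # 32-bit floats: 700.0 GB
print(model_size_gb(2))  # 16-bit floats: 350.0 GB
```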
deleted by creator
Equating LLMs with compression doesn’t make sense. Model sizes are larger than their training sets. If it requires “hacking” to extract text of sufficient length to infringe copyright, and the platform is doing everything it can to prevent it, that just makes them like every other platform. I can download © material from YouTube (or wherever) all day long.
Aye, flux [pro] via glif.app, though it’s funny, sometimes I get better results from the smaller [schnell] model, depending on the use case.
The more the original work is transformed, the more likely it is to be considered fair use rather than infringement.
Cannot be done with Mint? I’ve OS-hopped every few years - currently running Windows 11 at work and Mint at home - and I much prefer the Mint install.

That said, I’m a video producer, and video production just isn’t there yet on Linux. CUDA’s a pain to get working, proprietary codecs add steps, DaVinci’s Linux support is more limited than it seems, Kdenlive works in a pinch but lacks features, Adobe and Linux are like oil and water, and there’s no equivalent for After Effects… I don’t doubt that there are workarounds for many of these issues, but the ROI’s not there yet.

I’d love to see a video-production-focused distro that really aimed for full production suite functionality. Especially since Hackintoshes are about to get even harder to build.
Genuine question: What evidence would make it seem likely to you that an AI “understands”? These papers are coming at a relentless rate, so these conversations (regardless of the specifics) will continue. Do you have a test or threshold in mind?
deleted by creator
The paper is kind of saying that as well. I added a quote to the post to help set the context a bit more. As I understand it, they’ve shown that an LLM contains a model of its “world” (training data) and that this model becomes a more meaningful map of that “world” the longer the model is trained. Notably, they haven’t shown that this model is actively employed when the LLM is generating text (robot commands in this case), only that it exists within the neural network and can be probed. And to be clear - its world is so dissimilar from ours, the form its understanding takes is likely to seem alien.
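For a sense of what “probed” means here: a linear probe is just a simple classifier fit on frozen hidden states, to test whether some property of the world is linearly decodable from them. A toy sketch with synthetic “hidden states” (everything below is made up for illustration; it’s not the paper’s setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# 200 synthetic "hidden states" (dimension 16) that secretly encode a binary
# world property along one hidden direction, plus Gaussian noise.
true_direction = rng.normal(size=16)
labels = rng.integers(0, 2, size=200)  # the world property we'll probe for
hidden = rng.normal(size=(200, 16)) + np.outer(labels * 2 - 1, true_direction)

# The probe: a least-squares linear fit on the frozen states (labels mapped
# to -1/+1), then thresholded at zero to classify.
w, *_ = np.linalg.lstsq(hidden, labels * 2 - 1, rcond=None)
preds = (hidden @ w > 0).astype(int)
accuracy = (preds == labels).mean()
print(f"probe accuracy: {accuracy:.2f}")  # well above chance if the property is linearly encoded
```

If the probe classifies far above chance, the property is represented in the states even though the network was never asked to report it.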
From MIT again - exploring how LLMs can do what they do is pointing us in some interesting directions re: how our own brains understand.
I did some source digging to address your observations as best I can. Science journalism (even when internal and likely done in concert with the authors) is fundamentally a game of telephone. But looking at the source papers:
They say it in an incredibly formal way, but they do seem to come to the conclusion that the LLM develops understanding. The paper makes that case within a very narrow context, but it does include:
> We anticipate that this technique may be generally applicable to a broad range of semantic probing experiments. We argue that the observed semantic content cannot be fully attributed to a retrieval-like process, and instead requires the LM to perform some degree of generalization over the semantics. More broadly, we see programs and their precise formal semantics as a promising direction for working toward a deeper understanding of the behavior of LMs, such as whether or how LMs acquire and use semantic representations of the underlying domain more generally.
To be clear, the generalized case is not shown - but the specific type of understanding that they have shown is non-trivial.
> Conclusion: This paper presents empirical evidence that LMs of code can acquire the formal semantics of programs from next token prediction.
A foundational topic in the theory of programming languages, formal semantics (Winskel, 1993) is the study of how to formally specify the meaning of programs.
From Winskel:

> The Formal Semantics of Programming Languages provides the basic mathematical techniques necessary for those who are beginning a study of the semantics and logics of programming languages. These techniques will allow students to invent, formalize, and justify rules with which to reason about a variety of programming languages.
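To make “formal semantics” concrete (my own toy example, not from the paper): a denotational semantics maps each program to the value it means. For a tiny expression language:

```python
# Denotational semantics for a toy expression language: each program (a nested
# tuple) is mapped to the mathematical value it denotes.
def meaning(expr):
    """Denotation of ('num', n), ('add', a, b), or ('mul', a, b)."""
    tag = expr[0]
    if tag == "num":
        return expr[1]
    if tag == "add":
        return meaning(expr[1]) + meaning(expr[2])
    if tag == "mul":
        return meaning(expr[1]) * meaning(expr[2])
    raise ValueError(f"unknown form: {tag}")

# "2 + (3 * 4)" as a program in this language
prog = ("add", ("num", 2), ("mul", ("num", 3), ("num", 4)))
print(meaning(prog))  # 14
```

The paper’s claim is (roughly) that next-token prediction on code lets the model internalize something like this meaning function, not just the surface text of programs.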
Also notable but unrelated: Jin and Rinard’s paper was supported, in part, by grants from the U.S. Defense Advanced Research Projects Agency (DARPA).