I assume that’s what was being referred to.
I was thinking a nice golden throne. More appropriate for a god-emperor.
I like the ‘:has’ pun in the title too. Supporting that is a real game changer!
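(For anyone who missed why that’s a big deal: `:has()` is effectively the long-requested “parent selector”, letting you match an element by what it contains. A tiny illustration; the selector and class names here are made up:)

```ts
// `:has()` matches an element based on its contents, which plain CSS
// never allowed before. Selector and class names are illustrative only.
const itemsWithActiveLink = document.querySelectorAll("li:has(> a.active)");
itemsWithActiveLink.forEach((li) => li.classList.add("highlight"));
```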
I’ll just write thousands of lines of code inside a global object… I’m sure I won’t put a semicolon where a comma should be…
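(In case the joke needs unpacking: members of an object literal are separated by commas, while ordinary statements end with semicolons, so one muscle-memory semicolon inside a giant object literal is a syntax error. A contrived sketch, all names invented:)

```ts
// A "global object" in miniature. Inside the literal, members must be
// comma-separated; the semicolon you'd type out of habit won't compile.
const App = {
  config: { retries: 3 },   // comma: correct
  start(): void {
    console.log("starting");
  },                        // writing `;` here instead is a syntax error
};

App.start();
```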
Alright this just has me wondering which is worse, a wet fuck or a dry one…
Can I teach you a lesson?
Excellent! So immersive!
Where’s the dedicated DRADIS monitor?
Was that Edelweiss? I don’t know what to do with this.
Okay, you didn’t have to put so much mustard on it.
Oh no, not Lucas!
A similar phenomenon is knowing you’re going to need to go back and update some older section of code, and when you finally get around to it, it turns out you wrote it that way to begin with. It’s like… I didn’t think I knew about this approach before…
This isn’t the most substantive of your comments in this chain, but I think it deserves some attention. It’s perfectly worded and it’s a concept more people need to embrace: you don’t have to speak in absolutes and it’s okay to express the limits of your knowledge.
Like the infosquitos: “this guy sure loves porno!”
Do you have any theories as to why this is the case? I haven’t gone anywhere near it, so I have no idea. I imagine it’s tied up with the way it processes everything from a language-first perspective, which I gather is why it’s bad at math. I really don’t understand enough to wrap my head around why we can’t seem to combine LLMs with traditional computational logic.
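(The closest thing I’ve seen to an answer is what gets called “tool use”: the model only translates the question into a structured request, and ordinary deterministic code does the actual math. A toy sketch of my understanding; `callModel`, `runTool`, and everything else here is invented for illustration:)

```ts
// Toy "LLM + traditional logic" loop. The model never does arithmetic;
// it only picks the operation, and plain code computes the result.
type ToolCall = { op: "add" | "mul"; a: number; b: number };

async function callModel(prompt: string): Promise<ToolCall> {
  // Stand-in for a real LLM API: pretend it parsed the prompt below.
  return { op: "mul", a: 3, b: 7 };
}

function runTool(call: ToolCall): number {
  // Deterministic computation, no token prediction involved.
  switch (call.op) {
    case "add":
      return call.a + call.b;
    case "mul":
      return call.a * call.b;
  }
}

async function answer(prompt: string): Promise<string> {
  const call = await callModel(prompt);     // language understanding
  return `The answer is ${runTool(call)}.`; // exact arithmetic
}

answer("what is 3 * 7?").then(console.log); // "The answer is 21."
```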
Katamari Damacy is the first one.
Oh man that’s… Well done, well done!
Points for “sassy robot.” But you could have described it worse. This was the first one I could identify.
My sense in reading the article was not that the author thinks artificial general intelligence is impossible, but that we’re a lot farther away from it than recent events might lead you to believe. The whole article is about the human tendency to conflate language ability and intelligence, and the author is making the argument both that natural language does not imply understanding of meaning and that those financially invested in current “AI” benefit from the popular assumption that it does. The appearance or perception of intelligence increases the market value of AIs, even if what they’re doing is more analogous to the actions of a very sophisticated parrot.
Edit: All of which is to say, I don’t think the article is asserting that true AI is impossible, just that there’s a lot more to it than smooth language usage. I don’t think she’d say never, but probably that there’s a lot more to figure out—a good deal more than some seem to think—before we get Skynet.
I feel compelled to point out that “back door man” was already a common expression in blues lyrics.