It’s like saying Microsoft Windows is the most loved OS on PC. People just go with the option in front of them. Spotify is the biggest streaming service now, Amazon Music ties in with Alexa.
I did not say companies should have no liability for publishing misinformation. Of course if someone uses AI to generate misinformation and tries to pass it off as factual information they should be held accountable. But it doesn’t seem like anyone did that in this case. Just a journalist putting his name in the AI to see what it generates. Nobody actually spread those results as fact.
Their product doesn’t claim to be a source of facts. It’s a generator of human-sounding text. It’s great for that purpose and they’re not liable for people misusing it or not understanding what it does.
Not sure what that’s supposed to help with. I’d be even more uncomfortable if my steak had eyes and made eye contact than when a person does it.
Make a large enough model, and it will seem like an intelligent being.
That was already true in previous paradigms. A non-fuzzy non-neural-network algorithm large and complex enough will seem like an intelligent being. But “large enough” is beyond our resources and processing time for each response would be too long.
And then you get into the Chinese room problem. Is there a difference between seems intelligent and is intelligent?
But the main difference between an actual intelligence and various algorithms, LLMs included, is that intelligence works on its own. It’s always thinking; it doesn’t only react to external prompts. You ask a question, you get an answer, but the question remains at the back of its mind, and it might come back to you 10 minutes later and say, “You know, I’ve given it some more thought and I think it’s actually like this.”
Exactly. As the mandatory sexual harassment and money laundering trainings have taught me repeatedly, if the company knows about it and doesn’t do anything, they’re equally liable (and in many cases even if they don’t know about it). So stopping inappropriate behavior is in their interest.
Remember to look into his eyes
I don’t know if it’s some neurodivergence or if other introverts feel the same way, but that is something I personally find very difficult and uncomfortable and I can’t hold eye contact for more than a second or two at a time. What feels natural to me is to look at a person’s mouth when they talk.
Pear and gorgonzola is a typical combination.
They are effective, but in the other direction. I wouldn’t be surprised if they’re funded by fossil fuel companies.
Somewhere on the vertical axis. 0 on the horizontal. The AGI angle is just to attract more funding. We are nowhere close to figuring out the first steps towards strong AI. LLMs can do impressive things and have their uses, but they have nothing to do with AGI.
Review bombing is an intentional attack (e.g. someone posts a story about a shitty restaurant owner and everyone on the internet starts leaving negative reviews for that restaurant even though they’ve never been there). Just getting negative reviews organically for being bad is not review bombing.
Just heard the story. Apparently it had cost $200M by the time they presented the alpha, and it was absolute crap. So Sony put another $200M into outsourcing the work ASAP to fix it.
How about a voting license that needs to be renewed every 30 years? You have to pass a test that checks if you are capable of thinking objectively or something like that.
Any criterion that is not absolute (like age) can and will be used to exclude certain groups of people from voting.
If you read the whole text and interpret the highlights as emphasis then it’s just annoying and hard to read (sort of like those people who add random commas everywhere). If you read just the highlighted text then it sounds like a summary, but there are mistakes in it, which is why I assumed AI.
I think it’s an AI summary (if you read just the highlighted part)
If a family member gets banned for cheating while playing your copy of a game, you (the game owner) will also be banned in that game
Hm… so if you don’t trust your kids not to do dumb things in games you also play, then don’t share them.
It baffles me that these types of jobs exist in the same area as mine. My company doesn’t care what hours I work as long as I get things done, has gone fully remote and is never going back, encourages people not to burn themselves out and to take time off, we have actual unlimited PTO (i.e. nobody coming after me for using too much), etc. I always thought that’s just the Silicon Valley mentality, but I keep seeing news of big tech companies doing all kinds of crazy backwards things and I don’t get it. All the perks I get are not because my company is run by angels; it’s because they understand we’re actually more productive that way.
He didn’t get arrested for AI-generated music. He got arrested for faking multiple accounts to upload music and using bots to generate fake listens, thus stealing millions of dollars. If he had done the same thing with music he actually wrote and played, he would still have been arrested.
I grew up as a PC gamer (if you can call 8-bit computers PCs too) and never had a console as a kid. I got an Xbox One when it came out, just because of the Kinect, and never played anything on it other than Just Dance. Playing on my PC is more convenient. I got a Switch and played some Pokémon, but couldn’t get in the habit of playing on a device instead of a PC. When I got a Switch emulator on my PC, I played more on that than I did on the actual Switch in all the time I owned it.