Not really - it isn’t prediction, it is early detection. Interpretive AI (finding and interpreting patterns) is way ahead of generative AI.
These are all me:
I control the following bots:
The irony that this story was posted by a bot…
Huh, don’t know what that was about. Edited.
Somebody might be getting a nasty AWS bill at the end of the month.
I’ve reported pictures/gifs of accidental nudity that were posted on Reddit without any evidence of consent, and they blew me off. Not just ignored me - they took the time to say the content was fine.
Yeah, it was legal to post stuff like that - no reasonable expectation of privacy in public places and all that. But it isn’t ethical. Don’t do it. It isn’t funny.
Well, LEDs are diodes - half-wave rectifiers that light up - so you wouldn’t need to add one. I don’t think I’ve ever heard of a half-wave rectifier referred to as a bridge rectifier.
A bridge rectifier flips the negative half of the waveform to positive, so instead of a sine wave you get a series of humps. Then a capacitor acts like a battery, as you describe, to smooth out the dips between the humps.
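To make that concrete, here’s a rough simulation (Python with numpy; the 60 Hz source, 1 V peak, and RC value are arbitrary numbers I picked for illustration):

```python
import numpy as np

# 60 Hz mains, normalized to 1 V peak, simulated for a few cycles
f, dt = 60.0, 1e-5
t = np.arange(0, 3 / f, dt)
v_ac = np.sin(2 * np.pi * f * t)

# Half-wave: a single diode (or LED) only conducts on positive half-cycles,
# so the output is off half the time - hence flicker at the line frequency.
v_half = np.maximum(v_ac, 0)

# Bridge (full-wave): negative half-cycles get flipped positive,
# giving a series of humps at twice the line frequency.
v_full = np.abs(v_ac)

# Capacitor smoothing: the cap charges on the humps and drains into the
# load during the dips, like a small battery. RC chosen arbitrarily.
rc = 0.02  # seconds; a bigger RC means less ripple
v_out = np.empty_like(v_full)
v_out[0] = v_full[0]
for i in range(1, len(t)):
    discharged = v_out[i - 1] * np.exp(-dt / rc)   # cap drains into the load
    v_out[i] = max(v_full[i], discharged)          # or recharges from the bridge

print(f"ripple without cap: {v_full.max() - v_full.min():.2f} V")
print(f"ripple with cap:    {v_out[len(t)//2:].max() - v_out[len(t)//2:].min():.2f} V")
```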
The LEDs I’ve had burn out were almost certainly defective, not normal wear.
Also, cheap ones run directly on AC, so they flicker at 60 Hz (50 in Europe) because the current is only flowing for half the cycle.
The most amazing thing to me - I’ve been using LEDs for 10+ years, and I think I’ve only had to replace one or two of them. It is a wonder that prices can keep coming down with replacement demand dwindling so much.
That’s my point. The AI isn’t an independent subject to be criticized, it is a cultural mirror.
The bias isn’t in the software, it is in the data. The stock photos of professional women that were fed in were white.
That doesn’t say anything about the AI, but rather about the community that created those biases.
I can’t claim to know what the designers intended, but having users spread across a large number of servers is terribly inefficient for how Lemmy works: each server maintains a copy of every community its users are subscribed to, and changes to those communities need to be communicated to each of those instances.
Given this architecture, it is much more efficient and robust to have users concentrate on what are effectively high-performance caching servers, with communities spread out across smaller, interest-focused instances.
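A back-of-the-envelope sketch of why (Python; the event counts and the one-message-per-subscribed-instance model are simplifications I’m assuming, not Lemmy’s exact protocol):

```python
# Simplified model: every post/comment/vote in a community must be pushed
# to every instance that has at least one subscriber to that community.

def federation_messages(events_per_day: int, subscriber_instances: int) -> int:
    """Messages the host instance must send per day for one community."""
    return events_per_day * subscriber_instances

events = 10_000  # posts, comments, votes per day in one busy community

# Users scattered across many tiny instances: nearly one instance per user.
print(federation_messages(events, subscriber_instances=5_000))  # 50,000,000

# The same users concentrated on a handful of big "cache" instances.
print(federation_messages(events, subscriber_instances=10))     # 100,000
```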
AI content isn’t watermarked, or detection would be trivial. What he’s talking about is that certain words have a certain probability of appearing after certain other words in a certain context. While there is some randomness to the output, certain words or phrases are unlikely to appear because the data the model was based on didn’t use them.
All I’m saying is that the more a writer’s style and word choice resemble the training data, the more likely their original content is to be flagged as AI generated.
Here’s the thing though - the probabilities for word choice come from the data the model was trained on. While someone who uses a substantially different writing style and word choice than the LLM could easily be identified as not being from the LLM, someone with a similar writing style might be indistinguishable from it.
Or, to oversimplify: given that Reddit was a large portion of the input data for ChatGPT, all you need to do is write like a Redditor to sound like ChatGPT.
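To illustrate the mechanism with a toy model (Python; real detectors score text with an actual LLM’s token probabilities - this bigram model and tiny corpus are just stand-ins):

```python
import math
from collections import Counter, defaultdict

# Stand-in "training data"; in reality this would be a huge corpus.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Count which word follows which: the model's word-choice probabilities.
bigrams = defaultdict(Counter)
for w1, w2 in zip(corpus, corpus[1:]):
    bigrams[w1][w2] += 1

vocab = set(corpus)

def avg_log_prob(text: str) -> float:
    """Average log-probability of each word given the previous one,
    with add-one smoothing so unseen pairs don't zero out the score."""
    words = text.split()
    total = 0.0
    for w1, w2 in zip(words, words[1:]):
        counts = bigrams[w1]
        p = (counts[w2] + 1) / (sum(counts.values()) + len(vocab))
        total += math.log(p)
    return total / max(len(words) - 1, 1)

# Text matching the corpus statistics scores high - "sounds like the model" -
# so a writer whose style matches the training data gets flagged the same
# way the model's own output would.
print(avg_log_prob("the cat sat on the rug ."))        # ~ -1.5
print(avg_log_prob("quantum flapjacks deplore tuesday"))  # ~ -2.1
```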
If it could, it couldn’t claim that the content it produced was original. If AI-generated content were detectable, that would be a tacit admission that it is entirely plagiarized.
The base assumption of those making that argument is that an AI is incapable of being original, so it is “stealing” anything it is trained on. The problem with that logic is that’s exactly how humans work - everything we say or do is derivative of our experiences. We combine pieces of information from different sources and connect them in a way that is original - at least from our perspective. And not surprisingly, that’s what we’ve programmed AI to do.
Yes, AI can produce copyright violations. They should be programmed not to. They should cite their sources when appropriate. AI needs to “learn” the same lessons we learned about not copy-pasting Wikipedia into a term paper.
Though, ironically, a scale of full - 3/4 - 1/2 - 1/4 - empty is perfectly fine for gas. There is usually a visual gauge showing the percentage of charge, but it isn’t as prominent as the range. Oddly, my car has it divided roughly in thirds.
The problem is that other vehicles adjust the projection based on current conditions - when I drive up a mountain, my projected range drops like a rock. When I drive back down I can end up with more range than I started. Reporting the “ideal” case during operation is misleading at best.
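The underlying arithmetic is simple - project from recent consumption instead of the rated ideal (a sketch; the pack size and consumption numbers are invented):

```python
def projected_range_km(battery_kwh_remaining: float,
                       recent_kwh_per_100km: float) -> float:
    """Range based on how the car is actually being driven right now,
    not the rated ideal consumption."""
    return battery_kwh_remaining / recent_kwh_per_100km * 100

battery = 40.0  # kWh left in the pack

print(projected_range_km(battery, 16.0))  # flat highway: 250 km
print(projected_range_km(battery, 28.0))  # climbing a mountain: ~143 km
print(projected_range_km(battery, 9.0))   # descending with regen: ~444 km
```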
Nature knows how to solve this problem.