I gotta find out what the Knowledge Fight folks have to say about this.
Oh yeah, just don’t read about what happens to our prime ministers when they attempt to defy the empire. Totes democracy we got over here.
To the ASIO agent assigned to tracking my every online move:
Fun fact: in Australia we don’t have a bill of rights of any kind, so the cops can just force you to reveal your passwords. The maximum penalty for refusing is 2 years imprisonment.
Also, what does it mean to “tolerate” the existence of minorities? What exactly are we “tolerating”? Tolerance in every other context means to accept deviation from a standard or some negative outcome.
Framing anyone’s mere existence as a thing to be “tolerated” is to imply they are deviant or negative.
That’s where the paradox of tolerance loses me. I don’t think we should be tolerant in general. I think we should make value judgements about what is good or bad and act accordingly. Every society does this, and pretending we’re above it all and completely neutral is dishonest.
And if the “tolerance” is of differing views, diversity of thought is also good, not a bad thing to be tolerated.
It’s simple: we identify behaviour that is bad, like bigotry and hatred, and we say no. We’re not rejecting it because it’s merely different, and to accept that framing is to accept the cry-bullying of fascists. We reject them because they suck, and we don’t owe them shit about it.
Yes, the companies have a reputation to protect, but it’s also just a standard hype-cycle. If you pay attention to tech history these things always go in cycles like this.
Whether the tech is actually useful or not doesn’t actually matter. What matters is whether you can convince investors to fork over the cash with a shiny presentation.
The tech industry has basically habituated to surviving on selling us bullshit through hype cycles. I think it’s become dependent on them.
It’s so predictable too. Did someone do something indefensible and you don’t want to face up to it? Try blaming the victim today! Ask your propagandist if victim blaming is right for you.
Why do they have to “WANT” that? Ignoring the fact that they literally said they were happy it was changed back, why does that matter to the criticism? If it’s true, it’s true, and the fact that corporations are the ones in a position to habitually make terrible decisions about FOSS is a big problem. It’s valid to point out that it would be good to find a better way.
If anything it sounds like you “WANT” to ignore it.
The phrase “synthesised expert knowledge” is the problem here, because apparently you don’t understand that this machine has no meaningful ability to synthesise anything. It has zero fidelity.
You’re not exposing people to expert knowledge, you’re exposing them to expert-sounding words that cannot be made accurate. Sometimes they’re right by accident, but that is not the same thing as accuracy.
You confused what the LLM is doing for synthesis, which is something loads of people will do, and this will just lend more undue credibility to its bullshit.
If the judge said it then it would have been established fact in the case. This can be established by evidence and found as fact in the case, or it can be part of the agreed facts of the case, in which case the court doesn’t waste time hearing evidence. All it takes to become agreed fact is for the defence to present it as part of their case and for the prosecution to not dispute it.
In that context the finding of fact by the court is more than enough for the paper to report on it, and the two versions you presented, one attributed to the defence and one to the judge, are entirely compatible with one another. Nobody is going to demand to see the boy’s medical history to verify an uncontroversial point like this. That would just be a waste of time.
The papers presented it as stated by the defence and the judge, they said nothing false or misleading, and I don’t see any problem with that part of their reporting.
Now, if you have an issue that it was reported because it casts autistic people in a bad light, the issue becomes whether you think it’s something the papers should leave out. Well, the defence considered it important, and it became news. Not much we can do about that after the fact.
Almost like it does work on Firefox but for some reason they don’t want you using it. Honestly it’s so damn weird, why do that? Is there some incentive for them?
My apologies, I see that I have made a mistake. There are in fact 3 w’s in the sentence “Howard likes strawberries.”
It’s an illusion. People think that because the language model puts words into sequences like we do, there must be something there. But we know for a fact that it is just word associations. It is fundamentally just predicting the most likely next word and generating it.
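To make “predicting the most likely next word” concrete, here’s a toy bigram model in Python. It’s nothing like a real LLM in scale or sophistication (real models use learned weights over tokens, not raw counts, so treat this as an illustrative sketch only), but the core move is the same: emit the statistically likeliest continuation, with no model of meaning anywhere.

```python
from collections import Counter, defaultdict

# Toy "language model": count which word follows which in a tiny
# corpus, then always emit the most frequent successor.
corpus = "the cat sat on the mat the cat ate the fish".split()

successors = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    successors[prev][nxt] += 1

def generate(start, length=5):
    """Greedily chain most-likely next words from `start`."""
    word, out = start, [start]
    for _ in range(length):
        if word not in successors:
            break
        word = successors[word].most_common(1)[0][0]
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The output is fluent-looking word salad: each step is locally plausible, but nothing ever checks the whole sentence against a model of the world.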
If it helps, we have something akin to an LLM inside our brain, and it does the same limited task. Our brains have distinct centres that do all sorts of recognition and generative tasks, including images, sounds and language. We’ve made neural networks that do these tasks too, but the difference is that we have a unifying structure that we call “consciousness” that is able to grasp context, and is able to loop back the different centres into one another to achieve all sorts of varied results.
So we get our internal LLM to sequence words, one word after another, then we loop those words back through the language recognition centre into the context engine, so it can check whether the words match the message it intended to create, comparing them against its internal model of the world. If there’s a mismatch, it might ask for different words till it sees the message it wanted to see. This can all be done very fast, and we’re barely aware of it. Or, if it’s feeling lazy today, it might just blurt out the first sentence that springs to mind and it won’t make sense, and we might call that a brain fart.
Back in the 1880s “automatic writing” took off, which was essentially people tapping into this internal LLM and just letting the words flow out without editing. It was nonsense, but it had this uncanny resemblance to human language, and people thought they were contacting ghosts, because obviously there has to be something there, right? But there isn’t; it just sounds like people.
These LLMs only produce text forwards; they have no ability to create a sentence, then examine that sentence and see if it matches some internal model of the world. They have no capacity for context. That’s why any question involving A inside B trips them up, because that is fundamentally a question about context. “How many Ws are in the sentence ‘Howard likes strawberries’?” is a question about context, and that’s why they screw it up.
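For contrast, ordinary code answers the A-inside-B question trivially, because string methods operate directly on the characters rather than on predicted tokens. A minimal Python sketch, using the sentence from the thread:

```python
sentence = "Howard likes strawberries"

# Count the letter w, case-insensitively: one in "Howard",
# one in "strawberries".
w_count = sentence.lower().count("w")
print(w_count)  # 2
```

Which is also the punchline: the correct answer is 2, not the 3 the chatbot confidently reported above.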
I don’t think you solve that without creating a real intelligence, because a context engine would necessarily be able to expand its own context arbitrarily. I think allowing an LLM to read its own words back and do some sort of check for fidelity might be one way to bootstrap a context engine into existence, because that check would require it to begin to build an internal model of the world. I suspect the processing power and insights required for that are beyond us for now.
I’d be happy to help! There are 3 "w"s in the string “Howard likes strawberries”.
(though some might consider this an anti-feature)
To be fair, not everyone would say that, and the only reason you would call it an “anti-feature” is if you had an accurate understanding of the issues.
Thanks! As far as I know I’m not describing anything too unusual with the mixed-up signals, I think pins & needles is essentially that, just a bunch of nerves randomly firing, so you probably do know what it’s like in little doses.
I’ve been paying attention to it since I wrote that, and it definitely is still slightly more numb on the affected side, I think I was right that not all the nerves regrew completely.
Thanks, I hope they don’t do it. I would expect the security community to be able to find something like this, since it’s not hard to hook up some devices and do packet sniffing to detect if they’re talking to each other.
This would be an excellent use case for LTT’s Faraday cage room, for instance.
I’d be interested to see more information on that. I don’t doubt companies would do that, but some good information on when it happens and how to prevent it would be useful.
Is Dan the loud one? I’ve never learned their names, but I’m waiting for him to scream something about them horning in on their territory.