☆ Yσɠƚԋσʂ ☆
- 4.98K Posts
- 7.19K Comments
And then they’d need to be able to verify that the code actually meets these requirements. That might even necessitate specifying these requirements in some sort of a formal language…
☆ Yσɠƚԋσʂ ☆@lemmy.ml OP to General Programming Discussion@lemmy.ml • cool-retro-term: a terminal emulator which mimics the old cathode display...
1 · 5 days ago
Oh, you can tune the effects in it so they're not overbearing.
☆ Yσɠƚԋσʂ ☆@lemmy.ml OP to Technology@lemmy.ml • China’s Open-Source AI Blitz Overtakes America
161 · 7 days ago
It’s a completely different situation in China. This tech is being treated as an open-source commodity, similar to Linux, and companies aren’t trying to monetize it directly. There’s no crazy investment bonanza happening in China either. Companies like DeepSeek are developing this tech on fairly modest budgets, and they’re already starting to make money: https://www.cnbc.com/2025/07/30/cnbcs-the-china-connection-newsletter-chinese-ai-companies-make-money.html
☆ Yσɠƚԋσʂ ☆@lemmy.ml OP to Technology@lemmy.ml • German researchers achieved 71.6% on ARC-AGI using a regular GPU for 2 cents per task. OpenAI's o3 gets 87% but costs $17 per task making it 850x more expensive.
2 · 7 days ago
I mean, the paper and code are published. This isn’t a heuristic, so there’s no loss of accuracy. I’m not sure why you’re saying this is too good to be true; the whole field is very new, and there’s plenty of low-hanging fruit for optimizations that people are still discovering. Right now, a discovery like this is made every few months. Eventually people will pluck all the easy wins and it’s going to get harder to dramatically improve performance, but for the foreseeable future we’ll be seeing a lot more results like this.
☆ Yσɠƚԋσʂ ☆@lemmy.ml OP to Technology@lemmy.ml • German researchers achieved 71.6% on ARC-AGI using a regular GPU for 2 cents per task. OpenAI's o3 gets 87% but costs $17 per task making it 850x more expensive.
2 · 7 days ago
Almost certainly, given that it drastically reduces the cost of running models. Whether you run them locally or a company is selling a service, the benefits here are pretty clear.
☆ Yσɠƚԋσʂ ☆@lemmy.ml OP to Technology@lemmy.ml • German researchers achieved 71.6% on ARC-AGI using a regular GPU for 2 cents per task. OpenAI's o3 gets 87% but costs $17 per task making it 850x more expensive.
2 · 7 days ago
I haven’t tried it with ollama, but it can download gguf files directly if you point it at a Hugging Face repo. There are a few other runners like vllm and llama.cpp, and you can also just run the project directly with Python. I expect the whole Product of Experts algorithm will get adopted by all models going forward, since it’s such a huge improvement and you can just swap out the current approach.
☆ Yσɠƚԋσʂ ☆@lemmy.ml to Asklemmy@lemmy.ml • What is a celebrated invention/discovery attributed to a single person but, had they not done it at the time, it is likely that someone else would have done it anyway not long after?
1 · 8 days ago
Exactly. Incidentally, there’s a great TV series on the subject: https://www.imdb.com/title/tt0078588/
☆ Yσɠƚԋσʂ ☆@lemmy.ml OP to Technology@lemmy.ml • Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.
41 · 8 days ago
I’ve literally been contextualizing the article for you throughout this whole discussion. At least we can agree that continuing this is pointless. Bye.
☆ Yσɠƚԋσʂ ☆@lemmy.ml OP to Technology@lemmy.ml • Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.
41 · 8 days ago
And once again, what the article is actually talking about is how LLMs are being sold to investors. At this point, I get the impression that you simply lack the basic reading comprehension to understand the article you’re commenting on.
☆ Yσɠƚԋσʂ ☆@lemmy.ml OP to Technology@lemmy.ml • Cutting-edge research shows language is not the same as intelligence. The entire AI bubble is built on ignoring it.
4 · 8 days ago
The title is not false. If you actually bothered to read the article, you’d see that the argument being made is that AI tech companies are selling their investors a vision that’s at odds with the research. The current LLM-based approach to AI cannot achieve general intelligence.
If you’re using a modern computer, then you’re buying it from one of the handful of megacorps around. Apple isn’t really special in this regard.
Does that run on Asahi, though? I couldn’t figure out how to get it working.
Erlang isn’t special because it’s functional, but rather it’s functional because that was the only way to make its specific architecture work. Joe Armstrong and his team at Ericsson set out to build a system with nine nines of reliability. They quickly realized that to have a system that never goes down, you need to be able to let parts of it crash and restart without taking down the rest. That requirement for total isolation forced their hand on the architecture, which in turn dictated the language features.
The specialness is entirely in the BEAM VM itself, which acts less like a language runtime like the JVM or CLR, and more like a mini operating system. In almost every other environment, threads share a giant heap of memory. If one thread corrupts that memory, the whole ship sinks. In Erlang, every single virtual process has its own tiny, private heap. This is the killer architectural feature that makes Erlang special. Because nothing is shared, the VM can garbage collect a single process without stopping the world, and if a process crashes, it takes its private memory with it, leaving the rest of the system untouched.
The functional programming aspect is just the necessary glue to make a shared-nothing architecture usable. If you had mutable state scattered everywhere, you couldn’t trivially restart a process to a known good state. So, they stripped out mutation to enforce isolation. The result is that Erlang creates a distributed system inside a single chip. It treats two processes running on the same core with the same level of mistrust and isolation as two servers running on opposite sides of the Atlantic.
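To make that restart-to-a-known-state idea concrete, here’s a toy Python sketch (everything in it is made up for illustration, and it’s nowhere near real BEAM semantics): a supervisor catches a crash in one process and resets only that process’s private state, leaving every other process untouched.

```python
class Process:
    def __init__(self, name, initial_state, handler):
        self.name = name
        self.initial_state = initial_state
        self.state = dict(initial_state)   # private heap: never shared
        self.handler = handler

    def handle(self, msg):
        # the handler returns the next private state, or raises
        self.state = self.handler(self.state, msg)

class Supervisor:
    def __init__(self, processes):
        self.procs = {p.name: p for p in processes}
        self.restarts = 0

    def send(self, name, msg):
        try:
            self.procs[name].handle(msg)
        except Exception:
            # "let it crash": throw away the corrupt private state and
            # restart just this one process at its known-good state
            self.procs[name].state = dict(self.procs[name].initial_state)
            self.restarts += 1

def counter(state, msg):
    if msg == "boom":
        raise RuntimeError("crash")
    return {**state, "count": state["count"] + msg}

sup = Supervisor([
    Process("a", {"count": 0}, counter),
    Process("b", {"count": 0}, counter),
])
sup.send("a", 5)
sup.send("b", 3)
sup.send("a", "boom")          # a crashes and restarts fresh
print(sup.procs["a"].state)    # {'count': 0}
print(sup.procs["b"].state)    # {'count': 3} -- unaffected by a's crash
```

The point of the toy is the restart path: because a process’s only state is its private heap, recovery is just “re-initialize this one object”, with no shared memory to untangle.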
Learning functional style can be a bit of a brain teaser, and I would highly recommend it. Once you learn to think in this style it will help you write imperative code as well because you’re going to have a whole new perspective on state management.
And yeah, there are functional languages that don’t rely on a VM; Carp is a good example: https://github.com/carp-lang/Carp
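As a tiny taste of the style, here’s the same running-maximum computation written both ways in Python (a made-up example, not from any particular tutorial): the functional version threads state through a fold instead of mutating an accumulator in place.

```python
from functools import reduce

# imperative: mutate an accumulator in place
def running_max_imperative(xs):
    best, out = float("-inf"), []
    for x in xs:
        best = max(best, x)
        out.append(best)
    return out

# functional: each step returns a new value; nothing is mutated
def running_max_functional(xs):
    def step(acc, x):
        best, out = acc
        new_best = max(best, x)
        return (new_best, out + [new_best])
    return reduce(step, xs, (float("-inf"), []))[1]

print(running_max_functional([3, 1, 4, 1, 5]))  # [3, 3, 4, 4, 5]
```

Once you see state as a value being passed along rather than a box being overwritten, the imperative version reads differently too: `best` is really the fold's accumulator in disguise.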
RISC-V would be a huge step forward, and there are projects like this one working on building a high-performance architecture around it. But I’d argue that we should really be rethinking the way we do programming as well.
The problem goes deeper than just the translation layer, because modern chips are still contorting themselves to maintain a fiction for a legacy architecture. We are basically burning silicon and electricity to pretend that modern hardware acts like a PDP-11 from the 1970s, because that is what C expects. C assumes a serial abstract machine, where one thing happens after another in a flat memory space, but real hardware hasn’t worked that way in decades. To bridge that gap, modern processors have to implement insane amounts of instruction-level parallelism just to keep the execution units busy.
This obsession with pretending to be a simple serial machine also causes security nightmares like Meltdown and Spectre. When the processor speculates past an access check and guesses wrong, it throws the work away, but that discarded work leaves side effects in the cache that attackers can measure. It’s a massive security liability introduced solely to let programmers believe they are writing low-level code when they are actually writing for a legacy abstraction. On top of that, you have things like the register rename engine, which is a huge consumer of power and die area, running constantly to manage dependencies in scalar code. If we could actually code for the hardware, like how GPUs handle explicit threading, we wouldn’t need all this dark silicon wasting power on renaming and speculation just to extract speed from a language that refuses to acknowledge how modern computers actually work. This is a fantastic read on the whole thing: https://spawn-queue.acm.org/doi/10.1145/3212477.3212479
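Here’s a miniature Python illustration of the explicit-parallelism point (the chunk size and worker count are arbitrary, and Python’s GIL means the threads won’t literally run this in parallel; the point is how the structure is expressed in code): a serial reduction hides the independence of its iterations behind one long dependency chain, while a map over chunks states that independence outright.

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(1_000))

# serial form: every iteration depends on the previous one through
# `total`, so only the CPU's speculation machinery can find parallelism
total = 0
for x in data:
    total += x * x

# explicit form: independent chunks, combined once at the end; the
# parallelism is visible in the program structure itself
def chunk_sum(chunk):
    return sum(x * x for x in chunk)

chunks = [data[i:i + 250] for i in range(0, len(data), 250)]
with ThreadPoolExecutor(max_workers=4) as pool:
    parallel_total = sum(pool.map(chunk_sum, chunks))

print(total == parallel_total)  # True
```

This is the same shape GPU code takes: you hand the machine explicitly independent work instead of asking it to reverse-engineer independence out of a serial loop.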
We can look at Erlang/OTP for an example of what a language platform looks like when it stops lying about the hardware and actually embraces how modern chips work. Erlang was designed from the ground up for massive concurrency and fault tolerance. In C, creating a thread is an expensive OS-level operation, and managing shared memory between threads is a nightmare that requires complex locking with mutexes and forces the CPU to work overtime maintaining cache coherency.
Meanwhile, in the Erlang world, you don’t have threads sharing memory. Instead, you have lightweight processes that use something like 300 words of memory each, share nothing, and only communicate by sending messages. Because the data is immutable and isolated, the CPU doesn’t have to waste cycles worrying about one core overwriting what another core is reading. You don’t need complex hardware logic to guess what happens next, because the parallelism is explicit in the code rather than hidden. The Erlang VM basically spins up a scheduler on each physical core and just churns through these millions of tiny processes. It feeds the hardware independent, parallel chunks of work without the illusion of serial execution, which is exactly what the hardware wants. So, if you designed a whole stack from hardware to software around this idea, you could get a far better overall architecture.
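For a rough feel of the model, here’s a single-threaded Python toy (hypothetical names, nothing like the real VM’s internals): each “process” is just a mailbox plus a handler over private state, and a round-robin scheduler delivers one message at a time. Processes never read each other’s state; they only append to each other’s mailboxes.

```python
from collections import deque

class Proc:
    def __init__(self, handler, state):
        self.mailbox = deque()
        self.handler = handler
        self.state = state          # private: no other Proc touches it

def run(procs):
    # round-robin: keep scheduling until every mailbox is drained
    while any(p.mailbox for p in procs.values()):
        for p in procs.values():
            if p.mailbox:
                msg = p.mailbox.popleft()
                # a handler may only update its own state or send
                # messages; it never reaches into another process
                p.state = p.handler(p.state, msg, procs)
    return procs

def ping(state, msg, procs):
    sender, n = msg
    if n > 0:
        procs[sender].mailbox.append(("ping" if sender == "pong" else "pong", n - 1))
    return state + 1                # count messages handled

procs = {"ping": Proc(ping, 0), "pong": Proc(ping, 0)}
procs["ping"].mailbox.append(("pong", 4))   # start a 4-hop ping-pong
run(procs)
print(procs["ping"].state, procs["pong"].state)  # prints: 3 2
```

The real VM does this preemptively with one scheduler per core and millions of processes, but the shape is the same: lots of tiny isolated units of work fed to the hardware as-is.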
It’s all ARM now; there’s software x86 emulation on macOS. I guess you could run an x86 VM on Linux, but I’m not sure how fast that would be.
The main problem is that you’re pretty limited with software, since you can only run stuff that’s been compiled for the architecture.
I got one from a startup I worked at a couple of years ago, and then when the whole Silicon Valley Bank crash happened they laid me off, but let me keep it. And yeah, Asahi is still pretty barebones, mainly because you can basically only use open-source apps that can be compiled for it. I’m really hoping to see something like the M series come out of China, but using RISC-V and running Linux.
that’s the joke :)