FunkyStuff [he/him]

  • 0 Posts
  • 11 Comments
Joined 3 years ago
Cake day: June 9th, 2021


  • They have a journalistic responsibility to include relevant information and seek comment from relevant groups. The article doesn’t include any comment from healthcare professionals or advocacy groups, nor does it contain any information about the potential consequences of the surgeries being banned. It fails to actually inform the public on the issue at hand, and the auxiliary information that is brought up just casts doubt on the medical procedure by framing it as controversial (yeah, controversial because of transphobes). Presenting only a narrow slice of the issue while making it seem like that’s the whole picture makes this article functionally the same as transphobic propaganda.



  • There is absolutely reason to capitulate: the Ukrainian Armed Forces have an average age above 40 and are scraping the bottom of the barrel by conscripting any poor man they can find. There is no universe where Ukraine pushes back the front to recover any substantial amount of territory. They have already attempted counteroffensives in better conditions, and all they achieved was to slow Russia down, never actually regaining territory. The only reason they’re still in the war is that NATO wants to sacrifice Ukrainian lives to weaken Russia, while corrupt Ukrainian politicians make a quick buck by privatizing their country in the meantime. This is obviously not sustainable long term; in another year or two they won’t be able to recruit more people, or they’ll run out of artillery shells.







  • It won’t be long (maybe 3 years max) before industry adopts some technique for automatically prompting an LLM to generate code to fulfill a certain requirement, then iteratively improving it against test data until it passes all test cases. And I’m pretty sure there are already ways to get LLMs to generate test cases. So this could go nightmarishly wrong very fast if industry adopts that technology and starts integrating hundreds of unnecessary libraries or pieces of code that the AI just learned to “spam” everywhere, so to speak. These things are way dumber than we give them credit for.
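A rough sketch of the loop I mean. `ask_llm` is a made-up stand-in for a real model call (here it just cycles through canned candidates so the example actually runs), but the generate → test → re-prompt structure is the point:

```python
def ask_llm(prompt, attempt):
    # Stand-in for an actual LLM API call (not a real API);
    # returns canned candidates so the loop is runnable.
    candidates = [
        "def add(a, b):\n    return a - b",  # buggy first attempt
        "def add(a, b):\n    return a + b",  # fixed on retry
    ]
    return candidates[min(attempt, len(candidates) - 1)]

def passes_tests(source, tests):
    # Execute the generated code and check it against the test cases.
    namespace = {}
    try:
        exec(source, namespace)
        return all(namespace["add"](a, b) == out for a, b, out in tests)
    except Exception:
        return False

def generate_until_passing(tests, max_attempts=5):
    # Keep re-prompting until some candidate passes every test case.
    for attempt in range(max_attempts):
        code = ask_llm("write add(a, b)", attempt)
        if passes_tests(code, tests):
            return code
    return None

solution = generate_until_passing([(1, 2, 3), (0, 0, 0)])
```

Note the loop only cares that the tests pass, not that the code is minimal or sane, which is exactly how you end up with bloated “spam” code that happens to satisfy the test suite.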


  • You have a pretty interesting idea that I hadn’t heard elsewhere. Do you know if there’s been any research to make an AI model learn that way?

    While messing around with some ML stuff in my own time, I’ve heard of approaches (curriculum learning, I think it’s called) where you try to get the model to accomplish progressively more complex tasks in the same domain. For example, if you wanted to train a model to control an agent in a physics simulation to walk like a humanoid, you’d have it learn to crawl first, like a real human. I guess for an AGI it makes sense that you would have it try to learn a model of the world across different domains like vision and sound. Heck, since you can plug any kind of input into it, you could have it process radio, infrared, whatever else. That way it could have a very complete model of the world.
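    To make the curriculum idea concrete, here’s a toy sketch: the “model” is just a single learned threshold, and each stage is the same task (is x above 0.5?) with progressively noisier labels, warm-starting from the previous stage. Everything here is made up for illustration, not a real training recipe:

```python
import random

def make_stage(noise, n=200, seed=0):
    # Same task at increasing difficulty: harder stages flip more labels.
    rng = random.Random(seed)
    data = []
    for _ in range(n):
        x = rng.uniform(0, 1)
        label = 1 if x > 0.5 else 0
        if rng.random() < noise:
            label = 1 - label
        data.append((x, label))
    return data

def errors(threshold, data):
    # Count misclassifications for a simple threshold classifier.
    return sum((1 if x > threshold else 0) != y for x, y in data)

def refine(threshold, data, step=0.05):
    # Local search around the current threshold, warm-started
    # from the previous (easier) stage.
    best = threshold
    for candidate in (threshold - step, threshold + step):
        if errors(candidate, data) < errors(best, data):
            best = candidate
    return best

# Curriculum: learn on clean data first, then noisier versions
# of the same task, instead of starting on the hardest stage.
threshold = 0.0
for noise in (0.0, 0.1, 0.2):
    for _ in range(20):
        threshold = refine(threshold, make_stage(noise))
```

    The easy stage gets the threshold near the true boundary (0.5), so the noisy stages only have to fine-tune it, which is the whole appeal of a curriculum: easy tasks shape the solution before the hard ones would have drowned it in noise.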