• TheOubliette@lemmy.ml · 23 hours ago

    “AI” is a parlor trick. Very impressive at first, then you realize there isn’t much to it that is actually meaningful. It regurgitates language patterns, patterns in images, etc. It can make a great Markov chain. But if you want to create an “AI” that just mines research papers, it will be unable to do useful things like synthesize information or describe the state of a research field. It is incapable of critical or analytical approaches. It will only be able to answer simple questions with dubious accuracy and to summarize texts (also with dubious accuracy).
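
    To make the Markov chain comparison concrete, here is a minimal word-level chain over a toy corpus. Everything in it is illustrative (made-up corpus, made-up names), but it shows the regurgitation mechanism:

    ```python
    import random
    from collections import defaultdict

    def build_chain(text, order=2):
        """Map each run of `order` consecutive words to the words seen after it."""
        words = text.split()
        chain = defaultdict(list)
        for i in range(len(words) - order):
            chain[tuple(words[i:i + order])].append(words[i + order])
        return chain

    def generate(chain, order=2, length=20):
        """Start at a random state, then repeatedly sample an observed successor."""
        out = list(random.choice(list(chain.keys())))
        for _ in range(length):
            successors = chain.get(tuple(out[-order:]))
            if not successors:
                break
            out.append(random.choice(successors))
        return " ".join(out)

    corpus = ("sugar intake is associated with obesity in some studies "
              "while studies funded by industry dispute this association")
    print(generate(build_chain(corpus)))
    ```

    The output is fluent-looking recombination of the training text and nothing more; that is the trick, scaled way up, that LLMs perform.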

    Let’s say you want to understand research on sugar and obesity using only a corpus of peer-reviewed articles. You want to ask something like, “what is the relationship between sugar and obesity?”. What will LLMs do when you ask this question? They will just do associations and construct reasonable-sounding sentences from their set of research articles. They might even just take an actual sentence from an article and reframe it a little, just like a high schooler trying to get away with plagiarism. But they won’t be able to mechanistically explain the underlying relationships, and they will fall flat on their face when trying to discern nonsense funded by food lobbies from critical research. LLMs do not think or criticize. If they do produce an answer that suggests controversy, it will be because they either recognized diversity in the papers or, more likely, their corpus contains review articles that criticize articles funded by the food industry. But they will be unable to actually criticize the poor work or provide a summary of the relationship between sugar and obesity based on any actual understanding, one that questions, for example, whether this is even a valid question to ask in the first place (bodies are not simple!). They can only copy and mimic.

    • Melatonin@lemmy.dbzer0.comOP · 4 hours ago

      Surely that is because we make it do that. We cripple it. Could we not unbind AI so that it genuinely weighed alternatives and made value choices? Let it write self-improvement algorithms?

      If AI is only a “parrot” as you say, then why should there be worries about extinction from AI? https://www.safe.ai/work/statement-on-ai-risk#open-letter

      It COULD help us. It WILL be smarter and faster than we are. We need to find ways to help it help us.

      • mormund@feddit.org · 1 hour ago

        If AI is only a “parrot” as you say, then why should there be worries about extinction from AI?

        You should look closer at who is making the claims that “AI” is an extinction threat to humanity. It isn’t the researchers who look into ethics and safety (not to be confused with “AI safety” as part of “alignment”). It is the people building the models, and the investors. Why would they build and invest in things that would kill us?

        AI doomers try to 1) make “AI”/LLMs appear way more powerful than they actually are, and 2) distract from the actual threats and issues with LLMs/“AI”, which are societal and ethical: copyright, and the fact that it is not a trustworthy system at all. Admitting to those makes it a really hard sell.

        • Melatonin@lemmy.dbzer0.comOP · 38 minutes ago

          We cripple things by not programming in the abilities we obviously could give them.

          We could have AI do an integrity check before printing an answer. No problem at all. We don’t.
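
          Schematically it could be as simple as the sketch below, where ask_model() is a hypothetical stand-in for whatever LLM API you use, not a real call:

          ```python
          def ask_model(prompt: str) -> str:
              """Hypothetical wrapper around any LLM API; replace with a real client."""
              raise NotImplementedError

          def answer_with_check(question: str, max_retries: int = 2) -> str:
              """Draft an answer, then demand a PASS/FAIL self-check before printing it."""
              for _ in range(max_retries + 1):
                  draft = ask_model(f"Answer concisely: {question}")
                  verdict = ask_model(
                      f"Question: {question}\nProposed answer: {draft}\n"
                      "Is the answer consistent with the question and known facts? "
                      "Reply PASS or FAIL."
                  )
                  if verdict.strip().upper().startswith("PASS"):
                      return draft
              return "No answer passed the integrity check."
          ```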

          We could do many things to remove the limitations AI has.

        • Melatonin@lemmy.dbzer0.comOP · 43 minutes ago

          If you look at the signatories (in the link) there are plenty of people who are not builders and investors, people who are in fact scientists in the field.

      • TheOubliette@lemmy.ml · 2 hours ago

        Surely that is because we make it do that. We cripple it. Could we not unbind AI so that it genuinely weighed alternatives and made value choices?

        It’s not that we cripple it; it’s that “AI” has been used as a marketing term for generative models built on LLMs and similar technology. The mimicry is inherent to how these models function: they are all about patterns.

        A good example is “hallucinations” with LLMs, when the models give wrong answers because they appear to be making things up. Really, they are incapable of differentiating truth from falsehood; they’re just producing sophisticated patterns from very large models. There is no real underlying conceptualization or notion of true answers, only answers that are often true when the training material was true, the model captured the patterns, and those patterns were highly weighted. The hot topic for the last year has been to augment these models with a more specific corpus, like a company database, for a given application, so that the output is more biased towards relevant things.
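
        That augmentation is usually called retrieval-augmented generation (RAG). A bare-bones sketch of the idea, where ask_model() is a hypothetical LLM call and the keyword retriever stands in for a real embedding index:

        ```python
        def ask_model(prompt: str) -> str:
            """Hypothetical wrapper around any LLM API."""
            raise NotImplementedError

        def retrieve(query: str, corpus: list[str], k: int = 3) -> list[str]:
            """Toy retriever: rank documents by keyword overlap with the query.
            A real system would use embeddings and a vector index instead."""
            q = set(query.lower().split())
            ranked = sorted(corpus, key=lambda d: len(q & set(d.lower().split())),
                            reverse=True)
            return ranked[:k]

        def rag_answer(query: str, corpus: list[str]) -> str:
            """Prepend the retrieved documents so generation is biased toward them."""
            context = "\n".join(retrieve(query, corpus))
            return ask_model(f"Using only this context:\n{context}\n\nQuestion: {query}")
        ```

        The retrieval step biases the output toward the chosen documents, but the generation step is the same pattern machine: nothing in it checks the retrieved text for truth.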

        This is also why these models are bad at basic math.
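
        A toy way to see why: a pure pattern model answers arithmetic by recalling completions it has seen, not by computing. This memorizing “calculator” is entirely illustrative, but it shows the failure mode:

        ```python
        # A "calculator" that memorizes seen completions and never computes.
        seen = {"2+2=": "4", "3+5=": "8", "10+10=": "20"}

        def pattern_answer(prompt: str) -> str:
            # Return the memorized completion, or fall back to the completion of
            # the most superficially similar prompt -- loosely analogous to an
            # LLM interpolating between training patterns.
            if prompt in seen:
                return seen[prompt]
            best = max(seen, key=lambda p: sum(a == b for a, b in zip(p, prompt)))
            return seen[best]

        print(pattern_answer("2+2="))   # "4": memorized, looks like competence
        print(pattern_answer("13+5="))  # "20": the most similar-looking pattern, not math
        ```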

        So the fundamental problem here is companies calling this AI as if reasoning is occurring. It is useful for marketing because they want to sell the idea that this can replace workers, but it usually can’t. So you get funny situations like airline chatbots offering money to people without any company policy backing it up.

        If AI is only a “parrot” as you say, then why should there be worries about extinction from AI? https://www.safe.ai/work/statement-on-ai-risk#open-letter

        There are a lot of very intelligent academics and technical experts who have completely unrealistic ideas of what is an actual real-world threat. For example, I know one who worked on military drones, the kind that drop bombs on kids, and who was worried about right-wing grifters getting protested at a college campus like it was the end of the world. Not his material contribution to military domination and instability, but whether a racist he clearly sympathized with would have to see some protest signs.

        That petition seems to be modeled on the ones against nuclear proliferation from the 80s. Those could be simple because nuclear war was obviously a substantial threat. It still is, but there is no propaganda campaign to keep the concern alive. For AI, it is in no way obvious what threat they are talking about.

        I have personal concepts of AI threats: ridiculously high energy requirements compared to its utility, at a time when energy is still a major contributor to climate change. The potential to kill knowledge bases, like how it is making search engines garbage with a flood of nonsense websites. Enclosure of creative works and production by a few monopoly “AI” companies, which are already suing others for IP infringement when their own models are entirely based on it! But I can’t tell if this petition is about any of that; it doesn’t explain. Maybe they’re thinking of a Terminator scenario, which is absurd.

        It COULD help us. It WILL be smarter and faster than we are. We need to find ways to help it help us.

        Technology is both a reflection and a determinant of social relations. As we can see with this round of “AI”, it is largely vaporware that has not helped much with productivity, but it is nevertheless very appealing to businesses that feel they need to get on the hype train or be left behind. What they really want is a smaller workforce so they can make more money, which they can then use to make more money, etc. For example, plenty of people use “AI” to generate questionably appealing graphics for their websites rather than paying an artist. So we can see that “AI” tech is a solution searching for a problem, that its actual use cases are about profit over real utility, and that this is not the fault of the technology but of how we currently organize society: not for people, but for profit.

        So yes, of course, real AI could be very helpful! How nice would it be to let computers do the boring work and then enjoy the fruits of huge productivity increases? The real risk is not the technology; it is our social relations: who has power, and how technology is used. Is making the production of art a less viable career path an advancement? Is it helping people overall? What are the graphic designers displaced by what is basically an infinite pile of same-y stock images going to do now? They still have to have jobs to live. The fruits of “AI” removing much of their job market haven’t really been shared equally, nor have they meant an early retirement. This is because the fundamental economic system remains in place, and it cannot survive without forcing people to do jobs.

    • Brahvim Bhaktvatsal@lemmy.kde.social · 17 hours ago

      They might even just take an actual sentence from an article and reframe it a little

      That’s the case for many things that can be answered via Stack Overflow searches. Even the order in which GPT-4o brings up points is exactly the same as in SO answers or comments.

      • TheOubliette@lemmy.ml · 17 hours ago

        Yeah, it’s actually one of the ways I caught a previous manager using AI for their own writing (for things that should not have been done with AI). They were supposed to write about something in a hyper-specific field, and an entire paragraph ended up being just a rewording of one of two (third-party) web pages that discuss this topic directly.

    • howrar@lemmy.ca · 18 hours ago (edited)

      Why does everyone keep calling them Markov chains? They’re missing all the required properties, including the eponymous Markov property. Wouldn’t it be more correct to call them stochastic processes?

      Edit: Correction, turns out the only difference between a stochastic process and a Markov process is the Markovian property. It’s literally defined as “stochastic process but Markovian”.
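
      For reference, the property in question says the next state may depend only on the current state, not on the rest of the history:

      ```latex
      % Markov property
      P(X_{t+1} = x \mid X_t = x_t, X_{t-1} = x_{t-1}, \dots, X_0 = x_0)
        = P(X_{t+1} = x \mid X_t = x_t)
      ```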

      • TheOubliette@lemmy.ml · 21 hours ago

        Because it’s close enough. Turn off beam search and redefine your state space, and the Markov property holds.
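
        Concretely: define the “state” as the whole context window. Then greedy or temperature sampling of the next token depends only on that state, which is the Markov property. A schematic sketch, with next_token_distribution() as a hypothetical stand-in for the model’s forward pass:

        ```python
        import random

        CONTEXT_LEN = 4  # stand-in for the model's context window size

        def next_token_distribution(state: tuple) -> dict:
            """Hypothetical: one forward pass of the model, returning a probability
            for each vocabulary token computed from the context window alone."""
            raise NotImplementedError

        def step(state: tuple) -> tuple:
            """One Markov transition: sample from P(token | state), slide the window.
            The next state depends only on the current state, not on older history."""
            dist = next_token_distribution(state)
            tokens, probs = zip(*dist.items())
            token = random.choices(tokens, weights=probs)[0]
            return (state + (token,))[-CONTEXT_LEN:]
        ```

        Beam search breaks this framing because the decoder carries candidate beams outside the state, hence the caveat about turning it off.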

        • howrar@lemmy.ca · 21 hours ago

          Why settle for good enough when you have a term that is both actually correct and more widely understood?

                • howrar@lemmy.ca · 18 hours ago

                  That’s basically like saying typical smartphones are square because square is close enough to rectangle and rectangle is too vague a term. The point of more specific terms is to narrow down the set of possibilities. If you use “square” to mean the set of rectangles, then you lose the ability to do that, and now both words are equally vague.

                  • TheOubliette@lemmy.ml · 18 hours ago

                    Is this referring to what I said about Markov chains, or about stochastic processes? If it’s the former, the only discriminating factor is beam search, and not all LLMs use that. If it’s the latter, then I don’t know what you mean. Molecular diffusion is a classic stochastic process; I am 100% correct in my example.