• sugar_in_your_tea@sh.itjust.works
      11 days ago

It’s also neural networks, and probably some other CS techniques.

      AI is a category, and even specific implementations tend to use multiple techniques.

      • brucethemoose@lemmy.world
        11 days ago

Well, there’s a very specific architectural “rut” that the LLMs people use have fallen into, and even small attempts to break out of it (like Jamba) don’t seem to get much interest, unfortunately.

        • sugar_in_your_tea@sh.itjust.works
          11 days ago

Sure, but LLMs aren’t the only AI being used, nor will they eliminate the other forms of AI. As people see issues with the big LLMs, development focus will shift toward other approaches.

          • commandar@lemmy.world
            11 days ago

There is a real risk that the hype cycle around LLMs will smother other research in the cradle when the bubble pops.

The hyperscalers are dumping tens of billions of dollars into infrastructure investment every single quarter right now on the promise of LLMs. If LLMs don’t turn into something with a tangible ROI, the term “AI” will become every bit as radioactive to investors in the future as it is lucrative right now.

            Viable paths of research will become much harder to fund if investors get burned because the business model they’re funding right now doesn’t solidify beyond “trust us bro.”