I’m usually the one saying “AI is already as good as it’s gonna get, for a long while.”

This article, in contrast, quotes folks building the next AI generation - saying the same.

  • GetOffMyLan@programming.dev (+22/−1) · 2 days ago

    LLMs are based on neural networks, which are a massively simplified model of how our brain works. So you kind of can compare them, as long as you keep in mind they are orders of magnitude simpler.

    • utopiah@lemmy.world (+6) · 1 day ago

      At some point it becomes so “simplified” it’s arguably just not the same thing, even conceptually.

      • GetOffMyLan@programming.dev (+2/−1) · edited · 10 hours ago

        It is conceptually the same thing: a series of interconnected neurons, each with a firing threshold and weighted connections.

        The simplification comes in how the information is transmitted and how our brain learns.
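        The abstraction described above can be sketched in a few lines. This is a minimal illustration, not code from any actual LLM: a single artificial neuron that sums weighted inputs and fires when the total crosses a threshold (the weights and bias here are hand-picked for the example).

        ```python
        def neuron(inputs, weights, bias, threshold=0.0):
            """Fire (output 1) when the weighted input sum crosses the threshold."""
            activation = sum(i * w for i, w in zip(inputs, weights)) + bias
            return 1 if activation > threshold else 0

        # Hand-picked weights make this neuron behave like a logical AND gate:
        # it fires only when both inputs are on (1.2 - 1.0 = 0.2 > 0).
        print(neuron([1, 1], [0.6, 0.6], -1.0))  # 1
        print(neuron([1, 0], [0.6, 0.6], -1.0))  # 0
        ```

        Real networks stack millions of these and learn the weights automatically; biological neurons are vastly more complicated than this weighted sum, which is the simplification being discussed.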

        Many functions in the human body rely on quantum mechanical effects to work correctly. So to simulate it properly, each connection would really need to be its own supercomputer.

        But it has been shown to be able to encode information in a similar way. The learning part is not even close.