• mipadaitu@lemmy.world
    18 days ago

    This shows that AI isn’t an infallible machine that gets everything right — instead, we can think of it as a person who can think quickly, but its output needs to be double-checked every time. AI is certainly a useful tool in many situations, but we can’t let it do the thinking for us, at least for now.

    No, it’s not “like a person who can think.” Unless you mean it’s like an ADHD person who got distracted halfway through the transcript and started working on a different project in the same file.

    • Optional@lemmy.world
      18 days ago

      Agreed.

      we can think of it as a person who can think quickly

      No.

      Do not do this. This way lies madness. It’s a text prediction system which is incredibly complex just to get it to barf out three sentences that sound about right. It is not “thinking” shit.

  • themurphy@lemmy.ml
    18 days ago

    It’s great that they’re starting to use these tools, but I hope they keep in mind that they need a lot of fine-tuning before they’re reliable enough.

    You need to use a product in practice to make it better, but you don’t need to rely on it from the start. They need to invest time in the implementation, and in figuring out where it can and shouldn’t be used.

    Nice progress towards something good, but we’re not there yet.

  • RobotToaster@mander.xyz
    18 days ago

    How can it be that bad?

    I’ve used Zoom’s AI transcriptions for far less mission-critical stuff, and it’s generally fine (I still wouldn’t trust it for medical purposes).

    • huginn@feddit.it
      18 days ago

      Zoom AI transcriptions also make things up.

      That’s the point. They’re hallucination engines. They pattern match and fill holes by design. It doesn’t matter if the match isn’t perfect; they’ll patch over the gap with nonsense instead.

    • ElPussyKangaroo@lemmy.world
      18 days ago

      It’s not the transcripts that are the issue here. It’s that the transcripts are being interpreted by the model to give information.

    • Grimy@lemmy.world
      18 days ago

      Whisper has been known to hallucinate during long stretches of silence, though most of the article’s examples are most likely due to bad audio quality.

      I use Whisper quite a bit, and it will fumble a word here or there, but never to the extent shown in the article.
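
      For what it’s worth, the open-source `openai-whisper` package exposes knobs that help with exactly the silence-hallucination behavior described above. A minimal sketch (the model size and audio filename here are just placeholder assumptions, not from the article):

      ```python
      # Sketch: transcribing with openai-whisper while reducing hallucinations
      # on silence. "meeting.wav" is a hypothetical input file.
      import whisper

      model = whisper.load_model("base")  # small model for illustration

      result = model.transcribe(
          "meeting.wav",
          condition_on_previous_text=False,  # stop one bad segment from seeding the next
          no_speech_threshold=0.6,           # skip segments the model judges to be silence
          logprob_threshold=-1.0,            # discard very low-confidence decodes
      )
      print(result["text"])
      ```

      `condition_on_previous_text=False` in particular breaks the feedback loop where a hallucinated segment becomes the prompt for the next one, which is a common cause of the model drifting into invented text during long pauses.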