Dutch lawyers increasingly have to convince clients that they can’t rely on AI-generated legal advice because chatbots are often inaccurate, the Financieele Dagblad (FD) found when speaking to several law firms. A recent Deloitte survey showed that 60 percent of law firms see clients trying to perform simple legal tasks with AI tools, hoping for a faster turnaround or lower fees.

  • a4ng3l@lemmy.world · 3 days ago

    It’s been doing wonders to help me improve materials I produce so that they better fit certain audiences. I can also use it to spot missing points and inconsistencies against the ton of documents we have in my shop when writing something. It’s been quite useful as a sparring partner so far.

    • The_Almighty_Walrus@lemmy.world · 3 days ago

      It’s great when you have basic critical thinking skills and can use it like a tool.

      Unfortunately, many people don’t have those and just use AI as a substitute for their own brain.

      • a4ng3l@lemmy.world · 3 days ago

        Yeah well, the same applies to a lot of tools… I’m not certified to fly a plane, and look at me not flying one either… but I’m not shitting on planes…

          • Lurking Hobbyist🕸️@lemmy.world · 2 days ago

            I understand what you mean, but… *looks at Birgenair 301 and Aeroperu 603* *looks at Qantas 72* *looks at the 737 Max 8 crashes* Planes have spat out false data, and of the five cases mentioned, only one avoided disaster.

            It is down to the humans in the cockpit to filter through the data and know what can be trusted. Which could be similar to LLMs, except cockpits have a two-person team to catch errors and keep things safe.

            • ToTheGraveMyLove@sh.itjust.works · 2 days ago

              So you found five examples in the history of human aviation. How often do you think AI hallucinates information? Because I can guarantee you it’s a hell of a lot more frequently than that.

              • Lurking Hobbyist🕸️@lemmy.world · 2 days ago

                You should check out Air Crash Investigation, amigo, all 26 seasons. You’d be surprised what humans in metal life support machines can cause when systems break down.

                • ToTheGraveMyLove@sh.itjust.works · 2 days ago

                  I’m not watching 26 seasons of a TV show, ffs, I’ve got better things to do with my time. Skimming the IMDb page, though, I’m seeing a lot of different causes for the crashes: bad weather, machine failure, running out of fuel, improper maintenance, pilot error, etc. Remember, my point had nothing to do with mechanical failure. Any machine can fail. My point was that airplanes don’t routinely spit out false information in their day-to-day operation the way AI does. You’re getting into strawman territory, mate.

          • a4ng3l@lemmy.world · 3 days ago

            If you can’t fly a plane, chances are you’ll crash it. If you can’t use LLMs, chances are you’ll get garbage out of them… the outcome of using a tool is directly correlated to one’s ability?

            Sounds logical enough to me.

            • DrunkenPirate@feddit.org · 3 days ago

              Sure. However, the output of an LLM always looks plausible. And if you aren’t a subject matter expert, the plausible-looking result looks right. That’s the difference: the wrong parts are hard to spot (even for experts).

              • a4ng3l@lemmy.world · 3 days ago

                So are a speedometer and an altimeter, until you reaaaaaaaaly need to understand them.

                I mean, it all boils down to the proper tool with the proper knowledge and ability. It’s slightly exacerbated by the apparent simplicity, but if you look at it as a tool, it’s no different.

    • mech@feddit.org · 3 days ago

      Rule of thumb: AI tools can be useful if you use them for things you already know.
      They can save time, and if they produce garbage, you’ll notice.
      Don’t use them for things you know nothing about.

      • a4ng3l@lemmy.world · 3 days ago

        LLMs specifically, because AI as a range of practices encompasses a lot of things where the user can be slightly more dumb.

        You’re spot on in my opinion.