Dutch lawyers increasingly have to convince clients that they can’t rely on AI-generated legal advice because chatbots are often inaccurate, the Financieele Dagblad (FD) found when speaking to several law firms. A recent Deloitte survey showed that 60 percent of law firms see clients trying to perform simple legal tasks with AI tools, hoping for a faster turnaround or lower fees.

  • Pringles@sopuli.xyz · 2 days ago

    Meanwhile, I strongly suspect our legal reviewer of using ChatGPT to review contracts, because he sends some laughably stupid comments that look fully AI-generated.

  • qwestjest78@lemmy.ca · 3 days ago

    I find it useless for even basic tasks. The fact that some people follow it blindly like a god is so concerning.

    • sp3ctr4l@lemmy.dbzer0.com · 3 days ago

      A lot of people are very stupid, and also very easily tricked/conned.

      We are basically just finding all the people who pretty much already were NPCs, and now, well, they’re formalizing that.

      To those people, well, the LLM probably genuinely is more intelligent/informed than they are.

      George Carlin:

      Imagine how stupid the average person is.

      Now, realize half of all people are more stupid than that.

    • ageedizzle@piefed.ca · 3 days ago

      I work in a health-care-adjacent industry and you’d be surprised how many people blindly follow LLMs for medical advice

      • qwestjest78@lemmy.ca · 2 days ago

        No, I don’t think I would be, actually. People have turned to Google for health advice for a long time now. AI is the next logical step for them.

      • PrettyFlyForAFatGuy@feddit.uk · 2 days ago

        My partner’s midwife googled stuff in front of us and parroted the AI summary back to us when we asked if a specific drug was okay for pregnant people.

    • a4ng3l@lemmy.world · 3 days ago

      It’s been doing wonders helping me improve materials I produce so that they fit certain audiences better. I can also use it to spot missing points and inconsistencies against the ton of documents we have in my shop when writing something. So far it’s quite useful as a sparring partner.

      • The_Almighty_Walrus@lemmy.world · 3 days ago

        It’s great when you have basic critical thinking skills and can use it like a tool.

        Unfortunately, many people don’t have those and just use AI as a substitute for their own brain.

        • a4ng3l@lemmy.world · 3 days ago

          Yeah, well, the same applies to a lot of tools… I’m not certified to fly a plane, and look at me not flying one either… but I’m not shitting on planes…

            • Lurking Hobbyist🕸️@lemmy.world · 2 days ago

              I understand what you mean, but… look at Birgenair 301 and Aeroperu 603, at Qantas 72, at the 737 MAX 8 crashes. Planes have spat out false data, and of the five cases mentioned, only one avoided disaster.

              It is down to the humans in the cockpits to filter through the data and know what can be trusted, which could be similar to LLMs, except cockpits have a two-person team to catch errors and keep things safe.

              • ToTheGraveMyLove@sh.itjust.works · 2 days ago

                So you found five examples in the history of human aviation. How often do you think AI hallucinates information? Because I can guarantee you it’s a hell of a lot more frequent than that.

                • Lurking Hobbyist🕸️@lemmy.world · 2 days ago

                  You should check out Air Crash Investigation, amigo, all 26 seasons. You’d be surprised what humans in metal life-support machines can cause when systems break down.

            • a4ng3l@lemmy.world · 3 days ago

              If you can’t fly a plane, chances are you’ll crash it. If you can’t use LLMs, chances are you’ll get garbage out of them… the outcome of using a tool is directly correlated to one’s ability?

              Sounds logical enough to me.

              • DrunkenPirate@feddit.org · 3 days ago

                Sure. However, the output of an LLM always looks plausible. And if you aren’t a subject-matter expert, that plausible result looks very right. That’s the difference: the wrong things are hard to spot (even for experts).

                • a4ng3l@lemmy.world · 3 days ago

                  So are a speedometer and an altimeter, until you reaaaaaaaaly need to understand them.

                  It all boils down to the proper tool with the proper knowledge and ability. It’s slightly exacerbated by the apparent simplicity, but if you look at it as a tool, it’s no different.

      • mech@feddit.org · 3 days ago

        The rule of thumb is that AI can be useful if you use it for things you already know.
        They can save time, and if they produce garbage, you’ll notice.
        Don’t use them for things you know nothing about.

        • a4ng3l@lemmy.world · 3 days ago

          LLMs specifically, because AI as a range of practices encompasses a lot of things where the user can afford to be slightly more dumb.

          You’re spot on, in my opinion.

  • whotookkarl@lemmy.dbzer0.com · 3 days ago

    Everyone with domain-specific knowledge*

    Language models do not reason, weigh evidence, or evaluate risks; they predict what words and phrases are most likely to come next. Reinforcement learning can train language models to make better predictions in specific circumstances, but it’s not a substitute for conscious thought.
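
    To make “predict what comes next” concrete, below is a minimal toy sketch in Python (not a real LLM; the bigram table and vocabulary are invented purely for illustration). Generation is just repeated sampling of a statistically likely next word; no step weighs evidence or checks whether the output is true.

    ```python
    import random

    # Toy stand-in for a language model: counts of which word followed which,
    # acting as learned next-word probabilities (invented data for illustration).
    bigram_counts = {
        "the":   {"court": 5, "case": 3, "law": 2},
        "court": {"ruled": 6, "held": 4},
        "case":  {"law": 7, "cited": 3},
    }

    def next_word(word):
        """Sample a next word in proportion to how often it followed `word`."""
        candidates = bigram_counts.get(word)
        if not candidates:
            return None  # no known continuation; stop generating
        return random.choices(list(candidates), weights=list(candidates.values()))[0]

    # Each step only asks "which word is statistically likely next?";
    # nothing here verifies facts, which is the commenter's point.
    word, output = "the", ["the"]
    for _ in range(4):
        word = next_word(word)
        if word is None:
            break
        output.append(word)
    print(" ".join(output))  # e.g. "the court ruled" or "the case law"
    ```

    A real model does the same thing at vastly larger scale, which is why the output reads fluently whether or not it is accurate.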

  • r00ty@kbin.life · 3 days ago

    I think my least favourite thing about AI is when customers tell me something won’t take as long as I say if I use AI. Look, if AI can do it, why do you need me?

    The fact I’m not out of a job (yet) is because, apparently, AI cannot do everything I can. The very second it can, I’ll be long gone.

    So I am on the side of the lawyers here. For the first and only time.

    • ZeDoTelhado@lemmy.world · 3 days ago

      Law in particular is such a gnarly subject that you really want someone who knows what they’re talking about. And even then, they can be wrong too.

  • osanna@thebrainbin.org · 3 days ago

    I can’t stand AI, but the few times I’ve used it, I’ve used it as a starting point. Once it gives me advice, I then go and confirm that with other sources. But I don’t use AI much.

    • sqgl@sh.itjust.works · 3 days ago

      I had it cite a case which didn’t exist. It was perfect for what I was fighting (it tends to figure out what you want to hear, then makes up stuff to satisfy you).

      When I tried to search for a phrase from the case (hoping it had just given the wrong citation), it said there was no such case with that phrase.

      I asked why it had said there was such a case earlier. It confessed that AI sometimes hallucinates and promised to try better in future.

      • RamRabbit@lemmy.world · 2 days ago

        and promised to try better in future

        This part in particular really pisses me off. It isn’t learning, it isn’t ‘going to do better’, it’s just saying what it thinks you want to hear.

        I fucking hate sycophants.

    • Silver Needle@lemmy.ca · 3 days ago

      Let’s consider what you are doing on a purely abstract level.

      1. You prompt a generative large language model with what you want done.
      2. You receive a set of information whose veracity you cannot count on in any practical sense.
      3. You go and confirm this information. Likely you are typing similar prompts into your search engine of choice, getting answers from experts that are more or less guaranteed to be relevant and useful.
      4. Then you act accordingly.

      We could also do the following:

      1. You have an idea/question that you search. You have keywords to type into forums. You get the relevant information. If need be, you make a post on a question board.
      2. Then you act accordingly.

      • Null User Object@lemmy.world · 3 days ago

        You have keywords to type into forums.

        That’s great when you do, and you usually do, but sometimes you don’t.

        Case in point: a while back I was creating a 3D model for my 3D printer. It had a part that was essentially identical to a particular unusual pipe fitting that I had seen and knew existed, but didn’t know the name of (spoiler: I’m not a plumber), and I wanted to give the sketch in the modeling software a proper name for the thing.

        Just trying keywords that sort of described its shape in search engines was useless. Search engines would focus on the “pipe fitting” part of the keywords and just return links to articles about plumbing. Then I asked an LLM, and it responded with, “That sounds like X.” I checked that it wasn’t just making it up by searching for “X” and found online stores selling the very thing I was trying to figure out the name of.

  • Eternal192@anarchist.nexus · 3 days ago

    Honestly, if you are that dependent on AI now, while it’s still in a test phase, then you are already lost. AI won’t make us smarter; if anything, it has the opposite effect.

    • kescusay@lemmy.world · 3 days ago

      I’m watching that happen in my industry (software development). There’s this massive pressure campaign by damn near everyone’s employers in software dev to use LLM tools.

      It’s causing developers to churn out terrible, fragile, unmaintainable code at a breakneck pace, while they’re actively forgetting how to code for themselves.

  • pinball_wizard@lemmy.zip · 3 days ago

    Yes please. More folks need to go all in on the idiocy of trusting an AI for legal advice. Let’s get this public lesson over with.

    This is one of the cases where they can simply be a hilarious example for the rest of us, rather than getting a bunch of the rest of us killed.