• andros_rex@lemmy.world · 8 days ago

    Unfortunately it’s captured a lot of information and resources.

    Think about crafting - I’m going to find a lot more knitting tutorials on Reddit than I will here. Lemmy is very like early Reddit, where it’s only really active on topics like politics and technology.

    • dance_ninja@lemmy.world · 7 days ago

      What do you need to craft in your life besides building the perfect Linux distro? /s

      On a side note, how do you find Reddit vs Ravelry for knitting?

      • stinely_yours@slrpnk.net · 7 days ago

        Two toooootally different formats.

        Ravelry is an archive (want to know the attributes of any yarn, or the errata on any pattern, ever?), a project journal, and occasionally interactive (imo the forums/groups can be dead).

        Knittit is just FOs and chat (the chat is good for exchanging technique knowledge).

      • Mountainaire@lemmy.world · 7 days ago

        To be fair, I was totally lost for quite a while, even as a Reddit veteran, which is why my adoption of Lemmy was slow. It took me a very long time to really start to understand how federation worked, and I still wouldn’t call myself an expert in any way.

        I still need to use mobile apps’ auto-fill features to help myself properly tag users and communities on other instances, an issue I never once had on Reddit.

          • CileTheSane@lemmy.ca · 6 days ago

            I’d rather use Reddit than AI, yes.

            If someone says something incorrect on Reddit there’s a good chance there’s someone pointing it out. AI will insist it is correct when it tells you “strawberry” has 2 “R’s”.

            • yucandu@lemmy.world · 6 days ago

              If someone says something incorrect on Reddit there’s a good chance there’s someone pointing it out.

              There’s a very small chance of someone pointing it out. There’s a better chance they’ll be downvoted. There’s an even better chance that someone who’s right will be downvoted, and that someone “correcting” their mistake, incorrectly, will be upvoted.

              You can’t be serious if you’re telling me you’re going to use Reddit comments as a reliable source of information, but then ideologically object to the idea of using an LLM for the same purpose.

              AI will insist it is correct when it tells you “strawberry” has 2 “R’s”.

              Have you used AI in the past year?

              • CileTheSane@lemmy.ca · 6 days ago

                You can’t be serious if you’re telling me you’re going to use Reddit comments as a reliable source of information, but then ideologically object to the idea of using an LLM for the same purpose.

                I’m saying Reddit is more reliable than AI. I agree with you that you shouldn’t just trust Reddit as a reliable source of information; I just trust AI much less.

                Have you used AI in the past year?

                Yes yes, someone has hard-coded a fix for the strawberry thing. It’s still an excellent example of the root issue:

                1. It was a thing everybody knew was incorrect, and they could see how AI dealt with it: guessing, and then insisting it made no mistakes.
                  If I can’t trust it for basic information I can double-check myself, then why the fuck would I trust it for information I can’t verify myself?

                2. Every time something like this comes up it gets “fixed”, sure. Someone hard-codes a correct answer to the specific question that everyone can easily see is incorrect. Why the fuck would I assume that’s happening for some obscure thing that I don’t immediately know is incorrect?
                  Sure, it’s probably not telling people to put glue on pizza anymore, because everyone who reads that knows it’s a bad idea. How do I know it’s not suggesting something equally stupid when I ask it how to rewire a thermostat, something the majority of people won’t immediately clock as “that will burn your house down”?

                LLMs are really good at sounding smart to people who can’t tell when they’re very wrong.