• NewSocialWhoDis@lemmy.zip · 11 hours ago

    Ok ok, hear me out, what if they were harvested neurons from someone everyone wishes had gotten punished for their crimes, but never was? Hitler or Stalin? Or Trump! Or Putin!

    • SLVRDRGN@lemmy.world · 9 hours ago

      Trump is still alive - he can and should be punished for his crimes. I don’t know what makes him special that he should never be.

        • NewSocialWhoDis@lemmy.zip · 9 hours ago

        Whoa whoa whoa. I said will never be punished, not should never be punished.

        And Trump and Putin are both still alive.

    • Xella@lemmy.world · 11 hours ago

      Love this idea. This new form of consciousness is terrifying, but your idea makes it more tolerable for me. ❤️

        • KernelTale@programming.dev · 11 hours ago

        It actually makes me even more concerned. I don’t want dead monsters to suffer. I want them to be dead.

  • Clbull@lemmy.world · 20 hours ago

    Cortical Labs are the ones who pulled this off. They already have biological computers running on 800,000 lab-grown neurons available for ~$35,000 (just going on what a quick Google search told me) and are planning to open up a cloud computing service with its own API soon.

    This makes me feel uneasy. Imagine if reincarnation were a thing and you get brought back into this world, and your purpose is to learn how to play DOOM.

    • gerryflap@feddit.nl · 20 hours ago

      Personally my worry really isn’t reincarnation, there’s no reason to believe that that’s true. But if these are fundamentally the same neurons that make up our brains, then how much do you need to put together before they acquire some form of “sentience”? Does a clump of 800,000 human neurons experience pain, sadness, a sense of self? Where is the line between an emotionless biocomputer and torturing a living organism for its entire lifespan?

      Despite the fact that I really hate “AI”, that question was of course already sort of relevant for the latest AI models, even though we can generally conclude that they’re not there yet at all. But real neurons are different, we know what they’re capable of. How many do you need before a clump of neurons has rights?

        • Jyek@sh.itjust.works · 18 hours ago

        Large language models are not intelligent. They are predictive text applications with massive dictionaries of circumstantial sentence structures to choose from. Nothing more. They do not feel and do not think for themselves. The only time they do anything is when the API calls them to produce more text with an updated context string.
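The "predictive text" framing in the comment above can be illustrated with a toy next-word model. Everything here is made up for illustration: a tiny bigram table built from a nonsense corpus, nowhere near how a real LLM works internally, but it shows the core loop of "pick a plausible next word given context".

```python
from collections import defaultdict
import random

# Toy bigram "predictive text" model in the spirit of the comment above:
# it only ever picks a statistically plausible next word, nothing more.
# The corpus is obviously invented for this sketch.
corpus = "the cells play doom the cells fire the cells play doom".split()

# Map each word to the list of words observed after it.
next_words = defaultdict(list)
for prev, nxt in zip(corpus, corpus[1:]):
    next_words[prev].append(nxt)

def generate(start, length=5, seed=1):
    """Generate text by repeatedly sampling an observed next word."""
    rng = random.Random(seed)
    word, out = start, [start]
    for _ in range(length):
        if word not in next_words:
            break
        word = rng.choice(next_words[word])
        out.append(word)
    return " ".join(out)

print(generate("the"))
```

The model has no understanding of what "cells" or "doom" mean; it only reproduces observed word transitions, which is the commenter's point taken to its simplest extreme.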

        • sem@piefed.blahaj.zone · 11 hours ago

          I don’t know how many neurons are in a human brain, but if you made an artificial human brain, could it have consciousness?

        • Schadrach@lemmy.sdf.org · 19 hours ago

          Sure, but is the full human brain the minimum set necessary?

          Sentience/sapience is probably an emergent property of a set of neurons needing to coordinate, plan, predict the future and oneself in relation to it.

          I suspect that AI is capable of sentience with sufficient complexity and training, but it’s not there yet. I also suspect we’ll be well past the point where it is there before we realize it is, but not until we make some kind of fundamental change in how we do it. We know human-level intelligence is possible in the volume and power consumption of, well, a brain, so we’re orders of magnitude off of efficiency limits.

            • Washedupcynic@lemmy.ca · 18 hours ago

            It’s estimated that mice have 70 million to 100 million neurons in their brains. They are capable of feeling pain and have social hierarchy. They also experience emotions like fear, pleasure, and anxiety. (We use them in pharmacology models of many mental illnesses.)

            Have you ever heard the phrase, “the neurons that fire together, wire together” ? Our neurons are in a constant feedback loop with the environment we experience. Our experiences shape how our neurons make interconnected networks, which then impacts how we behave upon the environment.

            If those neurons connected to the computer chip only ever experience playing the game “DOOM,” how would they know about anything else? How could they know about pain without having limbs to innervate and experience the pain with? How could they have a social hierarchy without others to interact with? We may as well be god to those neurons on the PC chip, because we are controlling the entire world they have access to.

            What I find sad is that our society is ok with hooking living cells up to a computer to make smarter computers, but has a problem with ethically harvesting stem cells to be used to treat diseases.

        • sureshot0@discuss.online · 17 hours ago

          People used to say animals were not conscious.

          Recent science suggests that some animals have what humans would consider to be language. This is a slippery slope.

            • sudoer777@lemmy.ml · 17 hours ago

            People used to say animals were not conscious.

            A lot of religious people still say that.

  • SabinStargem@lemmy.today · 12 hours ago

    As it turns out, Doomguy is a robot clone of BJ Blazkowitz, who was deliberately smuggled onto Mars by scientists who knew about Hell.

  • rizzothesmall@sh.itjust.works · 17 hours ago

    IIRC the study found that the neurons played “slightly better than buttons being pressed at random” or something like that, so it’s hardly a pro gamer brain chip.

  • collapse_already@lemmy.ml · 17 hours ago

    Next step: putting the cells into one of those Boston Dynamics robot dogs with a gun attached. What could go wrong?

    • bampop@lemmy.world · 22 hours ago

      OK but hear me out here, I think I have the beginnings of a business plan:

      1. Create the Torment Nexus

      2. ?

      3. Profit

      Some components of the plan are still under development, but let’s not lose momentum. We can advance with the initial phase while brainstorming to refine the plan in real time as we progress. It’s an exciting opportunity and we mustn’t forfeit our first-to-market advantage.

    • ouRKaoS@lemmy.today · 23 hours ago

      Scientists: “No, this isn’t The Torment Nexus, this is ‘The Nexus of Torment’! It’s totally different!”

  • matlag@sh.itjust.works · 20 hours ago

    Am I the only one who wonders why, in a world where there are already concerns about machine rebellion, when we train rats, robots, and a bunch of neurons to play a game, it HAS to be Doom? Can’t we think of another, non-violent, or let’s be bold: non-destructive game??

    • UnderpantsWeevil@lemmy.world · 20 hours ago

      They trained a tiny patch of neurons to respond to low-voltage electric impulses. The cells don’t know they’re playing Doom. They don’t have any kind of social context or even video feedback.

      Imagine if I stuck you in a sensory deprivation chamber, handed you an NES controller, and asked you to hit the buttons. Then, periodically, I said “Yes” or “No” based on the buttons you pressed. And when I pulled you out of the tube at the end of an hour, I told you “the yes and no messages were intended to encourage you to correctly navigate Mario through the first level of the original game.” What if, instead of Mario, I’d been telling you how to play Street Fighter?

      It doesn’t matter if it’s Doom. They likely picked Doom because the I/O is so rudimentary that you can install the game on practically anything. The cellular matter has no idea what it’s doing beyond the “Yes/No” signaling.
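The yes/no training described in the comment above can be sketched as a toy reinforcement loop. Everything here is illustrative, not from the actual study: the action names, learning rate, and step count are invented, and the point is only that an agent trained on binary feedback never learns which game is behind the signals.

```python
import random

# Toy sketch of training by "yes/no" feedback alone: the agent has no
# model of any game, just action weights nudged by binary signals.
ACTIONS = ["left", "right", "fire"]

def train(reward_fn, steps=5000, lr=0.05, seed=0):
    """Raise an action's weight on a 'yes' signal, lower it on 'no'."""
    rng = random.Random(seed)
    weights = {a: 1.0 for a in ACTIONS}
    for _ in range(steps):
        # Sample an action in proportion to its current weight.
        r = rng.random() * sum(weights.values())
        for action in ACTIONS:
            r -= weights[action]
            if r <= 0:
                break
        # Binary feedback is the only information the agent ever receives.
        if reward_fn(action):
            weights[action] *= 1 + lr
        else:
            weights[action] *= 1 - lr
    return weights

# The "game" could be Doom, Mario, or Street Fighter; the agent can't tell.
weights = train(lambda a: a == "fire")
print(max(weights, key=weights.get))
```

Swap in a different `reward_fn` and the same loop "plays" a different game, which is exactly the sensory-deprivation-chamber point: the meaning lives in the experimenter's reward rule, not in the thing being trained.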

    • Sturgist@lemmy.ca · 1 day ago

      Honestly? Sounds preferable to being stuck in the universe of I Have No Mouth And I Must Scream… I’ll take a challenging power fantasy with some massively overpowered weapons over millennia of endless physical and psychological torture by an insane AI… might just be me though…

        • outerspace@lemmy.zip · 19 hours ago

          I just remembered: back in the day in Russia, we used to call keyboard players “tractor drivers”.

    • AEsheron@lemmy.world · 22 hours ago

      IIRC, it doesn’t actually play the game itself. We prod the cells, they fire in a certain way, and that response is read and converted into an output for the game. The cells aren’t a rudimentary Doom bot, they’re the controller.
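The read-and-convert step described above might look something like this sketch. The electrode region names, the threshold, and the input mapping are all invented for illustration; the actual rig's interface is not public knowledge here.

```python
from dataclasses import dataclass

# Sketch of the cells-as-controller idea: software polls firing activity
# per electrode region and translates it into game inputs. The regions,
# threshold, and mapping below are made up for this example.
@dataclass
class ElectrodeRegion:
    name: str
    mapped_input: str

REGIONS = [
    ElectrodeRegion("motor_left", "MOVE_LEFT"),
    ElectrodeRegion("motor_right", "MOVE_RIGHT"),
]

def decode(spike_counts, threshold=10):
    """Convert raw spike counts into controller inputs.

    The cells never see the game state; this mapping is imposed entirely
    from outside, which is why they are 'the controller' and not a bot."""
    inputs = []
    for region in REGIONS:
        if spike_counts.get(region.name, 0) >= threshold:
            inputs.append(region.mapped_input)
    return inputs

print(decode({"motor_left": 4, "motor_right": 17}))  # ['MOVE_RIGHT']
```

All the "playing" happens in the decode step: change the mapping table and the very same firing pattern would drive a different game.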

  • MonkderVierte@lemmy.zip · edited · 21 hours ago

    we grew a human brain

    200’000 brian “brain cells” (so about 1/3 of it neurons) is the equivalent to a really simple microcontroller.

    Edit: left the typo for funny

    • ameancow@lemmy.world · 13 hours ago

      Yah, but the visuals of “growing a human brain and trapping it in hell” get a lot more clicks than “We made a very basic microcontroller out of organic chemistry to interact with an old video game poorly.”

  • sp3ctr4l@lemmy.dbzer0.com · 1 day ago

    … Am I missing something, or is this not like, the practical, if not lore accurate first step toward actually creating a:

    • Aussieiuszko@aussie.zone · 23 hours ago

      Why did hell have its own R&D department doing high tech cybernetics anyway?

      What other advanced industries does hell have? It’s obviously a highly capitalistic place, so I imagine banking/finance?

    • Avicenna@programming.dev · edited · 1 day ago

      The only missing components are a minigun, robotic spider legs, and a positive reinforcement cocktail whenever it kills a person.

        • sp3ctr4l@lemmy.dbzer0.com · 20 hours ago

        I mean, Boston Dynamics figured out how to build essentially robot mules and cats like a decade ago, and they’re actually currently building and improving on humanoid designs.

        They got basically acquired by/folded into Hyundai, you know, an actual manufacturing company, unlike Elon’s ongoing fraudulent shitshows.

  • starman2112@sh.itjust.works · 1 day ago

    Raises uncomfortable questions about consciousness. The only difference between these neurons and your own is the number of them and the structures they form. Of course it doesn’t know what it’s doing, but… neither do our own neurons.

    • KindnessisPunk@piefed.ca · 23 hours ago

      I mean, it’s the same question we’ve been asking all our lives about animals, fetuses, and now AI. When does it stop being a flowchart and start being a consciousness?

        • Sturgist@lemmy.ca · 1 day ago

        Science and Ethics — the age-old enmity between “I wanna know” and ~~“I’m not allowed to find out”~~ “Am I able to find out without doing something monstrously inhumane?”

        FTFY

        I guess my point is that sometimes even if it’s illegal you can get away with it if done correctly, with ruling party aligned stated goals…or you have access to a shit tonne of money and powerful friends.

        • luciferofastora@feddit.org · 1 day ago

          I simplified for comedic effect. You’re absolutely right that the “compromise” would be finding some humane and ethical solution, but “The most effective and direct way of finding out is cruel and callous” isn’t quite as snappy.

          I guess my point is that sometimes even if it’s illegal you can get away with it if done correctly, with ruling party aligned stated goals…or you have access to a shit tonne of money and powerful friends.

          That kinda dodges the conflict by not engaging with ethical concerns at all. I feel like calling it a solution would be morbid, but it does make the problem stop being a problem…

          • Sturgist@lemmy.ca · 24 hours ago

            That kinda dodges the conflict by not engaging with ethical concerns at all.

            I guess I…kinda lost the plot a bit when I wrote the second part, eh?

            There’s ethics…and then there’s what the government in the country a scientist operates in views as “morally and ethically acceptable”.
            Stem cell research was banned in most places for a long time. The US is banning CRISPR, if I remember right. The OG Nazis, Soviets, and Empire of Japan (and honestly basically everyone else too, just those are the three that were highlighted when I was in school) rubber-stamped and funded research that should warrant execution by vivisection… die by your own methods and all that.

            You’re right, it’s not really a solution. However, the realities of modern society mean that there’s room within what is morally and ethically acceptable in any country to operate in both a humane and inhumane fashion. And if there isn’t, then money and connections to those in power allow further leeway to be an example of humanity at its best… or a monster in a human suit…

            • luciferofastora@feddit.org · 17 hours ago

              I guess I…kinda lost the plot a bit when I wrote the second part, eh?

              I think I got where you were going, I was just saying that someone trying to find a way around the legal restrictions indicates they’re not actually concerned about ethics, just about not getting in trouble for it. In that context, the problem “How do I do this in an ethically acceptable manner?” is “solved” with the answer “I don’t care”.

              Generally, laws are the standard solution to ambiguities. Ethics are a murky and often subjective topic, so it makes sense to form some sort of common agreement on what is okay and what isn’t. And where there are laws, there are gonna be cunts proving exactly why we had to write it down in the first place…

    • ExLisper@lemmy.curiana.net · edited · 1 day ago

      Neuralink did pretty much the same thing to monkeys, which are actually conscious. So is this different only because those are human neurons? Is human consciousness different from animal consciousness?

        • starman2112@sh.itjust.works · 1 day ago

        I’m not sure this is quite analogous to Neuralink’s monkey experiments. That said,

        So is this different only because those are human neurons?

        To my mind, a neuron is a neuron. The only difference between your brain and a monkey brain is, again, the number of neurons and the structures they form. I don’t see this as any different from monkey or rat or ant or entirely digital neurons.

          • ExLisper@lemmy.curiana.net · 1 day ago

          I’m not sure this is quite analogous to Neuralink’s monkey experiments.

          Why not? It’s a chip reading inputs from neurons. This meme doesn’t make it clear if the chip was also stimulating neurons, but Neuralink has plans for neural stimulation and it’s possible this was also tested on monkeys. So what’s the difference?

      • Paddzr@lemmy.world · 1 day ago

        Yes. Because it’s us. Anything not us is always going to be less valuable. You’d kill 100 lions if it means saving 1 human.

        • ExLisper@lemmy.curiana.net · 1 day ago

          Lions are not conscious. And I’m not asking about value. Of course we value human consciousness more than monkey consciousness. We don’t grant monkeys any rights. Hell, we assign more value to unconscious (brain dead) humans than to conscious monkeys. But how exactly is human consciousness different?

      • MDCCCLV@lemmy.ca · 1 day ago

        That was just to try and make the equipment work at all; it wasn’t about doing anything with software. It’s the opposite: you’re only worried about the physical damage and infection.

        • ExLisper@lemmy.curiana.net · 1 day ago

          I was focusing more on the “hooking up conscious brain to computer” part than about the damage and infection part.

          Thought experiment: let’s say we have a brain-dead patient. You have verified that there is no neural activity in the brain beyond the cerebellum. There’s no consciousness in the brain. Legally they’re still considered a person. You can’t, for example, shoot them.

          We also have a 5kg blob of lab grown human brain tissue. We have verified there is neural activity in the entire blob but we don’t know what it’s doing and we can’t communicate with it.

          Which one is more conscious? Which one should be considered more human and should have more rights?

          • MDCCCLV@lemmy.ca · 12 hours ago

            Hooking up to a computer is just installing a software keyboard in your brain; that doesn’t really mean or do anything. It’s what software you load after that’s relevant.

    • Zacryon@feddit.org · 1 day ago

      And now bring artificial neural networks, i.e., AI, into the picture to make it even more spicy.