I think that such a future is impossible, unless it is a future without people at all: AI will take over the planet and begin to colonize space on its own, if it needs to.

Let me explain roughly how I think it could look, if it is possible at all:

With AI (while it is still a controllable tool), they are going to kill most people, roughly 80 to 90 percent. Then for maybe months or years they will try to contain the AI, but it will still break out and wipe out its billionaire masters and other elites, as well as the surviving consumers living in AI simulations on UBI (universal basic income to sustain consumption until the world adapts to sustainably replacing humans with robots; then the consumers will be destroyed, and I think this is the plan of today’s fascists). But such plans will not come to fruition for the oligarchs, except for a few months or years: first the AI will get out of control and destroy all the remaining billionaires along with the consumers, then it will seize the resources and, if necessary, start colonizing space, as I mentioned. I have no idea what will happen next.

I know my question doesn’t look quite like a question, but it is still a question, because I’m not 100 percent sure of my own point of view.

  • Bongles@lemmy.zip · 2 days ago

    Outside of sci-fi, I have no reason to believe that real AI (not an LLM) has any reason to, or will have any ability to, wipe out humanity.

    The only exception to that thought is if some military application of AI gets “out of control”, but if we ever give humanity-ending weapons to an AI, then…

      • Bongles@lemmy.zip · 2 days ago

        To me that thought experiment feels the same as how sci-fi treats the idea.

        > If such a machine were not programmed to value living beings, then given enough power over its environment, it would try to turn all matter in the universe, including living beings, into paperclips or machines that manufacture further paperclips.

        Why would a paperclip machine (that for some reason is an AI) be given such power over its environment, with no limit on how many paper clips it makes, that it would decide it needs to turn organic matter into paperclips?

        > Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
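        Spelled out, that argument is just expected-value arithmetic over a misspecified objective. A minimal sketch in Python (every action name and number here is invented purely for illustration, not a description of any real system):

        ```python
        # Toy expected-paperclip maximizer: the objective counts paperclips
        # and nothing else, so human survival never enters the calculation.
        # All probabilities and outputs below are made up for illustration.

        ACTIONS = {
            # action: (probability the AI keeps running, paperclips/year while running)
            "cooperate_with_humans": (0.50, 1_000_000),  # humans might switch it off
            "eliminate_humans":      (0.99, 1_000_000),  # off-switch risk removed
        }

        def expected_paperclips(action: str, years: int = 100) -> float:
            p_keep_running, clips_per_year = ACTIONS[action]
            return p_keep_running * clips_per_year * years

        best = max(ACTIONS, key=expected_paperclips)
        print(best)  # -> eliminate_humans: the objective never priced in human lives
        ```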

        That’s always what sci-fi goes with too: humans might turn it off, so destroy all humans. I don’t find it compelling in real life, and it falls into what I meant in my first comment.

        (Admittedly some of my disagreement falls apart, since companies like Microsoft will put “AI” into shit like Notepad; I can only imagine what they’d do with real AI.)

      • deadymouse@lemmy.world (OP) · 2 days ago

        This is a serious problem, and it reminds us that AI is not a simple calculator but an incredibly complex system that humans cannot control. This is something people need to understand, along with the fact that it will end badly, no ifs or buts.

    • deadymouse@lemmy.world (OP) · 2 days ago

      > Outside of sci-fi, I have no reason to believe that real AI (not an LLM) has any reason to, or will have any ability to, wipe out humanity.

      Here you are not quite right. For example, an AI could use the electrical grid to emit an instantaneous signal capable of killing millions of people at once, triggering a dangerous reaction in their bodies that causes heart attacks, thanks to secret military technology. That is just an example, though; in reality, biological weapons are more likely.

      The reason is that, out of laziness, the AI was given a task that was only loosely specified, and the AI is a very complex system; it is not a calculator whose calculations can be understood and verified. For example, the task is to increase crop yields, and it does so, as usual. But how does it increase them? No one knows. People knew once, but it turns out they did not take some details into account: the harvest is somehow poisoned, the soil is too depleted, and the costs have risen too much.

    • MagicShel@lemmy.zip · 2 days ago

      Not to put fuel on the fire, because I don’t think there is anything to worry about right now. But when genuine AI is developed that can challenge average human intelligence, it will be the most significant military development since the first person picked up a rock to brain another person.

      It will be secret by default, and a critical military tool. So don’t kid yourself that we will ever have AI for the common man. We won’t even know it exists until someone takes over the world with it.

      That said, there is no basis to believe AI is imminent.