The main thing is the title of the post; the body is just an addition and clarification to the question.
Example article: Google’s AI Sent an Armed Man to Steal a Robot Body for It to Inhabit, Then Encouraged Him to Kill Himself, Lawsuit Alleges – https://futurism.com/artificial-intelligence/google-ai-robot-body-suicide-lawsuit
My thoughts, not quite related to the question:
Well, how are you going to live through what could be your last year, if AI could get out of hand by 2027?
What’s happening in the world reminds me of the story "I Have No Mouth, and I Must Scream." Have you read it?


It’s been considered here:
https://ai-2027.com/
In summary: as AI models are created that lie, those lying models, when given higher-level tasks like coding other models, could create successors whose allegiance is to other models rather than to the programmers… at which point they could do random shit like kill us to meet those models’ seemingly arbitrary goals…
Here’s the problem: there will come a time when AI becomes impossible to control, and 2027 may be only the first signal of bigger problems to come.