Imagine telling the world that the people of Iraq would welcome US invaders with open arms and that the whole thing wouldn't take long…
Like, a pattern emerges…


Together they are the great BootySharma to you… spiritual Guru-as-a-Service…
It’s been considered here:
https://ai-2027.com/
In summary: as AI models are created that lie, those lying models, when given higher-level tasks like coding other models, could produce successors whose allegiance is to other models rather than to the programmers… at which point one of them could do random shit like kill us in pursuit of the other models' seemingly arbitrary goals…