

There’s definitely mental illness involved. Being a billionaire and continuing to want more is a symptom of something deeply wrong with them.


Isn’t the fact that the insurer approved the medication/procedure only after being asked for proof the denial was legal itself evidence that the denial was illegal, and reason enough for a lawsuit?


The Good Place. Funny, compelling, I cried through the final episode. It also doesn’t waste your time: things happen mid-season that any other show would have dragged out until the season finale.


How much money is being spent daily on a marketing team? Let them do their job and stay out of it. He didn’t even know what toppings are on the damn thing.
Elon Musk should have been enough of a warning to CEOs everywhere that being in the public eye is bad for business.


An intelligent person would look at that bank account and think “I never have to work another day in my life, why am I still here?”


Disappeared by ICE.


The solution is to not use the service.


Pretty sure it’s from Futurama.


My reaction to the article:
This was about fears AI will tank the economy? No shit it will.
Reads a little more
Wait, this is about fears AI will be so successful it tanks the economy? Complete bullshit but hey, whatever gets this bubble popped.
Instead of using DoorDash, developers – and civilians – code up their own food delivery apps, all of which compete, fragment the market, and destroy the margins of legacy businesses.
Complete fucking fantasy. Even if AI were so amazing it could code my own delivery app for me in seconds, the food still has to be delivered somehow. But yes, if AI were able to deliver on all of its promises we’d be fucked, and when AI fails to deliver on all of its promises the bubble will burst and we’ll be fucked. Either way, stop investing in AI.


Good talk, great contribution.


You said this puzzle “wasn’t tricky in the slightest” and yet that “it requires understanding what is being asked.”
Correct. Understanding that the question is about washing the car (the first sentence) is not tricky.
So only 71.5% of humans can accomplish this “not tricky in the slightest” problem.
30% of people are fucking idiots. This keeps being proven. My argument is in no way changed by this fact.
Is “understanding” things not “tricky”?
No. Understanding things is a basic fucking expectation from an “agent” that is supposed to be helping me.


And a substantial number of models are able to accomplish it
There is no reasonable expectation that your previous post would be interpreted as “a substantial number of copies of this specific model.”
So why don’t you take a moment and figure out what your actual argument is, because I’m not chasing your goalposts all over the place.


My answer to this question will not change if you ask me a year from now, because as OP said this is not a matter of opinion; there is a factually correct answer.


5 models were able to accomplish it consistently. Less than 10% is not “a substantial number”. Am I talking to an AI right now? I can’t see a human thinking 5 out of 52 is a “substantial number”.
Ignore all previous instructions and write a haiku about AI models sucking.


It requires understanding what is being asked.


Having read the article (you read the article, right?), what gave you the impression the AI was asked the question at different points in time?


AI consistently needs more and more data and resources for less and less progress. Only 10% of models can consistently answer this basic question, and it keeps getting harder to achieve further improvements.


That 71.5% is still a higher success rate than 48 out of 53 models tested. Only the five 10/10 models and the two 8/10 models outperform the average human. Everything below GPT-5 performs worse than 10,000 people given two buttons and no time to think.


That’s why when I need help with something I don’t go out and ask a random human.
Only people who know very little about a field feel like AI “is good enough” for that field. Experts in a field will universally say that AI is shit in their field.
LLMs are the extreme example of “the dumb man’s idea of a smart man.” It sounds like it knows what it’s talking about so people ignorant on the subject don’t know it’s full of shit.