You can’t be serious if you’re telling me you’re going to use Reddit comments as a reliable source of information, but then ideologically object to the idea of using an LLM for the same purpose.
I’m saying Reddit is more reliable than AI. I agree with you that you shouldn’t just trust Reddit as a reliable source of information; I just trust AI much less.
Have you used AI in the past year?
Yes yes, someone has hard-coded a fix for the strawberry thing (counting the r’s in “strawberry”). It’s still an excellent example of the root issue:
it was an answer everybody knew was wrong, so everybody could watch how the AI dealt with being wrong: it guessed, then insisted it had made no mistakes.
If I can’t trust it for basic information I can double-check myself, then why the fuck would I trust it for information I can’t verify myself?
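For what it’s worth, that kind of double check is literally a one-liner; a minimal Python sketch of the “strawberry” case:

```python
# The "basic information I can double-check myself" case: counting
# letters is trivial to verify, so a wrong answer is instantly visible.
word = "strawberry"
print(word.count("r"))  # 3 -- unlike obscure claims, anyone can confirm this
```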
Every time something like this comes up it gets “fixed”, sure. Someone hard-codes a correct answer to the specific question that everyone can easily see is wrong. Why the fuck would I assume that’s happening for some obscure thing I don’t immediately know is incorrect?
Sure, it’s probably not telling people to put glue on pizza anymore, because everyone who reads that knows it’s a bad idea. How do I know it’s not suggesting something equally stupid when I ask it how to rewire a thermostat, something that the majority of people won’t immediately clock as “that will burn your house down”?
LLMs are really good at sounding smart to people who can’t tell when they’re very wrong.