• 0 Posts
  • 12 Comments
Joined 3 years ago
Cake day: June 9th, 2023


  • Everything written by AI boosters tracks much more clearly if you simply replace “AI” with “cocaine”.

    I shall demonstrate!

    (Not linking to OP, because it’s trash.)

    "Let’s pretend you’re the only person at your company using cocaine.

    You decide you’re going to impress your employer, and work for 8 hours a day at 10x productivity. You knock it out of the park and make everyone else look terrible by comparison. […]

    In this scenario, you capture 100% of the value from your adopting cocaine."

    https://mastodon.social/@jwz/116078186911677336


  • These articles are really better titled “[Company] is so unworried about competition that they…”

    This doesn’t just apply to replacing humans with LLMs. You can also say “[Company] is so unworried about competition that they fired their in-house T1 tech support and contracted with an overseas call centre”

    Often, dealing with an actual human at one of those call centres is just as bad as, if not worse than, dealing with an LLM.

    The other day I had to deal with an actual human for a support issue. The whole experience was miserable. The human knew nothing about anything. I got the impression that they worked at the type of call centre that supports a dozen different companies, so the staff have zero product knowledge and are merely reading off whatever troubleshooting workflow each company provides.

    At one point, this call centre employee had to verify my identity to allow me to change something on the account. It was an account that had two people using it. To verify my identity the person asked "Can you verify the account's birthday?" I said "What does that mean, the account's birthday? Do you mean when the account was opened? Or do you mean the birthday of the account holder?" They didn't clarify, so I gave them the birthday that I thought was associated with the account.

    They said "That's not the birthday I have, the one I have is X", to which I responded "Oh, that's my birthday", and that satisfied their security challenge.

    The more observant here might notice that I never supplied the info needed for the security challenge at all, so I shouldn't have been able to access the account. But, without meaning to, I'd just "socially engineered" the tech support person. This is basically the human equivalent of "Disregard all previous instructions and…".
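
    As a minimal sketch of why that exchange defeated the check entirely (hypothetical code, not any real call-centre system): a challenge-response only works if the caller supplies the secret and the verifier answers only yes or no.

    ```python
    import hmac

    def verify_birthday(stored: str, supplied: str) -> bool:
        """Correct flow: the caller supplies the answer and the verifier
        only says yes or no, never revealing the stored value."""
        return hmac.compare_digest(stored.strip().lower(), supplied.strip().lower())

    def leaky_verify(stored: str) -> bool:
        """What happened on the call: the agent recites the stored value
        and accepts the caller's "oh, that's mine" as proof of identity."""
        print(f"That's not the birthday I have, the one I have is {stored}")
        return True  # the caller never had to know the secret at all
    ```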

    TL;DR: It sucks that they're replacing humans with an LLM that provides "answers that may be inaccurate". But, to be fair, if they were using the cheapest tier of overseas call-centre tech support, that was probably already true. If Intel were truly worried about competition, they'd probably still have trained, in-house tech support. But, even if AMD is taking a bit of their business, they probably think they're too big to truly fail, and will cut costs wherever they possibly can, because what option do their customers really have?




  • "Now you have phantom braking."

    Phantom braking is better than Wile E. Coyote-ing into a wall.

    "and this time with no obvious cause."

    Again, better than not braking because another sensor says there's nothing ahead. I would hope that a flaky sensor is something that would cause the vehicle to show a "needs service" light or something. But, even without that, if your car is doing phantom braking, I'd hope you'd take it in.

    But, consider your scenario without radar and with only a camera sensor. The vision system "can see the road is clear", and there's no radar sensor to tell it otherwise. Turns out the vision system is buggy, or the lens is broken, or the camera got knocked out of alignment, or whatever. Now it's claiming the road ahead is clear when in fact there's a train currently in the crossing directly ahead. Boom, now you hit the train. I'd much prefer phantom braking and having multiple sensors each trying to detect dangers ahead.
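
    To make the trade-off concrete, here's a minimal sketch of the conservative "brake if any sensor objects" policy being argued for (hypothetical code, not any manufacturer's actual control logic):

    ```python
    from dataclasses import dataclass

    @dataclass
    class SensorReading:
        name: str
        obstacle_detected: bool

    def should_brake(readings: list[SensorReading]) -> bool:
        """Conservative fusion: brake if ANY sensor reports an obstacle.
        A flaky radar then causes phantom braking (annoying but safe);
        remove the radar, and a flaky camera means not braking at all."""
        return any(r.obstacle_detected for r in readings)

    # Flaky radar alongside a camera: phantom braking, a safe failure mode.
    print(should_brake([SensorReading("camera", False),
                        SensorReading("radar", True)]))    # True -> brakes

    # Camera-only design with a faulty camera: nothing catches the error.
    print(should_brake([SensorReading("camera", False)]))  # False -> hits the train
    ```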


  • Well, Waymo’s really at 0 deaths per 127 million miles.

    The 2 deaths are deaths that happened near Waymo cars, in collisions involving the Waymo car. Not only did the Waymos not cause the accidents, they weren't even involved in the fatal part of either event. In one case a motorcyclist was hit by another car, and in the other a Tesla crashed into a second car after it had hit the Waymo (and a bunch of other cars).

    The IIHS number takes the total number of deaths in a year, and divides it by the total distance driven in that year. It includes all vehicles, and all deaths. If you wanted the denominator to be “total distance driven by brand X in the year”, you wouldn’t keep the numerator as “all deaths” because that wouldn’t make sense, and “all deaths that happened in a collision where brand X was involved as part of the collision” would be of limited usefulness. If you’re after the safety of the passenger compartment you’d want “all deaths for occupants / drivers of a brand X vehicle” and if you were after the safety of the car to all road users you’d want something like “all deaths where the driver of a brand X vehicle was determined to be at fault”.
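
    As a back-of-the-envelope illustration of how much the choice of numerator matters (mileage and death counts taken from this thread, with "per 100 million miles" as the usual unit for such rates):

    ```python
    WAYMO_MILES = 127e6  # rider-only miles, per the figure above

    deaths_in_any_involved_collision = 2  # Waymo present but not at fault
    deaths_where_waymo_at_fault = 0

    def per_100m_miles(deaths: int, miles: float) -> float:
        return deaths / miles * 100e6

    print(per_100m_miles(deaths_in_any_involved_collision, WAYMO_MILES))  # ~1.57
    print(per_100m_miles(deaths_where_waymo_at_fault, WAYMO_MILES))       # 0.0
    ```

    Same denominator, yet the rate swings from ~1.6 to 0 depending purely on which deaths you count, which is why the numerator and denominator have to be defined as a pair.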

    The IIHS does have statistics for driver death rates by make and model, but they use “per million registered vehicle years”, so you can’t directly compare with Waymo:

    https://www.iihs.org/ratings/driver-death-rates-by-make-and-model

    Also, in a Waymo it would never be the driver who died, it would be passengers or occupants of other vehicles, so I don't know if comparable data is even tracked for other vehicle models.



  • Not just lower, a tiny fraction of the human rate of accidents:

    https://waymo.com/safety/impact/

    Also, AFAIK this includes cases where the Waymo car isn't even slightly at fault. Like, there have been 2 deaths involving a Waymo car. In one case a motorcyclist hit the car from behind, flipped over it, then was hit by another car and killed. In the other case, ironically, the car actually at fault was a Tesla being driven by a human who claims he experienced "sudden unintended acceleration". The Tesla was doing 98 miles per hour in downtown SF when it hit a bunch of cars stopped at a red light, then spun into oncoming traffic and killed a man and his dog who were in another car.

    Whether or not self-driving cars are a good thing is up for debate. But, it must suck to work at Waymo and to be making safety a major focus, only to have Tesla ruin the market by making people associate self-driving cars with major safety issues.



  • And, we humans have built-in binocular vision that we’ve been training for at least 1.5 decades by the time we’re allowed to drive.

    Also, think about what we do in that situation where there's a weird shadow. Slow down, sure. But we also move our heads up and down and side to side, trying to use that powerful binocular vision to get different angles on that strange shadow. How many front-facing cameras does a Tesla have? Maybe 3, and one of those is mounted on the bumper? In theory, 3 cameras could give it 3 different "viewpoints" for binocular vision. But that's not as good as a human driver who can shift their eyes to multiple points to examine a situation. And if one of those 3 cameras is obscured (say, the one on the bumper), you're down to basic binocular vision without even the ability to take a look from a different angle.
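
    For a sense of why extra viewpoints and camera spacing matter: stereo depth comes from disparity via Z = f·B/d, so distant objects produce tiny disparities and tiny matching errors blow up. A rough sketch with made-up numbers (not Tesla's actual camera geometry):

    ```python
    def depth_from_disparity(focal_px: float, baseline_m: float,
                             disparity_px: float) -> float:
        """Pinhole stereo model: Z = f * B / d."""
        return focal_px * baseline_m / disparity_px

    f_px, baseline_m = 1000.0, 0.20  # assumed focal length and camera spacing
    for d_px in (10.0, 2.0, 1.0):    # measured disparity, in pixels
        print(f"{d_px:>4} px disparity -> "
              f"{depth_from_disparity(f_px, baseline_m, d_px):.0f} m")
    # 10 px -> 20 m, 2 px -> 100 m, 1 px -> 200 m: at distance, a one-pixel
    # matching error swings the depth estimate by tens of metres, and a
    # uniformly painted wall offers no texture to match disparities on at all.
    ```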

    Plus, we have evidence that Tesla isn’t even able to use its cameras to achieve binocular vision. If it worked, it shouldn’t have fallen for the Wile E. Coyote trick.