• 0 Posts
  • 12 Comments
Joined 3 years ago
Cake day: June 12th, 2023


  • I thought this was a very well-written, transparent article that took accountability as seriously as it should. I'm still not sure why people are using AI for translation when dedicated translation software already exists. People mention that AI is more context-aware, but the friction points in older translation software prompted you to look into the context yourself, whereas AI just makes an executive decision and people assume it must be right because it's AI. It's possible the old software, or even a human translator, would have made the same mistake, but I think people would place less inherent trust in the old software alone. I do want to point out that the AI issue was only a small part of the problem; they addressed plenty of other issues and explained how they plan to remedy them.


  • I understand mutual aid as a concept, but my local anarchist groups seem happy to just do random mutual aid. They'll stand on a corner, distribute food to anyone who comes by, and say "great job, team!" It feels ineffectual, and the lack of planning really hobbles them. I suggested a more organized approach and they said "you can do that if you want," which I already knew I could. I was asking whether WE should maybe be a little more organized, and they just aren't interested. They'll run a toy drive and then go to a random park to give the toys out. It feels more like a random-acts-of-kindness group than a group trying to build parallel systems of power. I understand it may just be my local groups, but I'd love to hear about other groups' experiences. Is there maybe a more anarchist-friendly way of organizing that I'm not privy to? I can do some reading if necessary. I'm not really an anarchist, but I believe mutual aid is important; I'd just like to see it done more purposefully. Is your mutual aid group a chapter of one I'd be familiar with? I'd be interested in trying a different group if it felt more helpful.


  • Google is a bad company with bad policies, but I'd love to have them explain what caused the compromise. They dispute that the key was uploaded publicly to GitHub, but don't seem to provide any information about what actually happened. They also didn't have 2FA on, which is strange to hear because AWS (though these folks are using Google) has required 2FA on all accounts, regardless of permissions, for at least a year now if memory serves. Really sorry to hear this happened to them, and the fact that you can't set a hard cap on spend makes Google the party ultimately responsible here, but I'd appreciate more information on the actual cause.


  • I get where you're coming from, but I think it's important that Ars has held this person accountable. They have a journalistic standard they're sticking to, which is that there should be no AI use, and there are repercussions for people who don't abide by it. The cohort willing to spend more to avoid AI isn't extremely large, but I'm certainly part of it, and seeing Ars hold this person accountable helps me know that I can trust and patronize them ethically. There are businesses out there unwilling to acquiesce to an AI-first narrative, and I'm worried that elements of doomerism are going to make people unwilling to believe those companies even when they have every reason to believe them.


  • I feel like that may be worse. It's kind of like how using certain security measures while browsing the web can actually make you easier to fingerprint. With varying lies, it'll still get a good idea of your real age, and that'll be enough; a specific, consistent lie gives it less to work with. Just always be 3 years older, with one additional sibling or a sibling of the opposite sex. If the sex of your sibling is relevant, describe them as a close family friend or close cousin in that instance. I can't say for sure, but if I had to guess, a static lie offers more obfuscation than a variable one. Though even posting in this thread is bad opsec.