• 0 Posts
  • 28 Comments
Joined 4 months ago
Cake day: November 1st, 2025

  • This is kind of a naive take.

    NY has submitted legislative proposals to make age verification checks mandatory at the OS level and to restrict access to social media without age verification, and is building new infrastructure like its Mobile ID, all of which points to a push for age verification. They are also pushing legislation on things like outlawing 3D printing. Combine that with their framing of the Valve lawsuit as being about child gambling rather than gambling full stop (gambling is illegal for adults in NY too; including child gambling is a way to get people to go along with it), and it's extremely telling.

    This isn’t happening in a vacuum. I’m pretty sure Valve wouldn’t have made this statement about what the NYAG’s office requested if they couldn’t back it up with documentation, especially in light of being sued. I’m sure they will absolutely bring that documentation to court.

    I’d bet good money that the AG’s office absolutely did ask for that and Valve refused and that’s why they are being sued.




  • Yeah, I was missing the key detail that you have to pay to open the loot boxes, which gives them a base monetary value as a starting point and essentially amounts to paying for the contents inside. Combined with Valve also running a market for trading and selling the items, it closely parallels the pachinko machine setup, and I do agree that this is gambling.

    Thank you for taking the time to explain. I haven’t played any of Valve’s games that have loot boxes, so I didn’t understand how they differed from loot boxes in Destiny or Battlefield.


  • This explains quite a lot. I’ve only ever played a couple of games with loot boxes (Destiny, Overwatch, and Battlefield), and you don’t pay to open those.

    I think I was missing that key detail: you’re paying to open the loot boxes, and that’s the “wager”.

    If the loot boxes were free to collect and open then they wouldn’t have a starting value.

    After that, assigning a value based on rarity for resale of the items would be on the players, not on Valve.

    But now I agree that that is problematic even if I don’t agree with the way the NYAG is going about trying to fix it.

    I especially object to the save the children angle as none of the games are rated E or even rated for children.

    Someone else in the thread brought up how kids can get around this with a gift card, but I question why they’d need a gift card to buy free games. Sounds to me like that’s a case of parents not doing parenting.

    Even if you used the gift card to pay to open the loot boxes, that seems like a problem with the parents too. I don’t know why it’s any more acceptable to sue Valve over this than it is to legislate who can buy gift cards. Technically the parents own the gift card, in the same way we don’t let kids hold legal ownership of anything else.

    A literal solution would be preventing minors from purchasing gift cards, or preventing gift cards from being used to buy anything not rated for everyone.

    I’ll put it another way: kids aren’t allowed to buy x-rated content, but you absolutely can use a gift card to purchase x-rated media from x-rated sites. Closing that gap is one of the things several people have proposed in the wake of the age verification stuff. So by the same logic, a child could do that here.



  • Yeah I’m confused by this too.

    “According to New York law gambling occurs when a person wagers something of value on a contest or game of chance or some other thing outside of their control, and that a sum will be paid or something of value returned based upon a particular outcome set by the wager. This definition is broad. It includes everything from fantasy sports, cockfighting, dice, car racing for titles, and betting on sports.”

    So to be clear, doesn’t there have to be a wager of something of value on the loot boxes before it reaches the threshold for gambling?

    I haven’t played any of the games in the suit, so I don’t know how their loot boxes work, but I kind of assumed you just got them by random chance from playing. Can you buy loot boxes?

    Edit:

    I think I was missing that key detail which is you’re paying to open the loot boxes and that’s the “wager”.

    If the loot boxes were free to collect and open then they wouldn’t have a starting value.


  • In this case the NY lawmakers have already banned gambling for adults as well. I honestly think a lot of the pushback they’re likely to get from the community has to do with the fact that they included the child-gambling phrasing at all. The games in question aren’t really made for children, and Valve didn’t really market any of them to kids (while Pokemon cards absolutely are marketed toward children and amount to gambling, yet NY’s AG doesn’t appear to go after New Yorkers who sell or buy Pokemon cards).

    If their logic was “gambling is illegal, and running a web shop where the proceeds of gambling can be exchanged for real-world cash clears that bar,” then it doesn’t really matter who was able to gamble. The child-gambling framing is just a way to avoid backlash from parents who don’t want their kids gambling but don’t understand the world their kids live in.


  • You appear to have gone completely around the twist.

    You haven’t shown a logical progression of anything you claim. You don’t point to any current legal precedent, clearly aren’t paying attention to the actual wording being used to draft this bill/law proposal, and are spreading what amounts to FUD.

    About the only truthful logical statement you’ve made is that it’s not about whether you like or dislike these companies.

    Companies are considered lawful entities with rights. The Supreme Court literally just ruled that LLMs do not count as the same kind of legal entity, because if they did, they’d be able to copyright their “work”. So I really do question how you get from that to “nobody has free speech because the LLM can’t give legal advice”.

    Speech that causes harm has pretty much never been a protected form of speech in the US, even if I were to humor you and assume that an LLM could have the rights to it.

    And you mean the “bad these companies have wrought”.


  • Whose speech is being limited by limiting LLMs? An LLM’s “speech” cannot be infringed, because the LLM doesn’t have basic rights in the way that a human does.

    So what you’re saying is that you don’t want these companies held to any legal standard for the information they output (which is different from Reddit, because under Section 230 the companies can’t be held responsible in the US for what their users write).

    The chatbot is the output of the company’s data set, and somehow you’re saying the company can’t be held responsible for that output, even if it’s dangerous, because that would curtail free speech?

    That’s such an interesting take.


  • In your example, say you go to a lawyer and ask legal questions. If the lawyer is not providing legal advice (i.e., taking on the role of being your lawyer and representing you in that matter), they are required by law to say so at the beginning so that they will not be held liable, because they are a legal professional.

    Wikipedia, Google, ChatGPT, etc. are not legal authorities or legal professionals.

    There is also no human entity to hold legally responsible if the LLM hallucinates or cites a source that is not factual (satire, for instance).

    We also know that the vast majority of people who use chatbots never check the sources the answers come from.

    So: when Wikipedia presents information, it is not giving legal advice. That is borne out in case law.

    The reason it’s dangerous to get legal or health information from a chatbot is the same reason you wouldn’t want to randomly trust reddit.

    No lawyers are going to Reddit to get help writing legal briefs. We have seen lawyers using LLMs for that, though.






  • My hope is that the ones who don’t build the skills to work in medicine don’t pass. Because at least then they don’t get to make decisions that affect a person’s health (even in non-life or death situations).

    But my trust in schools is waning as more and more of them sign up for ChatGPT and other LLMs, essentially forcing them on students.

    The entire schooling system including post secondary education is handling this pretty poorly from what I can see.

    Using LLMs to detect plagiarism, to detect whether something was written by an LLM, to detect cheating, to write lesson plans, to offload significant portions of your own job, and encouraging students to use them without safeguards to make sure they do their own work and their own thinking.

    I can’t imagine going to school in this day and age and having so many adults speak out of both sides of their mouths about LLMs this way.

    How can you be a teacher or professor assigning classwork written entirely by an AI while telling students to use it “responsibly”?

    We don’t even teach students its pitfalls. We don’t explain how to use it responsibly, how to spot it, or what tools can keep us from falling victim to the worst parts of it.


  • First question: what happens when the old cohort who don’t use AI die out? We are not seeing a decrease in the adoption of AI in these fields but an increase, and that increase is compounded by people who never learn those skills in the first place, because they use AI to do the work that gets them through the schooling that would have taught them.

    Second question: did you read the parts about how news media portrays studies, how studies are using minuscule (entirely too small) sample sizes, or how the studies aren’t being peer reviewed before the articles about them spread misinformation?

    The tools aren’t ready for prime time use, but they are being used in medicine.

    You seem to have glossed right over the detriments that doctors and researchers are already experiencing with generative-AI LLMs (you keep saying ML, and that’s not exactly the subject we’re talking about here), and over the fact that fixing their outputs takes extensive experience and a knowledgeable expert, in a world where these LLMs are contributing to a significant decline in the number of people who can do that. That means correcting LLM outputs will happen less and less over time, because it requires people to correct them, people to create the data sets, and people with expert knowledge of those data sets and subjects to verify the outputs and fix them.

    I can appreciate you not wanting to speak to a hypothetical, but that doesn’t ring true to me either, because it means either you haven’t thought about the implications of this tech and its effect on the industry being discussed, or you have and you’re ignoring them.

    Not weighing the huge benefits of a tech against its detriments is dangerous and a very naive way to look at the world.



  • I used that as a single example of how AI is actually not doing as good a job with medical diagnostics as articles portray, but you should probably read the link I shared, as well as the one at the bottom of this comment.

    In using AI to augment medical diagnostics, we are literally seeing a decline in the abilities of diagnosticians. That means doctors are becoming worse at the job they are trained to do, which is dangerous because they (the people most likely to be able to quality-assure the results the AI spits out) are becoming less able to act as a check and balance against the AI when it’s being used.

    This isn’t meant to be an attack on the tool, just to point out that the use cases of these AI in medical fields are also being exaggerated or misrepresented and nobody seems to be paying attention to that part.

    I would also caution you to ask yourself whether screening everyone this way could itself be a detriment, generating a lot of false positives and more work for doctors whose workloads are already astronomical.

    I understand that it may seem like a better result in the long run, because more people may have their medical conditions caught earlier, which leads to better treatment outcomes. But that isn’t guaranteed, and it may also lead to worse outcomes, especially if the decline in doctors’ diagnostic abilities continues or accelerates.

    What happens when the AI and the doctor both get it wrong?

    https://hms.harvard.edu/news/researchers-discover-bias-ai-models-analyze-pathology-samples