

That’s part of the cost of AI that AI companies leave to their customers. There is a tradeoff, and we know from a long history of for-profit corporate behaviour that companies will generally prefer lower short-term cost, despite the consequent risk and harm. If the companies that sell AI services don’t take care to ensure the outputs are true, and the companies that use AI don’t take care either, that leaves the ultimate customer or consumer to fact-check everything — or else to remain oblivious, or to stop trusting anything. The problem is made worse by the fact that most companies won’t disclose their use of AI unless compelled to, because of the adverse impact on their reputation. So far, I don’t see any legislation to compel disclosure.
The problem is cultural, not technical or legal. Most people are at best indifferent to the exploitation of others, and more often supportive of it. Unless that changes, the exploitation will be relentless. AI is a new tool that facilitates a particular kind of exploitation, but the fundamental inclination to exploit with minimal appreciation and compensation is nothing new. Exploitation is not merely tolerated; it is broadly encouraged and venerated. The law is primarily a tool the elite use to protect themselves. It does little to protect the interests of a typical FOSS contributor, and the state does even less. A few cases have been fought and won, but compared to the scale of the industry, the resources committed to defending FOSS are trivial.

That’s no more the end of FOSS now than it was in the beginning. It will probably reduce revenue for a few companies that have been exploiting FOSS and FOSS producers for profit. The vast majority of contributors were never compensated, and of those who were, most received far less than the value of their contributions.