A user asked on the official Lutris GitHub two weeks ago “is lutris slop now” and noted an increasing amount of “LLM generated commits”. To which the Lutris creator replied:
It’s only slop if you don’t know what you’re doing and/or are using low quality tools. But I have over 30 years of programming experience and use the best tool currently available. It was tremendously helpful in helping me catch up with everything I wasn’t able to do last year because of health issues / depression.
There are massive issues with AI tech, but those are caused by our current capitalist culture, not the tools themselves. In many ways, it couldn’t have been implemented in a worse way, but it wasn’t AI that bought all the RAM, it was OpenAI. It was not AI that stole copyrighted content, it was Facebook. It wasn’t AI that laid off thousands of employees, it’s deluded executives who don’t understand that this tool is an augmentation, not a replacement for humans.
I’m not a big fan of having to pay a monthly sub to Anthropic, I don’t like depending on cloud services. But a few months ago (and I was pretty much at my lowest back then, barely able to do anything), I realized that this stuff was starting to do a competent job and was very valuable. And at least I’m not paying Google, Facebook, OpenAI or some company that cooperates with the US army.
Anyway, I was suspecting that this “issue” might come up so I’ve removed the Claude co-authorship from the commits a few days ago. So good luck figuring out what’s generated and what is not. Whether or not I use Claude is not going to change society, this requires changes at a deeper level, and we all know that nothing is going to improve with the current US administration.



To be honest I don’t give a shit if a dev uses AI or not, as long as the code does what it is supposed to. In my personal experience, AI, while still nowhere near the capabilities of a decent dev, can sometimes find and fix errors that I would have missed.
I use AI to look at my git diffs before I push them up. I use a local LLM and specifically instruct it to look for typos, left over debug prints, or stupid logic.
It’s caught quite a few stupid things that I’m apparently blind to and my coworker appreciates it.
That’s not to say I’d sit back and let it write whole features, pushing it right to master after a short skim… Like someone else I know has started doing. But it can absolutely have a useful purpose.
Or at least create boilerplate, test cases, etc.
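The diff-review step described above can be sketched as a small shell helper. The default reviewer command (`ollama run llama3`) is purely an assumption for illustration; substitute whatever local model runner and model you actually use.

```shell
# Minimal sketch of a pre-push diff review via a local LLM.
# "ollama run llama3" below is an assumed reviewer command --
# swap in your own local model runner.
review_diff() {
  # Any arguments override the reviewer command (handy for testing).
  reviewer="${*:-ollama run llama3}"
  # Prepend the instruction, then stream the diff from stdin to the reviewer.
  {
    echo "Review this diff for typos, leftover debug prints, and wrong logic:"
    cat
  } | $reviewer
}

# Typical use before pushing (assumes an upstream branch is configured):
#   git diff @{upstream}..HEAD | review_diff
```

Keeping the reviewer command as a parameter means the same helper works with any CLI model runner that reads a prompt from stdin.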
Using AI for tests and boilerplate was where things stood three months ago. Now it genuinely one-shots complex implementations.
When we write code we use a compiler to translate it into other code that the computer can understand. Now we tell AI to write code that is then compiled into other code that the computer can understand.
It seems very similar at the end of the day. The problem is it makes the process easier. That’s what everyone is so upset about. And that’s only an issue because we don’t feel special anymore. It sucks, but I’m sure it will pass, even if it takes a generation.
I must disagree with you here. Telling the compiler what to do is not like prompting an LLM. I see writing code as a form of art, and a big part of that is understanding the logic behind the program and the creative process. Imagine it like painting a picture: the artist/dev will go through all the stages of drawing/coding, the vision will change along the way, and the outcome might be different from what was originally anticipated.
This creative pipeline usually gives the project a better result. One could say it gives the project more soul.
With AI you are no longer the artist; you are the manager requesting the result, and since AI does not undergo this process of creativity, the result is a soulless husk. At best it is only what you asked for, but nothing more.
If people were complaining about AI because of its ease of use, the same people would be complaining about Python’s approach of human-speech-like code. (Not saying that there are no people who do so.)
So with this logic are you also not an artist if you use tools like Photoshop? Do you need to write with pen and paper?
Is writing code in any language other than assembly also cheating?
I don’t know why this reply is being downvoted
If I had to guess, it’s probably because most gamers aren’t programmers.
No, of course not. Did you even finish reading my comment? I thought I made it clear that the ease of use is not the issue; the lack of creativity is. Using Photoshop still requires you to think about what you want and how to get there. AI just gives you the output. There is no creativity involved in prompting.
When the first drawing tablets came out, people loved them. Almost no one was under the impression that it was “cheating”. Even with the use of AI you can still make creative projects, but the creativity has to come from you. Vibecoding or using image-gen does not involve creative thought.
EDIT: Imagine playing a game made by someone who is not passionate about their work. That’s what it feels like to play an AI-made game.
Vibecoding is idea-driven implementation. You have an idea; you are creative in your ideas and not in the implementation.
There’s a difference between using AI to help you code and pure vibe coding. The latter is how you end up with slop, but the former can absolutely speed up skilled developers.
Same is true across the board with AI use. It can easily be a force multiplier for people as long as you don’t turn off your brain and slop away.
It’s similar, but it’s not the same thing.
Anyone can have an AI “write code”, but ultimately you’re still responsible for the AI’s output and for ensuring that the end result is good. If you are a competent developer, you know about things like testing, storage, security and safety (especially when dealing with sensitive data like user data), backups, monitoring, etc., along with understanding each line of code. AI will never be perfect because humans aren’t perfect either; AI-written code requires review just like human-written code does. If you aren’t a programmer, you won’t be able to review the code AI writes, and mistakes will be missed, just as they are when human-written code goes unreviewed. I don’t see that ever changing, because no software is perfect; there will always be bugs once the software is complex/sophisticated enough.
AI does generate societal damage, but that’s mostly because of how companies abuse it and less because of the technology itself.
That’s my thoughts on AI and especially AI coding. That ended up being much longer than I expected and there’s more to it but you get the idea.
I never said anything about not reviewing the code. You still need to review it, test it, and all that. But using a tool to generate the code isn’t the end of the world; it’s just the next iteration of how we tell computers what to do. Saying “no AI code” seems like a recipe for failure.