Earlier this week, PCWorld published a roundup of Windows 12 rumors translated from PCWelt that does not meet our editorial standards. We’re deeply embarrassed by it, and I personally apologize that the article was published. It should not have been, but we’re keeping the article live (with an editor’s note at the top) so it remains in the public record.
Windows Central published a response detailing its errors. Thanks for keeping us accountable, guys — genuinely. In the same spirit of accountability, I want to explain how this happened, and what we’re doing to ensure a mistake like this never occurs again.
Let’s start by discussing how PCWorld handles translated articles, and then I’ll dive into the issues with the article itself.
It seems crazy to me that journalists trust machine- or AI-translated articles enough to use them as a source in their own articles.
I've always treated them as unreliable: something to use only when there are no other options available, and only to get the gist of what a piece might be about.
If I were a journalist citing one, I'd need a native speaker to confirm the content before I'd be confident enough to include it.
Journalistic integrity in this day and age? Hasn’t that been outlawed yet?
Good response. Love it when peeps own up to their mistakes.
I thought this was a very well-written, transparent article that took accountability as seriously as it should.

I am still not sure why people are using AI for translation when translation software already existed. People mention that AI is more context-aware, but I feel like when you saw those friction points in old translation software, it prompted you to look further into the context, whereas AI will just make an executive decision, and people feel like it must be right because it's AI. I guess it's possible old translation software, or even a human translator, would have done the same thing, but I still think people would have less inherent trust in the old software alone.

I do want to point out that this AI issue was just a small part of the problem; they addressed plenty of other issues and how they plan to remedy those.
This wasn't even an AI issue, or even a translation issue. They published an article that lacked sources, and it still wasn't good enough once sources were added.
Yea, I mentioned in my comment that there was a confluence of issues, but the article does point out that the AI translation made the statement more definitive.
Edit to add:
As part of our post-mortem on this article’s evolution, PCWelt’s executive editor pointed out that the translation makes the article sound more definitive than its native German. He says that in the context of the article, the German word “soll” signals a rumored expectation, but the English translation used “will” instead of something more akin to “is rumored to.”
Translation is what the transformer architecture was originally designed for. It is the state of the art, and translation software has been using ML for a long, long time.
This feels like an appropriate use of AI, but a failure of editing.
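For what it's worth, the mechanics are trivial these days; the judgment call is the hard part. Here's a rough, hypothetical sketch using the Hugging Face transformers pipeline with a MarianMT German-to-English checkpoint (my choice of model and example sentence, not whatever PCWelt or PCWorld actually used), just to show how easily a hedge like "soll" can come out sounding definitive:

```python
# Hypothetical sketch: assumes the Hugging Face "transformers" library and the
# Helsinki-NLP/opus-mt-de-en MarianMT checkpoint (my choices, not PCWorld's tooling).
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-de-en")

# "soll" signals a rumored expectation, roughly "is said to / is supposed to".
sentence = "Windows 12 soll neue KI-Funktionen erhalten."
result = translator(sentence)[0]["translation_text"]
print(result)
# Depending on the model, this can come back as a flat "Windows 12 will get new
# AI features", dropping the rumor hedge. That nuance is exactly what a human
# editor still has to catch.
```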
Not with general-purpose LLMs. They start off OK, but they become much more interested in continuing the text they've already translated than in looking back at what they're actually meant to translate. So they drift off course as the translation gets longer.
General-purpose LLMs' failure at a task like translation must be very funny for their investors. Even the more translation-focused ones seem to have issues.
[DeepL] translation is said to be generated using a supercomputer that reaches 5.1 petaflops and is operated in Iceland with hydropower.
In general, [convolutional neural network]s are slightly more suitable for long coherent word sequences, but they have so far not been used by the competition because of their weaknesses compared to recurrent neural networks.
The weaknesses of DeepL are compensated for by supplemental techniques, some of which are publicly known.
(ETA I need to edit my comments to federate them?)
I am still not sure why people are using AI for translation when translation software already existed.
Pre-existing software was also never terribly accurate.
AI is now the dog you blame for your flatulence.

Yes, the word for cat is "chat", but the word for chat, in the online sense, is also "chat", and it's pronounced like the English word. It should also be capitalised because it's a proper noun, eliminating any ambiguity that might exist.
Next time I shit in my boss's coffee, I'll blame AI. After all, he required me to use it more.
Me: “Should I shit in my boss’s coffee?”
ChatGPT: “This is probably not a good idea. Most people do not like shit in their coffee.”
Me: “I really think my boss would like it.”
ChatGPT: “You’re absolutely right. You should definitely shit in your boss’s coffee. He’s sure to appreciate it.”
(And then, when your boss is mildly irritated, show him this conversation.)
“AI made me do it.”
Admitting their mistakes makes me want to read their articles more. If only Microsoft could bring themselves to do the same.