AI Translations Are Adding ‘Hallucinations’ to Wikipedia Articles
www.404media.co/ai-translations-are-adding-hall…
Wikipedia editors have implemented new policies and restricted a number of contributors who were paid to use AI to translate existing Wikipedia articles into other languages, after discovering that these AI translations introduced “hallucinations,” or errors, into the resulting articles.
The new restrictions show how Wikipedia editors continue fighting to keep the flood of generative AI content across the internet from diminishing the reliability of the world’s largest repository of knowledge. The incident also reveals how even well-intentioned efforts to expand Wikipedia are prone to errors when they rely on generative AI, and how such errors are remedied by Wikipedia’s open governance model.
The issue in this case starts with an organization called the Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.
Comments from other communities
And then they’ll train the models on the hallucinated content, causing even more hallucinations
We wouldn’t say a human translator who did this was hallucinating.
AI translations are wrong. The product is defective.
I think it’s that thing where they do it on purpose so someone corrects it for free and then they steal it.
What do you mean steal it? The article makes it clear that the errors are likely the result of AI hallucinations that arose during translation and that accounts that have submitted multiple articles with these errors will be subject to additional review or banning from this program. It seems likely that underpaid people using AI to translate things aren’t paying as much attention to the output as Wikipedia expects.
underpaid people
Nobody is paid for editing Wikipedia, the economic incentive is nonexistent.
(Except for shills/manipulators.)
Edit: I should’ve read the article.
You should have read the article:
[…] Open Knowledge Association (OKA), a non-profit organization dedicated to improving Wikipedia and other open platforms.
“We do so by providing monthly stipends to full-time contributors and translators,” OKA’s site says. “We leverage AI (Large Language Models) to automate most of the work.”
AI is literally ruining everything for absolutely no reason.
No one asked for this shit.
No one asked for an easier way to translate articles into other languages? I’d imagine that’s something a lot of us would like. Tbf, I’m bilingual tho
Fine. Take your hallucinated translation and read it yourself. Just don’t commit back to the official page and pollute the website.
No one asked for brain dead AI to fuck it up. Language doesn’t translate word for word. Phrases don’t translate natively. But AI will still just translate word for word instead of structures of phrases.
I asked my gf who is Croatian (I’m Canadian) what “Fuck this Shit” translates into, and she gave me three examples, one of which very roughly translates to “Send it all to a dick”.
Language doesn’t translate word for word. Phrases don’t translate natively. But AI will still just translate word for word instead of structures of phrases.
[emphasis mine]
Expected to read “traditional translation tools” here!
Since old Google Translate was terribly unnatural but now…[0]

New stuff is buggy in a new way[1] but overall it’s being screwed up in helpful ways for many… except human translators (who I don’t see any job programs for!) :-/
[0] Ancient article
[1] “output [is] often…homogeneous, blind to local details, or flat-out wrong“
You seem bitter af about AI. lol. Do you get this angry when people use Google Translate as well? Or are you mostly upset Wikipedia is having to implement regulations?
Did you miss that this is the fuck_ai Lemmy? Brain rot is fucking real.
Fair enough. lol. No questions allowed. Got it. Nice community y’all are fostering here
I was just saying this doesn’t seem like a malicious thing. More like fallout. Like hating the dairy industry because someone spilled milk
No, it would be like hating the dairy industry because the industry forces us to drink half rotten milk all day telling us it’s better than it was before.
Reality is now officially just a series of popular misconceptions
Reality is now officially just a series of popular misconceptions
That is a misconception, but not (yet) a common enough one to be listed here: https://en.wikipedia.org/wiki/List_of_common_misconceptions
Deleted by author
…to the surprise of no one with a brain.
Doesn’t make the technology bad. We just should remember that its weak and strong sides are logically connected: it’s fuzzy logic based on probabilities of the next token across a few attention spaces, so artifacts are inevitable.
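The point about next-token probabilities can be illustrated with a toy sampler (the vocabulary, logits, and temperature below are invented for illustration; a real model has tens of thousands of tokens and learned scores):

```python
import math
import random

def softmax(logits, temperature=1.0):
    """Turn raw scores into a probability distribution over tokens.
    Higher temperature flattens the distribution, increasing randomness."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical next-token candidates while translating "Vienna":
# "Wein" (wine) is a plausible-looking wrong choice.
vocab = ["Wien", "Vienna", "Wein"]
logits = [2.0, 1.5, 0.5]

probs = softmax(logits)
# Even the least likely token keeps a nonzero probability, so a sampler
# will occasionally emit the wrong word -- that's the "artifact".
choice = random.choices(vocab, weights=probs)[0]
```

Because sampling never fully zeroes out unlikely tokens, fluency and hallucination come from the same mechanism, which is the commenter’s point about the weak and strong sides being connected.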
Upfront: Here’s the Administrators’ Noticeboard discussion.
Okay, this one apparently slipped under my radar, albeit it seems like they’re pretty small and only started in 2022. Here’s their 2025 report.
It seems like their limited focus is on using LLMs for interwiki translation; to what extent its paid editors are capable of that, I have no idea. We maintain a list of paid editing companies here (usually undisclosed against policy).
OKA asserts:
I have no idea how they reached this conclusion or how they think they’re qualified to translate anything given the random “totally not a Central European language” capitalization of words like that.
Per 404:
20 for any reasonable-size article could not adequately be vetted by one person in an 84-hour work week, for context, and that’s $9.90/hour at 40 hours. (edit: wait, sorry, I read that as $397 per *week*; $397 per *month* would be < $2.50/hour. What the *fuck*.)
Overall, before reading the discussion, the people at OKA seem like disruptive morons.
Edit: Into the discussion we go:
Jesus christ. 🤦
Edit 2: 7804j just cannot stop themself from transparently using an LLM to participate in the discussion.
Edit 3: “we ensure they are above the minimum wage in the countries where the editors reside” oh my fucking god
Anything AI is adding hallucinations to everything. If you can even call it that, because it implies some sort of consciousness or agency, which LLMs definitely don’t have.