
Is AI Translation Good Enough for a Published Book?

Melanie Koeppen · 9 min read
AI translation · book translation · translation quality · self-publishing

The question I get asked most about LingoHop is the one in the title: is AI translation actually good enough for a book you intend to publish, charge for, and ask real readers to spend a Sunday afternoon on?

The honest answer is "it depends on the book, and you should test it yourself." But that answer is not very useful, so let me try to do better.

What "good enough" means in this context

A published book is not a chat message. The standard is not "the reader can understand what was meant." The standard is "the reader does not stop, does not notice the translation, finishes the book, and leaves a four-star review." That is a much higher bar.

There are three different failure modes for a translated book. The first is factual: a sentence means something different in the target language than in the source. The second is stylistic: the meaning is correct but the prose reads as stiff, foreign, or machine-translated. The third is voice: the sentences are fine in isolation but the book no longer sounds like you wrote it. AI translation in 2022 failed on all three. In 2026 it reliably handles the first, mostly handles the second, and partly handles the third. That is the real progress story.

What modern AI book translation actually does

The version of "AI translation" that most authors picture is Google Translate in a browser tab. That is not what a service like LingoHop is doing.

The pipeline is two passes. The first is DeepL, a neural translation engine that head-to-head studies have repeatedly rated the strongest commercial engine for European languages. DeepL is accurate, fluent, and very good at basic syntax. Its weakness is that the output is sometimes correct but flat: short sentences become shorter, idioms get translated literally, and the overall rhythm flattens out.

The second pass is Claude, a large language model from Anthropic. Claude takes the DeepL output one chunk at a time, reads it alongside the original, and rewrites the translation with three explicit goals: fix idioms that were translated word for word, restore rhythm and sentence variety, and preserve the author's stylistic notes (which the author writes once at the start of the project, for example "keep the narrator's voice dry and understated"). Claude is also told to leave proper nouns alone and to preserve chapter structure exactly.
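To make the shape of that pipeline concrete, here is a minimal sketch of the two passes in Python. It is illustrative only, not LingoHop's production code: the prompt wording, the chunking, and the model name are assumptions, although the DeepL and Anthropic calls use their public Python SDKs as documented.

```python
# Illustrative two-pass sketch: a DeepL draft, then an LLM polish pass.
# Not LingoHop's actual code; prompt wording and model name are assumptions.
import deepl
import anthropic

translator = deepl.Translator("DEEPL_AUTH_KEY")         # official deepl SDK
claude = anthropic.Anthropic(api_key="ANTHROPIC_KEY")   # official anthropic SDK

STYLE_NOTES = "Keep the narrator's voice dry and understated."  # written once per project

def translate_chunk(source_text: str, target_lang: str = "DE") -> str:
    # Pass 1: neural machine translation for an accurate, fluent draft.
    draft = translator.translate_text(source_text, target_lang=target_lang).text

    # Pass 2: rewrite the draft alongside the original: fix literal idioms,
    # restore rhythm and sentence variety, apply the author's style notes,
    # leave proper nouns and chapter structure untouched.
    response = claude.messages.create(
        model="claude-sonnet-4-5",  # placeholder model name
        max_tokens=2048,
        messages=[{
            "role": "user",
            "content": (
                f"Polish this {target_lang} translation so it reads as if written "
                f"by a native author. Fix idioms translated word for word, restore "
                f"rhythm and sentence variety, and keep proper nouns and chapter "
                f"structure exactly as they are.\n"
                f"Style notes: {STYLE_NOTES}\n\n"
                f"Original:\n{source_text}\n\n"
                f"Draft translation:\n{draft}"
            ),
        }],
    )
    return response.content[0].text
```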

The combination is meaningfully better than either step on its own. Here is what the difference actually looks like:

EN

She had not slept. Not really. She had lain in the dark for hours, watching the ceiling fan turn, listening to the house settle around her like an old animal getting comfortable. By the time the first grey light reached the curtains, she had made up her mind.

DE · DeepL

Sie hatte nicht geschlafen. Nicht wirklich. Sie hatte stundenlang im Dunkeln gelegen, hatte zugesehen, wie sich der Deckenventilator drehte, hatte zugehört, wie sich das Haus um sie herum setzte wie ein altes Tier, das es sich gemütlich macht. Als das erste graue Licht die Vorhänge erreichte, hatte sie ihre Entscheidung getroffen.

DE · Claude polish

Sie hatte nicht geschlafen. Nicht richtig. Stundenlang hatte sie im Dunkeln gelegen, hatte den Deckenventilator beobachtet und gehört, wie das Haus sich um sie herum zur Ruhe legte, wie ein altes Tier, das eine bequeme Stelle findet. Als das erste graue Licht an den Vorhängen ankam, war ihre Entscheidung gefallen.

DeepL on its own is correct. A German reader would understand every word. But you can feel the literal grammar: "hatte ihre Entscheidung getroffen" is the dictionary translation of "had made up her mind" and reads as slightly clinical. The Claude pass replaces it with "war ihre Entscheidung gefallen," which is what a German author would actually write. It also shortens "Nicht wirklich" to "Nicht richtig," tightens the rhythm of the middle sentence, and swaps "es sich gemütlich macht" for "eine bequeme Stelle findet," which is warmer and more visual. None of these are mistakes in the DeepL output. They are just choices a careful native writer would make and a literal translation would not.

Multiply that effect across 80,000 words and the result is a book that reads as if it had been written in German, not translated into it.

Where AI translation still falls short

You should not assume the pipeline works equally well on every book.

Literary fiction with a strong stylistic voice is the hardest category. If your prose lives on rhythm, line breaks, ambiguity, double meanings, or carefully placed silences, AI will smooth them out by default. The Claude pass helps, especially when you supply style notes, but it will not catch every choice you made on purpose. For a Sally Rooney-style literary novel, AI translation is a draft, not a final.

Poetry is worse: it is essentially untranslatable by machine because it relies on metre, rhyme, and sound-play that do not survive the journey into another language. A human poet-translator is the only honest option for poetry, and even they will tell you it is a different poem in the new language.

Wordplay, puns, and dialect are also weak spots. AI does not know that "Mrs Malaprop" needs to keep misusing words in the target language too, and it does not know how to translate a Glaswegian narrator into a recognisably regional German voice. You can flag these passages in the review and fix them yourself, but you have to know to look.

Very short text fragments (headings, chapter titles, two-word lines of dialogue) are unreliable on the polish step, because there is not enough context for Claude to do better than DeepL. LingoHop skips the Claude pass for anything under 15 words for this reason.
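In code terms that gate is a one-line word count check before the polish pass. Here is a sketch building on the snippet earlier in this post; the 15-word threshold is the one just mentioned, and `polish_chunk` is a hypothetical helper standing in for the Claude step.

```python
# Skip the polish pass for very short fragments (headings, chapter titles,
# two-word lines of dialogue): too little context for the LLM to beat DeepL.
def translate_with_gate(source_text: str, target_lang: str = "DE") -> str:
    draft = translator.translate_text(source_text, target_lang=target_lang).text
    if len(source_text.split()) < 15:      # threshold described above
        return draft                       # keep the DeepL draft as-is
    return polish_chunk(source_text, draft, target_lang)  # hypothetical helper
```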

If your book is mostly any of the things in this section, AI translation is still useful as a first draft. It is not, on its own, a finished book.

The review step is the part nobody talks about

The thing that makes AI book translation viable as a publishing workflow, not just a curiosity, is the review interface. Every page is editable side by side with the original. You can read each chapter, fix anything that does not land, write a new sentence yourself, or apply a style note that propagates across the whole book.

This is the step where the author still matters. The AI does the heavy lifting (translating 80,000 words in under half an hour) and you do the quality work (reading the result with the eye of someone who wrote the book in the first place). For non-fiction and most genre fiction, that review is mostly approval with the occasional fix. For literary fiction, the review is more substantial. Either way you stay in control of what ships.

If you do not speak the target language, you can still review by paying a native speaker for a few hours of their time. Post-edit rates are around EUR 0.02 to EUR 0.05 per word, far below the cost of a full human translation, and the result is dramatically better than either pure AI or pure DIY. See the cost comparison post for the maths on this.

How to test it on your own book

You do not have to take my word for any of this. The way to find out whether AI translation is good enough for your book is to test a sample.

Pick one chapter that you know is representative. Not the easiest chapter, not the hardest, but one that contains a typical mix of narration, dialogue, and whatever stylistic moves your book actually relies on. Translate that chapter into one target language. Read the result. If you speak the language, that is enough. If you do not, send the chapter to a native-speaker friend or pay a freelance reader on Reedsy or Upwork for an hour of feedback.

Three questions to ask the reader. Did anything sound translated rather than written? Did any sentence stop you or feel wrong? Would you keep reading? If the answer to the first two is "no" and to the third is "yes," the rest of the book will go fine. If the answer is more mixed, you have learned something genuinely useful: this is a book that needs a human reviewer, and you now know to budget for one.

LingoHop offers two free sample translations on every new account, specifically so you can run this test before you pay for anything.

What I would actually do, by genre

Non-fiction, self-help, business, how-to, memoir: AI translation alone is fine for most authors. Review the result yourself, ship it.

Commercial genre fiction (thriller, mystery, romance, fantasy, science fiction): AI translation plus a careful self-review is fine for most authors. If your book has a strong stylistic voice, add a native speaker reviewer at post-edit rates and budget around EUR 0.03 per word.

Upmarket or literary fiction: AI translation plus a paid native-speaker reviewer is the sensible default. Total cost lands around EUR 1,500 to EUR 4,000 for an 80,000-word novel, dramatically less than a full human translation while keeping the prose at a level you can publish proudly.

Literary fiction where the writing itself is the product, or poetry: hire a literary translator from the start. AI is not the right tool yet.

That is the honest map. AI translation is not a single yes-or-no question. It is a tool that is now genuinely good at most of what self-published authors actually write, while still being the wrong tool for a smaller, important subset.

If you are not sure where your book falls, run the sample and find out. That is the entire point of the free trial.

Melanie

Frequently asked questions

How good is AI book translation in 2026 compared to 2022?

Dramatically better. The combination of a strong neural translation engine (DeepL) and a literary-quality polish from a large language model (Claude) was not a viable workflow before 2023. By 2026 the output is fluent enough that native readers regularly do not identify it as machine-translated for non-fiction and most genre fiction. The remaining gap is in literary prose where rhythm, voice, and idiom carry the book.

Which genres translate best with AI?

Non-fiction (business, self-help, how-to, memoir), commercial genre fiction (thrillers, mysteries, romance, fantasy), and clear narrative writing in general. These categories rely on clarity and pace rather than ornate prose, and AI handles them very well.

Which genres are riskiest to translate with AI?

Literary fiction with a strong stylistic voice, poetry, books with heavy use of dialect, slang, or wordplay, and anything where the writing itself is the product. For these books AI can still produce a usable first draft, but you should plan to either invest in a human reviewer or hire a literary translator from the start.

Can a native speaker tell that a book was translated by AI?

Often no, if the pipeline includes a literary polish pass and the author reviews the result. Raw DeepL output is correct but stiff and can read as translation-ese. The Claude polish step fixes the rhythm, replaces literal translations of idioms, and makes the text sound native. The author review step catches anything the AI got wrong.

What does the review interface actually let me change?

Every page is editable side by side with the original. You can rewrite a sentence, fix a name, adjust a phrase, or rewrite the whole chapter in your target language if you speak it. You can also write a style note that the AI applies across the whole book (for example, 'keep dialogue informal' or 'use the formal you for the narrator'). You stay in control.

What about translating book covers and image text?

LingoHop handles text inside images using either Pillow overlay (for simple banners and interior illustrations) or an AI image model (for covers, where the surrounding artwork matters). You can preview each image translation in the review screen, and switch back to the original if a particular image does not work out.
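If you are curious what "Pillow overlay" means in practice, it is roughly the text-drawing that Pillow's ImageDraw module provides out of the box. A minimal sketch follows, with the box coordinates, font, and colours as assumptions; a real implementation also has to detect where the original text sits and match its styling.

```python
# Minimal Pillow overlay sketch: cover the original text region, then draw the
# translated string on top. Box coordinates, font, and colours are assumptions.
from PIL import Image, ImageDraw, ImageFont

def overlay_translation(src_path: str, out_path: str, translated: str,
                        box=(40, 40, 560, 110)) -> None:
    img = Image.open(src_path).convert("RGB")
    draw = ImageDraw.Draw(img)
    draw.rectangle(box, fill="white")                      # paint over the original text
    font = ImageFont.truetype("DejaVuSans.ttf", size=36)   # assumed font file
    draw.text((box[0] + 8, box[1] + 8), translated, font=font, fill="black")
    img.save(out_path)
```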
