Two attitudes about translation are on my mind. One is about Joseph Smith: “Seeing words appear in a seer stone is magic, not translation. Translation is when you have the equivalent text in a foreign language, like Google Translate.” The other attitude is not uncommon among translators and translation clients: “Google Translate isn’t translation. It’s just inputting one text and getting the mechanical equivalent in another language.”
In the first case, translation is defined as what Google Translate does; in the second, translation is precisely what Google Translate doesn’t do: further evidence that it’s easy to feel confident about what “translation” means until you take a close look. When people say that a translation should be linguistically equivalent, they often mean something else: cultural equivalence, a “natural” tone, some spark of the human. A bit of magic, in other words.
So let’s talk about machine translation. Ten years ago, and even more recently, Google Translate was a frequent source of hilariously bad student work. With the new statistically based methods of the 2010s, machine translation improved markedly, making it more useful for getting the gist of a text in a foreign language, and occasionally producing an unobjectionable sentence or two. With the switch to AI-based machine translation using neural networks over the last few years, however, the situation has changed dramatically. Three years ago I could tell myself: at least I can deal with ungrammatical sentences and misspellings better than a computer can. Now, though, even on flawed original texts, computers are catching up.
When my students write essays today, I require them to write in class, using nothing but pencil and paper.
Do I use machine translation in my own translation work? Of course I do, at least for contemporary pragmatic texts. (AI-based translation is still not much use with sixteenth-century literature.) I have hooks to two different AI-based translation systems in my translation environment. The less literary and more pragmatic a text is, the more likely it is that the output from one of them will contain at least a few useful chunks. Adopting a sentence unchanged may be possible in only 5% of cases or fewer, though. Most sentences will require at least some editing. But even when considerable editing is required, I work faster when I have ready translations of each word at my disposal and phrasal chunks to move around as needed.
It’s important not to let the output of Google Translate determine what I do. I’ve found the best approach is to study the original sentence first and translate it mentally, and only then look at the machine translation output to see whether it’s usable and, if so, what needs to be changed or moved. There’s usually something useful there, although sometimes it’s faster to scrap the whole thing and start over.
But when Google Translate is dealing with a text similar to the translations its neural network was trained on, it effectively captures the expert decision-making of many human translators. Sometimes I discover in the machine translation an alternative I hadn’t considered, or a more efficient route to the meaning I’m trying to capture. Using that distillation of expertise helps me not only to translate more rapidly, but also to produce a qualitatively superior translation. Any translator (or at least any translator working on pragmatic texts in a reasonably common language pair) not using machine translation at the appropriate step in their workflow is working more slowly and less accurately than they could.
* * *
Finally, back to Joseph Smith. The distinction between human and machine translation seems clear enough, but the reality of machine-augmented human translation makes it much murkier. Similarly, it’s hard to draw a boundary in practice between seeking secular knowledge and receiving revelation.
Translation and revelation are in fact much alike. Having more information is always better, whether that means consulting experts or the handbook or a good dictionary. Unthinkingly following the handbook isn’t revelation, any more than pasting in the results from Google is translation, but the best results consider the collected expertise provided in both places. This also means that revelation can agree 100% with the handbook, that contradicting the handbook might reflect a lack of inspiration, and that it’s impossible for the outside observer to know which case is which.
One paradoxical result is that translation can be harder with machine translation, because the machine takes care of all the easy cases, leaving you only the edge cases, where there may be no good answer. Either the meaning will be distorted, or a theme will be omitted, or an extended metaphor will be disturbed. Inspiration is hard when there is no good option and someone’s going to end up unhappy no matter what you do.
Translation resembles revelation not just metaphorically, but also as a matter of process. Both involve studying out various alternatives. Both involve carefully listening for a voice. When translating a literary text, I try to hear the logic of the argument, the tone, a humorous allusion or subtle contempt, or careful play on the audience’s expectations. Reproducing that aspect of a text in English is most often where I’ll need to add words not obviously in the original, or make some unstated logic explicit, or replace one metaphor with another. (This is another area where academic translation is different: we want to retain an author’s soccer comparisons so that students can write essays about them, but the people who need a translation to sell more widgets in the US or attract American investment capital very much want them replaced.) The similarity between translation and revelation is so close that translation can be an excellent spiritual exercise to prepare the mind to receive revelation.