Neural Machine Translation (NMT) has by now become without a doubt the trendiest term among the different players in the translation industry – freelancers, language service providers, translation buyers – and, above all, computational linguists.
This blog post aims to give a quick overview of NMT’s current status to those who are just beginning to familiarize themselves with the subject and want to understand why this new technology matters.
We will also briefly discuss whether translators should be afraid of machine translation replacing human translation, and if so, how and when that might happen. But before coming to this recurring question in the philosophy of language, let us take a brief historical detour and assess the current landscape.
How did machine translation work previously?
Even before NMT, there were several models for machine translation, such as the rule-based model, which tried to reproduce the source text in the target language with the help of a set of grammar rules and a dictionary.
Similarly, example-based machine translation used previous translations to draw logical deductions about the correct translation of words and expressions in the source text. For example, if the Hungarian “Iszom” is “I drink” in English and “Nem iszom” is “I don’t drink”, then the translation of “nem” must be “don’t”. Both models had their advantages – and, of course, their limits.
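As a toy illustration of the example-based idea – purely hypothetical, and not how any production system is actually built – a few lines of Python can mimic this kind of deduction by comparing two known example pairs that differ in a single word:

```python
# Toy illustration of example-based deduction: given two aligned sentence
# pairs that differ by one word, infer the translation of the extra word.
# Hypothetical example only, not a real EBMT implementation.

examples = {
    "Iszom": "I drink",
    "Nem iszom": "I don't drink",
}

def infer_new_word(short_src, long_src, examples):
    """Infer the translation of the extra source word by diffing the
    target sides of two known example pairs."""
    short_tgt = set(examples[short_src].lower().split())
    long_tgt = set(examples[long_src].lower().split())
    extra_src = set(long_src.lower().split()) - set(short_src.lower().split())
    extra_tgt = long_tgt - short_tgt
    return dict(zip(sorted(extra_src), sorted(extra_tgt)))

print(infer_new_word("Iszom", "Nem iszom", examples))
# prints: {'nem': "don't"}
```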
In the 2000s, these models were replaced by Statistical Machine Translation (SMT), which uses huge corpora (bilingual and monolingual text) and occurrence statistics to deduce the appropriate translation of terms and phrases. Until the beginning of 2017, Google Translate used SMT for all language pairs (although, due to the lack of bilingual corpora for some rare language pairs, an extra pivot step through English was inserted into the process).
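To make the statistical idea a bit more concrete, here is a minimal sketch in Python: translation “probabilities” are simply relative frequencies counted from a tiny, made-up set of aligned word pairs, and the engine picks the most frequent candidate. Real SMT systems use vastly larger corpora, phrase tables and language models, so treat this only as an illustration of the principle:

```python
# Minimal sketch of the statistical idea behind SMT: estimate translation
# probabilities from co-occurrence counts in a tiny, made-up word-aligned
# corpus, then pick the most probable candidate.
from collections import Counter, defaultdict

# Hypothetical aligned word pairs: (source word, target word)
aligned_pairs = [
    ("ház", "house"), ("ház", "house"), ("ház", "home"),
    ("kutya", "dog"), ("kutya", "dog"),
]

counts = defaultdict(Counter)
for src, tgt in aligned_pairs:
    counts[src][tgt] += 1

def translation_prob(src, tgt):
    """Relative-frequency estimate of P(tgt | src)."""
    total = sum(counts[src].values())
    return counts[src][tgt] / total if total else 0.0

best = max(counts["ház"], key=lambda t: translation_prob("ház", t))
print(best, translation_prob("ház", best))
# prints: house 0.6666666666666666
```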
All three models described above, as well as their hybrids, have many flaws.
Apart from often producing grammatically incorrect sentences, they tend to fail at grasping and reproducing the meaning when it is more sophisticated or associative, as they usually prefer the most common, basic equivalent of a given term.
Second – and this is an even bigger obstacle to improving them – they ignore context and subject area. While an engine can be “trained”, specialized in a subject field and calibrated, it still translates segments (sentences) without taking the preceding or following sentences into consideration, so they have no effect on the translation. What’s more, sentences are themselves divided into sub-segments (put simply, the engine looks for the longest matching part that is already in the corpus), and the translations of these phrases do not affect each other. Even outsiders can understand what a huge disadvantage this is when translating a flowing, coherent text.
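The segment-by-segment, longest-match behaviour described above can be sketched with a toy greedy segmenter. The phrase table below is invented for illustration, and the output is a crude gluing-together of independently translated chunks – which is precisely the weakness discussed here:

```python
# Toy sketch of phrase-based segmentation: greedily split the input into the
# longest word sequences found in a phrase table and translate each piece
# independently of the others. The phrase table contents are made up.
phrase_table = {
    ("good", "morning"): "jó reggelt",
    ("everyone",): "mindenki",
    ("good",): "jó",
}

def translate(words):
    out, i = [], 0
    while i < len(words):
        # look for the longest phrase starting at position i
        for length in range(len(words) - i, 0, -1):
            chunk = tuple(words[i:i + length])
            if chunk in phrase_table:
                out.append(phrase_table[chunk])
                i += length
                break
        else:
            out.append(words[i])   # unknown word: copy it through unchanged
            i += 1
    return " ".join(out)

print(translate("good morning everyone".split()))
# prints: jó reggelt mindenki
```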
According to many, SMT has reached the limits of its capacity, and without further adjustment (which would require a considerable investment of time and money) it will not show any significant progress.
How does neural machine translation work, and why is it so popular?
One of Google’s big announcements last year was that, according to their tests, in some environments, some language pairs and some text types, NMT produces quality almost as good as that of human translators.
Expressed numerically, the claim was that in certain isolated cases NMT made 60% fewer errors than the earlier, phrase-based models.
Even though the statement was cautiously phrased, the whole world took notice, and even media outside the translation industry (The Economist, for example) and other tech giants (such as Facebook) started paying attention to the subject.
Moreover, big companies involved in machine translation started issuing statements almost every week about transitioning to the neural model (the most recent news came from Amazon).
The reason for this enthusiasm, apart from Google’s announcement, may be that NMT is a genuinely novel, ground-breaking technology which, although it still has a long way to go, already produces quality equal to or better than that of the earlier models (depending on the type of text).
NMT, which goes hand in hand with artificial intelligence research, is the first model that tries to imitate human thinking in a way that detaches itself from the word order and structural ties of the source text while also taking context into account.
Without going deep into the details of computational linguistics, probably the best way to explain the process is this: the machine tries to grasp and synthesize the meaning of the source text at a level that is almost independent of the language, and then to recreate that meaning in the target language.
Because it is lexically and structurally much less dependent on the source text, it makes far fewer conjugation and agreement mistakes than statistical machine translation.
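For readers who want to peek under the hood, the encoder-decoder idea can be sketched in a few dozen lines of Python using the PyTorch library (our choice for illustration; production systems such as Google’s are far larger and add mechanisms like attention, and the model below is untrained, so its output is meaningless). The encoder compresses the source sentence into a vector that serves as the near language-independent representation, and the decoder generates target-language tokens from it:

```python
# Minimal, untrained sequence-to-sequence sketch of the encoder-decoder idea
# behind NMT, using PyTorch (assumed installed). Illustration only.
import torch
import torch.nn as nn

SRC_VOCAB, TGT_VOCAB, EMB, HID = 100, 100, 32, 64

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(SRC_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)

    def forward(self, src_ids):
        _, hidden = self.rnn(self.embed(src_ids))
        return hidden            # the whole sentence compressed into a vector

class Decoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(TGT_VOCAB, EMB)
        self.rnn = nn.GRU(EMB, HID, batch_first=True)
        self.out = nn.Linear(HID, TGT_VOCAB)

    def forward(self, prev_ids, hidden):
        output, hidden = self.rnn(self.embed(prev_ids), hidden)
        return self.out(output), hidden   # scores over the target vocabulary

encoder, decoder = Encoder(), Decoder()
src = torch.tensor([[5, 12, 7]])     # a toy "sentence" of three source token ids
state = encoder(src)                 # encode the whole sentence at once
tok = torch.tensor([[1]])            # hypothetical start-of-sentence token id
for _ in range(4):                   # greedy decoding, four steps
    scores, state = decoder(tok, state)
    tok = scores.argmax(dim=-1)      # pick the most likely next target token
    print(tok.item())
```

Training such a skeleton on millions of sentence pairs is what turns it into an actual translation engine – and that training is where the hardware demands discussed below come from.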
Familiar and new challenges
No matter how popular, even hyped, NMT has become, it is important to remember that, in the words of MT expert John Tinsley at the SlatorCon conference in London, it is “ultimately just another type of MT”.
It would be a mistake to think that it will eliminate all the shortcomings of the previous models and will produce perfect translations in every sense.
Although its development is still in an early phase, we can already see that it fails to cope with longer sentences and is much more reliable with shorter segments.
This trait, by the way, is equally true of previous MT solutions, especially for languages such as Hungarian.
What’s more, no matter how fluent and natural-sounding neural translations are, NMT sometimes omits words, expressions or even whole phrases – something that was not typical of the previous models.
According to our current knowledge, agglutinative languages are still quite hard for NMT engines to process, at least compared to Romance and Germanic languages.
Also, fixing errors is slow, because determining the cause of a translation error requires a deep understanding of the system, and it is not even certain that the effort is worth it.
Finally, the problem of hardware demand must be mentioned. Currently, only the biggest tech giants or organizations with a dedicated NMT budget can afford the hardware pool needed to operate the neural networks behind this technology.

What to expect in years to come?
NMT research and development is expected to make a lot of progress in the near future, but translation companies will, as usual, probably react more cautiously.
They will closely monitor the results and review their options, but they are unlikely to shift quickly to an NMT + human post-editing workflow, except in certain narrow areas of application and a few standout language pairs.
So far, some East Asian languages (like Chinese and Japanese) seem to show the most progress in MT quality.
Hungarian – we can safely say “as usual” – seems less suited to reaching human quality even with NMT, whether translating into English or out of it.
The greatest improvement can be expected in uniform and repetitive technical documents pertaining to a given narrow field.
So when will NMT replace human translation?
Most experts say that for some subjects, some text types and some language pairs it is possible that NMT will prevail in a decade or two, but creative human translation will always be needed.
NMT’s importance lies rather in the fact that, when integrated into a CAT tool, it helps translators work more quickly and gives them more time to concentrate on translating segments that demand more creativity and abstraction.
We should also remember that NMT is a great help to nonprofessional translators. Even today, most of the people who use Google Translate and other free MT solutions are amateurs trying to understand a foreign text, not professional translators. The era of machine translation that requires no human checking or post-editing is still not on the horizon.