There’s been a lot of buzz lately about machine translation, with translators and translation industry watchers either condemning machine translation as useless and damaging to the profession or touting it as the next big thing in the industry. ATA President Jiri Stejskal wrote his most recent column for the ATA Chronicle about MT, reader Polly-Vous Français sent this link to a Financial Times blog about machine translation’s role in the blogosphere, and last evening the Colorado Translators Association sponsored an excellent presentation by Cris Silva of All in Portuguese, which focused on integrating machine translation into your existing suite of computer-assisted translation tools.
Cris’ presentation, which she also gave at the recent ATA conference with Giovanna Boselli as a co-presenter, was interesting in that she used the ATA certification exam grading scale to score the output from Google Translate. Out of three sample texts that Cris submitted to Google Translate, two would have failed the ATA certification exam and one would have passed (for what it’s worth, I think that 30% is a better passing rate than what is achieved by the humans who take the ATA exam!). So, what’s the outlook for MT? Friend? Foe? Colleague?
I’ll go on the record as saying that I think that most translators are much too paranoid about machine translation. MT technology has come a long way, and I think that we’re on the cusp of its being considered a standard productivity tool for translators, much as translation memory is today. However, I think it’s not worth losing sleep about machine translation sending human translators the way of the telegraph operator. For an industry leader’s outlook, check out 5 Questions with Renato Beninatto on Sarah Dillon’s blog “There’s Something About Translation.” In this interview, Renato makes the case that the translation industry is in a bind, with “good translators…scarce and becoming scarcer” and the demand for translation growing at 15-20% a year. Companies are turning to MT as an option because their volume of content demands it, not because they want to avoid human involvement in the process.
So, I decided to do my own unscientific test of what Google Translate is producing these days, using three different texts that are similar to what I might be translating on any given day: fluffy, dense and flowery. Let’s see how it did.
For the “fluffy” test, I pulled a tidbit from the French Wikipedia page about France’s first lady, Carla Bruni-Sarkozy. Let’s read about her life pre-Nicolas!
Alors qu’elle vit avec l’éditeur littéraire Jean-Paul Enthoven, elle entame une liaison avec le fils de celui-ci, Raphaël Enthoven, qui était alors marié avec Justine Lévy. En 2001, elle a un fils, Aurélien, avec Raphaël Enthoven[19]. En 2004, elle est l’un des personnages du premier best-seller de Justine Lévy Rien de grave. L’auteur, fille de Bernard-Henri Lévy (dont l’éditeur historique et meilleur ami n’est autre que le père de Raphaël Enthoven), y expose son important passage à vide et sa période de reconstruction à la suite de son divorce avec Raphaël Enthoven, parti avec Carla Bruni.
And the result from Google Translate:
While she lives with the literary editor Jean-Paul Enthoven, she began an affair with the son of the latter, Raphaël Enthoven, who was then married to Justine Lévy. In 2001, she has a son, Aurélien, with Raphael Enthoven [19]. In 2004, she is one of the first bestseller by Justine Lévy Nothing serious. The author, daughter of Bernard-Henri Lévy (including historical editor and best friend is none other than the father of Raphael Enthoven), sets out its important transition to vacuum and its period of reconstruction following his divorce from Raphaël Enthoven, who with Carla Bruni.
At first, with the exception of its confusion over the French use of the present tense (which sometimes drives me crazy too!), things are looking good. Then, Mme. Bruni-Sarkozy herself becomes a bestseller, rather than a character in one. Finally, the “transition to vacuum” following “his” divorce is a complete disaster. This passage highlights some of the areas in which MT really struggles, for example confusion over the fact that French does not have separate words for “her” and “his.” The key question: is this editable, or could I have translated it from scratch in less time? I’d opt to redo this one entirely.
Next, let’s move on to something dense. Here’s an excerpt from the Quebec department of labor’s information about its purpose:
Emploi-Québec agit à la fois comme producteur, utilisateur et diffuseur d’information nationale et régionale sur le marché du travail.
- Son rôle de producteur de données lui permet d’offrir une information de qualité, fiable et à jour.
- Utilisant ces données dans sa prestation de services, Emploi-Québec occupe une position privilégiée pour bien cerner les besoins d’information des personnes et des entreprises.
Google Translate tells us that:
Emploi-Québec acts both as a producer, user and disseminator of information on national and regional labor market.
* His role as a producer of data allows it to provide quality information, reliable and current.
* Using these data in its services, Emploi-Québec occupies a privileged position to identify the information needs of individuals and businesses.
Here, I’d say that with the exception of some misplaced modifiers and the pesky problem of French referring to all nouns as “he” or “she,” we’re moving closer to something that’s editable, more along the lines of what one would expect from a non-expert human translator.
Now, how about something flowery? Here’s a passage from a project I worked on last year, a promotional document for an arts festival in Senegal.
Faut-il s’étonner que, portés par cette lame de fond, les responsables africains aient, à leur tour, pris la décision historique de transformer l’Organisation de l’Unité Africaine (O.U.A.) en Union Africaine (U.A.) ? En reprenant à son compte la vision prospective de Kwame Nkrumah, la nouvelle Union Africaine a donné à l’Afrique de nouvelles ambitions.
And the result:
Is it any wonder that, carried by the slide background, African leaders have, in turn, took the historic decision to transform the Organization of African Unity (OAU) African Union (AU)? Showing in his account of the vision of Kwame Nkrumah, the new African Union has given to Africa for new ambitions. Complete their destiny.
Wow. To quote that immortal line from Cool Hand Luke, “What we’ve got here is a failure to communicate.” Even someone who doesn’t speak French can appreciate the exuberant incomprehensibility of this passage. Slide background? His account of the vision? And the OAU and the AU are now the same thing? We’ll just let Google Translate complete its destiny on passages such as this.
So, my unscientific observation would be that for straightforward texts, this type of MT is almost at the point where it could be a useful add-on to a translator’s tool suite. Many translation environment tools now incorporate some type of MT module, and on a text such as the second example, I would probably be willing to try this feature. For more nuanced texts, I think that the human touch is still critical.
Of course, the MT experts will tell you that the key is source documents using “controlled language”. Sure, buddy.
Much of the technical documentation I see reads like it was written by drunken lemurs. How these critters are going to start exercising discipline in their authoring is beyond me. In fact, I translate a lot of text for an excellent technical documentation company here in Germany which promotes controlled language concepts, high-end editorial management systems and reusable text. Their work is far superior to most of what I see from other sources. And guess what? I have yet to do a manual for these guys where sloppy, minor variations of the instructions do not hinder reusability. And that stuff also looks like the product of binge drinking if you put it through MT.
Great article, Corinne.
@Kevin
A professor of mine told me how Caterpillar tried to implement a controlled language for their tractor-manual writers. What they found was that the MT didn’t get better, but their technical writers all started quitting.
Very interesting. I think, though, that the MT results here are as good as they are because French and English are relatively similar languages. I work with Japanese and English, and I have yet to see an MT system that wasn’t absolute garbage.
Corinne,
Thanks for attending and your feedback!
Your experiment opens the door to talk numbers.
Let’s hypothesize here for a moment and say that you had about 5000 words to translate from scratch, and that the text was similar in nature to your text about Emploi-Québec, which has 55 words with very few mistakes/errors (let’s propose an average of 5 edits per every 50 words, just to make it easy to calculate).
Now let’s imagine that you copy and paste those 5000 words into Google Translate (which takes less than a minute). If you had to translate 5000 words from scratch, without any leverage from your translation memory, it’d take you about 2 days, maybe a bit more, going by an industry standard of 2000 words a day. If we assume that every 50 or so words in these 5000 will have about 5 relatively easy edits, then 5000 words would yield 500 edits. If we think that we can confidently make 1 edit per minute (and most edits would not take this long), then we’re talking about 8 hours to post-edit the MT output.
Now, what’s wrong with this picture? As Mylène Vialard expertly suggested last night, there is the factor of post-MT editor fatigue. And of course, we still lack real-world empirical data, both on the type of text and on the bias and fatigue factor of machine translation editors. Also, my assumption was based on the best-case scenario, not the worst one (the Senegal text). But still, a fascinating proposition!
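For the curious, the back-of-the-envelope math above is easy to sanity-check in a few throwaway lines (the figures are the hypothetical assumptions stated above, not measured data):

```python
# Sanity-check of the hypothetical post-editing estimate.
WORDS = 5000                  # size of the job
WORDS_PER_DAY = 2000          # industry-standard from-scratch output
EDITS_PER_50_WORDS = 5        # assumed density of easy post-MT edits
MINUTES_PER_EDIT = 1          # assumed average time per edit

from_scratch_days = WORDS / WORDS_PER_DAY            # 2.5 days
edits = WORDS / 50 * EDITS_PER_50_WORDS              # 500 edits
post_editing_hours = edits * MINUTES_PER_EDIT / 60   # about 8.3 hours

print(f"From scratch: {from_scratch_days:.1f} days")
print(f"Post-editing: {edits:.0f} edits, roughly {post_editing_hours:.1f} hours")
```

Even allowing for the editor-fatigue caveat, the gap between roughly one working day of post-editing and two and a half days of from-scratch translation is what makes the proposition interesting.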
Thanks again,
Cris
@Cris
I think you’re missing one more aspect. Yes, you can “rescue” MT in some cases. But no amount of editing will turn bad writing into good writing. Having “rescued” my share of poor translations done by native Japanese speakers, I’m very familiar with this truism.
So a career rescuing machine translation is a career delivering minimum-quality, low-value work. I don’t know about you, but that’s not where I want to position myself in the market. 🙂
I wouldn’t underestimate the longer-term impacts of this technology, although I’d expect it to become a productivity solution rather than an enemy.
The languages involved are also significant (and as far as I know, the French-English pair has been one of the better-translated ones, partly due to the large number of bilingual documents available in Quebec).
I also work with Japanese/English and have no idea how to read French, but the translations looked much better than what is possible with Japanese/English. Even if this technology becomes viable sometime in the future, it will be languages like Spanish, French, etc. that will be translatable first, and pairs like Japanese/English that will come last. The reason is that Japanese leaves out all information that is already understood from context, so it’s going to be extremely difficult for a computer program to get anything close to what a reasoning human mind can create.
Aaron (No relation to the Aaron above.)
Thanks Corinne, this was a very interesting experiment.
I share most of the doubts expressed in the comments above about controlled language and the quality of MT, especially for languages that are very different as far as syntax and other linguistic aspects are concerned.
One thing that has not been mentioned, and which may in part explain the inconsistent quality of the results coming from Google’s MT, is the fact that the Google system is a statistical one, as Google’s own FAQ explains.
My understanding of this is that, at a basic level, Google’s MT works not unlike a huge translation memory system: if there is a high similarity between the sentence you pasted into the MT window and a sentence located in Google’s bilingual/aligned repository, you are going to obtain a high-quality result (depending, of course, on the quality of the human translation that ended up in the repository in the first place, but that’s another story).
If, on the other hand, there is a high degree of “fuzziness” or no similarity whatsoever between your pasted text and the text contained in the repository, the MT is going to have to rely more heavily on its statistical algorithms, breaking the sentences up into smaller chunks and inevitably producing less accurate results, especially with languages that are more dissimilar (e.g. English-Japanese).
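The translation-memory intuition described above can be sketched with a toy matcher. To be clear, this is only an illustration of the fuzzy-match idea, not Google’s actual pipeline; the one-entry repository and the 0.8 threshold are invented for the demo:

```python
# Toy illustration of the "huge translation memory" intuition behind
# statistical MT: score a pasted sentence against a bilingual repository
# and fall back to chunk-level statistics when nothing is close enough.
from difflib import SequenceMatcher

# Hypothetical mini-repository of aligned French-English segments
repository = {
    "Emploi-Québec agit comme producteur d'information sur le marché du travail.":
        "Emploi-Québec acts as a producer of labour market information.",
}

def best_match(sentence, repo, threshold=0.8):
    """Return the stored translation if similarity clears the threshold,
    or None when the system would have to fall back to statistical
    decoding of smaller chunks."""
    best = max(repo, key=lambda s: SequenceMatcher(None, sentence, s).ratio())
    if SequenceMatcher(None, sentence, best).ratio() >= threshold:
        return repo[best]   # near "perfect match": high-quality output likely
    return None             # high fuzziness: less accurate output likely

print(best_match(
    "Emploi-Québec agit comme producteur d'information sur le marché du travail.",
    repository))
```

A real system scores over billions of aligned segments and recombines partial matches statistically, but the quality cliff between a close match and no match is the same one described above.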
Did you notice that when you translate a web page with Google you are given the option to improve the suggested translation? I suppose that, in doing so, you are inserting a human-created “perfect match” into their repository. Theoretically, the next time the same sentence is encountered, even on a different page, the result sent by the MT should at least be very similar to the text you have translated and suggested. Such text will also be fed to the statistical rules, thereby slightly improving its rate of accuracy.
It would be nice to see evidence of a user-inserted translation being recycled somewhere else.
I am glad you decided to experiment in this controversial area; very interesting. I have also had my doubts/prejudices/fears related to MT, and before I read your samples, I asked myself which type of document MT would do the worst at. My hunch proved to be correct: in marketing/fluffy texts, context, connotation and intent are paramount, and you cannot teach a machine (at least I do not think you can — yet?) a feel for languages. I was incredibly surprised that one of the MT tests would have passed the ATA certification exam — that is fascinating!
As a developer of SMT systems, and as somebody engaged in a massive project to translate the English Wikipedia (3M+ pages) into several Asian languages, I am glad to see some openness to MT from translators.
The project we are embarked on would not be possible without SMT and a highly structured man-machine collaboration. Humans are CRITICAL to getting any kind of reasonable quality from these efforts. As a developer of some pretty compelling capabilities in this technology, I can assure you that we are a very, very long way from replacing humans. SMT is necessary because the rate of information and knowledge creation is simply too rapid. We are creating a world where English speakers have access to the greatest amount of information and knowledge while much of the world faces information poverty. Asia Online is one attempt to reduce this gap, and I think of it as a literacy project rather than a human translator replacement project.
Automated translation is necessary because it is important to share the knowledge of the world across language barriers.
MT will have an impact on the world of professional translation, but only on the most repetitive kinds of material: user manuals, technical product documentation, etc., where there is a high level of redundancy. Humans are still needed to proof-read and clean this up. Today, MT works best on content that is highly repetitive, and frankly boring work for humans anyway. Yes, a new class of less skilled translator may emerge, but the need for skilled translators is unlikely to be reduced. SMT systems work best on material that is very close to the material they were trained on. I don’t see anybody handing off any really critical content to computers. It would be foolish and dangerous.
I think one key future for translation (and MT) is as a driving force for knowledge dissemination across the world. I think we will see massive translation projects being undertaken to make basic information more freely available. Vint Cerf, Ethan Zuckerman and others have written about how important this is to the continued evolution of a “global” internet.
http://www.ethanzuckerman.com/blog/the-polyglot-internet/
The Asia Online business model I think will become a more common model for these massive translation projects. They will be built around partnerships with forward thinking LSPs (e.g. McElroy) and the model will look something like the process described below.
Our approach is unique (today, but I think it will become more commonplace) because we are also embarked on perhaps the largest translation project ever undertaken. We are planning to translate the Wikipedia and several other large English content sources (100M+ pages) into 10 Asian languages (Thai, Indonesian, Hindi, etc.) to help build a stronger knowledge and information foundation in these countries. This is done on a foundation of SMT plus a proof-reading and post-editing infrastructure to facilitate this huge content translation project. Thus, we are both a very large user of MT and a developer of SMT and linguistic analysis and correction tools. This allows us to have a tightly integrated feedback loop between the SMT systems development arm and the content-translation-focused user arm of the company. We believe that this already formidable infrastructure will evolve and drive the creation of many new innovations that continue the march to superior quality.
These tools, processes and procedures will not only initially generate superior SMT output, but will also allow us to continuously keep climbing in MT quality through a highly structured feedback and error correction platform that combines tools, processes and crowd-sourcing. The work we are doing on the Wikipedia results in doubling the size of the Thai web by end of February.
A typical corporate or LSP translation engine building engagement would go through the following phases:
• Phase 1: Initial building of the core engine using your TM data together with our baseline systems
• Phase 2: Detailed review and correction of the most commonly used phrases in new corpus to ensure that the system is accurate with these critical phrases
• Phase 3: Expand the user base to the core translator community to gather corrective feedback from them
• Phase 4: Continue to gather corrective feedback on the translations from the primary user community to raise the quality of the systems through a managed community collaboration mechanism that is an integral part of an Asia Online system.
Thus the future, from my perspective, is to have a team of skilled linguists who steer groups of less skilled proof-readers to enable large blocks of high-value content to become multilingual.
We are already employing hundreds of bilingual people (including a few professional translators) in Thailand, and the community as a whole will benefit from this initiative.
Great post.
I personally have a hard time taking machine translation seriously. Machine translation will simply never, ever be able to deal with human language in all its nuances, and it will never, ever be able to obviate human translators (or editors), and these kinds of presentations (and tests) always reinforce that view for me. Some day, with the kind of biocircuitry now only imagined in Star Trek, I might envision some kind of organic-like “computer” simulating human language…but I’m quite certain that this kind of technology is 500 years away or more.
The optimism that people bring to the discussion of machine translation is charming, but the technology is not even *remotely* convincing when it comes down to it. And I shudder at the notion of virtual cattle yards of editors and translators processing mediocre computer output to become intelligible pabulum… It just ain’t a long-term model for anything.
Whoops, pablum I meant. 🙂 Funny how pabulum and pablum mean such different yet similar things.
@Ryan
Your professor likely didn’t know the Caterpillar context first-hand. I did, because I trained all those authors and in-house and external translators. Five years after leaving the project, I was on the thesis committee of a PhD student who worked on the subsequent versions of the MT system for Caterpillar, with many improvements shown at the semantic level. Around the same time, I corresponded with one of the writers I had trained, who had become one of the technical writing trainers. And he was still training on the evolved form of the Caterpillar Technical English of the earlier days.
The pilot and early production phases were not a piece of cake, but it did achieve extremely high content reuse in just a short set of manuals (not just for the tractors, but across the range of heavy-machinery products).
Keep in mind that the Caterpillar Controlled Language and MT project was one of the pioneers in the field for production implementation. It was extremely costly at the time. Projects of the same magnitude now, with similar volume and quality expectations for MT, spend far less than the amounts needed back then. I have been involved in several additional MT implementations for corporate customers who have insisted on continuing with MT because they achieved ROI and quality at levels that could keep the projects moving forward successfully (and published case studies on it). And those who take on such projects realize that they cannot have perfect translations in a couple of months. It is a long-term strategic investment.
@Cris,
I’ve done a lot in providing detailed descriptions of MT projects, platforms used, context of software and hardware, time constraints, type of task and amount of work done in specified time periods, and provided empirical data on it.
See the sets of MT project case studies at:
http://www.geocities.com/mtpostediting/
And don’t use the copy/paste idea with Google. There are much better ways of doing it. Read those project case studies, which achieved much higher volume with quality deemed not just publishable; in one case study the work was for a company’s critical customer project, and it brought back more business.
@Kevin
I promoted combined Controlled Language and MT for several years, in contexts in which it was possible to refine the source texts. Then I found myself in other contexts on the receiving end, with no opportunity to change the source. It was necessary to create adapted MT techniques without “Controlled Language,” emphasizing terminology extraction and rapid terminology entry (with the ability to handle variation) into commercial desktop MT packages, and optimizing a methodology with nothing more than MS Word and MS Excel plus the MT software program. Nothing different from what any professional translator would have at their desk.
“Controlled Language” covers a lot of different levels, ranging from something as basic as terminology management, to grammatical rule standardization, and even semantic domain modeling. It all comes down to being able to improve authoring and translation, and there are lots of style guides out there on international (technical) communication. Caterpillar had their controlled language guide with 120+ rules, General Motors their CASL guide with 60+ rules, etc. Sharon O’Brien did her PhD thesis a few years ago comparing several different controlled language guides and finding the overlap between them. General Motors did, however, create a basic version based on a dozen simple rules for improving international communication, which leads to better translatability.
See the following site which gathers all of the primary set of info on controlled language based on the series of international workshops on the topic (combined with MT and translation)
http://www.geocities.com/controlledlanguage/
If a software checking application, specifically based on the writing rules, is not implemented, then it is very difficult to maintain consistency over time. That’s the difference between having guidelines written in a document and having writing rules implemented in a software program that flags the author every time a writing rule or terminology entry is not followed as expected.
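To make the idea of such a checking application concrete, here is a minimal sketch; the three rules below are invented examples for illustration, not Caterpillar’s or General Motors’ actual rule sets:

```python
# Minimal sketch of a controlled-language checker: writing rules encoded
# in software that flag the author whenever a rule or terminology entry
# is violated.
import re

RULES = [
    (r"\butilize\b", "Terminology: write 'use', not 'utilize'"),
    (r"\b\w+\s+(?:was|were|been)\s+\w+ed\b", "Style: avoid the passive voice"),
    (r"^(?:\S+\s+){25,}", "Length: sentence exceeds 25 words"),
]

def check(sentence):
    """Return the list of rule violations found in one sentence."""
    return [message for pattern, message in RULES
            if re.search(pattern, sentence, flags=re.IGNORECASE)]

for finding in check("Utilize the wrench after the cover was removed."):
    print(finding)
```

A production authoring environment would run checks like these as the writer types; the hard part, as the comment above notes, is keeping the rule set and terminology maintained over time.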
@Jeff
The professor in question was Martin Kay (Stanford), and I’m pretty confident that he knew what he was talking about 🙂
Just a reminder that I was often laughed at 10 years ago when I said MT was going to improve dramatically around 2006 for French, Spanish, etc., due to an exponential increase in computer power and the use of SMT methods. They would have been bowled over with laughter if I had said that in 2008, something called “Google” would actually pass an ATA examination.
I also work with Japanese and Chinese, and there have been large gains with Japanese/English over the past few years.
Try going to a Japanese news site and running it through Google Translate. Still not good enough in most cases to edit, but the news articles were not pre-edited, and the program is free. $0.00.
Computer speed keeps doubling every 16 months, so we will see much better results in a few years. Even most Japanese/English will become editing work.
This is in reply to Kirti Vashee.
“The work we are doing on the Wikipedia results in doubling the size of the Thai web by end of February.”
This seems preposterous. The pie chart on the Asia Online page you linked claims that all other Asian languages (not Chinese, Japanese, or Korean) combined make up only 10M pages on the internet. That’s pages, not sites.
But there are many Thai websites in the million + page range each — the big discussion board sites. Sanook.com, Pantip.com, Kapook.com. And plenty more not far behind. And this is one language. How are you counting?
Saying you’ll double the number of Thai pages on the internet at all, let alone by the end of this month (48-hour warning) … I don’t know what to say. It sounds good to clueless investors, I guess.
As it happens, I saw (through a friend) some of the in-progress output of your company’s work from late last year. Two versions of your Thai translation of the Wikipedia article on Harry Potter. The earlier one was gobbledygook, about what we’d expect Google Translate to produce. The later one was much improved — it must have had a lot of human input. But even then it contained simple errors like Newark (the city in New Jersey) translated as นิวยอร์ค (“New York”), and $15 billion translated as 15 ดอลลาร์ล้าน (“15 dollars billion”). Those are the ones that stand out in my mind. And the rest was still often unintelligible overall.
Even if you did meet your project’s goal, that would mean that thanks to your efforts, half of the Thai content on the internet would be unnatural, non-native, and semi-incoherent. I don’t think that’s particularly valuable.
I have no beef with what your company is doing in theory, but please try to be more realistic about your goals, and the quality of what you can produce.
MaskedTranslator: “Machine translation will simply never, ever be able to deal with human language in all its nuances”
I agree. Many companies have a simple choice:
* Continue to write ‘fluffy’ text, and pay translators to translate the text.
* Write text that is clear for a machine to ‘understand’ and use machine translation.
Sometimes, machine translation is the best option for a company. Sometimes, human translation is the best option for a company. A 5-step flow chart on http://www.international-english.co.uk/mt-or-human-translation.html shows how to choose the best option.
I think MT can make the work of the translator easier through gisting. I translate French, but since I need to improve, I can only charge a low rate. I don’t think I could translate for money without using Google to increase my speed.
Here is a sample of French to English, which would help me if I got stuck:
In less than 25 years, the State of Quebec had restructured, modernized and developed in depth. Education at all levels cover the entire territory, health institutions were established in all major centers, Crown corporations multiplied, all major public services (police, roads, energy, local government) were assured everywhere. Failing to be independent, the State of Quebec became a French province, resulting in a new cultural revolution marked a transformation of the business and industry, environment hitherto traditionally anglophone and daunting to the language of the majority. In addition, Quebec has also promoted the development of bilingualism in the federal state.
Response to Rikker:
The data that we gathered on content by language was for a point in time in 2007. Also our focus was SE Asia excluding India. The research was validated by Gartner group colleagues and by the leading ISPs in the region who shared information with us. We also crawled much of the web in the region for local language content including all the sites you mention, so we feel that the estimates we came up with are reasonable and accurate in general. While there may be rounding errors, in general we stand by our research, but would be happy to change our assertions should a more credible data source be made available. It is true that the region is growing rapidly in terms of online population and content but the basic fact remains there is MUCH LESS content in the languages we are targeting.
In fact, you can go to Wikipedia today and see that the Thai Wikipedia has about 40,000 articles versus 2,750,000 articles in the English Wikipedia. If you look closer, you will also see that the articles in the Thai wiki are much shorter and more limited in scope. This puts a Thai student at a relative disadvantage in terms of information access. By translating the bulk of the English Wikipedia, we will far more than double the volume of wiki-related content available to monolingual Thai speakers.
We are very much aware that what we produce is not equal to human translation and sometimes can justly even be called crap. Thus we employ 300 part-time translators who go through early versions of the MT and correct errors, which are fed back into the system to raise the quality. This is done in a way that maximizes the benefits of the corrections. As you point out, the output is still very likely to have errors (even though you did notice it improved) and will still need to be corrected, but the examples you point out are already useful to many people who are not professional translators. Our objective is to build a community of people who care about the content to come and help us “clean up” the errors, and our technology platform is designed to collect these corrections to help improve ongoing and future automated translations. In case you have not noticed, this crowdsourcing effect is working in many other arenas. Wikipedia itself is something that Encyclopedia Britannica dismissed as crap not so long ago. Wikipedia is far from perfect, but volunteer contributors have produced a resource that is today one of the top ten websites in the world, with about 2M unique visitors per month and still growing.
Our intent is to get the quality to a point where a crowdsourcing engagement becomes possible, since volunteers are much more likely to make small error-correction contributions than to completely translate a document. If you take a closer look at our Thai portal, you will see that we ask people to join us to clean up our initially “crappy” translations. This effort is just beginning, and we expect that it will help raise the quality to levels where there are few or no issues in terms of accuracy and meaning, even though we may still fall short of professional translator standards. We already have evidence that this approach works and invite you to monitor the site to see the content quality continue to rise. In time we think it will be more than compelling.
Already, millions of people use MSN Live, Google and Babelfish every day to translate content they would not otherwise be able to access. Data indicates that many find this useful even though it may sometimes be gobbledygook. Ours is a more focused effort that attempts to reach significantly higher quality. This can only be done by engaging humans, millions of humans if possible. The internet and social networking trends increasingly support this type of collaboration. We know that while there may be 500,000+ professional translators in the world, there are probably hundreds of millions of competently bilingual people who may lend an occasional hand to correct raw MT. Our mission is to engage a tiny portion of these people. If we are successful, we will at least double the useful web content that exists in these countries today, even though along the way it may sometimes look wanting and useless.
Kirti, thanks for responding.
What you describe in your second comment sounds much more reasonable than claiming things like double the Thai content on the web by the end of the month. The first comment reads like a stump speech for investors.
My only beef is with outlandish claims (research or no research). I’m all for crowdsourcing, and I contribute to both Thai and English Wikipedias. If your project actually results in more high quality content, that’s great. I’ve signed up on your site so I can follow what your company is doing more closely.
I can’t see machines taking over the jobs of human translators in the near future, as they have done in so many other professions (remember telephone operators?).
These machine translators are OK when all you need is a quick understanding of some rather simple text, but if you are running a business, or otherwise depend on the accuracy of a translation, using professional translation services is the only way to go.