Sense-for-sense translation is the oldest norm for translating. It fundamentally means translating the meaning of each whole sentence before moving on to the next, and stands in normative opposition to word-for-word translation, which means translating the meaning of each lexical item in sequence. The term "sense-for-sense" was coined by Jerome in his "Letter to Pammachius", where he said that, "except of course in the case of Holy Scripture, where the syntax contains a mystery," he translates non verbum e verbo sed sensum de sensu: not word for word but sense for sense. Arguably, however, Jerome was not inventing the concept of sense-for-sense translation, which most scholars believe was invented by Cicero in De optimo genere oratorum, when he said that in translating from Greek to Latin "I did not think I ought to count them out to the reader like coins, but to pay them by weight, as it were." Nor was Jerome coining the term "word-for-word": he borrowed it from Cicero as well, or perhaps from Horace, who warned the writer interested in retelling ancient tales in an original way nec verbo verbum curabis reddere / fidus interpres: not to try to render them word for word like a faithful translator. Some have read that passage in Horace differently: Boethius in 510 CE and Johannes Scotus Eriugena in the mid-9th century read it to mean that translating literally is "the fault/blame of the faithful interpreter/translator," and feared that they had incurred it.
In his 1680 preface to his translation of Ovid's Epistles, John Dryden proposed dividing translation into three parts: metaphrase, paraphrase, and imitation. Metaphrase is word-by-word, line-by-line translation from one language into another. Paraphrase is sense-for-sense translation, where the message of the author is kept but his words are not so strictly followed as his sense, which may itself be altered or amplified. Imitation is the use of either metaphrase or paraphrase, but the translator has the liberty to choose which is appropriate and how the message will be conveyed. In his 1813 lecture "Über die verschiedenen Methoden des Übersetzens", Friedrich Schleiermacher proposed that "either the translator leaves the author in peace, as much as possible, and moves the reader towards him, or he leaves the reader in peace, as much as possible, and moves the author towards him". In 1964, Eugene Nida described translation as having two different types of equivalence: formal and dynamic equivalence. Formal equivalence is when there is focus on the message itself, in both form and content.
The message in the target language should match the message in the source language as closely as possible. In contrast, dynamic equivalence is less concerned with matching the message in the target language to the message in the source language; the goal is instead to produce the same relationship between target text and target audience as there was between the original source text and its audience. In 1981, Peter Newmark referred to translation as either semantic or communicative: semantic translation is source-language biased and faithful to the source text, while communicative translation is target-language biased and idiomatic. A semantic translation's goal is to stay as close as possible to the semantic and syntactic structures of the source language, rendering the exact contextual meaning of the original. A communicative translation's goal is to produce on its readers an effect as close as possible to that produced upon the readers of the original. In addition to these concepts, in 1990, Brian Mossop presented his concept of idiomatic and unidiomatic translation.
Idiomatic translation is when the message of the source text is conveyed the way a target-language writer would convey it, rather than adhering to the way in which it was conveyed in the source text. Unidiomatic translation, by contrast, translates individual words rather than the message as a whole. In 1994, in modern Translation Studies, Lawrence Venuti introduced the concepts of domestication and foreignization, which build on concepts from Friedrich Schleiermacher's 1813 lecture. Domestication is the adaptation of culture-specific terms or cultural context, whereas foreignization is the preservation of the original cultural context of the source text. Venuti described domestication as the use of fluent and transparent strategies that result in acculturation, where "a cultural other is domesticated, made intelligible". While Schleiermacher's distinction between "bringing the author to the reader" and "taking the reader to the author" dealt with a social concern, Venuti's distinction between domestication and foreignization deals with ethical principles.
Gentzler, Edwin. Contemporary Translation Theories. 2nd ed. London and New York: Routledge.
Lefevere, André. Translation/History/Culture: A Sourcebook. London and New York: Routledge.
Newmark, Peter. A Textbook of Translation. New York: Prentice Hall.
Nida, Eugene A., and Charles R. Taber. The Theory and Practice of Translation. Leiden: Brill.
Robinson, Douglas. Who Translates? Translator Subjectivities Beyond Reason. Albany: SUNY Press.
Robinson, Douglas, ed. Western Translation Theory from Herodotus to Nietzsche. Manchester: St. Jerome.
Steiner, T. R. English Translation Theory, 1650–1800. Amsterdam: Rodopi.
Venuti, Lawrence. The Translator's Invisibility: A History of Translation. London and New York: Routledge.
Dubbing, mixing, or re-recording is a post-production process used in filmmaking and video production in which additional or supplementary recordings are "mixed" with original production sound to create the finished soundtrack. The process takes place on a dub stage. After sound editors edit and prepare all the necessary tracks – dialogue, automated dialogue replacement (ADR), Foley, music – the dubbing mixers proceed to balance all of the elements and record the finished soundtrack. Dubbing is sometimes confused with ADR, also known as "additional dialogue replacement", "automated dialogue recording", and "looping", in which the original actors re-record and synchronize audio segments. Outside the film industry, the term "dubbing" commonly refers to the replacement of the actors' voices with those of different performers speaking another language, which is called "revoicing" in the film industry. In the past, dubbing was practiced primarily in musicals when the actor had an unsatisfactory singing voice. Today, dubbing enables the screening of audiovisual material to a mass audience in countries where viewers do not speak the same language as the performers in the original production.
Films and sometimes video games are often dubbed into the local language of a foreign market. In foreign distribution, dubbing is common in theatrically released films, television films, television series, and anime. Automated dialogue replacement (ADR) is the process of re-recording dialogue by the original actor after the filming process to improve audio quality or reflect dialogue changes. In India the process is known as "dubbing", while in the UK it is called "post-synchronisation" or "post-sync". The insertion of voice actor performances for animation, such as computer-generated imagery or animated cartoons, is often referred to as ADR although it generally does not replace existing dialogue. The ADR process may be used, for example, to remove extraneous sounds such as production equipment noise, wind, or other undesirable sounds from the environment, or to replace foul language for TV broadcasts of a film. In conventional film production, a production sound mixer records dialogue during filming. During post-production, a supervising sound editor, or ADR supervisor, reviews all of the dialogue in the film and decides which lines must be re-recorded.
ADR is recorded during an ADR session. The actor, usually the original actor from the set, views the scene with the original sound and attempts to recreate the performance. Over the course of multiple takes, the actor performs the lines while watching the scene. The ADR process does not always take place in a post-production studio; it may also be recorded with mobile equipment. ADR can be recorded without showing the actor the image they must match, but by having them listen to the performance, since some actors believe that watching themselves act can degrade subsequent performances. Sometimes, a different actor than the original actor on set is used during ADR. One famous example is the Star Wars character Darth Vader, portrayed by David Prowse; in post-production, James Earl Jones dubbed the voice of Vader. Other examples include:

Ray Park, who played Darth Maul in Star Wars: Episode I – The Phantom Menace, dubbed by Peter Serafinowicz
Frenchmen Philippe Noiret and Jacques Perrin, who were dubbed into Italian for Cinema Paradiso
Austrian bodybuilder Arnold Schwarzenegger, dubbed for Hercules in New York
Argentine boxer Carlos Monzón, dubbed by a professional actor for the lead in the drama La Mary
Gert Fröbe, who played Auric Goldfinger in the James Bond film Goldfinger, dubbed by Michael Collins
Andie MacDowell's Jane in Greystoke: The Legend of Tarzan, Lord of the Apes, dubbed by Glenn Close
Tom Hardy, who portrayed Bane in The Dark Knight Rises and re-dubbed half of his own lines for ease of viewer comprehension
Harvey Keitel, dubbed by Roy Dotrice in post-production for Saturn 3
Dave Coulier, who dubbed replacements for swear words for Richard Pryor in multiple TV versions of his movies

An alternative method to dubbing, called "rythmo band", has been used in Canada and France.
It provides a more precise guide for the actors and technicians, and can be used to complement the traditional ADR method. The "band" is a clear 35 mm film leader on which the dialogue is hand-written in India ink, together with numerous additional indications for the actor – including laughs, length of syllables, mouth sounds, and mouth openings and closings. The rythmo band is projected in the studio and scrolls in perfect synchronization with the picture. Studio time is used more efficiently, since with the aid of scrolling text and audio cues, actors can read more lines per hour than with ADR alone. With ADR, actors can average 10–12 lines per hour, while rythmo band can facilitate the reading of 35–50 lines per hour. However, the preparation of a rythmo band is a time-consuming process involving a series of specialists organized in a production line; this has prevented the technique from being more widely adopted, though software emulations of rythmo band technology can overcome the disadvantages of the traditional process.
Translations of the Qur'an are interpretations of the scripture of Islam in languages other than Arabic. The Qur'an was written in the Arabic language and has been translated into most major African and European languages; the translation of the Qur'an into modern languages has always been a difficult issue in Islamic theology. Because Muslims revere the Qur'an as miraculous and inimitable, they argue that the Qur'anic text should not be isolated from its true form to another language or written form, at least not without keeping the Arabic text with it. Furthermore, an Arabic word, like a Hebrew or Aramaic word, may have a range of meanings depending on the context – a feature present in all Semitic languages, when compared to English and Romance languages – making an accurate translation more difficult. According to Islamic theology, the Qur'an is a revelation specifically in Arabic, so it should only be recited in Quranic Arabic. Translations into other languages are the work of humans and so, according to Muslims, no longer possess the uniquely sacred character of the Arabic original.
Since these translations subtly change the meaning, they are often called "interpretations" or "translations of the meanings". For instance, Pickthall called his translation The Meaning of the Glorious Koran rather than The Koran. The task of translating the Qur'an is not an easy one; part of this is the innate difficulty of any translation. There is always an element of human judgement involved in translating a text. This factor is made more complex by the fact that the usage of words has changed a great deal between classical and modern Arabic. As a result, Qur'anic verses which seem clear to native Arabic speakers accustomed to modern vocabulary and usage may not represent the original meaning of the verse. The original meaning of a Qur'anic passage will also be dependent on the historical circumstances of the prophet Muhammad's life and early community in which it originated. Investigating that context requires a detailed knowledge of hadith and sirah, which are themselves vast and complex texts. This introduces an additional element of uncertainty which cannot be eliminated by any linguistic rules of translation.
The first translation of the Qur'an was performed by Salman the Persian, who translated Surah al-Fatihah into the Persian language during the early 7th century. According to Islamic tradition contained in the hadith, Emperor Negus of Abyssinia and Byzantine Emperor Heraclius received letters from Muhammad containing verses from the Qur'an. However, during Muhammad's lifetime, no passage from the Qur'an was translated into these languages or any other. The second known translation was into Greek and was used by Nicetas Byzantius, a scholar from Constantinople, in his "Refutation of the Quran" written between 855 and 870. However, nothing is known about who made it or for what purpose; it is probable that it was a complete translation. The first fully attested complete translations of the Quran were done between the 10th and 12th centuries in the Persian language. The Samanid king Mansur I ordered a group of scholars from Khorasan to translate the Tafsir al-Tabari, originally in Arabic, into Persian. In the 11th century, one of the students of Abu Mansur Abdullah al-Ansari wrote a complete tafsir of the Quran in Persian.
In the 12th century, Abu Hafs Omar al-Nasafi translated the Quran into Persian. The manuscripts of all three books have been published several times. In 1936, translations in 102 languages were known. Robertus Ketenensis produced the first Latin translation of the Qur'an in 1143; his version was entitled Lex Mahumet pseudoprophete. The translation was made at the behest of Peter the Venerable, abbot of Cluny, and still exists in the Bibliothèque de l'Arsenal in Paris. According to modern scholars, the translation tended to "exaggerate harmless text to give it a nasty or licentious sting" and preferred improbable and unpleasant meanings over likely and decent ones. Ketenensis' work was republished in 1543 in three editions by Theodore Bibliander at Basel, along with the Cluni corpus and other Christian propaganda. All editions contained a preface by Martin Luther. Many early European "translations" of the Qur'an merely translated Ketenensis' Latin version into their own language, as opposed to translating the Qur'an directly from Arabic.
As a result, early European translations of the Qur'an were erroneous and distorted. In the early thirteenth century, Mark of Toledo made another, more literal, translation into Latin, which survives in a number of manuscripts. In the fifteenth century, Juan of Segovia produced another translation in collaboration with the Mudejar writer Isa of Segovia; only the prologue survives. In the sixteenth century, Juan Gabriel Terrolensis aided Cardenal Eguida da Viterbo in another translation into Latin. In the early seventeenth century, another translation was made, attributed to Cyril Lucaris. Ludovico Marracci, a teacher of Arabic at the Sapienza University of Rome and confessor to Pope Innocent XI, issued a second Latin translation in 1698.
Bible translations into English
Partial Bible translations into languages of the English people can be traced back to the late 7th century, including translations into Old and Middle English. More than 450 translations into English have been written; the New Revised Standard Version is the version most preferred by biblical scholars. In the United States, 55% of survey respondents who read the Bible reported using the King James Version in 2014, followed by 19% for the New International Version, with other versions used by fewer than 10%. Although John Wycliffe is credited with the first translation of the Bible into English, there were in fact many translations of large parts of the Bible centuries before Wycliffe's work. Parts of the Bible were first translated from the Latin Vulgate into Old English by a few select monks and scholars; such translations were in the form of prose or as interlinear glosses. Few complete translations existed during that time. Most of the books of the Bible were read as individual texts, thus the sense of the Bible as history that exists today did not exist at that time.
Instead, an allegorical rendering of the Bible was more common and translations of the Bible included the writer’s own commentary on passages in addition to the literal translation. Toward the end of the 7th century, the Venerable Bede began a translation of scripture into Old English. Aldhelm translated the complete Book of Psalms and large portions of other scriptures into Old English. In the 10th century an Old English translation of the Gospels was made in the Lindisfarne Gospels: a word-for-word gloss inserted between the lines of the Latin text by Aldred, Provost of Chester-le-Street; this is the oldest extant translation of the Gospels into the English language. The Wessex Gospels are a full translation of the four gospels into a West Saxon dialect of Old English. Produced in 990, they are the first translation of all four gospels into English without the Latin text. In the 11th century, Abbot Ælfric translated much of the Old Testament into Old English; the Old English Hexateuch is an illuminated manuscript of the first six books of the Old Testament.
Another copy of that text, without lavish illustrations but including a translation of the Book of Judges, is found in Oxford, Bodleian Library, Laud Misc. 509. The Ormulum is in Middle English of the 12th century. Like its Old English precursor from Ælfric, an Abbot of Eynsham, it includes very little Biblical text and focuses more on personal commentary. This style was adopted by many of the original English translators. For example, the story of the Wedding at Cana is almost 800 lines long, but fewer than 40 lines are the actual translation of the text. An unusual characteristic is that the translation mimics Latin verse, and so is similar to the better known and appreciated 14th-century English poem, Cursor Mundi. Richard Rolle wrote an English Psalter. Many religious works are attributed to Rolle, but it has been questioned how many are genuinely from his hand. Many of his works were concerned with personal devotion, and some were used by the Lollards. The 14th-century theologian John Wycliffe is credited with translating what is now known as Wycliffe's Bible, though it is not clear how much of the translation he himself did.
This translation came out in two different versions. The earlier text is characterised by a strong adherence to the word order of Latin and might have been difficult for the layperson to comprehend; the later text made more concessions to the native grammar of English. Early Modern English Bible translations are those made between about 1500 and 1800, the period of Early Modern English. This, the first major period of Bible translation into the English language, began with the introduction of the Tyndale Bible; the first complete edition of his New Testament appeared in 1526. William Tyndale used the Greek and Hebrew texts of the New Testament and Old Testament in addition to Jerome's Latin translation. He was the first translator to use the printing press – this enabled the distribution of several thousand copies of his New Testament translation throughout England. Tyndale did not complete his Old Testament translation. The first printed English translation of the whole Bible was produced by Miles Coverdale in 1535, using Tyndale's work together with his own translations from the Latin Vulgate or German text.
After much scholarly debate, it is concluded that this was printed in Antwerp; the colophon gives the date as 4 October 1535. This first edition was adapted by Coverdale for his first "authorised version", known as the Great Bible, of 1539. Other early printed versions included the Geneva Bible, notable for being the first Bible divided into verses and for negating the Divine Right of Kings. The first complete Roman Catholic Bible in English was the Douay–Rheims Bible, of which the New Testament portion was published in Rheims in 1582 and the Old Testament somewhat later in Douay in Gallicant Flanders. The Old Testament was completed by the time the New Testament was published but, due to extenuating circumstances and financial issues, was not published until nearly three decades later, in two editions: the first released in 1609, and the rest of the OT in 1610. In this version, the seven deuterocanonical books are mingled with the other books, rather than kept separate in an appendix. While early English Bibles were based on a small number of Greek texts, or on Latin translations, modern English translations of the Bible are based on a wider variety of manuscripts in the original languages.
Subtitles are text derived from either a transcript or screenplay of the dialog or commentary in films, television programs, video games, and the like, usually displayed at the bottom of the screen, but sometimes at the top if there is already text at the bottom of the screen. They can either be a form of written translation of a dialog in a foreign language, or a written rendering of the dialog in the same language, with or without added information to help viewers who are deaf or hard of hearing, who cannot understand the spoken dialogue, or who have accent recognition problems to follow the dialog. The encoded method can either be pre-rendered with the video or separate, as either a graphic or text to be rendered and overlaid by the receiver. The separate subtitles are used for DVD, Blu-ray and television teletext/Digital Video Broadcasting (DVB) subtitling or EIA-608 captioning, which are hidden unless requested by the viewer from a menu or remote controller key, or by selecting the relevant page or service, and always carry additional sound representations for deaf and hard-of-hearing viewers.
Teletext subtitle language follows the original audio, except in multi-lingual countries where the broadcaster may provide subtitles in additional languages on other teletext pages. EIA-608 captions are similar, except that North American Spanish stations may provide captioning in Spanish on CC3. DVD and Blu-ray only differ in using run-length encoded graphics instead of text, as do some HD DVB broadcasts. Sometimes, at film festivals, subtitles may be shown on a separate display below the screen, thus saving the film-maker from creating a subtitled copy for just one showing. Television subtitling for the deaf and hard of hearing is referred to as closed captioning in some countries. More exceptional uses include operas, such as Verdi's Aida, where sung lyrics in Italian are subtitled in English or in another local language outside the stage area on luminous screens for the audience to follow the storyline, or on a screen attached to the back of the chairs in front of the audience. The word "subtitle" is the prefix sub- ("below") followed by title.
In some cases, such as live opera, the dialog is displayed above the stage in what are referred to as surtitles. Today, professional subtitlers usually work with specialized computer software and hardware where the video is digitally stored on a hard disk, making each individual frame instantly accessible. Besides creating the subtitles, the subtitler also tells the computer software the exact positions where each subtitle should appear and disappear. For cinema film, this task is traditionally done by separate technicians. The end result is a subtitle file containing the actual subtitles as well as position markers indicating where each subtitle should appear and disappear. These markers are based on timecode if it is a work for electronic media, or on film length if the subtitles are to be used for traditional cinema film. The finished subtitle file is used to add the subtitles to the picture, either directly into the picture itself or kept separate to be rendered and overlaid by the receiver. Subtitles can also be created by individuals using available subtitle-creation software like Subtitle Workshop for Windows, MovieCaptioner for Mac/Windows, or Subtitle Composer for Linux, and then hardcoded onto a video file with programs such as VirtualDub in combination with VSFilter, which can also be used to show subtitles as softsubs in many software video players.
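To make the timecode markers concrete, here is a minimal sketch that parses cue timings from a SubRip (.srt) file, one of the simplest and most widely supported subtitle formats. The sample cue text and the helper names (SAMPLE_SRT, to_seconds, parse_srt) are invented for illustration; this is not tied to any particular subtitling tool.

```python
import re

# A tiny, invented .srt sample: each cue has an index, an in/out timecode
# pair ("HH:MM:SS,mmm --> HH:MM:SS,mmm"), and one or more lines of text.
SAMPLE_SRT = """\
1
00:00:01,000 --> 00:00:03,500
Hello, world.

2
00:00:04,200 --> 00:00:06,000
This line appears next.
"""

TIMECODE = re.compile(
    r"(\d{2}):(\d{2}):(\d{2}),(\d{3}) --> (\d{2}):(\d{2}):(\d{2}),(\d{3})"
)

def to_seconds(h, m, s, ms):
    """Convert an HH, MM, SS, mmm timecode to seconds."""
    return int(h) * 3600 + int(m) * 60 + int(s) + int(ms) / 1000.0

def parse_srt(text):
    """Yield (start_seconds, end_seconds, subtitle_text) for each cue."""
    for block in text.strip().split("\n\n"):
        lines = block.splitlines()
        match = TIMECODE.match(lines[1])  # line 0 is the cue index
        if match:
            start = to_seconds(*match.groups()[:4])
            end = to_seconds(*match.groups()[4:])
            yield start, end, "\n".join(lines[2:])

for start, end, caption in parse_srt(SAMPLE_SRT):
    print(f"{start:7.3f} -> {end:7.3f}  {caption!r}")
```

A player or renderer then simply shows each cue's text whenever the playback clock falls between its start and end values, which is exactly the role of the "appear and disappear" markers described above.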
For multimedia-style webcasting, see SMIL (Synchronized Multimedia Integration Language). Some programs and online software allow automatic captions using speech-to-text features. For example, on YouTube, automatic captions are available in English, French, Italian, Korean, Portuguese, and Spanish. If automatic captions are available for the language, they are automatically published on the video and can be managed using the YouTube Video Manager in the Creator Studio. Same-language captions, i.e. without translation, were primarily intended as an aid for people who are deaf or hard of hearing. Internationally, there are several major studies which demonstrate that same-language captioning can have a major impact on literacy and reading growth across a broad range of reading abilities. This method of subtitling is used by national television broadcasters in China and in India, such as Doordarshan. The idea was struck upon by Brij Kothari, who believed that same-language subtitling (SLS) makes reading practice an incidental and subconscious part of popular TV entertainment, at a low per-person cost to shore up literacy rates in India.
Same-language subtitling is the use of synchronized captioning of musical lyrics as a repeated reading activity. The basic reading activity involves students viewing a short subtitled presentation projected onscreen while completing a response worksheet. To be effective, the subtitling should have high-quality synchronization of audio and text; better yet, the subtitling should change color in syllabic synchronization to the audio model, and the text should be at a level that challenges students' language abilities. Closed captioning is the American term for closed subtitles intended for people who are deaf or hard of hearing; these are a transcription rather than a translation, and usually contain descriptions of important non-dialog audio as well, such as sound effects.
Legal translation is the translation of texts within the field of law. As law is a culture-dependent subject field, legal translation is not linguistically transparent. Intransparency in translation can be avoided somewhat by use of Latin legal terminology, where possible. Intransparency can lead to expensive misunderstandings in terms of a contract, for example, resulting in avoidable lawsuits. Legal translation is thus usually done by specialized law translators. Conflicts over the legal impact of a translation can be avoided by indicating that the text is "authentic", i.e. operative on its own terms, or instead is a "convenience translation", which itself is not operative. Courts only apply authentic texts and do not rely on "convenience" translations in adjudicating rights and duties of litigants. Most legal writing is exact and technical, seeking to define legally binding rights and duties. Thus, precise correspondence of these rights and duties in the source text and in the translation is essential. As well as understanding and translating the legal rights and duties established in the translated text, legal translators must bear in mind that the legal system of the source text and the legal system of the target text may differ from each other: Anglo-American common law, Islamic law, or customary tribal law, for example.
Apart from terminological lacunae, textual conventions in the source language are often culture-dependent and may not correspond to conventions in the target culture. Linguistic structures that are found in the source language may have no direct equivalent structures in the target language. The translator therefore has to be guided by certain standards of linguistic and cultural equivalence between the language used in the source text (ST) and the target language in order to produce a text (TT) in the target language. Those standards correspond to a variety of different principles defined as different approaches to translation in translation theory. Each of the standards sets a certain priority among the elements of the ST to be preserved in the TT. For example, following the functional approach, translators try to find target-language structures with the same functions as those in the source language, and thus value the functionality of a text fragment in the ST more than, say, the meanings of specific words in the ST and the order in which they appear there. Different approaches to translation should not be confused with different approaches to translation theory.
The former are the standards used by translators in their trade, while the latter are just different paradigms used in developing translation theory. Few jurists are familiar with the terminology of translation theory; they may ask translators to provide a verbatim translation, viewing this term as a clear standard of quality that they desire in the TT. However, verbatim translation is usually undesirable due to different grammar structures as well as different legal terms or rules in different legal systems. Jurists asking for a "verbatim" translation are operating under the lay misconception that an accurate translation is achieved by substituting "the correct" words of the target language one-for-one from the ST. In reality, they just want a faithful and fluent translation of the ST, having no doubt that a good translator will provide it; they do not realize that word-by-word translations can sound like complete nonsense in the target language, and have no idea of different professional translation standards. Many translators would choose to adhere to the standard that they themselves find more appropriate in a given situation, based on their experience, rather than attempt to educate the court personnel.
Legal translators often consult specialized bilingual or polyglot law dictionaries. Care should be taken, as some bilingual law dictionaries are of poor quality and their use may lead to mistranslation.
Machine translation, sometimes referred to by the abbreviation MT, is a sub-field of computational linguistics that investigates the use of software to translate text or speech from one language to another. On a basic level, MT performs simple substitution of words in one language for words in another, but that alone cannot produce a good translation of a text, because recognition of whole phrases and their closest counterparts in the target language is needed. Solving this problem with corpus statistical and neural techniques is a growing field that is leading to better translations, handling differences in linguistic typology, translation of idioms, and the isolation of anomalies. Current machine translation software often allows for customization by domain or profession, improving output by limiting the scope of allowable substitutions. This technique is particularly effective in domains where formal or formulaic language is used. It follows that machine translation of government and legal documents more readily produces usable output than conversation or less standardised text.
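As an illustration of why bare word-for-word substitution falls short, the following sketch uses a tiny, invented English-to-Spanish lexicon (the dictionaries and function names here are hypothetical, not any real MT system's data): a fixed idiom such as "kick the bucket" is mangled unless whole phrases are recognised before individual words are substituted.

```python
# A deliberately naive word-substitution "translator" with a tiny,
# invented English->Spanish lexicon, to show why MT needs phrase-level
# knowledge rather than one word at a time.
LEXICON = {
    "the": "el", "old": "viejo", "man": "hombre",
    "kick": "patear", "bucket": "cubo",
}

# Phrase-level entries catch idioms before word substitution mangles them.
PHRASES = {
    "kick the bucket": "estirar la pata",  # Spanish idiom for "to die"
}

def translate_naive(sentence: str) -> str:
    """Substitute each word independently; idioms come out literal."""
    return " ".join(LEXICON.get(w, w) for w in sentence.lower().split())

def translate_with_phrases(sentence: str) -> str:
    """Replace known phrases first, then fall back to word substitution."""
    s = sentence.lower()
    for phrase, target in PHRASES.items():
        s = s.replace(phrase, target)
    return " ".join(LEXICON.get(w, w) for w in s.split())

print(translate_naive("kick the bucket"))         # "patear el cubo" (literal, wrong)
print(translate_with_phrases("kick the bucket"))  # "estirar la pata" (idiomatic)
```

Statistical and neural systems generalise this idea: instead of a hand-written phrase table, they learn phrase- and sentence-level correspondences from large parallel corpora.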
Improved output quality can also be achieved by human intervention: for example, some systems are able to translate more accurately if the user has unambiguously identified which words in the text are proper names. With the assistance of these techniques, MT has proven useful as a tool to assist human translators and, in a limited number of cases, can even produce output that can be used as is. The progress and potential of machine translation have been much debated throughout its history. Since the 1950s, a number of scholars have questioned the possibility of achieving fully automatic machine translation of high quality, most notably Yehoshua Bar-Hillel; some critics claim that there are in-principle obstacles to automating the translation process. The idea of machine translation may be traced back to the 17th century. In 1629, René Descartes proposed a universal language, with equivalent ideas in different tongues sharing one symbol. The field of "machine translation" appeared in Warren Weaver's Memorandum on Translation. The first researcher in the field, Yehoshua Bar-Hillel, began his research at MIT.
A Georgetown University MT research team followed with a public demonstration of its Georgetown-IBM experiment system in 1954. MT research programs sprang up in Japan and Russia, and the first MT conference was held in London. Researchers continued to join the field as the Association for Machine Translation and Computational Linguistics was formed in the U.S. and the National Academy of Sciences formed the Automatic Language Processing Advisory Committee (ALPAC) to study MT. Real progress was much slower, however, and after the ALPAC report, which found that the ten-year-long research had failed to fulfill expectations, funding was reduced. According to a 1972 report by the Director of Defense Research and Engineering, the feasibility of large-scale MT was reestablished by the success of the Logos MT system in translating military manuals into Vietnamese during that conflict. The French Textile Institute used MT to translate abstracts from and into French, English and Spanish. Beginning in the late 1980s, as computational power increased and became less expensive, more interest was shown in statistical models for machine translation.
Various MT companies were launched, including Trados, the first to develop and market translation memory technology. The first commercial MT system for Russian/English/German-Ukrainian was developed at Kharkov State University. MT on the web started with SYSTRAN offering free translation of small texts, followed by AltaVista Babelfish, which racked up 500,000 requests a day. Franz Josef Och won DARPA's speed MT competition. More innovations during this time included MOSES, the open-source statistical MT engine; a text/SMS translation service for mobiles in Japan; and a mobile phone with built-in speech-to-speech translation functionality for English and Chinese. Google announced that Google Translate translates enough text to fill 1 million books in one day. The idea of using digital computers for translation of natural languages was proposed as early as 1946 by A. D. Booth and others. Warren Weaver wrote an important memorandum, "Translation", in 1949. The Georgetown experiment was by no means the first such application; a demonstration was made in 1954 on the APEXC machine at Birkbeck College of a rudimentary translation of English into French.
Several papers on the topic were published at the time, and even articles in popular journals. A similar application, also pioneered at Birkbeck College at the time, was reading and composing Braille texts by computer. The human translation process may be described as: decoding the meaning of the source text, and re-encoding this meaning in the target language. Behind this ostensibly simple procedure lies a complex cognitive operation. To decode the meaning of the source text in its entirety, the translator must interpret and analyse all the features of the text, a process that requires in-depth knowledge of the grammar, syntax, etc. of the source language, as well as the culture of its speakers.