Department of Language and Literature, Kerman Science and Research Branch, Islamic Azad University, Kerman, Iran (firstname.lastname@example.org)
Department of Language and Literature, Kerman Science and Research Branch, Islamic Azad University, Kerman, Iran (email@example.com)
To investigate subject-verb agreement in Persian sentences produced by Google machine translation, 100 sentences were taken from the BBC's English website, of which 50 were randomly selected to be translated both by Google Translate and by four human translators in order to produce a reference translation. A descriptive-statistics method was used to determine, first, whether Google Translate can apply subject-verb agreement as properly as a human translator and, second, whether there is a relationship between human judgment and an automated machine-translation evaluation method. To this end, after the reference translation was established, the researcher invited four other human translators to judge and score the Google output against three criteria. A five-point Likert scale was used to rate the Google-translated sentences: a score of 1 was assigned to a completely unacceptable translation and 5 to an excellent one with almost no errors. The researcher then evaluated the same 50 Google-translated sentences using three automated metrics: Precision, Recall, and F-measure. Based on his observation of the scores given by the four human translators, the researcher found that Google Translate, compared with the human reference translation, could not apply subject-verb agreement properly in all sentences. Finally, after analyzing the human and F-measure assessment scores in SPSS version 22 and computing the correlation coefficient between the two sets of scores, the researcher found a significant relationship between the human and F-measure scores.
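The three automated metrics named in the abstract (Precision, Recall, F-measure) can be sketched as a unigram token-overlap computation between a machine translation and its reference. The tokenization and clipped-count matching below are assumptions for illustration; the abstract does not specify the exact matching procedure, and the example sentences are hypothetical, not the study's data.

```python
from collections import Counter

def precision_recall_f(candidate: str, reference: str):
    """Unigram precision, recall, and F-measure via clipped token overlap.

    Whitespace tokenization is an assumed simplification; Persian text would
    normally require a dedicated tokenizer.
    """
    cand = Counter(candidate.split())
    ref = Counter(reference.split())
    # Count each candidate token at most as often as it occurs in the reference.
    overlap = sum(min(cand[t], ref[t]) for t in cand)
    p = overlap / max(sum(cand.values()), 1)   # fraction of candidate tokens matched
    r = overlap / max(sum(ref.values()), 1)    # fraction of reference tokens matched
    f = 2 * p * r / (p + r) if (p + r) else 0.0  # harmonic mean of p and r
    return p, r, f

# Hypothetical candidate vs. reference pair
p, r, f = precision_recall_f("the cat sat on a mat", "the cat sat on the mat")
```

In this toy pair, five of the six candidate tokens match the reference, so precision, recall, and F-measure all equal 5/6.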
Keywords: machine translator; Google translator; subject-verb agreement
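The correlation step reported in the abstract (run in SPSS 22) can be sketched as a Pearson product-moment correlation between the human Likert scores and the F-measure scores; Pearson's r is an assumption, since the abstract says only "correlation coefficient". The scores below are hypothetical, not the study's data.

```python
def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length score lists."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five sentences: 1-5 Likert ratings vs. F-measure values
human_scores = [4, 2, 5, 3, 1]
f_scores = [0.8, 0.4, 0.9, 0.6, 0.2]
r = pearson_r(human_scores, f_scores)
```

A value of r near 1 would mirror the study's finding of a significant positive relationship between human judgment and F-measure scores; significance itself would additionally require a test of the coefficient against the sample size.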