Conference paper, 2023

Investigating the Translation Performance of a Large Multilingual Language Model: the Case of BLOOM

Abstract

The NLP community recently saw the release of a new large open-access multilingual language model, BLOOM (BigScience et al., 2022), covering 46 languages. We focus on BLOOM's multilingual ability by evaluating its machine translation performance across several datasets (WMT, Flores-101 and DiaBLa) and language pairs (high- and low-resourced). Our results show that zero-shot performance suffers from overgeneration and from generating in the wrong language, but that this is greatly improved in the few-shot setting, with very good results for a number of language pairs. We study several aspects including prompt design, model sizes, cross-lingual transfer and the use of discursive context.
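To give a concrete picture of the few-shot prompted translation setting discussed above, the sketch below uses the Hugging Face transformers library with the small bigscience/bloom-560m checkpoint (the paper evaluates several BLOOM sizes). The prompt template, example sentences and the newline-based truncation used to curb overgeneration are illustrative assumptions, not the exact configuration evaluated in the paper.

```python
# A minimal sketch of one-shot prompted French-to-English translation with a
# small BLOOM checkpoint. The prompt format below is a hypothetical example,
# not necessarily the template used by the authors.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "bigscience/bloom-560m"  # small checkpoint for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# One translation example (the "shot") followed by the test source sentence.
prompt = (
    "French: Le chat dort sur le canapé. English: The cat is sleeping on the sofa.\n"
    "French: Il pleut depuis ce matin. English:"
)

inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=30, do_sample=False)

# Keep only the generated continuation and cut at the first newline,
# since BLOOM tends to keep generating further prompt-like lines.
continuation = tokenizer.decode(
    outputs[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True
)
print(continuation.split("\n")[0].strip())
```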
Main file: eamt23.pdf (254.9 KB)
Origin: Files produced by the author(s)

Dates and versions

hal-04015863, version 1 (06-03-2023)
hal-04015863, version 2 (09-05-2023)

Identifiers

HAL Id: hal-04015863
DOI: 10.48550/ARXIV.2303.01911

Cite

Rachel Bawden, François Yvon. Investigating the Translation Performance of a Large Multilingual Language Model: the Case of BLOOM. EAMT 2023 - 24th Annual Conference of the European Association for Machine Translation, Jun 2023, Tampere, Finland. ⟨10.48550/ARXIV.2303.01911⟩. ⟨hal-04015863v2⟩