Not All Languages Are Created Equal in LLMs: Improving Multilingual Capability by Cross-Lingual-Thought Prompting

Relevance: 3/10 · 230 citations · 2023 paper

This paper introduces Cross-Lingual-Thought (XLT) prompting, a template-based method to improve large language models' performance across multiple languages by stimulating cross-lingual reasoning. The approach is evaluated on 7 NLP benchmarks covering reasoning, understanding, and generation tasks in high-resource and low-resource languages.

Large language models (LLMs) demonstrate impressive multilingual capability, but their performance varies substantially across different languages. In this work, we introduce a simple yet effective method, called cross-lingual-thought prompting (XLT), to systematically improve the multilingual capability of LLMs. Specifically, XLT is a generic template prompt that stimulates cross-lingual and logical reasoning skills to enhance task performance across languages. We conduct comprehensive evaluations on 7 benchmarks covering reasoning, understanding, and generation tasks in both high-resource and low-resource languages.

Tags

reasoning · evaluation · LLM · computer-science · highly-cited