Cross-lingual Prompting: Improving Zero-shot Chain-of-Thought Reasoning across Languages
This paper introduces cross-lingual prompting (CLP) to improve zero-shot chain-of-thought reasoning across languages in large language models. CLP aligns representations between languages before solving the task, strengthening multilingual reasoning; the work evaluates performance on mathematical reasoning and other benchmarks across multiple languages.
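A minimal sketch of the two-stage idea behind cross-lingual prompting, assuming a first alignment stage that restates the problem in English and a second stage that solves it step by step. The prompt wording and the `complete` helper are illustrative assumptions, not the paper's exact prompts or implementation.

```python
def complete(prompt: str) -> str:
    """Hypothetical wrapper around an LLM completion API."""
    raise NotImplementedError("plug in your LLM client here")


def cross_lingual_prompt(problem: str, source_lang: str = "German") -> str:
    # Stage 1: alignment -- ask the model to restate the problem in
    # English, so reasoning can proceed in the model's strongest language.
    alignment = complete(
        f"The following problem is written in {source_lang}:\n{problem}\n"
        "Please repeat the problem in English, step by step."
    )
    # Stage 2: solving -- run zero-shot chain-of-thought reasoning
    # over the aligned English restatement.
    return complete(
        f"{alignment}\nLet's think step by step and solve the problem."
    )
```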
Chain-of-thought (CoT) prompting elicits models to explicitly generate reasoning paths, thereby improving reasoning accuracy, and has attracted increasing attention. In particular, zero-shot CoT achieves remarkable improvements on a wide range of reasoning tasks by simply instructing the LLM with the prompt "Let's think step by step!". Despite the success of zero-shot CoT, existing zero-shot prompting techniques remain limited to a single language, making it challenging to generalize to other languages.
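For reference, a minimal sketch of the zero-shot CoT baseline described above, again assuming a generic `complete(prompt)` LLM call (a hypothetical helper, as in the earlier sketch).

```python
def complete(prompt: str) -> str:
    """Hypothetical wrapper around an LLM completion API."""
    raise NotImplementedError("plug in your LLM client here")


def zero_shot_cot(question: str) -> str:
    # Appending the trigger phrase elicits an explicit reasoning path
    # before the final answer.
    return complete(f"Q: {question}\nA: Let's think step by step.")


# Example usage:
# zero_shot_cot("If a train travels 60 km in 45 minutes, "
#               "what is its average speed in km/h?")
```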