Many AI models prefer to "think" in Chinese
Many AI models appear to prefer to "think" in Chinese: their internal chain of thought sometimes switches into Chinese even when the model is configured to output its final answer in English. One proposed explanation is density. Chinese text is, by some estimates, roughly 50% more space-efficient than English, packing the same meaning into fewer characters. This may help explain why Chinese models are often reported to be faster than English-language models and to use fewer tokens per response, and therefore less energy. Training costs show a similar gap: the Chinese model DeepSeek was reportedly trained for about $6 million in compute, while comparable US models have typically been estimated at around $100 million.
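The density claim is easy to illustrate at the character level. The sketch below compares the character counts of a few English phrases with their Chinese translations; note this is a rough proxy only, since a model's actual token counts depend on its tokenizer, and many tokenizers split a single Chinese character into multiple tokens.

```python
# Illustrative character-count comparison (not actual model tokens).
# The phrase pairs are standard English/Chinese translations.
pairs = [
    ("Artificial intelligence", "人工智能"),
    ("Language model", "语言模型"),
    ("Machine learning", "机器学习"),
]

for english, chinese in pairs:
    ratio = len(english) / len(chinese)
    print(f"{english}: {len(english)} chars | "
          f"{chinese}: {len(chinese)} chars | "
          f"ratio {ratio:.1f}x")
```

Each Chinese phrase above is 4 characters versus 14 to 23 for its English counterpart, a 3x to 6x density advantage at the character level; real-world token savings are smaller once tokenization is accounted for.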
