A growing number of AI research papers published at the prestigious Neural Information Processing Systems (NeurIPS) conference reveal a surprising level of collaboration between researchers from the US and China. Despite the two countries' rivalry in the field, the papers suggest each is leaning heavily on the other to advance its artificial intelligence capabilities.
According to an analysis conducted by WIRED using OpenAI's Codex tool, over 3% of the papers presented at NeurIPS in 2024 involved collaboration between authors at US and Chinese institutions. Furthermore, the transformer architecture, developed by Google researchers, has been widely adopted in China, appearing in almost 30% of all papers featuring Chinese authors.
Moreover, Meta's large language model Llama featured prominently as well, appearing in over a fifth of papers co-authored with Chinese institutions and in more than 60 papers from US-based researchers. The trend holds for other key models: Alibaba's Qwen large language model appeared in nearly two-thirds of papers co-authored with US-based authors.
Dr Jeffrey Ding, an expert on China's AI landscape, notes that "the collaboration between the US and China is not surprising" given the two countries' intertwined AI ecosystems. He adds that policymakers on both sides would benefit from acknowledging this cooperation rather than downplaying it.
Some researchers emphasize the role of personal relationships and international collaboration in fostering a culture of knowledge sharing. Dr Katherine Gorman of NeurIPS notes that "NeurIPS itself is an example of international collaboration" and points to the long-lasting bonds between colleagues, which persist even after students leave their universities.
Despite fears among US politicians and tech executives about China's influence on AI research, the analysis serves as a reminder that both countries have much to gain from collaborative research.