Waight, Hannah https://orcid.org/0009-0004-8665-0406
Yang, Eddie https://orcid.org/0000-0002-3696-3226
Yuan, Yin
Messing, Solomon
Roberts, Margaret E.
Stewart, Brandon M. https://orcid.org/0000-0002-7657-3089
Tucker, Joshua A. https://orcid.org/0000-0003-1321-8650
Article History
Received: 25 October 2024
Accepted: 8 April 2026
First Online: 13 May 2026
Competing interests: H.W. and S.M. have personal financial interests in AI-related companies, in particular Meta (H.W. only), Nvidia, Alphabet, Microsoft and Taiwan Semiconductor (S.M. only). Two authors have past employment histories with AI-related companies: E.Y. was an intern at Microsoft Research in the summers of 2022 and 2023; and S.M. worked at Facebook (now Meta) in various capacities from 2011 to 2015 and from 2018 to 2020, worked at Twitter (now X) from 2021 to 2023, and has contracted for the 501(c)(6) non-profit MLCommons, which releases AI benchmarks (2026 to present). After acceptance of this paper, S.M. accepted a job at Google DeepMind. Finally, four authors received funding or other resources from AI-related companies for unrelated projects: B.M.S. received an unrestricted grant from Meta, ‘Foundational Integrity Research: Misinformation and Polarization’; S.M. received a 2010 Google Research Award for a research project on ‘Social cues and reliability in content selection and evaluation’; E.Y. received a Google Research Award for an unrelated project in 2026; and J.A.T. received a small fee from Facebook to compensate him for administrative time spent organizing a one-day conference for approximately 30 academic researchers and a dozen Facebook product managers and data scientists, held at NYU in the summer of 2017, to discuss research related to civic engagement. J.A.T. is also one of the co-leads of the external academic team for the 2020 US Facebook and Instagram Election Study, a project that began in early 2020 and is still ongoing at the time of writing; J.A.T. was not compensated financially by Meta for his participation in this project, but the project involves working collaboratively with Meta researchers. J.A.T. also received a 2024 Google Research Grant to support a research project on ‘From search engines to answer engines: testing the effects of traditional and LLM-based search on belief in the veracity of news’.
For an unrelated project, J.A.T. was listed as a co-investigator on a ‘Foundational Integrity Research: Misinformation and Polarization’ grant application for an unrestricted grant from Meta that was awarded to a principal investigator at a different university; no research funds were ever transferred to J.A.T. under this grant. J.A.T. is also a Senior Geopolitical Risk Advisor at Kroll. M.E.R. and Y.Y. declare no competing interests.