Kieslich, Kimon
Helberger, Natali
Diakopoulos, Nicholas
Funding for this research was provided by:
UL Research Institutes through the Center for Advancing Safety of Machine Intelligence
Article History
Received: 18 July 2025
Accepted: 10 March 2026
First Online: 22 April 2026
Declarations
Competing interests: The authors declare no competing interests.
Ethics approval: This paper did not involve human subjects; thus, no ethics approval was required. However, we strongly recommend that scholars who use this approach obtain ethical approval from their respective institutions. All our empirical studies received ethical approval from the first author's IRB.
Additionally, we discuss potential ethical objections to our approach in Sect. 4 (Anticipating and Meeting Objections). We want to add the following: SSE responds to calls for new perspectives and methodologies that go beyond quantifiable impact metrics, seeking to identify previously overlooked issues and to amplify the perspectives of typically underrepresented populations. We hope to spark an academic discussion about the limitations of current impact assessments and to highlight the need for more qualitative impact assessments of generative AI technologies. We do not aim to replace current impact assessments, but rather to enrich the existing landscape with alternative approaches that capture previously overlooked impacts and illuminate the contextual nature of impacts.