Sheth, Paaras http://orcid.org/0000-0002-6186-6946
Kumarage, Tharindu http://orcid.org/0000-0002-9148-0710
Moraffah, Raha http://orcid.org/0000-0002-6891-2925
Chadha, Aman http://orcid.org/0000-0001-6621-9003
Liu, Huan http://orcid.org/0000-0002-3264-7904
Chapter History
First Online: 17 September 2023
Ethical Statement
<b>Freedom of Speech and Censorship.</b> Our research aims to develop algorithms that effectively identify and mitigate harmful language across multiple platforms. We recognize the importance of protecting individuals from the adverse effects of hate speech and the need to balance this protection with upholding free speech. Content moderation is one application where our method could help censor hate speech on social media platforms such as Twitter, Facebook, and Reddit. However, one ethical concern is our system’s false positives: if the system incorrectly flags a user’s text as hate speech, it may censor legitimate free speech. We therefore discourage deploying our methodology in a purely automated manner in any real-world content moderation system; a human annotator should work alongside the system to make the final decision. <b>Use of Hate Speech Datasets.</b> In our work, we use publicly available, well-established datasets. We have cited the corresponding dataset papers and followed the necessary steps in utilizing those datasets. We understand that the hate speech examples shown in the paper are potentially harmful content that could be used for malicious purposes. However, our work aims to better investigate and mitigate the harms of online hate. We have therefore assessed that the benefits of using these real-world examples to explain our work outweigh the potential risks. <b>Fairness and Bias in Detection.</b> Our work values the principles of fairness and impartiality. To reduce biases and ethical problems, we openly disclose our methodology, results, and limitations, and we will continue to assess and improve our system in the future.
Conference Information
Conference Acronym: ECML PKDD
Conference Name: Joint European Conference on Machine Learning and Knowledge Discovery in Databases
Conference City: Turin
Conference Country: Italy
Conference Year: 2023
Conference Start Date: 18 September 2023
Conference End Date: 22 September 2023
Conference Number: 23
Conference ID: ecml2023
Conference URL: https://2023.ecmlpkdd.org/
Peer Review Information (provided by the conference organizers)
Type: Double-blind
Conference Management System: CMT
Number of Submissions Sent for Review: 829
Number of Full Papers Accepted: 196
Number of Short Papers Accepted: 0
Acceptance Rate of Full Papers: 24% (Number of Full Papers Accepted / Number of Submissions Sent for Review × 100, rounded to the nearest whole number)
Average Number of Reviews per Paper: 3.63
Average Number of Papers per Reviewer: 4.5
External Reviewers Involved: Yes
Additional Info on Review Process: Applied Data Science Track: 239 submissions, 58 accepted papers; Demo Track: 31 submissions, 16 accepted papers.