Yoon, Wonjin http://orcid.org/0000-0002-6435-548X
Yoo, Jaehyo http://orcid.org/0000-0002-3600-6362
Seo, Sumin http://orcid.org/0000-0001-8703-0322
Sung, Mujeen http://orcid.org/0000-0002-7978-8114
Jeong, Minbyul http://orcid.org/0000-0002-1346-730X
Kim, Gangwoo http://orcid.org/0000-0003-4581-0384
Kang, Jaewoo http://orcid.org/0000-0001-6798-9106
Chapter History
First Online: 25 August 2022
Author Note
This work was submitted to the CLEF 2022 <i>Best of 2021 Labs</i> track. It originates from our participation in the 9th BioASQ challenge (2021 CLEF Labs), presented under the title <i>KU-DMIS at BioASQ 9: Data-centric and model-centric approaches for biomedical question answering</i> (Yoon et al. 2021).
Conference Information
Conference Acronym: CLEF
Conference Name: International Conference of the Cross-Language Evaluation Forum for European Languages
Conference City: Bologna
Conference Country: Italy
Conference Year: 2022
Conference Start Date: 5 September 2022
Conference End Date: 8 September 2022
Conference Number: 13
Conference ID: clef2022
Conference URL: https://clef2022.clef-initiative.eu/
Peer Review Information (provided by the conference organizers)
Type: Single-blind
Conference Management System: EasyChair
Number of Submissions Sent for Review: 14
Number of Full Papers Accepted: 7
Number of Short Papers Accepted: 3
Acceptance Rate of Full Papers: 50% (computed as Number of Full Papers Accepted / Number of Submissions Sent for Review × 100, rounded to a whole number)
Average Number of Reviews per Paper: 3
Average Number of Papers per Reviewer: 3
External Reviewers Involved: Yes
Additional Info on Review Process: 7 Best of Labs papers + 14 lab overviews