Salas, Eduardo https://orcid.org/0000-0003-0433-8941
Navarro, Pedro J. https://orcid.org/0000-0001-8367-2934
Rosique, Francisca https://orcid.org/0000-0002-3311-8414
Benavente, Juan https://orcid.org/0000-0003-1578-0188
Rivadeneira, Ana María https://orcid.org/0000-0002-7266-3124
Funding for this research was provided by:
Universidad Politécnica de Cartagena
Article History
Received: 16 May 2025
Accepted: 10 October 2025
First Online: 6 November 2025
Declarations
Competing interests: The authors have no competing interests relevant to the content of this article.
Ethics approval: This study did not involve direct interaction with human participants or the use of personally identifiable data. All image captures were processed on-device by Edge-AI cameras, without transmission or storage of raw video outside the device, and without identifying or tracking any individual beyond a transient, anonymous count. Institutional ethics committee approval for human research was therefore not required.
Informed consent: The sensors operated in a public space where there is no reasonable expectation of privacy, and no images were recorded that would allow personal identification. Consequently, obtaining individual informed consent was neither feasible nor required. All material captured exclusively for model training was immediately anonymized, used only for bounding-box annotation, and securely deleted once labeling was complete.
To quantify metrics such as precision, recall, F1-score, and overall accuracy, two-minute video recordings were made with the installed sensors during periods of both high and low passenger flow. These recordings were used solely to simulate the system and extract aggregated passenger-flow data; at no point were faces or other identifiable features manually reviewed. All videos were encrypted and stored on a local, access-restricted server, retained only for the duration of the metric computation process, and then permanently destroyed.