Mauro, Giovanni
Minici, Marco
Pappalardo, Luca
Funding for this research was provided by:
Scuola Normale Superiore
Article History
Received: 10 April 2025
Revised: 13 August 2025
Accepted: 23 September 2025
First Online: 8 January 2026
Declarations
Conflict of interest: The authors declare no conflict of interest.
Code availability: Our framework and analysis are fully reproducible, with all code available at . The simulations were executed on a high-performance computing platform with 64 CPU cores (AMD EPYC 7313 16-core processors) and 1.18 TB of RAM. Although the simulations are computationally intensive, we implemented a highly parallelized, modular architecture to optimize performance and resource utilization. Experiments with the neural recommenders (MultiVAE, LightGCN, and BPRMF) were conducted on a DGX server equipped with four NVIDIA Tesla V100 GPUs (32 GB) and CUDA version 12.2.