To compare my models' results with existing ones, I would have to copy the original authors' code, build a stripped-down version of their script, copy all of their parameters, and then adapt my data and preprocessing to match their exact setup. This tedious workload is the norm, and it takes days or even weeks. The catch is that even after all this effort, you are looking at a coin flip: the results may still turn out unusable because of differences in the models' parameters. What makes what we built unique is that we streamline this process and apply it consistently, and in far greater detail, to every paper. That way, models and results become available to all researchers, who can quickly build on previous work. By automatically identifying the structure of the code and mapping it into a defined template, everything is standardised and can be reproduced and compared. This effort is far too redundant and time-consuming for any single researcher to repeat for every individual research project.
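
To make the idea of a "defined template" concrete, here is a minimal, purely illustrative sketch in Python. The `ExperimentTemplate` class, its fields, and the `parameter_diff` helper are hypothetical examples, not the actual schema we use; they only show how capturing parameters, preprocessing, and results in one standard structure makes two papers' setups directly comparable.

```python
from dataclasses import dataclass, field

@dataclass
class ExperimentTemplate:
    """Hypothetical standard template describing one paper's experiment."""
    paper: str
    model_params: dict = field(default_factory=dict)   # e.g. learning rate, number of layers
    preprocessing: list = field(default_factory=list)  # ordered preprocessing steps
    results: dict = field(default_factory=dict)        # metric name -> score

def parameter_diff(a: ExperimentTemplate, b: ExperimentTemplate) -> dict:
    """Return every model parameter that differs between two templated experiments."""
    keys = set(a.model_params) | set(b.model_params)
    return {
        k: (a.model_params.get(k), b.model_params.get(k))
        for k in keys
        if a.model_params.get(k) != b.model_params.get(k)
    }

# Once both papers are expressed in the same template, spotting why their
# results diverge becomes a single comparison instead of days of manual work.
ours = ExperimentTemplate("our-model", {"lr": 3e-4, "layers": 12}, ["tokenize"], {"accuracy": 0.91})
theirs = ExperimentTemplate("baseline", {"lr": 1e-3, "layers": 12}, ["tokenize"], {"accuracy": 0.88})
print(parameter_diff(ours, theirs))  # {'lr': (0.0003, 0.001)}
```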