Since 2008, we have been experimenting with various open
submission and publication models where papers, code and data are
publicly shared and then discussed directly by the community via
social networking websites such as Slashdot
(see our motivation paper for more details).
After receiving positive feedback about our submission model
at our recent workshops and events, we decided to use it
exclusively at ADAPT. Authors submit their articles directly
to ArXiv, while we immediately open a discussion thread on Reddit
(which allows ranking of comments). This lets authors get
immediate feedback from the community, defend their techniques,
fix obvious flaws, and improve their articles. It also helps
Program Chairs select the most appropriate, realistic and
reproducible techniques for the final review by the ADAPT
PC members. Hence, we also strongly encourage authors to share
related code, data and experimental results along with their
article to help the community validate their approach and even
immediately start using it. We believe that such a publication
model will let authors disseminate their ideas and tools much
faster while avoiding unfair reviews and plagiarism (even
if a submitted paper is not accepted, it is already published as
a technical report with a time stamp and can be incrementally
improved based on the received feedback).
You may check out the following open access papers and public discussion threads:
In 2007, we started a pilot project to share complete experimental setups
(code, data, dependencies, models) along with our publications
on machine-learning-based autotuning via cTuning.org,
so that they could be reproduced, validated and improved by the community.
Based on the success of this project, we decided to use a similar
approach for this ADAPT workshop, i.e. to encourage authors to either
provide a paragraph in their submitted papers describing how to validate
their experimental results, or to submit all related artifacts
along with their publication to be validated by the community.
We hope that such an approach will enable truly collaborative, rigorous
and reproducible research and experimentation in computer engineering,
which is particularly needed on our way towards adaptive, self-tuning computer systems.
If you are interested in learning more, please check out our related
initiatives, publications and tools:
Artifact Evaluation for PPoPP
(backed by ACM).