Artifact Evaluation in Hardware Research

"For the straightforward pathway had been lost."
— "Dante's Inferno"

Introduction

Hardware research has a crisis. I would call it a reproducibility crisis, but it is worse than that: it is a core issue that runs deep in the field and in the community. I don't know if I can do anything about it, but as with all good therapy, it helps to talk about the issues on your mind. Maybe I'm not alone.

Artifact Evaluation

To frame the issue I have in mind, I will pick on the idea of artifact evaluation for the time being.

Artifact evaluation is an additional step in the conference or journal review process that allows authors to submit code, data, binaries, hardware designs, hardware access, and other digital "objects" so that reviewers can reproduce and validate the results in the submission. The most common artifact evaluation framework in my areas of research is the one from the Association for Computing Machinery (ACM), as it covers a wide range of software and hardware conferences and journals \cite{noauthor_artifact_2020}. However, artifact evaluation also appears in venues outside the ACM's scope (such as IEEE conferences or the big ML conferences).

Artifact evaluation is also not necessarily a centralized requirement; venues typically roll their own artifact evaluation process or adopt an existing one, such as the ACM's. Regardless, many software and hardware conferences have incorporated artifact evaluation into their review process.

The final and most important detail is that artifact evaluation in hardware research is almost always optional. It is separate from the normal review process and is not required for publication: if your paper is accepted, it can be published even if you never submit supplemental materials for artifact evaluation.