With improvements in hardware and algorithms, the main challenge of software model
checking has shifted from raw algorithmic performance toward supporting a wider set
of input programs. Successful toolchains parse a wide range of inputs efficiently
by reusing existing compiler technologies such as Eclipse CDT or LLVM. Our experience
suggests, however, that well-established techniques
in compiler technology are not necessarily beneficial to model checkers and sometimes
can even hurt their performance. In this paper, we review the tools mature enough
to participate in the Competition on Software Verification (SV-COMP) with respect
to the analysis and frontend techniques they employ. We find that successful tools
exhibit a bias toward certain combinations. We explore the theoretical reasons for
this bias and propose an adaptable approach for model checking frameworks. We validate
our recommendations by implementing a new frontend for one such framework and show
that it indeed benefits some of the algorithms.