Testing a Verification Tool

Markus Wagner

The way verification environments are currently tested is not very satisfactory. Most of the time, the tools are written by researchers, and as long as those environments are not actually used for the verification of critical systems, there is no real demand for trust in them. This, however, changes when verification environments are to be used in industrial applications. The Verifying C Compiler (VCC) is one of the tools being used for the verification of industrial products. We investigated systematic approaches for the validation of verification systems, and once we had specified what it means for a verification system to be correct, we were confronted with the generation and assessment of the tests. The input domain of such a system is the result of the manifold combination of elements from the C programming language and from the language of verification-specific annotations. We reduced the risk of constructing incorrect test cases by choosing trustworthy sources, such as the official C language standard ISO/IEC 9899:201x, the test suite of the GNU Compiler Collection, and the test suites of the verification tools Spec# and Frama-C/Jessie. To assess the tests, we defined axiomatization coverage as a domain-specific metric for VCC, which allows us to observe the impact that the individual elements of the axiomatization have on the verification process. Furthermore, the generated test reports can be useful for certification purposes because they contain information on which part of the code was verified using which verification system. Concerning the tests we performed, we noticed that VCC's own test suite exercises only about 60% of the axiomatization.
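The idea behind such a coverage figure can be sketched in a few lines: given the set of axioms in the prover's background axiomatization and, for each test case, the subset of axioms exercised during its verification run, the coverage is the fraction of axioms exercised by at least one run. The names and data below are invented for illustration and do not reflect VCC's actual axiomatization or report format:

```python
def axiomatization_coverage(all_axioms, used_per_test):
    """Return (coverage ratio, set of axioms never exercised).

    all_axioms    -- the full set of axiom identifiers (hypothetical names)
    used_per_test -- one set of exercised axioms per test-case run
    """
    used = set().union(*used_per_test) if used_per_test else set()
    used &= set(all_axioms)            # ignore anything outside the axiomatization
    uncovered = set(all_axioms) - used
    return len(used) / len(all_axioms), uncovered

# Toy example: five axioms, three verification runs touching subsets of them.
axioms = {"ptr_deref", "ts_ownership", "arith_mod", "map_select", "set_in"}
runs = [{"ptr_deref", "arith_mod"},
        {"ptr_deref", "map_select"},
        {"ts_ownership"}]

cov, missing = axiomatization_coverage(axioms, runs)
print(f"{cov:.0%} covered; never exercised: {sorted(missing)}")
```

Axioms that no test ever exercises are exactly the candidates for the additional test cases mentioned below.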
From this, we draw the following two conclusions: (1) the used part of the axiomatization seems to correctly reflect the developers' assumptions about how the verification methodology is supposed to work, and (2) additional test cases should be written to achieve a higher coverage.