Many software testing criteria have been proposed, each with a different testing capability. However, little research has explored the quantitative differences among these criteria, even though such a quantitative comparison would provide a sound guideline for selecting an appropriate testing methodology. The all-statements and all-branches criteria are well known, and both have been widely applied in software testing. This research derives a quantitative analysis that measures the difference between the all-statements criterion and the all-branches criterion. The analysis provides a theoretical basis for comparing the testing effort required by different testing methodologies. A testing metric is proposed to compare the all-statements criterion with the all-branches criterion, and a CASE tool supporting this metric is presented.
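The gap between the two criteria can be illustrated with a small sketch (an assumed example, not the paper's metric or tool): for an `if` statement with no `else`, a single test can execute every statement while still missing the false branch, so all-statements is satisfied but all-branches is not.

```python
def clamp_negative(x):
    if x < 0:        # two branches: true and false
        x = 0        # this statement is reached only on the true branch
    return x

# One input, x = -5, executes every statement (all-statements satisfied)
# but exercises only the true branch of the `if`.
tests_for_all_statements = [-5]

# A second input, e.g. x = 3, is needed to cover the false branch,
# so all-branches demands a strictly larger test set here.
tests_for_all_branches = [-5, 3]
```

Examples like this motivate a quantitative measure of how much extra testing effort the all-branches criterion demands beyond all-statements.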