
Monday, September 14, 2009

Written by Finn Ellebaek Nielsen (posted via the QCTO Blog)


In my previous blog posts I suggested various best practices for establishing a test policy, as well as a test strategy for Oracle code. I've also described test approaches adhering to the policy and strategy. Before moving on to test design, I would like to write a few words on the amount of testing required under different circumstances.

How Much Testing Do We Need?

You might ask: "Where is the break-even point between the investment in testing and the return in terms of reduced risk and cost and increased quality?" This is a difficult question to answer in general, as it is very specific to your project. However, we can divide projects into two generic categories:

  • Medium- to low-risk projects
  • High-risk projects

In general, these two categories of projects require different amounts of testing, which means we can determine the amount of testing required by identifying the project's category.

In order to determine the category of the project, we need to carry out a preliminary analysis of the product risk, which is the risk associated with the software product we produce on the project. Product risk concerns the possibility that the software fails to satisfy reasonable expectations. This can occur in many different ways, such as:

  • Key functionality missing.
  • Poor reliability, unstable.
  • Failure, causing financial or physical damage.
  • Poor security, e.g. easy to break into, inject into or attack for Denial-of-Service purposes.
  • Poor usability.
  • Poor performance.

Medium to Low Risk Projects

If the risk associated with all production defects is medium to low, I suggest you approach testing as follows:

  • Legacy projects: Introducing automated tests will be a major improvement over what you had, and coverage will improve further over time. The difficult part is knowing where to stop. If you follow my earlier suggestion of introducing test cases for code that needs to be changed, the problem is reduced to determining which and how many test cases you need.
  • Greenfield projects: As previously mentioned, I suggest that you test everything on greenfield projects, so here, too, the problem is "only" the amount and types of test cases.

So in fact the only difference here is the scope of the testing (a specific program/subprogram versus everything); the depth of the testing should be the same.

The Code Coverage (CC) threshold you've established will guide your test case design and implementation, as it lets you monitor your progress towards your goal, i.e. the point at which you have tested sufficiently according to your standards. If it then turns out that this wasn't enough because you encounter too many defects in production, you can reassess the CC threshold, or differentiate it and increase it for specific units (e.g. the most central and critical ones).
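As a sketch of how a differentiated CC threshold might be monitored, the following Python snippet compares measured coverage per program unit against a project-wide standard, with higher overrides for the most critical units. All unit names, thresholds and coverage figures are invented for illustration; in practice the numbers would come from your coverage tool.

```python
# Sketch: comparing measured code coverage per program unit against a
# differentiated CC threshold. All names and numbers are illustrative.

DEFAULT_CC_THRESHOLD = 0.80  # project-wide standard

# Higher thresholds for the most central and critical units
unit_thresholds = {
    "ORDER_PROCESSING": 0.95,
    "PAYMENT_API": 0.90,
}

def cc_gap(unit, measured_coverage):
    """Distance from the unit's CC threshold; <= 0 means the goal is met."""
    threshold = unit_thresholds.get(unit, DEFAULT_CC_THRESHOLD)
    return threshold - measured_coverage

# Coverage figures as they might be reported by a coverage tool
measured = {"ORDER_PROCESSING": 0.91, "REPORTING": 0.84}
for unit, coverage in sorted(measured.items()):
    gap = cc_gap(unit, coverage)
    status = "OK" if gap <= 0 else f"below threshold by {gap:.0%}"
    print(f"{unit}: {status}")
```

Reassessing or further differentiating the threshold, as described above, then amounts to adjusting the per-unit overrides.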

High Risk Projects

If your preliminary analysis revealed that the risk is above medium, you need to perform a more detailed analysis of the product risk at hand. This can be done in a number of ways, but most people seem to prefer to carry it out as a workshop with a team of project stakeholders, testers and developers.

You can divide product risk into categories such as functionality, reliability, performance, usability and security. You then identify a list of risk items within each category. For each item you agree on the following:

  • Likelihood: The likelihood of this risk item occurring. You can assign one of the following factors (some use a scale of 1-3, others a scale of 1-10):
    • 1: Very unlikely.
    • 2: Unlikely.
    • 3: 50/50.
    • 4: Likely.
    • 5: Highly likely.
  • Impact: The impact to the business, user etc if this risk item occurs. Once again, you can differentiate with more or fewer scales:
    • 1: No loss.
    • 2: Minor loss.
    • 3: Some loss.
    • 4: Significant loss.
    • 5: Immense loss.

Based on likelihood and impact you calculate a risk priority by multiplying them. So if you have a risk item with a likelihood of "unlikely (2)" and an impact of "immense loss (5)", the risk priority is 2 * 5 = 10.

You can document the product risk analysis in a spreadsheet with the following structure (you could also spread the risk categories across worksheets, making it easier to order by risk priority):

Product Risk       Risk Priority
Risk Category 1
  Risk 1           ...
  Risk 2           ...
Risk Category 2
  Risk 3           ...
  Risk 4           ...
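The calculation and ordering described above can be sketched in a few lines of Python; the risk items, categories and scores below are made up purely to illustrate the mechanics:

```python
# Sketch of the product risk spreadsheet: risk priority = likelihood * impact
# (both on a 1-5 scale), ordered highest priority first. The risk items and
# scores are invented for the example.

risks = [
    # (category, risk item, likelihood, impact)
    ("Functionality", "Key functionality missing",  4, 5),
    ("Reliability",   "Nightly batch job unstable", 3, 4),
    ("Security",      "SQL injection possible",     2, 5),
    ("Usability",     "Confusing error messages",   4, 2),
]

# Compute each item's priority and order the list by it, descending
prioritized = sorted(
    ((category, item, likelihood * impact)
     for category, item, likelihood, impact in risks),
    key=lambda row: row[2],
    reverse=True,
)

for category, item, priority in prioritized:
    print(f"{priority:>3}  {category:<14} {item}")
```

Note how a moderately likely but catastrophic item and a likely but minor item can end up with similar priorities; the workshop discussion, not the arithmetic, is where the real value lies.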

For the risks with the highest priority, mitigation could be to raise the differentiated CC threshold for the program units involved and to focus on a more detailed code review of them.

The risk analysis is dynamic and should be revisited often as you learn new things about the system and perhaps discover new areas to be included over time.


It's important to remember that successful execution of your test cases against a given Software Under Test (SUT) doesn't prove that the SUT is free of defects. It only demonstrates correct behavior for the test cases you've designed and implemented.

However, the product risk analysis and the established CC threshold drive the test design and have a direct influence on the amount of testing required and implemented. This ensures that you design and implement a test effort appropriate to your project.

If the product risk changes over time (either because of new knowledge or changed factors), you will need to reassess the analysis, and the amount of testing may have to change as well.

Future Blog Posts

Future blog posts will cover related topics such as:

  • Test design tips & tricks.
References

  • Foundations of Software Testing: ISTQB Certification by Dorothy Graham, Isabel Evans, Erik van Veenendaal and Rex Black, Cengage, 2008. ISBN 978-1844809899.
