
Test Policy and Strategy

Posted by: QCTO Blog, Saturday, July 04, 2009
Written by Finn Ellebaek Nielsen
In this blog post I'll suggest various best practices for establishing a test policy, as well as a test strategy, for Oracle code. By test policy I mean the high-level approach towards testing; a test strategy, on the other hand, gives the rules for testing in more detail. Both must fit into the overall test policy/strategy that covers all the tiers and sites in the application architecture, so in terms of the bigger picture the test policy/strategy for Oracle code is partial.


I'll focus on the following very different circumstances:

  • Legacy project: Typically an old, tangled, poorly documented system with little or no automated testing.
  • Greenfield project: New development from scratch.

As always with these blog posts, my focus is on the business logic defined through PL/SQL programs, triggers, views, materialized views, etc. (collectively referred to as Oracle code) stored in the Oracle database.

Test Policy

I've given many arguments for testing Oracle code in the database in my previous blog post "Reasons for Testing Oracle Code in the Database", so it should come as no surprise that my suggestion for a partial test policy is:

  • Automate unit and integration testing of Oracle code as much as possible, and run those tests within the database (see the sketch after this list).
  • The Oracle developers carry out the test design (with potential help from the system testers) and test implementation.
  • Everybody on the team can and should execute the automated tests.

With the test policy in place, let's look at how to establish partial test strategies under the two different situations mentioned above.


Legacy Project

This is probably the most common circumstance you'll encounter. Development teams have been building and deploying successful, mission-critical applications based on PL/SQL and SQL for more than 15 years. That amounts to a lot of legacy code, some of it no doubt easy to understand and maintain, most of it spaghetti code.


You likely face challenges in the following areas as you consider putting together a test policy for your legacy application:

  • Source code size: Thousands of lines of PL/SQL code and numerous complex views, some of which are based on each other at multiple levels.
  • Documentation: The documentation is in a poor state and hasn't been updated for a long time.
  • Dependencies: A large set of PL/SQL packages has been developed without clear separation and interfaces -- the packages are heavily interdependent. This makes it difficult to test them at the unit level, one by one, as each unit requires specific test data, often the output of another subprogram, in order to work correctly.
  • Testability: Large amounts of up-to-date production data are used for manual testing; no effort has been made to fabricate specific, limited test data (a sketch of what that can look like follows this list). The code hasn't been developed with testability in mind, so it may need refactoring, which is a risk in itself: you're introducing changes to the existing code without being able to see whether you're also introducing defects.
  • Tests: There are no (or hardly any) standardized unit tests. A few developers have written their own test scripts, but the code coverage of those scripts is poor and they run only on specific PCs, owing to hardcoded file paths and built-in assumptions about program settings (e.g. SQL*Plus or Toad). Furthermore, the scripts assume that test data has been set up in a specific way, with hardcoded primary keys.
  • Unfamiliar code: The original developers no longer work for the company, and the new developers are reluctant to refactor the source code because they're afraid of breaking it, given the lack of documentation and automated tests.
  • Skepticism: Developers are often skeptical about automated unit testing. It is not uncommon to hear: "Why do we have to test our code? That's what we have testers for".
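By way of contrast with the testability challenge above, here is a minimal sketch of what fabricating specific, limited test data can look like. The orders table, the orders_seq sequence, and the column names are all hypothetical; the point is that the test creates exactly the data it needs and captures the generated key, rather than depending on production data and hardcoded primary keys.

    -- Fabricate the minimal data one test needs; orders, orders_seq
    -- and the column names are hypothetical.
    CREATE OR REPLACE PROCEDURE setup_order_test_data (
       p_order_id OUT orders.order_id%TYPE
    )
    IS
    BEGIN
       INSERT INTO orders (order_id, customer_name, status)
       VALUES (orders_seq.NEXTVAL, 'TEST CUSTOMER', 'NEW')
       RETURNING order_id INTO p_order_id;  -- no hardcoded primary key
    END setup_order_test_data;
    /

A test that sets up its data this way can simply roll back at the end, so nothing persists and the test runs repeatably in any environment.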

Test Strategy

The challenges mentioned above may seem overwhelming, maybe even insurmountable. Before you give up on the idea of improved testing, however, I suggest that you design strategies that can address these challenges in a manageable way. You can apply one (or two, if they're not mutually exclusive) of the following test strategies to your legacy project:

  • Big bang: Rip x months out of the calendar and assign all the developers on the team (apart from a few assigned to production support) to implement fine-grained unit tests for the entire Oracle code base.
  • Incremental, top-down: Introduce high-level regression tests covering the units that are most critical in the system and are changed most frequently. For instance, in a system with a complex set of nested views based on a complex set of PL/SQL packages, automated testing could be introduced for one high-level view at a time (see the sketch after this list).
  • Incremental, bottom-up: Introduce low-level unit tests and medium-level integration tests covering the units that are most critical in the system and are changed most frequently. In the example system just mentioned, automated testing could be introduced for one PL/SQL package at a time, perhaps for only a few procedures/functions at a time if the package is very large.
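To make the top-down option concrete, here is a minimal sketch of a regression test pinned to one high-level view. The monthly_sales_v view, its columns, and the expected total are hypothetical and would be derived from fabricated test data; the test treats the whole stack of nested views and packages beneath the view as a black box.

    -- High-level regression test for one complex view;
    -- monthly_sales_v and the expected value are hypothetical.
    DECLARE
       l_total NUMBER;
    BEGIN
       SELECT SUM(amount)
         INTO l_total
         FROM monthly_sales_v
        WHERE sales_month = DATE '2009-06-01';

       IF l_total = 1500 THEN
          DBMS_OUTPUT.PUT_LINE('monthly_sales_v regression: PASS');
       ELSE
          RAISE_APPLICATION_ERROR(-20002,
             'monthly_sales_v regression: FAIL, got '
             || NVL(TO_CHAR(l_total), 'NULL'));
       END IF;
    END;
    /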

The big bang approach is difficult to implement successfully because of the following factors:

  • The feedback in terms of improved software quality takes too long to arrive, so it's difficult for the developers to keep up their optimism and pace.
  • Users get frustrated that they won't get any new features for x months.
  • It's expensive, since time is spent constructing tests that cover units which may never even change.

Both the incremental options should be expanded over time, as a need arises to change a unit that isn't already covered by automated tests. Over time, the incremental approaches will end up covering large parts of the system, with a heavy emphasis on those programs that are maintained (and therefore likely used) most often. If the bottom-up approach is difficult owing to internal code dependencies and lack of testability, consider following the top-down path initially and then introduce bottom-up over time.

The incremental options are appealing because they are implemented gradually, allowing everybody to see the positive effects sooner rather than later. The Return On Investment from the incremental options is relatively quick, especially if the system was previously tested manually (time consuming and error prone) for each deployment (every 2-4 weeks in many Agile projects).

The test cases should be based on a mix of system requirements (which have most likely changed during the project's lifetime), historical defect reports, and a consolidation of any developer-specific test scripts created over time. The testers should help the developers design the tests, leveraging their expertise (especially when it comes to brainstorming test scenarios, which, in our experience, many developers are not very good at).

Choosing the right strategies depends on various factors, such as:

  • Time: How long can the users/customers wait for the next release with critical new features? If you're in a competitive situation, how long can you afford to delay that new release with the killer features that will increase your market share? More testing on a legacy project will inevitably delay the delivery of new/changed functionality in the beginning (especially if you choose the big bang path), but you'll soon earn this delay back, and much more, as described in my previous blog post "Reasons for Testing Oracle Code in the Database".
  • Resources: How many resources do you have available, in terms of both people and budget? What is the cost of each critical production bug, and what about the customer's/user's trust in you? How do you convince your developers that proactive, thorough testing is a good idea?
  • Risk: What is the risk involved in not having automated tests in place? How critical is it that a bug is found in the production system(s)? Are people's lives at risk?
  • Break-even: Where is the break-even point between the time and money you spend on automated testing and the corresponding improvement in software quality?
  • Return On Investment: What will give you the best ROI, taking the other factors into account?

In reality, a combination of the incremental options over time will probably be your best bet.

Another factor is how much testing effort is required; I'll discuss this in more detail in a future blog post.


Greenfield Project

Enough about legacy projects -- let's turn to greenfield projects: brand spanking new application development! You're very fortunate if you're in this position, since (among other great benefits) you can design and implement your code for testability right from the start. You should seriously consider applying Test-Driven Development (TDD), in which you let the requirements drive the test cases and in turn use these test cases to drive the programming (in other words, the test cases become the to-do list for the implementation). This ensures a good level of testing, and it also ensures that you only implement what's required, instead of building what might be useful for some future set of requirements (which often never materialize).

TDD doesn't require you to change your entire project methodology to be Agile. So how could you apply TDD to Oracle code with Quest Code Tester for Oracle?

  1. Develop the signature of the program/subprogram if it doesn't already exist. This is not strictly correct according to TDD, but you need the signature in place for Quest Code Tester for Oracle to see the declaration and allow you to create a test definition against it. In strict TDD you develop the test before any code, so the first iteration of the test fails because the code doesn't even compile. (A sketch of this workflow follows the list.)
  2. Add a test, based on a requirement, historical defect, etc.
  3. Run all tests, including the new one. Identify the tests that fail.
  4. Write/modify code to satisfy the tests that failed. The code doesn't have to be elegant and it can even be redundant. For now it's important to focus on getting the tests to pass.
  5. Run all tests again. If any fail, move back to the previous step.
  6. Refactor your code to remove redundancy, hardcoded behavior, and anything that doesn't comply with the architecture, standards, etc.
  7. Repeat by going back to step 2 or 1.
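As an illustration of steps 1 through 3, here is a minimal hand-written sketch; in practice Quest Code Tester for Oracle would generate and run the test definition for you. The pricing_api package and its discounted_price function are hypothetical. The body deliberately returns NULL, so the test fails until step 4 supplies a real implementation.

    -- Step 1: the signature, plus a deliberately empty body.
    CREATE OR REPLACE PACKAGE pricing_api
    IS
       FUNCTION discounted_price (
          p_list_price   IN NUMBER,
          p_discount_pct IN NUMBER
       ) RETURN NUMBER;
    END pricing_api;
    /

    CREATE OR REPLACE PACKAGE BODY pricing_api
    IS
       FUNCTION discounted_price (
          p_list_price   IN NUMBER,
          p_discount_pct IN NUMBER
       ) RETURN NUMBER
       IS
       BEGIN
          RETURN NULL;  -- unimplemented on purpose: the test must fail first
       END discounted_price;
    END pricing_api;
    /

    -- Steps 2-3: a test based on a requirement; it fails until the
    -- stub above is replaced with a real implementation.
    BEGIN
       IF pricing_api.discounted_price(100, 10) = 90 THEN
          DBMS_OUTPUT.PUT_LINE('discounted_price: PASS');
       ELSE
          RAISE_APPLICATION_ERROR(-20003, 'discounted_price: FAIL');
       END IF;
    END;
    /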

I'll discuss TDD in more detail in a future blog post.

I suggest that you test all Oracle code developed on greenfield projects.

Fitting Into Overall Test Policy/Strategy

There is no sense in crafting the perfect partial test policy/strategy for your Oracle code if it doesn't match the project's overall test policy/strategy -- the two need to be coordinated, and redundant effort should be avoided in order to lower the cost.

If this is the first time a test policy/strategy has been implemented for Oracle code, the overall test policy and strategy may need to be amended to accommodate Oracle code.

During this coordination, consider whether the same test cases are being repeated at different levels, and avoid that duplication. The test policy/strategy must be agreed at the relevant levels in the organization, as well as with the customer and users.

Future Blog Posts

Future blog posts will cover related issues such as:

  • A test approach implementing the test policy and strategies we've now laid out.
  • Testing effort/depth required -- how much testing do we need?
  • Test design and approach tips & tricks.
  • How to break dependencies through creation/usage of mock objects in automated Oracle code testing.
  • How to apply Test-Driven Development to software projects using Oracle code.
  • Continuous Integration.
  • Code Coverage.


Further Reading

  • Working Effectively with Legacy Code by Michael C. Feathers, Prentice Hall, 2004. ISBN 978-0131177055.
  • Implementing Automated Software Testing by Elfriede Dustin, Thom Garrett and Bernie Gauf, Addison Wesley, 2009. ISBN 978-0321580511.
  • Foundations of Software Testing: ISTQB Certification by Dorothy Graham, Isabel Evans, Erik Van Veenendaal and Rex Black, Cengage, 2008. ISBN 978-1844809899.
