Reasons for Testing Oracle Code in the Database
Sunday, May 17, 2009
Written by Finn Ellebaek Nielsen
Introduction
This post is the first in a series on various aspects of automated Oracle code testing and is probably longer than most of my future ones. I'd like to spend a few words at this stage establishing a common foundation of understanding on which to build in future posts.
Also, to set the scope of my focus, when I say "Oracle code" I mean
business logic implemented in PL/SQL stored procedures and SQL in the
form of views and materialized views.
This blog post describes reasons for testing Oracle code in the database. Generic arguments for low-level automated testing are also presented in order to show the full benefits of Oracle code testing.
Compelling Reasons for Testing
Some of the compelling reasons for why it's a good idea to test your Oracle code in the database include:
- Cost reduction.
- Risk reduction.
- Software quality improvement.
- Improved trust and good vibes between all parties involved in the project, which in turn improves productivity.
- Easier refactoring and enhancement of the software to meet new requirements.
- Improved system documentation.
- Decreased risk of hacker attacks.
- Enabling a switch to Agile project methodology, which has its own advantages.
Let's look at each of these motivations in further detail.
Cost Reduction
The cost can be reduced directly because of the factors described in the following sections.
A myriad of surveys have documented how much money is spent owing to software defects, both directly and indirectly. I won't bore you with the statistics; we are all aware of them. The software industry needs to grow up and assume responsibility for improving quality and stability in order to reduce cost for all parties.
Extreme Programming (abbreviated XP in the following) refers to the concept of Defect Cost Increase, which basically means that the later a defect is found in the project life cycle, the more it costs to fix it. Why is this? First of all, if a defect is found a long time after it was introduced, it is much harder for the developer to remember which changes were made in that particular release. Sure, a source code version control system can show the differences, but why were the changes made the way they were? Perhaps the developer who implemented the change no longer works on the project. Secondly, all the time spent testing, deploying and documenting the program has been wasted, as it has to be done again once the defect has been corrected. If a defect is found shortly after it is introduced, it is much quicker, easier and cheaper to fix. This is achieved through the practice of testing as soon as possible in the project life cycle.
XP goes further in reducing the number of defects through the
introduction of testing into the development loop in terms of automated
unit tests. Through these it's possible to perform cheap and frequent
testing, ensuring that the system under test is behaving as expected
before it is delivered to system integration test.
You don't have to change your project methodology (for example, go
"Agile") in order to benefit from this approach. By introducing
automated unit tests on your project you'll be able to gain the same
benefits related to cost reduction.
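To make this concrete, here is a minimal, tool-agnostic sketch of such an automated test in PL/SQL. The function net_price and its expected result are invented for the example; the point is simply that a test captured as a stored procedure can be re-run cheaply and as often as needed.

-- Minimal hand-rolled unit test: exercise net_price and report the outcome.
-- net_price is a hypothetical function under test.
CREATE OR REPLACE PROCEDURE test_net_price IS
  l_actual NUMBER;
BEGIN
  l_actual := net_price(p_gross => 125, p_vat_rate => 0.25);
  IF l_actual = 100 THEN
    DBMS_OUTPUT.PUT_LINE('test_net_price: PASS');
  ELSE
    DBMS_OUTPUT.PUT_LINE('test_net_price: FAIL, got ' || l_actual);
  END IF;
END test_net_price;
/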
Productivity peaks when the developer is "in the zone". That focus is severely disrupted by frequent production support requests for high-priority defect fixes, as the context switch between development and production support is expensive.
Testing as close as possible to the source of the business logic ensures that defects are easier (and thus cheaper) to find. If the only formal testing carried out is end-user testing of a GUI front end, it can be difficult to trace a given defect back to its origin through all the layers in the architecture: front-end layers, optional application server layers and database layers.
Testing in the same language as the code itself also reduces the
language gap. There is no need for the developer to spend time and money
learning another language just in order to test the original program.
Many Oracle-based systems with business logic in the database run in a dedicated database instance. This can be costly in terms of Oracle licenses and related monitoring software. Hardware capacity may also be wasted, and maintenance staff could probably be utilized better with fewer servers/database instances. By consolidating several systems into one Oracle database instance these costs can be reduced. There is a potential conflict between the specific Oracle database/patch versions that the different systems now sharing the same database instance require. But if automated tests are in place for each system, it's easy and cheap to verify whether migration to a specific version or deployment of a new patch set/security patch/one-off patch breaks the system. However, this consolidation is not possible if there's a conflict with 3rd party software certification. The maintenance windows of the different systems may also conflict.
Furthermore, the cost can also be reduced indirectly through most of the other arguments.
Risk Reduction and Software Quality Improvement
In some environments the risk involved in using software can be extremely high, for instance in airplanes and medical equipment.
Risk is reduced when software quality improves, and quality improves when the testing done before releasing/deploying the software to production reduces the number of defects.
Improved Trust and Feel Good Factor
By reducing the number of defects at all levels the trust between all
parties involved in the project (including the managers and users) can
be increased. Everybody would like to deliver and use high quality
products.
Haven't we all had the experience of releasing our code (which
everyone therefore presumed was well tested) to the system testers only
to find them immediately reporting obvious defects, ones that
we ought to have found ourselves? This is embarrassing and it simply
shouldn't happen. Furthermore, it results in the testers losing
confidence in the product. Also, if no automated unit tests are in place
it means that the testers will have to focus on less interesting
low-level testing instead of more interesting high-level system testing.
Delivering software of high quality boosts the developers'
confidence and makes them even more productive. It also increases the
testers' and the users' trust in the product. Improved trust and feel
good factor increases productivity and starts a positive cycle, with
better communication and smoother implementations in the future.
Standardizing how unit testing is done within the project (perhaps even across the organization) again improves trust and good vibes, as it's easy to create and share test cases when there is certainty about how it's done. No longer is unit testing done sporadically through dispersed, small, developer-specific scripts that no one can ever figure out how to use because they're not standardized and probably not committed to a version control system.
Refactoring and Enhancements
Refactoring is the concept of changing the internal structure of a program without changing its external behavior. Through refactoring it's possible to micro-optimize a given subprogram without affecting other components of the system.
Example refactorings of existing code:
- Implementation of a fix for a new defect.
- Addition of validation of input parameters in order to make the program less vulnerable to attacks.
- Change required in order for the program to be easier to read or maintain.
- Change of the algorithm for required performance improvement.
Some purists argue that this can be in conflict with readability and
maintainability and thus cannot be regarded as a true refactoring.
- Tidying up in order to avoid redundant code.
Refactoring can be limited to one component or span several components. For instance, the following types of refactoring may require changes to several components:
- Renaming a function. This requires all calls to the function in question to be modified.
- Addition of a parameter to a procedure. If the existing behavior can be obtained through a declared default parameter value, this change only requires further changes where the new behavior is required (a sketch follows below).
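As an illustration of the last point, here is a minimal, hypothetical sketch (the procedure, table and parameters are invented for the example). Adding a parameter with a declared DEFAULT preserves the behavior of all existing call sites; only callers that need the new behavior have to change.

-- Hypothetical example: log_event originally took only p_message.
CREATE OR REPLACE PROCEDURE log_event (
  p_message  IN VARCHAR2,
  p_severity IN PLS_INTEGER DEFAULT 3  -- new parameter; 3 = 'info'
) IS
BEGIN
  INSERT INTO event_log (logged_at, severity, message)
  VALUES (SYSTIMESTAMP, p_severity, p_message);
END log_event;
/
-- Existing call sites compile and behave unchanged:
--   log_event('Nightly batch started');
-- New behavior only where required:
--   log_event('Disk almost full', p_severity => 1);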
When a full set of quality automated unit tests are in place, any
developer -- including developers new to the development team -- can
refactor the existing code with confidence. Any unintended side effects
will be caught by the unit tests, assuming that the unit tests cover all
functionality.
Refactoring should only result in the addition or modification of test cases if the refactoring is required for fixing a new defect. Other types of refactoring simply rely on the existing tests; executing them will show whether the refactoring has broken anything.
While implementing an enhancement to the software, new test cases will have to be designed and written. Often, a number of small refactorings facilitating the enhancement are identified as well. Once again, the existing test case base ensures that the enhancement doesn't break existing behavior, unless this is intentional.
Documentation
The test cases developed also serve as documentation of the intended (and unintended) behavior of the program under test. This form of documentation can be easier for other team members to understand than a short program header comment in the source code. For example, if a reusable API has been developed, the test cases provide examples of correct and incorrect usage of this API.
Security
Oracle releases security patches to the Oracle database on a
quarterly basis and many customers don't install them as they're
uncertain whether their application will continue to work afterwards.
This is most often due to the fact that there is no automated
unit/regression test suite in place for the applications running in the
database and manually testing these on a quarterly basis is too
expensive.
This is a serious issue as there is obviously a reason for Oracle releasing these patches. Not installing the security patches leaves the database vulnerable to various types of attacks. Many organizations have an auditing/security department that requires such patches to be installed.
Agile Software Development
Projects following an Agile methodology deploy to production as often as 1-4 times a month. A crucial element of ensuring quality is to require that automated unit tests of the Oracle code are in place. Frequent deployments to production would simply not be possible if expensive manual tests had to be carried out each time.
Furthermore, automated unit testing is a crucial practice in the
Agile world. Without this in place on Agile projects it's difficult to
ensure the quality of the software and it can invalidate the entire
methodology.
Developer Resistance
Many developers find all sorts of reasons to avoid testing their code, some of them entirely subconscious.
One unvoiced but, I believe, common feeling is that developers don't want to find more defects in their own code. They are "getting by" with the level of testing and defects currently in the application. My counter-argument is that it's far more embarrassing to deliver buggy code early for system testing than to spend more time in the development phase and deliver high-quality code for system testing. At the same time, less time has to be spent on production support. We all make mistakes, so why not simply admit that we're human -- and set up a system for catching our mistakes more proactively? This attitude towards software development is one of the cornerstones of XP: embrace change, be courageous, and work together as a team instead of micro-optimizing.
Others feel that they "rise above" testing: "I am so offended that
you would think my code has bugs." Yeah, right. My view is that it's an
integral part of being a developer and that we need to assume this
responsibility.
Test-Driven Development (TDD) is the concept of writing test code
before the actual program to be tested. This can result in more focused
programming because the "only" purpose of the program is to make all the
related test cases pass. Introducing TDD can make it easier for
developers to accept that they need to test more: some developers feel
that testing is a bore as it's traditionally carried out after the fun
programming. With TDD developing the code can be seen as the reward
after first having implemented the test, making "the big test pill"
easier to swallow. TDD with Oracle is the focus of a future blog post.
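To give a flavor of the idea before that post, here is a minimal, tool-agnostic sketch (the function add_vat and the 25% VAT rate are invented for the example). The test is written first; in PL/SQL it is even created INVALID until add_vat exists, which corresponds nicely to the "red" phase of TDD.

-- Step 1: write the test first. It cannot pass (or even compile)
-- until add_vat is implemented.
CREATE OR REPLACE PROCEDURE test_add_vat IS
BEGIN
  IF add_vat(p_net_amount => 100) = 125 THEN  -- 25% VAT on 100 is 125
    DBMS_OUTPUT.PUT_LINE('test_add_vat: PASS');
  ELSE
    DBMS_OUTPUT.PUT_LINE('test_add_vat: FAIL');
  END IF;
END test_add_vat;
/
-- Step 2: implement just enough to make the test pass.
CREATE OR REPLACE FUNCTION add_vat (p_net_amount IN NUMBER) RETURN NUMBER IS
BEGIN
  RETURN p_net_amount * 1.25;
END add_vat;
/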
Don't get me wrong, testing isn't easy and there is no (and probably never will be a) "silver bullet" tool that can automatically figure out what your code is supposed to do and exercise it for you.
A shift to automated testing takes discipline and planning, and
usually requires changes in the attitudes and work habits of developers.
However, I believe that the decreased cost, improved quality and productivity, and improved trust and confidence will be a sufficient reward for the hard work at hand.
It is also crucial that each developer experience the positive
feedback loop of automated testing as soon as possible. It is,
therefore, important to carefully determine the best way to introduce
test automation to a team. This topic will be covered in a future blog
post.
Why Quest Code Tester for Oracle
There is a tradition of automated unit testing in other languages such as Java and C# due to the availability of tools such as JUnit and NUnit for many years. Still, adoption of automated unit testing in Java and C# has not by any means reached 100%. I believe that adoption of these so-called xUnit frameworks is constrained by the fact that with JUnit and NUnit you still need to write most of the test methods yourself, which can easily run into thousands of lines of code. You can end up writing as many lines of test code as the program under test contains, sometimes even more. Who has time to do this? How do you manage all this code and maintain it as the programs it tests change? How do you test your test code?
Throughout the years several attempts have been made at implementing a
solid framework/tool supporting automated testing with Oracle. These
include utPLSQL, PL/Unit, and Pluto. However, these all share two key
drawbacks: the lack of a user-friendly and productive UI for test case
design/maintenance, and the need to manually produce test code. Some
Oracle developers have also tried to use tools such as DbFit, JUnit or
NUnit, but that requires the developer to learn another language such as
Java or C#, which introduces a language gap and is, in my opinion,
unnecessary and too expensive.
Quest Code Tester is built on an Oracle repository and implemented through a backend PL/SQL API. The test cases are created through an intuitive GUI and the tool creates the test code for you, which saves you from writing and maintaining thousands of lines of test code. Quest Code Tester can also generate test cases for you based on typical boundary conditions and random values. Even better, if you need really complex test cases, you can hook into the generated code with customizations in terms of setup, teardown, comparison expressions, etc. This means that Quest Code Tester doesn't fall into the typical 80-20 pitfall, whereby it's very easy to implement 80% of the task but extremely difficult to implement the remaining 20%.
Since 2007 I've been helping organizations in England and Denmark get
up and running with Quest Code Tester for Oracle, with the objective of
automating Oracle code testing. Some of my customers have used the tool
for high-level data-oriented regression testing of complex views and
others have taken a more function-oriented stored procedure
unit/regression testing approach. In all cases, we started applying the
tool where we identified the best return on investment.
Throughout my work with Quest Code Tester for Oracle, Steven Feuerstein has been an invaluable resource and a great help, and we've had a very constructive dialogue on standard usage, tweaks and extensions to Quest Code Tester. One outcome of our dialogue is this blog, which we decided to create in order to evangelize the benefits of automated testing to the Oracle community.
Using Code Tester at Multiple Test Levels
The test terms used in the following adhere to the standards defined by the International Software Testing Qualifications Board (ISTQB).
Quest Code Tester can be used at various test levels:
- Component or unit testing (according to your preference): Here
the focus is to test the individual unit/component, assuming that all
its dependencies are working correctly. This is the most fine-grained
test level. An example would be to test a package or view in isolation.
In order to test the component in isolation it may be necessary to introduce mocks/stubs for the other components on which the component under test depends. A minimal sketch of a stub follows this list; how to achieve this in practice will be described in a future blog post.
- Integration testing: Here the focus is to test the interaction between multiple components and external resources such as the operating system, file system and hardware. The interface between systems can also be tested here. In terms of component integration this is necessary even though the components have been tested individually, because they could have been used in the wrong sequence or context, or simply incorrectly.
- System testing: This is the behavior of the entire system. It
only makes sense to use Quest Code Tester to test at this level if the
entire system has been implemented in Oracle code. Often this is not the
case and a test tool for testing the end-user interface will be used.
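As promised above, here is a minimal illustration of stubbing (all names are invented; real mocking techniques will be covered in the future post mentioned earlier). If the component under test calls its dependency through a package, the test environment can install a stub version of that package that returns canned values, making the test deterministic and isolated.

-- Hypothetical dependency: the real fx_rates package looks up live rates.
-- In the test schema we install a stub with the same specification.
CREATE OR REPLACE PACKAGE fx_rates IS
  FUNCTION rate (p_from IN VARCHAR2, p_to IN VARCHAR2) RETURN NUMBER;
END fx_rates;
/
CREATE OR REPLACE PACKAGE BODY fx_rates IS
  FUNCTION rate (p_from IN VARCHAR2, p_to IN VARCHAR2) RETURN NUMBER IS
  BEGIN
    RETURN 7.45;  -- canned DKK/EUR rate; the real body queries a rate feed
  END rate;
END fx_rates;
/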
Within the above mentioned test levels both confirmation testing and regression testing can be carried out:
- Typically, confirmation testing is the execution of the new test cases that were created as a result of a new defect. Confirmation testing also involves executing test cases that have suddenly failed even though they succeeded before. Before fixing a given defect, a new test case is created to expose it. Then the defect is fixed and this new test case (plus others that failed before) is re-executed in order to verify that the defect has been fixed correctly.
- Regression testing is the execution of all test cases in order to verify that no side effects have been introduced by the modifications made for the defect fix/refactoring/new implementation. Typically a regression test suite/pack is created in order to test the most important functionality of the system under test at a high level.
The following test types can be created with Quest Code Tester:
- Functional testing: Testing of the functional behavior.
- Non-functional testing in terms of performance. Quest Code Tester can potentially also be used for load and stress testing, but it depends on the system's architecture whether proper load and stress testing requires physical client machines driving the test.
A so-called smoke test can be carried out in order to perform a
sanity check of a deployment to any environment. Quest Code Tester can
also be used for this purpose.
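As a minimal example of the kind of sanity check a smoke test can start with, independent of any particular tool, the following query verifies that a deployment has left no invalid objects behind (a real smoke test would typically also execute a few central programs end to end):

-- Post-deployment sanity check: no object should be left invalid.
SELECT object_name, object_type
FROM   user_objects
WHERE  status = 'INVALID'
ORDER  BY object_type, object_name;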
In other words, Quest Code Tester can be used to create tests of various types and at various levels.
Test Quality
Obviously the quality of the entire test suite depends upon the individual test cases implemented. The test cases should not be regarded as static. They are dynamic in nature and, as previously mentioned, new test cases must be added and existing test cases must be reviewed when a defect is found.
The software quality is only as good as the quality of the tests, so this is obviously a highly critical factor. Defects can be hidden by false positives (an outcome is not tested properly and the test passes incorrectly). False negatives (an error is reported even though there is no defect) are less critical as there is no underlying defect, but these tests must still be corrected.
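Here is a minimal, hypothetical example of such a false positive (calc_discount and the expected value are invented for the example). The test calls the program under test but never inspects the result, so it passes no matter what the program does:

-- A test that passes incorrectly: it never checks the result,
-- so a defect in calc_discount stays hidden.
CREATE OR REPLACE PROCEDURE test_calc_discount IS
  l_result NUMBER;
BEGIN
  l_result := calc_discount(p_order_total => 1000);
  DBMS_OUTPUT.PUT_LINE('test_calc_discount: PASS');  -- asserts nothing!
END test_calc_discount;
/
-- The fix is to assert on the expected outcome:
--   IF l_result = 100 THEN ... PASS ... ELSE ... FAIL ... END IF;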
As system testers design tests for a living, they are an invaluable resource and should be involved in helping developers design appropriate and sufficient test cases.
It is beyond the scope of this blog post to describe in detail how to design your test strategy. This, plus tips & tricks on test approaches, will be covered in a future blog post.
Conclusion
It is widely accepted that "all software has defects", and with that statement we accept a high cost and an increased risk of failure. This is a shame and we should all do our best to reduce the cost involved -- it's our responsibility. There is no such thing as a free lunch. We can't just have the fun of developing software without the hard and (according to some) less fun work of testing it properly.
With the right sort of tool, however, we can have a cheap lunch: automate the mundane, repetitive steps and focus our efforts instead on driving tests from our intimate knowledge of the application requirements. With the right tool we get that positive feeling when we run the entire test suite to check whether the change we've just implemented breaks the system, both when it catches issues and certainly also when it proves correctness.
I believe that Quest Code Tester is the best tool available for
testing Oracle code in the database. I hope you will try it for yourself
and see what it can do for you.
Testing is a heavy investment but it pays off in terms of reduced
cost and improved quality, trust and maintainability. Automated testing
can be introduced into any development methodology, no matter whether
you're using more traditional approaches such as waterfall or using more
contemporary approaches such as one of the Agile variants.
Future Blog Posts
Future blog posts will cover related topics such as:
- Test strategy: How to best start using Quest Code Tester on both
greenfield and legacy projects with the best Return On Investment and
how to design your tests.
- Test approach tips & tricks.
- How to create and use mock objects in automated Oracle code testing.
- How to apply Test-Driven Development to software projects using Oracle code.
3 comment(s) so far...
Re: Reasons for Testing Oracle Code in the Database
"all software has defects" may not be a nice thing to accept, but
how much would it cost to get your software to have no defects, and how
would you know?
I once interviewed a guy who claimed that he had managed a software
project that had no bugs. "How do you know?" I asked - expecting some
detailed explanation of how great his QA team was. He responded that the
system had never had a production bug reported. Great. But that doesn't
mean the software has NO defects.
There will always be a risk of defects in a system (even if none have
been found to date). How much effort (and cost) should we expend on
reducing the risk that there are bugs in live software? What is an
acceptable level of risk? Risk of failure vs cost of exhaustive testing
is a difficult balance.
By rmackie on
Wednesday, May 27, 2009
Re: Reasons for Testing Oracle Code in the Database
That's an excellent point: where is the break-even point between the investment in testing and the return in terms of reduced risk and cost and increased quality?
I'll try to get back to this in a future blog post on test strategy. In short: on a legacy project it's more about getting started and doing it where it pays off most: testing the most central and critical components that are frequently changed. On a greenfield project you may consider applying Test-Driven Development, through which you let the requirements drive the test cases and let these test cases drive the programming. This ensures a good level of testing.
The test cases are not static and will need continuous
changes/enhancements.
Cheers
Finn
By QCTOblog on
Tuesday, June 02, 2009
Re: Reasons for Testing Oracle Code in the Database
The test strategy blog post was too big to also include
considerations for the amount of testing required so I'll cover this in
another post in a few weeks.
Thanks for your patience.
Cheers Finn
By QCTOblog on
Friday, July 10, 2009