POOL testing policy

1) Definitions:

- Component test:
  A testing procedure restricted to a single component. The bottom-up scheme starts
  here:
  - Unit test: a procedure testing the functionality of a single class.
  - Component test: a procedure testing the use-cases of the component interface.

- Integration test:
  A procedure verifying the behaviour of a subset of components within the framework.
  For the POOL architecture this means that:
  - The use-cases of the interface of each component have to be matched with test cases.
  - The component under test uses all the other components on which it depends.

- System test:
  A procedure matching a well-defined use case of the system, involving the framework as
  a whole. It may use many (or all) components of the framework.
  It only makes use of the public interface of the system.
  The testing procedure can address:
  - pure functionality
  - stressing (concurrency, intensive usage, ...)
  - error conditions (file system full, network link broken, ...)
  - performance
  - portability, interoperability (architectures, POOL versions,
    data format compatibility)
  - other specific scopes (e.g. memory leakage in some critical area)
 
2) Responsibilities:

- Component test:
  The owner of the package is responsible for:
  - defining the test cases (derived from the use-cases listed in the component
    description document)
  - defining the validation criteria in agreement with the approved plan
  - providing the implementation of the testing procedure

- Integration test:
  The owner of the TOP package is responsible for:
  - defining the test cases
  - defining the validation criteria in agreement with the approved plan
  - providing the implementation of the testing procedure

- System test:
  The infrastructure WP is responsible for:
  - defining the test cases
  - defining the validation criteria in agreement with the approved plan
  - providing the implementation of the testing procedure
  In addition, it is responsible for:
  - encouraging the use of the framework among the experiments
  - integrating the validation procedure with the use cases and criteria provided by the
    external user community

3) Deliverables:

  A 'test' deliverable consists of:
  - an executable (a binary executable or a script)
  - an (optional) set of input data
  - a reference output for the test validation
  - a (minimal) clear description of the aim of the test
    (a template for the documentation will be distributed, derived from the SPI proposal)
  The deliverable must work within the SCRAM configuration tool, and must be integrated
  with the steering tool (assumed to be Oval).
  Validation should include:
  - assertion of the test conditions
  - comparison of the output data with the reference
  - check for memory leaks (with the valgrind tool)
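
  As an illustration, a minimal sketch of the output comparison step, assuming
  hypothetical file names test_output.txt and reference_output.txt (the memory check
  would be performed by running the same executable under valgrind, e.g.
  valgrind --leak-check=full ./mytest):

      #include <fstream>
      #include <iostream>
      #include <string>

      // Compare the test output line by line against the reference file.
      // Returns 0 on success, 1 on any mismatch, so the steering tool can
      // flag the test as failed.
      int main()
      {
        std::ifstream out("test_output.txt");       // produced by the test run
        std::ifstream ref("reference_output.txt");  // shipped with the deliverable
        if (!out || !ref) {
          std::cerr << "cannot open output or reference file" << std::endl;
          return 1;
        }
        std::string outLine, refLine;
        int lineNo = 0;
        while (std::getline(ref, refLine)) {
          ++lineNo;
          if (!std::getline(out, outLine) || outLine != refLine) {
            std::cerr << "mismatch at line " << lineNo << std::endl;
            return 1;
          }
        }
        if (std::getline(out, outLine)) {           // output has extra lines
          std::cerr << "output longer than reference" << std::endl;
          return 1;
        }
        std::cout << "validation OK" << std::endl;
        return 0;
      }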
 

4) Actions:

- Component test:
  Mandatory:
  - Provide a dummy implementation of the public interface, to be used in component
    tests of classes depending on it (see the sketch at the end of this subsection).
  - Provide a stand-alone functionality test for all methods of the most important classes
    (in terms of responsibility within the component architecture). Use (whenever possible)
    stubs or empty implementations of the classes it depends on.
  - Provide component tests matching a set of test cases. Test cases have to map the
    set of use cases listed in the component description document.
  - Provide the Oval configuration for the steering of the unit tests. Both the execution
    of a single unit test and the iteration over all available tests have to be configured.
  Optional:
  - Usage of the CppUnit libraries for the implementation of unit tests is highly
    recommended (they help with condition assertions).
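
  By way of illustration, a minimal sketch combining the two points above, assuming a
  hypothetical FileCatalog interface: a dummy implementation usable in the tests of
  dependent classes, and a CppUnit test of a class depending on it:

      #include <cppunit/TestFixture.h>
      #include <cppunit/extensions/HelperMacros.h>
      #include <cppunit/extensions/TestFactoryRegistry.h>
      #include <cppunit/ui/text/TestRunner.h>
      #include <string>

      // Hypothetical public interface of the component.
      class FileCatalog {
      public:
        virtual ~FileCatalog() {}
        virtual std::string lookup(const std::string& logicalName) = 0;
      };

      // Dummy implementation: returns a fixed answer, so that classes
      // depending on FileCatalog can be tested in isolation.
      class DummyFileCatalog : public FileCatalog {
      public:
        virtual std::string lookup(const std::string&) { return "dummy.db"; }
      };

      // Hypothetical client class exercised by the unit test.
      class Resolver {
      public:
        explicit Resolver(FileCatalog& catalog) : m_catalog(catalog) {}
        std::string resolve(const std::string& name) { return m_catalog.lookup(name); }
      private:
        FileCatalog& m_catalog;
      };

      // CppUnit fixture: one test method per functionality under test.
      class ResolverTest : public CppUnit::TestFixture {
        CPPUNIT_TEST_SUITE(ResolverTest);
        CPPUNIT_TEST(testResolve);
        CPPUNIT_TEST_SUITE_END();
      public:
        void testResolve() {
          DummyFileCatalog catalog;
          Resolver resolver(catalog);
          CPPUNIT_ASSERT_EQUAL(std::string("dummy.db"), resolver.resolve("any"));
        }
      };

      CPPUNIT_TEST_SUITE_REGISTRATION(ResolverTest);

      int main() {
        CppUnit::TextUi::TestRunner runner;
        runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
        return runner.run() ? 0 : 1;  // nonzero exit signals failure to the steering tool
      }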

- Integration test:
  Mandatory:
  - Provide a test case for all the use-cases of the TOP component. Only the public
    interface of the component should be submitted to the procedure. Stubs or dummy
    implementations of the interfaces of the components it depends on should be used
    (whenever possible) when they are not involved in the integration (see the sketch
    below).
  - Provide the Oval configuration for the steering of the test.
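
  As a sketch, continuing the hypothetical FileCatalog example from the previous
  subsection: the integration test wires a real implementation into the client in place
  of the dummy, and drives everything through the public interface only (the types are
  repeated inline here only to keep the sketch self-contained; in a real test they would
  come from the components' public headers):

      #include <cassert>
      #include <iostream>
      #include <map>
      #include <string>

      // Hypothetical public interface, as in the unit-test sketch above.
      class FileCatalog {
      public:
        virtual ~FileCatalog() {}
        virtual std::string lookup(const std::string& logicalName) = 0;
      };

      // A (hypothetical) real implementation, replacing the dummy used at the
      // unit-test level, so the actual interaction between components is exercised.
      class MapFileCatalog : public FileCatalog {
      public:
        MapFileCatalog() { m_entries["logical_name_1"] = "physical_name_1.db"; }
        virtual std::string lookup(const std::string& n) { return m_entries[n]; }
      private:
        std::map<std::string, std::string> m_entries;
      };

      class Resolver {
      public:
        explicit Resolver(FileCatalog& catalog) : m_catalog(catalog) {}
        std::string resolve(const std::string& name) { return m_catalog.lookup(name); }
      private:
        FileCatalog& m_catalog;
      };

      int main() {
        MapFileCatalog catalog;
        Resolver resolver(catalog);

        // Test case derived from a use-case of the TOP component: a registered
        // logical name must resolve to the expected physical name.
        assert(resolver.resolve("logical_name_1") == "physical_name_1.db");

        std::cout << "integration test OK" << std::endl;
        return 0;
      }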

- System test:
  The definition and implementation of the tests in the different scopes (see above)
  must be scheduled according to the declared goals of the POOL release:
  - Releases 1.0 and 1.1:
    Implement the pure functionality testing. Define a set of test cases and provide
    the deliverables.
  - Release 1.2:
    Check the system performance. Define the areas to be probed. Define the validation
    criteria according to the system requirements. Provide the deliverables (see the
    timing sketch below).
  Portability tests have to be taken into account as well, if new platforms are to be
  supported. More generally, the detailed priorities can be changed when "hot areas"
  are identified.
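
  For the performance scope, a minimal timing sketch, assuming a hypothetical
  writeObject() operation and a threshold taken from the system requirements:

      #include <ctime>
      #include <iostream>

      // Hypothetical operation under measurement; stubbed out here so the
      // sketch compiles. A real system test would only go through the
      // public POOL interface.
      void writeObject(int /*i*/) { /* ... write one object ... */ }

      int main() {
        const int nObjects = 100000;     // stress parameter: intensive usage
        const double maxSeconds = 60.0;  // hypothetical validation criterion

        std::clock_t start = std::clock();
        for (int i = 0; i < nObjects; ++i)
          writeObject(i);
        double elapsed = double(std::clock() - start) / CLOCKS_PER_SEC;  // CPU time

        std::cout << nObjects << " objects written in " << elapsed << " s" << std::endl;
        return elapsed <= maxSeconds ? 0 : 1;  // nonzero exit flags a regression
      }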


Contact: G. Govi