Acceptance Testing

Acceptance testing is the process that a client uses to (weakly) verify that the delivered and installed product is ready to be put into production use. From the user's point of view, this means that every user-oriented function should be thoroughly exercised. Include tests for all of the functions, including boundary cases where appropriate, such as typing an entry into a data field that is longer than the space allocated for it in the user interface.

Acceptance testing proceeds from the user's perspective, so it may not thoroughly test the exceptional conditions that arise during the operation of a product, such as system failures, timing problems, etc. But users can test most user-oriented functions under exceptional cases, for example ensuring that record locking operates when two users attempt to modify the same record, or that off-line devices are handled.

A typical acceptance test is organized as a script. The script is designed to verify first that the major functions operate properly in their most common modes. Testing then proceeds to minor functions and rarer operating modes. In all cases, some exceptional-condition testing is performed to see how the functions handle error conditions or pre-condition violations.

A typical acceptance testing script is hierarchically organized by subsystem and function. The top level of the script gives the overall plan for sequencing the tests. It should indicate which tests can be done in parallel, and which results from the current test must be achieved in order to proceed to the next tests. A test can have three results:

	P - pass: the function behaved as specified.
	R - pass with reservations: the function worked, but with minor
	    deviations or problems worth noting.
	F - fail: the function did not behave as specified.

The minimum acceptable result for each test is specified, and the step in the plan proceeds only if all tests achieve at least the minimum standard.
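
Under this convention the gating rule is mechanical: order the results F < R < P and proceed only when every test in a step meets its minimum. The short sketch below (in Python) illustrates the idea; the result ordering matches the script conventions above, but the helper names are invented for illustration.

    # Illustrative sketch of the P/R/F gating rule (helper names invented).
    RESULT_ORDER = {"F": 0, "R": 1, "P": 2}   # fail < reservations < pass

    def meets_minimum(got, minimum):
        # True if the achieved result is at least as good as the required one.
        return RESULT_ORDER[got] >= RESULT_ORDER[minimum]

    def step_passes(tests):
        # A step proceeds only if every (minimum, got) pair meets the minimum.
        return all(meets_minimum(got, minimum) for minimum, got in tests)

    # Step 1 of the sample plan below: NIDS add got R where P was required,
    # so the step fails and testing stops.
    step1 = [("P", "P"), ("P", "P"), ("P", "R"), ("P", "P"), ("P", "P")]
    print(step_passes(step1))   # False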

A sample fragment of an overall plan, with test results supplied, could be as follows. For a complicated plan, there will be test sub-plans. The plan script is designed to be marked up by the testers as they proceed through testing. Step 1 below has been performed, but testing stopped due to a failure of the NIDS add test. It may be useful to track the amount of time spent doing the tests in order to more accurately schedule the next round of testing.


Acceptance Testing Script for Graphical Talk

Step 1 -

Status: incomplete
Date: 1998 Mar 01
Duration: .5hr

Subsystem: NIDS
Objective: Verify major use cases
Tests:

	Test Name		Result (Min Required/Got)
	NIDS startup		P/P
	NIDS shutdown		P/P
	NIDS add		P/R 
	NIDS remove		P/P
	NIDS update		P/P

Notes:

  1. NIDS add did not permit passwords with a leading number.
  2. Stopped testing; sent in a bug report re the add function.

Step 2 -

Status:
Date:
Duration:

Subsystem: GT
Objective: Establish minimal functionality prior to detailed tests.
Tests:

	Test Name		Result (Min Required/Got)
	GT two party connect	P
	GT two party text	R

Notes:


Schedule Note: the next two steps can be done in parallel.

Step 3.1 -

Status:
Date:
Duration:

Subsystem: GT
Objective: Verify major graphics functionality.
Tests:

	Test Name		Result (Min Required/Got)
	Sub-plan GT-graphics	P

Notes:


Step 3.2 -

Status:
Date:
Duration:

Subsystem: NIDS
Objective: Verify NIDS password processing.
Tests:

	Test Name		Result (Min Required/Got)
	NIDS passwords		P

Notes:


Actual tests have varying levels of significance. We identify three:

	Major - tests of the major functions in their most common modes.
	Minor - tests of minor functions or rarer operating modes.
	Exception - tests of behaviour under error conditions or
	    pre-condition violations.

A typical entry for a particular test could look like those below. Some convention is required to indicate when the tester supplies input, and when the output format is critical (e.g., fixed width).


Subsystem Name: NIDS
Test Name: NIDS startup
Test Level: Major
Test Details:

    Pre-conditions: no other NIDS may be running on the default port.
    Procedure: start the NIDS as below
    Expected Behaviour:
	% nids
	nids started at Thu Mar  2 11:15:11 MST 1998
	LOG nids started at Thu Mar  2 11:15:11 MST 1998
	LOG nids is listening to port 5000
	LOG Listening for connection 0
    Minor deviations: the port number will be the NIDS default (this
	should be recorded for other tests), and the date and time
	should be current.

Subsystem Name: NIDS
Test Name: NIDS shutdown
Test Level: Major
Test Details:

    Pre-conditions: NIDS must be up and running, you must know
	its contact port, and its system password.  
  	NIDS can have debugging set on or off.
    Procedure: run the program nids-shutdown, with the correct
	contact port.  Supply the correct system password if other 
	than none.
    Expected Behaviour:
    from nids-shutdown:
	% nids-shutdown
	nids-shutdown: Talking to NIDS at localhost on port 5000
	Shutdown password ( uses default) ? 
	Reply was "OK shutting down now"

    from nids:
	LOG Got it.
	LOG Connection 0 from: //129.128.4.224:1871
	LOG Got command "SHUTDOWN none"
	OK shutting down now
	LOG nids terminated at Thu Mar  2 11:19:35 MST 1998
	nids terminated at Thu Mar  2 11:19:35 MST 1998

    Minor deviations: the connection number may be bigger than zero,
	the connection port will depend on where nids-shutdown is running,
	and the date and time will be current.
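
When the expected behaviour and its permitted minor deviations are written down this precisely, the comparison can also be expressed in machine-checkable form. The sketch below shows one way to do this for the startup transcript above, using regular expressions to tolerate the variable port, connection number, and timestamp; the helper name is invented for illustration.

    import re

    # Illustrative sketch: expected output as patterns, so the minor
    # deviations noted above (current timestamp, default port, connection
    # number) are tolerated rather than compared literally.
    EXPECTED_STARTUP = [
        r"nids started at .+",                  # date and time are current
        r"LOG nids started at .+",
        r"LOG nids is listening to port \d+",   # port is the NIDS default
        r"LOG Listening for connection \d+",    # connection number may vary
    ]

    def transcript_matches(lines, patterns):
        # True if each transcript line matches the corresponding pattern.
        if len(lines) != len(patterns):
            return False
        return all(re.fullmatch(p, line) is not None
                   for p, line in zip(patterns, lines))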

Fault diagnosis can be included as part of the acceptance testing script, so that when a test fails, further diagnostic tests are performed to gather information that helps the supplier narrow down the source of the failure. Often the implementors must be involved in these tests, as they are the only ones who know enough about the actual workings of the product.
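
Such diagnostic follow-ups can themselves be scripted. A minimal sketch, assuming each test is associated with a list of diagnostic commands to run on failure; the command names here are invented, and a real script would use the product's own diagnostic tools.

    import subprocess

    # Illustrative sketch: on failure, run the diagnostic commands
    # associated with a test and collect their output for the bug report.
    # The command names are invented for illustration.
    DIAGNOSTICS = {
        "NIDS add": [["nids-dump-log"], ["nids-show-config"]],
    }

    def diagnose(test_name):
        report = []
        for cmd in DIAGNOSTICS.get(test_name, []):
            result = subprocess.run(cmd, capture_output=True, text=True)
            report.append((cmd, result.returncode, result.stdout))
        return report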

Ideally, acceptance testing would be automated, so that a complete acceptance test could be run automatically after even the smallest change. This is possible for certain products, for example a non-interactive batch process or an interactive text-based system. It is much more difficult for window-oriented systems.
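
For a non-interactive product, such automation can amount to running each test's procedure as a subprocess and checking its transcript against the expected patterns, as sketched below. This assumes the pattern-matching idea shown earlier and a test whose procedure runs to completion; the harness itself is invented for illustration.

    import re
    import subprocess

    # Illustrative sketch: run one scripted test and report a P/F result.
    # Suitable only for tests whose procedure runs to completion.
    def run_test(command, patterns):
        proc = subprocess.run(command, capture_output=True, text=True)
        lines = proc.stdout.splitlines()
        ok = (len(lines) == len(patterns) and
              all(re.fullmatch(p, line) for p, line in zip(patterns, lines)))
        return "P" if ok else "F"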


Last Update: Nov. 17, 1999 [PGS]