
Assessing the Reliability of Computer-Processed Data

Published by the Government Accountability Office on 1990-09-01.


United States General Accounting Office
Office of Policy

GAO

September 1990

Assessing the Reliability of Computer-Processed Data
Preface


Auditors and evaluators rely on data to accomplish their assignment objectives. In today's computer age, more and more of the data available to them are computer-based and processed. Such data may come from a microcomputer, minicomputer, or mainframe and may range from a collection of questionnaire responses to large national data bases.

In considering the use of computer-based data, the following logical questions arise:

• How do the data relate to the assignment's objective(s)?

• What do we know about the data and the system that processed them?

• Are the data reasonably complete and accurate?

The Government Auditing Standards, generally referred to as the "Yellow Book," provide the standards and requirements for financial and performance audits. A key standard covers the steps to be taken when relying on computer-based evidence.

The purpose of this guide is to help GAO staff meet the Yellow Book standard for ensuring that computer-based data are reliable. The guide also provides a helpful conceptual framework to expedite job performance and help staff address standards for assessing internal controls and compliance with applicable laws and regulations.

The key steps in assessing reliability are:

• Determine how computer-based data will be used and how they will affect the job objectives.

• Find out what is known about the data and the system that produced them.

• Obtain an understanding of relevant system controls, which can reduce the risk to an acceptable level.

• Test the data for reliability.

• Disclose the data source and how data reliability was established, or qualify the report if data reliability could not be established.

The work described in this guide should normally be done by auditors and evaluators as an essential part of assignment planning. However, in those cases where computer specialist skills are needed, every effort should be made to secure these skills.

The major contributors to this guide were Dan Johnson and Charles M. Allberry. For further assistance, please call 275-6172.




Werner Grosshans
Assistant Comptroller General
  for Policy
Contents

Preface

Chapter 1: Introduction
    Government Auditing Standards
    Using This Guide
    System Review Versus a Limited Approach
    When Data Reliability Should Be Determined
    Who Should Evaluate Computer-Based Data Reliability
    Terms Defined

Chapter 2: Assessing Reliability Risk, Understanding System Controls, and Determining Data Testing Requirements
    Conceptual Framework
    Planned Use of Data
    Knowledge/Experience With System or Data
    System Controls

Chapter 3: Data Testing
    Objectives and Methods of Data Testing
    Various Levels of Data Testing
    Special Considerations in Computer-Programmed Data Tests

Chapter 4: Reporting on Data Reliability
    Reporting on Computer-Based Evidence

Chapter 5: Case Study: Guaranteed Student Loans
    Case Example
    Determining Reliability Risk
    Understanding System Controls
    Data Testing

Appendixes
    Appendix I: Examples of Data Tests
    Appendix II: Special Considerations in Understanding Computer System Controls

Tables
    Table 2.1: Factors in Developing Reliability Risk
    Table 2.2: Factors in Determining Extensiveness of Data Testing

Abbreviations
    ADP     automated data processing
    DMTAG   Design, Methodology, and Technical Assistance Group
    GAO     General Accounting Office
    IG      Inspector General
    OSM     objectives, scope, and methodology
    TAG     Technical Assistance Group
Chapter 1
Introduction


This chapter discusses

• standards and requirements for using computer-based evidence contained in GAO's "Yellow Book",

• the purpose of this guide,

• distinctions between a full system review and a more limited effort,

• when (during the assignment) data reliability should be determined, including options to consider when data are unreliable,

• who should determine data reliability, and

• definition of terms.¹


Government Auditing Standards

As part of the evidence standard for performance audits, GAO's Government Auditing Standards (the Yellow Book) and chapters 4 ("Standards") of the General Policy Manual and the Project Manual include requirements for determining the reliability of computer-based information.

The Yellow Book gives the following guidance:

    When computer-processed data are an important or integral part of the audit and the data's reliability is crucial to accomplishing the audit objectives, auditors need to satisfy themselves that the data are relevant and reliable. This is important regardless of whether the data are provided to the auditor or the auditor independently extracts them. To determine the reliability of the data, the auditors may either (a) conduct a review of the general and application controls in the computer-based systems, including tests as are warranted; or (b) if the general and application controls are not reviewed or are determined to be unreliable, conduct other tests and procedures.

¹This guide supersedes the publication, Assessing the Reliability of Computer Output (AFMD-81-91), dated June 1981.




    When the reliability of a computer-based system is the primary objective of the audit, the auditors should conduct a review of the system's general and application controls.

    When computer-processed data are used by the auditor, or included in the report, for background or informational purposes and are not significant to the audit results, citing the source of the data in the report will usually satisfy the reporting standards for accuracy and completeness set forth in this statement.


Using This Guide

This guide helps staff ensure that their use of computer-based data meets Yellow Book requirements. It applies to both performance and financial assignments.

Staff should not assume that computer-based data are reliable. When using computer-processed data as evidence, staff must take steps to provide reasonable (not absolute or complete) assurance that the data are valid and reliable. Effectively carried out, the steps discussed in this guide will provide that assurance. They will not, nor do they need to, ensure that all data errors are detected.

The effectiveness of carrying out the steps in this guide depends on judgment in determining how much to rely on system controls, how to test data, and how much testing to do. Errors in judgment have undesirable consequences: too much audit effort wastes valuable resources, while too little jeopardizes the credibility of our work.


System Review Versus a Limited Approach

There are basically two approaches to assessing the reliability of computer-based data: the system review, generally performed by specialists, and the more limited approach (which this guide addresses) designed for evaluators/auditors.




A system review assesses and tests all controls in a computer system for the full range of its application functions and products. These reviews (1) examine a computer system's general and application controls, (2) test whether those controls are being complied with, and (3) test data produced by the system. While this approach provides the best understanding of a system's design and operation, it tends to be time consuming. When the assignment's objective(s) dictate a complete system review, specialists should be consulted.

The limited review is targeted to particular data. As a result, it normally requires a less extensive understanding of general and application controls. Pertinent controls are examined to the extent necessary to judge the level of data testing needed to determine data reliability. This can usually be performed by generalist staff.

For most assignments using computer-based evidence, the more limited approach described in this guide is adequate. However, if GAO staff use a specific set of computer-based data for many different assignments during an extended period, the length and cost of a full system review may be warranted. Such a review (periodically updated) might be less expensive in the long run than individual determinations of data reliability using the procedures in this guide.


When Data Reliability Should Be Determined

The reliability of computer-based data should be determined early in the planning phase of an assignment. If an assignment relies on computer-based evidence, staff must know if the data are reliable. If the data are not sufficiently reliable to meet the assignment's objective(s), they cannot be used as the primary evidence and staff will need to plan alternative approaches. The following options should be considered and discussed with management and, as necessary, with customers:




• Seek evidence from other sources. Staff would need to determine the reliability of such data.

• Collect primary data to meet the assignment's objective(s), rather than use secondary source data. This would be possible only if the work could be completed in time to meet the requester's needs.

• Redefine the assignment's objective(s) to eliminate the need to use unreliable data.

• Use the data, but explain their limitations and refrain from drawing unreasonable conclusions or recommendations. It is preferable to draw no conclusions or recommendations.

• Terminate the assignment if no other alternative is possible.

Assignment proposals should include adequate staff time and identify the specific skills necessary to complete reliability determinations in a timely manner.


Who Should Evaluate Computer-Based Data Reliability

This guide is designed for the use of evaluators/auditors. If expert help is needed in carrying out this more limited approach, however, it should be obtained promptly. As with all evidence, evaluators/auditors are responsible for its reliability; computer-based data should be no different. The basic tests of evidence apply: it should be the best evidence, competent, relevant, and sufficient. In evaluating the competence of evidence, the evaluator/auditor should carefully consider whether any reason exists to doubt its validity, completeness, and accuracy.


Terms Defined

Definitions of terms used in this guide are as follows:

Data reliability: A state that exists when data are sufficiently complete and error free to be convincing for their purpose and context. It is a relative concept that recognizes that data may contain errors as long as they are not of a magnitude that would cause a reasonable person, aware of the errors, to doubt a finding or conclusion based on the data.

Computer system controls: Policies and procedures that provide reasonable assurance that computer-based data are complete, valid, and reliable. They include general and application controls.

General controls: The structure, methods, and procedures that apply to the overall computer operations in an agency. They include organization and management controls, security controls, and system software and hardware controls.

Application controls: Methods and procedures designed for each application to ensure the authority of data origination, the accuracy of data input, the integrity of processing, and the verification and distribution of output.

Systems review: An assessment of general and application controls, tests of the degree of compliance with those controls, and appropriate data tests.

Compliance testing: Verifying whether controls are being complied with during the system's operation. Compliance testing does not directly test whether particular computer data are valid and reliable.

Data testing: Testing to determine if particular data produced by a computer system are valid and reliable. Data testing does not establish the existence or adequacy of system controls or whether such controls are being complied with, but may reveal indications of control weaknesses.

Source record: Information, in manual or electronic form, which is the basis for original entry of data to a computer application.

Attribute test: An examination of a data element for a logical or defined characteristic; also referred to as an unconditional test. For example, the status of a loan application must be "approved", "denied", or "pending".

Relationship test: A comparison of values to validate a logical or defined correlation; also referred to as a conditional test. For example, an invoice date must be the same as or earlier than the related payment date.

Data element: An individual piece of information that has definable parameters (e.g., a social security number).

Data record: A collection of data elements relating to a specific event, transaction, or occurrence (e.g., name, age, social security number, school, date enrolled, loan date, loan amount, amount repaid, and loan balance).

Data file: A collection of data records relating to a specific population (e.g., student loan applications for Maryland schools).

Attributes: Characteristics of a data element defined by the data dictionary (e.g., numeric or alpha, acceptable values, and length).
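To make the attribute and relationship tests concrete, the following sketch shows how each might be expressed against a single data record. The field names, values, and rules are illustrative assumptions for this guide's examples, not part of any particular agency system.

    # Illustrative sketch only: field names and rules are assumed for the example.
    from datetime import date

    record = {
        "status": "approved",             # must be one of a defined set (attribute test)
        "invoice_date": date(1990, 3, 1),
        "payment_date": date(1990, 2, 1)  # must not precede the invoice date (relationship test)
    }

    def attribute_test(rec):
        """Unconditional test: status must take a defined value."""
        return rec["status"] in {"approved", "denied", "pending"}

    def relationship_test(rec):
        """Conditional test: invoice date must be on or before the payment date."""
        return rec["invoice_date"] <= rec["payment_date"]

    for name, test in [("attribute", attribute_test), ("relationship", relationship_test)]:
        print(name, "test:", "pass" if test(record) else "FAIL")

In this example the attribute test passes, but the relationship test fails because the payment date precedes the invoice date, which would be flagged for follow-up.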
Chapter 2
Assessing Reliability Risk, Understanding System Controls, and Determining Data Testing Requirements

This chapter introduces the process and decision points for conducting a reliability assessment of computer-processed data. It examines the elements which influence the level of data testing required, including

• a conceptual framework,

• the planned use of the data relative to the assignment's objective(s),

• the existing knowledge base relating to the data and system, and

• the adequacy of system controls.


Conceptual Framework

When computer-processed data are being considered for use in an assignment, staff must initially determine the reliability risk, that is, the risk that the data are unreliable for the planned use. As illustrated in table 2.1, reliability risk is determined by considering both planned use and present knowledge of the data or the computer system.

Table 2.1: Factors in Developing Reliability Risk

    Planned use of data     +   Knowledge/experience         =   Reliability risk
                                with data or system
    ------------------------------------------------------------------------------
    Sole support to meet        Unfavorable/Nonexistent          High
    objectives                  Adequate                         Moderate
                                Favorable                        Moderate to Low

    Corroborative or            Unfavorable/Nonexistent          Moderate
    supporting evidence         Adequate                         Low
                                Favorable                        Very Low

    Background information      [Not a mitigating factor         Very Low
                                at this level]




The second step (illustrated in table 2.2) is to understand system controls and determine if they lower the reliability risk. If system controls are strong, they can lower the reliability risk to an acceptable/prudent level and decrease the data testing that would normally be required in a high risk environment.


Table 2.2: Factors in Determining Extensiveness of Data Testing

    Reliability risk   +   Assessment of system          =   Extensiveness of
                           controls                          data testing
    --------------------------------------------------------------------------
    High                   Weak/Not Determined               High
                           Adequate                          High to Moderate
                           Strong                            Moderate to Low

    Moderate               Weak/Not Determined               Moderate
                           Adequate                          Moderate to Low
                           Strong                            Low

    Low                    Weak/Not Determined               Low
                           Adequate                          Low to Very Low
                           Strong                            Very Low

    Very Low               [Generally not necessary          Very Low
                           and not cost effective]

Details of assessing reliability risk and determining the extensiveness of data testing needed to reduce that risk to an acceptable level appear in the sections that follow.
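The two tables amount to a two-step lookup. The sketch below encodes them as simple mappings to show how the steps combine; the category labels mirror the tables above, and the function name and structure are illustrative assumptions only.

    # Illustrative encoding of tables 2.1 and 2.2 as lookups (labels taken from the tables).
    RELIABILITY_RISK = {
        ("sole support", "unfavorable/nonexistent"): "high",
        ("sole support", "adequate"): "moderate",
        ("sole support", "favorable"): "moderate to low",
        ("corroborative", "unfavorable/nonexistent"): "moderate",
        ("corroborative", "adequate"): "low",
        ("corroborative", "favorable"): "very low",
        ("background", "any"): "very low",
    }

    TESTING_EXTENSIVENESS = {
        ("high", "weak/not determined"): "high",
        ("high", "adequate"): "high to moderate",
        ("high", "strong"): "moderate to low",
        ("moderate", "weak/not determined"): "moderate",
        ("moderate", "adequate"): "moderate to low",
        ("moderate", "strong"): "low",
        ("low", "weak/not determined"): "low",
        ("low", "adequate"): "low to very low",
        ("low", "strong"): "very low",
        ("very low", "any"): "very low",
    }

    def extent_of_data_testing(planned_use, knowledge, controls):
        """Two-step lookup: (use, knowledge) -> risk, then (risk, controls) -> testing level."""
        risk = RELIABILITY_RISK.get((planned_use, knowledge),
                                    RELIABILITY_RISK.get((planned_use, "any")))
        return TESTING_EXTENSIVENESS.get((risk, controls),
                                         TESTING_EXTENSIVENESS.get((risk, "any")))

    print(extent_of_data_testing("sole support", "adequate", "strong"))  # -> "low"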


Planned Use of Data

When an assignment requires computer-processed data, the first step is to decide how the data will be used: how will they contribute to meeting an assignment's objective(s)?

Normally, data are used as

• the sole evidence supporting a finding,

• corroborative or supporting evidence, or

• background information.

If computer-processed data are the sole support for an assignment's objective(s), the need for confidence in their reliability is greatest. For example, assume GAO is asked to determine whether mine safety inspections are being made within a legislatively prescribed time frame. The agency under review has information in a computerized data base that directly addresses this question. If GAO plans to use information in that data base without corroborating evidence, establishing the reliability of the data base is critical to the assignment's objective(s).

When computer-based data are supported by other evidence, the need for complete confidence in that data varies depending on how effectively the other evidence, standing alone, could support the finding. For example, staff discussions with union representatives might reveal that regular mine inspections were not occurring, but such discussions were unable to clearly establish the interval between inspections. These discussions corroborate information in the agency's computerized data bases and add to its persuasiveness. The finding still leans heavily on the computerized data base, and determining its reliability is important.

The lowest risk of using computer-based data occurs when the data are used in the report for background or informational purposes and are not vital to audit results. In these cases, citing the data source in the report and ensuring that the data are the best available will satisfy reporting standards for accuracy and completeness unless there is reason to believe that inaccuracies in the data would jeopardize the report's credibility.




Knowledge/Experience With System or Data

After deciding how the computer-based data will be used, the next step is to find out what is already known about the data and the system that processed them. This information, combined with the planned use of the data, determines the reliability risk.

The following examples illustrate ways in which staff can learn more about the data or the system controls:

• GAO used the same data to support a prior finding after adequately establishing their reliability. The risk of using the data in a current report would be low, but updating would be needed to ensure that the data had not changed since they were last used.

• GAO recently established the adequacy of system controls used to process data critical to the assignment's objective(s). After determining that no significant changes had occurred in the system since the assessment, the reliability risk would be low. System controls could be considered good, and minimal data testing would be required.

• In a recent report, GAO cited information developed from a different application of the same computerized data base. Work previously done to determine the adequacy of general controls, after updating, could reduce the present data's reliability risk. Additional work would need to be done to establish the adequacy of relevant application controls and the level of data testing required.

• The inspector general or other audit/evaluation group studied the system controls or used the data. The results of the work could establish reliability risk, but the Yellow Book's due professional care requirement involving reliance on work performed by others would need to be met. (See Government Auditing Standards, pp. 3-14 through 3-16.)


System Controls

While some data testing is always necessary when computer-based evidence is used to meet an assignment's objective(s), satisfactory system controls can reduce the data testing required to establish reliability. In such cases, less data testing is needed than when those controls are weak or undetermined.

According to the Yellow Book,

    The degree of testing needed to determine data reliability generally increases to the extent that the general or application controls were determined to be unreliable or were not reviewed.


Understanding System Controls

Staff should understand system controls and their purposes to determine whether they can be relied on to reduce data testing. This understanding includes both general and application controls that relate to assignment evidence.

An understanding of general controls would include knowledge of the following items, which affect data reliability:

• Management commitment to system design and operation: This includes management's methods for monitoring and following up on performance, including corrective action on internal audit recommendations and on user complaints.

• Organization of the system functions, including assignment of responsibilities and separation of duties: This provides that key duties and responsibilities in authorizing, processing, recording, and reviewing transactions are assigned to different individuals.

• Physical security of the computer facility and its components, including restrictions on access: Restricting access helps ensure data reliability by reducing the risk of unauthorized data entry or modification.

• Supervision: Effective supervision requires a clear communication of duties and responsibilities, regular oversight (particularly at critical points), and periodic performance evaluations.




In understanding application controls, staff should consider matters such as the following:

• procedures to ensure that application software and subsequent modifications are authorized and tested before implementation;

• frequency of system modification and the reasons for it;

• whether program changes are controlled and promptly documented;

• the review, approval, control, and editing of source transactions to ensure completeness and prevent error;

• tables used in computer processing, their sources, and the frequency of updating;

• the existence of current narrative system descriptions and flowcharts;

• reconciliation of output records with input entries;

• error detection and correction procedures;

• data users' views of data reliability; and

• internal audit reports and other evaluations or studies.

Regardless of how well-conceived and designed system controls may be, they are ineffective if applied incorrectly and inconsistently. For example, a system may have a control that requires a data quality control group to verify that source data are accounted for and that they are complete and accurate, have been appropriately authorized, and transmitted in a timely manner. But if that group is bypassed, the control contributes nothing to ensuring the integrity of data entry.

Many reasons exist for bypassing or overriding controls, such as time pressures, fatigue, boredom, inattention, or even collusion for personal gain. As a result, staff should select the most significant control procedures and confirm adherence to them. Although it is unnecessary to test all procedures, staff should conduct sufficient tests to afford a reasonable basis for reducing testing by relying on the adequacy of controls. Observing the work environment for an ordered and businesslike atmosphere can be helpful.

Documentation of a well-controlled system should be complete and current. Absence of such documentation may indicate that controls do not exist or, if they do, that they are not understood or adequately applied. Other red flags that suggest vulnerability to data errors include

• old systems with high program maintenance;

• large volumes of data;

• frequent processing and updating activity;

• numerous transaction types and sources;

• large numbers of coded data elements;

• high employee turnover (e.g., data entry clerks, operators, and analysts) and inadequate training;

• complex or messy data structures; and

• lack of ADP standards, especially related to security, access, and program change control.

Discussions with knowledgeable agency personnel can provide an effective beginning in gaining an overall system understanding. Their testimonial evidence, however, should be corroborated through independent observations or tests whenever possible.


Relying on System Controls to Reduce Data Testing

Some data testing is essential whenever computer-based data will be used as evidence. Even when system controls are well designed and generally adhered to, data accuracy is not ensured. While staff can rely on good controls to reduce data testing, control reviews cannot substitute for data testing.

After reviewing controls, staff should evaluate their strength, that is, whether controls can be reasonably expected to prevent errors and to detect those that do occur. This evaluation determines whether extensive, moderate, or minimal data testing is needed.

Staff should keep the purpose of reviewing the controls in mind as they progress. If it is determined that (1) system controls cannot be relied on to limit data testing or (2) continuing the controls review is more costly than expanding data testing, the review should cease. In this case, staff must proceed with data testing as if system controls were weak or nonexistent.


Documenting the Basis for Extensiveness of Data Testing

The workpapers should be documented to disclose:

• What assignment objective(s) will computer-processed data likely support?

• How will the data support that objective?

• Will that objective be supported by other evidence? What is the other evidence?

• What is known about the data?

• What did staff do to understand the system and its controls?

• Did staff determine the reliability of system controls? If so, are the controls strong, adequate, or weak?
Chapter 3
Data Testing

This chapter discusses

• the objectives and methods of data testing,

• the appropriateness of varying testing levels, and

• factors to consider when using computer-assisted testing techniques.


Objectives and Methods of Data Testing

Yellow Book standards require evidence (regardless of its source or format) to be competent, relevant, and sufficient. Data reliability focuses on assessing the competency of data. Data testing is intended to establish that evidence relied on is suitably accurate for its specified purpose.

While it is unlikely that any computer system contains error-free data, the concept of reliability does not require perfect data. It should, however, include steps to assess data completeness, data authenticity, and the accuracy of computer processing.

Tests of data completeness confirm that the universe contains all data elements and records relevant to the assignment's objective(s) and to the period covered by the audit. Missing data are particularly harmful if they represent a specific segment of the total population (e.g., all grant recipients from California).

An analysis of data authenticity determines if the computer-based data accurately reflect the source records.¹ This means that information in source records should match that entered in computer-based records and that each computer-based record should be supported by a source record.

Steps aimed at the accuracy of computer processing are designed to verify that all relevant records were completely processed and that computer processing met the intended objectives.

There are two varying approaches to testing computer-based data. They are characterized as auditing around the computer or auditing with the computer. The appropriate approach or combination of approaches depends on the nature of the related system.

¹Appropriate steps should also be taken to ensure that the information contained in source records is factual. If this is not done, related limitations on the data should be fully disclosed in the report.


Auditing Around the Computer

Auditing around the computer assumes that techniques and procedures the computer uses to process data need not be considered as long as there is a visible audit trail and/or the result can be manually verified. This approach bypasses the computer in either of two ways.

In the first way, computer output is compared to or confirmed by an independent source. This approach confirms computer-processed data with third parties or compares data with physical counts, inspections, records, files, and reports from other sources. Physical counts and inspections can verify quantity, type, and condition of tangible assets. Reports on government programs and activities issued by outside contractors, universities, audit and privately-funded organizations, and others can contain a useful basis for comparison.

Examples of sources from which confirmation can be obtained include

• banks (cash balances on hand or amounts of loans);

• warehouses (assets stored or volume of transfers);

• training institutions (number of students or dollar volume of contracts);

• common carriers (rates for freight shipments or volume of passengers between selected locations);

• medical facilities (daily rates for patient care or types of outpatient services);

• private business concerns (billings for utility services or wholesale prices of generic drugs); and

• other government agencies (checks cancelled by a U.S. Treasury Department disbursing center or statistics on an agency's use of General Services Administration automobiles).

Staff can also conduct common-sense examinations of printed data output to reveal potential reliability problems. These inspections can establish data reliability when a low to very low level of data testing is required. When a moderate to high level of testing is required, these tests should be supplemented by more extensive procedures. The following questions are examples of common-sense data tests (a brief programming sketch follows the list):

• Are amounts too small (cost per mile to operate a 1-ton truck equals $.004)?

• Are amounts too large (a student loan for $150,000)?

• Are data fields complete (a loan payment amount is blank)?

• Are calculations correct (inventory value is a negative amount)?
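Such questions can also be expressed as simple programmed checks. The sketch below is illustrative only; the field names, threshold values, and record layout are assumptions made for the example, and flags indicate items for follow-up rather than confirmed errors.

    # Illustrative common-sense checks on loan records; thresholds and field names are assumed.
    loans = [
        {"id": "A-1", "loan_amount": 150000.00, "payment_amount": None,  "cost_per_mile": 0.004},
        {"id": "A-2", "loan_amount": 2500.00,   "payment_amount": 120.0, "cost_per_mile": 0.31},
    ]

    def flag_record(rec):
        """Return a list of red flags for one record; flags call for follow-up, not rejection."""
        flags = []
        if rec["loan_amount"] > 20000:         # amount suspiciously large (assumed limit)
            flags.append("loan amount unusually large")
        if rec["payment_amount"] is None:      # required field left blank
            flags.append("payment amount blank")
        if rec["cost_per_mile"] < 0.01:        # amount suspiciously small
            flags.append("cost per mile unusually small")
        return flags

    for rec in loans:
        for flag in flag_record(rec):
            print(rec["id"], "-", flag)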

Although confirmations and comparisons directly test the accuracy of computer output and effectively disclose fictitious data, they may not detect incomplete data input. When data completeness is in doubt, confirmations or comparisons should be supplemented by tracing a sample of source records to computer output.

The second way to bypass the computer in confirming data reliability is to select source transactions, manually duplicate the computer processes, and compare the results with computer output. Examples include

• benefit payments for selected grant recipients,

• loan balances and delinquent amounts,

• resale prices of foreclosed and repossessed properties, and

• salary payments.




Although this approach can test the completeness of computer output as well as the accuracy of computer processing, it does not disclose fictitious data (i.e., data that have been entered into the computer but are not supported by source records). If fictitious data are an issue, tracing data from the computer to source records should be considered.

The usefulness of auditing around the computer diminishes as the number and complexity of computer decisions increase. It may be impractical when sophisticated data processing activities are involved.


Auditing With the Computer

Auditing with the computer means that computer-programmed tests are used, in part, to measure data reliability.

After determining the completeness and accuracy of computer input by manually tracing data to and/or from a sample of source records, this approach uses auditor-developed computer-programmed tests to examine data reasonableness and identify defects that would make data unreliable.

An advantage of auditing with the computer is that it can be used regardless of the computer system's complexity or the number of decisions the computer makes. Auditing with the computer is also fast and accurate, permitting a much larger scope of testing than would be practical with other methods.

The first step in developing computer-programmed tests is to identify what computer information is to be used as evidence and what data elements were used to produce it. Staff should test all data elements that affect the assignment's objective(s).

When an audit-significant data element is derived (i.e., calculated by the computer based on two or more data elements), staff should also test the source data elements. For example, the element "net pay" might be planned for use as evidence to meet an assignment's objective(s). Review of the system's data dictionary shows that a computer program uses three other data elements to calculate net pay: "hourly rate", "hours worked", and "deductions". Errors in any of these data elements would make "net pay" incorrect. Therefore, staff should determine the accuracy of each.
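One way to test a derived element is to recompute it from its source elements and compare the result with the stored value. The sketch below assumes a simple pay record layout and a flat derivation rule; in practice, the rule would come from the system's data dictionary and program documentation.

    # Illustrative recomputation check for a derived element ("net pay"); layout and rule are assumed.
    records = [
        {"id": 1, "hourly_rate": 12.50, "hours_worked": 80, "deductions": 150.00, "net_pay": 850.00},
        {"id": 2, "hourly_rate": 10.00, "hours_worked": 75, "deductions": 100.00, "net_pay": 700.00},
    ]

    def recompute_net_pay(rec):
        """Assumed derivation rule: gross pay (rate x hours) minus deductions."""
        return rec["hourly_rate"] * rec["hours_worked"] - rec["deductions"]

    for rec in records:
        expected = recompute_net_pay(rec)
        if abs(expected - rec["net_pay"]) > 0.01:   # allow for rounding
            print("record", rec["id"], "net pay", rec["net_pay"],
                  "does not match recomputed", expected)

Here the second record would be flagged because the stored net pay does not equal the recomputed amount, prompting a check of the source elements.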

After identifying the relevant data elements, the data dictionary can be examined to define the attributes of each and identify rules which each should meet. If a data element fails these requirements, the computer may exclude it or process it in a way that does not ensure an accurate result. Computer programs frequently have default logic that may cause a missing or defective data element to be erroneously processed.

For example, data to be entered into a computer may identify whether a project is ongoing or completed. If the data element is not entered for a specific record, a computer program prescribes treatment of the missing data. The record could be put in an error file until the missing data is provided, or a programmed assumption could be made about its status (i.e., if the status is blank, then the project is ongoing). If that assumption is incorrect in enough records, that data element will be unreliable.
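Before relying on such a default, staff can measure how often it is actually invoked. A minimal sketch, assuming a blank status field is what triggers the programmed default:

    # Illustrative check: count records whose status is blank before any default rule is applied.
    projects = [
        {"id": "P-1", "status": "completed"},
        {"id": "P-2", "status": ""},          # blank; the program would default this to "ongoing"
        {"id": "P-3", "status": "ongoing"},
    ]

    blank = [p["id"] for p in projects if not p["status"]]
    print(len(blank), "of", len(projects), "records rely on the default assumption:", blank)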

Understanding a data element also makes it possible for staff to develop reasonableness assumptions that can be programmed as common-sense tests; for example, can a student loan recipient be a 12-year-old? Common-sense tests do not establish that a data element is erroneous. They raise red flags for follow-up. Although it is possible for a 12-year-old to be a college student, it is unlikely. (Other examples of common-sense tests appear earlier in this chapter.)




Data attributes should also consider expected relationships among data elements. Although developed independently, a data element may have a reasonable relationship to another data element. For example, some kinds of medical procedures are age- or gender-related. Determining and testing relationships can reveal errors by disclosing irrational or unlikely relationships, such as a hysterectomy on a male patient.

When staff have learned about each of the data elements that affect the information relied on, tests are developed to detect errors. Tests are of two types: those that disclose failures of data elements to meet established requirements and those that disclose illogical relationships. (See appendix I for discussion and examples of these test types.)

After data tests are developed, the computer is programmed to apply them. The programmed data tests must be validated and tested to ensure that errors revealed during the data testing are the result of incorrect data and not the result of invalid test programs.

Data tests can be developed without knowledge of the technical design of the data base, its structure, and layout. This knowledge, however, is needed to program the tests. If assignment staff are unfamiliar with the necessary programming techniques, support is available from their division's design, methodology, and technical assistance group (DMTAG) or region's technical assistance group (TAG).

Whether a microcomputer or a mainframe should be used to process data tests depends on factors such as the size of the data base, the number and complexity of data tests, required processing speed, computer accessibility, and team expertise. If a mainframe is required, staff will almost certainly need to get support from their DMTAG or TAG.




Commonly available retrieval or analysis applications may be used for programming tests. These include products such as Lotus 1-2-3, dBASE, SAS, SPSS, and DYL-280. While some programs have been successfully used in testing data bases of over a million records, staff should take care to ensure that test requirements are properly matched to the application and to the operating environment (micro- versus mainframe computer).


Various Levels of Data Testing

As stated in chapter 2, the level of data testing depends on the reliability risk (based on data use and experience with the data) and staff judgment of the adequacy of system controls. The greater the reliability risk, the more assurance is required to reduce the risk to an acceptable level.

If a low level of data testing is adequate to establish the reliability of computer-processed data, it may be most appropriate to test only those items which, in the auditor's judgment, are most likely to have errors. At this level of testing, reliance for data acceptability rests primarily on staff judgment of system controls. Data tests provide some confirmation that relied-on system controls were operating effectively. A judgmentally sized, randomly selected sample can give this confirmation but will not define the confidence or precision levels achieved by the testing.

If data test results detect no errors or suggest an error rate that is acceptable for the data's planned use, the data could be considered reliable. If, however, the test error rate is high, staff evaluation of system control adequacy (on which reliance was placed) may have been in error. In this case, sample size and scope of testing should be increased or a statistically valid approach used to provide a defensible basis for a decision on data reliability.

If moderate to high data testing is needed, reliance is primarily on data testing rather than on system controls. Sufficient tests should be performed to reasonably assure detection of significant errors. If sampling methods are used, an adequate sample size would be necessary to permit appropriate precision levels to be calculated and support the test results.

A number of statistical approaches are discussed and illustrated in Transfer Paper 6, Using Statistical Sampling. The statistical approach depends on whether GAO needs only to determine whether the error rate is acceptable or whether it is necessary to quantify the error rate.
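As one illustration of quantifying an error rate from a random sample, the sketch below computes the sample error rate and an approximate 95-percent confidence interval using the normal approximation. The sample size and error count are assumed values, and this sketch stands in for, rather than replaces, the methods in Transfer Paper 6.

    # Illustrative estimate of an error rate from a simple random sample (normal approximation).
    import math

    sample_size = 200      # records tested (assumed)
    errors_found = 14      # records that failed a data test (assumed)

    p = errors_found / sample_size                             # sample error rate
    half_width = 1.96 * math.sqrt(p * (1 - p) / sample_size)   # 95% confidence half-width

    print(f"error rate {p:.1%}, 95% interval "
          f"{max(0.0, p - half_width):.1%} to {p + half_width:.1%}")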


Special Considerations in Computer-Programmed Data Tests

Because of the computer's speed, computer-programmed tests (used in auditing with the computer to detect data defects and inconsistent relationships) are usually not sampled but run against all records for each data element tested. Only when using very large data bases would it be necessary to limit testing to a sample of records. The testing level (high, moderate, or low) normally relates to the number of tests applied rather than to the number of data elements or records tested. If low-level data testing is adequate, it might be limited to those tests that disclose failures of data elements to meet established requirements. Moderate to high testing levels would contain a wider variety of tests, including increased use of relationship tests. See appendix I for a discussion of various data tests.

                    In using results of computer-processed data tests,
                    staff should consider whether the same record or
                    data element failed more than one data test. If so,
                    the error rate may need to be adjusted. The fol-
                    lowing examples illustrate this situation:

                    Assume that for the data element, “loan balance”,
                    staff conducted two tests on a universe of 100
                    records. A range test counted any loan balance
                    below $0 or greater than $10,000 as an error. A der-
                    ivation test defined an error as any loan balance
Chapter   3
Data Testing




which did not equal “original loan amount” plus
“interest charges” minus “loan payments to date”.
Results were as follows:
Test                                          Data errors
Range
Derivation                                                  :

Since all errors relate to one test, the error rate is 5
percent.

But assume the following    results for the same tests.
Test                                          Data errors
Range                                                   5
Derivation                                              5

In the second example, each test identified failures.
Based on these results, from 5 to 10 percent of the
data are defective. The actual error rate depends on
                     whether a record failed one or both tests. If a 10-
percent rate would cause the data to be unreliable,
staff would need to make additional tests to deter-
mine if the same records were defective in the
various tests.
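
                    A minimal, hypothetical Python sketch of this
                    adjustment follows; the universe size and the record
                    numbers that failed each test are invented for
                    illustration.

# Hypothetical sketch: base the overall error rate on the distinct
# records that failed, not on the sum of individual test failures.

universe_size = 100
failed_by_test = {
    "range": {3, 17, 42, 58, 90},       # record numbers failing the range test
    "derivation": {3, 17, 42, 58, 90},  # the same records -> 5 percent overall
}

records_in_error = set().union(*failed_by_test.values())
error_rate = len(records_in_error) / universe_size
print(f"{len(records_in_error)} distinct defective records "
      f"({error_rate:.0%} of the universe)")

# Had the derivation test flagged five different records instead, the
# same failure counts would have produced a 10 percent rate.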

Data tests that detect inconsistent relationships
between data elements establish the likelihood of
error, but do not identify which data element is
defective. Staff must run additional tests or per-
form other audit work to determine which data ele-
ment to rely on. Similarly, the failure of a data
element to meet an expected attribute signals a
potential error. Additional follow-up (e.g., discus-
sions with knowledgeable agency personnel) may
identify acceptable explanations. Only after defec-
tive data are confirmed can the error rate be cor-
rectly calculated.

 Whenever an error rate is unacceptable, staff may
 consider two actions to make the data usable:
    l   Repair the defective data elements. By doing this,
        the error rate of the corrected data can be recom-
        puted and its acceptability determined.
    l   Exclude the defective data records from the assign-
        ment universe. By doing this, the data used to
        support audit findings, conclusions, or recom-
        mendations include only records not found to be
        defective. This approach is inappropriate, however,
        when the data exclusion would introduce a systemic
        bias in assignment results. (See General Policy
        Manual and Project Manual, chapter 10, "Method-
        ology," for a discussion of systemic and random
        bias.)
Chapter 4
Reporting on Data Reliability


                         This chapter discusses the reporting requirements
                         when using computer-based data to meet the assign-
                         ment’s objective(s). In addition, it suggests sample
                         report language for cases in which the data are

                 l       reliable,
                 l       unreliable but still usable,
                 l       unreliable and not usable, and
                 l       not assessed for reliability.




Computer-Based           GAO reporting standards, including the General
Evidence                 Policy Manual (12.8), require that data sources and
                         the methods used to determine data reliability be
                         stated in the report. When material is included in
                         a report for background or informational     purposes
                         and is insignificant to audit results, staff can nor-
                         mally meet this reporting standard by citing the
                         data source in the report.

                         For computer-processed      data which is critical to the
                         assignment’s objective(s), the report should assure
                         readers that the information relied on is credible
                         and reliable. Specifically, it should
                     .   identify the scope of work done when system         con-
                         trols are relied on to reduce data testing;
                     l   describe the testing of the computer-processed        data,
                         including the tests performed, their purpose,       and
                         the error rates disclosed; and
                     l   present any factors known to limit the data’s       relia-
                         bility and if significant, the sensitivity of the   results
                         to the accuracy of the data.

                         If sampling was used to determine data reliability,
                         the description should include the purpose of the
                         sample; the universe and sample sizes; the basis of
                         the sample size (judgmental or statistical); the type
                         of sample (simple random, stratified, and so on);
                         confidence levels and precision; and errors detected.




                        Staff should include a summary of the above in the
                        objectives, scope, and methodology (OSM) section of
                        the report. Technical details of complex sampling
                        methods and computer-programmed        data tests may
                        appear in the body of the report or in a technical
                        appendix.
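
                        As an illustration only (the guide prescribes no
                        particular computation), the precision figures
                        described above might be derived as in the fol-
                        lowing Python sketch; the universe, sample size,
                        and error count are assumed, and a standard
                        normal approximation is used.

# Hypothetical sketch: precision of an error rate estimated from a
# simple random sample, at the 95 percent confidence level.
import math

universe = 12_000                 # assumed universe size
sample = 400                      # assumed sample size
errors = 14                       # errors detected in the sample

p = errors / sample
fpc = math.sqrt((universe - sample) / (universe - 1))  # finite-population correction
half_width = 1.96 * math.sqrt(p * (1 - p) / sample) * fpc

print(f"Estimated error rate: {p:.1%}, plus or minus {half_width:.1%} "
      f"at the 95 percent confidence level")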

                        If data reliability was not determined or was not
                        determined to the extent normally desired, the
                        product should include a clear statement to that
                        effect as well as a qualified conformity statement.
                         In these cases, statements of negative assurance¹
                        may be useful. Auditors/evaluators      should consider
                        the appropriateness of presenting any conclusions
                        or recommendations    based on the data.

                        The following are examples of report language that
                        can be used in the OSM to meet established
                        reporting standards.


Reliable Data Is        “To achieve the assignment’s objective(s) we exten-
Used                    sively relied on computer-processed data contained
                        in [cite data base used]. We assessed the reliability
                        of this data including relevant general and applica-
                        tion controls and found them to be adequate. We
                        also conducted sufficient tests of the data. Based on
                        these tests and assessments we conclude the data
                        are sufficiently reliable to be used in meeting the
                        assignment’s objective(s).”


Unreliable Data Still   “To achieve the assignment’s objective(s) we exten-
Usable                  sively relied on computer-processed data contained
                        in [cite the data base used]. Our review of system
                        controls and the results of data tests showed an
                        error rate that casts doubt on the data’s validity.

                        ¹Negative assurance is a statement that nothing came to the
                        auditor/evaluator’s attention as a result of specified procedures
                        that caused them to doubt the acceptability of the data. The
                        auditor/evaluator, by using other data and information, came to
                        the conclusion that the data could be relied on to achieve the
                        assignment’s objective(s).




                      However, when these data are viewed in context
                      with other available evidence, we believe the opin-
                      ions, conclusions, and recommendations   in this
                      report are valid.”


Unreliable Data Not   “To achieve the assignment’s objective(s) we exten-
Usable                sively relied on computer-processed data contained
                      in [cite the data base used]. Our review of system
                      controls and the results of data tests showed an
                      error rate that casts doubt on the data’s validity.
                      Since the assignment’s objective(s) require specific
                      statements based on this data and sufficient inde-
                      pendent evidence is not available, we were unable
                      to provide specific projections, conclusions, or
                      recommendations."


Reliability Is Not    “To achieve the assignment’s objective(s) we exten-
Determined            sively relied on computer-processed data contained
                      in [cite the data base used]. We did not establish the
                      reliability of this data because [cite the reason(s)].
                      As a result, we are unable to provide projections,
                      conclusions, or recommendations based on this
                      data.² Except as noted above, GAO's work was
                      conducted in accordance with generally accepted
                      government auditing standards.”

                      If the reliability of critical data is not determined,
                      an exception to generally accepted government auditing
                      standards is necessary. Staff should discuss the cir-
                      cumstances with the Assistant Comptroller General
                      for Planning and Reporting and obtain approval
                      before final processing.



                      ²There may be cases where sufficient other data could be relied
                      on to draw conclusions and recommendations because the issues
                      are broader (i.e., policy issues) and precision of the data is not
                      of paramount importance. In those rare cases, conclusions and
                      recommendations may be appropriate, but full disclosure is
                      needed. Staff should also consider the limitations discussed on
                      page 20, footnote #1.
Chapter 5
Case Study: Guaranteed Student Loans



                   This chapter presents a case example of how to
                   determine the reliability of computer-based evi-
                   dence. It discusses appropriate steps for
               .   assessing the reliability risk,
               .   examining the adequacy of system controls, and
               .   performing data testing at both an extensive and
                   minimal level.


Case Example       The following case illustrates how to apply the
                   requirements, concepts, and principles discussed in
                   this guide to an assignment. The circumstances of
                   this case are hypothetical and are intended to illus-
                   trate the factors affecting the extent of data testing.


Assignment         Assume that GAO has been requested to review the
Objectives         Stafford Student Loan Program and determine if

               l   the Department of Education is paying the correct
                   amount of interest and special allowance (interest
                   subsidy) to lenders,
               l   payments are made to lenders in a timely manner,
                   and
               l   interest payments are being made for defaulted
                   loans.


Background         Under the program, private lenders make loans at
                   lower-than-market     interest rates to qualified stu-
                   dents attending approved educational institutions.
                   The Department of Education pays the interest
                   while the student attends school and for a stipu-
                   lated grace period thereafter. Education also funds
                   special allowance payments during the life of the
                   loan to provide lenders the difference between the
                    loan interest rate and the rate on 90-day Treasury
                    bills, plus 3-1/4 percent. If borrowers default on
                   their loans, Education repays the loan (usually
                   through state agencies) and stops paying interest
                   and special allowances.
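
                   As a purely illustrative Python sketch (the guide
                   includes no calculations, and actual program rules
                   may differ), the special allowance described above
                   might be computed as follows; the rates and balance
                   are invented.

# Hypothetical sketch: special allowance as the difference between
# (90-day Treasury bill rate + 3.25 percent) and the loan interest rate,
# applied to one quarter's average balance.  All figures are invented.

tbill_rate = 0.0750               # assumed 90-day Treasury bill rate
loan_rate = 0.0800                # assumed loan interest rate
avg_quarterly_balance = 2_500.00

allowance_rate = max(0.0, (tbill_rate + 0.0325) - loan_rate)
quarterly_payment = avg_quarterly_balance * allowance_rate / 4

print(f"Special allowance rate: {allowance_rate:.2%} per year")
print(f"Payment for the quarter: ${quarterly_payment:.2f}")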




                              The Department of Education makes interest and
                              special allowance payments directly to lenders
                              based on detailed quarterly billings. Lenders’ bill-
                              ings are entered into Education’s computerized
                              system, which summarizes and authorizes pay-
                              ments to lenders for interest and special allowances.
                              A separate data base maintains information on
                              defaulted loans.


Assignment Approach           Since the computer-based data compiled by Educa-
                              tion contains information   relating to interest pay-
                              ments and defaults, staff have identified it as a key
                              source of evidence to support their objective(s).
                              However, before beginning an analysis of this infor-
                              mation, auditors/evaluators     must assure them-
                              selves that the data are reliable. For example,

                      l Are individual loan amounts correct?
                      . Are interest calculations accurate?
                      l Do all records apply to the time period of our audit/
                        evaluation?
                       l Are dates of loan defaults accurate?
                       l Are lender identification codes correct?

                              Reliability   assessment procedures should include:

                          . determining the importance of the computer-based
                            data in meeting the assignment’s objective(s),
                          l determining what past experience and current
                            knowledge is available about the data and the
                            system which processes them,
                          . reviewing general and application controls to the
                            extent they can be relied on to reduce the level of
                            data testing, and
                          l developing and performing data tests.

                              These efforts are focused on providing reasonable
                              assurance that the data does not contain significant
                              errors which would undermine the credibility of our
                              analyses and conclusions.




Determining                   The first step in meeting the case study objectives is
                              to determine the reliability risk. This includes the
Reliability Risk              risk that Education’s computerized data do not
                              accurately state amounts paid to lenders for
                              interest payments and special allowances and the
                              risk that default data do not accurately reflect the
                              eligibility of loans for continuing interest payments.
                              Reliability risk is determined by considering both
                              the planned use of the data and the existing knowl-
                              edge of the computer system and its data.


Planned Use of Data           In gauging how the planned use of computer-based
                              data affects reliability risk, staff should consider
                              matters such as the following:

                          l   Will the data be important in determining the accu-
                              racy and appropriateness of payments made to
                              lenders? Will the data merely provide background
                              information or provide a context for the assign-
                              ment’s conclusions? Background information nor-
                              mally suggests a very low reliability risk.
                          l   Is the computer-based data the only evidence avail-
                              able regarding payments made to lenders? Is the
                              computer-based data part of a broader body of cor-
                              roborating evidence? Evidence used as sole support
                              suggests a high reliability risk, while the reliability
                              risk of corroborative evidence is moderated by the
                              strength of the other evidence.
                      l       Is the issue of student loan payments and eligibility
                              so sensitive that the accuracy of any data presented
                              (even when used as background) is likely to be chal-
                              lenged? If there is reason to believe that the data’s
                              accuracy will be questioned, regardless of its use in
                               the report, the reliability risk increases.


Knowledge and                 The second component of reliability risk is recent
Experience With               experience or knowledge of the data and related
                              system. Favorable experience and/or knowledge
Data                          can reduce reliability risk, limit the review of
                              system controls, and reduce data testing. Unfavor-
                              able experience and/or knowledge leads to
                      increased doubts and requires greater assurance
                      that data is accurate.

                      In compiling information about the data and its
                      system the auditor/evaluator  should address the
                      following questions:

                  l    Has GAO used this data base to provide supporting
                       evidence in prior assignments? If so, what was our
                       assessment of its reliability at that time?
                  l    Has the Department of Education’s Inspector Gen-
                       eral staff reviewed the related system or assessed
                      the reliability of the data? If so, what recommenda-
                      tions, if any, did they make for improving system
                       controls? Did Education officials take steps to
                      implement these recommendations?         What opinion,
                       if any, did the IG express regarding data reliability?
                  l    What do Education officials and users say about the
                       data’s accuracy? How frequently do they encounter
                       errors with the data? How serious are these
                      problems? Do they rely on the data in performing
                      their duties or do they maintain separate manual
                       records?
                  l   Have lenders, state agencies, or loan recipients
                       reported payment problems or concerns?
                  l    Do corroborating sources of information tend to
                       support or contradict the computer-based data?

                      When evaluated together, the planned use of the
                      data and the current knowledge about it help the
                      auditor/evaluator   identify a level of risk. Lowering
                      that risk to an acceptable level can be accomplished
                      by performing detailed tests of the data. While the
                      need for data testing can never be completely elimi-
                      nated from an assignment, the extent of testing can
                      potentially be reduced by assessing the system of
                      controls.


Understanding         Understanding     and assessing controls is a normal
                      auditing activity. Strong system controls can
System Controls       diminish the reliability risk, thus reducing the
                      amount of data testing needed to determine data
    reliability. In turn, knowledge and experience with
    the data can help direct the review of system con-
    trols to areas where they are most likely to be
    weak.

    System controls must be considered in terms of both
    general and application controls. Work should
    include gaining an understanding of those controls
    and observing that significant controls are being
    followed.

    A review of general controls should include the fol-
    lowing questions:
. Does Education’s      management take an active role in
    decisions affecting ADP functions?
.   Do external auditors and/or the IG routinely con-
    duct reviews of ADP functions? Have Education
    officials implemented all past audit recommenda-
    tions related to ADP operations?
.   Does Education’s organization provide adequate
    separation of duties within the ADP operation?
.   Does Education have standards for documenting
    ADP functions?
.   Do formal procedures exist for requesting,
    approving, testing, and implementing   system
    changes?
.   Are appropriate measures in place to physically
    secure Education’s computer facility and control
    user access to the system and data files?

    A review of application   controls should consider:
. Does Education have formal documentation which
  identifies procedures for data collection, authoriza-
  tion, input, and error handling?
. Does Education’s system perform edit checks on
  data prior to combining them with the existing data
  base? If so, what are those edits?
. Is data which fails to meet input requirements iden-
  tified, corrected, and re-entered to the system in a
  timely manner?
. Are reconciliations performed to insure that all
  source input is accounted for?
. Are system outputs reconciled against inputs to
  account for all data?

                    The amount of time and effort expended in under-
                    standing and assessing system controls is directly
                    related to the potential reduction in detailed data
                    testing. The “cost” of system control tasks should
                    not outweigh the “benefits” of reduced data testing.

                    The strength of system controls falls into a range
                    with the following end points.

                    Strong controls: This judgment assumes missing or
                    ineffective controls (if any) are minor; the overall
                    system could be expected to detect and correct any
                    significant data errors.
                    Weak controls: This judgment assumes that missing
                    or ineffective controls provide an opportunity for
                    significantly incorrect data to be introduced to the
                    data base. Control deficiencies could pervade the
                    entire system or affect only parts of it.


Data Testing        By considering the strength of system controls in
                    relation to the reliability risk, a level of data testing
                    is established. The types of tests are dictated by the
                    nature of the data and the ultimate data analysis to
                    be conducted.


Case 1: Extensive   Assume that auditors/evaluators       have determined
Testing             that Education’s computer-based data is the only
                    existing source of payment data. Since neither GAO
                    nor the IG has done any recent work with this
                    data, the reliability risk is high. The auditors/evalu-
                    ators have further determined that general and
                    application controls are inadequate. In this
                    instance, the results of data testing alone must pro-
                    vide the basis for reliance. Therefore, the number
                    and scope of tests will be extensive.




    After conducting procedures to determine that
    information contained on the lender billing state-
    ments is factual, staff should conduct tests to deter-
    mine the accuracy and completeness with which
    that data was entered into the computer. This
    testing should generally be based on statistically
    valid sample sizes and methods. Specifically, tasks
    would include

l   matching computer-based records against corre-
    sponding source records to measure the data input
    error rate, and
l   matching source records against corresponding
    computer-based records to determine that all rele-
    vant data had been entered into the computer.

    Computer-assisted procedures could then be per-
    formed on all computer-based records to verify that

l billing and payment dates fall within the assign-
  ment’s time frame,
l key data elements are present in all records (e.g.,
  billing date, payment date, payment amount, loan
  balance, and so on),
* there are no negative payment amounts or zero loan
  balances, and
l payment amounts and loan balances fall within
  “reasonable” ranges.

    Further automated tests, illustrated in the sketch
    after this list, could be designed to

l   re-compute lenders’ interest calculations,
l   sort and summarize payments by lender to identify
    duplicate records,
l   match lenders against Education’s list of eligible
    institutions,
l   compare payment dates against billing dates to
    determine that billing dates precede payment dates,
l   compare loan status against default date to insure
    that all defaulted loans contain a default date, and
l   compare loan status against payment amount to
    identify records showing payments on defaulted
    loans.
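
    The following minimal Python sketch (not part of the
    guide) illustrates a few of the automated tests listed
    above, run against assumed billing records; the field
    names and rules are invented for illustration.

# Hypothetical sketch of a few automated tests on billing records.
from collections import Counter
from datetime import date

billings = [
    {"lender_id": "L-001", "billing_date": date(1990, 3, 31),
     "payment_date": date(1990, 4, 15), "loan_status": "default",
     "default_date": None, "payment_amount": 120.00},
    {"lender_id": "L-001", "billing_date": date(1990, 3, 31),
     "payment_date": date(1990, 4, 15), "loan_status": "active",
     "default_date": None, "payment_amount": 80.00},
]

# Billing dates must precede payment dates.
date_errors = [b for b in billings if b["billing_date"] >= b["payment_date"]]

# Defaulted loans must carry a default date and should show no payments.
default_errors = [b for b in billings
                  if b["loan_status"] == "default"
                  and (b["default_date"] is None or b["payment_amount"] > 0)]

# Summarize records by lender and billing date to flag possible duplicates.
per_lender = Counter((b["lender_id"], b["billing_date"]) for b in billings)
possible_duplicates = [key for key, count in per_lender.items() if count > 1]

print(len(date_errors), "date-sequence errors")
print(len(default_errors), "defaulted loans with missing dates or payments")
print(len(possible_duplicates), "repeated lender/billing-date combinations")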




                      In addition, staff should review the automated
                      error file to determine if it includes billings for the
                      period that have not been processed.

                      The failure of a data element or record to pass a
                      reliability test does not prove the data is incorrect.
                      It merely identifies a potential matter for further
                      investigation.

                      The results of these tests and the follow-up investi-
                      gations will provide numeric error rates. Based on
                      the error rate and the seriousness of errors, the
                      auditor/evaluator   will make a judgment about the
                      data’s reliability.


Case 2: Minimal       Although the circumstances of this case do not lend
Testing               themselves to a discussion of minimal data testing,
                      assume that GAO staff used data from the same
                      data base to support report findings within the last
                      6 months. At that time, we concluded that system
                      controls were strong and the data was reliable.
                      Under this scenario, the reliability risk would be
                      low. An extensive system control assessment would
                      not be performed. Reliance would be placed prima-
                      rily on our prior knowledge and experience with the
                      data. However, even at this low risk level, some
                      testing should be performed to update the results of
                      previous work and detect any conspicuous errors.

                      Our understanding of the system controls should be
                      updated to determine

                  l what, if any, modifications have been made to the
                    system,
                  . that critical controls are still being adhered to, and
                   l that any previous recommendations relating to
                    system controls have been implemented.

                      Tracing computer records to source records and
                      vice versa to show completeness and accuracy of
                      data input could be accomplished through use of
                      small (judgmental) randomly selected samples.




    Computer-assisted   procedures, aimed at locating
    large errors, would verify that

. all billing and payment dates fall within the assign-
  ment’s time frame,
l key data elements are present in all records (e.g.,
  billing date, payment date, payment amount, loan
  balance, and so on),
l there are no negative payment amounts or zero loan
  balances, and
l all payment amounts and loan balances fall within
  “reasonable” ranges.

    If these basic tests produced significant    error rates,
    the scope of testing would be expanded.      Otherwise,
    based on the updating of prior reliability    work,
    auditors/evaluators    would conclude the    data is
    reliable.
Appendix I
Examples of Data Tests


                  This appendix describes some data tests which
                  should be considered in developing an overall
                  testing plan. The number and combination of tests
                  performed for a given assignment will be influenced
                  by the required level of testing, the complexity and
                  size of the data base, and established time frames.


Unconditional     The following are examples of data tests that dis-
                  close failures of data elements to meet established
Data Tests        requirements:
                . Derivation    tests identify data errors by using for-
                  mulas or tables to recalculate computer-generated
                  data elements.
                . Mode tests disclose data that are defective because
                  they do not comply with the numeric or alpha
                  requirement for the data element.
                . Pattern tests disclose data errors evidenced by
                  inconsistencies of a specific pattern of digits and
                  characters. Calendar date checks are a pattern test
                  that has considerable significance for some data
                  elements.
                . Presence/absence tests disclose data that are defec-
                  tive because they lack required information or
                  include information when they should not.
                 . Sign tests detect data defects that result from an
                   inappropriate positive or negative value.
                 . Value/range/limit tests detect data that are defec-
                   tive because they are not within a required set of
                   specific values; a set of values that fall into a given
                   range; or a set of values encoded in a list, table, or
                   file.

                   It is generally useful to test audit-significant data
                   elements against each of the requirements defined
                   for them. (Consult the data dictionary.)
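
                   The following minimal Python sketch (not part of
                   this guide) shows how several of these unconditional
                   tests might be expressed; the field names, formats,
                   and limits are assumed for illustration.

# Hypothetical sketch of several unconditional tests applied to one record.
import re

record = {"loan_id": "GL-044812", "loan_balance": "1250.75",
          "default_date": "19900415", "status": "D"}

errors = []

# Mode test: the balance must be numeric.
if not re.fullmatch(r"-?\d+(\.\d+)?", record["loan_balance"]):
    errors.append("loan_balance is not numeric")

# Pattern test: dates must follow the assumed YYYYMMDD layout.
if not re.fullmatch(r"\d{8}", record["default_date"]):
    errors.append("default_date does not match YYYYMMDD")

# Presence test: a defaulted loan (status "D") must carry a default date.
if record["status"] == "D" and not record["default_date"]:
    errors.append("default_date missing for defaulted loan")

# Sign and value/range tests: the balance must be non-negative and within
# the assumed limit (only meaningful when the mode test passes).
if re.fullmatch(r"-?\d+(\.\d+)?", record["loan_balance"]):
    balance = float(record["loan_balance"])
    if balance < 0 or balance > 10_000:
        errors.append("loan_balance outside the permitted range")

print(errors or "no unconditional test failures")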




Conditional Data           The following are examples of tests that compare
                           two data elements that have a logical relationship:
Tests
                   l If college graduation date is given, the type of
                     degree must be identified.
                   . If loan date is between July 1, 1989, and September
                     30, 1989, the interest rate must be 12.5 percent.
                   l Loan approval date must be the same as or later
                     than the loan application date.
                   l If order quantity is greater than 5,000, the discount
                     rate must be 40 percent.
                   l If payments to an individual under a given entitle-
                     ment program exceed $10,000 in fiscal year 1988,
                     the eligibility code must be “C.”
                   l The number of program graduates must equal the
                     number enrolled minus program dropouts.

                           These tests are not limited to comparisons of two
                           data elements in the information system’s data
                           base. They can include data rules that compare par-
                           ticular data elements with program or legislative
                           criteria or with information   from another data
                           system.
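
                           As a minimal, hypothetical Python sketch (not part
                           of this guide), two of the conditional rules listed
                           above might be expressed as follows; the record
                           layout is assumed.

# Hypothetical sketch of two conditional data rules from the list above.
from datetime import date

def rate_rule(rec):
    # Loans made July 1 through September 30, 1989 must carry 12.5 percent.
    if date(1989, 7, 1) <= rec["loan_date"] <= date(1989, 9, 30):
        return rec["interest_rate"] == 12.5
    return True

def approval_rule(rec):
    # The approval date may not precede the application date.
    return rec["approval_date"] >= rec["application_date"]

record = {"loan_date": date(1989, 8, 15), "interest_rate": 12.5,
          "approval_date": date(1989, 8, 20), "application_date": date(1989, 8, 1)}

for rule in (rate_rule, approval_rule):
    print(rule.__name__, "passed" if rule(record) else "FAILED")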

                           In developing data rules, staff should consider
                           whether reverse relationships exist among data ele-
                           ments. When information systems are developed,
                           data rules built into the system establish require-
                           ments for data elements and for relationships
                           among them. At times, reverse relationships are not
                           considered, and reversing data rules is not part of
                           system logic. In those cases, the likelihood of data
                           errors is increased.

                           Well-thought-out    reverse rule tests can effectively
                           disclose data inconsistencies (overlooked in systems
                           design) that have contributed to data base contami-
                           nation over time. A data rule could, for example,
                           test the requirement that if status is deceased, the
                           date of death must be present and valid. A reverse
                           data rule could reasonably test that if a date of
                           death is present and valid, the status must be
                           deceased.




Staff must exercise care, however, because seem-
ingly reasonable reverse relationships do not
always exist. For example, the rule, “if status is eli-
gible, then annual income must be less than
$10,000,” could be tested. But eligibility restrictions
may involve factors other than income, for
example, age. If that is the case, the reverse data
rule-“if   annual income is less than $10,000, then
status must be eligible”-could    not be used.
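
A minimal, hypothetical Python sketch of a forward rule and its
reverse, using the deceased/date-of-death example above, follows;
the field names are assumed.

# Hypothetical sketch: a forward data rule and its reverse.

def forward_rule(rec):
    # If status is deceased, a date of death must be present.
    return rec["status"] != "deceased" or bool(rec["date_of_death"])

def reverse_rule(rec):
    # If a date of death is present, status must be deceased.
    return not rec["date_of_death"] or rec["status"] == "deceased"

suspect = {"status": "eligible", "date_of_death": "19890310"}
print("forward rule:", "passed" if forward_rule(suspect) else "FAILED")
print("reverse rule:", "passed" if reverse_rule(suspect) else "FAILED")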
Appendix II
Special Considerations in Understanding
Computer System Controls

                   This appendix presents a sample of possible ques-
                   tions relating to general and application controls.
                   They are intended to help relate the internal control
                   approaches generally followed in performance
                    audits to the computer environment.¹


General Controls   General controls apply to all computer processing
                   carried out at a facility and are independent of spe-
                   cific applications. They relate to organization;
                   system design, development, and modification; and
                   security.


Organization       Does top level management          take an active role in
                   ADP functions?

                    Does the ADP function receive continuing audit
                   coverage?

                    Is there evidence of effective actions to follow up
                   on past audit recommendations?

                   Is there adequate separation of duties within the
                   ADP operation? The following functions are usually
                   performed by a different individual or group:

                   system analysis,
                   application programming,
                   acceptance testing,
                   program change control,
                   data control,
                   source transaction origination,
                   system software maintenance,
                   computer files maintenance, and
                   computer equipment operation.




                    ¹Further guidance is available in GAO's Evaluating Internal Con-
                   trols in Computer-Based Systems, June 1981 (under revision).




System Design,         Controls in this category are intended to insure that
Development, and       systems meet user needs, are developed economi-
                       cally, are thoroughly documented and tested, and
Modification           contain appropriate internal controls. Review tasks
                       might include the following questions.

                       Does the agency have a formal approach for system
                       development?

                       Are users involved          in the development      of system
                       requirements?

                       Do standards exist for documenting               different   ADP
                       functions?

                       Is the system documentation           current and does it
                       include:

                   l functional requirements documents,
                   l data collection requirements,
                   l design characteristics of the systems and compo-
                     nent subsystems,
                   l a user manual,
                   . a system operating manual,
                   l the strategy for testing the computer-based system
                     including test procedures and evaluation criteria,
                     and
                   l test analyses reports documenting test results and
                     findings?

                       Are requests for modifications to existing programs
                       documented and approved by appropriate manage-
                       ment levels?


Security               These controls should provide assurances that com-
                       puters and the data they contain are properly pro-
                       tected against theft, loss, unauthorized access, and
                       natural disaster. Reviews might consider:

                       Is a periodic risk analysis performed             and
                       documented?




              Have responsibilities             for computer security been
              formally assigned?

              Is access to the computer room controlled through
               use of some physical device (e.g., locked door,
               security badges)?

              Are two persons present in the computer room at all
              times?

              Is the responsibility for storing magnetic data
              clearly documented?

              Does the agency have an emergency disaster
              recovery plan?

              Is the disaster recovery plan periodically           tested?

              Is computer software used to control access to the
              computer system by identifying and verifying
              people who try to gain access?


Application   Controls which are incorporated directly into indi-
              vidual applications are intended to insure accurate
Controls      and reliable processing. They address the three
              major operations of data input, data processing, and
              data output.


Data Input    Controls in this category are designed to insure that
              data is converted to an automated form and entered
              into the application in an accurate, complete, and
              timely manner. Review tasks might address the fol-
              lowing questions.

              Do documented procedures exist for entering data
              into the application?

              Are controls in place which permit the number of
              records input to the application to be reconciled
              against the number presented for entry?




                  Do all source records contain some indication    of
                  authorization (either physical or electronic)?

                  Are security measures in place to limit access to
                  input terminals and validate user sign-on?

                  Is data validation and editing performed    on all data
                  fields before entry into the system?

                  Are uses of methods to override or bypass data val-
                  idation and editing procedures recorded and ana-
                  lyzed for appropriateness and correctness by
                  supervisory personnel?

                  Do documented procedures exist that explain the
                  process of identifying, correcting, and reprocessing
                  data rejected by the application?

                  Is all data that does not meet edit requirements
                  rejected from further processing and written to an
                  automated suspense file?

                  Is the automated suspense file used to control
                  follow-up, correction, and reentry of rejected data?

                  Is the automated suspense file regularly analyzed to
                  determine the rate of data input error and the
                  status of uncorrected records?

                  Are corrective actions taken when error rates
                  become too high?

                  Are counts of rejected items produced and recon-
                  ciled with accepted records to account for all input?


Data Processing   Processing controls are designed to insure that data
                  is handled by the computer in an accurate, com-
                  plete, and timely manner. Review tasks might
                  include the following questions.




              Do documented procedures exist to explain the
              methods for proper data processing of each applica-
              tion program?

              Does a history log record events performed by the
              computer and its operators during application
              processing?

              Are application programs secured against direct
              input from operator consoles?

              Do on-line systems protect against concurrent       file
              updates?

              Are controls in place to prevent operators from cir-
              cumventing file checking routines?

              Are file completion checks performed to make sure
              that application files have been completely
              processed?

              Do processing controls make sure that output
              counts from the system equal input counts to the
              system?

              Is relationship editing performed between input
              transactions and master files to check for appropri-
              ateness and correctness before updating?


Data Output   Output    controls are used to insure the integrity of
              system    output and the correct and timely distribu-
              tion of   outputs. Review tasks could address the fol-
              lowing    questions.

              Do documented procedures exist that explain the
              procedures for balancing, reconciling, and distrib-
              uting output products?

              Are users questioned periodically    to determine    their
              continued need for the product?




Is each output product labelled to identify the
product name, recipient’s name, and time and date
of production?

Do documented procedures exist that explain
methods for reporting, correcting, and reprocessing
output products with errors?

Are input record counts and control totals recon-
ciled against output record counts and control totals
to insure that no data was lost or added during
processing?

Are system outputs reviewed for completeness and
accuracy before release to users? Does this review
include reconciling record counts and control totals?
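
A minimal, hypothetical Python sketch of the reconciliation described
above follows; the run-control figures are invented.

# Hypothetical sketch: reconcile input record counts and control totals
# against output counts and totals before output is released.

input_count, input_total = 1_204, 3_487_912.55
output_count, output_total = 1_204, 3_487_912.55

if (input_count, input_total) == (output_count, output_total):
    print("Counts and control totals reconcile; no data lost or added.")
else:
    print("Reconciliation failure: investigate before releasing output.")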

Are source documents retained and stored in a log-
ical sequence for easy retrieval?
Ordering   Information

The first five copies of each GAO report are
free. Additional   copies are $2 each. Orders
should be sent to the following address, accom-
panied by a check or money order made out to
the Superintendent     of Documents, when neces-
sary. Orders for 100 or more copies to be mailed
to a single address are discounted 25 percent.

U.S. General Accounting Office
P.O. Box 6015
Gaithersburg, MD 20877

Orders may also be placed by calling   (202) 275-
6241.