Examination Process Requirements in the ISO/IEC 17024:2012 Standard
The examination process requirements are
described in clause 9.3.1 of ISO/IEC 17024:2012. This
standard states: “Examinations shall be designed to
assess competence based on, and consistent with,
the scheme, by written, oral, practical, observational,
or other reliable and objective means. The design
of examination requirements shall ensure the
comparability of results of each single examination,
both in content and difficulty, including the validity
of fail/pass decisions.”
Additional information is also provided in clause
9.3.5 of ISO/IEC 17024:2012, which states:
“Appropriate methodology and procedures (e.g.,
collecting and maintaining statistical data) shall
be documented and implemented in order to
reaffirm, at justified defined intervals, the fairness,
validity, reliability, and general performance of each
examination, and that all identified deficiencies are
corrected.”
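Clause 9.3.5 leaves the choice of methodology to the certification body. As one illustration of the kind of statistical data it refers to, the short Python sketch below computes two indicators commonly collected for dichotomously scored exams: the difficulty (p-value) of each item and the KR-20 internal-consistency reliability of the whole examination. The response matrix and all values are hypothetical and are not drawn from the standard.

# Illustrative sketch only: basic item statistics from dichotomous (0/1) exam responses.
# The response data are hypothetical; ISO/IEC 17024 does not prescribe these formulas.

def item_difficulty(responses):
    """Proportion of candidates answering each item correctly (the item p-value)."""
    n_candidates = len(responses)
    n_items = len(responses[0])
    return [sum(row[i] for row in responses) / n_candidates for i in range(n_items)]

def kr20(responses):
    """Kuder-Richardson Formula 20: internal-consistency reliability for 0/1 items."""
    k = len(responses[0])
    n = len(responses)
    p = item_difficulty(responses)
    totals = [sum(row) for row in responses]
    mean_total = sum(totals) / n
    var_total = sum((t - mean_total) ** 2 for t in totals) / (n - 1)
    sum_pq = sum(pi * (1 - pi) for pi in p)
    return (k / (k - 1)) * (1 - sum_pq / var_total)

# Hypothetical responses: 6 candidates x 5 items, 1 = correct, 0 = incorrect.
responses = [
    [1, 1, 0, 1, 1],
    [1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0],
    [1, 1, 1, 1, 0],
    [1, 0, 1, 1, 1],
]
print("Item difficulty (p-values):", item_difficulty(responses))
print("KR-20 reliability:", round(kr20(responses), 3))

In practice, a certification body would also track item discrimination, pass rates, and comparability across exam forms, and would act on any deficiencies the data reveal, as the clause requires.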
The Function of an Exam (Assessment) in the
Personnel Certification Process
A test or examination (informally, an exam or evaluation) is an assessment intended to measure a test taker’s knowledge, skill, aptitude, or classification in a given topic. The goal of
the exam is to determine if an individual has
sufficient knowledge, skills, and abilities (KSAs)
to be professionally competent at an entry-level
position in the specified field. An exam may be
administered verbally, on paper, on a computer, or
in a predetermined area that requires a test taker to
demonstrate or perform a set of skills.
There is no general consensus or invariable standard for test formats and difficulty. Often, the format and difficulty of the test depend on the requirements of the accreditation body or industry association. Standardized tests are usually used by personnel certification bodies to determine whether a test taker is allowed to practice a profession, use a specific job title, or claim competency in a specific set of skills. This is a direct method of assessing knowledge, skills, abilities, and personal behaviors. (Note: A personnel certification exam must be designed as a criterion-referenced standardized test, either alone or in combination with a criterion-referenced, performance-based assessment.)
The assessment types that can be used in personnel
certification programs are as follows:
1. Criterion-referenced tests are designed to measure a candidate’s performance against a fixed set of criteria, industry standards, or certification scheme requirements, based on a construct of “minimal acceptable competency.” It is possible for all test takers to pass, just as it is possible for all test takers to fail. A criterion-referenced test uses questions that will be answered correctly by candidates who are competent in the specific subject (a minimal scoring sketch follows this list).
2. Standardized tests are administered and scored in a consistent, or “standard,” manner: the questions, the conditions for administration, the scoring procedures, and the interpretations are the same for everyone. Any test that is given in the same manner to all test takers, and graded in the same manner for everyone, is a standardized test. This assessment tool may be formatted as a written test, an oral test, or a practical skills performance test, and the questions can be simple or complex. Standardized tests are designed to permit reliable comparison of outcomes across all test takers, because everyone is taking a test designed to assess the same competencies. Criterion-referenced scoring is used because it is concerned solely with whether or not a particular candidate’s answer is correct and complete.
3. Performance-based assessments are used to evaluate objective data about a person’s knowledge, skills, and attitudes; the data are collected from an actual or simulated application site.
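As a minimal illustration of criterion-referenced scoring (referenced in item 1 above), the Python sketch below makes the pass/fail decision by comparing each candidate’s score to a fixed cut score rather than to other candidates’ results. The 70% cut score and the candidate scores are assumed values for illustration only; a real program would derive its cut score from a documented standard-setting process.

# Illustrative sketch only: criterion-referenced pass/fail decisions against a fixed cut score.
# The 70% cut score and the candidate scores are hypothetical assumptions.

CUT_SCORE = 0.70  # assumed "minimal acceptable competency" threshold set by the scheme

def pass_fail(raw_score, total_items, cut_score=CUT_SCORE):
    """Return 'pass' if the candidate meets or exceeds the fixed criterion."""
    return "pass" if raw_score / total_items >= cut_score else "fail"

# Every candidate is judged against the same criterion, so all may pass or all may fail.
candidates = {"A": 84, "B": 69, "C": 92}  # raw scores out of 100 items
for name, score in candidates.items():
    print(name, pass_fail(score, total_items=100))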
Fairness
The fairness of an exam refers to its freedom from
any kind of bias. The exam should be appropriate for
all qualified examinees, without regard for factors
that are irrelevant to professional competency such
as race, religion, gender, or age. The test should not
create a disadvantage for any examinee, or group of
examinees, on any basis other than the examinee’s
lack of the knowledge or skills the test is intended to
measure.
Item writers should address the goal of fairness as
they undertake the task of writing items. In addition,
the items should be reviewed for potential fairness
problems during the item-review phase. Any items
identified as displaying potential bias or lack of
fairness should be revised or dropped from further
consideration.
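The item review described above is a judgmental, human step. Some certification programs also supplement it with a simple statistical screen, comparing how the same item performs for different candidate groups, before fuller differential item functioning (DIF) analysis or expert review. Neither the article nor the standard prescribes this; the Python sketch below uses hypothetical group data and an arbitrary flagging threshold purely to show the idea.

# Illustrative sketch only: flag items whose correct-answer rate differs sharply between
# two candidate groups, so that human reviewers can examine them for potential bias.
# The group data and the 0.15 threshold are hypothetical assumptions.

FLAG_THRESHOLD = 0.15  # arbitrary p-value gap that triggers a closer review

def p_values(responses):
    """Proportion correct per item for one group of candidates (0/1 responses)."""
    n = len(responses)
    return [sum(row[i] for row in responses) / n for i in range(len(responses[0]))]

def flag_items(group_a, group_b, threshold=FLAG_THRESHOLD):
    """Return indices of items whose difficulty differs between groups by >= threshold."""
    pa, pb = p_values(group_a), p_values(group_b)
    return [i for i, (a, b) in enumerate(zip(pa, pb)) if abs(a - b) >= threshold]

# Hypothetical responses: 4 candidates per group, 4 items each.
group_a = [[1, 1, 0, 1], [1, 0, 1, 1], [1, 1, 1, 1], [1, 1, 0, 1]]
group_b = [[1, 0, 0, 1], [1, 0, 1, 1], [0, 0, 1, 1], [1, 0, 0, 1]]
print("Items flagged for fairness review:", flag_items(group_a, group_b))

Such a raw comparison does not control for differences in overall ability between the groups; established DIF methods (for example, Mantel-Haenszel) do, and a flagged item still requires expert judgment before it is revised or dropped.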
[Figure: Exam Validation Process Flow Chart]
Basic Steps in the Exam Validation Process
1. Job Analysis: Conducting a job analysis is an
essential first step in establishing the content
validity of certification exams. Job analysis is
the foundation for defining the “certification
scheme” (ISO/IEC 17024, Section 8). A job
analysis will define the important elements
of professional competency through a series
of discrete “job tasks” and the associated
KSAs required to perform these tasks. Metrics
used for ranking the importance of job tasks
should consider their “relevance” (relation
to professional competency), “frequency”
(how often these are done), and “criticality”
(significance to professional success and to the
protection of public health, safety, and welfare).
In this process, job tasks should be eliminated
from consideration in an examination when the
KSA is adequately assessed by governmental
licensing agencies (such as driving skills), or when no valid means of assessing competency
in the task is identified. The rationale for
eliminating tasks from consideration must be
documented. Job analysis information may be
gathered by directly observing people currently
in the job, interviewing experienced supervisors
and job incumbents, and through questionnaires,
personnel and equipment records, and work
manuals. Workshops are held to identify specific
job tasks and capabilities required for successful
job performance. During these workshops,
subject matter experts verify that the task
statements developed are technically correct,
unambiguous, and accurately reflect the job.
Identification of capabilities must be done on a
task-by-task basis, so that a link is established
between each task statement and its requisite
capability. Job analysis information is central in
deciding what to test for and which tests to use.
2. Review and Ranking of Job Tasks: Ranking the importance of job tasks may be accomplished through surveys or through structured focus-group interviews of a representative panel of competent practitioners. One common approach is the Delphi research method, which is used to build consensus and document conclusions. When surveys are used, they should be distributed to a representative group of practitioners (both highly experienced and entry-level) and affected parties (the employers of certified persons). The job analysis must be reviewed periodically at defined intervals. If the certification body is not the owner of the certification scheme, it must ensure that the scheme owner reviews the job analysis.
3. Exam Specification: Ratings are used to identify
the number of questions to appear on tests
for each subject area. The specification (often
called a “test blueprint”) must clearly link the
examination to the job analysis (both tasks and
associated KSAs).
4. Validate Existing Questions: Existing questions
are reviewed by subject matter experts for
relevance, accuracy, and style.
5. Write New Questions: New exam questions are developed according to the job analysis.
6. Validate New Questions: All new questions
must be reviewed by subject matter experts for
relevance, accuracy, and style.
7. Pilot Test Questions: Pilot testing with volunteer test takers allows each question, and the results of the test as a whole, to be reviewed statistically.
8. Develop Certification Exam (Test Blueprint):
Examination blueprints are compiled from