NLA Discussion: misfits in testing
MSchwarz at edc.org
Tue Nov 23 13:44:44 EST 1999
I hope you, Tom, continue to talk in this intelligent way about these
accountability issues. The testing and accountability mania keeps getting
disconnected from what these instruments can actually measure.
Marian Lapsley Schwarz, ALMA
Subject: NLA Discussion: misfits in testing
From: nla at europe.std.com at internet
Date: 11/18/99 7:33 PM
David: Following is the third of a series of Research Notes on
accountability and testing in adult literacy education that may be of
interest to NLA list members.
Research Note 11/18/99
Accountability in Adult Literacy Education III: Misfits Between
Identifying Adult Literacy Problems Nationally and Fixing Adult
Literacy Problems Locally
Thomas G. Sticht
Applied Behavioral & Cognitive Sciences, Inc.
The Workforce Investment Act of 1998, Title II: The Adult Education and
Family Literacy Act requires "core indicators" of performance by
federally funded literacy programs that can "...show the progress of the
eligible agency toward continuously improving in performance."
For one of its contributions to "continuously improving" adult literacy
education, the federal government has taken on the job of identifying
the scale of need for such education. The National Center for Education
Statistics, in cooperation with the Office of Vocational and Adult
Education (OVAE), Division of Adult Education and Literacy (DAEL)
designed and conducted door-to-door testing of adults' literacy skills
using the 1992 National Adult Literacy Survey (NALS). The latter was
later modified and used in the 1995 International Adult Literacy
Survey (IALS).
To assess adults' literacy in 1992, the NALS/IALS used "real world" or
"functional literacy" tasks such as filling out a bank deposit slip.
Also in 1992, Richard Venezky prepared a policy paper that noted the
differences between the "real world" tasks used in the NALS and the
types of basic skills (e.g., word recognition, vocabulary knowledge,
reading comprehension using literal and inferential meaning) that
local programs generally teach as "literacy" (Venezky, 1992). He
questioned the validity of the NALS/IALS-type "real world" tasks for
bridging from the representation of adult literacy at the national level
to providing useful information for teaching and learning literacy at
the local level in the thousands of adult literacy programs throughout
the nation.
This raises the question of just what the NALS was supposed to provide
and how well it provided it. The report of the design of the
National Adult Literacy Survey provides a list of five informational
products that the Congress wanted the National Adult Literacy Survey to
provide. Two involved factors such as literacy by demographic (gender,
race, etc.) and occupation (laborers, managers, etc.) and are not
considered here. Three were more central to literacy issues. Each of
these informational products is listed below, followed by a comment on
what was actually provided for it.
Product #1. Describe the levels of literacy demonstrated by the total
adult population as well as by adults comprising various subgroups,
including those targeted as "at risk."
Comment: The National Adult Literacy Survey developed three groups of
tasks called prose, document and quantitative literacy (PDQ),
administered the tasks to samples of adults, and used the tasks to scale
both the adults' literacy proficiencies on each of the three scales and
the difficulty levels of the tasks using Item Response Theory. The
difficulty level of each task was defined as the level of literacy
needed to have "... an 80 percent probability of correct response."
(Kirsch, Jungeblut, Jenkins, & Kolstad, 1993, p. 71). Next, subjects
were assigned to one of five literacy levels based on their proficiency
scores.
How well do these procedures characterize the literacy skills of adults?
First, the decision to scale the adults' literacy
proficiency using a probability of .80 of being able to perform a given
task, when the probability could have been set at .70, .60, or any other
value, was arbitrary. Kolstad, the Project Director for the NALS at
NCES has recently noted that the probability value that makes the fewest
classification errors, for example, saying that a person could not
perform a given task when in fact the person could perform the task, was
.50. This drops the percentage of adults assigned to the lowest level by
over half. So at present, the "real" number of adults "at
risk" for low literacy is not known.
Product #2. Provide an increased understanding of the skills and
knowledge associated with functioning in a technological society.
Comment: Probably the most important question that the NALS researchers
were asked to report on was, "Are the literacy skills of America's
adults adequate ... to ensure individual opportunities for all adults,
to increase worker productivity, or to strengthen America's
competitiveness around the world?" The NALS report answered,
"Because it is impossible to say precisely what literacy
skills are essential for individuals to succeed in this or any other
society, the results of the National Adult Literacy Survey provide no
firm answers to such questions" (Kirsch, Jungeblut, Jenkins, & Kolstad,
1993, p. xviii).
Product #3. Interpret the findings related to information-processing
skills and strategies in a way that can inform curriculum decisions
pertaining to the education and training of adults.
Consistent with Venezky's (1992) concerns, Congress wanted information
regarding instructional "remedies" that might be taken to improve
adults' literacy skills. The developers of the NALS suggested that
adult basic skills programs should be geared to improving adults' skills
in prose, document and quantitative (PDQ) literacy (Mosenthal & Kirsch,
1994). Indeed, researchers at the Educational Testing Service worked on
an interactive video, computer-based instructional series that would
teach document literacy skills. A small pilot study with a group of some
10-12 adult basic skills students indicated that, while students made
improvements in document literacy, they made gains three to four times
as large on prose and quantitative literacy tests as on the document
literacy tests. This led the instructor who administered the pilot
course to observe that, "The gains were interesting considering the PDQ
curriculum did not include instruction in these skills."
Reder (1994) has indicated that the three scales correlate above +.90
(a shared variance of some 80 percent), suggesting that they draw upon the same
underlying cognitive system with its knowledge base and working memory
processes. For this reason, it is to be expected that many language and
cognitive processes developed in one domain may be available to other domains
because they all draw on the same cognitive system. The results of the
pilot instructional program would seem consistent with this theoretical
point of view (which was not part of any view put forth by the NALS/IALS
It seems to me that putting some resources into the formulation of a
theory of literacy that is both teacher and research based, and that
could account for the large overlap among the three NALS/IALS scales
would be a valuable course of action. It might reduce the need for three
separate scales and permit a more cost-effective approach to a national
assessment of adult literacy skills. It might also provide a bridge
from the national assessment to local adult literacy education programs.
References: Citations are given in: Sticht, T. (1999, April). Using
Telephone and Mail Surveys as a Supplement or Alternative to
Door-to-Door Surveys in the Assessment of Adult Literacy. Washington,
DC: U. S. Department of Education, National Center for Education
Statistics, Education Statistics Service Institute (ESSI).
Venezky, R. L. (1992, May). Matching literacy testing with social policy:
what are the alternatives? Policy Brief Document No. PB92-1. National
Center on Adult Literacy, University of Pennsylvania, Philadelphia, PA.