The Language Assessment Battery (LAB) is used by numerous
school districts to determine both native and English language reading
skills of English-language learners. A number of the public school teachers
whom I instruct have shared their concerns about the test. They report
that many of their Spanish-dominant students have a great deal of
difficulty with the LAB in both Spanish and English, and that over the
designated three years that students are given to exit the bilingual
program, few score above the fortieth percentile in English. (#1)
After analyzing and discussing the LAB, many of my bilingual
and ESL teachers concluded that the test has some
general problems; namely, that its cloze format (#2) and much of its
content are culturally biased. This paper discusses the cultural biases
of the reading portion of the LAB to help explain why so many limited
English proficient (LEP) students have had trouble with it.
First, a brief review of the development of reading
comprehension instruments is in order. In 1973, Tuinman analyzed five popular
reading tests and found that students could answer a significant number
of questions correctly without recourse to the text because test items
had surprisingly low passage dependency (Tuinman, 1973–74). In 1982,
Bensoussan noted that the difficulty of passages did not match the difficulty
of questions in recall protocols requesting explicit information (Bensoussan,
1982). Others have focused on what makes test questions easy
or difficult. Perkins and Miller (1984) suggest that test
items which evoke paraphrasing, inferencing, and productive skills
are more difficult than those requesting information derived directly
from the text. Much of the research concludes that language proficiency
is not considered in such second language (L2) reading comprehension tests.
The cloze format has been used widely to assess L2 reading
proficiency because no other format has yet matched its degree
of passage dependency. Cloze also validly measures and easily
quantifies students' ability to paraphrase and infer. These skills
are considered by many reading specialists to be of a higher order
than those associated with receptive measures such as recall.
Cloze protocols have a high degree of passage dependency
(Berkoff, 1979). That is, test items are highly dependent on the text.
Students must be able to pick out clues within the passage to supply
the missing words.
My bilingual and ESL teachers examined various levels
of the LAB's reading portion for grades 6, 7, and 8. They noted
the following: There is an average of 55 cloze items per reading section.
Each reading passage begins with a complete (clozeless) sentence.
Throughout the passage, however, there is at least one cloze item
per sentence with as many as four per sentence. The ending sentences
for each passage vary. One passage ends with three complete (clozeless)
sentences while three passages end with one (clozeless) sentence.
The length of the passages, as well as the number of cloze items, increases
from the first passage to the last. For example, the last
passage contains 13 sentences and 20 cloze items, compared to six
sentences and five cloze items in the first passage.
Clearly, these LAB passages show some
sentence-to-passage interdependence. The text is sequenced so that
clues carry over from one sentence to the next as well as from beginning
to end. However, what the test measures depends greatly
on the student's ability to apply cloze techniques during the
task. That is, a student who is able to transfer native language
(L1) literacy skills to second language (L2) text may still be virtually
excluded by this narrow assessment.
Most of the teachers I interviewed reported that their
Spanish-dominant students showed better reading comprehension skills
in both Spanish and English when responding to open-ended question
formats than to cloze formats. Many concluded that the discrepancy
in performance is due in part to the students' lack of familiarity
with cloze test strategies. That is, a number of their Spanish-dominant
students fail the LAB because they are not familiar with the cloze
format.
The general consensus was that these students find cloze
reading passages unsettling because the flow of the story
is interrupted by a great number of blanks. One teacher explains,
"There are so many blanks in each story! My students can't
make sense of it." Most agreed that these students need time
and practice to learn how to do cloze.
We considered the question: "When administered to students who
are not familiar with the cloze format, can it be said that, essentially,
the LAB is culturally biased?" Most of my teachers said yes.
Research (Beach & Hynds, 1990; Bleich, 1978; Rosenblatt,
1978) indicates that readers interact in meaningful ways with text
that calls up previous knowledge or experience, that is, text that
makes use of the readers' schemata: what they already know, their interests,
and their skills. With respect to L2 reading tests, Perkins and Jones
(1985) suggest that passages which integrate previous knowledge with
text-dependent items provide a more suitable assessment for evidence
of text comprehension (pp. 151–152).
The teachers argued that a disproportionate number of
their English learners fail the LAB because the test itself fails
to engage them in meaningful reading activity; it excludes student
schemata. They agreed that the LAB is culturally biased because much
of its content is uninteresting to the students and tends to alienate
rather than engage them in reading.
Each of the various levels of the test for grades 6,
7, and 8 contained four to six passages. All were expository in nature
and covered three general categories: science, social studies, and
technology. Indeed, we found that most of the content was not culture-friendly.
For example, the Level III test was limited to the following topics:
the roadrunner, the North American grizzly bear, Samuel Johnson's
dictionary, and helicopters. In addition, English-language expressions
appeared frequently in the text. Specifically, "nature of the
beast," an idiom, is used to elaborate on the personality of
the grizzly, a bear native only to the North American continent. While
it does assess reading comprehension skills, it is also clear that
the LAB measures knowledge of culture-specific content; i.e., North
American wildlife, U.S. history and technology, and English-language
idioms.
The students themselves seem to be the best critics
of the test:

"Why can't they give us something more interesting, like
stories about people? I like to read about people's lives.
Most of it is about things I've never heard of...animals
I don't know...things about this country."

"I like science, but the way they use it here is stupid...not
about interesting science...and there are so many blanks that you
can't really enjoy reading."
To conclude, the teachers suggest that a more culturally
appropriate test would use fewer (or no) cloze reading passages, include
open-ended questions, and allow students to use their dominant language
to respond to questions. They also agree that language testing specialists
should consider student knowledge, interests, and experiences when
selecting reading passages.
(#1). A score above the fortieth percentile indicates
that a student is English proficient (EP), while those scoring below
are labeled limited English proficient (LEP).
(#2). Paragraphs in cloze passages contain a number
of blank spaces along with a column of choices for each blank. The
text is rich in contextual clues. To make correct choices, readers
must know how to apply knowledge of contextual clues within and across
sentences.
Beach, R., &
Hynds, S. (1990). Research on the learning and teaching of literature:
Selected bibliography (National Research Center on Literature
Teaching and Learning Report Series R1). Albany, NY: University at
Albany, State University of New York.
Bensoussan, M. (1982). Testing the test of advanced EFL reading comprehension: To
what extent does the difficulty of a multiple choice comprehension
test reflect the difficulty of the text? System, 10(3), 285–290.
Berkoff, N. (1979).
Reading skills in extended discourse in English as a foreign language.
Journal of Research in Reading, 2(2), 95–107.
Bleich, D. (1978).
Subjective criticism. Baltimore: Johns Hopkins University Press.
Perkins, K., &
Jones, B. (1985). Measuring passage contribution in ESL reading comprehension.
TESOL Quarterly, 19(1), 137–153.
Perkins, K., &
Miller, L. (1984). Comparative analysis of English as a second language
reading comprehension data. Language Testing, 1(1), 21–32.
Rosenblatt, L. M. (1978). The reader, the text, the poem. Carbondale, IL:
Southern Illinois University Press.
Tuinman, J. (1973–74).
Determining the passage dependency of comprehension questions in 5
major tests. Reading Research Quarterly, 9(2), 206–223.