- The term is typically used in reference to (1) proficiency levels, scales, and cut-off scores on standardized tests and other forms of assessment; (2) students achieving or failing to achieve proficiency levels determined by tests and assessments; (3) students demonstrating or failing to demonstrate proficiency in relation to learning standards (for a related discussion, see proficiency-based learning); and (4) teachers being deemed effective or ineffective based on the test performance of their students.
Major Issues Related to Proficiency Determinations in Education:
- High standards vs. low standards: One source of debate concerns the standards upon which a proficiency determination is based, and whether those standards are being used consistently and fairly to produce accurate results. Some may argue, for example, that the standards or cut-off scores for “proficiency” on a given test are too low, and that the results will therefore produce “false positives,” i.e., they will indicate that students are proficient when they are not. A test administered in eleventh grade, for instance, may only reflect a level of knowledge and skill that students should have acquired by eighth grade. Because reported “proficiency” rises and falls in direct relation to the standards used to make a proficiency determination, it’s possible to manipulate the perception and interpretation of test results by raising or lowering standards. Some states, for example, have been accused of lowering proficiency standards to increase the number of students achieving “proficiency,” and thereby avoid the consequences (negative press, public criticism, large numbers of students being held back or denied diplomas in states that base graduation eligibility on test scores) that may result from large numbers of students failing to achieve expected or required proficiency levels.
- Common systems vs. disparate systems: Since proficiency must be assessed by some form of measurement system, proficiency determinations can be more or less accurate depending on the quality of the system being used, and they can be comparable (when common systems are used) or incomparable (when disparate systems are used). Confusion may result when there is disagreement about the methods being used to determine proficiency, or when two different systems are compared even though their results are not comparable in a valid or reliable way. For example, when a number of states adopted the Common Core State Standards, those states were then required to use new standardized tests, based on a different set of standards, to determine “proficiency” (i.e., the tests would measure achievement against the more recently adopted Common Core standards, as opposed to the learning standards the states formerly used). In this case, both the standards and the tests used to measure proficiency changed significantly, which makes any comparison between the old system (student test scores from previous years) and the new system (student scores on the new tests) difficult or impossible.
- Alignment vs. misalignment: Proficiency levels may also be influenced by the degree of alignment between the content of a test and the content of the lessons actually taught. If schools teach a selection of concepts and skills that are not evaluated on a given test, the results may produce a “false negative,” i.e., students may have learned what they were taught, but the test evaluated content they were not taught, producing misleading results (proficiency is based on the content that was tested, not the content that was taught). For example, when states adopt a new set of learning standards, teachers then have to “align” what they teach to the new standards. If the process of alignment is poorly executed or delayed, students may take tests based on the new standards even though what they were taught was still based on an older set of standards. The adoption of the Common Core State Standards by a majority of states has become a source of discussion and debate on this issue.
- Learning vs. reporting: As previously described, it is possible for students to learn a lot (or very little) in school but still appear to have learned very little (or a lot), due to the systems and standards being applied or to the misalignment of teaching and testing. Potential confusion and issues, therefore, may arise from the tendency to view test scores as accurate, absolute measures of learning, rather than as relatively limited indicators of learning that may be flawed or misleading. For example, students may learn important skills in school, such as problem solving and researching, that are not specifically evaluated by tests, or they may have learned a large body of knowledge, just not the specific knowledge evaluated by a given test or assessment. In these cases, “proficiency” rates on tests (often reported as either percent proficient or proportion proficient) may present only a partial or misleading picture of what students have learned. It is for this reason, among others, that testing experts often recommend against making important decisions about students on the basis of a single test score.
- Appropriate vs. inappropriate proficiency levels: Proficiency assessments are also the subject of debates about the appropriateness of a given proficiency scale, standard, or system. For example: Is it appropriate to hold a non-English-speaking student to the same proficiency standards, as measured by the same English-language tests, as a native-English-speaking student? Or, similarly, to hold a recently immigrated student who has had very little formal education in her home country to those standards? Teacher evaluations are another object of debate and controversy on this issue, particularly when it comes to factoring student achievement into performance evaluations.