This paper evaluates the conceptions that Panamanian students and teachers hold about the epistemology of science, which constitute an essential component of scientific literacy and an indicator of the comprehension of NOS, in order to demonstrate the applicative potential of the new quantitative assessment methodology. One objective of the work was to serve as an example of the application and partial validation of the instrument and its associated methodological approach for diagnosing epistemological conceptions (for this illustrative purpose, only seven items were taken into account). Summing up, the present quantitative approach to evaluating NOS conceptions, based on the MRM model, provides fuller (ratings along a wide spectrum of positions), sounder (contextualized within a specific frame), and more accurate (measured by sensitive indices) information about a respondent's views of a NOS issue than a single-response model would. Furthermore, because the evaluation of each item is constructed from the scores on all of its sentences, the set of invariant multiple indices (sentence and average item indices) constitutes global, valid, and reliable quantitative data that allow the application of statistical hypothesis-testing procedures. The method thus guarantees the comparability and straightforwardness of the results, from which their qualitative analysis and subsequent discussion flow naturally.
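To make the construction of the indices concrete, the following minimal sketch shows how one respondent's ratings of an item's sentences might be converted into sentence indices and averaged into an item index. The 1-9 agreement scale, the three expert categories (adequate, plausible, naive), the particular mappings onto the [-1, +1] range, and the sample data are assumptions made for illustration only, not the authors' published scoring formulas.

```python
from statistics import mean

def sentence_index(rating: int, category: str) -> float:
    """Map a 1-9 agreement rating to an index in [-1, +1] according to the sentence category."""
    if category == "adequate":      # stronger agreement with an adequate sentence scores higher
        return (rating - 5) / 4
    if category == "naive":         # stronger agreement with a naive sentence scores lower
        return (5 - rating) / 4
    if category == "plausible":     # intermediate agreement scores highest (triangular mapping)
        return 1 - abs(rating - 5) / 2
    raise ValueError(f"unknown category: {category}")

def item_index(ratings: dict, categories: dict) -> float:
    """Average the sentence indices of one item for one respondent."""
    return mean(sentence_index(ratings[s], categories[s]) for s in ratings)

# Hypothetical ratings from one respondent for a seven-sentence item.
categories = {"A": "naive", "B": "plausible", "C": "adequate",
              "D": "adequate", "E": "naive", "F": "plausible", "G": "naive"}
ratings = {"A": 8, "B": 5, "C": 7, "D": 6, "E": 2, "F": 4, "G": 9}

print(round(item_index(ratings, categories), 3))   # one respondent's index for this item
```

Averaging such item indices across participants would yield group estimates of the kind discussed below, and averaging them within a participant would yield the personal mean item indices mentioned later in this section.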
Overall, the large sample exhibited inadequate or misinformed conceptions about the seven epistemological issues investigated, as the average scores for all of these issues were close to zero. Within this overall low profile, the lowest specific profiles were those of method and of the role of assumptions in scientific knowledge (laws, theories, etc.), whereas conceptions concerning observations and the tentative nature of science were slightly better. The remaining epistemological issues (scientific models, classification, and epistemological status) had intermediate index scores, closer to zero than those of the other four.
Of course, the overall low item indices represent only a general estimate of the comprehension of NOS because they are calculated as averages across all participants and all sentences within the items. However, the distribution of the personal mean item indices across the sample of respondents was more variable, with some participants presenting well-informed conceptions and others clearly misinformed ones. It is important to note that the indices are quasi-continuous assessment parameters that go beyond the usual simplistic right/wrong or informed/misinformed classifications of NOS conceptions, leaving room for controversy and complexity. In addition, the mean sentence indices of the items showed an even greater variation among the respondents, again with some participants presenting well-founded beliefs and others poorly informed ones; the qualitative analysis of individual answer profiles may illuminate the “why” of personal understandings (explanations) and of cultural interpretations and idiosyncrasies, much as qualitative research does. These results therefore mean that the generally negative picture of teachers' and students' epistemological thinking transmitted by most of the studies mentioned in the introduction should be considerably more nuanced. In particular, within the different items and among the different respondents, both well- and ill-informed views about the epistemology of science can coexist, a general fact about NOS conceptions that is developed in depth elsewhere (Vázquez, García-Carmona, Manassero, & Bennàssar, 2013).
The multiple-rating instrument used in the present work, together with the methodological approach, offers an economical, fast, and effective way of inquiring into people's conceptions about NOS, and it can have various useful applications. The information obtained from the respondents by means of the instrument does not come from generic or abstract questions (for example, "Are scientific models copies of reality?"), and the respondents are not obliged to choose one sentence and ignore the others. Instead, each question is straightforward, specific, and presented in a context. The respondents are thus asked to evaluate simple, non-technical sentences expressing different positions on the issue. The set of index scores they assign to all the sentences portrays their personal position, which translates naturally into a personal NOS profile. Aikenhead and Ryan (1992) claimed that the original empirical, qualitative construction of the pool (the sentences were created from participants' open responses and are expressed in plain language) warrants the inherent validity of the item pool; further, the new method and the factor analysis lend some empirical support to this validity. The excellent reliability coefficients of the whole set of items, as well as the moderate coefficients of the individual items, underpin evaluation through the instrument. All in all, it must be underlined that the items' empirical construction and methodology reflect not the researchers' opinions but those of experts, and that the simple language helps to overcome the flaws commonly associated with such instruments (e.g., the problem of "immaculate perception").
Furthermore, the standardization of precise, homogeneous, and invariant indices with which to evaluate NOS conceptions accurately constitutes a common measure, allowing the use of statistical hypothesis testing, the comparison of results across different research studies, and correlation analyses. This set of features represents a major advantage for future research because it provides both absolute (index scores) and relative (group comparison) criteria with which to evaluate NOS conceptions. In particular, the indices allow statistical tests of hypotheses (e.g., comparisons between groups and between researchers), and the explicit and standardized interpretation of the indices facilitates contrasting the results with those of other methods of evaluation (e.g., qualitative instruments), as well as showing whether there is a real improvement in the validity of the evaluation. For instance, the overall comparison between the science and humanities groups does not display significant differences, contrary to what might reasonably be expected, which points to the inefficacy of science education, in this case Panamanian education, in improving NOS understanding (Liu & Tsai, 2008; Vázquez, García-Carmona, Manassero, & Bennàssar, 2014).
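As an example of the kind of hypothesis test the standardized indices support, the sketch below compares the personal mean item indices of two hypothetical groups with a Welch t-test. The group labels, sample sizes, and index values are invented for demonstration, and the use of scipy is an assumption; this is not a reanalysis of the study's data.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Invented personal mean item indices, one value per respondent, on the [-1, +1] scale.
science = rng.normal(loc=0.05, scale=0.25, size=120)
humanities = rng.normal(loc=0.02, scale=0.25, size=110)

result = stats.ttest_ind(science, humanities, equal_var=False)   # Welch's t-test
print(f"t = {result.statistic:.2f}, p = {result.pvalue:.3f}")
# A non-significant p-value would mirror the finding that the science and humanities
# groups do not differ, despite the expectation that science students would score higher.
```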
The method and tools are applicable to large samples without major increases in cost or time. They provide standardized numerical indices reflecting how much (or how little) and how well (or how poorly) each person knows and thinks about the different characteristics of NOS. As other researchers have shown (e.g., Dogan & Abd-El-Khalick, 2008; Chen et al., 2013), these considerations are crucial for the feasibility and planning of diagnostic evaluations of large samples without requiring a major investment of time and resources. The large sample of participants in this study is representative of Panamanian students and teachers, thereby demonstrating the instrument's capacity to support representative studies with a minimal investment of time and resources, which makes it suitable for comparing different groups or different researchers, or for tracking participants' conceptions over time (Kind & Barmby, 2011).
The controversy between Allchin (2011) and Schwartz et al. (2012) on the authentic assessment of complex NOS comprehension has shed light on the strengths and weaknesses of the different NOS evaluation proposals, as well as on the objectives, contents, and methods of NOS teaching. The approach to NOS assessment offered in the present study satisfies some of the features that Allchin (2011) assigns to the functional and authentic evaluation of NOS knowledge. It is authentic, as the stem of each item provides a contextual framework for the respondents that situates a specific socio-scientific NOS issue close to reality; and it is functional, as it can be easily applied and tailored by researchers and teachers. Although the participants do not strictly compose their own written responses, the soundness of the analysis arises from the complete information provided by all the sentences of each item and from the multiple-rating responses that make up the profile; of course, qualitative data from post-test interviews or freely drafted responses to the items would constitute a natural complement to this method, one yet to be explored. Adaptability to diagnosis, to teacher training, or to general contexts of performance evaluation at school is ensured by the capacity for items to be tailored from the total pool and by the construction of individual NOS knowledge profiles based on the set of indices, shown in more detail elsewhere (Vázquez, Manassero & Acevedo, 2006). In addition, the instrument's adaptability to various comparative uses (between individuals, groups, or locations) and to different stakeholders is quite evident, because the quantitative method and the normalization of the indices allow the application of statistical hypothesis testing for comparative purposes; applications with large-scale samples are rapid, low-cost, and highly flexible and adaptable.
Furthermore, the use of this standardized method and instrument enables different research studies to be compared. To date, most studies have used non-equivalent methods that allow only coarse-grained comparison of the key ideas or results concerning NOS. The present standardized instrument and method can therefore contribute to NOS research by encouraging its synergic development, as suggested by Abd-El-Khalick (2012b), because they permit the comparison of results across different researchers, the establishment of benchmarks for comparative NOS evaluations, and the assessment of the quality and achievements of competing NOS teaching methods (implicit, explicit, and reflective), among others.
In many countries, such as Panama, whose curricula have never before included NOS content, teaching NOS in school demands a special innovative effort, as many teachers lack training in these topics. Indeed, there is evidence that even expert teachers, when faced with teaching a new topic with which they are unfamiliar (as may be the case with NOS), may be unable to transfer the expert behaviour that characterizes their teaching to this new, relatively uncomfortable context. As a result, they may take unexpectedly incompetent approaches to such teaching, more typical of a novice teacher, and may find it harder than expected to include NOS topics in their practice (Sanders, Borko & Lockard, 1993). Hence, teachers primarily require appropriate teaching materials to help them design and implement NOS activities in their classes, but they also need to engage in explicit and reflective analysis of NOS issues. Indeed, this should become the core of both initial and ongoing science teacher education programs in order to stimulate authentic NOS teaching in schools. Although non-expert science teachers' obstacles to teaching NOS stem mainly from diverse negative perceptions about NOS (Höttecke & Silva, 2011), the instrument and the standardized method presented herein should assist teachers in their educational evaluation tasks. Moreover, teachers' knowledge and analysis of sentence and item content also constitute a guide to fostering explicit and reflective analyses of issues in the NOS curriculum, as each sentence's category can orient teachers about its actual value and thereby enhance their training in NOS (Acevedo, 2009; Hanuscin, Lee & Akerson, 2011).
In sum, this illustration of the assessment of some conceptions of the epistemology of science in a large sample contributes to the field of NOS research by developing a significant method that provides quantitative, quick, valid, and informative results on tailored NOS topics.
This study was carried out under grant EDU2010-16553, funded by the national R+D+i 2010 programme of the Ministry of Science & Innovation (Spain), with the support of the Organization of Ibero-American States (OEI).