Asia-Pacific Forum on Science Learning and Teaching, Volume 17, Issue 1, Article 4 (Jun., 2016)
The history, philosophy and sociology of science (which include epistemology as an important part) have been extensively advocated as central contents in science education in order to provide students with a clearer understanding—a more accurate image—of science and improved future decision making in personal and social settings (Aikenhead, 2006). Most of the recent science education research on these interdisciplinary issues (including epistemology issues) has been labelled "nature of science" (NOS), which embraces a variety of areas related to the nature of scientific knowledge (epistemology of science, science community, the relationships between science, technology and society, socio-scientific issues), and many other related topics concerning their effective teaching and learning, methods, NOS teaching materials, evaluation of students' and teachers' conceptions, theoretical matters, teacher training, etc. (Coll, 2012; Lederman, 2007).
This paper focuses on the evaluation of epistemological conceptions, cutting across various concerns of NOS research, teaching and learning. For instance, the controversial features of most epistemological topics make it difficult to devise valid methods and instruments for their evaluation. Furthermore, an underlying problem of epistemology and NOS research is the incommensurability of studies, either because the methods and instruments differ widely, or because of the qualitative nature of the results. Thus, accurate individual profiles are largely incomparable, beyond broad stereotyped findings about the poverty of students' and teachers' epistemological and NOS conceptions. Even when the same qualitative instrument is used (e.g. Lederman and colleagues' VNOS), only rough percentages of informed conceptions, derived through researcher-based criteria, can be obtained.
The aim of this paper is twofold. On the one hand, it presents a new methodological approach to evaluating epistemological conceptions, which advances NOS research by allowing specific comparisons and hypothesis testing. The approach is extensive, flexible, functional, meaningful and standardized, and it readily supports adaptable applications and statistical hypothesis testing between groups, across treatments, or over time. Thus, research studies can be compared on the same scoring baseline; scaling up to larger samples is faster, easier, and cheaper; and, in practice, its use by teachers for curriculum development or classroom evaluation is straightforward. On the other hand, this paper illustrates these properties through a real assessment of some epistemology issues in a large, nationwide sample of Panamanian students and teachers, a population scarcely represented in science education research. Hypothesis testing and correlation analyses of the epistemological conceptions are also taken into consideration.
The Nature of Science as the global framework for epistemology in Science Education
Today, science education literature usually considers the epistemology of science under the NOS umbrella or, even more precisely, under the nature of scientific knowledge (Lederman, 2007). NOS refers to the values, suppositions, community, society, technology, etc. involved in scientific practices, which depict science as a human activity aimed at gaining valid knowledge. Scholars do not agree on a precise definition or delimitation of the NOS field, which is acknowledged as complex, controversial, multifaceted, and changing over time, although these disagreements do not impede researching or teaching NOS issues (Erduran & Dagher, 2014; Matthews, 2012). A simple approach characterizes NOS as a human way of gaining valid knowledge that is practised by a special community of professionals called scientists, who work under certain values and epistemological assumptions.

The importance of NOS for science education stems from its being considered a core content of scientific literacy. Besides the traditional contents "of" science (facts, laws, theories, processes, inquiry, etc.), NOS embodies knowledge "about" science (Osborne, Collins, Ratcliffe, Millar & Duschl, 2003). NOS issues have been adopted as curriculum content in reforms of science education around the world, and consequently, NOS topics should also be part of science teacher education (Eurydice, 2011; Next Generation Science Standards [NGSS], 2013). The aim of NOS teaching in pre-college science education is not to train students to become philosophers of science or to address particular philosophical standpoints. Instead, responding to the crucial role of NOS in scientific literacy, students should be able to understand how science works, and hence have a more solid foundation on which to base their future decision making in personal and social settings.
In a way, the NGSS provides an enhanced, streamlined and renewed vision of the curricular NOS along two strands: the features associated with scientific and engineering practices (scientific research, methods, empirical evidence, openness to revision, scientific models, laws, mechanisms and theories), and some global suppositions of scientific knowledge, which are considered as curricular cross-cutting concepts (science as a human enterprise, the assumption of order and consistency in natural systems, and the limitation of science to the natural and material world).
There is some controversy about the most suitable NOS contents to include in the curriculum at the pre-college level, though different scholarly proposals share considerable overlap. However, closed lists of topics run the risk of inducing sterile rote learning if pedagogical development is inappropriately applied (Abd-El-Khalick, 2012a; Deng, Chen, Tsai & Chai, 2011). All in all, issues concerning the epistemology of science are widely agreed to be a core component of NOS and are the focus of this paper.
Evaluation of epistemology conceptions within the Nature of Science framework
Recent years have seen a major growth in research on the evaluation of students' and teachers' conceptions about NOS and epistemology. Many empirical studies using different methods and instruments have consistently found a broad collection of deficits in epistemological views. Neither students nor teachers understand the role that theories, laws, hypotheses, models, creativity, technology, tentativeness and scientific methods play in science. Furthermore, the results are broadly coherent across methods, countries and age groups, confirming the importance of the problem (Lederman, 2007; García-Carmona, Vázquez & Manassero, 2012).

Science teachers' understanding of epistemology unfortunately reflects naïve patterns similar to those observed in students. They hold mythical conceptions about science, which reject the theory-laden and tentative nature of scientific knowledge, and misconstrue the differences between scientific theories, laws, and hypotheses, as well as the status of scientific method(s), inference, observation, and empirical evidence (e.g., Abd-El-Khalick & Lederman, 2000; Celik & Bayrakçeken, 2006; García-Carmona, Vázquez & Manassero, 2011; Lederman, 2007).
Most diagnostic studies of conceptions have been performed with small or convenience samples of science students and teachers. Recently, however, some studies have started to use larger samples tied to applications of VOSTS-related instruments (e.g. Dogan & Abd-El-Khalick, 2008). Further, Holbrook et al. (2006) studied non-science students, and Liu and Tsai (2008) compared arts and science graduate students (including an initial teacher education group). In the latter study, the two groups generally did not differ from each other, although the science students displayed less sophisticated beliefs (e.g., about the cultural dependency of scientific theories), and the science teacher education students scored lowest on all dimensions.
Educational evaluation has grown into a vast field of research. The numerous methods and instruments basically fall into two broad categories: qualitative (case studies, participant observation, interviews, open questionnaires, content analysis of lesson plans and classroom documents, concept maps, discourse analysis, etc.) and quantitative (ordinal Likert-type scales, multiple-choice and multiple-rating questionnaires, grids, etc.). Mainstream NOS research has drawn on this field to develop various methods of evaluating NOS conceptions (reviewed in Deng et al., 2011; Lederman, 2007; Liu, 2012).
The qualitative approach has its own unquestionable merits in penetrating the complex web of individuals' ideas, although it also suffers from semantic problems: the categories of analysis are often idiosyncratic, not explicitly defined, and hardly ever equivalent across studies (Deng et al., 2011). Even though their results depict broad patterns in NOS conceptions, they are hardly comparable, and have limited influence on the inclusion of NOS in school science assessments and on encouraging schools to teach NOS in daily practice, because they target a readership of research specialists far removed from school teachers (Chen, 2006). All in all, qualitative research provides valuable and unquestionable contributions; the previous criticisms are not intended to devalue it, but only to frame some of its shortcomings.
On the other hand, reliance on quantitative scales and questionnaires has drawn criticisms pointing to methodological shortcomings and poor validity or reliability. The researchers' perspective (philosophical preferences, biases and prejudices) embedded in instrument construction may restrict validity; for example, through the adoption of cluster labels (relativist, constructivist, empiricist, etc.) to classify individuals (this is also a problem in qualitative research). The immaculate perception hypothesis (the implicit assumption that researcher and respondents perceive and understand the items in the same way) may severely affect the validity of investigator-designed instruments (Aikenhead, Fleming & Ryan, 1987; Lederman & O'Malley, 1990; Lederman, 2007). Other common criticisms refer to the scoring procedures, the underlying dimensionality of the models, the representativeness of the scores and the reliability statistics. Forced-choice instruments, in particular, limit the space of responses available to respondents (Lederman, Abd-El-Khalick, Bell & Schwartz, 2002).
The above criticisms about validity could also apply to qualitative research, as the qualitative processing of participants' open productions is often insufficiently detailed by researchers, thus preventing semantic and validity issues from becoming apparent. This reflection is intended to redress an apparent imbalance in research between qualitative and quantitative methods, because the heavier criticism levelled at the latter may be hindering their use (Guerra-Ramos, 2012). The review of over one hundred research studies on students' NOS conceptions by Liu (2012) estimates the proportion of qualitative (two-thirds) to quantitative methods (one-third) used in research. Most studies (54%) combine several (two or more) methods to acquire their data, while the rest (46%) use just a single method. Overall, 81% of the data acquisition methods are qualitative, indicating the prevalence of the qualitative over the quantitative approach in current NOS research. Instead, we advocate bridging the gap between the two methods, because they can complement each other in providing valuable information about the complex aspects of NOS conceptions through their different approaches to seeking evidence. Indeed, it is usually recommended to complement test scores with qualitative methods (interviews, observation, etc.) in order to better unveil respondents' real conceptions (Aikenhead & Ryan, 1992; Chen, 2006; Lederman et al., 2002). This complementary approach to qualitative/quantitative evaluation instrumentation has also been initiated from the qualitative side through the work of Brunner, Summers, Myers and Abd-El-Khalick (2016), who attempt to quantify the responses to the most widely used qualitative evaluation tool (VNOS).
The notion of authentic evaluation has been introduced into general education to aid in the evaluation of complex learning, such as performances or actions in real settings ("close to real"). In this framework, and in light of some criticisms of the VNOS questionnaire (Lederman et al., 2002), Allchin (2011) recently argued for applying the criteria of authentic evaluation to NOS conceptions, highlighting the complexities of teaching and evaluating NOS.
Recent evaluation instruments
Science education research needs standardized, valid and reliable instruments to evaluate NOS for diverse reasons: to provide trustworthy common ground for research results and to foster NOS teaching by providing practical tools for teachers (Chen, 2006; Lederman, 2007). Partial accounts of quantitative evaluation instruments in the literature have contributed to the invisibility of some available instruments. Lederman (2007) compiles a long list of instruments for the period 1954-1992, although for recent years he refers only to the five-form VNOS, the 114-item Views on Science-Technology-Society (VOSTS) (Aikenhead & Ryan, 1992) and the Critical Incidents Scale (Nott & Wellington, 1995). Liu's review (Liu, 2012) adds three further instruments: Views about Sciences Survey (VASS) (Halloun & Hestenes, 1998), Thinking about Science Survey Instrument (TSSI) (Cobern & Loving, 2002) and Views on Science and Education (VOSE) (Chen, 2006). Some additional questionnaires are listed in the table of Appendix A.

Though Lederman's VNOS may be the most influential qualitative instrument, Aikenhead's VOSTS item pool has also inspired a considerable number of studies and some of the instruments mentioned. The teachers in Chen's (2006) study found that answering the VNOS in the time allocated was harder, more frustrating, and required greater effort than answering the VOSE (a VOSTS-based instrument). The standardized instrument devised in this paper, based on the VOSTS pool, is oriented towards coping with some current challenges of NOS research and teaching (assessment), which in turn provide reasons to choose quantitative instruments to assess NOS conceptions.
- The instrument explicitly shows all the items, explains the method to obtain the scores, the interpretations of the scores, and its theoretical foundations. Such an explicit design allows straightforward and extensive use, replications, instrument improvements and associated data management through critical analysis of the results.
- A standardized instrument usually involves procedures that are inexpensive, rapid and easy to apply. These features make large-scale evaluations feasible at a state or national level, thus making the monitoring process more robust and representative (Kind & Barmby, 2011; Chen et al., 2013). Open-ended instruments, on the contrary, require idiosyncratic, expensive, tedious and slow processes that are managed by scholars.
- Standardized instruments would facilitate teachers’ evaluation tasks in the classroom, and consequently, are likely to stimulate teachers to incorporate NOS teaching into curricula, as their reluctance to teach NOS explicitly is partially due to the lack of evaluation instruments (Lederman, 2007). Particularly, the item pool used here is large enough for different instruments to be tailored to different applications and objectives by choosing the appropriate items from the pool.
- The standardized instrument provides researchers with a tool to delve deeper into the statistical analysis of data (group comparisons, time series, individual profiles, test-retest follow-up, correlation methods, strengths and weaknesses, inconsistencies and consistencies, etc.); this paper exemplifies this point. Furthermore, the relationships between NOS conceptions and other educational variables (learning, teaching, teacher attitudes, motivation, etc.) can be more readily examined (Deng et al., 2011).
- Standardization is especially well suited to comparing individual profiles of respondents' NOS conceptions, which facilitates researchers' and teachers' evaluations of students, thus fostering the progress of NOS research and teaching.
- Comparisons among different research studies cannot currently be made due to the multiplicity of approaches. Standardized instruments would provide researchers with common ground, making it possible to compare research findings across studies, groups and countries. This is not an attempt to unify the field, but it would advance NOS research, for instance, along some of the critical lines identified by Lederman (2007) for future NOS research (effectiveness of interventions, development over time, factors that affect development, change of teachers' conceptions during classroom practice, etc.).
In response to the criticisms of quantitative instruments, and in order to cope with the challenges of assessment in NOS research and to faithfully represent respondents' conceptions, the present study uses an instrument that is empirically developed (largely avoiding the immaculate perception problem) and based on a multiple-rating response model, which avoids forced choice (Vázquez & Manassero, 1999). The research questions of this paper reflect its twofold aims. First, do the evaluation instrument and its associated methodology allow for a new, valid, reliable, fast, effective, inexpensive and standardized evaluation of NOS conceptions on the epistemology of science? Second, what are Panamanian students' and teachers' conceptions of the epistemology of science? In particular, statistical hypothesis testing (e.g., comparative group analyses, time series, profiles, etc.) and correlation analyses between variables (e.g., teaching factors, factor analysis, etc.) are more straightforward to perform with this instrument than with other instruments and methodological approaches.
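As a concrete illustration of the kind of between-group hypothesis testing that standardized numeric scores make possible, the following sketch applies Welch's t-test to two sets of scores. The data, group labels, and score range are hypothetical and invented purely for illustration; they are not taken from the Panamanian sample or from the instrument described in this paper.

```python
# Illustrative sketch (hypothetical data): Welch's t-test comparing mean
# standardized scores between two invented groups of respondents.
from statistics import mean, variance
from math import sqrt

def welch_t(a, b):
    """Return Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite) for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a) / na, variance(b) / nb   # variance() is the sample variance
    t = (mean(a) - mean(b)) / sqrt(va + vb)
    df = (va + vb) ** 2 / (va ** 2 / (na - 1) + vb ** 2 / (nb - 1))
    return t, df

# Hypothetical standardized scores (illustrative range -1..+1)
students = [0.12, -0.05, 0.30, 0.08, -0.10, 0.22, 0.05, 0.15]
teachers = [0.35, 0.28, 0.40, 0.18, 0.33, 0.25]

t, df = welch_t(students, teachers)
print(f"t = {t:.2f}, df = {df:.1f}")
```

In practice a statistical package would also report the associated p-value; the point here is simply that once conceptions are expressed as standardized scores on a common baseline, this kind of direct group comparison becomes routine.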