In the present study, data were analyzed using meta-analysis, a secondary statistical analysis of primary research. Our approach is similar to that of Kulik et al. (1985) and Glass, McGaw and Smith (1981): first, we located objective and replicable studies from reliable sources; then, we coded these studies for prominent properties and placed their outcomes on a common scale; finally, we applied statistical methods to the studies' outcomes and calculated effect sizes.
In order to gather the studies included in the meta-analysis, various sources were searched. Three types of studies were brought together: journal articles, dissertations/theses and conference papers. For journal articles, we searched the Social Science Citation Index (SSCI) journals, the Turkish Academic Network and Information Center Social Science Database, national printed journals, and the Academic Search Complete, Education Research Complete and ERIC databases. The Council of Turkish Higher Education Thesis Center was scanned for dissertations/theses. Conference papers were collected from the proceedings of prominent science education, educational technology and educational sciences conferences in Turkey. In total, 52 studies were included in the meta-analysis.
The following criteria were established for choosing the studies included in the meta-analysis.
1. Studies had to compare the effects of computer assisted instruction and other methods (traditional instruction, laboratory-based instruction, etc.) on students' cognitive achievement.
2. Studies had to be in a science subject area (physics, chemistry or biology).
3. Studies had to use an experimental design with an experimental group and a control group. Studies with no comparison group were not used in the analysis.
4. Studies had to report quantitative results.
5. Studies had to include Turkish students as subjects.
6. Studies had to report the means, standard deviations and numbers of subjects of the experimental and control groups separately (if these were not reported, t or F values had to be available).
7. Studies had to have been published between 2001 and 2007.
After the studies were selected, a coding form was prepared for the coding process. Two researchers independently coded the variables and the quantitative data needed to calculate effect sizes for each study. The researchers then compared their coding forms to check coding reliability; agreement between the forms was 0.90, and differences in coding were resolved through discussion.
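As a rough illustration of such an agreement check (not the authors' actual procedure; the coding values below are invented), simple proportion agreement between two coders can be computed as follows:

```python
def percent_agreement(coder_a, coder_b):
    """Proportion of coding decisions on which two coders agree."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical codings of the same items by the two researchers
coder_a = ["journal", "secondary", "physics", "traditional", "journal", "chemistry"]
coder_b = ["journal", "secondary", "physics", "laboratory",  "journal", "chemistry"]
print(percent_agreement(coder_a, coder_b))  # about 0.83 for this toy example
```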
Six variables were coded for each study (a minimal sketch of one coded record is given after the list):
1. Publication year
2. Type of publication (journal article, dissertation/thesis or conference paper)
3. Grade level (elementary, secondary or university)
4. Subject area (physics, chemistry, biology)
5. Instruction method of comparison group (traditional, laboratory based…)
6. Sample size
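To make the coding scheme concrete, the following is a minimal Python sketch of how one coded study record might be represented; the class and field names are ours, not the authors'.

```python
from dataclasses import dataclass

@dataclass
class CodedStudy:
    publication_year: int    # 1. publication year
    publication_type: str    # 2. journal article, dissertation/thesis or conference paper
    grade_level: str         # 3. elementary, secondary or university
    subject_area: str        # 4. physics, chemistry or biology
    comparison_method: str   # 5. traditional, laboratory based, ...
    sample_size: int         # 6. number of subjects

# Hypothetical example record
example = CodedStudy(2005, "journal article", "secondary", "physics", "traditional", 64)
```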
Although there are several approaches to calculating an effect size, Hedges' g, also known as Hunter and Schmidt's d (Hunter & Schmidt, 1990), was used in this analysis.
$$g = \frac{\bar{X}_E - \bar{X}_C}{S_{pooled}} \qquad \text{(Hedges \& Olkin, 1985)}$$

Here, $g$ is the effect size (ES), $\bar{X}_E$ is the mean of the experimental group, $\bar{X}_C$ is the mean of the control group, and $S_{pooled}$ is the pooled standard deviation of the two groups:

$$S_{pooled} = \sqrt{\frac{(n_E - 1)S_E^2 + (n_C - 1)S_C^2}{n_E + n_C - 2}}$$

where $n_E$ and $n_C$ are the numbers of subjects in the experimental and control groups respectively, and $S_E$ and $S_C$ are the standard deviations of the experimental and control groups respectively. If the means and standard deviations of the groups were not reported, $t$ or $F$ values were used to calculate the ESs:

For a $t$ value: $g = t\sqrt{\dfrac{n_E + n_C}{n_E\, n_C}}$, and for an $F$ value: $g = \sqrt{\dfrac{F\,(n_E + n_C)}{n_E\, n_C}}$.
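As a minimal sketch of these calculations (the original analysis was carried out in SPSS; the function names and example figures below are ours, for illustration only):

```python
import math

def hedges_g(mean_e, mean_c, sd_e, sd_c, n_e, n_c):
    """Hedges' g from group means and standard deviations (Hedges & Olkin, 1985)."""
    # Pooled standard deviation of the experimental and control groups
    s_pooled = math.sqrt(((n_e - 1) * sd_e**2 + (n_c - 1) * sd_c**2) / (n_e + n_c - 2))
    return (mean_e - mean_c) / s_pooled

def g_from_t(t, n_e, n_c):
    """Effect size from a reported t value when means/SDs are unavailable."""
    return t * math.sqrt((n_e + n_c) / (n_e * n_c))

def g_from_f(f, n_e, n_c):
    """Effect size from a reported F value (two-group design, where F = t^2)."""
    return math.sqrt(f * (n_e + n_c) / (n_e * n_c))

# Hypothetical example values, for illustration only
print(hedges_g(75.2, 68.4, 10.1, 11.3, 30, 28))
print(g_from_t(2.45, 30, 28))
```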
The SPSS statistical package was used to compute the ESs and variability measures. Each coded variable was treated as a factor in an analysis of variance (ANOVA) to investigate whether there were significant differences in ESs within each variable.
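A rough Python analogue of this moderator analysis, assuming the effect sizes have been grouped by one coded variable (the grade-level groups and values below are hypothetical), would be:

```python
from scipy import stats

# Hypothetical effect sizes grouped by a coded moderator (grade level)
es_by_level = {
    "elementary": [0.62, 0.80, 1.05, 0.44],
    "secondary":  [0.95, 1.20, 0.71, 0.88],
    "university": [0.55, 0.67, 0.91],
}

# One-way ANOVA: does the moderator explain differences in effect sizes?
f_value, p_value = stats.f_oneway(*es_by_level.values())
print(f"F = {f_value:.2f}, p = {p_value:.3f}")
```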