The Mystery of Taiga River is a game-based science curriculum (water quality and scientific inquiry) designed to position students aged 10-14 as investigative reporters who must learn and apply scientific concepts (scientific investigation, water quality indicators, eutrophication, etc.) to solve the problem and restore the health of the park without alienating various stakeholders (i.e., loggers, fishers, farmers).
The CGI/E-Line game, Taiga, has won an international game competition.
The Mystery of Taiga River engages students in determining the elements of a healthy river system through an interactive narrative set in an aquatic habitat where a serious ecological problem has resulted in many fish dying. Players are positioned as agents of change, water quality scientists whose purpose is to help Taiga National Park uncover the cause of the fish decline, a problem threatening the park’s very existence (see Lesson Plan | Full Unit Guide | Student Notebook | Learning Trajectory). Players soon learn that applying scientific inquiry and systems thinking, coupled with an understanding of water quality indicators, is necessary to resolve the game’s narrative conflict. As the game progresses, players experience how their choices and use of science processes and inquiry dramatically change Taiga National Park, the stakeholders in its welfare, and even the students themselves.
Here, we provide a brief walkthrough of the experience from the perspective of the player. Upon entering the park, the player is welcomed as an expert and begins to talk with Non-Player Characters (NPCs), all of whom seem to have an opinion on who is to blame for the fish decline. Players immediately hear stories about acid rain from park employees, so they begin to gather data and test a water sample from the Taiga River for pH to see whether there is sufficient evidence to support that hypothesis. In this heavily scaffolded first investigation, players soon learn that the evidence does not support the acid rain hypothesis and that the mystery has not yet been solved. This scaffolded practice introduces the player to specific tools, both in-game and hard-copy, which guide them along the way. For example, they can run quick experiments using a simulation tool that allows players to interrogate and “play” with the interacting water quality indicators to better understand how, for example, increased turbidity can raise temperature and kill fish.
Another powerful tool players are introduced to is the chain of reasoning (CoR) tool, into which they can drag important socio-scientific data such as simulation results, water quality measurements, pictures of land use, and even collected scientific reports to build an argument around the core game problem of determining why fish are dying in the river. The CoR is an interactive object that allows students to organize all of the claims into a chain and then determine whether each of those claims holds true for Taiga. Players drag collected claims and evidence into the tool to build a chain of reasoning that supports, partially supports, or rejects a particular hypothesis. Then, using an algorithm that scores each piece of evidence with respect to the hypothesis being tested, the tool provides feedback on students’ chain of logic, affording the freedom to change and rearrange the evidence into the strongest argument in their effort to prove or disprove the current hypothesis. One of the first hypotheses to be tested using the CoR is acid rain, which is rejected because the player found a water sample with a normal pH value.
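The CoR’s scoring algorithm is not described in detail here; as an illustrative sketch only, the evaluation might work roughly as follows, with each piece of evidence pre-coded with a support value for each hypothesis. All names, codings, and thresholds below are hypothetical, not the game’s actual implementation:

```python
# Hypothetical sketch of a chain-of-reasoning scorer. The actual Taiga
# algorithm and its evidence codings are not published in this text.

# Assumed coding scheme: +1 supports the hypothesis, 0 is irrelevant,
# -1 contradicts it.
EVIDENCE_CODES = {
    "acid_rain": {
        "normal_pH_sample": -1,   # a normal pH reading contradicts acid rain
        "factory_upstream": 1,    # a nearby factory would be consistent with it
        "fish_kill_report": 0,    # the fish kill alone does not discriminate
    },
}

def score_chain(hypothesis, chain):
    """Sum the pre-coded support values for the evidence in a chain."""
    codes = EVIDENCE_CODES[hypothesis]
    return sum(codes.get(evidence, 0) for evidence in chain)

def verdict(score, n_items):
    """Map a raw score to feedback; thresholds are assumed for illustration."""
    if score >= n_items:          # every item in the chain supports
        return "supported"
    if score <= -1:               # net contradiction
        return "rejected"
    return "partially supported"

chain = ["normal_pH_sample", "fish_kill_report"]
print(verdict(score_chain("acid_rain", chain), len(chain)))  # rejected
```

The point of the sketch is the feedback loop it enables: because each evidence item carries a pre-assigned code, the tool can score any arrangement the student builds and let them rearrange evidence until the chain is as strong as the evidence allows.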
Once they rule out acid rain from nearby factories, they have three other hypotheses to investigate: overfishing by the K-Fly Fishing Company, which holds a fishing tournament during spawning season; turbidity from the Build Rite Lumber Company, whose logging practices may be a bit too close to the river; and eutrophication from Green Leaf Farms, where fertilizer and cattle waste might be a run-off problem. They hear a lot of gossip and incriminations from each of these stakeholders, but ultimately it is the player’s use of science processes and inquiry skills that will solve the mystery. The evidence that players collect has no meaning in isolation; it must be carefully matched with claims in the CoR tool, where each piece of evidence is pre-coded so the tool can evaluate students’ arguments and provide embedded assessment. In addition to gathering data from stakeholders and documentary evidence, players also collect water samples to test at the Taiga Science Center and use a virtual fish tank simulator that allows them to forecast what would happen to the fish population under particular water quality values (temperature, turbidity, etc.).
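The fish tank simulator’s forecasting idea can be conveyed with a toy model. The sketch below is purely illustrative: the game’s actual simulation model is not published here, and every coefficient is invented. It simply shows the kind of projection the tool supports, in which a population grows toward a carrying capacity that shrinks as temperature and turbidity rise:

```python
# Toy forecast of a fish population under given water quality values.
# Illustrative only: the game's actual simulator model is not published,
# and all coefficients here are invented for the sketch.

def forecast_fish(population, temperature_c, turbidity_ntu, years=5):
    """Project a population with a simple logistic model whose carrying
    capacity shrinks as temperature and turbidity rise (assumed form)."""
    carrying_capacity = max(0.0, 1000 - 40 * (temperature_c - 15) - 10 * turbidity_ntu)
    growth_rate = 0.5
    for _ in range(years):
        if carrying_capacity <= 0:
            return 0.0  # habitat cannot sustain any fish
        population += growth_rate * population * (1 - population / carrying_capacity)
        population = max(population, 0.0)
    return population

# Cool, clear water sustains growth; hot, turbid water collapses the habitat.
print(forecast_fish(200, temperature_c=16, turbidity_ntu=10))
print(forecast_fish(200, temperature_c=28, turbidity_ntu=60))
```

As in the game, the value of such a tool is that students can vary one indicator at a time and watch its downstream effect on the fish population before committing to a hypothesis.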
Elaborating on the CoR as a form of embedded assessment, students’ chains are scored according to a pre-determined coding system, and they eventually learn 1) whether they crafted the best chain of reasoning possible from the evidence at hand, and 2) whether the argument proves or disproves the hypothesis. Students use the constructed argument to write a variety of scientific reports, all of which are reviewed by the teacher along with the player choices and collected evidence referenced in those reports. In the end, student choices determine the outcome of the river, with different students advancing different arguments and arriving at different endings. This consequentiality occurs painlessly, through a device called a “Simulator” found in a hidden cavern. Using this device, players set regulations on each of the stakeholders and then visit a virtual future Taiga, talking to simulated copies of those stakeholders to see how the players’ regulations affected the park. They may repeat this activity as many times as they like, until they have found the balance of regulations that not only benefits the fish population and the park but also causes as little harm as possible to the stakeholder groups.
While this game was based on previous research (Arici, 2009; Barab, Gresalfi, et al., 2010; Barab, Sadler, et al., 2007; Barab, Scott, et al., 2010; Barab, Zuiker, et al., 2007; Hickey, Ingram-Goble, & Jameson, 2009), in this version we dramatically improved the quality of the game and the conceptual tools and recently completed the first comparison study. The design was implemented in an experimental design research study with 7 classes assigned to the control condition and 7 to the experimental condition; approximately 400 students were in the initial sample, and 351 completed both the pretest and the posttest. Similar to the Doctors Cure study reported above, these were 7th grade youth in the Sunnyside School District in southern Arizona, 90% of whom received free and reduced lunch, with a similar percentage being Hispanic, many of them speakers of English as a second language.
Quantitative results show that both the treatment and control conditions made statistically significant learning gains and that, when comparing the seven control classes with the seven experimental classes, the gains were significantly greater for the experimental condition, with a large effect size difference.
A repeated measures ANOVA on the pre/post learning gains revealed a significant main effect for testing time, F(1,349) = 272.95, p < .001, and a significant interaction, F(1,349) = 28.07, p < .001, with a non-significant main effect for condition, F(1,349) = .93, p = .34. Follow-up analyses indicate that both the control, t(165) = 7.95, p < .001, and the treatment condition, t(184) = 15.49, p < .001, improved significantly from pretest to posttest, with a significant difference between groups in the amount of improvement, t(349) = 5.30, p < .001.
In summary, both the control (PreM = 6.92, PreSD = 3.95; PostM = 9.20, PostSD = 4.45) and treatment groups (PreM = 6.23, PreSD = 3.08; PostM = 10.65, PostSD = 4.87) demonstrated statistically significant learning gains on the pre-post tests, with a large effect size for the experimental condition (ES = 1.13) and a medium effect size for the control (ES = .53). As stated above, there were statistically significant differences in gain scores, with the treatment condition improving significantly more (M = 4.43, SD = 3.89) than the control (M = 2.28, SD = 3.70), a difference of +.57 SD. The test included released items culled from standardized tests, and the two raters trained to score the three open-ended questions achieved an interrater reliability of .87 when scoring independently.
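The +.57 SD between-group difference can be reproduced from the gain-score summary statistics as a Cohen’s d. A minimal sketch, assuming an equal-weight pooled standard deviation (one of several common pooling choices; the paper does not state which was used):

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d with an equal-weight pooled SD (assumed pooling choice)."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# Gain scores reported above:
# treatment M = 4.43, SD = 3.89; control M = 2.28, SD = 3.70.
d = cohens_d(4.43, 3.89, 2.28, 3.70)
print(round(d, 2))  # 0.57, matching the reported +.57 SD difference
```

A sample-size-weighted pooled SD (n = 185 treatment, n = 166 control) gives essentially the same value, so the reported figure is robust to the pooling choice.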