Using Internet Reciprocal Teaching to Develop Second Graders’ Online Text Evaluation Skills
Classrooms today have more access to the internet and technology than ever before (Kuiper et al., 2008; Leu et al., 2017). Increased access provides opportunities for students and teachers to engage with different types of texts than were previously available. However, this change in access also creates additional challenges in teaching literacy (Leu et al., 2017). As reading online texts becomes more prevalent both in and outside of the classroom, the need for instruction in online comprehension strategies also increases (Forzani, 2018; Kiili et al., 2018; Kuiper et al., 2008; Leu et al., 2017). Leu et al. (2017) encouraged researchers to examine further how students can be supported in developing the online reading comprehension skills and strategies required to be literate in the 21st century.
New literacies include the skills and strategies needed to comprehend online text (Coiro & Dobler, 2007; Leu et al., 2017). In the context of this study, online reading comprehension refers to the “new literacies of online research and comprehension,” which includes a set of skills and strategies specific to reading on the internet (Leu et al., 2017, p. 7). New literacies researchers have determined that the skills and strategies needed to comprehend online texts are similar to, yet more complex than, traditional, or offline, reading comprehension skills (Coiro, 2011; Leu et al., 2017). Four core skills differ from traditional reading comprehension skills when applied in an online context: locating information, evaluating information, synthesizing information, and communicating information (Coiro, 2011; Forzani, 2018; Henry et al., 2012; Kiili et al., 2018; Leu et al., 2017; Sung et al., 2015; Wiley et al., 2009). While many traditional reading skills contribute to online reading comprehension, additional strategies are needed to navigate the multiple dimensions of the internet (Coiro, 2011; Leu et al., 2017). For example, when reading online, readers often have to work with search engines, use hyperlinks, read in various text structures, and navigate texts with multiple media such as pictures, graphics, videos, and animations (Coiro, 2011; Henry et al., 2012; Leu et al., 2014). In addition, the internet allows anyone to post regardless of biases or credibility. Readers of the internet then have to make important decisions about which websites or texts are reliable and which are not (Forzani, 2018; Kuiper et al., 2008; Wiley et al., 2009).
The added complexities of reading in an online environment present a unique need for instruction in new literacies for successful online reading comprehension (Coiro, 2011; Forzani, 2018; Kuiper et al., 2008; Leu et al., 2017; Wiley et al., 2009). While some students may have experience using computers and navigating the internet at home, others may not. Over the past two decades, there has been much conversation about the traditional, or offline, reading achievement gap (Leu et al., 2014). The offline reading achievement gap refers to the difference in reading test scores on the National Assessment of Educational Progress (NAEP) between students from higher and lower socioeconomic status backgrounds (Leu et al., 2014; National Assessment of Educational Progress, 2019). A gap in reading scores between students who qualify for the National School Lunch Program and those who do not has existed for many years (National Assessment of Educational Progress, 2019). With a lack of instruction in online reading comprehension skills and strategies, an achievement gap separate from the offline reading achievement gap has emerged: Leu et al. (2014) found a gap in online reading ability among the same groups as the traditional achievement gap. Researchers are calling for changes in policy related to online reading comprehension (Leu et al., 2014) as well as encouraging schools to begin teaching online reading comprehension skills and strategies at an earlier age (Forzani, 2018). Researchers have identified critical evaluation as one of the new literacies for online reading comprehension that students lack the most (Forzani, 2018; Leu et al., 2014; Wiley et al., 2009).
Researchers have made progress in outlining some of the skills and strategies required for online reading comprehension and advocate for additional instruction in these areas (Coiro, 2011; Forzani, 2018; Henry et al., 2012; Kiili et al., 2018; Leu et al., 2014; Sung et al., 2015; Wiley et al., 2009). Some studies have focused on specific instructional practices for teaching online reading comprehension (Colwell et al., 2013; Henry et al., 2012; Kuiper et al., 2008; Leu et al., 2008; Wiley et al., 2009). These studies have included participants ranging from fourth graders to college students. One method for teaching online reading comprehension skills and strategies is Internet Reciprocal Teaching (IRT; Colwell et al., 2013; Henry et al., 2012; Leu et al., 2008). IRT mirrors traditional reciprocal teaching in that teachers and students share the role of modeling strategies, with students taking on more responsibility as their expertise increases (Leu et al., 2008). Henry et al. (2012) identified positive outcomes in student strategy use and engagement through the use of IRT. Still, little research has been conducted to determine which teaching strategies may be effective for teaching the new literacies of online reading comprehension to younger students.
The current study examined the actions and thought processes second grade students go through while reading online as they participated in a research-based teaching strategy for new literacies, Internet Reciprocal Teaching (IRT; Leu et al., 2008). In particular, strategies for critically evaluating online text were introduced to second grade students. The current study contributes to the existing literature by exploring evaluation strategies that young students may already possess and by addressing possible relationships between IRT and the evaluation strategies students use while reading on the internet.
Evaluating Online Text
Evaluating information in an online reading environment requires the reader to determine the reliability and relevance of the text they are reading (Coiro, 2011; Forzani, 2018; Henry et al., 2012; Kiili et al., 2018; Leu et al., 2014; Sung et al., 2015; Wiley et al., 2009). This is especially important in the online environment because of the plethora of texts created by an abundance of authors. Online texts can contain bias or inaccurate information, and the reader must evaluate the trustworthiness of the author, web page, and information provided (Coiro, 2011; Leu et al., 2014; Kiili et al., 2018). Researchers have found that students frequently lack the ability to critically evaluate online text (Forzani, 2018; Leu et al., 2014; Wiley et al., 2009). Other researchers observed that students neglected evaluation during active research even though most could appropriately evaluate a website when directly asked (Colwell et al., 2013; Kuiper et al., 2008). Researchers have outlined several skills students need in order to effectively evaluate online texts.
Kiili et al. (2018) identified two separate sub-skills within evaluating, “Questioning Credibility” and “Confirming Credibility” (p. 321). These sub-skills account for added complexities when evaluating commercial versus academic text and allow students to look beyond a domain name such as .com or .edu (Kiili et al., 2018). Questioning credibility refers to the practice of identifying potentially biased or persuasive statements or author purposes and wondering about the trustworthiness of a particular author, article, or website (Kiili et al., 2018). Oftentimes young readers do not question the reliability of information they read on the internet and view it solely as a convenient information source (Kuiper et al., 2008). For this reason, modeling and practicing strategies for questioning credibility is important. Confirming credibility refers to the process of identifying indicators that a particular author, website, or piece of information is trustworthy (Kiili et al., 2018). This could include reading several websites to find the same information or finding information about the author’s or website’s credibility. These strategies may be new to young readers and thus require instruction. Colwell et al. (2013) suggest using open-ended research tasks to promote the use of these critical evaluation strategies.
Forzani (2018) studied seventh graders’ ability to evaluate the credibility of information within the context of an online science research task. She described knowledge-claim credibility, source credibility, and context credibility as the main components of credibility evaluation. The results indicated that students scored poorly overall on the evaluation components: identifying the author, evaluating author expertise, evaluating author point of view, and evaluating web page credibility. Of these, students most often succeeded at identifying the author and least often at evaluating overall web page credibility. Forzani speculated that this may be because the evaluation process is not “well defined” and “thus not well taught” (p. 387). She suggested that evaluation be viewed as a process and that students be taught to evaluate using the three tiers examined in her study, rather than learning skills in isolation. The study highlights the need for additional instruction in the area of evaluation. Because the seventh grade participants struggled with evaluation skills, Forzani suggested beginning evaluation instruction at a younger age. Other research also shows that many young readers lack evaluation skills in the online context (Coiro, 2011; Coiro & Dobler, 2007; Leu et al., 2014; Wiley et al., 2009). Therefore, further research in the area of online text evaluation for young readers is needed. While Forzani suggested continuing to teach and assess evaluation skills in conjunction with locating, synthesizing, and communicating skills because of their interconnectedness, certain evaluation skills may be more applicable to younger readers than to older readers. More research is needed to determine what these skills specifically are (Kiili et al., 2018).
Internet Reciprocal Teaching
Reciprocal Teaching involves the teacher teaching a specific skill or strategy to a group of students and then allowing students to work together in groups to model the strategy or teach one another (Henry et al., 2012; Leu et al., 2008). This model has been studied and used with print-based texts. Internet Reciprocal Teaching (IRT) builds on this method in that students take on the role of modeling and teaching peers, but the focus is on the skills and strategies needed when reading on the internet: locating, evaluating, synthesizing, and communicating information (Colwell et al., 2013; Henry et al., 2012; Leu et al., 2008). The IRT instructional model moves from teacher-led instruction to increasingly independent work, with the majority of the instruction and meaning-making coming from peer collaboration (Colwell et al., 2013; Henry et al., 2012; Leu et al., 2008).
An exemplary description of IRT instruction by Henry et al. (2012) explained three phases: “Phase I (teacher-led instruction) to Phase II (collaborative modeling) and Phase III (inquiry of the IRT model)” (p. 289). The teacher’s lecture was minimized in their case to facilitate students’ collaboration, and students were even allowed to select their own groups. The teacher’s explicit instruction focused on essential strategies needed for online reading, such as questioning, information search, critical evaluation of information, idea synthesis, and communication in various formats. Their IRT model encouraged students to assume expert roles that supported others’ learning. For example, students who had expertise in a strategy were asked to demonstrate it to their classmates and were added to the classroom expert list so that all students knew who could be a go-to person for that specific strategy.
Henry et al. (2012) examined three cases in which IRT and technology were used as motivating factors for struggling readers. This study was part of a larger study with a goal of observing how IRT impacts student roles in the classroom (Henry et al., 2012). In each case IRT was used and students selected their partners. The skills emphasized in the project were online reading comprehension skills: creating questions, locating information, evaluating information, synthesizing information, and communicating information. The researchers analyzed the data from interviews, observations, and screen and video recordings to find themes and patterns in “empowerment, engagement, and the development of new literacy skills” (Henry et al., 2012, p. 293). Results showed that two of the three students in these case studies improved in their online reading comprehension skills after the period of time using the IRT model. The third student, while not making academic gains, improved her attendance, and her role within the classroom changed from watching and listening to being actively engaged and viewed as a leader by her peers. All three students’ attitudes toward learning were positively impacted by the IRT process as well, and all were observed having more positive interactions with peers. Due to the nature of IRT, each student was provided with opportunities to be identified as an expert and to teach peers skills they had mastered. This led to an increase in engagement and self-confidence for all three students. The researchers concluded that the IRT model could be a beneficial method for improving student empowerment and engagement in classroom learning activities while also teaching online reading comprehension skills, especially for struggling readers.
Colwell et al. (2013) studied the process of IRT to identify outcomes, obstacles, and suggestions for implementation. Their study took place within the context of two seventh grade science classes; the teacher, along with 48 seventh grade students, participated. The researchers observed the students and teacher, took field notes, and took on the teaching role during the teacher-led phase of IRT. They also collected data through a survey on prior internet experience and usage, video-recorded activities, and interviews. The data were analyzed to identify themes that developed throughout the IRT process. The researchers found that the students were highly dependent on their teacher and that many lacked the skills required to work independently and collaboratively in IRT. Additionally, when asked directly, students could identify strategies to locate and evaluate online text but often did not use these strategies when working independently. After noticing this, the researchers adapted their method to include more group work. Temporarily, this increased strategy use and reliance on peer collaboration rather than dependence on help from the teacher; however, after a few sessions, students again began to ask their teacher for help rather than their peers. Students also viewed the internet as a space to find information quickly, which may have contributed to their lack of evaluation strategy use.
Another theme that emerged from Colwell et al.’s (2013) data was related to the structure of the inquiry projects. The researchers found that students were most successful at utilizing the skills and strategies for online reading and collaborating with peers when they worked in small groups on semi-structured, open-ended inquiry projects. In this structure, the project itself was open-ended: students could research a specific topic of their choice within a broader science topic, but guiding questions helped them plan their research. Students also frequently reverted to the strategies they had learned through their own internet inquiry outside of school rather than the strategies taught during the IRT process. The researchers and teacher encouraged students to share and critique each other’s strategies, which temporarily improved the use of the strategies taught in class, but students still often went back to the strategies they had used in their previous experiences. In the interviews completed at the end of the IRT process, students were able to explain the strategies they should use when reading online but did not consistently use them while actively engaged in online reading and research.
From the results of this study, Colwell et al. (2013) offered several recommendations for future use of IRT. First, the researchers suggest that activities be structured in a way that encourages strategy application over a period of time, rather than solely immediately after the lesson. Second, structuring projects as open-ended group work with students exchanging various roles for practicing online reading comprehension skills, such as locating and evaluating, could be most beneficial. Additionally, the teacher’s role should be that of a guide rather than an information source. When students ask the teacher questions, the teacher should inquire about what strategies the students have used and help them modify their strategies to find the answers to their questions. Finally, the researchers suggest that beginning strategy instruction at the elementary level may be beneficial in preparing students for projects like the one conducted in this study. These suggestions are important to consider prior to implementing IRT in the classroom.
Young Children Reading in a Digital Space
Of the research reviewed thus far, the youngest participants were fourth graders. Forzani (2018) encouraged educators to begin online reading comprehension instruction at a younger age, as research has noted a lack of skills in older readers. According to Duke and Cartwright (2021), many early literacy practitioners influenced by the Science of Reading overlook the development of strategic reading included in the Reading Rope model on which the Science of Reading was originally based. In proposing the Active View of Reading model, Duke and Cartwright (2021) emphasize that “readers must learn to regulate themselves, actively coordinate the various processes and text elements necessary for successful reading” (p. S30), which goes beyond word reading and language comprehension. Therefore, there is a need for additional strategy instruction in the lower grades. Online reading in particular, because it differs from print-based reading (Coiro, 2011; Bruner & Hutchison, 2023), necessitates additional strategy instruction in coordinating the various processes and text elements involved so that younger students learn to locate, evaluate, synthesize, and communicate information. Digital texts, which are often informal, multi-authored, interactive, and hyperlinked, require readers to verify their validity and reliability, an important disciplinary literacy practice for elementary students according to Bruner and Hutchison (2023).
Some studies have observed younger readers using digital reading spaces. While these studies did not analyze young readers’ online text evaluation, they do provide insights into young learners’ digital text comprehension and the instruction that supports it.
Two studies included observations of kindergarten students reading electronic books (Christ et al., 2019; De Jong & Bus, 2004). In the first, De Jong and Bus (2004) analyzed how kindergarten students interact with and comprehend electronic texts. They found that as students had more encounters with electronic books, their comprehension was not hindered by the often irrelevant animations in the electronic text. This indicates that children do not make meaning solely from visual cues in electronic texts; they also use the narrative text within electronic stories just as they do with printed texts. The authors concluded that children who have developed to the point at which they are able to understand stories can also retell a story that they read in an electronic format with similar accuracy to stories they heard read aloud by an adult. Kindergartners also participated in Christ et al.’s (2019) study on app books’ impact on reading comprehension. Christ et al. (2019) examined the impacts of app characteristics (text, animations, etc.) and the reader’s interactions with the app on reading comprehension. The researchers first taught the 53 kindergarten participants how to use app books on an iPad, and then analyzed how the features of the app book, as well as students’ interactions with it, affected their reading comprehension outcomes. The authors found that students’ comprehension decreased when a book contained more than the mean number of hotspots (a hotspot is a clickable spot that links to additional content). They also found that students needed to know how to use the hotspots appropriately in order for them to have a positive impact on vocabulary and comprehension. Implications from this study relate to the need for explicit instruction in literacy skills beyond those taught with traditional printed text. The kindergartners in the study were successful after having been taught how to use the technology and having had more practice using it for the purpose of reading comprehension.
In sum, several researchers have suggested that online reading comprehension skills and strategies be taught in the younger grades to prepare students for the types of online reading and research they will likely participate in as they progress through primary and secondary school (Colwell et al., 2013; Forzani, 2018; Zawilinski et al., 2019). There is a need for additional research, particularly in the area of online text evaluation with younger students.
Methodology
The current study explored students’ processes for evaluation while reading online texts, using a qualitative research design. Internet Reciprocal Teaching (IRT) was implemented to examine any relationships between IRT and students’ use of evaluation strategies. This section identifies the participants, procedures, data collection, data analysis, and steps taken to minimize researcher influence and bias. These methods were used to explore the following research questions:
- Do second grade students use evaluation strategies while reading online text?
- How does Internet Reciprocal Teaching assist second grade students’ evaluation processes when reading online text?
Participants
This study utilized a convenience sample of twenty-four second grade students at a school in a suburban community in a Midwestern state. At the school, 32.5% of students received free or reduced-price lunch as of the 2020-2021 school year. Of the 24 participants, 71% were White, 25% were Black, and 4% were Hispanic. Additionally, based on Fall 2021 benchmark assessments, 42% of participants met the grade level benchmark for reading and 58% did not. Each student in the classroom had their own Chromebook to use at school. Of the 24 participants, seven were randomly selected for in-depth data analysis, and these participants’ data were analyzed until saturation was reached (Corbin & Strauss, 2015; Glaser & Strauss, 2017). The seven participants selected for in-depth data analysis are presented in Table 1, which also shows each participant’s first grade learning mode (online, hybrid, or homeschool).
Table 1
Participants Selected for In-Depth Data Analysis
Participant (pseudonyms)* | Reading Level as Determined by Fall Benchmarking Assessments | 1st Grade Learning Mode |
---|---|---|
Kate | Above Level | Online |
Lucas | At Level | Hybrid |
Ava | Above Level | Online and Hybrid |
Noah | At Level | Online |
Lily | Below Level | Online |
Jayden | Below Level | Hybrid |
Sami** | Below Level | Homeschool |
Note. Reading levels were determined by Fall benchmarking assessments: <56 wpm = Below Level; 56-101 wpm = At Level; >101 wpm = Above Level. First grade learning modes: Hybrid = ½ week in person, ½ week virtual learning; Online = 100% virtual learning.
* All children’s names are pseudonyms.
** Sami’s internet experience differed from that of her peers because students enrolled in the homeschool program did not have access to their own computers throughout the first grade year as students enrolled in the hybrid or online programs did.
As a final note, participants in this study (with the exception of Sami, who was homeschooled) participated in 100% online learning at some point during their first grade year due to the COVID-19 pandemic. Beginning with the school closures in March 2020, the district of the participating school provided Chromebooks to all students and hotspot internet access to those who needed it. During the participants’ first grade year, families had the option to enroll in hybrid (half in-person, half online) learning or 100% online learning, and at points throughout the year the district moved to 100% online learning for everyone. As a result of these various enrollment styles, the participants in this study had over a year of experience using technology and the internet daily at home or in school, not including any additional experience they gained from technology-related activities that were not associated with school. During the time of this study, all participants were enrolled in standard enrollment (attending school in person daily) and still had daily access to their own Chromebooks at school and at home.
Procedures for Using IRT with Second Graders
The IRT process took place in three phases: Phase One, Teacher-Led Instruction; Phase Two, Collaborative Modeling; and Phase Three, Collaborative Inquiry (Leu et al., 2008). An example of a completed task is: “Find three websites that would give you more information about the moon’s phases. How did you select those websites? How did you know those websites would be relevant to your question?” A task like this encouraged students to practice using relevant search terms, scanning search results, and evaluating the relevance of websites based on their content. Tasks in Phase Two in particular focused primarily on the online reading comprehension strategy of evaluation, since that was the focus of this study. Because many online reading skills and strategies are interrelated, some sessions also addressed locating, synthesizing, and communicating; however, the majority of the lessons focused on evaluation in conjunction with the other skills.
The checklists recommended by Leu et al. (2008) were utilized as a guide to help determine when students were ready to move on to the next phase of IRT (pp. 343-346). The Phase One checklist included items related to student mastery of computer basics (logging on and off, copying and pasting, opening new windows and tabs, saving files, etc.), web searching basics (locating a search engine, using keywords, using the address window, using the refresh, back, and forward buttons, etc.), and general navigation basics (opening and closing tools, minimizing and maximizing the webpage, and moving between tabs). The email basics section of the Phase One checklist was not used as part of this study because it did not pertain to the tools students used. The Phase Two checklist included the online reading comprehension skills: understanding and developing questions, locating information, critically evaluating information, synthesizing information, and communicating information.
The IRT sessions took place during the literacy or science block and ranged from approximately 20 to 60 minutes. All research tasks were related to the science and writing curriculum used at the school. During each session, students had access to their own Chromebook, which included the Google Read & Write extension. The “Hover Speech” tool in Google Read & Write allowed students to hover over text with their cursor and hear it read aloud if they chose. This tool was used to assist students in reading text that may have been above their reading level.
Participants used the search engine Google with the Safe Search setting enabled for all internet research tasks. Google with Safe Search was selected in order to provide access to a wide variety of search results and promote evaluation skills while also filtering out content that is inappropriate for children. Anuyah et al. (2019) found that child-oriented search engines such as KidzSearch and Kidrex limited the number of results when students attempted to locate information related to their coursework. Limited results can lead to frustration for young students if they cannot find the information they are searching for (Anuyah et al., 2019; Druin et al., 2009).
Google with Safe Search included results from less reliable websites, such as Wikipedia, in addition to educational websites (Anuyah et al., 2019). In the context of this study, the inclusion of both more and less reliable websites was not a drawback because it allowed students to practice evaluating for relevance and reliability. Google with Safe Search, like other search engines such as KidzSearch, Kidtopia, and Kidrex, also allows ads. Ads were likewise not a drawback within the context of this study because students learned how to identify bias in an author’s purpose, another key skill within online text evaluation. While Google with Safe Search does not include elements that may make searching easier for young children, such as larger fonts and icons, fewer search results presented on a page, and easier options for entering search terms (Druin et al., 2009), it provided features that better matched the context of this study than other child-centered search engines. Google with Safe Search was an appropriate tool for this study because it offers a larger variety of search results, opportunities for evaluation, and assistive searching while filtering more inappropriate content than a standard Google search (Anuyah et al., 2019).
Data Collection
Data sources included interviews, observations, video recordings of whole class sessions, video and screen recordings of student work sessions, and artifacts of student work. The study was approved by the Institutional Review Board (IRB) to ensure ethical standards were maintained throughout the research process. Informed consent forms were obtained for all participants prior to beginning data collection. Data were collected over a period of about eight weeks in the fall of 2021. In addition, a research journal was kept to record details regarding procedure, data collection, and data analysis.
Semi-structured interviews were conducted with each participant prior to beginning the IRT process as well as following the final IRT phase and project completion (Corbin & Strauss, 2015). All interviews were recorded and transcribed prior to analysis. The purpose of the initial interview was to gain insight into students’ experience with using the internet and the evaluation strategies they may or may not have employed while reading on the internet to answer a research question. During the interview, students were asked to use the internet to answer two questions pertaining to the science curriculum: “What are the names of the different types of clouds?” and “What is the difference between cirrus and stratus clouds?” Students were asked to think aloud (Afflerbach, 2000; Pressley & Afflerbach, 1995) as they tried to answer these questions using the search engine Google with Safe Search enabled, in order to observe the actions they took and the processes they went through while completing the task. Students had access to the Google Read & Write extension, which allowed them to use the “Hover Speech” feature to assist with reading, and Screencastify was used to record students’ actions on the computer. The purpose of the final interview was to hear students’ perceptions of the IRT process and project as well as to provide another opportunity to observe the strategies students used while researching a topic on the internet after having received instruction on using the strategies.
The participants were grouped heterogeneously based on their internet reading skills and reading levels. Internet reading skills were assessed through the initial interview as well as classroom observation. Reading levels were determined using the results from the school’s reading screening assessment, Fastbridge CBMreading, which measures students’ word recognition and fluency on a grade-level reading passage.
Observations took place during small group work time in all phases of the IRT process to record descriptive reflections regarding student participation in the IRT process, evaluation strategies employed while reading on the internet, and other observations related to students’ interactions with each other, the teacher, and paraprofessionals while reading online. In combination with field note reflections from observations, whole class instruction as well as participants’ individual and small group work processes were video recorded. The video and screen recordings were captured using Screencastify.
Data Analysis
Data were analyzed using the grounded theory approach to explore the processes and actions students take while participating in the IRT model for teaching online reading comprehension strategies (Corbin & Strauss, 2015; Glaser & Strauss, 2017). Data were coded in three phases, and the constant comparative method was utilized. Data analysis began immediately during data collection to allow for theoretical sampling. Analytic memos were also created to support data analysis (Corbin & Strauss, 2015; Glaser & Strauss, 2017; Miles et al., 2020). The data were analyzed until saturation was reached.
In the first phase of coding, the teacher researcher analyzed data through open coding (Corbin & Strauss, 2015; Glaser & Strauss, 2017; Miles et al., 2020). All data were transcribed to allow for coding to take place. The purpose of this phase was to analyze data line by line to identify concepts in the data (Corbin & Strauss, 2015; Glaser & Strauss, 2017; Miles et al., 2020). Corbin and Strauss (2015) recommend that analysis take place concurrently with data collection; therefore, coding began as soon as the first interview was completed and transcribed, allowing for theoretical sampling (Corbin & Strauss, 2015; Glaser & Strauss, 2017). Analytic memos were recorded to describe concepts, how concepts were related to one another, and the researcher’s thinking about concept relationships (Corbin & Strauss, 2015; Glaser & Strauss, 2017; Miles et al., 2020).
The second phase of coding focused on axial coding, which develops and provides additional explanation and examples of each concept (Corbin & Strauss, 2015). Concepts were compared to other concepts to determine similarities and differences (Corbin & Strauss, 2015; Glaser & Strauss, 2017). This allowed the teacher researcher to develop each concept further and make connections between concepts. As in the first phase of coding, analytic memos were used to describe the process for analyzing concepts and pose questions for future theoretical sampling and analysis (Corbin & Strauss, 2015; Glaser & Strauss, 2017). This phase of analysis continued until the teacher researcher believed saturation had been reached because no new concepts emerged from the data (Corbin & Strauss, 2015; Glaser & Strauss, 2017).
In the final phase of coding, the teacher researcher identified core categories based on the concepts already outlined (Corbin & Strauss, 2015). These core categories summarized the main ideas of the research on using the IRT model to teach evaluation strategies to second grade students. The teacher researcher then reviewed previous memos and concepts and described a possible theory to explain the relationships between concepts and core categories. All coding categories are described in the Code Book in the Appendix.
Findings
An analysis of data from interviews, observations, class videos, screen recordings, and artifacts took place in three phases. From this analysis, several themes emerged related to IRT’s relationship with students’ use of evaluation strategies while reading online text, the criteria students used to evaluate for credibility, and students’ roles and comfort levels while teaching and learning from peers.
Second Grade Students’ Use of Evaluation Strategies for Relevance and Credibility
The data show that students did possess some evaluation strategies prior to beginning IRT. For instance, during the initial interview, Lucas used link titles to evaluate which link would be relevant to click on in order to find the answer to the research questions. However, no students demonstrated evaluation for credibility during the initial interviews. The data that were analyzed provide insights into how students’ evaluation strategies increased during and after IRT, resulting in the first theme to emerge from these data: IRT may be related to an increase in students’ use of evaluation strategies while reading online text.
When comparing data from the initial interviews, IRT sessions, and final interviews, the frequency with which participants used evaluation strategies while reading online text increased. During Phases 1 and 2 of IRT, students received instruction on evaluation strategies along with other online reading comprehension strategies. During these phases, as well as in Phase 3 and the final interviews, students were observed implementing strategies to evaluate for both relevance and credibility.
Evaluating for Relevance Following IRT: Comparing the Research Question and the Title
During the instructional phases of IRT, students were taught to ask themselves, “Is it helpful?” when reading online text or when determining which link to click. Throughout the IRT phases and in final interviews, students were observed evaluating for relevance by using link titles, relevant search terms, and reflecting on their research question.
Throughout the IRT process, students continued to use link titles as a way to evaluate for relevance prior to selecting a webpage to read. Students scanned search results and read link titles before clicking on a link, using these titles to determine if the website would provide helpful information. Many times, the link titles that participants determined were relevant aligned with the search terms a student used. For instance, in the final interviews, Lily searched “Blizzards for kids” to find more information about blizzards, and the link that she selected matched closely with those search terms.
Participants also demonstrated the use of link titles to evaluate relevance prior to the final interviews. For example, in Phase 2 Lesson 1, Noah, AJ, and PK used a link title to evaluate the relevance of a website they tried to use to answer their research question, “What is the wind speed in a tornado?”
Teacher Researcher (TR): And why would that one be helpful?
Noah: Because it says what is the average wind speed inside a tornado.
In the final interview, Noah explained how he selected one website over others by using the link title and evaluating its relevance for answering his research question about what causes a blizzard.
Noah: [scrolls down results page] I go down to…Blizzards Causes and Effects [points to link with this title], What Makes a snowstorm a blizzard…[points to link with this title], [scrolls up page, clicks link titled “How Do Blizzards Form?”]
TR: What made you decide to click that?
Noah: Because it said, “How do blizzards form?” and that is the same thing as- that’s the same thing as “What causes blizzards?” because it’s how it’s made.
At times, students also determined that a website was not helpful. One way they did this was by reflecting on their research question. In Phase 1 Lesson 4, Lucas and Noah were searching for more information about lightning. They clicked on a link to a website called “Lightning Forms,” which described software titled “Lightning Forms” rather than the weather event:
[Reading information from webpage] Lightning forms help you to… [continues reading in head]
Noah: Well, that wasn’t helpful.
Noah immediately recognized that the website was not talking about the type of lightning he intended to research. By evaluating for relevance, Noah did not spend much time reading the website, and was able to go on and find other helpful websites.
Evaluating for Credibility: Examining the URL, Ads, Author(s), and Background Knowledge of the Website
Strategies for evaluating credibility also increased following IRT instruction. In the initial interviews, no participants demonstrated evaluating for credibility while completing the research task. However, in the final interviews, each of the seven participants whose data were analyzed in depth evaluated for credibility in some way. For example, Ava, who stated she had “never thought about [evaluating for credibility]” in the initial interview, explained why she looked into the author or website to determine if the information was trustworthy.
TR: What about what tips would you give a friend to decide if a website is helpful or trustworthy?
Ava: Look for the About Us and it will tell you what it is all about and who the author is.
TR: And how will that help you know if something was trustworthy?
Ava: Because if it said like- something like that one website we looked at, that it would let people change like anything on the website, like that would tell you that it’s not trustworthy because people might have changed that and you’re just reading the wrong thing.
To evaluate for credibility, participants used several different types of criteria. They sought out more information about the website or author, used background knowledge about a particular website, looked at the URL, identified the number of ads on a webpage, and used other miscellaneous criteria.
Confirming Author or Website Credibility. During Phases 2 and 3, students demonstrated strategies for evaluating credibility by confirming the author’s or website’s credibility.
TR: Anything else you learned?
Ava: Um, well, uh [opens new tab] Well like uh, I’m just going to go to a website like a random one. [clicks search history suggestion “tsunami destruction for kids,” clicks Britannica link] I’ll go on this one. Like, I never really knew you would have to click that before you read [points to hyperlink titled “About Us”]
TR: What is that?
Ava: About us. See if you click on it [clicks link] it tells you stuff about it. It tells you that it’s helpful or trustworthy.
This included looking for the “About” section of a website or looking for the author on a webpage. As seen in the above excerpt, Ava explained this strategy during the final interview as something that she had learned through the IRT process.
Analyzing URLs. One strategy that participants used frequently to evaluate for credibility was analyzing the URL endings. During Phase 1, students received some instruction on the meanings of URL endings such as .com, .org, .edu, and .gov. This became a strategy that they used while scanning search results pages and determining which link to click on.
Ava: [scrolls down page] I’m going to see if there’s any with .edu. Oh there! [clicks link titled “How do blizzards form?” from UCAR.edu]
TR: So how does .edu help you again?
Ava: Uh it’s from a college or university that normally means it’s from someone that knows a lot about it.
As seen in the above excerpt, Ava scanned the URL endings on the results page as a way to quickly evaluate for credibility before selecting a website in the final interview, describing how she was specifically looking for a website with a .edu ending.
Questioning Credibility of Websites with Ads. Along with evaluating URL endings, using the number of ads on a website to evaluate for credibility was one of the more frequent strategies that participants used in Phase 3 and the final interviews. Participants frequently questioned the credibility of a website if there were multiple ads on a page.
TR: How do you know if these websites are trustworthy?
Noah: Sometimes if they have a lot of ads, that can mean they're not trustworthy.
Questions about the credibility of websites with multiple ads did not come up only during the final interviews; they were also common during Phase 3 of IRT, when students searched for websites that could help them answer their research questions. Many dialogues were similar to this example between Lucas, Lily, and KJ during Phase 3 Lesson 6, in which they discussed whether or not the website they had selected was reliable based on the number of ads it had. As in this case, the number of ads was not always the sole criterion by which a group deemed a website untrustworthy, but it did raise questions that prompted the group to evaluate for credibility further or select a new website.
KJ: This has a lot of ads, are you sure it’s trustworthy?
Lucas: No, this is a lot of ads. Definitely not. Mine only has one ad.
KJ: Mine had way more than one ad.
Lucas: Mine has one. [Looks at Lily’s computer] Yours has two! Yours has two ads.
In the final interview, as shown in the earlier excerpt, Noah discussed how ads helped him determine the trustworthiness of a website. In addition, most students equated a high number of ads with a lack of trustworthiness.
Using Background Knowledge of the Website. As participants gained more experience on the internet, they began to recognize some websites that they had previously evaluated and found to be credible. For example, National Geographic Kids’ and NASA’s websites were frequently used in Phases 1 and 2 of IRT to practice online reading comprehension strategies, and the teacher researcher explained why these websites were trustworthy. Later, in Phase 3 and in the final interviews, participants used this background knowledge to evaluate for credibility. Because they had previously discussed that these websites were reliable, they intentionally chose them to get more information on their topics. Other times, students saw a familiar website name in a link title on the search results page, which led them to select the link based on their previous experience with the website, as in this exchange with KJ about National Geographic.
KJ: Geographic [clicks link titled “Blizzard National Geographic Society”]
TR: Okay so why'd you pick that?
KJ: Because National Geographic is helpful for me.
In the final interview, when KJ noticed the words “National Geographic” in a link title, she associated it with the website she had previously had success with in terms of relevance and credibility. The website she selected was actually “Blizzard National Geographic Society,” not the National Geographic site she had previously worked with. Still, because she made the association, she evaluated the credibility of the website before clicking the link, but she did not look into its credibility any further after clicking it.
Other Strategies for Evaluating for Credibility. There were a few instances in which participants used or mentioned other strategies for evaluating the credibility of a website. In Phases 1 and 2, students received instruction on strategies for evaluating credibility, one of which was confirming information on one website with another website. While no students were observed using this strategy unprompted, some students, including Lily, Kate, and Sami, did suggest it as a strategy that could be used to evaluate for credibility when asked in whole group or interview settings. Here, Lily described this strategy during the final interview.
TR: How would they know if a website is trustworthy?
Lily: Um, you could look on it and you could go to a different website and see if that one says the same thing.
In the final interviews, one other strategy was observed that had not been taught, but was similar to a strategy students used to evaluate for relevance. When looking for information about blizzards during the final interview, Lucas used the link title to evaluate for credibility. He determined that the link must lead to a reliable website because it had “Trusted Choice” in the title, but did not evaluate any further.
Lucas: [clicks link titled “How does a Blizzard Form? - Trusted Choice”] It says Trusted Choice.
TR: So, what does that make you think?
Lucas: It might be trustworthy.
The language Lucas used in this example suggests that he intended to look into the trustworthiness of the website further, as he did not indicate that the title alone determined the credibility of the website. However, in this instance, and in many cases throughout Phase 3 and the final interviews, when participants evaluated for credibility on the search results page (using link titles, URLs, or website familiarity), they often did not continue to evaluate for credibility once they were on the website, except when they noticed many ads on the page.
Discussion
The current study built upon research by Colwell et al. (2013), Henry et al. (2012), and Leu et al. (2008) by further exploring IRT as a strategy for teaching online reading comprehension skills. The study also explored Forzani’s (2018) recommendation to begin online reading comprehension instruction at a younger age. Finally, this study was designed to collect additional information about the current online reading comprehension skills of second grade students as well as to provide insights into teaching online reading comprehension skills, specifically evaluation, at this age. The results showed that second grade students already possess some evaluation strategies and that IRT may be an effective way to teach evaluation strategies to second grade students.
Positive Change of Students’ Evaluation Skills Through IRT
In the initial interviews, many participants noted that their experience using the internet in an educational setting had primarily involved clicking links provided by their teachers, and none indicated having completed an online reading research task in the past. This suggests that they likely received little to no online reading comprehension skill instruction prior to this project and thus may not have known how to implement online reading comprehension strategies fully. Some participants also mentioned that they occasionally searched for videos or games on the internet using a search engine. This seemed to align with the skills some participants showed in the initial interviews, including locating information and evaluating the relevance of information. For example, Lucas typed search terms and read the titles of links to decide whether he should click them to find information to answer the research question. However, other students were less successful: Lily was not able to complete the task beyond typing search terms into Google, and Sami knew she could use the internet to find the answer to the research question but did not know how. This is similar to Druin et al.’s (2009) findings that many young children were familiar with using Google but were not always able to complete a research task using Google prior to instruction on how to read online text. In the final interviews, all students demonstrated the ability to use search terms to find information to answer their research question, in addition to other strategies for evaluating for relevance such as reading link titles and scanning web pages.
In the initial interviews, no students evaluated the online texts for credibility. From these data, it may be inferred that students did not evaluate for credibility because they had not yet learned how to use this strategy. Following instruction, all students were able to demonstrate some level of evaluation for credibility during the final interviews. Examples from the final interviews include Ava locating the “About Us” section of a website to examine the reliability of the author and website, Ava using URL endings to evaluate credibility, and Noah and Lucas pointing out ads on a webpage as a reason they questioned the credibility of the source. Prior to IRT, none of the participating students expressed or demonstrated any understanding of how to evaluate for credibility; following IRT, all seven of the students selected for in-depth data analysis demonstrated this skill in some way. Participants’ use of evaluation strategies following IRT aligns with previous research findings showing that instruction on online reading comprehension strategies improves strategy use (Henry et al., 2012; Kuiper et al., 2008; Wiley et al., 2009).
Students participated in three phases of IRT, which taught all of the online reading comprehension skills but primarily focused on locating and evaluating. This instructional scope aligns with Forzani’s (2018) suggestion of teaching all online reading comprehension skills together, rather than in isolation. During the phases of IRT and in the final interviews, the teacher researcher observed participants evaluating for relevance and credibility as well as modeling these skills for their peers and helping their peers evaluate. In the final interviews, each of the participants selected for in-depth data analysis evaluated for relevance and credibility during the online research task portion of the interview. These data show that IRT may be an effective way to teach online text evaluation skills to second grade students with some internet experience, which aligns with previous research by Leu et al. (2008) and Henry et al. (2012), who used IRT as a method to teach online reading comprehension skills to older students.
Popular Evaluation Strategies: Analyzing the Website URL and Looking for Ads
During the phases of IRT and in final interviews, students used multiple criteria for evaluating for credibility. The most popular criteria were analyzing the website URL, looking for ads, seeking out information about the author or website, and using background knowledge about a website. The two most common strategies were analyzing the website URL (for example, noticing that a URL ends in .edu and knowing this means the website comes from an educational institution and is likely trustworthy) and looking for ads (for example, if a student saw many ads on a website, they might deem it untrustworthy). URL endings were taught in one lesson of IRT, but looking for ads was not.
Students brought up the concern of multiple ads on a webpage themselves and began using this as a common criterion for evaluating for credibility during the rest of the sessions and in the final interviews. Other criteria, such as seeking out more information about the author and using background knowledge about a website, were less popular; these were used primarily by students who had more internet experience and read at a higher level in offline text. Kate, Ava, and Noah were the only participants to mention one of these strategies in the final interviews, and only Ava modeled how to find more information about an author or website through the “About” section of a website. One explanation for why criteria like analyzing URLs and looking for ads may be more common is that they are more straightforward and easier to identify when looking at a website. By contrast, to use the “About” section, students must go through a series of steps and navigate the website to find it, then understand what the section means and have some prior knowledge about the organization or the author’s background.
Another possible explanation for the less frequent use of seeking out more information about the author or website could be that the use of this strategy is related to offline reading level or internet experience. Previous research found that high internet experience was often a more accurate predictor of the use of online reading comprehension strategies than prior knowledge of a topic (Coiro, 2011). Kate, Ava, and Noah were the only participants to mention looking into the author’s credibility as an evaluation strategy in their final interviews. Ava was the only participant to model the use of this strategy in the final interview, though Kate and Noah also used it during Phase 3 of IRT. Kate and Ava both read above level in offline texts, and Noah read on level. Ava demonstrated higher internet experience in the initial interviews, and Noah and Kate both reported having used the internet to search for content prior to IRT. All three were enrolled in the all-online program for at least part of first grade. It is possible that their experience or reading level was related to their use of this strategy; however, there are not enough data to confirm this.
Implications
Prior to this study, the majority of online reading comprehension studies included older participants in fourth grade or above. The results from this study provide some initial insights into younger students’ thought processes and interactions with online text. Previous research suggested beginning to teach online reading comprehension skills at a younger age (Colwell et al., 2013; Forzani, 2018; Zawilinski et al., 2019). In this study, second-grade participants were able to learn some online text evaluation skills, an important part of online reading comprehension. Considering that critical evaluation has been the new literacy for online reading comprehension that students lack the most (Forzani, 2018; Leu et al., 2014; Wiley et al., 2009), the results of this study underscore the importance of teaching these strategies at a younger age, an emphasis consistent with Duke and Cartwright’s (2021) Active View of Reading model. The instruction also supported the second-grade participants’ development of disciplinary literacy skills, as suggested by Bruner and Hutchison (2023).
Further research on online reading comprehension with younger students may provide a clearer understanding of how to teach these skills in the early grades. In this study, students improved their locating and evaluating skills throughout the IRT process. However, synthesizing and communicating remained difficult because of the impact of students’ reading levels. Future research may focus on what supports younger learners need to fully comprehend online text. Additionally, researchers may consider studying which skills are beneficial to learn before becoming a fluent reader and which skills may develop alongside offline reading comprehension.
Additionally, this study took place over the course of eight weeks from the initial interviews to the final interviews. It is not clear whether students retained the skills they learned during IRT beyond the eight-week period. Colwell et al. (2013) observed that students did not continue using the strategies they had learned long after instruction ended and that they required reteaching and further practice. Future research may follow up with younger participants in the weeks and months following the IRT sessions to determine which skills are retained and which are not.
Finally, researchers could continue to examine the impact of the COVID-19 pandemic on students’ internet and technology skills. All participants in this study (with the exception of Sami, who was homeschooled) participated in 100% online learning at some point during their first-grade year due to the COVID-19 pandemic and had access to their own Chromebook at home. As a result, these participants could have had more experience using the internet and technology for school-related purposes than second graders who did not experience online learning or attend school during the pandemic. Researchers could compare the technology and critical analysis skills that students who attended school during the COVID-19 pandemic bring to the classroom with those of students who did not. This research may also examine the digital skills of children who are “digitally native,” or who have grown up surrounded by technology and access to the internet. Such information may help identify the prerequisite skills that need to be taught before beginning instruction in online reading comprehension.
Limitations
There are several limitations of this study. First, all participants had access to their own school-provided Chromebook both at school and at home. In addition, all participants, with the exception of Sami, had participated in 100% online learning at some point during their first-grade year due to the COVID-19 pandemic and its effect on the district’s learning models. This prior experience and access to Chromebooks and the internet could have affected the skills participants possessed before IRT began. It also likely meant that less time was spent on instruction in basic computer and internet navigation than would be required with participants who have not had the same technology experience. The results may not be generalizable to populations with less regular access to a computer.
Additionally, the IRT process took place over the course of only six weeks, not including time for the initial and final interviews. Students may be more likely to apply skills to online research tasks with exposure to these practices over a longer period of time (Colwell et al., 2013). Another limitation is that the teacher researcher had to intervene more than Leu et al. (2008) recommended in Phase 3 of the IRT process, as many students needed support with vocabulary and staying on task. It is possible that students would not have been able to complete the research task to the same degree without this assistance. Finally, this study included a relatively small sample of students. For the results to be more generalizable, a larger sample would be needed, including a participant population that is more diverse in internet and technology experience, reading levels, and other characteristics.
Conclusion
Overall, IRT was effective in improving second-grade students’ locating and evaluating skills. Additional tools or teaching may be needed to support students at this age with synthesizing, communicating, and collaborating. Previous research noted that older students struggled with the online reading comprehension skills of locating, evaluating, synthesizing, and communicating (Forzani, 2018). It is possible that if instruction in these skills begins at a younger age, students will be able to demonstrate them more effectively as they get older and as the skills become a more necessary part of regular classroom instruction. As researchers continue to explore teaching online reading comprehension to younger students, further guidance on how to teach these skills efficiently and navigate the accompanying challenges may be helpful. Such guidance could support teachers in planning instruction that benefits students as they continue to read on the internet throughout their education.
References
Afflerbach, P. (2000). Verbal reports and protocol analysis. In M. L. Kamil, P. B. Mosenthal, P. D. Pearson, & R. Barr (Eds.), Handbook of reading research, volume III (pp. 163-207). Routledge.
Anuyah, O., Milton, A., Green, M., & Pera, M. S. (2019). An empirical analysis of search engines’ response to web search queries associated with the classroom setting. Aslib Journal of Information Management, 72(1), 88-111. https://doi.org/10.1108/AJIM-06-2019-0143
Bruner, L., & Hutchison, A. (2023). Rethinking text features in the digital age: Teaching elementary students to navigate digital stories, websites, and videos. The Reading Teacher, 76(6), 747-756. https://doi.org/10.1002/trtr.2197
Christ, T., Wang, X. C., Chiu, M. M., & Cho, H. (2019). Kindergarteners’ meaning making with multimodal app books: The relations amongst reader characteristics, app book characteristics, and comprehension outcomes. Early Childhood Research Quarterly, 47, 357–372. https://doi.org/10.1016/j.ecresq.2019.01.003
Coiro, J. (2011). Predicting reading comprehension on the internet: Contributions of offline reading skills, online reading skills, and prior knowledge. Journal of Literacy Research, 43(4), 352–392. https://doi.org/10.1177/1086296X11421979
Coiro, J., & Dobler, E. (2007). Exploring the online reading comprehension strategies used by sixth-grade skilled readers to search for and locate information on the internet. Reading Research Quarterly, 42(2), 214–257. https://doi.org/10.1598/RRQ.42.2.2
Colwell, J., Hunt-Barron, S., & Reinking, D. (2013). Obstacles to developing digital literacy on the internet in middle school science instruction. Journal of Literacy Research, 45(3), 296-324. https://doi.org/10.1177/1086296X13493273
Corbin, J., & Strauss, A. (2015). Basics of qualitative research: Techniques and procedures for developing grounded theory (4th ed.). SAGE Publications, Inc.
De Jong, M. T., & Bus, A. G. (2004). The efficacy of electronic books in fostering kindergarten children’s emergent story understanding. Reading Research Quarterly, 39(4), 378-393. https://doi.org/10.1598/RRQ.39.4.2
Druin, A., Foss, E., Hatley, L., Golub, E., Guha, M. L., Fails, J., & Hutchison, H. (2009). How children search the internet with keyword interfaces. Proceedings of the 8th International Conference on Interaction Design and Children, 89-96. https://doi.org/10.1145/1551788.1551804
Duke, N. K., & Cartwright, K. B. (2021). The science of reading progresses: Communicating advances beyond the simple view of reading. Reading Research Quarterly, 56(S1), S25-S44. https://doi.org/10.1002/rrq.411
Forzani, E. (2018). How well can students evaluate online science information? Contributions of prior knowledge, gender, socioeconomic status, and offline reading ability. Reading Research Quarterly, 53(4), 385-390. https://doi.org/10.1002/rrq.218
Glaser, B. G., & Strauss, A. L. (2017). The discovery of grounded theory: Strategies for qualitative research. Routledge. (Original work published 1967).
Henry, L. A., Castek, J., O’Byrne, I., & Zawilinski, L. (2012). Using peer collaboration to support online reading, writing, and communication: An empowerment model for struggling readers. Reading & Writing Quarterly, 28(3), 279-306. https://doi.org/10.1080/10573569.2012.676431
Kiili, C., Leu, D. J., Utriainen, J., Coiro, J., Kanniainen, L., Tolvanen, A., Lohvansuu, K., & Leppänen, P. H. T. (2018). Reading to learn from online information: Modeling the factor structure. Journal of Literacy Research, 50(3), 304-334. https://doi.org/10.1177/1086296X18784640
Kuiper, E., Volman, M., & Terwel, J. (2008). Developing web literacy in collaborative inquiry activities. Computers & Education, 52, 668–680. https://doi.org/10.1016/j.compedu.2008.11.010
Leu, D. J., Coiro, J., Castek, J., Hartman, D., Henry, L. A., & Reinking, D. (2008). Research on instruction and assessment in the new literacies of online reading comprehension. In C. Collins Block, S. Parris, & P. Afflerbach (Eds.), Comprehension instruction: Research-based best practices (2nd ed., pp. 321-346). Guilford Press.
Leu, D. J., Forzani, E., Rhoads, C., Maykel, C., Kennedy, C., & Timbrell, N. (2014). The new literacies of online research and comprehension: Rethinking the reading achievement gap. Reading Research Quarterly, 50(1), 37–59. https://doi.org/10.1002/rrq.85
Leu, D. J., Kinzer, C. K., Coiro, J., Castek, J., & Henry, L. A. (2017). New literacies: A dual level theory of the changing nature of literacy, instruction, and assessment. The Journal of Education, 197(2), 1-18. https://doi.org/10.1177/002205741719700202
Miles, M. B., Huberman, A. M., & Saldaña, J. (2020). Qualitative data analysis: A methods sourcebook (4th ed.). SAGE Publications, Inc.
National Assessment of Educational Progress. (2019). NAEP report card: Reading. The nation’s report card. Retrieved September 6, 2021, from https://www.nationsreportcard.gov/reading/nation/groups/?grade=4
Pressley, M., & Afflerbach, P. (1995). Verbal protocols of reading: The nature of constructively responsive reading. Routledge.
Sung, Y.-T., Wu, M.-D., Chen, C.-K., & Chang, K.-E. (2015). Examining the online reading behavior and performance of fifth-graders: Evidence from eye-movement data. Frontiers in Psychology, 6, Article 665. https://doi.org/10.3389/fpsyg.2015.00665
Wiley, J., Goldman, S. R., Graesser, A. C., Sanchez, C. A., Ash, I. K., & Hemmerich, J. A. (2009). Source evaluation, comprehension, and learning in internet science inquiry tasks. American Educational Research Journal, 46(4), 1060–1106. https://doi.org/10.3102/0002831209333183
Zawilinski, L., Forzani, E., Timbrell, N., & Leu, D. J. (2019). Best practices in teaching the new literacies of online research and comprehension. In L. Mandel Morrow & L. B. Gambrell (Eds.), Best practices in literacy instruction (6th ed., pp. 337-358). Guilford Press.
Appendices
Code Book
The following codes were used during the data analysis process to code video, screen-recording, and interview transcriptions.
Appendix A
Codes Used in Initial Interviews Only
Code | Definition | Examples |
---|---|---|
First Grade Learning Mode | Mode of first-grade learning (Hybrid: half in-person, half online; Online: 100% online schooling; Homeschool) | |
Internet Experience: Educational Use | Student mentions experience with using the internet to complete school-related activities. | |
Internet Experience: Recreational Use | Student mentions experience with using the internet to complete non-school-related activities. | |
Comfortability | Student mentions their level of comfort with using the internet or teaching others to use the internet. | |
Appendix B
Codes Used Throughout all Interviews and Sessions
Code | Definition | Examples |
---|---|---|
Adult Assistance with Internet/Technology | Student receives or mentions receiving assistance from an adult to navigate the internet or use technology. | |
Location | Student displays or discusses location skills such as using the search bar, typing in the search bar, finding links, and clicking links. | |
Evaluating Next Steps | Student evaluates progress toward the research goal to determine what to do next (finished, go back, re-search, adjust search terms, etc.). | |
Evaluating for Relevance | Student determines whether or not a link or website is or will be helpful for answering the research question. | |
Evaluating for Credibility | Student determines whether or not a website or author is trustworthy. This includes various strategies for evaluating credibility: using the website URL, using ads, confirming the credibility of the author/website, using prior knowledge, or other miscellaneous strategies. | Using Website URL; Using Ads; Confirming Author/Website Credibility; Miscellaneous Strategies |
Navigation | Student engages in physical actions associated with navigating the internet (e.g., scrolling, using the back button, “x”-ing out). | |
Synthesizing | Student brings information together from multiple sources. | |
Communication | Student demonstrates communication skills (e.g., verbally stating the answer or writing on the response sheet). | |
Reading Image Results | Student gathers information from a picture rather than a text-based source, or pauses at and discusses a picture. | |
Peer Questioning | Student asks for help from a peer. | |
Adult Questioning | Student asks for help from an adult or asks an adult a question. | |
Providing Help to a Peer | Student provides modeling or assistance to a peer in a collaborative small-group activity (not whole group). | |
Collaboration | Students work together to complete a task (asking questions about what they want to do, delegating roles, etc.). | |
Prior Knowledge | Student references prior knowledge about a topic; the prior knowledge may be accurate or inaccurate. | |
Developing Research Questions | Students discuss or create research questions for online research. | |
Troubleshooting | Students work to fix a problem they have encountered while reading online. | |
Spelling Strategies | Strategies students use to compensate for not knowing how to spell a word. | |
Reading Level Impact | Instances when a student’s reading level impacts their ability to understand or read a text they encounter. | |
Teacher Interaction | Teacher asks questions, prompts, or checks in with students. Does not include when the teacher assists with technology. | |
Reading Online Text | Students are engaged in reading online text or having it read to them by a peer, adult, or Google Read & Write’s Hover Speech feature. | |
Off Task | Students engage in discussion or work on the computer that is off task. | |
Technology Obstacle | Technology does not work in the way the student expected or thinks it should, causing difficulty with task completion or possibly frustration. | |
Appendix C
Codes Used in Whole Class Sessions Only
Code | Definition | Examples |
---|---|---|
Student Modeling (Whole Group) | Student models an online reading comprehension strategy on the interactive whiteboard in front of the whole class. | |
Teacher Questioning (Whole Group) | Teacher asks the whole class a question or poses a question to a group of students modeling for the whole class. | |
Teacher Modeling (Whole Group) | Teacher models a strategy for students in a whole-group setting (think-alouds, walking through steps, etc.). | |
Appendix D
Codes Used in Final Interviews Only
Code | Definition | Examples |
---|---|---|
Uncertainty | Student expresses being unsure about a question or concept. | |
Technology Tip | Student describes a tip, trick, or strategy that they use to make online reading comprehension easier (could be related to any of the online reading comprehension skills). | |