Group Two, April 16


* **Hutchison, A. & Reinking, D. (2011) Discussion**

Hutchison and Reinking begin their study with five premises drawn from pre-established research: 1) literacy teachers are expected to provide skills for students in the modes of communication; 2) literacy offers opportunities for using ICTs to enhance instruction; 3) integration of technology thus far has been minimal; 4) ICTs require skills that differ from those of print literacy; and 5) literacy teachers should not be the sole instructors for ICTs. The purposes of the research study were to: a) characterize literacy teachers' perceptions about integrating ICTs into instruction; b) clarify and extend findings from previous research pertaining to integrating ICTs into literacy instruction; c) assist policymakers, education leaders, and those involved with professional development in efforts to facilitate the integration of ICTs into instruction; and d) create a benchmark against which evolving trends might be compared in the future.
* **I. RATIONALE FOR THE STUDY**

The six research questions were: 1) "Do literacy teachers report that necessary or useful technologies are available to facilitate the integration of ICTs into instruction, and do they have the technological support to assist with the use of those technologies?" 2) "Which ICTs do literacy teachers believe are important for literacy instruction, and how frequently do they report using them?" 3) "How do literacy teachers conceptualize the integration of ICTs into their instruction? Do their conceptions align more with superficial technological or substantive curricular integration?" 4) "What are literacy teachers' perceptions about the role and benefit of using technology in literacy instruction?" 5) "What do literacy teachers perceive to be the obstacles to integrating ICTs into literacy instruction?" 6) "What characteristics or beliefs are associated with more or less integration of ICTs? In other words, what factors predict teachers' reported integration of ICTs?" (pp. 313-314)

They incorporate past research on integration, position statements from the IRA, and limitations of past studies (i.e., lack of consistency and lack of large-scale studies) into a theoretical framework. The authors outline their rationale for the study, but they do not provide an adequate theoretical framework. Although they posit that a national survey on attitudes and practices of ICT integration will "enlighten teachers, theoretically and practically, about how to facilitate integration of ICTs into literacy instruction" (p. 315), they do not provide an overarching framework for constructing the survey. The position statements from the IRA and NCTE seem to be the basis of their framework. In addition, a clearer definition of "digital literacies," "multimodal literacies," or even "new literacies" would have provided more contemporary terminology than "ICT." I wonder if they deliberately chose not to label or define it? It seems to be a running trend that some researchers do not clearly stamp their work with digital literacies or new literacies, perhaps because there are still so many terms developing under the new literacies umbrella. I AGREE WITH YOUR ANALYSIS THAT THERE IS NOT AN ADEQUATE THEORETICAL FRAMEWORK PRESENTED FOR THE STUDY. I'M SURPRISED THAT RRQ EDITORS ACCEPTED THE MANUSCRIPT WITHOUT THE FRAMEWORK ARTICULATED.

This was survey research. Participants were literacy educators in the US who were members of a state or local council of the IRA. The majority were elementary educators with more than 11 years of teaching experience. A focus group of three classroom teachers provided feedback on the survey construct; the survey was revised based on that feedback and converted to Survey Monkey format. To enhance validity, the survey was piloted, and maximum likelihood exploratory factor analysis was conducted to examine the internal reliability of the survey. The tailored design included an initial contact by email, a second email with the survey, and a reminder, giving participants multiple opportunities to respond and thus increasing participation. Respondents answered on a 4-point Likert-type scale: not at all (0); a small extent (1); a moderate extent (2); and a large extent (3). I am not convinced that the sample was representative; external validity may be a concern due to the sampling procedures. The recursive process of piloting the survey and then tweaking the questions, however, provided greater internal consistency for the questions.
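To make the analysis described above concrete, the sketch below runs an exploratory factor analysis over simulated 4-point Likert responses. This is only an illustration under stated assumptions: the respondent counts, item count, factor structure, and data are all invented here, and the authors ran their actual analysis with their own items and software, not this code.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)

# Simulate 4-point Likert responses (0 = not at all ... 3 = a large extent)
# for 300 hypothetical respondents on 6 hypothetical survey items.
latent = rng.normal(size=(300, 2))             # two assumed underlying factors
loadings = rng.uniform(0.5, 1.0, size=(2, 6))  # hypothetical item loadings
raw = latent @ loadings + rng.normal(scale=0.5, size=(300, 6))
likert = np.clip(np.round(raw + 1.5), 0, 3)    # discretize onto the 0-3 scale

# Maximum-likelihood factor model: recovers factor scores and loadings.
fa = FactorAnalysis(n_components=2)
scores = fa.fit_transform(likert)

print(scores.shape)          # (300, 2): one score per respondent per factor
print(fa.components_.shape)  # (2, 6): estimated loadings of items on factors
```

Checking which items load strongly on which factor is what lets survey researchers argue that groups of questions measure a coherent construct rather than unrelated opinions.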
* **II. CRITIQUE OF RESEARCH METHODS**

Any surveys less than 85% complete were deleted: 1,637 surveys were returned and 196 were omitted due to inadequate completion, resulting in 1,441 usable surveys. The researchers used SPSS and Mplus to handle missing data, and they used section headings to ensure the data addressed their research questions. To determine related factors, the authors used a "hypothesized path model" whose factors included professional development, competency, support, obstacles, stance, and availability. This method gave a synthesized analysis of the data beyond the raw numbers and allowed for a more complete picture of technology integration. As the article acknowledges, however, not all researchers accept causal inferences extrapolated from a path diagram model.
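One way to see what a path model buys you beyond raw numbers is to note that each arrow in the diagram can be approximated as a regression equation. The sketch below is a minimal stand-in using ordinary least squares on simulated data: the variable names echo the factors listed above, but the coefficients, sample size, and structure are hypothetical, and the authors' actual model was estimated in Mplus, not like this.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500  # hypothetical number of teachers

# Simulated standardized factor scores (all values invented for illustration).
prof_dev = rng.normal(size=n)
support = rng.normal(size=n)
availability = rng.normal(size=n)
# Mediator: perceived competency, partly driven by professional development.
competency = 0.6 * prof_dev + rng.normal(scale=0.8, size=n)
# Outcome: reported ICT integration, driven by competency and support.
integration = 0.5 * competency + 0.3 * support + rng.normal(scale=0.7, size=n)

def path_coeffs(y, *predictors):
    """Estimate one structural equation of the path model by least squares."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:]  # drop the intercept

# Each arrow set in the path diagram becomes one regression.
b_comp = path_coeffs(competency, prof_dev)
b_int = path_coeffs(integration, competency, support, availability)

print(b_comp)  # close to the simulated 0.6 effect of prof_dev on competency
print(b_int)   # close to 0.5 (competency), 0.3 (support), ~0 (availability)
```

The point of the example is the critique raised above: the regressions recover associations faithfully, but nothing in the arithmetic licenses the causal arrows; those come from the hypothesized model, which is why some researchers resist causal readings of path diagrams.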
* **III. CRITIQUE OF DATA ANALYSIS**

One limitation was that the number of people who received the survey is unknown, because it was online and was distributed by invitation or posted to an organizational website. The researchers assert that the relatively high number of respondents would produce a very small sampling error. However, I question whether there might be validity issues, since a survey weblink posted on an organizational website might be accessed by anyone, teacher or not. The study extends and clarifies findings from previous studies and establishes a foundation for future work, and the results are clearly rationalized from the data. That said, Survey Monkey is a questionable instrument for collecting these data because the link can be accessed repeatedly: a respondent could complete the survey multiple times and potentially skew the data. An online instrument developed by the authors, or perhaps an email survey, would have been a better choice. The authors' assertion that the data can be read as a glass half full or half empty (p. 330) is interesting. The teachers do seem to express competency and interest in integrating technology but do not quite know how to do it. Although brief mention was made of Mishra and Koehler in the literature review section of the article, the authors did not mention TPACK specifically; a TPACK framework could have been a useful way to frame the discussion and results, and it could go a long way toward helping teachers meet the new Common Core State Standards guidelines.
* **IV. CRITIQUE OF DISCUSSION**
* **I AGREE THAT USING A PASSWORD-PROTECTED WEBLINK WOULD HAVE BEEN MORE SECURE THAN SURVEY MONKEY. THERE IS A SETTING ON SURVEY MONKEY THAT WILL NOT ALLOW THE SAME COMPUTER TO BE USED TWICE FOR THE SURVEY.**


* **V. ADDITIONAL OBSERVATIONS (QUESTIONS/CONFUSIONS?)**

I always have a slight problem with studies that gauge online proficiency and integration using an online survey. Wouldn't this always skew your population and affect your outcomes? I, too, question the use of Survey Monkey for this survey research, especially when the link to the survey was posted on organizational websites. If the researchers don't know how many people actually received the survey, how can they know that everyone who completed it was actually a literacy teacher? That's exactly what I was thinking, too! That's the great thing about reading studies like this, though: we see what other people are doing, and hopefully we will feel that it's all not such a mystery. I agree -- the method for choosing the participants was not the best. A direct email to the literacy teachers (instead of just the presidents of the chapters) would have been better. **I AGREE WITH YOU ABOUT THIS!**