Group April 16


**Mesmer, H.A. (2006) Discussion**


**I. RATIONALE FOR THE STUDY**

The researchers identified three gaps in knowledge in the literature: 1) how teachers navigate all the materials offered to them, 2) what influences teachers' reported use of texts, and 3) how perceptions changed since the Basal 2000 revisions. The focus of this study was just K-3 teachers, and the goal was to give voice to this group. WAS THERE A THEORETICAL RATIONALE? There really was not an overarching theoretical framework driving the rationale; research underscoring each potential influence was included, but the influences seemed isolated rather than integrated as they would be in a theoretical framework. The treatment was more descriptive, as was the section on the text types with their overlaps and individual differences. All of the descriptions did, however, fit the research questions.


**II. CRITIQUE OF RESEARCH METHODS**

The researcher developed a survey that included open-ended and closed-ended items. The items were shared with experts to check for validity. The final survey had 37 items. (This included items beyond those discussed in this article, though... a smart move to perpetuate publishing.) Participants were randomly selected and then geographically stratified based on zip code. WHAT OTHER METHODS WERE USED TO ESTABLISH VALIDITY AND RELIABILITY? (from Hiller)
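For anyone unfamiliar with the sampling terminology, here is a minimal sketch of what geographically stratified random sampling by zip code typically looks like. The membership list, field names, and zip codes below are entirely hypothetical, not drawn from the article:

```python
# Minimal sketch: stratified random sampling by zip-code region.
# The member list and field names here are hypothetical.
import random

members = [
    {"id": 1, "zip": "24060"}, {"id": 2, "zip": "24061"},
    {"id": 3, "zip": "90210"}, {"id": 4, "zip": "90211"},
    {"id": 5, "zip": "10001"}, {"id": 6, "zip": "10002"},
]

# Group members into strata by the first digit of the zip code,
# a rough stand-in for US geographic region.
strata = {}
for m in members:
    strata.setdefault(m["zip"][0], []).append(m)

# Draw randomly within each geographic stratum so every region
# is represented in the final sample.
sample = [random.choice(group) for group in strata.values()]
print(sample)
```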

The authors identified their sample as 'randomly selected,' but participants were randomly drawn from a pre-sampled list of IRA members; it was really a convenience sample with random selection applied within it. They did get a good return rate for mail-back surveys, and they ended up with adequate returns from the two states whose mandates were being investigated as a possible influence on text usage. Other than reporting the population percentages represented in the final survey returns, the authors piloted the survey and dropped 'bad' items, but no other methods to improve validity were included in the methodology description. Some statistical procedures were used to improve reliability, such as Fisher's exact test to correct for small frequencies and the use of three independent raters in the constant-comparative analysis of the open-ended items.
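A quick aside on why Fisher's exact test gets used for small frequencies: it computes the exact probability of a 2x2 contingency table rather than relying on the chi-square approximation, which breaks down when expected cell counts are small. A minimal sketch in Python (the counts below are invented for illustration, not taken from the article):

```python
# Hypothetical 2x2 table: rows = mandate state vs. non-mandate state,
# columns = teachers reporting daily decodable-text use vs. not.
# These counts are made up purely for illustration.
from scipy.stats import fisher_exact

table = [[8, 2],   # mandate state: 8 daily users, 2 non-users
         [3, 7]]   # non-mandate state: 3 daily users, 7 non-users

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
# With cells this small, a chi-square test would be unreliable;
# Fisher's test conditions on the margins and is exact at any sample size.
```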

**III. CRITIQUE OF DATA ANALYSIS**

CAN YOU COMMENT ON THIS? The section of the article labeled data analysis seemed remarkably SPARSE compared to most of the other research reports we've consumed this semester. They did achieve an interrater reliability of 90% on the constant-comparative coding, though (see the description above). The use of pie charts for the frequency data was interesting in that it is a graph format RARELY used in empirical research write-ups. I found them informative, but they needed to be larger and to have word labels for each Likert-scale number span. The clustered bar graph was perfect for the instructional-purposes findings. As I read the discussion of this, I did note that the strategic uses seemed basically common sense, but I guess it's always good to have data to 'prove' it. I was surprised that there were not greater differences between grade levels in uses for different instructional purposes, and I was surprised by the level of freedom generally reported by the teachers -- that more freedom to choose did not produce greater differences in choice. I was also a bit surprised that the mandates had some influence on teachers' choices, but no more than other influences.
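For reference, the 90% interrater figure is presumably simple percent agreement: the number of items all raters coded identically divided by the total coded. A tiny illustration, with invented code labels (the article does not report its coding scheme):

```python
# Percent agreement across three raters on open-ended responses.
# The code labels here are hypothetical, purely for illustration.
rater_a = ["fluency", "phonics", "comprehension", "phonics", "fluency"]
rater_b = ["fluency", "phonics", "comprehension", "decoding", "fluency"]
rater_c = ["fluency", "phonics", "comprehension", "phonics", "fluency"]

agreements = sum(a == b == c for a, b, c in zip(rater_a, rater_b, rater_c))
print(f"percent agreement = {agreements / len(rater_a):.0%}")  # 80% here
# Note: percent agreement does not correct for chance agreement,
# which is why some studies report Cohen's or Fleiss' kappa instead.
```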


**IV. CRITIQUE OF DISCUSSION**

Teachers are using literature as much as they did in the past, but they use it differently. Teachers felt differently toward decodable texts than toward other types of reading materials: teachers who used one type of phonics instruction were more favorable toward decodable text than teachers who used a different approach. Mesmer suggested that decodable text may not be necessary, depending on the strength of a teacher's phonics instruction.

In the discussion section, the authors pointed out that the sample consisted only of IRA members. This selective sampling seemed like a bigger deal to me than it appeared in the study. Teachers who are members of IRA are more likely to follow a balanced literacy philosophy, which supports using texts to meet the needs of students, and that is exactly what the survey demonstrated. I was excited to read in the abstract that teachers were using a variety of texts to meet student needs, but once I saw the sample, I was disappointed, because this may not be true for the rest of the population who are not members of that professional organization.

I agree that the sample coming entirely from the IRA was not addressed sufficiently. One piece of information that might have mitigated that a bit for me would have been the total number of K-3 teachers in the US compared to the number of IRA members mentioned as the potential pool for the sample. I did appreciate the historical context the authors afforded as they analyzed their findings and pointed out trends in changing usage patterns and purposes, especially in terms of the use of literature. I found it interesting that the analysis characterized teachers as eclectic and very balanced in their approach to using literature depending on their students' needs, yet on the other hand many of the uses teachers identified as good for addressing students' literacy needs were in conflict with prevailing research. I also found it somewhat questionable that the discussion around the most contested of texts -- decodable text -- attributed the disagreement to conflicting research rather than considering the politics of literacy as a possible influence on the conflicting views. This omission seems relevant in light of the finding that teachers still use decoding/phonics strategies for very specific purposes... I just don't have a strong enough background in literacy theory yet to weigh in on whether research conflicts are the source of the quandary around decodable text.


**V. ADDITIONAL OBSERVATIONS (QUESTIONS/CONFUSIONS?)**

Would the methods discussion and data analysis description be considered well-crafted by a peer-review panel? Or is this just an artifact of survey research reporting in general?