I have been wrapping up all the data from research cycle two, and in the last 48 hours I have been squishing and squashing it and comparing it to my first research cycle data. Truthfully, it was a great disappointment to see that my second cycle of research yielded less significant results than the first regarding the implementation of peer teaching in second grade math. My results showed that most students enjoyed and preferred peer teaching to working on their own. Academic growth was not significant on a standardized test in either cycle but was significant on a unit/skills-specific math test. Cycle one vocabulary development was extremely significant, but results from cycle two vocabulary assessments showed no statistical improvement; in fact, overall, there was a decrease in academic vocabulary. Some of my practices were not EXACTLY the same, and I will take that lesson with me. However, with results like these, I get to thinking: did the math lesson that day just have fewer math words to talk about? Some students' standardized math scores actually went down after the two-week intervention. Can one really measure any kind of learning in two weeks? A big takeaway is that I just don't know enough about statistics. But I do love to see my students improve. Where will I go from here? For some reason I want to fly in a completely different direction, but staying on the highway I am on, I'd like to continue to give students more work time with a classmate as opposed to teacher-talk listening time. I am a little unsure about reversing the roles of learners and teachers in the future, since I think that, naturally, those who need more help in math will ask for it instead of taking on a teaching role, and those who can assist more will do so. However, perhaps I can change the titles, even if simply to have the student in need of math support repeat the steps or lesson verbally to their partner. I also want to focus on getting the peer teacher the support they need.
I'd love to have more rotation of partners but I'm not sure how to make that happen with strict COVID restrictions in the classroom as far as proximity and desk space are concerned. Mostly, I'm happy to turn in this paper to let my brain relax and hopefully get the creativity flowing again.
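Since "not knowing enough about statistics" was my big takeaway, I sketched out for myself the test that keeps coming up for pre/post comparisons: the paired t-test, which looks at each student's own growth rather than at two separate groups. The scores below are invented for illustration, not my actual class data:

```python
# Paired t-test by hand, standard library only.
# Scores are invented; each index is one (hypothetical) student.
import math
import statistics

pre  = [10, 12, 9, 14, 11, 8, 13, 10]   # pre-test scores
post = [12, 13, 9, 16, 14, 9, 15, 11]   # the same students' post-test scores

diffs  = [b - a for a, b in zip(pre, post)]   # each student's gain
n      = len(diffs)
mean_d = statistics.mean(diffs)               # average gain
sd_d   = statistics.stdev(diffs)              # spread of the gains
t_stat = mean_d / (sd_d / math.sqrt(n))       # paired t statistic, df = n - 1

print(f"average gain = {mean_d:.2f}, t = {t_stat:.2f}, df = {n - 1}")
```

The t value then gets compared against a t-table (or a stats package does that part) to decide whether the average gain is distinguishable from zero. With a two-week window and a small class, modest gains will rarely clear that bar, which may be part of why my results felt underwhelming.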
Well, here we are, down to the last weeks of writing our research paper. At the beginning of this semester, for some reason I was under the impression that I would be writing this paper over the course of the entire school year. I thought, "One research paper? No problem," and I sauntered along. Now I'm trying to be "The Little Engine that Could" and chug my way up and over this hill. Last week I thought I would wipe out my research paper to-do list, but mainly I fiddled with data. I wanted to get a clear picture of what the data showed so that I could implement my second action research cycle more successfully than the first. And that action research cycle began today.
During the break I transcribed student interview responses and then counted up academic vocabulary terms in the pre- and post-interviews. I made separate spreadsheet tabs for tutor responses and learner responses and then counted those up. I also counted words per student in pre- and post-tests to see if, overall, students were speaking more by the end of peer teaching implementation. While there was an increase in academic vocabulary and word count for both tutors and learners, it is not as dramatic as I had hoped. But I used this information to make some inferences about how I was teaching and whether I was even giving them the terminology I wanted them to use. Perhaps this is influencing my study too much, but for the second cycle I would like to be a little more explicit in how I teach math terms and academic vocabulary. The more I mention something, the more students prioritize it. My preparation for action research cycle 2 also included crafting a unit pre/post test. The pre/post test on unit topics that I administered to measure academic growth during the first action research cycle was recommended by the curriculum but was WAY too easy, which made measuring growth difficult. Apart from that, I lost over half the responses to the post-test when students switched teachers as hybrid teaching began. This time, I hope to have crafted a more reliable measurement tool that focuses on what I will actually be teaching during these next few weeks. Now the question is, how do I write a more substantial literature review, add my second cycle of data that ends in two weeks along with an analysis of its results, and polish my research paper for submission, all within the next three weeks? "I think I can, I think I can, I think I can..." After spending a day swishing through various YouTube videos and Google searches on how to organize and represent data from a Google Form, I only ended up with more questions.
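The term counting itself is the one piece of all this that a tiny script handles well. Here is a minimal sketch of the tally, with a made-up term list and made-up transcript snippets rather than my students' actual words:

```python
# Tally academic vocabulary terms in interview transcripts.
# The term list and transcripts below are invented for illustration.
TERMS = {"suma", "resta", "decena", "unidad"}  # hypothetical Spanish math terms

def count_terms(transcript: str) -> int:
    """Count words in a transcript that appear on the academic-term list."""
    words = transcript.lower().split()
    return sum(1 for word in words if word.strip(".,;!?") in TERMS)

pre_response  = "yo conté y ya"
post_response = "puse una decena y una unidad y luego hice la resta"
print(count_terms(pre_response), count_terms(post_response))
```

In my actual workflow the counts lived in spreadsheet tabs; this just shows the tally logic for one response.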
What's the best way to organize my pre- and post-test data in one spreadsheet? What is a pivot table, and how do I get specific numbers from a Likert scale in there? Most importantly, why is statistics a required general ed class in college and not a class on using a spreadsheet properly? In desperation I reached out to a friend who works in data analysis. With a few lightning-fast clicks by my friend, I had a host of consolidated information at my fingertips that could be represented in any number of graphs. I was pleased to learn that on 8 of 10 questions I asked in my Google Form, students grew in their self-assessment of collaborative and problem-solving skills. Unfortunately, I have a couple of students who slipped past me without taking the pre-test, which means that I have to exclude them from my study altogether. It is so difficult to catch all the kids' submissions, or lack thereof, during virtual teaching. I could see from responses that kids didn't tie a couple of my questions to what they had been experiencing in the previous two weeks, so I think I will take those questions out of my next cycle of research. I believe these self-assessment questions (I listen to my peers, I ask my peers for help, I give suggestions of how my peers can improve, etc.) will help me answer my research questions: What interpersonal skills do successful students demonstrate when peer teaching? What are the benefits of peer teaching to the learners? What are the benefits of peer teaching to the teachers? Although my research involves mixed methods of data collection, my aim is to have as much quantitative data as possible. During virtual learning, our math unit assessments are much simpler than before, making it difficult to measure student growth. I used one of these assessments in this first round of research, but after seeing most students ace it from the beginning, I think I will change this measurement tool.
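For future reference (mine, mostly), here is roughly what my friend's pivot table was doing with the Likert responses: grouping rows by question and by pre/post round, then averaging. The rows below mimic a form export and are invented, not my class's data:

```python
# A pivot-table-style summary of Likert responses, standard library only.
# Each row is (question, round, 1-5 score); all values are invented.
from collections import defaultdict

rows = [
    ("I listen to my peers",    "pre",  3),
    ("I listen to my peers",    "pre",  2),
    ("I listen to my peers",    "post", 4),
    ("I listen to my peers",    "post", 5),
    ("I ask my peers for help", "pre",  2),
    ("I ask my peers for help", "post", 3),
]

groups = defaultdict(list)                 # (question, round) -> list of scores
for question, round_, score in rows:
    groups[(question, round_)].append(score)

averages = {key: sum(vals) / len(vals) for key, vals in groups.items()}
for (question, round_), avg in sorted(averages.items()):
    print(f"{question} [{round_}]: average {avg:.1f}")
```

A real pivot table also gives counts and percentages per Likert level, but the grouping idea is the same.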
I decided this time that I need to develop my own assessment, planning to assess lessons covered in the specific time frame in which I plan to implement peer teaching. I will still have quantitative academic data from the STAR Math test, but it covers such a wide array of skills that students may not exhibit much growth in two weeks. I do have some qualitative data in daily recorded student answers to pre-recorded interview questions in Seesaw. After the fiasco of losing all my students and their prior work submissions, I managed to find their first and last interview submissions in Seesaw from the peer teaching research cycle. I recorded the audio on Zoom, and now it's a matter of transcribing their responses side by side in a table to look for growth in academic vocabulary, which is the focus of my last research question: Does academic vocabulary increase over time with peer teaching? I feel I am at the crossroads of three very important journeys in my master's project right now. First, I am supposed to be learning how to research and write by reading the research books The Power of Questions by Beverly Falk and Action Research by Craig Mertler. But I'm also supposed to be researching peer-reviewed literature at the same time. I also want to do some reading up on the authors Jo Boaler and John Hattie, who I think could be seminal authors for my work. Lastly, I am in the thick of my first cycle of action research in the classroom. I wish I could have had this done earlier, but...here we are. My action research cycle is very short: two weeks. However, even in two weeks, I have seen so much growth. I am researching the effects of peer teaching in 2nd grade math, and I have to say it is THRILLING to watch the kids open up like they hadn't before when given a chance to collaborate. I used assessment data to separate kids into tutors and learners, and I think this has really been key to having successful peer teaching happen in our Zoom breakout rooms.
I hop from room to room, watching and listening. On Friday, a student who, before this, never spoke to his peers, ever, was having a full conversation about the math task with his team member. I am also finding some big benefits to Zoom, even though I'm tired of working on Zoom just like everyone else. The learner has to share his/her screen, and the tutor has to use her/his words to explain what to do or ask questions. The screen-sharing role avoids the common problem of one student doing all the brain and written work while the other sits on the sidelines watching. Another plus is the added help in explaining technology tools. Of course I've modeled time and time again how to manipulate and use tools on Seesaw to complete assignments, but now that students are having to complete assignments in front of each other, they are getting real-time help from their classmates with the tech. One student didn't know how to hold the trackpad button down and drag at the same time to draw a simple line. By doing the assignment with him, his peer could help instruct him in the use of the technology. That is the fun part. The difficult part is being at this point where I feel that everything is priority number one, or that I need each building block to get to the next and there is no way to stack them in a sequenced order for me to climb up. I know I'll get through this busy phase soon, and hopefully the data will show what I've been seeing so I can have something substantial to write about. I've spent at least a couple of days now looking for literature that ties to my study on peer teaching. For some reason, the only literature that pops up in my searches and halfway supports my research idea about the effects of peer teaching in 2nd grade math class is dissertations, and many of these are from studies performed in secondary or higher education schools. I think the dynamic of peer teaching changes quite a bit in the lower grades.
I narrowed my search to peer-reviewed articles only but am still having difficulty finding something relevant to my study. I decided instead to turn to a book I already had on the shelf, John Hattie's Visible Learning for Teachers, to at least get some concrete ideas on the subject. I typed some helpful quotes into my literature review spreadsheet. What I learned was that the effect of peers on student learning is high, but peers are most advantageous to learning when the teacher mitigates the negative effects that they can also have on each other. Hattie states that teachers often think they are facilitating collaboration and partner work by seating students together, but this alone doesn't actually change anything. Hattie shares that most learning takes place when student tutors get feedback and work with the teacher to set mastery goals and evaluate learning in their group (Hattie, 2012). A swirling concern in my head is the upcoming switch from virtual learning to a hybrid model of part-time virtual and part-time in-person learning. This switch, first and foremost, will cause me to lose half my students. Apart from this, I will have to think up a safe way for students to engage in peer tutoring/teaching while in the classroom. Perhaps we will continue to use Zoom and headphones in the classroom... wait, do I have headphones? Will I be getting them? That's another can of worms... I have also been swimming through data collection ideas and finally began to create some tools today. I used the K-2 21st Century Rubric from Salvador Elementary to guide my creation of a Google Form with 10 questions in Spanish, to each of which students respond on a Likert scale. I am somewhat worried that Google Forms' inability to read questions aloud to students will create a barrier for some students to respond accurately.
Instead of including all 20 of the questions on the original rubric, I pared it down to 10 questions focusing mainly on collaboration and communication, since these are the main areas I want to study. As I move forward, I realize that I must be mindful of my data collection tools. I have an open-ended interview that I want students to complete via Seesaw with voice recordings, as well as the collaboration and communication Google Form mentioned above. This is apart from the Bridges unit assessments and STAR Math data for the quantitative academic data. Am I setting myself up to be drowning in data? I figure, if I don't need one of them, at least I have the data and can decide later not to use it. Guess what! My driving question changed! After conferencing with my professors, I have decided to focus more in the direction of peer teaching than on digital feedback loops. I still love feedback, but I was finding that I did not have clear Need to Knows with a good plan for how to go about researching them. Through John Hattie's synthesis of other educational research, we learned that students retain more of what they hear from peers than information from the teacher. I'd like to capitalize on this in the subject area of math and see what the results would be on academic language and overall engagement in 2nd graders. I have a hard time moving forward when I don't have my measurement tools clearly in mind. This is the problem I was having with my initial driving question. With my new driving question focusing on peer teaching in math, I know I would like to collect data in multiple ways to perhaps draw multiple conclusions. I would like students to somehow record their experience as tutor or tutored student. By collecting video recordings, I hope to also assess academic vocabulary. My school has a special emphasis on language acquisition.
I'm curious which language students will choose to interact in and whether I should implement any language requirements (math in second grade is conducted in Spanish), or whether these would hinder the peer teaching that takes place. I could easily use Seesaw to collect this kind of information. As a second measurement tool, I need something that will give me quantitative data so I can measure academic growth in math concepts. The math screener suggested by Bridges on their resources website was not great. I already gave it to students, and the majority of the class passed with flying colors. The test consisted primarily of single-digit addition facts and two story problems. This will not help me measure growth in math. The unit assessments in math are complex and change drastically with each unit depending on the concepts. They also don't come in a digital format, so I would have to adapt them. Sometimes they contain visuals or exercises that students are not familiar with because of the low amount of paper-and-pencil practice in the Bridges curriculum. I'm wondering if I should create my own assessment based on what I plan to teach from now until December, but this would be a best guess as to what I would cover. I've heard the district has purchased STAR tests with very comprehensive reports, but I'm not sure if these are meant for lower grades as well, or if they will be friendly to primary-aged children. If I convert some other math screener questions to a Google Form, how will students show me their work in drawn models? If I use Seesaw, I will need to make a multi-page assignment that may be quite cumbersome for students, and I will need to print out or transcribe their results onto a better data sheet, since Seesaw just lets you grade with 1-5 stars. Reading Falk's "The Power of Questions" was a helpful way for me to get my mind wrapped around organizing my research progress.
I made a Google Doc with a few of the tables she suggested and am excited to fill them up with helpful informational tidbits for my driving question. However, when I got around to researching in the databases, my results were coming up... not so great. I conducted research in college and even since then, but conducting a quality internet search for information seems to be a skill that goes into my short-term memory. I copied down some titles and linked them in my literature review spreadsheet, but they are on the back burner since I'm still working out how to avoid paying for them. I haven't found much related to peer teaching among elementary students in math, but I'm not sure if this is a result of a lack of research in this specific area or simply due to poor database searches. I will continue my quest for knowledge as the week rolls on and get to work on those data collection tools. Need to Knows, Need to Knows... what do I need to know? A lot! After watching the video on an Introduction to Research Education, I began to realize most of what I believe to be true and successful in education is not based on scientific data. But I do have a really good feeling about it!... We do focus on data and data cycles at school... a lot! But for some reason I don't feel empowered by the data I take away from them. Perhaps one of the biggest skills I've taken away from them is how to make a really narrowed-down pre- and post-test specific to one standard. My driving question for my own research at this point is still: How can digital feedback loops enhance metacognition in math for primary students? I want to capitalize on this metacognition in math by following John Hattie's finding that information learned by students from other students is the best-retained information.
After learning about Pekka Peura's Hundred.org project, where students took daily self-assessments in teams and then helped each other in their areas of weakness, I was inspired to somehow offer my students a space where they could teach each other as well, based on their own assessed needs. Is this possible for 7-year-olds? Is it possible during distance learning? Here is a table of burning questions I have before I can even get my feet off the ground: Need to Knows
Perhaps distance learning hasn't exactly thrown a wrench in my plans, since when I started thinking about my driving question, I was pretty sure it would still be with us come fall. My being forced into technological tools early might even have helped me get ready to collect data in a better way. With young kids, however, there is always the comfort zone of paper-and-pencil data. I think this will be one of my Need to Knows: how reliable is digital data from 2nd graders, and which formats of digital survey data are most useful?
Minna Nummelin, life-long learner and dual language 2nd grade teacher.
April 2021