For the last couple of weeks, we have focused on ISTE Standard 5c: evaluate the impact of professional learning and continually make improvements in order to meet the schoolwide vision for using technology for high-impact teaching and learning. It seems like for most of this semester we have been focused on professional learning and ways that we can make it better for teachers. When I read through this indicator and our focus question, I just couldn't look past the part about using technology for high-impact teaching and learning. This got me thinking: as a coach, how can I evaluate the impact of technology on student learning?
When in a coaching position, you are no longer in a classroom setting. Instead, you are working with educators to help them improve their practice. That raised a question for me: how can coaches help evaluate technology for student learning when they aren't necessarily the ones using it with students? If, as a coach, I don't know what is impactful, then I can't make recommendations to improve student learning. With this in mind, I focused my research on finding strategies that can be used to evaluate the impact of technology on student learning.
In my initial research, I came across an article from Teachnology that provided steps to evaluate the value of educational technology. Their steps were things that I have seen before, but were a good reminder and a starting point for me as I went deeper into my search:
- Setting goals: set a goal for each program. Ask yourself: what do you intend to achieve by using this particular technology? If you don't know what you want to get out of it, you don't have a good starting place for knowing whether it's effective.
- Preparing to contrast information from before and after the use of technology: this one was new for me in the sense that I hadn't listed it out before, but it's something that naturally happens. By establishing a baseline before introducing the technology, you can track what, if anything, has changed or improved over the time since you added it to your practice.
- Using rubrics, giving a score for each goal you set: after collecting and analyzing your data, use the rubric scores to determine how successful students were in comparison to the baseline you recorded before implementation.
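To make the before-and-after comparison concrete, here is a minimal sketch in Python. All of the goal names and score values are invented for illustration; in practice the baseline and post-implementation numbers would come from whatever data you collected for each goal:

```python
# Hypothetical data: a baseline score recorded before introducing the
# technology, and a score for the same goal after implementation.
baseline = {"reading_fluency": 68, "student_engagement": 55}
post_tool = {"reading_fluency": 75, "student_engagement": 72}

# For each goal, report the change from the baseline.
for goal in baseline:
    change = post_tool[goal] - baseline[goal]
    print(f"{goal}: {baseline[goal]} -> {post_tool[goal]} ({change:+d})")
```

A positive change doesn't prove the tool caused the improvement, of course, but without the baseline there would be nothing to compare against at all.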
The next stop on my research journey was rubrics. When grading students and providing feedback, teachers rely heavily on rubrics. Rubrics allow for objectivity and show students exactly what is needed from them to reach the success criteria. It makes sense to use a rubric when evaluating a technology tool as well. One of the mottos I picked up over the years from a principal I worked under came to mind: "work smarter, not harder." There is a world of educators out there who have already created rubrics and shared them for others to use. Instead of creating a new one, I looked for some that already exist.
One that I found comes from Educause. Their rubric is quite comprehensive and covers functionality, accessibility, technical features, mobile design, privacy, data protection and rights, social presence, teaching presence, and cognitive presence. When evaluating technology for instruction, a school system could decide whether to include all of these sections or to focus on just one area. They could also rework the rubric entirely to better fit the needs of their site. Starting from a pre-existing rubric just makes the process a little easier.
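A rubric like this also lends itself to a simple overall score once each category has been rated. Here is a hypothetical sketch using the category names from the Educause rubric mentioned above; the 0–3 ratings are invented examples, and a real evaluation team would set its own scale and could weight categories differently:

```python
# Hypothetical ratings (0 = poor, 3 = excellent) for one tool, using the
# category names from the Educause rubric. The scores here are invented.
rubric_scores = {
    "Functionality": 3,
    "Accessibility": 2,
    "Technical": 3,
    "Mobile Design": 1,
    "Privacy, Data Protection, and Rights": 2,
    "Social Presence": 2,
    "Teaching Presence": 3,
    "Cognitive Presence": 2,
}

# Total the ratings against the maximum possible score.
total = sum(rubric_scores.values())
max_total = 3 * len(rubric_scores)
print(f"Overall: {total}/{max_total}")
```

Tallying scores this way makes it easy to compare several candidate tools side by side, or to spot the one weak category (here, "Mobile Design") that might rule a tool out for a particular site.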
My next research step is to tie together evaluating technology in learning and professional development. Since coaches won't be the ones in the classroom, it will fall mainly on teachers to evaluate the effectiveness of their technology tools and then report that out. Coaches could lead professional learning sessions to train teachers on how to evaluate technology tools. How have you evaluated technology use in your own practice? Do you have any go-to rubrics or steps you follow? Share below!
- Anstey, Lauren and Watson, Gavan. (2018, September 10). A Rubric for Evaluating E-Learning Tools in Higher Education. Educause Review. Retrieved from https://er.educause.edu/articles/2018/9/a-rubric-for-evaluating-e-learning-tools-in-higher-education
- Ehsanipour, Tina and Gomez Zaccarelli, Florencia. (2017, July). Exploring coaching for powerful technology use in education. Digital Promise. Retrieved from https://digitalpromise.org/wp-content/uploads/2017/07/Dynamic-Learning-Project-Paper-Final.pdf
- Hanover Research. (2017, March). Best practices in professional development. Hanover Research. Retrieved from https://www.antiochschools.net/cms/lib/CA02209771/Centricity/domain/43/curriculum-and-instruction/Best%20Practices%20in%20Professional%20Development.pdf
- Himmelsbach, Vawn. (2019, March 15). How does technology impact student learning? Top Hat. Retrieved from https://tophat.com/blog/how-does-technology-impact-student-learning/
- Noeth, Richard and Volkov, Boris. (2004). Evaluating the effectiveness of technology in our schools. ACT Policy Report. Retrieved from https://www.act.org/content/dam/act/unsecured/documents/school_tech.pdf
- Teachnology. (n.d.). Steps to evaluate the value of educational technology. Teachnology. Retrieved from https://www.teach-nology.com/teachers/educational_technology/evaluation