From an instructional coach’s perspective, a great deal of time is spent on the planning, design, and execution phases of professional development (PD). Much effort goes into determining content, developing active learning activities that introduce and integrate educational technology, and structuring opportunities for educators to collaborate and reflect on their teaching practices. I recently wrote a blog post titled “Designing Professional Development Sessions that Go the Distance,” which addresses considerations for developing ed tech PD offerings. One important component of designing PD is to begin by surveying the needs and wants of your target audience. Gauging learning needs is important, as is understanding how confident educators are with integrating digital technology into their teaching practices; this is a good place to start in the PD design process. Additionally, a solid understanding of the goals and objectives of the educational system in which your target audience works is essential to providing relevant, applicable, useful PD.
At the conclusion of a PD workshop, whether delivered synchronously or asynchronously, the instructional coach should gather feedback to assess whether the PD was effective in meeting the needs of the participants.
For this blog post, I will explore the mixed-methods approach to program evaluation by addressing the following question: How can surveys and focus groups be used together to evaluate ed tech professional development? My question aligns with the International Society for Technology in Education’s Standard 4.5, Professional Learning Facilitator, and performance indicator 4.5c: “Evaluate the impact of professional learning and continually make improvements in order to meet the schoolwide vision for using technology for high-impact teaching and learning.”
A mixed-methods research design has been defined as an approach “whereby researchers collect and analyse both quantitative and qualitative data within the same study” (Shorten & Smith, 2017). The explanatory mixed-methods design is structured to collect quantitative data first, such as through a survey of closed-ended questions, followed by qualitative data collection, such as through focus groups.
In her blog post on evaluating professional development, Robinson (2018) describes how focus groups, surveys, and interviews can be used together to evaluate professional development. Robinson outlines how all three methods of data collection are unique and where they intersect (see Venn diagram for a graphic representation). Additionally, Robinson discusses strengths and weaknesses of using focus groups for general data collection purposes. For example, one strength mentioned is the ability to ask additional questions based on survey responses, such as asking probing questions to elicit more feedback. One weakness discussed is that data analysis of focus group feedback can be more time-consuming than other methods, such as surveys. Lastly, Robinson suggests that collecting information from a focus group after reviewing survey data can allow the researchers to further explore survey responses. Echoing Robinson’s views, Shorten and Smith (2017) state that “mixed methods can be used to gain a better understanding of connections or contradictions between qualitative and quantitative data.”
An additional consideration for program evaluation is to include qualitative survey questions. In contrast to quantitative survey questions, which are closed-ended, such as asking about satisfaction with the PD experience on a 5-point Likert scale, qualitative survey questions are open-ended, such as asking for feedback on the implementation of PD content into teaching practice.
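To make the contrast concrete, here is a minimal sketch of how the closed-ended, quantitative side might be summarized. The question wording and the response values are invented for illustration; only the 5-point Likert structure comes from the discussion above.

```python
from collections import Counter
from statistics import mean

# Hypothetical responses to a closed-ended, 5-point Likert item:
# "How satisfied were you with today's PD session?"
# (1 = very dissatisfied, 5 = very satisfied)
ratings = [4, 5, 3, 4, 5, 2, 4, 4, 5, 3]

# A mean gives a single summary number...
print(f"Mean satisfaction: {mean(ratings):.1f}")

# ...while a frequency distribution shows the spread of responses.
for value, count in sorted(Counter(ratings).items()):
    print(f"  {value}: {'#' * count} ({count})")
```

An open-ended companion item ("What, if anything, would help you apply today's content in your classroom?") produces text rather than numbers, which is where the coding techniques described below come in.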
One benefit of qualitative data collection is that responses can be categorized into themes (i.e., a thematic analysis) to better understand feedback. There are two types of qualitative coding: deductive, in which the researcher pre-determines the categories they expect responses to fit, and inductive, in which the researcher derives categories from the data after collection. Once responses have been coded, the results can be arranged into a table delineating categories and the frequency of responses in each.
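As a toy sketch of the deductive approach: real thematic analysis is done by a human reader, but simple keyword tallying can illustrate how pre-determined categories turn open-ended responses into a category-frequency table. The categories, keyword cues, and responses below are all hypothetical.

```python
from collections import Counter

# Deductive coding: categories are decided BEFORE reviewing responses.
# Each category has keyword cues a coder might look for (all invented).
categories = {
    "time": ["time", "pacing", "too fast", "too slow"],
    "relevance": ["relevant", "useful", "applicable", "classroom"],
    "support": ["help", "support", "follow-up", "coaching"],
}

# Invented open-ended survey responses.
responses = [
    "The session felt too fast for me to practice the tool.",
    "Very relevant to my classroom; I can use this tomorrow.",
    "I'd like follow-up coaching as I try this with students.",
    "Useful content, but more time to explore would help.",
]

def code_response(text, categories):
    """Return every category whose keyword cues appear in the response."""
    text = text.lower()
    return [cat for cat, cues in categories.items()
            if any(cue in text for cue in cues)]

# Tally how often each category appears across all responses,
# producing the category-frequency table described above.
tally = Counter(cat for r in responses for cat in code_response(r, categories))
for cat, freq in tally.most_common():
    print(f"{cat}: {freq}")
```

Inductive coding would reverse the order: read the responses first, let categories emerge, and only then tally frequencies. Keyword matching is crude (a response mentioning "would help" lands in "support"), which is one reason human judgment remains central to qualitative analysis.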
So why put so much effort into evaluating ed tech PD courses? Any time a program is offered to a target audience, instructors need to know if they were successful in their delivery. Did the program meet the professional development needs of the educators in attendance? Was the content useful, relevant, and meaningful? Did the PD improve educators’ teaching practices and, ultimately, student learning?
A mixed-methods research design can be an effective approach to evaluating PD efforts because of the varying types of feedback it solicits. Quantitative and qualitative data reviewed collectively can provide meaningful feedback to assess what is working well and what improvements can be made to program design.
International Society for Technology in Education. https://www.iste.org
Robinson, S. (2018). How surveys and focus groups can be used together to evaluate professional development. Frontline Research & Learning Institute. https://www.frontlineinstitute.com/blog/using-surveys-and-focus-groups-to-evaluate-pd/
Shorten, A., & Smith, J. (2017). Mixed methods research: Expanding the evidence base. Evidence-Based Nursing, 20(3), 74–75.