Social–emotional learning (SEL) programs are frequently evaluated using randomized controlled trial (RCT) methodology as a means to assess program impacts. What is often missing in RCT studies is a robust parallel investigation of the multi-level implementation of the program. The field of implementation science bridges the gap between the RCT framework and understanding program impacts through the systematic collection of data on program implementation components (e.g., adherence, quality, responsiveness). Data collected for these purposes can be used to answer questions regarding program impacts that matter to policy makers and practitioners in the field (e.g., Will the program work in practice? Under what conditions? For whom, and why?). As such, the primary goal of this paper is to highlight the importance of studying implementation in the context of education RCTs by sharing one example of a conceptualization and related set of implementation measures we created for a current study of ours testing the impacts of an SEL program for preschool children. Specifically, we describe the process we used to develop an implementation conceptual framework that highlights the importance of studying implementation at two levels: (1) the program implementation supports for teachers, and (2) teacher implementation of the curriculum in the classroom with students. We then discuss how such multi-level implementation data can be used to extend our understanding of program impacts and to answer questions such as: "Why did the program work (or not work) to produce impacts?" "What are the core components of the program?" and "How can we improve the program in future implementations?"

The recent wide-scale expansion of social emotional learning (SEL) programs in schools and classrooms has been informed by the results of research studies demonstrating that, across age groups, SEL programs have positive impacts on students' academic success and well-being. When applied to education contexts, these experimentally designed studies are commonly referred to as randomized controlled trials, or cluster randomized trials when the design accounts for the multiple levels of analysis familiar in school-based settings (e.g., classrooms clustered within schools, and children clustered within classrooms). To date, studies of the impacts of SEL programs have mostly used field-based experimental designs (Boruch et al., 2002; Bickman and Reich, 2015). Results of studies of SEL program impacts on children's social and academic outcomes have been incorporated into economic analyses, and a recent estimate of an $11 return for every $1 invested in school-based SEL programs has compelled policy makers and program administrators nationwide to implement these programs at wide scale (Belfield et al., 2015). Positive impacts of SEL programs are also evident among preschool-aged children: a meta-analysis of 39 SEL programs in early childhood education settings found small to medium effects (Hedges' g effect size estimates between 0.31 and 0.42) for improvements in children's social and emotional competencies and reductions in their challenging behaviors (Luo et al., 2022). More specifically, SEL programs have been found to produce demonstrably positive impacts on students' social and emotional skills (e.g., perspective taking, identifying emotions, interpersonal problem solving), attitudes toward self and others (e.g., self-esteem, self-efficacy), positive social behaviors (e.g., collaboration, cooperation), reduced conduct problems (e.g., class disruption, aggression), reduced emotional distress (e.g., depression and anxiety), and academic performance (e.g., standardized math and reading tests; Durlak et al., 2011).
1 Department of Psychology, Portland State University, Portland, OR, United States.
2 Department of Human Development and Family Studies, Pennsylvania State University, State College, PA, United States.