Quality Improvement in Program Administration through Directors’ Support Cohorts


Sim Loh is a family partnership coordinator at Children’s Village, a nationally accredited Keystone 4 STARS early learning and school-age enrichment program in Philadelphia, Pennsylvania, serving about 350 children. She supports children and families, including non-English-speaking families of immigrant status, by ensuring equitable access to education, health, employment, and legal information and resources on a day-to-day basis. She is a member of the Children First Racial Equity Early Childhood Education Provider Council, a community member representative on the Philadelphia School District Multilingual Advisory Council, and a board member of Historic Philadelphia.


Sim explains, “I ensure families know their rights and educate them on ways to speak up for themselves and request for interpretation/translation services. I share families’ stories and experiences with legislators and decision-makers so that their needs are understood. Attending Leadership Connections will help me strengthen and grow my skills in all domains by interacting with and hearing from experienced leaders in different positions. With newly acquired skills, I seek to learn about the systems level while paying close attention to the accessibility and barriers of different systems and resources and their impacts on young children and their families.”

This document may be printed, photocopied, and disseminated freely with attribution. All content is the property of the McCormick Center for Early Childhood Leadership.

This resource is part of our Research Notes series. 


Initiatives to improve administrative practices in early childhood programs take many forms. Some models are high-intensity, providing substantial external support for directors—formal training leading to an advanced degree, high dosage of technical support for achieving accreditation, and on-site coaching addressing multiple facets of program leadership and management. These high-intensity models have been shown to yield significant improvements in program- and classroom-level quality, organizational climate, and participants’ level of knowledge and demonstrated skill.1


Other models are moderate-intensity, providing a lower dose of formal training and on-site support, and lead to a director credential. Although the outcomes are not as robust as the high-intensity models, moderate-intensity initiatives also yield significant improvements in program quality and directors’ level of competency.2


Because high- and moderate-intensity initiatives are costly to implement, the current study examined an informal low-intensity approach to strengthening leadership capacity as a viable alternative.


THE MODEL


Beginning in 2006, the Metropolitan Council on Early Learning (MCEL), a program of the Mid-America Regional Council in Kansas City, offered a director support program with the following characteristics:


  • Programs were assessed using the Program Administration Scale (PAS) to identify areas of administrative practice in need of improvement.3
  • Facilitated cohort groups of directors met monthly for networking and peer support.
  • Some training was offered to enhance leadership and management skills.
  • Some coaching was provided to help directors develop and implement their program improvement plans.4
  • Print and electronic resource materials were provided.


SAMPLE AND METHODS


Twenty-nine early childhood directors participated in two cohorts of the MCEL Director Support Program; twenty-three (79%) completed the 18-month intervention. Participants in the sample were not highly qualified. Only one director had an advanced degree, and more than half had not attained an associate’s degree with 21 semester hours of college credit in ECE/CD and 9 semester hours of management coursework.


Participants were selected to represent a variety of early childhood centers in the Kansas City bi-state area. On average, the centers had a licensed capacity of 88 with 16 staff members. Fourteen programs (61%) were private nonprofit, seven (30%) were private for-profit, and two (9%) were public programs. Three programs received Head Start funding. Nearly half of the centers (48%) were accredited.


Pre- and post-intervention assessments were conducted by independent certified PAS assessors.5 Paired-sample t-tests were performed to assess change over time for each center’s overall PAS score and individual PAS items 1 through 21. Staff qualifications were not included in the analysis. Cohen’s d was computed to assess effect size.
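For readers who want to reproduce this type of analysis, the following is a minimal sketch in Python (not the study’s actual code). The arrays pre_scores and post_scores are hypothetical, standing in for each center’s overall PAS score before and after the intervention, and the effect size shown here divides the mean gain by the standard deviation of the difference scores; Cohen’s d can be defined several ways for paired designs, and the study does not specify which variant was used.

```python
import numpy as np
from scipy import stats

# Hypothetical pre/post overall PAS scores, one pair per center.
pre_scores = np.array([2.50, 3.00, 2.00, 3.50, 2.75, 3.25])
post_scores = np.array([3.00, 3.50, 2.75, 4.00, 3.25, 3.75])

# Paired-sample t-test: is the mean pre/post difference different from zero?
t_stat, p_value = stats.ttest_rel(post_scores, pre_scores)

# One common paired-samples effect size: mean difference divided by the
# standard deviation of the difference scores.
diffs = post_scores - pre_scores
cohens_d = diffs.mean() / diffs.std(ddof=1)

print(f"t = {t_stat:.2f}, p = {p_value:.3f}, d = {cohens_d:.2f}")
```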


RESULTS


On average, the overall PAS scores improved for participants’ programs. There was a significant difference between the pre-intervention PAS scores (M = 2.87, SD = 1.06) and the post-intervention scores (M = 3.47, SD = 1.14), t = 3.07, p < .01, Cohen’s d = .54, indicating a medium effect.
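As a rough arithmetic check (not part of the original write-up), the reported effect size is consistent with one common convention for Cohen’s d in pre/post designs, dividing the mean gain by the pooled pre/post standard deviation:

$$
d = \frac{M_{\text{post}} - M_{\text{pre}}}{\sqrt{\left(SD_{\text{pre}}^{2} + SD_{\text{post}}^{2}\right)/2}}
  = \frac{3.47 - 2.87}{\sqrt{(1.06^{2} + 1.14^{2})/2}}
  \approx \frac{0.60}{1.10}
  \approx 0.55
$$

which agrees with the reported d = .54 within the rounding of the published means and standard deviations; the study does not state which variant of Cohen’s d was computed.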


The average scores for individual PAS items were also compared to the national averages obtained from the normative samples used in developing the PAS. Study participants scored below the national average when they began meeting with their director cohorts; by the end of the study, their average scores exceeded the national means.


Significant differences were also found in four of the individual PAS items as seen in Table 1.

Table 1. Comparison of pre- and post-intervention PAS item scores.

DISCUSSION


The results of this study suggest that an informal, low-intensity model may be a cost-effective means of yielding moderate positive outcomes in administrative practices in early care and education programs.


Utilizing an assessment tool of leadership and management practices like the PAS provides structure and standards to guide directors, coaches, and peer mentors in identifying specific areas of strength and areas in need of improvement. A learning community offers a venue for discussing specific aspects of leadership and management practice and for exploring practical solutions to issues that directors experience.


Multiple intervention strategies were incorporated in this model, including facilitated peer learning groups that met quarterly, two targeted training sessions per cohort, monthly coaching contacts to develop and execute improvement plans, and support with resources and formal education.6 The training topics emerged from the peer learning groups based on the initial PAS results and participants’ perceived needs. Coaches helped directors interpret PAS scores as well as understand the value of implementing management practices, maintaining documentation, and organizing materials and records.


Results suggest that the model may be more effective for certain dimensions of early childhood program administration than others. The large effect size (.93) for the PAS item assessing staff orientation practices indicates that this area was especially responsive to the initiative. Moderate effects were also seen in supervision and performance appraisal, internal communication, and community outreach. These aspects of leadership and management can be readily adjusted by program directors, which may explain why the effects were larger than for areas that involve many individuals affiliated with the organization.


Many state leaders overseeing early childhood quality initiatives are considering how to take successful program models to scale or how to sustain advances made in statewide systems. Initiatives that implement a facilitated peer learning model for improving administrative practices may offer cost-effective features that could be incorporated into larger system initiatives such as quality rating and improvement systems (QRIS). Participant cost data were not available for this study, but an intervention model using peer supports may be more feasible than models that incorporate extensive formal training.


There are several limitations to this study that should be considered in interpreting the results. Caution should be exercised in generalizing the results due to the small sample size. The multiple components of the intervention model were not evaluated independently, so further research is needed to determine which strategies contribute most to its effectiveness. The model’s applicability to other agencies and in diverse communities is also unknown, although other low-intensity leadership training programs have reported similar results. The results of this study suggest that additional research on the intensity of professional development for early childhood administrators is warranted.


  1. Bloom, P. J., & Sheerer, M. (1992). The effect of leadership training on program quality. Early Childhood Research Quarterly, 7(4), 579-594.
  2. Bloom, P. J., Jackson, S., Talan, T. N., & Kelton, R. (2013). Taking Charge of Change: A 20-year review of empowering early childhood administrators through leadership training. Wheeling, IL: McCormick Center for Early Childhood Leadership, National Louis University.
  3. Talan, T. N., & Bloom, P. J. (2004). Program Administration Scale: Measuring early childhood leadership and management. New York: Teachers College Press.
  4. Meeting facilitation, training, and coaching were provided by Francis Institute for Child and Youth Development, located at Metropolitan Community College-Penn Valley.
  5. Assessments were conducted by University of Missouri-Kansas City, Institute for Human Development.
  6. Newkirk, M. K. (2014, January). Improving leadership and management practice in early learning programs through assessment and support. University of Missouri-Kansas City Institute for Human Development.
By McCormick Center May 13, 2025
Leaders, policymakers, and systems developers seek to improve early childhood programs through data-driven decision-making. Data can be useful for informing continuous quality improvement efforts at the classroom and program level and for creating support for workforce development at the system level. Early childhood program leaders use assessments to help them understand their programs’ strengths and to draw attention to where supports are needed. Assessment data is particularly useful in understanding the complexity of organizational climate and the organizational conditions that lead to successful outcomes for children and families. Several tools are available for program leaders to assess organizational structures, processes, and workplace conditions, including:


  • Preschool Program Quality Assessment (PQA)1
  • Program Administration Scale (PAS)2
  • Child Care Worker Job Stress Inventory (ECWJSI)3
  • Early Childhood Job Satisfaction Survey (ECJSS)4
  • Early Childhood Work Environment Survey (ECWES)5
  • Supportive Environmental Quality Underlying Adult Learning (SEQUAL)6


The Early Education Essentials is a recently developed tool to examine program conditions that affect early childhood education instructional and emotional quality. It is patterned after the Five Essentials Framework,7 which is widely used to measure instructional supports in K-12 schools. The Early Education Essentials measures six dimensions of quality in early childhood programs:


  • Effective instructional leaders
  • Collaborative teachers
  • Supportive environment
  • Ambitious instruction
  • Involved families
  • Parent voice


A recently published validation study for the Early Education Essentials8 demonstrates that it is a valid and reliable instrument that can be used to assess early childhood programs to improve teaching and learning outcomes.


METHODOLOGY


For this validation study, two sets of surveys were administered in one Midwestern city: one for teachers/staff in early childhood settings and one for parents/guardians of preschool-aged children. A stratified random sampling method was used to select sites, with oversampling based on the percentage of children who spoke Spanish. The teacher survey included 164 items within 26 scales and was made available online for a three-month period in the public schools. In community-based sites, data collectors administered the surveys to staff. Data collectors also administered the parent surveys in all sites. The parent survey was shorter, with 54 items within nine scales. Rasch analysis was used to combine items into scales. In addition to the surveys, administrative data on school attendance were analyzed, and classroom observational assessments were performed to measure teacher-child interactions using the Classroom Assessment Scoring System™ (CLASS).9


Early Education Essentials surveys were analyzed from 81 early childhood program sites (41 school-based programs and 40 community-based programs) serving 3- and 4-year-old children. Only publicly funded programs (e.g., state-funded preschool and/or Head Start) were included in the study. The average enrollment for the programs was 109 (SD = 64); 91% of the children were from minority backgrounds; and 38% came from non-English speaking homes. Of the 746 teacher surveys collected, 451 (61%) were from school-based sites and 294 (39%) were from community-based sites. There were 2,464 parent surveys collected (59% school; 41% community). About one-third of the parent surveys were conducted in Spanish.
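For context, the core of a Rasch analysis is a logistic model of the probability that a respondent endorses an item, as a function of the respondent’s level on the latent construct and the item’s difficulty. A minimal form for dichotomous items is shown below; survey scales like these typically use a polytomous extension (e.g., a rating scale or partial credit model), and the validation study does not report which variant was fit:

$$
P(X_{ni} = 1 \mid \theta_n, b_i) = \frac{\exp(\theta_n - b_i)}{1 + \exp(\theta_n - b_i)}
$$

where $\theta_n$ is person $n$’s level on the construct and $b_i$ is item $i$’s difficulty. Person reliability and infit mean-square statistics, referenced in the results below, are computed from this model’s estimates and residuals.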
Data were analyzed to determine reliability, internal validity, group differences, and sensitivity across sites. Child outcome results were used to examine whether positive scores on the surveys were related to desirable outcomes for children (attendance and teacher-child interactions). Hierarchical linear modeling (HLM) was used to compute average site-level CLASS scores to account for the shared variance among classrooms within the same school. Exploratory factor analysis was performed to group the scales.


RESULTS


The surveys performed well in the measurement characteristics of scale reliability, internal validity, differential item functioning, and sensitivity across sites. Reliability was measured for 25 scales, with Rasch person reliability scores ranging from .73 to .92; only two scales fell below the preferred .80 threshold. The Rasch analysis also provided an assessment of internal validity, showing that 97% of the items fell within the acceptable range of 0.7 to 1.3 (infit mean squares). The Teacher/Staff Survey could detect differences across sites; however, the Parent Survey was less effective in detecting differences across sites. Differential item functioning (DIF) was used to examine whether individual responses differed by setting (school- versus community-based) and primary language (English versus Spanish speakers). Results showed that 18 scales had no or only one large DIF on the Teacher/Staff Survey related to setting. There were no large DIFs related to setting on the Parent Survey, and only one scale had more than one large DIF related to primary language. The authors decided to leave the large DIF items in the scales because the number of large DIFs was minimal and the items fit well with the various groups.


The factor analysis aligned closely with the five essentials in the K-12 model. However, researchers also identified a sixth factor—parent voice—which factored differently from involved families on the Parent Survey. The Early Education Essentials therefore has an additional dimension in contrast to the K-12 Five Essentials Framework.


Outcomes related to CLASS scores were found for two of the six essential supports. Positive associations were found between Effective Instructional Leaders and Collaborative Teachers and all three of the CLASS domains (Emotional Support, Classroom Organization, and Instructional Support). Significant associations with CLASS scores were not found for the Supportive Environment, Involved Families, or Parent Voice essentials, and Ambitious Instruction was not associated with any of the three CLASS domains. Table 1 (HLM Coefficients Relating Essential Scores to CLASS Scores, Model 1) shows the results of these analyses.


Outcomes related to student attendance were found for four of the six essential supports. Effective Instructional Leaders, Collaborative Teachers, Supportive Environment, and Involved Families were positively associated with student attendance; Ambitious Instruction and Parent Voice were not. The authors are continuing to examine and improve the tool to better measure developmentally appropriate instruction and to adapt the Parent Survey so that it will perform across sites.


There are a few limitations to this study that should be considered. Since the research is based on correlations, the direction of the relationship between factors and organizational conditions is not evident.
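As an illustration of the kind of multilevel model involved (not the study’s actual code), the following is a minimal sketch in Python using statsmodels. It assumes a hypothetical DataFrame with one row per classroom, a classroom-level CLASS score (class_score), a site-level essential score (essential_score), and a site identifier (site_id); a random intercept for site accounts for the shared variance among classrooms within the same site.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical classroom-level data: CLASS scores nested within sites.
df = pd.DataFrame({
    "site_id":         ["A", "A", "B", "B", "C", "C", "D", "D", "E", "E", "F", "F"],
    "class_score":     [5.1, 4.8, 3.9, 4.2, 5.6, 5.4, 4.0, 4.4, 5.0, 5.2, 3.7, 4.1],
    "essential_score": [0.6, 0.6, -0.3, -0.3, 0.9, 0.9, -0.2, -0.2, 0.4, 0.4, -0.5, -0.5],
})

# Random-intercept model: classroom CLASS scores predicted by the site-level
# essential score, with classrooms grouped by site to model within-site clustering.
model = smf.mixedlm("class_score ~ essential_score", data=df, groups=df["site_id"])
result = model.fit()

# The fixed-effect coefficient on essential_score plays the role of the
# HLM coefficients reported in the study's Table 1.
print(result.summary())
```

The sign and significance of the essential_score coefficient indicate whether sites with stronger essential supports tend to have higher observed CLASS scores, once within-site clustering is taken into account.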
It is unknown whether the Early Education Essentials survey is detecting factors that affect outcomes (e.g., engaged families or positive teacher-child interactions) or whether the organizational conditions predict these outcomes. This study was limited to one large city and a specific set of early childhood education settings. It has not been tested with early childhood centers that do not receive Head Start or state pre-K funding.


DISCUSSION


The Early Education Essentials survey expands the capacity of early childhood program leaders, policymakers, systems developers, and researchers to assess organizational conditions that specifically affect instructional quality. It is likely to be a useful tool for administrators seeking to evaluate the effects of their pedagogical leadership—one of the three domains of whole leadership.10 When used with additional measures to assess whole leadership—administrative leadership and leadership essentials, as well as pedagogical leadership—stakeholders will be able to understand the organizational conditions and supports that positively impact child and family outcomes. Many quality initiatives focus on assessment at the classroom level, but examining quality with a wider lens at the site level expands the opportunity for sustainable change and improvement. The availability of valid and reliable instruments to assess the organizational structures, processes, and conditions within early childhood programs is necessary for data-driven improvement of programs as well as for systems development and applied research.


Findings from this validation study confirm that strong instructional leadership and teacher collaboration are good predictors of effective teaching and learning practices, evidenced in supportive teacher-child interactions and student attendance.11 This evidence is an important contribution to the growing body of knowledge informing embedded continuous quality improvement efforts. It also suggests that leadership that supports teacher collaboration, such as professional learning communities (PLCs) and communities of practice (CoPs), may have an effect on outcomes for children.


This study raises questions for future research. The addition of the “parent voice” essential support should be further explored: if parent voice is an essential support, why was it not related to CLASS scores or student attendance? With the introduction of the Early Education Essentials survey to the existing battery of program assessment tools (PQA, PAS, ECWJSI, ECWES, ECJSS, and SEQUAL), a concurrent validity study is needed to determine how these tools are related and how they can best be used to examine early childhood leadership from a whole leadership perspective.


ENDNOTES


  1. High/Scope Educational Research Foundation, 2003
  2. Talan & Bloom, 2011
  3. Curbow, Spratt, Ungaretti, McDonnell, & Breckler, 2000
  4. Bloom, 2016
  5. Bloom, 2016
  6. Whitebook & Ryan, 2012
  7. Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010
  8. Ehrlich, Pacchiano, Stein, Wagner, Park, Frank, et al., 2018
  9. Pianta, La Paro, & Hamre, 2008
  10. Abel, Talan, & Masterson, 2017
  11. Bloom, 2016; Lower & Cassidy, 2007


REFERENCES


Abel, M. B., Talan, T. N., & Masterson, M. (2017, Jan/Feb). Whole leadership: A framework for early childhood programs. Exchange, 39(233), 22-25.

Bloom, P. J. (2016). Measuring work attitudes in early childhood settings: Technical manual for the Early Childhood Job Satisfaction Survey (ECJSS) and the Early Childhood Work Environment Survey (ECWES) (3rd ed.). Lake Forest, IL: New Horizons.

Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago, IL: The University of Chicago Press.

Curbow, B., Spratt, K., Ungaretti, A., McDonnell, K., & Breckler, S. (2000). Development of the Child Care Worker Job Stress Inventory. Early Childhood Research Quarterly, 15, 515-536. DOI: 10.1016/S0885-2006(01)00068-0

Ehrlich, S. B., Pacchiano, D., Stein, A. G., Wagner, M. R., Park, S., Frank, E., et al. (in press). Early Education Essentials: Validation of a new survey tool of early education organizational conditions. Early Education and Development.

High/Scope Educational Research Foundation. (2003). Preschool Program Quality Assessment, 2nd edition (PQA) administration manual. Ypsilanti, MI: High/Scope Press.

Lower, J. K., & Cassidy, D. J. (2007). Child care work environments: The relationship with learning environments. Journal of Research in Childhood Education, 22(2), 189-204. DOI: 10.1080/02568540709594621

Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom Assessment Scoring System (CLASS). Baltimore, MD: Paul H. Brookes Publishing Co.

Talan, T. N., & Bloom, P. J. (2011). Program Administration Scale: Measuring early childhood leadership and management (2nd ed.). New York, NY: Teachers College Press.

Whitebook, M., & Ryan, S. (2012). Supportive Environmental Quality Underlying Adult Learning (SEQUAL). Berkeley, CA: Center for the Study of Child Care Employment, University of California.