The Relationship between Global Classroom and Administrative Quality: A Cross-Cultural Comparison

This document may be printed, photocopied, and disseminated freely with attribution. All content is the property of the McCormick Center for Early Childhood Leadership.


International comparative education studies contribute to improvements in education systems by highlighting strengths and identifying challenges across the different cultural contexts of different countries. Comparative early childhood education (ECE) studies provide new ideas and insights into how local and national systems can be improved. There are, however, few comparative international ECE studies that report data collected at the individual program level (Li, 2015; Sheridan et al., 2009). Further, there is scant research examining the relationship between administrative practices and classroom practices in ECE programs (Lower & Cassidy, 2007; McCormick Center, 2010a, 2010b).



Quality improvement systems in ECE exist in many countries (OECD, 2015). In the United States, 49 states have or are developing a quality rating and improvement system (QRIS). Dilara Yaya-Bryson, Catherine Scott-Little, and Deborah Cassidy, researchers at the University of North Carolina at Greensboro, and Berrin Akman, a researcher at Hacettepe University in Ankara, Turkey, examined the quality of programs in two ECE quality improvement systems, one in Turkey and one in North Carolina, USA (Yaya-Bryson et al., 2020). In this study, the Early Childhood Environment Rating Scale–Revised (Harms, Clifford, & Cryer, 2005) was used to evaluate the quality of the early childhood classrooms, and the Program Administration Scale (Talan & Bloom, 2004) was used to evaluate the quality of administrative practices.


Cultural Context of Study


The two ECE quality improvement systems included in this research differed in several key areas, including the auspice of the system, the length of time the system had been in place, and the degree to which standards for program quality were developed.


Turkey


The centralized early childhood system in Turkey is administered by the Ministry of National Education. The regulation of ECE programs began in the 1960s, with new standards published in 2015. At the time the research was conducted, these standards had not been empirically tested.


North Carolina


The United States utilizes a state-based early childhood system of quality monitoring and improvement. The QRIS in North Carolina, the Star Rated License System, has been operating since 1999. It includes ongoing quality assessments using the Early Childhood Environment Rating Scale-Revised (ECERS-R) and other instruments to award a Star Rated License rating of 1 to 5 stars for all regulated early care and education programs in the state.


The purpose of the study was to compare the quality of programs in two different early childhood systems—Turkey, a developing country with emerging standards for quality early childhood education, and North Carolina, a state with a well-established quality improvement system for early care and education programs.


Methods


The sample for the study included 40 ECE programs: 20 located in Turkey and 20 in North Carolina. In each program, one classroom serving preschool-aged children was selected through convenience sampling and observed using the ECERS-R. The director of each program was interviewed on the 25 items of the Program Administration Scale (PAS), and program documentation was reviewed immediately following the interview to verify the director’s responses.


Overall, the auspice of the program and the qualifications of the staff differed between the Turkey and North Carolina sample programs. There were more public programs in the Turkish sample (70%) than in the North Carolina sample (5%). In Turkey, the vast majority (95%) of directors had a bachelor’s degree or higher, while in North Carolina, a majority (60%) had a bachelor’s degree or higher. A similar pattern was seen with teacher qualifications. In Turkey, 85% of observed teachers had at least a bachelor’s degree; in North Carolina, 40% of observed teachers had at least a bachelor’s degree.


Pilot studies were conducted prior to collecting data with the PAS. The co-raters selected to collect ECERS-R data in Turkey were trained to administer the PAS using the translated version (Kalkan & Akman, 2010). The primary researcher and three other trained raters administered the PAS for the pilot in Turkey; inter-rater reliability for the Turkish co-raters was calculated as .98. In North Carolina, the primary researcher and a reliable PAS assessor conducted the pilot assessments; inter-rater reliability for the PAS was computed as .92.
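The article reports inter-rater reliability as a single coefficient without naming the index used. One common choice is a Pearson correlation between two raters' item-level scores; the sketch below illustrates that computation on hypothetical PAS item ratings (all numbers are invented for illustration, not taken from the study).

```python
from statistics import mean, stdev

def pearson_r(x, y):
    """Pearson correlation between two raters' item-level scores."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (len(x) - 1)
    return cov / (stdev(x) * stdev(y))

# Hypothetical item-level PAS ratings (1-7 scale) from two independent raters
rater_a = [4, 5, 3, 6, 4, 5, 2, 6, 5, 4]
rater_b = [4, 5, 3, 5, 4, 5, 2, 6, 5, 5]
print(round(pearson_r(rater_a, rater_b), 2))  # near-perfect agreement yields r close to 1
```

Other indices, such as percentage of exact agreement or an intraclass correlation, are also commonly reported for observational instruments like the PAS.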


The ECERS-R and the PAS were administered by the primary researcher on the same day, with the classroom observation occurring in the morning and the director interview and documentation review occurring in the afternoon. Independent t-tests were used to compare ECERS-R and PAS scores from the programs in Turkey with scores from programs in North Carolina. Pearson correlations between ECERS-R and PAS total scores were computed for each system and compared to determine whether the strength of the relationship differed between the two cultural contexts.


Findings


Table 1 provides descriptive data on ECERS-R classroom quality scores, along with t-test comparisons and effect sizes (Cohen’s d). The overall mean ECERS-R score in Turkey (M = 4.7, SD = 1.09) was significantly lower than the overall mean score for North Carolina (M = 5.7, SD = .83), t = -3.41, p = .002. The mean score for North Carolina falls in the good range on the rating scale (1 = inadequate to 7 = excellent); the mean score for Turkey falls in the medium range, between minimal and good.
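As a rough check on the reported comparison, the pooled-variance t statistic and Cohen's d can be recomputed from the published means and standard deviations (n = 20 per group). The sketch below does this; because the published M and SD values are rounded, the recomputed t (about -3.26) differs slightly from the reported t = -3.41.

```python
import math

def pooled_sd(s1, n1, s2, n2):
    """Pooled standard deviation for two independent groups."""
    return math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))

def cohens_d(m1, s1, n1, m2, s2, n2):
    """Cohen's d for an independent-samples comparison."""
    return (m1 - m2) / pooled_sd(s1, n1, s2, n2)

def t_stat(m1, s1, n1, m2, s2, n2):
    """Independent-samples t statistic with pooled variance."""
    sp = pooled_sd(s1, n1, s2, n2)
    return (m1 - m2) / (sp * math.sqrt(1 / n1 + 1 / n2))

# Reported ECERS-R summary statistics: Turkey vs. North Carolina, n = 20 each
d = cohens_d(4.7, 1.09, 20, 5.7, 0.83, 20)
t = t_stat(4.7, 1.09, 20, 5.7, 0.83, 20)
print(round(d, 2), round(t, 2))
```

The effect size of roughly one pooled standard deviation indicates that the classroom quality gap between the two samples is large by conventional benchmarks.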


For the total PAS scores on the quality of administrative practices, the t-test comparison between Turkey (M = 3.5, SD = .94) and North Carolina (M = 3.3, SD = .95) was not significant, t = .467, p = .643. Mean scores for Turkey and North Carolina on each PAS subscale fell within a low to medium range (1 = inadequate to 7 = excellent), and there were no significant differences between national contexts on any of the subscales.


An additional purpose of the study was to explore the associations between overall ratings of classroom environment quality and administrative quality in each quality improvement system. Pearson correlations were used to evaluate the strength of the relationship between the overall ECERS-R score and the overall PAS score in each system. In Turkey, the correlation between overall ECERS-R and PAS scores was significant, r = .73, p < .001; in North Carolina, it was also significant, r = .69, p = .001. These correlations indicate that ratings of classroom environment quality were strongly associated with the quality of administrative practices in both systems.
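The Methods section notes that the correlations from the two systems were compared, though the result of that comparison is not reported here. A standard approach is Fisher's r-to-z transformation; the sketch below applies it to the published values (r = .73 and r = .69, n = 20 each) purely for illustration.

```python
import math

def fisher_z_diff(r1, n1, r2, n2):
    """z statistic for the difference between two independent correlations."""
    z1, z2 = math.atanh(r1), math.atanh(r2)  # Fisher r-to-z transform
    se = math.sqrt(1 / (n1 - 3) + 1 / (n2 - 3))
    return (z1 - z2) / se

def two_tailed_p(z):
    """Two-tailed p-value from the standard normal distribution."""
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

# Reported correlations: Turkey r = .73, North Carolina r = .69, n = 20 each
z = fisher_z_diff(0.73, 20, 0.69, 20)
print(round(z, 2), round(two_tailed_p(z), 2))
```

With samples this small, the test has very little power, so a difference between correlations of this size would not approach significance.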


Discussion


The ECERS-R scores were significantly higher in programs from North Carolina than in programs from a mid-size city in Turkey. The high scores in North Carolina are consistent with previous studies conducted in the state and support the value of a well-developed QRIS based on clear standards, reliable monitoring, and intentional supports. In addition, the ECERS-R is used as an official assessment tool in the QRIS, giving early educators in these programs access to training on the tool’s expectations prior to its administration.


The results of the study did not indicate significant differences between the two systems in PAS scores measuring the quality of administrative practices. It was notable that, in both systems, the subscale scores fell in the low to medium range. Overall, the results suggest that administrative practices play a critical role in supporting high-quality ECE programs, as indicated by the significant, positive correlation between ratings of classroom environment quality and ratings of administrative quality in both Turkey and North Carolina.


Implications

In emerging quality improvement systems, such as in Turkey, there needs to be an emphasis on helping programs understand the standards for quality and ensuring that measurement systems are validated.


Results from this study suggest that it may be important for quality improvement systems in different cultural contexts to assess administrative practices as well as classroom quality.


Standards for administrative practices in different global contexts need to be established, and measures such as the PAS should then be included in program quality evaluation systems. The PAS was first developed, and most recently revised in 2022, to help program leaders in the United States incrementally improve administrative quality by establishing clear benchmarks associated with child care licensing standards (at the minimal level of quality), national program accreditation standards (at the good level of quality), and universal prekindergarten standards (at the excellent level of quality). Further research is needed to determine whether the PAS can reliably measure administrative quality in different cultural contexts.


Table 1

ECERS-R scores for Turkish and North Carolina Classrooms


References

  • Harms, T., Clifford, R. M., & Cryer, D. (2005). Early Childhood Environment Rating Scale–Revised. New York: Teachers College Press.
  • Hujala, E., Eskelinen, M., Keskinen, S., Chen, C., Inoue, C., Matsumoto, M., & Kawase, M. (2016). Leadership tasks in early childhood education in Finland, Japan, and Singapore. Journal of Research in Childhood Education 30(3), 406-421.
  • Kalkan, E., & Akman, B. (2010). The Turkish adaptation of the program administration scale. Procedia-Social and Behavioral Sciences, 2, 2060-2063.
  • Li, J. (2015). What do we know about the implementation of the quality rating and improvement system? A cross-cultural comparison in three countries. Doctoral Dissertation. The University of North Carolina at Greensboro.
  • Lower, J. K., & Cassidy, D. J. (2007). Child care work environments: The relationship with learning environments. Journal of Research in Childhood Education, 22(2), 189-204.
  • McCormick Center for Early Childhood Leadership. (2010a, Winter). Head Start administrative practices, director qualifications, and links to classroom quality. Research Notes. Wheeling, IL: National Louis University.
  • McCormick Center for Early Childhood Leadership. (2010b, Summer). Connecting the dots: Director qualifications, instructional leadership practices, and learning environments in early childhood programs. Research Notes. Wheeling, IL: National Louis University.
  • OECD. (2015). Strong start—IV: Improving monitoring policies and practice in early childhood education and care. Paris: OECD Publishing. Retrieved from https://doi.org/10.1787/9789264233515-en.
  • Sheridan, S., Giota, J., Han, Y. M., & Kwon, J. Y. (2009). A cross-cultural study of preschool quality in South Korea and Sweden: ECERS evaluations. Early Childhood Research Quarterly, 24(2), 142-156.
  • Talan, T. N., Bella, J. M., & Bloom, P. J. (2022). Program Administration Scale: Measuring whole leadership in early childhood centers. New York: Teachers College Press.
  • Talan, T. N., & Bloom, P. J. (2004). Program Administration Scale: Measuring early childhood leadership and management. New York: Teachers College Press.
  • Yaya-Bryson, D., Scott-Little, C., Akman, B., & Cassidy, D. (2020). A comparison of early childhood classroom environments and program administrative quality in Turkey and North Carolina. International Journal of Early Childhood, 52, 233-248.
By McCormick Center | May 13, 2025
Leaders, policymakers, and systems developers seek to improve early childhood programs through data-driven decision-making. Data can inform continuous quality improvement efforts at the classroom and program level and build support for workforce development at the system level. Early childhood program leaders use assessments to understand their programs’ strengths and to draw attention to where supports are needed. Assessment data are particularly useful in understanding the complexity of organizational climate and the organizational conditions that lead to successful outcomes for children and families. Several tools are available for program leaders to assess organizational structures, processes, and workplace conditions, including:

  • Preschool Program Quality Assessment (PQA) [1]
  • Program Administration Scale (PAS) [2]
  • Child Care Worker Job Stress Inventory (ECWJSI) [3]
  • Early Childhood Job Satisfaction Survey (ECJSS) [4]
  • Early Childhood Work Environment Survey (ECWES) [5]
  • Supportive Environmental Quality Underlying Adult Learning (SEQUAL) [6]

The Early Education Essentials is a recently developed tool to examine program conditions that affect early childhood education instructional and emotional quality. It is patterned after the Five Essentials Framework, [7] which is widely used to measure instructional supports in K-12 schools. The Early Education Essentials measures six dimensions of quality in early childhood programs:

  • Effective instructional leaders
  • Collaborative teachers
  • Supportive environment
  • Ambitious instruction
  • Involved families
  • Parent voice

A recently published validation study for the Early Education Essentials [8] demonstrates that it is a valid and reliable instrument that can be used to assess early childhood programs to improve teaching and learning outcomes.
Methodology

For this validation study, two sets of surveys were administered in one Midwestern city: one for teachers/staff in early childhood settings and one for parents/guardians of preschool-aged children. A stratified random sampling method was used to select sites, with oversampling based on the percentage of children who spoke Spanish. The teacher survey included 164 items within 26 scales and was made available online for a three-month period in the public schools. In community-based sites, data collectors administered the surveys to staff. Data collectors also administered the parent surveys in all sites. The parent survey was shorter, with 54 items within nine scales. Rasch analyses were used to combine items into scales.

In addition to the surveys, administrative data on school attendance were analyzed, and classroom observational assessments were performed to measure teacher-child interactions using the Classroom Assessment Scoring System (CLASS). [9]

Early Education Essentials surveys were analyzed from 81 early childhood program sites (41 school-based programs and 40 community-based programs) serving 3- and 4-year-old children. Only publicly funded programs (e.g., state-funded preschool and/or Head Start) were included in the study. The average enrollment for the programs was 109 (SD = 64); 91% of the children were from minority backgrounds, and 38% came from non-English-speaking homes. Of the 746 teacher surveys collected, 451 (61%) were from school-based sites and 294 (39%) were from community-based sites. There were 2,464 parent surveys collected (59% school; 41% community); about one-third were conducted in Spanish. Data were analyzed to determine reliability, internal validity, group differences, and sensitivity across sites.
Child outcome results were used to examine whether positive survey scores were related to desirable outcomes for children (attendance and teacher-child interactions). Hierarchical linear modeling (HLM) was used to compute average site-level CLASS scores to account for the shared variance among classrooms within the same school. Exploratory factor analysis was performed to group the scales.

Results

The surveys performed well on the measurement characteristics of scale reliability, internal validity, differential item functioning, and sensitivity across sites. Reliability was measured for 25 scales, with Rasch person reliability scores ranging from .73 to .92; only two scales fell below the preferred .80 threshold. The Rasch analysis also provided an assessment of internal validity, showing that 97% of the items fell within the acceptable range of 0.7 to 1.3 (infit mean squares). The Teacher/Staff Survey could detect differences across sites; the Parent Survey was less effective in detecting such differences.

Differential item functioning (DIF) was used to examine whether individual responses differed by setting (school- versus community-based) and by primary language (English versus Spanish speakers). Eighteen scales had no or only one large DIF on the Teacher/Staff Survey related to setting. There were no large DIFs related to setting on the Parent Survey, and only one scale had more than one large DIF related to primary language. The authors retained the large DIF items because their number was minimal and the items fit well across the various groups.

The factor analysis aligned closely with the five essentials in the K-12 model. However, researchers also identified a sixth factor, parent voice, which factored differently from involved families on the Parent Survey. The Early Education Essentials therefore has an additional dimension compared with the K-12 Five Essentials Framework.
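The infit criterion described above (items acceptable when infit mean squares fall between 0.7 and 1.3) amounts to a simple screening rule over item fit statistics. The sketch below expresses that rule; the infit values are hypothetical, not taken from the study.

```python
def flag_misfit(infit_values, low=0.7, high=1.3):
    """Return indices of items whose infit mean square falls outside the acceptable range."""
    return [i for i, v in enumerate(infit_values) if not (low <= v <= high)]

# Hypothetical infit mean squares for a 10-item scale
infits = [0.95, 1.10, 0.88, 1.25, 0.72, 1.41, 1.02, 0.97, 0.65, 1.08]
print(flag_misfit(infits))  # flags the items at index 5 (overfit threshold) and index 8
```

Items above 1.3 show more noise than the Rasch model expects (erratic responses), while items below 0.7 are redundant with the rest of the scale.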
Outcomes related to CLASS scores were found for two of the six essential supports. Positive associations were found between Effective Instructional Leaders and Collaborative Teachers and all three CLASS domains (Emotional Support, Classroom Organization, and Instructional Support). Significant associations with CLASS scores were not found for the Supportive Environment, Involved Families, or Parent Voice essentials, and Ambitious Instruction was not associated with any of the three CLASS domains. Table 1 (HLM Coefficients Relating Essential Scores to CLASS Scores, Model 1) reports these associations.

Outcomes related to student attendance were found for four of the six essential supports. Effective Instructional Leaders, Collaborative Teachers, Supportive Environment, and Involved Families were positively associated with student attendance; Ambitious Instruction and Parent Voice were not. The authors are continuing to examine and improve the tool to better measure developmentally appropriate instruction and to adapt the Parent Survey so that it performs consistently across sites.

A few limitations of this study should be considered. Because the research is based on correlations, the direction of the relationship between organizational conditions and outcomes is not evident: it is unknown whether the Early Education Essentials survey detects factors that affect outcomes (e.g., engaged families or positive teacher-child interactions) or whether the organizational conditions predict these outcomes. The study was also limited to one large city and a specific set of early childhood education settings, and the tool has not been tested with centers that do not receive Head Start or state pre-K funding.
Discussion

The Early Education Essentials survey expands the capacity of early childhood program leaders, policymakers, systems developers, and researchers to assess organizational conditions that specifically affect instructional quality. It is likely to be a useful tool for administrators seeking to evaluate the effects of their pedagogical leadership, one of the three domains of whole leadership. [10] When used with additional measures to assess whole leadership (administrative leadership and leadership essentials as well as pedagogical leadership), stakeholders will be able to understand the organizational conditions and supports that positively impact child and family outcomes. Many quality initiatives focus on assessment at the classroom level, but examining quality with a wider lens at the site level expands the opportunity for sustainable change and improvement.

The availability of valid and reliable instruments to assess the organizational structures, processes, and conditions within early childhood programs is necessary for data-driven improvement of programs as well as for systems development and applied research. Findings from this validation study confirm that strong instructional leadership and teacher collaboration are good predictors of effective teaching and learning practices, evidenced in supportive teacher-child interactions and student attendance. [11] This evidence is an important contribution to the growing body of knowledge informing embedded continuous quality improvement efforts. It also suggests that leadership that supports teacher collaboration, such as professional learning communities (PLCs) and communities of practice (CoPs), may have an effect on outcomes for children.

This study raises questions for future research. The addition of the parent voice essential support should be further explored: if parent voice is an essential support, why was it not related to CLASS scores or student attendance?
With the introduction of the Early Education Essentials survey to the existing battery of program assessment tools (PQA, PAS, ECWJSI, ECWES, ECJSS, and SEQUAL), a concurrent validity study is needed to determine how these tools are related and how they can best be used to examine early childhood leadership from a whole leadership perspective.

Endnotes

  1. High/Scope Educational Research Foundation, 2003
  2. Talan & Bloom, 2011
  3. Curbow, Spratt, Ungaretti, McDonnell, & Breckler, 2000
  4. Bloom, 2016
  5. Bloom, 2016
  6. Whitebook & Ryan, 2012
  7. Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010
  8. Ehrlich, Pacchiano, Stein, Wagner, Park, Frank, et al., 2018
  9. Pianta, La Paro, & Hamre, 2008
  10. Abel, Talan, & Masterson, 2017
  11. Bloom, 2016; Lower & Cassidy, 2007

References

  • Abel, M. B., Talan, T. N., & Masterson, M. (2017, Jan/Feb). Whole leadership: A framework for early childhood programs. Exchange, 39(233), 22-25.
  • Bloom, P. J. (2016). Measuring work attitudes in early childhood settings: Technical manual for the Early Childhood Job Satisfaction Survey (ECJSS) and the Early Childhood Work Environment Survey (ECWES) (3rd ed.). Lake Forest, IL: New Horizons.
  • Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago, IL: The University of Chicago Press.
  • Curbow, B., Spratt, K., Ungaretti, A., McDonnell, K., & Breckler, S. (2000). Development of the Child Care Worker Job Stress Inventory. Early Childhood Research Quarterly, 15, 515-536. https://doi.org/10.1016/S0885-2006(01)00068-0
  • Ehrlich, S. B., Pacchiano, D., Stein, A. G., Wagner, M. R., Park, S., Frank, E., et al. (in press). Early Education Essentials: Validation of a new survey tool of early education organizational conditions. Early Education and Development.
  • High/Scope Educational Research Foundation. (2003). Preschool Program Quality Assessment, 2nd edition (PQA) administration manual. Ypsilanti, MI: High/Scope Press.
  • Lower, J. K., & Cassidy, D. J. (2007). Child care work environments: The relationship with learning environments. Journal of Research in Childhood Education, 22(2), 189-204. https://doi.org/10.1080/02568540709594621
  • Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom Assessment Scoring System (CLASS). Baltimore, MD: Paul H. Brookes Publishing Co.
  • Talan, T. N., & Bloom, P. J. (2011). Program Administration Scale: Measuring early childhood leadership and management (2nd ed.). New York, NY: Teachers College Press.
  • Whitebook, M., & Ryan, S. (2012). Supportive Environmental Quality Underlying Adult Learning (SEQUAL). Berkeley, CA: Center for the Study of Child Care Employment, University of California.