Aim4Excellence™ Demonstrates Increased Director Competence

This resource is part of our Research Notes series. 


Since 2009, the McCormick Center for Early Childhood Leadership at National Louis University has offered the Aim4Excellence National Director Credential, an interactive online professional learning experience that focuses on the core leadership and management competencies that early childhood leaders need. To date, over 2,000 participants (directors, administrators, teachers, and family child care professionals) have completed one or more modules and over 1,300 have earned the Aim4Excellence National Director Credential. It was the first national director credential recognized by the National Association for the Education of Young Children (NAEYC) as meeting the alternative pathway training requirements for directors of centers seeking program accreditation. Nearly one-third of participants receive college credit for finishing the program. The content of Aim4Excellence is aligned with many state professional development credentials, and the Aim4Excellence credential is embedded in two state quality rating and improvement systems.[1]


The nine online modules that comprise Aim4Excellence not only provide the basics of early childhood program administration (finance, program operations, and human resources management) but also the essential knowledge and skills that directors need to empower staff and lead organizations that adapt to changing conditions. Administrators learn to apply principles of effective leadership to create compelling visions for their programs, become agents of change, walk the talk of ethical behavior, and embrace the paradoxes of their roles. Each module is the equivalent of approximately 16 clock hours (or 1 semester hour) of instruction. The nine modules are:

  • Module 1 – Leading the Way
  • Module 2 – Recruiting, Selecting, and Orienting Staff
  • Module 3 – Promoting Peak Performance
  • Module 4 – Managing Program Operations
  • Module 5 – Building a Sound Business Strategy
  • Module 6 – Planning Indoor and Outdoor Environments
  • Module 7 – Supporting Children’s Development and Learning
  • Module 8 – Creating Partnerships with Families
  • Module 9 – Evaluating Program Quality


Recently, the McCormick Center completed an evaluation of Aim4Excellence to assess its effectiveness. Findings from the study suggest Aim4Excellence participants were successful in completing the modules and their early childhood programs improved during the time they were enrolled in the program. The evaluation report describes the characteristics of Aim4Excellence participants and the programs they represented, completion rates, and change over time in administrative practices while directors were enrolled in Aim4Excellence. A full copy of the report is available here.


Sample

This study examined 1,372 individuals enrolled in Aim4Excellence between January 1, 2014 and December 31, 2017. Participants worked for 555 different organizations in 48 states and Canada. The racial composition of the participants was 58% White/Caucasian, 24% Black/African American, and 13% Hispanic/Latino. The average age of Aim4Excellence participants at enrollment was 42 years, ranging from 21 to 76 years. Although there was a broad range in participants’ educational background, nearly 60% of them had a bachelor’s degree or higher.


Methodology

Since the Aim4Excellence program is administered online, data were collected directly from participants during the registration process and during their engagement in the program through two online sources. Descriptive statistics were generated on the participants and outcomes data were analyzed to determine completion rates and assessment scores for each module. A total composite score was created by adding each of the module’s assessment scores together.


To study the change over time in administrative practice, on-site assessments were conducted using the Program Administration Scale (PAS)[2] at child care centers in four states (Delaware, Arizona, New Mexico, and Illinois) when Aim4Excellence participants first began the program and again when they completed it. The PAS is designed to reliably measure the leadership and management practices of center-based programs using a 7-point rating scale (inadequate to excellent), with 25 items grouped into 10 subscale categories. Matched pre- and post-training data were available for 30 of the 58 centers assessed. Paired sample t-tests were used to analyze the average change in leadership and management practice.
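The paired-samples comparison described above can be sketched in a few lines. The pre/post PAS scores below are hypothetical, and `paired_t` is an illustrative helper, not the study's actual analysis code:

```python
import math
import statistics

def paired_t(pre, post):
    """Paired-samples t-test: t = mean(diff) / (SD(diff) / sqrt(n)).

    Returns the t statistic and the degrees of freedom (n - 1).
    """
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = statistics.mean(diffs)
    sd_d = statistics.stdev(diffs)  # sample SD of the difference scores
    t = mean_d / (sd_d / math.sqrt(n))
    return t, n - 1

# Hypothetical Overall PAS scores (7-point scale) for six matched centers
pre  = [3.2, 4.1, 3.8, 2.9, 4.5, 3.6]
post = [3.9, 4.4, 4.0, 3.5, 4.6, 4.2]
t, df = paired_t(pre, post)
print(f"t({df}) = {t:.2f}")  # prints "t(5) = 4.11"
```

A positive t on the difference (post minus pre) indicates improvement; significance is then read from a t table at n − 1 degrees of freedom.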


Participants provided feedback by completing an evaluation survey at the end of each module and after finishing all of the modules. Participants rated the difficulty of each module on a 3-point scale: 1=piece of cake, very easy; 2=challenging but not overwhelming; 3=very difficult and challenging. They also rated their satisfaction with the content and organization of each module. The surveys used a Likert-type 5-point scale that ranged from 1=strongly disagree to 5=strongly agree. The survey responses were analyzed and descriptive statistics were computed. The results were examined for trends across the modules and compared with learning outcome results.


Results

Participants completed several scored assignments throughout each module to assess their learning. An overall score of 70% was considered passing on these Evidence of Learning assessments. Average scores ranged between 87% (Module 3) and 96% (Module 5). Table 1 shows means, standard deviations, and percentage of participants who passed the Evidence of Learning assessments for each module.


Evaluation surveys indicated that participants were sufficiently challenged in their learning and highly rated their experience in Aim4Excellence. In eight of the nine modules, over 80% of respondents rated the modules as challenging but not overwhelming. Participants also rated their satisfaction with various aspects of the content and organization of the program. Across all modules, participants provided the highest ratings when asked whether the module challenged them to consider new and different viewpoints (M=4.44, SD=0.73) and provided the lowest ratings when asked whether the video segments, Internet links, and audio pieces worked smoothly (M=4.20, SD=0.98).


Results also suggest that when directors participate in Aim4Excellence, administrative practice improves in their programs. Pretest and posttest results showed that Overall PAS scores increased .44 points (on a 7-point scale), or .19 standard deviations. Four of the subscales improved significantly: Center Operations, Child Assessment, Marketing and Public Relations, and Technology. Table 2 shows the change in the average overall PAS scores and nine subscale scores. Statistics include the pretest and posttest means and standard deviations, as well as the statistically significant change (t score) from the beginning to the end of the training.


Analysis of the 21 PAS items shows significant changes in Staff Orientation, t(27) = 2.87; Facilities Management, t(27) = 2.38; Assessment in Support of Learning, t(27) = 3.05; and Use of Technology, t(27) = 2.64. Effect sizes were small to medium, ranging from d = .40 to d = .58.
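Assuming the effect sizes were computed as the mean difference divided by the standard deviation of the differences (a common paired-samples formulation), Cohen's d can be recovered from each reported t statistic as d = t / √n. A quick check against the four reported values, with n = 28 matched centers (df = 27):

```python
import math

# t statistics for the four significant PAS items reported above
t_values = {
    "Staff Orientation": 2.87,
    "Facilities Management": 2.38,
    "Assessment in Support of Learning": 3.05,
    "Use of Technology": 2.64,
}
n = 28  # matched centers, so df = n - 1 = 27

# Paired-samples Cohen's d recovered from t: d = t / sqrt(n)
effect_sizes = {item: t / math.sqrt(n) for item, t in t_values.items()}
for item, d in effect_sizes.items():
    print(f"{item}: d = {d:.2f}")
```

The recovered values (roughly .45 to .58) sit within the small-to-medium range the report gives, which supports the assumed formulation.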


Discussion

Because Aim4Excellence is administered online, the availability of reliable and consistent data was greatly enhanced, with a robust sample of 1,372 participants over a four-year period. We can be confident in the results of this study because the large number of participants came from across the U.S., represented diverse demographic groups, and worked in over 500 early childhood programs.


Persistence, as a measure of performance, was found to be strong, with at least a 97% completion rate for each module. Results indicated that a majority of participants (51%) completed all nine modules. The average scores on the Evidence of Learning assessments were very high (ranging from 87% to 96%). The change-over-time study suggests that leadership and management practice improves in programs when their administrators participate in Aim4Excellence, particularly in the areas of orienting staff, managing facilities, assessing children’s learning, and using technology. Participants were very satisfied with their experience in the program and indicated that they expanded their knowledge and expertise, were challenged to consider new and different viewpoints, and found the resources interesting and informative.



REFERENCES

[1] Aim4Excellence is embedded in the Iowa and North Dakota QRIS.

[2] Talan, T. N., & Bloom, P. J. (2011). Program Administration Scale: Measuring early childhood leadership and management (2nd ed.). New York, NY: Teachers College Press.

By McCormick Center May 13, 2025
Leaders, policymakers, and systems developers seek to improve early childhood programs through data-driven decision-making. Data can be useful for informing continuous quality improvement efforts at the classroom and program level and for creating support for workforce development at the system level. Early childhood program leaders use assessments to help them understand their programs’ strengths and to draw attention to where supports are needed. Assessment data are particularly useful in understanding the complexity of organizational climate and the organizational conditions that lead to successful outcomes for children and families. Several tools are available for program leaders to assess organizational structures, processes, and workplace conditions, including:

  • Preschool Program Quality Assessment (PQA)[1]
  • Program Administration Scale (PAS)[2]
  • Child Care Worker Job Stress Inventory (ECWJSI)[3]
  • Early Childhood Job Satisfaction Survey (ECJSS)[4]
  • Early Childhood Work Environment Survey (ECWES)[5]
  • Supportive Environmental Quality Underlying Adult Learning (SEQUAL)[6]

The Early Education Essentials is a recently developed tool to examine program conditions that affect early childhood education instructional and emotional quality. It is patterned after the Five Essentials Framework,[7] which is widely used to measure instructional supports in K-12 schools. The Early Education Essentials measures six dimensions of quality in early childhood programs:

  • Effective instructional leaders
  • Collaborative teachers
  • Supportive environment
  • Ambitious instruction
  • Involved families
  • Parent voice

A recently published validation study for the Early Education Essentials[8] demonstrates that it is a valid and reliable instrument that can be used to assess early childhood programs to improve teaching and learning outcomes.
Methodology

For this validation study, two sets of surveys were administered in one Midwestern city: one for teachers/staff in early childhood settings and one for parents/guardians of preschool-aged children. A stratified random sampling method was used to select sites, with oversampling based on the percentage of children who spoke Spanish. The teacher surveys included 164 items within 26 scales and were made available online for a three-month period in the public schools. In community-based sites, data collectors administered the surveys to staff. Data collectors also administered the parent surveys in all sites. The parent survey was shorter, with 54 items within nine scales. Rasch analyses were used to combine items into scales. In addition to the surveys, administrative data were analyzed regarding school attendance. Classroom observational assessments were performed to measure teacher-child interactions, using the Classroom Assessment Scoring System (CLASS).[9]

Early Education Essentials surveys were analyzed from 81 early childhood program sites (41 school-based programs and 40 community-based programs) serving 3- and 4-year-old children. Only publicly funded programs (e.g., state-funded preschool and/or Head Start) were included in the study. The average enrollment for the programs was 109 (SD = 64); 91% of the children were from minority backgrounds and 38% came from non-English-speaking homes. Of the 746 teacher surveys collected, 451 (61%) were from school-based sites and 294 (39%) were from community-based sites. There were 2,464 parent surveys collected (59% school; 41% community). About one-third of the parent surveys were conducted in Spanish. Data were analyzed to determine reliability, internal validity, group differences, and sensitivity across sites.
Child outcome results were used to examine whether positive scores on the surveys were related to desirable outcomes for children (attendance and teacher-child interactions). Hierarchical linear modeling (HLM) was used to compute average site-level CLASS scores to account for the shared variance among classrooms within the same school. Exploratory factor analysis was performed to group the scales.

Results

The surveys performed well in the measurement characteristics of scale reliability, internal validity, differential item functioning, and sensitivity across sites. Reliability was measured for 25 scales, with Rasch person reliability scores ranging from .73 to .92; only two scales fell below the preferred .80 threshold. The Rasch analysis also provided an assessment of internal validity, showing that 97% of the items fell in an acceptable range of >0.7 to <1.3 (infit mean squares). The Teacher/Staff Survey could detect differences across sites; however, the Parent Survey was less effective in detecting differences across sites. Differential item functioning (DIF) was used to compare whether individual responses differed by setting (school- versus community-based) and by primary language (English versus Spanish speakers). Results showed that 18 scales had no or only one large DIF on the Teacher/Staff Survey related to setting. There were no large DIFs related to setting on the Parent Survey, and only one scale had more than one large DIF related to primary language. The authors decided to leave the large DIF items in the scales because the number of large DIFs was minimal and the items fit well with the various groups.

The factor analysis aligned closely with the five essentials in the K-12 model. However, researchers also identified a sixth factor, parent voice, which factored differently from involved families on the Parent Survey. Therefore, the Early Education Essentials has an additional dimension in contrast to the K-12 Five Essentials Framework.
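The infit acceptance criterion used in the Rasch analysis (items acceptable when infit mean squares fall between 0.7 and 1.3) amounts to a simple screening rule. A minimal sketch, with hypothetical item names and infit values:

```python
# Hypothetical infit mean-square values keyed by survey item
infit = {"item_01": 0.85, "item_02": 1.12, "item_03": 1.41, "item_04": 0.96}

# The study treated >0.7 to <1.3 as the acceptable infit range
acceptable = {k: v for k, v in infit.items() if 0.7 < v < 1.3}
pct_ok = 100 * len(acceptable) / len(infit)

print(sorted(acceptable))  # item_03 (1.41) falls outside the range
print(f"{pct_ok:.0f}% of items acceptable")
```

In the actual study this screen passed 97% of items, which the authors took as evidence of internal validity.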
Outcomes related to CLASS scores were found for two of the six essential supports. Positive associations were found between Effective Instructional Leaders and Collaborative Teachers and all three of the CLASS domains (Emotional Support, Classroom Organization, and Instructional Support). Significant associations with CLASS scores were not found for the Supportive Environment, Involved Families, or Parent Voice essentials. Ambitious Instruction was not associated with any of the three domains of the CLASS scores. Table 1, HLM Coefficients Relating Essential Scores to CLASS Scores (Model 1), shows the results of the analysis of these associations.

Outcomes related to student attendance were found for four of the six essential supports. Effective Instructional Leaders, Collaborative Teachers, Supportive Environment, and Involved Families were positively associated with student attendance. Ambitious Instruction and Parent Voice were not found to be associated with student attendance.

The authors are continuing to examine and improve the tool to better measure developmentally appropriate instruction and to adapt the Parent Survey so that it will perform across sites. There are a few limitations to this study that should be considered. Since the research is based on correlations, the direction of the relationship between factors and organizational conditions is not evident: it is unknown whether the Early Education Essentials survey is detecting factors that affect outcomes (e.g., engaged families or positive teacher-child interactions) or whether the organizational conditions predict these outcomes. The study was also limited to one large city and a specific set of early childhood education settings, and the tool has not been tested with early childhood centers that do not receive Head Start or state pre-K funding.
Discussion

The Early Education Essentials survey expands the capacity of early childhood program leaders, policymakers, systems developers, and researchers to assess organizational conditions that specifically affect instructional quality. It is likely to be a useful tool for administrators seeking to evaluate the effects of their pedagogical leadership, one of the three domains of whole leadership.[10] When used with additional measures to assess whole leadership (administrative leadership and leadership essentials, as well as pedagogical leadership), stakeholders will be able to understand the organizational conditions and supports that positively impact child and family outcomes. Many quality initiatives focus on assessment at the classroom level, but examining quality with a wider lens at the site level expands the opportunity for sustainable change and improvement. The availability of valid and reliable instruments to assess the organizational structures, processes, and conditions within early childhood programs is necessary for data-driven improvement of programs as well as for systems development and applied research.

Findings from this validation study confirm that strong instructional leadership and teacher collaboration are good predictors of effective teaching and learning practices, evidenced in supportive teacher-child interactions and student attendance.[11] This evidence is an important contribution to the growing body of knowledge informing embedded continuous quality improvement efforts. It also suggests that leadership supporting teacher collaboration, such as professional learning communities (PLCs) and communities of practice (CoPs), may have an effect on outcomes for children.

This study raises questions for future research. The addition of the “parent voice” essential support should be further explored: if parent voice is an essential support, why was it not related to CLASS scores or student attendance?
With the introduction of the Early Education Essentials survey to the existing battery of program assessment tools (PQA, PAS, ECWJSI, ECWES, ECJSS, and SEQUAL), a concurrent validity study is needed to determine how these tools are related and how they can best be used to examine early childhood leadership from a whole leadership perspective.

ENDNOTES

[1] High/Scope Educational Research Foundation, 2003
[2] Talan & Bloom, 2011
[3] Curbow, Spratt, Ungaretti, McDonnell, & Breckler, 2000
[4] Bloom, 2016
[5] Bloom, 2016
[6] Whitebook & Ryan, 2012
[7] Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010
[8] Ehrlich, Pacchiano, Stein, Wagner, Park, Frank, et al., 2018
[9] Pianta, La Paro, & Hamre, 2008
[10] Abel, Talan, & Masterson, 2017
[11] Bloom, 2016; Lower & Cassidy, 2007

REFERENCES

Abel, M. B., Talan, T. N., & Masterson, M. (2017, Jan/Feb). Whole leadership: A framework for early childhood programs. Exchange, 39(233), 22-25.

Bloom, P. J. (2016). Measuring work attitudes in early childhood settings: Technical manual for the Early Childhood Job Satisfaction Survey (ECJSS) and the Early Childhood Work Environment Survey (ECWES) (3rd ed.). Lake Forest, IL: New Horizons.

Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago, IL: The University of Chicago Press.

Curbow, B., Spratt, K., Ungaretti, A., McDonnell, K., & Breckler, S. (2000). Development of the Child Care Worker Job Stress Inventory. Early Childhood Research Quarterly, 15, 515-536. DOI: 10.1016/S0885-2006(01)00068-0

Ehrlich, S. B., Pacchiano, D., Stein, A. G., Wagner, M. R., Park, S., Frank, E., et al. (in press). Early Education Essentials: Validation of a new survey tool of early education organizational conditions. Early Education and Development.

High/Scope Educational Research Foundation (2003). Preschool Program Quality Assessment, 2nd Edition (PQA) administration manual. Ypsilanti, MI: High/Scope Press.

Lower, J. K., & Cassidy, D. J. (2007). Child care work environments: The relationship with learning environments. Journal of Research in Childhood Education, 22(2), 189-204. DOI: 10.1080/02568540709594621

Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom Assessment Scoring System (CLASS). Baltimore, MD: Paul H. Brookes Publishing Co.

Talan, T. N., & Bloom, P. J. (2011). Program Administration Scale: Measuring early childhood leadership and management (2nd ed.). New York, NY: Teachers College Press.

Whitebook, M., & Ryan, S. (2012). Supportive Environmental Quality Underlying Adult Learning (SEQUAL). Berkeley, CA: Center for the Study of Child Care Employment, University of California.