Big Data and Little Kids: In Whose Best Interest?

Editor’s Note: An open dialogue is vital to progress! The views of guest authors are not necessarily a reflection of the McCormick Center’s opinions or beliefs. 


Americans love data. We cannot get enough of it. Collectors on speed, we measure every indicator in sight. Children are the youngest, most fragile casualties of our obsessive-compulsive disorder. How many words do they have in their emergent lexicons? Do they know their letters? Can they count up to 20? Are they ready for school? Are they reading The Sorcerer’s Stone ahead of the third-grade benchmarks? They’re on treadmills, each milestone anxiously awaited and dutifully recorded.


Nothing is off limits in our pursuit of cognitive development and predictors of academic achievement. And we start early. Several years ago, Susan Goldin-Meadow and Meredith Rowe, two researchers from the University of Chicago, published a study in Science that revealed a gap in infants’ gesturing across socioeconomic lines—a potential harbinger of stunted language acquisition. In 2014, amid vociferous debate about the Common Core standards, Susan Sirigatti, a former school principal, extrapolated from their findings in a post on a blog called “A Smarter Beginning.” The headline read “Gesturing Predicts Children’s Future School Success”—a symptom of our ever-growing anxiety.


Recently, Megan McClelland, an associate professor of health and child development at Oregon State University, tracked the outcomes of 400 preschoolers who had played a tweaked version of the classic children’s game, “Head, Shoulders, Knees, and Toes.” In the original game, the directives match the body part; in McClelland’s experiment, she required the children to do the opposite, touching their toes, for example, when asked to touch their heads.


Never mind that some of her undergraduates had trouble executing the task; she got her rich lode of data. Children who were able to finesse the exercise were more likely to pay attention in class or keep their noses to the grindstone in specific activities—signs of a well-functioning prefrontal cortex, the locus of school readiness and academic success.


Kindergarten assessments are among the latest manifestations of our obsessive-compulsive tendencies. Their origins can be traced to a 1989 meeting of George H.W. Bush and the nation’s governors at which school readiness floated to the top of the agenda, later enshrined in the Goals 2000: Educate America Act of 1994. The first Bush, grandfather of No Child Left Behind, had officially welcomed early childhood to the beleaguered precincts of K-12 education reform.


I welcomed this development. I must have been crazy. The landscape of early learning was a mess: a patchwork of public and private programs plagued by uneven quality, an ill-educated, generally impoverished workforce, and anemic investment. How the field was going to get these young students with their variable backgrounds prepared for kindergarten was anyone’s guess. And then there’s that little problem of definition. What does school readiness mean? How do you know if a child is prepared—and for what? 


A classic survey of parents and teachers, conducted on the cusp of the Goals 2000 legislation, highlights the divergence of opinion. The surveyors divided their work into two clusters of behavioral and school-related items. The first category included the ability to verbally communicate needs, wants, and thoughts; take turns and share; display enthusiasm and curiosity in approaching new activities; and sit still and pay attention. The second listed proficiency with pencils and paintbrushes, counting ability (up to 20 or more), and knowledge of the letters of the alphabet.


Both groups converged on the need for well-honed communication skills and positive approaches to learning. But stark disagreement emerged on the academic items: the percentage of parents who rated them very important or essential was six to eight times greater than the percentage of teachers who did.


Assessing readiness, “a somewhat narrow and artificial construct of questionable merit,” as one early childhood expert put it, is daunting. Kids develop on wildly different timelines, their progress difficult to capture in a snapshot. But that doesn’t stop us. Today, a growing number of states are adopting universal assessment of kindergarten students, grappling with the challenges of reliability and validity in the instruments they use.


As Jennifer Stedron and Alexander Berger noted in a technical report for the National Conference of State Legislatures, “ideally, evaluation of the complicated set of skills and behaviors that comprise ‘school readiness’ would use multiple assessment methods.” Nuance, needless to say, does not come cheap, and children suffer the consequences.


“Our protective urges are stymied,” Peter Mangione, co-director of WestEd’s Center for Child and Family Studies in Sausalito, California, told me. “Our tenderness is critical for their sense of well-being.” A child psychologist, he served as a technical advisor to Ohio when the state was crafting its standards for children from birth to age 5. He worries that we expect infants to act like third-graders: “We’re asking the child to do what they’re not ready to do, and we’re not supporting what they’re ready and meant to do.” 


Last December, in the wake of a survey of kindergarten teachers, the Maryland State Education Association called for immediate suspension of the state’s readiness assessment. The teachers had delivered an hour-long test to five-year-olds whom they barely knew on skills that they had not yet taught. Nearly 30 percent of the students were unable to understand and use the technology required by the exam. Many teachers lamented the loss of critical time for bonding with their eager, young learners. And 63 percent of them reported that they had received no meaningful data to inform instruction from administration of the exam. 


As the southern writer Walker Percy observed, authentic knowledge remains elusive. “The scientist, in practicing the scientific method, cannot utter a single word about an individual thing or creature insofar as it is an individual,” he wrote in “Diagnosing the Modern Malaise,” one of a collection of essays in Signposts in a Strange Land. “This limitation holds true whether the individual is a molecule of NaCl or an amoeba or a human being.”


I am not arguing that we should bury our heads in the sand. Assessment is necessary, a critical tool for marking human progress, and for surfacing the deep inequalities that mark young children’s development and learning in the U.S. But America’s youngest students have had the grave misfortune to enter the academic arena in a period of measurement gone terribly awry. We need to come to a consensus on the kind of data that’s worth collecting, and we need to stop putting our smallest learners under the microscope, squashing their own insatiable quest for data and knowledge about the world. 


An abridged version of this article appeared in the Albany Times Union on April 22, 2015.


Susan Ochshorn is the author of Squandering America’s Future – Why ECE Policy Matters for Equality, Our Economy, and Our Children. She is also the founder of the consulting firm ECE PolicyWorks. A former journalist, Ochshorn blogs at the Huffington Post and ECE Policy Matters, the go-to place for early childhood teachers, those who train them, and the decision makers who determine their professional course.

By McCormick Center | May 13, 2025
Leaders, policymakers, and systems developers seek to improve early childhood programs through data-driven decision-making. Data can be useful for informing continuous quality improvement efforts at the classroom and program level and for creating support for workforce development at the system level. Early childhood program leaders use assessments to help them understand their programs’ strengths and to draw attention to where supports are needed. Assessment data is particularly useful in understanding the complexity of organizational climate and the organizational conditions that lead to successful outcomes for children and families.

Several tools are available for program leaders to assess organizational structures, processes, and workplace conditions, including:

- Preschool Program Quality Assessment (PQA)1
- Program Administration Scale (PAS)2
- Child Care Worker Job Stress Inventory (ECWJSI)3
- Early Childhood Job Satisfaction Survey (ECJSS)4
- Early Childhood Work Environment Survey (ECWES)5
- Supportive Environmental Quality Underlying Adult Learning (SEQUAL)6

The Early Education Essentials is a recently developed tool to examine program conditions that affect early childhood education instructional and emotional quality. It is patterned after the Five Essentials Framework,7 which is widely used to measure instructional supports in K-12 schools. The Early Education Essentials measures six dimensions of quality in early childhood programs:

- Effective instructional leaders
- Collaborative teachers
- Supportive environment
- Ambitious instruction
- Involved families
- Parent voice

A recently published validation study for the Early Education Essentials8 demonstrates that it is a valid and reliable instrument that can be used to assess early childhood programs to improve teaching and learning outcomes.
METHODOLOGY

For this validation study, two sets of surveys were administered in one Midwestern city: one for teachers and staff in early childhood settings and one for parents and guardians of preschool-aged children. A stratified random sampling method was used to select sites, with oversampling based on the percentage of children who spoke Spanish. The teacher surveys included 164 items within 26 scales and were made available online for a three-month period in the public schools; in community-based sites, data collectors administered the surveys to staff. Data collectors also administered the parent surveys in all sites. The parent survey was shorter, with 54 items within nine scales. Rasch analysis was used to combine items into scales.

In addition to the surveys, administrative data on school attendance were analyzed, and classroom observational assessments were performed to measure teacher-child interactions using the Classroom Assessment Scoring System (CLASS).9

Early Education Essentials surveys were analyzed from 81 early childhood program sites (41 school-based and 40 community-based) serving 3- and 4-year-old children. Only publicly funded programs (e.g., state-funded preschool and/or Head Start) were included in the study. The average enrollment was 109 children (SD = 64); 91% of the children were from minority backgrounds, and 38% came from non-English-speaking homes. Of the 746 teacher surveys collected, 451 (61%) were from school-based sites and 294 (39%) from community-based sites. There were 2,464 parent surveys collected (59% school; 41% community); about one-third were conducted in Spanish. Data were analyzed to determine reliability, internal validity, group differences, and sensitivity across sites.
Child outcome data were used to examine whether positive scores on the surveys were related to desirable outcomes for children (attendance and teacher-child interactions). Hierarchical linear modeling (HLM) was used to compute average site-level CLASS scores, accounting for the shared variance among classrooms within the same school. Exploratory factor analysis was performed to group the scales.

RESULTS

The surveys performed well on the measurement characteristics of scale reliability, internal validity, differential item functioning, and sensitivity across sites. Reliability was measured for 25 scales, with Rasch person reliability scores ranging from .73 to .92; only two scales fell below the preferred .80 threshold. The Rasch analysis also provided an assessment of internal validity, showing that 97% of the items fell in the acceptable range of infit mean squares (greater than 0.7 and less than 1.3). The Teacher/Staff Survey could detect differences across sites; the Parent Survey was less effective in doing so.

Differential item functioning (DIF) was used to compare whether individual responses differed by setting (school- versus community-based) and by primary language (English versus Spanish). Eighteen scales had no more than one large DIF on the Teacher/Staff Survey related to setting. There were no large DIFs related to setting on the Parent Survey, and only one scale had more than one large DIF related to primary language. The authors decided to leave the large-DIF items in the scales because the number of large DIFs was minimal and the items fit well with the various groups.

The factor analysis aligned closely with the five essentials in the K-12 model. However, researchers also identified a sixth factor, parent voice, which factored differently from involved families on the Parent Survey. The Early Education Essentials therefore has an additional dimension in contrast to the K-12 Five Essentials Framework.
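The psychometric screens reported above reduce to two numeric checks per scale: Rasch person reliability against the preferred .80 threshold, and each item's infit mean square against the 0.7-1.3 acceptance window. A minimal sketch of that screening logic, with made-up scale names and values (not the study's data):

```python
def screen_scale(reliability, infit_values):
    """Return (meets_reliability, share_of_items_in_fit_range).

    reliability: Rasch person reliability for the scale (preferred >= .80).
    infit_values: infit mean squares for the scale's items; the
    acceptable range reported in the study is greater than 0.7
    and less than 1.3.
    """
    in_range = [0.7 < v < 1.3 for v in infit_values]
    return reliability >= 0.80, sum(in_range) / len(in_range)


# Illustrative scales only -- these numbers are invented.
scales = {
    "Teacher Collaboration": (0.88, [0.9, 1.1, 1.05, 0.85]),
    "Parent Voice":          (0.76, [1.2, 0.95, 1.4, 1.0]),
}

for name, (rel, infits) in scales.items():
    ok_rel, fit_share = screen_scale(rel, infits)
    print(f"{name}: reliability ok={ok_rel}, items in fit range={fit_share:.0%}")
```

In the study itself, 25 scales passed through checks of this kind, with only two falling below the reliability threshold and 97% of items inside the infit window.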
Outcomes related to CLASS scores were found for two of the six essential supports. Positive associations were found between Effective Instructional Leaders and Collaborative Teachers and all three CLASS domains (Emotional Support, Classroom Organization, and Instructional Support). Significant associations with CLASS scores were not found for the Supportive Environment, Involved Families, or Parent Voice essentials, and Ambitious Instruction was not associated with any of the three CLASS domains. Table 1 (HLM Coefficients Relating Essential Scores to CLASS Scores, Model 1) shows these associations.

Outcomes related to student attendance were found for four of the six essential supports. Effective Instructional Leaders, Collaborative Teachers, Supportive Environment, and Involved Families were positively associated with student attendance; Ambitious Instruction and Parent Voice were not. The authors are continuing to examine and improve the tool to better measure developmentally appropriate instruction and to adapt the Parent Survey so that it performs across sites.

There are a few limitations to this study that should be considered. Because the research is based on correlations, the direction of the relationship between organizational conditions and outcomes is not evident: it is unknown whether the Early Education Essentials survey is detecting factors that affect outcomes (e.g., engaged families or positive teacher-child interactions) or whether the organizational conditions predict these outcomes. The study was also limited to one large city and a specific set of early childhood education settings; the tool has not been tested with early childhood centers that do not receive Head Start or state pre-K funding.
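The HLM step, in which classrooms are nested within sites and a site-level Essentials score predicts classroom CLASS scores, can be sketched as a random-intercept mixed model. This is an illustrative reconstruction on synthetic data, not the study's model or data; the variable names (`essential`, `class_score`, `site`) are invented:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
rows = []
for site in range(40):
    essential = rng.normal()             # site-level essential-support score
    site_effect = rng.normal(scale=0.5)  # shared variance within a site
    for _ in range(4):                   # four observed classrooms per site
        rows.append({
            "site": site,
            "essential": essential,
            # classroom CLASS score driven partly by the site's essential score
            "class_score": 0.6 * essential + site_effect + rng.normal(scale=0.3),
        })
df = pd.DataFrame(rows)

# A random intercept per site accounts for classrooms nested in the same
# program, analogous to the study's hierarchical linear model.
fit = smf.mixedlm("class_score ~ essential", df, groups=df["site"]).fit()
print(fit.params["essential"])  # estimated site-level association
```

Coefficients of this kind, estimated per essential support and per CLASS domain, are what Table 1 of the study reports.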
DISCUSSION

The Early Education Essentials survey expands the capacity of early childhood program leaders, policymakers, systems developers, and researchers to assess organizational conditions that specifically affect instructional quality. It is likely to be a useful tool for administrators seeking to evaluate the effects of their pedagogical leadership, one of the three domains of whole leadership.10 When used with additional measures that assess whole leadership (administrative leadership and leadership essentials, as well as pedagogical leadership), stakeholders will be able to understand the organizational conditions and supports that positively affect child and family outcomes.

Many quality initiatives focus on assessment at the classroom level, but examining quality with a wider lens, at the site level, expands the opportunity for sustainable change and improvement. The availability of valid and reliable instruments to assess the organizational structures, processes, and conditions within early childhood programs is necessary for data-driven improvement of programs as well as for systems development and applied research.

Findings from this validation study confirm that strong instructional leadership and teacher collaboration are good predictors of effective teaching and learning practices, evidenced in supportive teacher-child interactions and student attendance.11 This evidence is an important contribution to the growing body of knowledge informing embedded continuous quality improvement efforts. It also suggests that leadership that supports teacher collaboration, such as professional learning communities (PLCs) and communities of practice (CoPs), may affect outcomes for children.

This study raises questions for future research. The addition of the parent voice essential support should be further explored: if parent voice is an essential support, why was it not related to CLASS scores or student attendance?
With the introduction of the Early Education Essentials survey to the existing battery of program assessment tools (PQA, PAS, ECWJSI, ECWES, ECJSS, and SEQUAL), a concurrent validity study is needed to determine how these tools are related and how they can best be used to examine early childhood leadership from a whole leadership perspective.

ENDNOTES

1 High/Scope Educational Research Foundation, 2003
2 Talan & Bloom, 2011
3 Curbow, Spratt, Ungaretti, McDonnell, & Breckler, 2000
4 Bloom, 2016
5 Bloom, 2016
6 Whitebook & Ryan, 2012
7 Bryk, Sebring, Allensworth, Luppescu, & Easton, 2010
8 Ehrlich, Pacchiano, Stein, Wagner, Park, Frank, et al., 2018
9 Pianta, La Paro, & Hamre, 2008
10 Abel, Talan, & Masterson, 2017
11 Bloom, 2016; Lower & Cassidy, 2007

REFERENCES

Abel, M. B., Talan, T. N., & Masterson, M. (2017, Jan/Feb). Whole leadership: A framework for early childhood programs. Exchange, 39(233), 22-25.

Bloom, P. J. (2016). Measuring work attitudes in early childhood settings: Technical manual for the Early Childhood Job Satisfaction Survey (ECJSS) and the Early Childhood Work Environment Survey (ECWES) (3rd ed.). Lake Forest, IL: New Horizons.

Bryk, A. S., Sebring, P. B., Allensworth, E., Luppescu, S., & Easton, J. Q. (2010). Organizing schools for improvement: Lessons from Chicago. Chicago, IL: The University of Chicago Press.

Curbow, B., Spratt, K., Ungaretti, A., McDonnell, K., & Breckler, S. (2000). Development of the Child Care Worker Job Stress Inventory. Early Childhood Research Quarterly, 15, 515-536. DOI: 10.1016/S0885-2006(01)00068-0

Ehrlich, S. B., Pacchiano, D., Stein, A. G., Wagner, M. R., Park, S., Frank, E., et al. (in press). Early Education Essentials: Validation of a new survey tool of early education organizational conditions. Early Education and Development.

High/Scope Educational Research Foundation. (2003). Preschool Program Quality Assessment, 2nd edition (PQA) administration manual. Ypsilanti, MI: High/Scope Press.

Lower, J. K., & Cassidy, D. J. (2007). Child care work environments: The relationship with learning environments. Journal of Research in Childhood Education, 22(2), 189-204. DOI: 10.1080/02568540709594621

Pianta, R. C., La Paro, K. M., & Hamre, B. K. (2008). Classroom Assessment Scoring System (CLASS). Baltimore, MD: Paul H. Brookes Publishing Co.

Talan, T. N., & Bloom, P. J. (2011). Program Administration Scale: Measuring early childhood leadership and management (2nd ed.). New York, NY: Teachers College Press.

Whitebook, M., & Ryan, S. (2012). Supportive Environmental Quality Underlying Adult Learning (SEQUAL). Berkeley, CA: Center for the Study of Child Care Employment, University of California.