15 Comments

There is also the issue of whether randomization was maintained. One paper (that I don't have to hand) suggested that some parents with children in the larger classes found out about the experiment and pushed for their child to be moved to the small class. Since these parents tended to be higher in SES, this would contribute to an overestimation of the benefits of the small class.

Chetty was good enough to review the post, but it's not like he's putting himself out there trying to correct the record by criticizing all those articles trumpeting insanely exaggerated claims.

Well, if you're failing to measure a significant effect, the obvious thing to do is to try larger and larger differences until you get a clear correlation, and ideally causation.

The two hypotheses to test here are that education needs dramatic improvements to produce results, or that it has already consumed all of its low-hanging fruit and we're spending a lot for sharply diminished returns. Note that both could be true (and this would arguably be the best result).

Testing the second should be easiest. Identify kids who, for the most random reason possible, missed a year of kindergarten. Or two. Or all of them.

“Unfortunately, the world is so eager for stories about the power of early education that their paper is being badly misinterpreted. “

Yep, any explanation *except* heredity. We’ve been failing at such ever since “Project Head Start”.

Motivated misunderstanding ru(i)ns the world.

"Motivated misunderstanding" saved to disk.

The STAR study 1) only placed students aged 5 to 8 in smaller classes, particularly troublesome because environmental impacts on academic outcomes of children tend to fade over time, 2) was conducted in a state that was a significant negative outlier in all manner of educational metrics, undermining our confidence that its results could be generalized, and 3) has met with consistent skepticism about its randomization processes.

WHAT state?

Tennessee

Nice analysis

Chetty does millions worth of econometrics studies to conclude what any peasant grandmother could have told you in 1950, but somehow spins it as justification for massive gov’t programs. He loves marketing his “findings” with sensational headlines fed through friendly media outlets to the uncritical masses. I find very little practical value in his work or analysis.

The "whatever any peasant grandmother could have told you" argument is a bad one. The peasant grandmother would also have told you many old wives' tales that we know to be false, many ludicrously so. There is great value in empirically testing common wisdom, much of which is not wisdom at all.

Absolutely. When people tell me that a research finding is obvious, I reply that the purpose of research is to separate what is obvious and true from what is obvious and false.

It's still a matter of cost/benefit analysis: the cost of the research (how much money did it take to confirm or deny conventional knowledge?) weighed against whether the findings lead to practical, actionable policy prescriptions that have some level of efficacy, rather than more administrative black holes.

I have a hard time believing any of this social science econometrics wizardry is ever worth it.

Thank You. Real analysis ALWAYS matters.
