Contributed by David Letzler.

In 2009 the National Governors’ Association and the Council of Chief State School Officers resolved to develop a set of national education standards known as the Common Core. These were intended to unify what had been to that point a set of highly localized state education standards. The hope of the Common Core was that a nationally oriented education policy would provide uniform benchmarks to guide and evaluate the American educational system. By 2013, most states had begun implementing the Common Core. That year, New York State began using the Common Core as the basis of statewide English and math tests administered to all public school students in grades 3 through 8. The Common Core and its associated tests have since come under some criticism. Many parents and education professionals have complained that the Common Core has increased the focus on high-stakes testing and, subsequently, "teaching to the test." Moreover, they believe that the uniform standards do not make sufficient allowance for students to learn at different paces and in different ways, and that the money spent on building the Common Core could have been used to directly support schools. That is an important debate, but it's one for another essay. For now, those of us interested in public education in New York City have four years of data to examine regarding school performance on the tests. To do so, I developed the New York City Common Core App to chart the performance of all NYC public schools on the math and English tests for grades 3-8 during the period 2013-2016. In this post, I will examine the insights provided by this data.
In brief, the results of my analysis suggest the following: 1) the results show the clear effect of neighborhood income level, though that effect is far from all-encompassing; 2) there has been clear progress on the English tests over time, both from one grade cohort to the next and longitudinally throughout a cohort; and 3) progress on the math test has been more elusive, especially in disadvantaged neighborhoods.

To construct my app, I combined four data sources. I downloaded NYC public school test scores from NYC Open Data. I overlaid these results onto a map of NYC's official 2010 census tracts, using a shapefile downloaded from the NYC Planning website. I found income data on these tracts from the official US Census Bureau's American FactFinder website. Finally, I found the addresses of all public schools on the NYC Department of Education website, then acquired their geographic coordinates by querying Google Maps' geocoding API. I combined this material in R and built the app in Shiny, primarily using the Leaflet package. The relevant code and data are here.
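The app itself was built in R, but the core of the data preparation is two joins: school test scores are linked to geocoded school locations, and locations are linked to tract-level income. As a language-neutral sketch of that join logic in Python/pandas (all column names and values here are illustrative assumptions, not the real data):

```python
import pandas as pd

# Toy stand-ins for the four sources described above:
# scores from NYC Open Data, coordinates/tracts from geocoded DOE
# addresses, and tract income from American FactFinder.
scores = pd.DataFrame({
    "dbn": ["01M015", "01M019"],       # NYC school identifier
    "year": [2016, 2016],
    "grade": [3, 3],
    "pct_proficient": [41.2, 28.7],
})
coords = pd.DataFrame({
    "dbn": ["01M015", "01M019"],
    "lat": [40.7220, 40.7301],
    "lon": [-73.9780, -73.9842],
    "tract": ["36061002601", "36061002801"],
})
income = pd.DataFrame({
    "tract": ["36061002601", "36061002801"],
    "median_income": [32000, 47500],
})

# Join scores to coordinates by school, then to income by census tract
merged = scores.merge(coords, on="dbn").merge(income, on="tract")
print(merged[["dbn", "pct_proficient", "median_income"]])
```

In the real pipeline, the school-to-tract assignment comes from overlaying geocoded points on the census shapefile rather than a pre-built lookup column.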

Let's take a look at the city as a whole first. Here is the city map, displaying test results on the math exam from 2016. Each dot represents one school, with the green end of the color spectrum representing a higher percentage of students who scored at the "proficient" levels (i.e., scored at Level 3 or 4) and the red end representing lower percentages.
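The coloring rule for the dots is a simple red-to-yellow-to-green gradient over the proficiency percentage. A minimal sketch of that mapping (the exact palette and breakpoints in the app are my assumptions; the R app uses Leaflet's own palette helpers):

```python
def dot_color(pct):
    """Interpolate red -> yellow -> green over percent proficient (0-100).

    Low proficiency maps to red (#ff0000), 50% to yellow (#ffff00),
    high proficiency to green (#00ff00).
    """
    t = max(0.0, min(1.0, pct / 100.0))
    if t < 0.5:                        # red to yellow: ramp green channel up
        r, g = 255, int(510 * t)
    else:                              # yellow to green: ramp red channel down
        r, g = int(510 * (1 - t)), 255
    return f"#{r:02x}{g:02x}00"

print(dot_color(5), dot_color(50), dot_color(95))
```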

Those of you who know the city well will probably nod. A map charting performance on the math test essentially doubles as a demographic map of the city. There’s a strong concentration of red and orange in eastern Brooklyn, the south Bronx, and northern Manhattan. These are largely low-income, heavily African-American and Latino neighborhoods. Meanwhile, the green dots are concentrated in locations like mid- and downtown Manhattan and northeastern Queens, predominantly upscale white and East Asian communities. The yellow dots largely cover middle-class, white areas like central Queens, southern Brooklyn, and Staten Island. The distinctions are a little less visible, but equally present, on the map for the English test: the deep red areas in Brooklyn and the Bronx become a little more orange, while northeastern Queens moves toward the yellow range, probably due to the communities of high-skilled Asian immigrants in that area. The correlations of math and English scores with median household income are r = 0.47 and r = 0.51, respectively, at vanishingly small p-values.
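Those correlations are plain Pearson coefficients between tract-level median income and school proficiency rates. A sketch of the computation on synthetic data (the numbers below are simulated, not the post's real figures, which came out at r = 0.47 for math and r = 0.51 for English):

```python
import numpy as np
from scipy import stats

# Simulated stand-ins: tract median incomes and school proficiency
# rates with a weak positive relationship plus noise.
rng = np.random.default_rng(0)
income = rng.uniform(20_000, 200_000, size=300)
noise = rng.normal(0, 15, size=300)
pct_proficient = 0.0002 * income + noise

# Pearson correlation and its p-value, as reported in the post
r, p = stats.pearsonr(income, pct_proficient)
print(f"r = {r:.2f}, p = {p:.2e}")
```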

If we break down the data temporally, we can see two contrary stories emerge. The positive one involves the year-by-year results. If we look at the scores on the 2013 English exams, the map looks a good deal less friendly than the one from 2016.

In fact, only 27.6% of students scored as proficient in 2013, compared to 39.3% in 2016. Over time, the scores have steadily improved. We see a more muted version of that trend on the math exams, moving from 31.0% to 37.6%. The real-world causes behind those trends might derive from several sources. One might be that the Common Core has done what it was intended to do: slowly improve proficiency by holding students to uniform, high standards. It may, however, be that students are simply getting better at adapting to the test or that, after complaints about vagueness on the ELA test questions in early years, the questions themselves have improved and students can more readily answer them. For that matter, it may also be that the increasing trend of parents “opting out” of the tests on behalf of their children has caused weaker students to be disproportionately excluded from the test pool, which would artificially raise the scores without actually improving student proficiency. Still, the fact that steadily more students are scoring at the proficient level is a positive sign.

The negative trend, however, is that students appear to perform progressively worse in higher grades, especially in math. If we take 2016, we see that math performance is relatively fine through the fourth grade, at over 40% system-wide. As we move toward the middle grades, though, we see noticeable declines. By the eighth grade, there are ugly red markers everywhere on the map, with many schools shepherding fewer than 10% of their students to proficiency.

Total proficiency is down to 26.4% in eighth grade, compared to 41.8% among that year's third-graders. We don’t see that trend in English—there may, at most, be a few dropped points between elementary and middle school. In other words, the system overall seems to be improving as the years go by, but students perform worse as they age. If we look longitudinally, we see the 2013 third-graders’ math proficiency is essentially unchanged by the time they are sixth-graders in 2016, with the fourth-graders a few points higher as seventh-graders and the fifth-graders a few points lower as eighth-graders. On the English side, things are more positive longitudinally—the 2013 cohorts in the third, fourth, and fifth grades had all improved their proficiency by about ten percentage points by sixth, seventh, and eighth grades.
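Following a cohort longitudinally just means holding the difference between year and grade constant: students in grade 3 in 2013 are the grade-6 students of 2016. A sketch of that bookkeeping in pandas (the proficiency figures below are illustrative placeholders, not the real citywide numbers):

```python
import pandas as pd

# Illustrative citywide proficiency rates (%) by year and grade
df = pd.DataFrame({
    "year":  [2013, 2013, 2014, 2014, 2015, 2015, 2016, 2016],
    "grade": [3, 4, 4, 5, 5, 6, 6, 7],
    "pct_proficient": [28.0, 27.0, 31.0, 30.0, 34.0, 33.0, 38.0, 37.0],
})

# A cohort is fixed by (year - grade): grade 3 in 2013, grade 4 in
# 2014, and so on, all share year - grade = 2010.
df["cohort"] = df["year"] - df["grade"]
cohort_2010 = df[df["cohort"] == 2010].sort_values("year")
print(cohort_2010[["year", "grade", "pct_proficient"]])
```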

While the correlation between income and school performance is substantial, it is not all-encompassing. If we filter census tracts by income quintile, we can see plenty of schools that diverge from the general trend in their region. Let's just look at the lowest income quintile.
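The quintile filter in the app amounts to bucketing tracts by median income and keeping one bucket. A sketch of that filter with pandas `qcut` (tract IDs and incomes below are made up for illustration):

```python
import pandas as pd

# Hypothetical tract-level incomes; the app computes quintiles over
# all NYC census tracts.
tracts = pd.DataFrame({
    "tract": [f"T{i:02d}" for i in range(10)],
    "median_income": [18000, 22000, 35000, 41000, 52000,
                      60000, 75000, 90000, 120000, 190000],
})

# Label each tract with its citywide income quintile (1 = lowest),
# then keep only the lowest quintile
tracts["quintile"] = pd.qcut(tracts["median_income"], 5, labels=[1, 2, 3, 4, 5])
lowest = tracts[tracts["quintile"] == 1]
print(lowest)
```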

There's plenty of red, but a few patches of green. Some of these, like Columbia Secondary School or All-City Leadership, are not representative of their neighborhood: their specific missions allow them to draw strong students from around the city. But some green dots represent well-run neighborhood schools. Consider, for example, P.S. 171 Patrick Henry in East Harlem. The surrounding area had a median household income barely above $22,000 in 2010. Its student body is representative of the neighborhood, two-thirds Latino and a quarter black. Every student qualifies for free lunch. Yet on both the English and math tests, over 60% of students scored proficient in 2016. The non-profit website Insideschools credits its long-tenured principal, its aggressive pursuit of foundation grants to support computer labs, its strictly structured culture (uniforms, weekly-monitored online assignments, etc.), and its general cultural emphasis on reading for the school's high performance.

We can see, conversely, a number of schools in the wealthiest tracts that seem to underperform. For instance, let's zoom into the area around Rosedale, in southeast Queens.

This region is, granted, only barely in the top income quintile. It had a median annual income of around $80,000 in 2010, better than 80% of the city but a far cry from the highest-earning tracts on the Upper East Side, which approach and sometimes exceed $200,000. Yet despite its comfortable affluence, it has low-performing public schools. P.S. 138 Sunrise, for example, had fewer than 10% of its students pass at proficiency on math. Why is this the case? One possibility is that the population of the schools is not representative of the tract. Nearly three-quarters of students in P.S. 138 qualify for free lunches, a number much more in line with the lower-middle income districts than those in the upper quintile. There are at least four private Christian schools in the neighborhood, which may draw away the more affluent students. The complicated racial dynamics of the city's education system may also be a factor: unlike most areas in the highest income bracket, the majority of Rosedale's residents are African-American. Understanding the struggles in this neighborhood in detail would require more extensive qualitative research.

It may be too difficult to disentangle the effects of the Common Core on student learning from the influence of the city's learning curve in adjusting to a new set of standards. Still, it should be clear that progress is stronger in English than in math, with scores improving more rapidly and with higher proficiency rates in the city's disadvantaged areas. The relatively poor math scores of those areas show the large-scale influence that income inequality still exerts on the educational system. Still, the success of schools like P.S. 171, and the lack thereof in some more affluent areas, shows that a well-run school can overcome those effects.

© 2019 Data Science Central ®
