I’ve got a big digital mouth. Last time, I wrote about computing frequencies in R, noting cavalierly that I’d done similar development in Python/Pandas. I wasn’t lying, but the pertinent work I dug up from two years ago was less proof and more concept.
Of course, R and Python are the two current language leaders for data science computing, while Pandas is to Python as data.table and the tidyverse are to R for data management: everything.
So I took on the challenge of extending the work I’d started in Pandas to replicate the frequencies functionality I’d developed in R. I was able to demonstrate to my satisfaction how it might be done, but not before running into several pitfalls.
Pandas is quite the comprehensive library, aiming “to be the fundamental high-level building block for doing practical, real world data analysis in Python.” I think it succeeds, providing highly optimized structures for efficiently managing and analyzing data. The primary Pandas data structures are the Series and the DataFrame; the Pandas developer mainly uses core Python to manage these structures.
Pandas provides a method, value_counts(), to output frequencies from a Series or a single DataFrame column. To include null or NA values in the tally, the programmer designates dropna=False in the call.
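A minimal sketch of the single-column case, using a hypothetical toy Series standing in for one attribute of the crime data:

```python
import numpy as np
import pandas as pd

# Hypothetical toy Series standing in for one column of the crime data.
s = pd.Series(["theft", "battery", np.nan, "theft", np.nan])

# Default behavior: NA's are silently dropped from the tally.
print(s.value_counts())

# dropna=False keeps the NaN bucket in the frequency output.
print(s.value_counts(dropna=False))
```

With the default, the NaN rows never appear in the output; with dropna=False they get their own row in the frequency table.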
Alas, value_counts() works on single attributes only, so to handle the multi-variable case, the programmer must dig into Pandas’s powerful split-apply-combine groupby machinery. There’s a problem with this, though: by default, these groupby functions silently drop NA’s from consideration, even though NA counts are generally desirable in frequency tabulations. What’s the Pandas developer to do?
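The pitfall is easy to demonstrate with a hypothetical two-column frame; “type” and “arrest” here are stand-ins for grouping dimensions in the crime data:

```python
import numpy as np
import pandas as pd

# Hypothetical two-column frame; "type" and "arrest" stand in for
# grouping dimensions in the crime data.
df = pd.DataFrame({
    "type":   ["theft", "theft", "battery", np.nan],
    "arrest": [True, np.nan, True, False],
})

# By default, any row with an NA in a grouping column simply vanishes
# from the counts -- two of the four rows disappear here.
counts = df.groupby(["type", "arrest"]).size()
print(counts)
```

Only two of the four input rows survive into the tabulation; the NA-bearing rows are gone without warning.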
There are several work-arounds that can be deployed. The first is to convert all groupby “dimension” vars to strings, thereby preserving NA’s. That’s a pretty ugly and inefficient band-aid, however. The second is to use the fillna() function to replace NA’s with a designated “missing” sentinel such as 999.999, then swap the 999.999 back to NA later in the chain, after the computations are completed. I’d gone with the string conversion option when I first considered frequencies in Pandas. This time, though, I looked harder at the fillna-replace option, generally finding it the lesser of two evils.
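The fillna-replace idea can be sketched as a single chain. The frame, the “district”/“ward” dimension names, and the 999.999 sentinel are all hypothetical; the sentinel is assumed not to occur in the real data:

```python
import numpy as np
import pandas as pd

SENTINEL = 999.999  # assumed not to occur as a real value in the data

# Hypothetical frame; "district" and "ward" stand in for dimension vars.
df = pd.DataFrame({
    "district": [1.0, 2.0, np.nan, 1.0],
    "ward":     [10.0, np.nan, 10.0, 10.0],
})

dims = ["district", "ward"]

# fillna-replace: plug NA's with the sentinel so groupby keeps those rows,
# then swap the sentinel back to NA once the counting is done.
freqs = (
    df[dims]
    .fillna(SENTINEL)
    .groupby(dims)
    .size()
    .reset_index(name="count")
    .replace(SENTINEL, np.nan)
)
print(freqs)
```

All four input rows survive into the counts, and the final replace restores honest NA’s in the output dimensions.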
The remainder of this notebook looks at these Pandas frequencies options for the same almost-6.6M-record Chicago crime data I illustrated last time. I first build a working data set from the downloaded csv file, then take a look at the different options noted above, finally settling on a proof-of-concept frequency function using fillna-replace.
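Loading the working data set amounts to a pd.read_csv call. The snippet below substitutes a tiny in-memory csv for the real download; the column names and values are invented for illustration, and the actual file has almost 6.6M records and many more attributes:

```python
import io
import pandas as pd

# Hypothetical stand-in for the downloaded Chicago crime csv;
# the real file has ~6.6M records and many more columns.
csv = io.StringIO(
    "id,primary_type,arrest,district\n"
    "1,THEFT,True,12\n"
    "2,BATTERY,False,\n"
    "3,THEFT,True,12\n"
)

# Build a working data set, letting pandas infer dtypes; the empty
# district field comes through as NaN.
crime = pd.read_csv(csv)
print(crime.dtypes)
print(crime.shape)  # (3, 4)
```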
Gratuitously, I also demo rmagic from the rpy2 Python package, which allows R capabilities to be included in a Python program, much as the R package reticulate does in the other direction. Both rpy2 and reticulate are harbingers of ever-tighter interoperability between R and Python. That’s all good for data scientists!
Read the entire blog here.