Put Some Pandas in Your Python
Every scientist needs tools to perform their experiments, and data scientists are no exception. Much advice has been written on the question of which stack of tools is best for machine learning, data analysis and big data. For the past few years at least, the champions of the programming-language league appear to be R, Python and SQL. SQL needs no introduction, being the standard tool for managing records held in relational databases. R is undeniably the language of choice for most data analysts and statisticians; the sheer size of its user community and the thousands of contributed libraries and add-on packages make it a reliable workhorse for a broad range of tasks. But one thing R lacks is a consistent syntax. To use the words of somebody else, the R language is pathologically flexible, and that can be very confusing to new users.
Python as a language for data science
Python, by contrast, might not be king when it comes to statistics-specific problems; it is, rather, a general-purpose programming language. Luis Pedro Coelho summarizes it well:
Python may often be the second best choice: for linear algebra, Matlab may have nicer syntax; for statistics, R is probably nicer; for heavy regular expression usage, Perl (ugh) might still be nicer; if you want speed, Fortran or C(++) may be a better choice. To design a webpage, perhaps you want node.js. Python is not perfect for any of these, but is acceptable for all of them.
Still and all, Python is growing to include an increasingly mature set of packages for data manipulation and analysis. One that has gotten a respectable amount of attention lately is pandas, a library that offers data structures and operations for manipulating numerical tables and time series. Its creator, Wes McKinney, regularly posts materials and tutorials on his blog, and has written a very handy book on data analysis with Python. In combination with the IPython notebook, pandas provides an easy environment for developing and presenting data analysis routines.
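As a taste of these structures before we turn to a real dataset, here is a minimal, self-contained sketch (toy values, not the camera data used below):

import pandas as pd
import numpy as np

# A Series is a labelled one-dimensional array
prices = pd.Series([179, 269, 1299], index=['model A', 'model B', 'model C'])

# A DataFrame is a table of aligned columns sharing one index
toy = pd.DataFrame({'Price': prices, 'Year': [2000, 1999, 1999]})

# Time series get first-class support through date indices
ts = pd.Series(np.random.randn(60), index=pd.date_range('2013-01-01', periods=60))

print(toy)
print(ts.head())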
Data analysis with pandas: a beginner's tutorial
Let us have a brief look at the basics of pandas with a beginner's tutorial. Installing both IPython and pandas is trivial via pip or Anaconda, and preferable to using the standard Linux apt-get install utility, since it guarantees more recent versions. At the time of writing, the latest releases of pandas and IPython are 0.12.0 and 1.1.0, respectively. Once installed, the notebook starts in the browser when the following command is invoked from the terminal:
$ ipython notebook --pylab inline
Pandas has very neat routines to load datasets in various formats, which can be browsed by typing pd.read*? in IPython. For this table-top experiment we will read a CSV file containing attributes of photo cameras, found on the Aviz project page. The data specifies model, weight, focal length and price, among others, and can be downloaded from the Aviz wiki.
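For orientation, a few of the loaders that pd.read*? lists are sketched below; the file names are placeholders, and read_html additionally requires an HTML parser such as lxml or BeautifulSoup:

import pandas as pd

pd.read_csv('some_file.csv')      # delimited text, comma-separated by default
pd.read_table('some_file.txt')    # delimited text, tab-separated by default
pd.read_fwf('some_file.txt')      # fixed-width columns
pd.read_html('some_page.html')    # returns a list of all tables found in an HTML page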
We load the file, indicating that the fields are separated by semicolons, and rename the columns with descriptive labels for the variables. This creates a DataFrame, the fundamental pandas structure that stores entries in tabular form, with an index (dataframe.index) for the rows and multiple columns for the different variables (dataframe.columns). Each column can be retrieved with dictionary-like notation (dataframe['column_name']) or as an attribute (dataframe.column_name). The method head(n) displays the first n rows of a data frame (5 by default), and the method apply(f) applies a function to each column or row. For our camera dataset we also want the maker, which corresponds to the first word of the model name, so we create a new column 'Maker' by operating on the existing column 'Model'.
import pandas as pd

df = pd.read_csv('Camera.csv', sep=';')
columns = ['Model', 'Date', 'MaxRes', 'LowRes', 'EffPix', 'ZoomW', 'ZoomT',
           'NormalFR', 'MacroFR', 'Storage', 'Weight', 'Dimensions', 'Price']
df.columns = columns
df['Maker'] = df['Model'].apply(lambda s: s.split()[0])
df[['Maker', 'Model', 'Date', 'MaxRes', 'LowRes', 'Weight', 'Dimensions', 'Price']].head()
| | Maker | Model | Date | MaxRes | LowRes | Weight | Dimensions | Price |
|---|---|---|---|---|---|---|---|---|
| 0 | Agfa | Agfa ePhoto 1280 | 1997 | 1024 | 640 | 420 | 95 | 179 |
| 1 | Agfa | Agfa ePhoto 1680 | 1998 | 1280 | 640 | 420 | 158 | 179 |
| 2 | Agfa | Agfa ePhoto CL18 | 2000 | 640 | 0 | 0 | 0 | 179 |
| 3 | Agfa | Agfa ePhoto CL30 | 1999 | 1152 | 640 | 0 | 0 | 269 |
| 4 | Agfa | Agfa ePhoto CL30 Clik! | 1999 | 1152 | 640 | 300 | 128 | 1299 |
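To make the access patterns described above concrete, here is a quick sketch on the frame we just built (outputs omitted):

# Dictionary-like and attribute access return the same column as a Series
df['Price'].head(3)
df.Price.head(3)

# Row labels and column labels live in dedicated attributes
df.index
df.columns

# apply() also works row-wise: axis=1 passes each row to the function
df[['ZoomW', 'ZoomT']].apply(lambda row: row['ZoomT'] - row['ZoomW'], axis=1).head()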
A number of handy operations can be performed on a data frame with virtually no effort:
- Sorting data: display the 5 most recent models (a note on newer pandas versions follows the summary table below)
df.sort(['Date'], ascending=False).head()
- Filtering rows by column value: show only models made by Nikon
df[df['Maker'] == 'Nikon']
- Filtering rows by a range of values: return cameras priced above 350 and up to 500
df[(df['Price'] > 350) & (df['Price'] <= 500)]
- Getting a statistical description of the data set: maxima, minima, means, standard deviations, percentiles
df[['MaxRes','LowRes','Storage','Weight','Dimensions','Price']].describe()
| | MaxRes | LowRes | Storage | Weight | Dimensions | Price |
|---|---|---|---|---|---|---|
| count | 1038.000000 | 1038.000000 | 1036.000000 | 1036.000000 | 1036.000000 | 1038.000000 |
| mean | 2474.672447 | 1773.936416 | 17.447876 | 319.265444 | 105.363417 | 457.384393 |
| std | 759.513608 | 830.897955 | 27.440655 | 260.410137 | 24.262761 | 760.452918 |
| min | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 0.000000 | 14.000000 |
| 25% | 2048.000000 | 1120.000000 | 8.000000 | 180.000000 | 92.000000 | 149.000000 |
| 50% | 2560.000000 | 2048.000000 | 16.000000 | 226.000000 | 101.000000 | 199.000000 |
| 75% | 3072.000000 | 2560.000000 | 20.000000 | 350.000000 | 115.000000 | 399.000000 |
| max | 5616.000000 | 4992.000000 | 450.000000 | 1860.000000 | 240.000000 | 7999.000000 |
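A version note on the sorting one-liner above: DataFrame.sort was later deprecated and eventually removed, so if you follow along with pandas 0.17 or newer, the equivalent calls use sort_values:

df.sort_values('Date', ascending=False).head()    # 5 most recent models
df.sort_values(['Maker', 'Price']).head()         # sorting by several columns works too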
Plotting data frames with pandas
Pandas comes with handy wrappers around standard matplotlib routines that make it very easy to plot data frames. In the example below we create a histogram of all camera prices under 500 in our data set. Most camera models are priced in the bracket between 100 and 150 (units are not provided in the metadata of this set, but we can safely assume we are talking about US dollars).
# plt and matplotlib are already in the namespace when the notebook is
# started with --pylab inline; otherwise import them explicitly
import matplotlib
import matplotlib.pyplot as plt

matplotlib.rcParams.update({'font.size': 16})
df[df['Price'] < 500][['Price']].hist(figsize=(8, 5), bins=10, alpha=0.5)
plt.title('Histogram of camera prices\n')
plt.savefig('PricesHist.png', bbox_inches='tight')
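The same wrapper idea extends to other plot types. As one more hedged sketch, a per-maker price boxplot can be drawn directly from the frame with DataFrame.boxplot (the output file name is arbitrary):

df[df['Price'] < 1000].boxplot(column='Price', by='Maker', figsize=(14, 6), rot=90)
plt.title('Price distribution by maker')
plt.suptitle('')    # boxplot(by=...) adds its own super-title; clear it
plt.savefig('PricesByMaker.png', bbox_inches='tight')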
Grouping data with pandas
A powerful way of manipulating data in a frame is to split it, apply functions to the groups and combine the results. This is the “group by” approach that most SQL users will immediately recognize. Here is a clear exposition of how groupby works in pandas. In our table-top experiment, we group the data according to the release year of the models and study how the remaining variables evolve with time. To the grouped structure gDate we apply the aggregating function mean(), which averages the non-grouped variables year by year. Quite reassuringly, we find that photo cameras have become lighter over the past decade, while the pixel counts and storage of newer models have steadily increased.
# Average every numerical variable by release year (models after 1998)
gDate = df[df['Date'] > 1998].groupby('Date').mean()
dates = [str(s) for s in gDate.index]

fig, axes = plt.subplots(nrows=1, ncols=3, figsize=(20, 5))
colors = ['b', 'r', 'g']
variables = ['EffPix', 'Weight', 'Storage']
titles = ['effective pixels', 'weight', 'storage']

for i, var in enumerate(variables):
    gDate[[var]].plot(ax=axes[i], alpha=0.5, legend=False, lw=4, c=colors[i])
    axes[i].set_xticklabels(dates, rotation=40)
    axes[i].set_title('Evolution of %s\n' % titles[i])

plt.savefig('CameraEvolution.png', bbox_inches='tight')
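If a single yearly mean is not enough, the grouped object also accepts several aggregation functions at once; a short sketch (output omitted):

# Group first, then aggregate selected columns with several statistics
gYear = df[df['Date'] > 1998].groupby('Date')
gYear[['Weight', 'EffPix']].agg(['mean', 'median', 'std'])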
Grouping by maker instead, we can see which makers produce the smallest and most economical models, as measured by the median over their catalogues.
# Median of every numerical variable per maker
gMak = df.groupby('Maker').median()
gMak.index.name = ''

fig, axes = plt.subplots(nrows=1, ncols=2, figsize=(18, 8))
colors = ['y', 'c']
variables = ['Dimensions', 'Price']

for i, var in enumerate(variables):
    gMak[[var]].plot(kind='barh', ax=axes[i], alpha=0.5, legend=False, color=colors[i])
    axes[i].set_title('Median %s by maker\n' % var)
    axes[i].grid(False)

plt.savefig('MeanDimensionsPrices.png', bbox_inches='tight')
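Since a median over a handful of models can be noisy, it is worth checking how many cameras each maker actually contributes; groupby followed by size() gives exactly that:

# Number of camera models per maker
df.groupby('Maker').size()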
Pandas provides many more features, and this has been just a first look at them here at The Data Science Lab. We will continue exploring this and other datasets in due time. In the meantime, a quick Google search for pandas should be enough to keep us entertained.