
A Quick Introduction to Data Analysis With Pandas

Ronak Mutha

Artificial Intelligence / Machine Learning

Python is a great language for doing data analysis, primarily because of the fantastic ecosystem of data-centric Python packages. Pandas is one of those packages and makes importing and analyzing data much easier.

Pandas aims to integrate the functionality of NumPy and matplotlib to give you a convenient tool for data analytics and visualization. Besides the integration, it also makes these tools far easier to use.

In this blog, I’ll give you a list of useful pandas snippets that can be reused over and over again. These will save you time that you might otherwise spend skimming through the comprehensive Pandas docs.

Pandas provides two primary data structures, both capable of holding elements of any type: Series and DataFrame.

Series

A one-dimensional object that can hold any data type, such as integers, floats, and strings.

A Series can be created from different kinds of values and can be thought of as similar to a Python list.

In the example below, NaN is NumPy’s nan value: a numeric placeholder that marks an element as “not a number”. The dtype of the series is object because the series mixes strings and numbers.

CODE: https://gist.github.com/velotiotech/6f6127645c34ffcea01788562e603df3.js
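A minimal sketch of creating such a series (the values here are illustrative, not the gist’s exact contents):

    import numpy as np
    import pandas as pd

    s = pd.Series(['velotio', 3, 4.5, np.nan])
    print(s.dtype)  # object, because strings and numbers are mixed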

Now, if we use only numerical values, we get the basic NumPy dtype (float) for our series.

CODE: https://gist.github.com/velotiotech/225594e1e38e8ce5716b82883520cf02.js
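In sketch form:

    s = pd.Series([1.0, 2.5, 3.7, np.nan])
    print(s.dtype)  # float64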

DataFrame

A two-dimensional labeled data structure where columns can be of different types.

Each column in a Pandas DataFrame represents a Series object in memory.

Converting a Python object (dictionary, list of lists, etc.) to a DataFrame is extremely easy. For a Python dictionary, the keys map to column names while the values correspond to lists of column values.

CODE: https://gist.github.com/velotiotech/9bd350f7c6b4a7ea827ccc11e42dd902.js

DataFrame
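A minimal sketch of building a DataFrame from a dictionary (the column names and values are illustrative):

    import pandas as pd

    data = {
        'company': ['Acme', 'Globex', 'Initech'],  # key -> column name
        'year': [2009, 2010, 2009],                # values -> column values
    }
    df = pd.DataFrame(data)
    print(df)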

Reading CSV files

Pandas can work with various file types; when reading any file, you only need to remember one general pattern.

CODE: https://gist.github.com/velotiotech/69e0051357f7903b63e799dd46e73758.js

Now you only have to replace “filetype” with the actual type of the file, like csv or excel, and give the path of the file inside the parentheses as the first argument. You can also pass in additional arguments that control how the file is read.

CODE: https://gist.github.com/velotiotech/a0fa7772997917e2ebaddcf00155be9a.js

Reading CSV Files
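For example, with hypothetical file paths:

    import pandas as pd

    # generic pattern: pd.read_<filetype>(<path>, <extra arguments>)
    df = pd.read_csv('companies.csv')                      # path is illustrative
    df = pd.read_csv('companies.csv', sep=';', header=0)   # extra arguments as needed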

Accessing Columns and Rows

A DataFrame comprises three sub-components: the index, the columns, and the data (also known as the values).

The index represents a sequence of values. In the DataFrame, it always appears on the left side, and when a DataFrame is displayed, the index values appear in bold font. Each individual value of the index is called a label. Positions in the index are integer offsets, while labels are the values stored at those positions; sometimes the index is also referred to as the row labels. In all the examples below, the labels and positions coincide: they are just integers from 0 up to n-1, where n is the number of rows in the table.

Selecting rows is done using loc and iloc:

  • loc gets rows (or columns) with particular labels from the index. Raises KeyError when the items are not found.
  • iloc gets rows (or columns) at particular positions/index (so it only takes integers). Raises IndexError if a requested indexer is out-of-bounds.

CODE: https://gist.github.com/velotiotech/a78464b8de49a33e6646be513192e841.js

Accessing Columns And Rows
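A small sketch of the difference (the data and labels are illustrative):

    import pandas as pd

    df = pd.DataFrame({'company': ['Acme', 'Globex']}, index=['a', 'b'])
    df.loc['a']    # row with label 'a'; KeyError if the label is missing
    df.iloc[0]     # row at position 0; IndexError if out of bounds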

Accessing the data using column names

Pandas takes an extra step and allows us to access data through labels in DataFrames.

CODE: https://gist.github.com/velotiotech/968bcf05573148309529c3b637b8c9c4.js

Accessing data using column names

In Pandas, selecting data is very easy and similar to accessing an element from a dictionary or a list.

You can select a column with df[col_name], which returns the column with label col_name as a Series (columns are stored as Series in a DataFrame). If you need to access multiple columns, df[[col_name_1, col_name_2]] returns those columns as a new DataFrame.
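In sketch form (column names are illustrative):

    df['company']              # one column, returned as a Series
    df[['company', 'year']]    # several columns, returned as a new DataFrame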

Filtering DataFrames with Conditional Logic

Let’s say we want all the companies with the vertical B2B; the logic would be:

CODE: https://gist.github.com/velotiotech/f06ba78859a31eddf15591c65f3517e3.js

If we want the companies for the year 2009, we would use:

CODE: https://gist.github.com/velotiotech/5fb1eae43f9bc450b90150ee6124186b.js

Need to combine them both? Here’s how you would do it:

CODE: https://gist.github.com/velotiotech/92968a2171f53a2f0e5d5ae77119d6d5.js

Filtering Dataframes with Conditional logic
Get all companies with vertical as B2B for the year 2009
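Putting the three filters together in sketch form (assuming vertical and year columns, as in the screenshots above):

    b2b = df[df['vertical'] == 'B2B']
    in_2009 = df[df['year'] == 2009]
    both = df[(df['vertical'] == 'B2B') & (df['year'] == 2009)]

Note that pandas uses & and | rather than Python’s and and or, and each condition must be wrapped in parentheses.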

Sort and Groupby

Sorting

Sort values by a certain column in ascending order by using:

CODE: https://gist.github.com/velotiotech/6dee0a295411ce34fd4e27d22f795888.js

CODE: https://gist.github.com/velotiotech/1492f2a6752ab1d292397ca91ee9c197.js

It’s also possible to sort values by multiple columns with different orders. Here, colname_1 is sorted in ascending order and colname_2 in descending order:

CODE: https://gist.github.com/velotiotech/c35d91086db7452af4d7715ad01caa0e.js
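In sketch form (column names are illustrative):

    df.sort_values('year')                     # ascending by default
    df.sort_values('year', ascending=False)    # descending
    df.sort_values(['vertical', 'year'], ascending=[True, False])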

Grouping

This operation involves three steps: splitting the data, applying a function to each group, and finally combining the results into a data structure. This can be used to group large amounts of data and compute operations on these groups.

df.groupby(colname) returns a groupby object for values from one column while df.groupby([col1,col2]) returns a groupby object for values from multiple columns.
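For instance, a sketch of split-apply-combine (column names are illustrative):

    df.groupby('vertical')['year'].mean()     # split by vertical, apply mean, combine
    df.groupby(['vertical', 'year']).size()   # row counts per (vertical, year) group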

Data Cleansing

Data cleaning is a very important step in data analysis.

Checking missing values in the data

Check null values in the DataFrame by using:

CODE: https://gist.github.com/velotiotech/2fe5b76851803d1e7c9a3d80fe652637.js

This returns a boolean array (True for missing values and False for non-missing values).

CODE: https://gist.github.com/velotiotech/ddcc2ed61ccd82e7a3c6992de41b3f11.js

Check non-null values in the DataFrame using df.notnull(). It returns a boolean array, the exact converse of df.isnull().
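In sketch form:

    df.isnull()          # True where a value is missing
    df.isnull().sum()    # missing-value count per column, a common idiom
    df.notnull()         # the converse: True where a value is present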

Removing Empty Values

Dropping empty values can be done easily by using:

CODE: https://gist.github.com/velotiotech/5a8f4b0ba5b0a9a1a445af4286f560c5.js

This drops the rows having empty values; use df.dropna(axis=1) to drop the columns instead.

Also, if you wish to fill the missing values with other values, use df.fillna(x). This fills all the missing values with the value x (any value you choose), or use s.fillna(s.mean()) to replace null values with the mean (the mean can be replaced with almost any other aggregation function).
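In sketch form (the fill values and column name are illustrative):

    df.dropna()             # drop rows that contain any missing value
    df.dropna(axis=1)       # drop columns that contain any missing value
    df.fillna(0)            # fill missing values with a constant
    s = df['year']
    s.fillna(s.mean())      # fill missing values with the column mean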

Operations on Complete Rows, Columns, or Even All Data

CODE: https://gist.github.com/velotiotech/7baf197dd62612ae373eb2818ecde5bd.js

Operations on Complete rows and columns

The .map() operation applies a function to each element of a column.

.apply() applies a function to each column; use .apply(axis=1) to apply it to each row instead.
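A quick sketch of both (column names are illustrative):

    import numpy as np

    df['year'].map(lambda y: y + 1)         # element-wise on a single column
    df[['x', 'y']].apply(np.sum)            # function applied to each column
    df[['x', 'y']].apply(np.sum, axis=1)    # function applied to each row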

Iterating over rows

Iterating over rows is very similar to iterating over Python built-in types such as lists, tuples, and dictionaries.

CODE: https://gist.github.com/velotiotech/1bd5abcdb8261a75a86ce26ad7e45c51.js

.iterrows() yields two variables together: the index of the row and the row itself. In the code above, the first variable is the index and the variable row is the row.
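In sketch form:

    for index, row in df.iterrows():
        # index is the row label; row is a Series holding that row's values
        print(index, row['company'])   # 'company' is an illustrative column name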

Tips & Tricks

Use ufuncs (also known as universal functions). Pandas has .apply(), which applies a function to columns or rows, and ufuncs can be used in much the same way during preprocessing. What is the difference between ufuncs and .apply()?

Ufuncs come from NumPy and are implemented in C, which makes them highly efficient (often around 10 times faster than .apply()).

A list of common ufuncs (a quick sketch follows the list):

  • isinf: Element-wise check for positive or negative infinity.
  • isnan: Element-wise check for NaN; returns the result as a boolean array.
  • isnat: Element-wise check for NaT (not a time); returns the result as a boolean array.
  • trunc: Returns the truncated value of the input, element-wise.
  • .dt accessors: Element-wise processing for date objects.
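A quick sketch of a few of these:

    import numpy as np

    arr = np.array([1.5, np.nan, np.inf])
    np.isnan(arr)    # array([False,  True, False])
    np.isinf(arr)    # array([False, False,  True])
    np.trunc(arr)    # array([ 1., nan, inf])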

High-Performance Pandas

Pandas performs various vectorized/broadcast operations and grouping operations, which are both efficient and expressive.

As of version 0.13, Pandas includes tools that let us access C-speed operations directly, without the costly allocation of intermediate arrays: the functions eval() and query().

DataFrame.eval() for efficient operations:

CODE: https://gist.github.com/velotiotech/a605fe261b073743b6b6271806349776.js

To compute the sum of df1, df2, df3, and df4 DataFrames using the typical Pandas approach, we can just write the sum:

CODE: https://gist.github.com/velotiotech/241707cfbf587d350a6d94457ef47368.js

A better and optimized approach for the same operation can be computed via pd.eval():

CODE: https://gist.github.com/velotiotech/7e5be838baca1bc8068997dbeece95d9.js
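A sketch of the comparison, with illustrative array sizes (run the %timeit lines in IPython/Jupyter):

    import numpy as np
    import pandas as pd

    rng = np.random.default_rng(42)
    df1, df2, df3, df4 = (pd.DataFrame(rng.random((1000, 100))) for _ in range(4))

    # %timeit df1 + df2 + df3 + df4              # typical approach
    # %timeit pd.eval('df1 + df2 + df3 + df4')   # eval approach, no large temporaries
    total = pd.eval('df1 + df2 + df3 + df4')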

%timeit — Measure execution time of small code snippets.

The eval() expression is about 50% faster (and it also consumes much less memory).

And it produces the same result:

CODE: https://gist.github.com/velotiotech/ad772d813953f844b8a4d6f4af59e31c.js

np.allclose() is a NumPy function which returns True if two arrays are element-wise equal within a tolerance.

Column-Wise & Assignment Operations Using df.eval()

A normal expression to split off the first character of a column and assign it back to the same column can be written as:

CODE: https://gist.github.com/velotiotech/e8f54130418df49956b08916f98d3132.js

By using df.eval(), the same expression can be performed much faster:

CODE: https://gist.github.com/velotiotech/559e2cd094bdbdc93fb4f31b4bb84a49.js
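As a generic sketch of eval-style assignment (using an arithmetic expression with illustrative column names, rather than the gist’s exact string operation):

    import pandas as pd

    df = pd.DataFrame({'a': [1, 2, 3], 'b': [4, 5, 6]})
    df.eval('c = a + b', inplace=True)   # new column c computed column-wise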

DataFrame.query() for efficient operations:

Similar to filtering with conditional logic, to filter rows with vertical B2B and year 2009, we can write:

CODE: https://gist.github.com/velotiotech/a9e2f97aa855d36ac8b72518a20a4f2c.js

With .query() the same filtering can be performed about 50% faster.

CODE: https://gist.github.com/velotiotech/57bebcddd4b887f5285a0712ea3b9760.js
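In sketch form (column names from the earlier filtering example):

    result = df.query("vertical == 'B2B' and year == 2009")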

When to use eval() and query()? 

Two aspects: computation time and memory usage. 

Memory usage: Every operation involving NumPy arrays or Pandas DataFrames implicitly creates temporary arrays. When these temporaries are large, using eval() and query() is an appropriate choice to reduce memory usage.

Computation time: The traditional way of performing NumPy/Pandas operations is actually faster for smaller arrays! The real benefit of eval()/query() comes mainly from the saved memory, and also from the cleaner syntax they offer.

Conclusion

Pandas is a powerful and fun library for data manipulation/analysis, with easy syntax and fast operations. This blog highlights the most commonly used pandas operations and optimizations. The best way to master pandas is to work with real datasets, starting with Kaggle kernels, and learn how to use pandas for data analysis. Check out more on real-time text classification using Kafka and Scikit-learn and on explanatory vs. predictive models in machine learning.


Did you like the blog? If yes, we're sure you'll also like to work with the people who write them - our best-in-class engineering team.

We're looking for talented developers who are passionate about new emerging technologies. If that's you, get in touch with us.

Explore current openings
