
Data Science: A Kaggle Walkthrough – Understanding the Data

This article on understanding the data is Part II in a series looking at data science and machine learning by walking through a Kaggle competition. Part I can be found here.

Continuing on the walkthrough of data science via a Kaggle competition entry, in this part we focus on understanding the data provided for the Airbnb Kaggle competition.

Reviewing the Data

In any process involving data, the first goal should always be understanding the data. This involves looking at the data and answering a range of questions including (but not limited to):

  1. What features (columns) does the dataset contain?
  2. How many records (rows) have been provided?
  3. What format is the data in (e.g. what format are the dates provided, are there numerical values, what do the different categorical values look like)?
  4. Are there missing values?
  5. How do the different features relate to each other?
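As a concrete starting point, a few lines of pandas (the Python library used throughout this series) are usually enough to begin answering these questions. A minimal sketch, assuming the competition files have been downloaded into the working directory:

```python
import pandas as pd

# Load the primary training file provided for the competition
df = pd.read_csv("train_users_2.csv")

# Questions 1 and 2: what columns are there, and how many rows?
print(df.shape)              # (number of rows, number of columns)
print(df.columns.tolist())

# Question 3: what formats is the data in?
print(df.dtypes)
print(df.head())

# Question 4: are there missing values?
print(df.isnull().sum())
```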

For this competition, Airbnb have provided 6 different files. Two of these files provide background information (countries.csv and age_gender_bkts.csv), while sample_submission_NDF.csv provides an example of how the submission file containing our final predictions should be formatted. The three remaining files are the key ones:

  1. train_users_2.csv – This dataset contains data on Airbnb users, including the destination countries. Each row represents one user, with the columns containing various information such as the users’ ages and when they signed up. This is the primary dataset that we will use to train the model.
  2. test_users.csv – This dataset also contains data on Airbnb users, in the same format as train_users_2.csv, except without the destination country. These are the users for which we will have to make our final predictions.
  3. sessions.csv – This file contains supplementary data that can be used to train the model and make the final predictions. It records the actions (e.g. clicked on a listing, updated a wish list, ran a search) taken by the users in both the training and testing datasets above.

With this information in mind, an easy first step in understanding the data is reviewing the information provided by the data provider – Airbnb. For this competition, the information can be found here. The main points (aside from the descriptions of the columns) are as follows:

  • All the users in the data provided are from the USA.
  • There are 12 possible outcomes of the destination country: ‘US’, ‘FR’, ‘CA’, ‘GB’, ‘ES’, ‘IT’, ‘PT’, ‘NL’, ‘DE’, ‘AU’, ‘NDF’ (no destination found), and ‘other’.
  • ‘other’ means there was a booking, but in a country not included in the list, while ‘NDF’ means there was not a booking.
  • The training and test sets are split by dates. In the test set, you will predict the destination country for all the new users with first activities after 7/1/2014.
  • In the sessions dataset, the data only dates back to 1/1/2014, while the training dataset dates back to 2010.

After absorbing this information, we can start looking at the actual data. For now we will focus on the train_users_2.csv file only.

Table 1 – Three rows (transposed) from train_users_2.csv

Column Name             | Example 1       | Example 2       | Example 3
id                      | 4ft3gnwmtx      | v5lq9bj8gv      | msucfwmlzc
date_account_created    | 28/9/10         | 30/6/14         | 30/6/14
timestamp_first_active  | 20090609231247  | 20140630234429  | 20140630234729
date_first_booking      | 2/8/10          | (missing)       | 16/3/15
gender                  | FEMALE          | -unknown-       | MALE
age                     | 56              | (missing)       | 43
signup_method           | basic           | basic           | basic
signup_flow             | 3               | 25              | 0
language                | en              | en              | en
affiliate_channel       | direct          | direct          | direct
affiliate_provider      | direct          | direct          | direct
first_affiliate_tracked | untracked       | untracked       | untracked
signup_app              | Web             | iOS             | Web
first_device_type       | Windows Desktop | iPhone          | Windows Desktop
first_browser           | IE              | -unknown-       | Firefox
country_destination     | US              | NDF             | US

Looking at the sample of three records above provides us with a few key pieces of information about this dataset. The first is that at least two columns have missing values – the age column and date_first_booking column. This tells us that before we use this data for training a model, these missing values need to be filled or the rows excluded altogether. These options will be discussed in more detail in the next part of this series.

Secondly, most of the columns provided contain categorical data (i.e. each value represents one of a fixed number of categories). In fact, 11 of the 16 columns provided appear to be categorical. Most classification algorithms do not handle categorical data like this very well, so when it comes to the data transformation step, we will need to find a way to change this data into a form more suited to classification.
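To make this concrete, the most common transformation is one-hot encoding, where each category value becomes its own binary column. A minimal sketch using pandas, with the gender column as an illustrative example (the approach we actually take will be covered in the data transformation step):

```python
import pandas as pd

df = pd.read_csv("train_users_2.csv")

# Each distinct gender value becomes its own 0/1 indicator column
gender_dummies = pd.get_dummies(df["gender"], prefix="gender")
df = pd.concat([df.drop("gender", axis=1), gender_dummies], axis=1)
print(gender_dummies.head())
```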

Thirdly, the timestamp_first_active column looks to be a full timestamp, but in the format of a number. For example, 20090609231247 looks like it should be 2009-06-09 23:12:47. This formatting will need to be corrected if we are to use the date values.
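As a sketch of the fix, pandas can parse these numbers once they are treated as strings in %Y%m%d%H%M%S format:

```python
import pandas as pd

df = pd.read_csv("train_users_2.csv")

# 20090609231247 -> 2009-06-09 23:12:47
df["timestamp_first_active"] = pd.to_datetime(
    df["timestamp_first_active"].astype(str), format="%Y%m%d%H%M%S"
)
print(df["timestamp_first_active"].head())
```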

Diving Deeper

Now that we have gained a basic understanding of the data by looking at a few example records, the next step is to start looking at the structure of the data.

Country Destination Values

Arguably, the most important column in the dataset is the one the model will try to predict – country_destination. Looking at the number of records that fall into each category can help provide some insights into how the model should be constructed as well as pitfalls to avoid.

Table 2 – Users by Destination

Destination | Records | % of Total
NDF         | 124,543 | 58.3%
US          | 62,376  | 29.2%
other       | 10,094  | 4.7%
FR          | 5,023   | 2.4%
IT          | 2,835   | 1.3%
GB          | 2,324   | 1.1%
ES          | 2,249   | 1.1%
CA          | 1,428   | 0.7%
DE          | 1,061   | 0.5%
NL          | 762     | 0.4%
AU          | 539     | 0.3%
PT          | 217     | 0.1%
Grand Total | 213,451 | 100.0%

Looking at the breakdown of the data, one thing that immediately stands out is that almost 90% of users fall into two categories, that is, they are either yet to make a booking (NDF) or they made their first booking in the US. What’s more, breaking down these percentage splits by year reveals that the percentage of users yet to make a booking increases each year and reached over 60% in 2014.
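For anyone following along in code, breakdowns like these can be produced with value_counts and a crosstab. A sketch, assuming the yearly split is based on the year the account was created:

```python
import pandas as pd

df = pd.read_csv("train_users_2.csv")

# Overall breakdown (Table 2)
print(df["country_destination"].value_counts())
print(df["country_destination"].value_counts(normalize=True).round(3))

# Breakdown by year of account creation (Table 3)
year = pd.to_datetime(df["date_account_created"]).dt.year
print(pd.crosstab(df["country_destination"], year, normalize="columns").round(3))
```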

Table 3 – Users by Destination and Year

Destination | 2010   | 2011   | 2012   | 2013   | 2014   | Overall
NDF         | 42.5%  | 45.4%  | 55.0%  | 59.2%  | 61.8%  | 58.3%
US          | 44.0%  | 38.1%  | 31.1%  | 28.9%  | 26.7%  | 29.2%
other       | 2.8%   | 4.7%   | 4.9%   | 4.6%   | 4.8%   | 4.7%
FR          | 4.3%   | 4.0%   | 2.8%   | 2.2%   | 1.9%   | 2.4%
IT          | 1.1%   | 1.7%   | 1.5%   | 1.2%   | 1.3%   | 1.3%
GB          | 1.0%   | 1.5%   | 1.3%   | 1.0%   | 1.0%   | 1.1%
ES          | 1.5%   | 1.7%   | 1.2%   | 1.0%   | 0.9%   | 1.1%
CA          | 1.5%   | 1.1%   | 0.7%   | 0.6%   | 0.6%   | 0.7%
DE          | 0.6%   | 0.8%   | 0.7%   | 0.5%   | 0.3%   | 0.5%
NL          | 0.4%   | 0.6%   | 0.4%   | 0.3%   | 0.3%   | 0.4%
AU          | 0.3%   | 0.3%   | 0.3%   | 0.3%   | 0.2%   | 0.3%
PT          | 0.0%   | 0.2%   | 0.1%   | 0.1%   | 0.1%   | 0.1%
Total       | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0%

For modeling purposes, this type of split means a couple of things. Firstly, the spread of categories has changed over time. Considering that our final predictions will be made against user data from July 2014 onwards, this change provides us with an incentive to focus on more recent data for training purposes, as it is more likely to resemble the test data.

Secondly, because the vast majority of users fall into two categories, there is a risk that if the model is too generalized, or in other words not sensitive enough, it will select one of those two categories for every prediction. A key step will be making sure the training data carries enough signal for the model to predict the other categories as well.
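To see why this matters, consider that a degenerate model predicting NDF for every user would already be ‘correct’ 58.3% of the time, which is why raw accuracy alone can be misleading here. A quick sketch of this baseline:

```python
import pandas as pd

df = pd.read_csv("train_users_2.csv")

# Accuracy of a "model" that always predicts the most common class
baseline = (df["country_destination"] == "NDF").mean()
print("Majority-class baseline accuracy: {:.1%}".format(baseline))  # ~58.3%
```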

Account Creation Dates

Let’s now move onto the date_account_created column to see how the values have changed over time.

Chart 1 – Accounts Created Over Time

Chart 1 provides excellent evidence of the explosive growth of Airbnb, averaging over 10% growth in new accounts created per month. In the year to June 2014, the number of new accounts created was 125,884, a 132% increase on the year before.

But aside from showing how quickly Airbnb has grown, this data also provides another important insight: the majority of the training data comes from the most recent two years. In fact, if we limited the training data to accounts created from January 2013 onwards, we would still be including over 70% of all the data. This matters because, referring back to the notes provided by Airbnb, if we want to use the data in sessions.csv we would be limited to data from January 2014 onwards. Again looking at the numbers, this means that even though the sessions.csv data only covers 11% of the time period (6 out of 54 months), it still covers over 30% of the training data, or 76,466 users.
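Both of these proportions are easy to verify. A minimal sketch, assuming date_account_created parses cleanly:

```python
import pandas as pd

df = pd.read_csv("train_users_2.csv")
created = pd.to_datetime(df["date_account_created"])

# Share of all training users whose accounts were created recently
print((created >= "2013-01-01").mean())  # over 70% of users
print((created >= "2014-01-01").mean())  # over 30% of users (76,466 rows)
```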

Age Breakdown

Looking at the breakdown by age, we can see a good example of another issue that anyone working with data (whether a Data Scientist or not) faces regularly: data quality. As can be seen from Chart 2, a significant number of users reported ages well over 100, and some reported ages over 1000.

Chart 2 – Reported Ages of Users

So what is going on here? Firstly, it appears that a number of users have reported their birth year instead of their age. This would help to explain why there are a lot of users with ‘ages’ between 1924 and 1953. Secondly, we also see significant numbers of users reporting their age as 105 and 110. This is harder to explain but it is likely that some users intentionally entered their age incorrectly for privacy reasons. Either way, these values would appear to be errors that will need to be addressed.
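As a preview of the cleaning step, one possible approach is to convert apparent birth years back into ages and treat anything implausible as missing. The cutoffs below are illustrative assumptions, not values prescribed by the competition:

```python
import numpy as np
import pandas as pd

df = pd.read_csv("train_users_2.csv")

# Values in the 1900s look like birth years rather than ages;
# 2014 is the last year covered by the training data
birth_years = df["age"].between(1900, 2000)
df.loc[birth_years, "age"] = 2014 - df.loc[birth_years, "age"]

# Treat anything still outside a plausible range as missing
df.loc[~df["age"].between(15, 100), "age"] = np.nan

print(df["age"].describe())
```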

Additionally, as we saw in the example data provided above, another issue with the age column is that sometimes age has not been reported at all. In fact, if we look across all the training data provided, we can see a large number of missing values in all years.

Table 4 – Missing Ages

Year  | Missing Values | Total Records | % Missing
2010  | 1,082          | 2,788         | 38.8%
2011  | 4,090          | 11,775        | 34.7%
2012  | 13,740         | 39,462        | 34.8%
2013  | 34,950         | 82,960        | 42.1%
2014  | 34,128         | 76,466        | 44.6%
Total | 87,990         | 213,451       | 41.2%

When we clean the data, we will have to decide what to do with these missing values.

First Device Type

Finally, the last column that we will look at is the first_device_type column.

Table 5 – First Device Used

Device             | 2010   | 2011   | 2012   | 2013   | 2014   | All Years
Mac Desktop        | 37.2%  | 40.4%  | 47.2%  | 44.2%  | 37.3%  | 42.0%
Windows Desktop    | 21.6%  | 25.2%  | 37.7%  | 36.9%  | 31.0%  | 34.1%
iPhone             | 5.8%   | 6.3%   | 3.8%   | 7.5%   | 15.9%  | 9.7%
iPad               | 4.6%   | 4.8%   | 6.1%   | 7.1%   | 7.0%   | 6.7%
Other/Unknown      | 28.8%  | 21.3%  | 3.8%   | 2.8%   | 4.6%   | 5.0%
Android Phone      | 1.1%   | 1.2%   | 0.7%   | 0.4%   | 2.6%   | 1.3%
Android Tablet     | 0.4%   | 0.4%   | 0.3%   | 0.5%   | 0.9%   | 0.6%
Desktop (Other)    | 0.4%   | 0.4%   | 0.4%   | 0.6%   | 0.7%   | 0.6%
SmartPhone (Other) | 0.0%   | 0.1%   | 0.1%   | 0.0%   | 0.0%   | 0.0%
Total              | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0%

The interesting thing about the data in this column is how the types of devices used have changed over time. Windows users have increased significantly as a percentage of all users, iPhone users have tripled their share, and ‘Other/Unknown’ devices have gone from the second largest group to less than 5% of users. Further, the majority of these changes occurred between 2011 and 2012, suggesting that there may have been a change in the way devices were classified.

As with the other columns reviewed above, this change over time reinforces the presumption that recent data is likely to be the most useful for building our model.

Other Columns

Although we have not covered all of the columns here, having some understanding of every field in a dataset is important for building an accurate classification model. In some cases this may not be possible, either because of a very large number of columns or because the data has been abstracted (that is, converted into a different form). In this particular case, however, the number of columns is relatively small and the information is easily understandable.

Next Time

Now that we have taken the first step – understanding the data – in the next piece, we will start cleaning the data to get it into a form that will help to optimize the model’s performance.

 

Data Science: A Kaggle Walkthrough – Introduction

I have spent a lot of time working with spreadsheets, databases, and data more generally. This work has led to me having a very particular set of skills, skills I have acquired over a very long career. Skills that make me a nightmare for people like you. If you let my daughter go now, that’ll be the end of it. I will not look for you, I will not pursue you. But if you don’t, I will look for you, I will find you, and I will kill you.

The badassery of Liam Neeson aside, although I have spent years working with data in a range of capacities, the skills and techniques required for ‘data science’ are a very specific subset that do not tend to come up in many jobs. What is more, data science tends to involve a lot more programming than most other data-related work, and this can be intimidating for people who are not coming from a computer science background. The problem is, people who work with data in other contexts (e.g. economics and statistics), as well as those with industry-specific experience and knowledge, can often bring different and important perspectives to data science problems. Yet these people often feel unable to contribute because they do not understand programming or the black box models being used.

Something that has nothing to do with data science

Therefore, in a probably futile attempt to shed some light on this field, this will be the first part in a multi-part series looking at what data science involves and some of the techniques most commonly used. This series is not intended to make everyone an expert on data science; rather, it aims to remove some of the fear and mystery surrounding the field. In order to be as practical as possible, the series is structured as a walkthrough of the process of entering a Kaggle competition and the steps taken to arrive at the final submission.

What is Kaggle?

For those who do not know, Kaggle is a website that hosts data science problems for an online community of data science enthusiasts to solve. These problems can be anything from predicting cancer based on patient data, to sentiment analysis of movie reviews, to handwriting recognition; the one thing they all have in common is that solving them requires data science.

The problems on Kaggle come from a range of sources. Some are provided just for fun and/or educational purposes, but many are provided by companies that have genuine problems they are trying to solve. As an incentive for Kaggle users to compete, prizes are often awarded for winning these competitions, or for finishing in the top x positions. Sometimes the prize is a job or products from the company, but there can also be substantial monetary prizes. Home Depot, for example, is currently offering $40,000 for the algorithm that returns the most relevant search results on homedepot.com.

Despite the large prizes on offer though, many people on Kaggle compete simply for practice and experience. The competitions involve interesting problems, and plenty of users submit their scripts publicly, providing an excellent learning opportunity for those trying to break into the field. There are also active discussion forums full of people willing to provide advice and assistance to other users.

What is not spelled out on the website, but is assumed knowledge, is that to make accurate predictions, you will have to use machine learning.

Machine Learning

When it comes to machine learning, there is a lot of general misunderstanding about what this actually involves. While there are different forms of machine learning, the one that I will focus on here is known as classification, which is a form of ‘supervised learning’. Classification is the process of assigning records or instances (think rows in a dataset) to a specific category in a pre-determined set of categories. Think about a problem like predicting which passengers on the Titanic survived (i.e. there are two categories – ‘survived’ and ‘did not survive’) based on their age, class and gender[1].


Referring specifically to ‘supervised learning’ algorithms, the way these predictions are made is by providing the algorithm with a dataset (typically the larger the better) of ‘training data’. This training data contains all the information available to make the prediction as well as the categories each record corresponds to. This data is then used to ‘train’ the algorithm to find the most accurate way to classify those records for which we do not know the category.

Training Data

Passenger | Age | Class    | Gender | Survived?
0013      | 23  | Second   | Female | 1
0014      | 21  | Steerage | Female | 0
0015      | 46  | Steerage | Male   | 0
0016      | 32  | First    | Male   | 0
0017      | 13  | First    | Female | 1
0018      | 24  | Second   | Male   | 0
0019      | 29  | First    | Male   | 1
0020      | 80  | Second   | Male   | 1
0021      | 9   | Steerage | Female | 0
0022      | 44  | Steerage | Male   | 0
0023      | 35  | Steerage | Female | 1
0024      | 10  | Steerage | Male   | 0

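To show what ‘training’ looks like in code once the data is in this shape, here is a minimal sketch that fits a decision tree from sklearn to the toy table above. The encoding and choice of algorithm are illustrative only:

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier

# The toy training data from the table above
train = pd.DataFrame({
    "age":      [23, 21, 46, 32, 13, 24, 29, 80, 9, 44, 35, 10],
    "class":    ["Second", "Steerage", "Steerage", "First", "First", "Second",
                 "First", "Second", "Steerage", "Steerage", "Steerage", "Steerage"],
    "gender":   ["Female", "Female", "Male", "Male", "Female", "Male",
                 "Male", "Male", "Female", "Male", "Female", "Male"],
    "survived": [1, 0, 0, 0, 1, 0, 1, 1, 0, 0, 1, 0],
})

# Encode the categorical columns as 0/1 indicator columns
X = pd.get_dummies(train[["age", "class", "gender"]])
y = train["survived"]

# Train the model, then predict for a passenger whose outcome we do not know
model = DecisionTreeClassifier().fit(X, y)
new_passenger = pd.DataFrame({"age": [30], "class": ["First"], "gender": ["Female"]})
new_X = pd.get_dummies(new_passenger).reindex(columns=X.columns, fill_value=0)
print(model.predict(new_X))
```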
Although that seems relatively straightforward, part of what makes data science such a complex field is the limitless number of ways that a predictive model can be built. There are a huge number of different algorithms that can be trained, mostly with weird sounding names like Neural Network, Random Forest and Support Vector Machine (we will look at some of these in more detail in future installments). These algorithms can also be combined to create a single model. In fact, the people/teams that end up winning Kaggle competitions often combine the predictions of a number of different algorithms.

To make things more complicated, within each algorithm, there is a range of parameters that can be adjusted to significantly alter the prediction accuracy, and these parameters will vary for each classification problem. Finding the optimal set of parameters to maximize accuracy is often an art in itself.
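In practice this search is usually automated. For example, sklearn's GridSearchCV evaluates every combination of parameters in a grid using cross-validation; a sketch with dummy data and illustrative parameter values:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV

# Dummy data standing in for real, cleaned training features/labels
X, y = make_classification(n_samples=500, n_features=10, random_state=0)

# Illustrative grid: every combination is scored with 5-fold cross-validation
param_grid = {
    "n_estimators": [100, 300],
    "max_depth": [5, 10, None],
}
search = GridSearchCV(RandomForestClassifier(random_state=0), param_grid, cv=5)
search.fit(X, y)
print(search.best_params_, search.best_score_)
```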

Finally, just feeding the training data into an algorithm and hoping for the best is typically a fast track to poor performance (if it works at all). Significant time is needed to clean the data, correct formats and add additional ‘features’ to maximize the predictive capability of the algorithm. We will go into more detail on both of these requirements in future installments.

OK, so now let’s put all this into context by looking at the competition I entered, provided by Airbnb. The aim of the competition was to predict the country that users will make their first booking in, based on some basic user profile data[2]. In this case, the categories were the different country options and an additional category for users that had not made a previous booking through Airbnb. The training data was a set of users for whom we were provided with the correct category (i.e. what country they made their first booking in). Using the training data, I was required to train the model to accurately predict the country of first booking, and then submit my predictions for a set of users for whom we did not know the outcome.

How?

The aim of this series is to walk through the process of assessing and analyzing data, cleaning, transforming and adding new features, constructing and testing a model, and finally creating final predictions. The primary technology I will be using as I walk through this is Python, in combination with Excel/Google Sheets to analyze some of the outputs. Why Python? There are several reasons:

  1. It is free and open source.
  2. It has a great range of libraries (also free) that provide access to a large number of machine learning algorithms and other useful tools. The libraries I will primarily use are numpy, pandas and sklearn.
  3. It is very popular, meaning when I get stuck on a problem, there is usually plenty of material and documentation to be found online for help.
  4. It is very fast (this is the primary reason I have chosen Python over R).

For those who are interested in following this series but do not have a programming background, do not panic: although I will show code snippets as we go, being able to read the code is not vital to understanding what is happening.

Next Time

In the next piece, we will start looking at the data in more detail and discuss how we can clean and transform it, to help optimize the model performance.

 

[1] This is an actual competition on Kaggle at the moment (no prizes are awarded; it is for experience only).

[2] The data has been anonymized so that users cannot be identified.

 
