
Data Science: A Kaggle Walkthrough – Adding New Data

This article is Part V in a series looking at data science and machine learning by walking through a Kaggle competition. If you have not done so already, you are strongly encouraged to go back and read the earlier parts (Part I, Part II, Part III and Part IV).

Continuing the walkthrough, in this part we take the data from sessions.csv that we initially set aside and add it to the transformed and expanded data from Part IV. This part will cover, in brief, all the steps from Parts II – IV.

Understanding the Data

As we did for the user data in training.csv, the first step here is to understand what the data in sessions.csv looks like. Although this file, with over 10 million rows, is too large to display in its entirety in Excel[1], we can still open it in Excel to get an understanding of what columns we have and what at least the first million rows of data look like. Some sample rows are provided below:

user_id | action | action_type | action_detail | device_type | secs_elapsed
4rvqpxoh3h | campaigns | -unknown- | -unknown- | iPhone | 375
4rvqpxoh3h | active | -unknown- | -unknown- | iPhone | 728
4rvqpxoh3h | create | -unknown- | -unknown- | iPhone |
4rvqpxoh3h | notifications | -unknown- | -unknown- | iPhone | 187
4rvqpxoh3h | listings | -unknown- | -unknown- | iPhone | 154
4rvqpxoh3h | unavailabilities | -unknown- | -unknown- | iPhone | 204
4rvqpxoh3h | index | -unknown- | -unknown- | iPhone | 21
4rvqpxoh3h | index | -unknown- | -unknown- | iPhone | 886
c8mfesvkv0 | confirm_email | click | confirm_email_link | iPad Tablet | 1371616
c8mfesvkv0 | header_userpic | data | header_userpic | iPad Tablet | 8672
c8mfesvkv0 | create | submit | create_user | iPad Tablet |
xwxei6hdk4 | dashboard | view | dashboard | iPhone | 1355
xwxei6hdk4 | header_userpic | data | header_userpic | iPhone | 1246
xwxei6hdk4 | message_post | message_post | | iPad Tablet |
xwxei6hdk4 | ask_question | submit | contact_host | iPad Tablet | 386
xwxei6hdk4 | ask_question | submit | contact_host | iPad Tablet | 424
xwxei6hdk4 | message_post | message_post | | iPad Tablet | 0
xwxei6hdk4 | confirm_email | click | confirm_email_link | iPhone | 46262

As can be seen, the dataset contains records of user actions, with each row representing one action a user took. Every time a user reviewed search results, updated a wish list or updated their account information, a new row was created in this dataset. Although this data is likely to be very useful for our goal of predicting which country a user will make their first booking in, it also complicates the process of combining this data with the data from training.csv, as it will have to be aggregated so that there is one row per user (as opposed to many rows for each user, currently).

Aside from details of the actions taken, there are a couple of other interesting fields in this data. The first is device_type – this field contains the type of device used for the specified action. The second is the secs_elapsed field, which shows us how long (in seconds) the user spent on a particular action.

Both of these fields provide us with potentially important information that could help to more accurately predict which country a user will make a first booking in. For example, it is not difficult to imagine that people spending relatively little time to make a booking on a phone are likely to be making bookings in locations closer to home (i.e. the US) than someone spending more time to make a booking on a desktop computer. Of course this is just a theory that needs to be proven, but it is a good reason to ensure we are capturing this information in our final training dataset.
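For those who prefer to skip Excel altogether, a quick way to get the same overview is to read just a slice of the file with pandas. The snippet below is a minimal sketch using the same file path that appears later in this article:

import pandas as pd

# Read only the first 100,000 rows to get a feel for the columns and values
sample = pd.read_csv("./sessions.csv", nrows=100000)

print(sample.head(10))        # first few rows
print(sample.dtypes)          # column names and data types
print(sample.isnull().sum())  # count of missing values per column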

Cleaning and Transforming the Data

Now that we have a basic understanding of the data, we need to undertake the cleaning and transformation steps. Because of the structure of this data (and for the sake of brevity), we are going to do both of these things at the same time.

The first step is to import the data:

import pandas as pd
import numpy as np

# Import sessions data
s_filepath = "./sessions.csv"
sessions = pd.read_csv(s_filepath, header=0, index_col=False)

Extract the primary and secondary devices for each user

Remembering that we need to get the final data into a format that can be merged with the data created in Part IV (i.e. a dataset where one row equals one user), the first piece of information we are going to extract is the primary and secondary device for each user. How do we determine what a user’s primary and secondary devices are? We look at how much time they spent on each device. In short we are going to make the following changes to the data:

[Figure: the devices transformation – seconds elapsed are summed per user and device type, and the two most-used devices become each user's primary and secondary device columns]

One thing to note as we make these transformations is that by aggregating the data this way, we are also implicitly removing the missing values. The code to do this transformation is shown below:

# Determine primary device
print("Determing primary device...")
sessions_device = sessions.loc[:, ['user_id', 'device_type', 'secs_elapsed']]
aggregated_lvl1 = sessions_device.groupby(['user_id', 'device_type'], as_index=False, sort=False).aggregate(np.sum)
idx = aggregated_lvl1.groupby(['user_id'], sort=False)['secs_elapsed'].transform(max) == aggregated_lvl1['secs_elapsed']
df_primary = pd.DataFrame(aggregated_lvl1.loc[idx , ['user_id', 'device_type', 'secs_elapsed']])
df_primary.rename(columns = {'device_type':'primary_device', 'secs_elapsed':'primary_secs'}, inplace=True)
df_primary = convert_to_binary(df=df_primary, column_to_convert='primary_device')
df_primary.drop('primary_device', axis=1, inplace=True)

# Determine Secondary device
print("Determing secondary device...")
remaining = aggregated_lvl1.drop(aggregated_lvl1.index[idx])
idx = remaining.groupby(['user_id'], sort=False)['secs_elapsed'].transform(max) == remaining['secs_elapsed']
df_secondary = pd.DataFrame(remaining.loc[idx , ['user_id', 'device_type', 'secs_elapsed']])
df_secondary.rename(columns = {'device_type':'secondary_device', 'secs_elapsed':'secondary_secs'}, inplace=True)
df_secondary = convert_to_binary(df=df_secondary, column_to_convert='secondary_device')
df_secondary.drop('secondary_device', axis=1, inplace=True)
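Note that convert_to_binary is the custom one-hot encoding function we wrote back in Part IV, so it is not redefined here. For readers jumping in at this part, a minimal sketch of what such a helper might look like is shown below – the exact column naming rules used in Part IV may differ:

# Illustrative sketch of a one-hot encoding helper in the spirit of Part IV's
# convert_to_binary (not necessarily the exact original code)
def convert_to_binary(df, column_to_convert):
    categories = list(df[column_to_convert].drop_duplicates())

    for category in categories:
        # Build a clean column name from the category value
        cat_name = str(category).replace(" ", "_").replace("(", "").replace(")", "").replace("/", "_").replace("-", "").lower()
        col_name = column_to_convert + '_' + cat_name
        # Create the binary column: 1 where the row matches this category, 0 otherwise
        df[col_name] = 0
        df.loc[df[column_to_convert] == category, col_name] = 1

    return df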

Determine Counts of Actions

The next thing we are going to do is take counts of how many times each action was taken by each user. This is a two-step process. The first step is to determine the count of each action type for each user:

Step 1 – [Figure: group the session rows by user and action, producing a count of each action for each user]

Step 2 – [Figure: pivot those counts so that each action becomes its own column, leaving one row per user]

For you Excel buffs out there, the second step might strike you as something that could be achieved using a pivot table – and you would be right. In fact, the custom function that we use to make this transformation uses a pandas method called ‘pivot’. This is important to note for a couple of reasons. The first is that, with all the talk about new data technologies, people who have worked with data mostly (or entirely) using ‘old technology’ like Excel and SQL are often given the impression that their skills are redundant or not useful in modern data science. As this example shows, the ways of thinking about data that you develop working with Excel and SQL are not only relevant, but often extremely useful.

The second reason is that for people (like me) who do not know all the methods available for pandas dataframes off by heart, being able to identify techniques you have used in other programs and languages provides you with a way to find corresponding methods in new languages. I discovered this method by searching for “pandas pivot”, knowing that this way of manipulating data was likely to have some equivalent in pandas.
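To make that concrete, here is a small, self-contained example of the same pivot idea on a toy set of counts (the data is made up purely for illustration):

import pandas as pd

# Toy version of step 1: one row per user/action combination with a count
counts = pd.DataFrame({
    'user_id': ['u1', 'u1', 'u2'],
    'action': ['search', 'wishlist', 'search'],
    'count': [3, 1, 2]
})

# Step 2: pivot so that each action becomes a column, one row per user
wide = counts.pivot(index='user_id', columns='action', values='count').fillna(0)
print(wide)
# action   search  wishlist
# user_id
# u1          3.0       1.0
# u2          2.0       0.0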

Looping Through the Actions Columns

Looking at the examples above, you may have realized that the transformation as shown only works for one action column at a time, but in the data we have three action columns: action, action_type and action_detail.

To handle the multiple action columns, we repeat these steps for each column individually, effectively creating three separate tables. Because each of these tables now has one row per user, we can join (another concept SQL users will be very familiar with) the three tables together on the basis of the user id. The full code for these steps is shown below:

# Count occurrences of value in a column
def convert_to_counts(df, id_col, column_to_convert):
    id_list = df[id_col].drop_duplicates()

    df_counts = df.loc[:, [id_col, column_to_convert]]
    df_counts['count'] = 1
    df_counts = df_counts.groupby(by=[id_col, column_to_convert], as_index=False, sort=False).sum()

    new_df = df_counts.pivot(index=id_col, columns=column_to_convert, values='count')
    new_df = new_df.fillna(0)

    # Rename columns
    categories = list(df[column_to_convert].drop_duplicates())
    for category in categories:
        cat_name = str(category).replace(" ", "_").replace("(", "").replace(")", "").replace("/", "_").replace("-", "").lower()
        col_name = column_to_convert + '_' + cat_name
        new_df.rename(columns={category: col_name}, inplace=True)

    return new_df

# Aggregate and combine actions taken columns
print("Aggregating actions taken...")
session_actions = sessions.loc[:,['user_id', 'action', 'action_type', 'action_detail']]
columns_to_convert = ['action', 'action_type', 'action_detail']
session_actions = session_actions.fillna('not provided')
first = True

for column in columns_to_convert:
    print("Converting " + column + " column...")
    current_data = convert_to_counts(df=session_actions, id_col='user_id', column_to_convert=column)

    # If first loop, current data becomes existing data, otherwise merge existing and current
    if first:
        first = False
        actions_data = current_data
    else:
        actions_data = pd.concat([actions_data, current_data], axis=1, join='inner')

Combine Data Sets

The final steps are to combine the various datasets we have created into one large dataset. First we combine the two device dataframes (df_primary and df_secondary) to create a device dataframe. Then we combine the device dataframe with the actions dataframe to create a sessions dataframe with all the features we extracted from sessions.csv. Finally, we combine the sessions dataframe with the user data dataframe from Part IV. The code for the various combinations is shown below:

# Merge device datasets
print("Combining results...")
df_primary.set_index('user_id', inplace=True)
df_secondary.set_index('user_id', inplace=True)
device_data = pd.concat([df_primary, df_secondary], axis=1, join="outer")

# Merge device and actions datasets
combined_results = pd.concat([device_data, actions_data], axis=1, join='outer')
df_sessions = combined_results.fillna(0)

# Merge user and session datasets
df_all.set_index('id', inplace=True)
df_all = pd.concat([df_all, df_sessions], axis=1, join='inner')

A Note on Joins

For those that can read a little bit of code and are familiar with joins in SQL, you may be asking why I am using (full) outer joins for the first two combinations, but an inner join for the final step[2].

The first step requires an outer join because not all users have a secondary device. That is, some users only logged onto Airbnb using one device (or at least one type of device). Doing an outer join here ensures that our dataset includes all users regardless of this fact.

The second step could use an inner or an outer join, as both the device and actions datasets should contain all users. In this case we use an outer join just to ensure that if a user is missing from one of the datasets (for whatever reason), we will still capture them. You may also notice that after the second step we fill any missing values with 0s to ensure we do not have any NULL values that may have been generated by these outer joins.

For the third step we use an inner join for a key reason – we want our final training dataset to only include users that also have sessions data. Using an inner join here is an easy way to join the datasets and filter for the users with sessions data in one step.
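For anyone who wants to see the difference in pandas terms, here is a tiny illustration with made-up dataframes (the names and values are purely for demonstration):

import pandas as pd

# One dataframe holding device data, one holding actions data, indexed by user
device_data = pd.DataFrame({'primary_secs': [100, 200]}, index=['user_a', 'user_b'])
actions_data = pd.DataFrame({'action_search': [5]}, index=['user_a'])

# Outer join keeps user_b even though it has no actions data (NaNs are created)
outer = pd.concat([device_data, actions_data], axis=1, join='outer').fillna(0)

# Inner join keeps only the users present in both dataframes (just user_a)
inner = pd.concat([device_data, actions_data], axis=1, join='inner')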

Wrapping Up

In the first four parts of this series, we looked in detail at some of the various steps in the process of building a model. Although each of those steps deserves its own careful thought in any model building process, hopefully this article provides some insight into how several of them can be combined if planned out carefully. In relatively few steps, we have taken a dataset containing over 10 million rows of user actions data, cleaned it, extracted a bunch of important information, and added it to our user data, ready for training a model.

The other important thing to take away from this article is how useful ‘old school’ ways of thinking about data still are. For all the talk about unstructured data and NoSQL databases, the fact is that knowing how to work with and manipulate old fashioned columns and rows is still as important as ever. Whether it is joins and aggregation in SQL, pivot tables and VLOOKUPS in Excel, or just the general concept of relational data, not only is that knowledge relevant, but it is often extremely useful.

Next Time

In the next piece, we will finally get to the good stuff and train the algorithm to make the final predictions.

 

[1] Nope, still doesn’t qualify as ‘Big Data’…

[2] For those that do not understand what I mean by inner and outer joins (and are interested in knowing) – stackoverflow comes to the rescue again with this great illustrated answer.

5 Things I Learned in 2015

2015 has been an interesting year in many respects. A new country[1], a new language, a new job, and plenty of new experiences – both at work and in life in general. To get into the year-end spirit, I thought I would list out 5 key things I learned this year.

1. I Love Pandas

Yes, those pandas as well, who doesn’t? But I knew that well before 2015. The pandas I learned to love this year is a data analysis library for the programming language Python. “Whoa, slow down egg head” I hear you say. For those that are not regular coders, what that means is that pandas provides a large range of ways for people writing Python code to interact with data that makes life very easy.

Reading from and writing to Excel, CSV files and JSON (see lesson number 2) is super easy and fast. Manipulating large datasets in table like structures (dataframes) – check. Slicing, dicing, aggregating – check, check and check. In fact, as a result of pandas, I have almost entirely stopped using R[2]. All the (mostly basic) data manipulation for which I used to use R, I now use Python. Of course R still has an important role to play, particularly when it comes to complex statistical analysis, but that does not tend to come up all that regularly.
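To give a flavor of what that looks like in practice, here is a small sketch – the file and column names are placeholders, not a real dataset:

import pandas as pd

# Reading and writing common formats is a one-liner each (file name is a placeholder)
df = pd.read_csv("bookings.csv")
df.to_json("bookings.json", orient="records")

# Slicing, dicing and aggregating a table-like dataframe
# (column names here are made up for illustration)
recent = df[df["year"] == 2015]
totals = recent.groupby("country")["amount"].sum()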

2. JSON is Everywhere

JSON, JavaScript Object Notation for the uninitiated, is a data interchange format that has become the default way of transferring data online. Any time you see data displayed on a webpage, including all the visualizations on this website, chances are the underlying data is in JSON.

JSON has two big advantages that have led to its current state of dominance. The first is that, as the name suggests, it is native to JavaScript – the programming language that, alongside HTML, is interpreted by the browser you are reading this on. The second is that JSON is an extremely flexible way of representing data.

However, as someone who comes from a statistics and data background, as opposed to a technology background, JSON can take a while to get used to. The way data is represented in JSON is very different to the traditional tables of data that most people are used to seeing. Gone are the columns and rows, replaced with key-value pairs and lots of curly brackets – “{“ and “}”. If you are interested in seeing what it looks like, there are numerous CSV to JSON convertors online. This one even has a sample dataset to play with.
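As a quick taste, here is how two rows of a simple table might look once represented as JSON – generated here with Python's standard json module, with made-up data:

import json

# Two rows of a table, represented as a list of key-value records
rows = [
    {"name": "Alice", "country": "US", "bookings": 3},
    {"name": "Bob", "country": "FR", "bookings": 1},
]

# Gone are the columns and rows - each value is paired with its key,
# wrapped in curly brackets
print(json.dumps(rows, indent=2))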

If you do bother to take a look at some JSON, you will note that it is also much more verbose than your standard tabular format. A table containing 10 columns by 30 rows – something that could easily fit on one screen in a spreadsheet – runs to 300+ lines of JSON, depending on how it is structured. That does not make it easy for a human reader to get an overview of the data, but that criticism overlooks what JSON is designed for – to be read by computers. The fact that a human can read it at all is seen as one of JSON’s strengths.

For those interested in working with data (or any web based technology), knowing how to read and manipulate JSON is becoming as important as knowing how to use a spreadsheet.

3. Free Tools are Great

There are some people working for software vendors who will read this and be happy I have a very small audience. Having worked in the public sector, for a large corporate and now for a small NGO, one thing I have been pleasantly surprised by in 2015 is the number and quality of free tools available online.

For general office administration there are office communicator applications (Slack), task management tools (Trello) and Google’s free replacements for Excel, Word and PowerPoint. For version control and code management there is GitHub. For data analysis, the aforementioned Python and R are both free and open source. For data storage, there is a huge range of free database technologies available, in both SQL (PostgreSQL, MySQL, SQLite3) and NoSQL (MongoDB, Redis, Cassandra) variations.

To be fair to my previous larger employers and my software-selling friends, most of these tools/applications do have significant catches. Many operate on a ‘freemium’ model. This means that for individuals and small organizations with relatively few users, the service is free (or next to free), but costs quickly rise when you need larger numbers of users and/or want access to additional features, typically the types of features larger organizations need. Many of the above also provide no tech support or guarantees, meaning that executives have no one to blame if the software blows up. If you are responsible for maintaining the personal data of millions of clients, that may not be a risk you are willing to take.

For small business owners and entrepreneurs however, these tools are great news. They bring down barriers to entry for small businesses and make their survival more dependent on the quality of the product rather than how much money they have. That is surely only a good thing.

4. Blogging is a Full Time Job

Speaking of starting a business, a common dream these days is semi-retiring somewhere warm and writing a blog. My realization this year from running a blog (if only part time) is just how difficult it is to get any traction. Aside from being able to write reasonably well, there are two main hurdles that anyone planning to become a full time blogger needs to overcome – note that I have not come close to accomplishing either of these:

  1. You have to generate large amounts of good quality content – at least 2-3 longer form pieces a week if you want to maintain a consistent audience. That may seem easy, but after you have quickly bashed out the 5-10 article ideas you have been mulling over, the grind begins. You will often be writing things that are not super interesting to you. You will often not be happy with what you have written. You will quickly realize that your favorite time is the time immediately after you have finished an article and your least favorite is when you need to start a new piece.
  2. You will spend more time marketing your blog than writing. Yep, if you want a big audience (big enough to generate cash to live on) you will need to spend an inordinate amount of time:
    • cold emailing other blogs and websites, asking them to link to your blog (‘generating back links’ in blogspeak)
    • ensuring everything on your blog is geared towards your blog showing up in peoples’ Google search results (Search Engine Optimization or SEO)
    • promoting yourself on Facebook
    • building a following on Twitter
    • contributing to discussions on Reddit and LinkedIn to show people you are someone worth listening to, and
    • writing guest blogs for other sites.

None of this is easy. Begging strangers for links, incorporating ‘focus words’ into your page titles and headings, posting links on Facebook to something you spend days writing, only to find you get one like (thanks Mum!). Meanwhile, some auto-generated, barely readable click-bait trash from ‘viralnova’ or ‘quandly’ (yes, I am deliberately not linking to those sites) is clocking up likes in the 5 figures. It can be downright depressing.

Of course, there are an almost infinite number of people out there offering their services to help with these things (I should know, they regularly comment on my articles telling me how one weird trick can improve my ‘on page SEO’). The problem is, the only real help they can give you is adding more things to the list above. On the other hand, if you are thinking about paid promotion (buying likes or a similar strategy), I’d recommend watching this video.

Still want to be a blogger? You’re welcome.

5. Do not be Afraid to Try New Things

One of the things that struck me in 2015 is how attached people get to doing things a certain way. To a large degree this makes sense: the more often you use or do something, the better you get at it. I am very good at writing SQL and using Excel – I have spent most of the last 10 years using those two tools. As a result, I will often try to use them to solve problems because I feel most comfortable with them.

Where this becomes a problem is when you start trying to shoehorn problems into a tool not just because you are comfortable with it, but to avoid using something you are less comfortable with. As you have seen above, two of the best things I learned this year were concepts completely foreign to a SQL/Excel guy like me. But that is part of what made learning them so rewarding. I gained a completely new perspective on how data can be structured and manipulated and, even though I am far from an expert in those new skills, I now know they exist and which sorts of problems they are useful for.

So, do not be afraid to try new things, even if the usefulness of that experience is not immediately apparent. You never know when that skill might come in handy.

 

Happy New Year to everyone, I hope you have a great 2016!

 

[1] Or ‘Autonomous Province’ depending on your political views

[2] R is another programming language designed specifically for statistical analysis, data manipulation and data mining.

Excel Tips – Template to Table

There was an interesting Excel problem I encountered a while back which I thought I would share, as the solution ended up being quite well received – and even got me a nice Swiss dinner.

Problem

The problem arose out of a reasonably common scenario – a friend of mine had begun working in a new workplace where they had been using an MS Word template to collect the results of a survey they had been undertaking. Being more of a data focused person, she managed to convince them to migrate the template to MS Excel, as this would allow easier aggregation and analysis of the data. However, in convincing them to migrate, she had to design the Excel template in a manner more or less mirroring the Word template, which meant merged cells, data scattered across various columns and rows of the spreadsheet, and generally no easy way to tabulate the data across multiple surveys.

After the surveys started coming in from various areas and she had spent an inordinate amount of time manually copying and pasting the results into a table, she asked me if there was a better way to do this.

Solution

After a couple of false starts, I came to a point where I thought the only way to do this was going to be a macro. The issues with using a macro are:

  • if it breaks, it is likely that no one else will be able to fix it
  • it isn’t easy for others to determine what exactly the macro is doing, and
  • moving a macro across multiple spreadsheets and workbooks can get messy.

Aside from those issues, non-technical people tend to get a little squeamish around macros.

I ended up stumbling across this solution almost by pure chance, as I happened to be doing something at work that resulted in me using an Excel function that I hadn’t used very often: INDIRECT. What INDIRECT does is take any string (either typed directly into the formula, referenced from another cell, or some combination of both) and try to read it as a cell reference.

For example – if you type the text “A3” into cell A1 and then reference that cell from another cell (=A1), both the original cell and the referencing cell will simply display “A3”. But what if you wanted the referencing cell to treat that value as a cell reference and look up whatever is sitting in cell A3? That is where INDIRECT comes in:

[Figure: a simple INDIRECT example referencing a cell in the same sheet]

In this simple example we have used it to reference a cell in the same sheet, but we can also use INDIRECT to build a reference to a different sheet or workbook. And that’s where this is going. All the templates were the same – the same cell reference for each data point every time – but the name of the sheet (or workbook if they haven’t been added together yet) was always different. So what if we use an INDIRECT formula to dynamically look up the name of the sheet, which the user inputs? We can then use that value to complete the cell references for all the cells looking up the information on that sheet. Here is an example of what that would look like:

[Figure: using INDIRECT to build a reference from a user-entered sheet name, so each row of the table looks up cells on a different template sheet]
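As an illustration of the kind of formula involved, assuming the sheet name is typed into column A of the results table and the data point lives in cell C4 of each template sheet (these references are assumptions, not the exact ones from the example file):

=IFERROR(INDIRECT("'" & $A2 & "'!$C$4"), "")

Dragging this formula down the table means each row picks up its own sheet name from column A, while the fixed part of the reference always points at the same cell on that sheet.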

Once you have the basic formula working there are 3 steps left:

  1. Wrap the formula in an IFERROR clause ( =IFERROR(formula, “”) ) so that if the ‘Sheet Name’ column is blank, you get blank data fields instead of a ton of #REF errors.
  2. Manually ‘program’ the first row. You will have to ensure all the right cells are being looked up in the right columns.
  3. Drag the formula down for as many rows as you like.

Once this is done, users of the workbook will be able to move a completed template into this workbook, enter the sheet name in the table, and all the results should immediately populate in the table. No Macros, no mess, and a convenient and simple way to convert information from an irregularly formatted template into tabular data.

Still confused? Please feel free to download this example file.
