Pandas: Basic data interrogation

This article is part of a series of practical guides for using the Python data processing library pandas. To view all the available parts, click here.

Once we have our data in a pandas DataFrame, the basic table structure in pandas, the next step is to assess what we have. If you are coming from Excel or RStudio, you are probably used to being able to look at the data any time you want. In Python/pandas, we don’t have a spreadsheet to work with, and we don’t even have an exact equivalent of RStudio (although Jupyter notebooks are a similar concept), but we do have several tools available that can help you get a handle on what your data looks like.

DataFrame Dimensions

Perhaps the most basic question is how much data do I actually have? Did I successfully load in all the rows and columns I expected or are some missing? These questions can be answered with the shape attribute:

import pandas as pd

df = pd.read_csv("https://archive.ics.uci.edu/ml/machine-learning-databases/wine-quality/winequality-red.csv", sep=';')

print(df.shape)

(1599, 12)

shape returns a tuple (think of it as a list that you can’t alter) which tells you the number of rows and columns, 1599 and 12 respectively in this example. You can also use len to get the number of rows:

print(len(df))

1599

Using len is also slightly quicker than using shape, so if it is just the number of rows you are interested in, go with len.
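If you want to verify this for yourself, a rough way is to time the two approaches with Python’s built-in timeit module. This is just a sketch – the exact numbers will vary by machine and pandas version:

import timeit

# Time 100,000 calls of each approach; len is typically a touch faster
print(timeit.timeit(lambda: len(df), number=100_000))
print(timeit.timeit(lambda: df.shape[0], number=100_000))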

Another dimension we might be interested in is the size of the table in terms of memory. For this we can use the memory_usage method:

print(df.memory_usage())

Index                     128
fixed acidity           12792
volatile acidity        12792
citric acid             12792
residual sugar          12792
chlorides               12792
free sulfur dioxide     12792
total sulfur dioxide    12792
density                 12792
pH                      12792
sulphates               12792
alcohol                 12792
quality                 12792
dtype: int64

This tells us the space, in bytes, each column is taking up in memory. If we want to know the total size of the DataFrame, we can take a sum, and then to get the number into a more usable metric like kilobytes (kB) or megabytes (MB), we can divide by 1024 as many times as needed.

print(df.memory_usage().sum() / 1024)  # Size in kB

150.03125
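One caveat worth knowing about: for object (string) columns, memory_usage only counts the size of the column’s pointers by default. Passing deep=True asks pandas to measure the actual contents, which gives a more realistic figure for text-heavy data (for the all-numeric wine dataset it makes no difference):

# Include the size of string contents for object columns
print(df.memory_usage(deep=True).sum() / 1024)  # Size in kB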

Lastly, for these basic dimension assessments, we can generate a list of the data types of each column. This can be a very useful early indicator that your data has been read in correctly. If you have a column that you believe should be numeric (i.e. a float64 or an int64) but it is listed as object (the dtype pandas uses for strings and other non-numeric data), it may be a sign that something has not been interpreted correctly:

print(df.dtypes)

fixed acidity           float64
volatile acidity        float64
citric acid             float64
residual sugar          float64
chlorides               float64
free sulfur dioxide     float64
total sulfur dioxide    float64
density                 float64
pH                      float64
sulphates               float64
alcohol                 float64
quality                   int64
dtype: object
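If you do find a column that should be numeric but has come through as object, one common fix (shown here as a sketch with a hypothetical column name) is pd.to_numeric with errors='coerce', which converts what it can and turns anything unparseable into NaN so you can inspect it:

# 'some_column' is a placeholder - substitute the offending column name
df['some_column'] = pd.to_numeric(df['some_column'], errors='coerce')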

Viewing some sample rows

After we have satisfied ourselves that we have the expected volume of data in our DataFrame, we might want to look at some actual rows of data. Particularly for large datasets, this is where the head and tail methods come in handy. As the names suggest, head will return the first n rows of the DataFrame and tail will return the last n rows of the DataFrame (n is set to 5 by default for both).

print(df.head())

Aside from this basic use, with some small additions head and tail can be used in some very useful ways. First off, we can set n to whatever value we want, showing as many or as few rows as desired:

print(df.head(10))

We can also combine it with sort_values to see the top (or bottom) n rows of data sorted by a column, or selection of columns:

print(df.sort_values('fixed acidity', ascending=False).head(10))

Finally, if we have a lot of columns, too many to display all of them in Jupyter or the console, we can combine head/tail with transpose to inspect all the columns for a few rows:

print(df.head().transpose())

Summary Statistics

Moving on, the next step is typically some exploratory data analysis (EDA). EDA is a very open-ended process, so no one can give you an explicit set of instructions on how to do it. Each dataset is different and, to a large extent, you just have to allow your curiosity to run wild. However, there are some tools we can take advantage of in this process.

Describe

The most basic way to summarize the data in your DataFrame is the describe method. This method, by default, gives us a summary of all the numeric fields in the DataFrame, including counts of values (which exclude null values), the mean, standard deviation, min, max and some percentiles.

df.describe()

This is nice, but let’s talk about what isn’t being shown. Firstly, by default, any non-numeric and date fields are excluded. In the dataset we are using in this example we don’t have any non-numeric fields, so let’s add a couple of categorical fields (they will just have random letters in this case), and a date field:

import string
import random


df['categorical'] = [random.choice(string.ascii_letters) for i in range(len(df))]
df['categorical_2'] = [random.choice(string.ascii_letters) for i in range(len(df))]
df['date_col'] = pd.date_range(start='2020-11-01', periods=len(df))

df.describe()

As we can see, the output didn’t change. But we can use some of the parameters for describe to address that. First, we can set include='all' to include all datatypes in the summary:

df.describe(include='all')

Now, for the categorical columns, it tells us some useful numbers about the number of unique values and which value is the most frequent. However, the date column is still being handled like a categorical value. We can change that so it is treated as a numeric value by setting the datetime_is_numeric parameter to True:

df[['date_col']].describe(datetime_is_numeric=True)

pandas_summary Library

Building on top of the kind of summaries that are produced by describe, some very talented people have developed a library called pandas_summary. This library is purely designed to generate informative summaries of pandas DataFrames. First though we need to do some quick setup (you may need to install the library using pip):

from pandas_summary import DataFrameSummary

dfs = DataFrameSummary(df)

Now let’s take a look at two ways we can use this new DataFrameSummary object. The first one is columns_stats. This is similar to what we saw previously with describe, but with one useful addition: the number and percent of missing values in each column:

dfs.columns_stats

Secondly, my personal favorite: by selecting a column from the DataFrameSummary object, we can get some really detailed statistics for an individual column, plus a histogram thrown in for numeric fields:

dfs['fixed acidity']

Seaborn

Seaborn is a statistical data visualization library for Python with a full suite of charts that you should definitely look into if you have time, but for today we are going to look at just one very nice feature – pairplot. This function will generate pairwise plots for all the columns in your DataFrame with literally one line:

import seaborn as sns

sns.pairplot(df, hue="quality")

The colors of the plot are determined by the column you pass to the hue parameter. This allows you to see how the values in that column relate to the two features in each pairwise plot. Along the diagonal, where a feature would otherwise be plotted against itself, we instead get the distribution of that feature for each value of the hue column.

Note, if you have a lot of columns, be aware that this type of chart will become less useful, and will also likely take a lot of time to render.
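If you only want to compare a handful of features, pairplot’s vars parameter lets you limit the plot to a list of columns. The columns below are just an example selection from the wine dataset:

# Only plot three of the twelve columns to keep the chart readable
sns.pairplot(df, vars=['fixed acidity', 'pH', 'alcohol'], hue='quality')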

Wrapping Up

Exploratory data analysis (EDA) should be an open-ended and flexible process that never really ends. However, when we are first trying to understand the basic dimensions of a new dataset and what it contains, there are some common methods we can employ such as shape, describe and dtypes, and some very useful third party libraries such as pandas_summary and seaborn. While this explainer does not provide a comprehensive list of methods and techniques, hopefully it has provided you with somewhere to get started.

Pandas: Reading in JSON data

This article is part of a series of practical guides for using the Python data processing library pandas. To view all the available parts, click here.

When we are working with data in software development or when the data comes from APIs, it is often not provided in a tabular form. Instead it is provided in some combination of key-value stores and arrays broadly denoted as JavaScript Object Notation (JSON). So how do we read this type of non-tabular data into a tabular format like a pandas DataFrame?

Understanding Structures

The first thing to understand about data stored in this form is that there are effectively infinite ways to represent a single dataset. For example, take a simple dataset, shown here in tabular form:

id  department  first_name  last_name
1   Sales       John        Johnson
2   Sales       Peter       Peterson
3   Sales       Paula       Paulson
4   HR          James       Jameson
5   HR          Jennifer    Jensen
6   Accounting  Susan       Susanson
7   Accounting  Clare       Clareson

Let’s look at some of the more common ways this data might be represented using JSON:

1. “Records”

A list of objects, with each object representing a row of data. The column names are the keys of each object.

[
  {
    "department": "Sales",
    "first_name": "John",
    "id": 1,
    "last_name": "Johnson"
  },
  {
    "department": "Sales",
    "first_name": "Peter",
    "id": 2,
    "last_name": "Peterson"
  },
  {
    "department": "Sales",
    "first_name": "Paula",
    "id": 3,
    "last_name": "Paulson"
  },
  ...
]

2. “List”

An object where each key is a column, with the values for that column stored in a list.

{
  "id": [
    1,
    2,
    3,
    4,
    5,
    6,
    7
  ],
  "department": [
    "Sales",
    "Sales",
    "Sales",
    "HR",
    "HR",
    "Accounting",
    "Accounting"
  ],
  "first_name": [
    "John",
    "Peter",
    "Paula",
    "James",
    "Jennifer",
    "Susan",
    "Clare"
  ],
  "last_name": [
    "Johnson",
    "Peterson",
    "Paulson",
    "Jameson",
    "Jensen",
    "Susanson",
    "Clareson"
  ]
}

3. “Split”

An object with two keys, one for the column names, the other for the data which is a list of lists representing rows of data.

{
  "columns": [
    "id",
    "department",
    "first_name",
    "last_name"
  ],
  "data": [
    [
      1,
      "Sales",
      "John",
      "Johnson"
    ],
    [
      2,
      "Sales",
      "Peter",
      "Peterson"
    ],
  ...
  ]
}
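As a quick sketch of how you might load this ‘split’ layout yourself, one option is to unpack the two keys into the DataFrame constructor (the data below is a shortened version of the example above):

split_data = {
  "columns": ["id", "department", "first_name", "last_name"],
  "data": [
    [1, "Sales", "John", "Johnson"],
    [2, "Sales", "Peter", "Peterson"]
  ]
}
df = pd.DataFrame(split_data["data"], columns=split_data["columns"])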

Creating a DataFrame

So how do we get this data into a pandas DataFrame given that it could come in different forms? The key is knowing what structures pandas understands. In the first two cases above (#1 Records and #2 List), pandas understands the structure and will automatically convert it to a DataFrame for you. All you have to do is pass the structure to the DataFrame class:

list_data = {
  "id": [
    1,
    2,
    3,
    4,
    5,
    6,
    7
  ],
  "department": [
    "Sales",
    "Sales",
    "Sales",
    "HR",
    "HR",
    "Accounting",
    "Accounting"
  ],
  "first_name": [
    "John",
    "Peter",
    "Paula",
    "James",
    "Jennifer",
    "Susan",
    "Clare"
  ],
  "last_name": [
    "Johnson",
    "Peterson",
    "Paulson",
    "Jameson",
    "Jensen",
    "Susanson",
    "Clareson"
  ]
}
df = pd.DataFrame(list_data)
print(df)

	id 	department     first_name 	last_name
0 	1 	Sales 	       John 	        Johnson
1 	2 	Sales 	       Peter 	        Peterson
2 	3 	Sales 	       Paula 	        Paulson
3 	4 	HR 	       James 	        Jameson
4 	5 	HR 	       Jennifer 	Jensen
5 	6 	Accounting     Susan 	        Susanson
6 	7 	Accounting     Clare 	        Clareson

Initializing a DataFrame this way also gives us a couple of options. We can load in only a selection of columns:

df = pd.DataFrame(list_data, columns=['id', 'department'])
print(df)

	id 	department
0 	1 	Sales
1 	2 	Sales
2 	3 	Sales
3 	4 	HR
4 	5 	HR
5 	6 	Accounting
6	7 	Accounting

We can also define an index. However, if one of the fields in your dataset is what you want to set as the index, it is simpler to do so after you load the data into a DataFrame:

df = pd.DataFrame(list_data).set_index('id')
print(df)

    department first_name last_name
id                                 
1        Sales       John   Johnson
2        Sales      Peter  Peterson
3        Sales      Paula   Paulson
4           HR      James   Jameson
5           HR   Jennifer    Jensen
6   Accounting      Susan  Susanson
7   Accounting      Clare  Clareson

Acceptable Structures

What are the acceptable structures that pandas recognizes? Here are the ones I have found so far:

Records

[
  {
    "department": "Sales",
    "first_name": "John",
    "id": 1,
    "last_name": "Johnson"
  },
  {
    "department": "Sales",
    "first_name": "Peter",
    "id": 2,
    "last_name": "Peterson"
  },
  {
    "department": "Sales",
    "first_name": "Paula",
    "id": 3,
    "last_name": "Paulson"
  },
  ...
]

List

{
  "id": [
    1,
    2,
    3,
    4,
    5,
    6,
    7
  ],
  "department": [
    "Sales",
    "Sales",
    "Sales",
    "HR",
    "HR",
    "Accounting",
    "Accounting"
  ],
  "first_name": [
    "John",
    "Peter",
    "Paula",
    "James",
    "Jennifer",
    "Susan",
    "Clare"
  ],
  "last_name": [
    "Johnson",
    "Peterson",
    "Paulson",
    "Jameson",
    "Jensen",
    "Susanson",
    "Clareson"
  ]
}

Dict

{
  "id": {
      0: 1,
      1: 2,
      2: 3,
      3: 4,
      4: 5,
      5: 6,
      6: 7
  },
  "department": {
    0: "Sales",
    1: "Sales",
    2: "Sales",
    3: "HR",
    4: "HR",
    5: "Accounting",
    6: "Accounting"
  },
  "first_name": {
    0: "John",
    1: "Peter",
    2: "Paula",
    3: "James",
    4: "Jennifer",
    5: "Susan",
    6: "Clare"
  },
  "last_name": {
    0: "Johnson",
    1: "Peterson",
    2: "Paulson",
    3: "Jameson",
    4: "Jensen",
    5: "Susanson",
    6: "Clareson"
  }
}

Matrix

For this one you will have to pass the column names separately.

[
    [
        1,
        "Sales",
        "John",
        "Johnson"
    ],
    [
        2,
        "Sales",
        "Peter",
        "Peterson"
    ],
    [
        3,
        "Sales",
        "Paula",
        "Paulson"
    ],
    ...
]
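As a sketch of what that looks like in practice, using a shortened version of the data above:

matrix_data = [
    [1, "Sales", "John", "Johnson"],
    [2, "Sales", "Peter", "Peterson"],
    [3, "Sales", "Paula", "Paulson"]
]
df = pd.DataFrame(matrix_data, columns=["id", "department", "first_name", "last_name"])
print(df)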

Inconsistencies

What happens if the data is in one of these formats, but has some inconsistencies? For example, what if we have something that looks like this?

[ 
  {
    "department": "Sales",
    "first_name": "John",
    "id": 1,
    "last_name": "Johnson",
    "extra_field": "woops!"
  },
  {
    "department": "Sales",
    "first_name": "Peter",
    "id": 2,
    "last_name": "Peterson"
  },
  {
    "department": "Sales",
    "first_name": "Paula",
    "id": 3,
    "last_name": "Paulson"
  },
  ...
]

Fortunately, pandas is fairly robust to these types of inconsistencies, in this case creating an extra column and filling the remaining rows with NaN (null values):

    id  department  first_name  last_name  extra_field
0   1   Sales 	    John 	Johnson    woops!
1   2   Sales 	    Peter 	Peterson   NaN
2   3   Sales 	    Paula 	Paulson    NaN
3   4   HR 	    James 	Jameson    NaN
4   5   HR 	    Jennifer 	Jensen     NaN
5   6   Accounting  Susan       Susanson   NaN
6   7   Accounting  Clare       Clareson   NaN

Something important to note is that, depending on the structure and where the inconsistency occurs in the structure, the inconsistency can be handled differently. It could be an additional column, an additional row, or in some cases it may be ignored completely. The key is, as always, to check your data has loaded as expected.

Explicit Methods

There are two more methods for reading JSON data into a DataFrame: DataFrame.from_records and DataFrame.from_dict. DataFrame.from_records expects data in the ‘Records’ or ‘Matrix’ formats shown above, while DataFrame.from_dict will accept data in either the Dict or List structures. These methods are more explicit in what they do and have several potential advantages:

Clarity

When working with a team or in a situation where other people are going to review your code, being explicit can help them to understand what you are trying to do. Passing some unknown structure to DataFrame and knowing/hoping it will interpret it correctly is using a little too much ‘magic’ for some people. For the sanity of others, and yourself in 6 months when you are trying to work out what you did, you might want to consider the more explicit methods.

Strictness

When writing code that is going to be reused, maintained and/or run automatically, we want to write that code in a strict way. That is, it should fail loudly if the inputs change rather than quietly carrying on. Using DataFrame could lead to situations where the input data format changes, but is read in anyway and instead breaks something else further down the line. In a situation like this, someone will likely have the unenviable task of following the trail through the code to work out what changed.

Using the more explicit methods is more likely to cause the error to be raised where the problem actually occurred: reading in data which is no longer in the expected format.

Options

The more explicit methods give you more options for reading in the data. For example DataFrame.from_records gives you options to limit the number of rows to read in. DataFrame.from_dict allows you to specify the orientation of the data. That is, are the lists of values representative of columns or rows?
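As a rough sketch of both options (the variable names here are placeholders): nrows applies when the data is an iterator or generator, and orient='index' tells from_dict that each key of the dict represents a row rather than a column:

# records_data: a hypothetical list of row dicts
df_sample = pd.DataFrame.from_records(iter(records_data), nrows=100)

# dict_data: a hypothetical dict where each key is a row label
dict_data = {"row_1": [1, "Sales"], "row_2": [2, "HR"]}
df_rows = pd.DataFrame.from_dict(dict_data, orient='index', columns=['id', 'department'])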

Coercion

In some cases, your data will not play nice and the generic DataFrame method will not correctly interpret your data. Using the more explicit method can help to resolve this. For example, if your objects are in a column of a DataFrame (i.e. a pandas Series) instead of a list, using DataFrame will give you a DataFrame with one column:

records_data = pd.Series([
  {
    "department": "Sales",
    "first_name": "John",
    "id": 1,
    "last_name": "Johnson",
    "test": 0
  },
  {
    "department": "Sales",
    "first": "Peter",
    "id": 2,
    "name": "Peterson"
  },
  {
    "dept": "Sales",
    "firstname": "Paula",
    "sid": 3,
    "lastname": "Paulson"
  },
  {
    "dept": "HR",
    "name": "James",
    "pid": 4,
    "last": "Jameson"
  }
])
print(pd.DataFrame(records_data))

Using the more explicit DataFrame.from_records gives you the expected results:

# Using the same records_data Series defined above
print(pd.DataFrame.from_records(records_data))

Wrapping Up

We’ve looked at how we can quickly and easily convert JSON format data into tabular data using the DataFrame class and the more explicit DataFrame.from_records and DataFrame.from_dict methods. The downside is this only works if the data is in one of a few structures. The upside is most of the data you will encounter will be in one of these formats, or something that is easily converted into these formats.

If you want to play around with converting data between delimited format and various JSON formats, I can recommend trying an app I built a while back: JSONifyit.

Pandas: Reading in tabular data

This article is part of a series of practical guides for using the Python data processing library pandas. To view all the available parts, click here.

To get started with pandas, the first thing you are going to need to understand is how to get data into pandas. For this guide we are going to focus on reading in tabular data (i.e. data stored in a table with rows and columns). If you don’t have some data available but want to try some things out, a great place to get some data to play with is the UCI Machine Learning Repository.

Delimited Files

One of the most common ways you will encounter tabular data, particularly data from an external source or publicly available data, is in the form of a delimited file such as comma separated values (CSV), tab separated values (TSV), or separated by some other character. To import this data so you can start playing with it, pandas gives you the read_csv function with a lot of options to help manage different cases. But let’s start with the very basic case:

import pandas as pd

df = pd.read_csv('path/to/file.csv')

# Show the top 5 rows to make sure it read in correctly
print(df.head())

Running this code imports the pandas library (as pd), uses the read_csv function to read in the data and stores it as a pandas DataFrame called df, then prints the top 5 rows using the head method. Note that the path to the file that you want to import can be a path to a file on your computer, or it can be a URL (web address) for a file on the internet. As long as you have internet access (and permission to access the file) it will work like you have the file downloaded and saved already.

When reading the data, read_csv assumes by default that the delimiting character is a comma (","). If your file uses a different delimiter, you can use the sep parameter to specify which character to use (passing sep=None with engine='python' will ask pandas to try to detect the delimiter automatically). For example, if your file is separated with ";" you might do something like:

import pandas as pd

df = pd.read_csv('path/to/file.csv', sep=';')

# Show the top 5 rows to make sure it is correct
print(df.head())

OK, what if your file has some other junk above the actual data, such as a few rows of notes or metadata before the header row?

We have two options for working around this, the header parameter and the skiprows parameter:

import pandas as pd

df_1 = pd.read_csv('path/to/file.csv', header=7)
df_2 = pd.read_csv('path/to/file.csv', skiprows=7)

# Both DataFrames produce the same result
print(df_1.head())
print(df_2.head())

These are equivalent because setting header=7 tells read_csv to look in row 7 (remember the row numbers are 0 indexed) to find the header row, then assume the data starts from the next row. On the other hand, setting skiprows=7 tells read_csv to ignore the first 7 rows (so rows 0 to 6), then it assumes the header row is the first row after the ignored rows.

Other Useful read_csv Parameters

There are dozens of other parameters to help you read in your data to handle a range of strange cases, but here are a selection of parameters I have found most useful to date:

Parameter         Description
skipfooter        Skip rows at the end of the file (note: you will need to set engine='python' to use this)
index_col         Column to set as the index (the values in this column will become the row labels)
nrows             Number of rows to read in (useful for reading in a sample of rows from a large file)
usecols           A list of columns to read (can use the column names or the 0 indexed column numbers)
skip_blank_lines  Skip empty rows instead of reading in as NaN (empty values)

For the full list of available parameters, check out the official documentation. One thing to note is that although there are a lot of parameters available for read_csv, many are focused on helping correctly format and interpret data as it is being read in – for example, interpretation of dates, interpretation of boolean values, and so on. In many cases these are things that can be addressed after the data is in a pandas DataFrame, and in some cases, handling these types of formatting and standardization steps explicitly after reading in the data can make it easier to understand for the next person that reads your code.
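To make that concrete, here is a sketch combining a few of the parameters above (the file path and column names are placeholders):

df = pd.read_csv(
    'path/to/file.csv',
    usecols=['column_1', 'column_2'],  # only read these two columns
    index_col='column_1',              # use column_1 as the row labels
    nrows=1000,                        # read a sample of 1,000 data rows
    skip_blank_lines=True,             # skip empty rows rather than reading NaNs
)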

Excel Data

Pandas also has a handy wrapper for reading in Excel data: read_excel. Instead of writing your data to a CSV, then reading it in, you can read directly from the Excel file itself. This function has many of the same parameters as read_csv, with options to skip rows, read in a sample of rows and/or columns, specify a header row and so on.
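A minimal sketch (the file path and sheet name are placeholders):

df = pd.read_excel('path/to/file.xlsx', sheet_name='Sheet1', skiprows=2, usecols='A:D')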

Databases

If your data is in a tabular/SQL database, like PostgreSQL, MySQL, BigQuery or something similar, the setup is a little more complicated, but once it is done, it becomes really simple to repeatedly query data (using SQL) from that database directly into a DataFrame where you can do what you want with it.

The first step is to create a connection to the database holding the data. It is beyond the scope of this particular guide, but the library you will almost certainly need is SQLAlchemy or, in some cases, a library created by the creator of the database (for example, Google has a BigQuery API library called google-cloud-bigquery).
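As a rough sketch, creating a SQLAlchemy connection for a PostgreSQL database looks something like this – the connection string details (user, password, host and database name) are placeholders you would replace with your own:

from sqlalchemy import create_engine

# Format: dialect://user:password@host:port/database
engine = create_engine('postgresql://user:password@localhost:5432/db_name')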

Once you have connected to your database, pandas provides three functions for you to extract data into a DataFrame: read_sql_table, read_sql_query and read_sql. The last of these, read_sql, is what’s called a ‘convenience wrapper’ around read_sql_table and read_sql_query – the functionality of both the underlying functions can be accessed from read_sql. But let’s look at the two underlying functions individually to see what the differences are and what options we have.

read_sql_table is a function we can use to extract data from a table in a SQL database. The function requires two parameters table_name – the name of the table you want to get the data from; and con – the location of the database the table is in. With these two parameters, all data from the specified table (i.e. SELECT * FROM table_name) will be returned as a DataFrame:

df = pd.read_sql_table(table_name='table_name', con='postgres:///db_name')  

read_sql_table does also give you the option to specify a list of columns to be extracted using the columns parameter.
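For example (the column names here are placeholders):

df = pd.read_sql_table(
    table_name='table_name',
    con='postgres:///db_name',
    columns=['column_1', 'column_2'],
)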

read_sql_query on the other hand allows you to specify the query you want to run against the database.

query = """
    SELECT column_1
        , column_2
        , column_3
    FROM table_name
    WHERE column_4 > 10
"""
df = pd.read_sql_query(query, 'postgres:///db_name')  

Obviously writing your own query gives you a lot more flexibility to extract exactly what you need. However, also consider the potential upside in terms of processing efficiency. Doing aggregations and transformations in a database, in almost all cases, will be much faster than doing it in pandas after it is extracted. As a result, some careful query planning can save a lot of time and effort later on.

Other Data Sources

Pandas also has functions for reading in data from a range of other sources, including HTML tables, SPSS, Stata, SAS and HDF files. We won’t go into them here, but being aware that these options exist is often all you really need to know. If a case arises where you need to read data from these sources, you can always refer to the documentation.

Wrapping Up

We’ve looked at how we can use pandas to read in data from various sources of tabular data, from delimited files and Excel, to databases, to some other more uncommon sources. While these functions often have many parameters available, remember most of them will be unnecessary for any given dataset. These functions are designed to work with the minimum parameters provided (e.g. just a file location) in most cases. Also remember that once you have the data in a DataFrame, you will have a tonne of options to fix and change the data as needed – you don’t need to do everything in one function.

Official Release: Visual Analytics

I am proud to announce the release of an application I’ve been working on for the last few months – Visual Analytics. This application is designed to give you a new way to view your Google Analytics data using a range of interactive visualizations, allowing you to get a better understanding of who your users are, how they are getting to your site, and what they are doing when they get there.

For those worried about privacy and personal security, the application has a couple of features that will hopefully ease your mind. Firstly, there is no separate account or login details needed for Visual Analytics, everything is based on your existing Google account, and the login process is completed using Google authentication.


Secondly, the application does not currently store any user data. In fact, the application has no database at all (sensing a theme here?). That means that not only can I not sell your data to third parties, but that even if someone does manage to hack into the application, there is nothing to steal except my hacky code base.

For those interested in the technical specs, the backend of the application was built using Python and the Flask web framework. To access the data, once you are logged in using your Google credentials, the application makes calls to the Google Analytics API and then uses Pandas to handle the data manipulation (where needed). On the front end, the visualizations are created using D3.js and Highcharts (a big shout out to the Highcharts team and Mike Bostock for their excellent work on these libraries).

Anyway, if you have a Google Analytics account and are interested in getting some interesting insights into your data, take a look and let me know what you think. And please, if you find an issue or a bug, let me know!


How to create a flashcard app without a database

Last week, I covered how setting up a database may not be necessary when creating an app or visualization, even one that relies on data. This week we are going to walk through an example application that runs off data, but does not need a formal database.

First some background. For the last couple of months, I have been attending some basic Arabic classes to help get around Jordan easier. During a recent class, several of the students were discussing the time they had spent putting together physical cardboard flashcards to help them memorize words. Hearing this, and having played around with creating simple applications and visualizations for a couple of years now, it occurred to me that generating flashcards using a simple application would probably be significantly quicker and easier to do. In addition, if done the right way, it could work on your phone, making it available to you anytime you had your phone and an internet connection.

Perhaps as you are reading this, you are thinking that Arabic is a widely spoken language, surely there is already an app for that? And you would be correct, there are multiple apps for learning Arabic. However, the complication with Arabic is that each country/region uses a significantly different version of the language. In addition to these regional dialects, there is Modern Standard Arabic, which is the formal written version of the language. When it comes to the apps currently available, the version of Arabic being presented is almost always Modern Standard Arabic, as opposed to the Levantine Arabic which is spoken throughout Palestine, Jordan, Lebanon and Syria. Additionally, the apps are quite expensive (up to $10), of questionable accuracy/quality, or both.

To address this problem, and for the challenge and learning opportunity, a few weeks back I sat down over a weekend and put together a simple application that would generate Arabic flashcards (you can see the current version here). The application is based on an Excel spreadsheet with translations that I continue to enter over time based on my notes and the class textbook. Using an Excel spreadsheet in this case is advantageous for two key reasons:

  • It is simple to edit and update with new translations
  • It is a format that almost everyone is familiar with so I can recruit other students and teachers to add translations

With that out of the way, let’s take a look at the high-level process for creating a flashcards app.

1. Collecting the Data

The first step is creating the Excel spreadsheet for our translations. In this case, it is fairly simple and looks something like this:

(Image: the arabic flashcards translations spreadsheet)

In this spreadsheet, each row represents one word and, through the application, will represent one flashcard. In one column, we have the Arabic script for each word, then in the next column the English translation. In addition, we also have a third version of the word with a column header of ‘transcribed’. This column represents how the Arabic word would look/sound if written in Latin script, something that is commonly used in beginner classes when students cannot yet read Arabic script.[1] Finally, in the last column we have a category column. This will be used to provide a feature where the user can filter the words in the application, allowing them to focus their study on particular sets of words.

A quick note, we also have an ID column, which is not used in the application. It is only included to provide a unique key in our datasets, as good practice.

2. Processing the Data

The next step is to take the data from our spreadsheet, convert it to a format that we can use to generate flashcards in the application, then save it. To do this we will use my favorite Python library, pandas, and the short script shown below.

# -*- coding: utf-8 -*-
import pandas as pd

# Read In Data
df = pd.read_excel("./data.xlsx", header=0)

# Create JSON String
json_string = df.to_json(orient="records", force_ascii=False)
json_string = "var data = " + json_string + ";"

# Write to file
text_file = open("data.js", "w", encoding="utf-8")
text_file.write(json_string)
text_file.close()

What this script does is read in the file (in this case, data.xlsx) into a pandas DataFrame (line 5). After that (line 8), we use the to_json method to output the contents of the DataFrame to a JSON string. In line 9 we add some JavaScript to the beginning and end of that JSON string, then in lines 12-14 we save the string as a JavaScript file, data.js.

There are a couple of important things to note here. The first is that when dealing with non-Latin text characters (like Arabic characters), we need to specify that force_ascii=False (the default value is True). If we don’t do this, the script will return an error and/or convert the Arabic letters into a combination of Latin characters representing the Unicode character (i.e. it will look like gibberish).

The second thing to note for those that have not worked with JSON, or key-value stores more generally, is that this is the format that most data comes in when used in programs and applications. It is a highly flexible structure and, as a result, there are many ways we could represent the data shown above. In this case, we are using the ‘records’ format (as specified by pandas), which will look like this:

[
  {
    "id": 1,
    "arabic": "كتير",
    "english": "A lot\/Many\/Very",
    "transcribed": "kteer",
    "category": "Basics"
  },
  {
    "id": 2,
    "arabic": "عَن",
    "english": "About",
    "transcribed": "3an",
    "category": "Basics"
  },
  ...
]

If this isn’t making any sense, or you would like to see some of the other possibilities, copy and paste some spreadsheet data into this CSV to JSON convertor. Toggling a few options, it should quickly become obvious how many different ways a given dataset can be represented in JSON format.

3. Building the App

Now that the data is ready, we create the files needed for the flashcards application. In this case, it is only three files: an HTML document (index.html) for the page, a CSS file for the styling, and an additional JavaScript file that will use the data in data.js to create the flashcards and generate the various features of the application. For those that are interested in the full code or want to create your own version, please feel free to check out/fork the GitHub repo. For those that do not want to get too far into the weeds, there are just a few things I want to highlight about what the code is doing.

Firstly, the filtering and language options in the application are being generated directly from the data. What this means is that as more categories are added to the Excel spreadsheet, or if the languages change (i.e. the headings in the spreadsheet change), as soon as I update the underlying Excel and run the script shown above, all the options in the application will also update accordingly.

Secondly, I added a feature that allows the user to keep score. It is a simple honesty-based system, but I found it does provide some motivation to keep improving, as well as removing an element of self-deception as to how well you are actually doing. Often I would find myself thinking that I was getting almost all of them correct, only to find my correct percentage hovering around 70%.

Finally, a note on randomness. Whether the user is going through the cards unfiltered, or filtering for some category, the application is displaying the flashcards in a random[2] order. This random selection algorithm went through several iterations:

  1. In version 1, the algorithm would simply select four (the number of flashcards presented to the user at one time) random selections from the pool of eligible words.
  2. Upon testing version 1, it was found that, with surprising regularity, the same word would be selected more than once in a group of four flashcards. To address this, in version 2 a condition was added that when randomly selecting a word, it would only be accepted if that word had not already been selected in the given pool of four words.
  3. On further testing, I noticed another annoying issue. As I continually refreshed the four flashcards being displayed, some words would show up repeatedly, while others would take forever to show up, or not show up at all. To avoid this, for version 3, I changed the algorithm again. Now, instead of selecting four words at random, the algorithm instead took the whole list of words, shuffled them in a random order, and ran through the list in the new shuffled order. When the list ran out of words, it took the full list, shuffled it again, and continued.
  4. This was a big improvement. As I refreshed, I got different words, and was able to see all the words before they started repeating. But then I found another issue. In cases where the number of eligible words was not divisible by four, the old shuffled list and the new shuffled list would overlap in a selection of four words. In these cases, there was a possibility that the same word would be repeated. This is a little difficult to visualize, so the illustration below tries to present what was happening using an example list of ten words:

(Image: illustration of the old and new shuffled lists overlapping)

To address this, in version 4, a new condition was added. In cases like the example shown above, the algorithm will check the words from the new shuffled list to ensure they are not already selected from the old list. If a word is already selected, it will move that word to the end of the list and instead take the next word on the list. Here is another diagram to show what is happening:

(Image: illustration of the version 4 duplicate check)

4. Finishing Up

Ok, for those stepping through this and creating your own flashcards app, at this point you have copied the code available from the repo, made any changes to the spreadsheet, and rerun the script to refresh the data. For the final step, there are a couple of things that can be done.

If you are only planning to use the app on the same computer as you are using to create the flashcards app, you are done! Just open the index.html file using Chrome, Firefox or Safari (you can try Internet Explorer, but you know…) and you can test and use the app as you would use any website.

If you want to publish your flashcards app online to share with others, by far the easiest way is to use a service such as GitHub pages. I don’t want to turn this into a beginners guide to using git and GitHub, but there is excellent documentation available to help get you started if you would like to do this. You can see my version at the following address: https://vladimiriii.github.io/arabic-flashcards/, but there is even an option to redirect it to a domain of your choosing should you have one.

(Image: the finished arabic flashcards app)


I hope this was a helpful guide to how a simple application can be created without a database, even if the application runs on some underlying form of data. Let me know what you think in the comments below!


[1] Because Arabic has many sounds that are difficult to convey in Latin script, this is also why when Arabic is transcribed, you will often find multiple spellings of the same word (e.g. Al-Qaeda vs Al-Qaida).

[2] As will be discussed in a new piece to be written, it is not truly random, and the reasons why are pretty interesting.

5 Things I Learned in 2015

2015 has been an interesting year in many respects. A new country[1], a new language, a new job, and plenty of new experiences – both at work and in life in general. To get into the year-end spirit, I thought I would list out 5 key things I learned this year.

1. I Love Pandas

Yes, those pandas as well, who doesn’t? But I knew that well before 2015. The pandas I learned to love this year is a data analysis library for the programming language Python. “Whoa, slow down egg head” I hear you say. For those that are not regular coders, what that means is that pandas provides a large range of ways for people writing Python code to interact with data that makes life very easy.

Reading from and writing to Excel, CSV files and JSON (see lesson number 2) is super easy and fast. Manipulating large datasets in table like structures (dataframes) – check. Slicing, dicing, aggregating – check, check and check. In fact, as a result of pandas, I have almost entirely stopped using R[2]. All the (mostly basic) data manipulation for which I used to use R, I now use Python. Of course R still has an important role to play, particularly when it comes to complex statistical analysis, but that does not tend to come up all that regularly.

2. JSON is Everywhere

JSON, JavaScript Object Notation for the uninitiated, is a data interchange format that has become the default way of transferring data online. Anytime you are seeing data displayed on a webpage, including all the visualizations on this website, JSON is the format the underlying data is in.

JSON has two big advantages that have led to its current state of dominance. The first is that, as the name suggests, it is native to JavaScript – the key programming language, alongside HTML, that is interpreted by the browser you are reading this on. The second is that JSON is an extremely flexible way of representing data.

However, as someone who comes from a statistics and data background, as opposed to a technology background, JSON can take a while to get used to. The way data is represented in JSON is very different to the traditional tables of data that most people are used to seeing. Gone are the columns and rows, replaced with key-value pairs and lots of curly brackets – "{" and "}". If you are interested in seeing what it looks like, there are numerous CSV to JSON convertors online. This one even has a sample dataset to play with.

If you do bother to take a look at some JSON, you will note that it is also much more verbose than your standard tabular format. A table containing 10 columns by 30 rows – something that could easily fit into one screen on a spreadsheet – runs to 300+ lines of JSON, depending on how it is structured. That does not make it easy to get an overview of the data for a human reader, but that overlooks what JSON is designed for – to be read by computers. The fact that a human can read it at all is seen as one of JSON’s strengths.

For those interested in working with data (or any web based technology), knowing how to read and manipulate JSON is becoming as important as knowing how to use a spreadsheet.

3. Free Tools are Great

There are some people working for software vendors who will read this and be happy I have a very small audience. Having worked in the public sector, for a large corporate and now for a small NGO, one thing I have been pleasantly surprised by in 2015 is the number and quality of free tools available online.

For general office administration there are office communicator applications (Slack), task management tools (Trello) and Google’s free replacements for Excel, Word and PowerPoint. For version control and code management there is GitHub. For data analysis, the aforementioned Python and R are both free and open source. For data storage, there is a huge range of free database technologies available, in both SQL (PostgreSQL, MySQL, SQLite3) and NoSQL (MongoDB, Redis, Cassandra) variations.

To be fair to my previous larger employers and my software-selling friends, most of these tools/applications do have significant catches. Many operate on a ‘freemium’ model. This means that for individuals and small organizations with relatively few users, the service is free (or next to free), but costs quickly rise when you need larger numbers of users and/or want access to additional features, typically the types of features larger organizations need. Many of the above also provide no tech support or guarantees, meaning that executives have no one to blame if the software blows up. If you are responsible for maintaining the personal data of millions of clients, that may not be a risk you are willing to take.

For small business owners and entrepreneurs however, these tools are great news. They bring down barriers to entry for small businesses and make their survival more dependent on the quality of the product rather than how much money they have. That is surely only a good thing.

4. Blogging is a Full Time Job

Speaking of starting a business, a common dream these days is semi-retiring somewhere warm and writing a blog. My realization this year from running a blog (if only part time) is just how difficult it is to get any traction. Aside from being able to write reasonably well, there are two main hurdles that anyone planning to become a full time blogger needs to overcome – note that I have not come close to accomplishing either of these:

  1. You have to generate large amounts of good quality content – at least 2-3 longer form pieces a week if you want to maintain a consistent audience. That may seem easy, but after you have quickly bashed out the 5-10 article ideas you have been mulling over, the grind begins. You will often be writing things that are not super interesting to you. You will often not be happy with what you have written. You will quickly realize that your favorite time is the time immediately after you have finished an article and your least favorite is when you need to start a new piece.
  2. You will spend more time marketing your blog than writing. Yep, if you want a big audience (big enough to generate cash to live on) you will need to spend an inordinate amount of time:
    • cold emailing other blogs and websites, asking them to link to your blog (‘generating back links’ in blogspeak)
    • ensuring everything on your blog is geared towards your blog showing up in peoples’ Google search results (Search Engine Optimization or SEO)
    • promoting yourself on Facebook
    • building a following on Twitter
    • contributing to discussions on Reddit and LinkedIn to show people you are someone worth listening to, and
    • writing guest blogs for other sites.

None of this is easy. Begging strangers for links, incorporating ‘focus words’ into your page titles and headings, posting links on Facebook to something you spend days writing, only to find you get one like (thanks Mum!). Meanwhile, some auto-generated, barely readable click-bait trash from ‘viralnova’ or ‘quandly’ (yes, I am deliberately not linking to those sites) is clocking up likes in the 5 figures. It can be downright depressing.

Of course, there are an almost infinite number of people out there offering their services to help with these things (I should know, they regularly comment on my articles telling me how one weird trick can improve my ‘on page SEO’). The problem is, the only real help they can give you is adding more things to the list above. On the other hand, if you are thinking about paid promotion (buying likes or a similar strategy) I’d recommend watching this video:

Still want to be a blogger? You’re welcome.

5. Do not be Afraid to Try New Things

One of the things that struck me in 2015 is how attached people get to doing things a certain way. To a large degree this makes sense, the more often you use/do something, the better you get at it. I am very good at writing SQL and using Excel – I have spent most of the last 10 years using those two things. As a result, I will often try to use those tools to solve problems because I feel most comfortable using them.

Where this becomes a problem is when you start trying to shoehorn problems into tools not just because you are comfortable with the tool, but to avoid using something you are less comfortable with. As you have seen above, two of the best things I learned this year were two concepts that were completely foreign to a SQL/Excel guy like me. But that is part of what made learning them so rewarding. I gained a completely new perspective on how data can be structured and manipulated and, even though I am far from an expert in those new skills, I now know they are available and which sorts of problems they are useful for.

So, do not be afraid to try new things, even if the usefulness of that experience is not immediately apparent. You never know when that skill might come in handy.


Happy New Year to everyone, I hope you have a great 2016!


[1] Or ‘Autonomous Province’ depending on your political views

[2] R is another programming language designed specifically for statistical analysis, data manipulation and data mining.
