I recently finished the Machine Learning class on Coursera. It’s a great entry-level Machine Learning (ML) class. In this blog post and a following post I attempt to explain, in simple terms, some fundamental concepts in machine learning. Rather than drilling into the mathematical details of ML techniques, this post starts with a high-level way of thinking (e.g. how to decide which algorithm to try?).
Simply put, machine learning is a technique to teach computers to learn certain tasks: it is about developing algorithms that learn how to perform those tasks given a large amount of data. Some interesting tasks include:
Learning can be divided into two categories: supervised and unsupervised learning. Supervised learning handles labeled data and attempts to classify or predict based on the labeled data (the training set). Unsupervised learning handles unlabeled data and attempts to detect trends and patterns (clustering).
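As a toy illustration (not from the course), here is how the two categories typically look in code, assuming scikit-learn is available: a classifier is fit on labeled examples, while a clustering algorithm is fit on the raw features alone.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

# Tiny made-up dataset: six points in 2D
X = np.array([[0, 0], [0, 1], [1, 0], [5, 5], [5, 6], [6, 5]])
y = np.array([0, 0, 0, 1, 1, 1])  # labels, only used in the supervised case

# Supervised: learn a mapping from features to known labels
clf = LogisticRegression().fit(X, y)
print(clf.predict([[0.5, 0.5], [5.5, 5.5]]))  # -> [0 1]

# Unsupervised: discover structure (clusters) without labels
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_)  # two groups; the cluster ids themselves are arbitrary
```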
Most machine learning problems share a similar iterative, pipeline-style approach, because the big picture of most ML work remains the same. It can be roughly summarized as follows (a small code sketch follows the steps):
Given we reliably collect a large set of data
And we define a task the ML algorithm is supposed to learn
First we perform preprocessing of the data
Then we prototype algorithms to accomplish the defined task
And we evaluate the fitness of algorithms
And we improve algorithms based on the evaluation.
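To make the pipeline concrete, here is a minimal sketch in Python using scikit-learn (which the course itself does not use); the dataset and model choices are illustrative only:

```python
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# 1. Collect data (a built-in toy dataset stands in for real data collection)
X, y = load_digits(return_X_y=True)

# 2. Preprocess: hold out a test set for evaluation and scale the features
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
scaler = StandardScaler().fit(X_train)
X_train, X_test = scaler.transform(X_train), scaler.transform(X_test)

# 3. Prototype an algorithm for the task (here, digit classification)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# 4. Evaluate its fitness, then iterate on the model or the features
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
```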
When we collect data for an ML problem, we should ask the following questions:
The technique of Artificial Data Synthesis is often applied to generate more data. The idea is that we can “simulate” more data based on real data. One example comes from Optical Character Recognition (OCR): researchers combine the background of one image with a letter from another image to form a new image, and include the synthesized image as a data point.
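A rough sketch of that idea using Pillow; the file names, positions, and sizes are made up purely for illustration:

```python
from PIL import Image

# Hypothetical inputs: a background texture and an image of a single letter
background = Image.open("background.png").convert("RGBA")
letter = Image.open("letter_A.png").convert("RGBA")

# Paste the letter onto the background, using the letter's alpha channel as a mask,
# to synthesize a new labeled training example for the OCR system
synthesized = background.copy()
synthesized.paste(letter, (10, 10), mask=letter)
synthesized.save("synthetic_A.png")
```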
Be aware of which tasks we can perform given the nature of the data. For example, it might be more difficult to perform prediction-related tasks using unlabeled data. To borrow from a great Stack Overflow answer on labeled data vs. unlabeled data:
Typically, unlabeled data consists of samples of natural or human-created artifacts that you can obtain relatively easily from the world. Some examples of unlabeled data might include photos, audio recordings, videos, news articles, tweets, x-rays (if you were working on a medical application), etc. There is no “explanation” for each piece of unlabeled data – it just contains the data, and nothing else.
Labeled data typically takes a set of unlabeled data and augments each piece of that unlabeled data with some sort of meaningful “tag,” “label,” or “class” that is somehow informative or desirable to know. For example, labels for the above types of unlabeled data might be whether this photo contains a horse or a cow, which words were uttered in this audio recording, what type of action is being performed in this video, what the topic of this news article is, what the overall sentiment of this tweet is, whether the dot in this x-ray is a tumor, etc.
The goals of preprocessing data are twofold: first, to get a feel of the dataset before we dive into algorithm development, and second, to normalize data to a similar scale. To get a feel of the data, visualizing it helps. The idea of Dimensionality Reduction is applied when handling datasets with high dimensions; it maps data from a high-dimensional space to a lower-dimensional one while preserving as much of the structure as possible.
One technique of dimensionality reduction is Principal Component Analysis (PCA), a linear transformation that can be defined as follows:
Given a dataset $X$ of size $m \times n$,
We want to compute a transformation matrix $U_{reduce}$ of size $n \times k$ to reduce $X$ to $Z = X\,U_{reduce}$, where $k < n$.
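A minimal NumPy sketch of this recipe (center the data, compute the covariance matrix, take its SVD, and project onto the top $k$ components); the function and variable names are mine:

```python
import numpy as np

def pca(X, k):
    """Project an (m x n) data matrix X onto its top-k principal components."""
    X_centered = X - X.mean(axis=0)                  # center each feature
    sigma = (X_centered.T @ X_centered) / X.shape[0]  # n x n covariance matrix
    U, S, _ = np.linalg.svd(sigma)                   # singular value decomposition
    U_reduce = U[:, :k]                              # n x k transformation matrix
    Z = X_centered @ U_reduce                        # m x k reduced representation
    return Z, U_reduce
```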
Another great visualized example of PCA can be found here.
To normalize data to a similar scale, we perform a data transformation defined as follows:
Replace each $x_i$ with $\dfrac{x_i - \mu_i}{s_i}$, where $\mu_i$ is the mean and $s_i$ is the standard deviation (or range) of feature $i$.
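As a small sketch, this feature scaling can be written directly with NumPy, applied column-wise so that each feature gets its own mean and standard deviation:

```python
import numpy as np

def feature_scale(X):
    """Return X with each feature (column) rescaled to zero mean and unit variance."""
    mu = X.mean(axis=0)
    sigma = X.std(axis=0)   # note: a constant feature would give sigma = 0
    return (X - mu) / sigma
```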
The benefit of feature scaling is that it speeds up the computation of the algorithm later on; you can read more about why feature scaling helps to learn the rationale behind it.
In the following blog post, we will discuss more foundational ideas behind prototyping algorithms, evaluating algorithms, and building large machine learning (ML) systems.