The n-MNIST handwritten digit dataset *


Saikat Basu, Manohar Karki, Robert DiBiano and Supratik Mukhopadhyay, Louisiana State University
Sangram Ganguly, Bay Area Environmental Research Institute/NASA Ames Research Center
Ramakrishna R. Nemani, NASA Advanced Supercomputing Division, NASA Ames Research Center

The n-MNIST dataset (short for noisy MNIST) is created from the MNIST dataset of handwritten digits by adding:
(1) additive white Gaussian noise,
(2) motion blur, and
(3) a combination of additive white Gaussian noise and reduced contrast.

The datasets are available here:
n-mnist-with-awgn.gz
n-mnist-with-motion-blur.gz
n-mnist-with-reduced-contrast-and-awgn.gz

Sample images from the n-MNIST dataset:


[Figure: sample digits from n-MNIST with Additive White Gaussian Noise (AWGN); n-MNIST with Motion Blur; n-MNIST with Reduced Contrast and AWGN]

Dataset description:

The datasets are encoded as MATLAB .mat files that can be read using the standard load command in MATLAB. Each of the three datasets contains 60,000 training samples and 10,000 test samples, the same as the original MNIST dataset. Each sample image is 28x28 pixels and is linearized as a vector of size 1x784, so the training and test sets are 2-D matrices of size 60000x784 and 10000x784 respectively. Each label is a 1x10 one-hot vector, with a single 1 at the index of the digit (0 to 9) and 0 at all other indices.

The MAT files contain the following variables:

train_x 60000x784 uint8 (60,000 training images, each a 28x28 image linearized into a 1x784 vector)
train_y 60000x10 uint8 (one-hot 1x10 label vectors for the 60,000 training samples)
test_x 10000x784 uint8 (10,000 test images, each a 28x28 image linearized into a 1x784 vector)
test_y 10000x10 uint8 (one-hot 1x10 label vectors for the 10,000 test samples)
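
A minimal MATLAB sketch for loading one variant and inspecting a sample is shown below. The .mat filename is an assumption (use whatever name the archive actually extracts to), and the row/column order of the linearization is not specified above, so the transpose may need to be dropped.

    load('mnist-with-awgn.mat');              % hypothetical filename; loads train_x, train_y, test_x, test_y

    img = reshape(train_x(1, :), 28, 28);     % first training sample back to 28x28
    imshow(img', []);                         % transpose assumed; drop it if digits appear mirrored

    [~, idx] = max(train_y(1, :));            % one-hot label back to a digit
    digit = idx - 1;                          % indices 1..10 correspond to digits 0..9
    fprintf('First training sample is digit %d\n', digit);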


Additive White Gaussian Noise (AWGN)

The AWGN dataset is created by adding additive white Gaussian noise with a signal-to-noise ratio of 9.5 to the original MNIST images. This emulates significant background clutter.
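
As a rough illustration, the MATLAB sketch below adds white Gaussian noise to one sample at the stated SNR. It assumes the SNR is expressed in dB relative to the measured signal power, which the description above does not confirm.

    img   = double(train_x(1, :)) / 255;      % one sample, scaled to [0, 1]
    snr   = 9.5;                              % target signal-to-noise ratio (assumed dB)
    sigP  = mean(img(:) .^ 2);                % measured signal power
    sigma = sqrt(sigP / 10 ^ (snr / 10));     % noise standard deviation for that SNR
    noisy = img + sigma * randn(size(img));   % add the noise
    noisy = min(max(noisy, 0), 1);            % clip back to the valid pixel range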

Motion Blur

The motion blur filter emulates linear motion of a camera by τ pixels at an angle of θ degrees; for purely horizontal or vertical motion, the filter reduces to a vector. We use τ = 5 pixels and θ = 15 degrees, measured in the counterclockwise direction.
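
This description matches MATLAB's fspecial('motion') kernel (Image Processing Toolbox), so a plausible reconstruction of the corruption is:

    h       = fspecial('motion', 5, 15);      % linear motion: tau = 5 pixels, theta = 15 degrees CCW
    img     = reshape(double(train_x(1, :)), 28, 28);
    blurred = imfilter(img, h, 'replicate');  % 'replicate' border handling is an assumption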

Reduced Contrast and AWGN

The contrast range is first scaled down to half, and additive white Gaussian noise with a signal-to-noise ratio of 12 is then applied. This emulates background clutter combined with a significant change in lighting conditions.
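
A sketch of the combined corruption follows, under two assumptions: that "half the contrast range" means a linear squeeze of [0, 1] into [0.25, 0.75] about mid-gray, and that the SNR of 12 is in dB. The exact mapping used by the authors may differ.

    img   = reshape(double(train_x(1, :)) / 255, 28, 28);
    lowc  = 0.5 + 0.5 * (img - 0.5);          % squeeze [0, 1] into [0.25, 0.75]
    sigP  = mean(lowc(:) .^ 2);               % measured signal power after contrast reduction
    sigma = sqrt(sigP / 10 ^ (12 / 10));      % noise standard deviation for an SNR of 12
    noisy = min(max(lowc + sigma * randn(size(lowc)), 0), 1);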

* To use this dataset, please cite the following paper:

Saikat Basu, Manohar Karki, Sangram Ganguly, Robert DiBiano, Supratik Mukhopadhyay, and Ramakrishna Nemani, "Learning Sparse Feature Representations using Probabilistic Quadtrees and Deep Belief Nets," European Symposium on Artificial Neural Networks (ESANN), 2015.