
To get the most out of the tutorials, you will need to have the correct software installed and running. Requirements for each tutorial are listed in its detailed description, but it is best to start with one of the scientific Python distributions to ensure an environment that includes most of the packages you'll need.

An Introduction to scikit-learn (II)

Gaël Varoquaux - INRIA
Jake Vanderplas - University of Washington
Olivier Grisel


Part 1

Part 2


Gaël Varoquaux
Gaël Varoquaux is an INRIA faculty researcher working on computational science for brain imaging at the Neurospin brain research institute (Paris, France). He uses machine learning to develop statistical models and algorithms for mining brain activity data. He dreams of making advanced data-processing techniques available across new fields via easy-to-use, interdisciplinary, open-source scientific software in Python. For this purpose, he is a core developer of major Python modules for scientific data analysis (scikit-learn, Mayavi, joblib), and often teaches scientific computing with Python using http://scipy-lectures.github.com. His travels and rants can be found at http://gael-varoquaux.info

Jake Vanderplas
Jake Vanderplas is an NSF postdoctoral research fellow, working jointly between the Astronomy and Computer Science departments at the University of Washington, and is interested in topics at the intersection of large-scale machine learning and wide-field astronomical surveys. He is co-author of the book “Statistics, Data Mining, and Machine Learning in Astronomy”, which will be published by Princeton University Press later this year. In the Python world, Jake is the author of AstroML, and a maintainer of scikit-learn & SciPy. He gives regular talks and tutorials at various Python conferences, and occasionally blogs his thoughts and his code at Pythonic Perambulations: http://jakevdp.github.com.

Olivier Grisel
Olivier Grisel is an independent software engineer and machine learning expert specializing in text analytics and natural language processing, and a regular contributor to the scikit-learn machine learning library. http://twitter.com/ogrisel http://ogrisel.com


Machine learning is the branch of computer science concerned with developing algorithms that learn from previously seen data in order to make predictions about future data, and it has become an important part of research in many scientific fields. This set of tutorials will introduce the basics of machine learning and show how these learning tasks can be accomplished using scikit-learn, a machine learning library written in Python and built on NumPy, SciPy, and Matplotlib. By the end of the tutorials, participants will be poised to take advantage of scikit-learn's wide variety of machine learning algorithms to explore their own data sets. The tutorial comprises two sessions: Session I in the morning (intermediate track) and Session II in the afternoon (advanced track). Participants are free to attend either one or both, but to get the most out of the material, we encourage those attending in the afternoon to attend in the morning as well.

Session II will build upon Session I and assume familiarity with the concepts covered there. The goal of Session II is to introduce more involved algorithms and techniques that are vital for successfully applying machine learning in practice. It will cover cross-validation and hyperparameter optimization, unsupervised algorithms, and pipelines, and will go into depth on a few extremely powerful learning algorithms available in scikit-learn: support vector machines, random forests, and sparse models. We will finish with an extended exercise applying scikit-learn to a real-world problem.
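To give a flavor of what "pipelines plus hyperparameter optimization" looks like in code, here is a minimal sketch: an unsupervised reduction step (PCA) chained with a supervised SVM, tuned by cross-validated grid search on the digits data used throughout the tutorial. The parameter grid and component count are illustrative choices, not recommendations, and the import paths follow current scikit-learn rather than the 0.13-era layout.

```python
# Hedged sketch: pipeline + cross-validated grid search on digits.
# Parameter values here are illustrative, not tuned recommendations.
from sklearn.datasets import load_digits
from sklearn.decomposition import PCA
from sklearn.model_selection import GridSearchCV
from sklearn.pipeline import Pipeline
from sklearn.svm import SVC

X, y = load_digits(return_X_y=True)

pipe = Pipeline([
    ("pca", PCA(n_components=30)),   # unsupervised data reduction
    ("svm", SVC(kernel="rbf")),      # supervised learner
])

# Search hyperparameters of a pipeline step with the "step__param" syntax.
grid = GridSearchCV(
    pipe,
    param_grid={"svm__C": [1, 10, 100], "svm__gamma": [1e-3, 1e-2]},
    cv=3,
)
grid.fit(X, y)
print(grid.best_params_)
```

Because the grid search treats the whole pipeline as one estimator, the PCA step is refit inside each cross-validation fold, avoiding information leakage from the held-out data.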


Tutorial 2 (advanced track)

  • 0:00 - 0:30 -- Model validation and testing
    • Bias, Variance, Over-fitting, Under-fitting
    • Using validation curves & learning curves to improve your model
    • Exercise: Tuning a random forest for the digits data
  • 0:30 - 1:30 -- In depth with a few learners
    • SVMs and kernels
    • Trees and forests
    • Sparse and non-sparse linear models
  • 1:30 - 2:00 -- Unsupervised Learning
    • Example of Dimensionality Reduction: hand-written digits
    • Example of Clustering: Olivetti Faces
  • 2:00 - 2:15 -- Pipelining learners
    • Examples of unsupervised data reduction followed by supervised learning.
  • 2:15 - 2:30 -- Break (possibly in the middle of the previous section)
  • 2:30 - 3:00 -- Learning on big data
    • Online learning:
      • MiniBatchKMeans
      • Stochastic Gradient Descent for linear models
    • Data-reducing transforms: random projections
  • 3:00 - 4:00 -- Parallel Machine Learning with IPython
    • IPython.parallel, a short primer
    • Parallel Model Assessment and Selection
    • Running a cluster on the EC2 cloud using StarCluster
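The "learning on big data" segment can be sketched as follows: out-of-core estimators such as SGDClassifier and MiniBatchKMeans expose a partial_fit method that consumes data one mini-batch at a time, so the full dataset never needs to fit in memory. The synthetic data and batch size below are assumptions for illustration; a real workflow would stream batches from disk or a database.

```python
# Hedged sketch of online learning with partial_fit. The streaming is
# simulated by slicing an in-memory array into 100-sample chunks.
import numpy as np
from sklearn.cluster import MiniBatchKMeans
from sklearn.linear_model import SGDClassifier

rng = np.random.RandomState(0)
X = rng.randn(1000, 20)
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # toy, linearly separable labels

sgd = SGDClassifier(loss="hinge", random_state=0)
mbk = MiniBatchKMeans(n_clusters=3, random_state=0, n_init=3)

for start in range(0, len(X), 100):          # stream 100-sample chunks
    Xb, yb = X[start:start + 100], y[start:start + 100]
    sgd.partial_fit(Xb, yb, classes=[0, 1])  # online linear model
    mbk.partial_fit(Xb)                      # online clustering

print(round(sgd.score(X, y), 3))
```

Note that partial_fit on a classifier needs the full list of classes up front, since later batches may not contain every label.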

Required Packages

This tutorial will use Python 2.6 / 2.7 and requires recent versions of numpy (version 1.5+), scipy (version 0.10+), matplotlib (version 1.1+), scikit-learn (version 0.13.1+), and IPython (version 0.13.1+) with notebook support. The last requirement is particularly important: participants should be able to run the IPython notebook and create & manipulate notebooks in their web browser. The easiest way to install these requirements is to use a packaged distribution; we recommend Anaconda CE, a free package provided by Continuum Analytics (http://continuum.io/downloads.html), or the Enthought Python Distribution (http://www.enthought.com/products/epd_free.php).
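A quick way to confirm your environment meets the requirements is to import each package and print its version; this snippet is a convenience sketch, not part of the tutorial material, and it simply reports MISSING for anything not installed.

```python
# Sanity check: import each required package and report its version.
import importlib

for name in ("numpy", "scipy", "matplotlib", "sklearn", "IPython"):
    try:
        mod = importlib.import_module(name)
        print(name, mod.__version__)
    except ImportError:
        print(name, "MISSING")  # install it before the tutorial
```

Compare the printed versions against the minimums listed above before the session starts.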