The EuroScipy 2014 tutorial "Introduction to predictive analytics with pandas and scikit-learn" combined materials from the EuroScipy 2013 manuals on Pandas and scikit-learn; it is worth watching how all of this develops. Below are links to all the EuroSciPy 2014 videos, the ACIS service (a good example), and material from the repositories of the EuroScipy 2014 tutorial "Introduction to predictive analytics with pandas and scikit-learn", Pandas, and Sklearn-pandas.
Popular scikit-learn & Python videos
EuroSciPy 2014 The EuroSciPy meeting is a cross-disciplinary gathering focused on the use and development of the Python language in scientific research. This event strives to bring together both users and developers of scientific tools, as well as academic researchers...
Climate Observations from ACIS in pandas; SciPy 2013 Presentation
ACIS Web Services
builder.rcc-acis.org
This module provides a bridge between Scikit-Learn's machine learning methods and pandas-style Data Frames.
In particular, it provides:
a way to map DataFrame columns to transformations, which are later recombined into features
a way to cross-validate a pipeline that takes a pandas DataFrame as input.
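The first idea can be sketched in plain pandas and NumPy. This is an illustration of the concept only, not the module's actual API (sklearn-pandas exposes it through its DataFrameMapper class); the column names and transformations below are invented:

```python
import numpy as np
import pandas as pd

# Toy frame; the columns are made up for this sketch.
df = pd.DataFrame({
    "age": [20.0, 30.0, 40.0],
    "city": ["Paris", "Lyon", "Paris"],
})

# Map each column to a transformation, then recombine the results into
# one feature matrix -- the core idea behind sklearn-pandas.
def standardize(col):
    return ((col - col.mean()) / col.std(ddof=0)).to_numpy().reshape(-1, 1)

def one_hot(col):
    return pd.get_dummies(col).to_numpy().astype(float)

mapping = [("age", standardize), ("city", one_hot)]
features = np.hstack([fn(df[name]) for name, fn in mapping])
print(features.shape)  # (3, 3): one standardized column + two one-hot columns
```

The resulting NumPy array is what scikit-learn estimators expect as input.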
EuroScipy 2014 tutorial: Introduction to predictive analytics with pandas and scikit-learn
This repository contains files and other info associated with the EuroScipy 2014 scikit-learn tutorial.
Instructors:
Olivier Grisel @ogrisel | http://ogrisel.com
Gael Varoquaux @GaelVaroquaux | http://gael-varoquaux.info
These materials are "almost" finished but will change before the training session.
Installation Notes
This tutorial will require recent installations of numpy, scipy, matplotlib, scikit-learn, pandas and Pillow (or PIL).
For users who do not yet have these packages installed, a relatively painless way to install all the requirements is to use a package such as Anaconda, which can be downloaded and installed for free.
Please download in advance the Olivetti dataset using:
from sklearn import datasets
datasets.fetch_olivetti_faces()
Reading the training materials
Not all the material will be covered at the EuroScipy training: there is not enough time available. However, you can follow the material by yourself.
With the IPython notebook
The recommended way to access the materials is to execute them in the
IPython notebook. If you have the IPython notebook installed, you should
download the materials (see below), go to the notebooks directory, and
launch the IPython notebook from there by typing:
cd notebooks
ipython notebook
in your terminal window. This will open a notebook panel in your web browser.
On the Internet
If you don't have the IPython notebook installed, you can browse the files on the Internet:
- For the instructions without the solutions:
  http://nbviewer.ipython.org/github/GaelVaroquaux/sklearn_pandas_tutorial/tree/master/notebooks/
- For the instructions and the solutions:
Downloading the Tutorial Materials
I would highly recommend using git, not only for this tutorial, but for the general betterment of your life. Once git is installed, you can clone the material in this tutorial by using the git address shown above:
If you can't or don't want to install git, there is a link above to download the contents of this repository as a zip file. I may make minor changes to the repository in the days before the tutorial, however, so cloning the repository is a much better option.
Data Downloads
The data for this tutorial is not included in the repository. We will be
using several data sets during the tutorial: most are built-in to
scikit-learn, which includes code which automatically downloads and
caches these data. Because the wireless network at conferences can often
be spotty, it would be a good idea to download these data sets before
arriving at the conference. You can do so by using the fetch_data.py
included in the tutorial materials.
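As a sketch of what such pre-fetching looks like with scikit-learn's public loaders (load_digits ships with scikit-learn and needs no network, while the fetch_* functions download on first call and then read from a local cache):

```python
from sklearn import datasets

# Bundled with scikit-learn, no network needed:
digits = datasets.load_digits()
print(digits.data.shape)  # (1797, 64)

# Downloads on first call, then reads from the local cache
# (~/scikit_learn_data by default); uncomment to pre-fetch:
# faces = datasets.fetch_olivetti_faces()
```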
Original material from the Scipy 2013 tutorial
This material is adapted from the scipy 2013 tutorial:
http://github.com/jakevdp/sklearn_scipy2013
Original authors:
- Gael Varoquaux @GaelVaroquaux | http://gael-varoquaux.info
- Olivier Grisel @ogrisel | http://ogrisel.com
- Jake VanderPlas @jakevdp | http://jakevdp.github.com
pandas: powerful Python data analysis toolkit
What is it
pandas is a Python package providing fast, flexible, and expressive data structures designed to make working with "relational" or "labeled" data both easy and intuitive. It aims to be the fundamental high-level building block for doing practical, real world data analysis in Python. Additionally, it has the broader goal of becoming the most powerful and flexible open source data analysis / manipulation tool available in any language. It is already well on its way toward this goal.
Main Features
Here are just a few of the things that pandas does well:
- Easy handling of missing data (represented as NaN) in floating point as well as non-floating point data
- Size mutability: columns can be inserted and deleted from DataFrame and higher dimensional objects
- Automatic and explicit data alignment: objects can be explicitly aligned to a set of labels, or the user can simply ignore the labels and let Series, DataFrame, etc. automatically align the data for you in computations
- Powerful, flexible group by functionality to perform split-apply-combine operations on data sets, for both aggregating and transforming data
- Make it easy to convert ragged, differently-indexed data in other Python and NumPy data structures into DataFrame objects
- Intelligent label-based slicing, fancy indexing, and subsetting of large data sets
- Intuitive merging and joining data sets
- Flexible reshaping and pivoting of data sets
- Hierarchical labeling of axes (possible to have multiple labels per tick)
- Robust IO tools for loading data from flat files (CSV and delimited), Excel files, databases, and saving/loading data from the ultrafast HDF5 format
- Time series-specific functionality: date range generation and frequency conversion, moving window statistics, moving window linear regressions, date shifting and lagging, etc.
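A few of these features in one short sketch (the data below is invented for illustration):

```python
import numpy as np
import pandas as pd

# Missing data is represented as NaN:
s = pd.Series([1.0, np.nan, 3.0])
print(s.isna().sum())  # 1

# Automatic data alignment: arithmetic matches on labels, not positions.
a = pd.Series([1, 2], index=["x", "y"])
b = pd.Series([10, 20], index=["y", "z"])
print(a + b)  # x -> NaN, y -> 12, z -> NaN

# Split-apply-combine with groupby:
df = pd.DataFrame({"key": ["a", "b", "a"], "val": [1, 2, 3]})
print(df.groupby("key")["val"].sum())  # a -> 4, b -> 2
```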
Where to get it
The source code is currently hosted on GitHub at: http://github.com/pydata/pandas
Binary installers for the latest released version are available at the Python package index
http://pypi.python.org/pypi/pandas/
And via easy_install:
easy_install pandas
or pip:
pip install pandas
Dependencies
- NumPy: 1.7.0 or higher
- python-dateutil: 1.5 or higher
- pytz: needed for time zone support with pandas.date_range
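What the pytz dependency enables can be shown in one line (the dates below are arbitrary):

```python
import pandas as pd

# Passing tz= to date_range relies on pytz for the time zone database.
idx = pd.date_range("2014-08-01", periods=3, freq="D", tz="Europe/Paris")
print(idx.tz)  # Europe/Paris
```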
Highly Recommended Dependencies
- numexpr: needed to accelerate some expression evaluation operations; required by PyTables
- bottleneck: needed to accelerate certain numerical operations
Optional dependencies
- Cython: Only necessary to build development version. Version 0.17.1 or higher.
- SciPy: miscellaneous statistical functions
- PyTables: necessary for HDF5-based storage
- SQLAlchemy: for SQL database support. Version 0.8.1 or higher recommended.
- matplotlib: for plotting
- statsmodels: needed for parts of pandas.stats
- For Excel I/O:
  - xlrd/xlwt: Excel reading (xlrd) and writing (xlwt); xlrd >= 0.9.0
  - openpyxl: version 1.6.1 or higher, but lower than 2.0.0, for writing .xlsx files
  - XlsxWriter: alternative Excel writer
- Google bq Command Line Tool: needed for pandas.io.gbq
- boto: necessary for Amazon S3 access.
- One of the following combinations of libraries is needed to use the top-level pandas.read_html function:
  - BeautifulSoup4 and html5lib (any recent version of html5lib is okay)
  - BeautifulSoup4 and lxml
  - BeautifulSoup4 and html5lib and lxml
  - Only lxml, although see HTML reading gotchas for reasons why you should probably not take this approach
Notes about HTML parsing libraries
- If you install BeautifulSoup4 you must install either lxml or html5lib or both; pandas.read_html will not work with only BeautifulSoup4 installed.
- You are strongly encouraged to read HTML reading gotchas. It explains issues surrounding the installation and usage of the above three libraries.
- You may need to install an older version of BeautifulSoup4:
  - Versions 4.2.1, 4.1.3 and 4.0.2 have been confirmed for 64-bit and 32-bit Ubuntu/Debian
- Additionally, if you're using Anaconda you should definitely read the gotchas about HTML parsing libraries
- If you're on a system with apt-get, you can run sudo apt-get build-dep python-lxml to get the necessary dependencies for installing lxml. This will prevent further headaches down the line.
Installation from sources
To install pandas from source you need Cython in addition to the normal dependencies above. Cython can be installed from pypi:
pip install cython
In the pandas directory (the same one where you found this file after cloning the git repo), execute:
python setup.py install
or for installing in development mode:
python setup.py develop
Alternatively, you can use pip if you want all the dependencies pulled in automatically (the -e option is for installing it in development mode):
pip install -e .
On Windows, you will need to install MinGW and execute:
python setup.py build --compiler=mingw32
python setup.py install
See http://pandas.pydata.org/ for more information.
License
BSD
Documentation
The official documentation is hosted on PyData.org: http://pandas.pydata.org/
The Sphinx documentation should provide a good starting point for learning how to use the library. Expect the docs to continue to expand as time goes on.
Background
Work on pandas
started at AQR (a quantitative hedge fund) in 2008 and
has been under active development since then.
Discussion and Development
Since pandas development is related to a number of other scientific Python projects, questions are welcome on the scipy-user mailing list. Specialized discussions or design issues should take place on the PyData mailing list / Google group: