Applying Python to price analysis: handling, organizing and downloading data with pandas

In this class and the next two we will work through an application of Monte Carlo simulation to decision making. To get there, in this class we will first see how to manipulate data with pandas, both from a local Excel file and remotely from Yahoo Finance.

Python Data Analysis Library: pandas is an open-source, easy-to-use library that provides high-performance data structures and data analysis tools for the Python programming language.

1. Importing data from spreadsheets (such as Excel's)

1.1. Why spreadsheets?

  • Surely everyone has worked with Excel spreadsheets, at least for basic tasks.
  • This tool helps us organize, analyze and store data in tables.
  • This software is widely used in many fields of application around the world.
  • Like it or not, this also applies to data science (financial engineering).
  • Many of you, in your academic and professional future, will have to work with spreadsheets, but you will not always want to work directly in them when you need a somewhat more advanced analysis of the data.
  • For this reason, Python provides tools to read, write and manipulate this kind of file.

In this class we will see how to work with Excel and Python at a basic level using the pandas library.

1.2. Basic rules before reading spreadsheets

Before reading a spreadsheet into Python (or any other program), we should consider adjusting the file to satisfy certain principles, such as:

  • The first row of the spreadsheet is reserved for headers, while the first column is used to identify the sampling unit or the index of the data (time, date, events...).
  • Avoid names, values or fields containing blank spaces. Otherwise, each word is interpreted as a separate variable, and errors related to the number of elements per line will follow. For this, use find-and-replace with underscores, periods, etc.
  • Short names are preferred over long names.
  • Avoid symbols such as ?, $, %, ^, &, *, (, ), -, #, ,, <, >, /, |, \, [, ], { and }.
  • Delete any comments you made in the file, to avoid extra columns.
  • Make sure any missing value is indicated as NA.

If you made any changes, be sure to save them.

If you are working with Microsoft Excel, you will see there are many options for saving files apart from the default .xls or .xlsx extensions. For this, go to “Save As” and select one of the extensions listed under “Save as Type”.

The most common extension is .csv (comma-separated text files).
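
As a quick illustration of these rules, the following sketch (the file name ejemplo.csv is hypothetical) writes a small, well-formed table to a .csv file: short headers with no spaces, and a missing value that pandas writes out as empty:

import numpy as np
import pandas as pd

# A mini-table following the rules above
df = pd.DataFrame({'Date': ['2011-01-03', '2011-01-04'],
                   'Adj_Close': [42.36, np.nan]})
df.to_csv('ejemplo.csv', index=False)  # plain comma-separated text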

Activity. Download stock prices for Apple (AAPL), Amazon (AMZN), Microsoft (MSFT) and NVIDIA (NVDA) from Yahoo Finance, with a time window from 2011-01-01 to 2016-12-31 and daily frequency.

  • Go to https://finance.yahoo.com/.
  • Search for each of the requested companies.
  • Click on the 'Historical Data' tab.
  • Change the dates under 'Time Period', click 'Apply' and, finally, click 'Download Data'.
  • PLEASE! SAVE THESE FILES IN A FOLDER NAMED precios IN THE SAME DIRECTORY WHERE YOU HAVE THIS NOTEBOOK.

After that, follow the guidelines given in 1.2.

1.3. Loading .csv files as pandas DataFrames

Now we can start importing our files.

One of the most common tools for data analysis is pandas. This is because pandas is built on top of NumPy and provides easy-to-use data structures and data analysis tools.


In [1]:
# Import pandas
import pandas as pd
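
Since pandas is built on NumPy, every DataFrame is essentially a NumPy array with labeled axes. A minimal illustration (the variable name datos is just for the example):

import numpy as np

# A DataFrame adds row/column labels on top of a NumPy array
datos = pd.DataFrame(np.arange(6).reshape(3, 2), columns=['a', 'b'])
datos.values  # the underlying NumPy array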

To read .csv files, we will use the pd.read_csv function...


In [2]:
help(pd.read_csv)


Help on function read_csv in module pandas.io.parsers:

read_csv(filepath_or_buffer, sep=',', delimiter=None, header='infer', names=None, index_col=None, usecols=None, squeeze=False, prefix=None, mangle_dupe_cols=True, dtype=None, engine=None, converters=None, true_values=None, false_values=None, skipinitialspace=False, skiprows=None, nrows=None, na_values=None, keep_default_na=True, na_filter=True, verbose=False, skip_blank_lines=True, parse_dates=False, infer_datetime_format=False, keep_date_col=False, date_parser=None, dayfirst=False, iterator=False, chunksize=None, compression='infer', thousands=None, decimal=b'.', lineterminator=None, quotechar='"', quoting=0, escapechar=None, comment=None, encoding=None, dialect=None, tupleize_cols=False, error_bad_lines=True, warn_bad_lines=True, skipfooter=0, skip_footer=0, doublequote=True, delim_whitespace=False, as_recarray=False, compact_ints=False, use_unsigned=False, low_memory=True, buffer_lines=None, memory_map=False, float_precision=None)
    Read CSV (comma-separated) file into DataFrame
    
    Also supports optionally iterating or breaking of the file
    into chunks.
    
    Additional help can be found in the `online docs for IO Tools
    <http://pandas.pydata.org/pandas-docs/stable/io.html>`_.
    
    Parameters
    ----------
    filepath_or_buffer : str, pathlib.Path, py._path.local.LocalPath or any object with a read() method (such as a file handle or StringIO)
        The string could be a URL. Valid URL schemes include http, ftp, s3, and
        file. For file URLs, a host is expected. For instance, a local file could
        be file ://localhost/path/to/table.csv
    sep : str, default ','
        Delimiter to use. If sep is None, the C engine cannot automatically detect
        the separator, but the Python parsing engine can, meaning the latter will
        be used automatically. In addition, separators longer than 1 character and
        different from ``'\s+'`` will be interpreted as regular expressions and
        will also force the use of the Python parsing engine. Note that regex
        delimiters are prone to ignoring quoted data. Regex example: ``'\r\t'``
    delimiter : str, default ``None``
        Alternative argument name for sep.
    delim_whitespace : boolean, default False
        Specifies whether or not whitespace (e.g. ``' '`` or ``'    '``) will be
        used as the sep. Equivalent to setting ``sep='\s+'``. If this option
        is set to True, nothing should be passed in for the ``delimiter``
        parameter.
    
        .. versionadded:: 0.18.1 support for the Python parser.
    
    header : int or list of ints, default 'infer'
        Row number(s) to use as the column names, and the start of the data.
        Default behavior is as if set to 0 if no ``names`` passed, otherwise
        ``None``. Explicitly pass ``header=0`` to be able to replace existing
        names. The header can be a list of integers that specify row locations for
        a multi-index on the columns e.g. [0,1,3]. Intervening rows that are not
        specified will be skipped (e.g. 2 in this example is skipped). Note that
        this parameter ignores commented lines and empty lines if
        ``skip_blank_lines=True``, so header=0 denotes the first line of data
        rather than the first line of the file.
    names : array-like, default None
        List of column names to use. If file contains no header row, then you
        should explicitly pass header=None. Duplicates in this list are not
        allowed unless mangle_dupe_cols=True, which is the default.
    index_col : int or sequence or False, default None
        Column to use as the row labels of the DataFrame. If a sequence is given, a
        MultiIndex is used. If you have a malformed file with delimiters at the end
        of each line, you might consider index_col=False to force pandas to _not_
        use the first column as the index (row names)
    usecols : array-like or callable, default None
        Return a subset of the columns. If array-like, all elements must either
        be positional (i.e. integer indices into the document columns) or strings
        that correspond to column names provided either by the user in `names` or
        inferred from the document header row(s). For example, a valid array-like
        `usecols` parameter would be [0, 1, 2] or ['foo', 'bar', 'baz'].
    
        If callable, the callable function will be evaluated against the column
        names, returning names where the callable function evaluates to True. An
        example of a valid callable argument would be ``lambda x: x.upper() in
        ['AAA', 'BBB', 'DDD']``. Using this parameter results in much faster
        parsing time and lower memory usage.
    as_recarray : boolean, default False
        DEPRECATED: this argument will be removed in a future version. Please call
        `pd.read_csv(...).to_records()` instead.
    
        Return a NumPy recarray instead of a DataFrame after parsing the data.
        If set to True, this option takes precedence over the `squeeze` parameter.
        In addition, as row indices are not available in such a format, the
        `index_col` parameter will be ignored.
    squeeze : boolean, default False
        If the parsed data only contains one column then return a Series
    prefix : str, default None
        Prefix to add to column numbers when no header, e.g. 'X' for X0, X1, ...
    mangle_dupe_cols : boolean, default True
        Duplicate columns will be specified as 'X.0'...'X.N', rather than
        'X'...'X'. Passing in False will cause data to be overwritten if there
        are duplicate names in the columns.
    dtype : Type name or dict of column -> type, default None
        Data type for data or columns. E.g. {'a': np.float64, 'b': np.int32}
        Use `str` or `object` to preserve and not interpret dtype.
        If converters are specified, they will be applied INSTEAD
        of dtype conversion.
    engine : {'c', 'python'}, optional
        Parser engine to use. The C engine is faster while the python engine is
        currently more feature-complete.
    converters : dict, default None
        Dict of functions for converting values in certain columns. Keys can either
        be integers or column labels
    true_values : list, default None
        Values to consider as True
    false_values : list, default None
        Values to consider as False
    skipinitialspace : boolean, default False
        Skip spaces after delimiter.
    skiprows : list-like or integer or callable, default None
        Line numbers to skip (0-indexed) or number of lines to skip (int)
        at the start of the file.
    
        If callable, the callable function will be evaluated against the row
        indices, returning True if the row should be skipped and False otherwise.
        An example of a valid callable argument would be ``lambda x: x in [0, 2]``.
    skipfooter : int, default 0
        Number of lines at bottom of file to skip (Unsupported with engine='c')
    skip_footer : int, default 0
        DEPRECATED: use the `skipfooter` parameter instead, as they are identical
    nrows : int, default None
        Number of rows of file to read. Useful for reading pieces of large files
    na_values : scalar, str, list-like, or dict, default None
        Additional strings to recognize as NA/NaN. If dict passed, specific
        per-column NA values.  By default the following values are interpreted as
        NaN: '', '#N/A', '#N/A N/A', '#NA', '-1.#IND', '-1.#QNAN', '-NaN', '-nan',
        '1.#IND', '1.#QNAN', 'N/A', 'NA', 'NULL', 'NaN', 'nan'`.
    keep_default_na : bool, default True
        If na_values are specified and keep_default_na is False the default NaN
        values are overridden, otherwise they're appended to.
    na_filter : boolean, default True
        Detect missing value markers (empty strings and the value of na_values). In
        data without any NAs, passing na_filter=False can improve the performance
        of reading a large file
    verbose : boolean, default False
        Indicate number of NA values placed in non-numeric columns
    skip_blank_lines : boolean, default True
        If True, skip over blank lines rather than interpreting as NaN values
    parse_dates : boolean or list of ints or names or list of lists or dict, default False
    
        * boolean. If True -> try parsing the index.
        * list of ints or names. e.g. If [1, 2, 3] -> try parsing columns 1, 2, 3
          each as a separate date column.
        * list of lists. e.g.  If [[1, 3]] -> combine columns 1 and 3 and parse as
          a single date column.
        * dict, e.g. {'foo' : [1, 3]} -> parse columns 1, 3 as date and call result
          'foo'
    
        If a column or index contains an unparseable date, the entire column or
        index will be returned unaltered as an object data type. For non-standard
        datetime parsing, use ``pd.to_datetime`` after ``pd.read_csv``
    
        Note: A fast-path exists for iso8601-formatted dates.
    infer_datetime_format : boolean, default False
        If True and parse_dates is enabled, pandas will attempt to infer the format
        of the datetime strings in the columns, and if it can be inferred, switch
        to a faster method of parsing them. In some cases this can increase the
        parsing speed by 5-10x.
    keep_date_col : boolean, default False
        If True and parse_dates specifies combining multiple columns then
        keep the original columns.
    date_parser : function, default None
        Function to use for converting a sequence of string columns to an array of
        datetime instances. The default uses ``dateutil.parser.parser`` to do the
        conversion. Pandas will try to call date_parser in three different ways,
        advancing to the next if an exception occurs: 1) Pass one or more arrays
        (as defined by parse_dates) as arguments; 2) concatenate (row-wise) the
        string values from the columns defined by parse_dates into a single array
        and pass that; and 3) call date_parser once for each row using one or more
        strings (corresponding to the columns defined by parse_dates) as arguments.
    dayfirst : boolean, default False
        DD/MM format dates, international and European format
    iterator : boolean, default False
        Return TextFileReader object for iteration or getting chunks with
        ``get_chunk()``.
    chunksize : int, default None
        Return TextFileReader object for iteration.
        See the `IO Tools docs
        <http://pandas.pydata.org/pandas-docs/stable/io.html#io-chunking>`_
        for more information on ``iterator`` and ``chunksize``.
    compression : {'infer', 'gzip', 'bz2', 'zip', 'xz', None}, default 'infer'
        For on-the-fly decompression of on-disk data. If 'infer', then use gzip,
        bz2, zip or xz if filepath_or_buffer is a string ending in '.gz', '.bz2',
        '.zip', or 'xz', respectively, and no decompression otherwise. If using
        'zip', the ZIP file must contain only one data file to be read in.
        Set to None for no decompression.
    
        .. versionadded:: 0.18.1 support for 'zip' and 'xz' compression.
    
    thousands : str, default None
        Thousands separator
    decimal : str, default '.'
        Character to recognize as decimal point (e.g. use ',' for European data).
    float_precision : string, default None
        Specifies which converter the C engine should use for floating-point
        values. The options are `None` for the ordinary converter,
        `high` for the high-precision converter, and `round_trip` for the
        round-trip converter.
    lineterminator : str (length 1), default None
        Character to break file into lines. Only valid with C parser.
    quotechar : str (length 1), optional
        The character used to denote the start and end of a quoted item. Quoted
        items can include the delimiter and it will be ignored.
    quoting : int or csv.QUOTE_* instance, default 0
        Control field quoting behavior per ``csv.QUOTE_*`` constants. Use one of
        QUOTE_MINIMAL (0), QUOTE_ALL (1), QUOTE_NONNUMERIC (2) or QUOTE_NONE (3).
    doublequote : boolean, default ``True``
       When quotechar is specified and quoting is not ``QUOTE_NONE``, indicate
       whether or not to interpret two consecutive quotechar elements INSIDE a
       field as a single ``quotechar`` element.
    escapechar : str (length 1), default None
        One-character string used to escape delimiter when quoting is QUOTE_NONE.
    comment : str, default None
        Indicates remainder of line should not be parsed. If found at the beginning
        of a line, the line will be ignored altogether. This parameter must be a
        single character. Like empty lines (as long as ``skip_blank_lines=True``),
        fully commented lines are ignored by the parameter `header` but not by
        `skiprows`. For example, if comment='#', parsing '#empty\na,b,c\n1,2,3'
        with `header=0` will result in 'a,b,c' being
        treated as the header.
    encoding : str, default None
        Encoding to use for UTF when reading/writing (ex. 'utf-8'). `List of Python
        standard encodings
        <https://docs.python.org/3/library/codecs.html#standard-encodings>`_
    dialect : str or csv.Dialect instance, default None
        If provided, this parameter will override values (default or not) for the
        following parameters: `delimiter`, `doublequote`, `escapechar`,
        `skipinitialspace`, `quotechar`, and `quoting`. If it is necessary to
        override values, a ParserWarning will be issued. See csv.Dialect
        documentation for more details.
    tupleize_cols : boolean, default False
        Leave a list of tuples on columns as is (default is to convert to
        a Multi Index on the columns)
    error_bad_lines : boolean, default True
        Lines with too many fields (e.g. a csv line with too many commas) will by
        default cause an exception to be raised, and no DataFrame will be returned.
        If False, then these "bad lines" will dropped from the DataFrame that is
        returned.
    warn_bad_lines : boolean, default True
        If error_bad_lines is False, and warn_bad_lines is True, a warning for each
        "bad line" will be output.
    low_memory : boolean, default True
        Internally process the file in chunks, resulting in lower memory use
        while parsing, but possibly mixed type inference.  To ensure no mixed
        types either set False, or specify the type with the `dtype` parameter.
        Note that the entire file is read into a single DataFrame regardless,
        use the `chunksize` or `iterator` parameter to return the data in chunks.
        (Only valid with C parser)
    buffer_lines : int, default None
        DEPRECATED: this argument will be removed in a future version because its
        value is not respected by the parser
    compact_ints : boolean, default False
        DEPRECATED: this argument will be removed in a future version
    
        If compact_ints is True, then for any column that is of integer dtype,
        the parser will attempt to cast it as the smallest integer dtype possible,
        either signed or unsigned depending on the specification from the
        `use_unsigned` parameter.
    use_unsigned : boolean, default False
        DEPRECATED: this argument will be removed in a future version
    
        If integer columns are being compacted (i.e. `compact_ints=True`), specify
        whether the column should be compacted to the smallest signed or unsigned
        integer dtype.
    memory_map : boolean, default False
        If a filepath is provided for `filepath_or_buffer`, map the file object
        directly onto memory and access the data directly from there. Using this
        option can improve performance because there is no longer any I/O overhead.
    
    Returns
    -------
    result : DataFrame or TextParser


In [3]:
# Load the spreadsheet into a DataFrame (relative path; an absolute path also works)
# file_apple = '/home/esteban/AnacondaProjects/Simulacion2017/Modulo2/precios/AAPL.csv'
file_apple = 'precios/AAPL.csv'
df_apple = pd.read_csv(file_apple)
df_apple


Out[3]:
Date Open High Low Close Adj_Close Volume
0 2011-01-03 46.520000 47.180000 46.405716 47.081429 42.357094 111284600
1 2011-01-04 47.491428 47.500000 46.878571 47.327145 42.578156 77270200
2 2011-01-05 47.078571 47.762856 47.071430 47.714287 42.926441 63879900
3 2011-01-06 47.817142 47.892857 47.557144 47.675713 42.891743 75107200
4 2011-01-07 47.712856 48.049999 47.414288 48.017143 43.198914 77982800
5 2011-01-10 48.404285 49.032856 48.167141 48.921429 44.012455 112140000
6 2011-01-11 49.268570 49.279999 48.495716 48.805714 43.908348 111027000
7 2011-01-12 49.035713 49.204285 48.857143 49.202858 44.265644 75647600
8 2011-01-13 49.308571 49.520000 49.121429 49.382858 44.427586 74195100
9 2011-01-14 49.412857 49.782856 49.205715 49.782856 44.787437 77210000
10 2011-01-18 47.074287 49.251427 46.571430 48.664288 43.781116 470249500
11 2011-01-19 49.764286 49.799999 48.125713 48.405716 43.548496 283903200
12 2011-01-20 48.061428 48.328571 47.160000 47.525715 42.756798 191197300
13 2011-01-21 47.681427 47.840000 46.661430 46.674286 41.990799 188600300
14 2011-01-24 46.695713 48.207142 46.674286 48.207142 43.369850 143670800
15 2011-01-25 48.047142 48.777142 47.795715 48.771427 43.877522 136717000
16 2011-01-26 48.994286 49.371429 48.785713 49.121429 44.192390 126718900
17 2011-01-27 49.111427 49.241428 48.975716 49.029999 44.110142 71256500
18 2011-01-28 49.167141 49.200001 47.647144 48.014286 43.196335 148014300
19 2011-01-31 47.971428 48.577145 47.757141 48.474285 43.610184 94311700
20 2011-02-01 48.757141 49.378571 48.711430 49.290001 44.344044 106658300
21 2011-02-02 49.207142 49.321430 49.078571 49.188572 44.252800 64738800
22 2011-02-03 49.114285 49.177143 48.364285 49.062859 44.139694 98449400
23 2011-02-04 49.091427 49.528572 49.072857 49.500000 44.532970 80460100
24 2011-02-07 49.698570 50.464287 49.662857 50.268570 45.224419 121255400
25 2011-02-08 50.525715 50.788570 50.307144 50.742859 45.651115 95260200
26 2011-02-09 50.741428 51.285713 50.695713 51.165714 46.031540 120686300
27 2011-02-10 51.055714 51.428570 49.714287 50.648571 45.566296 232137500
28 2011-02-11 50.678570 51.114285 50.505714 50.978573 45.863174 91893200
29 2011-02-14 50.970001 51.354286 50.958572 51.311428 46.162636 77604100
... ... ... ... ... ... ... ...
1480 2016-11-17 109.809998 110.349998 108.830002 109.949997 108.598877 27632000
1481 2016-11-18 109.720001 110.540001 109.660004 110.059998 108.707527 28428900
1482 2016-11-21 110.120003 111.989998 110.010002 111.730003 110.357010 29264600
1483 2016-11-22 111.949997 112.419998 111.400002 111.800003 110.426147 25965500
1484 2016-11-23 111.360001 111.510002 110.330002 111.230003 109.863159 27387900
1485 2016-11-25 111.129997 111.870003 110.949997 111.790001 110.416275 11475900
1486 2016-11-28 111.430000 112.470001 111.389999 111.570000 110.198975 27194000
1487 2016-11-29 110.779999 112.029999 110.070000 111.459999 110.090332 28528800
1488 2016-11-30 111.599998 112.199997 110.269997 110.519997 109.161880 36162300
1489 2016-12-01 110.370003 110.940002 109.029999 109.489998 108.144531 37086900
1490 2016-12-02 109.169998 110.089996 108.849998 109.900002 108.549500 26528000
1491 2016-12-05 110.000000 110.029999 108.250000 109.110001 107.769203 34324500
1492 2016-12-06 109.500000 110.360001 109.190002 109.949997 108.598877 26195500
1493 2016-12-07 109.260002 111.190002 109.160004 111.029999 109.665604 29998700
1494 2016-12-08 110.860001 112.430000 110.599998 112.120003 110.742218 27068300
1495 2016-12-09 112.309998 114.699997 112.309998 113.949997 112.549728 34402600
1496 2016-12-12 113.290001 115.000000 112.489998 113.300003 111.907722 26374400
1497 2016-12-13 113.839996 115.919998 113.750000 115.190002 113.774490 43733800
1498 2016-12-14 115.040001 116.199997 114.980003 115.190002 113.774490 34031800
1499 2016-12-15 115.379997 116.730003 115.230003 115.820000 114.396751 46524500
1500 2016-12-16 116.470001 116.500000 115.650002 115.970001 114.544907 44055400
1501 2016-12-19 115.800003 117.379997 115.750000 116.639999 115.206673 27779400
1502 2016-12-20 116.739998 117.500000 116.680000 116.949997 115.512863 21425000
1503 2016-12-21 116.800003 117.400002 116.779999 117.059998 115.621513 23783200
1504 2016-12-22 116.349998 116.510002 115.639999 116.290001 114.860977 26085900
1505 2016-12-23 115.589996 116.519997 115.589996 116.519997 115.088142 14181200
1506 2016-12-27 116.519997 117.800003 116.489998 117.260002 115.819054 18296900
1507 2016-12-28 117.519997 118.019997 116.199997 116.760002 115.325203 20905900
1508 2016-12-29 116.449997 117.110001 116.400002 116.730003 115.295570 14963300
1509 2016-12-30 116.650002 117.199997 115.430000 115.820000 114.396751 30586300

1510 rows × 7 columns

There are several things to note here.

  • We would like to index by date.
  • For our application we only care about the closing prices of the stocks (the Adj_Close column).

In [5]:
# Load the spreadsheet again, indexing by date and keeping only Adj_Close
file_apple = 'precios/AAPL.csv'
df_apple = pd.read_csv(file_apple, index_col='Date', usecols=['Date', 'Adj_Close'])
df_apple


Out[5]:
Adj_Close
Date
2011-01-03 42.357094
2011-01-04 42.578156
2011-01-05 42.926441
2011-01-06 42.891743
2011-01-07 43.198914
2011-01-10 44.012455
2011-01-11 43.908348
2011-01-12 44.265644
2011-01-13 44.427586
2011-01-14 44.787437
2011-01-18 43.781116
2011-01-19 43.548496
2011-01-20 42.756798
2011-01-21 41.990799
2011-01-24 43.369850
2011-01-25 43.877522
2011-01-26 44.192390
2011-01-27 44.110142
2011-01-28 43.196335
2011-01-31 43.610184
2011-02-01 44.344044
2011-02-02 44.252800
2011-02-03 44.139694
2011-02-04 44.532970
2011-02-07 45.224419
2011-02-08 45.651115
2011-02-09 46.031540
2011-02-10 45.566296
2011-02-11 45.863174
2011-02-14 46.162636
... ...
2016-11-17 108.598877
2016-11-18 108.707527
2016-11-21 110.357010
2016-11-22 110.426147
2016-11-23 109.863159
2016-11-25 110.416275
2016-11-28 110.198975
2016-11-29 110.090332
2016-11-30 109.161880
2016-12-01 108.144531
2016-12-02 108.549500
2016-12-05 107.769203
2016-12-06 108.598877
2016-12-07 109.665604
2016-12-08 110.742218
2016-12-09 112.549728
2016-12-12 111.907722
2016-12-13 113.774490
2016-12-14 113.774490
2016-12-15 114.396751
2016-12-16 114.544907
2016-12-19 115.206673
2016-12-20 115.512863
2016-12-21 115.621513
2016-12-22 114.860977
2016-12-23 115.088142
2016-12-27 115.819054
2016-12-28 115.325203
2016-12-29 115.295570
2016-12-30 114.396751

1510 rows × 1 columns
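
Note that, read this way, the index contains plain strings. If a true DatetimeIndex is desired, the parse_dates parameter documented above can be enabled, for instance:

# Same read, but also parsing the index column as dates
df_apple = pd.read_csv(file_apple, index_col='Date',
                       usecols=['Date', 'Adj_Close'], parse_dates=True)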

Now, let's plot...


In [6]:
import matplotlib.pyplot as plt
%matplotlib inline

In [7]:
df_apple.plot(figsize=(8,6));


Activity. Import all the .csv files just as we did with the Apple one. In addition, create a single DataFrame whose column headers are the respective names (AAPL, AMZN, ...) and which contains the closing-price data.


In [8]:
# Read the remaining files, keeping only Date and Adj_Close
file_amazon = 'precios/AMZN.csv'
file_microsoft = 'precios/MSFT.csv'
file_nvidia = 'precios/NVDA.csv'

df_amazon = pd.read_csv(file_amazon, index_col='Date', usecols=['Date', 'Adj_Close'])
df_microsoft = pd.read_csv(file_microsoft, index_col='Date', usecols=['Date', 'Adj_Close'])
df_nvidia = pd.read_csv(file_nvidia, index_col='Date', usecols=['Date', 'Adj_Close'])

In [9]:
# Empty DataFrame indexed by the dates, with one column per ticker
closes = pd.DataFrame(index=df_amazon.index, columns=['AAPL', 'AMZN', 'MSFT', 'NVDA'])
closes.index.name = 'Date'
closes['AAPL'] = df_apple
closes['AMZN'] = df_amazon
closes['MSFT'] = df_microsoft
closes['NVDA'] = df_nvidia
closes


Out[9]:
AAPL AMZN MSFT NVDA
Date
2011-01-03 42.357094 184.220001 23.325636 14.677008
2011-01-04 42.578156 185.009995 23.417341 14.630622
2011-01-05 42.926441 187.419998 23.342314 15.753201
2011-01-06 42.891743 185.860001 24.025904 17.933416
2011-01-07 43.198914 185.490005 23.842505 18.434401
2011-01-10 44.012455 184.679993 23.525713 19.139486
2011-01-11 43.908348 184.339996 23.434010 18.842609
2011-01-12 44.265644 184.080002 23.800816 21.662971
2011-01-13 44.427586 185.529999 23.500708 21.700079
2011-01-14 44.787437 188.750000 23.592407 21.885632
2011-01-18 43.781116 191.250000 23.892517 21.375370
2011-01-19 43.548496 186.869995 23.734127 20.790882
2011-01-20 42.756798 181.960007 23.634090 20.809439
2011-01-21 41.990799 177.419998 23.358988 20.614613
2011-01-24 43.369850 176.850006 23.659094 22.943270
2011-01-25 43.877522 176.699997 23.717451 22.238174
2011-01-26 44.192390 175.389999 23.992559 22.766996
2011-01-27 44.110142 184.449997 24.067589 22.702045
2011-01-28 43.196335 171.139999 23.133898 22.043348
2011-01-31 43.610184 169.639999 23.117222 22.191788
2011-02-01 44.344044 172.110001 23.333969 22.702045
2011-02-02 44.252800 173.529999 23.292288 23.731855
2011-02-03 44.139694 173.710007 23.050526 23.286535
2011-02-04 44.532970 175.929993 23.150572 23.815348
2011-02-07 45.224419 176.429993 23.509035 22.822659
2011-02-08 45.651115 183.059998 23.575731 22.145403
2011-02-09 46.031540 185.300003 23.317297 21.607307
2011-02-10 45.566296 186.210007 22.925484 21.171255
2011-02-11 45.863174 189.250000 22.717070 21.774300
2011-02-14 46.162636 190.419998 22.700396 21.440313
... ... ... ... ...
2016-11-17 108.598877 756.400024 59.613453 91.957710
2016-11-18 108.707527 760.159973 59.328362 92.923172
2016-11-21 110.357010 780.000000 59.829727 92.544952
2016-11-22 110.426147 785.330017 60.085323 93.211815
2016-11-23 109.863159 780.119995 59.377514 93.670349
2016-11-25 110.416275 780.369995 59.505310 93.859749
2016-11-28 110.198975 766.770020 59.583958 93.809898
2016-11-29 110.090332 762.520020 60.055836 92.952637
2016-11-30 109.161880 750.570007 59.239883 91.905991
2016-12-01 108.144531 743.650024 58.197826 87.360535
2016-12-02 108.549500 740.340027 58.246979 88.167946
2016-12-05 107.769203 759.359985 59.200565 91.587006
2016-12-06 108.598877 764.719971 58.935135 93.092194
2016-12-07 109.665604 770.419983 60.331097 94.766838
2016-12-08 110.742218 767.330017 59.977184 93.181915
2016-12-09 112.549728 768.659973 60.920940 91.527199
2016-12-12 111.907722 760.119995 61.117550 89.304306
2016-12-13 113.774490 774.340027 61.913837 90.879280
2016-12-14 113.774490 768.820007 61.618916 96.142426
2016-12-15 114.396751 761.000000 61.520611 98.395226
2016-12-16 114.544907 757.770020 61.245350 100.089813
2016-12-19 115.206673 766.000000 62.543003 101.305916
2016-12-20 115.512863 771.219971 62.464359 104.834633
2016-12-21 115.621513 770.599976 62.464359 105.492531
2016-12-22 114.860977 766.340027 62.474186 106.768440
2016-12-23 115.088142 760.590027 62.169441 109.429932
2016-12-27 115.819054 771.400024 62.208755 116.945885
2016-12-28 115.325203 772.130005 61.923668 108.901627
2016-12-29 115.295570 765.150024 61.835197 111.074669
2016-12-30 114.396751 749.869995 61.088055 106.399620

1510 rows × 4 columns
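
An equivalent, somewhat more compact way of building the same table is to concatenate the four one-column DataFrames along the columns and rename them (a sketch using the frames already read above):

# Concatenate along columns (aligned on the Date index) and rename
closes = pd.concat([df_apple, df_amazon, df_microsoft, df_nvidia], axis=1)
closes.columns = ['AAPL', 'AMZN', 'MSFT', 'NVDA']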


In [10]:
closes.plot(figsize=(8,6));


2. Downloading the data directly

For this we will use the pandas_datareader package.

Note: Python distributions usually do not include the pandas_datareader package by default, so it must be installed separately. The following command installs it in Anaconda (run it from the Anaconda prompt): conda install -c conda-forge pandas-datareader

https://pandas-datareader.readthedocs.io/en/latest/


In [12]:
from pandas_datareader import data

We will use the data.DataReader function...


In [65]:
help(data.DataReader)


Help on function DataReader in module pandas_datareader.data:

DataReader(name, data_source=None, start=None, end=None, retry_count=3, pause=0.001, session=None, access_key=None)
    Imports data from a number of online sources.
    
    Currently supports Yahoo! Finance, Google Finance, St. Louis FED (FRED),
    Kenneth French's data library, and the SEC's EDGAR Index.
    
    Parameters
    ----------
    name : str or list of strs
        the name of the dataset. Some data sources (yahoo, google, fred) will
        accept a list of names.
    data_source: {str, None}
        the data source ("yahoo", "yahoo-actions", "yahoo-dividends",
        "google", "fred", "ff", or "edgar-index")
    start : {datetime, None}
        left boundary for range (defaults to 1/1/2010)
    end : {datetime, None}
        right boundary for range (defaults to today)
    retry_count : {int, 3}
        Number of times to retry query request.
    pause : {numeric, 0.001}
        Time, in seconds, to pause between consecutive queries of chunks. If
        single value given for symbol, represents the pause between retries.
    session : Session, default None
            requests.sessions.Session instance to be used
    
    Examples
    ----------
    
    # Data from Yahoo! Finance
    gs = DataReader("GS", "yahoo")
    
    # Corporate Actions (Dividend and Split Data)
    # with ex-dates from Yahoo! Finance
    gs = DataReader("GS", "yahoo-actions")
    
    # Data from Google Finance
    aapl = DataReader("AAPL", "google")
    
    # Data from FRED
    vix = DataReader("VIXCLS", "fred")
    
    # Data from Fama/French
    ff = DataReader("F-F_Research_Data_Factors", "famafrench")
    ff = DataReader("F-F_Research_Data_Factors_weekly", "famafrench")
    ff = DataReader("6_Portfolios_2x3", "famafrench")
    ff = DataReader("F-F_ST_Reversal_Factor", "famafrench")
    
    # Data from EDGAR index
    ed = DataReader("full", "edgar-index")
    ed2 = DataReader("daily", "edgar-index")


In [13]:
# Define the instruments we are going to download. As before: Apple, Amazon, Microsoft and NVIDIA.
tickers = ['AAPL', 'AMZN', 'MSFT', 'NVDA']

# Define which online source we will use (Yahoo Finance)
data_source = 'yahoo'

# We want the data from 2011-01-01 through 2016-12-31.
start_date = '2011-01-01'
end_date = '2016-12-31'

# Use the DataReader function. Yes, it's that easy...
panel_data = data.DataReader(tickers, data_source, start_date, end_date)

What does this variable contain?


In [15]:
panel_data


Out[15]:
<class 'pandas.core.panel.Panel'>
Dimensions: 6 (items) x 1510 (major_axis) x 4 (minor_axis)
Items axis: Adj Close to Volume
Major_axis axis: 2016-12-30 00:00:00 to 2011-01-03 00:00:00
Minor_axis axis: AAPL to NVDA

As before, we are only interested in the adjusted closing prices...


In [17]:
# Note that the dates are indicated as Major_axis
closes = panel_data.ix['Adj Close']
closes


Out[17]:
AAPL AMZN MSFT NVDA
Date
2016-12-30 114.396751 749.869995 61.088055 106.399620
2016-12-29 115.295570 765.150024 61.835197 111.074669
2016-12-28 115.325203 772.130005 61.923668 108.901627
2016-12-27 115.819054 771.400024 62.208755 116.945885
2016-12-23 115.088142 760.590027 62.169441 109.429932
2016-12-22 114.860977 766.340027 62.474186 106.768440
2016-12-21 115.621513 770.599976 62.464359 105.492531
2016-12-20 115.512863 771.219971 62.464359 104.834633
2016-12-19 115.206673 766.000000 62.543003 101.305916
2016-12-16 114.544907 757.770020 61.245350 100.089813
2016-12-15 114.396751 761.000000 61.520611 98.395226
2016-12-14 113.774490 768.820007 61.618916 96.142426
2016-12-13 113.774490 774.340027 61.913837 90.879280
2016-12-12 111.907722 760.119995 61.117550 89.304306
2016-12-09 112.549728 768.659973 60.920940 91.527199
2016-12-08 110.742218 767.330017 59.977184 93.181915
2016-12-07 109.665604 770.419983 60.331097 94.766838
2016-12-06 108.598877 764.719971 58.935135 93.092194
2016-12-05 107.769203 759.359985 59.200565 91.587006
2016-12-02 108.549500 740.340027 58.246979 88.167946
2016-12-01 108.144531 743.650024 58.197826 87.360535
2016-11-30 109.161880 750.570007 59.239883 91.905991
2016-11-29 110.090332 762.520020 60.055836 92.952637
2016-11-28 110.198975 766.770020 59.583958 93.809898
2016-11-25 110.416275 780.369995 59.505310 93.859749
2016-11-23 109.863159 780.119995 59.377514 93.670349
2016-11-22 110.426147 785.330017 60.085323 93.211815
2016-11-21 110.357010 780.000000 59.829727 92.544952
2016-11-18 108.707527 760.159973 59.328362 92.923172
2016-11-17 108.598877 756.400024 59.613453 91.957710
... ... ... ... ...
2011-02-14 46.162636 190.419998 22.700396 21.440313
2011-02-11 45.863174 189.250000 22.717070 21.774300
2011-02-10 45.566296 186.210007 22.925484 21.171255
2011-02-09 46.031540 185.300003 23.317297 21.607307
2011-02-08 45.651115 183.059998 23.575731 22.145403
2011-02-07 45.224419 176.429993 23.509035 22.822659
2011-02-04 44.532970 175.929993 23.150572 23.815348
2011-02-03 44.139694 173.710007 23.050526 23.286535
2011-02-02 44.252800 173.529999 23.292288 23.731855
2011-02-01 44.344044 172.110001 23.333969 22.702045
2011-01-31 43.610184 169.639999 23.117222 22.191788
2011-01-28 43.196335 171.139999 23.133898 22.043348
2011-01-27 44.110142 184.449997 24.067589 22.702045
2011-01-26 44.192390 175.389999 23.992559 22.766996
2011-01-25 43.877522 176.699997 23.717451 22.238174
2011-01-24 43.369850 176.850006 23.659094 22.943270
2011-01-21 41.990799 177.419998 23.358988 20.614613
2011-01-20 42.756798 181.960007 23.634090 20.809439
2011-01-19 43.548496 186.869995 23.734127 20.790882
2011-01-18 43.781116 191.250000 23.892517 21.375370
2011-01-14 44.787437 188.750000 23.592407 21.885632
2011-01-13 44.427586 185.529999 23.500708 21.700079
2011-01-12 44.265644 184.080002 23.800816 21.662971
2011-01-11 43.908348 184.339996 23.434010 18.842609
2011-01-10 44.012455 184.679993 23.525713 19.139486
2011-01-07 43.198914 185.490005 23.842505 18.434401
2011-01-06 42.891743 185.860001 24.025904 17.933416
2011-01-05 42.926441 187.419998 23.342314 15.753201
2011-01-04 42.578156 185.009995 23.417341 14.630622
2011-01-03 42.357094 184.220001 23.325636 14.677008

1510 rows × 4 columns
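
A caveat for newer versions: the Panel class and the .ix indexer were later removed from pandas, and a recent pandas_datareader instead returns a DataFrame whose columns form a MultiIndex of (attribute, ticker) pairs. Under that assumption, the equivalent selection would be:

# Assumes a DataFrame with MultiIndex columns (attribute, ticker)
closes = panel_data['Adj Close']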

Let's reorder the dates...


In [18]:
# Generate all the weekdays between the given dates
all_weekdays = pd.date_range(start=start_date, end=end_date, freq='B')

# Reindex using this (chronologically ordered) index
closes = closes.reindex(all_weekdays)
closes


Out[18]:
AAPL AMZN MSFT NVDA
2011-01-03 42.357094 184.220001 23.325636 14.677008
2011-01-04 42.578156 185.009995 23.417341 14.630622
2011-01-05 42.926441 187.419998 23.342314 15.753201
2011-01-06 42.891743 185.860001 24.025904 17.933416
2011-01-07 43.198914 185.490005 23.842505 18.434401
2011-01-10 44.012455 184.679993 23.525713 19.139486
2011-01-11 43.908348 184.339996 23.434010 18.842609
2011-01-12 44.265644 184.080002 23.800816 21.662971
2011-01-13 44.427586 185.529999 23.500708 21.700079
2011-01-14 44.787437 188.750000 23.592407 21.885632
2011-01-17 NaN NaN NaN NaN
2011-01-18 43.781116 191.250000 23.892517 21.375370
2011-01-19 43.548496 186.869995 23.734127 20.790882
2011-01-20 42.756798 181.960007 23.634090 20.809439
2011-01-21 41.990799 177.419998 23.358988 20.614613
2011-01-24 43.369850 176.850006 23.659094 22.943270
2011-01-25 43.877522 176.699997 23.717451 22.238174
2011-01-26 44.192390 175.389999 23.992559 22.766996
2011-01-27 44.110142 184.449997 24.067589 22.702045
2011-01-28 43.196335 171.139999 23.133898 22.043348
2011-01-31 43.610184 169.639999 23.117222 22.191788
2011-02-01 44.344044 172.110001 23.333969 22.702045
2011-02-02 44.252800 173.529999 23.292288 23.731855
2011-02-03 44.139694 173.710007 23.050526 23.286535
2011-02-04 44.532970 175.929993 23.150572 23.815348
2011-02-07 45.224419 176.429993 23.509035 22.822659
2011-02-08 45.651115 183.059998 23.575731 22.145403
2011-02-09 46.031540 185.300003 23.317297 21.607307
2011-02-10 45.566296 186.210007 22.925484 21.171255
2011-02-11 45.863174 189.250000 22.717070 21.774300
... ... ... ... ...
2016-11-21 110.357010 780.000000 59.829727 92.544952
2016-11-22 110.426147 785.330017 60.085323 93.211815
2016-11-23 109.863159 780.119995 59.377514 93.670349
2016-11-24 NaN NaN NaN NaN
2016-11-25 110.416275 780.369995 59.505310 93.859749
2016-11-28 110.198975 766.770020 59.583958 93.809898
2016-11-29 110.090332 762.520020 60.055836 92.952637
2016-11-30 109.161880 750.570007 59.239883 91.905991
2016-12-01 108.144531 743.650024 58.197826 87.360535
2016-12-02 108.549500 740.340027 58.246979 88.167946
2016-12-05 107.769203 759.359985 59.200565 91.587006
2016-12-06 108.598877 764.719971 58.935135 93.092194
2016-12-07 109.665604 770.419983 60.331097 94.766838
2016-12-08 110.742218 767.330017 59.977184 93.181915
2016-12-09 112.549728 768.659973 60.920940 91.527199
2016-12-12 111.907722 760.119995 61.117550 89.304306
2016-12-13 113.774490 774.340027 61.913837 90.879280
2016-12-14 113.774490 768.820007 61.618916 96.142426
2016-12-15 114.396751 761.000000 61.520611 98.395226
2016-12-16 114.544907 757.770020 61.245350 100.089813
2016-12-19 115.206673 766.000000 62.543003 101.305916
2016-12-20 115.512863 771.219971 62.464359 104.834633
2016-12-21 115.621513 770.599976 62.464359 105.492531
2016-12-22 114.860977 766.340027 62.474186 106.768440
2016-12-23 115.088142 760.590027 62.169441 109.429932
2016-12-26 NaN NaN NaN NaN
2016-12-27 115.819054 771.400024 62.208755 116.945885
2016-12-28 115.325203 772.130005 61.923668 108.901627
2016-12-29 115.295570 765.150024 61.835197 111.074669
2016-12-30 114.396751 749.869995 61.088055 106.399620

1565 rows × 4 columns

The dates for which there is no data are marked with NaN.
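
Before filling or dropping them, we can count how many weekdays have no data (exchange holidays), for instance:

# Number of missing weekdays (holidays), per column
closes.isnull().sum()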


In [19]:
# There will be 'gaps' in the plot
closes.plot(figsize=(8,6));



In [20]:
# Fill the gaps with the previous day's closing price
closes = closes.fillna(method='ffill')
closes


Out[20]:
AAPL AMZN MSFT NVDA
2011-01-03 42.357094 184.220001 23.325636 14.677008
2011-01-04 42.578156 185.009995 23.417341 14.630622
2011-01-05 42.926441 187.419998 23.342314 15.753201
2011-01-06 42.891743 185.860001 24.025904 17.933416
2011-01-07 43.198914 185.490005 23.842505 18.434401
2011-01-10 44.012455 184.679993 23.525713 19.139486
2011-01-11 43.908348 184.339996 23.434010 18.842609
2011-01-12 44.265644 184.080002 23.800816 21.662971
2011-01-13 44.427586 185.529999 23.500708 21.700079
2011-01-14 44.787437 188.750000 23.592407 21.885632
2011-01-17 44.787437 188.750000 23.592407 21.885632
2011-01-18 43.781116 191.250000 23.892517 21.375370
2011-01-19 43.548496 186.869995 23.734127 20.790882
2011-01-20 42.756798 181.960007 23.634090 20.809439
2011-01-21 41.990799 177.419998 23.358988 20.614613
2011-01-24 43.369850 176.850006 23.659094 22.943270
2011-01-25 43.877522 176.699997 23.717451 22.238174
2011-01-26 44.192390 175.389999 23.992559 22.766996
2011-01-27 44.110142 184.449997 24.067589 22.702045
2011-01-28 43.196335 171.139999 23.133898 22.043348
2011-01-31 43.610184 169.639999 23.117222 22.191788
2011-02-01 44.344044 172.110001 23.333969 22.702045
2011-02-02 44.252800 173.529999 23.292288 23.731855
2011-02-03 44.139694 173.710007 23.050526 23.286535
2011-02-04 44.532970 175.929993 23.150572 23.815348
2011-02-07 45.224419 176.429993 23.509035 22.822659
2011-02-08 45.651115 183.059998 23.575731 22.145403
2011-02-09 46.031540 185.300003 23.317297 21.607307
2011-02-10 45.566296 186.210007 22.925484 21.171255
2011-02-11 45.863174 189.250000 22.717070 21.774300
... ... ... ... ...
2016-11-21 110.357010 780.000000 59.829727 92.544952
2016-11-22 110.426147 785.330017 60.085323 93.211815
2016-11-23 109.863159 780.119995 59.377514 93.670349
2016-11-24 109.863159 780.119995 59.377514 93.670349
2016-11-25 110.416275 780.369995 59.505310 93.859749
2016-11-28 110.198975 766.770020 59.583958 93.809898
2016-11-29 110.090332 762.520020 60.055836 92.952637
2016-11-30 109.161880 750.570007 59.239883 91.905991
2016-12-01 108.144531 743.650024 58.197826 87.360535
2016-12-02 108.549500 740.340027 58.246979 88.167946
2016-12-05 107.769203 759.359985 59.200565 91.587006
2016-12-06 108.598877 764.719971 58.935135 93.092194
2016-12-07 109.665604 770.419983 60.331097 94.766838
2016-12-08 110.742218 767.330017 59.977184 93.181915
2016-12-09 112.549728 768.659973 60.920940 91.527199
2016-12-12 111.907722 760.119995 61.117550 89.304306
2016-12-13 113.774490 774.340027 61.913837 90.879280
2016-12-14 113.774490 768.820007 61.618916 96.142426
2016-12-15 114.396751 761.000000 61.520611 98.395226
2016-12-16 114.544907 757.770020 61.245350 100.089813
2016-12-19 115.206673 766.000000 62.543003 101.305916
2016-12-20 115.512863 771.219971 62.464359 104.834633
2016-12-21 115.621513 770.599976 62.464359 105.492531
2016-12-22 114.860977 766.340027 62.474186 106.768440
2016-12-23 115.088142 760.590027 62.169441 109.429932
2016-12-26 115.088142 760.590027 62.169441 109.429932
2016-12-27 115.819054 771.400024 62.208755 116.945885
2016-12-28 115.325203 772.130005 61.923668 108.901627
2016-12-29 115.295570 765.150024 61.835197 111.074669
2016-12-30 114.396751 749.869995 61.088055 106.399620

1565 rows × 4 columns
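
Forward-filling propagates the last known close over the holidays (in newer pandas versions, closes.ffill() replaces fillna(method='ffill')). Other treatments are possible, for example (a sketch, not used here):

# Alternative treatments for the missing weekdays
closes_dropped = closes.dropna()      # simply discard the dates with no data
closes_interp = closes.interpolate()  # linear interpolation between known closes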


In [21]:
# A clean plot...
closes.plot(figsize=(8,6));


Once we have the data, we can operate on it. For example, a summary of descriptive statistics can be obtained with


In [86]:
closes.describe()


Out[86]:
AAPL AMZN MSFT NVDA
count 1565.000000 1565.000000 1565.000000 1565.000000
mean 81.224757 370.713157 35.618638 22.692051
std 24.595598 184.086742 11.534218 16.955522
min 40.525654 160.970001 20.013088 10.557797
25% 59.079132 228.289993 25.197577 13.480210
50% 78.010452 310.029999 33.699532 17.358208
75% 104.162621 438.559998 44.244453 21.926655
max 126.941574 844.359985 62.543003 116.945885

Recapping: today we learned how to obtain data with pandas, both from comma-separated text files and directly from remote sources.

  • For our application it is more useful, and easier, to obtain the data directly from Yahoo Finance.
  • However, you will often have to acquire data from Excel spreadsheets, so you need this skill as well.

Next class we will see how to simulate scenarios for the behavior of future prices (non-deterministic: we do not know how they will evolve, and there are many possibilities, hence Monte Carlo) from daily returns data.

Then, with those predictions, we will estimate the probability that the stock price ends up above (below) a certain threshold, and use that to decide whether to sell (buy) the stock.
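
As a preview of next class, the daily returns can already be computed from the closes table, for instance:

# Daily simple returns from the adjusted closes
daily_returns = closes.pct_change().dropna()
daily_returns.head()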

Created with Jupyter by Esteban Jiménez Rodríguez.