IO, CREATE/INSERT, and External Data

Setup


In [ ]:
import ibis
import os
hdfs_port = int(os.environ.get('IBIS_TEST_WEBHDFS_PORT', 50070))
user = os.environ.get('IBIS_TEST_WEBHDFS_USER', 'ubuntu')
hdfs = ibis.hdfs_connect(host='quickstart.cloudera', user=user, port=hdfs_port)
con = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',
                          hdfs_client=hdfs)
ibis.options.interactive = True

Creating new Impala tables from Ibis expressions

Suppose you have an Ibis expression that produces a table:


In [ ]:
table = con.table('functional_alltypes')

t2 = table[table, (table.bigint_col - table.int_col).name('foo')]

expr = (t2
        [t2.bigint_col > 30]
        .group_by('string_col')
        .aggregate(min_foo=lambda t: t.foo.min(),
                   max_foo=lambda t: t.foo.max(),
                   sum_foo=lambda t: t.foo.sum()))
expr

To create a table in the database from the results of this expression, use the connection's create_table method:


In [ ]:
con.create_table('testing_table', expr, database='ibis_testing')

By default, this creates a table stored as Parquet in HDFS. Support for views, external tables, configurable file formats, and so forth will come in the future. Feedback on what kind of interface would be useful for these is welcome.


In [ ]:
con.table('testing_table')
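
create_table can also create an empty table when given an explicit schema instead of an expression. A minimal sketch, assuming your Ibis version's create_table accepts a schema keyword:


In [ ]:
# a sketch: create an empty Parquet-backed table from a schema
# (assumes create_table accepts a schema keyword in your version)
empty_schema = ibis.schema([('key', 'string'), ('value', 'double')])
con.create_table('empty_example', schema=empty_schema,
                 database='ibis_testing')
con.drop_table('empty_example', database='ibis_testing')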

Tables can be similarly dropped with drop_table:


In [ ]:
con.drop_table('testing_table', database='ibis_testing')
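
If the table might not exist, drop_table can be told to ignore that instead of raising an error. A sketch, assuming drop_table supports the same force keyword used elsewhere in this notebook:


In [ ]:
# force=True makes the drop a no-op if the table is absent
con.drop_table('testing_table', database='ibis_testing', force=True)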

Inserting data into existing Impala tables

The client's insert method can append new data to an existing table or overwrite its existing contents.


In [ ]:
con.create_table('testing_table', expr)
con.table('testing_table')

In [ ]:
con.insert('testing_table', expr)
con.table('testing_table')
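
By default, insert appends. To replace the table's contents instead, pass the overwrite flag (assuming your version exposes it as a keyword named overwrite):


In [ ]:
# overwrite the existing rows rather than appending to them
con.insert('testing_table', expr, overwrite=True)
con.table('testing_table')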

In [ ]:
con.drop_table('testing_table')

Uploading / downloading data from HDFS

If you've set up an HDFS connection, you can use the Ibis HDFS interface to look through your data and read and write files to and from HDFS:


In [ ]:
hdfs = con.hdfs
hdfs.ls('/__ibis/ibis-testing-data')

In [ ]:
hdfs.ls('/__ibis/ibis-testing-data/parquet')

Suppose we wanted to download /__ibis/ibis-testing-data/parquet/functional_alltypes, which is a directory. We need only do:


In [ ]:
!rm -rf parquet_dir/
hdfs.get('/__ibis/ibis-testing-data/parquet/functional_alltypes', 'parquet_dir')

Now we have that directory locally:


In [ ]:
!ls parquet_dir/

Files and directories can be written to HDFS just as easily using put:


In [ ]:
path = '/__ibis/dir-write-example'
if hdfs.exists(path):
    hdfs.rmdir(path)
hdfs.put(path, 'parquet_dir', verbose=True)

In [ ]:
hdfs.ls('/__ibis/dir-write-example')
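
put also handles a single local file, not just a directory. A sketch, assuming a local file named example.csv exists in the working directory:


In [ ]:
# hypothetical single-file upload; 'example.csv' is a placeholder
# local file that you would need to create first
hdfs.put('/__ibis/dir-write-example/example.csv', 'example.csv')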

Delete files with rm or directories with rmdir:


In [ ]:
hdfs.rmdir('/__ibis/dir-write-example')

In [ ]:
!rm -rf parquet_dir/

Queries on Parquet, Avro, and Delimited files in HDFS

Ibis can easily create temporary or persistent Impala tables that reference data in the following formats:

  • Parquet (parquet_file)
  • Avro (avro_file)
  • Delimited text formats (CSV, TSV, etc.) (delimited_file)

Parquet is the easiest because the schema can be read from the data files:


In [ ]:
path = '/__ibis/ibis-testing-data/parquet/tpch_lineitem'

lineitem = con.parquet_file(path)
lineitem.limit(2)

In [ ]:
lineitem.l_extendedprice.sum()

If you want to query a Parquet file and also create a table in Impala that remains after your session, you can pass more information to parquet_file:


In [ ]:
table = con.parquet_file(path, name='my_parquet_table', 
                         database='ibis_testing',
                         persist=True)
table.l_extendedprice.sum()

In [ ]:
con.table('my_parquet_table').l_extendedprice.sum()

In [ ]:
con.drop_table('my_parquet_table')

To query delimited files, you need to write down an Ibis schema. At some point we'd like to build helper tools that infer the schema for you, all in good time.

There are some CSV files in the test folder, so let's use those:


In [ ]:
hdfs.get('/__ibis/ibis-testing-data/csv', 'csv-files')

In [ ]:
!cat csv-files/0.csv

In [ ]:
!rm -rf csv-files/

The schema here is pretty simple (see ibis.schema for more):


In [ ]:
schema = ibis.schema([('foo', 'string'),
                      ('bar', 'double'),
                      ('baz', 'int32')])

table = con.delimited_file('/__ibis/ibis-testing-data/csv',
                           schema)
table.limit(10)

In [ ]:
table.bar.summary()
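
As an aside, the same schema can be built from separate lists of names and types, assuming ibis.schema accepts names and types keywords in your version:


In [ ]:
# equivalent schema built from parallel name and type lists
schema = ibis.schema(names=['foo', 'bar', 'baz'],
                     types=['string', 'double', 'int32'])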

For functions like parquet_file and delimited_file, you must pass an HDFS directory (we'll add support for S3 and other filesystems later), and the directory must contain files that all have the same schema.
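
delimited_file is not limited to commas; tab-separated files, for example, can be declared by passing the delimiter. A sketch with a placeholder HDFS path, assuming the delimiter keyword:


In [ ]:
# hypothetical: the same schema applied to tab-separated files;
# '/path/to/tsv-data' is a placeholder directory
tsv_table = con.delimited_file('/path/to/tsv-data', schema,
                               delimiter='\t')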

If you have Avro data, you can query it too, as long as you have the full Avro schema:


In [ ]:
avro_schema = {
    "fields": [
        {"type": ["int", "null"], "name": "R_REGIONKEY"},
        {"type": ["string", "null"], "name": "R_NAME"},
        {"type": ["string", "null"], "name": "R_COMMENT"}],
    "type": "record",
    "name": "a"
}

path = '/__ibis/ibis-testing-data/avro/tpch.region'

# create the directory if it does not already exist; an empty
# directory yields an empty (but queryable) table
if not hdfs.exists(path):
    hdfs.mkdir(path)
table = con.avro_file(path, avro_schema)
table
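
As with parquet_file, an Avro-backed table can be persisted beyond the session; a sketch assuming avro_file takes the same name, database, and persist keywords:


In [ ]:
# hypothetical persistent Avro table, mirroring the Parquet example
table = con.avro_file(path, avro_schema, name='my_avro_table',
                      database='ibis_testing', persist=True)
con.drop_table('my_avro_table', database='ibis_testing')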

Other helper functions for interacting with the database

We're adding a growing list of useful utility functions for interacting with an Impala cluster to the client object. The idea is that you should be able to do any database-admin-type work with Ibis and not have to switch over to the Impala SQL shell. If there are ways we can make this more pleasant, please let us know.

Here are some of the features, with examples below:

  • Listing and searching for available databases and tables
  • Creating and dropping databases
  • Getting table schemas

In [ ]:
con.list_databases(like='ibis*')

In [ ]:
con.list_tables(database='ibis_testing', like='tpch*')

In [ ]:
schema = con.get_schema('functional_alltypes')
schema
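
You can also test for existence before creating or dropping objects. A sketch, assuming the client provides exists_database and exists_table helpers:


In [ ]:
# hypothetical existence checks (assumed helper methods)
con.exists_database('ibis_testing')
con.exists_table('functional_alltypes', database='ibis_testing')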

Databases can be created, too, and you can set the HDFS storage path you want for the data files:


In [ ]:
db = 'ibis_testing2'
con.create_database(db, path='/__ibis/my-test-database', force=True)

# you may or may not have to give the impala user write and execute permissions to '/__ibis/my-test-database'
hdfs.chmod('/__ibis/my-test-database', '777')

In [ ]:
con.create_table('example_table', con.table('functional_alltypes'),
                 database=db, force=True)

Hopefully, there will be data files in the indicated spot in HDFS:


In [ ]:
hdfs.ls('/__ibis/my-test-database')

To drop a database, including all tables in it, you can use drop_database with force=True:


In [ ]:
con.drop_database(db, force=True)

Dealing with Partitioned tables in Impala

Placeholder: This is not yet implemented. If you have use cases, please let us know.

Faster queries on small data in Impala

Impala internally uses LLVM to compile parts of queries (known as "codegen") to make them faster on large data sets, which adds a fixed amount of overhead to many kinds of queries, even on small datasets. You can disable LLVM code generation when using Ibis, which may significantly speed up queries on smaller datasets:


In [ ]:
from numpy.random import rand

In [ ]:
con.disable_codegen()

In [ ]:
t = con.table('ibis_testing.functional_alltypes')

%timeit (t.double_col + rand()).sum().execute()

In [ ]:
# Turn codegen back on
con.disable_codegen(False)

In [ ]:
%timeit (t.double_col + rand()).sum().execute()

It's important to remember that codegen is a fixed overhead that will significantly speed up queries on big data, so re-enable it before working with larger data sets.
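
For reference, disable_codegen works by setting Impala's DISABLE_CODEGEN query option for the session. A sketch doing the same thing by hand, assuming the client exposes raw_sql:


In [ ]:
# equivalent session-level toggles via Impala's query option
con.raw_sql('SET DISABLE_CODEGEN=1')
con.raw_sql('SET DISABLE_CODEGEN=0')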