Now that you have Ibis installed and connected to Impala, let's get our feet wet.
In [ ]:
import ibis
import os
# os.environ.get returns a string when the variable is set, so normalize to int
hdfs_port = int(os.environ.get('IBIS_WEBHDFS_PORT', 50070))
hdfs = ibis.hdfs_connect(host='quickstart.cloudera', port=hdfs_port)
con = ibis.impala.connect(host='quickstart.cloudera', database='ibis_testing',
hdfs_client=hdfs)
In [ ]:
table = con.table('functional_alltypes')
# equivalently, with the database made explicit
table = con.table('functional_alltypes', database='ibis_testing')
In [ ]:
col = table.double_col
# alternatively, by name
col2 = table['bigint_col']
Table columns are equipped with a variety of math operations and other methods to assist in writing your analytics. For example:
In [ ]:
expr = col.log2() - 1
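The expression above is deferred: nothing is computed until it is executed. Conceptually, though, `col.log2() - 1` applies elementwise to the column's values, which can be sketched in plain Python (the sample values here are hypothetical, not from the test table):

```python
import math

# Hypothetical sample of values from a numeric column
values = [1.0, 2.0, 8.0]

# Elementwise equivalent of the deferred expression col.log2() - 1
result = [math.log2(v) - 1 for v in values]
print(result)  # [-1.0, 0.0, 2.0]
```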
Some operations transform arrays to arrays, while others, like sum and mean, aggregate an array down to a scalar:
In [ ]:
expr2 = expr.sum()
The methods available on a column depend on its type. For example, you won't see string methods like substr or upper on numeric columns:
In [ ]:
substr_expr = table.string_col.upper().substr(0, 2)
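Assuming substr takes a zero-based start position and a length (as in the Ibis string API), the expression above corresponds to uppercasing and then slicing in plain Python. A sketch with a hypothetical string value:

```python
# Hypothetical value drawn from string_col
s = 'abcdef'

# Equivalent of .upper().substr(0, 2): uppercase, then take 2
# characters starting at zero-based position 0
result = s.upper()[0:2]
print(result)  # 'AB'
```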
Notice that printing an expression to the console does not execute anything; it simply shows a representation of the expression tree you've built.
Note: don't worry too much about the details of the printed expression tree; its format is likely to change over time.
In [ ]:
expr2
We can also execute an expression by calling execute on the Impala connection object:
In [ ]:
con.execute(col.sum())
There's a shortcut to make this a little more convenient in interactive use. Many Ibis expressions can be executed against the database immediately, and it may improve your productivity to have them executed for you whenever you print an expression in the console or IPython notebook.
To do this, Ibis has an interactive mode, which can be turned on and off like so:
In [ ]:
ibis.options.interactive = True
Now, any expressions you write will be executed right away
In [ ]:
table.limit(10)
You can select a row range with slicing syntax:
In [ ]:
table[:5]
Scalar aggregations also execute immediately and return a value:
In [ ]:
table.double_col.sum()
Don't worry about the syntax here; the point is that expressions resulting in tabular output come back as a pandas DataFrame by default:
In [ ]:
metrics = [table.double_col.sum().name('total')]
expr = table.group_by('string_col').aggregate(metrics)
expr
To see the SQL queries Ibis issues on your behalf, enable verbose mode:
In [ ]:
ibis.options.verbose = True
metrics = [table.double_col.sum().name('total')]
expr = table.group_by('string_col').aggregate(metrics)
expr
In [ ]:
queries = []
def logger(x):
    queries.append(x)
ibis.options.verbose_log = logger
expr.execute()
expr.execute()
queries
Finally, restore the defaults:
In [ ]:
ibis.options.verbose_log = print
ibis.options.verbose = False
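As the cells above suggest, verbose_log is just a callable that receives each query string as it is issued. The hook pattern itself can be sketched in plain Python (the Options class and execute function here are illustrative, not part of Ibis):

```python
# Minimal sketch of the verbose_log hook pattern: a configurable
# callback invoked with each query string before it runs
class Options:
    verbose = True
    verbose_log = print  # default: echo queries to the console

options = Options()

def execute(sql):
    if options.verbose:
        options.verbose_log(sql)
    # ... run the query against the backend ...

# Swap in a collecting logger, as done with ibis.options.verbose_log
queries = []
options.verbose_log = queries.append

execute('SELECT sum(double_col) FROM functional_alltypes')
print(queries)
```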
One of the essential table API functions is aggregate. Aggregation involves two pieces: one or more named reductions (aggregate expressions), and zero or more grouping keys (column names or expressions).
This ends up working very similarly to pandas' groupby mechanism.
Let's start with a simple reduction:
In [ ]:
metric = table.double_col.sum()
As you saw above, you can execute this immediately and obtain a value:
In [ ]:
metric
The reduced column can be more complex; for example, you could count the number of null values in a column like so:
In [ ]:
table.double_col.isnull().sum()
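This works because isnull() produces a boolean column, and summing booleans counts the True values. The same idiom in plain Python, with hypothetical values and None standing in for SQL NULL:

```python
# Hypothetical column values; None plays the role of SQL NULL
values = [1.5, None, 3.0, None, None]

# Equivalent of table.double_col.isnull().sum(): booleans sum as 0/1
null_count = sum(v is None for v in values)
print(null_count)  # 3
```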
To aggregate a table, potentially with grouping keys, we have to give the reduction a name and call aggregate:
In [ ]:
metric = metric.name('double_total')
expr = table.aggregate([metric])
result = con.execute(expr)
result
The result here is a pandas DataFrame with one row and a single column. We can add another metric and a grouping key:
In [ ]:
metric2 = (table.bigint_col + 1).log10().max().name('some_metric')
expr = table.aggregate([metric, metric2], by=['string_col'])
expr
We provide a convenience group_by, a la pandas, to make this a little more composable:
In [ ]:
expr = (table.group_by('string_col')
.aggregate([metric, metric2]))
expr
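Conceptually, a grouped aggregation buckets rows by key and then reduces each bucket; Ibis compiles this to a SQL GROUP BY. The idea can be sketched in plain Python (the rows here are hypothetical (string_col, double_col) pairs):

```python
from collections import defaultdict

# Hypothetical rows: (string_col, double_col) pairs
rows = [('a', 1.0), ('b', 2.5), ('a', 3.0), ('b', 0.5)]

# Sketch of group_by('string_col').aggregate([...sum...]):
# bucket values by key, then reduce each bucket
groups = defaultdict(list)
for key, value in rows:
    groups[key].append(value)

totals = {key: sum(vals) for key, vals in groups.items()}
print(totals)  # {'a': 4.0, 'b': 3.0}
```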
You can also group by named column expressions:
In [ ]:
keys = [table.timestamp_col.hour().name('hour'), 'string_col']
expr = table.group_by(keys).aggregate([metric])
# Top 10 by double_total, more on this later
expr.sort_by([('double_total', False)]).limit(10)
In most cases, an aggregation by itself can be evaluated:
In [ ]:
table.double_col.mean()
This can also be done in simple cases along with group_by:
In [ ]:
table.group_by('string_col').double_col.mean()
Many reduction functions have a default expression name, unlike many other Ibis expressions (for now!), to make some common analyses easier:
In [ ]:
d = table.double_col
(table.group_by('string_col')
.aggregate([d.sum(), d.mean(), d.min(), d.max()]))
Of course, for this particular case you can always use summary:
In [ ]:
table.group_by('string_col').double_col.summary()
You can also compute several metrics over multiple columns in a single aggregate call:
In [ ]:
table.aggregate([table.bigint_col.max().name('bigint_max'),
table.bigint_col.min().name('bigint_min'),
table.int_col.max().name('int_max'),
table.int_col.min().name('int_min')])
To count the rows in a table:
In [ ]:
table.count()
To count only the rows matching a filter:
In [ ]:
table[table.bigint_col > 50].count()
Filters can be composed using & (and), | (or), and other logical array operators:
In [ ]:
cond1 = table.bigint_col > 50
cond2 = table.int_col.between(2, 7)
table[cond1 | cond2].count()
There's a filter function that allows you to pass a list of conditions (all of which must hold):
In [ ]:
table.filter([cond1, cond2]).count()
Note this is the same as &-ing the boolean conditions yourself:
In [ ]:
table[cond1 & cond2].count()
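The equivalence between filter([cond1, cond2]) and cond1 & cond2 can be sketched in plain Python, treating each condition as a predicate over a row (the rows here are hypothetical):

```python
# Hypothetical rows mirroring bigint_col and int_col
rows = [{'bigint_col': 60, 'int_col': 3},
        {'bigint_col': 40, 'int_col': 5},
        {'bigint_col': 70, 'int_col': 9}]

cond1 = lambda r: r['bigint_col'] > 50
cond2 = lambda r: 2 <= r['int_col'] <= 7

# filter([cond1, cond2]) requires every condition to hold,
# exactly like &-ing the predicates together
count_filter = sum(1 for r in rows if all(c(r) for c in (cond1, cond2)))
count_and = sum(1 for r in rows if cond1(r) and cond2(r))
print(count_filter, count_and)  # 1 1
```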
As seen earlier, limit truncates the number of rows returned:
In [ ]:
table.limit(2)