SCRAP


In [2]:
from pylab import *

# create figure & set domain
figure(num=None, figsize=(8, 4), dpi=80, facecolor='w', edgecolor='k')
domain = 100
drange = (1, domain)
ticks = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]

# placeholder predictive distribution so the cell runs on its own;
# in the full example this would come from the model
predictive_dist = ones(domain) / domain

# overall data: bar plot of the predictive distribution
ax0 = subplot(311)
ax0.set_xticks(ticks)
ylabel('Probability of sampling this value')
rects = bar(range(1, domain + 1), predictive_dist)

# data 1: histogram of the first observation set
x1 = [2, 4, 8, 16, 32, 64]
ax1 = subplot(312, sharex=ax0, sharey=ax0)
ax1.yaxis.set_visible(False)
hist(x1, bins=domain, range=drange)

# data 2: histogram of the second observation set
x2 = [1, 3, 7, 15, 31, 63]
ax2 = subplot(313, sharex=ax1, sharey=ax1)
ax2.yaxis.set_visible(False)
hist(x2, bins=domain, range=drange)

xlabel('Domain (from 1 to ' + str(domain) + ')')
show()


In [1]:
from pylab import *  # this cell ran before the imports above were loaded
figure(num=None, figsize=(8, 1.5), dpi=80, facecolor='w', edgecolor='k')
hist([1, 5, 25], bins=100, range=(1, 100))  # domain = 100, matching above
show()



Concepts and Rules (with Candy)

What do we mean by "learning of concepts"? (We'll come back to "compositional learning" later.) Consider the following example:

A child is sorting through a bag of unlabeled candies, trying to decipher their flavors. The child takes candies one at a time and tastes each to determine its flavor. We might imagine the child forming a set of ad-hoc rules such as "red candies are strawberry-flavored", "green candies are apple-flavored", and "black candies are black licorice-flavored (yuck!)":

$$ \begin{aligned} Red &\longrightarrow strawberry \\ Green &\longrightarrow apple \\ Black &\longrightarrow black\ licorice \end{aligned} $$

Generally speaking, this child is monitoring a stream of information, trying to decipher useful rules. By tasting candies, the child acquires evidence about the world.
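
To make "acquiring evidence" concrete, here is a minimal sketch of the kind of tally a learner might keep; the candy stream and the tallying scheme are illustrative assumptions, not part of the example above:

    from collections import defaultdict

    # Illustrative stream of (color, flavor) observations from tasting.
    stream = [('red', 'strawberry'), ('green', 'apple'),
              ('red', 'strawberry'), ('black', 'black licorice'),
              ('green', 'apple')]

    # Tally how often each flavor follows each color; the empirical
    # frequencies play the role of the child's ad-hoc rules.
    counts = defaultdict(lambda: defaultdict(int))
    for color, flavor in stream:
        counts[color][flavor] += 1

    for color, flavors in counts.items():
        total = sum(flavors.values())
        for flavor, k in flavors.items():
            print('%s -> %s (frequency %.2f)' % (color, flavor, float(k) / total))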

We can easily imagine faulty rules that do not fit the evidence, for example "green candies are black licorice-flavored". (In a different world, where black licorice candies could be either green or black, this would be a valid rule.) We can also imagine rules that fit the evidence but are less useful, such as "red candies might be cherry, strawberry, or apple-flavored":

$$ \begin{aligned} Red &\longrightarrow strawberry \\ Red &\longrightarrow cherry \\ Red &\longrightarrow apple \end{aligned} $$

This rule is less useful because if I only like cherry candies and I grab a red candy, I can only say it is cherry-flavored with probability 1/3 (assuming the three flavors are equally likely). Now, this is all a very trivial example, but we can make things a little more interesting. Suppose I close my eyes while a friend feeds me candies of only a single color. Each candy seems to be apple-flavored. Did the friend choose green or red candies? Clearly I should guess green: after many red candies, it is very likely I would have tasted at least one strawberry or cherry. The more apple candies I taste without tasting any other flavor, the more convinced I become that the color my friend has chosen is green.
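
We can quantify that growing conviction with Bayes' rule. Here is a minimal sketch, assuming a 50/50 prior over green and red, that every green candy is apple-flavored, and that each red candy is apple-flavored with probability 1/3 (assumptions carried over from the rules above):

    # Posterior probability the friend chose green after n apple tastes.
    # Assumes: P(green) = P(red) = 0.5 a priori, P(apple | green) = 1,
    # and P(apple | red) = 1/3 per candy (flavors equally likely).
    def p_green(n, prior_green=0.5):
        like_green = 1.0 ** n        # P(n apples | green) = 1
        like_red = (1.0 / 3.0) ** n  # P(n apples | red) = (1/3)^n
        num = like_green * prior_green
        return num / (num + like_red * (1.0 - prior_green))

    for n in range(1, 6):
        print('%d apple candies: P(green) = %.3f' % (n, p_green(n)))

Under these assumptions the posterior already passes 0.96 after just three apple-flavored candies, matching the intuition above.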

Now, let's examine a more elaborate example with a concrete implementation.