In [1]:
from rake import Rake  # RAKE: Rapid Automatic Keyword Extraction

# Build a keyword extractor with the library's default settings.
my_rake = Rake()

In [4]:
text = "We study two $Q$-state Potts models coupled by the product of their energy operators, in the regime $2 < Q \le 4$ where the coupling is relevant. A particular choice of weights on the square lattice is shown to be equivalent to the integrable $a_3^{(2)}$ vertex model. It corresponds to a selfdual system of two antiferromagnetic Potts models, coupled ferromagnetically. We derive the Bethe Ansatz equations and study them numerically for two arbitrary twist angles. The continuum limit is shown to involve two compact bosons and one non compact boson, with discrete states emerging from the continuum at appropriate twists. The non compact boson entails strong logarithmic corrections to the finite-size behaviour of the scaling levels, the understanding of which allows us to correct an earlier proposal for some of the critical exponents. In particular, we infer the full set of magnetic scaling dimensions (watermelon operators) of the Potts model. "

In [10]:
text = " We present an unsupervised framework for simultaneous appearance-based object discovery, detection, tracking and reconstruction using RGBD cameras and a robot manipulator. The system performs dense 3D simultaneous localization and mapping concurrently with unsupervised object discovery. Putative objects that are spatially and visually coherent are manipulated by the robot to gain additional motion-cues. The robot uses appearance alone, followed by structure and motion cues, to jointly discover, verify, learn and improve models of objects. Induced motion segmentation reinforces learned models which are represented implicitly as 2D and 3D level sets to capture both shape and appearance. We compare three different approaches for appearance-based object discovery and find that a novel form of spatio-temporal super-pixels gives the highest quality candidate object models in terms of precision and recall. Live experiments with a Baxter robot demonstrate a holistic pipeline capable of automatic discovery, verification, detection, tracking and reconstruction of unknown objects."

In [12]:
# Extract keyphrases: returns a list of (phrase, score) pairs, sorted by descending score.
my_rake.run(text)


Out[12]:
[('system performs dense 3d simultaneous localization', 33.5),
 ('induced motion segmentation reinforces learned models', 32.333333333333336),
 ('highest quality candidate object models', 23.083333333333332),
 ('simultaneous appearance-based object discovery', 15.25),
 ('3d level sets', 10.5),
 ('appearance-based object discovery', 10.25),
 ('unsupervised object discovery', 9.25),
 ('holistic pipeline capable', 9.0),
 ('gain additional motion-cues', 9.0),
 ('baxter robot demonstrate', 7.75),
 ('improve models', 6.333333333333333),
 ('motion cues', 6.0),
 ('automatic discovery', 5.0),
 ('unsupervised framework', 4.5),
 ('represented implicitly', 4.0),
 ('rgbd cameras', 4.0),
 ('spatio-temporal super-pixels', 4.0),
 ('mapping concurrently', 4.0),
 ('live experiments', 4.0),
 ('jointly discover', 4.0),
 ('visually coherent', 4.0),
 ('robot manipulator', 3.75),
 ('unknown objects', 3.666666666666667),
 ('putative objects', 3.666666666666667),
 ('robot', 1.75),
 ('objects', 1.6666666666666667),
 ('appearance', 1.0),
 ('precision', 1.0),
 ('verification', 1.0),
 ('form', 1.0),
 ('verify', 1.0),
 ('approaches', 1.0),
 ('reconstruction', 1.0),
 ('recall', 1.0),
 ('present', 1.0),
 ('detection', 1.0),
 ('capture', 1.0),
 ('spatially', 1.0),
 ('structure', 1.0),
 ('tracking', 1.0),
 ('shape', 1.0),
 ('learn', 1.0),
 ('compare', 1.0),
 ('find', 1.0),
 ('terms', 1.0),
 ('manipulated', 1.0),
 ('2d', 1.0)]
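
Because run() returns plain (phrase, score) tuples already ordered by descending score, the output above can be post-processed with ordinary Python. A minimal sketch follows; the score cutoff of 4.0 is an arbitrary choice for illustration, not part of the library.

In [ ]:
# Keep only the strongest candidates from the extraction above.
keywords = my_rake.run(text)

top_five = keywords[:5]                             # five highest-scoring phrases
strong = [(p, s) for p, s in keywords if s >= 4.0]  # arbitrary illustrative cutoff

for phrase, score in top_five:
    print("%6.2f  %s" % (score, phrase))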

In [13]:
%timeit my_rake = Rake()  # benchmark constructing the extractor


1000 loops, best of 3: 181 µs per loop
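
Construction is cheap. For a fuller picture, the extraction call itself can be timed the same way; a sketch, with no output shown:

In [ ]:
%timeit my_rake.run(text)  # benchmark keyphrase extraction on the robotics abstract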
