This is meant to try out the workflow, taking advantage of the notebook filling in the values of future computations as they complete.
In the console, we begin by publishing the core locally, so that the import $ivy below can resolve it:
mill core.jvm.publishLocal
In [12]:
// Record the current commit, for reproducibility.
import sys.process._
println("git log -1".!!)
Out[12]:
In [30]:
import $ivy.`io.github.siddhartha-gadgil::provingground-core-jvm:0.1.0`
// import $ivy.`io.monix::monix:3.0.0`
Out[30]:
In [31]:
import provingground._, interface._, learning._, translation._, HoTT._
Out[31]:
We will now explore, without detailed notes.
In [32]:
val tg = TermGenParams()
Out[32]:
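TermGenParams bundles the weights for the various term-generation moves, with defaults for each. As a minimal sketch of tuning it (the field name lmW for the lambda weight is an assumption, not checked here), one could adjust a weight via copy:

// Hypothetical: lmW is assumed to be the lambda-generation weight.
val tgCustom = tg.copy(lmW = 0.05)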
In [33]:
val terms = FiniteDistribution.unif[Term](Type, Unit, Zero, Star)
Out[33]:
In [34]:
val typs = FiniteDistribution.unif[Typ[Term]](Type, Unit, Zero)
Out[34]:
In [35]:
val ts = TermState(terms, typs)
val lp = LocalProver(ts)
Out[35]:
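A LocalProver packages the initial TermState with the generation parameters and a cutoff below which generation is truncated, all with defaults. As a sketch, assuming (unchecked) that the generation parameters are the second argument, they can be passed explicitly:

// Assumed parameter order: initial state, generation parameters, cutoff.
val lpExplicit = LocalProver(ts, tg, cutoff = 0.001)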
As everything important is supposed to be memoized, we can calculate anything and the intermediate steps are remembered; the sketch below illustrates the idea with a plain Monix task. So we calculate theorems.
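This is a minimal sketch of that idea using a plain Monix Task (not ProvingGround's own caching): a memoized task evaluates once and replays its result on later runs.

import monix.eval.Task
import monix.execution.Scheduler.Implicits.global

// The body runs only on the first execution; later runs replay the cached 42.
val expensive = Task { println("computing..."); 42 }.memoize
expensive.runToFuture // prints "computing..." and completes with 42
expensive.runToFuture // completes with 42 without recomputing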
In [19]:
val thmsT = lp.theoremsByStatement
Out[19]:
In [22]:
val thmsF = thmsT.materialize.runToFuture
In [25]:
thmsF.value
Out[25]:
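In the cells above, materialize wraps the task's outcome in a Try, so a failure becomes an ordinary value; runToFuture starts execution on the implicit scheduler; and .value peeks at the result without blocking. A minimal self-contained sketch of the same pattern, with a deliberately failing task:

import monix.eval.Task
import monix.execution.Scheduler.Implicits.global

// .value is None while the task runs; once done it is Some(Success(Failure(...)))
// here, since materialize moved the ArithmeticException into the Try.
val failingF = Task(1 / 0).materialize.runToFuture
failingF.value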
The main result of this experiment was a failure; this should be investigated further. One possibility is that the corrections for equations lead to a slowdown, so we should try without generating equations.
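One way to probe a suspected slowdown, sketched here with plain Monix combinators, is to bound the run with a timeout, so that a hang is distinguished from an outright error:

import monix.execution.Scheduler.Implicits.global
import scala.concurrent.duration._

// If the bound is exceeded the future fails with a TimeoutException,
// which points to a slowdown rather than a bug in the computation.
val boundedF = thmsT.timeout(5.minutes).materialize.runToFuture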
In [36]:
val lp0 = LocalProver(ts, cutoff = 0.01)
Out[36]:
In [37]:
val nsT = lp0.nextState
Out[37]:
In [38]:
val nsF = nsT.runToFuture
In [29]:
nsF.value
Out[29]:
We get an error even with a low cutoff, so this is clearly a bug. However, the same code ran fine in a REPL, so it appears to be a notebook issue.
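For reference, a sketch of forcing the same computation synchronously, as one would in the Ammonite REPL (runSyncUnsafe blocks the calling thread, which suits a REPL better than a notebook):

import monix.execution.Scheduler.Implicits.global
import scala.concurrent.duration._

// Blocks until the task completes or the timeout elapses.
val ns = nsT.runSyncUnsafe(10.minutes)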