Exploring the Czech-Slovak Olympiad problem.

Our goal is to explore the solution of Achal Kumar. Specifically, we start with all the axioms and the constants $m$ and $n$, and then look for lemmas. The main changes this time are to target the codomain, hoping for better instantiation, and to increase the weight of $m$ and $n$.

Recall the problem:

Let $\star$ be a binary operation on a nonempty set $M$. That is, every pair $(a,b) \in M \times M$ is assigned an element $a \star b \in M$. Suppose that $\star$ has the additional property that $(a \star b) \star b = a$ and $a \star (a \star b) = b$ for all $a, b \in M$. Show that $a \star b = b \star a$ for all $a, b \in M$.
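As a quick sanity check before formalising: $M = \{0, 1\}$ with $a \star b = a \oplus b$ (XOR) is a model of both axioms, and there the operation is indeed commutative. A small plain-Scala check, independent of the library:

```scala
// A concrete model of the axioms: M = {0, 1} with a ⋆ b = a XOR b.
val elems = List(0, 1)
for (a <- elems; b <- elems) {
  assert(((a ^ b) ^ b) == a) // first axiom:  (a ⋆ b) ⋆ b = a
  assert((a ^ (a ^ b)) == b) // second axiom: a ⋆ (a ⋆ b) = b
  assert((a ^ b) == (b ^ a)) // the conclusion, which we must prove in general
}
```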

We should derive the following lemmas.

  1. $ m = (m*n)*n $
  2. $ n = (m*n)*((m*n)*n) $
  3. $ (m*n)*m = (m*n)*((m*n)*n) $
  4. $ ((m*n)*m)*m = m*n $
  5. $ (m*n)*m = n $
  6. $ ((m*n)*m)*m = n*m $

Finally, we should get the desired result.

$ m*n = n*m $
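For reference, here is how the lemmas chain together in a hand-checked sketch (the prover need not follow these exact steps):

```latex
\begin{align*}
(m*n)*n &= m && \text{axiom } (a*b)*b = a \text{ at } a = m,\ b = n && \text{(Lemma 1)}\\
(m*n)*((m*n)*n) &= n && \text{axiom } a*(a*b) = b \text{ at } a = m*n,\ b = n && \text{(Lemma 2)}\\
(m*n)*m &= (m*n)*((m*n)*n) && \text{rewrite } m \text{ in the right factor by Lemma 1} && \text{(Lemma 3)}\\
((m*n)*m)*m &= m*n && \text{axiom } (a*b)*b = a \text{ at } a = m*n,\ b = m && \text{(Lemma 4)}\\
(m*n)*m &= n && \text{Lemmas 3 and 2, transitivity} && \text{(Lemma 5)}\\
((m*n)*m)*m &= n*m && \text{Lemma 5 and left congruence } b = c \Rightarrow b*a = c*a && \text{(Lemma 6)}\\
m*n &= n*m && \text{Lemmas 4 and 6, symmetry and transitivity}
\end{align*}
```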


In [1]:
import $cp.bin.`provingground-core-jvm-9cc21d9ac6.fat.jar`
import provingground._ , interface._, HoTT._, learning._ 
repl.pprinter() = {
  val p = repl.pprinter()
  p.copy(
    additionalHandlers = p.additionalHandlers.orElse {
      translation.FansiShow.fansiHandler
    }
  )
}


Out[1]:
import $cp.$                                              

import provingground._ , interface._, HoTT._, learning._ 

In [2]:
val M = "M" :: Type

val eqM = "eqM" :: M ->: M ->: Type

val a = "a" :: M
val b = "b" :: M
val c = "c" :: M

val m = "m" :: M

val n = "n" :: M

val mul = "mul" :: M ->: M ->: M

import FineDeducer.unif


Out[2]:
M: Typ[Term] = M
eqM: Func[Term, Func[Term, Typ[Term]]] = eqM
a: Term = a
b: Term = b
c: Term = c
m: Term = m
n: Term = n
mul: Func[Term, Func[Term, Term]] = mul
import FineDeducer.unif

In [3]:
val Thm = eqM(mul(m)(n))(mul(n)(m))

val Lemma1 = eqM(m)(mul(mul(m)(n))(n))
val Lemma2 = eqM(n)(mul(mul(m)(n))(mul(mul(m)(n))(n)))
val Lemma3 = eqM(mul(mul(m)(n))(m))(mul(mul(m)(n))(mul(mul(m)(n))(n)))
val Lemma4 = eqM(mul(mul(mul(m)(n))(m))(m))(mul(m)(n))
val Lemma5 = eqM(mul(mul(m)(n))(m))(n)
val Lemma6 = eqM(mul(mul(mul(m)(n))(m))(m))(mul(n)(m))


Out[3]:
Thm: Typ[Term] = eqM(mul(m)(n))(mul(n)(m))
Lemma1: Typ[Term] = eqM(m)(mul(mul(m)(n))(n))
Lemma2: Typ[Term] = eqM(n)(mul(mul(m)(n))(mul(mul(m)(n))(n)))
Lemma3: Typ[Term] = eqM(mul(mul(m)(n))(m))(mul(mul(m)(n))(mul(mul(m)(n))(n)))
Lemma4: Typ[Term] = eqM(mul(mul(mul(m)(n))(m))(m))(mul(m)(n))
Lemma5: Typ[Term] = eqM(mul(mul(m)(n))(m))(n)
Lemma6: Typ[Term] = eqM(mul(mul(mul(m)(n))(m))(m))(mul(n)(m))

Note that the above are defined just by copy-pasting (which is why we renamed `op` to `mul`). We now set up a local prover with all the axioms we use.


In [4]:
val fullTerms : FiniteDistribution[Term] = (unif(a,b,c)(m,n, mul, eqM)(
    eqM(a)(a),
    eqM(a)(b) ->: eqM(b)(a),
    eqM(a)(b) ->: eqM(b)(c) ->: eqM(a)(c),
    eqM(mul(mul(a)(b))(b))(a),
    eqM(mul(a)(mul(a)(b)))(b),
    eqM(b)(c) ->: eqM(mul(b)(a))(mul(c)(a))
  ) * 0.5 ++ (FiniteDistribution.unif(eqM: Term) * 0.125) ++ (FiniteDistribution.unif(m, n, mul : Term ) * 0.375)).filter((t) => !Set(a, b, c).contains(t)).normalized()


Out[4]:
fullTerms: FiniteDistribution[Term] = FiniteDistribution(
  Vector(
    Weighted(m, 0.04347826086956522),
    Weighted(n, 0.04347826086956522),
    Weighted(mul, 0.04347826086956522),
    Weighted(eqM, 0.04347826086956522),
    Weighted(axiom_{eqM(a)(a)}, 0.04347826086956522),
    Weighted(axiom_{(eqM(a)(b) \to eqM(b)(a))}, 0.04347826086956522),
    Weighted(
      axiom_{(eqM(a)(b) \to (eqM(b)(c) \to eqM(a)(c)))},
      0.04347826086956522
    ),
    Weighted(axiom_{eqM(mul(mul(a)(b))(b))(a)}, 0.04347826086956522),
    Weighted(axiom_{eqM(mul(a)(mul(a)(b)))(b)}, 0.04347826086956522),
    Weighted(
      axiom_{(eqM(b)(c) \to eqM(mul(b)(a))(mul(c)(a)))},
      0.04347826086956522
    ),
    Weighted(eqM, 0.14130434782608697),
    Weighted(m, 0.14130434782608697),
    Weighted(n, 0.14130434782608697),
    Weighted(mul, 0.14130434782608697)
  )
)
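The weights printed above can be checked by hand, assuming `unif` spreads its mass uniformly over its 13 arguments: each generator gets $0.5/13 = 1/26$; `eqM` and each of `m`, `n`, `mul` get an extra $0.125$; filtering out $a$, $b$, $c$ removes $3/26$ of the mass, and normalization divides by the surviving $23/26$. A plain-Scala arithmetic check (not the library code):

```scala
// Reproduce the two weight levels seen in Out[4].
val perGen = 0.5 / 13         // each of the 13 uniformly weighted generators: 1/26
val extra  = 0.125            // additional mass on eqM, and on each of m, n, mul
val kept   = 1.0 - 3 * perGen // mass surviving the filter on a, b, c: 23/26
println(perGen / kept)        // small weight, 1/23  ≈ 0.043478, matching Out[4]
println(extra / kept)         // large weight, 13/92 ≈ 0.141304, matching Out[4]
```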

In [5]:
import monix.execution.Scheduler.Implicits.global
val ts = TermState(fullTerms,fullTerms.map(_.typ))
val lp0 = LocalProver(ts, tg = TermGenParams(typAsCodW = 0.1)).addGoals(
    Lemma1 -> 0.1, Lemma2 -> 0.1, Lemma3 -> 0.1, Lemma4 -> 0.1, Lemma5 -> 0.1, Lemma6 -> 0.1, Thm -> 0.4).noIsles
def lp(n: Int) = lp0.sharpen(math.pow(2, n))


Out[5]:
import monix.execution.Scheduler.Implicits.global

ts: TermState = TermState(
  FiniteDistribution(
    Vector(
      Weighted(m, 0.04347826086956522),
      Weighted(n, 0.04347826086956522),
      Weighted(mul, 0.04347826086956522),
      Weighted(eqM, 0.04347826086956522),
      Weighted(axiom_{eqM(a)(a)}, 0.04347826086956522),
      Weighted(axiom_{(eqM(a)(b) \to eqM(b)(a))}, 0.04347826086956522),
      Weighted(
        axiom_{(eqM(a)(b) \to (eqM(b)(c) \to eqM(a)(c)))},
        0.04347826086956522
      ),
      Weighted(axiom_{eqM(mul(mul(a)(b))(b))(a)}, 0.04347826086956522),
      Weighted(axiom_{eqM(mul(a)(mul(a)(b)))(b)}, 0.04347826086956522),
      Weighted(
        axiom_{(eqM(b)(c) \to eqM(mul(b)(a))(mul(c)(a)))},
        0.04347826086956522
      ),
      Weighted(eqM, 0.14130434782608697),
      Weighted(m, 0.14130434782608697),
      Weighted(n, 0.14130434782608697),
      Weighted(mul, 0.14130434782608697)
    )
  ),
  FiniteDistribution(
    Vector(
      Weighted(M, 0.04347826086956522),
      Weighted(M, 0.04347826086956522),
      Weighted((M → (M → M)), 0.04347826086956522),
      Weighted((M → (M → 𝒰 )), 0.04347826086956522),
      Weighted(∏(a : M){ eqM(a)(a) }, 0.04347826086956522),
      Weighted(
        ∏(a : M){ ∏(b : M){ (eqM(a)(b) → eqM(b)(a)) } },
        0.04347826086956522
      ),
      Weighted(
        ∏(a : M){ ∏(b : M){ ∏(c : M){ (eqM(a)(b) → (eqM(b)(c) → eqM(a)(c))) } } },
...
lp0: LocalProver = LocalProver(
  TermState(
    FiniteDistribution(
      Vector(
        Weighted(m, 0.04347826086956522),
        Weighted(n, 0.04347826086956522),
        Weighted(mul, 0.04347826086956522),
        Weighted(eqM, 0.04347826086956522),
        Weighted(axiom_{eqM(a)(a)}, 0.04347826086956522),
        Weighted(axiom_{(eqM(a)(b) \to eqM(b)(a))}, 0.04347826086956522),
        Weighted(
          axiom_{(eqM(a)(b) \to (eqM(b)(c) \to eqM(a)(c)))},
          0.04347826086956522
        ),
        Weighted(axiom_{eqM(mul(mul(a)(b))(b))(a)}, 0.04347826086956522),
        Weighted(axiom_{eqM(mul(a)(mul(a)(b)))(b)}, 0.04347826086956522),
        Weighted(
          axiom_{(eqM(b)(c) \to eqM(mul(b)(a))(mul(c)(a)))},
          0.04347826086956522
        ),
        Weighted(eqM, 0.14130434782608697),
        Weighted(m, 0.14130434782608697),
        Weighted(n, 0.14130434782608697),
        Weighted(mul, 0.14130434782608697)
      )
    ),
    FiniteDistribution(
      Vector(
        Weighted(M, 0.04347826086956522),
        Weighted(M, 0.04347826086956522),
        Weighted((M → (M → M)), 0.04347826086956522),
        Weighted((M → (M → 𝒰 )), 0.04347826086956522),
        Weighted(∏(a : M){ eqM(a)(a) }, 0.04347826086956522),
        Weighted(
          ∏(a : M){ ∏(b : M){ (eqM(a)(b) → eqM(b)(a)) } },
          0.04347826086956522
        ),
        Weighted(
          ∏(a : M){ ∏(b : M){ ∏(c : M){ (eqM(a)(b) → (eqM(b)(c) → eqM(a)(c))) } ...
defined function lp

In [6]:
def nslp(n: Int) = {
    val lpc = lp(n)
     lpc.nextState.map{ns => (n, ns, lpc)}
}
val bT = Utils.bestTask((1 to 30).map(nslp)).memoize


Out[6]:
defined function nslp
bT: monix.eval.Task[Option[(Int, TermState, LocalProverStep)]] = Async(
  <function2>,
  false,
  true,
  true
)

We have set up a task that keeps refining until it times out. We run this asynchronously, and then use the final state.
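The pattern behind `Utils.bestTask`, as we understand it, can be sketched in plain Scala (a stand-in, not the library code): run successively sharper attempts and keep the result of the last one that succeeds, falling back to earlier successes when a sharper run fails (e.g. times out).

```scala
// Keep the last successful attempt from an increasingly sharp sequence.
def bestResult[A](attempts: Seq[() => Option[A]]): Option[A] =
  attempts.foldLeft(Option.empty[A]) { (best, attempt) =>
    attempt().orElse(best) // a later success supersedes; a failure keeps the previous best
  }

// Toy usage: attempts 3..5 "time out" (return None), so attempt 2 wins.
val out = bestResult((1 to 5).map(k => () => if (k <= 2) Some(k) else None))
println(out) // Some(2)
```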


In [7]:
import monix.execution.Scheduler.Implicits.global
val bF = bT.runToFuture



In [8]:
val nsT = bT.map(_.get._2).memoize
val lpT = bT.map(_.get._3).memoize


Out[8]:
nsT: monix.eval.Task[TermState] = Async(<function2>, false, true, true)
lpT: monix.eval.Task[LocalProverStep] = Async(<function2>, false, true, true)

In [9]:
bF.value


Out[9]:
res8: Option[scala.util.Try[Option[(Int, TermState, LocalProverStep)]]] = None

In [10]:
nsT.map(_.successes).runToFuture


Result here

These changes actually lost a lemma. When we had only changed the parameter, there was neither gain nor loss.