Conversation
…xplicit switch in the command-line
…ormula + fix order of terms to keep them the same after retyping a formula
srk/src/intLattice.ml
val of_generators: V.t list -> unreduced t

val hermitize: ?compare:(int -> int -> int) -> 'form t -> hnf t
Do we really care about HNF?
In my view, HNF is a "syntactic" property---it depends on the basis chosen to represent vectors and the order in which the generators are listed. The "semantic" property that I think we do care about is whether the generators form a basis---i.e., they are linearly independent. HNF is just one way to compute a basis.
Yep; do you mean to abstract this behind an interface like "independent t" instead of "hnf t"?
Yes -- I'm just thinking that it's easier for a client to understand "the generators are linearly independent" than "the matrix formed by a particular arrangement of the generators is in hermite normal form".
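To make the distinction concrete, here is a small Python sketch (not srk code; names are hypothetical) of the basis-independent check the discussion favors: linear independence of the generators via Gaussian elimination over the rationals, with no commitment to any canonical form such as HNF.

```python
from fractions import Fraction

def linearly_independent(gens):
    """Check whether the generator vectors are linearly independent
    via Gaussian elimination over the rationals.  This is the
    basis-independent ("semantic") property; HNF is just one
    canonical way to *compute* such a basis."""
    rows = [[Fraction(x) for x in g] for g in gens]
    rank = 0
    for col in range(len(rows[0]) if rows else 0):
        # Find a pivot row for this column among the unprocessed rows.
        pivot = next((r for r in range(rank, len(rows)) if rows[r][col] != 0), None)
        if pivot is None:
            continue
        rows[rank], rows[pivot] = rows[pivot], rows[rank]
        for r in range(len(rows)):
            if r != rank and rows[r][col] != 0:
                factor = rows[r][col] / rows[rank][col]
                rows[r] = [a - factor * b for a, b in zip(rows[r], rows[rank])]
        rank += 1
    return rank == len(gens)

# [1, 0] and [0, 2] are independent; adding their sum is not.
print(linearly_independent([[1, 0], [0, 2]]))          # True
print(linearly_independent([[1, 0], [0, 2], [1, 2]]))  # False
```

Any generator order and any change of basis leaves this check unchanged, which is exactly why "independent t" is easier to explain to clients than "hnf t".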
    of [phi] is equivalent to [phi]. Projection must respect integrality
    of symbols.
*)
val eliminate_floor_mod_div_int : 'a context -> 'a formula -> 'a formula
What is the difference between eliminate_floor_mod_div_int and eliminate_floor_mod_div? Why would we prefer one over the other?
I'm using this to make a theoretical distinction between LIA formulas and LRA formulas, and also to give LIA/LIRA local abstraction the opportunity to optimize for, e.g., negative Int literals. But maybe it's more confusing this way.
I think maybe I misread the comment -- I missed that is_int is also eliminated. The resulting formula is still in LIRA, since it has integer-sorted symbols which have implicit is_int constraints. If we absolutely need an LRA formula, we should compute its real relaxation. Do we ever want to make explicit is_int constraints implicit?
This happens for computing the convex hull using baseline Gomory-Chvatal or Normaliz. We retain integer-typing for all symbols but also need to remove is_ints.
Can we do this inside the local abstraction algorithm rather than at the formula level, so that the client needn't be responsible for it and we don't add extra symbols to the context?
Do you mean to eliminate floor-mod-div terms (and is_int literals if necessary) in cubify like we've done before?
Ideally, the pre-processing required for calling convex hull should be the same regardless of the algorithm. We can eliminate Gomory-Chvatal's requirement that the input is free of is_int atoms (and thus unify the algorithms' pre-processing requirements for the client) by doing the transformation at the PLT level. (And then, as a bonus, we don't have to add extra symbols to the context.)
Good point, I can do that for GC.
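For reference, the transformation under discussion can be sketched on a tiny hypothetical formula AST (tagged tuples, not srk's API): each explicit atom Int(t) is replaced by the equation t = n for a fresh integer-sorted symbol n, so no Int atoms remain but the formula stays in LIRA via the fresh symbols' implicit integrality.

```python
import itertools

def eliminate_is_int(formula, fresh=itertools.count()):
    """Conceptual sketch (hypothetical mini-AST, not srk's API):
    replace each atom ('int', t) by t = n for a fresh integer-sorted
    symbol n.  The fresh symbols carry implicit integrality, which is
    exactly why extra symbols get added to the context."""
    tag = formula[0]
    if tag == 'int':                      # Int(t)  ~>  t = n, n a fresh int symbol
        n = f"n{next(fresh)}"
        return ('eq', formula[1], ('intvar', n))
    if tag in ('and', 'or'):
        return (tag,) + tuple(eliminate_is_int(f, fresh) for f in formula[1:])
    return formula                        # atoms other than Int are unchanged

phi = ('and', ('int', ('var', 'x')), ('leq', ('var', 'x'), ('const', 3)))
print(eliminate_is_int(phi))
# ('and', ('eq', ('var', 'x'), ('intvar', 'n0')), ('leq', ('var', 'x'), ('const', 3)))
```

Doing this at the PLT level instead, as suggested above, keeps the same semantics while sparing the formula-level context the fresh symbols.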
    because projection and taking closure commute.
    The lattice and tiling components of the PLT are ignored and dropped.
*)
val round_assuming_no_ints: (Plt.t, int -> QQ.t, P.t, int -> QQ.t) LocalAbstraction.t
Shouldn't the input be a polyhedron? (Assuming yes, let's move to polyhedron.ml)
Do we want local abstractions to be visible outside of PolyhedronLatticeTiling? Maybe another way is to have round_assuming_no_ints and round_assuming_all_ints as standard functions rather than local abstractions, with the signature Polyhedron.t -> Polyhedron.t, and have local closure in this file use that?
Yes, we should expose local abstractions so that we can use them elsewhere. E.g., I updated the quantifier elimination to use your PLT local abstraction, or we might want a more efficient algorithm for synthesizing quasi-ranking functions that builds on local project + hull.
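To illustrate the plain-function interface proposed above (Polyhedron.t -> Polyhedron.t), here is a Python sketch of the companion round_assuming_all_ints direction, over a hypothetical representation of a polyhedron as a list of constraints; the tightening step is the standard Gomory-Chvatal rounding argument, not srk's actual implementation.

```python
from fractions import Fraction
import math

def round_assuming_all_ints(polyhedron):
    """Sketch of rounding as a plain polyhedron-to-polyhedron function.
    A polyhedron is a list of constraints (a, b) meaning a . x >= b.
    If every variable is integer-valued and every coefficient in a is
    an integer, then a . x is itself an integer, so b can soundly be
    tightened to ceil(b) without cutting off any integer point."""
    return [(a, Fraction(math.ceil(b))) for a, b in polyhedron]

# 2x + 3y >= 5/2 tightens to 2x + 3y >= 3 over the integers.
p = [([2, 3], Fraction(5, 2))]
print(round_assuming_all_ints(p))  # [([2, 3], Fraction(3, 1))]
```

Whether exposed as a standard function or wrapped as a LocalAbstraction, the underlying operation is the same; the choice is purely about which interface other modules (e.g., quantifier elimination) consume.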
end

type lira_abstraction =
Why do we want to reify algorithm selection into a datatype? (I.e., why not ask the client to provide the "ConvexHull.by_polyreccone" as the "how" instead of "LiraCCH PolyReccone")
Some of these configurations are for evaluation purposes only, and will eventually be removed.
Yeah. I will resolve some of the other comments first, then fork to have an evaluation branch that's for the experiments and submission. The one to be pulled into flint will then be trimmed down.
By "ConvexHull.by_polyreccone", do you mean to place the functions in ConvexHull? Currently ConvexHull depends on this file rather than the other way round.
Sounds good.
I meant that we can use type abstraction_algorithm = Apron.Manager.t -> 'a context -> 'a term array -> symbol list so that the how parameters can be the functions themselves rather than a data structure. (The current interface is the "defunctionalization" of the interface that I'm suggesting).
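The contrast between the two interface styles can be sketched in a few lines of Python (names are hypothetical stand-ins for the srk functions under discussion): the variant-datatype version is the defunctionalization of the higher-order version, and the higher-order version needs no dispatch.

```python
from enum import Enum, auto

# Stub standing in for ConvexHull.by_polyreccone (name borrowed from
# the discussion above; not the real function).
def hull_by_polyreccone(formula):
    return ("hull", formula)

# Reified ("defunctionalized") style: a datatype names each algorithm,
# and an interpreter dispatches on the tag.
class LiraAbstraction(Enum):
    CCH_POLY_RECCONE = auto()

def abstract_reified(how, formula):
    if how is LiraAbstraction.CCH_POLY_RECCONE:
        return hull_by_polyreccone(formula)
    raise ValueError(how)

# Higher-order style suggested in the review: the "how" parameter *is*
# the algorithm, so no dispatch datatype or interpreter is needed.
def abstract(how, formula):
    return how(formula)

assert abstract(hull_by_polyreccone, "phi") == \
       abstract_reified(LiraAbstraction.CCH_POLY_RECCONE, "phi")
```

The reified form is handy when configurations must be serialized or compared (e.g., for the evaluation branch), while the functional form keeps the production interface minimal.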
(** All symbols must be of integer type, and all terms must have integer coefficients.
    Local-project-local-hull is (sound and) compact when these conditions hold.

    For the latter condition, rounding assuming integer-valued variables is in general
"terms must have integer coefficients" means the terms in the 'a arith_term array?
Yes. The example in the comment shows why; non-variable integrality constraints are not taken into account when we round.
Can you scale up by multiplying by the LCM of the denominators and then scale back down at the end?
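The scaling trick suggested in the last comment can be sketched as follows (hypothetical representation of a single constraint, not srk's term type): multiply a constraint with rational coefficients through by the LCM of the denominators so all coefficients become integers, round the bound up (sound because the scaled left-hand side is then integral over integer-valued variables), and recover the original scale by dividing at the end.

```python
from fractions import Fraction
from math import lcm, ceil

def tighten_bound_with_scaling(coeffs, bound):
    """Sketch: given sum(c_i * x_i) >= bound over integer-valued x_i,
    where the c_i may be rational, scale by m = lcm of the denominators
    so all coefficients are integers, then round the scaled bound up.
    The tightened constraint is (m * coeffs) . x >= ceil(m * bound),
    i.e., coeffs . x >= ceil(m * bound) / m after scaling back down."""
    m = lcm(*(c.denominator for c in coeffs))
    scaled = [c * m for c in coeffs]       # integer coefficients now
    tightened = Fraction(ceil(bound * m))  # integral LHS => rounding up is sound
    return scaled, tightened, m

# (1/2)x + (1/3)y >= 1/4  ~scale by 6~>  3x + 2y >= ceil(3/2) = 2
coeffs, b, m = tighten_bound_with_scaling([Fraction(1, 2), Fraction(1, 3)], Fraction(1, 4))
print(coeffs, b, m)  # [Fraction(3, 1), Fraction(2, 1)] 2 6
```

Whether this resolves the precondition depends on the point raised earlier in the thread: scaling handles rational coefficients, but non-variable integrality constraints are still not accounted for by the rounding itself.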
    but may not have finite image.

    Let PLT(x, Y) be a PLT in dimensions RR^{x \cup Y}.
    [round_up: QQ^{x \cup Y} -> Term(Y) -> Term(Y)] is a function such that
Do we need round_up? I don't understand why this is part of local projection.
This is legacy for int_frac; I left it there because the logic for it is tricky, but I should remove it. Will update this comment when I do that.
…raction' into refactor-local-abstraction
Abstract.abstract was in earlier commit.
…king satisfiability
Duet popl local abstraction
…ion use best-in-class convex hull algorithm
Overview of changes
Duet is extended with options to dump (convex) hulls (-dump-hulls) for goals that call out to ConvexHull, e.g. via -cra -monotone and -termination -monotone, and an option -retype-as-real-for-lira-convhull that will compute convex hulls by first relaxing the input formula to its real relaxation.
ConvexHull is the interface between Duet and Srk's polyhedronLatticeTiling. The implementation/interface for [abstract] is not so satisfactory because the solver's state is ignored and unchanged; the responsibility to relax the formula should perhaps move upstream to the clients.
PolyhedronLatticeTiling implements convex hull computation for LRA, LIA, and LIRA using local abstractions and baseline algorithms. It also provides a helper function to retype/relax a formula and terms into a real formula and terms over real variables.
Bigtop is extended with numerous options to compute convex hulls from formulas in SMT files (e.g., dumped by -dump-hulls) using different local abstraction or baseline algorithms, and to compare results between them.
Input formulas to PolyhedronLatticeTiling should contain only LRA terms; for the LIRA and LIA methods, they may otherwise be LIA formulas. ConvexHull and Bigtop take care of purifying formulas to ensure this.