
WIP: feature addition #214

Draft
omereliy wants to merge 9 commits into AI-Planning:main from SPL-BGU:main

Conversation

@omereliy
Contributor

omereliy commented Jun 2, 2025


  • negative preconditions
  • added type hierarchy
  • model type checking
  • aligning action/fluent lifted/grounded/parameter-bound classes with formal definitions.
  • generator now allows observing static fluents.

@haz marked this pull request as draft June 2, 2025 13:59
@haz
Contributor

haz commented Jun 2, 2025

Thanks, @omereliy ! Can you start by explaining the shift to macq/core? We have parsed actions for generation, and then learned actions (potentially lifted) on the extraction side -- what's the need for yet another action spec?

@omereliy
Contributor Author

omereliy commented Jun 2, 2025

The suggestion is ultimately to remove all fluent/action-related classes that are not part of the core package, and to adapt all other packages to use the core classes.

The LearnedLiftedFluent behaves as a parameter-bound fluent but is hashed as if it were a lifted fluent, which highlights the need to add a parameter-bound fluent class. The LearnedLiftedAction class stores sets of LearnedLiftedFluent objects as preconditions and effects, which I find error prone since those should be parameter-bound fluents (some fluents can "disappear" when stored in a set).
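
To make the collapse concrete, here is a minimal sketch (hypothetical class and field names, not the actual macq code): when hash and equality ignore the parameter binding, two distinct parameter-bound preconditions of the same predicate collapse to one when stored in a set.

```python
class LiftedFluent:
    """Hypothetical sketch: hash/equality use only name and arity,
    ignoring which action parameters the fluent is bound to."""

    def __init__(self, name, arity, binding):
        self.name = name
        self.arity = arity
        self.binding = binding  # indices of the action's parameters

    def __hash__(self):
        return hash((self.name, self.arity))  # binding not included

    def __eq__(self, other):
        return (self.name, self.arity) == (other.name, other.arity)


# Two distinct preconditions of a lifted action, e.g. on(?x, ?y) and on(?y, ?x):
preconditions = {LiftedFluent("on", 2, (0, 1)), LiftedFluent("on", 2, (1, 0))}
print(len(preconditions))  # 1 -- the second binding "disappears" from the set
```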

The trace action/fluent classes can be the same classes as the learned types; I don't see the need for redundant classes.

The split allows easier handling of PDDL generation from the model.

This will eventually allow applying a learned action to a state (if the fluents share the same name), ultimately allowing trace generation, grounding/lifting, and binding of a model, action, and fluent, and, bottom line, making evaluation of the model and debugging of the process easier.
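
For context on what "applying a learned action to a state" could look like, here is a minimal STRIPS-style sketch under standard progression semantics (hypothetical names and representation, not an existing macq API):

```python
from collections import namedtuple

# Hypothetical ground action representation; not the macq classes.
GroundAction = namedtuple(
    "GroundAction", ["name", "precond_pos", "precond_neg", "add_effects", "del_effects"]
)

def applicable(state, action):
    """STRIPS-style check: positive preconditions hold, negative ones do not."""
    return action.precond_pos <= state and not (action.precond_neg & state)

def apply(state, action):
    """Progress a state (a frozenset of ground fluents) through an action."""
    if not applicable(state, action):
        raise ValueError(f"{action.name} is not applicable in the given state")
    return (state - action.del_effects) | action.add_effects

# Example: pick-up(a) in a toy blocks state.
s0 = frozenset({"ontable(a)", "clear(a)", "handempty"})
pick_up_a = GroundAction(
    name="pick-up(a)",
    precond_pos=frozenset({"ontable(a)", "clear(a)", "handempty"}),
    precond_neg=frozenset(),
    add_effects=frozenset({"holding(a)"}),
    del_effects=frozenset({"ontable(a)", "clear(a)", "handempty"}),
)
print(apply(s0, pick_up_a))  # frozenset({'holding(a)'})
```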

Future work can include a package for model evaluation.

@omereliy
Contributor Author

omereliy commented Jun 2, 2025

The lack of a type hierarchy can cause a lot of problems when trying to generate PDDL with the tarski package (a "multiple definitions of a variable" error; to be specific, it occurs for fluents when learning typed models whose type hierarchy tree has depth > 2 with 'object' as the root type).

@haz
Contributor

haz commented Jun 2, 2025

Ok, scrapping the previous separation is a fairly major shift we should think on.

@e-cal / @beckydvn / @TathagataChakraborti : I know it's been ages. Like, really long ago. But can you remember the original philosophy behind having separate action/fluent classes from the trace side -vs- learned representation side? I recall us debating it, but I don't recall what went into the final decision.

Other points, @omereliy :

  • We should be shifting away from tarski, and so I'm not so interested in changes that are needed just for tarski's sake.
  • Having a type hierarchy makes sense, but I don't think there's much out there in the way of extraction methods that learn the hierarchy, right?
  • Not sure I understand what you're saying here:

The LearnedLiftedFluent behaves as a parameter-bound fluent but is hashed as if it were a lifted fluent, which highlights the need to add a parameter-bound fluent class. The LearnedLiftedAction class stores sets of LearnedLiftedFluent objects as preconditions and effects, which I find error prone since those should be parameter-bound fluents (some fluents can "disappear" when stored in a set).

By far the biggest argument to unify everything is so that we pave the way for metrics. But I'm hoping to get a take from the original co-authors to be sure that's a safe move to make.

@TathagataChakraborti
Collaborator

I know it's been ages. Like, really long ago. But can you remember the original philosophy behind having separate action/fluent classes from the trace side -vs- learned representation side?

I think one of the fundamental ideas about model learning we wanted to preserve is that the model you choose to learn can have different properties than the underlying (unknown) model that produced the traces, i.e., there can be multiple interpretations of the same traces with different sets of assumptions on model properties (e.g., is it noise in observation or stochasticity of the behavior?).

But I am not sure the classes need to be different to capture this.

cause a lot of problems when trying to generate PDDL with the tarski package (a "multiple definitions of a variable" error)

Can confirm this 😬 I had to make an emergency fork recently just to make something run. 🫣

@haz
Contributor

haz commented Jun 4, 2025

Thanks, @TathagataChakraborti !

But I am not sure the classes need to be different to capture this.

Definitely needs to be flexible enough to allow for differing input/output formats (just imagine all the lifted source PDDL with grounded actions learned), but as you point out, it's the class separation that remains a conundrum. @e-cal / @beckydvn ?

@e-cal
Collaborator

e-cal commented Jun 4, 2025

I think separating the classes just better fits the mental (and literal) model of separation between actual and learned actions & fluents. The data models end up being pretty different, since the actual one is a 1-to-1 representation of the original PDDL while the learned one is whatever the extraction technique manages to pull out, so there isn't a lot that can be cleanly shared between the classes.

But there's no technical reason as far as I can recall, and having them all inherit from a parent class seems like more of a Java-flavored / object-oriented way of designing the internal API vs a more procedural way.

The only issue I could foresee is the classic problem with deep object-oriented inheritance structures, where you inevitably need to add some behavior or property to a child class that doesn't exactly suit the existing inheritance scheme, which then requires a bunch of refactoring and more subclassing.

My personal preference is to keep things more isolated and use data models over big OO class inheritance chains, but since I haven't worked on the codebase in a while I can't speak to whether or not changing it would help or hurt the dev experience.

@haz
Contributor

haz commented Nov 18, 2025

Alright. I've had a solid 5 months to sit and think on this -- no joke, I've thought about it off and on ;) -- and I think I'm game for a common core class. If there are specific things on either side (generation or learned) that we need, then we subclass. E.g., the confidence that a precondition is correct on a learned model.

@omereliy : Let's make this happen! Can you circle back to this comment and followup with the question/comments? What else can we do for you to help bring this across the finish line?

Another open question I have is whether we make the tarski > pddl transition as part of this PR, or do it separately. I leave it to you, @omereliy , to decide, and I'll help hook it up either way.
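
As a rough sketch of the direction described above (hypothetical names and fields, not the existing macq classes): one core lifted action shared by both sides, with a learned subclass layering on extraction-specific data such as precondition confidence.

```python
from dataclasses import dataclass, field
from typing import Dict, FrozenSet, Tuple

# Hypothetical sketch of a proposed common core, not macq's current API.

@dataclass(frozen=True)
class ParamBoundFluent:
    name: str                  # predicate name, e.g. "on"
    binding: Tuple[int, ...]   # which action parameters the fluent is bound to

@dataclass
class CoreLiftedAction:
    name: str
    param_sorts: Tuple[str, ...]
    preconditions: FrozenSet[ParamBoundFluent] = frozenset()
    add_effects: FrozenSet[ParamBoundFluent] = frozenset()
    del_effects: FrozenSet[ParamBoundFluent] = frozenset()

@dataclass
class LearnedLiftedAction(CoreLiftedAction):
    # Extraction-side extra: the confidence that each precondition is correct.
    precond_confidence: Dict[ParamBoundFluent, float] = field(default_factory=dict)
```

The generation side would use the core class directly, while extraction techniques return the subclass with whatever extra data they produce.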

@omereliy
Contributor Author

Alright. I've had a solid 5 months to sit and think on this -- no joke, I've thought about it off and on ;) -- and I think I'm game for a common core class. If there are specific things on either side (generation or learned) that we need, then we subclass. E.g., the confidence that a precondition is correct on a learned model.

@omereliy : Let's make this happen! Can you circle back to this comment and followup with the question/comments? What else can we do for you to help bring this across the finish line?

Another open question I have is whether we make the tarski > pddl transition as part of this PR, or do it separately. I leave it to you, @omereliy , to decide, and I'll help hook it up either way.

  1. tarski is fine; I added negative preconditions in my fork and it works well. It's been a long time, so I need to review what I did.

  2. There are two options for the type hierarchy:

  • receiving the hierarchy as an argument
  • applying a simple type-learning algorithm. The same kind of algorithm that works for any other scripting language should work on PDDL problems. I have run some experiments and it worked relatively well. Still, receiving the hierarchy as an argument is preferred IMO; I can elaborate on it later.
  3. A lifted action's preconditions and effects are sets of parameter-bound fluents, not lifted fluents. The current hashing method in the object classes does not reflect that difference, causing issues when binding the same fluent with different bindings during learning and when constructing the PDDL domain. Meaning it either raises errors in the process, or, when adding the fluents to a set, it treats all the learned lifted fluents as the same and keeps only a single lifted fluent instead of all of them (see the sketch below).
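
One possible way to address point 3 (a sketch with hypothetical names, not a committed design) is to include the parameter binding in the fluent's hash and equality, so that different bindings of the same predicate survive in a set:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)  # frozen dataclass: hash/eq generated over all fields
class ParamBoundFluent:
    name: str                  # predicate name, e.g. "on"
    binding: Tuple[int, ...]   # indices of the action parameters it is bound to

preconditions = {ParamBoundFluent("on", (0, 1)), ParamBoundFluent("on", (1, 0))}
print(len(preconditions))  # 2 -- both bindings are kept
```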

@omereliy
Contributor Author

Excuse my poor answer; I commented from my mobile phone.

@haz
Contributor

haz commented Nov 19, 2025

No apologies needed -- I could read between the lines ;). All makes sense now!

So have at it -- just keep in mind that we don't want to do any extra mods just to adhere to tarski, since we're close to replacing it entirely with pddl instead. Do you have a plan for fixing the parameter-bound fluents? Including the bindings as tuples on the lifted fluent?

I'd also agree that passing the hierarchy is best for now. Learning the type hierarchy may well be its own research contribution you want to hold on to, and then eventually release as a PR on this project ;).

@omereliy
Contributor Author

omereliy commented Dec 8, 2025

Apologies for the late response: here are the files that I used on a branch I had. I think the negative preconditions I added work as well.

This alone should be enough for model generation without too many changes or breaking the existing extraction methods.

the "core.zip" is the beginning of an modification to avoid the "LearnedAction" and fluent objects difference from the regular ones (from the trace module) .

learned_sort.py
learned_action.py
learned_fluent.py
model.py
core.zip
