[WIP] Added functionality of parallel maximal independent set #145
base: main
Conversation
@dschult please review it.
dschult
left a comment
Thanks for this!
Some formatting suggestions before diving into the code/tests:
- we typically don't put error messages on the "assert" command. The pytest report of the assert failure is sufficient.
- we also typically don't include a comment line at the beginning of the test when the test function's name is sufficient. If the test involves something special or unexpected, then a couple-line docstring (or comment) is used. We aren't building docs for the tests.
- instead of building a sequential version of the code inside nx-parallel, does it make sense to just "fall back" to the NetworkX version?
I tried this approach; however, the tests were failing because of nx.maximal_independent_set. Because of this, I attempted to implement the sequential MIS approach myself.
OK... I understand. That makes sense. But the NetworkX backend system has a way to flag functions so that they should not run under certain conditions. The backend provides a "should_run" function which is called by NetworkX before it tries to run the backend version of the function. In nx-parallel we have a set of utility functions to help us set that up. They are currently used, for example, like this:

@nxp._configure_if_nx_active(should_run=nxp.should_run_if_sparse(threshold=0.3))
def harmonic_centrality(...):

That should_run function checks the density of the graph against a threshold. There is another function provided in utils, used like:

@nxp._configure_if_nx_active(should_run=nxp.should_run_if_large(50000))

Unfortunately, while that function has been created, it currently has no way to pass in the size of the threshold -- it is set to 200 nodes and is currently not used anywhere. You can still use it here, though -- it will ignore the input and use 200 -- and you can get this code working with that cutoff. If you are up for it, you could update the utility function. If you'd prefer not to mess with that, let me know and I'll fix it.
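For concreteness, here is a minimal sketch of what wiring that existing utility onto the MIS function might look like. The decorator and utility names are taken from the comment above; everything else is illustrative and not the actual PR code.

import nx_parallel as nxp

@nxp._configure_if_nx_active(should_run=nxp.should_run_if_large(50000))
def maximal_independent_set(G, nodes=None, seed=None):
    # Parallel Luby-style implementation would go here. For graphs below the
    # threshold, should_run is meant to report False so NetworkX falls back to
    # its own version (as noted above, the current utility ignores the passed
    # threshold and uses 200).
    ...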
I tried this as suggested; I am not sure whether I have done it correctly. Instead of duplicating NetworkX's sequential MIS code, we now fall back to the NetworkX implementation for small graphs (< 50,000 nodes). What I did:
- Modified the function to work in two modes
- Also updated the policy signatures to accept the extra arguments
- Added a manual should_run check
dschult
left a comment
I haven't worked through the actual code yet, but I looked at the tests and the docs. It seems like some of the tests may already be covered when we run the networkx tests with the environment variable set for nx-parallel. In that case, each networkx test is run with the nx-parallel code being called wherever it is supported, and we run that in the nx-parallel CI tests.
- can you see whether any of the nx-parallel tests are already covered by the networkx tests -- and remove the duplicates?
- I have a couple other comments/questions below.
I hope to look at the code itself soon.
:)
nx_parallel/algorithms/mis.py
Outdated
from networkx.algorithms.mis import maximal_independent_set as _nx_mis_dispatcher
_nx_mis = inspect.unwrap(_nx_mis_dispatcher)
I think you can use nx.maximal_independent_set._orig_func as the way to call the original networkx function. Can you check if that works? It is possible I haven't fully understood how that works. That way we don't have to use inspect.
I checked all the tests of nx and nx-parallel and found that most of the tests are already covered. Some other tests that I can think of are:
Let me know if these add value. I can implement some of them.
I fixed this one with:

from networkx.algorithms.mis import maximal_independent_set as _nx_mis_dispatcher
_nx_mis = _nx_mis_dispatcher.orig_func
# If nodes_threshold is a graph-like object, it's being used as a direct should_run
# function instead of a factory. Use default threshold.
I would prefer the function signature here to be: should_run_if_large(G, nodes_threshold=200, *_, **__)
So can you briefly explain where this might be used as a factory? The previous version doesn't seem to deal with that case. Of course, it didn't deal with keyword args either... ;}
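To illustrate the distinction being asked about, here is a rough sketch of the two shapes a should_run helper can take. The names and return values are illustrative, not the actual nx-parallel code.

# Factory style: should_run_if_large_factory(50000) is called once at decoration
# time and returns the checker that the backend later calls with the graph.
def should_run_if_large_factory(nodes_threshold=200):
    def _should_run(G, *args, **kwargs):
        return len(G) >= nodes_threshold
    return _should_run

# Direct style, the signature preferred above: the function itself is the
# checker; it takes the graph first and ignores any extra positional or
# keyword arguments passed along from the call site.
def should_run_if_large(G, nodes_threshold=200, *_, **__):
    return len(G) >= nodes_threshold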
# Validate directed graph
if G.is_directed():
    raise nx.NetworkXNotImplemented("Not implemented for directed graphs.")
Does the networkx version work with directed graphs?
If the networkx version uses the not_implemented_for decorator, then I think that decorator runs before the backend is called. You can check whether this is ever called by changing the message (locally) and trying it to see which method gets shown.
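For reference, the decorator mentioned here is networkx.utils.not_implemented_for; below is a rough sketch of the usual pattern. Whether maximal_independent_set is decorated exactly this way should be checked in the NetworkX source.

from networkx.utils import not_implemented_for

@not_implemented_for("directed")
def some_undirected_only_algorithm(G):
    # If G is directed, NetworkXNotImplemented is raised by the decorator;
    # per the comment above, that check likely fires before any backend
    # implementation is dispatched to, which would make an in-backend
    # directedness check redundant.
    return set()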
# Convert seed to Random object if needed (for fallback and parallel execution)
import random

if seed is not None:
    if hasattr(seed, 'random'):
        # It's already a RandomState/Random object
        rng = seed
    else:
        # It's a seed value
        rng = random.Random(seed)
else:
    rng = random.Random()
Similarly, doesn't the seed input get handled by the decorators in the networkx code before it ever gets to the backend? Check...
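For context, NetworkX usually normalizes the seed argument with a decorator along these lines. This is a sketch of the general pattern; the exact decoration on maximal_independent_set should be verified in the NetworkX source.

from networkx.utils import py_random_state

@py_random_state("seed")
def my_algorithm(G, nodes=None, seed=None):
    # By the time the body runs, `seed` is already a random.Random instance,
    # so converting it again inside the backend would be redundant if this
    # decorator runs before the backend is called.
    return seed.random()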
# Check if we should run parallel version
# This is needed when backend is explicitly specified
should_run_result = maximal_independent_set.should_run(G, nodes, seed)
if should_run_result is not True:
    # Fall back to NetworkX sequential (unwrapped version needs Random object)
    return _nx_mis(G, nodes=nodes, seed=rng)
This definitely gets called by the backend machinery. This code should never be reached.
# Parallel strategy: Run complete MIS algorithm on node chunks independently
# Then merge results by resolving conflicts
all_nodes = list(G.nodes())
Suggested change:
- all_nodes = list(G.nodes())
+ all_nodes = list(G)
if nodes_set:
    available = set(all_nodes) - nodes_set
    for node in nodes_set:
        available.discard(node)
Don't these nodes already get removed two lines up?
if nodes_set:
    for node in nodes_set:
        excluded.update(adj_dict[node])
Didn't we do this already with available?
What does this PR change?
This PR implements the parallel maximal independent set via Luby's random permutation mechanism (solving issue #144)
What is your approach, and why did you choose it?
A standard parallel MIS algorithm assigns a random priority to every vertex once per round, then selects all locally maximal vertices at the same time (a sequential sketch of one round is shown below).
Why this is parallel:
This matches the PRAM formulation and is widely used in CPU and GPU environments.
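For illustration, here is a small sequential sketch of the Luby-style rounds described above. This is not the code from the PR; it only makes the priority-and-local-maximum step concrete.

import random

def luby_mis(G, seed=None):
    """Sequential simulation of Luby's algorithm on a NetworkX-style graph G."""
    rng = random.Random(seed)
    remaining = set(G)
    mis = set()
    while remaining:
        # One round: every remaining vertex draws a random priority.
        priority = {v: rng.random() for v in remaining}
        # A vertex joins the MIS if it beats all of its remaining neighbors;
        # in a parallel setting every vertex can check this independently.
        winners = {
            v
            for v in remaining
            if all(priority[v] > priority[u] for u in G[v] if u in remaining)
        }
        mis |= winners
        # Winners and their neighbors drop out before the next round.
        removed = set(winners)
        for v in winners:
            removed.update(G[v])
        remaining -= removed
    return mis

# Example usage (assuming networkx is available):
#   G = nx.erdos_renyi_graph(1000, 0.01, seed=42)
#   independent_set = luby_mis(G, seed=0)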
Brief summary of changes (1–2 lines)