Conversation
opensdh left a comment
I did one pass; there may be more to say after certain simplifications.
  the Charm++ backend.
*/

struct bind_accessors_t : public util::tuple_walker<bind_accessors_t> {
Let's please not add usage of util::tuple_walker; now that we use C++17, it's very easy to provide something like
template<class T>
void operator()(T && tup) {
  std::apply([this](auto &... tt) { (visit(tt), ...); }, std::forward<T>(tup));
}

Also, new classes shouldn't use `_t` (which signifies POSIX types and trait aliases).
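As a self-contained sketch of that suggestion (the class name and the `visit()` overloads below are illustrative stand-ins, not FleCSI's):

```cpp
#include <tuple>
#include <utility>

// Illustrative visitor: std::apply plus a fold expression replaces
// util::tuple_walker; one visit() overload per supported element type.
struct bind_accessors {
  template<class T>
  void operator()(T && tup) {
    std::apply([this](auto &... tt) { (visit(tt), ...); }, std::forward<T>(tup));
  }

private:
  void visit(int & i) { i += 1; }       // e.g. bind an integer field
  void visit(double & d) { d *= 2.0; }  // e.g. bind a floating-point field
};

int main() {
  std::tuple<int, double> args{1, 2.0};
  bind_accessors{}(args); // calls visit() on each element, left to right
}
```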
*/
template<typename DATA_TYPE, size_t PRIVILEGES>
void visit(data::accessor<data::dense, DATA_TYPE, PRIVILEGES> & accessor) {
  flog_assert(buf_.size() % sizeof(DATA_TYPE) == 0, "Bad buffer size\n");
This is the one use of buf_, and (as per the comment above) it doesn't seem appropriate (what if there were more than one accessor in a call?).
// is_constructible_v<float&,double&> instead.
template<class... PP, class... AA>
auto
serial_arguments(std::tuple<PP...> * /* to deduce PP */, AA &&... aa) {
If all this machinery (including all the serialization in task_wrapper.hh) is needed for multiple backends, it should be factored out somewhere rather than duplicated.
template<class, class>
struct tuple_prepend;
template<class T, class... TT>
struct tuple_prepend<T, std::tuple<TT...>> {
Fair warning: this is probably going to go away (it turned out that having "MPI tasks" have an assumed number of point tasks was unnecessarily restrictive). (Sorry that devel is such a moving target in places.)
void visit(data::accessor<data::dense, DATA_TYPE, PRIVILEGES> & accessor) {
  flog_assert(buf_.size() % sizeof(DATA_TYPE) == 0, "Bad buffer size\n");
  auto & flecsi_context = run::context::instance();
  DATA_TYPE* d = (DATA_TYPE*)flecsi_context.getField(accessor.identifier());
Please use reinterpret_cast to make it obvious what's happening here.
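A minimal self-contained illustration of the suggested cast; `get_field` here is a hypothetical stand-in for `context::getField`, which returns untyped storage:

```cpp
#include <cstddef>

// Hypothetical stand-in for context::getField(): hands back untyped storage.
void * get_field(std::size_t /* identifier */) {
  static double storage[4];
  return storage;
}

int main() {
  // reinterpret_cast states explicitly that raw storage is being reinterpreted,
  // whereas a C-style cast silently picks whichever conversion happens to compile.
  double * d = reinterpret_cast<double *>(get_field(0));
  d[0] = 3.14;
}
```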
CharmLibExit calls MPI_Finalize if Charm++ is built on the MPI layer, so this was causing crashes. If Charm is not built on the MPI layer, then MPI is never used/initialized in the first place.
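For illustration only, one way to make such a shutdown path robust is to guard the call; this is a sketch (the helper name is made up, and the actual change here may simply drop the backend's own `MPI_Finalize` call):

```cpp
#include <mpi.h>

// Illustrative guard: finalize MPI only if it was initialized and nothing else
// (e.g. CharmLibExit when Charm++ is built on the MPI layer) has already
// finalized it, avoiding a double MPI_Finalize.
void finalize_mpi_if_needed() {
  int initialized = 0, finalized = 0;
  MPI_Initialized(&initialized);
  MPI_Finalized(&finalized);
  if(initialized && !finalized)
    MPI_Finalize();
}
```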
Closed due to lack of (discontinued) funding for this line of work.
Here is the PR for merging in the Charm++ backend to FleCSI.
To build and test the Charm++ backend:
Run

./build charm++ mpi-linux-x86_64 --build-shared

from within the top level charm directory, then configure FleCSI with the following CMake settings:

FLECSI_RUNTIME_MODEL charm
CMAKE_CXX_COMPILER <path_to_charm/bin/charmc>
CMAKE_CXX_FLAGS -c++-option --std=gnu++17
CMAKE_C_COMPILER <path_to_charm/bin/charmc>
CMAKE_C_FLAGS -c++-option --std=gnu++17
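For illustration only, assuming an out-of-source build directory next to the FleCSI sources, those settings might be passed to CMake like this (the charmc path is a placeholder):

```sh
mkdir build && cd build
cmake \
  -DFLECSI_RUNTIME_MODEL=charm \
  -DCMAKE_CXX_COMPILER=<path_to_charm/bin/charmc> \
  -DCMAKE_CXX_FLAGS="-c++-option --std=gnu++17" \
  -DCMAKE_C_COMPILER=<path_to_charm/bin/charmc> \
  -DCMAKE_C_FLAGS="-c++-option --std=gnu++17" \
  ..
```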