
[Core][MPI] Adding SpatialSearchResultContainer and SpatialSearchResultContainerMap #11205

Closed · wants to merge 100 commits
Conversation

@loumalouomega (Member) commented May 31, 2023

📝 Description

This PR introduces two new classes: SpatialSearchResultContainer and SpatialSearchResultContainerMap. They provide storage for spatial search results and the operations associated with them. This is a transition PR toward finally supporting MPI search using GeometricalObjectBins.

The SpatialSearchResultContainer class stores:

  1. Local pointers of results (in a PointerVectorSet)
  2. Global pointers (empty if not synchronized)
  3. GlobalPointerCommunicator pointer (to synchronize the global pointers)
  4. A map of distances to the results (local)
  5. A vector of distances to the results (global)

The class includes functions such as SynchronizeAll, HasResult, GetResultIndices, and AddResult. Notably, it integrates a global pointer communicator, so arbitrary quantities can be computed on the results by passing a functor. The most commonly required operations are already implemented: GetResultShapeFunctions, GetResultIndices, and GetResultCoordinates.

The SynchronizeAll method synchronizes the container between partitions. The AddResult method adds a result to the local pointers; a push_back method does the same.

The SpatialSearchResultContainerMap class stores an unordered map of SpatialSearchResultContainer instances, keyed by an integer hash computed from the coordinates of the point for which the results are being computed.

Overall, this PR is a significant enhancement, providing a new means of handling and manipulating spatial search results in the Kratos library.

🆕 Changelog

@loumalouomega (Member, Author)

Ready @pooyan-dadvand

@loumalouomega (Member, Author)

@roigcarlo do you want to meet to advance with the review of this?

@pooyan-dadvand (Member)

@loumalouomega Thanks for the implementation and the large effort to make the spatial search MPI-parallel.

Before going into the details of the implementation, I would like to put my comments here (after talking with you directly about the global design):

  1. To my understanding, you are replacing the simple std::vector<ResultType>& rResults in the search interface with a container that carries more information. However, the way you store the results (separating the pointers and the distances) makes it harder to loop over them and work with each found object together with its distance:

```cpp
for (auto& result : search_results) {
    auto& geometry = result.Get()->GetGeometry();
    double distance = result.GetDistance();
    if (distance ...) {
        // ...
    }
}
```

  2. Using the same container for SearchNearest implies allocating the array (for only one element), which can be a noticeable performance issue.
  3. In the MPI version, having a specialized communicator would be a good alternative approach.
  4. Using point coordinates as the hash for the map would be dangerous. (Could we use the pointer to the point as the reference instead?)
  5. We should add new interfaces to the spatial containers for working with several queries at once, simply because one-by-one queries are very inefficient in MPI. This leads to the Map container variant, but we should revisit that interface again.

@roigcarlo and @jcotela I would like to hear your design comments and concerns.

@loumalouomega (Member, Author)

Closing as included in #11719

@loumalouomega loumalouomega deleted the core/mpi/spatial-search-container branch November 2, 2023 11:09
Labels: Enhancement, Kratos Core, Parallel-MPI (distributed memory parallelism for HPC / clusters), Transition
3 participants