Guseva Alena. SEQ-MPI technology. Sum of matrix elements by columns. Variant 12 #11
Conversation
Submitting the PR because I can't test it locally; for some reason there are no runners available:
That could be related to the major outage of Microsoft Azure yesterday. It should be OK now.
Codecov Report ✅ All modified and coverable lines are covered by tests.

```
@@           Coverage Diff            @@
##           master      #11    +/-  ##
=======================================
+ Coverage   94.73%   95.30%   +0.56%
=======================================
  Files          15       17       +2
  Lines         456      511      +55
  Branches      166      190      +24
=======================================
+ Hits          432      487      +55
  Partials       24       24
```
💡 Codex Review
Here are some automated review suggestions for this pull request.
```cpp
MPI_Bcast(counts.data(), static_cast<int>(counts.size()), MPI_INT, 0, MPI_COMM_WORLD);
MPI_Bcast(displs.data(), static_cast<int>(displs.size()), MPI_INT, 0, MPI_COMM_WORLD);
```
Broadcast uses uninitialized counts buffer
`counts` is filled only on rank 0; the other ranks leave it empty. The subsequent `MPI_Bcast(counts.data(), counts.size(), MPI_INT, 0, MPI_COMM_WORLD)` therefore broadcasts `wsize` elements from the root while the receivers pass a count of 0 and a null buffer, which violates MPI's requirement that all ranks supply the same element count and leaves no storage for the incoming data. On multi-process runs this results in an MPI error or crash, and `counts[rank]` is undefined before `MPI_Scatterv`. The vector should be resized to `wsize` on every rank (and populated via the broadcast) before calling `MPI_Bcast`.
Description
SEQ: A vector of column sums of the input matrix is created. Iterating over the matrix, each element is added to the corresponding column sum.
MPI: The rows of the input matrix are distributed evenly across the processes. If the number of rows is not divisible by the number of processes, the remaining n rows are assigned to the first n processes. Each process computes the column sums of its local matrix, after which the local sums of all processes are gathered and summed on the root process.
Checklist
- The branch is named `<surname>_<first_name_initial>_<short_task_name>`
- `clang-format` passes locally in my fork (no formatting errors)
- `clang-tidy` passes locally in my fork (no warnings/errors)
- The PR is opened from the task branch (e.g. `nesterov_a_vector_sum`), not from `master`