Conversation

@mvafin mvafin commented Sep 9, 2025

@mvafin mvafin requested a review from popovaan September 9, 2025 20:48
@mvafin mvafin requested review from a team as code owners September 9, 2025 20:48
@github-actions github-actions bot added the `category: Core` (OpenVINO Core, aka ngraph) and `category: PyTorch FE` (OpenVINO PyTorch Frontend) labels Sep 9, 2025
@mvafin mvafin requested a review from Copilot September 10, 2025 12:43

@Copilot Copilot AI left a comment


Pull Request Overview

This PR improves padding support in the PyTorch frontend by moving pad conversion from the transformation stage to the operations stage and adding support for dynamic paddings.

  • Move pad-related conversion logic from transformations to operations to better handle dynamic cases
  • Add a dedicated translator for reflection pad operations across 1D, 2D, and 3D variants (a sketch follows this list)
  • Update pad shape inference to handle dynamic padding arrays
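
For orientation, here is a minimal sketch of what such a reflection-pad translator could look like, following the frontend conventions visible in the diff snippets below (`context.get_input`, `context.mark_node`). `make_full_rank_pads` is a hypothetical helper standing in for the padding-reordering logic in the refactored pad.cpp; actual names in the PR may differ.

```cpp
// Illustrative sketch only. PyTorch padding lists are ordered
// [begin_last, end_last, begin_second_last, ...] and cover only the
// trailing dimensions, so they must be reordered and zero-extended to
// the input rank before they can feed ov::op::v1::Pad.
OutputVector translate_reflection_pad_nd(const NodeContext& context) {
    num_inputs_check(context, 2, 2);
    auto data = context.get_input(0);
    auto paddings = context.get_input(1);
    Output<Node> pads_begin;
    Output<Node> pads_end;
    // make_full_rank_pads is hypothetical; the PR's pad.cpp builds these
    // tensors with Broadcast/ConvertLike/Concat, as seen in the snippets below.
    std::tie(pads_begin, pads_end) = make_full_rank_pads(context, data, paddings);
    auto pad = std::make_shared<v1::Pad>(data, pads_begin, pads_end, ov::op::PadMode::REFLECT);
    return {context.mark_node(pad)};
}
```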

Reviewed Changes

Copilot reviewed 8 out of 8 changed files in this pull request and generated 2 comments.

| File | Description |
| ---- | ----------- |
| tests/layer_tests/pytorch_tests/test_reflection_pad.py | New test file for reflection padding operations |
| src/frontends/pytorch/src/transforms/prim_list_construct_pad.hpp | Removed transformation header file |
| src/frontends/pytorch/src/transforms/prim_list_construct_pad.cpp | Removed transformation implementation |
| src/frontends/pytorch/src/op_table.cpp | Updated operation table to use the new reflection pad translator (see the sketch after this table) |
| src/frontends/pytorch/src/op/pad.cpp | Refactored pad operations to support dynamic paddings |
| src/frontends/pytorch/src/frontend.cpp | Removed registration of the deprecated transformation pass |
| src/core/tests/type_prop/pad.cpp | Added test for dynamic padding shapes |
| src/core/shape_inference/include/pad_shape_inference.hpp | Updated shape inference to handle dynamic padding dimensions |
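
For context, op_table.cpp maps aten op names to translator functions; hypothetical entries for the new translator might look like the following (the actual translator name used in the PR may differ).

```cpp
// Hypothetical op_table.cpp entries; the translator name is illustrative.
{"aten::reflection_pad1d", op::translate_reflection_pad_nd},
{"aten::reflection_pad2d", op::translate_reflection_pad_nd},
{"aten::reflection_pad3d", op::translate_reflection_pad_nd},
```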


mvafin and others added 2 commits September 10, 2025 14:46
"Number of elements of pads_end must be >= 0 and <= arg rank");
NODE_SHAPE_INFER_CHECK(op,
input_shapes,
pads_begin_rank.is_dynamic() || pads_begin_shape[0].is_dynamic() ||

@bumbosiepsak bumbosiepsak Sep 17, 2025


Question: is it guaranteed that pads_begin_shape has at least one element, so that it can be indexed with 0? (The same applies to pads_end_shape below.)
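
For illustration, here is the kind of guard the question implies, sketched against `ov::PartialShape` semantics; whether the surrounding checks already establish a static rank of at least 1 is exactly what is being asked.

```cpp
// Indexing pads_begin_shape[0] is only well-defined when the rank is
// static and at least 1; a dynamic rank should short-circuit the check.
if (pads_begin_rank.is_static() && pads_begin_shape.size() >= 1) {
    const auto& first_dim = pads_begin_shape[0];
    // ... compare first_dim against the argument rank ...
}
```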

```cpp
}

TYPED_TEST_P(PadTest, pad_begin_and_end_dynamic) {
    auto arg_shape = PartialShape{9, {3, 5}, {3, 5}, {3, 4}, {3, 4}};
```

Contributor

Nitpick: how about using full, readable, and regular names:

```cpp
auto arg_shape = ...;
auto begin_shape = ...;
auto end_shape = ...;
...
auto arg_shape_param = make_shared<op::v0::Parameter>(element::f32, arg_shape);
auto begin_shape_param = make_shared<op::v0::Parameter>(element::i32, begin_shape);
auto end_shape_param = make_shared<op::v0::Parameter>(element::i32, end_shape);
```

or:

```cpp
auto arg_shape = ...;
auto begin_shape = ...;
auto end_shape = ...;

auto arg = ...;
auto begin = ...;  // Hopefully won't clash with std::begin and std::end
auto end = ...;
```

?

```cpp
auto s_begin = make_shared<op::v0::Parameter>(element::i32, begin_shape);
auto s_end = make_shared<op::v0::Parameter>(element::i32, end_shape);

auto pad = this->make_op(arg, s_begin, s_end, op::PadMode::CONSTANT);
```

Contributor

Nitpick: because you can:

```cpp
const auto pad = ...;
```

```cpp
                           pad_preserve_partial_values_and_symbols_on_inputs,
                           pad_begin_and_end_dynamic);

using PadOpTypes = Types<op::v1::Pad, op::v12::Pad>;
```

Contributor

Question: if we're introducing dynamic padding, shouldn't it result in a new Pad version?

And, if we wanted to stay strict, shouldn't pad_begin_and_end_dynamic fail for op::v1::Pad and op::v12::Pad?

Contributor Author

I think we use the same shape inference for Pad-1 and Pad-12, so no, it will not fail. It is not really important either way; this behavior is applicable to both versions.

```cpp
auto pads_end = context.mark_node(std::make_shared<v0::Concat>(OutputVector{pads_remaining_c, pads_end_short}, 0));

if (const auto begins = ov::util::get_constant_from_source(pads_begin)) {
    pads_begin = begins;
```

Contributor

Question: is it possible (and does it make sense) to reuse objects:

```cpp
if (auto begins = ov::util::get_constant_from_source(pads_begin)) {
    pads_begin = std::move(begins);
}
```

?
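
For context, here is a self-contained illustration of the pattern being suggested, using plain `std::shared_ptr` rather than the OpenVINO types: moving from the if-init variable is safe because it is not read afterwards, and it saves one atomic refcount increment.

```cpp
#include <memory>
#include <utility>

struct Node {};

int main() {
    auto target = std::make_shared<Node>();
    // The if-init scopes `result` to the branch; after the move it is
    // null, but nothing reads it again, so this is well-defined.
    if (auto result = std::make_shared<Node>()) {
        target = std::move(result);  // transfers ownership, no refcount bump
    }
}
```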

```cpp
auto input_rank = std::get<1>(get_shape_rank(context, data, false, element::i32));
auto pads_diff = context.mark_node(std::make_shared<v1::Subtract>(input_rank, pads_short_len));
auto pads_remaining = context.mark_node(std::make_shared<v3::Broadcast>(zero, pads_diff));
auto pads_remaining_c = context.mark_node(std::make_shared<v1::ConvertLike>(pads_remaining, paddings));
```

Contributor

Nitpick: the `_c` suffix is a bit enigmatic, and pads_remaining_c is the variable you consume in other places (i.e. pads_remaining is intermediate data).

How about:

```cpp
// pads_remaining_raw is your helper data
auto pads_remaining_raw = context.mark_node(std::make_shared<v3::Broadcast>(zero, pads_diff));

auto pads_remaining = context.mark_node(std::make_shared<v1::ConvertLike>(pads_remaining_raw, paddings));
```

?
