
Commit 04033ef

bpf, verifier: Improve precision for BPF_ADD and BPF_SUB
This patch improves the precision of the scalar(32)_min_max_add and scalar(32)_min_max_sub functions, which update the u(32)_min/u(32)_max ranges for the BPF_ADD and BPF_SUB instructions. We discovered this more precise operator using a technique we are developing for automatically synthesizing functions for updating tnums and ranges.

According to the BPF ISA [1], "Underflow and overflow are allowed during arithmetic operations, meaning the 64-bit or 32-bit value will wrap". Our patch leverages this wrap-around semantics of unsigned overflow and underflow to improve precision.

Below is an example of our patch for scalar_min_max_add; the idea is analogous for all four functions. There are three cases to consider when adding two u64 ranges [dst_umin, dst_umax] and [src_umin, src_umax]. Consider a value x in the range [dst_umin, dst_umax] and another value y in the range [src_umin, src_umax]:

(a) No overflow: no addition x + y overflows. This occurs when even the largest possible sum does not overflow, i.e., dst_umax + src_umax <= U64_MAX.

(b) Partial overflow: some additions x + y overflow. This occurs when the largest possible sum overflows (dst_umax + src_umax > U64_MAX) but the smallest possible sum does not (dst_umin + src_umin <= U64_MAX).

(c) Full overflow: all additions x + y overflow. This occurs when both the smallest and the largest possible sums overflow, i.e., both (dst_umin + src_umin) and (dst_umax + src_umax) are > U64_MAX.

The current implementation conservatively sets the output bounds to unbounded, i.e., [umin=0, umax=U64_MAX], whenever there is *any* possibility of overflow, i.e., in cases (b) and (c). Otherwise it computes tight bounds as [dst_umin + src_umin, dst_umax + src_umax]:

    if (check_add_overflow(*dst_umin, src_reg->umin_value, dst_umin) ||
        check_add_overflow(*dst_umax, src_reg->umax_value, dst_umax)) {
            *dst_umin = 0;
            *dst_umax = U64_MAX;
    }

Our synthesis-based technique discovered a more precise operator. In particular, in case (c) all possible additions x + y overflow and wrap around according to eBPF semantics, so computing the output range as [dst_umin + src_umin, dst_umax + src_umax] (with wrap-around) still yields sound bounds. Only in case (b) do we need to set the output bounds to unbounded, i.e., [0, U64_MAX]. Case (b) is detected when the minimum possible sum does *not* overflow while the maximum possible sum *does* overflow; when that happens, we set the output to unbounded:

    min_overflow = check_add_overflow(*dst_umin, src_reg->umin_value, dst_umin);
    max_overflow = check_add_overflow(*dst_umax, src_reg->umax_value, dst_umax);

    if (!min_overflow && max_overflow) {
            *dst_umin = 0;
            *dst_umax = U64_MAX;
    }
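(A compilable userspace sketch of the refined rule, for illustration only: the struct u64_range and range_add_u64 names are invented here and are not kernel code; __builtin_add_overflow returns true on overflow, like the kernel's check_add_overflow().)

    #include <stdbool.h>
    #include <stdint.h>

    /* Hypothetical interval type for this sketch; not a kernel structure. */
    struct u64_range {
        uint64_t umin;
        uint64_t umax;
    };

    /*
     * Keep the wrapped sums unless only some additions overflow
     * (case (b) above), in which case nothing is known about the result.
     */
    static struct u64_range range_add_u64(struct u64_range dst, struct u64_range src)
    {
        bool min_overflow = __builtin_add_overflow(dst.umin, src.umin, &dst.umin);
        bool max_overflow = __builtin_add_overflow(dst.umax, src.umax, &dst.umax);

        if (!min_overflow && max_overflow) {
            dst.umin = 0;
            dst.umax = UINT64_MAX;
        }
        return dst;
    }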
Below is an example eBPF program and the corresponding log from the verifier. At instruction 7: (0f) r5 += r3, due to conservative overflow handling, the current implementation of scalar_min_max_add() sets r5's bounds to [0, U64_MAX], which reg_bounds_sync() then updates to [0x3d67e960f7d, U64_MAX].

    0: R1=ctx() R10=fp0
    0: (85) call bpf_get_prandom_u32#7  ; R0_w=scalar()
    1: (bf) r3 = r0                     ; R0_w=scalar(id=1) R3_w=scalar(id=1)
    2: (18) r4 = 0x950a43d67e960f7d     ; R4_w=0x950a43d67e960f7d
    4: (4f) r3 |= r4                    ; R3_w=scalar(smin=0x950a43d67e960f7d,smax=-1,umin=0x950a43d67e960f7d,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x950a43d67e960f7d; 0x6af5bc298169f082)) R4_w=0x950a43d67e960f7d
    5: (18) r5 = 0xc014a00000000000     ; R5_w=0xc014a00000000000
    7: (0f) r5 += r3                    ; R3_w=scalar(smin=0x950a43d67e960f7d,smax=-1,umin=0x950a43d67e960f7d,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x950a43d67e960f7d; 0x6af5bc298169f082)) R5_w=scalar(smin=0x800003d67e960f7d,umin=0x3d67e960f7d,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x3d67e960f7d; 0xfffffc298169f082))
    8: (b7) r0 = 0                      ; R0_w=0
    9: (95) exit

With our patch, r5's bounds after instruction 7 are set by scalar_min_max_add() to the much more precise [0x551ee3d67e960f7d, 0xc0149fffffffffff]:

    ...
    7: (0f) r5 += r3                    ; R3_w=scalar(smin=0x950a43d67e960f7d,smax=-1,umin=0x950a43d67e960f7d,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x950a43d67e960f7d; 0x6af5bc298169f082)) R5_w=scalar(smin=0x800003d67e960f7d,umin=0x551ee3d67e960f7d,umax=0xc0149fffffffffff,smin32=0xfe960f7d,umin32=0x7e960f7d,var_off=(0x3d67e960f7d; 0xfffffc298169f082))
    8: (b7) r0 = 0                      ; R0_w=0
    9: (95) exit

The logic for scalar32_min_max_add is analogous. For the scalar(32)_min_max_sub functions, the reasoning is similar but applied to detecting underflow instead of overflow.

We verified the correctness of the new implementations using Agni [3,4]. We have since also discovered that a similar technique has been used to calculate output ranges for unsigned interval addition and subtraction in Hacker's Delight [2].

[1] https://docs.kernel.org/bpf/standardization/instruction-set.html
[2] Hacker's Delight, Ch. 4-2, Propagating Bounds through Add’s and Subtract’s
[3] https://github.com/bpfverif/agni
[4] https://people.cs.rutgers.edu/~sn349/papers/sas24-preprint.pdf

Co-developed-by: Matan Shachnai <[email protected]>
Signed-off-by: Matan Shachnai <[email protected]>
Co-developed-by: Srinivas Narayana <[email protected]>
Signed-off-by: Srinivas Narayana <[email protected]>
Co-developed-by: Santosh Nagarakatte <[email protected]>
Signed-off-by: Santosh Nagarakatte <[email protected]>
Signed-off-by: Harishankar Vishwanathan <[email protected]>
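(A quick, throwaway userspace check of the numbers in the log above, not part of the patch; the constants are taken directly from the example program.)

    #include <assert.h>
    #include <stdbool.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* r5 is the constant 0xc014a00000000000; r3 has
         * umin = 0x950a43d67e960f7d and umax = U64_MAX (smax = -1).
         */
        uint64_t dst_umin = 0xc014a00000000000ULL;
        uint64_t dst_umax = 0xc014a00000000000ULL;
        uint64_t src_umin = 0x950a43d67e960f7dULL;
        uint64_t src_umax = UINT64_MAX;

        bool min_overflow = __builtin_add_overflow(dst_umin, src_umin, &dst_umin);
        bool max_overflow = __builtin_add_overflow(dst_umax, src_umax, &dst_umax);

        /* Both sums wrap: case (c), so the wrapped bounds remain sound. */
        assert(min_overflow && max_overflow);
        printf("[%#llx, %#llx]\n", (unsigned long long)dst_umin,
               (unsigned long long)dst_umax);
        /* Prints [0x551ee3d67e960f7d, 0xc0149fffffffffff]. */
        return 0;
    }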
1 parent 4a4b84b commit 04033ef

File tree

1 file changed (+56, -20 lines)


kernel/bpf/verifier.c

Lines changed: 56 additions & 20 deletions
@@ -14617,14 +14617,25 @@ static void scalar32_min_max_add(struct bpf_reg_state *dst_reg,
         s32 *dst_smax = &dst_reg->s32_max_value;
         u32 *dst_umin = &dst_reg->u32_min_value;
         u32 *dst_umax = &dst_reg->u32_max_value;
+        u32 umin_val = src_reg->u32_min_value;
+        u32 umax_val = src_reg->u32_max_value;
+        bool min_overflow, max_overflow;
 
         if (check_add_overflow(*dst_smin, src_reg->s32_min_value, dst_smin) ||
             check_add_overflow(*dst_smax, src_reg->s32_max_value, dst_smax)) {
                 *dst_smin = S32_MIN;
                 *dst_smax = S32_MAX;
         }
-        if (check_add_overflow(*dst_umin, src_reg->u32_min_value, dst_umin) ||
-            check_add_overflow(*dst_umax, src_reg->u32_max_value, dst_umax)) {
+
+        /* If either all additions overflow or no additions overflow, then
+         * it is okay to set: dst_umin = dst_umin + src_umin, dst_umax =
+         * dst_umax + src_umax. Otherwise (some additions overflow), set
+         * the output bounds to unbounded.
+         */
+        min_overflow = check_add_overflow(*dst_umin, umin_val, dst_umin);
+        max_overflow = check_add_overflow(*dst_umax, umax_val, dst_umax);
+
+        if (!min_overflow && max_overflow) {
                 *dst_umin = 0;
                 *dst_umax = U32_MAX;
         }
@@ -14637,14 +14648,25 @@ static void scalar_min_max_add(struct bpf_reg_state *dst_reg,
         s64 *dst_smax = &dst_reg->smax_value;
         u64 *dst_umin = &dst_reg->umin_value;
         u64 *dst_umax = &dst_reg->umax_value;
+        u64 umin_val = src_reg->umin_value;
+        u64 umax_val = src_reg->umax_value;
+        bool min_overflow, max_overflow;
 
         if (check_add_overflow(*dst_smin, src_reg->smin_value, dst_smin) ||
             check_add_overflow(*dst_smax, src_reg->smax_value, dst_smax)) {
                 *dst_smin = S64_MIN;
                 *dst_smax = S64_MAX;
         }
-        if (check_add_overflow(*dst_umin, src_reg->umin_value, dst_umin) ||
-            check_add_overflow(*dst_umax, src_reg->umax_value, dst_umax)) {
+
+        /* If either all additions overflow or no additions overflow, then
+         * it is okay to set: dst_umin = dst_umin + src_umin, dst_umax =
+         * dst_umax + src_umax. Otherwise (some additions overflow), set
+         * the output bounds to unbounded.
+         */
+        min_overflow = check_add_overflow(*dst_umin, umin_val, dst_umin);
+        max_overflow = check_add_overflow(*dst_umax, umax_val, dst_umax);
+
+        if (!min_overflow && max_overflow) {
                 *dst_umin = 0;
                 *dst_umax = U64_MAX;
         }
@@ -14655,23 +14677,30 @@ static void scalar32_min_max_sub(struct bpf_reg_state *dst_reg,
 {
         s32 *dst_smin = &dst_reg->s32_min_value;
         s32 *dst_smax = &dst_reg->s32_max_value;
+        u32 *dst_umin = &dst_reg->u32_min_value;
+        u32 *dst_umax = &dst_reg->u32_max_value;
         u32 umin_val = src_reg->u32_min_value;
         u32 umax_val = src_reg->u32_max_value;
+        bool min_underflow, max_underflow;
 
         if (check_sub_overflow(*dst_smin, src_reg->s32_max_value, dst_smin) ||
             check_sub_overflow(*dst_smax, src_reg->s32_min_value, dst_smax)) {
                 /* Overflow possible, we know nothing */
                 *dst_smin = S32_MIN;
                 *dst_smax = S32_MAX;
         }
-        if (dst_reg->u32_min_value < umax_val) {
-                /* Overflow possible, we know nothing */
-                dst_reg->u32_min_value = 0;
-                dst_reg->u32_max_value = U32_MAX;
-        } else {
-                /* Cannot overflow (as long as bounds are consistent) */
-                dst_reg->u32_min_value -= umax_val;
-                dst_reg->u32_max_value -= umin_val;
+
+        /* If either all subtractions underflow or no subtractions
+         * underflow, it is okay to set: dst_umin = dst_umin - src_umax,
+         * dst_umax = dst_umax - src_umin. Otherwise (some subtractions
+         * underflow), set the output bounds to unbounded.
+         */
+        min_underflow = check_sub_overflow(*dst_umin, umax_val, dst_umin);
+        max_underflow = check_sub_overflow(*dst_umax, umin_val, dst_umax);
+
+        if (min_underflow && !max_underflow) {
+                *dst_umin = 0;
+                *dst_umax = U32_MAX;
         }
 }
 
@@ -14680,23 +14709,30 @@ static void scalar_min_max_sub(struct bpf_reg_state *dst_reg,
 {
         s64 *dst_smin = &dst_reg->smin_value;
         s64 *dst_smax = &dst_reg->smax_value;
+        u64 *dst_umin = &dst_reg->umin_value;
+        u64 *dst_umax = &dst_reg->umax_value;
         u64 umin_val = src_reg->umin_value;
         u64 umax_val = src_reg->umax_value;
+        bool min_underflow, max_underflow;
 
         if (check_sub_overflow(*dst_smin, src_reg->smax_value, dst_smin) ||
             check_sub_overflow(*dst_smax, src_reg->smin_value, dst_smax)) {
                 /* Overflow possible, we know nothing */
                 *dst_smin = S64_MIN;
                 *dst_smax = S64_MAX;
         }
-        if (dst_reg->umin_value < umax_val) {
-                /* Overflow possible, we know nothing */
-                dst_reg->umin_value = 0;
-                dst_reg->umax_value = U64_MAX;
-        } else {
-                /* Cannot overflow (as long as bounds are consistent) */
-                dst_reg->umin_value -= umax_val;
-                dst_reg->umax_value -= umin_val;
+
+        /* If either all subtractions underflow or no subtractions
+         * underflow, it is okay to set: dst_umin = dst_umin - src_umax,
+         * dst_umax = dst_umax - src_umin. Otherwise (some subtractions
+         * underflow), set the output bounds to unbounded.
+         */
+        min_underflow = check_sub_overflow(*dst_umin, umax_val, dst_umin);
+        max_underflow = check_sub_overflow(*dst_umax, umin_val, dst_umax);
+
+        if (min_underflow && !max_underflow) {
+                *dst_umin = 0;
+                *dst_umax = U64_MAX;
         }
 }

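(For the two subtraction hunks above, the same rule can be sketched in userspace as follows; the function name range_sub_u64_sketch and the use of __builtin_sub_overflow are illustrative assumptions, not kernel code.)

    #include <stdbool.h>
    #include <stdint.h>

    /*
     * The wrapped differences dst_umin - src_umax and dst_umax - src_umin
     * stay valid when either no subtraction underflows or every subtraction
     * underflows; only a partial underflow forces the unknown range
     * [0, U64_MAX].
     */
    static void range_sub_u64_sketch(uint64_t *dst_umin, uint64_t *dst_umax,
                                     uint64_t src_umin, uint64_t src_umax)
    {
        bool min_underflow = __builtin_sub_overflow(*dst_umin, src_umax, dst_umin);
        bool max_underflow = __builtin_sub_overflow(*dst_umax, src_umin, dst_umax);

        if (min_underflow && !max_underflow) {
            *dst_umin = 0;
            *dst_umax = UINT64_MAX;
        }
    }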