Add BDD-based rules engine trait #2703


Open · wants to merge 12 commits into main from mtbdd

Conversation

@mtdowling (Member) commented Jul 15, 2025

This commit updates the smithy-rules-engine package to support binary decision diagrams (BDD) to more efficiently resolve endpoints.

We create the BDD by converting the decision tree into a control flow graph (CFG), then compile the CFG to a BDD. The CFG canonicalizes conditions for better sharing (e.g., sorts commutative functions, expands simple string templates, etc.), and strips all conditions from results, hash-consing the results as well. Later, we'll migrate to emitting the BDD directly in order to shave off many conditions and results that can be simplified.

Our decision-tree-based rules engine requires deep branching logic to find results. When evaluating the path to a result for a given input, decision trees require descending into a branch, and if at any point a condition in the branch fails, you bail out and go back up to the next branch. This can cause pathological searches of the tree (e.g., 60+ repeated checks on things like isSet and booleanEquals to resolve S3 endpoints). In fact, there are currently ~73,000 unique paths through the decision tree for S3 rules.

Using a BDD (a fully reduced one at least) guarantees that we only evaluate any given condition at most once, and only when that condition actually discriminates the result. This is achieved by recursively converting the CFG into BDD nodes using ITE (if-then-else) operations, choosing a variable ordering that honors dependencies between conditions and variable bindings. The BDD builder applies Shannon expansion during ITE operations and uses hash-consing to share common subgraphs.
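
For illustration, the sketch below shows the general shape of an ITE-based builder with Shannon expansion and a hash-consing unique table. It is a minimal, self-contained example and intentionally omits the parts specific to this PR (complement edges, result terminals, dependency-aware variable ordering); the class and method names are made up and are not the smithy-rules-engine API.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

/** Minimal ROBDD builder: ITE via Shannon expansion plus a hash-consing unique table. */
final class BddSketch {
    /** A decision node: test variable {@code varIndex}, go high when true, low when false. */
    record Node(int varIndex, int high, int low) {}

    static final int FALSE = 0;
    static final int TRUE = 1;

    private final List<Node> nodes = new ArrayList<>();
    private final Map<Node, Integer> unique = new HashMap<>();          // hash-consing table
    private final Map<List<Integer>, Integer> iteCache = new HashMap<>();

    BddSketch() {
        nodes.add(new Node(Integer.MAX_VALUE, FALSE, FALSE));           // index 0: FALSE terminal
        nodes.add(new Node(Integer.MAX_VALUE, TRUE, TRUE));             // index 1: TRUE terminal
    }

    /** Returns a reference to the BDD for a single condition variable. */
    int variable(int varIndex) {
        return makeNode(varIndex, TRUE, FALSE);
    }

    /** if-then-else: (f AND g) OR (NOT f AND h). */
    int ite(int f, int g, int h) {
        if (f == TRUE) return g;
        if (f == FALSE) return h;
        if (g == h) return g;
        List<Integer> key = List.of(f, g, h);
        Integer cached = iteCache.get(key);
        if (cached != null) return cached;

        // Shannon expansion on the top-most variable among f, g, and h.
        int v = Math.min(topVar(f), Math.min(topVar(g), topVar(h)));
        int high = ite(cofactor(f, v, true), cofactor(g, v, true), cofactor(h, v, true));
        int low = ite(cofactor(f, v, false), cofactor(g, v, false), cofactor(h, v, false));
        int result = makeNode(v, high, low);
        iteCache.put(key, result);
        return result;
    }

    private int makeNode(int varIndex, int high, int low) {
        if (high == low) return high;                                   // redundant test: skip the node
        Node node = new Node(varIndex, high, low);
        return unique.computeIfAbsent(node, n -> {                      // share identical subgraphs
            nodes.add(n);
            return nodes.size() - 1;
        });
    }

    private int topVar(int ref) {
        return nodes.get(ref).varIndex();                               // terminals sort last
    }

    private int cofactor(int ref, int varIndex, boolean value) {
        Node n = nodes.get(ref);
        if (n.varIndex() != varIndex) return ref;                       // ref does not depend on varIndex
        return value ? n.high() : n.low();
    }
}
```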

The "bdd" trait has most of the same information as the endpointRuleset trait, but doesn't include "rules". Instead it contains a base64 encoded "nodes" value that contains the zig-zag variable-length encoded node triples, one after the other (this is much more compact and efficient to decode than 1000+ JSON array nodes).

The BDD implementation uses CUDD-style complement edges where negative node references represent logical NOT, further reducing BDD size.
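
A minimal sketch of how complement edges are typically consumed during evaluation, assuming a simplified reference convention (1 = TRUE terminal, negative = complemented, |ref| >= 2 indexes the node array); the real trait also encodes result terminals, so this is illustrative only:

```java
import java.util.function.IntPredicate;

/** Walks a BDD whose negative references mean "the referenced function, negated". */
final class ComplementEdgeWalk {
    static boolean evaluate(int[][] nodes, int rootRef, IntPredicate condition) {
        int ref = rootRef;
        boolean complemented = false;
        while (Math.abs(ref) != 1) {
            if (ref < 0) {                 // complement edge: record the negation
                complemented = !complemented;
                ref = -ref;
            }
            int[] n = nodes[ref - 2];      // {conditionIndex, highRef, lowRef}
            ref = condition.test(n[0]) ? n[1] : n[2];
        }
        // Terminal 1 is TRUE, -1 its complement; fold in negations seen along the path.
        return (ref == 1) != complemented;
    }
}
```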

BDD output examples

AWS Connect BDD output
Bdd{
  conditions (8):
     C0: isSet(Endpoint)
     C1: isSet(Region)
     C2: PartitionResult = aws.partition(Region)
     C3: booleanEquals(UseFIPS, true)
     C4: booleanEquals(UseDualStack, true)
     C5: booleanEquals(PartitionResult#supportsDualStack, true)
     C6: booleanEquals(PartitionResult#supportsFIPS, true)
     C7: stringEquals("aws-us-gov", PartitionResult#name)
  results (13):
     R0: NoMatchRule
     R1: Error: "Invalid Configuration: FIPS and custom endpoint are not supported"
     R2: Error: "Invalid Configuration: Dualstack and custom endpoint are not supported"
     R3: Endpoint: Endpoint
     R4: Endpoint: "https://connect-fips.{Region}.{PartitionResult#dualStackDnsSuffix}"
     R5: Error: "FIPS and DualStack are enabled, but this partition does not support one or both"
     R6: Endpoint: "https://connect.{Region}.amazonaws.com"
     R7: Endpoint: "https://connect-fips.{Region}.{PartitionResult#dnsSuffix}"
     R8: Error: "FIPS is enabled but this partition does not support FIPS"
     R9: Endpoint: "https://connect.{Region}.{PartitionResult#dualStackDnsSuffix}"
    R10: Error: "DualStack is enabled but this partition does not support DualStack"
    R11: Endpoint: "https://connect.{Region}.{PartitionResult#dnsSuffix}"
    R12: Error: "Invalid Configuration: Missing Region"
  root: 1
  nodes (14):
     0: terminal
     1: [ C0,     12,      2]
     2: [ C1,      3,    R12]
     3: [ C2,      4,    R12]
     4: [ C3,      7,      5]
     5: [ C4,      6,    R11]
     6: [ C5,     R9,    R10]
     7: [ C4,     10,      8]
     8: [ C6,      9,     R8]
     9: [ C7,     R6,     R7]
    10: [ C5,     11,     R5]
    11: [ C6,     R4,     R5]
    12: [ C3,     R1,     13]
    13: [ C4,     R2,     R3]
}
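
To read the table: each node is [condition, reference-if-true, reference-if-false]. For example, with a hypothetical input of no Endpoint override, Region set, UseFIPS=false, and UseDualStack=false, evaluation starts at the root (node 1): C0 is false → node 2, C1 is true → node 3, C2 binds PartitionResult → node 4, C3 is false → node 5, C4 is false → R11, i.e. the endpoint "https://connect.{Region}.{PartitionResult#dnsSuffix}".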
bdd trait
{
    "version": "1.3",
    "parameters": {
        "Region": {
            "builtIn": "AWS::Region",
            "required": false,
            "documentation": "The AWS region used to dispatch the request.",
            "type": "String"
        },
        "UseDualStack": {
            "builtIn": "AWS::UseDualStack",
            "required": true,
            "default": false,
            "documentation": "When true, use the dual-stack endpoint. If the configured endpoint does not support dual-stack, dispatching the request MAY return an error.",
            "type": "Boolean"
        },
        "UseFIPS": {
            "builtIn": "AWS::UseFIPS",
            "required": true,
            "default": false,
            "documentation": "When true, send this request to the FIPS-compliant regional endpoint. If the configured endpoint does not have a FIPS compliant endpoint, dispatching the request will return an error.",
            "type": "Boolean"
        },
        "Endpoint": {
            "builtIn": "SDK::Endpoint",
            "required": false,
            "documentation": "Override the endpoint used to send this request",
            "type": "String"
        }
    },
    "conditions": [
        {
            "fn": "isSet",
            "argv": [
                {
                    "ref": "Endpoint"
                }
            ]
        },
        {
            "fn": "isSet",
            "argv": [
                {
                    "ref": "Region"
                }
            ]
        },
        {
            "fn": "aws.partition",
            "argv": [
                {
                    "ref": "Region"
                }
            ],
            "assign": "PartitionResult"
        },
        {
            "fn": "booleanEquals",
            "argv": [
                {
                    "ref": "UseFIPS"
                },
                true
            ]
        },
        {
            "fn": "booleanEquals",
            "argv": [
                {
                    "ref": "UseDualStack"
                },
                true
            ]
        },
        {
            "fn": "booleanEquals",
            "argv": [
                {
                    "fn": "getAttr",
                    "argv": [
                        {
                            "ref": "PartitionResult"
                        },
                        "supportsDualStack"
                    ]
                },
                true
            ]
        },
        {
            "fn": "booleanEquals",
            "argv": [
                {
                    "fn": "getAttr",
                    "argv": [
                        {
                            "ref": "PartitionResult"
                        },
                        "supportsFIPS"
                    ]
                },
                true
            ]
        },
        {
            "fn": "stringEquals",
            "argv": [
                "aws-us-gov",
                {
                    "fn": "getAttr",
                    "argv": [
                        {
                            "ref": "PartitionResult"
                        },
                        "name"
                    ]
                }
            ]
        }
    ],
    "results": [
        {},
        {
            "error": "Invalid Configuration: FIPS and custom endpoint are not supported",
            "type": "error"
        },
        {
            "error": "Invalid Configuration: Dualstack and custom endpoint are not supported",
            "type": "error"
        },
        {
            "endpoint": {
                "url": {
                    "ref": "Endpoint"
                },
                "properties": {},
                "headers": {}
            },
            "type": "endpoint"
        },
        {
            "endpoint": {
                "url": "https://connect-fips.{Region}.{PartitionResult#dualStackDnsSuffix}",
                "properties": {},
                "headers": {}
            },
            "type": "endpoint"
        },
        {
            "error": "FIPS and DualStack are enabled, but this partition does not support one or both",
            "type": "error"
        },
        {
            "endpoint": {
                "url": "https://connect.{Region}.amazonaws.com",
                "properties": {},
                "headers": {}
            },
            "type": "endpoint"
        },
        {
            "endpoint": {
                "url": "https://connect-fips.{Region}.{PartitionResult#dnsSuffix}",
                "properties": {},
                "headers": {}
            },
            "type": "endpoint"
        },
        {
            "error": "FIPS is enabled but this partition does not support FIPS",
            "type": "error"
        },
        {
            "endpoint": {
                "url": "https://connect.{Region}.{PartitionResult#dualStackDnsSuffix}",
                "properties": {},
                "headers": {}
            },
            "type": "endpoint"
        },
        {
            "error": "DualStack is enabled but this partition does not support DualStack",
            "type": "error"
        },
        {
            "endpoint": {
                "url": "https://connect.{Region}.{PartitionResult#dnsSuffix}",
                "properties": {},
                "headers": {}
            },
            "type": "endpoint"
        },
        {
            "error": "Invalid Configuration: Missing Region",
            "type": "error"
        }
    ],
    "root": 2,
    "nodes": "AQIBACwGAggKBAwKKAIBBhgOCBIQJgIBChYUJAIBIgIBCCQaDB4cIAIBDiIgHgIBHAIBCiYoDCooGgIBGAIBBjQuCDIwFgIBFAIBEgIB"
}

Endpoint rules: BDD vs Decision tree size comparison

Regional service

  • BDD: Pretty=4.4 KB; Minified=2.8 KB
  • Decision tree: Pretty=9.7 KB; Minified=3.7 KB

S3

  • BDD: Pretty=67 KB; Minified=42 KB
  • Decision tree: Pretty=427 KB; Minified=96 KB

By submitting this pull request, I confirm that my contribution is made under the terms of the Apache 2.0 license.

@mtdowling mtdowling requested a review from a team as a code owner July 15, 2025 21:47
@mtdowling mtdowling requested a review from JordonPhillips July 15, 2025 21:47
@mtdowling mtdowling force-pushed the mtbdd branch 4 times, most recently from 25c0e7f to 16503fe Compare July 16, 2025 20:37
@JordonPhillips (Contributor) left a comment

I'm not done reviewing, but I thought I should at least post what I have. I still need to look at the bdd sifting and tests.

overall looks great

@JordonPhillips (Contributor) left a comment

When are we going to be running the BDD optimizations? I think it would make sense to do either prior to code generation, or better as a sort of pre-compile/formatting step. The latter would make sure it's only done once, but maybe a generator wouldn't want to trust that

@mtdowling (Member, Author) commented
> When are we going to be running the BDD optimizations?

I don't think anyone doing code generation from a Bdd trait will want to optimize at all. We'll only ship already-optimized BDDs.

In the future, I want us to eventually ship just the BDD trait and not the current decision tree trait. We'd do the optimizations at the end of the build process that computes the BDD (sifting, reversal, etc.).

When building BDDs manually because you just have the decision tree and no BDD trait, you can choose to either optimize or not based on your "budget".

@range(min: 0)
nodeCount: Integer

/// Base64-encoded array of BDD nodes representing the decision graph structure.
Contributor

I think the base64 encoding will make this trait difficult to write and review. How do you envision the development process for these?

Member Author

The alternative is to embed thousands of numbers in arrays of arrays, which is just as unreadable and significantly more JSON to parse. The zig-zag encoding of the numbers' binary representation gives a much more compact form and lets consumers of the trait parse it directly into whatever data structure they want (e.g., in Java, we'd use int[][] instead of a List for performance).

I don't envision people authoring BDDs by hand. They'll typically be generated from something else. I will probably add some code in future PRs to make that easier too.

Contributor

So the expectation is that people write an endpointRuleset and then use some transformer like convertRulesetToBdd that also filters the endpointRuleset trait? That should be fine.

We'd also want some other tooling to make it easy to work with, like being able to compile/optimize from the command line, e.g. smithy rules optimize --timeout X --exhaustiveness X .... Back-porting the optimizations to the endpointRuleset trait would also be cool. We do have one in the CFG that we use while optimizing. And something to pretty-print the BDD.

All that can be done later though.

Member Author

Yep. I plan to add an API that contributes paths to results to a CFG, and then combines them all into one big ITE chain that the BDD can turn into a compressed DAG representation.

@mtdowling mtdowling requested a review from JordonPhillips July 21, 2025 19:56
@mtdowling mtdowling force-pushed the mtbdd branch 2 times, most recently from 8ed7d56 to f453664 Compare July 28, 2025 15:29
@kstich (Contributor) left a comment

Partial review, will continue tomorrow but posting comments so they can be discussed/addressed.

@mtdowling mtdowling force-pushed the mtbdd branch 3 times, most recently from c648fa5 to 219e5e1 Compare July 31, 2025 21:30
@mtdowling mtdowling requested a review from kstich August 1, 2025 17:41
@mtdowling mtdowling requested a review from kstich August 2, 2025 00:40
Rather than have the Bdd class contain Conditions, Results, Parameters, etc., it now just deals with nodes. It also hides the implementation detail of how the BDD nodes are laid out internally, and BDD evaluation is internalized to the Bdd class rather than living in a separate BddEvaluator. This change provides faster evaluation, makes it possible to change the internal node data layout if necessary, and cleans up all the interactions we had with BddTrait (no need to always reach into Bdd). We were using the wrong condition ordering in BddTrait after compiling a Bdd from the CFG, leading to a totally broken BDD.

This also adds some tests and fixes, and generalizes the BddTrait transforms. Doing so revealed a bug in the BDD compilation process that was causing negated nodes to get added twice.

The varint encoding does help compact the binary node array, but it adds maybe a bit too much decoding complexity for only a 20-30% size reduction, and most of the size comes from conditions and results.
Our previous initial ordering could result in pathological orderings if it decided to move something that appears very early in the CFG to very late in the ordering. This is in fact what happened when I added a coalesce method: it moved an early discriminating condition to very late, which blew up the BDD from ~40K nodes to 5.1M. This taught me that we really shouldn't throw away the ordering found in the CFG, and instead should leverage it when determining the initial ordering, since it inherently gates logic and keeps related conditions together.

So now the initial ordering is based on the CFG ordering and also on cone analysis (basically how many downstream nodes a node affects). We now get an initial ordering of ~3K nodes, and with the coalesce method, we can sift S3 down to ~800 nodes instead of ~1000.
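
As a hypothetical illustration of the cone-size part of that heuristic (the CfgNode type and how the size feeds into the final ordering are assumptions, not the actual implementation):

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

/** Counts, for each CFG node, how many downstream nodes it can reach ("cone size"). */
final class ConeSizeSketch {
    interface CfgNode {
        List<CfgNode> successors();
    }

    static Map<CfgNode, Integer> coneSizes(List<CfgNode> nodesInCfgOrder) {
        Map<CfgNode, Integer> sizes = new HashMap<>();
        for (CfgNode node : nodesInCfgOrder) {
            Set<CfgNode> seen = new HashSet<>();
            collect(node, seen);
            sizes.put(node, seen.size() - 1);   // exclude the node itself
        }
        return sizes;
    }

    private static void collect(CfgNode node, Set<CfgNode> seen) {
        if (seen.add(node)) {
            for (CfgNode next : node.successors()) {
                collect(next, seen);
            }
        }
    }
}
```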

The coalesce function is added here so that we can fold bind-then-test conditions into a single condition. The current endpoints type system has strict nullability requirements, so you can't do a substring test and pass the result directly into something that expects a non-null value. You have to first call the nullable function, assign the result to a value, and then the next condition is inherently guarded and only evaluated if the assigned value is non-null (the assignment acts as an implicit guard). The coalesce function allows us to identify these patterns and inline the test into a single condition by defaulting null to the zero value of the return type (integer=0, string="", array=[]). We only coalesce when the comparison is not to literally the zero value. When coalesce was added, it uncovered the original brittle ordering, leading to the much improved ordering in this PR.
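
A made-up illustration of the bind-then-test pattern this targets, using the condition notation from the BDD dump above (the names and literals are hypothetical, and the exact shape of the coalesced condition is an assumption):

```
Before (bind, then guarded test):
  C_a: Label = substring(Bucket, 0, 6, false)
  C_b: stringEquals(Label, "--suffix")

After (null defaults to the string zero value ""):
  C: stringEquals(coalesce(substring(Bucket, 0, 6, false), ""), "--suffix")
```

The fold is only valid here because the compared literal is not itself the zero value (the empty string).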