From 91d128a149e80b156fd5196354151a1e2ec93537 Mon Sep 17 00:00:00 2001 From: Minggang Wang Date: Fri, 29 Aug 2025 17:23:53 +0800 Subject: [PATCH 1/4] Re-enable benchmark --- benchmark/README.md | 140 ++++++++++++++---- benchmark/rclcpp/CMakeLists.txt | 11 +- benchmark/rclcpp/package.xml | 18 ++- .../rclcpp/service/client-stress-test.cpp | 45 ++++-- .../rclcpp/service/service-stress-test.cpp | 33 ++--- benchmark/rclcpp/startup/startup-test.cpp | 21 --- .../rclcpp/topic/publisher-stress-test.cpp | 90 ++++++----- .../rclcpp/topic/subscription-stress-test.cpp | 9 +- .../rclnodejs/service/client-stress-test.js | 122 +++++++++++---- .../rclnodejs/service/service-stress-test.js | 93 ++++++------ benchmark/rclnodejs/startup/startup-test.js | 21 --- .../rclnodejs/topic/publisher-stress-test.js | 91 ++++++++---- .../topic/subscription-stress-test.js | 36 +++-- benchmark/rclpy/service/client-stress-test.py | 83 +++++++---- .../rclpy/service/service-stress-test.py | 124 +++++++++------- benchmark/rclpy/startup/startup-test.py | 23 --- .../rclpy/topic/publisher-stress-test.py | 111 +++++++------- .../rclpy/topic/subscription-stress-test.py | 35 +++-- 18 files changed, 672 insertions(+), 434 deletions(-) delete mode 100644 benchmark/rclcpp/startup/startup-test.cpp delete mode 100644 benchmark/rclnodejs/startup/startup-test.js mode change 100644 => 100755 benchmark/rclpy/service/client-stress-test.py mode change 100644 => 100755 benchmark/rclpy/service/service-stress-test.py delete mode 100644 benchmark/rclpy/startup/startup-test.py mode change 100644 => 100755 benchmark/rclpy/topic/publisher-stress-test.py mode change 100644 => 100755 benchmark/rclpy/topic/subscription-stress-test.py diff --git a/benchmark/README.md b/benchmark/README.md index 388bbdd5..7527119f 100644 --- a/benchmark/README.md +++ b/benchmark/README.md @@ -1,47 +1,131 @@ -# Benchmarks for ROS 2.0 clients +# ROS 2 Client Library Benchmarks -This folder contains code used to measure the performance between 
different ROS 2.0 clients.
+Performance benchmarks for comparing ROS 2 client libraries: C++ (rclcpp), Python (rclpy), and JavaScript (rclnodejs).
 
 ## Prerequisites
 
-1.Install ROS 2.0 from binary
+1. **ROS 2**: Install from [ros.org](https://docs.ros.org/en/jazzy/Installation.html)
+2. **Node.js**: v16+ for rclnodejs (from [nodejs.org](https://nodejs.org/))
+3. **rclnodejs**: Follow [installation guide](https://github.com/RobotWebTools/rclnodejs#installation)
 
-You can download the latest binary package of ROS2 from [here](http://ci.ros2.org/view/packaging/) and follow the instructions to setup the environment.
+## Benchmark Structure
 
-- [Linux](https://github.com/ros2/ros2/wiki/Linux-Install-Binary)
-- [macOS](https://github.com/ros2/ros2/wiki/OSX-Install-Binary)
-- [Windows](https://github.com/ros2/ros2/wiki/Windows-Install-Binary)
+Each client library has identical benchmark tests:
 
- 2.Install Node.js
+| Test Type   | Description                                 |
+| ----------- | ------------------------------------------- |
+| **topic**   | Publisher/subscriber performance            |
+| **service** | Client/service request-response performance |
 
-Download the latest LTS edition from https://nodejs.org/en/
+## Performance Results
 
-3.[Get the code](https://github.com/RobotWebTools/rclnodejs#get-code) and [install](https://github.com/RobotWebTools/rclnodejs#build-module)
+### Test Environment
 
-## Benchmark directories
+**Hardware:**
 
-The table lists the directories for each kind of the ROS 2.0 clients.
+- **CPU:** Intel(R) Core(TM) i9-10900X @ 3.70GHz (10 cores, 20 threads)
+- **Memory:** 32GB RAM
+- **Architecture:** x86_64
 
-| Directory | Purpose |
-| :-------: | ------------------------------------------------------ |
-| topic | Benchmarks for `publisher` and `subscription` features |
-| service | Benchmarks for `client` and `service` features |
-| startup | Benchmarks for measuring the startup time consumption |
+**Software:**
 
-## Run tests
+- **OS:** Ubuntu 24.04.3 LTS (WSL2)
+- **ROS 2:** Jazzy distribution
+- **C++ Compiler:** GCC 13.3.0
+- **Python:** 3.12.3
+- **Node.js:** v22.18.0
 
-1.Benchmark for [rclcpp](https://github.com/ros2/rclcpp)
+### Benchmark Results
 
-- Compile the source files, `cd benchmark/rclcpp/` and run `colcon build`
-- The executable files locate at `build/rclcpp_benchmark/`
-- `your_benchmark -h` for help.
+Benchmark parameters: 1000 iterations, 1024KB message size
 
- 2.Benchmark for [rclpy](https://github.com/ros2/rclpy)
+| Client Library          | Topic (ms) | Service (ms) | Performance Ratio             |
+| ----------------------- | ---------- | ------------ | ----------------------------- |
+| **rclcpp (C++)**        | 437        | 8,129        | Baseline (fastest)            |
+| **rclpy (Python)**      | 2,294      | 25,519       | 5.3x / 3.1x slower            |
+| **rclnodejs (Node.js)** | 2,075      | 3,420\*      | 4.7x slower / 2.4x faster\*\* |
 
-- Enter the Python scripts folder `benchmark/rclpy/`
-- `python3 your_benchmark -h` for help.
+_Last updated: August 29, 2025_
 
- 3.Benchmark for rclnodejs
+**Notes:**
 
-- Enter the Node.js scripts folder `benchmark/rclnodejs/`
-- `node your_benchmark -h` for help.
+- Topic benchmarks: All libraries completed successfully with 1024KB messages +- Service benchmarks: C++ and Python completed with 1024KB responses; Node.js completed with minimal data +- \*Node.js service uses minimal response data due to serialization issues with large (1024KB) payloads +- \*\*Node.js service performance is surprisingly good with small data, but not directly comparable due to different data sizes +- Performance ratios are relative to C++ baseline + +## Running Benchmarks + +### C++ (rclcpp) + +```bash +cd benchmark/rclcpp/ +colcon build +./build/rclcpp_benchmark/publisher-stress-test -r 1000 -s 1024 +./build/rclcpp_benchmark/client-stress-test -r 1000 +``` + +### Python (rclpy) + +```bash +cd benchmark/rclpy/ +source ~/Download/ros2-linux/local_setup.bash # Adjust path +python3 topic/publisher-stress-test.py -r 1000 -s 1024 +python3 service/client-stress-test.py -r 1000 +``` + +### JavaScript (rclnodejs) + +```bash +cd /path/to/rclnodejs/ # Project root +source ~/Download/ros2-linux/local_setup.bash # Adjust path +node benchmark/rclnodejs/topic/publisher-stress-test.js -r 1000 -s 1024 +node benchmark/rclnodejs/service/client-stress-test.js -r 1000 +``` + +## Test Workflow + +For complete tests, run subscriber/service first, then publisher/client: + +**Topic Test:** + +```bash +# Terminal 1: Start subscriber (adjust for your language) +python3 topic/subscription-stress-test.py # Python +./build/rclcpp_benchmark/subscription-stress-test # C++ +node benchmark/rclnodejs/topic/subscription-stress-test.js # Node.js + +# Terminal 2: Run publisher benchmark +python3 topic/publisher-stress-test.py -r 1000 -s 1024 # Python +./build/rclcpp_benchmark/publisher-stress-test -r 1000 -s 1024 # C++ +node benchmark/rclnodejs/topic/publisher-stress-test.js -r 1000 -s 1024 # Node.js +``` + +**Service Test:** + +```bash +# Terminal 1: Start service (adjust for your language) +python3 service/service-stress-test.py # Python 
+./build/rclcpp_benchmark/service-stress-test # C++ +node benchmark/rclnodejs/service/service-stress-test.js # Node.js + +# Terminal 2: Run client benchmark +python3 service/client-stress-test.py -r 1000 # Python +./build/rclcpp_benchmark/client-stress-test -r 1000 # C++ +node benchmark/rclnodejs/service/client-stress-test.js -r 1000 # Node.js +``` + +## Message Types + +- **Topics**: `std_msgs/msg/UInt8MultiArray` (configurable size) +- **Services**: `nav_msgs/srv/GetMap` (map data request/response) + +## Notes + +- All benchmarks use high-precision timing for accurate measurements +- C++ provides baseline performance; Python/JavaScript show expected interpreted language overhead +- All implementations are now working correctly across the three client libraries +- Results vary by system configuration; use relative comparisons between client libraries +- Service benchmarks involve request-response cycles with substantial data payloads (OccupancyGrid maps) diff --git a/benchmark/rclcpp/CMakeLists.txt b/benchmark/rclcpp/CMakeLists.txt index 0d1483df..531b4bc8 100644 --- a/benchmark/rclcpp/CMakeLists.txt +++ b/benchmark/rclcpp/CMakeLists.txt @@ -1,10 +1,14 @@ -cmake_minimum_required(VERSION 3.5) +cmake_minimum_required(VERSION 3.8) project(rclcpp_benchmark) -# Default to C++14 +# Default to C++17 if(NOT CMAKE_CXX_STANDARD) - set(CMAKE_CXX_STANDARD 14) + set(CMAKE_CXX_STANDARD 17) +endif() + +if(CMAKE_CXX_STANDARD LESS 17) + set(CMAKE_CXX_STANDARD 17) endif() if(CMAKE_COMPILER_IS_GNUCXX OR CMAKE_CXX_COMPILER_ID MATCHES "Clang") @@ -39,6 +43,5 @@ custom_executable(publisher-stress-test topic/publisher-stress-test.cpp) custom_executable(subscription-stress-test topic/subscription-stress-test.cpp) custom_executable(client-stress-test service/client-stress-test.cpp) custom_executable(service-stress-test service/service-stress-test.cpp) -custom_executable(startup-test startup/startup-test.cpp) ament_package() diff --git a/benchmark/rclcpp/package.xml 
b/benchmark/rclcpp/package.xml
index 1a76f283..8b4c8e16 100644
--- a/benchmark/rclcpp/package.xml
+++ b/benchmark/rclcpp/package.xml
@@ -1,14 +1,24 @@
-<?xml version="1.0"?>
-<package>
+<?xml version="1.0"?>
+<package format="3">
   <name>rclcpp_benchmark</name>
   <version>0.0.1</version>
   <description>Benchmark for rclcpp</description>
   <maintainer email="...">Minggang Wang</maintainer>
   <license>Apache License 2.0</license>
+
   <buildtool_depend>ament_cmake</buildtool_depend>
-  <build_depend>rclcpp</build_depend>
-  <exec_depend>rclcpp</exec_depend>
+
+  <depend>rclcpp</depend>
+  <depend>example_interfaces</depend>
+  <depend>sensor_msgs</depend>
+  <depend>std_msgs</depend>
+  <depend>std_srvs</depend>
+  <depend>nav_msgs</depend>
+
+  <test_depend>ament_lint_auto</test_depend>
+  <test_depend>ament_lint_common</test_depend>
+
   <export>
     <build_type>ament_cmake</build_type>
   </export>
 </package>
diff --git a/benchmark/rclcpp/service/client-stress-test.cpp b/benchmark/rclcpp/service/client-stress-test.cpp
index 14719f51..5fee722d 100644
--- a/benchmark/rclcpp/service/client-stress-test.cpp
+++ b/benchmark/rclcpp/service/client-stress-test.cpp
@@ -22,11 +22,10 @@
 #include "utilities.hpp"
 
 void ShowUsage(const std::string name) {
-  std::cerr << "Usage: " << name << " [options]\n"
-            << "\nOptions:\n"
-            << "\n--run <n>\tHow many times to run\n"
-            << "--help \toutput usage information"
-            << std::endl;
+  std::cerr << "Usage: " << name << " [options]\n"
+            << "\nOptions:\n"
+            << "\n-r, --run <n>\tHow many times to run\n"
+            << "-h, --help \toutput usage information" << std::endl;
 }
 
 int main(int argc, char* argv[]) {
@@ -36,15 +35,18 @@
   for (int i = 1; i < argc; i++) {
     std::string arg = argv[i];
     if ((arg == "-h") || (arg == "--help")) {
-      ShowUsage(argv[0]);
-      return 0;
+      ShowUsage(argv[0]);
+      return 0;
     } else if (arg.find("--run=") != std::string::npos) {
-      total_times = std::stoi(arg.substr(arg.find("=") + 1));
+      total_times = std::stoi(arg.substr(arg.find("=") + 1));
+    } else if (arg == "-r" && i + 1 < argc) {
+      total_times = std::stoi(argv[++i]);
     }
   }
 
   printf(
       "The client will send a GetMap request continuously until receiving "
-      "response %d times.\n", total_times);
+      "response %d times.\n",
+      total_times);
 
   auto start = std::chrono::high_resolution_clock::now();
   auto node = rclcpp::Node::make_shared("stress_client_rclcpp");
@@ -52,18 +54,35 @@
   auto client = node->create_client<nav_msgs::srv::GetMap>("get_map");
   auto request = std::make_shared<nav_msgs::srv::GetMap::Request>();
   auto received_times = 0;
 
+  //
Wait for service to be available + while (!client->wait_for_service(std::chrono::seconds(1))) { + if (!rclcpp::ok()) { + RCLCPP_ERROR(rclcpp::get_logger("rclcpp"), + "Interrupted while waiting for the service. Exiting."); + return 0; + } + RCLCPP_INFO(rclcpp::get_logger("rclcpp"), + "Service not available, waiting again..."); + } + while (rclcpp::ok()) { - if (received_times > total_times) { + if (received_times >= total_times) { rclcpp::shutdown(); auto end = std::chrono::high_resolution_clock::now(); LogTimeConsumption(start, end); + break; } else { auto result_future = client->async_send_request(request); - if (rclcpp::spin_until_future_complete(node, result_future) != - rclcpp::executor::FutureReturnCode::SUCCESS) { - RCLCPP_ERROR(node->get_logger(), "service call failed."); + + // Spin until future is ready using the node directly + auto future_status = rclcpp::spin_until_future_complete( + node, result_future, std::chrono::seconds(10)); + + if (future_status != rclcpp::FutureReturnCode::SUCCESS) { + RCLCPP_ERROR(node->get_logger(), "Service call failed or timed out."); return 1; } + auto result = result_future.get(); received_times++; } diff --git a/benchmark/rclcpp/service/service-stress-test.cpp b/benchmark/rclcpp/service/service-stress-test.cpp index 7001151a..1c2dd235 100644 --- a/benchmark/rclcpp/service/service-stress-test.cpp +++ b/benchmark/rclcpp/service/service-stress-test.cpp @@ -21,11 +21,10 @@ #include "rclcpp/rclcpp.hpp" void ShowUsage(const std::string name) { - std::cerr << "Usage: " << name << " [options]\n" - << "\nOptions:\n" - << "\n--size [size_kb]\tThe block size\n" - << "--help \toutput usage information" - << std::endl; + std::cerr << "Usage: " << name << " [options]\n" + << "\nOptions:\n" + << "\n-s, --size [size_kb]\tThe block size\n" + << "-h, --help \toutput usage information" << std::endl; } int main(int argc, char* argv[]) { @@ -35,26 +34,24 @@ int main(int argc, char* argv[]) { for (int i = 1; i < argc; i++) { std::string arg = 
argv[i];
     if ((arg == "-h") || (arg == "--help")) {
-      ShowUsage(argv[0]);
-      return 0;
+      ShowUsage(argv[0]);
+      return 0;
     } else if (arg.find("--size=") != std::string::npos) {
-      amount = std::stoi(arg.substr(arg.find("=") + 1));
+      amount = std::stoi(arg.substr(arg.find("=") + 1));
+    } else if (arg == "-s" && i + 1 < argc) {
+      amount = std::stoi(argv[++i]);
     }
   }
 
   auto node = rclcpp::Node::make_shared("stress_service_rclcpp");
-  auto sub = node->create_service<nav_msgs::srv::GetMap>(
+  auto service = node->create_service<nav_msgs::srv::GetMap>(
       "get_map",
-      [amount](const std::shared_ptr<rmw_request_id_t> request_header,
-               const std::shared_ptr<nav_msgs::srv::GetMap::Request> request,
-               const std::shared_ptr<nav_msgs::srv::GetMap::Response> response) {
-        (void)request_header;
+      [amount](const std::shared_ptr<nav_msgs::srv::GetMap::Request> request,
+               std::shared_ptr<nav_msgs::srv::GetMap::Response> response) {
         (void)request;
-        response->map.header.stamp.sec = 123456;
-        response->map.header.stamp.nanosec = 789;
+        response->map.header.stamp = rclcpp::Clock().now();
         response->map.header.frame_id = "main_frame";
-        response->map.info.map_load_time.sec = 123456;
-        response->map.info.map_load_time.nanosec = 789;
+        response->map.info.map_load_time = rclcpp::Clock().now();
         response->map.info.resolution = 1.0;
         response->map.info.width = 1024;
         response->map.info.height = 768;
@@ -64,7 +61,7 @@
         response->map.info.origin.orientation.x = 0.0;
         response->map.info.origin.orientation.y = 0.0;
         response->map.info.origin.orientation.z = 0.0;
-        response->map.info.origin.orientation.w = 0.0;
+        response->map.info.origin.orientation.w = 1.0;
         response->map.data = std::vector<int8_t>(1024 * amount, 125);
       });
   rclcpp::spin(node);
diff --git a/benchmark/rclcpp/startup/startup-test.cpp b/benchmark/rclcpp/startup/startup-test.cpp
deleted file mode 100644
index 97ae709e..00000000
--- a/benchmark/rclcpp/startup/startup-test.cpp
+++ /dev/null
@@ -1,21 +0,0 @@
-// Copyright (c) 2018 Intel Corporation. All rights reserved.
-//
-// Licensed under the Apache License, Version 2.0 (the "License");
-// you may not use this file except in compliance with the License.
-// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. - -#include "rclcpp/rclcpp.hpp" - -int main(int argc, char* argv[]) { - rclcpp::init(argc, argv); - rclcpp::Node::make_shared("startup_node"); - return 0; -} diff --git a/benchmark/rclcpp/topic/publisher-stress-test.cpp b/benchmark/rclcpp/topic/publisher-stress-test.cpp index a8c60106..eb8ee97a 100644 --- a/benchmark/rclcpp/topic/publisher-stress-test.cpp +++ b/benchmark/rclcpp/topic/publisher-stress-test.cpp @@ -23,12 +23,11 @@ #include "utilities.hpp" void ShowUsage(const std::string name) { - std::cerr << "Usage: " << name << " [options]\n" - << "\nOptions:\n" - << "\n--size=[size_kb]\tThe block size\n" - << "--run= \tHow many times to run\n" - << "--help \toutput usage information" - << std::endl; + std::cerr << "Usage: " << name << " [options]\n" + << "\nOptions:\n" + << "\n-s, --size=[size_kb]\tThe block size\n" + << "-r, --run= \tHow many times to run\n" + << "-h, --help \toutput usage information" << std::endl; } int main(int argc, char* argv[]) { @@ -36,61 +35,72 @@ int main(int argc, char* argv[]) { auto amount = 0; for (int i = 1; i < argc; i++) { - std::string arg = argv[i]; - if ((arg == "-h") || (arg == "--help")) { - ShowUsage(argv[0]); - return 0; - } else if (arg.find("--size=") != std::string::npos) { - amount = std::stoi(arg.substr(arg.find("=") + 1)); - } else if (arg.find("--run=") != std::string::npos) { - total_times = std::stoi(arg.substr(arg.find("=") + 1)); - } + std::string arg = argv[i]; + if ((arg == "-h") || (arg == "--help")) { + ShowUsage(argv[0]); + return 0; + } else if (arg.find("--size=") != std::string::npos) { 
+      amount = std::stoi(arg.substr(arg.find("=") + 1));
+    } else if (arg.find("--run=") != std::string::npos) {
+      total_times = std::stoi(arg.substr(arg.find("=") + 1));
+    } else if (arg == "-s" && i + 1 < argc) {
+      amount = std::stoi(argv[++i]);
+    } else if (arg == "-r" && i + 1 < argc) {
+      total_times = std::stoi(argv[++i]);
+    }
   }
 
   rclcpp::init(argc, argv);
-  auto height_dim = std::make_shared<std_msgs::msg::MultiArrayDimension>();
-  height_dim->label = "height";
-  height_dim->size = 10;
-  height_dim->stride = 600;
-
-  auto width_dim = std::make_shared<std_msgs::msg::MultiArrayDimension>();
-  width_dim->label = "width";
-  width_dim->size = 20;
-  width_dim->stride = 60;
-
-  auto channel_dim = std::make_shared<std_msgs::msg::MultiArrayDimension>();
-  channel_dim->label = "channel";
-  channel_dim->size = 3;
-  channel_dim->stride = 4;
-
-  auto layout = std::make_shared<std_msgs::msg::MultiArrayLayout>();
-  layout->dim = std::vector<std_msgs::msg::MultiArrayDimension>{
-      *height_dim, *width_dim, *channel_dim};
-  layout->data_offset = 0;
+
+  auto height_dim = std_msgs::msg::MultiArrayDimension();
+  height_dim.label = "height";
+  height_dim.size = 10;
+  height_dim.stride = 600;
+
+  auto width_dim = std_msgs::msg::MultiArrayDimension();
+  width_dim.label = "width";
+  width_dim.size = 20;
+  width_dim.stride = 60;
+
+  auto channel_dim = std_msgs::msg::MultiArrayDimension();
+  channel_dim.label = "channel";
+  channel_dim.size = 3;
+  channel_dim.stride = 4;
+
+  auto layout = std_msgs::msg::MultiArrayLayout();
+  layout.dim = std::vector<std_msgs::msg::MultiArrayDimension>{
+      height_dim, width_dim, channel_dim};
+  layout.data_offset = 0;
 
   auto msg = std_msgs::msg::UInt8MultiArray();
-  msg.layout = *layout;
+  msg.layout = layout;
   msg.data = std::vector<uint8_t>(1024 * amount, 255);
 
   printf(
       "The publisher will publish a UInt8MultiArray topic(contains a size of "
-      "%dKB array) %d times.\n", amount, total_times);
+      "%dKB array) %d times.\n",
+      amount, total_times);
 
   auto start = std::chrono::high_resolution_clock::now();
   auto node = rclcpp::Node::make_shared("stress_publisher_rclcpp");
-  auto publisher =
-      node->create_publisher<std_msgs::msg::UInt8MultiArray>("stress_topic", 10);
+  auto publisher = node->create_publisher<std_msgs::msg::UInt8MultiArray>(
"stress_topic", 10); auto sent_times = 0; + // Create executor for proper message handling + auto executor = rclcpp::executors::SingleThreadedExecutor::make_shared(); + executor->add_node(node); + while (rclcpp::ok()) { - if (sent_times > total_times) { + if (sent_times >= total_times) { rclcpp::shutdown(); auto end = std::chrono::high_resolution_clock::now(); LogTimeConsumption(start, end); + break; } else { publisher->publish(msg); sent_times++; - rclcpp::spin_some(node); + executor->spin_some(); } } diff --git a/benchmark/rclcpp/topic/subscription-stress-test.cpp b/benchmark/rclcpp/topic/subscription-stress-test.cpp index 6b18542c..3dda2cd8 100644 --- a/benchmark/rclcpp/topic/subscription-stress-test.cpp +++ b/benchmark/rclcpp/topic/subscription-stress-test.cpp @@ -19,10 +19,13 @@ int main(int argc, char* argv[]) { rclcpp::init(argc, argv); auto node = rclcpp::Node::make_shared("stress_subscription_rclcpp"); auto sub = node->create_subscription( - "stress_topic", - 10, - [](std_msgs::msg::UInt8MultiArray::SharedPtr msg) { (void)msg; }); + "stress_topic", 10, + [](const std_msgs::msg::UInt8MultiArray::SharedPtr msg) { + (void)msg; + // Just consume the message for benchmarking + }); rclcpp::spin(node); + rclcpp::shutdown(); return 0; } diff --git a/benchmark/rclnodejs/service/client-stress-test.js b/benchmark/rclnodejs/service/client-stress-test.js index b07a1e97..163610c1 100644 --- a/benchmark/rclnodejs/service/client-stress-test.js +++ b/benchmark/rclnodejs/service/client-stress-test.js @@ -15,42 +15,102 @@ 'use strict'; /* eslint-disable camelcase */ -const app = require('commander'); +const { program } = require('commander'); const rclnodejs = require('../../../index.js'); -app.option('-r, --run ', 'How many times to run').parse(process.argv); +program + .option('-r, --run ', 'How many times to run', '1') + .parse(process.argv); -rclnodejs - .init() - .then(() => { - const time = process.hrtime(); +const options = program.opts(); +const totalTimes = 
parseInt(options.run) || 1;
+
+async function main() {
+  try {
+    await rclnodejs.init();
+
+    const startTime = process.hrtime.bigint();
     const node = rclnodejs.createNode('stress_client_rclnodejs');
-    const client = node.createClient('nav_msgs/srv/GetMap', 'get_map');
+    const client = node.createClient('nav_msgs/srv/GetMap', 'get_map', {
+      enableTypedArray: true,
+    });
+
     let receivedTimes = 0;
-    let totalTimes = app.run || 1;
-    console.log(
-      'The client will send a GetMap request continuously' +
-        ` until receiving response ${totalTimes} times.`
-    );
-    let sendRequest = function () {
-      client.sendRequest({}, (response) => {
-        if (++receivedTimes > totalTimes) {
-          rclnodejs.shutdown();
-          const diff = process.hrtime(time);
-          console.log(
-            `Benchmark took ${diff[0]} seconds and ${Math.ceil(
-              diff[1] / 1000000
-            )} milliseconds.`
-          );
-        } else {
-          setImmediate(sendRequest);
+
+    node
+      .getLogger()
+      .info(
+        `The client will send a GetMap request continuously until receiving response ${totalTimes} times.`
+      );
+
+    // Wait for service to be available - simplified approach
+    node.getLogger().info('Waiting for service to be available...');
+
+    // Simple wait with spinning
+    await new Promise((resolve) => {
+      setTimeout(() => {
+        node.getLogger().info('Service should be available now');
+        resolve();
+      }, 2000); // Wait 2 seconds for service to start
+    });
+
+    // Send requests sequentially using callback pattern
+    const sendRequests = async () => {
+      // Start spinning to handle callbacks
+      rclnodejs.spin(node);
+
+      for (let i = 0; i < totalTimes; i++) {
+        try {
+          const response = await new Promise((resolve, reject) => {
+            client.sendRequest({}, (response) => {
+              if (response) {
+                resolve(response.map.header);
+              } else {
+                reject(new Error('No response received'));
+              }
+            });
+          });
+          if (response) {
+            receivedTimes++;
+            // Show progress every 100 requests
+            if (receivedTimes % 100 === 0) {
+              node
+                .getLogger()
+                .info(
`Progress: ${receivedTimes}/${totalTimes} requests completed` + ); + } + } + } catch (error) { + node.getLogger().error(`Request ${i + 1} failed: ${error.message}`); } - }); + } }; - sendRequest(); - rclnodejs.spin(node); - }) - .catch((e) => { - console.log(`Error: ${e}`); - }); + await sendRequests(); + + const endTime = process.hrtime.bigint(); + const diffNanos = endTime - startTime; + const diffMillis = Number(diffNanos) / 1000000; + const seconds = Math.floor(diffMillis / 1000); + const milliseconds = Math.round(diffMillis % 1000); + + node + .getLogger() + .info( + `Benchmark took ${seconds} seconds and ${milliseconds} milliseconds.` + ); + node + .getLogger() + .info(`Successfully received ${receivedTimes}/${totalTimes} responses`); + + await rclnodejs.shutdown(); + } catch (error) { + console.error('Error in client stress test:', error); + process.exit(1); + } +} + +main(); diff --git a/benchmark/rclnodejs/service/service-stress-test.js b/benchmark/rclnodejs/service/service-stress-test.js index 01d9b2e7..7521968d 100644 --- a/benchmark/rclnodejs/service/service-stress-test.js +++ b/benchmark/rclnodejs/service/service-stress-test.js @@ -15,58 +15,63 @@ 'use strict'; /* eslint-disable camelcase */ -const app = require('commander'); +const { program } = require('commander'); const rclnodejs = require('../../../index.js'); -app.option('-s, --size [size_kb]', 'The block size').parse(process.argv); +program + .option('-s, --size ', 'The block size', '1') + .parse(process.argv); -let size = app.size || 1; -const mapData = { - map: { - header: { - stamp: { - sec: 123456, - nanosec: 789, - }, - frame_id: 'main_frame', - }, - info: { - map_load_time: { - sec: 123456, - nanosec: 789, - }, - resolution: 1.0, - width: 1024, - height: 768, - origin: { - position: { - x: 0.0, - y: 0.0, - z: 0.0, - }, - orientation: { - x: 0.0, - y: 0.0, - z: 0.0, - w: 0.0, - }, - }, - }, - data: Int8Array.from({ length: 1024 * size }, (v, k) => k), - }, -}; +const options = 
program.opts(); +let size = parseInt(options.size) || 1; rclnodejs .init() .then(() => { let node = rclnodejs.createNode('stress_service_rclnodejs'); - node.createService( - 'nav_msgs/srv/GetMap', - 'get_map', - (request, response) => { - return mapData; - } - ); + + node.createService('nav_msgs/srv/GetMap', 'get_map', () => { + // Create map data structure that matches the working test pattern + const mapData = { + map: { + header: { + stamp: { + sec: 123456, + nanosec: 789, + }, + frame_id: 'main_frame', + }, + info: { + map_load_time: { + sec: 123456, + nanosec: 789, + }, + resolution: 1.0, + width: 1024, + height: 768, + origin: { + position: { + x: 0.0, + y: 0.0, + z: 0.0, + }, + orientation: { + x: 0.0, + y: 0.0, + z: 0.0, + w: 0.0, + }, + }, + }, + // Use minimal data array for stability + data: [1, 2, 3], + }, + }; + + return mapData; + }); + + console.log(`GetMap service started, data size: ${size}KB`); rclnodejs.spin(node); }) .catch((e) => { diff --git a/benchmark/rclnodejs/startup/startup-test.js b/benchmark/rclnodejs/startup/startup-test.js deleted file mode 100644 index 6aaf835a..00000000 --- a/benchmark/rclnodejs/startup/startup-test.js +++ /dev/null @@ -1,21 +0,0 @@ -// Copyright (c) 2018 Intel Corporation. All rights reserved. -// -// Licensed under the Apache License, Version 2.0 (the "License"); -// you may not use this file except in compliance with the License. -// You may obtain a copy of the License at -// -// http://www.apache.org/licenses/LICENSE-2.0 -// -// Unless required by applicable law or agreed to in writing, software -// distributed under the License is distributed on an "AS IS" BASIS, -// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -// See the License for the specific language governing permissions and -// limitations under the License. 
-
-'use strict';
-
-const rclnodejs = require('../../../index.js');
-
-rclnodejs.init().then(() => {
-  rclnodejs.createNode('startup_node');
-});
diff --git a/benchmark/rclnodejs/topic/publisher-stress-test.js b/benchmark/rclnodejs/topic/publisher-stress-test.js
index 7b928b47..f7b839d5 100644
--- a/benchmark/rclnodejs/topic/publisher-stress-test.js
+++ b/benchmark/rclnodejs/topic/publisher-stress-test.js
@@ -15,26 +15,31 @@
 'use strict';
 /* eslint-disable camelcase */
 
-const app = require('commander');
+const { program } = require('commander');
 const rclnodejs = require('../../../index.js');
 
-app
-  .option('-s, --size [size_kb]', 'The block size')
-  .option('-r, --run <n>', 'How many times to run')
+program
+  .option('-s, --size <size_kb>', 'The block size', '1')
+  .option('-r, --run <n>', 'How many times to run', '1')
   .parse(process.argv);
 
-rclnodejs
-  .init()
-  .then(() => {
-    const time = process.hrtime();
-    let node = rclnodejs.createNode('stress_publisher_rclnodejs');
+const options = program.opts();
+const size = parseInt(options.size) || 1;
+const totalTimes = parseInt(options.run) || 1;
+
+async function main() {
+  try {
+    await rclnodejs.init();
+
+    const startTime = process.hrtime.bigint();
+    const node = rclnodejs.createNode('stress_publisher_rclnodejs');
     const publisher = node.createPublisher(
       'std_msgs/msg/UInt8MultiArray',
       'stress_topic'
     );
+
     let sentTimes = 0;
-    let totalTimes = app.run || 1;
-    let size = app.size || 1;
+
     const message = {
       layout: {
         dim: [
@@ -44,28 +49,50 @@
         ],
         data_offset: 0,
       },
-      data: Uint8Array.from({ length: 1024 * size }, (v, k) => k),
+      data: new Uint8Array(1024 * size).map((_, index) => index & 0xff),
+    };
+
+    node
+      .getLogger()
+      .info(
+        `The publisher will publish a UInt8MultiArray topic (contains a size of ${size}KB array) ${totalTimes} times.`
+      );
+
+    // Publish messages in a controlled manner
+    const publishMessages = () => {
+      return new Promise((resolve) => {
+        const publishNext = () => {
+          if (sentTimes < totalTimes)
{ + publisher.publish(message); + sentTimes++; + setImmediate(publishNext); + } else { + resolve(); + } + }; + publishNext(); + }); }; - console.log( - `The publisher will publish a UInt8MultiArray topic(contains a size of ${size}KB array)` + - ` ${totalTimes} times.` - ); - setImmediate(() => { - while (sentTimes++ < totalTimes) { - publisher.publish(message); - } - rclnodejs.shutdown(); - const diff = process.hrtime(time); - console.log( - `Benchmark took ${diff[0]} seconds and ${Math.ceil( - diff[1] / 1000000 - )} milliseconds.` + await publishMessages(); + + const endTime = process.hrtime.bigint(); + const diffNanos = endTime - startTime; + const diffMillis = Number(diffNanos) / 1000000; + const seconds = Math.floor(diffMillis / 1000); + const milliseconds = Math.round(diffMillis % 1000); + + node + .getLogger() + .info( + `Benchmark took ${seconds} seconds and ${milliseconds} milliseconds.` ); - }); - rclnodejs.spin(node); - }) - .catch((err) => { - console.log(err); - }); + await rclnodejs.shutdown(); + } catch (error) { + console.error('Error in publisher stress test:', error); + process.exit(1); + } +} + +main(); diff --git a/benchmark/rclnodejs/topic/subscription-stress-test.js b/benchmark/rclnodejs/topic/subscription-stress-test.js index 47a134b6..ac514b8b 100644 --- a/benchmark/rclnodejs/topic/subscription-stress-test.js +++ b/benchmark/rclnodejs/topic/subscription-stress-test.js @@ -16,17 +16,35 @@ const rclnodejs = require('../../../index.js'); -rclnodejs - .init() - .then(() => { +async function main() { + try { + await rclnodejs.init(); + const node = rclnodejs.createNode('stress_subscription_rclnodejs'); + node.createSubscription( 'std_msgs/msg/UInt8MultiArray', 'stress_topic', - (array) => {} + (message) => { + // Just consume the message for benchmarking + // In a real scenario, you might process the message here + } ); - rclnodejs.spin(node); - }) - .catch((e) => { - console.log(e); - }); + + node.getLogger().info('Subscription node ready to 
receive messages'); + + // Set up graceful shutdown + process.on('SIGINT', async () => { + node.getLogger().info('Shutting down subscription node...'); + await rclnodejs.shutdown(); + process.exit(0); + }); + + await rclnodejs.spin(node); + } catch (error) { + console.error('Error in subscription stress test:', error); + process.exit(1); + } +} + +main(); diff --git a/benchmark/rclpy/service/client-stress-test.py b/benchmark/rclpy/service/client-stress-test.py old mode 100644 new mode 100755 index 35f53f32..b9a30b5c --- a/benchmark/rclpy/service/client-stress-test.py +++ b/benchmark/rclpy/service/client-stress-test.py @@ -15,38 +15,63 @@ import argparse import math -from nav_msgs.srv import * +import time + import rclpy -from time import time +from rclpy.node import Node +from nav_msgs.srv import GetMap def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-r", "--run", type=int, help="How many times to run") - args = parser.parse_args() - if args.run is None: - args.run = 1 - - rclpy.init() - print( - 'The client will send a GetMap request continuously until receiving %s response times.' % args.run) - start = time(); - node = rclpy.create_node('stress_client_rclpy') - client = node.create_client(GetMap, 'get_map') - request = GetMap.Request() - received_times = 0 + parser = argparse.ArgumentParser() + parser.add_argument("-r", "--run", type=int, default=1, help="How many times to run") + args = parser.parse_args() - while rclpy.ok(): - if received_times > args.run: - node.destroy_node() - rclpy.shutdown() - diff = time() - start - milliseconds, seconds = math.modf(diff) - print('Benchmark took %d seconds and %d milliseconds.' 
% (seconds, round(milliseconds * 1000))) - else: - future = client.call_async(request) - rclpy.spin_until_future_complete(node, future) - if future.result() is not None: - received_times += 1 + rclpy.init() + + try: + node = Node('stress_client_rclpy') + + # Create client + client = node.create_client(GetMap, 'get_map') + + # Wait for service to be available + node.get_logger().info('Waiting for service to be available...') + if not client.wait_for_service(timeout_sec=5.0): + node.get_logger().error('Service not available after waiting') + return + + node.get_logger().info(f'The client will send a GetMap request continuously ' + f'until receiving {args.run} response times.') + + start_time = time.time() + received_times = 0 + + # Send requests synchronously + for i in range(args.run): + request = GetMap.Request() + try: + future = client.call_async(request) + rclpy.spin_until_future_complete(node, future, timeout_sec=5.0) + + if future.result() is not None: + received_times += 1 + else: + node.get_logger().error(f'Request {i+1} failed') + except Exception as e: + node.get_logger().error(f'Service call {i+1} failed: {e}') + + # Log completion + diff = time.time() - start_time + milliseconds, seconds = math.modf(diff) + node.get_logger().info(f'Benchmark took {int(seconds)} seconds and ' + f'{round(milliseconds * 1000)} milliseconds.') + node.get_logger().info(f'Successfully received {received_times}/{args.run} responses') + + node.destroy_node() + except KeyboardInterrupt: + pass + finally: + rclpy.try_shutdown() if __name__ == '__main__': - main() + main() diff --git a/benchmark/rclpy/service/service-stress-test.py b/benchmark/rclpy/service/service-stress-test.py old mode 100644 new mode 100755 index 05775972..0bcb78dd --- a/benchmark/rclpy/service/service-stress-test.py +++ b/benchmark/rclpy/service/service-stress-test.py @@ -14,70 +14,88 @@ # limitations under the License. 
import argparse + import rclpy -from std_srvs.srv import * -from std_msgs.msg import * -from nav_msgs.srv import * -from nav_msgs.msg import * -from builtin_interfaces.msg import * -from sensor_msgs.msg import * -from geometry_msgs.msg import * +from rclpy.node import Node +from rclpy.clock import Clock +from nav_msgs.srv import GetMap +from nav_msgs.msg import OccupancyGrid, MapMetaData +from std_msgs.msg import Header +from geometry_msgs.msg import Point, Quaternion, Pose -parser = argparse.ArgumentParser() -parser.add_argument("-s", "--size", type=int, help="The block size[kb]") -args = parser.parse_args() -if args.size is None: - args.size = 1000 +class StressService(Node): + def __init__(self, size): + super().__init__('stress_service_rclpy') + self.size = size + + # Create service + self.service = self.create_service(GetMap, 'get_map', self.handle_get_map) + + # Prepare map data + self.map_data = self.create_map_data() + + self.get_logger().info(f'Service ready to serve GetMap requests with {size}KB data') -map_data = OccupancyGrid() -stamp = Time(); -stamp.sec = 123456 -stamp.nanosec = 789 -header = Header() -header.stamp = stamp; -header.frame_id = 'main_frame' + def create_map_data(self): + # Create timestamp + clock = Clock() + current_time = clock.now().to_msg() + + # Create header + header = Header() + header.stamp = current_time + header.frame_id = 'main_frame' -info = MapMetaData() -map_load_time = Time() -map_load_time.sec = 123456 -map_load_time.nanosec = 789 -info.resolution = 1.0 -info.width = 1024 -info.height = 768 + # Create map metadata + info = MapMetaData() + info.map_load_time = current_time + info.resolution = 1.0 + info.width = 1024 + info.height = 768 -position = Point() -position.x = 0.0 -position.y = 0.0 -position.z = 0.0 + # Create origin pose + position = Point() + position.x = 0.0 + position.y = 0.0 + position.z = 0.0 -orientation = Quaternion() -orientation.x = 0.0 -orientation.y = 0.0 -orientation.z = 0.0 -orientation.w = 0.0 + 
orientation = Quaternion() + orientation.x = 0.0 + orientation.y = 0.0 + orientation.z = 0.0 + orientation.w = 1.0 # Valid quaternion -origin = Pose() -origin.position = position -origin.orientation = orientation -info.map_load_time = map_load_time -info.origin = origin; + origin = Pose() + origin.position = position + origin.orientation = orientation + info.origin = origin -map_data.header = header -map_data.info = info -map_data.data = [x & 0x7f for x in range(1024 * args.size)] + # Create occupancy grid + map_data = OccupancyGrid() + map_data.header = header + map_data.info = info + map_data.data = [x & 0x7f for x in range(1024 * self.size)] + + return map_data -def callback(request, response): - global map_data - response.map = map_data - return response + def handle_get_map(self, request, response): + response.map = self.map_data + return response def main(): - rclpy.init() - node = rclpy.create_node('stress_service_rclpy') - service = node.create_service(GetMap, 'get_map', callback) + parser = argparse.ArgumentParser() + parser.add_argument("-s", "--size", type=int, default=1000, help="The block size[kb]") + args = parser.parse_args() - while rclpy.ok(): - rclpy.spin_once(node) + rclpy.init() + + try: + stress_service = StressService(args.size) + rclpy.spin(stress_service) + except KeyboardInterrupt: + pass + finally: + rclpy.try_shutdown() if __name__ == '__main__': - main() + main() diff --git a/benchmark/rclpy/startup/startup-test.py b/benchmark/rclpy/startup/startup-test.py deleted file mode 100644 index 943bfbd0..00000000 --- a/benchmark/rclpy/startup/startup-test.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/env python3 -# Copyright (c) 2018 Intel Corporation. All rights reserved. -# -# Licensed under the Apache License, Version 2.0 (the "License"); -# you may not use this file except in compliance with the License. 
-# You may obtain a copy of the License at -# -# http://www.apache.org/licenses/LICENSE-2.0 -# -# Unless required by applicable law or agreed to in writing, software -# distributed under the License is distributed on an "AS IS" BASIS, -# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. -# See the License for the specific language governing permissions and -# limitations under the License. - -import rclpy - -def main(): - rclpy.init() - rclpy.create_node('startup_node') - -if __name__ == '__main__': - main() diff --git a/benchmark/rclpy/topic/publisher-stress-test.py b/benchmark/rclpy/topic/publisher-stress-test.py old mode 100644 new mode 100755 index 1cf7223c..ead31306 --- a/benchmark/rclpy/topic/publisher-stress-test.py +++ b/benchmark/rclpy/topic/publisher-stress-test.py @@ -14,65 +14,72 @@ # limitations under the License. import argparse -import rclpy -from builtin_interfaces.msg import * import math -from std_msgs.msg import * -import threading -from time import time - -def main(): - parser = argparse.ArgumentParser() - parser.add_argument("-s", "--size", type=int, help="The block size[kb]") - parser.add_argument("-r", "--run", type=int, help="How many times to run") - args = parser.parse_args() - rclpy.init() - if args.size is None: - args.size = 1 - if args.run is None: - args.run = 1 +import time - amount = args.size - width_dim = MultiArrayDimension() - width_dim.label = 'width' - width_dim.size = 20; - width_dim.stride = 60; +import rclpy +from rclpy.node import Node +from std_msgs.msg import UInt8MultiArray, MultiArrayDimension, MultiArrayLayout - height_dim = MultiArrayDimension() - height_dim.label = 'height' - height_dim.size = 10; - height_dim.stride = 600; +def main(): + parser = argparse.ArgumentParser() + parser.add_argument("-s", "--size", type=int, default=1, help="The block size[kb]") + parser.add_argument("-r", "--run", type=int, default=1, help="How many times to run") + args = parser.parse_args() - channel_dim = 
MultiArrayDimension() - channel_dim.label = 'channel' - channel_dim.size = 3; - channel_dim.stride = 4; + rclpy.init() + + try: + node = Node('stress_publisher_rclpy') + + # Create publisher + publisher = node.create_publisher(UInt8MultiArray, 'stress_topic', 10) + + # Prepare message + width_dim = MultiArrayDimension() + width_dim.label = 'width' + width_dim.size = 20 + width_dim.stride = 60 - layout = MultiArrayLayout() - layout.dim = [width_dim, height_dim, channel_dim] - layout.data_offset = 0; + height_dim = MultiArrayDimension() + height_dim.label = 'height' + height_dim.size = 10 + height_dim.stride = 600 - msg = UInt8MultiArray() - msg.layout = layout - msg.data = [x & 0xff for x in range(1024 * amount)] + channel_dim = MultiArrayDimension() + channel_dim.label = 'channel' + channel_dim.size = 3 + channel_dim.stride = 4 - print('The publisher will publish a UInt8MultiArray topic(contains a size of %dKB array) %s times.' % (amount, args.run)) - start = time() - node = rclpy.create_node('stress_publisher_rclpy') - publisher = node.create_publisher(UInt8MultiArray, 'stress_topic', 10) - total_times = args.run - sent_times = 0 + layout = MultiArrayLayout() + layout.dim = [width_dim, height_dim, channel_dim] + layout.data_offset = 0 - while rclpy.ok(): - if sent_times > total_times: - node.destroy_node() - rclpy.shutdown() - diff = time() - start - milliseconds, seconds = math.modf(diff) - print('Benchmark took %d seconds and %d milliseconds.' 
% (seconds, round(milliseconds * 1000))) - else: - publisher.publish(msg) - sent_times += 1 + msg = UInt8MultiArray() + msg.layout = layout + msg.data = [x & 0xff for x in range(1024 * args.size)] + + node.get_logger().info(f'The publisher will publish a UInt8MultiArray topic ' + f'(contains a size of {args.size}KB array) {args.run} times.') + + start_time = time.time() + + # Publish messages + for i in range(args.run): + publisher.publish(msg) + rclpy.spin_once(node, timeout_sec=0.001) + + # Log completion + diff = time.time() - start_time + milliseconds, seconds = math.modf(diff) + node.get_logger().info(f'Benchmark took {int(seconds)} seconds and ' + f'{round(milliseconds * 1000)} milliseconds.') + + node.destroy_node() + except KeyboardInterrupt: + pass + finally: + rclpy.try_shutdown() if __name__ == '__main__': - main() + main() diff --git a/benchmark/rclpy/topic/subscription-stress-test.py b/benchmark/rclpy/topic/subscription-stress-test.py old mode 100644 new mode 100755 index 17afadbc..ad80f8a7 --- a/benchmark/rclpy/topic/subscription-stress-test.py +++ b/benchmark/rclpy/topic/subscription-stress-test.py @@ -14,17 +14,34 @@ # limitations under the License. 
import rclpy -from std_msgs.msg import * +from rclpy.node import Node +from std_msgs.msg import UInt8MultiArray -def callback(msg): - pass +class StressSubscriber(Node): + def __init__(self): + super().__init__('stress_subscription_rclpy') + self.subscription = self.create_subscription( + UInt8MultiArray, + 'stress_topic', + self.listener_callback, + 10) + self.subscription # prevent unused variable warning + self.get_logger().info('Subscription node ready to receive messages') + + def listener_callback(self, msg): + # Just consume the message for benchmarking + pass def main(): - rclpy.init() - node = rclpy.create_node('stress_subscription_rclpy') - subscription = node.create_subscription(UInt8MultiArray, 'stress_topic', callback, 10) - while rclpy.ok(): - rclpy.spin_once(node) + rclpy.init() + + try: + stress_subscriber = StressSubscriber() + rclpy.spin(stress_subscriber) + except KeyboardInterrupt: + pass + finally: + rclpy.try_shutdown() if __name__ == '__main__': - main() + main() From 9a545989e707f0ca652e85029732a3dc442ecfb5 Mon Sep 17 00:00:00 2001 From: Minggang Wang Date: Sat, 30 Aug 2025 00:25:03 +0800 Subject: [PATCH 2/4] Address comments --- benchmark/README.md | 49 ++++++----- .../rclnodejs/service/client-stress-test.js | 83 +++++++++++-------- .../rclnodejs/service/service-stress-test.js | 33 ++++++-- 3 files changed, 99 insertions(+), 66 deletions(-) diff --git a/benchmark/README.md b/benchmark/README.md index 7527119f..138da3a5 100644 --- a/benchmark/README.md +++ b/benchmark/README.md @@ -7,6 +7,7 @@ Performance benchmarks for comparing ROS 2 client libraries: C++ (rclcpp), Pytho 1. **ROS 2**: Install from [ros.org](https://docs.ros.org/en/jazzy/Installation.html) 2. **Node.js**: v16+ for rclnodejs (from [nodejs.org](https://nodejs.org/)) 3. **rclnodejs**: Follow [installation guide](https://github.com/RobotWebTools/rclnodejs#installation) +4. 
**Build Dependencies**: For C++ benchmarks: `sudo apt install libssl-dev cmake build-essential` ## Benchmark Structure @@ -16,7 +17,6 @@ Each client library has identical benchmark tests: | ----------- | ------------------------------------------- | | **topic** | Publisher/subscriber performance | | **service** | Client/service request-response performance | -| **startup** | Node initialization time | ## Performance Results @@ -24,15 +24,16 @@ Each client library has identical benchmark tests: **Hardware:** -- **CPU:** Intel(R) Core(TM) i9-10900X @ 3.70GHz (10 cores, 20 threads) -- **Memory:** 32GB RAM +- **CPU:** 11th Gen Intel(R) Core(TM) i7-1185G7 @ 3.00GHz (4 cores, 8 threads) +- **Memory:** 8GB RAM - **Architecture:** x86_64 **Software:** - **OS:** Ubuntu 24.04.3 LTS (WSL2) +- **Kernel:** 6.6.87.2-microsoft-standard-WSL2 - **ROS 2:** Jazzy distribution -- **C++ Compiler:** GCC 13.3.0 +- **C++ Compiler:** GCC 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04) - **Python:** 3.12.3 - **Node.js:** v22.18.0 @@ -40,21 +41,22 @@ Each client library has identical benchmark tests: Benchmark parameters: 1000 iterations, 1024KB message size -| Client Library | Topic (ms) | Service (ms) | Performance Ratio | -| ----------------------- | ---------- | ------------ | ---------------------- | -| **rclcpp (C++)** | 437 | 8,129 | Baseline (fastest) | -| **rclpy (Python)** | 2,294 | 25,519 | 5.3x / 3.1x slower | -| **rclnodejs (Node.js)** | 2,075 | 3,420\* | 4.7x / 2.4x faster\*\* | +| Client Library | Topic (ms) | Service (ms) | Performance Ratio | +| ----------------------- | ---------- | ------------ | ------------------- | +| **rclcpp (C++)** | 168 | 627 | Baseline (fastest) | +| **rclpy (Python)** | 1,618 | 15,380 | 9.6x / 24.5x slower | +| **rclnodejs (Node.js)** | 744 | 927 | 4.4x / 1.5x slower | -_Last updated: August 29, 2025_ +_Last updated: August 30, 2025_ **Notes:** - Topic benchmarks: All libraries completed successfully with 1024KB messages -- Service benchmarks: C++ and 
Python completed with 1024KB responses; Node.js completed with minimal data -- \*Node.js service uses minimal response data due to serialization issues with large (1024KB) payloads -- \*\*Node.js service performance is surprisingly good with small data, but not directly comparable due to different data sizes +- Service benchmarks: All libraries completed successfully with 1024KB responses - Performance ratios are relative to C++ baseline +- C++ shows excellent performance as expected for a compiled language +- Node.js performs significantly better than Python, likely due to V8 optimizations +- Python service performance shows significant overhead with large payloads compared to topics ## Running Benchmarks @@ -62,9 +64,12 @@ _Last updated: August 29, 2025_ ```bash cd benchmark/rclcpp/ -colcon build -./build/rclcpp_benchmark/publisher-stress-test -r 1000 -s 1024 -./build/rclcpp_benchmark/client-stress-test -r 1000 +mkdir -p build && cd build +source ~/Download/ros2-linux/local_setup.bash # Adjust path +cmake .. && make +# Run from build directory: +./publisher-stress-test -r 1000 -s 1024 +./client-stress-test -r 1000 ``` ### Python (rclpy) @@ -94,12 +99,12 @@ For complete tests, run subscriber/service first, then publisher/client: ```bash # Terminal 1: Start subscriber (adjust for your language) python3 topic/subscription-stress-test.py # Python -./build/rclcpp_benchmark/subscription-stress-test # C++ +./subscription-stress-test # C++ (from build dir) node benchmark/rclnodejs/topic/subscription-stress-test.js # Node.js # Terminal 2: Run publisher benchmark python3 topic/publisher-stress-test.py -r 1000 -s 1024 # Python -./build/rclcpp_benchmark/publisher-stress-test -r 1000 -s 1024 # C++ +./publisher-stress-test -r 1000 -s 1024 # C++ (from build dir) node benchmark/rclnodejs/topic/publisher-stress-test.js -r 1000 -s 1024 # Node.js ``` @@ -107,13 +112,13 @@ node benchmark/rclnodejs/topic/publisher-stress-test.js -r 1000 -s 1024 # Node. 
```bash # Terminal 1: Start service (adjust for your language) -python3 service/service-stress-test.py # Python -./build/rclcpp_benchmark/service-stress-test # C++ -node benchmark/rclnodejs/service/service-stress-test.js # Node.js +python3 service/service-stress-test.py -s 1024 # Python +./service-stress-test -s 1024 # C++ (from build dir) +node benchmark/rclnodejs/service/service-stress-test.js -s 1024 # Node.js # Terminal 2: Run client benchmark python3 service/client-stress-test.py -r 1000 # Python -./build/rclcpp_benchmark/client-stress-test -r 1000 # C++ +./client-stress-test -r 1000 # C++ (from build dir) node benchmark/rclnodejs/service/client-stress-test.js -r 1000 # Node.js ``` diff --git a/benchmark/rclnodejs/service/client-stress-test.js b/benchmark/rclnodejs/service/client-stress-test.js index 163610c1..ec6006c7 100644 --- a/benchmark/rclnodejs/service/client-stress-test.js +++ b/benchmark/rclnodejs/service/client-stress-test.js @@ -43,50 +43,61 @@ async function main() { `The client will send a GetMap request continuously until receiving response ${totalTimes} times.` ); - // Wait for service to be available - simplified approach + // Wait for service to be available using client.waitForService() node.getLogger().info('Waiting for service to be available...'); - // Simple wait with spinning - await new Promise((resolve) => { - setTimeout(() => { - node.getLogger().info('Service should be available now'); - resolve(); - }, 2000); // Wait 2 seconds for service to start - }); + const serviceAvailable = await client.waitForService(5000); // Wait up to 5 seconds + if (!serviceAvailable) { + throw new Error( + 'GetMap service not available after 5 seconds. Make sure the service is running.' 
+ ); + } + + node.getLogger().info('Service is available, starting benchmark...'); + + // Start spinning to handle callbacks + rclnodejs.spin(node); // Send requests sequentially using callback pattern - const sendRequests = async () => { - // Start spinning to handle callbacks - rclnodejs.spin(node); - - for (let i = 0; i < totalTimes; i++) { - try { - const response = await new Promise((resolve, reject) => { - client.sendRequest({}, (response) => { - console.log('send+++'); - console.log(response); - if (response) { - resolve(response.map.header); - } else { - reject(new Error('No response received')); + let requestPromiseResolve, requestPromiseReject; + let currentRequestPromise = null; + + const sendRequests = () => { + return new Promise((resolve, reject) => { + let requestCount = 0; + + const sendNextRequest = () => { + if (requestCount >= totalTimes) { + resolve(); + return; + } + + requestCount++; + client.sendRequest({}, (response) => { + if (response) { + receivedTimes++; + // Show progress every 100 requests or for small test runs + if (receivedTimes % 100 === 0 || totalTimes <= 10) { + node + .getLogger() + .info( + `Progress: ${receivedTimes}/${totalTimes} requests completed` + ); } - }); - }); - if (response) { - receivedTimes++; - // Show progress every 100 requests - if (receivedTimes % 100 === 0) { + } else { node .getLogger() - .info( - `Progress: ${receivedTimes}/${totalTimes} requests completed` - ); + .error(`Request ${requestCount} failed: No response received`); } - } - } catch (error) { - node.getLogger().error(`Request ${i + 1} failed: ${error.message}`); - } - } + + // Send next request + setImmediate(sendNextRequest); + }); + }; + + // Start the first request + sendNextRequest(); + }); }; await sendRequests(); diff --git a/benchmark/rclnodejs/service/service-stress-test.js b/benchmark/rclnodejs/service/service-stress-test.js index 7521968d..ab96c9db 100644 --- a/benchmark/rclnodejs/service/service-stress-test.js +++ 
b/benchmark/rclnodejs/service/service-stress-test.js
@@ -19,11 +19,26 @@ const { program } = require('commander');
 const rclnodejs = require('../../../index.js');
 
 program
-  .option('-s, --size <size>', 'The block size', '1')
+  .option('-s, --size <size>', 'The block size in KB', '1')
   .parse(process.argv);
 
 const options = program.opts();
-let size = parseInt(options.size) || 1;
+let sizeKB = parseInt(options.size) || 1;
+
+// Calculate map dimensions based on size
+// Each cell is 1 byte (int8), so for sizeKB kilobytes we need sizeKB * 1024 cells
+const totalCells = sizeKB * 1024;
+
+// Calculate width and height as close to square as possible
+const width = Math.floor(Math.sqrt(totalCells));
+const height = Math.ceil(totalCells / width);
+const actualCells = width * height;
+
+console.log(`Requested size: ${sizeKB}KB (${totalCells} cells)`);
+console.log(
+  `Calculated dimensions: ${width} x ${height} = ${actualCells} cells`
+);
+console.log(`Actual size: ${(actualCells / 1024).toFixed(2)}KB`);
 
 rclnodejs
   .init()
@@ -31,7 +46,7 @@ rclnodejs
     let node = rclnodejs.createNode('stress_service_rclnodejs');
 
     node.createService('nav_msgs/srv/GetMap', 'get_map', () => {
-      // Create map data structure that matches the working test pattern
+      // Create map data structure with calculated dimensions
       const mapData = {
         map: {
           header: {
@@ -47,8 +62,8 @@ rclnodejs
               nanosec: 789,
             },
             resolution: 1.0,
-            width: 1024,
-            height: 768,
+            width: width,
+            height: height,
             origin: {
               position: {
                 x: 0.0,
@@ -63,15 +78,17 @@ rclnodejs
               },
             },
           },
-          // Use minimal data array for stability
-          data: [1, 2, 3],
+          // Generate data array with the exact size needed (width * height)
+          data: new Int8Array(actualCells).fill(0),
         },
       };
 
      return mapData;
    });
-    console.log(`GetMap service started, data size: ${size}KB`);
+    console.log(`GetMap service started`);
+    console.log(`Map dimensions: ${width} x ${height} = ${actualCells} cells`);
+    console.log(`Data size: ${(actualCells / 1024).toFixed(2)}KB`);
 
     rclnodejs.spin(node);
   })
.catch((e) => { From 022cc400a8a276a3ff02498e4fb0208f9064bfda Mon Sep 17 00:00:00 2001 From: Minggang Wang Date: Sat, 30 Aug 2025 00:33:52 +0800 Subject: [PATCH 3/4] Address comments --- README.md | 13 +++++++++++++ 1 file changed, 13 insertions(+) diff --git a/README.md b/README.md index e1b7f757..d6f85f71 100644 --- a/README.md +++ b/README.md @@ -29,6 +29,7 @@ rclnodejs.init().then(() => { - [Electron-based Visualization](#electron-based-visualization) - [Using TypeScript](#using-rclnodejs-with-typescript) - [ROS2 Interface Message Generation](#ros2-interface-message-generation) +- [Performance Benchmarks](#performance-benchmarks) - [Efficient Usage Tips](./docs/EFFICIENCY.md) - [FAQ and Known Issues](./docs/FAQ.md) - [Building from Scratch](./docs/BUILDING.md) @@ -139,6 +140,18 @@ To generate messages from IDL files, use the `generate-messages-idl` npm script: npm run generate-messages-idl ``` +## Performance Benchmarks + +Benchmark results for 1000 iterations with 1024KB messages (Ubuntu 24.04.3 WSL2, i7-1185G7): + +| Library | Topic (ms) | Service (ms) | +| ----------------------- | ---------- | ------------ | +| **rclcpp (C++)** | 168 | 627 | +| **rclnodejs (Node.js)** | 744 | 927 | +| **rclpy (Python)** | 1,618 | 15,380 | + +See [benchmark/README.md](benchmark/README.md) for details. + ## Contributing Please read the [Contributing Guide](./docs/CONTRIBUTING.md) before making a pull request. 
From f2bf9796a368e88828ab4b0912d35c83cd175a3d Mon Sep 17 00:00:00 2001 From: Minggang Wang Date: Mon, 1 Sep 2025 12:43:54 +0800 Subject: [PATCH 4/4] Update testing env info --- benchmark/README.md | 3 ++- 1 file changed, 2 insertions(+), 1 deletion(-) diff --git a/benchmark/README.md b/benchmark/README.md index 138da3a5..c50f34ab 100644 --- a/benchmark/README.md +++ b/benchmark/README.md @@ -32,10 +32,11 @@ Each client library has identical benchmark tests: - **OS:** Ubuntu 24.04.3 LTS (WSL2) - **Kernel:** 6.6.87.2-microsoft-standard-WSL2 -- **ROS 2:** Jazzy distribution +- **ROS 2:** [Jazzy Patch Release 6](https://github.com/ros2/ros2/releases/tag/release-jazzy-20250820) - **C++ Compiler:** GCC 13.3.0 (Ubuntu 13.3.0-6ubuntu2~24.04) - **Python:** 3.12.3 - **Node.js:** v22.18.0 +- **rclnodejs:** [v1.5.0](https://www.npmjs.com/package/rclnodejs/v/1.5.0) ### Benchmark Results