diff --git a/README.md b/README.md index e37561e17ee7e..3497f259d542f 100644 --- a/README.md +++ b/README.md @@ -329,196 +329,7 @@ bazel test //<module>/... --test_tag_filters=this,-not-this If there are multiple `--test_tag_filters`, only the last one is considered, so be careful if also using an inherited config -### Java - -
-Click to see Java Test Commands - -To run unit tests: - -```shell -bazel test //java/... --test_size_filters=small -``` -To run integration tests: - -```shell -bazel test //java/... --test_size_filters=medium -``` -To run browser tests: - -```shell -bazel test //java/... --test_size_filters=large --test_tag_filters=<browser> -``` - -To run a specific test: - -```shell -bazel test //java/test/org/openqa/selenium/chrome:ChromeDriverFunctionalTest -``` - -
- -### JavaScript - -
-Click to see JavaScript Test Commands - -To run the tests run: - -```shell -bazel test //javascript/selenium-webdriver:all -``` - -You can use `--test_env` to pass in the browser name as `SELENIUM_BROWSER`. - -```shell -bazel test //javascript/selenium-webdriver:all --test_env=SELENIUM_BROWSER=firefox -``` - -
- -### Python - -
-Click to see Python Test Commands - -Run unit tests with: -```shell -bazel test //py:unit -``` - -To run common tests with a specific browser: - -```shell -bazel test //py:common-<browsername> -``` - -To run common tests with a specific browser (include BiDi tests): - -```shell -bazel test //py:common-<browsername>-bidi -``` - -To run tests with a specific browser: - -```shell -bazel test //py:test-<browsername> -``` - -To run all Python tests: - -```shell -bazel test //py:all -``` - -
- -### Ruby - -
-Click to see Ruby Test Commands - -Test targets: - -| Command | Description | -| -------------------------------------------------------------------------------- | -------------------------------------------------- | -| `bazel test //rb/...` | Run unit, all integration tests and lint | -| `bazel test //rb:lint` | Run RuboCop linter | -| `bazel test //rb/spec/...` | Run unit and integration tests for all browsers | -| `bazel test //rb/spec/... --test_size_filters small` | Run unit tests | -| `bazel test //rb/spec/unit/...` | Run unit tests | -| `bazel test //rb/spec/... --test_size_filters large` | Run integration tests for all browsers | -| `bazel test //rb/spec/integration/...` | Run integration tests for all browsers | -| `bazel test //rb/spec/integration/... --test_tag_filters firefox` | Run integration tests for local Firefox only | -| `bazel test //rb/spec/integration/... --test_tag_filters firefox-remote` | Run integration tests for remote Firefox only | -| `bazel test //rb/spec/integration/... --test_tag_filters firefox,firefox-remote` | Run integration tests for local and remote Firefox | - -Ruby test targets have the same name as the spec file with `_spec.rb` removed, so you can run them individually. -Integration tests targets also have a browser and remote suffix to control which browser to pick and whether to use Grid. - -| Test file | Test target | -| ------------------------------------------------------- | ---------------------------------------------------------------- | -| `rb/spec/unit/selenium/webdriver/proxy_spec.rb` | `//rb/spec/unit/selenium/webdriver:proxy` | -| `rb/spec/integration/selenium/webdriver/driver_spec.rb` | `//rb/spec/integration/selenium/webdriver:driver-chrome` | -| `rb/spec/integration/selenium/webdriver/driver_spec.rb` | `//rb/spec/integration/selenium/webdriver:driver-chrome-remote` | -| `rb/spec/integration/selenium/webdriver/driver_spec.rb` | `//rb/spec/integration/selenium/webdriver:driver-firefox` | -| `rb/spec/integration/selenium/webdriver/driver_spec.rb` | `//rb/spec/integration/selenium/webdriver:driver-firefox-remote` | - -Supported browsers: - -* `chrome` -* `edge` -* `firefox` -* `firefox-beta` -* `ie` -* `safari` -* `safari-preview` - -In addition to the [Common Options Examples](#common-options-examples), here are some additional Ruby specific ones: -* `--test_arg "-eTimeouts"` - test only specs which name include "Timeouts" -* `--test_arg ""` - pass any extra RSpec arguments (see `bazel run @bundle//bin:rspec -- --help`) - -Supported environment variables for use with `--test_env`: - -- `WD_SPEC_DRIVER` - the driver to test; either the browser name or 'remote' (gets set by Bazel) -- `WD_REMOTE_BROWSER` - when `WD_SPEC_DRIVER` is `remote`; the name of the browser to test (gets set by Bazel) -- `WD_REMOTE_URL` - URL of an already running server to use for remote tests -- `DOWNLOAD_SERVER` - when `WD_REMOTE_URL` not set; whether to download and use most recently released server version for remote tests -- `DEBUG` - turns on verbose debugging -- `HEADLESS` - for chrome, edge and firefox; runs tests in headless mode -- `DISABLE_BUILD_CHECK` - for chrome and edge; whether to ignore driver and browser version mismatches (allows testing Canary builds) -- `CHROME_BINARY` - path to test specific Chrome browser -- `CHROMEDRIVER_BINARY` - path to test specific ChromeDriver -- `EDGE_BINARY` - path to test specific Edge browser -- `MSEDGEDRIVER_BINARY` - path to test specific msedgedriver -- `FIREFOX_BINARY` - path to test specific Firefox browser 
-- `GECKODRIVER_BINARY` - path to test specific GeckoDriver - -To run with a specific version of Ruby you can change the version in `rb/.ruby-version` or from command line: - -```shell -echo '<version>' > rb/.ruby-version -``` -
- -### .NET - -
-Click to see .NET Test Commands - -.NET tests currently only work with pinned browsers, so make sure to include that. - -Run all tests with: - -```shell -bazel test //dotnet/test/common:AllTests --pin_browsers=true -``` - -You can run specific tests by specifying the class name: - -```shell -bazel test //dotnet/test/common:ElementFindingTest --pin_browsers=true -``` - -If the module supports multiple browsers: - -```shell -bazel test //dotnet/test/common:ElementFindingTest-edge --pin_browsers=true -``` - -
- -### Rust - -
-Click to see Rust Test Commands - -Rust tests are run with: - -```shell -bazel test //rust/... -``` -
+Language specific testing guides can be found in a `TESTING.md` file in the applicable directory. ### Linux diff --git a/dotnet/TESTING.md b/dotnet/TESTING.md new file mode 100644 index 0000000000000..206095a61af6f --- /dev/null +++ b/dotnet/TESTING.md @@ -0,0 +1,134 @@ +# .NET Testing Guide + +This guide helps contributors write tests in the Selenium .NET codebase. + +## Test Framework + +* Tests use NUnit. +* All tests inherit from `DriverTestFixture`. +* Test HTML pages accessed via properties like `simpleTestPage`, `javascriptPage`. +* `WaitFor()` provides waiting with 5-second default timeout. + +```csharp +[TestFixture] +public class MyFeatureTest : DriverTestFixture +{ + [Test] + public void ShouldFindElement() + { + driver.Url = simpleTestPage; + IWebElement element = driver.FindElement(By.Id("foo")); + Assert.That(element.Text, Is.EqualTo("expected")); + } + + [Test] + [IgnoreBrowser(Browser.Safari, "Safari doesn't support this")] + public void ShouldDoSomething() + { + // Skipped on Safari + } +} +``` + +## Running Tests + +Bazel creates test targets for each browser. Always use `--pin_browsers`. + +```shell +bazel test //dotnet/test/common:AllTests --pin_browsers=true # Default browser (Firefox) +bazel test //dotnet/test/common:AllTests-chrome --pin_browsers=true +bazel test //dotnet/test/common:AllTests-firefox --pin_browsers=true +bazel test //dotnet/test/common:AllTests-edge --pin_browsers=true + +# Additional Arguments +bazel test //dotnet/... --flaky_test_attempts=3 --pin_browsers=true +bazel test //dotnet/... --test_output=all --pin_browsers=true +``` + +## Attributes + +### Skipping Tests + +| Attribute | When to Use | +|-----------|-------------| +| `[IgnoreBrowser(Browser.X, "reason")]` | Skip test for specific browser | +| `[IgnorePlatform("windows", "reason")]` | Skip test on specific OS | +| `[IgnoreTarget("net8", "reason")]` | Skip test on specific .NET version | +| `[Ignore("reason")]` | Skip test entirely (NUnit built-in) | + +```csharp +[Test] +[IgnoreBrowser(Browser.Safari, "Safari doesn't support multiple instances")] +[IgnoreBrowser(Browser.IE, "IE is flaky")] +public void TestWithMultipleDrivers() +{ +} + +[Test] +[IgnorePlatform("windows", "Thread time not supported")] +public void TestLinuxOnly() +{ +} +``` + +Browser values: `Browser.Chrome`, `Browser.Firefox`, `Browser.Edge`, `Browser.Safari`, `Browser.IE`, `Browser.Remote`, `Browser.All` + +### Driver Lifecycle + +| Attribute | When to Use | +|-----------|-------------| +| `[NeedsFreshDriver(IsCreatedBeforeTest = true)]` | Fresh driver before test | +| `[NeedsFreshDriver(IsCreatedAfterTest = true)]` | Fresh driver after test | + +```csharp +[Test] +[NeedsFreshDriver(IsCreatedBeforeTest = true, IsCreatedAfterTest = true)] +[IgnoreBrowser(Browser.Safari, "Safari doesn't support multiple instances")] +public void TestRequiringFreshDriver() +{ + IWebDriver driver2 = EnvironmentManager.Instance.CreateDriverInstance(); + try + { + // Test with multiple drivers + } + finally + { + driver2.Quit(); + } +} +``` + +## Helpers + +From `DriverTestFixture`: + +| Member | Description | +|--------|-------------| +| `driver` | Current WebDriver instance | +| `simpleTestPage`, `javascriptPage`, etc. 
| Test page URLs | +| `WaitFor(condition, timeout)` | Wait for condition (default 5s) | +| `CreateFreshDriver()` | Create new driver instance | + +From `EnvironmentManager.Instance`: + +| Member | Description | +|--------|-------------| +| `CreateDriverInstance()` | Create additional driver | +| `CreateDriverInstance(options)` | Create driver with custom options | +| `Browser` | Current browser enum value | + +## Test Organization + +``` +dotnet/test/ +├── common/ # Cross-browser tests +│ ├── DriverTestFixture.cs # Base class +│ ├── CustomTestAttributes/ # Custom NUnit attributes +│ └── *Test.cs # Test files +└── support/ # Support library tests +``` + +## Build Files + +* Adding tests shouldn't require Bazel changes—tests are picked up via glob. +* Make sure `*Test.cs` files are in a directory with `dotnet_nunit_test_suite` in BUILD.bazel. diff --git a/java/TESTING.md b/java/TESTING.md new file mode 100644 index 0000000000000..9b71281699750 --- /dev/null +++ b/java/TESTING.md @@ -0,0 +1,98 @@ +# Java Testing Guide + +This guide helps contributors write tests in the Selenium Java codebase. + +## Test Framework + +* Browser tests use JUnit 5 (Jupiter) and extend `JupiterTestBase`. +* Test HTML pages live in `common/src/web/`. +* `pages` field gets URLs from `java/test/org/openqa/selenium/testing/Pages.java`. +* There are 2 pre-configured wait methods for 5 seconds and 10 seconds. +* Assertions use the AssertJ library. + +```java +class MyFeatureTest extends JupiterTestBase { + @Test + void testBasicFunctionality() { + driver.get(pages.xhtmlTestPage); + + wait.until(ExpectedConditions.titleIs("XHTML Test Page")); // 10s timeout + shortWait.until(ExpectedConditions.elementToBeClickable(element)); // 5s timeout + + assertThat(driver.getTitle()).isEqualTo("XHTML Test Page"); + } + + @Test + void testWithLocalDriver() { + localDriver = new ChromeDriver(); + localDriver.get(pages.xhtmlTestPage); + wait(localDriver).until(ExpectedConditions.titleIs("XHTML Test Page")); // creates 10s wait + } +} +``` + +## Running Tests + +Bazel creates separate test targets for browsers and remote grid depending on +what is supported in the `BUILD.bazel` file in that test's directory. +Tests run in parallel by default. + +```shell +bazel test //java/... # Run all Java tests +bazel test //java/test/org/openqa/selenium/bidi/... # Run all tests in bidi directory +bazel test //java/test/org/openqa/selenium/chrome:ChromeDriverFunctionalTest # Run a specific test + +# Test Filters +bazel test //java/... --test_size_filters=small # unit tests only (no browser) +bazel test //java/... --test_size_filters=large # all browser tests +bazel test //java/... --test_tag_filters=chrome # chrome tests only +bazel test //java/... --test_tag_filters=remote-browser # all browser tests that run on grid +bazel test //java/... --test_tag_filters=-safari # no safari tests (use `-` to exclude a tag) + +# Additional Arguments +bazel test //java/... --pin_browsers # Use pinned browser versions (recommended) +bazel test //java/... --flaky_test_attempts=3 # Rerun failed tests up to 3 times +bazel test //java/... --test_output=all # displays all test output at end of run +bazel test //java/... --test_output=streamed # displays test output in real time (disables parallel execution) +``` + +## Annotations + +Valid Browser values: `CHROME`, `EDGE`, `FIREFOX`, `SAFARI`, `IE`, `ALL`. (Default is `ALL`) + +### Skipping and Expected Failures + +These accept parameters for `value` (browser name) and `reason`. Each browser on a separate line. 
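+
+For example, to skip a test on certain browsers and mark it as a known failure on another, stack one annotation per browser above the test method. This is a sketch only: the test body, reasons, and issue number are illustrative, and the browser constants are assumed to be statically imported as in the existing test classes.
+
+```java
+// Hypothetical test method inside a JupiterTestBase subclass; annotation values are illustrative.
+@Test
+@Ignore(value = SAFARI, reason = "Safari does not support this scenario", issue = "#1234")
+@Ignore(value = IE, reason = "Flaky on IE")
+@NotYetImplemented(value = EDGE, reason = "Pending driver support")
+void opensXhtmlTestPage() {
+  driver.get(pages.xhtmlTestPage);
+  assertThat(driver.getTitle()).isEqualTo("XHTML Test Page");
+}
+```
+
+The table below summarizes when to reach for each annotation.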
+ +| Annotation | When to Use | +|------------|-----------------------------------------------------------------------------------------------------------------------------------------------| +| `@Ignore` | Test doesn't work for a browser. Also accepts `issue` (GitHub issue, e.g. `"#1234"`) and `gitHubActions` (only skip on CI). | +| `@NotYetImplemented` | Feature isn't implemented yet. Test always runs and passes if it fails, and fails if it unexpectedly passes so the annotation can be removed. | + +### Driver Lifecycle + +The shared `driver` is reused across tests for efficiency. These annotations control that behavior. They accept `value` (browser) and `reason`. + +| Annotation | When to Use | +|------------|----------------------------------------------------------------------------------------------------------------------------------------------------------| +| `@NeedsFreshDriver` | Test needs clean browser state with default capabilities. Driver is restarted before the test. | +| `@NoDriverBeforeTest` | Test needs custom capabilities or tests driver creation itself. Driver is destroyed, must use `createNewDriver(capabilities)` in the test to create one. | +| `@NoDriverAfterTest` | Test leaves browser in a bad state. Driver is restarted after the test. Also accepts `failedOnly`. | + +For tests needing two browsers simultaneously (e.g., multi-user scenarios), create a second instance with `localDriver = new ChromeDriver()`. This driver is automatically quit after the test. + +If `createNewDriver(capabilities)` is called without an annotation, it closes the current driver and creates a new one. + +*Be careful with hard-coding the creation of new drivers since it may conflict with the current Bazel target.* + +### Other + +| Annotation | When to Use | +|------------|-------------| +| `@SwitchToTopAfterTest` | Test navigates into frames. Automatically switches to default content after. | +| `@NeedsSecureServer` | Class-level. All tests in the class need HTTPS. | + +## Build files + +* Adding tests shouldn't require changes in Bazel files since most are picked up automatically. +* Make sure the `*Test.java` file is in a directory that has a `BUILD.bazel` file with a `java_selenium_test_suite` declaration. diff --git a/javascript/selenium-webdriver/TESTING.md b/javascript/selenium-webdriver/TESTING.md new file mode 100644 index 0000000000000..6a9a371dcb005 --- /dev/null +++ b/javascript/selenium-webdriver/TESTING.md @@ -0,0 +1,128 @@ +# JavaScript Testing Guide + +This guide helps contributors write tests in the Selenium JavaScript codebase. + +## Test Framework + +- Tests use Mocha. +- Test HTML pages accessed via `Pages` object. +- `suite()` wrapper handles multi-browser setup. +- Assertions use Node.js `assert` module. 
+ +```javascript +const assert = require('node:assert') +const { Browser, By } = require('selenium-webdriver') +const { Pages, ignore, suite } = require('../lib/test') + +suite(function (env) { + let driver + + before(async function () { + driver = await env.builder().build() + }) + + after(function () { + return driver.quit() + }) + + it('should find element', async function () { + await driver.get(Pages.simpleTestPage) + let element = await driver.findElement(By.id('foo')) + assert.strictEqual(await element.getText(), 'expected') + }) + + ignore(env.browsers(Browser.SAFARI)).it('skipped on Safari', async function () { + // This test is skipped on Safari + }) +}) +``` + +## Running Tests + +```shell +bazel test //javascript/selenium-webdriver:small-tests # Unit tests (no browser) +bazel test //javascript/selenium-webdriver:all # All tests + +# Specific browser tests +bazel test //javascript/selenium-webdriver:element-finding-test-chrome +bazel test //javascript/selenium-webdriver:element-finding-test-firefox + +# Additional Arguments +bazel test //javascript/selenium-webdriver:... --flaky_test_attempts=3 +bazel test //javascript/selenium-webdriver:... --test_output=all +``` + +## Skipping Tests + +Use `ignore()` with browser predicates to skip tests: + +```javascript +const { ignore, suite } = require('../lib/test') + +suite(function (env) { + // Skip single test on Safari + ignore(env.browsers(Browser.SAFARI)).it('test name', async function () {}) + + // Skip on multiple browsers + ignore(env.browsers(Browser.CHROME, Browser.FIREFOX)).it('test name', async function () {}) + + // Skip entire describe block + ignore(env.browsers(Browser.IE)).describe('feature', function () { + it('test 1', async function () {}) + it('test 2', async function () {}) + }) +}) +``` + +Browser values: `Browser.CHROME`, `Browser.FIREFOX`, `Browser.SAFARI`, `Browser.EDGE`, `Browser.IE` + +## Helpers + +### From `lib/test` + +| Export | Description | +| ------------------- | --------------------------------------------------------- | +| `suite(fn)` | Test wrapper that handles driver setup per browser | +| `ignore(predicate)` | Skip tests when predicate returns true | +| `Pages` | Object with test page URLs (`Pages.simpleTestPage`, etc.) | +| `whereIs(path)` | Get URL for test resource | + +### Inside `suite(fn)` + +| Member | Description | +| ------------------------ | ----------------------------------------- | +| `env.builder()` | Get WebDriver builder for current browser | +| `env.browsers(...names)` | Predicate for browser matching | + +### Test Utilities (`test/lib/testutil.js`) + +| Utility | Description | +| ------------------------------ | -------------------------------------- | +| `callbackPair(success, error)` | Create callback pair for async testing | +| `StubError` | Error class for testing error handling | +| `assertIsStubError(err)` | Assert error is StubError | + +## Test Organization + +``` +javascript/selenium-webdriver/ +├── test/ +│ ├── lib/ # Small tests (no browser) +│ │ ├── by_test.js +│ │ └── promise_test.js +│ ├── *_test.js # Large tests (browser required) +│ ├── chrome/ # Chrome-specific tests +│ ├── firefox/ # Firefox-specific tests +│ └── bidi/ # BiDi protocol tests +└── lib/test/ # Test helpers + ├── index.js + └── fileserver.js +``` + +Test files end in `_test.js`. + +## Build Files + +- Adding tests shouldn't require Bazel changes for existing directories. +- Small tests (no browser) go in `test/lib/`. +- Large tests (browser required) go in `test/`. 
diff --git a/py/TESTING.md b/py/TESTING.md new file mode 100644 index 0000000000000..7852a1530dd28 --- /dev/null +++ b/py/TESTING.md @@ -0,0 +1,169 @@ +# Python Testing Guide + +This guide helps contributors write tests in the Selenium Python codebase. + +## Test Framework + +* Tests use [pytest](https://pytest.org). +* Test HTML pages live in `common/src/web/`. +* `pages` fixture loads test pages via `pages.load("pageName.html")`. +* Assertions use standard pytest `assert` statements. + +```python +def test_element_is_displayed(driver, pages): + pages.load("javascriptPage.html") + + element = driver.find_element(By.ID, "displayed") + assert element.is_displayed() is True + +@pytest.mark.xfail_safari(reason="Safari doesn't support this") +def test_something_safari_fails(driver, pages): + # Expected to fail on Safari + pass +``` + +## Test Organization + +``` +py/test/ +├── unit/ # Unit tests (no browser) +│ └── selenium/webdriver/ +└── selenium/webdriver/ # Integration tests + ├── common/ # Cross-browser tests + ├── chrome/ + ├── firefox/ + ├── safari/ + └── remote/ +``` + +Test files end in `_tests.py` (e.g., `visibility_tests.py`). + +## Running Tests + +Bazel creates test targets for each browser. Tests run in parallel by default. + +```shell +bazel test //py/... # All tests +bazel test //py:unit # Unit tests (no browser) +bazel test //py:test-chrome # Chrome browser tests +bazel test //py:test-firefox # Firefox browser tests +bazel test //py:common-chrome # Common tests with Chrome + +# A single test file with Chrome: +bazel test //py:common-chrome-test/selenium/webdriver/common/alerts_tests.py + +# With BiDi protocol +bazel test //py:common-chrome-bidi + +# Test filters +bazel test //py/... --test_tag_filters=chrome + +# Additional arguments +bazel test //py/... --flaky_test_attempts=3 +bazel test //py/... --test_output=all +bazel test //py/... --test_output=streamed # Live output for debugging +bazel test //py:test-chrome --headless + +# Run a specific test in a test file +bazel test //py:common-chrome-bidi-test/selenium/webdriver/common/bidi_browsing_context_tests.py \ + --test_arg=-k \ + --test_arg=test_get_tree_with_child \ + +# View all targets +bazel query //py/... +``` + +## Running Tests Without Bazel (using pytest) + +You can run tests directly with pytest after setting up the development environment. + +### Setup + +First, install the required dependencies: + +```shell +pip install -r py/requirements.txt +``` + +Then build the generated files and copy them into your local source tree: + +```shell +./go py:local_dev +``` + +### Running with pytest + +```shell +# Run all tests in a directory +pytest py/test/selenium/webdriver/chrome/ --driver chrome + +# Run a specific test file +pytest py/test/selenium/webdriver/common/window_tests.py + +# Run a specific test function +pytest py/test/selenium/webdriver/common/window_tests.py::test_should_get_the_size_of_the_current_window + +# With pytest options +pytest py/test/selenium/webdriver/chrome/ --driver chrome --headless -v +``` +> **Note:** +> For running BiDi tests, use the `--bidi` flag. + +## Fixtures + +We make use of +[pytest fixtures](https://docs.pytest.org/en/stable/reference/fixtures.html) +to simplify test setup/teardown. There are several +[built-in pytest fixtures](https://docs.pytest.org/en/stable/reference/fixtures.html), +and many of our own internal fixtures. If a fixture is specific to a module, you will +find it defined within the test file that uses it. 
If it is shared among several +modules, you will find the main fixtures in `conftest.py`: + +| Fixture | Description | +|---------|-------------| +| `driver` | WebDriver instance, auto-parametrized by browser | +| `pages` | Load test pages: `pages.load("page.html")` or `pages.url("page.html")` | +| `webserver` | Test HTTP server reference | +| `clean_driver` | Fresh driver without parametrization | +| `clean_options` | Fresh browser options instance | + +## Markers + +We use [pytest markers](https://docs.pytest.org/en/stable/how-to/mark.html) to indicate +special test behaviors. + +### Browser-specific Expected Failures + +Each accepts optional `reason` and `run` parameters. + +| Marker | When to Use | +|--------|-------------| +| `@pytest.mark.xfail_chrome` | Test expected to fail on Chrome | +| `@pytest.mark.xfail_firefox` | Test expected to fail on Firefox | +| `@pytest.mark.xfail_safari` | Test expected to fail on Safari | +| `@pytest.mark.xfail_edge` | Test expected to fail on Edge | +| `@pytest.mark.xfail_ie` | Test expected to fail on IE | +| `@pytest.mark.xfail_remote` | Test expected to fail with Remote WebDriver | + +```python +@pytest.mark.xfail_chrome(reason="Not implemented yet") +@pytest.mark.xfail_firefox(reason="https://bugzilla.mozilla.org/123") +def test_something(driver, pages): + pass + +@pytest.mark.xfail_safari(run=False) # Skip entirely instead of xfail +def test_skip_safari(driver, pages): + pass +``` + +### Driver Lifecycle + +| Marker | When to Use | +|--------|-------------| +| `@pytest.mark.no_driver_after_test` | Teardown driver after test | +| `@pytest.mark.needs_fresh_driver` | Restart driver for test isolation | + +## Build Files + +* Adding tests shouldn't require Bazel changes—files matching `*_tests.py` are picked up automatically. +* Make sure the test file is in a directory covered by existing `py_test_suite` targets. diff --git a/rb/TESTING.md b/rb/TESTING.md new file mode 100644 index 0000000000000..6fda6fc4d7c64 --- /dev/null +++ b/rb/TESTING.md @@ -0,0 +1,180 @@ +# Ruby Testing Guide + +This guide helps contributors write tests in the Selenium Ruby codebase. + +## Test Framework + +* Tests use RSpec. +* Test HTML pages live in `common/src/web/`. +* `url_for("page.html")` gets test page URLs. +* Helper methods: `driver`, `wait`, `short_wait`, `long_wait`. + +```ruby +module Selenium + module WebDriver + describe Element do + it 'returns element text' do + driver.get(url_for('simpleTest.html')) + expect(driver.find_element(id: 'foo').text).to eq('expected') + end + + it 'clicks element', except: {browser: :safari, reason: 'Safari bug'} do + # Skipped on Safari + end + end + end +end +``` + +## Running Tests + +Bazel creates test targets for each browser and remote variants. + +```shell +bazel test //rb/spec/... # All tests +bazel test //rb/spec/unit/... # Unit tests +bazel test //rb/spec/integration/... --test_tag_filters=chrome # Chrome tests +bazel test //rb/spec/integration/... --test_tag_filters=firefox # Firefox tests +bazel test //rb/spec/integration/... --test_tag_filters=chrome-remote # Chrome on Grid + +# Additional Arguments +bazel test //rb/... --test_output=all # See console output at the end +bazel test //rb/... --test_output=streamed # See console output real-time (removes parallel execution) +``` + +## Guards (Test Skipping) + +Guards control when tests run. Add them as metadata on `describe`, `context`, or `it` blocks. 
+ +| Guard | When to Use | +|-------|-------------| +| `except` | Test is pending if conditions ARE met | +| `only` | Test is pending if conditions are NOT met | +| `exclusive` | Test is skipped entirely if conditions not met | +| `exclude` | Test is skipped (use for broken/unreliable tests) | + +```ruby +# Skip on Safari +it 'does something', except: {browser: :safari, reason: 'Safari bug'} do +end + +# Only run on Chrome and Firefox +it 'does something', only: {browser: %i[chrome firefox]} do +end + +# Skip entirely (not pending) when BiDi enabled +describe Driver, exclusive: {bidi: false, reason: 'Not implemented with BiDi'} do +end + +# Multiple conditions +it 'something', exclude: [ + {browser: :safari}, + {browser: :firefox, reason: 'https://github.com/SeleniumHQ/selenium/issues/123'} +] do +end +``` + +### Guard Conditions + +| Condition | Values | +|-----------|--------| +| `browser` | `:chrome`, `:firefox`, `:edge`, `:safari`, `:safari_preview`, `:ie` | +| `driver` | `:remote` | +| `platform` | `:linux`, `:macos`, `:windows` | +| `headless` | `true`, `false` | +| `bidi` | `true`, `false` | +| `ci` | `true`, `false` | + +## Helpers + +From `spec_support/helpers.rb`: + +| Helper | Description | +|--------|-------------| +| `driver` | Current WebDriver instance | +| `reset_driver!(...)` | Reset driver with new options | +| `url_for(filename)` | Get test page URL | +| `wait` / `short_wait` / `long_wait` | Wait instances (10s, 3s, 30s) | +| `wait_for_element(locator)` | Wait for element to appear | +| `wait_for_alert` | Wait for alert presence | + +## Test Organization + +``` +rb/spec/ +├── unit/ # Unit tests (no browser) +│ └── selenium/webdriver/ +└── integration/ # Integration tests + └── selenium/webdriver/ + ├── chrome/ + ├── firefox/ + ├── safari/ + ├── bidi/ + └── spec_support/ # Test helpers +``` + +Test files end in `_spec.rb` (e.g., `driver_spec.rb`). + +## Build Files + +* Adding tests shouldn't require Bazel changes—`rb_integration_test` uses glob patterns. +* Make sure `*_spec.rb` files are in a directory with a `BUILD.bazel` containing `rb_integration_test`. + +## Environment Variables + +Environment variables control test execution behavior and enable specific features. + +### BiDi Testing + +To run tests with BiDi (Bidirectional) protocol enabled: + +```shell +# Enable BiDi for all tests +WD_REMOTE_BROWSER=chrome BIDI=true bazel test //rb/spec/integration/... + +# Run BiDi-specific tests +bazel test //rb/spec/integration/selenium/webdriver/bidi/... +``` + +### Available Variables + +| Variable | Purpose | Values | Example | +|----------|---------|--------|---------| +| `BIDI` | Enable BiDi protocol | `true`, `false` | `BIDI=true` | +| `WD_REMOTE_BROWSER` | Specify browser for remote tests | `chrome`, `firefox`, `edge`, `safari` | `WD_REMOTE_BROWSER=firefox` | +| `HEADLESS` | Run tests in headless mode | `true`, `false` | `HEADLESS=true` | +| `DEBUG` | Enable debug logging | `true`, `false` | `DEBUG=true` | + +### Examples + +```shell +# Run Chrome tests with BiDi enabled +BIDI=true bazel test //rb/spec/integration/... --test_tag_filters=chrome + +# Run headless Firefox tests +HEADLESS=true bazel test //rb/spec/integration/... --test_tag_filters=firefox + +# Run remote tests on Edge with BiDi +WD_REMOTE_BROWSER=edge BIDI=true bazel test //rb/spec/integration/... --test_tag_filters=remote + +# Combine multiple variables +BIDI=true HEADLESS=true DEBUG=true bazel test //rb/spec/integration/selenium/webdriver/bidi/... 
+``` + +### Testing Guard Behavior + +Environment variables interact with test guards. For example: + +```ruby +# This test only runs when BiDi is enabled +it 'uses BiDi feature', only: {bidi: true} do + # Test code +end + +# This test is excluded when BiDi is enabled +it 'classic WebDriver only', exclusive: {bidi: false} do + # Test code +end +``` + +Run with `BIDI=true` to see these guards in action. diff --git a/rust/TESTING.md b/rust/TESTING.md new file mode 100644 index 0000000000000..a2b0402e2a2d2 --- /dev/null +++ b/rust/TESTING.md @@ -0,0 +1,9 @@ +# Rust Testing Guide + +This guide helps contributors write tests in the Selenium Rust codebase. + +- `bazel test //rust/...` + +Recommended flags: +* `--test_env=RUST_BACKTRACE=full` (see full errors) +* `--test_env=RUST_TEST_NOCAPTURE=1` (display output)
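+
+Putting those together, a typical local run that surfaces full backtraces and test output might look like this (a sketch that simply combines the command and flags listed above):
+
+```shell
+# Run all Rust tests, showing full backtraces and uncaptured test output
+bazel test //rust/... --test_env=RUST_BACKTRACE=full --test_env=RUST_TEST_NOCAPTURE=1
+```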