Refactor of discovery, more robust testing of the framework, update to docs.
Showing 7 changed files with 342 additions and 105 deletions.
@@ -1,4 +1,106 @@
""" | ||
How do you test a test framework? | ||
You can't use the test framework to test itself, because it may contain bugs! | ||
Hence this script, which uses upytest to run tests and check the results are as | ||
expected. The expected results are hard-coded in the script, and the actual | ||
results are generated by running tests with upytest. The script then compares | ||
the expected and actual results to ensure they match. | ||
Finally, the script creates a div element to display the results in the page. | ||
If the tests fail, the script will raise an AssertionError, which will be | ||
displayed with a red background. If the tests pass, the script will display a | ||
message with a green background. | ||
This script will work with both MicroPython and Pyodide, just so we can ensure | ||
the test framework works in both environments. The index.html file uses | ||
MicroPython, the index2.html file uses Pyodide. | ||
That's it! You can now test a test framework with a meta-test framework. 🤯 | ||
""" | ||
|
||
from pyscript.web import page, div, h2, p, b
import upytest

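# NOTE (assumption): upytest.run() is awaited at the top level, which works
# because PyScript executes this script in an async context. Each call is
# assumed to resolve to a mapping with "passes", "fails" and "skipped" keys,
# each holding a list of test names; the hard-coded expectations and the
# assertions below rely on exactly that shape.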
expected_results = {
    "result_all": {
        "passes": 6,
        "fails": 6,
        "skipped": 2,
    },
    "result_module": {
        "passes": 5,
        "fails": 6,
        "skipped": 2,
    },
    "result_specific": {
        "passes": 1,
        "fails": 0,
        "skipped": 0,
    },
}

actual_results = {}
# Run all tests in the tests directory.
actual_results["result_all"] = await upytest.run("./tests")
# Run all tests in a specific module.
actual_results["result_module"] = await upytest.run(
    "tests/test_core_functionality.py"
)
# Run a specific test function.
actual_results["result_specific"] = await upytest.run(
    "tests/test_core_functionality.py::test_passes"
)

# Check each run produced the expected number of passing, failing and skipped
# tests.
for name, result in expected_results.items():
    for key, value in result.items():
        actual = len(actual_results[name][key])
        assert (
            actual == value
        ), f"Expected {value} {key} in {name}, got {actual}"

# By convention, the names of passing tests end in "passes", failing tests in
# "fails", and skipped tests in "skipped". These suffixes match the keys of
# the result mapping, so a simple endswith check confirms each test landed in
# the right bucket.
for name, result in actual_results.items():
    for key, value in result.items():
        for test in value:
            assert test.endswith(key), f"Test {test} does not end with {key}"

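# Reaching this point means every assertion above passed. Had any assertion
# failed, the resulting AssertionError would be surfaced by PyScript with a
# red background instead (see the module docstring).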
# Create a div to display the results in the page.
page.append(
    div(
        h2("Test Results All Correct ✨"),
        div(
            p(
                b("All Tests: "),
                f"Passes: {len(actual_results['result_all']['passes'])}"
                f" Fails: {len(actual_results['result_all']['fails'])}"
                f" Skipped: {len(actual_results['result_all']['skipped'])}",
            ),
        ),
        div(
            p(
                b("Tests in a Specified Module: "),
                f"Passes: {len(actual_results['result_module']['passes'])}"
                f" Fails: {len(actual_results['result_module']['fails'])}"
                f" Skipped: {len(actual_results['result_module']['skipped'])}",
            ),
        ),
        div(
            p(
                b("Test a Specific Test: "),
                f"Passes: {len(actual_results['result_specific']['passes'])}"
                f" Fails: {len(actual_results['result_specific']['fails'])}"
                f" Skipped: {len(actual_results['result_specific']['skipped'])}",
            ),
        ),
        style={
            "background-color": "lightgreen",
            "padding": "10px",
            "border": "1px solid green",
        },
    )
)
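
For context, here is a minimal sketch of a test module that would satisfy the naming convention checked above. It is illustrative only: the file name, the test bodies, and the upytest.skip decorator are assumptions in the pytest style, not code taken from this repository.

# tests/test_example.py (hypothetical)
import upytest


def test_this_always_passes():
    # Counted under "passes"; its name ends in "passes".
    assert 1 + 1 == 2


def test_this_always_fails():
    # Counted under "fails"; its name ends in "fails".
    assert 1 + 1 == 3, "This test fails on purpose."


@upytest.skip("Demonstrates a skipped test")  # Assumed pytest-style decorator.
def test_this_is_skipped():
    # Counted under "skipped"; its name ends in "skipped". Never runs.
    assert False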