Thus far, we've had the philosophy that if a task is still running by the end of a test then the test is erroneous. This is true in the literal sense that it will cause Python teardown errors in the test, but for some operations, e.g. tasks running while other futures in the event loop enter failure states, it could be perfectly acceptable that in production the task would be kicking around for a bit before failing.
Take the following aravis test (ophyd-async/tests/epics/adaravis/test_aravis.py, lines 136 to 157 at 4658c48):

```python
# (excerpt; the earlier lines of the test patch the writer's open() as mock_open)
with pytest.raises(
    ValueError,
    # str(EnumClass.value) handling changed in Python 3.11
    match=(
        "AravisController only supports the following trigger types: .* but"
    ),
):
    await test_adaravis.prepare(
        TriggerInfo(
            number_of_triggers=0,
            trigger=DetectorTrigger.VARIABLE_GATE,
            deadtime=1,
            livetime=1,
            frame_timeout=3,
        )
    )
mock_open.assert_called_once()
```
Instead of mocking private members, the developer should be able to articulate that a certain task will still be running at the end of the test.
Suggestion
A fixture that runs before the finalizer that checks for unfinished tasks at the end of each test:
```python
class AsyncTaskHandler:
    def __init__(self):
        self._allowed_pending_tasks = set()

    def expect_remaining_tasks(self, *task_names: str):
        self._allowed_pending_tasks.update(task_names)

    def close_expected_tasks(self):
        """Closes tasks from self._allowed_pending_tasks."""


@pytest.fixture
def pending_tasks_handler():
    handler = AsyncTaskHandler()
    yield handler
    handler.close_expected_tasks()


...


async def test_unsupported_trigger_excepts(
    test_adaravis: adaravis.AravisDetector, pending_tasks_handler
):
    pending_tasks_handler.expect_remaining_tasks("ADHDFWriter.open")
    with pytest.raises(
        ValueError,
        # str(EnumClass.value) handling changed in Python 3.11
        match=(
            "AravisController only supports the following trigger types: .* but"
        ),
    ):
        await test_adaravis.prepare(
            TriggerInfo(
                number_of_triggers=0,
                trigger=DetectorTrigger.VARIABLE_GATE,
                deadtime=1,
                livetime=1,
                frame_timeout=3,
            )
        )
```
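`close_expected_tasks` is left as a stub above. A minimal sketch of how it could be filled in, assuming pytest-asyncio (so the fixture can be async and tear down inside the running event loop) and assuming pending tasks are matched by their coroutine's `__qualname__` (e.g. `"ADHDFWriter.open"`); both of these are assumptions on my part rather than part of the proposal:

```python
import asyncio

import pytest


class AsyncTaskHandler:
    """Tracks tasks that are allowed to still be pending when a test finishes."""

    def __init__(self) -> None:
        self._allowed_pending_tasks: set[str] = set()

    def expect_remaining_tasks(self, *task_names: str) -> None:
        self._allowed_pending_tasks.update(task_names)

    def close_expected_tasks(self) -> None:
        """Cancel declared pending tasks; raise if anything else is still running."""
        unexpected = []
        for task in asyncio.all_tasks():
            # Skip the task driving the test/fixture itself and anything already finished.
            if task is asyncio.current_task() or task.done():
                continue
            # Match on the coroutine's qualified name, e.g. "ADHDFWriter.open".
            name = task.get_coro().__qualname__
            if name in self._allowed_pending_tasks:
                task.cancel()
            else:
                unexpected.append(name)
        if unexpected:
            raise RuntimeError(f"Unexpected pending tasks at end of test: {unexpected}")


@pytest.fixture
async def pending_tasks_handler():
    handler = AsyncTaskHandler()
    yield handler
    # Runs before any conftest finalizer that errors on leftover tasks.
    handler.close_expected_tasks()
```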
This forces developers to keep in mind which tasks will still be around in error states. It's clear from the code that this will happen while `open` is still pending. It could also be used in a context manager which could kill the tasks within a portion of a test, e.g.:
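One hypothetical shape for that usage (the `expected_pending` helper below is my own illustration rather than part of the proposal, reusing the `AsyncTaskHandler`, the fixture, and the ophyd-async names from the test above):

```python
from contextlib import asynccontextmanager


@asynccontextmanager
async def expected_pending(handler: AsyncTaskHandler, *task_names: str):
    """Allow the named tasks to stay pending inside this block, cancelling them on exit."""
    handler.expect_remaining_tasks(*task_names)
    try:
        yield handler
    finally:
        handler.close_expected_tasks()


async def test_unsupported_trigger_excepts(
    test_adaravis: adaravis.AravisDetector, pending_tasks_handler
):
    async with expected_pending(pending_tasks_handler, "ADHDFWriter.open"):
        with pytest.raises(
            ValueError,
            match="AravisController only supports the following trigger types: .* but",
        ):
            await test_adaravis.prepare(
                TriggerInfo(
                    number_of_triggers=0,
                    trigger=DetectorTrigger.VARIABLE_GATE,
                    deadtime=1,
                    livetime=1,
                    frame_timeout=3,
                )
            )
    # From here on the ADHDFWriter.open task has been cancelled, so the rest of
    # the test can make assertions against a clean event loop.
```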
Originally posted by @evalott100 in #730 (comment)
Considerations
Would this have use on the plan level?