Inconsistent results for a test when running in-suite #842
Adding more information after some reading: I have no session-scoped fixtures in my suite.
Ah, I think I see the problem. It looks like you're inadvertently dependent on test execution order, and I would recommend avoiding that. I would also avoid putting the expected result as part of your parameterization, because different results ultimately mean different behavior under test, and it tends to cause problems like this one because of the test structure it requires. I know the docs demonstrate parameterization this way, but it's likely to cause problems in tests that have complexity beyond small unit tests.

Pytest also isn't built to handle the results of fixtures as parameter inputs. Parameter inputs are really meant for the arguments you would pass into a function, rather than the result of that function. If you can provide some more detail about what those fixtures are doing, that would help.
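As a rough illustration of the distinction being drawn here (the function and test names below are hypothetical, not taken from the original suite), parametrizing over plain input values and keeping the expected behavior inside the test body looks something like this:

```python
import pytest


def total_price(unit_price, quantity):
    # Hypothetical function under test, standing in for the real code.
    return unit_price * quantity


# Parametrize over plain inputs only; the expectation is expressed in the
# test body instead of being passed in as another parameter.
@pytest.mark.parametrize("quantity", [1, 5, 10])
def test_total_price_scales_with_quantity(quantity):
    unit_price = 3
    assert total_price(unit_price, quantity) == unit_price * quantity
```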
Interesting. How so?
Noted. Thank you.
This is the only way I found that lets me keep my tests DRY, as they contain a lot of set-up. `test_some` and `test_some_other` are simply Django user objects with a certain attribute that, when found, should change a certain subset of the output given the same inputs (besides the User object, of course).
Is there an alternative to this one? It seemed like a lifesaver when I discovered it.
I could be wrong, but it seems another test/fixture in your suite is doing something to the cached value of your fixtures, or to the state of a common asset, that wasn't "undone" properly. When a test consistently produces one result in isolation but a different result when run with other tests, its dependence on those other tests is almost always the reason why.
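To make that failure mode concrete, here is a deliberately contrived sketch (nothing in it comes from the original suite) of how a cached, shared fixture can make a test pass in isolation but fail in-suite:

```python
import pytest


# A fixture with a scope wider than "function" is created once and cached,
# so whatever one test does to it is still visible to the next test.
@pytest.fixture(scope="module")
def settings():
    return {"feature_enabled": True}


def test_disabling_the_feature(settings):
    settings["feature_enabled"] = False  # mutates the cached value...
    assert settings["feature_enabled"] is False


def test_feature_is_enabled_by_default(settings):
    # ...so this passes when run on its own, but fails when the test
    # above has already run in the same module.
    assert settings["feature_enabled"] is True
```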
DRY is more of a warning that you may be missing an opportunity to use more DAMP (i.e. abstraction), not just an opportunity to reduce duplicate code. There's also KISS to keep things straightforward. It's fine, and often a good thing, to embrace having many fixtures and many tests. I'm not sure I follow the use case though. It sounds like you have a sort of "if the object has this attribute, then do this, otherwise do that" situation. Is that the case?
There's always an alternative. I find that when folks reach for the internal pytest stuff, or even some of the hooks, to do certain things, they may have missed another pytest feature, are missing a more robust abstraction, or haven't found a more effective structure for their fixtures and tests.

I would say not to fear (for lack of a better word) expensive setups (which I'm assuming yours are, based on what you said about them having a lot of setup), but to leverage them a little more carefully. For example, I use larger scopes, like class scope, to represent a resulting state. I only use the class scope for my fixtures, and make sure that my test methods in a given test class only query and never modify the state of the SUT. This lets me run as many asserts as I want against a common state without having to worry about one test stepping on the toes of any other. As long as I make sure any state-changing fixtures for that test class run before any test executes (which I use autouse and fixture request chains for), then I'm golden. A rough sketch of this pattern follows below.

I don't think any one of these suggestions answers your question, but I feel collectively they might. Let me know if not, and we can dive in a bit further.
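A minimal sketch of that class-scoped pattern, assuming a plain in-memory object rather than anything from the original suite (all names are hypothetical):

```python
import pytest


class TestCheckoutWithDiscount:
    """All state is established once per class; the tests only read it."""

    @pytest.fixture(scope="class")
    def cart(self):
        # Hypothetical expensive setup producing the state every test in
        # this class will inspect.
        return {"items": [("book", 20), ("pen", 2)], "discount": 0.5}

    @pytest.fixture(scope="class", autouse=True)
    def apply_discount(self, cart):
        # State-changing fixture; autouse plus the request chain on `cart`
        # guarantees it runs once, before any test method in the class.
        cart["total"] = sum(price for _, price in cart["items"]) * cart["discount"]

    def test_total_is_discounted(self, cart):
        assert cart["total"] == 11

    def test_items_are_untouched(self, cart):
        assert len(cart["items"]) == 2
```

Because the test methods never modify `cart`, the asserts can run in any order against the same shared state.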
@SalmonMode Thanks for taking the time to answer and educate.
I guess the only question that remains is: is `getfixturevalue` supposed to behave like this? If it is, then I misunderstood it. If it's not, then it's a bug in pytest? Maybe the library shouldn't advertise a feature that some say should be avoided at all costs?
The cause of the issue that I mentioned isn't directly related to `getfixturevalue` itself. There are some issues in pytest that pytest-dev/pytest#6551 should fix, and some in pytest-django that #807 should fix in tandem with the other fix, and they can be the cause of this as well. But since you used both the PRs I mentioned and still had the issue, I think it's likely the issues I mentioned above.

That said, I agree that pytest shouldn't advertise such features when there are simpler and safer ways to achieve results. But it's an enormous investment to come up with a sanctioned, standardized structure for pytest tests and fixtures, and good educational material to teach users. Also, it's not my call.
It seems that one particular test outputs different results depending on whether I'm running the whole suite or not.
When running the test on its own, it passes. But the test fails if I run the whole suite.
I've been debugging this for the past few hours to no avail. I've removed `--reuse-db --no-migrations` and brought them back, no dice. The behaviour under test has been manually verified, and the test should definitely pass.

Some details:
The test uses indirect parametrization like so:
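Indirect parametrization of the kind being described usually looks roughly like the following; the `user` routing fixture, the attribute checked, and the expected values are illustrative guesses rather than the original code, and `test_some` / `test_some_other` are assumed to match the fixtures sketched a little further down:

```python
import pytest


@pytest.fixture
def user(request):
    # Indirect parameter: the string given in parametrize is resolved to
    # the fixture of the same name at run time.
    return request.getfixturevalue(request.param)


@pytest.mark.parametrize(
    "user, expect_changed_output",
    [
        ("test_some", True),
        ("test_some_other", False),
    ],
    indirect=["user"],
)
def test_output_depends_on_user(user, expect_changed_output):
    # The real test exercises a view; this just shows the shape, with the
    # expected result supplied by the parametrization.
    assert user.is_staff is expect_changed_output
```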
Both `test_some` and `test_some_other` are fixtures defined in `fixtures.py`:
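A hypothetical reconstruction of what `fixtures.py` might contain, using pytest-django's `db` and `django_user_model` fixtures (`is_staff` just stands in for whichever attribute actually drives the behaviour):

```python
# fixtures.py
import pytest


@pytest.fixture
def test_some(db, django_user_model):
    # User carrying the attribute that should change part of the output.
    return django_user_model.objects.create(username="some", is_staff=True)


@pytest.fixture
def test_some_other(db, django_user_model):
    # User without that attribute.
    return django_user_model.objects.create(username="some-other", is_staff=False)
```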
Any suggestions?