fix unit test to use no grad #3283
Conversation
🔗 Helpful Links
🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3283
Note: links to docs will display an error until the docs builds have completed.
❗ 1 Active SEV: there is 1 currently active SEV; if your PR is affected, please review it.
✅ No failures as of commit 1fa1253 with merge base 9266734.
This comment was automatically generated by Dr. CI and updates every 15 minutes.
```python
# Needed since changing args to function causes recompiles
torch._dynamo.config.cache_size_limit = 128
torch.set_grad_enabled(False)
```
I think this disables grad for all the tests; that's probably why some unrelated optim tests are failing. Maybe we should scope it to within the specific float8 test that's failing? @liangel-02 @jerryzh168 what do you think?
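A minimal sketch of the per-test scoping idea suggested above (the class and test names here are hypothetical, not the actual PR code):

```python
import unittest

import torch


class TestFloat8Tensor(unittest.TestCase):
    # torch.no_grad() also works as a decorator, so gradients are
    # disabled only while this one test runs, leaving the global
    # grad mode untouched for the rest of the suite
    @torch.no_grad()
    def test_fp8_matmul(self):
        x = torch.randn(8, 8)
        w = torch.randn(8, 8)
        y = x @ w
        self.assertFalse(torch.is_grad_enabled())
        self.assertFalse(y.requires_grad)
```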
Yeah, I think we can put this in `setUp` of the test case.
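A minimal sketch of what that could look like, assuming a unittest-style test case (the class name is illustrative):

```python
import unittest

import torch


class TestFloat8Tensor(unittest.TestCase):
    def setUp(self):
        # remember the global grad mode and disable it for this test case
        self._prev_grad_mode = torch.is_grad_enabled()
        torch.set_grad_enabled(False)

    def tearDown(self):
        # restore the previous mode so unrelated tests (e.g. the optim
        # tests mentioned above) still run with autograd enabled
        torch.set_grad_enabled(self._prev_grad_mode)
```

Restoring the saved mode in `tearDown` (rather than hardcoding `True`) keeps the fix from leaking state in either direction.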
Force-pushed from 2f6ec9d to 97dc6cb.
Force-pushed from 97dc6cb to 1fa1253.
Summary
Updating the tests to run with grad disabled because #166367 broke CI. That PR has since been reverted, but we are updating the tests in anticipation of it relanding.
Test
```
python test/quantization/quantize_/workflows/float8/test_float8_tensor.py
```