Conversation

@danielvegamyhre danielvegamyhre commented Dec 4, 2025

As part of the release, I was running the tutorial workflows and encountered the following error, which indicates that pip can no longer find PyTorch CUDA 12.6 nightly builds for Python 3.9:

Looking in indexes: https://download.pytorch.org/whl/nightly/cu126
ERROR: Could not find a version that satisfies the requirement torch (from versions: none)
ERROR: No matching distribution found for torch
ERROR conda.cli.main_run:execute(125): `conda run pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu126` failed. (See above for error)

I noticed that other workflows are using Python 3.10 or 3.11, so I am bumping this version to match, which resolves the issue.
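The fix amounts to a one-line change to the Python version used by the workflow. A minimal sketch of what such a change looks like in a GitHub Actions workflow (the actual file name, job names, and step layout in this repo may differ):

```yaml
# Hypothetical excerpt of a tutorials CI workflow; the real file under
# .github/workflows/ may be structured differently.
jobs:
  run-tutorials:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          # Bumped from "3.9": the nightly cu126 index no longer
          # publishes torch wheels built for Python 3.9.
          python-version: "3.10"
      - name: Install torch nightly (cu126)
        run: |
          pip install --pre torch torchvision torchaudio \
            --index-url https://download.pytorch.org/whl/nightly/cu126
```

The "Could not find a version that satisfies the requirement" error arises because pip only considers wheels whose Python tag matches the running interpreter; once no cp39 wheels are published to the index, a Python 3.9 environment sees an empty candidate set.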

pytorch-bot bot commented Dec 4, 2025

🔗 Helpful Links

🧪 See artifacts and rendered test results at hud.pytorch.org/pr/pytorch/ao/3439

Note: Links to docs will display an error until the docs builds have been completed.

⏳ No Failures, 10 Pending

As of commit d5b8eae with merge base 16aad7c:
💚 Looks good so far! There are no failures yet. 💚

This comment was automatically generated by Dr. CI and updates every 15 minutes.

@meta-cla meta-cla bot added the CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. label Dec 4, 2025
@danielvegamyhre danielvegamyhre added ciflow/tutorials topic: bug fix Use this tag for PRs that fix bugs and removed CLA Signed This label is managed by the Facebook bot. Authors need to sign the CLA before a PR can be reviewed. labels Dec 4, 2025
@danielvegamyhre danielvegamyhre merged commit bcd5dbc into main Dec 4, 2025
23 of 25 checks passed
vkuzo added a commit that referenced this pull request Dec 10, 2025
* add MXFP8 all gather support

* added TODO for future feature

* remove emoji from comment

* fixed ruff formating

* fixed ruff formatting

* add mxfp8 and nvfp4 to Llama eval scripts (#3394)

Update

[ghstack-poisoned]

* flip mx inference scaling setting to RCEIL (#3428)

* Update

[ghstack-poisoned]

* Update

[ghstack-poisoned]

* Update

[ghstack-poisoned]

* add CLAUDE.local.md to gitignore (#3437)

Summary:

taking claude code for a more thorough spin, will start with local
instructions and will see what makes sense to upstream

Test Plan:

Reviewers:

Subscribers:

Tasks:

Tags:

* bump python version in tutorial ci workflow (#3439)

* [CPU] Reland qconv fp8 fusion passes (#3433)

* [Reland][PT2E][X86] Add Inductor fusion passes of float8 qconv for X86Inductor backend

* add torch version check for Qconv FP8 UTs

* fix format issue

* Skip tests for ROCm

---------

Co-authored-by: Sun, Jiayi <[email protected]>

* Int8Tensor migration cleanup (#3407)

* Int8Tensor migration

Summary:

This PR creates a new Int8Tensor and updates the configs to use the new
Int8Tensor flow

Test Plan:

To ensure BC:
```
pytest test/quantization/test_quant_api.py
```

To test new Int8Tensor:
```
pytest test/quantization/quantize_/workflows/int8/test_int8_tensor.py
```

Reviewers:

Subscribers:

Tasks:

Tags:

* ruff fixes

* add init

* fix ruff again

* update

* wip

* undo update tests

* fix ruff

* fix varname

* fix typing

* add tests

* fix dtype

* fix ci

* address granularity cr

* update _choose_quant_func_and_quantize_tensor

* make block size required attribute

* made dtype required as well

* address nits

* skip per tensor weight only test for now

* [xpu][test] Port 2 test/dtypes_{floatx, bitpacking} UT files to intel XPU (#3368)

* enable test/dtypes/test_bitpacking.py on intel xpu

* enable test/dtypes/test_floatx.py

* enable test/dtypes/test_floatx.py

* fix format issue

* fix format issue

* update _DEVICES

* [xpu][test] Port 2 test/quantization/pt2e/test_{quantize_pt2e, quantize_pt2e_qat} UT files to intel XPU (#3405)

* add test/quantization/pt2e/test_quantize_pt2e.py

* add test/quantization/pt2e/test_quantize_pt2e.py

* test/quantization/pt2e/test_quantize_pt2e_qat.py

* test/quantization/pt2e/test_quantize_pt2e_qat.py

* fix format issue

* update format

* increase timeout for xpu

* [Intel GPU] Enable optim SR test (#3055)

* updated test with rebase changes

* added checks to run only on CUDA with compatibility >=9

* updated test for H100

* added test to workflow

---------

Co-authored-by: Vasiliy Kuznetsov <[email protected]>
Co-authored-by: Daniel Vega-Myhre <[email protected]>
Co-authored-by: Xia Weiwen <[email protected]>
Co-authored-by: Sun, Jiayi <[email protected]>
Co-authored-by: Jesse Cai <[email protected]>
Co-authored-by: xiangdong <[email protected]>
Co-authored-by: Artur Lesniak <[email protected]>
namgyu-youn pushed a commit to namgyu-youn/ao that referenced this pull request Dec 19, 2025
namgyu-youn pushed a commit to namgyu-youn/ao that referenced this pull request Dec 19, 2025