posts/intel-pytorch-extension-tutorial/native-windows/ #43

Open
utterances-bot opened this issue Oct 12, 2023 · 42 comments

Comments

@utterances-bot

Christian Mills - Getting Started with Intel’s PyTorch Extension for Arc GPUs on Windows

This tutorial provides a step-by-step guide to setting up Intel’s PyTorch extension on Windows to train models with Arc GPUs.

https://christianjmills.com/posts/intel-pytorch-extension-tutorial/native-windows/


ArchDD commented Oct 12, 2023

Hi Christian Mills,

Excellent tutorial! Thank you for sharing!

However, following this tutorial, I don't know why only PyTorch imports correctly; intel_extension_for_pytorch cannot be imported. I wish someone could help me with it.

Also, another way to make Intel GPUs available for PyTorch on Windows may be via DirectML. I got it working successfully with TensorFlow.

I wonder which is better: intel_extension_for_pytorch or DirectML.

Owner

Hi @ArchDD,
Without additional information, it is hard to determine why you can't import the intel_extension_for_pytorch dependency. For now, I can only recommend you double-check that you followed all the steps in the tutorial.
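
As a first debugging step, importing each package separately often pinpoints the failure (a small sketch, not from the tutorial; on Windows a broken oneAPI environment usually surfaces here as an OSError about missing DLLs rather than a plain ImportError):

```python
import importlib

# Import each package on its own so the error message identifies the culprit.
for name in ("torch", "intel_extension_for_pytorch"):
    try:
        module = importlib.import_module(name)
        print(f"{name}: OK (version {module.__version__})")
    except Exception as err:  # ImportError, or OSError for missing DLLs
        print(f"{name}: FAILED -> {type(err).__name__}: {err}")
```

The exception type and message are usually more informative than the bare import failure.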

Intel's PyTorch extension offers some key benefits compared to using TensorFlow with DirectML.

First, Intel's PyTorch extension still receives updates, with the most recent release being in August. In contrast, the tensorflow-directml package last received an update over a year ago, and the tensorflow-directml-plugin package in February. The tensorflow-directml package does not even support current Python versions.

Second, as mentioned in this post, Intel's PyTorch extension provides optimizations to take advantage of the Xe Matrix Extensions inside Arc GPUs.
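
For reference, enabling those optimizations in training code typically looks like the following (a hedged sketch: `ipex.optimize()` is the extension's documented entry point, but the model and optimizer here are placeholders, and the imports are guarded so the snippet is safe to run anywhere):

```python
# Sketch: enable Intel's XMX-aware optimizations via ipex.optimize().
# Guarded imports so the snippet degrades gracefully without the packages.
try:
    import torch
    import intel_extension_for_pytorch as ipex
except ImportError:
    torch = ipex = None

if torch is not None and ipex is not None:
    # Placeholder model/optimizer -- substitute your own.
    model = torch.nn.Linear(128, 10).to("xpu")
    optimizer = torch.optim.AdamW(model.parameters())
    # bfloat16 routes the matrix math through the Arc GPU's XMX units.
    model, optimizer = ipex.optimize(model, optimizer=optimizer, dtype=torch.bfloat16)
```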


ArchDD commented Oct 14, 2023

Thank you for providing the information about the differences between the two methods. It helps a lot in deciding which method to use, and I will check the installation again.


ArchDD commented Oct 14, 2023

Hi Christian Mills,

I think I have installed correctly.

I got this message after importing torch and intel_extension_for_pytorch:

C:\Users\NUC\mambaforge\envs\pytorch-arc\Lib\site-packages\torchvision\io\image.py:13: UserWarning: Failed to load image Python extension: '[WinError 127] Module not found. 'If you don't plan on using image functionality from torchvision.io, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have libjpeg or libpng installed before building torchvision from source?
warn(

2.0.0a0+gitc6a572f
2.0.110+gitba7f6c1
[0]: _DeviceProperties(name='Intel(R) Arc(TM) A770M Graphics', platform_name='Intel(R) Level-Zero', dev_type='gpu, support_fp64=0, total_memory=15930MB, max_compute_units=512, gpu_eu_count=512)

It seems torchvision cannot work, but I don't need torchvision at the moment anyway.

However, I am wondering how to use PyCharm instead of a Jupyter notebook. Do you have any suggestions?

Thank you so much!

@cj-mills
Owner

Hi @ArchDD,

The UserWarning message you posted is not an issue, and you can still use torchvision if needed. The team that works on Intel's extension did not build torchvision with those libraries installed, but they are not required.

Regarding PyCharm, I don't use it personally, but I believe it has a place to set environment variables. Check out the documentation:


ArchDD commented Oct 17, 2023

Hi Christian Mills,
Thank you so much.
It is such a detailed tutorial that I shared your page at https://zhuanlan.zhihu.com/p/661344296, a Chinese community for answering questions, and credited you as the original author.
I will take it down if you would prefer.
Best


ArchDD - conda install libpng and conda install jpeg will make the warnings go away


Hi, Christian Mills,
Thank you very much for this guide.
I did everything according to your instructions, but since a new version of Intel's PyTorch extension, v2.1.10+xpu, has been released, I decided to install it.
Everything installed fine.
But when I started to build TorchVision, I got errors; maybe it is because the tutorial installed 2.0.110 while I used v2.1.10+xpu.
How critical are these errors, and is it possible to fix them? This is what the errors look like:

(pytorch-arc) C:\Users\chema>compile_bundle.bat "C:\Program Files (x86)\Intel\oneAPI\compiler\latest" "C:\Program Files (x86)\Intel\oneAPI\mkl\2024.0"

(pytorch-arc) C:\Users\chema>rem Install Python dependencies

(pytorch-arc) C:\Users\chema>python -m pip install cmake astunparse numpy ninja pyyaml mkl-static mkl-include setuptools cffi typing_extensions future six requests dataclasses Pillow
Collecting cmake
  Downloading cmake-3.28.1-py2.py3-none-win_amd64.whl.metadata (6.5 kB)
Collecting astunparse
  Downloading astunparse-1.6.3-py2.py3-none-any.whl (12 kB)
Requirement already satisfied: numpy in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (1.26.4)
Collecting ninja
  Downloading ninja-1.11.1.1-py2.py3-none-win_amd64.whl.metadata (5.4 kB)
Requirement already satisfied: pyyaml in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (6.0.1)
Collecting mkl-static
  Downloading mkl_static-2024.0.0-py2.py3-none-win_amd64.whl.metadata (1.5 kB)
Collecting mkl-include
  Downloading mkl_include-2024.0.0-py2.py3-none-win_amd64.whl.metadata (1.3 kB)
Requirement already satisfied: setuptools in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (68.2.2)
Requirement already satisfied: cffi in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (1.16.0)
Requirement already satisfied: typing_extensions in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (4.9.0)
Collecting future
  Downloading future-0.18.3.tar.gz (840 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 840.9/840.9 kB 1.4 MB/s eta 0:00:00
  Preparing metadata (setup.py) ... done
Requirement already satisfied: six in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (1.16.0)
Requirement already satisfied: requests in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (2.31.0)
Collecting dataclasses
  Downloading dataclasses-0.6-py3-none-any.whl (14 kB)
Requirement already satisfied: Pillow in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (10.2.0)
Requirement already satisfied: wheel<1.0,>=0.23.0 in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (from astunparse) (0.41.2)
Collecting intel-openmp==2024.* (from mkl-static)
  Downloading intel_openmp-2024.0.2-py2.py3-none-win_amd64.whl.metadata (1.2 kB)
Collecting tbb==2021.* (from mkl-static)
  Downloading tbb-2021.11.0-py3-none-win_amd64.whl.metadata (1.1 kB)
Requirement already satisfied: pycparser in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (from cffi) (2.21)
Requirement already satisfied: charset-normalizer<4,>=2 in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (from requests) (3.3.2)
Requirement already satisfied: idna<4,>=2.5 in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (from requests) (3.6)
Requirement already satisfied: urllib3<3,>=1.21.1 in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (from requests) (2.2.0)
Requirement already satisfied: certifi>=2017.4.17 in c:\users\chema\miniforge3\envs\pytorch-arc\lib\site-packages (from requests) (2024.2.2)
Downloading cmake-3.28.1-py2.py3-none-win_amd64.whl (35.8 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 35.8/35.8 MB 212.2 kB/s eta 0:00:00
Downloading ninja-1.11.1.1-py2.py3-none-win_amd64.whl (312 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 313.0/313.0 kB 717.3 kB/s eta 0:00:00
Downloading mkl_static-2024.0.0-py2.py3-none-win_amd64.whl (220.3 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 220.3/220.3 MB 315.7 kB/s eta 0:00:00
Downloading mkl_include-2024.0.0-py2.py3-none-win_amd64.whl (1.2 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 1.2/1.2 MB 519.8 kB/s eta 0:00:00
Downloading intel_openmp-2024.0.2-py2.py3-none-win_amd64.whl (3.9 MB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 3.9/3.9 MB 586.4 kB/s eta 0:00:00
Downloading tbb-2021.11.0-py3-none-win_amd64.whl (298 kB)
   ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 298.3/298.3 kB 317.6 kB/s eta 0:00:00
Building wheels for collected packages: future
  Building wheel for future (setup.py) ... done
  Created wheel for future: filename=future-0.18.3-py3-none-any.whl size=492054 sha256=d633e1dce98d3cfd40a96925ec2740482b82bb838b4c866d4948c74e7d0258f4
  Stored in directory: c:\users\chema\appdata\local\pip\cache\wheels\da\19\ca\9d8c44cd311a955509d7e13da3f0bea42400c469ef825b580b
Successfully built future
Installing collected packages: tbb, ninja, mkl-include, intel-openmp, dataclasses, cmake, mkl-static, future, astunparse
Successfully installed astunparse-1.6.3 cmake-3.28.1 dataclasses-0.6 future-0.18.3 intel-openmp-2024.0.2 mkl-include-2024.0.0 mkl-static-2024.0.0 ninja-1.11.1.1 tbb-2021.11.0

(pytorch-arc) C:\Users\chema>rem Checkout individual components

(pytorch-arc) C:\Users\chema>if NOT EXIST pytorch (git clone https://github.com/pytorch/pytorch.git )
Cloning into 'pytorch'...
remote: Enumerating objects: 1236305, done.
remote: Counting objects: 100% (3136/3136), done.
remote: Compressing objects: 100% (1438/1438), done.
error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 8)
error: 5594 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

(pytorch-arc) C:\Users\chema>if NOT EXIST vision (git clone https://github.com/pytorch/vision.git )
Cloning into 'vision'...
remote: Enumerating objects: 466167, done.
remote: Counting objects: 100% (46625/46625), done.
remote: Compressing objects: 100% (2095/2095), done.
error: RPC failed; curl 92 HTTP/2 stream 5 was not closed cleanly: CANCEL (err 8)
error: 54640 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

(pytorch-arc) C:\Users\chema>if NOT EXIST intel-extension-for-pytorch (git clone https://github.com/intel/intel-extension-for-pytorch.git )
Cloning into 'intel-extension-for-pytorch'...
remote: Enumerating objects: 81414, done.
remote: Counting objects: 100% (9332/9332), done.
remote: Compressing objects: 100% (2479/2479), done.
error: 5458 bytes of body are still expected
fetch-pack: unexpected disconnect while reading sideband packet
fatal: early EOF
fatal: fetch-pack: invalid index-pack output

(pytorch-arc) C:\Users\chema>rem Checkout required branch/commit and update submodules

(pytorch-arc) C:\Users\chema>cd pytorch
The system cannot find the path specified.

(pytorch-arc) C:\Users\chema>if not "v2.0.1" == "" (git checkout v2.0.1 )
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\Users\chema>git submodule sync
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\Users\chema>git submodule update --init --recursive
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\Users\chema>cd ..

(pytorch-arc) C:\Users>cd vision
The system cannot find the path specified.

(pytorch-arc) C:\Users>if not "v0.15.2" == "" (git checkout v0.15.2 )
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\Users>git submodule sync
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\Users>git submodule update --init --recursive
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\Users>cd ..

(pytorch-arc) C:\>cd intel-extension-for-pytorch
The system cannot find the path specified.

(pytorch-arc) C:\>if not "v2.0.110+xpu" == "" (git checkout v2.0.110+xpu )
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\>git submodule sync
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\>git submodule update --init --recursive
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\>rem Compile individual component

(pytorch-arc) C:\>rem PyTorch

(pytorch-arc) C:\>cd ..\pytorch
The system cannot find the path specified.

(pytorch-arc) C:\>git stash
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\>git clean -f
fatal: not a git repository (or any of the parent directories): .git

(pytorch-arc) C:\>for %f in ("..\intel-extension-for-pytorch\torch_patches\*.patch") do git apply "%f"

(pytorch-arc) C:\>python -m pip install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

(pytorch-arc) C:\>call conda install -q --yes -c conda-forge libuv=1.39
Retrieving notices: ...working... done
Channels:
 - conda-forge
 - defaults
Platform: win-64
Collecting package metadata (repodata.json): ...working... done
Solving environment: ...working... done

## Package Plan ##

  environment location: C:\Users\chema\miniforge3\envs\pytorch-arc

  added / updated specs:
    - libuv=1.39


The following packages will be downloaded:

    package                    |            build
    ---------------------------|-----------------
    ca-certificates-2024.2.2   |       h56e8100_0         152 KB  conda-forge
    libuv-1.39.0               |       he774522_0         260 KB
    openssl-3.2.1              |       hcfcfb64_0         7.8 MB  conda-forge
    ucrt-10.0.22621.0          |       h57928b3_0         1.2 MB  conda-forge
    vc14_runtime-14.38.33130   |      h82b7239_18         732 KB  conda-forge
    vs2015_runtime-14.38.33130 |      hcb4865c_18          17 KB  conda-forge
    ------------------------------------------------------------
                                           Total:        10.2 MB

The following NEW packages will be INSTALLED:

  ucrt               conda-forge/win-64::ucrt-10.0.22621.0-h57928b3_0
  vc14_runtime       conda-forge/win-64::vc14_runtime-14.38.33130-h82b7239_18

The following packages will be UPDATED:

  ca-certificates    pkgs/main::ca-certificates-2023.12.12~ --> conda-forge::ca-certificates-2024.2.2-h56e8100_0
  openssl              pkgs/main::openssl-3.0.13-h2bbff1b_0 --> conda-forge::openssl-3.2.1-hcfcfb64_0
  vs2015_runtime     pkgs/main::vs2015_runtime-14.27.29016~ --> conda-forge::vs2015_runtime-14.38.33130-hcb4865c_18

The following packages will be DOWNGRADED:

  libuv                                   1.44.2-h2bbff1b_0 --> 1.39.0-he774522_0


Preparing transaction: ...working... done
Verifying transaction: ...working... done
Executing transaction: ...working... done

(pytorch-arc) C:\>rem Ensure cmake can find python packages when using conda or virtualenv

(pytorch-arc) C:\>if defined CONDA_PREFIX (set "CMAKE_PREFIX_PATH=:\Users\chema\miniforge3\envs\pytorch-ar" )  else if defined VIRTUAL_ENV (set "CMAKE_PREFIX_PATH=~1,-1" )

(pytorch-arc) C:\>set "USE_STATIC_MKL=1"

(pytorch-arc) C:\>set "USE_NUMA=0"

(pytorch-arc) C:\>set "USE_CUDA=0"

(pytorch-arc) C:\>python setup.py clean
python: can't open file 'C:\\setup.py': [Errno 2] No such file or directory

(pytorch-arc) C:\>python setup.py bdist_wheel
python: can't open file 'C:\\setup.py': [Errno 2] No such file or directory

(pytorch-arc) C:\>set "USE_CUDA="

(pytorch-arc) C:\>set "USE_NUMA="

(pytorch-arc) C:\>set "USE_STATIC_MKL="

(pytorch-arc) C:\>set "CMAKE_PREFIX_PATH="

(pytorch-arc) C:\>python -m pip uninstall -y mkl-static mkl-include
Found existing installation: mkl-static 2024.0.0
Uninstalling mkl-static-2024.0.0:
  Successfully uninstalled mkl-static-2024.0.0
Found existing installation: mkl-include 2024.0.0
Uninstalling mkl-include-2024.0.0:
  Successfully uninstalled mkl-include-2024.0.0

(pytorch-arc) C:\>for %f in ("dist\*.whl") do python -m pip install --force-reinstall --no-deps "%f"

(pytorch-arc) C:\>rem TorchVision

(pytorch-arc) C:\>cd ..\vision
The system cannot find the path specified.

(pytorch-arc) C:\>set "DISTUTILS_USE_SDK=1"

(pytorch-arc) C:\>python setup.py clean
python: can't open file 'C:\\setup.py': [Errno 2] No such file or directory

(pytorch-arc) C:\>python setup.py bdist_wheel
python: can't open file 'C:\\setup.py': [Errno 2] No such file or directory

(pytorch-arc) C:\>set "DISTUTILS_USE_SDK="

(pytorch-arc) C:\>for %f in ("dist\*.whl") do python -m pip install --force-reinstall --no-deps "%f"

(pytorch-arc) C:\>rem Intel® Extension for PyTorch*

(pytorch-arc) C:\>cd ..\intel-extension-for-pytorch
The system cannot find the path specified.

(pytorch-arc) C:\>python -m pip install -r requirements.txt
ERROR: Could not open requirements file: [Errno 2] No such file or directory: 'requirements.txt'

(pytorch-arc) C:\>if NOT "" == "" (set "USE_AOT_DEVLIST=" )

(pytorch-arc) C:\>set "BUILD_WITH_CPU=0"

(pytorch-arc) C:\>set "USE_MULTI_CONTEXT=1"

(pytorch-arc) C:\>set "DISTUTILS_USE_SDK=1"

(pytorch-arc) C:\>set "CMAKE_CXX_COMPILER=icx"

(pytorch-arc) C:\>python setup.py clean
python: can't open file 'C:\\setup.py': [Errno 2] No such file or directory

(pytorch-arc) C:\>python setup.py bdist_wheel
python: can't open file 'C:\\setup.py': [Errno 2] No such file or directory

(pytorch-arc) C:\>set "CMAKE_CXX_COMPILER="

(pytorch-arc) C:\>set "DISTUTILS_USE_SDK="

(pytorch-arc) C:\>set "USE_MULTI_CONTEXT="

(pytorch-arc) C:\>set "BUILD_WITH_CPU="

(pytorch-arc) C:\>for %f in ("dist\*.whl") do python -m pip install --force-reinstall --no-deps "%f"

(pytorch-arc) C:\>rem Sanity Test

(pytorch-arc) C:\>cd ..

(pytorch-arc) C:\>python -c "import torch; import torchvision; import intel_extension_for_pytorch as ipex; print(f'torch_version:       {torch.__version__}'); print(f'torchvision_version: {torchvision.__version__}'); print(f'ipex_version:        {ipex.__version__}');"
torch_version:       2.1.0a0+cxx11.abi
torchvision_version: 0.16.0a0+cxx11.abi
ipex_version:        2.1.10+xpu

Owner

cj-mills commented Feb 8, 2024

Hi @georg-che,

The console output seems to suggest the installation was successful. The test command at the bottom successfully prints the package versions for PyTorch, torchvision, and Intel's extension.

That aside, version v2.1.10+xpu of Intel's extension has an associated pre-compiled torchvision version, so building it from the source code should be unnecessary.

I only had time to briefly swap my Arc GPU into my desktop for a few days last December, but version v2.1.10+xpu did not seem to work that well. I did not have time to explore the issues in-depth, so I did not make a post about it.

Also, please limit posting a given question to a single location. I don't always have time to answer questions, but I try to set aside time on Thursdays and Fridays.

@georg-che

Thanks for the reply, @cj-mills!
I will not bother you further on this issue. :)
Thank you for such detailed instructions. It helped me a lot.


OSError: [WinError 127] The specified procedure could not be found. Error loading "..\intel_extension_for_pytorch\bin\intel-ext-pt-gpu.dll" or one of its dependencies
I get this error. My device has an Intel Arc A530M.


I am using Miniconda. I followed your documentation and now get the following error:

[1127/1130] Linking CXX static library csrc\gpu\oneDNN\src\dnnl.lib
ignoring unknown argument: -fsycl
ignoring unknown argument: -Wno-unknown-argument
ignoring unknown argument: -Qoption,link,/machine:x64
[1129/1130] Linking CXX shared library csrc\gpu\intel-ext-pt-gpu.dll
FAILED: csrc/gpu/intel-ext-pt-gpu.dll csrc/gpu/intel-ext-pt-gpu.lib
C:\Windows\system32\cmd.exe /C "cd . && C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\cmake\data\bin\cmake.exe -E vs_link_dll --intdir=csrc\gpu\CMakeFiles\intel-ext-pt-gpu.dir --rc=C:\PROGRA2\WI3CF21\10\bin\1002261.0\x64\rc.exe --mt=C:\PROGRA2\WI3CF21\10\bin\1002261.0\x64\mt.exe --manifests -- \Intel\oneAPI\compiler\latest\bin\icx.exe /nologo @CMakeFiles\intel-ext-pt-gpu.rsp -LD /Qoption,link,/machine:x64 -Wl,--no-as-needed -rdynamic -Wl,-Bsymbolic-functions /Qoption,link,/INCREMENTAL:NO -fsycl /EHsc -fsycl-max-parallel-link-jobs=12 -fsycl-targets=spir64_gen,spir64 -flink-huge-device-code /Xs "-device ats-m150 -options ' -cl-intel-enable-auto-large-GRF-mode -cl-poison-unsupported-fp64-kernels'" -link /out:csrc\gpu\intel-ext-pt-gpu.dll /implib:csrc\gpu\intel-ext-pt-gpu.lib /pdb:csrc\gpu\intel-ext-pt-gpu.pdb /version:0.0 && cd ."
LINK: command "\Intel\oneAPI\compiler\latest\bin\icx.exe /nologo @CMakeFiles\intel-ext-pt-gpu.rsp -LD /Qoption,link,/machine:x64 -Wl,--no-as-needed -rdynamic -Wl,-Bsymbolic-functions /Qoption,link,/INCREMENTAL:NO -fsycl /EHsc -fsycl-max-parallel-link-jobs=12 -fsycl-targets=spir64_gen,spir64 -flink-huge-device-code /Xs -device ats-m150 -options ' -cl-intel-enable-auto-large-GRF-mode -cl-poison-unsupported-fp64-kernels' -link /out:csrc\gpu\intel-ext-pt-gpu.dll /implib:csrc\gpu\intel-ext-pt-gpu.lib /pdb:csrc\gpu\intel-ext-pt-gpu.pdb /version:0.0 /MANIFEST:EMBED,ID=2" failed (exit code 1) with the following output:
icx: warning: unknown argument ignored in clang-cl: '-rdynamic' [-Wunknown-argument]
icx: warning: unknown argument ignored in clang-cl: '-flink-huge-device-code' [-Wunknown-argument]
icx: warning: ocloc tool could not be found and is required for AOT compilation. See: https://www.intel.com/content/www/us/en/develop/documentation/oneapi-dpcpp-cpp-compiler-dev-guide-and-reference/top/compilation/ahead-of-time-compilation.html for more information. [-Waot-tool-not-found]
icx: warning: argument unused during compilation: '-EHsc' [-Wunused-command-line-argument]
????? csrc\gpu\intel-ext-pt-gpu.lib ??? csrc\gpu\intel-ext-pt-gpu.exp
llvm-foreach: no such file or directory
icx: error: gen compiler command failed with exit code 1 (use -v to see invocation)
ninja: build stopped: subcommand failed.
Traceback (most recent call last):
  File "C:\Intel\oneAPI\intel-extension-for-pytorch\setup.py", line 1172, in <module>
    setup(
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\__init__.py", line 104, in setup
    return distutils.core.setup(**attrs)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\core.py", line 184, in setup
    return run_commands(dist)
           ^^^^^^^^^^^^^^^^^^
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\core.py", line 200, in run_commands
    dist.run_commands()
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\dist.py", line 969, in run_commands
    self.run_command(cmd)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\dist.py", line 967, in run_command
    super().run_command(command)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
    cmd_obj.run()
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\wheel\bdist_wheel.py", line 378, in run
    self.run_command("build")
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\cmd.py", line 316, in run_command
    self.distribution.run_command(command)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\dist.py", line 967, in run_command
    super().run_command(command)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
    cmd_obj.run()
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\command\build.py", line 132, in run
    self.run_command(cmd_name)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\cmd.py", line 316, in run_command
    self.distribution.run_command(command)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\dist.py", line 967, in run_command
    super().run_command(command)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
    cmd_obj.run()
  File "C:\Intel\oneAPI\intel-extension-for-pytorch\setup.py", line 1141, in run
    self.run_command("build_clib")
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\cmd.py", line 316, in run_command
    self.distribution.run_command(command)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\dist.py", line 967, in run_command
    super().run_command(command)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\site-packages\setuptools\_distutils\dist.py", line 988, in run_command
    cmd_obj.run()
  File "C:\Intel\oneAPI\intel-extension-for-pytorch\setup.py", line 838, in run
    _build_project(build_args, ipex_xpu_build_dir, my_env, use_ninja)
  File "C:\Intel\oneAPI\intel-extension-for-pytorch\setup.py", line 592, in _build_project
    check_call(["ninja"] + build_args, cwd=build_dir, env=build_env)
  File "C:\Users\zhang\miniconda3\envs\oneAPI_env\Lib\subprocess.py", line 413, in check_call
    raise CalledProcessError(retcode, cmd)
subprocess.CalledProcessError: Command '['ninja', '-j', '12', 'install']' returned non-zero exit status 1.
Traceback (most recent call last):
  File "<string>", line 1, in <module>
ModuleNotFoundError: No module named 'intel_extension_for_pytorch'

@cj-mills
Owner

@triilman25 @JamasChuang94
The installation and setup procedure has changed since I made this tutorial. I swapped my A770 into my desktop yesterday and plan to update the tutorial for the latest extension version over the weekend.


@cj-mills
Is source compilation unavailable now? I want to use AOT and C++

@cj-mills
Owner


The updated pip install command is:
python -m pip install torch==2.3.1+cxx11.abi torchvision==0.18.1+cxx11.abi torchaudio==2.3.1+cxx11.abi intel-extension-for-pytorch==2.3.110+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/us/

Link can be generated from https://intel.github.io/intel-extension-for-pytorch/index.html#installation?platform=gpu&version=v2.3.110%2bxpu&os=windows&package=pip


@cj-mills
Great write-up! This is by far the easiest way to get the PyTorch Intel extension working on Windows for a beginner like myself =]

One note: in the verify-Arc section, add "import pandas as pd" beforehand so that pd is defined.

@triilman25

@triilman25 @JamasChuang94 The installation and setup procedure has changed since I made this tutorial. I swapped my A770 into my desktop yesterday and plan to update the tutorial for the latest extension version over the weekend.

Can you make an installation guide for WSL?

@cj-mills
Owner

cj-mills commented Oct 2, 2024

@triilman25 The installation/setup process for WSL should be the same as native Ubuntu (at least it was the last time I tried it). You can follow my native Ubuntu tutorial starting from the linked section:

@triilman25

@triilman25 The installation/setup process for WSL should be the same as native Ubuntu (at least it was the last time I tried it). You can follow my native Ubuntu tutorial starting from the linked section:

* [Getting Started with Intel’s PyTorch Extension for Arc GPUs on Ubuntu - Install Drivers](https://christianjmills.com/posts/intel-pytorch-extension-tutorial/native-ubuntu/#install-drivers)

If we install in WSL, should we disable the integrated graphics driver (UHD/Iris) like in the previous tutorial?

@triilman25

[screenshot of a warning]
I got that warning. Is it OK?

@cj-mills
Owner

cj-mills commented Oct 3, 2024

@triilman25

If we install in WSL, should we disable the integrated graphics driver (UHD/Iris) like in the previous tutorial?

If I remember correctly, you don't need to disable it for WSL.

I got that warning. is it ok?

Yep, that's expected and fine since we do not need the transformers library for the image classification training code. That warning is for the extension's LLM-related functionality.

@triilman25

@triilman25

If we install in WSL, should we disable the integrated graphics driver (UHD/Iris) like in the previous tutorial?

If I remember correctly, you don't need to disable it for WSL.

I got that warning. is it ok?

Yep, that's expected and fine since we do not need the transformers library for the image classification training code. That warning is for the extension's LLM-related functionality.

I got an issue when I don't disable the integrated driver: the kernel always wants to restart and never executes the code below:

xpu_device_count = torch.xpu.device_count()

from this source code:

import torch
import intel_extension_for_pytorch as ipex  # registers the torch.xpu backend
import pandas as pd

def get_public_properties(obj):
    # Collect the non-callable, non-dunder attributes of a device-properties object
    return {
        prop: getattr(obj, prop)
        for prop in dir(obj)
        if not prop.startswith("__") and not callable(getattr(obj, prop))
    }

xpu_device_count = torch.xpu.device_count()
dict_properties_list = [get_public_properties(torch.xpu.get_device_properties(i)) for i in range(xpu_device_count)]
pd.DataFrame(dict_properties_list)

@cj-mills
Owner

cj-mills commented Oct 3, 2024

@triilman25 Thanks for testing that; I'll make a note in the tutorial.

@triilman25

Should I also install the CPU variant of Intel Extension for PyTorch if I have installed the GPU one?

@cj-mills
Owner

cj-mills commented Nov 7, 2024

Hi @triilman25,

The xpu variant of the intel-extension-for-pytorch package is really the cpu variant with xpu support. They are the same package, so installing the CPU-only variant would replace the xpu one.
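
A quick way to confirm which variant is active (a small sketch; assumes the package is installed) is to check the version suffix:

```python
# xpu builds carry a "+xpu" suffix (e.g. "2.1.10+xpu"),
# while CPU-only builds have a bare version string.
try:
    import intel_extension_for_pytorch as ipex
    print("GPU (xpu) build:", ipex.__version__.endswith("+xpu"))
except ImportError:
    print("intel_extension_for_pytorch is not installed")
```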

@triilman25

triilman25 commented Nov 8, 2024

https://christianjmills.com/posts/intel-pytorch-extension-tutorial/native-windows/#install-microsoft-visual-c-redistributable
Has this tutorial been brought up to date? As you said before:

@triilman25 @JamasChuang94 The installation and setup procedure has changed since I made this tutorial. I swapped my A770 into my desktop yesterday and plan to update the tutorial for the latest extension version over the weekend.

@triilman25

I got this error:

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.1.3 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

@triilman25

intel/AI-Playground#76 (comment)

I found a possible solution for this issue:
RuntimeError: Can't add devices across platforms to a single context. -33 (PI_ERROR_INVALID_DEVICE)
but I don't know how to set it up on my desktop.

@triilman25

intel/AI-Playground#76 (comment)

I found a possible solution for this issue: RuntimeError: Can't add devices across platforms to a single context. -33 (PI_ERROR_INVALID_DEVICE), but I don't know how to set it up on my desktop.

This works. Just create a new environment variable (system or user):
variable name: ONEAPI_DEVICE_SELECTOR
variable value: *:<dgpu id>
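
For anyone who prefers setting it per script rather than system-wide, this pattern may also work (a sketch: the variable must be set before the first torch import, and the "*:0" value assumes the discrete GPU is device 0 -- substitute your own device index):

```python
import os

# Must happen before torch / the extension initialize the GPU runtime.
# "*:0" is an assumed device index -- replace with your dGPU's id.
os.environ["ONEAPI_DEVICE_SELECTOR"] = "*:0"

try:
    import torch  # imports come AFTER the variable is set
    import intel_extension_for_pytorch as ipex
except ImportError:
    pass  # packages not installed in this environment
```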

@triilman25

I still can't solve this issue:

Try using the full path with constructor syntax.'If you don't plan on using image functionality from `torchvision.io`, you can ignore this warning. Otherwise, there might be something wrong with your environment. Did you have `libjpeg` or `libpng` installed before building `torchvision` from source?
  warn(

A module that was compiled using NumPy 1.x cannot be run in
NumPy 2.1.3 as it may crash. To support both 1.x and 2.x
versions of NumPy, modules must be compiled with NumPy 2.0.
Some module may need to rebuild instead e.g. with 'pybind11>=2.12'.

If you are a user of the module, the easiest solution will be to
downgrade to 'numpy<2' or try to upgrade the affected module.
We expect that some modules will need time to support NumPy 2.

@cj-mills
Copy link
Owner

cj-mills commented Nov 9, 2024

@triilman25 Did pip install 'numpy<2' not work for you?

@triilman25
Copy link

[screenshot]

Why does this always fail to execute?

@triilman25
Copy link

When it does execute, it takes a long time to show the result.

@triilman25
Copy link

https://pytorch.org/blog/intel-gpu-support-pytorch-2-5/

Can you explain this blog post? I understand some of it, but I don't catch the whole idea.

@cj-mills
Copy link
Owner

@triilman25 It appears that PyTorch is finally integrating direct support for Intel GPUs without the need for Intel's extension.
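A quick sanity check for that native support might look like this — a sketch assuming a PyTorch 2.5+ build with XPU support, which falls back to CPU when no Intel GPU (or an older torch) is present:

```python
import torch

# With PyTorch 2.5+, Intel GPUs are exposed through the built-in "xpu"
# backend, so no intel_extension_for_pytorch import is needed.
# getattr guards against older torch builds that lack torch.xpu.
use_xpu = getattr(torch, "xpu", None) is not None and torch.xpu.is_available()
device = torch.device("xpu" if use_xpu else "cpu")

x = torch.randn(8, 8, device=device)
print((x @ x).device)
```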

@triilman25
Copy link

So we no longer need to install the Intel extension (IPEX) whenever we want to use XPU in our projects?
But PyTorch version 2.5 (stable) is not released yet.

@triilman25
Copy link

Does torch.compile support Intel Arc accelerators? @cj-mills

@cj-mills
Copy link
Owner

@triilman25,

You should not need to install Intel's extension with PyTorch 2.5+. However, there are still some installation prerequisites.

Intel's extension supposedly supports torch.compile(), but I did not have much success with it the last time I tried. PyTorch 2.5+ should support torch.compile() for Intel GPUs, though.
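As a rough illustration of the torch.compile() path — this is a sketch, not tied to a specific GPU; backend="eager" is an assumption used here only so the example runs without a C++ toolchain, and dropping it uses the default Inductor backend on a 2.5+ XPU build:

```python
import torch

def matmul_relu(a, b):
    return torch.relu(a @ b)

# Compile once; pick the XPU device when the runtime exposes one,
# otherwise fall back to CPU.
compiled = torch.compile(matmul_relu, backend="eager")
device = "xpu" if getattr(torch, "xpu", None) and torch.xpu.is_available() else "cpu"

a = torch.randn(4, 4, device=device)
b = torch.randn(4, 4, device=device)
print(compiled(a, b).shape)
```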

@cj-mills
Copy link
Owner

cj-mills commented Dec 13, 2024

Hi @apk2222,

It appears they have updated the package version names:

# For Intel® Arc™ A-Series Graphics, use the commands below:
conda install libuv
pip install torch==2.3.1.post0+cxx11.abi torchvision==0.18.1.post0+cxx11.abi torchaudio==2.3.1.post0+cxx11.abi intel-extension-for-pytorch==2.3.110.post0+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/

# For Intel® Core™ Ultra Processors with Intel® Arc™ Graphics (MTL-H), use the commands below:
conda install libuv
pip install torch==2.3.1.post0+cxx11.abi torchvision==0.18.1.post0+cxx11.abi torchaudio==2.3.1.post0+cxx11.abi intel-extension-for-pytorch==2.3.110.post0+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/mtl/cn/

# For Intel® Core™ Ultra Series 2 with Intel® Arc™ Graphics, use the commands below:
conda install libuv
pip install torch==2.3.1.post0+cxx11.abi torchvision==0.18.1.post0+cxx11.abi torchaudio==2.3.1.post0+cxx11.abi intel-extension-for-pytorch==2.3.110.post0+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/cn/

There may also be some sort of configuration issue on their end as all the linked pip wheels seem to return the same error:

<Error>
<Code>AllAccessDisabled</Code>
<Message>All access to this object has been disabled</Message>
<RequestId>X04V3TJE6ZD3NZ9C</RequestId>
<HostId>
nZgdTg7gTlzvcV1H+wE+TKUKZ1xUUwCWWOzaViZMHfq8/SOSA9cRhGWqJirZDnMu4XmW52G6QiM=
</HostId>
</Error>

GitHub Issue: intel/intel-extension-for-pytorch#745

@triilman25
Copy link

The installation commands above were just updated:

# For Intel® Arc™ B-Series Graphics, use the commands below:
conda install libuv
python -m pip install torch==2.5.1+cxx11.abi torchvision==0.20.1+cxx11.abi torchaudio==2.5.1+cxx11.abi intel-extension-for-pytorch==2.5.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/bmg/cn/

# For Intel® Arc™ A-Series Graphics, use the commands below:
conda install libuv
python -m pip install torch==2.5.1+cxx11.abi torchvision==0.20.1+cxx11.abi torchaudio==2.5.1+cxx11.abi intel-extension-for-pytorch==2.5.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/xpu/cn/

# For Intel® Core™ Ultra Series 2 with Intel® Arc™ Graphics, use the commands below:
conda install libuv
python -m pip install torch==2.5.1+cxx11.abi torchvision==0.20.1+cxx11.abi torchaudio==2.5.1+cxx11.abi intel-extension-for-pytorch==2.5.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/lnl/cn/

# For Intel® Core™ Ultra Processors with Intel® Arc™ Graphics (MTL-H), use the commands below:
conda install libuv
python -m pip install torch==2.5.1+cxx11.abi torchvision==0.20.1+cxx11.abi torchaudio==2.5.1+cxx11.abi intel-extension-for-pytorch==2.5.10+xpu --extra-index-url https://pytorch-extension.intel.com/release-whl/stable/mtl/cn/


10 participants