tinfoil-hat
Active Member
Hi there, I have an advanced question: I want to install Stable Diffusion on Debian Bookworm. I am trying to replicate the guide linked below on my system:
Clone ROCm via Docker
Run the Docker container
Stable Diffusion WebUI and env
Clone from Git
Install Python and Python-venv
Does not work from here
The guide: Step-by-step guide to run on RDNA3 (Linux) · AUTOMATIC1111/stable-diffusion-webui · Discussion #9591 (github.com)
Clone ROCm via Docker
Code:
sudo docker pull rocm/composable_kernel:ck_ub20.04_rocm5.7
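Optionally, to confirm the image actually landed locally before starting a container (plain Docker CLI, nothing specific to this guide):
Code:
docker images rocm/composable_kernel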
Run the Docker container
Code:
docker run --restart=always --name rocm -it -v $HOME/Software/docker/sd --device=/dev/kfd --device=/dev/dri --security-opt seccomp=unconfined --group-add video rocm/composable_kernel:ck_ub20.04_rocm5.7
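One thing I noticed: -v with a single path creates an anonymous volume inside the container instead of sharing the host folder. Since later steps reference /SD inside the container, I suspect the intended form is an explicit host:container bind mount (the /SD container path is my assumption, taken from the cd /SD/... step further down):
Code:
docker run --restart=always --name rocm -it \
  -v $HOME/Software/docker/sd:/SD \
  --device=/dev/kfd --device=/dev/dri \
  --security-opt seccomp=unconfined --group-add video \
  rocm/composable_kernel:ck_ub20.04_rocm5.7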
Stable Diffusion WebUI and env
Clone from Git
Code:
git clone https://github.com/AUTOMATIC1111/stable-diffusion-webui
Install Python and Python-venv
Code:
apt update && apt install python3.11-venv
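The ck_ub20.04 tag suggests the container is Ubuntu 20.04, where python3.11-venv is not in the default archive. If apt cannot find the package, one option (my addition, not part of the original guide) is the deadsnakes PPA:
Code:
apt install -y software-properties-common
add-apt-repository -y ppa:deadsnakes/ppa
apt update && apt install -y python3.11 python3.11-venv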
Code:
cd stable-diffusion-webui
python3 -m venv venv
source venv/bin/activate
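Note that python3 -m venv uses the distribution's default python3 (3.8 on Ubuntu 20.04), not the 3.11 installed above. If 3.11 is actually wanted, the venv has to be created with it explicitly; a sketch that deviates from the guide as written:
Code:
python3.11 -m venv venv
source venv/bin/activate
python --version   # should report 3.11.x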
Code:
python3 -m pip install --upgrade pip wheel
export HIP_VISIBLE_DEVICES=0
export PYTORCH_ROCM_ARCH="gfx1100"
export CMAKE_PREFIX_PATH=/Software/docker/sd/stable-diffusion-webui/venv/
pip install -r requirements.txt
pip uninstall torch torchvision
mkdir repositories
cd repositories
wget https://github.com/pytorch/pytorch/releases/download/v2.1.0/pytorch-v2.1.0.tar.gz
wget https://github.com/pytorch/vision/archive/refs/tags/v0.16.0.tar.gz
tar -xzvf pytorch-v2.1.0.tar.gz && cd pytorch-v2.1.0
pip install cmake ninja
pip install -r requirements.txt
pip install mkl mkl-include
python3 tools/amd_build/build_amd.py
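Before kicking off the build it may be worth sanity-checking the environment inside the container. This is my own sketch and assumes rocminfo is installed under /opt/rocm and the venv is still activated (so $VIRTUAL_ENV is set); note that the hard-coded CMAKE_PREFIX_PATH above does not match the /home/anon/Software/docker/sd/... path that shows up in the error log below.
Code:
# confirm the gfx1100 GPU is visible to ROCm inside the container
/opt/rocm/bin/rocminfo | grep -i gfx
# point CMake at the active virtualenv instead of a hard-coded path
export CMAKE_PREFIX_PATH="$VIRTUAL_ENV"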
Does not work from here
Code:
python3 setup.py install
cd ..
tar -xzvf v0.16.0.tar.gz
cd vision-0.16.0
python3 setup.py install
cd /SD/stable-diffusion-webui
python3 launch.py --listen
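Once the build gets past the error below, a quick sanity check that the self-built torch actually sees the card (ROCm builds of PyTorch report through the torch.cuda API):
Code:
python3 -c "import torch; print(torch.__version__); print(torch.cuda.is_available()); print(torch.cuda.get_device_name(0))"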
The Problem
I get the following errors when trying to compile pytorch-v2.1.0 from source:
Code:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:14044:52: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
14044 | return (__m512i) __builtin_ia32_cvtps2dq512_mask ((__v16sf) __A,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
14045 | (__v16si)
| ~~~~~~~~~
14046 | _mm512_undefined_epi32 (),
| ~~~~~~~~~~~~~~~~~~~~~~~~~~
14047 | (__mmask16) -1,
| ~~~~~~~~~~~~~~~
14048 | _MM_FROUND_CUR_DIRECTION);
| ~~~~~~~~~~~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:206:11: note: ‘__Y’ was declared here
206 | __m512i __Y = __Y;
| ^~~
In function ‘__m512i _mm512_permutexvar_epi32(__m512i, __m512i)’,
inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’ at /home/anon/Software/docker/sd/stable-diffusion-webui/repositories/pytorch-v2.1.0/third_party/fbgemm/src/QuantUtilsAvx512.cc:356:45:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:7027:53: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
7027 | return (__m512i) __builtin_ia32_permvarsi512_mask ((__v16si) __Y,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
7028 | (__v16si) __X,
| ~~~~~~~~~~~~~~
7029 | (__v16si)
| ~~~~~~~~~
7030 | _mm512_undefined_epi32 (),
| ~~~~~~~~~~~~~~~~~~~~~~~~~~
7031 | (__mmask16) -1);
| ~~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:206:11: note: ‘__Y’ was declared here
206 | __m512i __Y = __Y;
| ^~~
In function ‘__m128i _mm512_extracti32x4_epi32(__m512i, int)’,
inlined from ‘__m128i _mm512_castsi512_si128(__m512i)’ at /usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:15829:10,
inlined from ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’ at /home/anon/Software/docker/sd/stable-diffusion-webui/repositories/pytorch-v2.1.0/third_party/fbgemm/src/QuantUtilsAvx512.cc:372:25:
/usr/lib/gcc/x86_64-linux-gnu/12/include/avx512fintrin.h:6045:53: error: ‘__Y’ may be used uninitialized [-Werror=maybe-uninitialized]
6045 | return (__m128i) __builtin_ia32_extracti32x4_mask ((__v16si) __A,
| ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~^~~~~~~~~~~~~~~
6046 | __imm,
| ~~~~~~
6047 | (__v4si)
| ~~~~~~~~
6048 | _mm_undefined_si128 (),
| ~~~~~~~~~~~~~~~~~~~~~~~
6049 | (__mmask8) -1);
| ~~~~~~~~~~~~~~
/usr/lib/gcc/x86_64-linux-gnu/12/include/emmintrin.h: In function ‘void fbgemm::requantizeOutputProcessingGConvAvx512(uint8_t*, const int32_t*, const block_type_t&, int, int, const requantizationParams_t<BIAS_TYPE>&) [with bool A_SYMMETRIC = false; bool B_SYMMETRIC = false; QuantizationGranularity Q_GRAN = fbgemm::QuantizationGranularity::OUT_CHANNEL; bool HAS_BIAS = false; bool FUSE_RELU = false; int C_PER_G = 16; BIAS_TYPE = int]’:
/usr/lib/gcc/x86_64-linux-gnu/12/include/emmintrin.h:788:11: note: ‘__Y’ was declared here
788 | __m128i __Y = __Y;
| ^~~
cc1plus: all warnings being treated as errors
ninja: build stopped: subcommand failed.
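For context, the build only stops because GCC 12 promotes these maybe-uninitialized warnings from the AVX-512 headers (reached via the bundled fbgemm, see third_party/fbgemm in the paths above) to errors. Two workarounds that look worth trying, though I have not verified either on this exact setup, are skipping fbgemm via PyTorch's USE_FBGEMM switch or downgrading the warning from an error on a clean configure:
Code:
# option 1: build PyTorch without fbgemm (the x86 quantization backend)
USE_FBGEMM=0 python3 setup.py install

# option 2: keep fbgemm but stop GCC treating this warning as an error
rm -rf build
CXXFLAGS="-Wno-error=maybe-uninitialized -Wno-error=uninitialized" python3 setup.py install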
Any kind of help is highly appreciated!