https://github.com/llSourcell/How-to-Generate-Art-Demo

 2021  virtualenv -p python $HOME/tmp/art-demo-venv/
 2022  cd ../
 2023  lst
 2024  cd deep-speech/
 2025  lst
 2026  cat notes.txt
 2027  cd ..
 2028  lst
 2029  cd How-to-Generate-Art-Demo/
 2030  history
 2031  source $HOME/tmp/art-demo-venv
 2032  source activate $HOME/tmp/art-demo-venv
 2033  cd ../deep-speech/
 2034  cat notes.txt
 2035  cd ..
 2036  lst
 2037  cd How-to-
 2038  cd How-to-Generate-Art-Demo/
 2039  source $HOME/tmp/art-demo-venv/bin/activate

jupyter notebook

#
# To activate this environment, use
#
#     $ conda activate art-demo
#
# To deactivate an active environment, use
#
#     $ conda deactivate

I just changed "from pip._internal import main" into "from pip import main" and voila! Problem solved (see the sketch at the end of these notes).
https://github.com/pypa/pip/issues/5253

I noticed that the art-demo install of TensorFlow is not compiled to use SSE x.x and AVX (see the warnings below), so it runs a bit slower. Running the code in my conda base install, which does seem to have these enabled, actually makes the loops run a bit faster.

(art-demo) erick@OptiPlex-790 ~/ml/How-to-Generate-Art-Demo $ python demo-256-anime+human.py
Using TensorFlow backend.
(1, 256, 256, 3)
(1, 256, 256, 3)
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Start of iteration 0
Current loss value: 1.80458e+11
Iteration 0 completed in 102s
Start of iteration 1
Current loss value: 1.00805e+11
Iteration 1 completed in 98s
Start of iteration 2

(base) erick@OptiPlex-790 ~/ml/How-to-Generate-Art-Demo $ python demo-256-anime+human.py
/home/erick/anaconda3/lib/python3.6/site-packages/h5py/__init__.py:36: FutureWarning: Conversion of the second argument of issubdtype from `float` to `np.floating` is deprecated. In future, it will be treated as `np.float64 == np.dtype(float).type`.
  from ._conv import register_converters as _register_converters
Using TensorFlow backend.
(1, 256, 256, 3)
(1, 256, 256, 3)
WARNING:tensorflow:Variable += will be deprecated. Use variable.assign_add if you want assignment to the variable value or 'x = x + y' if you want a new python Tensor object.
Start of iteration 0
Current loss value: 176162080000.0
Iteration 0 completed in 70s
Start of iteration 1
Current loss value: 96496100000.0
Iteration 1 completed in 69s

5.04.2023
I found out that this code runs quicker under the sl_quant env, as that version of TF is compiled to use SSE3, SSE4.1, SSE4.2 and AVX. For 512x512 images it's about 600+ s per iteration with the art-demo env and roughly 400 s with sl_quant.
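The cpu_feature_guard warnings in the logs here are only informational: they flag that the wheel was built without those instruction sets, and hiding them does not make anything faster. If they get noisy, TensorFlow's standard TF_CPP_MIN_LOG_LEVEL environment variable filters them out. A minimal sketch, assuming stock TF 1.x logging behaviour (the variable has to be set before tensorflow is imported):

import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '2'  # '1' hides INFO, '2' also hides WARNING (incl. cpu_feature_guard), '3' additionally hides ERROR

import tensorflow as tf  # import only after setting the variable, otherwise the C++ logger is already configured
print(tf.__version__)

The same thing works from the shell, e.g. TF_CPP_MIN_LOG_LEVEL=2 python demo-512.py.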
(art-demo) erick@OptiPlex-790 ~/ml/How-to-Generate-Art-Demo $ python demo-512.py
Using TensorFlow backend.
(1, 512, 512, 3)
(1, 512, 512, 3)
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE3 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
Start of iteration 0
Current loss value: 1.41992e+11
Iteration 0 completed in 672s
Start of iteration 1
Current loss value: 3.55052e+10
Iteration 1 completed in 593s
Start of iteration 2
Current loss value: 2.78878e+10
Iteration 2 completed in 579s
Start of iteration 3
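For reference, the pip fix mentioned above was just that one-line import change; which file I edited isn't recorded here. A minimal sketch of a version-tolerant variant from the era of pip issue #5253 (pip 10 moved main() into pip._internal, older pips expose it at the top level):

try:
    from pip import main            # pip < 10: main() lives at the package top level (what worked here)
except ImportError:
    from pip._internal import main  # pip >= 10: main() moved into _internal

Calling pip as "python -m pip ..." sidesteps a stale wrapper script entirely, which is the other common way around that issue.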
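The "Variable += will be deprecated" warning in the base-env run above refers to the TF 1.x graph API. A minimal sketch of the two alternatives the warning suggests (hypothetical variable, not taken from the demo code):

import tensorflow as tf

counter = tf.Variable(0.0)

increment = counter.assign_add(1.0)  # in-place update of the variable (what "+=" was doing)
fresh = counter + 1.0                # a brand-new tensor, leaving the variable itself untouched

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print(sess.run(increment))  # 1.0
    print(sess.run(fresh))      # 2.0, computed from the already-incremented variable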