pytorch-fast-transformers failed to build wheel?

Hello! I’m just getting started with Streamlit and have been trying to host a little demo just to test things, but I’m running into an issue installing pytorch-fast-transformers:

Collecting torch (from -r /mount/src/{redacted}/requirements.txt (line 7))
  Downloading torch-2.6.0-cp312-cp312-manylinux1_x86_64.whl.metadata (28 kB)
Collecting pytorch-fast-transformers (from -r /mount/src/{redacted}/requirements.txt (line 8))
  Downloading pytorch-fast-transformers-0.4.0.tar.gz (93 kB)
     ━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━ 93.6/93.6 kB 130.3 MB/s eta 0:00:00
  Installing build dependencies: started
  Installing build dependencies: finished with status 'done'
  Getting requirements to build wheel: started
  Getting requirements to build wheel: finished with status 'error'
  error: subprocess-exited-with-error

  × Getting requirements to build wheel did not run successfully.
  │ exit code: 1
  ╰─> [26 lines of output]
      Traceback (most recent call last):
        File "<string>", line 19, in <module>
      ModuleNotFoundError: No module named 'torch'

      The above exception was the direct cause of the following exception:

      Traceback (most recent call last):
        File "/home/adminuser/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 353, in <module>
          main()
        File "/home/adminuser/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 335, in main
          json_out['return_val'] = hook(**hook_input['kwargs'])
                                   ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/home/adminuser/venv/lib/python3.12/site-packages/pip/_vendor/pyproject_hooks/_in_process/_in_process.py", line 118, in get_requires_for_build_wheel
          return hook(config_settings)
                 ^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-hxhue8h0/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 331, in get_requires_for_build_wheel
          return self._get_build_requires(config_settings, requirements=[])
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
        File "/tmp/pip-build-env-hxhue8h0/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 301, in _get_build_requires
          self.run_setup()
        File "/tmp/pip-build-env-hxhue8h0/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 512, in run_setup
          super().run_setup(setup_script=setup_script)
        File "/tmp/pip-build-env-hxhue8h0/overlay/lib/python3.12/site-packages/setuptools/build_meta.py", line 317, in run_setup
          exec(code, locals())
        File "<string>", line 22, in <module>
      ImportError: PyTorch is required to install pytorch-fast-transformers. Please install your favorite version of PyTorch, we support 1.3.1, 1.5.0 and >=1.6
      [end of output]

  note: This error originates from a subprocess, and is likely not a problem with pip.
error: subprocess-exited-with-error

× Getting requirements to build wheel did not run successfully.
│ exit code: 1
╰─> See above for output.

note: This error originates from a subprocess, and is likely not a problem with pip.

I have checked that I list torch before pytorch-fast-transformers in my requirements.txt, and I found supporting information suggesting the failure happens because pytorch-fast-transformers needs torch at build time, while uv defaults to building packages in isolation. I don’t fully understand this (sorry!!), but as far as I can tell a PEP 517 isolated build runs the package’s setup.py in a fresh temporary environment containing only its declared build dependencies, so the torch I installed just before is invisible to it.
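For context, the relevant part of my requirements.txt looks roughly like this (torch on line 7 and pytorch-fast-transformers on line 8, matching the log above):

    torch
    pytorch-fast-transformers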

Supporting links:
https://github.com/astral-sh/uv/issues/2991
https://github.com/astral-sh/uv/issues/2252

I have no idea how to remedy this for Streamlit, as I cannot pass a ‘--no-build-isolation’ arg to Streamlit. I have tried many other things, such as pinning specific versions of torch/pytorch-fast-transformers, including torchvision and/or torchaudio, downgrading pytorch-fast-transformers, etc.
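For reference, this is the workaround that works for a plain local pip install (these are standard pip flags, nothing Streamlit-specific, which is exactly the problem):

    # pre-install torch, then disable PEP 517 build isolation so the
    # sdist's setup.py can see the torch that is already installed
    pip install torch
    pip install --no-build-isolation pytorch-fast-transformers==0.4.0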

I have also tried manually building a wheel of my own, but unfortunately my local machine runs Windows, and when I try to use the resulting .whl on Streamlit I get an error about missing C/C++ libraries. (I also tried using WSL to generate a Linux wheel, but that did not work either; I was missing GLIBC symbols among other things? Unsure.)
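In case it matters, this is roughly how I was building the wheel under WSL (a sketch, with the same build-isolation caveat as above, hence the flag):

    # torch must already be importable in this environment, since isolation is off
    pip install torch
    pip wheel --no-build-isolation pytorch-fast-transformers==0.4.0 -w dist/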

note: the wheel I built on Linux named itself ‘pytorch_fast_transformers-0.4.0-cp312-cp312-linux_x86_64.whl’, which I would think matches the Python version and platform of the torch wheel automatically downloaded up there, but it didn’t work for me… :frowning:
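(For anyone checking the same thing: you can list the tags a given interpreter accepts with pip’s debug command. It is officially marked unstable, but it is handy for comparing against a wheel’s filename.)

    # prints the interpreter's compatible wheel tags, e.g. cp312-cp312-manylinux1_x86_64
    pip debug --verbose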

The only method I have found that I haven’t explored is using fast-transformers-pytorch instead of pytorch-fast-transformers, but I would have to rewrite a model I am using (EMOPIA) against the different library, and that does in fact fill me with misery.

However, I noticed in another topic,
https://discuss.streamlit.io/t/running-locally-streamlit-prompts-to-login-to-my-git-repo-how-can-i-disable/6751/5
that user quadebroadwell had imported torch and pytorch-fast-transformers!! (presumably successfully!)

Unfortunately, I tried the same versions he used and did not get any better results: still a failed wheel build. This gives me hope though! (and a reason to procrastinate on rewriting the EMOPIA model!)

I don’t think it’s proper etiquette, however, to reply to a message from a thread in 2020, so I was wondering if anyone smarter than I am and more familiar with package management and Streamlit could point me in a direction to resolve this? It annoys me insanely how much time I’ve spent not debugging or playing around with Streamlit, but instead trying to fix an import.

P.S. If someone happens to have a wheel for pytorch-fast-transformers for a version of Linux that works on the servers backing Streamlit Community Cloud, I would love and cherish you for at least a week or two.

Anyway, I thought I’d include better information about the Linux wheel attempts. I get a variety of errors initially, but this one tends to repeat in the logs for whatever reason.

In general, the only thing I can tell is that a custom torch class is causing an error? I am unfamiliar with custom classes and PyTorch in general, so I am unsure what to make of that. causal_product is one of the C++ extension modules that the package compiles.

────────────────────── Traceback (most recent call last) ───────────────────────
  /home/adminuser/venv/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/exec_code.py:121 in exec_func_with_error_handling

  /home/adminuser/venv/lib/python3.12/site-packages/streamlit/runtime/scriptrunner/script_runner.py:640 in code_to_exec

  /mount/src/{redacted}/app.py:3 in <module>
     1 import streamlit as st
     2
  ❱  3 from label import predict_emotion
     4 from generate import generate
     5 from e2va import composite_va, classify_va
     6 from utils.midi2wav import midi_to_wav

  /mount/src/{redacted}/label.py:5 in <module>
      2 import pandas as pd
      3 import numpy as np
      4 from torch.utils.data import DataLoader
  ❱   5 from utils.models import LSTMModel, GoEmotionsDataset
      6 from transformers import AutoTokenizer
      7 from sklearn.metrics import f1_score

  /mount/src/{redacted}/utils/models.py:1 in <module>
  ❱   1 from fast_transformers.builders import TransformerEncoderBuilder as Tr…
      2 from fast_transformers.builders import RecurrentEncoderBuilder as Recu…
      3 from fast_transformers.masking import TriangularCausalMask as Triangul…

  /home/adminuser/venv/lib/python3.12/site-packages/fast_transformers/builders/__init__.py:42 in <module>
    39 # TODO: Should this behaviour change? Namely, should all attention
    40 #       implementations be imported in order to be useable? This also a…
    41 #       using the library even partially built, for instance.
  ❱ 42 from ..attention import \
    43     FullAttention, \
    44     LinearAttention, CausalLinearAttention, \
    45     ClusteredAttention, ImprovedClusteredAttention, \

  /home/adminuser/venv/lib/python3.12/site-packages/fast_transformers/attention/__init__.py:13 in <module>
    10 from .attention_layer import AttentionLayer
    11 from .full_attention import FullAttention
    12 from .linear_attention import LinearAttention
  ❱ 13 from .causal_linear_attention import CausalLinearAttention
    14 from .clustered_attention import ClusteredAttention
    15 from .improved_clustered_attention import ImprovedClusteredAttention
    16 from .reformer_attention import ReformerAttention

  /home/adminuser/venv/lib/python3.12/site-packages/fast_transformers/attention/causal_linear_attention.py:15 in <module>
     12 from ..attention_registry import AttentionRegistry, Optional, Callable…
     13     EventDispatcherInstance
     14 from ..events import EventDispatcher
  ❱  15 from ..causal_product import causal_dot_product
     16 from ..feature_maps import elu_feature_map

  /home/adminuser/venv/lib/python3.12/site-packages/fast_transformers/causal_product/__init__.py:9 in <module>
      7 import torch
      8
  ❱   9 from .causal_product_cpu import causal_dot_product as causal_dot_produc…
     10     causal_dot_backward as causal_dot_backward_cpu
     11
     12 try:
────────────────────────────────────────────────────────────────────────────────
ModuleNotFoundError: No module named 'fast_transformers.causal_product.causal_product_cpu'

2025-04-23 14:51:44.284 Examining the path of torch.classes raised:
Traceback (most recent call last):
  File "/home/adminuser/venv/lib/python3.12/site-packages/streamlit/web/bootstrap.py", line 347, in run
    if asyncio.get_running_loop().is_running():
       ^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: no running event loop

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "/home/adminuser/venv/lib/python3.12/site-packages/streamlit/watcher/local_sources_watcher.py", line 217, in get_module_paths
    potential_paths = extract_paths(module)
                      ^^^^^^^^^^^^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.12/site-packages/streamlit/watcher/local_sources_watcher.py", line 210, in <lambda>
    lambda m: list(m.__path__._path),
                   ^^^^^^^^^^^^^^^^
  File "/home/adminuser/venv/lib/python3.12/site-packages/torch/_classes.py", line 13, in __getattr__
    proxy = torch._C._get_custom_class_python_wrapper(self.name, attr)
            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
RuntimeError: Tried to instantiate class '__path__._path', but it does not exist! Ensure that it is registered via torch::class_
2025-04-23 14:51:44.335 503 GET /script-health-check (127.0.0.1) 256.74ms
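If it helps anyone hitting the same wall, the quick sanity check I’d run in the deployed environment (a hypothetical one-liner, not taken from these logs) is whether the compiled extension is importable at all:

    # if this fails, the wheel was built without the native causal_product module
    # (or with one built for a different Python/ABI)
    python -c "import fast_transformers.causal_product.causal_product_cpu"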

Hi again, I was pretty dumb, but I realized my mistake.

When building the wheel on my own, I used a manylinux2014 image, which did not include Python 3.12.10; notably, the wheel came out tagged cp310 when it should have been cp312. Pretty obvious in hindsight. I rebuilt the wheel against a newer manylinux image under WSL 2, and that wheel worked fine.
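For anyone retracing this, here is a sketch of the rebuild. The image name and interpreter path are my assumptions about the standard manylinux layout, not copied from my exact session:

    # hypothetical rebuild inside a manylinux image that ships CPython 3.12,
    # run from WSL 2; /opt/python/cp312-cp312 is the usual manylinux layout
    docker run --rm -v "$PWD/dist:/out" quay.io/pypa/manylinux_2_28_x86_64 \
      bash -c "/opt/python/cp312-cp312/bin/pip install torch && \
               /opt/python/cp312-cp312/bin/pip wheel --no-build-isolation \
                   pytorch-fast-transformers==0.4.0 -w /out"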

Though I will admit defeat on the whole build-isolation thing. I don’t think it’s possible to mark a single package to be installed without isolation, short of the dedicated pip flag, at least when using uv. If anyone can contradict that, I would be interested, if only for the novelty.

If anyone in the future needs a pytorch_fast_transformers-0.4.0-cp312-cp312-linux_x86_64.whl, feel free to ask. I didn’t see an option to upload a file to this forum, but if someone could direct me, I would leave it here.

Issue solved, sort of.
