How to troubleshoot an eternal "Please wait..." from nginx configuration?

I have a Streamlit app running on an AWS EC2 Ubuntu 22.04 virtual machine, served behind nginx, using conda as the virtual environment/package manager. The app has worked fine for extended periods, but sometimes after I update a package it gets stuck on an eternal “Please wait…” screen when accessed on port 80. I am fairly sure this is related to the nginx configuration, since I can still reach the app directly on port 8501.

I would love help solving this issue, but almost as good would be concrete advice on how to go about troubleshooting it. I am new to networking and unsure how to even determine decisively where this “Please wait…” is coming from. Advice on workarounds would be deeply helpful as well (for example, does Streamlit play better with pip than with conda?).
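In case it matters for suggestions, these are the basic checks I know how to run on the box (paths assume the default Ubuntu/nginx layout and the default Streamlit port 8501):

# check that the nginx config parses, and dump the fully resolved config to see what actually gets included
sudo nginx -t
sudo nginx -T | less

# watch the proxy logs while reloading the page in the browser
sudo tail -f /var/log/nginx/error.log /var/log/nginx/access.log

# confirm something is actually listening on 8501
sudo ss -ltnp | grep 8501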

My nginx configuration file (identical to the one for another app I maintain, which has never caused issues):

worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;
    client_max_body_size 100M;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        location / {
            auth_basic "restricted";
            auth_basic_user_file .htpasswd;
            proxy_pass http://127.0.0.1:8501/;
        }

        location ^~ /static {
            proxy_pass http://127.0.0.1:8501/static;
        }

        location ^~ /healthz {
            proxy_pass http://127.0.0.1:8501/healthz;
        }

        location ^~ /vendor {
            proxy_pass http://127.0.0.1:8501/vendor;
        }

        location /stream {
            proxy_pass http://127.0.0.1:8501/stream;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
            proxy_read_timeout 85400;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }
} 

My environment.yml file:

name: environment-name
channels:
  - conda-forge
  - nodefaults   
dependencies:
  - python=3.8
  - rpy2
  - pandas
  - streamlit
  - python-kaleido 
  - pip 

  - r-base=4.3
  - r-clarify
  - r-essentials
  - r-marginaleffects
  - r-gtools
  - r-gparotation
  - r-tidyverse
  - r-forcats
  - r-ggplot2
  - r-rcolorbrewer
  - r-readxl
  - r-showtext
  - r-ggtext
  - r-ggpubr
  - r-extrafont
  - r-stringr
  - r-hmisc
 

  - pip:
    - git+https://github.com/path/to/another/private/repo

Hey! I have been struggling with this same issue as well. I believe I fixed it by using Python 3.11.4, so try that if 3.11.4 is compatible with your code. Now I am struggling with the SSL setup. Let me know if you figure out any better solutions…

Thanks for the input @kyle-mirich; unfortunately, that doesn’t seem to solve the issue in my case.

Looking at the nginx logs, I’m seeing:

In error.log, the request "GET /_stcore/stream HTTP/1.1" is producing the messages:

recv() failed (104: Unknown error) while reading response header from upstream
open() "/usr/share/nginx/html/50x.html" failed (2: No such file or directory)

while access.log shows a 400 status for the same request "GET /_stcore/stream HTTP/1.1".

In error.log I’m also seeing the requests "GET /_stcore/health HTTP/1.1" and
"GET /_stcore/host-config HTTP/1.1" return:

connect() failed (111: Unknown error) while connecting to upstream,
open() "/usr/share/nginx/html/50x.html" failed (2: No such file or directory)

Any advice on next steps, or on how to interpret this in the context of Streamlit, would be deeply appreciated.
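From what I can tell, error 111 corresponds to “connection refused” and 104 to “connection reset by peer” (standard errno values), which is why I have also been poking the upstream directly, bypassing nginx:

# hit the endpoints from the logs directly on the Streamlit port
curl -i http://127.0.0.1:8501/_stcore/health
curl -i http://127.0.0.1:8501/_stcore/host-config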

Aha, I finally figured this out from this thread: Any changes regarding websocket for Streamlit v1.14 vs. 1.18? - #2 by andfanilo

Between Streamlit versions, the endpoint /stream was changed to /_stcore/stream, so everywhere the nginx configuration had the former it had to be replaced with the latter. Now it seems to be working fine again!
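If you are not sure which naming your installed Streamlit version serves, a quick check against the upstream helps (the two health paths below are just the ones that appear in my config and logs above; exact paths may vary by version):

# a 200 on one and a 404 on the other tells you which prefix your Streamlit version uses
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8501/healthz
curl -s -o /dev/null -w '%{http_code}\n' http://127.0.0.1:8501/_stcore/health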

UPDATE: If you think you have the configuration right, but it’s still not working for you, see if it works on another machine.

Even with the proper endpoints, the nginx configuration on the virtual machine hosting my web app was not working as expected (blank screen) after I cleared my local browser cache this morning. However, it was working great on my test virtual machine. Presumably the issue was some kind of caching on the nginx side, and clearing the nginx cache or uninstalling/reinstalling might have helped, but I had difficulty finding my nginx cache; apparently I configured it to live somewhere other than the default location at some point. (In my case it was trivial to just move production to the test machine, so I did not debug further.)
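If anyone else goes looking for where their nginx cache ended up, dumping the full effective configuration and grepping for the cache directives is probably the quickest route (the directory below is just the common packaged default, not necessarily yours):

# print the fully resolved config and look for any configured cache paths
sudo nginx -T | grep -iE 'proxy_cache_path|fastcgi_cache_path'
# common default location on distro packages, if a cache is configured at all
ls -la /var/cache/nginx/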

I’m also including my final configuration file to make this as easy as possible for folks going forward:

user nginx;
worker_processes auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;

include /usr/share/nginx/modules/*.conf;

events {
    worker_connections 1024;
}

http {
    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for"';

    access_log  /var/log/nginx/access.log  main;

    sendfile            on;
    tcp_nopush          on;
    tcp_nodelay         on;
    keepalive_timeout   65;
    types_hash_max_size 4096;

    include             /etc/nginx/mime.types;
    default_type        application/octet-stream;

    include /etc/nginx/conf.d/*.conf;

    server {
        listen       80;
        listen       [::]:80;
        server_name  _;
        root         /usr/share/nginx/html;

        client_max_body_size 20M;

        include /etc/nginx/default.d/*.conf;

        error_page 404 /404.html;
        location = /404.html {
        }

        error_page 500 502 503 504 /50x.html;
        location = /50x.html {
        }

        location / {
            auth_basic "restricted";
            auth_basic_user_file .htpasswd;
            proxy_pass http://127.0.0.1:8501/;
        }

        location ^~ /_stcore/static {
            proxy_pass http://127.0.0.1:8501/static;
        }

        location ^~ /_stcore/healthz {
            proxy_pass http://127.0.0.1:8501/_stcore/healthz;
        }

        location ^~ /_stcore/vendor {
            proxy_pass http://127.0.0.1:8501/_stcore/vendor;
        }

        location /_stcore/stream {
            proxy_pass http://127.0.0.1:8501/_stcore/stream;
            proxy_http_version 1.1;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_set_header Sec-WebSocket-Extensions $http_sec_websocket_extensions;
            proxy_read_timeout 85400;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-Proto $scheme;
        }
    }

}
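After editing the configuration, the usual check-and-reload cycle (standard nginx/systemd commands) is:

# validate the syntax, then reload without dropping connections
sudo nginx -t && sudo systemctl reload nginx
# then hard-refresh the browser (or use a private window) to rule out stale client-side caching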

nice!

I was just about to post my working config file as well. I am happy you got everything figured out.

server {
    listen 80;
    server_name yourdomain.com www.yourdomain.com;
    location / {
        return 301 https://$host$request_uri;
    }
    location /.well-known/acme-challenge/ {
        root /var/www/certbot;
        allow all;
    }
}

server {
    listen 443 ssl;
    server_name yourdomain.com www.yourdomain.com;

    ssl_certificate /etc/letsencrypt/live/yourdomain.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/yourdomain.com/privkey.pem;
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_ciphers xxxx:xxxx:xxxx:xxxxxx..;  # message me to see the full ssl_ciphers value; pretty sure this is optional but I am not sure whether it is sensitive

    location / {
        proxy_pass http://app:8501/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
    }

    location /_stcore/stream {
        proxy_pass http://app:8501/_stcore/stream;
        proxy_http_version 1.1;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
    }
}
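If you issue the certificate with plain certbot rather than Docker, a webroot issuance matching the /.well-known/acme-challenge/ block above would look roughly like this (yourdomain.com is a placeholder, and this assumes certbot is installed on the host):

# sketch: issue a certificate via certbot's webroot plugin into the same
# directory nginx serves for the ACME challenge
sudo certbot certonly --webroot -w /var/www/certbot \
    -d yourdomain.com -d www.yourdomain.com
# reload nginx so it picks up the new certificate files
sudo nginx -t && sudo systemctl reload nginx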

I ended up using this Docker image (thanks to jonasal) to create my SSL cert on my AWS instance with much less headache. I am planning on making a post soon describing exactly how I did this, because I had quite a hard time figuring it out.

GOD BLESS

