Getting 404 errors when trying to reach _stcore/health and _stcore/allowed-message-origins through an nginx proxy on AWS EC2

After some back and forth I have been able to load the JavaScript and CSS files from Streamlit.
However, the application does not load fully and keeps displaying “Please wait”. The cause seems to be that 404s are returned for the GET requests to /_stcore/health and /_stcore/allowed-message-origins.

What I have checked:
I have looked through the Deployment category and found many topics that mention the “Please wait” screen, but the causes there seem to be different.
Furthermore, many of the articles on this topic seem to have been written when /healthz was still used, so I have not found useful pointers there.
I have also looked through articles such as “Streamlit, docker, Nginx, ssl/https” in the Deployment category and “Streamlit with Nginx. Configuration for Nginx and Streamlit…” by Raja CSP Raman on Medium (featurepreneur), but none of them mention the (new?) /_stcore/health check.

My situation:

  • I am hosting on AWS EC2 (as a proof of concept; I will move to containers once this works)
  • I am behind an nginx proxy. This works well for a simple Flask server, and it does show the Streamlit loading page, so the connection to Streamlit itself seems fine
  • I suspect it might be an SSL issue, as nginx is currently listening on port 80. In AWS I have configured the ALB to use port 443 for SSL

Relevant parts of the nginx config:

location = / {
        proxy_pass http://IP:8502/;
        proxy_http_version 1.1;
        proxy_redirect off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $host;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_read_timeout 86400;
}

location ^~ /static {
        proxy_pass http://127.0.0.1:8502/static/;
}

location ^~ /vendor {
        proxy_pass http://127.0.0.1:8502/vendor;
}

location ^~ /health {
        proxy_pass http://127.0.0.1:8502/_stcore/health;
}
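
For anyone debugging the same thing: the endpoint can be requested directly on the instance, bypassing nginx, to rule out the Streamlit process itself (assuming it is reachable on 127.0.0.1:8502):

# direct request to the Streamlit server, not going through nginx
curl -i http://127.0.0.1:8502/_stcore/health

A healthy server answers this with HTTP 200.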

Here is config.toml

[server]
port=8502 
headless=true
enableCORS=false
enableXsrfProtection=false
enableWebsocketCompression=false

[browser]
serverPort = 443

Streamlit, version 1.18.1

This seems to work for the static folder, but not for the health check (and allowed-message-origins).
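
One thing I have been wondering about: my understanding is that location = / is an exact match, so it only handles the root document, and the /_stcore/* requests fall through (which would explain the 404s). A single prefix location for /_stcore/ might cover the health check, the allowed-message-origins call and the websocket stream in one go; a rough sketch of what I mean, assuming Streamlit stays on 127.0.0.1:8502:

location ^~ /_stcore/ {
        proxy_pass http://127.0.0.1:8502/_stcore/;
        # /_stcore/stream is a websocket, so forward the upgrade headers
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_read_timeout 86400;
}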

Does anyone have an idea / pointer on how to resolve this?

Any help or ideas would be much appreciated!

@pim we are running into a potentially similar issue. Curious whether you ended up figuring anything out for this; if so, would you mind sharing the solution?

@pim I have the same problem. I am running on AWS EKS with nginx as a reverse proxy; please let me know if there is a solution for this issue.

Hey @ksdaftari and @Think_Big_Data_Analy

Cool to know you are working on a similar setup.
I’m still working on it, but I seem to have a sort-of proof of concept up and running (though it is slow, unreliable, and not yet secure).

I ran into so many issues that I am no longer sure what fixed what, and probably half of these changes are not even necessary, but here are the things that differ from my nginx config and config.toml above:

  • I changed the URL that Streamlit runs on in config.toml and added the same URL in nginx

  • I added locations for the allowed-message-origins and health endpoints and forwarded them to the Streamlit port

  • I also (tried to) turn off some of the CSP policies. Of course, the plan is to turn these back on once everything fully works

server {
    listen       80;
    listen       [::]:80;
    server_name  url-that-your-service-will-run-on-example.com www.url-that-your-service-will-run-on-example.com;

    location = / {
            proxy_pass http://****:8501/;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 86400;
    }

    location ^~ /static {
            proxy_pass http://127.0.0.1:8501/static/;
    }
    location ^~ /vendor {
            proxy_pass http://127.0.0.1:8501/vendor;
    }

    location = /_stcore/health {
            proxy_pass http://******:8501/_stcore/health;
    }

    location = /_stcore/allowed-message-origins {
            proxy_pass http://*******:8501/_stcore/allowed-message-origins;
    }

    location = /_stcore/stream {
            proxy_pass http://*****:8501/_stcore/stream;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 86400;
    }

    add_header Content-Security-Policy "default-src 'self' https: data: 'unsafe-inline';" always;
}

In config.toml

serverAddress = "url-that-your-service-will-run-on-example.com"
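
For clarity, serverAddress here is the browser.serverAddress option, so in the full config.toml it sits under the [browser] section:

[browser]
serverAddress = "url-that-your-service-will-run-on-example.com"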

Let me know if that helps / gets you a bit closer!

If you have Nginx Proxy Manager, just turn on Websockets Support.
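
In plain nginx terms, I believe that toggle roughly corresponds to adding the websocket upgrade headers to the generated proxy config, something like:

# approximation of what the Websockets Support toggle adds; the generated config may differ
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";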

Hi @pim - I’m running into the same issue using an AWS ELB to direct traffic based on a URL path. Did you guys come up with a solution?
@Hu-Wentao, I don’t understand how websockets are related here; these are HTTP/2 requests, not websocket requests. Can you elaborate on your fix, please?
Thanks, Mark

Also pinging @Think_Big_Data_Analy and @ksdaftari, since I could only tag two users in the previous post :slight_smile:

This was quite straightforward to resolve for me in the end.

Just add --server.baseUrlPath=myapp to the streamlit run arguments.
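
For anyone following along, that looks roughly like this (app.py is a placeholder for your own script, and myapp is whatever path the proxy or load balancer routes on):

streamlit run app.py --server.baseUrlPath=myapp

The app, including the /_stcore/* endpoints, is then served under /myapp/.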

Hi Mark,

Could you elaborate a bit more on how you solved this? We still have the same error. We are trying to set up path-based routing from an ELB to a Streamlit app running on a subpath on a Fargate service. We still get a white screen and see a 404 error in the dev console. We tried adding --server.baseUrlPath=myapp, where myapp is the subpath in our routing, e.g. url.com/myapp.
How did you do this?

Kudos to @pim,
I got mine working behind a URL subpath. The only changes needed were in config.toml, as follows:

[server]
port=8501
runOnSave=true
baseUrlPath = "webapp"  # change this path to match the one in nginx

[logger]
level="debug"
messageFormat = "%(asctime)s %(message)s"

Then the nginx config is as follows:

   location /webapp {
            proxy_pass http://localhost:8501/webapp;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 86400;
   }
    location /webapp/static {
            proxy_pass http://localhost:8501/webapp/static/;
    }
    location /webapp/vendor {
            proxy_pass http://localhost:8501/webapp/vendor;
    }

    location /webapp/_stcore/health {
            proxy_pass http://localhost:8501/webapp/_stcore/health;
    }

    location /webapp/_stcore/allowed-message-origins {
            proxy_pass http://localhost:8501/webapp/_stcore/allowed-message-origins;
    }

    location /webapp/_stcore/stream {
            proxy_pass http://localhost:8501/webapp/_stcore/stream;
            proxy_http_version 1.1;
            proxy_redirect off;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header Host $http_host;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
            proxy_read_timeout 86400;
    }

And voilà, it worked.
For SSL automation, I also used Nginx Proxy Manager for HTTPS forcing and certificate auto-renewal. It made my life easier.

Cheers

Why use nginx? Couldn’t you just use the free version of Cloudflare?