I’ve been working on a Streamlit app called MarketScouter (https://market.streamlit.app) that performs comprehensive stock market analysis, using various machine learning models and algorithms to generate stock recommendations. There’s a demo of the app in action on YouTube: “MarketScouter: AI Trading Tool that CHANGES EVERYTHING!”
Recently, I’ve been running into resource and memory limit errors with the app, even when I was the only one using it. I’ve tried to mitigate this by deleting variables as soon as they’re no longer needed and calling gc.collect(), but I’m not sure that’s sufficient, especially if multiple users are using the app concurrently.
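For reference, the cleanup I’m doing after each analysis pass looks roughly like this (the function and variable names here are just illustrative, not my real code):

```python
import gc

def run_analysis() -> float:
    # Build the large intermediate objects needed for one analysis pass.
    prices = [float(i) for i in range(1_000_000)]
    result = sum(prices) / len(prices)

    # Drop the reference as soon as it's no longer needed...
    del prices
    # ...then ask the garbage collector to reclaim anything unreachable,
    # including objects stuck in reference cycles.
    gc.collect()
    return result
```

My understanding is that `del` only removes the name binding, and `gc.collect()` mainly helps with cyclic garbage, so I’m not sure how much this actually returns to the OS.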
I’ve also tried decorating my model.predict calls with @st.cache_data, but that raised errors and didn’t seem to work for my app. Maybe my code was wrong? Could someone show an example of how to change the code below to use caching correctly?
```python
from sklearn.neural_network import MLPRegressor

# Define model with smaller hidden layer
model = MLPRegressor(hidden_layer_sizes=(16,),
                     activation='relu',
                     solver='adam',
                     alpha=0.001,
                     early_stopping=True)

# Train model (scikit-learn's fit() returns the fitted estimator itself,
# not a Keras-style history object)
history = model.fit(state, reward)

# Evaluate model on validation set
val_predictions = model.predict(val_state)
```
I’m seeking advice on how to better manage memory usage in my Streamlit app, specifically strategies for handling memory when multiple users are on the app concurrently. I’m also curious whether there are other Streamlit features for managing memory.
Any advice or insights would be greatly appreciated. Thank you in advance for your help!