Display progress of mesa.batch_run in Streamlit

Hi everyone,
I'm currently building a Streamlit interface for a model written with Mesa (agent-based modeling). My model runs its simulations through mesa.batch_run(). This function takes a boolean argument called display_progress, and when it is True, the progress of the batch run is printed to the Python console (via tqdm, I believe).
I would now like to show that progress in my Streamlit interface, but I'm not sure how to proceed.

Here’s my code on Streamlit:

st.subheader("Run simulation")
with st.container():
  if st.button("ABM WMS"):
      results = mesa.batch_run(
          wmsModel,
          parameters=model_params,
          iterations=nb_iter,
          max_steps=ms,
          number_processes=nproc,
          data_collection_period=dcp,
          display_progress=disp_proc,
      )
      global results_df
      results_df = pd.DataFrame(results)
      st.write("Simulation finished")

and the source code of the mesa.batch_run function:

def batch_run(
    model_cls: type[Model],
    parameters: Mapping[str, Union[Any, Iterable[Any]]],
    # We still retain the Optional[int] because users may set it to None (i.e. use all CPUs)
    number_processes: Optional[int] = 1,
    iterations: int = 1,
    data_collection_period: int = -1,
    max_steps: int = 1000,
    display_progress: bool = True,
) -> list[dict[str, Any]]:
    """Batch run a mesa model with a set of parameter values.

    Parameters
    ----------
    model_cls : Type[Model]
        The model class to batch-run
    parameters : Mapping[str, Union[Any, Iterable[Any]]],
        Dictionary with model parameters over which to run the model. You can either pass single values or iterables.
    number_processes : int, optional
        Number of processes used, by default 1. Set this to None if you want to use all CPUs.
    iterations : int, optional
        Number of iterations for each parameter combination, by default 1
    data_collection_period : int, optional
        Number of steps after which data gets collected, by default -1 (end of episode)
    max_steps : int, optional
        Maximum number of model steps after which the model halts, by default 1000
    display_progress : bool, optional
        Display batch run process, by default True

    Returns
    -------
    List[Dict[str, Any]]
        A list of dictionaries, one per collected data point of each run
    """

    runs_list = []
    run_id = 0
    for iteration in range(iterations):
        for kwargs in _make_model_kwargs(parameters):
            runs_list.append((run_id, iteration, kwargs))
            run_id += 1

    process_func = partial(
        _model_run_func,
        model_cls,
        max_steps=max_steps,
        data_collection_period=data_collection_period,
    )

    results: list[dict[str, Any]] = []

    with tqdm(total=len(runs_list), disable=not display_progress) as pbar:
        if number_processes == 1:
            for run in runs_list:
                data = process_func(run)
                results.extend(data)
                pbar.update()
        else:
            with Pool(number_processes) as p:
                for data in p.imap_unordered(process_func, runs_list):
                    results.extend(data)
                    pbar.update()

    return results

If anyone could help me, I would be grateful.
Thanks!

Hi,

You can use Streamlit's progress bar by modifying the Pool loop inside a copy of batch_run:


        # Run the simulation (inside a copy of mesa's batch_run, so pbar,
        # process_func and runs_list are already defined at this point)
        with Pool(number_processes) as p:
            progress_bar = st.progress(0)
            for i, data in enumerate(p.imap_unordered(process_func, runs_list)):
                results.extend(data)  # each run returns a list of dicts
                pbar.update()
                progress_bar.progress((i + 1) / len(runs_list))
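
If you'd rather not patch Mesa itself, a minimal sketch of the same idea is to reimplement the run loop with a progress callback and hand it st.progress from your Streamlit script. Note that batch_run_with_progress and on_progress are hypothetical names, not part of Mesa's API, and process_func / runs_list would have to be built the same way batch_run builds them:

```python
from multiprocessing import Pool


def batch_run_with_progress(process_func, runs_list, number_processes=1,
                            on_progress=None):
    """Mirror of mesa's batch_run loop, reporting progress as a fraction in [0, 1]."""
    results = []
    total = len(runs_list)

    def report(done):
        # Invoke the callback with the fraction of runs completed so far.
        if on_progress is not None:
            on_progress(done / total)

    if number_processes == 1:
        for i, data in enumerate(map(process_func, runs_list)):
            results.extend(data)  # each run yields a list of dicts
            report(i + 1)
    else:
        with Pool(number_processes) as p:
            for i, data in enumerate(p.imap_unordered(process_func, runs_list)):
                results.extend(data)
                report(i + 1)
    return results


# In the Streamlit script, wire st.progress in as the callback
# (st.progress accepts a float between 0.0 and 1.0):
#
# progress_bar = st.progress(0.0)
# results = batch_run_with_progress(process_func, runs_list, nproc,
#                                   progress_bar.progress)
```

The callback keeps the run loop independent of Streamlit, so the same function also works from a plain script (pass a tqdm update, or nothing at all).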