After running simulations with the MJQM simulator, you’ll find data files in the Results directory, in a subdirectory with the name of your simulation. We provide two main ways to visualize and analyze these results:
- Static plots with the `plot_experiment.py` script
- An interactive dashboard with `plotly_app.py`

The provided Python environment has all required packages already installed for both methods, and it also includes the Spyder IDE for a more custom approach to results analysis.
Both visualization methods share some common elements:
The plots help identify:
[!Note] For comprehensive guidance on interpreting these metrics, understanding stability boundaries, and comparing policies systematically, see the Policy Comparison Guide.
The plot_experiment.py script generates high-quality plots suitable for publications, presentations, and detailed analysis. It uses matplotlib to create PDF and PNG visualizations of various metrics.
To generate static plots for a simulation:
uv run scripts/plot_experiment.py [path]
Where [path] can be:
- A folder (relative to `Results/` or absolute): loads all CSV files in that folder
- A single CSV file (relative to `Results/` or absolute): loads that single CSV file

The script will generate several plots in the simulation folder:
Response time plots (in the `RespTime` subfolder):

- `lambdasVsTotRespTime.pdf/png`: Overall response time vs arrival rate
- `lambdasVsT{N}RespTime.pdf/png`: Response time for each job class vs arrival rate

Waiting time plots (in the `WaitTime` subfolder):

- `lambdasVsTotWaitTime.pdf/png`: Overall waiting time vs arrival rate
- `lambdasVsT{N}WaitTime.pdf/png`: Waiting time for each job class vs arrival rate

You can customize the plots by modifying parameters in `plot_experiment.py`. The key configurations include:
- `n_cores`: Number of cores in the system. If you included the cores column in the simulation output, that value will be used.
- `cols`: The colors to use for simulated distributions
- `markers`: The markers to display the executed simulations
- `ylims_*`: Y-axis limits for different plot types
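For instance, a minimal sketch of such an edit near the top of `plot_experiment.py` might look like the following; the concrete values, and the exact name of the `ylims_*` variable, are illustrative rather than taken from the script:

```python
# Illustrative configuration values -- adapt to your experiment.
n_cores = 2048                     # fallback; ignored when the output already has a cores column
cols = ["blue", "red", "green"]    # one color per simulated distribution
markers = ["o", "s", "^"]          # one marker per executed simulation
ylims_resp_time = (0, 100)         # example of a ylims_* limit; the actual variable name may differ
```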
The Plotly-based report provides an interactive dashboard for exploring simulation results dynamically.

To launch the interactive dashboard:
uv run scripts/plotly_app.py
This will start a local web server (by default at http://127.0.0.1:8050/) where you can access the report. The dashboard scans the Results/ directory for available experiments.
If you want to customize the dashboard, you can modify the plotly_app.py file:
- Extend the `y_axis_mappings` dictionary to include additional metrics

For example, to add a new metric for execution efficiency:
y_axis_mappings = dict(
    # ... existing metrics ...
    throughput=dict(
        column="Throughput",
        label="Throughput",
        class_column="T{} Throughput",
        uom=" [%]",
        per_class=True,
    ),
)
You can also modify the cosmetic variables in the `plot_experiment.py` script:

- `cols`: The colors to use for simulated distributions
- `markers_plotly`: The markers to display the executed simulations

The dashboard provides the following features:
[!Tip] Use per-class selection to assess fairness: compare waiting times for the smallest and largest classes. Large disparities indicate uneven treatment. See Fairness quantification in the Policy Comparison Guide.
The vertical dotted lines on plots indicate each policy’s stability boundary: the maximum arrival rate at which the policy maintains stable operation. Beyond this point, waiting times and queue lengths grow super-linearly, indicating the system cannot keep up with the offered load.
Stability boundaries are identified using Kleinrock’s Power Metric:
\[P(\lambda) = \frac{X(\lambda)}{R(\lambda)}\]

where $X(\lambda)$ is throughput and $R(\lambda)$ is mean response time at arrival rate $\lambda$.
The power metric balances throughput against delay. The arrival rate $\lambda^*$ that maximises power represents the operational stability boundary: beyond this point, throughput grows slowly while delay grows super-linearly.
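As a rough illustration of how this boundary can be located from the exported CSV data, the sketch below computes the power metric for each simulated point and takes the arrival rate at its maximum. The file path and column names (`Arrival Rate`, `Throughput`, `RespTime`) are placeholders, not the exact headers produced by the simulator.

```python
import pandas as pd

# Sketch: estimate the stability boundary as the arrival rate that maximises
# Kleinrock's power metric P(lambda) = X(lambda) / R(lambda).
# Path and column names are illustrative; adapt them to the actual CSV output.
df = pd.read_csv("Results/my_experiment/results.csv")

power = df["Throughput"] / df["RespTime"]              # P(lambda) at each simulated point
lambda_star = df.loc[power.idxmax(), "Arrival Rate"]   # arrival rate with maximum power
print(f"Estimated stability boundary: lambda* = {lambda_star}")
```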
How it appears in visualizations:
Key insight: Different policies have different stability boundaries depending on workload characteristics. Comparing boundaries across policies is one of the primary uses of this visualisation.
For detailed explanation of stability analysis and policy comparison methodology, see the Policy Comparison Guide.
For more advanced analysis and custom visualizations, we provide tools to work with the simulation data directly.
For a more interactive development experience, you can use the Spyder IDE that comes with the Python environment:
uv run spyder
When working with the analysis tools, it’s helpful to open these three main Python files in Spyder from the scripts folder:
- `load_experiment_data.py`: Core module for loading simulation data
- `plot_experiment.py`: Static plot generation
- `plotly_app.py`: Interactive dashboard

This allows you to easily examine, modify, and run the visualization code while having access to data loading functions.
To interactively explore simulation data in Spyder, open the load_experiment_data.py file and execute it.
It will let you select the simulation results to load via the command line, and it will prepare the environment with your data.
If the wrong number of cores was picked, or you want to reload the data for any other reason, you can directly execute the following:
# Reload the data with the appropriate number of cores
dfs, Ts, exp, asymptotes, actual_util = load_experiment_data(folder, n_cores=2048)
# Now you can explore the dataframe
dfs.head()
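From there you can run your own analysis on the loaded data. The snippet below is a minimal sketch assuming `dfs` is a single pandas DataFrame, as above, and uses placeholder column names (`Arrival Rate`, `Total Wait Time`) that should be replaced with the actual headers in your results:

```python
import matplotlib.pyplot as plt

# Sketch of a custom plot from the loaded experiment data.
# Column names are placeholders for the real CSV headers.
fig, ax = plt.subplots()
ax.plot(dfs["Arrival Rate"], dfs["Total Wait Time"], marker="o")
ax.set_xlabel("Arrival rate")
ax.set_ylabel("Mean waiting time")
plt.show()
```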