{ "cells": [ { "cell_type": "markdown", "metadata": {}, "source": [ "# SBML import, observation model, sensitivity analysis, data export and visualization\n", "\n", "This example uses the `model_steadystate_scaled` model to demonstrate:\n", "\n", "* SBML import\n", "* specifying the observation model\n", "* performing sensitivity analysis\n", "* exporting and visualizing simulation results" ] }, { "cell_type": "code", "metadata": {}, "source": [ "# SBML model we want to import\n", "sbml_file = \"model_steadystate_scaled_without_observables.xml\"\n", "# Name of the model; this will also be the name of the Python module\n", "model_name = \"model_steadystate_scaled\"\n", "# Directory to which the generated model code is written\n", "model_output_dir = model_name\n", "\n", "import amici\n", "import libsbml\n", "import matplotlib.pyplot as plt\n", "import numpy as np\n", "from amici.sim.sundials import (\n", "    ExpData,\n", "    ParameterScaling,\n", "    SensitivityMethod,\n", "    SensitivityOrder,\n", "    run_simulation,\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "## The example model\n", "\n", "Here we use `libsbml` to show the reactions and species described by the model (this is independent of AMICI)."
] }, { "cell_type": "code", "metadata": {}, "source": [ "sbml_reader = libsbml.SBMLReader()\n", "sbml_doc = sbml_reader.readSBML(sbml_file)\n", "sbml_model = sbml_doc.getModel()\n", "\n", "print(\"Species: \", [s.getId() for s in sbml_model.getListOfSpecies()])\n", "\n", "print(\"\\nReactions:\")\n", "for reaction in sbml_model.getListOfReactions():\n", "    reactants = \" + \".join(\n", "        [\n", "            \"{} {}\".format(\n", "                int(r.getStoichiometry()) if r.getStoichiometry() > 1 else \"\",\n", "                r.getSpecies(),\n", "            )\n", "            for r in reaction.getListOfReactants()\n", "        ]\n", "    )\n", "    products = \" + \".join(\n", "        [\n", "            \"{} {}\".format(\n", "                int(r.getStoichiometry()) if r.getStoichiometry() > 1 else \"\",\n", "                r.getSpecies(),\n", "            )\n", "            for r in reaction.getListOfProducts()\n", "        ]\n", "    )\n", "    reversible = \"<\" if reaction.getReversible() else \"\"\n", "    print(\n", "        \"%3s: %10s %1s->%10s\\t\\t[%s]\"  # noqa: UP031\n", "        % (\n", "            reaction.getId(),\n", "            reactants,\n", "            reversible,\n", "            products,\n", "            libsbml.formulaToL3String(reaction.getKineticLaw().getMath()),\n", "        )\n", "    )" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Importing an SBML model, compiling and generating an AMICI module\n", "\n", "Before we can use AMICI to simulate our model, the SBML model needs to be translated to C++ code. This is done by `amici.SbmlImporter`." ] }, { "cell_type": "code", "metadata": {}, "source": [ "# Create an SbmlImporter instance for our SBML model\n", "sbml_importer = amici.SbmlImporter(sbml_file)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "In this example, we want to specify fixed parameters, observables, and a $\\sigma$ parameter. Unfortunately, the latter two are not part of the [SBML standard](https://sbml.org/). However, they can be provided to `amici.SbmlImporter.sbml2amici` as demonstrated in the following."
] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Fixed parameters\n", "\n", "Fixed parameters, i.e., parameters with respect to which no sensitivities are to be computed (these are often parameters specifying a certain experimental condition), are provided as a list of parameter names." ] }, { "cell_type": "code", "metadata": {}, "source": [ "fixed_parameters = [\"k0\"]" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Observation model\n", "\n", "Specifying the observation model (i.e., the quantities that are observed, as well as the respective error models) is beyond the scope of SBML. Here we define it manually.\n", "\n", "If you are looking for a more scalable way of defining observables, then check out [PEtab](https://github.com/PEtab-dev/PEtab). Another possibility is using SBML's [AssignmentRules](https://sbml.org/software/libsbml/5.18.0/docs/formatted/python-api/classlibsbml_1_1_assignment_rule.html) to specify model outputs within the SBML file.\n", "\n", "For model import in AMICI, the different types of measurements are represented as `MeasurementChannel` objects.\n", "A measurement channel is characterized by an ID, an optional name, the observation function (`MeasurementChannel.formula`), the type of noise distribution (`MeasurementChannel.noise_distribution`, which defaults to a normal distribution), and the scale parameter of that distribution (`MeasurementChannel.sigma`).\n", "The symbols used in the observation function and for the scale parameter must already be defined in the model."
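] }, { "cell_type": "markdown", "metadata": {}, "source": [ "For illustration only, a channel with a non-default noise model could be declared as sketched below. The `noise_distribution` attribute is mentioned above, but the accepted values (such as `\"log-normal\"`) are an assumption here and are not used in this notebook; consult the `MeasurementChannel` documentation for the supported identifiers.\n", "\n", "```python\n", "from amici import MeasurementChannel as MC\n", "\n", "# hypothetical sketch (not executed in this notebook):\n", "# a log-normally distributed measurement of x1; the\n", "# \"log-normal\" identifier is an assumption\n", "mc_lognormal = MC(\n", "    id_=\"observable_x1_lognormal\",\n", "    formula=\"x1\",\n", "    noise_distribution=\"log-normal\",\n", "    sigma=\"observable_x1withsigma_sigma\",\n", ")\n", "```\n", "\n", "Here, we only use normally distributed channels, defined in the next cell."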
] }, { "cell_type": "code", "metadata": {}, "source": [ "# Define observation model\n", "from amici import MeasurementChannel as MC\n", "\n", "observation_model = [\n", "    MC(id_=\"observable_x1\", formula=\"x1\"),\n", "    MC(id_=\"observable_x2\", formula=\"x2\"),\n", "    MC(id_=\"observable_x3\", formula=\"x3\"),\n", "    MC(id_=\"observable_x1_scaled\", formula=\"scaling_x1 * x1\"),\n", "    MC(id_=\"observable_x2_offsetted\", formula=\"offset_x2 + x2\"),\n", "    MC(\n", "        id_=\"observable_x1withsigma\",\n", "        formula=\"x1\",\n", "        sigma=\"observable_x1withsigma_sigma\",\n", "    ),\n", "]" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Generating the module\n", "\n", "Now we can generate the Python module for our model. `amici.SbmlImporter.sbml2amici` will symbolically derive the sensitivity equations, generate C++ code for model simulation, and assemble the Python module. Standard logging verbosity levels can be passed to this function to see timestamped progress information during code generation."
] }, { "cell_type": "code", "metadata": { "scrolled": true }, "source": [ "import logging\n", "\n", "model = sbml_importer.sbml2amici(\n", "    model_name,\n", "    model_output_dir,\n", "    verbose=logging.INFO,\n", "    observation_model=observation_model,\n", "    fixed_parameters=fixed_parameters,\n", ")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Importing the module and loading the model\n", "\n", "A model instance for simulation is returned by `sbml2amici`, but we can also import the module containing the model class later using `amici.import_model_module`:" ] }, { "cell_type": "code", "metadata": {}, "source": [ "model_module = amici.import_model_module(model_name, model_output_dir)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "And get an instance of our model, from which we can retrieve information such as parameter names:" ] }, { "cell_type": "code", "metadata": {}, "source": [ "model = model_module.get_model()\n", "\n", "print(\"Model name: \", model.get_name())\n", "print(\"Model parameters: \", model.get_free_parameter_ids())\n", "print(\"Model outputs: \", model.get_observable_ids())\n", "print(\"Model state variables: \", model.get_state_ids())" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": { "collapsed": false }, "source": [ "## Running simulations and analyzing results\n", "\n", "After importing the model, we can run simulations using `run_simulation` (imported from `amici.sim.sundials` above). This requires a `Model` instance and a `Solver` instance. Optionally, you can provide measurements inside an `ExpData` instance, as shown later in this notebook."
] }, { "cell_type": "code", "metadata": {}, "source": [ "# Create Model instance\n", "model = model_module.get_model()\n", "\n", "# set timepoints for which we want to simulate the model\n", "model.set_timepoints(np.linspace(0, 60, 60))\n", "\n", "# Create solver instance\n", "solver = model.create_solver()\n", "\n", "# Run simulation using default model parameters and solver options\n", "rdata = run_simulation(model, solver)" ], "outputs": [], "execution_count": null }, { "cell_type": "code", "metadata": {}, "source": [ "print(\n", " \"Simulation was run using model default parameters as specified in the SBML model:\"\n", ")\n", "print(dict(zip(model.get_free_parameter_ids(), model.get_free_parameters())))" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "Simulation results are provided as `numpy.ndarray`s in the returned dictionary:" ] }, { "cell_type": "code", "metadata": {}, "source": [ "# np.set_printoptions(threshold=8, edgeitems=2)\n", "for key, value in rdata.items():\n", " print(f\"{key:12s}\", value)" ], "outputs": [], "execution_count": null }, { "cell_type": "code", "metadata": { "collapsed": false, "pycharm": { "name": "#%%\n" } }, "source": [ "# In particular for interactive use, ReturnDataView.by_id() and amici.evaluate provides a more convenient way to access slices of the result:\n", "# Time trajectory of observable observable_x1\n", "print(f\"{rdata.by_id('observable_x1')=}\")\n", "# Time trajectory of state variable x2\n", "print(f\"{rdata.by_id('x2')=}\")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": "Alternatively, those data can be accessed through `ReturnData.xr.*` as [xarray.DataArray](https://docs.xarray.dev/en/stable/index.html) objects, that contain additional metadata such as timepoints and identifiers. This allows for more convenient indexing and plotting of the results." 
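}, { "cell_type": "markdown", "metadata": {}, "source": [ "For example, a single trajectory can be selected by coordinate label and plotted directly. The coordinate name used below is an assumption; inspect `rdata.xr.x.coords` for the actual names:\n", "\n", "```python\n", "# hypothetical sketch: select the trajectory of state x1 by its id\n", "# (the coordinate name \"x\" is an assumption) and plot it over time\n", "x1_traj = rdata.xr.x.sel(x=\"x1\")\n", "x1_traj.plot()\n", "```" ]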
}, { "cell_type": "code", "metadata": {}, "source": [ "rdata.xr.x" ], "outputs": [], "execution_count": null }, { "cell_type": "code", "metadata": {}, "source": [ "rdata.xr.x.to_pandas()" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Plotting trajectories\n", "\n", "The raw simulation results above are not particularly readable. Let's plot the trajectories of the model states and observables using `matplotlib.pyplot`:" ] }, { "metadata": {}, "cell_type": "code", "source": [ "from amici.sim.sundials.plotting import (\n", "    plot_expressions,\n", "    plot_observable_trajectories,\n", "    plot_state_trajectories,\n", ")\n", "\n", "plot_state_trajectories(rdata)\n", "plot_observable_trajectories(rdata)" ], "outputs": [], "execution_count": null }, { "metadata": {}, "cell_type": "markdown", "source": "We can also evaluate symbolic expressions of model quantities using `amici.numpy.evaluate`, or directly plot the results using `plot_expressions`:" }, { "metadata": {}, "cell_type": "code", "source": "plot_expressions(\"observable_x1 + observable_x2 + observable_x3\", rdata=rdata)", "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Computing likelihood\n", "\n", "Often, model parameters need to be inferred from experimental data. This is commonly done by maximizing the likelihood of observing the data given the current model parameters. AMICI will compute this likelihood if experimental data is provided to `run_simulation` as the optional third argument. Measurements, along with their standard deviations, are provided through an `ExpData` instance."
] }, { "cell_type": "code", "metadata": {}, "source": [ "# Create model instance and set time points for simulation\n", "model = model_module.get_model()\n", "model.set_timepoints(np.linspace(0, 10, 11))\n", "\n", "# Create solver instance, keep default options\n", "solver = model.create_solver()\n", "\n", "# Run simulation without experimental data\n", "rdata = run_simulation(model, solver)\n", "\n", "# Create ExpData instance from simulation results\n", "edata = ExpData(rdata, 1.0, 0.0)\n", "\n", "# Re-run simulation, this time passing \"experimental data\"\n", "rdata = run_simulation(model, solver, edata)\n", "\n", "print(f\"Log-likelihood {rdata['llh']:f}\")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": "The provided measurements can be visualized together with the simulation results by passing the `ExpData` instance to `plot_observable_trajectories`:" }, { "cell_type": "code", "metadata": {}, "source": [ "plot_observable_trajectories(rdata, edata=edata)\n", "plt.legend(loc=\"center left\", bbox_to_anchor=(1.04, 0.5))" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Simulation tolerances\n", "\n", "Numerical error tolerances are often critical for obtaining accurate results. For the state variables, integration errors can be controlled using `set_relative_tolerance` and `set_absolute_tolerance`. Similar functions exist for sensitivities, steady states and quadratures. We first compute a reference solution using extremely tight tolerances and then assess the integration error for different levels of absolute and relative tolerance."
] }, { "cell_type": "code", "metadata": {}, "source": [ "solver.set_relative_tolerance(1e-16)\n", "solver.set_absolute_tolerance(1e-16)\n", "solver.set_sensitivity_order(SensitivityOrder.none)\n", "rdata_ref = run_simulation(model, solver, edata)\n", "\n", "\n", "def get_simulation_error(solver):\n", "    rdata = run_simulation(model, solver, edata)\n", "    return np.mean(np.abs(rdata[\"x\"] - rdata_ref[\"x\"])), np.mean(\n", "        np.abs(rdata[\"llh\"] - rdata_ref[\"llh\"])\n", "    )\n", "\n", "\n", "def get_errors(tolfun, tols):\n", "    solver.set_relative_tolerance(1e-16)\n", "    solver.set_absolute_tolerance(1e-16)\n", "    x_errs = []\n", "    llh_errs = []\n", "    for tol in tols:\n", "        getattr(solver, tolfun)(tol)\n", "        x_err, llh_err = get_simulation_error(solver)\n", "        x_errs.append(x_err)\n", "        llh_errs.append(llh_err)\n", "    return x_errs, llh_errs\n", "\n", "\n", "atols = np.logspace(-5, -15, 100)\n", "atol_x_errs, atol_llh_errs = get_errors(\"set_absolute_tolerance\", atols)\n", "\n", "rtols = np.logspace(-5, -15, 100)\n", "rtol_x_errs, rtol_llh_errs = get_errors(\"set_relative_tolerance\", rtols)\n", "\n", "fig, axes = plt.subplots(1, 2, figsize=(15, 5))\n", "\n", "\n", "def plot_error(tols, x_errs, llh_errs, tolname, ax):\n", "    ax.plot(tols, x_errs, \"r-\", label=\"x\")\n", "    ax.plot(tols, llh_errs, \"b-\", label=\"llh\")\n", "    ax.set_xscale(\"log\")\n", "    ax.set_yscale(\"log\")\n", "    ax.set_xlabel(f\"{tolname} tolerance\")\n", "    ax.set_ylabel(\"average numerical error\")\n", "    ax.legend()\n", "\n", "\n", "plot_error(atols, atol_x_errs, atol_llh_errs, \"absolute\", axes[0])\n", "plot_error(rtols, rtol_x_errs, rtol_llh_errs, \"relative\", axes[1])\n", "\n", "# reset the relative tolerance to its default value, keep the tight absolute tolerance\n", "solver.set_relative_tolerance(1e-8)\n", "solver.set_absolute_tolerance(1e-16)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Sensitivity analysis\n", "\n", "AMICI can provide first- and second-order sensitivities 
using the forward or adjoint method. The respective options are set on the `Model` and `Solver` objects." ] }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Forward sensitivity analysis" ] }, { "cell_type": "code", "metadata": {}, "source": [ "model = model_module.get_model()\n", "model.set_timepoints(np.linspace(0, 10, 11))\n", "\n", "# sensitivities w.r.t. all parameters\n", "model.require_sensitivities_for_all_parameters()\n", "# sensitivities w.r.t. the specified parameters only:\n", "# model.set_parameter_list([1, 2])\n", "\n", "# parameters are used as-is (not log-transformed)\n", "model.set_parameter_scale(ParameterScaling.none)\n", "\n", "solver = model.create_solver()\n", "# forward sensitivity analysis\n", "solver.set_sensitivity_method(SensitivityMethod.forward)\n", "# first-order sensitivities\n", "solver.set_sensitivity_order(SensitivityOrder.first)\n", "\n", "rdata = run_simulation(model, solver)\n", "\n", "# print sensitivity-related results\n", "for key, value in rdata.items():\n", "    if key.startswith(\"s\"):\n", "        print(f\"{key:12s}\", value)" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "### Adjoint sensitivity analysis" ] }, { "cell_type": "code", "metadata": {}, "source": [ "# Set model options\n", "model = model_module.get_model()\n", "p_orig = np.array(model.get_free_parameters())\n", "p_orig[\n", "    list(model.get_free_parameter_ids()).index(\"observable_x1withsigma_sigma\")\n", "] = 0.1  # Change default parameter\n", "model.set_free_parameters(p_orig)\n", "model.set_parameter_scale(ParameterScaling.none)\n", "model.set_timepoints(np.linspace(0, 10, 21))\n", "\n", "solver = model.create_solver()\n", "solver.set_max_steps(10**4)  # Set maximum number of solver steps\n", "\n", "# simulate time-course to get artificial data\n", "rdata = run_simulation(model, solver)\n", "edata = ExpData(rdata, 1.0, 0)\n", "edata.fixed_parameters = model.get_fixed_parameters()\n", "# set sigma to 
1.0 except for observable 5, so that p[7] is used instead\n", "# (if sigma is parameterized, the corresponding ExpData entries must be NaN, otherwise they would override the parameter)\n", "edata.set_noise_scales(\n", "    rdata[\"t\"] * 0 + np.nan,\n", "    list(model.get_observable_ids()).index(\"observable_x1withsigma\"),\n", ")\n", "\n", "# enable sensitivities\n", "# First-order ...\n", "solver.set_sensitivity_order(SensitivityOrder.first)\n", "# ... adjoint sensitivities\n", "solver.set_sensitivity_method(SensitivityMethod.adjoint)\n", "# ... w.r.t. all parameters\n", "model.require_sensitivities_for_all_parameters()\n", "\n", "# compute adjoint sensitivities\n", "rdata = run_simulation(model, solver, edata)\n", "# print(rdata['sigmay'])\n", "print(f\"Log-likelihood: {rdata['llh']}\")\n", "print(f\"Gradient: {rdata['sllh']}\")" ], "outputs": [], "execution_count": null }, { "cell_type": "markdown", "metadata": {}, "source": [ "## Finite difference gradient check\n", "\n", "Compare the AMICI-computed gradient with finite differences." ] }, { "cell_type": "code", "metadata": {}, "source": [ "from scipy.optimize import check_grad\n", "\n", "\n", "def func(x0, symbol=\"llh\", x0full=None, plist=(), verbose=False):\n", "    p = x0[:]\n", "    if len(plist):\n", "        p = x0full[:]\n", "        p[plist] = x0\n", "    verbose and print(f\"f: p={p}\")\n", "\n", "    old_parameters = model.get_free_parameters()\n", "    solver.set_sensitivity_order(SensitivityOrder.none)\n", "    model.set_free_parameters(p)\n", "    rdata = run_simulation(model, solver, edata)\n", "\n", "    model.set_free_parameters(old_parameters)\n", "\n", "    res = np.sum(rdata[symbol])\n", "    verbose and print(res)\n", "    return res\n", "\n", "\n", "def grad(x0, symbol=\"llh\", x0full=None, plist=(), verbose=False):\n", "    p = x0[:]\n", "    if len(plist):\n", "        model.set_parameter_list(plist)\n", "        p = x0full[:]\n", "        p[plist] = x0\n", "    else:\n", "        model.require_sensitivities_for_all_parameters()\n", "    verbose and print(f\"g: p={p}\")\n", 
"    old_parameters = model.get_free_parameters()\n", "    solver.set_sensitivity_method(SensitivityMethod.forward)\n", "    solver.set_sensitivity_order(SensitivityOrder.first)\n", "    model.set_free_parameters(p)\n", "    rdata = run_simulation(model, solver, edata)\n", "\n", "    model.set_free_parameters(old_parameters)\n", "\n", "    res = rdata[f\"s{symbol}\"]\n", "    if not isinstance(res, float):\n", "        if len(res.shape) == 3:\n", "            res = np.sum(res, axis=(0, 2))\n", "    verbose and print(res)\n", "    return res\n", "\n", "\n", "epsilon = 1e-4\n", "err_norm = check_grad(func, grad, p_orig, \"llh\", epsilon=epsilon)\n", "print(f\"sllh: |error|_2: {err_norm:f}\")\n", "# assert err_norm < 1e-6\n", "print()\n", "\n", "for ip in range(model.np()):\n", "    plist = [ip]\n", "    p = p_orig.copy()\n", "    err_norm = check_grad(\n", "        func, grad, p[plist], \"llh\", p, [ip], epsilon=epsilon\n", "    )\n", "    print(f\"sllh: p[{ip:d}]: |error|_2: {err_norm:f}\")\n", "\n", "print()\n", "for ip in range(model.np()):\n", "    plist = [ip]\n", "    p = p_orig.copy()\n", "    err_norm = check_grad(func, grad, p[plist], \"y\", p, [ip], epsilon=epsilon)\n", "    print(f\"sy: p[{ip}]: |error|_2: {err_norm:f}\")\n", "\n", "print()\n", "for ip in range(model.np()):\n", "    plist = [ip]\n", "    p = p_orig.copy()\n", "    err_norm = check_grad(func, grad, p[plist], \"x\", p, [ip], epsilon=epsilon)\n", "    print(f\"sx: p[{ip}]: |error|_2: {err_norm:f}\")\n", "\n", "print()\n", "for ip in range(model.np()):\n", "    plist = [ip]\n", "    p = p_orig.copy()\n", "    err_norm = check_grad(\n", "        func, grad, p[plist], \"sigmay\", p, [ip], epsilon=epsilon\n", "    )\n", "    print(f\"ssigmay: p[{ip}]: |error|_2: {err_norm:f}\")" ], "outputs": [], "execution_count": null }, { "cell_type": "code", "metadata": {}, "source": [ "eps = 1e-4\n", "\n", "solver.set_sensitivity_method(\n", "    SensitivityMethod.forward\n", ")  # forward sensitivity analysis\n", "solver.set_sensitivity_order(\n", "    SensitivityOrder.first\n", ")  # 
first-order sensitivities\n", "model.require_sensitivities_for_all_parameters()\n", "solver.set_relative_tolerance(1e-12)\n", "rdata = run_simulation(model, solver, edata)\n", "\n", "\n", "def fd(x0, ip, eps, symbol=\"llh\"):\n", "    \"\"\"Central finite-difference approximation of d(symbol)/dp[ip].\"\"\"\n", "    p = list(x0[:])\n", "    old_parameters = model.get_free_parameters()\n", "    solver.set_sensitivity_order(SensitivityOrder.none)\n", "    p[ip] += eps\n", "    model.set_free_parameters(p)\n", "    rdata_f = run_simulation(model, solver, edata)\n", "    p[ip] -= 2 * eps\n", "    model.set_free_parameters(p)\n", "    rdata_b = run_simulation(model, solver, edata)\n", "\n", "    model.set_free_parameters(old_parameters)\n", "    return (rdata_f[symbol] - rdata_b[symbol]) / (2 * eps)\n", "\n", "\n", "def plot_sensitivities(symbol, eps):\n", "    fig, axes = plt.subplots(4, 2, figsize=(15, 10))\n", "    for ip in range(4):\n", "        fd_approx = fd(model.get_free_parameters(), ip, eps, symbol=symbol)\n", "\n", "        axes[ip, 0].plot(\n", "            edata.get_timepoints(), rdata[f\"s{symbol}\"][:, ip, :], \"r-\"\n", "        )\n", "        axes[ip, 0].plot(edata.get_timepoints(), fd_approx, \"k--\")\n", "        axes[ip, 0].set_ylabel(f\"sensitivity {symbol}\")\n", "        axes[ip, 0].set_xlabel(\"time\")\n", "\n", "        axes[ip, 1].plot(\n", "            edata.get_timepoints(),\n", "            np.abs(rdata[f\"s{symbol}\"][:, ip, :] - fd_approx),\n", "            \"k-\",\n", "        )\n", "        axes[ip, 1].set_ylabel(\"difference to fd\")\n", "        axes[ip, 1].set_xlabel(\"time\")\n", "        axes[ip, 1].set_yscale(\"log\")\n", "\n", "    plt.tight_layout()\n", "    plt.show()" ], "outputs": [], "execution_count": null }, { "cell_type": "code", "metadata": {}, "source": [ "plot_sensitivities(\"x\", eps)" ], "outputs": [], "execution_count": null }, { "cell_type": "code", "metadata": {}, "source": [ "plot_sensitivities(\"y\", eps)" ], "outputs": [], "execution_count": null } ], "metadata": { "kernelspec": { "display_name": "Python 3", "language": "python", "name": "python3" }, "language_info": { "codemirror_mode": { "name": "ipython", "version": 3 }, "file_extension": ".py", 
"mimetype": "text/x-python", "name": "python", "nbconvert_exporter": "python", "pygments_lexer": "ipython3", "version": "3.7.7" }, "nbsphinx": { "execute": "always" }, "toc": { "base_numbering": 1, "nav_menu": {}, "number_sections": true, "sideBar": true, "skip_h1_title": false, "title_cell": "Table of Contents", "title_sidebar": "Contents", "toc_cell": false, "toc_position": {}, "toc_section_display": true, "toc_window_display": false } }, "nbformat": 4, "nbformat_minor": 2 }