Create and run a new experiment
===============================

**Auptimizer** only needs a modified training script and an experiment configuration file (``.json``) to run a new experiment.

1. Create the ``experiment.json`` file to specify the experiment configuration and hyperparameters. Running ``python -m aup.init`` will guide you through this step interactively. The structure of this JSON file is largely the same for most algorithms, with minor modifications. See :doc:`algorithm` for more details.

2. Modify your training script. We provide three approaches for modifying the training script:

   + `Manual conversion <#manual-modification-of-training-code>`_;
   + `Python decorator <#code-conversion-with-decorator>`_;
   + `Auto conversion for script (beta) <#auto-code-conversion>`_.

3. Your experiment is now ready to run via ``python -m aup experiment.json``. For more details, see `Run experiment`_.

Terminology
-----------

For data science applications, the **user** (AI scientist/engineer) solves a given data mining problem with a specified machine learning model. A script (**code**) is written, and some hyperparameters are identified to be explored during model training. Typically, the user carries out an **experiment** to examine a range of hyperparameter combinations and measures the **performance** of the model on a hold-out dataset — for example, testing a deep learning model by exploring learning rates between 0 and 1 and dropout between 0.4 and 0.6, with performance measured by accuracy. Each individual training process for a given selection of hyperparameters (e.g., learning rate = 0.1, dropout = 0.5) is called a **job**. All jobs run on an assigned computational **resource**, e.g. CPU or GPU. After all jobs are finished, the user retrieves the best model from the training history for further analysis or application.
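As an illustration of step 1 above, an ``experiment.json`` for a random search over the learning-rate and dropout ranges used in the example might look like the following sketch. The script name, hyperparameter names, value ranges, and the exact set of required keys are illustrative assumptions — consult :doc:`algorithm` for the fields each algorithm actually expects, or let ``python -m aup.init`` generate the file for you.

```json
{
    "name": "demo_experiment",
    "script": "train.py",
    "resource": "cpu",
    "n_parallel": 1,
    "target": "max",
    "parameter_config": [
        {"name": "lr", "range": [0.0, 1.0], "type": "float"},
        {"name": "dropout", "range": [0.4, 0.6], "type": "float"}
    ]
}
```

The ``name`` values under ``parameter_config`` are the names your training script must use when reading hyperparameters from the parsed configuration.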
Manual modification of training code
------------------------------------

If you plan to change your training script manually, the general flow of the conversion process is as follows:

a. Parse the configuration file (the first command-line argument, i.e. ``sys.argv[1]``) using ``aup.BasicConfig.load(sys.argv[1])``, and use the hyperparameters parsed from the ``BasicConfig`` in your code, such as ``config.param_name`` or ``config['param_name']``, where ``param_name`` must be consistent with the name used in ``experiment.json``.

b. Report the result using ``aup.print_result``.

c. Add the shebang line ``#!/usr/bin/env python`` and make the script executable (``chmod u+x script.py``).

Code conversion with decorator
------------------------------

For better control, you can use ``aup_args`` or ``aup_flags`` to decorate your code. An example is shown below:

.. figure:: images/comparison.png
   :alt: Code comparison

   Example of using a decorator for code conversion.

Auto code conversion
--------------------

If your training function takes all hyperparameters as input and the training script is well defined, **Auptimizer** can convert the code for you automatically::

    python -m aup.convert
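Putting the three manual-conversion steps together, a converted training script might look like the sketch below. The ``train`` function, the ``lr``/``dropout`` hyperparameter names, and the mock score are hypothetical stand-ins for your own code; only ``BasicConfig`` and ``print_result`` are the ``aup`` helpers described above.

```python
#!/usr/bin/env python
"""Hypothetical training script converted for Auptimizer by hand."""
import sys


def train(lr, dropout):
    # Stand-in for real model training: returns a mock "accuracy"
    # that peaks at lr=0.1, dropout=0.5.
    return 1.0 - abs(lr - 0.1) - abs(dropout - 0.5)


def main(config_path):
    # Deferred import so the sketch can be read without aup installed.
    from aup import BasicConfig, print_result

    # Step (a): parse the config file passed as the first CLI argument.
    config = BasicConfig().load(config_path)

    # Hyperparameter names must match those in experiment.json.
    score = train(config.lr, config.dropout)

    # Step (b): report the result back to Auptimizer.
    print_result(score)


if __name__ == "__main__" and len(sys.argv) > 1:
    main(sys.argv[1])
```

Step (c) is covered by the shebang line at the top, plus ``chmod u+x`` on the file itself. Auptimizer invokes the script with the path to a job-specific configuration file as its only argument, which is why all hyperparameters are read from ``sys.argv[1]`` rather than from hard-coded values.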