Auptimizer HPO main entry

aup.__main__ is the Auptimizer main entry point for HPO experiments.

Use it as:

python -m aup <experiment configuration>

The usage is detailed in "Create and run a new experiment".
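
An experiment configuration is a JSON file describing the training script, the search space, and the optimization settings. As a minimal illustrative sketch only (the keys follow the Auptimizer examples; the script name, parameter names, and values here are placeholders and may differ from your setup and Auptimizer version):

    {
        "proposer": "random",
        "script": "rosenbrock.py",
        "n_samples": 10,
        "n_parallel": 1,
        "target": "min",
        "resource": "cpu",
        "parameter_config": [
            {"name": "x", "range": [-5, 5], "type": "float"},
            {"name": "y", "range": [-5, 5], "type": "float"}
        ]
    }

Saving this as, say, experiment.json, the experiment is then launched with `python -m aup experiment.json`.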

Additional arguments

Usage: __main__.py [OPTIONS] EXPERIMENT_FILE

  Auptimizer main function for HPO experiment

  Copyright (C) 2018  LG Electronics Inc.

  GPL-3.0 License. This program comes with ABSOLUTELY NO WARRANTY.

  Arguments:
    experiment_file {str} -- Experiment configuration (can be created by
    `python -m aup.init`).

Options:
  --test                         Test one case to verify the code is working
  --user TEXT                    User name for job scheduling
  --aup_folder TEXT              Specify customized aup folder
  --resume TEXT                  Resume from previous task
  --log [debug|info|warn|error]  Log level
  --sleep FLOAT                  Sleep interval to sync updates
  --launch_dashboard             Launch the dashboard together with the
                                 experiment.

  --dashboard_port INTEGER       Port for the dashboard frontend.
  -h, --help                     Show this message and exit.
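
As an illustration of the options above (experiment.json is a placeholder file name and 8080 an arbitrary port choice):

    # Run a single test case to verify the setup, with verbose logging
    python -m aup experiment.json --test --log debug

    # Run the full experiment and launch the dashboard on a chosen port
    python -m aup experiment.json --launch_dashboard --dashboard_port 8080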