aup.dlconvert package

Checkpoint to ONNX

Additional node tensor names are needed to convert a TF checkpoint to ONNX. See dlconvert.to_onnx.convert() for more information.

Example

$ python -m aup.dlconvert.checkpoint_to_onnx --model model_ckpt/ckpt.meta \
   --output model.onnx \
   --input_nodes input:0 \
   --output_nodes output/Softmax:0 \
   [--input_shape 1,224,224,3]
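The node arguments use TensorFlow tensor names of the form node_name:output_index, and --input_shape takes a comma-separated list of dimensions. A minimal sketch of how such values are typically interpreted (these helper names are illustrative only, not part of aup.dlconvert):

```python
def split_tensor_name(tensor_name: str):
    """Split a TF tensor name like 'input:0' into (node_name, output_index)."""
    node, _, index = tensor_name.partition(":")
    return node, int(index) if index else 0

def parse_input_shape(shape_str: str):
    """Parse a comma-separated shape string like '1,224,224,3' into a list of ints."""
    return [int(d) for d in shape_str.split(",")]
```

For example, split_tensor_name("output/Softmax:0") yields ("output/Softmax", 0), and parse_input_shape("1,224,224,3") yields [1, 224, 224, 3].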

Checkpoint to TF ProtoBuf

Requires a checkpoint folder containing a .meta file. Otherwise, save the meta file manually before conversion.

Example

$ python -m aup.dlconvert.checkpoint_to_pb --model model_ckpt/model.meta \
    --output model_frozen.pb \
    --frozen \
    --output_nodes output/Softmax:0
convert(checkpoint_meta_file: str, frozen: bool = False, output_nodes: List[str] = ()) → tensorflow.core.framework.graph_pb2.GraphDef

Convert TF Checkpoint to ProtoBuf

Args:

checkpoint_meta_file (str): checkpoint meta file name
frozen (bool, optional): whether to create a frozen GraphDef. Defaults to False.
output_nodes (List[str], optional): a list of output node names for the frozen graph. Defaults to ().

Returns:

tf.GraphDef: Tensorflow Graph to be written to file

Checkpoint to TFLite

Requires a checkpoint folder containing a .meta file. Otherwise, save the meta file manually before conversion.

Example

$ python -m aup.dlconvert.checkpoint_to_tflite --model model_ckpt/ckpt.meta \
    --output model_tflite.tflite \
    --input_nodes input:0 \
    --output_nodes output/Softmax:0 \
    [--input_shape 1,224,224,3]

Keras to ONNX

Depends on tf2onnx and keras2onnx (TF 1.x support only). tf2onnx must be imported before keras2onnx to resolve a dependency conflict; otherwise importing keras2onnx may raise an error.

Example

$ python -m aup.dlconvert.keras_to_onnx -i model.h5 -o model.onnx
convert(model: str, output: str)

Convert Keras model to ONNX

Args:

model (str): Keras h5 model file path
output (str): output ONNX file path

Keras to ProtoBuf

Example

$ python -m aup.dlconvert.keras_to_pb --model model.h5 \
    --output model.pb \
    --frozen \
    --output_nodes output/Softmax:0
convert(model: str, frozen: bool = False, output_nodes: List[str] = ()) → tensorflow.core.framework.graph_pb2.GraphDef

Convert Keras model to tensorflow graphdef

Args:

model (str): Keras model file name
frozen (bool, optional): whether to freeze the GraphDef. Defaults to False.
output_nodes (List[str], optional): output node names; if empty, the outputs of the Keras model are used. Defaults to ().

Returns:

tf.GraphDef: Tensorflow GraphDef

Keras to TFLite

See dlconvert.to_tflite.setup_tfconverter() for more TFLite control arguments.

Example

$ python -m aup.dlconvert.keras_to_tflite --model model.h5 \
    --output model_keras.tflite \
    [--load rep_data] \
    [--opt default --ops int8 --type int8]
convert(model: str, output: str)

Convert Keras model to tflite model

Args:

model (str): input model file name
output (str): output model file name

model_loader(filename: str) → tensorflow.lite.python.lite.TFLiteConverterV2

TFLite converter loading function for TF 1.x and 2.x

Args:

filename (str): Keras model file

Returns:

lite.TFLiteConverter: TFLite converter to be used

ProtoBuf to ONNX

Input and output node names are needed.

Example

$ python -m aup.dlconvert.pb_to_onnx --model model.pb --output model.onnx \
    --input_nodes input:0 --output_nodes output/Softmax:0 \
    [--input_shape 1,224,224,3]
convert(model: str, output: str, input_nodes: str, output_nodes: str, input_shape: str)

Convert TF ProtoBuf to ONNX model

Args:

model (str): TF ProtoBuf file name
output (str): output ONNX model name
input_nodes (str): model input names
output_nodes (str): model output names
input_shape (str, optional): model input shape; needed if the input dimension is not specified in the model graph

ProtoBuf to TFLite

See dlconvert.to_tflite.setup_tfconverter() for more TFLite control arguments.

Example

$ python -m aup.dlconvert.pb_to_tflite \
    --model model.pb --output model.tflite \
    [--load rep_data] \
    [--opt default --ops int8 --type int8] \
    [--input_shape 1,224,224,3]
find_node_shape(tensor_name: str, graph_def: tensorflow.core.framework.graph_pb2.GraphDef) → List[int]

Find node shape for the given tensor name

Args:

tensor_name (str): name of the tensor
graph_def (tf.compat.v1.GraphDef): TF GraphDef

Raises:

ValueError: When node name is not in the graph

Returns:

List[int]: tensor shape, excluding first (batch) dimension
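The documented contract (look up the node for a tensor name, raise ValueError if absent, and drop the batch dimension) can be sketched as follows, using a plain dict as a stand-in for the GraphDef node map; the real function reads shape attributes from the protobuf, so this is illustrative only:

```python
from typing import Dict, List

def find_node_shape_sketch(tensor_name: str, node_shapes: Dict[str, List[int]]) -> List[int]:
    """Look up a tensor's shape by its node name and drop the batch dimension."""
    node_name = tensor_name.split(":")[0]  # 'input:0' -> 'input'
    if node_name not in node_shapes:
        raise ValueError(f"node {node_name!r} not found in graph")
    return node_shapes[node_name][1:]  # exclude first (batch) dimension
```

For example, looking up "input:0" in {"input": [1, 224, 224, 3]} returns [224, 224, 3].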

model_loader(filename: str) → tensorflow.lite.python.lite.TFLiteConverterV2

Load TF ProtoBuf (for TF v1 and v2)

Args:

filename (str): ProtoBuf file name

Returns:

lite.TFLiteConverter: TFLite converter

verify_input_names(input_names, graph_def)

Check if input_names are correct

verify_output_names(output_names, graph_def)

Check if output_names are correct

PyTorch to Keras

Uses https://github.com/nerox8664/pytorch2keras for the conversion.

Example

$ python -m aup.dlconvert.pytorch_to_keras -i model.pt -o model.h5 \
    --input_shape 1,3,224,224 --net net.py --net_name Net
convert(model: str, output: str, input_shape: List[int], net_path, net_name)

Convert PyTorch model to Keras model

Args:

model (str): PyTorch model file path
output (str): output file name for the Keras model
input_shape (List[int]): tensor shape for the input
net_path (str): Python script that defines the model
net_name (str): PyTorch model class name in the net_path file

PyTorch to ONNX

Uses https://github.com/nerox8664/pytorch2keras for the conversion.

Example

$ python -m aup.dlconvert.pytorch_to_onnx -i model.pt -o model.onnx \
    --input_shape 1,3,224,224 --net net.py --net_name Net
convert(model: str, output: str, input_shape: List[int], net_path: str, net_name: str)

PyTorch to TFLite

Uses https://github.com/nerox8664/pytorch2keras for the conversion.

Example

$ python -m aup.dlconvert.pytorch_to_tflite -i model.pt -o model.tflite \
    --input_shape 1,3,224,224 --net net.py --net_name Net
convert(model: str, output: str, input_shape: List[int], net_path, net_name)

Convert PyTorch model to TFLite model

Args:

model (str): PyTorch model file path
output (str): output file name for the TFLite model
input_shape (List[int]): tensor shape for the input
net_path (str): Python script that defines the model
net_name (str): PyTorch model class name in the net_path file

model_loader(filename: str) → tensorflow.lite.python.lite.TFLiteConverterV2

TFLite converter loading function for TF 1.x and 2.x

Args:

filename (str): Keras model file

Returns:

lite.TFLiteConverter: TFLite converter to be used

SavedModel to ONNX

Note: this conversion is currently not working.

Example

$ python -m aup.dlconvert.savedmodel_to_onnx --model savedModel/ --output model.onnx
convert(model: str, output: str, tag: str, signature: str, concrete_function: str)

Convert TF SavedModel to ONNX (currently only TF2 SavedModels are supported)

Args:

model (str): TF SavedModel folder path
output (str): ONNX output filename
tag (str, optional): tag to use for the SavedModel; default is "serve"
signature (str, optional): signature to use for the SavedModel; default is "serving_default"
concrete_function (str, optional): index of the function signature in __call__ to use instead of signature; default is None

SavedModel to TFLite

Depending on the version of the ops, the conversion may fail.

Example

$ python -m aup.dlconvert.savedmodel_to_tflite --model model/ \
    --output model_keras.tflite \
    [--load rep_data] \
    [--opt default --ops int8 --type int8]
model_loader(foldername: str) → tensorflow.lite.python.lite.TFLiteConverterV2

Function to load model file into TFLiteConverter.

Args:

foldername (str): SavedModel folder name

Returns:

lite.TFLiteConverter: TFLiteConverter to create tflite model

Convert to TF frozen ProtoBuf

to_frozen(sess: tensorflow.python.client.session.Session, output_nodes: List[str], clear_devices: bool = True) → tensorflow.core.framework.graph_pb2.GraphDef

Convert to TF frozen ProtoBuf based on the current TF session. See reference.

Args:

sess (tf.Session): TF session containing the compute graph
output_nodes (List[str]): list of output node names
clear_devices (bool, optional): whether to clear device placement. Defaults to True.

Returns:

tf.GraphDef: frozen GraphDef to write to ProtoBuf

Convert to ONNX

Requires tf2onnx version >= 1.6.0. See the tf2onnx documentation for additional arguments.

convert_onnx(**kwargs)

Convert to TFLite

There are four major control parameters for the TFLite runtime; see setup_tfconverter().

The data-feeding function (data_fun) is loaded via --load, where the argument is the Python filename defining get_data() to generate data for int8 quantization. Combine with the --undefok flag to pass more control arguments.
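As an illustration, a representative-data file passed via --load might look like the following. This is a minimal sketch with assumed details (the batch shape and the exact form in which get_data() yields samples should be checked against dlconvert's source); here it is assumed to yield lists of float32 input arrays for calibration:

```python
# rep_data.py -- illustrative representative-dataset file for int8 quantization
import numpy as np

def get_data(num_samples: int = 10, shape=(1, 224, 224, 3)):
    """Yield calibration batches matching the model's input shape."""
    rng = np.random.default_rng(seed=0)
    for _ in range(num_samples):
        # Random float32 data in [0, 1); replace with real preprocessed samples
        # so the quantization ranges reflect the actual input distribution.
        yield [rng.random(shape, dtype=np.float32)]
```

Real preprocessed images should be used in practice; random data only exercises the mechanics of calibration, not a meaningful value range.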

create_converter(model: str, model_loader: Callable[[str], tensorflow.lite.python.lite.TFLiteConverterV2]) → tensorflow.lite.python.lite.TFLiteConverterV2

Setup the TFLite converter

Args:

model (str): model filename
model_loader (Callable[[str], lite.TFLiteConverter]): function to load the model file and return a TFLiteConverter

Returns:

lite.TFLiteConverter: TFLiteConverter with additional arguments set up.

setup_tfconverter(converter: tensorflow.lite.python.lite.TFLiteConverterV2, dtype: str, opt: str, ops: str, data_fun: Optional[Callable] = None) → tensorflow.lite.python.lite.TFLiteConverterV2

Setup control arguments for TFLiteConverter

Args:

converter (lite.TFLiteConverter): loaded TFLiteConverter.
dtype (str): data type: float, float16, int8, or uint8.
opt (str): optimization: none for float, default for other data types.
ops (str): operation set: tflite, tf, or int8.
data_fun (Callable, optional): data-generating function for int8 quantization. Defaults to None.

Returns:

lite.TFLiteConverter: TFLiteConverter with additional arguments set up.
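The documented defaults imply a simple rule for choosing opt from dtype. A small helper encoding that rule, illustrative only and not part of aup.dlconvert:

```python
# Data types accepted by the --type flag, per the docs above.
VALID_DTYPES = {"float", "float16", "int8", "uint8"}

def default_opt(dtype: str) -> str:
    """Pick the optimization setting: none for float, 'default' otherwise."""
    if dtype not in VALID_DTYPES:
        raise ValueError(f"unsupported dtype: {dtype!r}")
    return "none" if dtype == "float" else "default"
```

For example, default_opt("float") returns "none", while default_opt("int8") returns "default", matching the `--opt default --ops int8 --type int8` combination shown in the examples above.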