modlee.converter module

The converter module holds the Converter class for converting between different formats of a neural network. The supported formats include Torch and ONNX; a short end-to-end example follows the format lists below.

The Torch formats include:

  • Model, the object which contains the forward pass and can be trained

  • Code, the model as text code that can be saved to a file and used to rebuild the model

The ONNX formats include:

  • Graph, the network represented as a graph with layers as nodes

  • Text, the textual description of the graph that is portable and can be rebuilt into a graph
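
Taken together, a typical round trip through these formats might look like the following minimal sketch (the toy model, input shape, and reliance on default arguments are assumptions for illustration):

  import torch
  from modlee.converter import Converter

  converter = Converter()

  # A small example model and a dummy input so the ONNX export can infer tensor sizes
  model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3), torch.nn.ReLU())
  dummy = torch.randn(1, 3, 32, 32)

  onnx_graph = converter.torch2onnx(model, input_dummy=dummy)  # Torch Model -> ONNX Graph
  onnx_text = converter.onnx2onnx_text(onnx_graph)             # ONNX Graph -> ONNX Text
  torch_code = converter.onnx_text2code(onnx_text)             # ONNX Text  -> Torch Code
  rebuilt_model = converter.code2torch(torch_code)             # Torch Code -> Torch Model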

class modlee.converter.Converter[source]

Bases: object

Base object that holds conversion functions.

cast_gather_layer(input_str)[source]

Cast variables in a Gather ONNX layer to the required types.

Parameters:

input_str – The string of the Gather layer.

Returns:

The string with types properly cast.

cast_tile_layer(input_str)[source]

Cast variables in a Tile ONNX layer to the required types.

Parameters:

input_str – The string of the Tile layer.

Returns:

The layer string with types properly cast.

code2torch(torch_code: str, tmp_model_path='./.tmp_model.py', *args, **kwargs)

Convert Torch Code into a Torch Model

Parameters:
  • torch_code – The Torch Code, either as a file path or the raw code text

  • tmp_model_path – The path to a cache of the code, as a *.py file

Returns:

The Torch model.

code_path2torch(torch_file)

Convert a Torch File into a Torch Model

Parameters:

torch_file – The Torch Code as a path

Return torch_model:

The Torch Model

convert_float(onnx_text)[source]
convert_onnx116(onnx_text)[source]

Convert ONNX graph text generated with ONNX 1.16, which requires modifications to be parseable by onnx.parser.

Parameters:

onnx_text – The ONNX graph text to convert.

Returns:

The ONNX graph text converted to a format parseable by onnx.parser.

dict2code(kwarg_dict)[source]

Converts a dictionary into a code string that, when called with exec(), rebuilds the dictionary.

Parameters:

kwarg_dict – The dictionary to convert.

Returns:

A code string to create the dictionary.
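
For example (a minimal sketch; the exact formatting of the returned code string is an implementation detail):

  from modlee.converter import Converter

  converter = Converter()
  kwarg_dict = {"kernel_size": 3, "stride": 1}
  code_str = converter.dict2code(kwarg_dict)
  # code_str is Python source that, when executed, rebuilds the dictionary,
  # which is how generated model code can re-create layer keyword arguments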

filter_node(x)[source]

Returns whether this is a non-layer node that should be filtered out. Checks for substrings in the node name that indicate that the node is not a layer.

Parameters:

x – The NetworkX node to check.

Returns:

Whether the node contains a substring indicating that it should be filtered as a non-layer.

format_onnx_text(onnx_text)[source]

Check and format the ONNX Text

Parameters:

onnx_text – The ONNX Text

Return onnx_text:

The formatted ONNX Text

get_attr_name(input_str)[source]

Get the variable name of an object attribute from a string.

The input string should be a line from the forward pass of a model converted with onnx2torch, e.g. model_conv1_conv = getattr(self, "model/conv1/Conv")(input_1); input_1 = None, from which this function will retrieve "model/conv1/Conv".

Parameters:

input_str – The input string from which to get the attribute from.

Returns:

The attribute name.

get_forward(model) → str[source]

Get a model’s forward() pass as code.

Parameters:

model – The model.

Returns:

The forward() code.

get_init(model) → str[source]

Get the code for a model’s __init__() constructor function.

Parameters:

model – The model.

Returns:

The model’s __init__() function as a string.

get_init_module_state_dict_str(module_name_str: str, state_dict_str: str, indent_level=2)[source]

Return a string that, when called with exec(), will initialize the torch module’s state dictionary. The module will likely be a freshly initialized module with an empty state dict. This uses register_buffer to add unexpected keys to the state dict.

Parameters:
  • module_name_str – The variable name of the module to be initialized, as a string. The module must have already been initialized.

  • state_dict_str – A string representation of the state_dict.

  • indent_level – The number of indents (4 spaces each) to prepend to each line, defaults to 2 for use in a class function.

Returns:

Code text to initialize the module’s state dict.

get_inner_string(input_str, _start, _end, return_only_single_value=True)[source]

Get the inner string between a start and end sequence. If there are multiple potential inner strings (e.g. if there are multiple instances of the start and/or end sequences), returns the longest valid substring.

Parameters:
  • input_str – The string to extract from.

  • _start – The start of the string sequence.

  • _end – The end of the string sequence.

  • return_only_single_value – Whether to return only the first whitespace-split item in the found sequence, defaults to True.

Returns:

The inner string, or None if the _start and _end sequences could not be found.
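
A small sketch of how this helper can be used (the example line and delimiters are illustrative, not taken from the library's tests):

  from modlee.converter import Converter

  converter = Converter()
  line = 'model_fc = getattr(self, "model/fc/Gemm")(flatten_1)'
  inner = converter.get_inner_string(line, _start='getattr(self, "', _end='")')
  # inner is expected to be 'model/fc/Gemm'; None is returned if the
  # _start and _end sequences cannot be found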

get_model_attr_on_line(model, line)[source]

Get attributes as {attribute_name : attribute_object} pairs

Parameters:
  • model – The model to retrieve attributes from.

  • line – The line that has the attribute to retrieve.

Returns:

A dictionary of the {attribute_name : attribute_object}.

get_model_attrs_in_forward(model)[source]

Get all of the attributes from a model’s forward pass.

Parameters:

model – The model to get attributes from.

Returns:

A dictionary of { attribute_name : attribute_object } pairs

get_model_code(model) → str[source]

Retrieve the model’s string representation, which includes its constructor (__init__) and forward pass. The code, when imported as a module or called with exec(), will rebuild the model object.

Parameters:

model – The model.

Returns:

The code for the entire model module.
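
For instance (a sketch; assumes the model's class is defined in an importable source file so its code can be introspected):

  import torch
  from modlee.converter import Converter

  class TinyNet(torch.nn.Module):
      def __init__(self):
          super().__init__()
          self.fc = torch.nn.Linear(4, 2)

      def forward(self, x):
          return self.fc(x)

  converter = Converter()
  model_code = converter.get_model_code(TinyNet())
  converter.save_code(model_code, "tiny_net.py")  # persist the code so the model can be rebuilt later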

get_params_for_attr(model_attr)[source]

Get the parameters required to initialize an attribute object, e.g. convolutional filter sizes / strides, captured directly from the object.

Parameters:

model_attr – The attribute object to get parameters for.

Returns:

The parameters to reinitialize the object in the same state.
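
A brief sketch (which parameters are returned depends on the attribute type; the comment describes the expected behavior):

  import torch
  from modlee.converter import Converter

  converter = Converter()
  conv = torch.nn.Conv2d(3, 8, kernel_size=3, stride=2)
  params = converter.get_params_for_attr(conv)
  # params should capture the arguments (e.g. kernel_size, stride) needed to
  # re-create an equivalent Conv2d in the same state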

get_type_string(obj) → str | None[source]

Get the type of an object as a string. TODO - refactor this as a regex

Parameters:

obj – The object.

Returns:

The type of the object as a string.

index_nx(onnx_nx)[source]

Index an ONNX NetworkX graph by replacing the node labels with their indices.

Parameters:

onnx_nx – The ONNX NetworkX to index.

Returns:

The indexed ONNX NetworkX graph. The function modifies the graph in place, so using the return value should be unnecessary.

init_graph_tensors(onnx_gs_graph, tensor_init_fn=functools.partial(numpy.random.normal, scale=0.01))[source]

Initialize the graph's tensors in place (you do not need to use the return value). The input should be an ONNX graph exported from torch without parameters, i.e. torch.onnx.export(…, export_params=False).

Identity layers get special treatment; they must be initialized for onnx2torch, but their target shape is nested within their inputs.

Example usage:

  import onnx
  import onnx_graphsurgeon as gs

  graph = gs.import_onnx(onnx.load("path/to/uninitialized/model.onnx"))
  Converter().init_graph_tensors(graph)

Parameters:
  • onnx_gs_graph – The ONNX GraphSurgeon Graph with uninitialized weights.

  • tensor_init_fn – The initialization function to use for the graph’s weights, defaults to np.random.normal(scale=0.01)

Returns:

The ONNX GraphSurgeon Graph with tensors initialized. The function modifies the graph in place, so assigning the return to the graph is not necessary.

init_onnx_params(onnx_graph)[source]

Initialize a parameterless ONNX Graph

Parameters:

onnx_graph – The ONNX Graph.

Return onnx_graph:

The ONNX Graph with initialized parameters.

init_onnx_tensors(onnx_graph)[source]

Initialize the tensors of an ONNX Graph

Parameters:

onnx_graph – The ONNX Graph

Return onnx_graph:

The ONNX Graph with initialized tensors

onnx2onnx_gs(onnx_graph)

Initialize the tensors of an ONNX Graph

Parameters:

onnx_graph – The ONNX Graph

Return onnx_graph:

The ONNX Graph with initialized tensors

onnx2onnx_text(onnx_graph, remove_identity=False)

Convert an ONNX Graph to ONNX Text

Parameters:
  • onnx_graph – The ONNX Graph to convert

  • remove_identity – Whether to remove Identity layers in the output text

Returns:

The ONNX Text representation

onnx2torch(onnx_graph, *args, **kwargs)

Convert an ONNX Graph to a Torch Model.

Parameters:

onnx_graph – The ONNX Graph object.

Return torch_model:

The Torch Model.

onnx_file2onnx_graph(onnx_file)[source]

Convert an ONNX File to an ONNX Graph.

Parameters:

onnx_file – The ONNX File as a path.

Return onnx_graph:

The ONNX Graph as text.

onnx_file2torch(onnx_file)

Convert an ONNX File to a Torch Model

Parameters:

onnx_file – The ONNX File as a path

Return torch_model:

The Torch Model

onnx_file2torch_model(onnx_file)[source]

Convert an ONNX File to a Torch Model

Parameters:

onnx_file – The ONNX File as a path

Return torch_model:

The Torch Model

onnx_graph2onnx_nx(onnx_graph, prune=True)[source]

Convert an ONNX graph to ONNX NetworkX.

Parameters:
  • onnx_graph – The ONNX graph.

  • prune – Whether to prune the NetworkX to just layer nodes, defaults to True

Returns:

The ONNX NetworkX graph.
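
For example (a sketch; the toy model and input shape are assumptions for illustration):

  import torch
  from modlee.converter import Converter

  converter = Converter()
  model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3), torch.nn.ReLU())
  onnx_graph = converter.torch_model2onnx_graph(model, input_dummy=torch.randn(1, 3, 32, 32))
  onnx_nx = converter.onnx_graph2onnx_nx(onnx_graph, prune=True)
  print(onnx_nx.number_of_nodes())  # with prune=True, only layer nodes remain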

onnx_graph2onnx_text(onnx_graph, remove_identity=False)[source]

Convert an ONNX Graph to ONNX Text

Parameters:
  • onnx_graph – The ONNX Graph to convert

  • remove_identity – Whether to remove Identity layers in the output text

Returns:

The ONNX Text representation

onnx_graph2torch_model(onnx_graph, *args, **kwargs)[source]

Convert an ONNX Graph to a Torch Model.

Parameters:

onnx_graph – The ONNX Graph object.

Return torch_model:

The Torch Model.

onnx_parameterless2onnx(onnx_graph)

Initialize a parameterless ONNX Graph

Parameters:

onnx_graph – The ONNX Graph.

Return onnx_graph:

The ONNX Graph with initialized parameters.

onnx_path2torch(onnx_file, *args, **kwargs)

Convert an ONNX File to a Torch Model.

Parameters:

onnx_file – The ONNX File as a path.

Return torch_model:

The Torch Model.

onnx_text2code(onnx_text)

Convert ONNX Text to Torch Code

Parameters:

onnx_text – The ONNX Text

Return torch_code:

The Torch Code

onnx_text2onnx(onnx_text)

Convert ONNX Text to an ONNX Graph.

Parameters:

onnx_text – The ONNX Text

Return onnx_graph:

The ONNX Graph

onnx_text2onnx_graph(onnx_text)[source]

Convert ONNX Text to an ONNX Graph.

Parameters:

onnx_text – The ONNX Text

Return onnx_graph:

The ONNX Graph

onnx_text2torch(onnx_text: bytes)

Convert ONNX Text to Torch Model.

Parameters:

onnx_text – The ONNX Text as bytes.

Returns:

The Torch Model.

onnx_text2torch_code(onnx_text)[source]

Convert ONNX Text to Torch Code

Parameters:

onnx_text – The ONNX Text

Return torch_code:

The Torch Code

onnx_text2torch_model(onnx_text: bytes)[source]

Convert ONNX Text to Torch Model.

Parameters:

onnx_text – The ONNX Text as bytes.

Returns:

The Torch Model.

onnx_text_file2onnx(onnx_file)

Convert an ONNX File to an ONNX Graph.

Parameters:

onnx_file – The ONNX File as a path.

Return onnx_graph:

The ONNX Graph as text.

onnx_uninit2torch(onnx_graph)[source]

Convert an uninitialized ONNX Graph to a Torch Model

Parameters:

onnx_graph – The uninitialized ONNX Graph

Return torch_model:

The Torch Model

prune_onnx_nx(onnx_nx)[source]

Prune an ONNX NetworkX graph to just the layer nodes.

Parameters:

onnx_nx – The ONNX NetworkX graph to prune.

Returns:

The pruned ONNX NetworkX graph.

refactor_bool_layer(input_str)[source]

Refactor boolean layers to the correct number of input elements. The onnx.printer.to_text() function seems to remove inputs that the parser would use. For example, an int layer is defined like:

constant_output_0006 = Constant <value = int64[4] {3,12,-1,-1}> ()

This function rewrites boolean constants accordingly, e.g.:

From: constant_output_0005 = Constant <value = bool[1,1,3,3]___> ()

To: constant_output_0005 = Constant <value = bool[1,1,3,3] {0,0,0,0,0,0,0,0,0}> ()

Parameters:

input_str – The string with boolean layers.

Returns:

The string with boolean layers properly refactored.

refactor_inf(input_str, large_value='99999999')[source]

Replace ‘inf’ with a large value because the parser cannot handle infs

Parameters:
  • input_str – The string with ‘inf’.

  • large_value – A suitably large value to replace ‘inf’ with, defaults to “99999999”.

Returns:

The string with ‘inf’ refactored with a large value.
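
A small sketch (the ONNX text line is illustrative):

  from modlee.converter import Converter

  converter = Converter()
  text = "constant_output_0007 = Constant <value = float[1] {inf}> ()"
  fixed = converter.refactor_inf(text)
  # 'inf' is replaced with the default large_value '99999999' so that
  # onnx.parser can handle the line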

refactor_leading_number(input_str)[source]

Refactor variables with leading numbers which are not parseable, e.g. 0_model_fc_weight -> model_0_fc_weight

Parameters:

input_str – The string with variables with leading numbers.

Returns:

The string with number-lead variables refactored.

remove_identity(onnx_text)[source]

Remove identity layers in ONNX Text.

Parameters:

onnx_text – The ONNX Text.

Returns:

The ONNX Text stripped of identity layers.

save_code(torch_code, filepath)[source]

Save a PyTorch model’s string representation as a .py file.

Parameters:
  • torch_code – The model code (string representation) to save.

  • filepath – The path to where the model should be saved, should end in .py.

save_torch(torch_model, filepath)[source]

Save a PyTorch model’s code representation as a .py file.

Parameters:
  • torch_model – The Torch Model to save.

  • filepath – The path to where the model should be saved, should end in .py.
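
For example (a sketch; assumes the converter can derive the code representation of this toy model without additional arguments, and the file path is arbitrary):

  import torch
  from modlee.converter import Converter

  converter = Converter()
  model = torch.nn.Sequential(torch.nn.Linear(4, 2))
  converter.save_torch(model, "./saved_model.py")          # write the model's code representation to a .py file
  rebuilt = converter.code_path2torch("./saved_model.py")  # rebuild a Torch Model from that file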

tensor2init_code(input_tensor, tensor_type: str = None)[source]

Converts a monovalue tensor (len(set(tensor))==1) to a string representation of its initialization. This minifies potentially large tensors from their explicit definition to simply 'tensor.ones((x,y))*value'. If tensor_type is provided, this function will force-convert a non-uniform tensor to an initialization string for a tensor of that type (e.g. 'randn', 'ones', 'zeros').

Parameters:
  • input_tensor – The tensor to convert

  • tensor_type – The tensor type to convert to, from [‘randn’,’zeros’,’ones’]. Will try to auto-detect if not provided.

Returns:

A code string to create the tensor.
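
A brief sketch (assumes a torch tensor is accepted; the exact returned string is an implementation detail, so the comment only describes its shape):

  import torch
  from modlee.converter import Converter

  converter = Converter()
  mono = torch.ones((3, 4)) * 0.5  # monovalue tensor: every element equals 0.5
  init_code = converter.tensor2init_code(mono)
  # init_code is expected to be a compact initialization expression
  # (roughly ones((3, 4)) * 0.5) rather than an explicit listing of all twelve values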

torch2code(torch_model, *args, **kwargs)

Convert a Torch Model to Torch Code.

Parameters:

torch_model – The Torch Model to convert.

Return torch_code:

The Torch Code.

torch2onnx(torch_model, input_dummy=None, tmp_onnx_path='./.tmp_model.onnx', modality=None, **kwargs)

Convert a Torch Model to ONNX Graph. Note that to reduce the size of the output graph, we set export_params=False. This and other parameters can be passed as **kwargs to torch.onnx.export.

Parameters:
  • torch_model – The Torch Model to convert.

  • input_dummy – A tensor input to the Torch Model, required for the ONNX parser to determine tensor sizes.

  • tmp_onnx_path – A placeholder location to save the ONNX graph.

Returns:

The ONNX Graph.

torch2onnx_text(torch_model, *args, **kwargs)

Convert a Torch Model to ONNX Text

Parameters:

torch_model – The Torch Model

Return onnx_text:

The ONNX Text

torch2torch(torch_model, *args, **kwargs)

Convert a PyTorch model into an equivalent PyTorch model, represented as a graph of layers and operations: PyTorch -> ONNX -> Code -> ONNX -> PyTorch.

Parameters:

torch_model – The Torch model, created normally through code.

Returns:

The Torch model, after it has been graphized through ONNX.

torch2torch_graph(torch_model, *args, **kwargs)

Convert a PyTorch model into an equivalent PyTorch model, represented as a graph of layers and operations: PyTorch -> ONNX -> Code -> ONNX -> PyTorch.

Parameters:

torch_model – The Torch model, created normally through code.

Returns:

The Torch model, after it has been graphized through ONNX.

torch_code2torch_model(torch_code: str, tmp_model_path='./.tmp_model.py', *args, **kwargs)[source]

Convert Torch Code into a Torch Model

Parameters:
  • torch_code – The Torch Code, either as a file path or the raw code text

  • tmp_model_path – The path to a cache of the code, as a *.py file

Returns:

The Torch model.
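
For example (a sketch; torch_code stands for code text produced by torch_model2torch_code, and the .py path is hypothetical):

  from modlee.converter import Converter

  converter = Converter()
  rebuilt = converter.torch_code2torch_model(torch_code)       # from raw code text
  rebuilt = converter.torch_code2torch_model("./my_model.py")  # or from a path to a .py file with the same code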

torch_file2torch_model(torch_file)[source]

Convert a Torch File into a Torch Model

Parameters:

torch_file – The Torch Code as a path

Return torch_model:

The Torch Model

torch_graph2code(model) → str

Retrieve the model’s string representation, which includes its constructor (__init__) and forward pass. The code, when imported as a module or called with exec(), will rebuild the model object.

Parameters:

model – The model.

Returns:

The code for the entire model module.

torch_model2onnx_graph(torch_model, input_dummy=None, tmp_onnx_path='./.tmp_model.onnx', modality=None, **kwargs)[source]

Convert a Torch Model to ONNX Graph. Note that to reduce the size of the output graph, we set export_params=False. This and other parameters can be passed as **kwargs to torch.onnx.export.

Parameters:
  • torch_model – The Torch Model to convert.

  • input_dummy – A tensor input to the Torch Model, required for the ONNX parser to determine tensor sizes.

  • tmp_onnx_path – A placeholder location to save the ONNX graph.

Returns:

The ONNX Graph.
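
For example, extra keyword arguments are forwarded to torch.onnx.export (a sketch; opset_version is a standard torch.onnx.export argument, and the toy model and input shape are assumptions):

  import torch
  from modlee.converter import Converter

  converter = Converter()
  model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3), torch.nn.ReLU())
  onnx_graph = converter.torch_model2onnx_graph(
      model,
      input_dummy=torch.randn(1, 3, 32, 32),
      opset_version=17,  # forwarded to torch.onnx.export along with other **kwargs
  )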

torch_model2onnx_text(torch_model, *args, **kwargs)[source]

Convert a Torch Model to ONNX Text

Parameters:

torch_model – The Torch Model

Return onnx_text:

The ONNX Text

torch_model2torch_code(torch_model, *args, **kwargs)[source]

Convert a Torch Model to Torch Code.

Parameters:

torch_model – The Torch Model to convert.

Return torch_code:

The Torch Code.

torch_model2torch_model(torch_model, *args, **kwargs)[source]

Convert a PyTorch model into an equivalent PyTorch model, represented as a graph of layers and operations: PyTorch -> ONNX -> Code -> ONNX -> PyTorch.

Parameters:

torch_model – The Torch model, created normally through code.

Returns:

The Torch model, after it has been graphized through ONNX.
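
For example (a sketch; assumes the converter can infer a dummy input for this toy model during the intermediate ONNX export):

  import torch
  from modlee.converter import Converter

  converter = Converter()
  model = torch.nn.Sequential(torch.nn.Conv2d(3, 8, kernel_size=3), torch.nn.ReLU())
  graphized = converter.torch_model2torch_model(model)
  # graphized has the same architecture as model, now expressed as a graph of
  # layers and operations after the PyTorch -> ONNX -> Code -> ONNX -> PyTorch round trip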