OpenCV 4.5.3
cv::dnn::Net Class Reference
This class allows creating and manipulating comprehensive artificial neural networks. [More...]
#include <dnn.hpp>
Public Types | |
typedef DictValue | LayerId |
Container for strings and integers. | |
Public Member Functions | |
CV_WRAP | Net () |
Default constructor. | |
CV_WRAP | ~Net () |
Destructor frees the net only if there aren't references to the net anymore. | |
CV_WRAP bool | empty () const |
CV_WRAP String | dump () |
Dump net to String. [More...] | |
CV_WRAP void | dumpToFile (const String &path) |
Dump net structure, hyperparameters, backend, target and fusion to a dot file. [More...] | |
int | addLayer (const String &name, const String &type, LayerParams ¶ms) |
Adds a new layer to the net. [More...] | |
int | addLayerToPrev (const String &name, const String &type, LayerParams ¶ms) |
Adds a new layer and connects its first input to the first output of the previously added layer. [More...] | |
CV_WRAP int | getLayerId (const String &layer) |
Converts string name of the layer to the integer identifier. [More...] | |
CV_WRAP std::vector< String > | getLayerNames () const |
CV_WRAP Ptr< Layer > | getLayer (LayerId layerId) |
Returns a pointer to the layer with the specified id or name which the network uses. | |
std::vector< Ptr< Layer > > | getLayerInputs (LayerId layerId) |
Returns pointers to input layers of a specific layer. | |
CV_WRAP void | connect (String outPin, String inpPin) |
Connects output of the first layer to input of the second layer. [More...] | |
void | connect (int outLayerId, int outNum, int inpLayerId, int inpNum) |
Connects the #outNum output of the first layer to the #inNum input of the second layer. [More...] | |
CV_WRAP void | setInputsNames (const std::vector< String > &inputBlobNames) |
Sets output names of the network input pseudo layer. [More...] | |
CV_WRAP void | setInputShape (const String &inputName, const MatShape &shape) |
Specify shape of network input. | |
CV_WRAP Mat | forward (const String &outputName=String()) |
Runs forward pass to compute output of layer with name outputName. [More...] | |
CV_WRAP AsyncArray | forwardAsync (const String &outputName=String()) |
Runs forward pass to compute output of layer with name outputName. [More...] | |
CV_WRAP void | forward (OutputArrayOfArrays outputBlobs, const String &outputName=String()) |
Runs forward pass to compute output of layer with name outputName. [More...] | |
CV_WRAP void | forward (OutputArrayOfArrays outputBlobs, const std::vector< String > &outBlobNames) |
Runs forward pass to compute outputs of layers listed in outBlobNames. [More...] | |
CV_WRAP_AS(forwardAndRetrieve) void | forward (CV_OUT std::vector< std::vector< Mat > > &outputBlobs, const std::vector< String > &outBlobNames) |
Runs forward pass to compute outputs of layers listed in outBlobNames. [More...] | |
CV_WRAP void | setHalideScheduler (const String &scheduler) |
Compile Halide layers. [More...] | |
CV_WRAP void | setPreferableBackend (int backendId) |
Asks the network to use a specific computation backend where it is supported. [More...] | |
CV_WRAP void | setPreferableTarget (int targetId) |
Asks the network to make computations on a specific target device. [More...] | |
CV_WRAP void | setInput (InputArray blob, const String &name="", double scalefactor=1.0, const Scalar &mean=Scalar()) |
Sets the new input value for the network. [More...] | |
CV_WRAP void | setParam (LayerId layer, int numParam, const Mat &blob) |
Sets the new value for the learned param of the layer. [More...] | |
CV_WRAP Mat | getParam (LayerId layer, int numParam=0) |
Returns parameter blob of the layer. [More...] | |
CV_WRAP std::vector< int > | getUnconnectedOutLayers () const |
Returns indexes of layers with unconnected outputs. | |
CV_WRAP std::vector< String > | getUnconnectedOutLayersNames () const |
Returns names of layers with unconnected outputs. | |
CV_WRAP void | getLayersShapes (const std::vector< MatShape > &netInputShapes, CV_OUT std::vector< int > &layersIds, CV_OUT std::vector< std::vector< MatShape > > &inLayersShapes, CV_OUT std::vector< std::vector< MatShape > > &outLayersShapes) const |
Returns input and output shapes for all layers in the loaded model; preliminary inferencing isn't necessary. [More...] | |
CV_WRAP void | getLayersShapes (const MatShape &netInputShape, CV_OUT std::vector< int > &layersIds, CV_OUT std::vector< std::vector< MatShape > > &inLayersShapes, CV_OUT std::vector< std::vector< MatShape > > &outLayersShapes) const |
void | getLayerShapes (const MatShape &netInputShape, const int layerId, CV_OUT std::vector< MatShape > &inLayerShapes, CV_OUT std::vector< MatShape > &outLayerShapes) const |
Returns input and output shapes for the layer with the specified id in the loaded model; preliminary inferencing isn't necessary. [More...] | |
void | getLayerShapes (const std::vector< MatShape > &netInputShapes, const int layerId, CV_OUT std::vector< MatShape > &inLayerShapes, CV_OUT std::vector< MatShape > &outLayerShapes) const |
CV_WRAP int64 | getFLOPS (const std::vector< MatShape > &netInputShapes) const |
Computes FLOPs for the whole loaded model with the specified input shapes. [More...] | |
CV_WRAP int64 | getFLOPS (const MatShape &netInputShape) const |
CV_WRAP int64 | getFLOPS (const int layerId, const std::vector< MatShape > &netInputShapes) const |
CV_WRAP int64 | getFLOPS (const int layerId, const MatShape &netInputShape) const |
CV_WRAP void | getLayerTypes (CV_OUT std::vector< String > &layersTypes) const |
Returns the list of layer types used in the model. [More...] | |
CV_WRAP int | getLayersCount (const String &layerType) const |
Returns the count of layers of the specified type. [More...] | |
void | getMemoryConsumption (const std::vector< MatShape > &netInputShapes, CV_OUT size_t &weights, CV_OUT size_t &blobs) const |
Computes the number of bytes required to store all weights and intermediate blobs for the model. [More...] | |
CV_WRAP void | getMemoryConsumption (const MatShape &netInputShape, CV_OUT size_t &weights, CV_OUT size_t &blobs) const |
CV_WRAP void | getMemoryConsumption (const int layerId, const std::vector< MatShape > &netInputShapes, CV_OUT size_t &weights, CV_OUT size_t &blobs) const |
CV_WRAP void | getMemoryConsumption (const int layerId, const MatShape &netInputShape, CV_OUT size_t &weights, CV_OUT size_t &blobs) const |
void | getMemoryConsumption (const std::vector< MatShape > &netInputShapes, CV_OUT std::vector< int > &layerIds, CV_OUT std::vector< size_t > &weights, CV_OUT std::vector< size_t > &blobs) const |
Computes the number of bytes required to store all weights and intermediate blobs for each layer. [More...] | |
void | getMemoryConsumption (const MatShape &netInputShape, CV_OUT std::vector< int > &layerIds, CV_OUT std::vector< size_t > &weights, CV_OUT std::vector< size_t > &blobs) const |
CV_WRAP void | enableFusion (bool fusion) |
Enables or disables layer fusion in the network. [More...] | |
CV_WRAP int64 | getPerfProfile (CV_OUT std::vector< double > &timings) |
Returns overall time for inference and timings (in ticks) for layers. [More...] | |
Static Public Member Functions | |
static CV_WRAP Net | readFromModelOptimizer (const String &xml, const String &bin) |
Create a network from Intel's Model Optimizer intermediate representation (IR). [More...] | |
static CV_WRAP Net | readFromModelOptimizer (const std::vector< uchar > &bufferModelConfig, const std::vector< uchar > &bufferWeights) |
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR). [More...] | |
static Net | readFromModelOptimizer (const uchar *bufferModelConfigPtr, size_t bufferModelConfigSize, const uchar *bufferWeightsPtr, size_t bufferWeightsSize) |
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR). [More...] | |
This class allows creating and manipulating comprehensive artificial neural networks.
A neural network is represented as a directed acyclic graph (DAG), whose vertices are Layer instances and whose edges specify the relationships between layer inputs and outputs.
Each network layer has a unique integer id and a unique string name inside its network. LayerId can store either a layer name or a layer id.
This class supports reference counting of its instances, i.e. copies point to the same instance.
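A minimal usage sketch (assumes a model loaded with cv::dnn::readNetFromONNX from the dnn module; "model.onnx" and "image.jpg" are placeholder paths):

    #include <opencv2/dnn.hpp>
    #include <opencv2/imgcodecs.hpp>

    using namespace cv;
    using namespace cv::dnn;

    int main()
    {
        Net net = readNetFromONNX("model.onnx");    // placeholder model file
        net.setPreferableBackend(DNN_BACKEND_OPENCV);
        net.setPreferableTarget(DNN_TARGET_CPU);

        Mat img  = imread("image.jpg");             // placeholder image file
        Mat blob = blobFromImage(img, 1.0 / 255.0, Size(224, 224));
        net.setInput(blob);                         // feeds the network input pseudo layer
        Mat prob = net.forward();                   // runs forward pass for the whole network
        return 0;
    }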
int cv::dnn::Net::addLayer | ( | const String & | name, |
const String & | type, | ||
LayerParams & | params | ||
) |
Adds a new layer to the net.
name | unique name of the added layer. |
type | typename of the added layer (the type must be registered in LayerRegister). |
params | parameters which will be used to initialize the created layer. |
Returns the unique identifier of the created layer, or -1 if a failure happens.
int cv::dnn::Net::addLayerToPrev | ( | const String & | name, |
const String & | type, | ||
LayerParams & | params | ||
) |
Adds a new layer and connects its first input to the first output of the previously added layer.
void cv::dnn::Net::connect | ( | int | outLayerId, |
int | outNum, | ||
int | inpLayerId, | ||
int | inpNum | ||
) |
Connects the #outNum output of the first layer to the #inNum input of the second layer.
outLayerId | identifier of the first layer |
outNum | number of the first layer output |
inpLayerId | identifier of the second layer |
inpNum | number of the second layer input |
CV_WRAP void cv::dnn::Net::connect | ( | String | outPin, |
String | inpPin | ||
) |
Connects output of the first layer to input of the second layer.
outPin | descriptor of the first layer output. |
inpPin | descriptor of the second layer input. |
Descriptors have the following template <layer_name>[.input_number]:
- the first part of the template, layer_name, is the string name of the added layer. If this part is empty then the network input pseudo layer will be used;
- the second, optional part of the template, input_number, is either the number of the layer input or its label. If this part is omitted then the first layer input will be used.
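A small, hypothetical sketch of building a graph by hand with setInputsNames, addLayer and both connect overloads ("ReLU" is a registered dnn layer type; all names are illustrative):

    using namespace cv;
    using namespace cv::dnn;

    Net net;
    net.setInputsNames({"data"});             // label the output of the input pseudo layer

    LayerParams lp;                           // ReLU needs no parameters or weight blobs
    int reluId = net.addLayer("relu1", "ReLU", lp);
    net.connect("data", "relu1");             // pin form: <layer_name>[.input_number]
    // net.connect(0, 0, reluId, 0);          // equivalent integer form (input pseudo layer has id 0)

    Mat input = (Mat_<float>(1, 4) << -1.f, 2.f, -3.f, 4.f);
    net.setInput(input, "data");
    Mat out = net.forward("relu1");           // expected: [0, 2, 0, 4]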
CV_WRAP String cv::dnn::Net::dump | ( | ) |
Dump net to String.
CV_WRAP void cv::dnn::Net::dumpToFile | ( | const String & | path | ) |
Dump net structure, hyperparameters, backend, target and fusion to a dot file.
path | path to output file with .dot extension |
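For example (a sketch continuing with a net object; the output path is a placeholder and <iostream> is assumed):

    std::cout << net.dump() << std::endl;     // text summary of layers, backend and target
    net.dumpToFile("net.dot");                // render e.g. with: dot -Tpng net.dot -o net.png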
CV_WRAP bool cv::dnn::Net::empty | ( | ) | const |
Returns true if there are no layers in the network.
CV_WRAP void cv::dnn::Net::enableFusion | ( | bool | fusion | ) |
Enables or disables layer fusion in the network.
fusion | true to enable the fusion, false to disable. The fusion is enabled by default. |
CV_WRAP Mat cv::dnn::Net::forward | ( | const String & | outputName = String() | ) |
Runs forward pass to compute the output of the layer with name outputName.
outputName | name of the layer whose output is needed |
By default runs forward pass for the whole network.
Returns a blob for the first output of the specified layer.
CV_WRAP void cv::dnn::Net::forward | ( | OutputArrayOfArrays | outputBlobs, |
const std::vector< String > & | outBlobNames | ||
) |
Runs forward pass to compute outputs of layers listed in outBlobNames.
outputBlobs | contains blobs for first outputs of specified layers. |
outBlobNames | names of the layers whose outputs are needed |
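A common pattern with detection models is to request all unconnected outputs at once (a sketch; assumes a loaded net whose input has already been set):

    std::vector<String> outNames = net.getUnconnectedOutLayersNames();
    std::vector<Mat> outs;
    net.forward(outs, outNames);              // one Mat per requested output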
CV_WRAP void cv::dnn::Net::forward | ( | OutputArrayOfArrays | outputBlobs, |
const String & | outputName = String() |
||
) |
Runs forward pass to compute the output of the layer with name outputName.
outputBlobs | contains all output blobs for the specified layer. |
outputName | name of the layer whose output is needed |
If outputName is empty, runs forward pass for the whole network.
CV_WRAP AsyncArray cv::dnn::Net::forwardAsync | ( | const String & | outputName = String() | ) |
Runs forward pass to compute the output of the layer with name outputName.
outputName | name of the layer whose output is needed |
By default runs forward pass for the whole network.
This is an asynchronous version of forward(const String&). dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.
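A sketch of the asynchronous call (only works when OpenCV is built with the Inference Engine backend; blob is assumed to be a prepared input blob):

    net.setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE);
    net.setInput(blob);
    AsyncArray futureResult = net.forwardAsync();
    Mat result;
    futureResult.get(result);                 // blocks until the result is ready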
CV_WRAP int64 cv::dnn::Net::getFLOPS | ( | const int | layerId, |
const MatShape & | netInputShape | ||
) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
CV_WRAP int64 cv::dnn::Net::getFLOPS | ( | const int | layerId, |
const std::vector< MatShape > & | netInputShapes | ||
) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
CV_WRAP int64 cv::dnn::Net::getFLOPS | ( | const MatShape & | netInputShape | ) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
CV_WRAP int64 cv::dnn::Net::getFLOPS | ( | const std::vector< MatShape > & | netInputShapes | ) | const |
Computes FLOPs for the whole loaded model with the specified input shapes.
netInputShapes | vector of shapes for all net inputs. |
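For example, to estimate the cost of a loaded model for a 1x3x224x224 input (shape chosen only for illustration; <iostream> assumed):

    MatShape inputShape = {1, 3, 224, 224};
    int64 flops = net.getFLOPS(inputShape);
    std::cout << "model cost: " << flops * 1e-9 << " GFLOPs" << std::endl;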
CV_WRAP int cv::dnn::Net::getLayerId | ( | const String & | layer | ) |
Converts string name of the layer to the integer identifier.
CV_WRAP int cv::dnn::Net::getLayersCount | ( | const String & | layerType | ) | const |
Returns count of layers of specified type.
layerType | type. |
void cv::dnn::Net::getLayerShapes | ( | const MatShape & | netInputShape, |
const int | layerId, | ||
CV_OUT std::vector< MatShape > & | inLayerShapes, | ||
CV_OUT std::vector< MatShape > & | outLayerShapes | ||
) | const |
Returns input and output shapes for the layer with the specified id in the loaded model; preliminary inferencing isn't necessary.
netInputShape | shape of the input blob in the net input layer. |
layerId | id for layer. |
inLayerShapes | output parameter for input layers shapes; order is the same as in layersIds |
outLayerShapes | output parameter for output layers shapes; order is the same as in layersIds |
void cv::dnn::Net::getLayerShapes | ( | const std::vector< MatShape > & | netInputShapes, |
const int | layerId, | ||
CV_OUT std::vector< MatShape > & | inLayerShapes, | ||
CV_OUT std::vector< MatShape > & | outLayerShapes | ||
) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
CV_WRAP void cv::dnn::Net::getLayersShapes | ( | const MatShape & | netInputShape, |
CV_OUT std::vector< int > & | layersIds, | ||
CV_OUT std::vector< std::vector< MatShape > > & | inLayersShapes, | ||
CV_OUT std::vector< std::vector< MatShape > > & | outLayersShapes | ||
) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
CV_WRAP void cv::dnn::Net::getLayersShapes | ( | const std::vector< MatShape > & | netInputShapes, |
CV_OUT std::vector< int > & | layersIds, | ||
CV_OUT std::vector< std::vector< MatShape > > & | inLayersShapes, | ||
CV_OUT std::vector< std::vector< MatShape > > & | outLayersShapes | ||
) | const |
Returns input and output shapes for all layers in the loaded model; preliminary inferencing isn't necessary.
netInputShapes | shapes for all input blobs in net input layer. |
layersIds | output parameter for layer IDs. |
inLayersShapes | output parameter for input layers shapes; order is the same as in layersIds |
outLayersShapes | output parameter for output layers shapes; order is the same as in layersIds |
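A sketch that prints the inferred shapes without running the network (the input shape is illustrative; <iostream> assumed):

    MatShape inputShape = {1, 3, 224, 224};
    std::vector<int> ids;
    std::vector<std::vector<MatShape> > inShapes, outShapes;
    net.getLayersShapes(inputShape, ids, inShapes, outShapes);
    for (size_t i = 0; i < ids.size(); ++i)
        std::cout << net.getLayer(ids[i])->name << ": "
                  << outShapes[i].size() << " output blob(s)" << std::endl;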
CV_WRAP void cv::dnn::Net::getLayerTypes | ( | CV_OUT std::vector< String > & | layersTypes | ) | const |
Returns the list of layer types used in the model.
layersTypes | output parameter for returning types. |
CV_WRAP void cv::dnn::Net::getMemoryConsumption | ( | const int | layerId, |
const MatShape & | netInputShape, | ||
CV_OUT size_t & | weights, | ||
CV_OUT size_t & | blobs | ||
) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
CV_WRAP void cv::dnn::Net::getMemoryConsumption | ( | const int | layerId, |
const std::vector< MatShape > & | netInputShapes, | ||
CV_OUT size_t & | weights, | ||
CV_OUT size_t & | blobs | ||
) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
CV_WRAP void cv::dnn::Net::getMemoryConsumption | ( | const MatShape & | netInputShape, |
CV_OUT size_t & | weights, | ||
CV_OUT size_t & | blobs | ||
) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
void cv::dnn::Net::getMemoryConsumption | ( | const MatShape & | netInputShape, |
CV_OUT std::vector< int > & | layerIds, | ||
CV_OUT std::vector< size_t > & | weights, | ||
CV_OUT std::vector< size_t > & | blobs | ||
) | const |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
void cv::dnn::Net::getMemoryConsumption | ( | const std::vector< MatShape > & | netInputShapes, |
CV_OUT size_t & | weights, | ||
CV_OUT size_t & | blobs | ||
) | const |
Computes the number of bytes required to store all weights and intermediate blobs for the model.
netInputShapes | vector of shapes for all net inputs. |
weights | output parameter to store resulting bytes for weights. |
blobs | output parameter to store resulting bytes for intermediate blobs. |
void cv::dnn::Net::getMemoryConsumption | ( | const std::vector< MatShape > & | netInputShapes, |
CV_OUT std::vector< int > & | layerIds, | ||
CV_OUT std::vector< size_t > & | weights, | ||
CV_OUT std::vector< size_t > & | blobs | ||
) | const |
Computes the number of bytes required to store all weights and intermediate blobs for each layer.
netInputShapes | vector of shapes for all net inputs. |
layerIds | output vector to save layer IDs. |
weights | output parameter to store resulting bytes for weights. |
blobs | output parameter to store resulting bytes for intermediate blobs. |
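For example, to estimate the memory footprint for a given input shape (a sketch; the shape is illustrative and <iostream> assumed):

    MatShape inputShape = {1, 3, 224, 224};
    size_t weightsBytes = 0, blobsBytes = 0;
    net.getMemoryConsumption(inputShape, weightsBytes, blobsBytes);
    std::cout << "weights: " << weightsBytes / (1024.0 * 1024.0) << " MiB, "
              << "blobs: "   << blobsBytes   / (1024.0 * 1024.0) << " MiB" << std::endl;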
CV_WRAP Mat cv::dnn::Net::getParam | ( | LayerId | layer, |
int | numParam = 0 |
||
) |
Returns parameter blob of the layer.
layer | name or id of the layer. |
numParam | index of the layer parameter in the Layer::blobs array. |
CV_WRAP int64 cv::dnn::Net::getPerfProfile | ( | CV_OUT std::vector< double > & | timings | ) |
Returns overall time for inference and timings (in ticks) for layers.
Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in this case a zero tick count will be returned for those skipped layers. Supported by DNN_BACKEND_OPENCV on DNN_TARGET_CPU only.
[out] | timings | vector for tick timings for all layers. |
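A sketch of converting the returned tick counts to milliseconds (call after forward(); <iostream> assumed):

    std::vector<double> layerTimes;
    int64 totalTicks = net.getPerfProfile(layerTimes);
    double totalMs = totalTicks * 1000.0 / getTickFrequency();   // ticks -> milliseconds
    std::cout << "inference: " << totalMs << " ms over " << layerTimes.size() << " layers" << std::endl;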
static CV_WRAP Net cv::dnn::Net::readFromModelOptimizer | ( | const String & | xml, |
const String & | bin | ||
) |
static
Create a network from Intel's Model Optimizer intermediate representation (IR).
[in] | xml | XML configuration file with network's topology. |
[in] | bin | Binary file with trained weights. |
static CV_WRAP Net cv::dnn::Net::readFromModelOptimizer | ( | const std::vector< uchar > & | bufferModelConfig, |
const std::vector< uchar > & | bufferWeights | ||
) |
static
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
[in] | bufferModelConfig | buffer with model's configuration. |
[in] | bufferWeights | buffer with model's trained weights. |
static Net cv::dnn::Net::readFromModelOptimizer | ( | const uchar * | bufferModelConfigPtr, |
size_t | bufferModelConfigSize, | ||
const uchar * | bufferWeightsPtr, | ||
size_t | bufferWeightsSize | ||
) |
static
Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).
[in] | bufferModelConfigPtr | buffer pointer of model's configuration. |
[in] | bufferModelConfigSize | buffer size of model's configuration. |
[in] | bufferWeightsPtr | buffer pointer of model's trained weights. |
[in] | bufferWeightsSize | buffer size of model's trained weights. |
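For example (the IR file names are placeholders for an OpenVINO .xml/.bin pair):

    Net net = Net::readFromModelOptimizer("model.xml", "model.bin");
    net.setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE);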
CV_WRAP_AS(forwardAndRetrieve) void cv::dnn::Net::forward | ( | CV_OUT std::vector< std::vector< Mat > > & | outputBlobs, |
const std::vector< String > & | outBlobNames | ||
) |
Runs forward pass to compute outputs of layers listed in outBlobNames.
outputBlobs | contains all output blobs for each layer specified in outBlobNames. |
outBlobNames | names of the layers whose outputs are needed |
CV_WRAP void cv::dnn::Net::setHalideScheduler | ( | const String & | scheduler | ) |
Compile Halide layers.
[in] | scheduler | Path to YAML file with scheduling directives. |
Schedule layers that support the Halide backend, then compile them for a specific target. For layers that are not represented in the scheduling file, or if no manual scheduling is used at all, automatic scheduling will be applied.
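A sketch (requires an OpenCV build with Halide support; the YAML path is a placeholder):

    net.setPreferableBackend(DNN_BACKEND_HALIDE);
    net.setHalideScheduler("scheduler.yml");  // optional manual scheduling directives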
CV_WRAP void cv::dnn::Net::setInput | ( | InputArray | blob, |
const String & | name = "" , |
||
double | scalefactor = 1.0 , |
||
const Scalar & | mean = Scalar() |
||
) |
Sets the new input value for the network.
blob | A new blob. Should have CV_32F or CV_8U depth. |
name | A name of input layer. |
scalefactor | An optional normalization scale. |
mean | Optional mean subtraction values. |
If scale or mean values are specified, a final input blob is computed as: input(n,c,h,w) = scalefactor * (blob(n,c,h,w) - mean_c)
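A sketch of preparing and feeding an input blob (the image path, input size, scale and mean values are illustrative):

    Mat img  = imread("image.jpg");                                 // placeholder image
    Mat blob = blobFromImage(img, 1.0, Size(224, 224));             // NCHW blob, CV_32F
    net.setInput(blob, "", 1.0 / 255.0, Scalar(104, 117, 123));     // scale and mean applied by the network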
CV_WRAP void cv::dnn::Net::setInputsNames | ( | const std::vector< String > & | inputBlobNames | ) |
Sets output names of the network input pseudo layer.
Each net always has its own special network input pseudo layer with id=0. This layer stores the user blobs only and doesn't make any computations. In fact, this layer provides the only way to pass user data into the network. As with any other layer, this layer can label its outputs, and this function provides an easy way to do this.
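For instance, a hypothetical two-input network could label and feed its inputs like this (blob contents are assumed to be prepared elsewhere):

    Net net;
    net.setInputsNames({"left", "right"});
    Mat leftBlob, rightBlob;                  // filled elsewhere
    net.setInput(leftBlob,  "left");
    net.setInput(rightBlob, "right");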
CV_WRAP void cv::dnn::Net::setParam | ( | LayerId | layer, |
int | numParam, | ||
const Mat & | blob | ||
) |
Sets the new value for the learned param of the layer.
layer | name or id of the layer. |
numParam | index of the layer parameter in the Layer::blobs array. |
blob | the new value. |
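A sketch of reading and writing a layer's learned blob (the layer name "conv1" is illustrative; blobs[0] usually holds the weights):

    Mat w = net.getParam("conv1", 0);         // first learned blob of layer "conv1"
    w *= 0.5;                                 // e.g. scale the weights
    net.setParam("conv1", 0, w);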
CV_WRAP void cv::dnn::Net::setPreferableBackend | ( | int | backendId | ) |
Asks the network to use a specific computation backend where it is supported.
[in] | backendId | backend identifier. |
If OpenCV is compiled with Intel's Inference Engine library, DNN_BACKEND_DEFAULT means DNN_BACKEND_INFERENCE_ENGINE. Otherwise it is equal to DNN_BACKEND_OPENCV.
CV_WRAP void cv::dnn::Net::setPreferableTarget | ( | int | targetId | ) |
Asks the network to make computations on a specific target device.
[in] | targetId | target identifier. |
List of supported combinations backend / target:
| | DNN_BACKEND_OPENCV | DNN_BACKEND_INFERENCE_ENGINE | DNN_BACKEND_HALIDE | DNN_BACKEND_CUDA |
|---|---|---|---|---|
| DNN_TARGET_CPU | + | + | + | |
| DNN_TARGET_OPENCL | + | + | + | |
| DNN_TARGET_OPENCL_FP16 | + | + | | |
| DNN_TARGET_MYRIAD | | + | | |
| DNN_TARGET_FPGA | | + | | |
| DNN_TARGET_CUDA | | | | + |
| DNN_TARGET_CUDA_FP16 | | | | + |
| DNN_TARGET_HDDL | | + | | |
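For example, to prefer OpenCL execution through the default OpenCV backend (a sketch; OpenCV typically falls back to CPU when the requested target is unavailable):

    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_OPENCL);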