OpenCV453
Public Types | Public Member Functions | Static Public Member Functions | List of all members
cv::dnn::Net Class Reference

This class allows one to create and manipulate comprehensive artificial neural networks. [More...]

#include <dnn.hpp>

Public Types

typedef DictValue LayerId
 Container for strings and integers.
 

Public Member Functions

CV_WRAP Net ()
 Default constructor.
 
CV_WRAP ~Net ()
 Destructor frees the net only if there are no more references to it.
 
CV_WRAP bool empty () const
 
CV_WRAP String dump ()
 Dump net to String. [More...]
 
CV_WRAP void dumpToFile (const String &path)
 Dump net structure, hyperparameters, backend, target and fusion to a dot file. [More...]
 
int addLayer (const String &name, const String &type, LayerParams &params)
 Adds new layer to the net. [More...]
 
int addLayerToPrev (const String &name, const String &type, LayerParams &params)
 Adds new layer and connects its first input to the first output of the previously added layer. [More...]
 
CV_WRAP int getLayerId (const String &layer)
 Converts string name of the layer to the integer identifier. [More...]
 
CV_WRAP std::vector< String > getLayerNames () const
 
CV_WRAP Ptr< Layer > getLayer (LayerId layerId)
 Returns pointer to layer with specified id or name which the network uses.
 
std::vector< Ptr< Layer > > getLayerInputs (LayerId layerId)
 Returns pointers to input layers of specific layer.
 
CV_WRAP void connect (String outPin, String inpPin)
 Connects output of the first layer to input of the second layer. [More...]
 
void connect (int outLayerId, int outNum, int inpLayerId, int inpNum)
 Connects #outNum output of the first layer to #inNum input of the second layer. [More...]
 
CV_WRAP void setInputsNames (const std::vector< String > &inputBlobNames)
 Sets output names of the network input pseudo-layer. [More...]
 
CV_WRAP void setInputShape (const String &inputName, const MatShape &shape)
 Specify shape of network input.
 
CV_WRAP Mat forward (const String &outputName=String())
 Runs forward pass to compute output of layer with name outputName. [More...]
 
CV_WRAP AsyncArray forwardAsync (const String &outputName=String())
 Runs forward pass to compute output of layer with name outputName. [More...]
 
CV_WRAP void forward (OutputArrayOfArrays outputBlobs, const String &outputName=String())
 Runs forward pass to compute output of layer with name outputName. [More...]
 
CV_WRAP void forward (OutputArrayOfArrays outputBlobs, const std::vector< String > &outBlobNames)
 Runs forward pass to compute outputs of layers listed in outBlobNames. [More...]
 
CV_WRAP_AS(forwardAndRetrieve) void forward (CV_OUT std::vector< std::vector< Mat > > &outputBlobs, const std::vector< String > &outBlobNames)
 Runs forward pass to compute outputs of layers listed in outBlobNames. [More...]
 
CV_WRAP void setHalideScheduler (const String &scheduler)
 Compile Halide layers. [More...]
 
CV_WRAP void setPreferableBackend (int backendId)
 Ask network to use specific computation backend where supported. [More...]
 
CV_WRAP void setPreferableTarget (int targetId)
 Ask network to make computations on specific target device. [More...]
 
CV_WRAP void setInput (InputArray blob, const String &name="", double scalefactor=1.0, const Scalar &mean=Scalar())
 Sets the new input value for the network. [More...]
 
CV_WRAP void setParam (LayerId layer, int numParam, const Mat &blob)
 Sets the new value for the learned param of the layer. [More...]
 
CV_WRAP Mat getParam (LayerId layer, int numParam=0)
 Returns parameter blob of the layer. [More...]
 
CV_WRAP std::vector< int > getUnconnectedOutLayers () const
 Returns indexes of layers with unconnected outputs.
 
CV_WRAP std::vector< String > getUnconnectedOutLayersNames () const
 Returns names of layers with unconnected outputs.
 
CV_WRAP void getLayersShapes (const std::vector< MatShape > &netInputShapes, CV_OUT std::vector< int > &layersIds, CV_OUT std::vector< std::vector< MatShape > > &inLayersShapes, CV_OUT std::vector< std::vector< MatShape > > &outLayersShapes) const
 Returns input and output shapes for all layers in loaded model; preliminary inferencing isn't necessary. [More...]
 
CV_WRAP void getLayersShapes (const MatShape &netInputShape, CV_OUT std::vector< int > &layersIds, CV_OUT std::vector< std::vector< MatShape > > &inLayersShapes, CV_OUT std::vector< std::vector< MatShape > > &outLayersShapes) const
 
void getLayerShapes (const MatShape &netInputShape, const int layerId, CV_OUT std::vector< MatShape > &inLayerShapes, CV_OUT std::vector< MatShape > &outLayerShapes) const
 Returns input and output shapes for layer with specified id in loaded model; preliminary inferencing isn't necessary. [More...]
 
void getLayerShapes (const std::vector< MatShape > &netInputShapes, const int layerId, CV_OUT std::vector< MatShape > &inLayerShapes, CV_OUT std::vector< MatShape > &outLayerShapes) const
 
CV_WRAP int64 getFLOPS (const std::vector< MatShape > &netInputShapes) const
 Computes FLOPs for the whole loaded model with specified input shapes. [More...]
 
CV_WRAP int64 getFLOPS (const MatShape &netInputShape) const
 
CV_WRAP int64 getFLOPS (const int layerId, const std::vector< MatShape > &netInputShapes) const
 
CV_WRAP int64 getFLOPS (const int layerId, const MatShape &netInputShape) const
 
CV_WRAP void getLayerTypes (CV_OUT std::vector< String > &layersTypes) const
 Returns list of types for layers used in the model. [More...]
 
CV_WRAP int getLayersCount (const String &layerType) const
 Returns count of layers of specified type. [More...]
 
void getMemoryConsumption (const std::vector< MatShape > &netInputShapes, CV_OUT size_t &weights, CV_OUT size_t &blobs) const
 Computes the number of bytes required to store all weights and intermediate blobs for the model. [More...]
 
CV_WRAP void getMemoryConsumption (const MatShape &netInputShape, CV_OUT size_t &weights, CV_OUT size_t &blobs) const
 
CV_WRAP void getMemoryConsumption (const int layerId, const std::vector< MatShape > &netInputShapes, CV_OUT size_t &weights, CV_OUT size_t &blobs) const
 
CV_WRAP void getMemoryConsumption (const int layerId, const MatShape &netInputShape, CV_OUT size_t &weights, CV_OUT size_t &blobs) const
 
void getMemoryConsumption (const std::vector< MatShape > &netInputShapes, CV_OUT std::vector< int > &layerIds, CV_OUT std::vector< size_t > &weights, CV_OUT std::vector< size_t > &blobs) const
 Computes the number of bytes required to store all weights and intermediate blobs for each layer. [More...]
 
void getMemoryConsumption (const MatShape &netInputShape, CV_OUT std::vector< int > &layerIds, CV_OUT std::vector< size_t > &weights, CV_OUT std::vector< size_t > &blobs) const
 
CV_WRAP void enableFusion (bool fusion)
 Enables or disables layer fusion in the network. [More...]
 
CV_WRAP int64 getPerfProfile (CV_OUT std::vector< double > &timings)
 Returns overall time for inference and timings (in ticks) for layers. [More...]
 

Static Public Member Functions

static CV_WRAP Net readFromModelOptimizer (const String &xml, const String &bin)
 Create a network from Intel's Model Optimizer intermediate representation (IR). [More...]
 
static CV_WRAP Net readFromModelOptimizer (const std::vector< uchar > &bufferModelConfig, const std::vector< uchar > &bufferWeights)
 Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR). [More...]
 
static Net readFromModelOptimizer (const uchar *bufferModelConfigPtr, size_t bufferModelConfigSize, const uchar *bufferWeightsPtr, size_t bufferWeightsSize)
 Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR). [More...]
 

Detailed Description

This class allows one to create and manipulate comprehensive artificial neural networks.

A neural network is represented as a directed acyclic graph (DAG), where vertices are Layer instances and edges specify the relationships between layer inputs and outputs.

Each network layer has a unique integer id and a unique string name inside its network. LayerId can store either a layer name or a layer id.

This class supports reference counting of its instances, i.e. copies point to the same instance.

Member Function Documentation

◆ addLayer()

int cv::dnn::Net::addLayer ( const String &  name,
const String &  type,
LayerParams params 
)

Adds new layer to the net.

Parameters
name  unique name of the adding layer.
type  typename of the adding layer (type must be registered in LayerRegister).
params  parameters which will be used to initialize the creating layer.
Returns
unique identifier of created layer, or -1 on failure.
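As an illustrative sketch of how addLayer, setInputsNames and connect fit together; the "ReLU" layer type and its negative_slope parameter are assumptions about OpenCV's built-in layer registry, not taken from this page:

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    Net net;
    // Label the output of the network input pseudo-layer (id = 0).
    net.setInputsNames({"data"});

    // "ReLU" is assumed to be registered in LayerRegister;
    // negative_slope turns it into a leaky ReLU.
    LayerParams lp;
    lp.set("negative_slope", 0.1);
    int id = net.addLayer("relu1", "ReLU", lp);  // returns -1 on failure

    // Connect output #0 of the input pseudo-layer to input #0 of relu1.
    net.connect(0, 0, id, 0);

    std::cout << "layer id: " << id << std::endl;
    return 0;
}
```

Most users load a pretrained model via the readNet* family instead; this low-level API is for constructing graphs by hand.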

◆ addLayerToPrev()

int cv::dnn::Net::addLayerToPrev ( const String &  name,
const String &  type,
LayerParams params 
)

Adds new layer and connects its first input to the first output of previously added layer.

See also
addLayer()

◆ connect() [1/2]

void cv::dnn::Net::connect ( int  outLayerId,
int  outNum,
int  inpLayerId,
int  inpNum 
)

Connects #outNum output of the first layer to #inNum input of the second layer.

Parameters
outLayerId  identifier of the first layer
outNum  number of the first layer output
inpLayerId  identifier of the second layer
inpNum  number of the second layer input

◆ connect() [2/2]

CV_WRAP void cv::dnn::Net::connect ( String  outPin,
String  inpPin 
)

Connects output of the first layer to input of the second layer.

Parameters
outPin  descriptor of the first layer output.
inpPin  descriptor of the second layer input.

Descriptors have the following template <layer_name>[.input_number]:

  • the first part of the template, layer_name, is the string name of the added layer. If this part is empty, the network input pseudo-layer will be used;
  • the second, optional part of the template, input_number, is either the number of the layer input or its label. If this part is omitted, the first layer input will be used.

    See also
    setNetInputs(), Layer::inputNameToIndex(), Layer::outputNameToIndex()
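A minimal sketch of the descriptor syntax, using two hypothetical ReLU layers; per the description above, an empty outPin refers to the network input pseudo-layer:

```cpp
#include <opencv2/dnn.hpp>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    Net net;
    net.setInputsNames({"data"});

    LayerParams lp;                    // empty params, for illustration only
    net.addLayer("relu1", "ReLU", lp);
    net.addLayer("relu2", "ReLU", lp);

    net.connect("", "relu1");          // input pseudo-layer -> first input of relu1
    net.connect("relu1.0", "relu2");   // output 0 of relu1 -> first input of relu2
    return 0;
}
```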

◆ dump()

CV_WRAP String cv::dnn::Net::dump ( )

Dump net to String.

Returns
String with structure, hyperparameters, backend, target and fusion. Call this method after setInput(). To see the correct backend, target and fusion, call it after forward().

◆ dumpToFile()

CV_WRAP void cv::dnn::Net::dumpToFile ( const String &  path)

Dump net structure, hyperparameters, backend, target and fusion to a dot file.

Parameters
path  path to output file with .dot extension
See also
dump()

◆ empty()

CV_WRAP bool cv::dnn::Net::empty ( ) const

Returns true if there are no layers in the network.

◆ enableFusion()

CV_WRAP void cv::dnn::Net::enableFusion ( bool  fusion)

Enables or disables layer fusion in the network.

Parameters
fusion  true to enable the fusion, false to disable. The fusion is enabled by default.

◆ forward() [1/3]

CV_WRAP Mat cv::dnn::Net::forward ( const String &  outputName = String())

Runs forward pass to compute output of layer with name outputName.

Parameters
outputName  name of the layer whose output is needed
Returns
blob for the first output of the specified layer.

By default runs forward pass for the whole network.
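A typical setInput/forward round trip might look like the following sketch; the model path, image path and 224x224 input size are placeholder assumptions:

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    // "model.onnx" and "image.jpg" are placeholder paths.
    Net net = readNetFromONNX("model.onnx");

    Mat img = imread("image.jpg");
    // Scale to [0,1], resize to the network's expected input size
    // (224x224 is an assumption), and swap BGR -> RGB.
    Mat blob = blobFromImage(img, 1.0 / 255.0, Size(224, 224),
                             Scalar(), /*swapRB=*/true, /*crop=*/false);

    net.setInput(blob);
    Mat prob = net.forward();  // output of the last layer with unconnected output
    return 0;
}
```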

◆ forward() [2/3]

CV_WRAP void cv::dnn::Net::forward ( OutputArrayOfArrays  outputBlobs,
const std::vector< String > &  outBlobNames 
)

Runs forward pass to compute outputs of layers listed in outBlobNames.

Parameters
outputBlobs  contains blobs for first outputs of specified layers.
outBlobNames  names of the layers whose outputs are needed

◆ forward() [3/3]

CV_WRAP void cv::dnn::Net::forward ( OutputArrayOfArrays  outputBlobs,
const String &  outputName = String() 
)

Runs forward pass to compute output of layer with name outputName.

Parameters
outputBlobs  contains all output blobs for the specified layer.
outputName  name of the layer whose output is needed

If outputName is empty, runs forward pass for the whole network.

◆ forwardAsync()

CV_WRAP AsyncArray cv::dnn::Net::forwardAsync ( const String &  outputName = String())

Runs forward pass to compute output of layer with name outputName.

Parameters
outputName  name of the layer whose output is needed

By default runs forward pass for the whole network.

This is an asynchronous version of forward(const String&). dnn::DNN_BACKEND_INFERENCE_ENGINE backend is required.
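A sketch of the asynchronous variant, assuming an OpenVINO IR model ("model.xml"/"model.bin" are placeholder paths) and an Inference Engine-enabled build:

```cpp
#include <opencv2/dnn.hpp>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    // forwardAsync requires the Inference Engine backend.
    Net net = readNetFromModelOptimizer("model.xml", "model.bin");
    net.setPreferableBackend(DNN_BACKEND_INFERENCE_ENGINE);

    // Hypothetical zero-filled NCHW input blob.
    Mat blob(std::vector<int>{1, 3, 224, 224}, CV_32F, Scalar(0));
    net.setInput(blob);

    AsyncArray asyncOut = net.forwardAsync();
    Mat out;
    asyncOut.get(out);  // blocks until the result is ready
    return 0;
}
```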

◆ getFLOPS() [1/4]

CV_WRAP int64 cv::dnn::Net::getFLOPS ( const int  layerId,
const MatShape &  netInputShape 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getFLOPS() [2/4]

CV_WRAP int64 cv::dnn::Net::getFLOPS ( const int  layerId,
const std::vector< MatShape > &  netInputShapes 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getFLOPS() [3/4]

CV_WRAP int64 cv::dnn::Net::getFLOPS ( const MatShape &  netInputShape) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getFLOPS() [4/4]

CV_WRAP int64 cv::dnn::Net::getFLOPS ( const std::vector< MatShape > &  netInputShapes) const

Computes FLOPs for the whole loaded model with specified input shapes.

Parameters
netInputShapes  vector of shapes for all net inputs.
Returns
computed FLOPs.
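A sketch of estimating model complexity; the model path and NCHW input shape are placeholder assumptions:

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    Net net = readNetFromONNX("model.onnx");  // placeholder path

    // MatShape is std::vector<int>; NCHW order is assumed here.
    MatShape input = {1, 3, 224, 224};
    int64 flops = net.getFLOPS(input);        // single-input overload
    std::cout << "GFLOPs: " << flops * 1e-9 << std::endl;
    return 0;
}
```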

◆ getLayerId()

CV_WRAP int cv::dnn::Net::getLayerId ( const String &  layer)

Converts string name of the layer to the integer identifier.

Returns
id of the layer, or -1 if the layer wasn't found.

◆ getLayersCount()

CV_WRAP int cv::dnn::Net::getLayersCount ( const String &  layerType) const

Returns count of layers of specified type.

Parameters
layerType  type.
Returns
count of layers

◆ getLayerShapes() [1/2]

void cv::dnn::Net::getLayerShapes ( const MatShape &  netInputShape,
const int  layerId,
CV_OUT std::vector< MatShape > &  inLayerShapes,
CV_OUT std::vector< MatShape > &  outLayerShapes 
) const

Returns input and output shapes for layer with specified id in loaded model; preliminary inferencing isn't necessary.

Parameters
netInputShape  shape of the input blob in the net input layer.
layerId  id of the layer.
inLayerShapes  output parameter for input layer shapes; order is the same as in layersIds
outLayerShapes  output parameter for output layer shapes; order is the same as in layersIds

◆ getLayerShapes() [2/2]

void cv::dnn::Net::getLayerShapes ( const std::vector< MatShape > &  netInputShapes,
const int  layerId,
CV_OUT std::vector< MatShape > &  inLayerShapes,
CV_OUT std::vector< MatShape > &  outLayerShapes 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getLayersShapes() [1/2]

CV_WRAP void cv::dnn::Net::getLayersShapes ( const MatShape &  netInputShape,
CV_OUT std::vector< int > &  layersIds,
CV_OUT std::vector< std::vector< MatShape > > &  inLayersShapes,
CV_OUT std::vector< std::vector< MatShape > > &  outLayersShapes 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getLayersShapes() [2/2]

CV_WRAP void cv::dnn::Net::getLayersShapes ( const std::vector< MatShape > &  netInputShapes,
CV_OUT std::vector< int > &  layersIds,
CV_OUT std::vector< std::vector< MatShape > > &  inLayersShapes,
CV_OUT std::vector< std::vector< MatShape > > &  outLayersShapes 
) const

Returns input and output shapes for all layers in loaded model; preliminary inferencing isn't necessary.

Parameters
netInputShapes  shapes for all input blobs in the net input layer.
layersIds  output parameter for layer IDs.
inLayersShapes  output parameter for input layer shapes; order is the same as in layersIds
outLayersShapes  output parameter for output layer shapes; order is the same as in layersIds
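A sketch of static shape inference over a loaded model; the model path and input shape are placeholder assumptions:

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    Net net = readNetFromONNX("model.onnx");  // placeholder path

    MatShape input = {1, 3, 224, 224};        // assumed NCHW input shape
    std::vector<int> ids;
    std::vector<std::vector<MatShape>> inShapes, outShapes;
    // No forward pass is needed; shapes are inferred statically.
    net.getLayersShapes(input, ids, inShapes, outShapes);

    for (size_t i = 0; i < ids.size(); ++i)
        std::cout << "layer " << ids[i] << ": "
                  << outShapes[i].size() << " output blob(s)\n";
    return 0;
}
```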

◆ getLayerTypes()

CV_WRAP void cv::dnn::Net::getLayerTypes ( CV_OUT std::vector< String > &  layersTypes) const

Returns list of types for layers used in the model.

Parameters
layersTypes  output parameter for returning types.

◆ getMemoryConsumption() [1/6]

CV_WRAP void cv::dnn::Net::getMemoryConsumption ( const int  layerId,
const MatShape &  netInputShape,
CV_OUT size_t &  weights,
CV_OUT size_t &  blobs 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getMemoryConsumption() [2/6]

CV_WRAP void cv::dnn::Net::getMemoryConsumption ( const int  layerId,
const std::vector< MatShape > &  netInputShapes,
CV_OUT size_t &  weights,
CV_OUT size_t &  blobs 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getMemoryConsumption() [3/6]

CV_WRAP void cv::dnn::Net::getMemoryConsumption ( const MatShape &  netInputShape,
CV_OUT size_t &  weights,
CV_OUT size_t &  blobs 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getMemoryConsumption() [4/6]

void cv::dnn::Net::getMemoryConsumption ( const MatShape &  netInputShape,
CV_OUT std::vector< int > &  layerIds,
CV_OUT std::vector< size_t > &  weights,
CV_OUT std::vector< size_t > &  blobs 
) const

This is an overloaded member function, provided for convenience. It differs from the above function only in the argument(s) it accepts.

◆ getMemoryConsumption() [5/6]

void cv::dnn::Net::getMemoryConsumption ( const std::vector< MatShape > &  netInputShapes,
CV_OUT size_t &  weights,
CV_OUT size_t &  blobs 
) const

Computes the number of bytes required to store all weights and intermediate blobs for the model.

Parameters
netInputShapes  vector of shapes for all net inputs.
weights  output parameter to store resulting bytes for weights.
blobs  output parameter to store resulting bytes for intermediate blobs.

◆ getMemoryConsumption() [6/6]

void cv::dnn::Net::getMemoryConsumption ( const std::vector< MatShape > &  netInputShapes,
CV_OUT std::vector< int > &  layerIds,
CV_OUT std::vector< size_t > &  weights,
CV_OUT std::vector< size_t > &  blobs 
) const

Computes the number of bytes required to store all weights and intermediate blobs for each layer.

Parameters
netInputShapes  vector of shapes for all net inputs.
layerIds  output vector to save layer IDs.
weights  output parameter to store resulting bytes for weights.
blobs  output parameter to store resulting bytes for intermediate blobs.
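A sketch of estimating a model's memory footprint via the whole-model overload; the model path and input shape are placeholder assumptions:

```cpp
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    Net net = readNetFromONNX("model.onnx");  // placeholder path

    MatShape input = {1, 3, 224, 224};        // assumed input shape
    size_t weights = 0, blobs = 0;
    net.getMemoryConsumption(input, weights, blobs);

    std::cout << "weights: " << weights / (1024.0 * 1024.0) << " MiB, "
              << "blobs: "   << blobs   / (1024.0 * 1024.0) << " MiB\n";
    return 0;
}
```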

◆ getParam()

CV_WRAP Mat cv::dnn::Net::getParam ( LayerId  layer,
int  numParam = 0 
)

Returns parameter blob of the layer.

Parameters
layer  name or id of the layer.
numParam  index of the layer parameter in the Layer::blobs array.
See also
Layer::blobs

◆ getPerfProfile()

CV_WRAP int64 cv::dnn::Net::getPerfProfile ( CV_OUT std::vector< double > &  timings)

Returns overall time for inference and timings (in ticks) for layers.

Indexes in the returned vector correspond to layer ids. Some layers can be fused with others; in that case a tick count of zero is returned for those skipped layers. Supported by DNN_BACKEND_OPENCV on DNN_TARGET_CPU only.

Parameters
[out] timings  vector of tick timings for all layers.
Returns
overall ticks for model inference.
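Ticks can be converted to milliseconds with cv::getTickFrequency(), as in this sketch (model path and input shape are placeholder assumptions):

```cpp
#include <opencv2/core.hpp>
#include <opencv2/dnn.hpp>
#include <iostream>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    Net net = readNetFromONNX("model.onnx");  // placeholder path
    Mat blob(std::vector<int>{1, 3, 224, 224}, CV_32F, Scalar(0));
    net.setInput(blob);
    net.forward();  // profile data is collected during the forward pass

    std::vector<double> timings;
    int64 total = net.getPerfProfile(timings);       // ticks; per-layer in `timings`
    double ms = total * 1000.0 / getTickFrequency(); // ticks -> milliseconds
    std::cout << "inference: " << ms << " ms over "
              << timings.size() << " layers\n";
    return 0;
}
```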

◆ readFromModelOptimizer() [1/3]

static CV_WRAP Net cv::dnn::Net::readFromModelOptimizer ( const std::vector< uchar > &  bufferModelConfig,
const std::vector< uchar > &  bufferWeights 
)
static

Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).

Parameters
[in] bufferModelConfig  buffer with model's configuration.
[in] bufferWeights  buffer with model's trained weights.
Returns
Net object.

◆ readFromModelOptimizer() [2/3]

static CV_WRAP Net cv::dnn::Net::readFromModelOptimizer ( const String &  xml,
const String &  bin 
)
static

Create a network from Intel's Model Optimizer intermediate representation (IR).

Parameters
[in] xml  XML configuration file with network's topology.
[in] bin  Binary file with trained weights. Networks imported from Intel's Model Optimizer are launched in Intel's Inference Engine backend.
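A sketch of loading an OpenVINO IR pair; "model.xml" and "model.bin" are placeholder paths:

```cpp
#include <opencv2/dnn.hpp>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    // An IR model consists of an .xml topology and a .bin weights file.
    Net net = Net::readFromModelOptimizer("model.xml", "model.bin");

    // IR models run in the Inference Engine backend; the target can still
    // be chosen, e.g. DNN_TARGET_CPU or DNN_TARGET_MYRIAD.
    net.setPreferableTarget(DNN_TARGET_CPU);
    return 0;
}
```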

◆ readFromModelOptimizer() [3/3]

static Net cv::dnn::Net::readFromModelOptimizer ( const uchar *  bufferModelConfigPtr,
size_t  bufferModelConfigSize,
const uchar *  bufferWeightsPtr,
size_t  bufferWeightsSize 
)
static

Create a network from Intel's Model Optimizer in-memory buffers with intermediate representation (IR).

Parameters
[in] bufferModelConfigPtr  buffer pointer of model's configuration.
[in] bufferModelConfigSize  buffer size of model's configuration.
[in] bufferWeightsPtr  buffer pointer of model's trained weights.
[in] bufferWeightsSize  buffer size of model's trained weights.
Returns
Net object.

◆ forwardAndRetrieve()

CV_WRAP_AS(forwardAndRetrieve) void cv::dnn::Net::forward ( CV_OUT std::vector< std::vector< Mat > > &  outputBlobs,
const std::vector< String > &  outBlobNames 
)

Runs forward pass to compute outputs of layers listed in outBlobNames.

Parameters
outputBlobs  contains all output blobs for each layer specified in outBlobNames.
outBlobNames  names of the layers whose outputs are needed

◆ setHalideScheduler()

CV_WRAP void cv::dnn::Net::setHalideScheduler ( const String &  scheduler)

Compile Halide layers.

Parameters
[in] scheduler  Path to YAML file with scheduling directives.
See also
setPreferableBackend

Schedules layers that support the Halide backend, then compiles them for the specific target. For layers not represented in the scheduling file, or if no manual scheduling is used at all, automatic scheduling will be applied.

◆ setInput()

CV_WRAP void cv::dnn::Net::setInput ( InputArray  blob,
const String &  name = "",
double  scalefactor = 1.0,
const Scalar mean = Scalar() 
)

Sets the new input value for the network.

Parameters
blob  A new blob. Should have CV_32F or CV_8U depth.
name  A name of input layer.
scalefactor  An optional normalization scale.
mean  An optional mean subtraction value.
See also
connect(String, String) to know the format of the descriptor.

If scale or mean values are specified, a final input blob is computed as:

\[input(n,c,h,w) = scalefactor \times (blob(n,c,h,w) - mean_c)\]
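The formula above corresponds to calls like the following sketch; the mean values (104, 117, 123) and paths are illustrative assumptions only:

```cpp
#include <opencv2/dnn.hpp>
#include <opencv2/imgcodecs.hpp>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    Net net = readNetFromONNX("model.onnx");  // placeholder path

    Mat img = imread("image.jpg");            // placeholder image
    Mat blob = blobFromImage(img);            // CV_32F, NCHW layout

    // input = (1/255) * (blob - mean_c), applied per channel exactly as in
    // the formula above; these mean values are illustrative only.
    net.setInput(blob, "", 1.0 / 255.0, Scalar(104, 117, 123));
    Mat out = net.forward();
    return 0;
}
```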

◆ setInputsNames()

CV_WRAP void cv::dnn::Net::setInputsNames ( const std::vector< String > &  inputBlobNames)

Sets output names of the network input pseudo-layer.

Each net always has its own special network input pseudo-layer with id=0. This layer stores the user blobs only and does not perform any computations. In fact, this layer provides the only way to pass user data into the network. Like any other layer, this layer can label its outputs, and this function provides an easy way to do that.

◆ setParam()

CV_WRAP void cv::dnn::Net::setParam ( LayerId  layer,
int  numParam,
const Mat blob 
)

Sets the new value for the learned param of the layer.

Parameters
layer  name or id of the layer.
numParam  index of the layer parameter in the Layer::blobs array.
blob  the new value.
See also
Layer::blobs
Note
If the shape of the new blob differs from the previous shape, then the following forward pass may fail.

◆ setPreferableBackend()

CV_WRAP void cv::dnn::Net::setPreferableBackend ( int  backendId)

Ask network to use specific computation backend where supported.

Parameters
[in] backendId  backend identifier.
See also
Backend

If OpenCV is compiled with Intel's Inference Engine library, DNN_BACKEND_DEFAULT means DNN_BACKEND_INFERENCE_ENGINE. Otherwise it is equal to DNN_BACKEND_OPENCV.
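A sketch of selecting a backend and target pair (the pair must be one of the supported combinations listed under setPreferableTarget; the model path is a placeholder):

```cpp
#include <opencv2/dnn.hpp>

int main() {
    using namespace cv;
    using namespace cv::dnn;

    Net net = readNetFromONNX("model.onnx");  // placeholder path

    // Prefer OpenCV's own backend with OpenCL acceleration; if the
    // combination is unsupported, dnn falls back to the defaults.
    net.setPreferableBackend(DNN_BACKEND_OPENCV);
    net.setPreferableTarget(DNN_TARGET_OPENCL);
    return 0;
}
```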

◆ setPreferableTarget()

CV_WRAP void cv::dnn::Net::setPreferableTarget ( int  targetId)

Ask network to make computations on specific target device.

Parameters
[in] targetId  target identifier.
See also
Target

List of supported backend / target combinations:

                         DNN_BACKEND_OPENCV  DNN_BACKEND_INFERENCE_ENGINE  DNN_BACKEND_HALIDE  DNN_BACKEND_CUDA
DNN_TARGET_CPU                  +                        +                        +
DNN_TARGET_OPENCL               +                        +                        +
DNN_TARGET_OPENCL_FP16          +                        +
DNN_TARGET_MYRIAD                                        +
DNN_TARGET_FPGA                                          +
DNN_TARGET_CUDA                                                                                       +
DNN_TARGET_CUDA_FP16                                                                                  +
DNN_TARGET_HDDL                                          +

The documentation for this class was generated from the following file: