OpenCV 4.5.3
cv::dnn::LSTMLayer Class Reference
LSTM recurrent layer [More...]
#include <all_layers.hpp>
Inherits cv::dnn::Layer.
Public Member Functions
virtual CV_DEPRECATED void | setWeights (const Mat &Wh, const Mat &Wx, const Mat &b)=0 |
Set trained weights for LSTM layer. [More...]
virtual void | setOutShape (const MatShape &outTailShape=MatShape())=0 |
Specifies shape of output blob which will be [[T], N] + outTailShape. [More...]
virtual CV_DEPRECATED void | setUseTimstampsDim (bool use=true)=0 |
Specifies whether the first dimension of the input blob is interpreted as the timestamp dimension or as the sample dimension. [More...]
virtual CV_DEPRECATED void | setProduceCellOutput (bool produce=false)=0 |
If this flag is set to true then the layer will produce $c_t$ as a second output. [More...]
int | inputNameToIndex (String inputName) CV_OVERRIDE |
Returns index of input blob into the input array. [More...]
int | outputNameToIndex (const String &outputName) CV_OVERRIDE |
Returns index of output blob in output array. [More...]
Public member functions inherited from cv::dnn::Layer
virtual CV_DEPRECATED_EXTERNAL void | finalize (const std::vector< Mat * > &input, std::vector< Mat > &output) |
Computes and sets internal parameters according to inputs, outputs and blobs. [More...]
virtual CV_WRAP void | finalize (InputArrayOfArrays inputs, OutputArrayOfArrays outputs) |
Computes and sets internal parameters according to inputs, outputs and blobs. [More...]
virtual CV_DEPRECATED_EXTERNAL void | forward (std::vector< Mat * > &input, std::vector< Mat > &output, std::vector< Mat > &internals) |
Given the input blobs, computes the output blobs. [More...]
virtual void | forward (InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals) |
Given the input blobs, computes the output blobs. [More...]
void | forward_fallback (InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals) |
Given the input blobs, computes the output blobs. [More...]
CV_DEPRECATED_EXTERNAL void | finalize (const std::vector< Mat > &inputs, CV_OUT std::vector< Mat > &outputs) |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. [More...]
CV_DEPRECATED std::vector< Mat > | finalize (const std::vector< Mat > &inputs) |
This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. [More...]
CV_DEPRECATED CV_WRAP void | run (const std::vector< Mat > &inputs, CV_OUT std::vector< Mat > &outputs, CV_IN_OUT std::vector< Mat > &internals) |
Allocates the layer and computes the output. [More...]
virtual bool | supportBackend (int backendId) |
Asks the layer whether it supports a specific backend for computations. [More...]
virtual Ptr< BackendNode > | initHalide (const std::vector< Ptr< BackendWrapper > > &inputs) |
Returns Halide backend node. [More...]
virtual Ptr< BackendNode > | initInfEngine (const std::vector< Ptr< BackendWrapper > > &inputs) |
virtual Ptr< BackendNode > | initNgraph (const std::vector< Ptr< BackendWrapper > > &inputs, const std::vector< Ptr< BackendNode > > &nodes) |
virtual Ptr< BackendNode > | initVkCom (const std::vector< Ptr< BackendWrapper > > &inputs) |
virtual Ptr< BackendNode > | initCUDA (void *context, const std::vector< Ptr< BackendWrapper > > &inputs, const std::vector< Ptr< BackendWrapper > > &outputs) |
Returns a CUDA backend node. [More...]
virtual void | applyHalideScheduler (Ptr< BackendNode > &node, const std::vector< Mat * > &inputs, const std::vector< Mat > &outputs, int targetId) const |
Automatic Halide scheduling based on layer hyper-parameters. [More...]
virtual Ptr< BackendNode > | tryAttach (const Ptr< BackendNode > &node) |
Implements layer fusing. [More...]
virtual bool | setActivation (const Ptr< ActivationLayer > &layer) |
Tries to attach the subsequent activation layer to this layer, i.e. performs layer fusion in a partial case. [More...]
virtual bool | tryFuse (Ptr< Layer > &top) |
Tries to fuse the current layer with the next one. [More...]
virtual void | getScaleShift (Mat &scale, Mat &shift) const |
Returns parameters of layers with channel-wise multiplication and addition. [More...]
virtual void | unsetAttached () |
"Deattaches" all the layers, attached to particular layer. | |
virtual bool | getMemoryShapes (const std::vector< MatShape > &inputs, const int requiredOutputs, std::vector< MatShape > &outputs, std::vector< MatShape > &internals) const |
virtual int64 | getFLOPS (const std::vector< MatShape > &inputs, const std::vector< MatShape > &outputs) const |
virtual bool | updateMemoryShapes (const std::vector< MatShape > &inputs) |
Layer (const LayerParams &params) |
Initializes only name, type and blobs fields. | |
void | setParamsFrom (const LayerParams &params) |
Initializes only name, type and blobs fields. | |
Public member functions inherited from cv::Algorithm
virtual CV_WRAP void | clear () |
Clears the algorithm state. [More...]
virtual void | write (FileStorage &fs) const |
Stores algorithm parameters in a file storage. [More...]
CV_WRAP void | write (const Ptr< FileStorage > &fs, const String &name=String()) const |
Simplified API for language bindings. This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
virtual CV_WRAP void | read (const FileNode &fn) |
Reads algorithm parameters from a file storage. [More...]
virtual CV_WRAP bool | empty () const |
Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read). [More...]
virtual CV_WRAP void | save (const String &filename) const |
virtual CV_WRAP String | getDefaultName () const |
Static Public Member Functions
static Ptr< LSTMLayer > | create (const LayerParams &params) |
Static public member functions inherited from cv::Algorithm
template<typename _Tp > | |
static Ptr< _Tp > | read (const FileNode &fn) |
Reads the algorithm from the file node. [More...]
template<typename _Tp > | |
static Ptr< _Tp > | load (const String &filename, const String &objname=String()) |
Loads the algorithm from the file. [More...]
template<typename _Tp > | |
static Ptr< _Tp > | loadFromString (const String &strModel, const String &objname=String()) |
Loads the algorithm from a String. [More...]
Additional Inherited Members
Public attributes inherited from cv::dnn::Layer
CV_PROP_RW std::vector< Mat > | blobs |
List of learned parameters; they must be stored here so that they can be read using Net::getParam().
CV_PROP String | name |
Name of the layer instance; can be used for logging or other internal purposes.
CV_PROP String | type |
Type name which was used to create the layer by the layer factory.
CV_PROP int | preferableTarget |
Preferred target for layer forwarding.
Protected member functions inherited from cv::Algorithm
void | writeFormat (FileStorage &fs) const |
Detailed Description

LSTM recurrent layer

Member Function Documentation

create() [static]

Creates an instance of the LSTM layer.
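The following is a minimal sketch of building such a layer directly through the factory. The layer name, the blob layout (Wh, Wx, b) and the sizes Nx/Nh are assumptions chosen to match the setWeights() description further below, not values mandated by this page; check them against the OpenCV version in use.

```cpp
#include <opencv2/dnn.hpp>

using namespace cv;
using namespace cv::dnn;

int main()
{
    const int Nx = 8;   // assumed input feature size
    const int Nh = 16;  // assumed hidden/output size

    LayerParams lp;
    lp.name = "lstm1";
    lp.type = "LSTM";
    // Weight layout follows the setWeights() description below: Wh, Wx, b.
    lp.blobs.push_back(Mat::zeros(4 * Nh, Nh, CV_32F)); // Wh
    lp.blobs.push_back(Mat::zeros(4 * Nh, Nx, CV_32F)); // Wx
    lp.blobs.push_back(Mat::zeros(4 * Nh, 1,  CV_32F)); // b

    Ptr<LSTMLayer> lstm = LSTMLayer::create(lp);
    return lstm.empty() ? 1 : 0;
}
```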
inputNameToIndex() [virtual]

Returns index of input blob into the input array.

Parameters
    inputName | label of input blob

Each layer input and output can be labeled to easily identify them using "<layer_name>[.output_name]" notation. This method maps the label of an input blob to its index in the input vector.

Reimplemented from cv::dnn::Layer.
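A hedged sketch of how these label lookups might be used. The labels "x", "h" and "c" are assumptions about what an LSTM layer recognizes, not something stated on this page.

```cpp
#include <opencv2/dnn.hpp>
#include <cstdio>

// Hypothetical helper; the labels below are illustrative only.
static void printLstmPins(const cv::Ptr<cv::dnn::LSTMLayer>& lstm)
{
    int xIdx = lstm->inputNameToIndex("x");   // labeled input  -> index in the input vector
    int hIdx = lstm->outputNameToIndex("h");  // labeled output -> index in the output vector
    int cIdx = lstm->outputNameToIndex("c");  // cell-state output, if produced
    std::printf("x -> %d, h -> %d, c -> %d\n", xIdx, hIdx, cIdx);
}
```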
outputNameToIndex() [virtual]

Returns index of output blob in output array.

Reimplemented from cv::dnn::Layer.
setOutShape() [pure virtual]

Specifies shape of output blob which will be [[T], N] + outTailShape.

If this parameter is empty or unset then outTailShape = [Wh.size(0)] will be used, where Wh is the parameter from setWeights().
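For instance, a sketch of requesting a specific output tail shape; the tail size 16 is an assumed hidden size, and `lstm` is the Ptr<LSTMLayer> from the create() sketch above.

```cpp
#include <opencv2/dnn.hpp>

static void configureOutShape(const cv::Ptr<cv::dnn::LSTMLayer>& lstm)
{
    cv::dnn::MatShape outTail;      // MatShape is a std::vector<int> of dimension sizes
    outTail.push_back(16);          // output blob becomes [[T], N] + [16]
    lstm->setOutShape(outTail);     // an empty MatShape keeps the default [Wh.size(0)]
}
```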
setProduceCellOutput() [pure virtual]

If this flag is set to true then the layer will produce $c_t$ as a second output.

Deprecated: use the flag produce_cell_output in LayerParams instead.

Shape of the second output is the same as the first output.
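A small sketch of enabling the cell-state output on an existing layer instance (the non-deprecated route is to set the corresponding flag in LayerParams before create()).

```cpp
#include <opencv2/dnn.hpp>

static void enableCellOutput(const cv::Ptr<cv::dnn::LSTMLayer>& lstm)
{
    lstm->setProduceCellOutput(true);  // outputs: [0] = h_t, [1] = c_t (same shape)
}
```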
setUseTimstampsDim() [pure virtual]

Specifies whether the first dimension of the input blob is interpreted as the timestamp dimension or as the sample dimension.

Deprecated: use the flag use_timestamp_dim in LayerParams instead.
If the flag is set to true then the shape of the input blob will be interpreted as [T, N, [data dims]], where T specifies the number of timestamps and N is the number of independent streams. In this case each forward() call will iterate through T timestamps and update the layer's state T times.

If the flag is set to false then the shape of the input blob will be interpreted as [N, [data dims]]. In this case each forward() call will make one iteration and produce one timestamp with shape [N, [out dims]].
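A sketch of the two input layouts described above; T, N and the feature size F are illustrative values only.

```cpp
#include <opencv2/dnn.hpp>

static void buildLstmInputs()
{
    const int T = 10, N = 2, F = 8;

    int seqSize[]  = {T, N, F};                               // with timestamp dimension
    cv::Mat seqInput(3, seqSize, CV_32F, cv::Scalar::all(0));

    int stepSize[] = {N, F};                                  // single-step layout
    cv::Mat stepInput(2, stepSize, CV_32F, cv::Scalar::all(0));

    // lstm->setUseTimstampsDim(true)  expects input blobs shaped like seqInput;
    // lstm->setUseTimstampsDim(false) expects input blobs shaped like stepInput.
    (void)seqInput; (void)stepInput;
}
```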
setWeights() [pure virtual]

Set trained weights for LSTM layer.

LSTM behavior on each step is defined by the current input, previous output, previous cell state and learned weights.
Let $x_t$ be the current input, $h_t$ the current output, and $c_t$ the current cell state. Then the current output and current cell state are computed as follows:

$$h_t = o_t \odot \tanh(c_t),$$
$$c_t = f_t \odot c_{t-1} + i_t \odot g_t,$$

where $\odot$ is the per-element multiplication and $i_t, f_t, o_t, g_t$ are internal gates that are computed using learned weights.

Gates are computed as follows:

$$i_t = \mathrm{sigmoid}(W_{xi} x_t + W_{hi} h_{t-1} + b_i),$$
$$f_t = \mathrm{sigmoid}(W_{xf} x_t + W_{hf} h_{t-1} + b_f),$$
$$o_t = \mathrm{sigmoid}(W_{xo} x_t + W_{ho} h_{t-1} + b_o),$$
$$g_t = \tanh(W_{xg} x_t + W_{hg} h_{t-1} + b_g),$$

where $W_{x?}$, $W_{h?}$ and $b_?$ are learned weights represented as matrices: $W_{x?} \in R^{N_h \times N_x}$, $W_{h?} \in R^{N_h \times N_h}$, $b_? \in R^{N_h}$.

For simplicity and performance purposes we use $W_x = [W_{xi}; W_{xf}; W_{xo}; W_{xg}]$ (i.e. $W_x$ is the vertical concatenation of the $W_{x?}$ matrices), $W_x \in R^{4 N_h \times N_x}$. The same holds for $W_h = [W_{hi}; W_{hf}; W_{ho}; W_{hg}]$, $W_h \in R^{4 N_h \times N_h}$, and for $b = [b_i; b_f; b_o; b_g]$, $b \in R^{4 N_h}$.
Parameters
    Wh | matrix defining how the previous output is transformed to the internal gates (i.e. according to the above notation, $W_h$)
    Wx | matrix defining how the current input is transformed to the internal gates (i.e. according to the above notation, $W_x$)
    b | bias vector (i.e. according to the above notation, $b$)
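A sketch that follows the weight layout above; Nx and Nh are illustrative sizes, and the random initialization only makes the example self-contained. Since setWeights() is deprecated, a common alternative is to supply the same blobs through LayerParams::blobs at create() time (see the create() sketch earlier).

```cpp
#include <opencv2/dnn.hpp>

static void setLstmWeights(const cv::Ptr<cv::dnn::LSTMLayer>& lstm)
{
    const int Nx = 8;    // assumed input size
    const int Nh = 16;   // assumed hidden/output size

    cv::Mat Wh(4 * Nh, Nh, CV_32F);   // [W_hi; W_hf; W_ho; W_hg]
    cv::Mat Wx(4 * Nh, Nx, CV_32F);   // [W_xi; W_xf; W_xo; W_xg]
    cv::Mat b (4 * Nh, 1,  CV_32F);   // [b_i; b_f; b_o; b_g]

    cv::randn(Wh, 0.0, 0.1);          // toy initialization for the sketch
    cv::randn(Wx, 0.0, 0.1);
    b.setTo(0);

    lstm->setWeights(Wh, Wx, b);      // Wh ~ W_h, Wx ~ W_x, b ~ b in the notation above
}
```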