OpenCV453
Public Member Functions | Static Public Member Functions | List of all members
cv::dnn::LSTMLayer Class Reference (abstract)

LSTM recurrent layer. [More...]

#include <all_layers.hpp>

Inherits cv::dnn::Layer.

Public Member Functions

virtual CV_DEPRECATED void setWeights (const Mat &Wh, const Mat &Wx, const Mat &b)=0
 Set trained weights for LSTM layer. [More...]
 
virtual void setOutShape (const MatShape &outTailShape=MatShape())=0
 Specifies shape of output blob which will be [[T], N] + outTailShape. [More...]
 
virtual CV_DEPRECATED void setUseTimstampsDim (bool use=true)=0
 Specifies whether to interpret the first dimension of the input blob as the timestamp dimension or as the sample dimension. [More...]
 
virtual CV_DEPRECATED void setProduceCellOutput (bool produce=false)=0
 If this flag is set to true, the layer will produce $ c_t $ as its second output. [More...]
 
int inputNameToIndex (String inputName) CV_OVERRIDE
 Returns index of input blob into the input array. [More...]
 
int outputNameToIndex (const String &outputName) CV_OVERRIDE
 Returns index of output blob in output array. [More...]
 
- Public member functions inherited from cv::dnn::Layer
virtual CV_DEPRECATED_EXTERNAL void finalize (const std::vector< Mat * > &input, std::vector< Mat > &output)
 Computes and sets internal parameters according to inputs, outputs and blobs. [More...]
 
virtual CV_WRAP void finalize (InputArrayOfArrays inputs, OutputArrayOfArrays outputs)
 Computes and sets internal parameters according to inputs, outputs and blobs. [More...]
 
virtual CV_DEPRECATED_EXTERNAL void forward (std::vector< Mat * > &input, std::vector< Mat > &output, std::vector< Mat > &internals)
 Given the input blobs, computes the output blobs. [More...]
 
virtual void forward (InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals)
 Given the input blobs, computes the output blobs. [More...]
 
void forward_fallback (InputArrayOfArrays inputs, OutputArrayOfArrays outputs, OutputArrayOfArrays internals)
 Given the input blobs, computes the output blobs. [More...]
 
CV_DEPRECATED_EXTERNAL void finalize (const std::vector< Mat > &inputs, CV_OUT std::vector< Mat > &outputs)
 This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. [More...]
 
CV_DEPRECATED std::vector< Mat > finalize (const std::vector< Mat > &inputs)
 This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts. [More...]
 
CV_DEPRECATED CV_WRAP void run (const std::vector< Mat > &inputs, CV_OUT std::vector< Mat > &outputs, CV_IN_OUT std::vector< Mat > &internals)
 Allocates layer and computes output. [More...]
 
virtual bool supportBackend (int backendId)
 Asks the layer whether it supports a specific backend for doing computations. [More...]
 
virtual Ptr< BackendNode > initHalide (const std::vector< Ptr< BackendWrapper > > &inputs)
 Returns Halide backend node. [More...]
 
virtual Ptr< BackendNode > initInfEngine (const std::vector< Ptr< BackendWrapper > > &inputs)
 
virtual Ptr< BackendNode > initNgraph (const std::vector< Ptr< BackendWrapper > > &inputs, const std::vector< Ptr< BackendNode > > &nodes)
 
virtual Ptr< BackendNode > initVkCom (const std::vector< Ptr< BackendWrapper > > &inputs)
 
virtual Ptr< BackendNode > initCUDA (void *context, const std::vector< Ptr< BackendWrapper > > &inputs, const std::vector< Ptr< BackendWrapper > > &outputs)
 Returns a CUDA backend node. [More...]
 
virtual void applyHalideScheduler (Ptr< BackendNode > &node, const std::vector< Mat * > &inputs, const std::vector< Mat > &outputs, int targetId) const
 Automatic Halide scheduling based on layer hyper-parameters. [More...]
 
virtual Ptr< BackendNode > tryAttach (const Ptr< BackendNode > &node)
 Implements layer fusing. [More...]
 
virtual bool setActivation (const Ptr< ActivationLayer > &layer)
 Tries to attach the subsequent activation layer to this layer, i.e. performs layer fusion in a partial case. [More...]
 
virtual bool tryFuse (Ptr< Layer > &top)
 Tries to fuse the current layer with the next one. [More...]
 
virtual void getScaleShift (Mat &scale, Mat &shift) const
 Returns parameters of layers with channel-wise multiplication and addition. [More...]
 
virtual void unsetAttached ()
 Detaches all the layers attached to a particular layer.
 
virtual bool getMemoryShapes (const std::vector< MatShape > &inputs, const int requiredOutputs, std::vector< MatShape > &outputs, std::vector< MatShape > &internals) const
 
virtual int64 getFLOPS (const std::vector< MatShape > &inputs, const std::vector< MatShape > &outputs) const
 
virtual bool updateMemoryShapes (const std::vector< MatShape > &inputs)
 
 Layer (const LayerParams &params)
 Initializes only name, type and blobs fields.
 
void setParamsFrom (const LayerParams &params)
 Initializes only name, type and blobs fields.
 
- Public member functions inherited from cv::Algorithm
virtual CV_WRAP void clear ()
 Clears the algorithm state. [More...]
 
virtual void write (FileStorage &fs) const
 Stores algorithm parameters in a file storage. [More...]
 
CV_WRAP void write (const Ptr< FileStorage > &fs, const String &name=String()) const
 Simplified API for language bindings. This is an overloaded member function, provided for convenience. It differs from the above function only in what argument(s) it accepts.
 
virtual CV_WRAP void read (const FileNode &fn)
 Reads algorithm parameters from a file storage. [More...]
 
virtual CV_WRAP bool empty () const
 Returns true if the Algorithm is empty (e.g. in the very beginning or after unsuccessful read). [More...]
 
virtual CV_WRAP void save (const String &filename) const
 
virtual CV_WRAP String getDefaultName () const
 

Static Public Member Functions

static Ptr< LSTMLayer > create (const LayerParams &params)
 
- Static public member functions inherited from cv::Algorithm
template<typename _Tp >
static Ptr< _Tp > read (const FileNode &fn)
 Reads algorithm from the file node. [More...]
 
template<typename _Tp >
static Ptr< _Tp > load (const String &filename, const String &objname=String())
 Loads algorithm from the file. [More...]
 
template<typename _Tp >
static Ptr< _Tp > loadFromString (const String &strModel, const String &objname=String())
 Loads algorithm from a String. [More...]
 

Additional Inherited Members

- Public attributes inherited from cv::dnn::Layer
CV_PROP_RW std::vector< Mat > blobs
 List of learned parameters; they must be stored here to allow reading them via Net::getParam().
 
CV_PROP String name
 Name of the layer instance; can be used for logging or other internal purposes.
 
CV_PROP String type
 Type name which was used for creating the layer by the layer factory.
 
CV_PROP int preferableTarget
 Preferred target for layer forwarding.
 
- Protected member functions inherited from cv::Algorithm
void writeFormat (FileStorage &fs) const
 

Detailed Description

LSTM recurrent layer.

Member Function Documentation

◆ create()

static Ptr< LSTMLayer > cv::dnn::LSTMLayer::create ( const LayerParams & params )
static

Creates an instance of the LSTM layer.

◆ inputNameToIndex()

int cv::dnn::LSTMLayer::inputNameToIndex ( String  inputName)
virtual

Returns index of input blob into the input array.

Parameters
inputName	label of input blob

Each layer input and output can be labeled for easy identification using the "<layer_name>[.output_name]" notation. This method maps the label of an input blob to its index in the input vector.

Reimplemented from cv::dnn::Layer.

◆ outputNameToIndex()

int cv::dnn::LSTMLayer::outputNameToIndex ( const String &  outputName)
virtual

Returns index of output blob in output array.

See also
inputNameToIndex()

Reimplemented from cv::dnn::Layer.

◆ setOutShape()

virtual void cv::dnn::LSTMLayer::setOutShape ( const MatShape &  outTailShape = MatShape())
pure virtual

Specifies shape of output blob which will be [[T], N] + outTailShape.

If this parameter is empty or unset, then outTailShape = [Wh.size(0)] will be used, where Wh is the parameter from setWeights().

◆ setProduceCellOutput()

virtual CV_DEPRECATED void cv::dnn::LSTMLayer::setProduceCellOutput ( bool  produce = false)
pure virtual

If this flag is set to true, the layer will produce $ c_t $ as its second output.

Deprecated:
Use the produce_cell_output flag in LayerParams instead.

The shape of the second output is the same as that of the first output.

◆ setUseTimstampsDim()

virtual CV_DEPRECATED void cv::dnn::LSTMLayer::setUseTimstampsDim ( bool  use = true)
pure virtual

Specifies whether to interpret the first dimension of the input blob as the timestamp dimension or as the sample dimension.

Deprecated:
Use the use_timestamp_dim flag in LayerParams instead.

If the flag is set to true, the shape of the input blob will be interpreted as [T, N, [data dims]], where T specifies the number of timestamps and N is the number of independent streams. In this case each forward() call will iterate through T timestamps and update the layer's state T times.

If the flag is set to false, the shape of the input blob will be interpreted as [N, [data dims]]. In this case each forward() call will perform one iteration and produce one timestamp with shape [N, [out dims]].
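As a rough illustration of the two interpretations (plain Python, not OpenCV code), the difference is only whether the layer loops over the first dimension; the state update below is a hypothetical stand-in for one LSTM iteration:

```python
# Toy sketch of the two input-shape interpretations described above.
# The additive state update is a hypothetical stand-in for one LSTM
# step; this is not OpenCV code.
def run(inputs, use_timestamp_dim):
    state = 0.0
    if use_timestamp_dim:
        # inputs interpreted as [T, N, ...]: iterate over T timestamps,
        # updating the state T times and emitting T outputs.
        outputs = []
        for x_t in inputs:
            state += sum(x_t)
            outputs.append(state)
        return outputs
    # inputs interpreted as [N, ...]: a single iteration, one timestamp.
    state += sum(inputs)
    return [state]

print(run([[1.0], [2.0], [3.0]], True))   # three timestamps, three outputs
print(run([1.0, 2.0, 3.0], False))        # one timestamp, one output
```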

◆ setWeights()

virtual CV_DEPRECATED void cv::dnn::LSTMLayer::setWeights ( const Mat & Wh,
const Mat & Wx,
const Mat & b 
)
pure virtual

Set trained weights for LSTM layer.

Deprecated:
Use LayerParams::blobs instead.

LSTM behavior on each step is defined by current input, previous output, previous cell state and learned weights.

Let $x_t$ be the current input, $h_t$ the current output, and $c_t$ the current cell state. Then the current output and current cell state are computed as follows:

\begin{eqnarray*} h_t &= o_t \odot \tanh(c_t), \\ c_t &= f_t \odot c_{t-1} + i_t \odot g_t, \\ \end{eqnarray*}

where $\odot$ is the per-element multiplication operation and $i_t, f_t, o_t, g_t$ are internal gates computed using the learned weights.

The gates are computed as follows:

\begin{eqnarray*} i_t &= sigmoid&(W_{xi} x_t + W_{hi} h_{t-1} + b_i), \\ f_t &= sigmoid&(W_{xf} x_t + W_{hf} h_{t-1} + b_f), \\ o_t &= sigmoid&(W_{xo} x_t + W_{ho} h_{t-1} + b_o), \\ g_t &= \tanh &(W_{xg} x_t + W_{hg} h_{t-1} + b_g), \\ \end{eqnarray*}

where $W_{x?}$, $W_{h?}$ and $b_{?}$ are learned weights represented as matrices: $W_{x?} \in R^{N_h \times N_x}$, $W_{h?} \in R^{N_h \times N_h}$, $b_? \in R^{N_h}$.

For simplicity and performance we use $ W_x = [W_{xi}; W_{xf}; W_{xo}; W_{xg}] $ (i.e. $W_x$ is the vertical concatenation of the $ W_{x?} $ blocks), $ W_x \in R^{4N_h \times N_x} $. The same holds for $ W_h = [W_{hi}; W_{hf}; W_{ho}; W_{hg}], W_h \in R^{4N_h \times N_h} $ and for $ b = [b_i; b_f; b_o; b_g]$, $b \in R^{4N_h} $.

Parameters
Wh	the matrix defining how the previous output is transformed to the internal gates (i.e. $ W_h $ in the notation above)
Wx	the matrix defining how the current input is transformed to the internal gates (i.e. $ W_x $ in the notation above)
b	the bias vector (i.e. $ b $ in the notation above)
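The gate equations above can be checked with a minimal sketch in plain Python (not OpenCV code); `lstm_step` is a hypothetical helper that assumes Wx, Wh and b are stacked in [i; f; o; g] order as described:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, Wx, Wh, b):
    """One LSTM step following the gate equations above.

    Wx is a 4*Nh x Nx matrix, Wh a 4*Nh x Nh matrix and b a 4*Nh
    vector, each the vertical stacking of the i, f, o, g blocks.
    """
    nh = len(h_prev)
    # Pre-activations: Wx x_t + Wh h_{t-1} + b, one row per gate unit.
    z = [sum(Wx[r][k] * x[k] for k in range(len(x)))
         + sum(Wh[r][k] * h_prev[k] for k in range(nh))
         + b[r]
         for r in range(4 * nh)]
    i = [sigmoid(v) for v in z[0:nh]]             # input gate
    f = [sigmoid(v) for v in z[nh:2 * nh]]        # forget gate
    o = [sigmoid(v) for v in z[2 * nh:3 * nh]]    # output gate
    g = [math.tanh(v) for v in z[3 * nh:4 * nh]]  # candidate values
    c = [f[k] * c_prev[k] + i[k] * g[k] for k in range(nh)]
    h = [o[k] * math.tanh(c[k]) for k in range(nh)]
    return h, c
```

As a sanity check: with all-zero weights and bias, every sigmoid gate evaluates to 0.5 and $g_t$ to 0, so $c_t = 0.5\, c_{t-1}$ and $h_t = 0.5 \tanh(c_t)$.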

The documentation for this class was generated from the following file: