Caffe Layers


A convolutional neural network (CNN) is a feed-forward neural network whose artificial neurons respond to surrounding units within a limited coverage area (the receptive field);[1] it performs particularly well on large-scale image processing.

The deep neural network (DNN) model is the basic deep learning architecture.

Recursive neural network (RNN) is an umbrella term for two kinds of artificial neural networks: the recurrent neural network, which is recursive over time, and the recursive neural network, which is recursive over structure. In a recurrent neural network, the connections between neurons form cycles across time steps, while a recursive neural network applies a similar network structure recursively to build a more complex deep network. RNN usually refers to the recurrent (time-based) variant. A plain recurrent network suffers from the vanishing gradient problem: as the recursion deepens, gradients explode or vanish exponentially, so long-range temporal dependencies are hard to capture. Combining it with LSTM units solves this problem well.

# bottom = the previous layer's top
name: "LeNet"
# data layer (training phase)
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  transform_param { scale: 0.00390625 }  # 1/256, scales pixel values into [0, 1)
  data_param {
    source: "mnist_train_lmdb"
    batch_size: 64
    backend: LMDB
  }
}
# data layer (test phase)
layer {
  name: "mnist"
  type: "Data"
  top: "data"
  top: "label"
  include { phase: TEST }
  transform_param { scale: 0.00390625 }
  data_param {
    source: "mnist_test_lmdb"
    batch_size: 100
    backend: LMDB
  }
}
# convolution layer
layer {
  name: "conv1"
  type: "Convolution"
  bottom: "data"
  top: "conv1"
  param { lr_mult: 1 }  # learning-rate multiplier for the weights
  param { lr_mult: 2 }  # learning-rate multiplier for the bias
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
# pooling layer
layer {
  name: "pool1"
  type: "Pooling"
  bottom: "conv1"
  top: "pool1"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
# convolution layer
layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
    kernel_size: 5
    stride: 1
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
# pooling layer
layer {
  name: "pool2"
  type: "Pooling"
  bottom: "conv2"
  top: "pool2"
  pooling_param {
    pool: MAX
    kernel_size: 2
    stride: 2
  }
}
# fully connected layer
layer {
  name: "ip1"
  type: "InnerProduct"
  bottom: "pool2"
  top: "ip1"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 500
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
# ReLU layer (in-place: top has the same name as bottom)
layer {
  name: "relu1"
  type: "ReLU"
  bottom: "ip1"
  top: "ip1"
}
# fully connected layer
layer {
  name: "ip2"
  type: "InnerProduct"
  bottom: "ip1"
  top: "ip2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  inner_product_param {
    num_output: 10
    weight_filler { type: "xavier" }
    bias_filler { type: "constant" }
  }
}
# accuracy layer (test phase only)
layer {
  name: "accuracy"
  type: "Accuracy"
  bottom: "ip2"
  bottom: "label"
  top: "accuracy"
  include { phase: TEST }
}
# loss layer
layer {
  name: "loss"
  type: "SoftmaxWithLoss"
  bottom: "ip2"
  bottom: "label"
  top: "loss"
}
Data Layers
  • Image Data - read raw images.
  • Database - read data from LevelDB or LMDB.
  • HDF5 Input - read HDF5 data; allows data of arbitrary dimensions (see the sketch after this list).
  • HDF5 Output - write data as HDF5.
  • Input - typically used for networks that are being deployed.
  • Window Data - read window data file.
  • Memory Data - read data directly from memory.
  • Dummy Data - for static data and debugging.
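
For reference, this is roughly what an HDF5 data layer looks like in a prototxt. It is a minimal sketch: the list file name below is a hypothetical example (a plain-text file holding one HDF5 file path per line).

# minimal sketch of an HDF5 data layer; "train_h5_list.txt" is a
# hypothetical text file listing one HDF5 file path per line
layer {
  name: "data"
  type: "HDF5Data"
  top: "data"
  top: "label"
  include { phase: TRAIN }
  hdf5_data_param {
    source: "train_h5_list.txt"
    batch_size: 64
  }
}
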
Vision Layers
  • Convolution Layer - convolves the input image with a set of learnable filters, each producing one feature map in the output image.
  • Pooling Layer - max, average, or stochastic pooling.
  • Spatial Pyramid Pooling (SPP)
  • Crop - perform cropping transformation.
  • Deconvolution Layer - transposed convolution (see the sketch after this list).
  • Im2col - relic helper layer that is not used much anymore.
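
The Deconvolution layer reuses convolution_param. Below is a minimal FCN-style 2x upsampling sketch; the blob names, the channel count, and the frozen bilinear filter are assumptions for illustration, not required settings.

# minimal sketch of a Deconvolution (transposed convolution) layer used
# for 2x upsampling; bottom/top names and num_output are hypothetical
layer {
  name: "upsample"
  type: "Deconvolution"
  bottom: "score"
  top: "upscore"
  param { lr_mult: 0 }  # freeze the filter, e.g. for fixed bilinear upsampling
  convolution_param {
    num_output: 16
    kernel_size: 4
    stride: 2
    weight_filler { type: "bilinear" }
    bias_term: false
  }
}
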
Recurrent Layers
  • Recurrent
  • RNN
  • Long Short-Term Memory (LSTM) (see the sketch after this list)
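
A minimal LSTM sketch, assuming a Caffe build that includes the mainline recurrent layers. In BVLC Caffe the first bottom is the input sequence (shape T x N x ...) and the second is a T x N sequence continuation indicator; the blob names here are hypothetical.

# minimal sketch of an LSTM layer; "clip" is a hypothetical T x N blob
# whose value is 0 at the start of each new sequence and 1 elsewhere
layer {
  name: "lstm1"
  type: "LSTM"
  bottom: "data"
  bottom: "clip"
  top: "lstm1"
  recurrent_param {
    num_output: 256
    weight_filler { type: "uniform" min: -0.08 max: 0.08 }
    bias_filler { type: "constant" }
  }
}
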
Common Layers
  • Inner Product - fully connected layer.
  • Dropout (see the sketch after this list)
  • Embed - for learning embeddings of one-hot encoded vectors (takes an index as input).
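
Dropout is usually applied in place after a fully connected layer. A minimal sketch, reusing the ip1 blob from the LeNet example above:

# minimal sketch of an in-place Dropout layer
layer {
  name: "drop1"
  type: "Dropout"
  bottom: "ip1"
  top: "ip1"   # same name as bottom, so the operation runs in place
  dropout_param { dropout_ratio: 0.5 }
}
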
Normalization Layers
  • Local Response Normalization (LRN) - performs a kind of "lateral inhibition" by normalizing over local input regions.
  • Mean-Variance Normalization (MVN) - performs contrast normalization / instance normalization.
  • Batch Normalization - performs normalization over mini-batches (see the sketch after this list).
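
In Caffe, BatchNorm performs only the normalization itself, so in practice it is paired with a Scale layer that learns the scale and shift (gamma/beta). A minimal sketch on a hypothetical conv1 blob:

# minimal sketch of the conventional BatchNorm + Scale pairing
layer {
  name: "bn1"
  type: "BatchNorm"
  bottom: "conv1"
  top: "conv1"   # in place
}
layer {
  name: "scale1"
  type: "Scale"
  bottom: "conv1"
  top: "conv1"
  scale_param { bias_term: true }  # learn both gamma (scale) and beta (bias)
}
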
Activation / Neuron Layers
  • ReLU / Rectified-Linear and Leaky-ReLU - ReLU and leaky-ReLU rectification.
  • PReLU - parametric ReLU.
  • ELU - exponential linear rectification.
  • Sigmoid
  • TanH
  • Absolute Value
  • Power - f(x) = (shift + scale * x) ^ power (see the sketch after this list).
  • Exp - f(x) = base ^ (shift + scale * x).
  • Log - f(x) = log(x).
  • BNLL - f(x) = log(1 + exp(x)).
  • Threshold - performs step function at user-defined threshold.
  • Bias - adds a bias to a blob that can either be learned or fixed.
  • Scale - scales a blob by an amount that can either be learned or fixed.
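
To illustrate the Power layer's parameters, here is a minimal sketch that squares its input, i.e. f(x) = (0 + 1 * x) ^ 2; the blob names are hypothetical.

# minimal sketch of a Power layer computing f(x) = (shift + scale * x) ^ power
layer {
  name: "pow1"
  type: "Power"
  bottom: "conv1"
  top: "pow1"
  power_param {
    power: 2.0
    scale: 1.0
    shift: 0.0
  }
}
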
Utility Layers
  • Flatten
  • Reshape
  • Batch Reindex
  • Split
  • Concat (see the sketch after this list)
  • Slicing
  • Eltwise - element-wise operations such as product or sum between two blobs.
  • Filter / Mask - mask or select output using last blob.
  • Parameter - enable parameters to be shared between layers.
  • Reduction - reduce input blob to scalar blob using operations such as sum or mean.
  • Silence - prevent top-level blobs from being printed during training.
  • ArgMax
  • Softmax
  • Python - allows custom Python layers.
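
A minimal sketch of Concat and Slicing working together along the channel axis; all blob names and the slice point are hypothetical.

# minimal sketch: concatenate two blobs, then slice the result back apart
layer {
  name: "concat"
  type: "Concat"
  bottom: "branch_a"
  bottom: "branch_b"
  top: "merged"
  concat_param { axis: 1 }  # concatenate along the channel axis
}
layer {
  name: "slice"
  type: "Slice"
  bottom: "merged"
  top: "part1"
  top: "part2"
  slice_param {
    axis: 1
    slice_point: 32   # first 32 channels go to part1, the rest to part2
  }
}
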
Loss Layers
  • Multinomial Logistic Loss
  • Infogain Loss - a generalization of MultinomialLogisticLossLayer.
  • Softmax with Loss - computes the multinomial logistic loss of the softmax of its inputs. It is conceptually identical to a softmax layer followed by a multinomial logistic loss layer, but provides a more numerically stable gradient.
  • Sum-of-Squares / Euclidean - computes the sum of squares of differences of its two inputs, $\frac{1}{2N}\sum_{i=1}^{N}\lVert x_i^1 - x_i^2 \rVert_2^2$ (see the sketch after this list).
  • Hinge / Margin - the hinge loss layer computes a one-vs-all hinge (L1) or squared hinge loss (L2).
  • Sigmoid Cross-Entropy Loss - computes the cross-entropy (logistic) loss, often used for predicting targets interpreted as probabilities.
  • Accuracy / Top-k layer - scores the output as an accuracy with respect to target; it is not actually a loss and has no backward step.
  • Contrastive Loss
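
A minimal sketch of the sum-of-squares / Euclidean loss as used for regression; the pred and target blob names are hypothetical.

# minimal sketch of a Euclidean (sum-of-squares) loss layer
layer {
  name: "loss"
  type: "EuclideanLoss"
  bottom: "pred"
  bottom: "target"
  top: "loss"
}
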

Reprinted from: https://www.cnblogs.com/cheungxiongwei/articles/7746386.html
