Ⅰ Can someone give a simple example of the classification process of a deep belief network (DBN), mainly for text classification?
The test_example_DBN.m script that ships with DeepLearnToolbox is a simple, complete example of DBN classification on MNIST digits; the same process applies to text once documents are turned into numeric feature vectors (see the sketch after the script).
function test_example_DBN
load mnist_uint8;

train_x = double(train_x) / 255;
test_x  = double(test_x)  / 255;
train_y = double(train_y);
test_y  = double(test_y);

%% ex1 train a 100 hidden unit RBM and visualize its weights
rand('state',0)
dbn.sizes = [100];
opts.numepochs = 1;
opts.batchsize = 100;
opts.momentum  = 0;
opts.alpha     = 1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);
figure; visualize(dbn.rbm{1}.W');   % Visualize the RBM weights

%% ex2 train a 100-100 hidden unit DBN and use its weights to initialize a NN
rand('state',0)
% train dbn
dbn.sizes = [100 100];
opts.numepochs = 1;
opts.batchsize = 100;
opts.momentum  = 0;
opts.alpha     = 1;
dbn = dbnsetup(dbn, train_x, opts);
dbn = dbntrain(dbn, train_x, opts);

% unfold dbn to nn
nn = dbnunfoldtonn(dbn, 10);        % 10 = number of output classes
nn.activation_function = 'sigm';

% train nn
opts.numepochs = 1;
opts.batchsize = 100;
nn = nntrain(nn, train_x, train_y, opts);
[er, bad] = nntest(nn, test_x, test_y);

assert(er < 0.10, 'Too big error');
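The script above classifies MNIST digits, but the pipeline is identical for text: only the input representation changes. A minimal sketch, assuming you have already built a documents-by-vocabulary bag-of-words count matrix docs_x and a one-hot label matrix docs_y (both hypothetical names, not provided by the toolbox):

% Hedged sketch: DBN text classification with DeepLearnToolbox.
% docs_x (numDocs x vocabSize) and docs_y (numDocs x numClasses) are
% assumed to be prepared beforehand; the toolbox does not build them.
docs_x = double(docs_x > 0);               % binarize word counts into [0,1]

dbn.sizes = [100 100];                     % two hidden layers, as in ex2
opts.numepochs = 10;
opts.batchsize = 100;                      % numDocs must divide evenly by this
opts.momentum  = 0;
opts.alpha     = 1;
dbn = dbnsetup(dbn, docs_x, opts);
dbn = dbntrain(dbn, docs_x, opts);         % unsupervised layer-wise pretraining

nn = dbnunfoldtonn(dbn, size(docs_y, 2));  % append one output unit per class
nn.activation_function = 'sigm';
nn = nntrain(nn, docs_x, docs_y, opts);    % supervised fine-tuning

Binarizing the counts matters because the first RBM treats each visible unit as a probability in [0,1], which is also the point of question Ⅱ below.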
Ⅱ Does the DBN in the MATLAB deep learning toolbox (DeepLearnToolbox) require input data in the (0,1] range?
The program is as follows:
function [nn, L] = nntrain(nn, train_x, train_y, opts, val_x, val_y)
%NNTRAIN trains a neural net
% [nn, L] = nntrain(nn, x, y, opts) trains the neural network nn with input x and
% output y for opts.numepochs epochs, with minibatches of size
% opts.batchsize. Returns a neural network nn with updated activations,
% errors, weights and biases, (nn.a, nn.e, nn.W, nn.b) and L, the sum
% squared error for each training minibatch.

assert(isfloat(train_x), 'train_x must be a float');
assert(nargin == 4 || nargin == 6, 'number of input arguments must be 4 or 6')

loss.train.e      = [];
loss.train.e_frac = [];
loss.val.e        = [];
loss.val.e_frac   = [];
opts.validation = 0;
if nargin == 6
    opts.validation = 1;
end

fhandle = [];
if isfield(opts,'plot') && opts.plot == 1
    fhandle = figure();
end

m = size(train_x, 1);

batchsize = opts.batchsize;
numepochs = opts.numepochs;

numbatches = m / batchsize;
assert(rem(numbatches, 1) == 0, 'numbatches must be an integer');

L = zeros(numepochs * numbatches, 1);
n = 1;
for i = 1 : numepochs
    tic;

    kk = randperm(m);
    for l = 1 : numbatches
        batch_x = train_x(kk((l - 1) * batchsize + 1 : l * batchsize), :);

        % Add noise to input (for use in denoising autoencoder)
        if(nn.inputZeroMaskedFraction ~= 0)
            batch_x = batch_x .* (rand(size(batch_x)) > nn.inputZeroMaskedFraction);
        end

        batch_y = train_y(kk((l - 1) * batchsize + 1 : l * batchsize), :);

        nn = nnff(nn, batch_x, batch_y);   % forward pass
        nn = nnbp(nn);                     % backpropagation
        nn = nnapplygrads(nn);             % gradient descent update

        L(n) = nn.L;
        n = n + 1;
    end

    t = toc;

    if ishandle(fhandle)
        if opts.validation == 1
            loss = nneval(nn, loss, train_x, train_y, val_x, val_y);
        else
            loss = nneval(nn, loss, train_x, train_y);
        end
        nnupdatefigures(nn, fhandle, loss, opts, i);
    end

    disp(['epoch ' num2str(i) '/' num2str(opts.numepochs) '. Took ' num2str(t) ' seconds' '. Mean squared error on training set is ' num2str(mean(L((n-numbatches):(n-1))))]);
    nn.learningRate = nn.learningRate * nn.scaling_learningRate;
end
end
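As the code shows, nntrain itself only asserts that train_x is a float, so the range restriction does not come from this function. It comes from the RBM pretraining stage: rbmtrain treats each visible unit as a Bernoulli probability, so every input dimension should lie in [0,1] (which is exactly why test_example_DBN divides the pixel values by 255). A minimal min-max rescaling sketch, assuming a real-valued feature matrix X (a hypothetical name):

% Hedged sketch: rescale each column of X into [0,1] before dbntrain/nntrain.
% X is a hypothetical numSamples x numFeatures matrix of raw features.
X  = double(X);
mn = min(X, [], 1);                        % per-feature minimum
rg = max(X, [], 1) - mn;                   % per-feature range
rg(rg == 0) = 1;                           % guard against constant columns
X01 = bsxfun(@rdivide, bsxfun(@minus, X, mn), rg);

When you later score test data, reuse the mn and rg computed on the training set so both splits are scaled consistently.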
Ⅲ Why aren't deep belief networks popular?
Deep belief networks are not popular because they are hard to analyze theoretically, and training them well demands a great deal of experience and tricks.
The deep belief network, proposed in 2006, is a generative model: by training the weights between its neurons, we can make the whole network generate the training data with maximum probability. So a DBN can not only extract features and classify data, it can also generate data.
A deep belief network is a kind of neural network. It can be used for unsupervised learning, where it acts much like an autoencoder, or for supervised learning, as a classifier.
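To make the generative side concrete, here is a minimal sketch that draws a "fantasy" sample from the bottom RBM of the dbn trained in section Ⅰ by alternating Gibbs sampling. It assumes DeepLearnToolbox's field layout (rbm.W is hidden-by-visible, rbm.b the visible bias, rbm.c the hidden bias) and its sigm helper; sampling from just one RBM is a simplification of sampling from the full DBN.

% Hedged sketch: alternating Gibbs sampling from a trained RBM.
rbm = dbn.rbm{1};                        % bottom RBM of the trained DBN
v   = double(rand(1, 784) > 0.5);        % random binary start (28x28 image)
for k = 1 : 1000
    ph = sigm(rbm.c' + v * rbm.W');      % P(h = 1 | v)
    h  = double(ph > rand(size(ph)));    % sample hidden units
    pv = sigm(rbm.b' + h * rbm.W);       % P(v = 1 | h)
    v  = double(pv > rand(size(pv)));    % sample visible units
end
figure; imagesc(reshape(pv, 28, 28)'); colormap gray;  % show the fantasy digit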
Ⅳ What does "end-to-end" mean in deep learning?
End-to-end means the input is the raw data and the output is the final result. In a non-end-to-end system, the input is not the raw data itself but features extracted from it. This is especially visible with images: an image has so many pixels that the data is very high-dimensional, inviting the curse of dimensionality, so the traditional approach was to hand-engineer a few key features from the image, which is essentially a dimensionality-reduction step.
This raises the question: how do you choose the features?
The quality of the features is critical, often more so than the learning algorithm. For example, suppose you are classifying people by gender. If the feature you extract is hair color, no classifier will do well. If the feature is hair length, the results improve, but errors remain. If you extract a truly strong feature, such as chromosome data, the classification is essentially error-free.
This means designing features takes substantial expertise, and it only gets harder as datasets grow.
End-to-end networks emerged in response: the features are learned automatically, so feature extraction is folded into the algorithm and no longer needs manual intervention. A small sketch of the contrast follows.
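The sketch below uses the same toolbox on MNIST; extract_hand_features is a hypothetical stand-in for any hand-engineered descriptor (edge statistics, histograms, etc.), not a toolbox function.

% Hedged sketch: non-end-to-end vs end-to-end with DeepLearnToolbox.
opts.numepochs = 1;
opts.batchsize = 100;

% Non-end-to-end: a human-designed feature extractor feeds the learner.
feat_train = extract_hand_features(train_x);   % hypothetical helper
nn1 = nnsetup([size(feat_train, 2) 100 10]);
nn1 = nntrain(nn1, feat_train, train_y, opts);

% End-to-end: raw 28x28 pixels go straight into the network, which
% learns its own internal features during training.
nn2 = nnsetup([784 100 10]);
nn2 = nntrain(nn2, train_x, train_y, opts);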