
I am trying to create a custom error function using the CNTK module. This is the NN-training part of my current Python code:

import numpy 
import cntk as C 

# set up the batch and sequence axes for the input/output variables 
batch_axis = C.Axis.default_batch_axis() 
input_seq_axis = C.Axis.default_dynamic_axis() 

input_dynamic_axes = [batch_axis, input_seq_axis] 
input_dynamic_axes2 = [batch_axis, input_seq_axis] 

input = C.input_variable(n_ins, dynamic_axes=input_dynamic_axes, dtype=numpy.float32) 
output = C.input_variable(n_outs, dynamic_axes=input_dynamic_axes2, dtype=numpy.float32) 

dnn_model = cntk_model.create_model(input, hidden_layer_type, hidden_layer_size, n_outs) 

loss = C.squared_error(dnn_model, output) 
error = C.squared_error(dnn_model, output) 

lr_schedule = C.learning_rate_schedule(current_finetune_lr, C.UnitType.minibatch) 
momentum_schedule = C.momentum_schedule(current_momentum) 

learner = C.adam(dnn_model.parameters, lr_schedule, momentum_schedule, unit_gain=False, 
                 l1_regularization_weight=l1_reg, l2_regularization_weight=l2_reg) 

trainer = C.Trainer(dnn_model, (loss, error), [learner]) 

And here is the code that creates the neural network model:

def create_model(features, hidden_layer_type, hidden_layer_size, n_out): 
    logger.debug('Creating cntk model') 
    # one type entry and one size entry per hidden layer 
    assert len(hidden_layer_size) == len(hidden_layer_type) 

    n_layers = len(hidden_layer_size) 

    my_layers = list() 
    for i in range(n_layers):  # xrange in the original Python 2 code 
        if hidden_layer_type[i] == 'TANH': 
            my_layers.append(C.layers.Dense(hidden_layer_size[i], activation=C.tanh, init=C.layers.glorot_uniform())) 
        elif hidden_layer_type[i] == 'LSTM': 
            my_layers.append(C.layers.Recurrence(C.layers.LSTM(hidden_layer_size[i]))) 
        else: 
            raise Exception('Unknown hidden layer type: %s' % hidden_layer_type[i]) 

    # final linear projection to the output dimension 
    my_layers.append(C.layers.Dense(n_out, activation=None)) 

    my_model = C.layers.Sequential(my_layers) 
    my_model = my_model(features) 

    return my_model 
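
For concreteness, the two list arguments are parallel lists (the assert enforces equal length); illustrative values, not taken from the question, might look like:

hidden_layer_type = ['TANH', 'TANH', 'LSTM']  # one entry per hidden layer 
hidden_layer_size = [1024, 1024, 512]         # matching layer widths 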

Now I want to change how backpropagation works, so that the error is computed not on the network output directly, but on the output after some additional computation. I tried to define something like this:

def create_error_function(self, prediction, target): 

    # denormalize the network output back to the original feature scale 
    prediction_denorm = C.element_times(prediction, self.std_vector) 
    prediction_denorm = C.plus(prediction_denorm, self.mean_vector) 
    # quantize the first five components to multiples of 1/round(prediction_denorm[5]) 
    prediction_denorm_rounded = C.round(C.element_times(prediction_denorm[0:5], C.round(prediction_denorm[5]))) 
    prediction_denorm_rounded = C.element_divide(prediction_denorm_rounded, C.round(prediction_denorm[5])) 

    # re-normalize the rounded values before comparing them with the target 
    prediction_norm = C.minus(prediction_denorm_rounded, self.mean_vector[0:5]) 
    prediction_norm = C.element_divide(prediction_norm, self.std_vector[0:5]) 

    first = C.squared_error(prediction_norm, target[0:5]) 
    second = C.minus(C.round(prediction_denorm[5]), self.mean_vector[5]) 
    second = C.element_divide(second, self.std_vector[5]) 

    return C.plus(first, C.squared_error(second, target[5])) 

and used it instead of the standard squared_error. The NN-training part then becomes:

dnn_model = cntk_model.create_model(input, hidden_layer_type, hidden_layer_size, n_outs) 
error_function = cntk_model.ErrorFunction(cmp_mean_vector, cmp_std_vector) 
loss = error_function.create_error_function(dnn_model, output) 
error = error_function.create_error_function(dnn_model, output) 
lr_schedule = C.learning_rate_schedule(current_finetune_lr, C.UnitType.minibatch) 
momentum_schedule = C.momentum_schedule(current_momentum) 

learner = C.adam(dnn_model.parameters, lr_schedule, momentum_schedule, unit_gain=False, 
                 l1_regularization_weight=l1_reg, l2_regularization_weight=l2_reg) 

trainer = C.Trainer(dnn_model, (loss, error), [learner]) 
trainer.train_minibatch({input: temp_train_x, output: temp_train_y}) 

But after two epochs I keep getting the same average loss, and my network is not learning.

Answer


Every time you want to change how backprop works you need to use stop_gradient. It is the only function whose gradient is different from the gradient of the forward operation. In the forward pass stop_gradient acts as the identity. In the backward pass it blocks the gradient from being propagated.
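
To make this concrete, here is a minimal sketch (not from the original answer; it assumes CNTK 2.x) that checks both behaviours with eval and grad:

import numpy as np 
import cntk as C 

# a scalar input with only a batch axis, so a (batch, dim) array binds directly 
x = C.input_variable(1, needs_gradient=True, dynamic_axes=[C.Axis.default_batch_axis()]) 
f = C.stop_gradient(x * x) 

data = {x: np.array([[3.0]], dtype=np.float32)} 
print(f.eval(data))           # 9.0 -- forward pass: stop_gradient is the identity 
print(f.grad(data, wrt=[x]))  # 0.0 -- backward pass: the gradient is blocked 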

To perform some operation f(x) on x in the forward pass, while pretending, as far as the backward pass is concerned, that it never happened, you need to do something like: C.stop_gradient(f(x) - x) + x. In your case that would be

norm_features = C.stop_gradient(features/normalization - features) + features
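
Applied to the rounding in your error function, one way to use this trick is a "straight-through" round (the helper name round_straight_through is made up for illustration): it behaves exactly like C.round in the forward pass but backpropagates the gradient of the identity:

def round_straight_through(x): 
    # forward: stop_gradient is the identity, so this computes C.round(x) 
    # backward: the gradient of (C.round(x) - x) is blocked, and only the 
    # trailing "+ x" contributes, i.e. the gradient of the identity 
    return C.stop_gradient(C.round(x) - x) + x 

Inside create_error_function, each C.round(...) call could then be replaced by round_straight_through(...), so the loss is still computed on the rounded values while gradients keep flowing back to the network weights.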


I updated my question. I managed to create a working example with the new loss function, but there seems to be a problem somewhere in my implementation, because I get the same average loss in all epochs. I am also not sure where I should add the modification you suggest. – sinisha