
I have implemented a CNN digit-classification model. My model overfits, and to combat the overfitting I am trying to add L2 regularization to my cost function. I have one small point of confusion: how do I choose the `<weights>` to plug into the cost formula (the last line of the code)? In other words, how do I implement an L2-regularized cost function for a convolutional neural network?

... 

x = tf.placeholder(tf.float32, shape=[None, img_size, img_size, num_channels], name='x') # Input 
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true') # Labels 

<Convolution Layer 1> 

<Convolution Layer 2> 

<Convolution Layer 3> 

<Fully Connected 1> 

<Fully Connected 2> O/P = layer_fc2 

# Loss function ("lambda" is a reserved word in Python, so use another name)
beta = 0.01  # regularization strength
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2, labels=y_true) 
# cost = tf.reduce_mean(cross_entropy)  # Normal (unregularized) loss
cost = tf.reduce_mean(cross_entropy) + beta * tf.nn.l2_loss(<weights>)  # Regularized loss

... 

Answer


The L2 loss should be defined over the weights — collect them with `tf.trainable_variables()`:

cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2, labels=y_true)
# Sum the L2 penalty over every trainable variable in the graph
l2_loss = tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()])
cost = tf.reduce_mean(cross_entropy) + beta * l2_loss  # beta is the regularization strength
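As a sanity check on what this penalty actually adds, here is a minimal NumPy sketch of the same arithmetic. Note that `tf.nn.l2_loss(v)` computes `sum(v ** 2) / 2` (half the squared Frobenius norm, no square root); the toy `cross_entropy` values, `weights` shapes, and `beta` below are made-up stand-ins for the graph tensors above:

```python
import numpy as np

# Hypothetical stand-ins for the tensors in the TensorFlow graph above.
np.random.seed(0)
cross_entropy = np.random.rand(4)                          # per-example loss, batch of 4
weights = [np.random.randn(3, 2), np.random.randn(2, 5)]   # two trainable weight matrices
beta = 0.01                                                # regularization strength

# Equivalent of tf.add_n([tf.nn.l2_loss(v) for v in tf.trainable_variables()]):
# each term is sum(v ** 2) / 2, summed over all weight tensors.
l2_loss = sum((w ** 2).sum() / 2.0 for w in weights)

# Regularized cost: mean cross-entropy plus the weighted L2 penalty.
cost = cross_entropy.mean() + beta * l2_loss
print(cost)
```

Note that `tf.trainable_variables()` also returns bias vectors; in practice biases are often excluded from the penalty, e.g. by filtering on the variable name before summing.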