TensorFlow regression model gives the same prediction every time
import tensorflow as tf      # TensorFlow 1.x API (placeholders, sessions)

x = tf.placeholder(tf.float32, [None,4]) # input vector  

w1 = tf.Variable(tf.random_normal([4,2])) # weights between first and second layers 

b1 = tf.Variable(tf.zeros([2]))    # biases added to hidden layer 

w2 = tf.Variable(tf.random_normal([2,1])) # weights between second and third layer 

b2 = tf.Variable(tf.zeros([1]))    # biases added to third (output) layer 

def feedForward(x, w, b):      # forward propagation for one layer
    Input = tf.add(tf.matmul(x, w), b)   # affine transform: x*w + b
    Output = tf.sigmoid(Input)           # sigmoid activation
    return Output


>>> Out1 = feedForward(x,w1,b1)    # output of first layer 

>>> Out2 = feedForward(Out1,w2,b2)    # output of second layer 

>>> MHat = 50*Out2        # final prediction is in the range (0,50) 



>>> M = tf.placeholder(tf.float32, [None,1]) # placeholder for the actual (target) value of marks

>>> J = tf.reduce_mean(tf.square(MHat - M)) # cost function -- mean square errors       

>>> train_step = tf.train.GradientDescentOptimizer(0.05).minimize(J)  # minimize J using Gradient Descent 

>>> sess = tf.InteractiveSession()    # create interactive session 

>>> tf.global_variables_initializer().run() # initialize all weight and bias variables with specified values 

>>> xs = [[1,3,9,7],  
      [7,9,8,2],       # x training data 
      [2,4,6,5]] 

>>> Ms = [[47], 
      [43],        # M training data 
      [39]] 

>>> for _ in range(1000):      # performing learning process on training data 1000 times 

     sess.run(train_step, feed_dict = {x:xs, M:Ms}) 


>>> print(sess.run(MHat, feed_dict = {x:[[1,3,9,7]]})) 

[[50]]

>>> print(sess.run(MHat, feed_dict = {x:[[1,15,9,7]]})) 

[[50]]

>>> print(sess.run(tf.transpose(MHat), feed_dict = {x:[[1,15,9,7]]})) 

[[50]]

In this code, I am trying to predict a student's marks M (out of 50) from how many hours he/she slept, studied, used electronics, and played. These 4 features make up the input feature vector x.

To solve this regression problem, I am using a deep neural network with an input layer of 4 perceptrons (the input features), a hidden layer with two perceptrons, and an output layer with one perceptron. I used sigmoid as the activation function. However, for every possible input vector I feed in, I get exactly the same prediction M ([[50.0]]). Can someone tell me what is wrong with the code above? I greatly appreciate the help! (Thanks in advance.)
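One thing I suspect (but cannot confirm) is sigmoid saturation: with raw inputs like [1,3,9,7] and unit-variance random weights, the pre-activation x*w + b can easily be large, and since MHat = 50*Out2, a saturated sigmoid would pin every prediction at 50. A quick NumPy sketch of how fast that happens:

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for z in [0.5, 4.0, 12.0, 40.0]:
    print(z, sigmoid(z), 50 * sigmoid(z))   # 50*sigmoid(12.0) is already ~50.0

# The local gradient sigmoid(z)*(1 - sigmoid(z)) at z = 12.0 is ~6e-6,
# so gradient descent barely moves weights once the unit saturates.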

Answers


You need to modify your feedForward() function. Here you don't need to apply sigmoid() at the last layer (simply return the activation!), and there is no need to multiply the output of this function by 50 either:

def feedForward(X, W1, b1, W2, b2): 
    Z = tf.sigmoid(tf.matmul(X, W1) + b1)  # hidden layer: sigmoid activation
    return tf.matmul(Z, W2) + b2           # output layer: linear, no sigmoid

MHat = feedForward(x, w1, b1, w2, b2) 
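For completeness, here is a minimal sketch of retraining with the fixed graph; it reuses the x, M, xs, and Ms already defined in the question and simply rebuilds the cost, train step, and session around the new MHat:

J = tf.reduce_mean(tf.square(MHat - M))      # mean squared error, as before
train_step = tf.train.GradientDescentOptimizer(0.05).minimize(J)

sess = tf.InteractiveSession()
tf.global_variables_initializer().run()
for _ in range(1000):
    sess.run(train_step, feed_dict={x: xs, M: Ms})

print(sess.run(MHat, feed_dict={x: [[1, 3, 9, 7]]}))
print(sess.run(MHat, feed_dict={x: [[1, 15, 9, 7]]}))   # should now differ from the line above

Note that with a linear output layer the prediction is no longer confined to (0, 50); if the marks must stay in that range, you could clip or rescale the output after training.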

Hope this helps!


Don't forget to let us know whether it solves your problem :)


@Tarun If it solves your problem, then you should mark this as the correct answer. It will certainly help anyone who runs into a similar problem in the future. Thanks! – Prem
