Debugging a regression using a TensorFlow algorithm - python

I made some progress with TensorFlow this morning, but it is still hard going: I have found that many of the online examples simply don't run. Anyway, I have tried to write some code that applies a multi-layer neural network to a regression problem, but I only ever get zeros out. Can anyone help me find where I have gone wrong, in my code or in my understanding?

I am running TensorFlow 0.12 with Python 3.5 on Windows 10.

Many thanks
import tensorflow as tf
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
# Parameters
learning_rate = 0.001
training_epochs = 100
batch_size = 5000
# Network Parameters
n_hidden_1 = 256
n_hidden_2 = 10
n_input = 9
n_classes = 1
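# dataVar (raw features), dataVar_scaled (scaled features) and depth (the
# regression target) are assumed to be loaded earlier in the script; they
# are not defined in the snippet as posted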
n_samples = dataVar.shape[0]
# TensorFlow Graph Input
x = tf.placeholder("float", [None, n_input])
if (n_classes > 1):
    y = tf.placeholder("float", [None, n_classes])
else:
    y = tf.placeholder("float", [None,])
# Create Multilayer Model
def multilayer_perceptron(x, weights, biases):
    '''
    x: placeholder for the input data
    weights: dictionary of weights
    biases: dictionary of biases
    '''
    # First hidden layer with ReLU activation
    layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
    layer_1 = tf.nn.relu(layer_1)
    # Second hidden layer with ReLU activation
    layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
    layer_2 = tf.nn.relu(layer_2)
    # Output layer with linear activation
    out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
    return out_layer
# weights and biases
weights = {
    'h1': tf.Variable(tf.random_normal([n_input, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, n_classes]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_classes]))
}
# Construct Model
pred = multilayer_perceptron(x, weights, biases)
# Define loss and optimizer
cost = tf.reduce_mean(tf.square(pred - y))
#cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(pred, y))
optimizer = tf.train.GradientDescentOptimizer(learning_rate=learning_rate).minimize(cost)
# Initialize variables
init = tf.initialize_all_variables()
# RUNNING THE SESSION
# launch the session
sess = tf.InteractiveSession()
# Initialize all the variables
sess.run(init)
# Training Epochs
for epoch in range(training_epochs):
    # Start with cost = 0
    avg_cost = 0.0
    # Total number of batches as an integer
    total_batch = int(n_samples / batch_size)
    # Loop over all batches
    for i in range(total_batch):
        # Grab a random batch of training data and labels
        ind = np.random.randint(0, high=dataVar_scaled.shape[0], size=batch_size)
        batch_x = dataVar_scaled[ind, :]
        batch_y = depth[ind]
        # Run the optimizer and fetch the loss value
        _, c, p = sess.run([optimizer, cost, pred], feed_dict={x: batch_x, y: batch_y})
        # Accumulate the average loss
        avg_cost += c / total_batch
    print("Epoch: {} cost = {:.4f}".format(epoch + 1, avg_cost))
print("Model has completed {} Epochs of training".format(training_epochs))
prediction = tf.argmax(sess.run(pred, feed_dict={x: dataVar_scaled}), 1).eval()
print(prediction)
plt.plot(prediction,depth,'b.')
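A likely explanation for the all-zero output (and, I assume, the line the comments below refer to): pred has shape [None, 1], so tf.argmax(..., 1) over a single-column tensor always returns index 0. For a regression the raw network output is the prediction, so a minimal sketch of that change, keeping everything above unchanged, would be:

# Evaluate the network output directly instead of taking argmax over a
# single column (which always yields index 0)
prediction = sess.run(pred, feed_dict={x: dataVar_scaled})
print(prediction)
plt.plot(prediction, depth, 'b.')
plt.show()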
Thanks for the quick response. Sorry, yes, I meant linear regression. I have changed that line of code, but I still get the same output for every input. Is there anything else I am doing wrong? – jlt199
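The symptom above (the same output for every input) would also follow from a shape mismatch in the posted code (my assumption, not confirmed in this thread): with n_classes = 1, y is declared with shape [None] while pred has shape [None, 1], so pred - y broadcasts to a [None, None] matrix and the cost averages all pairwise errors, whose minimiser is a constant prediction. A runnable NumPy illustration of the same broadcasting rule:

import numpy as np

pred = np.array([[1.0], [2.0], [3.0]])  # shape (3, 1), like the network output
y = np.array([1.0, 2.0, 3.0])           # shape (3,), like the labels fed in

print((pred - y).shape)                 # (3, 3): a full pairwise-difference matrix
print((pred - y.reshape(-1, 1)).shape)  # (3, 1): the per-sample error the cost should see

Declaring y = tf.placeholder("float", [None, 1]) and feeding batch_y = depth[ind].reshape(-1, 1) would keep the two shapes aligned.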
I went through your code with my own modifications and my own input, and it gave me a non-zero value. I don't have your 'dataVar' values, so I believe there is something wrong with that variable. Make sure the values are not all zeros. Please accept my answer if it helped you. – user1190882
Thanks for your time. I don't think my input data is the problem, but I will double-check. – jlt199
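For the double-check mentioned above, a quick sanity check on the inputs might look like this (variable names as in the code above):

import numpy as np

# Confirm the features and targets are not all zeros and contain no NaNs
print(np.abs(dataVar_scaled).max(), np.abs(depth).max())      # should both be non-zero
print(np.isnan(dataVar_scaled).any(), np.isnan(depth).any())  # should both be False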