Predicting on new data with a trained simple feed-forward neural network
Forgive me if this sounds like a silly question. Say I have a neural network trained on data of shape [m, n]; how do I then test the trained network on data of shape [1, 3] in TensorFlow?
Here is the code I currently have:
import numpy as np
import tensorflow as tf
n_hidden_1 = 1024
n_hidden_2 = 1024
n = len(test_data[0]) - 1
m = len(test_data)
alpha = 0.005
training_epoch = 1000
display_epoch = 100
train_X = np.array([i[:-1:] for i in test_data]).astype('float32')
train_X = normalize_data(train_X)
train_Y = np.array([i[-1::] for i in test_data]).astype('float32')
train_Y = normalize_data(train_Y)
X = tf.placeholder(dtype=np.float32, shape=[m, n])
Y = tf.placeholder(dtype=np.float32, shape=[m, 1])
weights = {
    'h1': tf.Variable(tf.random_normal([n, n_hidden_1])),
    'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
    'out': tf.Variable(tf.random_normal([n_hidden_2, 1]))
}
biases = {
    'b1': tf.Variable(tf.random_normal([n_hidden_1])),
    'b2': tf.Variable(tf.random_normal([n_hidden_2])),
    'out': tf.Variable(tf.random_normal([1])),
}
layer_1 = tf.add(tf.matmul(X, weights['h1']), biases['b1'])
layer_1 = tf.nn.sigmoid(layer_1)
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
layer_2 = tf.nn.sigmoid(layer_2)
activation = tf.matmul(layer_2, weights['out']) + biases['out']
cost = tf.reduce_sum(tf.square(activation - Y))/(2 * m)
optimizer = tf.train.GradientDescentOptimizer(alpha).minimize(cost)
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for epoch in range(training_epoch):
        sess.run([optimizer, cost], feed_dict={X: train_X, Y: train_Y})
        cost_ = sess.run(cost, feed_dict={X: train_X, Y: train_Y})
        if epoch % display_epoch == 0:
            print('Epoch:', epoch, 'Cost:', cost_)
How do I test it on a new data point? For linear regression I know I can do something like the following with, say, the data [0.4, 0.5, 0.1]:
predict_x = np.array([0.4, 0.5, 0.1], dtype=np.float32).reshape([1, 3])
predict_x = (predict_x - mean)/std
predict_y = tf.add(tf.matmul(predict_x, W), b)
result = sess.run(predict_y).flatten()[0]
How do I do the same with the neural network?
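For context, the forward pass of this network is just two sigmoid layers followed by a linear output, so once the trained weights are fetched (e.g. w1 = sess.run(weights['h1']), b1 = sess.run(biases['b1']), and so on) they can be applied to an input of any batch size, including [1, 3]. A minimal NumPy sketch of that forward pass, using small randomly initialized weights purely for illustration (the real values would come from the trained session):

```python
import numpy as np

def sigmoid(z):
    # Element-wise logistic function, matching tf.nn.sigmoid
    return 1.0 / (1.0 + np.exp(-z))

def predict(x, w1, b1, w2, b2, w_out, b_out):
    """Forward pass matching the graph above: two sigmoid layers, linear output."""
    layer_1 = sigmoid(x @ w1 + b1)
    layer_2 = sigmoid(layer_1 @ w2 + b2)
    return layer_2 @ w_out + b_out

# Hypothetical weights for illustration only; in practice fetch the trained
# values from the session, e.g. w1 = sess.run(weights['h1']).
rng = np.random.default_rng(0)
n, h1, h2 = 3, 4, 4
w1, b1 = rng.normal(size=(n, h1)), rng.normal(size=h1)
w2, b2 = rng.normal(size=(h1, h2)), rng.normal(size=h2)
w_out, b_out = rng.normal(size=(h2, 1)), rng.normal(size=1)

# A single new sample of shape [1, 3]; normalize it the same way the
# training data was normalized before calling predict.
x_new = np.array([[0.4, 0.5, 0.1]], dtype=np.float32)
y_hat = predict(x_new, w1, b1, w2, b2, w_out, b_out)
print(y_hat.shape)  # one row in, one prediction out: (1, 1)
```

Alternatively, staying inside TensorFlow, the usual fix is to declare the placeholder with a None batch dimension (tf.placeholder(dtype=np.float32, shape=[None, n])) so the same graph accepts both the [m, n] training batch and a [1, n] test sample through feed_dict.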
What do the dimensions '[m, n]' stand for? The number of samples and the number of features? – kaufmanu
@kaufmanu Yup, 720 samples and 3 features. – monoshiro