Question:

Convolutional neural network in TensorFlow for prediction with my own data

督嘉言
2023-03-14

I am a beginner with CNNs and TensorFlow. I am trying to implement a convolutional neural network in TensorFlow for prediction on my own data, but I have run into some problems. I adapted the Deep MNIST for Experts tutorial for this. Deep MNIST for Experts is a classification task, but I am trying to do regression. Another problem is that the code reports an accuracy of 1 at every step. What is causing this error, and how can I convert this code to regression?

Dataset:

Year_Month_Day,Hour_Minute,Temperature,Relative_humidity,Pressure,Total_Precipitation,Snowfall_amount,Total_cloud_cover,High_cloud_cover,Medium_cloud_cover,Low_cloud_cover,Shortwave_Radiation,Wind_speed_10m,Wind_direction_10m,Wind_speed_80m,Wind_direction_80m,Wind_speed_900m,Wind_direction_900m,Wind_Gust_10m,Difference
2016-10-24,23.00,15.47,76.00,1015.40,0.00,0.00,100.00,26.00,100.00,100.00,0.00,6.88,186.01,12.26,220.24,27.60,262.50,14.04,2.1 
2016-10-24,22.00,16.14,73.00,1014.70,0.00,0.00,10.20,34.00,0.00,2.00,0.00,6.49,176.82,11.97,201.16,24.27,249.15,7.92,0.669999 
..... 
..... 
.....
2016-10-24,18.00,20.93,56.00,1012.20,0.00,0.00,100.00,48.00,15.00,100.00,91.67,6.49,146.31,12.10,149.62,17.65,163.41,8.64,1.65 
2016-10-24,17.00,21.69,50.00,1012.10,0.00,0.00,100.00,42.00,10.00,100.00,243.86,9.50,142.70,12.77,139.57,19.08,144.21,32.40,0.76 

Code:

import tensorflow as tf
import pandas
from sklearn import cross_validation
from sklearn import preprocessing
from sklearn import metrics

sess = tf.InteractiveSession()

data = pandas.read_csv("tuna.csv")
print(data[-2:])
#X=data.copy(deep=True)

X=data[['Relative_humidity','Pressure','Total_Precipitation','Snowfall_amount','Total_cloud_cover','High_cloud_cover','Medium_cloud_cover','Low_cloud_cover','Shortwave_Radiation','Wind_speed_10m','Wind_direction_10m','Wind_speed_80m','Wind_direction_80m','Wind_speed_900m','Wind_direction_900m','Wind_Gust_10m']].fillna(0)
Y=data[['Temperature']]

number_of_samples=X.shape[0]
elements_of_one_sample=X.shape[1]


print("number of samples", number_of_samples)
print("elements_of_one_sample", elements_of_one_sample)
train_x, test_x, train_y, test_y = cross_validation.train_test_split(X, Y, test_size=0.1, random_state=42)

print("train_x.shape=", train_x.shape)
print("train_y.shape=", train_y.shape)
print("test_x.shape=", test_x.shape)
print("test_y.shape=", test_y.shape)

epoch = 0 # counter for number of rounds training network
last_cost = 0 # keep track of last cost to measure difference
max_epochs = 2000 # total number of training sessions
tolerance = 1e-6 # we stop when diff in costs less than that
batch_size = 50 # we batch the data in groups of this size
num_samples = train_y.shape[0] # number of samples in training set
num_batches = int( num_samples / batch_size ) # compute number of batches, given the batch size
print("############################## num_samples", num_samples)       
print("############################## num_batches", num_batches)       

x = tf.placeholder(tf.float32, shape=[None, 16])
y_ = tf.placeholder(tf.float32, shape=[None, 1])

# xW + b
W = tf.Variable(tf.zeros([16,1]))
b = tf.Variable(tf.zeros([1]))

sess.run(tf.initialize_all_variables())

# y = softmax(xW + b)
y = tf.nn.softmax(tf.matmul(x,W) + b)

# loss is cross entropy
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y), reduction_indices=[1]))

train_step = tf.train.GradientDescentOptimizer(0.5).minimize(cross_entropy)


for n in range( num_batches ):
  batch_x = train_x[ n*batch_size : (n+1)*batch_size ]
  batch_y = train_y[ n*batch_size : (n+1)*batch_size ]
  train_step.run( feed_dict={x: batch_x, y_: batch_y} )


correct_prediction = tf.equal(tf.argmax(y,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
print(accuracy.eval(feed_dict={x: test_x, y_: test_y}))


# To create this model, we're going to need to create a lot of weights and biases. 
# One should generally initialize weights with a small amount of noise for symmetry 
# breaking, and to prevent 0 gradients
def weight_variable(shape):
  initial = tf.truncated_normal(shape, stddev=0.1)
  return tf.Variable(initial)

# Since we're using ReLU neurons, it is also good practice to initialize them 
# with a slightly positive initial bias to avoid "dead neurons." Instead of doing 
# this repeatedly while we build the model, let's create two handy functions 
# to do it for us.
def bias_variable(shape):
  initial = tf.constant(0.1, shape=shape)
  return tf.Variable(initial)


# https://www.tensorflow.org/versions/master/api_docs/python/nn.html#conv2d
def conv2d(x, W):
  return tf.nn.conv2d(x, W, strides=[1, 1, 1, 1], padding='SAME')


# https://www.tensorflow.org/versions/master/api_docs/python/nn.html#max_pool
def max_pool_2x2(x):
  return tf.nn.max_pool(x, ksize=[1, 2, 2, 1],strides=[1, 2, 2, 1], padding='SAME')


W_conv1 = weight_variable([2, 2, 1, 32])
b_conv1 = bias_variable([32])


x_image = tf.reshape(x, [-1,4,4,1])

h_conv1 = tf.nn.relu(conv2d(x_image, W_conv1) + b_conv1)
h_pool1 = max_pool_2x2(h_conv1)

W_conv2 = weight_variable([2, 2, 32, 64])
b_conv2 = bias_variable([64])

h_conv2 = tf.nn.relu(conv2d(h_pool1, W_conv2) + b_conv2)
h_pool2 = max_pool_2x2(h_conv2)


W_fc1 = weight_variable([1 * 1 * 64, 1024])
b_fc1 = bias_variable([1024])

h_pool2_flat = tf.reshape(h_pool2, [-1, 1*1*64])
h_fc1 = tf.nn.relu(tf.matmul(h_pool2_flat, W_fc1) + b_fc1)

keep_prob = tf.placeholder(tf.float32)
h_fc1_drop = tf.nn.dropout(h_fc1, keep_prob)


W_fc2 = weight_variable([1024, 1])
b_fc2 = bias_variable([1])

y_conv=tf.nn.softmax(tf.matmul(h_fc1_drop, W_fc2) + b_fc2)

# loss
cross_entropy = tf.reduce_mean(-tf.reduce_sum(y_ * tf.log(y_conv), reduction_indices=[1]))
train_step = tf.train.AdamOptimizer(1e-4).minimize(cross_entropy)

# accuracy
correct_prediction = tf.equal(tf.argmax(y_conv,1), tf.argmax(y_,1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# train
sess.run(tf.initialize_all_variables())
for i in range(20000):
  if i%100 == 0:
    batch_x = train_x[ n*batch_size : (n+1)*batch_size ]
    batch_y = train_y[ n*batch_size : (n+1)*batch_size ]
    train_accuracy = accuracy.eval(feed_dict={x:batch_x, y_: batch_y, keep_prob: 1.0})
    print("step %d, training accuracy %g"%(i, train_accuracy))
  train_step.run(feed_dict={x: batch_x, y_: batch_y, keep_prob: 0.5})

# result
print("test accuracy %g"%accuracy.eval(feed_dict={
    x: test_x, y_: test_y, keep_prob: 1.0}))

Output:

number of samples 1250
elements_of_one_sample 16
train_x.shape= (1125, 16)
train_y.shape= (1125, 1)
test_x.shape= (125, 16)
test_y.shape= (125, 1)
############################## num_samples 1125
############################## num_batches 22
1.0
step 0, training accuracy 1
step 100, training accuracy 1
step 200, training accuracy 1
step 300, training accuracy 1
step 400, training accuracy 1
....
....
....
step 19500, training accuracy 1
step 19600, training accuracy 1
step 19700, training accuracy 1
step 19800, training accuracy 1
step 19900, training accuracy 1
test accuracy 1

I am very new to neural networks and machine learning, so please forgive any mistakes. Thanks in advance.

1 Answer

陶星渊
2023-03-14

You have a cross-entropy loss function, which is a loss function designed specifically for classification. If you want to do regression, you need to start with a loss function that penalizes prediction error (L2 error is a good starting point).

For prediction, the rightmost layer of the network needs to have linear units (no activation function). The number of neurons in the rightmost layer should correspond to the number of values you are predicting (if this is a simple regression problem where you predict a single value y given an input vector x, then you only need a single neuron in the rightmost layer). Right now, the back end of your network is a softmax layer, which is also designed specifically for classification tasks.
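As an illustration (not part of the original answer), a minimal sketch of what that final-layer change could look like in the code above, reusing the question's weight_variable/bias_variable helpers and h_fc1_drop:

# Hedged sketch: a linear output head for single-value regression.
# One output neuron and no activation, so y_conv is an unbounded real value.
W_fc2 = weight_variable([1024, 1])
b_fc2 = bias_variable([1])
y_conv = tf.matmul(h_fc1_drop, W_fc2) + b_fc2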

Basically, you need to replace the softmax with linear neurons and change the loss function to something like L2 error (also known as mean squared error).
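Putting both changes together, here is a hedged sketch in the same TF1-style API as the question. It assumes x, y_, keep_prob, sess, train_x, train_y, test_x, test_y, and batch_size are defined as in the question and that y_conv is the linear output from the sketch above. It also swaps accuracy for mean absolute error (accuracy is meaningless for real-valued targets) and advances the batch slice on every step instead of reusing one fixed slice:

# Hedged sketch: regression loss and evaluation replacing the classification pieces.
mse = tf.reduce_mean(tf.square(y_conv - y_))            # L2 / mean squared error loss
train_step = tf.train.AdamOptimizer(1e-4).minimize(mse)
mae = tf.reduce_mean(tf.abs(y_conv - y_))               # human-readable error metric

sess.run(tf.initialize_all_variables())
for i in range(20000):
    # cycle through the training set in batch_size chunks
    start = (i * batch_size) % (train_x.shape[0] - batch_size)
    batch_x = train_x[start : start + batch_size]
    batch_y = train_y[start : start + batch_size]
    if i % 100 == 0:
        train_mae = mae.eval(feed_dict={x: batch_x, y_: batch_y, keep_prob: 1.0})
        print("step %d, training MAE %g" % (i, train_mae))
    train_step.run(feed_dict={x: batch_x, y_: batch_y, keep_prob: 0.5})

print("test MAE %g" % mae.eval(feed_dict={x: test_x, y_: test_y, keep_prob: 1.0}))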
