
Solving the input shape problem when using Keras's Conv1D layer

翟曦之
2023-03-14
This article explains how to solve the input shape problems that come up when using Keras's Conv1D layer, along with the relevant tips and caveats, for readers who need a reference.

The following two errors are resolved:

1.ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

2.ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with …

1.ValueError: Input 0 is incompatible with layer conv1d_1: expected ndim=3, found ndim=4

The offending code:

model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=x_train.shape))

or

model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1:])))

This happens because the dimensionality of the model input is wrong: with TensorFlow-backed Keras, Conv1D's input_shape must be two-dimensional, (steps, channels), excluding the batch dimension. The fix is:

1. Reshape x_train and x_test:

x_train=x_train.reshape((x_train.shape[0],x_train.shape[1],1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1],1))

2. Change input_shape accordingly:

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1],1)))
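
Putting the two steps together, here is a minimal, self-contained sketch with random dummy data (the sample count of 100 and feature count of 20 are made-up values for illustration):

import numpy as np
from keras.models import Sequential
from keras.layers import Conv1D, Flatten, Dense

# dummy 2D data: (samples, features)
x_train = np.random.rand(100, 20)

# add a channels axis so each sample becomes (steps, channels) = (20, 1)
x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))

model = Sequential()
# input_shape excludes the batch dimension: (steps, channels)
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1], 1)))
model.add(Flatten())
model.add(Dense(1))
model.summary()  # first layer output shape: (None, 20, 8)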

The original answer, quoted:

The input shape is wrong, it should be input_shape = (1, 3253) for Theano or (3253, 1) for TensorFlow. The input shape doesn't include the number of samples.

Then you need to reshape your data to include the channels axis:

x_train = x_train.reshape((500000, 1, 3253))

Or move the channels dimension to the end if you use TensorFlow. After these changes it should work.
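
In NumPy terms, the two layouts from the quoted answer differ only in where the channels axis is inserted; a quick sketch (using a small dummy array in place of the full 500000-sample set):

import numpy as np

# small stand-in for the (500000, 3253) array mentioned in the quote
x = np.random.rand(10, 3253)

x_theano = np.expand_dims(x, axis=1)   # (samples, channels, steps) -> (10, 1, 3253), Theano ordering
x_tf = np.expand_dims(x, axis=-1)      # (samples, steps, channels) -> (10, 3253, 1), TensorFlow ordering

print(x_theano.shape, x_tf.shape)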

2.ValueError: Error when checking target: expected dense_3 to have 3 dimensions, but got array with …

This error occurs because the dimensionality of the labels no longer matches x_train and x_test: since x_train and x_test were reshaped, y has to be reshaped as well.

Solution:

Reshape the labels to match the reshaped x_train:

t_train=t_train.reshape((t_train.shape[0],1))
t_test = t_test.reshape((t_test.shape[0],1))
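
As a sanity check, the shapes should end up as (samples, steps, channels) for the inputs and (samples, 1) for a single regression target; a minimal sketch with dummy arrays (the sizes are hypothetical):

import numpy as np

x_train = np.random.rand(100, 20)   # (samples, features)
t_train = np.random.rand(100)       # (samples,)

x_train = x_train.reshape((x_train.shape[0], x_train.shape[1], 1))
t_train = t_train.reshape((t_train.shape[0], 1))

print(x_train.shape)  # (100, 20, 1) -> matches input_shape=(20, 1)
print(t_train.shape)  # (100, 1)     -> matches a final Dense(1) layer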

Appendix: the corrected code

import warnings
warnings.filterwarnings("ignore")
import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import pandas as pd
import numpy as np
import matplotlib
# matplotlib.use('Agg')
import matplotlib.pyplot as plt

from sklearn.model_selection import train_test_split
from sklearn import preprocessing

from keras.models import Sequential
from keras.layers import Dense, Dropout, BatchNormalization, Activation, Flatten, Conv1D
from keras.callbacks import LearningRateScheduler, EarlyStopping, ModelCheckpoint, ReduceLROnPlateau
from keras import optimizers
from keras.regularizers import l2
from keras.models import load_model
df_train = pd.read_csv('./input/train_V2.csv')
df_test = pd.read_csv('./input/test_V2.csv')
df_train.drop(df_train.index[[2744604]], inplace=True)  # drop the row with the NaN value
df_train["distance"] = df_train["rideDistance"]+df_train["walkDistance"]+df_train["swimDistance"]
# df_train["healthpack"] = df_train["boosts"] + df_train["heals"]
df_train["skill"] = df_train["headshotKills"]+df_train["roadKills"]
df_test["distance"] = df_test["rideDistance"]+df_test["walkDistance"]+df_test["swimDistance"]
# df_test["healthpack"] = df_test["boosts"] + df_test["heals"]
df_test["skill"] = df_test["headshotKills"]+df_test["roadKills"]

df_train_size = df_train.groupby(['matchId','groupId']).size().reset_index(name='group_size')
df_test_size = df_test.groupby(['matchId','groupId']).size().reset_index(name='group_size')

df_train_mean = df_train.groupby(['matchId','groupId']).mean().reset_index()
df_test_mean = df_test.groupby(['matchId','groupId']).mean().reset_index()

df_train = pd.merge(df_train, df_train_mean, suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
df_test = pd.merge(df_test, df_test_mean, suffixes=["", "_mean"], how='left', on=['matchId', 'groupId'])
del df_train_mean
del df_test_mean

df_train = pd.merge(df_train, df_train_size, how='left', on=['matchId', 'groupId'])
df_test = pd.merge(df_test, df_test_size, how='left', on=['matchId', 'groupId'])
del df_train_size
del df_test_size

target = 'winPlacePerc'
train_columns = list(df_test.columns)
""" remove some columns """
train_columns.remove("Id")
train_columns.remove("matchId")
train_columns.remove("groupId")
train_columns_new = []
for name in train_columns:
    if '_' in name:
        train_columns_new.append(name)
train_columns = train_columns_new
# print(train_columns)

X = df_train[train_columns]
Y = df_test[train_columns]
T = df_train[target]

del df_train
x_train, x_test, t_train, t_test = train_test_split(X, T, test_size = 0.2, random_state = 1234)

# scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1)).fit(x_train)
scaler = preprocessing.QuantileTransformer().fit(x_train)

x_train = scaler.transform(x_train)
x_test = scaler.transform(x_test)
Y = scaler.transform(Y)
x_train=x_train.reshape((x_train.shape[0],x_train.shape[1],1))
x_test = x_test.reshape((x_test.shape[0], x_test.shape[1],1))
t_train=t_train.reshape((t_train.shape[0],1))
t_test = t_test.reshape((t_test.shape[0],1))

model = Sequential()
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same', input_shape=(x_train.shape[1],1)))
model.add(BatchNormalization())
model.add(Conv1D(8, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(16, kernel_size=3, strides=1, padding='valid'))
model.add(BatchNormalization())
model.add(Conv1D(16, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(32, kernel_size=3, strides=1, padding='valid'))
model.add(BatchNormalization())
model.add(Conv1D(32, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(32, kernel_size=3, strides=1, padding='same'))
model.add(Conv1D(64, kernel_size=3, strides=1, padding='same'))
model.add(Activation('tanh'))
model.add(Flatten())
model.add(Dropout(0.5))
# model.add(Dropout(0.25))
model.add(Dense(512, kernel_initializer='he_normal', activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(128, kernel_initializer='he_normal', activation='relu', kernel_regularizer=l2(0.01)))
model.add(Dense(1, kernel_initializer='normal', activation='sigmoid'))

optimizer = optimizers.Adam(lr=0.01, epsilon=1e-8, decay=1e-4)

model.compile(optimizer=optimizer, loss='mse', metrics=['mae'])
model.summary()

early_stopping = EarlyStopping(monitor='val_mean_absolute_error', mode='min', patience=4, verbose=1)
# model_checkpoint = ModelCheckpoint(filepath='best_model.h5', monitor='val_mean_absolute_error', mode = 'min', save_best_only=True, verbose=1)
# reduce_lr = ReduceLROnPlateau(monitor='val_mean_absolute_error', mode = 'min',factor=0.5, patience=3, min_lr=0.0001, verbose=1)
history = model.fit(x_train, t_train,
     validation_data=(x_test, t_test),
     epochs=30,
     batch_size=32768,
     callbacks=[early_stopping],
     verbose=1)
Y = Y.reshape((Y.shape[0], Y.shape[1], 1))  # the test features need the same channels axis
pred = model.predict(Y)
pred = pred.ravel()

Additional notes: Keras Conv1D parameters and input/output explained

Conv1D(filters, kernel_size, strides=1, padding='valid', dilation_rate=1, activation=None, use_bias=True, ...)

filters: the number of convolution kernels (i.e. the dimensionality of the output).

kernel_size: an integer or a list/tuple of a single integer, giving the length of the convolution window along the spatial or temporal dimension.

strides: an integer or a list/tuple of a single integer, giving the stride of the convolution. Any strides value other than 1 is incompatible with any dilation_rate other than 1.

padding: the padding strategy, one of "valid", "same" or "causal". "causal" produces causal (dilated) convolutions, i.e. output[t] does not depend on input[t+1:], which is useful when modelling temporal signals where the temporal order must not be violated. "valid" performs only valid convolutions, so the border timesteps are not padded. "same" pads so that the border positions are kept, which usually gives an output of the same length as the input.

activation: the activation function, given as the name of a predefined activation or an element-wise Theano function. If not specified, no activation is applied (i.e. the linear activation a(x) = x is used).
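
To make the effect of the padding modes concrete, here is a small sketch (the filter count of 16, kernel size of 5, and input shape (100, 4) are arbitrary values chosen for illustration):

from keras.models import Sequential
from keras.layers import Conv1D

for pad in ['valid', 'same', 'causal']:
    m = Sequential()
    m.add(Conv1D(16, kernel_size=5, strides=1, padding=pad, input_shape=(100, 4)))
    print(pad, m.output_shape)

# valid  -> (None, 96, 16)   the length shrinks by kernel_size - 1
# same   -> (None, 100, 16)  the length is preserved
# causal -> (None, 100, 16)  the length is preserved and output[t] only sees input[:t+1]

The original example below builds a Conv1D layer with hyperparameters taken from an nn_params dict: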

model.add(Conv1D(filters=nn_params["input_filters"],
      kernel_size=nn_params["filter_length"],
      strides=1,
      padding='valid',
      activation=nn_params["activation"],
      kernel_regularizer=l2(nn_params["reg"])))

Example: the input shape is (None, 1000, 4)

First dimension: None (the batch size)

Second dimension:

output_length = int((input_length - nn_params["filter_length"] + 1))

In this case:

output_length = (input_length + 2*padding - kernel_size + 1) / strides = (1000 + 2*0 - 32 + 1) / 1 = 969

Third dimension: filters
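
The 969 figure can be checked directly against Keras; a sketch assuming 64 filters and kernel_size=32 (both hypothetical values chosen to match the arithmetic above):

from keras.models import Sequential
from keras.layers import Conv1D

model = Sequential()
model.add(Conv1D(filters=64, kernel_size=32, strides=1, padding='valid', input_shape=(1000, 4)))
print(model.output_shape)  # (None, 969, 64): (1000 - 32 + 1) steps, 64 filters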

That's all for this note on solving the Conv1D input problem in Keras; hopefully it serves as a useful reference.
