I have an autoencoder, and I tried to use a Lambda layer in an intermediate layer to take a specific value from the input layer, generate a new tensor, and send it to the next layer, but it produces the following error:
Traceback (most recent call last):
  File "", line 99, in <module>
    model = Model(inputs=[image, wtm], outputs=decoded)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\legacy\interfaces.py", line 91, in wrapper
    return func(*args, **kwargs)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 93, in __init__
    self._init_graph_network(*args, **kwargs)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 231, in _init_graph_network
    self.inputs, self.outputs)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 1366, in _map_graph_network
    tensor_index=tensor_index)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 1353, in build_map
    node_index, tensor_index)
  [Previous frame repeated 6 more times]
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\network.py", line 1325, in build_map
    node = layer._inbound_nodes[node_index]
AttributeError: 'Tensor' object has no attribute '_inbound_nodes'
This is my code; after adding the first Lambda layer it produces this error! Can you tell me why this error happens? Thanks for your help. What I need is something like this: wtm = [[0,1,1,0],[0,1,1,0],[0,0,0,0],[0,1,0,0]].
I select wtm[:, i, j]
and produce a new tensor with shape (28, 28, 1)
whose values are all wtm[:, i, j].
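In plain NumPy, the operation I want (a sketch for a single sample, ignoring the batch dimension; the indices i, j here are just example values) would look like:

```python
import numpy as np

# A single 4x4 watermark and a chosen position (i, j).
wtm = np.array([[0, 1, 1, 0],
                [0, 1, 1, 0],
                [0, 0, 0, 0],
                [0, 1, 0, 0]], dtype=np.float32)
i, j = 1, 1

# Broadcast the scalar wtm[i, j] to a full (28, 28, 1) map.
a = np.full((28, 28, 1), wtm[i, j], dtype=np.float32)
print(a.shape)     # (28, 28, 1)
print(a[0, 0, 0])  # 1.0, the value of wtm[1, 1]
```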
wt_random=np.random.randint(2, size=(49999,4,4))
w_expand=wt_random.astype(np.float32)
wv_random=np.random.randint(2, size=(9999,4,4))
wv_expand=wv_random.astype(np.float32)
#w_expand[:,:4,:4]=wt_random
#wv_expand[:,:4,:4]=wv_random
x,y,z=w_expand.shape
w_expand=w_expand.reshape((x,y,z,1))
x,y,z=wv_expand.shape
wv_expand=wv_expand.reshape((x,y,z,1))
#-----------------building w test---------------------------------------------
w_test = np.random.randint(2,size=(1,4,4))
w_test=w_test.astype(np.float32)
#wt_expand=np.zeros((1,28,28),dtype='float32')
#wt_expand[:,0:4,0:4]=w_test
w_test=w_test.reshape((1,4,4,1))
wtm=Input((4,4,1))
image = Input((28, 28, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e')(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e')(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e')(conv2)
BN=BatchNormalization()(conv3)
encoded = Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I')(BN)
rep=Kr.layers.Lambda(lambda x:Kr.backend.repeat(x,28))
a=rep(Kr.layers.Lambda(lambda x:x[1,1])(wtm))
add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
encoded_merged = add_const([encoded,a])
#-----------------------decoder------------------------------------------------
#------------------------------------------------------------------------------
deconv1 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl1d')(encoded_merged)
deconv2 = Conv2D(64, (5, 5), activation='elu', padding='same', name='convl2d')(deconv1)
deconv3 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl3d')(deconv2)
deconv4 = Conv2D(64, (5, 5), activation='elu',padding='same', name='convl4d')(deconv3)
BNd=BatchNormalization()(deconv4)
#DrO2=Dropout(0.25,name='DrO2')(BNd)
decoded = Conv2D(1, (5, 5), activation='sigmoid', padding='same', name='decoder_output')(BNd)
#model=Model(inputs=image,outputs=decoded)
model=Model(inputs=[image,wtm],outputs=decoded)
decoded_noise = GaussianNoise(0.5)(decoded)
#----------------------w extraction------------------------------------
convw1 = Conv2D(64, (5,5), activation='relu', name='conl1w')(decoded_noise)#24
convw2 = Conv2D(64, (5,5), activation='relu', name='convl2w')(convw1)#20
#Avw1=AveragePooling2D(pool_size=(2,2))(convw2)
convw3 = Conv2D(64, (5,5), activation='relu' ,name='conl3w')(convw2)#16
convw4 = Conv2D(64, (5,5), activation='relu' ,name='conl4w')(convw3)#12
#Avw2=AveragePooling2D(pool_size=(2,2))(convw4)
convw5 = Conv2D(64, (5,5), activation='relu', name='conl5w')(convw4)#8
convw6 = Conv2D(64, (5,5), activation='relu', name='conl6w')(convw5)#4
convw7 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl7w',dilation_rate=(2,2))(convw6)#4
convw8 = Conv2D(64, (5,5), activation='relu', padding='same',name='conl8w',dilation_rate=(2,2))(convw7)#4
convw9 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl9w',dilation_rate=(2,2))(convw8)#4
convw10 = Conv2D(64, (5,5), activation='relu',padding='same', name='conl10w',dilation_rate=(2,2))(convw9)#4
BNed=BatchNormalization()(convw10)
pred_w = Conv2D(1, (1, 1), activation='sigmoid', padding='same', name='reconstructed_W',dilation_rate=(2,2))(BNed)
w_extraction=Model(inputs=[image,wtm],outputs=[decoded,pred_w])
w_extraction.summary()
(x_train, _), (x_test, _) = mnist.load_data()
x_validation=x_train[1:10000,:,:]
x_train=x_train[10001:60000,:,:]
#
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_validation = x_validation.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_validation = np.reshape(x_validation, (len(x_validation), 28, 28, 1))
#---------------------compile and train the model------------------------------
opt=SGD(momentum=0.99)
w_extraction.compile(optimizer='adam', loss={'decoder_output':'mse','reconstructed_W':'binary_crossentropy'}, loss_weights={'decoder_output': 0.2, 'reconstructed_W': 1.0},metrics=['mae'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=40)
#rlrp = ReduceLROnPlateau(monitor='val_loss', factor=0.1, patience=20, min_delta=1E-4, verbose=1)
mc = ModelCheckpoint('best_model_5x5F_dp_gn.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
history=w_extraction.fit([x_train,w_expand], [x_train,w_expand],
                         epochs=1,
                         batch_size=64,
                         validation_data=([x_validation,wv_expand], [x_validation,wv_expand]),
                         callbacks=[TensorBoard(log_dir='E:concatnatenetwork', histogram_freq=0, write_graph=False),es,mc])
When I run it, this error is shown:
Traceback (most recent call last):
  File "", line 1, in <module>
    encoded_merged = add_const([encoded, a])
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\engine\base_layer.py", line 457, in __call__
    output = self.call(inputs, **kwargs)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\keras\layers\core.py", line 687, in call
    return self.function(inputs, **arguments)
  File "", line 1, in <lambda>
    add_const = Kr.layers.Lambda(lambda x: x[0] + x[1])
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\math_ops.py", line 866, in binary_op_wrapper
    return func(x, y, name=name)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\ops\gen_math_ops.py", line 301, in add
    "Add", x=x, y=y, name=name)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\op_def_library.py", line 787, in _apply_op_helper
    op_def=op_def)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\util\deprecation.py", line 488, in new_func
    return func(*args, **kwargs)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 3274, in create_op
    op_def=op_def)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1792, in __init__
    control_input_ops)
  File "D:\software\Anaconda3\envs\py36\lib\site-packages\tensorflow\python\framework\ops.py", line 1631, in _create_c_op
    raise ValueError(str(e))
ValueError: Dimensions must be equal, but are 28 and 4 for 'lambda_9/add' (op: 'Add') with input shapes: [?,28,28,1], [4,28,1].
In Keras, every layer should be an instance of the Keras Layer class. The following line
a=rep(wtm[1,1])
selects tensor elements without using a Keras layer, and this line causes the error. You should change it to the following to fix the error:
a=rep(Kr.layers.Lambda(lambda x:x[1,1])(wtm))
Since wtm has shape [None, 4, 4, 1] (you can try printing the shape), wtm[1,1]
selects one element along the first dimension and then the first element of that selection. If you want to get the element at position [1, 1], what you can do is index the first three dimensions. You should also keep the batch dimension here, so what you want is an array with shape [batch_size, 1]. This can be done as follows.
new_wtm = Kr.layers.Lambda(lambda x:x[:,1,1,:])(wtm)
Now new_wtm.shape will be [None, 1], and calling the repeat method will produce an array of shape [None, rep, 1].
rep=Kr.layers.Lambda(lambda x:Kr.backend.repeat(x,28))
a=rep(Kr.layers.Lambda(lambda x:x[:,1,1,:])(wtm))
print(a.shape) # [None, 28, 1]
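The shape arithmetic of K.repeat can be checked in plain NumPy (a sketch only; the dummy batch size of 2 is an assumption for illustration):

```python
import numpy as np

# x stands for the [batch, 1] tensor produced by the Lambda slice x[:, 1, 1, :].
x = np.ones((2, 1), dtype=np.float32)  # batch size 2

# K.repeat(x, 28) turns a (batch, features) tensor into (batch, 28, features):
# (2, 1) -> (2, 1, 1) -> (2, 28, 1)
a = np.repeat(x[:, np.newaxis, :], 28, axis=1)
print(a.shape)  # (2, 28, 1)
```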
I hope this solves the problem.
To get a with shape [None, 28, 28, 1], you need to use the tile method.
rep=Kr.layers.Lambda(lambda x:Kr.backend.tile(x,[1, 28, 28, 1]))
a_1 = Kr.layers.Lambda(lambda x: x[:, 1, 1, :])(wtm)
a=rep(Kr.layers.Reshape([1,1,1])(a_1))
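K.tile with multiples [1, 28, 28, 1] leaves the batch axis alone and copies the reshaped (batch, 1, 1, 1) tensor across the two spatial axes; the same effect in NumPy (a sketch with an assumed batch size of 2 and an arbitrary fill value):

```python
import numpy as np

# a_1 after Reshape([1, 1, 1]): shape (batch, 1, 1, 1); batch size 2 for illustration.
a_1 = np.full((2, 1, 1, 1), 0.5, dtype=np.float32)

# K.tile(x, [1, 28, 28, 1]) corresponds to np.tile with the same multiples.
a = np.tile(a_1, (1, 28, 28, 1))
print(a.shape)         # (2, 28, 28, 1)
print(a[0, 13, 7, 0])  # 0.5 everywhere
```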
Testing code:
from keras.layers import Input, Concatenate, Activation,GaussianNoise,Dropout,BatchNormalization,MaxPool2D,AveragePooling2D
from keras.layers import Conv2D, AtrousConv2D
from keras.models import Model
from keras.datasets import mnist
from keras.callbacks import TensorBoard
from keras import backend as K
from keras import layers
import matplotlib.pyplot as plt
import tensorflow as tf
import keras as Kr
from keras.optimizers import SGD,RMSprop,Adam
from keras.callbacks import ReduceLROnPlateau
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
import numpy as np
import pylab as pl
import matplotlib.cm as cm
import keract
from matplotlib import pyplot
from keras import optimizers
from keras import regularizers
from tensorflow.python.keras.layers import Lambda;
#-----------------building w train---------------------------------------------
#wv_expand=np.zeros((9999,28,28),dtype='float32')
wt_random=np.random.randint(2, size=(49999,4,4))
w_expand=wt_random.astype(np.float32)
wv_random=np.random.randint(2, size=(9999,4,4))
wv_expand=wv_random.astype(np.float32)
#w_expand[:,:4,:4]=wt_random
#wv_expand[:,:4,:4]=wv_random
x,y,z=w_expand.shape
w_expand=w_expand.reshape((x,y,z,1))
x,y,z=wv_expand.shape
wv_expand=wv_expand.reshape((x,y,z,1))
#-----------------building w test---------------------------------------------
w_test = np.random.randint(2,size=(1,4,4))
w_test=w_test.astype(np.float32)
#wt_expand=np.zeros((1,28,28),dtype='float32')
#wt_expand[:,0:4,0:4]=w_test
w_test=w_test.reshape((1,4,4,1))
#wt_expand=wt_expand.reshape((1,28,28,1))
#-----------------------encoder------------------------------------------------
#------------------------------------------------------------------------------
wtm=Input((4,4,1))
image = Input((28, 28, 1))
conv1 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl1e',dilation_rate=(2,2))(image)
conv2 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl2e',dilation_rate=(2,2))(conv1)
conv3 = Conv2D(64, (5, 5), activation='relu', padding='same', name='convl3e',dilation_rate=(2,2))(conv2)
#conv3 = Conv2D(8, (3, 3), activation='relu', padding='same', name='convl3e', kernel_initializer='Orthogonal',bias_initializer='glorot_uniform')(conv2)
BN=BatchNormalization()(conv3)
#DrO1=Dropout(0.25,name='Dro1')(BN)
encoded = Conv2D(1, (5, 5), activation='relu', padding='same',name='encoded_I',dilation_rate=(2,2))(BN)
#-----------------------adding w---------------------------------------
#-----------------------decoder------------------------------------------------
#------------------------------------------------------------------------------
rep0=Kr.layers.Lambda(lambda x:Kr.backend.tile(x,[1, 28, 28, 1]),name='aux0')
a_0 = Kr.layers.Lambda(lambda x: x[:, 0, 0, :])(wtm)
a0=rep0(Kr.layers.Reshape([1,1,1])(a_0))
rep1=Kr.layers.Lambda(lambda x:Kr.backend.tile(x,[1, 28, 28, 1]),name='aux1')
a_1 = Kr.layers.Lambda(lambda x: x[:, 0, 1, :])(wtm)
a1=rep1(Kr.layers.Reshape([1,1,1])(a_1))
rep2=Kr.layers.Lambda(lambda x:Kr.backend.tile(x,[1, 28, 28, 1]),name='aux2')
a_2 = Kr.layers.Lambda(lambda x: x[:, 0, 2, :])(wtm)
a2=rep2(Kr.layers.Reshape([1,1,1])(a_2))
rep3=Kr.layers.Lambda(lambda x:Kr.backend.tile(x,[1, 28, 28, 1]),name='aux3')
a_3 = Kr.layers.Lambda(lambda x: x[:, 0, 3, :])(wtm)
a3=rep3(Kr.layers.Reshape([1,1,1])(a_3))
add_const1 = Kr.layers.Lambda(lambda x: x[0] + x[1]+x[2]+x[3]+x[4],name='decoder_output')
encoded_merged = add_const1([encoded,a0,a1,a2,a3])
w=Model(inputs=[image,wtm],outputs=encoded_merged)
w.summary()
#----------------------training the model--------------------------------------
#------------------------------------------------------------------------------
#----------------------Data preparesion----------------------------------------
(x_train, _), (x_test, _) = mnist.load_data()
x_validation=x_train[1:10000,:,:]
x_train=x_train[10001:60000,:,:]
#
x_train = x_train.astype('float32') / 255.
x_test = x_test.astype('float32') / 255.
x_validation = x_validation.astype('float32') / 255.
x_train = np.reshape(x_train, (len(x_train), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_test = np.reshape(x_test, (len(x_test), 28, 28, 1)) # adapt this if using `channels_first` image data format
x_validation = np.reshape(x_validation, (len(x_validation), 28, 28, 1))
#---------------------compile and train the model------------------------------
opt=SGD(lr=0.0001,momentum=0.9)
w.compile(optimizer='adam', loss={'decoder_output':'mse'}, metrics=['mae'])
es = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=40)
mc = ModelCheckpoint('los4x4_repw.h5', monitor='val_loss', mode='min', verbose=1, save_best_only=True)
history=w.fit([x_train,w_expand], x_train,
              epochs=1,
              batch_size=32,
              validation_data=([x_validation,wv_expand], x_validation))
w.summary()
layer_name = 'lambda_96'
intermediate_layer_model = Model(inputs=w.input,outputs=w.get_layer(layer_name).output)
intermediate_output = intermediate_layer_model.predict([x_test[8000:8001],w_test])
fig = plt.figure(figsize=(20, 20))
rows = 8
columns = 8
first = intermediate_output
for i in range(1, columns*rows +1):
img = intermediate_output[0,:,:,i-1]
fig.add_subplot(rows, columns, i)
plt.imshow(img, interpolation='nearest',cmap='gray')
plt.axis('off')
plt.show()