Question:

Keras functional CNN model error: graph disconnected at the main input layer

邓英卓
2023-03-14

I am building a functional Keras CNN model in R with a 1-D input layer.

When I run the keras_model function to build the model, I get the following error:

Error in py_call_impl(callable, dots$args, dots$keywords): ValueError: Graph disconnected: cannot obtain value for tensor Tensor("main_input_15:0", shape=(4201, 1024), dtype=float32) at layer "main_input". The following previous layers were accessed without issue: []

Detailed traceback: File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/legacy/interfaces.py", line 91, in wrapper return func(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/network.py", line 93, in __init__ self._init_graph_network(*args, **kwargs) File "/Library/Frameworks/Python.framework/Versions/3.7/lib/python3.7/site-packages/keras/engine/network.py", line 231, in _init_graph_network self.inputs, self.outputs) File "/Library/Frameworks/Python.framew

My code is attached below; any help would be much appreciated.

main_input = layer_input(shape = c(1024), batch_shape = c(4201, 1024), dtype = 'float32', name = 'main_input') %>%
  layer_reshape(target_shape = list(1024, 1), input_shape = c(1024), dtype = 'float32', batch_input_shape = c(4201, 1024), batch_size = 4201)

conv1 = layer_conv_1d(filters = 64, kernel_size = 10, strides = 5, dtype = 'float32', activation = 'relu' )
max1 = layer_max_pooling_1d(pool_size = 10)

first_conv = main_input %>% conv1 %>% max1

conv2 = layer_conv_1d(filters = 32, kernel_size = 5, strides = 3, dtype = 'float32', activation = 'relu' )
max2 = layer_max_pooling_1d(pool_size = 5)

second_conv = first_conv %>% conv2 %>% max2

conc1 = second_conv %>% layer_flatten()

semantic_input = layer_input(shape = c(2074), dtype = 'float32', batch_shape = c(4201, 2074), name = 'semantic_input') %>%
  layer_reshape(target_shape = list(2074, 1), input_shape = c(2074), dtype = 'float32')

conc2 = semantic_input %>% layer_flatten()

output = layer_concatenate(c(conc1, conc2)) %>%
  layer_dense(units = 100, activation = 'relu', use_bias = TRUE) %>%
  layer_dense(units = 50, activation = 'relu', use_bias = TRUE) %>%
  layer_dense(units = 25, activation = 'relu', use_bias = TRUE) %>%
  layer_dense(units = 10, activation = 'relu', use_bias = TRUE) %>%
  layer_dense(units = 1, activation = 'softmax', name = 'output')


cnn1_model = keras_model(
  inputs = c(main_input,semantic_input),
  outputs = c(output)
) 

The error above occurs on the last line of the code, when I try to build the model.

1 Answer

祝俊
2023-03-14

I figured it out, after being stuck for two days!

The two input layers should not be reshaped inline where they are defined. The reshape can happen as a separate next step; the input layers themselves must be declared independently, so that the tensors returned by layer_input() are what get passed to keras_model() as inputs.
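Why this matters: keras_model() walks the layer graph backwards from the outputs, and every parentless tensor it reaches must be one of the declared inputs. In the original code the main_input variable held the reshape output rather than the tensor created by layer_input(), so the true input tensor was an undeclared leaf. A toy sketch of that check in plain Python (an illustration of the idea, not Keras's actual implementation):

```python
class Tensor:
    """Minimal stand-in for a symbolic tensor in a layer graph."""
    def __init__(self, name, parents=()):
        self.name = name
        self.parents = tuple(parents)

def undeclared_leaves(outputs, declared_inputs):
    """Walk backwards from outputs; return parentless tensors not declared as inputs."""
    declared, seen, missing = set(declared_inputs), set(), []
    stack = list(outputs)
    while stack:
        t = stack.pop()
        if t in seen:
            continue
        seen.add(t)
        if not t.parents and t not in declared:
            missing.append(t)
        stack.extend(t.parents)
    return missing

raw = Tensor("main_input")            # what layer_input() actually creates
reshaped = Tensor("reshape", [raw])   # what the %>% pipeline returned
out = Tensor("output", [reshaped])

# Broken wiring: the reshape output was declared as the model input,
# so the real input tensor is an undeclared leaf -> "graph disconnected".
print([t.name for t in undeclared_leaves([out], [reshaped])])  # ['main_input']

# Fixed wiring: declare the raw layer_input() tensor itself.
print(undeclared_leaves([out], [raw]))  # []
```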

Here is the fixed code:

main_input = layer_input(shape = c(1024), batch_shape = c(4201,1024), dtype = 'float32', name = 'main_input') 

main_reshaped = main_input %>% layer_reshape(target_shape = list(1024, 1), input_shape = c(1024), dtype = 'float32', batch_input_shape = c(4201, 1024), batch_size = 4201)

conv1 = layer_conv_1d(filters = 64, kernel_size = 10, strides = 5, dtype = 'float32', activation = 'relu' )
max1 = layer_max_pooling_1d(pool_size = 10)

conv2 = layer_conv_1d(filters = 32, kernel_size = 5, strides = 3, dtype = 'float32', activation = 'relu' )
max2 = layer_max_pooling_1d(pool_size = 5)

conv = main_reshaped %>% conv1 %>% max1 %>% conv2 %>% max2 %>% layer_flatten()

semantic_input = layer_input(shape = c(2074), dtype = 'float32', batch_shape = c(4201,2074),  name = 'semantic_input')


sem_reshaped = semantic_input %>% layer_reshape(target_shape = list(2074, 1), input_shape = c(2074), dtype = 'float32')

conc = sem_reshaped %>% layer_flatten()

output = layer_concatenate(c(conv, conc)) %>%
  layer_dense(units = 100, activation = 'relu', use_bias = TRUE) %>%
  layer_dense(units = 50, activation = 'relu', use_bias = TRUE) %>%
  layer_dense(units = 25, activation = 'relu', use_bias = TRUE) %>%
  layer_dense(units = 10, activation = 'relu', use_bias = TRUE) %>%
  # note: softmax over a single unit always outputs 1; for a one-unit
  # binary output, 'sigmoid' is the usual choice
  layer_dense(units = 1, activation = 'softmax', name = 'output')

cnn1_model = keras_model(
  inputs = c(main_input,semantic_input),
  outputs = c(output)
)  

So the model looks like this:

summary(cnn1_model)

_______________________________________________________________________________________________________________________________________________________________________________
Layer (type)                                             Output Shape                           Param #              Connected to                                              
===============================================================================================================================================================================
main_input (InputLayer)                                  (4201, 1024)                           0                                                                              
_______________________________________________________________________________________________________________________________________________________________________________
reshape_25 (Reshape)                                     (4201, 1024, 1)                        0                    main_input[0][0]                                          
_______________________________________________________________________________________________________________________________________________________________________________
conv1d_65 (Conv1D)                                       (4201, 203, 64)                        704                  reshape_25[0][0]                                          
_______________________________________________________________________________________________________________________________________________________________________________
max_pooling1d_50 (MaxPooling1D)                          (4201, 20, 64)                         0                    conv1d_65[6][0]                                           
_______________________________________________________________________________________________________________________________________________________________________________
conv1d_66 (Conv1D)                                       (4201, 6, 32)                          10272                max_pooling1d_50[6][0]                                    
_______________________________________________________________________________________________________________________________________________________________________________
semantic_input (InputLayer)                              (4201, 2074)                           0                                                                              
_______________________________________________________________________________________________________________________________________________________________________________
max_pooling1d_51 (MaxPooling1D)                          (4201, 1, 32)                          0                    conv1d_66[5][0]                                           
_______________________________________________________________________________________________________________________________________________________________________________
reshape_26 (Reshape)                                     (4201, 2074, 1)                        0                    semantic_input[0][0]                                      
_______________________________________________________________________________________________________________________________________________________________________________
flatten_35 (Flatten)                                     (4201, 32)                             0                    max_pooling1d_51[5][0]                                    
_______________________________________________________________________________________________________________________________________________________________________________
flatten_36 (Flatten)                                     (4201, 2074)                           0                    reshape_26[0][0]                                          
_______________________________________________________________________________________________________________________________________________________________________________
concatenate_38 (Concatenate)                             (4201, 2106)                           0                    flatten_35[0][0]                                          
                                                                                                                     flatten_36[0][0]                                          
_______________________________________________________________________________________________________________________________________________________________________________
dense_77 (Dense)                                         (4201, 100)                            210700               concatenate_38[0][0]                                      
_______________________________________________________________________________________________________________________________________________________________________________
dense_78 (Dense)                                         (4201, 50)                             5050                 dense_77[0][0]                                            
_______________________________________________________________________________________________________________________________________________________________________________
dense_79 (Dense)                                         (4201, 25)                             1275                 dense_78[0][0]                                            
_______________________________________________________________________________________________________________________________________________________________________________
dense_80 (Dense)                                         (4201, 10)                             260                  dense_79[0][0]                                            
_______________________________________________________________________________________________________________________________________________________________________________
output (Dense)                                           (4201, 1)                              11                   dense_80[0][0]                                            
===============================================================================================================================================================================
Total params: 228,272
Trainable params: 228,272
Non-trainable params: 0
_______________________________________________________________________________________________________________________________________________________________________________
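The output shapes and parameter counts in the summary can be sanity-checked by hand. With Keras's default 'valid' padding, a Conv1D or MaxPooling1D layer maps a length-n input to floor((n - k) / s) + 1 steps, and a Conv1D layer has filters * kernel_size * in_channels weights plus one bias per filter. A quick check in plain Python, using the values from the summary above:

```python
def out_len(n, k, s):
    """Output length of a 'valid' Conv1D/MaxPooling1D over n steps."""
    return (n - k) // s + 1

def conv1d_params(filters, kernel_size, in_channels):
    """Weights plus one bias per filter."""
    return filters * kernel_size * in_channels + filters

n = out_len(1024, 10, 5)   # conv1d_65        -> 203
n = out_len(n, 10, 10)     # max_pooling1d_50 -> 20 (pool stride defaults to pool_size)
n = out_len(n, 5, 3)       # conv1d_66        -> 6
n = out_len(n, 5, 5)       # max_pooling1d_51 -> 1

print(n)                         # 1
print(conv1d_params(64, 10, 1))  # 704    (conv1d_65)
print(conv1d_params(32, 5, 64))  # 10272  (conv1d_66)
print(2106 * 100 + 100)          # 210700 (dense_77 on the 2106-wide concat)
```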
