
Making predictions with a TensorFlow graph from a Keras model

华展鹏
2023-03-14
Question

I have a model trained with Keras using TensorFlow as the backend, but now I need to turn the model into a TensorFlow graph for a particular application. I attempted to do this and made predictions to make sure it is working correctly, but when comparing to the results gathered from model.predict() I get very different values. For instance:

from keras.models import load_model
import tensorflow as tf
import numpy as np

model = load_model('model_file.h5')

x_placeholder = tf.placeholder(tf.float32, shape=(None,7214,1))
y = model(x_placeholder)

x = np.ones((1,7214,1))


with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    print("Predictions from:\ntf graph:      "+str(sess.run(y, feed_dict={x_placeholder:x})))
    print("keras predict: "+str(model.predict(x)))

which returns:

Predictions from:
tf graph:      [[-0.1015993   0.07432419  0.0592984 ]]
keras predict: [[ 0.39339241  0.57949686 -3.67846966]]

The values from the Keras predict() are correct, but the tf graph results are not.
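A likely cause is that running tf.global_variables_initializer() in a fresh session re-initializes the weights that load_model() has already loaded, so the graph predicts with random parameters. A minimal sketch, assuming the same model_file.h5, that reuses Keras's own backend session instead:

from keras.models import load_model
from keras import backend as K
import tensorflow as tf
import numpy as np

model = load_model('model_file.h5')

x_placeholder = tf.placeholder(tf.float32, shape=(None,7214,1))
y = model(x_placeholder)

x = np.ones((1,7214,1))

# reuse the session Keras already populated with the trained weights;
# do NOT run tf.global_variables_initializer() again, since that would
# overwrite the loaded weights with fresh random values
sess = K.get_session()
print("tf graph:      "+str(sess.run(y, feed_dict={x_placeholder:x})))
print("keras predict: "+str(model.predict(x)))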

In case it helps to know the final intended application: I am creating a jacobian matrix with the tf.gradients() function, but it currently does not return the correct results when compared with theano's jacobian function, which does give the correct jacobian. Here is my tensorflow jacobian code:

x = tf.placeholder(tf.float32, shape=(None,7214,1))
y = tf.reshape(model(x)[0],[-1])
y_list = tf.unstack(y)

jacobian_list = [tf.gradients(y_, x)[0] for y_ in y_list]
jacobian = tf.stack(jacobian_list)

Edit: model code

import numpy as np

from keras.models import Sequential
from keras.layers import Dense, InputLayer, Flatten
from keras.layers.convolutional import Conv1D
from keras.layers.convolutional import MaxPooling1D
from keras.optimizers import Adam
from keras.callbacks import EarlyStopping, ReduceLROnPlateau

# activation function used following every layer except for the output layers
activation = 'relu'

# model weight initializer
initializer = 'he_normal'

# shape of input data that is fed into the input layer
input_shape = (None,7214,1)

# number of filters used in the convolutional layers
num_filters = [4,16]

# length of the filters in the convolutional layers
filter_length = 8

# length of the maxpooling window 
pool_length = 4

# number of nodes in each of the hidden fully connected layers
num_hidden_nodes = [256,128]

# number of samples fed into model at once during training
batch_size = 64

# maximum number of iterations for model training
max_epochs = 30

# initial learning rate for optimization algorithm
lr = 0.0007

# exponential decay rate for the 1st moment estimates for optimization algorithm
beta_1 = 0.9

# exponential decay rate for the 2nd moment estimates for optimization algorithm
beta_2 = 0.999

# a small constant for numerical stability for optimization algorithm
optimizer_epsilon = 1e-08

model = Sequential([

    InputLayer(batch_input_shape=input_shape),

    Conv1D(kernel_initializer=initializer, activation=activation, padding="same", filters=num_filters[0], kernel_size=filter_length),

    Conv1D(kernel_initializer=initializer, activation=activation, padding="same", filters=num_filters[1], kernel_size=filter_length),

    MaxPooling1D(pool_size=pool_length),

    Flatten(),

    Dense(units=num_hidden_nodes[0], kernel_initializer=initializer, activation=activation),

    Dense(units=num_hidden_nodes[1], kernel_initializer=initializer, activation=activation),

    Dense(units=3, activation="linear", input_dim=num_hidden_nodes[1]),
])

# compile model
loss_function = 'mean_squared_error'
early_stopping_min_delta = 0.0001
early_stopping_patience = 4
reduce_lr_factor = 0.5
reduce_lr_epsilon = 0.0009
reduce_lr_patience = 2
reduce_lr_min = 0.00008

optimizer = Adam(lr=lr, beta_1=beta_1, beta_2=beta_2, epsilon=optimizer_epsilon, decay=0.0)

early_stopping = EarlyStopping(monitor='val_loss', min_delta=early_stopping_min_delta,
                               patience=early_stopping_patience, verbose=2, mode='min')

reduce_lr = ReduceLROnPlateau(monitor='loss', factor=reduce_lr_factor, epsilon=reduce_lr_epsilon,
                              patience=reduce_lr_patience, min_lr=reduce_lr_min, mode='min', verbose=2)

model.compile(optimizer=optimizer, loss=loss_function)

model.fit(train_x, train_y, validation_data=(cv_x, cv_y),
      epochs=max_epochs, batch_size=batch_size, verbose=2,
      callbacks=[reduce_lr,early_stopping])

model.save('model_file.h5')

Answer:

@frankyjuang linked me here:

https://github.com/amir-abdi/keras_to_tensorflow

and combining that with code from

https://github.com/metaflow-ai/blog/blob/master/tf-freeze/load.py

https://github.com/tensorflow/tensorflow/issues/675

I found a solution that can both make predictions with the tf graph and create the jacobian function:

import tensorflow as tf
import numpy as np

# Create function to convert saved keras model to tensorflow graph
def convert_to_pb(weight_file,input_fld='',output_fld=''):

    import os
    import os.path as osp
    from tensorflow.python.framework import graph_util
    from tensorflow.python.framework import graph_io
    from keras.models import load_model
    from keras import backend as K


    # weight_file is a .h5 keras model file
    output_node_names_of_input_network = ["pred0"] 
    output_node_names_of_final_network = 'output_node'

    # change filename to a .pb tensorflow file
    output_graph_name = weight_file[:-2]+'pb'
    weight_file_path = osp.join(input_fld, weight_file)

    net_model = load_model(weight_file_path)

    num_output = len(output_node_names_of_input_network)
    pred = [None]*num_output
    pred_node_names = [None]*num_output

    for i in range(num_output):
        pred_node_names[i] = output_node_names_of_final_network+str(i)
        pred[i] = tf.identity(net_model.output[i], name=pred_node_names[i])

    sess = K.get_session()

    constant_graph = graph_util.convert_variables_to_constants(sess, sess.graph.as_graph_def(), pred_node_names)
    graph_io.write_graph(constant_graph, output_fld, output_graph_name, as_text=False)
    print('saved the constant graph (ready for inference) at: ', osp.join(output_fld, output_graph_name))

    return output_fld+output_graph_name

Call it:

tf_model_path = convert_to_pb('model_file.h5','/model_dir/','/model_dir/')

Create a function to load the tf model as a graph:

def load_graph(frozen_graph_filename):
    # We load the protobuf file from the disk and parse it to retrieve the 
    # unserialized graph_def
    with tf.gfile.GFile(frozen_graph_filename, "rb") as f:
        graph_def = tf.GraphDef()
        graph_def.ParseFromString(f.read())

    # Then, we can use again a convenient built-in function to import a graph_def into the 
    # current default Graph
    with tf.Graph().as_default() as graph:
        tf.import_graph_def(
            graph_def, 
            input_map=None, 
            return_elements=None, 
            name="prefix", 
            op_dict=None, 
            producer_op_list=None
        )

    input_name = graph.get_operations()[0].name+':0'
    output_name = graph.get_operations()[-1].name+':0'

    return graph, input_name, output_name
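Since load_graph() simply takes the first and last operations as the input and output, it may be worth sanity-checking the tensor names it picks up. A small sketch, assuming the frozen graph produced above:

graph, input_name, output_name = load_graph(tf_model_path)

# list a few op names from the start and end of the frozen graph to
# confirm the chosen input/output tensors are the expected ones
ops = graph.get_operations()
print([op.name for op in ops[:3]])
print([op.name for op in ops[-3:]])
print("input tensor:  "+input_name)
print("output tensor: "+output_name)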

Create a function to make model predictions using the tf graph:

def predict(model_path, input_data):
    # load tf graph
    tf_model,tf_input,tf_output = load_graph(model_path)

    # Create tensors for model input and output
    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output)

    # Number of model outputs
    num_outputs = y.shape.as_list()[0]
    predictions = np.zeros((input_data.shape[0],num_outputs))
    for i in range(input_data.shape[0]):        
        with tf.Session(graph=tf_model) as sess:
            y_out = sess.run(y, feed_dict={x: input_data[i:i+1]})
            predictions[i] = y_out

    return predictions

Make predictions:

tf_predictions = predict(tf_model_path,test_data)
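Note that predict() above opens a new tf.Session for every sample, which gets slow for large datasets. A sketch of a variant that keeps the per-sample loop (the frozen output node corresponds to a single sample) but opens the session only once:

def predict_single_session(model_path, input_data):
    # load tf graph once
    tf_model, tf_input, tf_output = load_graph(model_path)
    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output)

    num_outputs = y.shape.as_list()[0]
    predictions = np.zeros((input_data.shape[0], num_outputs))
    # open the session once and reuse it for every sample
    with tf.Session(graph=tf_model) as sess:
        for i in range(input_data.shape[0]):
            predictions[i] = sess.run(y, feed_dict={x: input_data[i:i+1]})
    return predictions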

Jacobian function:

def compute_jacobian(model_path,input_data):

    tf_model,tf_input,tf_output = load_graph(model_path)

    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output)
    y_list = tf.unstack(y)
    num_outputs = y.shape.as_list()[0]
    jacobian = np.zeros((num_outputs,input_data.shape[0],input_data.shape[1]))
    for i in range(input_data.shape[0]):
        with tf.Session(graph=tf_model) as sess:
            y_out = sess.run([tf.gradients(y_, x)[0] for y_ in y_list], feed_dict={x: input_data[i:i+1]})
            jac_temp = np.asarray(y_out)
        jacobian[:,i:i+1,:]=jac_temp[:,:,:,0]
    return jacobian

Compute the jacobian:

jacobians = compute_jacobian(tf_model_path,test_data)
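Similarly, compute_jacobian() rebuilds the tf.gradients ops on every pass through the loop, which keeps adding nodes to the graph. A sketch of a variant that builds the gradient ops once and reuses a single session, assuming the same frozen graph:

def compute_jacobian_fast(model_path, input_data):
    tf_model, tf_input, tf_output = load_graph(model_path)
    x = tf_model.get_tensor_by_name(tf_input)
    y = tf_model.get_tensor_by_name(tf_output)

    # build the gradient ops once, inside the frozen graph
    with tf_model.as_default():
        grad_ops = [tf.gradients(y_, x)[0] for y_ in tf.unstack(y)]

    num_outputs = y.shape.as_list()[0]
    jacobian = np.zeros((num_outputs, input_data.shape[0], input_data.shape[1]))
    # one session reused for all samples
    with tf.Session(graph=tf_model) as sess:
        for i in range(input_data.shape[0]):
            grads = np.asarray(sess.run(grad_ops, feed_dict={x: input_data[i:i+1]}))
            jacobian[:, i:i+1, :] = grads[:, :, :, 0]
    return jacobian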

