Question:

TensorFlow: feeding placeholders with an Estimator (model_fn)?

宗安宁
2023-03-14

I want to build an LSTM model, but I am getting the following error:

 
        InvalidArgumentError Traceback (most recent call last)
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
        964     try:
    --> 965       return fn(*args)
        966     except errors.OpError as e:
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run_fn(session, feed_dict, fetch_list, target_list, options, run_metadata)
        946                                  feed_dict, fetch_list, target_list,
    --> 947                                  status, run_metadata)
        948 
    /home/george/anaconda3/lib/python3.5/contextlib.py in __exit__(self, type, value, traceback)
         65             try:
    ---> 66                 next(self.gen)
         67             except StopIteration:
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/python/framework/errors.py in raise_exception_on_not_ok_status()
        449           compat.as_text(pywrap_tensorflow.TF_Message(status)),
    --> 450           pywrap_tensorflow.TF_GetCode(status))
        451   finally:
    InvalidArgumentError: You must feed a value for placeholder tensor 'input' with dtype float
         [[Node: input = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]
    During handling of the above exception, another exception occurred:
    InvalidArgumentError                      Traceback (most recent call last)
     in ()
          1 classificator.fit(X_train_TF, Y_train, monitors = [validation_monitor],
    ----> 2                   batch_size = batch_size, steps = training_steps)
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py in fit(self, x, y, input_fn, steps, batch_size, monitors, max_steps)
        217                              steps=steps,
        218                              monitors=monitors,
    --> 219                              max_steps=max_steps)
        220     logging.info('Loss for final step: %s.', loss)
        221     return self
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py in _train_model(self, input_fn, steps, feed_fn, init_op, init_feed_fn, init_fn, device_fn, monitors, log_every_steps, fail_on_nan_loss, max_steps)
        477       features, targets = input_fn()
        478       self._check_inputs(features, targets)
    --> 479       train_op, loss_op = self._get_train_ops(features, targets)
        480 
        481       # Add default monitors.
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py in _get_train_ops(self, features, targets)
        747       Tuple of train Operation and loss Tensor.
        748     """
    --> 749     _, loss, train_op = self._call_model_fn(features, targets, ModeKeys.TRAIN)
        750     return train_op, loss
        751 
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/estimators/estimator.py in _call_model_fn(self, features, targets, mode)
        731       else:
        732         return self._model_fn(features, targets, mode=mode)
    --> 733     return self._model_fn(features, targets)
        734 
        735   def _get_train_ops(self, features, targets):
    /home/george/ipython/project/lstm_model.py in model(X, y)
         61         output = lstm_layers(output[-1],dense_layers)
         62         prediction, loss = tflearn.run_n({"outputs": output, "last_states": layers}, n=1,
    ---> 63                                         feed_dict=None)
         64         train_operation = tflayers.optimize_loss(loss, tf.contrib.framework.get_global_step(), optimizer=optimizer,
         65                                                  learning_rate=learning_rate)
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/graph_actions.py in run_n(output_dict, feed_dict, restore_checkpoint_path, n)
        795       output_dict=output_dict,
        796       feed_dicts=itertools.repeat(feed_dict, n),
    --> 797       restore_checkpoint_path=restore_checkpoint_path)
        798 
        799 
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/graph_actions.py in run_feeds(*args, **kwargs)
        850 def run_feeds(*args, **kwargs):
        851   """See run_feeds_iter(). Returns a list instead of an iterator."""
    --> 852   return list(run_feeds_iter(*args, **kwargs))
        853 
        854 
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/contrib/learn/python/learn/graph_actions.py in run_feeds_iter(output_dict, feed_dicts, restore_checkpoint_path)
        841         threads = queue_runner.start_queue_runners(session, coord=coord)
        842         for f in feed_dicts:
    --> 843           yield session.run(output_dict, f)
        844       finally:
        845         coord.request_stop()
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py in run(self, fetches, feed_dict, options, run_metadata)
        708     try:
        709       result = self._run(None, fetches, feed_dict, options_ptr,
    --> 710                          run_metadata_ptr)
        711       if run_metadata:
        712         proto_data = tf_session.TF_GetBuffer(run_metadata_ptr)
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py in _run(self, handle, fetches, feed_dict, options, run_metadata)
        906     if final_fetches or final_targets:
        907       results = self._do_run(handle, final_targets, final_fetches,
    --> 908                              feed_dict_string, options, run_metadata)
        909     else:
        910       results = []
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_run(self, handle, target_list, fetch_list, feed_dict, options, run_metadata)
        956     if handle is None:
        957       return self._do_call(_run_fn, self._session, feed_dict, fetch_list,
    --> 958                            target_list, options, run_metadata)
        959     else:
        960       return self._do_call(_prun_fn, self._session, handle, feed_dict,
    /home/george/anaconda3/lib/python3.5/site-packages/tensorflow/python/client/session.py in _do_call(self, fn, *args)
        976         except KeyError:
        977           pass
    --> 978       raise type(e)(node_def, op, message)
        979 
        980   def _extend_graph(self):

InvalidArgumentError: You must feed a value for placeholder tensor 'input' with dtype float
     [[Node: input = Placeholder[dtype=DT_FLOAT, shape=[], _device="/job:localhost/replica:0/task:0/cpu:0"]()]]


    classificator = tf.contrib.learn.Estimator(model_fn=lstm_model(timesteps,rnn_layers,dense_layers))
    validation_monitor = tf.contrib.learn.monitors.ValidationMonitor(X_train_TF, Y_train, 
                                                        every_n_steps=1000, early_stopping_rounds = 1000)
    classificator.fit(X_train_TF, Y_train, monitors = [validation_monitor],
                      batch_size = batch_size, steps = training_steps)
 

def lstm_model(num_units, rnn_layers, dense_layers=None, learning_rate=0.1, optimizer='Adagrad'):
    def lstm_cells(layers):
        if isinstance(layers[0], dict):
            return [tf.nn.rnn_cell.DropoutWrapper(tf.nn.rnn_cell.LSTMCell(layer['num_units'],
                                                                          state_is_tuple=True),
                                                  layer['keep_prob'])
                    if layer.get('keep_prob') else tf.nn.rnn_cell.LSTMCell(layer['num_units'],
                                                                           state_is_tuple=True)
                    for layer in layers]
        return [tf.nn.rnn_cell.LSTMCell(steps, state_is_tuple=True) for steps in layers]

    def lstm_layers(input_layers, layers):
        if layers and isinstance(layers, dict):
            return tflayers.stack(input_layers, tflayers.fully_connected,
                                  layers['layers'])  # check later
        elif layers:
            return tflayers.stack(input_layers, tflayers.fully_connected, layers)
        else:
            return input_layers

    def model(X, y):
        stacked_lstm = tf.nn.rnn_cell.MultiRNNCell(lstm_cells(rnn_layers), state_is_tuple=True)
        output, layers = tf.nn.dynamic_rnn(cell=stacked_lstm, inputs=X, dtype=dtypes.float32,)
        output = lstm_layers(output[-1], dense_layers)
        prediction, loss = tflearn.run_n({"outputs": output, "last_states": layers}, n=1,
                                        feed_dict=None)
        train_operation = tflayers.optimize_loss(loss, tf.contrib.framework.get_global_step(), optimizer=optimizer,
                                                 learning_rate=learning_rate)
        return prediction, loss, train_operation

    return model

1 answer

逑何平
2023-03-14

The fit method has been modified to support an input function (input_fn) as its argument, instead of the training data and its labels. Please take a look at this example.
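The linked example is not reproduced here, but a minimal sketch of the input_fn-based call it describes might look like the following. It assumes `X_train_TF` is a float32 array of shape `[num_examples, timesteps, num_features]`, `Y_train` holds the labels, and `train_input_fn` is an illustrative name, not something from the original post.

    import tensorflow as tf

    # Sketch only: train_input_fn is a hypothetical name; X_train_TF / Y_train
    # are assumed to be NumPy arrays already defined in the question's script.
    def train_input_fn():
        # Build the inputs inside the graph owned by the Estimator, so nothing
        # has to be fed into an external placeholder at run time.
        features = tf.constant(X_train_TF, dtype=tf.float32)
        targets = tf.constant(Y_train)
        return features, targets

    classificator = tf.contrib.learn.Estimator(
        model_fn=lstm_model(timesteps, rnn_layers, dense_layers))

    # Pass the input function instead of x/y arrays.
    classificator.fit(input_fn=train_input_fn, steps=training_steps)

With input_fn, the Estimator builds and feeds the input tensors itself, which is why the x, y, and batch_size arguments are no longer passed to fit.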
