Question:

How do I fix this "AttributeError: load_multiclass_scores"?

廖永长
2023-03-14
    I have a problem when I try to train the model (train.py).
    Input:
    python train.py --logtostderr --train_dir=training/ --pipeline_config_path=training/faster_rcnn_inception_v2_pets.config

    Code:

    import functools
    import json
    import os
    import tensorflow as tf
    import sys

    # Add the Object Detection API folders to the import path.
    sys.path.append(r"C:\Users\Gilbertchristian\Documents\Anaconda\Object_detection_api\models\research")
    sys.path.append(r"C:\Users\Gilbertchristian\Documents\Anaconda\Object_detection_api\models\research\Object_detection\utils")
    sys.path.append(r"C:\Users\Gilbertchristian\Documents\Anaconda\Object_detection_api\models\research\slim")
    sys.path.append(r"C:\Users\Gilbertchristian\Documents\Anaconda\Object_detection_api\models\research\slim\nets")

    from object_detection.builders import dataset_builder
    from object_detection.builders import graph_rewriter_builder
    from object_detection.builders import model_builder
    from object_detection.legacy import trainer
    from object_detection.utils import config_util

    tf.logging.set_verbosity(tf.logging.INFO)

    flags = tf.app.flags
    flags.DEFINE_string('master', '', 'Name of the TensorFlow master to use.')
    flags.DEFINE_integer('task', 0, 'task id')
    flags.DEFINE_integer('num_clones', 1, 'Number of clones to deploy per worker.')
    flags.DEFINE_boolean('clone_on_cpu', False,
                         'Force clones to be deployed on CPU.  Note that even if '
                         'set to False (allowing ops to run on gpu), some ops may '
                         'still be run on the CPU if they have no GPU kernel.')
    flags.DEFINE_integer('worker_replicas', 1, 'Number of worker+trainer '
                         'replicas.')
    flags.DEFINE_integer('ps_tasks', 0,
                         'Number of parameter server tasks. If None, does not use '
                         'a parameter server.')
    flags.DEFINE_string('train_dir', '',
                        'Directory to save the checkpoints and training summaries.')

    flags.DEFINE_string('pipeline_config_path', '',
                        'Path to a pipeline_pb2.TrainEvalPipelineConfig config '
                        'file. If provided, other configs are ignored')

    flags.DEFINE_string('train_config_path', '',
                        'Path to a train_pb2.TrainConfig config file.')
    flags.DEFINE_string('input_config_path', '',
                        'Path to an input_reader_pb2.InputReader config file.')
    flags.DEFINE_string('model_config_path', '',
                        'Path to a model_pb2.DetectionModel config file.')

    FLAGS = flags.FLAGS


    @tf.contrib.framework.deprecated(None, 'Use object_detection/model_main.py.')
    def main(_):
      assert FLAGS.train_dir, '`train_dir` is missing.'
      if FLAGS.task == 0: tf.gfile.MakeDirs(FLAGS.train_dir)
      if FLAGS.pipeline_config_path:
        configs = config_util.get_configs_from_pipeline_file(
            FLAGS.pipeline_config_path)
        if FLAGS.task == 0:
          tf.gfile.Copy(FLAGS.pipeline_config_path,
                        os.path.join(FLAGS.train_dir, 'pipeline.config'),
                        overwrite=True)
      else:
        configs = config_util.get_configs_from_multiple_files(
            model_config_path=FLAGS.model_config_path,
            train_config_path=FLAGS.train_config_path,
            train_input_config_path=FLAGS.input_config_path)
        if FLAGS.task == 0:
          for name, config in [('model.config', FLAGS.model_config_path),
                               ('train.config', FLAGS.train_config_path),
                               ('input.config', FLAGS.input_config_path)]:
            tf.gfile.Copy(config, os.path.join(FLAGS.train_dir, name),
                          overwrite=True)

      model_config = configs['model']
      train_config = configs['train_config']
      input_config = configs['train_input_config']

      model_fn = functools.partial(
          model_builder.build,
          model_config=model_config,
          is_training=True)

      def get_next(config):
        return dataset_builder.make_initializable_iterator(
            dataset_builder.build(config)).get_next()

      create_input_dict_fn = functools.partial(get_next, input_config)

      env = json.loads(os.environ.get('TF_CONFIG', '{}'))
      cluster_data = env.get('cluster', None)
      cluster = tf.train.ClusterSpec(cluster_data) if cluster_data else None
      task_data = env.get('task', None) or {'type': 'master', 'index': 0}
      task_info = type('TaskSpec', (object,), task_data)

      # Parameters for a single worker.
      ps_tasks = 0
      worker_replicas = 1
      worker_job_name = 'lonely_worker'
      task = 0
      is_chief = True
      master = ''

      if cluster_data and 'worker' in cluster_data:
        # Number of total worker replicas include "worker"s and the "master".
        worker_replicas = len(cluster_data['worker']) + 1
      if cluster_data and 'ps' in cluster_data:
        ps_tasks = len(cluster_data['ps'])
      if worker_replicas > 1 and ps_tasks < 1:
        raise ValueError('At least 1 ps task is needed for distributed training.')
      if worker_replicas >= 1 and ps_tasks > 0:
        # Set up distributed training.
        server = tf.train.Server(tf.train.ClusterSpec(cluster), protocol='grpc',
                                 job_name=task_info.type,
                                 task_index=task_info.index)
        if task_info.type == 'ps':
          server.join()
          return

        worker_job_name = '%s/task:%d' % (task_info.type, task_info.index)
        task = task_info.index
        is_chief = (task_info.type == 'master')
        master = server.target

      graph_rewriter_fn = None
      if 'graph_rewriter_config' in configs:
        graph_rewriter_fn = graph_rewriter_builder.build(
            configs['graph_rewriter_config'], is_training=True)

      trainer.train(
          create_input_dict_fn,
          model_fn,
          train_config,
          master,
          task,
          FLAGS.num_clones,
          worker_replicas,
          FLAGS.clone_on_cpu,
          ps_tasks,
          worker_job_name,
          is_chief,
          FLAGS.train_dir,
          graph_hook_fn=graph_rewriter_fn)


    if __name__ == '__main__':
      tf.app.run()

    Output:

      File "train.py", line 191, in <module>
        tf.app.run()
      File "C:\Users\Gilbertchristian\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\platform\app.py", line 125, in run
        _sys.exit(main(argv))
      File "C:\Users\Gilbertchristian\AppData\Local\Programs\Python\Python35\lib\site-packages\tensorflow\python\util\deprecation.py", line 324, in new_func
        return func(*args, **kwargs)
      File "train.py", line 187, in main
        graph_hook_fn=graph_rewriter_fn)
      File "C:\Users\Gilbertchristian\AppData\Local\Programs\Python\Python35\lib\site-packages\object_detection-0.1-py3.5.egg\object_detection\legacy\trainer.py", line 280, in train
        train_config.prefetch_queue_capacity, data_augmentation_options)
      File "C:\Users\Gilbertchristian\AppData\Local\Programs\Python\Python35\lib\site-packages\object_detection-0.1-py3.5.egg\object_detection\legacy\trainer.py", line 59, in create_input_queue
        tensor_dict = create_tensor_dict_fn()
      File "train.py", line 128, in get_next
        dataset_builder.build(config)).get_next()
      File "C:\Users\Gilbertchristian\AppData\Local\Programs\Python\Python35\lib\site-packages\object_detection-0.1-py3.5.egg\object_detection\builders\dataset_builder.py", line 120, in build
        load_multiclass_scores=input_reader_config.load_multiclass_scores,
    AttributeError: load_multiclass_scores

2 Answers

厍华清
2023-03-14

I had to run each proto file individually to get it to work. The generic *.proto glob did not work.

Note that since this is still a research folder, some of the .proto files will change over time, so check them name by name against your protos folder.

From the models/research folder:

protoc --python_out=. .\object_detection\protos\anchor_generator.proto .\object_detection\protos\argmax_matcher.proto .\object_detection\protos\bipartite_matcher.proto .\object_detection\protos\box_coder.proto .\object_detection\protos\box_predictor.proto .\object_detection\protos\eval.proto .\object_detection\protos\faster_rcnn.proto .\object_detection\protos\faster_rcnn_box_coder.proto .\object_detection\protos\grid_anchor_generator.proto .\object_detection\protos\hyperparams.proto .\object_detection\protos\image_resizer.proto .\object_detection\protos\input_reader.proto .\object_detection\protos\losses.proto .\object_detection\protos\matcher.proto .\object_detection\protos\mean_stddev_box_coder.proto .\object_detection\protos\model.proto .\object_detection\protos\optimizer.proto .\object_detection\protos\pipeline.proto .\object_detection\protos\post_processing.proto .\object_detection\protos\preprocessor.proto .\object_detection\protos\region_similarity_calculator.proto .\object_detection\protos\square_box_coder.proto .\object_detection\protos\ssd.proto .\object_detection\protos\ssd_anchor_generator.proto .\object_detection\protos\string_int_label_map.proto .\object_detection\protos\train.proto .\object_detection\protos\keypoint_box_coder.proto .\object_detection\protos\multiscale_anchor_generator.proto .\object_detection\protos\graph_rewriter.proto .\object_detection\protos\calibration.proto .\object_detection\protos\flexible_grid_anchor_generator.proto
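To check them name by name, a small sketch along these lines may help (a hypothetical helper, assuming it is run from the models/research folder): it lists the .proto files that still have no generated _pb2.py module next to them.

    # Hypothetical helper: report .proto files in object_detection/protos that
    # have no matching *_pb2.py module yet. Assumes the current directory is
    # models/research.
    import glob
    import os

    proto_dir = os.path.join("object_detection", "protos")
    proto_names = {os.path.splitext(os.path.basename(p))[0]
                   for p in glob.glob(os.path.join(proto_dir, "*.proto"))}
    generated_names = {os.path.basename(p)[:-len("_pb2.py")]
                       for p in glob.glob(os.path.join(proto_dir, "*_pb2.py"))}

    for name in sorted(proto_names - generated_names):
        # Each name printed here still needs to be passed to protoc.
        print(name + ".proto has not been compiled yet")
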
狄心水
2023-03-14

Does the generated file contain name='load_multiclass_scores'? If not, it may help to re-run ./bin/protoc object_detection/protos/*.proto --python_out=. (perhaps with a different protoc version).
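
For example, a minimal sanity check along these lines (a sketch, assuming the regenerated object_detection package is importable) shows whether the generated stub is up to date; an outdated stub raises the same AttributeError as in the question.

    # Minimal sanity check, assuming object_detection is importable after
    # re-running protoc: an up-to-date input_reader_pb2 exposes the new field.
    from object_detection.protos import input_reader_pb2

    reader = input_reader_pb2.InputReader()
    # Prints the field's default value (False) when the stub is current; an old
    # stub raises AttributeError: load_multiclass_scores instead.
    print(reader.load_multiclass_scores)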
