Copyright notice: this is an original article by the author; please credit the source when reposting: https://blog.csdn.net/ling620/article/details/97789853
A previous article, "BERT之提取特征向量 及 bert-as-server的使用", introduced how to use BERT's extract_features.py to extract feature vectors; this article analyzes its source code in more detail.
The code lives at: bert/extract_features.py
This article mainly covers two parts: the script's parameters, and a step-by-step analysis of the source code.
Required parameters:
- input_file: path to the input data
- vocab_file: path to the vocabulary file
- bert_config_file: path to the model configuration file
- init_checkpoint: path to the model checkpoint
- output_file: path to the output file
if __name__ == "__main__":
flags.mark_flag_as_required("input_file")
flags.mark_flag_as_required("vocab_file")
flags.mark_flag_as_required("bert_config_file")
flags.mark_flag_as_required("init_checkpoint")
flags.mark_flag_as_required("output_file")
tf.app.run()
Other parameters, defined at the top of the file:
- layers: indices of the layers to extract; the default [-1, -2, -3, -4] selects the last, second-to-last, third-to-last, and fourth-to-last encoder layers
- max_seq_length: maximum total input sequence length; longer sequences are truncated, shorter ones are zero-padded
- batch_size: batch size for prediction
- use_tpu: whether to use a TPU
- use_one_hot_embeddings: whether to use one-hot embedding lookups (tf.one_hot)
flags.DEFINE_string("layers", "-1,-2,-3,-4", "")
flags.DEFINE_integer(
"max_seq_length", 128,
"The maximum total input sequence length after WordPiece tokenization. "
"Sequences longer than this will be truncated, and sequences shorter "
"than this will be padded.")
flags.DEFINE_bool(
"do_lower_case", True,
"Whether to lower case the input text. Should be True for uncased "
"models and False for cased models.")
flags.DEFINE_integer("batch_size", 32, "Batch size for predictions.")
flags.DEFINE_bool("use_tpu", False, "Whether to use TPU or GPU/CPU.")
flags.DEFINE_bool(
"use_one_hot_embeddings", False,
"If True, tf.one_hot will be used for embedding lookups, otherwise "
"tf.nn.embedding_lookup will be used. On TPUs, this should be True "
"since it is much faster.")
The main function proceeds in the following steps:
Read the configuration file and build a BertConfig:
bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
Create the tokenizer object.
tokenization.py handles the processing of input sentences and contains two main classes: BasicTokenizer and FullTokenizer.
BasicTokenizer splits each CJK character into its own token while keeping English words and digit runs intact, for example:
query: 'Jack,请回答1988, UNwant\u00E9d,running'
token: ['jack', ',', '请', '回', '答', '1988', ',', 'unwanted', ',', 'running']
FullTokenizer additionally runs WordPiece (greedy longest-match-first) segmentation on top of BasicTokenizer, which may split an English word into sub-words, e.g. running becomes runn and ##ing; this mainly affects English text. For example:
query: 'UNwant\u00E9d,running'
token: ["un", "##want", "##ed", ",", "runn", "##ing"]
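This behaviour is easy to reproduce directly; a minimal sketch, assuming the BERT repo's tokenization.py is on the Python path and vocab.txt is the vocabulary file from a downloaded checkpoint (both paths are assumptions):
import tokenization

tokenizer = tokenization.FullTokenizer(vocab_file="vocab.txt", do_lower_case=True)
tokens = tokenizer.tokenize(u"UNwant\u00E9d,running")
print(tokens)  # ["un", "##want", "##ed", ",", "runn", "##ing"] with the uncased vocab
ids = tokenizer.convert_tokens_to_ids(tokens)  # the ids used later by convert_examples_to_features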
Build the RunConfig object, which is passed to the TPUEstimator:
run_config = tf.contrib.tpu.RunConfig()
Read the input file and parse it into a list of InputExample objects:
examples.append(InputExample(unique_id=unique_id, text_a=text_a, text_b=text_b))
Convert the examples into a list of InputFeatures objects:
features = convert_examples_to_features()
Build the model function, the Estimator object, and the input function input_fn; then run prediction, collect the results, and write them to a JSON file:
results = estimator.predict(input_fn, yield_single_examples=True)
The results are then read one by one:
for result in estimator.predict(input_fn, yield_single_examples=True):
for (i, token) in enumerate(feature.tokens):
all_layers = []
for (j, layer_index) in enumerate(layer_indexes):
layer_output = result["layer_output_%d" % j]
layers = collections.OrderedDict()
layers["index"] = layer_index
layers["values"] = [
round(float(x), 6) for x in layer_output[i:(i + 1)].flat
]
all_layers.append(layers)
This snippet keeps only the vectors for the tokenized input's actual length. If max_seq_length is set to 128 and the input sentence is 我爱你, the tokenized input is tokens = ["[CLS]", '我', '爱', '你', "[SEP]"], so the effective length is 5 and the remaining 128 - 5 positions are zero padding. The code above extracts vectors only for those valid positions: layer_output has shape (128, 768), while the values collected across the 5 valid tokens amount to (5, 768) per layer. This was mentioned in the earlier article "BERT之提取特征向量 及 bert-as-server的使用".
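To make the shapes concrete, here is a minimal sketch of reading the resulting output back (one JSON object per line) and stacking one layer's token vectors into an array; the file name output.jsonl and the use of numpy are assumptions:
import json
import numpy as np

with open("output.jsonl", "r", encoding="utf-8") as f:
  record = json.loads(f.readline())  # first input sentence

# position 0 of "layers" is the first requested layer (-1 by default)
matrix = np.array([tok["layers"][0]["values"] for tok in record["features"]])
print(matrix.shape)  # e.g. (5, 768) for 我爱你 with BERT-base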
The steps above are described in detail in the following subsections.
Source code with comments:
def main(_):
tf.logging.set_verbosity(tf.logging.INFO)
layer_indexes = [int(x) for x in FLAGS.layers.split(",")]
  # Read the config file and build the BertConfig
bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
  # Create the tokenizer that splits the input sentences
tokenizer = tokenization.FullTokenizer(
vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
  # Build the RunConfig object passed to the TPUEstimator
is_per_host = tf.contrib.tpu.InputPipelineConfig.PER_HOST_V2
run_config = tf.contrib.tpu.RunConfig(
master=FLAGS.master,
tpu_config=tf.contrib.tpu.TPUConfig(
num_shards=FLAGS.num_tpu_cores,
per_host_input_for_training=is_per_host))
  # Read the input file into a list of InputExample objects
examples = read_examples(FLAGS.input_file)
  # Convert the examples into a list of InputFeatures objects
features = convert_examples_to_features(
examples=examples, seq_length=FLAGS.max_seq_length, tokenizer=tokenizer)
  # Build a mapping from unique_id to feature
unique_id_to_feature = {}
for feature in features:
unique_id_to_feature[feature.unique_id] = feature
  # Build the model function
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=FLAGS.init_checkpoint,
layer_indexes=layer_indexes,
use_tpu=FLAGS.use_tpu,
use_one_hot_embeddings=FLAGS.use_one_hot_embeddings)
# If TPU is not available, this will fall back to normal Estimator on CPU
# or GPU.
  # Build the estimator
estimator = tf.contrib.tpu.TPUEstimator(
use_tpu=FLAGS.use_tpu,
model_fn=model_fn,
config=run_config,
predict_batch_size=FLAGS.batch_size)
  # Build the input function
input_fn = input_fn_builder(
features=features, seq_length=FLAGS.max_seq_length)
with codecs.getwriter("utf-8")(tf.gfile.Open(FLAGS.output_file,"w")) as writer:
    # Run prediction and write the results to a JSON file
for result in estimator.predict(input_fn, yield_single_examples=True):
unique_id = int(result["unique_id"])
feature = unique_id_to_feature[unique_id]
output_json = collections.OrderedDict()
output_json["linex_index"] = unique_id
all_features = []
for (i, token) in enumerate(feature.tokens):
all_layers = []
for (j, layer_index) in enumerate(layer_indexes):
layer_output = result["layer_output_%d" % j]
layers = collections.OrderedDict()
layers["index"] = layer_index
layers["values"] = [
round(float(x), 6) for x in layer_output[i:(i + 1)].flat
]
all_layers.append(layers)
features = collections.OrderedDict()
features["token"] = token
features["layers"] = all_layers
all_features.append(features)
output_json["features"] = all_features
writer.write(json.dumps(output_json) + "\n")
This part should look familiar: it is the same basic data handling done after reading data for fine-tuning, namely turning the file contents into a list of InputExample objects.
Analysis:
- input_file is processed line by line. A single sentence is assigned to text_a; if a line holds two sentences separated by three pipes (e.g. '你好 ||| 中国人'), they are assigned to text_a and text_b respectively. (The blog's variant splits on a comma instead, text_a = line.split(',')[0] and text_b = line.split(',')[1]; see the note in the source below.)
- unique_id is an integer that starts at 0 and is incremented for each example.
- Each pair is wrapped in an InputExample and appended to the examples list, which is returned.
The InputExample class is defined as follows:
class InputExample(object):
def __init__(self, unique_id, text_a, text_b):
self.unique_id = unique_id
self.text_a = text_a
self.text_b = text_b
Source code with comments:
def read_examples(input_file):
"""Read a list of `InputExample`s from an input file."""
examples = []
unique_id = 0
with tf.gfile.GFile(input_file, "r") as reader:
while True:
line = tokenization.convert_to_unicode(reader.readline())
if not line:
break
line = line.strip()
text_a = None
text_b = None
m = re.match(r"^(.*) \|\|\| (.*)$", line)
if m is None:
text_a = line
else:
text_a = m.group(1)
text_b = m.group(2)
      # The blog's variant below splits on a comma instead of ' ||| ';
      # left enabled, it overrides the regex handling above and raises
      # IndexError on lines without a comma, so it is commented out here.
      # split_line = line.split(",")
      # text_a = split_line[0]
      # text_b = split_line[1]
examples.append(
InputExample(unique_id=unique_id, text_a=text_a, text_b=text_b))
unique_id += 1
return examples
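A hedged usage sketch of read_examples, with a hypothetical input.txt containing one sentence pair and one single sentence:
# input.txt (hypothetical):
#   你好 ||| 中国人
#   Jack,请回答1988
examples = read_examples("input.txt")
for ex in examples:
  print(ex.unique_id, ex.text_a, ex.text_b)
# 0 你好 中国人
# 1 Jack,请回答1988 None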
Calling code, passing in the examples, the maximum sequence length, and the tokenizer:
features = convert_examples_to_features(
examples=examples, seq_length=FLAGS.max_seq_length, tokenizer=tokenizer)
Purpose: convert the examples obtained in the previous step into features.
Main steps:
- For sentence pairs, truncate so the combined length does not exceed max_seq_length - 3 (accounting for the [CLS], [SEP], [SEP] special tokens), removing one token at a time from the end of the longer sentence (see the _truncate_seq_pair helper shown after the source listing below).
- tokens = [] holds the processed token sequence.
- input_type_ids = [] holds the segment label of each token (all 0 for sentence 1, all 1 for sentence 2).
- input_ids holds the ids of the tokens (looked up in vocab.txt).
- input_mask marks the real positions (1 where a real token exists, 0 for padding).
- All sequences are zero-padded up to max_seq_length.
- Everything is wrapped in an InputFeatures object and appended to the list.
The InputFeatures class is defined as follows:
class InputFeatures(object):
"""A single set of features of data."""
def __init__(self, unique_id, tokens, input_ids, input_mask, input_type_ids):
self.unique_id = unique_id
self.tokens = tokens
    self.input_ids = input_ids  # token ids looked up in the vocabulary
self.input_mask = input_mask
self.input_type_ids = input_type_ids
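As a hand-worked illustration of these fields (not executed; the ids below are made up, real values depend on vocab.txt), consider the pair 你好 ||| 中国 with seq_length = 10:
# tokens:          [CLS]  你   好  [SEP]  中   国  [SEP]  (3 padding slots)
# input_type_ids:    0    0    0    0     1    1    1     0    0    0
# input_mask:        1    1    1    1     1    1    1     0    0    0
# input_ids:        101  765  791  102   704  702  102    0    0    0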
Source code with comments:
def convert_examples_to_features(examples, seq_length, tokenizer):
"""Loads a data file into a list of `InputBatch`s."""
features = []
for (ex_index, example) in enumerate(examples):
tokens_a = tokenizer.tokenize(example.text_a)
tokens_b = None
    # Tokenize text_b if it is present
if example.text_b:
tokens_b = tokenizer.tokenize(example.text_b)
if tokens_b:
# Modifies `tokens_a` and `tokens_b` in place so that the total
# length is less than the specified length.
# Account for [CLS], [SEP], [SEP] with "- 3"
      # A sentence pair needs three special tokens: [CLS], a middle [SEP], and a final [SEP]
_truncate_seq_pair(tokens_a, tokens_b, seq_length - 3)
else:
# Account for [CLS] and [SEP] with "- 2"
      # A single sentence only needs [CLS] and [SEP], hence -2
if len(tokens_a) > seq_length - 2:
tokens_a = tokens_a[0:(seq_length - 2)]
# The convention in BERT is:
# (a) For sequence pairs:
# tokens: [CLS] is this jack ##son ##ville ? [SEP] no it is not . [SEP]
# type_ids: 0 0 0 0 0 0 0 0 1 1 1 1 1 1
# (b) For single sequences:
# tokens: [CLS] the dog is hairy . [SEP]
# type_ids: 0 0 0 0 0 0 0
tokens = []
input_type_ids = []
tokens.append("[CLS]")
input_type_ids.append(0)
for token in tokens_a:
tokens.append(token)
input_type_ids.append(0)
tokens.append("[SEP]")
input_type_ids.append(0)
if tokens_b:
for token in tokens_b:
tokens.append(token)
input_type_ids.append(1)
tokens.append("[SEP]")
input_type_ids.append(1)
input_ids = tokenizer.convert_tokens_to_ids(tokens)
# The mask has 1 for real tokens and 0 for padding tokens. Only real
    # tokens are attended to.
input_mask = [1] * len(input_ids)
    # Zero-pad up to the sequence length.
while len(input_ids) < seq_length:
input_ids.append(0)
input_mask.append(0)
input_type_ids.append(0)
assert len(input_ids) == seq_length
assert len(input_mask) == seq_length
assert len(input_type_ids) == seq_length
    # Log the processing result of the first 5 examples
if ex_index < 5:
tf.logging.info("*** Example ***")
tf.logging.info("unique_id: %s" % (example.unique_id))
tf.logging.info("tokens: %s" % " ".join(
[tokenization.printable_text(x) for x in tokens]))
tf.logging.info("input_ids: %s" %
" ".join([str(x) for x in input_ids]))
tf.logging.info("input_mask: %s" %
" ".join([str(x) for x in input_mask]))
tf.logging.info(
"input_type_ids: %s" % " ".join([str(x) for x in input_type_ids]))
    # Wrap everything into an InputFeatures object
features.append(
InputFeatures(
unique_id=example.unique_id,
tokens=tokens,
input_ids=input_ids,
input_mask=input_mask,
input_type_ids=input_type_ids))
return features
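For completeness, the _truncate_seq_pair helper referenced above is also defined in extract_features.py; it pops one token at a time from the end of the longer sequence:
def _truncate_seq_pair(tokens_a, tokens_b, max_length):
  """Truncates a sequence pair in place to the maximum length."""
  while True:
    total_length = len(tokens_a) + len(tokens_b)
    if total_length <= max_length:
      break
    if len(tokens_a) > len(tokens_b):
      tokens_a.pop()
    else:
      tokens_b.pop()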
Calling code:
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=FLAGS.init_checkpoint,
layer_indexes=layer_indexes,
use_tpu=FLAGS.use_tpu,
use_one_hot_embeddings=FLAGS.use_one_hot_embeddings)
model_fn_builder uses a closure: it returns the inner function model_fn as its result, i.e. a function is the return value.
Note: calling model_fn_builder() does not execute the model-building code immediately; model_fn only runs later, when the Estimator framework invokes it.
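A toy illustration of this closure pattern (not from the BERT code): the builder merely captures its arguments, and the returned function only runs when it is eventually called:
def adder_builder(k):
  def add(x):
    return x + k  # k is captured from the enclosing scope
  return add

add_three = adder_builder(3)  # nothing computed yet beyond capturing k
print(add_three(4))           # 7, evaluated only at call time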
The main contents of model_fn:
- Build the BERT model from the bert_config file.
- Take the encoder layers selected by layer_indexes as the model's prediction output.
- Return an instance of the TPUEstimatorSpec class.
for (i, layer_index) in enumerate(layer_indexes):
predictions["layer_output_%d" % i] = all_layers[layer_index]
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
return output_spec
About EstimatorSpec: model_fn returns an instance of this class (here the TPU variant, tf.contrib.tpu.TPUEstimatorSpec), and that instance is what drives the Estimator. Its predictions field holds the model's prediction output, used in the inference stage.
Source code with comments:
def model_fn_builder(bert_config, init_checkpoint, layer_indexes, use_tpu,
use_one_hot_embeddings):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
unique_ids = features["unique_ids"]
input_ids = features["input_ids"]
input_mask = features["input_mask"]
input_type_ids = features["input_type_ids"]
    # Create the BERT model
model = modeling.BertModel(
config=bert_config,
is_training=False,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=input_type_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
if mode != tf.estimator.ModeKeys.PREDICT:
raise ValueError("Only PREDICT modes are supported: %s" % (mode))
    # Collect all trainable variables of the model
tvars = tf.trainable_variables()
scaffold_fn = None
    # Load the pre-trained BERT weights
(assignment_map,
initialized_variable_names) = modeling.get_assignment_map_from_checkpoint(
tvars, init_checkpoint)
if use_tpu:
def tpu_scaffold():
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
return tf.train.Scaffold()
scaffold_fn = tpu_scaffold
else:
      tf.train.init_from_checkpoint(init_checkpoint, assignment_map)  # initialize variables from the checkpoint
tf.logging.info("**** Trainable Variables ****")
    # Log the loaded variables
for var in tvars:
init_string = ""
if var.name in initialized_variable_names:
init_string = ", *INIT_FROM_CKPT*"
tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
init_string)
    all_layers = model.get_all_encoder_layers()  # fetch all encoder layers
predictions = {
"unique_id": unique_ids,
}
    # Expose the requested layers as predictions
for (i, layer_index) in enumerate(layer_indexes):
predictions["layer_output_%d" % i] = all_layers[layer_index]
output_spec = tf.contrib.tpu.TPUEstimatorSpec(
mode=mode, predictions=predictions, scaffold_fn=scaffold_fn)
return output_spec
return model_fn
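A hedged extension sketch: besides the per-token encoder layers, modeling.BertModel also exposes a pooled sentence-level vector via get_pooled_output(); if you want it in the output, one extra line inside model_fn, just before the predictions are wrapped into the TPUEstimatorSpec, would expose it:
    predictions["pooled_output"] = model.get_pooled_output()  # (batch, hidden_size) [CLS]-based sentence vector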
Calling code:
input_fn = input_fn_builder(
features=features, seq_length=FLAGS.max_seq_length)
Analysis:
input_fn_builder likewise uses a closure and returns the inner function input_fn, whose job is to yield batches of batch_size input examples. The in-memory tf.data.Dataset used here does not scale to large data sets; a TFRecordReader-based pipeline would be more efficient (a sketch follows the source listing below).
Source code with comments:
def input_fn_builder(features, seq_length):
"""Creates an `input_fn` closure to be passed to TPUEstimator."""
all_unique_ids = []
all_input_ids = []
all_input_mask = []
all_input_type_ids = []
for feature in features:
all_unique_ids.append(feature.unique_id)
all_input_ids.append(feature.input_ids)
all_input_mask.append(feature.input_mask)
all_input_type_ids.append(feature.input_type_ids)
def input_fn(params):
"""The actual input function."""
batch_size = params["batch_size"]
num_examples = len(features)
# This is for demo purposes and does NOT scale to large data sets. We do
# not use Dataset.from_generator() because that uses tf.py_func which is
# not TPU compatible. The right way to load data is with TFRecordReader.
d = tf.data.Dataset.from_tensor_slices({
"unique_ids":
tf.constant(all_unique_ids, shape=[
num_examples], dtype=tf.int32),
"input_ids":
tf.constant(
all_input_ids, shape=[num_examples, seq_length],
dtype=tf.int32),
"input_mask":
tf.constant(
all_input_mask,
shape=[num_examples, seq_length],
dtype=tf.int32),
"input_type_ids":
tf.constant(
all_input_type_ids,
shape=[num_examples, seq_length],
dtype=tf.int32),
})
d = d.batch(batch_size=batch_size, drop_remainder=False)
return d
return input_fn
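As noted above, the in-memory dataset does not scale; TFRecords stream from disk instead. A hedged sketch of a TFRecord-based replacement, assuming the four feature keys were serialized beforehand as int64 lists into a hypothetical features.tfrecord file (the builder name and file are assumptions, not part of the original script):
def tfrecord_input_fn_builder(input_file, seq_length):
  """Sketch of an `input_fn` that streams features from a TFRecord file."""
  name_to_features = {
      "unique_ids": tf.FixedLenFeature([], tf.int64),
      "input_ids": tf.FixedLenFeature([seq_length], tf.int64),
      "input_mask": tf.FixedLenFeature([seq_length], tf.int64),
      "input_type_ids": tf.FixedLenFeature([seq_length], tf.int64),
  }

  def _decode(record):
    example = tf.parse_single_example(record, name_to_features)
    # tf.train.Example only stores int64; cast back to int32 for the model
    for name in list(example.keys()):
      example[name] = tf.to_int32(example[name])
    return example

  def input_fn(params):
    batch_size = params["batch_size"]
    d = tf.data.TFRecordDataset(input_file)
    d = d.map(_decode)
    return d.batch(batch_size=batch_size, drop_remainder=False)

  return input_fn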
The open-source BERT code uses tf.contrib.tpu.TPUEstimator. The TPU estimator can also run on GPU and CPU, but if you are not using a TPU and want things to run more cleanly on GPU, you can replace it with tf.estimator.Estimator. The tf.contrib.tpu.TPUEstimatorSpec inside model_fn then has to become a tf.estimator.EstimatorSpec, and a few parameters need adjusting; the concrete changes, mainly in model_fn_builder and main, are shown below.
Note: the batch_size must be passed through the params argument of tf.estimator.Estimator, otherwise input_fn's params["batch_size"] lookup raises an error:
estimator = tf.estimator.Estimator(
    model_fn=model_fn,
    config=run_config,
    params={"batch_size": FLAGS.batch_size})
The modified code is as follows:
def model_fn_builder(bert_config, init_checkpoint, layer_indexes,
use_one_hot_embeddings):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
unique_ids = features["unique_ids"]
input_ids = features["input_ids"]
input_mask = features["input_mask"]
input_type_ids = features["input_type_ids"]
model = modeling.BertModel(
config=bert_config,
is_training=False,
input_ids=input_ids,
input_mask=input_mask,
token_type_ids=input_type_ids,
use_one_hot_embeddings=use_one_hot_embeddings)
if mode != tf.estimator.ModeKeys.PREDICT:
raise ValueError("Only PREDICT modes are supported: %s" % (mode))
tvars = tf.trainable_variables()
(assignment_map,
initialized_variable_names) = modeling.get_assignment_map_from_checkpoint(
tvars, init_checkpoint)
tf.train.init_from_checkpoint(init_checkpoint, assignment_map)
tf.logging.info("**** Trainable Variables ****")
for var in tvars:
init_string = ""
if var.name in initialized_variable_names:
init_string = ", *INIT_FROM_CKPT*"
tf.logging.info(" name = %s, shape = %s%s", var.name, var.shape,
init_string)
all_layers = model.get_all_encoder_layers()
predictions = {
"unique_id": unique_ids,
}
for (i, layer_index) in enumerate(layer_indexes):
predictions["layer_output_%d" % i] = all_layers[layer_index]
output_spec = tf.estimator.EstimatorSpec(
mode=mode, predictions=predictions)
return output_spec
return model_fn
def main(_):
tf.logging.set_verbosity(tf.logging.INFO)
layer_indexes = [int(x) for x in FLAGS.layers.split(",")]
bert_config = modeling.BertConfig.from_json_file(FLAGS.bert_config_file)
run_config = tf.estimator.RunConfig()
  # Create and load the BERT model
model_fn = model_fn_builder(
bert_config=bert_config,
init_checkpoint=FLAGS.init_checkpoint,
layer_indexes=layer_indexes,
use_one_hot_embeddings=FLAGS.use_one_hot_embeddings)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params = {'batch_size':FLAGS.batch_size}
)
  # Read the input file
examples = read_examples(FLAGS.input_file)
# get tokenizer and features
tokenizer = tokenization.FullTokenizer(
vocab_file=FLAGS.vocab_file, do_lower_case=FLAGS.do_lower_case)
features = convert_examples_to_features(
examples=examples, seq_length=FLAGS.max_seq_length, tokenizer=tokenizer)
unique_id_to_feature = {}
for feature in features:
unique_id_to_feature[feature.unique_id] = feature
input_fn = input_fn_builder(
features=features, seq_length=FLAGS.max_seq_length)
  # Run prediction and save the results to output_file
with codecs.getwriter("utf-8")(tf.gfile.Open(FLAGS.output_file,
"w")) as writer:
for result in estimator.predict(input_fn, yield_single_examples=True):
unique_id = int(result["unique_id"])
feature = unique_id_to_feature[unique_id]
output_json = collections.OrderedDict()
output_json["linex_index"] = unique_id
all_features = []
for (i, token) in enumerate(feature.tokens):
all_layers = []
for (j, layer_index) in enumerate(layer_indexes):
layer_output = result["layer_output_%d" % j]
layers = collections.OrderedDict()
layers["index"] = layer_index
layers["values"] = [
round(float(x), 6) for x in layer_output[i:(i + 1)].flat
]
all_layers.append(layers)
features = collections.OrderedDict()
features["token"] = token
features["layers"] = all_layers
all_features.append(features)
output_json["features"] = all_features
writer.write(json.dumps(output_json) + "\n")
if __name__ == "__main__":
flags.mark_flag_as_required("input_file")
flags.mark_flag_as_required("vocab_file")
flags.mark_flag_as_required("bert_config_file")
flags.mark_flag_as_required("init_checkpoint")
flags.mark_flag_as_required("output_file")
tf.app.run()