Question:

Apache Beam on Dataflow: "'generator' object is not subscriptable" error

羊舌炯

2023-03-14

I am trying to create my first pipeline in Dataflow. The same code runs when I execute it with the interactive Beam runner, but on Dataflow I get all sorts of errors that make little sense to me. Here is a sample of the events my pipeline reads:

{"timestamp":1589992571906,"lastPageVisited":"https://kickassdataprojects.com/simple-and-complete-tutorial-on-simple-linear-regression/","pageUrl":"https://kickassdataprojects.com/","pageTitle":"Helping%20companies%20and%20developers%20create%20awesome%20data%20projects%20%7C%20Data%20Engineering/%20Data%20Science%20Blog","eventType":"Pageview","landingPage":0,"referrer":"direct","uiud":"31af5f22-4cc4-48e0-9478-49787dd5a19f","sessionId":322371}

Here is the code for my pipeline:

from __future__ import absolute_import
import apache_beam as beam
#from apache_beam.runners.interactive import interactive_runner
#import apache_beam.runners.interactive.interactive_beam as ib
import google.auth
from datetime import timedelta
import json
from datetime import datetime
from apache_beam import window
from apache_beam.transforms.trigger import AfterWatermark, AfterProcessingTime, AccumulationMode, AfterCount
from apache_beam.options.pipeline_options import GoogleCloudOptions
from apache_beam.options.pipeline_options import PipelineOptions
from apache_beam.options.pipeline_options import SetupOptions
from apache_beam.options.pipeline_options import StandardOptions
import argparse
import logging
from time import mktime

def setTimestamp(elem):
     from apache_beam import window
     yield window.TimestampedValue(elem, elem['timestamp'])

def createTuples(elem):
     yield (elem["sessionId"], elem)

class WriteToBigQuery(beam.PTransform):
  """Generate, format, and write BigQuery table row information."""
  def __init__(self, table_name, dataset, schema, project):
    """Initializes the transform.
    Args:
      table_name: Name of the BigQuery table to use.
      dataset: Name of the dataset to use.
      schema: Dictionary in the format {'column_name': 'bigquery_type'}
      project: Name of the Cloud project containing BigQuery table.
    """
    # TODO(BEAM-6158): Revert the workaround once we can pickle super() on py3.
    #super(WriteToBigQuery, self).__init__()
    beam.PTransform.__init__(self)
    self.table_name = table_name
    self.dataset = dataset
    self.schema = schema
    self.project = project

  def get_schema(self):
    """Build the output table schema."""
    return ', '.join('%s:%s' % (col, self.schema[col]) for col in self.schema)

  def expand(self, pcoll):
    return (
        pcoll
        | 'ConvertToRow' >>
        beam.Map(lambda elem: {col: elem[col]
                               for col in self.schema})
        | beam.io.WriteToBigQuery(
            self.table_name, self.dataset, self.project, self.get_schema()))


class ParseSessionEventFn(beam.DoFn):
  """Parses the raw game event info into a Python dictionary.
  Each event line has the following format:
    username,teamname,score,timestamp_in_ms,readable_time
  e.g.:
    user2_AsparagusPig,AsparagusPig,10,1445230923951,2015-11-02 09:09:28.224
  The human-readable time string is not used here.
  """
  def __init__(self):
    # TODO(BEAM-6158): Revert the workaround once we can pickle super() on py3.
    #super(ParseSessionEventFn, self).__init__()
    beam.DoFn.__init__(self)

  def process(self, elem):
          #timestamp = mktime(datetime.strptime(elem["timestamp"], "%Y-%m-%d %H:%M:%S").utctimetuple())
          elem['sessionId'] = int(elem['sessionId'])
          elem['landingPage'] = int(elem['landingPage'])
          yield elem

class AnalyzeSessions(beam.DoFn):
  """Parses the raw game event info into a Python dictionary.
  Each event line has the following format:
    username,teamname,score,timestamp_in_ms,readable_time
  e.g.:
    user2_AsparagusPig,AsparagusPig,10,1445230923951,2015-11-02 09:09:28.224
  The human-readable time string is not used here.
  """
  def __init__(self):
    # TODO(BEAM-6158): Revert the workaround once we can pickle super() on py3.
    #super(AnalyzeSessions, self).__init__()
    beam.DoFn.__init__(self)

  def process(self, elem, window=beam.DoFn.WindowParam):
          sessionId = elem[0]
          uiud = elem[1][0]["uiud"]
          count_of_events = 0
          pageUrl = []
          window_end = window.end.to_utc_datetime()
          window_start = window.start.to_utc_datetime()
          session_duration = window_end - window_start
          for rows in elem[1]:
             if rows["landingPage"] == 1:
                    referrer = rows["refererr"]
             pageUrl.append(rows["pageUrl"])       

          return {
             "pageUrl":pageUrl,
             "eventType":"pageview",
             "uiud":uiud,
             "sessionId":sessionId,
             "session_duration": session_duration,
              "window_start" : window_start
               }

def run(argv=None, save_main_session=True):
    parser = argparse.ArgumentParser()
    parser.add_argument('--topic', type=str, help='Pub/Sub topic to read from')
    parser.add_argument(
          '--subscription', type=str, help='Pub/Sub subscription to read from')
    parser.add_argument(
          '--dataset',
          type=str,
          required=True,
          help='BigQuery Dataset to write tables to. '
          'Must already exist.')
    parser.add_argument(
          '--table_name',
          type=str,
          default='game_stats',
          help='The BigQuery table name. Should not already exist.')
    parser.add_argument(
          '--fixed_window_duration',
          type=int,
          default=60,
          help='Numeric value of fixed window duration for user '
          'analysis, in minutes')
    parser.add_argument(
          '--session_gap',
          type=int,
          default=5,
          help='Numeric value of gap between user sessions, '
          'in minutes')
    parser.add_argument(
          '--user_activity_window_duration',
          type=int,
          default=30,
          help='Numeric value of fixed window for finding mean of '
          'user session duration, in minutes')
    args, pipeline_args = parser.parse_known_args(argv)
    session_gap = args.session_gap * 60
    options = PipelineOptions(pipeline_args)
    # Set the pipeline mode to stream the data from Pub/Sub.
    options.view_as(StandardOptions).streaming = True

    options.view_as( StandardOptions).runner= 'DataflowRunner'
    options.view_as(SetupOptions).save_main_session = save_main_session
    p = beam.Pipeline(options=options)
    lines = (p
                | beam.io.ReadFromPubSub(
              subscription="projects/phrasal-bond-274216/subscriptions/rrrr")
             | 'decode' >> beam.Map(lambda x: x.decode('utf-8'))
             | beam.Map(lambda x: json.loads(x))
             | beam.ParDo(ParseSessionEventFn())
             )

    next = ( lines
                | 'AddEventTimestamps' >> beam.Map(setTimestamp)
                | 'Create Tuples' >> beam.Map(createTuples)
                | beam.Map(print) 
                | 'Window' >> beam.WindowInto(window.Sessions(15))
                | 'group by key' >> beam.GroupByKey()          
                | 'analyze sessions' >> beam.ParDo(AnalyzeSessions())         
                | 'WriteTeamScoreSums' >> WriteToBigQuery(
                args.table_name,
               {

               "uiud":'STRING',
               "session_duration": 'INTEGER',
               "window_start" : 'TIMESTAMP'
                          },
                options.view_as(GoogleCloudOptions).project)
             )

    next1 = ( next
             | 'Create Tuples' >> beam.Map(createTuples)
             | beam.Map(print) 

             )

    result = p.run()
#    result.wait_till_termination()

if __name__ == '__main__':
  logging.getLogger().setLevel(logging.INFO)
  run()

When I try to create the tuples in my pipeline, I get the error below: 'generator' object is not subscriptable. I created the generator objects with yield; even using return doesn't work, it just stalls my pipeline.

apache_beam.coders.coder_impl.SequenceCoderImpl.get_estimated_size_and_observables File "sessiontest1.py", line 23, in createTuples TypeError: 'generator' object is not subscriptable [while running 'generatedPtransform-148']
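The error can be reproduced outside Beam. A function containing yield is a generator function: calling it returns a generator object rather than running the body, and subscripting a generator raises exactly this TypeError (a minimal sketch mirroring the function from the pipeline):

```python
def setTimestamp(elem):
    # Because of `yield`, calling this function does not execute the body;
    # it returns a generator object.
    yield elem

gen = setTimestamp({"sessionId": 322371, "timestamp": 1589992571906})
print(type(gen))  # <class 'generator'>

try:
    gen["sessionId"]  # what createTuples effectively attempts downstream
except TypeError as e:
    print(e)          # 'generator' object is not subscriptable
```

This is what happens inside the pipeline: beam.Map(setTimestamp) passes the returned generator itself downstream as the element, and createTuples then tries to subscript it.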

Here is the command I use to execute the pipeline:

python3 sessiontest1.py --project phrasal-bond-xxxxx --region us-central1 --subscription projects/phrasal-bond-xxxxx/subscriptions/xxxxxx --dataset sessions_beam --runner DataflowRunner --temp_location gs://webevents/sessions --service_account_email-xxxxxxxx-compute@developer.gserviceaccount.com

I also get this error:

NameError: name 'window' is not defined [while running 'generatedPtransform-3820']

1 Answer

慕河

2023-03-14

Getting the 'generator' object is not subscriptable error in createTuples indicates that elem is already a generator by the time you do elem["sessionId"]. The preceding transform, setTimestamp, also uses yield, so it outputs a generator, and that generator is then passed as the element into createTuples.

The solution here is to implement setTimestamp and createTuples with return instead of yield: return the element you want the following transform to receive.
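As a sketch of the fix (assuming the same element shape as in the question), the two functions become plain one-to-one mappings, which is what beam.Map expects:

```python
def setTimestamp(elem):
    # beam.Map expects exactly one output element per input, so return it
    # instead of yielding. The import stays inside the function, as in the
    # original code, so it resolves on Dataflow workers.
    from apache_beam import window
    return window.TimestampedValue(elem, elem['timestamp'])

def createTuples(elem):
    # Return the (key, value) tuple directly.
    return (elem["sessionId"], elem)
```

Equivalently, the yield versions could be kept and wired with beam.FlatMap, which expects an iterable of output elements per input; with beam.Map, return is the correct choice.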
