Keyword: BigQuery
Total search results: 14
[Single choice] You write a Python script to connect to Google BigQuery from a Google Compute Engine virtual machine. The script is printing errors that it cannot connect to BigQuery.
What should you do to fix the script?
Install the latest BigQuery API client library for Python
Run your script on a new virtual machine with the BigQuery access scope enabled
Create a new service account with BigQuery access and execute your script with that user
Install the bq component for gcloud with the command gcloud components install bq.
Answer: C
A - If the client library were not installed, the Python script would not run at all; since the question says the script reports "cannot connect", the library must already be installed. So it's B or C.
B - https://cloud.google.com/bigquery/docs/authorization - an access scope is how a client application on a VM obtains an OAuth access token with the permissions needed to call a service's API. If the script calls the BigQuery API with the VM's default credentials, the BigQuery access scope would need to be enabled on the VM.
C - Using a dedicated service account with BigQuery access is Google Cloud's recommended practice, and it works regardless of the VM's access scopes.
So prefer C.
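For illustration, here is a minimal sketch of option C's approach - authenticating to BigQuery with a dedicated service account key. The key path and project ID are placeholders, not values from the question.

```python
from google.cloud import bigquery
from google.oauth2 import service_account

# Service account key with BigQuery access, e.g. roles/bigquery.user
# (path and project are placeholders).
credentials = service_account.Credentials.from_service_account_file(
    "/path/to/bq-service-account.json"
)
client = bigquery.Client(project="my-project", credentials=credentials)

# Trivial query to verify connectivity.
for row in client.query("SELECT 1 AS ok").result():
    print(row.ok)
```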
[Single choice] Your company is using BigQuery as its enterprise data warehouse. Data is distributed over several Google Cloud projects. All queries on BigQuery need to be billed to a single project. You want to make sure that no query costs are incurred on the projects that contain the data. Users should be able to query the datasets, but not edit them.
How should you configure users' access roles?
Add all users to a group. Grant the group the role of BigQuery user on the billing project and BigQuery dataViewer on the projects that contain the data.
Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery user on the projects that contain the data.
Add all users to a group. Grant the group the roles of BigQuery jobUser on the billing project and BigQuery dataViewer on the projects that contain the data.
Add all users to a group. Grant the group the roles of BigQuery dataViewer on the billing project and BigQuery jobUser on the projects that contain the data.
Answer: C
Answer C
roles/bigquery.jobUser : Provides permissions to run jobs, including queries, within the project.
roles/bigquery.user: When applied to a dataset, this role provides the ability to read the dataset's metadata and list tables in the dataset.
When applied to a project, this role also provides the ability to run jobs, including queries, within the project. A member with this role can enumerate their own jobs, cancel their own jobs, and enumerate datasets within a project. Additionally, allows the creation of new datasets within the project; the creator is granted the BigQuery Data Owner role (roles/bigquery.dataOwner) on these new datasets.
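A minimal sketch of how this split plays out in practice: the client is bound to the billing project (where users hold BigQuery jobUser), while the query references a table in a data project (where they hold BigQuery dataViewer). Project, dataset, and table names are placeholders.

```python
from google.cloud import bigquery

# Jobs (and their costs) run in the project the client is bound to,
# so users need roles/bigquery.jobUser on this billing project.
client = bigquery.Client(project="billing-project")  # placeholder

# The table lives in a data project where users hold only
# roles/bigquery.dataViewer; no query cost accrues there.
query = "SELECT COUNT(*) AS n FROM `data-project.sales.orders`"  # placeholder
for row in client.query(query).result():
    print(row.n)
```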
[Single choice] Your applications will be writing their logs to BigQuery for analysis. Each application should have its own table. Any logs older than 45 days should be removed. You want to optimize storage and follow Google-recommended practices. What should you do?
Configure the expiration time for your tables at 45 days
Make the tables time-partitioned, and configure the partition expiration at 45 days
Rely on BigQuery's default behavior to prune application logs older than 45 days.
Create a script that uses the BigQuery command line tool (bq) to remove records older than 45 days
Answer: B
I think B is correct.
With option A, the whole table expires and is deleted after 45 days, not just the log entries older than 45 days; partition expiration removes only the aged partitions.
https://cloud.google.com/bigquery/docs/managing-tables#updating_a_tables_expiration_time
When you delete a table, any data in the table is also deleted. To automatically delete tables after a specified period of time, set the default table expiration for the dataset or set the expiration time when you create the table.
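A minimal sketch of option B with the Python client library - one table per application, partitioned by ingestion day, with a 45-day partition expiration. Project, dataset, and table names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

# Partitions older than 45 days are dropped automatically,
# while the table itself stays in place.
table = bigquery.Table("my-project.logs.app_frontend")  # placeholder
table.time_partitioning = bigquery.TimePartitioning(
    type_=bigquery.TimePartitioningType.DAY,
    expiration_ms=45 * 24 * 60 * 60 * 1000,  # 45 days
)
client.create_table(table)
```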
[Single choice] Your BigQuery project has several users. For audit purposes, you need to see how many queries each user ran in the last month. What should you do?
Connect Google Data Studio to BigQuery. Create a dimension for the users and a metric for the amount of queries per user.
In the BigQuery interface, execute a query on the JOBS table to get the required information.
Use 'bq show' to list all jobs. Per job, use 'bq ls' to list job information and get the required information.
Use Cloud Audit Logging to view Cloud Audit Logs, and create a filter on the query operation to get the required information.
Answer: D
D - reasons:
1) Cloud Audit Logs maintains audit logs for admin activity, data access, and system events; BigQuery audit entries are sent to Cloud Audit Logs automatically.
2) In the filter you can select the relevant BigQuery audit messages (for example, completed query jobs), and the same filters can be expressed as part of a log export.
https://cloud.google.com/logging/docs/audit
https://cloud.google.com/bigquery/docs/reference/auditlogs#ids
https://cloud.google.com/bigquery/docs/reference/auditlogs#auditdata_examples
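As a rough sketch of option D, the snippet below pulls BigQuery job-completion audit entries from Cloud Logging and tallies them per user. The filter and payload field paths follow the BigQuery AuditData format linked above and should be verified against those docs; project and time window are placeholders.

```python
from collections import Counter
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # placeholder

# Completed query jobs recorded by BigQuery's audit logs.
log_filter = (
    'resource.type="bigquery_resource" '
    'AND protoPayload.methodName="jobservice.jobcompleted" '
    'AND timestamp >= "2024-05-01T00:00:00Z"'  # placeholder window
)

per_user = Counter()
for entry in client.list_entries(filter_=log_filter):
    # principalEmail identifies who ran the query (field path per
    # the audit log docs; treat as an assumption).
    email = entry.payload["authenticationInfo"]["principalEmail"]
    per_user[email] += 1

for user, count in per_user.most_common():
    print(user, count)
```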
[Single choice] Question #148
You are designing a Data Warehouse on Google Cloud and want to store sensitive data in BigQuery. Your company requires you to generate the encryption keys outside of Google Cloud. You need to implement a solution. What should you do?
Generate a new key in Cloud Key Management Service (Cloud KMS). Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
Generate a new key in Cloud KMS. Create a dataset in BigQuery using the customer-managed key option and select the created key.
Import a key in Cloud KMS. Store all data in Cloud Storage using the customer-managed key option and select the created key. Set up a Dataflow pipeline to decrypt the data and to store it in a new BigQuery dataset.
Import a key in Cloud KMS. Create a dataset in BigQuery using the customer-supplied key option and select the created key.
Answer: D
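Whatever the option wording, the underlying mechanism is a dataset-level encryption key held in Cloud KMS, which can be a key imported from outside Google Cloud. A minimal sketch with the Python client; all resource names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

dataset = bigquery.Dataset("my-project.sensitive_dw")  # placeholder
# Key imported into Cloud KMS (generated outside Google Cloud);
# the resource name below is a placeholder.
dataset.default_encryption_configuration = bigquery.EncryptionConfiguration(
    kms_key_name=(
        "projects/my-project/locations/us/keyRings/dw-ring/"
        "cryptoKeys/imported-key"
    )
)
client.create_dataset(dataset)
```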
[Single choice] You are managing several projects on Google Cloud and need to interact on a daily basis with BigQuery, Bigtable, and Kubernetes Engine using the gcloud CLI tool. You are travelling a lot and work on different workstations during the week. You want to avoid having to manage the gcloud CLI manually. What should you do?
Install gcloud on all of your workstations. Run the command gcloud components auto-update on each workstation.
Create a Compute Engine instance and install gcloud on the instance. Connect to this instance via SSH to always use the same gcloud installation when interacting with Google Cloud.
Use Google Cloud Shell in the Google Cloud Console to interact with Google Cloud.
Use a package manager to install gcloud on your workstations instead of installing it manually.
Answer: C
[Multiple choice] Your company implemented BigQuery as an enterprise data warehouse. Users from multiple business units run queries on this data warehouse. However, you notice that query costs for BigQuery are very high and you need to control costs. Which two methods should you use? (Choose two.)
Split the users from business units to multiple projects.
Apply a user- or project-level custom query quota for BigQuery data warehouse.
Create separate copies of your BigQuery data warehouse for each business unit.
Split your BigQuery data warehouse into multiple data warehouses for each business unit.
Change your BigQuery query model from on-demand to flat rate. Apply the appropriate number of slots to each project.
Answer: B, E
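Custom query quotas and slot reservations are configured in the console or via their admin APIs. A related per-query cost guardrail worth knowing - not one of the listed options, just an illustrative sketch - is the client library's maximum_bytes_billed limit:

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

# Reject any query that would bill more than ~1 GB, instead of
# silently running up on-demand costs.
job_config = bigquery.QueryJobConfig(maximum_bytes_billed=10**9)
job = client.query(
    "SELECT * FROM `my-project.dw.big_table`",  # placeholder table
    job_config=job_config,
)
rows = job.result()  # fails if the limit would be exceeded
```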
[Single choice] Your organization needs to grant users access to query datasets in BigQuery but prevent them from accidentally deleting the datasets. You want a solution that follows Google-recommended practices. What should you do?
Add users to the roles/bigquery.user role only, instead of roles/bigquery.dataOwner.
Add users to the roles/bigquery.dataEditor role only, instead of roles/bigquery.dataOwner.
Create a custom role by removing delete permissions, and add users to that role only.
Create a custom role by removing delete permissions. Add users to the group, and then add the group to the custom role.
Answer: A
roles/bigquery.user lets members run query jobs and read dataset metadata but does not include bigquery.datasets.delete, and preferring a predefined role over maintaining a custom role is the Google-recommended practice.
[Single choice] If you have configured Stackdriver Logging to export logs to BigQuery, but log entries are not getting exported to BigQuery, what is the most likely cause?
The Cloud Data Transfer Service has not been enabled.
There isn't a firewall rule allowing traffic between Stackdriver and BigQuery.
Stackdriver Logging does not have permission to write to the BigQuery dataset.
The size of the Stackdriver log entries being exported exceeds the maximum capacity of the BigQuery dataset.
Answer: C
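When a log sink is created, Logging generates a writer identity service account, and that account must be granted write access (for example, BigQuery Data Editor) on the destination dataset. A hedged sketch of looking up the writer identity with the Python client; sink and project names are placeholders.

```python
from google.cloud import logging as cloud_logging

client = cloud_logging.Client(project="my-project")  # placeholder

sink = client.sink("bq-export-sink")  # placeholder sink name
sink.reload()

# This service account needs roles/bigquery.dataEditor on the
# destination dataset for entries to be exported.
print(sink.writer_identity)
```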
[Single choice] Your company has a Google Cloud project that uses BigQuery for data warehousing on a pay-per-use basis. You want to monitor queries in real time to discover the most costly queries and which users spend the most. What should you do?
1. In the BigQuery dataset that contains all the tables to be queried, add a label for each user that can launch a query. 2. Open the Billing page of the project. 3. Select Reports. 4. Select BigQuery as the product and filter by the user you want to check.
1. Create a Cloud Logging sink to export BigQuery data access logs to BigQuery. 2. Perform a BigQuery query on the generated table to extract the information you need.
1. Create a Cloud Logging sink to export BigQuery data access logs to Cloud Storage. 2. Develop a Dataflow pipeline to compute the cost of queries split by users.
1. Activate billing export into BigQuery. 2. Perform a BigQuery query on the billing table to extract the information you need.
Answer: B
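Option B queries the audit table produced by the log sink, whose exact schema depends on the export. As a simpler illustration of extracting the same information, here is a hedged sketch against BigQuery's built-in INFORMATION_SCHEMA.JOBS_BY_PROJECT view - a different, built-in source, not the option's log-sink table; project and region are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

# On-demand pricing is driven by bytes billed; aggregate per user.
query = """
SELECT
  user_email,
  SUM(total_bytes_billed) AS bytes_billed
FROM `my-project.region-us.INFORMATION_SCHEMA.JOBS_BY_PROJECT`
WHERE job_type = 'QUERY'
  AND creation_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY user_email
ORDER BY bytes_billed DESC
"""
for row in client.query(query).result():
    print(row.user_email, row.bytes_billed)
```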
[Single choice] Your company captures all web traffic data in Google Analytics 360 and stores it in BigQuery. Each country has its own dataset. Each dataset has multiple tables. You want analysts from each country to be able to see and query only the data for their respective countries. How should you configure the access rights?
Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts', and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate dataset with view access with each respective analyst country-group.
Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts' and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery jobUser. Share the appropriate tables with view access with each respective analyst country-group.
Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts' and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate dataset with view access with each respective analyst country-group.
Create a group per country. Add analysts to their respective country-groups. Create a single group 'all_analysts' and add all country-groups as members. Grant the 'all_analysts' group the IAM role of BigQuery dataViewer. Share the appropriate table with view access with each respective analyst country-group.
Answer: A
It should be A. The question requires that users from each country can only view a specific dataset, so BigQuery dataViewer cannot be assigned at the project level. Only A limits users to querying and viewing the data they are supposed to be allowed to see, with jobUser at the project level letting them run queries.
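A sketch of the dataset-sharing step from option A with the Python client - granting a country group read (view) access on its dataset. Group, project, and dataset names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

dataset = client.get_dataset("my-project.analytics_de")  # placeholder
entries = list(dataset.access_entries)
# READER on the dataset corresponds to dataset-level view access.
entries.append(
    bigquery.AccessEntry(
        role="READER",
        entity_type="groupByEmail",
        entity_id="analysts-de@example.com",  # placeholder group
    )
)
dataset.access_entries = entries
client.update_dataset(dataset, ["access_entries"])
```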
[Single choice] Topic 1, Question #122
You are working at a sports association whose members range in age from 8 to 30. The association collects a large amount of health data, such as sustained injuries. You are storing this data in BigQuery. Current legislation requires you to delete such information upon request of the subject. You want to design a solution that can accommodate such a request. What should you do?
Use a unique identifier for each individual. Upon a deletion request, delete all rows from BigQuery with this identifier.
When ingesting new data in BigQuery, run the data through the Data Loss Prevention (DLP) API to identify any personal information. As part of the DLP scan, save the result to Data Catalog. Upon a deletion request, query Data Catalog to find the column with personal information.
Create a BigQuery view over the table that contains all data. Upon a deletion request, exclude the rows that affect the subject's data from this view. Use this view instead of the source table for all analysis tasks.
Use a unique identifier for each individual. Upon a deletion request, overwrite the column with the unique identifier with a salted SHA256 of its value.
Answer: B
The question states that the association "collects a large amount of health data, such as sustained injuries", and note the nuance: current legislation requires you to delete "such" information upon request of the subject. Read that way, the task is not to delete entire user records but the specific personal health data. With DLP you can use infoTypes and infoType detectors to scan specifically for those entries and decide how to act on them (https://cloud.google.com/dlp/docs/concepts-infotypes).
I would say B.
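A minimal sketch of running content through the DLP API during ingestion, as option B describes. The infoTypes and sample text are illustrative choices, not from the question.

```python
import google.cloud.dlp_v2 as dlp_v2

client = dlp_v2.DlpServiceClient()
parent = "projects/my-project"  # placeholder project

response = client.inspect_content(
    request={
        "parent": parent,
        "inspect_config": {
            # Illustrative infoTypes; tune to the data actually ingested.
            "info_types": [{"name": "PERSON_NAME"}, {"name": "EMAIL_ADDRESS"}],
        },
        "item": {"value": "Player Jane Doe, jane@example.com, knee injury"},
    }
)
for finding in response.result.findings:
    print(finding.info_type.name, finding.likelihood)
```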
[Single choice] Question #156
Your company has a Google Cloud project that uses BigQuery for data warehousing. They have a VPN tunnel between the on-premises environment and Google Cloud that is configured with Cloud VPN. The security team wants to avoid data exfiltration by malicious insiders, compromised code, and accidental oversharing. What should they do?
Configure Private Google Access for on-premises only.
Perform the following tasks: 1. Create a service account. 2. Give the BigQuery JobUser role and Storage Reader role to the service account. 3. Remove all other IAM access from the project.
Configure VPC Service Controls and configure Private Google Access.
Configure Private Google Access.
Answer: C
IMO it's C: VPC Service Controls define a service perimeter around the project's Google APIs to block exfiltration, while Private Google Access lets the on-premises hosts reach BigQuery over the VPN.
[Single choice] Topic 1, Question #179
Your company has a Google Cloud project that uses BigQuery for data warehousing. There are some tables that contain personally identifiable information (PII). Only the compliance team may access the PII. The other information in the tables must be available to the data science team. You want to minimize cost and the time it takes to assign appropriate access to the tables. What should you do?
1) From the dataset where you have the source data, create views of tables that you want to share, excluding PII. 2) Assign an appropriate project-level IAM role to the members of the data science team. 3) Assign access controls to the dataset that contains the view.
1) From the dataset where you have the source data, create materialized views of tables that you want to share, excluding PII. 2) Assign an appropriate project-level IAM role to the members of the data science team. 3) Assign access controls to the dataset that contains the view.
1) Create a dataset for the data science team. 2) Create views of tables that you want to share, excluding PII. 3) Assign an appropriate project-level IAM role to the members of the data science team. 4) Assign access controls to the dataset that contains the view. 5) Authorize the view to access the source dataset.
1) Create a dataset for the data science team. 2) Create materialized views of tables that you want to share, excluding PII. 3) Assign an appropriate project-level IAM role to the members of the data science team. 4) Assign access controls to the dataset that contains the view. 5) Authorize the view to access the source dataset.
Answer: C
I would say C.
A does not limit access to the source dataset, which still contains PII.
C creates a new dataset that exposes only non-PII columns and authorizes the view against the source dataset.
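A sketch of the core of option C with the Python client - creating the PII-free view in the data science team's dataset and authorizing it against the source dataset. All project, dataset, table, and column names are placeholders.

```python
from google.cloud import bigquery

client = bigquery.Client(project="my-project")  # placeholder

# 1) View without PII columns, in the data science team's dataset.
view = bigquery.Table("my-project.ds_team.orders_no_pii")  # placeholder
view.view_query = (
    "SELECT order_id, amount, country "  # placeholder non-PII columns
    "FROM `my-project.source.orders`"
)
view = client.create_table(view)

# 2) Authorize the view to read the source dataset, so data science
# members never need direct access to the PII tables.
source = client.get_dataset("my-project.source")  # placeholder
entries = list(source.access_entries)
entries.append(
    bigquery.AccessEntry(None, "view", view.reference.to_api_repr())
)
source.access_entries = entries
client.update_dataset(source, ["access_entries"])
```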