Reviews for "Automating your BigQuery Data Pipeline with Cloud Dataprep"

2,059 reviews

Subhan M. · reviewed over 3 years ago

Lubna S. · reviewed over 3 years ago

Ashok K. · reviewed over 3 years ago

dil b. · reviewed over 3 years ago

jonu c. · reviewed over 3 years ago

This is the worst-written lab in Qwiklabs. You have to discover what is actually meant to be done at several steps, and in one silly hour and 30 silly minutes. I finally finished on my 4th attempt! The time required is more like 2h 30m, and there would be no problem even if it required 3h, but make that clear. Several issues (numbers refer to the lab's own steps):

1. "Once again, Add Datasets to the flow. Choose Import Datasets, then BigQuery again." --> Use the right terms. "Add Dataset" and "Add Datasets" are different functions in this context, because both exist.

2. "By default, Dataprep has inferred this column to be an array/list." --> Not always. In fact, I found Dataprep to be unstable in its suggestions and column formats, which may vary from run to run. On my first attempt it inferred a string, and that changes everything, because "Flatten array values into new rows" then doesn't appear as a suggestion. Instructions should be provided for this case, because one first needs to change the column to the Array type so the right Flatten suggestion can appear.

3. "(...) If you mouse over the histogram of the column, you can see that the bars represent individual keys inside of the object." --> Nope, this behaviour didn't happen for me.

6. "In the first row, highlight the middle portion of the citation_publication_number. You will see suggestions to extract a digit pattern." --> Nope, didn't happen for me. There are several options, and you should make clear that one should not choose the one that includes the specific number.

2. "Modify the join keys to use citation_publication_id == patent_id as the join keys." --> What about "Click Next."? Still in point 2, after "Keep the following columns and save the step:" plus the variables: click Review, then click Add to Recipe.

7. "One by one, add the following columns to the 'Group by' field and observe how the preview column updates:" --> It is in fact 'Group rows by'... otherwise, why write everything out at all?

9. "Now to do the partial deduplication, you will simply need to get rid of any rows where partial_dupe is greater than 1. Click on the histogram in the partial_dupe column and add the suggestion to Keep rows where (1 <= partial_dupe) && (partial_dupe < 2)." --> Not always. If the suggestion doesn't appear, what can one do? Maybe you want to write: "If this is not working, keep reading to get the idea and resume the process at Step 12."

18. "Finally, inner join the inventor dataset, using inventor_id == id as the join keys. Keep all columns from both datasets and Add the step." --> Oh boy. What if you broke this line into individual steps, as proper technical education would?

And that's all! This gets 1 star because the time needed to execute this lab is horrible and the instructional wording leads to frustration. Thanks.

Américo A. · reviewed over 3 years ago

arshad a. · reviewed over 3 years ago

Jimmy H. · reviewed over 3 years ago

ashraf n. · reviewed over 3 years ago

Hal B. · reviewed over 3 years ago

Cemile A. · reviewed over 3 years ago

dfdf d. · reviewed over 3 years ago

Corey W. · reviewed over 3 years ago

dil b. · reviewed over 3 years ago

Good

King P. · reviewed over 3 years ago

We cannot guarantee that published reviews come from consumers who have purchased or used the product. Reviews are not verified by Google.