

DFCX Bot Building Quality Assurance and Deployment Lifecycle


Conversational AI | Testing and Logging in Conversational Agents

Lab · 1 hour 30 minutes · 1 credit · Intermediate
Note: This lab may incorporate AI tools to support your learning.

Course overview

Gone are the bots of the past that interpreted inquiries incorrectly or couldn't maintain context within a conversation. Google's Contact Center as a Service (CCaaS) provides all the tools, services, and APIs you'll need to build your conversational agent so you can hold intelligent conversations with customers reaching out to you for assistance. In this course you'll learn how to use the built-in features of Conversational Agents to test, debug, and improve your agent. We will show you how to create and maintain test cases, use the Validations tool to improve agent functionality, review conversation history logs, and use the console to test and debug.

Objectives

In this lab, you'll explore some of the testing and logging tools available for developing an agent in Conversational Agents. By the end of this module, you'll be able to:

  • Use Conversational Agent tools for troubleshooting.
  • Use Google Cloud tools to debug your conversational agent.
  • Review logs generated by Conversational agent activity.

Resources

The following are some resources that may help you complete the lab components of this course:

Setup and requirements

Setting up

For each lab, you get a new Google Cloud project and set of resources for a fixed time at no cost.

  1. Make sure you are signed in to Qwiklabs using an incognito window.

  2. Note the lab's access time (1 hour 30 minutes for this lab) and make sure you can finish within that time block.

  3. When ready, click Start Lab.

  4. Note your lab credentials. You will use them to sign in to the Google Cloud Console.

  5. Click Open Google Console.

  6. Click Use another account and copy/paste the credentials for this lab into the prompts.

  7. Accept the terms and skip the recovery resource page.

Assumption: You've already logged into Google Cloud before continuing with the steps below.

  1. In the Google Cloud Console, enable the Dialogflow API.

  2. Navigate to AI Applications, click Continue, and activate the API.

  3. To create a new conversational agent, go to the AI Applications Apps Console, choose Conversational agent as the app type, and click Create.

  4. A new page for Conversational Agents opens. On this page, you should see a pop-up asking you to select a project.

  5. Search the list in the pop-up for the project that matches your assigned Project ID for this lab, and click on that project.

Note: If you don't see your Project ID listed, check the user shown on the right side to confirm that you are signed in to Conversational Agents as the lab student.

  6. Select Build your own on the Get started with Conversational Agents popup.

  7. On the Create agent page, for the agent display name, enter Cloudio-cx.

  8. Set the location to the region assigned to your lab.

  9. Ensure the timezone and default language are set appropriately. Set the Conversation start to Flow, and then click the Create button.

    Once the agent is created, you will see the design and configuration portion of the Conversational Agents UI.

Task 1. Importing a .blob Conversational agent file

In this task, you can import the conversational agent you worked on in an earlier lab, or you can import the conversational agent quick start. Choose the option you feel will help you get the most out of this lab.
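If you'd rather script the import, the sketch below shows one possible approach using the Dialogflow CX Python client (the google-cloud-dialogflow-cx package). It is not part of the lab's required steps; the agent path and file name are placeholders for your own values, restoring overwrites the agent's current contents, and an agent in a non-global location may also need a regional api_endpoint passed in client_options.

    # A minimal sketch (optional, not a lab step): restore an exported *.blob
    # agent file into an existing agent with the Dialogflow CX Python client.
    # The agent path and file name below are placeholders.
    from google.cloud import dialogflowcx_v3

    AGENT = "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"  # placeholder

    with open("exported_agent.blob", "rb") as f:  # file exported from another agent
        agent_content = f.read()

    client = dialogflowcx_v3.AgentsClient()
    # restore_agent returns a long-running operation; result() waits for completion.
    operation = client.restore_agent(
        request={"name": AGENT, "agent_content": agent_content}
    )
    operation.result()
    print("Agent restored from exported_agent.blob")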

Task 2. Creating a test case

In this task, you'll create a new test case and save it so that you can run it again later without typing in everything all over again.

Let's start by viewing the Test Cases section in the Conversational Agents console.

  1. To create the test case, click on Toggle Simulator to open the Simulator pane.

  2. Enter the customer utterance, I want to upgrade my tier, for the ChangeTier intent.

    The Conversational Agent should respond with "Which tier do you want?"

  3. Enter gold.

    The Conversational Agent should respond with "What's the phone number on your account?"

  4. Enter 4155551212.

    The Conversational Agent should respond with "Your tier is now gold. Anything else?"

  5. Click the Create test case button above the simulator (in the upper right of the Simulator pane).

  6. Enter the following in the Create Test Case pop-up window:

    • Enter the Display Name as Change Tier.

    • Expand the Basic Settings section and enter the Tags as #gold.

    • In the Notes field, enter Customer wants to upgrade to the gold tier.

  7. Click the Save test case button above the simulator.

  8. From the left navigation menu, navigate to Test cases under the TEST & EVALUATE tab.

  9. Select your test case named Change Tier from the list.

  10. However, the Last Run State pane is currently blank. To execute your test case, click the Run selected button and select the Draft environment.

  11. Notice the Last tested pane is now populated with the test you just ran. The Last Run State now shows Pass, and the timestamp is more recent than your first test.

You can open the View running operations list by clicking on the hourglass icon in the upper right.

  12. Click on your test case to edit it.

  13. Notice there are two fields, one for Optional Settings and one for Expected conversation.

  14. Notice the Expected conversation script contains all of the conversation details you entered into the simulator.

  15. Click on the first user utterance, I want to upgrade my tier, and replace it with I want to change tier.

  16. Click Save.

    What does this do? You're probably thinking, correctly, that Conversational Agents might match a different intent if it finds a training phrase that more closely matches the customer utterance.

    In this particular case, it's OK. Conversational Agents runs through the test case and matches the ChangeTier intent just as before.

    This is an easy way for you to make simple modifications to your test cases without rebuilding them, but you'll want to be careful not to make changes that result in a different scenario entirely.
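If you later want to run saved test cases outside the console, for example from a CI pipeline, the sketch below shows one way to do it with the Dialogflow CX Python client. It is an optional illustration rather than a lab step, and the test case resource name is a placeholder you would look up in your own project.

    # A minimal sketch (optional): run a saved test case programmatically and
    # report whether it passed. The resource name below is a placeholder.
    from google.cloud import dialogflowcx_v3

    TEST_CASE = (
        "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID/testCases/TEST_CASE_ID"
    )

    client = dialogflowcx_v3.TestCasesClient()
    # run_test_case returns a long-running operation; result() waits for the run.
    operation = client.run_test_case(request={"name": TEST_CASE})
    response = operation.result()
    # test_result is an enum value such as PASSED or FAILED.
    print("Test result:", response.result.test_result.name)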

Task 3. Reviewing validation data

  1. Navigate to Flows, open the Manage tab, and then click Validations under the Feedback tab.
    Are there any validation errors or warnings?

  2. Click on the Flow dropdown and choose Default Start Flow if not already selected.

    Are there any errors or warnings now?

    Note: You may need to refresh the page to see errors and warnings.
  3. Expand the Intent: subscribe intent under Intent issues.

  4. Notice it contains the warning, The annotated text 'Wang' in training phrase 'i want to get a new account for 9255551212 with silver tier and last name is Wang' does not correspond to entity type '@sys.last-name'.

    Why do you think this is?

    Hint: Review training phrases and the system entities documentation.
  5. Scroll to the Flow issues section.

  6. Expand Page: Anything Else.

    You should see a warning, The page cannot reach END_FLOW/END_SESSION or another flow.

    What do you think this means?

Hint: The Anything Else page does not explicitly define an end page or flow. That's ok because we want Cloudio to wait for another request from the user (versus terminate the session).
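As an optional aside, the same validation results you just reviewed in the console can also be pulled through the API. The sketch below uses the Dialogflow CX Python client; the agent path is a placeholder, and re-running validation this way is not required for the lab.

    # A minimal sketch (optional): re-run agent validation and print any
    # warnings or errors, similar to the Validations view in the console.
    from google.cloud import dialogflowcx_v3

    AGENT = "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"  # placeholder

    client = dialogflowcx_v3.AgentsClient()
    result = client.validate_agent(request={"name": AGENT})
    for flow_result in result.flow_validation_results:
        for message in flow_result.validation_messages:
            # severity is an enum value such as WARNING or ERROR.
            print(message.severity.name, "-", message.detail)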

Task 4. Getting to Google Cloud execution logs

In this task, you'll look at the execution logs generated for your Conversational Agent, which will help you debug any issues.

  1. In the Google Cloud console, type "Logging" in the search bar, and then click Logging in the resulting options.

    The Logging Explorer opens by default. Don't worry if you don't see much there yet. There will be more entries once we do additional testing.

  2. Click on Last 1 hour (or if it says Last, click on that).

  3. In Relative time, enter 15m and press Enter.

  4. Under All severities, look for Debug type logs if you have them. If not, try another log severity type (for instance, Info type).

  5. Notice in the Query pane that you can construct a query to include only logs with a specific type or other criteria. For instance,

    severity=INFO

    This filters out everything except INFO type logs.

    You may want to inspect DEBUG-type logs for webhooks in your Conversational Agent to see what the parameters were and how long they took. (A scripted version of this kind of query appears after the sample logs below.)

    Sometimes you'll see DEBUG-level logs that give you an indication of simple typos in your code. For example, let's say you forgot to set the value of the tier parameter in your Cloud Function.

    You might get a log such as the following, indicating your Cloud Function crashed:

    { "textPayload": "Function execution took 905 ms, finished with status: 'crash'", "insertId": "000000-0b19678b-a509-436a-b174-f7d36b722956", "resource": { "type": "cloud_function", "labels": { "function_name": "cloudioAccountUpdate", "region": "{{{ project_0.default_region | REGION }}}", "project_id": "qwiklabs-gcp-04-31bc7acc8e71" } }, "timestamp": "2021-05-18T01:04:36.237217533Z", "severity": "DEBUG", "labels": { "execution_id": "iwjy0cpz8964" }, "logName": "projects/{{{project_0.project_id | Project ID}}}/logs/cloudfunctions.googleapis.com%2Fcloud-functions", "trace": "projects/{{{project_0.project_id | Project ID}}}/traces/ca514f5399280afec54e299a52c81c1b", "receiveTimestamp": "2021-05-18T01:04:37.318856769Z" }
  6. Compare the above with an example of a properly executed Cloud Function log:

    { "textPayload": "Function execution took 191 ms, finished with status code: 200", "insertId": "000000-ea4a1e48-80c7-497f-b2f4-fdfffae11123", "resource": { "type": "cloud_function", "labels": { "function_name": "cloudioAccountUpdate", "project_id": "{{{project_0.project_id | Project ID}}}", "region": "{{{ project_0.default_region | REGION }}}", } }, "timestamp": "2021-05-16T16:36:54.804028817Z", "severity": "DEBUG", "labels": { "execution_id": "ms1uwwnuuqmu" }, "logName": "projects/{{{project_0.project_id | Project ID}}}/logs/cloudfunctions.googleapis.com%2Fcloud-functions", "trace": "projects/{{{project_0.project_id | Project ID}}}/traces/fb66f7bf07b139c4ef277d9954b97023", "receiveTimestamp": "2021-05-16T16:37:05.695763035Z" }


Task 5. Debugging from the Conversational Agents interface

Some debugging efforts are best done in the Conversational Agents simulator pane, combined with good old-fashioned detective work.

  1. Navigate to the ChangeTier flow under the Flows > Build tab in the Conversational Agents console.

  2. Click on the Get Tier page.

  3. Click on the phone_number parameter to bring up the configuration pane.

  4. Deselect Required.

  5. Click Save.

    Next, you'll run your Change Tier test case that you created earlier in this lab.

  6. Navigate to Test cases under the TEST & EVALUATE tab.

  7. Select your Change Tier test case.

  8. Click on the Run selected button and choose environment Draft.

  9. Did the test case show a status of Fail?

    In this scenario, your test case was expecting a valid value for the phone_number parameter, but the agent never actually collected a phone number from the user. Your test case proved that the new scenario will not work for your Conversational Agent.

  10. Navigate to the ChangeTier flow under the Flows > Build tab, click on the Get Tier page, re-enable the Required option for the phone_number parameter, and then click Save.

  11. Rerun your Change Tier test case.

    Did it work this time?

    Excellent! You're well on your way to using logs as well as test cases to debug your conversational agent.
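For completeness, the same kind of detective work can be done from code by sending test utterances straight to the agent. The sketch below uses the Dialogflow CX Python client's detect_intent call; it is optional, the agent path is a placeholder, and a non-global agent may need a regional api_endpoint in client_options.

    # A minimal sketch (optional): send one test utterance to the agent and
    # print its text responses, similar to typing into the simulator.
    import uuid

    from google.cloud import dialogflowcx_v3

    AGENT = "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"  # placeholder
    SESSION = f"{AGENT}/sessions/{uuid.uuid4()}"  # any unique session id works

    client = dialogflowcx_v3.SessionsClient()
    query_input = dialogflowcx_v3.QueryInput(
        text=dialogflowcx_v3.TextInput(text="I want to upgrade my tier"),
        language_code="en",
    )
    response = client.detect_intent(
        request={"session": SESSION, "query_input": query_input}
    )
    for message in response.query_result.response_messages:
        if message.text.text:
            print("Agent:", " ".join(message.text.text))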


Congratulations!

You've created a simple agent in Conversational Agents. In the next section, we'll go over slightly more complex scenarios and build out more of the Cloudio functionality.

Completion

In this lab you may not have created any new functionality for your Conversational Agent, but you've seen what things could look like when they're not working.

Export your agent (optional)

  1. In the Agent dropdown at the top, choose View all agents.

  2. Click on the ellipsis menu (three vertical dots) on the right next to your Conversational Agent.

  3. Select Export.

    Your Conversational Agent will be downloaded to your local machine as a *.blob type file. You can use the Restore option to import it later.
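If you want to keep a scripted backup instead, the export can also be done through the API. The sketch below uses the Dialogflow CX Python client and writes the same *.blob content the console downloads; it is optional, and the agent path is a placeholder.

    # A minimal sketch (optional): export the agent and save the raw bytes to a
    # local *.blob file, equivalent to the console's Export option.
    from google.cloud import dialogflowcx_v3

    AGENT = "projects/PROJECT_ID/locations/LOCATION/agents/AGENT_ID"  # placeholder

    client = dialogflowcx_v3.AgentsClient()
    # export_agent returns a long-running operation; result() yields the response.
    operation = client.export_agent(request={"name": AGENT})
    response = operation.result()
    with open("exported_agent.blob", "wb") as f:
        f.write(response.agent_content)  # raw exported agent bytes
    print("Agent exported to exported_agent.blob")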

Cleanup

  1. In the Cloud Console, sign out of the Google account.

  2. Close the browser tab.

End your lab

When you have completed your lab, click End Lab. Qwiklabs removes the resources you’ve used and cleans the account for you.

You will be given an opportunity to rate the lab experience. Select the applicable number of stars, type a comment, and then click Submit.

The number of stars indicates the following:

  • 1 star = Very dissatisfied
  • 2 stars = Dissatisfied
  • 3 stars = Neutral
  • 4 stars = Satisfied
  • 5 stars = Very satisfied

You can close the dialog box if you don't want to provide feedback.

For feedback, suggestions, or corrections, please use the Support tab.

Copyright 2023 Google LLC All rights reserved. Google and the Google logo are trademarks of Google LLC. All other company and product names may be trademarks of the respective companies with which they are associated.
