In this lab we are going to build, train, and deploy a churn prediction model using Amazon SageMaker.
We will then use AWS Lambda to invoke the churn prediction endpoint that we have deployed.
We will then look at a use case where we have an Amazon Connect instance.
We will deploy a simple contact flow and use the Lambda function we created to trigger the endpoint.
The result, 'Churn'/'No Churn', will be used to decide which queue we route the customer to: the expert queue or the standard queue.
No prior machine learning experience is required. All you need to do is follow the lab guide, ask questions whenever you are in doubt, and enjoy the workshop!
In the diagram below you can see a demo we have built for an 'Agent Dashboard', which uses different AWS services to perform real-time transcription and sentiment analysis and then visualize the results for an agent.
On an incoming call, it also routes the call to an expert or default queue using a churn prediction endpoint. This routing is what we are going to implement in this lab.
- Log in to the AWS Console.
- In the search bar, enter 'sagemaker' and select 'Amazon SageMaker'.
- Click on 'Notebook instances' and create a notebook instance.
- Enter a name for your notebook instance.
- Select `ml.m5.xlarge` as your notebook instance type.
- Under 'IAM role', create a new role.
- Go ahead and click on 'Create notebook instance'.
- Under 'Notebook instances', once your instance status shows 'InService', click on 'Open Jupyter' under Actions.
- Click on the 'SageMaker Examples' tab.
- Click on 'Introduction to Applying Machine Learning'.
- Click the 'Use' button next to `xgboost_customer_churn.ipynb`.
- Create a copy in your home directory.
- Select `conda_python3` as your kernel.
- Go through the SageMaker notebook.
- Point to your S3 bucket, where the sample dataset will be downloaded to.
- You can now click on the 'Cell' tab and run up to Step 17.
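Early in the notebook you will point it at your own S3 bucket. A minimal sketch of what that setup cell typically looks like (the bucket name below is a placeholder you must replace with a bucket that exists in your account and region):

```python
# Point the notebook at your own S3 bucket (placeholder name; replace it).
bucket = "your-s3-bucket-name"
prefix = "sagemaker/DEMO-xgboost-churn"  # folder for this lab's artifacts

# Training data and model artifacts will land under this path:
s3_path = f"s3://{bucket}/{prefix}"
print(s3_path)
```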
Go through the notebook and examine the different steps:
- data exploration
- model training
- hosting

Once the model is deployed and an endpoint has been created, we can move on to the next stage.
- Go to the Lambda console.
- Create a new Lambda function.
- Select 'Author from scratch'.
- Give your Lambda function a name.
- Select the Python 3.7 runtime.
- Make sure your Lambda function has permission to invoke the SageMaker endpoint.
- Click on 'Create function'.
- Copy the following Lambda function into the console. Make sure you replace the endpoint name with the one generated in SageMaker.
```python
import os
import io
import boto3
import json
import csv

# Grab environment variables
# ENDPOINT_NAME = os.environ['ENDPOINT_NAME']
runtime = boto3.client('runtime.sagemaker')

def lambda_handler(event, context):
    print("Received event: " + json.dumps(event, indent=2))
    data = json.loads(json.dumps(event))

    # Hard-coded sample customer record (CSV, one row, no header)
    payload = "0,47,28,141.3,94,168.0,108,113.5,84,7.8,2,1,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,1,0,0"
    print(payload)

    # Replace the endpoint name below with the one generated in SageMaker
    response = runtime.invoke_endpoint(EndpointName='xgboost-2019-03-22-11-38-32-449',
                                       ContentType='text/csv',
                                       Body=payload)
    print(response)
    result = json.loads(response['Body'].read().decode())
    print(result)
    pred = result
    # predicted_label = 'Churn' if pred > 0.80 else 'Not Churn'
    predicted_label = {'predicted_label': 'Churn' if pred > 0.80 else 'Not Churn'}
    return predicted_label
```
- Save your Lambda function.
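For the step above about granting the Lambda function permission to invoke the SageMaker endpoint, the function's execution role needs the `sagemaker:InvokeEndpoint` action. A minimal policy sketch (the resource ARN below is a broad placeholder; you can scope it down to your specific endpoint ARN):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sagemaker:InvokeEndpoint",
      "Resource": "arn:aws:sagemaker:*:*:endpoint/*"
    }
  ]
}
```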
The payload contains the customer sample in CSV format, which we send to the endpoint.
As you can see above, in this example we have hard-coded customer data that should cause the endpoint to return 'Churn'.
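In a real integration you would build the CSV payload from the caller's feature values rather than hard-coding the string. A minimal sketch (the helper name is ours, not part of the lab code):

```python
def build_payload(features):
    """Serialize a list of numeric features into the single CSV row
    (no header, comma-separated) that the XGBoost endpoint expects."""
    return ",".join(str(f) for f in features)

# Example: the first few features of the hard-coded sample above.
sample = [0, 47, 28, 141.3, 94, 168.0, 108]
payload = build_payload(sample)
print(payload)  # 0,47,28,141.3,94,168.0,108
```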
We would like to create an Amazon Connect instance with two agent queues:
- a basic queue
- an expert queue

The logic is that whenever there is an incoming call, a Lambda function triggers the churn prediction endpoint in real time with the customer's data, and the result (churn/not churn) determines which queue the call is routed to.
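The routing decision itself can be sketched as a small function (queue names here are illustrative; the threshold mirrors the 0.80 cut-off used in the Lambda function above):

```python
def choose_queue(churn_probability, threshold=0.80):
    """Route likely churners to the expert queue, everyone else to the
    basic queue, using the same cut-off as the Lambda function."""
    return "ExpertQueue" if churn_probability > threshold else "BasicQueue"

print(choose_queue(0.92))  # ExpertQueue
print(choose_queue(0.10))  # BasicQueue
```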
We will create an Amazon Connect contact flow.
After creating an Amazon Connect instance, you can import a contact flow we created before the workshop. Once you have imported the contact flow, you can point it to your Lambda function. The next step is to generate an incoming call and observe how it is routed to the expert queue.