I’m creating an Asynchronous Endpoint for a model that takes up to 40 minutes to process an input. I want to configure it so it can scale in to zero instances and, once it receives an input, scale out from zero. I’ve been testing and trying out several policies, but I can’t solve my problem. Every time I launch the endpoint and ask it to process an input, it starts as it is supposed to, but after 10 minutes the instance is shut down, and then another one is raised that just sits waiting for an input.
I’ve used policies based on CPU usage, the number of invocations, etc., but I always run into the same problem. When I was testing the CPU-based policy I saw some weird behavior: the instance stopped after 5–10 minutes of processing the input, and then, an hour later, another instance appeared with all the work done, as if I had made another invocation to the endpoint (which I hadn’t).
I want the endpoint to scale in, i.e., shut down instances when nothing has been processed for X minutes (that’s why I tried to monitor the %CPU being used), and to scale out from zero, i.e., raise a new instance when the endpoint is invoked and there is no instance already up.
Here is some code for the policies that I’ve been using.
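All of the snippets use an Application Auto Scaling client and a resource id of the usual endpoint/<endpoint-name>/variant/<variant-name> form; roughly, my setup is the following (the "AllTraffic" variant name is the same one I use in the CPU policy below):

import boto3

asg_client = boto3.client("application-autoscaling")  # scalable targets and scaling policies
cw_client = boto3.client("cloudwatch")                # alarms that trigger the step policies
sm_client = boto3.client("sagemaker-runtime")         # invoke_endpoint_async

# Resource id of the endpoint variant being scaled:
resource_id = f"endpoint/{endpoint_name}/variant/AllTraffic"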
Scale-In policies:
response = asg_client.register_scalable_target(
    ServiceNamespace="sagemaker",
    ResourceId=resource_id,  # "endpoint/<endpoint-name>/variant/<variant-name>"
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",
    MinCapacity=0,
    MaxCapacity=5,
)

response_scale_in = asg_client.put_scaling_policy(
    PolicyName=f'scaleinpolicy-{endpoint_name}',
    ServiceNamespace="sagemaker",  # The namespace of the service that provides the resource.
    ResourceId=resource_id,  # "endpoint/<endpoint-name>/variant/<variant-name>"
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",  # SageMaker only supports scaling the instance count.
    PolicyType="StepScaling",  # 'StepScaling' or 'TargetTrackingScaling'
    StepScalingPolicyConfiguration={
        "AdjustmentType": "ChangeInCapacity",  # ScalingAdjustment is an absolute change in capacity, not a percentage.
        "MetricAggregationType": "Average",  # The aggregation type for the CloudWatch metrics.
        "Cooldown": 2400,  # Seconds to wait for a previous scaling activity to take effect.
        "StepAdjustments": [  # Adjustments applied depending on the size of the alarm breach.
            {
                "MetricIntervalUpperBound": 0,
                "ScalingAdjustment": -1,
            }
        ],
    },
)
CPU Utilization:
response = asg_client.put_scaling_policy(
    PolicyName=policy_name,
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 50.0,
        'CustomizedMetricSpecification': {
            'MetricName': 'CPUUtilization',
            'Namespace': '/aws/sagemaker/Endpoints',
            'Dimensions': [
                {'Name': 'EndpointName', 'Value': endpoint_name},
                {'Name': 'VariantName', 'Value': "AllTraffic"}
            ],
            'Statistic': 'Average',
            # 'Statistic': 'Maximum',
            'Unit': 'Percent'
        },
        'ScaleOutCooldown': scale_out_cool_down,
        'ScaleInCooldown': scale_in_cool_down,
        'DisableScaleIn': False
    }
)
Scale out (from zero):
response = asg_client.put_scaling_policy(
    PolicyName=f'HasBacklogWithoutCapacity-ScalingPolicy-{endpoint_name}',
    ServiceNamespace="sagemaker",  # The namespace of the service that provides the resource.
    ResourceId=resource_id,  # "endpoint/<endpoint-name>/variant/<variant-name>"
    ScalableDimension="sagemaker:variant:DesiredInstanceCount",  # SageMaker only supports scaling the instance count.
    PolicyType="StepScaling",  # 'StepScaling' or 'TargetTrackingScaling'
    StepScalingPolicyConfiguration={
        "AdjustmentType": "ChangeInCapacity",  # ScalingAdjustment is an absolute change in capacity, not a percentage.
        "MetricAggregationType": "Average",  # The aggregation type for the CloudWatch metrics.
        "Cooldown": 300,  # Seconds to wait for a previous scaling activity to take effect.
        "StepAdjustments": [  # Adjustments applied depending on the size of the alarm breach.
            {
                "MetricIntervalLowerBound": 0.0,
                "ScalingAdjustment": 1,
            }
        ],
    },
)
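A step-scaling policy only acts when a CloudWatch alarm attached to it fires, so the PolicyARN returned above is referenced from an alarm on the HasBacklogWithoutCapacity metric, roughly like this (the alarm name, periods and evaluation counts here are placeholders rather than my exact values):

scale_out_policy_arn = response["PolicyARN"]

cw_client.put_metric_alarm(
    AlarmName=f"HasBacklogWithoutCapacity-{endpoint_name}",
    MetricName="HasBacklogWithoutCapacity",  # 1 while requests are queued but no instance is running
    Namespace="AWS/SageMaker",
    Statistic="Average",
    Period=60,
    EvaluationPeriods=2,
    DatapointsToAlarm=2,
    Threshold=1,
    ComparisonOperator="GreaterThanOrEqualToThreshold",
    TreatMissingData="missing",
    Dimensions=[{"Name": "EndpointName", "Value": endpoint_name}],
    AlarmActions=[scale_out_policy_arn],  # triggers the scale-out step policy above
)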
I’m invoking the endpoint as follows:
response = sm_client.invoke_endpoint_async(
    EndpointName=endpoint_name,
    ContentType="application/octet-stream",  # File to be processed
    InputLocation="s3://path_to_s3_file",
    Accept="application/xml",
    InvocationTimeoutSeconds=3600,
    CustomAttributes='some_atribs',
    InferenceId='gened_id',
)
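For reference, the async call returns immediately; the response just tells me where the result will eventually be written in S3, which is what I check afterwards:

# The request is queued; the result (or an error file) shows up in S3 later
output_location = response["OutputLocation"]
inference_id = response["InferenceId"]
print(f"Result for {inference_id} will be written to {output_location}")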
I’m not using a GPU, so I didn’t test any GPU instance. In addition, the Docker image is custom, based on one of the official Python Docker images.
Thank you very much in advance!!
I’ve been searching on rePost, but no configuration works for me, and I don’t understand what is happening with the alarms or the policies.
I have tried changing more things. As an AI-generated comment suggested, I switched the CPU metric from ‘Average’ to ‘Maximum’ and set the target to 800% CPU usage (that variant is sketched after the image below), and this is what I get: first it creates an instance, then 10 minutes later it shuts that instance down and raises a new one.
If I make another invocation, the new instance receives it, processes it, and then processes the earlier request whose processing was “killed” along with the first instance. If I keep making invocations, the instances keep being shut down after 10 minutes. I don’t get any error file in the S3 path for error logs, and when an input is processed as it is supposed to be, the output is correct. I don’t know what is happening or whether this helps. I have uploaded an image that may make what I’m trying to explain clearer.
[Image: %CPU use]
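Concretely, the ‘Maximum’ / 800% variant of the CPU policy mentioned above looked roughly like this (everything else as in the earlier snippet):

response = asg_client.put_scaling_policy(
    PolicyName=policy_name,
    ServiceNamespace='sagemaker',
    ResourceId=resource_id,
    ScalableDimension='sagemaker:variant:DesiredInstanceCount',
    PolicyType='TargetTrackingScaling',
    TargetTrackingScalingPolicyConfiguration={
        'TargetValue': 800.0,  # 800% as suggested; CPUUtilization can exceed 100% on multi-vCPU instances
        'CustomizedMetricSpecification': {
            'MetricName': 'CPUUtilization',
            'Namespace': '/aws/sagemaker/Endpoints',
            'Dimensions': [
                {'Name': 'EndpointName', 'Value': endpoint_name},
                {'Name': 'VariantName', 'Value': 'AllTraffic'},
            ],
            'Statistic': 'Maximum',  # 'Maximum' instead of 'Average'
            'Unit': 'Percent',
        },
        'ScaleOutCooldown': scale_out_cool_down,
        'ScaleInCooldown': scale_in_cool_down,
        'DisableScaleIn': False,
    },
)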
I’d like to add that I’ve tried multiple instance types to check whether that could be the problem; I keep getting the same behaviour with the following:
- ml.m5.xlarge, ml.m5.2xlarge
- ml.r5.xlarge, ml.r5.2xlarge
- ml.c6g.xlarge: as it uses ARM processors, I can’t use it.
- ml.c4.2xlarge: got a memory error.