I’ve created a SageMaker pipeline with a register-model step, shown below. The model_metrics is produced by an eval step. It’s based on the SageMaker tutorials, but all the examples I can find use an SKLearn estimator, whereas my model is PyTorch based (it’s a sentence-transformers model evaluated with their information retrieval evaluator).
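For context, the eval step’s script boils down to something like this (heavily simplified: the queries/corpus/relevant docs below are placeholders, the real ones come from the processing inputs, the paths are the usual processing-container conventions, and it assumes a recent sentence-transformers where the evaluator returns a dict of metrics):

import json
import os

from sentence_transformers import SentenceTransformer
from sentence_transformers.evaluation import InformationRetrievalEvaluator

# Placeholder IR data just to show the shapes the evaluator expects.
queries = {"q1": "example query"}
corpus = {"d1": "a relevant document", "d2": "an unrelated document", "d3": "another document"}
relevant_docs = {"q1": {"d1"}}

# Assumed location where the trained model artifact has been unpacked.
model = SentenceTransformer("/opt/ml/processing/model")

evaluator = InformationRetrievalEvaluator(queries, corpus, relevant_docs, name="ir_eval")
results = evaluator(model)  # dict of IR metrics (MAP, MRR, recall@k, ...) in recent versions

output_dir = "/opt/ml/processing/evaluation"
os.makedirs(output_dir, exist_ok=True)
with open(os.path.join(output_dir, "evaluation.json"), "w") as f:
    # coerce numpy floats so json can serialize them
    json.dump({k: float(v) for k, v in results.items()}, f)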
The issue is that when I view the registered model in SageMaker Studio (the latest version, not Studio Classic), I expect metrics to appear on the “Performance” tab under “Model Metrics”, but all I get is a spinner and then “NetworkError when attempting to fetch resource.”
This didn’t happen before I added model_metrics to model.register, so I’m assuming metrics are supposed to appear there. I’ve checked that the file exists at the URI it’s registered with.
Has anyone got this working? Does the file have to be in a specific format? I tried uploading an example evaluation.json file from the tutorial and passing its URI instead, but I got the same error.
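(For reference, the tutorial file I copied is shaped roughly like this, from memory; the numbers are just placeholders.)

{
  "regression_metrics": {
    "mse": {
      "value": 0.95,
      "standard_deviation": 0.02
    }
  }
}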
Is there a way of getting a more detailed error message?
from sagemaker.model_metrics import MetricsSource, ModelMetrics
from sagemaker.workflow.functions import Join
from sagemaker.workflow.model_step import ModelStep

# Point model_statistics at the evaluation.json written by the eval step.
model_metrics = ModelMetrics(
    model_statistics=MetricsSource(
        s3_uri=Join(
            on="/",
            values=[
                evaluation_step.properties.ProcessingOutputConfig.Outputs["evaluation"].S3Output.S3Uri,
                "evaluation.json",
            ],
        ),
        content_type="application/json",
    )
)

register_args = model.register(
    content_types=["application/json"],
    response_types=["application/json"],
    transform_instances=["ml.g4dn.xlarge"],  # g5's not available for batch transform?
    inference_instances=["ml.g4dn.xlarge", "ml.g5.xlarge"],
    model_package_group_name=model_package_group_name,
    approval_status="PendingManualApproval",
    model_metrics=model_metrics,
)

register_model_step = ModelStep(name="RegisterModel", step_args=register_args)
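For completeness, this is roughly how I’m checking what actually got stored against the latest package in the group (plain boto3; model_package_group_name is the same variable as above):

import boto3

sm = boto3.client("sagemaker")

# Look up the most recently registered package in the group.
latest = sm.list_model_packages(
    ModelPackageGroupName=model_package_group_name,
    SortBy="CreationTime",
    SortOrder="Descending",
    MaxResults=1,
)["ModelPackageSummaryList"][0]

# Describe it to see what ModelMetrics the registry stored.
desc = sm.describe_model_package(ModelPackageName=latest["ModelPackageArn"])
print(desc.get("ModelMetrics"))  # echoes the S3 uri / content type from registration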