Use cases
1. Preprocessing
- SageMaker Object Detection preprocessing
- Rekognition Object Detection preprocessing
- SageMaker K-Means preprocessing
- Autopilot preprocessing
- DeepAR preprocessing
- Personalize preprocessing
- Select, drop, or extract columns
- Split dataset into train and test
- Upload to S3
- Forecast preprocessing
- Rekognition Classification preprocessing
- SageMaker Image Classification preprocessing
- XGBoost preprocessing
- BlazingText preprocessing
- Comprehend custom preprocessing
2. Training
- SageMaker Object Detection training
- Rekognition Object Detection training
- Forecast training
- Personalize training
- BlazingText training
- DeepAR training
- SageMaker K-Means training
- Comprehend custom training
- Autopilot training
- XGBoost training
- AutoGluon training
- Rekognition Classification training
- SageMaker Image Classification training
3. Inference
- SageMaker Object Detection inference
- Forecast inference
- Rekognition Object Detection inference
- Comprehend custom inference
- Personalize inference
- Autopilot inference
- BlazingText inference
- Custom SageMaker model inference
- DeepAR inference
- Rekognition Classification inference
- SageMaker Image Classification inference
- SageMaker K-Means inference
- XGBoost inference
Custom SageMaker model Inference
The Ezsmdeploy Python SDK helps you deploy machine learning models easily. It provides a rich set of features, such as passing one or more model files (including multi-model deployments), automatically choosing an instance type based on model size or on a budget, and load testing endpoints through an intuitive API. Ezsmdeploy builds on the SageMaker Python SDK, an open source library for training and deploying machine learning models on Amazon SageMaker.
Installing the Ezsmdeploy Python SDK

```
pip install ezsmdeploy
```
Key Features

At minimum, ezsmdeploy requires you to provide:

- one or more model files
- a Python script with two functions:
  - `load_model(modelpath)` - loads a model from a modelpath and returns a model object, and
  - `predict(model, input)` - performs inference based on a model object and input data
- a list of requirements or a `requirements.txt` file
For example, you can deploy a PyTorch model like this:

```python
import ezsmdeploy

ezonsm = ezsmdeploy.Deploy(model='model.pth',
                           script='modelscript_pytorch.py',
                           requirements=['numpy', 'torch', 'joblib'])
```
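The `modelscript_pytorch.py` referenced above must define the two functions described under Key Features. A minimal sketch of such a script is shown below; only the `load_model(modelpath)` and `predict(model, input)` names and signatures come from the contract above, while the specific `torch.load` and tensor-conversion details are illustrative assumptions, not the SDK's prescribed implementation:

```python
# modelscript_pytorch.py - a sketch of the script ezsmdeploy expects.
# The two required entry points are load_model and predict; everything
# else here (torch.load options, tensor dtype) is an assumption.
import torch


def load_model(modelpath):
    # Deserialize the model saved at modelpath and put it in eval mode.
    # weights_only=False lets recent PyTorch unpickle a full nn.Module.
    model = torch.load(modelpath, map_location="cpu", weights_only=False)
    model.eval()
    return model


def predict(model, input):
    # Convert the incoming payload to a tensor, run a forward pass,
    # and return a JSON-serializable result.
    with torch.no_grad():
        output = model(torch.as_tensor(input, dtype=torch.float32))
    return output.tolist()
```

With a script like this, ezsmdeploy handles wrapping the functions in a serving container, so the script stays focused on framework-specific loading and inference logic.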
Read more about the ezsmdeploy SDK here, and find sample notebooks for Scikit-learn, PyTorch, TensorFlow, and MXNet deployments here.