DeepAR Inference
Create Endpoint
If you followed the Python instructions in this link to train your DeepAR model, deploying it is as simple as:
predictor = estimator.deploy(initial_instance_count=1, instance_type='ml.m4.xlarge')
Otherwise, you can create a model and deploy it as an endpoint using the console.
First, go to your training job and click Create model.
Then create an endpoint.
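For a scripted alternative, the console steps above correspond roughly to three SageMaker API calls: CreateModel, CreateEndpointConfig, and CreateEndpoint. A minimal sketch of the parameters involved, where all names, the role ARN, the S3 path, and the image URI are placeholders to replace with values from your own training job:

```python
# Parameters for the three SageMaker API calls behind the console flow.
# Every name, ARN, S3 path, and image URI below is a placeholder --
# substitute the values from your own training job and account.
model_params = {
    "ModelName": "deepar-demo-model",
    "ExecutionRoleArn": "arn:aws:iam::123456789012:role/SageMakerRole",
    "PrimaryContainer": {
        "Image": "<region-specific-deepar-image-uri>",
        "ModelDataUrl": "s3://my-bucket/deepar/output/model.tar.gz",
    },
}

endpoint_config_params = {
    "EndpointConfigName": "deepar-demo-config",
    "ProductionVariants": [{
        "VariantName": "AllTraffic",
        "ModelName": model_params["ModelName"],
        "InitialInstanceCount": 1,
        "InstanceType": "ml.m4.xlarge",
    }],
}

endpoint_params = {
    "EndpointName": "deepar-demo-endpoint",
    "EndpointConfigName": endpoint_config_params["EndpointConfigName"],
}

# With boto3 installed and AWS credentials configured, you would run:
#   sm = boto3.client("sagemaker")
#   sm.create_model(**model_params)
#   sm.create_endpoint_config(**endpoint_config_params)
#   sm.create_endpoint(**endpoint_params)
```

Creating the endpoint this way gives you the same result as the console flow, and is easier to reproduce across accounts and regions.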
Predict
DeepAR requires the following setup to make a prediction:
First, copy existing data from a test file and add any dynamic features:
instance = [{"start": "2013-01-01 00:00:00", "target": [0, 5530, .....], ....}]
Replace the Python dict above with your own dict from a test file you generated, or use a line from the training file for demonstration purposes.
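For example, since DeepAR datasets are JSON Lines files, a single line can be parsed straight into the instance list; the series values here are made up for illustration:

```python
import json

# One line from a DeepAR JSON Lines dataset (values are illustrative).
line = '{"start": "2013-01-01 00:00:00", "target": [0, 5530, 6012, 5891]}'

# Parse the line and wrap it in a list: the endpoint expects a list of
# instances, so you can batch several series into one request.
instance = [json.loads(line)]
```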
Next, prepare the HTTP request data in the format DeepAR expects:
import json

configuration = {
    "num_samples": 100,
    "output_types": ["quantiles"],
    "quantiles": ["0.25", "0.5", "0.75"]
}
http_request_data = {
    "instances": instance,
    "configuration": configuration
}
req = json.dumps(http_request_data).encode('utf-8')
Finally, call predict:
predictor.predict(req)
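With "quantiles" in output_types, the endpoint returns a JSON body containing one prediction per instance. A sketch of pulling out the median forecast, assuming the response bytes have been decoded, and using a made-up response body for illustration:

```python
import json

# An illustrative DeepAR "quantiles" response body (values are made up;
# each quantile holds one value per predicted time step).
response_body = json.dumps({
    "predictions": [{
        "quantiles": {
            "0.25": [5100.0, 5200.0],
            "0.5":  [5500.0, 5600.0],
            "0.75": [5900.0, 6000.0],
        }
    }]
})

result = json.loads(response_body)
# The 0.5 quantile is the median forecast for the first instance.
median_forecast = result["predictions"][0]["quantiles"]["0.5"]
```

The 0.25 and 0.75 series bound a 50% prediction interval around that median.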
Learn more about inference formats here.