- Use cases
- SageMaker Object Detection preprocessing
- Rekognition Object Detection preprocessing
- SageMaker Kmeans preprocessing
- Autopilot preprocessing
- DeepAR preprocessing
- Personalize preprocessing
- Select, drop or extract Columns
- Split dataset to Train and Test
- Upload to s3
- Forecast preprocessing
- Rekognition Classification preprocessing
- SageMaker Image Classification preprocessing
- Xgboost preprocessing
- Blazingtext preprocessing
- Comprehend custom preprocessing
- SageMaker Object Detection training
- Rekognition Object Detection training
- Forecast training
- Personalize training
- BlazingText training
- DeepAR training
- SageMaker Kmeans training
- Comprehend custom training
- Autopilot training
- Xgboost training
- Autogluon training
- Rekognition Classification training
- SageMaker Image Classification training
- SageMaker Object Detection inference
- Forecast inference
- Rekognition Object Detection inference
- Comprehend custom inference
- Personalize inference
- Autopilot inference
- BlazingText inference
- Custom SageMaker model inference
- DeepAR inference
- Rekognition Classification inference
- SageMaker Image Classification inference
- SageMaker Kmeans inference
- Xgboost inference
- Contribute a use case or contact us for help.
- Frequently Asked Questions
Rekognition Object Detection preprocessing
Rekognition Object Detection deals with finding objects within an image. To train your model, Amazon Rekognition Custom Labels requires bounding boxes drawn around the objects in your images, with each object labeled.
If your image contains an object, such as a machine part or an animated character, the image needs a bounding box around the object and an object-identifying label. An image can contain multiple objects. In this step, you add object-level labels and bounding boxes to your images.
If you are looking to classify whole images or scenes instead, see Rekognition Classification preprocessing.
Upload data to S3
To make the process easy, upload and organize your data in a single folder in S3 (for example, in a bucket called rekognitioncustomlabels), especially if you have multiple objects within a single image.
Supported file formats are PNG and JPEG. The maximum number of bounding boxes in an image is 50, and the maximum number of images per dataset is 250,000. Make sure that each image is at least 64 x 64 pixels and at most 4096 x 4096 pixels.
Other limits are specified here
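The upload step and the limits above can be sketched in Python with boto3. The bucket and folder names are placeholders, and `check_limits` simply encodes the documented limits; it assumes you already know each image's dimensions and box count:

```python
import os

# Documented Rekognition Custom Labels limits (see the service quotas page)
MIN_DIM, MAX_DIM = 64, 4096          # pixels, per side
MAX_BOXES_PER_IMAGE = 50
MAX_IMAGES_PER_DATASET = 250_000
ALLOWED_EXTENSIONS = {".png", ".jpg", ".jpeg"}

def check_limits(filename, width, height, n_boxes):
    """Return True if a single image is within the documented limits."""
    ext_ok = os.path.splitext(filename)[1].lower() in ALLOWED_EXTENSIONS
    dims_ok = MIN_DIM <= width <= MAX_DIM and MIN_DIM <= height <= MAX_DIM
    boxes_ok = n_boxes <= MAX_BOXES_PER_IMAGE
    return ext_ok and dims_ok and boxes_ok

def upload_folder(local_dir, bucket, prefix=""):
    """Upload every supported image in local_dir to a single S3 folder."""
    import boto3  # imported here so check_limits stays dependency-free
    s3 = boto3.client("s3")
    for name in sorted(os.listdir(local_dir)):
        if os.path.splitext(name)[1].lower() in ALLOWED_EXTENSIONS:
            s3.upload_file(os.path.join(local_dir, name), bucket, prefix + name)
```

For example, `upload_folder("images/", "rekognitioncustomlabels")` pushes every PNG/JPEG in `images/` to the bucket root.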
Create a dataset
Navigate to the Amazon Rekognition console:
Click Use Custom Labels
On the left sidebar / menu, click Datasets
Provide a dataset name and choose Import images from S3
Switch to the S3 console and paste the bucket policy shown into the permissions of the bucket that contains your data:
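The bucket-permissions step can also be scripted. The sketch below approximates the policy the Custom Labels console displays (the exact statements may differ by console version, so prefer copying the policy the console shows you) and applies it with boto3:

```python
import json

def rekognition_bucket_policy(bucket):
    """Build a bucket policy letting the Rekognition service read the bucket.

    Approximation of the policy the Custom Labels console displays;
    copy the console's version for production use.
    """
    arn = f"arn:aws:s3:::{bucket}"
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {"Service": "rekognition.amazonaws.com"},
                "Action": ["s3:GetBucketAcl", "s3:GetBucketLocation"],
                "Resource": arn,
            },
            {
                "Effect": "Allow",
                "Principal": {"Service": "rekognition.amazonaws.com"},
                "Action": "s3:GetObject",
                "Resource": f"{arn}/*",
            },
        ],
    }

def apply_policy(bucket):
    import boto3  # lazy import: only needed when actually applying the policy
    boto3.client("s3").put_bucket_policy(
        Bucket=bucket, Policy=json.dumps(rekognition_bucket_policy(bucket))
    )
```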
Switch back to the Rekognition console, enter the S3 path, leave Automatic labeling unchecked, and click Submit
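If you prefer to script dataset creation instead of clicking through the console, the Rekognition API exposes `CreateProject` and `CreateDataset`. A minimal sketch, in which the project name and manifest location are placeholders:

```python
def dataset_source(bucket, key):
    """Build the DatasetSource payload pointing at a Ground Truth manifest."""
    return {"GroundTruthManifest": {"S3Object": {"Bucket": bucket, "Name": key}}}

def create_detection_dataset(project_name, manifest_bucket, manifest_key):
    """Create a Custom Labels project and a TRAIN dataset from a manifest.

    Requires AWS credentials with Rekognition permissions.
    """
    import boto3  # lazy import: only needed when actually calling AWS
    rek = boto3.client("rekognition")
    project_arn = rek.create_project(ProjectName=project_name)["ProjectArn"]
    rek.create_dataset(
        ProjectArn=project_arn,
        DatasetType="TRAIN",
        DatasetSource=dataset_source(manifest_bucket, manifest_key),
    )
    return project_arn
```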
On the Rekognition console, click Edit next to “Filter by labels”.
Type the label name and click Add label. Once you have added all your labels, click Save.
Now that the labels are created, you need to draw bounding boxes and tag the images with those labels. Click “Start Labeling” in the top-right corner.
Once you're in labeling mode, select the images and click “Draw bounding box”.
You will see a preview of the image, with the labels listed on the right.
Select or click a label (for example, I will select “Frank”) and draw a bounding box around the object.
If you have multiple objects in an image, repeat the same process: select the label and draw a bounding box around each object.
Go through the entire set of images in the same way, drawing a bounding box around each object and labeling it. Once you're done, click “Save changes” in the top-right corner to exit labeling mode and save the changes you made. Your dataset should look similar to the one below:
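Behind the scenes, Custom Labels stores the boxes you draw as a SageMaker Ground Truth style manifest, one JSON object per image. A sketch of building one such line; the attribute name `bounding-box` is a per-job choice rather than a fixed field, and the label “Frank” is just the example above:

```python
import json

def manifest_line(image_uri, width, height, boxes, class_map):
    """Build one Ground Truth object-detection manifest entry.

    boxes: list of (class_id, left, top, box_width, box_height) in pixels.
    Field names follow the Ground Truth output format.
    """
    return json.dumps({
        "source-ref": image_uri,
        "bounding-box": {
            "image_size": [{"width": width, "height": height, "depth": 3}],
            "annotations": [
                {"class_id": cid, "left": l, "top": t, "width": w, "height": h}
                for cid, l, t, w, h in boxes
            ],
        },
        "bounding-box-metadata": {
            "objects": [{"confidence": 1} for _ in boxes],
            "class-map": {str(cid): name for cid, name in class_map.items()},
            "type": "groundtruth/object-detection",
            "human-annotated": "yes",
        },
    })
```

For example, `manifest_line("s3://rekognitioncustomlabels/img1.jpg", 640, 480, [(0, 15, 20, 100, 80)], {0: "Frank"})` produces one manifest row with a single “Frank” box.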