Float width
Width of the bounding box as a ratio of the overall image width.
Float height
Height of the bounding box as a ratio of the overall image height.
Float left
Left coordinate of the bounding box as a ratio of overall image width.
Float top
Top coordinate of the bounding box as a ratio of overall image height.
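The width, height, left, and top values above are ratios of the overall image dimensions, so drawing or cropping the box requires scaling by the actual pixel size. A minimal sketch of that conversion (the helper class and the image dimensions are illustrative, not part of the API):

```java
// Converts a Rekognition-style ratio bounding box into pixel coordinates.
// The field order mirrors the BoundingBox shape described above.
public class BoundingBoxPixels {
    /** Returns {left, top, width, height} in pixels. */
    public static int[] toPixels(float left, float top, float width, float height,
                                 int imageWidth, int imageHeight) {
        return new int[] {
            Math.round(left * imageWidth),
            Math.round(top * imageHeight),
            Math.round(width * imageWidth),
            Math.round(height * imageHeight)
        };
    }

    public static void main(String[] args) {
        // A face occupying the center-right of a hypothetical 700x200 image.
        int[] box = toPixels(0.5f, 0.25f, 0.2f, 0.5f, 700, 200);
        System.out.println(java.util.Arrays.toString(box)); // [350, 50, 140, 100]
    }
}
```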
BoundingBox boundingBox
Float confidence
Level of confidence that the bounding box contains a face.
BoundingBox boundingBox
Float confidence
Confidence level that the selected bounding box contains a face.
Float similarity
Level of confidence that the faces match.
ComparedFace face
Provides face metadata (bounding box and confidence that the bounding box actually contains a face).
ComparedSourceImageFace sourceImageFace
The face from the source image that was used for comparison.
List<E> faceMatches
Provides an array of CompareFacesMatch objects. Each object provides the bounding box, confidence
that the bounding box contains a face, and the similarity between the face in the bounding box and the face in
the source image.
String collectionId
ID for the collection that you are creating.
String collectionId
ID of the collection to delete.
Integer statusCode
HTTP status code that indicates the result of the operation.
Image image
The image in which you want to detect faces. You can specify a blob or an S3 object.
List<E> attributes
A list of facial attributes you want to be returned. This can be the default list of attributes or all
attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"],
the API returns the following subset of facial attributes: BoundingBox, Confidence,
Pose, Quality, and Landmarks. If you provide ["ALL"], all
facial attributes are returned, but the operation takes longer to complete.
If you provide both ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which
attributes to return (in this case, all attributes).
List<E> faceDetails
Details of each face found in the image.
String orientationCorrection
The algorithm detects the image orientation. If it detects that the image was rotated, it returns the degrees of rotation. If your application displays the image, you can use this value to adjust the orientation.
For example, if the service detects that the input image was rotated by 90 degrees, it corrects orientation, performs face detection, and then returns the faces. That is, the bounding box coordinates in the response are based on the corrected orientation.
If the source image's Exif metadata populates the orientation field, Amazon Rekognition does not perform orientation correction and the value of OrientationCorrection is null.
Image image
The input image. You can provide a blob of image bytes or an S3 object.
Integer maxLabels
Maximum number of labels you want the service to return in the response. The service returns the specified number of highest confidence labels.
Float minConfidence
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value.
If MinConfidence is not specified, the operation returns labels with confidence values greater
than or equal to 50 percent.
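The 50-percent default described above can also be reproduced client-side when post-filtering results. A minimal sketch, assuming a stand-in Label record rather than the SDK's own Label class:

```java
import java.util.List;
import java.util.stream.Collectors;

// Illustrative MinConfidence cutoff: a null threshold falls back to the
// 50-percent default described in the documentation above.
public class LabelFilter {
    record Label(String name, float confidence) {}

    static final float DEFAULT_MIN_CONFIDENCE = 50f;

    public static List<Label> filter(List<Label> labels, Float minConfidence) {
        float cutoff = (minConfidence != null) ? minConfidence : DEFAULT_MIN_CONFIDENCE;
        return labels.stream()
                     .filter(l -> l.confidence() >= cutoff)
                     .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Label> labels = List.of(
            new Label("Car", 98.1f), new Label("Tree", 62.0f), new Label("Dog", 41.3f));
        System.out.println(filter(labels, null));       // keeps Car and Tree (>= 50)
        System.out.println(filter(labels, 90f).size()); // 1
    }
}
```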
List<E> labels
An array of labels for the real-world objects detected.
String orientationCorrection
Amazon Rekognition returns the orientation of the input image that was detected (clockwise direction). If your application displays the image, you can use this value to correct the orientation. If Amazon Rekognition detects that the input image was rotated (for example, by 90 degrees), it first corrects the orientation before detecting the labels.
If the source image's Exif metadata populates the orientation field, Amazon Rekognition does not perform orientation correction and the value of OrientationCorrection is null.
Image image
Float minConfidence
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value.
If you don't specify MinConfidence, the operation returns labels with confidence values greater than
or equal to 50 percent.
String faceId
Unique identifier that Amazon Rekognition assigns to the face.
BoundingBox boundingBox
String imageId
Unique identifier that Amazon Rekognition assigns to the source image.
String externalImageId
Identifier that you assign to all the faces in the input image.
Float confidence
Confidence level that the bounding box contains a face (and not a different object such as a tree).
BoundingBox boundingBox
Bounding box of the face.
AgeRange ageRange
The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age.
Smile smile
Indicates whether or not the face is smiling, and the confidence level in the determination.
Eyeglasses eyeglasses
Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.
Sunglasses sunglasses
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
Gender gender
Gender of the face and the confidence level in the determination.
Beard beard
Indicates whether or not the face has a beard, and the confidence level in the determination.
Mustache mustache
Indicates whether or not the face has a mustache, and the confidence level in the determination.
EyeOpen eyesOpen
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
MouthOpen mouthOpen
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
List<E> emotions
The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.
List<E> landmarks
Indicates the locations of landmarks on the face.
Pose pose
Indicates the pose of the face as determined by its pitch, roll, and yaw.
ImageQuality quality
Identifies image brightness and sharpness.
Float confidence
Confidence level that the bounding box contains a face (and not a different object such as a tree).
Face face
FaceDetail faceDetail
ByteBuffer bytes
Blob of image bytes up to 5 MB.
S3Object s3Object
Identifies an S3 object as the image source.
Float brightness
Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.
Float sharpness
Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.
String collectionId
The ID of an existing collection to which you want to add the faces that are detected in the input images.
Image image
String externalImageId
ID you want to assign to all the faces detected in the image.
List<E> detectionAttributes
A list of facial attributes that you want to be returned. This can be the default list of attributes or all
attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"],
the API returns the following subset of facial attributes: BoundingBox, Confidence,
Pose, Quality, and Landmarks. If you provide ["ALL"], all
facial attributes are returned, but the operation takes longer to complete.
If you provide both ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which
attributes to return (in this case, all attributes).
List<E> faceRecords
An array of faces detected and added to the collection. For more information, see howitworks-index-faces.
String orientationCorrection
The algorithm detects the image orientation. If it detects that the image was rotated, it returns the degree of rotation. You can use this value to correct the orientation and also appropriately analyze the bounding box coordinates that are returned.
If the source image's Exif metadata populates the orientation field, Amazon Rekognition does not perform orientation correction and the value of OrientationCorrection is null.
String type
Type of the landmark.
Float x
x-coordinate of the landmark, measured from the top left of the image and expressed as a ratio of the image width. For example, if the image is 700x200 and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.
Float y
y-coordinate of the landmark, measured from the top left of the image and expressed as a ratio of the image height. For example, if the image is 700x200 and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.
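The same ratio-to-pixel conversion used for bounding boxes applies to landmark coordinates. A sketch of the 700x200 example from the descriptions above (the helper class is illustrative, not part of the API):

```java
// Converts a landmark's ratio coordinates to pixel coordinates.
public class LandmarkPixels {
    /** Returns {x, y} in pixels. */
    public static int[] toPixels(float x, float y, int imageWidth, int imageHeight) {
        return new int[] { Math.round(x * imageWidth), Math.round(y * imageHeight) };
    }

    public static void main(String[] args) {
        // The documented example: in a 700x200 image, a landmark at
        // x = 0.5, y = 0.5 sits at pixel (350, 100).
        int[] p = toPixels(0.5f, 0.5f, 700, 200);
        System.out.println(p[0] + "," + p[1]); // 350,100
    }
}
```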
String collectionId
ID of the collection from which to list the faces.
String nextToken
If the previous response was incomplete (because there is more data to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.
Integer maxResults
Maximum number of faces to return.
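NextToken pagination follows the usual loop: call, collect the page, and repeat while a token is returned. A minimal sketch of that pattern with an in-memory stand-in for the ListFaces call (no real Rekognition client is used here; the token encoding is an assumption for illustration only):

```java
import java.util.ArrayList;
import java.util.List;

// Illustrates the NextToken pagination loop described above.
public class FacePagination {
    record Page(List<String> faceIds, String nextToken) {}

    // Simulated ListFaces: returns up to maxResults items starting at the
    // position encoded in the token. The real service's token is opaque.
    static Page listFaces(List<String> all, String token, int maxResults) {
        int start = (token == null) ? 0 : Integer.parseInt(token);
        int end = Math.min(start + maxResults, all.size());
        String next = (end < all.size()) ? Integer.toString(end) : null;
        return new Page(all.subList(start, end), next);
    }

    /** Collects every face ID by following NextToken until it is null. */
    public static List<String> listAll(List<String> all, int maxResults) {
        List<String> out = new ArrayList<>();
        String token = null;
        do {
            Page page = listFaces(all, token, maxResults);
            out.addAll(page.faceIds());
            token = page.nextToken();
        } while (token != null);
        return out;
    }

    public static void main(String[] args) {
        List<String> ids = List.of("f1", "f2", "f3", "f4", "f5");
        System.out.println(listAll(ids, 2)); // [f1, f2, f3, f4, f5]
    }
}
```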
Float confidence
Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.
If you don't specify the MinConfidence parameter in the call to DetectModerationLabels,
the operation returns labels with a confidence value greater than or equal to 50 percent.
String name
The label name for the type of content detected in the image.
String parentName
The name for the parent label. Labels at the top level of the hierarchy have the parent label "" (an empty string).
String collectionId
ID of the collection to search.
Image image
Integer maxFaces
Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
Float faceMatchThreshold
(Optional) Specifies the minimum confidence in the face match to return. For example, don't return any matches with a confidence lower than 70 percent.
BoundingBox searchedFaceBoundingBox
The bounding box around the face in the input image that Amazon Rekognition used for the search.
Float searchedFaceConfidence
The level of confidence that the searchedFaceBoundingBox contains a face.
List<E> faceMatches
An array of faces that match the input face, along with the confidence in the match.
String collectionId
ID of the collection the face belongs to.
String faceId
ID of a face to find matches for in the collection.
Integer maxFaces
Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
Float faceMatchThreshold
Optional value specifying the minimum confidence in the face match to return. For example, don't return any matches with a confidence lower than 70 percent.
Copyright © 2017. All rights reserved.