Float width
Width of the bounding box as a ratio of the overall image width.
Float height
Height of the bounding box as a ratio of the overall image height.
Float left
Left coordinate of the bounding box as a ratio of overall image width.
Float top
Top coordinate of the bounding box as a ratio of overall image height.
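Because width, height, left, and top are all ratios of the image dimensions, drawing the box on a rendered image only requires multiplying by the pixel width and height. A minimal sketch (`BoundingBoxUtil` is an illustrative helper, not part of the Rekognition SDK):

```java
// Converts a ratio-based bounding box to absolute pixel coordinates.
// Illustrative helper; BoundingBoxUtil is not a Rekognition SDK class.
public class BoundingBoxUtil {
    /** Returns {x, y, w, h} in pixels for an image of the given size. */
    public static int[] toPixelRect(float left, float top, float width, float height,
                                    int imageWidth, int imageHeight) {
        return new int[] {
            Math.round(left * imageWidth),    // x of the top-left corner
            Math.round(top * imageHeight),    // y of the top-left corner
            Math.round(width * imageWidth),   // box width in pixels
            Math.round(height * imageHeight)  // box height in pixels
        };
    }

    public static void main(String[] args) {
        // A box at left=0.25, top=0.5, width=0.1, height=0.2 on an 800x600 image
        int[] rect = toPixelRect(0.25f, 0.5f, 0.1f, 0.2f, 800, 600);
        System.out.printf("x=%d y=%d w=%d h=%d%n", rect[0], rect[1], rect[2], rect[3]);
        // x=200 y=300 w=80 h=120
    }
}
```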
List<E> urls
An array of URLs pointing to additional information about the celebrity. If there is no additional information about the celebrity, this list is empty.
String name
The name of the celebrity.
String id
A unique identifier for the celebrity.
ComparedFace face
Provides information about the celebrity's face, such as its location on the image.
Float matchConfidence
The confidence, in percentage, that Rekognition has that the recognized face is the celebrity.
BoundingBox boundingBox
Bounding box of the face.
Float confidence
Level of confidence that what the bounding box contains is a face.
List<E> landmarks
An array of facial landmarks.
Pose pose
Indicates the pose of the face as determined by its pitch, roll, and yaw.
ImageQuality quality
Identifies face image brightness and sharpness.
BoundingBox boundingBox
Bounding box of the face.
Float confidence
Confidence level that the selected bounding box contains a face.
Float similarity
Level of confidence that the faces match.
ComparedFace face
Provides face metadata (bounding box and confidence that the bounding box actually contains a face).
Image sourceImage
The source image, either as bytes or as an S3 object.
Image targetImage
The target image, either as bytes or as an S3 object.
Float similarityThreshold
The minimum level of confidence in the face matches that a match must meet to be included in the
FaceMatches array.
ComparedSourceImageFace sourceImageFace
The face in the source image that was used for comparison.
List<E> faceMatches
An array of faces in the target image that match the source image face. Each CompareFacesMatch
object provides the bounding box, the confidence level that the bounding box contains a face, and the similarity
score for the face in the bounding box and the face in the source image.
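Given those per-match similarity scores, a caller often keeps only the strongest matches on the client side as well. A hedged sketch using a stand-in `Match` record rather than the SDK's `CompareFacesMatch` class:

```java
import java.util.*;

// Illustrative stand-in for a CompareFacesMatch entry: a similarity score
// plus the matched face's ratio-based bounding box. Not an SDK class.
public class FaceMatchFilter {
    public record Match(float similarity, float left, float top, float width, float height) {}

    /** Returns the matches at or above the given similarity, best first. */
    public static List<Match> atLeast(List<Match> matches, float minSimilarity) {
        return matches.stream()
                .filter(m -> m.similarity() >= minSimilarity)
                .sorted(Comparator.comparing(Match::similarity).reversed())
                .toList();
    }

    public static void main(String[] args) {
        List<Match> matches = List.of(
                new Match(98.2f, 0.1f, 0.1f, 0.2f, 0.3f),
                new Match(65.0f, 0.5f, 0.4f, 0.2f, 0.3f),
                new Match(90.1f, 0.3f, 0.2f, 0.2f, 0.3f));
        // Keep only matches with similarity >= 90, highest first
        for (Match m : atLeast(matches, 90f)) {
            System.out.println(m.similarity());
        }
    }
}
```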
List<E> unmatchedFaces
An array of faces in the target image that did not match the source image face.
String sourceImageOrientationCorrection
The orientation of the source image (counterclockwise direction). If your application displays the source image,
you can use this value to correct image orientation. The bounding box coordinates returned in
SourceImageFace represent the location of the face before the image orientation is corrected.
If the source image is in .jpeg format, it might contain exchangeable image (Exif) metadata that includes the
image's orientation. If the Exif metadata for the source image populates the orientation field, the value of
OrientationCorrection is null and the SourceImageFace bounding box coordinates
represent the location of the face after Exif metadata is used to correct the orientation. Images in .png format
don't contain Exif metadata.
String targetImageOrientationCorrection
The orientation of the target image (in counterclockwise direction). If your application displays the target
image, you can use this value to correct the orientation of the image. The bounding box coordinates returned in
FaceMatches and UnmatchedFaces represent face locations before the image orientation is
corrected.
If the target image is in .jpeg format, it might contain Exif metadata that includes the orientation of the image.
If the Exif metadata for the target image populates the orientation field, the value of
OrientationCorrection is null and the bounding box coordinates in FaceMatches and
UnmatchedFaces represent the location of the face after Exif metadata is used to correct the
orientation. Images in .png format don't contain Exif metadata.
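When the orientation value is non-null and the application rotates the displayed image to correct it, the returned ratio-based bounding boxes must be remapped too. The sketch below assumes the value names a counterclockwise rotation applied to the image; verify against your display pipeline which direction the correction value implies before relying on these mappings:

```java
// Remaps a ratio-based bounding box when the image is rotated counterclockwise.
// A sketch only: the 90/180/270 mappings assume the box travels with the
// pixels as the image rotates, which you should confirm for your pipeline.
public class BoxRotation {
    /** box = {left, top, width, height}; degreesCcw in {0, 90, 180, 270}. */
    public static float[] rotateCcw(float[] b, int degreesCcw) {
        float left = b[0], top = b[1], w = b[2], h = b[3];
        return switch (degreesCcw) {
            case 0   -> new float[] { left, top, w, h };
            case 90  -> new float[] { top, 1f - left - w, h, w };
            case 180 -> new float[] { 1f - left - w, 1f - top - h, w, h };
            case 270 -> new float[] { 1f - top - h, left, h, w };
            default  -> throw new IllegalArgumentException("unsupported rotation");
        };
    }

    public static void main(String[] args) {
        // A box hugging the top-left corner ends up at the bottom-left after 90 deg CCW
        float[] r = rotateCcw(new float[] { 0f, 0f, 0.2f, 0.1f }, 90);
        System.out.printf("left=%.1f top=%.1f w=%.1f h=%.1f%n", r[0], r[1], r[2], r[3]);
    }
}
```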
String collectionId
ID for the collection that you are creating.
String collectionId
ID of the collection to delete.
Integer statusCode
HTTP status code that indicates the result of the operation.
Image image
The image in which you want to detect faces. You can specify a blob or an S3 object.
List<E> attributes
An array of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide ["ALL"], all facial attributes are returned, but the operation takes longer to complete.
If you provide both, as in ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).
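The logical-AND rule reduces to a simple check: any request that includes "ALL" returns everything, and anything else returns the default subset. A small sketch (the class is illustrative, not SDK behavior code):

```java
import java.util.*;

// Resolves the Attributes parameter the way the text describes: an empty
// request or ["DEFAULT"] yields the default subset, and any request that
// includes "ALL" yields everything (ALL AND DEFAULT is still ALL).
public class AttributeResolution {
    static final List<String> DEFAULT_SUBSET =
            List.of("BoundingBox", "Confidence", "Pose", "Quality", "Landmarks");

    public static boolean returnsAllAttributes(List<String> requested) {
        return requested != null && requested.contains("ALL");
    }

    public static void main(String[] args) {
        System.out.println(returnsAllAttributes(List.of("DEFAULT")));        // false
        System.out.println(returnsAllAttributes(List.of("ALL", "DEFAULT"))); // true
        System.out.println(DEFAULT_SUBSET); // what a default request returns
    }
}
```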
List<E> faceDetails
Details of each face found in the image.
String orientationCorrection
The orientation of the input image (counterclockwise direction). If your application displays the image, you can
use this value to correct image orientation. The bounding box coordinates returned in FaceDetails
represent face locations before the image orientation is corrected.
If the input image is in .jpeg format, it might contain exchangeable image (Exif) metadata that includes the
image's orientation. If so, and the Exif metadata for the input image populates the orientation field, the value
of OrientationCorrection is null and the FaceDetails bounding box coordinates represent
face locations after Exif metadata is used to correct the image orientation. Images in .png format don't contain
Exif metadata.
Image image
The input image. You can provide a blob of image bytes or an S3 object.
Integer maxLabels
Maximum number of labels you want the service to return in the response. The service returns the specified number of highest-confidence labels.
Float minConfidence
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with confidence lower than this specified value.
If MinConfidence is not specified, the operation returns labels with confidence values greater than or equal to 50 percent.
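The same threshold can be applied (or tightened) on the client side. A hedged sketch using a stand-in `Label` record rather than the SDK's Label class:

```java
import java.util.*;

// Client-side sketch of the MinConfidence behavior: keep labels whose
// confidence is >= the threshold, falling back to the service's 50-percent
// default when none is given. Label is an illustrative stand-in class.
public class LabelFilter {
    public record Label(String name, float confidence) {}

    public static List<Label> filter(List<Label> labels, Float minConfidence) {
        float threshold = (minConfidence == null) ? 50f : minConfidence; // service default
        return labels.stream().filter(l -> l.confidence() >= threshold).toList();
    }

    public static void main(String[] args) {
        List<Label> labels = List.of(
                new Label("Car", 97.3f), new Label("Tree", 61.0f), new Label("Dog", 42.5f));
        // With no MinConfidence, the 50-percent default drops "Dog"
        filter(labels, null).forEach(l -> System.out.println(l.name()));
    }
}
```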
List<E> labels
An array of labels for the real-world objects detected.
String orientationCorrection
The orientation of the input image (counterclockwise direction). If your application displays the image, you can use this value to correct the orientation. If Amazon Rekognition detects that the input image was rotated (for example, by 90 degrees), it first corrects the orientation before detecting the labels.
If the input image Exif metadata populates the orientation field, Amazon Rekognition does not perform orientation correction and the value of OrientationCorrection will be null.
Image image
The input image as bytes or an S3 object.
Float minConfidence
Specifies the minimum confidence level for the labels to return. Amazon Rekognition doesn't return any labels with a confidence level lower than this specified value.
If you don't specify MinConfidence, the operation returns labels with confidence values greater than
or equal to 50 percent.
String faceId
Unique identifier that Amazon Rekognition assigns to the face.
BoundingBox boundingBox
Bounding box of the face.
String imageId
Unique identifier that Amazon Rekognition assigns to the input image.
String externalImageId
Identifier that you assign to all the faces in the input image.
Float confidence
Confidence level that the bounding box contains a face (and not a different object such as a tree).
BoundingBox boundingBox
Bounding box of the face.
AgeRange ageRange
The estimated age range, in years, for the face. Low represents the lowest estimated age and High represents the highest estimated age.
Smile smile
Indicates whether or not the face is smiling, and the confidence level in the determination.
Eyeglasses eyeglasses
Indicates whether or not the face is wearing eye glasses, and the confidence level in the determination.
Sunglasses sunglasses
Indicates whether or not the face is wearing sunglasses, and the confidence level in the determination.
Gender gender
Gender of the face and the confidence level in the determination.
Beard beard
Indicates whether or not the face has a beard, and the confidence level in the determination.
Mustache mustache
Indicates whether or not the face has a mustache, and the confidence level in the determination.
EyeOpen eyesOpen
Indicates whether or not the eyes on the face are open, and the confidence level in the determination.
MouthOpen mouthOpen
Indicates whether or not the mouth on the face is open, and the confidence level in the determination.
List<E> emotions
The emotions detected on the face, and the confidence level in the determination. For example, HAPPY, SAD, and ANGRY.
List<E> landmarks
Indicates the location of landmarks on the face.
Pose pose
Indicates the pose of the face as determined by its pitch, roll, and yaw.
ImageQuality quality
Identifies image brightness and sharpness.
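A common way to consume the emotions array is to pick the single highest-confidence entry as the dominant emotion. A sketch with a stand-in `Emotion` record (not the SDK class):

```java
import java.util.*;

// Picks the most confident entry from the emotions array. Emotion is an
// illustrative stand-in; type values mirror names like HAPPY, SAD, ANGRY.
public class DominantEmotion {
    public record Emotion(String type, float confidence) {}

    /** Empty input yields an empty Optional rather than a default emotion. */
    public static Optional<Emotion> dominant(List<Emotion> emotions) {
        return emotions.stream().max(Comparator.comparing(Emotion::confidence));
    }

    public static void main(String[] args) {
        List<Emotion> emotions = List.of(
                new Emotion("HAPPY", 88.1f), new Emotion("SAD", 4.2f), new Emotion("ANGRY", 1.1f));
        dominant(emotions).ifPresent(e -> System.out.println(e.type()));
    }
}
```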
Float confidence
Confidence level that the bounding box contains a face (and not a different object such as a tree).
Face face
Describes the face properties such as the bounding box, face ID, image ID of the input image, and external image ID that you assigned.
FaceDetail faceDetail
Structure containing attributes of the face that the algorithm detected.
String id
The ID for the celebrity. You get the celebrity ID from a call to the RecognizeCelebrities operation, which recognizes celebrities in an image.
ByteBuffer bytes
Blob of image bytes up to 5 MB.
S3Object s3Object
Identifies an S3 object as the image source.
Float brightness
Value representing brightness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a brighter face image.
Float sharpness
Value representing sharpness of the face. The service returns a value between 0 and 100 (inclusive). A higher value indicates a sharper face image.
String collectionId
The ID of an existing collection to which you want to add the faces that are detected in the input images.
Image image
The input image as bytes or an S3 object.
String externalImageId
ID you want to assign to all the faces detected in the image.
List<E> detectionAttributes
An array of facial attributes that you want to be returned. This can be the default list of attributes or all attributes. If you don't specify a value for Attributes or if you specify ["DEFAULT"], the API returns the following subset of facial attributes: BoundingBox, Confidence, Pose, Quality, and Landmarks. If you provide ["ALL"], all facial attributes are returned, but the operation takes longer to complete.
If you provide both, as in ["ALL", "DEFAULT"], the service uses a logical AND operator to determine which attributes to return (in this case, all attributes).
List<E> faceRecords
An array of faces detected and added to the collection. For more information, see Adding Faces to a Collection in the Amazon Rekognition Developer Guide.
String orientationCorrection
The orientation of the input image (counterclockwise direction). If your application displays the image, you can
use this value to correct image orientation. The bounding box coordinates returned in FaceRecords
represent face locations before the image orientation is corrected.
If the input image is in jpeg format, it might contain exchangeable image (Exif) metadata. If so, and the Exif
metadata populates the orientation field, the value of OrientationCorrection is null and the
bounding box coordinates in FaceRecords represent face locations after Exif metadata is used to
correct the image orientation. Images in .png format don't contain Exif metadata.
String type
Type of the landmark.
Float x
x-coordinate of the landmark, measured from the top left and expressed as a ratio of the width of the image. For example, if the image is 700x200 pixels and the x-coordinate of the landmark is at 350 pixels, this value is 0.5.
Float y
y-coordinate of the landmark, measured from the top left and expressed as a ratio of the height of the image. For example, if the image is 700x200 pixels and the y-coordinate of the landmark is at 100 pixels, this value is 0.5.
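The worked example above is just a multiplication by the pixel dimensions. A minimal sketch (`LandmarkPixels` is an illustrative helper, not an SDK class):

```java
// Converts a landmark's ratio coordinates to pixels, matching the 700x200
// example above: x=0.5 on a 700-pixel-wide image is pixel 350.
public class LandmarkPixels {
    public static int[] toPixels(float x, float y, int imageWidth, int imageHeight) {
        return new int[] { Math.round(x * imageWidth), Math.round(y * imageHeight) };
    }

    public static void main(String[] args) {
        int[] p = toPixels(0.5f, 0.5f, 700, 200);
        System.out.println(p[0] + "," + p[1]); // 350,100
    }
}
```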
String collectionId
ID of the collection from which to list the faces.
String nextToken
If the previous response was incomplete (because there is more data to retrieve), Amazon Rekognition returns a pagination token in the response. You can use this pagination token to retrieve the next set of faces.
Integer maxResults
Maximum number of faces to return.
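The NextToken/MaxResults pair implies the standard pagination loop: call, accumulate, and repeat while a token comes back. The sketch below simulates the paged service with a local list so it runs standalone; the real calls would use ListFacesRequest and ListFacesResult from the Rekognition SDK:

```java
import java.util.*;

// Sketch of the NextToken pagination loop. listFaces here simulates a
// service that returns at most maxResults items per call plus a token;
// it stands in for the actual ListFaces API call.
public class Pagination {
    record Page(List<String> faces, String nextToken) {}

    static Page listFaces(List<String> all, String token, int maxResults) {
        int start = (token == null) ? 0 : Integer.parseInt(token);
        int end = Math.min(start + maxResults, all.size());
        String next = (end < all.size()) ? String.valueOf(end) : null;
        return new Page(all.subList(start, end), next);
    }

    public static List<String> collectAll(List<String> all, int maxResults) {
        List<String> result = new ArrayList<>();
        String token = null;
        do { // keep calling until the service stops returning a token
            Page page = listFaces(all, token, maxResults);
            result.addAll(page.faces());
            token = page.nextToken();
        } while (token != null);
        return result;
    }

    public static void main(String[] args) {
        List<String> faceIds = List.of("f1", "f2", "f3", "f4", "f5");
        System.out.println(collectAll(faceIds, 2)); // [f1, f2, f3, f4, f5]
    }
}
```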
Float confidence
Specifies the confidence that Amazon Rekognition has that the label has been correctly identified.
If you don't specify the MinConfidence parameter in the call to DetectModerationLabels,
the operation returns labels with a confidence value greater than or equal to 50 percent.
String name
The label name for the type of content detected in the image.
String parentName
The name for the parent label. Labels at the top-level of the hierarchy have the parent label "".
Image image
The input image to use for celebrity recognition.
List<E> celebrityFaces
Details about each celebrity found in the image. Amazon Rekognition can detect a maximum of 15 celebrities in an image.
List<E> unrecognizedFaces
Details about each unrecognized face in the image.
String orientationCorrection
The orientation of the input image (counterclockwise direction). If your application displays the image, you can
use this value to correct the orientation. The bounding box coordinates returned in CelebrityFaces
and UnrecognizedFaces represent face locations before the image orientation is corrected.
If the input image is in .jpeg format, it might contain exchangeable image (Exif) metadata that includes the
image's orientation. If so, and the Exif metadata for the input image populates the orientation field, the value
of OrientationCorrection is null and the CelebrityFaces and
UnrecognizedFaces bounding box coordinates represent face locations after Exif metadata is used to
correct the image orientation. Images in .png format don't contain Exif metadata.
String collectionId
ID of the collection to search.
Image image
The input image as bytes or an S3 object.
Integer maxFaces
Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
Float faceMatchThreshold
(Optional) Specifies the minimum confidence in the face match to return. For example, don't return any matches with a confidence lower than 70%.
BoundingBox searchedFaceBoundingBox
The bounding box around the face in the input image that Amazon Rekognition used for the search.
Float searchedFaceConfidence
The level of confidence that the searchedFaceBoundingBox contains a face.
List<E> faceMatches
An array of faces that match the input face, along with the confidence in the match.
String collectionId
ID of the collection the face belongs to.
String faceId
ID of a face to find matches for in the collection.
Integer maxFaces
Maximum number of faces to return. The operation returns the maximum number of faces with the highest confidence in the match.
Float faceMatchThreshold
Optional value specifying the minimum confidence in the face match to return. For example, don't return any matches with a confidence lower than 70%.
Copyright © 2017. All rights reserved.