AWS::Rekognition::StreamProcessor

The AWS::Rekognition::StreamProcessor type is used to create an Amazon Rekognition StreamProcessor that you can use to analyze streaming videos.

Source: https://github.com/aws-cloudformation/aws-cloudformation-rpdk.git
Properties
RoleArn: ARN of the IAM role that allows access to the stream processor and provides Rekognition read permissions for the KVS stream and write permissions to the S3 bucket and SNS topic.
KinesisVideoStream: The Kinesis Video Stream that streams the source video.
  Arn: ARN of the Kinesis Video Stream that streams the source video.
Name: Name of the stream processor. It's an identifier you assign to the stream processor; you can use it to manage the stream processor.
KmsKeyId: The KMS key that Rekognition uses to encrypt any intermediate customer metadata stored in the customer's S3 bucket.
FaceSearchSettings: Face search settings to use on a streaming video. Either FaceSearchSettings or ConnectedHomeSettings must be set, but not both.
  CollectionId: The ID of a collection that contains faces that you want to search for.
  FaceMatchThreshold: Minimum face match confidence score percentage that must be met to return a result for a recognized face. The default is 80; 0 is the lowest confidence and 100 is the highest. Values between 0 and 100 are accepted.
ConnectedHomeSettings: Connected home settings to use on a streaming video. Either ConnectedHomeSettings or FaceSearchSettings must be set, but not both.
  Labels: List of labels that need to be detected in the video stream. Currently supported values are PERSON, PET, PACKAGE, and ALL.
  MinConfidence: Minimum object class match confidence score that must be met to return a result for a recognized object.
KinesisDataStream: The Amazon Kinesis Data Stream to which the Amazon Rekognition stream processor streams the analysis results, as part of the face search feature.
  Arn: ARN of the Kinesis Data Stream.
S3Destination: The S3 location in the customer's account where inference output and artifacts are stored, as part of the connected home feature.
  BucketName: Name of the S3 bucket.
  ObjectKeyPrefix: The object key prefix path where the results will be stored. The default is no prefix path.
NotificationChannel: The ARN of the SNS notification channel where events of interest are published, as part of the connected home feature.
  Arn: ARN of the SNS topic.
DataSharingPreference: Indicates whether Rekognition is allowed to store the video stream data for model training.
  OptIn: Flag to enable data sharing.
PolygonRegionsOfInterest: Specifies a set of polygon areas of interest in the video frames to analyze, as part of the connected home feature. Each polygon is, in turn, an ordered list of Point entries.
BoundingBoxRegionsOfInterest: Specifies an array of bounding boxes of interest in the video frames to analyze, as part of the connected home feature. If an object is partially in a region of interest, Rekognition tags it as detected when the overlap of the object with the region of interest is greater than 20%.
Tags: An array of key-value pairs to apply to this resource.
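As a sketch of how these properties fit together, the following is a minimal CloudFormation template fragment for a connected-home stream processor. All ARNs, names, and the bucket are placeholders, not real resources:

```yaml
Resources:
  MyStreamProcessor:
    Type: AWS::Rekognition::StreamProcessor
    Properties:
      Name: my-connected-home-processor  # placeholder name
      RoleArn: arn:aws:iam::123456789012:role/RekognitionStreamRole  # placeholder ARN
      KinesisVideoStream:
        Arn: arn:aws:kinesisvideo:us-east-1:123456789012:stream/my-kvs/1234567890123  # placeholder ARN
      ConnectedHomeSettings:  # either this or FaceSearchSettings, not both
        Labels:
          - PERSON
          - PACKAGE
        MinConfidence: 70
      S3Destination:
        BucketName: my-inference-output-bucket  # placeholder bucket
        ObjectKeyPrefix: stream-processor/results
      NotificationChannel:
        Arn: arn:aws:sns:us-east-1:123456789012:stream-events  # placeholder ARN
      DataSharingPreference:
        OptIn: false
      Tags:
        - Key: project
          Value: demo
```

For the face search variant, you would instead set FaceSearchSettings (with CollectionId and FaceMatchThreshold) together with a KinesisDataStream for the analysis results.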
Definitions
Arn: The ARN of the stream processor.
KinesisVideoStream: The Kinesis Video Stream that streams the source video.
  Arn: ARN of the Kinesis Video Stream that streams the source video.
S3Destination: The S3 location in the customer's account where inference output and artifacts are stored, as part of the connected home feature.
  BucketName: Name of the S3 bucket.
  ObjectKeyPrefix: The object key prefix path where the results will be stored. The default is no prefix path.
KinesisDataStream: The Amazon Kinesis Data Stream to which the Amazon Rekognition stream processor streams the analysis results, as part of the face search feature.
  Arn: ARN of the Kinesis Data Stream.
Labels: List of labels that need to be detected in the video stream. Currently supported values are PERSON, PET, PACKAGE, and ALL.
ConnectedHomeSettings: Connected home settings to use on a streaming video. Either ConnectedHomeSettings or FaceSearchSettings must be set, but not both.
  Labels: List of labels that need to be detected in the video stream. Currently supported values are PERSON, PET, PACKAGE, and ALL.
  MinConfidence: Minimum object class match confidence score that must be met to return a result for a recognized object.
FaceSearchSettings: Face search settings to use on a streaming video. Either FaceSearchSettings or ConnectedHomeSettings must be set, but not both.
  CollectionId: The ID of a collection that contains faces that you want to search for.
  FaceMatchThreshold: Minimum face match confidence score percentage that must be met to return a result for a recognized face. The default is 80; 0 is the lowest confidence and 100 is the highest. Values between 0 and 100 are accepted.
NotificationChannel: The ARN of the SNS notification channel where events of interest are published, as part of the connected home feature.
  Arn: ARN of the SNS topic.
Point: An (X, Y) cartesian coordinate denoting a point on the frame.
  X: The X coordinate of the point.
  Y: The Y coordinate of the point.
Polygon: A polygon showing a region of interest. Note that the ordering of the Point entries matters in defining the polygon.
BoundingBox: A bounding box denoting a region of interest in the frame to be analyzed.
DataSharingPreference: Indicates whether Rekognition is allowed to store the video stream data for model training.
  OptIn: Flag to enable data sharing.
Tag: A key-value pair to associate with a resource.
  Key: The key name of the tag. You can specify a value that is 1 to 128 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
  Value: The value for the tag. You can specify a value that is 0 to 256 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
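To illustrate the Point, Polygon, and BoundingBox shapes, here is a hedged YAML sketch of the two regions-of-interest properties. The coordinate values are illustrative only, and the fragment assumes the standard Rekognition convention that coordinates and box dimensions are expressed as ratios of the frame's width and height (0.0 to 1.0), with the bounding box described by Height, Width, Left, and Top:

```yaml
# Illustrative regions-of-interest fragment (values are placeholders).
PolygonRegionsOfInterest:
  - # One polygon: an ordered list of Points (ordering matters)
    - X: 0.1
      Y: 0.1
    - X: 0.5
      Y: 0.1
    - X: 0.3
      Y: 0.6
BoundingBoxRegionsOfInterest:
  - Height: 0.25
    Width: 0.25
    Left: 0.1
    Top: 0.1
```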