AWS::SageMaker::ModelExplainabilityJobDefinition
Resource type definition for AWS::SageMaker::ModelExplainabilityJobDefinition.
Properties
Container image configuration object for the monitoring job.
3 nested properties
The container image to be run by the monitoring job.
The Amazon S3 URI.
Sets the environment variables in the Docker container.
The inputs for a monitoring job.
2 nested properties
The endpoint for a monitoring job.
7 nested properties
The name of the endpoint used to run the monitoring job.
Path to the filesystem where the endpoint data is available to the container.
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
The JSONPath used to locate the features in a JSON Lines dataset.
The index or JSONPath used to locate the predicted label(s).
The index or JSONPath used to locate the probabilities.
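As a sketch of how the endpoint input properties above map onto a CloudFormation template (the endpoint name, local path, and JSONPath values are illustrative placeholders, not values from this document):

```yaml
ModelExplainabilityJobInput:
  EndpointInput:
    EndpointName: my-endpoint                 # placeholder endpoint name
    LocalPath: /opt/ml/processing/input       # where the container reads the data
    S3DataDistributionType: FullyReplicated   # the default
    S3InputMode: File                         # the default; Pipe suits large datasets
    FeaturesAttribute: "features"             # JSONPath for features (illustrative)
    InferenceAttribute: "predicted_label"     # predicted label(s) (illustrative)
    ProbabilityAttribute: "probability"       # probabilities (illustrative)
```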
The batch transform input for a monitoring job.
8 nested properties
A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
The dataset format of the data to monitor.
Path to the filesystem where the endpoint data is available to the container.
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
The JSONPath used to locate the features in a JSON Lines dataset.
The index or JSONPath used to locate the predicted label(s).
The index or JSONPath used to locate the probabilities.
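The batch transform input can be sketched the same way; the bucket name and dataset format below are placeholders:

```yaml
ModelExplainabilityJobInput:
  BatchTransformInput:
    DataCapturedDestinationS3Uri: s3://my-bucket/data-capture/   # placeholder
    DatasetFormat:
      Csv:
        Header: true        # the captured CSV has a header row
    LocalPath: /opt/ml/processing/input
    S3DataDistributionType: FullyReplicated
    S3InputMode: File
```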
The output configuration for monitoring jobs.
2 nested properties
Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
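A minimal output configuration might look like the following sketch (the bucket path and KMS alias are placeholders; KmsKeyId is optional):

```yaml
ModelExplainabilityJobOutputConfig:
  MonitoringOutputs:
    - S3Output:
        S3Uri: s3://my-bucket/explainability-results/   # placeholder
        LocalPath: /opt/ml/processing/output
        S3UploadMode: EndOfJob       # or Continuous
  KmsKeyId: alias/my-output-key      # optional, placeholder
```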
Identifies the resources to deploy for a monitoring job.
1 nested property
Configuration for the cluster used to run model monitoring jobs.
4 nested properties
The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
The ML compute instance type for the processing job.
The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
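The cluster properties above can be sketched as a JobResources fragment (instance type and KMS alias are illustrative; VolumeKmsKeyId is optional):

```yaml
JobResources:
  ClusterConfig:
    InstanceCount: 1                      # the default; use >1 for distributed jobs
    InstanceType: ml.m5.xlarge            # illustrative instance type
    VolumeSizeInGB: 20                    # size sufficient for your scenario
    VolumeKmsKeyId: alias/my-volume-key   # optional, placeholder
```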
The Amazon Resource Name (ARN) of an IAM role that Amazon SageMaker can assume to perform tasks on your behalf.
The name of the job definition.
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
2 nested properties
The name of a processing job.
The baseline constraints resource for a monitoring job.
1 nested property
The Amazon S3 URI.
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
3 nested properties
Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
2 nested properties
The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
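The networking options above might be combined like this (the security group and subnet IDs are placeholders):

```yaml
NetworkConfig:
  EnableInterContainerTrafficEncryption: true
  EnableNetworkIsolation: false
  VpcConfig:
    SecurityGroupIds:
      - sg-0123456789abcdef0       # placeholder
    Subnets:
      - subnet-0123456789abcdef0   # placeholder
```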
The name of the endpoint used to run the monitoring job.
Specifies a time limit for how long the monitoring job is allowed to run.
1 nested property
The maximum runtime allowed in seconds.
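Taken together, a minimal template for this resource might look like the following sketch. All names, URIs, and ARNs are placeholders; the property names follow the listing above.

```yaml
Resources:
  ExplainabilityJobDefinition:
    Type: AWS::SageMaker::ModelExplainabilityJobDefinition
    Properties:
      JobDefinitionName: my-explainability-job   # placeholder
      ModelExplainabilityAppSpecification:
        ImageUri: 123456789012.dkr.ecr.us-east-1.amazonaws.com/analyzer:latest  # placeholder image
        ConfigUri: s3://my-bucket/analysis_config.json                          # placeholder S3 URI
      ModelExplainabilityJobInput:
        EndpointInput:
          EndpointName: my-endpoint              # placeholder endpoint
          LocalPath: /opt/ml/processing/input
      ModelExplainabilityJobOutputConfig:
        MonitoringOutputs:
          - S3Output:
              S3Uri: s3://my-bucket/explainability-results/
              LocalPath: /opt/ml/processing/output
      JobResources:
        ClusterConfig:
          InstanceCount: 1
          InstanceType: ml.m5.xlarge
          VolumeSizeInGB: 20
      RoleArn: arn:aws:iam::123456789012:role/SageMakerMonitoringRole  # placeholder role
      StoppingCondition:
        MaxRuntimeInSeconds: 3600
```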
Definitions
Baseline configuration used to validate that the data conforms to the specified constraints and statistics.
The name of a processing job.
The baseline constraints resource for a monitoring job.
1 nested property
The Amazon S3 URI.
The baseline constraints resource for a monitoring job.
The Amazon S3 URI.
The Amazon S3 URI.
Container image configuration object for the monitoring job.
The container image to be run by the monitoring job.
The Amazon S3 URI.
Sets the environment variables in the Docker container.
The inputs for a monitoring job.
The endpoint for a monitoring job.
7 nested properties
The name of the endpoint used to run the monitoring job.
Path to the filesystem where the endpoint data is available to the container.
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
The JSONPath used to locate the features in a JSON Lines dataset.
The index or JSONPath used to locate the predicted label(s).
The index or JSONPath used to locate the probabilities.
The batch transform input for a monitoring job.
8 nested properties
A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
The dataset format of the data to monitor.
Path to the filesystem where the endpoint data is available to the container.
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
The JSONPath used to locate the features in a JSON Lines dataset.
The index or JSONPath used to locate the predicted label(s).
The index or JSONPath used to locate the probabilities.
The endpoint for a monitoring job.
The name of the endpoint used to run the monitoring job.
Path to the filesystem where the endpoint data is available to the container.
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
The JSONPath used to locate the features in a JSON Lines dataset.
The index or JSONPath used to locate the predicted label(s).
The index or JSONPath used to locate the probabilities.
The batch transform input for a monitoring job.
A URI that identifies the Amazon S3 storage location where the batch transform job captures data.
The dataset format of the data to monitor.
3 nested properties
Path to the filesystem where the endpoint data is available to the container.
Whether input data distributed in Amazon S3 is fully replicated or sharded by an S3 key. Defaults to FullyReplicated.
Whether Pipe or File is used as the input mode for transferring data for the monitoring job. Pipe mode is recommended for large datasets. File mode is useful for small files that fit in memory. Defaults to File.
The JSONPath used to locate the features in a JSON Lines dataset.
The index or JSONPath used to locate the predicted label(s).
The index or JSONPath used to locate the probabilities.
The output configuration for monitoring jobs.
Monitoring outputs for monitoring jobs. This is where the output of the periodic monitoring jobs is uploaded.
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt the model artifacts at rest using Amazon S3 server-side encryption.
The output object for a monitoring job.
Information about where and how to store the results of a monitoring job.
3 nested properties
The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
Whether to upload the results of the monitoring job continuously or after the job completes.
Information about where and how to store the results of a monitoring job.
The local path to the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job. LocalPath is an absolute path for the output data.
A URI that identifies the Amazon S3 storage location where Amazon SageMaker saves the results of a monitoring job.
Whether to upload the results of the monitoring job continuously or after the job completes.
Identifies the resources to deploy for a monitoring job.
Configuration for the cluster used to run model monitoring jobs.
4 nested properties
The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
The ML compute instance type for the processing job.
The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
Configuration for the cluster used to run model monitoring jobs.
The number of ML compute instances to use in the model monitoring job. For distributed processing jobs, specify a value greater than 1. The default value is 1.
The ML compute instance type for the processing job.
The size of the ML storage volume, in gigabytes, that you want to provision. You must specify sufficient ML storage for your scenario.
The AWS Key Management Service (AWS KMS) key that Amazon SageMaker uses to encrypt data on the storage volume attached to the ML compute instance(s) that run the model monitoring job.
Networking options for a job, such as network traffic encryption between containers, whether to allow inbound and outbound network calls to and from containers, and the VPC subnets and security groups to use for VPC-enabled jobs.
Whether to encrypt all communications between distributed processing jobs. Choose True to encrypt communications. Encryption provides greater security for distributed processing jobs, but the processing might take longer.
Whether to allow inbound and outbound network calls to and from the containers used for the processing job.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
2 nested properties
The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
Specifies a VPC that your training jobs and hosted models have access to. Control access to and from your training and model containers by configuring the VPC.
The VPC security group IDs, in the form sg-xxxxxxxx. Specify the security groups for the VPC that is specified in the Subnets field.
The IDs of the subnets in the VPC to which you want to connect your monitoring jobs.
Specifies a time limit for how long the monitoring job is allowed to run.
The maximum runtime allowed in seconds.
A key-value pair to associate with a resource.
The key name of the tag. You can specify a value that is 1 to 127 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
The value for the tag. You can specify a value that is 1 to 255 Unicode characters in length and cannot be prefixed with aws:. You can use any of the following characters: the set of Unicode letters, digits, whitespace, _, ., /, =, +, and -.
The name of the endpoint used to run the monitoring job.
The name of the job definition.
The name of a processing job.
The time offsets in ISO 8601 duration format.
The dataset format of the data to monitor.
The CSV format.
A Boolean flag indicating whether the given CSV file has a header.
The JSON format.
A Boolean flag indicating whether the data is in JSON Lines format.
A flag indicating whether the dataset format is Parquet.
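The three dataset format variants above are mutually exclusive; a sketch of each (choose one per input):

```yaml
DatasetFormat:
  Csv:
    Header: true   # the CSV file has a header row
# or, for JSON Lines data:
# DatasetFormat:
#   Json:
#     Line: true
# or, for Parquet data:
# DatasetFormat:
#   Parquet: true
```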