Interface VertexAiDeploymentResourcePoolDedicatedResources

    • Method Detail

      • getMachineSpec

        @Stability(Stable)
        @NotNull
        VertexAiDeploymentResourcePoolDedicatedResourcesMachineSpec getMachineSpec()
        machine_spec block.

        Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/5.43.1/docs/resources/vertex_ai_deployment_resource_pool#machine_spec VertexAiDeploymentResourcePool#machine_spec}

      • getMinReplicaCount

        @Stability(Stable)
        @NotNull
        Number getMinReplicaCount()
        The minimum number of machine replicas this DeployedModel will always be deployed on.

        This value must be greater than or equal to 1. If traffic against the DeployedModel increases, it may be dynamically deployed onto more replicas, and as traffic decreases, some of these extra replicas may be freed.

        Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/5.43.1/docs/resources/vertex_ai_deployment_resource_pool#min_replica_count VertexAiDeploymentResourcePool#min_replica_count}

      • getAutoscalingMetricSpecs

        @Stability(Stable)
        @Nullable
        default Object getAutoscalingMetricSpecs()
        autoscaling_metric_specs block.

        Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/5.43.1/docs/resources/vertex_ai_deployment_resource_pool#autoscaling_metric_specs VertexAiDeploymentResourcePool#autoscaling_metric_specs}

      • getMaxReplicaCount

        @Stability(Stable)
        @Nullable
        default Number getMaxReplicaCount()
        The maximum number of replicas this DeployedModel may be deployed on when the traffic against it increases.

        If the requested value is too large, the deployment will error, but if deployment succeeds then the ability to scale the model to that many replicas is guaranteed (barring service outages). If traffic against the DeployedModel increases beyond what its replicas at maximum may handle, a portion of the traffic will be dropped. If this value is not provided, min_replica_count will be used as the default value. The value of this field impacts the charge against Vertex CPU and GPU quotas. Specifically, you will be charged for (max_replica_count * number of cores in the selected machine type) and (max_replica_count * number of GPUs per replica in the selected machine type).

        Docs at Terraform Registry: {@link https://registry.terraform.io/providers/hashicorp/google/5.43.1/docs/resources/vertex_ai_deployment_resource_pool#max_replica_count VertexAiDeploymentResourcePool#max_replica_count}
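
        The getters above correspond to builder properties on the generated struct. A minimal sketch of populating it, assuming the standard JSII builder pattern for CDKTF-generated structs; the import package path and the `n1-standard-4` machine type are illustrative assumptions, not values confirmed by this document:

        ```java
        // Package path assumed from the prebuilt cdktf-provider-google bindings; adjust to your setup.
        import com.hashicorp.cdktf.providers.google.vertex_ai_deployment_resource_pool.VertexAiDeploymentResourcePoolDedicatedResources;
        import com.hashicorp.cdktf.providers.google.vertex_ai_deployment_resource_pool.VertexAiDeploymentResourcePoolDedicatedResourcesMachineSpec;

        public class DedicatedResourcesExample {
            public static VertexAiDeploymentResourcePoolDedicatedResources build() {
                // machine_spec block: the machine type here is a hypothetical example value.
                VertexAiDeploymentResourcePoolDedicatedResourcesMachineSpec machineSpec =
                    VertexAiDeploymentResourcePoolDedicatedResourcesMachineSpec.builder()
                        .machineType("n1-standard-4")
                        .build();

                return VertexAiDeploymentResourcePoolDedicatedResources.builder()
                    .machineSpec(machineSpec)   // required: machine_spec block
                    .minReplicaCount(1)         // required: must be >= 1
                    .maxReplicaCount(3)         // optional: defaults to min_replica_count
                    .build();
            }
        }
        ```

        Note that omitting `maxReplicaCount` pins the pool at `minReplicaCount` replicas, since max_replica_count falls back to min_replica_count by default.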