| Package | Description |
|---|---|
| com.amazonaws.services.bedrockagent.model | |
Methods in com.amazonaws.services.bedrockagent.model that return `InferenceConfiguration`:

| Modifier and Type | Method and Description |
|---|---|
| `InferenceConfiguration` | `InferenceConfiguration.clone()` |
| `InferenceConfiguration` | `PromptConfiguration.getInferenceConfiguration()`<br>Contains inference parameters to use when the agent invokes a foundation model in the part of the agent sequence defined by the `promptType`. |
| `InferenceConfiguration` | `InferenceConfiguration.withMaximumLength(Integer maximumLength)`<br>The maximum number of tokens to allow in the generated response. |
| `InferenceConfiguration` | `InferenceConfiguration.withStopSequences(Collection<String> stopSequences)`<br>A list of stop sequences. |
| `InferenceConfiguration` | `InferenceConfiguration.withStopSequences(String... stopSequences)`<br>A list of stop sequences. |
| `InferenceConfiguration` | `InferenceConfiguration.withTemperature(Float temperature)`<br>The likelihood of the model selecting higher-probability options while generating a response. |
| `InferenceConfiguration` | `InferenceConfiguration.withTopK(Integer topK)`<br>While generating a response, the model determines the probability of the following token at each point of generation. |
| `InferenceConfiguration` | `InferenceConfiguration.withTopP(Float topP)`<br>While generating a response, the model determines the probability of the following token at each point of generation. |
Methods in com.amazonaws.services.bedrockagent.model with parameters of type `InferenceConfiguration`:

| Modifier and Type | Method and Description |
|---|---|
| `void` | `PromptConfiguration.setInferenceConfiguration(InferenceConfiguration inferenceConfiguration)`<br>Contains inference parameters to use when the agent invokes a foundation model in the part of the agent sequence defined by the `promptType`. |
| `PromptConfiguration` | `PromptConfiguration.withInferenceConfiguration(InferenceConfiguration inferenceConfiguration)`<br>Contains inference parameters to use when the agent invokes a foundation model in the part of the agent sequence defined by the `promptType`. |
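As a sketch of how these methods compose (assuming the AWS SDK for Java `aws-java-sdk-bedrockagent` artifact is on the classpath), the fluent `with*` setters listed above can be chained, since each returns the object being configured; the parameter values here are illustrative, not recommendations:

```java
import java.util.Arrays;

import com.amazonaws.services.bedrockagent.model.InferenceConfiguration;
import com.amazonaws.services.bedrockagent.model.PromptConfiguration;

public class InferenceConfigExample {
    public static void main(String[] args) {
        // Build inference parameters with the fluent with* setters;
        // each returns the same InferenceConfiguration, so calls chain.
        InferenceConfiguration inferenceConfig = new InferenceConfiguration()
                .withMaximumLength(512)   // cap on tokens in the generated response
                .withTemperature(0.7f)    // higher values favor lower-probability tokens
                .withTopP(0.9f)           // nucleus-sampling probability cutoff
                .withTopK(50)             // number of candidate tokens considered
                .withStopSequences(Arrays.asList("\n\nHuman:"));

        // Attach it to a PromptConfiguration. withInferenceConfiguration
        // returns the PromptConfiguration for further chaining;
        // setInferenceConfiguration does the same assignment but returns void.
        PromptConfiguration promptConfig = new PromptConfiguration()
                .withInferenceConfiguration(inferenceConfig);

        System.out.println(promptConfig);
    }
}
```

The `with*`/`set*` pairing is the SDK-wide convention: use the `with*` form when building a request in one expression, and the `set*` form when mutating an existing object.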
Copyright © 2025. All rights reserved.