| Package | Description |
|---|---|
| `com.amazonaws.services.bedrockruntime.model` | |
| Modifier and Type | Method and Description |
|---|---|
| `InferenceConfiguration` | `InferenceConfiguration.clone()` |
| `InferenceConfiguration` | `ConverseRequest.getInferenceConfig()` <br> Inference parameters to pass to the model. |
| `InferenceConfiguration` | `InferenceConfiguration.withMaxTokens(Integer maxTokens)` <br> The maximum number of tokens to allow in the generated response. |
| `InferenceConfiguration` | `InferenceConfiguration.withStopSequences(Collection<String> stopSequences)` <br> A list of stop sequences. |
| `InferenceConfiguration` | `InferenceConfiguration.withStopSequences(String... stopSequences)` <br> A list of stop sequences. |
| `InferenceConfiguration` | `InferenceConfiguration.withTemperature(Float temperature)` <br> The likelihood of the model selecting higher-probability options while generating a response. |
| `InferenceConfiguration` | `InferenceConfiguration.withTopP(Float topP)` <br> The percentage of most-likely candidates that the model considers for the next token. |
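The `with*` methods above each return the `InferenceConfiguration` itself, so they can be chained into a single builder-style expression. A minimal sketch (the parameter values below are illustrative, not recommendations):

```java
import com.amazonaws.services.bedrockruntime.model.InferenceConfiguration;

public class InferenceConfigExample {
    public static void main(String[] args) {
        // Each with* call returns this, so the calls chain.
        // All values here are illustrative placeholders.
        InferenceConfiguration config = new InferenceConfiguration()
                .withMaxTokens(512)        // cap on generated tokens
                .withTemperature(0.7f)     // higher = more varied sampling
                .withTopP(0.9f)            // nucleus-sampling cutoff
                .withStopSequences("END"); // stop generation at this string
        System.out.println(config);
    }
}
```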
| Modifier and Type | Method and Description |
|---|---|
| `void` | `ConverseRequest.setInferenceConfig(InferenceConfiguration inferenceConfig)` <br> Inference parameters to pass to the model. |
| `ConverseRequest` | `ConverseRequest.withInferenceConfig(InferenceConfiguration inferenceConfig)` <br> Inference parameters to pass to the model. |
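A configuration is attached to a `ConverseRequest` through either of the two methods above: `withInferenceConfig` returns the request for further chaining, while `setInferenceConfig` is the equivalent void setter. A minimal sketch, assuming the request is built separately from sending it (the model ID is a placeholder, not a real identifier):

```java
import com.amazonaws.services.bedrockruntime.model.ConverseRequest;
import com.amazonaws.services.bedrockruntime.model.InferenceConfiguration;

public class ConverseInferenceConfigExample {
    public static void main(String[] args) {
        InferenceConfiguration config = new InferenceConfiguration()
                .withMaxTokens(256);

        // withInferenceConfig returns the request, so it chains with
        // the other with* setters. "example-model-id" is a placeholder.
        ConverseRequest request = new ConverseRequest()
                .withModelId("example-model-id")
                .withInferenceConfig(config);

        // setInferenceConfig is the equivalent void-returning setter.
        request.setInferenceConfig(config);

        System.out.println(request.getInferenceConfig());
    }
}
```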
Copyright © 2024. All rights reserved.