| Package | Description |
|---|---|
| `com.amazonaws.services.bedrockagent.model` | |
| Modifier and Type | Method and Description |
|---|---|
| `PromptModelInferenceConfiguration` | `PromptModelInferenceConfiguration.clone()` |
| `PromptModelInferenceConfiguration` | `PromptInferenceConfiguration.getText()`<br>Contains inference configurations for a text prompt. |
| `PromptModelInferenceConfiguration` | `PromptModelInferenceConfiguration.withMaxTokens(Integer maxTokens)`<br>The maximum number of tokens to return in the response. |
| `PromptModelInferenceConfiguration` | `PromptModelInferenceConfiguration.withStopSequences(Collection<String> stopSequences)`<br>A list of strings that define sequences after which the model will stop generating. |
| `PromptModelInferenceConfiguration` | `PromptModelInferenceConfiguration.withStopSequences(String... stopSequences)`<br>A list of strings that define sequences after which the model will stop generating. |
| `PromptModelInferenceConfiguration` | `PromptModelInferenceConfiguration.withTemperature(Float temperature)`<br>Controls the randomness of the response. |
| `PromptModelInferenceConfiguration` | `PromptModelInferenceConfiguration.withTopK(Integer topK)`<br>The number of most-likely candidates that the model considers for the next token during generation. |
| `PromptModelInferenceConfiguration` | `PromptModelInferenceConfiguration.withTopP(Float topP)`<br>The percentage of most-likely candidates that the model considers for the next token. |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `PromptInferenceConfiguration.setText(PromptModelInferenceConfiguration text)`<br>Contains inference configurations for a text prompt. |
| `PromptInferenceConfiguration` | `PromptInferenceConfiguration.withText(PromptModelInferenceConfiguration text)`<br>Contains inference configurations for a text prompt. |
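The fluent `with*` methods listed above each return the configuration object itself, so a `PromptModelInferenceConfiguration` can be built in a single chained expression and then attached to a `PromptInferenceConfiguration` via `withText(...)`. A minimal sketch, assuming the AWS SDK for Java v1 `bedrockagent` module is on the classpath; the numeric values are illustrative placeholders, not recommended settings:

```java
import com.amazonaws.services.bedrockagent.model.PromptInferenceConfiguration;
import com.amazonaws.services.bedrockagent.model.PromptModelInferenceConfiguration;

public class InferenceConfigExample {
    public static void main(String[] args) {
        // Build the per-model inference settings with the fluent with* methods.
        PromptModelInferenceConfiguration modelConfig = new PromptModelInferenceConfiguration()
                .withMaxTokens(512)          // maximum number of tokens to return
                .withTemperature(0.7f)       // randomness of the response
                .withTopP(0.9f)              // percentage of most-likely candidates
                .withTopK(50)                // number of most-likely candidates
                .withStopSequences("END");   // varargs overload; a Collection<String> overload also exists

        // Attach the model configuration to a text prompt's inference configuration.
        PromptInferenceConfiguration inferenceConfig = new PromptInferenceConfiguration()
                .withText(modelConfig);

        System.out.println(inferenceConfig);
    }
}
```

Because `withText` returns the `PromptInferenceConfiguration`, the attachment step can itself be part of a larger fluent chain; `setText` performs the same assignment but returns `void`.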
Copyright © 2024. All rights reserved.