| Package | Description |
|---|---|
| `com.amazonaws.services.bedrockagentruntime.model` | |
| Modifier and Type | Method and Description |
|---|---|
| `TextInferenceConfig` | `TextInferenceConfig.clone()` |
| `TextInferenceConfig` | `InferenceConfig.getTextInferenceConfig()`<br>Configuration settings specific to text generation while generating responses using RetrieveAndGenerate. |
| `TextInferenceConfig` | `TextInferenceConfig.withMaxTokens(Integer maxTokens)`<br>The maximum number of tokens to generate in the output text. |
| `TextInferenceConfig` | `TextInferenceConfig.withStopSequences(Collection<String> stopSequences)`<br>A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. |
| `TextInferenceConfig` | `TextInferenceConfig.withStopSequences(String... stopSequences)`<br>A list of sequences of characters that, if generated, will cause the model to stop generating further tokens. |
| `TextInferenceConfig` | `TextInferenceConfig.withTemperature(Float temperature)`<br>Controls the randomness of text generated by the language model, influencing how much the model sticks to the most predictable next words versus exploring more surprising options. |
| `TextInferenceConfig` | `TextInferenceConfig.withTopP(Float topP)`<br>A probability distribution threshold which controls what the model considers for the set of possible next tokens. |
| Modifier and Type | Method and Description |
|---|---|
| `void` | `InferenceConfig.setTextInferenceConfig(TextInferenceConfig textInferenceConfig)`<br>Configuration settings specific to text generation while generating responses using RetrieveAndGenerate. |
| `InferenceConfig` | `InferenceConfig.withTextInferenceConfig(TextInferenceConfig textInferenceConfig)`<br>Configuration settings specific to text generation while generating responses using RetrieveAndGenerate. |
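The tables above describe a fluent API: each `with*` setter returns the object itself, so calls chain into a single expression. The real `TextInferenceConfig` lives in the AWS SDK for Java (`com.amazonaws.services.bedrockagentruntime.model`); the minimal stand-in below is a hypothetical sketch, not the SDK source, written only to illustrate the chaining pattern the methods in these tables follow.

```java
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

// Hypothetical stand-in for the SDK's TextInferenceConfig, mirroring the
// documented fluent setters: each with* method mutates a field and returns
// `this`, so configuration reads as one chained expression.
class TextInferenceConfigSketch {
    private Integer maxTokens;
    private Float temperature;
    private Float topP;
    private List<String> stopSequences = new ArrayList<>();

    TextInferenceConfigSketch withMaxTokens(Integer maxTokens) {
        this.maxTokens = maxTokens;
        return this;
    }

    TextInferenceConfigSketch withTemperature(Float temperature) {
        this.temperature = temperature;
        return this;
    }

    TextInferenceConfigSketch withTopP(Float topP) {
        this.topP = topP;
        return this;
    }

    // Varargs overload, mirroring withStopSequences(String...)
    TextInferenceConfigSketch withStopSequences(String... stopSequences) {
        this.stopSequences = new ArrayList<>(Arrays.asList(stopSequences));
        return this;
    }

    Integer getMaxTokens() { return maxTokens; }
    Float getTemperature() { return temperature; }
    Float getTopP() { return topP; }
    List<String> getStopSequences() { return stopSequences; }
}

public class Demo {
    public static void main(String[] args) {
        // Build a text-generation config in one chained expression.
        TextInferenceConfigSketch cfg = new TextInferenceConfigSketch()
                .withMaxTokens(512)
                .withTemperature(0.2f)
                .withTopP(0.9f)
                .withStopSequences("END");
        System.out.println(cfg.getMaxTokens());
    }
}
```

With the real SDK class, the same chain would typically be passed on via `new InferenceConfig().withTextInferenceConfig(...)` as shown in the second table.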
Copyright © 2024. All rights reserved.