Class CompletionRequestSettings

java.lang.Object
  com.microsoft.semantickernel.textcompletion.CompletionRequestSettings

Direct Known Subclasses:
  ChatRequestSettings

public class CompletionRequestSettings extends Object

Settings for a text completion request.
Constructor Summary

Constructors:
- CompletionRequestSettings()
  Create a new settings object with default values.
- CompletionRequestSettings(double temperature, double topP, double presencePenalty, double frequencyPenalty, int maxTokens)
  Create a new settings object with the given values.
- CompletionRequestSettings(double temperature, double topP, double presencePenalty, double frequencyPenalty, int maxTokens, int bestOf, String user, List<String> stopSequences)
  Create a new settings object with the given values.
Method Summary

- static CompletionRequestSettings fromCompletionConfig(PromptTemplateConfig.CompletionConfig config)
  Create a new settings object with the values from the given completion config.
- Integer getBestOf()
  The maximum number of completions to generate for each prompt.
- double getFrequencyPenalty()
  Number between -2.0 and 2.0.
- int getMaxTokens()
  The maximum number of tokens to generate in the completion.
- double getPresencePenalty()
  Number between -2.0 and 2.0.
- List<String> getStopSequences()
  Sequences where the completion will stop generating further tokens.
- double getTemperature()
  Temperature controls the randomness of the completion.
- double getTopP()
  TopP controls the diversity of the completion.
- String getUser()
  A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
Constructor Detail
CompletionRequestSettings

public CompletionRequestSettings(double temperature, double topP, double presencePenalty, double frequencyPenalty, int maxTokens)

Create a new settings object with the given values.

Parameters:
temperature - Temperature controls the randomness of the completion.
topP - TopP controls the diversity of the completion.
presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
maxTokens - The maximum number of tokens to generate in the completion.
CompletionRequestSettings

public CompletionRequestSettings(double temperature, double topP, double presencePenalty, double frequencyPenalty, int maxTokens, int bestOf, String user, List<String> stopSequences)

Create a new settings object with the given values.

Parameters:
temperature - Temperature controls the randomness of the completion.
topP - TopP controls the diversity of the completion.
presencePenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
frequencyPenalty - Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
maxTokens - The maximum number of tokens to generate in the completion.
bestOf - The maximum number of completions to generate for each prompt. This is used by the CompletionService to generate multiple completions for a single prompt.
user - A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.
stopSequences - Sequences where the completion will stop generating further tokens.
CompletionRequestSettings
public CompletionRequestSettings()
Create a new settings object with default values.
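To illustrate how the documented parameters fit together, here is a self-contained sketch of an analogous settings holder. This is a hypothetical stand-in, not the library class; the range check on the penalties follows the documented -2.0 to 2.0 range, but whether the real constructor validates its arguments is an assumption the Javadoc does not confirm.

```java
import java.util.List;

// Hypothetical stand-in mirroring the documented constructor parameters;
// the real class is com.microsoft.semantickernel.textcompletion.CompletionRequestSettings.
final class RequestSettingsSketch {
    final double temperature;      // randomness: higher = more random completion
    final double topP;             // diversity: higher = more diverse completion
    final double presencePenalty;  // documented range: -2.0 to 2.0
    final double frequencyPenalty; // documented range: -2.0 to 2.0
    final int maxTokens;           // cap on tokens generated in the completion
    final List<String> stopSequences;

    RequestSettingsSketch(double temperature, double topP,
                          double presencePenalty, double frequencyPenalty,
                          int maxTokens, List<String> stopSequences) {
        // Validating the documented -2.0..2.0 range here is an assumption,
        // not confirmed library behavior.
        if (presencePenalty < -2.0 || presencePenalty > 2.0
                || frequencyPenalty < -2.0 || frequencyPenalty > 2.0) {
            throw new IllegalArgumentException("penalties must be in [-2.0, 2.0]");
        }
        this.temperature = temperature;
        this.topP = topP;
        this.presencePenalty = presencePenalty;
        this.frequencyPenalty = frequencyPenalty;
        this.maxTokens = maxTokens;
        this.stopSequences = List.copyOf(stopSequences);
    }
}
```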
Method Detail
fromCompletionConfig
public static CompletionRequestSettings fromCompletionConfig(PromptTemplateConfig.CompletionConfig config)
Create a new settings object with the values from the given completion config.

Parameters:
config - The config to copy values from.
getTemperature
public double getTemperature()
Temperature controls the randomness of the completion. The higher the temperature, the more random the completion.
getTopP
public double getTopP()
TopP controls the diversity of the completion. The higher the TopP, the more diverse the completion.
getPresencePenalty
public double getPresencePenalty()
Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics.
getFrequencyPenalty
public double getFrequencyPenalty()
Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim.
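For intuition, OpenAI's API documentation describes these penalties as terms subtracted from a token's logit before sampling: the frequency penalty scales with how often the token has already appeared, while the presence penalty applies once if it has appeared at all. A hedged sketch of that formula (illustrative only; this library simply forwards the settings to the service):

```java
final class PenaltyDemo {
    // Adjusts a candidate token's logit given how many times the token has
    // already appeared in the text so far, following the penalty formula in
    // OpenAI's API documentation:
    //   logit - countSoFar * frequencyPenalty - (countSoFar > 0 ? 1 : 0) * presencePenalty
    static double penalizedLogit(double logit, int countSoFar,
                                 double presencePenalty, double frequencyPenalty) {
        double presenceTerm = countSoFar > 0 ? presencePenalty : 0.0;
        return logit - countSoFar * frequencyPenalty - presenceTerm;
    }
}
```

Positive penalties therefore lower the score of already-seen tokens, which is why they reduce repetition; negative values have the opposite effect.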
getMaxTokens
public int getMaxTokens()
The maximum number of tokens to generate in the completion.
getStopSequences
public List<String> getStopSequences()
Sequences where the completion will stop generating further tokens.
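To illustrate what "stop generating further tokens" means in practice, here is a hypothetical helper (not part of the library) that truncates text at the earliest occurrence of any stop sequence, which is the behavior the setting requests from the completion service:

```java
import java.util.List;

final class StopSequenceDemo {
    // Cuts the text at the earliest occurrence of any stop sequence;
    // the stop sequence itself is not included in the result.
    static String truncateAtStop(String text, List<String> stopSequences) {
        int cut = text.length();
        for (String stop : stopSequences) {
            int idx = text.indexOf(stop);
            if (idx >= 0 && idx < cut) {
                cut = idx;
            }
        }
        return text.substring(0, cut);
    }
}
```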
getBestOf
public Integer getBestOf()
The maximum number of completions to generate for each prompt. This is used by the CompletionService to generate multiple completions for a single prompt.
getUser
public String getUser()
A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse.