Package com.vaadin.uitest.ai
Interface LLMService
public interface LLMService
An LLM service that generates a response based on a prompt. All
responsibilities related to model usage have to be implemented in this
service: for example, providing the API key, setting model parameters, and
generating the prompt template.
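As a hedged illustration of this contract, the sketch below defines a simplified stand-in interface mirroring only the model/parameter methods documented on this page (`getModel()`, `getTemperature()`, `getMaxTokens()`). The real interface lives in com.vaadin.uitest.ai and references further types such as Framework and AiArguments that are not shown here; `SimpleLlmService`, the default values, and the "gpt-4" model ID are assumptions for the example, not the actual Vaadin implementation.

```java
// Hypothetical stand-in for the documented contract: an implementation
// supplies the model ID and may override the default generation parameters.
interface SimpleLlmService {
    String getModel();                // ID of the model to use

    default double getTemperature() { // sampling temperature, between 0 and 2
        return 0.2;                   // low value: focused, deterministic output
    }

    default int getMaxTokens() {      // completion length limit
        return 2048;
    }
}

public class SimpleLlmServiceDemo {
    public static void main(String[] args) {
        // A concrete service only needs to provide what has no default.
        SimpleLlmService service = new SimpleLlmService() {
            @Override
            public String getModel() {
                return "gpt-4"; // hypothetical model ID
            }
        };
        System.out.println(service.getModel() + " " + service.getTemperature()
                + " " + service.getMaxTokens());
    }
}
```

The default-method pattern matches what the summary below suggests: most methods are `default`, so an implementation can be minimal and only override the parameters it cares about.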
-
Nested Class Summary
Nested Classes
Field Summary
Fields

- static final String PARSER_ASSISTANT_QUESTION
- static final String PARSER_SYSTEM_MSG_LIT
- static final String PARSER_SYSTEM_MSG_REACT
- static final String PARSER_SYSTEM_MSG_FLOW
- static final String GENERATE_LOGIN_PLAYWRIGHT_REACT
- static final String GENERATE_LOGIN_PLAYWRIGHT_JAVA
- static final String GENERATE_IMPORTS_PLAYWRIGHT_REACT
- static final String GENERATE_IMPORTS_PLAYWRIGHT_JAVA
- static final String GENERATE_SNIPPETS
- static final String GENERATE_IMPORTS
- static final String GENERATE_HEADING_PLAYWRIGHT_REACT
- static final String GENERATE_HEADING_PLAYWRIGHT_JAVA
- static final org.slf4j.Logger LOGGER
Method Summary
- static String extractCode(String markdown)
- default String getChatAssistantSystemMessage(String name, Framework framework)
- default String getGeneratedResponse(AiArguments aiArguments)
- default String getGeneratedResponse(String prompt, Framework framework, String gherkin)
  Generates a response based on the input prompt from the AI module.
- default String getImports(TestFramework testFramework)
- default int getMaxTokens()
  The maximum number of tokens to generate in the completion.
- default String getModel()
  ID of the model to use.
- default String getPromptTemplate(AiArguments aiArguments)
- default double getTemperature()
  What sampling temperature to use, between 0 and 2.
- static int getTimeout()
  Timeout for AI module response in seconds
- default AiPrompts prompts()
- requestAI(AiArguments aiArguments)
-
Field Details
-
PARSER_ASSISTANT_QUESTION
static final String PARSER_ASSISTANT_QUESTION
- See Also:
  - Constant Field Values
-
PARSER_SYSTEM_MSG_LIT
static final String PARSER_SYSTEM_MSG_LIT
- See Also:
  - Constant Field Values
-
PARSER_SYSTEM_MSG_REACT
static final String PARSER_SYSTEM_MSG_REACT
- See Also:
  - Constant Field Values
-
PARSER_SYSTEM_MSG_FLOW
static final String PARSER_SYSTEM_MSG_FLOW
- See Also:
  - Constant Field Values
-
GENERATE_LOGIN_PLAYWRIGHT_REACT
static final String GENERATE_LOGIN_PLAYWRIGHT_REACT
- See Also:
  - Constant Field Values
-
GENERATE_LOGIN_PLAYWRIGHT_JAVA
static final String GENERATE_LOGIN_PLAYWRIGHT_JAVA
- See Also:
  - Constant Field Values
-
GENERATE_IMPORTS_PLAYWRIGHT_REACT
static final String GENERATE_IMPORTS_PLAYWRIGHT_REACT
- See Also:
  - Constant Field Values
-
GENERATE_IMPORTS_PLAYWRIGHT_JAVA
static final String GENERATE_IMPORTS_PLAYWRIGHT_JAVA
- See Also:
  - Constant Field Values
-
GENERATE_SNIPPETS
static final String GENERATE_SNIPPETS
- See Also:
  - Constant Field Values
-
GENERATE_IMPORTS
static final String GENERATE_IMPORTS
- See Also:
  - Constant Field Values
-
GENERATE_HEADING_PLAYWRIGHT_REACT
static final String GENERATE_HEADING_PLAYWRIGHT_REACT
- See Also:
  - Constant Field Values
-
GENERATE_HEADING_PLAYWRIGHT_JAVA
static final String GENERATE_HEADING_PLAYWRIGHT_JAVA
- See Also:
  - Constant Field Values
-
LOGGER
static final org.slf4j.Logger LOGGER
-
-
Method Details
-
getGeneratedResponse
default String getGeneratedResponse(String prompt, Framework framework, String gherkin)

Generates a response based on the input prompt from the AI module.
- Parameters:
  - prompt - the prompt to be used by the AI module
  - gherkin -
- Returns:
  - the generated response from the AI module.
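Since this method takes both a free-form prompt and a Gherkin scenario, the sketch below illustrates one way an implementation might combine them into a single prompt before calling the model. This is a hypothetical helper; `buildPrompt` and the template text are assumptions for the example, not the actual Vaadin prompt template.

```java
// Hypothetical prompt assembly: append the optional Gherkin scenario to the
// user prompt so the model sees both in one message.
public class PromptAssembly {
    static String buildPrompt(String prompt, String gherkin) {
        StringBuilder sb = new StringBuilder(prompt);
        if (gherkin != null && !gherkin.isBlank()) {
            sb.append("\n\nScenario (Gherkin):\n").append(gherkin);
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        System.out.println(buildPrompt(
                "Generate a Playwright test for the login view",
                "Feature: Login\n  Scenario: Valid user logs in"));
    }
}
```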
-
getGeneratedResponse
default String getGeneratedResponse(AiArguments aiArguments)
-
requestAI
requestAI(AiArguments aiArguments)
-
getModel
default String getModel()
ID of the model to use.
- Returns:
  - model
-
prompts
default AiPrompts prompts()
-
getTemperature
default double getTemperature()

What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
- Returns:
  - temperature
-
getMaxTokens
default int getMaxTokens()

The maximum number of tokens to generate in the completion.
- Returns:
  - max tokens limit
-
getTimeout
static int getTimeout()

Timeout for AI module response, in seconds.
- Returns:
  - timeout in seconds
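To show how a caller might enforce this timeout, the sketch below wraps an AI request in a `Future` and cancels it when `getTimeout()` elapses. The `getTimeout()` stand-in and the fake request here are assumptions for the example; the actual Vaadin AI module handles its own request plumbing.

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

// Hypothetical sketch: bound an AI call by the documented timeout (seconds).
public class TimeoutDemo {
    static int getTimeout() {
        return 2; // stand-in for LLMService.getTimeout()
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        // Fake AI request that answers immediately.
        Future<String> response = pool.submit(() -> "generated test code");
        try {
            System.out.println(response.get(getTimeout(), TimeUnit.SECONDS));
        } catch (TimeoutException e) {
            response.cancel(true); // give up on a slow AI module
            System.out.println("AI module did not answer in time");
        } finally {
            pool.shutdown();
        }
    }
}
```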
-
extractCode
static String extractCode(String markdown)
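LLM responses often wrap generated code in markdown fences, which is presumably what this helper strips. The sketch below is a guess at that behavior, not the actual Vaadin implementation: it returns the body of the first fenced block, or the input unchanged when no fence is present.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

// Hypothetical markdown code extractor: pull the body of the first
// ```lang ... ``` fence out of an LLM response.
public class MarkdownCode {
    private static final Pattern FENCE =
            Pattern.compile("```[a-zA-Z]*\\n(.*?)```", Pattern.DOTALL);

    static String extractCode(String markdown) {
        Matcher m = FENCE.matcher(markdown);
        // Fall back to the raw text when the model answered without fences.
        return m.find() ? m.group(1).trim() : markdown.trim();
    }

    public static void main(String[] args) {
        String md = "Here is the test:\n```java\nassert true;\n```\nDone.";
        System.out.println(extractCode(md)); // prints: assert true;
    }
}
```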
-
getPromptTemplate
default String getPromptTemplate(AiArguments aiArguments)
-
getChatAssistantSystemMessage
default String getChatAssistantSystemMessage(String name, Framework framework)
-
getImports
default String getImports(TestFramework testFramework)
-