Interface LLMService


public interface LLMService
An LLM service that generates a response based on a prompt. All responsibilities related to model usage must be implemented in this service, such as providing the API key, setting model parameters, and generating prompt templates.
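For illustration, a minimal implementation sketch follows. Only the LLMService and AiArguments types come from this page; the class name, property keys, and the elided HTTP call are assumptions.

    // Hypothetical implementation sketch; the class name and property keys
    // are illustrative, not part of this API.
    public class SimpleLLMService implements LLMService {

        @Override
        public String requestAI(AiArguments aiArguments) {
            String apiKey = System.getProperty("llm.api.key");   // API key providing
            String prompt = getPromptTemplate(aiArguments);      // prompt template generation
            // ... call the model endpoint here with getModel(), getTemperature(),
            // getMaxTokens() and LLMService.getTimeout(), passing apiKey ...
            return "";                                           // the raw model response
        }

        @Override
        public String getModel() {
            return System.getProperty("llm.model", "gpt-3.5-turbo"); // parameter setting
        }
    }
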
  • Method Details

    • getGeneratedResponse

      default String getGeneratedResponse(String prompt, Framework framework, String gherkin)
      Generates a response from the AI module based on the input prompt.
      Parameters:
      prompt - the prompt to be used by the AI module
      framework - the framework the generated code targets
      gherkin - the Gherkin scenario the response is generated for
      Returns:
      the generated response from the AI module.
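      A caller might use this default method as follows (the variable values are illustrative):

          LLMService service = ...;                // an implementation of this interface
          Framework framework = ...;               // the target framework
          String gherkin = "Feature: Login ...";   // the Gherkin scenario text
          String response = service.getGeneratedResponse("Generate a test for this scenario", framework, gherkin);
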
    • getGeneratedResponse

      default String getGeneratedResponse(AiArguments aiArguments)
      Generates a response based on the given AiArguments.
    • requestAI

      String requestAI(AiArguments aiArguments)
      Sends the request to the AI module and returns its response. This is the only abstract method of the interface; the default methods build on it.
    • getModel

      default String getModel()
      ID of the model to use.
      Returns:
      model
    • prompts

      default AiPrompts prompts()
      Returns the prompt templates used by this service.
    • getTemperature

      default double getTemperature()
      What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic.
      Returns:
      temperature
    • getMaxTokens

      default int getMaxTokens()
      The maximum number of tokens to generate in the completion.
      Returns:
      max tokens limit
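      An implementation can tune these defaults by overriding the getters, for example (the values below are illustrative):

          @Override
          public double getTemperature() {
              return 0.2;  // favor focused, deterministic output
          }

          @Override
          public int getMaxTokens() {
              return 1024; // cap the completion length
          }
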
    • getTimeout

      static int getTimeout()
      Timeout for the AI module response, in seconds.
      Returns:
      timeout in seconds
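      As an example of how a caller might apply this value, assuming an HTTP-based client (the endpoint URL is hypothetical):

          import java.net.URI;
          import java.net.http.HttpRequest;
          import java.time.Duration;

          HttpRequest request = HttpRequest.newBuilder()
                  .uri(URI.create("https://example.com/v1/completions")) // hypothetical endpoint
                  .timeout(Duration.ofSeconds(LLMService.getTimeout()))  // per-request timeout
                  .build();
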
    • extractCode

      static String extractCode(String markdown)
      Extracts the code from a markdown-formatted response.
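      A sketch of what this extraction might look like; the exact fencing rules are an assumption:

          import java.util.regex.Matcher;
          import java.util.regex.Pattern;

          static String extractCode(String markdown) {
              // Take the body of the first fenced code block, if any.
              Pattern fence = Pattern.compile("```(?:\\w+)?\\n(.*?)```", Pattern.DOTALL);
              Matcher m = fence.matcher(markdown);
              return m.find() ? m.group(1) : markdown; // fall back to the raw text
          }
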
    • getPromptTemplate

      default String getPromptTemplate(AiArguments aiArguments)
      Builds the prompt for the AI module from the given AiArguments.
    • getChatAssistantSystemMessage

      default String getChatAssistantSystemMessage(String name, Framework framework)
      Returns the system message used to initialize the chat assistant.
    • getImports

      default String getImports(TestFramework testFramework)
      Returns the import statements for the given test framework.