Class CompletionCreateParams

  • All Implemented Interfaces:
    com.anthropic.core.Params

    
    public final class CompletionCreateParams
     implements Params
                        

    Create a Text Completion (legacy).

    The Text Completions API is a legacy API. We recommend using the Messages API going forward.

    Future models and features will not be compatible with Text Completions. See our migration guide for guidance in migrating from Text Completions to Messages.

    • Constructor Detail

    • Method Detail

      • maxTokensToSample

         final Long maxTokensToSample()

        The maximum number of tokens to generate before stopping.

        Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

      • model

         final Model model()

        The model that will complete your prompt.

        See models for additional details and options.

      • prompt

         final String prompt()

        The prompt that you want Claude to complete.

        For proper response generation you will need to format your prompt using alternating \n\nHuman: and \n\nAssistant: conversational turns. For example:

        "\n\nHuman: {userQuestion}\n\nAssistant:"

        See prompt validation and our guide to prompt design for more details.
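        As a sketch, the alternating-turn format can be assembled with plain string concatenation. `PromptFormat`, `formatPrompt`, and `userQuestion` below are illustrative names, not part of the SDK:

```java
public class PromptFormat {
    // Wrap a user question in the legacy Human/Assistant turn format.
    // The "\n\n" escapes here are real newline pairs in the final string.
    static String formatPrompt(String userQuestion) {
        return "\n\nHuman: " + userQuestion + "\n\nAssistant:";
    }

    public static void main(String[] args) {
        // Prints the prompt ending in "Assistant:" so the model's
        // completion continues the assistant turn.
        System.out.println(formatPrompt("Why is the sky blue?"));
    }
}
```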

      • stopSequences

         final Optional<List<String>> stopSequences()

        Sequences that will cause the model to stop generating.

        Our models stop on "\n\nHuman:", and may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may include additional strings that will cause the model to stop generating.
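        The API applies stop sequences server-side; the hypothetical `truncateAtStop` helper below only illustrates the semantics, truncating at the earliest match of any stop sequence:

```java
import java.util.List;

public class StopSequences {
    // Truncate text at the earliest occurrence of any stop sequence.
    // (The API does this server-side; this sketch only shows the behavior.)
    static String truncateAtStop(String text, List<String> stopSequences) {
        int cut = text.length();
        for (String stop : stopSequences) {
            int idx = text.indexOf(stop);
            if (idx >= 0 && idx < cut) {
                cut = idx;
            }
        }
        return text.substring(0, cut);
    }

    public static void main(String[] args) {
        // Generation past "\n\nHuman:" is cut off, mirroring the built-in stop.
        System.out.println(
            truncateAtStop("The answer is 42.\n\nHuman: next question",
                           List.of("\n\nHuman:")));
    }
}
```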

      • temperature

         final Optional<Double> temperature()

        Amount of randomness injected into the response.

        Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks.

        Note that even with temperature of 0.0, the results will not be fully deterministic.
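        Conceptually, temperature rescales the model's raw scores before they are turned into a probability distribution. This is a minimal sketch of that idea over hypothetical logits, not the API's internal implementation; note the sketch requires temperature strictly greater than 0:

```java
public class TemperatureScaling {
    // Apply temperature to raw logits before softmax: temperatures near 0
    // sharpen the distribution toward the top option; 1.0 leaves it as-is.
    static double[] softmaxWithTemperature(double[] logits, double temperature) {
        double max = Double.NEGATIVE_INFINITY;
        for (double l : logits) max = Math.max(max, l / temperature);
        double[] probs = new double[logits.length];
        double sum = 0.0;
        for (int i = 0; i < logits.length; i++) {
            // Subtract the max for numerical stability.
            probs[i] = Math.exp(logits[i] / temperature - max);
            sum += probs[i];
        }
        for (int i = 0; i < probs.length; i++) probs[i] /= sum;
        return probs;
    }

    public static void main(String[] args) {
        double[] logits = {2.0, 1.0, 0.1};
        // Low temperature: nearly all probability mass on the top logit.
        System.out.println(java.util.Arrays.toString(softmaxWithTemperature(logits, 0.1)));
        // Temperature 1.0: the unmodified softmax distribution.
        System.out.println(java.util.Arrays.toString(softmaxWithTemperature(logits, 1.0)));
    }
}
```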

      • topK

         final Optional<Long> topK()

        Only sample from the top K options for each subsequent token.

        Used to remove "long tail" low probability responses. Learn more technical details here.

        Recommended for advanced use cases only. You usually only need to use temperature.
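        As an illustration of the idea (not the API's internals), top-K filtering keeps only the K most probable options and renormalizes over them, discarding the low-probability tail:

```java
import java.util.Arrays;

public class TopKFilter {
    // Zero out all but the K highest-probability options, then renormalize.
    static double[] topK(double[] probs, int k) {
        double[] sorted = probs.clone();
        Arrays.sort(sorted); // ascending
        double threshold = sorted[Math.max(0, sorted.length - k)];
        double[] out = new double[probs.length];
        double sum = 0.0;
        int kept = 0;
        for (int i = 0; i < probs.length; i++) {
            // The kept-counter caps the set at k entries even under ties.
            if (probs[i] >= threshold && kept < k) {
                out[i] = probs[i];
                sum += probs[i];
                kept++;
            }
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        // Keep the top 2 of 4 options; the long tail is removed.
        System.out.println(Arrays.toString(topK(new double[]{0.5, 0.3, 0.15, 0.05}, 2)));
    }
}
```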

      • topP

         final Optional<Double> topP()

        Use nucleus sampling.

        In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.

        Recommended for advanced use cases only. You usually only need to use temperature.
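        The cutoff described above can be sketched directly: sort options by descending probability, accumulate until the cumulative mass reaches top_p, and renormalize over that nucleus. This is a conceptual sketch only, not the API's implementation:

```java
import java.util.Arrays;
import java.util.Comparator;

public class NucleusSampling {
    // Keep the smallest set of highest-probability options whose cumulative
    // probability reaches topP; renormalize over that set.
    static double[] topP(double[] probs, double topP) {
        Integer[] order = new Integer[probs.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        // Indices sorted by probability, highest first.
        Arrays.sort(order, Comparator.comparingDouble((Integer i) -> probs[i]).reversed());
        double[] out = new double[probs.length];
        double cumulative = 0.0;
        double sum = 0.0;
        for (int idx : order) {
            if (cumulative >= topP) break; // nucleus is complete
            out[idx] = probs[idx];
            cumulative += probs[idx];
            sum += probs[idx];
        }
        for (int i = 0; i < out.length; i++) out[i] /= sum;
        return out;
    }

    public static void main(String[] args) {
        // With topP = 0.7, the first two options (0.5 + 0.3 >= 0.7) survive.
        System.out.println(Arrays.toString(topP(new double[]{0.5, 0.3, 0.15, 0.05}, 0.7)));
    }
}
```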

      • _maxTokensToSample

         final JsonField<Long> _maxTokensToSample()

        The maximum number of tokens to generate before stopping.

        Note that our models may stop before reaching this maximum. This parameter only specifies the absolute maximum number of tokens to generate.

      • _model

         final JsonField<Model> _model()

        The model that will complete your prompt.

        See models for additional details and options.

      • _prompt

         final JsonField<String> _prompt()

        The prompt that you want Claude to complete.

        For proper response generation you will need to format your prompt using alternating \n\nHuman: and \n\nAssistant: conversational turns. For example:

        "\n\nHuman: {userQuestion}\n\nAssistant:"

        See prompt validation and our guide to prompt design for more details.

      • _stopSequences

         final JsonField<List<String>> _stopSequences()

        Sequences that will cause the model to stop generating.

        Our models stop on "\n\nHuman:", and may include additional built-in stop sequences in the future. By providing the stop_sequences parameter, you may include additional strings that will cause the model to stop generating.

      • _temperature

         final JsonField<Double> _temperature()

        Amount of randomness injected into the response.

        Defaults to 1.0. Ranges from 0.0 to 1.0. Use temperature closer to 0.0 for analytical / multiple choice, and closer to 1.0 for creative and generative tasks.

        Note that even with temperature of 0.0, the results will not be fully deterministic.

      • _topK

         final JsonField<Long> _topK()

        Only sample from the top K options for each subsequent token.

        Used to remove "long tail" low probability responses. Learn more technical details here.

        Recommended for advanced use cases only. You usually only need to use temperature.

      • _topP

         final JsonField<Double> _topP()

        Use nucleus sampling.

        In nucleus sampling, we compute the cumulative distribution over all the options for each subsequent token in decreasing probability order and cut it off once it reaches a particular probability specified by top_p. You should either alter temperature or top_p, but not both.

        Recommended for advanced use cases only. You usually only need to use temperature.

      • _headers

         Headers _headers()

        The full set of headers in the parameters, including both fixed and additional headers.

      • _queryParams

         QueryParams _queryParams()

        The full set of query params in the parameters, including both fixed and additional query params.