Class Message.Builder

    • Constructor Detail

    • Method Detail

      • id

         final Message.Builder id(String id)

        Unique object identifier.

        The format and length of IDs may change over time.

      • content

         final Message.Builder content(List<ContentBlock> content)

        Content generated by the model.

        This is an array of content blocks, each of which has a type that determines its shape.

        Example:

        [{ "type": "text", "text": "Hi, I'm Claude." }]

        If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

        For example, if the input messages were:

        [
          {
            "role": "user",
            "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
          },
          { "role": "assistant", "content": "The best answer is (" }
        ]

        Then the response content might be:

        [{ "type": "text", "text": "B)" }]
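The idea of typed content blocks can be sketched in plain Java. This is a simplified stand-in for illustration only, not the SDK's actual `ContentBlock`, `TextBlock`, or `ToolUseBlock` classes; the names and shapes here are assumptions.

```java
import java.util.List;

// Simplified stand-ins for the SDK's content block types (illustration only).
public class ContentDemo {
    // Each block carries a type that determines its shape.
    public sealed interface Block permits Text, ToolUse {}
    public record Text(String text) implements Block {}
    public record ToolUse(String name) implements Block {}

    // Concatenate the visible text of a response, skipping non-text blocks.
    public static String visibleText(List<Block> content) {
        StringBuilder sb = new StringBuilder();
        for (Block b : content) {
            if (b instanceof Text t) sb.append(t.text());
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        // If the request ended with the assistant turn "The best answer is (",
        // the response content might be a single text block continuing it:
        List<Block> response = List.of(new Text("B)"));
        System.out.println(visibleText(response)); // prints "B)"
    }
}
```

A sealed interface mirrors the "type determines shape" rule: exhaustive pattern matching over the block kinds is then checked by the compiler.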
      • content

         final Message.Builder content(JsonField<List<ContentBlock>> content)

        Content generated by the model.

        This is an array of content blocks, each of which has a type that determines its shape.

        Example:

        [{ "type": "text", "text": "Hi, I'm Claude." }]

        If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

        For example, if the input messages were:

        [
          {
            "role": "user",
            "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
          },
          { "role": "assistant", "content": "The best answer is (" }
        ]

        Then the response content might be:

        [{ "type": "text", "text": "B)" }]
      • addContent

         final Message.Builder addContent(ContentBlock content)

        Content generated by the model.

        This is an array of content blocks, each of which has a type that determines its shape.

        Example:

        [{ "type": "text", "text": "Hi, I'm Claude." }]

        If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

        For example, if the input messages were:

        [
          {
            "role": "user",
            "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
          },
          { "role": "assistant", "content": "The best answer is (" }
        ]

        Then the response content might be:

        [{ "type": "text", "text": "B)" }]
      • addContent

         final Message.Builder addContent(TextBlock text)

        Content generated by the model.

        This is an array of content blocks, each of which has a type that determines its shape.

        Example:

        [{ "type": "text", "text": "Hi, I'm Claude." }]

        If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

        For example, if the input messages were:

        [
          {
            "role": "user",
            "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
          },
          { "role": "assistant", "content": "The best answer is (" }
        ]

        Then the response content might be:

        [{ "type": "text", "text": "B)" }]
      • addContent

         final Message.Builder addContent(ToolUseBlock toolUse)

        Content generated by the model.

        This is an array of content blocks, each of which has a type that determines its shape.

        Example:

        [{ "type": "text", "text": "Hi, I'm Claude." }]

        If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

        For example, if the input messages were:

        [
          {
            "role": "user",
            "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
          },
          { "role": "assistant", "content": "The best answer is (" }
        ]

        Then the response content might be:

        [{ "type": "text", "text": "B)" }]
      • stopReason

         final Message.Builder stopReason(Message.StopReason stopReason)

        The reason that we stopped.

        This may be one of the following values:

        • "end_turn": the model reached a natural stopping point

        • "max_tokens": we exceeded the requested max_tokens or the model's maximum

        • "stop_sequence": one of your provided custom stop_sequences was generated

        • "tool_use": the model invoked one or more tools

        In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
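Branching on the stop reason is a common pattern. The enum below is a simplified stand-in for `Message.StopReason` covering the four documented values; the SDK's actual type may be shaped differently.

```java
// Simplified stand-in for Message.StopReason (illustration only).
public class StopReasonDemo {
    public enum StopReason { END_TURN, MAX_TOKENS, STOP_SEQUENCE, TOOL_USE }

    // Example of handling each documented reason that generation stopped.
    public static String describe(StopReason r) {
        return switch (r) {
            case END_TURN -> "model reached a natural stopping point";
            case MAX_TOKENS -> "requested max_tokens (or model maximum) exceeded";
            case STOP_SEQUENCE -> "a custom stop_sequence was generated";
            case TOOL_USE -> "the model invoked one or more tools";
        };
    }

    public static void main(String[] args) {
        System.out.println(describe(StopReason.TOOL_USE));
    }
}
```

The exhaustive `switch` ensures that if a new stop reason were ever added, the compiler would flag every unhandled call site.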

      • stopReason

         final Message.Builder stopReason(Optional<Message.StopReason> stopReason)

        The reason that we stopped.

        This may be one of the following values:

        • "end_turn": the model reached a natural stopping point

        • "max_tokens": we exceeded the requested max_tokens or the model's maximum

        • "stop_sequence": one of your provided custom stop_sequences was generated

        • "tool_use": the model invoked one or more tools

        In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.

      • stopReason

         final Message.Builder stopReason(JsonField<Message.StopReason> stopReason)

        The reason that we stopped.

        This may be one of the following values:

        • "end_turn": the model reached a natural stopping point

        • "max_tokens": we exceeded the requested max_tokens or the model's maximum

        • "stop_sequence": one of your provided custom stop_sequences was generated

        • "tool_use": the model invoked one or more tools

        In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.

      • stopSequence

         final Message.Builder stopSequence(String stopSequence)

        Which custom stop sequence was generated, if any.

        This value will be a non-null string if one of your custom stop sequences was generated.

      • stopSequence

         final Message.Builder stopSequence(Optional<String> stopSequence)

        Which custom stop sequence was generated, if any.

        This value will be a non-null string if one of your custom stop sequences was generated.

      • stopSequence

         final Message.Builder stopSequence(JsonField<String> stopSequence)

        Which custom stop sequence was generated, if any.

        This value will be a non-null string if one of your custom stop sequences was generated.
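Because the stop sequence is null when none of your custom sequences fired, the `Optional`-accepting overload is the natural fit. A minimal sketch of consuming such a value, using only standard-library `Optional`:

```java
import java.util.Optional;

// Sketch of handling a possibly-absent stop sequence value.
public class StopSequenceDemo {
    public static String report(Optional<String> stopSequence) {
        return stopSequence
            .map(s -> "stopped on custom sequence: " + s)
            .orElse("no custom stop sequence was generated");
    }

    public static void main(String[] args) {
        System.out.println(report(Optional.of("###")));
        System.out.println(report(Optional.empty()));
    }
}
```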

      • usage

         final Message.Builder usage(Usage usage)

        Billing and rate-limit usage.

        Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.

        Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

        For example, output_tokens will be non-zero, even for an empty string response from Claude.

      • usage

         final Message.Builder usage(JsonField<Usage> usage)

        Billing and rate-limit usage.

        Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.

        Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

        For example, output_tokens will be non-zero, even for an empty string response from Claude.
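Since billing and rate limits are driven by token counts rather than visible text, it can help to track totals explicitly. The record below is a simplified stand-in for the SDK's `Usage` type; the field names here are assumptions, not the SDK's actual API.

```java
// Simplified stand-in for the SDK's Usage type (field names are assumptions).
public class UsageDemo {
    public record Usage(long inputTokens, long outputTokens) {
        // Total billable tokens for one request/response pair.
        public long total() { return inputTokens + outputTokens; }
    }

    public static void main(String[] args) {
        // Even an empty visible response consumes some output tokens,
        // because the model's raw output is parsed before it reaches you.
        Usage u = new Usage(12, 3);
        System.out.println(u.total()); // prints 15
    }
}
```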