Class Message

    public final class Message

    • Method Detail

      • id

         final String id()

        Unique object identifier.

        The format and length of IDs may change over time.

      • content

         final List<ContentBlock> content()

        Content generated by the model.

        This is an array of content blocks, each of which has a type that determines its shape.

        Example:

        [{ "type": "text", "text": "Hi, I'm Claude." }]

        If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

        For example, if the input messages were:

        [
          {
            "role": "user",
            "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
          },
          { "role": "assistant", "content": "The best answer is (" }
        ]

        Then the response content might be:

        [{ "type": "text", "text": "B)" }]
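
        The prefill pattern above can be sketched as plain string handling: the final assistant turn is the prefilled prefix joined with the returned continuation (in the SDK, the continuation would come from the text of the first content block; the helper below is illustrative, not part of the SDK):

        ```java
        public class PrefillExample {
            // Joins the assistant prefill from the request with the text the
            // model returned; the response continues directly from that turn.
            static String completedTurn(String prefill, String continuation) {
                return prefill + continuation;
            }

            public static void main(String[] args) {
                String prefill = "The best answer is (";
                String continuation = "B)"; // e.g. the "text" of the first content block
                System.out.println(completedTurn(prefill, continuation));
                // The best answer is (B)
            }
        }
        ```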
      • model

         final Model model()

        The model that will complete your prompt.

        See models for additional details and options.

      • role

         final Message.Role role()

        Conversational role of the generated message.

        This will always be "assistant".

      • stopReason

         final Optional<Message.StopReason> stopReason()

        The reason that we stopped.

        This may be one of the following values:

        • "end_turn": the model reached a natural stopping point

        • "max_tokens": we exceeded the requested max_tokens or the model's maximum

        • "stop_sequence": one of your provided custom stop_sequences was generated

        • "tool_use": the model invoked one or more tools

        In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
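
        A common pattern is to branch on the stop reason after a call. The sketch below uses a stand-in enum and plain strings to show the shape of that dispatch (the real type is Message.StopReason; the action strings are illustrative):

        ```java
        import java.util.Optional;

        public class StopReasonExample {
            // Stand-in for Message.StopReason; the SDK's type carries the same cases.
            enum StopReason { END_TURN, MAX_TOKENS, STOP_SEQUENCE, TOOL_USE }

            // Decides what to do next based on why generation stopped. An empty
            // Optional corresponds to a streaming message_start event, where the
            // stop reason is not yet known.
            static String nextAction(Optional<StopReason> stopReason) {
                if (stopReason.isEmpty()) {
                    return "keep streaming";
                }
                switch (stopReason.get()) {
                    case END_TURN:      return "done";
                    case MAX_TOKENS:    return "retry with a higher max_tokens";
                    case STOP_SEQUENCE: return "inspect stopSequence()";
                    case TOOL_USE:      return "run the requested tools";
                    default:            return "unknown";
                }
            }

            public static void main(String[] args) {
                System.out.println(nextAction(Optional.of(StopReason.TOOL_USE)));
                System.out.println(nextAction(Optional.empty()));
            }
        }
        ```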

      • stopSequence

         final Optional<String> stopSequence()

        Which custom stop sequence was generated, if any.

        This value will be a non-null string if one of your custom stop sequences was generated.

      • type

         final Message.Type type()

        Object type.

        For Messages, this is always "message".

      • usage

         final Usage usage()

        Billing and rate-limit usage.

        Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.

        Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

        For example, output_tokens will be non-zero, even for an empty string response from Claude.
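
        As a rough illustration, the tokens billed for one request/response pair are the sum of the input and output counts. The record below is a stand-in for the SDK's Usage type (the inputTokens/outputTokens accessor names are assumed here):

        ```java
        public class UsageExample {
            // Stand-in for the SDK's Usage type.
            record Usage(long inputTokens, long outputTokens) {}

            // Total tokens billed for a single request/response pair.
            static long totalTokens(Usage usage) {
                return usage.inputTokens() + usage.outputTokens();
            }

            public static void main(String[] args) {
                // Even an empty visible response carries a non-zero output count.
                Usage usage = new Usage(12, 3);
                System.out.println(totalTokens(usage)); // 15
            }
        }
        ```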

      • _id

         final JsonField<String> _id()

        Unique object identifier.

        The format and length of IDs may change over time.
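
        The underscore accessors below mirror the plain ones but return the raw JsonField wrapper, which is useful when a field may be absent or fail validation. A hypothetical stand-in for that pattern (not the SDK's actual JsonField implementation; the example ID is made up):

        ```java
        import java.util.Optional;

        public class JsonFieldExample {
            // Hypothetical stand-in for JsonField<T>: wraps a value that may be missing.
            static class JsonField<T> {
                private final T value;
                JsonField(T value) { this.value = value; }
                boolean isMissing() { return value == null; }
                Optional<T> asKnown() { return Optional.ofNullable(value); }
            }

            public static void main(String[] args) {
                JsonField<String> id = new JsonField<>("msg_abc123"); // illustrative ID
                JsonField<String> missing = new JsonField<>(null);
                System.out.println(id.asKnown().orElse("<missing>"));
                System.out.println(missing.isMissing()); // true
            }
        }
        ```

        The plain accessor gives you the validated value directly; the raw wrapper lets you check for a missing or malformed field before committing to a type.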

      • _content

         final JsonField<List<ContentBlock>> _content()

        Content generated by the model.

        This is an array of content blocks, each of which has a type that determines its shape.

        Example:

        [{ "type": "text", "text": "Hi, I'm Claude." }]

        If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

        For example, if the input messages were:

        [
          {
            "role": "user",
            "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun"
          },
          { "role": "assistant", "content": "The best answer is (" }
        ]

        Then the response content might be:

        [{ "type": "text", "text": "B)" }]

      • _model

         final JsonField<Model> _model()

        The model that will complete your prompt.

        See models for additional details and options.

      • _stopReason

         final JsonField<Message.StopReason> _stopReason()

        The reason that we stopped.

        This may be one of the following values:

        • "end_turn": the model reached a natural stopping point

        • "max_tokens": we exceeded the requested max_tokens or the model's maximum

        • "stop_sequence": one of your provided custom stop_sequences was generated

        • "tool_use": the model invoked one or more tools

        In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.

      • _stopSequence

         final JsonField<String> _stopSequence()

        Which custom stop sequence was generated, if any.

        This value will be a non-null string if one of your custom stop sequences was generated.

      • _usage

         final JsonField<Usage> _usage()

        Billing and rate-limit usage.

        Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.

        Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

        For example, output_tokens will be non-zero, even for an empty string response from Claude.