Class BetaMessage
public final class BetaMessage
-
Nested Class Summary
Nested Classes:

public final class BetaMessage.Builder
    A builder for BetaMessage.

public final class BetaMessage.StopReason
    The reason that we stopped. This may be one of the following values:

    - "end_turn": the model reached a natural stopping point
    - "max_tokens": we exceeded the requested max_tokens or the model's maximum
    - "stop_sequence": one of your provided custom stop_sequences was generated
    - "tool_use": the model invoked one or more tools

    In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
-
Method Summary
Modifier and Type | Method | Description
final String | id() | Unique object identifier.
final List<BetaContentBlock> | content() | Content generated by the model.
final Model | model() | The model that will complete your prompt.
final JsonValue | _role() | Conversational role of the generated message.
final Optional<BetaMessage.StopReason> | stopReason() | The reason that we stopped.
final Optional<String> | stopSequence() | Which custom stop sequence was generated, if any.
final JsonValue | _type() | Object type.
final BetaUsage | usage() | Billing and rate-limit usage.
final JsonField<String> | _id() | Unique object identifier.
final JsonField<List<BetaContentBlock>> | _content() | Content generated by the model.
final JsonField<Model> | _model() | The model that will complete your prompt.
final JsonField<BetaMessage.StopReason> | _stopReason() | The reason that we stopped.
final JsonField<String> | _stopSequence() | Which custom stop sequence was generated, if any.
final JsonField<BetaUsage> | _usage() | Billing and rate-limit usage.
final Map<String, JsonValue> | _additionalProperties() |
final BetaMessageParam | toParam() |
final BetaMessage | validate() |
final BetaMessage.Builder | toBuilder() |
Boolean | equals(Object other) |
Integer | hashCode() |
String | toString() |
final static BetaMessage.Builder | builder() | Returns a mutable builder for constructing an instance of BetaMessage.
-
Method Detail
-
content
final List<BetaContentBlock> content()
Content generated by the model.

This is an array of content blocks, each of which has a type that determines its shape.

Example:

[{ "type": "text", "text": "Hi, I'm Claude." }]

If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

For example, if the input messages were:

[
  { "role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun" },
  { "role": "assistant", "content": "The best answer is (" }
]

Then the response content might be:

[{ "type": "text", "text": "B)" }]
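The prefill behavior above can be sketched with plain strings, with no API call involved: because the response content continues the request's trailing assistant turn directly, the complete assistant answer is the prefill joined with the returned text block. The string values are copied from the example above.

```java
// Standalone sketch of the prefill technique described above (no API call):
// the response continues the request's final assistant turn, so the full
// answer is the prefill plus the returned "text" content block.
public class PrefillDemo {
    public static void main(String[] args) {
        String prefill = "The best answer is (";   // assistant turn sent in the request
        String responseText = "B)";                // "text" block returned in the response
        String fullAnswer = prefill + responseText;
        System.out.println(fullAnswer);            // The best answer is (B)
    }
}
```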
-
model
final Model model()
The model that will complete your prompt. See models for additional details and options.
-
_role
final JsonValue _role()
Conversational role of the generated message.
This will always be "assistant".
-
stopReason
final Optional<BetaMessage.StopReason> stopReason()
The reason that we stopped.
This may be one of the following values:

- "end_turn": the model reached a natural stopping point
- "max_tokens": we exceeded the requested max_tokens or the model's maximum
- "stop_sequence": one of your provided custom stop_sequences was generated
- "tool_use": the model invoked one or more tools

In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
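A minimal sketch of dispatching on the four documented stop-reason values, using their raw string forms. It does not use the real BetaMessage.StopReason type (which is assumed to carry these same values); it only illustrates the branching a caller typically writes.

```java
// Toy dispatch over the documented stop-reason values. In real SDK code you
// would match on BetaMessage.StopReason rather than raw strings; the value
// names and their meanings are taken from the list above.
public class StopReasonDemo {
    static String describe(String stopReason) {
        switch (stopReason) {
            case "end_turn":      return "the model reached a natural stopping point";
            case "max_tokens":    return "we exceeded the requested max_tokens or the model's maximum";
            case "stop_sequence": return "one of your provided custom stop_sequences was generated";
            case "tool_use":      return "the model invoked one or more tools";
            default:              return "unknown stop reason: " + stopReason;
        }
    }

    public static void main(String[] args) {
        System.out.println(describe("end_turn"));
        System.out.println(describe("tool_use"));
    }
}
```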
-
stopSequence
final Optional<String> stopSequence()
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
-
usage
final BetaUsage usage()
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

For example, output_tokens will be non-zero, even for an empty string response from Claude.

The total input token count for a request is the sum of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
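The total-input-tokens arithmetic above can be written out directly; the field values below are made-up examples, not real usage numbers.

```java
// Sum of the three input-token usage fields, as described above.
// The sample values are invented for illustration only.
public class UsageDemo {
    static long totalInputTokens(long inputTokens,
                                 long cacheCreationInputTokens,
                                 long cacheReadInputTokens) {
        return inputTokens + cacheCreationInputTokens + cacheReadInputTokens;
    }

    public static void main(String[] args) {
        // e.g. 120 fresh input tokens, 2048 written to cache, 512 read from cache
        System.out.println(totalInputTokens(120, 2048, 512)); // 2680
    }
}
```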
-
_id
final JsonField<String> _id()
Unique object identifier.
The format and length of IDs may change over time.
-
_content
final JsonField<List<BetaContentBlock>> _content()
Content generated by the model.
This is an array of content blocks, each of which has a type that determines its shape.

Example:

[{ "type": "text", "text": "Hi, I'm Claude." }]

If the request input messages ended with an assistant turn, then the response content will continue directly from that last turn. You can use this to constrain the model's output.

For example, if the input messages were:

[
  { "role": "user", "content": "What's the Greek name for Sun? (A) Sol (B) Helios (C) Sun" },
  { "role": "assistant", "content": "The best answer is (" }
]

Then the response content might be:

[{ "type": "text", "text": "B)" }]
-
_model
final JsonField<Model> _model()
The model that will complete your prompt. See models for additional details and options.
-
_stopReason
final JsonField<BetaMessage.StopReason> _stopReason()
The reason that we stopped.
This may be one of the following values:

- "end_turn": the model reached a natural stopping point
- "max_tokens": we exceeded the requested max_tokens or the model's maximum
- "stop_sequence": one of your provided custom stop_sequences was generated
- "tool_use": the model invoked one or more tools

In non-streaming mode this value is always non-null. In streaming mode, it is null in the message_start event and non-null otherwise.
-
_stopSequence
final JsonField<String> _stopSequence()
Which custom stop sequence was generated, if any.
This value will be a non-null string if one of your custom stop sequences was generated.
-
_usage
final JsonField<BetaUsage> _usage()
Billing and rate-limit usage.
Anthropic's API bills and rate-limits by token counts, as tokens represent the underlying cost to our systems.
Under the hood, the API transforms requests into a format suitable for the model. The model's output then goes through a parsing stage before becoming an API response. As a result, the token counts in usage will not match one-to-one with the exact visible content of an API request or response.

For example, output_tokens will be non-zero, even for an empty string response from Claude.

The total input token count for a request is the sum of input_tokens, cache_creation_input_tokens, and cache_read_input_tokens.
-
_additionalProperties
final Map<String, JsonValue> _additionalProperties()
-
toParam
final BetaMessageParam toParam()
-
validate
final BetaMessage validate()
-
toBuilder
final BetaMessage.Builder toBuilder()
-
builder
final static BetaMessage.Builder builder()
Returns a mutable builder for constructing an instance of BetaMessage.
The following fields are required:
.id()
.content()
.model()
.stopReason()
.stopSequence()
.usage()
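A standalone toy mirroring how a required-field builder of this shape is typically used. It does not use the real BetaMessage or BetaMessage.Builder types (field types are simplified to String and most fields are omitted); the setter-chaining style and the build-time required-field check are the point.

```java
// Toy stand-in for the BetaMessage.Builder pattern: required fields must be
// set before build(). Real SDK fields use the richer types in the method
// summary; the values below are invented examples.
public class BuilderDemo {
    final String id;
    final String model;

    private BuilderDemo(Builder b) {
        if (b.id == null || b.model == null) {
            throw new IllegalStateException("missing required field");
        }
        this.id = b.id;
        this.model = b.model;
    }

    static Builder builder() { return new Builder(); }

    static class Builder {
        private String id;
        private String model;
        Builder id(String id) { this.id = id; return this; }
        Builder model(String model) { this.model = model; return this; }
        BuilderDemo build() { return new BuilderDemo(this); }
    }

    public static void main(String[] args) {
        BuilderDemo m = BuilderDemo.builder()
                .id("msg_123")          // made-up identifier
                .model("example-model") // made-up model name
                .build();
        System.out.println(m.id + " / " + m.model);
    }
}
```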