String text
Text to include in the message.
ImageBlock image
Image to include in the message.
This field is only supported by Anthropic Claude 3 models.
ToolResultBlock toolResult
The result for a tool request that a model makes.
Long latencyMs
The latency of the call to Converse, in milliseconds.
Message message
The message that the model generates.
String modelId
The identifier for the model that you want to call.
The modelId to provide depends on the type of model that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
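The three modelId forms above can be sketched as request payloads. The model ID and ARN values below are placeholders for illustration, not identifiers you should copy:

```python
import json

# Sketch of minimal Converse request payloads, assuming placeholder identifiers.
# A base model is addressed by its model ID (or ARN); provisioned and custom
# models are addressed by the ARN of their Provisioned Throughput.
base_model_request = {
    "modelId": "anthropic.claude-3-sonnet-20240229-v1:0",  # base model ID (example)
    "messages": [
        {"role": "user", "content": [{"text": "Hello"}]}
    ],
}

provisioned_request = {
    # ARN of a Provisioned Throughput (placeholder account and resource values)
    "modelId": "arn:aws:bedrock:us-east-1:111122223333:provisioned-model/abc123",
    "messages": base_model_request["messages"],
}

print(json.dumps(base_model_request, indent=2))
```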
List<E> messages
The messages that you want to send to the model.
List<E> system
A system prompt to pass to the model.
InferenceConfiguration inferenceConfig
Inference parameters to pass to the model. Converse supports a base set of inference parameters. If
you need to pass additional parameters that the model supports, use the additionalModelRequestFields
request field.
ToolConfiguration toolConfig
Configuration information for the tools that the model can use when generating a response.
This field is only supported by Anthropic Claude 3, Cohere Command R, Cohere Command R+, and Mistral Large models.
List<E> additionalModelResponseFieldPaths
Additional model parameters field paths to return in the response. Converse returns the requested
fields as a JSON Pointer object in the additionalModelResultFields field. The following is example
JSON for additionalModelResponseFieldPaths.
[ "/stop_sequence" ]
For information about the JSON Pointer syntax, see the Internet Engineering Task Force (IETF) documentation.
Converse rejects an empty JSON Pointer or an incorrectly structured JSON Pointer with a
400 error code. If the JSON Pointer is valid but the requested field is not in the model response,
Converse ignores it.
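The field-path behavior above can be sketched as follows. The modelId is a placeholder, and the pointer check merely mirrors the validity rule described in the text:

```python
import json

# Sketch of a Converse request asking for an extra model-specific response
# field via a JSON Pointer path (here "/stop_sequence", as in the example above).
request = {
    "modelId": "example-model-id",  # placeholder
    "messages": [{"role": "user", "content": [{"text": "Hi"}]}],
    "additionalModelResponseFieldPaths": ["/stop_sequence"],
}

# Converse rejects an empty or malformed pointer with a 400 error; a valid
# pointer that matches nothing in the model response is silently ignored.
assert all(p.startswith("/") for p in request["additionalModelResponseFieldPaths"])
print(json.dumps(request["additionalModelResponseFieldPaths"]))
```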
ConverseOutput output
The result from the call to Converse.
String stopReason
The reason why the model stopped generating output.
TokenUsage usage
The total number of tokens used in the call to Converse. The total includes the tokens input to the
model and the tokens generated by the model.
ConverseMetrics metrics
Metrics for the call to Converse.
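Putting the response fields above together, a Converse response body can be sketched like this. All values are illustrative, and the TokenUsage member names (inputTokens, outputTokens, totalTokens) are an assumption about the usage structure:

```python
# Sketch of a Converse response body with the fields described above:
# output.message, stopReason, usage, and metrics. Values are illustrative.
response = {
    "output": {
        "message": {
            "role": "assistant",
            "content": [{"text": "Hello! How can I help?"}],
        }
    },
    "stopReason": "end_turn",
    "usage": {"inputTokens": 10, "outputTokens": 8, "totalTokens": 18},
    "metrics": {"latencyMs": 551},
}

# Pull out the generated text and the token accounting.
reply = response["output"]["message"]["content"][0]["text"]
total_tokens = response["usage"]["totalTokens"]
```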
String format
The format of the image.
ImageSource source
The source for the image.
ByteBuffer bytes
The raw image bytes for the image. If you use an AWS SDK, you don't need to base64 encode the image bytes.
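An image content block can be sketched as below. With an AWS SDK the raw bytes are passed directly, as noted above; in a raw JSON request they are base64-encoded. The byte string here is a truncated placeholder, not a real image:

```python
import base64

# Sketch of an ImageBlock: format plus a source holding the image bytes.
raw_png = b"\x89PNG\r\n\x1a\n..."  # truncated placeholder, not a valid image
image_block = {
    "image": {
        "format": "png",
        "source": {"bytes": base64.b64encode(raw_png).decode("ascii")},
    }
}
```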
Integer maxTokens
The maximum number of tokens to allow in the generated response. The default value is the maximum allowed value for the model that you are using. For more information, see Inference parameters for foundation models.
The following example Converse request shows tool use and tool results in the messages field, with a matching toolConfig:
{
  "messages": [
    {
      "role": "user",
      "content": [{ "text": "what's the weather in Queens, NY and Austin, TX?" }]
    },
    {
      "role": "assistant",
      "content": [
        {
          "toolUse": {
            "toolUseId": "1",
            "name": "get_weather",
            "input": { "city": "Queens", "state": "NY" }
          }
        },
        {
          "toolUse": {
            "toolUseId": "2",
            "name": "get_weather",
            "input": { "city": "Austin", "state": "TX" }
          }
        }
      ]
    },
    {
      "role": "user",
      "content": [
        {
          "toolResult": {
            "toolUseId": "2",
            "content": [{ "json": { "weather": "40" } }]
          }
        },
        { "text": "..." },
        {
          "toolResult": {
            "toolUseId": "1",
            "content": [{ "text": "result text" }]
          }
        }
      ]
    }
  ],
  "toolConfig": {
    "tools": [
      {
        "name": "get_weather",
        "description": "Get weather",
        "inputSchema": {
          "type": "object",
          "properties": {
            "city": { "type": "string", "description": "City of location" },
            "state": { "type": "string", "description": "State of location" }
          },
          "required": ["city", "state"]
        }
      }
    ]
  }
}
Float temperature
The likelihood of the model selecting higher-probability options while generating a response. A lower value makes the model more likely to choose higher-probability options, while a higher value makes the model more likely to choose lower-probability options.
The default value is the default value for the model that you are using. For more information, see Inference parameters for foundation models.
Float topP
The percentage of most-likely candidates that the model considers for the next token. For example, if you choose
a value of 0.8 for topP, the model selects from the top 80% of the probability distribution of
tokens that could be next in the sequence.
The default value is the default value for the model that you are using. For more information, see Inference parameters for foundation models.
List<E> stopSequences
A list of stop sequences. A stop sequence is a sequence of characters that causes the model to stop generating the response.
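The base inference parameters above can be collected into a single inferenceConfig object, sketched below with illustrative values. Model-specific parameters outside this base set belong in additionalModelRequestFields instead:

```python
# Sketch of a Converse inferenceConfig with the base parameters described above.
inference_config = {
    "maxTokens": 512,       # cap on tokens in the generated response
    "temperature": 0.5,     # lower -> favor higher-probability options
    "topP": 0.8,            # sample from the top 80% of the probability mass
    "stopSequences": ["END"],  # generation stops when this sequence appears
}

# Basic sanity checks on the illustrative values.
assert 0.0 <= inference_config["temperature"] <= 1.0
assert 0.0 < inference_config["topP"] <= 1.0
```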
ByteBuffer body
The prompt and inference parameters, in the format specified by the contentType header. To see
the format and content of the request and response bodies for different models, refer to Inference parameters. For
more information, see Run inference in the Amazon Bedrock User Guide.
String contentType
The MIME type of the input data in the request. The default value is application/json.
String accept
The desired MIME type of the inference body in the response. The default value is application/json.
String modelId
The unique identifier of the model to invoke to run inference.
The modelId to provide depends on the type of model that you use:
If you use a base model, specify the model ID or its ARN. For a list of model IDs for base models, see Amazon Bedrock base model IDs (on-demand throughput) in the Amazon Bedrock User Guide.
If you use a provisioned model, specify the ARN of the Provisioned Throughput. For more information, see Run inference using a Provisioned Throughput in the Amazon Bedrock User Guide.
If you use a custom model, first purchase Provisioned Throughput for it. Then specify the ARN of the resulting provisioned model. For more information, see Use a custom model in Amazon Bedrock in the Amazon Bedrock User Guide.
String trace
Specifies whether to enable or disable the Bedrock trace. If enabled, you can see the full Bedrock trace.
String guardrailIdentifier
The unique identifier of the guardrail that you want to use. If you don't provide a value, no guardrail is applied to the invocation.
An error is thrown in any of the following situations:
You don't provide a guardrail identifier but you specify the amazon-bedrock-guardrailConfig field in
the request body.
You enable the guardrail but the contentType isn't application/json.
You provide a guardrail identifier, but guardrailVersion isn't specified.
String guardrailVersion
The version number for the guardrail. The value can also be DRAFT.
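The InvokeModel parameters above, including the guardrail fields and their pairing rules, can be sketched as follows. All identifier values are placeholders, and the body shape is an assumption about one model's native format:

```python
import json

# Sketch of InvokeModel call parameters with the guardrail fields described
# above. Guardrails require a JSON content type, and a guardrailIdentifier
# must be accompanied by a guardrailVersion.
invoke_params = {
    "modelId": "example-model-id",           # placeholder
    "contentType": "application/json",       # default MIME type of the input
    "accept": "application/json",            # default MIME type of the response
    "guardrailIdentifier": "gr-example123",  # placeholder guardrail ID
    "guardrailVersion": "DRAFT",             # a version number, or DRAFT
    "body": json.dumps({"prompt": "Hello", "max_tokens": 100}),  # model-native body (assumed shape)
}

# The pairing rule from the text: an identifier without a version is an error.
assert ("guardrailIdentifier" not in invoke_params) or ("guardrailVersion" in invoke_params)
```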
ByteBuffer body
Inference response from the model in the format specified in the contentType header. To see the
format and content of the request and response bodies for different models, refer to Inference parameters.
String contentType
The MIME type of the inference result.
String name
The name of the tool that the model must request.
String text
A system prompt for the model.
ToolSpecification toolSpec
The specification for the tool.
AutoToolChoice auto
The model automatically decides whether to call a tool or to generate text instead.
AnyToolChoice any
The model must request at least one tool (no text is generated).
SpecificToolChoice tool
The model must request the specified tool.
List<E> tools
An array of tools that you want to pass to a model.
ToolChoice toolChoice
If supported by the model, forces the model to request a tool.
String text
A tool result that is text.
ImageBlock image
A tool result that is an image.
This field is only supported by Anthropic Claude 3 models.
String name
The name for the tool.
String description
The description for the tool.
ToolInputSchema inputSchema
The input schema for the tool in JSON format.
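A tool configuration combining the fields above can be sketched as follows, mirroring the get_weather example used earlier in this section. The toolChoice variants are auto (the model decides), any (the model must call some tool), and tool (the model must call the named tool):

```python
# Sketch of a ToolConfiguration that forces the model to call get_weather.
# The tool name, description, and schema mirror the earlier example request.
tool_config = {
    "tools": [
        {
            "name": "get_weather",
            "description": "Get weather",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "city": {"type": "string", "description": "City of location"},
                    "state": {"type": "string", "description": "State of location"},
                },
                "required": ["city", "state"],
            },
        }
    ],
    # Force the named tool; {"auto": {}} or {"any": {}} are the other choices.
    "toolChoice": {"tool": {"name": "get_weather"}},
}
```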