Class GenAiIncubatingAttributes
java.lang.Object
io.opentelemetry.semconv.incubating.GenAiIncubatingAttributes
Nested Class Summary

Nested Classes:
- static final class: Values for GEN_AI_OPERATION_NAME.
- static final class: Values for GEN_AI_SYSTEM.
- static final class: Values for GEN_AI_TOKEN_TYPE.
Field Summary
FieldsModifier and TypeFieldDescriptionstatic final io.opentelemetry.api.common.AttributeKey<String>The full response received from the GenAI model.static final io.opentelemetry.api.common.AttributeKey<String>The name of the operation being performed.static final io.opentelemetry.api.common.AttributeKey<String>The full prompt sent to the GenAI model.static final io.opentelemetry.api.common.AttributeKey<Double>The frequency penalty setting for the GenAI request.static final io.opentelemetry.api.common.AttributeKey<Long>The maximum number of tokens the model generates for a request.static final io.opentelemetry.api.common.AttributeKey<String>The name of the GenAI model a request is being made to.static final io.opentelemetry.api.common.AttributeKey<Double>The presence penalty setting for the GenAI request.List of sequences that the model will use to stop generating further tokens.static final io.opentelemetry.api.common.AttributeKey<Double>The temperature setting for the GenAI request.static final io.opentelemetry.api.common.AttributeKey<Double>The top_k sampling setting for the GenAI request.static final io.opentelemetry.api.common.AttributeKey<Double>The top_p sampling setting for the GenAI request.Array of reasons the model stopped generating tokens, corresponding to each generation received.static final io.opentelemetry.api.common.AttributeKey<String>The unique identifier for the completion.static final io.opentelemetry.api.common.AttributeKey<String>The name of the model that generated the response.static final io.opentelemetry.api.common.AttributeKey<String>The Generative AI product as identified by the client or server instrumentation.static final io.opentelemetry.api.common.AttributeKey<String>The type of token being counted.static final io.opentelemetry.api.common.AttributeKey<Long>Deprecated.Deprecated, use `gen_ai.usage.output_tokens` instead.static final io.opentelemetry.api.common.AttributeKey<Long>The number of tokens used in the GenAI input 
(prompt).static final io.opentelemetry.api.common.AttributeKey<Long>The number of tokens used in the GenAI response (completion).static final io.opentelemetry.api.common.AttributeKey<Long>Deprecated.Deprecated, use `gen_ai.usage.input_tokens` instead. -
Method Summary
Field Details
GEN_AI_COMPLETION
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_COMPLETION
The full response received from the GenAI model.
Notes:
- It's RECOMMENDED to format completions as a JSON string matching the OpenAI messages format.
GEN_AI_OPERATION_NAME
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_OPERATION_NAME
The name of the operation being performed.
Notes:
- If one of the predefined values applies but the specific system uses a different name, it's RECOMMENDED to document it in the semantic conventions for that GenAI system and to use the system-specific name in the instrumentation. If a different name is not documented, instrumentation libraries SHOULD use the applicable predefined value.
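A typical place to apply this attribute is on the span describing the model call. The sketch below uses a no-op OpenTelemetry instance so it runs without a configured SDK; the class name GenAiSpanExample, the tracer name, and the "chat gpt-4" span naming are illustrative, not part of this API.

```java
import io.opentelemetry.api.OpenTelemetry;
import io.opentelemetry.api.trace.Span;
import io.opentelemetry.api.trace.Tracer;
import io.opentelemetry.semconv.incubating.GenAiIncubatingAttributes;

public class GenAiSpanExample {
    // Start a span for a GenAI call, tagging it with the operation name and
    // request model. "chat" is one of the predefined operation-name values.
    public static Span startChatSpan(String requestModel) {
        // OpenTelemetry.noop() keeps this example self-contained; a real
        // instrumentation would obtain a Tracer from a configured SDK.
        Tracer tracer = OpenTelemetry.noop().getTracer("example-instrumentation");
        return tracer.spanBuilder("chat " + requestModel)
                .setAttribute(GenAiIncubatingAttributes.GEN_AI_OPERATION_NAME, "chat")
                .setAttribute(GenAiIncubatingAttributes.GEN_AI_REQUEST_MODEL, requestModel)
                .startSpan();
    }
}
```

With a real SDK the same code produces an exported span; here the no-op tracer discards the attributes.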
GEN_AI_PROMPT
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_PROMPT
The full prompt sent to the GenAI model.
Notes:
- It's RECOMMENDED to format prompts as a JSON string matching the OpenAI messages format.
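One way to follow that recommendation is to serialize the conversation into the OpenAI messages shape before setting the attribute. The helper below is a hypothetical sketch (the class name and hand-rolled JSON are not part of this API; a real instrumentation would use a proper JSON library and handle escaping).

```java
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.semconv.incubating.GenAiIncubatingAttributes;

public class GenAiContentCapture {
    // Minimal hand-rolled JSON in the OpenAI messages format; assumes the
    // content contains no characters that need escaping.
    static String asMessagesJson(String role, String content) {
        return "[{\"role\":\"" + role + "\",\"content\":\"" + content + "\"}]";
    }

    // Record prompt and completion as JSON strings, per the convention's notes.
    public static Attributes contentAttributes(String userPrompt, String completion) {
        return Attributes.builder()
                .put(GenAiIncubatingAttributes.GEN_AI_PROMPT, asMessagesJson("user", userPrompt))
                .put(GenAiIncubatingAttributes.GEN_AI_COMPLETION, asMessagesJson("assistant", completion))
                .build();
    }
}
```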
GEN_AI_REQUEST_FREQUENCY_PENALTY
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_FREQUENCY_PENALTY
The frequency penalty setting for the GenAI request.
GEN_AI_REQUEST_MAX_TOKENS
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_REQUEST_MAX_TOKENS
The maximum number of tokens the model generates for a request.
GEN_AI_REQUEST_MODEL
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_REQUEST_MODEL
The name of the GenAI model a request is being made to.
GEN_AI_REQUEST_PRESENCE_PENALTY
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_PRESENCE_PENALTY
The presence penalty setting for the GenAI request.
GEN_AI_REQUEST_STOP_SEQUENCES
public static final io.opentelemetry.api.common.AttributeKey<List<String>> GEN_AI_REQUEST_STOP_SEQUENCES
List of sequences that the model will use to stop generating further tokens.
GEN_AI_REQUEST_TEMPERATURE
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_TEMPERATURE
The temperature setting for the GenAI request.
GEN_AI_REQUEST_TOP_K
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_TOP_K
The top_k sampling setting for the GenAI request.
GEN_AI_REQUEST_TOP_P
public static final io.opentelemetry.api.common.AttributeKey<Double> GEN_AI_REQUEST_TOP_P
The top_p sampling setting for the GenAI request.
GEN_AI_RESPONSE_FINISH_REASONS
public static final io.opentelemetry.api.common.AttributeKey<List<String>> GEN_AI_RESPONSE_FINISH_REASONS
Array of reasons the model stopped generating tokens, corresponding to each generation received.
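GEN_AI_REQUEST_STOP_SEQUENCES and GEN_AI_RESPONSE_FINISH_REASONS both carry List<String> values, so they are set with a Java list rather than a single string. A minimal sketch (the class name and the sample values "###", "END", and "stop" are illustrative):

```java
import java.util.Arrays;
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.semconv.incubating.GenAiIncubatingAttributes;

public class GenAiListAttributesExample {
    public static Attributes listAttributes() {
        return Attributes.builder()
                // Both keys are AttributeKey<List<String>>; pass a List value.
                .put(GenAiIncubatingAttributes.GEN_AI_REQUEST_STOP_SEQUENCES,
                        Arrays.asList("###", "END"))
                .put(GenAiIncubatingAttributes.GEN_AI_RESPONSE_FINISH_REASONS,
                        Arrays.asList("stop"))
                .build();
    }
}
```

The finish-reasons list is positional: its n-th entry corresponds to the n-th generation in the response.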
GEN_AI_RESPONSE_ID
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_RESPONSE_ID
The unique identifier for the completion.
GEN_AI_RESPONSE_MODEL
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_RESPONSE_MODEL
The name of the model that generated the response.
GEN_AI_SYSTEM
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_SYSTEM
The Generative AI product as identified by the client or server instrumentation.
Notes:
- The gen_ai.system attribute describes a family of GenAI models; the specific model is identified by the gen_ai.request.model and gen_ai.response.model attributes.
- The actual GenAI product may differ from the one identified by the client. For example, when using OpenAI client libraries to communicate with Mistral, gen_ai.system is set to openai based on the instrumentation's best knowledge.
- For a custom model, a custom friendly name SHOULD be used. If none of these options apply, gen_ai.system SHOULD be set to _OTHER.
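The interplay of the three model-related attributes can be sketched as follows; the class name and the sample values ("openai", model names) are illustrative, assuming an OpenAI client library is the thing being instrumented:

```java
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.semconv.incubating.GenAiIncubatingAttributes;

public class GenAiSystemExample {
    // gen_ai.system names the product family as seen by the client library;
    // request.model and response.model pin down the concrete model, which may
    // differ (e.g. the server resolves an alias to a dated snapshot).
    public static Attributes describeCall(String requestModel, String responseModel) {
        return Attributes.builder()
                .put(GenAiIncubatingAttributes.GEN_AI_SYSTEM, "openai")
                .put(GenAiIncubatingAttributes.GEN_AI_REQUEST_MODEL, requestModel)
                .put(GenAiIncubatingAttributes.GEN_AI_RESPONSE_MODEL, responseModel)
                .build();
    }
}
```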
GEN_AI_TOKEN_TYPE
public static final io.opentelemetry.api.common.AttributeKey<String> GEN_AI_TOKEN_TYPE
The type of token being counted.
GEN_AI_USAGE_COMPLETION_TOKENS
@Deprecated
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_USAGE_COMPLETION_TOKENS
Deprecated. Use gen_ai.usage.output_tokens instead.
GEN_AI_USAGE_INPUT_TOKENS
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_USAGE_INPUT_TOKENS
The number of tokens used in the GenAI input (prompt).
GEN_AI_USAGE_OUTPUT_TOKENS
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_USAGE_OUTPUT_TOKENS
The number of tokens used in the GenAI response (completion).
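Token usage is typically recorded with the two non-deprecated keys above once the response arrives. A minimal sketch (the class name is hypothetical; the counts would come from the provider's usage report):

```java
import io.opentelemetry.api.common.Attributes;
import io.opentelemetry.semconv.incubating.GenAiIncubatingAttributes;

public class GenAiUsageExample {
    // Prefer gen_ai.usage.input_tokens / output_tokens over the deprecated
    // prompt_tokens / completion_tokens keys.
    public static Attributes usageAttributes(long inputTokens, long outputTokens) {
        return Attributes.builder()
                .put(GenAiIncubatingAttributes.GEN_AI_USAGE_INPUT_TOKENS, inputTokens)
                .put(GenAiIncubatingAttributes.GEN_AI_USAGE_OUTPUT_TOKENS, outputTokens)
                .build();
    }
}
```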
GEN_AI_USAGE_PROMPT_TOKENS
@Deprecated
public static final io.opentelemetry.api.common.AttributeKey<Long> GEN_AI_USAGE_PROMPT_TOKENS
Deprecated. Use gen_ai.usage.input_tokens instead.