Class TextAnalyticsClient
Getting Started
To interact with the Text Analytics features of the Azure AI Language service, you'll need to create an
instance of the TextAnalyticsClient. This requires the service's key credential; alternatively, you can use
AAD authentication via
Azure Identity
to connect to the service.
- Azure Key Credential, see AzureKeyCredential.
- Azure Active Directory, see TokenCredential.
Sample: Construct Synchronous Text Analytics Client with Azure Key Credential
The following code sample demonstrates the creation of a TextAnalyticsClient,
using the TextAnalyticsClientBuilder to configure it with a key credential.
TextAnalyticsClient textAnalyticsClient = new TextAnalyticsClientBuilder()
.credential(new AzureKeyCredential("{key}"))
.endpoint("{endpoint}")
.buildClient();
View TextAnalyticsClientBuilder for additional ways to construct the client.
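For the AAD path mentioned above, the client can instead be built with a TokenCredential. A minimal sketch, assuming the azure-identity dependency is on the classpath and the {endpoint} placeholder is filled in:

```java
// Build a client that authenticates with Azure Active Directory via Azure Identity.
TextAnalyticsClient textAnalyticsClient = new TextAnalyticsClientBuilder()
    .credential(new DefaultAzureCredentialBuilder().build())
    .endpoint("{endpoint}")
    .buildClient();
```

DefaultAzureCredential tries environment, managed identity, and developer-tool credentials in turn, so the same code can work both locally and when deployed to Azure.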
See the client methods below to explore all features that the library provides.
Extract information
The Text Analytics client can use Natural Language Understanding (NLU) to extract information from unstructured text, for example key phrases or Personally Identifiable Information (PII). The samples below show how to use these features.
Key Phrases Extraction
The extractKeyPhrases
method can be used to extract key phrases, returning a list of strings denoting the key phrases in the document.
KeyPhrasesCollection extractedKeyPhrases =
textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.");
for (String keyPhrase : extractedKeyPhrases) {
System.out.printf("%s.%n", keyPhrase);
}
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
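The asynchronous client referenced throughout these notes is built from the same builder via buildAsyncClient(); its operations return reactive types instead of blocking. A minimal sketch, assuming the same {key} and {endpoint} placeholders:

```java
// Build the asynchronous client from the same builder.
TextAnalyticsAsyncClient textAnalyticsAsyncClient = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .buildAsyncClient();
// Async operations return reactive types; subscribe to consume the result.
textAnalyticsAsyncClient.extractKeyPhrases("My cat might need to see a veterinarian.")
    .subscribe(keyPhrases -> keyPhrases.forEach(
        keyPhrase -> System.out.printf("%s.%n", keyPhrase)));
```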
Named Entity Recognition (NER): Prebuilt Model
The recognizeEntities method can be used to recognize
entities, which returns a list of general categorized entities in the provided document.
CategorizedEntityCollection recognizeEntitiesResult =
textAnalyticsClient.recognizeEntities("Satya Nadella is the CEO of Microsoft");
for (CategorizedEntity entity : recognizeEntitiesResult) {
System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
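For multiple documents, the batch variant listed in the method summary, recognizeEntitiesBatch(Iterable&lt;String&gt;, String, TextAnalyticsRequestOptions), processes several documents in one call. A minimal sketch, assuming English documents and default request options (null):

```java
// Recognize entities across a batch of documents in a single request.
RecognizeEntitiesResultCollection resultCollection = textAnalyticsClient.recognizeEntitiesBatch(
    Arrays.asList("Satya Nadella is the CEO of Microsoft", "Elon Musk is the CEO of SpaceX"),
    "en", null);
for (RecognizeEntitiesResult documentResult : resultCollection) {
    System.out.println("Document ID: " + documentResult.getId());
    for (CategorizedEntity entity : documentResult.getEntities()) {
        System.out.printf("\tRecognized entity: %s, entity category: %s, confidence score: %f.%n",
            entity.getText(), entity.getCategory(), entity.getConfidenceScore());
    }
}
```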
Custom Named Entity Recognition (NER): Custom Model
The beginRecognizeCustomEntities(Iterable, String, String) method can be used to
recognize custom entities, returning a list of custom entities for the provided list of documents.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities.");
}
SyncPoller<RecognizeCustomEntitiesOperationDetail, RecognizeCustomEntitiesPagedIterable> syncPoller =
textAnalyticsClient.beginRecognizeCustomEntities(documents, "{project_name}", "{deployment_name}");
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (RecognizeEntitiesResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (CategorizedEntity entity : documentResult.getEntities()) {
System.out.printf(
"\tText: %s, category: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}
}
});
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Linked Entities Recognition
The recognizeLinkedEntities method can be used to
find linked entities, which returns a list of recognized entities with links to a well-known knowledge base for
the provided document.
String document = "Old Faithful is a geyser at Yellowstone Park.";
System.out.println("Linked Entities:");
textAnalyticsClient.recognizeLinkedEntities(document).forEach(linkedEntity -> {
System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
linkedEntity.getDataSource());
linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
"Matched entity: %s, confidence score: %f.%n",
entityMatch.getText(), entityMatch.getConfidenceScore()));
});
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Personally Identifiable Information (PII) Entities Recognition
The recognizePiiEntities
method can be used to recognize PII entities, returning a list of Personally Identifiable Information (PII)
entities in the provided document.
For a list of supported entity types, see the service documentation.
PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities("My SSN is 859-98-0987");
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
for (PiiEntity entity : piiEntityCollection) {
System.out.printf(
"Recognized Personally Identifiable Information entity: %s, entity category: %s,"
+ " entity subcategory: %s, confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore());
}
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Text Analytics for Health: Prebuilt Model
The beginAnalyzeHealthcareEntities method
can be used to analyze healthcare entities, entity data sources, and entity relations in a list of documents.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add("The patient is a 54-year-old gentleman with a history of progressive angina over "
+ "the past several months.");
}
SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable>
syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents);
syncPoller.waitForCompletion();
AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();
result.forEach(analyzeHealthcareEntitiesResultCollection -> {
analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
System.out.println("document id = " + healthcareEntitiesResult.getId());
System.out.println("Document entities: ");
AtomicInteger ct = new AtomicInteger();
healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
System.out.printf("\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
healthcareEntity.getConfidenceScore());
IterableStream<EntityDataSource> healthcareEntityDataSources =
healthcareEntity.getDataSources();
if (healthcareEntityDataSources != null) {
healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
"\t\tEntity ID in data source: %s, data source: %s.%n",
healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
}
});
// Healthcare entity relation groups
healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
entityRelation.getRoles().forEach(role -> {
final HealthcareEntity entity = role.getEntity();
System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
entity.getText(), entity.getCategory(), role.getName());
});
System.out.printf("\tRelation confidence score: %f.%n",
entityRelation.getConfidenceScore());
});
});
});
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Summarize text-based content: Document Summarization
The Text Analytics client can use Natural Language Understanding (NLU) to summarize lengthy documents, using either extractive or abstractive summarization. The samples below show how to use these features.
Extractive summarization
The beginExtractSummary
method returns a list of extractive summaries for the provided list of documents.
This method is supported since service API version TextAnalyticsServiceVersion.V2023_04_01.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
+ " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks.");
}
SyncPoller<ExtractiveSummaryOperationDetail, ExtractiveSummaryPagedIterable> syncPoller =
textAnalyticsClient.beginExtractSummary(documents);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(resultCollection -> {
for (ExtractiveSummaryResult documentResult : resultCollection) {
System.out.println("\tExtracted summary sentences:");
for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {
System.out.printf(
"\t\t Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),
extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());
}
}
});
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Abstractive summarization
The beginAbstractSummary
method returns a list of abstractive summaries for the provided list of documents.
This method is supported since service API version TextAnalyticsServiceVersion.V2023_04_01.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
+ " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
+ " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
+ "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
+ " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
+ " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
+ " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
+ " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
+ " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
+ " pretrained models that can jointly learn representations to support a broad range of downstream"
+ " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
+ " performance on benchmarks in conversational speech recognition, machine translation, "
+ "conversational question answering, machine reading comprehension, and image captioning. These"
+ " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
+ " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
+ "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
+ "foundational component of this aspiration, if grounded with external knowledge sources in "
+ "the downstream AI tasks.");
}
SyncPoller<AbstractiveSummaryOperationDetail, AbstractiveSummaryPagedIterable> syncPoller =
textAnalyticsClient.beginAbstractSummary(documents);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(resultCollection -> {
for (AbstractiveSummaryResult documentResult : resultCollection) {
System.out.println("\tAbstractive summary sentences:");
for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {
System.out.printf("\t\t offset: %d, length: %d%n",
abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());
}
}
}
});
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Classify Text
The Text Analytics client can use Natural Language Understanding (NLU) to detect the language of your text or classify it, for example via language detection, sentiment analysis, or custom text classification. The samples below show how to use these features.
Analyze Sentiment and Mine Text for Opinions
The analyzeSentiment(String, String, AnalyzeSentimentOptions)
method can be used to analyze sentiment on a given input text string, which returns a sentiment prediction,
as well as confidence scores for each sentiment label (Positive, Negative, and Neutral) for the document and each
sentence within it. If includeOpinionMining of AnalyzeSentimentOptions is set to true,
the output will include opinion mining results, which mine the opinions of each sentence and conduct more granular
analysis around the aspects in the text (also known as aspect-based sentiment analysis).
DocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment(
"The hotel was dark and unclean.", "en",
new AnalyzeSentimentOptions().setIncludeOpinionMining(true));
for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
sentenceSentiment.getOpinions().forEach(opinion -> {
TargetSentiment targetSentiment = opinion.getTarget();
System.out.printf("\tTarget sentiment: %s, target text: %s%n", targetSentiment.getSentiment(),
targetSentiment.getText());
for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
assessmentSentiment.getSentiment(), assessmentSentiment.getText(), assessmentSentiment.isNegated());
}
});
}
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Detect Language
The detectLanguage
method returns the detected language and a confidence score between zero and one. Scores close to one indicate 100%
certainty that the identified language is true.
This method uses the default country hint set in
TextAnalyticsClientBuilder.defaultCountryHint(String). If none is specified, the service uses 'US' as the
country hint.
DetectedLanguage detectedLanguage = textAnalyticsClient.detectLanguage("Bonjour tout le monde");
System.out.printf("Detected language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
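To override the default country hint for a single call, the detectLanguage(String document, String countryHint) overload from the method summary can be used. A minimal sketch, assuming a Spanish-language document and the ISO 3166-1 alpha-2 country code for Spain as the hint:

```java
// Pass an explicit country hint instead of relying on the builder default.
DetectedLanguage detected = textAnalyticsClient.detectLanguage(
    "Este es un documento escrito en español.", "es");
System.out.printf("Detected language name: %s, confidence score: %f.%n",
    detected.getName(), detected.getConfidenceScore());
```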
Single-Label Classification
The beginSingleLabelClassify
method returns a list of single-label classifications for the provided list of documents.
Note: this method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
+ "in oil and natural gas development on federal lands over the past six years has stretched the"
+ " staff of the BLM to a point that it has been unable to meet its environmental protection "
+ "responsibilities."
);
}
// See the service documentation for regional support and how to train a model to classify your documents,
// see https://aka.ms/azsdk/textanalytics/customfunctionalities
SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
textAnalyticsClient.beginSingleLabelClassify(documents, "{project_name}", "{deployment_name}");
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
});
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Multi-Label Classification
The beginMultiLabelClassify
method returns a list of multi-label classifications for the provided list of documents.
Note: this method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
documents.add(
"I need a reservation for an indoor restaurant in China. Please don't stop the music."
+ " Play music and add it to my playlist");
}
SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
textAnalyticsClient.beginMultiLabelClassify(documents, "{project_name}", "{deployment_name}");
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
System.out.printf("Project name: %s, deployment name: %s.%n",
documentsResults.getProjectName(), documentsResults.getDeploymentName());
for (ClassifyDocumentResult documentResult : documentsResults) {
System.out.println("Document ID: " + documentResult.getId());
for (ClassificationCategory classification : documentResult.getClassifications()) {
System.out.printf("\tCategory: %s, confidence score: %f.%n",
classification.getCategory(), classification.getConfidenceScore());
}
}
});
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Execute multiple actions
The beginAnalyzeActions method
executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a
list of documents.
List<String> documents = Arrays.asList(
"Elon Musk is the CEO of SpaceX and Tesla.",
"My SSN is 859-98-0987"
);
SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =
textAnalyticsClient.beginAnalyzeActions(
documents,
new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
.setRecognizeEntitiesActions(new RecognizeEntitiesAction())
.setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()));
syncPoller.waitForCompletion();
AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();
result.forEach(analyzeActionsResult -> {
System.out.println("Entities recognition action results:");
analyzeActionsResult.getRecognizeEntitiesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(
entitiesResult -> entitiesResult.getEntities().forEach(
entity -> System.out.printf(
"Recognized entity: %s, entity category: %s, entity subcategory: %s,"
+ " confidence score: %f.%n",
entity.getText(), entity.getCategory(), entity.getSubcategory(),
entity.getConfidenceScore())));
}
});
System.out.println("Key phrases extraction action results:");
analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
actionResult -> {
if (!actionResult.isError()) {
actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
System.out.println("Extracted phrases:");
extractKeyPhraseResult.getKeyPhrases()
.forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
});
}
});
});
See the service documentation for supported languages in the Text Analytics API.
Note: For an asynchronous sample, refer to TextAnalyticsAsyncClient.
Method Summary
- analyzeSentiment(String document): Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral), for the document and each sentence within it.
- analyzeSentiment(String document, String language): Returns a sentiment prediction, as well as confidence scores for each sentiment label, for the document and each sentence within it.
- analyzeSentiment(String document, String language, AnalyzeSentimentOptions options): Returns a sentiment prediction, as well as confidence scores for each sentiment label, for the document and each sentence within it.
- analyzeSentimentBatch(Iterable<String> documents, String language, AnalyzeSentimentOptions options): Returns a sentiment prediction, as well as confidence scores for each sentiment label, for each document and each sentence within it.
- analyzeSentimentBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options): Deprecated.
- com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, AnalyzeSentimentOptions options, com.azure.core.util.Context context): Returns a sentiment prediction, as well as confidence scores for each sentiment label, for each document and each sentence within it.
- com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Deprecated.
- com.azure.core.util.polling.SyncPoller<AbstractiveSummaryOperationDetail,AbstractiveSummaryPagedIterable> beginAbstractSummary(Iterable<TextDocumentInput> documents, AbstractiveSummaryOptions options, com.azure.core.util.Context context): Returns a list of abstractive summaries for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<AbstractiveSummaryOperationDetail,AbstractiveSummaryPagedIterable> beginAbstractSummary(Iterable<String> documents): Returns a list of abstractive summaries for the provided list of documents.
- com.azure.core.util.polling.SyncPoller<AbstractiveSummaryOperationDetail,AbstractiveSummaryPagedIterable> beginAbstractSummary(Iterable<String> documents, String language, AbstractiveSummaryOptions options): Returns a list of abstractive summaries for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<TextDocumentInput> documents, TextAnalyticsActions actions, AnalyzeActionsOptions options, com.azure.core.util.Context context): Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions): Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of documents.
- com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions, String language, AnalyzeActionsOptions options): Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<TextDocumentInput> documents, AnalyzeHealthcareEntitiesOptions options, com.azure.core.util.Context context): Analyzes healthcare entities, entity data sources, and entity relations in a list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<String> documents): Analyzes healthcare entities, entity data sources, and entity relations in a list of documents.
- com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<String> documents, String language, AnalyzeHealthcareEntitiesOptions options): Analyzes healthcare entities, entity data sources, and entity relations in a list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<ExtractiveSummaryOperationDetail,ExtractiveSummaryPagedIterable> beginExtractSummary(Iterable<TextDocumentInput> documents, ExtractiveSummaryOptions options, com.azure.core.util.Context context): Returns a list of extractive summaries for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<ExtractiveSummaryOperationDetail,ExtractiveSummaryPagedIterable> beginExtractSummary(Iterable<String> documents): Returns a list of extractive summaries for the provided list of documents.
- com.azure.core.util.polling.SyncPoller<ExtractiveSummaryOperationDetail,ExtractiveSummaryPagedIterable> beginExtractSummary(Iterable<String> documents, String language, ExtractiveSummaryOptions options): Returns a list of extractive summaries for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, MultiLabelClassifyOptions options, com.azure.core.util.Context context): Returns a list of multi-label classifications for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<String> documents, String projectName, String deploymentName): Returns a list of multi-label classifications for the provided list of documents.
- com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<String> documents, String projectName, String deploymentName, String language, MultiLabelClassifyOptions options): Returns a list of multi-label classifications for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, RecognizeCustomEntitiesOptions options, com.azure.core.util.Context context): Returns a list of custom entities for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<String> documents, String projectName, String deploymentName): Returns a list of custom entities for the provided list of documents.
- com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<String> documents, String projectName, String deploymentName, String language, RecognizeCustomEntitiesOptions options): Returns a list of custom entities for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, SingleLabelClassifyOptions options, com.azure.core.util.Context context): Returns a list of single-label classifications for the provided list of documents with provided request options.
- com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<String> documents, String projectName, String deploymentName): Returns a list of single-label classifications for the provided list of documents.
- com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<String> documents, String projectName, String deploymentName, String language, SingleLabelClassifyOptions options): Returns a list of single-label classifications for the provided list of documents with provided request options.
- detectLanguage(String document): Returns the detected language and a confidence score between zero and one.
- detectLanguage(String document, String countryHint): Returns the detected language and a confidence score between zero and one.
- detectLanguageBatch(Iterable<String> documents, String countryHint, TextAnalyticsRequestOptions options): Detects the language for a batch of documents with the provided country hint and request options.
- com.azure.core.http.rest.Response<DetectLanguageResultCollection> detectLanguageBatchWithResponse(Iterable<DetectLanguageInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Detects the language for a batch of documents with provided request options.
- extractKeyPhrases(String document): Returns a list of strings denoting the key phrases in the document.
- extractKeyPhrases(String document, String language): Returns a list of strings denoting the key phrases in the document.
- extractKeyPhrasesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options): Returns a list of strings denoting the key phrases in the documents with provided language code and request options.
- com.azure.core.http.rest.Response<ExtractKeyPhrasesResultCollection> extractKeyPhrasesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Returns a list of strings denoting the key phrases in a batch of documents with request options.
- getDefaultCountryHint(): Gets the default country hint code.
- getDefaultLanguage(): Gets the default language set when the builder was configured.
- recognizeEntities(String document): Returns a list of general categorized entities in the provided document.
- recognizeEntities(String document, String language): Returns a list of general categorized entities in the provided document with provided language code.
- recognizeEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options): Returns a list of general categorized entities for the provided list of documents with provided language code and request options.
- com.azure.core.http.rest.Response<RecognizeEntitiesResultCollection> recognizeEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Returns a list of general categorized entities for the provided list of documents with provided request options.
- recognizeLinkedEntities(String document): Returns a list of recognized entities with links to a well-known knowledge base for the provided document.
- recognizeLinkedEntities(String document, String language): Returns a list of recognized entities with links to a well-known knowledge base for the provided document with language code.
- recognizeLinkedEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options): Returns a list of recognized entities with links to a well-known knowledge base for the list of documents with provided language code and request options.
- com.azure.core.http.rest.Response<RecognizeLinkedEntitiesResultCollection> recognizeLinkedEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context): Returns a list of recognized entities with links to a well-known knowledge base for the list of documents with request options.
- recognizePiiEntities(String document): Returns a list of Personally Identifiable Information (PII) entities in the provided document.
- recognizePiiEntities(String document, String language): Returns a list of Personally Identifiable Information (PII) entities in the provided document with provided language code.
- recognizePiiEntities(String document, String language, RecognizePiiEntitiesOptions options): Returns a list of Personally Identifiable Information (PII) entities in the provided document with provided language code.
- recognizePiiEntitiesBatch(Iterable<String> documents, String language, RecognizePiiEntitiesOptions options): Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents with provided language code and request options.
- com.azure.core.http.rest.Response<RecognizePiiEntitiesResultCollection> recognizePiiEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, RecognizePiiEntitiesOptions options, com.azure.core.util.Context context): Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents with provided request options.
-
Method Details
-
getDefaultCountryHint
Gets the default country hint code.
- Returns:
- The default country hint code.
-
getDefaultLanguage
Gets the default language set when the builder was configured.
- Returns:
- The default language.
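A minimal configuration sketch for the two getters above, assuming a client built as in the Getting Started sample ({key} and {endpoint} are placeholders, not real values): the getters surface the values supplied to the builder's defaultCountryHint(String) and defaultLanguage(String) methods.

```java
TextAnalyticsClient textAnalyticsClient = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .defaultCountryHint("US")   // used by detectLanguage when no hint is passed
    .defaultLanguage("en")      // used by single-document methods when no language is passed
    .buildClient();

// Surface the defaults configured on the builder.
System.out.println(textAnalyticsClient.getDefaultCountryHint());
System.out.println(textAnalyticsClient.getDefaultLanguage());
```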
-
detectLanguage
Returns the detected language and a confidence score between zero and one. Scores close to one indicate 100% certainty that the identified language is correct. This method uses the default country hint set up in TextAnalyticsClientBuilder.defaultCountryHint(String). If none is specified, the service uses 'US' as the country hint.

Code Sample
Detects the language of a single document.

DetectedLanguage detectedLanguage = textAnalyticsClient.detectLanguage("Bonjour tout le monde");
System.out.printf("Detected language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
    detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());

- Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
- Returns:
- The detected language of the document.
- Throws:
NullPointerException - if document is null.
-
detectLanguage
Returns the detected language and a confidence score between zero and one. Scores close to one indicate 100% certainty that the identified language is correct.

Code Sample
Detects the language of a document with a provided country hint.

DetectedLanguage detectedLanguage = textAnalyticsClient.detectLanguage(
    "This text is in English", "US");
System.out.printf("Detected language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
    detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());

- Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
countryHint - Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Defaults to "US" if not specified. To remove this behavior, reset this parameter by setting the value to an empty string (countryHint = "") or to "none".
- Returns:
- The detected language of the document.
- Throws:
NullPointerException - if document is null.
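The countryHint and language parameters throughout this class expect ISO 3166-1 alpha-2 and ISO 639-1 codes, respectively. As a side note (java.util.Locale is a JDK facility, not part of this SDK), both codes can be derived from a BCP 47 language tag:

```java
import java.util.Locale;

public class IsoCodes {
    public static void main(String[] args) {
        // Parse a BCP 47 tag such as "es-ES" (Spanish, Spain).
        Locale locale = Locale.forLanguageTag("es-ES");

        // getLanguage() yields the two-letter ISO 639-1 code expected by the
        // `language` parameters in this class.
        System.out.println(locale.getLanguage()); // es

        // getCountry() yields the two-letter ISO 3166-1 alpha-2 code expected
        // by the `countryHint` parameter.
        System.out.println(locale.getCountry()); // ES
    }
}
```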
-
detectLanguageBatch
public DetectLanguageResultCollection detectLanguageBatch(Iterable<String> documents, String countryHint, TextAnalyticsRequestOptions options)
Detects the language for a batch of documents with the provided country hint and request options.

Code Sample
Detects the language in a list of documents with a provided country hint and request options.

List<String> documents = Arrays.asList(
    "This is written in English",
    "Este es un documento escrito en Español."
);
DetectLanguageResultCollection resultCollection =
    textAnalyticsClient.detectLanguageBatch(documents, "US", null);

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Batch result of languages
resultCollection.forEach(detectLanguageResult -> {
    System.out.printf("Document ID: %s%n", detectLanguageResult.getId());
    DetectedLanguage detectedLanguage = detectLanguageResult.getPrimaryLanguage();
    System.out.printf("Primary language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
        detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());
});

- Parameters:
documents - The list of documents to detect languages for. For text length limits, maximum batch size, and supported text encoding, see data limits.
countryHint - Accepts two-letter country codes specified by ISO 3166-1 alpha-2. Defaults to "US" if not specified. To remove this behavior, reset this parameter by setting the value to an empty string (countryHint = "") or to "none".
options - The options to configure the scoring model for documents and show statistics.
- Returns:
- A DetectLanguageResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
-
detectLanguageBatchWithResponse
public com.azure.core.http.rest.Response<DetectLanguageResultCollection> detectLanguageBatchWithResponse(Iterable<DetectLanguageInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
Detects the language for a batch of documents with the provided request options.

Code Sample
Detects the languages, with an HTTP response, in a list of documents with provided request options.

List<DetectLanguageInput> detectLanguageInputs = Arrays.asList(
    new DetectLanguageInput("1", "This is written in English.", "US"),
    new DetectLanguageInput("2", "Este es un documento escrito en Español.", "es")
);

Response<DetectLanguageResultCollection> response =
    textAnalyticsClient.detectLanguageBatchWithResponse(detectLanguageInputs,
        new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);

// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
DetectLanguageResultCollection detectedLanguageResultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = detectedLanguageResultCollection.getStatistics();
System.out.printf(
    "Documents statistics: document count = %d, erroneous document count = %d, transaction count = %d,"
        + " valid document count = %d.%n",
    batchStatistics.getDocumentCount(), batchStatistics.getInvalidDocumentCount(),
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Batch result of languages
detectedLanguageResultCollection.forEach(detectLanguageResult -> {
    System.out.printf("Document ID: %s%n", detectLanguageResult.getId());
    DetectedLanguage detectedLanguage = detectLanguageResult.getPrimaryLanguage();
    System.out.printf("Primary language name: %s, ISO 6391 name: %s, confidence score: %f.%n",
        detectedLanguage.getName(), detectedLanguage.getIso6391Name(), detectedLanguage.getConfidenceScore());
});

- Parameters:
documents - The list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The options to configure the scoring model for documents and show statistics.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A Response that contains a DetectLanguageResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
-
recognizeEntities
Returns a list of general categorized entities in the provided document. For a list of supported entity types, check this. This method uses the default language that can be set with TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

Code Sample
Recognizes the entities in a document.

CategorizedEntityCollection recognizeEntitiesResult =
    textAnalyticsClient.recognizeEntities("Satya Nadella is the CEO of Microsoft");
for (CategorizedEntity entity : recognizeEntitiesResult) {
    System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}

- Parameters:
document - The document to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
- Returns:
- A CategorizedEntityCollection that contains a list of recognized categorized entities and warnings.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
-
recognizeEntities
Returns a list of general categorized entities in the provided document with the provided language code. For a list of supported entity types, check this. For a list of enabled languages, check this.

Code Sample
Recognizes the entities in a document with a provided language code.

CategorizedEntityCollection recognizeEntitiesResult =
    textAnalyticsClient.recognizeEntities("Satya Nadella is the CEO of Microsoft", "en");
for (CategorizedEntity entity : recognizeEntitiesResult) {
    System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getConfidenceScore());
}

- Parameters:
document - The document to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as the default.
- Returns:
- The CategorizedEntityCollection that contains a list of recognized categorized entities and warnings.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
-
recognizeEntitiesBatch
public RecognizeEntitiesResultCollection recognizeEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
Returns a list of general categorized entities for the provided list of documents with the provided language code and request options.

Code Sample
Recognizes the entities in a list of documents with a provided language code and request options.

List<String> documents = Arrays.asList(
    "I had a wonderful trip to Seattle last week.",
    "I work at Microsoft.");
RecognizeEntitiesResultCollection resultCollection =
    textAnalyticsClient.recognizeEntitiesBatch(documents, "en", null);

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf(
    "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizeEntitiesResult ->
    recognizeEntitiesResult.getEntities().forEach(entity ->
        System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
            entity.getText(), entity.getCategory(), entity.getConfidenceScore())));

- Parameters:
documents - A list of documents to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as the default.
options - The options to configure the scoring model for documents and show statistics.
- Returns:
- A RecognizeEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
-
recognizeEntitiesBatchWithResponse
public com.azure.core.http.rest.Response<RecognizeEntitiesResultCollection> recognizeEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
Returns a list of general categorized entities for the provided list of documents with the provided request options.

Code Sample
Recognizes the entities, with an HTTP response, in a list of documents with provided request options.

List<TextDocumentInput> textDocumentInputs = Arrays.asList(
    new TextDocumentInput("0", "I had a wonderful trip to Seattle last week.").setLanguage("en"),
    new TextDocumentInput("1", "I work at Microsoft.").setLanguage("en")
);

Response<RecognizeEntitiesResultCollection> response =
    textAnalyticsClient.recognizeEntitiesBatchWithResponse(textDocumentInputs,
        new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);

// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
RecognizeEntitiesResultCollection recognizeEntitiesResultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = recognizeEntitiesResultCollection.getStatistics();
System.out.printf(
    "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

recognizeEntitiesResultCollection.forEach(recognizeEntitiesResult ->
    recognizeEntitiesResult.getEntities().forEach(entity ->
        System.out.printf("Recognized entity: %s, entity category: %s, confidence score: %f.%n",
            entity.getText(), entity.getCategory(), entity.getConfidenceScore())));

- Parameters:
documents - A list of documents to recognize entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The options to configure the scoring model for documents and show statistics.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A Response that contains a RecognizeEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
-
recognizePiiEntities
Returns a list of Personally Identifiable Information (PII) entities in the provided document. For a list of supported entity types, check this. For a list of enabled languages, check this. This method uses the default language that is set using TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

Code Sample
Recognizes the PII entity details in a document.

PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities("My SSN is 859-98-0987");
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
for (PiiEntity entity : piiEntityCollection) {
    System.out.printf(
        "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
            + " entity subcategory: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore());
}

- Parameters:
document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.
- Returns:
- A recognized PII entities collection.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
UnsupportedOperationException - if recognizePiiEntities is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntities is only available for API version v3.1 and newer.
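The redacted text in the sample above masks each character of a detected PII entity with '*'. The redaction itself is performed service-side; the following standalone sketch only illustrates the shape of the getRedactedText() output, using a hypothetical SSN-only pattern (the service detects many PII categories, not just this one):

```java
import java.util.regex.Pattern;

public class RedactionSketch {
    public static void main(String[] args) {
        String document = "My SSN is 859-98-0987";

        // Hypothetical SSN pattern, for illustration only.
        Pattern ssn = Pattern.compile("\\d{3}-\\d{2}-\\d{4}");

        // Replace every character of each match with '*', mirroring the shape
        // of PiiEntityCollection.getRedactedText().
        String redacted = ssn.matcher(document)
            .replaceAll(mr -> "*".repeat(mr.group().length()));

        System.out.println(redacted); // My SSN is ***********
    }
}
```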
-
recognizePiiEntities
Returns a list of Personally Identifiable Information (PII) entities in the provided document with the provided language code. For a list of supported entity types, check this. For a list of enabled languages, check this.

Code Sample
Recognizes the PII entity details in a document with a provided language code.

PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities(
    "My SSN is 859-98-0987", "en");
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
    "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
        + " entity subcategory: %s, confidence score: %f.%n",
    entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));

- Parameters:
document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as the default.
- Returns:
- The recognized PII entities collection.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
UnsupportedOperationException - if recognizePiiEntities is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntities is only available for API version v3.1 and newer.
-
recognizePiiEntities
public PiiEntityCollection recognizePiiEntities(String document, String language, RecognizePiiEntitiesOptions options)
Returns a list of Personally Identifiable Information (PII) entities in the provided document with the provided language code and options. For a list of supported entity types, check this. For a list of enabled languages, check this.

Code Sample
Recognizes the PII entity details in a document with a provided language code and RecognizePiiEntitiesOptions.

PiiEntityCollection piiEntityCollection = textAnalyticsClient.recognizePiiEntities(
    "My SSN is 859-98-0987", "en",
    new RecognizePiiEntitiesOptions().setDomainFilter(PiiEntityDomain.PROTECTED_HEALTH_INFORMATION));
System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
piiEntityCollection.forEach(entity -> System.out.printf(
    "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
        + " entity subcategory: %s, confidence score: %f.%n",
    entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));

- Parameters:
document - The document to recognize PII entity details for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as the default.
options - The additional configurable options that may be passed when recognizing PII entities.
- Returns:
- The recognized PII entities collection.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
UnsupportedOperationException - if recognizePiiEntities is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntities is only available for API version v3.1 and newer.
-
recognizePiiEntitiesBatch
public RecognizePiiEntitiesResultCollection recognizePiiEntitiesBatch(Iterable<String> documents, String language, RecognizePiiEntitiesOptions options)
Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents with the provided language code and request options.

Code Sample
Recognizes the PII entity details in a list of documents with a provided language code and request options.

List<String> documents = Arrays.asList(
    "My SSN is 859-98-0987",
    "Visa card 4111 1111 1111 1111"
);

RecognizePiiEntitiesResultCollection resultCollection = textAnalyticsClient.recognizePiiEntitiesBatch(
    documents, "en", new RecognizePiiEntitiesOptions().setIncludeStatistics(true));

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizePiiEntitiesResult -> {
    PiiEntityCollection piiEntityCollection = recognizePiiEntitiesResult.getEntities();
    System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
    piiEntityCollection.forEach(entity -> System.out.printf(
        "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
            + " entity subcategory: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});

- Parameters:
documents - A list of documents to recognize PII entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language. If not set, uses "en" for English as the default.
options - The additional configurable options that may be passed when recognizing PII entities.
- Returns:
- A RecognizePiiEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if recognizePiiEntitiesBatch is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntitiesBatch is only available for API version v3.1 and newer.
-
recognizePiiEntitiesBatchWithResponse
public com.azure.core.http.rest.Response<RecognizePiiEntitiesResultCollection> recognizePiiEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, RecognizePiiEntitiesOptions options, com.azure.core.util.Context context)
Returns a list of Personally Identifiable Information (PII) entities for the provided list of documents with the provided request options.

Code Sample
Recognizes the PII entity details, with an HTTP response, in a list of documents with provided request options.

List<TextDocumentInput> textDocumentInputs = Arrays.asList(
    new TextDocumentInput("0", "My SSN is 859-98-0987"),
    new TextDocumentInput("1", "Visa card 4111 1111 1111 1111")
);

Response<RecognizePiiEntitiesResultCollection> response =
    textAnalyticsClient.recognizePiiEntitiesBatchWithResponse(textDocumentInputs,
        new RecognizePiiEntitiesOptions().setIncludeStatistics(true), Context.NONE);

RecognizePiiEntitiesResultCollection resultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizePiiEntitiesResult -> {
    PiiEntityCollection piiEntityCollection = recognizePiiEntitiesResult.getEntities();
    System.out.printf("Redacted Text: %s%n", piiEntityCollection.getRedactedText());
    piiEntityCollection.forEach(entity -> System.out.printf(
        "Recognized Personally Identifiable Information entity: %s, entity category: %s,"
            + " entity subcategory: %s, confidence score: %f.%n",
        entity.getText(), entity.getCategory(), entity.getSubcategory(), entity.getConfidenceScore()));
});

- Parameters:
documents - A list of documents to recognize PII entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The additional configurable options that may be passed when recognizing PII entities.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A Response that contains a RecognizePiiEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if recognizePiiEntitiesBatchWithResponse is called with service API version TextAnalyticsServiceVersion.V3_0. recognizePiiEntitiesBatchWithResponse is only available for API version v3.1 and newer.
-
recognizeLinkedEntities
Returns a list of recognized entities with links to a well-known knowledge base for the provided document. See this for supported languages in the Text Analytics API. This method uses the default language that can be set with TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

Code Sample
Recognizes the linked entities in a document.

String document = "Old Faithful is a geyser at Yellowstone Park.";
System.out.println("Linked Entities:");
textAnalyticsClient.recognizeLinkedEntities(document).forEach(linkedEntity -> {
    System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
        linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
        linkedEntity.getDataSource());
    linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
        "Matched entity: %s, confidence score: %f.%n",
        entityMatch.getText(), entityMatch.getConfidenceScore()));
});

- Parameters:
document - The document to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
- Returns:
- A LinkedEntityCollection that contains a list of recognized linked entities.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
-
recognizeLinkedEntities
Returns a list of recognized entities with links to a well-known knowledge base for the provided document with the provided language code. See this for supported languages in the Text Analytics API.

Code Sample
Recognizes the linked entities in a document with a provided language code.

String document = "Old Faithful is a geyser at Yellowstone Park.";
textAnalyticsClient.recognizeLinkedEntities(document, "en").forEach(linkedEntity -> {
    System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
        linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
        linkedEntity.getDataSource());
    linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
        "Matched entity: %s, confidence score: %f.%n",
        entityMatch.getText(), entityMatch.getConfidenceScore()));
});

- Parameters:
document - The document to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language for the document. If not set, uses "en" for English as the default.
- Returns:
- A LinkedEntityCollection that contains a list of recognized linked entities.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
-
recognizeLinkedEntitiesBatch
public RecognizeLinkedEntitiesResultCollection recognizeLinkedEntitiesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)
Returns a list of recognized entities with links to a well-known knowledge base for the list of documents with the provided language code and request options. See this for supported languages in the Text Analytics API.

Code Sample
Recognizes the linked entities in a list of documents with a provided language code and request options.

List<String> documents = Arrays.asList(
    "Old Faithful is a geyser at Yellowstone Park.",
    "Mount Shasta has lenticular clouds."
);

RecognizeLinkedEntitiesResultCollection resultCollection =
    textAnalyticsClient.recognizeLinkedEntitiesBatch(documents, "en", null);

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizeLinkedEntitiesResult ->
    recognizeLinkedEntitiesResult.getEntities().forEach(linkedEntity -> {
        System.out.println("Linked Entities:");
        System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
            linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
            linkedEntity.getDataSource());
        linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
            "Matched entity: %s, confidence score: %f.%n",
            entityMatch.getText(), entityMatch.getConfidenceScore()));
    }));

- Parameters:
documents - A list of documents to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as the default.
options - The options to configure the scoring model for documents and show statistics.
- Returns:
- A RecognizeLinkedEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
-
recognizeLinkedEntitiesBatchWithResponse
public com.azure.core.http.rest.Response<RecognizeLinkedEntitiesResultCollection> recognizeLinkedEntitiesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)
Returns a list of recognized entities with links to a well-known knowledge base for the list of documents with the provided request options. See this for supported languages in the Text Analytics API.

Code Sample
Recognizes the linked entities, with an HTTP response, in a list of TextDocumentInput with request options.

List<TextDocumentInput> textDocumentInputs = Arrays.asList(
    new TextDocumentInput("1", "Old Faithful is a geyser at Yellowstone Park.").setLanguage("en"),
    new TextDocumentInput("2", "Mount Shasta has lenticular clouds.").setLanguage("en")
);

Response<RecognizeLinkedEntitiesResultCollection> response =
    textAnalyticsClient.recognizeLinkedEntitiesBatchWithResponse(textDocumentInputs,
        new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);

// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
RecognizeLinkedEntitiesResultCollection resultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf(
    "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

resultCollection.forEach(recognizeLinkedEntitiesResult ->
    recognizeLinkedEntitiesResult.getEntities().forEach(linkedEntity -> {
        System.out.println("Linked Entities:");
        System.out.printf("Name: %s, entity ID in data source: %s, URL: %s, data source: %s.%n",
            linkedEntity.getName(), linkedEntity.getDataSourceEntityId(), linkedEntity.getUrl(),
            linkedEntity.getDataSource());
        linkedEntity.getMatches().forEach(entityMatch -> System.out.printf(
            "Matched entity: %s, confidence score: %.2f.%n",
            entityMatch.getText(), entityMatch.getConfidenceScore()));
    }));

- Parameters:
documents - A list of documents to recognize linked entities for. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The options to configure the scoring model for documents and show statistics.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A Response that contains a RecognizeLinkedEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
-
extractKeyPhrases
Returns a list of strings denoting the key phrases in the document. This method uses the default language, which can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

Code Sample

Extracts the key phrases of a document.

KeyPhrasesCollection extractedKeyPhrases =
    textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.");
for (String keyPhrase : extractedKeyPhrases) {
    System.out.printf("%s.%n", keyPhrase);
}

- Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
- Returns:
- A KeyPhrasesCollection that contains a list of extracted key phrases.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
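Because this overload surfaces service-side document errors as a TextAnalyticsException, callers may want to handle that exception explicitly. The sketch below is illustrative only and is not part of the reference documentation; it assumes a textAnalyticsClient built as shown in the Getting Started section, and that TextAnalyticsException exposes the error code via getErrorCode().

```java
// Illustrative sketch: handle a service-side document error reported as TextAnalyticsException.
// Assumes `textAnalyticsClient` was constructed as in the Getting Started section.
try {
    KeyPhrasesCollection keyPhrases =
        textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.");
    keyPhrases.forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
} catch (TextAnalyticsException e) {
    // getErrorCode() identifies the service error category (e.g. an invalid document).
    System.out.printf("Service error %s: %s%n", e.getErrorCode(), e.getMessage());
}
```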
-
extractKeyPhrases
Returns a list of strings denoting the key phrases in the document. See this for the list of enabled languages.

Code Sample

Extracts key phrases in a document, with a provided language representation.

textAnalyticsClient.extractKeyPhrases("My cat might need to see a veterinarian.", "en")
    .forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));

- Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of the language for the document. If not set, uses "en" for English as the default.
- Returns:
- A KeyPhrasesCollection that contains a list of extracted key phrases.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
-
extractKeyPhrasesBatch
public ExtractKeyPhrasesResultCollection extractKeyPhrasesBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)

Returns a list of strings denoting the key phrases in the documents, with a provided language code and request options. See this for the list of enabled languages.

Code Sample

Extracts key phrases in a list of documents with a provided language code and request options.

List<String> documents = Arrays.asList(
    "My cat might need to see a veterinarian.",
    "The pitot tube is used to measure airspeed.");

// Extracting batch key phrases
ExtractKeyPhrasesResultCollection resultCollection =
    textAnalyticsClient.extractKeyPhrasesBatch(documents, "en", null);

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf(
    "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Extracted key phrases for each document in the batch
resultCollection.forEach(extractKeyPhraseResult -> {
    System.out.printf("Document ID: %s%n", extractKeyPhraseResult.getId());
    // Valid document
    System.out.println("Extracted phrases:");
    extractKeyPhraseResult.getKeyPhrases().forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
});

- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for English as the default.
options - The options to configure the scoring model for documents and show statistics.
- Returns:
- An ExtractKeyPhrasesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
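The UnsupportedOperationException above is only thrown when the client targets service API version v3.0. As a hedged sketch, the builder can be pinned to a newer version so that v3.1-only options such as disableServiceLogs are accepted; this assumes the TextAnalyticsClientBuilder.serviceVersion method and the TextAnalyticsServiceVersion.V3_1 constant, and reuses the endpoint/key placeholders from Getting Started.

```java
// Illustrative sketch: pin the client to service API version v3.1 so that options
// restricted to v3.1 and newer (e.g. disableServiceLogs) are supported.
// {key} and {endpoint} are placeholders, as in the Getting Started sample.
TextAnalyticsClient client = new TextAnalyticsClientBuilder()
    .credential(new AzureKeyCredential("{key}"))
    .endpoint("{endpoint}")
    .serviceVersion(TextAnalyticsServiceVersion.V3_1)
    .buildClient();
```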
-
extractKeyPhrasesBatchWithResponse
public com.azure.core.http.rest.Response<ExtractKeyPhrasesResultCollection> extractKeyPhrasesBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)

Returns a list of strings denoting the key phrases in a batch of documents, with request options. See this for the list of enabled languages.

Code Sample

Extracts key phrases, with an HTTP response, in a list of TextDocumentInput with request options.

List<TextDocumentInput> textDocumentInputs = Arrays.asList(
    new TextDocumentInput("1", "My cat might need to see a veterinarian.").setLanguage("en"),
    new TextDocumentInput("2", "The pitot tube is used to measure airspeed.").setLanguage("en"));

// Extracting batch key phrases
Response<ExtractKeyPhrasesResultCollection> response =
    textAnalyticsClient.extractKeyPhrasesBatchWithResponse(textDocumentInputs,
        new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);

// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
ExtractKeyPhrasesResultCollection resultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf(
    "A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Extracted key phrases for each document in the batch
resultCollection.forEach(extractKeyPhraseResult -> {
    System.out.printf("Document ID: %s%n", extractKeyPhraseResult.getId());
    // Valid document
    System.out.println("Extracted phrases:");
    extractKeyPhraseResult.getKeyPhrases().forEach(keyPhrase -> System.out.printf("%s.%n", keyPhrase));
});

- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The options to configure the scoring model for documents and show statistics.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A Response that contains an ExtractKeyPhrasesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs is only available for API version v3.1 and newer.
-
analyzeSentiment
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral), for the document and each sentence within it. This method uses the default language, which can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

Code Sample

Analyzes the sentiment of a document.

DocumentSentiment documentSentiment =
    textAnalyticsClient.analyzeSentiment("The hotel was dark and unclean.");

System.out.printf(
    "Recognized sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n",
    documentSentiment.getSentiment(),
    documentSentiment.getConfidenceScores().getPositive(),
    documentSentiment.getConfidenceScores().getNeutral(),
    documentSentiment.getConfidenceScores().getNegative());

for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
    System.out.printf(
        "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n",
        sentenceSentiment.getSentiment(),
        sentenceSentiment.getConfidenceScores().getPositive(),
        sentenceSentiment.getConfidenceScores().getNeutral(),
        sentenceSentiment.getConfidenceScores().getNegative());
}

- Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
- Returns:
- An analyzed document sentiment of the document.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
-
analyzeSentiment
Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral), for the document and each sentence within it.

Code Sample

Analyzes the sentiment in a document, with a provided language representation.

DocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment(
    "The hotel was dark and unclean.", "en");

System.out.printf(
    "Recognized sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n",
    documentSentiment.getSentiment(),
    documentSentiment.getConfidenceScores().getPositive(),
    documentSentiment.getConfidenceScores().getNeutral(),
    documentSentiment.getConfidenceScores().getNegative());

for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
    System.out.printf(
        "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f, negative score: %.2f.%n",
        sentenceSentiment.getSentiment(),
        sentenceSentiment.getConfidenceScores().getPositive(),
        sentenceSentiment.getConfidenceScores().getNeutral(),
        sentenceSentiment.getConfidenceScores().getNegative());
}

- Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of the language for the document. If not set, uses "en" for English as the default.
- Returns:
- An analyzed document sentiment of the document.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
-
analyzeSentiment
public DocumentSentiment analyzeSentiment(String document, String language, AnalyzeSentimentOptions options)

Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral), for the document and each sentence within it. If the includeOpinionMining option of AnalyzeSentimentOptions is set to true, the output will include the opinion mining results. It mines the opinions of a sentence and conducts more granular analysis around the aspects in the text (also known as aspect-based sentiment analysis).

Code Sample

Analyzes the sentiment and mines the opinions for each sentence in a document, with a provided language representation and AnalyzeSentimentOptions options.

DocumentSentiment documentSentiment = textAnalyticsClient.analyzeSentiment(
    "The hotel was dark and unclean.", "en",
    new AnalyzeSentimentOptions().setIncludeOpinionMining(true));

for (SentenceSentiment sentenceSentiment : documentSentiment.getSentences()) {
    System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
    sentenceSentiment.getOpinions().forEach(opinion -> {
        TargetSentiment targetSentiment = opinion.getTarget();
        System.out.printf("\tTarget sentiment: %s, target text: %s%n",
            targetSentiment.getSentiment(), targetSentiment.getText());
        for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
            System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
                assessmentSentiment.getSentiment(), assessmentSentiment.getText(),
                assessmentSentiment.isNegated());
        }
    });
}

- Parameters:
document - The document to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of the language for the document. If not set, uses "en" for English as the default.
options - The additional configurable options that may be passed when analyzing sentiments.
- Returns:
- An analyzed document sentiment of the document.
- Throws:
NullPointerException - if document is null.
TextAnalyticsException - if the response is returned with an error.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() or AnalyzeSentimentOptions.isIncludeOpinionMining() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs and includeOpinionMining are only available for API version v3.1 and newer.
-
analyzeSentimentBatch
@Deprecated
public AnalyzeSentimentResultCollection analyzeSentimentBatch(Iterable<String> documents, String language, TextAnalyticsRequestOptions options)

Deprecated. Please use analyzeSentimentBatch(Iterable, String, AnalyzeSentimentOptions) instead.

Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral), for the document and each sentence within it.

Code Sample

Analyzes the sentiments in a list of documents, with a provided language representation and request options.

List<String> documents = Arrays.asList(
    "The hotel was dark and unclean. The restaurant had amazing gnocchi.",
    "The restaurant had amazing gnocchi. The hotel was dark and unclean.");

// Analyzing batch sentiments
AnalyzeSentimentResultCollection resultCollection = textAnalyticsClient.analyzeSentimentBatch(
    documents, "en", new TextAnalyticsRequestOptions().setIncludeStatistics(true));

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Analyzed sentiment for each document in the batch
resultCollection.forEach(analyzeSentimentResult -> {
    System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
    // Valid document
    DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
    System.out.printf(
        "Recognized document sentiment: %s, positive score: %.2f, neutral score: %.2f,"
            + " negative score: %.2f.%n",
        documentSentiment.getSentiment(),
        documentSentiment.getConfidenceScores().getPositive(),
        documentSentiment.getConfidenceScores().getNeutral(),
        documentSentiment.getConfidenceScores().getNegative());
    documentSentiment.getSentences().forEach(sentenceSentiment -> System.out.printf(
        "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f,"
            + " negative score: %.2f.%n",
        sentenceSentiment.getSentiment(),
        sentenceSentiment.getConfidenceScores().getPositive(),
        sentenceSentiment.getConfidenceScores().getNeutral(),
        sentenceSentiment.getConfidenceScores().getNegative()));
});

- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for English as the default.
options - The options to configure the scoring model for documents and show statistics.
- Returns:
- An AnalyzeSentimentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
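Since this overload is deprecated in favor of the AnalyzeSentimentOptions variant, a hedged migration sketch follows. It assumes, as the replacement overload's documentation suggests, that AnalyzeSentimentOptions also exposes setIncludeStatistics, so the statistics flag simply moves from TextAnalyticsRequestOptions onto the new options type.

```java
// Illustrative migration from the deprecated TextAnalyticsRequestOptions overload.
// `documents` is the same List<String> as in the sample above; AnalyzeSentimentOptions
// carries the statistics flag and additionally supports sentiment-specific options
// such as setIncludeOpinionMining.
AnalyzeSentimentResultCollection resultCollection = textAnalyticsClient.analyzeSentimentBatch(
    documents, "en", new AnalyzeSentimentOptions().setIncludeStatistics(true));
```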
-
analyzeSentimentBatch
public AnalyzeSentimentResultCollection analyzeSentimentBatch(Iterable<String> documents, String language, AnalyzeSentimentOptions options)

Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral), for the document and each sentence within it. If the includeOpinionMining option of AnalyzeSentimentOptions is set to true, the output will include the opinion mining results. It mines the opinions of a sentence and conducts more granular analysis around the aspects in the text (also known as aspect-based sentiment analysis).

Code Sample

Analyzes the sentiments and mines the opinions for each sentence in a list of documents, with a provided language representation and AnalyzeSentimentOptions options.

List<String> documents = Arrays.asList(
    "The hotel was dark and unclean. The restaurant had amazing gnocchi.",
    "The restaurant had amazing gnocchi. The hotel was dark and unclean.");

// Analyzing batch sentiments
AnalyzeSentimentResultCollection resultCollection = textAnalyticsClient.analyzeSentimentBatch(
    documents, "en", new AnalyzeSentimentOptions().setIncludeOpinionMining(true));

// Analyzed sentiment for each document in the batch
resultCollection.forEach(analyzeSentimentResult -> {
    System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
    DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
    documentSentiment.getSentences().forEach(sentenceSentiment -> {
        System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
        sentenceSentiment.getOpinions().forEach(opinion -> {
            TargetSentiment targetSentiment = opinion.getTarget();
            System.out.printf("\tTarget sentiment: %s, target text: %s%n",
                targetSentiment.getSentiment(), targetSentiment.getText());
            for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
                System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
                    assessmentSentiment.getSentiment(), assessmentSentiment.getText(),
                    assessmentSentiment.isNegated());
            }
        });
    });
});

- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for English as the default.
options - The additional configurable options that may be passed when analyzing sentiments.
- Returns:
- An AnalyzeSentimentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() or AnalyzeSentimentOptions.isIncludeOpinionMining() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs and includeOpinionMining are only available for API version v3.1 and newer.
-
analyzeSentimentBatchWithResponse
@Deprecated
public com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, TextAnalyticsRequestOptions options, com.azure.core.util.Context context)

Deprecated. Please use analyzeSentimentBatchWithResponse(Iterable, AnalyzeSentimentOptions, Context) instead.

Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral), for the document and each sentence within it.

Code Sample

Analyzes sentiment in a list of TextDocumentInput with provided request options.

List<TextDocumentInput> textDocumentInputs = Arrays.asList(
    new TextDocumentInput("1", "The hotel was dark and unclean. The restaurant had amazing gnocchi.")
        .setLanguage("en"),
    new TextDocumentInput("2", "The restaurant had amazing gnocchi. The hotel was dark and unclean.")
        .setLanguage("en"));

// Analyzing batch sentiments
Response<AnalyzeSentimentResultCollection> response =
    textAnalyticsClient.analyzeSentimentBatchWithResponse(textDocumentInputs,
        new TextAnalyticsRequestOptions().setIncludeStatistics(true), Context.NONE);

// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
AnalyzeSentimentResultCollection resultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Analyzed sentiment for each document in the batch
resultCollection.forEach(analyzeSentimentResult -> {
    System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
    // Valid document
    DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
    System.out.printf(
        "Recognized document sentiment: %s, positive score: %.2f, neutral score: %.2f, "
            + "negative score: %.2f.%n",
        documentSentiment.getSentiment(),
        documentSentiment.getConfidenceScores().getPositive(),
        documentSentiment.getConfidenceScores().getNeutral(),
        documentSentiment.getConfidenceScores().getNegative());
    documentSentiment.getSentences().forEach(sentenceSentiment -> {
        System.out.printf(
            "Recognized sentence sentiment: %s, positive score: %.2f, neutral score: %.2f,"
                + " negative score: %.2f.%n",
            sentenceSentiment.getSentiment(),
            sentenceSentiment.getConfidenceScores().getPositive(),
            sentenceSentiment.getConfidenceScores().getNeutral(),
            sentenceSentiment.getConfidenceScores().getNegative());
    });
});

- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The options to configure the scoring model for documents and show statistics.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A Response that contains an AnalyzeSentimentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
-
analyzeSentimentBatchWithResponse
public com.azure.core.http.rest.Response<AnalyzeSentimentResultCollection> analyzeSentimentBatchWithResponse(Iterable<TextDocumentInput> documents, AnalyzeSentimentOptions options, com.azure.core.util.Context context)

Returns a sentiment prediction, as well as confidence scores for each sentiment label (Positive, Negative, and Neutral), for the document and each sentence within it. If the includeOpinionMining option of AnalyzeSentimentOptions is set to true, the output will include the opinion mining results. It mines the opinions of a sentence and conducts more granular analysis around the aspects in the text (also known as aspect-based sentiment analysis).

Code Sample

Analyzes sentiment and mines the opinions for each sentence in a list of TextDocumentInput, with provided AnalyzeSentimentOptions options.

List<TextDocumentInput> textDocumentInputs = Arrays.asList(
    new TextDocumentInput("1", "The hotel was dark and unclean. The restaurant had amazing gnocchi.")
        .setLanguage("en"),
    new TextDocumentInput("2", "The restaurant had amazing gnocchi. The hotel was dark and unclean.")
        .setLanguage("en"));

AnalyzeSentimentOptions options = new AnalyzeSentimentOptions().setIncludeOpinionMining(true)
    .setIncludeStatistics(true);

// Analyzing batch sentiments
Response<AnalyzeSentimentResultCollection> response =
    textAnalyticsClient.analyzeSentimentBatchWithResponse(textDocumentInputs, options, Context.NONE);

// Response's status code
System.out.printf("Status code of request response: %d%n", response.getStatusCode());
AnalyzeSentimentResultCollection resultCollection = response.getValue();

// Batch statistics
TextDocumentBatchStatistics batchStatistics = resultCollection.getStatistics();
System.out.printf("A batch of documents statistics, transaction count: %s, valid document count: %s.%n",
    batchStatistics.getTransactionCount(), batchStatistics.getValidDocumentCount());

// Analyzed sentiment for each document in the batch
resultCollection.forEach(analyzeSentimentResult -> {
    System.out.printf("Document ID: %s%n", analyzeSentimentResult.getId());
    DocumentSentiment documentSentiment = analyzeSentimentResult.getDocumentSentiment();
    documentSentiment.getSentences().forEach(sentenceSentiment -> {
        System.out.printf("\tSentence sentiment: %s%n", sentenceSentiment.getSentiment());
        sentenceSentiment.getOpinions().forEach(opinion -> {
            TargetSentiment targetSentiment = opinion.getTarget();
            System.out.printf("\tTarget sentiment: %s, target text: %s%n",
                targetSentiment.getSentiment(), targetSentiment.getText());
            for (AssessmentSentiment assessmentSentiment : opinion.getAssessments()) {
                System.out.printf("\t\t'%s' sentiment because of \"%s\". Is the assessment negated: %s.%n",
                    assessmentSentiment.getSentiment(), assessmentSentiment.getText(),
                    assessmentSentiment.isNegated());
            }
        });
    });
});

- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The additional configurable options that may be passed when analyzing sentiments.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A Response that contains an AnalyzeSentimentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if TextAnalyticsRequestOptions.isServiceLogsDisabled() or AnalyzeSentimentOptions.isIncludeOpinionMining() is true in service API version TextAnalyticsServiceVersion.V3_0. disableServiceLogs and includeOpinionMining are only available for API version v3.1 and newer.
-
beginAnalyzeHealthcareEntities
public com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<String> documents)

Analyzes healthcare entities, entity data sources, and entity relations in a list of documents. This method uses the default language, which can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service uses 'en' as the language.

Code Sample

List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add("The patient is a 54-year-old gentleman with a history of progressive angina over "
        + "the past several months.");
}

SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable>
    syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents);
syncPoller.waitForCompletion();
AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();

result.forEach(analyzeHealthcareEntitiesResultCollection -> {
    analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
        System.out.println("document id = " + healthcareEntitiesResult.getId());
        System.out.println("Document entities: ");
        AtomicInteger ct = new AtomicInteger();
        healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
            System.out.printf("\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
                ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
                healthcareEntity.getConfidenceScore());
            IterableStream<EntityDataSource> healthcareEntityDataSources =
                healthcareEntity.getDataSources();
            if (healthcareEntityDataSources != null) {
                healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
                    "\t\tEntity ID in data source: %s, data source: %s.%n",
                    healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
            }
        });
        // Healthcare entity relation groups
        healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
            System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
            entityRelation.getRoles().forEach(role -> {
                final HealthcareEntity entity = role.getEntity();
                System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
                    entity.getText(), entity.getCategory(), role.getName());
            });
            System.out.printf("\tRelation confidence score: %f.%n",
                entityRelation.getConfidenceScore());
        });
    });
});

- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
- Returns:
- A SyncPoller that polls the analyze healthcare operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AnalyzeHealthcareEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAnalyzeHealthcareEntities is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeHealthcareEntities is only available for API version v3.1 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginAnalyzeHealthcareEntities
public com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<String> documents, String language, AnalyzeHealthcareEntitiesOptions options)

Analyzes healthcare entities, entity data sources, and entity relations in a list of documents, with provided request options. See this for supported languages in the Language service API.

Code Sample

List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add("The patient is a 54-year-old gentleman with a history of progressive angina over "
        + "the past several months.");
}

// Request options: show statistics and model version
AnalyzeHealthcareEntitiesOptions options = new AnalyzeHealthcareEntitiesOptions()
    .setIncludeStatistics(true);

SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable>
    syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents, "en", options);
syncPoller.waitForCompletion();
AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();

result.forEach(analyzeHealthcareEntitiesResultCollection -> {
    // Model version
    System.out.printf("Results of Azure Text Analytics \"Analyze Healthcare\" Model, version: %s%n",
        analyzeHealthcareEntitiesResultCollection.getModelVersion());

    TextDocumentBatchStatistics healthcareTaskStatistics =
        analyzeHealthcareEntitiesResultCollection.getStatistics();
    // Batch statistics
    System.out.printf("Documents statistics: document count = %d, erroneous document count = %d,"
            + " transaction count = %d, valid document count = %d.%n",
        healthcareTaskStatistics.getDocumentCount(), healthcareTaskStatistics.getInvalidDocumentCount(),
        healthcareTaskStatistics.getTransactionCount(), healthcareTaskStatistics.getValidDocumentCount());

    analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
        System.out.println("document id = " + healthcareEntitiesResult.getId());
        System.out.println("Document entities: ");
        AtomicInteger ct = new AtomicInteger();
        healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
            System.out.printf("\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
                ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
                healthcareEntity.getConfidenceScore());
            IterableStream<EntityDataSource> healthcareEntityDataSources =
                healthcareEntity.getDataSources();
            if (healthcareEntityDataSources != null) {
                healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
                    "\t\tEntity ID in data source: %s, data source: %s.%n",
                    healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
            }
        });
        // Healthcare entity relation groups
        healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
            System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
            entityRelation.getRoles().forEach(role -> {
                final HealthcareEntity entity = role.getEntity();
                System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
                    entity.getText(), entity.getCategory(), role.getName());
            });
            System.out.printf("\tRelation confidence score: %f.%n",
                entityRelation.getConfidenceScore());
        });
    });
});

- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, uses "en" for English as the default.
options - The additional configurable options that may be passed when analyzing healthcare entities.
- Returns:
- A SyncPoller that polls the analyze healthcare operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AnalyzeHealthcareEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAnalyzeHealthcareEntities is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeHealthcareEntities is only available for API version v3.1 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginAnalyzeHealthcareEntities
public com.azure.core.util.polling.SyncPoller<AnalyzeHealthcareEntitiesOperationDetail,AnalyzeHealthcareEntitiesPagedIterable> beginAnalyzeHealthcareEntities(Iterable<TextDocumentInput> documents, AnalyzeHealthcareEntitiesOptions options, com.azure.core.util.Context context) Analyze healthcare entities, entity data sources, and entity relations in a list ofdocumentand provided request options to show statistics. See this supported languages in Language service API.Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(new TextDocumentInput(Integer.toString(i),
        "The patient is a 54-year-old gentleman with a history of progressive angina over "
            + "the past several months."));
}

// Request options: show statistics and model version
AnalyzeHealthcareEntitiesOptions options = new AnalyzeHealthcareEntitiesOptions()
    .setIncludeStatistics(true);

SyncPoller<AnalyzeHealthcareEntitiesOperationDetail, AnalyzeHealthcareEntitiesPagedIterable>
    syncPoller = textAnalyticsClient.beginAnalyzeHealthcareEntities(documents, options, Context.NONE);

syncPoller.waitForCompletion();
AnalyzeHealthcareEntitiesPagedIterable result = syncPoller.getFinalResult();

// Task operation statistics
AnalyzeHealthcareEntitiesOperationDetail operationResult = syncPoller.poll().getValue();
System.out.printf("Operation created time: %s, expiration time: %s.%n",
    operationResult.getCreatedAt(), operationResult.getExpiresAt());

result.forEach(analyzeHealthcareEntitiesResultCollection -> {
    // Model version
    System.out.printf("Results of Azure Text Analytics \"Analyze Healthcare\" Model, version: %s%n",
        analyzeHealthcareEntitiesResultCollection.getModelVersion());

    TextDocumentBatchStatistics healthcareTaskStatistics =
        analyzeHealthcareEntitiesResultCollection.getStatistics();
    // Batch statistics
    System.out.printf("Documents statistics: document count = %d, erroneous document count = %d,"
            + " transaction count = %d, valid document count = %d.%n",
        healthcareTaskStatistics.getDocumentCount(), healthcareTaskStatistics.getInvalidDocumentCount(),
        healthcareTaskStatistics.getTransactionCount(), healthcareTaskStatistics.getValidDocumentCount());

    analyzeHealthcareEntitiesResultCollection.forEach(healthcareEntitiesResult -> {
        System.out.println("document id = " + healthcareEntitiesResult.getId());
        System.out.println("Document entities: ");
        AtomicInteger ct = new AtomicInteger();
        healthcareEntitiesResult.getEntities().forEach(healthcareEntity -> {
            System.out.printf("\ti = %d, Text: %s, category: %s, confidence score: %f.%n",
                ct.getAndIncrement(), healthcareEntity.getText(), healthcareEntity.getCategory(),
                healthcareEntity.getConfidenceScore());

            IterableStream<EntityDataSource> healthcareEntityDataSources =
                healthcareEntity.getDataSources();
            if (healthcareEntityDataSources != null) {
                healthcareEntityDataSources.forEach(healthcareEntityLink -> System.out.printf(
                    "\t\tEntity ID in data source: %s, data source: %s.%n",
                    healthcareEntityLink.getEntityId(), healthcareEntityLink.getName()));
            }
        });
        // Healthcare entity relation groups
        healthcareEntitiesResult.getEntityRelations().forEach(entityRelation -> {
            System.out.printf("\tRelation type: %s.%n", entityRelation.getRelationType());
            entityRelation.getRoles().forEach(role -> {
                final HealthcareEntity entity = role.getEntity();
                System.out.printf("\t\tEntity text: %s, category: %s, role: %s.%n",
                    entity.getText(), entity.getCategory(), role.getName());
            });
            System.out.printf("\tRelation confidence score: %f.%n",
                entityRelation.getConfidenceScore());
        });
    });
});
- Parameters:
documents - A list of documents to be analyzed.
options - The additional configurable options that may be passed when analyzing healthcare entities.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A SyncPoller that polls the analyze healthcare entities operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AnalyzeHealthcareEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAnalyzeHealthcareEntities is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeHealthcareEntities is only available for API version v3.1 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginRecognizeCustomEntities
public com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<String> documents, String projectName, String deploymentName)
Returns a list of custom entities for the provided list of documents.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
This method will use the default language that can be set by using method TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service will use 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
            + "in oil and natural gas development on federal lands over the past six years has stretched the"
            + " staff of the BLM to a point that it has been unable to meet its environmental protection "
            + "responsibilities.");
}
SyncPoller<RecognizeCustomEntitiesOperationDetail, RecognizeCustomEntitiesPagedIterable> syncPoller =
    textAnalyticsClient.beginRecognizeCustomEntities(documents, "{project_name}", "{deployment_name}");
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (RecognizeEntitiesResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (CategorizedEntity entity : documentResult.getEntities()) {
            System.out.printf(
                "\tText: %s, category: %s, confidence score: %f.%n",
                entity.getText(), entity.getCategory(), entity.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
- Returns:
- A SyncPoller that polls the recognize custom entities operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of RecognizeCustomEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginRecognizeCustomEntities is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginRecognizeCustomEntities is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginRecognizeCustomEntities
public com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<String> documents, String projectName, String deploymentName, String language, RecognizeCustomEntitiesOptions options)
Returns a list of custom entities for the provided list of documents with the provided request options.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
            + "in oil and natural gas development on federal lands over the past six years has stretched the"
            + " staff of the BLM to a point that it has been unable to meet its environmental protection "
            + "responsibilities.");
}
RecognizeCustomEntitiesOptions options = new RecognizeCustomEntitiesOptions().setIncludeStatistics(true);
SyncPoller<RecognizeCustomEntitiesOperationDetail, RecognizeCustomEntitiesPagedIterable> syncPoller =
    textAnalyticsClient.beginRecognizeCustomEntities(documents, "{project_name}",
        "{deployment_name}", "en", options);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (RecognizeEntitiesResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (CategorizedEntity entity : documentResult.getEntities()) {
            System.out.printf(
                "\tText: %s, category: %s, confidence score: %f.%n",
                entity.getText(), entity.getCategory(), entity.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, "en" (English) is used as the default.
options - The additional configurable options that may be passed when recognizing custom entities.
- Returns:
- A SyncPoller that polls the recognize custom entities operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of RecognizeCustomEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginRecognizeCustomEntities is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginRecognizeCustomEntities is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginRecognizeCustomEntities
public com.azure.core.util.polling.SyncPoller<RecognizeCustomEntitiesOperationDetail,RecognizeCustomEntitiesPagedIterable> beginRecognizeCustomEntities(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, RecognizeCustomEntitiesOptions options, com.azure.core.util.Context context)
Returns a list of custom entities for the provided list of documents with the provided request options.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
See this for supported languages in the Language service API.
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(new TextDocumentInput(Integer.toString(i),
        "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
            + "in oil and natural gas development on federal lands over the past six years has stretched the"
            + " staff of the BLM to a point that it has been unable to meet its environmental protection "
            + "responsibilities."));
}
RecognizeCustomEntitiesOptions options = new RecognizeCustomEntitiesOptions().setIncludeStatistics(true);
SyncPoller<RecognizeCustomEntitiesOperationDetail, RecognizeCustomEntitiesPagedIterable> syncPoller =
    textAnalyticsClient.beginRecognizeCustomEntities(documents, "{project_name}",
        "{deployment_name}", options, Context.NONE);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (RecognizeEntitiesResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (CategorizedEntity entity : documentResult.getEntities()) {
            System.out.printf(
                "\tText: %s, category: %s, confidence score: %f.%n",
                entity.getText(), entity.getCategory(), entity.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
options - The additional configurable options that may be passed when recognizing custom entities.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A SyncPoller that polls the recognize custom entities operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of RecognizeCustomEntitiesResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginRecognizeCustomEntities is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginRecognizeCustomEntities is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginSingleLabelClassify
public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<String> documents, String projectName, String deploymentName)
Returns a list of single-label classifications for the provided list of documents.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
This method will use the default language that can be set by using method TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service will use 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
            + "in oil and natural gas development on federal lands over the past six years has stretched the"
            + " staff of the BLM to a point that it has been unable to meet its environmental protection "
            + "responsibilities."
    );
}
// See the service documentation for regional support and how to train a model to classify your documents,
// see https://aka.ms/azsdk/textanalytics/customfunctionalities
SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
    textAnalyticsClient.beginSingleLabelClassify(documents, "{project_name}", "{deployment_name}");
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (ClassifyDocumentResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (ClassificationCategory classification : documentResult.getClassifications()) {
            System.out.printf("\tCategory: %s, confidence score: %f.%n",
                classification.getCategory(), classification.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
- Returns:
- A SyncPoller that polls the single-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginSingleLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginSingleLabelClassify is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginSingleLabelClassify
public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<String> documents, String projectName, String deploymentName, String language, SingleLabelClassifyOptions options)
Returns a list of single-label classifications for the provided list of documents with the provided request options.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
            + "in oil and natural gas development on federal lands over the past six years has stretched the"
            + " staff of the BLM to a point that it has been unable to meet its environmental protection "
            + "responsibilities."
    );
}
SingleLabelClassifyOptions options = new SingleLabelClassifyOptions().setIncludeStatistics(true);
// See the service documentation for regional support and how to train a model to classify your documents,
// see https://aka.ms/azsdk/textanalytics/customfunctionalities
SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
    textAnalyticsClient.beginSingleLabelClassify(documents, "{project_name}",
        "{deployment_name}", "en", options);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (ClassifyDocumentResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (ClassificationCategory classification : documentResult.getClassifications()) {
            System.out.printf("\tCategory: %s, confidence score: %f.%n",
                classification.getCategory(), classification.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, "en" (English) is used as the default.
options - The additional configurable options that may be passed when analyzing single-label classification.
- Returns:
- A SyncPoller that polls the single-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginSingleLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginSingleLabelClassify is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginSingleLabelClassify
public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginSingleLabelClassify(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, SingleLabelClassifyOptions options, com.azure.core.util.Context context)
Returns a list of single-label classifications for the provided list of documents with the provided request options.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(new TextDocumentInput(Integer.toString(i),
        "A recent report by the Government Accountability Office (GAO) found that the dramatic increase "
            + "in oil and natural gas development on federal lands over the past six years has stretched the"
            + " staff of the BLM to a point that it has been unable to meet its environmental protection "
            + "responsibilities."));
}
SingleLabelClassifyOptions options = new SingleLabelClassifyOptions().setIncludeStatistics(true);
// See the service documentation for regional support and how to train a model to classify your documents,
// see https://aka.ms/azsdk/textanalytics/customfunctionalities
SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
    textAnalyticsClient.beginSingleLabelClassify(documents, "{project_name}",
        "{deployment_name}", options, Context.NONE);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (ClassifyDocumentResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (ClassificationCategory classification : documentResult.getClassifications()) {
            System.out.printf("\tCategory: %s, confidence score: %f.%n",
                classification.getCategory(), classification.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
options - The additional configurable options that may be passed when analyzing single-label classification.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A SyncPoller that polls the single-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginSingleLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginSingleLabelClassify is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginMultiLabelClassify
public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<String> documents, String projectName, String deploymentName)
Returns a list of multi-label classifications for the provided list of documents.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
This method will use the default language that can be set by using method TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service will use 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "I need a reservation for an indoor restaurant in China. Please don't stop the music."
            + " Play music and add it to my playlist");
}
SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
    textAnalyticsClient.beginMultiLabelClassify(documents, "{project_name}", "{deployment_name}");
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (ClassifyDocumentResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (ClassificationCategory classification : documentResult.getClassifications()) {
            System.out.printf("\tCategory: %s, confidence score: %f.%n",
                classification.getCategory(), classification.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
- Returns:
- A SyncPoller that polls the multi-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginMultiLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginMultiLabelClassify is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginMultiLabelClassify
public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<String> documents, String projectName, String deploymentName, String language, MultiLabelClassifyOptions options)
Returns a list of multi-label classifications for the provided list of documents with the provided request options.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "I need a reservation for an indoor restaurant in China. Please don't stop the music."
            + " Play music and add it to my playlist");
}
MultiLabelClassifyOptions options = new MultiLabelClassifyOptions().setIncludeStatistics(true);
SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
    textAnalyticsClient.beginMultiLabelClassify(documents, "{project_name}",
        "{deployment_name}", "en", options);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (ClassifyDocumentResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (ClassificationCategory classification : documentResult.getClassifications()) {
            System.out.printf("\tCategory: %s, confidence score: %f.%n",
                classification.getCategory(), classification.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
language - The 2-letter ISO 639-1 representation of the language for the documents. If not set, "en" (English) is used as the default.
options - The additional configurable options that may be passed when analyzing multi-label classification.
- Returns:
- A SyncPoller that polls the multi-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginMultiLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginMultiLabelClassify is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginMultiLabelClassify
public com.azure.core.util.polling.SyncPoller<ClassifyDocumentOperationDetail,ClassifyDocumentPagedIterable> beginMultiLabelClassify(Iterable<TextDocumentInput> documents, String projectName, String deploymentName, MultiLabelClassifyOptions options, com.azure.core.util.Context context)
Returns a list of multi-label classifications for the provided list of documents with the provided request options.
This method is supported since service API version TextAnalyticsServiceVersion.V2022_05_01.
Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(new TextDocumentInput(Integer.toString(i),
        "I need a reservation for an indoor restaurant in China. Please don't stop the music."
            + " Play music and add it to my playlist"));
}
MultiLabelClassifyOptions options = new MultiLabelClassifyOptions().setIncludeStatistics(true);
SyncPoller<ClassifyDocumentOperationDetail, ClassifyDocumentPagedIterable> syncPoller =
    textAnalyticsClient.beginMultiLabelClassify(documents, "{project_name}",
        "{deployment_name}", options, Context.NONE);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(documentsResults -> {
    System.out.printf("Project name: %s, deployment name: %s.%n",
        documentsResults.getProjectName(), documentsResults.getDeploymentName());
    for (ClassifyDocumentResult documentResult : documentsResults) {
        System.out.println("Document ID: " + documentResult.getId());
        for (ClassificationCategory classification : documentResult.getClassifications()) {
            System.out.printf("\tCategory: %s, confidence score: %f.%n",
                classification.getCategory(), classification.getConfidenceScore());
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
projectName - The name of the project which owns the model being consumed.
deploymentName - The name of the deployment being consumed.
options - The additional configurable options that may be passed when analyzing multi-label classification.
context - Additional context that is passed through the HTTP pipeline during the service call.
- Returns:
- A SyncPoller that polls the multi-label classification operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ClassifyDocumentResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginMultiLabelClassify is called with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. beginMultiLabelClassify is only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginAbstractSummary
public com.azure.core.util.polling.SyncPoller<AbstractiveSummaryOperationDetail,AbstractiveSummaryPagedIterable> beginAbstractSummary(Iterable<String> documents)
Returns a list of abstractive summaries for the provided list of documents.
This method is supported since service API version TextAnalyticsServiceVersion.V2023_04_01.
This method will use the default language that can be set by using method TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service will use 'en' as the language.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
            + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
            + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
            + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
            + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
            + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
            + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
            + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
            + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
            + " pretrained models that can jointly learn representations to support a broad range of downstream"
            + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
            + " performance on benchmarks in conversational speech recognition, machine translation, "
            + "conversational question answering, machine reading comprehension, and image captioning. These"
            + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
            + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
            + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
            + "foundational component of this aspiration, if grounded with external knowledge sources in "
            + "the downstream AI tasks.");
}
SyncPoller<AbstractiveSummaryOperationDetail, AbstractiveSummaryPagedIterable> syncPoller =
    textAnalyticsClient.beginAbstractSummary(documents);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(resultCollection -> {
    for (AbstractiveSummaryResult documentResult : resultCollection) {
        System.out.println("\tAbstractive summary sentences:");
        for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
            System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
            for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {
                System.out.printf("\t\t offset: %d, length: %d%n",
                    abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());
            }
        }
    }
});
- Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
- Returns:
- A SyncPoller that polls the abstractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AbstractiveSummaryResultCollection.
- Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAbstractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Those actions are only available for API version TextAnalyticsServiceVersion.V2023_04_01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginAbstractSummary
public com.azure.core.util.polling.SyncPoller<AbstractiveSummaryOperationDetail,AbstractiveSummaryPagedIterable> beginAbstractSummary(Iterable<String> documents, String language, AbstractiveSummaryOptions options)
Returns a list of abstractive summaries for the provided list of documents with the provided request options.
This method is supported since service API version TextAnalyticsServiceVersion.V2023_04_01.
See this for supported languages in the Language service API.
Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
            + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
            + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
            + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
            + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
            + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
            + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
            + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
            + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
            + " pretrained models that can jointly learn representations to support a broad range of downstream"
            + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
            + " performance on benchmarks in conversational speech recognition, machine translation, "
            + "conversational question answering, machine reading comprehension, and image captioning. These"
            + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
            + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
            + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
            + "foundational component of this aspiration, if grounded with external knowledge sources in "
            + "the downstream AI tasks.");
}
SyncPoller<AbstractiveSummaryOperationDetail, AbstractiveSummaryPagedIterable> syncPoller =
    textAnalyticsClient.beginAbstractSummary(documents, "en",
        new AbstractiveSummaryOptions().setDisplayName("{tasks_display_name}").setSentenceCount(3));
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(resultCollection -> {
    for (AbstractiveSummaryResult documentResult : resultCollection) {
        System.out.println("\tAbstractive summary sentences:");
        for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
            System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
            for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {
                System.out.printf("\t\t offset: %d, length: %d%n",
                    abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());
            }
        }
    }
});

Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as the default.
options - The additional configurable options that may be passed when requesting abstractive summarization.

Returns:
A SyncPoller that polls the abstractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AbstractiveSummaryResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAbstractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Abstractive summarization is only available for API version TextAnalyticsServiceVersion.V2023_04_01 and newer.
TextAnalyticsException - if the analyze operation fails.
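The language parameter above must be a 2-letter ISO 639-1 code. A minimal, SDK-free sketch of a client-side sanity check (the helper name is hypothetical, not part of the Azure SDK; Java's Locale class ships the ISO 639-1 code list):

```java
import java.util.Arrays;
import java.util.Locale;

public class LanguageCodeCheck {
    // Returns true when the tag looks like a valid 2-letter ISO 639-1 code,
    // which is the shape the `language` parameter expects.
    public static boolean isIso6391(String code) {
        return code != null
            && code.length() == 2
            && Arrays.asList(Locale.getISOLanguages()).contains(code);
    }

    public static void main(String[] args) {
        System.out.println(isIso6391("en"));  // true
        System.out.println(isIso6391("eng")); // false: "eng" is ISO 639-2, not 639-1
    }
}
```

Such a check can fail fast locally instead of waiting for the service to reject an unsupported language tag.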
-
beginAbstractSummary
public com.azure.core.util.polling.SyncPoller<AbstractiveSummaryOperationDetail,AbstractiveSummaryPagedIterable> beginAbstractSummary(Iterable<TextDocumentInput> documents, AbstractiveSummaryOptions options, com.azure.core.util.Context context)

Returns a list of abstractive summaries for the provided list of documents, with provided request options. This method is supported since service API version TextAnalyticsServiceVersion.V2023_04_01.

Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(new TextDocumentInput(Integer.toString(i),
        "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
            + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
            + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
            + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
            + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
            + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
            + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
            + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
            + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
            + " pretrained models that can jointly learn representations to support a broad range of downstream"
            + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
            + " performance on benchmarks in conversational speech recognition, machine translation, "
            + "conversational question answering, machine reading comprehension, and image captioning. These"
            + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
            + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
            + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
            + "foundational component of this aspiration, if grounded with external knowledge sources in "
            + "the downstream AI tasks."));
}
SyncPoller<AbstractiveSummaryOperationDetail, AbstractiveSummaryPagedIterable> syncPoller =
    textAnalyticsClient.beginAbstractSummary(documents,
        new AbstractiveSummaryOptions().setDisplayName("{tasks_display_name}").setSentenceCount(3),
        Context.NONE);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(resultCollection -> {
    for (AbstractiveSummaryResult documentResult : resultCollection) {
        System.out.println("\tAbstractive summary sentences:");
        for (AbstractiveSummary summarySentence : documentResult.getSummaries()) {
            System.out.printf("\t\t Summary text: %s.%n", summarySentence.getText());
            for (AbstractiveSummaryContext abstractiveSummaryContext : summarySentence.getContexts()) {
                System.out.printf("\t\t offset: %d, length: %d%n",
                    abstractiveSummaryContext.getOffset(), abstractiveSummaryContext.getLength());
            }
        }
    }
});

Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The additional configurable options that may be passed when requesting abstractive summarization.
context - Additional context that is passed through the HTTP pipeline during the service call.

Returns:
A SyncPoller that polls the abstractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of AbstractiveSummaryResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAbstractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Abstractive summarization is only available for API version TextAnalyticsServiceVersion.V2023_04_01 and newer.
TextAnalyticsException - if the analyze operation fails.
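The offset and length printed for each AbstractiveSummaryContext in the samples above locate, within the input document, the span the summary sentence was generated from. A minimal SDK-free sketch of using them (assuming, as is the SDK's default StringIndexType, that offsets count UTF-16 code units, which matches Java's native String indexing; the helper name is illustrative):

```java
public class SummaryContextSlice {
    // Returns the source-document span an (offset, length) pair points at,
    // interpreting both as UTF-16 code-unit counts as Java's String does.
    public static String slice(String document, int offset, int length) {
        return document.substring(offset, offset + length);
    }

    public static void main(String[] args) {
        String document = "Microsoft has been on a quest to advance AI.";
        System.out.println(slice(document, 0, 9)); // prints "Microsoft"
    }
}
```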
-
beginExtractSummary
public com.azure.core.util.polling.SyncPoller<ExtractiveSummaryOperationDetail,ExtractiveSummaryPagedIterable> beginExtractSummary(Iterable<String> documents)

Returns a list of extractive summaries for the provided list of documents. This method is supported since service API version TextAnalyticsServiceVersion.V2023_04_01.

This method will use the default language, which can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service will use "en" as the language.

Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
            + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
            + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
            + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
            + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
            + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
            + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
            + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
            + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
            + " pretrained models that can jointly learn representations to support a broad range of downstream"
            + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
            + " performance on benchmarks in conversational speech recognition, machine translation, "
            + "conversational question answering, machine reading comprehension, and image captioning. These"
            + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
            + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
            + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
            + "foundational component of this aspiration, if grounded with external knowledge sources in "
            + "the downstream AI tasks.");
}
SyncPoller<ExtractiveSummaryOperationDetail, ExtractiveSummaryPagedIterable> syncPoller =
    textAnalyticsClient.beginExtractSummary(documents);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(resultCollection -> {
    for (ExtractiveSummaryResult documentResult : resultCollection) {
        System.out.println("\tExtracted summary sentences:");
        for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {
            System.out.printf(
                "\t\t Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
                extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),
                extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());
        }
    }
});

Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.

Returns:
A SyncPoller that polls the extractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ExtractiveSummaryResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginExtractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Extractive summarization is only available for API version TextAnalyticsServiceVersion.V2023_04_01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginExtractSummary
public com.azure.core.util.polling.SyncPoller<ExtractiveSummaryOperationDetail,ExtractiveSummaryPagedIterable> beginExtractSummary(Iterable<String> documents, String language, ExtractiveSummaryOptions options)

Returns a list of extractive summaries for the provided list of documents, with provided request options. This method is supported since service API version TextAnalyticsServiceVersion.V2023_04_01.

See this for supported languages in the Language service API.

Code Sample
List<String> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(
        "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
            + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
            + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
            + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
            + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
            + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
            + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
            + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
            + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
            + " pretrained models that can jointly learn representations to support a broad range of downstream"
            + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
            + " performance on benchmarks in conversational speech recognition, machine translation, "
            + "conversational question answering, machine reading comprehension, and image captioning. These"
            + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
            + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
            + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
            + "foundational component of this aspiration, if grounded with external knowledge sources in "
            + "the downstream AI tasks.");
}
SyncPoller<ExtractiveSummaryOperationDetail, ExtractiveSummaryPagedIterable> syncPoller =
    textAnalyticsClient.beginExtractSummary(documents, "en",
        new ExtractiveSummaryOptions().setMaxSentenceCount(4).setOrderBy(ExtractiveSummarySentencesOrder.RANK));
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(resultCollection -> {
    for (ExtractiveSummaryResult documentResult : resultCollection) {
        System.out.println("\tExtracted summary sentences:");
        for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {
            System.out.printf(
                "\t\t Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
                extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),
                extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());
        }
    }
});

Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as the default.
options - The additional configurable options that may be passed when requesting extractive summarization.

Returns:
A SyncPoller that polls the extractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ExtractiveSummaryResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginExtractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Extractive summarization is only available for API version TextAnalyticsServiceVersion.V2023_04_01 and newer.
TextAnalyticsException - if the analyze operation fails.
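setOrderBy(ExtractiveSummarySentencesOrder.RANK) in the sample above asks the service to return sentences by descending rank score rather than document order. The same reordering can be sketched locally without the SDK (the Sentence record below is a hypothetical stand-in for ExtractiveSummarySentence):

```java
import java.util.Comparator;
import java.util.List;
import java.util.stream.Collectors;

public class RankOrder {
    // Stand-in for ExtractiveSummarySentence: text plus the service-assigned rank score.
    public record Sentence(String text, double rankScore) {}

    // Highest-ranked sentences first, mirroring RANK ordering.
    public static List<Sentence> byRank(List<Sentence> sentences) {
        return sentences.stream()
            .sorted(Comparator.comparingDouble(Sentence::rankScore).reversed())
            .collect(Collectors.toList());
    }

    public static void main(String[] args) {
        List<Sentence> ordered = byRank(List.of(
            new Sentence("Filler sentence.", 0.12),
            new Sentence("Key finding.", 0.98)));
        System.out.println(ordered.get(0).text()); // prints "Key finding."
    }
}
```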
-
beginExtractSummary
public com.azure.core.util.polling.SyncPoller<ExtractiveSummaryOperationDetail,ExtractiveSummaryPagedIterable> beginExtractSummary(Iterable<TextDocumentInput> documents, ExtractiveSummaryOptions options, com.azure.core.util.Context context)

Returns a list of extractive summaries for the provided list of documents, with provided request options. This method is supported since service API version TextAnalyticsServiceVersion.V2023_04_01.

Code Sample
List<TextDocumentInput> documents = new ArrayList<>();
for (int i = 0; i < 3; i++) {
    documents.add(new TextDocumentInput(Integer.toString(i),
        "At Microsoft, we have been on a quest to advance AI beyond existing techniques, by taking a more holistic,"
            + " human-centric approach to learning and understanding. As Chief Technology Officer of Azure AI"
            + " Cognitive Services, I have been working with a team of amazing scientists and engineers to turn "
            + "this quest into a reality. In my role, I enjoy a unique perspective in viewing the relationship"
            + " among three attributes of human cognition: monolingual text (X), audio or visual sensory signals,"
            + " (Y) and multilingual (Z). At the intersection of all three, there’s magic—what we call XYZ-code"
            + " as illustrated in Figure 1—a joint representation to create more powerful AI that can speak, hear,"
            + " see, and understand humans better. We believe XYZ-code will enable us to fulfill our long-term"
            + " vision: cross-domain transfer learning, spanning modalities and languages. The goal is to have"
            + " pretrained models that can jointly learn representations to support a broad range of downstream"
            + " AI tasks, much in the way humans do today. Over the past five years, we have achieved human"
            + " performance on benchmarks in conversational speech recognition, machine translation, "
            + "conversational question answering, machine reading comprehension, and image captioning. These"
            + " five breakthroughs provided us with strong signals toward our more ambitious aspiration to"
            + " produce a leap in AI capabilities, achieving multisensory and multilingual learning that "
            + "is closer in line with how humans learn and understand. I believe the joint XYZ-code is a "
            + "foundational component of this aspiration, if grounded with external knowledge sources in "
            + "the downstream AI tasks."));
}
SyncPoller<ExtractiveSummaryOperationDetail, ExtractiveSummaryPagedIterable> syncPoller =
    textAnalyticsClient.beginExtractSummary(documents,
        new ExtractiveSummaryOptions().setMaxSentenceCount(4).setOrderBy(ExtractiveSummarySentencesOrder.RANK),
        Context.NONE);
syncPoller.waitForCompletion();
syncPoller.getFinalResult().forEach(resultCollection -> {
    for (ExtractiveSummaryResult documentResult : resultCollection) {
        System.out.println("\tExtracted summary sentences:");
        for (ExtractiveSummarySentence extractiveSummarySentence : documentResult.getSentences()) {
            System.out.printf(
                "\t\t Sentence text: %s, length: %d, offset: %d, rank score: %f.%n",
                extractiveSummarySentence.getText(), extractiveSummarySentence.getLength(),
                extractiveSummarySentence.getOffset(), extractiveSummarySentence.getRankScore());
        }
    }
});

Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
options - The additional configurable options that may be passed when requesting extractive summarization.
context - Additional context that is passed through the HTTP pipeline during the service call.

Returns:
A SyncPoller that polls the extractive summarization operation until it has completed, has failed, or has been cancelled. The completed operation returns a PagedIterable of ExtractiveSummaryResultCollection.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginExtractSummary is called with service API version TextAnalyticsServiceVersion.V3_0, TextAnalyticsServiceVersion.V3_1, or TextAnalyticsServiceVersion.V2022_05_01. Extractive summarization is only available for API version TextAnalyticsServiceVersion.V2023_04_01 and newer.
TextAnalyticsException - if the analyze operation fails.
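The data limits referenced above cap how many documents a single request may carry (25 documents per request for these operations at the time of writing; treat that number as an assumption and verify it against the current limits page). Oversized inputs can be split client-side before calling the begin* methods; a minimal SDK-free partitioning helper:

```java
import java.util.ArrayList;
import java.util.List;

public class DocumentBatcher {
    // Splits documents into consecutive batches of at most maxPerBatch, preserving order.
    public static <T> List<List<T>> partition(List<T> documents, int maxPerBatch) {
        List<List<T>> batches = new ArrayList<>();
        for (int i = 0; i < documents.size(); i += maxPerBatch) {
            batches.add(documents.subList(i, Math.min(i + maxPerBatch, documents.size())));
        }
        return batches;
    }
}
```

Each resulting batch can then be submitted as its own long-running operation.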
-
beginAnalyzeActions
public com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions)

Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of documents.

This method will use the default language, which can be set via TextAnalyticsClientBuilder.defaultLanguage(String). If none is specified, the service will use "en" as the language.

Code Sample
List<String> documents = Arrays.asList(
    "Elon Musk is the CEO of SpaceX and Tesla.",
    "My SSN is 859-98-0987"
);
SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =
    textAnalyticsClient.beginAnalyzeActions(
        documents,
        new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
            .setRecognizeEntitiesActions(new RecognizeEntitiesAction())
            .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()));
syncPoller.waitForCompletion();
AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();
result.forEach(analyzeActionsResult -> {
    System.out.println("Entities recognition action results:");
    analyzeActionsResult.getRecognizeEntitiesResults().forEach(
        actionResult -> {
            if (!actionResult.isError()) {
                actionResult.getDocumentsResults().forEach(
                    entitiesResult -> entitiesResult.getEntities().forEach(
                        entity -> System.out.printf(
                            "Recognized entity: %s, entity category: %s, entity subcategory: %s,"
                                + " confidence score: %f.%n",
                            entity.getText(), entity.getCategory(), entity.getSubcategory(),
                            entity.getConfidenceScore())));
            }
        });
    System.out.println("Key phrases extraction action results:");
    analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
        actionResult -> {
            if (!actionResult.isError()) {
                actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
                    System.out.println("Extracted phrases:");
                    extractKeyPhraseResult.getKeyPhrases()
                        .forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
                });
            }
        });
});

Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
actions - The actions object that contains all the actions to be executed. An action is one task of execution, such as a single 'Key Phrases Extraction' task on the given document inputs.

Returns:
A SyncPoller that polls the analyze-actions operation until it has completed, has failed, or has been cancelled. The completed operation returns an AnalyzeActionsResultPagedIterable.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAnalyzeActions is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeActions is only available for API version v3.1 and newer.
UnsupportedOperationException - if AnalyzeHealthcareEntitiesAction, RecognizeCustomEntitiesAction, SingleLabelClassifyAction, or MultiLabelClassifyAction is requested with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. Those actions are only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
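In the samples, SyncPoller.waitForCompletion() blocks by repeatedly polling the operation's status until it reaches a terminal state, after which getFinalResult() returns the payload. The control flow can be sketched without the SDK (the Operation interface below is hypothetical; the real poller additionally honors the service's suggested retry interval and supports cancellation):

```java
public class PollingSketch {
    // Hypothetical stand-in for a long-running operation handle.
    public interface Operation<R> {
        String status();   // e.g. "running", "succeeded", "failed"
        R result();        // valid once status() is terminal
    }

    // Minimal analogue of waitForCompletion() followed by getFinalResult().
    public static <R> R waitForCompletion(Operation<R> op) {
        String status = op.status();
        while (!status.equals("succeeded") && !status.equals("failed")) {
            // A real poller sleeps here for the service-suggested interval before re-polling.
            status = op.status();
        }
        if (status.equals("failed")) {
            throw new IllegalStateException("operation failed");
        }
        return op.result();
    }
}
```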
-
beginAnalyzeActions
public com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<String> documents, TextAnalyticsActions actions, String language, AnalyzeActionsOptions options)

Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of documents with provided request options.

See this for supported languages in the Language service API.

Code Sample
List<String> documents = Arrays.asList(
    "Elon Musk is the CEO of SpaceX and Tesla.",
    "My SSN is 859-98-0987"
);
SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =
    textAnalyticsClient.beginAnalyzeActions(
        documents,
        new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
            .setRecognizeEntitiesActions(new RecognizeEntitiesAction())
            .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()),
        "en",
        new AnalyzeActionsOptions().setIncludeStatistics(false));
syncPoller.waitForCompletion();
AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();
result.forEach(analyzeActionsResult -> {
    System.out.println("Entities recognition action results:");
    analyzeActionsResult.getRecognizeEntitiesResults().forEach(
        actionResult -> {
            if (!actionResult.isError()) {
                actionResult.getDocumentsResults().forEach(
                    entitiesResult -> entitiesResult.getEntities().forEach(
                        entity -> System.out.printf(
                            "Recognized entity: %s, entity category: %s, entity subcategory: %s,"
                                + " confidence score: %f.%n",
                            entity.getText(), entity.getCategory(), entity.getSubcategory(),
                            entity.getConfidenceScore())));
            }
        });
    System.out.println("Key phrases extraction action results:");
    analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
        actionResult -> {
            if (!actionResult.isError()) {
                actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
                    System.out.println("Extracted phrases:");
                    extractKeyPhraseResult.getKeyPhrases()
                        .forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
                });
            }
        });
});

Parameters:
documents - A list of documents to be analyzed. For text length limits, maximum batch size, and supported text encoding, see data limits.
actions - The actions object that contains all the actions to be executed. An action is one task of execution, such as a single 'Key Phrases Extraction' task on the given document inputs.
language - The 2-letter ISO 639-1 representation of language for the documents. If not set, uses "en" for English as the default.
options - The additional configurable options that may be passed when analyzing a collection of actions.

Returns:
A SyncPoller that polls the analyze-actions operation until it has completed, has failed, or has been cancelled. The completed operation returns an AnalyzeActionsResultPagedIterable.

Throws:
NullPointerException - if documents is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAnalyzeActions is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeActions is only available for API version v3.1 and newer.
UnsupportedOperationException - if AnalyzeHealthcareEntitiesAction, RecognizeCustomEntitiesAction, SingleLabelClassifyAction, or MultiLabelClassifyAction is requested with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. Those actions are only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
beginAnalyzeActions
public com.azure.core.util.polling.SyncPoller<AnalyzeActionsOperationDetail,AnalyzeActionsResultPagedIterable> beginAnalyzeActions(Iterable<TextDocumentInput> documents, TextAnalyticsActions actions, AnalyzeActionsOptions options, com.azure.core.util.Context context)

Executes actions, such as entity recognition, PII entity recognition, and key phrase extraction, for a list of documents with provided request options.

See this for supported languages in the Language service API.

Code Sample
List<TextDocumentInput> documents = Arrays.asList(
    new TextDocumentInput("0", "Elon Musk is the CEO of SpaceX and Tesla.").setLanguage("en"),
    new TextDocumentInput("1", "My SSN is 859-98-0987").setLanguage("en")
);
SyncPoller<AnalyzeActionsOperationDetail, AnalyzeActionsResultPagedIterable> syncPoller =
    textAnalyticsClient.beginAnalyzeActions(
        documents,
        new TextAnalyticsActions().setDisplayName("{tasks_display_name}")
            .setRecognizeEntitiesActions(new RecognizeEntitiesAction())
            .setExtractKeyPhrasesActions(new ExtractKeyPhrasesAction()),
        new AnalyzeActionsOptions().setIncludeStatistics(false),
        Context.NONE);
syncPoller.waitForCompletion();
AnalyzeActionsResultPagedIterable result = syncPoller.getFinalResult();
result.forEach(analyzeActionsResult -> {
    System.out.println("Entities recognition action results:");
    analyzeActionsResult.getRecognizeEntitiesResults().forEach(
        actionResult -> {
            if (!actionResult.isError()) {
                actionResult.getDocumentsResults().forEach(
                    entitiesResult -> entitiesResult.getEntities().forEach(
                        entity -> System.out.printf(
                            "Recognized entity: %s, entity category: %s, entity subcategory: %s,"
                                + " confidence score: %f.%n",
                            entity.getText(), entity.getCategory(), entity.getSubcategory(),
                            entity.getConfidenceScore())));
            }
        });
    System.out.println("Key phrases extraction action results:");
    analyzeActionsResult.getExtractKeyPhrasesResults().forEach(
        actionResult -> {
            if (!actionResult.isError()) {
                actionResult.getDocumentsResults().forEach(extractKeyPhraseResult -> {
                    System.out.println("Extracted phrases:");
                    extractKeyPhraseResult.getKeyPhrases()
                        .forEach(keyPhrases -> System.out.printf("\t%s.%n", keyPhrases));
                });
            }
        });
});

Parameters:
documents - A list of documents to be analyzed.
actions - The actions object that contains all the actions to be executed. An action is one task of execution, such as a single 'Key Phrases Extraction' task on the given document inputs.
options - The additional configurable options that may be passed when analyzing a collection of actions.
context - Additional context that is passed through the HTTP pipeline during the service call.

Returns:
A SyncPoller that polls the analyze-actions operation until it has completed, has failed, or has been cancelled. The completed operation returns an AnalyzeActionsResultPagedIterable.

Throws:
NullPointerException - if documents or actions is null.
IllegalArgumentException - if documents is empty.
UnsupportedOperationException - if beginAnalyzeActions is called with service API version TextAnalyticsServiceVersion.V3_0. beginAnalyzeActions is only available for API version v3.1 and newer.
UnsupportedOperationException - if AnalyzeHealthcareEntitiesAction, RecognizeCustomEntitiesAction, SingleLabelClassifyAction, or MultiLabelClassifyAction is requested with service API version TextAnalyticsServiceVersion.V3_0 or TextAnalyticsServiceVersion.V3_1. Those actions are only available for API version 2022-05-01 and newer.
TextAnalyticsException - if the analyze operation fails.
-
analyzeSentimentBatch(Iterable, String, AnalyzeSentimentOptions).