public class DataLakeFileAsyncClient extends DataLakePathAsyncClient
This client is instantiated through DataLakePathClientBuilder or retrieved via
DataLakeFileSystemAsyncClient.getFileAsyncClient(String).
Please refer to the Azure Docs for more information.
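For orientation, a minimal construction sketch (the endpoint, account name, key, file system, and path below are placeholder values, not ones taken from this page):

```java
// Placeholder account details; substitute real values.
DataLakeFileAsyncClient fileClient = new DataLakePathClientBuilder()
    .endpoint("https://myaccount.dfs.core.windows.net")
    .credential(new StorageSharedKeyCredential("myaccount", "<account-key>"))
    .fileSystemName("myfilesystem")
    .pathName("mydir/hello.txt")
    .buildFileAsyncClient();

// Or retrieve the same client from an existing file system client:
DataLakeFileAsyncClient sameFile = fileSystemAsyncClient.getFileAsyncClient("mydir/hello.txt");
```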
| Modifier and Type | Method and Description |
|---|---|
| `Mono<Void>` | `append(Flux<ByteBuffer> data, long fileOffset, long length)` - Appends data to the specified resource to later be flushed (written) by a call to flush. |
| `Mono<Response<Void>>` | `appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, byte[] contentMd5, String leaseId)` - Appends data to the specified resource to later be flushed (written) by a call to flush. |
| `Mono<Void>` | `delete()` - Deletes a file. |
| `Mono<Response<Void>>` | `deleteWithResponse(DataLakeRequestConditions requestConditions)` - Deletes a file. |
| `Mono<PathInfo>` | `flush(long position)` - Flushes (writes) data previously appended to the file through a call to append. |
| `Mono<PathInfo>` | `flush(long position, boolean overwrite)` - Flushes (writes) data previously appended to the file through a call to append. |
| `Mono<Response<PathInfo>>` | `flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions)` - Flushes (writes) data previously appended to the file through a call to append. |
| `String` | `getFileName()` - Gets the name of this file, not including its full path. |
| `String` | `getFilePath()` - Gets the path of this file, not including the name of the resource itself. |
| `String` | `getFileUrl()` - Gets the URL of the file represented by this client on the Data Lake service. |
| `Flux<ByteBuffer>` | `query(String expression)` - Queries the entire file. |
| `Mono<FileQueryAsyncResponse>` | `queryWithResponse(FileQueryOptions queryOptions)` - Queries the entire file. |
| `Flux<ByteBuffer>` | `read()` - Reads the entire file. |
| `Mono<PathProperties>` | `readToFile(String filePath)` - Reads the entire file into a file specified by the path. |
| `Mono<PathProperties>` | `readToFile(String filePath, boolean overwrite)` - Reads the entire file into a file specified by the path. |
| `Mono<Response<PathProperties>>` | `readToFileWithResponse(String filePath, FileRange range, ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions)` - Reads the entire file into a file specified by the path. |
| `Mono<FileReadAsyncResponse>` | `readWithResponse(FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5)` - Reads a range of bytes from a file. |
| `Mono<DataLakeFileAsyncClient>` | `rename(String destinationFileSystem, String destinationPath)` - Moves the file to another location within the file system. |
| `Mono<Response<DataLakeFileAsyncClient>>` | `renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions)` - Moves the file to another location within the file system. |
| `Mono<Void>` | `scheduleDeletion(FileScheduleDeletionOptions options)` - Schedules the file for deletion. |
| `Mono<Response<Void>>` | `scheduleDeletionWithResponse(FileScheduleDeletionOptions options)` - Schedules the file for deletion. |
| `Mono<PathInfo>` | `upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions)` - Creates a new file and uploads content. |
| `Mono<PathInfo>` | `upload(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, boolean overwrite)` - Creates a new file and uploads content. |
| `Mono<Void>` | `uploadFromFile(String filePath)` - Creates a new file, with the content of the specified file. |
| `Mono<Void>` | `uploadFromFile(String filePath, boolean overwrite)` - Creates a new file, with the content of the specified file. |
| `Mono<Void>` | `uploadFromFile(String filePath, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)` - Creates a new file, with the content of the specified file. |
| `Mono<Response<PathInfo>>` | `uploadWithResponse(FileParallelUploadOptions options)` - Creates a new file. |
| `Mono<Response<PathInfo>>` | `uploadWithResponse(Flux<ByteBuffer> data, ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)` - Creates a new file. |
Methods inherited from class DataLakePathAsyncClient:
create, create, createWithResponse, exists, existsWithResponse, generateSas, generateSas, generateUserDelegationSas, generateUserDelegationSas, getAccessControl, getAccessControlWithResponse, getAccountName, getFileSystemName, getHttpPipeline, getProperties, getPropertiesWithResponse, getServiceVersion, removeAccessControlRecursive, removeAccessControlRecursiveWithResponse, setAccessControlList, setAccessControlListWithResponse, setAccessControlRecursive, setAccessControlRecursiveWithResponse, setHttpHeaders, setHttpHeadersWithResponse, setMetadata, setMetadataWithResponse, setPermissions, setPermissionsWithResponse, updateAccessControlRecursive, updateAccessControlRecursiveWithResponse

public String getFileUrl()
Gets the URL of the file represented by this client on the Data Lake service.

public String getFilePath()
Gets the path of this file, not including the name of the resource itself.

public String getFileName()
Gets the name of this file, not including its full path.
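These three getters decompose the resource address. As a rough illustration with plain strings (the path "mydir/hello.txt" and the account URL are hypothetical, not values from this page), getFileName() corresponds to the final path segment and getFilePath() to everything before it:

```java
// Hypothetical path within a file system.
String fullPath = "mydir/hello.txt";

// getFileName() -> the final segment, "hello.txt"
String fileName = fullPath.substring(fullPath.lastIndexOf('/') + 1);
// getFilePath() -> the path without the resource name, "mydir"
String filePath = fullPath.substring(0, fullPath.lastIndexOf('/'));
// getFileUrl() -> the full service URL for this file
String fileUrl = "https://myaccount.dfs.core.windows.net/myfilesystem/" + fullPath;

System.out.println(fileName + " | " + filePath);
```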
public Mono<Void> delete()
Code Samples
client.delete().subscribe(response ->
System.out.println("Delete request completed"));
For more information, see the Azure Docs
public Mono<com.azure.core.http.rest.Response<Void>> deleteWithResponse(DataLakeRequestConditions requestConditions)
Code Samples
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
client.deleteWithResponse(requestConditions)
.subscribe(response -> System.out.println("Delete request completed"));
For more information, see the Azure Docs
Parameters:
requestConditions - DataLakeRequestConditions

public Mono<PathInfo> upload(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions)
Code Samples
client.upload(data, parallelTransferOptions)
    .doOnError(throwable -> System.err.printf("Failed to upload %s%n", throwable.getMessage()))
    .subscribe(completion -> System.out.println("Upload succeeded"));
Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.

public Mono<PathInfo> upload(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, boolean overwrite)
Code Samples
boolean overwrite = false; // Default behavior
client.upload(data, parallelTransferOptions, overwrite)
.doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
overwrite - Whether or not to overwrite, should the file already exist.

public Mono<com.azure.core.http.rest.Response<PathInfo>> uploadWithResponse(Flux<ByteBuffer> data, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)
To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).
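As a sketch of that no-overwrite pattern (assuming client and data already exist; the remaining arguments are left null here only for brevity):

```java
// "*" matches any ETag, so If-None-Match: "*" fails whenever the file already exists,
// which prevents this upload from overwriting existing data.
String anyEtag = "*";
DataLakeRequestConditions noOverwrite = new DataLakeRequestConditions()
    .setIfNoneMatch(anyEtag);
client.uploadWithResponse(data, null, null, null, noOverwrite)
    .subscribe(
        response -> System.out.println("File created"),
        error -> System.out.println("File already exists; nothing was overwritten"));
```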
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
client.uploadWithResponse(data, parallelTransferOptions, headers, metadata, requestConditions)
    .subscribe(response -> System.out.printf("Uploaded file %n"));
Using Progress Reporting
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadataMap = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions conditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
ParallelTransferOptions pto = new ParallelTransferOptions()
.setBlockSizeLong(blockSize)
.setProgressReceiver(bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
client.uploadWithResponse(data, pto, httpHeaders, metadataMap, conditions)
    .subscribe(response -> System.out.printf("Uploaded file %n"));
Parameters:
data - The data to write to the file. Unlike other upload methods, this method does not require that the Flux be replayable. In other words, it does not have to support multiple subscribers and is not expected to produce the same values across subscriptions.
parallelTransferOptions - ParallelTransferOptions used to configure buffered uploading.
headers - PathHttpHeaders
metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
requestConditions - DataLakeRequestConditions

public Mono<com.azure.core.http.rest.Response<PathInfo>> uploadWithResponse(FileParallelUploadOptions options)
To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
client.uploadWithResponse(new FileParallelUploadOptions(data)
.setParallelTransferOptions(parallelTransferOptions).setHeaders(headers)
.setMetadata(metadata).setRequestConditions(requestConditions)
.setPermissions("permissions").setUmask("umask"))
    .subscribe(response -> System.out.printf("Uploaded file %n"));
Using Progress Reporting
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadataMap = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions conditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
ParallelTransferOptions pto = new ParallelTransferOptions()
.setBlockSizeLong(blockSize)
.setProgressReceiver(bytesTransferred -> System.out.printf("Upload progress: %s bytes sent", bytesTransferred));
client.uploadWithResponse(new FileParallelUploadOptions(data)
    .setParallelTransferOptions(pto).setHeaders(httpHeaders)
    .setMetadata(metadataMap).setRequestConditions(conditions)
    .setPermissions("permissions").setUmask("umask"))
    .subscribe(response -> System.out.printf("Uploaded file %n"));
Parameters:
options - FileParallelUploadOptions

public Mono<Void> uploadFromFile(String filePath)
Code Samples
client.uploadFromFile(filePath)
.doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
filePath - Path to the upload file
Throws:
UncheckedIOException - If an I/O error occurs

public Mono<Void> uploadFromFile(String filePath, boolean overwrite)
Code Samples
boolean overwrite = false; // Default behavior
client.uploadFromFile(filePath, overwrite)
.doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
filePath - Path to the upload file
overwrite - Whether or not to overwrite, should the file already exist.
Throws:
UncheckedIOException - If an I/O error occurs

public Mono<Void> uploadFromFile(String filePath, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, PathHttpHeaders headers, Map<String,String> metadata, DataLakeRequestConditions requestConditions)
To avoid overwriting, pass "*" to DataLakeRequestConditions.setIfNoneMatch(String).
Code Samples
PathHttpHeaders headers = new PathHttpHeaders()
.setContentMd5("data".getBytes(StandardCharsets.UTF_8))
.setContentLanguage("en-US")
.setContentType("binary");
Map<String, String> metadata = Collections.singletonMap("metadata", "value");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId)
.setIfUnmodifiedSince(OffsetDateTime.now().minusDays(3));
Long blockSize = 100L * 1024L * 1024L; // 100 MB;
ParallelTransferOptions parallelTransferOptions = new ParallelTransferOptions().setBlockSizeLong(blockSize);
client.uploadFromFile(filePath, parallelTransferOptions, headers, metadata, requestConditions)
.doOnError(throwable -> System.err.printf("Failed to upload from file %s%n", throwable.getMessage()))
.subscribe(completion -> System.out.println("Upload from file succeeded"));
Parameters:
filePath - Path to the upload file
parallelTransferOptions - ParallelTransferOptions to use to upload from file. Number of parallel transfers parameter is ignored.
headers - PathHttpHeaders
metadata - Metadata to associate with the resource. If there is leading or trailing whitespace in any metadata key or value, it must be removed or encoded.
requestConditions - DataLakeRequestConditions
Throws:
UncheckedIOException - If an I/O error occurs

public Mono<Void> append(Flux<ByteBuffer> data, long fileOffset, long length)
Code Samples
client.append(data, offset, length)
.subscribe(
response -> System.out.println("Append data completed"),
error -> System.out.printf("Error when calling append data: %s", error));
For more information, see the Azure Docs
Parameters:
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
length - The exact length of the data. It is important that this value match precisely the length of the data emitted by the Flux.

public Mono<com.azure.core.http.rest.Response<Void>> appendWithResponse(Flux<ByteBuffer> data, long fileOffset, long length, byte[] contentMd5, String leaseId)
Code Samples
byte[] contentMd5 = new byte[0]; // Replace with valid md5
client.appendWithResponse(data, offset, length, contentMd5, leaseId).subscribe(response ->
System.out.printf("Append data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
Parameters:
data - The data to write to the file.
fileOffset - The position where the data is to be appended.
length - The exact length of the data. It is important that this value match precisely the length of the data emitted by the Flux.
contentMd5 - An MD5 hash of the content of the data. If specified, the service will calculate the MD5 of the received data and fail the request if it does not match the provided MD5.
leaseId - By setting lease id, requests will fail if the provided lease does not match the active lease on the file.

public Mono<PathInfo> flush(long position)
By default this method will not overwrite existing data.
Code Samples
client.flush(position).subscribe(response ->
System.out.println("Flush data completed"));
For more information, see the Azure Docs
Parameters:
position - The length of the file after all data has been written.

public Mono<PathInfo> flush(long position, boolean overwrite)
Code Samples
boolean overwrite = true;
client.flush(position, overwrite).subscribe(response ->
System.out.println("Flush data completed"));
For more information, see the Azure Docs
Parameters:
position - The length of the file after all data has been written.
overwrite - Whether or not to overwrite, should data exist on the file.

public Mono<com.azure.core.http.rest.Response<PathInfo>> flushWithResponse(long position, boolean retainUncommittedData, boolean close, PathHttpHeaders httpHeaders, DataLakeRequestConditions requestConditions)
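Since append only stages data and flush commits it, the two calls are typically chained. A hedged sketch (the client and chunk contents are hypothetical, not values from this page) showing how the offsets and final flush position line up:

```java
// Stage two chunks with append, then commit both with a single flush.
byte[] chunk1 = "hello ".getBytes(StandardCharsets.UTF_8);
byte[] chunk2 = "world".getBytes(StandardCharsets.UTF_8);

long offset1 = 0;                             // first append starts at byte 0
long offset2 = offset1 + chunk1.length;       // next append starts where the last ended
long flushPosition = offset2 + chunk2.length; // file length after all appends

client.append(Flux.just(ByteBuffer.wrap(chunk1)), offset1, chunk1.length)
    .then(client.append(Flux.just(ByteBuffer.wrap(chunk2)), offset2, chunk2.length))
    .then(client.flush(flushPosition, true))
    .subscribe(info -> System.out.println("Flush data completed"));
```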
Code Samples
boolean retainUncommittedData = false;
boolean close = false;
PathHttpHeaders httpHeaders = new PathHttpHeaders()
.setContentLanguage("en-US")
.setContentType("binary");
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
client.flushWithResponse(position, retainUncommittedData, close, httpHeaders,
requestConditions).subscribe(response ->
System.out.printf("Flush data completed with status %d%n", response.getStatusCode()));
For more information, see the Azure Docs
Parameters:
position - The length of the file after all data has been written.
retainUncommittedData - Whether or not uncommitted data is to be retained after the operation.
close - Whether or not a file changed event raised indicates completion (true) or modification (false).
httpHeaders - PathHttpHeaders
requestConditions - DataLakeRequestConditions

public Flux<ByteBuffer> read()
Code Samples
ByteArrayOutputStream downloadData = new ByteArrayOutputStream();
client.read().subscribe(piece -> {
try {
downloadData.write(piece.array());
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
});
For more information, see the Azure Docs
public Mono<FileReadAsyncResponse> readWithResponse(FileRange range, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean getRangeContentMd5)
Code Samples
FileRange range = new FileRange(1024, 2048L);
DownloadRetryOptions options = new DownloadRetryOptions().setMaxRetryRequests(5);
client.readWithResponse(range, options, null, false).subscribe(response -> {
ByteArrayOutputStream readData = new ByteArrayOutputStream();
response.getValue().subscribe(piece -> {
try {
readData.write(piece.array());
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
});
});
For more information, see the Azure Docs
Parameters:
range - FileRange
options - DownloadRetryOptions
requestConditions - DataLakeRequestConditions
getRangeContentMd5 - Whether the contentMD5 for the specified file range should be returned.

public Mono<PathProperties> readToFile(String filePath)
The file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
client.readToFile(file).subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
Parameters:
filePath - A String representing the filePath where the downloaded data will be written.

public Mono<PathProperties> readToFile(String filePath, boolean overwrite)
If overwrite is set to false, the file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown.
Code Samples
boolean overwrite = false; // Default value
client.readToFile(file, overwrite).subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
Parameters:
filePath - A String representing the filePath where the downloaded data will be written.
overwrite - Whether or not to overwrite the file, should the file exist.

public Mono<com.azure.core.http.rest.Response<PathProperties>> readToFileWithResponse(String filePath, FileRange range, com.azure.storage.common.ParallelTransferOptions parallelTransferOptions, DownloadRetryOptions options, DataLakeRequestConditions requestConditions, boolean rangeGetContentMd5, Set<OpenOption> openOptions)
By default, the file will be created and must not already exist; if the file already exists, a FileAlreadyExistsException will be thrown. To override this behavior, provide appropriate OpenOptions.
Code Samples
FileRange fileRange = new FileRange(1024, 2048L);
DownloadRetryOptions downloadRetryOptions = new DownloadRetryOptions().setMaxRetryRequests(5);
Set<OpenOption> openOptions = new HashSet<>(Arrays.asList(StandardOpenOption.CREATE_NEW,
StandardOpenOption.WRITE, StandardOpenOption.READ)); // Default options
client.readToFileWithResponse(file, fileRange, null, downloadRetryOptions, null, false, openOptions)
.subscribe(response -> System.out.println("Completed download to file"));
For more information, see the Azure Docs
Parameters:
filePath - A String representing the filePath where the downloaded data will be written.
range - FileRange
parallelTransferOptions - ParallelTransferOptions to use to download to file. Number of parallel transfers parameter is ignored.
options - DownloadRetryOptions
requestConditions - DataLakeRequestConditions
rangeGetContentMd5 - Whether the contentMD5 for the specified file range should be returned.
openOptions - OpenOptions to use to configure how to open or create the file.
Throws:
IllegalArgumentException - If blockSize is less than 0 or greater than 100MB.
UncheckedIOException - If an I/O error occurs.

public Mono<DataLakeFileAsyncClient> rename(String destinationFileSystem, String destinationPath)
Code Samples
DataLakeFileAsyncClient renamedClient = client.rename(fileSystemName, destinationPath).block();
System.out.println("File client has been renamed");
Parameters:
destinationFileSystem - The file system of the destination within the account. null for the current file system.
destinationPath - Relative path from the file system to rename the file to, excludes the file system name. For example, if you want to move a file with fileSystem = "myfilesystem", path = "mydir/hello.txt" to another path in myfilesystem (ex: newdir/hi.txt) then set the destinationPath = "newdir/hi.txt"
Returns:
A Mono containing a DataLakeFileAsyncClient used to interact with the new file created.

public Mono<com.azure.core.http.rest.Response<DataLakeFileAsyncClient>> renameWithResponse(String destinationFileSystem, String destinationPath, DataLakeRequestConditions sourceRequestConditions, DataLakeRequestConditions destinationRequestConditions)
Code Samples
DataLakeRequestConditions sourceRequestConditions = new DataLakeRequestConditions()
.setLeaseId(leaseId);
DataLakeRequestConditions destinationRequestConditions = new DataLakeRequestConditions();
DataLakeFileAsyncClient newRenamedClient = client.renameWithResponse(fileSystemName, destinationPath,
sourceRequestConditions, destinationRequestConditions).block().getValue();
System.out.println("File client has been renamed");
Parameters:
destinationFileSystem - The file system of the destination within the account. null for the current file system.
destinationPath - Relative path from the file system to rename the file to, excludes the file system name. For example, if you want to move a file with fileSystem = "myfilesystem", path = "mydir/hello.txt" to another path in myfilesystem (ex: newdir/hi.txt) then set the destinationPath = "newdir/hi.txt"
sourceRequestConditions - DataLakeRequestConditions against the source.
destinationRequestConditions - DataLakeRequestConditions against the destination.
Returns:
A Mono containing a Response whose value contains a DataLakeFileAsyncClient used to interact with the file created.

public Flux<ByteBuffer> query(String expression)
For more information, see the Azure Docs
Code Samples
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
String expression = "SELECT * from BlobStorage";
client.query(expression).subscribe(piece -> {
try {
queryData.write(piece.array());
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
});
Parameters:
expression - The query expression.

public Mono<FileQueryAsyncResponse> queryWithResponse(FileQueryOptions queryOptions)
For more information, see the Azure Docs
Code Samples
String expression = "SELECT * from BlobStorage";
FileQueryJsonSerialization input = new FileQueryJsonSerialization()
.setRecordSeparator('\n');
FileQueryDelimitedSerialization output = new FileQueryDelimitedSerialization()
.setEscapeChar('\0')
.setColumnSeparator(',')
.setRecordSeparator('\n')
.setFieldQuote('\'')
.setHeadersPresent(true);
DataLakeRequestConditions requestConditions = new DataLakeRequestConditions().setLeaseId(leaseId);
Consumer<FileQueryError> errorConsumer = System.out::println;
Consumer<FileQueryProgress> progressConsumer = progress -> System.out.println("total file bytes read: "
+ progress.getBytesScanned());
FileQueryOptions queryOptions = new FileQueryOptions(expression)
.setInputSerialization(input)
.setOutputSerialization(output)
.setRequestConditions(requestConditions)
.setErrorConsumer(errorConsumer)
.setProgressConsumer(progressConsumer);
client.queryWithResponse(queryOptions)
.subscribe(response -> {
ByteArrayOutputStream queryData = new ByteArrayOutputStream();
response.getValue().subscribe(piece -> {
try {
queryData.write(piece.array());
} catch (IOException ex) {
throw new UncheckedIOException(ex);
}
});
});
Parameters:
queryOptions - The query options.

public Mono<Void> scheduleDeletion(FileScheduleDeletionOptions options)
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletion(options)
.subscribe(r -> System.out.println("File deletion has been scheduled"));
Parameters:
options - Schedule deletion parameters.

public Mono<com.azure.core.http.rest.Response<Void>> scheduleDeletionWithResponse(FileScheduleDeletionOptions options)
Code Samples
FileScheduleDeletionOptions options = new FileScheduleDeletionOptions(OffsetDateTime.now().plusDays(1));
client.scheduleDeletionWithResponse(options)
.subscribe(r -> System.out.println("File deletion has been scheduled"));
Parameters:
options - Schedule deletion parameters.

Copyright © 2021 Microsoft Corporation. All rights reserved.