diff --git a/sdk/storage/azure_storage_blob/CHANGELOG.md b/sdk/storage/azure_storage_blob/CHANGELOG.md index fb9c9f6d59..5345d2d5d1 100644 --- a/sdk/storage/azure_storage_blob/CHANGELOG.md +++ b/sdk/storage/azure_storage_blob/CHANGELOG.md @@ -15,6 +15,7 @@ ### Features Added - Added the `reqwest_rustls` feature to use `aws-lc-rs` as the default TLS provider. +- Added `From` implementations on `HttpRange` for standard Rust range types: `Range<u64>`, `RangeFrom<u64>`, `RangeInclusive<u64>`, `RangeTo<u64>`, `RangeToInclusive<u64>`, and their `usize` equivalents. This allows `(0..100u64).into()`, `(100u64..).into()`, etc. ### Breaking Changes @@ -22,6 +23,13 @@ - Removed the `reqwest_native_tls` feature in favor of `reqwest_rustls`. - Responses are no longer automatically decompressed. - Removed `download_into()` from existing clients. Callers can still use `download()` and collect the streamed `Bytes` into memory. +- Changed `BlobClientDownloadOptions.range` from `Option<Range<u64>>` to `Option<HttpRange>`. +- Changed `BlobClientDownloadOptions.if_match` and `if_none_match` from `Option<String>` to `Option<Etag>`. +- Changed `PageBlobClient::upload_pages()` and `clear_pages()` `range` parameter from `String` to `HttpRange`. +- Changed `PageBlobClient::upload_pages_from_url()` `range` and `source_range` parameters from `String` to `HttpRange`. +- Changed `PageBlobClientGetPageRangesOptions.range` from `Option<String>` to `Option<HttpRange>`. +- Changed `AppendBlobClientAppendBlockFromUrlOptions.source_range` from `Option<String>` to `Option<HttpRange>`. +- Changed `BlockBlobClientUploadBlobFromUrlOptions.source_range` from `Option<String>` to `Option<HttpRange>`. ### Other Changes diff --git a/sdk/storage/azure_storage_blob/README.md b/sdk/storage/azure_storage_blob/README.md index a9afbac085..ebb1ed20b9 100644 --- a/sdk/storage/azure_storage_blob/README.md +++ b/sdk/storage/azure_storage_blob/README.md @@ -6,7 +6,7 @@ Azure Blob storage is Microsoft's object storage solution for the cloud. 
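The new conversions in the CHANGELOG entry above can be illustrated with a self-contained sketch. The `HttpRange` below is a minimal stand-in for the crate's `azure_storage_blob::models::HttpRange` (only `Range` and `RangeFrom` are shown); the `From` impls mirror the diff, translating Rust's end-exclusive `start..end` into HTTP's inclusive `bytes=start-end` form:

```rust
use std::fmt;
use std::ops::{Range, RangeFrom};

// Minimal stand-in for azure_storage_blob::models::HttpRange.
pub struct HttpRange {
    offset: u64,
    length: Option<u64>, // None means an open-ended range.
}

impl fmt::Display for HttpRange {
    // Format as an HTTP `Range` header value; the end offset is inclusive.
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        match self.length {
            Some(len) => write!(f, "bytes={}-{}", self.offset, self.offset + len - 1),
            None => write!(f, "bytes={}-", self.offset),
        }
    }
}

// Rust's `start..end` is end-exclusive, so the length is `end - start`.
impl From<Range<u64>> for HttpRange {
    fn from(r: Range<u64>) -> Self {
        Self { offset: r.start, length: Some(r.end - r.start) }
    }
}

// `start..` becomes an open-ended range.
impl From<RangeFrom<u64>> for HttpRange {
    fn from(r: RangeFrom<u64>) -> Self {
        Self { offset: r.start, length: None }
    }
}

pub fn header(range: impl Into<HttpRange>) -> String {
    range.into().to_string()
}

fn main() {
    assert_eq!(header(0u64..100), "bytes=0-99");
    assert_eq!(header(100u64..), "bytes=100-");
}
```

Note the off-by-one being handled: `0..100` (100 bytes) must serialize as `bytes=0-99`, since HTTP range ends are inclusive.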
Blob st ## Getting started -**⚠️ Note: The `azure_storage_blob` crate is currently under active development and not all features may be implemented or work as intended. This crate is in beta and not suitable for Production environments. For any general feedback or usage issues, please open a GitHub issue [here](https://github.com/Azure/azure-sdk-for-rust/issues).** +**⚠️ Note: The `azure_storage_blob` crate is currently under active development and not all features may be implemented or work as intended. This crate is in beta and not suitable for Production environments. For any general feedback or usage issues, please [open a GitHub issue](https://github.com/Azure/azure-sdk-for-rust/issues).** ### Install the package @@ -18,7 +18,7 @@ cargo add azure_storage_blob ### Prerequisites -* You must have an [Azure subscription] and an [Azure storage account] to use this package. +- You must have an [Azure subscription] and an [Azure storage account] to use this package. ### Create a storage account @@ -48,10 +48,10 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> { let credential = DeveloperToolsCredential::new(None)?; let blob_client = BlobClient::new( "https://<storage_account_name>.blob.core.windows.net/", // Endpoint - "container_name", // Container Name - "blob_name", // Blob Name - Some(credential), // Credential - Some(BlobClientOptions::default()), // BlobClient Options + "<container_name>", // Container Name + "<blob_name>", // Blob Name + Some(credential), // Credential + Some(BlobClientOptions::default()), // BlobClient Options )?; Ok(()) } ``` @@ -65,14 +65,16 @@ You may need to specify RBAC roles to access Blob Storage via Microsoft Entra ID You can find executable examples for all major SDK functions in: -* [blob_hello_world.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_hello_world.rs) - Getting started: create a container, upload and download a blob -* 
[blob_container_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_container_client.rs) - Container-level operations: metadata, list blobs with continuation, access policies -* [blob_service_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_service_client.rs) - Service-level operations: list containers, service properties, statistics -* [block_blob_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/block_blob_client.rs) - Block blob operations: staged block upload, copy from URL -* [append_blob_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/append_blob_client.rs) - Append blob operations: create, append blocks, seal -* [page_blob_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/page_blob_client.rs) - Page blob operations: create, upload/clear pages, list page ranges, resize -* [blob_storage_logging.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_storage_logging.rs) - Logging and OpenTelemetry distributed tracing -* [storage_error.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/storage_error.rs) - Structured error handling with `StorageError` +- [blob_hello_world.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_hello_world.rs) - Getting started: create a container, upload and download a blob +- [blob_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_client.rs) - Blob-level operations: exists, metadata, index tags, access tier +- 
[blob_container_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_container_client.rs) - Container-level operations: metadata, list blobs with continuation, access policies +- [blob_service_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_service_client.rs) - Service-level operations: list containers, service properties, statistics +- [block_blob_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/block_blob_client.rs) - Block blob operations: staged block upload, copy from URL +- [append_blob_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/append_blob_client.rs) - Append blob operations: create, append blocks, seal +- [page_blob_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/page_blob_client.rs) - Page blob operations: create, upload/clear pages, list page ranges, resize +- [blob_storage_upload_file.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_storage_upload_file.rs) - Upload a local file with streaming support for large files +- [blob_storage_logging.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/blob_storage_logging.rs) - Logging and OpenTelemetry distributed tracing +- [storage_error.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_blob/examples/storage_error.rs) - Structured error handling with `StorageError` ### Upload a blob @@ -86,8 +88,8 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> { let credential = DeveloperToolsCredential::new(None)?; let blob_client = BlobClient::new( "https://<storage_account_name>.blob.core.windows.net/", - "container_name", - "blob_name", + "<container_name>", + "<blob_name>", Some(credential), Some(BlobClientOptions::default()), )?; @@ -114,12 +116,14 @@ async 
fn main() -> Result<(), Box<dyn std::error::Error>> { let credential = DeveloperToolsCredential::new(None)?; let blob_client = BlobClient::new( "https://<storage_account_name>.blob.core.windows.net/", // Endpoint - "container_name", // Container Name - "blob_name", // Blob Name - Some(credential), // Credential - Some(BlobClientOptions::default()), // BlobClient Options + "<container_name>", // Container Name + "<blob_name>", // Blob Name + Some(credential), // Credential + Some(BlobClientOptions::default()), // BlobClient Options )?; - let blob_properties = blob_client.get_properties(None).await?; + let response = blob_client.download(None).await?; + let data = String::from_utf8(response.body.collect().await?.into())?; + println!("Downloaded: {data}"); Ok(()) } ``` @@ -131,7 +135,7 @@ async fn main() -> Result<(), Box<dyn std::error::Error>> { By default, all storage clients create an HTTP transport with automatic decompression disabled, which is required for partitioned (multi-part) downloads to work correctly. If you set a custom transport in client options (e.g., a `reqwest::Client` with gzip enabled) without disabling automatic -decompression, partitioned downloads via [`BlobClient::download`](https://docs.rs/azure_storage_blob/latest/azure_storage_blob/clients/struct.BlobClient.html#method.download). +decompression, partitioned downloads via [`BlobClient::download`](https://docs.rs/azure_storage_blob/latest/azure_storage_blob/clients/struct.BlobClient.html#method.download) may not work correctly. If you need to provide a custom transport, disable automatic decompression to be consistent with default SDK behavior. 
## Next Steps @@ -154,7 +158,7 @@ This project has adopted the [Microsoft Open Source Code of Conduct](https://ope [Azure Portal]: https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-portal [Azure PowerShell]: https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-powershell [Azure CLI]: https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-cli -[cargo]: https://dev-doc.rust-lang.org/stable/cargo/commands/cargo.html +[cargo]: https://doc.rust-lang.org/cargo/ [Azure Identity]: https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/identity/azure_identity [API reference documentation]: https://docs.rs/crate/azure_storage_blob/latest [Package (crates.io)]: https://crates.io/crates/azure_storage_blob diff --git a/sdk/storage/azure_storage_blob/assets.json b/sdk/storage/azure_storage_blob/assets.json index 448e39fbd0..0143a32fb4 100644 --- a/sdk/storage/azure_storage_blob/assets.json +++ b/sdk/storage/azure_storage_blob/assets.json @@ -1,6 +1,6 @@ { "AssetsRepo": "Azure/azure-sdk-assets", "AssetsRepoPrefixPath": "rust", - "Tag": "rust/azure_storage_blob_3bcbe9fe6f", + "Tag": "rust/azure_storage_blob_a6b159d0dc", "TagPrefix": "rust/azure_storage_blob" } \ No newline at end of file diff --git a/sdk/storage/azure_storage_blob/examples/README.md b/sdk/storage/azure_storage_blob/examples/README.md index a1a15a7d9e..7c72eca071 100644 --- a/sdk/storage/azure_storage_blob/examples/README.md +++ b/sdk/storage/azure_storage_blob/examples/README.md @@ -14,6 +14,7 @@ This directory contains a set of examples for the use of the Blob Storage client | `append_blob_client.rs` | Append blob operations: create, append blocks, seal | | `page_blob_client.rs` | Page blob operations: create, upload/clear pages, list page ranges, resize | | `blob_storage_logging.rs` | Logging and OpenTelemetry distributed tracing | +| `blob_storage_upload_file.rs` | Upload a local file 
with streaming support for large files | | `storage_error.rs` | Structured error handling with `StorageError` | ## Setup diff --git a/sdk/storage/azure_storage_blob/examples/page_blob_client.rs b/sdk/storage/azure_storage_blob/examples/page_blob_client.rs index 80f3c4eb22..7ff7e343df 100644 --- a/sdk/storage/azure_storage_blob/examples/page_blob_client.rs +++ b/sdk/storage/azure_storage_blob/examples/page_blob_client.rs @@ -64,7 +64,7 @@ async fn main() -> Result<(), Box> { // Write 512 bytes of data to bytes 0-511. let page_data = vec![b'A'; 512]; - let range = HttpRange::new(0, 512).to_string(); + let range = HttpRange::new(0, 512); page_blob_client .upload_pages(RequestContent::from(page_data), 512, range, None) .await?; @@ -80,7 +80,7 @@ async fn main() -> Result<(), Box> { // Clear the page range (zeroes out those bytes). page_blob_client - .clear_pages(HttpRange::new(0, 512).to_string(), None) + .clear_pages(HttpRange::new(0, 512), None) .await?; println!("Cleared page range 0-511"); diff --git a/sdk/storage/azure_storage_blob/src/clients/blob_client.rs b/sdk/storage/azure_storage_blob/src/clients/blob_client.rs index 2b24178c76..1166f3048f 100644 --- a/sdk/storage/azure_storage_blob/src/clients/blob_client.rs +++ b/sdk/storage/azure_storage_blob/src/clients/blob_client.rs @@ -7,8 +7,8 @@ use crate::{ generated::clients::BlobClient as GeneratedBlobClient, generated::models::BlobClientDownloadInternalOptions, models::{ - http_ranges::IntoRangeHeader, BlobClientDownloadOptions, BlobClientDownloadResult, - BlobClientUploadOptions, BlobClientUploadResult, StorageErrorCode, + BlobClientDownloadOptions, BlobClientDownloadResult, BlobClientUploadOptions, + BlobClientUploadResult, HttpRange, StorageErrorCode, }, partitioned_transfer::{self, PartitionedDownloadBehavior}, AppendBlobClient, BlockBlobClient, PageBlobClient, @@ -19,8 +19,8 @@ use azure_core::{ error::ErrorKind, http::{ policies::{auth::BearerTokenAuthorizationPolicy, Policy}, - AsyncRawResponse, 
ClientMethodOptions, NoFormat, Pipeline, RequestContent, StatusCode, Url, - UrlExt, + AsyncRawResponse, ClientMethodOptions, Etag, NoFormat, Pipeline, RequestContent, + StatusCode, Url, UrlExt, }, tracing, Bytes, Result, }; @@ -309,10 +309,10 @@ impl PartitionedDownloadBehavior for BlobClientDownloadBehavior<'_> { async fn transfer_range( &self, range: Option>, - etag_lock: Option, + etag_lock: Option, ) -> Result { let mut opt = self.options.clone(); - opt.range = range.map(|r| r.as_range_header()); + opt.range = range.map(HttpRange::from); if let Some(etag) = etag_lock { opt.if_match = Some(etag); opt.if_none_match = None; diff --git a/sdk/storage/azure_storage_blob/src/generated/clients/append_blob_client.rs b/sdk/storage/azure_storage_blob/src/generated/clients/append_blob_client.rs index 980ec22110..5d936e4bb2 100644 --- a/sdk/storage/azure_storage_blob/src/generated/clients/append_blob_client.rs +++ b/sdk/storage/azure_storage_blob/src/generated/clients/append_blob_client.rs @@ -342,7 +342,7 @@ impl AppendBlobClient { ); } if let Some(source_range) = options.source_range.as_ref() { - request.insert_header("x-ms-source-range", source_range); + request.insert_header("x-ms-source-range", source_range.to_string()); } request.insert_header("x-ms-version", &self.version); let rsp = self diff --git a/sdk/storage/azure_storage_blob/src/generated/clients/blob_client.rs b/sdk/storage/azure_storage_blob/src/generated/clients/blob_client.rs index fc399bb0fe..d76a60928b 100644 --- a/sdk/storage/azure_storage_blob/src/generated/clients/blob_client.rs +++ b/sdk/storage/azure_storage_blob/src/generated/clients/blob_client.rs @@ -652,20 +652,20 @@ impl BlobClient { query_builder.build(); let mut request = Request::new(url, Method::Get); request.insert_header("accept", "application/octet-stream"); - if let Some(if_match) = options.if_match.as_ref() { - request.insert_header("if-match", if_match); + if let Some(if_match) = options.if_match { + request.insert_header("if-match", 
if_match.to_string()); } if let Some(if_modified_since) = options.if_modified_since { request.insert_header("if-modified-since", to_rfc7231(&if_modified_since)); } - if let Some(if_none_match) = options.if_none_match.as_ref() { - request.insert_header("if-none-match", if_none_match); + if let Some(if_none_match) = options.if_none_match { + request.insert_header("if-none-match", if_none_match.to_string()); } if let Some(if_unmodified_since) = options.if_unmodified_since { request.insert_header("if-unmodified-since", to_rfc7231(&if_unmodified_since)); } if let Some(range) = options.range.as_ref() { - request.insert_header("range", range); + request.insert_header("range", range.to_string()); } if let Some(encryption_algorithm) = options.encryption_algorithm.as_ref() { request.insert_header( diff --git a/sdk/storage/azure_storage_blob/src/generated/clients/block_blob_client.rs b/sdk/storage/azure_storage_blob/src/generated/clients/block_blob_client.rs index 41a6f2e49c..4f46d42793 100644 --- a/sdk/storage/azure_storage_blob/src/generated/clients/block_blob_client.rs +++ b/sdk/storage/azure_storage_blob/src/generated/clients/block_blob_client.rs @@ -551,7 +551,7 @@ impl BlockBlobClient { ); } if let Some(source_range) = options.source_range.as_ref() { - request.insert_header("x-ms-source-range", source_range); + request.insert_header("x-ms-source-range", source_range.to_string()); } request.insert_header("x-ms-version", &self.version); let rsp = self diff --git a/sdk/storage/azure_storage_blob/src/generated/clients/page_blob_client.rs b/sdk/storage/azure_storage_blob/src/generated/clients/page_blob_client.rs index 1cc5f855a4..34afe12625 100644 --- a/sdk/storage/azure_storage_blob/src/generated/clients/page_blob_client.rs +++ b/sdk/storage/azure_storage_blob/src/generated/clients/page_blob_client.rs @@ -3,13 +3,17 @@ // // Code generated by Microsoft (R) Rust Code Generator. DO NOT EDIT. 
-use crate::generated::models::{ - PageBlobClientClearPagesOptions, PageBlobClientClearPagesResult, PageBlobClientCreateOptions, - PageBlobClientCreateResult, PageBlobClientGetPageRangesOptions, PageBlobClientResizeOptions, - PageBlobClientResizeResult, PageBlobClientSetSequenceNumberOptions, - PageBlobClientSetSequenceNumberResult, PageBlobClientUploadPagesFromUrlOptions, - PageBlobClientUploadPagesFromUrlResult, PageBlobClientUploadPagesOptions, - PageBlobClientUploadPagesResult, PageList, SequenceNumberActionType, +use crate::{ + generated::models::{ + PageBlobClientClearPagesOptions, PageBlobClientClearPagesResult, + PageBlobClientCreateOptions, PageBlobClientCreateResult, + PageBlobClientGetPageRangesOptions, PageBlobClientResizeOptions, + PageBlobClientResizeResult, PageBlobClientSetSequenceNumberOptions, + PageBlobClientSetSequenceNumberResult, PageBlobClientUploadPagesFromUrlOptions, + PageBlobClientUploadPagesFromUrlResult, PageBlobClientUploadPagesOptions, + PageBlobClientUploadPagesResult, PageList, SequenceNumberActionType, + }, + models::HttpRange, }; use azure_core::{ base64, @@ -87,7 +91,7 @@ impl PageBlobClient { #[tracing::function("Storage.Blob.PageBlobClient.clearPages")] pub async fn clear_pages( &self, - range: String, + range: HttpRange, options: Option>, ) -> Result> { let options = options.unwrap_or_default(); @@ -113,7 +117,7 @@ impl PageBlobClient { if let Some(if_unmodified_since) = options.if_unmodified_since { request.insert_header("if-unmodified-since", to_rfc7231(&if_unmodified_since)); } - request.insert_header("range", range); + request.insert_header("range", range.to_string()); if let Some(encryption_algorithm) = options.encryption_algorithm.as_ref() { request.insert_header( "x-ms-encryption-algorithm", @@ -407,7 +411,7 @@ impl PageBlobClient { request.insert_header("if-unmodified-since", to_rfc7231(&if_unmodified_since)); } if let Some(range) = options.range.as_ref() { - request.insert_header("range", range); + 
request.insert_header("range", range.to_string()); } if let Some(if_tags) = options.if_tags.as_ref() { request.insert_header("x-ms-if-tags", if_tags); @@ -686,7 +690,7 @@ impl PageBlobClient { &self, body: RequestContent, content_length: u64, - range: String, + range: HttpRange, options: Option>, ) -> Result> { let options = options.unwrap_or_default(); @@ -716,7 +720,7 @@ impl PageBlobClient { if let Some(if_unmodified_since) = options.if_unmodified_since { request.insert_header("if-unmodified-since", to_rfc7231(&if_unmodified_since)); } - request.insert_header("range", range); + request.insert_header("range", range.to_string()); if let Some(transactional_content_crc64) = options.transactional_content_crc64 { request.insert_header( "x-ms-content-crc64", @@ -797,11 +801,9 @@ impl PageBlobClient { /// # Arguments /// /// * `source_url` - Specify a URL to the copy source. - /// * `source_range` - Bytes of source data in the specified range. The length of this range should match the ContentLength - /// header and x-ms-range/Range destination range header. + /// * `source_range` - Bytes of source data in the specified range. /// * `content_length` - The length of the request. - /// * `range` - Bytes of source data in the specified range. The length of this range should match the ContentLength header - /// and x-ms-range/Range destination range header. + /// * `range` - Bytes of data in the specified range. /// * `options` - Optional parameters for the request. 
/// /// ## Response Headers @@ -843,9 +845,9 @@ impl PageBlobClient { pub async fn upload_pages_from_url( &self, source_url: String, - source_range: String, + source_range: HttpRange, content_length: u64, - range: String, + range: HttpRange, options: Option>, ) -> Result> { let options = options.unwrap_or_default(); @@ -871,6 +873,7 @@ impl PageBlobClient { if let Some(if_unmodified_since) = options.if_unmodified_since { request.insert_header("if-unmodified-since", to_rfc7231(&if_unmodified_since)); } + request.insert_header("range", range.to_string()); request.insert_header("x-ms-copy-source", source_url); if let Some(copy_source_authorization) = options.copy_source_authorization.as_ref() { request.insert_header("x-ms-copy-source-authorization", copy_source_authorization); @@ -920,7 +923,6 @@ impl PageBlobClient { request.insert_header("x-ms-lease-id", lease_id); } request.insert_header("x-ms-page-write", "update"); - request.insert_header("x-ms-range", range); if let Some(source_content_crc64) = options.source_content_crc64 { request.insert_header( "x-ms-source-content-crc64", @@ -969,7 +971,7 @@ impl PageBlobClient { to_rfc7231(&source_if_unmodified_since), ); } - request.insert_header("x-ms-source-range", source_range); + request.insert_header("x-ms-source-range", source_range.to_string()); request.insert_header("x-ms-version", &self.version); let rsp = self .pipeline diff --git a/sdk/storage/azure_storage_blob/src/generated/models/method_options.rs b/sdk/storage/azure_storage_blob/src/generated/models/method_options.rs index 5555dd95f3..ac5b8b4873 100644 --- a/sdk/storage/azure_storage_blob/src/generated/models/method_options.rs +++ b/sdk/storage/azure_storage_blob/src/generated/models/method_options.rs @@ -9,6 +9,7 @@ use super::{ ListBlobsIncludeItem, ListContainersIncludeType, PremiumPageBlobAccessTier, PublicAccessType, RehydratePriority, }; +use crate::models::HttpRange; use azure_core::{ fmt::SafeDebug, http::{pager::PagerOptions, ClientMethodOptions, 
Etag}, @@ -102,7 +103,7 @@ pub struct AppendBlobClientAppendBlockFromUrlOptions<'a> { pub source_if_unmodified_since: Option<OffsetDateTime>, /// Bytes of source data in the specified range. - pub source_range: Option<String>, + pub source_range: Option<HttpRange>, /// The timeout parameter is expressed in seconds. For more information, see [Setting Timeouts for Blob Service Operations.](https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/setting-timeouts-for-blob-service-operations) pub timeout: Option<i32>, @@ -510,19 +511,19 @@ pub struct BlobClientDownloadInternalOptions<'a> { /// with a client token, this header should be specified using the SHA256 hash of the encryption key. pub encryption_key_sha256: Option<String>, - /// The request should only proceed if an entity matches this string. - pub if_match: Option<String>, + /// A condition that must be met in order for the request to be processed. + pub if_match: Option<Etag>, - /// The request should only proceed if the entity was modified after this time. + /// A date-time value. A request is made under the condition that the resource has been modified since the specified date-time. pub if_modified_since: Option<OffsetDateTime>, - /// The request should only proceed if no entity matches this string. - pub if_none_match: Option<String>, + /// A condition that must be met in order for the request to be processed. + pub if_none_match: Option<Etag>, /// Specify a SQL where clause on blob tags to operate only on blobs with a matching value. pub if_tags: Option<String>, - /// The request should only proceed if the entity was not modified after this time. + /// A date-time value. A request is made under the condition that the resource has not been modified since the specified date-time. pub if_unmodified_since: Option<OffsetDateTime>, /// If specified, the operation only succeeds if the resource's lease is active and matches this ID. @@ -532,7 +533,7 @@ pub struct BlobClientDownloadInternalOptions<'a> { pub method_options: ClientMethodOptions<'a>, /// Return only the bytes of the blob in the specified range. - pub range: Option<String>, + pub range: Option<HttpRange>, /// Optional. When this header is set to true and specified together with the Range header, the service returns the CRC64 /// hash for the range, as long as the range is less than or equal to 4 MB in size. @@ -1476,7 +1477,7 @@ pub struct BlockBlobClientStageBlockFromUrlOptions<'a> { pub source_if_unmodified_since: Option<OffsetDateTime>, /// Bytes of source data in the specified range. - pub source_range: Option<String>, + pub source_range: Option<HttpRange>, /// The timeout parameter is expressed in seconds. For more information, see [Setting Timeouts for Blob Service Operations.](https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/setting-timeouts-for-blob-service-operations) pub timeout: Option<i32>, @@ -1926,7 +1927,7 @@ pub struct PageBlobClientGetPageRangesOptions<'a> { pub method_options: ClientMethodOptions<'a>, /// Return only the bytes of the blob in the specified range. - pub range: Option<String>, + pub range: Option<HttpRange>, /// The snapshot parameter is an opaque DateTime value that, when present, specifies the blob snapshot to retrieve. 
For more /// information on working with blob snapshots, see [Creating a Snapshot of a Blob.](https://docs.microsoft.com/en-us/rest/api/storageservices/fileservices/creating-a-snapshot-of-a-blob) diff --git a/sdk/storage/azure_storage_blob/src/models/http_ranges.rs b/sdk/storage/azure_storage_blob/src/models/http_ranges.rs index 7411512d02..5631446b26 100644 --- a/sdk/storage/azure_storage_blob/src/models/http_ranges.rs +++ b/sdk/storage/azure_storage_blob/src/models/http_ranges.rs @@ -4,7 +4,7 @@ use azure_core::error::{Error, ErrorKind, ResultExt}; use azure_core::http::headers::{Header, HeaderName, HeaderValue}; use std::fmt; -use std::ops::{Range, RangeFrom}; +use std::ops::{Range, RangeFrom, RangeInclusive, RangeTo, RangeToInclusive}; use std::str::FromStr; const PREFIX: &str = "bytes "; @@ -13,26 +13,6 @@ const CONTENT_RANGE_ID: HeaderName = HeaderName::from_static("content-range"); type Result = azure_core::Result; -/// Trait to convert a value into an HTTP Range header. -/// Implemented on `Range<>` and `RangeFrom<>`. -/// Note that `Range<>` uses an exclusive end value while -/// HTTP uses an inclusive end value. -pub(crate) trait IntoRangeHeader { - fn as_range_header(&self) -> String; -} - -impl IntoRangeHeader for Range { - fn as_range_header(&self) -> String { - format!("bytes={}-{}", self.start, self.end - 1) - } -} - -impl IntoRangeHeader for RangeFrom { - fn as_range_header(&self) -> String { - format!("bytes={}-", self.start) - } -} - /// Represents the `Content-Range` HTTP response header. 
#[derive(Debug, Copy, Clone, PartialEq, Eq)] pub(crate) struct ContentRange { @@ -197,17 +177,38 @@ mod tests { /// /// # Examples /// +/// Range of 512 bytes starting at offset 0: +/// /// ``` /// use azure_storage_blob::models::HttpRange; /// -/// // Range of 512 bytes starting at offset 0: bytes=0-511 /// let range = HttpRange::new(0, 512); /// assert_eq!(range.to_string(), "bytes=0-511"); +/// ``` +/// +/// Open-ended range starting at offset 255: +/// +/// ``` +/// use azure_storage_blob::models::HttpRange; /// -/// // Open-ended range starting at offset 255: bytes=255- /// let range = HttpRange::from_offset(255); /// assert_eq!(range.to_string(), "bytes=255-"); /// ``` +/// +/// Convert from standard Rust range types: +/// +/// ``` +/// use azure_storage_blob::models::HttpRange; +/// +/// let range: HttpRange = (0u64..100).into(); +/// assert_eq!(range.to_string(), "bytes=0-99"); +/// +/// let range: HttpRange = (100u64..).into(); +/// assert_eq!(range.to_string(), "bytes=100-"); +/// +/// let range: HttpRange = (0u64..=99).into(); +/// assert_eq!(range.to_string(), "bytes=0-99"); +/// ``` #[derive(Debug, Clone, PartialEq, Eq)] pub struct HttpRange { /// The starting byte offset. 
@@ -243,6 +244,14 @@ impl HttpRange { length: None, } } + + pub(crate) fn offset(&self) -> u64 { + self.offset + } + + pub(crate) fn length(&self) -> Option<u64> { + self.length + } } impl fmt::Display for HttpRange { @@ -265,6 +274,73 @@ impl From<HttpRange> for HeaderValue { } } + +// From<Range<u64>> impls + +impl From<Range<u64>> for HttpRange { + fn from(range: Range<u64>) -> Self { + Self::new(range.start, range.end - range.start) + } +} + +impl From<RangeFrom<u64>> for HttpRange { + fn from(range: RangeFrom<u64>) -> Self { + Self::from_offset(range.start) + } +} + +impl From<RangeInclusive<u64>> for HttpRange { + fn from(range: RangeInclusive<u64>) -> Self { + Self::new(*range.start(), range.end() - range.start() + 1) + } +} + +impl From<RangeTo<u64>> for HttpRange { + fn from(range: RangeTo<u64>) -> Self { + Self::new(0, range.end) + } +} + +impl From<RangeToInclusive<u64>> for HttpRange { + fn from(range: RangeToInclusive<u64>) -> Self { + Self::new(0, range.end + 1) + } +} + +// From<Range<usize>> impls + +impl From<Range<usize>> for HttpRange { + fn from(range: Range<usize>) -> Self { + Self::new(range.start as u64, (range.end - range.start) as u64) + } +} + +impl From<RangeFrom<usize>> for HttpRange { + fn from(range: RangeFrom<usize>) -> Self { + Self::from_offset(range.start as u64) + } +} + +impl From<RangeInclusive<usize>> for HttpRange { + fn from(range: RangeInclusive<usize>) -> Self { + Self::new( + *range.start() as u64, + (range.end() - range.start() + 1) as u64, + ) + } +} + +impl From<RangeTo<usize>> for HttpRange { + fn from(range: RangeTo<usize>) -> Self { + Self::new(0, range.end as u64) + } +} + +impl From<RangeToInclusive<usize>> for HttpRange { + fn from(range: RangeToInclusive<usize>) -> Self { + Self::new(0, (range.end + 1) as u64) + } +} + #[cfg(test)] mod http_range_tests { use super::*; @@ -320,4 +396,88 @@ let range = HttpRange::new(u64::MAX, u64::MAX); let _ = range.to_string(); } + + // From<Range<u64>> tests + + #[test] + fn from_range_u64() { + let range: HttpRange = (0u64..100).into(); + assert_eq!(range.to_string(), "bytes=0-99"); + } + + #[test] + fn from_range_from_u64() { + let range: HttpRange = (100u64..).into(); + assert_eq!(range.to_string(), "bytes=100-"); + } + + #[test] + 
fn from_range_inclusive_u64() { + let range: HttpRange = (0u64..=99).into(); + assert_eq!(range.to_string(), "bytes=0-99"); + } + + #[test] + fn from_range_to_u64() { + let range: HttpRange = (..100u64).into(); + assert_eq!(range.to_string(), "bytes=0-99"); + } + + #[test] + fn from_range_to_inclusive_u64() { + let range: HttpRange = (..=99u64).into(); + assert_eq!(range.to_string(), "bytes=0-99"); + } + + // From> tests + + #[test] + fn from_range_usize() { + let range: HttpRange = (0usize..100).into(); + assert_eq!(range.to_string(), "bytes=0-99"); + } + + #[test] + fn from_range_from_usize() { + let range: HttpRange = (100usize..).into(); + assert_eq!(range.to_string(), "bytes=100-"); + } + + #[test] + fn from_range_inclusive_usize() { + let range: HttpRange = (0usize..=99).into(); + assert_eq!(range.to_string(), "bytes=0-99"); + } + + #[test] + fn from_range_to_usize() { + let range: HttpRange = (..100usize).into(); + assert_eq!(range.to_string(), "bytes=0-99"); + } + + #[test] + fn from_range_to_inclusive_usize() { + let range: HttpRange = (..=99usize).into(); + assert_eq!(range.to_string(), "bytes=0-99"); + } + + #[test] + fn from_range_nonzero_offset() { + // Verify no off-by-one when start != 0 + let exclusive: HttpRange = (50u64..150).into(); + let inclusive: HttpRange = (50u64..=149).into(); + assert_eq!(exclusive.to_string(), "bytes=50-149"); + assert_eq!(inclusive.to_string(), "bytes=50-149"); + assert_eq!(exclusive, inclusive); + } + + #[test] + fn from_range_single_byte() { + // A 1-byte range must not produce an off-by-one + let exclusive: HttpRange = (42u64..43).into(); + let inclusive: HttpRange = (42u64..=42).into(); + assert_eq!(exclusive.to_string(), "bytes=42-42"); + assert_eq!(inclusive.to_string(), "bytes=42-42"); + assert_eq!(exclusive, inclusive); + } } diff --git a/sdk/storage/azure_storage_blob/src/models/method_options.rs b/sdk/storage/azure_storage_blob/src/models/method_options.rs index fd449f2982..e4d78674d0 100644 --- 
a/sdk/storage/azure_storage_blob/src/models/method_options.rs +++ b/sdk/storage/azure_storage_blob/src/models/method_options.rs @@ -1,7 +1,7 @@ // Copyright (c) Microsoft Corporation. All rights reserved. // Licensed under the MIT License. -use std::{collections::HashMap, num::NonZero, ops::Range}; +use std::{collections::HashMap, num::NonZero}; use azure_core::{ fmt::SafeDebug, @@ -9,7 +9,7 @@ use azure_core::{ fmt::SafeDebug, }; use time::OffsetDateTime; -use crate::models::{AccessTier, EncryptionAlgorithmType, ImmutabilityPolicyMode}; +use crate::models::{AccessTier, EncryptionAlgorithmType, HttpRange, ImmutabilityPolicyMode}; /// Options to be passed to `BlobClient::download()` #[derive(Clone, Default, SafeDebug)] @@ -28,13 +28,13 @@ pub struct BlobClientDownloadOptions<'a> { pub encryption_key_sha256: Option<String>, /// The request should only proceed if an entity matches this string. - pub if_match: Option<String>, + pub if_match: Option<Etag>, /// The request should only proceed if the entity was modified after this time. pub if_modified_since: Option<OffsetDateTime>, /// The request should only proceed if no entity matches this string. - pub if_none_match: Option<String>, + pub if_none_match: Option<Etag>, /// Specify a SQL where clause on blob tags to operate only on blobs with a matching value. pub if_tags: Option<String>, @@ -58,12 +58,11 @@ pub struct BlobClientDownloadOptions<'a> { /// Optional range of the blob to download. /// - /// The range is specified in byte offsets and uses standard Rust range semantics: - /// `start` is the first byte offset to include, and `end` is a byte offset that is - /// *not* included in the download (i.e. `start..end` is end-exclusive). + /// Accepts an [`HttpRange`] value. You can convert from standard Rust range types + /// using `.into()`, for example `(0..100u64).into()` or `(100u64..).into()`. /// /// When set to `None`, the entire blob will be downloaded. - pub range: Option<Range<usize>>, + pub range: Option<HttpRange>, /// Optional.
When this header is set to true and specified together with the Range header, the service returns the CRC64 /// hash for the range, as long as the range is less than or equal to 4 MB in size. diff --git a/sdk/storage/azure_storage_blob/src/partitioned_transfer/download.rs b/sdk/storage/azure_storage_blob/src/partitioned_transfer/download.rs index 4d5818d463..b5b1859293 100644 --- a/sdk/storage/azure_storage_blob/src/partitioned_transfer/download.rs +++ b/sdk/storage/azure_storage_blob/src/partitioned_transfer/download.rs @@ -11,11 +11,13 @@ use std::{ }, }; +use crate::models::HttpRange; + use async_trait::async_trait; use azure_core::{ async_runtime::{get_async_runtime, SpawnedTask}, error::ErrorKind, - http::{AsyncRawResponse, StatusCode}, + http::{AsyncRawResponse, Etag, StatusCode}, Error, }; use bytes::Bytes; @@ -34,7 +36,7 @@ pub(crate) trait PartitionedDownloadBehavior { async fn transfer_range( &self, range: Option<Range<usize>>, - etag_lock: Option<String>, + etag_lock: Option<Etag>, ) -> AzureResult<AsyncRawResponse>; } @@ -49,7 +51,7 @@ pub(crate) trait PartitionedDownloadBehavior { /// correct set of additional ranges to download and queues them up. The returned `Stream` /// executes these downloads, maintaining limits for parallel downloads and buffer count.
pub(crate) async fn download( - range: Option<Range<usize>>, + range: Option<HttpRange>, parallel: NonZero<usize>, partition_size: NonZero<usize>, client: Arc<Behavior>, @@ -57,6 +59,14 @@ where Behavior: PartitionedDownloadBehavior + Send + Sync + 'static, { + let range: Option<Range<usize>> = range.map(|hr| { + let start = hr.offset() as usize; + let end = match hr.length() { + Some(len) => start + len as usize, + None => usize::MAX, + }; + start..end + }); let parallel = parallel.get(); let max_buffers = parallel * 2; let partition_size = partition_size.get(); @@ -66,7 +76,7 @@ where let status = initial_response.status(); let headers = initial_response.headers().clone(); - let etag_lock = headers.get_optional_str(&"etag".into()).map(str::to_string); + let etag_lock = headers.get_optional_str(&"etag".into()).map(Etag::from); let mut remaining_ranges = stats .map(|s| s.remaining_download_ranges) @@ -237,7 +247,7 @@ fn start_initial_download_task( fn start_download_task( client: Arc<Behavior>, range: Range<usize>, - etag_lock: Option<String>, + etag_lock: Option<Etag>, mut sender: UnboundedSender>, active_tasks_counter: Arc, chunk_idx: usize, @@ -342,7 +352,7 @@ mod tests { use azure_core::{ http::{ headers::{Header, Headers}, - StatusCode, + Etag, StatusCode, }, stream::BytesStream, }; @@ -361,14 +371,14 @@ mod tests { #[derive(Clone, Debug)] enum MockPartitionedDownloadBehaviorInvocation { - TransferRange(Option<Range<usize>>, Option<String>), + TransferRange(Option<Range<usize>>, Option<Etag>), } struct MockPartitionedDownloadBehavior { pub invocations: Mutex<Vec<MockPartitionedDownloadBehaviorInvocation>>, pub data: Bytes, pub delay_millis: Option>, - pub etag: Mutex<Option<String>>, + pub etag: Mutex<Option<Etag>>, } #[derive(Clone, Default)] @@ -377,7 +387,7 @@ mod tests { delay_millis_range: Option>, /// Sets the initial ETag to match against and return in responses.
- etag: Option<String>, + etag: Option<Etag>, } impl MockPartitionedDownloadBehavior { @@ -396,7 +406,7 @@ mod tests { async fn transfer_range( &self, requested_range: Option<Range<usize>>, - etag_lock: Option<String>, + etag_lock: Option<Etag>, ) -> AzureResult<AsyncRawResponse> { { self.invocations.lock().await.push( @@ -432,7 +442,7 @@ mod tests { } let mut headers = Headers::new(); if let Some(etag) = self.etag.lock().await.as_ref() { - headers.insert("etag", etag.clone()); + headers.insert("etag", etag.to_string()); } match (requested_range, self.data.len()) { (Some(range), data_len) => { @@ -555,7 +565,7 @@ mod tests { let mock = Arc::new(MockPartitionedDownloadBehavior::new(data.clone(), None)); let mut body = download( - args.download_range.map(|r| r.0..r.1), + args.download_range.map(|r| (r.0..r.1).into()), PARALLEL.try_into().unwrap(), args.partition_len.try_into().unwrap(), mock.clone(), @@ -629,7 +639,7 @@ mod tests { let mock = Arc::new(MockPartitionedDownloadBehavior::new(data.clone(), None)); let mut body = download( - args.download_range.map(|r| r.0..r.1), + args.download_range.map(|r| (r.0..r.1).into()), args.parallel.try_into().unwrap(), args.partition_len.try_into().unwrap(), mock.clone(), @@ -716,7 +726,7 @@ mod tests { #[tokio::test] async fn download_etag_lock() -> AzureResult<()> { - let configured_etag = Some("some_etag".to_string()); + let configured_etag = Some(Etag::from("some_etag")); let data_len: usize = 1024; let partition_len = NonZero::new(data_len / 4).unwrap(); let parallel = NonZero::new(2).unwrap(); @@ -758,8 +768,8 @@ mod tests { #[tokio::test] async fn download_fails_on_etag_update() -> AzureResult<()> { - let configured_etag_1 = Some("some_etag".to_string()); - let configured_etag_2 = Some("another_etag".to_string()); + let configured_etag_1 = Some(Etag::from("some_etag")); + let configured_etag_2 = Some(Etag::from("another_etag")); let data_len: usize = 2048; let total_partitions = 8; let partition_len = NonZero::new(data_len / total_partitions).unwrap(); diff --git
a/sdk/storage/azure_storage_blob/tests/blob_client.rs b/sdk/storage/azure_storage_blob/tests/blob_client.rs index 549138a9bd..903f72bbde 100644 --- a/sdk/storage/azure_storage_blob/tests/blob_client.rs +++ b/sdk/storage/azure_storage_blob/tests/blob_client.rs @@ -971,7 +971,7 @@ async fn test_managed_download(ctx: TestContext) -> Result<(), Box<dyn Error>> { .download(Some(BlobClientDownloadOptions { partition_size: Some(NonZero::new(partition_len).unwrap()), parallel: Some(NonZero::new(parallel).unwrap()), - range: download_range.map(|r| r.0..r.1), + range: download_range.map(|r| (r.0..r.1).into()), ..Default::default() })) .await? diff --git a/sdk/storage/azure_storage_blob/tests/blob_client_options.rs b/sdk/storage/azure_storage_blob/tests/blob_client_options.rs index efd597d30f..77c9f4a4f2 100644 --- a/sdk/storage/azure_storage_blob/tests/blob_client_options.rs +++ b/sdk/storage/azure_storage_blob/tests/blob_client_options.rs @@ -42,7 +42,7 @@ async fn test_ranged_download(ctx: TestContext) -> Result<(), Box<dyn Error>> { // Bounded Range Download (first 5 bytes: "hello") let response = blob_client .download(Some(BlobClientDownloadOptions { - range: Some(0..5), + range: Some((0..5usize).into()), ..Default::default() })) .await?; @@ -53,7 +53,7 @@ async fn test_ranged_download(ctx: TestContext) -> Result<(), Box<dyn Error>> { // Bounded Range Download (middle 6 bytes: " rusty") let response = blob_client .download(Some(BlobClientDownloadOptions { - range: Some(5..11), + range: Some((5..11usize).into()), ..Default::default() })) .await?; @@ -64,7 +64,7 @@ async fn test_ranged_download(ctx: TestContext) -> Result<(), Box<dyn Error>> { // Bounded Range Download (last 6 bytes: " world") let response = blob_client .download(Some(BlobClientDownloadOptions { - range: Some(11..17), + range: Some((11..17usize).into()), ..Default::default() })) .await?; diff --git a/sdk/storage/azure_storage_blob/tests/blob_conditional_headers.rs b/sdk/storage/azure_storage_blob/tests/blob_conditional_headers.rs index
fe837fed3e..d00db6bcbc 100644 --- a/sdk/storage/azure_storage_blob/tests/blob_conditional_headers.rs +++ b/sdk/storage/azure_storage_blob/tests/blob_conditional_headers.rs @@ -43,7 +43,7 @@ mod blob_client { create_test_blob(&blob_client, None, None).await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); // Read Operations - if_match Success + if_none_match 304 @@ -68,7 +68,7 @@ mod blob_client { // Download if_match Failure let err = blob_client .download(Some(BlobClientDownloadOptions { - if_match: Some(BAD_ETAG.to_string()), + if_match: Some(BAD_ETAG.to_string().into()), ..Default::default() })) .await; @@ -80,13 +80,13 @@ mod blob_client { // Get Properties blob_client .get_properties(Some(BlobClientGetPropertiesOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() })) .await?; let err = blob_client .get_properties(Some(BlobClientGetPropertiesOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() })) .await; @@ -98,13 +98,13 @@ mod blob_client { // Get Tags blob_client .get_tags(Some(BlobClientGetTagsOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() })) .await?; let err = blob_client .get_tags(Some(BlobClientGetTagsOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() })) .await; @@ -116,13 +116,13 @@ mod blob_client { // Create Snapshot blob_client .create_snapshot(Some(BlobClientCreateSnapshotOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() })) .await?; let err = blob_client .create_snapshot(Some(BlobClientCreateSnapshotOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() })) .await; @@ -152,21 +152,21 @@ mod blob_client { .set_metadata( &metadata, 
Some(BlobClientSetMetadataOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() }), ) .await?; // Set Metadata Changes the ETag - Refresh let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); // if_none_match Failure on Set Metadata let err = blob_client .set_metadata( &metadata, Some(BlobClientSetMetadataOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() }), ) @@ -190,14 +190,14 @@ mod blob_client { ); blob_client .set_properties(Some(BlobClientSetPropertiesOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), blob_content_type: Some("application/octet-stream".to_string()), ..Default::default() })) .await?; // Set Properties Changes the ETag - Refresh let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); // Set Tags - Does Not Change the ETag, so etag remains valid for repeated use let err = blob_client @@ -223,7 +223,7 @@ mod blob_client { "test".to_string(), )])))?, Some(BlobClientSetTagsOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() }), ) @@ -251,7 +251,7 @@ mod blob_client { .acquire_lease( -1, Some(BlobClientAcquireLeaseOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() }), ) @@ -278,7 +278,7 @@ mod blob_client { .renew_lease( lease_id_1.clone(), Some(BlobClientRenewLeaseOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() }), ) @@ -307,7 +307,7 @@ mod blob_client { lease_id_1, proposed_id.clone(), Some(BlobClientChangeLeaseOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() }), ) @@ -334,7 +334,7 @@ mod blob_client { .release_lease( lease_id_2, 
Some(BlobClientReleaseLeaseOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() }), ) @@ -356,7 +356,7 @@ mod blob_client { ); blob_client .break_lease(Some(BlobClientBreakLeaseOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() })) .await?; @@ -374,7 +374,7 @@ mod blob_client { ); blob_client .delete(Some(BlobClientDeleteOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), delete_snapshots: Some(DeleteSnapshotsOptionType::Include), ..Default::default() })) @@ -1095,7 +1095,7 @@ mod block_blob_client { // Upload Initial Blob create_test_blob(&blob_client, None, None).await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + Duration::from_secs(60); @@ -1119,7 +1119,7 @@ mod block_blob_client { .upload( RequestContent::from(b"new-content".to_vec()), Some(BlockBlobClientUploadOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() }), ) @@ -1175,7 +1175,7 @@ mod block_blob_client { .upload( RequestContent::from(b"updated".to_vec()), Some(BlockBlobClientUploadOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() }), ) @@ -1190,7 +1190,7 @@ mod block_blob_client { .await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + Duration::from_secs(60); @@ -1219,7 +1219,7 @@ mod block_blob_client { .commit_block_list( RequestContent::try_from(lookup.clone())?, Some(BlockBlobClientCommitBlockListOptions { - 
if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() }), ) @@ -1278,7 +1278,7 @@ mod block_blob_client { .commit_block_list( RequestContent::try_from(lookup)?, Some(BlockBlobClientCommitBlockListOptions { - if_match: Some(etag.into()), + if_match: Some(etag), ..Default::default() }), ) @@ -1342,7 +1342,7 @@ mod append_blob_client { // Create Initial Append Blob (No Conditions) append_blob_client.create(None).await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + Duration::from_secs(60); @@ -1363,7 +1363,7 @@ mod append_blob_client { // if_none_match Failure let err = append_blob_client .create(Some(AppendBlobClientCreateOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() })) .await; @@ -1407,14 +1407,14 @@ mod append_blob_client { // Create Success With if_match append_blob_client .create(Some(AppendBlobClientCreateOptions { - if_match: Some(etag.into()), + if_match: Some(etag), blob_tags_string: Some("env=test".to_string()), ..Default::default() })) .await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + Duration::from_secs(60); @@ -1444,7 +1444,7 @@ mod append_blob_client { chunk.clone(), 5u64, Some(AppendBlobClientAppendBlockOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() }), ) @@ -1504,14 +1504,14 @@ mod append_blob_client { chunk, 5u64, Some(AppendBlobClientAppendBlockOptions { - if_match: Some(etag.into()), + if_match: Some(etag), 
..Default::default() }), ) .await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + Duration::from_secs(60); @@ -1532,7 +1532,7 @@ mod append_blob_client { // if_none_match Failure let err = append_blob_client .seal(Some(AppendBlobClientSealOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() })) .await; @@ -1565,7 +1565,7 @@ mod append_blob_client { // Seal Success append_blob_client .seal(Some(AppendBlobClientSealOptions { - if_match: Some(etag.into()), + if_match: Some(etag), ..Default::default() })) .await?; @@ -1598,7 +1598,7 @@ mod page_blob_client { // Create Initial Page Blob page_blob_client.create(BLOB_SIZE, None).await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + Duration::from_secs(60); @@ -1622,7 +1622,7 @@ mod page_blob_client { .create( BLOB_SIZE, Some(PageBlobClientCreateOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() }), ) @@ -1678,7 +1678,7 @@ mod page_blob_client { .create( BLOB_SIZE, Some(PageBlobClientCreateOptions { - if_match: Some(etag.into()), + if_match: Some(etag), blob_tags_string: Some("env=test".to_string()), ..Default::default() }), @@ -1686,7 +1686,7 @@ mod page_blob_client { .await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + 
Duration::from_secs(60); @@ -1694,7 +1694,7 @@ mod page_blob_client { // Upload Pages - PageBlobClientUploadPagesOptions let page_data = RequestContent::from(vec![1u8; PAGE_SIZE]); - let range = HttpRange::new(0, PAGE_SIZE as u64).to_string(); + let range = HttpRange::new(0, PAGE_SIZE as u64); // if_match Failure let err = page_blob_client @@ -1719,7 +1719,7 @@ mod page_blob_client { PAGE_SIZE as u64, range.clone(), Some(PageBlobClientUploadPagesOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), ..Default::default() }), ) @@ -1783,14 +1783,14 @@ mod page_blob_client { PAGE_SIZE as u64, range.clone(), Some(PageBlobClientUploadPagesOptions { - if_match: Some(etag.into()), + if_match: Some(etag), ..Default::default() }), ) .await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); @@ -1843,14 +1843,14 @@ mod page_blob_client { .clear_pages( range, Some(PageBlobClientClearPagesOptions { - if_match: Some(etag.into()), + if_match: Some(etag), ..Default::default() }), ) .await?; let props = blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + Duration::from_secs(60); @@ -1893,7 +1893,7 @@ mod page_blob_client { // Get Page Ranges Success page_blob_client .get_page_ranges(Some(PageBlobClientGetPageRangesOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() })) .await?; @@ -1947,14 +1947,14 @@ mod page_blob_client { .resize( BLOB_SIZE * 2, Some(PageBlobClientResizeOptions { - if_match: Some(etag.clone().into()), + if_match: Some(etag.clone()), ..Default::default() }), ) .await?; let props = 
blob_client.get_properties(None).await?; - let etag = props.etag()?.unwrap().to_string(); + let etag = props.etag()?.unwrap(); let last_modified = props.last_modified()?.unwrap(); let before = last_modified - Duration::from_secs(60); let after = last_modified + Duration::from_secs(60); @@ -1981,7 +1981,7 @@ mod page_blob_client { .set_sequence_number( SequenceNumberActionType::Update, Some(PageBlobClientSetSequenceNumberOptions { - if_none_match: Some(etag.clone().into()), + if_none_match: Some(etag.clone()), blob_sequence_number: Some(1), ..Default::default() }), @@ -2041,7 +2041,7 @@ mod page_blob_client { .set_sequence_number( SequenceNumberActionType::Update, Some(PageBlobClientSetSequenceNumberOptions { - if_match: Some(etag.into()), + if_match: Some(etag), blob_sequence_number: Some(42), ..Default::default() }), diff --git a/sdk/storage/azure_storage_blob/tests/blob_cpk.rs b/sdk/storage/azure_storage_blob/tests/blob_cpk.rs index a07d85de8b..e55f90a24a 100644 --- a/sdk/storage/azure_storage_blob/tests/blob_cpk.rs +++ b/sdk/storage/azure_storage_blob/tests/blob_cpk.rs @@ -799,7 +799,7 @@ mod page_blob_client { .upload_pages( RequestContent::from(vec![b'P'; 512]), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { encryption_algorithm: Some(encryption_algorithm), encryption_key: Some(encryption_key), @@ -870,7 +870,7 @@ mod page_blob_client { .upload_pages( RequestContent::from(content.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { encryption_algorithm: Some(algo), encryption_key: Some(key.clone()), @@ -896,7 +896,7 @@ mod page_blob_client { .upload_pages( RequestContent::from(vec![b'B'; 512]), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { encryption_scope: Some(get_invalid_encryption_scope()), ..Default::default() @@ -908,7 +908,7 @@ mod page_blob_client { // Clear Pages 
with CPK page_blob .clear_pages( - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientClearPagesOptions { encryption_algorithm: Some(algo), encryption_key: Some(key.clone()), @@ -931,7 +931,7 @@ mod page_blob_client { // Invalid Scope Clear Pages let err = bad_scope_page_blob .clear_pages( - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientClearPagesOptions { encryption_scope: Some(get_invalid_encryption_scope()), ..Default::default() @@ -995,7 +995,7 @@ mod page_blob_client { .upload_pages( RequestContent::from(source_content.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { encryption_algorithm: Some(algo), encryption_key: Some(key.clone()), @@ -1024,9 +1024,9 @@ mod page_blob_client { dest_page_blob .upload_pages_from_url( source_blob.url().as_str().into(), - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesFromUrlOptions { encryption_algorithm: Some(algo), encryption_key: Some(key.clone()), @@ -1057,9 +1057,9 @@ mod page_blob_client { let err = source_mismatch_dest_page_blob .upload_pages_from_url( source_blob.url().as_str().into(), - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesFromUrlOptions { source_encryption_algorithm: Some(algo), source_encryption_key: Some(wrong_key.clone()), @@ -1079,7 +1079,7 @@ mod page_blob_client { .upload_pages( RequestContent::from(vec![b'T'; 512]), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), None, ) .await?; @@ -1102,9 +1102,9 @@ mod page_blob_client { let err = dest_mismatch_page_blob .upload_pages_from_url( plain_source_blob.url().as_str().into(), - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), 512, - HttpRange::new(0, 
512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesFromUrlOptions { encryption_algorithm: Some(algo), encryption_key: Some(wrong_key), @@ -1123,9 +1123,9 @@ mod page_blob_client { let err = bad_scope_dest_page_blob .upload_pages_from_url( plain_source_blob.url().as_str().into(), - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesFromUrlOptions { encryption_scope: Some(get_invalid_encryption_scope()), ..Default::default() @@ -1277,7 +1277,7 @@ mod partial_cpk_validation { .upload_pages( RequestContent::from(vec![b'P'; 512]), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { encryption_algorithm: Some(encryption_algorithm), encryption_key: Some(encryption_key), diff --git a/sdk/storage/azure_storage_blob/tests/page_blob_client.rs b/sdk/storage/azure_storage_blob/tests/page_blob_client.rs index 116871f1d1..2447b05703 100644 --- a/sdk/storage/azure_storage_blob/tests/page_blob_client.rs +++ b/sdk/storage/azure_storage_blob/tests/page_blob_client.rs @@ -68,7 +68,7 @@ async fn test_upload_page(ctx: TestContext) -> Result<(), Box<dyn Error>> { .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), None, ) .await?; @@ -97,13 +97,13 @@ async fn test_clear_page(ctx: TestContext) -> Result<(), Box<dyn Error>> { .upload_pages( RequestContent::from(data), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), None, ) .await?; page_blob_client - .clear_pages(HttpRange::new(0, 512).to_string(), None) + .clear_pages(HttpRange::new(0, 512), None) .await?; // Assert @@ -132,7 +132,7 @@ async fn test_resize_blob(ctx: TestContext) -> Result<(), Box<dyn Error>> { .upload_pages( RequestContent::from(data.clone()), 1024, - HttpRange::new(0, 1024).to_string(), + HttpRange::new(0, 1024), None, ) .await; @@ -145,7 +145,7 @@ async fn test_resize_blob(ctx:
TestContext) -> Result<(), Box<dyn Error>> { .upload_pages( RequestContent::from(data.clone()), 1024, - HttpRange::new(0, 1024).to_string(), + HttpRange::new(0, 1024), None, ) .await?; @@ -227,7 +227,7 @@ async fn test_upload_page_from_url(ctx: TestContext) -> Result<(), Box Result<(), Box Result<(), Box> { .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), None, ) .await?; @@ -304,13 +304,13 @@ async fn test_get_page_ranges(ctx: TestContext) -> Result<(), Box<dyn Error>> { .upload_pages( RequestContent::from(vec![b'B'; 512]), 512, - HttpRange::new(512, 512).to_string(), + HttpRange::new(512, 512), None, ) .await?; let response = page_blob_client .get_page_ranges(Some(PageBlobClientGetPageRangesOptions { - range: Some(HttpRange::new(0, 512).to_string()), + range: Some(HttpRange::new(0, 512)), ..Default::default() })) .await? @@ -393,7 +393,7 @@ async fn test_upload_pages_sequence_number_condition( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { if_sequence_number_equal_to: Some(3), ..Default::default() @@ -412,7 +412,7 @@ async fn test_upload_pages_sequence_number_condition( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { if_sequence_number_equal_to: Some(5), ..Default::default() @@ -425,7 +425,7 @@ async fn test_upload_pages_sequence_number_condition( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { if_sequence_number_less_than: Some(3), ..Default::default() @@ -444,7 +444,7 @@ async fn test_upload_pages_sequence_number_condition( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions {
if_sequence_number_less_than: Some(6), ..Default::default() @@ -457,7 +457,7 @@ async fn test_upload_pages_sequence_number_condition( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { if_sequence_number_less_than_or_equal_to: Some(3), ..Default::default() @@ -476,7 +476,7 @@ async fn test_upload_pages_sequence_number_condition( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { if_sequence_number_less_than_or_equal_to: Some(5), ..Default::default() @@ -503,7 +503,7 @@ async fn test_get_page_ranges_snapshot(ctx: TestContext) -> Result<(), Box Result<( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { transactional_content_md5: Some(vec![0u8; 16]), ..Default::default() @@ -572,7 +572,7 @@ async fn test_upload_pages_transactional_checksums(ctx: TestContext) -> Result<( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { transactional_content_md5: Some(correct_md5), ..Default::default() @@ -585,7 +585,7 @@ async fn test_upload_pages_transactional_checksums(ctx: TestContext) -> Result<( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { transactional_content_crc64: Some(vec![0u8; 8]), ..Default::default() @@ -602,7 +602,7 @@ async fn test_upload_pages_transactional_checksums(ctx: TestContext) -> Result<( .upload_pages( RequestContent::from(data.clone()), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesOptions { transactional_content_crc64: Some(correct_crc64), ..Default::default() @@ 
-615,7 +615,7 @@ async fn test_upload_pages_transactional_checksums(ctx: TestContext) -> Result<( .upload_pages( RequestContent::from(data), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), None, ) .await?; @@ -669,7 +669,7 @@ async fn test_upload_pages_from_url_source_if_match( .upload_pages( RequestContent::from(vec![b'S'; 512]), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), None, ) .await?; @@ -677,8 +677,7 @@ .get_properties(None) .await? .etag()? - .unwrap() - .to_string(); + .unwrap(); let dest_blob_client = container_client.blob_client(&get_blob_name(recording)); let dest_page_blob = dest_blob_client.page_blob_client(); @@ -688,11 +687,11 @@ dest_page_blob .upload_pages_from_url( source_blob_client.url().as_str().into(), - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesFromUrlOptions { - source_if_match: Some(etag.clone().into()), + source_if_match: Some(etag.clone()), ..Default::default() }), ) @@ -702,11 +701,11 @@ let response = dest_page_blob .upload_pages_from_url( source_blob_client.url().as_str().into(), - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), Some(PageBlobClientUploadPagesFromUrlOptions { - source_if_none_match: Some(etag.into()), + source_if_none_match: Some(etag), ..Default::default() }), ) diff --git a/sdk/storage/azure_storage_blob/tests/streaming.rs b/sdk/storage/azure_storage_blob/tests/streaming.rs index bec053855e..acff29af83 100644 --- a/sdk/storage/azure_storage_blob/tests/streaming.rs +++ b/sdk/storage/azure_storage_blob/tests/streaming.rs @@ -186,7 +186,7 @@ async fn stream_upload_pages(ctx: TestContext) -> Result<(), Box<dyn Error>> {
.upload_pages( request_content_from_bytes(&data), 512, - HttpRange::new(0, 512).to_string(), + HttpRange::new(0, 512), None, ) .await?; diff --git a/sdk/storage/azure_storage_blob/tsp-location.yaml b/sdk/storage/azure_storage_blob/tsp-location.yaml index 58e66555a0..b1138bca9b 100644 --- a/sdk/storage/azure_storage_blob/tsp-location.yaml +++ b/sdk/storage/azure_storage_blob/tsp-location.yaml @@ -1,4 +1,4 @@ directory: specification/storage/Microsoft.BlobStorage -commit: 3928c0a2de8aad3536d42578eefdd1dd7126072b +commit: fa6985a722a37f572d868a3ba2f8b155e74bcd14 repo: Azure/azure-rest-api-specs additionalDirectories: diff --git a/sdk/storage/azure_storage_queue/README.md b/sdk/storage/azure_storage_queue/README.md index d01614289e..e58b5ba302 100644 --- a/sdk/storage/azure_storage_queue/README.md +++ b/sdk/storage/azure_storage_queue/README.md @@ -18,7 +18,7 @@ cargo add azure_storage_queue ### Prerequisites -* You must have an [Azure subscription] and an [Azure storage account] to use this package. +- You must have an [Azure subscription] and an [Azure storage account] to use this package. ### Create a storage account @@ -36,7 +36,7 @@ az storage account create -n my-storage-account-name -g my-resource-group #### Authenticate the client -In order to interact with the Azure Queue service, you'll need to create an instance of a client, `QueueClient`. The [Azure Identity] library makes it easy to add Microsoft Entra ID support for authenticating Azure SDK clients with their corresponding Azure services: +In order to interact with the Azure Queue service, you'll need to create an instance of a client, `QueueClient` or `QueueServiceClient`. 
The [Azure Identity] library makes it easy to add Microsoft Entra ID support for authenticating Azure SDK clients with their corresponding Azure services: ```rust no_run use azure_storage_queue::{QueueClient, QueueClientOptions}; @@ -48,7 +48,7 @@ async fn main() -> Result<(), Box> { let credential = DeveloperToolsCredential::new(None)?; let queue_client = QueueClient::new( "https://.queue.core.windows.net/", // Endpoint - "queue-name", // Queue Name + "", // Queue Name Some(credential), // Credential Some(QueueClientOptions::default()), // QueueClient Options )?; @@ -64,11 +64,60 @@ You may need to specify RBAC roles to access Queues via Microsoft Entra ID. Plea You can find executable examples for all major SDK functions in: -* [queue_hello_world.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/queue_hello_world.rs) - Getting started: create a queue, send and receive messages -* [queue_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/queue_client.rs) - Queue-level operations: metadata, send/peek/receive/delete, TTL/visibility options -* [queue_service_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/queue_service_client.rs) - Service-level operations: list queues, service properties, statistics -* [access_policy.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/access_policy.rs) - Set and get queue access policies (stored access policies for SAS) -* [queue_storage_logging.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/queue_storage_logging.rs) - Logging and OpenTelemetry distributed tracing +- [queue_hello_world.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/queue_hello_world.rs) - Getting started: create a queue, send and receive messages +- 
[queue_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/queue_client.rs) - Queue-level operations: metadata, send/peek/receive/delete, TTL/visibility options +- [queue_service_client.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/queue_service_client.rs) - Service-level operations: list queues, service properties, statistics +- [access_policy.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/access_policy.rs) - Set and get queue access policies (stored access policies for SAS) +- [queue_storage_logging.rs](https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/storage/azure_storage_queue/examples/queue_storage_logging.rs) - Logging and OpenTelemetry distributed tracing + +### Send a message + +```rust no_run +use azure_storage_queue::{models::QueueMessage, QueueClient, QueueClientOptions}; +use azure_identity::DeveloperToolsCredential; + +#[tokio::main] +async fn main() -> Result<(), Box> { + let credential = DeveloperToolsCredential::new(None)?; + let queue_client = QueueClient::new( + "https://.queue.core.windows.net/", + "", + Some(credential), + Some(QueueClientOptions::default()), + )?; + + let message = QueueMessage { + message_text: Some("hello world".to_string()), + }; + queue_client.send_message(message.try_into()?, None).await?; + Ok(()) +} +``` + +### Receive messages + +```rust no_run +use azure_storage_queue::{QueueClient, QueueClientOptions}; +use azure_identity::DeveloperToolsCredential; + +#[tokio::main] +async fn main() -> Result<(), Box> { + let credential = DeveloperToolsCredential::new(None)?; + let queue_client = QueueClient::new( + "https://.queue.core.windows.net/", + "", + Some(credential), + Some(QueueClientOptions::default()), + )?; + + let response = queue_client.receive_messages(None).await?; + let messages = response.into_model()?; + for msg in messages.items.unwrap_or_default() 
{ + println!("{}", msg.message_text.as_deref().unwrap_or("")); + } + Ok(()) +} +``` ## Next steps @@ -90,7 +139,7 @@ This project has adopted the [Microsoft Open Source Code of Conduct](https://ope [Azure Portal]: https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-portal [Azure PowerShell]: https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-powershell [Azure CLI]: https://learn.microsoft.com/azure/storage/common/storage-quickstart-create-account?tabs=azure-cli -[cargo]: https://dev-doc.rust-lang.org/stable/cargo/commands/cargo.html +[cargo]: https://doc.rust-lang.org/cargo/ [Azure Identity]: https://github.com/Azure/azure-sdk-for-rust/tree/main/sdk/identity/azure_identity [API reference documentation]: https://docs.rs/crate/azure_storage_queue/latest [Package (crates.io)]: https://crates.io/crates/azure_storage_queue