Conversation

@yutannihilation
  • I agree to follow the project's code of conduct.
  • I added an entry to CHANGES.md if knowledge of this change could be valuable to users.

Hi! I was struggling to statically build a CLI using the gdal crate, and found the CI in this repo uses this trick:

```yaml
- name: Build bundled gdal (minimal features)
  # we use `cargo test --no-run` here because tests do not pass due to
  # missing libgeos, but we want a complete build (including linking)
  run: cargo test --features "gdal-sys/bundled gdal-src gdal-src/driver_sqlite" --no-run
```

However, this only works when gdal-sys and gdal-src are direct dependencies, which is not the case when using the gdal crate. I think it makes sense to expose a bundled feature on the gdal crate as well.

@lnicola
Member

lnicola commented Jun 7, 2025

r? @weiznich. I did wonder why there's no gdal feature for this.

@weiznich
Contributor

weiznich commented Jun 8, 2025

I'm personally not a fan of exposing features from dependencies in this way, as it adds additional constraints on what you can or cannot change as a crate without a major release. Users can always opt in to such features by directly depending on the crate that exposes the feature, as cargo will unify these features across multiple sources. Yes, that's a bit more work for users, but I think it is manageable on their end.
In this particular case, users can never have more than one version of gdal-sys in their dependency tree, so the worst thing that can happen is a compilation error stating that multiple versions of that crate are present.
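Concretely, the workaround of depending on the -sys/-src crates directly might look like this in a downstream Cargo.toml. This is a sketch: the feature names come from the CI snippet quoted earlier in this thread, and the version numbers are placeholders, not the actual published versions.

```toml
[dependencies]
gdal = "0.17"   # placeholder version

# Depend on the -sys/-src crates directly just to enable their features;
# cargo unifies features across all dependents of the same crate version.
gdal-sys = { version = "0.10", features = ["bundled"] }
gdal-src = { version = "0.1", features = ["driver_sqlite"] }
```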

Then there is one general note about bundled features like this: while they are really nice for certain use cases, they are really bad for others. It's nice to ship statically linked binaries if you provide pre-built binaries for whatever reason. On the other hand, if your application gets shipped by some Linux distribution, they will very likely hate you for unconditionally statically linking such dependencies, as they want to be able to dynamically link them against their canonical distro package instead.

Finally, there is one gdal-specific consideration here: I would guess that the gdal-sys/bundled feature on its own is not useful for most users, as it doesn't enable any import/export drivers. On the other hand, the proposed bundled_all feature seems to enable literally all features, which is again something I would guess most projects don't want, especially given the implications around licensing and linking to libgeos. That suggests most users would rather enable the drivers they depend on explicitly, to get a configuration that fits their use case. If all of that should be configurable from the gdal crate itself, it would require exposing a lot of feature flags. I'm not a maintainer of gdal, but I wouldn't do that, to keep the amount of required work on the maintainer side as low as possible. And if you don't duplicate the feature flags, users would need to depend on gdal-sys/gdal-src anyway, so there is really not much sense in exposing these features from gdal at all.

What can and should be done is documenting all of that. Unfortunately, I'm currently rather busy with a lot of other stuff, so I'm not sure I can promise contributions in the next few weeks.

@yutannihilation
Author

Ah, makes sense, thanks for the details.

I would guess that the gdal-sys/bundled feature on its own is not useful for most users, as it doesn't enable any import/export drivers.

In our case, we don't need import/export drivers, so that's totally fine! But I agree it might be confusing to other users; I actually wasn't aware of this.

That suggests most users would rather enable the drivers they depend on explicitly, to get a configuration that fits their use case. If all of that should be configurable from the gdal crate itself, it would require exposing a lot of feature flags. I'm not a maintainer of gdal, but I wouldn't do that, to keep the amount of required work on the maintainer side as low as possible.

Agreed. As a user, I want such fine-grained feature flags, but if I were a maintainer, it would feel like a nightmare.

So, should I close this pull request?

@weiznich
Contributor

@yutannihilation I'm not a maintainer here, but I would personally suggest changing the scope of the PR to adding the relevant documentation instead. Maybe just a crate-level documentation entry here that points to the bundled feature of gdal-sys and mentions the considerations around enabling specific drivers? Possibly with a link to the relevant section of the gdal-src/Cargo.toml file?

@yutannihilation
Author

Hm, thanks for the suggestion, but I don't think I'm the right person to write that documentation, considering my knowledge is quite limited.

After some thinking, while I understand your concern, I still feel this pull request might be valid to some extent. So I'll leave it open and wait for a maintainer to decide.

weiji14 added a commit to weiji14/cog3pio that referenced this pull request Jun 24, 2025
Fix for `The system library `gdal` required by crate `gdal-sys` was not found. The file `gdal.pc` needs to be installed and the PKG_CONFIG_PATH environment variable must contain its parent directory`. Using bundled gdal-sys, and building gdal-src with GTIFF driver. Inspired by georust/gdal#646.
weiji14 added a commit to weiji14/cog3pio that referenced this pull request Jul 16, 2025
* ➕ Add nvtiff-sys

Rust bindings to nvTIFF via bindgen! Crate at https://crates.io/crates/nvtiff-sys

* ➕ Add cudarc

Safe wrappers around CUDA apis! Crate at https://crates.io/crates/cudarc. Also including the 'cuda-version-from-build-system' feature flag.

* ✨ Decode GeoTIFF images to CUDA stream

Reading GeoTIFF images to GPU memory! Uses the `nvtiff-sys` crate for the I/O decoding, and `cudarc` crate to store the 1D array in CUDA memory. Putting this behind a 'cuda' feature flag since it does require a CUDA GPU.

* 👔 Work out number of bytes to allocate memory for from file info

Calculate how many bytes of memory to allocate on the GPU as image_width × image_height × bits_per_pixel / 8.
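The allocation-size arithmetic in that commit is simple enough to sketch. The function name here is made up for illustration; the dimensions in the example are the standard 10980 × 10980 size of a Sentinel-2 TCI tile at 24 bits (3 bytes) per pixel.

```rust
/// Number of bytes to allocate on the GPU for a decoded image:
/// width * height * bits_per_pixel / 8.
fn gpu_alloc_bytes(width: u32, height: u32, bits_per_pixel: u32) -> usize {
    (width as usize) * (height as usize) * (bits_per_pixel as usize / 8)
}

fn main() {
    // A 10980 x 10980 RGB tile at 3 bytes per pixel needs roughly
    // 362 MB of device memory, which matches the benchmark's file size
    // once DEFLATE compression is accounted for.
    println!("{}", gpu_alloc_bytes(10980, 10980, 24));
}
```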

* ♻️ Refactor to avoid cloning of CudaSlice in to_cuda function

Store the CudaSlice as part of the CudaCogReader struct, initializing it with alloc_zeros in the `new` method and filling it with the actual decoded bytes in the `to_cuda` method. Return the decoded bytes in a CudaSlice using `upgrade_device_ptr`, which requires an unsafe call. Also need to store num_bytes somewhere in the struct.

* 🔇 Don't run clippy on 'cuda' feature

GitHub Actions CI doesn't have CUDA, so trying to compile `cudarc` fails with `Failed to execute `nvcc`: Os { code: 2, kind: NotFound, message: "No such file or directory" }`. Clippy therefore runs on default features only for now.

* ⚡ Benchmark nvTIFF vs GDAL vs image-tiff on reading Sentinel-2 TCI

Set up benchmark to compare read speeds/throughput from reading a 318MB DEFLATE-compressed Sentinel-2 TCI file via nvTIFF, GDAL or image-tiff. Benchmarks ran using a patched version of criterion, with sample size of 10 for nvTIFF_GPU, and sample size of 30 for gdal_CPU and image-tiff_CPU.

Summary report is:
- read_cog/1_nvTIFF_GPU/Sentinel-2 TCI
                        time:   [330.85 ms 344.04 ms 330.85 ms]
                        thrpt:  [961.16 MB/s 924.31 MB/s 961.16 MB/s]

- read_cog/2_gdal_CPU/Sentinel-2 TCI
                        time:   [1.0533 s 1.0543 s 1.0533 s]
                        thrpt:  [301.91 MB/s 301.63 MB/s 301.91 MB/s]

- read_cog/3_image-tiff_CPU/Sentinel-2 TCI
                        time:   [1.7426 s 1.7461 s 1.7426 s]
                        thrpt:  [182.48 MB/s 182.12 MB/s 182.48 MB/s]

* ➕ Add gdal-sys bundled and gdal-src with drive_gtiff

Fix for `The system library `gdal` required by crate `gdal-sys` was not found. The file `gdal.pc` needs to be installed and the PKG_CONFIG_PATH environment variable must contain its parent directory`. Using bundled gdal-sys, and building gdal-src with GTIFF driver. Inspired by georust/gdal#646.

* 🚩 Add note about using libgdal-dev locally

Using GDAL 3.10.3 locally from Debian to run the benchmarks. Otherwise getting some `undefined reference to `GDALRasterIOEx' ...` errors when trying to compile. Keeping the `gdal-src` and `gdal-sys` lines uncommented though to please CI.

* 🚩 Let 'cuda' feature show up on docs.rs

Xref https://vadosware.io/post/getting-features-to-show-up-in-your-rust-docs. For checking locally, run `RUSTDOCFLAGS="--cfg docsrs" cargo +nightly doc --all-features`.

* ⬆️ Bump nvtiff-sys from 0.1.1 to 0.1.2

* 🥅 Handle nvTIFF status errors with NvTiffResult instead of panicking

More gracefully handle nvTIFF decode API status codes by converting them to an NvTiffResult via the NvTiffResultCheck trait's `.result()` method. Removed all the assert_eq!(status_*) checks, but kept the dbg!() statements around, as decoding still sometimes fails between the nvtiffDecodeCheckSupported and nvtiffDecodeImage calls.

* 🔧 Configure dev builds to have basic optimizations

Change opt-level from the default of 0 (no optimizations) to 1 (basic optimizations), because `nvtiffDecodeCheckSupported` and `nvtiffDecodeImage` fail with `NVTIFF_STATUS_TIFF_NOT_SUPPORTED (4)` on test/dev builds, but not on bench/release builds. Xref https://doc.rust-lang.org/cargo/reference/profiles.html#opt-level
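The profile change described in that commit would look something like this in Cargo.toml (a sketch of the standard profile syntax, not the project's exact file):

```toml
# Basic optimizations for dev/test builds; with the default opt-level
# of 0, nvTIFF decoding failed with NVTIFF_STATUS_TIFF_NOT_SUPPORTED.
[profile.dev]
opt-level = 1
```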

* 💬 Fix missing_panics_doc and general doc improvements

Improve the panic error message to mention how many bytes the failed CUDA device allocation attempted. Also general improvements to the docstrings in the nvtiff module.

* 🔧 Ensure optional features are documented on docs.rs

Xref https://docs.rs/about/metadata
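The docs.rs metadata referenced above typically looks like this in Cargo.toml. This is a sketch of the common pattern, not the project's exact configuration; whether `all-features` is appropriate here depends on whether the 'cuda' feature can build without a CUDA toolchain on docs.rs.

```toml
# Tell docs.rs to document all features and define the `docsrs` cfg,
# so #[cfg_attr(docsrs, ...)] annotations take effect.
[package.metadata.docs.rs]
all-features = true
rustdoc-args = ["--cfg", "docsrs"]
```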