
[CAS] Sync LLVMCAS library implementation to next #12794

Open
cachemeifyoucan wants to merge 6 commits into swiftlang:stable/21.x from
cachemeifyoucan:eng/PR-sync-cas-next-to-21.x

Conversation

@cachemeifyoucan

Sync the LLVM CAS library implementation to the upstreamed version. This:

Adds bug fixes from the latest LLVMCAS, including more robust error checking and size management.
Allows easier cherry-picking between branches.

@cachemeifyoucan cachemeifyoucan requested a review from a team as a code owner April 17, 2026 16:00
@cachemeifyoucan cachemeifyoucan force-pushed the eng/PR-sync-cas-next-to-21.x branch from 4bb110d to 8fefe79 Compare April 17, 2026 16:02
@cachemeifyoucan
Author

@swift-ci please test

@cachemeifyoucan cachemeifyoucan force-pushed the eng/PR-sync-cas-next-to-21.x branch from 8fefe79 to a7d5f04 Compare April 17, 2026 16:12
@cachemeifyoucan
Author

@swift-ci please test

@cachemeifyoucan
Author

swiftlang/swift#88537

@swift-ci please test

@cachemeifyoucan
Author

swiftlang/swift#88537

@swift-ci please test

Update the downstream code to use upstreamed content.
Add MappedFileRegionArena, which serves as a file-system-backed
persistent memory allocator. The allocator works like a
BumpPtrAllocator and is designed to be thread-safe and process-safe.

The implementation relies on POSIX-compliant file-system behavior and
doesn't work on all file systems. If the file system supports a lazy
tail (disk space is not allocated while the tail of the large file is
unused), the user has more flexibility to declare a larger capacity.

The allocator works by using an atomically updated bump pointer at a
location that can be customized by the user. The atomic pointer points
to the next available space to allocate, and the allocator
resizes/truncates the file to the current usage once all clients have
closed it.

Windows implementation contributed by: @hjyamauchi
Replace all CAS library and header files with their versions from
origin/next to bring upstream LLVMCAS changes to stable/21.x.

Key changes:
- HashMappedTrie removed (replaced by ADT/TrieRawHashMap)
- OnDiskHashMappedTrie → OnDiskTrieRawHashMap
- MappedFileRegionBumpPtr → MappedFileRegionArena
- Add OnDiskDataAllocator, DatabaseFile, NamedValuesSchema
- Add FileOffset.h, OnDiskTrieRawHashMap.h
- Update OnDiskGraphDB, UnifiedOnDiskCache, ActionCache APIs
- createCASFromIdentifier now returns pair<ObjectStore, ActionCache>
- LLVM_ENABLE_ONDISK_CAS moved to llvm-config.h.cmake
- TrieRawHashMap: add print()/dump() support
Replace all CAS test files with their versions from origin/next
to match the updated LLVMCAS library implementation.
Update all downstream consumers for the new LLVMCAS APIs:
- GRPCRelayCAS: HashMappedTrie → TrieRawHashMap
- RemoteCachingService: new ObjectStoreCreateFuncTy (pair return)
- libCASPluginTest: KVPut/KVGet → cachePut/cacheGet, ObjectHandle
- llvm-cas: rewrite with tablegen-based option parsing
- llvm-cas-dump, llvm-cas-object-format: .first for pair return
- llvm-cas-test: Config.def macro-based parameters
- llc, llvm-mc: createCASFromIdentifier returns pair
- cc1depscan_main: bool → LockKind::Exclusive
(llvm#192565)

When opening an existing large CAS using a smaller requested mapping
size, the file size can be smaller than capacity while holding only a
shared lock. Replace the assertion with a graceful lock upgrade to
exclusive before resizing the file.
@cachemeifyoucan cachemeifyoucan force-pushed the eng/PR-sync-cas-next-to-21.x branch from a7d5f04 to b0d33a4 Compare April 21, 2026 19:36
@cachemeifyoucan
Author

swiftlang/swift#88537

@swift-ci please test
