Develop upstream sync 20250113 #2802


Merged
merged 1,288 commits on Jan 20, 2025
Commits
4950de7
PR #19649: [ROCm] Implement hermetic rocm dependency
alekstheod Jan 8, 2025
b7bed6c
Automated Code Change
tensorflower-gardener Jan 8, 2025
11b202f
Fix undefined behavior of mismatch in coordination service.
tensorflower-gardener Jan 8, 2025
a70dab5
[xla:gpu] fix bug in counting good autotuner configs
vwbaker Jan 8, 2025
affe2e7
[pjrt] Removed unused CreateDeviceToHostChannelHandle, CreateChannelH…
superbobry Jan 8, 2025
039586b
[pjrt] Removed unused prefer_to_retain_reference argument from Record…
superbobry Jan 8, 2025
5ab13f8
#sdy use `applyPatternsGreedily` with `config.fold=false` and `config…
tomnatan30 Jan 8, 2025
b7418dd
Moving AtomicRMW utilities out of lower_tensors. These are going to a…
Moerafaat Jan 8, 2025
4c829e6
[XLA:CPU] Remove no thunks tests for exhaustive_binary_test
tensorflower-gardener Jan 8, 2025
19d4d54
[XLA:GPU] Fix sorted scatter with imperfectly tiled indices.
pifon2a Jan 8, 2025
9314252
Passing device information to Vectorization pass. This will be needed…
Moerafaat Jan 8, 2025
a001136
[xla:cpu] Add CpuClique to XLA:CPU collectives and use generic collec…
ezhulenev Jan 8, 2025
95def86
Remove experimental TOSA convert python API
jpienaar Jan 8, 2025
58dabf1
[XLA:GPU][Emitters] Fix a typo in vectorize_loads_stores.mlir
pifon2a Jan 8, 2025
f7c7dd2
Update CompiledModel.Run()
terryheo Jan 8, 2025
6fefafc
IFRT proxy asan fix: Do not call `promise.Set()` twice in error-handl…
tensorflower-gardener Jan 8, 2025
0c60260
Remove obsolete target.
klucke Jan 8, 2025
4117c70
Eliminate circular dependency imposed by CollectivePermuteDecomposer …
toli-y Jan 8, 2025
9f293a1
Move most of kernel Launch processing from Stream to the Kernel classes.
klucke Jan 8, 2025
a7703e7
Make MatchShapeCoveringDynamicIndexInstruction handle non-unit slice …
Jan 8, 2025
dea544a
#tf-data-service Remove obsolete todo.
mpcallanan Jan 8, 2025
11273af
Try to handle multiple source target pairs when generating a single a…
ZixuanJiang Jan 8, 2025
82600bb
#tf-data-service Remove obsolete todo.
mpcallanan Jan 8, 2025
f0a6d12
#tf-data Remove obsolete todo.
mpcallanan Jan 8, 2025
70be419
Add a HLOPrintOption to control printing of the parameter number for …
tensorflower-gardener Jan 8, 2025
d1047c5
Adds CreateFromAhwb method
tensorflower-gardener Jan 8, 2025
f3d8658
#tf-data Remove obsolete todo.
mpcallanan Jan 8, 2025
f805ef0
[HLO Componentization] Populate hlo/testlib sub-component (Phase I).
sdasgup3 Jan 8, 2025
863d86b
[xla:python] Removed unused `*Executable.compile_options`
superbobry Jan 8, 2025
805779b
#tf-data For an empty `from_list`, update the error message to sugges…
mpcallanan Jan 8, 2025
df75250
Fix application of JIT compiler plugins
tensorflower-gardener Jan 8, 2025
2fd9ba8
Remove redundant string conversions.
tensorflower-gardener Jan 8, 2025
ca5e504
Silence Dlopen log messages when probing for Neuron library
tensorflower-gardener Jan 8, 2025
7886562
Merge pull request #83753 from codinglover222:support_math_boolean
tensorflower-gardener Jan 8, 2025
aa290d7
Fixed a bug where slice Op legalization constructing QNN param with s…
YunanAZ Jan 9, 2025
56e0795
Improve comment in ShapeUtil
SandSnip3r Jan 9, 2025
0ce1d6f
[XLA:GPU] Make dnn_compiled_graphs as bytes. This can fix parsing err…
tensorflower-gardener Jan 9, 2025
ccaef81
Reverts changelist 546034127
tensorflower-gardener Jan 9, 2025
35fbbd0
Create option to allow tensorflow::Tensor objects to be imported as D…
vamsimanchala Jan 9, 2025
09ce373
Refactor collective_permute decomposer. Extract general purpose colle…
toli-y Jan 9, 2025
a28caef
Add `device_count` accessor to `HloRunnerInterface`.
nvgrw Jan 9, 2025
fe57ee0
Make PJRTArray::Create validate the create-request for addressable de…
tensorflower-gardener Jan 9, 2025
7d91b32
Fold `xla::PjRtXlaLayout` into `xla::PjRtLayout` for simplification
junwhanahn Jan 9, 2025
c9d164a
Automated Code Change
tensorflower-gardener Jan 9, 2025
93a7459
Automated Code Change
tensorflower-gardener Jan 9, 2025
29a8da4
Minor code simplification.
fergushenderson Jan 9, 2025
938cd49
Automated Code Change
tensorflower-gardener Jan 9, 2025
68ce340
Automated Code Change
tensorflower-gardener Jan 9, 2025
c63676e
Automated Code Change
tensorflower-gardener Jan 9, 2025
02bdfbb
Automated Code Change
tensorflower-gardener Jan 9, 2025
e17fbcc
Automated Code Change
tensorflower-gardener Jan 9, 2025
2e14ee8
Automated Code Change
tensorflower-gardener Jan 9, 2025
01abee8
Automated Code Change
tensorflower-gardener Jan 9, 2025
1fbc09a
compat: Update forward compatibility horizon to 2025-01-09
tensorflower-gardener Jan 9, 2025
76f426c
Update GraphDef version to 2102.
tensorflower-gardener Jan 9, 2025
aee2a07
Merge pull request #81920 from pasweistorz:fix-issue-70730
tensorflower-gardener Jan 9, 2025
e56a210
Merge pull request #77208 from yhng3010:fix_label_image_cmake_cross_c…
tensorflower-gardener Jan 9, 2025
1240e44
Generalize `GetFirstMergeableDimForSortOperand` and rename it as `Get…
ZixuanJiang Jan 9, 2025
1b3ee9d
Fix typos in multiple documentation strings
Venkat6871 Jan 9, 2025
4d3e63f
PR #21166: [DOC] Fix a link in the documentation.
sergachev Jan 9, 2025
49d396c
PR #21175: [DOC] Fix a mistype.
sergachev Jan 9, 2025
1d54c40
Automated Code Change
tensorflower-gardener Jan 9, 2025
2f389ff
Automated Code Change
tensorflower-gardener Jan 9, 2025
a4d4a09
Automated Code Change
tensorflower-gardener Jan 9, 2025
1f58545
[xla:gpu] Only run XLA Triton passes on XLA fusions.
chr1sj0nes Jan 9, 2025
ed61e14
[XLA:GPU] Model output_bytes_accessed for collectives.
golechwierowicz Jan 9, 2025
b12d4ba
Automated Code Change
tensorflower-gardener Jan 9, 2025
2332929
[XLA:GPU] Use output_bytes_accessed in SoL latency estimator.
golechwierowicz Jan 9, 2025
e8e48a3
Automated Code Change
tensorflower-gardener Jan 9, 2025
167dfc4
Automated Code Change
tensorflower-gardener Jan 9, 2025
5844fae
Automated Code Change
tensorflower-gardener Jan 9, 2025
54e4a84
Automated Code Change
tensorflower-gardener Jan 9, 2025
8f94e73
Moving test from Triton patch file internally. The associated fix was…
Moerafaat Jan 9, 2025
810f33c
Fix lint issue
mihaimaruseac Jan 9, 2025
3d9a31f
Use better word, make docstring clearer
mihaimaruseac Jan 9, 2025
cbc2ac2
Merge branch 'master' into fixtypos11
mihaimaruseac Jan 9, 2025
cf43bb5
#sdy support JAX callbacks through the Shardy XLA round-trip pipeline.
bartchr808 Jan 9, 2025
7b49ba4
[xla:cpu] Replace xla::cpu::CollectivesInterface with xla::cpu::CpuCo…
ezhulenev Jan 9, 2025
5adbf9c
Simplify copying the tensor content to a string and byte swapping
jonathan-albrecht-ibm Jan 9, 2025
d7a41d2
[xla:cpu:benchmarks] Add scripts to run Gemma2 Keras model.
penpornk Jan 9, 2025
765be4d
Update users of moved TSL headers to use new location in XLA
ddunl Jan 9, 2025
7fab320
Extend MTK dispatch API to support DMA-BUF buffers
tensorflower-gardener Jan 9, 2025
752229b
[xla:cpu] Consolidate all XLA:CPU collectives under backends/cpu/coll…
ezhulenev Jan 9, 2025
3c90759
Rewrite `Reshard(HloSharding::Replicate())` as `Replicate()` for `Par…
ZixuanJiang Jan 9, 2025
9867508
Internal change only
SiqiaoWu1993 Jan 9, 2025
7b5f57c
[xla:collectives] Remove redundant nranks argument from collectives API
ezhulenev Jan 9, 2025
17fb3d8
Fix typo: std:string -> std::string
jonathan-albrecht-ibm Jan 9, 2025
107e28c
Update users of moved TSL headers to use new location in XLA for `act…
ddunl Jan 9, 2025
fa3b3c1
Update scripts/configs for Windows nightly/release builds.
belitskiy Jan 9, 2025
354a956
Merge branch 'master' into export_symbols2
mihaimaruseac Jan 9, 2025
79520d7
Remove `SKIP_TEST_IF_NUM_DEVICES_LESS_THAN` macro.
nvgrw Jan 9, 2025
2342c28
Remove unused MemoryTypeString function.
klucke Jan 9, 2025
63def23
Remove unused variable
mihaimaruseac Jan 9, 2025
a6bbc76
[xla:gpu] Rename gpu_clique_locking to gpu_cliques for consistency wi…
ezhulenev Jan 9, 2025
49b2a87
Increase wheel limit size up to 270M for a temporary nightlies fix.
rtg0795 Jan 9, 2025
263ab05
Use const reference to context instead of universal reference.
tensorflower-gardener Jan 9, 2025
4f81d05
Integrate LLVM at llvm/llvm-project@644de6ad1c75
d0k Jan 9, 2025
933af85
Merge pull request #59851 from linux-on-ibm-z:endian-arithmetic-optim…
tensorflower-gardener Jan 9, 2025
dc7a738
[xla:cpu] Migrate CollectivePermute to RendezvousSingle API
ezhulenev Jan 9, 2025
a25b416
Update to match upstream API change (NFC).
jpienaar Jan 9, 2025
7855a8d
Migrate replicated_io_feed_test to always use PjRt for its test backend.
nvgrw Jan 9, 2025
51bad8c
Automated Code Change
tensorflower-gardener Jan 9, 2025
99a5c72
Remove unused free_visitors from DeviceMemAllocator.
klucke Jan 9, 2025
50e5e96
Merge pull request #84463 from tensorflow:fixtypos11
tensorflower-gardener Jan 9, 2025
f25e674
Add an HLO parsing option to enable/disable initialization of short f…
tensorflower-gardener Jan 9, 2025
d285ca0
[xla:cpu] Migrate AllToAll to RendezvousSingle API
ezhulenev Jan 9, 2025
afab56a
Drop shard barrier custom calls in sharding-remover HLO pass.
tensorflower-gardener Jan 9, 2025
c49fb9f
Fix build errors on MacOS
tensorflower-gardener Jan 9, 2025
d0b79eb
Cleanup. Sort the declarations in spmd_partitioner.
ZixuanJiang Jan 9, 2025
df6eb2f
[xla:cpu] Migrate AllGather to RendezvousSingle API
ezhulenev Jan 9, 2025
a72d9bf
Merge pull request #56525 from API92:lstm_cudnn_impl_selection
tensorflower-gardener Jan 9, 2025
da13887
Merge pull request #79330 from misterBart:fix_issue_79317
tensorflower-gardener Jan 9, 2025
bf3bc31
Merge pull request #79380 from fiberflow:export_symbols2
tensorflower-gardener Jan 9, 2025
bbbf277
[xla:cpu] Migrate AllReduce to RendezvousSingle API
ezhulenev Jan 9, 2025
bd88b82
Refactor GetIfrtHloSharding and GetIfrtConcreteEvenSharding
pschuh Jan 10, 2025
0228231
[XLA] Simplify the scheduler test HLO.
seherellis Jan 10, 2025
d12282a
Fix oss buld error of dispatch_api
terryheo Jan 10, 2025
8b3f802
Add support for default memory space descriptions;
matthiaskramm Jan 10, 2025
689ebfe
[xla:cpu] Migrate ReduceScatter to RendezvousSingle API
ezhulenev Jan 10, 2025
6c950eb
[xla] Delete unused refcounting hashmap
ezhulenev Jan 10, 2025
52bbb02
Internal change only
SiqiaoWu1993 Jan 10, 2025
c8657c6
[xla] Delete unused Rendezvous implementation
ezhulenev Jan 10, 2025
f3883b3
[TFLite] Optimize FlatBuffer export performance
vamsimanchala Jan 10, 2025
e9ad877
Automated Code Change
tensorflower-gardener Jan 10, 2025
e18bcf8
Automated Code Change
tensorflower-gardener Jan 10, 2025
924d350
Automated Code Change
tensorflower-gardener Jan 10, 2025
91ea68b
Add tests to make sure DenseResourceElementsAttr are handled/supporte…
vamsimanchala Jan 10, 2025
7415a77
Automated Code Change
tensorflower-gardener Jan 10, 2025
4b51d30
Automated Code Change
tensorflower-gardener Jan 10, 2025
4d53671
Automated Code Change
tensorflower-gardener Jan 10, 2025
932b6b3
Automated Code Change
tensorflower-gardener Jan 10, 2025
037dd11
Add `SpmdPartitioningVisitor::HandleBitcastConvert`.
ZixuanJiang Jan 10, 2025
e56433c
Automated Code Change
tensorflower-gardener Jan 10, 2025
8cb43b0
Update to match upstream API change (NFC).
jpienaar Jan 10, 2025
4a01927
Automated Code Change
tensorflower-gardener Jan 10, 2025
252ee2a
Automated Code Change
tensorflower-gardener Jan 10, 2025
47c7e44
Automated Code Change
tensorflower-gardener Jan 10, 2025
cdfdadf
Automated Code Change
tensorflower-gardener Jan 10, 2025
f975999
Automated Code Change
tensorflower-gardener Jan 10, 2025
5bbe8c3
Automated Code Change
tensorflower-gardener Jan 10, 2025
b672892
Automated Code Change
tensorflower-gardener Jan 10, 2025
9c80151
Automated Code Change
tensorflower-gardener Jan 10, 2025
3a59c8a
[XLA:GPU] Fix broken build.
dimitar-asenov Jan 10, 2025
5a55725
Automated Code Change
tensorflower-gardener Jan 10, 2025
9a46187
compat: Update forward compatibility horizon to 2025-01-10
tensorflower-gardener Jan 10, 2025
c7cdaf5
Automated Code Change
tensorflower-gardener Jan 10, 2025
15049e8
Update GraphDef version to 2103.
tensorflower-gardener Jan 10, 2025
59e47f2
[XLA:GPU] Fix reduce scatter transfered bytes.
golechwierowicz Jan 10, 2025
83fb63b
PR #19067: [XLA:CPU][oneDNN] Move simplification pass before oneDNN pass
mahmoud-abuzaina Jan 10, 2025
fd4c85e
Remove outdated and no longer used mips cpu config_setting in lite/BU…
tensorflower-gardener Jan 10, 2025
e512ccd
[xla] Rename RendezvousSingle to Rendezvous
ezhulenev Jan 10, 2025
00aa8b8
Automated Code Change
tensorflower-gardener Jan 10, 2025
285a923
[XLA:CPU] Decouple object loading from JIT compiler.
tensorflower-gardener Jan 10, 2025
d91be6e
Automated Code Change
tensorflower-gardener Jan 10, 2025
902ff41
PR #20744: [NVIDIA GPU] Add a flag to control a2a collective matmul r…
Tixxx Jan 10, 2025
410e487
PR #21234: [ROCm] Fix failing dot tests
mmakevic-amd Jan 10, 2025
f1ae147
Automated Code Change
tensorflower-gardener Jan 10, 2025
5790796
[xla:cpu] Add operator[] to SortIterator
d0k Jan 10, 2025
f11a18b
Fix typo regarding `ImportConstantsPass` comment.
bartchr808 Jan 10, 2025
f638807
[XLA:CPU] Emit nested computation name rather than caller's
WillFroom Jan 10, 2025
34b418f
Don't set the promotion state explicitly.
Jan 10, 2025
8a1089e
Fix bad merge that skipped exporting tags.
jpienaar Jan 10, 2025
1faae56
PR #21191: [xla:cpu] Fix missing header in oneDNN ACL build
cfRod Jan 10, 2025
26e0b3e
Rollback of PR #19067
penpornk Jan 10, 2025
6e69c74
[xla:cpu][oneDNN] Add missing deps for onednn.
tensorflower-gardener Jan 10, 2025
86b868c
Integrate LLVM at llvm/llvm-project@a531800344dc
d0k Jan 10, 2025
0a837ca
Add support for int1 types in literal.cc
amitsabne1 Jan 10, 2025
a2eab0d
[XLA:GPU] Introduce xla_gpu_experimental_enable_triton_i4_rewrites, t…
loislo Jan 10, 2025
f70de22
Handle missing dtype cases in `xla::ifrt::DType::DebugString()`
junwhanahn Jan 10, 2025
90cab38
#sdy fix bug due to tensor dialect being introduced
bartchr808 Jan 10, 2025
c0e2a9a
Reverts a72d9bf92d333bff536f0b9d8eb05d7cff468023
mihaimaruseac Jan 10, 2025
d42f44f
Replace outdated select() on --cpu in lite/delegates/gpu/BUILD with p…
tensorflower-gardener Jan 10, 2025
25daee4
[XLA:GPU] add fusion wrapper tool
metaflow Jan 10, 2025
b164ad6
Add remaining FP8 (B11)FNUZ types to Tensorflow. This exposes
tensorflower-gardener Jan 10, 2025
5b8a6c1
Split `RunAndCompare` with reference backend functionality into a mixin.
nvgrw Jan 10, 2025
91495e8
Delete tfcompile documentation
ezhulenev Jan 10, 2025
d8c8ea0
[XLA:GPU][Emitters] Allow unrolling loops that yield values defined a…
pifon2a Jan 10, 2025
cd56cd6
[lite/kernels] cpu_backend_gemm: Update TFLITE_WITH_RUY comments
tensorflower-gardener Jan 10, 2025
523d135
Add DutyCycleTracker to open source.
bmass02 Jan 10, 2025
ce2eb08
Add `HloPjRtInterpreterReferenceMixin` wrapper around `HloRunnerAgnos…
nvgrw Jan 10, 2025
6e097e2
Replace outdated select() on --cpu in lite/kernels/BUILD with platfor…
tensorflower-gardener Jan 10, 2025
c3d1767
[XLA:GPU][Emitters] Allow to vectorize 128 bits for scatter.
pifon2a Jan 10, 2025
005238b
Cleanup: Remove PjRtMemoryDescription in favor of MemoryKind.
matthiaskramm Jan 10, 2025
be17f18
[xla:cpu] Micro-optimizations for ThunkExecutor
ezhulenev Jan 10, 2025
1bf8861
Remove unused data members from PluggableDeviceProcessState.
klucke Jan 10, 2025
fcbdad8
[Coordination Service] Fix pjrt_c_api_gpu_test after introducing TryGet
ishark Jan 10, 2025
1eb8cc2
Wrap typevars in string to avoid pytype bug
tensorflower-gardener Jan 10, 2025
4f67801
Plug the `allow_id_dropping` from the user configuration.
pineapplejuice233 Jan 10, 2025
e9b86ce
[TF:TPU] Enable cast tests for recently added FP8 types.
mrry Jan 10, 2025
c1a993e
Integrate LLVM at llvm/llvm-project@35e76b6a4fc7
alinas Jan 10, 2025
c3fc6a0
Remove host memory space as input to HostOffloader constructor.
SandSnip3r Jan 11, 2025
73db721
Add original cp name prefix to the send/receives instructions for bet…
toli-y Jan 11, 2025
7999ccc
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 11, 2025
76f447e
Create an IFRT wrapper around NanoRT.
tensorflower-gardener Jan 11, 2025
7869999
[Coordination Service]Allow restartable tasks to connect back to clus…
ishark Jan 11, 2025
a095088
Extract a common helper function `HandleElementwiseWithDimsToReplicat…
ZixuanJiang Jan 11, 2025
8a06346
Apply proper version scripts to pywrap_library artifacts
vam-google Jan 11, 2025
22cba27
PR #21213: [GPU] Fix mutex locking of a cuDNN handle.
sergachev Jan 11, 2025
5cdc161
PR #21192: [xla:cpu] Add XLA_VLOG_LINES to oneDNN rewriter passes
cfRod Jan 11, 2025
d42857d
PR #21245: Fix failing test //xla/pjrt/gpu:pjrt_client_test_se_gpu
shraiysh Jan 11, 2025
8c1104d
[XLA:Python] Use PyEval_SetProfileAllThreads to install the python pr…
hawkinsp Jan 11, 2025
a8ecb44
[XLA:Python] Make sure we hold the lock on cache_ when destroying exe…
hawkinsp Jan 11, 2025
66b9ba9
Add source line and stack_frame functionality in hlo_module_map and u…
zzzaries Jan 11, 2025
42bd1d3
Update ml_dtypes version to 0fa5313b65efe848c5968a15dd37dd220cc29567.
reedwm Jan 11, 2025
9410875
PR #20494: Update slop_factor flag desc in debug_options_flags.cc
sfvaroglu Jan 11, 2025
d17d5f3
PR #20911: [XLA:GPU] Update cudnn frontend version to 1.9
Cjkkkk Jan 11, 2025
68844a9
PR #21163: [GPU] Redefine the flag xla_gpu_cudnn_gemm_fusion_level.
sergachev Jan 11, 2025
eb92017
PR #21123: Disable cuDNN fusions explicitly in tests that are testing…
dimvar Jan 11, 2025
5b93130
PR #20954: [XLA:GPU] migrate command buffer to use buffer_use.h
shawnwang18 Jan 11, 2025
479e087
PR #19066: [XLA:CPU][oneDNN] Handle oneDNN scalar
mahmoud-abuzaina Jan 11, 2025
87c5517
Handle INT64 shapes correctly for resource_variable_ops. Fix other pa…
tensorflower-gardener Jan 11, 2025
91e2a70
PR #20340: Fix missing template value
charleshofer Jan 11, 2025
786352d
PR #21134: [XLA:GPU] Add profiler annotation for sequential thunk.
shawnwang18 Jan 11, 2025
f180178
internal change only to update dependency visibility
tensorflower-gardener Jan 11, 2025
ec859bf
PR #20924: Fix typo in the definition of XLA_PredicatedExtractOp
dimvar Jan 11, 2025
f528aaf
Adjust the build config to an existing value defined in .bazelrc
codinglover222 Jan 11, 2025
08f116d
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 11, 2025
13322e7
Internal relative changes only
zzzaries Jan 11, 2025
a6e4e02
OpenCL wrappers for device, command queue and buffer management
tensorflower-gardener Jan 11, 2025
68e171b
Automated Code Change
tensorflower-gardener Jan 11, 2025
e7758b0
Automated Code Change
tensorflower-gardener Jan 11, 2025
29fd2b3
Automated Code Change
tensorflower-gardener Jan 11, 2025
8d4c9b8
Automated Code Change
tensorflower-gardener Jan 11, 2025
8923b7d
Automated Code Change
tensorflower-gardener Jan 11, 2025
a786719
Update GraphDef version to 2104.
tensorflower-gardener Jan 11, 2025
ae49e93
compat: Update forward compatibility horizon to 2025-01-11
tensorflower-gardener Jan 11, 2025
53a3597
Allows suboptimal solutions for partial mesh shapes when given a *har…
tensorflower-gardener Jan 11, 2025
eeb0bfe
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 11, 2025
2022b91
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 11, 2025
ca76ccc
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 11, 2025
c876d0d
[HLO Componentization] Populate hlo/testlib sub-component (Phase II).
sdasgup3 Jan 11, 2025
486cfa0
compat: Update forward compatibility horizon to 2025-01-12
tensorflower-gardener Jan 12, 2025
d025a0e
Update GraphDef version to 2105.
tensorflower-gardener Jan 12, 2025
e26ce85
PR #21265: Attempt to add pmap free-threading support
vfdev-5 Jan 12, 2025
a9f7c26
[XLA:Python] Add locking around the pytree registry in free threading…
hawkinsp Jan 13, 2025
74c6423
Merge pull request #84648 from codinglover222:contribute-document
tensorflower-gardener Jan 13, 2025
502cae1
Move xla::gpu::mlir_converter namespace to xla::emitters namespace.
akuegel Jan 13, 2025
a516227
[XLA:SchedulingAnnotations] Handle instructions with control dependen…
seherellis Jan 13, 2025
b5d22c7
PR #20808: [GSPMD] Partitions collective permute instructions in manu…
yliu120 Jan 13, 2025
a786747
[XLA:GPU] Replace genrule by LLVM archive parser to load fatbin in tests
thcmbs Jan 13, 2025
005cb06
Automated Code Change
tensorflower-gardener Jan 13, 2025
0945d5e
Update GraphDef version to 2106.
tensorflower-gardener Jan 13, 2025
68e1e20
initial commit
alekstheod Jan 13, 2025
e1d9704
Fix conflicts
alekstheod Jan 13, 2025
b3932df
Fix lower tensors alloc issue #10233
alekstheod Jan 16, 2025
a5407d3
Fix triton tests
alekstheod Jan 16, 2025
288618c
Fix fabin tests
alekstheod Jan 16, 2025
797657a
Disable matmul failing test
alekstheod Jan 17, 2025
467f778
Fix todo comment
alekstheod Jan 20, 2025
0e049c0
Narrow disabled matmul_op_tests
alekstheod Jan 20, 2025
The diff you're trying to view is too large. We only load the first 3000 changed files.
145 changes: 62 additions & 83 deletions .bazelrc

Large diffs are not rendered by default.

6 changes: 3 additions & 3 deletions CONTRIBUTING.md
Original file line number Diff line number Diff line change
@@ -250,7 +250,7 @@ There are two ways to run TensorFlow unit tests.
bazel by doing as follows:

```bash
export flags="--config=opt -k"
export flags="--config=linux -k"
```

If the tests are to be run on the GPU:
@@ -259,15 +259,15 @@ There are two ways to run TensorFlow unit tests.
flag.

```bash
export flags="--config=opt --config=cuda -k"
export flags="--config=linux --config=cuda -k"
```

* For TensorFlow versions prior v.2.18.0: Add CUDA paths to
LD_LIBRARY_PATH and add the `cuda` option flag.

```bash
export LD_LIBRARY_PATH="${LD_LIBRARY_PATH}:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64:$LD_LIBRARY_PATH"
export flags="--config=opt --config=cuda -k"
export flags="--config=linux --config=cuda -k"
```

For example, to run all tests under tensorflow/python, do:
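The flag exports above can be combined into a single dry-run sketch. The concrete example command is truncated in this excerpt, so the target pattern `//tensorflow/python/...` is an assumption, and the bazel invocation is printed rather than executed:

```shell
# Sketch only: compose the test flags described above and print the
# resulting bazel invocation without running it. The target pattern
# //tensorflow/python/... is assumed, not taken from this excerpt.
export flags="--config=linux -k"
cmd="bazel test ${flags} //tensorflow/python/..."
echo "would run: ${cmd}"
```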
15 changes: 15 additions & 0 deletions ci/devinfra/docker/windows/Dockerfile
@@ -42,6 +42,7 @@ RUN C:\TEMP\vs_community.exe \
--add Microsoft.VisualStudio.Workload.NativeDesktop \
--add Microsoft.VisualStudio.Component.VC.14.39.17.9.x86.64 \
--add Microsoft.VisualStudio.Component.Windows11SDK.22621 \
--add Microsoft.VisualStudio.Component.VC.ATL \
|| IF "%ERRORLEVEL%"=="3010" EXIT 0

SHELL ["powershell.exe", "-ExecutionPolicy", "Bypass", "-Command", \
@@ -152,4 +153,18 @@ RUN (New-Object Net.WebClient).DownloadFile( \
$env:PATH = [Environment]::GetEnvironmentVariable('PATH', 'Machine') + ';C:\tools\bazel'; \
[Environment]::SetEnvironmentVariable('PATH', $env:PATH, 'Machine');

ENV CLOUDSDK_CORE_DISABLE_PROMPTS 1
RUN (New-Object Net.WebClient).DownloadFile('https://dl.google.com/dl/cloudsdk/channels/rapid/google-cloud-sdk.zip', 'C:\Temp\google-cloud-sdk.zip'); \
Expand-Archive -Path 'C:\Temp\google-cloud-sdk.zip' -DestinationPath $env:ProgramFiles -Verbose:$false
RUN & \"$env:ProgramFiles\\google-cloud-sdk\\install.bat\" --path-update false
RUN $env:Path += \";$env:ProgramFiles\\google-cloud-sdk\\bin\"; \
[Environment]::SetEnvironmentVariable('Path', $env:Path, [EnvironmentVariableTarget]::Machine);
# Re-enable prompts for interactive use.
ENV CLOUDSDK_CORE_DISABLE_PROMPTS=""

# MSYS attempts to use non-cmd versions, which aren't meant for Windows
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias gcloud=gcloud.cmd'
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias gsutil=gsutil.cmd'
RUN Add-Content -Path C:\tools\msys64\.bashrc -Value 'alias bq=bq.cmd'

SHELL ["cmd.exe", "/s", "/c"]
2 changes: 1 addition & 1 deletion ci/official/any.sh
@@ -36,7 +36,7 @@
# export TF_ANY_EXTRA_ENV=ci/official/envs/local_rbe
# ./any.sh
# ...
set -euxo pipefail
set -exo pipefail
cd "$(dirname "$0")/../../" # tensorflow/
# Any request that includes "nightly_upload" should just use the
# local multi-cache (public read-only cache + disk cache) instead.
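The CI scripts in this sync change `set -euxo pipefail` to `set -exo pipefail`, i.e. they drop the `u` (nounset) option. A minimal sketch of what that changes, using a made-up variable name for illustration:

```shell
# Under `set -u`, expanding an unset variable is a fatal error; without
# it, the expansion silently yields an empty string. MAYBE_UNSET is a
# hypothetical stand-in for an optional environment variable.
unset MAYBE_UNSET
if ( set -u; : "${MAYBE_UNSET}" ) 2>/dev/null; then
  echo "with -u: expansion succeeded"
else
  echo "with -u: expansion aborted"
fi
( set +u; echo "without -u: value='${MAYBE_UNSET}'" )
```

Dropping `-u` therefore lets the scripts probe optional variables without guarding every expansion with a `${VAR:-}` default.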
2 changes: 1 addition & 1 deletion ci/official/bisect.sh
@@ -32,7 +32,7 @@
# export TF_BISECT_BAD=a_failing_commit_sha
# export TF_ANY_TARGETS="quoted list of targets, like on the command line"
# export TF_ANY_MODE=test
set -euxo pipefail
set -exo pipefail
cd "$(dirname "$0")/../../" # tensorflow/
export TFCI="$(echo $TFCI | sed 's/,nightly_upload/,public_cache,disk_cache/')"
git bisect start "$TF_BISECT_BAD" "$TF_BISECT_GOOD"
@@ -17,7 +17,7 @@

# Check and rename wheels with auditwheel. Inserts the platform tags like
# "manylinux_xyz" into the wheel filename.
set -euxo pipefail
set -exo pipefail

for wheel in /tf/pkg/*.whl; do
echo "Checking and renaming $wheel..."
@@ -14,7 +14,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
set -euxo pipefail
set -exo pipefail

# Run this from inside the tensorflow github directory.
# Usage: setup_venv_test.sh venv_and_symlink_name "glob pattern for one wheel file"
18 changes: 12 additions & 6 deletions ci/official/containers/ml_build/Dockerfile
@@ -1,5 +1,8 @@
################################################################################
FROM ubuntu:22.04@sha256:58b87898e82351c6cf9cf5b9f3c20257bb9e2dcf33af051e12ce532d7f94e3fe AS devel
ARG BASE_IMAGE=ubuntu:22.04@sha256:58b87898e82351c6cf9cf5b9f3c20257bb9e2dcf33af051e12ce532d7f94e3fe
FROM $BASE_IMAGE AS devel
# See https://docs.docker.com/reference/dockerfile/#understand-how-arg-and-from-interact
# on why we cannot reference BASE_IMAGE again unless we declare it again.
################################################################################

# Install devtoolset build dependencies
@@ -20,15 +23,15 @@ RUN /build_devtoolset.sh devtoolset-9 /dt9
# Setup Python
COPY setup.python.sh /setup.python.sh
COPY builder.requirements.txt /builder.requirements.txt
RUN /setup.python.sh python3.9 builder.requirements.txt
RUN /setup.python.sh python3.10 builder.requirements.txt
RUN /setup.python.sh python3.11 builder.requirements.txt
RUN /setup.python.sh python3.13 builder.requirements.txt
RUN /setup.python.sh python3.9 /builder.requirements.txt
RUN /setup.python.sh python3.10 /builder.requirements.txt
RUN /setup.python.sh python3.11 /builder.requirements.txt
RUN /setup.python.sh python3.13 /builder.requirements.txt

# Since we are using python3.12 as the default python version, we need to
# install python3.12 last for now.
# TODO(b/376338367): switch to pyenv.
RUN /setup.python.sh python3.12 builder.requirements.txt
RUN /setup.python.sh python3.12 /builder.requirements.txt

# Setup links for TensorFlow to compile.
# Referenced in devel.usertools/*.bazelrc.
@@ -41,6 +44,9 @@ RUN ln -sf /usr/lib/python3.12 /usr/lib/tf_python
# Make sure clang is on the path
RUN ln -s /usr/lib/llvm-18/bin/clang /usr/bin/clang

# Link the compat driver to the location if available.
RUN if [ -e "/usr/local/cuda/compat/libcuda.so.1" ]; then ln -s /usr/local/cuda/compat/libcuda.so.1 /usr/lib/x86_64-linux-gnu/libcuda.so.1; fi

# Install various tools.
# - bats: bash unit testing framework
# - bazelisk: always use the correct bazel version
2 changes: 1 addition & 1 deletion ci/official/containers/ml_build/setup.python.sh
@@ -24,7 +24,7 @@ VERSION=$1
REQUIREMENTS=$2

# Install Python packages for this container's version
if [[ ${VERSION} == "python3.13" ]]; then
if [[ ${VERSION} == "python3.13" || ${VERSION} == "python3.12" ]]; then
cat >pythons.txt <<EOF
$VERSION
$VERSION-dev
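The condition above now also matches python3.12. A self-contained sketch of what the extended branch writes (versions and file name per the snippet; run in a temporary directory, since the heredoc is truncated in this excerpt):

```shell
# Reproduces the extended version check from setup.python.sh: for the
# newly covered python3.12, write a pythons.txt listing the interpreter
# and its -dev package.
cd "$(mktemp -d)"
VERSION=python3.12
if [[ ${VERSION} == "python3.13" || ${VERSION} == "python3.12" ]]; then
  cat >pythons.txt <<EOF
$VERSION
$VERSION-dev
EOF
fi
cat pythons.txt
```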
1 change: 1 addition & 0 deletions ci/official/envs/ci_default
@@ -42,6 +42,7 @@ TFCI_DOCKER_PULL_ENABLE=
TFCI_DOCKER_REBUILD_ARGS=
TFCI_DOCKER_REBUILD_ENABLE=
TFCI_DOCKER_REBUILD_UPLOAD_ENABLE=
TFCI_FIND_BIN=find
TFCI_GIT_DIR=
TFCI_INDEX_HTML_ENABLE=
TFCI_INSTALLER_WHL_ENABLE=
5 changes: 3 additions & 2 deletions ci/official/envs/linux_x86
@@ -12,7 +12,7 @@
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION --config release_cpu_linux"
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION --repo_env=USE_PYWRAP_RULES=True --config release_cpu_linux"
TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=linux_cpu
TFCI_BUILD_PIP_PACKAGE_ARGS="--repo_env=WHEEL_NAME=tensorflow_cpu"
TFCI_DOCKER_ENABLE=1
@@ -25,5 +25,6 @@ TFCI_OUTPUT_DIR=build_output
TFCI_WHL_AUDIT_ENABLE=1
TFCI_WHL_AUDIT_PLAT=manylinux2014_x86_64
TFCI_WHL_BAZEL_TEST_ENABLE=1
TFCI_WHL_SIZE_LIMIT=240M
# TODO: Set back to 240M once the wheel size is fixed.
TFCI_WHL_SIZE_LIMIT=270M
TFCI_WHL_SIZE_LIMIT_ENABLE=1
3 changes: 2 additions & 1 deletion ci/official/envs/macos_arm64
@@ -21,7 +21,8 @@ TFCI_MACOS_BAZEL_TEST_DIR_ENABLE=1
TFCI_MACOS_BAZEL_TEST_DIR_PATH="/Volumes/BuildData/bazel_output"
TFCI_OUTPUT_DIR=build_output
TFCI_WHL_BAZEL_TEST_ENABLE=1
TFCI_WHL_SIZE_LIMIT=240M
# TODO: Set back to 240M once the wheel size is fixed.
TFCI_WHL_SIZE_LIMIT=270M
TFCI_WHL_SIZE_LIMIT_ENABLE=1

# 3.11 is the system python on our images
21 changes: 20 additions & 1 deletion ci/official/envs/windows_x86
@@ -14,7 +14,26 @@
# ==============================================================================
TFCI_DOCKER_ENABLE=1
TFCI_DOCKER_PULL_ENABLE=1
TFCI_DOCKER_IMAGE="gcr.io/tensorflow-testing/tf-win2019-rbe@sha256:1082ef4299a72e44a84388f192ecefc81ec9091c146f507bc36070c089c0edcc"
TFCI_DOCKER_IMAGE="gcr.io/tensorflow-testing/tf-win2019-rbe@sha256:d3577d20dea75966faf7fd03479c71462441937df5694259109c2ee1d002a3dd"
TFCI_BAZEL_BAZELRC_ARGS="--output_user_root=C:/t"
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION"
TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=windows_x86_cpu
TFCI_OUTPUT_DIR=build_output
TFCI_FIND_BIN=C:/tools/msys64/usr/bin/find.exe

# TODO(belitskiy): Add a link to the Dockerfile comment that explains this more.
# Used to simulate a T:\ drive within the container, to a limited extent,
# via a symlink.
# Helpful since the internal CI utilizes a T:\ drive, part of which is mounted
# to the container, and would result in C:\<path> != T:\<path> mismatches,
# when using variables like `TFCI_OUTPUT_DIR` in `docker exec` commands,
# requiring conditional path adjustments throughout the CI scripts.
# Note: This does not work for `docker cp` commands.
TFCI_OUTPUT_WIN_DOCKER_DIR='C:/drive_t'

# Docker on Windows doesn't support the `host` networking mode, and so
# port-forwarding is required for the container to detect it's running on GCE.
export IP_ADDR=$(powershell -command "(Get-NetIPAddress -AddressFamily IPv4 -InterfaceAlias 'vEthernet (nat)').IPAddress")
netsh interface portproxy add v4tov4 listenaddress=$IP_ADDR listenport=80 connectaddress=169.254.169.254 connectport=80
# A local firewall rule for the container is added in
# ci/official/utilities/setup_docker.sh.
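The drive-letter remapping described in the `TFCI_OUTPUT_WIN_DOCKER_DIR` comment above can be sketched as a small helper. This is a hypothetical illustration of the idea only; the CI's actual helper (`replace_drive_letter_with_prefix` in the setup utilities) may have a different signature:

```shell
#!/bin/bash
# Hypothetical sketch: rewrite a host-side Windows path such as "T:/out"
# to its container-side view under a prefix such as "C:/drive_t".
replace_drive_letter() {
  local path="$1" prefix="$2"
  # Drop the leading "<letter>:" and prepend the container-side prefix.
  echo "${prefix}${path#?:}"
}

replace_drive_letter "T:/tmp/build_output" "C:/drive_t"
# -> C:/drive_t/tmp/build_output
```

With the `C:/drive_t` symlink in place inside the container, both the host's `T:\` view and the container's `C:\` view resolve to the same files, avoiding conditional path handling in the CI scripts.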
49 changes: 49 additions & 0 deletions ci/official/envs/windows_x86_2022
@@ -0,0 +1,49 @@
# Copyright 2023 The TensorFlow Authors. All Rights Reserved.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
# ==============================================================================
TFCI_DOCKER_ENABLE=1
TFCI_DOCKER_PULL_ENABLE=1
TFCI_DOCKER_IMAGE="gcr.io/tensorflow-testing/tf-win2022@sha256:915cb093630432c38b028f56bd31116a5559ebbc688d427b6092d86828ae03bc"
TFCI_BAZEL_BAZELRC_ARGS="--output_user_root=C:/t"
TFCI_BAZEL_COMMON_ARGS="--repo_env=HERMETIC_PYTHON_VERSION=$TFCI_PYTHON_VERSION --config=windows_x86_cpu"
TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX=windows_x86_cpu
TFCI_OUTPUT_DIR=build_output
TFCI_FIND_BIN=C:/tools/msys64/usr/bin/find.exe
TFCI_LIB_SUFFIX="-cpu-windows-x86_64"
# auditwheel is not supported for Windows
TFCI_WHL_AUDIT_ENABLE=0
TFCI_WHL_AUDIT_PLAT=0
# Tests are extremely slow at the moment
TFCI_WHL_BAZEL_TEST_ENABLE=0
TFCI_WHL_SIZE_LIMIT=450M
TFCI_WHL_SIZE_LIMIT_ENABLE=1
TFCI_WHL_IMPORT_TEST_ENABLE=1
TFCI_PYTHON_VERIFY_PIP_INSTALL_ARGS=""

# TODO(belitskiy): Add a link to the Dockerfile comment that explains this more.
# Used to simulate a T:\ drive within the container, to a limited extent,
# via a symlink.
# Helpful since the internal CI utilizes a T:\ drive, part of which is mounted
# to the container, and would result in C:\<path> != T:\<path> mismatches,
# when using variables like `TFCI_OUTPUT_DIR` in `docker exec` commands,
# requiring conditional path adjustments throughout the CI scripts.
# Note: This does not work for `docker cp` commands.
TFCI_OUTPUT_WIN_DOCKER_DIR='C:/drive_t'

# Docker on Windows doesn't support the `host` networking mode, and so
# port-forwarding is required for the container to detect it's running on GCE.
export IP_ADDR=$(powershell -command "(Get-NetIPAddress -AddressFamily IPv4 -InterfaceAlias 'vEthernet (nat)').IPAddress")
netsh interface portproxy add v4tov4 listenaddress=$IP_ADDR listenport=80 connectaddress=169.254.169.254 connectport=80
# A local firewall rule for the container is added in
# ci/official/utilities/setup_docker.sh.
10 changes: 7 additions & 3 deletions ci/official/libtensorflow.sh
@@ -25,10 +25,14 @@ if [[ "$TFCI_NIGHTLY_UPDATE_VERSION_ENABLE" == 1 ]]; then
tfrun python3 tensorflow/tools/ci_build/update_version.py --nightly
fi

tfrun bazel $TFCI_BAZEL_BAZELRC_ARGS test $TFCI_BAZEL_COMMON_ARGS --config=linux_libtensorflow_test
tfrun bazel $TFCI_BAZEL_BAZELRC_ARGS build $TFCI_BAZEL_COMMON_ARGS --config=linux_libtensorflow_build
if [[ $(uname -s) != MSYS_NT* ]]; then
  tfrun bazel $TFCI_BAZEL_BAZELRC_ARGS test $TFCI_BAZEL_COMMON_ARGS --config=linux_libtensorflow_test
  tfrun bazel $TFCI_BAZEL_BAZELRC_ARGS build $TFCI_BAZEL_COMMON_ARGS --config=linux_libtensorflow_build
else
  tfrun bazel $TFCI_BAZEL_BAZELRC_ARGS build $TFCI_BAZEL_COMMON_ARGS --config=windows_libtensorflow_build
fi

tfrun ./ci/official/utilities/repack_libtensorflow.sh "$TFCI_OUTPUT_DIR" "$TFCI_LIB_SUFFIX"
tfrun bash ./ci/official/utilities/repack_libtensorflow.sh "$TFCI_OUTPUT_DIR" "$TFCI_LIB_SUFFIX"

if [[ "$TFCI_ARTIFACT_STAGING_GCS_ENABLE" == 1 ]]; then
# Note: -n disables overwriting previously created files.
11 changes: 3 additions & 8 deletions ci/official/pycpp.sh
@@ -16,7 +16,7 @@
source "${BASH_SOURCE%/*}/utilities/setup.sh"

if [[ `uname -s | grep -P '^MSYS_NT'` ]]; then
  PROFILE_JSON_PATH=$(replace_drive_letter_with_c "$TFCI_OUTPUT_DIR")
  PROFILE_JSON_PATH=$(replace_drive_letter_with_prefix "$TFCI_OUTPUT_WIN_DOCKER_DIR")
  PROFILE_JSON_PATH="$PROFILE_JSON_PATH/profile.json.gz"
else
  PROFILE_JSON_PATH="$TFCI_OUTPUT_DIR/profile.json.gz"
@@ -29,14 +29,9 @@ if [[ "$TFCI_WHL_NUMPY_VERSION" == 1 ]]; then
fi

if [[ $TFCI_PYCPP_SWAP_TO_BUILD_ENABLE == 1 ]]; then
  tfrun bazel build $TFCI_BAZEL_COMMON_ARGS --profile "$PROFILE_JSON_PATH" --@local_config_cuda//cuda:override_include_cuda_libs=true --@local_tsl//third_party/py:verify_manylinux=false --config="${TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX}_pycpp_test"
  tfrun bazel $TFCI_BAZEL_BAZELRC_ARGS build $TFCI_BAZEL_COMMON_ARGS --profile "$PROFILE_JSON_PATH" --@local_config_cuda//cuda:override_include_cuda_libs=true --config="${TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX}_pycpp_test"
else
  # TODO(belitskiy): Clean this up when migrating to new VM/Docker image
  if [[ `uname -s | grep -P '^MSYS_NT'` ]]; then
    tfrun bazel --output_user_root 'C:/tmp' test $TFCI_BAZEL_COMMON_ARGS --profile "$PROFILE_JSON_PATH" --@local_config_cuda//cuda:override_include_cuda_libs=true --@local_tsl//third_party/py:verify_manylinux=false --config="${TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX}_pycpp_test"
  else
    tfrun bazel test $TFCI_BAZEL_COMMON_ARGS --profile "$PROFILE_JSON_PATH" --@local_config_cuda//cuda:override_include_cuda_libs=true --@local_tsl//third_party/py:verify_manylinux=false --config="${TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX}_pycpp_test"
  fi
  tfrun bazel $TFCI_BAZEL_BAZELRC_ARGS test $TFCI_BAZEL_COMMON_ARGS --profile "$PROFILE_JSON_PATH" --@local_config_cuda//cuda:override_include_cuda_libs=true --config="${TFCI_BAZEL_TARGET_SELECTING_CONFIG_PREFIX}_pycpp_test"
fi

# Note: the profile can be viewed by visiting chrome://tracing in a Chrome browser.
@@ -28,6 +28,20 @@ requests >= 2.31.0
packaging==23.2
setuptools==70.0.0
jax==0.4.7
# NVIDIA CUDA dependencies
# Note that the wheels are downloaded only when the targets in the bazel
# command depend on these wheels.
nvidia-cublas-cu12 == 12.5.3.2
nvidia-cuda-cupti-cu12 == 12.5.82
nvidia-cuda-nvrtc-cu12 == 12.5.82
nvidia-cuda-runtime-cu12 == 12.5.82
nvidia-cudnn-cu12 == 9.3.0.75
nvidia-cufft-cu12 == 11.2.3.61
nvidia-curand-cu12 == 10.3.6.82
nvidia-cusolver-cu12 == 11.6.3.83
nvidia-cusparse-cu12 == 12.5.1.3
nvidia-nccl-cu12 == 2.23.4
nvidia-nvjitlink-cu12 == 12.5.82
# The dependencies below are needed for TF wheel testing.
tensorflow-io-gcs-filesystem==0.37.1
libclang >= 13.0.0
@@ -430,6 +430,69 @@ numpy==1.26.4 \
    #   opt-einsum
    #   scipy
    #   tb-nightly
nvidia-cublas-cu12==12.5.3.2 \
    --hash=sha256:4960f3dc5f39699acadf76fa6d94b10a2a00f2956c2c442efa299fb22b0748f3 \
    --hash=sha256:7d0191251180de606023d396b94d66f66470a0ae96d1dbb906c7656ea0f71eda \
    --hash=sha256:ca070ad70e9fa6654084575d01bd001f30cc4665e33d4bb9fc8e0f321caa034b
    # via
    #   -r ci/official/requirements_updater/requirements.in
    #   nvidia-cudnn-cu12
    #   nvidia-cusolver-cu12
nvidia-cuda-cupti-cu12==12.5.82 \
    --hash=sha256:4f835281cf492e2bedd153f5c3de9da8f1d775a419468305e64ce73b3b0c6dc3 \
    --hash=sha256:bde77a5feb66752ec61db2adfe47f56b941842825b4c7e2068aff27c9d107953 \
    --hash=sha256:d32c06490c6ba35c4323730820c7d0c4c126c04ed58d2f57275adb8d54b138fe
    # via -r ci/official/requirements_updater/requirements.in
nvidia-cuda-nvrtc-cu12==12.5.82 \
    --hash=sha256:3dbd97b0104b4bfbc3c4f8c79cd2496307c89c43c29a9f83125f1d76296ff3fd \
    --hash=sha256:5bb6a0eb01d4974bb7ca3d48bd3859472debb3c3057a5e7de2b08fbdf35eed7e \
    --hash=sha256:e5db37e990056c70953b7772dd778336ef9da0a0b5bb28f9f2a61c2e42b51d78
    # via -r ci/official/requirements_updater/requirements.in
nvidia-cuda-runtime-cu12==12.5.82 \
    --hash=sha256:0fd5fbca289bceb9f0690aa9858f06187b554fdeb7e2711dfd5bb3ce58900b46 \
    --hash=sha256:3e79a060e126df40fd3a068f3f787eb000fa51b251ec6cd97d09579632687115 \
    --hash=sha256:71f015dbf9df05dd71f7480132c6ebf47a6ceb2ab53d7db8e08e4b30ebb87e14
    # via -r ci/official/requirements_updater/requirements.in
nvidia-cudnn-cu12==9.3.0.75 \
    --hash=sha256:9ad9c6929ebb5295eb4a1728024666d1c88283373e265a0c5c883e6f9d5cd76d \
    --hash=sha256:c5cf7ff3415e446adf195a5b7dd2ba56cd00c3ee78bfdc566e51698931aa4b7f \
    --hash=sha256:c819e82eed8cf564b9d37478ea4eab9e87194bb3b7f7f8098bc1f67c9b80f1b6
    # via -r ci/official/requirements_updater/requirements.in
nvidia-cufft-cu12==11.2.3.61 \
    --hash=sha256:4a8f6f0ce93c52a50ee83422a80472b5f376054a63f38532d0eab4007e7ef28b \
    --hash=sha256:6d45b48a5ee7599e57131129cda2c58544d9b78b95064d3ec3e5c6b96e2b58cc \
    --hash=sha256:9a6e8df162585750f61983a638104a48c756aa13f9f48e19ab079b38e3c828b8
    # via -r ci/official/requirements_updater/requirements.in
nvidia-curand-cu12==10.3.6.82 \
    --hash=sha256:0631ba65231260ad832ce233ddda57e7b3b7158eabf000d78e46cbb5bd5b7aae \
    --hash=sha256:2823fb27de4e44dbb22394a6adf53aa6e1b013aca0f8c22867d1cfae58405536 \
    --hash=sha256:36aabeb5990297bbce3df324ea7c7c13c3aabb140c86d50ab3b23e4ec61672f1
    # via -r ci/official/requirements_updater/requirements.in
nvidia-cusolver-cu12==11.6.3.83 \
    --hash=sha256:1b8b77d2fe8abe72bb722dafb708cceaeb81f1a03999477f20b33b34f46ab885 \
    --hash=sha256:6224732963cba312a84c78114b9a38c4ffabb2e2a6a120923ac99ba6f895c8cf \
    --hash=sha256:93cfafacde4428b71778eeb092ec615a02a3d05404da1bcf91c53e3fa1bce42b
    # via -r ci/official/requirements_updater/requirements.in
nvidia-cusparse-cu12==12.5.1.3 \
    --hash=sha256:016df8e993c437e8301e62739f01775cba988fd5253cd4c64173f8e8d2f8e752 \
    --hash=sha256:33520db374e2f5ebc976d6faa1852b98c398a57e6f71150fe59705928596ffd1 \
    --hash=sha256:7b97fd01f0a61628af99d0efd52132fccc8c18fc5c509f13802dccf0574a19c2
    # via
    #   -r ci/official/requirements_updater/requirements.in
    #   nvidia-cusolver-cu12
nvidia-nccl-cu12==2.23.4 \
    --hash=sha256:aa946c8327e22ced28e7cef508a334673abc42064ec85f02d005ba1785ea4cec \
    --hash=sha256:b097258d9aab2fa9f686e33c6fe40ae57b27df60cedbd15d139701bb5509e0c1
    # via -r ci/official/requirements_updater/requirements.in
nvidia-nvjitlink-cu12==12.5.82 \
    --hash=sha256:98103729cc5226e13ca319a10bbf9433bbbd44ef64fe72f45f067cacc14b8d27 \
    --hash=sha256:e782564d705ff0bf61ac3e1bf730166da66dd2fe9012f111ede5fc49b64ae697 \
    --hash=sha256:f9b37bc5c8cf7509665cb6ada5aaa0ce65618f2332b7d3e78e9790511f111212
    # via
    #   -r ci/official/requirements_updater/requirements.in
    #   nvidia-cufft-cu12
    #   nvidia-cusolver-cu12
    #   nvidia-cusparse-cu12
opt-einsum==3.3.0 \
    --hash=sha256:2455e59e3947d3c275477df7f5205b30635e266fe6dc300e3d9f9646bfcea147 \
    --hash=sha256:59f6475f77bbc37dcf7cd748519c0ec60722e91e63ca114e68821c0c54a46549