Since flang-rt builds were added to the flang-aarch64-libcxx bot, it has been failing with:
The theory is that this is because the host compiler includes libc++ and uses that, but the stage 1 compiler does not. So it is mixing libc++ things built by the host compiler with libstdc++ things the stage 1 compiler has built as part of the runtimes.
#134990 aims to fix this by passing the LLVM_ENABLE_LIBCXX setting on to the runtimes build.
The theory is sound, but the bot still failed:
This makes sense given that libcxx is not in LLVM_ENABLE_RUNTIMES, and the stage 1 compiler has no way of knowing that the other (host) compiler exists in this particular bot's setup. Which is...
An llvm release package is copied into /usr/local/ and "cc" and "cxx" are set up to point to the clang and clang++ in that package.
The library directory of that package is added to ld.so.conf. This means libc++ is not installed system-wide, which is important.
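For context, a rough sketch of that setup (the package name, version and paths below are placeholders, not the bot's real values):
```sh
# Hypothetical setup sketch; the package name and paths are placeholders.
tar -C /usr/local -xf clang+llvm-<version>-aarch64-linux-gnu.tar.xz
ln -s /usr/local/clang+llvm-<version>/bin/clang   /usr/local/bin/cc
ln -s /usr/local/clang+llvm-<version>/bin/clang++ /usr/local/bin/cxx
# Make the package's libraries (including its libc++) visible to the
# dynamic loader, without installing libc++ system-wide.
echo "/usr/local/clang+llvm-<version>/lib" > /etc/ld.so.conf.d/host-clang.conf
ldconfig
```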
So what can we do to fix this?
I think the PR itself is fine, but the bot needs to be updated to work with it.
We need to:
Not test flang and flang-rt using mismatched versions of libc++ and/or libstdc++.
Not hardcode compiler paths into llvm-zorg, which would be a major pain when we come to update the host compiler.
Some routes we could take...
Pass the include and library paths to the stage 1 compiler
Note that we must specify LLVM_RUNTIME_TARGETS to be able to pass custom options into the runtimes build.
Doing so changes the check target name for flang-rt.
This works, but we would have to hardcode those paths, or put stable symlinks in our docker image and point to those. Neither is ideal, as it's one more step to reproduce outside of the container for what is, in theory, a pretty simple build.
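For illustration, a hedged sketch of what such a configuration could look like; the /usr/local/<pkg> paths are placeholders and the exact flag set may differ:
```sh
# Sketch only: RUNTIMES_<triple>_* variables are forwarded to the runtimes
# sub-build once LLVM_RUNTIME_TARGETS is set; paths are placeholders.
cmake -G Ninja ../llvm \
  -DLLVM_ENABLE_PROJECTS="clang;mlir;flang" \
  -DLLVM_ENABLE_RUNTIMES="flang-rt" \
  -DLLVM_ENABLE_LIBCXX=On \
  -DLLVM_RUNTIME_TARGETS="aarch64-unknown-linux-gnu" \
  -DRUNTIMES_aarch64-unknown-linux-gnu_CMAKE_CXX_FLAGS="-stdlib=libc++ -isystem /usr/local/<pkg>/include/c++/v1" \
  -DRUNTIMES_aarch64-unknown-linux-gnu_CMAKE_EXE_LINKER_FLAGS="-L/usr/local/<pkg>/lib -Wl,-rpath,/usr/local/<pkg>/lib"
```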
Build the runtime with the host compiler instead of stage 1 compiler
We have to build clang to build flang, but we don't have to use the just-built clang:
This is basically a standalone build of flang-rt (where you point cmake at flang-rt itself, not llvm), but with more steps. It does work, but I'm not sure we're supposed to be doing it this way.
With the way our container is set up, "cc" will be clang and "cxx" will be clang++, so I'd be fine putting those names into llvm-zorg, as they are stable.
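A minimal sketch of that approach, assuming the stage 1 build lives in ./build and cc/cxx resolve to the host clang; the cache variables below are assumptions for illustration, and the exact set a standalone flang-rt configure wants may differ:
```sh
# Step 1 (not shown): build llvm/clang/flang as usual into ./build.
# Step 2: configure flang-rt standalone, pointing cmake at flang-rt itself
# and using the host cc/cxx instead of the stage 1 clang.
# LLVM_DIR and the flang driver path are assumptions, not verified values.
cmake -G Ninja -S ../llvm-project/flang-rt -B build-flang-rt \
  -DCMAKE_C_COMPILER=cc \
  -DCMAKE_CXX_COMPILER=cxx \
  -DCMAKE_Fortran_COMPILER=$PWD/build/bin/flang \
  -DLLVM_DIR=$PWD/build/lib/cmake/llvm
ninja -C build-flang-rt
```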
Change the builder type and build flang-rt standalone using host compiler
The unified tree builder doesn't seem like it would accept an extra step easily, but flang-aarch64-out-of-tree uses the out-of-tree builder, which might. We would end up with a bot that is unusual in two ways (out-of-tree and libc++) rather than one, but it would be easier to fit into llvm-zorg.
Or we look at making it an annotated builder, where there's total freedom. That's a bit much for a theoretically simple change, though.
Add libcxx to runtimes (which does not work)
This one does not work, at least for our goals. It does give the stage 1 clang a libc++ to use, but the tests end up compiling against a newer libc++ than they find when they try to run:
You can fix that with LD_LIBRARY_PATH, but doing that in a bot config is a maintenance problem.
Even if we could get the tests to run this way, we would have tests using one version of libc++ and flang using another. If we actually find a real bug, that's one more dimension to deal with.
So I do not think we should pursue this.
Recommendation
I think we either:
Tell the runtimes build to use the host compiler, assuming this is not massively against the spirit of the runtimes build. We could also do this as a short-term fix, even if it is not a great idea.
Change the builder type and add a step building a standalone flang-rt using the host compiler.
I'll be at Euro LLVM so I'm handing this off to a colleague to finish up.
Notes for them:
Start an llvmbot container using the --entrypoint bash option to skip starting the buildbot agent.
Doing that means you'll have to manually apply the changes that run.sh would have made, namely the ld.so.conf entry and the cc/cxx setup (see the sketch after these notes).
The failure happens in the check step, not when building flang-rt. A few times I thought I couldn't reproduce it, but I just needed to run the tests as well.
The bot is now silent, so if you want to see the latest builds, go to staging.
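A hypothetical sketch of those reproduction steps; the image name, package path and file names are placeholders for whatever the Dockerfile and run.sh actually use:
```sh
docker run -it --entrypoint bash <llvmbot-image>

# Inside the container, apply by hand what run.sh would normally do:
echo "/usr/local/<pkg>/lib" > /etc/ld.so.conf.d/host-clang.conf
ldconfig
ln -s /usr/local/<pkg>/bin/clang   /usr/local/bin/cc
ln -s /usr/local/<pkg>/bin/clang++ /usr/local/bin/cxx

# Then configure and build as the bot does, and remember to run the check
# step: the failure shows up in the tests, not when building flang-rt.
```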
I was able to reproduce this issue as well, and I also reached the conclusion that the problem is mixing things built with libc++ with things built with libstdc++. Below I expand a bit more on the cause.
LLVM_ENABLE_LIBCXX=On makes LLVM build with -stdlib=libc++. llvm::Twine::str() const, defined in Twine.cpp.o (part of the LLVM libraries), is built with this flag.
But AccessTest.cpp.o, from the flang-rt tests, which references llvm::Twine::str, is built without -stdlib=libc++ and thus uses libstdc++.
As a result, when linking, the linker searches for llvm::Twine::str[abi:cxx11]() const instead of llvm::Twine::str() const, and does not find it.
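This is easy to see in isolation. A minimal sketch (file and object names are arbitrary): a function returning std::string mangles differently depending on the standard library, because libstdc++'s std::__cxx11::basic_string carries an abi_tag and libc++'s basic_string does not.
```sh
cat > abi_check.cpp <<'EOF'
#include <string>
std::string get() { return "x"; }
EOF

# Against libstdc++ (the usual default on Linux) the symbol carries the
# [abi:cxx11] tag: _Z3getB5cxx11v
clang++ -c abi_check.cpp -o with_libstdcxx.o
llvm-nm --defined-only with_libstdcxx.o | llvm-cxxfilt

# Against libc++ the same function mangles without the tag: _Z3getv
clang++ -stdlib=libc++ -c abi_check.cpp -o with_libcxx.o
llvm-nm --defined-only with_libcxx.o | llvm-cxxfilt
```
Objects from the two compilations cannot resolve each other's std::string-returning symbols, which is exactly the llvm::Twine::str mismatch above.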
Analyzing the possible solutions listed above, building the runtime with the host compiler instead of the stage 1 compiler seems like the best one to me. It's relatively simple and doesn't require many changes, and since the only runtime that flang-aarch64-libcxx builds is flang-rt, which doesn't have a strong dependency on the version of the C++ compiler and libraries used, it shouldn't be a problem.
I will prepare a PR with this change.
luporl added a commit to luporl/llvm-zorg that referenced this issue on Apr 17, 2025:
After the introduction of flang-rt, the flang-aarch64-libcxx builder started to fail.
Before it, the llvm libraries, flang, and its runtime were all built with the host compiler, but now the flang runtime is built with the stage 1 clang.
The problem is that the C++ libraries of these compilers may be incompatible. The linked issue has more details.
To avoid this issue, build flang-rt with the host compiler.
Fixes llvm/llvm-project#135381