malloc.c:1909:8: error: converting the result of '<<' to a boolean always evaluates to true [-Werror,-Wtautological-constant-compare]
1909 | if (!DEFAULT_THP_PAGESIZE || mp_.thp_mode != malloc_thp_mode_not_supported)
| ^
../sysdeps/unix/sysv/linux/aarch64/malloc-hugepages.h:19:35: note: expanded from macro 'DEFAULT_THP_PAGESIZE'
19 | #define DEFAULT_THP_PAGESIZE (1UL << 21)
elf: Fix elf/tst-decorate-maps on aarch64 after 321e1fc73f
The intention of the call "xmalloc(256 * 1024)" in tst-decorate-maps is
to force malloc() to fall back to using mmap() since such an amount
won't be available from the main heap.
Post 321e1fc73f, on aarch64, the heap gets extended by default by at
least 2MB, so the aforementioned call may be satisfied from the main
heap itself. Thus, increase the amount of memory requested to force the
mmap() path again.
x86: Do not use __builtin_isinf_sign for _Float64x/long double
Neither gcc [1] nor clang [2] handles pseudo-normal numbers correctly
in __builtin_isinf_sign, so disable its usage for the _Float64x and
long double types.
This only affects x86, so add a new define __FP_BUILTIN_ISINF_SIGN_DENORMAL
to gate long double and related types to the libc function instead.
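As an illustration (a hedged sketch, not part of the patch), a pseudo-qNaN
can be constructed by hand on x86, assuming long double is the 80-bit
extended type: a maximum exponent with the explicit integer bit (bit 63 of
the significand) clear.

#include <math.h>
#include <stdio.h>
#include <string.h>

/* Build an x86 80-bit pseudo-qNaN: sign 0, exponent 0x7fff, integer
   bit (bit 63) clear, quiet bit (bit 62) set.  Bytes 0-9 hold the
   value; any remaining bytes of long double are padding.  */
static long double
make_pseudo_qnan (void)
{
  unsigned char b[sizeof (long double)] = { 0 };
  b[7] = 0x40;   /* quiet bit set, integer bit (0x80) clear */
  b[8] = 0xff;
  b[9] = 0x7f;   /* sign 0, exponent 0x7fff */
  long double x;
  memcpy (&x, b, sizeof x);
  return x;
}

int
main (void)
{
  long double x = make_pseudo_qnan ();
  /* With the builtin gated off, classification goes through the libc
     routine and must not raise a spurious "invalid" exception.  */
  printf ("isinf: %d\n", isinf (x));
  return 0;
}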
It fixes the regression on test-ldouble-isinf when built with clang:
Failure: isinf (pseudo_zero): Exception "Invalid operation" set
Failure: isinf (pseudo_inf): Exception "Invalid operation" set
Failure: isinf (pseudo_qnan): Exception "Invalid operation" set
Failure: isinf (pseudo_snan): Exception "Invalid operation" set
Failure: isinf (pseudo_unnormal): Exception "Invalid operation" set
Failure: isinf_downward (pseudo_zero): Exception "Invalid operation" set
Failure: isinf_downward (pseudo_inf): Exception "Invalid operation" set
Failure: isinf_downward (pseudo_qnan): Exception "Invalid operation" set
Failure: isinf_downward (pseudo_snan): Exception "Invalid operation" set
Failure: isinf_downward (pseudo_unnormal): Exception "Invalid operation" set
Failure: isinf_towardzero (pseudo_zero): Exception "Invalid operation" set
Failure: isinf_towardzero (pseudo_inf): Exception "Invalid operation" set
Failure: isinf_towardzero (pseudo_qnan): Exception "Invalid operation" set
Failure: isinf_towardzero (pseudo_snan): Exception "Invalid operation" set
Failure: isinf_towardzero (pseudo_unnormal): Exception "Invalid operation" set
Failure: isinf_upward (pseudo_zero): Exception "Invalid operation" set
Failure: isinf_upward (pseudo_inf): Exception "Invalid operation" set
Failure: isinf_upward (pseudo_qnan): Exception "Invalid operation" set
Failure: isinf_upward (pseudo_snan): Exception "Invalid operation" set
Failure: isinf_upward (pseudo_unnormal): Exception "Invalid operation" set
Checked on x86_64-linux-gnu with gcc-15 and clang-18.
x86: Do not use __builtin_fpclassify for _Float64x/long double
Neither gcc [1] nor clang [2] handles pseudo-normal numbers correctly
in __builtin_fpclassify, so disable its usage for the _Float64x and
long double types.
This only affects x86, so add a new header, fp-builtin-denormal.h, that
defines whether the architecture requires disabling the optimization
through a new glibc define (__FP_BUILTIN_FPCLASSIFY_DENORMAL).
It fixes the regression on test-ldouble-fpclassify and
test-float64x-fpclassify when built with clang:
Failure: fpclassify (pseudo_zero): Exception "Invalid operation" set
Failure: fpclassify (pseudo_inf): Exception "Invalid operation" set
Failure: fpclassify (pseudo_qnan): Exception "Invalid operation" set
Failure: fpclassify (pseudo_snan): Exception "Invalid operation" set
Failure: fpclassify (pseudo_unnormal): Exception "Invalid operation" set
Failure: fpclassify_downward (pseudo_zero): Exception "Invalid operation" set
Failure: fpclassify_downward (pseudo_inf): Exception "Invalid operation" set
Failure: fpclassify_downward (pseudo_qnan): Exception "Invalid operation" set
Failure: fpclassify_downward (pseudo_snan): Exception "Invalid operation" set
Failure: fpclassify_downward (pseudo_unnormal): Exception "Invalid operation" set
Failure: fpclassify_towardzero (pseudo_zero): Exception "Invalid operation" set
Failure: fpclassify_towardzero (pseudo_inf): Exception "Invalid operation" set
Failure: fpclassify_towardzero (pseudo_qnan): Exception "Invalid operation" set
Failure: fpclassify_towardzero (pseudo_snan): Exception "Invalid operation" set
Failure: fpclassify_towardzero (pseudo_unnormal): Exception "Invalid operation" set
Failure: fpclassify_upward (pseudo_zero): Exception "Invalid operation" set
Failure: fpclassify_upward (pseudo_inf): Exception "Invalid operation" set
Failure: fpclassify_upward (pseudo_qnan): Exception "Invalid operation" set
Failure: fpclassify_upward (pseudo_snan): Exception "Invalid operation" set
Failure: fpclassify_upward (pseudo_unnormal): Exception "Invalid operation" set
Checked on x86_64-linux-gnu with gcc-15 and clang-18.
Sergey Kolosov [Mon, 15 Dec 2025 12:00:01 +0000 (13:00 +0100)]
resolv: Add test for NOERROR/NODATA handling [BZ #14308]
Add a test which verifies that getaddrinfo does not fail if one of the
A/AAAA responses is a NOERROR/NODATA reply with recursion unavailable
and the other response provides an address.
Yao Zihong [Fri, 19 Dec 2025 23:46:42 +0000 (17:46 -0600)]
riscv: Add RVV memset for both multiarch and non-multiarch builds
This patch adds an RVV-optimized implementation of memset for RISC-V and
enables it for both multiarch (IFUNC) and non-multiarch builds.
The implementation integrates Hau Hsu's 2023 RVV work under a unified
ifunc-based framework. A vectorized version (__memset_vector) is added
alongside the generic fallback (__memset_generic). The runtime resolver
selects the RVV variant when RISCV_HWPROBE_KEY_IMA_EXT_0 reports vector
support (RVV).
Currently, the resolver still selects the RVV variant even when the RVV
extension is disabled via prctl(). As a consequence, any process that
has RVV disabled via prctl() will receive SIGILL when calling memset().
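A hedged sketch of the resolver logic, using the public riscv_hwprobe
syscall from <asm/hwprobe.h> (the in-glibc resolver uses internal syscall
wrappers and IFUNC plumbing instead; the stand-in implementations here are
illustrative):

#include <stddef.h>
#include <sys/syscall.h>
#include <unistd.h>
#include <asm/hwprobe.h>  /* RISCV_HWPROBE_KEY_IMA_EXT_0, RISCV_HWPROBE_IMA_V */

/* Stand-ins for the real implementations.  */
static void *
__memset_generic (void *s, int c, size_t n)
{
  unsigned char *p = s;
  while (n--)
    *p++ = (unsigned char) c;
  return s;
}

static void *
__memset_vector (void *s, int c, size_t n)
{
  /* The real version uses RVV instructions.  */
  return __memset_generic (s, c, n);
}

typedef void *(*memset_fn) (void *, int, size_t);

/* Select the RVV variant when the kernel reports the V extension.  As
   noted above, this does not account for RVV being disabled via
   prctl(); also, a real resolver cannot call syscall() through libc
   this early and uses internal wrappers.  */
static memset_fn
memset_resolver (void)
{
  struct riscv_hwprobe pair = { .key = RISCV_HWPROBE_KEY_IMA_EXT_0 };
  if (syscall (__NR_riscv_hwprobe, &pair, 1, 0, NULL, 0) == 0
      && (pair.value & RISCV_HWPROBE_IMA_V))
    return __memset_vector;
  return __memset_generic;
}

void *my_memset (void *, int, size_t)
  __attribute__ ((ifunc ("memset_resolver")));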
Co-authored-by: Jerry Shih <jerry.shih@sifive.com>
Co-authored-by: Jeff Law <jeffreyalaw@gmail.com>
Signed-off-by: Yao Zihong <zihong.plct@isrc.iscas.ac.cn>
Reviewed-by: Peter Bergner <bergner@tenstorrent.com>
Will issue a __gttf2 call instead of a __unordtf2 followed by the
comparison.
Using the generic implementation fixes multiple issues with math tests,
such as:
Failure: fmax (0, qNaN): Exception "Invalid operation" set
Failure: fmax (0, -qNaN): Exception "Invalid operation" set
Failure: fmax (-0, qNaN): Exception "Invalid operation" set
Failure: fmax (-0, -qNaN): Exception "Invalid operation" set
Failure: fmax (9, qNaN): Exception "Invalid operation" set
Failure: fmax (9, -qNaN): Exception "Invalid operation" set
Failure: fmax (-9, qNaN): Exception "Invalid operation" set
Failure: fmax (-9, -qNaN): Exception "Invalid operation" set
It has a small performance overhead due to the extra isunordered check
(which could be omitted for the float and double types). Using _Generic
(similar to how __MATH_TG does it) on a two-argument function requires
a lot of boilerplate macros.
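For reference, the generic implementation follows the quiet-comparison
pattern below (a sketch modeled on math/s_fmax_template.c):

#define _GNU_SOURCE
#include <math.h>

/* Sketch of the generic fmax logic: isgreaterequal/isless are quiet
   comparisons, so quiet NaN operands do not raise "invalid"; x + y
   raises it (and returns a quiet NaN) when either operand signals.  */
static double
fmax_sketch (double x, double y)
{
  if (isgreaterequal (x, y))
    return x;
  else if (isless (x, y))
    return y;
  else if (issignaling (x) || issignaling (y))
    return x + y;
  else
    return isnan (y) ? x : y;
}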
[1] https://github.com/llvm/llvm-project/issues/172499
Reviewed-by: H.J. Lu <hjl.tools@gmail.com>
elf: Support vDSO with more than one PT_LOAD with v_addr starting at 0 (BZ 32583)
The setup_vdso code assumes that the vDSO will contain only one PT_LOAD
segment and that 0 is the sentinel for the start mapping address.
Although the kernel avoids adding more than one PT_LOAD for
compatibility reasons, there is no inherent issue that prevents glibc
from supporting a vDSO with multiple PT_LOAD segments (as some wrapper
tools create [1]).
To support multiple PT_LOAD segments, replace the sentinel with a bool
to indicate that the VMA start has already been set.
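A sketch of the idea (illustrative, not the actual setup_vdso code): the
first PT_LOAD of a vDSO may legitimately have p_vaddr == 0, so the value 0
cannot double as "not yet seen".

#include <link.h>      /* ElfW */
#include <stdbool.h>
#include <stddef.h>

/* Compute the lowest PT_LOAD address of a vDSO image; a separate flag
   replaces the "start == 0" sentinel.  */
static ElfW(Addr)
vdso_map_start (const ElfW(Phdr) *phdr, size_t phnum, bool *seen)
{
  ElfW(Addr) start = 0;
  *seen = false;
  for (size_t i = 0; i < phnum; i++)
    if (phdr[i].p_type == PT_LOAD)
      {
        if (!*seen || phdr[i].p_vaddr < start)
          start = phdr[i].p_vaddr;
        *seen = true;
      }
  return start;
}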
Testing is really tricky, since the bug report does not indicate which
tool was used to trigger the issue, nor a runtime that provides a vDSO
with multiple PT_LOAD segments. I had to modify qemu-user with a custom
script to create 2 PT_LOAD sections, remove checks that prevent the
vDSO object from being created, and remove the load bias adjustment
in load_elf_vdso. I could not come up with an easy test case to
integrate with glibc.
The Linux kernel provides vDSO with only one PT_LOAD due to
compatibility reasons. For instance
* arch/arm64/kernel/vdso/vdso.lds.S
/*
 * We must supply the ELF program headers explicitly to get just one
 * PT_LOAD segment, and set the flags explicitly to make segments read-only.
 */
PHDRS
{
    text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
    dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
    note PT_NOTE FLAGS(4); /* PF_R */
}
* arch/x86/entry/vdso/vdso-layout.lds.S
/*
 * We must supply the ELF program headers explicitly to get just one
 * PT_LOAD segment, and set the flags explicitly to make segments read-only.
 */
PHDRS
{
    text PT_LOAD FLAGS(5) FILEHDR PHDRS; /* PF_R|PF_X */
    dynamic PT_DYNAMIC FLAGS(4); /* PF_R */
    note PT_NOTE FLAGS(4); /* PF_R */
    eh_frame_hdr PT_GNU_EH_FRAME;
}
nptl: Make pthread_{clock, timed}join{_np} act on all cancellation (BZ 33717)
The pthread_join/pthread_timedjoin_np/pthread_clockjoin_np functions
will not act on cancellation if (1) some other thread is already
waiting on the 'joinid', or (2) the thread has already exited.
In nptl/pthread_join_common.c, case (1) is due to the CAS doing an
early return:
  else if (__glibc_unlikely (atomic_compare_exchange_weak_acquire (&pd->joinid,
                                                                   &self,
                                                                   NULL)))
    /* There is already somebody waiting for the thread.  */
    return EINVAL;
Same as support_process_state_wait, but wait for the task TID
(obtained with gettid) from the current process. Since the kernel
might remove /proc/<pid>/task/<tid>/status at any time once the
thread terminates, the code needs to handle possible
fopen/getline/fclose failures due to a nonexistent file.
And use the new __pthread_descriptor_valid function that checks
for 'joinstate' to get the thread state instead of 'tid'. The
joinstate is set by the kernel when the thread exits.
nptl: Do not use pthread set_tid_address as state synchronization (BZ #19951)
The use-after-free described in BZ#19951 is due to the use of two
different PD fields, 'joinid' and 'cancelhandling', to describe the
thread state and to synchronise the calls of pthread_join,
pthread_detach, pthread_exit, and normal thread exit.
Any state change may require checking both fields atomically to handle
partial states (e.g., pthread_join() with a cancellation handler to
issue a 'joinid' field rollback).
This patch uses a different PD member with 4 possible states (JOINABLE,
DETACHED, EXITING, and EXITED) instead of the pthread 'tid' field, with
the following logic:
1. On pthread_create, the initial state is set either to JOINABLE or
DETACHED depending on the pthread attribute used.
2. On pthread_detach, a CAS is issued on the state. If the CAS fails,
the thread is already detached (DETACHED) or being terminated (EXITING).
For the former, an EINVAL is returned; for the latter, pthread_detach
should be responsible for joining the thread (and for deallocating any
internal resources).
3. In the exit phase of the wrapper function for the thread start routine
(reached either if the thread function has returned, pthread_exit has
been called, or cancellation has been acted upon), we issue a
CAS on state to set it to the EXITING mode.
If the thread is previously in DETACHED mode, the thread is responsible
for deallocating any resources; otherwise, the thread must be joined
(detached threads cannot deallocate themselves immediately).
4. The clear_tid_field on 'clone' call is changed to set the new 'state'
field on thread exit (EXITED). This state is only reached at thread
termination.
5. The pthread_join implementation is now simpler: the futex wait is done
directly on the thread state, and there is no need to reset it in case of
timeout since the state is now set either by pthread_detach() or by the
kernel on thread termination.
The race condition on pthread_detach is avoided with a single atomic
operation on the PD state: once the mode is set to THREAD_STATE_DETACHED, it
is up to the thread itself to deallocate its memory (done during the exit
phase at pthread_create()).
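A sketch of the single-CAS transition described in items 2 and 3 above
(illustrative names, not the actual glibc internals):

#include <errno.h>
#include <stdatomic.h>

enum { JOINABLE, DETACHED, EXITING, EXITED };

/* One CAS decides ownership of cleanup: if the detacher moves the
   state JOINABLE -> DETACHED, the exiting thread will see DETACHED and
   free itself; otherwise the detacher learns what the state already
   was and acts accordingly.  */
static int
detach_sketch (_Atomic unsigned int *state)
{
  unsigned int expected = JOINABLE;
  if (atomic_compare_exchange_strong (state, &expected, DETACHED))
    return 0;
  if (expected == DETACHED)
    return EINVAL;      /* already detached */
  /* EXITING or EXITED: the exiting thread observed a non-DETACHED
     state and will not free its own resources, so the detaching
     thread must reap it (conceptually, join it).  */
  return 0;
}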
Also, the INVALID_NOT_TERMINATED_TD_P macro is removed since a negative
tid is not possible, and the macro is not used anywhere.
This change triggers an invalid C11 thread test: it creates a thread that
detaches, and after a timeout, the creating thread checks whether the join
fails. The issue is that once thrd_join() is called, the thread's lifetime
is not defined.
Checked on x86_64-linux-gnu, i686-linux-gnu, aarch64-linux-gnu,
arm-linux-gnueabihf, and powerpc64-linux-gnu.
Stefan Liebler [Fri, 19 Dec 2025 10:19:53 +0000 (11:19 +0100)]
build-many-glibcs.py: Fix s390x-linux-gnu.
The recent commit 638d437dbf9c68e40986edaa9b0d1c2e72a1ae81
"Deprecate s390-linux-gnu (31bit)"
leads to:
FAIL: compilers-s390x-linux-gnu gcc build
when it tries to build 31bit libgcc.
The build is fixed by explicitly disabling multilib.
Sunil K Pandey [Tue, 9 Dec 2025 16:57:44 +0000 (08:57 -0800)]
nptl: Optimize trylock for high cache contention workloads (BZ #33704)
Check lock availability before acquisition to reduce cache line
bouncing. Significantly improves trylock throughput on multi-core
systems under heavy contention.
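The idea is the classic test-and-test-and-set pattern; a sketch (not the
actual nptl code):

#include <errno.h>
#include <stdatomic.h>

/* A plain load keeps the cache line in the shared state, so contended
   trylock callers fail without a read-modify-write that would bounce
   the line between cores.  */
static int
trylock_sketch (_Atomic int *lock)
{
  if (atomic_load_explicit (lock, memory_order_relaxed) != 0)
    return EBUSY;                       /* observed busy: skip the CAS */
  int expected = 0;
  return atomic_compare_exchange_strong (lock, &expected, 1) ? 0 : EBUSY;
}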
Tested on x86_64.
Fixes BZ #33704.
Co-authored-by: Alex M Wells <alex.m.wells@intel.com>
Reviewed-by: Wilco Dijkstra <Wilco.Dijkstra@arm.com>
Reinstate the HAVE_64B_ATOMICS configure check that was reverted by
commit 7fec8a5de6826ef9ae440238d698f0fe5a5fb372 due to BZ #33632. This
was fixed by 3dd2cbfa35e0e6e0345633079bd5a83bb822c2d8 by only allowing
64-bit atomics on sem_t if its type is 8-byte aligned. Rebase and add
back the cleanups in include/atomic.h that were omitted.
Fix an issue with sparcv8-linux-gnu-leon3 forcing -mcpu=v8 for rtld.c,
which overrules -mcpu=leon3 and causes __atomic_always_lock_free (4, 0)
to incorrectly return 0 and trigger asserts in atomics. Remove the
override, as it appears to be a workaround for an issue from 1997.
Jiayuan Chen [Mon, 17 Nov 2025 08:06:48 +0000 (16:06 +0800)]
Update struct tcp_info and the corresponding TCP_AO_* structs from Linux 6.17 in netinet/tcp.h
This patch updates struct tcp_info to include new fields from Linux 6.17:
- tcpi_pacing_rate, tcpi_max_pacing_rate
- tcpi_bytes_acked, tcpi_bytes_received
- tcpi_delivery_rate, tcpi_busy_time
- tcpi_delivered, tcpi_delivered_ce
- and many other TCP metrics
Additionally, this patch adds:
- TCP_AO_* definitions (Authentication Option)
- struct tcp_diag_md5sig for INET_DIAG_MD5SIG
- Netlink attribute types for SCM_TIMESTAMPING_OPT_STATS
All changes are synchronized from the Linux kernel's tcp.h without
functional modifications, only code style changes.
Dev Jain [Wed, 10 Dec 2025 15:03:22 +0000 (15:03 +0000)]
malloc: set default tcache fill count to 16
Now that the fastbins are gone, set the default per-size-class length
of the tcache to 16. We observe that doing this retains the original
performance of malloc.
Dev Jain [Wed, 10 Dec 2025 15:00:18 +0000 (15:00 +0000)]
malloc: Remove do_check_remalloced_chunk
do_check_remalloced_chunk checks properties of fastbin chunks, but it
is also used to check properties of other chunks. Hence, remove it and
merge the body of the function into do_check_malloced_chunk.
Stefan Liebler [Tue, 16 Dec 2025 14:20:29 +0000 (15:20 +0100)]
Deprecate s390-linux-gnu (31bit)
The next Linux 6.19 release will remove support for compat syscalls on
s390x with these commits:
https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/commit/?id=0d79affa31cbee477a45642efc49957d05466307
0d79affa31cb Merge branch 'compat-removal'
4ac286c4a8d9 s390/syscalls: Switch to generic system call table generation
f4e1f1b1379d s390/syscalls: Remove system call table pointer from thread_struct
3db5cf935471 s390/uapi: Remove 31 bit support from uapi header files
8e0b986c59c6 s390: Remove compat support
169ebcbb9082 tools: Remove s390 compat support
7afb095df3e3 s390/syscalls: Add pt_regs parameter to SYSCALL_DEFINE0() syscall wrapper
b2da5f6400b4 s390/kvm: Use psw32_t instead of psw_compat_t
8c633c78c23a s390/ptrace: Rename psw_t32 to psw32_t
This patch also removes s390-linux-gnu (31bit) from build-many-glibcs.py.
Then the next update of syscall numbers for Linux 6.19 won't change
sysdeps/unix/sysv/linux/s390/s390-32/arch-syscall.h
Reviewed-by: Andreas K. Huettel <dilfridge@gentoo.org>
posix: Fix getconf symbolic constants defined in limits.h (BZ# 29147)
POSIX.1-2018 defines that the getconf utility shall print the symbolic
constants listed under the headings Maximum Values and Minimum Values [1];
however, the current behaviour is to print the values using pathconf or
sysconf, which yields the implementation-specific values instead of the
system-agnostic ones.
Another issue is that getconf handles such symbolic constants as a
path_var, which requires an additional pathname argument (and that does
not make sense for constant values).
The patch fixes this by adding a new internal type, LIMITS_H, which only
prints the symbolic constant without requiring an additional path.
Only the values defined in the glibc-provided limits.h plus the GNU
extensions are handled.
configure: use TEST_CC to check for --no-error-execstack
ld.lld does not support the --no-error-execstack option, which is
required only to suppress the linker warning while building tests. A new
configure macro, LIBC_TEST_LINKER_FEATURE, is added to check for linker
features using TEST_CC instead of CC.
Checked on x86_64-linux-gnu and aarch64-linux-gnu with gcc and
TEST_CC set to clang-18 and clang-21.
Dev Jain [Wed, 10 Dec 2025 12:15:18 +0000 (12:15 +0000)]
malloc: Enable 2MB THP by default on Aarch64
Linux supports multi-sized Transparent Huge Pages (mTHP). For the purpose
of this patch description, we call the block size mapped by a non-last
pagetable level the traditional THP size (2M for a 4K basepage,
512M for a 64K basepage). Linux now also supports intermediate THP sizes
mapped by the last pagetable level - we call that the mTHP size.
The support for mTHP in Linux has grown better and more stable over time -
applications can benefit from reduced page faults and reduced kernel
memory management overhead, albeit at the cost of internal fragmentation.
We have observed consistent performance boosts with mTHP with little
variance.
As a result, enable 2M THP by default on AArch64. This enables THP even if
the user hasn't passed glibc.malloc.hugetlb=1. If the user has passed it,
we avoid making the system call to check the hugepage size from sysfs, and
override it with the hardcoded 2MB.
There are two additional benefits of this patch, if the transparent
hugepage sysctl is set to madvise or always:
1) The THP size is now hardcoded to 2MB for AArch64. This avoids a
syscall for fetching the THP size from sysfs.
2) On 64K basepage size systems, the traditional THP size is 512M, which
is unusable and impractical. We can instead benefit from the mTHP size of
2M. Apart from the usual benefit of THPs/mTHPs as described above, AArch64
systems benefit from reduced TLB pressure on this mTHP size, commonly
known as the "contpte" size. If the application takes a page fault, and
either the THP sysctl setting is "always", or the virtual memory area
has been madvise(MADV_HUGEPAGE)'d with the sysctl set to "madvise", then
Linux will fault in a 2M mTHP, mapping contiguous pages into the pagetable,
and painting the pagetable entries with the cont-bit. This bit is a hint to
the hardware that the concerned pagetable entry maps a page which is part
of a set of contiguous pages - the TLB then only remembers a single entry
for this set of 2M/64K = 32 pages, because the physical address of any
other page in this contiguous set is computable by the TLB cached physical
address via a linear offset. Hence, what was only possible with the
traditional THP size, is now possible with the mTHP size.
We see a 6.25% performance improvement on SPEC.
If the sysctl is set to never, no transparent hugepages will be created by
the kernel. However, this patch still sets thp_pagesize = 2MB. The benefit
is that on MORECORE() invocation, we extend the heap by 2MB instead of 4KB,
potentially reducing the frequency of this syscall's invocation by 512x.
Note that, there is no difference in cost between an sbrk(2M) and sbrk(4K);
the kernel only does a virtual reservation and does not touch user physical
memory.
Dev Jain [Wed, 10 Dec 2025 12:09:36 +0000 (12:09 +0000)]
malloc: Do not make out-of-bounds madvise call on non-aligned heap
Currently, if the initial program break is not aligned to the system page
size, then we align the pointer down to the page size. If there is a gap
before the heap VMA, then such an adjustment means that the madvise() range
now contains a gap. The behaviour in the upstream kernel is currently this:
madvise() will return -ENOMEM, even though the operation will still succeed
in the sense that the VM_HUGEPAGE flag will be set on the heap VMA. We
*must not* depend on this behaviour - this is an internal kernel
implementation, and earlier kernels may possibly abort the operation
altogether.
The other case is that there is no gap, and as a result we may end up
setting the VM_HUGEPAGE flag on that other VMA too, which is an
unnecessary side effect.
Let us fix this by aligning the pointer up to the page size. We should
also subtract the pointer difference from the size, because if we don't,
since the pointer is now aligned up, the size may cross the heap VMA, thus
leading to the same problem but at the other end.
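A sketch of the resulting adjustment (illustrative, not the actual malloc
code):

#include <stdint.h>
#include <sys/mman.h>

/* Round the start up to a page boundary and shrink the length by the
   same amount, so the madvise() range can neither include a gap before
   the heap VMA nor spill past its end.  */
static void
madvise_thp_sketch (void *p, size_t size, size_t pagesize)
{
  uintptr_t q = (uintptr_t) p;
  /* Assumes pagesize is a power of two.  */
  uintptr_t aligned = (q + pagesize - 1) & ~(uintptr_t) (pagesize - 1);
  size_t skip = aligned - q;
  if (size > skip)
    (void) madvise ((void *) aligned, size - skip, MADV_HUGEPAGE);
}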
There is no need to check this new size against mp_.thp_pagesize to decide
whether to make the madvise() call. The reason we make this check at the
start of madvise_thp() is to check whether the size of the VMA is enough
to map THPs into it. Since that check has passed, all that we need to
ensure now is that q + size does not cross the heap VMA.
The openat2 syscall was added on Linux 5.6, as an extension of openat.
Unlike other open-like functions, the kernel only provides the LFS
variant (so opening files larger than 4GB always succeeds, unlike other
open functions when the offset does not fit in off_t). Also, similar to
other open functions, the new symbol is a cancellable entrypoint.
The added test case only stress-tests some of the syscall's provided
functionality, and it is based on an existing kernel selftest.
A fortify wrapper is added to verify that the argument size is not
larger than the currently supported open_how struct.
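A usage sketch, assuming the new wrapper mirrors the kernel's openat2(2)
prototype (dirfd, path, struct open_how *, size):

#include <fcntl.h>
#include <linux/openat2.h>   /* struct open_how, RESOLVE_* flags */

/* Open PATH read-only, refusing to follow any symbolic links during
   path resolution.  */
static int
open_no_symlinks (const char *path)
{
  struct open_how how =
    {
      .flags = O_RDONLY | O_CLOEXEC,
      .resolve = RESOLVE_NO_SYMLINKS,
    };
  return openat2 (AT_FDCWD, path, &how, sizeof how);
}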
Gnulib added an openat2 module, which uses read-only for the open_how
argument [1]. There is no clear indication whether the kernel will
indeed use the argument as in-out, how it would do so, or for which
kind of functionality [2]. Also, adding a potentially different prototype
than gnulib only would add extra unnecessary friction and extra
wrappers to handle it.
Checked on x86_64-linux-gnu and aarch64-linux-gnu.
The clang support is still experimental and not all test cases run
correctly. Only clang 18 and newer are supported, and only
for x86_64-linux-gnu and aarch64-linux-gnu.
Reviewed-by: Collin Funk <collin.funk1@gmail.com>
Reviewed-by: Sam James <sam@gentoo.org>
Clang warns that another_external_impl always resolves to __internal_impl,
even if external_impl is a weak reference. Using the internal symbol for
both aliases resolves this warning.
This issue also occurs with certain libc_hidden_def usage:
int __internal_impl (...) {}
weak_alias (__internal_impl, __internal_alias)
libc_hidden_weak (__internal_alias)
In this case, using a strong_alias is sufficient to avoid the warning
(since the alias is internal, there is no need to use a weak alias).
Clang warns that the internal external_alias will always resolve to
__GI___internal_impl, even if a weak definition of __GI___internal_impl
is overridden. For this case, a new macro named static_weak_alias is used
to create a strong alias for SHARED, or a weak_alias otherwise.
With these changes, there is no need to check and enable the
-Wno-ignored-attributes suppression when using clang.
Checked with a build on affected ABIs, and a full check on aarch64,
armhf, i686, and x86_64.
H.J. Lu [Sun, 7 Dec 2025 03:33:33 +0000 (11:33 +0800)]
x32: Implement prctl in assembly
Since the variadic prctl function takes at most 5 integer arguments which
are passed in the same integer registers on x32 as a function with 5
integer arguments, we can use assembly for prctl. Since the upper 32 bits
in the last 4 arguments of prctl must be cleared to match the x32 prctl
syscall interface, where the last 4 arguments are unsigned 64-bit longs,
implement prctl in assembly to clear the upper 32 bits in the last 4
arguments, and add a test to verify it.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Florian Weimer <fweimer@redhat.com>
  while (getline (&line, &len, fp) != -1)
    ;
  /* Process LINE.  */
After that commit, line[0] would be equal to '\0' instead of containing
the last line of the file as it did before that commit. A recent POSIX
issue clarified that the behaviors before and after that commit are both
allowed, since the contents of LINE are unspecified after -1 is returned
[1]. However, some programs rely on the previous behavior.
This patch null terminates the buffer upon getdelim/getline's initial
allocation. This is compatible with previous glibc versions, while also
protecting the caller from reading uninitialized memory if the file is
empty, as long as getline/getdelim does the initial allocation.
i386: Fix fmod/fmodf/remainder/remainderf for gcc-12
The __builtin_fmod{f} and __builtin_remainder{f} builtins were added in
gcc 13, and the minimum supported gcc is 12. This patch adds a configure
test to check whether the compiler enables inlining for fmod/remainder,
and uses inline assembly if not.
Wilco Dijkstra [Thu, 4 Dec 2025 15:17:25 +0000 (15:17 +0000)]
nptl: Check alignment of pthread structs
Report assertion failure if the alignment of external pthread structs is
lower than the internal version. This triggers on type mismatches like
in BZ #33632.
James Chesterman [Fri, 28 Nov 2025 11:18:53 +0000 (11:18 +0000)]
aarch64: Optimise AdvSIMD atanhf
Optimise AdvSIMD atanhf by vectorising the special case.
There are asymptotes at x = -1 and x = 1, so return inf for these.
For values where |x| > 1, return NaN.
R.Throughput difference on V2 with GCC@15:
58-60% improvement in special cases.
No regression in fast pass.
James Chesterman [Fri, 28 Nov 2025 11:18:52 +0000 (11:18 +0000)]
aarch64: Optimise AdvSIMD asinhf
Optimise AdvSIMD asinhf by vectorising the special case.
For values greater than 0x1p64, scale the input down first.
This is because the output will overflow with inputs greater than
or equal to this value as there is a squaring operation in the
algorithm.
To scale, do:
2asinh(sqrt[(x-1)/2])
Because:
2asinh(x) = +-acosh(2x^2 + 1)
Apply opposite operations in opposite order for x, and you get:
acosh(x) = 2asinh(sqrt[(x-1)/2]).
Found that using asinh instead of acosh also very closely
approximates asinh(x) for a high input x.
R.Throughput difference on V2 with GCC@15:
25-58% improvement in special cases.
4% regression in fast pass.
James Chesterman [Fri, 28 Nov 2025 11:18:51 +0000 (11:18 +0000)]
aarch64: Optimise AdvSIMD acoshf
Optimise AdvSIMD acoshf by vectorising the special case.
For values greater than 0x1p64, scale the input down first.
This is because the output will overflow with inputs greater than
or equal to this value as there is a squaring operation in the
algorithm.
To scale, do:
2acosh(sqrt[(x+1)/2])
Because:
acosh(x) = 1/2 acosh(2x^2 - 1) for x>=1.
Apply opposite operations in opposite order for x, and you get:
acosh(x) = 2acosh(sqrt[(x+1)/2]).
R.Throughput difference on V2 with GCC@15:
30-49% improvement in special cases.
2% regression in fast pass.
Yury Khrustalev [Wed, 29 Oct 2025 16:14:06 +0000 (16:14 +0000)]
aarch64: Add tests for glibc.cpu.aarch64_bti behaviour
Check that the new tunable changes behaviour correctly:
* When BTI is enforced, any unmarked binary that is loaded
results in an error: either an abort or dlopen error when
this binary is loaded via dlopen.
* When BTI is not enforced, it is OK to load an unmarked
binary.
Yury Khrustalev [Mon, 24 Nov 2025 13:23:35 +0000 (13:23 +0000)]
aarch64: Add configure checks for BTI support
We add configure checks for 3 things:
- Compiler (both CC and TEST_CC) supports -mbranch-protection=bti.
- Linker supports -z force-bti.
- The toolchain supplies object files and target libraries with
the BTI marking.
All three must be true in order for the tests to be valid, so
we check all flags and set the makefile variable accordingly.
James Chesterman [Wed, 19 Nov 2025 21:40:43 +0000 (21:40 +0000)]
aarch64: Optimise AdvSIMD log10
Optimise AdvSIMD log10 by vectorising the special case.
For subnormal input values, use the same scaling technique as
described in the single precision equivalent.
Then check for inf, nan and x<=0.
James Chesterman [Wed, 19 Nov 2025 21:40:42 +0000 (21:40 +0000)]
aarch64: Optimise AdvSIMD log2
Optimise AdvSIMD log2 by vectorising the special case.
For subnormal input values, use the same scaling technique as
described in the single precision equivalent.
Then check for inf, nan and x<=0.
James Chesterman [Wed, 19 Nov 2025 21:40:41 +0000 (21:40 +0000)]
aarch64: Optimise AdvSIMD log
Optimise AdvSIMD log by vectorising the special case.
For subnormal input values, use the same scaling technique as
described in the single precision equivalent.
Then check for inf, nan and x<=0.
James Chesterman [Wed, 19 Nov 2025 14:11:40 +0000 (14:11 +0000)]
aarch64: Optimise AdvSIMD log10f
Optimise AdvSIMD log10f by vectorising the special case.
Use scaling technique on subnormal values, then check for inf and
nan values.
The scaling technique will sqrt the input then multiply the output
by 2 because:
log(sqrt(x)) = 1/2 log(x), so log(x) = 2 log(sqrt(x))
James Chesterman [Wed, 19 Nov 2025 14:11:39 +0000 (14:11 +0000)]
aarch64: Optimise AdvSIMD log2f
Optimise AdvSIMD log2f by vectorising the special case.
Use scaling technique on subnormal values, then check for inf and
nan values.
The scaling technique used will sqrt the input then multiply the
output by 2 because:
log(sqrt(x)) = 1/2 log(x), so log(x) = 2log(sqrt(x))
James Chesterman [Wed, 19 Nov 2025 14:11:38 +0000 (14:11 +0000)]
aarch64: Optimise AdvSIMD logf
Optimise AdvSIMD logf by vectorising the special case.
Use scaling technique on subnormal values, then check for inf and
nan values.
The scaling technique used will sqrt the input then multiply the
output by 2 because:
log(sqrt(x)) = 1/2 log(x), so log(x) = 2log(sqrt(x))
James Chesterman [Wed, 19 Nov 2025 14:11:37 +0000 (14:11 +0000)]
aarch64: Optimise AdvSIMD log1pf
Optimise AdvSIMD log1pf by vectorising the special case and by
reducing the range of values passed to the special case.
Previously, high values such as 0x1.1p127 were treated as special
cases, but now the special cases are for when the input is:
less than or equal to -1
+/- INFINITY
+/- NaN
checks __WORDSIZE == 32 to decide if int128 should be used, which breaks
x32, which has int128 and __WORDSIZE == 32. Check BITS_PER_MP_LIMB == 32
instead of __WORDSIZE == 32. This fixes BZ #33677.
Tested on x32, x86-64 and i686.
Signed-off-by: H.J. Lu <hjl.tools@gmail.com>
Reviewed-by: Adhemerval Zanella <adhemerval.zanella@linaro.org>
uses 64-bit atomic operations on sem_t if 64-bit atomics are supported.
But sem_t may be aligned to 32-bit on 32-bit architectures.
1. Add a macro, SEM_T_ALIGN, for sem_t alignment.
2. Add a macro, HAVE_UNALIGNED_64B_ATOMICS. Define it if unaligned 64-bit
atomic operations are supported.
3. Add a macro, USE_64B_ATOMICS_ON_SEM_T. Define to 1 if 64-bit atomic
operations are supported and SEM_T_ALIGN is at least 8-byte aligned or
HAVE_UNALIGNED_64B_ATOMICS is defined.
4. Assert that size and alignment of sem_t are not lower than those of
the internal struct new_sem.
5. Check USE_64B_ATOMICS_ON_SEM_T, instead of USE_64B_ATOMICS, when using
64-bit atomic operations on sem_t.
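A sketch of how the macros from items 1-3 fit together (illustrative; the
real definitions live in per-architecture sysdeps headers, and the value
of SEM_T_ALIGN below is a hypothetical example):

#include <semaphore.h>

/* Per-architecture constant; 4 on some 32-bit ABIs, 8 elsewhere.  */
#define SEM_T_ALIGN 8

/* HAVE_64B_ATOMICS comes from glibc's internal atomic.h;
   HAVE_UNALIGNED_64B_ATOMICS is only defined where unaligned 64-bit
   atomics work.  */
#if HAVE_64B_ATOMICS \
    && (SEM_T_ALIGN >= 8 || defined HAVE_UNALIGNED_64B_ATOMICS)
# define USE_64B_ATOMICS_ON_SEM_T 1
#else
# define USE_64B_ATOMICS_ON_SEM_T 0
#endif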
Yury Khrustalev [Wed, 12 Nov 2025 10:57:24 +0000 (10:57 +0000)]
scripts: Support custom Git URLs in build-many-glibcs.py
Use environment variables to provide mirror URLs to check out
sources from Git. Each component has a corresponding env var
that will be used if it's present: <component>_GIT_MIRROR.
Note that '<component>' should be upper case, e.g. GLIBC.
Co-authored-by: Carlos O'Donell <carlos@redhat.com>
Reviewed-by: Carlos O'Donell <carlos@redhat.com>
Yury Khrustalev [Wed, 12 Nov 2025 10:55:58 +0000 (10:55 +0000)]
scripts: Support custom FTP mirror URL in build-many-glibcs.py
Allow using custom mirror URLs to download tarballs from a mirror
of ftp.gnu.org via the FTP_GNU_ORG_MIRROR env variable (the default
value is 'https://ftp.gnu.org').
Yury Khrustalev [Mon, 1 Dec 2025 10:09:14 +0000 (10:09 +0000)]
nptl: tests: Fix test-wrapper use in tst-dl-debug-tid.sh
The test wrapper script was used twice: once to run the test
command and a second time within the test command itself, which
seems unnecessary and results in false errors when running
this test.
The allocation_index was being incremented before checking if mmap()
succeeds. If mmap() fails, allocation_index would still be incremented,
creating a gap in the allocations tracking array and making
allocation_index inconsistent with the actual number of successful
allocations.
This fix moves the allocation_index increment to after the mmap()
success check, ensuring it only increments when an allocation actually
succeeds. This maintains proper tracking for leak detection and
prevents gaps in the allocations array.
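A sketch of the fixed pattern (the array bound and helper names are
hypothetical):

#include <stddef.h>
#include <sys/mman.h>

#define MAX_ALLOCATIONS 128            /* hypothetical array size */

static void *allocations[MAX_ALLOCATIONS];
static size_t allocation_index;

/* Only record the mapping (and bump the index) after mmap() is known
   to have succeeded, so the tracking array has no gaps and the index
   matches the number of live allocations.  */
static void *
tracked_mmap (size_t len)
{
  void *p = mmap (NULL, len, PROT_READ | PROT_WRITE,
                  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  if (p != MAP_FAILED && allocation_index < MAX_ALLOCATIONS)
    allocations[allocation_index++] = p;
  return p;
}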
Joseph Myers [Thu, 27 Nov 2025 19:32:49 +0000 (19:32 +0000)]
Define C23 header version macros
C23 defines library macros __STDC_VERSION_<header>_H__ to indicate
that a header has support for new / changed features from C23. Now
that all the required library features are implemented in glibc,
define these macros. I'm not sure this is sufficiently much of a
user-visible feature to be worth a mention in NEWS.
Tested for x86_64.
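For example, a program can probe a header's C23 support via its version
macro (here string.h, whose C23 value is 202311L):

#include <string.h>

#if defined __STDC_VERSION_STRING_H__ && __STDC_VERSION_STRING_H__ >= 202311L
/* C23 additions such as memset_explicit are declared.  */
#endif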
There are various optional C23 features we don't yet have, of which I
might look at the Annex H ones (floating-point encoding conversion
functions and _Float16 functions) next.
* Optional time bases TIME_MONOTONIC, TIME_ACTIVE, TIME_THREAD_ACTIVE.
See
<https://sourceware.org/pipermail/libc-alpha/2023-June/149264.html>
- we need to review / update that patch. (I think patch 2/2,
inventing new names for all the nonstandard CLOCK_* supported by the
Linux kernel, is rather more dubious.)
* Updating conform/ tests for C23.
* Defining the rounding mode macro FE_TONEARESTFROMZERO for RISC-V (as
far as I know, the only architecture supported by glibc that has
hardware support for this rounding mode for binary floating point)
and supporting it throughout glibc and its tests (especially the
string/numeric conversions in both directions that explicitly handle
each possible rounding mode, and various tests that do likewise).
* Annex H floating-point encoding conversion functions. (It's not
entirely clear which are optional even given support for Annex H;
there's some wording applied inconsistently about only being
required when non-arithmetic interchange formats are supported; see
the comments I raised on the WG14 reflector on 23 Oct 2025.)
* _Float16 functions (and other header and testcase support for this
type).
* Decimal floating-point support.
* Fully supporting __int128 and unsigned __int128 as integer types
wider than intmax_t, as permitted by C23. Would need doing in
coordination with GCC, see GCC bug 113887 for more discussion of
what's involved.
The current implementation relies on setting the rounding mode for
different calculations (FE_TOWARDZERO) to obtain correctly rounded
results. For most CPUs, this adds significant performance overhead
because it requires executing a typically slow instruction (to
get/set the floating-point status), necessitates flushing the
pipeline, and breaks some compiler assumptions/optimizations.
The original implementation adds tests to handle underflow in corner
cases, whereas this implementation uses a different strategy that
checks both the mantissa and the result to determine whether the
result is not subject to double rounding.
I tested this implementation on various targets (x86_64, i686, arm,
aarch64, powerpc), including some by manually disabling the compiler
instructions.
Yury Khrustalev [Mon, 24 Nov 2025 11:20:57 +0000 (11:20 +0000)]
aarch64: make GCS configure checks aarch64-only
We only need to enable GCS tests on AArch64 targets, however previously
the configure checks for GCS support in compiler and linker were added
for all targets which was not efficient.
To enable tests for GCS we need 4 things to be true:
- Compiler supports GCS branch protection.
- Test compiler supports GCS branch protection.
- Linker supports GCS marking of binaries.
- The CRT objects provided by the toolchain have GCS marking.
To check for the latter, we add a new macro to aclocal.m4 that allows
grepping the output of readelf.
We check all four and then put the result in one make variable to
simplify checks in makefiles.
The current implementation relies on setting the rounding mode for
different calculations (first to FE_TONEAREST and then to FE_TOWARDZERO)
to obtain correctly rounded results. For most CPUs, this adds a significant
performance overhead since it requires executing a typically slow
instruction (to get/set the floating-point status), it necessitates
flushing the pipeline, and breaks some compiler assumptions/optimizations.
This patch introduces a new implementation originally written by Szabolcs
for musl, which utilizes mostly integer arithmetic. Floating-point
arithmetic is used to raise the expected exceptions, without the need for
fenv.h operations.
I added some changes compared to the original code:
* Fixed some signaling NaN issues when the third argument is NaN.
* Use math_uint128.h for the 64-bit multiplication operation. It allows
the compiler to use 128-bit types where available, which enables some
optimizations on certain targets (for instance, MIPS64).
* Fixed an arm32 issue where the libgcc routine might not respect the
rounding mode [1]. This can also be used on other targets to optimize
the conversion from int64_t to double.
* Use -fexcess-precision=standard on i686.
I tested this implementation on various targets (x86_64, i686, arm, aarch64,
powerpc), including some by manually disabling the compiler instructions.
To enable “longlong.h” removal, umul_ppmm is moved to a gmp-arch.h
header. The generic implementation now uses a static inline function,
which provides better type checking than the GNU extension to cast the
asm constraints (and it works better with clang).
Most architectures use the generic implementation, which is
expanded from a macro, except for alpha, arm, hppa, x86, m68k, mips,
powerpc, and sparc. For 32-bit architectures the compiler generates
good enough code using uint64_t types, while for 64-bit architectures
the patch leverages the math_u128.h definitions that use 128-bit
integers when available (all 64-bit architectures on gcc 15).