Age | Commit message | Author |
|
|
Safe-Linking alignment checks should be done on the user's buffer and not
the mchunkptr. The new check adds support for cases in which:
MALLOC_ALIGNMENT != 2*(sizeof(size_t))
The default case for both 32-bit and 64-bit targets was already supported, and
this patch adds support for the irregular case described above.
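For illustration, a minimal sketch of the check described above, using
dlmalloc-style names (chunk2mem, MALLOC_ALIGN_MASK, misaligned_mem) that are
stand-ins rather than the exact uClibc-ng identifiers:

    /* The check must be applied to the user buffer (chunk2mem(p)): when
       MALLOC_ALIGNMENT != 2 * sizeof(size_t), the chunk header itself is
       not required to be MALLOC_ALIGNMENT-aligned, only the user buffer is. */
    #define MALLOC_ALIGN_MASK   (MALLOC_ALIGNMENT - 1)

    /* too strict for the irregular case: checks the chunk header address */
    #define misaligned_chunk(p) ((uintptr_t) (p) & MALLOC_ALIGN_MASK)

    /* correct: checks the address handed out to the user */
    #define misaligned_mem(p)   ((uintptr_t) chunk2mem(p) & MALLOC_ALIGN_MASK)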
|
|
Safe-Linking is a security mechanism that protects singly-linked
lists (such as the fastbins) from being tampered with by attackers. The
mechanism makes use of randomness from ASLR (mmap_base), and when
combined with chunk alignment integrity checks, it protects the
pointers from being hijacked by an attacker.
While Safe-Unlinking protects doubly-linked lists (such as the small
bins), there was no similar protection against attacks on
singly-linked lists. This solution protects against three common attacks:
* Partial pointer override: modifies the lower bytes (Little Endian)
* Full pointer override: hijacks the pointer to an attacker's location
* Unaligned chunks: pointing the list to an unaligned address
The design assumes an attacker doesn't know where the heap is located,
and uses the ASLR randomness to "sign" the single-linked pointers. We
mark the pointer as P and the location in which it is stored as L, and
the calculation will be:
* PROTECT(P) := (L >> PAGE_SHIFT) XOR (P)
* *L = PROTECT(P)
This way, the random bits from the address L (which start at the bits
in the PAGE_SHIFT position), will be merged with the LSB of the stored
protected pointer. This protection layer prevents an attacker from
modifying the pointer into a controlled value.
An additional check that the chunks are MALLOC_ALIGNed adds an
important layer:
* Attackers can't point to illegal (unaligned) memory addresses
* Attackers must guess correctly the alignment bits
On standard 32-bit Linux machines, an attacker will directly fail 7
out of 8 times, and on 64-bit machines it will fail 15 out of 16
times.
The proposed solution adds 3-4 asm instructions per malloc()/free()
and therefore has only minor performance implications if it has
any. A similar protection was added to Chromium's version of TCMalloc
in 2013, and according to their documentation the performance overhead
was less than 2%.
Signed-off-by: Eyal Itkin <eyalit@checkpoint.com>
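For illustration, a minimal sketch of the protect/reveal operation following
the formula above; the macro names (PROTECT_PTR, REVEAL_PTR) and the
PAGE_SHIFT value are assumptions for the example, not necessarily the exact
uClibc-ng identifiers:

    #include <stddef.h>

    #define PAGE_SHIFT 12   /* assuming 4 KiB pages */

    /* 'pos' is the storage location L, 'ptr' is the pointer P to protect:
       PROTECT(P) := (L >> PAGE_SHIFT) XOR P                               */
    #define PROTECT_PTR(pos, ptr) \
        ((void *) ((((size_t) (pos)) >> PAGE_SHIFT) ^ ((size_t) (ptr))))
    /* XOR with the same key recovers the original pointer when reading.   */
    #define REVEAL_PTR(pos, ptr)  PROTECT_PTR(pos, ptr)

    /* storing a fastbin next pointer:   *fb = PROTECT_PTR(fb, victim);
       reading it back:                  victim = REVEAL_PTR(fb, *fb);     */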
|
|
sysconf creates a lot of code dependencies.
getpagesize doesn't.
Statically linked code that calls malloc is now much smaller.
Signed-off-by: Waldemar Brodkorb <wbx@uclibc-ng.org>
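A hypothetical before/after of the page-size query inside malloc; the
__malloc_pagesize helper and the USE_SYSCONF guard exist only to show both
variants, and the real call site may differ:

    #include <unistd.h>

    static size_t __malloc_pagesize(void)
    {
    #ifdef USE_SYSCONF        /* before: drags in sysconf and its dependencies,
                                 which bloats statically linked programs       */
        return (size_t) sysconf(_SC_PAGESIZE);
    #else                     /* after: a tiny, dependency-free call           */
        return (size_t) getpagesize();
    #endif
    }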
|
|
This option has been enabled for a long time, and I see no
useful case where we should be incompatible with glibc here.
|
|
Linuxthreads.new isn't really useful given the existence
of NPTL/TLS for well-supported architectures. There is no
reason to use LT.new for ARM/MIPS or other architectures
supporting NPTL/TLS. It is not available for noMMU architectures
like Blackfin or FR-V. To simplify the lives of the few uClibc-ng
developers, LT.new is removed and LT.old is renamed to LT.
LINUXTHREADS_OLD -> UCLIBC_HAS_LINUXTHREADS
|
|
This fixes commit 76dfc7ce8c "Some requested additional malloc entry points"
from 2004.
Signed-off-by: Leonid Lisovskiy <lly.dev@gmail.com>
Signed-off-by: Waldemar Brodkorb <wbx@uclibc-ng.org>
|
|
Closes bugzilla #4586
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
In some rare cases (almost always application bugs), malloc is called with
a very large size. The checked_request2size check then fails, sets
ENOMEM, and returns 0 to the caller.
But this leaves the __malloc_lock futex locked and owned by the
caller. In multithreaded programs, other threads calling
malloc/calloc will then never succeed and stay blocked on the lock.
Signed-off-by: Zhiqiang Zhang <zhangzhiqiang.zhang@huawei.com>
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
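For illustration, a minimal sketch of the idea behind such a fix, with
stand-in names (MAX_REQUEST_SIZE, request2size, _int_malloc, __set_errno,
__MALLOC_LOCK/__MALLOC_UNLOCK) rather than the exact uClibc code: the
oversized-request error path must never return while the lock is still held.

    void *malloc(size_t bytes)
    {
        void *mem;

        /* reject absurdly large requests before taking the lock, so the
           ENOMEM path can never leave __malloc_lock owned by this thread */
        if (bytes > MAX_REQUEST_SIZE) {
            __set_errno(ENOMEM);
            return NULL;
        }

        __MALLOC_LOCK;
        mem = _int_malloc(request2size(bytes));
        __MALLOC_UNLOCK;
        return mem;
    }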
|
|
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
valloc uses memalign
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
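That relationship, sketched for illustration (not the verbatim uClibc
implementation):

    #include <malloc.h>
    #include <unistd.h>

    /* valloc(n) is simply page-aligned memalign */
    void *valloc(size_t size)
    {
        return memalign((size_t) getpagesize(), size);
    }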
|
|
The name was changed to include a trailing 'D' when it went into the
kernel.
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Signed-off-by: Peter S. Mazinger <ps.m@gmx.net>
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
|
|
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Now that the kernel supports MAP_UNINITIALIZE, have the malloc implementations use
it to get real uninitialized memory on no-mmu systems. This avoids a lot
of normally useless overhead involved in zeroing out all of the memory
(sometimes multiple times).
Signed-off-by: Mike Frysinger <vapier@gentoo.org>
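For illustration, a sketch of how an allocation can request uninitialized
anonymous memory; alloc_arena is a hypothetical helper, the flag went into
the kernel as MAP_UNINITIALIZED (note the trailing 'D'), it only has an
effect on no-MMU kernels built with CONFIG_MMAP_ALLOW_UNINITIALIZED, and it
is otherwise ignored, so the memory simply comes back zeroed as usual:

    #include <sys/mman.h>
    #include <stddef.h>

    #ifndef MAP_UNINITIALIZED
    # define MAP_UNINITIALIZED 0x4000000   /* kernel ABI value */
    #endif

    static void *alloc_arena(size_t len)
    {
        void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
                       MAP_PRIVATE | MAP_ANONYMOUS | MAP_UNINITIALIZED, -1, 0);
        return (p == MAP_FAILED) ? NULL : p;
    }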
|
|
sed -i -e '/Experimentally off - /d' $(grep -rl "Experimentally off - " *)
sed -i -e '/^\/\*[[:space:]]*libc_hidden_proto(/d' $(grep -rl "libc_hidden_proto" *)
should be a nop
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Handle O=
Signed-off-by: Bernhard Reutner-Fischer <rep.dot.nop@gmail.com>
|
|
Appears to build fine (several .configs tried)
|
|
in string.h and strings.h. This caught unguarded string ops in
libc/inet/ethers.c __ether_line_w() function.
I will wait for fallout reports for a week or so,
then continue converting more libc_hidden_proto's.
|
|
things, and avoid potential deadlocks caused when a thread holding a uClibc
internal lock gets canceled and terminates without releasing the lock. This
change also provides a single place, bits/uClibc_mutex.h, for thread libraries
to modify to change all instances of internal locking.
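A hypothetical sketch of the kind of wrapper such a central header can
provide (the macro names here are illustrative, not the actual
bits/uClibc_mutex.h interface): cancellation is disabled while an internal
lock is held, so a cancelled thread can never terminate without releasing it.

    #include <pthread.h>

    #define UCLIBC_MUTEX_LOCK(m, oldstate)                                \
        do {                                                              \
            pthread_setcancelstate(PTHREAD_CANCEL_DISABLE, &(oldstate));  \
            pthread_mutex_lock(&(m));                                     \
        } while (0)

    #define UCLIBC_MUTEX_UNLOCK(m, oldstate)                              \
        do {                                                              \
            pthread_mutex_unlock(&(m));                                   \
            pthread_setcancelstate((oldstate), NULL);                     \
        } while (0)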
|
|
I had clearly run search/replace on that were cluttering things up.
|
|
regardless of the setting of MALLOC_GLIBC_COMPAT.
|
|
most of the global data relocations are back
|
|
libc.a/libc.so, the diffs go into libc-static-y/libc-shared-y exclusively, add IMA to libc, don't use any MSRC anymore
|
|
is a useless attempt
|
|
gone from libc. The remaining are left as exercise for others ;-)
|
|
missing headers, other jump relocs removed
|
|