Age | Commit message | Author |
|
In commit 3e0a1f388, Richard tried to fix malloc alignments by using
alignof (double __attribute_aligned__(sizeof (size_t))).
This doesn't work, since attribute_aligned overrides the alignment
rather than providing a minimum. On C6X, malloc returns four-byte
aligned values rather than the necessary eight-byte alignment.
It's simpler to use a comparison and pick the bigger of the two values,
so that's what I've done.
Signed-off-by: Bernd Schmidt <bernds@codesourcery.com>
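A minimal sketch of the comparison-based approach (illustrative macro text,
not a verbatim quote of the patch):

  #include <stddef.h>   /* size_t */

  /* Pick the larger of the two candidate values instead of trying to
     impose a minimum through the aligned attribute. */
  #define MALLOC_ALIGNMENT \
    (__alignof__ (double) > sizeof (size_t) \
     ? __alignof__ (double) : sizeof (size_t))

With this, C6X gets __alignof__ (double) (eight bytes), while targets whose
doubles have small alignment still fall back to sizeof (size_t).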
|
|
Update malloc library to use internal uclibc locking primitives
to get the libpthread calls correct.
Signed-off-by: Timo Teräs <timo.teras@iki.fi>
Signed-off-by: Austin Foxley <austinf@cetoncorp.com>
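Roughly, the allocator entry points then take this shape (a sketch with
assumed helper and macro names, not the actual uClibc code):

  #include <stddef.h>
  #include <bits/uClibc_mutex.h>          /* internal uClibc header */

  __UCLIBC_MUTEX(__malloc_lock);          /* one internal lock for malloc */

  /* Stand-in for the real allocator body. */
  static void *__malloc_unlocked (size_t size) { (void) size; return NULL; }

  void *malloc (size_t size)
  {
    void *mem;
    __UCLIBC_MUTEX_LOCK (__malloc_lock);
    mem = __malloc_unlocked (size);
    __UCLIBC_MUTEX_UNLOCK (__malloc_lock);
    return mem;
  }

The point is that malloc never calls libpthread directly; whichever thread
library is in use only has to provide the primitives behind these macros.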
|
|
found it, this is Bernhard's patch to fix it. Tested and it Works For Me (tm)).
|
|
This should have been in r23660. Untested.
|
|
Thank you Chase Douglas for reporting it and for the patch.
|
|
However, retesting on m68k showed up a problem that had appeared in
uClibc since the last time I tried. Specifically, revision 15785 did:
-#define HEAP_GRANULARITY (sizeof (HEAP_GRANULARITY_TYPE))
+#define HEAP_GRANULARITY (__alignof__ (HEAP_GRANULARITY_TYPE))
-#define MALLOC_ALIGNMENT (sizeof (double))
+#define MALLOC_ALIGNMENT (__alignof__ (double))
The problem is that
(a) MALLOC_HEADER_SIZE == MALLOC_ALIGNMENT
(b) the header contains a size value of type size_t
(c) sizeof (size_t) is 4 on m68k, but...
(d) __alignof__ (double) is only 2 (the largest alignment used on m68k)
So we only allocate 2 bytes for the 4-byte header, and the least
significant 2 bytes of the size are in the user's area rather than
the header. The patch below fixes that problem by redefining
MALLOC_HEADER_SIZE to:
MAX (MALLOC_ALIGNMENT, sizeof (size_t))
(but without the help of the MAX macro ;)). However, we really would
like to have word alignment on Coldfire. It makes a big performance
difference, and because we have to allocate a 4-byte header anyway,
what wastage there is will be confined to the end of the allocated block.
Any wastage will also be limited to 2 bytes per allocation compared to
the current alignment.
I've therefore used the __aligned__ type attribute to create a double
type that has at least sizeof (size_t) bytes of alignment. I've
introduced a new __attribute_aligned__ macro for this. It might seem
silly protecting against old or non-GNU compilers here, but the extra
alignment is only an optimisation, and having the macro is more in the
spirit of the other attribute code.
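In outline, the approach described above looks something like this (a sketch
of the definitions, not the patch itself):

  #include <stddef.h>   /* size_t */

  /* Expands to the aligned attribute on GCC and to nothing elsewhere;
     losing the extra alignment on other compilers is harmless because
     it is only an optimisation. */
  #ifdef __GNUC__
  # define __attribute_aligned__(n)  __attribute__ ((__aligned__ (n)))
  #else
  # define __attribute_aligned__(n)
  #endif

  /* A double with at least sizeof (size_t) bytes of alignment. */
  #define MALLOC_ALIGNMENT \
    (__alignof__ (double __attribute_aligned__ (sizeof (size_t))))

  /* MAX (MALLOC_ALIGNMENT, sizeof (size_t)), written out by hand, so the
     header always has room for the size_t size field. */
  #define MALLOC_HEADER_SIZE \
    (MALLOC_ALIGNMENT > sizeof (size_t) ? MALLOC_ALIGNMENT : sizeof (size_t))

As the newer change further up this log notes, the aligned attribute turned
out to override rather than raise the alignment on some targets, which is why
it was later replaced with a plain comparison.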
|
|
does not easily lend itself to becoming completely pthread cancelation
safe without first investing in some deep and serious thought...
The other malloc implementations are pthread cancelation safe, and
this one is mostly used for uClinux, where the lack is at least less
likely to be a common problem.
|
|
things, and avoid potential deadlocks caused when a thread holding a uClibc
internal lock gets canceled and terminates without releasing the lock. This
change also provides a single place, bits/uClibc_mutex.h, for thread libraries
to modify to change all instances of internal locking.
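For illustration, the kind of thing bits/uClibc_mutex.h provides looks
roughly like this (assumed details, not the actual header):

  #include <pthread.h>

  #define __UCLIBC_MUTEX(M)  pthread_mutex_t M = PTHREAD_MUTEX_INITIALIZER

  /* Disable cancelation for the whole critical section so a canceled
     thread can never terminate while still holding an internal lock;
     the LOCK macro opens a block that the matching UNLOCK closes. */
  #define __UCLIBC_MUTEX_LOCK(M)                                      \
    do {                                                              \
      int __oldstate;                                                 \
      pthread_setcancelstate (PTHREAD_CANCEL_DISABLE, &__oldstate);   \
      pthread_mutex_lock (&(M))

  #define __UCLIBC_MUTEX_UNLOCK(M)                                    \
      pthread_mutex_unlock (&(M));                                    \
      pthread_setcancelstate (__oldstate, NULL);                      \
    } while (0)

Single-threaded builds can define the same macros as no-ops, which is what
makes this one header the single place thread libraries need to touch.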
|
|
bernds writes: Use __alignof__ instead of sizeof to get alignments. Eliminates some warnings about misalignments when malloc debugging is enabled.
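As a small illustration of the difference (not part of the change itself),
consider a type whose size and alignment differ:

  #include <stdio.h>

  struct pair { int a, b; };

  int main (void)
  {
    /* On a typical 32- or 64-bit target this prints "size=8 align=4".
       Using sizeof where an alignment is meant over-states the required
       alignment and makes the malloc debugging checks warn about
       pointers that are in fact aligned correctly. */
    printf ("size=%zu align=%zu\n",
            sizeof (struct pair), (size_t) __alignof__ (struct pair));
    return 0;
  }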
|
|
were including libc-lock.h which had a bunch of weak pragmas. Also,
uClibc supplied a number of no-op weak thread functions even though
many weren't needed. The combined result was that sometimes the
functional versions of thread functions in pthread would not override
the weaks in libc.
While fixing this, I also prepended double-underscore to all necessary
weak thread funcs in uClibc, and removed all unused weaks.
I did a test build, but haven't tested this since these changes are
a backport from my working tree. I did test the changes there and
no longer need to explicitly add -lpthread in the perl build for
perl to pass its thread self tests.
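For reference, the weak/strong pattern at issue looks roughly like this
(hypothetical symbol name, not one of the functions actually touched):

  /* In libc: a no-op stub, marked weak so a real definition can win. */
  void __example_thread_op (void) __attribute__ ((weak));
  void __example_thread_op (void)
  {
    /* single-threaded case: nothing to do */
  }

  /* In libpthread: an ordinary (strong) definition of the same name,
     containing the real locking code.  Linking -lpthread is supposed to
     make that version win; stray weak pragmas and a pile of unneeded
     stubs in libc were preventing that from happening reliably. */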
|
|
their alignment are correct.
|
|
because of our fiddling with alignment (because doing so is VERY BAD).
|
|
-Erik
|
|
__UCLIBC_UCLINUX_BROKEN_MUNMAP__ (which is currently not defined anywhere).
This makes other cases a tiny bit less efficient too.
* Move the malloc lock into the heap structure (locking is still done
at the malloc level though, not by the heap functions).
* Initialize the malloc heap to contain a tiny initial static free-area so
that programs that do only a little allocation won't ever call mmap. (A
rough sketch of these two changes follows.)
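The sketch (types and names here are assumptions, not uClibc's actual
layout):

  #include <stddef.h>
  #include <pthread.h>

  struct heap_free_area
  {
    char *mem;                    /* start of the free region */
    size_t size;                  /* bytes available in it */
    struct heap_free_area *next;  /* next free region, if any */
  };

  struct heap
  {
    struct heap_free_area *free_areas;  /* list of free regions */
    pthread_mutex_t lock;               /* per-heap malloc lock */
  };

  /* A small static buffer plus a statically initialized free area
     describing it, so trivial programs never have to call mmap. */
  static char __malloc_initial_buf[256] __attribute__ ((aligned (8)));

  static struct heap_free_area __malloc_initial_fa =
    { __malloc_initial_buf, sizeof __malloc_initial_buf, NULL };

  static struct heap __malloc_heap =
    { &__malloc_initial_fa, PTHREAD_MUTEX_INITIALIZER };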
|
|
(__heap_free_area_alloc): Use __heap_delete.
(__heap_is_empty): New macro.
|
|
(HEAP_MIN_FREE_AREA_SIZE): Increase size.
Enable debugging if HEAP_DEBUGGING is defined.
|
|
the malloc/free level, not within the heap abstraction, and there's a
separate lock to control sbrk access.
Also, get rid of the separate `unmap_free_area' function in free.c, and
just put the code in the `free' function directly, which saves a bunch
of space (even compared to using an inline function) for some reason.
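In sketch form (hypothetical names; only the two standard pthread calls and
sbrk are real), the lock layering looks like this:

  #include <stdint.h>
  #include <unistd.h>
  #include <pthread.h>

  static pthread_mutex_t __malloc_lock      = PTHREAD_MUTEX_INITIALIZER;
  static pthread_mutex_t __malloc_sbrk_lock = PTHREAD_MUTEX_INITIALIZER;

  /* The only place sbrk is called; serialized by its own lock so the
     heap primitives themselves can stay lock-free. */
  static void *grow_heap (intptr_t increment)
  {
    void *mem;
    pthread_mutex_lock (&__malloc_sbrk_lock);
    mem = sbrk (increment);          /* returns (void *) -1 on failure */
    pthread_mutex_unlock (&__malloc_sbrk_lock);
    return mem;
  }

malloc and free take __malloc_lock around their own bookkeeping and call
grow_heap only when the heap has to be extended.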
|
|
* Instead of using mmap/munmap directly for large allocations, just use
the heap for everything (this is reasonable now that heap memory can
be unmapped).
* Use sbrk instead of mmap/munmap on systems with an MMU (see the sketch
  below).
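A small sketch of that choice (the configuration macro name is an
assumption):

  #include <stddef.h>
  #include <stdint.h>
  #include <unistd.h>
  #include <sys/mman.h>

  /* Get more memory for the heap: plain sbrk where there is an MMU,
     anonymous mmap otherwise.  Callers check for (void *) -1 or
     MAP_FAILED respectively. */
  static void *heap_get_memory (size_t size)
  {
  #ifdef __ARCH_USE_MMU__
    return sbrk ((intptr_t) size);
  #else
    return mmap (NULL, size, PROT_READ | PROT_WRITE,
                 MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
  #endif
  }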
|
|
Doc fix.
|
|
smarter than the old "malloc-simple", and actually works, unlike
the old "malloc". So kill the old "malloc-simple" and the old
"malloc" and replace them with Miles' new malloc implementation.
Update Config files to match. Thanks Miles!
|