The last address of `data` is not `cast(uintptr)raw_data(data)+cast(uintptr)size` but
`cast(uintptr)raw_data(data)+cast(uintptr)(size-1)`.
The original assert would fail when, for example, the requested allocation size and the buddy allocator alignment were both 64.
The former is always a pointer one past the end of the buffer given to
`buddy_allocator_init`, which may be an invalid address; printing it can
result in a segmentation violation.
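For illustration, a minimal sketch of the distinction (the buffer and names here are hypothetical, not the allocator's internals):

```odin
package example

import "core:fmt"

main :: proc() {
	data := make([]byte, 64)
	defer delete(data)

	base := cast(uintptr)raw_data(data)

	// One past the end of the buffer; not safe to dereference or inspect.
	one_past_end := base + cast(uintptr)len(data)

	// The actual last addressable byte of the buffer.
	last := base + cast(uintptr)(len(data) - 1)

	fmt.println(one_past_end, last)
}
```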
This didn't take into account the size of the header in addition to the
size of the allocation itself: `align_forward_uint` produces no change
when `size` is already a multiple of `b.alignment` (for example, equal to
it), so `actual_size` and `size` could end up equal and no additional
space would be requested for the header.
This meant that a block would end up being allocated on top of its
buddy's header.
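A small sketch of the failure mode with `core:mem`, assuming a hypothetical header size (the real header is internal to the buddy allocator):

```odin
package example

import "core:fmt"
import "core:mem"

main :: proc() {
	HEADER_SIZE :: 8 // hypothetical; stands in for the buddy block header

	size:      uint = 64
	alignment: uint = 64

	// `size` is already a multiple of `alignment`, so aligning it forward
	// is a no-op: no room is reserved for the header.
	broken := mem.align_forward_uint(size, alignment)

	// Accounting for the header *before* aligning requests enough space.
	fixed := mem.align_forward_uint(size + HEADER_SIZE, alignment)

	fmt.println(broken, fixed) // 64, 128
}
```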
Fixes #3435
It has been discovered that AddressSanitizer does not keep a 1:1 mapping
of which bytes are poisoned and which are not: one shadow byte tracks
eight bytes of application memory. This can cause issues for allocations
smaller than 8 bytes and for ranges that straddle 8-byte boundaries.
See the following link for more information:
https://github.com/google/sanitizers/wiki/AddressSanitizerAlgorithm#mapping
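For intuition, a sketch of the mapping described in the wiki (the shift and offset are the wiki's 64-bit example values, not something from this codebase):

```odin
package example

import "core:fmt"

// One shadow byte tracks an 8-byte granule of application memory.
shadow_addr :: proc(addr: uintptr) -> uintptr {
	return (addr >> 3) + 0x7fff_8000
}

main :: proc() {
	// Two addresses in the same 8-byte granule share one shadow byte, so
	// their poison state cannot be tracked independently:
	fmt.println(shadow_addr(0x1000) == shadow_addr(0x1007)) // true
}
```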
Changed the check from `bytes` to `err` for safety's sake, too.
This will prevent the potential bug of allocating non-zeroed memory and
then doing a zeroed resize, which would result in garbage data in the
initial half of the allocation.
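A sketch of the sequence being guarded against, written against the raw allocator procedure so no convenience-wrapper names are assumed:

```odin
package example

main :: proc() {
	a := context.allocator

	// Allocate 16 bytes without zeroing: the contents are garbage.
	buf, _ := a.procedure(a.data, .Alloc_Non_Zeroed, 16, 16, nil, 0)

	// A zeroing .Resize guarantees zeroes only for the grown part; the
	// bug scenario is the first 16 bytes keeping their garbage while the
	// caller assumes the whole allocation is zeroed.
	buf, _ = a.procedure(a.data, .Resize, 32, 16, raw_data(buf), 16)
	_ = buf
}
```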
The backup allocator is set at `init` and must stay the same for the
lifetime of the Scratch allocator, as this allocator is used to free all
`leaked_allocations`. Changing it could lead to a situation where the
wrong allocator is used to free a leaked allocation.
This will cause an error if the memory being resized was not the last
allocation, as should be expected given the description that this
"acts just like stack_free."
1. The size was being adjusted for the alignment, which does not make
   sense without the context of the base pointer. Now we just add
   `alignment - 1` to the size if needed, then adjust the pointer (see
   the sketch below).
2. The root pointer of the last allocation is now stored in order to
make the free operation more useful (and to cover the right memory
region for ASan).
3. Resizing now only works on the last allocation instead of any address
in a valid range, which resulted in overwriting allocations that had
just been made.
4. `old_memory` is now re-poisoned entirely before the resized range is
returned with the new range unpoisoned. This will guarantee that
there are no unpoisoned gaps.
Fixes #2694
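The sketch referenced in item 1, with hypothetical values: worst-case padding is added to the size, and the alignment is applied to the pointer itself.

```odin
package example

import "core:fmt"
import "core:mem"

main :: proc() {
	size:      uintptr = 24
	alignment: uintptr = 16

	// "Aligning" the size says nothing about where the pointer lands;
	// instead, reserve worst-case padding...
	needed := size + (alignment - 1)

	// ...and align the pointer itself (base value is hypothetical).
	base: uintptr = 0x1008
	ptr := mem.align_forward_uintptr(base, alignment)

	fmt.println(needed, ptr - base) // 39, 8 (padding actually consumed)
}
```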
1. store alignment instead of original pointer
2. implement .Query_Info
3. poison the header and alignment portion of the allocation
4. .Resize uses `max(orig_alignment, new_alignment)` as its alignment
now
5. .Free passes along the original alignment
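A hypothetical sketch of items 1, 4, and 5 (the `Header` struct is illustrative, not the actual layout):

```odin
package example

import "core:fmt"

// Storing the alignment (item 1) lets .Free pass it along (item 5)
// without keeping the original pointer around.
Header :: struct {
	alignment: int,
}

main :: proc() {
	h := Header{alignment = 8}
	new_alignment := 16

	// Item 4: a .Resize honors whichever alignment is stricter.
	fmt.println(max(h.alignment, new_alignment)) // 16
}
```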
free on tlsf poisons the entire block, while alloc might only unpoison a
part of it (because its size is aligned up). This causes free to
potentially poison an already-poisoned portion, which is a
use-after-poison.
Because this is "fine" and intended, I opted to just
`@no_sanitize_address` it.
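A sketch of where the attribute lands (the proc name and body are hypothetical):

```odin
package example

// The body intentionally re-poisons bytes that may already be poisoned,
// so ASan instrumentation is disabled for this one proc.
@(no_sanitize_address)
repoison_block :: proc(block: rawptr, size: int) {
	// hypothetical: unconditionally poison [block, block+size)
}

main :: proc() {
	repoison_block(nil, 0)
}
```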
Add asan poisoning support for core `mem` allocators
This adds various bindings to the asan runtime which can be used
to poison/unpoison memory handed out by various allocators. This
means we can catch use-after-free memory bugs when using operations
such as `free_all` at runtime.
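A sketch of what such bindings can look like; the two functions are the documented compiler-rt poisoning interface, but the foreign import target here is illustrative (in practice the runtime is linked by building with `-sanitize:address`):

```odin
package example

import "core:c"

foreign import asan_rt "system:c" // illustrative; runtime comes from -sanitize:address

@(default_calling_convention = "c")
foreign asan_rt {
	__asan_poison_memory_region   :: proc(addr: rawptr, size: c.size_t) ---
	__asan_unpoison_memory_region :: proc(addr: rawptr, size: c.size_t) ---
}

main :: proc() {
	buf: [64]byte

	// e.g. on free_all: poison the region so stale reads trap.
	__asan_poison_memory_region(&buf[0], 64)

	// e.g. on a fresh allocation from the region: unpoison before reuse.
	__asan_unpoison_memory_region(&buf[0], 64)
}
```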
Asan poisoning is added for the following allocators in `mem`:
Arena (including temporary arenas)
Scratch
Stack
Small_Stack
Additionally, a bug in the stack allocator was fixed to disallow freeing
in the middle of the stack (caught by asan!).
I plan on adding support for all the allocators in core. This is just
a good starting point, and these were some of the easiest allocators to
implement asan for.