The initial allocation for the slice is limited to prevent untrusted
data from forcing a huge allocation, but the dynamic array was then
created with a capacity equal to the unlimited, untrusted length rather
than the actual capacity of the allocation, causing a buffer overrun.
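A minimal sketch of the safe pattern, with a hypothetical decoder helper:
the untrusted declared length only bounds the loop, while the initial
capacity is capped, so the dynamic array's capacity always matches memory
that was actually allocated.

```odin
package example

// `declared_len` comes from untrusted input and may be absurdly large.
// Reserve at most MAX_INITIAL_CAP elements up front, then let `append`
// grow the backing allocation as real data arrives.
MAX_INITIAL_CAP :: 1024

decode_bytes :: proc(data: []byte, declared_len: int, allocator := context.allocator) -> (out: [dynamic]byte, ok: bool) {
	if declared_len < 0 {
		return
	}
	out = make([dynamic]byte, 0, min(declared_len, MAX_INITIAL_CAP), allocator)
	for i in 0..<declared_len {
		if i >= len(data) {
			return out, false // input shorter than it claimed to be
		}
		append(&out, data[i])
	}
	return out, true
}
```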
Asan support for allocators:
This adds various bindings to the asan runtime which can be used to
poison/unpoison memory handed out by various allocators. This means we
can catch use-after-free bugs at runtime when using operations such as
`free_all`.
Asan poisoning is added for the following allocators in `mem` (see the
sketch after this list):
- Arena (including temporary arenas)
- Scratch
- Stack
- Small_Stack
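As a rough sketch of what this catches (assuming a build with
`-sanitize:address` and an arena as the context allocator): a read of
memory released by `free_all`, which previously returned stale data
silently, now produces an asan report.

```odin
package example

import "core:mem"

main :: proc() {
	backing := make([]byte, 4096)
	defer delete(backing)

	arena: mem.Arena
	mem.arena_init(&arena, backing)
	context.allocator = mem.arena_allocator(&arena)

	p := new(int)
	p^ = 42

	// With asan poisoning, free_all poisons the arena's whole region...
	free_all(context.allocator)

	// ...so this dangling read is reported instead of returning stale data.
	_ = p^
}
```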
Additionally, a bug in the stack allocator was fixed to disallow freeing
from the middle of the stack (caught by asan!).
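A small sketch of the LIFO contract the stack allocator enforces, assuming
the usual init/allocator helpers in `core:mem`; the exact error returned
for a mid-stack free isn't stated in these notes.

```odin
package example

import "core:mem"

main :: proc() {
	backing := make([]byte, 1024)
	defer delete(backing)

	s: mem.Stack
	mem.stack_init(&s, backing)
	allocator := mem.stack_allocator(&s)

	a, _ := mem.alloc(64, allocator = allocator)
	b, _ := mem.alloc(64, allocator = allocator)

	free(b, allocator) // fine: `b` is the most recent allocation
	free(a, allocator) // fine: `a` is now on top of the stack

	// Freeing `a` before `b` would be a free in the middle of the stack,
	// which the allocator now rejects instead of corrupting its state.
}
```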
I plan on adding support for all the allocators in core. This is just a
good starting point: these were some of the easiest allocators to
implement asan support for.
If allocation of a `^Thread` fails, the `create*` procedures now properly return `nil`,
so you can assert on that instead of, say, calling `thread.destroy` on a null pointer.
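A small sketch of the checked path (the worker procedure and the messages
are illustrative):

```odin
package example

import "core:fmt"
import "core:thread"

worker :: proc(t: ^thread.Thread) {
	fmt.println("hello from worker")
}

main :: proc() {
	t := thread.create(worker)
	if t == nil {
		// Allocation of the ^Thread failed; previously you could end up
		// calling thread.destroy on a null pointer here.
		fmt.eprintln("failed to create thread")
		return
	}
	defer thread.destroy(t)

	thread.start(t)
	thread.join(t)
}
```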
This implementation doesn't allow out-of-band allocations to be passed through, as it's not designed to
track those. Nor can it signal that those allocations need to be freed on the backing allocator,
as opposed to regular allocations, which are handled for you when you `destroy` the TLSF instance.
So if we're asked for more than we're configured to grow by, we can fail with an OOM error early, without adding a new pool.
New features:
- If TLSF can't service an allocation made on it, and it's initialized with `new_pool_size` > 0, it will ask the backing allocator for additional memory.
- `estimate_pool_size` can tell you what size your initial pool (and `new_pool_size`) ought to be if you want to make `count` allocations of a given `size` and `alignment`, or, in its other form, how much backing memory is needed for `count` allocations of a `type`, using its corresponding size and alignment.
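A hedged sketch of how these could fit together; the initializer name,
parameter order, and `tlsf.allocator` helper below are assumptions based
on the description above rather than the package's confirmed signatures.

```odin
package example

import "core:mem/tlsf"

main :: proc() {
	// How big should a pool be to hold 1024 allocations of 64 bytes at
	// 16-byte alignment? (The other form takes a type instead.)
	pool_size := tlsf.estimate_pool_size(1024, 64, 16)

	t: tlsf.Allocator
	// A non-zero new_pool_size (last argument) lets TLSF grow by asking the
	// backing allocator for another pool when it runs out; zero makes it
	// fail early with an OOM error instead. Error handling elided here.
	tlsf.init_from_allocator(&t, context.allocator, pool_size, pool_size)
	defer tlsf.destroy(&t)

	a := tlsf.allocator(&t)
	p := new(int, a)
	p^ = 123
	free(p, a)
}
```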