
Bins & Memory Allocations


Basic Information

To improve the efficiency of how chunks are stored, each chunk is not kept in just one linked list; instead, there are several types of lists, called bins. There are 5 types of bins: 62 small bins, 63 large bins, 1 unsorted bin, 10 fast bins and 64 tcache bins per thread.

The head of each unsorted, small and large bin lives in the same array. Index 0 is unused, index 1 is the unsorted bin, indexes 2-63 are the small bins and indexes 64-126 are the large bins.

Small Bins

Small bins are faster than large bins but slower than fast bins.

Each of the 62 bins holds chunks of a single fixed size: 16, 24, ... (with sizes below 512 bytes on 32-bit and below 1024 bytes on 64-bit). This speeds up finding the bin where a chunk belongs and makes inserting and removing entries on these lists fast.
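As a sketch (assuming a 64-bit system where small bins are spaced 16 bytes apart), the bin index for a given chunk size can be derived with a simple division, similar in spirit to glibc's `smallbin_index` macro:

```c
#include <stdio.h>
#include <stddef.h>

/* Simplified sketch: maps a chunk size to its small bin index on a 64-bit
   system (16-byte spacing; on 32-bit the spacing would be 8 bytes). */
#define SMALLBIN_WIDTH 16

static unsigned smallbin_index(size_t chunk_size) {
    return (unsigned)(chunk_size / SMALLBIN_WIDTH); /* 0x20 -> bin 2, 0x30 -> bin 3, ..., 0x3f0 -> bin 63 */
}

int main(void) {
    printf("a 0x90-byte chunk belongs to small bin %u\n", smallbin_index(0x90));
    return 0;
}
```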

Large bins

Unlike small bins, which manage chunks of fixed sizes, each large bin handles a range of chunk sizes. This is more flexible, allowing the system to accommodate various sizes without needing a separate bin for each size.

In a memory allocator, large bins start where small bins end. The ranges for large bins grow progressively larger, meaning the first bin might cover chunks from 512 to 576 bytes, while the next covers 576 to 640 bytes. This pattern continues, with the largest bin containing all chunks above 1MB.

Large bins are slower to operate compared to small bins because they must sort and search through a list of varying chunk sizes to find the best fit for an allocation. When a chunk is inserted into a large bin, it has to be sorted, and when memory is allocated, the system must find the right chunk. This extra work makes them slower, but since large allocations are less common than small ones, it's an acceptable trade-off.

There are:

  • 32 bins of 64B range
  • 16 bins of 512B range
  • 8 bins of 4096B range
  • 4 bins of 32768B range
  • 2 bins of 262144B range
  • 1 bin for the remaining sizes (see the sketch after this list)
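The following is a hypothetical sketch (not glibc's real `largebin_index` macro) that illustrates how the ranges listed above grow geometrically, assuming a 64-bit system where large chunks start at 1024 bytes:

```c
#include <stdio.h>
#include <stddef.h>

/* Hypothetical illustration of the range scheme listed above (64-bit):
   successive groups of bins use ranges of 64B, 512B, 4096B, 32768B and
   262144B, plus one final bin for everything else. */
static const struct { unsigned count; unsigned range; } groups[] = {
    {32, 64}, {16, 512}, {8, 4096}, {4, 32768}, {2, 262144},
};

static unsigned large_bin_group(size_t size) {
    size_t base = 1024; /* large bins start where small bins end */
    for (unsigned g = 0; g < 5; g++) {
        size_t span = (size_t)groups[g].count * groups[g].range;
        if (size < base + span)
            return g;
        base += span;
    }
    return 5; /* the single bin for all remaining (huge) sizes */
}

int main(void) {
    printf("a 0x5000-byte chunk falls in range group %u\n", large_bin_group(0x5000));
    return 0;
}
```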

Unsorted bin

The unsorted bin is a fast cache used by the heap manager to make memory allocation quicker. Here's how it works: When a program frees memory, the heap manager doesn't immediately put it in a specific bin. Instead, it first tries to merge it with any neighbouring free chunks to create a larger block of free memory. Then, it places this new chunk in a general bin called the "unsorted bin."

When a program asks for memory, the heap manager checks the unsorted bin to see if there's a chunk of a suitable size. If it finds one, it uses it right away. If it doesn't find a suitable chunk, it moves the freed chunks to their corresponding bins, either small or large, based on their size.

So, the unsorted bin is a way to speed up memory allocation by quickly reusing recently freed memory and reducing the need for time-consuming searches and merges.

{% hint style="danger" %} Note that even if chunks belong to different categories, when an available chunk borders another available chunk (even if they originally belong to different bins), they will be merged. {% endhint %}
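A minimal way to observe this reuse on a glibc-based Linux system (the sizes are just examples: the 0x500 request is big enough to bypass the tcache, and the small guard allocation prevents consolidation with the top chunk):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(0x500);     /* too big for the tcache, so it reaches the unsorted bin on free */
    void *guard = malloc(0x20);  /* prevents 'a' from being consolidated into the top chunk */

    free(a);                     /* 'a' is now in the unsorted bin */

    void *b = malloc(0x500);     /* served straight from the unsorted bin */
    printf("a=%p b=%p same=%s\n", a, b, a == b ? "yes" : "no");

    free(b);
    free(guard);
    return 0;
}
```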

Fast bins

Fast bins are designed to speed up memory allocation for small chunks by keeping recently freed chunks in a quick-access structure. These bins use a Last-In, First-Out (LIFO) approach, which means that the most recently freed chunk is the first to be reused when there's a new allocation request. This behavior is advantageous for speed, as it's faster to insert and remove from the top of a stack (LIFO) compared to a queue (FIFO).

Additionally, fast bins use singly linked lists, not doubly linked ones, which further improves speed. Since chunks in fast bins aren't merged with their neighbours, there's no need for a complex structure that allows removal from the middle. A singly linked list is simpler and quicker for these operations.

Basically, what happens here is that the header (the pointer to the first chunk to check) is always pointing to the latest freed chunk of that size. So:

  • When a new chunk of that size is allocated, the header points to a free chunk to use. As this free chunk points to the next one to use, that address is stored in the header so the next allocation knows where to find an available chunk (an example follows the hint below)
  • When a chunk is freed, it will store the address of the currently available chunk and the address of this newly freed chunk will be put in the header

{% hint style="danger" %} Chunks in fast bins aren't automatically marked as free, so they remain fast bin chunks for some time instead of being merged with other chunks. {% endhint %}
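This LIFO reuse is easy to see with a quick test (note that on modern glibc chunks of this size actually go through the per-thread tcache first, which behaves the same way for this purpose):

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    void *a = malloc(0x20);
    void *b = malloc(0x20);

    free(a);
    free(b);                 /* 'b' is now at the head of the free list */

    void *c = malloc(0x20);  /* LIFO: the last freed chunk ('b') is reused first */
    void *d = malloc(0x20);  /* then 'a' */

    printf("c==b? %s, d==a? %s\n", c == b ? "yes" : "no", d == a ? "yes" : "no");
    return 0;
}
```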

Tcache (Per-Thread Cache) Bins

Even though threads try to have their own heap (see Arenas and Subheaps), there is the possibility that a process with a lot of threads (like a web server) will end up sharing the heap with other threads. In this case, the main solution is the use of locks, which might significantly slow down the threads.

Therefore, a tcache is similar to a per-thread fast bin in that it's a singly linked list that doesn't merge chunks. Each thread has 64 singly-linked tcache bins. Each bin can hold a maximum of 7 same-size chunks, ranging from 24 to 1032B on 64-bit systems and 12 to 516B on 32-bit systems.

When a thread frees a chunk, if it isn't too big to go in the tcache and the respective tcache bin isn't full (it already holds 7 chunks), it will be placed there. If it cannot go to the tcache, the thread will need to wait for the heap lock to be able to perform the free operation globally.

When a chunk is allocated, if there is a free chunk of the needed size in the tcache it will be used; if not, the thread will need to wait for the heap lock to find one in the global bins or create a new one.
There is also an optimization: in this case, while holding the heap lock, the thread will fill its tcache with heap chunks (up to 7) of the requested size, so that if it needs more it will find them in the tcache.
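Internally, the per-thread cache is commonly described with two structures along these lines (a sketch; exact field types vary between glibc versions, e.g. `counts` used to be a `char` array and the `key` field was added later to detect double frees):

```c
#include <stdint.h>

/* Sketch of the tcache structures as found in recent glibc versions
   (details vary by release). */
typedef struct tcache_entry {
    struct tcache_entry *next;  /* singly linked list of free chunks of this bin size */
    void *key;                  /* recent versions keep a key here to detect double frees */
} tcache_entry;

typedef struct tcache_perthread_struct {
    uint16_t counts[64];        /* number of cached chunks per bin (limited to 7) */
    tcache_entry *entries[64];  /* head of the free list for each of the 64 bin sizes */
} tcache_perthread_struct;
```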

Allocation Flow

{% hint style="success" %} (This current explanation is from https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap/core_functions. TODO: Check last version and update it) {% endhint %}

Allocations are finally performed with the function void *_int_malloc (mstate av, size_t bytes), which follows this order (a tiny demo of the top-chunk case follows the list):

  1. Updates bytes to take care of alignments, etc.
  2. Checks if av is NULL or not.
  3. In the case of absence of usable arena (when av is NULL), calls sysmalloc to obtain chunk using mmap. If successful, calls alloc_perturb. Returns the pointer.
  4. Depending on the size:
    • [Addition to the original] Use tcache before checking the next fastbin.
      • [Addition to the original] If no tcache but a different bin is used (see later step), try to fill tcache from that bin
    • If size falls in the fastbin range:
      1. Get index into the fastbin array to access an appropriate bin according to the request size.
      2. Removes the first chunk in that bin and makes victim point to it.
      3. If victim is NULL, move on to the next case (smallbin).
      4. If victim is not NULL, check the size of the chunk to ensure that it belongs to that particular bin. An error ("malloc(): memory corruption (fast)") is thrown otherwise.
      5. Calls alloc_perturb and then returns the pointer.
    • If size falls in the smallbin range:
      1. Get index into the smallbin array to access an appropriate bin according to the request size.
      2. If there are no chunks in this bin, move on to the next case. This is checked by comparing the pointers bin and bin->bk.
      3. victim is made equal to bin->bk (the last chunk in the bin). If it is NULL (happens during initialization), call malloc_consolidate and skip this complete step of checking into different bins.
      4. Otherwise, when victim is non NULL, check if victim->bk->fd and victim are equal or not. If they are not equal, an error (malloc(): smallbin double linked list corrupted) is thrown.
      5. Sets the PREV_INUSE bit for the next chunk (in memory, not in the doubly linked list) for victim.
      6. Remove this chunk from the bin list.
      7. Set the appropriate arena bit for this chunk depending on av.
      8. Calls alloc_perturb and then returns the pointer.
    • If size does not fall in the smallbin range:
      1. Get index into the largebin array to access an appropriate bin according to the request size.
      2. See if av has fastchunks or not. This is done by checking the FASTCHUNKS_BIT in av->flags. If so, call malloc_consolidate on av.
  5. If no pointer has yet been returned, this signifies one or more of the following cases:
    1. Size falls into 'fastbin' range but no fastchunk is available.
    2. Size falls into 'smallbin' range but no smallchunk is available (calls malloc_consolidate during initialization).
    3. Size falls into 'largebin' range.
  6. Next, unsorted chunks are checked and traversed chunks are placed into bins. This is the only place where chunks are placed into bins. Iterate the unsorted bin from the 'TAIL'.
    1. victim points to the current chunk being considered.
    2. Check if victim's chunk size is within minimum (2*SIZE_SZ) and maximum (av->system_mem) range. Throw an error (malloc(): memory corruption) otherwise.
    3. If (the size of the requested chunk falls in the smallbin range) and (victim is the last remainder chunk) and (it is the only chunk in the unsorted bin) and (the chunk's size >= the one requested): break the chunk into 2 chunks:
      • The first chunk matches the size requested and is returned.
      • The leftover chunk becomes the new last remainder chunk. It is inserted back into the unsorted bin.
      • Set the chunk_size and chunk_prev_size fields appropriately for both chunks.
      • The first chunk is returned after calling alloc_perturb.
    4. If the above condition is false, control reaches here. Remove victim from the unsorted bin. If the size of victim matches the size requested exactly, return this chunk after calling alloc_perturb.
    5. If victim's size falls in the smallbin range, add the chunk to the appropriate smallbin at the HEAD.
    6. Else insert it into the appropriate largebin while maintaining sorted order:
      • First check the last chunk (the smallest). If victim is smaller than the last chunk, insert it at the end.
      • Otherwise, loop to find a chunk with size >= the size of victim. If the size is exactly the same, always insert in the second position.
    7. Repeat this whole step a maximum of MAX_ITERS (10000) times or till all chunks in the unsorted bin get exhausted.
  7. After checking unsorted chunks, if the requested size does not fall in the smallbin range, check the largebins.
    1. Get index into largebin array to access an appropriate bin according to the request size.
    2. If the size of the largest chunk (the first chunk in the bin) is greater than the size requested:
      1. Iterate from 'TAIL' to find a chunk (victim) with the smallest size >= the requested size.
      2. Call unlink to remove the victim chunk from the bin.
      3. Calculate remainder_size for the victim's chunk (this will be victim's chunk size - requested size).
      4. If this remainder_size >= MINSIZE (the minimum chunk size including the headers), split the chunk into two chunks. Otherwise, the entire victim chunk will be returned. Insert the remainder chunk in the unsorted bin (at the 'TAIL' end). A check is made in unsorted bin whether unsorted_chunks(av)->fd->bk == unsorted_chunks(av). An error is thrown otherwise ("malloc(): corrupted unsorted chunks").
      5. Return the victim chunk after calling alloc_perturb.
  8. Till now, we have checked unsorted bin and also the respective fast, small or large bin. Note that a single bin (fast or small) was checked using the exact size of the requested chunk. Repeat the following steps till all bins are exhausted:
    1. The index into bin array is incremented to check the next bin.
    2. Use av->binmap map to skip over bins that are empty.
    3. victim is pointed to the 'TAIL' of the current bin.
    4. Using the binmap ensures that if a bin is skipped (in the above 2nd step), it is definitely empty. However, it does not ensure that all empty bins will be skipped. Check if the victim is empty or not. If empty, again skip the bin and repeat the above process (or 'continue' this loop) till we arrive at a nonempty bin.
    5. Split the chunk (victim points to the last chunk of a nonempty bin) into two chunks. Insert the remainder chunk in unsorted bin (at the 'TAIL' end). A check is made in the unsorted bin whether unsorted_chunks(av)->fd->bk == unsorted_chunks(av). An error is thrown otherwise ("malloc(): corrupted unsorted chunks 2").
    6. Return the victim chunk after calling alloc_perturb.
  9. If still no suitable chunk has been found, the 'top' chunk will be used to service the request:
    1. victim points to av->top.
    2. If size of 'top' chunk >= 'requested size' + MINSIZE, split it into two chunks. In this case, the remainder chunk becomes the new 'top' chunk and the other chunk is returned to the user after calling alloc_perturb.
    3. See if av has fastchunks or not. This is done by checking the FASTCHUNKS_BIT in av->flags. If so, call malloc_consolidate on av. Return to step 6 (where we check unsorted bin).
    4. If av does not have fastchunks, call sysmalloc and return the pointer obtained after calling alloc_perturb.
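To get a feel for the final step, in a fresh process (where every bin is empty) two back-to-back allocations are typically carved consecutively out of the top chunk, so their addresses differ only by the chunk size:

```c
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    /* With all bins empty, both requests are served by splitting the top chunk,
       so the returned chunks usually end up adjacent in memory. */
    void *a = malloc(0x20);
    void *b = malloc(0x20);
    printf("a=%p b=%p diff=0x%lx\n", a, b, (unsigned long)((char *)b - (char *)a));
    /* Typical output on 64-bit glibc: diff=0x30 (requested 0x20 + metadata, rounded up) */
    return 0;
}
```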

Free Flow

{% hint style="success" %} (This current explanation is from https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap/core_functions. TODO: Check last version and update it) {% endhint %}

The function that finally frees chunks of memory is _int_free (mstate av, mchunkptr p, int have_lock), which works as follows (a short double-free example follows the list):

  1. Check whether p is before p + chunksize(p) in the memory (to avoid wrapping). An error (free(): invalid pointer) is thrown otherwise.
  2. Check whether the chunk is at least of size MINSIZE and a multiple of MALLOC_ALIGNMENT. An error (free(): invalid size) is thrown otherwise.
  3. If the chunk's size falls in the fastbin range:
    1. Check if next chunk's size is between minimum and maximum size (av->system_mem), throw an error (free(): invalid next size (fast)) otherwise.
    2. Calls free_perturb on the chunk.
    3. Set FASTCHUNKS_BIT for av.
    4. Get index into fastbin array according to chunk size.
    5. Check if the top of the bin is not the chunk we are going to add. Otherwise, throw an error (double free or corruption (fasttop)).
    6. Check if the size of the fastbin chunk at the top is the same as the chunk we are adding. Otherwise, throw an error (invalid fastbin entry (free)).
    7. Insert the chunk at the top of the fastbin list and return.
  4. If the chunk is not mmapped:
    1. Check if the chunk is the top chunk or not. If yes, an error (double free or corruption (top)) is thrown.
    2. Check whether next chunk (by memory) is within the boundaries of the arena. If not, an error (double free or corruption (out)) is thrown.
    3. Check whether next chunk's (by memory) previous in use bit is marked or not. If not, an error (double free or corruption (!prev)) is thrown.
    4. Check whether the size of next chunk is between the minimum and maximum size (av->system_mem). If not, an error (free(): invalid next size (normal)) is thrown.
    5. Call free_perturb on the chunk.
    6. If previous chunk (by memory) is not in use, call unlink on the previous chunk.
    7. If next chunk (by memory) is not top chunk:
      1. If next chunk is not in use, call unlink on the next chunk.
      2. Merge the chunk with previous, next (by memory), if any is free and add it to the head of unsorted bin. Before inserting, check whether unsorted_chunks(av)->fd->bk == unsorted_chunks(av) or not. If not, an error ("free(): corrupted unsorted chunks") is thrown.
    8. If next chunk (by memory) was a top chunk, merge the chunks appropriately into a single top chunk.
  5. If the chunk was mmapped, call munmap_chunk.
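Many of these checks are easy to trigger on purpose. For instance, a straightforward double free of a small chunk is caught by the tcache key check on recent glibc versions (a minimal sketch, assuming a glibc-based Linux system):

```c
#include <stdlib.h>

int main(void) {
    void *p = malloc(0x20);
    free(p);   /* the chunk goes into the tcache and its "key" field is set */
    free(p);   /* the key is detected and glibc aborts, printing something like
                  "free(): double free detected in tcache 2" on recent versions */
    return 0;
}
```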

Heap Functions Security Checks

Check the security checks performed by heavily used functions in heap in:

{% content-ref url="heap-functions-security-checks.md" %} heap-functions-security-checks.md {% endcontent-ref %}

References

  • https://heap-exploitation.dhavalkapil.com/diving_into_glibc_heap/core_functions