The idea of the slot allocator is for it to be used in conjunction with a fixed-sized buffer. You use the slot allocator to allocate an index that can be used
as the insertion point for an object. This is lock-free.
Slots are reference counted to help mitigate the ABA problem in the lock-free queue we use for tracking jobs.
The slot index is stored in the low 32 bits. The reference counter is stored in the high 32 bits:
    +-----------------+-----------------+
    | 32 Bits         | 32 Bits         |
    +-----------------+-----------------+
    | Reference Count | Slot Index      |
    +-----------------+-----------------+
*/
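
/*
As an illustration of the encoding above (not part of the original source), the 64-bit slot value can be packed
and unpacked as below. The helper names are hypothetical.
*/
static ma_uint64 ma_slot_make(ma_uint32 refcount, ma_uint32 slotIndex)  /* Hypothetical helper. */
{
    return ((ma_uint64)refcount << 32) | (ma_uint64)slotIndex;  /* Reference count in the high 32 bits, slot index in the low 32 bits. */
}

static ma_uint32 ma_slot_get_index(ma_uint64 slot)      /* Hypothetical helper. */
{
    return (ma_uint32)(slot & 0xFFFFFFFF);              /* Low 32 bits. */
}

static ma_uint32 ma_slot_get_refcount(ma_uint64 slot)   /* Hypothetical helper. */
{
    return (ma_uint32)(slot >> 32);                     /* High 32 bits. */
}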
typedef struct
{
    volatile struct
    {
        ma_uint32 bitfield;
        /*ma_uint32 refcount;*/ /* When greater than 0 it means something already has a hold on this group. */

        /* A ref counted implementation which is a bit simpler to understand what's going on, but has the expense of an extra 32 bits for the group ref count. */
        ma_uint32 refcount;
    refcount = ma_atomic_increment_32(&pAllocator->groups[iGroup].refcount);    /* <-- Grab a hold on the bitfield. */

    /* Increment the reference count before constructing the output value. */
    pAllocator->slots[slotIndex] += 1;

    /* Before releasing the group we need to ensure the write operation above has completed so we'll throw a memory barrier in here for safety. */
    ma_memory_barrier();

    ma_atomic_increment_32(&pAllocator->counter);                   /* Incrementing the counter should happen before releasing the group's ref count to ensure we don't waste loop iterations in out-of-memory scenarios. */
    ma_atomic_decrement_32(&pAllocator->groups[iGroup].refcount);   /* Release the hold as soon as possible to allow other things to access the bitfield. */
    /* We weren't able to find a slot. If it's because we've reached our capacity we need to return MA_OUT_OF_MEMORY. Otherwise we need to do another iteration and try again. */
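
/*
A minimal sketch of how the fragments above fit together, assuming a 32-slots-per-group bitfield layout and a
fixed capacity. MA_SLOT_SKETCH_CAPACITY, the exact declarations of the groups/slots/counter members, and the
linear bit scan are assumptions for illustration; this is not the original function.
*/
#define MA_SLOT_SKETCH_CAPACITY 1024    /* Assumed capacity. */

static ma_result ma_slot_allocator_alloc_sketch(ma_slot_allocator* pAllocator, ma_uint64* pSlot)
{
    for (;;) {
        ma_uint32 iGroup;
        for (iGroup = 0; iGroup < MA_SLOT_SKETCH_CAPACITY/32; iGroup += 1) {
            ma_uint32 refcount = ma_atomic_increment_32(&pAllocator->groups[iGroup].refcount);  /* <-- Grab a hold on the bitfield. */
            if (refcount == 1 && pAllocator->groups[iGroup].bitfield != 0xFFFFFFFF) {
                ma_uint32 iBit;
                for (iBit = 0; iBit < 32; iBit += 1) {
                    if ((pAllocator->groups[iGroup].bitfield & (1u << iBit)) == 0) {    /* Found a free bit. */
                        ma_uint32 slotIndex = iGroup*32 + iBit;

                        pAllocator->groups[iGroup].bitfield |= (1u << iBit);            /* Claim the slot. */

                        /* Increment the slot's reference count before constructing the output value. */
                        pAllocator->slots[slotIndex] += 1;
                        *pSlot = ((ma_uint64)pAllocator->slots[slotIndex] << 32) | slotIndex;

                        /* Make sure the writes above complete before the group is released. */
                        ma_memory_barrier();

                        ma_atomic_increment_32(&pAllocator->counter);
                        ma_atomic_decrement_32(&pAllocator->groups[iGroup].refcount);
                        return MA_SUCCESS;
                    }
                }
            }
            ma_atomic_decrement_32(&pAllocator->groups[iGroup].refcount);   /* Release the hold. */
        }

        /* No slot was found. At capacity it's a genuine failure; otherwise we lost a race, so try again. */
        if (pAllocator->counter >= MA_SLOT_SKETCH_CAPACITY) {
            return MA_OUT_OF_MEMORY;
        }
    }
}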
    pAllocator->groups[iGroup].bitfield = pAllocator->groups[iGroup].bitfield & ~(1u << iBit);  /* Unset the bit. */

    /* Before releasing the group we need to ensure the write operation above has completed so we'll throw a memory barrier in here for safety. */
    ma_memory_barrier();

    c89atomic_fetch_sub_32(&pAllocator->counter, 1);                    /* Decrementing the counter should happen before releasing the group's ref count to ensure we don't waste loop iterations in out-of-memory scenarios. */
    c89atomic_fetch_sub_32(&pAllocator->groups[iGroup].refcount, 1);    /* Release the hold as soon as possible to allow other things to access the bitfield. */
        return MA_SUCCESS;
    } else {
        /* Something else is holding the group. We need to spin for a bit. */
        MA_ASSERT(refcount > 1);
    }

    /* Getting here means something is holding our lock. We need to release and spin. */

#define MA_JOB_ID_NONE  ~((ma_uint64)0)
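
/*
A matching sketch of the free path, under the same assumed layout as the alloc sketch above. Note that the
per-slot reference count in slots[] is intentionally not reset; it keeps growing across reuses of the same
index, which is what mitigates the ABA problem in the job queue's CAS loops.
*/
static ma_result ma_slot_allocator_free_sketch(ma_slot_allocator* pAllocator, ma_uint64 slot)
{
    ma_uint32 slotIndex = (ma_uint32)(slot & 0xFFFFFFFF);  /* Slot index is in the low 32 bits. */
    ma_uint32 iGroup    = slotIndex / 32;
    ma_uint32 iBit      = slotIndex % 32;

    for (;;) {
        ma_uint32 refcount = ma_atomic_increment_32(&pAllocator->groups[iGroup].refcount);  /* <-- Grab a hold on the bitfield. */
        if (refcount == 1) {
            pAllocator->groups[iGroup].bitfield = pAllocator->groups[iGroup].bitfield & ~(1u << iBit);  /* Unset the bit. */

            /* Make sure the write above completes before the group is released. */
            ma_memory_barrier();

            c89atomic_fetch_sub_32(&pAllocator->counter, 1);
            c89atomic_fetch_sub_32(&pAllocator->groups[iGroup].refcount, 1);
            return MA_SUCCESS;
        }

        /* Something else is holding the group. Release our hold and spin. */
        MA_ASSERT(refcount > 1);
        c89atomic_fetch_sub_32(&pAllocator->groups[iGroup].refcount, 1);
    }
}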
/*
Lock-free queue implementation based on the paper by Michael and Scott: "Nonblocking Algorithms and Preemption-Safe Locking on Multiprogrammed Shared Memory Multiprocessors".

Our queue needs to be initialized with a free-standing node, which should always be slot 0. This is required by the lock-free algorithm. The first job in the queue is
just a dummy item; the first real item in the list is stored in its "next" member.
*/
    ma_slot_allocator_alloc(&pQueue->allocator, &pQueue->head);     /* Will never fail. */
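
/*
A hedged sketch of the initialization described above. The ma_job_queue members used here (tail, jobs, next)
are assumptions for illustration; the point is that slot 0 is allocated up front as a dummy node, head and
tail both point at it, and the first real job always hangs off the dummy node's "next" member.
*/
static void ma_job_queue_init_sketch(ma_job_queue* pQueue)  /* ma_job_queue layout is assumed. */
{
    ma_slot_allocator_alloc(&pQueue->allocator, &pQueue->head);                 /* Will never fail: the allocator is fresh, so this returns slot 0. */
    pQueue->tail = pQueue->head;                                                /* Head and tail both start at the dummy node. */
    pQueue->jobs[(ma_uint32)(pQueue->head & 0xFFFFFFFF)].next = MA_JOB_ID_NONE; /* Nothing queued yet. */
}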