C++ Custom Allocators with Polymorphic unique_ptr

This week, I spent two days debugging an issue at work. I am documenting it here in the hope that it helps someone who runs into a similar situation, and to deepen the understanding of std::unique_ptr along the way.

Specifically, the issue involved the deleter of std::unique_ptr. In polymorphic scenarios, when converting a unique_ptr to a derived class into a unique_ptr to its base class, and the pointed-to memory was allocated by a custom allocator with a matching custom deleter, the deleter does not simply convert along with the pointer.

The incident started with a task at work where I needed to write a custom allocator to limit the memory allocation of certain data structures, ensuring their total memory usage did not exceed 100MB. These data structures included standard std::vector and std::map. Close to the deadline, a colleague, in haste, replaced one of the std::vector<T> containers with a std::vector of std::unique_ptr<T>, where T was a base class and what was actually stored were unique_ptrs to objects of T's derived class. This caused me quite a headache.

I had a simple custom allocator:

#include <iostream>
#include <memory>
#include <limits>
#include <vector>
#include <functional>

// Custom allocator that tracks memory usage and enforces a limit.
template <typename T>
class LimitedAllocator 
{
public:
  using value_type = T;
  using pointer = T*;
  using const_pointer = const T*;
  using void_pointer = void*;
  using const_void_pointer = const void*;
  using size_type = size_t;
  using difference_type = ptrdiff_t;
 
  static const size_t limit = 100 * 1024 * 1024; // memory limit of 100MB
  static size_t used_memory;

  LimitedAllocator() noexcept = default;

  template <typename U>
  LimitedAllocator(const LimitedAllocator<U>&) noexcept {}

  pointer allocate(size_type n, const_void_pointer hint = 0) 
  {
    size_type bytes = n * sizeof(T);
    if (bytes + used_memory > limit)
    {
      throw std::bad_alloc();
    }
    used_memory += bytes;
    return static_cast<pointer>(::operator new(bytes));
  }

  void deallocate(pointer p, size_type n) noexcept
  {
    used_memory -= n * sizeof(T);
    ::operator delete(p);
  }

  size_type max_size() const noexcept
  {
    return std::numeric_limits<size_type>::max() / sizeof(T);
  }

  template <typename U, typename... Args>
  void construct(U* p, Args&&... args) {
    new(p) U(std::forward<Args>(args)...);
  }

  template <typename U>
  void destroy(U* p) {
    p->~U();
  }

  bool operator==(const LimitedAllocator&) const noexcept {
    return true;
  }

  bool operator!=(const LimitedAllocator&) const noexcept {
    return false;
  }
};

template <typename T>
typename LimitedAllocator<T>::size_type LimitedAllocator<T>::used_memory = 0;
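
To sanity-check the allocator on its own, here is a small usage sketch with std::vector. The function name and the sizes are illustrative only, and note that used_memory is a separate static per element type T, so each instantiation tracks its own 100MB budget.

void testLimitedAllocatorWithVector()
{
  // Storage for this vector comes from LimitedAllocator<int>, so the
  // reservation is counted against that instantiation's budget.
  std::vector<int, LimitedAllocator<int>> numbers;
  numbers.reserve(1024);

  try
  {
    // Asking for ~200MB in one go exceeds the limit, so allocate() throws.
    std::vector<char, LimitedAllocator<char>> tooBig(200 * 1024 * 1024);
  }
  catch (const std::bad_alloc&)
  {
    std::cout << "Allocation rejected: over the 100MB limit." << std::endl;
  }
}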

Abstracting my actual situation, I had two classes:

class MyClassA {
public:
    virtual ~MyClassA() = default; // Ensure we have a virtual destructor for base class
    virtual void doSomething() const {
        std::cout << "MyClassA doing something." << std::endl;
    }
};

class MyClassB : public MyClassA {
public:
    void doSomething() const override {
        std::cout << "MyClassB doing something different." << std::endl;
    }
};

Here is the test that exercises them. Note that it already uses the LimitedUniquePtr alias and the make_unique_limited factory from the final fix, shown further below:

void testLimitedAllocatorWithPolymorphism() 
{
    LimitedAllocator<MyClassB> allocator; 
    std::vector<LimitedUniquePtr<MyClassA>> m_Horizontals;
    
    auto myClassBPtr = make_unique_limited<MyClassB>(allocator);

    // Since both sides use the same stateless PolymorphicDeleter, myClassBPtr moves directly into myClassAPtr
    LimitedUniquePtr<MyClassA> myClassAPtr = std::move(myClassBPtr);
    m_Horizontals.emplace_back(std::move(myClassAPtr));

    
    for (const auto& item : m_Horizontals) 
    {
        item->doSomething(); 
    }
    m_Horizontals.clear();
}

In actual use, I had a std::vector<LimitedUniquePtr<MyClassA>> into which LimitedUniquePtr<MyClassB> objects were emplaced.

Naively, I thought I could simply release() ownership from the MyClassB unique_ptr and construct a MyClassA unique_ptr from the raw pointer, assuming the derived-to-base conversion would work flawlessly thanks to C++'s object-oriented capabilities.

However, I encountered a std::bad_function_call error. Debugging with gdb revealed that the deleter created inside make_unique_limited, which captured the allocator, was the culprit: the allocator had gone out of scope by the end of the program while the deleter still referred to it.
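
I no longer have the original factory, but the std::bad_function_call strongly suggests its deleter type was std::function. The sketch below (the name make_unique_captured and the exact capture are mine, not the real code) shows both ways this pattern goes wrong: the lambda's reference to the stack-allocated allocator dangles once the allocator goes out of scope, and if you release() the pointer and construct a base-class unique_ptr from it, the new std::function deleter is default-constructed, i.e. empty, so invoking it throws std::bad_function_call.

// Hypothetical reconstruction of the problematic pattern (illustrative only).
template<typename T, typename... Args>
std::unique_ptr<T, std::function<void(T*)>> make_unique_captured(LimitedAllocator<T>& allocator, Args&&... args) {
    T* raw_ptr = allocator.allocate(1);
    new (raw_ptr) T(std::forward<Args>(args)...);

    // The lambda captures 'allocator' by reference, so the deleter is only
    // usable for as long as that allocator object stays alive.
    return std::unique_ptr<T, std::function<void(T*)>>(
        raw_ptr,
        [&allocator](T* p) {
            allocator.destroy(p);
            allocator.deallocate(p, 1);
        });
}

// The naive conversion then looks like this:
//   auto b = make_unique_captured(allocator);
//   std::unique_ptr<MyClassA, std::function<void(MyClassA*)>> a(b.release()); // deleter left empty!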

To solve this, I introduced a PolymorphicDeleter, explicitly replacing the std::unique_ptr's deleter with a stateless function object that does not depend on the allocator at all. This resolved the issue.

The complete solution is as follows:

// Polymorphic deleter
struct PolymorphicDeleter {
    template<typename T>
    void operator()(T* ptr) const {
        delete ptr; // Correctly calls the destructor for T, handling polymorphism
    }
};

template<typename T>
using LimitedUniquePtr = std::unique_ptr<T, PolymorphicDeleter>;

template<typename T, typename... Args>
std::unique_ptr<T, PolymorphicDeleter> make_unique_limited(LimitedAllocator<T>& allocator, Args&&... args) {
    T* raw_ptr = allocator.allocate(1); // Allocate space for one T (counted against the limit)
    try {
        new (raw_ptr) T(std::forward<Args>(args)...); // Placement-new with forwarded arguments
    } catch (...) {
        allocator.deallocate(raw_ptr, 1); // Don't leak the slot if T's constructor throws
        throw;
    }

    // Note: PolymorphicDeleter frees with plain delete, so the memory is not returned
    // through LimitedAllocator::deallocate and used_memory is not decremented here.
    return std::unique_ptr<T, PolymorphicDeleter>(raw_ptr, PolymorphicDeleter{});
}
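
For completeness, here is why the move compiles and behaves correctly now. std::unique_ptr has a converting move constructor that applies when the pointer type converts (MyClassB* to MyClassA*) and the deleter type converts; because both aliases use the very same stateless PolymorphicDeleter, both conditions hold. When the deleter finally runs, delete on the MyClassA* reaches MyClassB's destructor through the virtual destructor. A small compile-time check (requires <type_traits>, and assumes the snippets above are assembled in dependency order):

#include <type_traits>

static_assert(std::is_convertible<LimitedUniquePtr<MyClassB>,
                                  LimitedUniquePtr<MyClassA>>::value,
              "LimitedUniquePtr<MyClassB> can be moved into LimitedUniquePtr<MyClassA>");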

This approach successfully resolved the `std::bad_function_call` issue. In summary: if you are using a static, singleton allocator, there is not much to worry about, because the allocator stays alive for the whole program and a deleter that refers to it cannot dangle; polymorphism is not a significant issue either. However, if your allocator has its own lifecycle, you must think carefully about the relationship between the deleter and the allocator's lifetime.
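
As a final illustration of that last point: a deleter may keep using the allocator safely as long as the allocator is guaranteed to outlive every pointer created from it, for example a function-local static. The sketch below is hypothetical (globalAllocator, GlobalAllocDeleter, and make_unique_global are my names, not part of the fix above); it also keeps used_memory accurate, because deallocation is routed back through the allocator.

// Hypothetical sketch: a deleter that relies on a program-lifetime allocator.
template<typename T>
LimitedAllocator<T>& globalAllocator() {
    static LimitedAllocator<T> instance; // lives until program shutdown
    return instance;
}

template<typename T>
struct GlobalAllocDeleter {
    void operator()(T* ptr) const {
        // Safe: the allocator is static, so this can never dangle, and routing
        // the free through it keeps used_memory in sync with allocations.
        globalAllocator<T>().destroy(ptr);
        globalAllocator<T>().deallocate(ptr, 1);
    }
};

template<typename T, typename... Args>
std::unique_ptr<T, GlobalAllocDeleter<T>> make_unique_global(Args&&... args) {
    T* raw_ptr = globalAllocator<T>().allocate(1);
    new (raw_ptr) T(std::forward<Args>(args)...);
    return std::unique_ptr<T, GlobalAllocDeleter<T>>(raw_ptr, GlobalAllocDeleter<T>{});
}

Note that this deleter is typed per T, so it does not give you the derived-to-base conversion that the single PolymorphicDeleter type provides for free; it illustrates the lifecycle point, not the polymorphism one.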

