― Mark Twain
The typical use case for a ring buffer is a producer-consumer scenario, where a producer sometimes produces data faster than the consumer can consume it. Eventually, however, the consumer will catch up. This typical use case can be summarized as “short-term buffering of bursts of data”.
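Throughout this post, the code snippets operate on the members of a ring buffer class. As a point of reference, here is a minimal skeleton of such a class; the member names match the snippets below, but the exact layout is my assumption, not necessarily the original implementation:

#include <cstddef>

// Minimal skeleton of the ring buffer class the following snippets assume
// (illustrative; only the members used below are shown).
template <typename T, std::size_t BUFSIZE>
class RingBuffer {
public:
    void add(const T& item);           // discussed below.

private:
    void advance(std::size_t& value);  // modular index increment, see below.

    T buffer_[BUFSIZE] = {};           // element storage, default-initialized.
    std::size_t head_ = 0;             // exclusive: the next added element goes here.
    std::size_t tail_ = 0;             // index of the oldest element.
};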
But what happens if buffered data is not consumed fast enough and the ring buffer becomes full? Answer: adding another element will drop the oldest element, as can be seen by looking at the implementation of the ‘add’ method:
void add(const T& item)
{
    buffer_[head_] = item;
    advance(head_);
    if (head_ == tail_) {
        advance(tail_);   // ring buffer full, overwrite oldest element.
    }
}

void advance(size_t& value)
{
    value = (value + 1) % BUFSIZE;
}
After the new element is added to the buffer, the head index is incremented (modulo ‘BUFSIZE’). If it now points at the tail index, the tail index is incremented (modulo ‘BUFSIZE’) as well, which effectively removes the oldest element from the ring buffer, making space for a new ring buffer element to be added. Here’s an example of this happening to a ring buffer that already holds 9 elements (why not 10? The head index is ‘exclusive’ and points at the location where the next added element will go):
0 1 2 3 4 5 6 7 8 9
^                 ^
t                 h
Calling ‘add’ in this situation advances the head and the tail index by 1:
0 1 2 3 4 5 6 7 8 9
^ ^
h t
This continues for every subsequent ‘add’. Here’s what the ring buffer looks like after six more calls to ‘add’:
0 1 2 3 4 5 6 7 8 9
            ^ ^
            h t
Why am I telling you all this? Because there’s a ring buffer use case where a producer endlessly produces data that is never consumed. I call this the “history buffer” use case: a record is kept of the last N events, similar to how a flight recorder operates.
If a ring buffer is used as a history buffer, it is always fully loaded. Thus, the ‘add’ operation is unnecessarily inefficient, as the ‘if (head_ == tail_)’ condition is always true. So let’s get rid of it:
void add(const T& item)
{
    buffer_[head_] = item;
    advance(head_);
    assert(head_ == tail_);   // always true for a fully loaded buffer.
    advance(tail_);
}
This naturally leads to another optimization: the incremented value of ‘head_’ is the current value of ‘tail_’. Thus, we just set the new ‘head_’ to the current value of ‘tail_’ and only increment ‘tail_’:
void add(const T& item)
{
    buffer_[head_] = item;
    head_ = tail_;
    advance(tail_);
}
That’s even better, isn’t it? But there’s more! The tail is usually only used when consuming elements, which we don’t do in the history buffer use case. Why maintain its value at all? Plus, if we need it later (as we shall see shortly), we can always compute it from the head, since it is one element ahead of the head (sounds a bit weird, doesn’t it? The tail is always ahead of the head):
void add(const T& item)
{
    buffer_[head_] = item;
    advance(head_);
}
Adding to the ring buffer has definitely become simpler and cheaper, but you’re probably asking yourself how to retrieve the history data stored in our ring buffer. As usual, there are many ways, but the one that I find most useful is through the standard library’s way of doing things, namely iterators. ‘begin()’ and ‘cbegin()’ simply return the tail, which is, as we already know, one element ahead of ‘head_’:
auto tail = (head_ + 1) % BUFSIZE;
‘end()’ and ‘cend()’ just return ‘head_’.
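To make this concrete, here is a minimal, self-contained sketch of a Manx buffer with such an iterator interface. The class and member names are mine, chosen for illustration; they are not necessarily those of the original implementation:

#include <cstddef>
#include <iterator>

// Illustrative sketch of a Manx buffer (a tailless ring buffer).
template <typename T, std::size_t BUFSIZE>
class ManxBuffer {
public:
    void add(const T& item)
    {
        buffer_[head_] = item;
        head_ = (head_ + 1) % BUFSIZE;
    }

    class const_iterator {
    public:
        // Typedefs so that standard algorithms can inspect the iterator.
        using iterator_category = std::forward_iterator_tag;
        using value_type        = T;
        using difference_type   = std::ptrdiff_t;
        using pointer           = const T*;
        using reference         = const T&;

        const_iterator(const T* buf, std::size_t pos) : buf_(buf), pos_(pos) {}

        const T& operator*() const { return buf_[pos_]; }
        const_iterator& operator++() { pos_ = (pos_ + 1) % BUFSIZE; return *this; }
        bool operator==(const const_iterator& rhs) const { return pos_ == rhs.pos_; }
        bool operator!=(const const_iterator& rhs) const { return pos_ != rhs.pos_; }

    private:
        const T* buf_;
        std::size_t pos_;
    };

    // The tail is not stored; it is computed as one element ahead of the head.
    const_iterator cbegin() const { return const_iterator(buffer_, (head_ + 1) % BUFSIZE); }
    const_iterator cend()   const { return const_iterator(buffer_, head_); }
    const_iterator begin()  const { return cbegin(); }
    const_iterator end()    const { return cend(); }

private:
    T buffer_[BUFSIZE] = {};   // default-initialized storage.
    std::size_t head_ = 0;     // exclusive: the next added element goes here.
};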
In the simplest case, the iterators returned are forward iterators, which means that they only provide ‘operator++()’. As limited as forward iterators are, they allow you to use many useful algorithms from the standard library, including:
std::find
std::copy
std::accumulate
std::any_of
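For instance, assuming a fully loaded ‘ManxBuffer<double, BUFSIZE>’ named ‘rb’ as sketched above (the variable name is hypothetical), the average of the recorded values can be computed with ‘std::accumulate’:

#include <numeric>

// Average over the recorded history; the buffer holds BUFSIZE - 1 elements
// because the head slot is exclusive.
double sum     = std::accumulate(rb.cbegin(), rb.cend(), 0.0);
double average = sum / (BUFSIZE - 1);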
But there’s probably another nagging question on your mind: what if the ring buffer is not fully loaded? What if, for instance, only 3 elements have been added so far to a 10-element ring buffer holding doubles? In this case, the 7 unused elements will hold default-initialized doubles, which can be easily skipped over:
auto it = rb.cbegin();
while (it != rb.cend() && *it == double()) {   // skip over leading default-initialized elements.
    ++it;
}
Most of the time, this extra work is unnecessary in history buffer use cases, as a) the ring buffer fills up quickly and from that point on is always fully loaded, and b) default-constructed elements do no harm in typical history buffer use cases.
I like to call this particular variant of ring buffer a “Manx buffer”, named after the Manx, a peculiar breed of cat whose salient feature is the lack of a tail.
Choosing a Manx buffer over a regular ring buffer may make sense in the following cases:
- The elements added are never consumed but rather kept to have chronological, historical data.
- The size of the element type is small, such that the cost of maintaining the tail is significant compared to the overall cost of adding an element.
- Elements are added at a rapid rate.
An example of when a Manx buffer can prove useful is a data collection system for an aircraft that records the last 10 minutes of accelerometer readings for roll, pitch, and yaw as double values.
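As a rough sketch of such a recorder, assuming a 100 Hz sample rate (my assumption; the sample rate and function names below are hypothetical) and the ManxBuffer class sketched above:

// Hypothetical sizing: 100 samples/s * 60 s * 10 min = 60,000 samples per axis.
constexpr std::size_t kSampleRateHz   = 100;   // assumed sample rate.
constexpr std::size_t kHistorySamples = kSampleRateHz * 60 * 10;

ManxBuffer<double, kHistorySamples> roll_history;
ManxBuffer<double, kHistorySamples> pitch_history;
ManxBuffer<double, kHistorySamples> yaw_history;

void on_accelerometer_sample(double roll, double pitch, double yaw)
{
    roll_history.add(roll);
    pitch_history.add(pitch);
    yaw_history.add(yaw);
}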
You can find my 70-line implementation of a Manx buffer here.