# nimdeque

Various deque implementations in pure Nim. A deque (short for "double-ended queue") is a data type optimized for access towards its ends. A deque's most interesting feature is the ~O(1) time it takes to pop/append at either end (as opposed to regular lists, where appending at the beginning is an O(n) operation).
## Examples

### LinkedDeque

A `LinkedDeque` is a deque based on a doubly linked list.
```nim
import nimdeque
var queue = newLinkedDeque[int]()
# Appends to the tail
queue.add(1)
queue.add(2)
queue.add(3)
# Prepends to the head
queue.addLeft(0)
queue.addLeft(-1)
queue.addLeft(-2)
echo queue
# Pops and returns the first element in O(1) time
echo queue.pop() # -2
echo queue
# Pops and returns the last element in O(1) time
echo queue.pop(queue.high()) # 3
# The last index can also be written as ^1
# (3 was popped above, so this returns 2)
echo queue.pop(^1) # 2
echo queue
# Pops element at position 2
echo queue.pop(2) # 1
# Supports iteration
for i, e in queue:
  echo i, " ", e
# Reversed iteration too!
for e in queue.reversed():
  echo e
# Length and last index
echo queue.len()
echo queue.high()
# 'in' operator
echo 5 in queue # false
echo 0 in queue # true
# Item accessing works just like regular sequence types in Nim.
# Note that the further the item is from either end of the
# queue, the higher the time it takes to retrieve it. For
# fast random access, seqs should be used instead
echo queue[0] # -1
echo queue[^1] # 0
echo queue[queue.high()] # 0
# It's possible to extend a deque with other deques or with seqs
# of compatible type
var other = newLinkedDeque[int]()
other.add(9)
other.add(10)
queue.extend(@[5, 6, 7, 8])
queue.extend(other)
# Finds the first occurrence of an
# item in the queue, returns -1 if not
# found
echo queue.find(9999) # -1, not found
echo queue.find(-1) # 0
# Clears the queue in O(1) time
queue.clear()
# Clears the queue in O(n) time
queue.clearPop()
```
## Notes
- All queue constructors take an optional `maxSize` argument which limits the size of the queue. The default value is 0 (no size limit). When `maxSize > 0`, the queue will discard elements from the head when items are added at the end and, conversely, pop items at the end when one is added at the head. Calling `insert` on a full queue will raise an `IndexDefect` (see the sketch after this list)
- Two deques compare equal if they have the same elements inside them, in the same order. The value of `maxSize` is disregarded in comparisons
- Calls to `extend()` do not raise any errors when the queue is full. They're merely an abstraction over a for loop calling `self.add()` with every item from the other iterable
- Deques in this module do not support slicing. Use the built-in `seq` type if you need fast random accessing and/or slicing capabilities
- The objects in this module are all tracked references! (Unlike the `std/deques` module, which implements them as value types and provides `var` variants of each procedure)
- As with the data structure implemented in `std/deques`, all bounds checking is disabled when compiled with `--checks:off` or `-d:danger`, but queue size checking is not. To disable queue size checking, pass `-d:noQueueSizeCheck`
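The snippet below is a minimal sketch of the bounded-queue behavior described in the first note. The `maxSize` constructor argument comes from this README, but the exact outputs shown in the comments are assumptions rather than verified library output.

```nim
import nimdeque

# Sketch only: assumes newLinkedDeque accepts the optional maxSize
# argument described above and discards from the opposite end when full.
var bounded = newLinkedDeque[int](maxSize = 3)
bounded.add(1)
bounded.add(2)
bounded.add(3)
bounded.add(4)       # full: the head element (1) is expected to be dropped
echo bounded         # expected: 2, 3, 4
bounded.addLeft(0)   # full: the tail element (4) is expected to be dropped
echo bounded         # expected: 0, 2, 3
```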
## Disclaimer

This is mostly a toy: there are no performance guarantees nor particular optimizations beyond the very obvious ones. With that said, the collections do work and are tested somewhat thoroughly (please report any bugs!). The tests directory contains some benchmarks as well as the test suite used to validate the behavior of the queues.
## Why? There's std/deques!

- I was bored during my programming class
- `std/deques` only provides a deque based on `seq`s
- The deque in `std/deques` is a value type
- The deques in this module allow accessing at arbitrary locations
- More useful procs are implemented (`find`, `extend`, `extendLeft`, `reversedPairs`, etc.); see the sketch below
- The deques in this module can be restrained in size
- I was bored during my programming class
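The sketch below illustrates the extra procs listed above that the main example does not cover (`extendLeft` and `reversedPairs`). Their exact semantics, in particular the resulting element order of `extendLeft` and the values yielded by `reversedPairs`, are assumptions here and should be checked against the module's documentation.

```nim
import nimdeque

var q = newLinkedDeque[int]()
q.extend(@[3, 4, 5])       # append several items at the tail
q.extendLeft(@[2, 1, 0])   # prepend several items at the head (order assumed)
echo q.find(4)             # index of the first occurrence, -1 if absent
# Assumed to yield (index, element) pairs walking from the tail to the head
for i, e in q.reversedPairs():
  echo i, " ", e
```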
## Performance against a regular seq

A data structure optimized for access towards both ends will be several times more efficient at those operations than a general-purpose container. The performance difference with a regular dynamic array like Nim's `seq` type is very noticeable: `LinkedDeque` is anywhere from 30 to 2913 times faster at operating near the ends, depending on the platform and compiler (when compiled with `-d:release` or higher). The usual expected speedup lies anywhere from 30 to ~400-500 times over a `seq`, especially when many operations are done sequentially.
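For a rough illustration of where the gap comes from, the micro-benchmark below prepends N items to a plain `seq` (O(n) per insert, since every element is shifted) and to a `LinkedDeque` (O(1) per `addLeft`). This is just a sketch, not the project's bench suite, and the absolute numbers will vary by machine and compiler.

```nim
import std/[monotimes, times]
import nimdeque

const N = 100_000

# Prepending to a seq shifts every existing element on each insert
var s: seq[int]
let t0 = getMonoTime()
for i in 0 ..< N:
  s.insert(i, 0)
echo "seq prepend:   ", getMonoTime() - t0

# Prepending to a LinkedDeque only links a new node at the head
var q = newLinkedDeque[int]()
let t1 = getMonoTime()
for i in 0 ..< N:
  q.addLeft(i)
echo "deque prepend: ", getMonoTime() - t1
```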
## TODOs

There are many possible implementations for double-ended queues. The current one is based on the usual textbook doubly linked list, which isn't the best choice for cache locality and carries significant memory overhead for each link in the chain. Other possibilities involve using a list of subarrays to alleviate both of these issues, while yet other options use ring buffers or specialized dynamic arrays growing from the center, which also allow fast random access and can be made very efficient using lazy evaluation. The goal of this module is to implement most (possibly all) of these approaches, because I find them fascinating.
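As a taste of one of those alternatives, here is a rough, self-contained sketch of the fixed-capacity ring-buffer idea (not part of this library): a slot array with a head index that wraps around, giving O(1) pushes and pops at both ends with better cache locality than a linked list. A real implementation would also grow the backing array when it fills up.

```nim
type
  RingDeque[T] = object
    data: seq[T]
    head: int    # index of the first element
    len: int     # number of stored elements

proc initRingDeque[T](capacity: int): RingDeque[T] =
  # Fixed capacity for simplicity; a real deque would resize when full
  RingDeque[T](data: newSeq[T](capacity), head: 0, len: 0)

proc addLast[T](d: var RingDeque[T], item: T) =
  assert d.len < d.data.len, "ring buffer is full"
  d.data[(d.head + d.len) mod d.data.len] = item
  inc d.len

proc addFirst[T](d: var RingDeque[T], item: T) =
  assert d.len < d.data.len, "ring buffer is full"
  d.head = (d.head - 1 + d.data.len) mod d.data.len
  d.data[d.head] = item
  inc d.len

proc popFirst[T](d: var RingDeque[T]): T =
  assert d.len > 0, "ring buffer is empty"
  result = d.data[d.head]
  d.head = (d.head + 1) mod d.data.len
  dec d.len

proc popLast[T](d: var RingDeque[T]): T =
  assert d.len > 0, "ring buffer is empty"
  dec d.len
  result = d.data[(d.head + d.len) mod d.data.len]

var rd = initRingDeque[int](4)
rd.addLast(1)
rd.addLast(2)
rd.addFirst(0)
echo rd.popFirst()  # 0
echo rd.popLast()   # 2
```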