In PureCFR and CFU the result of the computation, the average strategy, can usually be stored in 32-bit integers. A simple implementation might look
like this:
class AverageStrategyStorage
    uint[] data

    Init()
        data = new uint[number of infoset-actions]

    Increment(key)
        data[key]++
Note two things:
- We don't need to know the actual accumulated values during the computation. We only need to increment them.
- The keys are not uniformly distributed. Many of the keys are rarely or never used (but we don't know which ones before we do the computation).
This leads us to an implementation (shown below) that divides the storage into three parts: an 8-bit array that uses 1/4 of the memory of the naive implementation,
a constant-size (512 MB) hash map, and files stored on a hard disk. It works as follows:
- 255/256 (99.6%) of the time, we simply increment a byte and compare it to zero, which is as fast as or faster (because of a higher chance of a cache hit) than the original implementation.
- 1/256 of the time, we do a more expensive (but still pretty cheap) hash map increment.
- Finally, when the hash map gets full, we accumulate it to disk and clear the map.
The size of the hash map should be chosen so that we don't have to touch the disk too often (maybe once an hour or so). For disk storage, an entropy
coder such as an adaptive range coder can be used to keep the disk usage and access time small. It's also possible to use two hash maps instead and do
the disk accumulation in a background thread.
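To illustrate why an entropy-coded disk format stays small, here is a minimal Python sketch (not the author's actual coder) that delta-encodes the sorted keys with a LEB128-style variable-length integer scheme; an adaptive range coder, as suggested above, would compress further, but the delta step alone already makes dense key runs cost about one byte each.

```python
def encode_varint(n, out):
    """Append n as a base-128 variable-length integer."""
    while n >= 0x80:
        out.append((n & 0x7F) | 0x80)
        n >>= 7
    out.append(n)

def decode_varint(buf, pos):
    """Decode one varint from buf at pos; return (value, new_pos)."""
    n, shift = 0, 0
    while True:
        b = buf[pos]
        pos += 1
        n |= (b & 0x7F) << shift
        if b < 0x80:
            return n, pos
        shift += 7

def encode_records(records):
    """Delta-encode key-sorted (key, value) pairs into a byte stream."""
    out = bytearray()
    last_key = 0
    for key, value in records:
        encode_varint(key - last_key, out)  # small deltas -> few bytes
        encode_varint(value, out)
        last_key = key
    return bytes(out)

def decode_records(buf):
    records, pos, key = [], 0, 0
    while pos < len(buf):
        delta, pos = decode_varint(buf, pos)
        value, pos = decode_varint(buf, pos)
        key += delta
        records.append((key, value))
    return records
```

A round trip over three records, one with a key near a million, takes 9 bytes here versus 36 bytes for raw 8+4-byte entries.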
The algorithm works very well in a real CFU FLHU implementation, being only about 10% slower and requiring a surprisingly small amount of disk space.
The memory usage is only ~28 bits* per infoset-action, whereas regular CFR needs 128 bits (two doubles) per infoset-action.
* The CFU implementation needs to store only n-1 integer CFU values per n-action infoset
class AverageStrategyStorage
    byte[] low8bits
    AccumulationHashMap accumulationHashMap

    Init()
        low8bits = new byte[number of infoset-actions]
        accumulationHashMap = new AccumulationHashMap(512 * 1024 * 1024, 8)

    Increment(key)
        low8bits[key]++
        if (low8bits[key] == 0)
            // the byte wrapped around; carry 256 into the hash map
            if (!accumulationHashMap.Increment(key))
                accumulationHashMap.AccumulateToDisk()
                accumulationHashMap.Clear()
                accumulationHashMap.Increment(key) // retry the carry that didn't fit
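To make the carry scheme concrete, here is a runnable Python sketch of the byte-plus-overflow design. It substitutes a capacity-capped dict for AccumulationHashMap and a dict for the disk file, so the names MAP_CAPACITY, _flush_to_disk, and total are illustrative, not from the original; the tiny capacity is only there to exercise the flush path.

```python
class AverageStrategyStorage:
    MAP_CAPACITY = 4  # tiny cap to exercise flushing; the real map is ~512 MB

    def __init__(self, n):
        self.low8bits = bytearray(n)  # low 8 bits of every counter
        self.overflow = {}            # stand-in for AccumulationHashMap
        self.disk = {}                # stand-in for the on-disk accumulation

    def increment(self, key):
        self.low8bits[key] = (self.low8bits[key] + 1) & 0xFF
        if self.low8bits[key] == 0:   # byte wrapped: carry 256 into the map
            if len(self.overflow) >= self.MAP_CAPACITY and key not in self.overflow:
                self._flush_to_disk()
            self.overflow[key] = self.overflow.get(key, 0) + 1

    def _flush_to_disk(self):
        for k, v in self.overflow.items():
            self.disk[k] = self.disk.get(k, 0) + v
        self.overflow.clear()

    def total(self, key):
        """Reassemble the full counter (only needed after the computation)."""
        return (self.disk.get(key, 0) + self.overflow.get(key, 0)) * 256 \
               + self.low8bits[key]
```

Incrementing one key 1000 times leaves 3 carries in the map and 232 in the byte array, and total() reassembles exactly 1000.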
class AccumulationHashMap
    struct Entry
        int64 Key
        int Value
        int Next

    int capacity
    int slotCount
    int entryCount
    Entry[] entries

    AccumulationHashMap(int nBytes, int loadFactor)
        capacity = nBytes / sizeof(Entry)
        slotCount = capacity / loadFactor // must be a power of two (see the mask in Increment)
        entries = new Entry[capacity]
        Clear()

    Clear()
        // entries [0, slotCount) are the hash slots; the rest is the overflow area
        entryCount = slotCount
        for (int i = 0; i < slotCount; i++)
            entries[i].Key = -1

    bool Increment(key)
        i = (int)(HashFunction(key) & (slotCount - 1))
        if (entries[i].Key < 0)
            entries[i].Key = key
            entries[i].Value = 1
            entries[i].Next = -1
            return true
        while (true)
            if (entries[i].Key == key)
                entries[i].Value++
                return true
            if (entries[i].Next < 0)
                break
            i = entries[i].Next
        if (entryCount >= capacity)
            return false // overflow area is full; the caller accumulates to disk
        entries[entryCount].Key = key
        entries[entryCount].Value = 1
        entries[entryCount].Next = -1
        entries[i].Next = entryCount++
        return true
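For readers who want to experiment with the slots-plus-overflow layout, here is a direct Python transcription of the map above, using parallel lists in place of the Entry array. The hash_function here is a simple multiplicative mixer standing in for the Thomas Wang hash the article uses; everything else follows the pseudocode.

```python
def hash_function(key):
    """Stand-in mixer (the article uses a Thomas Wang downscaling hash)."""
    return ((key * 0x9E3779B97F4A7C15) & 0xFFFFFFFFFFFFFFFF) >> 32

class AccumulationHashMap:
    ENTRY_SIZE = 16  # int64 Key + int Value + int Next, as in the pseudocode

    def __init__(self, n_bytes, load_factor):
        self.capacity = n_bytes // self.ENTRY_SIZE
        self.slot_count = self.capacity // load_factor  # must be a power of two
        self.keys = [-1] * self.capacity
        self.values = [0] * self.capacity
        self.next = [-1] * self.capacity
        self.entry_count = self.slot_count

    def clear(self):
        self.entry_count = self.slot_count
        for i in range(self.slot_count):
            self.keys[i] = -1

    def increment(self, key):
        i = hash_function(key) & (self.slot_count - 1)
        if self.keys[i] < 0:  # empty slot: claim it
            self.keys[i], self.values[i], self.next[i] = key, 1, -1
            return True
        while True:           # walk the collision chain
            if self.keys[i] == key:
                self.values[i] += 1
                return True
            if self.next[i] < 0:
                break
            i = self.next[i]
        if self.entry_count >= self.capacity:
            return False      # overflow area full; caller flushes to disk
        j = self.entry_count
        self.entry_count += 1
        self.keys[j], self.values[j], self.next[j] = key, 1, -1
        self.next[i] = j
        return True
```

Note that clear() only wipes the slot region; stale overflow entries become unreachable because a reclaimed slot resets its Next pointer.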
    AccumulateToDisk()
        decoder = new Decoder(previous_storage_file)
        encoder = new Encoder(new_storage_file)
        newEntries = new Iterator(entries sorted by key)
        lastKey = 0
        while (newEntries.More || decoder.More)
            // an exhausted stream reports CurrentKey = infinity
            nextKey = min(newEntries.CurrentKey, decoder.CurrentKey)
            encoder.EncodeDeltaKey(nextKey - lastKey)
            lastKey = nextKey
            value = 0
            if (nextKey == decoder.CurrentKey)
                value += decoder.CurrentValue
                decoder.Next()
            if (nextKey == newEntries.CurrentKey)
                value += newEntries.CurrentValue
                newEntries.Next()
            encoder.EncodeValue(value)
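The disk accumulation is a standard merge of two key-sorted streams, summing the values of matching keys. Here is a minimal in-memory sketch with plain (key, value) lists standing in for the range-coder streams; the real version would read and write delta-encoded keys instead.

```python
def accumulate(old_records, new_records):
    """Merge two key-sorted (key, value) lists, summing values of equal keys."""
    INF = float('inf')
    merged = []
    i = j = 0
    while i < len(old_records) or j < len(new_records):
        # an exhausted stream acts as if its next key were infinity
        old_key = old_records[i][0] if i < len(old_records) else INF
        new_key = new_records[j][0] if j < len(new_records) else INF
        key = min(old_key, new_key)
        value = 0
        if key == old_key:
            value += old_records[i][1]
            i += 1
        if key == new_key:
            value += new_records[j][1]
            j += 1
        merged.append((key, value))
    return merged
```

Merging [(1, 5), (4, 2), (9, 1)] with [(4, 3), (7, 6)] gives [(1, 5), (4, 5), (7, 6), (9, 1)]: shared key 4 is summed, the rest pass through in order.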
    // Thomas Wang 64-bit to 32-bit downscaling hash function
    uint HashFunction(uint64 key)
        key = (~key) + (key << 18)
        key = key ^ (key >> 31)
        key = key * 21
        key = key ^ (key >> 11)
        key = key + (key << 6)
        key = key ^ (key >> 22)
        return (uint)key
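The same hash can be checked in Python by transcribing the pseudocode with explicit 64-bit wraparound (Python integers are unbounded, so each add, negate, and multiply must be masked to emulate uint64 arithmetic):

```python
MASK64 = (1 << 64) - 1

def hash_function(key):
    """Thomas Wang 64-bit to 32-bit hash, with explicit uint64 wraparound."""
    key = (~key + (key << 18)) & MASK64
    key = key ^ (key >> 31)
    key = (key * 21) & MASK64
    key = key ^ (key >> 11)
    key = (key + (key << 6)) & MASK64
    key = key ^ (key >> 22)
    return key & 0xFFFFFFFF  # downscale to 32 bits
```

A quick sanity check: consecutive keys (the worst case for a naive modulo) spread well across power-of-two slot counts, which is exactly what the mask in Increment relies on.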