As an exercise, I’m implementing HashLife in Go.
In brief, HashLife works by memoizing nodes in a quadtree so that once a given node’s future state has been calculated, it can just be looked up instead of being re-calculated. So e.g. if you have a node at the 8×8 level, you remember it by its four children (each at the 4×4 level). The next time you see an 8×8 node and go to calculate the next generation, you first check whether you’ve already seen a node with those same four children. This is extended up through all levels of the quadtree, which gives you some pretty amazing optimizations if e.g. you’re 10 levels above the leaves.
Unsurprisingly, it looks like the performance crux of this is the lookup of nodes by child-node values. Currently I have a hashmap of
{&upper_left_node,&upper_right_node,&lower_left_node,&lower_right_node} -> node
So my lookup function is this:
func FindNode(ul, ur, ll, lr *Node) *Node {
	var node *Node
	var ok bool
	nc := NodeChildren{ul, ur, ll, lr}
	node, ok = NodeMap[nc]
	if ok {
		return node
	}
	node = &Node{ul, ur, ll, lr, 0, ul.Level + 1, nil}
	NodeMap[nc] = node
	return node
}
What I’m trying to figure out is if the “nc := NodeChildren…” line causes a memory allocation each time the function is called. If it does, can I/should I move the declaration to the global scope and just modify the values each time this function is called? Or is there a more efficient way to do this?
Any advice/feedback would be welcome. (even coding style nits; this is literally the first thing I’ve written in Go so I’d love any feedback)
> What I’m trying to figure out is if the “nc := NodeChildren…” line causes a memory allocation each time the function is called?
Yeah. Every time you call the FindNode() function, nc := creates a new struct value. Whether that actually becomes a heap allocation depends on the compiler’s escape analysis — you can check with go build -gcflags=-m. If it does escape, how much this impacts performance depends on how many times you call the function: in a tight loop it will create a lot of garbage to take care of, with the heap growing roughly by doubling (2 MB -> 4 MB -> 8 MB -> 16 MB, etc.). Until the garbage collector runs, those MBs can add up for objects that are no longer reachable.
> If it does, can I/should I move the declaration to the global scope and just modify the values each time this function is called? Or is there a more efficient way to do this?
What I would do is create an array to hold the Node pointers (an array, not a slice — slices aren’t comparable, so they can’t be used as map keys):

var nc [4]*Node
func FindNode(ul, ur, ll, lr *Node) *Node {
	var node *Node
	var ok bool
	nc[0] = ul; nc[1] = ur
	nc[2] = ll; nc[3] = lr
	node, ok = NodeMap[nc] // change struct logic to array logic,
	// i.e. nc.ul -> nc[0], etc.
	if ok {
		return node
	}
	node = &Node{ul, ur, ll, lr, 0, ul.Level + 1, nil}
	// why hash this with an index to a type that won't exist later?
	NodeMap[nc] = node
	return node
}