70fed23be5
Previously our file hashing functions were backed by the same "read file into memory" function we use for situations like "file" and "templatefile", meaning that they'd read the entire file into memory first and then calculate the hash from that buffer.

All of the hash implementations we use here can calculate hashes from a sequence of smaller buffer writes, though, so there's no actual need for us to create a file-sized temporary buffer. This, then, is a small refactoring of our underlying function into two parts: one responsible for deciding the actual filename to load and opening it, and the other responsible for buffering the file into memory. Our hashing functions can then use only the first function and skip the second. This allows us to use io.Copy to stream from the file into the hashing function in smaller chunks, possibly of a size chosen by the hash function itself if it happens to implement io.ReaderFrom.

The new implementation is functionally equivalent to the old, but should use less temporary memory if the user passes a large file to one of the hashing functions.
blocktoattr/
funcs/
testdata/functions-test/
data.go
data_test.go
doc.go
eval.go
eval_test.go
functions.go
functions_test.go
references.go
scope.go