@Rotonen as I retool a bunch of miner code, anything I should do to make life easier for optimizations?
If we at some point magic up a NUMA-aware memory store that can plug in modularly in the new setup
Or any other novel data sources
Like redis or memcache
@Fireduck nothing trivial, unless you go for hwloc levels of hw awareness, it's best to let the os do its thing as much as possible
makes sense
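(Purely illustrative aside: a rough sketch of what a pluggable field source could look like, so backends like heap arrays, memory-mapped files, redis, or memcache could slot in behind one interface. The `FieldSource` and `HeapFieldSource` names and methods here are made up for the sketch, not anything in the actual miner code.)
```java
/** Hypothetical pluggable read interface for snowfield data.
    Backends (heap arrays, mmapped files, redis, memcache, a
    NUMA-pinned store) would each implement this. */
interface FieldSource {
  /** Copy 'len' bytes starting at absolute field offset 'offset' into 'dest'. */
  void read(long offset, byte[] dest, int len);
  long sizeBytes();
}

/** Trivial in-heap backend, just to show the shape of an implementation. */
class HeapFieldSource implements FieldSource {
  private final byte[] data;
  HeapFieldSource(byte[] data) { this.data = data; }
  public void read(long offset, byte[] dest, int len) {
    System.arraycopy(data, (int) offset, dest, 0, len);
  }
  public long sizeBytes() { return data.length; }
}
```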
@Rotonen, I forget, did you repro the miner slowdown from 1.06 to 1.07?
i've not yet had time to do a benchmark sweep
prolly gonna start fiddling with that Sunday evening at the earliest
yeah, no pressure or anything. just curious
no pressure, it's also in my own interest to help with that
I wonder if there is any reason to not do huge byte arrays in java as opposed to a larger number of smaller ones
like 1gb byte array
@Fireduck NUMA
@Fireduck it's easier for the OS scheduler to shuffle 1G chunks around unfragmented than to figure out how to fragment big chunks
so that is in favor of big chunks?
to me, big = 1gb, small = 1mb
I could also do anything in between
ah no, for me small < 32GB
1G is a good size for various reasons
alright
also makes it easier for people to reason about, leaky abstraction and all: "where can I put one of these where it's fast to get from?"
and then just repeat until you have the full snowfield somewhere
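(A minimal sketch of the chunked layout being discussed: hold the snowfield as a list of 1 GiB byte[] chunks and map a global offset onto (chunk, offset-in-chunk); Java arrays are int-indexed anyway, so a single array can't go much past 2 GiB. `ChunkedField` is a made-up name, and the sketch assumes a read never straddles a chunk boundary.)
```java
import java.util.ArrayList;
import java.util.List;

/** Sketch: a multi-GB field stored as 1 GiB byte[] chunks. */
class ChunkedField {
  static final int CHUNK_SIZE = 1 << 30; // 1 GiB per chunk

  private final List<byte[]> chunks = new ArrayList<>();

  ChunkedField(long totalBytes) {
    long remaining = totalBytes;
    while (remaining > 0) {
      int size = (int) Math.min(CHUNK_SIZE, remaining);
      chunks.add(new byte[size]);
      remaining -= size;
    }
  }

  /** Copy 'len' bytes starting at global 'offset' into 'dest'
      (assumes the read stays within one chunk). */
  void read(long offset, byte[] dest, int len) {
    byte[] chunk = chunks.get((int) (offset / CHUNK_SIZE));
    System.arraycopy(chunk, (int) (offset % CHUNK_SIZE), dest, 0, len);
  }
}
```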
building a sweep of the deploy jars, but prolly not gonna have time to test before sunday
```
#!/usr/bin/env bash
for version in $(git tag | grep -E '^1' | gsort -V)
do
  bazel clean
  git checkout "$version"
  for target in $(grep name BUILD | cut -d'"' -f2)
  do
    jar_name="$target"_deploy.jar
    bazel build ":$jar_name"
    mv bazel-bin/"$jar_name" "$version"-"$jar_name"
  done
done
# EOF
```
(explicit gnu coreutils version sort as per building on a mac)