You are telling it to load 90g?
no, that's the RSS size at the time of hitting the heap limit
i was pulling a 128GB field into memory as a memfield and previously never hit the heap size, usually gave it like 135G or 140G max heap size
i was watching it in a split screen and saw the RSS crawl up as it read off the disk and the miner pooped out at 90G pulled into ram
i'm a bit puzzled as to how it thinks it cannot when the heap size is set higher
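For what it's worth, the `-Xmx` ceiling only governs the Java heap; the RSS you watch in a split screen also includes metaspace, thread stacks, and direct buffers, so the process can fall over at an RSS that looks well short of the configured maximum. A minimal sketch (nothing Snowblossom-specific, just `java.lang.Runtime`) of checking what the JVM itself thinks its heap limits are:

```java
public class HeapCheck {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        // maxMemory() reflects the -Xmx ceiling; totalMemory() is what the
        // JVM has actually committed so far; used = committed - free.
        long maxMib = rt.maxMemory() / (1024 * 1024);
        long committedMib = rt.totalMemory() / (1024 * 1024);
        long usedMib = (rt.totalMemory() - rt.freeMemory()) / (1024 * 1024);
        System.out.println("max heap (-Xmx ceiling): " + maxMib + " MiB");
        System.out.println("committed:               " + committedMib + " MiB");
        System.out.println("used:                    " + usedMib + " MiB");
    }
}
```

Comparing these numbers against RSS from `ps` or `top` shows how much of the process footprint sits outside the Java heap.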
oh well i'll let it fscache, although that'll take ridiculously long to have everything cached up
Jvm is weird
ah well, it's more fun watching the iowait percentage drop down as fscache gets more and more populated
also a faster feedback cycle on thread count tweaks, so sort of a win
i guess the field needs to be like 5/6 in fscache for significant speedups from extra ram?
I don't know, I'm just the janitor
it's actually faster to grep the snowfield and start mining than to memfield it :stuck_out_tongue:
[snowblossomcoin/snowblossom] Issue closed by fireduck64
@Fireduck thank you, that does well enough
> Ok, I'll hit this with my dumb hammer
i love this duck. :rolling_on_the_floor_laughing:
Never use a smart hammer when a dumb one will do
hammer time?
I'm thinking about adding some actual useful stats to Arktika
but having a hard time reasoning about whether I need to get clever to avoid synchronization across cores for metrics
or if an AtomicLong is going to be fast enough
you need to get clever
yeah, that is what I figure
which is fine, I don't mind doing it
@Rotonen you are right: 60M writes per second to a shared AtomicLong, 650M writes per second if each thread gets its own AtomicLong
and there it is
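That roughly 10x gap matches the usual contended-vs-striped counter story: every thread CASing one shared AtomicLong bounces a single cache line between cores, while per-thread counters only pay an uncontended CAS and a reader sums them on demand. A minimal sketch of the two shapes (the names are illustrative, not from Arktika):

```java
import java.util.concurrent.atomic.AtomicLong;

public class CounterDemo {
    // Returns {sharedTotal, stripedSum}; both should equal threads * iters.
    static long[] run(int threads, long iters) throws InterruptedException {
        AtomicLong shared = new AtomicLong();           // one cache line, all threads contend
        AtomicLong[] striped = new AtomicLong[threads]; // one counter per thread, no contention
        Thread[] workers = new Thread[threads];
        for (int i = 0; i < threads; i++) {
            striped[i] = new AtomicLong();
            final AtomicLong mine = striped[i];
            workers[i] = new Thread(() -> {
                for (long n = 0; n < iters; n++) {
                    shared.incrementAndGet(); // contended CAS
                    mine.incrementAndGet();   // uncontended CAS
                }
            });
            workers[i].start();
        }
        for (Thread t : workers) t.join();

        long sum = 0;
        for (AtomicLong a : striped) sum += a.get(); // reader aggregates on demand
        return new long[] { shared.get(), sum };
    }

    public static void main(String[] args) throws InterruptedException {
        long[] r = run(4, 1_000_000L);
        System.out.println("shared=" + r[0] + " striped sum=" + r[1]);
    }
}
```

The JDK already ships this striping trick as `java.util.concurrent.atomic.LongAdder`, which is usually the easy answer for write-heavy metrics counters.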
@Fireduck thank you for metrifying
It might take me a bit. Turns out arktika was written by a crazy person.
@Fireduck https://www.monkeyuser.com/2018/reminiscing/
It isn't that bad, but it is necessarily complex due to the layers
yeah, to be fair, the only bits i've actually looked at so far are the config parsers