Without at least 1 GB/s it really won't work
so a fast spinny-disk RAID 1 is about 10x short
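Rough arithmetic behind that, every figure a ballpark assumption rather than a measurement:
```java
// Ballpark check only: typical spinning-disk throughput vs the 1 GB/s floor.
public class SpinnyCheck {
    public static void main(String[] args) {
        double diskGbPerSec = 0.1;     // ~100 MB/s effective from one spinny disk (assumed)
        double requiredGbPerSec = 1.0; // the stated floor
        // RAID 1 can serve reads from both mirrors, so at best ~2x one disk,
        // which still leaves the array several-fold short of the target.
        System.out.printf("single-disk shortfall: %.0fx%n",
                requiredGbPerSec / diskGbPerSec); // ≈ 10x
    }
}
```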
Yeah
So that is closely related to the methods near it which save and get the records, so it is kinda commented by proximity to those. :wink:
not for someone coming in cold and trying to get an overview level understanding of how it ticks :P
at least name the variables and sum those together :D
you are not at all wrong
a test run cycle is 20min
i guess the sensible number of hashing threads is the system hyperthread count
that might as well be per default that
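something like this as the default, i'd guess (just a sketch; `configuredThreads` is a made-up setting name, not the miner's actual config):
```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class HashPool {
    // Default the hashing pool to the logical (hyperthread) core count
    // when no explicit thread count is configured.
    public static ExecutorService create(int configuredThreads) {
        int threads = configuredThreads > 0
                ? configuredThreads
                : Runtime.getRuntime().availableProcessors();
        return Executors.newFixedThreadPool(threads);
    }
}
```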
what’s the sizing thing?
and why would one want more than one wave?
buffer_size ?
work unit mem gb
how much memory (roughly) you want to use for in progress work units
in total or per wave?
total
gotcha
and why more than one wave?
and you would want more than one wave so that reads continue while a wave is processing
and if your IO system does better with more readers
why more than two waves?
not sure
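for what it's worth, the shape is plain N-way buffering between the reader and the hashers, roughly like this (all names illustrative, not the miner's actual code):
```java
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class WavePipeline {
    // Capacity = waves in flight; 2 already overlaps IO with hashing,
    // and more waves are just deeper buffering.
    private final BlockingQueue<byte[]> waves = new ArrayBlockingQueue<>(2);

    void readerLoop() throws InterruptedException {
        while (true) {
            byte[] wave = readNextWaveFromDisk(); // hypothetical read step
            waves.put(wave);                      // blocks when hashing lags
        }
    }

    void hasherLoop() throws InterruptedException {
        while (true) {
            byte[] wave = waves.take();           // blocks when reads lag
            hashWave(wave);
        }
    }

    private byte[] readNextWaveFromDisk() { return new byte[0]; }
    private void hashWave(byte[] wave) { }
}
```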
i’ll try with two waves, 8 threads, 50gb work unit
2TB 960 Pro, 64GB RAM, i7-7700T
probably want more threads. Ram is fast, but not instant
so threads will still be waiting on data from ram
but I am not sure
so, 16 threads
I had some better experience with 10x the number of cores
but I really don't know
if that is from your R900, i’ll not try to follow
I did most of my testing on a ryzen system
preliminarily i get more, and more consistent, reads with just one wave
my theory was RAM takes about 10 microseconds to access, and the hashing takes about 1 microsecond, so a 10:1 ratio makes sense
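the arithmetic under those assumptions, with the i7-7700T's 8 logical cores plugged in:
```java
public class ThreadEstimate {
    public static void main(String[] args) {
        // Stated assumptions: ~10 µs per RAM fetch, ~1 µs of hashing per fetch.
        double ramFetchMicros = 10.0;
        double hashMicros = 1.0;
        // While one thread waits on RAM, others can hash: ~1 + wait/compute.
        double threadsPerCore = 1.0 + ramFetchMicros / hashMicros; // = 11
        int logicalCores = 8; // i7-7700T: 4 cores, 8 hyperthreads
        System.out.println(Math.round(threadsPerCore * logicalCores)); // ≈ 88
    }
}
```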
i’ll try 80 threads 16 waves later
I should probably add a 100ms delay between wave starts so they aren't in lock step to start with
the device is a multiqueue io device
add a rand() + 100ms
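something like this before each wave launch, as a sketch (numbers illustrative):
```java
import java.util.concurrent.ThreadLocalRandom;

public class WaveStagger {
    // Delay each wave start by 100 ms plus up to 100 ms of random jitter
    // so successive waves don't run in lock step.
    static void staggerNextWave() throws InterruptedException {
        Thread.sleep(100 + ThreadLocalRandom.current().nextLong(100));
    }
}
```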
I should also add a higher level bandwidth report, I might be fooling myself watching dstat
if the OS is caching anything
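a minimal version of that report could count only the bytes the miner itself consumed, so page-cache hits can't inflate it; a sketch with made-up names:
```java
import java.util.concurrent.atomic.LongAdder;

class BandwidthReporter {
    // Application-level byte counter: unlike dstat, this can't be
    // skewed by whatever the OS page cache is doing underneath.
    private final LongAdder bytesRead = new LongAdder();
    private long windowStartNanos = System.nanoTime();

    void onRead(int n) { bytesRead.add(n); }

    // Call periodically (e.g. once a second) to get GB/s for the window.
    double sampleGbPerSec() {
        long now = System.nanoTime();
        double secs = (now - windowStartNanos) / 1e9;
        double gb = bytesRead.sumThenReset() / 1e9;
        windowStartNanos = now;
        return gb / secs;
    }
}
```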
got a null pointer exception
runpass -> old work unit -> boom
can you paste me the stack trace?
checking this out on the side, and i only have slack on my phone while on site
ah
line number of closest line in my code?
os::commit_memory failed
so dunno
ah
prolly i am using more ram than is available
for the miner to be worth it, i guess i’ll need to punch above 200k in hashrate
the memory use just balloons out of hand, already down to a 35G work unit and still tries to use 55G memory
my math says if you can do 2GB/s reads, and use 40GB of ram, you should be able to get about 1.3 MH/s
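dividing those figures back out as a unit check (the per-hash read size below is inferred from the quoted numbers, not a documented constant):
```java
public class RateCheck {
    public static void main(String[] args) {
        // If throughput is read-bandwidth bound, 2 GB/s at ~1.3 MH/s
        // implies roughly 1.5 kB of plot data consumed per hash; the
        // 40 GB of RAM presumably sizes the in-flight work units.
        double readBytesPerSec = 2e9;
        double hashesPerSec = 1.3e6;
        System.out.printf("bytes/hash: %.0f%n",
                readBytesPerSec / hashesPerSec); // ≈ 1538
    }
}
```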
but I haven't managed to get anything with near that number of work units to not slow way down
i’ll idly play with it on the side for the next 3 hours
awesome, I appreciate it
i’ll try with one wave one thread
I don't quite have the right hardware for this and am trying to spend less money on things
but the sequential reads it does are not above like 1.2GB/s
with one thread and one wave it only uses about 20GB ram - expected?
oh, it stops reading for a bit
it should use 1gb for the wave itself, plus whatever amount of work units you specify
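so the footprint math, as I understand it (variable names made up):
```java
public class FootprintMath {
    public static void main(String[] args) {
        long waveBufferBytes = 1L << 30; // ~1 GB per wave, as stated above
        int waves = 2;
        long workUnitGb = 50;            // the "work unit mem gb" setting
        long expected = waves * waveBufferBytes + (workUnitGb << 30);
        // ~52 GB before JVM overhead, hence the need for -Xmx headroom.
        System.out.printf("%.1f GB%n", expected / 1e9);
    }
}
```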
gotcha, switching to 2 waves
oh, the per wave 1GB got me
and 2 waves 1 thread gets me 2.3GB/s reads
each wave uses 100% CPU - expected?
so this actually needs a beefy cpu too
otherwise it keeps stepping on its own toes
something went boom at lines 370 and 393
it fails to consume work units from the pool?
letting it run with 2 waves, 40 threads, 50GB work units, -Xmx55g
so far it is up to two hashes per second
ah, that makes sense
i changed some of the time flow and broke it
has run for about two blocks now, one of which it tripped up on, but seemed to somehow recover
up to ten hashes per second
ha
will leave it for an hour now and come back to it
maybe you figure something out in the meanwhile
it peaks at 150k, averages 50k over the hour, and posted only invalid shares
invalid shares are the best shares
all rejected?
ping me for a retest once you figure it out, the pool miner performs better
all rejected
that is strange
something is off in the pool communication