alrighty, no clue if that's slow, average or wut but
```
[2018-06-25 01:50:39] INFO snowblossom.miner.PoolMiner printStats 1-min: 5.760K/s, 5-min: 5.758K/s, hour: 693.236/s
[2018-06-25 01:50:39] INFO snowblossom.miner.PoolMiner printStats Shares: 1 (rejected 0) (blocks 0)
[2018-06-25 01:50:54] INFO snowblossom.miner.PoolMiner printStats 15 Second mining rate: 5752.999/sec - at this rate 12.151 minutes per share (diff 22.000)
[2018-06-25 01:50:54] INFO snowblossom.miner.PoolMiner printStats 1-min: 5.743K/s, 5-min: 5.762K/s, hour: 717.213/s
[2018-06-25 01:50:54] INFO snowblossom.miner.PoolMiner printStats Shares: 1 (rejected 0) (blocks 0)
```
took me like 3 days to finally have it running (with the torrenting thingy, raid 0 setup and all xD)
no blocks all day at snowday today. ouch.
blocks are no longer cool
is that due to the difficulty staying the same while nethash dropped by half (with the move from field 6 to 7)? _idk how it works just yet, am still noob_ :stuck_out_tongue:
blocks are and always will be cool
it's probably because of miners jumping around from pool to pool.
Yeah, I noticed
I get like 10MH/s on snowday, then they leave for some reason
Ok, at this point I'd really like people to vote yes on prop 2.
It gives school teachers a raise or something.
I would agree to give teachers a raise if they get rid of that "untouchable after 10 years, bitches" rule
better yet, fuck public schools, either private or homeschooling.
I'm not so sure about prop 2... it essentially removes the hashrate requirement to advance snowfields.
@Protovist ? I don't think so.
Once someone is creating blocks with a higher snowfield, the weighting means that only blocks from that field will be added to the main chain.
Weight only comes from required field
Not whatever someone mines with
Someone can mine with field 11 and it makes no difference. Only the current active field matters for weight. Which continues to only be triggered by difficulty
The work sum weighting had no effect on difficulty
can difficulty drop?
Yes
But activated field can not
awww
ok
got it
snowday needs more power!
I'm givin it all she's got captain!
@Protovist I appreciate the scrutiny, but I think in this case there is nothing to worry about. The fields activate with difficulty, which adjusts based on hash rate. That all stays the same. The only difference is that once the chain activates a higher field, the chain with the higher field has an advantage in work sum weighting, which hopefully counteracts the hash rate drop when going to a higher field.
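[editor's note] A purely illustrative Java sketch of the idea above (not the actual snowblossom code or weighting formula; the class name and the power-of-two multiplier are made up): a chain that has activated a higher field accumulates weight faster, so it can outweigh a lower-field chain even at lower raw difficulty.

```java
public class WorkSumSketch {
  // Toy model: each block contributes its difficulty times a per-field
  // multiplier (hypothetical numbers, not the real weighting).
  static long chainWeight(long[] blockDifficulties, int activatedField) {
    long multiplier = 1L << activatedField;  // assumption: weight grows with field
    long sum = 0;
    for (long d : blockDifficulties) sum += d * multiplier;
    return sum;
  }

  public static void main(String[] args) {
    // A field-7 chain with lower raw difficulty still outweighs a field-6 one.
    long f6 = chainWeight(new long[]{100, 100}, 6);
    long f7 = chainWeight(new long[]{60, 60}, 7);
    System.out.println(f7 > f6);
  }
}
```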
That makes sense. As long as it's the activated field, my concern is void.
right. The weighting is based on activated field, not which field was used for a particular block.
@Shoots did you try the amazon skylake xeons? at least the ones on digital ocean were rather performant, but i suspect there is less resource contention on DO than on AWS
I have, but they only have 192gb of mem
and for some reason my miner uses 210gb
unless I run the latest version of the miner, then it maxes out the mem usage at 145gb, but the hr is much lower
and on top of that there's no available spots anymore
who is running Vauxhall? Would be nice to see miner list, pool hash, net hash, and time since last block on the website
agree
@Shoots yeah, i suppose @Fireduck will eventually fix the miner to not require obscene amounts of JRE tweaking
yeah tweaking the java parameters is beyond me
2% fee on protopool? Surprised there are that many miners for it
block density matters more early on
INFO: Send new work to 13 workers. Keeping 10, Dropping 3
what does the "Dropping" mean?
i didn't have it yesterday. but now i have 1-5 dropping in nearly every message.
network issue?
look at the code, probably something like 'did not hear back from it in a while'
drop_count++; //logger.info("Error in send work: " + t);
rejected shares?
INFO: Work block load error: java.lang.RuntimeException: Unable to select a field of at least 7
ah. damn. i think my snowfield is broken.
curious, so definitely not just an old field?
no. after closer inspection it was a wrong sym link.
if I create a swap file in ubuntu will the miner use that without enabling hybrid mining?
drop_count about clients that are removed due to the links already being broken
noted by trying to send them work resulting in errors
so it means they have disconnected and are gone
but grpc doesn't really tell us that until we try to send something
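[editor's note] A minimal Java sketch of that pattern (hypothetical, not the actual PoolMiner source; `WorkSender` and `Worker` are made-up names): the pool only discovers a dead gRPC link when a send fails, so dead clients are counted and pruned while broadcasting work.

```java
import java.util.ArrayList;
import java.util.Iterator;
import java.util.List;

public class WorkSender {
  interface Worker { void sendWork(String work) throws Exception; }

  // Broadcast work to every worker; drop the ones whose link is already broken.
  static int broadcast(List<Worker> workers, String work) {
    int dropCount = 0;
    Iterator<Worker> it = workers.iterator();
    while (it.hasNext()) {
      try {
        it.next().sendWork(work);
      } catch (Exception t) {
        dropCount++;   // send failed: the client disconnected some time ago
        it.remove();
      }
    }
    return dropCount;
  }

  public static void main(String[] args) {
    List<Worker> workers = new ArrayList<>();
    workers.add(w -> {});                                // healthy
    workers.add(w -> { throw new Exception("gone"); });  // disconnected
    workers.add(w -> {});                                // healthy
    int dropped = broadcast(workers, "new-work");
    System.out.println("Keeping " + workers.size() + ", Dropping " + dropped);
  }
}
```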
@Shoots see vm.swappiness
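[editor's note] A hedged example of that knob (the file name is just a common convention, and 10 is an arbitrary low value, not a recommendation from this chat): a low swappiness tells the kernel to only swap under real memory pressure, which helps keep the memfield resident.

```
# /etc/sysctl.d/99-miner.conf (hypothetical file name)
# Only swap under real memory pressure, so the in-memory snowfield stays resident.
vm.swappiness=10
```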
swap file seems to lock up the vm
im trying hybrid mining now
I really want to get this m5d instance running, it has the highest hash per $, but has a tad too little of memory
how much does it have?
192
my miner seem to use about 210gb
strange
when I installed java 10 and the newest miner last night it was using only 145gb
but something wasnt right cause the hr was way too low
now Ive compiled from source with openjdk and trying hybrid mining with 180gb set as my cache size
@Shoots if you're like 10G short, try allocating 50G to zram
@Shoots debian 9, default-jre-8-headless, and i'm seeing ~135G ram use for memfield (though bumped -Xmx to 200G as why not, rather have it fire the GC less often)
@Shoots in a nutshell zram is a way of trading spare cpu cycles for 'more ram' - even with ram mining snowblossom is not actually cpu limited
I wonder if the number of cores or threads impacts the ram usage?
it does
try to find the minimal thread count where the next one over does not bring you any meaningful extra
Oh ok
Its a 48c VM I have it set to 96
start with 1, check where the ram use is after 1min and 5min are ~equal, double, repeat until gain is ~naught
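[editor's note] A toy Java sketch of that doubling search (hypothetical, not part of the miner; `measureRate` stands in for reading the observed rate once the 1-min and 5-min numbers agree): double the thread count until another doubling stops paying off.

```java
import java.util.function.IntToDoubleFunction;

public class ThreadTuner {
  // Return the thread count where doubling again adds less than ~5% hash rate.
  static int tune(IntToDoubleFunction measureRate, int maxThreads) {
    int threads = 1;
    double rate = measureRate.applyAsDouble(threads);
    while (threads * 2 <= maxThreads) {
      double next = measureRate.applyAsDouble(threads * 2);
      if (next < rate * 1.05) break;   // gain is ~naught, stop
      threads *= 2;
      rate = next;
    }
    return threads;
  }

  public static void main(String[] args) {
    // Toy model: rate scales linearly, then saturates (e.g. memory-bandwidth bound).
    IntToDoubleFunction model = t -> Math.min(t, 24) * 50.0;
    System.out.println(tune(model, 96));
  }
}
```

With this toy model the search stops at 32 threads: the step from 16 to 32 still gains (the model saturates at 24), but 32 to 64 gains nothing.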
those are vcores so 48 or 24 more plausible sane thread counts
Getting 0h with hybrid miner
you have 48 potentially-shared-with-other-guests hyperthreads from the hypervisor
Yeah
but try memfield and 12, 24, 48
start with 12, to see if you get it going at all
usually in high performance computing contexts `cloud vcore count / 4` ~ `real cpu count equivalent`
why am I not hashing I wonder
but borderline impossible to guess how the memory channels are provisioned for amazon guests, if they do certain things not optimally your snowblossom performance can actually be up to dumb luck in how it just happened to map when provisioned (or even based on hypervisor load)
well that's just a miner bug, i'd say
yeah nothing to worry about lol
it seemed to cap out at 138gb or ram usage
but 0h
with what exact config?
```
# network # (snowblossom/mainnet, teapot/testnet, spoon/regtest)
network=snowblossom
#node_host=
pool_host=http://snow.protopool.io
#node_port=23380
#pool_host=http://pool.snowblossom.cluelessperson.com
# the location of "snow" fields for mining
snow_path=snow/mainnet
# automatically generate mining snow files.
# CAUTION! INTENSIVE! You may wish to torrent instead.
# torrents: https://snowblossom.org/snowfields/index.html
#auto_snow=true
# pick an address (at random for now) from this wallet to mine to
#mine_to_wallet=wallets/mainnet
# or mine to address
mine_to_address=snow:
# number of cpu threads to commit to PoW
threads=96
memfield=true
memfield_precache_gb=180
```
i'm running fine with
```
memfield=true
mine_to_address=<redacted>
network=snowblossom
pool_host=http://snowplough.kekku.li
snow_path=snow/mainnet
threads=24
```
~135Gb ram use
6 core E5
whats your hr with 24 threads?
850k, which is what i'm expecting as well
thats low compared to what I get though, with 24 threads I get about 1.2mh
24cores I should say
i had a script iterate from 1 to 128 threads and 24 was the sweet spot
```
$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                12
On-line CPU(s) list:   0-11
Thread(s) per core:    2
Core(s) per socket:    6
Socket(s):             1
NUMA node(s):          1
Vendor ID:             GenuineIntel
CPU family:            6
Model:                 63
Model name:            Intel(R) Xeon(R) CPU E5-1650 v3 @ 3.50GHz
Stepping:              2
CPU MHz:               3599.975
CPU max MHz:           3800,0000
CPU min MHz:           1200,0000
BogoMIPS:              6983.10
Virtualization:        VT-x
L1d cache:             32K
L1i cache:             32K
L2 cache:              256K
L3 cache:              15360K
NUMA node0 CPU(s):     0-11
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush dts acpi mmx fxsr sse sse2 ss ht tm pbe syscall nx pdpe1gb rdtscp lm constant_tsc arch_perfmon pebs bts rep_good nopl xtopology nonstop_tsc aperfmperf eagerfpu pni pclmulqdq dtes64 monitor ds_cpl vmx smx est tm2 ssse3 sdbg fma cx16 xtpr pdcm pcid dca sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand lahf_lm abm epb invpcid_single kaiser tpr_shadow vnmi flexpriority ept vpid fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid cqm xsaveopt cqm_llc cqm_occup_llc dtherm ida arat pln pts
```
older hardware, in line with what i expect
oh its 12 cores, thats pretty good actually for 12 cores
thats your own hardware?
6 cores
well thats really good for 6 cores damn
it's not the cores, it's not the memory channels, it's the ratio of cores to memory channels
i have 6 cores over 4 memory channels
then the generational gains are like 5% per generation for the processor cores and about 3% for memory
so i could get 1M on a modern xeon and something like 2,5M on an AMD EPYC
potentially 1,6M on the new i9-esque 6 channel stuff
so if you find an old intel E7 for cheap, give that a spin, that should always land you north of 1M
but those processors usually cost tens of thousands when new, so they're probably going to be used to the bitter end by whomever needs to buy them (and they're actually usually bought more for the reliability than the throughput)
https://ark.intel.com/products/82765/Intel-Xeon-Processor-E5-1650-v3-15M-Cache-3_50-GHz Intel® Xeon® Processor E5-1650 v3 (15M Cache, 3.50 GHz)
and this processor is from 2014
old hardware is rather viable
@Shoots but i'm curious as to what you get with 12 threads and memfield
@Shoots that's what i'd start with given the vcores / 4 shorthand rule
what do you mean by shorthand rule?
just by using 12 threads?
no, if you use any cloud services for anything performance critical and you compare to real hardware, dividing the cloud 'core count' by four makes it more apples to apples in what one can expect
well Im getting 3.4mh with 96 threads right before it runs out of mem
that means the backing system is a multi socket system and you're getting really lucky in how the memory of your VM is spread over 12 channels (or 24 channels if it is a quad socket system)
its probably cause its latest gen
probably dual socket and mostly memory bandwidth idle by your neighbours, if i estimate about 300kH/s per memory channel, that's 3,6M
usually the truth about hashes per memory channel maximums is somewhere between 200kH/s and 400kH/s
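[editor's note] A back-of-envelope Java sketch of that estimate (the class name and the 300 kH/s midpoint are just illustrative choices within the 200-400 kH/s range quoted above):

```java
public class ChannelEstimate {
  // Rough hash rate: memory channels times per-channel throughput.
  static double estimateHs(int channels, double perChannelHs) {
    return channels * perChannelHs;
  }

  public static void main(String[] args) {
    // Dual-socket guest: 12 channels at ~300 kH/s each -> ~3.6 MH/s,
    // in line with the 3.4 MH/s observed on the 96-vcore VM above.
    System.out.println(estimateHs(12, 300_000) / 1_000_000 + " MH/s");
  }
}
```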
in my 'wizard' page i'm lowballing that to 200, but i really need to format that whole thing better
I wonder if it will use more memory cause Im running in a tmux session
no
ok cause I was using damn near 130gb with field 6
trying 12 threads now
@Shoots I could write a script that experiments with different values and charts out the most efficient settings
@Clueless rather make the miner autotune the thread count at runtime
it would be nice if it was a separate utility or if it asked you if you wanted to run it the first time
i'll actually take a stab at that next week most likely, i've not touched java since 1.4 EE was the hot new thing just out
@Shoots i'd rather it always warms up the system carefully and speeds up slow
@Shoots not starting from 1 thread, but from a sensible ballpark, like the system thread count
or quarter that and then doubling until matching and then trying +50% or somesuch
if i actually bother, can figure it out as i go
the most naive approach is of course to just let it bump the thread count up one at a time, but that'll take a very long time to get hot on large systems
like 3 hours or so
miners don't even seem to have patience for the torrenting of a snow field, memfield loading or getting the first stable 5min rate :slightly_smiling_face:
it would need to use a lower field to do it faster
i dream of the fields being split into 1G files which can be shoved into ram on demand until hitting the heap limit (and a script which sets the heap limit upon launch to be a bit shy of the available ram)
is there anyway I can see whats in ram and see whats eating up the extra 90gb overtop of the field?
see ptrace, strace
```
$ strace -p 2031
strace: Process 2031 attached
futex(0x7f40126309d0, FUTEX_WAIT, 2033, NULL
```
seems there is a java specific one: https://docs.oracle.com/javase/7/docs/technotes/tools/share/jmap.html that should feed into any vm visualizers
dumping now
from a different vm
12 threads already up to 150gb
```
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 7380 root      35  15  0,206t 0,136t  17820 S  1180 55,5  27:04.99 java -XX:+UseParallelOldGC -Xmx200G -jar PoolMiner_deploy.jar miner.conf
```
what number do you track? RES is the one which matters
im going to try your command
the parallel old gc, when used alone, decreases performance
that's just the random test run i'm on currently
but skip that one
(or maybe, since you have a different java, try it? dunno at this point)
Im running openjdk 8
cause thats what I needed to build from source
i build on a mac
but on debian 9, the package `default-jre-headless` works the best for me
and the best i get with that so far is `nice -n 15 java -Xmx200G -jar PoolMiner_deploy.jar miner.conf`
then oracle linux with oracle java can get me more, but requires tens of parameters for the jre for ultimately very little gain
theres gotta be something I can do to reduce this memory issue
could also be that nicing the miner, perversely enough, gives one slight extra oomph as it gets less in the way of system things
any chance you could compress your dir for me?
and share
now, this is getting into dangerous country as i could just give you any malware
I trust you
i'm half sure it's something to do with the fact we use a different jre, though
bazel should output the same `_deploy.jar`
I tried building with java 10 and got an error
uninstalled java 10 and installed openjdk and it worked fine
also weird that when hybrid mining I get 0h/s
Does it show 0 until you are fully done loading?
no, it does not
hmm, or at least not when memfielding, don't recall about hybrid, can try
mem dump to a text file used up my disk space and ran out :S
well, it is an actual mem dump, that's the full memory content
yeah woops, just wanted to print out what was in it
the size of things is a bit nontrivial, yeah
@Shoots actually try memfield with `-Xmx180G -Xms180G`
gets out of memory error
tried that last night
what's using the memory before you start java? if you have 192, that should still leave 12 free
and as said, try that with threads=1
to see if that starts
it shows up as 185gb
if I dont set xmx higher than java uses I get out of mem error
which field do you try to mine with?
7 should fit
8 would not
try with the old gc from above and both xmx and xms and threads 1
it almost sounds like your memfield is being loaded twice
also the threads 1 will give me a more exact figure on where the per channel bandwidth lands on those systems
for ram ballparks when 1min and 5min agree close enough that's a result
plotted numbers and i pay one eurocent per hour per kilohash on my infra for mining
no, that's actually not right at all
yeah, time to sleep :smile:
(failing way too hard at excel references)
hmmm I had a partial field 8 in my snow/mainnet folder from one time I accidentally had autosnow enabled, I wonder if thats what was causing my issue
when I would launch the miner I noticed it says building field 8, but then never continues to build it.
cause I have autosnow disabled now
that could explain the 0 hash
@Shoots actually drop the memfield parameter, drop the xmx, just let it read from the disk and let fscache slowly take over
Don't forget to fprot the tarball
Is fscache automatic or do I have to enable it?
also if I still have snowfield 6 will it try to load that into mem?