@Shoots, what is your version of java?
I've tried 8, 10 and openjdk 8
I'm using openjdk 8 right now
you had the slower hashing with 10 as well?
and to confirm: this is with memfield=true and without setting anything for the precaching?
Correct, all versions of Java were the same. Tested different versions of PoolMiner; 1.0.7+ uses less memory but also hashes slower.
Made some notes for a new mining setup I am thinking about: https://github.com/snowblossomcoin/snowblossom/wiki/Future-Plans
Extending the idea of hybrid mining to more flexible ends
well, I tried to repro the slowness, but I get the same rate I always got with memory mining. if you take some stacktraces (jstack <pid>) and post them I'll take a look and see if anything looks fishy
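e.g. something like this (the pgrep pattern and timing are just examples, assuming PoolMiner is the only matching java process on the box):
```
# grab a few stack traces of the running miner, a few seconds apart
PID=$(pgrep -f PoolMiner)
for i in 1 2 3; do
  jstack "$PID" > "jstack-$i.txt"
  sleep 5
done
```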
getting them right now
dammit this guy is hammering the network with hash now snow:dwmve86sywjhk3xsfznj2wkjjv9j7rnc6z7afgwv
@Tyler Boone this is a different VM, it's only 32c, but it has the same experience. v1.0.6 PoolMiner gives me 1.75 MH; v1.1.2 PoolMiner gives me 1.48 MH
1.0.6 uses ~210 GB of memory; 1.1.2 uses ~155 GB of memory
"MinerThread" #107 daemon prio=5 os_prio=0 tid=0x00007f50c4cea000 nid=0x1ae3 runnable [0x00007f13484c3000] java.lang.Thread.State: RUNNABLE at snowblossom.miner.PoolMiner$MinerThread.runPass(PoolMiner.java:324) at snowblossom.miner.PoolMiner$MinerThread.run(PoolMiner.java:394)
I see a bunch of those, and I have no idea why that would be common...
in which version?
oh, it's in both
damn, one of my itty bitty miners found a block for the pool. Shoulda solo mined :stuck_out_tongue_closed_eyes:
I thought people would like that block counter
@Fireduck, did you look at that stacktrace from Shoots?
I looked at the code and it's nonsensical in my eyes for that stacktrace to appear so many times
but you've been doing java a lot more than me in the last 5 years. maybe there is some terrible gotcha?
on merkle_proof.readWord?
I'd expect things to be inside readWord
I know right?
I mean, readWord isn't synchronized
I think top and java both don't have a good way to represent waiting for a page from RAM
ok, on the next gen mining plan, I would love it if we could avoid doing the memory mining in the java process
linux has really nice shared memory fs and zram or whatever @Rotonen is always on about
Windows is the trouble there
all this next version mining talk is making me moist
@stoner19 how were you able to see you found the block?
`INFO: Shares: 31 (rejected 0) (blocks 0)`
counter from poolminer
oh, I'm still using the old version
cause you know
hr issues
Windows can do RAM disks. the problem with relying on that kind of stuff is you have to have separate stuff for each platform
or rely on users doing it themselves (hint: disaster)
limit windows users to storage mining only
as punishment
```
[2018-06-28 14:38:54] INFO snowblossom.miner.PoolMiner printStats 15 Second mining rate: 588419.267/sec - at this rate 3.802 minutes per share (diff 27.000)
[2018-06-28 14:38:54] INFO snowblossom.miner.PoolMiner printStats 1-min: 588.168K/s, 5-min: 587.713K/s, hour: 588.002K/s
[2018-06-28 14:38:54] INFO snowblossom.miner.PoolMiner printStats Shares: 288 (rejected 0) (blocks 1)
```
that is counter to the purpose of snowblossom
or they can figure out their own ramfs
but Tyler is right
sounds like a killer way to attract more users
payout from the pool for the block I solved was 0.96. LOL that blows
pwned
no joke
workin for the man
Don't want to mine on Maggie's Mining Pool no more
:heart:
https://www.youtube.com/watch?v=b2F-DItXtZs (Episode 1 - Mongo DB Is Web Scale)
that will make you want to work on a farm
I die
why the hell does java force us to specify a max heap size that is always and forever the max for that process????
what kind of moron designs this shit?
hard to say
The biggest problem with Java is that the designers think they are smarter than the users, and they force their way on people
Larry Wall FTW!
Weren't smart enough to stop the OOP shitstorm
but people can write terrible code in any language
define: "OOP shitstorm"
@Slackbot, what do you do?
@Slackbot, tell me a joke
worthless
@Slackbot help
OOP shitstorm, where people don't type any code without making an interface and a factory
slackbot tells me when people sign up using my subscribe link
Java encourages such behavior
I think java is a fine language, it is the programmers who use it that are wrong
I'm not sure how you would instantiate that factory though without a FactoryInstantiator
and a carry all to move it if a sandworm comes
nothing has ever been so beautiful
so the largest field imaginable is, let's say, 16TB; that would be 16 thousand of these files
pushing the open file limit, but whatever at that point
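though for the record, the soft limit is easy to check and bump (the number here is arbitrary):
```
ulimit -n          # current soft limit on open files for this shell
ulimit -n 32768    # raise it, up to the hard limit
```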
wow `dwmve86sywjhk3xsfznj2wkjjv9j7rnc6z7afgwv` is murdering these blocks
everyone has to run out of money for AWS and google compute eventually, right?
I have
seems others haven't yet. Unless they're planning on just getting their account terminated after maxing it out and not paying their bill.
haha
that's how I roll with my mortgage, so it's probably cool
yeah, it will take them a while to find you
2nd ave NW or whatever, that could be anywhere
so I am thinking rather than decimal chunk numbers I'll do 4 hex digits
.0000 to .FFFF
nothing matters
take out a second mortgage to pay for 48 hours worth of AWS instances
nice
what is an easy way to convert a number into hex in bash?
@Fireduck also ext3 can only do 64k subitems per dir
@Fireduck as in, filecount issues are a thing of yesteryear, no sense worrying
@Fireduck use a python oneliner for that in your script
`printf '%x\n' <decimal>` ?
I dig that
I don't actually know python worth shit
and using bash to just run dd commands to split out chunks
prone to such input maliciousness, but sure
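putting the printf and dd bits together, a rough sketch (field filename and chunk size are assumptions):
```
# split a field into 1 GiB chunks named .0000 .. .FFFF
FIELD=snowblossom.7.snow                                   # example filename
CHUNKS=$(( ( $(stat -c%s "$FIELD") + 1073741823 ) / 1073741824 ))
for (( i=0; i<CHUNKS; i++ )); do
  dd if="$FIELD" of="$FIELD.$(printf '%04X' "$i")" \
     bs=1M skip=$(( i * 1024 )) count=1024
done
```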
we should just rewrite everything in Perl
perl yeah
surprisingly nice stuff in the ecosystem
perl? It has been around long enough, so one would hope
Perl attracted all the best hackers in the 90s and 2000s, and had one of the first really good community library repositories in CPAN
@stoner19 we should start getting more blocks, just doubled up the pool hr
i’m alone on mine and i still got a block
as long as you're hashing it all averages out over time
Which pool?
proto
and what is the average HR now on protopool?
I think it's only around 120 MH right now
ok, I'll put ~ 250MH on protopool
Fatcats
I think I have about 150kh
I think you were one of the first miners - so it's enough)
Should hope I was one of the first
Alex you're the one who keeps jumping on our pool every now and then I assume?
I did it a few times)
we appreciate it, didn't take long to find some blocks now
250 MH, that's about 50-100 servers, whoa :open_mouth:
Or they broke the pow
But I think it is servers
if you broke bitcoin's PoW, how much money do you think you could extract before people figured it out and bitcoin's value crashes to zero?
A fuckton, if not dumb
Mine at like 20% total network rate and keep mouth shut
problem is extracting actual money without someone looking into it
Ah
a single client pulling out hundreds of millions from an exchange is going to attract a lot of attention
pulling in more people increases risk of secret leaking
Yeah, don't mine more than you can trickle out
Probably a million a week
@bl0ckchain I imagine it's more like 150 servers
depending on the instance type
I have 30 running and it's 55 MH
Someone before said they were getting 5Mh/s per server, so 50 of those would be 250
I have 2 at 1.1M/s total
obviously some servers are more equal than other servers
it's 160x32 cores. I get only ~1.8 MH per server
what is 160?
160gb
my 6 core machine gets 660kh when memory mining. I'm guessing your server is hitting memory bus limits
or you forgot to increase the thread count in the config
I have 64 threads in config, and use 180gb ram
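if you want to test the memory-bus theory, a quick smoke test on one box (sysbench availability, sizes, and thread count are assumptions):
```
# measure raw memory bandwidth with as many threads as the miner uses
sysbench memory --threads=32 --memory-block-size=1M --memory-total-size=100G run
```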
160 servers, 32 cores and 180 GB per server
what's your HR per VM? and which version of the miner are you running?
nvm I see now
I can't get 1.8 MH with only 180 GB mem usage, so I'm wondering how you get that. I have an issue with the old version using too much memory and the new version using less but not providing the same HR
in order to get 1.8 MH I have to run 1.0.6, and it uses 210 GB of mem
I run 1.1.2
weird, I wonder why I get such low hr
which version of java?
aws or gcp?
I tried java 8 and 10, same HR. GCP.
hmm, I'm on AWS
it gets up to full speed and then just keeps creeping the memory up and up and up on 1.0.6
on 1.1.2 it uses about 150 GB of mem, but it doesn't fully utilize all the cores; it seems to be bottlenecked by something, as if the threads I've set in the config aren't actually all fully running
I run the miner with these params:
```
./PoolMiner --jvm_flags="-Xms${xms}g -Xmx${xmx}g -XX:+OptimizeFill -XX:+AggressiveOpts -XX:+UseG1GC -XX:+ExplicitGCInvokesConcurrent -XX:+ParallelRefProcEnabled -XX:+UseStringDeduplication -XX:+UnlockExperimentalVMOptions -XX:G1NewSizePercent=20 -XX:+UnlockDiagnosticVMOptions -XX:G1SummarizeRSetStatsPeriod=1" configs/miner-mainnet.conf
```
with xms=174, xmx=174
oh wow ok let me try
oh...
flags you say...
I don't understand anything in Java
so all these params it's just ....
when in doubt, flag it out
which one?
I have the zip. I'd have to compile from the repo to use those flags I think
actually probably not. Just put the flags in the bash script, eh?
it runs with flags: `./PoolMiner --jvm_flags=" " config`
compiling from source gives me a bit better hashrate than the pre-compiled, but still not as much as 1.0.6 for some reason
yeah binary got me only 1.4MH
I actually got quite a bit less from compiling
yeah, it doesn't like the flags `SEVERE: Incorrect syntax. Syntax: PoolMiner <config_file>`
you can't use the same syntax as the pre-built script
it uses java -jar
You guys are all nuts, I love it
this is what Im using
bash script is nothing more than `java -jar PoolMiner_deploy.jar`
you can either run with java -jar filename
or ./PoolMiner --jvm_flags=
will give it a try again. I was trying to put the flags into the bash script
you can put them in the bash script
look at mine above
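to spell out the two styles from this thread (paths and heap sizes are just the examples used above):
```
# 1) plain deploy jar: JVM flags go directly before -jar
java -Xms174g -Xmx174g -XX:+UseG1GC -jar PoolMiner_deploy.jar configs/miner-mainnet.conf

# 2) wrapper script: JVM flags go through --jvm_flags
./PoolMiner --jvm_flags="-Xms174g -Xmx174g -XX:+UseG1GC" configs/miner-mainnet.conf
```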
strange, wonder why mine is being crabby. Will keep trying.
actually I didn't copy and paste the whole thing, it's missing the end
still only getting 1.45mh with 1.1.2 and all those flags
Have you guys tried using shm filesystem rather than memfield?
nope
in fstab:
```
tmpfs /var/shm tmpfs defaults,noexec,nosuid,size=160G 0 0
```
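for anyone following along, the rough steps are something like this (the mount point matches the fstab line; the field filename is just an example):
```
sudo mkdir -p /var/shm
sudo mount /var/shm                      # picks up the fstab entry above
cp snowblossom.7.snow /var/shm/          # copy the field into RAM up front
# then point the miner's field path at /var/shm in the config
```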
oh wait yes I did
and it was taking god awfully long to load into mem so I just gave up
Ha
I restart my miners too often to have them down for an hour copying the field into mem
If you run on gcp I have a bucket with the snowfield
I don't know why it was taking so much longer than using the memfield option
Memfield lazy loads as needed
So probably takes that hour to get to full speed
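if disk is the bottleneck during that ramp-up, one option is to pre-read the field once so the lazy loads hit page cache instead of disk (field name is an example, and note both copies briefly compete for RAM):
```
# warm the page cache before starting the miner
cat snowblossom.7.snow > /dev/null
```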
tiny tiny tiny bit of improvement with those flags. Thanks for sharing @AlexCrow & @Shoots
yes, thanks. I have a ~10% better hashrate now.
1.1 MH vs 1.8 MH, memfield vs `tmpfs /var/shm tmpfs defaults,noexec,nosuid,size=160G 0 0`
@AlexCrow it's not the cores, it's the unthrottled cores-per-memory-channel ratio
and striping shm guaranteedly across channels on the cloud is quite the dark art
YMMV
1) linux does not yet have NUMA-aware shared memory
2) cloud vendors do not expose translucent NUMA backendy stuff
Anyone who wants to bleed on the edge of new tech, download this: https://snowblossom.org/snowfields/snowblossom.7.chunk.torrent
splits the field into 1gb chunks
hopefully the code to read that will be ready in the next few days
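if anyone wants to sanity-check the chunks against a full field they already have, something like this should work (filenames assumed):
```
# fixed-width uppercase hex suffixes sort in order, so the glob
# concatenates the chunks correctly
cat snowblossom.7.snow.* | sha256sum
sha256sum snowblossom.7.snow     # should print the same digest
```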
sweet
so I used those flags and it didn't improve my HR on 1.1.2, butttt it reduced the memory usage on 1.0.6 so I can now run my 48-core VM that has 185 GB of memory, should get better hash per $ now
@Fireduck neat