hey, so I got 2x 128 GB SSDs in RAID 0, finished torrenting the 128 GB field 7 and got it into the folder, then added the pool to config/pool-miner.conf. basically, these are the uncommented lines: ```network=snowblossom log_config_file=configs/logging.properties pool_host=pool_host=http://snow.vauxhall.io snow_path=snow mine_to_address=<address> threads=8``` _sidenote: miner.sh is working, showing the average speed and the time to find a block_ but when running ./pool-miner.sh I get an error. did I miss something? like a port or something else?
i mean, i tried protopool first but it was the same
are you running node while trying to mine?
also maybe ask in #mining too
yeah, running the node, else miner.sh probably wouldn't work either (and it is working fine)
do you have a comment starting with //? replace with "#"
pool_host=pool_host=...
once is enough
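(for reference, the uncommented lines from the config above with the duplicated key removed; the address placeholder is left as-is:)
```
network=snowblossom
log_config_file=configs/logging.properties
pool_host=http://snow.vauxhall.io
snow_path=snow
mine_to_address=<address>
threads=8
```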
thanks
yeah, seems to work now, haha, i can be really dumb sometimes UwU
ty :heart:
you're welcome
doot
Can anyone tell me if there's a specific version of Java that uses less memory on Ubuntu?
Also, does the newer version of the miner require less mem? I have Xmx set to 192 and it's running out
192 mb or gb?
-Xmx192g
that's what you've got
and you can run free -m or htop and actually see 192 gigs of memory being used up?
My immediate guess is perhaps you forgot the 'g'
I can run this thing with -Xmx140g using standard openjdk available on ubuntu
definitely have the g
Im using an old original version of the miner, going to try the altest
latest*
yeah, for some reason these AWS instances require setting Xmx much higher than the 140g that works for you
breaking the hell out of the explorer for a minute
once it resyncs we will have history on addresses
is explorer code open source
yep
in main repo, called shackleton
shackleton ?
yea ok
because it is a snow explorer
well, the old version of the miner is much faster, I've determined. a 48c VM gets 3.4 MH/s on the old version and 2.1 on the new version. memory usage seems to cap out at 145 GB on the new version where the old version will use over 200 GB
very strange
explorer now supports searching by block number
I wonder if the memory fix for windows caused an issue with memory usage in linux?
@Shoots certainly possible
the problem with something like the miner is optimization on top of optimization soon gets very strange
If you write up your findings with as much detail as you can in a github issue we can take a look
are there any new settings I can try to configure?
in mining tuning wiki page
I'm wondering if there are some configs on by default that aren't in the readme
@Shoots which GC do you use for old and new heap objects, and with which parameters
I've tried Java 8 and Java 10, Java 10 using G1. doesn't make a difference. The old miner just uses more memory and gets a higher hashrate
now to determine which version of the miner caused the issue
@Shoots no, you can tweak which implementation is used for old objects on the heap and which for new objects, and then also the parameters of the chosen implementations
@Shoots there’s an asymmetry of expected object lifetimes as far as i’ve observed
@Shoots also can adjust what constitutes old
I'm not sure how to do that, can you help?
read the jre docs
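(for reference, the knobs mentioned above are standard HotSpot flags; the values below are illustrative placeholders, not tuned recommendations:)
```
# choose the collector(s): -XX:+UseG1GC, -XX:+UseParallelGC, or (Java 8) -XX:+UseConcMarkSweepGC
# size the young generation: -Xmn<size> or -XX:NewRatio=<n>
# adjust what counts as "old": -XX:MaxTenuringThreshold=<n>
# e.g., with whatever jar/main class your miner.sh already invokes:
java -Xmx140g -XX:+UseG1GC -Xmn4g -XX:MaxTenuringThreshold=4 ...
```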
What's the flag for hybrid mining? I don't see it in the miner config
https://snowplough.kekku.li/wizard/howto/ A quickstart wizard to help you get started with mining Snowblossom.
@Shoots https://snowblossom.slack.com/archives/CAS0CNA3U/p1529854212000136 ``` double precacheGig = config.getDoubleWithDefault("memfield_precache_gb", 0); ```
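(so judging from that snippet, precaching part of the field into RAM is driven by the miner config; the value below is just an illustration, sized to whatever field and RAM you have:)
```
memfield_precache_gb=128
```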
Thanks guys
Do you get 1,000,000/sec on each computer?
1M .. 2M is the expected ballpark for RAM on quad channel and up systems, 3M with EPYC, and up to 5M on prohibitively expensive enterprise hardware
I am outsourcing my mining. Spending too much time screwing with moving hardware around when I should be programming.
By which I mean I am shoving hardware at @Clueless
INFO: Generating snow field at 4262.25 writes per second. Estimated total runtime is 244.92 hours. 0.23 complete.
good old field 11.
We shall see if I lose power again before it finishes.
is there no checkpointing?
no, a checkpoint would involve a complete copy of the working snowfield
so in this case that would be 2 TB
plus the state of the various prngs
plus if I screw it up everyone will laugh at me
got half way done last time and someone from the power company came and pulled out my meter
you could have built the algorithm in a way that enabled checkpointing
for example by making idempotent sections which write values into some set of areas and don't read any from that same set
the entire point of the algorithm was to make checkpointing very hard
different kind of checkpoint
not very different
totally different
you aren't understanding me
let's say you want to create a checkpoint every 10,000 writes.
have it such that those 10,000 writes don't read from any of the areas those 10,000 writes are going to
it still reads from the rest of the snowfield
so you need the entire snowfield up to that point
so yeah, you could "checkpoint", but the checkpoint requires data the size of the entire snowfield
obviously the first pass would have no checkpoint
wouldn't the checkpoint be the size of the entire file (which I can do now)?
yes, exactly
I can checkpoint it now, if I want to store a complete copy of the file and all the rng state
I thought you just said if you lose power you have to start over.
yes, I do
well, there you go.
I mean I could implement checkpointing without changing the algorithm
sweet
I won't
but I could
I'm not convinced that you can
I love you too
BURN
heh
the point when it brings all the threads together for the lock step synchronization would make a natural checkpointing spot
what if some writes happen to the snowfield after you save the rng state?
is it guaranteed that starting from that last lock-step would not read any of those written values?
for example, if you read a value before overwriting it, you are most likely screwed.
I would have to stop all execution, save file and rng state and then resume
At the point I have the synchronization lock all writes are stopped
we'll have to go over this in person I think
probably
ACID, snow - oh my
https://github.com/snowblossomcoin/snowblossom/blob/master/lib/src/SnowFall.java#L191 - at that point all writers are not doing anything ``` syncsem.acquire(MULTIPLICITY); ```
and the thread running that line is the only thing doing anything
I mean after that returns
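(a purely hypothetical sketch of that stop/save/resume step, reusing only the names visible in the snippet above and otherwise made-up ones — `snowChannel`, `prngStates`, `checkpointDir` — with `java.util.Random` standing in for whatever PRNG SnowFall actually uses:)
```java
import java.io.ObjectOutputStream;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.List;
import java.util.Random;

class CheckpointSketch {
  // Hypothetical: called by the coordinating thread after
  // syncsem.acquire(MULTIPLICITY) returns, i.e. while every writer thread
  // is parked at the lock-step point and no writes can race the snapshot.
  static void checkpoint(FileChannel snowChannel, List<Random> prngStates,
                         Path checkpointDir) throws Exception {
    snowChannel.force(true);  // flush the working snowfield to disk
    try (ObjectOutputStream out = new ObjectOutputStream(
        Files.newOutputStream(checkpointDir.resolve("prng_state.bin")))) {
      out.writeInt(prngStates.size());
      for (Random r : prngStates) {
        out.writeObject(r);  // java.util.Random is Serializable
      }
    }
    // plus a full copy of the snowfield file itself, which is the
    // whole-field-sized cost described earlier
  }
}
```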
anywhere to see nethash?
it is an estimate anyways
hard to be sure
@Tyler Boone and to answer your question, there is a check within each step to make sure a page is only touched once in each step
for it to be checkpointable, the written pages in a step cannot be read in the same step
when a page is involved in a step, it is read and written in the step
then you're screwed
you are drunk
you're even
any process can be checkpointed
you stop doing things, save the state somewhere and resume doing things
yes, but we want to discuss checkpoints which don't massively slowdown processing
oh, without massive slowdowns
that is a different weasel
could actually do it with zfs or btrfs snapshots
every n something
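(e.g., assuming the snowfield being generated lives on a ZFS dataset — the dataset name here is a placeholder:)
```
zfs snapshot tank/snowgen@writes_10000
```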
anyone know what this is about? https://pastebin.com/XpG1Zbzt
nevermind. install process just doesn't make `/logs/` dir by default
`//Limited to 100 bytes, so short letters to grandma` nice one @Fireduck :slightly_smiling_face:
maybe mkdir logs?
@Fireduck yes I did that. Thanks. As I said, the build process just doesn't build the logs directory so I had to do that manually.
which install process did you use?
I did ``` git clone https://github.com/snowblossomcoin/snowblossom.git cd snowblossom/ bazel build :all ```
ok
would you like me to open an issue on github for this? so that the code can be fixed to create the `/logs/` dir?
go for it
will do so shortly.
thanks
GCE nvme is terrible
like 10kh/s
Anyone tried the AWS NVMe?
get more hash rate from a taco
an m5d instance might be alright, but there are no spot instances available
just be careful not to store the snowfield on an ebs volume - they charge you per io-operation
@Fireduck all cloud ’ssd’ or ’nvme’ are gonna suck, they just use those as a shorthand for ’gen + 1 SAN’
@Shoots there are if you contact sales and commit to a flexible fleet of 10 .. 100 for a year, but... :D
@dystophia lots of instance types have EBS-only disks
Just don't mine directly from an EBS volume
but loading snowfield from EBS into RAM is not too expensive right
right
mining directly from it is slow anyway, just helping others avoid this mistake
thanks
talked to an engineer at Google. it is working as intended: when you provision SSD (local or network) you are given an IOPS budget
and it gets that budget pretty well
but it isn't amazing
@Fireduck as said, the ’nvme’ there is a marketing stamp on sustained 4k read iops - i suppose the next one they’ll call 3d nand or optane - raw engineering numbers are not salesy
sure. I thought it was worth checking out regardless.
GCP should be able to get up to 40K read IOPS with local SSD...
@Rotonen What kind of computer are you?
@Rotonen is a real boy!
(that's a Pinocchio reference, for those not in the know)
@Ninja hopefully a lockstep mainframe
I'm a T-1000
There is 1M~2M???
https://www.youtube.com/watch?v=J6d1K7YM7IA YouTube Video: Jonathan Coulton- Todd the T1000
no, a proper mainframe would probably blow past 10M, but no one wants to afford that for snowblossom
I used to mine on http://snowday.fun pool, but haven't received any share since yesterday. Is there a problem with the pool?
(and yes, I am using field 7)
but any quad channel system should give you ~1M
@Johannes if the pool has not found a block, no rewards get paid out - people are swapping between pools from time to time for whatever reason
The equipment is very large
hmk, just wondering as the pool found blocks pretty regularly, and since the diff is down since field.6...
is there any way to see which pool has what hash-rate?
only if the pools disclose that and only if you believe what they report
i mean it's estimates on estimates and no one prevents anyone from lying
and bigger hashrates currently draw a crowd
i'm not suspecting anyone of bumping their numbers, but FYI
it's pretty obvious from the block explorer which pools have the miners currently
ok thanks