There we go, field 8!
difficulty (avg): 41.003 (40.799)
not yet. hmm.
Heh, I forgot the details
But it will do its thing soon
what are the details? :slightly_smiling_face:
When it shows as next field activated
Time to ask the code
5GH/s
`if (prev_target_avg.compareTo(next_field.getActivationTarget()) <= 0) { bs.setActivatedField( field + 1 ); }`
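(A rough sketch of how that check reads; only the comparison itself is from the quoted snippet, the wrapper method, types and names below are assumptions for illustration.)
```java
import java.math.BigInteger;

// Sketch only: the comparison mirrors the quoted snippet, everything
// around it (wrapper method, parameter names, types) is assumed.
public class FieldActivationSketch {
  // Targets are inverse to difficulty: once the averaged target drops to or
  // below the next field's activation target, the next field is activated.
  static int nextActivatedField(BigInteger prevTargetAvg,
                                BigInteger nextFieldActivationTarget,
                                int currentField) {
    if (prevTargetAvg.compareTo(nextFieldActivationTarget) <= 0) {
      return currentField + 1; // e.g. field 7 -> field 8
    }
    return currentField; // threshold not crossed yet, keep the current field
  }
}
```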
five giga, someone is on a death spurt
This is the second value, 40.803 currently
still some time left until field 8 activates
soon enough
like another day
depends on the hashrate growth
i guess around 12 hours if the hashrate stays on that level.
looking forwards to the hashrate crash
Field 7 722
it will crash hard
unless someone hijacked a small-medium data centre
oh well, then
they are still using field 7 for most of the mining
depends on if they are able to switch to field 8
(I doubt it)
could be just orchestrating around cloud credit offerings like the previous spurt seemed to be
google and alibaba are at least pretty generous
I'll need to retool for field 8 over here
Doubling of the hash rate in the last 18 hours. Geez. Doubling of the price would be nice.
I think it'll be healthy to go to a higher field
I expect it to wash out some guys; it's harder to find cloud servers with over 256 GB
At a reasonable cost
Arktika!
low power rigs
Cheeseburger spiders
That odroid h2 looks interesting
yes. i can't wait for them to be in stock.
i have a couple of hc1 and xu-4 and they are fun to play with.
Checking up on Snowblossom.8 Seeds
6 seeds, 1 peer, all good
@Fireduck, wouldn't you have to move your Arktika servers onto their own private network? It would use a lot of bandwidth having to hop networks, wouldn't it?
I run it on my own lan
If your cloud provider has a high-capacity interconnect and doesn't charge for it, you are good
I am wondering how realistic Arktika is as a hosted solution
As long as you keep it in one data center it should be fine
Something to think about for the future. I have one more node to add before my lans are maxed.
hash rate is up almost another 10% in roughly the last hour or so
I'll delete the snowfield 7 files from my seednodes once it switches
there is no point in still downloading them
six giga
so someone was serious `difficulty (avg): 41.220 (40.988)`
cue confetti
oh it moves ever so slow, oh well `(40.990)`
and bugbear just kicked in
I guess the mainnet is down.
it is not
my node is unable to sync data....
it's just that the last block is quite old
give it some time to adjust difficulty
Me too.....
the nethash went down as the new field kicked in and the block density is going to be lower until the difficulty adjusts
this is expected, keep mining
it'll probably take like a full day to adjust down, which is good, as it creates less of a wolf pack situation at field switches
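(A generic sketch of a slow, averaged difficulty retarget, not Snowblossom's actual algorithm; it just illustrates why a long averaging window takes on the order of a day to absorb a hash-rate swing.)
```java
// Generic illustration only (not Snowblossom's actual retarget code):
// spreading the correction over a long window means each block only nudges
// the difficulty, so a sudden hash-rate change takes many blocks to absorb.
public class SlowRetargetSketch {
  static double adjustDifficulty(double currentDifficulty,
                                 double avgBlockTimeSeconds,
                                 double targetBlockTimeSeconds,
                                 int windowBlocks) {
    // ratio > 1 means blocks are arriving too slowly (hash rate dropped)
    double ratio = avgBlockTimeSeconds / targetBlockTimeSeconds;
    // apply only the per-block share of the full correction
    double perBlockCorrection = Math.pow(ratio, 1.0 / windowBlocks);
    return currentDifficulty / perBlockCorrection;
  }
}
```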
but yes, that is a flipside positive
Yep. Hopefully the slowness to react will be helpful when snowblossom is forked into snow cash, lightning snow, yellowsnow and of course sdv (ducks vision).
I might keep the miners hopping around slow
I can help you produce yellowsnow
Everyone has a skill to bring to the table
lol
@Tilian @Fireduck As expected, they prefer us to take one bip44 index, use 1 for testnet, and use another path entirely for identity stuff
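(For context, a minimal sketch of the BIP44 path layout being discussed; the mainnet coin type below is a hypothetical placeholder since the registered index isn't given here, while 1 is the conventional SLIP-44 testnet value.)
```java
// BIP44 path layout: m / purpose' / coin_type' / account' / change / index
// COIN_TYPE_MAIN is a placeholder, not the actual registered SLIP-44 index;
// SLIP-44 reserves coin type 1 for all testnets.
public class Bip44PathSketch {
  static final int PURPOSE = 44;
  static final int COIN_TYPE_MAIN = 9999; // placeholder, hypothetical
  static final int COIN_TYPE_TESTNET = 1;

  static String path(int coinType, int account, int change, int index) {
    return String.format("m/%d'/%d'/%d'/%d/%d",
        PURPOSE, coinType, account, change, index);
  }
}
```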
ok, that is fine
@Fireduck never enable dedup on ZFS, ever.
like, no
big no no
I use it on my backup server
it is great
the best
top kek
well, it impacts IOPS so much, it's causing my entire VM stack to crash on every torrent move operation
yeah
you tried to bring the world of IO into one file system
and a really complex file system at that
@Clueless you're just running it with less RAM than it is designed for
the dedup table is 320 bytes per block
48GB RAM :P
also do not use 512 byte block devices
I suppose I could double that
4096 blocks help a ton
4K blocks
native or emulated?
and ashift=12 ?
native
no ashift
redo
you'll cut the dedup table cost by a factor of 4
@Rotonen how do you suggest redoing it? :P
drop the data or get equivalent hardware :stuck_out_tongue:
@Rotonen the good news is I only had dedup on while I torrented snowblossom fields. so deleting them should get rid of the dedup table size and overhead
dunno about ashift
ashift is only settable at pool creation time
that's the block size for the pool
A filesystem is just a database.
it defaults to 9, which is 512 bytes, 12 is 4096 bytes
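(A back-of-envelope sketch using the ~320 bytes per dedup-table entry figure from above and the chat's per-block framing; the 256 GB data size is a hypothetical example, and real DDT entry sizes vary.)
```java
// Back-of-envelope only, following the per-block framing above: one DDT
// entry of ~320 bytes per block; 256 GB of data is a hypothetical example.
public class DedupTableSketch {
  static long ddtBytes(long dataBytes, long blockBytes, long bytesPerEntry) {
    long entries = dataBytes / blockBytes; // one entry per unique block
    return entries * bytesPerEntry;
  }

  public static void main(String[] args) {
    long data = 256L * 1024 * 1024 * 1024; // hypothetical 256 GB of field data
    long perEntry = 320;                   // rough bytes per DDT entry
    // ashift=9 -> 512-byte blocks, ashift=12 -> 4096-byte blocks
    System.out.printf("512B blocks: ~%d GiB of dedup table%n",
        ddtBytes(data, 512, perEntry) >> 30);
    System.out.printf("4K blocks:   ~%d GiB of dedup table%n",
        ddtBytes(data, 4096, perEntry) >> 30);
  }
}
```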
Never trust a database.
wtf would they default to 512 bytes, that's absurd
4k is typical
an easy gotcha to see if you ever went to an oracle training on it or not :stuck_out_tongue:
For this data set you want a dedup block size of like 16MB
@Rotonen the way I handle it is dumping that specific filesystem and restoring it with the proper settings
zfs on linux seems to sniff that at runtime, so if your hardware is not lying, you might be lucky on that one
but you should be able to see it in your pool data
yeah, listed in `zpool get all`
or if not listed as something explicitly set, could be something you can fish out via `zdb -C`, might have to point at the actual pool
@Rotonen waiting on it to finish whatever it's doing on boot.
may be a few hours
txg_sync
that'll take a while
but mind you, you need to have the `-o ashift=12` flag on both create and add commands, just to not have it too easy to do right :smile: