if snowblossom had a tagline, what would it be?
:P
satoshis to snowflakes: .00000001
I think I'm onto something with this: https://jsfiddle.net/Lcz901re/48/
@aikida3k What do you mean? :P
that would be awesome if snowblossom replaced bitcoin
I fear the font actually makes it look too techy, and would scare some people off
https://jsfiddle.net/Lcz901re/57/
huzzah
Snowblossom is not for the easily dissuaded
I like it. I also like the current website. It reminds me of a college professor/ academic website. A lot of coins use wordpress, but kudos to you for not going that route
I hate wordpress
@aikida3k wordpress == complicated tears and lies, and I can just write my own at this point
:P
also PHP is dead
I don't mind my own php. I don't trust more php than fits on a screen.
is triggered with flashbacks to previous jobs
Gah, I made the nonce too short.
which
Snowblossom block header nonce
are people already overflowing?
Oh god no
I need to cram some extra data in there for directory nodes
Because of course we need a dos resilient dht
For the sharing channel infrastructure
It is just like that Jay-Z song. If you liked it, you should have put a PoW on it
that's beyonce
:stuck_out_tongue:
Thanks. I am not an expert in block chain or anything.
lol
Does it get more hash power if more disk space?
no
so what affects the hash power? cpu?
@0x IOPS, random read operations, basically
so for example 2 disks will double the hash?
Okay so I am running 30 threads on a dual processor 288GB RAM machine and I am getting about 387/s. top command says java is consuming about 6% of CPU and about 0.2% memory. Seems low
maybe need to set -xmx
Shouldn't memory show much closer to 50% use?
-xmx ?
@0x what is -xmx
I'm not sure, I think you first need to enable memfield=true in the miner config. By default Java will use 1/4 of total memory as the max memory usage. You can manually change it with the -Xmx param.
for example: java -Xmx10240m -jar PoolMiner_deploy.jar configs/pool-miner.conf
-Xmx10240m means 10g
I tried memfield=true with ubuntu and I got a java.lang.OutOfMemoryError
try to change poolminer.sh
-Xmx204800m is 200g
memory use now shows only .1
hash went up to 416/s
@0x what is your hash rate? I'm trying to get some kind of ballpark figure
i'm not mining
did you buy OTC?
no, my mining machine is using hdd, too slow, need to add ssd
Is the hash miner display in kh/s? On the mining tuning page it says: if you have an NVMe that can in theory do 2400MB/s of reads, then 2400*1024 / 24 = 102400 hashes/sec, so the theoretical max is 102kh/s. In practice, on such an NVMe I see about 1800 MB/s moved (check via dstat) and a hash rate around 88kh/s.
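The tuning-page arithmetic can be sketched out like this (the ~24 KB-of-reads-per-hash cost is inferred from the quoted numbers, not taken from the Snowblossom source):

```java
public class HashRateEstimate {
    // Theoretical hash rate from raw read bandwidth:
    // (MB/s converted to KB/s) divided by the KB of reads each hash attempt costs.
    static long theoreticalHashRate(long bandwidthMBs, long kbPerHash) {
        return bandwidthMBs * 1024 / kbPerHash;
    }

    public static void main(String[] args) {
        // 2400 MB/s NVMe at ~24 KB per hash attempt -> 102400 h/s, i.e. 102 kH/s
        System.out.println(theoreticalHashRate(2400, 24)); // 102400
    }
}
```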
So would that be displaying 88/s ?
@aikida3k no, pretty sure it's raw int
hmm so I should be getting over 88,000 /s mining with RAM. So for some reason the fields aren't loading into RAM?
@aikida3k take a look at your ram usage and what's loaded for each process
top says java is only using 0.2% of memory
I'm getting about 25x the hash rate of my workstation that was mining from HDD
@Clueless What is your hash rate with the hardware you are using?
@aikida3k okay, and what is 0.2%?
how much is that?
you can also check disk load
I am supposed to get a share this way about every 140 minutes. But if the NVMe reference rate of 88,000 is right, that would be 183x my mining rate, and thus a share about every 45 seconds. That doesn't seem right
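That back-of-the-envelope check does hold together; a minimal sketch (the 183x factor comes from the message above):

```java
public class ShareEstimate {
    // Expected time between shares scales inversely with hash rate.
    static double scaledShareIntervalSeconds(double baseMinutes, double speedup) {
        return baseMinutes * 60.0 / speedup;
    }

    public static void main(String[] args) {
        // A share every 140 minutes now; at 183x the rate, roughly every 46 seconds.
        System.out.println(scaledShareIntervalSeconds(140, 183));
    }
}
```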
@aikida3k 1. If you have enough ram to load the snowfield entirely, you want to make sure it's actually loading into ram. 2. If it doesn't load entirely into ram, it'll be bottlenecking on your disk IO 3. Even if you're doing disk IO, there's time spent with the CPU performing hashing
.002*288GB is .576 GB
okay so how do I check if its all in RAM other than using top command?
140 minutes seems about right for a share, I think
top says barely any is in RAM at all, .576 GB. But if I was mining at the same rate as my workstation, I would be around 20 h/s
@aikida3k So you can try multiple things. See if linux itself is caching the files into ram, or you can set `memfield=true` in the conf
if you're using the poolminer or miner
i tried memfield=true and i got a java.lang.OutOfMemoryError
if you're running from the `client.sh` open that up and raise the heap size you're allowing for java
linux is supposed to load it into ram without being told
1. It takes time to load things into ram from disk. 2. if it's not, we'll want to figure out why. 3. If you don't want to go through that trouble, you can set `client.sh` java options manually :P
java -Xmx204800m -jar PoolMiner_deploy.jar configs/pool-miner.conf would work, right?
I'm not running the client or the node and just the pool miner
that's fine, and yeah.
204800 seems high
I have room to spare. I guess it takes a while. It's still creeping up. Now at 530/s
Tuning this is not easy. If I try to stop, reload the conf files I have to start all over again and wait to see what the hash rate does. Don't take it as a complaint; it just seems there is a learning curve to it.
@aikida3k takes time to load into ram. check out your file cache at `free -h`
CPU usage is still only 6-15%
because it's bottlenecking on disk io.
That's the point. No one gets any real advantage.
It's a big IO problem that's hard to solve. Tuning your gaming machine has the same problems.
free -h says 283G total 1.6G used 205G free
well, I'll let it run tonight and check tomorrow. Thanks for the help @Clueless -name is more fitting me than you
@aikida3k what about buff/cache?
76G
is it going up?
yeah, its going up 80G now
is raging at GPG password complexity requirements
thinks password complexity requirements should be federally abolished.
@aikida3k So yeah. Looks like it is loading into ram. Did you have to set `memfield=true` or are you just relying on linux to cache it?
you'll find it speeds up as more and more is in ram
relying on linux. when i set memfield=true I got the Java heap error: java.lang.OutOfMemoryError
good
sounds like linux is caching it, just takes time to load it all into ram
:P
yep. thanks
@aikida3k use memfield=true, set -Xmx to 220g (there is some overhead on top of the snow field, and you want extra heap so the GC does not fire all the time), and set threads to double your hyperthreads (with 8 cores, 32 threads; try +-50% too to see if that makes a dent). Reading 128GB into RAM will take quite a while. Also, only believe the 1h hashrate after 2h of mining. And if you are very patient, do what i do: set threads to 1, mine for 2h, set threads to 2, mine for 2h, set threads to 3, mine for 2h...
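Putting that advice together, the config and launch might look like this (a sketch: `memfield` and `threads` come up in this conversation, but the comments and the exact launch line are assumptions and may differ from the real pool-miner.conf):

```
# configs/pool-miner.conf (sketch; only memfield/threads are confirmed key names)
memfield=true
threads=32            # ~2x hyperthreads; try +-50% as suggested above

# launch with enough heap for the snow field plus GC headroom:
# java -Xmx220g -jar PoolMiner_deploy.jar configs/pool-miner.conf
```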
i fail ’what is a link’ as a webdesigner, but here is some ballparking, if you find and click links https://snowplough.kekku.li/wizard/ A quickstart wizard to help you get started with mining Snowblossom.
@aikida3k while fscache has you covered, the syscall overhead of hitting the disk with higher densities vs. memfield will limit you - with larger fields and linux, precache is not worth the while, but makes sense on windows as prefetch and superfetch get in your way otherwise
@Rotonen Thanks for the guidance. I'll give it a try and update in a bit.
there are no peers for the snowblossom.8 unchunked torrent?
there should be at least one. i am currently seeding. and someone is downloading.
how long are you giving it to discover peers before deciding there are none? i’d recommend 2h
It has been running for >30h now. I got both 7'er snowfields + the 8'er chunked
Or I'll just merge the chunked one..
weird, i’m immediately seeing 5 seeders on that one
max configured file size exceeded. Got the error :wink:
what’d spit that out?
rtorrent
that software manages to have the most baffling issues
https://snowplough.kekku.li/wizard/howto/
try aria2
can aria2 seed multiple torrents?
I just download the torrents to seed them afterwards
Deluge works well for me
Also ipv6 helps
Not sure if upnp is working on my seeders
@mjay it can
deluge hits performance issues for me at high data rates and high peer counts
one can also run aria as a daemon and it has an rpc interface, check the arch wiki page for some setup suggestions to get to know the ballpark
thanks. I leave my rtorrent running now, besides the file size limit no problems
Aria can also perform parallel torrent/http downloads. (I've heard)
hghclekkivrrgvtljfelklbkridtvtgfrhcfknthfnbe
hghclekkivrrjblifnvncuiidcnecdkntivjeuhilvlt
sorry, ignore those lol
testkey OTPs
I won't ignore those, clearly they are the answers to something
private keys for something? :thinking_face:
brainwallets, but for which coin
@Fireduck are there bounties for anything? I'd like to port the snowfield generation (Snowfall) to C++
Anyone know which pool (if its a pool, which I am assuming it is) has this address snow:dwmve86sywjhk3xsfznj2wkjjv9j7rnc6z7afgwv ?
as far as I know its a private pool
I guess "Sunshine Squirrels" is #Snowday?
I think its someone else
the nethash is picking up again, just seems like new parties are arriving to the scene
the coin is still in its early stage, i hope there is some time left before it starts
every new snowfield is a whole new ballpark of difficulty
just moving the fields around is getting tough
IMO 100kH/s per nvme ssd is longterm sustainable
fields will only switch at 4x previous hashrate, it won't be too often
and you can switch early :wink:
the price-worth sweet spot might be something else with arktika and old hardware, but it's gonna be hard to beat a passively cooled 35W quadcore with 2 low-power SSDs at 40W total system power draw cranking out ~250kH/s on power efficiency
of course the purchase price of such a system is something between 1k and 2k usd
and figuring out a power efficient motherboard is quite a chore as well
My rigs come close to this. 4xE7-4870, 160GB Ram in a DL580 G7, doing 3.1MH/s at 800W
about 1k investment for each
what's your hash rate gonna be on field 8 or 9? the passive rig will probably still do 250kH/s on field 10 as well (2x 1TB raid 0 / JBOD)
Field 8 will be 300$ additional investment for 128GB Ram
that's still fair
+600$ for field 9.
how do the sockets go on that one, or do you assume selling smaller DIMMs off first?
it has 64 sockets, 20x8GB populated now
and you're 10x .. 20x on the electricity draw as well, but that's not that bad
next step would be 40x8GB, final step 40x8 + 16x16
any predictions on the seek latency per channel effects of the asymmetry? or you actually dive deep and do openmpi / hwloc magic and pin resources per thread?
have not looked into this yet
you're, as far as i can see, in a silly position, where the more you go towards maxing it out, the more you benefit
I could upgrade one server now and see how it affects hashrate
which pool are you on mjay?
hamster
Best pool
~3M is borderline soloable currently
hamster is best pool?
I have several, so solo should be no problem
or just run your own pool
no problems with hamster pool, and the fee is low
seems i still have the lowest fee, but only one miner
that's me, it should be anyway
some of my spare capacity automatically joins in from time to time there, but i'm not predicting any in a few weeks now
just haven't hit any blocks on snowplough
the big issue for users on low volume pools is the low block density, but based on hash rate estimates it should currently get one block in 1000
averaged over time you'll still get about the same, but dunno if that is the kind of a time span you're in for
no i realize that. I didn't check the block frequency when i wrote my config.
It kinda hurts to spin up a bunch of expensive cloud and get serious blocks
I point my miners to a dns entry so I can change pool without changing config
So I'm shopping pools to see where i can get more blocks. I would come back when we get more miners; i would expect more to eventually use snowplough to make it worthwhile
the pool does not really play a role in your personal block finding density, just on the reward payment density
well the more blocks a pool finds with the higher hash rate, the more frequent the reward
i should bother to change the stats to show current share counts and next expected payouts per address instead of the 1h rates, but dunno if that's even a thing on the rpc yet
I am basically solo mining on your pool
yep
although you'll get the rewards sorta delayed as it's pplns over the last 5 blocks
True
though that'd be a block a day currently, or a bit less, so it should average out for you just fine within a week
plusminus nethash changes plus (afaik only plus?) more miners joining in
@mjay secondhand amd epyc would be rather interesting as well
amd epyc are still quite new and expensive second hand
they also run on ddr4 which is a lot more expensive
Epyc some sort of low power amd?
its the server branch of their zen architecture
no, quite the opposite of low power
but more memory channels
price per hash could still be in the same ballpark we discussed above
I bet its a lot higher. You want some decent processors, like 2x16 core at least, +some ram
that's 4k+ easily
theoretically you could get into the 10M ballpark, though
Can you somewhere rent such a server?
I'd like to know
you can
where?
Thats for a full month
yep, but they're fine with just one, though
i'm curious as to the 3d xpoint data centre u.2 disks as well
and it has just 4 memory modules
upgrade to 8 modules, that's 265€
a little bit too much just for testing
yeah, not gonna get a viable one for less than 500 a month
but similar performance from cloud vendors costs at least as much
I just want one for a few hours to test :wink:
at some point i'll try to figure out the cheapest 100kH/s rig, that might come in at under usd 300 soon enough
We need to collect such data
and create a mining calculator
i am, slowly, into my 'wizard', but my webdesign sucks
i accept updates from people
https://snowplough.kekku.li/wizard/
i fail 'what is a link', just hover over titles and click
I could code one for the in-node-explorer with some javascript
*mining calculator
if Fireduck is okay with that
meh, i like curated lists, but i'm a bit biased here
Or just output how many hashes per second you need to earn 1 snow/day
not everyone knows how hashes and coins are related
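A "hashes per second for 1 snow/day" calculator is only a few lines. Everything below is illustrative: the network hash rate, block reward, and blocks per day are placeholder numbers, not Snowblossom's actual parameters:

```java
public class SnowCalc {
    // Hash rate needed to earn targetSnowPerDay, assuming your share of
    // the minted coins is proportional to your share of the network hash rate.
    static double neededHashRate(double netHashRate, double blockReward,
                                 double blocksPerDay, double targetSnowPerDay) {
        double snowMintedPerDay = blockReward * blocksPerDay;
        return netHashRate * targetSnowPerDay / snowMintedPerDay;
    }

    public static void main(String[] args) {
        // e.g. 1 GH/s network, 50 SNOW reward, 144 blocks/day (all made-up numbers)
        System.out.println(neededHashRate(1e9, 50, 144, 1.0)); // ~139 kH/s
    }
}
```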
btw. how far off are you on the older E7 platform from my ballparking of 200kH/s per memory channel?
my samples are mostly from v3 E5 2xxx
E7-4870 has 4 memory channels, 16 total
`3100/16 == 193.75`
almost exactly
generational gains in memory bandwidth are inching that to 250
and larger caches can do 10 .. 50 on top
so a dual socket epyc could do 4.8M, meh
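The per-channel ballparking in this exchange, written out (numbers come from the messages above; ~300 kH/s per channel is the 250 plus the 10..50 cache bonus):

```java
public class ChannelBallpark {
    // kH/s per memory channel from a measured total.
    static double perChannel(double totalKHs, int channels) {
        return totalKHs / channels;
    }

    public static void main(String[] args) {
        System.out.println(perChannel(3100, 16)); // 193.75 on the 4x E7-4870 box
        // dual-socket EPYC: 16 channels * ~300 kH/s each -> ~4.8 MH/s
        System.out.println(16 * 300 / 1000.0);    // 4.8
    }
}
```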
Man .. java is really bugging me. Can't even output a ByteBuffer as hex :disappointed:
insufficient factories
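For the ByteBuffer gripe, a hex dump only takes a few lines by hand (a sketch; it works on a read-only duplicate so the caller's buffer position is untouched):

```java
import java.nio.ByteBuffer;

public class HexDump {
    // Hex-encode the remaining bytes of a ByteBuffer without consuming it.
    static String toHex(ByteBuffer buf) {
        ByteBuffer ro = buf.asReadOnlyBuffer(); // leaves the caller's position alone
        StringBuilder sb = new StringBuilder(ro.remaining() * 2);
        while (ro.hasRemaining()) {
            sb.append(String.format("%02x", ro.get() & 0xff));
        }
        return sb.toString();
    }

    public static void main(String[] args) {
        ByteBuffer b = ByteBuffer.wrap(new byte[] {
            (byte) 0xde, (byte) 0xad, (byte) 0xbe, (byte) 0xef });
        System.out.println(toHex(b)); // deadbeef
    }
}
```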
@mjay how many threads per hyperthread are you running?
so 80 threads on the server
try with 2, performance counters usually fail hard at genuine ram seek blocking
as in cpu saturation is not cpu saturation here
okay wait a sec
it'd be iowait, but it has no way to account for that
i sorta predict you should get some extra from the larger caches and intricacies of cache interleaving on an E7
Still 3.1MH/s