why is it silly?
and someone should do the mathematics on the collision chance of shuffling the fields, in regards to getting a higher ratio of nicer seeks
why should shuffling improve it?
128bit is just not a nice thing to seek in as that'd require page size tweaks
it should just prove you did something; it could do more in the same time, but that's not the point
well, playing around with the chance of 256 reads happening from the same 4k block
but that's quite on the fringe of maybes
but yeah, 16 byte reads off a gpu do provide the work for reverification if you tackle that
heh, nvidia has blogged close enough https://devblogs.nvidia.com/maximizing-unified-memory-performance-cuda/
`Considering that Unified Memory introduces a complex page fault handling mechanism, the on-demand streaming Unified Memory performance is quite reasonable. Still it’s almost 2x slower (5.4GB/s) than prefetching (10.9GB/s) or explicit memory copy (11.4GB/s) for PCIe.` gives you a ball park
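In case it helps, a minimal CUDA sketch of what those two cases mean: variant A touches host-resident managed pages from the GPU (on-demand migration), variant B calls cudaMemPrefetchAsync first. The buffer size, the `touch` kernel, and the host-reset step are all illustrative, and the timing code is left out.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Reads every element so each managed page actually has to reach the GPU.
__global__ void touch(const float* data, size_t n, int* hits) {
    size_t i = (size_t)blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n && data[i] == 123.0f) atomicAdd(hits, 1);  // keeps the read alive
}

int main() {
    const size_t n = 256ull << 20;                 // ~1 GiB of floats
    float* data; int* hits;
    cudaMallocManaged(&data, n * sizeof(float));
    cudaMallocManaged(&hits, sizeof(int));
    for (size_t i = 0; i < n; ++i) data[i] = 1.0f; // pages are now host-resident
    *hits = 0;

    int dev = 0;
    cudaGetDevice(&dev);
    const int threads = 256;
    const int blocks  = (int)((n + threads - 1) / threads);

    // Variant A: on-demand migration - each GPU touch of a host-resident page
    // triggers a page fault and a migration (the ~5.4 GB/s case in the quote).
    touch<<<blocks, threads>>>(data, n, hits);
    cudaDeviceSynchronize();

    // Reset: move the pages back to host memory so the two variants compare fairly.
    cudaMemPrefetchAsync(data, n * sizeof(float), cudaCpuDeviceId);
    cudaDeviceSynchronize();

    // Variant B: prefetch the whole range to the GPU, then run the same kernel
    // (the ~10.9 GB/s case, close to an explicit cudaMemcpy).
    cudaMemPrefetchAsync(data, n * sizeof(float), dev);
    touch<<<blocks, threads>>>(data, n, hits);
    cudaDeviceSynchronize();

    printf("hits: %d\n", *hits);
    cudaFree(data); cudaFree(hits);
    return 0;
}
```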
okay, first test, directly accessing 16 byte chunks from global memory, using openCL, 18.5M accesses/s on a 1080TI
but they did tiered caching and all the trimmings for you
offloading will be faster, that's the next test
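For anyone curious, here is roughly what that first test looks like, sketched in CUDA rather than the OpenCL that was actually used. Everything here is made up for illustration: the `random_reads` kernel, the xorshift PRNG, the launch dimensions and read counts; the buffer is left uninitialized because only the access pattern matters for the rate measurement.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Each thread does random 16-byte-aligned uint4 loads from a large device buffer.
__global__ void random_reads(const uint4* field, size_t words,
                             unsigned long long reads_per_thread,
                             unsigned long long* sink) {
    unsigned long long x = blockIdx.x * blockDim.x + threadIdx.x + 1;
    uint4 acc = make_uint4(0, 0, 0, 0);
    for (unsigned long long r = 0; r < reads_per_thread; ++r) {
        // xorshift64 as a cheap stand-in for whatever PRNG a miner would use
        x ^= x << 13; x ^= x >> 7; x ^= x << 17;
        uint4 v = field[x % words];        // one 16-byte, 16-byte-aligned load
        acc.x += v.x; acc.y += v.y; acc.z += v.z; acc.w += v.w;
    }
    // keep the loads from being optimized away
    atomicAdd(sink, (unsigned long long)(acc.x ^ acc.y ^ acc.z ^ acc.w));
}

int main() {
    const size_t bytes = 10ull << 30;               // 10 GB, as in the 1080 Ti test
    const size_t words = bytes / sizeof(uint4);
    uint4* field;
    unsigned long long* sink;
    cudaMalloc(&field, bytes);                      // contents don't matter here
    cudaMallocManaged(&sink, sizeof(*sink));
    *sink = 0;

    cudaEvent_t t0, t1;
    cudaEventCreate(&t0); cudaEventCreate(&t1);
    const unsigned long long reads_per_thread = 1024;
    const int blocks = 1024, threads = 256;

    cudaEventRecord(t0);
    random_reads<<<blocks, threads>>>(field, words, reads_per_thread, sink);
    cudaEventRecord(t1);
    cudaEventSynchronize(t1);

    float ms = 0;
    cudaEventElapsedTime(&ms, t0, t1);
    double total = (double)blocks * threads * reads_per_thread;
    printf("%.1f M accesses/s\n", total / (ms * 1e3));  // ms * 1e3 = microseconds
    cudaFree(field); cudaFree(sink);
    return 0;
}
```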
there is a fun way around the 4k read problem
that caps out at 40GE for anything consumer obtainable
though not many even have 10GE
@Fireduck or are you saying that'd be faster over a switch with two network interfaces on the same computer? :smile:
Better use localhost :smile:
loopbacks are unix sockets under the hood and thus 4k? :stuck_out_tongue:
i'm trying to wrap my head around which part is able to feed faster, as there's ultimately a 4k read system somewhere down the line
Is every 16 byte chunk sent on its own???
That would be a waste
on the network level, yeah, can be, but what feeds into that and how
meh, only like 20% overhead all inclusive? :stuck_out_tongue:
I'd fit at least 50 of them into one packet
packets have a header, too
more than 16 bytes total
@mjay so it seems CUDA actually has everything one would need, with no orchestration required, if you think of each execution unit as just a CPU doing one fetch, treat all the layers within the GPU as sacrificial caches for the main system memory, and give priority to the GPU - all the GPU-internal 'cache hits' would be an edge
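One possible way to express that idea with stock CUDA calls, as a hedged sketch only: the sizes, the 10GB split, and the fill step are placeholders, and cudaMemAdvise is just a hint to the driver, not a guarantee of residency.

```cuda
#include <cuda_runtime.h>

int main() {
    const size_t field_bytes = 128ull << 30;   // e.g. the 128 GB snowfield
    const size_t gpu_slice   = 10ull  << 30;   // roughly what a 1080 Ti can hold
    int dev = 0;
    cudaGetDevice(&dev);

    void* field;
    cudaMallocManaged(&field, field_bytes);    // oversubscribed: backed by host RAM

    // ... fill `field` from the snowfield file on the host ...

    // Keep one slice resident on the GPU; reads into it are the "cache hits".
    cudaMemAdvise(field, gpu_slice, cudaMemAdviseSetPreferredLocation, dev);
    cudaMemPrefetchAsync(field, gpu_slice, dev);

    // The rest stays preferred on the host and is mapped for direct GPU access,
    // so touching it does not evict the GPU-resident slice.
    cudaMemAdvise((char*)field + gpu_slice, field_bytes - gpu_slice,
                  cudaMemAdviseSetPreferredLocation, cudaCpuDeviceId);
    cudaMemAdvise((char*)field + gpu_slice, field_bytes - gpu_slice,
                  cudaMemAdviseSetAccessedBy, dev);

    // ... launch something like the random-read kernel sketched earlier over `field` ...
    cudaDeviceSynchronize();
    cudaFree(field);
    return 0;
}
```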
Of course. But right now with a 128GB field, and most consumer GPUs way below 16GB, that's not even 15%
once field 8 activates even less
when mining off an ssd, everything you get on top counts
if hardware you might already have doubles your hashrate, win
though nonsense in regards to hashes per watt
it could stay in its lower power stages
compared to an ssd it's still way faster
oh, that's the kind of fun for which one can get paid in fiat - how to idle GPGPUs in flight
The idea is 4k reads to the CPU, then marshalled over the network. That way you can use multiple machines to access a cluster of memory machines
In practice it works great
My r900 can do about 300kh/s on its own
But enabled about 1mh/s by sharing over a 1G network
To other machines that max their CPU as well
What we are aiming for is an arktika-gpu-client
that uses the snowfield either on a local machine or a network source
i'm just emptying the dishwasher and shooting the breeze
around 22 million accesses per second seems to be about the max for a 1080 Ti
16 bytes, random position, 16 byte aligned, from 10GB of data
power draw <70W
that'd be like 2.5MH/s over that plus penalties from misses, so currently that'd land you ~200kH/s extra? that's not bad per watt
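For reference, one way to arrive at those two numbers, assuming six 16-byte snowfield reads per hash attempt (my assumption, not stated in this conversation) and the 128GB field from earlier with 10GB of it resident on the GPU:

```latex
% upper bound from the measured access rate, rounded down for overhead:
\frac{22 \times 10^{6}\ \text{reads/s}}{6\ \text{reads/hash}} \approx 3.7\ \text{MH/s}
  \;\;\rightarrow\;\; \sim 2.5\ \text{MH/s}
% scaled by the fraction of the field the GPU can actually hold:
2.5\ \text{MH/s} \times \frac{10\ \text{GB resident}}{128\ \text{GB field}}
  \approx 0.2\ \text{MH/s} = 200\ \text{kH/s}
```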
i'm rounding down and probably not enough, but ballpark-wise something like 1..2 extra SSDs for hardware you might already have?
assuming you have the 10GB of RAM to dedicate to that, otherwise extra penalties and no sense?
so someone could have a cheap, long-term viable miner off an SSD and have the GPU profit-switch between boosting snow and doing whatever else. i like the narrative - now someone would just have to put a lot of effort into software to try that out
probably a ~80h project for someone who has done something similar before? (80h for a PoC implementation is a lot of work for something this specific, mind you)
@Fireduck did you even spend that much on snowblossom so far? :smile:
and about to hit 1BTC on 24h trading volume on qtrade too
I've probably spent 200h on snowblossom
Hard to estimate
More than that, I was effectively full time for six weeks
ah you had some luxury between jobs or how'd you pull that off?
no wonder most of the hard stuff actually works
Yeah, took time between jobs
Quit Google and started at Axon
I still wonder who the hell is mining with field 9 and why.
maybe they just have 1tb SSDs and why not
There is no benefit to using it right now. Even downloading it would be a pain. I imagine the person using it doesn't know the correct snowfield
I kinda doubt that
it is certainly possible of course
when field 9?
3G hash rate?
as people are too lazy to just check the block explorer, maybe it's worth placing the current active field alongside some other stats on the front page of the main site @Fireduck
also, maybe tell people on the snowfield-torrent-page that the fields < 7 are not needed anymore
I know at least 2 people who downloaded them
@alistar 8 at ~4G, 9 at ~8G
2^(25+2*field)/600
9 is at 14.6G
thought it just doubles, nice
maybe those should go onto the snowfields explanation page
and also say they’re all the same field, with ’including up to’
for anyone lazy, see the values table http://m.wolframalpha.com/input/?i=2%5E%2825%2B2*n%29%2F600
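Plugging the two relevant fields into that formula (reading the 600 as the 600-second block target, which is my interpretation) gives the values quoted above:

```latex
\text{rate}(n) = \frac{2^{25+2n}}{600}\ \text{H/s}, \qquad
\text{rate}(8) = \frac{2^{41}}{600} \approx 3.7\ \text{GH/s}, \qquad
\text{rate}(9) = \frac{2^{43}}{600} \approx 14.7\ \text{GH/s}
```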
@mjay when I first came across snow I learnt this hard lesson too, as I aggressively skimmed the docs due to wanting to mine some snow asap. a little patience goes a long way
It's probably best to download field 8 right now so you are ready when it switches
@mjay already on it :smile:
i shouldn't be able to edit that, right? :grin:
@Clueless can you update your testnet node to the latest in git?
sure
@lajot are you saying it'll let you edit it?
i think so yes
@lajot set the channel topic: WIKI https://github.com/snowblossomcoin/snowblossom/wiki hello
ah, I thought you meant the wiki, yeah, you shouldn't be able to change that either, thanks. :P
@Fireduck I thought I was current already
`Oct 29 07:07:08 snownode node-testnet.sh[954]: [2018-10-29 07:07:08] INFO snowblossom.node.SnowBlossomNode <init> Starting SnowBlossomNode version 1.4.0`
1.4.0 is ages ago
you want 1.4.1-dev
herk
ah, an update script broke
I made a change such that low fee transactions only get 100k from each block
that way old clients that don't do the fee right will still work
but it won't be possible to fill all the blocks for free anymore
yeah, just to warn you all, I'm redoing my monitoring (not happy with my old setup), so some things like this may slip under the radar
@Fireduck do all mining nodes have to upgrade or will the network still accept blocks with more than 100k low fee transactions?
it is entirely the block creator's decision, other nodes will accept the blocks just fine
not a protocol change
okay, great
I am talking with a guy on reddit, can anyone provide any input?
https://www.reddit.com/r/snowblossom/comments/94kuwt/xmss_vs_rsa_large_key/e8ocv1z
Specifically, WTF is stateless signing?
also, why the hell does quantum resistant mean the same thing as quantum safe?
I am pleased when anyone looks hard enough to tell me I am wrong
As far as I know, algorithms are only quantum safe if there is provably no quantum algorithm with a sub-exponential number of qubits, whereas quantum resistant just means we don't know of any such algorithm
ah, ok, that is the distinction I was missing
Any project admin/dev I can talk to? :smile:
@Gamaranto what's up? :P
@Fireduck that's why I keep saying that fighting unknown unknowns in a practical application is a bit silly
@Fireduck someone eventually needs to hilbert up all the idea spaces, but that’s meta engineering
@Gamaranto I am here too.