re SUB - don't forget https://handshake.org/ forging the way of the new decentralized interwebs (decentralized certificate authority and naming)
Also this one, even older than bitcoin: https://freenetproject.org/
I remember messing with that and not being impressed
it is very slow and forgets data quickly, but its focus is on anonymity
a different use case
for the sole reason of not accepting a host key again, I think HNS will succeed
what is HNS?
handshake
faucet for verified FOSS devs = fucking rad distribution model
how do I configure Arktika's config files?
threads?
got
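roughly speaking it's a plain Java properties file; something along these lines, but treat every key name below as an assumption from memory and check the example config that ships with Arktika, especially the layer setup:
```properties
# illustrative sketch only: key names are assumptions, not copied from the
# real example config - check the one bundled with Arktika
pool_host=http://your.pool.example
mine_to_address=snow:your_address_here
threads=16
# the part worth reading the docs for is the layer setup: Arktika stacks
# RAM / SSD / network layers of the snow field, each with its own threads
```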
made an update @ snowblossom.satoshis.guru, node now working on the latest release
wow, almost 2GH/s
will be pushed to 8:grinning:
people getting PoW fomo
I fear tigers
Are there stats on which miners are using field 8 already?
I checked some blocks manually, all 7
hum, I could write a script to check but I don't have a good tool at the moment
I'll write something then
It's not too difficult, I guess
if you take the previous version of RichList.java, it iterated through all blocks
you can just copy that
in the most recent version I changed it to use utxo instead
I'm using VoteTracker now, just the last 1k blocks are enough
cool, good idea
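a minimal sketch of the counting half, assuming you already have a way to pull the snow-field number out of the last 1k headers (the node RPC names vary between releases, so that part is left out):
```java
import java.util.List;
import java.util.Map;
import java.util.TreeMap;

// Sketch only, not the real VoteTracker: tally which snow field each of the
// last ~1000 blocks was mined on, given the field number per header.
public class FieldVoteCount
{
  public static Map<Integer, Integer> tally(List<Integer> snowFieldPerBlock)
  {
    Map<Integer, Integer> counts = new TreeMap<>();
    for (int field : snowFieldPerBlock)
    {
      counts.merge(field, 1, Integer::sum);
    }
    return counts;
  }

  public static void main(String[] args)
  {
    // fake data just to show the output shape
    System.out.println(tally(List.of(7, 7, 9, 7))); // {7=3, 9=1}
  }
}
```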
Field 7: 997, Field 9: 3
Field 9!!
Is that 128gb?
512gb
No way
I still remember ram mining lol
days of ram-mining are almost over
Vps with 512gb are expensive af
you could buy some fancy server hardware, but that gets more expensive than SSDs
I was one of the first 5 people on the network :grin:
nice :smile:
your first block?
mine was around 6k, 64GB still
Not sure which was my first block
I remember 3 weeks later someone put 100x what I had on the network
mainframes, i say
i applaud whoever set their block remark to 'POOL'
ha
@Rotonen hey uh. at about `2 GH/s`, if the average person nets about `500,000 H/s`, doesn't that equate to like `4000 people` ?
500 kH/s is quite cheap on AWS
how cheap?
2$/day
about
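back-of-envelope on the numbers above:
```java
// Pure arithmetic: network hash rate divided by an assumed per-miner rate.
public class MinerEstimate
{
  public static void main(String[] args)
  {
    double network  = 2e9;   // ~2 GH/s network rate from above
    double perMiner = 500e3; // assumed 500 kH/s per "average" miner
    System.out.printf("~%.0f miner-equivalents%n", network / perMiner); // ~4000
  }
}
```
so ~4000 miner-equivalents, though a single cloud box can account for several of those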
~@Rana Waleed~ @mjay link me?
I haven't seen anyone that allows you to use that sort of bandwidth, io, cpu for that cheap
hey whatsup @Clueless
sup
what do you need bandwidth for?
were you asking me something @Clueless
oh sorry
@Clueless i’ll show you the opposite end http://www.cirrascale.com/pricing_power8BM.php
Holy hell.
well, there’s the deep end too https://www.ibm.com/cloud/bare-metal-servers Bare metal servers are dedicated, IBM high-performance cloud servers configurable in hourly and monthly options.
can we mine with a GPU?
not enough memory in gpus
Maybe. How fast is the system memory to GPU bus?
Maybe an arktika like solution where CPU bundles chunks to GPU
And do the hashing on GPU
If all conditions are perfect (PCIe 3.0, 16x connector, CPU supports it) it's ~13 GB/s
There is probably some room to do something there
You saturate the system memory bus, assuming the cpu can move words around fast enough
this would only accelerate memory mining
unless gpu memory is used as well
you flood the gpu with as much data as it can hash
sure, memory mining is cpu bound
it moves that bound
the GPU will outperform any memory bus
it's a whole different league
So, probably pretty fast
The usual mining rigs have their GPUs connected with extenders, limiting the bandwidth to PCIe x1
like 700 MB/s at best
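if you treat the link as the only bottleneck, the ceiling falls out directly; this assumes ~6 random 16-byte field reads per hash attempt (treat both constants as assumptions and check the PoW spec):
```java
// Rough ceiling on hash rate when the bus is the bottleneck, ignoring latency.
public class BusBoundHashRate
{
  static double maxHashRate(double bytesPerSecond)
  {
    final double READS_PER_HASH = 6;  // assumed number of field reads per attempt
    final double BYTES_PER_READ = 16; // chunk size per read
    return bytesPerSecond / (READS_PER_HASH * BYTES_PER_READ);
  }

  public static void main(String[] args)
  {
    System.out.printf("PCIe 3.0 x16 (~13 GB/s): ~%.0f MH/s%n", maxHashRate(13e9) / 1e6);
    System.out.printf("x1 riser   (~700 MB/s): ~%.0f MH/s%n", maxHashRate(0.7e9) / 1e6);
  }
}
```
roughly 135 MH/s vs 7 MH/s, and that's before any latency hit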
Well, you can get that fast with network on a 10gb link
you're approaching what omnipath is doing in gpu clusters
so the question is, if you put the entire field in memory, what is the max speed you could read that memory on a cpu?
from there, the cpu can sling over network or to gpu or both
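a crude single-threaded probe of that question, sized down to 1 GiB so it runs on anything; a real field is far bigger and you'd run one of these per core:
```java
import java.util.concurrent.ThreadLocalRandom;

// Random 16-byte reads from a large in-memory array, to get a feel for how
// fast one core can seek around RAM. Not a miner, just a measurement toy.
public class RandomReadProbe
{
  public static void main(String[] args)
  {
    final int size = 1 << 30;        // 1 GiB stand-in for the snow field
    byte[] field = new byte[size];

    final int reads = 10_000_000;
    long sink = 0;                   // keep the JIT from dropping the loop
    ThreadLocalRandom rnd = ThreadLocalRandom.current();
    long start = System.nanoTime();
    for (int i = 0; i < reads; i++)
    {
      int off = rnd.nextInt(size - 16);
      for (int j = 0; j < 16; j++) sink += field[off + j];
    }
    double seconds = (System.nanoTime() - start) / 1e9;
    System.out.printf("%.1f M reads/s (sink=%d)%n", reads / seconds / 1e6, sink);
  }
}
```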
ctrl-f gpu
all that stuff has been solved for years
just the backing hardware is not gonna be viable
way too expensive infra-wise
see use cases, case 4
~about what you're musing about above with memory moves being cpu bound when they're not on the same bus
keeping tabs on these people might be insightful: https://www.openfabrics.org/
with enough GPU memory this could be all GPU, add NVLink-connections and it will outperform everything
that's nvidia dgx territory
a mainframe is cheaper per hash
I actually don't know anything. Which can read from system ram faster, CPU or GPU?
hopefully, for non-snowblossom reasons, i'm wrong in a generation or two of hardware
cpu
So no exotic hardware needed
CPU bundles for GPU hashing
GPUs have their own memory, why do they need system memory?
well, there is pci-e dma stuff, like BAR
GPUs don't have enough
@alistar
thx
there is an access diagram which should tell you how that dance goes - stuff still hits the system memory
less, but snowblossom is a very evil problem where everything counts
@alistar also not all gpu execution nodes actually see all of the memory, there's a lot of shuffling within a gpu, it's basically a highly branching tree - http://www.ce.jhu.edu/dalrymple/classes/602/Class13.pdf the first image here is useful for getting at why it's very difficult for what snowblossom is doing
I see one problem with this kind of effort .. once the miner is ready and public, the hashrate will climb a lot, causing some snowstorms until the point where memory mining is not possible anymore
that's not a problem, that's an inevitability - also why i'm saying 100kH/s NVMe miners are sustainable
Makes sense
That was my plan overall but getting there is weirder than I thought
@mjay the development would only be too quick if GPUs were wide enough that very high per-executor-unit miss rates wouldn't matter, which they're not, in regards to width vs. memory bulk
@Fireduck i'm still waiting for someone to make a *very* wide layered spinny disk raid
@Fireduck that's the only way i can imagine anyone botnetting this one, but that'd DDoS all the regional ISPs in between while at it
@mjay a useful way to imagine a GPU is 'could this be solved by duct taping 2^11 pentiums together'
You would have to cluster the data required by the GPU anyway, there is not much overhead/memory usage on the GPU
but each cuda executor would only see like a few tens of megs of memory
and you cannot really orchestrate sideways within a gpu
of course you can, it's slow however
or if you can, that's some higher order side effect of the new fused multiply add they did, i've not yet seen
yeah, way slower than meaningful
if all the data is in CPU memory, why does it need more than a few kb?
unless there's something missed so far
if all the data is not on the gpu, why'd you crunch anything on the gpu?
moving stuff in or out of there is clunky
The plan was to move the actual hashing to the GPU
that's not a bottle neck
a cuda core has a few tens of megs of very fast seek space, but i have no idea how to orchestrate that miss fest efficiently
I can do some real-life testing on this one. Move 10GB of random data to GPU memory, and try to access random 16 byte chunks from GPU memory as fast as possible
perhaps some of the 'how to pack multidimensional message spaces as efficiently as possible' approaches could yield something for the known-stable packings, but i have no idea where to begin looking into that yet
my guess is: Pascal - reasonably fast, Volta - very fast
@mjay please do, you do actually have a point that what i call 'clunky' might result in better latencies for small seeks
aka streams vs. blocks
i keep forgetting @Fireduck was an evil smartypants and made the seek size silly