2018-10-28 00:02:35
re SUB - don't forget https://handshake.org/, forging the way of the new decentralized interwebs (decentralized certificate authority and naming)

noogie
2018-10-28 00:07:14
Also this one, even older than bitcoin:
https://freenetproject.org/

mjay
2018-10-28 00:07:58
I remember messing with that and not being impressed

Fireduck
2018-10-28 00:08:36
it is very slow and forgets data quickly, but its focus is on anonymity

mjay
2018-10-28 00:08:50
a different use case

mjay
2018-10-28 00:13:12
for the sole reason of never having to accept a host key again, I think HNS will succeed

noogie
2018-10-28 00:17:48
what is HNS?

mjay
2018-10-28 00:36:26
handshake

noogie
2018-10-28 00:37:00
faucet for verified FOSS devs = fucking rad distribution model

noogie
2018-10-28 05:43:22
how do I configure Arktika's config files?

alistar
2018-10-28 05:43:24
threads?

alistar
2018-10-28 07:03:53
@alistar https://github.com/snowblossomcoin/snowblossom/tree/master/example/arktika

Clueless
2018-10-28 07:06:41
got it

alistar
2018-10-28 11:34:51
made an update @ snowblossom.satoshis.guru, node now working on the latest release

finex
2018-10-28 15:20:00
wow, almost 2 GH/s

Fireduck
2018-10-28 15:24:45
will be pushed to 8 :grinning:

alexaabb
2018-10-28 16:23:17
people getting PoW fomo

offmenu
2018-10-28 16:23:46
I fear tigers

Fireduck
2018-10-28 17:01:22
Are there stats on which miners are using field 8 already?

mjay
2018-10-28 17:01:32
I checked some blocks manually, all 7

mjay
2018-10-28 17:02:22
hum, I could write a script to check but I don't have a good tool at the moment

Fireduck
2018-10-28 17:02:35
I'll write something then

mjay
2018-10-28 17:02:51
Its not too difficult I guess

mjay
2018-10-28 17:12:48
if you take the previous version of RichList.java, it iterated through all blocks

Fireduck
2018-10-28 17:12:53
you can just copy that

Fireduck
2018-10-28 17:13:05
in the most recent version I changed it to use utxo instead

Fireduck
2018-10-28 17:13:31
I'm using VoteTracker now, just the last 1k blocks are enough
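
A hedged sketch of what that tally could look like (not the actual RichList.java or VoteTracker code; `fieldOfBlock` is a hypothetical stand-in for however your node exposes a header's snow field, and the data in `main` is fake, just to show the output shape):

```java
import java.util.TreeMap;
import java.util.function.IntUnaryOperator;

// Tally which snow field the last 1k block headers commit to.
// `fieldOfBlock` is a hypothetical stand-in for reading a header's
// snow field from a node; plug in your own accessor there.
public class FieldTally {
  public static TreeMap<Integer, Integer> tally(int headHeight, IntUnaryOperator fieldOfBlock) {
    TreeMap<Integer, Integer> counts = new TreeMap<>();
    int start = Math.max(0, headHeight - 999);
    for (int h = start; h <= headHeight; h++) {
      counts.merge(fieldOfBlock.applyAsInt(h), 1, Integer::sum);
    }
    return counts;
  }

  public static void main(String[] args) {
    // Fake field assignments just to demonstrate the output format.
    TreeMap<Integer, Integer> counts = tally(999, h -> (h >= 997) ? 9 : 7);
    counts.forEach((field, n) -> System.out.println("Field " + field + " " + n));
  }
}
```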

mjay
2018-10-28 17:21:48
cool, good idea

Fireduck
2018-10-28 17:36:59
Field 7: 997
Field 9: 3

mjay
2018-10-28 17:37:08
now also on http://snowblossom-explorer.org/

mjay
2018-10-28 17:37:54
Field 9!!

Fireduck
2018-10-28 17:39:47
Is that 128gb?

Joko
2018-10-28 17:39:53
512gb

mjay
2018-10-28 17:39:59
No way

Joko
2018-10-28 17:40:19
I still remember ram mining lol

Joko
2018-10-28 17:40:36
days of ram-mining are almost over

mjay
2018-10-28 17:43:27
VPS with 512GB are expensive af

Joko
2018-10-28 17:43:32
you could buy some fancy server hardware, but that gets more expensive than SSDs

mjay
2018-10-28 17:44:03
I was one of the first 5 people on the network :grin:

Joko
2018-10-28 17:44:10
nice :smile:

mjay
2018-10-28 17:44:19
your first block?

mjay
2018-10-28 17:44:39
mine was around 6k, 64GB still

mjay
2018-10-28 17:45:55
Not sure which was my first block

Joko
2018-10-28 17:46:16
I remember 3 weeks later someone put 100x what I had on the network

Joko
2018-10-28 19:31:32
mainframes, i say

Rotonen
2018-10-28 19:33:59
i applaud whoever set their block remark to 'POOL'

Rotonen
2018-10-28 19:36:02
ha

Clueless
2018-10-28 19:39:21
@Rotonen hey uh. at about `2 GH/s`, if the average person nets about `500,000 H/s`, doesn't that equate to like `4000 people` ?
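
(Worked out: 2 GH/s ÷ 500 kH/s = 2,000,000,000 ÷ 500,000 = 4,000, so the estimate holds if the average miner really does about 500 kH/s.)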

Clueless
2018-10-28 19:39:57
500 kH/s is quite cheap on AWS

mjay
2018-10-28 19:40:16
how cheap?

Clueless
2018-10-28 19:40:49
$2/day

mjay
2018-10-28 19:40:53
about

mjay
2018-10-28 19:46:11
@mjay link me?

Clueless
2018-10-28 19:46:44
I haven't seen anyone that allows you to use that sort of bandwidth, IO, and CPU for that cheap

Clueless
2018-10-28 19:50:56
hey, what's up @Clueless

Rana Waleed
2018-10-28 19:51:06
sup

Clueless
2018-10-28 19:51:09
what do you need bandwidth for?

Rotonen
2018-10-28 19:51:30
were you asking me something @Clueless

Rana Waleed
2018-10-28 19:51:49
oh sorry

Clueless
2018-10-28 20:13:43
@Clueless i’ll show you the opposite end
http://www.cirrascale.com/pricing_power8BM.php

Rotonen
2018-10-28 20:15:02
Holy hell.

Clueless
2018-10-28 20:16:23
well, there’s the deep end too
https://www.ibm.com/cloud/bare-metal-servers Bare metal servers are dedicated, IBM high-performance cloud servers configurable in hourly and monthly options.

Rotonen
2018-10-28 23:16:01
can we mine with gpu?

alistar
2018-10-28 23:17:39
not enough memory in gpus

Rotonen
2018-10-28 23:26:02
Maybe. How fast is the system memory to GPU bus?

Fireduck
2018-10-28 23:26:25
Maybe an arktika like solution where CPU bundles chunks to GPU

Fireduck
2018-10-28 23:26:41
And do the hashing on GPU

Fireduck
2018-10-28 23:29:57
If all conditions are perfect (PCIe 3.0, x16 connector, CPU supports it) it's ~13 GB/s
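
A rough sanity check of that ~13 GB/s figure (a sketch under stated assumptions: PCIe 3.0 at 8 GT/s per lane with 128b/130b encoding; real DMA transfers lose a bit more to protocol overhead):

```java
// Theoretical PCIe 3.0 per-lane and per-link bandwidth.
public class PcieBandwidth {
  public static void main(String[] args) {
    double transfersPerSec = 8e9;              // PCIe 3.0: 8 GT/s per lane
    double encoding = 128.0 / 130.0;           // 128b/130b line coding
    double bytesPerLanePerSec = transfersPerSec * encoding / 8.0;  // ~0.985 GB/s
    System.out.printf("x16 theoretical: %.2f GB/s%n", 16 * bytesPerLanePerSec / 1e9);
    System.out.printf("x1  theoretical: %.2f GB/s%n", bytesPerLanePerSec / 1e9);
    // x16 works out to ~15.75 GB/s on paper; measured DMA throughput is
    // typically ~12-13 GB/s, which lines up with the figure above.
  }
}
```

The x1 number is also roughly consistent with the "700 MB/s at best" riser figure mentioned below, since extender-connected GPUs often end up on a single lane, sometimes at a lower PCIe generation.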

mjay
2018-10-28 23:31:02
There is probably some room to do something there

Fireduck
2018-10-28 23:33:05
You saturate the system memory bus, assuming the cpu can move words around fast enough

Fireduck
2018-10-28 23:33:12
this would only accelerate memory mining

mjay
2018-10-28 23:33:19
unless gpu memory is used as well

mjay
2018-10-28 23:33:22
you flood the gpu with as much data as it can hash

Fireduck
2018-10-28 23:33:35
sure, memory mining is cpu bound

Fireduck
2018-10-28 23:33:37
it moves that bound

Fireduck
2018-10-28 23:33:54
the GPU will outperform any memory bus

mjay
2018-10-28 23:34:07
its a whole different league

mjay
2018-10-28 23:34:17
So, probably pretty fast

Fireduck
2018-10-28 23:36:51
The usual mining rigs have their GPUs connected with extenders, limiting the bandwidth to PCIe x1

mjay
2018-10-28 23:37:00
like 700 MB/s at best

mjay
2018-10-28 23:38:29
Well, you can get that fast with the network on a 10Gb link

Fireduck
2018-10-28 23:38:56
you're approaching what omnipath is doing in gpu clusters

Rotonen
2018-10-28 23:38:57
so the question is, if you put the entire field in memory, what is the max speed you could read that memory on a cpu?
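
One way to answer that empirically, as a minimal single-threaded sketch (not Snowblossom's miner; the 1 GiB buffer and iteration count are arbitrary, and a real test would run one loop per core, Arktika-style):

```java
import java.nio.ByteBuffer;
import java.util.concurrent.ThreadLocalRandom;

// Probe random 16-byte reads from a large in-memory buffer, the same
// access pattern discussed above. Buffer size and iteration count are arbitrary.
public class RandomReadBench {
  public static void main(String[] args) {
    final int size = 1 << 30;                       // 1 GiB stand-in for a field
    final ByteBuffer field = ByteBuffer.allocateDirect(size);
    final byte[] chunk = new byte[16];
    final long iters = 10_000_000L;
    long sink = 0;

    long start = System.nanoTime();
    for (long i = 0; i < iters; i++) {
      int off = ThreadLocalRandom.current().nextInt(size - 16);
      field.position(off);
      field.get(chunk);                             // one random 16-byte seek
      sink += chunk[0];                             // keep the JIT honest
    }
    double secs = (System.nanoTime() - start) / 1e9;
    System.out.printf("%.1f M reads/s, %.0f MB/s (sink=%d)%n",
        iters / 1e6 / secs, iters * 16 / 1e6 / secs, sink);
  }
}
```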

Fireduck
2018-10-28 23:39:09
https://www.intel.com/content/dam/support/us/en/documents/network-and-i-o/fabric-products/Intel_OP_Performance_Tuning_UG_H93143_v10_0.pdf

Rotonen
2018-10-28 23:39:12
from there, the cpu can sling over network or to gpu or both

Fireduck
2018-10-28 23:39:21
ctrl-f gpu

Rotonen
2018-10-28 23:39:49
all that stuff has been solved for years

Rotonen
2018-10-28 23:39:59
just the backing hardware is not gonna be viable

Rotonen
2018-10-28 23:40:08
way too expensive infra-wise

Rotonen
2018-10-28 23:40:47
see use cases, case 4

Rotonen
2018-10-28 23:41:45
roughly what you're musing about above, with memory moves being cpu bound when they're not on the same bus

Rotonen
2018-10-28 23:42:46
keeping tabs on these people might be insightful: https://www.openfabrics.org/

Rotonen
2018-10-28 23:43:03
with enough GPU memory this could be all GPU; add NVLink connections and it will outperform everything

mjay
2018-10-28 23:43:22
that's nvidia dgx territory

Rotonen
2018-10-28 23:43:29
a mainframe is cheaper per hash

Rotonen
2018-10-28 23:44:08
I actually don't know anything. Which can read from system ram faster, CPU or GPU?

Fireduck
2018-10-28 23:44:11
hopefully, for non-snowblossom reasons, i'm wrong in a generation or two of hardware

Rotonen
2018-10-28 23:44:19
cpu

Rotonen
2018-10-28 23:44:39
So no exotic hardware needed

Fireduck
2018-10-28 23:44:50
CPU bundles for GPU hashing

Fireduck
2018-10-28 23:44:55
the gpu has its own mem, why does it need system mem

alistar
2018-10-28 23:44:58
well, there is pci-e dma stuff, like BAR

Rotonen
2018-10-28 23:45:08
gpus don't have enough

mjay
2018-10-28 23:45:13
@alistar

mjay
2018-10-28 23:45:37
@alistar see 2.1 http://www.applistar.com/wp-content/uploads/apps/PCIe%20DMA%20User%20Manual.pdf

Rotonen
2018-10-28 23:45:52
thx

alistar
2018-10-28 23:46:04
there is an access diagram which should tell you how that dance goes - stuff still hits the system memory

Rotonen
2018-10-28 23:46:21
less, but snowblossom is a very evil problem where everything counts

Rotonen
2018-10-28 23:49:03
@alistar also not all gpu execution nodes actually see all of the memory, there's a lot of shuffling within a gpu, it's basically a highly branching tree - http://www.ce.jhu.edu/dalrymple/classes/602/Class13.pdf the first image here is useful for getting at why it's very difficult for what snowblossom is doing

Rotonen
2018-10-28 23:49:17
I see one problem with this kind of effort: once the miner is ready and public, the hashrate will climb a lot, causing some snowstorms until the point where memory mining is not possible anymore

mjay
2018-10-28 23:50:01
that's not a problem, that's an inevitability - also why i'm saying 100kH/s NVMe miners are sustainable

Rotonen
2018-10-28 23:51:05
Makes sense

Fireduck
2018-10-28 23:51:26
That was my plan overall but getting there is weirder than I thought

Fireduck
2018-10-28 23:51:44
@mjay the development would only be too quick if GPUs were wide enough that very high per-executor-unit miss rates didn't matter, which they aren't in regards to width vs. memory bulk

Rotonen
2018-10-28 23:52:14
@Fireduck i'm still waiting for someone to make a *very* wide layered spinny disk raid

Rotonen
2018-10-28 23:52:46
@Fireduck that's the only way i can imagine anyone botnetting this one, but that'd DDoS all the regional ISPs in between while at it

Rotonen
2018-10-28 23:53:40
@mjay a useful way to imagine a GPU is 'could this be solved by duct taping 2^11 pentiums together'

Rotonen
2018-10-28 23:54:11
You would have to cluster the data required by the GPU anyway, there is not much overhead/memory usage on the GPU

mjay
2018-10-28 23:54:41
but each cuda executor would only see like a few tens of megs of memory

Rotonen
2018-10-28 23:54:55
and you cannot really orchestrate sideways within a gpu

Rotonen
2018-10-28 23:55:20
of course you can, it's slow however

mjay
2018-10-28 23:55:33
or if you can, that's some higher-order side effect of the new fused multiply-add they did that i've not yet seen

Rotonen
2018-10-28 23:55:45
yeah, way slower than meaningful

Rotonen
2018-10-28 23:55:47
if all the data is in CPU memory, why does it need more than a few kb?

mjay
2018-10-28 23:55:54
unless there's something missed so far

Rotonen
2018-10-28 23:56:20
if all the data is not on the gpu, why'd you crunch anything on the gpu?

Rotonen
2018-10-28 23:56:32
moving stuff in or out of there is clunky

Rotonen
2018-10-28 23:56:41
The plan was to move the actual hashing to the GPU

mjay
2018-10-28 23:56:49
that's not a bottleneck

Rotonen
2018-10-28 23:57:27
a cuda core has a few tens of megs of very fast seek space, but i have no idea how to orchestrate that miss fest efficiently

Rotonen
2018-10-28 23:58:01
I can do some real-life testing on this one. Move 10GB of random data to GPU memory, and try to access random 16-byte chunks from GPU memory as fast as possible

mjay
2018-10-28 23:58:08
perhaps some of the 'how to pack multidimensional message spaces as efficiently as possible' approaches could yield something for the known-stable packings, but i have no idea where to begin looking into that yet

Rotonen
2018-10-28 23:58:42
my guess is: Pascal - reasonably fast, Volta - very fast

mjay
2018-10-28 23:58:43
@mjay please do, you do actually have a point that what i call 'clunky' might result in better latencies for small seeks

Rotonen
2018-10-28 23:58:56
aka streams vs. blocks

Rotonen
2018-10-28 23:59:35
i keep forgetting @Fireduck was an evil smartypants and made the seek size silly

Rotonen