@Fireduck don't stop with FrostyTrader --- where there's innovation, there's a way; don't get scared off by liability. We should talk it through and find a way to address it; it's probably addressable in some way.
I'm not concerned about civil liability.
It sounds like qtrade is about a month out so makes sense to wait for them
integrated with qtrade?
http://qtrade.io is an exchange that is working on listing snowblossom
@Fireduck the slowdown if the whole field is not available… you had a neat formula… something containing (1/2)^6?
ran out of scrollback in Slack I think :slightly_smiling_face:
well 1/2 assuming 50% available I guess
yeah, just basic probabilities
so is that right, if you had 50% of the field the effective hashrate would be reduced to ~1.56% ?
I think I’m getting about 78% efficiency with 128GB RAM, around 122-123 GB is used for caching the field, so ~95% of the field… ouch :slightly_smiling_face:
I guess I need to try Arktika or w/ever it is to fill out that last 5%
if you have some fraction of the field p, then 1/(p^6) is the prob of being able to check a complete PoW
never mind the 1 over, losing my mind, it's just p^6
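spelled out, since each PoW attempt needs 6 random words from the field (a quick sketch of just the arithmetic, nothing miner-specific):

```java
public class FieldEfficiency {
    // Each PoW attempt reads 6 words from the snowfield; with fraction p
    // of the field cached, all 6 reads hit with probability p^6.
    static double efficiency(double p) {
        return Math.pow(p, 6);
    }

    public static void main(String[] args) {
        System.out.printf("p=0.50 -> %.4f%%%n", 100 * efficiency(0.50)); // 1.5625%
        System.out.printf("p=0.95 -> %.2f%%%n", 100 * efficiency(0.95)); // ~73.5%
    }
}
```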
anyways, the problem with PoolMiner is that it doesn't discard on a cache miss; it goes to disk and fetches it
which doesn't help you much
Arktika can discard it (or more correctly, put it on a layer with no or few assigned threads)
Also, with Arktika you could host those last 6 GB on a separate machine and have them use each other
I guess the idle GPU with 8GB+ could work too
anyone tried GPU yet?
6x skein is going to be slow though, I presume the slowdown is too much given the limited RAM
You'd need to connect enough GPUs to get at least close to 128GB, otherwise p^6 will kill your hashrate
I did a few experiments on this, not an actual GPU, but just loading the needed data from RAM without any hashing, just like it would have to be done when offloading the hashes to a GPU. This was only slightly faster than regular CPU hashing
I guess it's still the memory bandwidth that's limiting here. Had 100% CPU load on my machine, but it actually was the CPU waiting for the memory to respond
Copying all data to the GPU would need even more I/O. I don't think this is faster.
Unless someone can get their hands on a 128GB+ GPU. Is there even such a thing?
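for the curious, a minimal sketch of that kind of fetch-only experiment (field shrunk to 256 MB so it runs anywhere; real offsets come from the previous hash, random ones are just a stand-in):

```java
import java.util.concurrent.ThreadLocalRandom;

public class FetchOnlyBench {
    public static void main(String[] args) {
        byte[] field = new byte[1 << 28]; // 256 MB stand-in for the 128 GB snowfield
        byte[] word = new byte[16];
        ThreadLocalRandom rnd = ThreadLocalRandom.current();

        long attempts = 5_000_000;
        long sink = 0;
        long start = System.nanoTime();
        for (long i = 0; i < attempts; i++) {
            // 6 scattered 16-byte reads per PoW attempt, hashing skipped entirely
            for (int r = 0; r < 6; r++) {
                int off = rnd.nextInt(field.length - 16);
                System.arraycopy(field, off, word, 0, 16);
                sink += word[0];
            }
        }
        double secs = (System.nanoTime() - start) / 1e9;
        System.out.printf("%.0f fetch-only attempts/s (sink=%d)%n", attempts / secs, sink);
    }
}
```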
interesting stuff, and no I don’t think as of now there are GPUs with >32GB
Even 32GB cards are way too expensive, like the V100
plenty of bandwidth if you batch it like arktika does for network access
but that would be some fancy gpu programming
given PCI-e x16 is ~16GB/s it’d just be very expensive RAM methinks :slightly_smiling_face:
can do a pretty insane hash rate with that
oh? isn’t RAM faster? I thought I was seeing ~30GB/s on DDR4
yeah, but you are absolutely CPU bound way before that
ohh right, hmm, that’d be a nice stat to output in the miner
what is it called? memory bandwidth? or more generally, I/O bandwidth?
there is a fake mode in arktika to see what the cpu limit is
I guess I really need to download arktika :slightly_smiling_face:
not sure where I got 30GB/s from I only get 7-8GB/s
I think I get 24 GB/s on a gce instance
well, that’s using `mbw`, whatever that measures :smile:
but not sure
oh maybe that’s what it was, I did test on GCE
do you also get 24GB/s with 16-byte-random-access?
for me these numbers are a lot lower
no but I’d like to know what those numbers are… any tools that can do it? or had to roll your own?
I think the peak was large block memcpy using `mbw`
I just modified the Java miner, about 500MB/s on a 40-core, 4-socket server
this is counting just the 16 bytes of each read
18GB/s with memcpy otherwise
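roughly what that comparison looks like, sketched in Java (buffer size and counts made up; the 500MB/s and 18GB/s above came from the modified miner):

```java
public class BandwidthCompare {
    public static void main(String[] args) {
        byte[] src = new byte[1 << 28]; // 256 MB test buffer
        byte[] dst = new byte[src.length];
        byte[] word = new byte[16];

        // scattered 16-byte reads, counting only the 16 bytes of each read
        java.util.Random rnd = new java.util.Random(42);
        int reads = 10_000_000;
        long sink = 0;
        long t0 = System.nanoTime();
        for (int i = 0; i < reads; i++) {
            int off = rnd.nextInt(src.length - 16);
            System.arraycopy(src, off, word, 0, 16);
            sink += word[0];
        }
        double secs = (System.nanoTime() - t0) / 1e9;
        System.out.printf("random 16B reads: %.1f MB/s (sink=%d)%n",
            reads * 16 / secs / 1e6, sink);

        // one big sequential copy, the memcpy-style number
        t0 = System.nanoTime();
        System.arraycopy(src, 0, dst, 0, src.length);
        secs = (System.nanoTime() - t0) / 1e9;
        System.out.printf("sequential copy: %.2f GB/s%n", src.length / secs / 1e9);
    }
}
```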
wow, quite stark
would that have two (or more) memory channels per socket?
I wonder, is it possible to calculate effective I/O bandwidth if you know the hashrate?
I suppose it’s just 16*6*hashrate
meaning I’m getting about 50MB/s but <80% efficiency (not quite enough RAM) so maybe ~64MB/s is possible for my hardware
so maxing out PCI-e x16 would be a 250x improvement
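spelling that arithmetic out (all numbers taken from the messages above):

```java
public class EffectiveBandwidth {
    public static void main(String[] args) {
        double bytesPerHash = 6 * 16;  // 6 snowfield reads of 16 bytes each
        double hashrate = 520_000;     // ~520 kH/s works out to the ~50MB/s above
        System.out.printf("%.1f MB/s%n", hashrate * bytesPerHash / 1e6); // ~49.9

        double pcieX16 = 16e9;  // ~16GB/s for PCI-e x16
        double ceiling = 64e6;  // the ~64MB/s hardware estimate above
        System.out.printf("x%.0f improvement%n", pcieX16 / ceiling); // 250
    }
}
```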
to get enough memory though I’d need to drop some to x8 and/or x4
or, limit to 2-3 GPUs per motherboard I suppose
gets expensive quickly
This server has 2 memory channels per CPU, that's 8 total. DDR4
nice
With server hardware you can easily connect 8 GPUs, each at x16
this gets expensive, yes :wink:
laugh, yep, plus I’m not keen on having a jet engine in the same room as me :slightly_smiling_face:
You don't sleep next to these things
or: you can't.
haha, yes… I don’t intend to, but the thing is, using a couple of appliances + a rig in one part of the house will trip the breaker… so I run it in a part of the house that, while a separate room, is still audible at night with the door open… thin walls X)
glad I have a cellar to do this
thick walls, several doors, always cool
also very humid, but hey :smile:
maybe the hardware will be obsolete before it corrodes too badly :slightly_smiling_face:
keep it hot and the relative humidity is lower
@mjay one would have to do something silly like writing an in-GPU NVMe direct driver, but that'd still not be significantly quick
@Rotonen Imagine the GPU working on a bunch of PoW units and filling a buffer with the offsets of the words it needs from the snowfield.
Then the CPU side reads that buffer in chunks and fills in those words in another buffer and sends the results back.
Basically you can offload the hash processing to the GPU while making use of the CPU's memory bus bandwidth.
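the CPU side of that could look something like this (a sketch only; the buffer layout, batch size, and how buffers move to/from the GPU are all assumed, not an actual miner API):

```java
import java.nio.ByteBuffer;
import java.nio.LongBuffer;

public class GpuGatherSketch {
    static final int WORD_SIZE = 16; // bytes per snowfield word

    // field: the snowfield (or the cached slice of it) in host RAM
    // offsets: the word offsets the GPU asked for, one per pending read
    // out: the gathered words, to be shipped back to the GPU
    static void gather(byte[] field, LongBuffer offsets, ByteBuffer out) {
        while (offsets.hasRemaining()) {
            // nothing but scattered reads here: the CPU spends its memory
            // bus on gathering while all hashing stays on the GPU
            out.put(field, (int) offsets.get(), WORD_SIZE);
        }
    }

    public static void main(String[] args) {
        byte[] field = new byte[1 << 26]; // 64 MB stand-in for the real field
        int batch = 65_536;               // reads in flight per batch (made up)
        LongBuffer offsets = LongBuffer.allocate(batch);
        for (int i = 0; i < batch; i++) {
            offsets.put((long) (Math.random() * (field.length - WORD_SIZE)));
        }
        offsets.flip();
        ByteBuffer out = ByteBuffer.allocate(batch * WORD_SIZE);
        gather(field, offsets, out);
        System.out.println("gathered " + out.position() / WORD_SIZE + " words");
    }
}
```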
i have imagined this
it has not turned me on
haha, then what do you hang around here for? :wink:
8TB fields
ok, good
I have the 4tb field generated
@mjay if you squeeze more than 300kH/s per memory channel from RAM, do tell
high end NVMe goes up to close to 200kH/s per disk
-> for now AMD EPYC is the thing
@Fireduck I'm prepared to seed a lot more
make sure you have the things there, especially chunked for 7,8,9
@Fireduck I haven't set up any of the chunked ones yet, nor monitors for them. I may have to buy some storage if we count those. Which is fine. I've been looking for an excuse to buy more drives. It'll triple my storage.
I don't think your monitors exist
gradle gradle gradle, I made you out of clay
Why are there so many terrible build systems
Would you expect the following to make a shortcut that will open node.sh?
```
[Desktop Entry]
Encoding=UTF-8
Type=Application
Name=Snowblossom node
Name[en_GB]=Snowblossom node
Icon=/home/thx11384eb/Downloads/snowflake_logo.png
Exec=/home/thx11384eb/snowblossom-1.3.0/node.sh
Comment[en_GB]=
Terminal=true
StartupNotify=true
X-KeepTerminal=true
```
When I click on the shortcut, the cursor gets a sand timer but no terminal window opens
I have no idea what that configuration notation is. Some sort of window manager specific thing? It looks reasonable. The only thing suspect is that node.sh assumes you are in the working directory that node.sh is in. Maybe specify a working directory or put a 'cd' at the start of node.sh?
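for example, the Desktop Entry spec has a `Path=` key for the working directory; something like this might do it (untested sketch, paths copied from above):

```
[Desktop Entry]
Type=Application
Name=Snowblossom node
Icon=/home/thx11384eb/Downloads/snowflake_logo.png
Exec=/home/thx11384eb/snowblossom-1.3.0/node.sh
Path=/home/thx11384eb/snowblossom-1.3.0
Terminal=true
```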
that's windows `.lnk`
i guess
yeah, but it is a linux path
gnome?
`https://<redacted>/video/search?search=gnome`
The movie I linked is not worth searching for, watching or buying
@THX 1138 4EB It should work, but you'd have to install snowblossom as your own user.
@Fireduck @THX 1138 4EB So typically the scripts I provided defaulted/leaned towards creating a dedicated system user/service. This favors servers and such... but I should probably improve upon the two avenues of install: headless and headed, so GUI/Windows/Ubuntu users can point and click and more easily run it as a GUI application.
I think I'll build that into my Python GUI run.
@Clueless systemd has user services
I usually run new suspicious crypto coins in a terminal under screen