@Fireduck can we mark the now obsolete snowfields 0 to 6 as such on the torrent page?
is snow on any exchange?
not yet
http://qtrade.io is working on it
Is it normal that when I launch client version 1.3.2, I get this message: FAILED TO INITIALIZE LOGGING: java.io.FileNotFoundException: configs\logging.properties? And indeed, there's no logging.properties file as there was in 1.3.0.
Forgot to add, it's working, it gives me my balance and an address, but I have that message at the beginning
I also have this message, everything is working as intended
I started the poolminer for 15 mins. hash power seems 0 ...
INFO: 1-min: 0.000/s, 5-min: 0.000/s, hour: 0.000/s
Aug 27, 2018 11:06:21 PM snowblossom.miner.PoolMiner printStats
INFO: we seem to be stalled, reconnecting to node
Aug 27, 2018 11:06:21 PM snowblossom.miner.PoolMiner subscribe
INFO: Subscribed to work
Aug 27, 2018 11:06:21 PM snowblossom.miner.PoolMiner printStats
INFO: Shares: 0 (rejected 0) (blocks 0)
Aug 27, 2018 11:06:22 PM snowblossom.miner.SnowMerkleProof readWord
INFO: pre-caching snowfield: loaded 4 gb of 7 (55%)
Aug 27, 2018 11:06:36 PM snowblossom.miner.PoolMiner printStats
INFO: 15 Second mining rate: 0.000/sec - at this rate ∞ minutes per share (diff 22.000)
Aug 27, 2018 11:06:36 PM snowblossom.miner.PoolMiner printStats
INFO: 1-min: 0.000/s, 5-min: 0.000/s, hour: 0.000/s
Aug 27, 2018 11:06:36 PM snowblossom.miner.PoolMiner printStats
INFO: we seem to be stalled, reconnecting to node
Aug 27, 2018 11:06:36 PM snowblossom.miner.PoolMiner subscribe
INFO: Subscribed to work
Aug 27, 2018 11:06:36 PM snowblossom.miner.PoolMiner printStats
INFO: Shares: 0 (rejected 0) (blocks 0)
Aug 27, 2018 11:06:38 PM snowblossom.miner.SnowMerkleProof readWord
INFO: pre-caching snowfield: loaded 5 gb of 7 (69%)
It is still pre-caching the snowfield. What kind of storage are you using?
how much ram?
you won't get a lot of hashrate with this config
you need to place the snowfield on a ssd or nvme drive
otherwise it will take a long time to precache (as you experienced), and will end up with 30 hashes/sec or less
ok, I have another 250g ssd. how much hashrate could I get with that?
depends on the ssd. Somewhere between 20 and 200kh/s
well, that's a huge difference.
some nvme drives do pretty well on this
you can also run several in parallel to get more hashes
ok, thanks
970 evo is great
you´re welcome
I get about 70kh/s from mine
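If you want to gauge a drive before committing, you can time random 16-byte reads against the snowfield file, since that is the miner's access pattern (6 such reads per hash, per the math later in this thread). A minimal single-threaded sketch; the file path is an assumption and this is not the miner's actual code:

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.util.concurrent.ThreadLocalRandom;

public class SnowReadBench {
    public static void main(String[] args) throws Exception {
        // point this at your snowfield file (hypothetical path)
        try (RandomAccessFile raf = new RandomAccessFile("snow/snowblossom.7.snow", "r")) {
            FileChannel chan = raf.getChannel();
            long words = raf.length() / 16; // the field is read as 16-byte words
            ByteBuffer buf = ByteBuffer.allocate(16);
            int reads = 1_000_000;
            long start = System.currentTimeMillis();
            for (int i = 0; i < reads; i++) {
                buf.clear();
                chan.read(buf, ThreadLocalRandom.current().nextLong(words) * 16);
            }
            double sec = (System.currentTimeMillis() - start) / 1000.0;
            // each hash needs 6 reads, so reads/sec divided by 6 approximates
            // the single-thread hashrate ceiling for this drive
            System.out.printf("%.0f reads/sec, ~%.0f H/s ceiling per thread%n",
                reads / sec, reads / sec / 6.0);
        }
    }
}

Multiply by a plausible thread count to estimate what the drive can do in total.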
torrent for chunked 10 is up
so far that's 10TB of torrents
I can handle that. :)
which ones are downloaded most?
snowblossom.7, since that is the active field right now
@Clueless thinks we should completely replace all social media platforms with something decentralized
yes
The plan we have seems solid, except I can't see a way to prevent people from basically doing a DoS on a channel by registering an avalanche of bullshit peers for an item
Shouldn't be a problem for long-running channels with an active community, since they should already have a peer list for that channel, so it doesn't matter what the directory nodes have to say
but people are basically terrible and are always trying to silence each other so this will absolutely be a thing
@Clueless look up mastodon
@mjay i get 160k on a 960 pro, 200k with unsafe kernel patches, 200k is what i get per memory channel with ram
@mjay and 1h rate after 2h
also, precache only helps if you have other things trying to fill your fscache; on an otherwise idle system it's unnecessary
@Rotonen what unsafe kernel patches are you using?
He just comments out lines of code that offend him
i rolled my own for fun early on, but those are *actually* crashy
basically anything that says CHECK(...)
yep, pretty much
I did something similar actually :smile:
introduced memory leaks too
I manually inlined some functions
crappy looking code for a few % performance
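To illustrate what that means: take a tiny hot helper and paste its body into the call site, so nothing depends on the JIT's inlining decisions. Names here are made up for illustration, not snowblossom code:

// before: helper called once per word in the hot loop
static long mix(long a, long b) { return (a ^ b) * 0x9E3779B97F4A7C15L; }
static long hashWords(long[] words) {
    long h = 0;
    for (long w : words) h = mix(h, w);
    return h;
}

// after: same logic inlined by hand, uglier but with zero call overhead
static long hashWordsInlined(long[] words) {
    long h = 0;
    for (long w : words) h = (h ^ w) * 0x9E3779B97F4A7C15L;
    return h;
}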
the way to go would be to write nvme direct userspace code into the miner and let it use a whole device
IIRC cavium was investigating that for memory-wide HPC a while back
can this be done in java actually?
well, sorta
Does it do something other than 4k reads?
not really
it just gets the filesystemy bits out of the way as much as possible
yeah...that does seem to be the trouble
cool
you should be able to just dd an image to a drive and blindly read per pointer?
Hmm .. if I remove the snowblossom.7.snow and replace it with a block device pointing to an nvme ssd containing the data directly .. I'll try
or just wait for heterogenous memory to land and extend ram onto nvme
or use arktika and build a ram fleet
it seems that 64gb nodes are the sweet price point right now, at about $1000 each
well, everything is a file in unix, should work
that’ll clock in at $1000/Mh?
I'm not sure. I suspect if you have a fleet of 3 64gb nodes and spread field 7 over them, you should be able to support about 3 MH/s
but that will depend on having a bunch of cores on processing nodes attached to the network
don't need memory on the processing nodes, just enough to run the OS
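Spelling out the arithmetic: 3 nodes at roughly $1000 each is ~$3000 for ~3 MH/s, so about $1000 per MH/s, which matches the estimate above.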
seems some ddr3 is actually cheap currently
Right now my old as hell Dell R900 is supporting about 1 MH/s
ddr3 is a problem, I can't get higher density than 32gb per motherboard as far as I can tell.
ddr3 registered is even cheaper if you buy used, and old server boards are also
yeah, half to quarter the price
also server gear usually gives you more memory channels per core
like the dl180 g7
@mjay i guess you’ll symlink the device to the snow field location for your experiment?
That's the plan. I've already dd'd the file to the device
dunno how the nvme root device likes the dd to begin with
I don't boot from nvme, shouldn't be a problem
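For anyone following along, the experiment is: dd the field onto the raw device, then point the miner at the device instead of the file. Since a block device is just another file on unix, positional reads work the same way; a minimal sketch (the device name is an assumption, and note that the file-length call may not be reliable on a block device, which is what the patch discussed below addresses):

// one-time copy, destroys whatever is on the device:
//   dd if=snow/snowblossom.7.snow of=/dev/nvme0n1 bs=1M

import java.io.RandomAccessFile;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;

public class BlockDeviceRead {
    public static void main(String[] args) throws Exception {
        try (RandomAccessFile raf = new RandomAccessFile("/dev/nvme0n1", "r")) {
            FileChannel chan = raf.getChannel();
            ByteBuffer word = ByteBuffer.allocate(16);
            chan.read(word, 12345 * 16L); // read the word at an arbitrary index
            System.out.println("read " + word.position() + " bytes");
        }
    }
}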
i’d rather get the i3-8300T, 35W and plenty beefy enough
500$ for the ram is a lot
That is what it costs, as far as I can tell
@Fireduck the T suffix is meaningful
As far as I can tell the 'T' means unavailable :wink:
i got a 7700T, they usually have like 2 week shipping times unless you buy from a specialist vendor
the 8300T is in stock at my local vendor
Whenever they are close enough, I always go with AMD to make sure Intel has a competitor
the 35W AMD parts are not close enough, IMO
but i like my systems fanless
fair enough
file vs block device: Almost the same speed, maybe half a percent faster
cool
I wasn't sure that was going to work
couldn't remember if any of the read logic used the total file length as part of the calculation
@Fireduck It is used, as you expected, but it's easy to patch
total_words = snow_file.length() / SnowMerkle.HASH_LEN_LONG;
Just put the number in there
I should really have that get it from the NetworkParams
which is what I think I have arktika do
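A sketch of what getting it from the NetworkParams could look like, in place of dividing the file length; the accessor names are assumptions, not checked against the repo:

// instead of: long total_words = snow_file.length() / SnowMerkle.HASH_LEN_LONG;
// derive the word count from the declared field size, so a block device
// (where length() is unreliable) works too (assumed accessors):
long field_bytes = params.getSnowFieldInfo(selected_field).getLength();
long total_words = field_bytes / SnowMerkle.HASH_LEN_LONG;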
How’s the exchange going
I have some questions about pool mining and arktika. First, I have the client, node and pool miner installed on a windows workstation. I am planning on using the windows workstation as my wallet. I currently have 2 shares mining on this off of the HDD, for fun, to get started. How do I access the CLI to check my balance?
Next, I am about to install the pool miner on a server and mine from RAM. For this I don't need the client and the node for pool mining, correct?
Next, to start installing arktika on this server when I have the field 7 chunk downloaded, where do I store the arktika node.conf files? I'll start with that, thanks for your help
@aikida3k run `client.bat` (assuming you're already running a node)
It comes up with an unused address, says to press any key and then exits
@aikida3k ah, open up the batch file and change it to "balance", and play with it. :P
@Clueless you're telling him how to use the wallet client, he's after how to set up arktika
i guess @Fireduck is the only one who has set one up so far?
@Rotonen "How do I access the CLI to check my balance?" :P
ok, I also see now it looks like there is a total balance below the timestamp and above the number of wallet keys available
i too, apparently, skip lines too much
@Rotonen I didn't even notice the arktika question
well... both actually
now this just got silly :smile:
@aikida3k I'm in the middle of something, I'll come back with some answers. ;)
ok that's cool. thanks
probably 10-20 min
no rush
@Rotonen If I mine to my windows workstation address and just install the pool miner on my server, I don't need the client or node installed on the server, correct?
correct
for the solo miner you need a node, for the pool miner you just need either a local wallet file or an address
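So a minimal pool miner config is just a few lines; the key names below follow the configs pasted later in this thread, plus a couple (snow_path, threads, memfield) that should be treated as a sketch rather than gospel:

# miner.conf
pool_host=http://snowday.fun
mine_to_address=addr_here
snow_path=snow
selected_field=7
threads=8
# on a box with enough ram, load the whole field into memory:
memfield=true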
So for the arktika conf file, that probably gets stored in the same location as the pool miner conf file
@aikida3k I'm going to take another look at it since it's been awhile and I want to make sure it's friendly
alrighty
@Clueless So the questions I have now are:
1) is the arktika node conf file stored in the same location as the pool-miner conf?
2) Does the pool-miner.conf need to be deleted to mine with arktika?
3) Should the memory server in an arktika configuration have 2 layers: one layer for RAM (mem) and one layer for HDD (file)?
4) In the 3 nodes and 1 worker example configuration, why does the worker.conf initialize each layer node as having 30 threads, but in the node.conf files node1.conf has 6 threads for layer_0 and 1 thread for all other nodes, except the worker node layer_3, which is set at zero?
1) one per arktika node 2) no 3) whatever, however, per arktika node 4) to demonstrate that you can, and should, configure them appropriately for whatever you run them on
1) yes 2) no 3) yes 4) I don't know
4) to do some mining on the memory nodes
okay, more on 4): So worker.conf sets layer_0 through layer_2 with 30 threads and layer_3 threads at 0, to tell it to use its threads to mine against the memory on the other nodes.
So on the node1.conf why not set layer_0_threads to 30 instead of 6?
there is a trade off
If the memory node is using all the cpu to mine, it won't be able to serve requests as well
so there is some sort of middle ground to find
there's a benchmark mode in arktika, try to find your sweet spot with that
ok. And I store the worker.conf file on the server I store the chunked snowfield on, correct?
The one with the big HDD
All nodes that do any mining need access to all the chunks
To do the proof work for a share
Doesn't need to be fast
NFS is fine
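So each mining node can carry a slow file layer pointing at an NFS mount of the chunk directory, just for the occasional proof reads; the mount path here is hypothetical:

layer_3_type=file
layer_3_path=/mnt/nfs/snow
layer_3_threads=0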
So I have a 1.2 TB SAS HDD with 288 GB RAM, so in an arktika configuration with two 64GB RAM nodes, the configs would look like:
worker.conf:
mine_to_address=addr_here
pool_host=http://snowday.fun
layer_count=4
layer_0_type=remote
layer_0_range=0,63
layer_0_threads=30
layer_0_host=10.138.0.2
layer_1_type=remote
layer_1_range=64,127
layer_1_threads=30
layer_1_host=10.138.0.3
layer_2_type=mem
layer_2_range=0,127
layer_2_threads=30
layer_3_type=file
layer_3_path=/home/arktika/snow
layer_3_threads=0
selected_field=7
node3.conf:
mine_to_address=addr_here
pool_host=http://snowday.fun
layer_count=4
layer_0_type=remote
layer_0_range=0,63
layer_0_threads=1
layer_0_host=10.138.0.2
layer_1_type=remote
layer_1_range=64,127
layer_1_threads=1
layer_1_host=10.138.0.3
layer_2_type=mem
layer_2_range=0,127
layer_2_threads=6
layer_3_type=file
layer_3_path=/home/arktika/snow
layer_3_threads=0
selected_field=7
node2.conf:
mine_to_address=addr_here
pool_host=http://snowday.fun
layer_count=4
layer_0_type=remote
layer_0_range=0,63
layer_0_threads=1
layer_0_host=10.138.0.2
layer_1_type=mem
layer_1_range=64,127
layer_1_threads=6
layer_2_type=remote
layer_2_range=0,127
layer_2_threads=1
layer_2_host=10.138.0.4
layer_3_type=file
layer_3_path=/home/arktika/snow
layer_3_threads=0
selected_field=7
etc for node1.conf
Correct?
you have the ram, just mine in ram with the pool miner for now
and arktika is for involving multiple computers over a lan into a single miner
Yeah, I don't have the chunks downloaded either, but I have the other servers I would like to use
also, the poolminer does not know how to use the chunked snowfield
if your servers have the ram, just pool mine individually on them for now
Anyways, I think I get it, thanks @Rotonen, @Fireduck, @Clueless
the next hurdle you'll climb is figuring out suitable thread counts for your hardware
I use arktika with a 256gb server to share the chunks to all my other computers
and max out all their cpus
what network type are you using for this?
but your 256 GB server mines, too, right?
Gigabit ethernet
@Fireduck, what network type are you using?
I can get my hands on a couple of infiniband cards for cheap, if that would make any sense
Gige
oki, no need to invest then I guess
roughly 1 gbit/s = 1 MH/s
for one nonce this is 6*16 bytes + .. 6*37 bits for the request?
96MB/s for 1MH/s, numbers add up :smile:
Request is a big bundle
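Spelling out the payload math: 6 reads per hash × 16 bytes = 96 bytes of field data per hash, so 1 MH/s moves 96 MB/s ≈ 0.77 Gbit/s before overhead; add the ~37-bit word indexes mentioned above plus protocol framing and TCP/IP headers, and you land right around the 1 Gbit/s per MH/s rule of thumb.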
i'd have a 10gig fibre to the premises, but I cannot run noisy hardware, so no SFP+ in house until someone finally makes something sensible
the macchiatobin is close, but waiting for this one https://www.crowdsupply.com/traverse-technologies/five64 Quad-core ARM64 Networking Platform with Mainline Linux Support
@aikida3k So, in the release zip there should be a file `cmd_here.bat`; run that and you'll have a cmd open, ready for commands
try `client balance` or `client help`
so that's gotta be in 1.3.2, not 1.3.0
i'll try that later this evening