I made stupid mistakes yesterday
the address history work doesn't respect reorgs
and could show things that are not in the current main chain
which is no good
Anyone running my address history code from last night should do a db nuke and recreate
with addr_index
if not using addr_index, there is no difference
@Fireduck good thing I missed that you have to do addr_index
hah
you don't need it unless you have an explorer attached
Have you guys started working on a BTC style RPC for exchanges to use?
do exchanges sponsor that sort of work?
@Shoots not yet. My hope is to get in touch with an exchange and get a rough set of requirements to work towards
rather than guessing what they want
I'm pretty sure all exchanges look for an RPC that mirrors BTC
sure, but there are a bunch of questions. Do they want one wallet per customer? Do they want notifications or are they happy to poll when a user logs in?
@Fireduck if you wanna raise eyebrows and make a splash, don’t do cointopia as the first one
I am aware of that terrible json rpc protocol
terrible maybe, but thats how most if not all exchanges handle RPC
sure, and it has about 100 calls. It would be good to know which ones anyone cares about.
@Rotonen why is cointopia some sort of list-anything-that-moves exchange?
These guys seem like they're pretty decent about getting back to devs https://crex24.com/
Are they going to be irritated when I answer "What listing fee do you propose to pay?" with "Nothing" ?
Or is that a gate question to eliminate terrible scams
heh
doesn't really matter
not sure why they would be irritated.
respond with "We are an honest hard working community that believes projects started with no premine deserve a free listing."
DEFT just got listed on that exchange and it actually seems decent. The guy they were dealing with seemed to be responding very fast and they actually set a trading launch date that they hit.
nice
alright, I'm all over it
@Fireduck 'not even on cointopia yet' is a sign of freshness of new currencies
@Fireduck `INFO: Mining rate: 0.000/sec - at this rate ∞ hours per block` is common when not having many miners, how does it decide that? it also claims to be stalled
I should fix that
```
INFO: Mining rate: 5-min: 1.007M/s, 15-min: 997.312K/s, hour: 808.569K/s
Jun 26, 2018 10:18:53 PM snowblossom.miner.MrPlow printStats
INFO: Mining rate: 0.000/sec - at this rate ∞ hours per block
Jun 26, 2018 10:18:53 PM snowblossom.miner.MrPlow printStats
INFO: we seem to be stalled, reconnecting to node
```
It is assuming no shares in 20 seconds is a problem
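The heuristic described above can be sketched as follows. This is an illustrative Python sketch of the assumed behaviour, not the actual MrPlow code; the function name and time handling are hypothetical:

```python
# Sketch of the stall heuristic: if no share has arrived within the
# last 20 seconds, treat the connection as stalled and reconnect.

STALL_SECONDS = 20  # threshold mentioned in the chat

def is_stalled(last_share_time, now):
    """Both arguments are timestamps in seconds."""
    return (now - last_share_time) > STALL_SECONDS

is_stalled(100.0, 115.0)  # -> False (share 15s ago, within threshold)
is_stalled(100.0, 130.0)  # -> True  (30s without a share)
```

With few or slow miners on the pool, 20 seconds between shares is entirely normal, which is why the "stalled" message fires so often.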
well, especially with the difficulty ramp up creep issues, that'll jam things up a bit
what's the cost of it hitting that, actually?
None
good, just from time to time wondering if there is actually something wrong with the pool as relatively many people try it a bit and leave
grepping the logs i've had 100+ addresses on within 24h, but like 99% of the shares are mine
i'll go with 'most people are confused'
what pool do you run?
snowplough?
yeah, snowplough is mine
I mine on that pool
but it's my gaming machine, so I kill the miner often
by "my gaming machine" I mean "my only computer"
i guess you're the frequent 100k range miner then
yup
you do about the same as i do, throw spare capacity at it
though mine bursts to the 30M range if everything is idle (but this essentially never happens, as my infra is mostly busy doing actual work)
the one i have constantly there just to show there is any mining going on is actually slowly testing different combinations of jvm and miner parameters
haha, nice
so far nothing is out of the ordinary very much
but i actually reach the same terminal mining speed without memfield as with memfield, it just takes like 2 days
so memfield and hybrid actually only mitigate a slow warmup
the max depth parameter will make a difference
if you can't fit the entire field into memory
what's that actually controlling and is that on a branch?
that causes the miner to drop a piece of work on the floor if it has to go to disk early in the hash chain
each POW has 6 hashes (as you know)
so if you set it to 2, then if the first or second hash can't be served from memory, it will drop it on the floor
i think i'd rather let it pull new stuff into the cache - that'll average out over time
but i suppose if you precache as in hybrid, that'd then pull more hashes, but decrease the collision chance?
this is for miners who can't cache the entire snowfield
is that actually a good thing?
yeah, but that'll kink the search distribution, right?
yeah, but who cares?
i'm not convinced that's actually conducive to block hit rates
when I had 51gb of 64gb in memory it bumped my hashrate from ~110k to ~230k
gotta run
ttyl
sure, but quality of collision attempts makes that apples to oranges
nah, it's just trading extra CPU for less disk io
or rather you're trading entropy quality off for density
rather do zram or something for that
the miner itself gets bumped into zram first
and that compresses well, also thread overlaps do
if you do that across many miners, and you can have offsets for the precache range, and you have enough miners to cover the whole field, that will work
but that'd require the pools to orchestrate that and the miners to ask for allocation ranges from the pools
so all in all i do see a use for the feature
that'd actually allow everyone to do ram-only or hybrid, but share value counts get a lot more complicated
the case can be made pretty easily using an argument ad absurdum
imagine you have 1GB of storage with almost unlimited throughput (like 1pb/sec), and all other storage is extremely low throughput (like 10 bytes/sec).
and you have 10,000 cores
it will be more effective to use all those cores to search for a POW which does all 6 hashes within the 1GB of ultra-fast storage
than it would be to use the 10 byte/sec throughput
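The rough arithmetic behind this argument, assuming each of the 6 PoW lookups lands uniformly at random in the snowfield (an assumption for illustration, not a claim about the actual distribution):

```python
# If a fraction f of the snowfield is cached in fast memory and each
# of the 6 lookups is uniform over the field, the chance that an
# attempt stays entirely in memory is f**6.

def all_in_memory_prob(cached_fraction, hashes=6):
    return cached_fraction ** hashes

# e.g. with 51 GB of a 64 GB field cached (as mentioned above),
# roughly a quarter of attempts need no disk at all:
p = all_in_memory_prob(51 / 64)
print(round(p, 3))
```

So even a modestly incomplete cache lets a CPU-rich miner complete many attempts at memory speed, while the rest can be dropped instead of waiting on slow storage.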
Miners are already segregated with separate nonce prefixes
WTB one tboone machine
BTW, @Rotonen, the feature is in master, but I don't know if it is in a release. you can test it out by using min_depth_to_disk=<x> in your miner's config file.
it's only applicable if using memfield_precache_gb for some amount smaller than the entire snowfield
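Putting the two settings together, a miner config fragment might look like this. The parameter names are the ones given above; the values are hypothetical and should be sized to your machine and the current snowfield:

```
# miner config sketch (values are examples, not recommendations)
# min_depth_to_disk only matters when memfield_precache_gb is
# smaller than the entire snowfield
memfield_precache_gb=48
min_depth_to_disk=2
```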