the pool now has these in the logs ```Feb 04, 2020 9:56:16 AM snowblossom.miner.plow.MrPlow$BlockTemplateEater onError INFO: Got error:io.grpc.StatusRuntimeException: UNAVAILABLE: Channel shutdownNow invoked```
it is getting the block info too, but something is erroring out
Yeah, it is messily creating new connections
Which is what I was trying to fix when I introduced that bug
frying pan -> fire
yeah, so I did a simple revert of that
working on a better fix now
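the failure mode is basically this: the reconnect tears down the old channel while the template stream is still attached (names and ports here are made up, not the actual MrPlow code)
```java
import io.grpc.ManagedChannel;
import io.grpc.ManagedChannelBuilder;
import io.grpc.stub.StreamObserver;
import java.util.logging.Logger;

// Minimal sketch of the failure mode only; the observer is a stand-in and
// the stub call that would register it is elided.
public class TemplateStreamSketch
{
  private static final Logger logger = Logger.getLogger("sketch");

  public static void main(String[] args)
  {
    ManagedChannel channel = ManagedChannelBuilder
      .forAddress("localhost", 2338) // hypothetical node host/port
      .usePlaintext()
      .build();

    // The observer that eats block templates; onError is where the
    // "Channel shutdownNow invoked" message surfaces in the pool log.
    StreamObserver<Object> templateEater = new StreamObserver<Object>()
    {
      public void onNext(Object template) { /* hand template to miners */ }
      public void onError(Throwable t) { logger.info("Got error:" + t); }
      public void onCompleted() { }
    };

    // ... an async stub would register templateEater on this channel ...

    // Reconnect code that drops the old channel like this cancels every call
    // still attached to it, so the observer gets UNAVAILABLE instead of a
    // clean handoff to the new channel.
    channel.shutdownNow();
  }
}
```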
i can do low impact testing as essentially no one mines on my pool
it took me stupidly long to get my unit test for this case working
turns out I was running the MrPlow maintenance loop from main() so my tests were not running it at all
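the fix shape is just moving the schedule out of main() and into the object itself, so anything that constructs it gets the loop for free; hypothetical names, not the real class
```java
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

// Sketch of the idea, not the real MrPlow: the maintenance task is scheduled
// when the object is constructed, so a test that does new MrPlowLike(config)
// exercises it instead of only the main() path.
public class MrPlowLike
{
  private final ScheduledExecutorService exec =
    Executors.newSingleThreadScheduledExecutor();

  public MrPlowLike(Object config)
  {
    // Previously this schedule lived in main(), so unit tests that built the
    // object directly never ran the maintenance loop at all.
    exec.scheduleAtFixedRate(this::runMaintenance, 0, 30, TimeUnit.SECONDS);
  }

  private void runMaintenance()
  {
    // prune stale connections, refresh templates, etc.
  }

  public void stop()
  {
    exec.shutdownNow();
  }

  public static void main(String[] args)
  {
    new MrPlowLike(null); // main just constructs; the loop is already running
  }
}
```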
rough
i suppose it wouldn’t be too fun to blackbox/smoketest that by controlling the processes and asserting on the log output?
mostly the data setup would be hard to build, and it still wouldn’t be realistic
hmm, maybe build a log parser with a whitelist and have it alert you to every unexpected output? and just leave stuff running?
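something like this: read the log, flag anything that doesn’t match an expected pattern (the patterns here are made up)
```java
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.List;
import java.util.regex.Pattern;

// Rough sketch of the whitelist idea: anything in the log that doesn't match
// an expected pattern gets printed as an alert. Patterns and path are made up.
public class LogWhitelist
{
  private static final List<Pattern> EXPECTED = List.of(
    Pattern.compile("Got block template.*"),
    Pattern.compile("Share accepted.*"),
    Pattern.compile("Connected to node.*"));

  public static void main(String[] args) throws IOException
  {
    for (String line : Files.readAllLines(Paths.get(args[0])))
    {
      boolean known = EXPECTED.stream().anyMatch(p -> p.matcher(line).find());
      if (!known)
      {
        System.out.println("UNEXPECTED: " + line);
      }
    }
  }
}
```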
For most of my modules, it is easy to start them up with just new Thing(config)
and then I can use methods on the thing to check on its state
so in the case of the mining test, I start up a node, a MrPlow and two miners
then I check with a client connected to the node that the miners are making funds
that’s already better orchestrated than i was assuming
then I call the shutdown methods, which may or may not work
but regardless, junit/bazel cleans it up eventually
There are almost certainly some lingering thread pools and such, but not a big deal
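roughly the shape of it; these inner classes are empty stand-ins, the real Node/MrPlow/Miner/Client take a config and do the actual work
```java
import org.junit.Assert;
import org.junit.Test;

// Illustrative shape of the integration test described above. The inner
// classes are empty stand-ins, not the real snowblossom classes.
public class MiningPoolTestSketch
{
  static class Node   { Node(String conf) {} void stop() {} }
  static class MrPlow { MrPlow(String conf) {} void stop() {} }
  static class Miner
  {
    Miner(String conf) {}
    String getAddress() { return "addr"; }
    void stop() {}
  }
  static class Client
  {
    Client(String conf) {}
    boolean waitForBalance(String addr, long timeoutMs) { return true; }
  }

  @Test
  public void minersEarnFunds() throws Exception
  {
    Node node = new Node("node.conf");
    MrPlow plow = new MrPlow("plow.conf");
    Miner m1 = new Miner("miner1.conf");
    Miner m2 = new Miner("miner2.conf");
    Client client = new Client("client.conf");

    // Check via a client on the node that both miners are accumulating funds.
    Assert.assertTrue(client.waitForBalance(m1.getAddress(), 60_000));
    Assert.assertTrue(client.waitForBalance(m2.getAddress(), 60_000));

    // Shutdown is best-effort; junit/bazel reaps whatever lingers.
    m1.stop(); m2.stop(); plow.stop(); node.stop();
  }
}
```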
new version is looking good, but the node will need to be updated first
There is a new updatable template grpc
other than one of my testnet nodes being way out of date, it seems to be working fine
does only the pool depend on the block template?
The old solo miner does
which is why I am leaving in the old rpc
not a lot of code
rather kill the solo miner than carry cruft?
anyone mad enough to think they can do it might as well write their own miner instead
The cruft is about 10 lines, I'm not worried about it
so it starts
removing code is like the best thing ever
When I was at Amazon I did a project where I was able to delete a few packages totaling about 25k lines of stupid. That was good.
i'd love to see the process red tape for that :smile:
Basically none. The previous packages were a dumpster fire.
My replacement worked faster, more reliably and with about half the hardware
But this was a long time ago, probably 2010 or so
maybe it's different when you're in-house; as an external, the red tape was thicker than at any government i've dealt with in similar circumstances
my experience is from 2014
It was a relatively new cloud product at the time (SNS)
used that in 2012 to pass turns around in a turn-based iOS game
the pricing model was ripe for abusing the hell out of it
good
The parts I replaced had so much insane broker indirection that it sometimes delivered messages to the wrong queue (owned by a different customer)
off by one? :smile:
timing issue, grab a broker from a pool, assign it to an endpoint, use it
something like that, but with some sort of really stupid race condition
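grossly simplified it was something like this (nothing here is actual SNS code): whoever assigns the broker last wins, and the other message lands on a queue owned by someone else
```java
// Grossly simplified illustration of an assign-then-use race on a shared
// broker; purely illustrative. If the pool hands the same broker to two
// callers, the last assign() wins and one message goes to the wrong queue.
public class BrokerRaceSketch
{
  static class Broker
  {
    private volatile String endpoint;
    void assign(String ep) { endpoint = ep; }
    void send(String msg)  { System.out.println(msg + " -> " + endpoint); }
  }

  public static void main(String[] args) throws Exception
  {
    Broker broker = new Broker(); // imagine the pool handed this out twice

    Thread a = new Thread(() -> {
      broker.assign("customerA-queue");
      broker.send("A's message");
    });
    Thread b = new Thread(() -> {
      broker.assign("customerB-queue");
      broker.send("B's message");
    });

    a.start(); b.start();
    a.join(); b.join();
    // Depending on timing you can see "A's message -> customerB-queue",
    // i.e. delivered to a queue owned by a different customer.
  }
}
```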
as a business with cash to burn in getting a position in a new market, it's not a bad idea to go live with something which is barely holding together
if the business side of it makes any sense, then just burn more cash to redo it
yeah, probably true
there are not many companies on the planet where that makes sense
but amazon certainly has the cash
like how S3 implemented reduced redundancy storage at first
Initially, it was just a bit you set that charged you less money
And they would (I imagine) run a query occasionally to see how much they would save by implementing the actual feature
yep
also there's the classic category of not building some expensive, rarely used doohickey in tech, and instead dedicating people to act behind the curtain
hah, yeah
probably going to be better and more accurate than the initial implementation would be anyway :smile: