partial memory mapping - C would be your friend there
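roughly what that looks like with `mmap(2)` - a minimal sketch (untested), with the file name and window size made up for illustration and error handling kept minimal:
```
/*
 * Sketch (untested): map a window of a large snow field file instead of
 * reading the whole thing into RAM.  The file name and window size are
 * made up for illustration; offsets passed to mmap must be page aligned.
 */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    const char  *path   = "snowfield.dat";          /* hypothetical field file */
    const size_t window = 256UL * 1024 * 1024;      /* map 256 MiB at a time   */

    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    off_t  offset    = 0;  /* must be a multiple of the page size */
    size_t remaining = (size_t)(st.st_size - offset);
    size_t len       = remaining < window ? remaining : window;

    /* The kernel pages this in lazily, so only touched pages cost RAM. */
    uint8_t *p = mmap(NULL, len, PROT_READ, MAP_PRIVATE, fd, offset);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* PoW lookups are random, so tell the kernel not to bother with readahead. */
    madvise(p, len, MADV_RANDOM);

    /* ... do proof-of-work lookups against p[0..len) ... */

    munmap(p, len);
    close(fd);
    return 0;
}
```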
one neat thing would be to have that map split into NUMA-aware, CPU-pinned thread locals
for a good overview of what I'm thinking of: https://www.open-mpi.org/projects/hwloc/
so:
1) the max per-thread memory size should fit into a NUMA node
2) with multiple memory channels per node, split further into thread groups per NUMA node to max out memory bandwidth use
3.1) one (or two?) threads for the disk remnant if it's on spinny disks
3.2) query the NCQ depth for the thread count on SATA SSDs
3.3) query the multiqueue scheduler for details on how to saturate an NVMe device
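a minimal sketch (untested) of points 1 and 2, assuming hwloc >= 2.0 and pthreads; `worker_main()` stands in for whatever maps and scans that node's slice of the field:
```
/*
 * Sketch (untested) of the NUMA split: enumerate NUMA nodes with hwloc
 * (https://www.open-mpi.org/projects/hwloc/) and pin one worker thread per
 * node, binding both the CPU and the memory policy to that node.
 * Assumes hwloc >= 2.0 and pthreads; worker_main() stands in for whatever
 * maps and scans that node's slice of the field.
 */
#include <hwloc.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct {
    hwloc_topology_t topo;
    hwloc_obj_t      node;   /* the NUMA node this worker is bound to */
    int              index;
} worker_arg_t;

static void *worker_main(void *p)
{
    worker_arg_t *w = p;

    if (w->node) {
        /* Pin this thread to the CPUs local to its NUMA node. */
        hwloc_set_cpubind(w->topo, w->node->cpuset, HWLOC_CPUBIND_THREAD);

        /* Prefer memory from the local node for everything this thread touches. */
        hwloc_set_membind(w->topo, w->node->nodeset, HWLOC_MEMBIND_BIND,
                          HWLOC_MEMBIND_THREAD | HWLOC_MEMBIND_BYNODESET);

        printf("worker %d bound to NUMA node %u\n", w->index, w->node->os_index);
    }
    /* ... mmap this node's slice of the field and do lookups here ... */
    return NULL;
}

int main(void)
{
    hwloc_topology_t topo;
    hwloc_topology_init(&topo);
    hwloc_topology_load(topo);

    int nodes = hwloc_get_nbobjs_by_type(topo, HWLOC_OBJ_NUMANODE);
    if (nodes < 1) nodes = 1;   /* UMA box: treat it as a single node */

    pthread_t    *tid = calloc(nodes, sizeof *tid);
    worker_arg_t *arg = calloc(nodes, sizeof *arg);

    for (int i = 0; i < nodes; i++) {
        arg[i].topo  = topo;
        arg[i].index = i;
        arg[i].node  = hwloc_get_obj_by_type(topo, HWLOC_OBJ_NUMANODE, i);
        pthread_create(&tid[i], NULL, worker_main, &arg[i]);
    }
    for (int i = 0; i < nodes; i++)
        pthread_join(tid[i], NULL);

    free(tid);
    free(arg);
    hwloc_topology_destroy(topo);
    return 0;
}
```
for point 2 you'd then size each node's worker group from its memory-channel count, and 3.x just decides how many of those threads touch the disk-backed remnant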
it can just expose ’it can use up to this much RAM’ to the user
for details of real code: the Monero CPU miners do that to a degree, as they are also memory-latency bound
@bottob88 has joined the channel
```
Jun 07, 2018 7:54:53 AM io.grpc.netty.NettyServerHandler onStreamError
WARNING: Stream Error
io.netty.handler.codec.http2.Http2Exception$StreamException: Received DATA frame for an unknown stream 3
    at io.netty.handler.codec.http2.Http2Exception.streamError(Http2Exception.java:129)
    at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.shouldIgnoreHeadersOrDataFrame(DefaultHttp2ConnectionDecoder.java:535)
    at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder$FrameReadListener.onDataRead(DefaultHttp2ConnectionDecoder.java:187)
    at io.netty.handler.codec.http2.Http2InboundFrameLogger$1.onDataRead(Http2InboundFrameLogger.java:48)
    at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readDataFrame(DefaultHttp2FrameReader.java:421)
    at io.netty.handler.codec.http2.DefaultHttp2FrameReader.processPayloadState(DefaultHttp2FrameReader.java:251)
    at io.netty.handler.codec.http2.DefaultHttp2FrameReader.readFrame(DefaultHttp2FrameReader.java:160)
    at io.netty.handler.codec.http2.Http2InboundFrameLogger.readFrame(Http2InboundFrameLogger.java:41)
    at io.netty.handler.codec.http2.DefaultHttp2ConnectionDecoder.decodeFrame(DefaultHttp2ConnectionDecoder.java:118)
    at io.netty.handler.codec.http2.Http2ConnectionHandler$FrameDecoder.decode(Http2ConnectionHandler.java:390)
    at io.netty.handler.codec.http2.Http2ConnectionHandler.decode(Http2ConnectionHandler.java:450)
    at io.netty.handler.codec.ByteToMessageDecoder.decodeRemovalReentryProtection(ByteToMessageDecoder.java:489)
    at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:428)
    at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:265)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:340)
    at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1434)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:362)
    at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:348)
    at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:965)
    at io.netty.channel.nio.AbstractNioByteChannel$NioByteUnsafe.read(AbstractNioByteChannel.java:163)
    at io.netty.channel.nio.NioEventLoop.processSelectedKey(NioEventLoop.java:647)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeysPlain(NioEventLoop.java:547)
    at io.netty.channel.nio.NioEventLoop.processSelectedKeys(NioEventLoop.java:501)
    at io.netty.channel.nio.NioEventLoop.run(NioEventLoop.java:461)
    at io.netty.util.concurrent.SingleThreadEventExecutor$5.run(SingleThreadEventExecutor.java:884)
    at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
    at java.base/java.lang.Thread.run(Thread.java:844)
```
getting that from Node.
note that I still appear to be at the correct node height
and things are working
seen those as well.
rare occasion
Something is sending bullshit to your grpc port.
Ideally we'd have the log level set to not see those
@Shoots has joined the channel
@Protovist has joined the channel
ok, I need to build a 5TB SSD array to build the next field
The build I linked you earlier should do well, but I'd want to confirm the mobo has enough PCIe lanes for the 4x NVMe drives
And can you link that build again?
That'll work pretty well, but I'm not an expert in hardware architecture
won't be as fast, but it will let me make the file
@Fireduck Suggest setting Slack Channel Permissions so that only collaborators can create channels. We're winding up with iterations of <#CB40FDCF9|trade>, <#CB2R25KC0|trading>, #otc and more just on accident. heh
I think it requires the workspace owner to modify those permissions
@bl0ckchain has joined the channel
@SerfyWerfy has joined the channel
@Gary Brinkman has joined the channel
Anyone who wants to run a pool, now is your time to shine
```
Jun 07, 2018 10:59:58 PM io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference cleanQueue
SEVERE: *~*~*~ Channel ManagedChannelImpl{logId=io.grpc.internal.ManagedChannelImpl-316, target=[2601:602:8802:2091:4875:7446:ce75:f5f3]:2338} was not shutdown properly!!! ~*~*~*
    Make sure to call shutdown()/shutdownNow() and wait until awaitTermination() returns true.
java.lang.RuntimeException: ManagedChannel allocation site
    at io.grpc.internal.ManagedChannelOrphanWrapper$ManagedChannelReference.<init>(ManagedChannelOrphanWrapper.java:103)
    at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:53)
    at io.grpc.internal.ManagedChannelOrphanWrapper.<init>(ManagedChannelOrphanWrapper.java:44)
    at io.grpc.internal.AbstractManagedChannelImplBuilder.build(AbstractManagedChannelImplBuilder.java:396)
    at snowblossom.node.PeerClient.<init>(PeerClient.java:20)
    at snowblossom.node.Peerage$PeerageMaintThread.connectToPeers(Peerage.java:383)
    at snowblossom.node.Peerage$PeerageMaintThread.run(Peerage.java:316)
```
yeah, it doesn't seem to cause a problem, but I haven't tracked it down yet
want to file that as a bug on github?
sure
@Fireduck Why not just store the pool shares w/ RocksDB as well?
Because the baby is awake now. But yeah, easy. Just not done yet.
Ahh, ok.
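for reference, a minimal sketch (untested) of what storing shares in RocksDB could look like, here via its C API; the key scheme is invented for illustration and the actual pool code is Java:
```
/*
 * Sketch (untested): persist per-miner share counts with RocksDB's C API.
 * The key scheme ("share/<address>" -> count) is invented for illustration;
 * the real pool is Java and would go through the RocksDB Java bindings.
 */
#include <rocksdb/c.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *err = NULL;

    rocksdb_options_t *opts = rocksdb_options_create();
    rocksdb_options_set_create_if_missing(opts, 1);

    rocksdb_t *db = rocksdb_open(opts, "/tmp/pool_shares", &err);
    if (err) { fprintf(stderr, "open: %s\n", err); free(err); return 1; }

    /* Record one share for a (hypothetical) miner address. */
    const char *key = "share/snow:example_miner_address";
    const char *val = "1";
    rocksdb_writeoptions_t *wopts = rocksdb_writeoptions_create();
    rocksdb_put(db, wopts, key, strlen(key), val, strlen(val), &err);
    if (err) { fprintf(stderr, "put: %s\n", err); free(err); err = NULL; }

    /* Read it back. */
    size_t vlen = 0;
    rocksdb_readoptions_t *ropts = rocksdb_readoptions_create();
    char *got = rocksdb_get(db, ropts, key, strlen(key), &vlen, &err);
    if (got) { printf("%s -> %.*s\n", key, (int)vlen, got); free(got); }

    rocksdb_readoptions_destroy(ropts);
    rocksdb_writeoptions_destroy(wopts);
    rocksdb_close(db);
    rocksdb_options_destroy(opts);
    return 0;
}
```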