i'm trying to figure out how to do a runtime container for rosesnow as a staged build, and to not have `~/.cache` be transferred over, as that's fairly big - what are the artefacts it actually needs?
also having that maven thing fire on the container startup seems like an antipattern for all things "cloud native"
also one could not fire that up in an airgapped context as the maven startup of jetty hits the network immediately
oh there's a war
only copying the .war to the jetty container does not work - can one somehow make a fully self-contained .war?
```
FROM ubuntu:20.04 as build

ARG DEBIAN_FRONTEND=noninteractive
ARG APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=true

RUN apt-get update \
    && apt-get install -qq --no-install-suggests --no-install-recommends \
        apt-utils \
        gpg \
        gpg-agent \
        ca-certificates \
        curl

RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" \
    > /etc/apt/sources.list.d/snowblossom-bazel.list
RUN curl -Ls https://bazel.build/bazel-release.pub.gpg | apt-key add -

RUN apt-get update \
    && apt-get install -qq --no-install-suggests --no-install-recommends \
        git \
        default-jdk-headless \
        maven \
        bazel \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

RUN useradd -ms /bin/bash rosesnow
USER rosesnow
WORKDIR /home/rosesnow

RUN git clone --depth=1 --branch=v0.1 \
    https://github.com/snowblossomcoin/rosesnow.git
WORKDIR /home/rosesnow/rosesnow
RUN bazel build :RoseSnow_deploy.jar
WORKDIR /home/rosesnow/rosesnow/maven
RUN mvn package

FROM jetty:9-jre11-slim as runtime

COPY --from=build /home/rosesnow/rosesnow/maven/target/snowrosetta-rosesnow.war \
    /var/lib/jetty/webapps/ROOT.war

# From repository root:
# docker build -t rosesnow docker/
# docker run -it --rm -p 8080:8080 rosesnow
```
i'll make a PR if you make that .war fully self contained
seems like something you'd have to do in the maven declarations for `mvn package` to do the right thing
cloud native 101: always use an upstream container as your base (in this case the jetty container)
opsec 101: never leave your build tooling in production servers or containers
breakdown of improvements
• `ARG` <- only set this environment variable during build time (can also be overridden from the command line)
• `--no-install-suggests --no-install-recommends` <- smaller transient containers, less waiting at build time
• `apt-get clean` <- smaller transient containers
• `rm -rf /var/lib/apt/lists/*` <- smaller transient containers
• `git clone --depth=1` <- smaller transient containers, less waiting at build time
• `RUN useradd` <- don't run builds as root, you pull dynamic stuff in and root has a higher attack surface to the container host
• `FROM jetty:9-jre11-slim as runtime` <- use the appropriate upstream container as your runtime container
• `COPY --from=build` <- staged build for the most minimal possible runtime
```org.glassfish.jersey.message.internal.MessageBodyProviderNotFoundException: MessageBodyWriter not found for media type=application/json, type=class io.swagger.oas.inflector.models.ApiError, genericType=class io.swagger.oas.inflector.models.ApiError.``` i guess swagger does not get included in the .war
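fwiw that particular Jersey error usually just means no JSON `MessageBodyWriter` ends up on the webapp classpath at runtime, so the maven-jetty dev flow presumably pulls one in that `mvn package` doesn't. a sketch of what the pom might need (artifact coordinates from memory, the version is a placeholder - match it to whatever jersey version the pom already uses):

```xml
<!-- hypothetical addition to maven/pom.xml: gives Jersey a Jackson-backed
     JSON MessageBodyWriter inside the .war itself, instead of relying on
     whatever the maven jetty plugin happens to provide at dev time -->
<dependency>
  <groupId>org.glassfish.jersey.media</groupId>
  <artifactId>jersey-media-json-jackson</artifactId>
  <version><!-- match the existing jersey version --></version>
</dependency>
```

just a guess from the stack trace though, i haven't checked the actual pom.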
you fix, i rebase, i test, and i PR or nag more
also can has contributor to org? doing the fork repo dance on github, while not tragic, is churn
actually i'll refactor the dockerfile to copy the current working tree in so it can be used in pipelines, we could look at github actions for that
also thus gotta jam in appropriate .gitignore and .dockerignore
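something like this is what i have in mind for the `.dockerignore` (entries are guesses based on the build outputs mentioned here, adjust to the actual tree; most of the same lines would go into `.gitignore` too):

```
# keep local build outputs and caches out of the docker build context
.git/
bazel-*
maven/target/
**/.cache/
```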
i guess that .war being broken is not a super big issue as your container serves a 404 as the root :slightly_smiling_face:
oh well, i'll put a draft PR together while you figure out how to make a standard .war
• Add appropriate Docker and git ignore files
• Copy current working tree in instead of git cloning in-container
  • This will enable CI/CD down the line off this base
• Reduce transient layer count
  • This helps with how demanding the local caching is on the storage of developer machines
• Improve use of apt-get
  • Quiet output
  • Clean all transient metadata up
• Use an unprivileged user for the build
• General cleanups
• Move to a staged build where the runtime is the standard upstream Jetty 9 container

Known issue: the maven output `.war` does not work. I'll rebase and undraft this PR once a fix for that lands on the `main` branch.
i guess these kinds of things are the extra "frameworky bits" you would not like to get in your way
oh well, getting the crypto stuff right is what matters
and i'll pitch in container stuff in as getting that stuff right helps with adoption
i guess i should also do one for the node, the explorer, the pool and for what else?
Rosetta API (aka coinbase) has some particular requirements for the docker setup: https://www.rosetta-api.org/docs/node_deployment.html
well, i can also do an ubuntu based runtime in that for the .war
the one that seems weird to me is they don't want to copy in files from the repo, they want it to pull them from git
with any luck the jetty image is based on ubuntu, let's check
well, that's also fair, as they can rebuild that at any point to pull in os updates
but even a git tag is a floating pointer so someone could do a nasty switcheroo for them
nope, jetty is debian based
and they don't offer an ubuntu based image, so i'll roll my own
but you still need to fix the .war
installing jetty in ubuntu and putting the .war to the correct place as ROOT.war and starting jetty in the container entrypoint is easy
I agree I should fix the war, but I'd have to understand what is wrong with it. Ideally I'd remove maven completely from the picture.
you can use my dockerfile for debugging
https://github.com/Rotonen/rosesnow/blob/2020-10-20-improve-dockerfile/Dockerfile
```
FROM ubuntu:20.04 as build

ARG DEBIAN_FRONTEND=noninteractive
ARG APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=true

RUN apt-get update -qq \
    && apt-get install -qq --no-install-recommends --no-install-suggests \
        apt-utils \
        gpg \
        gpg-agent \
        ca-certificates \
        curl

RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" \
    > /etc/apt/sources.list.d/snowblossom-bazel.list
RUN curl -Ls https://bazel.build/bazel-release.pub.gpg | apt-key add -

RUN apt-get update -qq \
    && apt-get install -qq --no-install-recommends --no-install-suggests \
        git \
        default-jdk-headless \
        bazel \
        maven \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

RUN useradd -ms /bin/bash snowblossom
COPY . /home/snowblossom/rosesnow
RUN chown -R snowblossom:snowblossom /home/snowblossom
USER snowblossom

WORKDIR /home/snowblossom/rosesnow
RUN bazel build :RoseSnow_deploy.jar
WORKDIR /home/snowblossom/rosesnow/maven
RUN mvn package

FROM jetty:9-jre11-slim as runtime

COPY --from=build /home/snowblossom/rosesnow/maven/target/snowrosetta-rosesnow.war \
    /var/lib/jetty/webapps/ROOT.war

# From repository root:
# docker build -t rosesnow .
# docker run -it --rm -p 8080:8080/tcp rosesnow
```
and also grab the .gitignore and .dockerignore so you don't put the wrong things into the container by accident
or just build the war and jam that into a standard jetty container with a volume
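for the record, the volume variant would be roughly this (war filename taken from the dockerfile above, jetty image tag assumed, untested sketch):

```shell
# build the war locally, then mount it into the stock jetty image
bazel build :RoseSnow_deploy.jar
(cd maven && mvn package)
docker run -it --rm -p 8080:8080 \
    -v "$(pwd)/maven/target/snowrosetta-rosesnow.war:/var/lib/jetty/webapps/ROOT.war:ro" \
    jetty:9-jre11-slim
```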
also works
so, i'll do a second Dockerfile just for rosetta
one for local dev at repo root, one for shipping to rosetta
and i'll add the branch it should clone as an argument - git clone --branch limits it to just published objects (branch names, tags), but that should work
and the second requirement they have is the build staging i already went for anyway
testing locally and amending PR still pending a working .war
so, this works
```
FROM ubuntu:20.04 as build

ARG DEBIAN_FRONTEND=noninteractive
ARG APT_KEY_DONT_WARN_ON_DANGEROUS_USAGE=true

RUN apt-get update -qq \
    && apt-get install -qq --no-install-recommends --no-install-suggests \
        apt-utils \
        gpg \
        gpg-agent \
        ca-certificates \
        curl

RUN echo "deb [arch=amd64] http://storage.googleapis.com/bazel-apt stable jdk1.8" \
    > /etc/apt/sources.list.d/snowblossom-bazel.list
RUN curl -Ls https://bazel.build/bazel-release.pub.gpg | apt-key add -

RUN apt-get update -qq \
    && apt-get install -qq --no-install-recommends --no-install-suggests \
        git \
        default-jdk-headless \
        bazel \
        maven \
    && apt-get clean \
    && rm -rf /var/lib/apt/lists/*

RUN useradd -ms /bin/bash snowblossom
RUN chown -R snowblossom:snowblossom /home/snowblossom
USER snowblossom
WORKDIR /home/snowblossom

ARG BRANCH=v0.1
RUN git clone --depth=1 --branch=${BRANCH} \
    https://github.com/snowblossomcoin/rosesnow.git \
    && rm -rf rosesnow/.git/
WORKDIR /home/snowblossom/rosesnow
RUN bazel build :RoseSnow_deploy.jar
WORKDIR /home/snowblossom/rosesnow/maven
RUN mvn package

FROM ubuntu:20.04 as runtime

ARG DEBIAN_FRONTEND=noninteractive
RUN apt-get update -qq \
    && apt-get install -qq --no-install-recommends --no-install-suggests \
        jetty9

RUN rm -rf /var/lib/jetty9/webapps/root/
COPY --from=build /home/snowblossom/rosesnow/maven/target/snowrosetta-rosesnow.war \
    /var/lib/jetty9/webapps/ROOT.war
RUN chown -R jetty:adm /var/lib/jetty9/

USER jetty
ENV JETTY_HOME=/usr/share/jetty9/
ENV JETTY_STATE=/var/lib/jetty9/jetty.state
ENV JAVA_OPTS=-Djava.awt.headless=true
CMD /usr/share/jetty9/bin/jetty.sh run

# For local dev vs. main (or any other branch or tag):
# docker build --build-arg BRANCH=main -t rosesnow-rosetta .
# docker run -it --rm -p 8080:8080/tcp rosesnow-rosetta
```
in the early boot log there's some nonsense about how the jetty in debian is deeply systemd integrated and how it cannot find the state file from the process manager, but that's safe to ignore
so, if you have a branch where you try to fix the .war, you can use that to check your working
and i think the branch heads are also published objects on the public github, so you should be able to just pass in a hash as a build arg
github optimises some lesser-used facets of their git remotes, for obvious reasons of scale and performance, and they're not fully open about the things they've not taken a stance on yet - so this stuff sometimes flutters back and forth without any announcements from them (which is fair, they never promised it'd work and no normal tooling does that)
as far as i can tell that now fulfills all of the rosetta specs
1) build from git
2) separate build time and run time
3) all based on ubuntu
if i missed any, give me a poke
oh, missed a couple of the environment variables from the systemd file
@Fireduck edited the dockerfile above, there you go, for your local experimentation
Thanks I'll take a look. But honestly, it might be a few days before I am up to fighting jetty and war files more
I find it really frustrating. The introspection of the framework I am using is weird.
And I have plenty of work to do implementing the actual api calls
that's fair, but what i can offer is helping you out with any container stuff - i'm not in a particular hurry
you need to find a second java dude
i can do plenty of systems and packaging and CI/CD stuff for snowblossom
Ideally I'll figure out how to make the war file correctly and just do it from bazel
that works as well, changing those dockerfiles to match that flow is a non-issue
whenever you think you got it, ping me and i'll reshape and rebase
Awesome, thanks
should i also try to do one for the node?
lowering the barrier of entry for nodes could be healthy
We have one for the node that could use some fixing
and the more i look at that rosetta build spec, the more i think they just don't trust people on average to have a sane .dockerignore in their repositories to filter out any potential build outputs, so they want a hard guarantee of reproducible builds - which is fair, given what they do
leading up to the question: do i do the same for the node?
Plus should I make a dockerhub account and publish things there?
yes, but then you should give me some org access to snowblossom on github, so i can automate that with github actions
Yeah, will do
github orgs also have a secrets management scheme these days so you can put the dockerhub upload key there for the pipeline to use
and i'd also add branch protection rules on the main / master branches of all the repositories so no one can ever force push to master
and one can add more branch protection rules if you get more developers, like mandating all changes to master come through PRs (and that the tests pass on said PRs)
for now i'd just go for "tests run on all commits"
then after that "creating a tag builds a container and pushes to dockerhub"
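the tag -> dockerhub step would look roughly like this as an action (workflow shape from memory, secrets names and image name are made up - they'd come from the repo secrets and the dockerhub org, so double check against the actions docs):

```yaml
# .github/workflows/release.yml - hypothetical sketch
name: release
on:
  push:
    tags: ['*']
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: log in to dockerhub
        run: echo "${{ secrets.DOCKERHUB_TOKEN }}" | docker login -u "${{ secrets.DOCKERHUB_USER }}" --password-stdin
      - name: build and push, tag name as the image version
        run: |
          docker build -t snowblossom/node:${GITHUB_REF#refs/tags/} .
          docker push snowblossom/node:${GITHUB_REF#refs/tags/}
```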
never did anything with github actions so far, so i'm very curious about all that
especially their design paradigm on container promotion pipelines is something i'd like to give a spin
At my work they were in some way unhappy with it and switched to gitlab for the automation
Code and pr work still in github, just automatic mirrored to gitlab for ci
i try to be able to use ~anything, so i'm currently on a mix of jenkins x, jenkins, spinnaker, teamcity, bamboo, screwdriver, circle, travis (and probably forgot two thirds of what i've used in the past decade - i started with tinderbox)
task durability engines are not magic, they all do about the same thing
serialization, marshalling, execution, passing things around, distributed map reduce, storage of artefacts
Yep
and very good fun everything always boils down to "returns zero, or not"
this looks good for jamming into a github action: https://github.com/bazelbuild/bazelisk (a user-friendly launcher for Bazel)
not sure if that or docker is the neater way to go here, oh well, i'll worry about that later and i'll do the docker image for the node first
that might be handy, especially if bazel continues to make breaking changes
for example, right now if you wanted to go back and build old versions of snowblossom, it wouldn't work
because bazel has changed things. But if we could specify which version, then sure.
i think i'll give that a spin already for the node dockerfile
@Clueless does the `COPY --chown=docker:docker` already work reliably on all platforms?
i guess that's not really an issue across containers like that
oh that wasn't from you, but from some community member
or was that the other dev who was on board early on?
that file is pretty well done
i'm not exactly sure of the multi-use paradigm in there - do we know if we have any downstream users? if i break that up into multiple files, that'd break their flow
@Rotonen I made some distance with a new dockerfile but got hung up on something. I have it somewhere if you want reference.
push it onto a branch and i'll cherry-pick
in general i always like having more perspectives and approaches around
@Clueless did you make a snowblossom dockerhub repo? If so, can I have access to it?
and put the upload token into the secrets storage on the repo https://docs.github.com/en/free-pro-team@latest/actions/reference/encrypted-secrets
@Fireduck how do you run the tests?
`bazel test ...`
does it make a difference if one uses the deploy target? `bazel test :SnowBlossomNode_deploy.jar` - would that actually compile the node as well in a usable way?
I have no idea what that command would do
I actually run this: `bazel build :all :IceLeaf_deploy.jar :IceLeafTestnet_deploy.jar && bazel test ...`
And I have some other automation that picks up those deploy files and moves them to my desktop for testing
that command seems to have built the node deployment and nothing else
and a naked `bazel test` does nothing - i'm missing something very obvious here as i have no idea of the tooling or codebase
so normal build will make the bazel scripts, like bazel-bin/SnowBlossomClient and such
if you want the stand alone deploy.jar files you have to ask for them explicitly
i used to use `bazel test :all`
and that seems to override whatever test does, which is weird, but ok
but then something changed so "bazel test ..." works
with literally three dots
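for my own notes, the bazel target pattern cheat sheet as i understand it (semantics per the bazel docs, not verified against this repo):

```shell
bazel build :all                         # all rules in the current package only
bazel build //...                        # all targets in every package of the workspace
bazel test ...                           # like //... but relative to the cwd: build and run every test below here
bazel build :SnowBlossomNode_deploy.jar  # one explicit target, e.g. a standalone deploy jar
```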
i'm thinking of splitting the tests up per output
literally three dots was not what i expected that to mean :smile:
ha, yeah
`bazel test ...` runs something even without building anything
is 108 tests expected?
@Fireduck @Rotonen I have a framework for building individual pieces and entry points for each one. I'm even able to integrate with x window system in linux and provide a GUI. Am creating that branch right now.
I don't think 108 tests exist, but I'm not sure. There should be 5 or 6 sub tasks, which are tests in separate components
with probably around a dozen in each one
so it isn't way off base
i guess it builds 108 files to run the 6
yup, that's consistent with what I had in my build directory on my docker server.
bah, i wanted to shake out the stability of the tests and run them n+1 times in a row to see if they're flaky, but bazel is clever and has cached the results for me and skips the build and the tests, "..."
that one is about as properly done as one can do a dockerfile
a bit of inline cleanups here and there would help, but practically does not matter for something like this
@Fireduck I sent an ownership invite to your email address for dockerhub's snowblossom org
and `DEBIAN_FRONTEND=noninteractive` to get rid of the `-y`
last I was doing was screwing with dockerhub automatic building from github repos.
there's now a github action for that
@Clueless thanks
both work, though
i guess one could start tagging the dockerfile repository, building the images off tags via actions, and uploading those to dockerhub - seems all the groundwork is actually done there
just use the tag name as the environment variable for the version and you're good to go
actually one could even go a step further and trigger an action on the snowblossom repository when that gets a tag to tag the current master of the docker repo to fully automate that
starting with manual double tagging and seeing how it goes is probably prudent
@Rotonen want to be in charge of the dockerhub org and the dockerfiles?
and i guess the container could run the tests at build time so it halts the deployment if someone tagged bad code
also, is there a good way to run two things in a docker image? Like if we want a node and the explorer as one docker unit.
not in particular, but i'm not against that either, and @Clueless seems to be on top of things too
@Fireduck you can jam any process manager in, but why? just use a proper orchestration model
orchestration model meaning something like terraform?
more like rancher 1.6 or amazon fargate
or any kubernetes cluster for that matter :slightly_smiling_face:
so rancher 2, GKE, AKS, EKS, etc.
but locally on your computer, docker-compose should do the trick
and you can make just one image and change the entry points in the compose file, but i'd rather do lean images, one per purpose
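to make that concrete, a compose sketch along those lines (image names and ports are placeholders, not the real ones):

```yaml
# docker-compose.yml - hypothetical sketch: one lean image per purpose
version: "3.8"
services:
  node:
    image: snowblossom/node:latest
    ports:
      - "2338:2338"   # placeholder p2p port
  explorer:
    image: snowblossom/explorer:latest
    ports:
      - "8080:8080"   # placeholder http port
    depends_on:
      - node
```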
@Rotonen My implementation allows you to build specific parts, or all in one and use an entrypoint, I think.
haven't touched it in 8 months though. I should.
yep, it looked to cover all cases
that should serve as-is for dockerhub uploads via github actions
i think the only broken thing is the now-too-old version pinning of bazel
but as that's configurable through the environment, that's fine too
win!
so i'll clean up my rosetta dockerfile sometime this week and we can also look at the github actions -> dockerhub stuff
i'll collapse the rosetta back into one and just only provide that one in-repo as it originally was
awesome
i still had time to finish this tonight after all https://github.com/snowblossomcoin/rosesnow/pull/1 test locally and feedback is welcome on the PR
• Reduce transient layer count
  • This helps with how demanding the local caching is on the storage of developer machines
• Improve use of apt-get
  • Quiet output
  • Clean all transient metadata up
• Use an unprivileged user for the build
• General cleanups
• Move to a staged build and use Jetty 9 from the Ubuntu repositories

Known issue: the maven output `.war` does not work. I'll rebase and undraft this PR once a fix for that lands on the `main` branch.