I can’t publish staging to production because the schemas do not match between environments.
From our side, both agents seem to be using the same Docker image.
When I run the command
forest schema:diff 136189 127842 (between my production and staging environments), I get a diff that contains the modifications I made.
What I tried
- I checked our logs, and the new Docker image of our agent seems to have deployed fine and does log the expected message (Schema was updated, sending new version).
- I checked the Google Cloud console, and both staging and production seem to be using the same Docker image.
- I opened a PR on the agent-nodejs repo so that the schemaFileHash is logged when the agent starts.
This should let us make extra sure that the right schema is actually sent and that the issue is not on our side.
- Project name: Roundtable
- Team name: not relevant
- Environment name: Staging / Production
- Agent (forest package) name & version: latest agent nodejs
- Database type: postgres
- Recent changes made on your end if any: added smart actions, upgraded the agent package, added collections
We restarted the agent, and this time the schema was received, so our problem is fixed for now…
I’m pretty sure we did send the new schema the first time around (yesterday around 19:50, I can see it in the logs…).
Thanks for merging the contribution. If this happens again, we now have a tool to pinpoint where the issue comes from!
It seems that the fork you made of agent-nodejs broke the CI, so I duplicated your PR and merged it here: feat(forestadmin-client): add schema hash to startup logs by DayTF · Pull Request #867 · ForestAdmin/agent-nodejs · GitHub
Thanks for your contribution!
I’m happy to hear that the issue is resolved for the time being. I do see the schema you pushed yesterday at 19:52, and I will continue to investigate this.
That’s because it does not have access to the GitHub secrets, which breaks Code Climate (and probably the deployment stages after that).
Thanks for being so quick.
Hopefully it helps the Forest Admin team when debugging customer issues here, as this is a common one.
That’s great, thanks