Feature(s) impacted
Update of the models schema on Forest
Observed behavior
When deploying a new version of our Forest agent, synchronization of the .forestadmin-schema.json file with the Forest servers fails with the error "413 Request Entity Too Large". The error started occurring less than a week ago; sometimes the upload succeeds after a few retries, but not always.
The file is about 1.6 MB.
It looks like a file upload size limit on the Forest server side.
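For reference, here is a quick way to check the size of the payload the agent uploads, using standard shell tools. The stand-in file below only exists to make the snippet self-contained; in a real project you would point the commands at .forestadmin-schema.json instead.

```shell
# Stand-in for the real schema file, only so this snippet runs anywhere;
# replace sample-schema.json with .forestadmin-schema.json in a project.
head -c 1600000 /dev/zero > sample-schema.json

# Raw size in bytes (~1.6 MB, the size reported above)
wc -c < sample-schema.json

# Size after gzip; a proxy applies its body-size limit to the payload as
# transmitted, so whether the agent compresses the request matters
gzip -c sample-schema.json | wc -c

rm sample-schema.json
```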
Expected behavior
When the .forestadmin-schema.json file is regenerated, or when the Forest agent starts, the agent should send it to the Forest server for synchronization, and the upload should succeed.
Failure Logs
"header": {
"date": "Fri, 17 Oct 2025 08:19:31 GMT",
"content-type": "text/html",
"content-length": "183",
"connection": "close",
"server": "nginx/1.28.0"
},
"status": 413,
"text": "<html>\r\n<head><title>413 Request Entity Too Large</title></head>\r\n<body>\r\n<center><h1>413 Request Entity Too Large</h1></center>\r\n<hr><center>nginx/1.28.0</center>\r\n</body>\r\n</html>\r\n"
Context
- Project name: AirSaas
- Team name: Operations
- Environment name: Review App - PFS dashboard
- Database type: Postgres
- Recent changes made on your end if any: addition of new fields/tables, which we do very often, nothing out of the ordinary
And, if you are self-hosting your agent:
- Agent technology: node
- Agent (forest package) name & version (from your .lock file): forest-express-sequelize 9.6.1 ; forest-express 10.6.7
"meta": {
"liana": "forest-express-sequelize",
"liana_version": "9.6.1",
"stack": {
"database_type": "postgres",
"engine": "nodejs",
"engine_version": "20.19.5",
"orm_version": "6.37.3"
},
"schemaFileHash": "e5fda648d55844097c6170d7a9678419c2bd5821"
}
Hi lumberjacks!
I'm adding a "me too" here.
We have multiple developers impacted by this issue on our side.
Thanks for taking a quick look!
```
error: Error: cannot POST /forest/apimaps (413)
    at Response.toError (/Users/nicolas.moreau/*/back-office-api-gitlab/node_modules/@forestadmin/forestadmin-client/node_modules/superagent/src/node/response.js:110:17)
    at Response._setStatusProperties (/Users/nicolas.moreau/*/back-office-api-gitlab/node_modules/@forestadmin/forestadmin-client/node_modules/superagent/src/response-base.js:107:48)
    at new Response (/Users/nicolas.moreau/*/back-office-api-gitlab/node_modules/@forestadmin/forestadmin-client/node_modules/superagent/src/node/response.js:41:8)
    at Request._emitResponse (/Users/nicolas.moreau/*/back-office-api-gitlab/node_modules/@forestadmin/forestadmin-client/node_modules/superagent/src/node/index.js:932:20)
    at IncomingMessage.<anonymous> (/Users/nicolas.moreau/*/back-office-api-gitlab/node_modules/@forestadmin/forestadmin-client/node_modules/superagent/src/node/index.js:1170:38)
    at /Users/nicolas.moreau/*/back-office-api-gitlab/node_modules/@opentelemetry/context-async-hooks/src/AbstractAsyncHooksContextManager.ts:75:49
    at AsyncLocalStorage.run (node:async_hooks:335:14)
    at AsyncLocalStorageContextManager.with (/Users/nicolas.moreau/*/back-office-api-gitlab/node_modules/@opentelemetry/context-async-hooks/src/AsyncLocalStorageContextManager.ts:40:36)
    at IncomingMessage.contextWrapper (/Users/nicolas.moreau/*/back-office-api-gitlab/node_modules/@opentelemetry/context-async-hooks/src/AbstractAsyncHooksContextManager.ts:75:26)
    at IncomingMessage.emit (node:events:531:35)
    at IncomingMessage.emit (node:domain:488:12)
    at endReadableNT (node:internal/streams/readable:1696:12)
    at processTicksAndRejections (node:internal/process/task_queues:82:21) {
  status: 413,
  text: '<html>\r\n' +
    '<head><title>413 Request Entity Too Large</title></head>\r\n' +
    '<body>\r\n' +
    '<center><h1>413 Request Entity Too Large</h1></center>\r\n' +
    '<hr><center>nginx/1.28.0</center>\r\n' +
    '</body>\r\n' +
    '</html>\r\n',
  method: 'POST',
  path: '/forest/apimaps'
},
```
Hello @Matthieu_Delanoe and @Nicolas_Moreau,
Sorry for the late reply, we’ve been working on this nonetheless.
This issue is linked to a payload size limitation on requests forwarded to our server.
We were able to roll back to the previous infrastructure, and we are already seeing users successfully posting their schemas.
Please try again; if you're still experiencing the issue, clearing your DNS resolution cache might help.
Keep us updated if you’re still stuck,
Sorry for the inconvenience,
Best regards,
It seems OK on my side now, thanks for the quick response!