TypeError: value.map is not a function when viewing collection with PostgreSQL native enum array

Feature(s) impacted

Rendering a collection (Profile) that contains a native PostgreSQL enumeration array (Industry[]) using the Forest Admin Node.js agent with Prisma and PostgreSQL.

Observed behavior

After our nightly reset of the staging database (we recreate a fresh PostgreSQL instance every night), navigating to the Profile collection in Forest Admin throws the following error:

TypeError: value.map is not a function
    at BinaryCollectionDecorator.convertValueHelper ...

It appears that Forest Admin expects value to be an array, but after the reset the value is no longer interpreted as one, specifically when the field is a native Postgres enum array.

The field in question is defined in our Prisma schema as:

enum Industry {
  ...
  realEstatePropTech
  ...
}

model Profile {
  ...
  industries Industry[] @default([])
}

:warning: To be clear, this Industry enum is a native PostgreSQL enum, declared and used via Prisma.

See attached:


Expected behavior

Forest Admin should support native PostgreSQL enum arrays without issues, even after a full DB instance reset, and should not require a manual redeployment of the agent to function.

Failure Logs

TypeError: value.map is not a function
    at BinaryCollectionDecorator.convertValueHelper ...
    at BinaryCollectionDecorator.convertValue ...
    ...

Context

  • Project name: Roundtable
  • Team name: Roundtable
  • Environment name: staging
  • Database type: PostgreSQL
  • Recent changes: We reset the entire DB instance nightly for a clean test environment

Self-hosted agent info:

  • Agent technology: Node.js
  • Agent package: @forestadmin/agent
  • Agent version: 1.64.0
{
    "@forestadmin/agent": "1.64.4",
    "@forestadmin/datasource-customizer": "1.67.0",
    "@forestadmin/datasource-sql": "1.17.1",
    "@forestadmin/datasource-toolkit": "1.50.0",
    "@forestadmin/plugin-flattener": "1.4.16",
}

Redeploying the agent consistently resolves the issue, but that's not sustainable. We're looking for insights into what causes this mismatch (perhaps a metadata caching issue?) and whether there's a better workaround or a permanent fix.

Thanks a lot!

Hi @ThibaultWalterspiele,

I am trying to reproduce your bug. With just the enum array, I could not reproduce any issue.

The convertValueHelper function is called in the datasource-customizer. Could you please send me the code from your agent that customizes this collection?

Best regards,

Shohan

The error occurs even on tables that are not customised via Forest. It appears as soon as the staging DB is flushed and persists until the Forest Admin agent is redeployed.

For testing reasons we are forced to reset the staging DB every day.

When you flush the data, do you also delete the schema?

An alternative solution could be to call the restart() function of the agent. That would avoid having to restart it manually.
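A minimal sketch of what that could look like, assuming the agent is created with createAgent and mounted on a standalone server, and assuming the nightly reset always finishes before the scheduled restart (the schedule, port, and environment variables below are placeholders):

import { createAgent } from '@forestadmin/agent';
import { createSqlDataSource } from '@forestadmin/datasource-sql';

const agent = createAgent({
  // Your usual agent options.
  authSecret: process.env.FOREST_AUTH_SECRET!,
  envSecret: process.env.FOREST_ENV_SECRET!,
  isProduction: false,
}).addDataSource(createSqlDataSource(process.env.DATABASE_URL!));

await agent.mountOnStandaloneServer(3310).start();

// Placeholder schedule: restart once a day, after the nightly reset is
// expected to be done. Restarting re-runs the introspection, so the
// freshly recreated enum types are picked up without a redeploy.
setInterval(() => {
  Promise.resolve(agent.restart()).catch(console.error);
}, 24 * 60 * 60 * 1000);

Triggering the restart from the job that resets the database (for example with a small HTTP call at the end of the reset script) would be more reliable than a fixed timer, but the idea is the same.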

Yes, we delete the schema as well.

That solution isn’t ideal; I get the impression that we’re working around the problem rather than really fixing it.

Hello @ThibaultWalterspiele,

Sadly, this issue is tied to how Sequelize, which is used by @forestadmin/datasource-sql, handles these values. When interacting with arrays of enums, Sequelize casts the value to the enum type that was populated and loaded into memory during the original introspection.
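For illustration only (the helper below is not the real BinaryCollectionDecorator code, and the raw-literal behaviour is an assumption about the mechanism, not confirmed internals): once the recreated enum type no longer matches what was loaded at introspection, the column value can reach the decorator as the raw Postgres array literal instead of a parsed JavaScript array, and calling .map on it then throws:

// Illustrative sketch, not the actual Forest Admin implementation.
function convertValueHelper(value: string[]): string[] {
  return value.map((v) => v);
}

// Before the reset: the enum array arrives as a parsed JS array.
convertValueHelper(['realEstatePropTech']); // ok

// After the reset: the value arrives as the raw literal string
// '{realEstatePropTech}', so value.map is not a function.
convertValueHelper('{realEstatePropTech}' as unknown as string[]); // TypeError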

The agent is not capable of partial introspection of the database. With the current implementation, a workaround would be, as previously shared, either redeploying the agent or calling the restart() function defined on the agent.

Another workaround, on your infrastructure side, could be to flush only the data of your staging environment while leaving the schema intact, so that no new introspection is required.
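A minimal sketch of that approach with Prisma (everything except the Profile model is a placeholder, and the exact list of tables depends on your schema):

import { PrismaClient } from '@prisma/client';

const prisma = new PrismaClient();

async function flushStagingData(): Promise<void> {
  // Remove the rows but keep the tables and the native "Industry" enum
  // type in place, so the agent's cached introspection stays valid and
  // no restart or redeploy is needed.
  await prisma.$executeRawUnsafe(
    'TRUNCATE TABLE "Profile" RESTART IDENTITY CASCADE',
  );
  // Add your other tables to the TRUNCATE statement, or use the
  // per-model prisma.<model>.deleteMany() calls instead.
}

flushStagingData()
  .catch(console.error)
  .finally(() => prisma.$disconnect());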

As things stand, I cannot commit to a fix, as the use case is quite particular and not expected to happen in production. I will, however, add it to our backlog.

I hope you find the workarounds satisfactory.

Best regards,