ForestAdmin cannot detect that my backend is up and running in production
Observed behavior
ForestAdmin is stuck on the “Set your environment variable” step with the error message “It looks like the URL is not responding”.
Expected behavior
The URL is set correctly and responds over SSL when I test it in my browser, so ForestAdmin should be able to detect it. I have restarted the operation multiple times, replacing the environment variables with the correct values each time and restarting the backend container.
I’m in charge of this issue now. I tried to deploy a new Forest remote environment with fresh environment variables, but the issue is still the same.
The application URL responds with a 200 HTTP status code and a “Your application is running” message.
I printed all the environment variables using a shell in the Kubernetes pod and everything is configured correctly, so I don’t understand why the Forest configuration keeps saying “It looks like the URL is not responding”…
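For reference, the two checks described above can be sketched as follows. The URL and workload name (`agent.example.com`, `deploy/forest-agent`) are placeholders, not values from this thread:

```shell
# Placeholder URL -- substitute your agent's public address.
APP_URL="https://agent.example.com"

# 1. Confirm the application URL answers with a 200 status code
#    (prints 000 if the URL cannot be reached at all):
curl -s --max-time 5 -o /dev/null -w '%{http_code}\n' "$APP_URL" || true

# 2. List the Forest-related environment variables inside the pod
#    (requires kubectl access to the cluster; the workload name is a
#    placeholder):
if command -v kubectl >/dev/null 2>&1; then
  kubectl exec deploy/forest-agent -- sh -c 'env | grep "^FOREST_"' || true
fi
```

Running both from a machine outside the cluster helps separate “the pod is misconfigured” from “the URL is not reachable from the public internet”.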
I’m not able to reproduce the same issue. In my case the front end is still trying to reach the /forest route.
Can you try another fresh install? (You will need to update your FOREST_ENV_SECRET and restart your container.)
Could you open the network tab when you start the process so we can see all the calls?
Let me know if it helps this time.
Also, I’ll pass feedback to our team, because you should be able to relaunch the check that detects whether the agent is running instead of being stuck at this point.
It only calls the /forest route during the initialisation phase (when you first onboard a new project or when you deploy to production/remote).
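Since the onboarding flow probes the /forest route, you can check it directly. This is a minimal sketch; the base URL is a placeholder for your agent’s public address:

```shell
# Placeholder base URL -- replace with your agent's public address.
APP_URL="https://agent.example.com"

# A reachable agent answers on /forest; a printed code of 000 means
# the URL could not be reached at all (DNS, firewall, TLS, ...):
curl -s --max-time 5 -o /dev/null -w '%{http_code}\n' "$APP_URL/forest" || true
```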
I tried to update your project on my side. Can you try to connect to your Pre-Production environment for the Lexagone project? Let me know what happens (whether you can connect to this environment).
I already tried to connect to our Pre-Production environment for the Lexagone project but I’m still stuck on the last step with this message (I can’t connect to this environment in the project settings):
The idea from the live debug was the right one: removing our Kubernetes configuration snippet from the nginx ingress solved the CORS issue.
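For anyone hitting the same wall: a hedged sketch of how to spot this kind of conflict. The ingress name (`forest-agent`) and URLs are placeholders, and the exact annotation in the original cluster is not shown in this thread; the idea is that CORS headers injected by an ingress snippet can duplicate the ones the agent already sends:

```shell
# 1. Look for a configuration-snippet annotation injecting headers
#    on the ingress (placeholder ingress name; requires kubectl):
if command -v kubectl >/dev/null 2>&1; then
  kubectl get ingress forest-agent -o yaml \
    | grep -i -A3 'configuration-snippet' || true
fi

# 2. Duplicate Access-Control-Allow-Origin headers (one from the
#    ingress, one from the agent) are a classic cause of browser CORS
#    errors; a preflight request makes any duplicates visible:
curl -s --max-time 5 -D - -o /dev/null -X OPTIONS \
  -H 'Origin: https://app.forestadmin.com' \
  -H 'Access-Control-Request-Method: GET' \
  'https://agent.example.com/forest' \
  | grep -i 'access-control-allow-origin' || true
```

If the second command prints the header twice, the ingress and the agent are both answering CORS, and removing one of the two (as done here) resolves it.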