Self-hosted QFieldCloud: API_INTERNAL_SERVER_ERROR / Connect timeout on endpoint URL

Hi everyone,

For a small company, we started setting up a small self-hosted QFieldCloud instance on an Ubuntu server.

It is set up as documented in the README with docker-compose.override.standalone.yml, based on Release v0.32.5 and Docker version 28.0.4. Currently it is running with DEBUG=1 and ENVIRONMENT=staging.

It ran fine for several weeks, including creating projects, syncing these projects in QField and QGIS, and adding GeoPackage and PostGIS layers.

That changed when Docker on the server was updated to version 29.1.4. Since then we have been experiencing an API_INTERNAL_SERVER_ERROR (according to the job's error type in the web interface) when trying to sync projects in QField or QGIS. The docker compose logs for the app show the following error: "Connect timeout on endpoint URL: http://172.17.0.1:8009/qfieldcloud-local".

MinIO (localhost:8010) and the web interface (localhost:8011) are reachable via browser over SSH. The project list is shown in QField and can also be refreshed. It is just not possible to sync any projects.

Has anyone of you who is self-hosting ever experienced such an error, and/or can you help us out?

Hi!

Does the issue occur with all projects on the server, or only with a specific one?

Have you tried creating a new project with minimal content (for example, a single map layer) to see whether syncing works there?

Do the app or jobs container logs show any additional errors beyond the timeout?

The Docker update may have introduced changes to how internal networking works, or some containers might not have been restarted properly.

Ensure all relevant containers are running and part of the same Docker network.

docker ps
docker network ls
docker network inspect qfieldcloud_default # change this to your default network

Have you tested the update on a testing environment, and did it work there?

Hi,

thanks for your quick reply.

Yes, it occurs with all existing projects.

I tried creating a new project in QGIS via the sync plugin. The new project appears in the web interface under the listed projects, so some pushing worked, but it has status "failed" and status code "failed_process_projectfile". In QGIS I get the error "504 Gateway Time-out" from nginx/1.28.1.

The nginx container logs give me this error: 172.17.1.1 - - [14/Jan/2026:15:51:38 +0000] "GET /nginx-status HTTP/1.1" 444 0 "-" "Python-urllib/3.12" "-"

The app container logs don't give much more information than the timeout error, only some additional "Internal Server Error: /api/v1/files/….."

What do you mean by the jobs container?

Regarding the Docker update: no, we did not test it in a testing environment.

docker ps and also systemctl status docker look normal; all containers are running.

Here is the network output. All containers are in the same network, qfieldcloud_default.

Earlier, after the errors occurred, the Docker containers had the subnet 172.18.x.x. I created an /etc/docker/daemon.json to pin it to 172.17.x.x.

(This was a suggestion I read in the RRZK article "Den privaten IP-Adressbereich von Docker anpassen" ("Adjusting Docker's private IP address range"), and I tried it out, but it doesn't help.)
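For reference, a sketch of the daemon.json we used (the "bip" option pins the default bridge's gateway and subnet; the value is the one from this thread, and the snippet writes a test copy to /tmp first since /etc/docker requires root):

```shell
# Sketch of /etc/docker/daemon.json. "bip" pins the default bridge
# gateway/subnet so it does not change between Docker restarts/updates.
# Written to /tmp here; move it to /etc/docker/daemon.json as root and
# restart the Docker daemon to apply it.
cat > /tmp/daemon.json <<'EOF'
{
  "bip": "172.17.0.1/24"
}
EOF
# Validate that the file is well-formed JSON before installing it
python3 -m json.tool /tmp/daemon.json
```

As noted above, pinning the subnet kept the addresses stable but did not fix the sync errors by itself.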

I guess the issue is some miscommunication between the containers… What should the qfieldcloud_bridge network look like?

Thanks for your help!

This is the output of docker ps:

Can you check your .env file and the used Docker Compose files for any MinIO/storage connection settings (likely something like STORAGE_ENDPOINT_URL)?

I suspect you might have a hardcoded IP address like http://172.17.0.1:8009 that’s no longer valid after the Docker update.

Try replacing it with the MinIO service name instead (likely http://minio:9000, check your docker-compose.yml to confirm the service name). Docker’s internal DNS will resolve it automatically, and this should prevent issues when network configurations change.
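As a sketch, the relevant .env line would then look like this (the variable name is the one mentioned above; the service name minio and container port 9000 are assumptions to be confirmed against your docker-compose.yml):

```shell
# .env fragment (sketch): point storage at the compose service name instead
# of the bridge-gateway IP, so Docker's embedded DNS resolves it no matter
# how the networks are re-created after a Docker update.
STORAGE_ENDPOINT_URL=http://minio:9000
```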

If it’s not MinIO, can you figure out what service is supposed to be reachable behind that IP?

Yes, the STORAGE_ENDPOINT_URL is hardcoded with http://172.17.0.1:8009 in the .env.

The setup for the storage looks like this:

I can't find the service name for MinIO in the docker-compose.yml, but docker compose ps shows me that its service name is minio. So I changed it in the .env, as you suggested, to http://minio:9000. This led to changes in QGIS when I try to sync a project.

I can now choose whether I want to synchronize locally or from the cloud, and when I click on cloud, the following error message appears:

Earlier I couldn’t even choose this option, I directly got the error Failed to update the project files status from the server in QGIS.

If it’s not MinIO, can you figure out what service is supposed to be reachable behind that IP?

The IP is also the gateway to the bridge network (docker0):

docker inspect -f '{{range .IPAM.Config}}{{.Gateway}}{{end}}' bridge

172.17.0.1

What should the network setup in Docker look like? I saw in the docker-compose.yml that some containers use the host network and for others nothing is defined (so they use the bridge). And then there is also a qfieldcloud_default network created. The bridge has the subnet 172.17.0.0/24 and qfieldcloud_default the subnet 172.17.1.0/24. For me this is kind of confusing. Is it possible that there is something wrong with that?

Update: After changing the STORAGE_ENDPOINT_URL to http://minio:9000, I tried again to create a new project. It now works and can be synchronized with the cloud in QGIS. However, I cannot synchronize it in QField; I now get a download error there (no further information is provided). The docker compose app logs now show a new error:

The already existing projects are still not syncable, neither in QGIS nor in QField.

Does the error shown come from attempting to sync any of the old projects, or from the freshly created one?
If it’s from the new one, can you check your jobs for errors? Is the processing job (the first one after uploading a new project to QFieldCloud) properly generating thumbnails?

Judging from what you are describing and the first line of the log file you shared, “normal” file downloads seem to work now. Can you confirm this by downloading a project file or attachment manually?

After the sync fails in QField, does QField give you a log message? Even if opening the project fails, the logs are usually preserved between opening different projects.
Can you open an existing one and check if the logs display any information?

Also, I noticed you only mentioned “based on Release v0.32.5”. Are you still running Release v0.32.5?

The error is from a freshly created project.
I can't find any error in the logs showing there is an issue creating the thumbnail, and the thumbnail.png also appears properly in MinIO in the new project's meta folder. I'm not sure the error is actually caused by the thumbnails; maybe it's just the first thing QField wants to download?

Normal file downloading, directly from within the MinIO web interface (SSH and then localhost:8010), works fine. I also opened the manually downloaded project and checked whether the app packaged it properly for QField - it does.

I guess there is a general issue with the downloading process in QFieldCloud: uploads and packaging work fine now, but somehow it doesn't seem to be able to communicate with the outside again.

Thanks for the hint; the Message Log from QField shows this:

Yes, we are still using version v0.32.5. I know it is outdated, but we started setting up the hosting when this version was the latest, and we want to get everything up and running first before we perform an update. It is only a side project, so development is progressing very slowly.

I’m afraid I’m out of ideas.

Do you have the option to clone the server and see if updating QFieldCloud might actually fix the issues?

Hi Pascal,

I guess the issue is kind of weird. Yes, perhaps cloning and updating is the best idea.

Thank you so so much for your time and support. I really appreciate it !


Hi everyone,

I was able to fix the problem:

  • the STORAGE_ENDPOINT_URL in the .env file needed to be changed, as Pascal suggested, from 172.17.0.1:8009 to minio:9000
  • the hardcoded IP was the Docker bridge gateway, which can be reached from the host as well as from within Docker; 8009 was the host port of MinIO
  • but this solution was not stable and broke when an update of Docker was applied
  • so it was changed to the service name of MinIO and its container port
  • this led to a download problem, because container port 9000 is only reachable from within Docker, and the service name minio needs to be resolved to its IP address
  • to solve this, the DNS needed to be changed in the file …/docker-nginx/templates/default.conf.template: from 8.8.8.8 (the default Google DNS) to 127.0.0.11 (Docker's internal DNS)

  • then I rebuilt the nginx container, so that the new DNS was loaded
  • and now it works :) (and I hope it will stay stable)
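The nginx-side change from the list above can be sketched like this (the surrounding layout of default.conf.template may differ in your checkout; resolver is standard nginx syntax, and 127.0.0.11 is the address of Docker's embedded DNS inside containers):

```nginx
# docker-nginx/templates/default.conf.template (fragment, sketch)
# before: resolver 8.8.8.8;   <- Google's public DNS cannot resolve
#                                compose service names like "minio"
resolver 127.0.0.11;          # Docker's embedded DNS resolves "minio"
```

After editing the template, the nginx container has to be recreated (for example with docker compose up -d --force-recreate nginx, assuming the service is named nginx) so the rendered config picks up the new resolver.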