1. Deploy a Docker image as a container to my already existing instance. The Docker image is stored in Artifact Registry.
2. Run the container on the instance.
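For context, a minimal sketch of what these two steps can look like on the instance. The region, project, repo, and image names below are placeholders, not values from this thread — substitute your own:

```shell
# Placeholder names throughout -- substitute your own region/project/repo/image.
IMAGE="us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest"

# 1. Configure Docker credentials for Artifact Registry, then pull the image.
docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
docker pull "$IMAGE"

# 2. Run the container on the instance.
docker run -d --name my-app "$IMAGE"
```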
Please advise on how to solve the permission denied error. To add further context: the service account associated with the instance has the Artifact Registry Reader role assigned to it, and the instance has access to all the APIs.
On Artifact Registry, inside your repo, click on Copy Path:
Unless:
If your administrator set up repositories with gcr.io domain support, requests to gcr.io hostnames are automatically redirected to a corresponding Artifact Registry repository. To use a gcr.io repository hosted on Artifact Registry, you must have an appropriate Artifact Registry role or a role with equivalent permissions.
If so, you can ignore the above recommendation.
Concerning the permission denied, it seems that you correctly used export HOME=/home/appuser as it’s recommended in the linked thread and mentioned in the documentation regarding Access to Artifact Registry images.
Furthermore, another page says:
Note: If you normally run Docker commands on Linux with sudo, Docker looks for Artifact Registry credentials in /root/.docker/config.json instead of $HOME/.docker/config.json. If you want to use sudo with docker commands instead of using the Docker security group, configure credentials with sudo docker-credential-gcr configure-docker instead.
I believe you’re having the same issue, so you’re going to need sudo with your command, or use another $HOME path where your user has access. Neither option is ideal from a best-practice standpoint.
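To make the note above concrete, here is the sudo variant as a sketch (the registry and image names are placeholders, not values from this thread):

```shell
# With sudo, Docker looks for credentials in /root/.docker/config.json,
# so configure the credential helper as root too.
# Registry and image names are placeholders -- substitute your own.
sudo docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
sudo docker pull us-central1-docker.pkg.dev/my-project/my-repo/my-app:latest
```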
Thanks for the solution. However, after setting the home directory with “export HOME=/home/appuser” and then configuring Docker to access Artifact Registry with “docker-credential-gcr configure-docker --registries us-central1-f-docker.pkg.dev”, I was getting a permission denied error.
So, as per your recommendation, I tried “sudo docker-credential-gcr configure-docker --registries us-central1-f-docker.pkg.dev”, but then again I got the below error:
But then I skipped setting the home directory with “export HOME=/home/appuser”, directly tried to configure Docker for accessing Artifact Registry, and successfully managed to authenticate.
Also, upon using the below script, which specifies the particular registry, I managed to eliminate the warning as well.
So my question now is: if I don’t set the home directory and pull the Docker image into the instance, which I was able to do successfully, is that advisable?
And also, why am I still not able to follow the proper path as suggested in the documentation? My instance has full access to Cloud APIs, and the service account for my Artifact Registry also has the necessary roles as suggested in the documentation.
I am still working on getting the app running on the cloud, but that is a different challenge, and I am hopeful I will be able to solve that as well.
The Container-Optimized OS root filesystem is always mounted as read-only. Additionally, its checksum is computed at build time and verified by the kernel on each boot. This mechanism prevents attackers from “owning” the machine through permanent local changes. Additionally, several other mounts are non-executable by default. See Filesystem for details.
So that’s normal.
I think you may have to set export HOME=/home/appuser in the startup script, but that’s not clear to me.
Otherwise, you could use a writable path for the Docker config by using export DOCKER_CONFIG=xxx where xxx is a writable path (so, not /root/*).
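For example, a minimal sketch of the DOCKER_CONFIG approach (the directory name below is arbitrary; any path your user can write to works):

```shell
# Point Docker at a writable config directory instead of /root/.docker.
# The directory name is arbitrary -- anything your user can write to works.
export DOCKER_CONFIG="$HOME/.docker-ar"
mkdir -p "$DOCKER_CONFIG"

# The credential helper will now write its config.json under $DOCKER_CONFIG
# (registry name is a placeholder):
# docker-credential-gcr configure-docker --registries us-central1-docker.pkg.dev
```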
So my question now is: if I don’t set the home directory and pull the Docker image into the instance, which I was able to do successfully, is that advisable?
I would say yes, because I think the tool you were trying to use at first would have pulled it anyway. If it’s a one-time job, I wouldn’t give myself too many headaches over it.
And also why am I still not able to follow the proper path as suggested in the documentation. My instance has full access to Cloud APIs and the service account for my Artifact Registry also has the necessary roles as suggested in the documentation.
You’re right about the APIs access but your issue here is that the root file system of Container-Optimized OS is hardened. It is not related to your access level on GCP.
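If you want to double-check the IAM side anyway, one way to do it is to list who holds the Artifact Registry Reader role on the project (the project ID below is a placeholder):

```shell
# Placeholder project ID -- lists members holding roles/artifactregistry.reader.
gcloud projects get-iam-policy my-project \
  --flatten="bindings[].members" \
  --filter="bindings.role:roles/artifactregistry.reader" \
  --format="table(bindings.members)"
```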
I am still working on getting the app running on the cloud, but that is a different challenge, and I am hopeful I will be able to solve that as well.
Also, can you shed some light on the issue described below?
I managed to pull the image into the instance. Now, when I try to run the image as a container, I get the below error saying “No such file or directory”. Can you help me point out my mistake? I think it’s something to do with the root file system, which you mentioned earlier as well.
However, I am getting stuck at running the container inside the VM. After successfully SSHing into the instance and deploying the image, when I run the container, the below error pops up.
I have already checked for a Docker client calling the Docker API inside my Docker image/Python app script and couldn’t find anything. I am using the Selenium and Selenoid libraries, but I commented them out (# hashed them) and deployed the image again, and still encountered the same error.
Any thoughts on what I can do next to resolve this?