Facepalm!!! The Docker containers were communicating just fine; the problem was I hadn't told Resque (the app using Redis) where to find it. Thank you to "The Real Bill" for pointing out I should be using docker-cli.

For anyone else using Docker and Resque, you need this in your config/initializers/resque.rb file:

    Resque.redis = Redis.new(host: 'redis', port: 6379)
    Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }
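Since the compose file in the question already sets REDIS_URL, a variant of the same initializer (a sketch, not the author's exact code) is to read that variable and fall back to the service hostname:

```ruby
# config/initializers/resque.rb - assumes the REDIS_URL set in docker-compose.yml
Resque.redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://redis:6379'))
Resque.after_fork = Proc.new { ActiveRecord::Base.establish_connection }
```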
I have migrated my Rails app (local dev machine) to Docker Compose. All is working except the worker Rails instance (batch), which cannot connect to Redis:

    Completed 500 Internal Server Error in 40ms (ActiveRecord: 2.3ms)
    Redis::CannotConnectError (Error connecting to Redis on 127.0.0.1:6379 (Errno::ECONNREFUSED)):

In my docker-compose.yml:

    redis:
      image: redis
      ports:
        - "6379:6379"
    batch:
      build: .
      command: bundle exec rake environment resque:work QUEUE=*
      volumes:
        - .:/app
      links:
        - db
        - redis
      environment:
        - REDIS_URL=redis://redis:6379

I think the Redis instance is available via the IP of the Docker host:

    $ docker-machine ls
    NAME      ACTIVE   DRIVER       STATE     URL                         SWARM   DOCKER    ERRORS
    default   *        virtualbox   Running   tcp://192.168.99.100:2376           v1.10.0

Accessing via 0.0.0.0 doesn't work:

    $ curl 0.0.0.0:6379
    curl: (7) Failed to connect to 0.0.0.0 port 6379: Connection refused

Accessing via the docker-machine IP I think works:

    $ curl http://192.168.99.100:6379
    -ERR wrong number of arguments for 'get' command
    -ERR unknown command 'Host:'

EDIT: After installing redis-cli in the batch instance, I was able to hit the Redis server using the 'redis' hostname. I think the problem is possibly in the Rails configuration itself.
Docker-compose - Redis at 0.0.0.0 instead of 127.0.0.1
When you explicitly set container_name: in your docker-compose.yml file, the container name will be exactly what you specify; Docker Compose won't add its per-directory prefix to it.

Usually this doesn't matter to you at all, and it's safe to remove container_name:. You will still be able to reach other containers using their service name in docker-compose.yml as hostnames, and the docker-compose CLI provides wrappers for management commands like docker-compose stop that will act on the correct container.

You will also hit trouble with ports:, and there is less of a clear solution here. Only one container or process can bind to a specific port on the host. You can leave off the first (host) port number in ports:, and Docker will pick a port for you:

    ports:
      - '80'
      - '443'

but then you need to manually look up the corresponding host port number:

    docker-compose port service1 80
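To make that concrete, a minimal sketch based on the compose file in the question (not the author's exact fix; the image variable and service name come from the question):

```yaml
services:
  service1:
    image: "${PROJECT_REPO}:image"
    # no container_name: Compose adds its project/directory prefix automatically
    ports:
      - "443"   # container port only; Docker picks a free host port per project
      - "80"
```

After bringing the project up, the dynamically assigned host port can be looked up with `docker-compose port service1 443`.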
I have a set of docker that I run usingdocker-compose up -dpretty basic so farI want to run multiple instances of my project and I have read thisRun multiple docker composeNow when runningdocker-compose up -p PRNAME -dcompose is not prepending project name to containers likeprname_container1and I'm getting the following error :ERROR: for container1 Cannot create container for service service1: Conflict. The container name "/container1" is already in use by container "f7aeb2ef782556ae5b0". You have to remove (or rename) that container to be able to reuse that name.I might be missing something out here.A part of my docker-compose.yml looks likeservices: service1: image: "${PROJECT_REPO}:image" container_name: "container1" ports: - 443:443 - 8000:80 networks: - db - proxy - oauth depends_on: - db
Docker compose not prepending project name to the container names when running multiple instances
You have to set the PATH environment variable in the Dockerfile with:ENV PATH=~/.linuxbrew/bin:~/.linuxbrew/sbin:$PATHHere is a complete working Dockerfile:FROM debian RUN apt-get update && apt-get install -y git curl binutils clang make RUN git clone https://github.com/Homebrew/brew ~/.linuxbrew/Homebrew \ && mkdir ~/.linuxbrew/bin \ && ln -s ../Homebrew/bin/brew ~/.linuxbrew/bin \ && eval $(~/.linuxbrew/bin/brew shellenv) \ && brew --version \ && brew tap aws/tap && brew install aws-sam-cli \ && sam --version ENV PATH=~/.linuxbrew/bin:~/.linuxbrew/sbin:$PATH
I am trying to install setup a docker image and want certain Homebrew packages pre-installed when I run the container. I am able to build it just fine and version statements are working as expected but when I run the installed packages are missing. Any idea what I am doing wrong?RUN git clone https://github.com/Homebrew/brew ~/.linuxbrew/Homebrew \ && mkdir ~/.linuxbrew/bin \ && ln -s ../Homebrew/bin/brew ~/.linuxbrew/bin \ && eval $(~/.linuxbrew/bin/brew shellenv) \ && brew --version \ && brew tap aws/tap && brew install aws-sam-cli \ && sam --version
Installing homebrew packages during Docker build
You could take advantage of the--virtualbox-boot2docker-urloption.This issueillustrates its usage (with an iso which isnota TinyCore one, but aRancherOS one)docker-machine create -d virtualbox --virtualbox-boot2docker-url https://releases.rancher.com/os/latest/machine-rancheros.iso rancherIf RancherOS is a bit too bare, you can take some clues fromhow boot2docker is currently built, and build your own distro.The key is to remove the parts not needed in order to be able to launch headless VM without using too much memory.# Remove useless kernel modules, based on unclejack/debian2docker RUN cd $ROOTFS/lib/modules && \ rm -rf ./*/kernel/sound/* && \ rm -rf ./*/kernel/drivers/gpu/* && \ ...
Is there a possibility to simply create a docker-machine that is non-boot2docker based (i.e., Ubuntu based) (which uses virtualbox driver)?I would like to have full-featured Linux distro running the docker daemon on my mac instead of Tiny Core Linux distro which is fast and lightweight but doesn't offer me all the debugging facilities I need.I know I can create it manually. I'm just wondering if there is a simple way such asdocker-machine createis.
Ubuntu based docker-machine image
Helm uses the Kubernetes Deployment, which has different terminology than Docker. You'll want to define:

- command in Helm for entrypoint in Docker Compose (see this post)
- workingDir in Helm for working_dir in Docker Compose (see this post)

For your example it would be:

    ...
    containers:
      - name: checklist
        ...
        command: ["dotnet", "Checklist.dll"]  # Docker entrypoint equivalent
        workingDir: "/checklist"              # Docker working_dir equivalent
I have the followingdocker-composefile and I don't get how I can set theworking_dirandentrypointin the helmdeployment.yaml. Does someone have an example on how to do this?docker-composeversion: "3.5" services: checklist: image: ... working_dir: /checklist entrypoint: ["dotnet", "Checklist.dll"] ...
How to configure docker entrypoint in Helm charts
This is not mentioned in the wiki article on modules, but from reading the updated docs on the go tool, I found out that when using Go modules, the go tool will still use GOPATH to store the available sources, namely $GOPATH/pkg/mod.

This means that for my local dev setup, I can 1. define the GOPATH in the container and 2. mount the local $GOPATH/pkg/mod into the container's GOPATH:

    web:
      image: golang:1.11rc2
      working_dir: /app
      volumes:
        - .:/app
        - $GOPATH/pkg/mod:/go/pkg/mod
      environment:
        - GOPATH=/go
        - PORT=9999
      command: go run cmd/my-project/main.go
I am migrating a Go 1.10 app to Go 1.11. This also includes migrating fromdeptomodfor managing dependencies.As the application depends on a database, I am using adocker-composeto set up the local development environment. With Go 1.10 I simply mounted the local repository (including thevendorfolder) into the correct location in the container'sGOPATH:web: image: golang:1.10 working_dir: /go/src/github.com/me/my-project volumes: - .:/go/src/github.com/me/my-project environment: - GOPATH=/go - PORT=9999 command: go run cmd/my-project/main.goSince Go 1.11 ditchesGOPATH(when using modules that is) I thought I could just do the following:web: image: golang:1.11rc2 working_dir: /app volumes: - .:/app environment: - PORT=9999 command: go run cmd/my-project/main.goThis works, but every time Idocker-compose up(or any other command that calls through to the Go tool) it will resolve and re-download the dependency tree from scratch. This does not happen (rather only once) when I run the command outside of the container (i.e. on my local OS).How can I improve the setup so that the Docker container persists the modules being downloaded by thegotool?
How can I persist go 1.11 modules in a Docker container?
Yes, you can create networks with TestContainers. We're going to document it soon, but it's as simple as:First, create a network:@Rule public Network network = Network.newNetwork();Then, configure your containers to join it:@Rule public NginxContainer nginx = new NginxContainer<>() .withNetwork(network) // <--- Here .withNetworkAliases("nginx") // <--- "hostname" of this container .withCustomContent(contentFolder.toString()); @Rule public BrowserWebDriverContainer chrome = new BrowserWebDriverContainer<>() .withNetwork(network) // <--- And here .withDesiredCapabilities(DesiredCapabilities.chrome());Now Nginx container will be visible to Chrome as "http://nginx/".The same example in our tests:https://github.com/testcontainers/testcontainers-java/blob/540f5672df90aa5233dde1dde7e8a9bc021c6e88/modules/selenium/src/test/java/org/testcontainers/junit/LinkedContainerTest.java#L27
Looks like I need a network because I would like to reference one container by hostname from another.I could also use the--linkbut it is deprecated and can disappear soon. That's why I wonder if Testcontainers can create a docker network for me.With command line I would just executedocker network create bridge2and then I can start containers like this:docker run -it --rm --net=bridge2 --name alpine1 alpine docker run -it --rm --net=bridge2 --name alpine2 alpineand resolvenslookup alpine2fromalpine1container.If I try to use default--net=bridgenetwork or skip--netoption (which is actually the same) referencing by name will not work.
Can Testcontainers create docker network for me if it does not exist?
The npm install should have worked based on your Dockerfile. You can see the created files if you run the image without a mounted volume (DIRNAME: where your docker-compose.yml is located):

    docker run --rm -it DIRNAME_node ls -ahl /usr/src/app

With docker build, all data is stored in the image. So it's intended that you don't see any files created on your host.

If you mount a volume (generally in Linux, also in a Docker container), it overlays the directory. So you can't see the node_modules created in the build step.

I suggest you do your tests based on the Docker image itself and don't mount the volume. Then you have an immutable Docker image, which is better for deployment.
This is my Dockerfile:FROM node:7 RUN apt-get update && apt-get install -y --no-install-recommends \ rubygems build-essential ruby-dev \ && rm -rf /var/lib/apt/lists/* RUN npm install -gq gulp bower RUN mkdir -p /usr/src/app WORKDIR /usr/src/app COPY . /usr/src/app RUN npm install CMD ["gulp", "start:dev"]When I build the image, the npm install command executes with little output and really quickly. I actually build it through docker-compose which does have a volume mounted - and I cannot see the node_modules folder being created on my hose. When I launch a container on this image, I can see there is no node_modules folder. I then execute npm install and things start working - it takes 2-3 minutes to install all the packages and the node_modules folder is indeed created.What is happening here? What am I doing wrong? Why doesn't npm install work at build time, but then it works at run time?
npm install doesn't work in Docker
/dev is a special folder on Linux systems reserved for device-related resources (filesystems, disks, etc.) and mounted on a special filesystem. In a Docker container, it is remounted with a dedicated tmpfs filesystem and is not on the main container filesystem (/). See the following example:

    $ docker run -it --rm ubuntu:18.04
    root@17b9ad96ccbc:/# df -h /dev/
    Filesystem      Size  Used Avail Use% Mounted on
    tmpfs            64M     0   64M   0% /dev

In your case, your file was actually copied during the build, but into a temporary filesystem that died as soon as the build was finished.

Conclusion: don't use /dev as a destination; choose another folder.
I want to create a docker image where I add a file to the/devfolder. I'm using thisDockerfile:FROM ubuntu:bionic COPY test.txt /dev/After building this with:docker build -t test .I get a docker image where nothing has been added to the/devfolder. No error has been thrown bydocker build.I find this very strange because copying to different folders works fine. For exampleCOPY test.txt /COPY test.txt /root/COPY test.txt /home/all work fine.Does the/devfolder have some special permissions? How do I copy a file to the/devfolder?I'm usingDocker Toolbox on windows.
Why can't docker build COPY files to the /dev folder?
After some searching I found out that the settings of Docker's user interface are stored in %APPDATA%\Docker\settings.json (e.g. C:\Users\olly\AppData\Roaming\Docker); memory settings are defined in the memoryMiB property.

The following solved the problem on my environment:

1. Quit Docker.
2. Modify the settings.json file, e.g. using notepad %APPDATA%\Docker\settings.json in the Run prompt (Windows key + R).
3. Adjust the value memoryMiB to 1024 (it had been 2048 before).
   - In Docker versions 19.x and later the property is called memoryMiB.
   - In Docker versions 18.x and before the property was called VmMemory.
4. Save settings.json.
5. Start Docker and finally be able to use "switch to Linux containers".
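For orientation, the relevant fragment of settings.json would look something like this on a 19.x install (all other keys omitted; on 18.x the key is VmMemory instead, as described above):

```json
{
  "memoryMiB": 1024
}
```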
ScenarioWindows 10 ProfessionalDocker 18.06.1-ce running in Windows container mode4GB of available memory on host systemusing Hyper-V virtual machineProblemWhen trying to "switch to Linux containers" via Docker's taskbar item the process fails after a couple of seconds showing an error about "Not enough memory to start Docker".Since the host system does not have that much memory, I'd like to reduce the maximum amount of memory the global Docker machine is allowed to use (I think 2 GB is the default here). Thus, I'd like to reduce that to just 1 GB.When having Docker running in Windows container mode, there is no "advanced" section in Docker's settings that would allow to reduct that memory assignment easily.I was able to find the "MobyLinuxVM" using Windows' Hyper-V manager. However, when adjusting memory settings there, it is overwritten each time I start Docker and try again switching to Linux container mode.QuestionIs there a different way to define the maximum amount of memory for Docker without using the user interface (which won't work in this scenario due to the missing "advanced" section in Windows container mode - before being able to switch to Linux containers)?
How to reduce default VM memory for Docker Linux containers on Windows
Unclear why, but cacls doesn't seem to be working when run as part of building the container. Switched to using icacls, and was able to grant the IIS_IUSRS permissions on the folder.

Line added to the Dockerfile:

    RUN icacls 'C:\inetpub\wwwroot\App_Data' /grant 'IIS_IUSRS:(F)'
I've got a Windows Docker container (microsoft/aspnet) that is hosting a simple Web API. The web API accepts files from a form, saves them in a temp folder, does some processing, and then returns the result.This works fine when deployed locally, but when done in my Docker container, I get a file permissions error on my temp folder (App_Data).Is there a way to grant the IIS user the code is running as access to this file, or to open up the folder to any user for read/write access?Current Docker file is below:FROM microsoft/aspnet COPY ./Deploy/ /inetpub/wwwroot RUN mkdir /inetpub/wwwroot/App_DataError message snippet I get running API from docker image:"InnerException":{"Message":"An error has occurred.","ExceptionMessage":"Access to the path 'C:\\inetpub\\wwwroot\\App_Data\\BodyPart_481b6424-f9a5-4608-894d-406145a48445' is denied.","ExceptionType":"System.UnauthorizedAccessException"It looks like there is a bug open on the aspnet-docker github about this same issue.[link]In the meantime, it looks like runningcacls App_Data /G IIS_IUSRS:Fafter starting the container fixes the issue temporarily.
Container File Permissions in Windows Container
You can use container definition parameters in the ECS task definition to pass runtime arguments. The command parameter maps to the COMMAND parameter in docker run:

    "command": [
      "--arg1", "val",
      "--arg2", "val"
    ],

It is also possible to pass parameters as environment variables:

    "environment": [
      { "name": "LOG_LEVEL", "value": "debug" }
    ],
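Putting both together, a single entry in the task definition's containerDefinitions array might look like the following sketch (the name, image and values are taken from the question and are illustrative only, not a complete task definition):

```json
{
  "name": "my-script",
  "image": "my_script:0.1",
  "command": ["--arg1", "val", "--arg2", "val"],
  "environment": [
    { "name": "LOG_LEVEL", "value": "debug" }
  ]
}
```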
Passing arguments to a Docker container running a Python script can be done like sodocker run my_script:0.1 --arg1 val --arg2 val ...I can't seem to figure out how to pass those arguments when running the container on AWS Fargate (perhaps it doesn't work?)
Pass arguments to Python running in Docker container on AWS Fargate
With reference to serving static files, your options depend on the functionality of your application. There's a very nifty tool called dj-static which will help you serve static files by adding very minimal code. The documentation is fairly simple, and all you have to do is follow these steps.
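For reference, the wiring dj-static describes is roughly the following (a sketch of the documented setup; adjust the paths and project names to your own project):

```python
# settings.py
STATIC_ROOT = 'staticfiles'

# wsgi.py
from django.core.wsgi import get_wsgi_application
from dj_static import Cling

# Cling wraps the WSGI application and serves files collected into STATIC_ROOT
application = Cling(get_wsgi_application())
```

With this, gunicorn serves both the app and its static files, so the nginx container only needs to proxy to the gunicorn containers.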
Goal: The set of docker containers for a production django website deployment.My hang up in this process is that usually nginx directly serves the static files... Based on my understanding of a good architecture using docker, you would have a container for your wsgi server (probably gunicorn), a separate nginx container with an upstream server configuration pointing to your gunicorn container. The nginx container can load balance between multiple gunicorn containers.But what this implies is that I have to install my django app's static files in the nginx container, which seems like bad practice since it's primary goal is really load balancingIs it better to have three containers: nginx, gunicorn, and a dedicated static server (possibly nginx or lighthttpd) for static files?
docker, nginx, django and how to serve static files
It's the same as the .gitignore notation, ignoring the specified file in any sub-directory recursively, including the current directory. A single star would only match one level of sub-directories.

For more on the .dockerignore syntax, see: https://docs.docker.com/engine/reference/builder/#dockerignore-file

Here's the statement that's relevant to your question:

    ... Docker also supports a special wildcard string ** that matches any number of directories (including zero). For example, **/*.go will exclude all files that end with .go that are found in all directories, including the root of the build context.
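A small illustrative .dockerignore (made-up entries) showing the difference:

```
# matches .git at the build-context root and in any subdirectory, at any depth
**/.git
# a single star matches only one path level, e.g. ./a/.git but not ./a/b/.git
*/.git
# no wildcard: matches only .git at the root of the build context
.git
```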
In my .dockerignore file, I see many lines starting with two asterisks like below.**/.git **/.gitignore **/.projectWhat does this mean?
What do two asterisks mean in a .dockerignore file?
From what I can tell, it's not possible.The simplest thing to do is to run the docker create command with the entrypoint override and it's args as a build step. Something like this:name: Node CI on: [push] jobs: build: runs-on: ubuntu-latest steps: ... - run: docker create --name build_etcd --network-alias etcd --entrypoint etcd microbox/etcd:2.1.1 --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:237 ...However, what I wound up doing is to just build and run the docker-compose.yml that I already knew worked as a step in the workflow. Here's the docker-compose.version: '3' services: config: build: . links: - etcd etcd: image: microbox/etcd:2.1.1 entrypoint: "etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379" hostname: etcd container_name: build_etcd expose: - 2379And here are the related steps:steps: - uses: actions/checkout@v2 - name: Build run: docker-compose build --pull --force-rm config - name: Test run: docker-compose run --rm config test ...
I need to define a service in my GitHub action and override its entrypoint by adding arguments to it. How can I do this?Here's a docker-compose that works that I'm trying to translate.version: '2' services: config: build: . links: - etcd etcd: image: microbox/etcd:2.1.1 entrypoint: "etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379" hostname: etcd container_name: build_etcd expose: - 2379Here's my Action and how I initially thought it'd work...name: Node CI on: [push] jobs: build: runs-on: ubuntu-latest strategy: matrix: node-version: [12.x] services: etcd: image: microbox/etcd:2.1.1 options: --entrypoint 'etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379' steps: ...However, this blows up when initializing containers because the command it runs isn't right.../usr/bin/docker create --name 1062a703242743a29bbcfda9fc19c823_microboxetcd211_3767cc --label 488dfb --network github_network_244f1c7676b8488e99c66694d06a21f2 --network-alias etcd --entrypoint 'etcd --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379' -e GITHUB_ACTIONS=true microbox/etcd:2.1.1The error isunknown flag: --listen-client-urlsI think it should actually be like this.../usr/bin/docker create --name 1062a703242743a29bbcfda9fc19c823_microboxetcd211_3767cc --label 488dfb --network github_network_244f1c7676b8488e99c66694d06a21f2 --network-alias etcd --entrypoint etcd -e GITHUB_ACTIONS=true microbox/etcd:2.1.1 --listen-client-urls http://0.0.0.0:2379 --advertise-client-urls http://0.0.0.0:2379Any ideas how within a GitHub Action Service definition I can override the entrypoint with arguments being passed to the executable?
In a GitHub Action, how do I override a service's entrypoint?
Spark submit can take additional args like,--conf spark.driver.bindAddress, --conf spark.driver.host, --conf spark.driver.port, --conf spark.driver.blockManager.port, --conf spark.port.maxRetries. Thespark.driver.hostanddriver.portis used to tell Spark Executor to use this host and port to connect back to the Spark submit.We usehostPortandcontainerPortto expose the ports inside the container, inject the port range andhostIPas the environment variables to the Pod so that spark-submit knows what to use. So those additional args are:--conf spark.driver.bindAddress=0.0.0.0` # has to be `0.0.0.0` so that it is accessible outside pod --conf spark.driver.host=$HOST_IP # k8s worker ip, can be easily injected to the pod --conf spark.driver.port=$SPARK_DRIVER_PORT # defined as environment variable --conf spark.driver.blockManager.port=$SPARK_DRIVER_PORT # defined as environment variable --conf spark.port.maxRetries=$SPARK_PORT_MAX_RETRIES # defined as environment variableThehostPortis local to the Kubernetes worker, which means we don’t need to worry about the run out of ports. The k8s scheduler can find a host to run the pod.We can reserve the ports from 40000 to 49000 on each host, and open 8 ports for each pod (as each spark-submit requires 2 open ports). The ports are chosen based on the pod_id. Since Kubernetes recommends running less than 100 pods per node, the ports collision will be very rare.
I am trying to usespark-submitwithclientmode in the kubernetes pod to submit jobs to EMR (Due to some other infra issues, we don't allowclustermode). By default,spark-submituses thehostnameof the pod as thespark.driver.hostand thehostnameis the pod's hostname sospark executorcould not resolve it. And thespark.driver.portis also locally to the pod (container).I know a way to pass some confs tospark-submitso that thespark executorcan talk to thedriver, those configs are:--conf spark.driver.bindAddress=0.0.0.0 --conf spark.driver.host=$HOST_IP_OF_K8S_WORKER --conf spark.driver.port=32000 --conf spark.driver.blockManager.port=32001and create a service to in the kubernetes so thatspark executorcan talk to thedriver:apiVersion: v1 kind: Service metadata: name: spark-block-manager namespace: my-app spec: selector: app: my-app type: NodePort ports: - name: port-0 nodePort: 32000 port: 32000 protocol: TCP targetPort: 32000 - name: port-1 nodePort: 32001 port: 32001 protocol: TCP targetPort: 32001 - name: port-2 nodePort: 32002 port: 32002 protocol: TCP targetPort: 32002But the issue is there are can be more than 1 pods running on one k8s worker and even more than 1spark-submitjobs in one pod. So before launching a pod, we need to dynamically select few available ports in the k8s node and create a service to do the port mapping and then during launching the pod, pass those ports into the pod to tellspark-submitto use them. I feel like this is a little bit complex.UsinghostNetwork: truecould potentially solve this issue but it introduces lots of other issues in our infra so this is not an option.Ifspark-submitcan supportbindPortconcept just likedriver.bindAddressanddriver.hostor supportproxy, it will be cleaner to solve the issue.Does someone have similar situation? Please share some insights.Thanks.Additional context:spark version:2.4
Spark/k8s: How to run spark submit on Kubernetes with client mode
If you are using TypeScript in your Node application, then follow these instructions.

Add the entry below under the compilerOptions section of tsconfig.json:

    "outDir": "./dist/"

In package.json, add the build script too:

    "scripts": {
      "build": "tsc"
    }

Now re-run npm run build. You will see the dist folder.
I usually write Dockerfiles for Java / Go applications and it's the first time I have encountered a situation where I have to write a Dockerfile for an already existing (and production running) Node.js application. As per my little knowledge about the Node.js which I acquired in the past couple of days,distfolder is generated after we build a Node.js project which carries the source code (please correct me if I am wrong here). So I am interested in copying thedistfolder from parent Docker image to child Docker image.However, after I copy everything from an application into my parent Docker image (line 6) and run 'npm run build' command, dist folder is not generated for me (please note thenode_modulesand package-lock.json are being generated).My Dockerfile is as below:FROM node:10-alpine as BUILD WORKDIR /src COPY package*.json /src RUN apk add --update --no cache \ python \ make \ g++ RUN npm install COPY . /src RUN npm run buildHow can I resolve this?
'dist' folder is not generated while doing npm build in a Dockerfile
Is it possible to configure docker-compose/docker-networking to route by the port to allow the same IP address to be used for different containers?Yes we can(familiar? -_-!). There is an option of network mode presented by Docker, calledservice:service-name.When we executedocker run, we could add--network=service:service-nameflag. It means that current container uses the same network namespace ofservice:service-name.More informationreference here.Try the following compose file below. I've tested it, which works well.version: '2' services: web: image: nginx networks: public: ipv4_address: 10.0.0.2 ports: - "8880:80" - "2220:22" ssh: image: panubo/sshd network_mode: "service:web" depends_on: - web networks: public: external: true
We currently have docker containers with complex builds using supervisord so that we can group services together. For example, nginx and ssh.I'm attempting to rebuild these with more service-driven isolation linked by shared volumes. However,without mapping the IP to the host, I can't seem to find a way to allow IP addresses to be shared even though the ports may be discrete.What I'm trying to do is something like this:version: '2' services: web: image: nginx volumes: - /data/web:/var/www networks: public: ipv4_address: 10.0.0.1 ports: - "10.0.0.1:80:80" ssh: image: alpine-sshd volumes: - /data/web:/var/www networks: public: ipv4_address: 10.0.0.1 ports: - "10.0.0.1:22:22" networks: public: external: true...wherepublicis a predefined docker macvlan network.When I try this, I get the error:ERROR: for ssh Cannot start service ssh: Address already in useI'm aware that another solution to this is to introduce a third service to work as a proxy. However, I thought this would be a simple enough case not to need it.Is it possible to configure docker-compose/docker-networking to route by the port to allow the same IP address to be used for different containers?
Can docker-compose share an ip between services with discrete ports?
The problem is here:

    The mongo containers have same shared volumes with the docker host

You cannot run two mongo instances on the same data directory. It would lead to data corruption and strange problems, so MongoDB explicitly prohibits doing that (see also this question here). Why do you want to do this? Normally you would provide two different volumes for your mongo instances, like this:

    version: '3'
    services:
      frontend:
        image: fernandomaxwell/frontend
        ports:
          - "3007:3007"
        networks:
          main:
          database_frontend:
      backend:
        image: fernandomaxwell/backend
        ports:
          - "2007:2007"
        networks:
          main:
          database_backend:
      mongo_backend:
        image: mongo
        volumes:
          - "/var/lib/mongodb-back:/data/db"
        ports:
          - "27017:27017"
        networks:
          database_backend:
      mongo_frontend:
        image: mongo
        volumes:
          - "/var/lib/mongodb-front:/data/db"
        ports:
          - "27018:27017"
        networks:
          database_frontend:
    networks:
      main:
      database_backend:
      database_frontend:

Additionally, you should consider using named volumes instead of host paths. That way you don't need to take care of creating the directories on the host before starting the compose file. To use named volumes, just change the volume declaration from "/var/lib/mongodb-back:/data/db" to "mongodb-back:/data/db" (see the sketch below).
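One extra detail when switching to named volumes (a sketch; this block is not in the compose file above): in a version 3 compose file the volume names also have to be declared under a top-level volumes key, at the same level as services: and networks:

```yaml
volumes:
  mongodb-back:
  mongodb-front:
```

Compose then creates and manages those volumes itself, and they survive until removed with docker-compose down -v or docker volume rm.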
I want to run two mongo docker containers with docker compose. The mongo containers have same shared volumes with the docker host. When I ran it with docker compose, only one mongo container is working meanwhile the other is shutting down because it saidDBPathInUse: Unable to lock the lock file: /data/db/mongod.lock (Unknown error). Another mongod instance is already running on the /data/db directory, terminatingThis is my docker compose fileversion: '3' services: frontend: image: fernandomaxwell/frontend ports: - "3007:3007" networks: main: database_frontend: backend: image: fernandomaxwell/backend ports: - "2007:2007" networks: main: database_backend: mongo_backend: image: mongo volumes: - "/var/lib/mongodb:/data/db" ports: - "27017:27017" networks: database_backend: mongo_frontend: image: mongo volumes: - "/var/lib/mongodb:/data/db" ports: - "27018:27017" networks: database_frontend: networks: main: database_backend: database_frontend:Any idea to solve this?
Can't run multiple mongodb docker container with same shared volume
The password was saved to /var/jenkins_home/secrets/initialAdminPassword.

You can use

    docker exec <container> cat /var/jenkins_home/secrets/initialAdminPassword

where <container> is your container id or name.
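As a usage note (not part of the original answer): the official Jenkins image also prints the generated password to the container log on first start, so an alternative is:

```
docker ps                  # find the container id or name
docker logs <container>    # the initial admin password appears in the startup output
```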
I'm new to docker, i trying to use jenkins on docker. So I pull jenkins image with this commanddocker pull jenkinsJenkins installed without any error. After that i started jenkins image like the document said.https://hub.docker.com/r/_/jenkins/docker run -p 8080:8080 -p 50000:50000 -v /your/home:/var/jenkins_home jenkinsSo I tried to loginhttp://localhost:8080/but I got login error. And its said go tohttps://wiki.jenkins-ci.org/display/JENKINS/Loggingfor check. I checked and I need to open/var/log/jenkins/jenkins.logfile to generated admin password. I tried to reach that filed withbash. I didn't get the file.How can I reach the file ? How to reach my generated password or jenkins files on docker etc.Thank you.
Docker Jenkins images login error
Your index.php is not really an application. The application is your Apache or nginx or even PHP's own server.

Because Docker uses features not available in the Windows core, you are running it inside an actual virtual machine. The only purpose for that would be training or preparing images for your real server environment.

There are two main concepts you need to understand for Docker: images and containers.

An image is a template composed of layers. Each layer contains only the differences between the previous layer and some offline system information. Each layer is in fact an image. You should always make your image from an existing base, using the FROM directive in the Dockerfile (reference docs at time of edit; Jan Vladimir Mostert's link is now a 404).

A container is an instance of an image that has run or is currently running. When creating a container (a.k.a. running an image), you can map an internal directory from it to the outside. If there are files in both locations, the external directory overrides the one inside the image, but those files are not lost. To recover them you can commit a container to an image (preferably after stopping it), then launch a new container from the new image, without mapping that directory.
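To make the "base image plus your files" idea concrete for the single index.php in the question, a minimal sketch (not from the original answer; the PHP tag is an arbitrary choice) using the official php image with Apache built in could be:

```dockerfile
# official image that bundles PHP and Apache
FROM php:7.4-apache
# Apache's document root in this image
COPY index.php /var/www/html/
EXPOSE 80
```

Build and run it with `docker build -t my-php-app .` and `docker run -p 8080:80 my-php-app`, then browse to port 8080 on localhost (or on the boot2docker VM's IP when using Docker Toolbox on Windows).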
I am using windows and have boot2docker installed. I've downloaded images from docker hub and run basic commands. BUT How do I take an existing application sitting on my local machine (lets just say it has one fileindex.php, for simplicity). How do I take that and put it into a docker image and run it?
How do I dockerize an existing application...the basics
As far as I know that's not possible. The best thing you can do is to use the -P option to map port 3000 to some random port, so it won't conflict with the main container instance, e.g.

    docker run -it -P <image>

This will result in the following docker ps output:

    main_container   0.0.0.0:3000->3000/tcp
    run_container    0.0.0.0:32769->3000/tcp
I'm working on a Dockerfile for a web app that will use annginx-proxycontainer. It's also got a CLI for doing app domain stuff (creating/modifying db, running cleanup jobs, etc.)In 99% of cases, when I boot a container, I want to use the webapp. I've got anEXPOSE 3000in the Dockerfile and that works perfectly well for NGINX-proxy. Nginx-proxy usesdocker-gen, which listens for docker'sstartandstopevents and re-builds the NGINX config based on exposed ports.The problem comes when I want to run a CLI-based container. Idon'twant toEXPOSE 3000. I want to un-expose port 3000, so NGINX-proxy doesn't change the NGINX configs.Is this possible fromdocker run? Reading the docs didn't give any clarity, and tryingdocker run -p ''didn't do work (I got adocker: No port specified: ::.error).Honestly, this is a bit of a nitpick, and it's not that big of a deal. IcantakeEXPOSE 3000out of the Dockerfile and just do-p 3000on the command line. I just like having it in the Dockerfile so it's on by default and I just have to turn it off in the few instances when I'll need it.I also know that Icanuse a second Dockerfile that inherits from the first.I'm just curious if it's possible to un-publish a port exposed in the Dockerfile when I dodocker run(or possibly in docker-compose).
If I use EXPOSE $PORT in a Dockerfile, can I un-expose the port it when I use `docker run`?
Problem solved by using an ubuntu image instead of an alpine image. Not exactly sure why, but might have to do with the file's user/permission bits getting copied over and not interpreted correctly.
I'm doing something extremely simple. Here is myDockerfile:FROM alpine:latest ADD hello.sh /bin/hello RUN chmod +x /bin/hello CMD /bin/helloThen I build the image:docker build -t hello .Then I run the image:docker run helloAnd here is the output:/bin/sh: /bin/hello: not foundWhy is this happening? If I run:docker run hello cat /bin/helloI can see the content of my script, which makes me even more confused.
File not found in docker container
First thing better to use one of the base images, either fornode-imageand install docker and fordocker-imageand installed node, instead of creating image from scratch. All you needFROM node:buster RUN apt-get update RUN apt install docker.io -y RUN docker --version ENTRYPOINT nohup dockerd >/dev/null 2>&1 & sleep 10 && node /app/app.jssecond thing, The errorCannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?, The reason is you are not starting the docker process in the Dockefile, and also running multiple processes in the container is not recommended, as if Docker process dies you will not know the status, you have to put one process in the background.CMD nohup dockerd >/dev/null 2>&1 & sleep 10 && node /app/app.jsand rundocker run --privileged -it -p 8000:8000 -v /var/run/docker.sock:/var/run/docker.sock your_image
I installed docker inside a container running onubuntu:18.04to run my nodejs app, I need docker installed inside this container because i need to dockerize an other small appHer is my DockerfileFROM ubuntu:18.04 WORKDIR /app COPY package*.json ./ # Install Nodejs RUN apt-get update RUN apt-get -y install curl wget dirmngr apt-transport-https lsb-release ca-certificates software-properties-common gnupg-agent RUN curl -sL https://deb.nodesource.com/setup_12.x | bash - RUN apt-get -y install nodejs # Install Chromium RUN wget -q -O - https://dl-ssl.google.com/linux/linux_signing_key.pub | apt-key add - RUN sh -c 'echo "deb [arch=amd64] http://dl.google.com/linux/chrome/deb/ stable main" >> /etc/apt/sources.list.d/google.list' RUN apt-get update RUN apt-get install -y google-chrome-unstable fonts-ipafont-gothic fonts-wqy-zenhei fonts-thai-tlwg fonts-kacst \ --no-install-recommends RUN rm -rf /var/lib/apt/lists/* # Install Docker RUN curl -fsSL https:/download.docker.com/linux/ubuntu/gpg | apt-key add - RUN apt-key fingerprint 0EBFCD88 RUN add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable" RUN apt-get update -y RUN apt-get install -y docker-ce docker-ce-cli containerd.io RUN npm install COPY . . CMD [ "npm", "start" ] EXPOSE 3000When the container is up, idocker exec -it app bash. If i do aservice docker startthenps ax, got thisPID TTY STAT TIME COMMAND 115 ? Z 0:00 [dockerd] What can i do to be able to use docker inside the container or is there a docker image not using apk but apt-get ?Because when i need to use it, i got this error :Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
How to install Docker inside my ubuntu container?
The main source of information regarding Docker security practice is the page on "Docker security":

    Only trusted users should be allowed to control your Docker daemon. This is a direct consequence of some powerful Docker features. Specifically, Docker allows you to share a directory between the Docker host and a guest container; and it allows you to do so without limiting the access rights of the container.

If you expose the REST API, you should do so over HTTPS. Finally, if you run Docker on a server, it is recommended to run exclusively Docker on that server, and move all other services into containers controlled by Docker.

Regarding the VM, see "Are Docker containers really secure?":

    The biggest problem is everything in Linux is not namespaced. Currently, Docker uses five namespaces to alter a process's view of the system: Process, Network, Mount, Hostname, Shared Memory. While these give the user some level of security it is by no means comprehensive, like KVM (Kernel-based Virtual Machine). In a KVM environment processes in a virtual machine do not talk to the host kernel directly. They do not have any access to kernel file systems like /sys and /sys/fs, /proc/*.
I understand that the docker daemon requires toruns as rootso I'm told this can cause some security implications such as if the container were compromised, attackers can make changes to the host's system files.What precautions can I take to mitigate damage in the case of an attack?Is there a practice that I should be aware when running the docker daemon? I've thought about having a vagrant to up a vm and have docker run in the vm instead.
Running docker securely
CentOS 6:1.Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql' Starting syslog-ng: Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql'You can fix above error by installingsyslog-ng-libdbipackage:yum install -y syslog-ng-libdbi2.Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)' Error initializing source driver; source='s_sys', id='s_sys#0' Error initializing message pipeline;Sincesyslog-ngdoesn't have direct access on the kernel messages, you need to disable (comment) that in its configuration:sed -i 's|file ("/proc/kmsg"|#file ("/proc/kmsg"|g' /etc/syslog-ng/syslog-ng.confCentOS 7:1.Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'Thesystem()source is in default configuration. This source reads platform-specific sources automatically, and reads/dev/kmsgon Linux if the kernel is version 3.5 or newer. So, we need to disable (comment)system()source in configuration file:sed -i 's/system()/# system()/g' /etc/syslog-ng/syslog-ng.conf2.When we start it in foreground modesyslog-ng -Fwe get the following:# syslog-ng -F syslog-ng: Error setting capabilities, capability management disabled; error='Operation not permitted'So, we need to runsyslog-ngas root, without capability-support:syslog-ng --no-caps -F
My application will send out syslog local0 messages. When I move my application into docker, I found it is difficult to show the syslog.I've tried to run docker as --log-dirver as syslog or journald, both works strange, the /var/log/local0.log show console output of docker container instead of my application's syslog when I try to run this command inside containerlogger -p local0.info -t a messageSo, I try to install syslog-ng inside the docker container. The outside docker box is Arch Linux (kernel 4.14.8 + systemctl). The docker container is running as CentOS 6. If I install syslog-ng inside the container and start it, it shows following message.# yum install -y syslog-ng # this will install syslog-ng 3.2.5 # /etc/init.d/syslog-ng start Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql' Starting syslog-ng: Plugin module not found in 'module-path'; module-path='/lib64/syslog-ng', module='afsql' Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)' Error initializing source driver; source='s_sys', id='s_sys#0' Error initializing message pipeline;
How to let syslog workable in docker?
The kernel does not support per-container patterns. There is a patch for this, but it is unlikely to go in any time soon. The basic problem is that core patterns support piping to a dedicated process which is spawned for this purpose. But the code spawning it does not know how to handle containers just yet. For some reason a simplified pattern handling which requires a target file was not deemed acceptable.
How can i change/proc/sys/kernel/core_patternfile inside the docker container with out privileged mode? Are there any flags to be passed todocker daemonordocker runor anything related toDockerfile?
Changing /proc/sys/kernel/core_pattern file inside docker container
The error you see happens when you try to change the mount configuration on an existing cluster when using the Docker driver. Docker doesn't allow changing volumes after the container has been created, and thus you cannot change the mount string passed to minikube start after the cluster has already been created. More info and the source for this behavior can be found here and here.
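In practice, with the Docker driver the way out is to recreate the cluster with the mount flags present from the very first start; a sketch using the flags from the question (note that minikube delete destroys the existing cluster state):

```
minikube delete
minikube start --mount-string="/var/log:/log" --mount
```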
I am trying to usekubernetesfor local deployment usingminikube, I want to mount a share a directory between host machine and pods. For this, I am trying to mount directory tominikube. But I already had minikube running on which few deployments were running. I deleted them. But every time I restart minikube with mount I get following error$ minikube start --mount-string="/var/log:/log" --mount * minikube v1.14.2 on Ubuntu 18.04 * Using the docker driver based on existing profile * Starting control plane node minikube in cluster minikube * Restarting existing docker container for "minikube" ... X Exiting due to GUEST_MOUNT_CONFLICT: Sorry, docker does not allow mounts to be changed after container creation (previous mount: '', new mount: '/var/log:/log)'Output for kubectl get all iskubectl get all NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE service/kubernetes ClusterIP 10.96.0.1 443/TCP 2sWhat am I doing wrong here. I need to mount/var/log:/login my pods just like docker
Exiting due to GUEST_MOUNT_CONFLICT : While starting minikube
Stopping the containers using docker-compose down and then restarting them did the trick. I was using Ctrl+C prior to that.
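Stated as commands (nothing more than the fix above):

```
docker-compose down   # stops and removes the containers, unlike Ctrl+C which only stops them
docker-compose up
```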
I am trying to dockerise my Django app.docker-compose.ymlversion: "3.8" services: db: image: mysql:8 command: --default-authentication-plugin=mysql_native_password # this is not working restart: always environment: MYSQL_ROOT_PASSWORD: rootmypass ports: - '3306:3306' cache: image: redis environment: REDIS_HOST: localhost REDIS_PORT: 6379 ports: - '6379:6379' web: build: . command: python manage.py runserver 0.0.0.0:8000 volumes: - .:/code ports: - "8000:8000" depends_on: - db - cacheWhen I rundocker-compose -up, I get the following error.django.db.utils.OperationalError: (1156, 'Plugin caching_sha2_password could not be loaded: /usr/lib/x86_64-linux-gnu/mariadb19/plugin/caching_sha2_password.so: cannot open shared object file: No such file or directory')Upon searching, I found the solution was to usecommand: --default-authentication-plugin=mysql_native_passwordor downgrade mysql. Tried the first command but it's not working. I don't want to downgrade as well.How do I get it to work?Additional details:Using Docker on Windows.Distro: Ubuntu 20.04 LTS (WSL2).Connector: mysqlclient==1.4.6.
Plugin caching_sha2_password could not be loaded: /mariadb19/plugin/caching_sha2_password.so: cannot open shared object file
If you already have boot2docker, the upgrade is the usual:

    boot2docker stop
    boot2docker download
    boot2docker start

    docker@boot2docker:~$ docker version
    Client:
     Version:      1.8.1
     API version:  1.20
     Go version:   go1.4.2
     Git commit:   d12ea79
     Built:        Thu Aug 13 02:49:29 UTC 2015
     OS/Arch:      linux/amd64

That being said, going forward, docker-machine is the recommended project to use. See "Get started with Docker Machine and a local VM".
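If you go the docker-machine route instead, the equivalent flow would look roughly like this (a sketch; the machine name "default" is just an example):

```
docker-machine create --driver virtualbox default   # one-time creation of the VM
docker-machine upgrade default                      # pulls the latest boot2docker ISO / Docker engine
eval "$(docker-machine env default)"                # point the docker client at the VM
docker version
```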
At the moment I've got Docker v.1.7 and I'd want to upgrade it to latest (1.8 at the moment).Important part:I want to do this without installing Docker and boot2docker again. I wasn't able to find any info about it.Is it possible? And how can I do this?
Update & Upgrade Docker distribution on Windows
When making a connection via PHP (or anything else), use the container's name as the host, which in this case is learn-php-mysql. Thus

    $mysqli = new mysqli("learn-php-mysql", "learning", "learning", "learning");

will work.
I have following Docker containers running that were generated byPHPDocker:learn-php-mysql: image: mysql:5.7 container_name: learn-php-mysql volumes: - "./.data/db:/var/lib/mysql" restart: always environment: MYSQL_ROOT_PASSWORD: learning MYSQL_DATABASE: learning MYSQL_USER: learning MYSQL_PASSWORD: learning learn-php-webserver: image: phpdockerio/nginx:latest container_name: learn-php-webserver volumes: - ./code:/code - ./nginx/nginx.conf:/etc/nginx/conf.d/default.conf ports: - "8080:80" links: - learn-php-php-fpm learn-php-php-fpm: build: . dockerfile: php-fpm/Dockerfile container_name: learn-php-php-fpm volumes: - ./code:/code - ./php-fpm/php-ini-overrides.ini:/etc/php/7.1/fpm/conf.d/99-overrides.ini links: - learn-php-mysql:mysqlEverything works fine, except for when trying to connect to MySQL server via PHP code:$mysqli = new mysqli("db", "learning", "learning", "learning"); /* check connection */ if ($mysqli->connect_errno) { printf("Connect failed: %s\n", $mysqli->connect_error); exit(); }It will throw following error:Connect failed: php_network_getaddresses: getaddrinfo failed: Name or service not knownWhy does this happen?..You can find repository for thishere. And when runningdocker-compose upgo tohttp://localhost:8080/classes/serialize.phpto produce same error.
Docker php_network_getaddresses error
With the help of Kelsey Hightower, I solved the problem. It turns out it was a Docker routing issue. I've written up the details in a blog post, but the bottom line is to alter the minions' routing table like so:

    $ sudo iptables -t nat -I POSTROUTING -d <RDS instance IP>/32 -o eth0 -j MASQUERADE
I have a Kubernetes cluster running in Amazon EC2 inside its own VPC, and I'm trying to get Dockerized services to connect to an RDS database (which is in a different VPC). I've figured out the peering and routing table entries so I can do this from the minion machines:ubuntu@minion1:~$ psql -h Password:So that's all working. The problem is that when I try to make that connection from inside a Kubernetes-managed container, I get a timeout:ubuntu@pod-1234:~$ psql -h …To get the minion to connect, I configured a peering connection, set up the routing tables from the Kubernetes VPC so that10.0.0.0/16(the CIDR for the RDS VPC) maps to the peering connection, and updated the RDS instance's security group to allow traffic to port 5432 from the address range172.20.0.0/16(the CIDR for the Kubernetes VPC).
Kubernetes container connection to RDS instance in separate VPC
Seeing the curl -4 ... makes me suspect this is an IPv6 issue. If your local machine isn't configured for IPv6 and localhost has a reference to the IPv6 address in the hosts file, then calls to localhost will hang.

The workaround is rather simple: go to 127.0.0.1 instead of localhost in your URLs.
I'm doing the docker getting started guide:https://docs.docker.com/get-started/part3/#recap-and-cheat-sheet-optionaldocker-compose.yml:version: "3" services: web: # replace username/repo:tag with your name and image details image: username/repo:tag deploy: replicas: 5 resources: limits: cpus: "0.1" memory: 50M restart_policy: condition: on-failure ports: - "80:80" networks: - webnet networks: webnet:I deployed my app by runningdocker stack deploy -c docker-compose.yml getstartedlab. Then Accessing my service from curl which working finecurl -4 http://localhostHello World!Hostname: 1532cae6e06f....But I can't access it from chrome or postman by going tohttp://localhost:80(it loads forever). Why and how can I fix it?Update 19/10/17:I can access my service in the browser from:http://192.168.1.68:80. It is the address of the leader node (which is the ip of my real machine also..).But why can't I do it from localhost also?
Can access docker service from curl but not from postman/chrome
Getting "Bind for 0.0.0.0:8080 failed: port is already allocated".you have duplicated port allocations.when not specifying a connection type,the port defaults totcp:meaning"0.0.0.0:8080:8080"and"0.0.0.0:8080:8080/tcp"both trying to bind to the same port and hence your error.sincedocker uses0.0.0.0for default binding, same applies to"8080:8080/tcp"and"0.0.0.0:8080:8080/tcp"- you have no need in both of them.therefore, you can shrink yourportssection to:ports: - "8080:8080" - "8080:8080/udp"I am not sure what the service should be calledit is completely up to you. usually services are named after their content, or role in the network, such asnginx_proxylaravel_backendetc. sonode_appsounds good to me,appis also ok in small networks,srcdoesnt appear to have any meaning but again - it is just some identifier for your service, without any additional effect.
I have been trying to get a socketio server moved over from EC2 to Docker.I have been able to connect to the socket via a web (http) client, but connecting directly to the socket via iOS or Android seems to be impossible.I read one of the issues can be the ports exposed are not actually published when using Docker. Since our mobile apps currently connect on port 8080 on our classic EC2 instance. I setup a docker-compose.yml file to try and open all ports and communication protocals, but I am two issues:1. I am not sure what the service should be called so I went with "src" (see DockerFile below). But wondering if it should be app since server file is app.js?2. Getting "Bind for 0.0.0.0:8080 failed: port is already allocated".DockerFileFROM ubuntu:14.04 ENV DEBIAN_FRONTEND noninteractive RUN mkdir /src ADD package.json /src RUN apt-get update RUN apt-get install --yes curl RUN curl --silent --location https://deb.nodesource.com/setup_4.x | sudo bash - RUN apt-get install --yes nodejs RUN apt-get install --yes build-essential RUN update-alternatives --install /usr/bin/node node /usr/bin/nodejs 10 RUN cd /src; npm install RUN npm install --silent[email protected]WORKDIR /src # Bundle app source # Trouble with COPY http://stackoverflow.com/a/30405787/2926832 COPY . /src ADD app.js /src/ EXPOSE 8080 CMD ["node", "/src/app.js"]Docker-Compose.ymlsrc: build: . volumes: - ./:/src expose: - 8080 ports: - "8080" - "8080:8080/udp" - "8080:8080/tcp" - "0.0.0.0:8080:8080" - "0.0.0.0:8080:8080/tcp" - "0.0.0.0:8080:8080/udp" environment: - NODE_ENV=development - PORT=8080 command: sh -c 'npm i && node server.js' echo 'ready'
Docker compose bind failed: port is already allocated
You need to change your Dockerfile a little; try this:

    # Making a .NET container
    FROM microsoft/iis
    SHELL ["powershell"]
    RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
        Install-WindowsFeature Web-Asp-Net45
    RUN Remove-WebSite -Name 'Default Web Site'
    RUN New-Website -Name 'app' -Port 80 \
        -PhysicalPath 'c:\app' -ApplicationPool '.NET v4.5'
    # copy dll files and other dependencies
    COPY app app
    # keep the container running; IIS serves the site
    CMD ["ping", "-t", "localhost"]

Test it:

    docker build -t app .
    docker run --name app -d -p 80:80 app
    docker inspect --format="{{.NetworkSettings.Networks.nat.IPAddress}}" app

It will give you an IP; just test it in your browser.

More information: Run IIS + ASP.NET on Windows 10 with Docker
I have developed a web application using asp dotnet and currently I have it running on IIS is there anyway I can run the same app in a docker container,I am relatively new to Docker and I have played around a bit and I am familiar with docker compose , so I was wondering if I can (dockerize) the application that I have developed.My Dockerfile now looks like:#Making a dotnet container FROM microsoft/dotnet:latest #Make a directory WORKDIR /app #copy dll files and other dependencies COPY . /app #dotnet run should run the app ENTRYPOINT ["DOTNET","RUN"]From what I understand this makes a directory inside my dotnet container and copies the files in the current folder and the app will run on dotnet run
Can I Run a dotnet app which is hosted on IIS in a docker container?
Since you're mapping port 5432 on the container to the same port on the host with -p 5432:5432 in your docker run statement, try connecting pgAdmin to port 5432 on the host instead of the container.
related posts: 1)docker postgres pgadmin local connection2)https://coderwall.com/p/qsr3yq/postgresql-with-docker-on-os-x(in the example "Name" entry is not filled in)there are two ways to complete this task, I use official postgresMETHOD 1:and runs it withsudo docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -p 5432:5432 -d postgresthen connect withName: postgres Host: localhost Port: 5432 user pass ...METHOD 2:starts withsudo docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgresand then check the ip of containersudo docker inspectsay result172.17.42.1then connect with pgAdmin tab Properties filled infoName: postgres Host: 172.17.42.1 Port: 5432 user pass ...
Can not connect to Postgres Container from pgAdmin
I had the same problem. I resolved it following these instructions:

    docker run --rm -v /var/run/docker/swarm/control.sock:/var/run/swarmd.sock dperny/tasknuke <taskid>

Be sure to use the full long task id or it will not work (fkgz0oihexzsjqwv4ju0szorh in your case).
Last week I had to remove a failed node from my Docker Swarm Cluster, leaving some tasks that ran on that node in desired state "Remove".Even after deleting the stack and recreating it with the same name,docker stack ps stacknamestill shows them.Interestingly enough, after recreating the stack, the tasks are still there, but with no node assigned.Here's what I tried so far to "cleanup" the stack:Recreating the stack with the same namedocker container prunedocker volume prunedocker system pruneIs there a way to remove a specific task?Here's the output fordocker inspect fkgz0oihexzs, the first task in the list:[ { "ID": "fkgz0oihexzsjqwv4ju0szorh", "Version": { "Index": 14422171 }, "CreatedAt": "2018-11-05T16:15:31.528933998Z", "UpdatedAt": "2018-11-05T16:27:07.422368364Z", "Labels": {}, "Spec": { "ContainerSpec": { "Image": "redacted", "Labels": { "com.docker.stack.namespace": "redacted" }, "Env": [ "redacted" ], "Privileges": { "CredentialSpec": null, "SELinuxContext": null }, "Isolation": "default" }, "Resources": {}, "Placement": { "Platforms": [ { "Architecture": "amd64", "OS": "linux" } ] }, "Networks": [ { "Target": "3i998stqemnevzgiqw3ndik4f", "Aliases": [ "redacted" ] } ], "ForceUpdate": 0 }, "ServiceID": "g3vk9tgfibmcigmf67ik7uhj6", "Slot": 1, "Status": { "Timestamp": "2018-11-05T16:15:31.528892467Z", "State": "new", "Message": "created", "PortStatus": {} }, "DesiredState": "remove" } ]
Orphaned Tasks in Docker Swarm after removal of failed node
Thedepends_onisn't used on docker swarm:Thedepends_onoption is ignored when deploying a stack in swarm mode with a version 3 compose file. -from Docker DocsAnother good explanation on GitHub:depends_onis a no-op when used withdocker stack deploy. Swarm mode services are restarted when they fail, so there's no reason to delay their startup. Even if they fail a few times, they will eventually recover. -from GitHub
Let's say we have the following stack file:version: "3" services: ubuntu: image: ubuntu deploy: replicas: 2 restart_policy: condition: on-failure resources: limits: cpus: "0.1" memory: 50M entrypoint: - tail - -f - /dev/null logging: driver: "json-file" ports: - "80:80" networks: - webnet web: image: httpd ports: - "8080:8080" hostname: "apache" volumes: - "/var/run/docker.sock:/var/run/docker.sock" deploy: placement: constraints: [node.role == manager] resources: limits: memory: 32M reservations: memory: 16M depends_on: - "ubuntu" networks: - webnet networks: webnet:When I rundocker service inspect mystack_webthe output generated does not show any reference to thedepends_onentry.Is that okay? and how can I print the dependencies of a given docker service?
docker swarm list dependencies of a service
With a Linux distro you normally get: a bootloader which loads a kernel, a kernel which manages the system and loads an init system, an init system that sets up and runs everything else, and everything else. Docker itself replaces most of the init system. Docker images replace "everything else", which can still be a large portion of any normal distro. An Ubuntu image contains the minimal set of Ubuntu binaries and shared libraries compiled with the Ubuntu build tools to run a shell, do some normal linuxy things and use the apt package manager. A Centos image does the same with Centos binaries, shared libraries and the yum package manager, etc. A Docker image doesn't need to be a complete distribution. You can run a single statically compiled binary in a container and you only need that binary in the image, nothing else. The busybox image is a good example of building a largely normal Linux environment from a single static binary. The kernel: all containers share the one host kernel. The container is separated from the rest of the system using kernel cgroups and namespaces. To anything running in the container, this appears to be its own system. Not all flavours of Linux use the exact same kernel, but the kernel interfaces are largely compatible, which allows the portability of Docker images. Docker itself requires a 3.10+ kernel to be able to run, which narrows the range of kernel possibilities. It's possible to have some esoteric software that requires some esoteric kernel feature to be compiled in, which wouldn't run across different Docker hosts. That's pretty rare and usually identifiable, as it would often require a custom kernel compile or kernel modules to get said software working in the first place.
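To make the "everything else" point concrete, here is a minimal sketch (the Go program and file names are made up for illustration) of an image that ships nothing but a single statically compiled binary, busybox-style:

# build stage: produce a static binary from a hypothetical main.go
FROM golang:1.21 AS build
WORKDIR /src
COPY main.go .
RUN CGO_ENABLED=0 go build -o /hello main.go

# final stage: no distro at all, just the binary
FROM scratch
COPY --from=build /hello /hello
ENTRYPOINT ["/hello"]

The resulting container still runs on the host's kernel; the image itself contains no shell, no package manager and no libc.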
I'm starting a new Django Project using Docker. I'm confused about the existence of the Ubuntu Docker image which is mentioned in many tutorials as well as being one of the most popular images in the Docker Repo.I thought Docker is a containerization system built ON TOP of the OS, so why is there an Ubuntu Docker Image? Perhaps a common use scenario on when/who would use this would help.
What is inside a Docker Ubuntu Image if Docker doesn't encapsulate an OS?
You can used a named volume:flask: build: . command: "gulp" ports: - '3000:3000' - '5000:5000' links: - celery - redis volumes: - .:/usr/src/app:rw - tmp:/tmp celery: build: . command: "celery -A web.tasks worker --autoreload --loglevel=info" environment: - C_FORCE_ROOT="true" links: - redis - neo4j volumes: - .:/usr/src/app:ro - tmp:/tmpWhen compose creates the volume for the first container, it will be initialized with the contents of /tmp from the image. And after that, it will be persistent until deleted withdocker-compose down -v.
I'm usingdocker-composeto spawn two containers. I would like to share the/tmpdirectory between these two containers (but not with the host/tmpif possible). This is because I'm uploading some files throughflaskto/tmpand want to process these files fromcelery.flask: build: . command: "gulp" ports: - '3000:3000' - '5000:5000' links: - celery - redis volumes: - .:/usr/src/app:rw celery: build: . command: "celery -A web.tasks worker --autoreload --loglevel=info" environment: - C_FORCE_ROOT="true" links: - redis - neo4j volumes: - .:/usr/src/app:ro
Sharing /tmp between two containers
An obfuscation-based solution alone would not be enough, as "Encrypted and secure docker containers" details. You would need full control of the host your containers are running on in order to prevent any "poking". And that is not the case in your scenario, where a developer does have access to the host (i.e. his/her local development machine) where said container would run.
I was wondering if it is possible to offer Docker images, but not allow any access to the internals of the built containers. Basically, the user of the container images can use the services they provide, but can't dig into any of the code within the containers.Call it a way to obfuscate the source code, but also offer a service (the software) to someone on the basis of the container, instead of offering the software itself. Something like "Container as a Service", but with the main advantage that the developer can use these container(s) for local development too, but with no access to the underlying code within the containers.My first thinking is, the controller of the Docker instances controls everything down to root access. So no, it isn't possible. But, I am new to Docker and am not aware of all of its possibilities.Is this idea in any way possible?
A completely closed source docker container
Answers to your questions:Do I need anything else other than what I described above?What you described sounds very reasonable. But keep in mind that you don't want to useonedocker container, but ratherone container per service. That means: one container running mongo, one container running node, and so on. That is a Docker best practice.Do I need Vagrant, for example to deploy that docker container or is that an overkill?It sounds like your rather simple setup does not require Vagrant. You can use Dockerfiles to build images that have everything you need installed. See theDockerfile ReferenceandDockerfile best Practices.Can docker specify all my needs, that is the right version of node.js, sails etc?Yes, every Docker image has a certain version of the service that will run inside the container. That's one of the points of using containers.Is there a ready made container I can reuse or modify rather than starting from scratch?Yes, there are many ready made containers to be found on theDocker Hub. Use these images as a base when writing your Dockerfiles to install anything additionally to what is supplied within the image on Docker Hub.Also, check outVolumesto figure out how to handle source code in development.
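As a rough sketch of what such a Dockerfile could look like for the Node/Sails part (the image tag, port and paths below are assumptions, not taken from your project):

FROM node:4
WORKDIR /app
COPY package.json /app/
RUN npm install
COPY . /app
EXPOSE 1337
CMD ["npm", "start"]

During development you can mount your source over /app with docker run -v $(pwd):/app ... so code changes don't require rebuilding the image.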
I am playing around with a MEAN javascript project. (mongoDB + angular + sails.js + node.js) As I am offline a lot of the time, I'd like to keep my dev environment, running in a docker container, on OS X laptop, using boot2docker.The 'production' (not actual production, just somewhere I deploy to to show it to friends) is a Digital Ocean droplet running Ubuntu as host and hopefully the same docker container.I expect that the environment won't change very often and that I can continue using git push/pull to push just the code changes.Do I need anything else other than what I described above? Do I need Vagrant, for example to deploy that docker container or is that an overkill? Can docker specify all all my needs, that is the right version of node.js, sails etc? Is there a ready made container I can reuse or modify rather than starting from scratch?
What is the most simple setup for a MEAN stack docker container to have the same config on OS X and DigitalOcean?
Networking is one of the namespaces in docker, similar to the pid and filesystem namespaces. If you kill pid 1 inside a container, that kills the process inside the container and not systemd/init on the host (as long as you don't override the namespace). And if you rm -rf /bin inside a container, that deletes files from that container, not from the host (as long as you don't have a volume mount). Similarly, the loopback network (localhost or 127.0.0.1) in a namespace refers to just that namespace, not the host. Thinking about it from a higher level, loopback on the host is only reachable from that host; you cannot access it from another host, or an external load balancer. Namespaced networking works very similarly. Loopback inside the container can be reached by other processes inside that same network namespace, but not by containers in other namespaces, and not from the host with port forwarding, since that forwards to the virtual network interface, similar to how an external load balancer forwards to the host network interface.
I have a grpc-go server running in docker container, listening on0.0.0.0:8080. I found this to be working after having failures with listening onlocalhostor127.0.0.1in a docker container - and it only failed running in a docker container, not if I go run on the same machine.Also a simple web server did work listening on localhost or 127.0.0.1.I found that0.0.0.0is listening on any network adapter - but found no other explanations.Well, problem solved - but I am looking for an explanation - do you know?
Why 0.0.0.0 is working and localhost or 127.0.0.1 is not
The key is to make sure your web server is the owner of the directory WordPress is installed in (and its sub-directories). You're seeing an error because your web server doesn't have the proper privileges to write to your directories.I recommend running achown -R user:group /path/to/wordpress, substituting theuserandgroupwith your server's info.
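With the official wordpress image (the Apache variant) the web server runs as www-data and the site lives in /var/www/html, so, assuming your container is named wordpress, something along these lines should do it:

docker exec wordpress chown -R www-data:www-data /var/www/html

After that the built-in updater should use the direct filesystem method instead of asking for FTP credentials.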
I installed a Wordpress website with thewordpressDocker image, and then installed my themes. All works well, but when I want to update Wordpress later on, I get this message:To perform the requested action, WordPress needs to access your web server. Please enter your FTP credentials to proceed. If you do not remember your credentials, you should contact your web host.The Wordpress container is not running an FTP server on the web root. How could I solve this problem?PS:I have my web root in a data container, shared among different containers.PS2:I am planning on storing several Wordpress websites in the same host. Is there also a solution that is compatible with this?
Updating Wordpress inside a container. No FTP access
In fact your traffic goes as follows: the user's browser requests the page from the angular container, and the pages are rendered in the user's browser. The frontend JavaScript code then uses Angular's HttpClient to fetch the data from the express container. Although docker-compose sets up a custom network for you with automatic DNS that resolves angular and express, that DNS only works among containers, not on the host. Your Angular HttpClient call to http://express happens in the user's browser, which is not in a container, so the automatic DNS definitely cannot resolve the name express for you. If you just open the browser on your docker host, you can use localhost, but if you also want users on other machines to visit your web app, you have to use the IP of your docker host. Then, in the Angular HttpClient you need to use something like http://your_dockerhost_ip:8000 to reach the express container. If interested, you can visit this to see that user-defined bridges provide automatic DNS resolution between containers.
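A quick way to convince yourself, assuming the compose file from the question (and that curl is available in the images):

# from the host (where the browser runs), works because of "8000:8000"
curl http://localhost:8000/user
# from inside the angular container, the compose DNS name resolves here
docker-compose exec angular curl http://express:8000/user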
I'm trying to dockerize my angular + express application. I have a docker-compose file that creates the two containers, and I am able to hit the containers from my host machine(using my browser), but I'll just get a "ERR_NAME_NOT_RESOLVED" whenever I try to hit the backend from http requests made by my frontend.I've looked up the issue, and it seems like most suggest that the service name and container port should be enough to hit the the other container when they're on the same network. I've tried to hit "http://express:8000/user?user=030f0e70-9a8f-11e9-b5d1-f5cb6c0f3616" which I think should work given what I've seen from other places, but regardless, I get the same error.My docker-compose file looks likeversion: '3' # specify docker-compose version # Define the services/containers to be run services: angular: # name of the first service build: ./ # specify the directory of the Dockerfile ports: - "4200:80" # specify port forewarding links: - "express" depends_on: - "express" express: #name of the second service build: # specify the directory of the Dockerfile context: ./ dockerfile: dockerfile.be ports: - "8000:8000" #specify ports forewarding expose: - "8000"Ideally, I'd like my frontend to be able to hit the other container with a set endpoint, so I could deploy the application with minimal changes. I'd appreciate any advice. I feel like I'm missing something really simple, but after a few hours of tinkering, I still haven't caught it.Thanks!
How to connect frontend to backend via docker-compose networks
You should write a customudevrule that runs a script of yours each time a new interface is added. This is what Debian does for handling interface "hotplug"./etc/udev/rules.d/90-my-networking.rules:SUBSYSTEM=="net", RUN+="/usr/local/bin/my-networking-agent.sh"/usr/local/bin/my-networking-agent.sh:#!/bin/sh logger "hey I just got interface ${INTERFACE} with action ${ACTION}"EDITHere is how you can test it:# modprobe dummy0 # ifconfig dummy0 up # tail -n1 /var/log/syslog May 3 01:48:06 ernst logger: hey I just got interface dummy0 with action add
Docker creates avethinterface connected to a bridge (docker0) for each of the containers it create.http://docs.docker.io/use/networking/I want to limit the bandwidth these newvethinterfaces have. I found a way to do this with wondershaper. However I want to automate this.Is there a way to have a hook that runs a script every time a newvethinterface is attached?I have looked into adding scripts in/etc/network/if-up.d/, but they do not run when avethis added only during boot.Here are some syslogs of what I am trying to get notified about. I know I can tail these logs but that method seems sort of hacky and there has to be a way to get notified about this event via the OS.May 2 23:28:41 ip-10-171-7-2 kernel: [22170163.565812] netlink: 1 bytes leftover after parsing attributes. May 2 23:28:42 ip-10-171-7-2 kernel: [22170163.720571] IPv6: ADDRCONF(NETDEV_UP): veth5964: link is not ready May 2 23:28:42 ip-10-171-7-2 kernel: [22170163.720587] device veth5964 entered promiscuous mode May 2 23:28:42 ip-10-171-7-2 avahi-daemon[1006]: Withdrawing workstation service for vethdc8c. May 2 23:28:42 ip-10-171-7-2 kernel: [22170163.743283] IPv6: ADDRCONF(NETDEV_CHANGE): veth5964: link becomes ready May 2 23:28:42 ip-10-171-7-2 kernel: [22170163.743344] docker0: port 27(veth5964) entered forwarding state May 2 23:28:42 ip-10-171-7-2 kernel: [22170163.743358] docker0: port 27(veth5964) entered forwarding state May 2 23:28:48 ip-10-171-7-2 kernel: [22170170.518670] docker0: port 26(vethb06a) entered forwarding state May 2 23:28:57 ip-10-171-7-2 kernel: [22170178.774676] docker0: port 27(veth5964) entered forwarding state
run a script when a new veth interface is added
After a lot of frustrating days of trying out a number of things... finally something worked: using Pipework (https://github.com/jpetazzo/pipework), the following command worked, but there is a catch: pipework eth2 $(docker run -d hipache /usr/sbin/hipache) 50.19.169.157/24 Running a docker container by only running the above command did not quite help me. I had to run tcpdump -i eth2 on my host to capture packets on the eth2 interface, which then started to forward the packets to the docker container. Any idea why it worked that way, and not by just running the command?
I have a docker container running a java application which is listening for UDP multicast packets. I am not receiving the packets inside the container, however they appear on the host machine on eth0.Is there a way for docker to automatically pick up these packets and forward them to the container?Thanks
forward udp multicast from eth0 to docker0
The easiest way is to mount a directory into the postgres container, place the file into the mounted directory, and reference it there.We are actually mounting thepgdatadirectory, to be sure that the postgres data lives even if we recreate the postgres docker container. So, my example will also usepgdata:services: db: image: postgres environment: POSTGRES_USER: myuser POSTGRES_PASSWORD: mypassword volumes: - ":/var/lib/postgresql/data/pgdata"Placemyfile.csvinto(relative to directory containing the config or absolute path). The copy command then looks like this:\copy backend_data (t, sth1, sth2) FROM '/var/lib/postgresql/data/pgdata/myfile.csv' CSV HEADER;
I would like to load data from CSV file into PostgreSQL database in Docker. I run:docker exec -ti my project_db_1 psql -U postgresThen I select my database:\c myDatabaseNow I try to load data frommyfile.csvwhich is in the main directory of the Django project intobackend_datatable:\copy backend_data (t, sth1, sth2) FROM 'myfile.csv' CSV HEADER;However I get error:myfile.csv: No such file or directoryIt seems to me that I tried every possible path and nothing works. Any ideas how can I solve it? This is my docker-compose.yml:version: '3' services: db: image: postgres environment: POSTGRES_USER: myuser POSTGRES_PASSWORD: mypassword django: build: . command: python3 manage.py runserver 0.0.0.0:8000 volumes: - .:/code ports: - "8000:8000" depends_on: - db
How can I set path to load data from CSV file into PostgreSQL database in Docker container?
https://docs.docker.com/installation/mac/you need to do thisonce:boot2docker initthen, everytime you reboot your mac you will need to run :boot2docker startThat is the command that starts the docker daemon. But, on each shell you want to access it from you will need to run:$(boot2docker shellinit)Now you can use the docker client, like:docker run hello-world
I randocker imagesand got the following error:FATA[0000] Get http:///var/run/docker.sock/v1.17/images/json: dial unix /var/run/docker.sock: no such file or directory. Are you trying to connect to a TLS-enabled daemon without TLS?There seems to be no useful message on how to fix the error. What could be wrong?
Docker TLS error on Mac
If the goal is to set sysctl settings, docker has recognized the issue, and in 1.12+ you can use the --sysctl flag when running a docker container (or in your compose file), which will set the values inside the container before it is run. This is sadly not (yet) integrated in the dockerfile syntax. https://docs.docker.com/engine/reference/commandline/run/#configure-namespaced-kernel-parameters-sysctls-at-runtime docker run --sysctl kernel.shmmax=1073741824 yourimage Example docker-compose.yml (must use version 2.1): version: '2.1' services: app: sysctls: - kernel.shmmax=1073741824
I have installed docker 0.11.1 over Ubuntu 12.04. I am trying to change the shmmax from its fixed value (32 M) to something bigger (1G) from within the docker when I run the command:sysctl -w kernel.shmmax=1073741824 error: "Read-only file system" setting key "kernel.shmmax"That is because/procis mountedroin the container.Can someone tell me how to mount the proc asr/win my container to change it?
How to remount the /proc filesystem in a docker as a r/w system?
Check out the--add-hostflag for thedockercommand:https://docs.docker.com/engine/reference/run/#managing-etchosts$ docker run --add-host="smtp:127.17.0.1" container commandIn Docker,/etc/hostscannot be overwritten or modified at runtime (security feature). You need to use Docker's API, in this case--add-hostto modify the file.Fordocker-compose, use theextra_hostsoption.For the whole "connect to services running in host" problem, see the discussion in this GitHub issue:https://github.com/docker/docker/issues/1143.The common approach for this problem is to use--add-hostwith Docker's gateway address for the host, e.g.--add-host="dockerhost:172.17.42.1". Check the issue above for some scripts that find the correct IP and start your containers.
how is it possible to resolve names defined in Docker host's /etc/hosts in containers? Containers running in my Docker host can resolve public names (e.g. www.ibm.com) so Docker dns is working fine. I would like to resolve names from Docker hosts's (e.g. 127.17.0.1 smtp) from containers.My final goal is to connect to services running in Docker host (e.g. smtp server) from containers. I know I can use the Docker Host IP (127.17.0.1) from containers, but I thought that Docker would have used the Docker host /etc/hosts to build containers's resolve files as well.I am even quite sure I have seen this working a while ago... but I could be wrong.Any thoughts?Giovanni
How to resolve docker host names (/etc/hosts) in containers
I found the problem. In the multi-stage build of the docker image, I accidentally copied the nginx.conf file into the builder image, not the production one.The fixed Dockerfile now looks like this:# build environment FROM node:11.13 as builder RUN mkdir /usr/src/app WORKDIR /usr/src/app ENV PATH /usr/src/app/node_modules/.bin:$PATH COPY package.json /usr/src/app/package.json RUN npm install RUN npm install[email protected]-g COPY ./package-lock.json /usr/src/app/ COPY ./public /usr/src/app/public COPY ./src /usr/src/app/src RUN npm run build # production environment FROM nginx:1.15.10-alpine COPY --from=builder /usr/src/app/build /var/www COPY ./nginx.conf /etc/nginx/nginx.conf CMD ["nginx", "-g", "daemon off;"]and nginx.conf:server { listen 80; include /etc/nginx/mime.types; root /var/www; index index.html index.htm; location /api { resolver 127.0.0.11; proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://server:8080$request_uri; } location / { try_files $uri $uri/ =404; } }
I'm trying to create a docker-compose using two services, a Spring Boot backend (running on port 8080) and React frontend running on Nginx.The react app calls backend API like /api/tests. However, when I run the docker compose and frontend makes a request, it always fails with 404 error:GET http://localhost/api/tests 404 (Not Found)When I set the frontend dockerfile not to use Nginx, justnpm start, it worked fine, but I would prefer using production build on Nginx.Current frontend dockerfile:FROM node:11.13 as builder RUN mkdir /usr/src/app WORKDIR /usr/src/app ENV PATH /usr/src/app/node_modules/.bin:$PATH COPY package.json /usr/src/app/package.json RUN npm install RUN npm install[email protected]-g COPY ./package-lock.json /usr/src/app/ COPY ./public /usr/src/app/public COPY ./src /usr/src/app/src COPY ./nginx.conf /etc/nginx/nginx.conf RUN npm run build FROM nginx:1.15.10-alpine COPY --from=builder /usr/src/app/build /usr/share/nginx/html CMD ["nginx", "-g", "daemon off;"]Nginx.conf:server { listen 80; location / { try_files $uri $uri/ /index.html; add_header Cache-Control public; expires 1d; } location /api { proxy_set_header X-Forwarded-Host $host; proxy_set_header X-Forwarded-Server $host; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_pass http://server:8080/; } }docker-compose:version: "3" services: server: build: test-server/ expose: - 8080 ports: - 8080:8080 ui: build: test-ui/ expose: - 80 ports: - 80:80The react app has a line"proxy": "http://server:8080"in its package.json.Nginx logs the following error:2019/04/15 12:50:03 [error] 6#6: *1 open() "/usr/share/nginx/html/api/tests" failed (2: No such file or directory), client: 172.20.0.1, server: localhost, request: "GET /api/tests HTTP/1.1", host: "localhost", referrer: "http://localhost/"
Set up nginx proxy for react application
The readiness and liveness probes serve slightly different purposes. The readiness probe controls whether the pod IP is included in the list of endpoints for a service, and so also whether the pod is a target for a route when it is exposed via an external URL. The liveness probe determines whether a pod is still running normally or whether it should be restarted. Technically an application could still be running fine but be backlogged, and so you want to use the readiness probe to temporarily remove it from the set of endpoints for a service, to avoid further requests being routed its way and simply being blocked in the request queue for that specific pod when another pod could handle them. So I personally would agree the duplication seems strange, but it is that way so the different situations can be distinguished.
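For reference, the two probes are also configured separately on the container spec; a minimal sketch (the paths, port and timings below are made-up values, not defaults):

containers:
- name: myapp
  image: myapp:latest
  readinessProbe:          # controls membership in the service endpoints
    httpGet:
      path: /ready
      port: 8080
    periodSeconds: 5
  livenessProbe:           # controls container restarts
    httpGet:
      path: /healthz
      port: 8080
    initialDelaySeconds: 15
    periodSeconds: 10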
Is there a way to prevent readiness probe from execution once container has successfully started? I suppose that liveness probe should be enough to monitor container health.
Kubernetes - Readiness Probe execution after container started
You can just give the container access to execute docker commands. It will either need direct access to the docker socket or it will need the various tcp environment variables and files (client certs, etc). Obviously it will need adocker clientinstalled on the container as well.A simple example of a container that can execute docker commands on the host:docker run -v /var/run/docker.sock:/var/run/docker.sock your_imageIt's important to note that this is not the same as running a docker daemon in a container. For that you need a solution likejpetazzo/dind.
I have two applications:a Python console script that does a short(ish) task and exitsa Flask "frontend" for starting the console app by passing it command line argumentsCurrently, the Flask project carries a copy of the console script and runs it usingsubprocesswhen necessary. This works great in a Docker container but they are too tightly coupled. There are situations where I'd like to run the console script from the command line.I'd like to separate the two applications into separate containers. To make this work, the Flask application needs to be able to start the console script in a separate container (which could be on a different machine). Ideally, I'd like to not have to run the console script container inside the Flask container, so that only one process runs per container. Plus I'll need to be able to pass the console script command line arguments.Q: How can I spawn a container with a short lived task from inside a container?
Docker - Run Container from Inside Container
Cron doesn't set up thePATHenvironment variable the same as a normal login shell sopythoncan't be found. It should work if you specify a complete path to the Python executable, e.g. replacepythonwith/usr/bin/python(or whatever the path to your Python executable happens to be). Alternatively you can explicitly set thePATHenvironment variable in the Cron configuration file to include the directory where Python can be found.
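For the Dockerfile in the question, which is based on python:3.7-slim, the interpreter normally lives in /usr/local/bin, so either of these variants of the cron file should work (treat the exact path as an assumption and verify it with which python inside the container):

# option 1: absolute path to the interpreter
* * * * * root /usr/local/bin/python /home/main.py > /proc/1/fd/1 2> /proc/1/fd/2

# option 2: set PATH explicitly at the top of the cron file
PATH=/usr/local/bin:/usr/bin:/bin
* * * * * root python /home/main.py > /proc/1/fd/1 2> /proc/1/fd/2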
I want to repeatedly call a script via cron in a docker container, but when I switch from one time execution to execution via cron the official python image suddenly can't seem to find python.Dockerfile:FROM python:3.7-slim COPY main.py /home/main.py #A: works CMD [ "python", "/home/main.py" ] #B: doesn't work #RUN apt-get update && apt-get -y install -qq --force-yes cron #COPY hello-cron /etc/cron.d/hello-cron #CMD ["cron", "-f"]main.pyimport time for i in range(90000): print(i) time.sleep(5000)hello-cron:* * * * * root python /home/main.py > /proc/1/fd/1 2> /proc/1/fd/2 #When I switch A for B in the Dockerfile the error message is:/bin/sh: 1: python: not foundThank you all for he quick responses! AddingPATH=/usr/local/binin the cron file solved my problem.
`/bin/sh: 1: python: not found` when run via cron in docker
The problem is related to the fact that I'm on a network behind a BlueCoat (a kind of firewall) which inspects and hides almost all of the communication between my desktop and the internet. After a few Google searches I found the option to ignore the certificate problem: just add this to my dockerfile: --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org # our base image FROM alpine:3.5 # Install python and pip RUN apk add --update py2-pip # install Python modules needed by the Python app COPY requirements.txt /usr/src/app/ RUN pip install --trusted-host pypi.org --trusted-host pypi.python.org --trusted-host=files.pythonhosted.org --no-cache-dir -r /usr/src/app/requirements.txt # copy files required for the app to run COPY app.py /usr/src/app/ COPY templates/index.html /usr/src/app/templates/ # tell the port number the container should expose EXPOSE 5000 # run the application CMD ["python", "/usr/src/app/app.py"]
Following the lab from GitHub (https://github.com/docker/labs/blob/master/beginner/chapters/webapps.md) to learn more about Docker containers, I fell into this problem: No matching distribution found for Flask==0.10.1 (from -r /usr/src/app/requirements.txt (line 1)) Could not fetch URL https://pypi.python.org/simple/flask/: There was a problem confirming the ssl certificate: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed (_ssl.c:726) - skipping
RUN pip install: There was a problem confirming the ssl certificate: [SSL: CERTIFICATE_VERIFY_FAILED] certificate verify failed [duplicate]
When publishing a container port, e.g. 8080:8080 (host_port:container_port), make sure the container port is the same port your web service is actually listening on. My webserver was listening for connections on port 8080, but as you can see in the screenshot, I had specified port 4000.
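In other words, the second number in -p must match the port the process inside the container listens on; a sketch with made-up names:

# the server listens on 8080 inside the container; the host port can be anything
docker run -d -p 4000:8080 mdt-image
# then browse to http://localhost:4000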
I'm trying to deploy theMDT(Mobile Distribution Tool)on my local Mac.I'm using docker and have managed to get the container running..In the image you can see MDT running on port 4000. But when I browse to my machine browser on "localhost:4000", I get a timeout.I've gone throughthispost and tried to add a route, but didn't work and then I visitedthisquestion and now I'm totally confused. Can someone please suggest how to get this resolved?
How to access webserver running in docker container from browser?
Entrypoint is the binary that is being executed, e.g. --entrypoint=bash or --entrypoint=helm. The tail Linux utility displays the contents of a file (or, by default, its standard input) on the standard output. /dev/null is a special device which discards anything written to it and never produces any data when read, so when you run tail -f /dev/null in a terminal it prints nothing and simply waits forever. If you would like to keep your container running in detached mode, you need to run something in the foreground. An easy way to do this is to tail the /dev/null device as the CMD or ENTRYPOINT command of your Docker image.
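The same pattern in a plain Dockerfile would look like this (a minimal, debugging-only sketch; the base image is just an example):

FROM ubuntu:20.04
# keep the container alive even though nothing useful runs in the foreground
CMD ["tail", "-f", "/dev/null"]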
I am running Windows 10 pro, docker installed and linux containers.With Visual Studio 2019, I created a basic .net core web api app, and enabled docker support(linux).I built the solution, and in the output window (View -> Output or Ctrl + Alt + O) I selected "Container Tools" in the Show Output From drop down. Scroll till the end(see the scroll bar in the below image) and you see the entry point option to the docker run command as follows.--entrypoint tail webapp:dev -f /dev/nullThe entire docker run command for your ref is as follows.docker run -dt -v "C:\Users\MyUserName\vsdbg\vs2017u5:/remote_debugger:rw" -v "D:\Trials\Docker\VsDocker\src\WebApp:/app" -v "D:\Trials\Docker\VsDocker\src:/src" -v "C:\Users\UserName\.nuget\packages\:/root/.nuget/fallbackpackages" -e "DOTNET_USE_POLLING_FILE_WATCHER=1" -e "ASPNETCORE_ENVIRONMENT=Development" -e "NUGET_PACKAGES=/root/.nuget/fallbackpackages" -e "NUGET_FALLBACK_PACKAGES=/root/.nuget/fallbackpackages" -P --name WebApp --entrypoint tail webapp:dev -f /dev/nullSo my question is what is this "tail". I saw two so questions(thisandthis) but could not get much. Also fromhere, tail seems to be a linux command(and I am running a linux container) but what does it do here?Please enlighten me.
What is tail command with docker run entrypoint in visual studio 2019?
Found docs on how to use private registry:https://microk8s.io/docs/workingFirst it needs to be enabled:microk8s.enable registryThen images pushed to registry:docker tag backend localhost:32000/backend docker push localhost:32000/backendAnd then in above configimage: backendneeds to be replaced withimage: localhost:32000/backend
I've build docker image locally:docker build -t backend -f backend.dockerNow I want to create deployment with it:apiVersion: apps/v1 kind: Deployment metadata: name: backend-deployment spec: selector: matchLabels: tier: backend replicas: 2 template: metadata: labels: tier: backend spec: containers: - name: backend image: backend imagePullPolicy: IfNotPresent # This should be by default so ports: - containerPort: 80kubectl apply -f file_provided_above.yamlworks, but then I have following pods statuses:$ kubectl get pods NAME READY STATUS RESTARTS AGE backend-deployment-66cff7d4c6-gwbzf 0/1 ImagePullBackOff 0 18sBefore that it wasErrImagePull. So, my question is, how to tell it to use local docker images? Somewhere on the internet I read that I need to build images usingmicrok8s.dockerbut itseems to be removed.
How to configure kubernetes (microk8s) to use local docker images?
You can create a very simpledockerimage containing your custom nginx configuration and mount this volume in the container that uses original nginx image.There are just a few steps to follow.1. Create your custom nginx config image projectmkdir -p nginxcustom/conf cd nginxcustom touch Dockerfile touch conf/custom.conf2. ModifyDockerfileThis is the file content:FROM progrium/busybox ADD conf/ /etc/nginx/sites-enabled/ VOLUME /etc/nginx/sites-enabled/3. Build the new imagedocker build -t nginxcustomconf .4. Modify yourdocker-compose.ymlfilenginxcustomconf: image: nginxcustomconf command: true nginxcustom: image: nginx hostname: nginxcustom ports: - "80:80" - "443:443" volumes_from: - nginxcustomconfThe sampleconf/custom.confmay look like this:server { listen 82; server_name ${HOSTNAME}; set $cadvisor cadvisor.docker; location / { proxy_pass http://$cadvisor:8080; proxy_set_header Host $host; proxy_set_header X-Real-IP $remote_addr; proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for; proxy_connect_timeout 150; proxy_send_timeout 100; proxy_read_timeout 100; proxy_buffers 16 64k; proxy_busy_buffers_size 64k; client_max_body_size 256k; client_body_buffer_size 128k; } }
I have nextdocker-composefile:nginx: build: . ports: - "80:80" - "443:443" links: - fpm fpm: image: php:fpm ports: - "9000:9000"TheDockerfilecommand list is:FROM nginx ADD ./index.php /usr/share/nginx/html/ # Change Nginx config here...The Nginx server work fine and I can see default html page onhttp://localhost/index.html, but don't execute PHP scripts. So when I gethttp://localhost/index.php- browser download PHP file instead of execute them.How can I use custom Nginx config to execute PHP script in my case?
How to use custom Nginx config for official nginx Docker image?
If it's OK for the container to be offline, why not just remove it and run it again without the port switches? If you do need to do this without deleting containers, you could just modify the underlying iptables rules. # Will list the rules iptables -L # Will delete the rule you want to remove iptables --delete [chain] In general your data should always be in one of 3 places: 1) a data only container that can be linked with a restarted service container; 2) a volume defined in your service container that can be linked with a new container to take backups (see here for an example); 3) a host mounted volume, so that you can restart containers and mount the same location into new containers. With one of these three approaches restarting services becomes easy, and this should be standard, as micro-services should be designed such that they can go down and recover often. These approaches will also speed up your application, as the default union file system is slower than the normal file systems used for volumes. If you need to recover data from a container where you did not plan volumes properly, you can use the docker export functionality to export the state of your container, then import it into a new container with a host mounted volume and copy your critical data from inside the container to the volume.
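Since your data is already in the gitlab_data volume container, the simplest route is just to recreate the service container without the port switches, reusing the names from your question (a sketch):

docker stop gitlab_app
docker rm gitlab_app
docker run --detach --name gitlab_app --restart=always \
  --volumes-from gitlab_data gitlab_image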
Currently I have a container created withdocker run --detach --name gitlab_app --restart=always --publish 192.168.0.200:80:80 --publish 192.168.0.200:22:22 --volumes-from gitlab_data gitlab_imageI want to remove both port bindings80and22from the image. Is it possible to remove port binding from an existing docker container?NB: It is okay to take the container offline for removing the binding.
Remove port binding from an existing docker container
Why this happened?The Python docker images have been updated recently to use Debian 12bookwormversion which was released on 10 June 2023 instead of Debian 10buster.Sources:GitHub > docker-library/python > Commit > add bookworm, remove busterWikipedia > Debian version history > Release tableWhat is the root cause?It is Docker with libseccomp so a newer syscall used in Debian Bookworm packages/libs is being blocked. libseccomp lets you configure allowed syscalls for a process. Docker sets a default seccomp profile for all containers such that only certain syscalls are allowed and everything else is blocked (so, newer syscalls that are not yet known to libseccomp or docker are blocked).Source:python:3.9 - Failed run apt update from the last version of the image #837Possible Solutions:EitherAdd the following in the Dockerfile:RUN mv -i /etc/apt/trusted.gpg.d/debian-archive-*.asc /root/ RUN ln -s /usr/share/keyrings/debian-archive-* /etc/apt/trusted.gpg.d/OrUse any of thebullseyeimage (e.g.,python:3.8-slim-bullseye).OrUpdatelibseccompanddockeron the host running the containers.
My build pipeline has stopped working all of a sudden which was working fine a few weeks ago. I'm using Dockerfile to build my app withpython:3.8as the base image. It has started failing on theapt-get update && apt-get installpart. I didn't change anything in the Dockerfile.My Dockerfile looks like this:FROM python:3.8 ... ... ... RUN apt-get update && \ apt-get install -y default-libmysqlclient-dev libffi-dev libssl-dev git jq tree ... ... ...Below is the error I'm getting:W: GPG error: http://deb.debian.org/debian bookworm InRelease: The following signatures couldn't be verified because the public key is not available: NO_PUBKEY 0E98404D386FA1D9 NO_PUBKEY 6ED0E7B82643E131 NO_PUBKEY F8D2585B8783D481 E: The repository 'http://deb.debian.org/debian bookworm InRelease' is not signed.What is the cause of this? How to fix it?
Build started failing using Python:3.8 Docker image on apt-get update and install with GPG error: bookworm InRelease is not signed
UPDATE:Figured out an even better way that doesn't involve baking your creds into an image at all. See the following question for information that would be applicable to solving this problem as well:Is it secure to store EC2 User-Data shell scripts in a private S3 bucket?This helps keep your secrets in the least number of places necessary at any given time.Figured out a better way:Launch a machine using your desired OSInstall Dockerrunsudo docker loginon that machineUpon successful authentication Docker will place a.dockercfgfile in your home directory (e.g./home/yourusername/.dockercfg). Docker will use this file for all authentication from now on.Create an image of your machine to be used when launching all new instances. This image will now have the.dockercfgfile baked-in.Add the following to theUser Dataof your machine image:#!/bin/bash sudo docker run -p 3333:3333 -d --name Hello yourusername/helloNow when you launch an instance based on your machine image yoursudo docker runcommands will succeed in pulling private repos provided the user you run the docker command under has a.dockercfgfile in their home directory.Hope that helps anyone looking to figure this out.
I have a EC2 server running Docker and I'd like to add the following to theUser Dataso my private Dockerhub images will be pulled/run when the server starts up, like so:#!/bin/bash sudo docker run -p 3333:3333 -d --name Hello myusername/helloBut I'm unsure as to how to go about authenticating in order to gain access to the private repomyusername/hello.With Github you create and upload a deploy key, does Dockerhub offer a similar deploy key option?
How to automate a docker run from a private Dockerhub repo?
DNS inside busybox only works correctly in images <= 1.28.4.Fixing the versionimage: "busybox:1.28.0"should do the trick.There's a thread herehttps://github.com/kubernetes/kubernetes/issues/66924
Reproduce steps:kubectl run busybox1 --generator=run-pod/v1 --image=busybox:1.28 -- sleep 3600kubectl run busybox2 --generator=run-pod/v1 --image=busybox:1.31.1 -- sleep 3600kubectl exec -ti busybox1 -- nslookup kubernetes.defaultworks fineServer: 10.96.0.10Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.localName: kubernetes.defaultAddress 1: 10.96.0.1 kubernetes.default.svc.cluster.localkubectl exec -ti busybox2 -- nslookup kubernetes.defaultnot workingServer: 10.96.0.10 Address: 10.96.0.10:53** server can't find kubernetes.default: NXDOMAIN*** Can't find kubernetes.default: No answercommand terminated with exit code 1does nslookup work differently on 1.31.1?what's the correct way to use nslookup on 1.31.1?
nslookup can not get service ip on latest busybox
You are copying your entire source folder into the directory/appin this step:COPY --from=builder /go/src/ /appThen you try to execute the directory:ENTRYPOINT [ "/app" ]Instead, you need to copy the compiled binary that your go build outputs in the copy step.
When I was trying to build golang using docker, the image build was successful, but the following error occurred when running with docker run: docker: Error response from daemon: OCI runtime create failed: container_linux.go:345: starting container process caused "exec: \"/app\": permission denied": unknown. I thought this error was caused by a missing user, so I added a group and user as below: RUN groupadd -g 10001 myapp \ && useradd -u 10001 -g myapp myapp but that didn't fix it. Here is my Dockerfile: FROM golang:1.12.9 as builder ADD . /go/src/appname/ WORKDIR /go/src/appname/ ENV GO111MODULE=on COPY go.mod . COPY go.sum . RUN go mod download COPY . . RUN CGO_ENABLED=0 GOOS=linux GOARCH=amd64 make build target=prod FROM alpine RUN apk update \ && apk add --no-cache COPY --from=builder /go/src/ /app ENTRYPOINT [ "/app" ] thanks
starting container process caused "exec: \"/app\": permission denied": unknown
If I understood correctly what you want, then you just need to read what's given by docker info: ❯ docker info | grep Proxy Http Proxy: http://localhost:3128 Https Proxy: http://localhost:3128 If these two are set in the GUI, they will appear near the end of the output. If they are not set, they won't, and in my case, No Proxy: *.local, 169.254/16 will appear instead.
I am using Docker for mac behind a proxy. I set up the proxy configuration in the Docker GUI under "Proxies" -> "Manual proxy configuration". This lets me download Docker images from the repository behind the proxy.Next, I set thehttp_proxyandhttps_proxyenvironment variables and I use them in my docker-compose.yml to pass them to the build:services: app: build: context: . args: http_proxy: $http_proxy https_proxy: $https_proxyHow can I get the variables that I set through the Docker GUI in the terminal so I don't have to set them twice? Are there any Docker-specific environment variables that I can use?
Getting Docker for mac proxy variables through terminal
For that, you need to define a global ARG, ideally with a default value, and override it at build time: ARG sample_TAG=test FROM maven:3.6.1-jdk-8 as maven-build ARG sample_TAG WORKDIR /apps/sample-google RUN echo "image tag is ${sample_TAG}" FROM $sample_TAG VOLUME /apps RUN mkdir /apps/sample-google
I have a Dockerfile that needs to get base image tag from the command line and load it dynamically, but I am getting this error with this command line.$ docker build --network=host --build-arg sample_TAG=7.0 --rm=true . Step 9/12 : FROM "${sample_TAG}" base name ("${sample_TAG}") should not be blankThe Dockerfile:FROM maven:3.6.1-jdk-8 as maven-build ARG sample_TAG ENV LANG en_US.UTF-8 ENV LANGUAGE en_US:en ENV LC_ALL en_US.UTF-8 WORKDIR /apps/sample-google COPY . /apps/sample-google RUN mvn clean package RUN echo "image tag is ${sample_TAG}" FROM $sample_TAG VOLUME /apps RUN mkdir /apps/sample-google COPY --from=maven-build /apps/sample-google/target /apps/sample-googleThe echo line prints 'latest' string correctly, but it fails in 'FROM $sample_TAG' line.
Dynamic Docker base image
The easiest way to set up a docker registry is using theofficial docker registry. This allows you to easily run a registry server with a configurable storage backend. As others have mentioned you can use S3 or Google Cloud storage. (I have personally used Google Cloud storage and have not run into any problems).I would also check out this digital ocean post about setting up a docker registry:How to setup a docker registry.Since you are interested in clustering, all you would need to do at this point is setup multiple registry servers with the same bucket as a storage backend. Then put a load balancer such ashaproxyornginxin front of them. This will give you the fault tolerance and load balancing that you are looking for.
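As a rough sketch of one node of such a setup, the official registry image can be pointed at an S3 bucket purely through environment variables (the bucket, region and credentials below are placeholders; on AWS you could also rely on an instance role instead of static keys):

docker run -d -p 5000:5000 --name registry \
  -e REGISTRY_STORAGE=s3 \
  -e REGISTRY_STORAGE_S3_BUCKET=my-registry-bucket \
  -e REGISTRY_STORAGE_S3_REGION=us-east-1 \
  -e REGISTRY_STORAGE_S3_ACCESSKEY=AKIA... \
  -e REGISTRY_STORAGE_S3_SECRETKEY=... \
  registry:2

Start the same thing on several hosts against the same bucket and put haproxy or nginx in front of them.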
I am looking for an open source solution to sync several docker registries. Could anybody give me some hints about this?
Is there a docker registry cluster solution for private purpose?
I think you need to add apt-get update in order to get cmake to install. Seethisimage: gcc before_script: - apt-get update --yes - apt-get install --yes cmake build: script: - ./runner.sh - ./bin/helloIn general, you can figure stuff out by jumping into the docker image to debug (in your case the image is the debian-based gcc:latest):sudo docker run -it --rm gccIf you had run your original apt-get install command inside the gcc container, you would have seen following error message that you could have then googled to figure out that apt-get update was neededsudo docker run -it --rm gcc apt-get install --yes cmake Reading package lists... Done Building dependency tree Reading state information... Done Package cmake is not available, but is referred to by another package. This may mean that the package is missing, has been obsoleted, or is only available from another source E: Package 'cmake' has no installation candidateAs this blog post mentions, you can do a test run locally bydownloading the gitlab-runner executable:gitlab-runner exec docker buildRunning the gitlab-runner locally will have gitlab clone your repo and run through all the steps in the .gitlab-ci.yml and you can see the output and debug locally rather quickly.
I was able to run the C++ Program and build & test it using GitLab CI unit with the help of Docker Image of gcc. But now I want to compile the program in docker usingcmakeinstead of g++. How to change the '.gitlab-ci.yml' file to support cmake.Current File : .gitlab-ci.ymlimage: gcc before_script: - apt-get install --yes cmake libmatio-dev libblas-dev libsqlite3-dev libcurl4-openssl-dev - apt-get install --yes libarchive-dev liblzma-dev build: script: - ./runner.sh - ./bin/hello./runner.shcmake -H. -Bbuild cmake --build build -- -j3
how to integrate cmake in gitlab repository for Continuous Integration(CI)
After struggling for a while, it seems the JAVA_OPTS variable can be passed to the container when it's based on a Tomcat image, but Spring Boot uses Java itself as the base image. I found this tutorial, which solved the problem for me: just modify the way the process is launched in the Dockerfile and add a JAVA_OPTS variable directly in the ENTRYPOINT: ENTRYPOINT exec java $JAVA_OPTS -Djava.security.egd=file:/dev/./urandom -jar /app.jar This way, the JVM will pick up the value from the command itself.
I've got a Spring Boot application implementing a service which I want to run in a Docker container. I've followed the guideline of the officialSpring docswhich suggest to create a DockerFile similar to this:FROM frolvlad/alpine-oraclejdk8:slim VOLUME /tmp ADD gs-spring-boot-docker-0.1.0.jar app.jar RUN sh -c 'touch /app.jar' ENTRYPOINT ["java","-Djava.security.egd=file:/dev/./urandom","-jar","/app.jar"]Then once the image is pushed to Docker I useDocker Composeto launch it this way:spring-boot-docker: ports: - "80:80" expose: - "80" image: my-repo/spring-boot-docker:0.1.0-SNAPSHOT container_name: spring-boot-docker environment: JAVA_OPTS: '-Xmx64m'Here I've got theJAVA_OPTSvariable which limits the memory allocation, however, when I executedocker stats spring-boot-docker, the memory taken by the container is excessive (I understand the total memory taken by the JVM might be much more than 64M, but in this case is totally boundless).I've also tried with themem_limitparam, but this slows down the application noticeably.
Limit JVM memory consumption in a Docker container
You can configure it in yourapplication.yml:eureka: instance: ipAddress: 192.168.x.x
I'm running Spring Cloud Eureka inside my Docker VM. I have services registering to it, but they use their IP adress from inside the Docker VM, but to be able to use them properly i need them to use the IP adress i can access from outside the VM.For example inside my VM the register using 172.x.x.x and i can access the REST interface from my browser using 192.168.x.x.x. I need them to register as 192.168.x.x.x.How can i tell my service to register with a specific IP adress?
Register to Eureka from Docker with a custom IP
You need to make the script part of the container. To do that, you need to copy the script inside using theCOPYcommand in the Docker file, e.g. like thisFROM ubuntu:14.04 COPY run_netcat_webserver.sh /some/path/run_netcat_webserver.sh CMD /some/path/run_netcat_webserver.shThe/some/pathis a path of your choiceinsidethe container. Since you don't need to care much about users inside the container, it can be even just/.Another option is to provide the script externally, via mounted volume. Example:FROM ubuntu:14.04 VOLUME /scripts CMD /scripts/run_netcat_webserver.shThen, when you run the container, you specify what directory will be mounted as/scripts. Let's suppose that your script is in/tmp, then you run the container asdocker run --volume=/tmp:/scripts (rest of the command options and arguments)This would cause that your (host) directory/tmpwould be mounted under the/scriptsdirectory of the container.
I'm trying to write a docker image to run a simple webserver though netcat.So I have in my docker build folder:Dockerfile index.html run_netcat_webserver.shTherun_netcat_webserver.shis very simple and it works fine:#!/bin/bash while true ; do nc -l 8080 < index.html ; doneHere is my naive Dockerfile that of course is not working:FROM ubuntu:14.04 CMD run_netcat_webserver.shHow should I proceed to make this work inside a docker container?
How do I write a dockerfile to execute a simple bash script?
Edit: This worked for me months ago. New versions of Kubernetes might not have this problem, or this solution might not solve it :) Ok, after struggling for hours with this, I finally managed to push it to the gcr.io registry by changing my tag from an image:version notation to image/version, like this: gcloud docker push gcr.io//hello-node/v1 after reading another guide from Kubernetes' documentation: https://cloud.google.com/container-registry/docs/pushing#pushing_to_the_registry Hope this helps!
I'm followingKubernete's getting started guide. Everything went smoothly until I ran$ gcloud docker push gcr.io//hello-node:v1(Where is, well, my project id). For some reason, Kubernetes is not able to push to the registry. This is what I get:Warning: '--email' is deprecated, it will be removed soon. See usage. Login Succeeded Warning: '--email' is deprecated, it will be removed soon. See usage. Login Succeeded Warning: '--email' is deprecated, it will be removed soon. See usage. Login Succeeded Warning: '--email' is deprecated, it will be removed soon. See usage. Login Succeeded Warning: '--email' is deprecated, it will be removed soon. See usage. Login Succeeded Warning: '--email' is deprecated, it will be removed soon. See usage. Login Succeeded Warning: '--email' is deprecated, it will be removed soon. See usage. Login Succeeded The push refers to a repository [gcr.io/kubernetes-poc-1320/hello-node] 18465c0e312f: Preparing 5f70bf18a086: Preparing 9f7afc4ce40e: Preparing 828b3885b7b1: Preparing 5dce5ebb917f: Preparing 8befcf623ce4: Waiting 3d5a262d6929: Waiting 6eb35183d3b8: Waiting denied: Unable to create the repository, please check that you have access to do so.Any ideas on what I might be doing wrong? Note that I have run.$ gcloud init, so I've logged in.Thanks in advance!
Kubernetes: Unable to create repository
How do people get around this? a/ They don't start 1000 containers at the same time. b/ If they do, they might use a cluster management system like docker swarm to manage the whole process. c/ They start the 1000 containers in advance, in order to take the startup time into account. Truly parallelizing the docker run command could be tricky, considering some of those commands might depend on other containers being created/started first (like a docker run --volumes-from=xxx).
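If you do want to fire the run commands concurrently from the client side anyway, a simple sketch is to fan them out with xargs (the image name is just an example):

seq 1 1000 | xargs -P 32 -I{} docker run -d --name minimal-{} my-minimal-image

In practice the daemon still serializes some of the creation work internally, so this mainly overlaps the client-side overhead rather than removing the per-container cost entirely.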
If I have scripts issueing docker run commands in parallel, the docker engine appears to handle these commands in series. Since runing a minimal container image with "docker run" takes around 100ms to start does this mean issueing commands in parallel to run 1000 containers will take the docker engine 100ms x 1000 = 100 s or nearly 2 minutes? Is there some reason why the docker engine is serial instead of parallel? How do people get around this?
Can Docker Engine start containers in parallel
From thetroubleshooting guide:ATTRIBUTE (container instance ID)Your task definition contains a parameter that requires a specific container instance attribute that is not available on your container instances. For more information on which attributes are required for specific task definition parameters and agent configuration variables, seeTask Definition ParametersandAmazon ECS Container Agent Configuration.You can find the attributes required for your task definition by looking at therequiredAttributesfield. You can find the attributes that are present for your container instances in the result of theDescribeContainerInstancesAPI call.
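A sketch of that check with the AWS CLI, reusing the redacted instance ID from your output (the cluster name is a placeholder):

aws ecs describe-container-instances \
  --cluster my-cluster \
  --container-instances sssssssssssss \
  --query 'containerInstances[].attributes[].name'

Compare that list against the attributes your task definition requires to find the one that is missing.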
Using theecs agent containeron an Ubuntu instance, I am able to register the agent with my cluster.I also have a service created in that cluster and task definitions as well. When I try to add a task to the cluster I get the useless error message:Run tasks failed Reasons : ["ATTRIBUTE"]The ecs agent log has no related error message. Any thoughts on how I can get better debugging or what the issue might be?The cli also returns the same useless error message{ "tasks": [], "failures": [ { "arn": "arn:aws:ecs:us-east-1:sssssss:container-instance/sssssssssssss", "reason": "ATTRIBUTE" } ] }
Useless Amazon ECS Error Message when creating tasks
For the user and password, you set those parameters exactly the same way you set the root password, which is: mysql: image: mysql:5.7 ports: - "3306:3306" environment: MYSQL_ROOT_PASSWORD: R00t+ MYSQL_USER: youruser MYSQL_PASSWORD: yourpassword To connect to mysql from outside the container, use localhost as the host, because you redirect ports from the container to the host machine in this line: - "3306:3306" $user = 'root'; $pass = 'R00t+'; $server = 'localhost'; $dbh = new PDO( "mysql:host=$server", $user, $pass ); I assume you are running it on your laptop, where the docker containers are started.
In themysql docker hub pagethere's a reference on how to create users with:MYSQL_USER, MYSQL_PASSWORDBut how can you specify those parameters on the docker-compose.yml file?So far I have:mysql: image: mysql:5.7 ports: - "3306:3306" environment: MYSQL_ROOT_PASSWORD: R00t+Another question; how can I connect to the mysql host from outside the container? Inside the container I can connect using:$user = 'root'; $pass = 'R00t+'; $server = 'mysql'; $dbh = new PDO( "mysql:host=$server", $user, $pass );
Docker MySQL: create new user
According to another SO post what I am trying to do is not possible. For future reference, one cannot attach volumes to an image, and then later decide to remove them. A new image must be created without the volumes instead. Reference:How to remove configure volumes in docker images
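One workaround people use, sketched below: flatten the container's filesystem into a brand-new image with export/import, which throws away the old image metadata (including the volume definitions) along the way.

docker export <container-id> | docker import - myimage:novolumes

Note this also discards CMD, ENV and other image config, so you would have to specify those again when running or building from the new image.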
I'm trying to create a new Docker image that no longer uses volumes from a running container that does use them. The volumes were created using a docker-compose file, not a Dockerfile. The problem is, when I launch a new container via a new docker-compose.yml file it still has the volumes mapped. I still need to keep these volumes and the original containers/images that use them. Also, if possible I would like to continue to use the same docker image, just add a new version, or :latest. Here's the steps I used:

New version of an existing image: docker commit existingImage:new-version
Create a new image from the current running container: docker commit newimage
Create a new docker-compose.yml with no volumes defined and run docker-compose with a different project name: docker-compose -p
Running without docker-compose, just use docker run: docker run -d -p 8093:80 :

Any time I run any combination of these the volumes are still mapped from the original image. So my question is, how do I create a container from an image that once had mapped volumes but I no longer want to use the volumes?

Edit: Additional things I've tried: stop container, remove container, restart docker, run docker compose again. No luck.

Edit 2: Decided to start over on the image. Using a base image, I launched a container with an updated docker compose file that uses the now unrelated image. Running docker-compose -f up -d STILL has these same volumes mapped, even though the image does not (and never has had) any volumes mapped, and the current docker-compose.yml file does not map files. It looks like docker-compose caches what volumes are mapped for projects. After searching for caching options in docker-compose, I came across this article: How to get docker-compose to always re-create containers from fresh images? which seems to solve the problem of caching images but not containers caching volumes.
Docker compose reusing volumes
You're copying your local /app/ folder to the /app/ folder in the running Docker container (as mentioned in the comments), creating /app/app/server.py in the Docker container.

How to resolve: A simple fix will be to change COPY . /app to COPY ./app/server.py /app/server.py

Explanation: The command COPY works as follows: COPY <src> <dest>. You're selecting everything in the folder where the Dockerfile resides by using . in your first COPY, thereby selecting the local /app folder to be added to the image. The destination you're allocating for it in the Docker container is also /app, and thus the path in the running container becomes /app/app/.., explaining why you can't find the file. Have a look at the Docker docs.
I'm new to docker and creating a simple test app to test my docker container, but docker is unable to locate the server.py file. The directory structure of my project is:

    .
    |-- Dockerfile
    |-- app
    |   |-- requirements.txt
    |   |-- server.py

Below is the Dockerfile content:

    FROM ubuntu:latest
    MAINTAINER name <[email protected]>
    COPY . /app    # do I need this ?
    COPY ./app/requirements.txt /tmp/requirements.txt
    RUN apt-get -y update && \
        apt-get install -y python-pip python-dev build-essential
    RUN pip install -r /tmp/requirements.txt
    WORKDIR /app
    RUN chmod +x server.py    # ERROR: No such file or directory
    EXPOSE 5000
    ENTRYPOINT ["python"]
    CMD ["server.py"]    # ERROR: No such file or directory

I'm using boot2docker on windows. What am I missing here?
Unable to locate file in docker container
I am not sure exactly how you would achieve it with docker or anything else, as I don't see any way to ask tomcat to just expand the war before it actually starts. But as per standard practice it's not a good idea to explode a war and tweak the contents; it defeats the entire purpose of making a war. Rather, you should make changes to the app to read configuration from <<TOMCAT_HOME>>/conf. If you achieve this, the only thing you will need to tell Docker is to ADD your configuration file to the container's tomcat conf folder. Or, if it is a must for you to tamper with the war file, this is what you can do: explode the war manually (or by script) on your build machine and, instead of adding the war directly to the docker image, map the folder. Something like this: ADD ./target/app-0.1.0.BUILD-SNAPSHOT /var/lib/jetty/webapps/ROOT. And then manually add all your files to the desired destinations: ADD login.jsp /var/lib/jetty/webapps/ROOT/Webapps, and so on and so forth.
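If you do decide to expand the war at image build time so you can overlay files, a rough Dockerfile sketch of that second approach (it assumes unzip can be installed in the base image, and the config file name and webapp path are placeholders):

    FROM tomcat:7-jre8
    RUN apt-get update && apt-get install -y unzip && rm -rf /var/lib/apt/lists/*
    COPY my.war /tmp/my.war
    # expand the war into an exploded webapp directory at build time
    RUN mkdir -p ${CATALINA_HOME}/webapps/my \
        && unzip -q /tmp/my.war -d ${CATALINA_HOME}/webapps/my \
        && rm /tmp/my.war
    # overlay your edited config on top of the expanded war
    COPY my-config.properties ${CATALINA_HOME}/webapps/my/WEB-INF/classes/my-config.properties

Tomcat serves the exploded directory just like it would the war, so nothing needs to be expanded at runtime.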
I am using docker to deploy a tomcat container running a third party war file. My Dockerfile looks something like this:

    FROM tomcat:7-jre8
    ADD my.war ${CATALINA_HOME}/webapps/my.war

When I run the container, tomcat expands my war at runtime and I can happily access my app at http://my.ip.addr:8080/mywar/. However my problem is that I want to edit a couple of the config files within the war. I don't really want to unpack and repack the war file as that seems messy and hard to maintain. I was hoping to be able to tell tomcat to expand the war as part of my RUN steps and then use ADD to put in my custom files, but I can't seem to find a way of doing this. The war only gets expanded when the CMD executes and then I can't edit the files after that.
Docker tomcat edit expanded war files
As a style point, this gets vastly easier if your image has a CMD that can be overridden. If you only need to run one command with no initial setup, make it be the CMD and not the ENTRYPOINT:

    CMD ./some_command    # not ENTRYPOINT

If you need to do some initial setup and then launch the main command, make the ENTRYPOINT be a shell script that ends with the special instruction exec "$@". The CMD will be passed into it as parameters, and this line replaces the shell script with that command.

    #!/bin/sh
    # entrypoint.sh
    ... do first time setup, run database migrations, set variables ...
    exec "$@"

    # Dockerfile
    ...
    ENTRYPOINT ["./entrypoint.sh"]    # MUST be JSON-array syntax
    CMD ./some_command                # as before

If you do these things, then you can use your initial docker run form. This will replace the CMD but leave the ENTRYPOINT intact. In the wrapper-script case, your alternate command will be run as the exec "$@" command, so all of the first-time setup will be done first.

    # Assuming the image correctly honors the CMD
    docker run ... \
      image-name \
      sh -c 'echo "foo is $FOO" && echo "bar is $BAR"'

If you really can't do this, you can override the docker run --entrypoint. This runs instead of the image's entrypoint (if you want the image's entrypoint you have to run it yourself), and the syntax is awkward:

    # Run a shell command instead of the entrypoint
    docker run ... \
      --entrypoint /bin/sh \
      image-name \
      -c 'echo "foo is $FOO" && echo "bar is $BAR"'

Note that the --entrypoint option comes before the image name, and its arguments come after the image name.
How can I have an entrypoint in a docker run which executes multiple commands? Something like:

    docker run --entrypoint "echo 'hello' && echo 'world'" ...

The image I'm trying to run already has an entrypoint set in the Dockerfile, so a solution like the following seems not to work, because it looks like my commands are ignored and only the original entrypoint is executed:

    docker run ... bash -c "echo 'hello' && echo 'world'"

In my use-case I must use the docker run command. Solutions which change the Dockerfile are not acceptable, since it is not in my hands.
docker run entrypoint with multiple commands
Turns out this was an Apache configuration issue. I needed to explicitly enable domain-named virtualhosts, like so: NameVirtualHost *:80. This answer helped. Docker had nothing to do with the matter.
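For reference, a minimal sketch of how that directive sits next to the name-based blocks in httpd.conf (server names reused from the question; on Apache 2.4 the NameVirtualHost directive is no longer needed and is ignored):

    NameVirtualHost *:80

    <VirtualHost *:80>
        ServerName dummy.dev
        DocumentRoot /var/www/html/dummy
    </VirtualHost>

    <VirtualHost *:80>
        ServerName tests.dev
        DocumentRoot /var/www/html/tests
    </VirtualHost>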
I'm trying to run two different domains on one and the same Docker container and port. The Docker container runs CentOS. docker-compose.yml looks like so:

    web:
      image: fab/centos
      ports:
        - "80:80"
      volumes:
        - ./src/httpd.conf:/etc/httpd/conf/httpd.conf
        - ./src:/var/www/html
        - ./src/hosts:/etc/hosts
      environment:
        - VIRTUAL_HOST=dummy.dev,tests.dev

I also declared both .dev domain names inside of /etc/hosts on the host computer (OS X.) It's been a while since I configured virtual hosts. My understanding was that I just needed to declare them and that Apache would automatically serve the proper files depending on the HTTP HOST being requested. This is what I have, added at the end of httpd.conf:

    # first host = default host
    <VirtualHost *:80>
        DocumentRoot /var/www/html/default
    </VirtualHost>

    <VirtualHost *:80>
        DocumentRoot /var/www/html/dummy
        ServerName dummy.dev
        ServerAdmin [email protected]
        ErrorLog logs/dummy.dev-error_log
        CustomLog logs/dummy.dev-access_log common
    </VirtualHost>

    <VirtualHost *:80>
        DocumentRoot /var/www/html/tests
        ServerName tests.dev
        ServerAdmin [email protected]
        ErrorLog logs/tests.dev-error_log
        CustomLog logs/tests.dev-access_log common
    </VirtualHost>

However, in practice, visiting either dummy.dev or tests.dev actually serves /var/www/html/default. This is as if Apache didn't realize which host is being called (though a dump of $_SERVER in PHP does show the expected HTTP_HOST value, i.e.: either 127.0.0.1, dummy.dev or tests.dev depending on which URL I visit.) What did I miss? It's unclear to me whether this is an Apache issue or a Docker one. (Please note this is a different question from how to host multiple apps on the same domain with different port. In my case, I do want the virtual hosts to be all inside/on the same app/port/container.)
Multiple vhosts on one and the same docker container
Judging by your questions, you would benefit from trying to dockerise dbt on its own, independently from airflow. A lot of your questions would disappear. But here are my answers anyway.

Should DBT as a whole project be run as one Docker container, or is it broken down? (for example: are tests run as a separate container from dbt tasks?)

I suggest you build one docker image for the entire project. The docker image can be based on the python image since dbt is a python CLI tool. You then use the CMD arguments of the docker image to run any dbt command you would run outside docker. Please remember the syntax of docker run (which has nothing to do with dbt): you can specify any COMMAND you want to run at invocation time:

    $ docker run [OPTIONS] IMAGE[:TAG|@DIGEST] [COMMAND] [ARG...]

Also, the first hit on Google for "docker dbt" is this dockerfile that can get you started.

Are logs and the UI from DBT accessible and/or still useful when run via the Docker Operator?

Again, it's not a dbt question but rather a docker question or an airflow question. Can you see the logs in the airflow UI when using a DockerOperator? Yes, see this how-to blog post with screenshots. Can you access logs from a docker container? Yes, Docker containers emit logs to stdout and stderr output streams (which you can see in airflow, since airflow picks this up). But logs are also stored in JSON files on the host machine in the folder /var/lib/docker/containers/. If you have any advanced needs, you can pick up those logs with a tool (or a simple BashOperator or PythonOperator) and do what you need with it.

How would partial pipelines be run? (example: wanting to run only a part of the pipeline)

See answer 1, you would run your docker dbt image with the command:

    $ docker run my-dbt-image dbt run -m stg_customers
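Tying it together on the Airflow side, a hedged sketch of calling that image from a DAG with the DockerOperator; the image name, DAG id and auto_remove setting are assumptions, and the exact import path depends on your Airflow version and the docker provider you have installed:

    from datetime import datetime
    from airflow import DAG
    from airflow.providers.docker.operators.docker import DockerOperator

    with DAG("dbt_docker", start_date=datetime(2021, 1, 1),
             schedule_interval="@daily", catchup=False) as dag:
        dbt_run = DockerOperator(
            task_id="dbt_run",
            image="my-dbt-image",                      # image built from your dbt Dockerfile
            command="dbt run",                         # or "dbt run -m stg_customers" for a partial run
            docker_url="unix://var/run/docker.sock",   # talk to the host's Docker daemon
            auto_remove=True,
        )

Splitting dbt run and dbt test into separate DockerOperator tasks on the same image is a common way to keep the Airflow graph informative without breaking the project apart.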
Building my question on How to run DBT in airflow without copying our repo, I am currently running airflow and syncing the dags via git. I am considering different options to include DBT within my workflow. One suggestion by louis_guitton is to Dockerize the DBT project, and run it in Airflow via the Docker Operator. I have no prior experience using the Docker Operator in Airflow or generally DBT. I am wondering if anyone has tried or can provide some insights about their experience incorporating that workflow. My main questions are: Should DBT as a whole project be run as one Docker container, or is it broken down? (for example: are tests run as a separate container from dbt tasks?) Are logs and the UI from DBT accessible and/or still useful when run via the Docker Operator? How would partial pipelines be run? (example: wanting to run only a part of the pipeline)
Running DBT within Airflow through the Docker Operator
Emulating a full alternate architecture is generally very slow. QEMU is what allows you to do this on Linux and can be integrated into a Docker container. For building, you can use QEMU User Emulation, which is much quicker than full emulation. This allows your hardware to execute ARM binaries directly and is used to ease cross-compilation and cross-debugging.

First get VirtualBox and get Vagrant and install them. (Or use docker-machine from the Docker Toolbox.)

Setup your VM:

    mkdir raspbian-docker
    cd raspbian-docker
    vagrant init debian/jessie64
    vagrant up
    vagrant ssh

Now you are on your Debian Linux VM; setup the Docker host:

    sudo su -
    apt-get install qemu-user-static
    curl https://get.docker.com/ | sh

Run a raspbian environment:

    docker run -ti \
        --volume /usr/bin/qemu-arm-static:/usr/bin/qemu-arm-static \
        philipz/rpi-raspbian \
        bash

And do what you need to. Then you can docker export and docker import to move images around. You can also use the hub or set up a registry to use push/pull. The Docker Toolbox will also allow you to easily run Docker via a VirtualBox VM on mac, but I've run into more troubles than it's been worth (when you have vagrant set up).
This could be more generic: building an image for architecture B on a machine with architecture A. I currently want to create an image with a lot of Python dependencies, which takes time on a raspberry-pi but is faster on a Mac. When I get an error at the end we'll need to rebuild. Is there a way to build this image on a Mac and then pull it on my raspberry pi?
Docker - Build rpi image on Mac
Depending on the use case, you can run multiple processes inside a single container, although I won't recommend that. In some sense it is even simpler to run them in different containers. Keeping containers small, stateless, and around a single job makes it easier to maintain them all. Let me tell you how my workflow with containers looks in a similar situation. So: I have one container with nginx that is exposed to the outside world (:443, :80). At this level it is straightforward to manage the configurations, tls certificates, load balancer options etc. Then one (or more) container(s) with the application; in that case a php-fpm container with the app. The Docker image is stateless; the containers mount and share the volumes for static files and so on. At this point, you can at any time destroy and re-create the application container, keeping the load-balancer up and running. Also, you can have multiple applications behind the same proxy (nginx), and managing one of them would not affect the others. Then one or more containers for the database; the same benefits apply. Redis, Memcache etc. Having this structure, the deployment is modular, so each and every "service" is separated and logically independent from the rest of the system. As a side effect, in this particular case, you can do zero-downtime deployments (updates) to the application. The idea behind this is simple. When you have to do an update, you create a docker image with the updated application, run the container, run all the tests and maintenance scripts and, if everything goes well, you add the newly created container to the chain (load balancer) and softly kill the old one. That's it: you have the updated application and users didn't even notice it at all.
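As a hedged illustration of that layout, a minimal docker-compose sketch with one container per role (the image tags, config paths and database choice are placeholders, not a prescription):

    version: "3"
    services:
      nginx:
        image: nginx:alpine
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - ./nginx.conf:/etc/nginx/conf.d/default.conf
          - ./app:/var/www/html
      php:
        image: php:7.4-fpm
        volumes:
          - ./app:/var/www/html
      db:
        image: postgres:13
        volumes:
          - dbdata:/var/lib/postgresql/data
    volumes:
      dbdata:

The nginx config then points fastcgi_pass at php:9000, so the proxy container can be replaced or scaled without touching the application container.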
The Docker best practices guide states that: "...you should only run a single process in a single container..." Should Nginx and PHP-FPM run in separate containers? Or does that mean that micro service architectures only run one service or "app" in a container? Having these services in a single container seems easier to deploy and maintain.
Docker best practices: single process for a container
This should work:

    $ docker run --rm -v //c/Users/Marco:/data composer --help
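The leading double slash stops Git Bash (MSYS) from rewriting the POSIX-style path into a Windows path before docker sees it. An alternative sketch, assuming a reasonably recent Git for Windows, is to disable that path conversion for the one command:

    MSYS_NO_PATHCONV=1 docker run --rm -v "$PWD":/data composer --help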
When running the following command from a CoreOS VM, it works as expected:

    docker run --rm -v $PWD:/data composer init

It will initialize the composer.json file in the current working directory by using the Docker volume mapping as specified. The Docker container basically has the PHP tool composer installed and will run that tool inside the /data folder of the container. By using the mapping it actually applies it to the files on the host machine. However when trying to run this command on Windows using Docker Toolbox I get the following error:

    $ docker run --rm -v $PWD:/data composer --help
    invalid value "C:\\Users\\Marco;C:\\Program Files\\Git\\data" for flag -v: bad mount mode specified : \Program Files\Git\data
    See 'C:\ProgramData\Chocolatey\lib\docker\bin\docker.exe run --help'.

What I notice here is that although I am in Git Bash when executing the command, it still uses Windows paths. So then I tried the following (surrounding with quotes):

    $ "docker run --rm -v $PWD:/data composer --help"
    bash: docker run --rm -v /c/Users/Marco:/data composer --help: No such file or directory

Now it is unable to find the directory. I also tried without the $PWD variable, but this doesn't make a difference. How do I make this work on Windows?
'docker run -v' does not work on Windows using Docker Toolbox
Caveat - Docker is under heavy development so confirming against current docs is advisable. The network element is one of those under current discussion on docker-dev; it looks like longer term integration with libvirt is being considered. So to answer your question: NET DHCP or something like it is probably not implemented as you'd want. Some of how Docker's networking is implemented is described in this blog post. Currently a set of IP ranges is defined in CreateBridgeIface in network.go. For the meanwhile you might want to check out pipework, which is a tool designed to be used with Docker for various network configuration. This will allow you to add and modify IP addresses on your container, create private networks and connect containers to a physical interface. In the end it's wrapping lower level tools, but you might find using pipework easier.
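For reference, a sketch of what pipework usage looks like: it attaches an extra interface to a running container and either assigns a static address or asks the LAN's DHCP server for one. The host interface and container names here are assumptions, and the DHCP option needs a DHCP client available, so check the pipework README for your version:

    # give the container a static address bridged onto the host's eth0
    pipework eth0 my_container 192.168.1.50/24

    # or let the existing DHCP server hand out an address
    pipework eth0 my_container dhcp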
I'm new to Docker. Is it possible to assign an IP address (from a DHCP server) to Docker containers running on a host or VM? If yes, can someone point me in the correct direction? If no, is it a fundamental limitation of the container approach, or is it just a feature that's not in Docker yet?
Assigning IP address to docker containers?
The solution I went for was to edit the docker-compose.override.yml file that was added by Visual Studio Tools for Docker, and add the following lines:

    version: '3'
    services:
      mydockerapp:
        volumes:
          - ${USERPROFILE}/.aws:/root/.aws
        environment:
          - AWS_REGION=(your region)
          - AWS_PROFILE=default

This mounts the .aws directory containing AWS credentials in the appropriate place in the Docker container (/root is the default HOME directory), and sets environment variables to select the profile and region. The launchSettings.json file in the .NET Core project is not used when running in Docker.
I have a .NET Core 2.0 console application developed using Visual Studio 2017. The launchSettings.json file sets an environment variable that allows it to use the developer's default AWS credentials:

    "environmentVariables": { "AWS_PROFILE": "default" ... }

I have now added Docker support to the VS solution, and am trying to run the application in a Linux Docker container. Of course it fails with the following exception, as it is unable to find the profile:

    Amazon.Runtime.AmazonClientException: Unable to find the 'default' profile in CredentialProfileStoreChain.

What is the best way to pass AWS credentials to the Docker container in a development environment? I obviously don't want to put my credentials as environment variables in launchSettings.json as this file is committed to source control.

EDIT: Just to be clear, I am looking for a solution that allows my Docker container to access the developer's credentials when debugging in Visual Studio 2017 on the developer's machine. Release builds will be deployed to AWS and an IAM role will preclude the need for credentials. The credentials are in the file %USERPROFILE%\.aws\credentials and I'm looking for a solution that will enable me to use them from within the Docker container without exposing them elsewhere: hence I don't want to put them in launchSettings.json or any other file that launches the Docker container. A solution I envisage could involve mounting the Windows drive in the Docker container (or at least the directory %USERPROFILE%\.aws\), then setting an environment variable (AWS_SHARED_CREDENTIALS_FILE?) in the Docker container so that AWS automagically finds the credentials file. I've no idea how to do this though, as I'm very new to Docker.
How to manage AWS credentials when running Docker container with Visual Studio 2017
Change your proxy_pass from proxy_pass http://0.0.0.0:8000; to proxy_pass http://web:8000; Your nginx needs to forward the request to the web container.

Edit-1: Explanation

0.0.0.0 is a special IP address which is used to refer to any available interface on the machine. So if your machine has a loopback (lo), ethernet (eth0) and Wifi (wlan0) interface with respective IPs 127.0.0.1, 192.168.0.100 and 10.0.0.100, then while listening for incoming connections you can choose any of the above IPs:

    gunicorn wsgi:application --workers 2 --bind 10.0.0.100:8000

This will only be reachable from your Wifi network; other machines on the LAN network can't visit it. So if you want your app to listen on any available network on the machine you use the special IP 0.0.0.0, which means bind on all network interfaces:

    gunicorn wsgi:application --workers 2 --bind 0.0.0.0:8000

Now when you access the app using http://0.0.0.0 it is equivalent to using 127.0.0.1. So your proxy_pass http://0.0.0.0:8000; is equivalent to proxy_pass http://127.0.0.1:8000; When you run that in the nginx container, it passes the request on to port 8000 of the same container, and there is nothing running on 8000 in your nginx container. So you need to send that request to your gunicorn container, which is reachable using the service name web in docker-compose. See the below article for more details: https://docs.docker.com/compose/networking/
The setup is as follows: I have a Gunicorn/Django app running on 0.0.0.0:8000 that is accessible via the browser. To serve static files I am running nginx as a reverse proxy. /etc/nginx/nginx.conf is configured to forward requests as follows:

    server {
        location /static/ {
            alias /data/www/;
        }
        # Proxying the connections
        location / {
            proxy_set_header Host $host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_redirect off;
            proxy_pass http://0.0.0.0:8000;
        }
    }

and my docker-compose.yml file is as follows:

    version: '3.3'
    services:
      web:
        restart: always
        build: ./web
        expose:
          - "8000"
        ports:
          - "8000:8000"
        volumes:
          - staticdata:/usr/src/app/static_files
        command: gunicorn wsgi:application --workers 2 --bind 0.0.0.0:8000
        depends_on:
          - postgres
      nginx:
        restart: always
        build: ./nginx
        ports:
          - "80:80"
          - "443:443"
        volumes:
          - staticdata:/data/www
        depends_on:
          - web
      postgres:
        image: postgres:9.2
        restart: always
        volumes:
          - pgdata:/var/lib/postgresql/data
        ports:
          - "5432:5432"
    volumes:
      staticdata:
      pgdata:

When I visit 0.0.0.0:8000 via the browser the application works fine (albeit without serving static files), but when I visit 127.0.0.1:80 I get the following error:

    nginx_1 | 2017/09/17 13:59:46 [error] 6#6: *5 connect() failed (111: Connection refused) while connecting to upstream, client: 172.18.0.1, server: , request: "GET / HTTP/1.1", upstream: "http://0.0.0.0:8000/", host: "127.0.0.1"

I know that this error indicates that the server running on 0.0.0.0:8000 is not accepting requests, but since I can visit it via the browser I am a bit confused. Thank you in advance.
Connection refused while connecting to upstream when using nginx as reverse proxy
Due to some weirdness in the docker layers and inodes, you have to create the file during the CMD:

    CMD cron && touch /var/log/cron.log && tail -F /var/log/cron.log

This works both for the file and for stdout:

    FROM ubuntu:18.10
    RUN apt-get update
    RUN apt-get update && apt-get install -y cron
    ADD hello-cron /etc/cron.d/hello-cron
    # Give execution rights on the cron job
    RUN chmod 0644 /etc/cron.d/hello-cron
    # Create the log file to be able to run tail
    # Run the command on container startup
    CMD cron && touch /var/log/cron.log && tail -F /var/log/cron.log

The explanation seems to be this one: in the original post the tail command starts "listening" to a file which is in a layer of the image; then, when cron writes the first line to that file, docker copies the file to a new layer, the container layer (because of the nature of the copy-on-write filesystem, the way that docker works). So when the file gets created in a new layer it gets a different inode, and tail keeps listening to the previous state, so it loses every update to the "new file". Credits BMitch
Trying to run a docker container that has a cron schedule. However I cannot make it output logs. I'm using docker-compose.

docker-compose.yml:

    ---
    version: '3'
    services:
      cron:
        build:
          context: cron/
        container_name: ubuntu-cron

cron/Dockerfile:

    FROM ubuntu:18.10
    RUN apt-get update
    RUN apt-get update && apt-get install -y cron
    ADD hello-cron /etc/cron.d/hello-cron
    # Give execution rights on the cron job
    RUN chmod 0644 /etc/cron.d/hello-cron
    # Create the log file to be able to run tail
    # Run the command on container startup
    CMD cron && tail -F /var/log/cron.log

cron/hello-cron:

    * * * * * root echo "Hello world" >> /var/log/cron.log 2>&1

The above runs fine and it's outputting logs inside the container, however they are not streamed to docker. E.g. docker logs -f ubuntu-cron returns empty results, but if you log in to the container (docker exec -it -i ubuntu-cron /bin/bash) you have logs:

    cat /var/log/cron.log
    Hello world
    Hello world
    Hello world

Now I'm thinking that maybe I don't need to log to a file? I could attach this to stdout, but I'm not sure how to do this. This looks similar: How to redirect cron job output to stdout
Docker ubuntu cron tail logs not visible
There is a better way than overriding the default command - using /docker-entrypoint-initdb.d: When a container is started for the first time it will execute files with extensions .sh and .js that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. .js files will be executed by mongo using the database specified by the MONGO_INITDB_DATABASE variable, if it is present, or test otherwise. You may also switch databases within the .js script. [Source] So you simply write that command into a file named mongorestore.sh:

    mongorestore -d mydatabase /db-dump

and then mount it inside along with the dump file:

    version: "3.1"
    services:
      mongo:
        image: mongo:bionic
        ports:
          - "27017:27017"
        volumes:
          - ./mongorestore.sh:/docker-entrypoint-initdb.d/mongorestore.sh
          - ./db-dump:/db-dump

You don't even need a Dockerfile.
I'm trying to set up a container running MongoDB that gets populated with data using mongorestore when it starts up. The idea is to quickly set up a dummy database for testing and mocking. My Dockerfile looks like this:

    FROM mongo:bionic
    COPY ./db-dump/mydatabase/* /db-dump/

and docker-compose.yml looks like this:

    version: "3.1"
    services:
      mongo:
        build: ./mongo
        command: mongorestore -d mydatabase ./db-dump
        ports:
          - "27017:27017"

If I run this with docker-compose up, it pauses for a while and then I get an error saying:

    error connecting to host: could not connect to server: server selection error: server selection timeout, current topology: { Type: Single, Servers: [{ Addr: localhost:27017, Type: Unknown, State: Connected, Average RTT: 0, Last error: connection() : dial tcp 127.0.0.1:27017: connect: connection refused }, ] }

Opening a CLI on the container and running the exact same command works without any issues, however. I've tried adding -h with the name of the container or 127.0.0.1, and it doesn't make a difference. Why isn't this command able to connect when it works fine once the container is running?
Running mongorestore on Docker once the container starts
In case anyone else finds this, the problem ended up being a typo in the maven plugin configuration. I was usinginstead of. Below is the correct XML that works: abc mytoken https://ghcr.io
Using Spring Boot 2.4.0, I'm trying to configure the spring-boot:build-image task to push an image to my private GitHub container registry. I used these instructions to configure my POM as follows: org.springframework.boot spring-boot-maven-plugin ghcr.io/abc/${project.artifactId}:${project.version} true abc mytoken https://ghcr.io When I execute the spring-boot:build-image task, it builds the image but I get the following error when it tries to push:

    [INFO] Successfully built image 'ghcr.io/abc/def:1.5.0'
    [INFO]
    [INFO]  > Pushing image 'ghcr.io/abc/def:1.5.0' 100%
    Execution default-cli of goal org.springframework.boot:spring-boot-maven-plugin:2.4.0:build-image failed: Error response received when pushing image: error parsing HTTP 405 response body: unexpected end of JSON input: "" -> [Help 1]

I can manually push the image using docker push, and I have tried doing a docker login which doesn't help either. I am also not behind any firewall or proxy.
Maven Spring Boot Cannot Push Docker Image
With docker (and docker-compose), you can run arbitrary commands in a container. The Dockerfile defines the default command that is run when no other command is specified, but that doesn't mean it's the only one you can run. In your case: npm start is run when no other command is specified. That happens when you do docker-compose up. But you can run any command using docker run or docker-compose run. For your tests, that might look like this: docker-compose run web mocha. There is a slight difference in up and run, and I encourage you to read up on it: Should I use docker-compose start up or run? Does this help you get started?
I am trying to run mocha unit tests for my node application. The application is built by a docker image.

Docker image:

    FROM node:6.10.0-alpine
    RUN mkdir -p /app
    WORKDIR /app
    COPY package.json /app
    RUN npm install
    COPY . /app
    EXPOSE 3000
    CMD ["npm", "start"]

Docker compose:

    version: "3"
    services:
      web:            #### nodejs image
        build: .
        volumes:
          - ./app/
        ports:
          - "3000:3000"
        depends_on:
          - db
      db:             ##### postgres db image
        build:
          context: .
          dockerfile: dbDockerfile
        ports:
          - 5432:5432

The setup can be built and works as expected. The problem is now I am not sure how to run unit test commands like mocha to perform the test. I see a module called dockunit but I am not sure if that's the only way for now. Can anyone help me out with this?
How to setup unit test in Docker for nodejs application?
So I finally found the issue! In my original Dockerfile, NODE_ENV was set before yarn install. This means that for the production build, yarn would not install devDependencies, and therefore not any of my @types libraries. This caused all the compilation errors all over the project. Moving the definition of NODE_ENV below/after yarn install in the Dockerfile solved the issue.

    FROM mhart/alpine-node:10.9
    WORKDIR /usr/src
    COPY yarn.lock package.json ./
    RUN yarn
    ARG REACT_APP_API_ENDPOINT
    ARG NODE_ENV
    COPY . .
    RUN yarn build && mv build /public

Note: As far as I know, yarn build will make sure to remove the devDependencies again, so don't worry about this bloating your build. :)
So I have a create-react-app-ts app that I would like to Dockerize and host on Zeit Now. Everything works fine locally; running yarn tsc and react-scripts-ts build works great. Creating the Docker image also works great from the following Dockerfile:

    FROM mhart/alpine-node:10.9
    WORKDIR /usr/src
    COPY yarn.lock package.json ./
    RUN yarn
    ARG REACT_APP_API_ENDPOINT
    ARG NODE_ENV
    COPY . .
    RUN yarn build && mv build /public

However, when publishing to Now, the build script fails on Typescript compilation, outputting compilation errors for most files in the project. I am able to reproduce that locally as well if I set ENV NODE_ENV production in my Dockerfile just above WORKDIR. So it would seem that either Typescript or react-scripts-ts acts differently when NODE_ENV=production. I've never encountered this error before, and I don't know how to debug it. Running NODE_ENV=production tsc or NODE_ENV=production react-scripts-ts build also works fine locally. I'm running Typescript v3.0.1 with the following config:

    { "compilerOptions": { "baseUrl": ".", "outDir": "build/dist", "module": "esnext", "target": "es6", "lib": ["es6", "dom", "esnext.asynciterable"], "sourceMap": true, "allowJs": true, "jsx": "react", "moduleResolution": "node", "rootDir": "src", "forceConsistentCasingInFileNames": true, "noImplicitReturns": true, "noImplicitThis": true, "noImplicitAny": true, "strictNullChecks": true, "suppressImplicitAnyIndexErrors": true, "noUnusedLocals": true, "allowSyntheticDefaultImports": true, "strict": true }, "exclude": ["node_modules", "build", "scripts", "acceptance-tests", "webpack", "jest", "src/setupTests.ts"] }

Any advice would be much appreciated! :)

EDIT: Added env var args to the Dockerfile. It was originally left out for the sake of brevity, but it ended up being part of the issue and solution.
Typescript compilation fails (in Docker) when NODE_ENV=production
In VSTS, it's the build service account which executes the entire build pipeline, so this account should also run the command. Note, the service is set up during the configuration of the build agent. You can run the build agent as a systemd service; for more details please refer to this tutorial. You will need to grant appropriate permissions: the user just needs to be added to the docker group:

    sudo usermod -a -G docker user

Also restart the systemd service and try to trigger the build again.
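A hedged sketch of the full sequence on the agent machine; the agent user and install directory are placeholders, and svc.sh is the service script shipped with the VSTS/Azure DevOps Linux agent:

    # let the agent's account talk to the docker daemon
    sudo usermod -a -G docker vstsagent

    # restart the agent service so the new group membership is picked up
    cd /home/vstsagent/myagent
    sudo ./svc.sh stop
    sudo ./svc.sh start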
I have set up a private pipeline with a linux vm; the agent is installed and the portal shows that the agent is active. I have also installed docker. On the same machine, if I use sudo docker it works. So I am sure it is a permissions issue when the VSTS agent is running the command. I'm not sure which user I need to give which permission so that the docker command will run when I initiate a build from VSTS.

    Got permission denied while trying to connect to the Docker daemon socket at unix:///var/run/docker.sock: Post http://%2Fvar%2Frun%2Fdocker.sock/v.37/build?buildargs=%7B%7D&cachefrom=%5B]&cgroupparent=&cpuperiod=0&cpuquota=0&cpusetcpus=&cpusetmems=&cpushares=0&dockerfile=Dockerfile&labels=%7B%7D&memory=0&memswap=0&networkmode=default&rm=&session=a53bebddc77c89993b6e464d9f2a56fac9b***e62***094***fe70355df2c8dfcf***8b9&shmsize=0&t=mycontainerreg.azurecr.io%2Ftk-dashboard%3A853⌖=&ulimits=null: dial unix /var/run/docker.sock: connect: permission denied
    /usr/bin/docker failed with return code: ***
setting up docker permission to VSTS agent in a private pipeline
The difference between node and PHP here is that php automatically picks up file system changes between requests, but a node server doesn't. I think you'll see that the file changes get picked up if you restart node by bouncing the container with docker-compose down then up (no need to rebuild things!). If you want node to pick up file system changes without needing to bounce the server you can use some of the node tooling. nodemon is one: https://www.npmjs.com/package/nodemon. Follow the installation instructions for local installation and update your start script to use nodemon instead of node. Plus I really do think you have a mistake in your dockerfile and you need to copy the source code into your working directory. I'm assuming you got your initial recipe from here: https://dev.to/alex_barashkov/using-docker-for-nodejs-in-development-and-production-3cgp. The dockerfile from that article is below. You missed a step!

    FROM node:10-alpine
    WORKDIR /usr/src/app
    COPY package*.json ./
    RUN npm install
    COPY . .
    CMD [ "npm", "start" ]
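A minimal sketch of the nodemon side, assuming the entry point is server.js (the file name is a placeholder for whatever your start script currently runs):

    npm install --save-dev nodemon

and in package.json:

    "scripts": {
      "start": "nodemon server.js"
    }

With the volume mount from the compose file in place, nodemon inside the container restarts the server whenever the mounted files change. On Windows bind mounts, file-change events sometimes don't propagate, in which case nodemon's legacy polling flag (-L) is the usual workaround.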
I am trying to host a development environment on my Windows machine which hosts a frontend and backend container. So far I have only been working on the backend. All files are on the C drive which is shared via Docker Desktop. I have the following docker-compose file and Dockerfile, the latter is inside a directory called backend within the root directory.

Dockerfile:

    FROM node:12.15.0-alpine
    WORKDIR /usr/app
    COPY package*.json ./
    RUN npm install
    EXPOSE 5000
    CMD [ "npm", "start" ]

docker-compose.yml:

    version: "3"
    services:
      backend:
        container_name: backend
        build:
          context: ./backend
          dockerfile: Dockerfile
        volumes:
          - ./backend:/usr/app
        environment:
          - APP_PORT=80
        ports:
          - '5000:5000'
      client:
        container_name: client
        build:
          context: ./client
          dockerfile: Dockerfile
        volumes:
          - ./client:/app
        ports:
          - '80:8080'

For some reason, when I make changes in my local files they are not reflected inside the container. I am testing this by slightly modifying the outputs of one of my files, but I am having to rebuild the container each time to see the changes take effect. I have worked with Docker in PHP applications before, and have basically done the same thing. So I am unsure why this is not working with my Node.js app. I am wondering if I am just missing something glaringly obvious as to why this is not working. Any help would be appreciated.
Node.js docker container not updating to changes in volume
Docker containers are started with an entrypoint and a command; when the container actually starts they are simply concatenated together. If the ENTRYPOINT in the Dockerfile is structured like a single command then the CMD in the Dockerfile or command: in the docker-compose.yml contains arguments to it. This means you should be able to set up your docker-compose.yml as:

    services:
      my.app1:
        image: ${DOCKER_REGISTRY}my/app
        ports:
          - 5000:80
        command: [80, db1.db]
      my.app2:
        image: ${DOCKER_REGISTRY}my/app
        ports:
          - 5001:80
        command: [80, db2.db]

(As a side note: if one of the options to the program is the port to listen on, this needs to match the second port in the ports: specification, and in my example I've chosen to have both listen on the "normal" HTTP port and remap it on the hosts using the ports: setting. One container could reach the other, if it needed to, as http://my.app2/ on the default HTTP port.)
I am new to docker, so this may sound a bit like a basic question. I have a VS.Net core2 console application that is able to take some command line parameters and provide different services, so in a normal command prompt I can run something like:

    c:>dotnet myapplication.dll 5000 .\mydb1.db
    c:>dotnet myapplication.dll 5001 .\mydb2.db

which creates 2 instances of this application listening on ports 5000 and 5001. I want to now create one docker container for this application and run multiple instances of that image, with the ability to pass this parameter as a command line to the docker run command. However I am unable to see how to configure this either in the docker-compose.yml or the Dockerfile.

Dockerfile:

    FROM microsoft/dotnet:2.1-aspnetcore-runtime AS base
    WORKDIR /app
    EXPOSE 80
    # ignoring some of the code here
    ENTRYPOINT ["dotnet", "myapplication.dll"]

docker-compose.yml:

    version: '3.4'
    services:
      my.app:
        image: ${DOCKER_REGISTRY}my/app
        ports:
          - 5000:80
        build:
          context: .
          dockerfile: dir/Dockerfile

I am trying to avoid creating multiple images, one per each combination of command line arguments. So is it possible to achieve what I am looking for?
Docker: run multiple instances of an image with different parameters
For RemoteWebDriver you have to set the file detector: driver.setFileDetector(new LocalFileDetector());. Your code:

    public static void uploadSampleImage(StaticSeleniumDriver driver) {
        driver.setFileDetector(new LocalFileDetector());
        File file = new File(System.getProperty("user.dir") + "/resources/images/" + SAMPLE_DOCUMENT_FILE_NAME);
        Utils.Log("file exists: " + file.exists());
        String imagePath = file.getAbsolutePath();
        WebElement input = driver.findElement(By.name("file"));
        input.sendKeys(imagePath);
    }
I have the following method that uploads an image using selenium:

    public static void uploadSampleImage(StaticSeleniumDriver driver) {
        File file = new File(System.getProperty("user.dir") + "/resources/images/" + SAMPLE_DOCUMENT_FILE_NAME);
        Utils.Log("file exists: " + file.exists());
        String imagePath = file.getAbsolutePath();
        WebElement input = driver.findElement(By.name("file"));
        input.sendKeys(imagePath);
    }

That's a standard way of feeding a file path (like explained in the Guru99 tutorial) to upload a file. It works fine when testing locally on windows. It is NOT working when run inside a docker container (linux); I'm getting this error:

    org.openqa.selenium.InvalidArgumentException: invalid argument: File not found : /usr/src/app/resources/images/image2.png (Session info: chrome=72.0.3626.81) (Driver info: chromedriver=2.46.628388 (4a34a70827ac54148e092aafb70504c4ea7ae926),platform=Linux 4.9.125-linuxkit x86_64) (WARNING: The server did not provide any stacktrace information)

Which is weird because I am sure the file exists in the given directory (in my method above, I am checking if the file exists and the log clearly confirms that). Any suggestions would be welcome, thank you.
Selenium upload file: file not found [docker]
There are 2 options: use an existing image OR tell docker-compose to build it. If both are specified, then Compose names the built image with jupyter/base-notebook:latest. If you want to use the jupyter/base-notebook:latest image as is, remove the build: section from your compose file and keep the image::

    version: "3.7"
    services:
      notebook:
        image: jupyter/base-notebook:latest
        ports:
          - "3010:8888"
        volumes:
          - "./notebooks:/home/appuser/work"

You don't even need a Dockerfile. If you want to build a custom image, give it a name that does not conflict with the official image name (preferably) and provide a build context:

    services:
      notebook:
        build:
          context: ./
          dockerfile: Dockerfile
          args:
            - NB_USER=appuser
            - NB_UID=1001
            - NB_GID=101
        image: /:
I want to use the jupyter/base-notebook:latest image. Here is my docker-compose.yml:

    version: "3.7"
    services:
      notebook:
        image: jupyter/base-notebook:latest
        build:
          args:
            - NB_USER=appuser
            - NB_UID=1001
            - NB_GID=101
        ports:
          - "3010:8888"
        volumes:
          - "./notebooks:/home/appuser/work"

When I run docker-compose up, I get this error: Service notebook has neither an image nor a build context specified. At least one must be provided. How can I solve it?
Service notebook has neither an image nor a build context specified. At least one must be provided
You have two options:

You can run these commands in the dockerfile for your images; as each dockerfile is run when compose builds, your images will have the results of these commands. This is particularly useful when you are doing os-level upgrades and configuration bootstrapping (like your apt-get commands).

For runtime-level configuration (things you need to do once the system is up), use the command directive in your docker-compose.yml file. These would be your migrations (if you need to run them each time).

If you want to persist your data across runs of docker compose (that is, your data should remain when you restart the container), then you need either a persistent mapping against your host or a data volume that's shared, which you can configure in your docker-compose.yml as well.

docker-compose will happily run whatever script you provide; it doesn't know if it needs to run it, it's just executing commands. It is up to you to make sure your pre, post, and bootstrap scripts are intelligent enough that they can be repeated even if their effective results are already applied.
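For the Django case specifically, a hedged sketch of that runtime-level option: the command: directive can chain the migration into the container start (the service layout and the use of runserver here are assumptions, not taken from your compose file):

    web:
      build: .
      command: sh -c "python manage.py migrate && python manage.py runserver 0.0.0.0:8000"
      depends_on:
        - db
    db:
      image: postgres

Because migrate only applies migrations that haven't been applied yet, re-running it on every docker-compose up is safe, which is exactly the kind of repeatable behaviour described above.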
I have a sample django app that I am trying to get up and running using docker. docker-compose up brings up the web, db and other containers along with links between them. But there are pre and post scripts that might need to be run.

Example of pre-scripts in my scenario: git, pip, docker, docker-compose, wget

Example of post-scripts: database migrations, usually done manually using docker run web ... after the containers are up and running.

Currently I have a deploy.sh at the root of the app which follows logic like this (I choose a ubuntu image when launching):

    #assuming I always choose ubuntu base image
    sudo apt-get install x
    sudo apt-get install y
    sudo apt-get install z
    docker-compose build .;
    docker-compose up -d;
    docker-compose run web "python manage.py makemigrations"

My questions: 1) What is the best way to run these commands? 2) Are database migrations run each time you deploy (from scratch?) - or is this issue taken care of by volumes?
docker-compose - database migrations and other pre/post scripts
Quick answer: Yes :) In the Dockerfile, you are copying your app into /var/www/app. The instructions from the Dockerfile are executed when you build your image (docker build -t :). If you change the code later on, how could the image be aware of that? However, you can mount a volume (a directory) from your host machine into the container when you execute the docker run / docker-compose up command, right under /var/www/app. You'll then be able to change the code in your local directory and the changes will automatically be seen in the container as well. Perhaps you want to mount the current working directory (the one containing your app) at /var/www/app?

    volumes:
      - .:/var/www/app
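Putting that together with the compose file from the question, a hedged sketch: the --reload flag is uvicorn's development auto-reload option, and the main:app module path is an assumption about your project layout, so adjust both to your app:

    version: '3.8'
    services:
      ingestion-service:
        build:
          context: ./app
          dockerfile: Dockerfile
        ports:
          - "80:80"
        volumes:
          - ./app:/var/www/app
        command: uvicorn main:app --host 0.0.0.0 --port 80 --reload

The bind mount makes your edits visible inside the container, and --reload makes the server restart when it sees them; without both, a rebuild is the only way changes show up.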
I am running FastAPI via docker by creating a service called ingestion-data in docker-compose. My Dockerfile:

    FROM tiangolo/uvicorn-gunicorn-fastapi:python3.7

    # Environment variable for directory containing our app
    ENV APP /var/www/app
    ENV PYTHONUNBUFFERED 1

    # Define working directory
    RUN mkdir -p $APP
    WORKDIR $APP
    COPY . $APP

    # Install missing dependencies
    RUN pip install -r requirements.txt

and my docker-compose.yml file:

    version: '3.8'
    services:
      ingestion-service:
        build:
          context: ./app
          dockerfile: Dockerfile
        ports:
          - "80:80"
        volumes:
          - .:/app
        restart: always

I am not sure why this is not picking up any change automatically when I make a change in any endpoint of my application. I have to rebuild my images and container every time.
How to make FastAPI pick up changes in an API routing file automatically while running inside a docker container?