
As a quick recap, Docker provides a way to capture code and configuration changes for a specific web service or runtime environment, much like a lightweight virtual machine, all from the cozy confines of a Linux terminal and a text file. Docker images are save points made up of layers; they are built either from Dockerfiles or from containers, which themselves require a base image to start from anyway. A Dockerfile automates the image build process by rolling all the commands and actions you want every new container to start with into a single file.

Now this is all great, but I want to take it a step further. Building images, especially ones with dependencies, is cumbersome because either (1) you have to rely on commands that are not present in the default OS image, or (2) the image comes with a lot of other commands you don't need.

In my head it feels possible, but I can't quite make the connection yet. My goal is a Dockerfile that builds itself from scratch (literally the scratch image). It would copy in any desired dependency, say an RPM, install it, find its startup command, and relay every dependency needed to successfully build and run the image without flaw back into the Dockerfile. In programming terms:

FROM scratch
COPY package.rpm /
RUN *desired cmds*

Build errors are fed back into a file; something then searches the current OS for the missing dependencies and returns them to the RUN command.

CMD *service start up*

As for that CMD, we would run the service, read its status, and filter its startup command back into the CMD line.

The problem is that I don't believe I can bend Docker to these ends. Running a docker build, retaining its errors, and feeding them back into the next build seems challenging. I wish Docker came equipped with this, because my only option seems to be a script, which wreaks havoc on the portability factor.
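A minimal sketch of what such a feedback loop might look like as a shell script. This assumes build errors contain "No such file or directory" lines naming an absolute path; the `missing_paths` helper is hypothetical, and the actual error format varies by shell and distribution:

```shell
#!/bin/sh
# Extract absolute paths that a failed build reported as missing.
# Assumes error lines look like:
#   /bin/sh: /usr/bin/foo: No such file or directory
missing_paths() {
  grep 'No such file or directory' \
    | sed 's/.*: \(\/[^:]*\): No such file.*/\1/' \
    | sort -u
}

# Feedback loop (sketch): build, harvest missing paths from the log,
# append COPY lines for them to the Dockerfile, then rebuild.
# docker build . 2>build.log || true
# for p in $(missing_paths < build.log); do
#   printf 'COPY %s %s\n' "$p" "$p" >> Dockerfile
# done
```

This is exactly the kind of script that ties you to the host OS, which is the portability problem described above.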

Any ideas?

Jouster500
  • In a sense, I want to automate the automated builds of Dockerfiles. – Jouster500 Jul 27 '15 at 16:18
  • I know what I could possibly do, but it would take some prep work. First I might have to do a minimal install of the machine I want to build images on, meaning just a terminal on a particular VM. From that minimal install I could reverse the process by deleting all the unneeded commands, but I would still have to sift through the package to figure out what it needs... I've been told it is easier to build up than to tear down, but now I'm not so sure. I know in a Dockerfile I can run particular commands, but how do I tell what commands an entire service needs? – Jouster500 Jul 31 '15 at 13:15

2 Answers


Docker isn't going to offer you painless builds. Docker doesn't know what you want.

You have several options here. A simple docker-compose example:

web:
  build: .
  volumes:
    - "app:/src/app"
  ports:
    - "3030:3000"

To use it:

docker-compose up

Docker compose will then:

  1. Call the container web
  2. Build using the current working directory as the build context
  3. Mount the app directory to /src/app in the container
  4. Map host port 3030 to container port 3000

Note that you can also use a ready-made image you found via Kitematic (which reads from registry.hub.docker.com) instead of building locally: replace the build: . line in the example above with image: node:latest and Compose will start a NodeJS container.
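For instance, a variant of the compose file above that pulls a prebuilt image rather than building locally (a sketch; the service name and port mapping are illustrative):

```
web:
  image: node:latest
  ports:
    - "3030:3000"
```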

Docker Compose is very similar to the docker command line. You can use https://lorry.io/ to help generate docker-compose.yml files.

If you're looking for an epic solution, I would recommend something like Mesosphere for an enterprise Docker environment. There are other options you could also look into, like Google's Kubernetes and Apache Mesos, but the learning curve increases.

I also noticed you were mucking with IPs; while I haven't used it, from what I hear, Weave greatly simplifies the networking side of Docker, which is definitely not Docker's strong suit.

taco
  • I've studied Kitematic. It's a nicer front end for the already present Docker daemon, supplying a GUI for users to interact with; a much better option for managing Docker. Docker Compose falls into the same category: a nicer tool for managing massive numbers of containers. But I'm afraid I wasn't aiming for this. I was aiming for an image that, when you attempt to build it, recognizes that it can't find a certain file needed by a RUN command, relays this dependency back to a machine that can find it, and adds it to the Dockerfile. – Jouster500 Jul 28 '15 at 12:40
  • This is not how enterprise companies do it. They automate builds, then only worry when the builds fail. You do realize a Docker image is essentially just an archive, right? You're saying you want something to automate the simple automation of copying files. A human has to be in the process somewhere; there's always an engineer at the top of that automation stack. – taco Jul 28 '15 at 19:47
  • Man vs. machine: man makes machine, machine is given the ability to make copies of itself. Did you know there was a virus out there that could completely rewrite its own code while in the field? If someone can do something that complex, someone should also be able to make a script that attaches the needed dependencies to a Dockerfile. I've already got a basic one in the making: it takes the error output of the failed build, greps for the file path, copies that path into a new Dockerfile, attempts to build that one, grabs its dependencies, and so on. – Jouster500 Jul 28 '15 at 21:47
  • The only downside is that it lives in a script that is grounded in the operating system it was written on. That's why I'm asking whether there is some method out there that can replicate this and still be portable to all environments. – Jouster500 Jul 28 '15 at 21:49
  • It sounds like you want to write it in Java, so your tool that runs Docker commands is cross-platform across the major OSes. I don't know why you wouldn't just build for one OS, though; it seems like a waste of time and energy that could be focused elsewhere. – taco Aug 09 '15 at 01:16
  • You know, you're probably right. When it boils down to it, it would be something on just one machine searching for the dependencies of a particular application. I was hoping there was some way to keep it universal across almost all Linux distributions, because I bet there are programs that can't run on something like CentOS due to dependency issues. And Java just might have to be the answer; it is one of those languages supported on every platform. I don't know, I may have to look later at how cost-effective this really is. – Jouster500 Aug 10 '15 at 13:04
  • I was thinking about this today; maybe you could just have it curl an API server. There is a lot you could do here: offload some tests to the API server, or have it download the latest build script that a continuous-integration tool has verified works. – taco Aug 10 '15 at 17:11
  • I'll try looking into that in the coming weeks, because that sounds like one of my earlier ideas: having the program communicate with something else that handles the incoming requests, like streaming the output of a docker build to a different application that locates where its dependencies can be found. – Jouster500 Aug 10 '15 at 17:46

This sounds more like a job for a provisioning system à la Ansible, Chef, or Puppet to me. I know some people use those to create images if you have to stay in Docker land.
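For example, a minimal Ansible play along those lines (a sketch; the image name is illustrative, and it assumes the docker_image module that shipped with Ansible at the time):

```
# playbook.yml -- build an image from a local Dockerfile (sketch)
- hosts: localhost
  tasks:
    - name: Build the web image from the Dockerfile in this directory
      docker_image:
        name: my-web-image    # illustrative image name
        path: .               # directory containing the Dockerfile
        state: present
```

The appeal here is that the provisioning steps live in the playbook, which is portable across hosts, instead of in a host-specific shell script.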

Mykola Gurov
  • I was figuring I might have to do something like that. I haven't versed myself in their languages just yet, so I want to double-check and ask again to make sure. **So the Dockerfile attempts to build an image, sees that it can't run a command, goes back to the local system, finds that command, and relays it back to the Dockerfile, which continues building itself until no more files are needed to run the package? And you think something like Chef or Puppet would do the trick?** – Jouster500 Jul 28 '15 at 12:45