Thursday, 28 February 2019

Health Checking With Docker Stack

# Docker Swarm/Stack Compose file: 'docker stack deploy <stackname> -c docker-stack-compose.yml'
# docker swarm leave --force
# docker swarm init
# docker node update $(docker node inspect self --format '{{.ID}}') --label-add DrupalReady=true
# docker node inspect self --format '{{.Spec.Labels}}'

# docker secret rm postgres_password 
# echo 'SomeSecretPassword!' | docker secret create --label SOMELABEL=WHAT postgres_password -
# docker secret inspect postgres_password

#
#
#
# docker stack deploy drupalstack -c docker-stack-compose.yml 
#  docker stack ps $(docker stack ls --format '{{.Name}}') 
#

# Grab Info on all containers in our current stacks.
# docker container inspect $(docker stack ps --no-trunc $(docker stack ls --format '{{.Name}}') --format '{{.Name}}.{{.ID}}')

version: '3.1'
services:

# DRUPAL MAIN APP 
 drupal:
   image: drupal:8.2
   ports:
    - "8070:80"
   volumes:
     - webdata:/var/www/html

#   healthcheck:
#    test: ["CMD-SHELL", "curl -f http://127.0.0.1 || exit 1"]
#    interval: 20s
#    timeout: 10s
#    retries: 3
#    start_period: 10s
#    disable: true
    
   depends_on:
      - postgres
   deploy:
     replicas: 1
     placement:
        constraints: [node.labels.DrupalReady == true ]

# docker run -p 9009:9009 -v /var/run/docker.sock:/var/run/docker.sock moimhossain/viswar
# STACK VISUALISER
 viz:
  image: bretfisher/visualizer
  ports:
   - "8080:8080"
  volumes:
   - /var/run/docker.sock:/var/run/docker.sock
  deploy:
   placement:
    constraints: [node.role == manager]

# DB 
 postgres:
   image:
    postgres:9.6
   
   healthcheck:
    test: ["CMD-SHELL", "pg_isready -U postgres || exit 1"]
    interval: 30s
    timeout: 10s
    retries: 3
#   start_period: 10s

   secrets:
    - postgres_password

   environment:
    - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
    - POSTGRES_USER=drupaluser
    - POSTGRES_DB=egdbase

   volumes:
       - postgresdata:/var/lib/postgresql/data

   deploy:
    restart_policy:
     condition: on-failure
     delay: 5s
     max_attempts: 3


volumes:
 postgresdata:
 webdata:
secrets:
 postgres_password:
  external: true

Docker Stack Script with 'Secrets'

Execute the docker stack compose script with the command 'docker stack deploy <stackname> -c docker-stack-compose.yml'


# Docker Swarm/Stack Compose file: 'docker stack deploy <stackname> -c docker-stack-compose.yml'
# docker swarm leave --force
# docker swarm init
# docker node update $(docker node inspect self --format '{{.ID}}') --label-add DrupalReady=true
# docker node inspect self --format '{{.Spec.Labels}}'

# docker secret rm postgres_password 
# echo 'SomeSecretPassword!' | docker secret create --label SOMELABEL=WHAT postgres_password -
# docker secret inspect postgres_password

#
#
#
# docker stack deploy drupalstack -c docker-stack-compose.yml 
#  docker stack ps $(docker stack ls --format '{{.Name}}') 
#

# Grab Info on all containers in our current stacks.
# docker container inspect $(docker stack ps --no-trunc $(docker stack ls --format '{{.Name}}') --format '{{.Name}}.{{.ID}}')

version: '3.1'
services:

# DRUPAL MAIN APP 
 drupal:
   image: drupal:8.2
   ports:
    - "8070:80"
   volumes:
     - webdata:/var/www/html
   depends_on:
      - postgres
   deploy:
     replicas: 1
     placement:
        constraints: [node.labels.DrupalReady == true ]

# docker run -p 9009:9009 -v /var/run/docker.sock:/var/run/docker.sock moimhossain/viswar
# STACK VISUALISER
 viz:
  image: bretfisher/visualizer
  ports:
   - "8080:8080"
  volumes:
   - /var/run/docker.sock:/var/run/docker.sock
  deploy:
   placement:
    constraints: [node.role == manager]

# DB 
 postgres:
   image:
    postgres:9.6

   secrets:
    - postgres_password

   environment:
    - POSTGRES_PASSWORD_FILE=/run/secrets/postgres_password
    - POSTGRES_USER=drupaluser
    - POSTGRES_DB=egdbase

   volumes:
       - postgresdata:/var/lib/postgresql/data

   deploy:
    restart_policy:
     condition: on-failure
     delay: 5s
     max_attempts: 3


volumes:
 postgresdata:
 webdata:
secrets:
 postgres_password:
  external: true

Wednesday, 27 February 2019

Installing a Docker Registry Cache on your Raspberry Pi



In this example - our local Raspberry Pi has the IP address 192.168.1.194

On your Pi – first install ‘Go version 1.8’ -

cd ~

wget https://storage.googleapis.com/golang/go1.8.linux-armv6l.tar.gz

sudo tar -C /usr/local -xzf go1.8.linux-armv6l.tar.gz

export PATH=$PATH:/usr/local/go/bin

Now Clone, build and install the Docker Registry project

git clone https://github.com/docker/distribution.git

cd distribution

go get ./...


GOOS=linux GOARCH=arm make binaries

After making the binaries – you should issue the command

ls bin/

and you should see the files: digest, registry, registry-api-descriptor-template


Before running the service you need to set up a config file to tell the registry service where to store images, and to tell it to behave as a pull-through proxy (cache) for images from the main Docker registry (nginx:latest, httpd:alpine, etc.).

Create ~/distribution/bin/registry-config.yml and copy the following content into it:

version: 0.1
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
 remoteurl: https://registry-1.docker.io
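The same file can be created non-interactively with a heredoc. This sketch writes it to a temporary directory instead of ~/distribution/bin so it can be tried anywhere:

```shell
# write the registry config via heredoc (quoted EOF: no variable expansion)
demo_dir=$(mktemp -d)
cat > "$demo_dir/registry-config.yml" <<'EOF'
version: 0.1
storage:
  cache:
    blobdescriptor: inmemory
  filesystem:
    rootdirectory: /var/lib/registry
http:
  addr: :5000
proxy:
  remoteurl: https://registry-1.docker.io
EOF

# confirm the proxy section is present
grep 'remoteurl' "$demo_dir/registry-config.yml"
```

For the real setup, point the output path at ~/distribution/bin/registry-config.yml instead of the temp directory.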


Next, make a directory to store the images on your Pi

sudo mkdir -p /var/lib/registry
sudo chown $USER /var/lib/registry


Finally start up your Registry Server.

cd ~/distribution/bin/
./registry serve registry-config.yml




On your Desktop/Laptop (Docker Client)



Under the Docker Preferences setup your PI server as a
‘Registry Mirror’




Apply and Restart your Docker Application



Testing


On your Docker client desktop/laptop – open up a browser and after making sure you’re on the same subnet as your Pi – go to the location
http://192.168.1.194:5000/v2/


You should notice the empty ‘{ }’ JSON response on your browser.





If you have a shell open where your Pi is running the service – you’ll see the request logged in its DEBUG output.




Now let’s pull some images from the main Docker Repo.

Using a terminal – on your Client machine (Desktop or Laptop)

Issue the following commands

docker image rm hello-world  

Don’t worry if you get an error on this command.

docker image pull hello-world


When the client starts to pull the image – you should notice DEBUG information on your Pi terminal – indicating that the image is being cached on its system.


After the ‘pull’ command has completed – issue another ‘rm’ command

docker image rm hello-world  

And then another ‘pull’
docker image pull hello-world


On the Pi console – you will notice that the image is now being pulled from the local Pi directory rather than over the internet.


On your Docker Client machine (Desktop/Laptop) – issue the _catalog service on the browser (http://192.168.1.194:5000/v2/_catalog)

You should notice – a list of cached images stored on the Pi.

 


 Finally


On your Pi - you may want to mount a USB drive at /var/lib/registry

sudo mount /dev/sda1 /var/lib/registry/ -o umask=000    

#sudo umount /var/lib/registry/
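To make that USB mount persist across reboots, an /etc/fstab entry along these lines could be added (the device name and the FAT-style umask option are assumptions - adjust for your drive and its filesystem):

```
/dev/sda1  /var/lib/registry  auto  defaults,umask=000  0  0
```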





Monday, 25 February 2019

Voting Docker Swarm

docker swarm init
docker swarm join-token manager
docker swarm join-token worker


docker network rm frontend backend 
#docker service rm $(docker service ls -q)
docker service rm voteservice resultsservice redis workerservice db

docker network create --driver overlay backend
docker network create --driver overlay frontend

docker service create --name voteservice --replicas 2 --publish 8080:80 --network frontend dockersamples/examplevotingapp_vote:before
docker service create --name resultsservice --replicas 1 --network backend --publish 8081:80 dockersamples/examplevotingapp_result:before
docker service create --name redis --replicas 1 --network frontend redis:3.2
docker service create --name workerservice --replicas 1  --network frontend --network backend dockersamples/examplevotingapp_worker
docker service create --name db --replicas 1 --network backend  \
--mount type=volume,source=voting-db-data,target=/var/lib/postgresql/data \
postgres:9.4

Sunday, 24 February 2019

Setting up SSH to Google Cloud VM Instance


On PC
ssh-keygen -t rsa -f ~/.ssh/[KEY_FILENAME] -C [USERNAME]
Eg. ssh-keygen -t rsa -b 4096 -f ~/.ssh/userjohnnykey -C johnny

Produces private and public key files .ssh/[KEY_FILENAME].pub  AND .ssh/[KEY_FILENAME]
Eg. .ssh/userjohnnykey.pub and .ssh/userjohnnykey

make sure to 'chmod 600 .ssh/[KEY_FILENAME]'  (private key file)
Eg. chmod 600 .ssh/userjohnnykey
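Key-file permissions trip up ssh more than anything else. This sketch (the file name is a stand-in for your real key) shows how to verify the 600 mode from the shell:

```shell
# create a stand-in private-key file and restrict it to owner read/write
keydir=$(mktemp -d)
touch "$keydir/userjohnnykey"
chmod 600 "$keydir/userjohnnykey"

# GNU stat prints the octal mode - ssh refuses private keys more permissive than this
stat -c '%a' "$keydir/userjohnnykey"
```

(stat -c is the GNU/Linux form; on macOS the equivalent is stat -f '%Lp'.)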

On Google Cloud - 

Goto 'COMPUTE ENGINE' / 'METADATA' / 'SSH Keys'
Select 'Edit'  / 'Add Item'
PASTE in the public key .ssh/userjohnnykey.pub
(Google will add the key to the VM's .ssh/authorized_keys file)

To SSH from your PC

ssh johnny@google-instance-35.189.72.232 -i .ssh/userjohnnykey

Or set up in .ssh/config 

Host controller pi0 pimanager
     HostName google-instance-35.189.72.232
     Port 22
     User johnny
     IdentityFile ~/.ssh/userjohnnykey

Saturday, 23 February 2019

PHP Callbacks

In PHP, function references are passed as strings when used as parameters - so if you want to reference a class method, wrap it in an array whose first entry is the object the method belongs to and whose second entry is the string name of the method you want to call.


        public function buildQueryResult($query) {
                // [$this, 'buildSQLQueryResult'] is our callback!
                return ($this->cachingManager->get($query, [$this, 'buildSQLQueryResult']));
        }

        public function buildSQLQueryResult($query) {
                $res = array();
                $result = $this->mysqli->query($query);
                if ($result) {
                        while ($row = $result->fetch_assoc()) {
                                array_push($res, $row);
                        }
                }
                return $res;
        }

Friday, 22 February 2019

Docker Install Script (Selectable Docker Version) - Useful for installing on Pi Zero

#!/bin/sh
# Downgraded PiZero Docker installation Script
# Thanks to : - https://withblue.ink/2017/12/31/yes-you-can-run-docker-on-raspbian.html
echo 'building downgrade conf file' 
cat <<- EOF > /tmp/docker-ce.txt
Package: docker-ce
Pin: version 18.06.*
Pin-Priority: 1000
EOF

# Update/Add  `/etc/apt/preferences.d/docker-ce` to:
sudo cp /tmp/docker-ce.txt /etc/apt/preferences.d/docker-ce

curl -sSL https://get.docker.com > /tmp/ins.sh && chmod +x /tmp/ins.sh
/tmp/ins.sh


sudo gpasswd -a $USER docker
newgrp docker

# Install required packages for docker-compose
sudo apt update
sudo apt install -y python python-pip

# Install Docker Compose from pip
sudo pip install docker-compose


docker run --rm -it hello-world
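The pin-file heredoc at the top of this script can be sanity-checked on its own. This sketch writes to a throwaway temp file instead of /tmp/docker-ce.txt:

```shell
# write the apt pin file via heredoc, exactly as the script does
pinfile=$(mktemp)
cat <<- EOF > "$pinfile"
Package: docker-ce
Pin: version 18.06.*
Pin-Priority: 1000
EOF

# confirm the pin is in place before copying it into /etc/apt/preferences.d
grep 'Pin-Priority' "$pinfile"
```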

Thursday, 21 February 2019

SSH with Public/Private Keys


On Your MAC/LINUX CLIENT PC


mkdir -p ~/.ssh
chmod 700 ~/.ssh
# ssh-keygen produces two files. An empty pass phrase private key, 'piuser_rsa' and the public key 'piuser_rsa.pub'
ssh-keygen -b 4096 -t rsa -N '' -f .ssh/piuser_rsa
touch ~/.ssh/config
# Edit the ssh config file
vi ~/.ssh/config
# Insert a CONFIG entry in config for your particular server and save

Host controller pi0 pimanager
     HostName 192.168.1.81
     Port 22
     User pi

     IdentityFile ~/.ssh/piuser_rsa

Once completed (after the server-side setup below) - we can now ssh into the Pi server (192.168.1.81) with the command 'ssh pi0' or 'ssh pimanager' instead of ssh pi@192.168.1.81 -i .ssh/piuser_rsa


On Your Server Raspberry Pi (eg. 192.168.1.81)


mkdir -p ~/.ssh
touch ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
chmod 700 ~/.ssh
vi ~/.ssh/authorized_keys

# Insert your Client's PUBLIC KEY (eg. 'piuser_rsa.pub') AT THE END OF THE 'authorized_keys' FILE AND SAVE

Debugging SSH daemon


Edit file /etc/ssh/sshd_config
# Logging
SyslogFacility AUTH

LogLevel DEBUG


Peek at logs on Daemon 

tail -lf /var/log/auth.log 



Debugging SSH client


ssh -vvvv pi@192.168.1.81

Docker MAVEN

# Note: '$(shell pwd)' is Makefile syntax - in a plain shell use $(pwd) or $PWD
docker run -it --rm -v $(pwd)/target:/usr/src/app/target myrep/mvn-builder package -T 1C -o -Dmaven.test.skip=true

docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/app -w /usr/src/app maven:3.2-jdk-7 mvn clean install



docker run -it --rm --name my-maven-project -v "$PWD":/usr/src/app -v "$HOME"/.m2:/root/.m2 -w /usr/src/app maven:3.2-jdk-7 mvn clean install

Wednesday, 20 February 2019

Docker install script (downgrade for Rpi0)

#!/bin/sh
# Raspberry Pi installation Script
# Thanks to : - https://withblue.ink/2017/12/31/yes-you-can-run-docker-on-raspbian.html

cat > /tmp/docker-ce.txt <<EOF
Package: docker-ce
Pin: version 18.06.*
Pin-Priority: 1000
EOF
# Update/Add  `/etc/apt/preferences.d/docker-ce` to:
sudo cp /tmp/docker-ce.txt /etc/apt/preferences.d/docker-ce

# Install some required packages first
sudo apt update
sudo apt install -y \
     apt-transport-https \
     ca-certificates \
     curl \
     gnupg2 \
     software-properties-common

# Get the Docker signing key for packages
curl -fsSL https://download.docker.com/linux/$(. /etc/os-release; echo "$ID")/gpg | sudo apt-key add -

# Add the Docker official repos
echo "deb [arch=armhf] https://download.docker.com/linux/$(. /etc/os-release; echo "$ID") \
     $(lsb_release -cs) stable" | \
    sudo tee /etc/apt/sources.list.d/docker.list

# Install Docker
sudo apt update
sudo apt install docker-ce

#sudo curl -sSL http://get.docker.com | sh
sudo gpasswd -a $USER docker
newgrp docker

sudo systemctl enable docker
sudo systemctl start docker

# Install required packages for docker-compose
sudo apt update
sudo apt install -y python python-pip

# Install Docker Compose from pip
sudo pip install docker-compose

docker container run --rm -it hello-world

COMMAND LINE INSTALL FOR DOCKER CE

sudo curl -sSL http://get.docker.com | sh
sudo gpasswd -a $USER docker
newgrp docker


Problem - 18.09 Docker service is failing on Rpi Zero

systemctl status docker.service
● docker.service - Docker Application Container Engine
   Loaded: loaded (/lib/systemd/system/docker.service; enabled; vendor preset: enabled)
   Active: failed (Result: core-dump) since Wed 2019-02-20 09:25:55 GMT; 1min 24s ago
     Docs: https://docs.docker.com
  Process: 2028 ExecStart=/usr/bin/dockerd -H unix:// (code=dumped, signal=SEGV)
 Main PID: 2028 (code=dumped, signal=SEGV)
      CPU: 471ms

Feb 20 09:25:52 p1 systemd[1]: Starting Docker Application Container Engine...
Feb 20 09:25:55 p1 systemd[1]: docker.service: Main process exited, code=dumped, status=11/SEGV
Feb 20 09:25:55 p1 systemd[1]: Stopped Docker Application Container Engine.
Feb 20 09:25:55 p1 systemd[1]: docker.service: Unit entered failed state.
Feb 20 09:25:55 p1 systemd[1]: docker.service: Failed with result 'core-dump'.


pi@p1:~ $ systemctl status containerd.service
● containerd.service - containerd container runtime
   Loaded: loaded (/lib/systemd/system/containerd.service; enabled; vendor preset: enabled)
   Active: failed (Result: core-dump) since Wed 2019-02-20 09:25:53 GMT; 1min 52s ago
     Docs: https://containerd.io
  Process: 2031 ExecStart=/usr/bin/containerd (code=dumped, signal=ILL)
  Process: 2027 ExecStartPre=/sbin/modprobe overlay (code=exited, status=0/SUCCESS)
 Main PID: 2031 (code=dumped, signal=ILL)
      CPU: 262ms

Feb 20 09:25:52 p1 systemd[1]: Starting containerd container runtime...
Feb 20 09:25:52 p1 systemd[1]: Started containerd container runtime.
Feb 20 09:25:53 p1 systemd[1]: containerd.service: Main process exited, code=dumped, status=4/ILL
Feb 20 09:25:53 p1 systemd[1]: containerd.service: Unit entered failed state.
Feb 20 09:25:53 p1 systemd[1]: containerd.service: Failed with result 'core-dump'.


The Fix (go down a version)

Update/Add  `/etc/apt/preferences.d/docker-ce` to:

Package: docker-ce
Pin: version 18.06.*
Pin-Priority: 1000


sudo apt-get install docker-ce

Saturday, 16 February 2019

nslookup and ping on non 'bridge' networks

docker network create --driver bridge alpine-net
docker run -dit --name alpine1 --network alpine-net alpine ash
docker run -dit --name alpine2 --network alpine-net alpine ash

docker run -dit --name alpine3 --network bridge alpine ash
docker run -dit --name alpine4 --network bridge alpine ash

note - you don't need the --driver option for the default 'bridge' network

alpine1 and alpine2 can ping each other by name 

$ docker container attach alpine1
# nslookup alpine2
# ping alpine2


$ docker container attach alpine3
# nslookup alpine4   - Fails
# (automatic name resolution between containers only works on user-defined
#  networks, not on the default 'bridge' network)





Detaching from a container

You don't like ^P^Q? No problem!
You can change the sequence with docker run --detach-keys.
This can also be passed as a global option to the engine.

Start a container with a custom detach command:
$ docker run -ti --detach-keys ctrl-x,x jpetazzo/clock

Detach by hitting ^X x. (This is ctrl-x then x, not ctrl-x twice!)
Check that our container is still running:

$ docker ps -l

Friday, 15 February 2019

docker-compose for Drupal Website (Postgres)

version: '3.0'
services:
 drupal:
   image: drupal:latest
   ports: 
    - "8080:80"
   volumes:
     - webdata:/var/www/html
 postgres:
   image: 
    postgres:latest 
   environment:
    - POSTGRES_PASSWORD=password
    - POSTGRES_USER=user
    - POSTGRES_DB=egdbase
   volumes:
       - postgresdata:/var/lib/postgresql/data
volumes:
 postgresdata:
 webdata:

Thursday, 14 February 2019

Docker 'Round Robin' DNS Demo

# DNS 'Round Robin' Demo 

# Create Network 'searchnetwork'
docker network create searchnetwork


# Names not really needed since we're setting net-alias to the common DNS entry 'mysearchengine'
# We're using container names just to reference for stopping and starting.

docker container run --name searchserv1 --detach --network searchnetwork --net-alias mysearchengine elasticsearch:2

docker container run --name searchserv2 --detach --network searchnetwork --net-alias mysearchengine elasticsearch:2

docker container ls
docker container run --rm --network searchnetwork alpine nslookup mysearchengine

# Now test DNS 'Round Robin' by repeated search calls with the 'net-alias' , 'mysearchengine'
# Elasticsearch assigns its own name (under the 'name' field) to each instance, so you can see which one answered.

docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200

# Stop one of the servers for the next test
docker container stop searchserv1
docker container run --rm --network searchnetwork alpine nslookup mysearchengine

# Now notice only one search service (searchserv2) is being 'resolved' 
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200
docker container run --rm --network searchnetwork centos curl -s mysearchengine:9200

Docker container --rm option to clean up and remove containers from process list.

docker container run --rm -it --name mycentos centos:7 bash
docker container run --rm -it --name myubun ubuntu:14.04 bash


After exiting each container, the '--rm' option automatically removes the container from the
process list.

I.e. you will not see these containers when you run a 'docker container ps -a' command.

docker network commands

docker network ls
docker network inspect <NETWORKNAME>
docker network create <NETWORKNAME>  [--driver <drivername>]   # default driver is 'bridge'

# attach an existing running container to a network - (containers can be on more than one network)
docker network connect <NETWORKNAME> <CONTAINERNAME> 

# remove a running container from a network
docker network disconnect <NETWORKNAME> <CONTAINERNAME>

Reference other containers within containers by container name.

docker network create johnnynet
docker container run --detach --network johnnynet --name nginx666 nginx:alpine
docker container run --network johnnynet --name nginx777 nginx:alpine ping nginx666
docker network inspect johnnynet  

Note: The last command should be run on another host terminal.
Using alpine versions since main version no longer supports 'ping' command.


Docker MySQL starting up with random ROOT password

docker container run --detach --env MYSQL_RANDOM_ROOT_PASSWORD=yes --name johnsmysql --publish 3307:3306 mysql


The random password can be found by searching the logs of the mysql container.

docker container logs johnsmysql



docker container logs --follow johnsmysql 2>&1  | egrep -o 'PASSWORD\s*:\s*\w*$' 
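That egrep pattern can be exercised against a canned log line, no running container needed (the password shown here is made up):

```shell
# a sample line in the shape mysql uses when logging its generated root password
log_line='2019-02-14T10:00:00 [Note] GENERATED ROOT PASSWORD: Abc123xyz'

# -o prints only the matching part: the PASSWORD: value at end of line
echo "$log_line" | grep -Eo 'PASSWORD\s*:\s*\w*$'
```

(\s and \w inside -E patterns are GNU grep extensions; on other greps spell them out as [[:space:]] and [[:alnum:]_].)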



Redirection docker logs


docker container run --detach --env MYSQL_RANDOM_ROOT_PASSWORD=yes --name johnsmysql --publish 3307:3306 mysql

# Search for GENERATED PASSWORD from mySQL logs
docker container logs --follow johnsmysql 2>&1 | grep GENERATED
2 refers to the second file descriptor of the process, i.e. stderr.
> means redirection.
&1 means the target of the redirection should be the same location as the first file descriptor, i.e. stdout.
So 2>&1 redirects stderr to wherever stdout currently points - here, into the pipe, so grep can search both streams. Likewise, > /dev/null 2>&1 first redirects stdout to /dev/null and then redirects stderr there as well, effectively silencing all output (regular or error) from a command.
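The stream merge can be demonstrated without Docker at all; this sketch uses a throwaway shell function (made up for illustration) that writes to both streams:

```shell
# speak() writes one line to stdout and one to stderr
speak() {
  echo "to stdout"
  echo "to stderr" >&2
}

# without 2>&1 the pipe only carries stdout, so grep would count 1 line;
# with 2>&1 stderr is merged into stdout and grep counts both
speak 2>&1 | grep -c "to std"
```

Run as-is this prints 2; drop the 2>&1 and it prints 1, with "to stderr" leaking straight to the terminal.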

Docker nginx

# Stop all containers
docker container stop $(docker container ps -aq)
# remove all containers
docker container rm $(docker container ps -aq)
# run nginx container - exposing web service (80) on local port 8082
docker container run --detach --volume $PWD/public_html:/usr/share/nginx/html --name mynginx --publish 8082:80 nginx

Friday, 8 February 2019

Publish your first Docker Image to Docker Hub

Now that you are familiar with Docker from my previous post, let's dive in and explore more.
You already know how to run a container and pull an image; now we should publish our own image for others to use. Why should you have all the fun? ;)
So what do we need to publish our Docker image?
  • A Dockerfile
  • Your app
Yeah, that’s it.

Why build your app the Docker way?

Historically you had to ship your app (maybe a Python app) and install the Python runtime (and all its dependencies) on every machine. That creates a situation where the environment on your machine has to be just so in order for your app to run as expected - and the same goes for the server where you deploy it. With Docker, you don't need to install any environment. You can just grab a portable Python runtime as an image, no installation necessary. Then your build can include the base Python image right alongside your app code, ensuring that your app, its dependencies, and the runtime all travel together. These portable images are defined by something called a Dockerfile.
The Dockerfile describes the environment inside your container: which ports will be exposed to the outside world, which files you want to "copy in" to that environment, and so on. After doing that, you can expect that the build of your app defined in this Dockerfile will behave exactly the same wherever it runs.
So let’s create a directory and make a Dockerfile.
FROM python
WORKDIR /app
ADD . /app
RUN pip install -r requirements.txt
EXPOSE 80
ENV NAME world
CMD ["python", "app.py"]
So you have your Dockerfile. You can see the syntax is pretty easy and self-explanatory.
Now we need our app. Let’s create one, a python app ;)
app.py
from flask import Flask
import os
import socket

app = Flask(__name__)

@app.route("/")
def hello():
    html = "<h3>Hello {name}!</h3>" \
           "<b>Hostname:</b> {hostname}<br/>"
    return html.format(name=os.getenv("NAME", "world"), hostname=socket.gethostname())

if __name__ == "__main__":
    app.run(host='0.0.0.0', port=80)
requirements.txt
Flask
Now you have everything you need in order to proceed. Let's build the app.

Let’s Build it

ls will now show you this
$ ls
app.py requirements.txt Dockerfile
Now create the image.
docker build -t imagebuildinginprocess .
Where is your image? It’s in your local image registry.
$ docker images
REPOSITORY               TAG     IMAGE ID       CREATED     SIZE
imagebuildinginprocess  latest  4728a04a9d39  14 minutes ago  694MB

Let’s Run it too

docker run -p 4000:80 imagebuildinginprocess
What we did here is map port 4000 on the host to the container's exposed port 80. You should see a notice that Python is serving your app at http://0.0.0.0:80. But that message comes from inside the container, which doesn't know you mapped port 80 of that container to 4000, making the effective URL http://localhost:4000. Go to that URL in a web browser to see the content served up on a web page, including the "Hello World" text and the container ID.

Let’s Share it :D

We will push our built image to a registry so that we can use it anywhere. The Docker CLI uses Docker's public registry by default.
  • Log in to the Docker public registry on your local machine. (If you don't have an account, create one at cloud.docker.com)
docker login
  • Tag the image: this is more like naming a version of the image. It's optional, but recommended, as it helps in maintaining versions (much like ubuntu:16.04 and ubuntu:17.04)
docker tag imagebuildinginprocess rusrushal13/get-started:part1
  • Publish the image: upload your tagged image to the repository. Once complete, the results of this upload are publicly available. If you log in to Docker Hub, you will see the new image there, with its pull command.
docker push rusrushal13/get-started:part1
Yeah, that's it, you're done. Now you can go to Docker Hub and check it out too ;). You've published your first image.
I found this GitHub repository really awesome - have a look at it: https://github.com/jessfraz/dockerfiles
Do give me feedback for improvement ;)