Why use bash scripts?
Bash shell scripts are sequences of commands that could be run in a terminal, plus some programming features (for example, if-statements or for loops). They let you set up your applications in a user-friendly way, since all the strange and incomprehensible commands are gathered in one file.
Here is an example of a bash script that downloads data and runs a Python app.
# Create directory
mkdir models
# Download data and deep learning model
wget https:nvjovt/model.zip
cd models && unzip model.zip && rm model.zip && cd ..
wget https:nvjovt/data.zip
unzip data.zip && rm data.zip
# Run python code
python3.7 app.py
Save this script with a .sh extension, and a simple command can run it:
bash bash_script.sh
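Since bash also supports the control flow mentioned above, here is a minimal sketch of an if-statement inside a for loop (the archive names are hypothetical, purely for illustration):

```shell
#!/usr/bin/env bash
# Loop over a list of hypothetical archive names
for archive in model data; do
    # Only (pretend to) download when the zip is not already present
    if [ ! -f "${archive}.zip" ]; then
        echo "downloading ${archive}.zip"
    fi
done
```

A pattern like this avoids re-downloading files every time the setup script is run.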
Package management && Virtual environment.
Package management and virtual environments are closely related because they both take care of your packages’ well-being. Hence, the easiest way to set up a project is to create a virtual environment and install packages inside it. But you can do both with tools that simplify your life!
The following package managers create a virtual environment, and they also take care of the dependencies between the packages! They are more comfortable to use, but this comes at a cost in time: downloading packages usually takes longer. In exchange, you can trust them to make the deployment of your project easy.
The most well-known one for Python is called pipenv. I am going to use it as an example to show you these features. You can install it with one pip command:
pip install pipenv
With a simple command, you create both the virtual environment and the package manager’s files:
pipenv shell
You will see a file named Pipfile, where the packages are saved. One of the great features of this manager is the possibility of declaring dev-packages that are installed only for development purposes.
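To make this concrete, here is a sketch of what a Pipfile can look like (the package names and versions are illustrative, not from the original project):

```toml
[[source]]
name = "pypi"
url = "https://pypi.org/simple"
verify_ssl = true

[packages]
# Regular dependencies, installed everywhere
requests = "~=2.2"

[dev-packages]
# Only installed with `pipenv install --dev`
pytest = "*"
```

The split between [packages] and [dev-packages] is what lets you keep testing tools out of your production environment.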
To download a package, use one of the following commands:
pipenv install package_name
pipenv install "package_name~=2.2"
To download it for the development phase, use:
pipenv install package_name --dev
Then, when you want to deploy the app with the same packages at the right versions, you need to lock the package dependencies with:
pipenv lock
This will create/update your Pipfile.lock. This file freezes your current packages and their dependencies. Now anyone with access to this file can recreate the same environment by running:
pipenv install --ignore-pipfile
Or, if they need the development packages:
pipenv install --dev
Tip 1) Even though it takes some time, if you don’t use such a package manager, you will probably encounter dependency issues. They are a real pain!
Tip 2) These managers also have extra features that can help you deploy an app on Heroku or with Flask! Check the following link.
Tip 3) Other package managers exist, and you might check poetry, as it has features that might fit you better than pipenv.
Why should you use Docker?
Docker is open-source software you might already have heard of. It creates containers with their own OS environment to avoid software dependency problems when deploying solutions on other machines. I encourage you to learn Docker, because you will work with it or with similar software if you work in computational engineering.
To summarize its complicated layers: Docker runs containers with the same goal as virtual machines. Each container runs its own OS environment (while sharing the host kernel). Why does the majority of people prefer Docker to virtual machines, you might ask? Because it is much easier to set up, and it is also well optimized compared to a VM!
First, you will need to learn about images, and we will see an example together! Then you will learn about Docker Hub, which follows the same idea as GitHub. In the end, you will be able to deploy an app on any Linux OS with one command! Cheers!
How to download Docker:
- For Mac users: follow this link.
- For Linux users: follow this link.
- For Windows users: follow this link.
The main commands are build and run. The first command creates the image; the second one runs it on your machine. But before playing with commands, let us first understand the Dockerfile that makes everything work!
What is a Dockerfile?
When your project is finished and ready to be deployed, you can start by creating a Dockerfile in the working directory. This special file defines the way your Docker image will be built! If your app requires a lot of memory, you can optimize this file to ensure you don’t include any useless packages or data.
As explained above, this file selects the base OS, imports the source code, and sets up the container’s startup command. Here is an example of the possible structure of a Dockerfile:
FROM ubuntu:18.04
COPY . /app
RUN make /app
CMD python /app/app.py
Keep in mind:
- FROM: You will always create an image based on another.
- COPY: Docker creates an OS from the base image, and you will need to copy the source code inside it.
- RUN: Docker allows you to run commands inside the OS to set up your container (like package installation, import datasets, DB, etc.)
- CMD: You also need to specify the starting command that will be executed to launch the application. It will be used when you run the container.
If you are interested in an example you can try at home, here is a link.
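Tying this back to the pipenv section, here is a hedged sketch of what a Dockerfile for a Python app like the one above could look like (the base image tag, file names, and flags are assumptions for illustration, not the article’s project):

```dockerfile
FROM python:3.7-slim
WORKDIR /app
# Copy the dependency files first so this layer is cached between builds
COPY Pipfile Pipfile.lock ./
# Install locked dependencies into the container's system Python
RUN pip install pipenv && pipenv install --system --ignore-pipfile
# Copy the rest of the source code
COPY . .
CMD ["python", "app.py"]
```

Installing dependencies before copying the source code means Docker can reuse the cached dependency layer whenever only your code changes.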
When your Dockerfile is ready, you can run the following command to build the image:
docker build -t name_of_your_image folder_path_of_dockerfile
If the build finishes without any error, your first image is ready!! Because you might need to deploy the solution on different machines, you should push this new image to a Docker Hub repository. That way, you can access the image with just an internet connection. Therefore, you will need to: create a Docker Hub account, create a repository, and log in from your local machine:
docker login --username=yourhubusername
Then, tag your image:
docker images
docker tag image_id username/reponame:tagname
And push it to your Docker Hub repository:
docker push account_name/repo_name:tag_name
Now, you can run only one command to launch your app on any machine that has Docker installed with:
docker run account_name/repo_name:tag_name
Maintain your images/containers:
Images:
- Display all the images :
docker image ls
- Erase an image :
docker rmi -f image_id
- Erase all the images:
docker rmi $(docker image ls -a -q)
Containers:
- Display all the containers:
docker container ls -a
- Erase a container:
docker rm -f container_id
- Erase all the containers:
docker rm $(docker container ls -a -q)
If you have reached the end, it means that you should be ready for the first week of your internship!