Tuesday, August 30, 2022

More Linux Commands

 Linux Commands Part-2

11. find command

We can use the find command to locate files within a given directory.

The command find /home/ -name notes.txt will search for a file called notes.txt within the home directory and its subdirectories.

find <directory-name>/ -name <filename>

To find files in the current directory, use find . -name test.txt
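A quick sketch of find in action (the directory and file names here are made up for illustration):

```shell
# Make a scratch directory with a nested file, then locate it by name.
dir=$(mktemp -d)
mkdir -p "$dir/docs"
touch "$dir/docs/notes.txt"

find "$dir" -name notes.txt    # prints $dir/docs/notes.txt
find "$dir" -iname NOTES.TXT   # -iname ignores case

rm -r "$dir"
```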


12. grep command

This helps us search through all the text in a given file.

grep mail app.properties -> This will search for the word mail in the app.properties file. 
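A small self-contained sketch (the file name and contents are illustrative):

```shell
# Create a small properties file to search.
dir=$(mktemp -d)
printf 'mail.host=smtp.example.com\napp.name=demo\n' > "$dir/app.properties"

grep mail "$dir/app.properties"      # lines containing "mail"
grep -n mail "$dir/app.properties"   # -n prefixes matching line numbers
grep -i MAIL "$dir/app.properties"   # -i makes the match case-insensitive

rm -r "$dir"
```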


13. df command

The df command gives us a report on the system’s disk space usage, shown per filesystem in 1K blocks along with a usage percentage.

If we want to see the report in megabytes, we can use df -m
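The common variants side by side (output depends on your system's filesystems):

```shell
df       # per-filesystem usage in 1K blocks, with a Use% column
df -m    # the same report in megabytes
df -h    # human-readable units (K, M, G)
```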


14. du (Disk Usage) command

The du command helps us check how much space a file or a directory takes. By default, the disk usage summary shows disk block counts instead of the usual size format.

If we want to see it in human-readable units (kilobytes, megabytes, gigabytes), add the -h flag to the command line.
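A runnable sketch (the 2 KB file is created just to have something to measure):

```shell
# Create a directory holding a 2 KB file to measure.
dir=$(mktemp -d)
head -c 2048 /dev/zero > "$dir/data.bin"

du "$dir"      # default: sizes in disk blocks
du -h "$dir"   # -h: human-readable units
du -sh "$dir"  # -s: a single summary line for the whole directory

rm -r "$dir"
```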


15. head command

The head command helps us view the first lines of any text file. It shows only the first ten lines by default, but we can change this number.

If we only want to see the first 40 lines, the command is: head -n 40 config.yml
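To see it work, we can generate a numbered stand-in file (config.yml is just a name here):

```shell
# A stand-in file with 100 numbered lines.
dir=$(mktemp -d)
seq 1 100 > "$dir/config.yml"

head "$dir/config.yml"         # first 10 lines (the default)
head -n 40 "$dir/config.yml"   # first 40 lines

rm -r "$dir"
```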


16. tail command

A command similar to head, but instead of showing the first lines, the tail command displays the last ten lines of a text file.

We can modify this, for example tail -1000f test.log prints the last 1000 lines of test.log and then keeps following the file as new lines are appended (the f stands for follow).

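A runnable sketch with a generated log file (test.log is just a name here):

```shell
dir=$(mktemp -d)
seq 1 100 > "$dir/test.log"

tail "$dir/test.log"        # last 10 lines (the default)
tail -n 5 "$dir/test.log"   # last 5 lines
# tail -f "$dir/test.log"   # -f would keep following new lines (Ctrl-C to stop)

rm -r "$dir"
```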


17. diff command

The diff command helps us find the difference between two given files. It shows the lines that do not match.

diff test.properties test_2.properties
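A self-contained sketch (the property files are invented for illustration). Note that diff's exit status itself signals whether the files differ:

```shell
dir=$(mktemp -d)
printf 'host=a\nport=80\n' > "$dir/test.properties"
printf 'host=a\nport=90\n' > "$dir/test_2.properties"

# diff exits 0 when the files match and 1 when they differ,
# so the trailing "|| true" keeps a script going after a mismatch.
diff "$dir/test.properties" "$dir/test_2.properties" || true
diff -u "$dir/test.properties" "$dir/test_2.properties" || true  # unified format

rm -r "$dir"
```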


18. chmod (change mode) command

This command sets the permissions of files or directories.

Syntax: chmod [options] <permissions> <filename>

Example: chmod +x <filename>


Suppose we want to set permissions on a file so that the user can read, write, and execute it; members of the group can read and execute it; and others may only read it.

The command will look like:  chmod u=rwx,g=rx,o=r <filename>

Here u, g, o stand for "user", "group", and "other". The equals sign ("=") means "set the permissions exactly equivalent to the passed value".

The letters "r", "w", and "x" stand for "read", "write", and "execute".

The octal permissions notation for the above command is chmod 754 <filename>

The digits 7, 5, and 4 represent the permissions for the user, group, and others, in that order. Each digit is a sum of the numbers 4, 2, 1, and 0:


4 stands for "read",

2 stands for "write",

1 stands for "execute", and

0 stands for "no permission."


Now, we could say that 7 is the combination of permissions 4+2+1 (read, write, and execute for user), 5 is 4+0+1 (read, no write, and execute for groups), and 4 is 4+0+0 (read, no write, and no execute for others).
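The equivalence of the two notations can be checked directly (the file name is illustrative; stat -c %a is the GNU form, on macOS/BSD the flag is stat -f %Lp):

```shell
dir=$(mktemp -d)
touch "$dir/script.sh"

chmod u=rwx,g=rx,o=r "$dir/script.sh"   # symbolic form
stat -c %a "$dir/script.sh"             # prints 754

chmod 754 "$dir/script.sh"              # equivalent octal form
stat -c %a "$dir/script.sh"             # still 754

rm -r "$dir"
```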


19. chown (Change Owner) command

All files in Linux are owned by a specific user. This command helps us change or transfer the ownership of a file to a given username.

chown gauravkumar test.txt will transfer the ownership of the file test.txt to gauravkumar.


20. ping command

This command helps us check the connectivity status of a server. For example, with ping devopswithgaurav.blogspot.com, the command will check whether you’re able to connect to devopswithgaurav and also measure the response time.


Saturday, August 27, 2022

About Linux

 Linux Basic Commands Part-1


What is CLI?


CLI stands for Command Line Interface. It is a program in which the user runs text commands instructing the system to perform specific tasks.

The shell is the user interface responsible for processing all commands typed on the CLI. It reads and interprets the commands given by the user and instructs the OS to perform the requested tasks.


Among the many types of shell, the most popular ones are bash (for Linux and macOS) and the Windows shell, CMD.exe or the Command Prompt (for Windows). Linux’s shell is case sensitive.


BASH

Bash stands for Bourne Again SHell and was developed by the Free Software Foundation.


Below are the List of Basic Linux commands:


1. cd command

Helps to navigate through the Linux files and directories.


  • cd .. (with two dots) to move one directory up
  • cd to go straight to the home folder
  • cd - (with a hyphen, separated by a space) to move to your previous directory
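The moves above can be walked through in a throwaway directory tree (cdtest is an invented name):

```shell
mkdir -p cdtest/a/b
cd cdtest/a/b
cd ..               # up one level: now in cdtest/a
cd - > /dev/null    # back to the previous directory: cdtest/a/b
cd ../../..         # back to where we started
rm -r cdtest
```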


2. mkdir command

Use the mkdir command to make a new directory.


mkdir -p :  It creates the parent directory first if it doesn't exist. If it already exists, it will not print an error message and will move on to create the sub-directories. This option is most helpful when you don't know whether a directory already exists or not.
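A minimal sketch (the project/src/main path is invented):

```shell
dir=$(mktemp -d)

mkdir -p "$dir/project/src/main"   # creates the whole chain at once
mkdir -p "$dir/project/src/main"   # running it again: no error, no-op

ls "$dir/project/src"   # prints: main

rm -r "$dir"
```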


3. rmdir command

If you need to delete a directory, use the rmdir command. However, rmdir only allows you to delete empty directories.


4. rm command

The rm command is used to delete files. To delete a directory along with the contents within it (as an alternative to rmdir, which only works on empty directories), use rm -r.
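The contrast between rmdir and rm -r can be demonstrated directly (directory names are illustrative):

```shell
dir=$(mktemp -d)
mkdir "$dir/empty"
mkdir "$dir/full" && touch "$dir/full/file.txt"

rmdir "$dir/empty"                 # fine: the directory is empty
rmdir "$dir/full" 2>/dev/null \
  || echo "rmdir refuses a non-empty directory"
rm -r "$dir/full"                  # removes the directory and its contents

rm -r "$dir"
```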


5. touch command

The touch command allows you to create a blank new file through the Linux command line.


6. cp command

This command is used to copy files from the current directory to a different directory.


7. mv command

The primary use of the mv command is to move files, although it can also be used to rename them.
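Both uses in one sketch (file and directory names are invented):

```shell
dir=$(mktemp -d)
touch "$dir/old.txt"
mkdir "$dir/archive"

mv "$dir/old.txt" "$dir/new.txt"    # rename in place
mv "$dir/new.txt" "$dir/archive/"   # move into another directory
ls "$dir/archive"                   # prints: new.txt

rm -r "$dir"
```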


8. cat command

cat (stands for concatenate)

It is used to list the contents of a file on the standard output (stdout).
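The concatenate part of the name becomes clear with more than one file (names and contents are illustrative):

```shell
dir=$(mktemp -d)
printf 'line 1\n' > "$dir/a.txt"
printf 'line 2\n' > "$dir/b.txt"

cat "$dir/a.txt"                                 # print one file
cat "$dir/a.txt" "$dir/b.txt"                    # concatenate two files in order
cat "$dir/a.txt" "$dir/b.txt" > "$dir/both.txt"  # combine them into a new file

rm -r "$dir"
```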


9. ls command

The ls command is used to view the contents of a directory.


Variations which we can use with the ls command:

  • ls -R will list all the files in the sub-directories as well
  • ls -a will show the hidden files
  • ls -al will list the files and directories with detailed information like the permissions, size, owner, etc.
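The variations above in a runnable sketch (file names are invented; note that files starting with a dot are the "hidden" ones):

```shell
dir=$(mktemp -d)
touch "$dir/visible.txt" "$dir/.hidden"
mkdir "$dir/sub" && touch "$dir/sub/inner.txt"

ls "$dir"      # visible.txt and sub (.hidden is not shown)
ls -a "$dir"   # includes .hidden (plus . and ..)
ls -R "$dir"   # recurses into sub/ as well
ls -al "$dir"  # long listing: permissions, owner, size, date

rm -r "$dir"
```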


10.  pwd command

This command is used to find out the path of the current working directory.

Docker-Tutorial

 Docker Basics


Docker is a container management service. The keywords of Docker are build, share, and run anywhere. The whole idea of Docker is for developers to easily develop applications, ship them into containers, and then deploy them anywhere.

What are containers?

Containers are completely isolated environments: they can have their own processes or services, their own networking interfaces, and their own mounts, just like virtual machines, except that they all share the same operating system kernel.


How does Docker work? 

An OS consists of two things: an OS kernel and a set of software. The OS kernel is responsible for interacting with the underlying hardware. While the kernel remains the same, it’s the software that makes these operating systems different. For example, CentOS, Ubuntu, and Rocky all use the Linux kernel internally, but the installed software (user interfaces, drivers, compilers, file managers, developer tools) differentiates them from each other. In a similar fashion, Docker can run any flavour of OS on top of it: each Docker container carries only its additional software, while Docker utilizes the underlying kernel of the Docker host, which works with all of these operating systems.

The main purpose of Docker is to containerize applications, to ship/share them (by publishing to a repository like Docker Hub, AWS ECR, etc.), and to run them.

Containers solve application problems by improving Development Operations, enabling microservices, increasing portability, and further improving resource utilization. 


Advantages of Containers 

  • They are more lightweight than VMs, as their images are measured in megabytes rather than gigabytes. 
  • Containers require fewer IT resources to deploy, run, and manage. 
  • Containers spin up in milliseconds, since their size is smaller.

Reference: https://www.geeksforgeeks.org/difference-between-virtual-machines-and-containers/


Step 1 − Building the Docker file (a file named Dockerfile, without any extension)

FROM alpine

CMD ["echo", "Hello Gaurav, Welcome in the Docker World!"]


Step 2 − Executing the command to build the docker image


docker build -t docker-demo .



Step 3 − Using the docker run command to get the output

docker run docker-demo




If we would like to go inside any container to see the content or make some modifications, we can use the command below:

docker exec -it <container-id> /bin/bash or docker exec -it <container-id> /bin/sh

Note: bash and sh are two different shells of the Unix operating system. Bash stands for “Bourne Again SHell” and is an improvement of sh (the original Bourne shell): bash is a superset of sh, with more features and better syntax. The -i and -t options are used to access the container in interactive mode.

Basic Commands

docker pull – Ex: docker pull rroemhild/test-openldap

docker search – Ex: docker search openjdk -> To search for public images on the Docker hub

docker images – Ex: docker images -> To list all the local images

docker ps (process status) – Ex: docker ps -> To list all the running containers

docker ps -a – Ex: docker ps -a -> To list all containers, including stopped ones

docker stop – Ex: docker stop <container-id> -> To stop a container we can use either the container id or the container name.


docker run --rm -p 10389:10389 -p 10636:10636 rroemhild/test-openldap

We can use the docker run command like the above to run a specific image on multiple ports. It first creates a writable container layer over the specified image, and then starts it.

If we want to join two environments using Docker, so that the second environment can utilize the resources of the first one, we can use the command below:

docker swarm init

Executing the above command gives output like the one below. We run that join command in the environment that needs to join, and then we can run the node command to verify the joined worker environment.

docker swarm join --token SWMTKN-1-3if1odzpyrajovjatjqa5sp9sgydfmbbp3wzirus6lnol96e9l-ceqr8uuu8987k4w5n3ft3e3p9 10.30.14.142:2377

docker node ls

docker logs


docker service ls


docker service logs -f <ID or NAME>


We could use service logs command to check specific service logs by passing the corresponding id or its name. 


Another way to see the logs


docker ps

docker logs -f <CONTAINER ID>

Note: We can follow a specific container’s logs with the above commands by passing the corresponding container ID.


Prune unused Docker objects


Images, containers, volumes, and networks are generally not removed unless we explicitly ask for it. We can prune each object type individually, or use docker system prune to clean up multiple types of objects at once.


docker image prune : Allows us to clean up unused images. 


docker image prune -a : Removes all images which are not used by existing containers.


docker container prune : Remove all stopped containers.


docker volume prune : Remove all the volumes not used by any container


docker network prune : Remove all networks not used by any container


docker system prune : A shortcut that prunes images, containers, and networks. It won’t remove volumes; for that we must specify volumes like below:


docker system prune --volumes

S3 Basics

S3 – The Basics

S3 provides developers and IT teams with secure, durable, highly scalable object storage.

Amazon S3 is easy to use, with a simple web services interface to store and retrieve any amount of data from anywhere on the web.


Ques: What is S3?

Answer: 

  • S3 is a safe place to store your files.
  • It is Object-based storage.
  • The data is spread across multiple devices and facilities.
  • S3 is Object based- i.e. allows you to upload files.
  • Files can be from 0 Bytes to 5 TB.
  • There is unlimited storage.
  • Files are stored in Buckets (Folders).
  • Built for 99.99% availability for the S3 platform.
  • Amazon guarantees 99.9% availability.
  • Amazon guarantees 99.999999999% durability for S3 information. (Remember: 11 9s.)
  • S3 is a universal namespace. That is, names must be unique globally.

For example: https://<bucket-name>.s3.amazonaws.com/

When you upload a file to S3, you will receive an HTTP 200 code if the upload was successful.


S3 is object based. Think of Objects just as files. Objects consist of the following:

Key (This is simply the name of the object)

Value (This is simply the data and is made up of a sequence of bytes).

Version ID (Important for versioning)

Metadata (Data about data you are storing)

Subresources:

Access Control Lists

Torrent 


Ques: How does data consistency work for S3?

Answer: 

Read after Write consistency for PUTS of new Objects.

Means: If you write a new file and read it immediately afterwards, you will be able to view that data.


Eventual Consistency for overwrite PUTS and DELETES (can take some time to propagate)

Means: If you update AN EXISTING file or delete a file and read it immediately, you may get the older version, or you may not. Basically, changes to objects can take a little bit of time to propagate.

S3 – Features


  • Tiered Storage Available
  • Lifecycle Management
  • Versioning
  • Encryption
  • MFA Delete
  • Secure your data using Access Control Lists and Bucket Policies.

S3 Storage Classes


1. S3 Standard: 99.99% availability, 99.999999999 % durability, stored redundantly across multiple devices in multiple facilities and is designed to sustain the loss of 2 facilities concurrently.

2. S3 – IA: (Infrequently Accessed): For data that is accessed less frequently, but requires rapid access when needed. Lower fee than S3, but you are charged a retrieval fee.

3. S3 – One Zone – IA: For where you want a lower-cost option for infrequently accessed data, but do not require the multiple-Availability-Zone data resilience.

4. S3 – Intelligent Tiering – Designed to optimize costs by automatically moving data to the most cost-effective access tier, without performance impact or operational overhead.

5. S3 Glacier: S3 Glacier is a secure, durable, and low-cost storage class for data archiving. Retrieval times configurable from minutes to hours.

6. S3 Glacier Deep Archive: S3 Glacier Deep Archive is Amazon S3’s lowest-cost storage class, where a retrieval time of 12 hours is acceptable.

7. Reduced Redundancy: For frequently accessed, non-critical data.




S3 – Charges depend on


  • Storage
  • Requests
  • Storage Management Pricing
  • Data Transfer Pricing
  • Transfer Acceleration
  • Cross Region Replication Pricing

S3 Transfer Acceleration


Amazon S3 Transfer Acceleration enables fast, easy, and secure transfers of files over long distances between your end users and an S3 bucket.
Transfer Acceleration takes advantage of Amazon CloudFront’s globally distributed edge locations. As the data arrives at an edge location, it is routed to Amazon S3 over an optimized network path.

Note: Bucket names share a common namespace; we can’t have the same bucket name twice globally. You can replicate the contents of one bucket to another bucket automatically by using cross-region replication. You can change the storage classes and encryption of your objects on the fly.

Restricting S3 Bucket Access


  • Bucket Policies – Applies across the whole bucket
  • Object Policies – Applies to individual files
  • IAM Policies to Users & Groups – Applies to Users & Groups

By default, all newly created buckets are PRIVATE. You can set up access control to your buckets using:
  • Bucket Policies
  • Access Control Lists

S3 buckets can be configured to create access logs which log all requests made to the S3 bucket. This can be sent to another bucket and even another bucket in another account.

Encryption in transit is achieved by SSL/TLS.

Encryption at rest is achieved by:

  1. Server-side encryption
  • S3 Managed Keys – SSE-S3
  • AWS Key Management Service managed keys – SSE-KMS
  • Server-side encryption with customer-provided keys – SSE-C

  2. Client-side encryption

We can encrypt individual objects, and we can also enable encryption at the bucket level, which is much more efficient.

Versioning with S3:


  • Stores all versions of an object (including all writes, and even if you delete an object)
  • Great backup tool
  • Once enabled, versioning cannot be disabled, only suspended.
  • Integrates with Lifecycle rules.
  • Versioning’s MFA Delete capability, which uses multi-factor authentication, can be used to provide an additional layer of security.

S3 Lifecycle Rules


  • Automates moving your objects between the different storage tiers.
  • Can be used in conjunction with versioning.
  • Can be applied to current versions and previous versions.










About CircleCI

 

CircleCI

Ques: What is Continuous integration?

Continuous integration is a practice that encourages developers to integrate their code into the master branch of a shared repository. Instead of building out features in isolation and integrating them at the end of a development cycle, code is integrated with the shared repository by each developer multiple times throughout the day.

  • Every developer commits daily to a shared mainline.
  • Every commit triggers an automated build and test.
  • If the build or tests fail, it is easy to fix the problem rapidly.

 Ques: Why do we need Continuous Integration?

  • Improve team productivity/efficiency.
  • Identify problems and solve them easily & quickly.
  • Release higher-quality & more stable products.

CircleCI automates your software builds, tests, and deployments. We want to make engineering teams more productive through intelligent automation. CircleCI provides enterprise-class support and services. CircleCI runs nearly one million jobs per day in support of 30,000 organizations.


Benefits of CircleCI

  • Organizations choose CircleCI because jobs run fast and builds can be optimized for speed.
  • CircleCI can be configured to run very complex pipelines efficiently, with sophisticated caching, Docker layer caching, etc.
  • As a developer using circleci.com, you can SSH into any job to debug your build issues.
  • We can set up parallel jobs in the .circleci/config.yml file to run jobs faster.
  • We can also configure caching with two simple keys to reuse data from previous jobs in the workflow.
  • CircleCI provides monitoring and insights into your builds.
  • We can also get build and deployment logs to check for failures/errors.

CircleCI Process

After a software repository on GitHub or Bitbucket is authorized and added as a project to circleci.com, every code change triggers automated tests in a clean container or VM. 

CircleCI runs each job in a separate container or VM. That means each time your job runs, CircleCI spins up a fresh container or VM to run the job in.

CircleCI then sends an email notification of success or failure after the tests complete. There is also provision for integrated Slack notifications, so that we receive a notification for every build and deployment.

CircleCI may be configured to deploy code to various environments, for example AWS EC2 containers.


Prerequisites for Running our first build

Basic knowledge of Git.

A GitHub/Bitbucket account that you are logged into.

An account on CircleCI.

Basic terminal or bash knowledge using the command line is helpful.


What is a Pipeline?

Pipelines represent the entire configuration that is run when you trigger work on your projects that use CircleCI. The entirety of a .circleci/config.yml file is executed by a pipeline.

  

What is DevOps?

DevOps consolidates Development and Operations to increase the efficiency, speed, and security of software development and delivery compared to traditional processes. It is a quicker way of developing software, which results in a competitive advantage for businesses and their customers.



In the DevOps model, the two teams are merged into a single team where the engineers work across the entire application lifecycle, from development and test to deployment and operations, and develop a range of skills not limited to a single function.

Sometimes QA and security teams are also tightly integrated with the development and operations teams during the application lifecycle. When security is the focus of everyone on a DevOps team, this is known as DevSecOps.

Mostly, these teams use practices to automate processes that are manual and time-consuming. A DevOps team uses a technology stack and tooling which help them operate and evolve applications quickly and reliably. These tools also help engineers independently accomplish tasks like deploying code to various environments or provisioning infrastructure, and they increase team velocity by automating the processes.

DevOps Benefits

DevOps brings multiple business and technical advantages, many of which result in happier customers. The benefits are:

  • Faster & better quality product delivery
  • Reducing complexity & faster issue resolution approach 
  • More stable operating environments
  • Better resource utilization
  • Automating process
  • Scalability and availability
  • Innovation
  • Better visibility into system outcomes

DevOps practices

This reflects the idea of continuous improvement & automation. Many practices focus on one or more development cycle phases. These practices include:

  • Continuous development: Planning and coding phases of the DevOps lifecycle. Version-control mechanisms might be involved.
  • Continuous testing: Continued code tests as application code is being written or updated. Automated & prescheduling of tests can speed the delivery of code to production.
  • Continuous integration (CI): Includes Configuration management (CM) tools together with other test and development tools to track how much of the code being developed is ready for production. It involves rapid feedback between testing and development to quickly identify and resolve code issues.
  • Continuous delivery:  Automates the delivery of code changes, after testing, to a preproduction or staging environment. 
  • Continuous deployment (CD):  Similar to continuous delivery, this includes the release of new or changed code into production. A company doing continuous deployment might release code or feature changes several times per day. The use of container technologies, such as Docker and Kubernetes, can enable continuous deployment by helping to maintain consistency of the code across different deployment platforms and environments.
  • Continuous monitoring:  Involves ongoing monitoring of both the code in operation and the underlying infrastructure that supports it. A feedback loop that reports on bugs or issues then makes its way back to development.
  • Infrastructure as code:  Automate the provisioning of infrastructure required for a software release. Developers add infrastructure “code” from within their existing development tools. For example, developers might create a storage volume on demand from Docker, Kubernetes, or OpenShift. This practice also allows operations teams to monitor environment configurations, track changes, and simplify the rollback of configurations.

DevOps methods

Below are a few common DevOps methods that organizations can use to speed up and improve development and product releases.

  • Scrum: Scrum defines how members of a team should work together to accelerate development and QA projects. The practice includes key workflows, specific terminology (sprints, time boxes, daily scrum [meeting]), and designated roles (Scrum Master, product owner).


  • Kanban: This prescribes that the state of software project work in progress (WIP) be tracked on a Kanban board.


  • Agile: Earlier agile software development methods continue to heavily influence DevOps practices and tools. Many DevOps methods, including Scrum and Kanban, incorporate elements of agile programming. Some agile practices are associated with greater responsiveness to changing needs and requirements, documenting requirements as user stories, performing daily standups, and incorporating continuous customer feedback. Agile also prescribes shorter software development lifecycles instead of lengthy, traditional “waterfall” development methods.

 dig v/s host v/s nslookup Dig and nslookup are two tools that can be used to query DNS servers.  They both perform similar functions, but t...