Maintaining Servers & Networks for DevOps Engineers
- Nishant Nath
- Sep 4, 2023
- 9 min read
Networking Concepts:
1. IP Addressing:
Concept: IP addresses are numerical labels assigned to devices on a network to identify and locate them.
Example: An IP address like 192.168.1.100 is assigned to a server.
Use Case: Configuring IP addresses for servers, containers, or virtual machines within a network.
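Python's standard ipaddress module makes this easy to experiment with; a quick sketch inspecting the example address above:

```python
import ipaddress

# Parse and inspect the example address from above.
addr = ipaddress.ip_address("192.168.1.100")

print(addr.version)     # 4
print(addr.is_private)  # True -- 192.168.0.0/16 is a private (RFC 1918) range
print(addr in ipaddress.ip_network("192.168.1.0/24"))  # True
```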

2. Subnetting:
Concept: Subnetting divides a large IP network into smaller, more manageable subnetworks.
Example: Using a subnet mask like 255.255.255.0 to create subnets with 256 IP addresses each (254 usable for hosts, since the network and broadcast addresses are reserved).
Use Case: Allocating IP ranges for different purposes within a network, such as separating servers from client devices.
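A short sketch of subnetting with Python's ipaddress module, splitting a /16 into /24 subnets (the 10.0.0.0/16 network is just an illustration):

```python
import ipaddress

# Split a /16 network into /24 subnets (mask 255.255.255.0).
net = ipaddress.ip_network("10.0.0.0/16")
subnets = list(net.subnets(new_prefix=24))

first = subnets[0]
print(len(subnets))             # 256 subnets
print(first)                    # 10.0.0.0/24
print(first.num_addresses)      # 256 addresses each
print(first.num_addresses - 2)  # 254 usable hosts (network + broadcast reserved)
```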

3. DNS (Domain Name System):
Concept: DNS is a hierarchical system for translating human-readable domain names (e.g., google.com) into IP addresses.
Example: Resolving google.com to its IP address (e.g., 172.217.3.110).
Use Case: Setting up DNS records for web applications, email servers, and other network services.
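Name resolution can be exercised from Python's socket module; localhost is used here so the lookup works even without internet access:

```python
import socket

# Resolve a hostname to an IPv4 address -- the same job DNS does for google.com.
ip = socket.gethostbyname("localhost")
print(ip)  # typically 127.0.0.1
```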

4. Firewalls:
Concept: Firewalls are security devices that control incoming and outgoing network traffic based on defined rules.
Example: Allowing HTTP (port 80) traffic but blocking SSH (port 22) traffic.
Use Case: Configuring firewall rules to restrict access to network services and enhance security.
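The first-match-wins logic of firewall rules can be sketched as a toy packet filter in Python (illustrative only; real firewalls such as iptables or nftables filter traffic in the kernel):

```python
# Toy packet filter: first matching rule wins, unmatched traffic is denied.
RULES = [
    {"port": 80, "action": "allow"},   # HTTP allowed, per the example above
    {"port": 443, "action": "allow"},  # HTTPS allowed
    {"port": 22, "action": "deny"},    # SSH blocked
]

def check(port):
    for rule in RULES:
        if rule["port"] == port:
            return rule["action"]
    return "deny"  # default-deny policy

print(check(80))    # allow
print(check(22))    # deny
print(check(3306))  # deny (no rule -> default)
```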

5. Load Balancing:
Concept: Load balancers distribute incoming network traffic across multiple servers to optimize resource utilization and ensure high availability.
Example: Distributing user requests to different web servers to balance the load.
Use Case: Scaling web applications to handle increased traffic and improve fault tolerance.
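Round-robin, the simplest distribution strategy, can be simulated in a few lines (server names are placeholders):

```python
from itertools import cycle

# Round-robin: hand each incoming request to the next server in turn.
servers = ["web1", "web2", "web3"]
rr = cycle(servers)

assignments = [(f"req{i}", next(rr)) for i in range(6)]
for req, server in assignments:
    print(req, "->", server)  # req0 -> web1, req1 -> web2, req2 -> web3, req3 -> web1, ...
```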

6. TCP/IP and Ports:
Concept: TCP/IP is a suite of communication protocols, and ports are endpoints for network connections.
Example: Port 80 is commonly used for HTTP traffic, while port 22 is used for SSH.
Use Case: Configuring applications to listen on specific ports and ensuring that the correct ports are open in firewall rules.
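A server "listens on a port" by binding a socket to it; binding to port 0 asks the OS for any free port, which is handy for local experiments:

```python
import socket

# Open a listening TCP socket -- the same mechanism a web server
# uses to listen on port 80 or an SSH daemon on port 22.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
    s.bind(("127.0.0.1", 0))   # 0 = let the OS pick a free port
    s.listen(1)
    host, port = s.getsockname()
    print(f"listening on {host}:{port}")
```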

7. VPN (Virtual Private Network):
Concept: VPNs create secure, encrypted connections over public networks, ensuring data privacy and security.
Example: Establishing a VPN tunnel to securely access a remote server or connect remote offices.
Use Case: Providing secure remote access to corporate networks or connecting cloud resources securely.

8. CIDR (Classless Inter-Domain Routing):
Concept: CIDR notation represents IP address ranges more efficiently than traditional subnet masks.
Example: 192.168.0.0/24 (equivalent to subnet mask 255.255.255.0) denotes a subnet with 256 IP addresses.
Use Case: Efficiently managing and allocating IP address ranges in complex network configurations.

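The equivalence between CIDR prefixes and dotted subnet masks is easy to verify with Python's ipaddress module:

```python
import ipaddress

# A /24 prefix and the mask 255.255.255.0 describe the same subnet.
net = ipaddress.ip_network("192.168.0.0/24")

print(net.prefixlen)      # 24
print(net.netmask)        # 255.255.255.0
print(net.num_addresses)  # 256
```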
9. VLANs (Virtual LANs):
Concept: VLANs logically segment a physical network into multiple virtual networks.
Example: Isolating guest Wi-Fi traffic from internal corporate traffic within the same physical network.
Use Case: Enhancing network security, traffic isolation, and quality of service.

10. Routing:
Concept: Routing determines how data packets travel between networks, ensuring they reach their destination.
Example: A router directing traffic between a local network and the internet.
Use Case: Configuring routing tables for efficient data packet forwarding and controlling traffic flow.
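Routers choose among overlapping routes by longest-prefix match: the most specific route wins. A toy routing table in Python (the next-hop names are made up for illustration):

```python
import ipaddress

# Routing table: destination network -> next hop.
routes = {
    ipaddress.ip_network("0.0.0.0/0"): "internet-gateway",  # default route
    ipaddress.ip_network("10.0.0.0/8"): "corp-router",
    ipaddress.ip_network("10.1.2.0/24"): "local-lan",
}

def next_hop(dest):
    addr = ipaddress.ip_address(dest)
    matches = [net for net in routes if addr in net]
    best = max(matches, key=lambda net: net.prefixlen)  # longest prefix wins
    return routes[best]

print(next_hop("10.1.2.50"))  # local-lan
print(next_hop("10.9.9.9"))   # corp-router
print(next_hop("8.8.8.8"))    # internet-gateway
```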

Understanding Protocols
1. HTTP (Hypertext Transfer Protocol):
Concept: HTTP is a protocol used for transmitting web page data between a web server and a web browser.
Example: Accessing websites, RESTful API requests.
Use Case: Configuring web servers, load balancers, and web application firewalls (WAFs) to handle HTTP traffic.

2. HTTPS (HTTP Secure):
Concept: HTTPS is a secure version of HTTP that encrypts data between a web server and a client.
Example: Securely transmitting sensitive information, such as login credentials or payment details.
Use Case: Configuring SSL/TLS certificates on web servers and load balancers.
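On the client side, Python's ssl module shows the checks a TLS connection performs by default before any data is trusted:

```python
import ssl

# A default client-side TLS context, as an HTTPS client would use.
ctx = ssl.create_default_context()

print(ctx.verify_mode == ssl.CERT_REQUIRED)  # True -- server certificate must validate
print(ctx.check_hostname)                    # True -- hostname must match the certificate
```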

3. SSH (Secure Shell):
Concept: SSH is used for secure remote access to servers and for secure file transfers.
Example: Logging into a remote server or transferring files securely with tools like SCP or SFTP.
Use Case: Managing and configuring remote servers securely.

4. FTP (File Transfer Protocol):
Concept: FTP is used for transferring files between a client and a server over a network.
Example: Uploading website files to a web server or sharing files with colleagues.
Use Case: Setting up FTP servers and securing file transfers.

5. SMTP (Simple Mail Transfer Protocol):
Concept: SMTP is used for sending email messages between email servers.
Example: Sending email notifications from applications or relaying emails.
Use Case: Configuring email servers and ensuring reliable email delivery.
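Composing the message an application would hand to an SMTP server can be done offline with Python's email module (addresses here are made up; actual sending would go through smtplib):

```python
from email.message import EmailMessage

# Build the notification email an application would send via SMTP.
msg = EmailMessage()
msg["From"] = "alerts@example.com"
msg["To"] = "ops@example.com"
msg["Subject"] = "Deployment finished"
msg.set_content("The release deployed successfully.")

print(msg["Subject"])  # Deployment finished
print(msg["To"])       # ops@example.com
```

Sending would then be a call like `smtplib.SMTP(host).send_message(msg)`, with the SMTP server relaying the message toward the recipient's mail server.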

6. DHCP (Dynamic Host Configuration Protocol):
Concept: DHCP automates the assignment of IP addresses to devices in a network.
Example: Automatically assigning IP addresses to computers when they connect to a network.
Use Case: Configuring DHCP servers for IP address management in networks.
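The lease-assignment idea behind DHCP can be sketched as a toy allocator that hands out addresses from a pool and remembers which device holds which lease (MAC addresses and the pool range are made up):

```python
import ipaddress

# Toy DHCP-style allocator: hand out the next free address from a pool.
pool = [str(ip) for ip in ipaddress.ip_network("192.168.1.0/29").hosts()]
leases = {}  # MAC address -> leased IP

def request_address(mac):
    if mac in leases:                # a renewing client keeps its address
        return leases[mac]
    leases[mac] = pool[len(leases)]  # next free address in the pool
    return leases[mac]

print(request_address("aa:bb:cc:00:00:01"))  # 192.168.1.1
print(request_address("aa:bb:cc:00:00:02"))  # 192.168.1.2
print(request_address("aa:bb:cc:00:00:01"))  # 192.168.1.1 (same lease on renewal)
```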

Proxy | Forward Proxy | Reverse Proxy
1. Proxy:
A proxy server acts as an intermediary between client devices and servers. It can be used for various purposes, including enhancing security, providing anonymity, and caching web content. Here's a simplified example:
Example: Imagine you're in a country where certain websites are blocked by the government. To access a blocked website (e.g., www.example.com), you configure your web browser to use a proxy server located in a country where the website is not blocked. Your web requests go through the proxy server, which fetches the content from www.example.com on your behalf and sends it back to your browser. This way, you can access the blocked website without revealing your true location.
2. Forward Proxy:
A forward proxy server sits between client devices and the internet, forwarding client requests to the internet and returning the responses. It is often used within organizations to control and monitor internet access for internal users.
Example: In a corporate environment, employees use a forward proxy server to access websites on the internet. When an employee's computer requests a web page, it sends the request to the forward proxy server within the organization's network. The forward proxy server then fetches the web page from the internet and delivers it to the employee's computer. This allows the organization to enforce internet usage policies, block specific websites, and monitor employees' web activities.
3. Reverse Proxy:
A reverse proxy server is positioned between client devices (typically on the internet) and backend servers. It acts as a gateway for incoming requests and directs those requests to the appropriate backend server based on various factors such as load balancing, URL routing, and security.
Example: Consider a popular e-commerce website like Amazon. When you visit amazon.com and search for a product, your request is received by a reverse proxy server. This reverse proxy server, also known as a load balancer, is responsible for distributing incoming requests to multiple backend servers that handle different functions, such as search, product listings, and user accounts. It ensures that the load is balanced among the backend servers, allowing the website to handle high traffic efficiently. Additionally, the reverse proxy can provide security by hiding the internal server structure and protecting against security threats like DDoS attacks.

Caching servers
Caching servers are crucial components in modern IT infrastructure that store frequently accessed data, reducing the need to fetch the same data from the original source repeatedly. They help improve application performance, reduce latency, and alleviate load on backend servers.
Example: A Web Application with a Caching Server -
A user visits the ShopMart website to browse products or make a purchase.
The web application server, which hosts the website, receives the user's request.
Before fetching data from the database or performing complex computations, the web application server first checks the caching server.
If the requested data is found in the cache (a "cache hit"), the web application server retrieves it directly from the cache. For example, frequently accessed product listings, images, or pricing information are often stored in the cache.
If the data is not in the cache (a "cache miss"), the web application server fetches it from the database or computes it.
Once retrieved or calculated, the data is stored in the caching server for future use.
The data is typically stored with an expiration time (TTL) to ensure it remains fresh and up-to-date.
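The ShopMart flow above boils down to a cache with per-entry expiry; a minimal TTL cache sketch in Python:

```python
import time

# Minimal TTL cache: entries expire after `ttl` seconds, so stale data
# is refetched from the source instead of being served forever.
class TTLCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}  # key -> (value, expiry timestamp)

    def get(self, key):
        entry = self.store.get(key)
        if entry is None:
            return None           # cache miss
        value, expires = entry
        if time.time() > expires:
            del self.store[key]   # expired -> treat as a miss
            return None
        return value              # cache hit

    def set(self, key, value):
        self.store[key] = (value, time.time() + self.ttl)

cache = TTLCache(ttl=60)
cache.set("product:42", {"name": "Laptop", "price": 999})
print(cache.get("product:42"))  # cache hit
print(cache.get("product:99"))  # None -- cache miss, fetch from the database
```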

Load Balancer:
Suppose you run an e-commerce website called "ShopNow". The site experiences a significant amount of traffic daily, and you want it to remain fast and available even during high-demand periods like holiday sales. To achieve this, you implement a load balancer.
A user visits the ShopNow website by typing www.shopnow.com into their web browser or clicking a link.
Their request is directed to the load balancer, which acts as the entry point to the web application.
The load balancer's job is to distribute incoming requests among multiple backend servers (often referred to as "nodes" or "instances").
Each backend server hosts a copy of the ShopNow website.
The load balancer uses various algorithms and metrics to determine how to distribute incoming requests. Common methods include round-robin (each server receives requests in turn), least connections (requests are sent to the server with the fewest active connections), and IP hashing (based on the source IP address).
For example, if a user's request is the first one to arrive, it might be sent to Server 1. The next request might go to Server 2, and so on.
The selected backend server (e.g., Server 1) processes the user's request.
It may retrieve product information, user data, or perform other tasks necessary to generate the web page.
Load balancers regularly perform health checks on the backend servers to ensure they are operational and responsive.
If a server becomes unresponsive or experiences issues, the load balancer will stop directing traffic to that server until it recovers.
As traffic to ShopNow increases, you can add more backend servers behind the load balancer to handle the load effectively. This is known as horizontal scaling.
If demand decreases, you can remove servers to save resources.
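Combining the two ideas above, the load balancer only rotates among servers that pass their health checks; a small simulation (server names and health states are made up):

```python
# Health-check-aware round-robin: unhealthy servers are skipped.
servers = {"server1": True, "server2": False, "server3": True}  # name -> healthy?

def pick_server(counter):
    healthy = [name for name, ok in servers.items() if ok]
    return healthy[counter % len(healthy)]

for i in range(4):
    print(pick_server(i))  # server1, server3, server1, server3 -- server2 is skipped
```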

Internet Firewall
A firewall is a network security device or software application that monitors and controls incoming and outgoing network traffic based on an organization's previously established security policies. It acts as a barrier between a trusted internal network and untrusted external networks (such as the internet) to prevent unauthorized access and protect against various threats. Let's explore the concept of a firewall with a real-world example:
Imagine you are the IT administrator for a large corporation called "TechCorp." TechCorp has a corporate network with sensitive data, proprietary software, and a large number of employees accessing the network daily. To safeguard this network, you implement a firewall.
All network traffic, both incoming and outgoing, passes through the firewall.
The firewall inspects each packet of data and evaluates whether it should be allowed or blocked based on predefined security rules and policies.
TechCorp has established a set of firewall rules and policies to control traffic.
For example, the firewall may be configured to allow HTTP (web) traffic on port 80 but block traffic on ports commonly associated with known vulnerabilities.
Firewalls often use Access Control Lists (ACLs) to specify which IP addresses, ports, or protocols are permitted or denied.
TechCorp may configure ACLs to allow access to specific servers for remote employees while denying access to unauthorized devices.
Many modern firewalls employ stateful inspection, which tracks the state of active connections.
For instance, if an internal user requests a web page, the firewall allows the response traffic back in, as it recognizes it as part of an established connection.
The firewall logs all traffic activity and security events.
TechCorp regularly reviews these logs to identify potential security threats or unusual activity, which can aid in incident response and security analysis.
Some advanced firewalls include intrusion detection and prevention systems (IDPS) to detect and block malicious activities.
For example, if the firewall identifies a pattern of traffic consistent with a known attack, it can take action to block further attempts.
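Stateful inspection can be sketched as a connection table: reply traffic is admitted only if it matches a connection a client already opened (addresses below are examples):

```python
# Toy stateful firewall: track outbound connections, admit only matching replies.
established = set()

def outbound(src, dst):
    established.add((src, dst))  # remember the connection the client opened
    return "allow"

def inbound(src, dst):
    # Reply traffic reverses src/dst; allow only known connections.
    if (dst, src) in established:
        return "allow"
    return "deny"                # unsolicited inbound traffic is blocked

print(outbound("10.0.0.5:50000", "93.184.216.34:80"))  # allow
print(inbound("93.184.216.34:80", "10.0.0.5:50000"))   # allow -- matches state
print(inbound("203.0.113.7:4444", "10.0.0.5:50000"))   # deny  -- no matching state
```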

Web Server:
A web server is a software application or hardware device that stores, processes, and serves web content (e.g., web pages, images, files) to clients (typically web browsers) over the internet.
Imagine you have a personal blog called "TechBlog" where you write articles and share information about technology. To make your blog accessible to internet users, you set up a web server. Key characteristics of a web server in this scenario:
Accessibility: Web servers make your website accessible to users worldwide via the internet.
Performance: Well-configured web servers can handle multiple concurrent connections, ensuring fast response times.
Scalability: You can scale your web server to handle increased traffic by adding more server resources or using load balancers.
Reliability: Web servers are designed for high availability, with features to recover from failures and minimize downtime.
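A minimal TechBlog-style web server using only Python's standard library: it serves one HTML page on an ephemeral local port, and a client fetches it over HTTP.

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Minimal web server: answers every GET with a static HTML page.
class BlogHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = b"<h1>TechBlog</h1>"
        self.send_response(200)
        self.send_header("Content-Type", "text/html")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging
        pass

server = HTTPServer(("127.0.0.1", 0), BlogHandler)  # port 0 = any free port
threading.Thread(target=server.serve_forever, daemon=True).start()

# Act as the browser: fetch the page from our own server.
url = f"http://127.0.0.1:{server.server_port}/"
with urllib.request.urlopen(url) as resp:
    status = resp.status
    body = resp.read()
server.shutdown()

print(status, body)  # 200 b'<h1>TechBlog</h1>'
```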

NGINX
Imagine you are a DevOps engineer responsible for managing the infrastructure of a high-traffic e-commerce website called "ShopHub." To ensure the website can handle a large number of users and provide fast response times during peak shopping seasons, you decide to use NGINX.
NGINX acts as the primary web server to serve static content, such as HTML, CSS, JavaScript files, and images.
NGINX is highly efficient at serving static content and can handle many concurrent connections with minimal resource usage.
To distribute incoming web traffic evenly among multiple backend servers hosting the ShopHub website, you configure NGINX as a load balancer.
NGINX uses load-balancing algorithms (e.g., round-robin, least connections) to route requests to the backend servers.
ShopHub wants to secure user data during transmission, so NGINX is configured to terminate SSL/TLS encryption.
NGINX decrypts incoming HTTPS traffic, forwards it to backend servers over HTTP, and encrypts the response before sending it back to users.
NGINX acts as a reverse proxy server, forwarding requests to application servers running ShopHub's dynamic web application.
You set up NGINX in a high-availability configuration with multiple instances running on different servers.
NGINX can handle a large number of concurrent connections and distribute traffic across multiple backend servers, making it suitable for high-traffic websites.
NGINX can be configured to implement SSL/TLS encryption, rate limiting, and access controls to enhance security.
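A sketch of how the ShopHub setup described above might look in NGINX configuration; all names, addresses, and certificate paths are placeholders, not a production config:

```nginx
# Hypothetical ShopHub config: static files, TLS termination,
# and reverse proxying to a load-balanced backend pool.
upstream shophub_backend {
    least_conn;                   # least-connections load balancing
    server 10.0.1.10:8080;
    server 10.0.1.11:8080;
}

server {
    listen 443 ssl;
    server_name shophub.example.com;

    ssl_certificate     /etc/nginx/ssl/shophub.crt;   # TLS terminates here
    ssl_certificate_key /etc/nginx/ssl/shophub.key;

    location /static/ {
        root /var/www/shophub;    # NGINX serves static assets directly
    }

    location / {
        proxy_pass http://shophub_backend;  # reverse proxy to app servers
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```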




