Run this to install Docker:
$ curl -fsSL get.docker.com | sh
To preview the installation steps first:
$ curl -fsSL get.docker.com -o get-docker.sh
$ sudo sh ./get-docker.sh --dry-run

In this article, we will learn how to send email using shell scripting. Sending emails programmatically can be very useful for automated notifications, error alerts, or any other form of communication that needs to be sent without manual intervention. Shell scripting offers a powerful way to achieve this through various utilities and tools available in Unix-like operating systems.
First, go to your Google account:

Search for "App Passwords" in the search bar.

Enter your Google account password to verify it's you:

Now enter an app name for the app password, such as "Shell Script".
And click on Create to create it:

The password will be generated. Note it down, as we will be using it later.

Open the terminal and use the nano command to create a new file:
nano email.sh
Step #3: Write a Script to Send Emails
Below is a basic script to send an email.
-----------------------------------------
#!/bin/bash

# Prompt the user for input
read -p "Enter your email: " sender
read -p "Enter recipient email: " receiver
read -s -p "Enter your Google App password: " gapp
echo
read -p "Enter the subject of mail: " sub

# Read the body of the email
echo "Enter the body of mail (Ctrl+D to end):"
body=$(</dev/stdin)

# Send the email using curl
response=$(curl -s --url 'smtps://smtp.gmail.com:465' --ssl-reqd \
    --mail-from "$sender" \
    --mail-rcpt "$receiver" \
    --user "$sender:$gapp" \
    -T <(echo -e "From: $sender\nTo: $receiver\nSubject: $sub\n\n$body"))

if [ $? -eq 0 ]; then
    echo "Email sent successfully."
else
    echo "Failed to send email."
    echo "Response: $response"
fi
-----------------------------------------
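What the process substitution -T <(echo -e ...) hands to curl is just a plain-text message: RFC 822-style headers, a blank line, then the body. Built in isolation with placeholder values, it looks like this:

```shell
# Build the same message the script uploads to the SMTP server
# (the addresses and subject here are placeholders)
sender="alice@example.com"
receiver="bob@example.com"
sub="Test subject"
body="Hello from a shell script."

# Headers first, then an empty line, then the body
msg=$(printf 'From: %s\nTo: %s\nSubject: %s\n\n%s\n' "$sender" "$receiver" "$sub" "$body")
echo "$msg"
```

The blank line after the Subject header is what separates the headers from the message body; without it, the body would be treated as malformed headers.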

Save the file and exit the editor.
Explanation of the Script:
This script is designed to send an email using Gmail’s SMTP server through the command line. Here’s a step-by-step explanation of each part of the script:
- The #!/bin/bash line at the beginning of the script indicates that the script should be run using the Bash shell.
- read -p prompts the user for the sender's email, recipient's email, and subject, while read -s -p hides the input when the Google App password is typed.
- The body of the email is read from standard input until you press Ctrl+D and is captured into the variable body.
- curl is used to send the email:
  - -s makes curl run silently.
  - --url 'smtps://smtp.gmail.com:465' specifies the Gmail SMTP server with SSL on port 465.
  - --ssl-reqd ensures SSL is required.
  - --mail-from "$sender" specifies the sender's email address.
  - --mail-rcpt "$receiver" specifies the recipient's email address.
  - --user "$sender:$gapp" provides the sender's email and Google App password for authentication.
  - -T <(echo -e "From: $sender\nTo: $receiver\nSubject: $sub\n\n$body") uploads the email content, including headers and body.
- The exit status of the curl command ($?) is then checked:
  - If it is 0, the script prints "Email sent successfully."
  - Otherwise, it prints "Failed to send email." and outputs the response from curl.
Step #4: Make the file executable
Change the file permissions to make it executable using the chmod command:
chmod +x email.sh
Step #5: Run the script
Run the script by executing the following command:
./email.sh
You will be prompted to enter the sender’s email, recipient’s email, Google App password, subject, and body of the email:

You will get the message "Email sent successfully."
Now check the mailbox to see whether you received the mail.

Conclusion:
In conclusion, sending emails using shell scripting can greatly enhance your automation tasks, enabling seamless communication and notifications. This approach is especially useful for automated notifications, error alerts, and other routine communications. With the power of shell scripting, you can integrate email functionality into your workflows, ensuring timely and efficient information exchange.
In this article, we will learn how to write a shell script that sends an email if a website is down, automating the process of checking the status of URLs and sending email notifications if any of them are unreachable. When a website goes down or experiences issues, it's essential to be promptly notified so that necessary actions can be taken to restore its functionality.
One effective way to achieve this is by automating email notifications to alert system administrators or stakeholders about the status of URLs. We’ll utilize shell scripting to create a simple yet effective solution for monitoring website health and ensuring timely notifications in case of any disruptions.
First, go to your Google account.

Search for "App Passwords" in the search bar.

Enter your Google account password to verify it's you.

Now enter an app name for the app password, such as "URL Notification".
And click on Create to create it.

The password will be generated. Note it down, as we will be using it later.


Open the terminal and use the nano command to create a new file.
nano url_notify.sh
Step #3: Write a Script to Send Email Notification
Below is a basic script to send an email notification if any of the URLs are down:
----------------------------------------------------
#!/bin/bash

# Prompt the user for input
read -p "Enter your email: " sender
read -p "Enter recipient email: " receiver
read -s -p "Enter your Google App password: " gapp
echo

# List of URLs to check
urls=("https://www.example.com" "https://www.google.com" "https://www.openai.com")

# Check the status of the URLs and send an email notification if any are down
check_urls_and_send_email() {
    local down_urls=""
    local subject="Website Down"

    for url in "${urls[@]}"; do
        # Fetch only the response headers and keep the status line
        response=$(curl -Is "$url" | head -n 1)
        if [[ ! $response =~ "200" ]]; then
            down_urls+="$url\n"
        fi
    done

    if [[ -n $down_urls ]]; then
        body="The following websites are down:\n\n$down_urls"
        email_content="From: $sender\nTo: $receiver\nSubject: $subject\n\n$body"
        response=$(curl -s --url 'smtps://smtp.gmail.com:465' --ssl-reqd \
            --mail-from "$sender" \
            --mail-rcpt "$receiver" \
            --user "$sender:$gapp" \
            -T <(echo -e "$email_content"))
        if [ $? -eq 0 ]; then
            echo "Email sent successfully."
        else
            echo "Failed to send email."
            echo "Response: $response"
        fi
    else
        echo "All websites are up."
    fi
}

# Call the function to check URLs and send email
check_urls_and_send_email
-------------------------------------------------
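One caveat: the check above treats any status line that does not contain "200" as down, so a healthy site answering with a redirect (301/302) would be flagged. A sketch of a more tolerant helper that accepts any 2xx or 3xx code; the URL in the comment is a placeholder:

```shell
# Return 0 (up) for any 2xx/3xx HTTP status code, 1 (down) otherwise
is_up() {
    case "$1" in
        2[0-9][0-9]|3[0-9][0-9]) return 0 ;;
        *) return 1 ;;
    esac
}

# With curl, the numeric code can be captured directly, e.g.:
#   code=$(curl -s -o /dev/null -w '%{http_code}' "https://www.example.com")
# Here the helper is exercised with literal codes instead of a live request.
is_up 200 && echo "200 is up"
is_up 301 && echo "301 is up"
is_up 503 || echo "503 is down"
```

Using -w '%{http_code}' avoids parsing the status line with a regular expression altogether.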

Save the file and exit the editor.
Explanation of the Script:
The script works the same way as the one in the previous article: it prompts for credentials, loops over the urls array, checks each URL's HTTP status line using curl -Is, collects any failing URLs into down_urls, and, if that list is non-empty, sends a single notification email through Gmail's SMTP server using curl.
Step #4: Make the file executable
Change the file permissions to make it executable using the chmod command:
chmod +x url_notify.sh
Step #5: Run the script
Run the script by executing the following command:
./url_notify.sh
You will be prompted to enter the sender’s email, recipient’s email and Google App password:

You will receive the message "Email sent successfully."
Now check the mailbox to verify that you received the mail.

Conclusion:
In conclusion, automating email notifications for URL status monitoring with shell scripting is a proactive way to ensure the continuous availability and performance of critical web services and applications. By following the steps outlined in this article, organizations can establish a simple yet efficient mechanism for monitoring URL health, receive timely alerts in case of disruptions, and minimize downtime, ultimately improving the reliability and user experience of their online services.

Ubuntu 24.04 LTS, codename Noble Numbat, was released on April 25, 2024, and is now available for download from the official Ubuntu website and its mirrors. This release will receive official security and maintenance updates for five years, until April 2029.
Some key points about Ubuntu 24.04 LTS:
Ubuntu has long been recognized as one of the most popular and widely used Linux distributions among desktop users.
However, it has experienced fluctuations in its popularity, particularly during the period when it introduced significant changes to its desktop experience with the user interface.
A key feature of this new Ubuntu release is that it comes out at the same time (or almost the same time) as other popular versions like Kubuntu, Lubuntu, Xubuntu, and Ubuntu Studio.
This gives users more options for desktop environments, all with official support. The support period can differ (usually 3 years for non-LTS and 5 years for LTS versions), but releasing them together makes it easier for both individuals and companies to choose.
Here are some noticeable changes in Ubuntu 24.04 (Noble Numbat):
Ubuntu has one of the simplest and most straightforward installers among all Linux distributions, which makes installing the system easy, even for a beginner or an uninitiated Linux or Windows user, with just a few clicks.
Following are the minimum system requirements for installing Ubuntu 24.04 Desktop:
This tutorial will cover a fresh installation of Ubuntu 24.04, with a basic walkthrough, a few system tweaks, and some applications.
1. Download the Ubuntu ISO image from the Ubuntu website, and then use a USB Linux Installer to burn the ISO to a USB stick.
2. After creating a bootable Ubuntu USB drive, restart your computer and go into the BIOS or boot menu by pressing the designated key (usually F2, F12, or Del) during startup. Choose the USB drive as the boot device and press Enter.
3. The USB content is loaded into RAM until you reach the installation screen, where you will be given the option to “Try Ubuntu” or “Install Ubuntu”.

4. The next step asks whether you want to test Ubuntu before installing it. Choosing “Try Ubuntu” allows you to use Ubuntu from the USB drive without making any changes to your computer’s hard drive.

5. If you’re ready to install Ubuntu, select “Install Ubuntu” and follow the on-screen instructions to select your language, and keyboard layout.

6. Next, choose “Normal installation” to install the full Ubuntu desktop with office software, games, and media players. You will also have the option to download updates and third-party software during the installation process.

7. Ubuntu will prompt you to choose how you want to partition your disk. You can select “Erase disk and install Ubuntu” to use the entire disk or choose “Something else” for manual partitioning.
If you’re new to partitioning, it’s recommended to select the “Erase disk and install Ubuntu” option.

8. After partitioning your disk, click the Install Now button. In the next stage, choose your location from the map. Your location will also affect your system time, so be sure to choose your true location.

9. Next, enter your name, desired username, password, and computer name to create a user account for Ubuntu.

10. Review your settings and click “Continue” to begin the installation process.

The installer now starts copying system files to your hard drive while presenting you with some information about your brand-new Ubuntu LTS system, which comes with 5 years of support.
After the installer finishes its job, click the Restart Now prompt and press Enter after a few seconds for your system to reboot.

Congratulations!! Ubuntu 24.04 is now installed on your machine and is ready for daily usage.
After logging into your new system for the first time, it is time to review your software sources to ensure you have all necessary repositories enabled, including main, universe, restricted, and multiverse.
Open “Software & Updates” from the applications menu, go to the “Ubuntu Software” tab, and ensure that all the checkboxes for the Ubuntu repositories are checked.
You may also check the “Source Code” option if you want access to source packages.

After making any changes to the software sources, click “Close” and then “Reload” to update the software repositories.

Next, open a terminal and issue the following commands to keep your system secure and protected against potential vulnerabilities.
sudo apt-get update
sudo apt-get upgrade
For basic user usage, type “Ubuntu Software” in the search bar, press Enter to open it, and browse through the search results to find the software you’re looking for.

If you prefer to minimize windows by clicking on the app icon, you can enable this feature using a simple command in the terminal:
gsettings set org.gnome.shell.extensions.dash-to-dock click-action 'minimize'
GNOME Tweaks is a powerful tool that allows for extensive customization options such as changing themes, fonts, window behavior, and more.
You can install GNOME Tweaks through the Ubuntu Software app and access it from the applications grid by searching for “tweaks“.

You might want to turn on the “night light” setting in GNOME, which helps to lower the amount of blue light that comes from your screen. This can be good for your sleep. You can find this setting in the “Screen Display” part of your Settings.

Ubuntu offers different desktop environments like GNOME, KDE, and Xfce. You can install and switch between these environments based on your preferences using terminal commands as shown.
Install KDE on Ubuntu:
sudo apt install kubuntu-desktop
Install Xfce on Ubuntu:
sudo apt install xubuntu-desktop
You might want to try installing Snap packages, a way of packaging software for Linux that makes it easy to get lots of different applications and tools.
sudo apt install snapd
That’s all for a basic Ubuntu installation and the minimal software required for average users to browse the Internet, use instant messaging, listen to music, watch movies or YouTube clips, and write documents.
Ubuntu is arguably one of the most popular and widely-used Linux distributions owing to its classic UI, stability, user-friendliness, and rich repository that contains over 50,000 software packages. Furthermore, it comes highly recommended for beginners who are trying to give a shot at Linux.
In addition, Ubuntu is supported by a vast community of dedicated open-source developers who actively maintain and contribute to its development to deliver up-to-date software packages, updates, and bug fixes.
There are numerous flavors based on Ubuntu, and a common misconception is that they are all the same. While they may be based on Ubuntu, each flavor ships with its own unique style and variations to make it stand out from the rest.
In this guide, we are going to explore some of the most popular Ubuntu-based Linux distributions.
Table of Contents
Used by millions around the globe, Linux Mint is a massively popular Linux flavor based on Ubuntu. It provides a sleek UI with out-of-the-box applications for everyday use such as LibreOffice Suite, Firefox, Pidgin, Thunderbird, and multimedia apps such as VLC and Audacious media players.

Owing to its simplicity and ease of use, Linux Mint is considered ideal for beginners who are making a transition from Windows to Linux and those who prefer to steer clear from the default GNOME desktop but still enjoy the stability and the same code base that Ubuntu provides.
The most recent releases of Linux Mint are Linux Mint 20 and Linux Mint 21, which are based on Ubuntu 20.04 LTS and Ubuntu 22.04 LTS respectively.
If there was ever a Linux flavor built with stunning appeal in mind without compromising crucial aspects such as stability and security, it has to be elementary OS, which is based on Ubuntu.
elementary OS is an open-source flavor that ships with an eye-candy Pantheon desktop environment inspired by Apple’s macOS. It provides a dock that is reminiscent of macOS, beautifully styled icons, and numerous fonts.

On its official site, elementary OS emphasizes keeping users’ data as private as possible by not collecting sensitive data. It also takes pride in being a fast and reliable operating system, ideal for those transitioning from macOS and Windows environments.
Just like Ubuntu, elementary OS comes with its own Software store known as App Center from where you can download and install your favourite applications (both free and paid) with a simple mouse-click. Of course, it ships with default apps such as Epiphany, Photo Viewer, and video/media playing applications but the variety is quite limited compared to Linux Mint.
Written in C, C++, and Python, Zorin is a fast and stable Linux distribution that ships with a sleek UI that closely mimics Windows 7. Zorin is hyped as an ideal alternative to Windows and, upon trying it out, I couldn’t agree more. The bottom panel resembles the traditional taskbar found in Windows, with the iconic start menu and pinned application shortcuts.

Like elementary OS, it underscores the fact that it respects users’ privacy by not collecting private and sensitive data. One cannot be certain about this claim and you can only take their word for it.
Another key highlight is its ability to run impressively well on old PCs, with as little as a 1 GHz Intel dual-core processor, 1 GB of RAM, and 10 GB of hard disk space. Additionally, you get to enjoy powerful applications such as LibreOffice, the Calendar app, and Slack, as well as games that work out of the box.
Developed & maintained by System76, POP! OS is yet another open-source distribution based on Canonical’s Ubuntu. POP breathes some fresh air into the user experience with an emphasis on streamlined workflows thanks to its raft of keyboard shortcuts and automatic window tiling.

POP! also brings on board a software center, Pop! Shop, that is replete with applications from diverse categories such as science and engineering, development, communication, and gaming, to mention a few.
A remarkable improvement that POP! has made is the bundling of NVIDIA drivers into the ISO image. In fact, during the download, you get to select between the standard Intel/AMD ISO image and one that ships with NVIDIA drivers for systems equipped with NVIDIA GPU. The ability to handle hybrid graphics makes POP ideal for gaming.
The latest version of POP! is POP! 22.04 LTS, based on Ubuntu 22.04 LTS.
If you are wondering what to do with your aging piece of hardware, and the only thought that crosses your mind is tossing it in the dumpster, you might want to hold back a little and try out LXLE.

The LXLE project was primarily developed to revive old PCs that have low specifications and have seemingly outlived their usefulness. How does it achieve this? LXLE ships with a lightweight LXDE desktop environment that is friendly to the system resources without compromising on the functionality required to get things done.
We have included it in the best Linux distributions for old computers.
LXLE is packed with cool wallpapers and numerous other additions and customization options that you can apply to suit your style. It’s super fast on boot and general performance and ships with added PPAs to provide extended software availability. LXLE is available in both 32-bit and 64-bit versions.
The latest release of LXLE is LXLE 18.04 LTS.
Kubuntu is a lightweight Ubuntu variant that ships with a KDE Plasma desktop instead of the traditional GNOME environment. The lightweight KDE Plasma is extremely lean and doesn’t gobble up the CPU. In so doing, it frees up system resources to be used by other processes. The end result is a faster and more reliable system that enables you to do so much more.

Like Ubuntu, it’s quite easy to install and use. KDE Plasma provides a sleek and elegant look and feel, with numerous wallpapers and polished icons.
Aside from the desktop environment, it resembles Ubuntu in almost every other way, shipping with a set of apps for everyday use, such as office, graphics, email, music, and photography applications.
Kubuntu adopts the same versioning system as Ubuntu and the latest release – Kubuntu 20.04 LTS – is based on Ubuntu 20.04 LTS.
We cannot afford to leave out Lubuntu, a lightweight distro that comes with an LXDE/LXQt desktop environment alongside an assortment of lightweight applications.

With a minimalistic desktop environment, it comes recommended for systems with low hardware specifications, especially old PCs with 2 GB of RAM. The latest version at the time of writing this guide is Lubuntu 22.04 with the LXQt desktop environment, which will be supported until April 2025.
A portmanteau of Xfce and Ubuntu, Xubuntu is a community-driven Ubuntu variant that is lean, stable, and highly customizable. It ships with a modern and stylish look and out-of-the-box applications to get you started. You can easily install it on your laptop, or desktop and even an older PC would suffice.

The latest release is Xubuntu 22.04 which will be supported till 2025 and is also based on Ubuntu 22.04 LTS.
As you might have guessed, Ubuntu Budgie is a fusion of the traditional Ubuntu distribution with the innovative and sleek Budgie desktop. The latest release, Ubuntu Budgie 22.04 LTS is a flavor of Ubuntu 22.04 LTS. It aims at combining the simplicity and elegance of Budgie with the stability and reliability of the traditional Ubuntu desktop.

Ubuntu Budgie 22.04 LTS features tons of enhancements such as 4K resolution support, a new window shuffler, budgie-nemo integration, and updated GNOME dependencies.
We earlier featured KDE Neon in an article about the best Linux distros for KDE Plasma 5. Just like Kubuntu, it ships with KDE Plasma 5, and the latest version – KDE Neon 22.04 LTS is rebased on Ubuntu 22.04 LTS.

This may not be the entire list of all Ubuntu-based Linux distros. We decided to feature the top 10 commonly used Ubuntu-based distributions.
In this article, we will learn how to monitor a Linux server using various shell commands and tools. Monitoring a Linux server is crucial for maintaining the health and performance of your servers, workstations, or any other computing devices. We’ll cover the creation of a basic script that tracks CPU usage, memory consumption, and disk space.
Table of Contents
Open the terminal and use the nano command to create a new file.
nano monitor_resources.sh
Write the script to monitor the Linux server into the file.
Following is a shell script to monitor system resources such as CPU usage, memory usage, and disk space on a Linux server:
------------------------------------
#!/bin/bash

while true; do
    clear
    echo "System Resource Monitoring"
    echo "--------------------------"

    # Display CPU usage
    echo "CPU Usage:"
    top -b -n 1 | grep "Cpu"

    # Display memory usage
    echo -e "\nMemory Usage:"
    free -h

    # Display disk space usage
    echo -e "\nDisk Space Usage:"
    df -h

    sleep 5 # Wait for 5 seconds before the next update
done
---------------------------------
Save the file and exit the editor.
Explanation of the Script:
Shebang (#!/bin/bash): Specifies the shell to be used for interpreting the script.
while Loop: Creates an infinite loop that repeatedly gathers and displays system resource information.
clear: Clears the terminal screen for a cleaner display.
Display CPU Usage: Uses the top command with the -n 1 flag to display a single iteration of CPU usage information. The grep “Cpu” command filters out the relevant line.
Display Memory Usage: Uses the free command with the -h flag to display memory usage in a human-readable format.
Display Disk Space Usage: Uses the df command with the -h flag to display disk space usage in a human-readable format.
sleep 5: Pauses the loop for 5 seconds before gathering and displaying resource information again.
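If you need a history rather than a live display, the same commands can append timestamped snapshots to a log file, for example from a cron job. A minimal sketch; the log path is a placeholder:

```shell
# Append one timestamped snapshot of system resources to a log file
logfile="/tmp/resource_log.txt"   # placeholder path

{
    echo "=== $(date) ==="
    echo "Memory Usage:"
    free -h
    echo "Disk Space Usage:"
    df -h
    echo
} >> "$logfile"

echo "Snapshot written to $logfile"
```

Scheduled with cron (for example, every five minutes), this gives you a browsable record instead of a screen that is overwritten every cycle.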
Step #3: Make the file executable
Change the file permissions to make it executable using the chmod command:
chmod +x monitor_resources.sh
Run the script by executing the following command:
./monitor_resources.sh
Output:

The terminal will now display system resource information, updating every 5 seconds. To stop the script, press CTRL + C.
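If you would rather have the script stop on its own instead of pressing CTRL + C, the infinite while true loop can be replaced with a bounded for loop. A minimal sketch with three iterations and a shorter delay:

```shell
# Run a fixed number of monitoring iterations instead of looping forever
for i in 1 2 3; do
    echo "Iteration $i:"
    free -h    # memory snapshot, as in the main script
    echo
    sleep 1
done
echo "Monitoring finished."
```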
Conclusion:
In conclusion, monitoring a Linux server using shell scripts on Ubuntu provides a straightforward and efficient way to keep track of CPU usage, memory consumption, and disk space. By following the steps to create, write, make executable, and run the monitor_resources.sh script, you can easily gather and display essential system information in real time.
TeamViewer has long been a go-to solution for remote desktop access and collaboration across various platforms. However, for Linux users, finding reliable alternatives that seamlessly integrate with their systems has been a constant quest.
In 2024, the Linux ecosystem has witnessed significant advancements, leading to a surge in alternatives that offer robust features and compatibility.
In this article, we will explore the best TeamViewer alternatives for Linux, addressing frequently asked questions to help users make informed choices.
Contents:
1. Ammyy Admin
2. AnyDesk
3. RealVNC
4. TightVNC
5. Remmina
6. Chrome Remote Desktop
7. DWService
8. TigerVNC
9. X2Go
10. Apache Guacamole
11. RustDesk – Remote Desktop Access Software
Conclusion
Ammyy Admin is proprietary remote desktop access software with a focus on stability, security, and simplicity, used by more than 80,000,000 personal and corporate users. It is free for personal use.
Ammyy Admin is excellent for system administration tasks, remote office actions such as file sharing, and online conference meetings. It runs as a portable executable file, so it does not require any installation.

AnyDesk is a modern, proprietary, multi-platform remote desktop application that has gained popularity as a versatile solution compatible with Linux.
Known for its low latency and high-quality resolution, AnyDesk is free for private use and offers Lite, Professional, and Enterprise subscription packages for business use.
It features high frame rates, real-time collaboration, effective bandwidth use, fail-safe Erlang network, low latency, session recording, automated updates, custom aliases, etc. It also offers various security, administration, and flexibility features.
You are free to take it for a test drive – no installation required.

RealVNC, a renowned remote access application, provides seamless connectivity across multiple platforms. With support for Linux, Windows, and macOS, RealVNC ensures efficient remote desktop solutions for personal and professional use by OEMs, managed service providers, system administrators, IT experts, and others.
RealVNC is an enterprise-grade remote desktop access solution with tons of features, 250+ million downloads, 90+ thousand enterprise customers, 100+ major OEMs, and it is available for free private use.

TightVNC is a lightweight and efficient remote desktop software that utilizes the Virtual Network Computing (VNC) protocol. Renowned for its simplicity and reliability, TightVNC enables users to access and control their Linux, Windows, or macOS machines remotely.
It excels in providing a fast and responsive remote desktop experience, making it an ideal choice for users who prioritize performance. With support for various platforms and a focus on ease of use, TightVNC remains a popular choice for those seeking a straightforward solution for remote desktop access on their systems.

Remmina is a feature-rich remote desktop client for POSIX systems that enables users to remotely access other operating systems from Linux.
It was developed to serve system administrators as well as travellers, whether they’re working from small netbooks or large monitors. It supports several network protocols, including RDP, VNC, NX, SSH, EXEC, SPICE, and XDMCP.
Remmina also features an integrated and consistent UI and is free to use for both personal and commercial purposes.
Remmina stands out as a free, open-source remote desktop client designed for the GNOME desktop environment. Supporting various protocols like VNC, RDP, SSH, and others, Remmina offers a customizable and easy-to-use interface. Users can manage multiple remote connections simultaneously, making it an ideal choice for those dealing with diverse servers.

To install Remmina on Ubuntu, simply copy and paste the following commands on a terminal window.
$ sudo apt-add-repository ppa:remmina-ppa-team/remmina-next
$ sudo apt update
$ sudo apt install remmina remmina-plugin-rdp remmina-plugin-secret
To install Remmina from Debian Backports, simply copy and paste the following commands on a terminal window.
$ echo 'deb http://ftp.debian.org/debian stretch-backports main' | sudo tee --append /etc/apt/sources.list.d/stretch-backports.list >> /dev/null
$ sudo apt update
$ sudo apt install -t stretch-backports remmina remmina-plugin-rdp remmina-plugin-secret
On Fedora and CentOS, simply copy and paste the following commands on a terminal window.
--------- On Fedora -----------
# dnf copr enable hubbitus/remmina-next
# dnf upgrade --refresh 'remmina*' 'freerdp*'
--------- On CentOS -----------
# yum install epel-release
# yum install remmina*
With Chrome Remote Desktop, you can access a Chromebook or any other computer through the Google Chrome browser, a process unofficially referred to as Chromoting. It streams the desktop using VP8, which makes it responsive with good quality.
Chrome Remote Desktop is a free proprietary extension, but it doesn’t exactly replace TeamViewer because you can only use it for remote access. There are no meetings, file sharing, etc., so consider it if you’re on a budget or need only remote desktop access and control.

DWService is a lightweight, free, cross-platform, and open-source remote desktop access application with an emphasis on ease of use, security, and performance.
It can be installed on all popular desktop platforms or run completely from your web browser – all you will have to do is log in. Its features include support for terminal sessions, an inbuilt text editor, resource management, log watch, and file sharing.

TigerVNC, an open-source implementation of the Virtual Network Computing (VNC) protocol, prioritizes performance and efficiency. It excels in delivering a fast and reliable remote desktop experience, making it suitable for users who prioritize speed and responsiveness.
TigerVNC is compatible with Linux, Windows, and macOS, ensuring seamless connectivity across platforms.
TigerVNC has an almost uniform UI across platforms and is extensible with plugin extensions which can be used to add TLS encryption and advanced authentication methods, among other features.
It is important to note that TigerVNC isn’t a centralized service: you run and control the VNC server yourself. Also, unlike TeamViewer, it requires port forwarding.

TigerVNC is available to install from the default distribution repository on Ubuntu, Debian, Fedora, OpenSUSE, FreeBSD, Arch Linux, Red Hat Enterprise Linux, and SUSE Linux Enterprise.
X2Go is a free, open-source, and cross-platform remote desktop application that uses a modified NX 3 protocol and works excellently even over low-bandwidth connections.
You can use it to access any Linux GUI and that of a Windows system via a proxy. It also offers sound support, reconnecting to a session from another client, and file sharing.

Apache Guacamole is a free and open-source HTML5 web-based remote desktop gateway for accessing any computer from anywhere – all you need is an internet connection.
Apache Guacamole offers users the convenience of accessing both physical and cloud systems in a true cloud computing fashion.
It supports all the standard protocols, including RDP and VNC, can be used at the enterprise level, and does not require any plugins; administrators can monitor or kill connections in real time as well as manage user profiles.

RustDesk is a promising remote desktop application for Linux that provides a user-friendly interface, file transfer, multi-monitor support, and clipboard sharing, catering to diverse remote desktop needs.
With RustDesk’s focus on security and privacy, users can enjoy end-to-end encryption and the ability to host their own servers, ensuring data protection and control.

That wraps up our list of the best TeamViewer alternatives for remote desktop access on Linux in 2024. Which one have you chosen?
In this article, we are going to cover real-time Kubernetes interview questions and answers for freshers and experienced candidates, including scenario-based and troubleshooting questions.
Table of Contents:
1. What is Kubernetes?
Kubernetes is a leading open-source container orchestration engine. It automates cluster deployment, scaling, and management of containerized applications.
2. What is the difference between Docker Swarm and Kubernetes?

| Docker Swarm | Kubernetes |
|---|---|
| Native clustering for Docker containers | Full container orchestration platform |
| Setup is easy | Setup is harder |
| Applications are deployed as services | Applications are deployed using Pods, Deployments, and Services |
| Auto-scaling is not possible | Auto-scaling is possible |
| No built-in GUI dashboard | Built-in GUI dashboard |
| Relies on third-party tools (ELK Stack, Grafana, InfluxDB, etc.) for logging and monitoring | Offers built-in logging and monitoring |
3. What is kubeadm?
kubeadm is a command-line tool that helps with installing and configuring a Kubernetes cluster.
4. What are the kubeadm commands?

| Command | Purpose |
|---|---|
| kubeadm init | Run on the master node to initialize and configure it as the control plane. |
| kubeadm join | Run on a worker node to join it to the cluster. |
| kubeadm token | Used to generate and manage bootstrap tokens. |
| kubeadm version | Used to check the kubeadm version. |
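As a sketch, the typical bootstrap flow with these commands looks like the following; the pod network CIDR and the function names are illustrative assumptions, and the token values come from the `kubeadm init` output on your own cluster:

```shell
#!/bin/bash
# Hypothetical kubeadm bootstrap sketch. The commands are wrapped in
# functions so the flow can be read (and sourced) without a live cluster.

init_control_plane() {
  # Run on the master node; the CIDR below is an illustrative value
  # that must match your chosen CNI plugin's configuration.
  sudo kubeadm init --pod-network-cidr=10.244.0.0/16
}

join_worker() {
  # Run on each worker node; address, token, and CA hash are printed
  # by `kubeadm init` on the master.
  local master_addr="$1" token="$2" ca_hash="$3"
  sudo kubeadm join "$master_addr:6443" --token "$token" \
    --discovery-token-ca-cert-hash "$ca_hash"
}

print_join_command() {
  # Re-generate a join command later if the original token expired.
  sudo kubeadm token create --print-join-command
}
```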
5. What are the Kubernetes cluster components on the master node?
API Server, Scheduler, Controller Manager, etcd
6. What are the Kubernetes cluster components on a worker node?
kubelet, kube-proxy, Pods, container runtime
7. What is the API Server?
The API Server exposes the Kubernetes APIs. It is used to create, delete, and update any object inside the cluster via the kubectl command. API objects include Pods, containers, Deployments, Services, etc.
8. What is Scheduler ?
The Scheduler is responsible for placing pods onto nodes. When we submit a requirement to the API Server, the Scheduler assigns each pod to a suitable node accordingly.
9. What is the Controller Manager?
It is responsible for the overall health of the cluster, such as ensuring the expected number of nodes are up and running and that objects match their specifications.
10. What is etcd?
etcd is a lightweight, distributed key-value store that holds information such as the current state of the cluster.
11. What is a worker node in Kubernetes?
A worker node can be any physical server or virtual machine where containers are deployed; the container runtime can be Docker, rkt, etc.
12. What is the kubelet?
The kubelet is the primary agent that runs on each worker node. It ensures that containers are running in their pods.
13. What is kube-proxy?
kube-proxy is the core networking component of a Kubernetes cluster. It maintains the network rules on each node that allow communication across containers, pods, and nodes.
14. What is a Pod?
A Pod is the smallest scheduling unit in Kubernetes; it consists of one or more containers that are deployed together.
15. What are the different types of Services in Kubernetes?
Below are the different Service types in Kubernetes:
ClusterIP – exposes the Service on an internal IP reachable only within the cluster.
NodePort – exposes the Service on a static port of each node, making it reachable from outside the cluster.
LoadBalancer – provisions an external load balancer and assigns an external IP to the Service.
ExternalName – maps the Service to an external DNS name.
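As a sketch, each Service type can be created with kubectl; the deployment name `my-app`, the service name `my-db`, and the ports below are illustrative assumptions:

```shell
#!/bin/bash
# Sketch: creating each Service type with kubectl.
# "my-app", "my-db", and the ports are hypothetical; adjust to your cluster.

expose_clusterip() {    # internal-only virtual IP
  kubectl expose deployment my-app --port=80 --target-port=8080 --type=ClusterIP
}

expose_nodeport() {     # reachable on every node's IP at an allocated port
  kubectl expose deployment my-app --port=80 --type=NodePort
}

expose_loadbalancer() { # external load balancer + external IP (cloud providers)
  kubectl expose deployment my-app --port=80 --type=LoadBalancer
}

create_externalname() { # maps the Service to an external DNS name
  kubectl create service externalname my-db --external-name=db.example.com
}
```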
16. What is the difference between a Deployment and a Service in Kubernetes?
A Deployment is a Kubernetes object that creates and manages Pods through ReplicaSets, based on a Pod template.
A Service is responsible for allowing network access to a set of Pods.
17. What is the difference between a Pod and a Deployment in Kubernetes?
A Pod is the smallest scheduling unit in Kubernetes and consists of one or more containers.
A Deployment is a higher-level object that creates and manages Pods through ReplicaSets, based on a template.
Both are objects in the Kubernetes API.
18. What is the difference between a ConfigMap and a Secret in Kubernetes?
ConfigMaps store application configuration in plain text.
Secrets store sensitive data such as passwords; values are base64-encoded by default and can additionally be encrypted at rest.
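To make the distinction concrete, here is a sketch using kubectl; the object names and keys (`app-config`, `app-secret`, `DB_PASSWORD`, etc.) are illustrative assumptions:

```shell
#!/bin/bash
# Sketch: creating a ConfigMap vs. a Secret from literal values.
# Names and keys are hypothetical.

create_config() {
  kubectl create configmap app-config \
    --from-literal=LOG_LEVEL=info \
    --from-literal=APP_MODE=production
}

create_secret() {
  # Values are stored base64-encoded, not encrypted, by default.
  kubectl create secret generic app-secret \
    --from-literal=DB_PASSWORD='s3cr3t'
}

inspect_both() {
  kubectl get configmap app-config -o yaml   # values readable in plain text
  kubectl get secret app-secret -o yaml      # values shown base64-encoded
}
```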
19. What is a namespace in Kubernetes?
Namespaces let you logically organize objects in the cluster, such as Pods and Deployments. When you create a Kubernetes cluster, the default, kube-system, and kube-public namespaces are available automatically.
20. What is ingress in Kubernetes?
Ingress is a collection of routing rules that govern how external traffic reaches services running in a Kubernetes cluster.
21. What is a Namespace in Kubernetes/k8s?
It is a Kubernetes object used to create multiple virtual clusters within the same physical cluster.
Within each such virtual cluster, called a Kubernetes namespace, we can deploy Pods, Deployments, and Services.
22. What is the use of Namespaces in Kubernetes?
Suppose your project has Dev, QA, and Prod environments, and you want to separate each environment within the same cluster while deploying pods, deployments, and services to each.
In this scenario you can isolate these resources by creating Dev, QA, and Prod namespaces and creating the pods, deployments, and services inside each one.
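The Dev/QA/Prod scenario above can be sketched as follows; the namespace names, the deployment name `web`, and the image are illustrative assumptions:

```shell
#!/bin/bash
# Sketch: isolating Dev, QA, and Prod in one cluster with namespaces.
# Names and the image tag are hypothetical.

create_envs() {
  for ns in dev qa prod; do
    kubectl create namespace "$ns"
  done
}

deploy_to() {
  # Same workload, different namespace per environment.
  local ns="$1"
  kubectl create deployment web --image=nginx:1.25 -n "$ns"
  kubectl expose deployment web --port=80 -n "$ns"
}

# e.g. create_envs; deploy_to dev; deploy_to qa; deploy_to prod
```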
23. What is Ingress in Kubernetes?
Ingress is a Kubernetes object that allows external access to your Kubernetes Services.
Using Ingress we can expose pod ports such as 80 and 443 to the network outside the Kubernetes cluster, over the internet.
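A minimal sketch of creating an Ingress rule with kubectl follows; the host, the service name `web`, and the port are illustrative assumptions, and an ingress controller (e.g. ingress-nginx) must already be running in the cluster:

```shell
#!/bin/bash
# Sketch: routing external HTTP traffic for a hypothetical host to a
# Service named "web" on port 80. Requires a deployed ingress controller.

create_ingress() {
  kubectl create ingress web-ingress \
    --rule="app.example.com/*=web:80"
}

list_ingress() {
  # Verify the rule and the address assigned by the controller.
  kubectl get ingress
}
```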
24. What are the different types of Ingress controllers in Kubernetes?
Some of the most widely used Ingress controllers on Kubernetes clusters are the NGINX Ingress Controller, Traefik, HAProxy Ingress, Contour, and Istio Gateway.
25. What is Replication Controller in Kubernetes ?
A Replication Controller ensures that a specified number of pod replicas are running at any given time. In other words, a Replication Controller makes sure that a pod or a homogeneous set of pods is always up and available.
26. What is ReplicaSet’s in Kubernetes ?
A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods. ReplicaSets are also known as the next-generation Replication Controller.
A ReplicaSet also checks whether a target pod is already managed by another controller (such as a Deployment or another ReplicaSet).
27. What is the difference between Kubernetes Replication Controllers and ReplicaSets?
Replication Controllers and ReplicaSets do almost the same thing: both ensure that a specified number of pod replicas are running at any given time.
The difference lies in the selectors used to match pods: ReplicaSets support set-based selectors, which give more flexibility, while Replication Controllers only support equality-based selectors.
28. Why do we need replication in Kubernetes?
A container or pod may crash for multiple reasons. The main purposes of replication are reliability, load balancing, and scaling. It ensures that the pre-defined number of pods always exists.
To understand this more easily, let's take an example:
Assume we are running our application in a single pod. If for some reason the application crashes and the pod fails, users will no longer be able to access our application.
To prevent users from losing access, we would like more than one instance of the application running at the same time. That way, if one pod fails, the application is still running on another. The Replication Controller helps us run multiple instances of a single pod in the Kubernetes cluster, thus providing high availability.
Kubernetes Networking and Security Interview Questions and Answers.
How does Kubernetes handle networking for applications?
Kubernetes gives every pod its own IP address and assumes a flat network in which pods on any node can reach each other without NAT; the actual connectivity is implemented by CNI plugins such as Calico, Cilium, or Flannel. A service mesh (e.g., Istio) is an optional additional layer for handling communication between microservices, not the core networking mechanism.
What is a Kubernetes ingress controller?
An ingress controller is a component that handles incoming traffic to a Kubernetes cluster. It is responsible for routing traffic to the correct service based on the hostname or URL.
How does Kubernetes secure applications?
Kubernetes provides a number of security features, including network policies, pod security policies, and role-based access control (RBAC).
Advanced Kubernetes Interview Questions and Answers.
How does Kubernetes handle horizontal pod autoscaling (HPA)?
HPA is a controller that automatically scales the number of pods in a deployment based on CPU or memory usage.
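As a sketch, HPA can be enabled for a deployment with kubectl autoscale; the deployment name and the thresholds are illustrative assumptions, and metrics-server must be installed for CPU metrics to be available:

```shell
#!/bin/bash
# Sketch: horizontal pod autoscaling for a hypothetical deployment.
# Thresholds are illustrative; requires metrics-server in the cluster.

enable_hpa() {
  # Scale between 2 and 10 replicas, targeting 70% average CPU utilization.
  kubectl autoscale deployment app-deployment \
    --min=2 --max=10 --cpu-percent=70
}

watch_hpa() {
  # Observe current/target utilization and replica count over time.
  kubectl get hpa app-deployment --watch
}
```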
What are the different ways to manage persistent storage in Kubernetes?
Kubernetes supports a number of different ways to manage persistent storage, including using PersistentVolumes (PVs), PersistentVolumeClaims (PVCs), and CSI drivers.
How does Kubernetes handle log collection and monitoring?
Kubernetes integrates with a number of tools for log collection and monitoring, including the Fluentd logging agent and metrics-server (which replaced the now-deprecated Heapster).
What is the difference between kubectl describe, kubectl get, and kubectl explain?
1. The kubectl describe command is used to display detailed information about specific Kubernetes resources.
eg. kubectl describe pod my-pod -n my-namespace
2. The kubectl get command is used to retrieve a list of Kubernetes resources of a particular type in the cluster. It provides a view of the current state of multiple resources.
eg. kubectl get pods -n my-namespace
3. The kubectl explain command is used to retrieve detailed information about the structure and properties of Kubernetes resources.
eg. kubectl explain pod
Difference between Cilium and Calico network plugin?
Cilium and Calico are both popular networking solutions used in Kubernetes environments,
but they have some different features and focuses which might make one more suitable than the other depending on the specific needs of a deployment.
Cilium:
1. BPF-based Networking:
Cilium utilizes eBPF (extended Berkeley Packet Filter), a powerful Linux kernel technology, to provide highly efficient network and security capabilities.
eBPF allows Cilium to perform networking, security, and load balancing functionalities directly in the Linux kernel without requiring traditional kernel modules or network proxies.
2. Security:
Cilium is highly focused on security. It offers network policies for container-based environments, API-aware network security, and support for transparent encryption.
3. Scalability and Performance:
Thanks to eBPF, Cilium is known for high performance and scalability, particularly in environments with high throughput and low latency requirements.
4. Service Mesh Integration:
Cilium integrates well with service mesh technologies like Istio, providing efficient load balancing and networking capabilities.
Calico:
1. Flexibility in Data Planes:
Calico provides options to use either standard Linux networking and routing capabilities or eBPF for more advanced scenarios.
This flexibility can be useful in different deployment environments.
2. Network Policy Enforcement:
Calico is well-known for its robust implementation of Kubernetes network policies, offering fine-grained control over network communication.
3. Cross-Platform Support:
Calico supports a wide range of platforms and environments, including Kubernetes, OpenShift, Docker EE, OpenStack, and bare-metal services.
4. Performance:
While Calico can use eBPF for high performance, its standard mode using IP routing and iptables is also very efficient and scalable.
Choosing Between Cilium and Calico:
If your primary focus is on advanced networking capabilities, leveraging the latest kernel technologies for performance, and tight integration with service meshes, Cilium is a strong choice.
If you need a flexible, platform-agnostic solution that offers robust network policy enforcement and can operate in a wide variety of environments, Calico might be more suitable.
Ultimately, the choice between Cilium and Calico will depend on the specific requirements of your infrastructure, such as performance needs, security requirements, existing technology stack, and your team’s familiarity with these tools.
What are the different storage options available in Kubernetes?
Answer:
• EmptyDir
-> created when the Pod is assigned to a node
-> supports RAM- and disk-based mounting options
-> the volume is initially empty
• Local
-> represents a mounted local storage device
-> can only be used as a statically created PV
-> dynamic provisioning is not supported
-> you must set a PV nodeAffinity
• HostPath
-> mounts a file or directory from the host node's filesystem into the Pod
-> presents many security risks; avoid it
-> mostly useful for static Pods (static Pods cannot access ConfigMaps)
• PVC
-> expanding a PVC is enabled by default
-> used to mount a PersistentVolume
-> a PV and PVC can be pre-bound
• Secret
-> secret volumes are backed by tmpfs (a RAM-backed filesystem), so they are never written to non-volatile storage
-> a Secret is always mounted read-only
• ConfigMap
-> provides a way to inject configuration data into pods
-> you must create the ConfigMap before you can use it
-> a ConfigMap is always mounted read-only
• CSI
-> defines a standard interface for container orchestration storage
-> a CSI-compatible volume driver needs to be deployed
-> the most widely used option
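To make the most common of these concrete, here is a sketch that applies a pod with an emptyDir volume via a heredoc; the pod name, image, and mount path are illustrative assumptions:

```shell
#!/bin/bash
# Sketch: a hypothetical pod using a RAM-backed emptyDir volume,
# applied through kubectl with an inline manifest.

apply_emptydir_pod() {
  kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: cache-pod
spec:
  containers:
  - name: app
    image: nginx:1.25
    volumeMounts:
    - name: cache
      mountPath: /cache
  volumes:
  - name: cache
    emptyDir:
      medium: Memory    # RAM-backed (tmpfs); omit for disk-backed storage
EOF
}
```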
Kubernetes Pod Troubleshooting Interview Questions and Answers:
1. Kubernetes Pod OOM (Out of Memory) Errors – Pod exceeds memory limits.
– Resolution: Analyze resource usage: `kubectl top pod <pod-name>`. Adjust memory requests/limits in the pod spec.
2. Kubernetes Pod High CPU Usage – Pod consumes excessive CPU.
Resolution: Monitor CPU utilization: `kubectl top pod <pod-name>`. Optimize application performance or scale horizontally.
3. Kubernetes Pods Stuck in Pending State – Insufficient resources or scheduling issues.
– Resolution: Increase cluster capacity or adjust pod requests/limits. Review node conditions: `kubectl describe node`.
4. Kubernetes Pod Network Connectivity Issues – Pod unable to communicate with external resources.
– Resolution: Diagnose network configurations: `kubectl describe pod <pod-name>`. Check network policies and firewall rules.
5. Kubernetes Pod Storage Volume Errors – Failure in accessing or mounting volumes.
– Resolution: Verify volume configurations: `kubectl describe pod <pod-name>`. Check storage class availability and permissions.
6. Kubernetes Pod Crashes and Restarts – Application errors or resource constraints.
– Resolution: Review pod logs: `kubectl logs <pod-name>`. Address application bugs or adjust resource allocations.
7. Kubernetes pod Failed Liveness or Readiness Probes – Pod fails health checks, affecting availability.
– Resolution: Inspect probe configurations: `kubectl describe pod <pod-name>`. Adjust probe settings or application endpoints.
8. Kubernetes Pod Eviction due to Resource Pressure – Cluster resource scarcity triggers pod eviction.
– Resolution: Monitor cluster resource usage: `kubectl top nodes`. Scale resources or optimize pod configurations.
9. Docker Image Pull Failures – Issues fetching container images from the registry.
– Resolution: Verify image availability and credentials. Troubleshoot network connectivity with the registry.
10. Kubernetes Pod Security Policy Violations – Pods violate cluster security policies.
– Resolution: Review pod security policies: `kubectl describe pod <pod-name>`. Adjust pod configurations to comply with policies.
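The individual commands above can be combined into a first-pass triage routine; the function name and the flags shown are a sketch, not a prescribed procedure:

```shell
#!/bin/bash
# Sketch: first-pass triage for a misbehaving pod, combining the
# kubectl commands referenced in the troubleshooting list above.

triage_pod() {
  local pod="$1" ns="${2:-default}"
  kubectl get pod "$pod" -n "$ns" -o wide             # status at a glance
  kubectl describe pod "$pod" -n "$ns" | tail -n 20   # recent events
  kubectl logs "$pod" -n "$ns" --previous 2>/dev/null # logs from the last crash, if any
  kubectl top pod "$pod" -n "$ns" 2>/dev/null         # needs metrics-server
}

# e.g. triage_pod my-pod my-namespace
```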
Scenario Based Kubernetes Interview Questions and Answers:
Scenario 1: Troubleshooting a deployment
You have deployed a new application to your Kubernetes cluster, but it is not working as expected. How would you troubleshoot the issue?
Answer:
Use the kubectl get pods command to check the status of the pods in the deployment. Make sure that all of the pods are running and that they are in a healthy state.
Use the kubectl logs command to view the logs of a pod.
Use the kubectl exec command to run a command inside of a container.
Scenario 2: Scaling an application
Your application is experiencing a surge of traffic and you need to scale it up quickly. How would you do this?
Answer:
Edit the deployment's .yaml manifest to increase the number of replicas and re-apply it, or use the kubectl scale command.
Scenario 3: Handling a node failure
One of the nodes in your Kubernetes cluster has failed. How would you recover from this?
Answer:
Kubernetes automatically reschedules pods from the failed node onto healthy nodes, provided the workloads are managed by a controller such as a Deployment. Check node status with kubectl get nodes, drain and remove the failed node (kubectl drain, then kubectl delete node), and add a replacement node to the cluster.
Scenario 4: Scaling Applications
Question: How would you scale a Kubernetes deployment when you observe an increase in traffic to your application?
Answer: You can scale a deployment using the kubectl scale command. For example, to scale a deployment named “app-deployment” to three replicas, you would use:
bash:
kubectl scale --replicas=3 deployment/app-deployment
This will ensure that three pods are running to handle increased traffic.
Scenario 5: Rolling Updates
Question: Describe the process of performing a rolling update for a Kubernetes deployment.
Answer: To perform a rolling update, you can use the kubectl set image command. For instance, to update the image of a deployment named “app-deployment” to a new version, you would use:
bash
kubectl set image deployment/app-deployment container-name=new-image:tag
Kubernetes will gradually replace the old pods with new ones, ensuring zero downtime during the update.
Scenario 6: Troubleshooting Pods
Question: A pod is not running as expected. How would you troubleshoot and identify the issue?
Answer: First, use kubectl get pods to check the status of the pod. Then, use kubectl describe pod <pod-name> to get detailed information, including events and container statuses. Inspecting the pod’s logs using kubectl logs <pod-name> for each container can provide insights into issues. Additionally, using kubectl exec -it <pod-name> -- /bin/sh allows you to access the pod’s shell for further debugging.
Scenario 7: Persistent Volumes
Question: Explain how you would manage persistent storage in Kubernetes.
Answer: Persistent Volumes (PVs) and Persistent Volume Claims (PVCs) are used for storage. A PV represents a physical storage resource, and a PVC is a request for storage by a pod. Admins create PVs, and users claim storage by creating PVCs. Pods reference PVCs. Storage classes define the type and characteristics of the storage. The YAML files for PVs, PVCs, and the deployment or pod need to be configured accordingly.
Scenario 8: Service Discovery
Question: How does service discovery work in Kubernetes, and how can services communicate with each other?
Answer: Kubernetes uses DNS for service discovery. Each service gets a DNS entry formatted as <service-name>.<namespace>.svc.cluster.local. Pods within the same namespace can communicate using this DNS. To enable communication between services in different namespaces, use the full DNS name, including the namespace. Kubernetes Services abstract the underlying pods, providing a stable endpoint for communication.
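The DNS-based discovery described above can be verified from inside the cluster by resolving a Service's name from a throwaway pod; the service and namespace names are illustrative assumptions:

```shell
#!/bin/bash
# Sketch: checking in-cluster service discovery by resolving a Service's
# DNS name from a temporary busybox pod (deleted when the command exits).

check_dns() {
  local svc="$1" ns="$2"
  kubectl run dns-test --rm -it --restart=Never --image=busybox:1.36 -- \
    nslookup "$svc.$ns.svc.cluster.local"
}

# e.g. check_dns web dev  ->  should resolve to the Service's ClusterIP
```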
Scenario 9: Deploying StatefulSets
Question: Explain when you would use a StatefulSet instead of a Deployment, and how does it handle pod identity?
Answer: Use StatefulSets for stateful applications like databases, where each pod needs a unique identity and stable network identity. StatefulSets provide guarantees about the ordering and uniqueness of pods. Pods in a StatefulSet get a unique and stable hostname (e.g., <pod-name>-0, <pod-name>-1). This is crucial for applications requiring persistent storage and where the order of deployment and scaling matters.
Scenario 10: ConfigMaps and Secrets
Question: How do you manage configuration data and sensitive information in Kubernetes?
Answer: ConfigMaps are used to manage configuration data, while Secrets are used for sensitive information. ConfigMaps can be created from literal values or configuration files and mounted into pods as volumes or environment variables. Secrets store sensitive information and are mounted similarly. Ensure that access to Secrets is properly restricted, and consider using tools like Helm for managing and templating configuration.
Conclusion:
We have covered Kubernetes interview questions and answers for freshers and experienced candidates. If you need any support, please comment.

Before you can use an SD card or USB drive, it needs to be formatted and partitioned. Most USB drives and SD cards come preformatted with the FAT file system and can be used out of the box. However, in some cases, you may need to format the drive.
In Linux, you can use a graphical tool like GParted or command-line tools such as fdisk or parted to format the drive and create the required partitions.
This article explains how to format a USB Drive or SD Card on Linux using the parted utility.
It's important to note that formatting is a destructive process, and it will erase all the existing data. If you have data on the USB drive or the SD card, make sure you back it up.
GNU Parted is a tool for creating and managing partition tables. The parted package is pre-installed on most Linux distros nowadays. You can check whether it is installed on your system by typing:
$ parted --version
parted (GNU parted) 3.2
Copyright (C) 2014 Free Software Foundation, Inc.
...
If parted is not installed on your system, you can install it using your distribution package manager.
To install parted on Ubuntu and Debian:
$ sudo apt update
$ sudo apt install parted
To install parted on CentOS and Fedora:
$ sudo yum install parted
Insert the USB flash drive or SD card into your Linux machine and find the device name using the lsblk command:
$ lsblk
The command will print a list of all available block devices:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT ...
sdb 8:16 1 14.4G 0 disk
└─sdb1 8:17 1 1.8G 0 part /media/data ...
In the example above, the name of the SD device is /dev/sdb, but this may vary on your system.
You can also use the dmesg command to find the device name:
$ sudo dmesg
Once you attach the device, dmesg will show the device name:
... [ +0.000232] sd 1:0:0:0: [sdb] 30218842 512-byte logical blocks: (15.5 GB/14.4 GiB) ...
Before formatting the drive, you can securely wipe out all the data on it by overwriting the entire drive with random data. This ensures that the data cannot be recovered by any data recovery tool.
You need to completely wipe the data only if the device is going to be given away. Otherwise, you can skip this step.
Be very careful before running the following command and irrevocably erase the drive data. The of=... part of the dd command must point to the target drive:
$ sudo dd if=/dev/zero of=/dev/sdb bs=4096 status=progress
Depending on the size of the drive, the process will take some time to complete.
Once the disk is erased, the dd command will print “No space left on device”:
15455776768 bytes (15 GB, 14 GiB) copied, 780 s, 19.8 MB/s
dd: error writing '/dev/sdb': No space left on device
3777356+0 records in
3777355+0 records out
15472047104 bytes (15 GB, 14 GiB) copied, 802.296 s, 19.3 MB/s
Creating a Partition and Formatting
The most common file systems are exFAT and NTFS on Windows, EXT4 on Linux, and FAT32, which can be used on all operating systems.
We will show you how to format your USB drive or SD card to FAT32 or EXT4. Use EXT4 if you intend to use the drive only on Linux systems, otherwise format it with FAT32. A single partition is sufficient for most use cases.
Format with FAT32
First, create the partition table by running the following command:
$ sudo parted /dev/sdb --script -- mklabel msdos
Create a Fat32 partition that takes the whole space:
$ sudo parted /dev/sdb --script -- mkpart primary fat32 1MiB 100%
Format the boot partition to FAT32:
$ sudo mkfs.vfat -F32 /dev/sdb1
mkfs.fat 4.1 (2017-01-24)
Once done, use the command below to print the partition table and verify that everything is set up correctly:
$ sudo parted /dev/sdb --script print
The output should look something like this:
Model: Kingston DataTraveler 3.0 (scsi)
Disk /dev/sdb: 15.5GB
Sector size (logical/physical): 512B/512B
Partition Table: msdos
Disk Flags:

Number  Start   End     Size    Type     File system  Flags
 1      1049kB  15.5GB  15.5GB  primary  fat32        lba
That’s all! You have formatted your device.
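Once the device is formatted, you can verify it works by mounting it and writing a test file; `/dev/sdb1` matches the example above, and the mount point is an arbitrary choice:

```shell
#!/bin/bash
# Verify a freshly formatted partition by mounting it and writing to it.
# /dev/sdb1 and /mnt/usb match this article's example; adjust for your system.

mount_and_check() {
  local dev="${1:-/dev/sdb1}" mnt="${2:-/mnt/usb}"
  sudo mkdir -p "$mnt"
  sudo mount "$dev" "$mnt"
  df -h "$mnt"                       # confirm the size and filesystem type
  sudo touch "$mnt/test-file"        # confirm the filesystem is writable
  sudo umount "$mnt"                 # always unmount before unplugging
}
```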
Format with EXT4
First, create a GPT partition table by issuing:
$ sudo parted /dev/sdb --script -- mklabel gpt
Run the following command to create an EXT4 partition that takes the whole space:
$ sudo parted /dev/sdb --script -- mkpart primary ext4 0% 100%
Format the partition to ext4:
$ sudo mkfs.ext4 -F /dev/sdb1
mke2fs 1.44.1 (24-Mar-2018)
/dev/sdb1 contains a vfat file system
Creating filesystem with 3777024 4k blocks and 944704 inodes
Filesystem UUID: 72231e0b-ddef-44c9-a35b-20e2fb655b1c
Superblock backups stored on blocks:
32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208
Allocating group tables: done
Writing inode tables: done
Creating journal (16384 blocks): done
Writing superblocks and filesystem accounting information: done
Verify it by printing the partition table:
$ sudo parted /dev/sdb --script print
The output should look something like this:
Model: Kingston DataTraveler 3.0 (scsi)
Disk /dev/sdb: 15.5GB
Sector size (logical/physical): 512B/512B
Partition Table: gpt
Disk Flags:

Number  Start   End     Size    File system  Name     Flags
 1      1049kB  15.5GB  15.5GB  ext4         primary
Formatting a USB drive or SD card on Linux is a pretty straightforward process. All you need to do is insert the drive, create a partition table, and format it with FAT32 or your preferred file system.