Microsoft repo secretly installed on all Raspberry Pi’s Linux OS

The Raspberry Pi is a small, useful computer for learning programming and building projects. It ships with a Debian-based operating system called Raspberry Pi OS (formerly Raspbian), the most widely installed OS on the RPi. In a recent update, Raspberry Pi OS added a Microsoft apt repository to all machines running it, without the user's or admin's knowledge. With this repo in place, every Raspbian device pings a Microsoft server each time it checks for updates. Microsoft telemetry has a bad reputation in the Linux community. Let us see why and how this matters to Linux users.


Let us find out what this repo contains. First, log in to the Raspberry Pi:
ssh pi@192.168.2.180
Here is how we can confirm the repo is present:

lsb_release -a
ls -l /etc/apt/sources.list.d/
ls -l /etc/apt/trusted.gpg.d/
cat /etc/apt/sources.list.d/vscode.list
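For reference, on affected images the vscode.list file typically contains a single deb line pointing at Microsoft's code repository; the exact contents may vary by release, so treat this as illustrative:

```
deb [arch=amd64,arm64,armhf] http://packages.microsoft.com/repos/code stable main
```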

Let us see what the Microsoft repo, secretly installed on the Raspberry Pi without your knowledge, contains:

curl -s http://packages.microsoft.com/repos/code/dists/stable/main/binary-arm64/Packages \
| grep "^Package: " \
| cut -d" " -f2 \
| sort -u

It turns out the repo contains the VS Code IDE for the Raspberry Pi. Keep in mind that my machine is a server running the Lite image; there is no need to install this on my old RPi 2. Naturally, the change made many Linux users unhappy. To make matters worse, admins on the official Raspberry Pi forums quickly locked and deleted the topic threads, dismissing them as “Microsoft bashing.”

Why is this bad news?

It seems the RPi Foundation officially recommends the MS IDE, and hence it was included in Raspberry Pi OS. They should have kept this to the GUI image for kids or anyone who wishes to learn Python and other topics using VS Code. Most Linux geeks and power users run the RPi headless, as a git server, ad blocker, and so on. The main issue is trust: an unwanted software repo and GPG keys were configured and installed secretly. Other problems Linux users may face:

  1. With the MS repo forced onto my RPi 2, MS controls the software I install. For example, when I run `apt install app`, I may get a package distributed and modified by MS. Maybe they will never do anything evil, but I want nothing to do with them.
  2. Hardcore Linux users like me (or anyone who works in infosec/IT) will never trust Microsoft, or a Raspberry Pi OS that installs such a repo secretly.
  3. Microsoft may collect more information about RPi and Linux users, such as your IP address, and build a profile about you, even as many of us try to reduce our digital footprint.
  4. Every apt-get update command pings back to the MS repo.
  5. If you or any family member is logged into the MS ecosystem (GitHub, Bing, Office/Live), they could identify and track you when you use the same shared public IP at home.

If you are okay with this, then stop reading and go back to your life; nothing is wrong with that. But if you are not okay with such a change, here are some options for you.

1. Stop using Raspbian

This is the best possible solution. I will probably switch to plain Debian for my RPi 2; other operating systems are available for the Pi as well.

2. Block Microsoft VSCode if you still want to use Raspbian OS

Edit your /etc/hosts on the RPi (or add that domain to your Pi-hole):
sudo vim /etc/hosts
Add the following line:
0.0.0.0 packages.microsoft.com
Save and close the file in vim. Then put the Debian package on hold so that it will not install further updates:
sudo apt-mark hold raspberrypi-sys-mods
Delete Microsoft’s GPG key using the rm command:
sudo rm -vf /etc/apt/trusted.gpg.d/microsoft.gpg
Make sure new keys cannot be installed:
sudo touch /etc/apt/trusted.gpg.d/microsoft.gpg
Next, write protect that file on Linux using the chattr command:
sudo chattr +i /etc/apt/trusted.gpg.d/microsoft.gpg
lsattr /etc/apt/trusted.gpg.d/microsoft.gpg

3. Use VSCode safely, especially when your kids are using it

VSCode itself has telemetry too, so use a build of VSCode with the telemetry removed:

vscodium

Free/Libre open source software binaries of VSCode with all telemetry removed

Someone notified me about vscodium-deb-rpm-repo.

Summing up

Truth be told, the RPi is not 100% open source. Like Intel and AMD CPUs/GPUs, it ships with closed-source binary firmware. However, that does not justify installing an unwanted software repo and GPG keys secretly on your device without your knowledge. That is what malware does, and hence the Linux and open-source communities are upset. I hope they will fix it. Check out the Reddit thread for many more suggestions. The RPi OS maintainers should have published a blog post about such a notable change; doing it without informing RPi users is not great. What do you think? Let us know in the comment section below.

Source

What does “git merge --abort” do? – Linux Hint

When it comes to version control systems, Git is always at the top of the list. Because it is accepted by users from many backgrounds, there are endless discussions about the features it offers, the issues that arise while using it, and their possible solutions. One very commonly used operation in Git is “git merge --abort”, and today we will try to answer what the “git merge --abort” operation actually does.

Purpose of the “git merge --abort” Operation:

Before understanding the usage of the “git merge --abort” operation, we must realize why we need such an operation in the first place. As you know, Git maintains a history of all the different versions of a file or code base; each version you create is known as a Git commit. There is also a dedicated current commit, i.e., the version of the file that you are currently working on. At times, you might need to merge a previously committed version of a file with the one you are currently working on.

However, during this merging process, a colleague may also be working on the same file. They might discard the changes you have kept or modify the lines you have just added. This scenario can lead to a merge conflict in Git. Once a merge conflict arises and you check the status of Git, it will display a message that a merge conflict has occurred. You will not be able to do anything with that particular file until you manage to fix the conflict.

This is where the “git merge --abort” operation comes into play. Basically, you want to go back to the old state, where your current version of the file is unchanged and you can start making your changes all over again, ensuring that no such conflict arises. So the “git merge --abort” operation essentially terminates the merge you have just attempted and separates the two versions of your file, i.e., the current version and the older one.

In this way, the current version of your file reverts to the state it was in before you performed the merge operation, so you can restore it without any difficulty. An important point to note, however, is that the “git merge --abort” operation only works if you have just merged your files and have not committed them yet. If you have already committed the merge, then “git merge --abort” will no longer serve the purpose; rather, you will have to look for other ways to undo the merge.
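To see this behavior concretely, here is a minimal sketch that manufactures a merge conflict in a throwaway repository and then aborts the merge. All file names, branch names, and commit messages are made up for illustration:

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"
git init -q demo && cd demo
git config user.email demo@example.com
git config user.name demo
echo "original line" > notes.txt
git add notes.txt && git commit -qm "initial commit"
base=$(git symbolic-ref --short HEAD)   # default branch name varies (main/master)
git checkout -qb feature
echo "feature edit" > notes.txt
git commit -qam "edit on feature"
git checkout -q "$base"
echo "base edit" > notes.txt
git commit -qam "edit on base branch"
git merge feature || true               # both branches changed notes.txt: conflict
git status --short                      # shows the conflicted file as unmerged (UU)
git merge --abort                       # terminate the merge, restore pre-merge state
cat notes.txt                           # back to "base edit"
```

After the abort, the working tree and index are exactly as they were before the merge attempt, so you can resolve the situation differently and try again.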

Conclusion:

From the discussion above, you can easily see the purpose of the “git merge --abort” operation. It resolves merge conflicts that arise before committing a merge and restores your files to the state they were in before. In this way, no data is lost, and you can conveniently start working all over again.

Source

AWS Cost Anomaly Detection is now generally available

Posted On: Dec 16, 2020

AWS Cost Anomaly Detection is a free service that monitors your spending patterns to detect anomalous spend and provide root-cause analysis. It helps customers minimize cost surprises and enhance cost controls.

Backed by advanced machine learning technology, AWS Cost Anomaly Detection is able to identify gradual spend increases and/or one-time cost spikes. With three simple steps, you can create your own cost monitors and alert subscriptions. Based on your business needs, you can create multiple alert subscriptions for the same cost monitor and/or attach multiple cost monitors to one alert subscription.

With each anomaly detection, we also provide customized root cause analysis so you can quickly investigate and address the cost drivers accordingly. You can provide feedback by submitting assessments to improve future anomaly detection. As part of the AWS Cost Management suite, AWS Cost Anomaly Detection is integrated with AWS Cost Explorer so you can further visualize and analyze your cost and usage as needed.

Source

How to encrypt a single Linux filesystem

Sure, you can manually encrypt a filesystem. But, you can also automate it with Ansible.


There are a few reasons you might want to encrypt a filesystem, such as protecting sensitive information while it’s at rest or avoiding the need to encrypt individual files on the filesystem one by one. To manually encrypt a filesystem in Red Hat Enterprise Linux (RHEL), you can use the cryptsetup command. This article will walk you through how to use Ansible to do this for you on a RHEL 8 server.

Before we dive into using Ansible to automate that process, let’s first go through the steps to manually create the encrypted filesystem so that we better understand what we’re asking Ansible to do. There are native commands in RHEL that enable you to create an encrypted filesystem, and we’ll use those in our walkthrough.

Manually create an encrypted partition

To start with, we’ll look at the device on which I’ll put the partition:

[root@ansibleclient ~]# fdisk /dev/vdc

Welcome to fdisk (util-linux 2.32.1).
Changes will remain only in memory until you decide to write them.
Be careful before using the write command.


Command (m for help): p
Disk /dev/vdc: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x803e8b19

Device     Boot Start     End Sectors Size Id Type
/dev/vdc1        2048 6291455 6289408   3G 83 Linux

Command (m for help):

We can see that my /dev/vdc already has a partition on it, but there is still space available for another partition. I’ll create my /dev/vdc2 partition:

Command (m for help): n
Partition type
   p   primary (1 primary, 0 extended, 3 free)
   e   extended (container for logical partitions)
Select (default p):

Using default response p.
Partition number (2-4, default 2):
First sector (6291456-62914559, default 6291456):
Last sector, +sectors or +size{K,M,G,T,P} (6291456-62914559, default 62914559): +7G

Created a new partition 2 of type 'Linux' and of size 7 GiB.

Command (m for help): p
Disk /dev/vdc: 30 GiB, 32212254720 bytes, 62914560 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x803e8b19

Device     Boot   Start      End  Sectors Size Id Type
/dev/vdc1          2048  6291455  6289408   3G 83 Linux
/dev/vdc2       6291456 20971519 14680064   7G 83 Linux

Command (m for help): w
The partition table has been altered.
Syncing disks.

[root@ansibleclient ~]# partprobe /dev/vdc
[root@ansibleclient ~]#

I now have a partition /dev/vdc2 of size 7G. Next, I format that partition for LUKS:

[root@ansibleclient ~]# cryptsetup luksFormat /dev/vdc2

WARNING!
========
This will overwrite data on /dev/vdc2 irrevocably.

Are you sure? (Type uppercase yes): YES
Enter passphrase for /dev/vdc2:
Verify passphrase:
[root@ansibleclient ~]#

To open the encrypted volume, I use the luksOpen argument for cryptsetup, and I tell it the name I want my target to be manualluks:

[root@ansibleclient ~]# cryptsetup luksOpen /dev/vdc2 manualluks
Enter passphrase for /dev/vdc2:
[root@ansibleclient ~]# ls /dev/mapper/
control  examplevg-examplelv  manualluks  mycrypt  rhel-root  rhel-swap
[root@ansibleclient ~]#

After it’s been opened, I can actually put it to use. In this example, I’ll put a volume group there:

[root@ansibleclient ~]# vgcreate manual_luks_vg /dev/mapper/manualluks
  Physical volume "/dev/mapper/manualluks" successfully created.
  Volume group "manual_luks_vg" successfully created
[root@ansibleclient ~]# vgdisplay manual_luks_vg
  --- Volume group ---
  VG Name               manual_luks_vg
  System ID            
  Format                lvm2
  Metadata Areas        1
  Metadata Sequence No  1
  VG Access             read/write
  VG Status             resizable
  MAX LV                0
  Cur LV                0
  Open LV               0
  Max PV                0
  Cur PV                1
  Act PV                1
  VG Size               6.98 GiB
  PE Size               4.00 MiB
  Total PE              1787
  Alloc PE / Size       0 / 0   
  Free  PE / Size       1787 / 6.98 GiB
  VG UUID               bjZ7FM-9jNw-pdfs-Dd5y-5IsF-tEdK-CpVqH4
   
[root@ansibleclient ~]#

I have a volume group, manual_luks_vg, so I’m now able to put a logical volume inside:

[root@ansibleclient ~]# lvcreate -n manual_luks_logvol -L +5G manual_luks_vg
  Logical volume "manual_luks_logvol" created.
[root@ansibleclient ~]# lvdisplay manual_luks_vg
  --- Logical volume ---
  LV Path                /dev/manual_luks_vg/manual_luks_logvol
  LV Name                manual_luks_logvol
  VG Name                manual_luks_vg
  LV UUID                nR5UKo-jRvR-97L0-60YF-dbSp-D0pc-l8W3Td
  LV Write Access        read/write
  LV Creation host, time ansibleclient.usersys.redhat.com, 2020-12-03 10:15:03 -0500
  LV Status              available
  # open                 0
  LV Size                5.00 GiB
  Current LE             1280
  Segments               1
  Allocation             inherit
  Read ahead sectors     auto
  - currently set to     8192
  Block device           253:5
   
[root@ansibleclient ~]#

The lvcreate command specified the name for my new logical volume, manual_luks_logvol, its size, 5G, and that the logical volume should be in the volume group of manual_luks_vg.

At this point, I have a logical volume, but I haven’t formatted it yet for ext or xfs. Typing mkfs and then hitting Tab shows me that there are a number of options for me to format this partition:

# mkfs
mkfs         mkfs.cramfs  mkfs.ext2    mkfs.ext3    mkfs.ext4    mkfs.minix   mkfs.xfs

Here, I’ll use mkfs.xfs:

[root@ansibleclient ~]# mkfs.xfs /dev/manual_luks_vg/manual_luks_logvol
meta-data=/dev/manual_luks_vg/manual_luks_logvol isize=512    agcount=4, agsize=327680 blks
         =                       sectsz=512   attr=2, projid32bit=1
         =                       crc=1        finobt=1, sparse=1, rmapbt=0
         =                       reflink=1
data     =                       bsize=4096   blocks=1310720, imaxpct=25
         =                       sunit=0      swidth=0 blks
naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
log      =internal log           bsize=4096   blocks=2560, version=2
         =                       sectsz=512   sunit=0 blks, lazy-count=1
realtime =none                   extsz=4096   blocks=0, rtextents=0

I have it formatted, but not mounted. To mount it, I’ll create a new directory and then run the mount command:

[root@ansibleclient ~]# mkdir /manual_luks
[root@ansibleclient ~]# mount /dev/manual_luks_vg/manual_luks_logvol /manual_luks

To verify that worked, I can use mount by itself and then write to a new file there:

[root@ansibleclient ~]# mount | grep luks
/dev/mapper/manual_luks_vg-manual_luks_logvol on /manual_luks type xfs (rw,relatime,seclabel,attr2,inode64,noquota)
[root@ansibleclient ~]# date > /manual_luks/testing
[root@ansibleclient ~]# cat /manual_luks/testing
Thu Dec  3 10:24:42 EST 2020
[root@ansibleclient ~]#

To enable the system to mount the encrypted partition at boot, I need to update my /etc/crypttab file. The format for the file is the name of your luks device, the physical partition, and then the file whose only contents are the password for that luks device:

# cat /etc/crypttab
manualluks /dev/vdc2 /root/manualluks.txt

In /root/manualluks.txt, I have just the plaintext passphrase for my LUKS device.
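Since that file holds the passphrase in plain text, it should be readable by root only. Here is a small sketch of creating such a key file with restrictive permissions, using the current directory and a made-up passphrase for illustration instead of /root/manualluks.txt:

```shell
umask 077                                           # new files get no group/world access
printf '%s' 'MySecretPassphrase' > manualluks.txt   # illustrative passphrase, no trailing newline
chmod 600 manualluks.txt                            # owner read/write only
stat -c '%a' manualluks.txt                         # prints 600
```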

I use the luksAddKey argument to add the key to the device:

# cryptsetup luksAddKey /dev/vdc2 /root/manualluks.txt

To mount the filesystem at boot time, edit the /etc/fstab file so there is an entry for the logical volume and its mount point:

/dev/manual_luks_vg/manual_luks_logvol /manual_luks xfs defaults 0 0

After you’ve done the manual steps for creating the partition and writing to it, give the system a reboot to verify that the settings are persistent and the system reboots as expected.

Now that we understand what we need to do to manually create an encrypted partition, we know what we need to do to automate that process.

Automate the creation of an encrypted partition

The playbook hosted at https://people.redhat.com/pgervase/sysadmin/partition.yml gives one example of how to use Ansible to take a blank disk, create an encrypted partition on it, mount it, and then write to it. Like so many things in technology, there are several ways to accomplish this, but this approach also shows examples of variables, gathering facts, and using a block and rescue.

---
- name: pb to create partition
  hosts: all
  become: true
  vars:
    target_size: 3GiB
    target_device: /dev/vdc
    myvg: examplevg
    mylv: examplelv
    keyfile: /root/mylukskey.yml
    mycrypt: mycrypt

At the top of the playbook, I place some basic information and declare a few variables. Rather than having the parameters hardcoded in the playbook, by having them defined as variables, I can override them when I run the play and make the tasks able to be used for other purposes.

  tasks:
    - name: block for doing basic setup and verification for target system
      block:
        - name: get facts for "{{ target_device }}"
          parted:
            device: "{{ target_device }}"
          register: target_facts

        - name: print facts for "{{ target_device }}"
          debug:
            msg: "{{ target_facts }}"

        - name: check to see if there are any facts for /dev/vdb1. this means there are existing partitions that we would overwrite, so fail
          debug:
            msg: "{{ target_facts }}.partitions"
          failed_when: ansible_devices.vdb.partitions.vdb1 is defined   ### if vdb1 is defined, there's already a partition there, so abort.

        - name: print size for the disk
          debug:
            msg: "the size is {{ target_facts['disk']['size'] }} kib"

        - name: copy keyfile to remote system
          copy:
            src: mylukskey.yml
            dest: "{{ keyfile }}"

        - name: make sure cryptsetup is installed
          yum:
            name: cryptsetup
            state: installed

The first few tasks that get run are going to get information about my targeted system and make sure that I’m not going to overwrite an existing partition. I then copy the keyfile onto my remote system. This keyfile contains the passphrase which will be used when I create the LUKS container. Not all systems will have the cryptsetup package installed, so the next thing to do is install that RPM if it’s not already installed.

    - name: block to attempt to get info on what my destination device will become
      block:
        - name: task to attempt to get info on what my destination device will be
          parted:
            device: "{{ target_device}}"
            number: 1
            state: info
          register: info_output
        - name: print info_output
          debug:
            msg: "{{ info_output }}"

    - name: block to attempt parted
      block:
        - name: use parted in block to create new partition
          parted:
            device: "{{ target_device }}"
            number: 1
            state: present  
            part_end: "{{ target_size }}"
          register: parted_output

      rescue:
        - name: parted failed
          fail:
            msg: 'parted failed:  {{ parted_output }}'

At this point, I have a system that is ready and appropriate to be partitioned. For my own logging purposes, I have a task that prints out the information that parted gives back for my target device, /dev/vdb. The partitions here should be blank because I’ve already failed when ansible_devices.vdb.partitions.vdb1 is defined, so this is simply for verification. Next, I use parted to create my partition. To catch any errors in this step (maybe my destination device is too small, or something else happened), I use a block and rescue to register the output of parted and then display that in the fail part of my rescue section.

    - name: block for LUKS and filesystem tasks
      block:
        - name: create LUKS container with passphrase
          luks_device:
            device: "{{ target_device }}1"
            state: present
            name: "{{ mycrypt }}"
            keyfile: "{{ keyfile }}"

        - name: open luks container
          luks_device:
            device: "{{ target_device }}1"
            state: opened
            name: "{{ mycrypt }}"
            keyfile: "{{ keyfile }}"

        - name: create a new volgroup in that partition
          lvg:
            vg: "{{ myvg }}"
            pvs: "/dev/mapper/{{ mycrypt }}"

        - name: create a logvol in my new vg
          lvol:
            vg: "{{ myvg }}"
            lv: "{{ mylv }}"
            size: +100%FREE

        - name: create a filesystem
          filesystem:
            fstype: xfs
            dev: "/dev/mapper/{{ myvg }}-{{ mylv }}"

Now that I have a partition and cryptsetup installed, I need to do the LUKS and filesystem part of my setup. The first step is to use the luks_device module, along with the keyfile that I copied over. After I have the LUKS container, I create the volume group, then the logical volume, and then the filesystem.

        - name: mount device
          mount:
            path: /mnt
            src: "/dev/mapper/{{ myvg }}-{{ mylv }}"
            state: mounted
            fstype: xfs

    - name: put some content in my new filesystem
      copy:
        content: "this is secure content!"
        dest: /mnt/newcontent.txt

    - name: set content in /etc/crypttab so I can mount the partition on reboot
      copy:
        content: "{{ mycrypt }} {{ target_device }}1 {{ keyfile }}"
        dest: /etc/crypttab
        owner: root
        group: root
        mode: 0644

After I have a filesystem there, I mount the filesystem and write a test file to verify that everything is working correctly. The final step is to create the /etc/crypttab file so that the system can mount my filesystem when it gets rebooted.

Wrap up

The process of manually configuring an encrypted partition is not particularly difficult, or even time-consuming. However, such tasks are perfect for Ansible to handle for you, helping to ensure consistent, secure, and reproducible configurations.

Top 25 Linux Commands – Linux Hint

A developer’s best friend is the command line, and it ought to be part of their routine work. It helps make a system more efficient and manageable. For instance, you can write scripts to automate time-consuming processes.

Here, we have compiled all the top Linux terminal commands that will help beginners, as well as intermediate and advanced users.

In this article, we will learn about these 25 Linux commands:

  1. ls
  2. echo
  3. touch
  4. mkdir
  5. grep
  6. man
  7. pwd
  8. cd
  9. mv
  10. rmdir
  11. locate
  12. less
  13. compgen
  14. “>”
  15. cat
  16. “|”
  17. head
  18. tail
  19. chmod
  20. exit
  21. history
  22. clear
  23. cp
  24. kill
  25. sleep

Now, let’s learn each of these commands one by one.

1. ls

The ‘ls’ command is the most widely used command on the CLI. It lists all the files present in the current/present working directory. Open up the terminal by pressing ‘CTRL+ALT+T’, and write out the following command:

ls

You can also list the files from a specific folder using this command:

ls ./Desktop

It will show the list of files that reside in ‘Desktop’ without changing the present working directory.

Another feature of the ‘ls’ command is that you can write ‘ls -al’ to print all the dotfiles along with the regular ones, together with their file permissions.

ls -al

2. echo

This command prints text to the command-line interface. The ‘echo’ command is used to print text and can be used in scripts and bash files as well. It can output status text to the main screen or to any required file, and it is also helpful for displaying environment variables. For example, write out the following command in the terminal:

echo “Hello World”

It will show you the following results.

3. touch

The ‘touch’ command allows you to create a file. Use the ‘touch’ command with the filename you want to give the file and hit Enter.

touch testfile

After that, type the ‘ls’ command in the terminal to confirm the file’s existence.

ls

Here, you can see that the text file is created. Use the command given below to open the file:

nano testfile

Execute the command, and you will see the following result.

At this point, the file is empty because you only created it and have not added any content. The ‘touch’ command is not limited to text files; by giving an extension, it can create files of many types. For example, you can also create a Python script using the following command:

touch file.py

Here, ‘.py’ is the extension for a Python script.

ls

4. mkdir

The ‘mkdir’ command is used to create directories. It also lets you create multiple directories at once, which saves time.

First, view the list of files that exists in the present working directory by using the command given below:

ls

Now, create a new directory by the name of ‘newDir’.

mkdir newDir

If you are working as a superuser, the command will be executed; otherwise, you will have to execute the following command instead of the one given above.

sudo mkdir newDir

Now, type the ‘ls’ command to view the list of files and folders.

For creating multiple directories at once, give the names of the directories in a single ‘mkdir’ command.

mkdir dir1 dir2 dir3

Or

sudo mkdir dir1 dir2 dir3

Now, list the files and folders using the ‘ls’ command.

ls

You can see the dir1, dir2, and dir3 here.

5. grep

The ‘grep’ command is also known as the search command: it searches text files for specific keywords. Before using it, you should have some text in your file. For example, add the following sample text to the ‘testfile’ you already created using the ‘touch’ command.

Open up the file through the terminal.

nano testfile

Execute the command. It will give you the following output.

Now, write the following text in the file ‘testfile’.

this is Linuxhint.com
You are learning 25 basic commands of Linux.

Press CTRL+O to write this content in the file.

Come out of this file by pressing CTRL+X. Now, use the ‘grep’ command. The ‘-c’ flag will tell you how many times the word ‘Linux’ appears in the file.

grep -c ‘Linux’ testfile

As the output is ‘2’, it means that the word ‘Linux’ exists two times in the ‘testfile’.

Now, let’s make some changes to this file by opening the file using the ‘nano’ command.

nano testfile

You may write any text multiple times in this file to check the working of the above ‘grep’ command.

this is Linuxhint.com

You are learning 25 basic commands of Linux.

Linux

Linux

Linux

Linux

Linux

Now, press CTRL+O to write out the updated content in the file.

Come out of this file by pressing CTRL+X, and now execute the following commands to check whether it performs correctly or not.

grep -c ‘Linux’ testfile

Different flags can be used with the ‘grep’ command for various purposes; for example, ‘-i’ makes the search case-insensitive. Once you have the idea behind the ‘grep’ command, you can explore it further according to your needs.
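A quick sketch of the difference between a plain count and a case-insensitive count; the file name and contents are made up for illustration:

```shell
printf 'Linux\nlinux\nLINUX\n' > demo.txt
grep -c 'Linux' demo.txt     # case-sensitive count: prints 1
grep -ci 'linux' demo.txt    # case-insensitive count: prints 3
```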

6. man

The ‘man’ command displays a manual describing how any command works. For example, if you don’t know what the ‘echo’ command does, you can use the ‘man’ command to learn its functionality.

man echo

Similarly, you can use the ‘man’ command for ‘grep’ as well.

man grep

Now, you can see all the options, flags, and other information related to ‘grep’.

7. pwd

‘pwd’ stands for print working directory, and it prints the current working directory for your session. If you are working in multiple terminal instances and want to know the exact working directory of one, use the ‘pwd’ command.

pwd

Here, you can see the path of the present working directory.

If you are working in the Desktop directory, ‘pwd’ will print out the whole path leading to the Desktop.

8. cd

‘cd’ stands for change directory. It is used to change the current directory, since you can access files and folders across the different directories of your system. For example, to make Desktop the current or present working directory, write out the following command in the terminal:

cd ./Desktop

To know the path of the present working directory, write the following command:

pwd

To go back to your home directory, type this:

cd ~

You can check the present working directory here.

9. mv

The ‘mv’ command is used to rename and move files and directories. When every file in a directory needs renaming, doing it by hand is time-consuming, so the ‘mv’ command comes into play. For example, our directory contains ‘testfile’, as shown below.

To rename this file use the ‘mv’ command in the following pattern.

mv testfile trialfile

And then view the list of the files to check the changes.

ls

You can also move this file to another directory using the ‘mv’ command. Let’s say you want to move this ‘trialfile’ to the Desktop. For that, write out the following command in the terminal:

mv trialfile ./Desktop/

10. rmdir

This command is used for removing directories. ‘rmdir’ helps save space on the computer and keeps files organized and clean. Directories can be removed using two commands: ‘rm’ and ‘rmdir’.

Now, let’s try to delete some directories. Step 1 is to view the directories in your current working space.

ls

Now, we are going to delete the ‘newDir’ directory.

rmdir newDir

Now, use the ‘ls’ command to see if it exists or not.

ls

Now, we are going to delete multiple directories at once.

rmdir dir1 dir2 dir3

Now, use the ‘ls’ command.

ls

As you can see, all of those directories have been deleted from the home directory.
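Note that ‘rmdir’ only removes empty directories; for a directory that still contains files, use ‘rm -r’ instead. A small sketch, with a made-up directory name:

```shell
mkdir full_dir
touch full_dir/file.txt
rmdir full_dir 2>/dev/null || echo "rmdir refused: directory not empty"
rm -r full_dir                # removes the directory and everything inside it
ls -d full_dir 2>/dev/null || echo "full_dir removed"
```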

11. locate

The ‘locate’ command helps find a file or a directory. It can find a specific file or directory, and it also supports searching with regular expressions and wildcards.

To find a file by its name, type the name of the file with the ‘locate’ command.

locate trialfile

The output of this command will let you know the exact path to locate this file.

There are certainly other options for the ‘locate’ command. You will get to know all of them by using the ‘man’ command.

12. less

The ‘less’ command views files without opening them in an editor tool. It is very quick, opens a file in the existing window, and disables writing, so the file cannot be modified. To use it, write the ‘less’ command followed by the file name.

less trialfile

It will give you the following output.

13. compgen

‘compgen’ command is a very efficient bash built-in that displays the names of all the commands, aliases, and functions available on the command line interface. To display all the commands, write:

compgen -c

Here, you can see a long list of all commands that you can use in the terminal.

Similarly, you can also print out the names of functions and files, which are shown at the end of this list.
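Since ‘compgen’ is a bash built-in, the following sketch assumes a bash shell:

```shell
# count how many distinct commands this shell can see
compgen -c | sort -u | wc -l
# aliases and shell functions can be listed the same way
compgen -a || true            # aliases (often none in a fresh shell)
compgen -A function || true   # shell functions
```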

14. “>”

The ‘>’ character redirects the output of shell commands. Instead of displaying a command’s output in the terminal window, the shell sends it to a file. Used on its own, it simply creates a new empty file:

> newfile.txt

And then view the files.

ls

Now open the file; it will be empty.

Now, we are sending the ‘compgen’ command result to this file.

compgen -c > newfile.txt

Open the file to view the content, which is the result of the ‘compgen’ command.
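It is worth noting the difference between ‘>’ (overwrite) and ‘>>’ (append); a small sketch in a scratch directory:

```shell
cd "$(mktemp -d)"
> newfile.txt                       # creates the file (or empties an existing one)
wc -c newfile.txt                   # 0 bytes: the file is empty
echo "first line"  > newfile.txt    # '>'  overwrites the file
echo "second line" >> newfile.txt   # '>>' appends instead of overwriting
cat newfile.txt
```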

15. cat

‘cat’ command is one of the most widely used commands, and it performs three main functions:

  • Display file content
  • Combine files
  • Create new files

First of all, we are going to display the content of the ‘trialfile’.

cat trialfile

It will give you the following output.
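All three functions of ‘cat’ can be tried together (file names below are illustrative):

```shell
cd "$(mktemp -d)"
printf 'contents of A\n' > a.txt   # create files to work with
printf 'contents of B\n' > b.txt
cat a.txt                          # 1. display a file
cat a.txt b.txt                    # 2. combine files on the screen
cat a.txt b.txt > combined.txt     # 3. combine them into a new file
cat combined.txt
```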

16. “|”

The pipe character “|” takes the output of the first command and uses it as input for the second command. For example:

cat trialfile | less

Here, the output of ‘cat trialfile’ becomes the input of ‘less’, so the file’s content is displayed page by page.
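Pipes can be chained; a small example that counts duplicate lines (the file and its contents are made up for the demonstration):

```shell
cd "$(mktemp -d)"
printf 'pear\napple\napple\n' > fruits.txt
# cat's output becomes sort's input; sort's output becomes uniq's input
cat fruits.txt | sort | uniq -c
```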

17. head

‘head’ command reads the start of a file, showing its first ten lines by default. It can be customized to display more (or fewer) lines, and it is the quickest way to skim the content of a file. For example, the command given below will show you the first ten lines of the file ‘newfile.txt’.

head newfile.txt

This is the typical use of the ‘head’ command: quickly read the first ten lines of a file and get an idea of what it is all about.
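The -n option controls how many lines are shown; a quick sketch using a generated file:

```shell
cd "$(mktemp -d)"
seq 1 100 > numbers.txt   # a file with 100 numbered lines
head numbers.txt          # first 10 lines by default
head -n 3 numbers.txt     # -n picks how many lines to show
```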

18. tail

‘tail’ command reads the end of a file. It shows you the last ten lines of the file by default, but it can also be customized to display more lines.

tail newfile.txt

It will print out the last ten lines of the ‘newfile’ file.
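As with ‘head’, the -n option adjusts the line count, and -f keeps following a growing file (handy for logs):

```shell
cd "$(mktemp -d)"
seq 1 100 > numbers.txt
tail numbers.txt          # last 10 lines by default
tail -n 2 numbers.txt     # just the last two
# tail -f numbers.txt     # would keep following the file as it grows (Ctrl+C stops it)
```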

19. chmod

‘chmod’ command sets or edits the permissions of a file or a folder. It is one of the best-known commands, and it changes the permissions of a specific file or directory through a quick argument:

  • w is used for write permission
  • r is used for read permission
  • x is used for execute permission
  • ‘+’ is used to add permissions
  • ‘-’ is used to remove permissions

To view the files and folders with their permissions, type the following command in the terminal:

ls -al

Here you can see that the highlighted portion represents the file permissions. The first section shows the permissions given to the owner, the second section the permissions given to the group, and the last section the permissions given to everyone else (the public). You can change the permissions of any section. Let’s change the file permissions of ‘newfile.txt’.

chmod -w newfile.txt

This command will remove the writing permissions from all of the sections.

Type the ‘ls -al’ command for its confirmation.

ls -al

Open the file, try to add some content, and save it; the editor will warn you that the file is not writable.
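Permissions can also be added back per section; a short sketch in a scratch directory (the file name is just an example):

```shell
cd "$(mktemp -d)"
touch newfile.txt
chmod -w newfile.txt    # remove write permission from every section
ls -l newfile.txt
chmod u+w newfile.txt   # restore write for the owner (u) only
chmod u+x newfile.txt   # add execute for the owner
ls -l newfile.txt       # the permission string reflects the changes
```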

20. exit

This command is used to quit the terminal without any GUI interaction. The terminal gives you the option to close itself using the ‘exit’ command.

exit

Press enter, and now you can see there is no terminal.

21. history

‘history’ command will show you a list of the most recently used commands. It displays a record of the commands you have run in the terminal for different purposes.

history

22. clear

This command clears the content of the terminal. It keeps the terminal clean.

clear

Press enter, and you will see a crystal-clear terminal.

23. cp

‘cp’ command copies a file or directory. You have to specify the destination along with the file name.

cp trialfile ~

Here, ‘~’ represents the home directory. Execute the command and then write the ‘ls’ command to check if it exists or not.

ls

24. kill

‘kill’ command terminates a running process from the command line interface. Before using the ‘kill’ command, you have to find out which processes are currently running on the system:

ps -ef

Let’s kill the ‘whoopsie’ process by using its process ID (PID).

sudo kill 702

Enter your password to give permission.

Here, we have no error message, which means that the process is killed.
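You can rehearse this safely by starting a throwaway process and killing it (rather than a real system service):

```shell
# start a throwaway background process to practice on
sleep 300 &
pid=$!                        # shell variable holding the new process ID
ps -p "$pid"                  # confirm it is running
kill "$pid"                   # send SIGTERM (polite termination)
wait "$pid" 2>/dev/null || true
kill -0 "$pid" 2>/dev/null || echo "process $pid is gone"
```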

25. sleep

‘sleep’ command delays a process for a specified amount of time. It is also used to control and pace processes in scripts, pausing execution until the specified time has elapsed. The time can be specified in seconds, minutes, hours, or even days.

Let’s sleep the process for two seconds.

sleep 2

The shell will pause for two seconds before returning the prompt.
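In a script, ‘sleep’ is typically used to pause between steps; a minimal sketch that measures the pause:

```shell
# scripts often pause between steps; suffixes also work, e.g. sleep 1m, sleep 2h
start=$(date +%s)
sleep 2
end=$(date +%s)
echo "slept for $((end - start)) seconds"
```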

Conclusion:

In this article, we have covered the top 25 Linux terminal commands. These essential commands give beginners a solid start with the Linux command-line interface.

How to Set MySQL Root Password using Ansible – Linux Hint

Most Linux distributions, including CentOS/RHEL and Ubuntu/Debian, do not set the MySQL root password automatically. As the MySQL root password is not automatically set, one can log in to the MySQL console as the root without any password. This is not very good for security.

On CentOS/RHEL, you can easily run the mysql_secure_installation command to set up a root password. But on Ubuntu 20.04 LTS, this method does not work, as MySQL uses a different authentication plugin for the root user.

This article will show you how to set up a MySQL root password on CentOS 8 and Ubuntu 20.04 LTS Linux distributions using Ansible modules.

Prerequisites

If you want to try out the examples included in this article,

1) You must have Ansible installed on your computer.

2) You must have at least a CentOS/RHEL 8 host or an Ubuntu 20.04 LTS host configured for Ansible automation.

There are many articles on LinuxHint dedicated to installing Ansible and configuring hosts for Ansible automation. You may check these out if needed.

Setting Up a Project Directory

Before we move on any further, we will set up a new Ansible project directory, just to keep things a bit organized.

To create the project directory mysql-root-pass/ and all the required subdirectories (in your current working directory), run the following command:

$ mkdir -pv mysql-root-pass/{playbooks,group_vars}

Once the project directory is created, navigate to the project directory, as follows:

$ cd mysql-root-pass/

Create a hosts inventory file, as follows:

$ nano hosts

Add the host IP or DNS names of your CentOS/RHEL 8 or Ubuntu 20.04 LTS hosts in the inventory file (one host per line), as shown in the screenshot below.

Once you are done, save the file by pressing <Ctrl> + X, followed by Y and <Enter>.

Here, I have created two groups, centos8, and ubuntu20. The centos8 group has the DNS name of my CentOS 8 host, vm3.nodekite.com; and the ubuntu20 group has the DNS name of my Ubuntu 20.04 LTS host, vm7.nodekite.com.

Create an Ansible configuration file ansible.cfg in your project directory, as follows:

Type the following lines in the ansible.cfg file:

[defaults]
inventory = hosts
host_key_checking = False

Once you are done, save the ansible.cfg file by pressing <Ctrl> + X, followed by Y and <Enter>.

Try pinging all the hosts you have added in your hosts inventory file, as follows:

$ ansible all -u ansible -m ping

As you can see, my CentOS 8 host (vm3.nodekite.com) and my Ubuntu 20.04 LTS host (vm7.nodekite.com) are both accessible.

Installing MySQL and Setting Up a Root Password on CentOS/RHEL 8

This section will show you how to install the MySQL database server and set up a root password on CentOS 8 using Ansible. The same procedure should work on RHEL 8.

Create the new Ansible playbook install_mysql_centos8.yaml in the playbooks/ directory, as follows:

$ nano playbooks/install_mysql_centos8.yaml

Type the following lines in the install_mysql_centos8.yaml file:

- hosts: centos8
  user: ansible
  become: True
  tasks:
    - name: Update DNF Package repository cache
      dnf:
        update_cache: True

    - name: Install MySQL server on CentOS 8
      dnf:
        name: mysql-server
        state: present

    - name: Install MySQL client on CentOS 8
      dnf:
        name: mysql
        state: present

    - name: Make sure mysqld service is running
      service:
        name: mysqld
        state: started
        enabled: True

    - name: Install python3-PyMySQL library
      dnf:
        name: python3-PyMySQL
        state: present

Once you are done, press <Ctrl> + X, followed by Y and <Enter>, to save the install_mysql_centos8.yaml file.

The first line, - hosts: centos8, tells Ansible to run the playbook install_mysql_centos8.yaml on every host in the centos8 group.

Here, I have defined 5 tasks.

The first task updates the DNF package repository cache of CentOS 8 using the Ansible dnf module.

The second task installs the MySQL server package mysql-server using the Ansible dnf module.

The third task installs the MySQL client package mysql using the Ansible dnf module.

The fourth task ensures that the mysqld service is running and that it has been added to the system startup so that it automatically starts on boot.

The fifth task installs the Python 3 MySQL library pymysql. This is required for accessing MySQL from Ansible.

Run the install_mysql_centos8.yaml playbook, as follows:

$ ansible-playbook playbooks/install_mysql_centos8.yaml

As you can see, the playbook install_mysql_centos8.yaml ran successfully.

On my CentOS 8 host, I can access MySQL as the root user without any password, as you can see in the screenshot below:

Now that the MySQL server is installed, it is time to set up a root password for the MySQL server.

Create the new group variable file centos8 (in the group_vars/ directory) for the centos8 group, as follows:

$ nano group_vars/centos8

Add a new variable mysql_pass with the root password (in my case, secret) you would like to set, as shown in the screenshot below.

Once you are done, press <Ctrl> + X, followed by Y and <Enter>, to save the file.

Create a new playbook set_root_pass_centos8.yaml with the following command:

$ nano playbooks/set_root_pass_centos8.yaml

Type the following lines in the set_root_pass_centos8.yaml file:

- hosts: centos8
  user: ansible
  become: True
  tasks:
    - name: Set MySQL root Password
      mysql_user:
        login_host: 'localhost'
        login_user: 'root'
        login_password: ''
        name: 'root'
        password: '{{ mysql_pass }}'
        state: present

Once you are done, press <Ctrl> + X, followed by Y and <Enter>, to save the set_root_pass_centos8.yaml file.

This playbook uses the mysql_user Ansible module to set a MySQL root password.

The login_host, login_user, and login_password options of the mysql_user Ansible module are used to set the current MySQL login hostname, username, and password, respectively. By default, the MySQL login hostname (login_host) is localhost, the login username (login_user) is root, and the login password (login_password) is empty ('') on CentOS 8.

The password option of the mysql_user Ansible module is used to set a new MySQL root password, here. The MySQL root password will be the value of the mysql_pass group variable that was set earlier.
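Note that once the password is set, a re-run of this task can no longer log in with an empty login_password. A hedged sketch of a more re-run-friendly version of the task (the check_implicit_admin option of mysql_user tells Ansible to try a password-less root login first and fall back to the given password):

```yaml
# sketch only: makes the playbook safe to run repeatedly
- name: Set MySQL root Password (idempotent re-runs)
  mysql_user:
    login_user: 'root'
    login_password: '{{ mysql_pass }}'
    check_implicit_admin: true
    name: 'root'
    password: '{{ mysql_pass }}'
    state: present
```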

Run the playbook set_root_pass_centos8.yaml with the following command:

$ ansible-playbook playbooks/set_root_pass_centos8.yaml

The playbook ran successfully, as seen in the screenshot below:

As you can see, I can no longer log in to the MySQL server without a root password.

To log in to the MySQL server as the root user with a password, run the following command on your CentOS 8 host:

$ mysql -u root -p

Type in the root password you have set using Ansible, and press <Enter>.

You should be logged in to the MySQL server as the root user.

Installing MySQL and Setting Up a Root Password on Ubuntu 20.04 LTS

This section will show you how to install the MySQL database server and set up a root password on Ubuntu 20.04 LTS using Ansible.

Create a new Ansible playbook install_mysql_ubuntu20.yaml in the playbooks/ directory, as follows:

$ nano playbooks/install_mysql_ubuntu20.yaml

Type the following lines in the install_mysql_ubuntu20.yaml file:

- hosts: ubuntu20
  user: ansible
  become: True
  tasks:
    - name: Update APT Package repository cache
      apt:
        update_cache: True

    - name: Install MySQL server on Ubuntu 20.04 LTS
      apt:
        name: mysql-server
        state: present

    - name: Install MySQL client on Ubuntu 20.04 LTS
      apt:
        name: mysql-client
        state: present

    - name: Make sure mysql service is running
      service:
        name: mysql
        state: started
        enabled: True

    - name: Install python3-pymysql library
      apt:
        name: python3-pymysql
        state: present

Once you are done, press <Ctrl> + X, followed by Y and <Enter>, to save the install_mysql_ubuntu20.yaml file.

The first line, - hosts: ubuntu20, tells Ansible to run the playbook install_mysql_ubuntu20.yaml on every host in the ubuntu20 group.

Here, I have defined 5 tasks.

The first task updates the APT package repository cache of Ubuntu 20.04 LTS using the Ansible apt module.

The second task installs the MySQL server package mysql-server using the Ansible apt module.

The third task installs the MySQL client package mysql-client using the Ansible apt module.

The fourth task makes sure that the mysql service is running and that it has been added to the system startup so that it automatically starts on boot.

The fifth task installs the Python 3 MySQL library pymysql. This is required to access MySQL from Ansible.

Run the install_mysql_ubuntu20.yaml playbook, as follows:

$ ansible-playbook playbooks/install_mysql_ubuntu20.yaml

As you can see, the playbook install_mysql_ubuntu20.yaml ran successfully.

On my Ubuntu 20.04 LTS host, I can access MySQL as the root user without any password, as you can see in the screenshot below.

Now that the MySQL server is installed, it is time to set up a root password for the MySQL server.

Create a new group variable file ubuntu20 (in the group_vars/ directory) for the ubuntu20 group, as follows:

$ nano group_vars/ubuntu20

Add a new variable, mysql_pass, with the root password (in my case, verysecret) that you would like to set, as shown in the screenshot below.

Once you are done, press <Ctrl> + X, followed by Y and <Enter>, to save the file.

Create a new playbook set_root_pass_ubuntu20.yaml with the following command:

$ nano playbooks/set_root_pass_ubuntu20.yaml

Type the following lines in the set_root_pass_ubuntu20.yaml file:

- hosts: ubuntu20
  user: ansible
  become: True
  tasks:
    - name: Change the authentication plugin of MySQL root user to mysql_native_password
      shell: mysql -u root -e 'UPDATE mysql.user SET plugin="mysql_native_password" WHERE user="root" AND host="localhost"'

    - name: Flush Privileges
      shell: mysql -u root -e 'FLUSH PRIVILEGES'

    - name: Set MySQL root password
      mysql_user:
        login_host: 'localhost'
        login_user: 'root'
        login_password: ''
        name: 'root'
        password: '{{ mysql_pass }}'
        state: present

Once you are done, press <Ctrl> + X, followed by Y and <Enter>, to save the set_root_pass_ubuntu20.yaml file.

Here, I have defined three tasks.

The first task changes the authentication plugin of the MySQL root user from auth_socket to mysql_native_password.

The second task reloads all the privileges.

The third task uses the mysql_user Ansible module to set a MySQL root password.

In the third task, the login_host, login_user, and login_password options of the mysql_user Ansible module are used to set the current MySQL login hostname, username, and password, respectively. By default, the MySQL login hostname (login_host) is localhost, the login username (login_user) is root, and the login password (login_password) is empty ('') on the system.

Here, the password option of the mysql_user Ansible module is used to set a new MySQL root password. The MySQL root password will be the value of the mysql_pass group variable, which I set earlier, in the group_vars/ubuntu20 file.

Run the playbook set_root_pass_ubuntu20.yaml with the following command:

$ ansible-playbook playbooks/set_root_pass_ubuntu20.yaml

The playbook ran successfully, as you can see in the screenshot below:

As you can see, I can no longer log in to the MySQL server without a root password.

To log in to the MySQL server as the root user with the set password, run the following command on your Ubuntu 20.04 LTS host:

$ mysql -u root -p

Type in the root password you have set using Ansible and press <Enter>.

You should be logged in to the MySQL server as the root user.

Conclusion

This article showed you how to install the MySQL server and set a MySQL root password on CentOS 8 and Ubuntu 20.04 LTS Linux distributions using Ansible. This article used the mysql_user Ansible module for setting up the MySQL root password. You can use this module to change the MySQL root password, create new MySQL users, and perform many other user management functions.

For more information on the mysql_user module, check the official documentation of the mysql_user module.

Source

How to Setup SSH without Passwords – Linux Hint

SSH is used to log in to remote servers to run commands and programs. You can log in to a remote system via password authentication or via public key authentication. If you regularly use SSH to connect to remote servers, the public key authentication method is best for you, as it provides secure, password-less logins.

In this article, we will explain how to set up SSH without passwords in a Linux operating system. We will be using the command line Terminal application for this purpose. To open the command line Terminal, use the <Ctrl+Alt+T> keyboard shortcut.

We have explained the procedure mentioned in this article on the Ubuntu 20.04 system. More or less the same procedure can be followed in Debian and previous Ubuntu versions.

Follow the steps below to set up SSH without passwords on your Linux system.

Generate A New SSH Key Pair on Local Machine

The first step will be to generate a new SSH key on your local system. To do this, issue the following command in Terminal:

ssh-keygen -t rsa

Press Enter to accept all fields as defaults.

The above command will create the key pair, i.e., the public key and the private key. The private key is kept on the local system, while the public key is shared with remote systems. Both keys are stored in the .ssh folder.

You can view the keypair generated by entering the following command:

ls -l .ssh

Copy Public Key to Remote Machine

In this next step, copy the public key to the remote system that you want to access from your local system without passwords. We will use the ssh-copy-id command that is by default available in most Linux distributions. This command will copy the public key id_rsa.pub to the .ssh/authorized_keys file in the remote system.

The syntax for ssh-copy-id is as follows:

ssh-copy-id remote_user@remote_IP

In our example, the command would be:

ssh-copy-id tin@192.168.72.136

On the remote system, you can verify the transfer of the public key by viewing the authorized_keys file.

cat .ssh/authorized_keys

Set the permission on the authorized_keys file on the remote system to 600. Use the following command to do so:

chmod 600 .ssh/authorized_keys

Set the permission on the .ssh directory on the remote system to 700. Use the following command to do so:

chmod 700 .ssh
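The remote-side permission fixes can be rehearsed locally in a scratch directory (the path below is a stand-in for the real ~/.ssh on the remote machine):

```shell
demo="$(mktemp -d)"               # stand-in for the remote user's home
mkdir "$demo/.ssh"
touch "$demo/.ssh/authorized_keys"

chmod 700 "$demo/.ssh"                  # only the owner may enter the directory
chmod 600 "$demo/.ssh/authorized_keys"  # only the owner may read/write the key list

# print the octal modes so you can confirm 700 / 600
stat -c '%a %n' "$demo/.ssh" "$demo/.ssh/authorized_keys"
```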

Add Private Key to SSH Authentication Agent on Local Server

In our local machine, we will add the private key to the SSH authentication agent. This will allow us to log into the remote server without having to enter a password every time.

Here is the command to do so:

ssh-add

Login to Remote Server Using SSH Keys

After performing the above steps, try logging in to your remote server. This time, you will be able to log in without entering a password.

Source

How to analyze and interpret Apache Webserver Log

Apache web servers can generate a lot of logs. These logs contain information such as the HTTP requests that Apache has handled and responded to, and other activities that are specific to Apache. Analyzing the logs is an important part of administering Apache and ensuring that it runs as expected.

In this guide, we’ll go over the different logging options present in Apache and how to interpret this log data. You’ll learn how to analyze the logs that Apache produces and how to configure the logging settings to give you the most relevant data about what Apache is doing.

In this tutorial you will learn:

  • Configure and understand Apache webserver logging
  • What are Apache log levels
  • How to interpret Apache log formatting and its meaning
  • What are the most common Apache log configuration files
  • How to extend logging configuration to include forensic data

Software Requirements and Conventions Used

Software Requirements and Linux Command Line Conventions
Category Requirements, Conventions or Software Version Used
System Ubuntu, Debian, CentOS, RHEL, Fedora
Software Apache Webserver
Other Privileged access to your Linux system as root or via the sudo command.
Conventions # – requires given linux commands to be executed with root privileges either directly as a root user or by use of sudo command
$ – requires given linux commands to be executed as a regular non-privileged user

Apache log files and their location

Apache produces two different log files:

  • access.log stores information about all the incoming connection requests to Apache. Every time a user visits your website, it will be logged here. Each page a user requests will also be logged as a separate entry.
  • error.log stores information about errors that Apache encounters throughout its operation. Ideally, this file should remain relatively empty.

Apache default Log configuration on Ubuntu Linux server

The location of the log files may depend on which version of Apache you are running and what Linux distribution it’s on. Apache can also be configured to store these files in some other non-default location.

But, by default, you should be able to find the access and error logs in one of these directories:

  • /var/log/apache/
  • /var/log/apache2/
  • /etc/httpd/logs/

Apache log formatting

Apache allows you to customize what information is logged and how each log entry is presented, which we will cover later in this tutorial.

The usual format that Apache follows for presenting log entries is:

"%h %l %u %t \"%r\" %>s %O \"%{Referer}i\" \"%{User-Agent}i\""

Here’s how to interpret this formatting:

  • %h – The IP address of the client.
  • %l – This is the ‘identd’ on the client, which is used to identify them. This field is usually empty, and presented as a hyphen.
  • %u – The user ID of the client, if HTTP authentication was used. If not, the log entry won’t show anything for this field.
  • %t – Timestamp of the log entry.
  • \"%r\" – The request line from the client. This will show what HTTP method was used (such as GET or POST), what file was requested, and what HTTP protocol was used.
  • %>s – The status code that was returned to the client. Codes of 4xx (such as 404, page not found) indicate client errors and codes of 5xx (such as 500, internal server error) indicate server errors. Other numbers should indicate success (such as 200, OK) or something else like redirection (such as 301, permanently moved).
  • %O – The size of the response sent to the client (including headers), in bytes.
  • \"%{Referer}i\" – The referring link, if applicable. This tells you how the user navigated to your page (either from an internal or external link).
  • \"%{User-Agent}i\" – This contains information about the connecting client’s web browser and operating system.

A typical entry in the access log will look something like this:

10.10.220.3 - - [17/Dec/2019:23:05:32 -0500] "GET /products/index.php HTTP/1.1" 200 5015 "http://example.com/products/index.php" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.79 Safari/537.36"

The error log is a bit more straightforward and easy to interpret. Here’s what a typical entry may look like:

[Mon Dec 16 06:29:16.613789 2019] [php7:error] [pid 2095] [client 10.10.244.61:24145] script '/var/www/html/settings.php' not found or unable to stat

This is a good way to see how many 404 errors your visitors are encountering, and may clue you in to some dead links on your site. More importantly, it can alert you to missing resources or potential server problems. The example above shows a *.php page that was requested but missing.
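Because the access log format above is whitespace-delimited, simple shell one-liners go a long way. For instance, a fabricated two-line access log can be mined for 404s like this (field 9 is the status code, field 7 the requested path):

```shell
cd "$(mktemp -d)"
# two made-up entries in the format described above
cat > access.log <<'EOF'
10.10.220.3 - - [17/Dec/2019:23:05:32 -0500] "GET /products/index.php HTTP/1.1" 200 5015 "-" "Mozilla/5.0"
10.10.220.4 - - [17/Dec/2019:23:06:01 -0500] "GET /missing.html HTTP/1.1" 404 492 "-" "Mozilla/5.0"
EOF
# print the path of every request that returned 404
awk '$9 == 404 {print $7}' access.log
```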

Apache log configuration

Apache’s logging is highly customizable and can be adjusted from a couple configuration files. On Ubuntu and Debian, the main configuration file for Apache’s logging is located here:

  • /etc/apache2/apache2.conf

Since you can run multiple websites (referred to as Virtual Hosts) from a single Apache instance, you can also configure each of them to have separate access and error logs. To define how these separate log files should be named and where to save them, configure this file:

  • /etc/apache2/sites-available/000-default.conf

On CentOS, RHEL, and Fedora, the two configuration files are found, respectively, in these locations:

  • /etc/httpd/conf/httpd.conf
  • /etc/httpd/conf.d/ (place additional VirtualHost configurations in this directory)

Log directives

There are quite a few different directives that can be configured inside these files, but these are the main ones you should concern yourself with if you wish to customize Apache’s logging:

  • CustomLog – Defines where the access log file is stored.
  • ErrorLog – Defines where the error log file is stored.
  • LogLevel – Defines how severe an event must be in order to be logged (read below for more information).
  • LogFormat – Define how each entry in the access log should be formatted (read below for more information).

LogLevel is set to warn by default, which means that it will write to the error log on warning conditions or more serious events. If your error log is getting filled with loads of innocuous warning messages, you can bump it up to error which will only report errors or more serious problems.

Other options include (in order of severity) crit, alert, and emerg. Apache recommends using a level of at least crit. For debugging purposes, you can temporarily set LogLevel to debug, but be aware that you can end up with an unwieldy amount of entries in your error log.

LogFormat allows you to adjust what the entries inside the access log look like. If you find the example entry in access.log (from the Apache log formatting section above) to be a little confusing, you’re not alone. Apache allows you to customize the format of log entries, so you can set them up in a more logical way. You could also use this customization to exclude certain information that you may find irrelevant.
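For example, a trimmed-down format could be declared under a nickname and then used by CustomLog (the nickname “simple” and the log path here are illustrative, not from a default configuration):

```apache
# hypothetical minimal format: timestamp, client IP, status code, request line
LogFormat "%t %h %>s \"%r\"" simple
CustomLog ${APACHE_LOG_DIR}/access.log simple
```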

Apache logging modules

The logging configuration we’ve displayed in this guide so far pertains to the mod_log_config Apache module. To extend logging functionality even further, you can load other logging modules into Apache. This can provide some more capabilities that aren’t available with the default settings.

mod_log_forensic begins logging before a request (when the headers are first received), and logs again after the request. That means two log entries are created for each request, allowing an administrator to measure response times with more precision.

Define the location of your forensic log with the CustomLog directive. For example:

CustomLog ${APACHE_LOG_DIR}/forensic.log forensic

mod_logio logs the number of bytes sent to and received from each request. It provides very accurate information because it also counts the data present in the header and body of each request, as well as the extra data that’s required for SSL/TLS encrypted connections.

Append the %I and %O placeholders to the LogFormat directive in order to make use of the extra data provided by this module. Other modules exist; these are just two of the most useful.

Conclusion

In this article we saw how to analyze and interpret the access and error logs of Apache. We also learned how to customize the logging in Apache’s configuration files to make the log data more relevant. Armed with this knowledge, you will be able to isolate problems more quickly and troubleshoot issues with Apache.

Remember that Apache’s logging functionality can be further extended through other logging modules, though this is only necessary in edge cases that require advanced debugging.

Source

Get ‘Kali Linux — An Ethical Hacker’s Cookbook, 2nd Edition’ ($44.99 value) FREE for a limited time

Many organizations have been affected by recent cyber events. At the current rate of hacking, it has become more important than ever to pentest your environment in order to ensure advanced-level security.

Kali Linux — An Ethical Hacker’s Cookbook from Packt Publishing is packed with practical recipes that will get you off to a strong start by introducing you to the installation and configuration of Kali Linux, which will help you to perform your tests.

SEE ALSO: New Undercover mode lets Kali Linux users pretend to be running Windows

You will also learn how to plan attack strategies and perform web application exploitation using tools such as Burp and JexBoss. Delve into the technique of carrying out wireless and password attacks, as well as the wide range of tools that help in forensics investigations and incident response mechanisms.

This ebook offers:

  • Practical recipes to conduct effective penetration testing using the latest version of Kali Linux
  • Leverage tools like Metasploit, Wireshark, Nmap, and more to detect vulnerabilities with ease
  • Confidently perform networking and application attacks using task-oriented recipes

Kali Linux — An Ethical Hacker’s Cookbook usually retails for $44.99, but BetaNews readers can get it entirely free for a limited time. All you have to do to get your copy for free is go here, enter the required details, and click the Download Now button. The offer expires on January 21, so act fast.

Source