The Evil Twin Attack – ls /blog

By Matthew Cranford

I searched through many guides, and none of them really gave a good description of how to do this. There's a lot of software out there (such as SEToolkit, which can automate this for you), but I decided to write my own. The scope of this guide is NOT to perform any MITM attacks or sniff traffic; I have references below for those things. This guide is just to set up the Evil Twin for those attacks.

Section 1

Information:

An Evil Twin AP is also known as a rogue wireless access point. The idea is to set up your own wireless network that looks exactly like the one you are attacking. Computers won't differentiate between SSIDs that share the same name; they'll only display the one with the stronger signal. The goal is to have the victim connect to your spoofed network, perform a Man-in-the-Middle (MITM) attack, and forward their data on to the internet without them ever suspecting a thing. This can be used to steal someone's credentials, spoof DNS queries so the victim visits a phishing site, and more.


Hardware/Software Required:

A compatible wireless adapter – There are many on the internet to buy. I'm using the TL-WN722N, which you can buy from Amazon for about $15.

Kali Linux – You can either run from a USB or a VM. If you run from a VM, you may have issues getting the wireless card to work. I’ll write more on that later.

An alternate way to connect to the internet – The card you're using for the Evil Twin will be busy and therefore cannot connect you to the internet. You'll need another way to connect, so as to forward the victim's traffic on. You may use a separate wireless adapter, a 3G modem connection, or an Ethernet connection to a network.

Steps:

1. Install software that will also set up our DHCP service.

2. Install some software that will spoof the AP for us.

3. Edit the .conf files for getting our network going.

4. Start the services.

5. Run the attacks.

Section 2

Setting up the Wireless Adapter

PLEASE NOTE:

I recently discovered that you do NOT need to edit the network settings in Virtualbox to get the wireless adapter to work properly. Please go to Section 3 and follow the rest of the tutorial. The section below is for anyone who is having issues with the wireless adapter. It may help somewhat.

Okay, this is one of the hardest and trickiest parts of this tutorial. You may have to be patient and try this a couple of times for this to work, but after you figure this out, it will seriously help you with any future VM wireless adapter problems you may have.

This is going to assume you are running Kali in a virtual machine on VirtualBox. If you're running from a live USB, then don't worry about this part; you can skip to the next section. Just plug the adapter into a different port on the computer and it should integrate automatically.

First, plug the wireless adapter into one of the USB ports.

Second, if you are already running Kali, power it off. Open VirtualBox and go to the VM's Settings.


Then, click on ‘Network’ and select the Adapter 2 tab.

Click on 'Enable Network Adapter' and then select 'Bridged Adapter' from the 'Attached to' menu.

Next, click on Name and select your wireless adapter.


In my case, it’s called TP-LINK Wireless USB Adapter.


Click OK and then boot into your VM.

Type in your username and password and log on. The default is root/toor.

Here comes the tricky part. We’ve set up Kali to use the Wireless adapter as a NIC, but to the VM, the wireless adapter hasn’t been plugged in yet.

Section 3

Now in the virtual machine, go to the top and click on 'Devices', select 'USB Devices', and finally click on your wireless adapter. In my case, it's called ATHEROS USB2.0 WLAN.


Sometimes, when we select the USB device, it doesn’t load properly into the VM. I’ve found that if you are having trouble getting Kali to recognize the wireless adapter, try switching USB ports. You may have to try several before it works.

NOTE: Never go back and deselect it from the ‘USB Devices’ menu. This causes major errors and you will have to reboot the entire system to get it to work properly. If it doesn’t find it the first time, simply unplug it and try a different USB port, and then go back and re-select it. This may take a few tries, but it’ll work, trust me!

Section 4

DNSMASQ

Open up a terminal.

Type in the command:

apt-get install -y hostapd dnsmasq wireless-tools iw wvdial

This will install all the needed software.

Now, we’ll configure dnsmasq to serve DHCP and DNS on our wireless interface and start the service.


I’ll go through this step by step so you can see what exactly is happening:

cat <<EOF > /etc/dnsmasq.conf – This tells the shell to take everything we type next and write it into the file /etc/dnsmasq.conf.

log-facility=/var/log/dnsmasq.log – This tells dnsmasq where to put all the logs it generates.

#address=/#/10.0.0.1 – a commented-out example line (the leading # makes it a comment); we are going to use the 10.0.0.0/24 network.

#address=/google.com/10.0.0.1 – another commented-out example; uncommented, this form would resolve google.com to 10.0.0.1, which is useful for DNS spoofing.

interface=wlan0 – tells dnsmasq which NIC to use for the DNS and DHCP service.

dhcp-range=10.0.0.10,10.0.0.250,12h – the range of IP addresses we want to assign to clients, with a 12-hour lease. You could change this to any private address range you like in order to make your Evil Twin look more authentic.

dhcp-option=3,10.0.0.1 – DHCP option 3 is the default gateway; this tells clients to route their traffic through 10.0.0.1 (us).

dhcp-option=6,10.0.0.1 – DHCP option 6 is the DNS server; this points clients at 10.0.0.1 for name resolution.

#no-resolv – another commented-out line.

EOF – End of File; marks the end of the input, so the shell stops writing to the file.

service dnsmasq start – starts the dnsmasq service.
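Putting the pieces above together, the whole configuration can be written in one heredoc. The sketch below writes to a staging path (/tmp/dnsmasq.conf.example) purely for illustration; on the attack machine the target is /etc/dnsmasq.conf.

```shell
# Assemble the dnsmasq configuration in a single heredoc.
# /tmp/dnsmasq.conf.example is a stand-in path; use /etc/dnsmasq.conf for real.
cat <<'EOF' > /tmp/dnsmasq.conf.example
log-facility=/var/log/dnsmasq.log
interface=wlan0
dhcp-range=10.0.0.10,10.0.0.250,12h
dhcp-option=3,10.0.0.1
dhcp-option=6,10.0.0.1
#no-resolv
EOF
```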

Section 5

Setting Up the Wireless Access Point

Now, we're going to set up the Evil Twin. I'm going to deviate slightly from a tutorial I saw recently on this; I only snipped a small excerpt of their commands, so no copyright infringement here. They use a 3G modem to forward the victim's data, but we'll use the network you are already connected to. This assumes you're running from a VM and not a live USB; if you run from a USB, you will need an additional wireless card for this. Thankfully, using a VM, we only need one wireless adapter. Let's get to it!

We are going to set up a network with an SSID of ‘linksys’.

Below are the commands from that tutorial, along with the additions and changes I made for this to work.


ifconfig wlan0 up – This brings up our wlan0 interface. (wlan0 is the NIC we are using to perform this attack.)

ifconfig wlan0 10.0.0.1/24 – This gives the interface wlan0 the IP address 10.0.0.1 in the Class C private address range.

iptables -t nat -F – This flushes all existing rules from the NAT table, so we start clean.

iptables -F – This flushes all rules from the default filter table.

iptables -t nat -A POSTROUTING -o ppp0 -j MASQUERADE – This tells the kernel to masquerade (NAT) traffic leaving on ppp0. I changed ppp0 (which is a 3G modem interface) to eth0, which is the interface we are using to connect to the internet.

iptables -A FORWARD -i wlan0 -o ppp0 -j ACCEPT – This tells the kernel to allow traffic forwarded from wlan0 out through ppp0. Again, you need to change this to eth0 (or whatever interface you are using to connect to the internet).

echo '1' > /proc/sys/net/ipv4/ip_forward – This writes a 1 to the ip_forward file, which tells the kernel that we want to forward packets. If it were 0, forwarding would be off.

cat <<EOF > /etc/hostapd/hostapd.conf – Again, this tells the shell that anything we type next should be written to the hostapd.conf file.

interface=wlan0 – tells hostapd which interface to use.

driver=nl80211 – tells hostapd which driver to use for the interface.

ssid=Freewifi – the SSID you want to broadcast; I set mine to linksys.

channel=1 – the channel we want to broadcast on.

#enable_karma=1 – a commented-out option from the original tutorial for a certain type of attack (KARMA), which is outside the scope of this tutorial.

EOF – signifies the end of the input and stops the prompt.

service hostapd start – starts the service.
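For convenience, here is a sketch of the whole bring-up as one script, with ppp0 already swapped for eth0 as described above. It is staged to /tmp here purely for illustration; the commands inside need root on the attack box, and the internet-facing interface name (eth0) is an assumption you should adjust.

```shell
# Stage the full Evil Twin bring-up as a script (illustrative path).
# Run the resulting script as root on the attack machine.
cat <<'EOF' > /tmp/evil-twin-up.sh
ifconfig wlan0 up
ifconfig wlan0 10.0.0.1/24
iptables -t nat -F
iptables -F
iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i wlan0 -o eth0 -j ACCEPT
echo '1' > /proc/sys/net/ipv4/ip_forward
cat <<CONF > /etc/hostapd/hostapd.conf
interface=wlan0
driver=nl80211
ssid=linksys
channel=1
CONF
service hostapd start
EOF
chmod +x /tmp/evil-twin-up.sh
```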

Section 6

Success

At this point, you should be able to search, either with your phone or laptop, and find the Rogue Wireless AP! If so, then congratulations! YOU DID IT!

From here, you should be able to start performing all sorts of nasty tricks: MITM attacks, packet sniffing, password sniffing, etc. A good MITM program is Ettercap; it may be worth your time to check it out.

Those things are outside the scope of this tutorial, but I may add them in at a future point.

If you weren’t successful, go back and read everything carefully, especially check your spelling when typing in commands.

If you need to re-edit the .conf files, you can use gedit (apt-get install gedit) or Leafpad (already installed): just navigate to the folder and type gedit <filename>. If you're going to edit a file, make sure you stop the services first (service dnsmasq stop, service hostapd stop).

Now, if you ever want to start up the Evil Twin again, just start up the services again and it should work properly!

Full article:

https://www.cybrary.it/0p3n/evil-twin-attack-using-kali-linux/


Source

How to Install Chamilo on Debian 9


Chamilo is free e-learning and collaboration software which aims to improve access to education on a global level. In this tutorial, we will show you how to install Chamilo on a Linux VPS running Debian 9 as the operating system.

Prerequisites

Before you start with the installation steps make sure your server meets the Chamilo requirements. Here is the list of Chamilo requirements:

  • Apache 2.2+
  • MySQL 5.6+ or MariaDB 5+
  • PHP 5.5+

Also, you need to have the following PHP modules installed and enabled:

  • php-intl
  • php-gd
  • php-mbstring
  • php-imagick
  • php-curl
  • php-mcrypt
  • php-xml
  • php-zip

If some of the software required for Chamilo to work is not installed or enabled on the server you can follow the steps below to install it.

Install Apache, MySQL and PHP on a Debian 9 VPS

Our Debian 9 VPS hosting comes with Apache, MySQL and PHP pre-installed. To install all the software including the dependencies on your server, first you need to connect to your Linux VPS via SSH and then you can run the commands below:

apt-get update
apt-get install apache2 mysql-server php php-mysql php-intl php-gd php-mbstring php-imagick php-curl php-mcrypt php-opcache php-xml php-zip

To verify the installation is completed run the following commands:

apache2 -v
# Server version: Apache/2.4.25 (Debian)
# Server built: 2018-06-02T08:01:13

mysql -V
# mysql Ver 15.1 Distrib 10.1.26-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2

php -v
# PHP 7.0.30-0+deb9u1 (cli) (built: Jun 14 2018 13:50:25) ( NTS )
# Copyright (c) 1997-2017 The PHP Group
# Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
# with Zend OPcache v7.0.30-0+deb9u1, Copyright (c) 1999-2017, by Zend Technologies

Configure Apache for Chamilo on Debian 9

First of all, create an Apache virtual host for your new Chamilo website:

nano /etc/apache2/sites-available/yourdomain.com.conf

Add the following content:

<VirtualHost *:80>
ServerAdmin admin@yourdomain.com
DocumentRoot /var/www/chamilo
ServerName yourdomain.com
ServerAlias www.yourdomain.com
<Directory /var/www/chamilo/>
Options Indexes FollowSymLinks MultiViews
AllowOverride All
Order allow,deny
allow from all
</Directory>
ErrorLog /var/log/apache2/yourdomain.com-error_log
CustomLog /var/log/apache2/yourdomain.com-access_log common
</VirtualHost>

Replace yourdomain.com with your actual domain name. Save and close the file. Then, enable the configuration and restart the Apache web server:

a2ensite yourdomain.com
systemctl restart apache2.service

Create MySQL database for Chamilo on Debian 9

Next, create a MySQL database for Chamilo. Log into your MySQL database server as root:

mysql -u root -p

Then, create a new database, a database user and set up a password using the commands below:

mysql> CREATE DATABASE chamilo;
mysql> GRANT ALL PRIVILEGES ON chamilo.* TO 'chamilo'@'localhost' IDENTIFIED BY 'YoUrPaSsWoRd';
mysql> FLUSH PRIVILEGES;
mysql> \q

Of course, you need to replace YoUrPaSsWoRd with a strong password of your choice.

Configure PHP for Chamilo on Debian 9

In order for Chamilo to work properly, you need to adjust some PHP settings. First, find the location of the php.ini file which is currently in use. Run the following command in your terminal:

php --ini | grep "Loaded Configuration File"

Then, edit the following settings as follows:

max_execution_time = 300
max_input_time = 300
memory_limit = 128M
post_max_size = 64M
upload_max_filesize = 128M
max_file_uploads = 20

magic_quotes_gpc = Off
short_open_tag = Off
display_errors = Off
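As a sketch, these edits can be scripted with sed. The file below is a stand-in created for illustration; substitute the "Loaded Configuration File" path reported by the previous command.

```shell
# Illustrative only: create a stand-in php.ini, then apply two of the edits
# with sed. Point PHPINI at your real "Loaded Configuration File" path.
PHPINI=/tmp/php.ini.example
printf 'max_execution_time = 30\nmemory_limit = 64M\n' > "$PHPINI"
sed -i 's/^max_execution_time = .*/max_execution_time = 300/' "$PHPINI"
sed -i 's/^memory_limit = .*/memory_limit = 128M/' "$PHPINI"
```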

Save the changes you have made in the php.ini and restart Apache again.

systemctl restart apache2.service

Download and Install Chamilo on Debian 9

The next step is to download and install Chamilo on your Debian VPS. Download the latest version of Chamilo from the official Chamilo download page. At the time of writing, the latest version is 1.11.8, which works well with PHP 7.0.x or later.

cd /var/www
wget https://github.com/chamilo/chamilo-lms/releases/download/v1.11.8/chamilo-1.11.8-php7.zip

Extract the zip archive you have just downloaded:

unzip chamilo-1.11.8-php7.zip

While you are in /var/www, rename the Chamilo directory, change the ownership of the files and remove the zip archive:

mv chamilo-1.11.8-php7 chamilo
chown -R www-data: chamilo/
rm -f chamilo-1.11.8-php7.zip

Now, open your favorite web browser and enter your domain in the address bar. You should see the Chamilo installation wizard.


 

Follow the installation process to complete the setup. It is OK to accept all default values. You should also consider changing the admin password so you can easily remember it. Once you are done with the installation of Chamilo on your server, you can refer to the official Chamilo documentation for more instructions on how to use and customize the software.


Of course, you don’t have to install Chamilo on Debian 9, if you use one of our Debian Cloud VPS Hosting services, in which case you can simply ask our expert Linux admins to install Chamilo on Debian 9 for you. They are available 24×7 and will take care of your request immediately.

PS. If you liked this post on how to install Chamilo on Debian 9, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks.


Source

Customize Your Payment Frequency and More with AWS Marketplace Flexible Payment Scheduler

Posted On: Oct 16, 2018

AWS Marketplace announces the launch of Flexible Payment Scheduler, a new feature that enables you to negotiate details such as the number of units, payment amounts, payment dates, and end-user licensing in your payment schedule.

Once you and an Independent Software Vendor (ISV) agree on the payment details, ISVs extend a customized offer through the Seller Private Offers feature, and have the option to select up to 36 payment installments. You can track details of your payments through your monthly bill from AWS.

Today, Flexible Payment Scheduler includes software from Armor, Cisco Stealthwatch, CloudHealth, CrowdStrike, and Splunk.

To learn more about AWS Marketplace visit here.

Source

Configure ProFTPd for SFTP on CentOS

This is a guide on how to configure ProFTPd for SFTP sessions. Secure File Transfer Protocol (SFTP) is a secure version of FTP which transfers files via the SSH protocol. ProFTPD can be reconfigured to serve SFTP sessions instead of the default FTP protocol. This guide assumes you already have an existing ProFTPD installation; if you do not, please follow How to Install Proftpd.

Edit /etc/proftpd.conf To Enable sFTP

nano /etc/proftpd.conf

Un-comment the following lines to load mod_sftp

#LoadModule mod_sftp.c
#LoadModule mod_sftp_pam.c

To

LoadModule mod_sftp.c
LoadModule mod_sftp_pam.c

Add the following to the end of the configuration (outside of the <Global></Global> block, so it runs separately):

<IfModule mod_sftp.c>
  SFTPEngine ON
  Port 2222
  SFTPHostKey /etc/ssh/ssh_host_rsa_key
  SFTPLog /var/log/proftpd/sftp.log
  SFTPCompression delayed
</IfModule>

SFTPEngine – This will enable SFTP
SFTPLog – This will set the log file for sftp connections
Port – This will set the port ProFTPd will listen on for SFTP connections
SFTPHostKey – This points to the SSH key.
SFTPCompression – This sets the compression method used during transfers
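Put together, the block can be appended in one step. The sketch below targets a staging copy (/tmp/proftpd.conf.example) for illustration; on the server the target is /etc/proftpd.conf.

```shell
# Append the SFTP block to a staging copy of proftpd.conf (illustrative path).
cat <<'EOF' >> /tmp/proftpd.conf.example
<IfModule mod_sftp.c>
  SFTPEngine ON
  Port 2222
  SFTPHostKey /etc/ssh/ssh_host_rsa_key
  SFTPLog /var/log/proftpd/sftp.log
  SFTPCompression delayed
</IfModule>
EOF
```

Once ProFTPd is restarted, you should be able to connect with an SFTP client, e.g. sftp -P 2222 user@server.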

Open the sFTP port in the firewall

Firewalld:

Enable firewall rule:

firewall-cmd --zone=public --add-port=2222/tcp --permanent

Load the new firewall

firewall-cmd --reload

Iptables:

Enable the firewall rule:

iptables -A INPUT -p tcp -m tcp --dport 2222 -j ACCEPT

Save the firewall rule:

iptables-save > /etc/sysconfig/iptables

Restart Proftpd

CentOS 7:

systemctl restart proftpd

CentOS 6:

service proftpd restart

That's all you need to do to configure ProFTPd to accept SFTP connections over SSH. You should now be able to connect via port 2222 using an SFTP client.

Jan 14, 2018 – LinuxAdmin.io

Source

Parrot Security OS 3.6 (ParrotSec OS) Installation on Oracle VirtualBox


This video tutorial shows Parrot Security OS 3.6 (ParrotSec OS) installation on Oracle VirtualBox step by step. This tutorial is also helpful for installing ParrotSec 3.6 on physical computer or laptop hardware. We also install Guest Additions on Parrot Security OS for better performance and usability features: automatic resizing of the guest display, shared folders, seamless mode, shared clipboard, improved performance, and drag and drop.

Parrot Security OS 3.6 Installation Steps:

  1. Create Virtual Machine on Oracle VirtualBox
  2. Start Parrot Security OS 3.6 Installation
  3. Install Guest Additions
  4. Test Guest Additions Features: Automatic Resizing Guest Display and Shared Clipboard

Installing Parrot Security OS 3.6 on Oracle VirtualBox

 

Parrot Security OS 3.6 New Features and Improvements

Parrot Security OS 3.6 is less memory-hungry. This was achieved by tuning up the startup daemon management system, along with minor fixes. As a result, Parrot 3.6 Lite 32-bit now uses less than 200 MB of RAM. Anonsurf was improved too; the section dedicated to anonymity and privacy is now very reliable and well tested, and some nightmares of previous anonsurf versions now belong to the past. This release also brings a new Parrot OS derivative named Parrot Air. It's similar to Parrot Full and comes with tools dedicated solely to wireless testing.

The Parrot team says that Parrot Air is just a proof of concept that will be improved in the future. Parrot Core is not only an awesome security-oriented platform, it is also suitable for more general-purpose derivative projects, and workstations and personal computers can only benefit from a very lightweight Debian-based system which is ready out of the box, with all the customization and configuration already done by the Parrot team.

Parrot Security OS Website:

https://www.parrotsec.org/

VirtualBox Guest Additions Features

The Guest Additions offer the following features below:

 

  1. Improved Video Support: While the virtual graphics card which VirtualBox emulates for any guest operating system provides all the basic features, the custom video drivers that are installed with the Guest Additions provide you with extra high and non-standard video modes as well as accelerated video performance.
  2. Mouse Pointer Integration: This provides seamless mouse support. A special mouse driver is installed in the guest OS, which exchanges information with the actual mouse driver on the host and allows users to control the guest mouse pointer.
  3. Time Synchronization: With the Guest Additions installed, VirtualBox can ensure that the guest’s system time is better synchronized with that of the host.
  4. Shared Folders: These provide an easy way to exchange files between the host and the guest.
  5. Seamless Windows: With this feature, the individual windows that are displayed on the desktop of the virtual machine can be mapped on the host’s desktop, as if the underlying application was actually running on the host.
  6. Shared Clipboard: With the Guest Additions installed, the clipboard of the guest operating system can optionally be shared with your host operating system.

Hope you found this Parrot Security OS 3.6 installation on Oracle VirtualBox tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Source

Linux Gaming With Valve Proton – For The Record

Posted on September 2, 2018

Matt Hartley

Datamation.com

and OpenLogic.com/wazi, Matt also once served as a co-host for a popular Linux-centric podcast. Matt has written about various software titles, such as Moodle, Joomla, WordPress, openCRX, Alfresco, Liferay and more. He also has additional Linux experience working with Debian based distributions, openSUSE, CentOS, and Arch Linux.

(Last Updated On: September 2, 2018)

Linux Gaming With Valve Proton. The chat room and I discuss the development of a WINE port called Proton, designed to allow Windows games to run on Linux through Steam. I share my experiences with it and provide a working demo as well. (Just a note: the AC was running during the video, so there is background noise in and out, with noise filtering only able to do so much. My comfort vs sound quality... guess which one wins.)


Source

Caprine | SparkyLinux

There is a new application available for Sparkers: Caprine.

What is Caprine?

Caprine is an elegant, unofficial and privacy-focused Facebook Messenger desktop app with many useful features.

Installation (64 bit only):
apt update
apt install caprine

Caprine

The application is licensed under the MIT License.
The project’s GitHub repository: github.com/sindresorhus/caprine
The project developer is Sindre Sorhus.

Source

Discord | SparkyLinux

There is a new application available for Sparkers: Discord.

What is Discord?

All-in-one voice and text chat for gamers that’s free, secure, and works on both your desktop and phone. Stop paying for TeamSpeak servers and hassling with Skype. Simplify your life.

Installation (64 bit only):
apt update
apt install sparky-aptus sparky-aptus-extra

It requires:
– sparky aptus >= 0.4.x
– sparky-aptus-extra >= 0.2.4
Then run APTus-> IM-> Discord.

Because Discord is a proprietary application, it is not stored in the Sparky repository.
The APTus-> IM script can download and install the application for you.

Two more applications have been added to APTus as well: Caprine and Ring.
Note that Ring cannot be installed on Sparky if EFL from the Sparky repos is installed.

Discord

The home page of Discord: discordapp.com

Source

Python Asyncio Tutorial | Linux Hint

The asyncio library was introduced in Python 3.4 to execute single-threaded concurrent programs. It is more popular than other libraries and frameworks for its impressive speed and varied uses. It is used in Python to create, execute and structure coroutines and to handle multiple tasks concurrently, without running the tasks in parallel. The major parts of this library are defined below:
Coroutine: A part of code that can be paused and resumed in a multi-threaded script. Coroutines work cooperatively: when one coroutine pauses, another can execute.
Event loop: Starts the execution of coroutines and handles input/output operations. It takes multiple tasks and completes them.
Task: Defines the execution and the result of a coroutine. You can assign multiple tasks using the asyncio library and run them asynchronously.
Future: Acts as storage where the result of a coroutine is placed after completion. This is useful when one coroutine has to wait for the result of another.
How you can implement these concepts of the asyncio library is shown in the tutorial at the source, using some simple examples.
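A minimal sketch of these pieces in action (using the modern asyncio.run API from Python 3.7+ rather than the 3.4-era loop calls; the names fetch/main are illustrative): two tasks wrap the same coroutine, and the event loop runs them concurrently.

```python
import asyncio

async def fetch(name, delay):
    # Pausing here hands control back to the event loop,
    # so the other coroutine can run (cooperative multitasking).
    await asyncio.sleep(delay)
    return f"{name} done"

async def main():
    # Tasks schedule the coroutines on the event loop.
    t1 = asyncio.create_task(fetch("first", 0.01))
    t2 = asyncio.create_task(fetch("second", 0.02))
    # gather awaits both tasks (Future-like objects) and
    # returns their results in argument order.
    return await asyncio.gather(t1, t2)

results = asyncio.run(main())
print(results)
```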

Source

Deploy Apache Kafka using Docker Compose

Microservice-oriented design patterns have made our applications more scalable than ever. RESTful API servers, front-ends and even databases are now horizontally scalable. Horizontal scaling is the act of adding new nodes to your application cluster to support additional workload. Conversely, it also allows reducing resource consumption when the workload decreases, in order to save costs. Horizontally scalable systems need to be distributed systems: systems that can survive the failure of multiple VMs, containers or network links and still stay online and healthy for the end user.

When talking about distributed systems like the above, we run into the problem of analytics and monitoring. Each node generates a lot of information about its own health (CPU usage, memory, etc.) and about application status, along with what the users are trying to do. These details must be recorded:

  1. In the same order in which they are created,
  2. Separated in terms of urgency (real-time analytics or batches of data), and most importantly,
  3. With a collection mechanism that is itself distributed and scalable, otherwise we are left with a single point of failure, something distributed system design was supposed to avoid.

Apache Kafka is pitched as a Distributed Streaming Platform. In Kafka lingo, Producers continuously generate data (streams) and Consumers are responsible for processing, storing and analysing it. Kafka Brokers are responsible for ensuring that in a distributed scenario the data can reach from Producers to Consumers without any inconsistency. A set of Kafka brokers and another piece of software called zookeeper constitute a typical Kafka deployment.

The stream of data from many producers needs to be aggregated, partitioned and sent to multiple consumers, there’s a lot of shuffling involved. Avoiding inconsistency is not an easy task. This is why we need Kafka.

The scenarios where Kafka can be used are quite diverse: anything from IoT devices to clusters of VMs to your own on-premise bare-metal servers. Anywhere a lot of 'things' simultaneously want your attention... That's not very scientific, is it? Well, the Kafka architecture is a rabbit-hole of its own and deserves an independent treatment. Let's first see a very surface-level deployment of the software.

Using Docker Compose

In whatever imaginative way you decide to use Kafka, one thing is certain: you won't be using it as a single instance. It is not meant to be used that way, and even if your distributed app needs only one instance (broker) for now, it will eventually grow, and you need to make sure that Kafka can keep up.

Docker Compose is the perfect partner for this kind of scalability. Instead of running Kafka brokers on different VMs, we containerize them and leverage Docker Compose to automate the deployment and scaling. Docker containers are highly scalable, both on single Docker hosts and across a cluster if we use Docker Swarm or Kubernetes. So it makes sense to leverage Compose to make Kafka scalable.

Let’s start with a single broker instance. Create a directory called apache-kafka and inside it create your docker-compose.yml.

$ mkdir apache-kafka
$ cd apache-kafka
$ vim docker-compose.yml

The following contents are going to be put in your docker-compose.yml file:

version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
Once you have saved the above contents in your compose file, from the same directory run:

$ docker-compose up -d

Okay, so what did we do here?

Understanding the Docker-Compose.yml

Compose will start two services as listed in the yml file. Let’s look at the file a bit closely. The first image is zookeeper which Kafka requires to keep track of various brokers, the network topology as well as synchronizing other information. Since both zookeeper and kafka services are going to be a part of the same bridge network (this is created when we run docker-compose up ) we don’t need to expose any ports. Kafka broker can talk to zookeeper and that’s all the communication zookeeper needs.

The second service is kafka itself and we are just running a single instance of it, that is to say one broker. Ideally, you would want to use multiple brokers in order to leverage the distributed architecture of Kafka. The service listens on port 9092 which is mapped onto the same port number on the Docker Host and that’s how the service communicates with the outside world.

The second service also has a couple of environment variables. First is KAFKA_ADVERTISED_HOST_NAME, set to localhost. This is the address Kafka advertises, and where producers and consumers can find it. In a real deployment, this should not be set to localhost but rather to the IP address or hostname at which the server can be reached on your network. Second is the hostname and port number of your zookeeper service. Since we named the zookeeper service... well, zookeeper, that's what the hostname is going to be within the Docker bridge network we mentioned.

Running a simple message flow

In order for Kafka to start working, we need to create a topic within it. The producer clients can then publish streams of data (messages) to the said topic and consumers can read the said datastream, if they are subscribed to that particular topic.

To do this, we need to start an interactive terminal inside the Kafka container. List the running containers (docker ps) to retrieve the Kafka container's name. For example, in this case our container is named apache-kafka_kafka_1.

With kafka container’s name, we can now drop inside this container.

$ docker exec -it apache-kafka_kafka_1 bash
bash-4.4#

Open two such terminals, one to use as the consumer and one as the producer.

Producer Side

In one of the prompts (the one you choose to be producer), enter the following commands:

## To create a new topic named test
bash-4.4# kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 \
--partitions 1 --topic test

## To start a producer that publishes a datastream from standard input to kafka
bash-4.4# kafka-console-producer.sh --broker-list localhost:9092 --topic test
>

The producer is now ready to take input from keyboard and publish it.

Consumer Side

Move on to the second terminal connected to your kafka container. The following command starts a consumer which feeds on the test topic:

$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test

Back to Producer

You can now type messages at the producer prompt, and every time you hit return, the new line appears at the consumer prompt. Each message is transmitted to the consumer through Kafka, and you can see it printed there.

Real-World Setups

You now have a rough picture of how a Kafka setup works. For your own use case, you need to set a hostname that is not localhost, you need multiple such brokers as part of your Kafka cluster, and finally you need to set up consumer and producer clients.
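As a sketch of such a setup, a multi-broker compose file might look like the fragment below. The advertised hostname kafka.example.com is an assumption to replace with your own, and the host-side port is left for Docker to choose so that scaled replicas don't collide:

```yaml
# Sketch of a scalable cluster; kafka.example.com is a placeholder hostname.
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092"            # no fixed host port, so multiple brokers can coexist
    environment:
      KAFKA_ADVERTISED_HOST_NAME: kafka.example.com
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

You could then bring up several brokers with docker-compose up -d --scale kafka=2 (older Compose versions use docker-compose scale kafka=2).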

Here are a few useful links:

  1. Confluent’s Python Client
  2. Official Documentation
  3. A useful list of demos

I hope you have fun exploring Apache Kafka.

Source
