How to Install Chamilo on Debian 9


Chamilo is a free e-learning and collaboration platform that aims to improve access to education on a global level. In this tutorial, we will show you how to install Chamilo on a Linux VPS running Debian 9 as the operating system.

Prerequisites

Before you start with the installation steps, make sure your server meets Chamilo's requirements:

  • Apache 2.2+
  • MySQL 5.6+ or MariaDB 5+
  • PHP 5.5+

Also, you need to have the following PHP modules installed and enabled:

  • php-intl
  • php-gd
  • php-mbstring
  • php-imagick
  • php-curl
  • php-mcrypt
  • php-xml
  • php-zip

If any of the software required for Chamilo is not installed or enabled on the server, you can follow the steps below to install it.

Install Apache, MySQL and PHP on a Debian 9 VPS

Our Debian 9 VPS hosting comes with Apache, MySQL and PHP pre-installed. To install all of the software, including the dependencies, first connect to your Linux VPS via SSH and then run the commands below:

apt-get update
apt-get install apache2 mysql-server php php-mysql php-intl php-gd php-mbstring php-imagick php-curl php-mcrypt php-opcache php-xml php-zip

To verify the installation completed successfully, run the following commands:

apache2 -v
# Server version: Apache/2.4.25 (Debian)
# Server built: 2018-06-02T08:01:13

mysql -V
# mysql Ver 15.1 Distrib 10.1.26-MariaDB, for debian-linux-gnu (x86_64) using readline 5.2

php -v
# PHP 7.0.30-0+deb9u1 (cli) (built: Jun 14 2018 13:50:25) ( NTS )
# Copyright (c) 1997-2017 The PHP Group
# Zend Engine v3.0.0, Copyright (c) 1998-2017 Zend Technologies
# with Zend OPcache v7.0.30-0+deb9u1, Copyright (c) 1999-2017, by Zend Technologies

Configure Apache for Chamilo on Debian 9

First of all, create an Apache virtual host for your new Chamilo website:

nano /etc/apache2/sites-available/yourdomain.com.conf

Add the following content:

<VirtualHost *:80>
    ServerAdmin admin@yourdomain.com
    DocumentRoot /var/www/chamilo
    ServerName yourdomain.com
    ServerAlias www.yourdomain.com
    <Directory /var/www/chamilo/>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Require all granted
    </Directory>
    ErrorLog /var/log/apache2/yourdomain.com-error_log
    CustomLog /var/log/apache2/yourdomain.com-access_log common
</VirtualHost>

Replace yourdomain.com with your actual domain name. Save and close the file. Then, enable the configuration and restart the Apache web server:

a2ensite yourdomain.com
systemctl restart apache2.service

Create MySQL database for Chamilo on Debian 9

Next, create a MySQL database for Chamilo. Log into your MySQL database server as root:

mysql -u root -p

Then, create a new database, a database user and set up a password using the commands below:

mysql> CREATE DATABASE chamilo;
mysql> GRANT ALL PRIVILEGES ON chamilo.* TO 'chamilo'@'localhost' IDENTIFIED BY 'YoUrPaSsWoRd';
mysql> FLUSH PRIVILEGES;
mysql> \q

Of course, you need to replace YoUrPaSsWoRd with a strong password of your choice.
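YoUrPaSsWoRd is just a placeholder; one quick way to generate a strong random password on the shell (a sketch using /dev/urandom, which is available on any Debian system):

```shell
# print a 20-character random alphanumeric password
tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20; echo
```

Copy the output into the GRANT statement above in place of YoUrPaSsWoRd.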

Configure PHP for Chamilo on Debian 9

In order for Chamilo to work properly, you need to adjust some PHP settings. First, find the location of the php.ini file which is currently in use. Run the following command in your terminal:

php --ini | grep "Loaded Configuration File"

Then, edit the following settings as follows:

max_execution_time = 300
max_input_time = 300
memory_limit = 128M
post_max_size = 128M
upload_max_filesize = 128M
max_file_uploads = 20

short_open_tag = Off
display_errors = Off

Save the changes you have made to php.ini and restart Apache again.

systemctl restart apache2.service
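If you prefer to script these php.ini edits rather than editing by hand, a sed loop works; this is a sketch demonstrated on a scratch copy, so on a real server point PHP_INI at the path reported by `php --ini` instead:

```shell
# Demo on a scratch copy; on a real server set PHP_INI to the loaded php.ini path
PHP_INI=$(mktemp)
printf 'max_execution_time = 30\nmemory_limit = 64M\n' > "$PHP_INI"
for setting in 'max_execution_time = 300' 'memory_limit = 128M'; do
  key=${setting%% *}
  # replace the existing line (commented out or not) with the new value
  sed -i "s|^;\{0,1\}${key}[[:space:]]*=.*|${setting}|" "$PHP_INI"
done
grep -E 'max_execution_time|memory_limit' "$PHP_INI"
```

Remember to restart Apache afterwards so the new values take effect.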

Download and Install Chamilo on Debian 9

The next step is to download and install Chamilo on your Debian VPS. Download the latest version of Chamilo from the official Chamilo download page. At the time of writing, the latest version is 1.11.8, which works well with PHP 7.0.x or later.

cd /var/www
wget https://github.com/chamilo/chamilo-lms/releases/download/v1.11.8/chamilo-1.11.8-php7.zip

Extract the zip archive you have just downloaded:

unzip chamilo-1.11.8-php7.zip

While you are in /var/www, rename the Chamilo directory, change the ownership of the files and remove the zip archive:

mv chamilo-1.11.8-php7 chamilo
chown -R www-data: chamilo/
rm -f chamilo-1.11.8-php7.zip

Now, open your favorite web browser and navigate to your domain. You should see the Chamilo installation wizard.

Follow the installation wizard to complete the setup. It is OK to accept all of the default values, but be sure to set an admin password that is both strong and easy for you to remember. Once you are done installing Chamilo on your server, you can refer to the official Chamilo documentation for more instructions on how to use and customize the software.


Of course, you don't have to install Chamilo on Debian 9 yourself if you use one of our Debian Cloud VPS Hosting services; in that case, you can simply ask our expert Linux admins to install Chamilo on Debian 9 for you. They are available 24/7 and will take care of your request immediately.

PS. If you liked this post on how to install Chamilo on Debian 9, please share it with your friends on the social networks using the buttons on the left or simply leave a reply below. Thanks.


Source

Customize Your Payment Frequency and More with AWS Marketplace Flexible Payment Scheduler

Posted On: Oct 16, 2018

AWS Marketplace announces the launch of Flexible Payment Scheduler, a new feature that enables you to negotiate details such as the number of units, payment amounts, payment dates, and end-user licensing in your payment schedule.

Once you and an Independent Software Vendor (ISV) agree on the payment details, ISVs extend a customized offer through the Seller Private Offers feature, and have the option to select up to 36 payment installments. You can track details of your payments through your monthly bill from AWS.

Today, Flexible Payment Scheduler includes software from Armor, Cisco Stealthwatch, CloudHealth, CrowdStrike, and Splunk.

To learn more, visit the AWS Marketplace website.

Source

Configure ProFTPd for SFTP on CentOS

This is a guide on how to configure ProFTPd for SFTP sessions. Secure File Transfer Protocol (SFTP) is a secure version of FTP which transfers files via the SSH protocol. ProFTPD can be reconfigured to serve SFTP sessions instead of the default FTP protocol. This guide assumes you already have an existing ProFTPD installation. If you do not already have it installed, please follow How to Install Proftpd.

Edit /etc/proftpd.conf To Enable SFTP

nano /etc/proftpd.conf

Un-comment the following lines to load mod_sftp

#LoadModule mod_sftp.c
#LoadModule mod_sftp_pam.c

To

LoadModule mod_sftp.c
LoadModule mod_sftp_pam.c
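Un-commenting those two lines can also be scripted with sed; the sketch below runs against a scratch copy, so on a real server set CONF=/etc/proftpd.conf instead:

```shell
# Demo on a scratch copy; on a real server use CONF=/etc/proftpd.conf
CONF=$(mktemp)
printf '#LoadModule mod_sftp.c\n#LoadModule mod_sftp_pam.c\n' > "$CONF"
# strip the leading '#' from both mod_sftp LoadModule lines
sed -i 's|^#\(LoadModule mod_sftp\)|\1|' "$CONF"
cat "$CONF"
```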

Add the following to the end of the configuration (outside of the <global> </global> block to run it separately)

<IfModule mod_sftp.c>
    SFTPEngine on
    Port 2222
    SFTPLog /var/log/proftpd/sftp.log
    SFTPHostKey /etc/ssh/ssh_host_rsa_key
    SFTPCompression delayed
</IfModule>

SFTPEngine – enables SFTP
SFTPLog – sets the log file for SFTP connections
Port – sets the port ProFTPd will listen on for SFTP connections
SFTPHostKey – points to the SSH host key used for SFTP
SFTPCompression – sets the compression method used during transfers

Open the SFTP port in the firewall

Firewalld:

Enable firewall rule:

firewall-cmd --zone=public --add-port=2222/tcp --permanent

Reload the firewall rules:

firewall-cmd --reload

Iptables:

Enable the firewall rule:

iptables -A INPUT -p tcp -m tcp --dport 2222 -j ACCEPT

Save the firewall rule:

iptables-save > /etc/sysconfig/iptables

Restart Proftpd

CentOS 7:

systemctl restart proftpd

CentOS 6:

service proftpd restart

That's all you need to do to configure ProFTPd to accept SFTP connections over SSH. You should now be able to connect via port 2222 using an SFTP client.

Jan 14, 2018LinuxAdmin.io

Source

Parrot Security OS 3.6 (ParrotSec OS) Installation on Oracle VirtualBox


This video tutorial shows the Parrot Security OS 3.6 (ParrotSec OS) installation on Oracle VirtualBox step by step. The tutorial is also helpful for installing ParrotSec 3.6 on physical computer or laptop hardware. We also install Guest Additions on Parrot Security OS for better performance and usability features: Automatic Resizing Guest Display, Shared Folders, Seamless Mode, Shared Clipboard, Improved Performance, and Drag and Drop.

Parrot Security OS 3.6 Installation Steps:

  1. Create Virtual Machine on Oracle VirtualBox
  2. Start Parrot Security OS 3.6 Installation
  3. Install Guest Additions
  4. Test Guest Additions Features: Automatic Resizing Guest Display and Shared Clipboard

Installing Parrot Security OS 3.6 on Oracle VirtualBox

Parrot Security OS 3.6 New Features and Improvements

Parrot Security OS 3.6 is less memory-hungry. This was achieved by tuning up the startup daemon management system, along with minor fixes. As a result, Parrot 3.6 Lite 32-bit now uses less than 200 MB of RAM. Anonsurf was improved too, and the section dedicated to anonymity and privacy is now very reliable and well tested; some nightmares of previous anonsurf versions now belong to the past. This release also brings a new Parrot OS derivative named Parrot Air. It is similar to Parrot Full and comes with tools dedicated solely to wireless testing.

The Parrot team says that Parrot Air is just a proof of concept that will be improved in the future. Parrot Core is not only an awesome security-oriented platform; it is also suitable for more general-purpose derivative projects, and workstations and personal computers can only benefit from a very lightweight Debian-based system which is ready out of the box, with all the customization and configuration already done by the Parrot team.

Parrot Security OS Website:

https://www.parrotsec.org/

VirtualBox Guest Additions Features

The Guest Additions offer the following features:

  1. Improved Video Support: While the virtual graphics card which VirtualBox emulates for any guest operating system provides all the basic features, the custom video drivers that are installed with the Guest Additions provide you with extra high and non-standard video modes as well as accelerated video performance.
  2. Mouse Pointer Integration: This provides seamless mouse support. A special mouse driver is installed in the guest OS, which exchanges information with the actual mouse driver on the host and allows users to control the guest mouse pointer.
  3. Time Synchronization: With the Guest Additions installed, VirtualBox can ensure that the guest’s system time is better synchronized with that of the host.
  4. Shared Folders: These provide an easy way to exchange files between the host and the guest.
  5. Seamless Windows: With this feature, the individual windows that are displayed on the desktop of the virtual machine can be mapped on the host’s desktop, as if the underlying application was actually running on the host.
  6. Shared Clipboard: With the Guest Additions installed, the clipboard of the guest operating system can optionally be shared with your host operating system.

Hope you found this Parrot Security OS 3.6 installation on Oracle VirtualBox tutorial helpful and informative. Please consider sharing it. Your feedback and questions are welcome!

Source

Linux Gaming With Valve Proton – For The Record

Posted on September 2, 2018

Matt Hartley writes for Datamation.com and OpenLogic.com/wazi, and once served as a co-host for a popular Linux-centric podcast. Matt has written about various software titles, such as Moodle, Joomla, WordPress, openCRX, Alfresco, Liferay and more. He also has additional Linux experience working with Debian-based distributions, openSUSE, CentOS, and Arch Linux.

(Last Updated On: September 2, 2018)

Linux Gaming With Valve Proton. The chat room and I discuss the development of a WINE port called Proton, designed to allow Windows games to run on Linux with Steam. I share my experiences with it and provide a working demo as well. (Just a note: the AC was running during the video, so yes, there is background noise in and out, with noise filtering only able to do so much. My comfort vs. sound quality… guess which one wins.)


Source

Caprine | SparkyLinux

There is a new application available for Sparkers: Caprine.

What is Caprine?

Caprine is an elegant, unofficial, and privacy-focused Facebook Messenger desktop app with many useful features.

Installation (64 bit only):
apt update
apt install caprine


The application is licensed under the MIT License.
The project’s GitHub repository: github.com/sindresorhus/caprine
The project developer is Sindre Sorhus.

Source

Discord | SparkyLinux

There is a new application available for Sparkers: Discord.

What is Discord?

All-in-one voice and text chat for gamers that’s free, secure, and works on both your desktop and phone. Stop paying for TeamSpeak servers and hassling with Skype. Simplify your life.

Installation (64 bit only):
apt update
apt install sparky-aptus sparky-aptus-extra

It requires:
– sparky-aptus >= 0.4.x
– sparky-aptus-extra >= 0.2.4
Then run APTus -> IM -> Discord.

Because Discord is a proprietary application, it is not stored in the Sparky repository.
The APTus -> IM script can download and install the application for you.

Two more applications have been added to APTus as well: Caprine and Ring.
Note that Ring cannot be installed on Sparky if EFL from the Sparky repos is installed.


The home page of Discord: discordapp.com

Source

Python Asyncio Tutorial | Linux Hint

The asyncio library was introduced in Python 3.4 to execute single-threaded concurrent programs. It is more popular than many other libraries and frameworks for its impressive speed and varied uses. The library is used in Python to create, execute and structure coroutines and to handle multiple tasks concurrently, without performing the tasks in parallel. The major parts of this library are defined below:
Coroutine: A part of the code that can be paused and resumed in a single-threaded concurrent script is called a coroutine. Coroutines work cooperatively: when one coroutine pauses, another coroutine can execute.
Event loop: The event loop starts the execution of coroutines and handles input/output operations. It takes multiple tasks and completes them.
Task: The execution and the result of a coroutine are defined by a task. You can assign multiple tasks using the asyncio library and run the tasks asynchronously.
Future: A future acts as storage where the result of a coroutine is placed after completion. This is useful when one coroutine needs to wait for the result of another.
How you can implement the above concepts of the asyncio library is shown in this tutorial using some simple examples at the source.
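As a minimal sketch of those pieces working together (asyncio.run() requires Python 3.7+; on 3.4–3.6 you would drive the loop with loop.run_until_complete() instead):

```python
import asyncio

async def square(n):
    # Coroutine: pauses at the await, letting other coroutines run
    await asyncio.sleep(0.01)
    return n * n

async def main():
    # Schedule three coroutines concurrently and gather their results as tasks
    return await asyncio.gather(square(2), square(3), square(4))

# The event loop drives everything: asyncio.run() creates the loop,
# runs main() to completion, and closes the loop.
print(asyncio.run(main()))  # [4, 9, 16]
```

While each coroutine is sleeping, the event loop switches to the others, so all three complete in roughly the time of one sleep.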

Source

Deploy Apache Kafka using Docker Compose

Microservice-oriented design patterns have made our applications more scalable than ever. RESTful API servers, front-ends and even databases are now horizontally scalable. Horizontal scaling is the act of adding new nodes to your application cluster to support additional workload. Conversely, it also allows reducing resource consumption when the workload decreases, in order to save costs. Horizontally scalable systems need to be distributed systems: systems that can survive the failure of multiple VMs, containers or network links and still stay online and healthy for the end user.

When talking about distributed systems like the above, we run into the problem of analytics and monitoring. Each node generates a lot of information about its own health (CPU usage, memory, etc.) and about application status, along with what the users are trying to do. These details must be recorded in:

  1. The same order in which they are created,
  2. Separated in terms of urgency (real-time analytics or batches of data), and most importantly,
  3. The mechanism with which they are collected must itself be distributed and scalable, otherwise we are left with a single point of failure, which is exactly what distributed system design was supposed to avoid.

Apache Kafka is pitched as a Distributed Streaming Platform. In Kafka lingo, Producers continuously generate data (streams) and Consumers are responsible for processing, storing and analysing it. Kafka Brokers are responsible for ensuring that, in a distributed scenario, the data can reach the Consumers from the Producers without any inconsistency. A set of Kafka brokers and another piece of software called zookeeper constitute a typical Kafka deployment.

The stream of data from many producers needs to be aggregated, partitioned and sent to multiple consumers; there is a lot of shuffling involved. Avoiding inconsistency is not an easy task. This is why we need Kafka.

The scenarios where Kafka can be used are quite diverse: anything from IoT devices to clusters of VMs to your own on-premise bare-metal servers. Anywhere a lot of 'things' simultaneously want your attention… That's not very scientific, is it? Well, the Kafka architecture is a rabbit hole of its own and deserves independent treatment. Let's first look at a very surface-level deployment of the software.

Using Docker Compose

In whatever imaginative way you decide to use Kafka, one thing is certain: you won't be using it as a single instance. It is not meant to be used that way, and even if your distributed app needs only one instance (broker) for now, it will eventually grow, and you need to make sure that Kafka can keep up.

Docker Compose is the perfect partner for this kind of scalability. Instead of running Kafka brokers on different VMs, we containerize them and leverage Docker Compose to automate the deployment and scaling. Docker containers are highly scalable on single Docker hosts as well as across a cluster if we use Docker Swarm or Kubernetes, so it makes sense to leverage Compose to make Kafka scalable.

Let’s start with a single broker instance. Create a directory called apache-kafka and inside it create your docker-compose.yml.

$ mkdir apache-kafka
$ cd apache-kafka
$ vim docker-compose.yml

The following contents are going to be put in your docker-compose.yml file:

version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: localhost
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181

Once you have saved the above contents in your compose file, run the following from the same directory:

$ docker-compose up -d

Okay, so what did we do here?

Understanding the docker-compose.yml

Compose will start two services as listed in the yml file. Let's look at the file a bit more closely. The first image is zookeeper, which Kafka requires to keep track of the various brokers and the network topology, as well as for synchronizing other information. Since both the zookeeper and kafka services are going to be part of the same bridge network (created when we run docker-compose up), we don't need to expose any ports. The Kafka broker can talk to zookeeper, and that's all the communication zookeeper needs.

The second service is kafka itself and we are just running a single instance of it, that is to say one broker. Ideally, you would want to use multiple brokers in order to leverage the distributed architecture of Kafka. The service listens on port 9092 which is mapped onto the same port number on the Docker Host and that’s how the service communicates with the outside world.

The second service also has a couple of environment variables. First is KAFKA_ADVERTISED_HOST_NAME, set to localhost. This is the address at which Kafka runs, and where producers and consumers can find it. In production, this should not be set to localhost but rather to the IP address or hostname at which the server can be reached on your network. Second is the hostname and port number of your zookeeper service. Since we named the zookeeper service zookeeper, that is what its hostname will be within the docker bridge network we mentioned.

Running a simple message flow

In order for Kafka to start working, we need to create a topic within it. The producer clients can then publish streams of data (messages) to the said topic and consumers can read the said datastream, if they are subscribed to that particular topic.

To do this, we need to start an interactive terminal attached to the Kafka container. List the running containers to retrieve the kafka container's name. For example, in this case our container is named apache-kafka_kafka_1.

With the kafka container's name, we can now drop inside the container:

$ docker exec -it apache-kafka_kafka_1 bash
bash-4.4#

Open two such terminals, using one as the consumer and the other as the producer.

Producer Side

In one of the prompts (the one you chose for the producer), enter the following commands:

## To create a new topic named test
bash-4.4# kafka-topics.sh --create --zookeeper zookeeper:2181 --replication-factor 1 \
--partitions 1 --topic test

## To start a producer that publishes a datastream from standard input to kafka
bash-4.4# kafka-console-producer.sh --broker-list localhost:9092 --topic test
>

The producer is now ready to take input from the keyboard and publish it.

Consumer Side

Move on to the second terminal connected to your kafka container. The following command starts a consumer which feeds on the test topic:

$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test

Back to Producer

You can now type messages in the producer prompt, and every time you hit return a new line is printed in the consumer prompt. Each message gets transmitted to the consumer through Kafka, and you can see it printed at the consumer prompt.

Real-World Setups

You now have a rough picture of how a Kafka setup works. For your own use case, you need to set a hostname which is not localhost, you need multiple such brokers to be part of your kafka cluster, and finally you need to set up consumer and producer clients.
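As a sketch of that direction, the single-broker compose file above might grow into something like the following. The host port range and the `--scale` usage follow common wurstmeister/kafka examples, and kafka.example.com is a placeholder for a hostname actually reachable on your network, so treat the details as assumptions to verify against your setup:

```yaml
version: '3'
services:
  zookeeper:
    image: wurstmeister/zookeeper

  kafka:
    image: wurstmeister/kafka
    ports:
      # a host port range lets Compose map a distinct port per scaled broker
      - "9092-9094:9092"
    environment:
      # must be reachable by clients on your network, not localhost
      KAFKA_ADVERTISED_HOST_NAME: kafka.example.com
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
```

You could then start three brokers with `docker-compose up -d --scale kafka=3`.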

Here are a few useful links:

  1. Confluent’s Python Client
  2. Official Documentation
  3. A useful list of demos

I hope you have fun exploring Apache Kafka.

Source

How to Mute Spotify Ads on Arch Linux

By Francesco Mondello on October 15, 2018

Spotify-AdKiller

In this tutorial we will see how to mute Spotify ads on Arch Linux with a very simple and lightweight script I found on GitHub called Spotify-AdKiller.

If you’re using Spotify Free on your Arch Linux system (or any kind of Linux distro), this useful script will be able to mute its annoying audio ads.

Spotify-AdKiller allows you to block Spotify ads in 3 different ways:

  • Simple
  • Interstitial
  • Continuous

We will look at these 3 different modes in detail after the installation!

How to install Spotify-AdKiller on Arch Linux

Spotify-AdKiller is also packaged for Ubuntu and openSUSE, but we will look in detail at how to install and configure it on Arch Linux.

Since there's a package in the AUR, we can install it using an AUR helper or with these commands:

git clone https://aur.archlinux.org/spotify-adkiller-git.git
cd spotify-adkiller-git
makepkg -si

After installing it, we will see a menu entry called Spotify (AdKiller) with the famous Spotify green icon.

How to Configure Spotify-AdKiller

As mentioned before, Spotify-AdKiller has 3 different modes of operation:

  • simple: mute Spotify, unmute when ad is over
  • interstitial: mute Spotify, play random local track, stop and unmute when ad is over
  • continuous: mute Spotify, play random local track, stop and unmute when track is over

The default ad blocking mode is continuous.

How to block Spotify Ads

In order to completely mute Spotify ads on Arch Linux, open the file $HOME/.config/Spotify-AdKiller/Spotify-AdKiller.cfg (if it doesn't exist, run the Spotify-AdKiller script once to generate it).

Set the CUSTOM_MODE option to simple.
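The resulting line in Spotify-AdKiller.cfg would look roughly like this (the exact key name and quoting may differ between versions of the script, so check the config file it ships):

```
CUSTOM_MODE="simple"
```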

From now on, Spotify ads will be muted. That said, if you are a frequent Spotify user, do consider getting a Premium subscription! 🙂

You can get more info about Spotify-AdKiller on GitHub.

Source
