elementary OS Juno Released, Plasma 5.14.1 Is Out, Chrome 70 Now Available, Docker Raises New Funding and New Badges for Firefox Users

News briefs for October 17, 2018.

elementary OS Juno is now available. This new major version sports a ton of updates
and improvements with three major goals: 1) “provide a more refined user
experience”; 2) “improve productivity for new and seasoned users alike”; and
3) “take our developer platform to the next level”.

The KDE Project yesterday announced the first point release of the KDE Plasma 5.14
desktop series. Plasma 5.14.1 adds new translations and some important
bugfixes. See the changelog for further details.

Chrome 70 is now available. This release removes the controversial change
from the last version and now allows users to stop the browser from
automatically signing in to their Google accounts after logging in to one of
its apps, The Verge reports. You still need to opt out and specifically change
this setting, however. Other changes include support for progressive web apps
on Windows. See the “New in Chrome 70” post for more information on this
release.

Docker has raised $92 million in new funding. According to
TechCrunch,
“the new funding is a signal that while Docker may have lost its race with
Google’s Kubernetes over whose toolkit would be the most widely adopted,
the San Francisco-based company has become the champion for businesses that
want to move to the modern hybrid application development and information
technology operations model of programming.”

Mozilla has created badges for Firefox users who want to show their support.
You can grab the code for the badges here. Mozilla notes that the
“images are hosted on a Mozilla CDN for convenience and performance only. We
do no tracking of traffic to the CDN”.

Source

Hyperledger Continues Strong Momentum with 14 New Members

More than 270 organizations now support leading open source blockchain project, including FedEx & Honeywell International Inc.

SAN FRANCISCO – (September 26, 2018) Hyperledger, an open source collaborative effort created to advance cross-industry blockchain technologies, today announced 14 members have joined its growing global community. More than 270 organizations are now contributing to the growth of Hyperledger’s open source distributed ledger frameworks and tools.

“Our community ranges from technology giants and industry leaders to start-ups, service providers and academics,” said Brian Behlendorf, Executive Director, Hyperledger. “We are gaining traction around the world in market segments from finance to healthcare and government to logistics. This growth and diversity is a signal of the increasing recognition of the strategic value of enterprise blockchain and commitment to the adoption and development of open source frameworks to drive new business models.”

Hyperledger is a multi-project, multi-stakeholder effort that includes 10 business blockchain and distributed ledger technologies. Hyperledger enables organizations to build robust, industry-specific applications, platforms and hardware systems to support their individual business transactions by creating enterprise-grade, open source distributed ledger frameworks and code bases. The latest general members to join the community are: BetaBlocks, Blockchain Educators, Cardstack, Constellation Labs, Elemential Labs, FedEx, Honeywell International Inc., KoreConX, Northstar Venture Technologies, Peer Ledger, Syncsort and Wanchain.

Hyperledger supports an open community that values contributions and participation from various entities. As such, pre-approved non-profits, open source projects and government entities can join Hyperledger at no cost as associate members. Associate members joining this month include Ministry of Citizens’ Services of British Columbia, Canada, and the Government of Bermuda.

New member quotes:

BetaBlocks

“We are proud to join Hyperledger and thrilled about the opportunity to collaborate with some of the most talented individuals in the distributed ledger space,” said Antonio Manueco, CTO of BetaBlocks. “At BetaBlocks, we educate and help entrepreneurs build the next generation of amazing companies using blockchain through our co-building program. It is important for us to support The Linux Foundation and Hyperledger in order to help the open source community continue building world class software. We are looking forward to collaborating with some of the other important members of these great organizations.”

Blockchain Educators

“Blockchain Educators is excited to join the Linux Foundation and Hyperledger,” said Thomas Rivera, CEO of Blockchain Educators. “We firmly believe blockchain technology will usher in the next generation of business activity and that Hyperledger is at the forefront. Blockchain Educators is fully dedicated to increasing awareness and enhancing blockchain education for beginners, entrepreneurs, corporations and developers. We look forward to working closely with Hyperledger and its community members.”

Cardstack

“To achieve broad adoption of blockchain technologies, we need to focus on orchestrating cohesive experiences on top of decentralized protocols as well as cloud services, so new value networks can form over existing data assets and market relationships,” said Chris Tse, Founding Director of Cardstack. “We are honored to join the Linux Foundation and Hyperledger to contribute open source software, compile architecture patterns, and share solution templates to bring real use cases to the marketplace. Since 2017, we have been developing on the Hyperledger Sawtooth platform with our client-partner dotBC and their network of music industry innovators to architect and develop a new decentralized media rights registry. Cardstack is excited to leverage other Hyperledger projects and share our experience building decentralized ecosystems with our open-sourced framework and tools.”

Constellation Labs

“We are honored to be a part of the premier organization in technology and the blockchain space. Constellation is working with the Hyperledger community to explore new architectures and frameworks that will usher in a new era of applications built on distributed ledger technology,” said Benjamin Diggles, VP of Business Development at Constellation Labs. “Contributing to this project will be imperative to our focus of applying blockchain scalability to real-world, viable enterprise use cases.”

Elemential Labs

“We’re excited to join Hyperledger and bring blockchain infrastructure to the Indian growth story,” said Raunaq Vaisoha, CEO at Elemential Labs. “With our membership, we look to offer additional value to our customers.”

FedEx

“We believe that blockchain has big implications in supply chain, transportation and logistics,” said Kevin Humphries, Senior Vice President, IT, FedEx Services. “We are excited for the opportunity to collaborate with the Hyperledger community as we continue to explore the applications and help set the standards for wide-scale blockchain adoption in our industry and others.”

Honeywell International Inc.

“Honeywell Aerospace, whose solutions are found on virtually every commercial, defense and space aircraft in service today, is pleased to join Hyperledger,” said Sathish Muthukrishnan, Chief Digital and Information Officer for Honeywell Aerospace. “We look forward to leveraging the blockchain technology to solve critical customer needs and enable our position as a leading Software-Industrial Company through the Power of Connected.”

KoreConX

“In Hyperledger Fabric, we found a credible blockchain platform designed especially for financial transactions,” said Oscar A. Jofre, CEO of KoreConX. “This is a highly professional community of technologists who are thoughtful and focused on creating enterprise-class applications, keeping safety and security foremost. A number of respected financial institutions are also building applications with Fabric, which raises our level of confidence and comfort.”

Northstar Venture Technologies

“Northstar is thrilled to join Hyperledger and The Linux Foundation,” said Dean Sutton, CEO of Northstar Venture Technologies. “Having direct access to the Hyperledger resources and community is of great value in our work with enterprise, financial services and capital markets organizations. We look forward to being active contributors to this open standard of ongoing innovation and bringing new platform solutions to market for Northstar and our clients.”

Peer Ledger

“Peer Ledger is the creator of the MIMOSI blockchain application for Responsible Sourcing and an Identity Bridge product, which provides identity resolution services among identity systems and multiple blockchains. With MIMOSI, we showcase how the highly modular Hyperledger Fabric can be implemented using a hybrid governance model, providing subscribers with the now-accustomed convenience of zero installation, while still using the Fabric’s distributed consensus mechanism correctly to ensure data consistency and to detect and prevent double spend. No blockchain outside the Hyperledger family would allow us as much flexibility around implementation governance,” said Dawn Jutla, CEO at Peer Ledger. “Hyperledger Fabric’s flexibility and its community of developers enabled our firm to produce this sophisticated blockchain-based SaaS for Responsible Sourcing in under two years. We are thrilled to join and to contribute further as a member of the Hyperledger community.”

Syncsort

“We see blockchain emerging as a next-generation platform with tremendous potential that can be enabled by data integration and data quality,” said Tendü Yoğurtçu, CTO, Syncsort. “Connecting blockchain to existing infrastructure and legacy platforms across the enterprise is consistent with Syncsort’s leadership in Big Iron to Big Data, making enterprise-wide data accessible to next generation platforms and applying it to pressing business use cases. We are excited to join Hyperledger and to identify areas where Syncsort can contribute to maturing the platform and making its benefits more achievable for our customers.”

Wanchain

“Wanchain is honored to join Hyperledger and become a part of this ecosystem to advance an open standard for distributed ledger technology,” said Jack Lu, Founder and CEO of Wanchain. “Wanchain is working to bridge blockchains and connect the world’s digital assets. We are excited to collaborate with member organizations and enterprises to further develop and advance the industry as a whole. The Wanchain team is looking forward to cooperating with such a diverse and global community of industry leaders and contributing our insights on cross-chain technologies.”

Join industry peers in helping build and shape the ecosystem for blockchain technologies, use cases and applications. More information on joining Hyperledger as a member organization can be found here: https://www.hyperledger.org/members/join.

About Hyperledger

Hyperledger is an open source collaborative effort created to advance cross-industry blockchain technologies. It is a global collaboration, hosted by The Linux Foundation, including leaders in finance, banking, Internet of Things, supply chains, manufacturing and technology. To learn more, visit: https://www.hyperledger.org/.
Source

Raspberry PI 3 model B+ Released: Complete specs and pricing

A new version of the Raspberry Pi 3, the Model B+, has been released, and it is an incredible update over the older model. Just over two years ago, I got the Raspberry Pi 3 Model B. It was my first 64-bit ARM board, and it came with a 64-bit CPU. Here are the complete specs for the updated 64-bit credit-card-size computer.

From the blog post:

Dual-band wireless LAN and Bluetooth are provided by the Cypress CYW43455 “combo” chip, connected to a Proant PCB antenna similar to the one used on Raspberry Pi Zero W. Compared to its predecessor, Raspberry Pi 3B+ delivers somewhat better performance in the 2.4GHz band, and far better performance in the 5GHz band

Previous Raspberry Pi devices have used the LAN951x family of chips, which combine a USB hub and 10/100 Ethernet controller. For Raspberry Pi 3B+, Microchip have supported us with an upgraded version, LAN7515, which supports Gigabit Ethernet. While the USB 2.0 connection to the application processor limits the available bandwidth, we still see roughly a threefold increase in throughput compared to Raspberry Pi 3B.

The Raspberry Pi model 3 B+ Specs

  1. SOC: Broadcom BCM2837B0, Cortex-A53 (ARMv8) 64-bit SoC
  2. CPU: 1.4GHz 64-bit quad-core ARM Cortex-A53 CPU
  3. RAM: 1GB LPDDR2 SDRAM
  4. WIFI: Dual-band 802.11ac wireless LAN (2.4GHz and 5GHz) and Bluetooth 4.2
  5. Ethernet: Gigabit Ethernet over USB 2.0 (max 300 Mbps). Power-over-Ethernet support (with separate PoE HAT). Improved PXE network and USB mass-storage booting.
  6. Thermal management: Yes
  7. Video: Yes – VideoCore IV 3D. Full-size HDMI
  8. Audio: Yes
  9. USB 2.0: 4 ports
  10. GPIO: 40-pin
  11. Power: 5V/2.5A DC power input
  12. Operating system support: Linux and Unix

Key Improvements from Pi 3 Model B to Pi 3 Model B+

  • Improved compatibility for network booting
  • New support for Power over Ethernet
  • Processor speed has increased from 1.2GHz on the Pi 3 to 1.4GHz
  • New dual-band wireless LAN chip, 2.4GHz and 5GHz, with embedded antenna
  • Bluetooth 4.2 Low Energy
  • Faster onboard Ethernet, up to 300Mbps

The only downside

Gigabit Ethernet is a nice upgrade. However, storage and network share the same USB 2.0 bus, so you will not get full Gigabit speed; in other words, you get Gigabit connectivity at a theoretical maximum throughput of 300Mbps. But then, you can’t get everything for $35.
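If you want to see this ceiling on your own board, a quick iperf3 run against another machine on your wired LAN makes it visible. A minimal sketch, assuming iperf3 is installed on both ends (sudo apt install iperf3) and 192.168.1.10 is a placeholder for the other machine:

$ iperf3 -s                      # on the other machine
$ iperf3 -c 192.168.1.10 -t 30   # on the Raspberry Pi 3 B+

Throughput should level off around the 300Mbps figure quoted above rather than at full Gigabit speed.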

Raspberry PI 3 model B+ pricing

The price is the same as the existing Raspberry Pi 3 Model B:

  1. USD – $35

For more info see this page.

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

Source

Arm Launches Mbed Linux and Extends Pelion IoT Service | Linux.com

Politics and international relations may be fraught with acrimony these days, but the tech world seems a bit friendlier of late. Last week Microsoft joined the Open Invention Network and agreed to grant a royalty-free, unrestricted license of its 60,000-patent portfolio to other OIN members, thereby enabling Android and Linux device manufacturers to avoid exorbitant patent payments. This week, Arm and Intel kept up the happy talk by agreeing to a partnership involving IoT device provisioning.

Arm’s recently announced Pelion IoT Platform will align with Intel’s Secure Device Onboard (SDO) provisioning technology to make it easier for IoT vendors and customers to onboard both x86 and Arm-based devices using a common Pelion platform. Arm also announced Pelion-related partnerships with myDevices and Arduino (see further below).

In another nod to Intel, Arm unveiled a new, IoT focused Mbed Linux OS distribution that combines the Linux kernel with tools and recipes from the Intel-backed Yocto Project. The distro also integrates security and IoT connectivity code from its open source Mbed RTOS.

When Pelion was announced, Arm mentioned cross-platform support, but there were few details. Now with the Intel SDO deal and the launch of Mbed Linux OS, Arm has formally expanded Pelion from an MCU-only IoT data aggregation platform to one that supports more advanced x86 and Cortex-A based systems.

Mbed Linux OS

The early stage Mbed Linux OS will be released by the end of the year as an invitation-only developer preview. Both the OS source code and related test suites will eventually be open sourced.

In the Mbed Linux OS announcement, Arm’s Mark Wright pitches the distro as a secure, IoT focused “sibling” to the Cortex-M focused Mbed that is designed for Cortex-A processors. Arm will support Mbed Linux with its MCU-oriented Mbed community of 350,000 developers and will offer support for popular Linux development boards and modules. The Softbank-owned company will also supply optional commercial support.

Like Mbed, Mbed Linux will be “deeply integrated” with the Pelion IoT System in order “to simplify lifecycle management.” The Pelion support provides device provisioning, connectivity, and updates, thereby enabling development teams to update the OS and the applications independently, says Wright. Working with the Pelion Device Management Application, Mbed Linux OS can “simplify in-field provisioning and eradicate the need for legacy serial connections for initial device configuration,” says Arm.

Mbed Linux will support Arm’s Platform Security Architecture and hardware based TrustZone security to enable secure, signed boot and signed updates. It will also enable deployment of applications in secure, OCI-compliant containers.

Arm did not specify which components of the Yocto Project code it would integrate with Mbed. In late August, Arm and Facebook joined Intel and TI as Platinum members of the Yocto Project. The Linux Foundation hosted project was launched by Intel but is now widely used on Arm as well as x86 based IoT devices.

Despite common references to “Yocto Linux,” Yocto Project is not a distribution, but rather a collection of open source templates, tools, and methods for creating custom embedded Linux-based systems. A Yocto foundation underlies most major commercial Linux distributions such as Wind River Linux and Mentor Embedded Linux and is often spun into custom builds by DIY developers, especially for resource constrained IoT devices.
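For readers who have not used it, the canonical Yocto workflow gives a feel for what that collection of templates and tools looks like in practice: clone the poky reference build system, source the environment script, and bake an image. This is a generic sketch of the upstream quick start (sumo was the current Yocto release at the time of writing), not anything Arm has published for Mbed Linux:

$ git clone -b sumo git://git.yoctoproject.org/poky
$ cd poky
$ source oe-init-build-env          # creates and enters the build/ directory
$ bitbake core-image-minimal        # builds a minimal image for the machine set in conf/local.conf

Commercial distros and custom IoT builds alike start from this same recipe-and-layer machinery.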

We saw no mention of a contribution for the Arm-backed Linaro initiative for either Mbed Linux or Pelion. Linaro, which oversees the 96Boards project, develops open source embedded Linux and Android software components. The Yocto and Linaro projects were initially seen as rivals, but they have grown increasingly complementary. Linaro’s Arm toolchain can be used within Yocto Project, as well as with the related OpenEmbedded build environment and Bitbake build engine.

Developers can sign up for the limited number of invites to participate in the upcoming developer preview of Mbed Linux OS here.

Arm’s Pelion partnerships

Arm’s Pelion IoT Platform will soon run on devices with Intel’s recently launched Secure Device Onboard (SDO) service, enabling customers to deploy both Arm and x86 based systems controlled by the common Pelion platform. “We believe this collaboration is a big step forward for greater customer choice, fewer device SKUs, higher volume and velocity through IoT supply chains and lower deployment cost,” says Arm.

The SDO “zero-touch onboarding service” depends on Intel Enhanced Privacy ID (EPID) data embedded in chips to validate and provision IoT devices automatically. SDO automatically discovers and provisions compliant devices during installation. This “late binding” approach reduces provisioning times from 20 minutes to an hour down to a few minutes, says Intel.

Unlike PKI based authentication methods, “SDO does not insert Intel into the authentication path.” Instead, it brokers a rendezvous URL to the Intel SDO service where Intel EPID opens a private authentication channel between the device and the customer’s IoT platform.

The Pelion IoT Platform offers its own scheme for provisioning and configuration of devices using cryptographic identities built into Cortex-M MCUs running Mbed. With the new Mbed Linux, Pelion will also be able to accept devices that run on Cortex-A chips with TrustZone security.

Pelion combines Arm’s Mbed Cloud connected Mbed IoT Device Management Platform with technologies it acquired via two 2018 acquisitions. The new Treasure Data unit supplies data management services to Pelion. Meanwhile, Stream Technologies provides Pelion managed gateway services for wireless technologies including cellular, LoRa, and satellite communications.

The partnership with myDevices extends Pelion support to devices that run myDevices’ new IoT in a Box turnkey IoT software for LoRa gateways and nodes. myDevices, which is known for its Linux- and Arduino-friendly Cayenne drag-and-drop IoT development and management platform, launched IoT in a Box to enable easy setup of a LoRa gateway and LoRa sensor nodes. Different IoT in a Box versions target specific applications ranging from home and building management to storage lockers to refrigeration systems. Developers can try out Pelion services together with IoT in a Box via a new $199 IoT Starter Kit.

The Arduino partnership is a bit less clear. It appears to extend Arm’s Pelion Connectivity Management stack, based on the Stream Technologies acquisition, to Arduino devices. The partnership gives users the option of selecting “competitive global data plans” for cellular service, says Arm.

More details on this and the other Pelion announcements should emerge at Arm TechCon in San Jose, California and IoT Solution World Congress in Barcelona, both of which run Oct 16-18. Intel also offers a video overview of the Pelion/SDO mashup.

Join us at Open Source Summit + Embedded Linux Conference Europe in Edinburgh, UK on October 22-24, 2018, for 100+ sessions on Linux, Cloud, Containers, AI, Community, and more.

Source

Linux Disk Management | Linux Training Academy

Source

Tech Jobs Academy – Women in Linux

Tech Jobs Academy is a collaboration between Microsoft Corporation, the NYC Tech Talent Pipeline and the City University of New York (CUNY). In its pilot year, these partners have joined together to launch Tech Jobs Academy at CUNY’s New York City College of Technology. This technical training program serves underemployed and unemployed New Yorkers who are passionate about technology and ready to launch a new career in the field.

Visit Their Website


Source

Configure Active/Passive NFS Server on a Pacemaker Cluster with Puppet | Lisenet.com :: Linux | Security

We’re going to use Puppet to install Pacemaker/Corosync and configure an NFS cluster.

For instructions on how to compile fence_pve on CentOS 7, scroll to the bottom of the page.

This article is part of the Homelab Project with KVM, Katello and Puppet series.

Homelab

We have two CentOS 7 servers installed which we want to configure as follows:

storage1.hl.local (10.11.1.15) – Pacemaker cluster node
storage2.hl.local (10.11.1.16) – Pacemaker cluster node

SELinux set to enforcing mode.

See the image below to identify the homelab part this article applies to.

Cluster Requirements

To configure the cluster, we are going to need the following:

  1. A virtual IP address, required for the NFS server.
  2. Shared storage for the NFS nodes in the cluster.
  3. A power fencing device for each node of the cluster.

The virtual IP is 10.11.1.31 (with the DNS name of nfsvip.hl.local).

With regards to shared storage, while I agree that iSCSI would be ideal, the truth is that “we don’t have that kind of money”. We will have to make do with a shared disk among different VMs on the same Proxmox host.

In terms of fencing, as mentioned earlier, Proxmox does not use libvirt, therefore Pacemaker clusters cannot be fenced by using fence-agents-virsh. There is fence_pve available, but we won’t find it in CentOS/RHEL. We’ll need to compile it from source.

Proxmox and Disk Sharing

I was unable to find a WebUI way to add an existing disk to another VM. Proxmox forum was somewhat helpful, and I ended up manually editing the VM’s config file since the WebUI would not let me assign the same disk to two VMs.

Take a look at the following image, showing two disks attached to the storage1.hl.local node:

We want to use the smaller (2GB) disk for NFS.

The VM ID of the storage2.hl.local node is 208 (see here), therefore we can add the disk by editing the node’s configuration file.

# cat /etc/pve/qemu-server/208.conf
boot: cn
bootdisk: scsi0
cores: 1
hotplug: disk,cpu
memory: 768
name: storage2.hl.local
net0: virtio=00:22:FF:00:00:16,bridge=vmbr0
onboot: 1
ostype: l26
scsi0: data_ssd:208/vm-208-disk-1.qcow2,size=32G
scsi1: data_ssd:207/vm-207-disk-3.qcow2,size=2G
scsihw: virtio-scsi-pci
smbios1: uuid=030e28da-72e6-412d-be77-a79f06862351
sockets: 1
startup: order=208

The disk that we’ve added is scsi1. Note how it references the VM ID 207.

The disk will be visible on both nodes as /dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1.
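Before configuring the cluster, it is worth confirming that both nodes actually see the shared disk. A quick check to run on each node:

# ls -l /dev/disk/by-id/ | grep drive-scsi1
# lsblk -o NAME,SIZE,TYPE

The 2GB disk should appear with the same by-id name on storage1.hl.local and storage2.hl.local.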

Configuration with Puppet

Puppet master runs on the Katello server.

Puppet Modules

We use puppet-corosync Puppet module to configure the server. We also use puppetlabs-accounts for Linux account creation.

Please see the module documentation for features supported and configuration options available.

Configure Firewall

It is essential to ensure that Pacemaker servers can talk to each other. The following needs applying to both cluster nodes:

firewall { '007 accept HA cluster requests':
  dport  => ['2224', '3121', '5403', '21064'],
  proto  => 'tcp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '008 accept HA cluster requests':
  dport  => ['5404', '5405'],
  proto  => 'udp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '009 accept NFS requests':
  dport  => ['2049'],
  proto  => 'tcp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '010 accept TCP mountd requests':
  dport  => ['20048'],
  proto  => 'tcp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '011 accept UDP mountd requests':
  dport  => ['20048'],
  proto  => 'udp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '012 accept TCP rpc-bind requests':
  dport  => ['111'],
  proto  => 'tcp',
  source => '10.11.1.0/24',
  action => 'accept',
}->
firewall { '013 accept UDP rpc-bind requests':
  dport  => ['111'],
  proto  => 'udp',
  source => '10.11.1.0/24',
  action => 'accept',
}
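After the Puppet run, you can spot-check that the rules landed (puppetlabs-firewall manages iptables, so they show up there) and, once the cluster packages are in place, that pcsd is listening on its cluster port:

# iptables -S | grep -E '2224|5404|2049'
# ss -tlnp | grep 2224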

Create Apache User and NFS Mountpoint

Before we configure the cluster, we need to make sure that we have the nfs-utils package installed and that the nfs-lock service is disabled, as it will be managed by Pacemaker.

The Apache user is created in order to match ownership and allow web servers to write to the NFS share.

The following needs applying to both cluster nodes:

package { 'nfs-utils': ensure => 'installed' }->
service { 'nfs-lock': enable => false }->
accounts::user { 'apache':
  comment   => 'Apache',
  uid       => '48',
  gid       => '48',
  shell     => '/sbin/nologin',
  password  => '!!',
  home      => '/usr/share/httpd',
  home_mode => '0755',
  locked    => false,
}->
file { '/nfsshare':
  ensure => 'directory',
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
}
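A quick sanity check that the account and mountpoint came out as intended:

# id apache        # expect uid=48(apache) gid=48(apache)
# ls -ld /nfsshare # expect a root-owned directory with mode 0755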

Configure Pacemaker/Corosync on storage1.hl.local

We disable STONITH initially because the fencing agent fence_pve is simply not available yet. We will compile it later, however, it’s not required in order to get the cluster into an operational state.

We use colocation to keep primitives together. While a colocation defines that a set of primitives must live together on the same node, order definitions define the order in which each primitive is started. This is important, as we want to make sure that we start cluster resources in the correct order.

Note how we configure NFS exports to be available to two specific clients only: web1.hl.local and web2.hl.local. In reality there is no need for any other homelab server to have access to the NFS share.

We make the apache user the owner of the NFS share, and export it with no_all_squash.

class { 'corosync':
  authkey                  => '/etc/puppetlabs/puppet/ssl/certs/ca.pem',
  bind_address             => $::ipaddress,
  cluster_name             => 'nfs_cluster',
  enable_secauth           => true,
  enable_corosync_service  => true,
  enable_pacemaker_service => true,
  set_votequorum           => true,
  quorum_members           => [ 'storage1.hl.local', 'storage2.hl.local' ],
}
corosync::service { 'pacemaker':
  ## See: https://wiki.clusterlabs.org/wiki/Pacemaker
  version => '1.1',
}->
cs_property { 'stonith-enabled':
  value => 'false',
}->
cs_property { 'no-quorum-policy':
  value => 'ignore',
}->
cs_primitive { 'nfsshare':
  primitive_class => 'ocf',
  primitive_type  => 'Filesystem',
  provided_by     => 'heartbeat',
  parameters      => { 'device' => '/dev/disk/by-id/scsi-0QEMU_QEMU_HARDDISK_drive-scsi1', 'directory' => '/nfsshare', 'fstype' => 'ext4' },
}->
cs_primitive { 'nfsd':
  primitive_class => 'ocf',
  primitive_type  => 'nfsserver',
  provided_by     => 'heartbeat',
  parameters      => { 'nfs_shared_infodir' => '/nfsshare/nfsinfo' },
  require         => Cs_primitive['nfsshare'],
}->
cs_primitive { 'nfsroot1':
  primitive_class => 'ocf',
  primitive_type  => 'exportfs',
  provided_by     => 'heartbeat',
  parameters      => { 'clientspec' => 'web1.hl.local', 'options' => 'rw,async,no_root_squash,no_all_squash', 'directory' => '/nfsshare', 'fsid' => '0' },
  require         => Cs_primitive['nfsd'],
}->
cs_primitive { 'nfsroot2':
  primitive_class => 'ocf',
  primitive_type  => 'exportfs',
  provided_by     => 'heartbeat',
  parameters      => { 'clientspec' => 'web2.hl.local', 'options' => 'rw,async,no_root_squash,no_all_squash', 'directory' => '/nfsshare', 'fsid' => '0' },
  require         => Cs_primitive['nfsd'],
}->
cs_primitive { 'nfsvip':
  primitive_class => 'ocf',
  primitive_type  => 'IPaddr2',
  provided_by     => 'heartbeat',
  parameters      => { 'ip' => '10.11.1.31', 'cidr_netmask' => '24' },
  require         => Cs_primitive['nfsroot1', 'nfsroot2'],
}->
cs_colocation { 'nfsshare_nfsd_nfsroot_nfsvip':
  primitives => [
    [ 'nfsshare', 'nfsd', 'nfsroot1', 'nfsroot2', 'nfsvip' ],
  ],
}->
cs_order { 'nfsshare_before_nfsd':
  first   => 'nfsshare',
  second  => 'nfsd',
  require => Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],
}->
cs_order { 'nfsd_before_nfsroot1':
  first   => 'nfsd',
  second  => 'nfsroot1',
  require => Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],
}->
cs_order { 'nfsroot1_before_nfsroot2':
  first   => 'nfsroot1',
  second  => 'nfsroot2',
  require => Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],
}->
cs_order { 'nfsroot2_before_nfsvip':
  first   => 'nfsroot2',
  second  => 'nfsvip',
  require => Cs_colocation['nfsshare_nfsd_nfsroot_nfsvip'],
}->
file { '/nfsshare/uploads':
  ensure => 'directory',
  owner  => 'apache',
  group  => 'root',
  mode   => '0755',
}

Configure Pacemaker/Corosync on storage2.hl.local

class { 'corosync':
  authkey                  => '/etc/puppetlabs/puppet/ssl/certs/ca.pem',
  bind_address             => $::ipaddress,
  cluster_name             => 'nfs_cluster',
  enable_secauth           => true,
  enable_corosync_service  => true,
  enable_pacemaker_service => true,
  set_votequorum           => true,
  quorum_members           => [ 'storage1.hl.local', 'storage2.hl.local' ],
}
corosync::service { 'pacemaker':
  version => '1.1',
}->
cs_property { 'stonith-enabled':
  value => 'false',
}
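With both manifests in place, trigger a Puppet run on each node (this assumes the nodes are already classified against the Katello Puppet master, as set up earlier in the series):

# puppet agent -t

It makes sense to run storage1.hl.local first, since that manifest defines the cluster-wide resources.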

Cluster Status

If all went well, we should have our cluster up and running at this point. Check the status on both nodes:

# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) – partition with quorum
Last updated: Sun Apr 29 17:04:50 2018
Last change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local

2 nodes configured
5 resources configured

Online: [ storage1.hl.local storage2.hl.local ]

Full list of resources:

nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local
nfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local
nfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: inactive/disabled
# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) – partition with quorum
Last updated: Sun Apr 29 17:05:04 2018
Last change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local

2 nodes configured
5 resources configured

Online: [ storage1.hl.local storage2.hl.local ]

Full list of resources:

nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local
nfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local
nfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: inactive/disabled

Test cluster failover by putting the active node into standby:

# pcs node standby

Services should become available on the other cluster node:

# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: storage2.hl.local (version 1.1.16-12.el7_4.8-94ff4df) – partition with quorum
Last updated: Sun Apr 29 17:06:36 2018
Last change: Sun Apr 29 16:56:25 2018 by root via cibadmin on storage1.hl.local

2 nodes configured
5 resources configured

Node storage1.hl.local: standby
Online: [ storage2.hl.local ]

Full list of resources:

nfsshare (ocf::heartbeat:Filesystem): Started storage2.hl.local
nfsd (ocf::heartbeat:nfsserver): Started storage2.hl.local
nfsroot1 (ocf::heartbeat:exportfs): Started storage2.hl.local
nfsroot2 (ocf::heartbeat:exportfs): Started storage2.hl.local
nfsvip (ocf::heartbeat:IPaddr2): Started storage2.hl.local

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: inactive/disabled
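Once failover has been verified, bring the node back into the cluster:

# pcs node unstandby storage1.hl.local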

Do showmount on the virtual IP address:

# showmount -e 10.11.1.31
Export list for 10.11.1.31:
/nfsshare web1.hl.local,web2.hl.local
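As a final sanity check, mount the export from one of the allowed clients and make sure the apache user can write to the uploads directory. A sketch, run on web1.hl.local:

# mount -t nfs nfsvip.hl.local:/nfsshare /mnt
# su -s /bin/bash apache -c 'touch /mnt/uploads/probe && rm /mnt/uploads/probe'
# umount /mnt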

Compile fence_pve on CentOS 7

This is where the automated part ends, I’m afraid; however, nothing stops you from putting the manual steps below into a Puppet manifest.

Install Packages

# yum install git gcc make automake autoconf libtool pexpect python-requests

Download Source and Compile

# git clone https://github.com/ClusterLabs/fence-agents.git

Note the configuration part: we are interested in compiling only one fencing agent, fence_pve.

# cd fence-agents/
# ./autogen.sh
# ./configure --with-agents=pve
# make && make install

Verify:

# fence_pve --version
4.1.1.51-6e6d

Configure Pacemaker to Use fence_pve

Big thanks to Igor Cicimov’s blog post which helped me to get it working with minimal effort.

To test the fencing agent, do the following:

# fence_pve --ip=10.11.1.5 --nodename=pve --username=root@pam --password=passwd --plug=208 --action=off

Where 10.11.1.5 is the IP of the Proxmox hypervisor, pve is the name of the Proxmox node, and the plug is the VM ID. In this case we fenced the storage2.hl.local node.
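Fencing powers the VM off, so remember to bring it back afterwards. The same agent can query and restore power (same placeholder credentials as above):

# fence_pve --ip=10.11.1.5 --nodename=pve --username=root@pam --password=passwd --plug=208 --action=status
# fence_pve --ip=10.11.1.5 --nodename=pve --username=root@pam --password=passwd --plug=208 --action=on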

To configure Pacemaker, we can create two STONITH configurations, one for each node that we want to be able to fence.

# pcs stonith create my_proxmox_fence207 fence_pve \
    ipaddr="10.11.1.5" inet4_only="true" vmtype="qemu" \
    login="root@pam" passwd="passwd" \
    node_name="pve" delay="15" port="207" \
    pcmk_host_check=static-list \
    pcmk_host_list="storage1.hl.local"
# pcs stonith create my_proxmox_fence208 fence_pve \
    ipaddr="10.11.1.5" inet4_only="true" vmtype="qemu" \
    login="root@pam" passwd="passwd" \
    node_name="pve" delay="15" port="208" \
    pcmk_host_check=static-list \
    pcmk_host_list="storage2.hl.local"

Verify:

# stonith_admin -L
my_proxmox_fence207
my_proxmox_fence208
2 devices found
# pcs status
Cluster name: nfs_cluster
Stack: corosync
Current DC: storage1.hl.local (version 1.1.16-12.el7_4.8-94ff4df) – partition with quorum
Last updated: Sun Apr 29 17:50:59 2018
Last change: Sun Apr 29 17:50:55 2018 by root via cibadmin on storage1.hl.local

2 nodes configured
7 resources configured

Online: [ storage1.hl.local ]
OFFLINE: [ storage2.hl.local ]

Full list of resources:

nfsshare (ocf::heartbeat:Filesystem): Started storage1.hl.local
nfsd (ocf::heartbeat:nfsserver): Started storage1.hl.local
nfsroot1 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsroot2 (ocf::heartbeat:exportfs): Started storage1.hl.local
nfsvip (ocf::heartbeat:IPaddr2): Started storage1.hl.local
my_proxmox_fence207 (stonith:fence_pve): Started storage1.hl.local
my_proxmox_fence208 (stonith:fence_pve): Started storage1.hl.local

Daemon Status:
corosync: active/enabled
pacemaker: active/enabled
pcsd: inactive/disabled

Note how the storage2.hl.local node is down, because we’ve fenced it.

If you decide to use this test configuration, do not forget to stop the Puppet agent on the cluster nodes, as it will disable STONITH (we set stonith-enabled to false in the manifest).

For more info, do the following:

# pcs stonith describe fence_pve

This will give you a list of other STONITH options available.

Source

Create a user and grant permission to a database — The Ultimate Linux Newbie Guide

Here is something that, as a system or database admin, you’ll do lots of: create a database, create a database user, and then assign the permissions for that user to operate on that database. We can do the same thing to grant permissions on other databases for that user too.

Here’s what you want to know:

First, log in to your database server as a database admin user. Usually this will be root (note this is not the same root user as your Linux server, this is the database root user).

$ mysql -u root -p

once logged in, you can create the database, user and assign the right privileges:

mysql> CREATE DATABASE somedatabase;
mysql> CREATE USER 'new_user'@'localhost' IDENTIFIED BY 'their_password';

mysql> GRANT ALL PRIVILEGES ON somedatabase.* TO 'new_user'@'localhost' IDENTIFIED BY 'their_password';
mysql> FLUSH PRIVILEGES;

Here’s what that all means:

CREATE – This command creates things like databases, users and tables. Note you can’t use usernames with dashes in them (underscores are OK).

GRANT – This command gives (grants) permission to databases, tables and so forth.

ALL PRIVILEGES – This tells it the user will have all standard privileges such as SELECT, INSERT, UPDATE, etc. The only privilege it does not provide is the use of the GRANT query, for obvious reasons!

ON somedatabase.* – this means grant all the privileges on the named database. If you change the * after the dot to a table name, routine or view, then the GRANT will apply only to that specified table, routine or view.

TO ‘new_user’@’localhost’ – ‘new_user’ is the username of the user account you are creating. It is very important to ensure you use single quotes (‘). The hostname ‘localhost’ tells MySQL which hosts the user can connect from. In most cases this will be localhost, because most MySQL servers are configured to listen only on their own host. Opening it up to other hosts (especially on the Internet) is insecure.

IDENTIFIED BY ‘their_password’ – This sets the password for that user, replace the text their_password with a sensible password!

FLUSH PRIVILEGES – this makes sure that any privileges granted are updated in mysql so that they are ready to use.
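To confirm the grants took effect, log in as the new user and list them:

$ mysql -u new_user -p somedatabase -e "SHOW GRANTS FOR CURRENT_USER();"

One caveat: the IDENTIFIED BY clause on GRANT is deprecated as of MySQL 5.7.6 and removed in MySQL 8.0. Since we already ran CREATE USER above, on newer servers simply omit IDENTIFIED BY from the GRANT statement.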

Hope this helps. For more information on creating users, refer to the MySQL Reference Guide.

Source

How to Use Git Version Control System in Linux [Comprehensive Guide]

Version Control (revision control or source control) is a way of recording changes to a file or collection of files over time so that you can recall specific versions later. A version control system (or VCS in short) is a tool that records changes to files on a filesystem.

There are many version control systems out there, but Git is currently the most popular and frequently used, especially for source code management. Version control can actually be used for nearly any type of file on a computer, not only source code.

Version control systems/tools offer several features that allow individuals or a group of people to:

  • create versions of a project.
  • track changes accurately and resolve conflicts.
  • merge changes into a common version.
  • rollback and undo changes to selected files or an entire project.
  • access historical versions of a project to compare changes over time.
  • see who last modified something that might be causing a problem.
  • create a secure offsite backup of a project.
  • use multiple machines to work on a single project and so much more.

A project under a version control system such as Git will have mainly three sections, namely:

  • a repository: a database for recording the state of or changes to your project files. It contains all of the necessary Git metadata and objects for the new project. Note that this is normally what is copied when you clone a repository from another computer on a network or remote server.
  • a working directory or area: stores a copy of the project files which you can work on (make additions, deletions and other modification actions).
  • a staging area: a file (known as index under Git) within the Git directory, that stores information about changes, that you are ready to commit (save the state of a file or set of files) to the repository.

There are two main types of VCSs, with the main difference being the number of repositories:

  • Centralized Version Control Systems (CVCSs): here each project team member gets their own local working directory, however, they commit changes to just a single central repository.
  • Distributed Version Control Systems (DVCSs): under this, each project team member gets their own local working directory and Git directory where they can make commits. After an individual makes a commit locally, other team members can’t access the changes until he/she pushes them to the central repository. Git is an example of a DVCS.

In addition, a Git repository can be bare (repository that doesn’t have a working directory) or non-bare (one with a working directory). Shared (or public or central) repositories should always be bare – all Github repositories are bare.

Learn Version Control with Git

Git is a free and open source, fast, powerful, distributed, easy to use, and popular version control system that is very efficient with large projects, and has a remarkable branching and merging system. It is designed to handle data more like a series of snapshots of a mini filesystem, which is stored in a Git directory.

The workflow under Git is very simple: you make modifications to files in your working directory, then selectively add just those files that have changed, to the staging area, to be part of your next commit.

Once you are ready, you do a commit, which takes the files from staging area and saves that snapshot permanently to the Git directory.
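Conceptually, that cycle boils down to three commands. A minimal sketch, run inside any existing repository (hello.sh is a hypothetical file):

$ echo 'echo "hello"' > hello.sh   # modify a file in the working directory
$ git add hello.sh                 # stage the change (copy it into the index)
$ git commit -m "Add hello.sh"     # record the staged snapshot in the Git directory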

To install Git in Linux, use the appropriate command for your distribution of choice:

$ sudo apt install git [On Debian/Ubuntu]
$ sudo yum install git [On CentOS/RHEL]

After installing Git, it is recommended that you tell Git who you are by providing your full name and email address, as follows:

$ git config --global user.name "Aaron Kili"
$ git config --global user.email "[email protected]"

To check your Git settings, use the following command.

$ git config --list

View Git Settings

Create a New Git Repository

Shared repositories or centralized workflows are very common, and that is what we will demonstrate here. For example, we assume that you have been tasked to set up a remote central repository for system administrators/programmers from various departments in your organization, to work on a project called bashscripts, which will be stored under /projects/scripts/ on the server.

SSH into the remote server and create the necessary directory, create a group called sysadmins (add all project team members to this group, e.g. the user admin), and set the appropriate permissions on this directory.

# mkdir -p /projects/scripts/
# groupadd sysadmins
# usermod -aG sysadmins admin
# chown :sysadmins -R /projects/scripts/
# chmod 770 -R /projects/scripts/

Then initialize a bare project repository.

# git init --bare /projects/scripts/bashscripts

Initialize Git Shared Repository

At this point, you have successfully initialized a bare Git directory which is the central storage facility for the project. Try to do a listing of the directory to see all the files and directories in there:

# ls -la /projects/scripts/bashscripts/

List Git Shared Repository

Clone a Git Repository

Now clone the remote shared Git repository to your local computer via SSH (you can also clone via HTTP/HTTPS if you have a web server installed and appropriately configured, as is the case with most public repositories on Github), for example:

$ git clone ssh://admin@remote_server_ip:/projects/scripts/bashscripts

To clone it to a specific directory (~/bin/bashscripts), use the command below.

$ git clone ssh://admin@remote_server_ip:/projects/scripts/bashscripts ~/bin/bashscripts

Clone Shared Git Repository to Local

You now have a local instance of the project in a non-bare repository (with a working directory). You can create the initial structure of the project (i.e. add a README.md file and sub-directories for different categories of scripts, e.g. recon to store reconnaissance scripts, sysadmin to store sysadmin scripts, etc.):

$ cd ~/bin/bashscripts/
$ ls -la

Create Git Project Structure

Check a Git Status Summary

To display the status of your working directory, use the status command, which shows you any changes you have made, which files are not being tracked by Git, which changes have been staged, and so on.

$ git status

Check Git Status

Git Stage Changes and Commit

Next, stage all the changes using the add command with the -A switch, and do the initial commit. The -a flag instructs commit to automatically stage files that have already been modified, and -m is used to specify the commit message:

$ git add -A
$ git commit -a -m “Initial Commit”

Do Git Commit

Publish Local Commits to Remote Git Repository

As the project team lead, now that you have created the project structure, you can publish the changes to the central repository using the push command as shown.

$ git push origin master

Push Commit to Central Git Repository

Right now, your local Git repository should be up to date with the project’s central repository (origin); you can confirm this by running the status command once more.

$ git status

Check Git Status

You can also inform your colleagues to start working on the project by cloning the repository to their local computers.

Create a New Git Branch

Branching allows you to work on a feature of your project or fix issues quickly without touching the codebase (master branch). To create a new branch and then switch to it, use the branch and checkout commands respectively.

$ git branch latest
$ git checkout latest

Alternatively, you can create a new branch and switch to it in one step using the checkout command with the -b flag.

$ git checkout -b latest

You can also create a new branch based on another branch, for instance.

$ git checkout -b latest master

To check which branch you are on, use the branch command (an asterisk indicates the active branch):

$ git branch

Check Active Branch

After creating and switching to the new branch, make some changes under it and do some commits.

$ vim sysadmin/topprocs.sh
$ git status
$ git add sysadmin/topprocs.sh
$ git commit -a -m 'modified topprocs.sh'

Merge Changes From One Branch to Another

To merge the changes under the branch latest into the master branch, switch to the master branch and do the merge.

$ git checkout master
$ git merge latest

Merge Latest Branch into Master

If you no longer need a particular branch, you can delete it using the -d switch.

$ git branch -d latest

Download Changes From Remote Central Repository

Assuming your team members have pushed changes to the central project repository, you can download any changes to your local instance of the project using the pull command.

$ git pull origin
OR
$ git pull origin master #if you have switched to another branch

Pull Changes from Central Repository

Inspect Git Repository and Perform Comparisons

In this last section, we will cover some useful Git features that keep track of all activities that happened in your repository, thus enabling you to view the project history.

The first feature is Git log, which displays commit logs:

$ git log

View Git Commit Logs

Another important feature is the show command, which displays various types of objects (such as commits, tags and trees):

$ git show

Git Show Objects

The third vital feature you need to know is the diff command, used to compare branches, show differences between the working directory and the index, show changes between two files on disk, and much more.

For instance, to show the difference between the master and latest branches, you can run the following command.

$ git diff master latest

Show Difference Between Branches

Read Also: 10 Best Git Alternatives to Host Open Source Projects

Summary

Git allows a team of people to work together using the same file(s), while recording changes to the file(s) over time so that they can recall specific versions later.

This way, you can use Git for managing source code, configuration files or any file stored on a computer. You may want to refer to the Git Online Documentation for further documentation.

Source

Linux Top 3: Parted Magic, Quirky and Ultimate Edition

January 16, 2017
By Sean Michael Kerner

1) Parted Magic 2017_01_08

Parted Magic is a very niche Linux distribution that many users first discover when they’re trying either to re-partition a drive or to recover data from an older system. The new Parted Magic 2017_01_08 release is an incremental update that follows the very large 2016_10_18 update, which provided 800 updates. In contrast, the big updates for the new release are:

  • Parted Magic now ships with ZFS on Linux kernel drivers!
  • Added Programs: grub-customizer-5.0.6, x11vnc-0.9.13, fslint-2.44, zerofree-1.0.4, spl-solaris-0.7.0-git12172016, zfs-on-linux-0.7.0-git12172016, and bleachbit-1.12.
  • Updated Programs: bind-9.10.4_P4, btrfs-progs-v4.9, curl-7.51.0, flashplayer-plugin-24.0.0.186, glibc-zoneinfo-2016j, gparted-0.27.0, hdparm-9.50, kernel-firmware-20170106git, libfm-1.2.5, libpng-1.6.27, firefox-50.1.0, ntp-4.2.8p9, pcmanfm-1.2.5, Python-2.7.13, samba-4.4.8, tigervnc-1.7.0.

2) Quirky 8.1.6

The Quirky Linux distribution is part of the Puppy Linux family of distributions, providing users with a lightweight operating system. The new Quirky 8.1.6 update supports Ubuntu 16.04-based applications, as Quirky is built using the woofQ Quirky Linux build system.

Quirky Linux 8.1.6 x86_64 is codenamed “Xerus” and is built using the woofQ Quirky Linux build system, with the help of Ubuntu 16.04 binary packages. Thus, Xerus has compatibility with all of the Ubuntu repositories. The Linux kernel is version 4.4.40 and SeaMonkey is upgraded to version 2.46. Quirky is a fork of Puppy Linux, and is mainly differentiated by being a “full installation” only, with special snapshot and recovery features, and Service Pack upgrades.

3) Ultimate Edition 5.1

The Ultimate Edition Linux distribution is yet another Ubuntu 16.04 derived distribution.

“Ultimate Edition 5.1 was built from the Ubuntu 16.04 Xenial Xerius tree using a combination of Tmosb (TheeMahn’s Operating System Builder) & work by hand. Tmosb is also included in this release (1.9.7), allowing you to do the same.”

Sean Michael Kerner is a senior editor at LinuxPlanet and InternetNews.com. Follow him on Twitter @TechJournalist.

Source
