Download Bitnami Ghost Stack Linux 2.3.0-0

Bitnami Ghost Stack is a free and multiplatform software project: a native installer designed from the outset to let you deploy the Ghost application and its runtime dependencies on desktop computers or laptops. Cloud images, a virtual appliance and a Docker container are also available for the Ghost app.

What is Ghost?

Ghost is a free, open source, platform-independent web application: beautifully designed, completely customizable software built especially for publishing content on the web, allowing users to write and publish their own blogs.

Installing Bitnami Ghost Stack

The Bitnami Ghost Stack product is distributed as native installers for all mainstream operating systems, including all GNU/Linux distributions, as well as the Microsoft Windows and Mac OS X operating systems, supporting 32-bit and 64-bit (recommended) computers.

To install Ghost on your personal computer, simply download the package that corresponds to your computer’s operating system and hardware architecture, run it and follow the instructions displayed on the screen.

Run Ghost in the cloud

Thanks to Bitnami, users are now able to run Ghost in the cloud with their hosting platform of choice. Pre-built cloud images for the Windows Azure and Amazon EC2 cloud hosting services are also available for download on the project’s homepage (see link below).

Virtualize Ghost on VMware and VirtualBox

In addition to deploying Ghost in the cloud or on personal computers, it is possible to virtualize it using Bitnami’s virtual appliance for the VMware ESX, ESXi and Oracle VirtualBox virtualization software.

The Ghost Docker container and LAMP/WAMP/MAMP module

A Ghost Docker container is also available on the project’s website, but Bitnami does not provide a Ghost module for its LAMP, WAMP and MAMP stacks, which would have allowed users to deploy the application on personal computers without having to deal with its runtime dependencies.

Source

SUSE Linux Enterprise Server 12 STIG is available from the Defense Information Systems Agency (DISA)


SUSE Linux Enterprise Server 12 STIG has been approved by Defense Information Systems Agency (DISA) and posted on IASE. This assists with the adoption of SUSE Linux Enterprise Server 12 in the US Federal Government and with Government Contractors.

STIG-SLES 12

What is STIG? Where does it come from?

The Security Technical Implementation Guides (STIGs) define the configuration and settings of United States Department of Defense (DoD) IT systems that provide a standardization of the security profile for a particular technology. These cybersecurity guidelines are developed from the Security Requirements Guides (SRGs) that are produced by the Defense Information Systems Agency (DISA).

STIGs are widely used by the United States government and allies, government contractors, and various commercial entities to provide a cybersecurity methodology for securing and hardening operating systems to a DoD security standard.

The SUSE Linux Enterprise Server 12 STIG has several items to note for System Administrators and Security Auditors such as:

  • AppArmor

The SUSE Linux Enterprise Server (SLES) 12 STIG references AppArmor, a Linux Security Module for implementing mandatory access control (MAC) and application whitelisting, in place of SELinux.

  • Common Access Card (CAC) Support

The SLES 12 STIG prescribes the use of two-factor authentication to access IT resources. Support for CAC smart cards was verified and detailed in a SUSE blog post, “Configuring Smart Card authentication on SUSE Linux Enterprise.”

The acceptance and approval of the SLES 12 STIG continues the commitment of SUSE Security to meet various federal and international security standards such as Common Criteria and Federal Information Processing Standards (FIPS) 140-2.

More information

You can access the SLES 12 STIG and latest SUSE security certifications information at

You can reach the SUSE security team at https://www.suse.com/support/security/contact/ or contact Adam Belmonte, Manager, Federal Programs (phone: 978-394-4780, email).


Source

MySQL Replication and MEMORY Tables – Lisenet.com

MEMORY tables do not play well with replication.

The Problem

After upgrading MySQL server from 5.6 to 5.7, we noticed that Master/Slave replication started to fail with the following error:

Could not execute Delete_rows event on table my_database.my_table; Can't find record in 'my_table', Error_code: 1032; handler error HA_ERR_KEY_NOT_FOUND; the event's master log bin-log.003023, end_log_pos 552195868

If we restart the slave, we lose the contents of our MEMORY tables, and MySQL replication breaks.

Working Towards the Solution

MySQL Binary Logging: MySQL 5.6 vs MySQL 5.7

Prior to MySQL 5.7.7, the default binlog_format was STATEMENT. That’s what we used before the upgrade.

In MySQL 5.7.7 and later, the default is ROW. This is what we have after the upgrade.
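Rather than relying on the version-dependent default, the binary log format can be pinned explicitly on the master. A minimal sketch (the option-file path varies by platform, e.g. /etc/my.cnf or /etc/mysql/my.cnf):

```ini
# my.cnf on the master: pin the binary log format explicitly instead of
# relying on the default (STATEMENT before MySQL 5.7.7, ROW from 5.7.7 on)
[mysqld]
binlog_format = ROW
```

Pinning the format makes an upgrade like the 5.6 to 5.7 one above behave predictably instead of silently switching replication semantics.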

Now, on MySQL 5.6, STATEMENT replication will often continue to run, with the contents of the table simply diverging, as there are few checks on whether statements produce the same results on the slave.

ROW replication, however, will complain about a non-existent row for an UPDATE or DELETE operation.

Workaround: use SQL_SLAVE_SKIP_COUNTER

When replication is broken because a row was not found (and so cannot be deleted), we can do the following:

STOP SLAVE;
SET GLOBAL SQL_SLAVE_SKIP_COUNTER=1;
START SLAVE;

This will skip the offending statement and resume replication. Be careful with it! In our case it’s fine, because the application logic is such that the contents of MEMORY tables can be safely lost (the table in question is used for caching).

Note that this approach is not a solution, because our replication will break again as soon as another update or delete statement affects a MEMORY table.

Solution: do not replicate MEMORY tables

If we don’t need MEMORY tables on the slave, then we can stop replicating them.

We need to create a replication filter that keeps the slave thread from replicating any statement in which a table matches the given wildcard pattern.

In our case, we would use the following:

--replicate-wild-ignore-table="my_database.my_table"

If we have more than one database that has this problem, we can use a wildcard:

--replicate-wild-ignore-table="%.my_table"

The above will not replicate updates to any table named “my_table”, regardless of which database it is in.
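The option above is a server start-up option; to make the filter persistent it can equally be placed in the slave’s option file. A minimal sketch (the file path varies by platform):

```ini
# my.cnf on the slave: do not replicate any table named "my_table",
# whatever database it lives in
[mysqld]
replicate-wild-ignore-table = %.my_table
```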

This can be done on the fly as well:

STOP SLAVE;
CHANGE REPLICATION FILTER REPLICATE_WILD_IGNORE_TABLE = ('%.my_table');
START SLAVE;

References

https://www.percona.com/blog/2010/10/15/replication-of-memory-heap-tables/
https://dev.mysql.com/doc/mysql-replication-excerpt/5.7/en/replication-features-memory.html

Source

Container-based server platform for Linux device management goes open source

Resin.io changed its name to balena and released an open source version of its IoT fleet management platform for Linux devices called openBalena. Targets include the Intel NUC, Jetson TX2, Raspberry Pi, and a new RPi CM3 carrier called the balenaFin.

A lot has happened with Resin.io since we covered its Docker container focused Resin.io cloud IoT platform and open source ResinOS Linux distro two years ago. Resin.io started out with a goal to create a “git push for devices” and develop lightweight Docker containers for Linux devices to enable easy security updates and IoT device management. It has since expanded beyond that to provide a comprehensive, scalable platform for IoT fleet management. Now the company has announced a name-change to balena in conjunction with the release of an open source openBalena version of its software.

New names for Balena technologies (left) and new balenaFin carrier for RPi CM3

 

Resin.io changed its name due “to trademark issues, to cannabis references, and to people mishearing it as ‘raisin,’” explained founder and CEO Alexandros Marinos in a blog announcement. (We interviewed Marinos in a Nov. 2016 feature on the use of container technologies in embedded devices.) The non-smokable new branding is based on the company’s balena container engine, now called balenaEngine, which derives its name from the engine’s underlying Moby Project container technology.

openBalena is an open source version of the Resin.io server/cloud platform for managing fleets of Linux-based IoT devices, now referred to as balenaCloud. The open source ResinOS distro, meanwhile, is now called balenaOS. Resin.io’s Etcher software for fast image writes to flash drives is now called balenaEtcher and the Project Fin carrier board for the Raspberry Pi Compute Module, which is now available starting at $129, is now called balenaFin (see farther below).

While balenaOS is an open source spinoff of the container-based device software that works with balenaCloud, the new openBalena is an open version of the balenaCloud server software. Customers can now choose between letting balena manage their fleet of devices or building their own openBalena-based server platform to manage fleets of devices running balenaOS.

openBalena is a reduced feature version of the commercial product. However, the components shared by both commercial and open versions are closely aligned, which “will allow us to release updates for the project as we update our cloud product, while also allowing open source contributions to flow back into the cloud product,” writes Marinos. The new deployment workflows and tools to accomplish this coordination will be announced soon.

openBalena offers balenaCloud core features such as “the powerful API, the built-in device VPN, as well as our spectacular provisioning workflow,” writes Marinos. It can also similarly scale to large fleets of devices. However, openBalena is single-user rather than supporting multiple users. It’s controlled solely via the already open sourced balena CLI tool rather than balenaCloud’s web-based dashboard, and it lacks “updates with binary container deltas.”

On the device side, openBalena integrates the Yocto Project and Docker-based balenaOS. The client software has been updated to let devices more easily “join and leave a server,” so you can set up your own openBalena server instead of being directed to balenaCloud.

openBalena’s CLI lets you provision and configure devices, push updates, check status, and view logs. Its backend services can securely store device information, allow remote management via a built-in VPN service, and distribute container images to devices.

On the server side, openBalena requires the following releases (or higher): Docker 18.05.0, Docker Compose 1.11, OpenSSL 1.0.0, and Python 2.7 or 3.4. The beta version of openBalena supports targets including Raspberry Pi boards, Intel NUC boards, the Nvidia Jetson TX2 module, and the new balenaFin. It appears it will eventually support the full list of devices supported by balenaCloud, most of which are detailed on LinuxGizmos in stories such as our catalog of 116 Linux hacker boards. These include Samsung’s Artik boards, Variscite’s DART-6UL, Aaeon’s UP board, the Banana Pi M1+, the BeagleBone and BeagleBone Green/Green Wireless, SolidRun’s HummingBoard i2, the Odroid-C1/C1+ and Odroid-XU4, the Orange Pi Plus2, Technologic’s TS-4900, and Siemens’ IOT2000 gateway.

balenaFin

The balenaFin, which was announced back in March, is a carrier board for the Raspberry Pi Compute Module 3 Lite (CM3 Lite). The Lite has the same 1.2GHz quad-core, Cortex-A53 Broadcom SoC as the standard version, but has an unpopulated eMMC socket with traces exposed via SODIMM-200.

balenaFin (left) and block diagram

 

The 91 x 90mm balenaFin is optimized for balena duty but can also be used as a general-purpose hacker board. The board, which has been available to selected customers in a pre-release version, is now publicly available in 8GB eMMC 5.1 ($129), 16GB eMMC ($139), or 32GB ($159) versions. There’s also a $179 dev kit version with 8GB that bundles the CM3 Lite, cables, standoffs, screws, and a 12V PSU. A DIN-rail case adds $25 to the price.

balenaFin detail views

In addition to hosting the CM3, the board integrates a Samsung Artik 020 MCU module. The balenaFin is further equipped with HDMI and 10/100 Ethernet ports, 2x USB 2.0 host ports, and a 40-pin RPi GPIO connector. Wireless support includes a WiFi/Bluetooth module and mini-PCIe and Nano-SIM slots for cellular. You also get a 6-24V DC input, an RTC, and extended temperature support. For a full spec list, see our earlier Fin report.

Further information

The beta version of openBalena is now available for free download, and the balenaFin is available for order. More information may be found in the openBalena announcement, openBalena product page, and openBalena GitHub page. More on the balenaFin, including links to shopping pages, may be found here.

Source

Red Hat Enterprise Linux 7.6 Released


Red Hat Enterprise Linux 7.6 Released (lwn.net)

Posted on Tuesday October 30, 2018 @07:20PM

Fresh on the heels of the IBM purchase announcement, Red Hat released RHEL 7.6 today. The business press release is here and the full release notes are here. It’s been a busy week for Red Hat, as Fedora 29 was also released earlier this morning. No doubt CentOS and various other rebuilds will begin their build cycles shortly.

The release offers improved security, such as support for the Trusted Platform Module (TPM) 2.0 specification for security authentication. It also provides enhanced support for the open-source nftables firewall technology.

“TPM 2.0 support has been added incrementally over recent releases of Red Hat Enterprise Linux 7, as the technology has matured,” Steve Almy, principal product manager, Red Hat Enterprise Linux at Red Hat, told eWEEK. “The TPM 2.0 integration in 7.6 provides an additional level of security by tying the hands-off decryption to server hardware in addition to the network bound disk encryption (NBDE) capability, which operates across the hybrid cloud footprint from on-premise servers to public cloud deployments.”


Source

Trying To Make Ubuntu 18.10 Run As Fast As Intel’s Clear Linux

With the recent six-way Linux OS tests on the Core i9 9900K, there were once again a number of users questioning the optimizations by Clear Linux, out of Intel’s Open-Source Technology Center, and remarking that changing the compiler flags, CPU frequency scaling governor, or other settings would allow other distributions to trivially replicate its performance. Here’s a look at some tweaked Ubuntu 18.10 Cosmic Cuttlefish benchmarks against the latest Intel Clear Linux rolling release from this i9-9900K 8-core / 16-thread desktop system.

 

 

In the forum comments and feedback elsewhere to that previous Linux distribution comparison, there were random comments questioning:

– Whether Clear Linux’s usage of the P-State “performance” governor by default explains its performance advantage, with most other distributions opting for the “powersave” governor.

– That Clear Linux is faster because it’s built with the Intel Compiler (ICC). This is not the case at all, as Clear is built with GCC and LLVM/Clang, but it seems to be a common misconception. So just tossing that out there…

– That Clear Linux is faster because of its aggressive default CFLAGS/CXXFLAGS/FFLAGS. This certainly helps in some built-from-source benchmarks, but that’s not all of it.

About a year ago I ran a similar experiment, tweaking Ubuntu 17.10 to try to make it run like Clear Linux; this article is a fresh look. The OS configurations tested were:

Clear Linux – Clear Linux running on the i9-9900K with its Linux 4.18.16 kernel, Mesa 18.3-devel, GCC 8.2.1, EXT4 file-system, and other default components.

Ubuntu – The default Ubuntu 18.10 installation on the same system with its Linux 4.18 kernel, Mesa 18.2.2, GCC 8.2.0, EXT4, and other stock components/settings.

Ubuntu + Perf Gov – The same Ubuntu 18.10 stack but switched over to the P-State performance governor rather than the default P-State powersave mode.

Ubuntu + Perf + Flags – The P-State performance mode from above on Ubuntu 18.10, but also setting the same CFLAGS/CXXFLAGS/FFLAGS as used by Clear Linux before re-building all of the source-based benchmarks, to compare the performance impact of the default tuning parameters.

Ubuntu + Perf + Flags + Kernel – The tweaked Ubuntu 18.10 state above with the P-State performance governor and tuned compiler flags, while also building a Linux 4.18.16 kernel from source with the relevant patches and Kconfig configuration as shipped by Clear Linux. Their kernel configuration and carried patches can be found via clearlinux-pkgs/linux on GitHub.

Ubuntu + Clear Docker – Ubuntu 18.10 with the P-State performance governor, running on the Clear Linux optimized kernel, and using Docker CE to run the latest Clear Linux Docker image, for having all of the Clear user-space components running within this container.

The same system was used for all of the testing: an Intel Core i9 9900K at stock speeds, ASUS PRIME Z390-A motherboard, 16GB DDR4-3200 memory, Samsung 970 EVO 250GB NVMe SSD, and Radeon RX Vega 64 8GB graphics card.
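As a rough sketch of the governor and compiler-flag tweaks described above (assumptions: an Intel P-State system, and a flag list that only approximates Clear Linux’s published defaults; see the clearlinux-pkgs repositories on GitHub for the authoritative set):

```shell
# Switch every core to the "performance" governor (run as root):
#   echo performance | tee /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor

# Export Clear-style optimization flags before re-building the benchmarks.
# This is an approximation of Clear Linux's release flags, not the full list.
export CFLAGS="-O3 -pipe -fexceptions -fstack-protector --param=ssp-buffer-size=32 -march=westmere -mtune=haswell -fno-semantic-interposition"
export CXXFLAGS="$CFLAGS"
export FFLAGS="$CFLAGS"
```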

 

 

I ran 92 different tests on Ubuntu 18.10 and Clear Linux for a wide look at the performance between these distributions ranging from scripting language benchmarks like PHP and Python to various scientific workloads, code compilation, and other tests. With the 92 test runs, here are the key findings from this large round of testing of Clear Linux compared to Ubuntu 18.10 in five different tuned states:

– When comparing the out-of-the-box Clear Linux to Ubuntu 18.10, the Intel distribution was the fastest in 66 of the benchmarks (72%) with Ubuntu only taking the lead in 26 of these different benchmarks.

– Switching to the P-State “performance” governor on Ubuntu 18.10 only allowed it to win over Clear Linux in an extra 5 benchmarks… Clear Linux still came out ahead 66% of the time against Ubuntu either out-of-the-box or with the performance governor.

– The third state, Ubuntu 18.10 using the P-State performance governor and copying Clear’s compiler flags, allowed Ubuntu 18.10 to improve its performance relative to the default Ubuntu configuration, but Clear Linux was still leading ~66% of the time.

– When pulling in the Clear Linux kernel modifications to Ubuntu 18.10 and keeping the optimized compiler flags and performance governor, Ubuntu 18.10 just picked up one more win while Clear Linux was still running the fastest in 59 of the 92 benchmarks.

– Lastly, when running the Clear Linux Docker container on Ubuntu 18.10 while keeping the tweaked kernel and P-State performance governor, Clear Linux won in “just” 54 of the 92 benchmarks; about 59% of the time it was the fastest distribution.

Going to these varying efforts to tweak Ubuntu for faster performance resulted in Clear Linux’s lead shrinking from 72% to about 59%, or about 64% if not counting the run using the Clear Linux Docker container itself on Ubuntu 18.10 for the optimized Clear user-space.

This data shows that Clear Linux does much more than adjust a few tunables to reach its leading performance; it’s not as trivial as adjusting CFLAGS/CXXFLAGS, opting for the performance governor, etc. Clear additionally makes use of GCC Function Multi-Versioning (FMV) to optimize its binaries to use the fastest code path depending upon the CPU detected at run-time, among other compiler/tooling optimizations. It also often patches its Glibc and other key components, beyond just Linux kernel patches, with changes not yet ready to be mainlined. Other misconceptions to clear up about this open-source operating system: it does not use the Intel ICC compiler, it does run on AMD hardware (and does so in a speedy manner as well), and it runs on Intel hardware going back to around Sandy Bridge, not just the very latest and greatest generations.

While the prominent performance numbers have already been shared, the following pages take a look at some of the interesting benchmark results from this comparison.

Source

New Quick Start builds a CI/CD pipeline to test AWS CloudFormation templates using AWS TaskCat

Posted On: Oct 30, 2018

This Quick Start deploys a continuous integration and continuous delivery (CI/CD) pipeline on the Amazon Web Services (AWS) Cloud in about 15 minutes to automatically test AWS CloudFormation templates from a GitHub repository.

The CI/CD environment includes AWS TaskCat for testing, AWS CodePipeline for continuous integration, and AWS CodeBuild as your build service.

TaskCat is an open-source tool that tests AWS CloudFormation templates. It creates stacks in multiple AWS Regions simultaneously and generates a report with a pass/fail grade for each region. You can specify the regions, indicate the number of Availability Zones you want to include in the test, and pass in the AWS CloudFormation parameter values you want to test. You can use the CI/CD pipeline to test any AWS CloudFormation templates, including nested templates, from a GitHub repository.
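For illustration, a minimal TaskCat configuration might look like the sketch below. The field names follow TaskCat’s 2018-era ci/taskcat.yml layout from memory and may differ in your installed version, and the project name, regions and file names are placeholders; check the TaskCat documentation before using it:

```yaml
# ci/taskcat.yml (hypothetical project; verify keys against your TaskCat version)
global:
  qsname: sample-taskcat-project     # project name (placeholder)
  regions:                           # regions to launch test stacks in
    - us-east-1
    - us-west-2
tests:
  default-test:
    template_file: sample-template.yaml   # CloudFormation template under test
    parameter_input: sample-input.json    # parameter values to pass in
```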

To get started:

You can also download the AWS CloudFormation template that automates the deployment from GitHub, or view the TaskCat source code.

To browse and launch other AWS Quick Start reference deployments, see our complete catalog.

Quick Starts are automated reference deployments that use AWS CloudFormation templates to deploy key technologies on AWS, following AWS best practices. This Quick Start was built by AWS solutions architects.

Source

Braiins OS Is The First Fully Open Source, Linux-based Bitcoin Mining System

Braiins Systems, the company behind the Slush Pool, has announced Braiins OS. The creators of this bitcoin mining software have claimed that it’s the world’s first fully open source system for cryptocurrency embedded devices.

The initial release of the operating system is based on OpenWrt, which is basically a Linux operating system for embedded devices. You can find its code here.

Those familiar with OpenWrt will be aware that it’s very versatile. As a result, Braiins OS can also be extended to different applications in the future.

In a Medium post, Braiins Systems has said that different weird cases of non-standard behavior of mining devices cause tons of issues. With this new mining software, the company wishes to make things easier for mining pool operators and miners.

The OS keeps monitoring the hardware and its working conditions to produce error and performance reports. Braiins also claimed that it reduces power consumption by 20%.

The very first Braiins OS release lets you download images for the Antminer S9 and the DragonMint T1. Currently, the software is in the alpha stage, and the developers have asked miners to test it and share feedback.


Source

Install Ubuntu on Raspberry Pi

Canonical has released a minimal version of Ubuntu made specifically for IoT devices, called Ubuntu Core. Ubuntu Core requires less storage and memory to run, is very lightweight, and is really fast. It can be installed on Raspberry Pi microcomputers; you need a Raspberry Pi 2 or 3 single board microcomputer to install and run Ubuntu Core.

In this article, I will show you how to install Ubuntu Core on Raspberry Pi 3 Model B. So, let’s get started.

To follow this article, you need:

  • Raspberry Pi 2 or 3 Single Board Microcomputer.
  • A 16GB or more microSD card.
  • HDMI Cable.
  • A USB Keyboard.
  • Ethernet Cable.
  • Power Adapter for Raspberry Pi.
  • A Laptop or Desktop computer for installing/flashing Ubuntu Core on the SD card.

Setting Up Ubuntu One Account for Ubuntu Core:

If you want to use Ubuntu Core on your Raspberry Pi 3, then you need an Ubuntu One account. If you don’t have an Ubuntu One account, you can create one for free. Just visit https://login.ubuntu.com and click on I don’t have an Ubuntu One account as marked in the screenshot below.

Now, fill in the required details and click on Create account.

Now, verify your email address and your account should be created. Then, visit https://login.ubuntu.com/ and log in to your Ubuntu One account. Click on SSH keys and you should see the following page. Here, you have to import the SSH key of the machine from which you will be connecting to the Ubuntu Core installation on your Raspberry Pi 3 device.

You can generate an SSH key very easily with the following command:
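The command itself appeared only in the original screenshots; the standard tool is ssh-keygen. Running it bare walks you through the prompts described below, and a non-interactive equivalent would be:

```shell
# Non-interactive equivalent of answering every ssh-keygen prompt with <Enter>:
# default key path (~/.ssh/id_rsa) and an empty passphrase.
# The guard skips generation if a key already exists, to avoid clobbering it.
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa ] || ssh-keygen -t rsa -b 4096 -f ~/.ssh/id_rsa -N ""
```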

By default, the SSH keys will be saved in the .ssh/ directory of your login user’s HOME directory. If you want to save it somewhere else, just type in the path where you would like to save it and press <Enter>. I will leave the defaults.

Now, press <Enter>.

NOTE: If you want to encrypt the SSH key with password, type it in here and press <Enter>.

Press <Enter> again.

NOTE: If you’ve typed in a password in the earlier step, just re-type the same password and press <Enter>.

Your SSH key should be generated.

Now, read the SSH key with the following command:
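Again the command only appeared in the screenshots; assuming the default key path from ssh-keygen, printing the public half looks like this:

```shell
# Print the public key so it can be copied into the Ubuntu One SSH keys page.
# The path assumes the ssh-keygen default; adjust it if you saved the key
# elsewhere. (The guard generates a key first if none exists yet.)
mkdir -p ~/.ssh
[ -f ~/.ssh/id_rsa.pub ] || ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa -q
cat ~/.ssh/id_rsa.pub
```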

Now, copy the SSH key as marked in the screenshot below.

Now, paste it in the Ubuntu One website and click on Import SSH key as marked in the screenshot below.

As you can see, the SSH key is added.

Downloading Ubuntu Core:

Now that you have your Ubuntu One account set up, it’s time to download Ubuntu Core. First, go to the official website of Ubuntu at https://www.ubuntu.com/download/iot/raspberry-pi-2-3

Now, scroll down to the Download Ubuntu Core section and click on the download link for either Raspberry Pi 2 or Raspberry Pi 3 depending on the version of Raspberry Pi you have. I have Raspberry Pi 3 Model B, so I am going for the Raspberry Pi 3 image.

Your download should start.

Flashing Ubuntu Core on microSD Card:

You can flash Ubuntu Core onto your microSD card very easily on the Windows, Linux and macOS operating systems using Etcher. Etcher is a really easy-to-use tool for flashing microSD cards for Raspberry Pi devices. You can download it from the official website of Etcher at https://etcher.io/

NOTE: I can’t show you how to install Etcher in this article as it is out of the scope of this article. You should be able to install Etcher on your own. It’s very easy.

Once you install Etcher, open Etcher and click on Select image.

A file picker should be opened. Now, select the Ubuntu Core image that you just downloaded and click on Open.

Now, insert the microSD card into your computer and click on Select drive.

Now, click to select your microSD card and click on Continue.

Finally, click on Flash!

As you can see, your microSD card is being flashed…

Once your microSD card is flashed, close Etcher.

Preparing Raspberry Pi:

Now that you have flashed Ubuntu Core onto the microSD card, insert it into your Raspberry Pi’s microSD card slot. Next, connect one end of the Ethernet cable to the RJ45 Ethernet port of your Raspberry Pi and the other end to one of the ports on your router or switch. Connect one end of the HDMI cable to your Raspberry Pi and the other end to your monitor. Also, connect the USB keyboard to one of the USB ports of your Raspberry Pi. Finally, plug the power adapter into your Raspberry Pi.

After connecting everything, my Raspberry Pi 3 Model B looks as follows:

Setting Up Ubuntu Core on Raspberry Pi:

Now, power on your Raspberry Pi device and it should boot into Ubuntu Core as you can see in the screenshot below.

Once you see the following window, press <Enter> to configure Ubuntu Core.

First, you have to configure networking. This is essential for Ubuntu Core to work. To do that, press <Enter> here.

As you can see, Ubuntu Core has automatically configured the network interface using DHCP. The IP address is 192.168.2.15 in my case; yours should be different. Once you’re done, select [ Done ] and press <Enter>.

Now, type in the email address that you used to create your Ubuntu One account. Then, select [ Done ] and press <Enter>.

The configuration is complete. Now press <Enter>.

Now, you should see the following window. You can SSH into your Raspberry Pi with the command as marked in the screenshot below.

Connecting to Raspberry Pi Using SSH:

Now, SSH into your Raspberry Pi device from your computer as follows:

$ ssh dev.shovon8@192.168.2.15

Now, type in yes and press <Enter>.

You should be logged into your Raspberry Pi.

As you can see, I am running Ubuntu Core 16.

It’s using just a few megabytes of memory. It’s very lightweight as I said.

So, that’s how you install Ubuntu Core on Raspberry Pi 2 and Raspberry Pi 3. Thanks for reading this article.

Source

Download Bitnami GitLab Stack Linux 11.4.3-0

Bitnami GitLab Stack is a freely distributed and multiplatform software project that greatly simplifies the installation and hosting of the GitLab application, as well as of its runtime dependencies, on personal computers, so you can easily run your own GitLab server.

What is GitLab?

GitLab is an open source and self-hosted Git management application, which can be easily described as a secure, stable and fast solution based on Gitolite and Rails. Bitnami GitLab Stack will install the following packages: GitLab, Apache, Ruby, Rails, Redis, GitLab’s fork of Gitolite, and Git.

Installing Bitnami GitLab Stack

Bitnami GitLab Stack is distributed as native installers, which have been built using BitRock’s cross-platform installer tool. They are available for all GNU/Linux distributions, but won’t work on Microsoft Windows and Mac OS X operating systems.

To install GitLab on your desktop computer or laptop, simply download the package that corresponds to your computer’s hardware architecture (32-bit or 64-bit), make it executable, run it and follow the instructions displayed on the screen.

Run GitLab in the cloud

Thanks to Bitnami, customers are now able to run the GitLab application in the cloud with their hosting platform of choice, by using a pre-built cloud image for Amazon EC2, Windows Azure, or any other supported cloud hosting provider.

Bitnami’s GitLab virtual appliance

In addition to running GitLab in the cloud or installing it on personal computers, you can also virtualize it using Bitnami’s virtual appliance, which is based on the latest stable release of Ubuntu (64-bit) and designed for the VMware ESX, ESXi and Oracle VirtualBox virtualization software.

The GitLab Docker container and LAMP module

Bitnami will also provide users with a GitLab Docker container, which can be downloaded from the project’s homepage (see link below). Unfortunately, they don’t provide a module that would have allowed you to deploy GitLab on top of your LAMP (Linux, Apache, MySQL and PHP) stack.

Source
