Install Magento 2 on Ubuntu

E-commerce websites provide a platform for buyers and sellers and make doing business online easier than ever. An e-commerce site can be built with a variety of web applications. Magento is a popular open source CMS for developing e-commerce websites with numerous features, good security, and minimal programming knowledge, and it is supported on both Windows and Linux. This tutorial shows how Magento 2.2.6 can be installed on Ubuntu.

You have to update the system and confirm that the following packages are installed and working properly before starting the Magento installation steps.

  • Apache2
  • MySQL 5.6+ or MariaDB
  • PHP 7+

Run the following commands to update the system and check that it is ready to start the installation.

$ sudo apt-get update
$ apache2 -v
$ php -v
$ mysql --version

If version information appears for each command, then Apache, PHP and MySQL are installed.

Magento Installation Steps

Step-1: Download Magento Installer

Go to the following URL, select the latest version of Magento and click on the DOWNLOAD button. Here, ‘Magento Open Source 2.2.6 tar.gz’ is selected and downloaded.

https://magento.com/tech-resources/download

You have to create an account on the Magento site to download Magento, so the following screen will appear before the download starts. Create an account or log in to your existing account to start the download process.

Step-2: Create folder and unzip Magento installer

By default, Magento will be downloaded to the Downloads folder. Run the following commands to create a folder named ‘magento’ under the /var/www/html folder and copy the downloaded Magento archive into that folder.

$ cd ~/Downloads
$ ls
$ sudo mkdir /var/www/html/magento
$ sudo cp Magento-CE-2.2.6-2018-09-07-02-12-38.tar.gz /var/www/html/magento

Go to the magento folder and extract the Magento installer.

$ cd /var/www/html/magento
$ ls
$ sudo tar xzvf Magento-CE-2.2.6-2018-09-07-02-12-38.tar.gz

Step-3: PHP Settings

You need root privileges to edit and save the php.ini file, so open it with sudo rather than making it world-writable:

$ sudo nano /etc/php/7.1/apache2/php.ini

In php.ini, increase the memory limit and make sure the extensions required by Magento are enabled, as sketched below.
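As a rough guide based on Magento 2.2’s documented requirements (adjust the values and the PHP version to your setup), the memory limit setting and the usual extension packages look like this:

memory_limit = 768M

$ sudo apt-get install php7.1-curl php7.1-gd php7.1-intl php7.1-mbstring php7.1-mcrypt php7.1-xml php7.1-zip php7.1-mysql
$ sudo service apache2 restart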

Step-4: Database Setting

Log in to the MySQL server.

Create a database named ‘magento’, as shown in the example below.
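For example (the user name and password here are placeholders; this GRANT syntax works on MySQL 5.6/5.7 and MariaDB):

$ sudo mysql -u root -p
mysql> CREATE DATABASE magento;
mysql> GRANT ALL PRIVILEGES ON magento.* TO 'magento_user'@'localhost' IDENTIFIED BY 'your_password';
mysql> FLUSH PRIVILEGES;
mysql> EXIT;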

Step-5: Setting Necessary Permissions

Run the following commands to set the necessary permissions on the magento folder and restart the Apache server.

$ sudo chown -R www-data:www-data /var/www/html/magento
$ sudo chmod -R 755 /var/www/html/magento
$ sudo service apache2 restart

Step-6: Setup Magento

Open a browser, enter the following URL, and click on the Agree and Setup Magento button.

http://localhost/magento

If any necessary PHP extension is missing, it will be displayed on this page. Here, two extensions are missing or not working: soap and bcmath.

Install the missing extensions.

$ sudo apt-get install php7.1-soap
$ sudo apt-get install php7.1-bcmath

Restart Apache and run the setup again. If everything is OK, click on the Next button. Fill in the following form with the database hostname, username, password and database name, then click on the Next button.

Set the base URL of the store and the admin path in this step. Here, I have removed the admin prefix for easy access.

The next step is for customizing the store. Keep the default settings and click on Next.

Create an admin account to log in to the dashboard of the admin panel and click Next.

Click on the Install Now button after completing all the setup steps.

If the installation completes successfully, the following screen will appear.

Step-7: Verify that Magento is working

Open the following URL in the browser to check whether the store view is working.

http://localhost/magento

Open the following URL to log in to the Admin Panel. Provide the username and password that you created in the previous step.

http://localhost/magento/admin

The following admin panel will appear if Magento is installed and working properly.

Hopefully, this tutorial will help you learn and use Magento on Ubuntu.

Source

Best Android emulators for Linux

The first question you need to ask yourself is what you want the Android emulator to do for you. Often, you only need it for a specific application that you cannot get for your Linux desktop. Sometimes you want to run a game, and sometimes you are looking to develop your own application.

Which works best for what?

Android Virtual Device

Android Virtual Device is designed especially for testing your own code when working in Android Studio. The built-in emulator is superior for testing your own applications; as expected, it works best with the Android SDK, but you can also use the emulator stand-alone. The images take a lot of disk space and use a lot of memory when running, but all the features are there and it runs almost flawlessly. With this package you can also emulate the phone moving around, a low battery and other hardware-related situations.

Shashlik

Shashlik still works and is surprisingly powerful and simple to get started with. Once you have it installed, you can install Android applications by starting the Shashlik emulator and connecting to it using adb, as sketched below. Applications can then be started directly from your desktop. They will look like regular applications but will be a little slow to start, since the VM has to start before the application itself. Note that this package is still in beta and the last update was back in 2016, so make sure you don’t rely on updates. If, however, your application works, then you can keep using it.
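A typical session looks something like this (assuming the emulator exposes the default adb port 5555; the APK name is a placeholder):

$ adb connect localhost:5555
$ adb install yourapp.apk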

Android-x86

Android-x86 can also run in a virtual machine; VirtualBox is one option, but not the only one. The fun part about this package is that you can install it as a second OS on your disk and run it natively. When you do this you are no longer emulating; instead, the whole install runs directly on your laptop. This also works when you want more battery life from your laptop, as it usually uses much less power than your regular OS. Since it can replace your OS, it emulates everything near exactly. The analysis tools for your own applications are not something that has been prioritized in this project, so it may lack a little in that area, but for regular use it is great. Upgrading is also simple, as the image is an ISO and there is an RPM file for the install.

Anbox

Anbox aims to give you the ability to run Android apps in Linux: in a box, as the name suggests. The package comes as a snap only, unless you are going to develop; if you want to build the code yourself, you need to download the entire source code for Android. This application is excellent for running small applications directly on your desktop. To install Android apps, the easiest way is to find a package manager app and use that, as sketched below. The adb program contacts any Android device connected to the computer, and Anbox acts as a mobile device connected to the computer it is running on. You can also add Google Play yourself; it is not included for legal reasons. F-Droid is a popular open source alternative.
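A minimal sketch (the beta snap channel matched Anbox’s status at the time of writing; the F-Droid APK name is a placeholder for whichever package manager you download):

$ sudo snap install --devmode --beta anbox
$ adb devices
$ adb install FDroid.apk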

Genymotion

Genymotion is only available as a closed source distribution, but you can use it for free. When installed, it is fully capable of emulating everything a phone can do. It uses VirtualBox in the background but has a nice GUI on top of it that makes running tests a breeze. The company also offers online farms of emulated hardware that you can rent. This comes at a hefty price, of course, and it is only intended for professional developers.

Chrome Browser

If you use the Chrome browser, you can also use ARC Welder. This is an app from the Chrome Web Store; the install takes a while since the app is large, as it includes an emulator. Once it is installed and you start it, you are greeted with an extremely simple screen containing one big plus sign and the text ‘Add your APK’, so you have to have the APK file ready in local storage. Once the app is installed, there is an icon on your new tab page, which you can click to start the app again. ARC Welder is intended for testing only and there are serious bugs in the Linux version, but it integrates well with the GNOME desktop.

Illustration 1: ARC Welder running F-Droid

Conclusion

Sometimes you want to test apps: as a hobbyist, use your own; as a pro, use online systems. For playing games, use Anbox; it is still early but already useful. When you are developing yourself, the Android SDK is your best option, and its own virtual device will be the easiest to use. Genymotion comes into its own, though, when you need to test hardware-related features and mapping applications.

Source

Hands on Linux Training and Linux Certification

Enroll now for a free laptop!!

Best Linux Training!

Upcoming Classes

Linux Fundamentals

October 18 – October 19, 2018 [Virtual live]

Linux System & Network Administration

October 20 – October 21, 2018 [Virtual live]

Hands On Embedded Linux Development

October 10 – October 12, 2018 [Virtual live]

Linux Device Driver Development

October 15 – October 17, 2018 [Virtual live]

Linux Kernel Internals

October 22 – October 24, 2018 [Virtual live]

Shell Programming

October 22 – October 23, 2018 [Sunnyvale, California] [Virtual]

Linux Development Essentials: Fundamentals, Tools and Techniques

October 24 – October 26, 2018 [Virtual live]

Android Development Training

TBD [Sunnyvale, California] [Virtual]

Source

Complete guide to Dual Boot Ubuntu 18.XX with Windows 10

Ubuntu 18.04, aka Bionic Beaver, was released on April 26th, 2018 with a lot of changes on the front end as well as on the backend. The major change that anybody who has ever used Ubuntu will notice is the desktop environment.

Update: Ubuntu 18.10, aka Cosmic Cuttlefish, is around the corner & will be released on Oct 18, 2018. This tutorial can also be used to dual boot Ubuntu 18.10.

You should start noticing the changes beginning with the Ubuntu installation: we now have a new wallpaper when we boot into Ubuntu &, during installation, we now get the option of a minimal installation, which gives you a web browser & some basic utilities.

The Unity desktop environment has been replaced with GNOME, as Ubuntu has ended development on Unity. The LightDM login manager has been swapped for GNOME’s GDM login manager. Ubuntu 18.04 uses the Xorg display server by default again, reverting the switch to the Wayland display server made in Ubuntu 17.10. Ubuntu 18.04 now has support for color emoji icons, and it also allows us to live patch our systems, i.e. we can now patch our kernels without having to restart.

On the software front, most of the defaults remain the same. We have the usual Thunderbird, LibreOffice, Firefox, Nautilus file manager etc. at our disposal, though their versions have been upgraded. Also, Ubuntu 18.04 ships with mitigations for the Spectre & Meltdown attacks.

Recommended Read : How to Add and Remove Users in Ubuntu

Also Read : Easy guide on how to install Java on Ubuntu systems

So that was a brief look at Ubuntu 18.04; let’s now get to our main topic, which is how to dual boot Ubuntu with a Windows 10 system. So let’s get started,

For the purpose of this tutorial, we assume that you have a Windows 10 system with at least 25 GB of free HDD space. If you do, then move on to the steps below,

1- Log in to your Windows 10 system. Once done, we need to open the Disk Management console. To open it, open the run prompt (by pressing Windows Key + R or by typing ‘run’ in the search bar) & type ‘diskmgmt.msc’,


2- We need to free up some space on our HDD; we will do that by shrinking a volume that has some free space. Select a partition with at least 25 GB free, right click & select ‘Shrink Volume’,


3- On the next screen, select the amount you want to shrink by and press ‘Shrink’. For our tutorial, we will be using 40 GB,

4- As shown in the screenshot below, we now have 40 GB of unallocated space that will be used for the Ubuntu 18.04 installation.


5- The next step will be to shut down your system, insert a bootable USB or a live DVD of Ubuntu 18.04 into the system & then boot the system into the Ubuntu image,


6- Once the Ubuntu image has booted up, we should see the following options. Select ‘Install Ubuntu’,


7- On the next screen, we will be asked for the ‘Keyboard Layout’. I am leaving it at the default; modify it as per your needs & press Continue,


8- On the next screen we are introduced to a little change: we have an extra option to select what apps we need installed on our system. We can either select Normal installation, with all the default applications, or Minimal installation, with only a web browser & some basic utilities. Select the option you see fit & press Continue,


9- On the next screen, we will select the installation type. Since we are dual booting with Windows 10, we will select the first option, ‘Install Ubuntu alongside Windows 10’; the other options can be chosen when we are installing Ubuntu afresh,


10- On the following screen, press ‘Write changes to disk’ to proceed further with the installation,


11- Now we will be asked about our geographical location; select your country & proceed by clicking on the Continue button,

12- Now comes the part of the installation where we need to enter information about our user: the computer name, username & password etc. Enter all the requested information & once done, press Continue,


13- We have now entered all the needed information. Ubuntu will now proceed with the installation &, depending on the allocated system resources, it can take anywhere between 5 to 20 mins,


14- Once the installation has completed, we will be asked to restart our system. Press ‘Restart Now’. Also, remove the installation media from the system before restarting,


15- When the system restarts, we will see the following boot menu first. Here we can either opt to log in to the Ubuntu installation by selecting the first option, or boot into the Windows 10 installation by selecting the last option. We will boot into Ubuntu,


16- After completing the bootup process, we will see the login screen. Select the created user & enter the password to log in,


17- That’s it guys, we have successfully installed Ubuntu alongside Windows 10 & can now use both operating systems without an issue,
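If you’d like to double-check the setup from the Ubuntu side, here is a quick sketch (lsblk ships with Ubuntu, and os-prober, which GRUB uses to detect other operating systems, is installed by default):

$ lsblk -f
$ sudo os-prober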


This completes our tutorial on how to dual boot Ubuntu with Windows 10. Hopefully the tutorial was clear enough. If you have any questions or queries, please let us know using the comment box below.

Source

Working with Calendars in the Linux Terminal

The graphical Calendar tool available on your Ubuntu system is pretty useful. However, if you are more Terminal-savvy, you can use the powerful command line utilities cal and ncal to customize the way you view calendars for a specific month or year. This article explains the cal and ncal commands in detail, along with the options you can use with them.

We have run the commands and procedures mentioned in this article on an Ubuntu 18.04 LTS system.

Since you will be using the Linux Terminal in order to view customized calendars, you can open it through the Dash or the Ctrl+Alt+T shortcut.

The cal Command

The cal utility displays the calendar in the traditional horizontal format. The following simple cal command is used to view the calendar for the current month with the current date highlighted:

$ cal
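For example, for October 2018 the output looks like this (with a Sunday-first layout; the first day of the week depends on your locale, and the current date is highlighted in the Terminal):

    October 2018
Su Mo Tu We Th Fr Sa
    1  2  3  4  5  6
 7  8  9 10 11 12 13
14 15 16 17 18 19 20
21 22 23 24 25 26 27
28 29 30 31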

Cal Command Options

You can view the calendar according to the following syntax, based on the options explained below:

$ cal [-m [month]] [-y [year]] [-3] [-1] [-A [number]] [-B [number]] [-d [YYYY-MM]] [-j]

Option Use
-m [month] Use this option to display the calendar for the specified month. You can specify the full month name, such as “January”, the three-letter abbreviation, such as “Jan”, or the month number. This switch also lets you view the calendar for a month of the next year; in that case, add the letter f after the month number, such as -m 1f
-y [year] Use this option in order to view the calendar for a specified year. For example ‘-y 2019’ will display all months for the year 2019
-1 Use this option to view the calendar of only one month. Since this is the default setting, you can avoid using this switch unless necessary.
-3 Use this option in order to view calendars for three months; these include the current month, the previous month, and the coming month.
-A [number] Use this option when you want to view an X number of coming months along with the calendar you have already set for viewing.

Example 1: cal -3 -A 1 (this command will display the calendar for the current, previous and next month, and also 1 more month after the next month)

Example 2: cal -y 2019 -A 1 (this command will display the calendar for the year 2019 along with one more month, i.e. January 2020)

-B [number] Use this option when you want to view an X number of previous months along with the calendar you have already set for viewing.

Example 1: cal -3 -B 1 (this command will display the calendar for the current, previous and next month, and also 1 more month before the previous month)

Example 2: cal -y 2019 -B 1 (this command will display the calendar for the year 2019 along with one month of the previous year, i.e. December 2018)

-d [YYYY-MM] You can view the calendar of a specific month of the specific year by mentioning that year and month in YYYY-MM format with -d option.
-j You can use this option to view the calendar in Julian format rather than the default Gregorian format.

Cal Command Examples

The following command will display the entire calendar for the current year:

$ cal -y

The following command will display the calendar for January 2017 as it is specified in the YYYY-MM format in the command:

$ cal -d 2017-01

The ncal Command

The ncal command is more powerful than the cal command. It displays the calendar in a vertical format with some additional options. These include displaying the date of Easter, viewing the calendar with Monday or Sunday as the starting day, and much more.

The following simple ncal command is used to view the calendar in vertical format for the current month with the current date highlighted:

$ ncal

Ncal Command Options

You can view the calendar according to the following ncal syntax, based on the options explained below:

ncal [-m [month]] [-y [year]] [-h] [-3] [-1] [-A [number]] [-B [number]] [-d [YYYY-MM]] [-C] [-e] [-o] [-p] [-w] [-M] [-S] [-b]

Note: The options already explained for the cal command can be used in the same manner for the ncal command.

Options Use
-h By default, ncal highlights today’s date. However, if you use the -h option, it will not highlight the date.
-e Use this option to view the date of Easter for western calendars.
-o Use this option to view the date of the Orthodox Easter.
-p Use this option to view country codes and switching days that are used for switching from Julian to Gregorian calendars for that country.
-w When you use this option, ncal will print the week number under each week.
-C By using this option, you can use all the options of the cal command with the ncal command.
-M Use this option to view calendars with Monday as the first day of the week.
-S Use this option to view calendars with Sunday as the first day of the week.
-b When you use this option, ncal will display the calendar horizontally, as it is displayed by the cal command.

Ncal Command Examples

The following command will display the calendar for the current month without highlighting today’s date:

$ ncal -h

The following command will display the calendar of the current month with Monday as the first day of the week:
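$ ncal -M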

Through this article, you have learned to view calendars according to the many options available for the cal and ncal commands. By using these options, you can customize the way you view calendars instead of the usual way calendars are displayed in Linux.

Source

Airtame speeds up its Linux-driven TV dongle

Airtame has released a faster new “Airtame 2” version of its Linux-based HDMI dongle for mirroring content to a TV, adding WiFi-ac, 2GB RAM, and a growing enterprise focus.

One category that often gets overlooked in the discussion of Linux computers is the market for HDMI dongle devices that plug into your TV to stream, mirror, or cast content from your laptop or mobile device. Yesterday, Google announced the third-gen version of its market-leading, Linux-powered Chromecast device. The latest Chromecast has a new design and Google Home support, and it’s claimed to be 15 percent faster than the 2015 version, with new support for 1080p@60fps video. However, the rumored addition of Bluetooth did not materialize.

Airtame 2 (left) and signage examples

Here, we look at a similar Linux-based HDMI dongle device that launched this morning with a somewhat different feature set and market focus. The Airtame 2 is the first hardware overhaul since the original Airtame generated $1.3 million on Indiegogo in 2013. The new version quadruples the RAM, improves the Fedora Linux firmware, and advances to dual-band 802.11a/b/g/n/ac, which is now known as WiFi 5 in the new Wi-Fi Alliance naming scheme that accompanied its recent WiFi 6 (ax) announcement.


Airtame 1 prototype

In its first year, Copenhagen, Denmark-based Airtame struggled to fulfill its Indiegogo orders and almost collapsed in the process. Yet, the company went on to find success and recently surpassed 100,000 device shipments. With a growing focus on enterprise and educational markets, Airtame upgraded its software with cloud device management features, and expanded its media sources beyond cross-platform desktops to Android and iOS devices.

The key difference from Chromecast is that Airtame supports mirroring to multiple devices at once, as long as your video is coming from a laptop or desktop rather than a mobile device. Chromecast also requires the Chrome browser, and it lacks cloud-based device management features.


Third-gen Chromecast

The $35 Chromecast is still a major player in the low-end consumer media player segment, but its dominance has faded due to greater competition from devices such as the Linux-based Roku and the Amazon Fire. Airtame has further backed away from that competition by focusing more on the enterprise, signage, and educational markets. Accordingly, the Airtame 2 price went up by $100 to $399 per device.

Airtame 2 extends its enterprise trajectory by “re-imagining how to turn blank screens into smart, collaborative displays,” says the company. Airtame recently released four Homescreen apps, providing “simple app integrations for better team collaboration and digital signage.” These deployments are controlled via Airtame Cloud, which was launched in early 2017. The cloud service enables enterprise and educational customers to monitor their Airtame devices, perform bulk updates, and add updated content directly from the cloud.

Twice the RAM, five times the WiFi performance

The Airtame 2 offers the same basic functionality as the Airtame 1, but it adds a number of performance benefits. It moves from the DualLite version of the NXP i.MX6 to the similarly dual-core, Cortex-A9 Dual model. This has the same 1GHz clock rate, but with a more advanced Vivante GC2000 GPU. Output resolution via the HDMI 1.4b port stays the same at 1920×1080, but you now get a 60fps frame rate instead of 30fps. As before, you can plug into VGA or DVI ports using adapters.

Airtame 2 with new magnetic mount (left) and spec comparison with Airtame 1

More importantly for performance, the Airtame 2 quadruples the RAM to 2GB. In place of an SD card slot, the firmware is stored on onboard eMMC.

The new Cypress (Broadcom) CYW89342 RSDB WiFi 5 chip is about five times faster than the original’s Qualcomm WiFi 4 (802.11n) chip, which also provided dual-band MIMO 2.4GHz/5.2GHz WiFi. The Airtame 2 has twice the range, at up to 20 meters, which is helpful for its enterprise and educational customers.

Other hardware improvements include a smaller, 77.9 x 13.5mm footprint, a Kensington Lock input, an LED, and a magnetic wall mount. A USB Type-C port replaces the power-only micro-USB OTG, adding support for HDMI, USB host, and Ethernet.

Airtame 2 with Kensington Lock (left) and new power cable and adapter

As before, there’s also a micro-USB host port that, with the help of an adapter, supports Ethernet and Power-over-Ethernet (PoE). (Unfortunately, the device is no longer powered over HDMI, so if you don’t have the upcoming PoE option, you must plug in via a new power jack with a thin cable that includes a power adapter.)

Ethernet can run simultaneously with WiFi, and can improve throughput and reliability, says Airtame. We saw no mention of the new product’s latency, which is said to be improved, but on the previous Airtame, WiFi streaming latency was one second with audio.

Once again, iOS 9 devices can mirror video using AirPlay. However, Android (4.2.2) devices are limited to the display of static images and PDF files, including non-animated PowerPoint presentations. Desktop support, which also includes a special optimization for Chromebooks, covers Windows 10/7, Ubuntu 15.04, and Mac OS X 10.12.


This article is copyright © 2018 Linux.com and was originally published here. It has been reproduced by this site with the permission of its owner. Please visit Linux.com for up-to-date news and articles about Linux and open source.

Source

Community backed Kaby Lake SBC ships with downloadable Ubuntu image


DFRobot has fulfilled Kickstarter orders for its Kaby Lake based LattePanda Alpha SBC, and is shipping a model with 8GB RAM and 64GB eMMC without an OS that supports Windows 10 or Ubuntu 16.04 LTS.

DFRobot’s LattePanda project has fulfilled its Kickstarter orders for its community-backed, Intel 7th Gen Core based LattePanda Alpha after several months of delays, and public sales have switched from pre-order to in-stock fulfillment for at least one model. Like the earlier, Intel Cherry Trail based LattePanda, the LattePanda Alpha is notable for being a community backed (but not fully open source) hacker board loaded with Windows 10. Yet with the LattePanda Alpha, you can also choose a more affordable barebones version without a Windows 10 key that supports an optimized, downloadable Ubuntu 16.04 LTS image.

LattePanda Alpha

The almost identical LattePanda Delta board that was promoted in the same Dec. 2017 Kickstarter campaign is still not ready, although like the Alpha it’s been available for pre-order since June. The Delta delay may well be due to shortages of Intel’s 8th Gen Gemini Lake follow-on to its lower-power Apollo Lake SoCs. Yet, Gemini Lake has shipped on a few computers such as the Windows-equipped Alldocube KNote 5 2-in-1 tablet PC and Beelink S2 mini-PC.


LattePanda Alpha

It appears that the only LattePanda Alpha model currently in stock — and only at DFRobot — is the $358 barebones version with 8GB LPDDR3 and 64GB eMMC. A $298 barebones model without the 64GB eMMC and a $398 model with 64GB eMMC and Windows Pro 10 are both listed as pre-order, without a promised ship date. The $358 price for the shipping, Linux-ready model is $60 to $70 more than the original Kickstarter packages.

The LattePanda Alpha’s 7th Gen Kaby Lake Core m3-7Y30 is a dual-core, quad-thread 1.6GHz/2.6GHz processor with 900MHz Intel HD Graphics 615. The processor has a configurable TDP of 3.75W to 7W, and like the original LattePanda, it is accompanied by an Arduino-compatible co-processor.

In addition to the 8GB LPDDR3 RAM and optional 64GB eMMC 5.0, there’s a microSD slot and an M.2 M Key interface that supports PCIe x4, SATA SSD, and NVMe SSD expansion. (The upcoming Delta model instead has an M.2 B-Key without NVMe support, which was the only major difference from the Alpha aside from its Gemini Lake SoC.)

LattePanda Alpha front detail view

The Alpha is also equipped with an M.2 E-Key slot with PCIe x2, USB 2.0, I2C, and UART support. This offers additional wireless possibilities in addition to the standard dual-band 802.11ac (now called WiFi 5), which is accompanied by Bluetooth 4.2. A GbE port is also onboard.

LattePanda Alpha back detail view

The LattePanda Alpha provides 3x USB 3.0 host ports and a USB Type-C port with support for USB 3.0, power input, and DisplayPort. Dual simultaneous 4K display support is available via the Type-C DisplayPort, as well as an HDMI port and eDP interface that supports optional 7- and 10.1-inch touchscreens.

Dual 50-pin GPIO connectors include one with an Arduino pinout. Other features include a 12V input, an audio jack, a PMIC, an RTC, and a cooling fan. We’re still not seeing full dimensions for the SBC beyond the slim 13mm height, except to say it’s 70 percent smaller than an iPhone Plus. For more details, see the spec list and other background in our original LattePanda Alpha and Delta story.

One novel feature is a streaming cable that enables Linux, Mac, or Windows desktop users to plug the LattePanda into a USB port to provide easy access to a Windows device without requiring partitioning or dual booting. The streaming configuration, which enables a PiP (Picture in Picture) view for “seamless interaction,” is intended primarily for Linux and Mac developers who want to develop Windows 10-based IoT devices. As noted, however, you can also buy the barebones version loaded with Ubuntu.

LattePanda has a thriving community site with a forum and extensive documentation, including GPIO pinouts, but so far, only for the original LattePanda. No schematics are provided.

Further information

The LattePanda Alpha is now available in a barebones package with 8GB RAM and 64GB eMMC, with shipments due Oct. 20. Other models are available on pre-order, as linked above. More information may be found at the DFRobot LattePanda shopping page.

Source

The Future of Open Source | Software

By Jack M. Germain

Sep 19, 2018 5:00 AM PT

Linux and the open source business model are far different today than many of the early developers might have hoped. Neither can claim a rags-to-riches story. Rather, their growth cycles have been a series of hit-or-miss milestones.

The Linux desktop has yet to find a home on the majority of consumer and enterprise computers. However, Linux-powered technology has long ruled the Internet and conquered the cloud and Internet of Things deployments. Both Linux and free open source licensing have dominated in other ways.

Microsoft Windows 10 has experienced similar deployment struggles as proprietary developers have searched for better solutions to support consumers and enterprise users.

Meanwhile, Linux is the more rigorous operating system, but it has been beset by a growing list of open source code vulnerabilities and compatibility issues.

The Windows phone has come and gone. Apple’s iPhone has thrived in spite of stagnation and feature restrictions. Meanwhile, the Linux-based open source Android phone platform is a worldwide leader.

Innovation continues to drive demand for Chromebooks in homes, schools and offices. The Linux kernel-driven Chrome OS, with its browser-based environment, has made staggering inroads for simplicity of use and effective productivity.

Chromebooks now can run Android apps. Soon the ability to run Linux programs will further feed open source development and usability, both for personal and enterprise adoption.

One of the most successful aspects of non-proprietary software trends is the wildfire growth of container technology in the cloud, driven by Linux and open source. Those advancements have pushed Microsoft into bringing Linux elements into the Windows OS and containers into its Azure cloud environment.

“Open source is headed toward faster and faster rates of change, where the automated tests and tooling wrapped around the delivery pipeline are almost as important as the resulting shipped artifacts,” said Abraham Ingersoll, vice president of sales and solutions engineering at Gravitational.

“The highest velocity projects will naturally win market share, and those with the best feedback loops are steadily gaining speed on the laggards,” he told LinuxInsider.

Advancement in Progress

To succeed with the challenges of open source business models, enterprises have to devise a viable way to monetize community development of reusable code. Those who succeed also have to master the formula for growing a free computing platform or its must-have applications into a profitable venture.

Based on an interesting GitLab report, 2018 is the year for open source and DevOps, remarked Kyle Bittner, business development manager at Exit Technologies.

That forecast may be true eventually, as long as open source can dispel the security fears, he told LinuxInsider.

“With open source code fundamental to machine learning and artificial intelligence frameworks, there is a challenge ahead to convince the more traditional IT shops in automotive and oil and gas, for example, that this is not a problem,” Bittner pointed out.

The future of the open source model may be vested in the ability to curb worsening security flaws in bloated coding. That is a big “if,” given how security risks have grown as Linux-based deployments evolved from isolated systems to large multitenancy environments.

LinuxInsider asked several open source innovators to share their views on where the open source model is headed, and to recommend the best practices developers should use to leverage different OS deployment models.

Oracle’s OS Oracle

Innovative work and developer advances changed the confidence level for Oracle engineers working with hardware where containers are involved, according to Wim Coekaerts, senior vice president of operating systems and virtualization engineering at Oracle. Security of a container is critical to its reliability.

“Security should be part of how you do your application rollout and not something you consider afterward. You really need to integrate security as part of your design up front,” he told LinuxInsider.

Several procedures in packaging containers require security considerations. That security assessment starts when you package something. In building a container, you must consider the source of those files that you are packaging, Coekaerts said.

Security continues with how your image is created. For instance, do you have code scanners? Do you have best practices around the ports you are opening? When you download from third-party websites, are those images signed so you can be sure of what you are getting?

“It is common today with Docker Hub to have access to a million different images. All of this is cool. But when you download something, all that you have is a black box,” said Coekaerts. “If that image that you run contains ‘phone home’ type stuff, you just do not know unless you dig into it.”

Yesterday Returns

Ensuring that containers are built securely is the inbound side of the technology equation. The outbound part involves running the application. The current model is to run containers in a cloud provider world inside a virtual machine to ensure that you are protected, noted Coekaerts.

“While that’s great, it is a major change in direction from when we started using containers. It was a vehicle for getting away from a VM,” he said. “Now the issue has shifted to concerns about not wanting the VM overhead. So what do we do today? We run everything inside a VM. That is an interesting turn of events.”

A related issue focuses on running containers natively because there is not enough isolation between processes. So now what?

The new response is to run containers in a VM to protect them. Security is not compromised, thanks to lots of patches in Linux and the hypervisor. That ensures all the issues with the cache and side channels are patched, Coekaerts said.

However, it leads to new concerns among Oracle’s developers about how they can ramp up performance and keep up that level of isolation, he added.

Are Containers the New Linux OS?

Some view today’s container technology as the first step in creating a subset of traditional Linux. Coekaerts gives that view some credence.

“Linux the kernel is Linux the kernel. What is an operating system today? If you look at a Linux distribution, that certainly is morphing a little bit,” he replied.

What is running an operating system today? Part of the model going forward, Coekaerts continued, is that instead of installing an OS and installing applications on top, you basically pull in a Docker-like structure.

“The nice thing with that model is you can run different versions on the same machine without having to worry about library conflicts and such,” he said.

Today’s container operations resemble the old mainframe model. On the mainframe, everything was a VM. Every application you started had its own VM.

“We are actually going backward in time, but at a much lighter weight model. It is a similar concept,” Coekaerts noted.

Container Tech Responds Rapidly

Container technology is evolving quickly.

“Security is a central focus. As issues surface, developers are dealing with them quickly,” Coekaerts said, and the security focus applies to other aspects of the Linux OS too.

“All the Linux developers have been working on these issues,” he noted. “There has been a great communication channel before the disclosure date to make sure that everyone has had time to patch their version or the kernel, and making sure that everyone shares code,” he said. “Is the process perfect? No. But everyone works together.”

Security Black Eye

Vulnerabilities in open source code have been the cause of many recent major security breaches, said Dean Weber, CTO of Mocana.

Open source components are present in 96 percent of commercial applications, based on a report Black Duck released last year.

The average application has 147 different open source components, and 67 percent of applications use components with known vulnerabilities, according to the report.

“Using vulnerable, open source code in embedded OT (operational technology), IoT (Internet of Things) and ICS (industrial control system) environments is a bad idea for many reasons,” Weber told LinuxInsider.

He cited several examples:

  • The code is not reliable within those devices.
  • Code vulnerabilities easily can be exploited. In OT environments, you don’t always know where the code is in use or if it is up to date.
  • Systems cannot always be patched in the middle of production cycles.

“As the use of insecure open source code continues to grow in OT, IoT and ICS environments, we may see substations going down on the same day, major cities losing power, and sewers backing up into water systems, contaminating our drinking water,” Weber warned.

Good and Bad Coexist

The brutal truth for companies using open source libraries and frameworks is that open source is awesome, generally high-quality, and absolutely the best method for accelerating digital transformation, maintained Jeff Williams, CTO of Contrast Security.

However, open source comes with a big “but,” he added.

“You are trusting your entire business to code written by people you don’t know for a purpose different than yours, and who may be hostile to you,” Williams told LinuxInsider.

Another downside to open source is that hackers have figured out that it is an easy attack vector. Dozens of new vulnerabilities in open source components are released every week, he noted.

Every business option comes with a bottom line. For open source, the user is responsible for the security of all the open source used.

“It is not a free lunch when you adopt it. You are also taking on the responsibility to think about security, keep it up to date, and establish other protections when necessary,” Williams said.

Best Practices

Developers need an efficient guideline to leverage different deployment models. Software complexity makes it almost impossible for organizations to deliver secure systems. So it is about covering the bases, according to Exit Technologies’ Bittner.

Fundamental practices, such as creating an inventory of open source components, can help devs match known vulnerabilities with installed software. That reduces the threat risk, he said.

“Of course, there is a lot of pressure on dev teams to build more software more quickly, and that has led to increased automation and the rise of DevOps,” Bittner acknowledged. “Businesses have to ensure they don’t cut corners on testing.”

Developers should follow the Unix philosophy of minimalist, modular deployment models, suggested Gravitational’s Ingersoll. The Unix approach involves progressive layering of small tools to form end-to-end continuous integration pipelines. That produces code running in a real target environment without manual intervention.

Another solution for developers is an approach that can standardize with a common build for their specific use that considers third-party dependencies, security and licenses, suggested Bart Copeland, CEO of ActiveState. Also, best practices for OS deployment models need to consider dependency management and environment configuration.

“This will reduce problems when integrating code from different departments, decrease friction, increase speed, and reduce attack surface area. It will eliminate painful retrofitting open source languages for dependency management, security, licenses and more,” he told LinuxInsider.

Where Is the Open Source Model Headed?

Open source has been becoming more and more enterprise led. That has been accompanied by an increased rise in distributed applications composed from container-based services, such as Kubernetes, according to Copeland.

Application security is at odds with the goals of development: speed, agility and leveraging open source. These two paths need to converge in order to facilitate development and enterprise innovation.

“Open source has won. It is the way everyone — including the U.S. government — now builds applications. Unfortunately, open source remains chronically underfunded,” said Copeland.

That will lead to open source becoming more and more enterprise-led. Enterprises will donate their employee time to creating and maintaining open source.

Open source will continue to dominate the cloud and most server estates, predicted Howard Green, vice president of marketing for Azul Systems. That influence starts with the Linux OS and extends through much of the data management, monitoring and development stack in enterprises of all sizes.

It is inevitable that open source will continue to grow, said Contrast Security’s Williams. It is inextricably bound with modern software.

“Every website, every API, every desktop application, every mobile app, and every other kind of software almost invariably includes a large amount of open source libraries and frameworks,” he observed. “It is simply unavoidable and would be fiscally imprudent to try to develop all that code yourself.”

Jack M. Germain has been an ECT News Network reporter since 2003. His main areas of focus are enterprise IT, Linux and open source technologies. He has written numerous reviews of Linux distros and other open source software.

Source

The Crypto-Criminal Bar Brawl | Enterprise Security

As if e-commerce companies didn’t have enough problems with transacting securely and defending against things like fraud, another avalanche of security problems — like cryptojacking, the act of illegally mining cryptocurrency on your end servers — has begun.

We’ve also seen a rise in digital credit card skimming attacks against popular e-commerce software such as Magento. Some of the attacks are relatively naive and un-targeted, taking advantage of lax security on websites found to be vulnerable, while others are highly targeted for maximum volume.

Indeed, it’s so ridiculous that there are websites such as MageReport.com and Mage Scan that will provide scans of your website for any client-facing malware.

As for server-side problems, you might be out of luck. A lot of e-commerce software lives in a typical LAMP stack, and while there is a plethora of security software for Windows-based environments, the situation is fairly bleak for Linux.

For a long time, Linux enjoyed a kind of smug arrogance with regard to security, and its advocates pooh-poohed the notoriously hackable Windows operating system. However, it’s becoming ultra clear that it’s just as susceptible, if not more so, for specific software such as e-commerce solutions.

Bridges Falling Down

Why have things seemingly gotten so much worse lately? It is not that security controls and processes have changed dramatically. It’s more that the attacks have become more lucrative, more tempting, and easier to get away with, thanks to the rise of cryptocurrency. It allows attackers to generate money quickly, easily and, more important, anonymously.

Folks — this is the loudspeaker — our digital roads and bridges are falling down. They are old and decrepit. Our security controls and processes have not kept pace with the rapid advancement of malware, its ease of use, and its coupling with a new range of software that allows attackers to hide their trails more effectively.

Things like cryptocurrency, however, are just the symptom of a greater issue. That issue is the fact that the underlying software foundations we’ve been using ever since the first browsers appeared are built on a fundamentally flawed architecture.

Feature and Flaw

The general purpose operating system that allowed every company to have a whole slew of easy-to-use desktop software in the 90s, and that built up amazingly large Internet companies in the early 2000s, has an Achilles heel. It is explicitly designed to run multiple programs on the same system — such as cryptominers on the server that runs your WooCommerce or Magento application.

It is an old concept that dates back to the late 1960s, when the first general purpose operating systems, such as Unix, were introduced. Back then, the computers had a business need to run multiple programs and applications on them. The systems back then were just too big and too expensive not to. They literally filled entire walls.

That’s not the case in 2018. Today our computers are “virtual,” and they can be taken down and brought up with the push of a button — usually by other programs. It’s a completely different world.

Now for end user computing devices such as personal laptops and phones, we want this design characteristic, as we have the need to use the browser, check our email, use the calendar and such. However, on the server side where our databases and websites live, it’s a flaw.

Virtual Ransacking

This seemingly innocuous design characteristic is what allows attackers to run their programs, such as cryptominers, on your servers. It is what allows attackers to insert card skimmers into your websites. It is what allows the attackers to run malware on your servers that try and shut down other pieces of malware in order to remain the dominant attacker.

Yes, you read that right — many of these variants now have so much free rein on so many thousands of websites that they literally fight against each other for your computing resources. This is how bad it’s gotten. It’s as if the cryptocriminals threw a party at your house while you were gone and then got into a big brawl and tore up all your furniture and ransacked your house. Then they woke up the next day and laughed all the way to the bank.

This isn’t the only way to deploy software, though. Consider famous software companies such as Uber, Airbnb, Twitter and Facebook. If you talk to their engineers, they’ll tell you that they already have to isolate a given program per server — in this case, a virtual machine. Why? It’s because they simply have too much software to begin with.

Instead of dealing with a single database, they might have to deal with hundreds or thousands. Likewise, the old concept of allowing multiple users on a given system doesn’t make a lot of sense anymore. It has evolved to the point where identity access management lives outside of the single server model.

Hack Attacks Are Not Inevitable

Unikernels embrace this new model of software provisioning yet enforce it at the same time. They run only one single application per virtual machine (the server). They cannot, by design, run other programs on the same server.

This completely prevents attackers from running their programs on your server. It prevents them from downloading new software onto the server and massively limits their ability to inject malicious content, such as credit card skimming scripts and cryptomining programs.

Instead of scanning for hacked systems or unpatched systems waiting to be attacked, you could even run outdated software that has known bugs in it, and these same styles of attacks would fall flat, as there would be no capability to execute them. This is all enforced at the operating system level and backed by hardware baked-in isolation.
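The author doesn’t name a specific toolchain here; as one illustrative sketch, MirageOS (one of several unikernel projects) compiles a single OCaml application together with only the OS layers it needs into one bootable image, with no shell and no ability to launch other programs (the my-app directory is a placeholder for a Mirage project):

$ opam install mirage
$ cd my-app
$ mirage configure -t unix
$ make depend
$ make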

Are we going to continue to let the cryptocriminals run free on our servers? How are you going to call the cops on people you can’t even see who might live halfway around the world? Don’t fall prey to the notion that hackers are natural disasters and it’s only inevitable that they’ll get you one day. It doesn’t need to be like that. We don’t have to deploy our software like we are using computers from the 1970s. It’s time that we rebuilt our digital infrastructure.

Ian Eyberg is CEO of NanoVMs, based in San Francisco. A self-taught expert in computer science, specifically operating systems and mainstream security, Eyberg is dedicated to initiating a revolution and mass-upgrading of global software infrastructure, which for the most part is based on 40-year-old tired technology. Prior to cracking the code of unikernels and developing a commercially viable solution, Eyberg was an early engineer at Appthority, an enterprise mobile security company.

Source
