Learn to Use curl Command with Examples | Linux.com

The curl command is used to transfer data to and from a server. It supports a large number of protocols, including HTTP, HTTPS, FTP, FTPS, IMAP, IMAPS, DICT, FILE, GOPHER, LDAP, LDAPS, POP3, POP3S, RTMP, RTSP, SCP, SFTP, SMB, SMBS, SMTP, SMTPS, TELNET and TFTP.

Curl also supports a lot of features, such as proxy support, user authentication, FTP upload, HTTP POST, SSL connections, cookies, and pausing & resuming file transfers. There are around 120 different options that can be used with curl, and in this tutorial we are going to discuss some important curl commands with examples.

Curl command with examples

Download or visit a single URL

To download a file or fetch a page using curl over HTTP, FTP or any other supported protocol, use the following command,

$ curl https://linuxtechlab.com

If curl can’t identify the protocol being used, it will default to HTTP. We can also store the output of the command in a file with the ‘-o’ option, or redirect it using ‘>’,

$ curl https://linuxtechlab.com -o test.html

or

$ curl https://linuxtechlab.com > test.html
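As a quick sanity check, the two forms above produce identical files. The sketch below demonstrates this without touching the network, using curl’s file:// protocol support (this assumes curl is installed and built with the FILE protocol, which is the default):

```shell
# Work in a scratch directory and create a local "page" to fetch
cd "$(mktemp -d)"
printf '<html>hello</html>\n' > page.html

# Fetch the same local URL twice: once with -o, once with shell redirection
curl -s -o copy1.html "file://$PWD/page.html"
curl -s "file://$PWD/page.html" > copy2.html

# Both copies should be byte-for-byte identical to the original
cmp -s copy1.html copy2.html && echo identical
```

The same equivalence holds for http:// and https:// URLs; file:// is used here only so the example runs offline.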

 

Download multiple files

To download two or more files with curl in a single command, we will use the ‘-O’ option once for each URL. The complete command is,

$ curl -O https://linuxtechlab.com/test1.tar.gz -O https://linuxtechlab.com/test2.tar.gz

 

Using FTP with curl

To browse an FTP server, use the following command,

$ curl ftp://test.linuxtechlab.com --user username:password

To download a file from the FTP server, use the following command,

$ curl ftp://test.linuxtechlab.com/test.tar.gz --user username:password -o test.tar.gz

To upload a file to the FTP server using the curl command, use the following,

$ curl -T test.zip ftp://test.linuxtechlab.com/test_directory/ --user username:password

 

Resume a paused download

We can also pause and resume a download with the curl command. To do this, we will first start the download,

$ curl -O https://linuxtechlab.com/test1.tar.gz

then pause the download using ‘Ctrl+C’. To resume the download, use the following command,

$ curl -C - -O https://linuxtechlab.com/test1.tar.gz

Here, the ‘-C -’ option tells curl to work out on its own where to resume the transfer.

 

Sending an email

Though you might not be using it any time soon, we can nonetheless use the curl command to send an email. The complete command for sending an email is,

$ curl --url "smtps://smtp.linuxtechlab.com:465" --ssl-reqd --mail-from "dan@linuxtechlab.com" --mail-rcpt "susan@readlinux.com" --upload-file mailcontent.txt --user "dan@linuxtechlab.com:password" --insecure

 

Limit download rate

To limit the rate at which a file is downloaded, whether to avoid choking the network or for some other reason, use the curl command with the ‘--limit-rate’ option,

$ curl --limit-rate 200k -O https://linuxtechlab.com/test.tar.gz

 

Show response headers

To see only the response headers of a URL & not the complete content, we can use the ‘-I’ option with the curl command,

$ curl -I https://linuxtechlab.com/

This will show only the headers, such as the HTTP protocol version, Cache-Control directives, and Content-Type, of the mentioned URL.

 

Using http authentication

We can also use curl to open a web URL that has HTTP authentication enabled, using the ‘-u’ option. The complete command is,

$ curl -u user:passwd https://linuxtechlab.com

Using a proxy

To use a proxy server when visiting a URL or downloading a file, use the ‘-x’ option with curl,

$ curl -x squid.proxy.com:3128 https://linuxtechlab.com

 

Verifying an SSL certificate

To verify the SSL certificate of a URL against a locally stored CA certificate, use the following command,

$ curl --cacert ltchlb.crt https://linuxtechlab.com

 

Ignoring SSL certificate

To ignore SSL certificate errors for a URL, we can use the ‘-k’ option with the curl command,

$ curl -k https://linuxtechlab.com

With this, we end our tutorial on learning the curl command with examples. Please leave your valuable feedback & questions in the comment box below.

Source

Linux Today – 62 Benchmarks, 12 Systems, 4 Compilers: Our Most Extensive Benchmarks Yet Of GCC vs. Clang Performance

After nearly two weeks of benchmarking, here is a look at our most extensive Linux x86_64 compiler comparison yet between the latest stable and development releases of the GCC and LLVM Clang C/C++ compilers. Tested were GCC 8, GCC 9.0.1 development, LLVM Clang 7.0.1, and LLVM Clang 8.0 SVN on 12 distinct 64-bit systems, with a total of 62 benchmarks run on each system with each of the four compilers… Here’s a look at this massive data set for seeing the current GCC vs. Clang performance.

With the GCC 9 and Clang 8 releases coming up soon, I’ve spent the past two weeks running this plethora of compiler benchmarks on a range of new and old, low and high-end systems within the labs. The 12 chosen systems aren’t meant for trying to compare the performance between processors but rather a diverse look at how Clang and GCC perform on varying Intel/AMD microarchitectures. For those curious about AArch64 and POWER9 compiler performance, that will come in a separate article with this testing just looking at the Linux x86_64 compiler performance.

The 12 systems tested featured the following processors:

– AMD FX-8370E (Bulldozer)
– AMD A10-7870K (Godavari)
– AMD Ryzen 7 2700X (Zen)
– AMD Ryzen Threadripper 2950X (Zen)
– AMD Ryzen Threadripper 2990WX (Zen)
– AMD EPYC 7601 (Zen)
– Intel Core i5 2500K (Sandy Bridge)
– Intel Core i7 4960X (Ivy Bridge)
– Intel Core i9 7980XE (Skylake X)
– Intel Core i7 8700K (Coffeelake)
– Intel Xeon E5-2687Wv3 (Haswell)
– Intel Xeon Silver 4108 (SP Skylake)

The selection was chosen based upon systems in the server room that weren’t pre-occupied with other tests, of interest for a diverse look across several generations of Intel/AMD processors, and obviously based upon the hardware I have available. The storage and RAM varied between the systems, but again the focus isn’t on comparing these CPUs but rather on seeing how GCC 8, GCC 9, Clang 7, and Clang 8 compare. Ubuntu 18.10 was running on these systems with the Linux 4.18 kernel. All of the compiler releases were built in their release/optimized (non-debug) configurations. During the benchmarking process on all of the systems, the CFLAGS/CXXFLAGS were maintained at “-O3 -march=native” throughout.

These compiler benchmarks are mostly focused on the raw performance of the resulting binaries but also included a few tests looking at the compile time performance too. For those short on time and wanting a comparison at the macro level, here is an immediate look at the four-way compiler performance across the dozen systems and looking at the geometric mean of all 62 compiler benchmarks carried out in each configuration:

On the AMD side, the Clang vs. GCC performance has reached the stage that in many instances they now deliver similar performance… But in select instances, GCC still was faster: GCC was about 2% faster on the FX-8370E system and just a hair faster on the Threadripper 2990WX but with Clang 8.0 and GCC 9.0 coming just shy of their stable predecessors. These new compiler releases didn’t offer any breakthrough performance changes overall for the AMD Bulldozer to Zen processors benchmarked.

On the Intel side, the Core i5 2500K interestingly had slightly better performance on Clang over GCC. With Haswell and Ivy Bridge era systems, the GCC vs. Clang performance was the same. The newer Intel CPUs like the Xeon Silver 4108, Core i7 8700K, and Core i9 7980XE sided with the GCC 8/9 compilers over Clang for a few percent better performance.

Now onward to the interesting individual data points… But before getting to that, if you appreciate all of the Linux benchmarking done day in and day out at Phoronix, consider joining Phoronix Premium to make this testing possible. Phoronix relies primarily on (pay per impression) advertisements to continue publishing content as well as premium subscriptions for those who prefer not seeing ads. Premium gets you ad-free access to the site as well as multi-page articles (like this!) all on a single page, among other benefits. Thanks for your support and at the very least to not be utilizing any ad-blocker on this web-site. Now here is the rest of these 2019 compiler benchmark results.

With the PolyBench-C polyhedral benchmark, what was interesting to note is that for the most part the Clang and GCC performance across this diverse range of systems was almost identical… But the interesting bit is the Intel Xeon Silver 4108 and Core i9 7980XE CPUs both performing noticeably better with GCC over Clang. Potentially explaining this is that those two CPUs have AVX-512, which is perhaps better utilized currently on the GCC side.

Of interest with the FFTW benchmark was seeing GCC 8.2 doing much better on the 2700X / 2990WX / EPYC 7601 Zen systems but the performance dropping back with GCC 9.0. On the Intel side, both AVX-512 Core i9 / Xeon Scalable systems saw nice performance improvements over Clang with GCC 8.2, and now more so with the upcoming GCC 9.1.

The HMMer molecular biology benchmark was interesting in that with a number of systems the Clang performance was better than GCC, but for the older AMD systems and select Intel systems, GCC was still faster. So this case was a mixed bag between the compilers.

MAFFT is bringing better performance on the range of systems tested with GCC 9 compared to the current GCC 8 release, but that largely makes its performance in line with Clang.

The BLAKE2 crypto benchmark was one of the cases where Clang was easily beating out GCC on nearly all of the configurations.

The SciMark2 benchmarks always tend to be quite susceptible to compiler changes and in some cases like Jacobi, GCC is performing much faster than Clang.

Clang was generating faster code over GCC on the twelve systems with the TSCP chess benchmark.

On the AMD Zen systems, the Clang-generated binary for VP9 vpxenc video encoding was slightly faster, while the Intel performance was close between these compilers. The exception on the Intel side was the Intel Core i9 7980XE, which saw measurably better performance using GCC.

With the H.264/H.265 video encode tests among other video coding benchmarks there isn’t too much change with most of the programs/libraries relying upon hand-tuned Assembly code already. But in the case of the x265 benchmark, the AVX512-enabled Xeon Silver and Core i9 Skylake-X processors were yielding better performance on GCC.

The OpenMP performance in LLVM Clang has come a long way in recent years and for many situations yields performance comparable to the GCC OpenMP implementation. In the case of GraphicsMagick that makes use of OpenMP, it depended upon the operations being carried out whether GCC still carried a lofty lead or was a neck-and-neck race.

With the Himeno pressure solver, on the AMD side GCC performed noticeably better than Clang with the old Bulldozer era FX-8370E. On the Intel side, GCC tended to outperform Clang particularly with the newer generations of processors.

As for compiler performance in building out some sample projects, compiling Apache was quite close between GCC and Clang but sided in favor of the LLVM-based compiler. When it came to building the ImageMagick program, using Clang led to much quicker build times than GCC. GCC 9 is building slower than GCC 8, which isn’t too much of a surprise considering that newer compilers tend to tack on additional optimization passes and other work in trying to yield faster binaries at the cost of slower build times.

When it came to the time needed to build LLVM, the Clang compiler was still faster, though on the newer Intel CPUs it was quite a tight race.

There were also cases where GCC did compile faster than Clang: building out PHP was quicker on GCC than Clang across all of the systems tested.

The C-Ray multi-threaded ray-tracer remains much faster with GCC over Clang on all of the systems tested.

The AOBench ambient occlusion renderer was also faster with GCC.

The dav1d AV1 video decoder was quicker with Clang on the older AMD systems as well as the older Intel (Sandy Bridge / Ivy Bridge) systems while on the newer CPUs the performance between the compilers yielded similar video decode speed.

LAME MP3 audio encoding was faster when built under GCC.

In very common workloads like OpenSSL, its performance has already been studied and well-tuned by all of the compilers for the past number of years.

Redis was faster on the newer (AVX-512) CPUs with GCC, whereas on the other systems the performance was similar.

Interestingly in the case of Sysbench, the AMD performance was faster when built by the GCC compiler while the Intel systems performed much better with the Clang compiler.

Broadly, it’s a very competitive race these days between GCC and Clang on Linux x86_64. As shown by the geometric means for all these tests, the race is neck-and-neck with GCC in some cases just having a ~2% advantage. Depending upon the particular code-base, in some cases the differences were more pronounced. One area where GCC seemed to do better on average than Clang was with the newer Core i9 7980XE and Xeon Silver systems that have AVX-512 and there the GNU Compiler Collection most often outperformed Clang. In the tests looking at the compile times, Clang still had some cases of beating out GCC but with some of the build tests the performance was close and in the case of compiling PHP it was actually faster to build on GCC.

Those wishing to dig through the 62 benchmarks across the dozen systems and the four compilers can find all of the raw performance data via this OpenBenchmarking.org result file. And if you appreciate all of our benchmarking, consider going premium.

Source

Amazon Elasticsearch Service now supports three Availability Zone deployments

Posted On: Feb 7, 2019

Amazon Elasticsearch Service now enables you to deploy your instances across three Availability Zones (AZs) providing better availability for your domains. If you enable replicas for your Elasticsearch indices, Amazon Elasticsearch Service distributes the primary and replica shards across nodes in different AZs to maximize availability. If you have configured dedicated master nodes while using multi-AZ deployment, they are automatically placed into three AZs to ensure that your cluster can elect a new master even in the rare event of an AZ disruption.

If you have already launched domains with “Zone Awareness” turned on (two AZ configuration), you can now easily reconfigure them to leverage three AZs. You can enable three AZ deployments for both existing and new domains at no extra cost using the AWS console, CLI or SDKs. Three AZs are supported in the following regions: US East (N. Virginia, Ohio), US West (Oregon), EU (Ireland, Frankfurt, London, Paris), Asia Pacific (Sydney, Tokyo, Seoul), and AWS China (Ningxia) region, operated by NWCD.

To learn more, read our documentation.

Source

Pine64 previews open source phone, laptop, tablet, camera, and SBCs

Pine64’s 2019 line-up of Linux-driven, open-spec products will include Rock64, Pine H64, and Pinebook upgrades plus a PinePhone, PineTab, CUBE camera, and Retro-Gaming case.

At FOSDEM last weekend, California-based Linux hacker board vendor Pine64 previewed an extensive lineup of open source hardware it intends to release in 2019. Surprisingly, only two of the products are single board computers.

The Linux-driven products will include a PinePhone Development Kit based on the Allwinner A64. There will be a second, more consumer-focused Pinebook laptop — a Rockchip RK3399 based, 14-inch Pinebook Pro — and an Allwinner A64-based, 10.1-inch PineTab tablet. Pine64 also plans to release an Allwinner S3L-driven IP camera system called the CUBE and a Roshambo Retro-Gaming case that supports Pine64’s Rock64 and RockPro64, as well as the Raspberry Pi.

PinePhone Development Kit (left) and PineTab with optional keyboard

 

The SBC entries are a Pine H64 Model B that will replace the larger, but similarly Allwinner H6 based, Model A version and will add WiFi/Bluetooth. There will also be a third rev of the popular, RK3328-based Rock64 board that adds Power-over-Ethernet support. (See farther below for more details.)

Pine H64 Model B (left) and earlier Pine H64 Model A

The launch of the phone, laptop, tablet, and camera represents the most ambitious expansion to date by an SBC vendor to new open source hardware form factors. As we noted last month in our hacker board analysis piece, community-based SBC projects are increasingly specializing to survive in today’s Raspberry Pi dominated market. In a Feb. 1 Tom’s Hardware story, RPi Trading CEO Eben Upton confirmed our speculation that a next generation Raspberry Pi 4 that moves beyond 40nm fabrication will not likely ship until 2020. That offers a window of opportunity for other SBC vendors to innovate.

It’s a relatively short technical leap to move from a specialized SBC to a finished consumer electronics or industrial device, but it’s a larger marketing hurdle, especially with consumer electronics. Still, we can expect a few more vendors to follow Pine64’s lead in building on their SBCs, Linux board support packages, and hacker communities to launch more purpose-built consumer electronics and industrial gear.

Already, community projects have begun offering a more diverse set of enclosures and other accessories to turn their boards into mini-PCs, IoT gateways, routers, and signage systems. Meanwhile, established embedded board vendors are using their community-backed SBC platforms as a foundation for end-user products. Acer subsidiary Aaeon, for example, has spun off its UP boards into a variety of signage systems, automation controllers, and AI edge computing systems.

So far, most open source, Linux phone and tablet alternatives have emerged from open source software projects, such as Mozilla’s Firefox OS, the Ubuntu project’s Ubuntu Phone, and the Jolla phone. Most of these alternative mobile Linux projects have either failed, faded, or never taken off.


PiTalk

Some of the more recent Linux phone projects, such as the PiTalk and ZeroPhone, have been built around the Raspberry Pi platform. The PinePhone and PineTab would be even more open source, given that the mainboards ship with full schematics.

Unlike many hacker board projects, the Pine64 products offer software tied to mainline Linux. This is easier to do with the Rockchip designs, but it’s been a slower road to mainline for Allwinner. Work by Armbian and others has now brought several Allwinner SoCs up to speed.

Working from established hardware and software platforms may offer a stronger foundation for launching mobile Android alternatives than a software-only project. “The idea, in principle, is to build convergent device-ecosystems (SBC, Module, Laptop/Tablet/ Phone / Other Devices) based on SOCs that we’ve already have developers engaged with and invested in,” says the Pine64 blog announcement.

PinePhone (left) and demo running Unity 8 and KDE

 

Here’s a closer look at Pine64’s open hardware products for 2019:

  • PinePhone Development Kit — Based on the quad-core -A53 Allwinner A64 driven SoPine A64 module, the PinePhone will run mainline Linux and support alternative mobile platforms such as UBports, Maemo Leste, PostmarketOS, and Plasma Mobile. It can also run Unity 8 and KDE Plasma with Lima. This upgradable, modular phone kit will be available soon in limited quantity and will be spun off later this year or in 2020 into an end-user phone with a target price of $149. The PinePhone kit includes 2GB LPDDR3, 32GB eMMC, and a small 1440 x 720-pixel LCD screen. There’s a 4G LTE module with a Cat 4 150Mbps downlink, a battery, and 2- and 5-megapixel cameras. Other features include WiFi/BT, microSD, HDMI, MIPI I/O, sensors, and privacy hardware switches.

    Pinebook Pro (left) and earlier 14-inch Pinebook

     

  • Pinebook Pro — Like many of the upcoming Pine64 products, the original Pinebooks are limited-edition developer systems. The Pinebook Pro, however, is aimed at a broader audience that might be considering a Chromebook. This second-gen Pro laptop will not replace the $99-and-up 11.6-inch version of the Pinebook. The original 14-inch version may receive an upgrade to make it more like the Pro. The $199 Pinebook Pro advances from the quad-core, Cortex-A53 Allwinner A64 to a hexa-core -A53 and -A72 Rockchip RK3399. It supports mainline Linux and BSD.

    SoPine A64

    The more advanced features include a higher-res 14-inch 1080p screen, now with IPS, as well as twice the RAM (4GB LPDDR4). It also offers four times the storage at 64GB, with a 128GB option for registered developers. Other highlights include USB 3.0 and 2.0 ports, a USB Type-C port with DP-like video output, a 10,000 mAh battery, and an improved 2-megapixel camera. There’s also an option for an M.2 slot that supports NVMe storage.

  • PineTab — The PineTab is like a slightly smaller, touchscreen-enabled version of the first-gen Pinebook, but with the keyboard optional instead of built-in. The magnetically attached keyboard has a trackpad and can fold up to act as a screen cover.
    Like the original Pinebooks, the PineTab runs Linux or BSD on an Allwinner A64 with 2GB of LPDDR3 and 16GB eMMC. The 10-inch IPS touchscreen is limited to 720p resolution. Other features include WiFi/BT, USB, micro-USB, microSD, speaker, mic, and dual cameras.
    Pine64 notes that touchscreen-ready Linux apps are currently in short supply. The PineTab will soon be available for $79, or $99 with the keyboard.
  • The CUBE — This “early concept” IP camera runs on the Allwinner S3L — a single-core, Cortex-A7 camera SoC. It ships with a MIPI-CSI connected, 8-megapixel Sony IMX179 CMOS camera with an M12 mount for adding different lenses. The CUBE offers 64MB or 128MB RAM, a WiFi/BT module, plus a 10/100 Ethernet port with Power-over-Ethernet (PoE) support. Other features include USB, microSD, and 32-pin GPIO. Target price: about $20.

    The CUBE camera (left) and Roshambo Retro-Gaming case and controller

     

  • Roshambo Retro-Gaming — This retro gaming case and accessory set from Pine64’s Chinese partner Roshambo will work with Pine64’s Rock64 SBC, which is based on the quad-core -A53 Rockchip RK3328, or its RK3399 based RockPro64. It can also accommodate a Raspberry Pi. The $30 Super Famicom inspired case will ship with an optional $13 gaming controller set. Other features include buttons, switches, a SATA slot, and cartridge-shaped 128GB ($25) or 256GB ($40) SSDs.

    Rock64 Rev 1
  • Rock64 Rev 3 — Pine64 says it will continue to focus primarily on SBCs, although the only 2019 products it is disclosing are updates to existing designs. The Rock64 Rev 3 improves upon Pine64’s RK3328-based RPi lookalike, which it says has been its most successful board yet. New features include PoE, RTC, improved RPi 2 GPIO compatibility, and support for high-speed microSD cards. Pricing stays the same.
  • Pine H64 Model B — The Pine H64 Model B will replace the currently unavailable Pine H64 Model A, which shipped in limited quantities. The board trims down to a Rock64 (and Raspberry Pi) footprint, enabling use of existing cases, and adds WiFi/BT. It sells for $25 (1GB LPDDR3 RAM), $35 (2GB), and $45 (3GB).

This article is copyright © 2019 Linux.com and was originally published here. It has been reproduced by this site with the permission of its owner. Please visit Linux.com for up-to-date news and articles about Linux and open source.

Source

SUSE releases enterprise Linux for all major ARM processors

SUSE Linux Enterprise Server for ARM has been around a while, but this is the first release sold directly to customers.

SUSE has released its enterprise Linux distribution, SUSE Linux Enterprise Server (SLES), for all major ARM server processors. It also announced the general availability of SUSE Manager Lifecycle.

SUSE is on par with the other major enterprise Linux distributions — Red Hat and Ubuntu — in the x86 space, but it has lagged in its ARM support. It’s not like SLES for ARM is only now coming to market for the first time, either. It has been available for several years, but on a limited basis.

“Previously, SUSE subscriptions for the ARM hardware platforms were only available to SUSE Partners due to the relative immaturity of the ARM server platform,” Jay Kruemcke, a senior product manager at SUSE, wrote in a blog post announcing the availability.

“Now that we have delivered four releases of SUSE Linux Enterprise Server for ARM and have customers running SUSE Linux on ARM servers as diverse as the tiny Raspberry Pi and the high-performance HPE Apollo 70 servers, we are now ready to sell subscriptions directly to customers,” he added.

SLES is available for a variety of ARM server processors, including chips from Cavium, Broadcom, Marvell, NXP, Ampere, and… Qualcomm Centriq. Well, who can blame them for taking Qualcomm seriously?

Because it covers such a wide range of processors, the company has come up with a rather complex approach to pricing — and Kruemcke spent a lot of time on the subject. So much so that he didn’t get into technical details. Kruemcke said the company is using a model that has “core-based pricing for lower-end ARM hardware and socket-based pricing for everything else.”

Servers with fewer than 16 cores are priced based on the number of groups of four processor cores. Each group of four cores, up to 15 cores, requires a four-core group subscription, stackable to a maximum of four subscriptions. The number of cores is rounded up to the nearest group of four; therefore, a server with 10 cores would require three four-core group subscriptions.
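The rounding rule above can be sketched as a tiny shell helper. This is purely illustrative — the function name is made up and actual entitlement counting is done by SUSE:

```shell
# Number of four-core group subscriptions needed for a server with
# fewer than 16 cores; 16 cores or more fall back to socket pricing.
subs_needed() {
  cores=$1
  if [ "$cores" -ge 16 ]; then
    echo "socket-based"
  else
    echo $(( (cores + 3) / 4 ))   # round up to the nearest group of four
  fi
}

subs_needed 10   # 10 cores round up to 12, i.e. three four-core groups
subs_needed 15   # the maximum of four stackable subscriptions
subs_needed 16   # crosses over to 1-2 socket pricing
```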

Servers with 16 or more cores use the traditional 1-2 socket-based pricing.

Subscriptions for SUSE Linux Enterprise Server for ARM and SUSE Manager Lifecycle for ARM are now available directly to customers through the corporate price list or through the SUSE Shop.

Source

Download Lights Off Linux 3.30.0

Lights Off is an open source piece of software that provides users with a really beautiful and fun puzzle game, specifically designed for the GNOME desktop environment. It is distributed as part of the GNOME Games initiative.

It’s a very popular and fun puzzle/board game where the main objective is to turn off all of the tiles on the board. With each click, the player toggles the state of the clicked tile, as well as its non-diagonal neighbors.

Features hundreds of levels

The game is played on a 5×5 grid and features hundreds of levels. To start a new game, players will have to go to Game -> New Game or access the “New Game” entry from the GNOME panel if running the program under the controversial desktop environment.

It can be played using either the keyboard or the mouse, simply by selecting or clicking on a single tile. On the first level, if you click the right tile, it will complete the level and automatically display the next one.

Players can navigate between levels without restrictions

The game has been designed in such a way that it allows users to navigate between levels without restrictions, by using the back and next button provided under the main board. The big blue digital display will show the current level.

Advanced levels are very difficult and will require a lot of time to complete. It’s a memory game, so you’ll have to remember which tiles you clicked and which non-diagonal neighbors they toggled, with the ultimate goal of turning them all off.

Designed for GNOME

It is possible to click on both blank and occupied tiles, but you will have to read its manual for detailed strategies and examples. The game integrates well with the GNOME desktop environment, but it can also be used on other open source graphical interfaces.

Source

The Linux Command-Line Cheat Sheet

This select set of Linux commands can help you master the command line and speed up your use of the operating system.

The Linux command-line cheat sheet

When coming up to speed as a Linux user, it helps to have a cheat sheet that can help introduce you to some of the more useful commands.

In the tables below, you’ll find sets of commands with simple explanations and usage examples that might help you or Linux users you support become more productive on the command line.

Getting familiar with your account

These commands will help new Linux users become familiar with their Linux accounts.

Command Function Example
pwd Displays your current location in the file system pwd
whoami Displays your username – most useful if you switch users with su and need to be reminded what account you’re using currently whoami
ls Provides a file listing. With -a, it also displays files with names starting with a period (e.g., .bashrc). With -l, it also displays file permissions, sizes and last updated date/time. ls
ls -a
ls -l
env Displays your user environment settings (e.g., search path, history size, home directory, etc.) env
echo Repeats the text you provide or displays the value of some variable echo hello
echo $PATH
history Lists previously issued commands history
history | tail -5
passwd Changes your password. Note that complexity requirements may be enforced. passwd
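As a small example of combining these, the colon-separated value of $PATH printed by echo is easier to read if each directory appears on its own line:

```shell
# Translate the ':' separators in the search path into newlines,
# printing one directory per line
echo "$PATH" | tr ':' '\n'
```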

Examining files

Linux provides several commands for looking at the content and nature of files. These are some of the most useful commands.


Command Function Example
cat Displays the entire contents of a text file. cat .bashrc
more Displays the contents of a text file one screenful at a time. Hit the spacebar to move to each additional chunk. more .bash_history
less Displays the contents of a text file one screenful at a time, but in a manner that allows you to back up using the up arrow key. less .bash_history
file Identifies files by type (e.g., ASCII text, executable, image, directory) file myfile
file ~/.bashrc
file /bin/echo

Managing files

These are some Linux commands for changing file attributes as well as renaming, moving and removing files.

  • chmod – Changes file permissions (who can read a file, whether it can be executed, etc.). Examples: chmod a+x myscript, chmod 755 myscript
  • chown – Changes the file owner. Example: sudo chown jdoe myfile
  • cp – Makes a copy of a file. Example: cp origfile copyfile
  • mv – Moves or renames a file, or does both. Examples: mv oldname newname, mv file /new/location, mv file /newloc/newname
  • rm – Deletes a file or group of files. Examples: rm file, rm *.jpg, rm -r directory
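A quick, self-contained sketch of a couple of these commands in a throwaway directory (stat -c is the GNU coreutils form; BSD stat uses different flags):

```shell
tmp=$(mktemp -d)               # throwaway working directory
touch "$tmp/myscript"
chmod 755 "$tmp/myscript"      # rwxr-xr-x
stat -c '%a' "$tmp/myscript"   # prints: 755 (GNU stat)
cp "$tmp/myscript" "$tmp/copy" # make a copy
rm -r "$tmp"                   # clean up the whole directory
```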

Creating and editing files

Linux systems provide commands for creating files and directories. Users can choose the text editor they are comfortable using. Some require quite a bit of familiarity before they’ll be easy to use while others are fairly self-explanatory.

  • nano – An easy-to-use text editor in which you move around the file using your arrow keys; provides control sequences to locate text, save your changes, etc. Example: nano myfile
  • vi – A more sophisticated editor that allows you to enter commands to find and change text, make global changes, etc. Example: vi myfile
  • ex – A text editor designed for programmers, with both a line-oriented and a visual mode. Example: ex myfile
  • touch – Creates a file if it doesn’t exist or updates its timestamp if it does. Examples: touch newfile, touch updatedfile
  • > – Creates files by directing output to them. A single > creates (or truncates) a file, while >> appends to an existing file. Examples: cal > calendar, ps > myprocs, date >> date.log
  • mkdir – Creates a directory. Examples: mkdir mydir, mkdir ~/mydir, mkdir /tmp/backup
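The difference between > and >> can be seen with a small experiment in a throwaway directory:

```shell
tmp=$(mktemp -d)
echo "first"  > "$tmp/log"   # > creates the file
echo "second" >> "$tmp/log"  # >> appends to it
wc -l < "$tmp/log"           # prints: 2
echo "third"  > "$tmp/log"   # > truncates: the file is back to one line
wc -l < "$tmp/log"           # prints: 1
rm -r "$tmp"
```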

Moving around the file system

The basic command for moving around the Linux file system is cd, but there are many variations.

  • cd – With no arguments, takes you to your home directory. The same thing would happen if you typed cd $HOME or cd ~. Example: cd
  • cd .. – Moves up (toward /) one directory from your current location. Example: cd ..
  • cd <location> – Takes you to the specified location. If the location begins with a /, it is taken relative to the root directory; otherwise it is taken relative to your current location. The ~ character represents your home directory. Examples: cd /tmp, cd Documents, cd ~/Documents
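A short sketch of absolute versus relative paths, using an illustrative throwaway directory:

```shell
tmp=$(mktemp -d)     # throwaway directory standing in for your home
mkdir -p "$tmp/Documents"
cd "$tmp"            # absolute path: starts with /
cd Documents         # relative path: resolved against the current directory
pwd                  # prints a path ending in /Documents
cd ..                # back up one level, to $tmp itself
```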

Learning about and identifying commands

There are a number of Linux commands that can help you learn about other commands, the options they offer and where these commands are located in the file system. Linux systems also provide a command that can help you learn what commands are available related to some subject – for example, commands that deal with user accounts.

  • man – Displays the manual (help) page for a specified command and (with -k) provides a list of commands related to a specified keyword. Examples: man cd, man -k account
  • which – Displays the location of the executable that represents the particular command. Example: which cd
  • apropos – Lists commands associated with a particular topic or keyword. Examples: apropos user, apropos account

Finding files

There are two commands that can help you find files on Linux, but they work very differently. One searches the file system while the other looks through a previously built database.

  • find – Locates files based on criteria provided (file name, type, owner, permissions, size, etc.). Unless given a location from which to start the search, find looks only in the current directory. Examples: find . -name myfile, find /tmp -type d
  • locate – Locates files using the contents of /var/lib/mlocate/mlocate.db, which is updated by the updatedb command, usually run through cron. No starting location is required. Examples: locate somefile, locate "*.html" -n 20
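The find examples above can be tried safely in a small throwaway tree (file and directory names here are illustrative):

```shell
tmp=$(mktemp -d)
mkdir -p "$tmp/sub"
touch "$tmp/sub/report.html" "$tmp/notes.txt"
find "$tmp" -name '*.html'   # matches only sub/report.html
find "$tmp" -type d          # lists the two directories, $tmp and $tmp/sub
rm -r "$tmp"
```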

Viewing running processes

You can easily view processes that are running on the system – yours, another user’s or all of them.

  • ps – Shows processes that you are running in your current login session. Example: ps
  • ps -ef – Shows all processes that are currently running on the system. Examples: ps -ef, ps -ef | more
  • pstree – Shows running processes in a hierarchical (tree-like) display that demonstrates the relationships between processes (-h highlights the current process). Examples: pstree, pstree username, pstree -h

Starting, stopping and listing services

These commands allow you to display services as well as start and stop them.

  • systemctl – Starts, stops, restarts and reloads services. Privileged access is required. Examples: sudo systemctl stop apache2.service, sudo systemctl restart apache2.service, sudo systemctl reload apache2.service
  • service – Lists services and indicates whether they are running. Example: service --status-all

Killing processes

Linux offers a few commands for terminating processes. Privileged access is needed if you did not start the process in question.

  • kill – Terminates a running process, provided you have the authority to do so. Examples: kill 8765, sudo kill 1234, kill -9 3456
  • killall – Terminates all processes with the provided name. Example: killall badproc
  • pkill – Terminates processes whose names match a pattern. Example: pkill myproc
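A safe way to see kill in action is to start a disposable background process of your own and terminate it:

```shell
sleep 60 &                        # start a long-running background process
pid=$!                            # $! holds the PID of the last background job
kill "$pid"                       # send SIGTERM (signal 15, the default)
wait "$pid" 2>/dev/null || true   # reap it; wait is non-zero for a killed job
kill -0 "$pid" 2>/dev/null || echo "process is gone"   # -0 only tests existence
```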

Identifying your OS release

The table below lists commands that will display details about the Linux OS that is running on a system.

  • uname – Displays information on the OS release in a single line of text. Examples: uname -a, uname -r
  • lsb_release – On Debian-based systems, displays information on the OS release, including its codename and distributor ID. Example: lsb_release -a
  • hostnamectl – Displays information on the system, including hostname, chassis type, OS, kernel and architecture. Example: hostnamectl

Gauging system performance

These are some of the more useful tools for examining system performance.

  • top – Shows running processes along with resource utilization and system performance data. Can show processes for one selected user or all users, ordered by various criteria (CPU usage by default). Examples: top, top -u jdoe
  • atop – Similar to top but more oriented toward system performance than individual processes. Example: atop
  • free – Shows memory and swap usage – total, used and free. Example: free
  • df – Displays file system disk space usage. Examples: df, df -h

Managing users and groups

Commands for creating and removing user accounts and groups are fairly straightforward.

  • useradd – Adds a new user account to the system. A username is mandatory; other fields (user description, shell, initial password, etc.) can be specified. The home directory defaults to /home/username. Examples: useradd -c "John Doe" jdoe, useradd -c "Jane Doe" -g admin -s /bin/bash jbdoe
  • userdel – Removes a user account from the system. The -f option runs a more forceful removal, deleting the home directory and other user files even if the user is still logged in. Examples: userdel jbdoe, userdel -f jbdoe
  • groupadd – Adds a new user group to the system, updating /etc/group. Example: groupadd developers
  • groupdel – Removes a user group from the system. Example: groupdel developers

Examining network connections

The commands below help you view network interfaces and connections.

  • ip – Displays information on network interfaces. Example: ip a
  • ss – Displays information on sockets. The -s option provides summary stats, -l shows listening sockets, and -4 or -6 restricts output to IPv4 or IPv6 connections. Examples: ss -s, ss -l, ss -4 state listening
  • ping – Checks connectivity to another system. Examples: ping remhost, ping 192.168.0.11

Managing security

There are many aspects to managing security on a Linux system, but there are also a lot of commands that can help. The commands below will get you started; for these and other commands, see the article 22 essential Linux security commands.

  • visudo – Allows you to configure privileges so that selected individuals can run certain commands with superuser authority. The command does this by making changes to the /etc/sudoers file. Example: visudo
  • sudo – Used by privileged users (as defined in the /etc/sudoers file) to run commands as root. Example: sudo useradd jdoe
  • su – Switches to another account. This requires that you know that user’s password or can use sudo and provide your own password. Using the - means that you also pick up that user’s environment settings. Examples: su (switch to root), su - jdoe, sudo su - jdoe
  • who – Shows who is logged into the system. Example: who
  • last – Lists last logins for a specified user, using records from the /var/log/wtmp file. Example: last jdoe
  • ufw – Manages the firewall on Debian-based systems. Examples: sudo ufw status, sudo ufw allow ssh, ufw show
  • firewall-cmd – Manages the firewall (firewalld) on RHEL and related systems. Examples: firewall-cmd --list-services, firewall-cmd --get-zones
  • iptables – Displays firewall rules. Example: sudo iptables -vL -t security

Setting up and running scheduled processes

Tasks can be scheduled to run periodically using the command listed below.

  • crontab – Sets up and manages scheduled processes. With the -l option, cron jobs are listed; with the -e option, cron jobs can be set up to run at selected intervals. Examples: crontab -l, crontab -l -u username, crontab -e
  • anacron – Runs scheduled jobs on a daily (or less frequent) basis. If the system is powered off when a job is supposed to run, the job runs when the system boots. Example: sudo vi /etc/anacrontab

Updating, installing and listing applications

The commands for installing and updating applications depend on what version of Linux you are using, specifically whether it’s Debian- or RPM-based.

  • apt update – On Debian-based systems, updates the list of available packages and their versions, but does not install or upgrade any packages. Example: sudo apt update
  • apt upgrade – On Debian-based systems, installs newer versions of installed packages. Example: sudo apt upgrade
  • apt list – Lists all packages known to a Debian-based system. With the --installed option, it shows only installed packages; with --upgradable, only those for which upgrades are available. Examples: apt list, apt list --installed, apt list --upgradable
  • apt install – On Debian-based systems, installs the requested package. Example: sudo apt install apache2
  • yum update – On RPM-based systems, updates all or specified packages. Examples: sudo yum update, sudo yum update mysql
  • yum install – On RPM-based systems, installs the requested package. Example: sudo yum -y install firefox
  • yum list – On RPM-based systems, lists known and installed packages. Examples: sudo yum list, sudo yum list installed

Shutting down and rebooting

Commands for shutting down and rebooting Linux systems require privileged access. Options such as +15 refer to the number of minutes that the command will wait before doing the requested shutdown.

  • shutdown – Shuts down the system at the requested time. The -H option halts the system, while -P also powers it off. Examples: sudo shutdown -H now, sudo shutdown -H +15, sudo shutdown -P +5
  • halt – Halts the system; with -p it also powers it off. Examples: sudo halt, sudo halt -p, sudo halt --reboot
  • poweroff – Powers down the system. Example: sudo poweroff

Wait, wait, there’s more!

Remember to consult the man pages for more details on these commands. A cheat sheet provides only a quick explanation and a handful of command examples to help you get started.

Source

Configure LVM on Linux Mint – Linux Hint

Imagine that you need to resize a chosen partition on your hard disk. This is possible on Linux thanks to LVM. With this in mind, this article will teach you how to configure LVM on Linux Mint. However, you can apply this tutorial to any Linux distribution.

What is LVM?

LVM is a logical volume manager developed for the Linux kernel. Currently, there are two versions of LVM: LVM1 is practically out of support, while LVM version 2, commonly called LVM2, is the one in general use.

LVM includes many of the features that are expected of a volume manager, including:

  • Resizing volume groups.
  • Resizing logical volumes.
  • Read-only snapshots (LVM2 offers read and write).

To give you an idea of the power and usefulness of LVM, consider the following example: suppose we have a small hard drive, say 80 GB, distributed something like this:

  • A 400 MB /boot partition
  • A 6 GB root (/) partition
  • A 32 GB /home partition
  • A 1 GB swap partition

This distribution could be correct and useful, but imagine that we install many programs and the root partition fills up, while /home holds practically no personal files and has 20 GB available. That is a poor use of the hard disk. With LVM, the solution to this problem is simple: you could reduce the volume containing /home and then increase the space allocated to the root directory.

LVM vocabulary

In order to make this post as simple as possible for the reader, it is necessary to take into account some concepts intimately related to LVM. Knowing these concepts will help you understand the full potential of this tool:

So, let us start:

  • Physical Volume (PV): a PV is a physical volume – a hard drive or a particular partition.
  • Logical Volume (LV): an LV is a logical volume, the equivalent of a traditional partition on a system without LVM.
  • Volume Group (VG): a VG is a group of volumes; it can gather one or more PVs.
  • Physical Extent (PE): a PE is a fixed-size piece of a physical volume. A physical volume is divided into multiple PEs of the same size.
  • Logical Extent (LE): an LE is a fixed-size piece of a logical volume. A logical volume is divided into multiple LEs of the same size.
  • Device mapper: a generic Linux kernel framework that allows mapping one block device onto another.

Configure LVM on Linux Mint

First of all, you must install the lvm2 package on your system. To do this, open a terminal emulator and type the following. Note that executing this command requires superuser privileges.

sudo apt install lvm2

Next, I am going to use fdisk to verify which partitions I have. You should do the same to identify your own partitions.

sudo -i
fdisk -l

As you can see, I have a second hard drive. In order for LVM to do its job, it is necessary to prepare the disk or partitions to be of the LVM type. Therefore, I have to do some work on the second hard disk called sdb.

So, type this command:

fdisk /dev/sdb

Next, inside fdisk:

  • Press “n” to create a new partition, then press Enter.
  • Press “p” to set the partition as primary, then press Enter.
  • Press “1” to create it as the first partition of the disk, then press Enter.
  • Press “t” to change the system identifier of the partition, then press Enter.
  • Type “8e” to select the Linux LVM partition type, then press Enter.
  • Press “w” to write all the changes.
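The keystroke sequence above can also be fed to fdisk non-interactively. This is only a sketch: run it as root and double-check the device name first, since rewriting the partition table destroys data on the wrong disk (/dev/sdb is illustrative).

```shell
# n = new, p = primary, 1 = first partition, two empty lines accept the
# default start/end sectors, t = change type, 8e = Linux LVM, w = write.
keys='n\np\n1\n\n\nt\n8e\nw\n'
# Guarded: only act when running as root and the illustrative disk exists.
if [ "$(id -u)" -eq 0 ] && [ -b /dev/sdb ]; then
    printf "$keys" | fdisk /dev/sdb
fi
```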

Finally, check the partition.

fdisk -l /dev/sdb

NOTE: If you are going to work with several partitions, you must repeat this process with each of them.

Now, we are ready to continue.

Create the Physical Volume (PV)

To work with LVM we must first define the Physical Volumes (PV), for this we will use the pvcreate command. So, let us go.

pvcreate /dev/sdb1

Check the changes.

pvdisplay

NOTE: If we had more than one partition, we would have to run pvcreate on each of them.

Create the Volume Group (VG)

Once you have the partitions ready, you have to add them to a volume group. So, type this command:

vgcreate volumegroup /dev/sdb1

Replace “volumegroup” with whatever name you want. If you had more partitions, you would only have to add them to the command. For example:

vgcreate volumegroup /dev/sdb1 /dev/sdb2

Now, check the volume group with this command:

vgdisplay

Create the logical volumes (LV)

This is the central moment of the post, because in this part we will create the logical volumes, which behave like normal partitions.

So, run this command:

lvcreate -L 4G -n volume volumegroup

This command creates a 4 GB logical volume in the previously created volume group.

With lvdisplay you can check the LV.

lvdisplay

The next step is to format and mount the LV.

mkfs.ext4 /dev/volumegroup/volume

Now, create a temporary folder and mount the LV on it.

mkdir /temporal/
mount /dev/volumegroup/volume /temporal/

Now, check the LV.

df -h | grep temporal

Increase or decrease the size of the logical volume

One of LVM’s most powerful features is the ability to increase the size of a logical volume in a very simple way. To do this, type the following command.

lvextend -L +2G /dev/volumegroup/volume

Finally, it is necessary to reflect the same change in the file system. For this, run this command.

resize2fs /dev/volumegroup/volume

Check the new size:

df -h | grep temporal
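Putting the steps together, the whole workflow from this article can be sketched as a single script. Device and mount-point names are illustrative, and the script is guarded so it only acts when run as root with the prepared partition (/dev/sdb1, already set to type 8e in fdisk) present.

```shell
#!/bin/sh
# Recap of the LVM workflow above (sketch only; requires root and /dev/sdb1).
set -e
if [ "$(id -u)" -eq 0 ] && [ -b /dev/sdb1 ]; then
    pvcreate /dev/sdb1                        # register the partition as a PV
    vgcreate volumegroup /dev/sdb1            # create a volume group on it
    lvcreate -L 4G -n volume volumegroup      # carve out a 4 GB logical volume
    mkfs.ext4 /dev/volumegroup/volume         # put a filesystem on the LV
    mkdir -p /temporal
    mount /dev/volumegroup/volume /temporal   # mount it
    lvextend -L +2G /dev/volumegroup/volume   # later: grow the LV by 2 GB...
    resize2fs /dev/volumegroup/volume         # ...and grow the filesystem too
fi
```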

Final thoughts

Learning to configure LVM on Linux Mint is a simple process that can spare you many problems when working with partitions. I invite you to read more about the subject, since here I have shown only practical and simple examples of how to configure it.

Source

Red Hat introduces first Kubernetes-native IDE

These days we often package our new programs in containers, and we then manage those containers with Kubernetes. That’s great as far as it goes, but if you’re a programmer, it’s still missing a vital part: An integrated development environment (IDE). Now, Red Hat is filling this hole with Red Hat CodeReady Workspaces, a Kubernetes-native, browser-based IDE.


CodeReady is based on the open-source Eclipse Che IDE. It also includes formerly proprietary features from Red Hat’s Codenvy acquisition.

This new IDE is optimized for Red Hat OpenShift, Red Hat’s Docker/Kubernetes platform, and Red Hat Enterprise Linux (RHEL). Red Hat claims CodeReady Workspaces is the first IDE that runs inside a Kubernetes cluster. There have been other IDEs that can work with Kubernetes, notably JetBrains’ IntelliJ IDEA with a plugin, but CodeReady appears to be the first native Kubernetes IDE.

With CodeReady Workspaces, you can manage your code, its dependencies and artifacts inside OpenShift Kubernetes pods and containers. By contrast, with older IDEs, you can only take advantage of Kubernetes during the final phase of testing and deployment. CodeReady Workspaces lets you develop in OpenShift from the start. Thus, you don’t have to deal with the hassle of moving applications from your development platforms to production systems.

Another CodeReady plus is you don’t need to be a Kubernetes or OpenShift expert to use it. CodeReady handles Kubernetes’ complexities behind the scenes, so you can focus on developing your containerized applications instead of wrestling with Kubernetes. In short, CodeReady includes the tools and dependencies you’ll need to code, build, test, run, and debug container-based applications without requiring you to be a container expert.


CodeReady also includes a new sharing feature: Factories. A Factory is a template containing the source-code location, runtime and tooling configuration, and commands needed for a project. Factories enable development teams to get up and running with a Kubernetes-native developer environment in a couple of minutes. Team members can use any device with a browser, any operating system, and any IDE — not just CodeReady — to work on their own or shared workspaces. You can also bring other programmers into your CodeReady projects by simply sending them a shareable link.

In a statement, Brad Micklea, Red Hat’s senior director for developer experience and programs, said:

“The rise of cloud-native applications and Kubernetes as the platform for modern workloads requires a change in how developers approach building, testing and deploying their critical applications. Existing developer tooling does not adequately address the evolving needs of containerized development, a challenge that we’re pleased to answer with Red Hat CodeReady Workspaces.”

With CodeReady, developer teams can:

  • Integrate with your preferred version control system with both public and private repositories
  • Control workspace permissions and resourcing
  • Better protect intellectual property by keeping source code off hard-to-secure laptops and mobile devices
  • Use Lightweight Directory Access Protocol (LDAP) or Active Directory (AD) authentication for single sign-on

All in all, this is a no-brainer. If you’re developing with Kubernetes and containers on OpenShift, get CodeReady. Period. End of statement. Better still, CodeReady is available without charge to anyone with an OpenShift subscription. Just join the Red Hat Developer Program, download, and get to work.


Source

WLinux & WLinux Enterprise Benchmarks, The Linux Distributions Built For Windows 10 WSL

Making the news rounds a few months back was “WLinux”, which was the first Linux distribution designed for Microsoft’s Windows Subsystem for Linux (WSL) on Windows 10. But is this pay-to-play Linux distribution any faster than the likes of Ubuntu, openSUSE, and Debian already available from the Microsoft Store? Here are some benchmarks of these different Linux distribution options with WSL.

WLinux is a Linux distribution derived from Debian that is focused on offering an optimal WSL experience. This distribution isn’t spun by Microsoft but by a startup called Whitewater Foundry. WLinux focuses on providing good defaults for WSL by curating its default package set, while the full Debian archive remains accessible via APT. There is also support for graphical applications when paired with a Windows-based X server. This easy-setup, quick-to-get-going Linux distribution for WSL retails for $19.99 USD in the Microsoft Store, though it often sells for $9.99 USD.

Whitewater Foundry also offers WLinux Enterprise. Rather than being a Debian-based WSL operating system, WLinux Enterprise is derived from the EL7 / Scientific Linux 7 package set. WLinux Enterprise is also available from the Microsoft Store where it has a listed retail price of $99.99 USD though is on sale for $5.99.

Whitewater Foundry had sent over some review keys for WLinux a while back, so I decided to benchmark it against the other WSL options currently on Windows 10: Ubuntu 18.04 LTS, openSUSE Leap 42.3, and Debian Stretch. There have been indications of Intel engineers exploring WSL support for Clear Linux, which would be quite interesting given its performance focus, but so far we haven’t seen that offering premiere via the Microsoft Store.

The same system was used for all of this benchmarking, which consisted of an Intel Core i9 7980XE, ASUS PRIME X299-A motherboard, 4 x 4GB DDR4-3200MHz memory, Samsung 970 EVO NVMe SSD, and GeForce GTX TITAN X. Windows 10 Pro x64 was running on the system with all available system updates as of Build 17763. For supported tests, besides the WSL Linux distributions tested there are also results from the bare metal Windows 10 installation. All of these Linux and Windows benchmarks were carried out using the Phoronix Test Suite.

Source
