Microsoft Open Sources Over 60,000 Patents to Protect Linux
Last updated October 11, 2018 by Ankush Das
We are well aware that Microsoft is interested in helping the open-source community now more than ever. It has open sourced several of its projects, such as .NET Core, VS Code, PowerShell, MS-DOS and a number of AI frameworks and libraries.
But, its latest decision is a big deal for the users, developers and the companies associated with Linux.
Microsoft’s Corporate Vice President Erich Andersen announced in a blog post that the company would bring its portfolio of over 60,000 issued patents to the Open Invention Network (OIN).
Open Invention Network (OIN) is a community backed by big companies like IBM, Google and Sony that protects Linux and associated open-source projects from patent lawsuits.
In the blog post, Erich mentioned how OIN is helping protect Linux:
“Since its founding in 2005, OIN has been at the forefront of helping companies manage patent risks. In the years before the founding of OIN, many open source licenses explicitly covered only copyright interests and were silent about patents. OIN was designed to address this concern by creating a voluntary system of patent cross-licenses between member companies covering Linux System technologies. OIN has also been active in acquiring patents at times to help defend the community and to provide education and advice about the intersection of open source and intellectual property. Today, through the stewardship of its CEO Keith Bergelt and its Board of Directors, the organization provides a license platform for roughly 2,650 companies globally. The licensees range from individual developers and startups to some of the biggest technology companies and patent holders on the planet.“
Now, with Microsoft taking such a big step, Erich also mentioned how it would impact the open-source community:
“Now, as we join OIN, we believe Microsoft will be able to do more than ever to help protect Linux and other important open source workloads from patent assertions. We bring a valuable and deep portfolio of over 60,000 issued patents to OIN. We also hope that our decision to join will attract many other companies to OIN, making the license network even stronger for the benefit of the open source community.”
[Image: a quote from Satya Nadella, made after buying GitHub for $7.5 billion]
It would be interesting to see how this unfolds, because when it comes to money, Microsoft is no one’s friend. Microsoft earns a huge chunk of revenue from patents. It holds patents covering Android as well, which enable it to earn $5-$15 from every Android device sold. I don’t think those 60,000 patents were bringing any revenue to Microsoft, but that’s just my presumption.
Input from Abhishek: Microsoft has its own selfish interest in this case. This time around, it is more about protecting Microsoft and its cloud business on Azure, which depends heavily on Linux. Remember the Oracle vs Google legal battle over the use of Java in the Android operating system? If Linux ever gets into a patent battle and loses it, all the companies using Linux might have to pay billions. Microsoft surely doesn’t want to be in such a situation, and hence it (along with many other big corporations) wants to protect Linux in order to save its own skin.
What do you think about the entire episode? Let us know your thoughts in the comments below.
About Ankush Das
A passionate technophile who also happens to be a Computer Science graduate. He has had bylines at a variety of publications that include Ubergizmo & Tech Cocktail. You will usually see cats dancing to the beautiful tunes sung by him.
Download X.Org Server Linux 1.20.2
X.Org Server (xorg-server) is an open source and freely distributed implementation of the X Window System (X.Org), provided by the X.Org Foundation, specially designed for the GNU/Linux operating system.
Features at a glance
Key features include input hotplug, KDrive, DTrace and EXA. It’s designed to run on many UNIX-like operating systems, including most Linux distributions and BSD variants. It is also the default X server for the Solaris operating system.
Forked from XFree86
X.Org Server is part of the X.Org software, the popular and powerful X Window System used in many POSIX operating systems, including almost all GNU/Linux distributions, as well as some BSD and Solaris flavors. The software was originally forked from the XFree86 project.
An important component of every Linux distro
This is a very important and essential component of all Linux kernel-based operating systems that run a graphical desktop environment or a window manager. Without X.Org and X.Org Server, you will only be able to use a distro from the command-line.
It’s installed by default
Of course, this means that it is installed by default in all these GNU/Linux distributions, without exception. If you remove this package from your installation, you won’t be able to access the graphical environment anymore.
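If you want to verify which X.Org Server build your distribution ships, the server can report its own version (this assumes the Xorg binary is in your PATH):
# print the X.Org X Server version and build information
Xorg -version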
X.Org, X.Org Server and X.Org Foundation
Many people get confused about these components of a Linux distribution that uses a graphical session, but one should know that X.Org is the X Window System implementation, which encompasses several other projects such as Xlib and XCB, while X.Org Server is its display server component.
Furthermore, the X.Org Foundation is the organization that governs these projects. The X.Org (X Window System) packages are freely available for download on Softpedia.
Linux Scoop — Nitrux 1.0.16
Nitrux 1.0.16 – See What’s New
Nitrux 1.0.16 is the latest release of Nitrux OS, based on the development branch of Ubuntu 18.10 Cosmic Cuttlefish and powered by the Linux kernel 4.18 series. This release also brings together the latest software updates, bug fixes, performance improvements, and ready-to-use hardware support.
It uses the latest version of Nomad Desktop as the default desktop environment, built on top of KDE Plasma 5.13.90 and Qt 5.11.1. The Software Center was updated to use a new web scraper backend, allowing for automated sorting and listing of AppImages.
Download Nitrux 1.0.16
Configure your web application pentesting lab
By Shashwat Chaudhary, April 04, 2017
- Disclaimer – TL;DR: some of the stuff here can be used to carry out illegal activity; our intention, however, is to educate.
In the previous tutorial, we set up our web application pentesting lab. However, it’s far from ready, and we need to make some changes to get it working as per our needs. Here’s the link to the previous post if you didn’t follow it:
Set up your web app pentesting lab
Contents
- Fixing the problems
- Changing credentials
- Adding recaptcha key
- Enabling disabled stuff
- Installing missing stuff
- Giving write privileges
Fixing problems
If you remember from the previous post, we reached this point:
[Screenshot: the setup page; there’s some stuff shown in red]
All the stuff in red needs fixing. If you’re lucky, you have the same set of issues that I do. Otherwise, you’ll have to do some googling to find out how to fix any problems that you are facing and I am not.
Changing the MySQL username and password
The default credentials in the config.inc.php file are ‘root’ and ‘p@ssw0rd’. We change them to the correct MySQL login credentials, ‘root’ and ‘’ (empty) in my case. You can change them depending on your own MySQL credentials. This gets rid of our biggest worry: Unable to connect to database!
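For reference, here is a minimal way to make and verify that change from the shell. The path assumes DVWA lives under XAMPP’s htdocs; adjust it to your install and use whatever editor you like:
# open the DVWA config with root privileges and set db_user / db_password
sudo nano /opt/lampp/htdocs/dvwa/config/config.inc.php   # path is an assumption -- adjust to your setup
# after saving, verify the values took effect
grep -E "db_user|db_password" /opt/lampp/htdocs/dvwa/config/config.inc.php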
Now we’ll fix the other remaining issues.
Fixing missing recaptcha key
Firstly, we need to solve the missing reCAPTCHA key problem. Go to the Google reCAPTCHA signup page –
[Screenshot: go to the URL and you’ll see a registration form]
[Screenshot: fill in the form; the exact values don’t matter much]
[Screenshot: you obtain a site key and a secret key; the site key is the public key and the secret key is the private key]
[Screenshot: open the config.inc.php file in your favourite text editor]
[Screenshot: edit the recaptcha public key and private key fields with the values you obtained; here is what I did]
[Screenshot: now we have a reCAPTCHA key; one red item down, 3 to go]
Fixing disabled allow_url_include
We simply have to locate the configuration file and edit the value of the parameter from Off to On.
[Screenshot: the PHP configuration file is located at /opt/lampp/etc/php.ini; edit it with your favourite text editor (you’ll need root privileges, i.e. sudo)]
[Screenshot: locate the allow_url_include line using your text editor’s search feature]
[Screenshot: change Off to On]
[Screenshot: restart the lampp service]
[Screenshot: reload the page and you’ll see that the issue is fixed]
Note: Any other function which is disabled can be enabled in a similar manner. All settings are in the php.ini file. You just need to search for the corresponding line and edit it.
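For reference, here is one way to check and flip such a php.ini setting from the shell, assuming the XAMPP layout used above (back up the file first, and adjust the path if yours differs):
# see the current value
grep allow_url_include /opt/lampp/etc/php.ini
# back up, switch Off to On, then restart XAMPP
sudo cp /opt/lampp/etc/php.ini /opt/lampp/etc/php.ini.bak
sudo sed -i 's/^allow_url_include *= *Off/allow_url_include = On/' /opt/lampp/etc/php.ini
sudo /opt/lampp/lampp restart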
Fixing missing modules
If a module is shown as missing, then we need to install it. In my case, everything is installed. Most likely, since you are also using XAMPP, everything will be installed for you too. However, if that is not the case, then you have to figure out how to install the missing modules. If you aren’t using XAMPP and did everything manually, then apt-get would be the way to go. Otherwise, look at the documentation for XAMPP (or whichever bundle you are using).
Fixing File Ownership
We need to give the www-data user write access to two directories. We can use the chgrp and chmod commands in unison to give only the privileges that are needed, or we can go the lazy way and use chmod 777 (full read, write and execute privileges for everyone). I’m feeling lazy, so I’m just going to go the chmod way. Run the command below:
chmod 777 <directory>
Replace directory with the correct directory.
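If you’d rather not use chmod 777, a more restrained sketch of the same fix looks like this; the uploads path and the www-data group are assumptions (use whichever directories the setup page flagged and the group your web server actually runs as):
# give only the web server group write access to the flagged directory
sudo chgrp -R www-data /opt/lampp/htdocs/dvwa/hackable/uploads
sudo chmod -R g+w /opt/lampp/htdocs/dvwa/hackable/uploads
# or, the lazy way used in this post
sudo chmod -R 777 /opt/lampp/htdocs/dvwa/hackable/uploads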
[Screenshot: this is the last thing that needs to be done]
[Screenshot: everything is finally green! Also note the credentials, “admin // password”; we’ll need them later]
[Screenshot: the database is created and populated with tables]
[Screenshot: finally, the damn vulnerable application is running]
The username is “admin” and the password is “password” (the “admin // password” we saw a few screenshots ago).
[Screenshot: everything is running perfectly; this is the page you should see after a successful login]
I’ll leave you at the welcome page of DVWA. In the next tutorial, we’ll begin proper exploitation of the intentional vulnerabilities, moving from trivial stuff to the really hard stuff. The first two tutorials complete the installation and configuration parts.
Clearing the (hybrid and multi-) clouds of confusion
Despite cloud computing being a generally well-accepted and widely used technology that has slipped into the common vernacular very easily, there is still some confusion around the different types of cloud options out there, specifically around the concepts of multi-cloud and hybrid cloud. While some of this is due to slightly hazy marketing, largely it is down to misunderstanding. We know, just from looking at many of the cars on the street today, that a hybrid is a combination of two things, but how does that differ from multi-cloud?
A blog written in 2017 by our own Terri Schlosser previously addressed this, but having had a number of conversations with confused customers and partners over the last year or so, I decided to record a very brief video to help clarify the situation.
Follow this link to watch the video, and if you have any thoughts, please leave a comment on this blog, contact me at matthew.johns@suse.com or via Twitter. I hope that you find it useful in understanding more about what can be at times a very confusing set of terms. If you’d like to read more about cloud in general, then please visit our Cloud Solutions page on suse.com, or get in contact with us to see how SUSE can support you in your journey to the cloud.
HTTP download speed difference in Windows 7 vs Linux
I had a strange situation with a Windows PC that was showing limited internet transfer speeds for no apparent reason. Performing the same test on a Linux box gave good speed. After some intense debugging, I was able to diagnose the root cause of the problem. It was Windows HTTP packet fragmentation happening locally, basically the way Windows compiles HTTP headers on the local machine, so I found a fix for it. We came across some TCP settings that restrict download speed on the Windows box, so in order to permit downloading large files at full speed, I modified the settings below.
These were my initial TCP settings:
C:\Windows\system32>netsh interface tcp show global
Querying active state…
TCP Global Parameters
———————————————-
Receive-Side Scaling State: disabled
Chimney Offload State : automatic
NetDMA State: enabled
Direct Cache Access (DCA): disabled
Receive Window Auto-Tuning Level: disabled
Add-On Congestion Control Provider: none
ECN Capability: disabled
RFC 1323 Timestamps : disabled
** The above autotuninglevel setting is the result of Windows Scaling heuristics
overriding any local/policy configuration on at least one profile.
C:\Windows\system32>netsh interface tcp show heuristics
TCP Window Scaling heuristics Parameters
———————————————-
Window Scaling heuristics : enabled
Qualifying Destination Threshold: 3
Profile type unknown: normal
Profile type public : normal
Profile type private: restricted
Profile type domain : normal
Thus I did:
# disable heuristics
C:\Windows\system32>netsh interface tcp set heuristics wsh=disabled
Ok.
# enable receive-side scaling
C:\Windows\system32>netsh int tcp set global rss=enabled
Ok.
# manually set autotuning profile
C:\Windows\system32>netsh interface tcp set global autotuning=experimental
Ok.
# set congestion provider
C:\Windows\system32>netsh interface tcp set global congestionprovider=ctcp
Ok.
C:\Windows\system32>netsh interface tcp show global
Querying active state…
TCP Global Parameters
———————————————-
Receive-Side Scaling State: enabled
Chimney Offload State : automatic
NetDMA State: enabled
Direct Cache Access (DCA): disabled
Receive Window Auto-Tuning Level: experimental
Add-On Congestion Control Provider: ctcp
ECN Capability: disabled
RFC 1323 Timestamps : disabled
After changing these settings, downloading is fast again, hitting the internet connection’s limit.
Find Exact Installation Date And Time Of Your Linux OS
On Fedora, RHEL and its clones such as CentOS, Scientific Linux, Oracle Linux, you can find it using the following command:
rpm -qi basesystem
Sample output
# rpm -qi basesystem
Name : basesystem
Version : 10.0
Release : 7.el7.centos
Architecture: noarch
Install Date: Thu 29 Mar 2018 05:05:32 PM IST
Group : System Environment/Base
Size : 0
License : Public Domain
Signature : RSA/SHA256, Fri 04 Jul 2014 06:16:57 AM IST, Key ID 24c6a8a7f4a80eb5
Source RPM : basesystem-10.0-7.el7.centos.src.rpm
Build Date : Fri 27 Jun 2014 04:07:10 PM IST
Build Host : worker1.bsys.centos.org
Relocations : (not relocatable)
Packager : CentOS BuildSystem http://bugs.centos.org
Vendor : CentOS
Summary : The skeleton package which defines a simple CentOS Linux system
Description :
Basesystem defines the components of a basic CentOS Linux
system (for example, the package installation order to use during
bootstrapping). Basesystem should be in every installation of a system,
and it should never be removed.
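If you only want the date itself, you can filter the output or ask rpm for just that field; both of these assume the same basesystem package as above:
# print just the Install Date line
rpm -qi basesystem | grep "Install Date"
# or query the install time field directly
rpm -q --qf '%{INSTALLTIME:date}\n' basesystem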
Unleash powerful Linux container-building capabilities with Buildah – Red Hat Enterprise Linux Blog
Balancing size and features is a universal challenge when building software. So, it’s unsurprising that this holds true when building container images. If you don’t include enough packages in your base image, you end up with images that are difficult to troubleshoot or missing something you need, or you just cause different development teams to add the exact same package to their layered images (causing duplication). If you build it too big, people complain because it takes too long to download – especially for quick and dirty projects or demos. This is where Buildah comes in.
In the currently available ecosystem of build tools, there are two main kinds of build tools:
- Ones which build container images from scratch.
- Those that build layered images.
Buildah is unique in that it elegantly blurs the line between both – and, it has a rich set of capabilities for each. One of those rich capabilities is multi-stage builds.
At Red Hat Summit 2018 in San Francisco, Scott McCarty and I boiled the practice of building production ready containers down into five key tenets – standardize, minimize, delegate, process, and iterate (video & presentation).
Two tenets in particular are often at odds – standardize and minimize. It makes sense to standardize on a rich base image, while at the same time minimizing the content in layered builds. Balancing both is tricky, but when done right, reaps the benefits of OCI image layers at scale (lots of applications) and improves registry storage efficiency.
Multi-stage builds
A particularly powerful example of how to achieve this balance is the concept of multi-stage builds. Since build dependencies like compilers and package managers are rarely required at runtime, we can exclude them from the final build by breaking it into two parts. We can do the heavy lifting in the first part, then use the build artifacts (think Go binaries or jars) in the second. We will then use the container image from the second build in production.
Using this methodology leverages the power of rich base images, while at the same time, results in a significantly smaller container image. The resultant image isn’t carrying additional dependencies that aren’t used during runtime. The multi-stage build concept became popular last year with the release of Docker v17.05, and OpenShift has long had a similar capability with the concept of chaining builds.
OK, multi-stage builds are great, you get it, but to make this work right, the two builds need to be able to copy data between them. Before we tackle this, let’s start with some background.
Buildah background
Buildah was a complete rethink of how container image builds could and should work. It follows the Unix philosophy of small, flexible tools. Multi-stage builds were part of the original design and have been possible since its inception. With the release of Buildah 1.0, users can now take advantage of the simplicity of using multi-stage builds with the Dockerfile format. All of this, with a smaller tool, no daemon, and tons of flexibility during builds (ex. build time volumes).
Below we’ll take a look at how to use Buildah to accomplish multi-stage builds with a Dockerfile and also explore a simpler, yet more sophisticated way to tackle them.
Using Dockerfiles:
$ buildah bud -t [image:tag] .
….and that’s it! Assuming your Dockerfile is written for multi-stage builds and is in the directory where the command is executed, everything will just work. So if this is all you’re looking for, know that it’s now trivial to accomplish this with Buildah in Red Hat Enterprise Linux 7.5.
Now, let’s dig a little deeper and take a look at using Buildah’s native commands to achieve the same outcome and some reasons why this can be a powerful alternative for certain use cases.
For clarity, we’ll start by using Alex Ellis’s blog post that demonstrates the benefits of performing multi-stage builds. Use of this example is simply to compare and contrast the Dockerfile version with Buildah’s native capabilities. It’s not an endorsement of any underlying technologies such as Alpine Linux or APK. These examples could all be done in Fedora, but that would make the comparison less clear.
Using Buildah Commands
Using his https://github.com/alexellis/href-counter we can convert the included Dockerfile.multi file to a simple script like this:
First Build
#!/bin/bash
# build container
buildcntr1=$(buildah from golang:1.7.3)
buildmnt1=$(buildah mount $buildcntr1)
Using simple variables like this is not required, but it makes the later commands clearer to read, so it’s recommended. Think of buildcntr1 as a handle representing the build container, while buildmnt1 holds the directory where that container’s filesystem is mounted.
buildah run $buildcntr1 go get -d -v golang.org/x/net/html
This is the first command verbatim from the original Dockerfile. All that’s needed is to change RUN to run and point Buildah to the container we want to execute the command in. Once the command completes, the Go dependencies are in place inside the build container. Next we need to get our app.go source file into it. Buildah has a native directive for copying content into a container build:
buildah copy $buildcntr1 app.go .
Alternatively, we can use the system command to do the same thing by referencing the mount point:
cp app.go $buildmnt1/go
For this example both of these lines accomplish the same thing. We can use buildah’s copy command the same way the COPY directive works in a Dockerfile, or we can simply use the host’s cp command to copy the source file into the container. In the rest of this tutorial, we’ll rely on the host’s commands.
Now, let’s build the code:
buildah run $buildcntr1 /bin/sh -c "CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app ."
Second Build
Now let’s define a separate runtime image that we’ll use to run our application in production:
# runtime container
buildcntr2=$(buildah from alpine:latest)
buildmnt2=$(buildah mount $buildcntr2)
The same RUN-to-run tweak applies here; we execute the command in the new runtime container:
buildah run $buildcntr2 apk --no-cache add ca-certificates
#buildah copy $buildcntr2 $buildmnt1/go/app .
Or:
cp $buildmnt1/go/app $buildmnt2
Here we have the same option as above. To bring the compiled application into the second build, we can use the copy command from buildah or the host.
Now, add the default command to the production image.
buildah config --cmd ./app $buildcntr2
Finally, we unmount and commit the image, and optionally clean up the environment:
#unmount & commit the image
buildah unmount $buildcntr2
buildah commit $buildcntr2 multi-stage:latest
#clean up build
buildah rm $buildcntr1 $buildcntr2
Don’t forget that Buildah can also push the image to your desired registry using buildah push.
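For example, to push the image committed above (the registry URL here is purely illustrative):
# push the locally committed image to a remote registry
buildah push multi-stage:latest docker://registry.example.com/demo/multi-stage:latest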
The beauty of Buildah is that we can continue to leverage the simplicity of the Dockerfile format, but we’re no longer bound by the limitations of it. People do some nasty, nasty things in a Dockerfile to hack everything onto a single line. This can make them hard to read, difficult to maintain, and it’s inelegant.
When you combine the power of being able to manipulate images with native Linux tooling from the build host, you are now free to go beyond the Dockerfile commands! This opens up a ton of new possibilities for the content of container images, the security model involved, and the process for building.
A great example of this was explored in one of Tom Sweeney’s blog posts on creating minimal containers. Tom’s example of leveraging the build host’s package manager is a great one, and means we no longer require something like “yum” to be available in the final image.
On the security side, we no longer require access to the Docker socket which is a win for performing builds from Kubernetes/OpenShift. In fairness Buildah currently requires escalated privileges on the host, but soon this will no longer be the case. Finally, on the process side, we can leverage Buildah to augment any existing build process, be it a CI/CD pipeline or building from a Kubernetes cluster to create simple and production-ready images.
Buildah provides all of the primitives needed to take advantage of the simplicity of Dockerfiles combined with the power of native Linux tooling, and is also paving the way to more secure container builds in OpenShift. If you are running Red Hat Enterprise Linux, or possibly an alternative Linux distribution, I highly recommend taking a look at Buildah and maximizing your container build process for production.
What Is /dev/shm in Linux?
Shared (Virtual) Memory (SHM)
Shared memory is a way to share state between processes. Shared memory, as its name implies, is a method to “share” data between processes: both processes define the same memory area as “shared”, and they can then exchange information simply by writing into it. This used to be, and still is somewhat, faster than the alternative of sending network or pipe-based messages between processes. If you see memory as a means of storing data, a file on a file system can also be seen as shared memory (i.e. a shared file).
It is difficult to account for shared memory. Does it belong to one process? Both? Neither?
If we naively sum the memory belonging to multiple processes, we grossly “over-count”.
As the name implies, shared (virtual) memory refers to virtual memory that is shared by more than one process and can therefore be used by multiple programs simultaneously. Although virtual memory allows processes to have separate (virtual) address spaces, there are times when you need processes to share memory.
Shared memory (SHM) is another method of interprocess communication (IPC)
whereby several processes share a single chunk of memory to communicate.
Shared memory provides the fastest way for processes to pass large amounts of data
to one another.
/dev/shm is nothing but an implementation of the traditional shared memory concept. It is an efficient means of passing data between programs: one program creates a memory portion which other processes (if permitted) can access. This speeds things up on Linux.
shm / shmfs is also known as tmpfs, which is a common name for a temporary file storage facility on many Unix-like operating systems. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage device. If you run the mount command you will see /dev/shm listed as a tmpfs file system. It is, therefore, a file system which keeps all of its files in virtual memory. Everything in tmpfs is temporary in the sense that no files are created on your hard drive, and if you unmount a tmpfs instance, everything stored therein is lost. By default, almost all Linux distros are configured to use /dev/shm.
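A quick way to see this in practice from the shell (the file name below is just an illustration):
# confirm /dev/shm is mounted as tmpfs and check its size and usage
mount | grep /dev/shm
df -h /dev/shm
# anything written here lives in RAM, not on disk, and vanishes on reboot
echo "hello from RAM" > /dev/shm/demo.txt
cat /dev/shm/demo.txt
rm /dev/shm/demo.txt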
Difference between tmpfs and swap
- tmpfs uses memory, whereas swap uses persistent storage devices.
- tmpfs appears as a file system in df output, whereas swap doesn't.
- swap has general size recommendations; tmpfs doesn't, since the appropriate tmpfs size varies with the system's purpose.
- tmpfs makes applications faster on loaded systems; swap helps the system breathe in memory-full situations.
- A full swap indicates a heavily loaded system, degraded performance, and a possible crash.
- A full tmpfs does not necessarily mean heavy load or that a crash is imminent.
- tmpfs is an enhancement, whereas swap is a must-have feature!
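Since the tmpfs size is tunable, /dev/shm can be resized on the fly with a remount; a minimal sketch, with 2G as a purely illustrative size:
# resize the /dev/shm tmpfs without a reboot (needs root); pick a size that suits your system
sudo mount -o remount,size=2G /dev/shm
df -h /dev/shm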