How to Protect Your Online Privacy: A Practical Guide

Do you take your online privacy seriously?

Most people don’t. They have an ideal of just how private their online activities should be, but they rarely do anything to actually achieve it.

The problem is that bad actors know and rely on this fact, which is why identity theft cases rose steadily from 2013 to 2017. The victims of these cases often suffer reputational damage or financial loss.

If you take your online privacy seriously, follow this 10-step guide to protect it.

1. Beware of Internet Service Providers

You may not be aware of it, but your ISP already might know all about your online searches.

Each time you visit a site or search for something online, your browser sends a query to a DNS server to look up the address. Before the query reaches the DNS server, however, it first has to pass through your ISP. Needless to say, your ISP easily can read and monitor these queries, which gives it a window into your online activity.
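
If you want to see this for yourself, here is a minimal sketch, assuming a Linux machine with tcpdump and dig installed: capture traffic on the standard DNS port while making an ordinary, unencrypted lookup, and the queried domain name appears in plaintext.

$ # in one terminal: watch outgoing DNS traffic (port 53)
$ sudo tcpdump -ni any port 53
$ # in another terminal: trigger a normal DNS lookup
$ dig example.com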

Not all ISPs monitor your browser queries, but the ones that don’t are the exception rather than the rule. Most ISPs keep records of your Web browsing for anywhere from a few months to a year. Most ISPs don’t record the content of your texts, but they do keep records of who texted you.

There are two ways to protect your privacy if you don’t want your ISP monitoring your browser queries: 1) Switch to an ISP that doesn’t monitor your online data, if practicable; or 2) Get a VPN to protect your data (more on this later).

2. Strengthen and Protect Your Login Credentials

One thing most people take for granted is the login credentials they use to access their many online accounts. Your username and password are the only things keeping your information and privileges from getting into the wrong hands. This is why it’s important to make them as strong as possible.

Choose a strong username that is simple and easy to remember but can’t easily be linked to your identity. This is to prevent hackers from correctly guessing your username based on your name, age, or date of birth. You’d be surprised just how cunningly hackers can find this information. Also, never use your Social Security Number as your username.

Next, pick a strong password. There are many ways to do this, but we can narrow them down to two options: 1) Learn how to make strong passwords; or 2) Use a password manager app.

Learning how to make a strong password requires time and imagination. Do you want to know what the most common passwords are? They are “1234,” “12345,” “0000,” “password” and “qwerty” — no imagination at all. A password combining your name and date of birth won’t cut it. Nor will a password that uses any word found in the dictionary.

You need to use a combination of upper- and lowercase letters, numbers, and even symbols (if allowed). Both length and complexity matter: a long, complex password can take a computer centuries to figure out. In fact, you can run your password through an online strength checker if you want to see just how long it would take to crack.
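
If you’d rather let the machine do the work, here is a minimal sketch, assuming openssl is installed, for generating random passwords from the command line:

$ # 18 random bytes, base64-encoded: roughly 24 characters of mixed case, digits and symbols
$ openssl rand -base64 18
$ # or a longer hex string if a site rejects symbols
$ openssl rand -hex 16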

If you don’t have the time or imagination to formulate a strong, complex password, you can use a reputable password manager app. These apps not only save you the hassle of memorizing your complex passwords but also auto-fill online login forms and generate strong passwords for you.

Whether you want to learn how to make strong passwords or choose to install a password manager app is up to you. What you should never neglect, though, is 2FA (2-factor authentication). 2FA adds an extra layer of protection for your passwords in case someone ever does learn what they are. In fact, you may already have tried it when logging into an account on a new device.

The app or service requires you to key in the access code sent to another one of your devices (usually your phone) before you are given access to your account. Failing to provide this access code locks you out of your account. This means that even if hackers obtain your login credentials in some way, they still can’t log into your account without the access code.

Never use the same usernames or passwords for different accounts. This prevents hackers from accessing multiple accounts with a single set of stolen credentials. Also, never share your login credentials with anybody — not even your significant other.

3. Check the WiFi You’re Using

Have you ever heard of a KRACK attack? It’s a proof-of-concept cyberattack carried out by infiltrating your WiFi connection. The hacker then can steal information like browsing data, personal information, and even text message contents.

The problem is that not even WPA2 encryption can stop it. This is actually why the Wi-Fi Alliance started development of WPA3, which it officially introduced in the summer of 2018.

Do you need WPA3 to defend against KRACK attacks? No. You just need to install security updates when they become available. Patched devices ensure that an encryption key is installed only once, which blocks the key reinstallation that a KRACK attack depends on. You can add further layers of protection by visiting only HTTPS sites and by using a VPN.

You also can use a VPN to protect your device whenever you connect to a public network. It prevents hackers from stealing your information via a MitM (Man in the Middle) attack, or if the network you’ve connected to is actually a rogue network.

4. Watch Your Browser

If you read through your browser company’s Terms of Use and Privacy Policy, you might find that they actually track your online activities. They then sell this information to ad companies that use methods like analytics to create a profile for each user. This information then is used to create those annoying targeted ads.

How do they do this?

Answer: Web cookies.

For the most part, Web cookies are harmless. They’re used to remember your online preferences like Web form entries and shopping cart contents. However, some cookies (third-party cookies) are made specifically to remain active even on websites they didn’t originate from. They also track your online behavior through the sites you visit and monitor what you click on.

This is why it’s a good idea to clear Web cookies every once in a while. You may be tempted to change your browser settings to simply reject all cookies, but that would result in an overall inconvenient browsing experience.

Another way to address the monitoring issue is to use your browser’s Incognito mode. Your browser won’t save any visited sites, cookies, or online forms while in this mode, but your activities may be visible to the websites you visit, your employer or school, and your ISP.

The best way I’ve found so far is to replace your browser with an anonymous browser.

One example is the Tor Browser, which runs on The Onion Router (Tor) network and is made specifically to protect user privacy. It does this by wrapping your online data in several layers of encryption and then “bouncing” it through a corresponding series of relays before it finally reaches its destination.

Another example is Epic Browser. While this browser doesn’t run on an onion network like TOR, it does do away with the usual privacy threats, including browsing history, DNS pre-fetching, third-party cookies, Web or DNS caches, and auto-fill features. It automatically deletes all session data once you close the browser.

SRWare Iron will be familiar to Google Chrome users, since it’s based on the open source Chromium project. Unlike Chrome, however, it gets rid of data privacy concerns like usage of a unique user ID and personalized search suggestions.

These three are the best ones I’ve found, but there are other alternatives out there. Whatever privacy browser you choose, make sure it’s compatible with your VPN, as not all privacy browsers are VPN-compatible — and vice-versa.

5. Use a Private Search Engine

The search engines many people use present risks similar to those of popular browsers. Most browser companies also produce their own search engine, which — like the browser — tracks your online searches. These searches then can be traced to your personal identity by linking them to your computer, account, or IP address.

Aside from that, search engines keep information on your location and usage for up to several days. What most people don’t realize is that this collected information can be obtained and used in legal proceedings.

If this concerns you at all, you may want to switch to a private search engine. These private search engines often work in the same way: They obtain search results from various sources, and they don’t use personalized search results.

Some of the more popular private search engines include DuckDuckGo, Fireball, and Search Encrypt.

6. Install a VPN

What is a VPN, and why do I strongly recommend it?

A VPN (virtual private network) is a type of software that protects your Internet browsing by encrypting your online data and hiding your true IP address.

Since you already know how online searches are carried out, you already know that browser queries are easily readable by your ISP — or anyone else, for that matter. This is because your online data is, by default, unencrypted. It’s made up of plain text contained in data packets.

You also already know that not even built-in WPA2 encryption is good enough to protect against certain attacks.

This is where a VPN comes in. The VPN routes your online data through an encrypted tunnel to the VPN server before sending it on to its destination. Anyone intercepting your browsing data along the way will find unreadable ciphertext instead.
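
As a quick sanity check, here is a minimal sketch, assuming curl is installed and you use a public what-is-my-IP service such as ifconfig.me: run the command before and after connecting to the VPN, and the two addresses should differ if the VPN is doing its job.

$ # your public IP address as seen by the outside world
$ curl https://ifconfig.me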

You may hear advice against trusting VPNs with your security. I’m actually inclined to partially agree — not all VPNs are secure. However, that doesn’t mean all VPNs are not secure.

The unsecured VPNs I’m referring to are the “free lunch” types that promise to be free forever but actually use or sell your data to ad companies. Use only the safest VPN services you can find.

A VPN is primarily a security tool. While you may enjoy some privacy from its functions, you will want to pair it with a privacy browser and search engine to get the full privacy experience.

A VPN can’t secure your computer or device from malware that’s already present. This is why I always recommend using a VPN together with a good antivirus and firewall program.

One more caveat: some popular browsers run the WebRTC protocol by default, and WebRTC can expose your true IP address even while a VPN is active. Turn it off, or use an extension that blocks WebRTC leaks.

7. Watch Out for Phishing

You may have the best VPN, anonymous browser, and private search engine on the market, but they won’t do you much good if you’re hooked by a phishing scam.

Phishing employs psychological analysis and social engineering to trick users into clicking a malicious link. This malicious link can contain anything from viruses to cryptojackers.

While phishing attacks usually are sent to many individuals, there’s a more personalized form called “spearphishing.” In that case, the hackers attempt to scam a specific person (usually a high-ranking officer at a company) by using information that’s available only to a select few people that the target knows.

So, how do you avoid being reeled in by phishing attacks?

The first option is to learn how to identify phishing attempts. Beware of messages from people you don’t know. Hover over a link before clicking it to make sure it leads to the site it claims to. Most importantly, remember that if an offer seems too good to be true, it most likely is.

The second option is to install an antiphishing toolbar. This software prevents phishing by checking the links you click against a list of sites known to host malware or to trick visitors into disclosing financial or personal information.

If it determines that a link leads to one of those sites, it will warn you and provide a path back to safety.

The best examples I’ve found are OpenDNS, Windows Defender Browser Protection, and Avira Browser Safety.

8. Encrypt Your Communications

If you’ve been following tech news in recent months, you may have seen reports about the FBI wanting to break Facebook Messenger’s encryption. Say what you will about the social network giant, but this news reveals one thing: Even the FBI can’t crack encrypted messages without help.

This is why you should always use encrypted messaging. Apps like Signal and Threema come with end-to-end encryption by default, and Telegram offers it in its Secret Chats mode; all of them cover text messaging and calls.

If you require constant use of emails, ProtonMail, Tutanota, Mailinator, and MailFence are great alternatives to popular email services that actually monitor your email content.

9. Watch What You Share on Social Media

Social media has become one of the best ways to keep in touch with the important people in our lives. Catching up with everyone we care about is just a few clicks away. That said, we’re not the only ones looking at their profiles.

Hackers actually frequent social media sites as they hunt for any personal information they can steal. They even can circumvent your “friends only” information by adding you as a friend using a fake account. I don’t think I need to mention the problems hackers can cause once they’ve stolen your identity.

This is why you should exercise caution about what you share on social media. You never know whether hackers are using the photos you share to target you for their next attack. You may not want to fill out your profile completely: avoid giving your phone or home number, and consider using a separate, private email address to sign up.

10. Update Early and Often

You may have heard this before but it’s worth repeating now: Don’t ignore system updates. You may not be aware of it, but updates fix many vulnerabilities that could jeopardize your online privacy.

Most people put off installing updates since they always seem to come at inopportune times. Sometimes we just can’t put up with the dip in performance or Internet speed while updates are being installed.

It’s usually best to suffer what minor inconvenience they cause early rather than risk getting caught in the whirlwind of problems hackers can cause if you should get targeted. Most software and apps now come with an auto-update feature, so you won’t have to manually search and download them.

In Conclusion

Privacy is a human right, and our online privacy should be taken seriously. Don’t neglect to take the necessary steps to protect yours.

Beware of your Internet service provider, and always protect your login credentials no matter how strong they are. Remember to check the network you’re connecting to before you log in.

Watch what your browser and search engine are doing, and consider replacing them with more private ones. Prepare against phishing by learning to identify attempts and installing an antiphishing toolbar.

Always use encrypted messaging, and watch what you share on social media. Finally, never ignore system updates when they become available.

Follow these steps and you’ll soon be on your way to a more private browsing experience.

Source

GPL Initiative Expands with 16 Additional Companies Joining Campaign for Greater Predictability in Open Source Licensing

November 7, 2018

Red Hat, Inc. (NYSE: RHT), the world’s leading provider of open source solutions, today announced that Adobe, Alibaba, Amadeus, Ant Financial, Atlassian, Atos, AT&T, Bandwidth, Etsy, GitHub, Hitachi, NVIDIA, Oath, Renesas, Tencent, and Twitter have joined an ongoing industry effort to combat harsh tactics in open source license enforcement by adopting the GPL Cooperation Commitment. By making this commitment, these 16 corporate leaders are strengthening long-standing community norms of fairness, pragmatism, and predictability in open source license compliance.

Today’s announcement follows an earlier wave of adoption of the commitment within the technology industry. Red Hat, Facebook, Google and IBM made the initial commitment in November 2017. They were joined in March 2018 by CA Technologies, Cisco, Hewlett Packard Enterprise, Microsoft, SAP and SUSE. In July 2018, 14 additional companies signed on to the commitment: Amazon, Arm, Canonical, GitLab, Intel Corporation, Liferay, Linaro, MariaDB, NEC, Pivotal, Royal Philips, SAS, Toyota and VMware. One month later, in August 2018, the eight funding members of the Open Invention Network (OIN) — Google, IBM, Red Hat, SUSE, Sony, NEC, Philips, Toyota — announced that they had unanimously adopted the GPL Cooperation Commitment. With today’s announcement, more than 40 organizations have adopted the GPL Cooperation Commitment.

The 16 new companies in today’s announcement are a diverse set of technology firms whose participation makes evident the worldwide reach of the GPL Cooperation Commitment. They comprise globally-operating companies based on four continents and mark a significant expansion of the initiative into the Asia-Pacific region. They represent various industries and areas of commercial focus, including IT services, software development tools and platforms, social networking, fintech, semiconductors, e-commerce, multimedia software and more.

The GPL Cooperation Commitment is a means for companies, individual developers and open source projects to provide opportunities for licensees to correct errors in compliance with software licensed under the GPLv2 family of licenses before taking action to terminate the licenses. Version 2 of the GNU General Public License (GPLv2), version 2 of the GNU Library General Public License (LGPLv2), and version 2.1 of the GNU Lesser General Public License (LGPLv2.1) do not contain express “cure” periods to fix noncompliance prior to license termination. Version 3 of the GNU GPL (GPLv3) addressed this by adding an opportunity to correct mistakes in compliance. Those who adopt the GPL Cooperation Commitment extend the cure provisions of GPLv3 to their existing and future GPLv2 and LGPLv2.x-licensed code.

Specifically, the commitment language adopted by each company is:

Before filing or continuing to prosecute any legal proceeding or claim (other than a Defensive Action) arising from termination of a Covered License, [Company] commits to extend to the person or entity (“you”) accused of violating the Covered License the following provisions regarding cure and reinstatement, taken from GPL version 3. As used here, the term ‘this License’ refers to the specific Covered License being enforced.

However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation.

Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice.

[Company] intends this Commitment to be irrevocable, and binding and enforceable against [Company] and assignees of or successors to [Company]’s copyrights.

[Company] may modify this Commitment by publishing a new edition on this page or a successor location.

Definitions

‘Covered License’ means the GNU General Public License, version 2 (GPLv2), the GNU Lesser General Public License, version 2.1 (LGPLv2.1), or the GNU Library General Public License, version 2 (LGPLv2), all as published by the Free Software Foundation.

‘Defensive Action’ means a legal proceeding or claim that [Company] brings against you in response to a prior proceeding or claim initiated by you or your affiliate.

‘[Company]’ means [Company] and its subsidiaries.

Supporting Quotes

Michael Cunningham, executive vice president and general counsel, Red Hat
“We are thrilled to see the continued success of the GPL Cooperation Commitment. Compliance in the open source community is a forgiving process and rightly aimed at maximizing use of open source software. Adoption of the commitment by these 16 prominent technology companies strengthens this message and will enhance predictability in the use of open source software.”

Jiangwei Jiang, general manager of Technology R&D, Alibaba Cloud
“Alibaba is an active advocate, contributor and leader in the open source movement, and resorts to openness and inclusiveness to address controversy within the community.”

Benjamin Bai, vice president of Intellectual Property, Ant Financial
“Ant Financial is pleased to join the GPL Cooperation Commitment. Open source software has thrived on the basis of collaboration. Litigation should be used only as a last resort and in a responsible manner.”

Sri Viswanath, chief technology officer, Atlassian
“Atlassian embraces the open source movement and wants to help in responsibly shaping its future. The GPL Cooperation Commitment is a common-sense solution that makes it easier for users to adopt and innovate with open source, which is why we are pleased to join the Commitment.”

Mazin Gilbert, vice president of Advanced Technology and Systems, AT&T Labs
“AT&T is delighted to join the already successful GPL Cooperation Commitment. As a long-time contributor to the open source community, we’re excited to continue on this journey and encourage the spirit of collaboration.”

Mike Linksvayer, director of Policy, GitHub
“We’re thrilled to encourage and join in broad software industry cooperation to improve the legal and policy underpinnings of open source, which ultimately protects and empowers the people–and the community–behind the technology.”

Gil Yehuda, senior director of Open Source, Oath
“Oath is committed to promoting open source success and we support the GPL Cooperation Commitment. Open source collaboration is about working together to make better software for the entire industry.”

Hisao Munakata, senior director of Automotive Technical Customer Engagement Division, Automotive Solution Business Unit, Renesas
“Renesas is committed to being a first-class citizen in an OSS community which is why we contributed to the development of the Linux kernel, especially for the development of device drivers. We strongly believe by supporting the GPL Cooperation Commitment, we will be able to drive the worldwide adoption of the OSS license and bring great advantages to the whole automotive industry.”

Takahiro Yasui, director of OSS Solution Center, Systems & Services Business Division, Hitachi, Ltd.
“It is our pleasure to declare our participation to this Cooperation Commitment. Hitachi has been participating in the open source ecosystem through being a member of wide varieties of open source communities and working with open source organizations. Hitachi believes this activity helps the open source community grow healthy, and accelerate the speed of further open innovation in the open source ecosystem.”

Sam Xu, head of Intellectual Property, Tencent
“Open source is a key part of Tencent’s technology strategy. We look forward to working more closely with the international open source community to create new and cutting edge open source solutions. The GPL Cooperation Commitment will provide more reasonable and predictable protection for developers and contributors, which will foster a thriving and healthy open source ecosystem.”

Remy DeCausemaker, Open Source Program Manager, Twitter
“Twitter is proud to join the GPL Cooperation Commitment. Efforts like these encourage adoption, reduce uncertainty, and build trust. #GPLCC”

About Red Hat

Red Hat is the world’s leading provider of enterprise open source software solutions, using a community-powered approach to deliver reliable and high-performing Linux, hybrid cloud, container, and Kubernetes technologies. Red Hat helps customers integrate new and existing IT applications, develop cloud-native applications, standardize on our industry-leading operating system, and automate, secure, and manage complex environments. Award-winning support, training, and consulting services make Red Hat a trusted adviser to the Fortune 500. As a strategic partner to cloud providers, system integrators, application vendors, customers, and open source communities, Red Hat can help organizations prepare for the digital future.

Forward-looking statements

Certain statements contained in this press release may constitute “forward-looking statements” within the meaning of the Private Securities Litigation Reform Act of 1995. Forward-looking statements provide current expectations of future events based on certain assumptions and include any statement that does not directly relate to any historical or current fact. Actual results may differ materially from those indicated by such forward-looking statements as a result of various important factors, including: risks related to the ability of the Company to compete effectively; the ability to deliver and stimulate demand for new products and technological innovations on a timely basis; delays or reductions in information technology spending; the integration of acquisitions and the ability to market successfully acquired technologies and products; risks related to errors or defects in our offerings and third-party products upon which our offerings depend; risks related to the security of our offerings and other data security vulnerabilities; fluctuations in exchange rates; changes in and a dependence on key personnel; the effects of industry consolidation; uncertainty and adverse results in litigation and related settlements; the inability to adequately protect Company intellectual property and the potential for infringement or breach of license claims of or relating to third party intellectual property; the ability to meet financial and operational challenges encountered in our international operations; and ineffective management of, and control over, the Company’s growth and international operations, as well as other factors contained in our most recent Quarterly Report on Form 10-Q (copies of which may be accessed through the Securities and Exchange Commission’s website at http://www.sec.gov), including those found therein under the captions “Risk Factors” and “Management’s Discussion and Analysis of Financial Condition and Results of Operations”. In addition to these factors, actual future performance, outcomes, and results may differ materially because of more general factors including (without limitation) general industry and market conditions and growth rates, economic and political conditions, governmental and public policy changes and the impact of natural disasters such as earthquakes and floods. The forward-looking statements included in this press release represent the Company’s views as of the date of this press release and these views could change. However, while the Company may elect to update these forward-looking statements at some point in the future, the Company specifically disclaims any obligation to do so. These forward-looking statements should not be relied upon as representing the Company’s views as of any date subsequent to the date of this press release.

Source

Revisiting the Unix philosophy in 2018

In 1984, Rob Pike and Brian W. Kernighan published an article called “Program Design in the Unix Environment” in the AT&T Bell Laboratories Technical Journal, in which they made the case for the Unix philosophy, using the example of BSD’s cat -v implementation. In a nutshell, that philosophy is: Build small, focused programs—in whatever language—that do only one thing but do that thing well, communicate via stdin/stdout, and are connected through pipes.

Sound familiar?

Yeah, I thought so. That’s pretty much the definition of microservices offered by James Lewis and Martin Fowler:

In short, the microservice architectural style is an approach to developing a single application as a suite of small services, each running in its own process and communicating with lightweight mechanisms, often an HTTP resource API.

While one *nix program or one microservice may be very limited or not even very interesting on its own, it’s the combination of such independently working units that reveals their true benefit and, therefore, their power.

*nix vs. microservices

The following list compares programs (such as cat or lsof) in a *nix environment with programs in a microservices environment.

  • Unit of execution: a program using stdin/stdout in *nix; a service with an HTTP or gRPC API in microservices.
  • Data flow: pipes in *nix; no standardized equivalent in microservices.
  • Configuration & parameterization: command-line arguments, environment variables, and config files in *nix; JSON/YAML documents in microservices.
  • Discovery: the package manager, man, and make in *nix; DNS, environment variables, and OpenAPI in microservices.

Let’s explore each line in slightly greater detail.

Unit of execution

The unit of execution in *nix (such as Linux) is an executable file (binary or interpreted script) that, ideally, reads input from stdin and writes output to stdout. A microservices setup deals with a service that exposes one or more communication interfaces, such as HTTP or gRPC APIs. In both cases, you’ll find stateless examples (essentially purely functional behavior) and stateful examples, where, in addition to the input, some internal (persisted) state decides what happens.

Data flow

Traditionally, *nix programs communicate via pipes. In other words, thanks to Doug McIlroy, you don’t need to create temporary files to pass around, and you can process virtually endless streams of data between processes. To my knowledge, there is nothing comparable to a pipe standardized in microservices, besides my little Apache Kafka-based experiment from 2017.
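
As a minimal, self-contained illustration of that model (standard shell tools only, nothing specific to this article): three small programs, each doing one job, composed into a single data flow with no temporary files.

$ # produce lines, sort them, count duplicates, rank by frequency
$ printf 'b\na\nb\nc\nb\n' | sort | uniq -c | sort -rn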

Configuration and parameterization

How do you configure a program or service—either on a permanent or a by-call basis? Well, with *nix programs you essentially have three options: command-line arguments, environment variables, or full-blown config files. In microservices, you typically deal with YAML (or even worse, JSON) documents, defining the layout and configuration of a single microservice as well as dependencies and communication, storage, and runtime settings. Examples include Kubernetes resource definitions, Nomad job specifications, or Docker Compose files. These may or may not be parameterized; that is, either you have some templating language, such as Helm in Kubernetes, or you find yourself doing an awful lot of sed -i commands.
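
For the low-tech end of that spectrum, here is a hedged sketch of the sed -i approach mentioned above; the file name deployment.yaml and the IMAGE_TAG placeholder are hypothetical stand-ins for whatever your manifests actually use.

$ # stamp a concrete version into a templated manifest before deploying it
$ sed -i 's/IMAGE_TAG/v1.2.3/g' deployment.yaml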

Discovery

How do you know what programs or services are available and how they are supposed to be used? Well, in *nix, you typically have a package manager as well as good old man; between them, they should be able to answer all the questions you might have. In a microservices setup, there’s a bit more automation in finding a service. In addition to bespoke approaches like Airbnb’s SmartStack or Netflix’s Eureka, there usually are environment variable-based or DNS-based approaches that allow you to discover services dynamically. Equally important, OpenAPI provides a de-facto standard for HTTP API documentation and design, and gRPC does the same for more tightly coupled high-performance cases. Last but not least, take developer experience (DX) into account, starting with writing good Makefiles and ending with writing your docs with (or in?) style.

Pros and cons

Both *nix and microservices offer a number of challenges and opportunities.

Composability

It’s hard to design something that has a clear, sharp focus and can also play well with others. It’s even harder to get it right across different versions and to introduce respective error case handling capabilities. In microservices, this could mean retry logic and timeouts—maybe it’s a better option to outsource these features into a service mesh? It’s hard, but if you get it right, its reusability can be enormous.

Observability

In a monolith (in 2018) or a big program that tries to do it all (in 1984), it’s rather straightforward to find the culprit when things go south. But in a pipeline like yes | tr \n x | head -c 450m | grep n, or in a request path in a microservices setup that involves, say, 20 services, how do you even start to figure out which one is behaving badly? Luckily we have standards, notably OpenCensus and OpenTracing. Observability still might be the biggest single blocker if you are looking to move to microservices.

Global state

While it may not be such a big issue for *nix programs, in microservices, global state remains something of a discussion. Namely, how to make sure the local (persistent) state is managed effectively and how to make the global state consistent with as little effort as possible.

Wrapping up

In the end, the question remains: Are you using the right tool for a given task? That is, in the same way a specialized *nix program implementing a range of functions might be the better choice for certain use cases or phases, it might be that a monolith is the best option for your organization or workload. Regardless, I hope this article helps you see the many, strong parallels between the Unix philosophy and microservices—maybe we can learn something from the former to benefit the latter.

Source

Linux Apps For MediaTek Chromebooks A Little Closer

November 7, 2018

If you are the proud owner of a MediaTek-powered Chromebook such as the Acer Chromebook R13 or Lenovo Flex 11, some new features are headed your way.

Spotted in the Canary channel in mid-October, the Crostini Project is now live in the Developer channel for Chromebooks with the ARM-based MediaTek processor. This brings native Linux app functionality to Chromebooks with the MT8173C chipset, and although the number of devices is small, MediaTek Chromebooks are relatively inexpensive and versatile machines.

Here’s the list of Chromebooks with the MediaTek processor.

  • Lenovo 300e Chromebook
  • Lenovo N23 Yoga Chromebook
  • Acer Chromebook R13
  • Poin2 Chromebook 11C
  • Lenovo Chromebook C330
  • Poin2 Chromebook 14
  • Lenovo Chromebook S330

I have the Lenovo 300e at my desk, and it looks to be handling Linux apps like a champ thus far. The low-powered ARM processor isn’t going to draw developers in need of serious horsepower, but for the average user, these devices are great. On top of that, you can pick up most of the Chromebooks on this list for $300 or less. As a second device to pack around when you’re out of the office or relaxing on the couch, they’re tough to beat.

If you’re interested in checking out Linux apps on your MediaTek Chromebook, head to the settings menu and click About Chrome OS>Detailed build information>Change Channel. Keep in mind, the Developer channel can frequently be unstable and moving back to Beta or Stable will powerwash your device and delete locally saved data. Make sure you back up anything you don’t wish to lose. Once you’re there, head back to settings and you should see a Linux apps menu. Turn it on and wait for the terminal to install.

If you’re new to Linux apps, you can check out how to install the Gnome Software center here and start exploring new apps for your Chromebook.


Source

Download Mozilla Thunderbird Linux 60.3.0

Sending and receiving emails is like breathing these days, and you need a reliable and extremely stable application to do it right. Mozilla Thunderbird is one of those rare applications that provides users with a feature-rich, easy-to-use and extendable email client. Besides being an email client, the software is also a very good RSS news reader, as well as a newsgroup and chat client. It is supported and installed by default in many Linux operating systems.

Features at a glance

Among some of its major highlights, we can mention adaptive junk mail controls, saved search folders, global inbox support, message grouping, privacy protection, and comprehensive mail migration from other email clients.

Mozilla Thunderbird is designed to be very comprehensive. It helps users communicate better in an office setting, allowing them to send and receive emails, chat with their colleagues, and stay updated with the latest news.

Few know that the application provides built-in web browsing functionality, using a tabbed user interface and based on its bigger brother, the powerful Mozilla Firefox web browser. Another interesting feature is the ability to add extensions, popularly known as add-ons, which extend the default functionality of the application.

Supported operating systems

Because it is developed by Mozilla, the software supports multiple operating systems, including Linux, Microsoft Windows and Mac OS X, as well as the 64-bit and 32-bit hardware platforms.

Many popular Linux distributions use Mozilla Thunderbird as the default email client application, integrated into a wide range of open source desktop environments including GNOME, Xfce, LXDE, Openbox, Enlightenment, and KDE.

Bottom line

Using the Mozilla applications in a Linux environment is one of the best choices you can make. They are without a doubt among the most popular open source email, news reader, newsgroup, chat and web browsing apps.

Source

GraphQL Gets Its Own Foundation

Addressing the rapidly growing user base around GraphQL, The Linux Foundation has launched the GraphQL Foundation to build a vendor-neutral community around the query language for APIs (application programming interfaces).

“Through the formation of the GraphQL Foundation, I hope to see GraphQL become industry standard by encouraging contributions from a broader group and creating a shared investment in vendor-neutral events, documentation, tools, and support,” said Lee Byron, co-creator of GraphQL, in a statement.

“GraphQL has redefined how developers work with APIs and client-server interactions,” said Chris Aniszczyk, Linux Foundation vice president of developer relations…

Read more at The New Stack

Source

Install Docker on Raspberry Pi

Docker is a containerization system for Linux. It is used to run lightweight Linux containers on top of another Linux host operating system (a.k.a. the Docker host). If you’re trying to learn Docker on a real computer, then a Raspberry Pi is a very cost-effective solution. Because Docker containers are lightweight, you can easily fit 5-10 or more of them on a Raspberry Pi host. I recommend you buy a Raspberry Pi 3 Model B or Raspberry Pi 3 Model B+ if you want to set up Docker on it, as these models of Raspberry Pi have 1GB of memory (RAM). The more memory you have, the better. Sadly, though, no Raspberry Pi released so far has more than 1GB of memory.

In this article, I will show you how to install Docker on Raspberry Pi 3 Model B. I will be using Ubuntu Core operating system on my Raspberry Pi 3 Model B for the demonstration.

You need:

  • A Raspberry Pi 3 Model B or Raspberry Pi 3 Model B+ single-board computer.
  • A microSD card with at least 16GB of storage for installing Ubuntu Core.
  • An Ethernet cable for the internet connection. You can also use the built-in Wi-Fi, but I prefer a wired connection as I think it’s more reliable.
  • An HDMI cable.
  • A monitor with an HDMI port.
  • A USB keyboard for configuring Ubuntu Core for the first time.
  • A power adapter for the Raspberry Pi.

Install Ubuntu Core on Raspberry Pi 3:

I showed you how to install and configure Ubuntu Core on Raspberry Pi 2 and Raspberry Pi 3 in another Raspberry Pi article I wrote on LinuxHint. You can check it at (Link to the Install Ubuntu on Raspberry Pi article)

Powering on Raspberry Pi 3:

Once you have everything set up, connect all the required devices and connectors to your Raspberry Pi and turn it on.

Connecting to Raspberry Pi 3 via SSH:

Once you have Ubuntu Core OS configured, you should be able to connect to your Raspberry Pi 3 via SSH. The required information to connect to your Raspberry Pi via SSH should be displayed on the Monitor connected to your Raspberry Pi as you can see in the marked section of the screenshot below.

Now, from any computer whose SSH key you have added to your Ubuntu One account, run the following command to connect to the Raspberry Pi via SSH:

$ ssh dev.shovon8@192.168.2.15

NOTE: Replace the username and the IP address of the command with yours.

You may see a host key error while connecting to your Raspberry Pi via SSH. In that case, just run the following command to remove the stale entry from your known_hosts file:

$ ssh-keygen -f ~/.ssh/known_hosts -R 192.168.2.15

Now, you should be able to connect to your Raspberry Pi via SSH again. If it’s the first time you’re connecting to your Raspberry Pi via SSH, then you should see the following message. Just type in yes and then press <Enter>.

You should be connected.

Installing Docker on Raspberry Pi 3:

On Ubuntu Core, you can only install snap packages. Luckily, there is a Docker snap package in the official snap repository, so you won’t have any trouble installing Docker on Raspberry Pi 3. To install Docker on Raspberry Pi 3, run the following command:

$ sudo snap install docker

As you can see, Docker is being installed. It will take a while to complete.

At this point Docker is installed. As you can see, the version of Docker is 18.06.1. It is Docker Community Edition.

Now, run the following command to connect Docker to the system:

$ sudo snap connect docker:home

Using Docker on Raspberry Pi 3:

In this section, I will show you how to run Docker containers on Raspberry Pi 3. Let’s get started. You can search for Docker images with the following command:

$ sudo docker search KEYWORD

For example, to search for Ubuntu docker images, run the following command:

$ sudo docker search ubuntu

As you can see, the search result is displayed. You can download and use any Docker image from here. The first Docker image in the search result is ubuntu. Let’s download and install it.

To download (in Docker terms, pull) the ubuntu image, run the following command:

$ sudo docker pull ubuntu

As you can see, the Docker ubuntu image is being pulled.

The Docker ubuntu image is pulled.

You can list all the Docker images that you’ve pulled with the following command:

$ sudo docker images

Now, you can create a Docker container using the ubuntu image with the following command:

$ sudo docker run -it ubuntu

As you can see, a Docker container is created and you’re logged into the shell of the new container.

Now, you can run any command you want here as you can see in the screenshot below.

To exit out of the shell of the container, run the following command:

$ exit

You can list all the containers you’ve created, running or stopped, with the following command:

$ sudo docker ps -a

As you can see, the container I’ve created earlier has the Container ID 0f097e568547. The container is not running anymore.

You can start the container 0f097e568547 again, with the following command:

$ sudo docker start 0f097e568547

As you can see, the container 0f097e568547 is running again.

To log in to the shell of the container, run the following command:

$ sudo docker attach 0f097e568547

As you can see, I am logged into the shell of the container 0f097e568547 again.

You can check how much memory, CPU, disk I/O, network I/O, etc. the running containers are using with the following command:

$ sudo docker stats

As you can see, I have two containers running, and their ID, name, CPU usage, memory usage, network usage, disk usage, PIDs, etc. are displayed in a nicely formatted way.

I am running Docker and 2 containers on my Raspberry Pi 3 and I still have about 786 MB of memory available/free. Docker on Raspberry Pi 3 is amazing.

So, that’s how you install and use Docker on Raspberry Pi 3. Thanks for reading this article.

Source

Virtualizing the Clock

Dmitry Safonov wanted to implement a namespace for time information. The
twisted and bizarre thing about virtual machines is that they get more
virtual all the time. There’s always some new element of the host system
that can be given its own namespace and enter the realm of the virtual
machine. But as that process rolls forward, virtual systems have to share
aspects of themselves with other virtual systems and the host system
itself—for example, the date and time.

Dmitry’s idea is that users should be able to set the day and time on their
virtual systems, without worrying about other systems being given the same
day and time. This is actually useful, beyond the desire to live in the past
or future. Being able to set the time in a container is apparently one of
the crucial elements of being able to migrate containers from one physical
host to another, as Dmitry pointed out in his post.

As he put it:

The kernel provides access to several clocks:
CLOCK_REALTIME,
CLOCK_MONOTONIC, CLOCK_BOOTTIME. Last two clocks are monotonous, but the
start points for them are not defined and are different for each running
system. When a container is migrated from one node to another, all clocks
have to be restored into consistent states; in other words, they have to
continue running from the same points where they have been dumped.
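
To see the distinction Dmitry describes on an ordinary host, here is a small, hedged illustration using standard tools (no namespaces involved): wall-clock time has a globally agreed reference point, while the boot-relative clock starts from whenever this particular machine happened to boot.

$ # CLOCK_REALTIME: seconds since the Unix epoch, the same reference everywhere
$ date +%s
$ # roughly CLOCK_BOOTTIME: seconds since this machine booted, different on every system
$ cut -d' ' -f1 /proc/uptime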

Dmitry’s patch wasn’t feature-complete. There were various questions still
to consider. For example, how should a virtual machine interpret the time
changing on the host hardware? Should the virtual time change by the same
offset? Or continue unchanged? Should file creation and modification times
reflect the virtual machine’s time or the host machine’s time?

Eric W. Biederman supported this project overall and liked the code in the
patch, but he did feel that the patch could do more. He thought it was a little
too lightweight. He wanted users to be able to set up new time namespaces at
the drop of a hat, so they could test things like leap seconds before
they actually occurred and see how their own projects’ code worked under
those various conditions.

To do that, he felt there should be a whole “struct timekeeper” data
structure for each namespace. Then pointers to those structures could be
passed around, and the times of virtual machines would be just as
manipulable and useful as times on the host system.

In terms of timestamps for filesystems, however, Eric felt that it might
be best to limit the feature set a little bit. If users could create files
with timestamps in the past, it could introduce some nasty security
problems. He felt it would be sufficient simply to “do what distributed
filesystems do when dealing with hosts with different clocks”.

The two went back and forth on the technical implementation details. At one
point, Eric remarked, in defense of his preference:

My experience with
namespaces is that if we don’t get the advanced features working there is
little to no interest from the core developers of the code, and the
namespaces don’t solve additional problems. Which makes the namespace a
hard sell. Especially when it does not solve problems the developers of the
subsystem have.

At one point, Thomas Gleixner came into the conversation to remind Eric that
the time code needed to stay fast. Virtualization was good, he said, but
“timekeeping_update() is already heavy and walking through a gazillion of
namespaces will just make it horrible.”

He reminded Eric and Dmitry that:

It’s not only timekeeping, i.e. reading time, this is also affecting all
timers which are armed from a namespace.

That gets really ugly because when you do settimeofday() or adjtimex() for a
particular namespace, then you have to search for all armed timers of that
namespace and adjust them.

The original posix timer code had the same issue because it mapped the clock
realtime timers to the timer wheel so any setting of the clock caused a full
walk of all armed timers, disarming, adjusting and requeing them. That’s
horrible not only performance wise, it’s also a locking nightmare of all
sorts.

Add time skew via NTP/PTP into the picture and you might have to adjust
timers as well, because you need to guarantee that they are not expiring
early.

So, there clearly are many nuances to consider. The discussion ended there,
but this is a good example of the trouble with extending Linux to create
virtual machines. It’s almost never the case that a whole feature can be
fully virtualized and isolated from the host system. Security concerns,
speed concerns, and even code complexity and maintainability come into the
picture. Even really elegant solutions can be shot down by, for example, the
possibility of hostile users creating files with unnaturally old timestamps.

Note: if you’re mentioned above and want to post a response above the comment section, send a message with your response text to ljeditor@linuxjournal.com.

Source

Gumstix enhances Geppetto board design service with new Board Builder UI

Nov 7, 2018

Gumstix has expanded its Linux-oriented Geppetto online embedded board development platform with a free “Board Builder” service that offers a checklist interface for selecting modules, ports, and more.

Gumstix has added a free Board Builder service to its Geppetto Design-to-Order (D2O) custom board design service. The Board Builder improvements make the drag-and-drop Geppetto interface even easier to use, enabling customization of ports, layout and other features.

With Board Builder, you can select items from a checklist, including computer-on-modules, memory, network, sensors, audio, USB, and other features. You can then select a custom size, and you’re presented with 2D and 3D views of board diagrams that you can further manipulate.

Geppetto Board Builder design process for a Raspberry Pi CM3 based IoT design

Board Builder will prompt you with suggestions for power and other features. These tips are based on your existing design, as well as Gumstix’s deep knowledge base about embedded Linux boards.

We quickly whipped up a little Raspberry Pi Compute Module 3 based carrier board (above), which admittedly needs a lot of work. Even if you’re not a serious board developer, it’s a painless and rather addictive way to do hardware prototyping — sort of a Candy Crush for wannabe hardware geeks.

Serious developers, meanwhile, can go on to take full advantage of the Geppetto service. Once the board is created, “free Automated Board Support Package (AutoBSP), technical documentation (AutoDoc) and 3D previews can be instantly downloaded to anyone who designs a hardware device in the Geppetto online D2O,” says Gumstix.

You can then use Geppetto’s fast small-run manufacturing order service to quickly manufacture small runs of the board within 15 days. The initial $1,999 manufacturing price is reduced for higher quantity jobs and repeat board spins.

Since launching its free, web-based Geppetto service several years ago, Gumstix has designed most of its own boards with it. Anyone can use Geppetto to modify Gumstix’s carrier board designs or start from scratch and build a custom board. The Geppetto service supports a growing number of Linux- and Android-driven modules, ranging from the company’s own DuoVero and Overo modules to the Nvidia Jetson TX2 that drives the recent Gumstix Aerocore 2 for Nvidia Jetson.

Further information

The Board Builder interface is available now on the free Geppetto D2O service. More information may be found on the Gumstix Geppetto Board Builder page. You can start right away with Board Builder here.

Source

Overcoming Your Terror of Arch Linux

A recent episode of a Linux news podcast I keep up with featured an interview with a journalist who had written a piece for a non-Linux audience about giving Linux a try. It was surprisingly widely read. The writer’s experience with some of the more popular desktop distributions had been overwhelmingly positive, and he said as much in his piece and during the subsequent podcast interview.

However, when the show’s host asked whether he had tried Arch Linux — partly to gauge the depth of his experimentation and partly as a joke — the journalist immediately and unequivocally dismissed the idea, as if it were obviously preposterous.

Although that reaction came from an enthusiastic Linux novice, it is one that is not uncommon even among seasoned Linux users. Hearing it resurface in the podcast got me contemplating why that is — as I am someone who is comfortable with and deeply respects Arch.

What Are You Afraid Of?

1. “It’s hard to install.”

The most common issue skeptics raise, by far, is that the installation process is challenging and very much hands-on. Compared to modern day installers and wizards, this is undoubtedly true. In contrast to most mainstream Linux distributions (and certainly to proprietary commercial operating systems), installing Arch is a completely command line-driven process.

Parts of the operating system that users are accustomed to getting prefabricated, like the complete graphical user interface that makes up the desktop, have to be assembled from scratch out of the likes of the X Window server, the desired desktop environment, and the display manager (i.e. the startup login screen).

Linux did not always have installers, though, and Arch’s installation process is much closer to how it was in the days of yore. Installers are a huge achievement, and a solution to one of the biggest obstacles to getting non-expert general users to explore and join the Linux community, but they are a relative luxury in the history of Linux.

Also, installers can get it wrong, as I found out when trying to make some modest adjustments to the default Ubuntu installation settings. While Arch let me set up a custom system with a sequence of commands, Ubuntu’s installer nominally offered a menu for selecting the same configuration but simply could not execute it properly under the hood once the installer was set in motion.

2. “The rolling releases are unstable.”

In my experience, Arch’s implementation of the rolling release model has been overwhelmingly stable, so claims to the contrary are largely overblown as far as I am concerned.

When users have stability problems, it’s generally because they’re trying something that either is highly complicated or has little to no documentation. These precarious use cases are not unique to Arch. Combining too many programs or straying into uncharted territory is about as likely to cause stability issues on Arch as on any other distribution — or any operating system, for that matter.

Just like any software developers, the Arch developers want people to like and have a good experience using their distro, so they take care to get it right. In a way, Arch’s modular approach, with each package optimized and sent out as soon as it’s ready, actually makes the whole operation run more smoothly.

Each sub-team at Arch receives a package from upstream (wherever that might be), makes the minimum number of changes to integrate it with Arch’s conventions, and then pushes it out to the whole Arch user base.

Because every sub-team is doing this and knows every other sub-team is doing the same, they can be sure of exactly what software environment they will be working with and integrating into: the most recent one.

The only times I’ve ever had an update break my system, the Arch mailing list warned me it would, and the Arch forums laid out exactly how to fix it. In other words, by checking the things that responsible users should check, you should be fine.

3. “I don’t want to have to roll back packages.”

Package downgrades are related to, and probably the more feared manifestation of, the above. Again, if you’re not doing anything crazy with your system and the software on it, and you read from Arch’s ample documentation, you probably won’t have to.

As with the risk of instability that comes from complicated setups on any distribution, package downgrades are potentially necessary on distributions besides Arch as well. In fact, whereas most distributions assume you never will have to perform a downgrade and thus don’t design their package management systems to easily (or at least intuitively) do it, Arch makes it easy and thoroughly outlines the steps.
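
For reference, a downgrade on Arch is usually just a matter of reinstalling an older package file from pacman’s cache. Here is a minimal sketch based on that documented procedure; the package name and version below are hypothetical, and your cache will contain whatever versions you have actually installed.

$ # pacman keeps previously installed package files in its cache
$ ls /var/cache/pacman/pkg/ | grep some-package
$ # reinstall the older cached version to roll back
$ sudo pacman -U /var/cache/pacman/pkg/some-package-1.2.3-1-x86_64.pkg.tar.xz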

4. “It doesn’t have as many packages,” and “I heard the AUR is scary.”

The criticism of Arch’s relatively smaller base of total available packages usually goes hand-in-hand with that of the unofficial repository being a sort of Wild West. As far as the official repositories are concerned, the number is somewhat smaller than in Debian- or Red Hat-based distributions. Fortunately, the Arch User Repository (AUR) usually contains whatever the official repos lack that most any user possibly could hope for.

This is where most naysayers chime in to note that malicious packages have been found in the AUR. This occasionally has been the case, but what most of us don’t always think about is that this also can be said of the Android Play Store, the Apple App Store, and just about every other software manager that you can think of.

Just as with every app store or software center, if users are careful to give a bit of scrutiny to the software they are considering — in AUR’s case by scanning the (very short) files associated with AUR packages and reading forum pages on the more questionable ones — they will generally be fine.

Others may counter that it’s not the potential hazards of the AUR that are at issue, but that more so than with, say, Debian-based distributions, there is software that falls outside of both the official Arch repos and the AUR. To start with, this is less the case than it once was, given the meteoric rise in the popularity of the Arch-based Manjaro distribution.

Beyond that, most software that isn’t in any of Arch’s repos can be compiled manually. Just as manual installations like Arch’s were the norm for Linux once upon a time, the same holds true for compilations being the default mode of software installation.

Arch’s Tricks Come With Some Major Treats

With those points in mind, hopefully Arch doesn’t seem so daunting. If that’s not enough to convince you to give it a whirl, here are a few points in Arch’s favor that are worth considering.

To start off, manual installation not only gives you granular control over your system, but also teaches you where everything is, because you put it there. Things like the root directory structure, the initial ram filesystem and the bootloader won’t be a mystery that computer use requires you to blindly accept, because during installation you directly installed and generated all these (and more) and arranged them in their proper places.
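
To give a flavor of what “you put it there” means in practice, here is a heavily abridged sketch of the middle of an Arch installation, assuming the disk is already partitioned and mounted at /mnt; the full, authoritative sequence is in the Arch installation guide.

$ # install the base system into the mounted target
$ pacstrap /mnt base
$ # record the mounted filesystems so the new system can find them at boot
$ genfstab -U /mnt >> /mnt/etc/fstab
$ # switch into the new system to configure the timezone, locale, users and bootloader
$ arch-chroot /mnt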

Manual installation also cuts way down on bloat, since you install everything one package at a time — no more accepting whatever the installer dumps onto your fresh system. This is an especially nice advantage considering that, as many Linux distributions become more geared toward mainstream audiences, their programs become more feature-rich, and therefore bulkier.

Depending on how you install it, Arch running the heaviest desktop environment still can be leaner than Ubuntu running the lightest one, and that kind of efficiency is never a bad thing.

Rolling releases are actually one of Arch’s biggest strengths. Arch’s release model gives you the newest features right away, long before distros with traditional synchronized, batch update models.

Most importantly, with Arch, security patches drop immediately. Every time a major Linux vulnerability comes out — there usually isn’t much malware that exploits these vulnerabilities, but there are a lot of vulnerabilities to potentially exploit — Arch is always the first to get a patch out and into the hands of its users, and usually within a day of the vulnerability being announced.

You’ll probably never have to roll back packages, but if you do, you will be armed with the knowledge to rescue your system from some of the most serious problems.

If you can live-boot the Arch installation image (which doubles as a repair image) from a USB drive, mount your non-booting installed system from the live system, chroot into it (i.e. switch from the root of the live system to treating your non-booting system as the temporary root), and install a cached previous version of the problem packages, you know how to solve a good proportion of the most serious problems any system might have.
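
Sketched out, and assuming for the sake of illustration that your root partition is /dev/sda2 and the broken package is called some-package (both hypothetical), that rescue looks something like this:

$ # from the live-booted Arch ISO: mount the installed system and switch into it
$ mount /dev/sda2 /mnt
$ arch-chroot /mnt
$ # then reinstall a cached, known-good version of the offending package
$ pacman -U /var/cache/pacman/pkg/some-package-1.2.2-1-x86_64.pkg.tar.xz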

That sounds like a lot, but that’s also why Arch Linux has the best documentation of any Linux distribution, period.

Finally, plumbing the AUR for packages will teach you how to review software for security, and compiling source code will give you an appreciation for how software works. Getting in the habit of spotting sketchy behavior in package build and make files will serve you well as a computer user overall.

It also will prod you to reevaluate your relationship with your software. If you make a practice of seriously weighing every installation, you might start being pickier with what you do choose to install.

Once you’ve compiled a package or two, you will start to realize just how unbounded you are in how to use your system. App stores have gotten us used to thinking of computing devices in terms of what its developers will let us do with them, not in terms of what we want to do with them, or what it’s possible to do with them.

It might sound cheesy, but compiling a program really makes you reshape the way you see computers.

Safely Locked Away in a Virtual World of Its Own

If you’re still apprehensive about Arch but don’t want to pass on it, you can install it as a virtual machine to tinker with the installation configurations before you commit to running it on bare hardware.

Software like VirtualBox allows you to allocate a chunk of your hard drive and blocks of memory to running a little computer inside your computer. Since Linux systems in general, and Arch in particular, don’t demand much of your hardware resources, you don’t have to allocate much space to it.

To create a sandbox for constructing your Arch Linux, tell VirtualBox you want a new virtual system and set the following settings (with those not specified here left to default): 2 GB of RAM (though you can get away with 1 GB) and 8 GB of storage.

You will now have a blank system to choose in VirtualBox. All you have to do now is tell it where to find the Arch installation image — just enter the system-specific settings, go to storage, and set the Arch ISO as storage.
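
If you prefer the command line even for this part, VirtualBox’s VBoxManage tool can build the same virtual machine; this is a hedged sketch, and the VM name, file names and sizes below are arbitrary choices, not anything the article prescribes.

$ # create and register a 64-bit Arch Linux VM with 2GB of RAM
$ VBoxManage createvm --name ArchTest --ostype ArchLinux_64 --register
$ VBoxManage modifyvm ArchTest --memory 2048
$ # create an 8GB virtual disk and attach it alongside the Arch installer ISO
$ VBoxManage createmedium disk --filename ArchTest.vdi --size 8192
$ VBoxManage storagectl ArchTest --name SATA --add sata
$ VBoxManage storageattach ArchTest --storagectl SATA --port 0 --device 0 --type hdd --medium ArchTest.vdi
$ VBoxManage storageattach ArchTest --storagectl SATA --port 1 --device 0 --type dvddrive --medium archlinux.iso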

When you boot the virtual machine, it will live-boot this Arch image, at which point your journey begins. Once your installation is the way you want it, go back into the virtual system’s settings, remove the Arch installer ISO, reboot, and see if it comes to life.

There’s a distinct rush you feel when you get your own Arch system to boot for the first time, so revel in it.

Source
