vkQuake2, the project adding Vulkan support to Quake 2, now supports Linux

At the start of this year, I gave a little mention to vkQuake2, a project which has updated the classic Quake 2 with various improvements including Vulkan support.

Other improvements in vkQuake2 include support for higher-resolution displays, DPI awareness, a HUD that scales with resolution, and so on.

Initially, the project didn’t support Linux, but that has now changed. Over the last few days they’ve committed a bunch of new code that fully enables 64-bit Linux support with Vulkan.

Screenshot of it running on Ubuntu 18.10.

Seems to work quite well in my testing, although it has a few rough edges. During an ALT+TAB, it decided to lock up both of my screens, forcing me to drop to a TTY and manually kill it with fire. So just be warned: that might happen to you.

To build it and try it out, you will need the Vulkan SDK installed along with various other dependencies you can find on the GitHub page.
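As a rough sketch, the build boils down to something like this (the exact make target and dependency list live in the repo's README, so treat the details below as assumptions to verify there):

```shell
# clone and build vkQuake2 on 64-bit Linux
# assumes the Vulkan SDK and the build dependencies from the README are installed
git clone https://github.com/kondrak/vkQuake2.git
cd vkQuake2/linux
make release   # target name per the README; may differ between revisions
```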

For the full experience, you do need a copy of the data files from Quake 2 which you can find easily on GOG. Otherwise, you can test it using the demo content included in the releases on GitHub. Copy the demo content over from the baseq2 directory.

Source

Download Bitnami ProcessWire Module Linux 3.0.123-0

Free software that allows you to deploy ProcessWire on top of a Bitnami LAMP stack

Bitnami ProcessWire Module is a multi-platform and free software project that allows users to deploy the ProcessWire application on top of the Bitnami LAMP, MAMP and WAMP stacks, without having to deal with its runtime dependencies.

What is ProcessWire?

ProcessWire is a free, open source, web-based and platform-independent application that has been designed from the outset to act as a CMS (Content Management System). Highlights include a modular and flexible plugin architecture, support for thousands of pages, modern drag & drop image storage, as well as an intuitive and easy-to-use WYSIWYG editor.

Installing Bitnami ProcessWire Module

Bitnami’s stacks and modules are distributed as native installers built using BitRock’s cross-platform installer tool and designed to work flawlessly on all GNU/Linux distributions, as well as on the Mac OS X and Microsoft Windows operating systems.

To install the ProcessWire application on top of your Bitnami LAMP (Linux, Apache, MySQL and PHP) stack, you will have to download the package that corresponds to your computer’s hardware architecture, 32-bit or 64-bit (recommended), run it and follow the on-screen instructions.

Host ProcessWire in the cloud or virtualize it

Besides installing ProcessWire on top of your LAMP server, you can host it in the cloud, thanks to Bitnami’s pre-built cloud images for the Amazon EC2 and Windows Azure cloud hosting services. Virtualizing ProcessWire is also possible, as Bitnami offers a virtual appliance based on the latest LTS release of Ubuntu Linux and designed for the Oracle VirtualBox and VMware ESX/ESXi virtualization software.

The Bitnami ProcessWire Stack and Docker container

The Bitnami ProcessWire Stack product has been designed as an all-in-one solution that greatly simplifies the installation and hosting of the ProcessWire application, as well as of its runtime dependencies, on real hardware. While Bitnami ProcessWire Stack is available for download on Softpedia, you can check the project’s homepage for a Docker container.

Source

How to Install Microsoft PowerShell 6.1.1 on Ubuntu 18.04 LTS

What is PowerShell?

Microsoft PowerShell is a shell framework used to execute commands, developed primarily to perform administrative tasks such as:

  • Automation of repetitive jobs
  • Configuration management

PowerShell is an open-source and cross-platform project; it can be installed on Windows, macOS, and Linux. It includes an interactive command-line shell and a scripting environment.

How has Ubuntu 18.04 made installing PowerShell easier?

Ubuntu 18.04 has made installing apps much easier via snap packages, which are self-contained application bundles that work across distributions. Microsoft has recently published a snap package for PowerShell, and this advancement allows Linux users and admins to install and run the latest version of PowerShell in the few steps explained in this article.

Prerequisites to install PowerShell in Ubuntu 18.04

The following minimum requirements must be met before installing PowerShell 6.1.1 in Ubuntu 18.04:

  • 2 GHz dual-core processor or better
  • 2 GB system memory
  • 25 GB of free hard drive space
  • Internet access
  • Ubuntu 18.04 LTS (long term support)

Steps to Install PowerShell 6.1.1 via Snap in Ubuntu 18.04 LTS

There are two ways to install PowerShell in Ubuntu: via the terminal or via the Ubuntu Software application.

via Terminal

Step 1: Open A Terminal Console

The easiest way to open a terminal is to press Ctrl+Alt+T.

Open Ubuntu Console

Step 2: Snap Command to Install PowerShell

Enter the snap command “snap install powershell --classic” in the terminal console to start the installation of PowerShell in Ubuntu.

The Authentication Required prompt on your screen is purely a security measure: by default, Ubuntu 18.04 requires the account initiating an installation to authenticate first.

To proceed, enter the credentials of the account you’re currently logged in with.

Authenticate as admin

Step 3: Successful Installation of PowerShell

As soon as the system authenticates the user, installation of PowerShell begins. (It usually takes 1-2 minutes.)

The installation status is displayed continuously in the terminal console.

At the end, the terminal reports that PowerShell 6.1.1 from ‘microsoft-powershell’ has been installed, as seen in the screenshot below.

Install PowerShell snap

Step 4: Launch PowerShell via Terminal

After a successful installation, launching PowerShell is a one-step process.

Enter the command “powershell” in the terminal console and it will take you to the PowerShell prompt in an instant.

powershell

You should now be at the PowerShell prompt, ready to experience the world of automation and scripting.
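Put together, the terminal route above boils down to two commands (note: depending on the snap version, the launcher command may be pwsh rather than powershell):

```shell
sudo snap install powershell --classic   # classic confinement is required for this snap
powershell                               # launches the PowerShell prompt

# once inside PowerShell, verify the installed version:
#   PS> $PSVersionTable
```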

Microsoft PowerShell on Ubuntu

via Ubuntu Software

Step 1: Open Ubuntu Software

Ubuntu ships with the Ubuntu Software desktop application, which lists all available software and updates.

  • Open the Ubuntu Software Manager from the Ubuntu desktop.

Step 2: Search for PowerShell in Ubuntu Software

  • In the list of all software, search for “powershell” using the search bar.
  • The search results should include the “powershell” package, as marked in the screenshot below.
  • Click on “powershell” and proceed to Step 3.

Step 3: Installing PowerShell via Ubuntu Software

  • You should now see the details of the “powershell” package and the Install button

(for reference, it’s marked in below image)

  • Click the Install button to start the installation.

(Installation via Ubuntu Software takes 1-2 minutes)

  • The installation status is shown on screen, and you will be notified once it completes.

Install PowerShell

Installing PowerShell

Step 4: Launch PowerShell via Ubuntu Software

After PowerShell 6.1.1 is installed via Ubuntu Software, you can launch the PowerShell terminal and explore the many features Microsoft PowerShell offers its Linux users.

  • Click the “Launch” button (for reference, marked in the image below) and it will take you to the PowerShell terminal.

Launch PowerShell

Test PowerShell Terminal via Commands

To test that PowerShell is working correctly, enter a few PowerShell commands, such as:

“$PSVersionTable”, which shows the installed PowerShell version (for reference, the output is shown in the screenshot below).

PowerShell gives its users extensive control over the system and its directories. After following the steps above, you should be all set to experience the exciting and productive world of automation and scripting with Microsoft PowerShell.

Source

Get started with Cypht, an open source email client

Integrate your email and news feeds into one view with Cypht, the fourth in our series on 19 open source tools that will make you more productive in 2019.

Email arriving at a mailbox

Cypht

We spend a lot of time dealing with email, and managing it effectively can have a huge impact on your productivity. Programs like Thunderbird, Kontact/KMail, and Evolution all seem to have one thing in common: they seek to duplicate the functionality of Microsoft Outlook, which hasn’t really changed in the last 10 years or so. Even the console standard-bearers like Mutt and Cone haven’t changed much in the last decade.

Cypht main screen

Cypht is a simple, lightweight, and modern webmail client that aggregates several accounts into a single view. Along with email accounts, it includes Atom/RSS feeds. It makes reading items from these different sources very simple by using an “Everything” screen that shows not just the mail from your inbox, but also the newest articles from your news feeds.

Cypht's 'Everything' screen

It displays mail using a simplified version of HTML messages, or you can set it to show a plain-text version. Since Cypht doesn’t load images from remote sources (to help maintain security), HTML rendering can be a little rough, but it does enough to get the job done. Most rich-text mail falls back to the plain-text view, which means lots of raw links and harder reading. I don’t fault Cypht, since this is really the email senders’ doing, but it does detract a little from the reading experience. Reading news feeds is about the same, but having them integrated with your email accounts makes it much easier to keep up with them (something I sometimes have issues with).

Reading a message in Cypht

Users can use a preconfigured mail server and add any additional servers they use. Cypht’s customization options include plain-text vs. HTML mail display, support for multiple profiles, and the ability to change the theme (or make your own). You have to remember to click the “Save” button on the left navigation bar, though, or your custom settings will be lost when the session ends. This does make it easy to experiment: if you need to reset things, simply log out without saving and the previous setup will be back when you log in again.

Settings screen with a dark theme

Installing Cypht locally is very easy. While it doesn’t ship as a container or similar technology, the setup instructions were clear, easy to follow, and didn’t require any changes on my part. On my laptop, it took about 10 minutes from starting the installation to logging in for the first time. A shared installation on a server uses the same steps, so it should take about the same.
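For reference, the local setup is roughly the following (a sketch based on the project's README at the time; the repo location, config file name, and script path are assumptions worth double-checking against the current docs):

```shell
# fetch Cypht and generate its runtime configuration
git clone https://github.com/jasonmunro/cypht.git
cd cypht
cp hm3.sample.ini hm3.ini        # edit DB, auth, and mail-server settings here
php ./scripts/config_gen.php     # builds the site/ directory the web server serves
```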

In the end, Cypht is a fantastic alternative to desktop and web-based email clients with a simple interface to help you handle your email quickly and efficiently.

Source

The new System Shock is looking quite impressive with the latest artwork

System Shock, the remake in the works at Nightdive Studios, continues along in development and it’s looking impressive.

In their latest Kickstarter update, they showed off what they say is the “final art” after they previously showed the game using “temporary art”. I have to admit, while this is only a small slice of what’s to come, from the footage it certainly seems like it will have a decent atmosphere to it.

Take a look:

I missed their last few updates, since this is one game I’m trying not to spoil for myself by watching all the bits and pieces come together.

They put out a few more updates since I last took a look, showing off more interesting parts of their final art like these:

I’m very interested in seeing the final game. Nightdive Studios have done some pretty good work reviving older games, and System Shock is clearly a labour of love for them. It’s using Unreal Engine, so I do hope they’re getting plenty of Linux testing done closer to release, since many developers have had issues with it.

There’s no date yet for the final release; we will keep you posted.

Source

Linus Torvalds Says Things Look Pretty Normal for Linux 5.0, Releases Second RC

Linux creator Linus Torvalds announced today the general availability for testing of the second RC (Release Candidate) of the upcoming major release of the Linux kernel, Linux 5.0.

According to Linus Torvalds, things are going in the right direction for Linux kernel 5.0 series, which should launch sometime at the end of February or early March 2019, and the second Release Candidate is here to add several perf tooling improvements, updated networking, SCSI, GPU, and block drivers, updated x86, ARM, RISC-V, and C-SKY architectures, as well as fixes to Btrfs and CIFS filesystems.

“So the merge window had somewhat unusual timing with the holidays, and I was afraid that would affect stragglers in rc2, but honestly, that doesn’t seem to have happened much. rc2 looks pretty normal. Were there some missing commits that missed the merge window? Yes. But no more than usual. Things look pretty normal,” said Linus Torvalds in a mailing list announcement.

Linux kernel 5.0 RC3 expected on January 17th

Of course, it’s a bit early to say that everything’s fairly normal for the Linux 5.0 kernel series as the development cycle was just kicked off a week ago, when Linus Torvalds announced the first Release Candidate, and it remains to be seen if it will be a normal cycle with seven RCs or a long one with eight RCs. Depending on that, Linux kernel 5.0 could arrive on February 24th or March 3rd.

Until then, we’re looking forward to the third Release Candidate of Linux kernel 5.0, which is expected to hit the streets at the end of the week on January 17th. Meanwhile, you can go ahead and give Linux 5.0 a try on your Linux-powered computer by downloading and compiling the second Release Candidate from kernel.org. Keep in mind though that this is a pre-release version, so don’t use it on production machines.
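Building the RC yourself is a sketch like the following (the URL follows kernel.org's usual snapshot layout and the build dependencies vary by distro, so treat both as assumptions):

```shell
# download, configure, and build the 5.0-rc2 kernel (not for production machines)
wget https://git.kernel.org/torvalds/t/linux-5.0-rc2.tar.gz
tar xf linux-5.0-rc2.tar.gz
cd linux-5.0-rc2
make olddefconfig                  # reuse the running kernel's config; new options get defaults
make -j"$(nproc)"                  # needs gcc, make, flex, bison, and the ssl/elf dev headers
sudo make modules_install install  # installs the kernel and updates boot entries on most distros
```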

Source

Nginx vs Apache: Which Serves You Best in 2019?

For two decades Apache held sway over the web server market, but its share is shrinking by the day. Not only has Nginx caught up with the oldest kid on the block, but it is currently the toast of many high-traffic websites. Apache users might disagree here. That is why one should not jump to conclusions about which web server is better. The truth is that both form the core of complete web stacks (LAMP and LEMP), and the final choice boils down to individual needs.

For instance, people running Drupal websites often call on Apache, whereas WordPress users often favor Nginx. Accordingly, our goal is to help you understand your own requirements better rather than to provide a one-size-fits-all recommendation. Having said that, the following comparison between the two gives an accurate picture.

1. Popularity

Up until 2012 more than 65% of websites were based on Apache, a popularity due in no small measure to its historical legacy. It was among the first software that pioneered the growth of the World Wide Web. However, times have changed. According to W3Techs.com, as of January 14, 2019, Apache (44.4%) is just slightly ahead of Nginx (40.9%) in terms of websites using their servers. Between them they dominate nearly 85% of the web server market.

Web Servers Market Share W3techs.com

When it comes to websites with high traffic, the following graph is interesting. Nginx is well ahead of Apache but trails behind Google Servers, which power websites like YouTube, Gmail and Drive.

Web Servers Market @ W3Techs 15-Jan-2019

At some point a large number of websites (including this site) migrated from Apache to Nginx. Clearly, the latter is seen as the newer, trendier web server. High-traffic websites that remain on Apache, e.g. Wikipedia and The New York Times, often sit behind a front-end HTTP proxy such as Varnish.

Score: The popularity gap between Apache and Nginx is closing very fast. But, as Apache is still ahead in absolute numbers, we will consider this round a tie.

2. Speed

The main characteristic of a good web server is that it should run fast and easily respond to connections and traffic from anywhere. To measure the server speeds, we compared two popular travel websites based on Apache (Expedia.com) and Nginx (Booking.com). Using an online tool called Bitcatcha, the comparisons were made for multiple servers and measured against Google’s benchmark of 200 ms. Booking.com based on Nginx was rated “exceptionally quick.” In contrast, Expedia.com based on Apache was rated “above average and could be improved.”

Having used both travel websites so many times, I can personally vouch that Expedia feels slightly slower in returning results to my query than Booking does.

Web server response time Booking.com (Nginx) vs. Expedia.com (Apache)

Here are comparisons between the two servers for a few other websites. Nginx does feel faster in all cases below except one.

Website server speeds tested at Bitcatcha

Score: Nginx wins the speed round.

3. Security

Both Nginx and Apache take security very seriously. There is no dearth of robust systems to deal with DDoS attacks, malware and phishing. Both periodically release security reports and advisories, which helps ensure that security is strengthened at every level.

Score: We will consider this round a tie.

4. Concurrency

There is a perception that Apache somehow does not measure up to Nginx’s sheer scale and capability. After all, Nginx was originally designed to address speed issues with FastCGI and SCGI handlers. However, from Apache 2.4 onwards (now the default version), there has been a drastic improvement in the number of simultaneous connections it can handle. How far this improvement goes is worth finding out.

Based on stress tests at Loadimpact.com, we again compared Booking.com (Nginx) with Expedia.com (Apache). For 25 virtual users, the Nginx website was able to record 200 requests per second, which is 2.5 times higher than Apache’s 80 requests per second. Clearly, if you have a dedicated high-traffic website, Nginx is a safer bet.

Scalability testing Apache versus Nginx at Loadimpact.com

Score: Nginx wins the concurrency round.

5. Flexibility

A web server should be flexible enough to allow customizations. Apache does this well through .htaccess files, which Nginx does not support. They allow administrator duties to be decentralized: third-party and second-level admins can be kept away from the main server configuration. Moreover, Apache supports more than 60 modules, which makes it highly extensible. There is a reason Apache is more popular with shared hosting providers.
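For illustration, here is the kind of per-directory rule a site owner on shared hosting could drop into an .htaccess file without touching the main server config (a hypothetical example; Nginx offers no per-directory equivalent):

```apacheconf
# force HTTPS for this directory only (requires mod_rewrite to be enabled)
RewriteEngine On
RewriteCond %{HTTPS} off
RewriteRule ^ https://%{HTTP_HOST}%{REQUEST_URI} [L,R=301]

# block direct access to config and log files (Apache 2.4 syntax)
<FilesMatch "\.(ini|log)$">
    Require all denied
</FilesMatch>
```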

Flexible features of Apache: Modules plus htaccess example

Score: Apache wins this round.

Other Parameters

In the past Nginx did not support Windows very well, unlike Apache; that is no longer the case. Likewise, Apache was once considered weak for load balancing and reverse proxying, which has since changed.

Final Result

Nginx narrowly wins this contest 2-1. Having said this, an objective comparison between Nginx and Apache on technical parameters does not give the complete picture. In the end, our verdict is that both web servers are useful in their own ways.

Apache benefits from being paired with a front-end server (Nginx itself is one option) and offers more customization and flexibility, while Nginx excels at raw speed and concurrency.

Source

The Start of the RHCA Journey

I’m starting my RHCA (Red Hat Certified Architect) journey!

It took me some time to get my mind set on this, and it was important to understand why I’m willing to do this in the first place.

Why Red Hat?

I do Linux system administration for a living. Although the world is moving towards DevOps, containers and automation, this doesn’t change the fact that Linux remains the go-to choice for the cloud, and regardless of the job title, one still does a lot of sysadmin work day in, day out.

I’ve been running Linux in production for the past 7 years (and even longer as my personal desktop OS), with the last 4 years being exclusively on Red Hat-based OSes. Over time, I transitioned my servers from Debian to Ubuntu and then to Red Hat/CentOS. As much as I like Debian, Red Hat has become my distribution of choice, so it seemed natural to learn it in depth.

Why RHCA?

I’m a self-taught RHCE. I passed the exam a couple of years ago.

If you’re reading this, then you’re likely aware that Red Hat exams are hands-on, which is what makes them valuable. You get presented with complex problems, and more often than not you need to know where to find answers on a RHEL system.

This testing methodology is advantageous because it does not require you to simply memorise things, but to know where to find information. Of course, you need to memorise bits and pieces, but it’s muscle memory that’s the key to success.

Having said that, there are three things required to achieve RHCA: practice, practice, practice. You need to perform the tasks over and over to be an expert in using a product, be it Red Hat High Availability clustering, Satellite or OpenStack.

Why am I doing this? Motivation and Expectations

I’m a person who’s eager to learn new things. This includes looking for challenges that would help me grow both personally, and professionally.

As I said some time ago, RHCA is not a sprint, it’s a marathon. It’s also a massive undertaking and should not be taken lightly. This alone makes me want to pursue it. To become better in what I do.

To quote Arnold Schwarzenegger:

“Never, ever think small. If you’re going to accomplish anything, you have to think big. No matter what you do, work, work, work!”

I’m doing this for myself. It’s a project that I feel is worth investing time and resources in. I’m not doing the RHCA to get a new job or a raise; there are much easier and less time-consuming ways of achieving either of those things.

I expect this journey to be a lengthy process with lots of challenges that I’ll need to overcome, including exams, travel and life itself.

Chances are that things won’t always go my way even if I’m well prepared, therefore it’s important to be honest with myself and understand why I’m doing this in the first place.

Timescale

I don’t have a strict deadline, but my aim is to pass the exams by the end of the year. I started planning my RHCA studies back in 2018 so that I would have plenty of time in 2019.

The First Exam: EX436 High Availability Clustering

I use HA clustering at work, therefore the decision to take the EX436 was somewhat easy to make.

EX436 will be my first exam, and I’m already approaching the end of the study process. I use official documentation available on Red Hat’s website, and a lot of practicing.

My home lab for HA clustering is quite simple: a laptop with a quad-core CPU, 16GB of RAM and a 128GB SSD, running the KVM hypervisor and four RHEL 7.1 virtual machines. One VM provides storage services, and the other three VMs are for clustering. In terms of networking, I use five network interfaces (2x corosync redundant rings, 2x iSCSI multipath, 1x LAN). The corosync and iSCSI networks are non-routable.

Source

How to Resize OpenStack Instance (Virtual Machine) from Command line


Being a Cloud administrator, resizing or changing resources of an instance or virtual machine is one of the most common tasks.
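With the unified OpenStack client, the short version looks something like this (instance and flavor names are placeholders; older client versions use --confirm/--revert flags on the resize command instead of the subcommands shown):

```shell
# pick a new flavor and resize the instance
openstack flavor list
openstack server resize --flavor m1.large my-instance

# wait for the instance to reach VERIFY_RESIZE status, then confirm the change
openstack server show my-instance -c status
openstack server resize confirm my-instance   # or "resize revert" to roll back
```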

Source

How Do You Fedora: Journey into 2019

Fedora had an amazing 2018. The distribution saw many improvements with the introduction of Fedora 28 and Fedora 29. Fedora 28 included third party repositories, making it easy to get software like the Steam client, Google Chrome and Nvidia’s proprietary drivers. Fedora 29 brought support for automatic updates for Flatpaks.

One of the four foundations of Fedora is Friends. Here at the Magazine we’re looking back at 2018, and ahead to 2019, from the perspective of several members of the Fedora community. This article focuses on what each of them did last year, and what they’re looking forward to this year.

Fedora in 2018

Radka Janekova attended five events in 2018. She went to FOSDEM as a Fedora Ambassador, gave two presentations at devconf.cz, and gave three presentations on dotnet in Fedora. Janekova started using DaVinci Resolve in 2018, describing it as a “very Linux friendly video editor.” She did note one drawback, saying, “It may not be entirely open source though!”

Julita Inca traveled to many places in the world in 2018. “I took part of the Fedora 29 Release Party in Poland where I shared my experiences of being an Ambassador of Fedora these years in Peru.” She is currently at the University of Edinburgh: “I am focusing in getting a Master in High Performance Computing in the University of Edinburgh using ARCHER that has CentOS as Operating System.” As part of her master’s degree she is using a lot of new software: “I am learning new software for parallel programming I learned openMP and MPI.” To profile code in C and Fortran she uses Intel’s VTune.

Jose Bonilla went to a DevOps event hosted by a company called Rancher. Rancher is an open source company that provides a container orchestration framework which can be hosted in a variety of ways, including in the cloud or self-hosted. “I went to this event because I wished to gain more insight into how I can use Fedora containerization in my organization and to teach students how to manage applications and services.” This event showed that the power of open source is less about competition and more about completion. “There were several open source projects at this event working completely in tandem without ever having this as a goal. The companies at this event were Google, Rancher, Gitlab and Aqua.” Jose used a variety of open source applications in 2018. “I used Cockpit, Portainer and Rancher OS. Portainer and Rancher are both services that manage dockers containers. Which only proves the utility of containers. I believe this to be the future of compute environments.” He is also working on tools for data analytics. “I am improving on my knowledge of Elasticsearch and the Elastic Stack — Kibana, which is an extraordinarily powerful open source set of tools for data analytics.”

Carlos Enrique Castro León has not been to a Fedora event in Peru, but listens to Red Hat’s Command Line Heroes podcast. “I really like to listen to him since I can meet people related to free code.” Last year he started using Kdenlive and Inkscape. “I like them because there is a large community in Spanish that can help me.”

Akinsola Akinwale started using VSCode, Calligra and Qt5 Designer in 2018. He uses VSCode for Python development, and Calligra for editing documents and spreadsheets. “I love Vscode for its embedded VIM , terminal & easy of use.” He started using Calligra just for a change of pace, and likes the flexibility of Qt5 Designer for creating graphical user interfaces instead of coding them all in VSCode.

Kevin Fenzi went to several Fedora events in 2018. He enjoyed all of them, but liked Flock in Dresden the best of them all. “At Flock in Dresden I got a chance to talk face to face with many other Fedora contributors that I only talk to via IRC or email the rest of the time. The organizers did an awesome job, the venue was great and it was all around just a great time. There were some talks that made me think, and others that made me excited to see what would happen with them in the coming year. Also, the chance to have high bandwith talks really helped move some ideas along to reality.” There were two applications Kevin started using in 2018. “First, after many years of use, I realized it was time to move on from using rdiff-backups for my backups. It’s a great tool, but it’s in python2 and very inactive upstream. After looking around I settled on borg backup and have been happily using that since. It has a few rough edges (it needs lots of cache files to do really fast backups, etc) but it has a very active community and seems to work pretty nicely.” The other application Kevin started using is OpenShift. “Secondly, 2018 was the year I really dug into OpenShift. I understand now much more about how it works and how things are connected and how to manage and upgrade it. In 2019 we hope to move a bunch of things over to our OpenShift cluster. The OpenShift team is really doing a great job of making something that deploys and upgrades easily and are adding great features all the time (most recently the admin console, which is great to watch what your cluster is doing!).”

Fedora in 2019

Radka plans to do similar presentations in 2019. “At FOSDEM this time I’ll be presenting a story of an open source project eating servers with C#.” Janekova targets pre-university students in an effort to encourage young women to get involved in technology. “I really want to help dotnet and C# grow in the open source world, and I also want to educate the next generation a little bit better in terms of what women can or can not do.”

Julita plans on holding two events in 2019. “I can promote the use of Fedora and GNOME in Edinburgh University.” When she returns to Peru she plans on holding a conference on writing parallel code on Fedora and Gnome.

Jose plans on continuing to push open source initiatives such as cloud and container infrastructures. He will also continue teaching advanced Unix systems administration. “I am now helping a new generation of Red Hat Certified Professionals seek their place in the world of open source. It is indeed a joy when a student mentions they have obtained their certification because of what they were exposed to in my class.” He also plans on spending some more time with his art again.

Carlos would like to write for Fedora Magazine and help bring the magazine to the Latin American community. “I would like to contribute to Fedora Magazine. If possible I would like to help with the magazine in Spanish.”

Akinsola wants to hold a Fedora release party in 2019. “I want make many people aware of Fedora, make them aware they can be part of the release and it is easy to do.” He would also like to ensure that new Fedora users have an easy time adapting to their new OS.

Kevin is excited about 2019 being a time of great change for Fedora. “In 2019 I am looking forward to seeing what and how we retool things to allow for lifecycle changes and more self service deliverables. I think it’s going to be a ton of work, but I am hopeful we will come out of it with a much better structure to carry us forward to the next period of Fedora success.” Kevin also had some words of appreciation for everyone in the Fedora community. “I’d like to thank everyone in the Fedora community for all their hard work on Fedora, it wouldn’t exist without the vibrant community we have.”

Source
