Surviving Mars: Space Race expansion and Gagarin free update have released, working well on Linux

Haemimont Games along with Paradox Interactive have released the Surviving Mars: Space Race expansion today and it’s great.

Note: DLC key provided by TriplePoint PR.

The Surviving Mars: Space Race expansion expands the game in some rather interesting ways. One of the biggest is obviously the AI-controlled colonies, which are optional. Being able to trade with them, deal with distress calls and run covert ops against them has certainly added an interesting layer to the game. While I enjoyed the game anyway, this actually gives it a little more purpose outside of trying to stave off starvation and not get blown into tiny pieces by meteorites. It’s interesting, though, that this could pave the way for a full multiplayer feature, since there are now things you can actually do with and against other colonies. Although you can’t directly view AI colonies, only see them on the world map, which is a bit of a shame.

There’s also the free Gagarin content update, released at the same time, which includes some interesting goals for each Sponsor. Each one has its own special list of goals for you to achieve, which will give you rewards for completing them. For example, with the International Mars Mission sponsor, one such goal is to have a colonist born on Mars, which will give you a bunch of supply pods for free. They’re worth doing, but not essential.

Another fun free feature is planetary anomalies, which require you to send a rocket stocked with Drones, Rovers or Colonists across Mars, outside of your colony, to investigate. They’re pretty good too, since they offer some pretty big rewards at times. There are also Supply Pods now, which cost a bit more, but you don’t need to wait for them to get prepped, so they’re really good for an emergency situation and can end up saving your colony should the worst happen.

Bringing Surviving Mars closer to games like Stellaris, there are also now special events that will happen. The game was a little, how do I put it, empty? Empty is perhaps too harsh; it’s hard to explain properly. I enjoyed the game in its initially released form, but it did feel lacking in places. Not so much now, since there are around 250 events that will come up at various points throughout the game. Some good, some bad, some completely terrible, so it just makes the game feel a lot more fresh. Considering this was added in for free, it’s quite a surprise. I think it goes to show how much they care about the game.

The game actually froze up on me a few times in previous versions; since putting many hours into this latest update I haven’t seen a single freeze, so that’s really welcome. The game has been running like an absolute dream on maximum settings, incredibly smooth, and (as weird as it is to say this about Mars) it looks great too. On top of that, the Linux version got a fix for switching between fullscreen and windowed mode with the patch.

One thing to note is that there are a few new keybinds, so I do suggest resetting them or checking them over, as I had one or two that were doubled up.

As a whole, the game has changed quite dramatically with this expansion and free update. I liked it a lot before; now I absolutely love it. Even with just the free update, it’s so much more worthwhile to play! I’m very content with it and I plan to play a lot more in my own free time now.

This exciting expansion is available from Humble Store, GOG and Steam. The game is also having a free weekend on Steam.

Source

Canonical Extends Ubuntu 18.04 LTS Linux Support to 10 Years

BERLIN — In a keynote at the OpenStack Summit here, Mark Shuttleworth, founder of Ubuntu and CEO of Canonical, detailed the progress made by his Linux distribution in the cloud and announced new extended support.

Ubuntu 18.04 LTS (Long Term Support) debuted back on April 26, providing new server and cloud capabilities. An LTS release normally comes with five years of support, but during his keynote Shuttleworth announced that 18.04 will have support available for up to 10 years.

“I’m delighted to announce that Ubuntu 18.04 will be supported for a full 10 years,” Shuttleworth said. “In part because of the very long time horizons in some industries like financial services and telecommunications, but also from IoT, where manufacturing lines for example are being deployed that will be in production for at least a decade.”

OpenStack

Long-term, stable support for the OpenStack cloud is something that Shuttleworth has committed to for some time. In April 2014, the OpenStack Icehouse release came out, and it is still being supported by Canonical.

“The OpenStack community is an amazing community and it attracts amazing technology, but that won’t be meaningful if it doesn’t deliver for everyday businesses,” Shuttleworth said. “We actually manage more OpenStack clouds for more different industries, more different architectures than any other company.”

Shuttleworth said that when Icehouse was released, he committed to supporting it for five years, because long term support matters.

“What matters isn’t day two, what matters is day 1,500,” Shuttleworth said. “Living with OpenStack, scaling it, upgrading it, growing it, that is important to master to really get the value for your business.”

IBM Red Hat

Shuttleworth also provided some color about his views on the $34 billion acquisition of Red Hat by IBM, which was announced on Oct. 28.

“I wasn’t surprised to see Red Hat sell,” Shuttleworth said. “But I was surprised at the amount of debt that IBM took on to close the deal.”

He added that he would be worried for IBM, except for the fact that the public cloud is a huge opportunity.

“I guess it makes sense if you think of IBM being able to steer a large amount of on prem RHEL workloads to the cloud, then that deal might make sense,” he said.

Sean Michael Kerner is a senior editor at ServerWatch and InternetNews.com. Follow him on Twitter @TechJournalist.

Source

Download GNOME Shell Extensions Linux 3.31.2

GNOME Shell Extensions is an open source and freely distributed project that provides users with a modest collection of extensions for the GNOME Shell user interface of the GNOME desktop environment. It contains a handful of extensions, carefully selected by technical members of the GNOME project. These extensions are designed to enhance users’ experience with the GNOME desktop environment.

Designed for GNOME

In general, GNOME Shell extensions can be used to customize the look and feel of the controversial desktop environment. In other words, they make your life a lot easier when working in GNOME. While the software is distributed as part of the GNOME project, installable from the default software channels of most Linux distributions, it is also available for download as a source archive, engineered to enable advanced users to configure, compile and install it on any Linux OS.
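
As a brief, hedged example: on a Debian-based distribution, the collection can usually be installed from the package repositories and an individual extension enabled from a terminal (the UUID below is the one the User Theme extension has shipped with; other extensions have their own UUIDs):

$ sudo apt install gnome-shell-extensions
$ gnome-shell-extension-tool --enable-extension=user-theme@gnome-shell-extensions.gcampax.github.com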

Includes a wide variety of extensions for GNOME Shell

At the moment, the following GNOME-Shell extensions are included in this package: Alternate Tab, Apps Menu, Auto Move Windows, Drive Menu, Launch New Instance, Native Window Placement, Places Menu, systemMonitor, User Theme, Window List, windowsNavigator, and Workspace Indicator.

While some of them are self-explanatory, like systemMonitor, Window List, Workspace Indicator, Apps Menu or Alternate Tab, we should mention that User Theme allows you to add a custom theme (skin) for the GNOME Shell, and Alternate Tab replaces the default ALT+TAB functionality with a more sophisticated one.

In addition, the windowsNavigator extension allows you to select windows and workspaces in the GNOME Shell overlay mode using your keyboard, Native Window Placement arranges windows in the overview mode in a more compact way, and Auto Move Windows automatically moves apps to a specific workspace when they are opened.

Bottom line

Overall, GNOME Shell Extensions is yet another important component of the GNOME desktop environment, especially when using the GNOME Shell user interface, making your life much easier and helping you achieve your goals faster. However, we believe that there are many other useful extensions out there that deserve to be included in this package.

Source

Install Etcher on Linux | Linux Hint

Etcher is a free tool for flashing microSD cards with operating system images for Raspberry Pi single-board computers. The user interface of Etcher is simple and it is really easy to use. It is a must-have tool if you’re working on a Raspberry Pi project. I highly recommend it. Etcher is available for Windows, macOS and Linux, so you get the same user experience no matter which operating system you’re using.

In this article, I will show you how to install and use Etcher on Linux. I will be using Debian 9 Stretch for the demonstration, but this article should work on any other Debian-based Linux distribution, such as Ubuntu, without any modification. With slight modification, it should work on other Linux distributions as well. So, let’s get started.

Downloading Etcher:

You can download Etcher from its official website. First, go to https://www.balena.io/etcher/ and you should see the following page. You can click on the download link as marked in the screenshot below to download Etcher for Linux, but it may not work all the time. It did not work for me.

If that is the case for you as well, scroll down a little bit and click on the link as marked in the screenshot below.

Your browser should prompt you to save the file. Just click on Save File.

Your download should start as you can see in the screenshot below.

Installing Etcher on Linux:

Now that you have downloaded Etcher for Linux, you are ready to install it. In order to run Etcher on Linux, you need to have the zenity, Xdialog or kdialog package installed on your Linux distribution of choice. On Ubuntu, Debian, Linux Mint and other Debian-based Linux distributions, the easiest option is zenity, as it is available in the official package repositories of these distributions. As I am using Debian 9 Stretch for the demonstration, I will only cover Debian-based distributions here.

First, update the package repository of your Ubuntu or Debian machine with the following command:
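
$ sudo apt update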

Now, install zenity with the following command:

$ sudo apt install zenity

Now, press y and then press <Enter> to continue.

zenity should be installed.

Now, navigate to the ~/Downloads directory where you downloaded Etcher with the following command:
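
$ cd ~/Downloads

$ ls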

As you can see, the Etcher zip archive file is here.

Now, unzip the file with the following command:

$ unzip etcher-electron-1.4.6-linux-x64.zip

The zip file should be extracted and a new AppImage file should appear, as you can see in the screenshot below.

Now, move the AppImage file to the /opt directory with the following command:

$ sudo mv etcher-electron-1.4.6-x86_64.AppImage /opt
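
If the AppImage is not marked as executable after extraction (it usually is, but this can vary), set the executable bit first:

$ sudo chmod +x /opt/etcher-electron-1.4.6-x86_64.AppImage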

Now, run Etcher with the following command:

$ /opt/etcher-electron-1.4.6-x86_64.AppImage

You should see the following dialog box. Just click on Yes.

Etcher should start as you can see in the screenshot below.

From now on, you don’t have to start Etcher from the command line. You can start Etcher from the Application Menu, as you can see in the screenshot below.

Using Etcher on Linux:

You can now flash microSD cards using Etcher for your Raspberry Pi. First, open Etcher and click on Select image.

A file picker should be opened. Now, select the operating system image file that you want to flash your microSD card with and click on Open.

The image should be selected.

Now, insert the microSD card or USB storage device that you want to flash with Etcher. It may be selected by default. If you have multiple USB storage devices or microSD cards attached to your computer and the right one is not selected by default, you can click on Change as marked in the screenshot below to change it.

Now, select the one you want to flash using Etcher from the list and click on Continue.

NOTE: You can also flash multiple USB devices or microSD cards at the same time with Etcher. Just select the ones that you want to flash from the list and click on Continue.

It should be selected as you can see in the screenshot below.

You can also change Etcher’s settings to control how it flashes microSD cards or USB storage devices. To do that, click on the gear icon as marked in the screenshot below.

The Etcher settings panel is very clear and easy to use. All you have to do is check or uncheck the options you want and click on the Back button. Normally you don’t have to do anything here; the default settings are good. But if you uncheck Validate write on success, it will save you a lot of time, because this option verifies that everything was written to the microSD card or USB storage device correctly. Validation puts extra stress on your microSD cards or USB devices and takes a lot of time to complete. Unless you have a faulty microSD card or USB storage device, unchecking this option will do you no harm. It’s up to you to decide what you want.

Finally, click on Flash!

Etcher should start flashing your microSD card or USB storage device.

Once the microSD card or the USB storage device is flashed, you should see the following window. You can now close Etcher and eject your microSD card or USB storage device and use it on your Raspberry Pi device.

So that’s how you install and use Etcher on Linux (Ubuntu/Debian specifically). Thanks for reading this article.

Source

Excited About Application Modernization? Contain Yourself…

Introduction

For those of us who work with technologies every day, it’s important to remember one key thing: every topic is new to someone somewhere every day.

With that in mind, we are starting a series of posts here that will begin from the basics to help you build your knowledge of modern application delivery. Think of it as Containers 101.

To understand what containers are and how they benefit application developers, DevOps, and operations teams, let’s look at an essential change in the architecture of applications: the use of microservices.

What are Microservices?

Microservices are an evolution of a software architecture concept that developed in the 1990s and became widespread in the 2000s – service-oriented architecture (SOA). SOA defines an application as a collection of services. A service is an independent and self-contained function that is well-defined and stateless. Services act together as an application by taking input from each other (or, at one end of the application pipeline, from a user or other input source), performing some processing on the data, and passing it on to another service (or, at the other end of the pipeline, to some data store or to a user).

Services are reusable – that is, the same service can be connected to many different services, often from different applications with the same needs. Here’s a very simple example: whether it is a person, a command shell, or another program that needs to convert a domain name to an IP address, there can be a single Domain Name Service in the environment that resolves those requests.
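
For instance, on a typical Linux system any program or user can reuse the same resolver service without knowing anything about how it is implemented; from a shell, a query might look like this:

$ getent hosts example.com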

Many of today’s developers were exposed to SOA in the form of web services, functions that could be exposed by web protocols such as HTTP, with their inputs and outputs composed into structured requests via REST APIs. These services communicate with each other over networks. Services can also use other communication mechanisms, for example, shared memory.

Microservices are a next step, where monolithic applications that traditionally run on a single server (or redundantly in a cluster) are decomposed – or new ones built – as a collection of small well-defined units of processing. Microservices may run on the same system or across nodes of a cluster.

The benefits of using microservice-based architecture include:

  • functions can be shared with other applications
  • functions can be updated without requiring rebuilding and updating entire applications (continuous delivery)
  • functions can be scaled up and down independently, making it easy to deploy resources where they are needed

Using microservices has become much simpler with the development of a relatively new architectural construct: containers.

What are Containers?

The adoption of virtual machines became widespread in the 1990s and 2000s for IT on industry-standard system architectures because they made it possible to do two very important things: to isolate an application from the behavior of other applications on the same system or cluster, and to package up all of the resources an application or set of applications requires into an easily deployable, easily portable format. But virtual machines can be too resource-intensive a solution for the needs of many applications, and especially of microservices. Each virtual machine needs to carry with it not only the application or service and all of its dependencies, but also an entire operating system environment and the software emulation of a standalone computer.

Containers are a “best of both worlds” architectural idea that attains many of the isolation and packaging benefits of virtualization by using lighter-weight mechanisms within a shared operating system. Because containers don’t need to boot a new operating system environment, they can start and stop rapidly, often in less than a second – especially useful when scaling them up and down to accommodate changing demand. Because this makes them smaller than a virtual machine, more of them can be run on the same hardware simultaneously. For the same reason, they are especially well suited to microservices, of which a well-decomposed application may have a large number. But containers still carry with them the libraries and commands that each application or service needs – making it possible for apps and services built on different OS releases to coexist on the same hardware.
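
As a concrete sketch (using Docker as one popular container runtime; the idea is the same for others), starting and stopping a fully isolated, self-contained service is a single command each, and typically completes in around a second:

$ docker run --rm -d --name web -p 8080:80 nginx:alpine
$ docker stop web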

What aren’t Containers?

Containers are not virtual machines. They do not offer the heavyweight security and performance isolation that virtual machines can offer. (Though there are new container implementations in development that come close; we will discuss those in a future educational blog post.)

Containers are not installation packages – they take the place of software installation. Containers can be deployed on demand to specific servers and their deployment can replace the complex tasks of software installation.

Containers are not whole applications. Well, to be honest, some of them can be: there are certainly gains in flexibility of deployment and management to be realized by just putting a monolithic application in a container. But the real gain comes from rearchitecting legacy applications into microservices, and designing new ones that way. Note that the journey to microservices and application modernization need not be all-or-nothing: many organizations start on their existing applications by chipping away at them to break off reusable and scalable capabilities into microservices gradually.

Where Do I Go From Here?

If you’re new to containers and microservices, I hope this has given you a good introduction. The next post that builds on this knowledge will be available in about two weeks. If you want to read ahead, SUSE Linux Enterprise Server includes a containers module, about which you can find information on our website and in the blog. And SUSE CaaS Platform includes containers and management capabilities for them in a purpose-built product. If you find the reading gets deep for you, though, stop back at the SUSE Blog for more of Containers 101 soon.

Source

Best Free Linux Computer Algebra Systems

A computer algebra system (CAS) is mathematical software that can manipulate mathematical formulae in a way similar to the traditional manual computations of mathematicians and scientists. This type of system supports a wide range of mathematics including linear algebra, calculus, and algebraic and ordinary differential equations.

A CAS offers a rigorous environment for defining and working with structures such as groups, rings, fields, modules, algebras, schemes, curves, graphs, designs, codes and many others.

They have been extensively used in higher education.

The main features of a CAS include:

  • Numerical Computations: The software can determine numerical approximations of solutions, derivatives, integrals, differential equations, etc. Solve, manipulate, and plot functions without needing to generate numeric data. Often problems that cannot be solved explicitly can be solved numerically, and often only a numerical answer is sufficient.
  • Data Analysis: Having data is not sufficient; we need to extract useful information from it. There are many algorithms designed for data analysis, most of which involve too much work to be done by manual computations. CAS’s put these algorithms in one place, and offer an environment where the algorithms are easy to implement.
  • Data Visualization: CAS’s can graph 2D and 3D functions in a variety of ways. They are also designed to graph vector fields and solutions to differential equations.
  • Symbolic Computations: Most of the CAS’s can perform symbolic manipulation of expressions: reducing, expanding, simplifying, derivatives, antiderivatives, etc. Unlike numerical computations, which can exhibit floating-point errors, symbolic computations are determined exactly. They can therefore provide the exact answer to an equation (as opposed to a decimal approximation), and they can express results in terms of a wide variety of previously defined functions.

A CAS automates tedious and sometimes difficult algebraic manipulation tasks. The principal difference between a CAS and a traditional calculator is the ability to deal with equations symbolically rather than numerically.
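
To make the distinction concrete, here is a minimal command-line sketch, assuming the Maxima package (covered below) is installed:

$ echo 'integrate(x^2, x);' | maxima --very-quiet

Maxima returns the exact antiderivative x^3/3, where a purely numerical tool could only approximate the integral over a specific interval.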

The chart below offers our rating for each application. Some of the software is very specialized, designed to fill a particular niche. This makes comparisons difficult.

Computer Algebra Systems Chart

To provide an insight into the quality of software that is available, we have compiled a list of 13 impressive algebra systems. There are general-purpose systems as well as specialist software solutions. All of them are open source software.

Let’s explore the 13 algebra systems at hand. For each application we have compiled its own portal page, with a full description, an in-depth analysis of its features, screenshots, and links to relevant resources.

Computer Algebra Systems

  • Maxima: System for the manipulation of symbolic and numerical expressions
  • PARI/GP: Widely used algebra system designed for fast computations in number theory
  • SymPy: Python library for symbolic mathematics
  • Scilab: Numerical computational package
  • SageMath: Open source alternative to Magma, Maple, Mathematica and Matlab
  • Octave: Powerful programming language with built-in plotting and visualization tools
  • Axiom: General purpose computer algebra system
  • SINGULAR: Computer algebra system for polynomial computations
  • GAP: System for computational discrete algebra
  • CoCoA: System for doing computations in commutative algebra
  • Cadabra: Symbolic computer algebra system for field theory problems
  • Macaulay2: Software system for research in algebraic geometry
  • FriCAS: Fork of Axiom

Source

Machine Learning for Operations | Linux.com

Managing infrastructure is a complex problem with a massive amount of signals and many actions that can be taken in response; that’s the classic definition of a situation where machine learning can help.

IT and operations is a natural home for machine learning and data science. According to Vivek Bhalla, until recently a Gartner research director covering AIOps and now director of product management at Moogsoft, if there isn’t a data science team in your organization, the IT team will often become the “center of excellence”.

By 2022, Gartner predicts, 40 percent of all large enterprises will use machine learning to support or even partly replace monitoring, service desk and automation processes. That’s just starting to happen in smaller numbers.

In a recent Gartner survey, the most popular uses of AI in IT and operations were analyzing big data (18 percent) and chatbots for IT service management — 15 percent are already using chatbots and a further 30 percent plan to do so by the end of 2019.

Read more at The New Stack

Source

5 key differences between MySQL and TiDB

As businesses adopt cloud-native architectures, conversations will naturally lead to what we can do to make the database horizontally scalable. The answer will likely be to take a closer look at TiDB.

TiDB is an open source NewSQL database released under the Apache 2.0 License. Because it speaks the MySQL protocol, your existing applications will be able to connect to it using any MySQL connector, and most SQL functionality remains identical (joins, subqueries, transactions, etc.).
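
For example, assuming a TiDB instance running locally and listening on its default port (4000), the stock mysql command-line client connects to it exactly as it would to MySQL:

$ mysql -h 127.0.0.1 -P 4000 -u root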

Step under the covers, however, and there are differences. If your architecture is based on MySQL with Read Replicas, you’ll see things work a little bit differently with TiDB. In this post, I’ll go through the top five key differences I’ve found between TiDB and MySQL.

1. TiDB natively distributes query execution and storage

With MySQL, it is common to scale out via replication. Typically you will have one MySQL master with many slaves, each with a complete copy of the data. Using either application logic or technology like ProxySQL, queries are routed to the appropriate server (offloading queries from the master to slaves whenever it is safe to do so).

Scale-out replication works very well for read-heavy workloads, as the query execution can be divided between replication slaves. However, it becomes a bottleneck for write-heavy workloads, since each replica must have a full copy of the data. Another way to look at this is that MySQL Replication scales out SQL processing, but it does not scale out the storage. (By the way, this is true for traditional replication as well as newer solutions such as Galera Cluster and Group Replication.)

TiDB works a little bit differently:

  • Query execution is handled via a layer of TiDB servers. Scaling out SQL processing is possible by adding new TiDB servers, which is very easy to do using Kubernetes ReplicaSets (see the sketch after this list). This is because TiDB servers are stateless; the TiKV storage layer is responsible for all of the data persistence.
  • The data for tables is automatically sharded into small chunks and distributed among TiKV servers. Three copies of each data region (the TiKV name for a shard) are kept in the TiKV cluster, but no TiKV server requires a full copy of the data. To use MySQL terminology: Each TiKV server is both a master and a slave at the same time, since for some data regions it will contain the primary copy, and for others, it will be secondary.
  • TiDB supports queries across data regions or, in MySQL terminology, cross-shard queries. The metadata about where the different regions are located is maintained by the Placement Driver, the management server component of any TiDB Cluster. All operations are fully ACID compliant, and an operation that modifies data across two regions uses a two-phase commit.
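
Because the TiDB layer is stateless, scaling the SQL tier can be as simple as resizing the ReplicaSet. A hedged sketch (the resource name tidb-server is hypothetical; production deployments typically use the TiDB Operator):

$ kubectl scale replicaset tidb-server --replicas=5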

For MySQL users learning TiDB, a simpler explanation is the TiDB servers are like an intelligent proxy that translates SQL into batched key-value requests to be sent to TiKV. TiKV servers store your tables with range-based partitioning. The ranges automatically balance to keep each partition at 96MB (by default, but configurable), and each range can be stored on a different TiKV server. The Placement Driver server keeps track of which ranges are located where and automatically rebalances a range if it becomes too large or too hot.

This design has several advantages over scale-out replication:

  • It independently scales the SQL Processing and Data Storage tiers. For many workloads, you will hit one bottleneck before the other.
  • It incrementally scales by adding nodes (for both SQL and Data Storage).
  • It utilizes hardware better. To scale out MySQL to one master and four replicas, you would have five copies of the data. TiDB would use only three replicas, with hotspots automatically rebalanced via the Placement Driver.

2. TiDB’s storage engine is RocksDB

MySQL’s default storage engine has been InnoDB since 2010. Internally, InnoDB uses a B+tree data structure, which is similar to what traditional commercial databases use.

By contrast, TiDB uses RocksDB, a log-structured merge-tree (LSM-tree) key-value store, as the storage engine within TiKV. RocksDB has advantages for large datasets because it can compress data more effectively, and insert performance does not degrade when indexes can no longer fit in memory.

Note that both MySQL and TiDB support an API that allows new storage engines to be made available. For example, Percona Server and MariaDB both support RocksDB as an option.
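
For instance, on a MariaDB server whose build ships the MyRocks plugin (an assumption; check your distribution’s packages), the engine can be loaded at runtime from a shell, after which tables can be created with ENGINE=ROCKSDB:

$ mysql -u root -p -e "INSTALL SONAME 'ha_rocksdb';"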

3. TiDB gathers metrics in Prometheus/Grafana

Tracking key metrics is an important part of maintaining database health. MySQL centralizes these fast-changing metrics in Performance Schema. Performance Schema is a set of in-memory tables that can be queried via regular SQL queries.

With TiDB, rather than retaining the metrics inside the server, a strategic choice was made to ship the information to a best-of-breed service. Prometheus+Grafana is a common technology stack among operations teams today, and the included graphs make it easy to create your own or configure thresholds for alarms.
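
Each component exposes its metrics over HTTP for Prometheus to scrape. For instance, assuming a TiDB server with its status port at the default 10080, the raw metrics can be inspected directly:

$ curl http://127.0.0.1:10080/metrics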

4. TiDB handles DDL significantly better

If we ignore for a second that not all data definition language (DDL) changes in MySQL are online, a larger challenge when running a distributed MySQL system is externalizing schema changes on all nodes at the same time. Think about a scenario where you have 10 shards and add a column, but each shard takes a different length of time to complete the modification. This challenge still exists without sharding, since replicas will process DDL after a master.

TiDB implements online DDL using the protocol introduced by the Google F1 paper. In short, DDL changes are broken up into smaller transition stages so they can prevent data corruption scenarios, and the system tolerates an individual node being behind up to one DDL version at a time.

5. TiDB is designed for HTAP workloads

The MySQL team has traditionally focused its attention on optimizing performance for online transaction processing (OLTP) queries. That is, the MySQL team spends more time making simpler queries perform better instead of making all or complex queries perform better. There is nothing wrong with this approach since many applications only use simple queries.

TiDB is designed to perform well across hybrid transaction/analytical processing (HTAP) queries. This is a major selling point for those who want real-time analytics on their data because it eliminates the need for batch loads between their MySQL database and an analytics database.

Conclusion

These are my top five observations based on 15 years in the MySQL world and coming to TiDB. While many of them refer to internal differences, I recommend checking out the TiDB documentation on MySQL Compatibility. It describes some of the finer points about any differences that may affect your applications.

Source

deepin 15.8 Linux distribution available for download — replace Windows 10 now!

As more and more people wake up to the fact that Windows 10 is a giant turd lately, computer users are exploring alternatives, such as Linux-based operating systems. First impressions can be everything, so when searching for a distribution, it is important that beginners aren’t scared off by bewildering installers or ugly and confusing interfaces.

Linux “n00bs” often opt for Ubuntu, and while that is a good choice, there are far prettier and more intuitive options these days. One such operating system that I recommend often is deepin. Why? It is drop-dead gorgeous and easy to use. It is guaranteed to delight the user, and its intuitive interface will certainly impress. Today, the newest version of the excellent Linux distro, deepin 15.8, becomes available for download.

ALSO READ: IBM gobbles up open source and Linux darling Red Hat in $34 billion deal

“Compared with deepin 15.7, the ISO size of deepin 15.8 has been reduced by 200MB. The new release is featured with newly designed control center, dock tray and boot theme, as well as improved deepin native applications, hoping to bring users a more beautiful and efficient experience,” says deepin developers.

ALSO READ: System76 Thelio computer is open source, Linux-powered, and made in the USA

The devs further say, “Prior to deepin official release, usually an internal test is implemented by a small number of community users, then we record their feedbacks and fix the bugs. Before this release, we test deepin 15.8 both from system upgrade and ISO installation. Thanks to the members of internal testing team. Their contributions are highly appreciated!”

As is typical with deepin, there are many eye candy changes to be found in the new release, including enhancements to the dock. The GRUB menu is now prettier, and the file manager has improved icons for the dark theme. It is not all about the superficial, however, as there is now an option for full disk encryption when installing the operating system — a very welcome addition.

The deepin developers share additional bug fixes and improvements below.

dde-session-ui

  • Optimized background drawing;
  • Optimized dual screen display;
  • Optimized the login process;
  • Optimized the notification animation;
  • Fixed the error message when switching to multi-user while verifying the password;
  • Fixed user login failure;
  • Fixed the setting failure of user’s keyboard layout;
  • Added the verification dialog for network password.

dde-dock

  • Fixed the identification error of connected network;
  • Fixed the high CPU usage of network when hotspot was enabled;
  • Fixed the issue that the network connecting animation did not disappear correctly;
  • Supported dragging and dropping any desktop file to the dock;
  • Recognized whether the preview window can be closed or not;
  • Supported transparency settings (set in Control Center);
  • Supported the new dock protocol (SNI);
  • Added “Show Desktop” button in efficient mode;
  • Redesigned the tray area in fashion mode;
  • Removed hot corner presets which can be customized by users.

Deepin Image Viewer

  • Removed the picture management function;
  • Fixed the distortion of high resolution pictures when zooming out.

Deepin Graphics Driver Manager

  • Fixed the identification error of Bumblebee solution;
  • Fixed the interface scaling problem on HiDPI screen;
  • Used glvnd series of drivers for PRIME solution;
  • Optimized error handling.

Ready to download deepin 15.8 and possibly replace Windows 10 with it? You can grab the ISO here. After you try it, please head to the comments and tell me what you think of the operating system — I suspect you will be pleasantly surprised.

Image Credit: HomeArt / Shutterstock

Source
