Must-Have Tools for Writers on the Linux Platform | Linux.com

I’ve been a writer for more than 20 years. I’ve written thousands of articles and how-tos on various technical topics and have penned more than 40 works of fiction. So, the written word is not only important to me, it’s familiar to the point of being second nature. And through those two decades (and counting) I’ve done nearly all my work on the Linux platform. I must confess, during those early years it wasn’t always easy. Formats didn’t always mesh with what an editor required and, in some cases, the open source platform simply didn’t have the tools required to get the job done.

That was then, this is now.

A perfect storm of Linux evolution and web-based tools has made it possible for any writer to get the job done (and done well) on Linux. But what tools will you need? You might be surprised to find out that, in some instances, the job cannot be done efficiently with 100% open source tools. Even with that caveat, the job can be done. Let’s take a look at the tools I’ve been using as both a tech writer and author of fiction. I’m going to outline this by way of my writing process for both nonfiction and fiction (as the two processes differ and require specific tools).

A word of warning to seriously hard-core Linux users: a long time ago, I gave up on using tools like LaTeX and DocBook for my writing. Why? Because, for me, the focus must be on the content, not the process. When you’re facing deadlines, efficiency must take precedence.

Nonfiction

We’ll start with nonfiction, as that process is the simpler of the two. For writing technical how-tos, I collaborate with different editors and, in some cases, have to copy/paste content into a Content Management System (CMS). But as with my fiction, the process always starts with Google Drive. This is the point at which many open source purists will check out. Fear not, you can always opt to either keep all of your files locally or use a more open-friendly cloud service (such as Zoho or Nextcloud).

Why start on the cloud? Over the years, I’ve found I need to be able to access that content from anywhere at any time. The simplest solution was to migrate to the cloud. I’ve also become paranoid about losing work. To that end, I make use of a tool like Insync to keep my Google Drive in sync with my desktop. With that desktop sync in place, I know there’s always a backup of my work, in case something should go awry with Google Drive.
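Insync is proprietary, though. If you’d prefer a fully open source route for that sync step, the rclone command-line tool can do a similar one-way copy. A minimal sketch, assuming you’ve already created a Google Drive remote named gdrive with rclone config (the remote name and local folder here are placeholders):

# Pull everything from the Google Drive remote down to a local folder:
rclone sync gdrive: ~/GoogleDrive

# Push local changes back up when you're done writing:
rclone sync ~/GoogleDrive gdrive:

Either way, rclone sync makes the destination match the source, so a local copy of every document is always on hand.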

For those clients who require content entered into a CMS, the process ends there. I can copy/paste directly from a Google Doc into the CMS and be done with it. Of course, with technical content, there are always screenshots involved. For that, I use Gimp, which makes taking screenshots simple:

  1. Open Gimp.
  2. Click File > Create > Screenshot.
  3. Select from a single window, the entire screen, or a region to grab.
  4. Click Snap.
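If you’d rather capture from the command line, ImageMagick’s import tool is a workable alternative to the Gimp route above; a quick sketch, assuming the ImageMagick package is installed (file names are placeholders):

# Capture the entire screen into a PNG:
import -window root full-screen.png

# Run import with no -window option, then click a window
# (or drag out a region) to capture just that:
import region.png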

The majority of my clients tend to prefer I work with Google Docs, because I can share folders so that they have reliable access to the content. A few clients do not work with Google Docs, so I must download the files into a format they can use: I download in .odt format, open the document in LibreOffice, format as needed, save in the format the client requires, and send the document on.

And that is the end of the line for nonfiction.

Fiction

This is where it gets a bit more complicated. The beginning steps are the same, as I always write every first draft of a novel in Google Docs. Once that is complete, I then download the file to my Linux desktop, open the file in LibreOffice, format as necessary, and then save as a file type supported by my editor (unfortunately, that means .docx).

The next step in the process gets a bit dicey. My editor prefers to use comments over track changes (as it makes it easier for both of us to read the document as we make changes). Because of this, a 60,000-word document can include hundreds upon hundreds of comments, which slows LibreOffice to a useless crawl. Once upon a time, you could increase the memory used for documents, but as of LibreOffice 6, that is no longer possible. This means any novel-length document with numerous comments becomes unusable. Because of that, I’ve had to take drastic measures and use WPS Office (Figure 3). Although this isn’t an open source solution, WPS Office does a fine job with numerous comments in a document, so there’s no need to deal with the frustration that is LibreOffice when working with these large, comment-heavy files.

Once my editor and I finish up the edits for the book (and all comments have been removed), I can then open the file in LibreOffice for final formatting. When the formatting is complete, I save the file in .html format and then open the file in Calibre for exporting the file to .mobi and .epub formats.
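Calibre also ships a command-line converter, ebook-convert, which can handle that same export without opening the GUI; a minimal sketch (the file names, title, and author are placeholders):

# Convert the formatted HTML into EPUB and MOBI:
ebook-convert my-novel.html my-novel.epub --title "My Novel" --authors "Author Name"
ebook-convert my-novel.html my-novel.mobi --title "My Novel" --authors "Author Name"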

Calibre is a must-have for anyone looking to publish on Amazon, Barnes & Noble, Smashwords, or other platforms. One thing Calibre does better than other, similar solutions is enable you to directly edit the .epub files (Figure 4). For the likes of Smashwords, this is an absolute necessity (as the export process will add elements not accepted by the Smashwords conversion tool).

After the writing process is over (or sometimes while waiting for an editor to complete a pass), I’ll start working on the cover for the book. That task is handled completely in Gimp (Figure 5).

And that finishes up the process of creating a work of fiction on the Linux platform. Because of the length of the documents, and how some editors work, it can get a bit more complicated than the process of creating nonfiction, but it’s far from challenging. In fact, creating fiction on Linux is just as simple as on other platforms (and more reliable).

HTH

I hope this helps aspiring writers to have the confidence to write on the Linux platform. There are plenty of other tools available to use, but the ones I have listed here have served me quite well over the years. And although I do make use of a couple of proprietary tools, as long as they keep working well on Linux, I’m okay with that.

Learn more about Linux in the Introduction to Open Source Development, Git, and Linux (LFD201) training course from The Linux Foundation, and sign up now to start your open source journey.

Source

Microsoft is Supporting Patent Trolls, Still. New Leadership at USPTO Gives Room for Concern.

Posted in Deception, Microsoft, Patents at 7:58 am by Dr. Roy Schestowitz

LOT Network: A WHOLE LOT OF SOFTWARE PATENTS

Summary: New statements from Microsoft’s management (Andersen) serve to show that Microsoft hasn’t really changed; it’s just trying to sell “Azure IP Advantage”, hoping that enough patent trolls with their dubious software patents will blackmail GNU/Linux users into adopting Azure for ‘protection’

THIS morning we wrote four articles about the European Patent Office (EPO), but we haven’t lost sight of American matters, which we typically cover at the weekend due to lack of time. Yesterday we wrote about the aggressive past (arguably patent trolling) of the new Deputy Director of the U.S. Patent and Trademark Office (USPTO). This past of hers was mentioned in IPPro Patents coverage, which yesterday noted that “Peter was also previously vice president and general counsel of Immersion Corporation, where, among other legal roles, she led its IP portfolio.”

Immersion Corporation is a patent aggressor that many out there have also dubbed “patent troll”. Peter’s boss/superior is another person who comes from a questionable background and was likely appointed because of nepotism. So what on Earth is going on? It’s not hard to see who benefits (cui bono).

The Microsoft-friendly and Microsoft-sponsored IAM (it is also sponsored by Microsoft’s patent trolls) has just quoted/paraphrased Microsoft’s patent chief Andersen as saying: “more valuable for us to essentially license our patents through Open Invention And LOT Networks than to try to license them on our own…”

What on Earth does that even mean? Can that be interpreted as Microsoft using OIN to just tax/cross-license with software patents? Remember that Microsoft staff was forbidden from commenting on it. That can only mean that Microsoft is hiding something.

Microsoft, moreover, supports the terrible Director who supports patent trolls. Quoting IAM’s tweet: “Andersen on PTO Director Iancu – I’m a fan, he’s doing a super job. One of things we’ve told him is importance of getting certainty back particularly post-Alice. PTO has a role to play in giving us more clarity and I think he’s taken that to heart…”

That doesn’t inspire a positive view of Microsoft’s ‘new’ policy or strategy, which also involves selling ‘protection’ from its patent trolls through Azure.

Source

Linux Today – Oracle Updates Its Linux Distro with Red Hat Enterprise Linux 7.6 Compatibility

Nov 09, 2018, 14:00

Derived from the sources of Red Hat Enterprise Linux 7.6, the Oracle Enterprise Linux 7 Update 6 release ships with Oracle’s Unbreakable Enterprise Kernel (UEK) Release 5 version 4.14.35-1818.3.3 for both 64-bit (x86_64) and ARM architectures, and the Red Hat Compatible Kernel 3.10.0-957, which is only available for 64-bit systems. Besides updated kernels, the Oracle Enterprise Linux 7 Update 6 release comes with numerous new features and improvements, including support for managing path, mount, and timer systemd unit files in the Pacemaker component, as well as the ability to track package installations and upgrades using audit events.

Complete Story

Source

Book of Demons no longer getting a native Linux port, developer plans on ‘supporting’ Steam Play (updated)

UPDATE: The developer provided some clarifications here. I think the key point to take away is this “Last but not least, we are shelving the Linux port, not outright killing it. This doesn’t mean we won’t do it after the launch.”

ORIGINAL: Book of Demons [Steam], a dungeon-crawling hack and slash with deck-building, will no longer get a native Linux port. Steam Play is part of the reason.

It won’t be the last game to do this, I’m sure. At least in this case, they aren’t pulling support for an already released game like Human: Fall Flat, as Book of Demons didn’t have a public Linux version. Anyway, writing on the Steam forum, the developer noted a few vague issues they were having.

Things like “We had as many different issues with the build as testers. With each flavor of Linux came different issues.” along with “Right now everything indicates that Linux port would be very high maintenance.” I always find these types of statements highly unhelpful, unless they actually say why that is. Let’s be clear on this again, too: you do not need to support all Linux distributions, just the most popular ones.

They went on to mention the issue of users only getting a single choice between Native or Proton, since Steam has no built-in way of picking between Steam Play or a native build, an issue that gamers and developers alike seem to be mentioning more lately. So, they said they will “focus our efforts on supporting Steam Play and Proton.”

This does bring up some interesting thoughts. To be clear, I’m very open-minded about Steam Play, especially since sales will still show up as Linux sales, and that I do like.

However, there’s a lot that’s unclear right now. When developers say they will support Steam Play/Proton, how will they do that? It would, at the very least, require them to test every single patch they do on a Linux system through Steam Play to ensure they haven’t broken it. Anything less than that, and I wouldn’t say they were actually supporting it. If it is broken, finding out why might end up being a hassle that holds them back and causes more issues. They can’t really guarantee any degree of support, since it is Valve and co. handling it for them; the way I see it, the game developer is not really doing anything.

Source

IBM Dons Red Hat for Cloudy Future | Business

IBM’s deal to acquire Red Hat caught everyone by surprise when it was announced less than two weeks ago. While concerns spread quickly about what it would mean for the largest enterprise Linux platform, IBM and Red Hat executives assured employees and customers that Red Hat would continue to operate independently — at least for now.


Intel made a comparable acquisition in 2009 when it bought Wind River, the leader in embedded operating systems. That deal, too, could have been viewed negatively by other chip and embedded systems vendors because of their competition with Intel.

However, Intel successfully operated Wind River as an independent entity for many years. That helped preserve Wind River’s business, but it also made employees feel like they were immune from Intel’s culture and oversight.

With any acquisition, the overall value must exceed that of the two entities alone, which means integration of the company culture, as well as its products and services, is needed. For various reasons, Intel never did realize the full value of Wind River, and it sold the group for an undisclosed amount earlier this year.

Change Without Fear

For IBM and its customers, the acquisition of Red Hat is a great move. It combines IBM’s platforms and services with the largest enterprise Linux platform and container solution. Services and solutions from the two companies complement each other very well, especially for private and hybrid cloud implementations.

The combination also makes IBM more competitive with vendors like Amazon, Google and Microsoft — all of which have a large customer base leveraging Red Hat.

The acquisition comes with significant hurdles, however.

The challenge is convincing existing Red Hat customers and partners, including IBM’s competitors, that the change will not impact them, while offering a solution that combines the technology and expertise of the two entities into something greater.

Meshing Open Source, Corporate Cultures

The first objective can be achieved by operating Red Hat independently, but that would not advance the financial or strategic goals of the acquisition. Strategically, it would be better to integrate the two over a reasonable time.

Whether the integration begins immediately or in the near future, it is necessary for the success of the combined company.

Additionally, the acquisition will spark competitors to seek alternative solutions — so, the clock is ticking for IBM to reassure and secure existing customers. Going forward, however, IBM has the opportunity to expand into new market segments with new customers.

An even greater challenge is the difference in culture. While IBM has been a strong supporter of the open source community, it faces the challenge of integrating an open source mentality into a more formal corporate culture. This means either adapting to the new culture or risking the loss of some of the talent and prospects of a group that currently is growing rapidly.

The acquisition of Red Hat will be a good move by IBM, but challenges lie ahead, and the company should address them quickly to ensure that its US$34 billion was well spent and helps enhance IBM’s position as a leading cloud services provider.

Source

The Polaris/Vega Performance At The End Of Mesa 18.3 Feature Development

With Mesa 18.3 feature development having wrapped up at the end of October, here are some benchmarks showing how the updated RadeonSI and RADV drivers are performing for this code that is now under a feature freeze before its official release around the end of November. AMD Radeon Vega and Polaris graphics cards were tested with a slew of NVIDIA graphics cards also tested on their respective driver to show where the Linux gaming GPU performance is at as we head into the 2018 holiday shopping season.

These tests were done on Ubuntu 18.10, but after switching to Linux 4.19 stable and Mesa 18.3-devel built against LLVM 8.0 SVN, to show the current open-source RadeonSI OpenGL and RADV Vulkan performance potential for Polaris/Vega GPUs. The Radeon cards tested were the RX 560, RX 580, RX Vega 56, and RX Vega 64 graphics cards.

For putting the current Radeon Mesa performance into perspective, the NVIDIA 410.73 driver was benchmarked with the GTX 980 Ti, GTX 1060, GTX 1070, GTX 1070 Ti, GTX 1080, GTX 1080 Ti, RTX 2070, and RTX 2080 Ti based on the newer graphics cards I had available for benchmarking.

Tests were done from the Intel Core i9 9900K box with Ubuntu 18.10 while running a variety of OpenGL and Vulkan gaming benchmarks, including the few Steam Play capable game benchmarks available so far. All of these benchmarks were handled by our open-source Phoronix Test Suite benchmarking software.

Source

Samsung finally launches Linux on DeX beta program

Samsung DeX attached to a screen and peripherals.

  • Samsung has launched the Linux on DeX beta trial, bringing Linux to select devices.
  • The beta program currently includes the Galaxy Note 9 and Galaxy Tab S4.
  • Registration closes on December 14, so you still have more than a month to sign up.

Samsung DeX is a pretty handy bit of software in theory, giving users a computer-style experience when hooked up to the big screen. The company announced it was bringing Linux to DeX last year, and it’s finally launched the beta program this week.

The Korean firm sent an email to users who had previously pre-registered their interest in Linux on DeX, notifying them of the beta program’s launch. Once you’ve registered via that email (or the Linux on DeX website), Samsung will send a confirmation email and a follow-up message with instructions to download the Linux on DeX app. That said, I haven’t received the latter email just yet, so don’t expect to be up and running within minutes.

In any event, the beta is a private affair at this point, and supports the Galaxy Note 9 and Galaxy Tab S4 right now. Samsung hasn’t clarified whether other devices, such as the Galaxy S9, Galaxy Note 8, and Galaxy S8 will eventually receive the app.


The Linux on DeX app supports the Ubuntu 16.04 LTS distribution, and requires at least 8GB of storage space and 4GB of RAM. The latter figure suggests that older flagships might yet be supported. Programs also need to be built for the ARM 64-bit architecture, so it seems you won’t be able to run just any old Linux program.

The registration page is accessible via the button below. You’ll need a Google account to sign up and a Samsung account to actually use the service. Sign-ups end on December 14, 2018, so you’ve still got over a month to spare.

Source

Download Fedora Jam KDE Live 29

Fedora Jam KDE Live is an easy-to-use, free and open source GNU/Linux distribution, a spin (remix) of the widely used Fedora operating system tailored specifically for musicians and audio enthusiasts, allowing them to easily produce digital music.

Distributed as 32 and 64-bit Live DVDs

Fedora Jam KDE Live CD is distributed as two Live DVD ISO images, offering complete support for the 64-bit, as well as the 32-bit architectures. It can be installed on a local or external drive, or used directly from the live media.

Boot options à la Fedora Linux

Being derived from Fedora, the distro offers a boot menu that is identical in look and functionality to the one on the official Fedora Live DVDs. It allows users to try the operating system without installing it (live mode), as well as to test the RAM or boot an existing OS from the local disk.

Uses the beautiful KDE desktop environment

The distribution uses the beautiful KDE desktop environment, which features a traditional and familiar layout, comprised of a taskbar (panel) located on the bottom edge of the screen, as well as a Desktop Folder widget on the desktop.

Comes pre-loaded with a wide range of music-related apps

The operating system contains various open source audio creation applications, such as Audacity, Ardour, Frescobaldi, Musescore, and Qtractor, as well as the PulseAudio, ALSA and Jack sound servers. The latest LV2/LADSPA plugins are also included.

Among other open source applications that are included in Fedora Jam KDE Live CD, we can mention the Mozilla Thunderbird email and news client, Internet DJ Console graphical Shoutcast and Icecast client, Mozilla Firefox web browser, the entire Calligra office suite, Sieve mail filtering scripts editor, as well as all the standard KDE applications.

Source

Removing Duplicate PATH Entries | Linux Journal

The goal here is to remove duplicate entries from the PATH variable.
But before I begin, let’s be clear: there’s no compelling reason
to do this. The shell will, in essence, ignore duplicate PATH entries;
only the first occurrence of any one path is important.
Two motivations drive this exercise.
The first is to look at an awk one-liner that initially
doesn’t really appear to do much at all.
The second is to feed the needs of those who are annoyed by
such things as having duplicate PATH entries.

I first had the urge to do this when working with Cygwin.
On Windows, which puts almost every executable in a different
directory, your PATH variable quickly can become overwhelming,
so removing duplicates makes it slightly less confusing
when you’re trying to decipher what’s actually in your PATH variable.

Your first thought about how to do this might be to break up the path
into the individual elements with sed and
then pass that through sort and uniq to get rid of duplicates.
But you’d quickly realize that that doesn’t work, since you’ve
now reordered the paths, and you don’t want that. You want to keep
the paths in their original order, just with duplicates removed.
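For the record, here’s roughly what that first idea looks like
(a sketch, using the same example PATH as the scripts below),
and why it fails:

$ echo -n /usr/bin:/bin:/usr/local/bin:/usr/bin:/bin | sed 's/:/\n/g' | sort | uniq
/bin
/usr/bin
/usr/local/bin

The duplicates are gone, but /bin now sorts ahead of /usr/bin,
so the search order of the PATH has changed.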

The original idea for this was not mine. I found the basic
code for it on the internet. I don’t remember exactly where, but
I believe it was on Stack Exchange.
The original bash/awk code was something like this:

PATH=$(echo $PATH | awk -v RS=: -v ORS=: '!($0 in a) {a[$0]; print}')

And it’s close. It almost works, but before looking at the output,
let’s look at why/how it works.
To do that, first notice the -v options. Those set the input
and output record separator variables (RS and ORS) that awk uses
to split the input data into individual records
and to reassemble them on output.
The default is to separate them by newlines—that is, each
line of input is a separate record.
Instead of newlines, let’s use colons as the separators,
which makes each of the individual paths in the PATH variable
a separate record.
You can see how this works in the following where you change only
the input separator and leave the output separator as the newline,
and come up with a simple awk one-liner to print each of the elements
of the path on a separate line:

$ cat showpath.sh
export PATH=/usr/bin:/bin:/usr/local/bin:/usr/bin:/bin
awk -v RS=: '{print}' <<<$PATH

$ bash showpath.sh
/usr/bin
/bin
/usr/local/bin
/usr/bin
/bin

So, back to the original code.
To help understand it, let’s make it look a bit more awkish by reformatting
it so that it has the more normal pattern { action }
or condition { action } look to it:

!($0 in a) {
    a[$0];
    print
}

The condition here is !($0 in a).
In this, $0 is the current input record, and a is an awk variable
(the use of the in operator tells you that a is an array).
Remember, each input record is an individual path from the PATH variable.
The part inside the parentheses, $0 in a, tests whether the path
is in the array a.
The exclamation point negates the condition.
So, if the current path is not in a, the action executes.
If the current path is in a, the action doesn’t execute,
and since that’s all there is to the script, nothing happens in that case.

If the current path is not in the array,
the code in the action uses the path as a key to
reference into the array.
In awk, arrays are associative arrays, and referencing a
non-existent element in an associative array automatically creates
the element.
By creating the element in the array, you’ve now set the array so
that the next time you see the same path element, your condition !($0 in a)
will fail and the action will not execute.
In other words, the action executes only the first time you see a given path.
And finally, after referencing the array, you print the current path,
and awk automatically adds the output separator.
Note that an empty print is equivalent to print $0.
Let’s see it in action:

$ cat nodupes.sh
export PATH=/usr/bin:/bin:/usr/local/bin:/usr/bin:/bin
echo $PATH | awk -v RS=: -v ORS=: '!($0 in a) {a[$0]; print}'

$ bash nodupes.sh
/usr/bin:/bin:/usr/local/bin:/bin
:

As I said, it almost works.
The only problem is there’s an extra newline and an extra colon on
the following line.
The extra newline comes from the fact that echo is adding a newline
onto the end of the path, and since awk is not treating newlines as
separators, it gets added to the end of the last path,
which, in this case, causes it to look like awk failed to remove a duplicate.
But awk doesn’t see them as duplicates; it sees
/bin and /bin\n (the second with a trailing newline attached).
You can eliminate the trailing newline by using the -n option to echo:

$ cat nodupes2.sh
export PATH=/usr/bin:/bin:/usr/local/bin:/usr/bin:/bin
echo -n $PATH | awk -v RS=: -v ORS=: '!($0 in a) {a[$0]; print}'

$ bash nodupes2.sh
/usr/bin:/bin:/usr/local/bin:

And you’re almost there, except for the trailing colon, which is not as
harmless as it looks: an empty PATH element is treated as the current
directory, which you probably don’t want. So, since you’ve come this far
on this somewhat pointless journey, you might as well go the distance.
To fix the problem, use awk’s printf command rather than print.
Unlike print, printf does not automatically include output record separators,
so you have to output them yourself:

$ cat nodupes3.sh
export PATH=/usr/bin:/bin:/usr/local/bin:/usr/bin:/bin
echo -n $PATH | awk -v RS=: '!($0 in a) {a[$0]; printf("%s%s", length(a) > 1 ? ":" : "", $0)}'

$ bash nodupes3.sh
/usr/bin:/bin:/usr/local/bin

You may be a bit confused by this at first glance.
Rather than eliminating the trailing separator,
you’ve reversed the logic, and you’re outputting the separator first,
then the PATH element, so instead of needing to eliminate the
trailing separator, you need to suppress a leading separator.
The record separator is output by the first %s format specifier
and comes from the expression length(a) > 1 ? ":" : "",
so it is only printed when there’s more than one element in the array
(that is, the second and subsequent times).

As I said at the outset, there’s no reason you have to remove
duplicate path entries; they cause no harm.
However, for some, the simple fact that they are there is
reason enough to eliminate them.
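For those readers, the final one-liner wraps up neatly as a shell
function; a small sketch you could drop into a .bashrc (the function
name is just a suggestion):

# Remove duplicate entries from a colon-separated list,
# preserving the original order:
dedupe_path() {
    echo -n "$1" | awk -v RS=: '!($0 in a) {a[$0]; printf("%s%s", length(a) > 1 ? ":" : "", $0)}'
}

PATH=$(dedupe_path "$PATH")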

Source

Introducing ODPi Egeria – The Industry’s First Open Metadata Standard | Linux.com

Organizations looking to better locate, understand, manage and gain value from their data have a new industry standard to leverage. ODPi, a nonprofit Linux Foundation organization focused upon accelerating the open ecosystem of big data solutions, recently announced ODPi Egeria, a new project that supports the free flow of metadata between different technologies and vendor offerings.

Recent data privacy regulations such as GDPR have brought data governance and security concerns to the forefront for enterprises, driving the need for a standard to ensure that data provenance and management is clear and consistent, supporting the free flow of metadata between different technologies and vendor offerings. Egeria enables this; it is the only open-source-driven solution designed to set a standard for leveraging metadata in line-of-business applications while enabling metadata repositories to federate across the enterprise.

The first release of Egeria focuses on creating a single virtual view of metadata. It can federate queries across different metadata repositories and can synchronize metadata between them. The synchronization protocol controls what is shared and with which repositories, and it ensures that updates to metadata can be made with integrity.

Read more at OpenDataScience

Source
