Workarounds for Exporting to EPUB » Linux Magazine

With a little workaround knowledge and Calibre, you can get the most out of LibreOffice’s EPUB export function.

Starting with the 6.0 release, LibreOffice supports export to EPUB format. Although an extension for EPUB export has been available for several years, this feature is long overdue, considering the growing popularity of ebooks. Unfortunately, though, using ebook export is not as simple as selecting an option from File | Export As. Unless you are exporting the simplest of document designs, many of LibreOffice's features are not exported. Just as efficiently exporting to HTML requires a special approach to document design, a successful EPUB export requires a few workarounds, or else editing the result in Calibre or another tool like Sigil. Since post-export editing requires a knowledge of Cascading Style Sheets (CSS), for most users the easiest place to make adjustments is in the original document.

The simpler a document, the more likely it will export to EPUB without problems. If a document is entirely text, most likely you can select File | Export As | Export Directly to EPUB and let LibreOffice do the work using the default export settings. You can probably also use File | Export As | Export as EPUB successfully; this dialog lets you tweak the metadata that is otherwise borrowed from your user settings and File | Properties, as well as settings such as where page breaks occur and the cover image for the export file. However, although Export as EPUB offers the option for a fixed format, the setting applies mainly to page breaks, and the formatting options remain limited (Figure 1).


Figure 1: LibreOffice offers a few controls for EPUB export, but drops many common formatting choices.

At the opposite extreme, if you require elaborate formatting – as in a brochure – the format for you is probably PDF. An offshoot of the PostScript printing language, the PDF format is well-equipped to handle any design you care to use, although online forms in particular can be challenging. LibreOffice has had PDF export for years, and the options available from File | Export As | Export as PDF allow fine-tuned control (Figure 2).


Figure 2: In contrast to the EPUB export controls, LibreOffice’s PDF controls offer a rich array of options.

However, if your formatting is somewhere in the middle of these two extremes, you can still use EPUB export with some success, so long as you know what design features are available and how to work around some of the deficiencies.

Basic Design Limits

Successful EPUB export is often a matter of trial and error. It may take several attempts to get the desired results. For that reason, you can save yourself effort if you use character and paragraph styles, which allow you to quickly make adjustments. Other LibreOffice styles – frames, pages, lists, and tables – are not recognized by the EPUB export filter, but you can use them for your own convenience.

EPUB export does not carry over character and paragraph style definitions, but you can count on font size, line spacing, alignment, and font color to export. If you have formatted some characters differently from the rest of the text, the difference is preserved, including subscript or superscript. Footnotes and endnotes are also preserved, as well as hyperlinks.

However, that is the end of the good news. Any characters that are not entered directly – such as those generated automatically or by fields – are lost and will not appear in the output. That means that while list items are preserved, bullets and numbers entered from the toolbar or by a list style are dropped, as are cross references based on page numbers and headings, and useful fields like page numbers.

Similarly, any sort of text frame is simply dropped. Text in sections, text frames, columns, and captions is exported, but only as regular paragraphs. The same is true for drop capitals. Text in table cells is also exported, but with erratic spacing and text formatting that makes it unusable.

Worst of all, no graphics or objects created using the Drawing toolbar are exported.

If designing for EPUBs sounds like a study in limitations, you are beginning to get the right idea.

Some Basic Workarounds

This advice is worth repeating: the simpler the document design, the less trouble with exporting to EPUB. The trouble, of course, is that you often need the features not supported by EPUB export. Fortunately, other ways exist to get the same results. At times, you can even use the limitations themselves to find a workaround.

To start with, if features cannot be exported automatically, you can export them by entering them manually. For example, if you want a bulleted list, use Insert | Special Character and enter the bullet character manually (Figure 3). Keep in mind that you cannot indent a list item’s second line to align it with the first line of text rather than the bullet, so you will want to keep your list items short. Cross references can also be entered manually, although admittedly at the cost of increased maintenance if you edit documents. If you are using the Fixed setting in the Export as EPUB window, you can also manually enter headers and footers, including lines to separate them from the rest of the text, by adding them to each page.


Figure 3: If you want a bulleted list in an EPUB export, you have to add the bullet character from the Special Characters dialog window.

If you want a drop capital, you can create one by adding a text box (Insert | Text Box), inserting a large capital into it, and adjusting the space between the capital and the right and bottom of the frame and the surrounding text. Exporting will drop the frame, leaving the capital and the space around it. Text boxes can also be added to create a multicolumn layout, although you need to be aware that EPUB reduces the spacing set between columns in LibreOffice.

Still more features can be preserved by not using the LibreOffice export features at all. Instead, install Calibre and import the original LibreOffice file into it. Calibre has its own conversion filters, and the one for EPUB is much more versatile. For instance, Calibre exports graphics and preserves their alignment. While it takes the first graphic as the cover, that can be edited later (Figure 4).


Figure 4: Calibre is useful for touching up EPUB exports.

Another useful feature of exporting with Calibre is that, while it cannot reproduce cross references based on page numbers or headings, it can reproduce those made with bookmarks.

Other workarounds depend on a knowledge of CSS. You can, for example, specify a font to use, rather than letting users choose their own. However, the use of CSS to produce ebooks is a lengthy subject and deserves an article to itself.

Other Considerations

LibreOffice probably chose to add EPUB support because it is an open standard. Kindle/Mobi is not an open standard, but if you need that format, import your LibreOffice or EPUB file into Calibre and convert it there – or, better yet, start from the original LibreOffice .odt file, since the fewer conversions you do, the less trouble you are likely to have. However, I am still investigating the limitations of converting to the Kindle/Mobi format.
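
Calibre also ships a command-line converter, ebook-convert, which can handle the same conversions without the GUI. A minimal sketch, with illustrative file names and metadata (run ebook-convert --help for the full option list):

# Convert the original ODT straight to EPUB, then to Kindle's MOBI format
ebook-convert mybook.odt mybook.epub --title "My Book" --authors "Jane Doe"
ebook-convert mybook.odt mybook.mobi --title "My Book" --authors "Jane Doe"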

Be aware, too, that the various validators online for different ebook uses can sometimes have wildly differing results. Some may not permit a fixed file, so check the preferred format before you begin. Since an EPUB file that reads well in Calibre may not be acceptable for a particular use, you should always refer back to any standards that the output file is required to meet.

As open source software, LibreOffice offers a convenient graphical interface for producing ebooks. For now, its EPUB tools remain basic, but, with the addition of Calibre, you can still have an open source tool chain for producing ebooks. Should both those applications fail to give the results you want, then look into CSS, which can be edited in Calibre or the text editor of your choice.

Source

Have a Plan for Netplan

Ubuntu changed networking. Embrace the YAML.

If I’m being completely honest, I still dislike the switch from eth0,
eth1, eth2 to names like enp3s0, enp4s0, enp5s0. I've learned to accept
it and mutter to myself while I type in unfamiliar interface names. Then I
installed the new LTS version of Ubuntu and typed vi
/etc/network/interfaces. Yikes. After a technological lifetime of entering
my server’s IP information in a simple text file, that’s no longer how
things are done. Sigh. The good news is that while figuring out Netplan for
both desktop and server environments, I fixed a nagging DNS issue I’ve had
for years (more on that later).

The Basics of Netplan

The old way of configuring Debian-based network interfaces was based on the
ifupdown package. The new default is called Netplan, and
although it’s not
terribly difficult to use, it’s drastically different. Netplan is sort of
the interface used to configure the back-end dæmons that actually
configure the interfaces. Right now, the back ends supported are
NetworkManager and networkd.

If you tell Netplan to use NetworkManager, all interface configuration
control is handed off to the GUI interface on the desktop. The
NetworkManager program itself hasn’t changed; it’s the same GUI-based
interface configuration system you’ve likely used for years.

If you tell Netplan to use networkd, systemd itself handles the interface
configurations. Configuration is still done with Netplan files, but once
“applied”, Netplan creates the back-end configurations systemd requires. The
Netplan files are vastly different from the old /etc/network/interfaces
file, but they use YAML syntax, and they're pretty easy to figure out.

The Desktop and DNS

If you install a GUI version of Ubuntu, Netplan is configured with
NetworkManager as the back end by default. Your system should get IP
information via DHCP or static entries you add via GUI. This is usually not
an issue, but I’ve had a terrible time with my split-DNS setup and
systemd-resolved. I’m sure there is a magical combination of configuration
files that will make things work, but I’ve spent a lot of time, and it
always behaves a little oddly. With my internal DNS server resolving domain
names differently from external DNS servers (that is, split-DNS), I get random
lookup failures. Sometimes ping will resolve, but
dig will not. Sometimes
the internal A record will resolve, but a CNAME will not. Sometimes I get
resolution from an external DNS server (from the internet), even though I
never configure anything other than the internal DNS!

I decided to disable systemd-resolved. That has the potential to break DNS
lookups in a VPN, but I haven't had an issue with that. With resolved handling DNS information, the /etc/resolv.conf file points to 127.0.0.53 as
the nameserver. Disabling systemd-resolved will stop the automatic creation
of the file. Thankfully, NetworkManager itself can handle the creation and
modification of /etc/resolv.conf. Once I make that change, I no longer have
an issue with split-DNS resolution. It’s a three-step process:

  1. Do sudo systemctl disable systemd-resolved.service.
  2. Then sudo rm /etc/resolv.conf (get rid of the symlink).
  3. Edit the /etc/NetworkManager/NetworkManager.conf file, and in the [main] section, add a line that reads dns=default.

Once those steps are complete, NetworkManager itself will create the
/etc/resolv.conf file, and the DNS server supplied via DHCP or static entry
will be used instead of a 127.0.0.53 entry. I'm not sure why the resolved dæmon incorrectly resolves internal addresses for me, but the above method
has been foolproof, even when switching between networks with my
laptop.
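
For reference, here is roughly what the two files end up looking like on my machines after the change (a sketch – the plugins line, the nameserver address, and the search domain are illustrative and will differ on your system):

# /etc/NetworkManager/NetworkManager.conf
[main]
plugins=ifupdown,keyfile
dns=default

# /etc/resolv.conf -- now written by NetworkManager instead of systemd-resolved
nameserver 192.168.1.1
search example.lan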

Netplan CLI Configuration

If Ubuntu is installed in server mode, it is almost certainly configured to
use networkd as the back end. To check, have a look at the
/etc/netplan/config.yaml file. The renderer should be set to networkd in order to use the systemd-networkd back end. The file should look something like this:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: true

Important note: remember that with
YAML files, whitespace matters, so the indentation is important. It’s also
very important to remember that after making any changes, you need to
run sudo netplan apply so the back-end configuration files are
populated.

The default renderer is networkd, so it’s possible you won’t have that line
in your configuration file. It’s also possible your configuration file will
be named something different in the /etc/netplan folder. All .yaml files in that folder are read, so it doesn't matter what the file is called as long as it ends with .yaml. Static configurations are fairly simple to set up:

network:
  version: 2
  renderer: networkd
  ethernets:
    enp2s0:
      dhcp4: no
      addresses:
        - 192.168.1.10/24
        - 10.10.10.10/16
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1, 8.8.8.8]

Notice I’ve assigned multiple IP addresses to the interface. Netplan does
not support virtual interfaces like enp3s0:0, rather multiple IP
addresses can be assigned to a single interface.

Unfortunately, networkd doesn’t create an /etc/resolv.conf file if you
disable the resolved dæmon. If you have problems with split-DNS on a
headless computer, the best solution I’ve come up with is to disable
systemd-resolved and then manually create an /etc/resolv.conf file. Since
headless computers don't usually move around as much as laptops, it's likely the /etc/resolv.conf file won't need to be changed. Still, I wish networkd had an option to manage the resolv.conf file the same way NetworkManager does.
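
If you go that route, the hand-written file needs only the basics. A minimal sketch (the addresses and search domain are examples, not recommendations):

# /etc/resolv.conf -- created manually after disabling systemd-resolved
nameserver 192.168.1.1
nameserver 8.8.8.8
search example.lan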

Advanced Network Configurations

The configuration formats are different, but it’s still possible to do more
advanced network configurations with Netplan:

Bonding:

network:
  version: 2
  renderer: networkd
  bonds:
    bond0:
      dhcp4: yes
      interfaces:
        - enp2s0
        - enp3s0
      parameters:
        mode: active-backup
        primary: enp2s0

The various bonding modes (balance-rr, active-backup, balance-xor, broadcast, 802.3ad, balance-tlb and balance-alb) are supported.

Bridging:

network:
  version: 2
  renderer: networkd
  bridges:
    br0:
      dhcp4: yes
      interfaces:
        - enp4s0
        - enp3s0

Bridging is even simpler to set up. This configuration creates a bridge
device using the two interfaces listed. The device (br0) gets address
information via DHCP.
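
If you want the bridge itself to have a static address rather than use DHCP, the same keys from the static example earlier apply to the bridge. An illustrative sketch (the addresses are examples):

network:
  version: 2
  renderer: networkd
  bridges:
    br0:
      dhcp4: no
      interfaces:
        - enp4s0
        - enp3s0
      addresses:
        - 192.168.1.20/24
      gateway4: 192.168.1.1
      nameservers:
        addresses: [192.168.1.1]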

CLI Networking Commands

If you’re a crusty old sysadmin like me, you likely type
ifconfig to see
IP information without even thinking. Unfortunately, those tools are not
usually installed by default. This isn’t actually the fault of Ubuntu and
Netplan; the old ifconfig toolset has been deprecated. If you want to use
the old ifconfig tool, you can install the package:

sudo apt install net-tools

But, if you want to do it the “correct” way, the new ip tool is the proper replacement. Here are some equivalents of things I commonly do with ifconfig:

Show network interface information.

Old way:

ifconfig

New way:

ip address show

(Or you can just do ip a, which is actually less typing than
ifconfig.)

Bring interface up.

Old way:

ifconfig enp3s0 up

New way:

ip link set enp3s0 up

Assign IP address.

Old way:

ifconfig enp3s0 192.168.1.22

New way:

ip address add 192.168.1.22 dev enp3s0

Assign complete IP information.

Old way:

ifconfig enp3s0 192.168.1.22 netmask 255.255.255.0 broadcast 192.168.1.255

New way:

ip address add 192.168.1.22/24 broadcast 192.168.1.255 dev enp3s0

Add alias interface.

Old way:

ifconfig enp3s0:0 192.168.100.100/24

New way:

ip address add 192.168.100.100/24 dev enp3s0 label enp3s0:0

Show the routing table.

Old way:

route

New way:

ip route show

Add route.

Old way:

route add -net 192.168.55.0/24 dev enp4s0

New way:

ip route add 192.168.55.0/24 dev enp4s0
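
Keep in mind that routes added with ip this way do not persist across reboots. To make a static route permanent under Netplan, it belongs in the YAML instead. A minimal sketch (the interface name and addresses are illustrative):

network:
  version: 2
  renderer: networkd
  ethernets:
    enp4s0:
      dhcp4: yes
      routes:
        - to: 192.168.55.0/24
          via: 192.168.1.1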

Old Dogs and New Tricks

I hated Netplan when I first installed Ubuntu 18.04. In fact, on the
particular server I was installing, I actually started over and installed
16.04 because it was “comfortable”. After a while, curiosity got the better
of me, and I investigated the changes. I’m still more comfortable with the
old /etc/network/interfaces file, but I have to admit, Netplan makes a
little more sense. There is a single “front end” for configuring networks,
and it uses different back ends for the heavy lifting. Right now, the only
back ends are the GUI NetworkManager and the systemd-networkd
dæmon. With
the modular system, however, that could change someday without the need to
learn a new way of configuring interfaces. A simple change to the
renderer line would send the configuration information to a new back end.

With regard to the new command-line networking tool (ip vs.
ifconfig),
it really behaves more like other network devices (routers and so on), so that’s
probably a good change as well. As technologists, we need to be ready and
eager to learn new things. If we weren’t always trying the next best thing,
we’d all be configuring Trumpet Winsock to dial in to the internet on our
Windows 95 machines. I’m glad I tried that new Linux thing, and while it
wasn’t quite as dramatic, I’m glad I tried Netplan as well!

Source

Canonical Announces Partnership with Eurotech, the Big Four to End Support of TLS 1.0 and 1.1, Sony Using Blockchain for DRM, NETWAYS Web Services Launches IaaS OpenStack, Grey Hat Patching MikroTik Routers and Paul Allen Dies at 65

News briefs for October 16, 2018.

Canonical
announced a partnership with Eurotech
to help organizations
advance in the IoT realm. In connection with this partnership, Canonical
“has published a Snap for the Eclipse Kura project—the
popular, open-source Java-based IoT edge framework. Having Kura available as
a Snap—the universal Linux application packaging format—will
enable a wider availability of Linux users across multiple distributions to
take advantage of the framework and ensure it is supported on more hardware.
Snap support will also extend on Eurotech’s commercially supported
version; the Everywhere Software Framework (ESF).”

Apple, Google, Microsoft and Mozilla all announced the end of support for the TLS 1.0 and 1.1 standards starting in 2020, ZDNet reports. Chrome and Firefox already support TLS 1.3, and Microsoft and
Apple will soon follow suit.

Sony announced it’s planning to use the blockchain for digital rights
management (DRM). According to the story on Engadget, the company plans to begin with written educational materials from Sony Global Education. This blockchain system is “built on Sony’s
pre-existing DRM tools, which keep track of the distribution of copyrighted
materials, but will have advantages that come with blockchain’s inherent
security.”

NETWAYS Web Services launches IaaS OpenStack.
According to the press release, “the Open Source experts from ‘NETWAYS Web
Services’ (NWS) add with OpenStack
a customizable, fully managed Infrastructure as a Service (Iaas) to their
platform.” Customers can choose between SSD or Ceph based packages, and in
addition to OpenStack, the platform offers “a diverse selection of Open
Source applications for various purposes”. If you’re interested, you can try NWS OpenStack 30 days for free.
For more information and to get started, go here.

A grey-hat hacker is breaking into MikroTik routers and patching them so
they can’t be compromised by cryptojackers or other attackers. According
to ZDNet, the hacker, who goes by Alexey, is a system administrator and claims to have disinfected more than 100,000 MikroTik
routers. He told ZDNet that he added firewall rules to block access to
the routers from outside the local network, and then “in the comments, I wrote information about the
vulnerability and left the address of the @router_os Telegram channel, where
it was possible for them to ask questions.” Evidently, a few folks have said “thanks”, but many are outraged.

Paul Allen—”co-founder of Microsoft and noted technologist,
philanthropist, community builder, conservationist, musician and supporter of
the arts”—passed away yesterday. See the statements released on behalf
of the Allen Family, Vulcan Inc. and the Paul G. Allen network at the Vulcan Inc. website.

Source

Set Up a CI/CD Pipeline with a Jenkins Pod in Kubernetes (Part 2) | Linux.com

In Part 1 of our series, we got our local Kubernetes cluster up and running with Docker, Minikube, and kubectl. We set up an image repository, and tried building, pushing, and deploying a container image with code changes we made to the Hello-Kenzan app. It’s now time to automate this process.

In Part 2, we’ll set up continuous delivery for our application by running Jenkins in a pod in Kubernetes. We’ll create a pipeline using a Jenkins 2.0 Pipeline script that automates building our Hello-Kenzan image, pushing it to the registry, and deploying it in Kubernetes. That’s right: we are going to deploy pods from a registry pod using a Jenkins pod. While this may sound like a bit of deployment alchemy, once the infrastructure and application components are all running on Kubernetes, it makes the management of these pieces easy since they’re all under one ecosystem.

With Part 2, we’re laying the last bit of infrastructure we need so that we can run our Kr8sswordz Puzzle in Part 3.


This tutorial only runs locally in Minikube and will not work on the cloud. You’ll need a computer running an up-to-date version of Linux or macOS. Optimally, it should have 16 GB of RAM. Minimally, it should have 8 GB of RAM. For best performance, reboot your computer and keep the number of running apps to a minimum.

Creating and Building a Pipeline in Jenkins

Before you begin, you’ll want to make sure you’ve run through the steps in Part 1, in which we set up our image repository running in a pod (to do so quickly, you can run the npm part1 automated script detailed below).

If you previously stopped Minikube, you’ll need to start it up again. Enter the following terminal command, and wait for the cluster to start:

minikube start

You can check the cluster status and view all the pods that are running.

kubectl cluster-info

kubectl get pods --all-namespaces

Make sure that the registry pod has a Status of Running.
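
For example, one quick way to confirm that is to filter the full pod listing (the grep pattern assumes the pod's name contains "registry", as the article's references to it suggest):

kubectl get pods --all-namespaces | grep registry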

We are ready to build out our Jenkins infrastructure.

Remember, you don’t actually have to type the commands below—just press Enter at each step and the script will enter the command for you!

1. First, let’s build the Jenkins image we’ll use in our Kubernetes cluster.

docker build -t 127.0.0.1:30400/jenkins:latest -f applications/jenkins/Dockerfile applications/jenkins

2. Once again we’ll need to set up the Socat Registry proxy container to push images, so let’s build it. Feel free to skip this step in case the socat-registry image already exists from Part 1 (to check, run docker images).

docker build -t socat-registry -f applications/socat/Dockerfile applications/socat

3. Run the proxy container from the image.

docker stop socat-registry; docker rm socat-registry;

docker run -d -e "REG_IP=`minikube ip`" -e "REG_PORT=30400" --name socat-registry -p 30400:5000 socat-registry


This step will fail if local port 30400 is currently in use by another process. You can check if there’s any process currently using this port by running the command
lsof -i :30400

4. With our proxy container up and running, we can now push our Jenkins image to the local repository.

docker push 127.0.0.1:30400/jenkins:latest

You can see the newly pushed Jenkins image in the registry UI using the following command.

minikube service registry-ui

5. The proxy’s work is done, so you can go ahead and stop it.

docker stop socat-registry

6. Deploy Jenkins, which we’ll use to create our automated CI/CD pipeline. It will take the pod a minute or two to roll out.

kubectl apply -f manifests/jenkins.yaml; kubectl rollout status deployment/jenkins

Inspect all the pods that are running. You’ll see a pod for Jenkins now.

kubectl get pods


Jenkins as a CD tool needs special rights in order to interact with the Kubernetes cluster, so we’ve set up RBAC (Role-Based Access Control) authorization for it inside the jenkins.yaml deployment manifest. RBAC consists of a Role, a ServiceAccount and a Binding object that binds the two together. Here’s how we configured Jenkins with these resources:

Role: For simplicity we leveraged the pre-existing ClusterRole “cluster-admin” which by default has unlimited access to the cluster. (In a real life scenario you might want to narrow down Jenkins’ access rights by creating a new role with the least privileged PolicyRule.)

ServiceAccount: We created a new ServiceAccount named “Jenkins”. The property “automountServiceAccountToken” has been set to true; this will automatically mount the authentication resources needed for a kubeconfig context to be setup on the pod (i.e. Cluster info, User represented by a token and a Namespace).

RoleBinding: We created a ClusterRoleBinding that binds the “Jenkins” ServiceAccount to the “cluster-admin” ClusterRole.

Lastly, we tell our Jenkins deployment to run as the Jenkins ServiceAccount.


Notice our Jenkins deployment has an initContainer. This is a container that will run to completion before the main container is deployed on our pod. The job of this init container is to create a kubeconfig file based on the provided context and to share it with the main Jenkins container through an “emptyDir” volume.
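
As an illustration only – not the exact contents of jenkins.yaml, which may differ – the ServiceAccount and ClusterRoleBinding portions of such a setup typically look something like this (the namespace is shown as default for the sketch):

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
automountServiceAccountToken: true
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: jenkins-cluster-admin
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: default

The Deployment then sets serviceAccountName: jenkins in its pod spec so the main container (and the initContainer) run with those rights.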

7. Open the Jenkins UI in a web browser.

minikube service jenkins

8. Display the Jenkins admin password with the following command, and right-click to copy it.

kubectl exec -it `kubectl get pods --selector=app=jenkins --output=jsonpath={.items..metadata.name}` cat /var/jenkins_home/secrets/initialAdminPassword

9. Switch back to the Jenkins UI. Paste the Jenkins admin password in the box and click Continue. Click Install suggested plugins. Plugins have actually been pre-downloaded during the Jenkins image build, so this step should finish fairly quickly.


One of the plugins being installed is Kubernetes Continuous Deploy, which allows Jenkins to directly interact with the Kubernetes cluster rather than through kubectl commands. This plugin was pre-downloaded with the Jenkins image build.

10. Create an admin user and credentials, and click Save and Continue. (Make sure to remember these credentials as you will need them for repeated logins.)


11. On the Instance Configuration page, click Save and Finish. On the next page, click Restart (if it appears to hang for some time on restarting, you may have to refresh the browser window). Log in to Jenkins.

12. Before we create a pipeline, we first need to provision the Kubernetes Continuous Deploy plugin with a kubeconfig file that will allow access to our Kubernetes cluster. In Jenkins on the left, click on Credentials, select the Jenkins store, then Global credentials (unrestricted), and Add Credentials on the left menu.

13. The following values must be entered precisely as indicated:

  • Kind: Kubernetes configuration (kubeconfig)

  • ID: kenzan_kubeconfig

  • Kubeconfig: From a file on the Jenkins master

  • File: /var/jenkins_home/.kube/config

Finally, click OK.


14. We now want to create a new pipeline for use with our Hello-Kenzan app. Back on Jenkins Home, on the left, click New Item.


Enter the item name as Hello-Kenzan Pipeline, select Pipeline, and click OK.


15. Under the Pipeline section at the bottom, change the Definition to be Pipeline script from SCM.

16. Change the SCM to Git. Change the Repository URL to be the URL of your forked Git repository, such as https://github.com/[GIT USERNAME]/kubernetes-ci-cd.


Note that for the Script Path, we are using a Jenkinsfile located in the root of our project on our GitHub repo. This defines the build, push, and deploy steps for our hello-kenzan application.

Click Save. On the left, click Build Now to run the new pipeline. You should see it run through the build, push, and deploy steps in a few seconds.


17. After all pipeline stages are colored green as complete, view the Hello-Kenzan application.

minikube service hello-kenzan

You might notice that you’re not seeing the uncommitted change you previously made to index.html in Part 1. That’s because Jenkins wasn’t using your local code. Instead, Jenkins pulled the code from your forked repo on GitHub, used that code to build the image, push it, and then deploy it.

Pushing Code Changes Through the Pipeline

Now let’s see some Continuous Integration in action! Try changing the index.html in our Hello-Kenzan app, then building again to verify that the Jenkins build process works.

a. Open applications/hello-kenzan/index.html in a text editor.

nano applications/hello-kenzan/index.html

b. Add the following html at the end of the file (or any other html you like). (Tip: You can right-click in nano and choose Paste.)

<p style="font-family:sans-serif">For more from Kenzan, check out our
<a href="http://kenzan.io">website</a>.</p>

c. Press Ctrl+X to close the file, type Y to confirm saving the changes, and press Enter to confirm the filename and write the changes to the file.

d. Commit the changed file to your Git repo (you may need to enter your GitHub credentials):

git commit -am "Added message to index.html"

git push

In the Jenkins UI, click Build Now to run the build again.


18. View the updated Hello-Kenzan application. You should see the message you added to index.html. (If you don’t, hold down Shift and refresh your browser to force it to reload.)

minikube service hello-kenzan


And that’s it! You’ve successfully used your pipeline to automatically pull the latest code from your Git repository, build and push a container image to your cluster, and then deploy it in a pod. And you did it all with one click—that’s the power of a CI/CD pipeline.

If you’re done working in Minikube for now, you can go ahead and stop the cluster by entering the following command:

minikube stop

Automated Scripts

If you need to walk through the steps we did again (or do so quickly), we’ve provided npm scripts that will automate running the same commands in a terminal.

1. To use the automated scripts, you’ll need to install NodeJS and npm.

On Linux, follow the NodeJS installation steps for your distribution. To quickly install NodeJS and npm on Ubuntu 16.04 or higher, use the following terminal commands.

a. curl -sL https://deb.nodesource.com/setup_7.x | sudo -E bash -

b. sudo apt-get install -y nodejs

2. Change directories to the cloned repository and install the interactive tutorial script:

a. cd ~/kubernetes-ci-cd

b. npm install

3. Start the script

npm run part1 (or part2, part3, part4 of the blog series)

4. Press Enter to proceed running each command.

Up Next

In Parts 3 and 4, we will deploy our Kr8sswordz Puzzle app through a Jenkins CI/CD pipeline. We will demonstrate its use of caching with etcd, as well as scaling the app up with multiple puzzle service instances so that we can try running a load test. All of this will be shown in the UI of the app itself so that we can visualize these pieces in action.

Curious to learn more about Kubernetes? Enroll in Introduction to Kubernetes, a FREE training course from The Linux Foundation, hosted on edX.org.

This article was revised and updated by David Zuluaga, a front-end developer at Kenzan. He was born and raised in Colombia, where he earned his BE in Systems Engineering. After moving to the United States, he received his master’s degree in computer science at Maharishi University of Management. David has been working at Kenzan for four years, dynamically moving throughout a wide range of areas of technology, from front-end and back-end development to platform and cloud computing. David’s also helped design and deliver training sessions on Microservices for multiple client teams.

Source

Automotive Grade Linux Enables Telematics and Instrument Cluster Applications with Latest UCB 6.0 Release

SAN FRANCISCO, October 15, 2018 – Automotive Grade Linux (AGL), a collaborative cross-industry effort developing an open platform for the connected car, today announced the latest release of the AGL platform, Unified Code Base (UCB) 6.0, which features device profiles for telematics and instrument cluster.

“The addition of the telematics and instrument cluster profiles opens up new deployment possibilities for AGL,” said Dan Cauchy, Executive Director of Automotive Grade Linux at The Linux Foundation. “Motorcycles, fleet services, rental car tracking, basic economy cars with good old-fashioned radios, essentially any vehicle without a head unit or infotainment display can now leverage the AGL Unified Code Base as a starting point for their products.”

Developed through a joint effort by dozens of member companies, the AGL Unified Code Base (UCB) is an open source software platform that can serve as the de facto industry standard for infotainment, telematics and instrument cluster applications. Sharing a single software platform across the industry reduces fragmentation and accelerates time-to-market by encouraging the growth of a global ecosystem of developers and application providers that can build a product once and have it work for multiple automakers.

Many AGL members have already started integrating the UCB into their production plans. Mercedes-Benz Vans is using AGL as a foundation for a new onboard operating system for its commercial vehicles, and Toyota’s AGL-based infotainment system is now in vehicles globally.

The AGL UCB 6.0 includes an operating system, middleware and application framework. Key features include:

  • Device profiles for telematics and instrument cluster
  • Core AGL Service layer can be built stand-alone
  • Reference applications including media player, tuner, navigation, web browser, Bluetooth, WiFi, HVAC control, audio mixer and vehicle controls
  • Integration with simultaneous display on IVI system and instrument cluster
  • Multiple display capability including rear seat entertainment
  • Wide range of hardware board support including Renesas, Qualcomm Technologies, Intel, Texas Instruments, NXP and Raspberry Pi
  • Software Development Kit (SDK) with application templates
  • SmartDeviceLink ready for easy integration and access to smartphone applications
  • Application Services APIs for navigation, voice recognition, bluetooth, audio, tuner and CAN signaling
  • Near Field Communication (NFC) and identity management capabilities including multilingual support
  • Over-The-Air (OTA) upgrade capabilities
  • Security frameworks with role-based-access control

The full list of additions to the UCB 6.0 can be found here.

The global AGL community will gather in Dresden, Germany for the bi-annual All Member Meeting on October 17-18, 2018. At this gathering, members and community leaders will get together to share best practices and future plans for the project. To learn more or register, please visit here.

About Automotive Grade Linux (AGL)

Automotive Grade Linux is a collaborative open source project that is bringing together automakers, suppliers and technology companies to accelerate the development and adoption of a fully open software stack for the connected car. With Linux at its core, AGL is developing an open platform from the ground up that can serve as the de facto industry standard to enable rapid development of new features and technologies. Although initially focused on In-Vehicle-Infotainment (IVI), AGL is the only organization planning to address all software in the vehicle, including instrument cluster, heads up display, telematics, advanced driver assistance systems (ADAS) and autonomous driving. The AGL platform is available to all, and anyone can participate in its development. Learn more: https://www.automotivelinux.org/

Automotive Grade Linux is a Collaborative Project at The Linux Foundation. Linux Foundation Collaborative Projects are independently funded software projects that harness the power of collaborative development to fuel innovation across industries and ecosystems. www.linuxfoundation.org

###

The Linux Foundation has registered trademarks and uses trademarks. For a list of trademarks of The Linux Foundation, please see our trademark usage page: https://www.linuxfoundation.org/trademark-usage. Linux is a registered trademark of Linus Torvalds.

Media Inquiries

Emily Olin

Automotive Grade Linux

eolin@linuxfoundation.org

Source

Install Plex Media Server on CentOS 7

by Marin Todorov | Published: October 15, 2018 |


‘,
enableHover: false,
enableTracking: true,
buttons: { twitter: },
click: function(api, options){
api.simulateClick();
api.openPopup(‘twitter’);
}
});
jQuery(‘#facebook’).sharrre({
share: {
facebook: true
},
template: ‘

‘,
enableHover: false,
enableTracking: true,
click: function(api, options){
api.simulateClick();
api.openPopup(‘facebook’);
}
});
jQuery(‘#googleplus’).sharrre({
share: {
googlePlus: true
},
template: ‘

‘,
enableHover: false,
enableTracking: true,
urlCurl: ‘https://www.tecmint.com/wp-content/themes/tecmint/js/sharrre.php’,
click: function(api, options){
api.simulateClick();
api.openPopup(‘googlePlus’);
}
});
jQuery(‘#linkedin’).sharrre({
share: {
linkedin: true
},
template: ‘

‘,
enableHover: false,
enableTracking: true,
buttons: {
linkedin: {
description: ‘Install Plex Media Server on CentOS 7’,media: ‘https://www.tecmint.com/wp-content/uploads/2018/10/Install-Plex-Media-Server-on-CentOS-7.png’ }
},
click: function(api, options){
api.simulateClick();
api.openPopup(‘linkedin’);
}
});
// Scrollable sharrre bar, contributed by Erik Frye. Awesome!
var shareContainer = jQuery(“.sharrre-container”),
header = jQuery(‘#header’),
postEntry = jQuery(‘.entry’),
$window = jQuery(window),
distanceFromTop = 20,
startSharePosition = shareContainer.offset(),
contentBottom = postEntry.offset().top + postEntry.outerHeight(),
topOfTemplate = header.offset().top;
getTopSpacing();
shareScroll = function(){
if($window.width() > 719){
var scrollTop = $window.scrollTop() + topOfTemplate,
stopLocation = contentBottom – (shareContainer.outerHeight() + topSpacing);
if(scrollTop > stopLocation){
shareContainer.offset();
}
else if(scrollTop >= postEntry.offset().top-topSpacing){
shareContainer.offset();
}else if(scrollTop 1024)
topSpacing = distanceFromTop + jQuery(‘.nav-wrap’).outerHeight();
else
topSpacing = distanceFromTop;
}
});


Marin Todorov

I am a bachelor in computer science and a Linux Foundation Certified System Administrator. Currently working as a Senior Technical support in the hosting industry. In my free time I like testing new software and inline skating.


Source

How to List All Virtual Hosts in Apache Web Server

by Aaron Kili | Published: October 16, 2018 |


‘,
enableHover: false,
enableTracking: true,
buttons: { twitter: },
click: function(api, options){
api.simulateClick();
api.openPopup(‘twitter’);
}
});
jQuery(‘#facebook’).sharrre({
share: {
facebook: true
},
template: ‘

‘,
enableHover: false,
enableTracking: true,
click: function(api, options){
api.simulateClick();
api.openPopup(‘facebook’);
}
});
jQuery(‘#googleplus’).sharrre({
share: {
googlePlus: true
},
template: ‘

‘,
enableHover: false,
enableTracking: true,
urlCurl: ‘https://www.tecmint.com/wp-content/themes/tecmint/js/sharrre.php’,
click: function(api, options){
api.simulateClick();
api.openPopup(‘googlePlus’);
}
});
jQuery(‘#linkedin’).sharrre({
share: {
linkedin: true
},
template: ‘

‘,
enableHover: false,
enableTracking: true,
buttons: {
linkedin: {
description: ‘How to List All Virtual Hosts in Apache Web Server’,media: ‘https://www.tecmint.com/wp-content/uploads/2018/10/List-All-Virtual-Hosts-in-Apache-Web-Server.png’ }
},
click: function(api, options){
api.simulateClick();
api.openPopup(‘linkedin’);
}
});
// Scrollable sharrre bar, contributed by Erik Frye. Awesome!
var shareContainer = jQuery(“.sharrre-container”),
header = jQuery(‘#header’),
postEntry = jQuery(‘.entry’),
$window = jQuery(window),
distanceFromTop = 20,
startSharePosition = shareContainer.offset(),
contentBottom = postEntry.offset().top + postEntry.outerHeight(),
topOfTemplate = header.offset().top;
getTopSpacing();
shareScroll = function(){
if($window.width() > 719){
var scrollTop = $window.scrollTop() + topOfTemplate,
stopLocation = contentBottom – (shareContainer.outerHeight() + topSpacing);
if(scrollTop > stopLocation){
shareContainer.offset();
}
else if(scrollTop >= postEntry.offset().top-topSpacing){
shareContainer.offset();
}else if(scrollTop 1024)
topSpacing = distanceFromTop + jQuery(‘.nav-wrap’).outerHeight();
else
topSpacing = distanceFromTop;
}
});


Aaron Kili

Aaron Kili is a Linux and F.O.S.S enthusiast, an upcoming Linux SysAdmin, web developer, and currently a content creator for TecMint who loves working with computers and strongly believes in sharing knowledge.


Source

Helios4 Arm-Based Open Source NAS SBC For Linux/FreeBSD

Helios4 is an ARM-based open source NAS SBC (single-board computer) for Linux. This NAS (Network Attached Storage) device comes with four SATA 3.0 ports and ECC memory. Let us look at some details about the Helios4 ARM-based open source NAS SBC and the ongoing Kickstarter campaign.

What is network-attached storage (NAS)?

NAS is an acronym for network-attached storage. A NAS server or computer can store and retrieve files from a centralized location on your LAN or intranet. NAS devices typically use Ethernet-based connections and do not have display output. A NAS does not need a keyboard or mouse to operate; you can manage it using an SSH-based tool or a browser-based configuration tool.

NAS allows users to share data using standard protocols such as NFS, CIFS, iSCSI, FTP, SSH and more. You can turn a NAS into a personal cloud. NAS supports MS-Windows, macOS, Linux and Unix clients. Advanced NAS features may include full-disk encryption and virtualization support.

Helios4 Arm-Based Open Source NAS SBC For Linux

Helios4 is an ARM-based device specially designed for Network Attached Storage (NAS). The ARMADA 38x-MicroSoM from SolidRun is the main SoC (system on chip) for Helios4. Kobol claims that the specs for Helios4 are open source and that it is an open hardware project.

Helios4 hardware specification

  1. CPU – Marvell Armada 388 (88F6828) ARM Cortex-A9, ARMv7 32-bit, dual core 1.6 GHz
  2. CPU feature – RAID acceleration engines and security acceleration engines
  3. RAM – 2GB DDR3L ECC
  4. SATA 3.0 Ports – 4
  5. GbE LAN Port – 1
  6. USB 3.0 ports – 2
  7. microSD (SDIO 3.0) – 1
  8. GPIO – 12
  9. I2C – 1
  10. UART – 1 (via onboard Micro-USB converter)
  11. SPI NOR Flash – 32Mbit onboard
  12. PWM FAN – 2
  13. DC input – 12V / 8A

Helios4 SBC

Helios4 software specification

  1. Armbian Linux operating system
  2. Mdadm for RAID support on Linux
  3. OpenMediaVault Linux NAS operating system
  4. FreeBSD head
  5. U-Boot the Universal Boot Loader for SBC. You need it for both Linux and FreeBSD

Helios4 pricing

Helios4 price

  • Full Kit (2GB RAM ECC) – USD 194.60 + VAT + Shipping
  • Basic Kit (2GB RAM ECC) – USD 176.20 + VAT + Shipping

Conclusion

Helios4 assembled with the SBC
Overall, the Helios4 is a low-cost, sturdy, and energy-efficient SoC NAS. It can run both the Linux and FreeBSD (head) operating systems. It comes with 2GB of ECC RAM for data protection. It supports software RAID and maxes out at 48TB of disk (4 x 12TB disks). The price is competitive too. I think it is a hacker's dream system due to its open hardware and open source software. You can order it online here and find more information here.
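
As a rough illustration of the software RAID mentioned above, building a four-disk RAID 10 array with mdadm might look like the following (device names, RAID level, and mount point are illustrative, not a Helios4-specific recipe):

# Create the array from four disks, then format and mount it
mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/storage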

Posted by: Vivek Gite

The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

Source

Kali Linux for Vagrant: Hands-On | Linux.com

What Vagrant actually does is provide a way of automating the building of virtualized development environments using a variety of the most popular providers, such as VirtualBox, VMware, AWS and others. It not only handles the initial setup of the virtual machine, it can also provision the virtual machine based on your specifications, so it provides a consistent environment which can be shared and distributed to others.

The first step, obviously, is to get Vagrant itself installed and working — and as it turns out, doing that requires getting at least one of the virtual machine providers installed and working. In the case of the Kali distribution for Vagrant, this means getting VirtualBox installed.

Fortunately, both VirtualBox and Vagrant are available in the repositories of most of the popular Linux distributions. I typically work on openSUSE Tumbleweed, and I was able to install both of them from the YaST Software Management tool. I have also checked that both are available on Manjaro, Debian Testing and Linux Mint. I didn’t find Vagrant on Fedora, but there are several articles in the Fedora Developer Portal which describe installing and using it.
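
Once both are installed, bringing the Kali box up takes only a few commands. A minimal sketch, assuming the box is published on Vagrant Cloud under the name kalilinux/rolling (check the official listing for the current name):

# Create a Vagrantfile referencing the box, then download, boot, and log in to it
vagrant init kalilinux/rolling
vagrant up
vagrant ssh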

Read more at ZDNet

Source

Spinnaker: The Kubernetes of Continuous Delivery | Linux.com

Comparing Spinnaker and Kubernetes in this way is somewhat unfair to both projects. The scale, scope, and magnitude of these technologies are different, but parallels can still be drawn.

Just like Kubernetes, Spinnaker is a technology that is battle tested, with Netflix using Spinnaker internally for continuous delivery. Like Kubernetes, Spinnaker is backed by some of the biggest names in the industry, which helps breed confidence among users. Most importantly, though, both projects are open source, designed to build a diverse and inclusive ecosystem around them.

Frankenstein’s Monster

Continuous Delivery (CD) is a solved problem, but it has been a bit of a Frankenstein’s monster, with companies trying to build their own creations by stitching parts together, along with Jenkins. “We tried to build a lot of custom continuous delivery tooling, but they all fell short of our expectation,” said Brandon Leach, Sr. Manager of Platform Engineering at Lookout.

“We were using Jenkins along with tools like Rundeck, but both had their own set of problems. While Rundeck didn’t have a first-class deployment tool, Jenkins was becoming a nightmare and we ended up moving to Gitlabs,” said Gard Voigt Rimestad of Schibsted, a major Norwegian media group.

Netflix created a more elegant tool for continuous delivery called Asgard, open sourced in 2012, which was designed to run Netflix’s own workload on AWS. Many companies were using Asgard, including Schibsted, and it was gaining momentum. But it was tied closely to the kind of workload Netflix was running on AWS. Bigger companies that liked Asgard forked it to run their own workloads. IBM forked it twice to make it work with Docker containers.

IBM’s forking of Asgard was an eye-opening experience for Netflix. At that point, Netflix had started looking into containerized workloads, and IBM showed how it could be done with Asgard.

Google was also planning to fork Asgard to make it work on Google Compute Engine. By that time, Netflix had started working on the successor to Asgard, called Spinnaker. “Before Google could fork the project, we managed to convince Google to collaborate on Spinnaker instead of forking Asgard. Pivotal also joined in,” said Andy Glover, shepherd of Spinnaker and Director of Delivery Engineering at Netflix. The rest is history.

Continuous popularity

There are many factors at play that contribute to the popularity and adoption of Spinnaker. First and foremost, it’s a proven technology that’s been used at Netflix. It instills confidence in users. “Spinnaker is the way Netflix deploys its services. They do things at the scale we don’t do in AWS. That was compelling,” said Leach.

The second factor is the powerful community around Spinnaker that includes heavyweights like Microsoft, Google, and Netflix. “These companies have engineers on their staff that are dedicated to working on Spinnaker,” added Leach.

Governance

In October 2018, the Spinnaker community organized its first official Spinnaker Summit in Seattle. During the Summit, the community announced the governance structure for the project.

“Initially, there will be a steering committee and a technical oversight committee. At the moment Google and Netflix are steering the governance body, but we would like to see more diversity,” said Steven Kim, Google’s Software Engineering Manager who leads the Google team that works on Spinnaker. The broader community is organized around a set of special interest groups (SIGs) that enable users to focus on particular areas of interest.

“There are users who have deployed Spinnaker in their environment, but they are often intimidated by two big players like Google and Netflix. The governance structure will enable everyone to be able to have a voice in the community,” said Kim.

At the moment, the project is being run by Google and Netflix, but eventually, it may be donated to an organization that has a better infrastructure for managing such projects. “It could be the OpenStack Foundation, CNCF, or the Apache Foundation,” said Boris Renski, Co-founder and CMO of Mirantis.

I met with more than a dozen users at the Summit, and they were extremely bullish about Spinnaker. Companies are already using it in a way even Netflix didn’t envision. Since continuous delivery is at the heart of multi-cloud strategy, Spinnaker is slowly but steadily starting to beat at the heart of many companies.

Spinnaker might not become as big as Kubernetes, due to its scope, but it’s certainly becoming as important. Spinnaker has made some bold promises, and I am sure it will continue to deliver on them.

Source
