The Brains Behind the Books – Part IV: Tanja Roth


    This article has been contributed by Tanja Roth, Technical Writer at the SUSE Documentation Team. It is part of a series of articles focusing on SUSE Documentation and the great minds that create the manuals, guides, quick starts, and many more helpful documents.

    The Early Years

    I grew up in Trier, Rhineland-Palatinate, Germany, a city famous for its well-preserved Roman and medieval buildings. At school, I was interested in both languages and natural sciences. As an adolescent, I did not have any particular career aspirations and my parents did not push me in any specific direction – something that I am still grateful for! Based on my interests, I decided to study phonetics – a branch of linguistics that deals with the sounds of human speech. Apart from linguistics and communication theory, my studies also involved applied physics (analyzing speech signals), medical topics (learning how speech is produced, perceived, and neurophysiologically processed), and some computational linguistics. During two courses on programming languages, I first came into contact with UNIX: I needed to upload my programming homework via FTP from the university’s computer pool, using the Bourne shell. Other than that, I mostly worked in a DOS and Microsoft Windows environment at university, plus some specialized systems for signal processing.

    Analysis of a speech signal, including a Fast Fourier Transform

    From Phonetics to Steelworks

    Had anyone told me at that time that I would work in information technology later on, I would not have believed it! But instead of treating patients with language and speech disorders or helping to identify blackmailers based on their ‘voiceprints’, further studies led me to the field of technical communication. I got my first insights into technical writing as a working student at a company that constructed cold-rolling mills – very exciting! Cold-rolling mills are used to produce metal sheets and foils. The mills have gigantic dimensions and are usually customer-specific, built from many components that are manufactured at different sites around the world. For this reason, unfortunately, it was impossible for me to see a cold-rolling mill in action; I could only study some components that were manufactured on site. For the product documentation, we had to rely heavily on blueprints, interviews with the engineers, and the results of the obligatory hazard analysis. For legal reasons, the documentation for the cold-rolling mills had to be provided in printed format. The complete set of documentation for each rolling mill filled several shelves!

    Knowing that I had found ‘my’ vocational field, I moved on to a service provider for technical documentation. This allowed me to ramp up my knowledge of technical communication and project management in a very short time. I worked with customers in different industries, using highly standardized documentation processes and workflows. All of the documentation was written in SGML or XML and was published to multiple output formats using single-source publishing. I got excellent on-the-job training at this service provider, but in some projects I also had to jump into the pond and learn to swim. Looking back, both have helped me a lot in every subsequent job!

    From Mechanical Engineering to IT

    My next job as a technical writer in information technology showed me that documenting software is an entirely different beast. Software is far more volatile than large machines or mechanical parts. Mechanical parts are ‘palpable’ and allow you to analyze their main functions even when looking at work-in-progress versions of them. However, bugs during the alpha and beta phases of software development can easily keep you from installing or trying out the software altogether. This sometimes makes it hard to come up with useful draft documentation during the early stages of the product. In addition, software often comes with a lot of last-minute changes. This can be a real challenge, especially if the documentation needs to be localized and thus has to be handed off to the translators well in advance of the product’s release date.

    Step into the Open Source World

    When I started to work for SUSE in 2005, it was my first contact with the open source environment, which is quite different from a proprietary software environment. For example, in proprietary software, you usually have full control over the terminology to use within your product and your documentation. In the open source environment, certain terms may already exist in the upstream community or documentation. You cannot start from scratch and have to find compromises. Also, open source communities are self-organizing entities to a large degree. A lot of things work differently, including decision-making processes.

    It took me some time to get used to this new world and to its broader dimensions: We do not work on the Q.T., but in the open, and we are members of a much larger community around the world. Being part of this ecosystem taught me a lot about how communities work – and about myself. It also changed my mind-set: I have come to appreciate the ‘release early, release often’ dictum. Feedback at an early stage can be really helpful. And more than once I have found that the whole is more than the sum of its parts, especially when working in this great team that I’m part of. I love my documentation colleagues for always sharing their spirit and for their creativity and knowledge. We often have different points of view, but everybody contributes their expertise and we work well together as a team. In addition, collaborating with people from all over the world and from different backgrounds is rewarding. It adds new perspectives and helps to analyze and solve problems more efficiently.

    What Would Life Be Without…?

    As most of my daily work requires a lot of research, analytical thinking and sitting in front of a computer, music and sports are part of my work/life balance. I have always loved listening to music, and I enjoy dancing and going to concerts. But the switch from listening to music to making music happened about 10 years ago, when I first got hold of a drum. It was during an event called a ‘drum circle’, where everybody can participate and contribute to the common groove. Shortly afterwards, I started learning to play drums and percussion. The following pictures show me building and playing an African drum.

    Gahun Drum

    In 2015, my husband and I became certified ‘drum circle facilitators’. In our free time, we organize drum circles in the tradition of Arthur Hull – similar to the one we stumbled into many years ago. We also led a community drum circle at the openSUSE Conference in Nuremberg in 2017.

    At SUSE, there are several bands and music projects going on, many of which originated during Hackweek. One of last year’s projects was an a cappella ensemble. Going from polyrhythmic drumming to singing in harmony was quite a leap for me, but I decided to give it a go. We performed three songs during the final Hackweek presentations and had so much fun singing together that we continued the project afterwards (and are in search of more ‘harmonists’ who would like to join us). Making music with others has added another dimension to my life, for which I am very grateful!


      Source

      Application Modernization with Enterprise Linux – Red Hat Enterprise Linux Blog

      Special guest blogger: Ashish Nadkarni, Program Vice President, Worldwide Infrastructure Practice, IDC

      Applications are crucial to the functioning of a modern enterprise. Firms that constantly evolve and update their application strategy are the ones that are successful at expanding their competitive differentiation in the digital economy. They infuse their applications portfolio with new-generation applications that run in the cloud, are delivered as microservices, leverage open-source technologies, and are increasingly (infrastructure) platform independent. During application design, the choice of database management systems (DBMS) and operating system environments (OSE) heavily influences the scalability and reliability of the overall stack.

      Let us start with the choice of a proven SQL-based DBMS with modern in-memory capabilities for storing structured and semi-structured data. It enables the application to process transactions quickly and reliably. It enables the ingestion of huge and diverse data sets with low latency for large and/or real-time analytical tasks. Such databases support the ability to embed analytic queries in transaction processing, moving from online transaction processing (OLTP) to analytic transaction processing (ATP). And finally, databases make it easier for the application stack to meet security and compliance requirements such as PCI-DSS, GDPR and HIPAA. An example of a widely used SQL-based DBMS for this purpose is Microsoft SQL Server.

      Next, let us look at the role played by the OSE. The choice of an appropriate OSE like Linux is essential for consolidating and modernizing the current-generation of applications, while also supporting the development and delivery of new-generation applications. The more versatile the OSE, the easier it is to repackage, replatform and refactor the entire application stack, including its data management and analytics components. A commercial Linux distribution such as Red Hat Enterprise Linux can accelerate database consolidation, application modernization, development and packaging initiatives. Red Hat Enterprise Linux is also Microsoft’s reference Linux platform for SQL Server – which means existing Microsoft and Red Hat customers can take advantage of the consolidation benefits inherent in using Linux without compromising on functionality or service quality.

      Linux, especially commercial Linux, has grown in the enterprise. This growth isn’t surprising given the ability of Linux to enable deployment flexibility, development agility and vendor choice. Linux also helps IT organizations to achieve greater ROI through faster release cycles and meet enterprise-wide service level objectives.

      Linux is also developer friendly. It enables IT to give developers more control over provisioning and orchestration of infrastructure resources. For example, running applications and databases in containers enables integration with development methodologies like DevOps and continuous integration / continuous delivery (CI/CD). The use of a microservice delivery model creates a smaller and nimbler database footprint and allows for a higher density of database instances when compared with running the same environment in a virtual machine.
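      As a concrete illustration of the container-based delivery described above, here is a minimal sketch of launching a SQL Server instance in a Linux container. It assumes Docker and follows Microsoft’s published container image and environment variables; the container name and password are placeholders:

      # run SQL Server in a Linux container (placeholder name and password)
      docker run -d --name sql1 \
        -e 'ACCEPT_EULA=Y' \
        -e 'SA_PASSWORD=YourStrong!Passw0rd' \
        -p 1433:1433 \
        mcr.microsoft.com/mssql/server:2017-latest

      The same instance can be torn down and recreated in seconds, which is part of what makes the container model attractive for CI/CD pipelines.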

      It is imperative for IT to select an appropriate DBMS and OSE for enterprise-wide consolidation in order to maximize the chances of a successful outcome. A tried and trusted DBMS platform when matched with an equally proven commercial Linux OS can amplify the value proposition of the entire application stack. It brings together industry-leading product development expertise, investments in cloud platforms and services, and the support and services reputation of the respective vendors.

      To learn more about data management platform consolidation with enterprise Linux visit https://red.ht/DBMSwithLinux

      Source

      Richard Stallman Says Linux Code Contributions Can’t Be Rescinded


      Posted by EditorDavid on Saturday September 29, 2018 @10:34AM from the nuclear-options dept.

      An anonymous reader quotes iTWire:

      Linux developers who contribute code to the kernel cannot rescind those contributions, according to the software programmer who devised the GNU General Public Licence version 2.0, the licence under which the kernel is released. Richard Stallman, the head of the Free Software Foundation and founder of the GNU Project, told iTWire in response to queries that contributors to a GPLv2-covered program could not ask for their code to be removed. “That’s because they are bound by the GPLv2 themselves. I checked this with a lawyer,” said Stallman, who started the free software movement in 1984.

      There have been claims made by many people, including journalists, that if any kernel developers are penalised under the new code of conduct for the kernel project — which was put in place when Linux creator Linus Torvalds decided to take a break to fix his behavioural issues — then they would ask for their code to be removed from the kernel… Stallman asked: “But what if they could? What would they achieve by doing so? They would cause harm to the whole free software community. The anonymous person who suggests that Linux contributors do this is urging them to [use a] set of nuclear weapons in pique over an internal matter of the development team for Linux. What a shame that would be.”

      Slashdot reader dmoberhaus shared an article from Motherboard with more perspectives from Eric S. Raymond and LWN.net founder Jonathan Corbet, which also traces the origins of the suggestion: “[A]n anonymous user going by the handle ‘unconditionedwitness’ called for developers who end up getting banned through the Code of Conduct in the future to rescind their contributions to the Linux kernel ‘in a bloc’ to produce the greatest effect.

      “It is worth noting that the email address for unconditionedwitness pointed to redchan.it, a now defunct message board on 8chan that mostly hosted misogynistic memes, many of which were associated with gamergate.”


      Source

      On the DVD » Linux Magazine



      Knoppix 8.2

      Knoppix is the ultimate Live distro, with dozens of built-in tools for troubleshooting, monitoring, and resurrecting downed Linux (and Windows) systems. The abundant menus of the Knoppix user interface also include a vast selection of Linux desktop and development tools for a complete admin environment on a single disc.

      Nitrux 1.0.15

      This Ubuntu-based Linux offers a special spin on the KDE Plasma 5 environment. The in-house Nomad desktop, which is based on KDE, provides simplicity and elegance for a unique user experience. The latest version comes with Linux kernel 4.18.5, as well as updates to the graphics stack and support for lots of new hardware.


      Infos

      1. Knoppix 8.2: http://www.knopper.net/knoppix/knoppix820-en.html
      2. Contacting Knoppix: http://www.knopper.net/kontakt/index-en.php
      3. Nitrux: https://nxos.org/
      4. Nomad Desktop: https://nxos.org/#nomad-desktop
      5. Nitrux Help: https://nxos.org/en/compendium/

      Source

      FSF Issues Statement on Microsoft Joining OIN, RaspEX Build 181010 Now Available for Raspberry Pi 3 Model B+, OpenShift Container Platform 3.11 Released, Kernel Security Update for CentOS 6 and RHEL 6, and Qt Creator 4.8 Beta Is Out

      News briefs for October 11, 2018.

      Following the news of Microsoft joining the Open Invention Network, the Free Software Foundation issued a statement calling on Microsoft to “take additional steps to continue the momentum toward a complete resolution”. These steps include “make a clear, unambiguous statement that it has ceased all patent infringement claims on the use of Linux in Android”; “work within OIN to expand the definition of what it calls the ‘Linux System’ so that the list of packages protected from patents actually includes everything found in a GNU/Linux system”; and “use the past patent royalties extorted from free software to fund the effective abolition of all patents covering ideas in software.”

      RaspEX Build 181010 is now available for the Raspberry Pi 3 Model B+. It features the Helium Desktop from BunsenLabs Linux, and according to Softpedia News, it’s “based on the latest Ubuntu 18.04 LTS (Bionic Beaver) operating system series, using packages from the Debian GNU/Linux 9 ‘Stretch’ and Linaro open source software for ARM SoCs. RaspEX is compatible with Raspberry Pi 2, Raspberry Pi 3, and Raspberry Pi 3 Model B+.” See also Arne Exton’s release announcement for more details.

      Red Hat announced the availability of the OpenShift Container Platform 3.11 release yesterday. eWeek reports that key highlights with this release “are multiple components that have been integrated from the CoreOS Tectonic distribution of Kubernetes, including a new cluster administrator console. Red Hat has also integrated CoreOS’ Operator concept into OpenShift making it easier for organizations to deploy cloud native applications.”

      CentOS and Red Hat announced an important Linux kernel security update for CentOS Linux 6 and Red Hat Enterprise Linux 6 that addresses two vulnerabilities found in those operating systems: CVE-2018-5391 and CVE-2018-14634. Users should update immediately. See the Softpedia News post for details.

      Qt Creator 4.8 Beta was released today. This version introduces experimental support for the Language Server Protocol, adds some experimental C++ features, and adds support for running debuggers on one or more executables simultaneously. You can download the open-source version here.

      Source

      Open FinTech Forum Offers Tips for Open Source Success

      Enterprise open source adoption has its own set of challenges, but it becomes easier if you have a clear plan to follow. At Open FinTech Forum, Ibrahim Haddad provides guidelines based on proven practices.

      2018 marks the year that open source disrupts yet another industry, and this time it’s financial services. The first-ever Open FinTech Forum, happening October 10-11 in New York City, focuses on the intersection of financial services and open source. It promises to provide attendees with guidance on building internal open source programs along with an in-depth look at cutting-edge technologies being deployed in the financial sector, such as AI, blockchain/distributed ledger, and Kubernetes.

      Several factors make Open FinTech Forum special, but the in-depth sessions on day 1 especially stand out. The first day offers five technical tutorials, as well as four working discussions covering open source in an enterprise environment, setting up an open source program office, ensuring license compliance, and best practices for contributing to open source projects.

      Enterprise open source adoption has its own set of challenges, but it becomes easier if you have a clear plan to follow. At Open FinTech, I’ll present a tutorial session called “Using Open Source: An Enterprise Guide,” which provides a detailed discussion on how to use open source. We’ll start by answering the question, “Why Open Source,” then discuss how to build an internal supporting infrastructure and look at some lessons learned from over two decades of enterprise open source experience. This session — run under the Chatham House Rule — offers a workshop-style environment that is a mix of presentation and discussion triggered by audience questions. The workshop is divided into five sections, explored below.

      Why Open Source?

      This question may seem trivial but it’s a very important consideration that even the most open source mature companies revisit regularly. In this part of the workshop, we’ll examine seven key reasons why enterprises should engage with open source software, regardless of industry and focus, and how they can gain incredible value from such engagements.

      The Importance of Open Source Strategy

      Going through the exercise of establishing an open source strategy is a great way to figure out your company’s current position and its future goals with respect to open source. These strategy discussions will usually revolve around the goals you’d like to achieve, along with why and how you’d like to achieve them. In this part of the tutorial, we discuss the many questions to consider when determining your open source strategy and tie that to your product and services strategy for a path to a better ROI.

      Implementing an Open Source Infrastructure

      Once you have identified your company’s open source strategy, you need to build infrastructure to support your open source efforts and investments. That infrastructure should act as an enabler for your efforts in using open source, complying with licenses, contributing to projects, and leading initiatives. In the workshop, I’ll present the various elements that together form an incredible enabling environment for your open source efforts.

      Recommended Practices (17 of them)

      When IBM pledged to spend $1 billion on Linux R&D back in 2000, it was a major milestone. IBM was a pioneer in the enterprise open source world, and the company had to learn a lot about working with open source software and the various communities. Other companies have since followed suit, and many more are now entering open source as it becomes the new normal of software development. The question is: How can you minimize the enterprise learning curve on your own open source journey? We’ve got you covered. In this talk, we’ll explore 17 lessons learned from nearly two decades of enterprise experience with open source software.

      Challenges

      Beyond implementing these best practices, open source adoption requires a cultural shift from traditional software development practices to a more open and collaborative mindset. Internal company dynamics need to be favorable to open source efforts. As an open source leader inside your organization, you will face several challenges in terms of funding resources, justifying ROI, getting upstream focus, etc. These challenges often require a major shift in mindset and a lot of education up the chain. We will explore various considerations relating to culture, processes, tools, continuity, and education to ensure you are on track to open source success in your organization.

      We hope to see you at Open FinTech Forum for an informative and high-value event.

      Source

      Book review: Ed Mastery – nixCraft

      ed is a powerful line-oriented text editor for Linux and Unix-like systems. It was the first standard Unix text editor, developed in 1969 by Ken Thompson. Many older, legacy Unix-like systems shipped only with ed for rescue purposes – there was no vi. So learning ed might be a good idea: a low-level understanding of the ed editor helps when you use a higher-level application such as vi or vim. The “Ed Mastery” book teaches you how to use ed and the forgotten art of Unix, where the line-oriented paradigm is the only option. The author describes the book as follows: “If you don’t know ed, you’re not a real sysadmin. Forty years after ed’s introduction, author Michael W Lucas has finally unlocked the mysteries of ed for everyone. With Ed Mastery, you too can become a proper Unix sysadmin.”

      Ed Mastery

      There are eight chapters, each introducing basic concepts of the ed text editor. The chapters teach you concepts such as the following (a short sample session appears after the list):

      1. Getting started with ed
      2. How to view, print, delete, find/replace and edit a file
      3. How to enter and end all commands
      4. Advanced search and replace text
      5. File management (writing or loading files) and running Unix shell commands
      6. Moving around a file or append something to the file
      7. Shell scripting and more.
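      To give a flavor of the line-oriented workflow the book covers, here is a minimal sample ed session (a sketch of standard ed usage; the file name demo.txt is just an illustration):

      printf 'hello world\n' > demo.txt   # create a sample file
      ed demo.txt                         # ed opens it and prints the byte count
      # typical commands inside ed, one per line:
      #   ,p                print every line in the buffer
      #   s/world/there/    substitute on the current line
      #   w                 write the buffer back to the file
      #   q                 quit ed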

      The author covers both the FreeBSD (and macOS) and Linux versions of ed, including some features specific to Linux or FreeBSD only.

      Modern text editors such as vim or emacs are arguably among the best text editors in existence, but they lack the simplicity of ed. Unix-like systems are known for their simplicity and the KISS (Keep It Simple, Stupid) principle. If you want to learn ed, grab this book; much of what you learn from it applies to other applications as well. Some might say ed is hard to learn, and its learning curve is indeed steep, but “Ed Mastery” will take away the fear of learning a line editor.

      The book is available in two editions as follows:

      1. Ed Mastery: The Standard Unix Text Editor (IT Mastery) – Priced at $9.99
      2. Ed Mastery: Manly McManface Edition: The Standard Unix Text Editor – Every so often, men contact the author complaining that his books use both male and female pronouns. This special edition, using only male third-person pronouns, is for those special people. As the market is so much smaller, it’s unfortunately priced higher. For each copy of the Manly McManface edition sold, the author will donate one dollar to his local chapter of Soroptomists International – Priced at $29.99

      Book Info:

      • Title: Ed Mastery.
      • Author: Michael W. Lucas.
      • Publisher: Tilted Windmill Press.
      • Length: 104 pages.
      • Target: System administrators or developers.
      • Rating: 5/5.
      • Disclaimer: Tilted Windmill Press sent us a review copy.
      • Purchase online at Amazon (Ed Mastery: Manly McManface Edition: The Standard Unix Text Editor at Amazon).

      Posted by: Vivek Gite

      The author is the creator of nixCraft and a seasoned sysadmin, DevOps engineer, and a trainer for the Linux operating system/Unix shell scripting. Get the latest tutorials on SysAdmin, Linux/Unix and open source topics via RSS/XML feed or weekly email newsletter.

      Source

      Chrome OS Stable Channel Gets Linux Apps | Linux.com

      After months of user testing in the developer and beta channels, the Crostini project at Google finally delivered the goods: Linux apps for most users of Chromebooks in the stable channel. Definitely worth the wait. While this is still aimed primarily at developers using Chromebooks, I think there’s a good chance these Linux apps will be used and enjoyed by the general public using Chromebooks as well. There’s still a bit of a learning curve to overcome before that possibility is realized, but if you are already a user of any Linux distro, it will feel very familiar. Here’s an overview of how to install it and what to expect afterward.

      After getting the update to version 69, go to Settings and scroll down a bit, and you’ll see the option to turn on Linux apps. Figure 1 shows this first step. Note that this isn’t available on all Chromebooks; if you’re using an older one, you’ll have to wait a while before this function is available. If you don’t see the option to turn on Linux apps, your Chromebook currently lacks that functionality. But, if you have a Chromebook produced in the past two years, you probably will see the option.
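      Once the Linux container is running, you get a Debian shell in the Terminal app, so the usual apt workflow applies. A quick sketch (the package is just an illustration; the default Crostini container is Debian-based):

      sudo apt update             # refresh the package lists inside the container
      sudo apt install -y gimp    # installed GUI apps appear in the Chrome OS launcher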

      Read more at Linux Journal


      Source

      How To Change MAC Address in Kali Linux Using Macchanger

      Last updated: July 11, 2017

      Today you’ll learn how to spoof your MAC address in Kali Linux using macchanger.

      Before I start, there are two ways to go about changing your MAC address in Kali Linux.

      1. Using Macchanger
      2. Manually

      I’ll be showing you how to do both.

      So, what is macchanger?

      Macchanger is a free MAC address manipulation tool that comes pre-installed in Kali Linux.

      In short, Macchanger makes it easy to spoof or change your MAC address.


      NOTE: If macchanger is not installed on your system, follow the steps below to install it.

      How to install Macchanger

      Open up a terminal and type the following:

      sudo apt-get install macchanger

      During the installation process, you’ll be asked if you want to spoof the MAC address each time you attach an Ethernet cable or a network adapter.

      If you select yes, you’ll automatically be assigned a new MAC address for each interface brought up or plugged in. It’s up to you if you want to enable this or not.


      If you have mac-filtering enabled on your router, you’ll want to select NO on this one; otherwise, you won’t be able to connect to your own wifi.
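      If you change your mind later, this choice is stored as a debconf setting for the package, and you can rerun the configuration prompt (a sketch of the standard Debian mechanism):

      sudo dpkg-reconfigure macchanger   # re-asks the automatic-spoofing question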


      Afterwards, check that it’s installed and working by typing “macchanger” in the terminal. Run without arguments, it should print the tool’s usage summary.

      How to display available network interfaces

      Next, we need to determine the interface we want to spoof.

      To list all available network interfaces on your system, type the following command:

      ifconfig

      From the output, note the name of the interface you want to spoof. Mine is “wlan0.”

      Now that we know what interface to use, it’s time to get spoofin!

      Bring down the interface you want to spoof

      If you spoof without bringing down the interface, you’ll get the “insufficient permission” or “device is busy” error.

      You must always bring the interface down before changing the MAC address.

      Here’s how:

      ifconfig wlan0 down

      Remember to replace “wlan0” with your interface.

      Print your MAC address

      This will show you the current (and permanent) MAC address of an interface.

      -s = print the MAC address

      macchanger -s wlan0

      Set (specify) your MAC address

      This will set your MAC address to whatever value you assign it.

      -m = set the MAC address

      macchanger -m 11:22:33:44:55:66 wlan0

      Change MAC address to a random vendor

      This will randomize your MAC address and it will use one from a known vendor.

      NOTE: A vendor is the manufacturer of the network interface card (e.g., TP-LINK TECHNOLOGIES CO., LTD)

      -A = set a random MAC address of any kind.

      macchanger -A wlan0

      Change MAC address but use the same vendor

      This will randomize your MAC address but it will use the same vendor as your current one.

      -a = set a random MAC address of the same kind

      macchanger -a wlan0

      Change to a fully random MAC address with no vendor

      This will set a fully random MAC address with no vendor.

      If you were to use a MAC address lookup tool, it won’t be able to detect the manufacturer because it’s completely made up.

      -r = set a fully random MAC address

      macchanger -r wlan0

      Reset (change back) to original MAC address

      This will revert the changes and it will go back to using the permanent MAC address of the interface.

      -p = reset to original MAC address

      macchanger -p wlan0

      View a list of known MAC vendors

      If you want to set your own MAC address, this will show you a huge list of legit MAC vendors you can choose from.

      -l = print known vendors

      macchanger -l
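      The vendor list is long, so it helps to filter it with grep when you are hunting for a specific manufacturer’s prefix (a usage sketch; the vendor name is just an example):

      macchanger -l | grep -i 'TP-LINK'   # list OUI prefixes registered to TP-LINK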

      NOTE: After making changes to the MAC address, you’ll need to bring the interface back up by typing: ifconfig wlan0 up.

      How to manually change MAC address (without using macchanger)

      If you don’t have macchanger installed, here’s how to change your MAC address manually.

      For this to work, you should use a MAC address from a known vendor; a fully random MAC address won’t always work.

      ifconfig wlan0 down
      ifconfig wlan0 hw ether fc:c7:34:12:bc:1c
      ifconfig wlan0 up
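      On newer systems where ifconfig is deprecated or missing, the same manual change can be made with the ip tool from iproute2 (a sketch; substitute your own interface name and address):

      ip link set dev wlan0 down
      ip link set dev wlan0 address fc:c7:34:12:bc:1c
      ip link set dev wlan0 up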

      That’s it! Let me know if you have any questions. I’ll be glad to help.

      Source

      OSSEC Intrusion Detection Installation On CentOS 7

      OSSEC Installation On CentOS 7

      OSSEC (Open Source HIDS SECurity) is an open source host-based intrusion detection system (HIDS). It performs log analysis, file integrity monitoring, rootkit detection, time-based alerting, and active responses to triggers. You can install it on Linux, Windows, and macOS. It allows for both local installs and agent deployments across multiple systems that report to a centralized logging server. This guide covers how to perform a basic local install on CentOS; the same software can be deployed to thousands of servers, with agents reporting in to a centralized server. To view the official documentation and site, visit the GitHub project.

      Preparing to Install OSSEC

      Install the packages needed for installation:

      yum install -y gcc inotify-tools bind-utils

      Change to the source directory to download ossec:

      cd /usr/src

      Get the newest release:

      wget -O ossec.2.9.3.tar.gz https://github.com/ossec/ossec-hids/archive/2.9.3.tar.gz

      Unpack the tar file:

      tar xfvz ossec.2.9.3.tar.gz

      Change directories:

      cd ossec-hids-2.9.3/

      OSSEC Installation

      Start the installer:

      ./install.sh

      Once the installer has started, it will walk you through a series of options to install OSSEC. Unless you plan to run the agent and server on different machines, select local install:

      1- What kind of installation do you want (server, agent, local, hybrid or help)? local

      – Local installation chosen.

      You can select the default installation path or choose another one.

      2- Setting up the installation environment.

      – Choose where to install the OSSEC HIDS [/var/ossec]:

      Determine whether the OSSEC installation should send email notifications:

      3.1- Do you want e-mail notification? (y/n) [y]: y
      – What’s your e-mail address? [email protected]
      – What’s your SMTP server ip/host? domain.com

      The integrity check daemon will check files against a database of md5sums to detect changes:

      3.2- Do you want to run the integrity check daemon? (y/n) [y]: y

      – Running syscheck (integrity check daemon).

      The rootkit detection engine will check for common rootkits:

      3.3- Do you want to run the rootkit detection engine? (y/n) [y]: y

      – Running rootcheck (rootkit detection).

      Active response allows OSSEC to respond to events, for example by executing IP blocks:

      3.4- Active response allows you to execute a specific
      command based on the events received. For example,
      you can block an IP address or disable access for
      a specific user.
      More information at:
      http://www.ossec.net/en/manual.html#active-response

      – Do you want to enable active response? (y/n) [y]: n

      – Active response disabled.

      3.6- Setting the configuration to analyze the following logs:
      — /var/log/messages
      — /var/log/secure
      — /var/log/maillog

      – If you want to monitor any other file, just change
      the ossec.conf and add a new localfile entry.
      Any questions about the configuration can be answered
      by visiting us online at http://www.ossec.net .

      — Press ENTER to continue —

      Use the following commands to start or stop OSSEC:

      – To start OSSEC HIDS:
      /var/ossec/bin/ossec-control start

      – To stop OSSEC HIDS:
      /var/ossec/bin/ossec-control stop

      This completes the initial install of the application. The configuration can be viewed or modified at /var/ossec/etc/ossec.conf, which contains more granular options for configuring the platform.
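      A quick way to confirm the install is healthy is to start the service, check that its daemons are running, and watch the alert log (a sketch using the standard paths from the install above):

      /var/ossec/bin/ossec-control start          # start all OSSEC daemons
      /var/ossec/bin/ossec-control status         # each daemon should report that it is running
      tail -f /var/ossec/logs/alerts/alerts.log   # watch alerts arrive in real time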

      Mar 26, 2018 – LinuxAdmin.io

      Source
