Linux has its Nails on UNIX’s Coffin – OSnews

Today we feature a very interesting interview with Havoc Pennington. Havoc works for Red Hat, where he heads the desktop team, and he is also well known for his major contributions to GNOME, his GTK+ programming book, and the freedesktop.org initiative, which aims to standardize the X11 desktop environments. In the following interview we discuss the changes inside Red Hat, Xouvert, freedesktop.org and Gnome’s future, and how Linux, in general, is doing in the desktop market.

1. Looking at Red Hat’s recent press releases and web site lately, one sees a new, stronger effort to shift focus further into the Enterprise and leave Red Hat Linux in the hands of the community for the home/desktop market. This seems to leave a “hole” in Red Hat’s previous target, the “Corporate Desktop market”. The new Red Hat Linux might sound like “power to the people”, but to me it sounds like an action that will have consequences (good & bad) for the quality, testing and development of what we got to know as your “corporate/desktop” product. Given the fact that Red Hat is the No. 1 Linux distribution on the planet, do you think that this new direction will slow down Linux’s penetration into the desktop market?

Havoc Pennington: In my view it’s a mistake to create an “Enterprise vs. Desktop” contrast; these are largely separate dimensions. There are enterprise desktops, enterprise servers, consumer desktops, and consumer servers. Quite possibly small business desktops and servers are another category in between.

I don’t think we’ll see a slowdown in Linux penetration into the desktop market. In fact I hope to see it speed up. Today there are many large software companies making investments in the Linux desktop.

2. How have things changed internally after the [further] focus shift to Enterprise? Is your desktop team still fully working on Gnome/GTK+/X/etc or have developers been pulled into other projects that are more in line with this new focus at Red Hat?

Havoc Pennington: We’re still working on the desktop, more so than ever. (Including applications such as Mozilla, OpenOffice, and Evolution, not just the base environment.)

3. In the past (pre-SCO), Red Hat admitted that it was growing wary of patent issues that might arise in the future. Do you believe that desktop open source software written by many different individuals around the globe might, in some cases, be infringing on patents without the knowledge of these developers? At the end of the day, we have seen some patents that were issued so shortsightedly that many have said that writing software is almost impossible nowadays. What kind of solution for this issue might OSS developers find, to ensure a future that is not stricken by lawsuits left and right?

Havoc Pennington: As you know we’ve been more aggressive than other Linux vendors about removing potentially patented software from our distribution, specifically we took a lot of criticism for removing mp3 support.

One strategy for helping defend the open source community is to create defensive patents, as described here.

Another strategy is the one taken by Lawrence Rosen in the Academic Free License and Open Software License.

These licenses contain a “Termination for Patent Action” clause that’s an interesting approach.

Political lobbying and education can’t hurt either. These efforts become stronger as more people rely upon open source software.

4. What major new features are scheduled for GTK+ 2.4/2.6 and for the future in general? You once started a C++ wrapper for GTK+, but the project later stagnated. Do you believe that Gnome needs a C++ option, and if yes, do you believe that Gtkmm is a good one? Are there plans to sync GTK+ and Gtkmm more often and include it by default in Gnome releases?

Havoc Pennington: GTK+ 2.4 and 2.6 plans are pretty well described here.

One theme of these releases is to make GTK+ cover all the GUI functionality historically provided by libgnomeui. So there will be a single clear GUI API, rather than “plain GTK+” and “GNOME libs” – at that point being a “GNOME application” is really just a matter of whether you follow the GNOME user interface guidelines, rather than an issue of which libs you link to. This cuts down on bloat and developer confusion.

The main user-visible change in 2.4 is of course the new file selector.

The other user-visible effects of 2.4 and 2.6 will mostly be small tweaks and improved consistency between applications as they use the new standard widgets.

At some point we’ll support Cairo which should allow for some nice themes. Cairo also covers printing.

Regarding C++, honestly I’m not qualified to comment on the current state of gtkmm, because I haven’t evaluated it in some time. I do think a C++ option is important. There are two huge wins I’d consider even more important for your average one-off, in-house, simple GUI app, though: 1) using a language such as Python, Java, C#, Visual Basic, or whatever, with automatic memory management, high-level library functions, and so forth; 2) using a user interface builder such as Glade. Both of those will save you more time than the difference between a C and a C++ UI toolkit.

5. What do you think of the XFree86 fork, Xouvert? Do you support the fork, and if yes, what exactly do you want to see changed with Xouvert (feature-wise and architecture-wise for X)?

Havoc Pennington: The huge architectural effort I want to see in the X server is to move to saving all the window contents and using the 3D engine of the graphics cards, allowing transparency, faster redraws, nice visual effects, and thumbnailing/magnification, for example.

The trick is that there are *very* few people in the world with the qualifications to architect this change. I don’t know if the Xouvert guys have the necessary knowledge, but if they do that would be interesting. It may well be that no single person understands how to do this right; we may need a collaboration between toolkit people, X protocol people, and 3D hardware experts.

Aside from that, most of the changes to X I’d like to see aren’t really to the window system. Instead, I’d like us to think of the problem as building a base desktop platform. This platform would include a lot of things currently in the X tarball, a lot of things currently on freedesktop.org, and a lot of things that GNOME and KDE and GTK+ and Qt are doing independently. You can think of it as implementing the common backend or framework that GUI toolkits and applications are ported to when they’re ported to Linux.

This may be of interest. If we can negotiate the scary political waters, I’d like to see the various X projects, freedesktop.org, and the desktop environments and applications work together on a single base desktop platform project. With the new freedesktop.org server I’m trying to encourage such a thing.

6. How are things with freedesktop.org; what is its status? Do these standards get implemented in KDE and Gnome, or do they meet resistance from hardcore devs on either project? When do you think KDE and Gnome will reach a good level of interoperability as defined by freedesktop.org? What work has been done so far?

Havoc Pennington: freedesktop.org is going pretty well; I recently posted about the status of the hosting move (see here). I also had a lot of fun at the KDE conference in Nove Hrady and really enjoyed meeting a lot of quality developers I hadn’t met before.

I find that hardcore devs understand the importance of what we’re trying to do, though they also understand the difficulty of changing huge codebases such as Mozilla, OpenOffice, GNOME, or KDE so are understandably careful.

There are people who think of things in “GNOME vs. KDE” terms but in general the people who’ve invested the most time are interested in the bigger picture of open source vs. proprietary, Linux vs. Microsoft, and democratizing access to software.

Of course everyone has their favorite technologies – I think GNOME is great and have a lot of investment in it, and I also like Emacs and Amazon.com and Red Hat Linux. These preferences change over time. When it comes down to it the reason I’m here is larger than any particular technology.

As to when freedesktop.org will achieve interoperability, keep in mind that currently any app will run with any desktop. The issue is more sustaining that fact as the desktop platforms add new bells and whistles; and factoring new features down into the base desktop platform so that apps are properly integrated into any desktop. So it’s a process that I don’t think will ever end. There are always new features and those will tend to be tried out in several apps or desktops before they get spec’d out and documented on the freedesktop.org level.

7. Gnome 2.4 was released last week. Are you satisfied with the development progress of Gnome? What major features/changes do you want to see in Gnome in the next couple of years?

Havoc Pennington: I’m extremely satisfied with GNOME’s progress. Time-based releases (see here for the long definition) are the smartest thing a free software project can do.

This mail has some of my thoughts on what we need to add.

Honestly though the major missing bits of the Linux desktop are not on the GNOME/KDE level anymore. The desktop environments can be endlessly tweaked but they are pretty usable already.

We need to be looking at issues that span and integrate the large desktop projects – WINE, Mozilla, OpenOffice, Evolution on top of the desktops, X below them. And integrate all of them with the operating system.

Some of the other major problems, as explained here, have “slipped through the cracks” in that they don’t clearly fall under the charter of any of the existing large projects.

And of course manageability, administration, security, and application features.

8. Your fellow Red Hat engineer Mike Harris said recently that “There will be a time and a place for Linux on the home desktop. When and where it will be, and whether it will be something that can turn a profit remains to be seen. When Red Hat believes it may be a viable market to enter, then I’m sure we will. Personally, in my own opinion, I don’t think it will be viable for at least 1.5 – 2 years minimum.” Do you agree with this time frame, and if yes, what parts exactly need to be “fixed/changed” in the whole Linux universe (technical or not) before Linux becomes viable for the home/desktop market?

Havoc Pennington: I wouldn’t try to guess the timeframe exactly. My guess would be something like “0 to 7 years” 😉

On the technology side, we need some improvements to robustness, to hardware handling, to usability.

However the consumer barriers have a lot to do with consumer ISV and IHV support. And you aren’t going to get that until you can point to some desktop marketshare. That’s why you can’t bootstrap the Linux desktop by targeting consumers. You need to get some initial marketshare elsewhere.

There’s also the business issue that targeting consumers involves very expensive mass market advertising.

9. Have you had a look at the Mac OS X 10.3 Panther previews? Apple is introducing some new widgets, like the new Tabs that look like buttons instead of tabs, and there is of course Expose, which, by utilizing the GL-based QuartzExtreme, offers new usability enhancements plus cool and modern eye candy. Do you think that X with GTK+/Gnome will be able to deliver such innovations in a timely manner, or will it take some years before we see them on a common Linux desktop?

Havoc Pennington: I haven’t tried Panther, though I saw some screenshots and articles.

As I mentioned earlier, the big X server feature I think we need is to move to this kind of 3D-based architecture. If we got the right 2 or 3 people working on it today, we could have demoware in a few months and something usable in a couple of years. I’m just making up those numbers of course.

However, nobody can predict when the right 2 or 3 people will start to work on it. As always in free software, the answer to “when will this be done?” is “faster if you help.”

One stepping stone is to create a robust base desktop platform project where these people could do their work, and some of us are working hard on that task.

10. How do you see the Linux and Unix landscape today? Do you feel that Linux is replacing Unix slowly but steadily, or do they follow parallel and different directions in your opinion?

Havoc Pennington: I would say that the nails are firmly in the UNIX coffin, and it’s just a matter of time.

Source

Python Testing with pytest: Fixtures and Coverage

Python

Improve your Python testing even more.

In my last two articles, I introduced pytest, a library for testing Python code (see “Testing Your Code with Python’s pytest” Part I and Part II). pytest has become quite popular, in no small part because it’s so easy to write tests and integrate those tests into your software development process. I’ve become a big fan, mostly because after years of saying I should get better about testing my software, pytest finally has made it possible.

So in this article, I review two features of pytest that I haven’t had a chance to cover yet: fixtures and code coverage, which will (I hope) convince you that pytest is worth exploring and incorporating into your work.

Fixtures

When you’re writing tests, you’re rarely going to write just one or two. Rather, you’re going to write an entire “test suite”, with each test aiming to check a different path through your code. In many cases, this means you’ll have a few tests with similar characteristics, something that pytest handles with “parametrized tests”.
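
To give a quick, hypothetical taste of parametrization (this example is my own sketch, not from the article), you attach pytest.mark.parametrize to a test, and pytest runs the test body once per set of arguments:

import pytest

@pytest.mark.parametrize('text, expected', [
    ('abc', 'cba'),
    ('a', 'a'),
    ('', ''),
])
def test_reverse(text, expected):
    # pytest reports each (text, expected) pair as its own test case.
    assert text[::-1] == expected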

But in other cases, things are a bit more complex. You’ll want to have some objects available to all of your tests. Those objects might contain data you want to share across tests, or they might involve the network or filesystem. These are often known as “fixtures” in the testing world, and they take a variety of different forms.

In pytest, you define fixtures using a combination of the pytest.fixture decorator, along with a function definition. For example, say you have a function that returns a list of lines from a file, in which each line is reversed:


def reverse_lines(f):
   return [one_line.rstrip()[::-1] + '\n'
           for one_line in f]

Note that in order to keep the newline character from ending up at the start of the reversed line, you remove it from the string before reversing and then add a '\n' to each returned string. Also note that although it probably would be a good idea to use a generator expression rather than a list comprehension, I’m trying to keep things relatively simple here.
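
If you’re curious, a generator-expression version might look like the following sketch (my variant, not the article’s code); keep in mind that the test shown below compares the result to a list, so it would need to wrap the call in list() to keep passing:

def reverse_lines(f):
    # Same logic, but lazy: each line is reversed only when the caller
    # iterates over the result, rather than building the whole list up front.
    return (one_line.rstrip()[::-1] + '\n'
            for one_line in f)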

If you’re going to test this function, you’ll need to pass it a file-like object. In my last article, I showed how you could use a StringIO object for such a thing, and that remains the case. But rather than defining global variables in your test file, you can create a fixture that’ll provide your test with the appropriate object at the right time.

Here’s how that looks in pytest:


import pytest
from io import StringIO

@pytest.fixture
def simple_file():
   return StringIO('\n'.join(['abc', 'def', 'ghi', 'jkl']))

On the face of it, this looks like a simple function—one that returns the value you’ll want to use later. And in many ways, it’s similar to what you’d get if you were to define a global variable by the name of “simple_file”.

At the same time, fixtures are used differently from global variables. For example, let’s say you want to include this fixture in one of your tests. You then can mention it in the test’s parameter list. Then, inside the test, you can access the fixture by name. For example:


def test_reverse_lines(simple_file):
   assert reverse_lines(simple_file) == ['cba\n', 'fed\n',
                                         'ihg\n', 'lkj\n']

But it gets even better. Your fixture might act like data, in that you don’t invoke it with parentheses. But it’s actually a function under the hood, which means it executes every time you invoke a test using that fixture. This means that the fixture, in contrast with regular-old data, can make calculations and decisions.
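
As a hypothetical sketch (not from the article), here’s a fixture that computes a fresh value for every test that requests it and can even decide to skip a test, something a plain global variable could never do:

import os
import random
import pytest
from io import StringIO

@pytest.fixture
def generated_file():
    # Runs for every test that requests it, so it can compute and decide.
    if os.environ.get('SKIP_GENERATED_FIXTURES'):
        pytest.skip('generated fixtures disabled via SKIP_GENERATED_FIXTURES')
    lines = [''.join(random.choices('abcdefgh', k=5)) for _ in range(4)]
    return StringIO('\n'.join(lines))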

You also can decide how often a fixture is run. For example, as it’s written now, this fixture will run once per test that mentions it. That’s great in this case, when you want to compare with a list or file-like structure. But what if you want to set up an object and then use it multiple times without creating it again? You can do that by setting the fixture’s “scope”. For example, if you set the scope of the fixture to be “module”, it’ll be available throughout your tests but will execute only a single time. You can do this by passing the scope parameter to the @pytest.fixture decorator:


@pytest.fixture(scope='module')
def simple_file():
   return StringIO('\n'.join(['abc', 'def', 'ghi', 'jkl']))

I should note that giving this particular fixture “module” scope is a bad idea, since the second test will end up having a StringIO whose location pointer (checked with file.tell) is already at the end.

These fixtures work quite differently from the traditional setup/teardown system that many other test systems use. However, the pytest people definitely have convinced me that this is a better way.

But wait—perhaps you can see where the “setup” functionality exists in these fixtures. And, where’s the “teardown” functionality? The answer is both simple and elegant. If your fixture uses “yield” instead of “return”, pytest understands that the post-yield code is for tearing down objects and connections. And yes, if your fixture has “module” scope, pytest will wait until all of the functions in the scope have finished executing before tearing it down.
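
Here’s a minimal sketch of what that might look like for the simple_file fixture above (the close() call stands in for whatever real teardown you need, such as closing network or database connections):

import pytest
from io import StringIO

@pytest.fixture(scope='module')
def simple_file():
    f = StringIO('\n'.join(['abc', 'def', 'ghi', 'jkl']))
    yield f      # everything up to the yield is the "setup"
    f.close()    # runs once after all tests in the module have finished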

Coverage

This is all great, but if you’ve ever done any testing, you know there’s always the question of how thoroughly you have tested your code. After all, let’s say you’ve written five functions, and that you’ve written tests for all of them. Can you be sure you’ve actually tested all of the possible paths through those functions?

For example, let’s assume you have a very strange function, only_odd_mul, which multiplies only odd numbers:


class NoEvenNumbersHereException(Exception):
   # Assumed definition: the article uses this exception without showing it.
   pass


def only_odd_mul(x, y):
   if x%2 and y%2:
       return x * y
   else:
       raise NoEvenNumbersHereException(f'{x} and/or {y} not odd')

Here’s a test you can run on it:


def test_odd_numbers():
   assert only_odd_mul(3, 5) == 15

Sure enough, the test passed. It works great! The software is terrific!

Oh, but wait—as you’ve probably noticed, that wasn’t a very good job of testing it. There are ways in which the function could give a totally different result (for example, raise an exception) that the test didn’t check.

Perhaps it’s easy to see it in this example, but when software gets larger and more complex, it’s not going to be so easy to eyeball it. That’s where you want to have “code coverage”, checking that your tests have run all of the code.

Now, 100% code coverage doesn’t mean that your code is perfect or that it lacks bugs. But it does give you a greater degree of confidence in the code and the fact that it has been run at least once.

So, how can you include code coverage with pytest? It turns out that there’s a package called pytest-cov on PyPI that you can download and install. Once that’s done, you can invoke pytest with the --cov option. If you don’t say anything more than that, you’ll get a coverage report for every part of the Python library that your program used, so I strongly suggest you provide an argument to --cov, specifying which program(s) you want to test. And, you should indicate the directory into which the report should be written. So in this case, you would say:


pytest --cov=mymul .

Once you’ve done this, you’ll need to turn the coverage report into something human-readable. I suggest using HTML, although other output formats are available:


coverage html

This creates a directory called htmlcov. Open the index.html file in this directory using your browser, and you’ll get a web-based report showing (in red) where your program still lacks coverage. Sure enough, in this case, it showed that the even-number path wasn’t covered. Let’s add a test to do this:


def test_even_numbers():
   with pytest.raises(NoEvenNumbersHereException):
       only_odd_mul(2,4)

And as expected, coverage has now gone up to 100%! That’s definitely something to appreciate and celebrate, but it doesn’t mean you’ve reached optimal testing. You can and should cover different mixtures of arguments and what will happen when you pass them.
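
For instance, assuming only_odd_mul and NoEvenNumbersHereException can be imported from the module under test (called mymul in the --cov example above), a parametrized test could sweep the even/odd mixtures in one shot:

import pytest
from mymul import only_odd_mul, NoEvenNumbersHereException

@pytest.mark.parametrize('x, y', [(2, 3), (3, 2), (2, 4)])
def test_mixed_arguments_raise(x, y):
    # Any combination with at least one even argument should raise.
    with pytest.raises(NoEvenNumbersHereException):
        only_odd_mul(x, y)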

Summary

If you haven’t guessed from my three-part focus on pytest, I’ve been bowled over by the way this testing system has been designed. After years of hanging my head in shame when talking about testing, I’ve started to incorporate it into my code, including in my online “Weekly Python Exercise” course. If I can get into testing, so can you. And although I haven’t covered everything pytest offers, you now should have a good sense of what it is and how to start using it.

Resources

  • The pytest website is at http://pytest.org.
  • An excellent book on the subject is Brian Okken’s Python Testing with pytest, published by Pragmatic Programmers. He also has many other resources, about pytest and code testing in general, at http://pythontesting.net.
  • Brian’s blog posts about pytest’s fixtures are informative and useful to anyone wanting to get started with them.

Source

(Don’t) Return to Sender: How to Protect Yourself From Email Tracking | Linux.com

There are a lot of different ways to track email, and different techniques can lie anywhere on the spectrum from marginally acceptable to atrocious. Responsible tracking should aggregate a minimal amount of anonymous data, similar to page hits: enough to let the sender get a sense of how well their campaign is doing without invading users’ privacy. Email tracking should always be disclosed up-front, and users should have a clear and easy way to opt out if they choose to. Lastly, organizations that track should minimize and delete user data as soon as possible according to an easy-to-understand data retention and privacy policy.

Unfortunately, that’s often not how it happens. Many senders, including the U.S. government, do email tracking clumsily. Bad email tracking is ubiquitous, secretive, pervasive, and leaky. It can expose sensitive information to third parties and sometimes even others on your network. According to a comprehensive study from 2017, 70% of mailing list emails contain tracking resources. To make matters worse, around 30% of mailing list emails also leak your email address to third party trackers when you open them. And although it wasn’t mentioned in the paper, a quick survey we did of the same email dataset they used reveals that around 80% of these links were over insecure, unencrypted HTTP.

Here are some friendly suggestions to help make tracking less pervasive, less creepy, and less leaky.

Read more at EFF

Source

MultiBootUSB | SparkyLinux

There is a new tool available: MultiBootUSB.

What is MultiBootUSB?

MultiBootUSB is a utility for creating multi-boot live Linux setups on a removable USB disk. It is similar to UNetbootin, but many distros can be installed, provided you have enough space on the disk. MultiBootUSB also provides an option to uninstall distro(s) at any time, if you wish.

MultiBootUSB allows you to do the following:
– Install multiple live Linux and other Operating Systems to a USB disk and make it bootable without erasing existing data.
– Ability to uninstall installed OS later.
– Write an ISO image directly to a USB disk (you can think of it as a GUI for the Linux dd command).
– Boot ISO images directly, without rebooting your system, using the QEMU option.
– Boot bootable USBs, without rebooting your system, using the QEMU option.
– Boot USB on UEFI/EFI system through GRUB2 bootloader (limited support).

Installation:
sudo apt update
sudo apt install python3-multibootusb

MultiBootUSB

The MultiBootUSB GitHub project page: github.com/mbusb/multibootusb
The project author is Sundar, and the co-author is Ian Bruce.

Source

Key Resources for Effective, Professional Open Source Management | Linux.com

At organizations everywhere, managing the use of open source software well requires the participation of business executives, the legal team, software architecture, software development and maintenance staff and product managers. One of the most significant challenges is integrating all of these functions with their very different points of view into a coherent and efficient set of practices.

More than ever, it makes sense to investigate the many free and inexpensive resources for open source management that are available, and observe the practices of professional open source offices that have been launched within companies ranging from Microsoft to Oath to Red Hat.

Fundamentals

The Linux Foundation’s Fundamentals of Professional Open Source Management (LFC210) course is a good place to start. The course is explicitly designed to help individuals in disparate organizational roles understand the best practices for success.

The course is organized around the key phases of developing a professional open source management program:

  • Open Source Software and Open Source Management Basics
  • Open Source Management Strategy
  • Open Source Policy
  • Open Source Processes
  • Open Source Management Program Implementation

Best Practices

The Linux Foundation also offers a free ebook on open source management: Enterprise Open Source: A Practical Introduction. The 45-page ebook can teach you how to accelerate your company’s open source efforts, based on the experience of hundreds of companies spanning more than two decades of professional enterprise open source management. The ebook covers:

  • Why use open source
  • Various open source business models
  • How to develop your own open source strategy
  • Important open source workflow practices
  • Tools and integration

Official open source programs play an increasingly significant role in how DevOps and open source best practices are adopted by organizations, according to a survey conducted by The New Stack and The Linux Foundation (via the TODO Group). More than half of respondents to the survey (53 percent) across many industries said their organization has an open source software program or has plans to establish one.

“More than anything, open source programs are responsible for fostering open source culture,” the survey’s authors have reported. “By creating an open source culture, companies with open source programs see the benefits we’ve previously reported, including increased speed and agility in the development cycle, better license compliance and more awareness of which open source projects a company’s products depend on.”

Free Guides

How can your organization professionally create and manage a successful open source program, with proper policies and a strong organizational structure? The Linux Foundation offers a complete guide to the process, available here for free. The guide covers an array of topics for open source offices including: roles and responsibilities, corporate structures, elements of an open source management program, how to choose and hire an open source program manager, and more.

The free guide also features contributions from open source leaders. “The open source program office is an essential part of any modern company with a reasonably ambitious plan to influence various sectors of software ecosystems,” notes John Mark Walker, Founder of the Open Source Entrepreneur Network (OSEN) in the guide. “If a company wants to increase its influence, clarify its open source messaging, maximize the clout of its projects, or increase the efficiency of its product development, a multifaceted approach to open source programs is essential.”

Interested in even more on professional open source management? Don’t miss The Linux Foundation’s other free guides, which delve into tools for open source management, how to measure the success of an open source program, and much more.

This article originally appeared at The Linux Foundation

Source

MongoDB Atlas – Women in Linux

My First Cluster – Getting started with MongoDB Atlas!

We will be exploring one innovation in data storage systems known as MongoDB.

MongoDB provides schema-less design, high performance, high availability, and automatic scaling, qualities that have become necessities and that traditional RDBMS systems cannot satisfactorily meet.

Speaker: Jay Gordon, Cloud Developer Advocate, MongoDB

Agenda:

• Intro to MongoDB / document model / compatibility

• Guided lesson on how to set up

• Connecting and creating a document

• How to use it as part of an app

Required for class:

• Virtual machine (if you need a VM, email the organizers to get set up)

• Command line experience

• Basic understanding of distributed systems and cloud computing (APIs, virtual machines, basic understanding of HTTP, PUT/GET, knowledge of AWS/Azure/Google Compute)

• Community Server: https://mongodb.com/download

• Atlas: https://cloud.mongodb.com/

Source

How To Move Multiple File Types Simultaneously From Commandline

The other day I was wondering how I could move (not copy) multiple file types from one directory to another. I already knew how to find and copy certain types of files from one directory to another. But I didn’t know how to move multiple file types simultaneously. If you’re ever in a situation like this, here is an easy way to do it from the command line in Unix-like systems.

Move Multiple File Types Simultaneously

Picture this scenario. You have multiple types of files, for example .pdf, .doc, .mp3, .mp4, .txt etc., in a directory named ‘dir1’. Let us take a look at the dir1 contents:

$ ls dir1
file.txt image.jpg mydoc.doc personal.pdf song.mp3 video.mp4

You want to move some of the file types (not all of them) to a different location. For example, let us say you want to move the doc, pdf and txt files only to another directory named ‘dir2’ in one go.

To move the .doc, .pdf and .txt files from dir1 to dir2 simultaneously, the command would be:

$ mv dir1/*.{doc,pdf,txt} dir2/

It’s easy, isn’t it?

Now, let us check the contents of dir2:

$ ls dir2/
file.txt mydoc.doc personal.pdf

See? Only the file types .doc, .pdf and .txt from dir1 have been moved to dir2.

mv command

You can add as many file types as you want inside the curly braces in the above command to move them across different directories. The above command works just fine for me on Bash.

Another way to move multiple file types is to go to the source directory, i.e. dir1 in our case:

$ cd ~/dir1

And, move file types of your choice to the destination (E.g dir2) as shown below.

$ mv *.doc *.txt *.pdf /home/sk/dir2/

To move all files having a particular extension, for example .doc only, run:

$ mv dir1/*.doc dir2/

For more details, refer man pages.

$ man mv

Moving a small number of files of the same or different types is easy! You could do this with a couple of mouse clicks in GUI mode, or use a one-liner command in CLI mode. However, if you have thousands of files of different types in a directory and want to move multiple file types to a different directory in one go, it would be a cumbersome task. To me, the above method did the job easily! If you know any other one-liner commands to move multiple file types at a time, please share them in the comment section below. I will check and update the guide accordingly.

And, that’s all for now. Hope this was useful. More good stuff to come. Stay tuned!

Cheers!

Source

NTFS Read/Write Filesystem Addon for Linux

Nice to see my submission right here 🙂 Really fast, Eugenia!

I have downloaded and tested captive-ntfs on a notebook (Sony Vaio) here running Fedora Core 1. These are my impressions so far:

– installation of the RPM worked perfectly (it also scans the hdd for ntfs drives and creates entries in /etc/fstab plus the directories in /mnt)

– first mount is slower than the Linux native NTFS driver from the kernel

– read/write seems to work okay at the beginning… BUT I noticed a problem with big files. For example, I tried to watch a movie sitting on one of my ntfs partitions… after 10 minutes or so mplayer stops. When I try to continue the playback, mplayer loads and then just quits with “file ended”… really curious.

– I also once saw the CPU rise to 100% for a few moments after mplayer stopped working.

Maybe this is a bug, or maybe it’s just because I used the RPM package, which is not approved for Fedora (Red Hat 9 was tested).

Anyone else share the same experience here?

Source


How to Setup DRBD to Replicate Storage on Two CentOS 7 Servers

DRBD (which stands for Distributed Replicated Block Device) is a distributed, flexible and versatile replicated storage solution for Linux. It mirrors the content of block devices such as hard disks, partitions and logical volumes between servers. It keeps a copy of the data on two storage devices, so that if one fails, the data on the other can be used.

You can think of it somewhat like a network RAID 1 configuration with the disks mirrored across servers. However, it operates in a very different way from RAID and even network RAID.

Originally, DRBD was mainly used in high availability (HA) computer clusters, however, starting with version 9, it can be used to deploy cloud storage solutions.

In this article, we will show how to install DRBD in CentOS and briefly demonstrate how to use it to replicate storage (a partition) on two servers. This is the perfect article to get you started with using DRBD in Linux.

Testing Environment

For the purpose of this article, we are using a two-node cluster for this setup.

  • Node1: 192.168.56.101 – tecmint.tecmint.lan
  • Node2: 192.168.56.102 – server1.tecmint.lan

Step 1: Installing DRBD Packages

DRBD is implemented as a Linux kernel module. It constitutes a driver for a virtual block device, so it sits right near the bottom of a system’s I/O stack.

DRBD can be installed from the ELRepo or EPEL repositories. Let’s start by importing the ELRepo package signing key and enabling the repository on both nodes, as shown.

# rpm --import https://www.elrepo.org/RPM-GPG-KEY-elrepo.org
# rpm -Uvh http://www.elrepo.org/elrepo-release-7.0-3.el7.elrepo.noarch.rpm

Then we can install the DRBD kernel module and utilities on both nodes by running:

# yum install -y kmod-drbd84 drbd84-utils

If you have SELinux enabled, you need to modify the policies to exempt DRBD processes from SELinux control.

# semanage permissive -a drbd_t

In addition, if your system has a firewall enabled (firewalld), you need to add the DRBD port 7789 in the firewall to allow synchronization of data between the two nodes.

Run these commands on the first node:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4"  source address="192.168.56.102" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload

Then run these commands on second node:

# firewall-cmd --permanent --add-rich-rule='rule family="ipv4" source address="192.168.56.101" port port="7789" protocol="tcp" accept'
# firewall-cmd --reload

Step 2: Preparing Lower-level Storage

Now that we have DRBD installed on the two cluster nodes, we must prepare a roughly identically sized storage area on both nodes. This can be a hard drive partition (or a full physical hard drive), a software RAID device, an LVM Logical Volume or any other block device type found on your system.

For the purpose of this article, we will zero out a roughly 2GB partition on each node using the dd command.

# dd if=/dev/zero of=/dev/sdb1 bs=2024k count=1024

We will assume that this is an unused partition (/dev/sdb1) on a second block device (/dev/sdb) attached to both nodes.

Step 3: Configuring DRBD

DRBD’s main configuration file is located at /etc/drbd.conf and additional config files can be found in the /etc/drbd.d directory.

To replicate storage, we need to add the necessary configurations to the /etc/drbd.d/global_common.conf file, which contains the global and common sections of the DRBD configuration, and we can define resources in .res files.

Let’s make a backup of the original file on both nodes, then open a new file for editing (use a text editor of your liking).

# mv /etc/drbd.d/global_common.conf /etc/drbd.d/global_common.conf.orig
# vim /etc/drbd.d/global_common.conf 

Add the following lines to the file on both nodes:

global {
 usage-count  yes;
}
common {
 net {
  protocol C;
 }
}

Save the file, and then close the editor.

Let’s briefly shed more light on the line protocol C. DRBD supports three distinct replication modes (and thus three degrees of replication synchronicity), which are:

  • protocol A: Asynchronous replication protocol; it’s most often used in long distance replication scenarios.
  • protocol B: Semi-synchronous replication protocol aka Memory synchronous protocol.
  • protocol C: commonly used for nodes in short distanced networks; it’s by far, the most commonly used replication protocol in DRBD setups.

Important: The choice of replication protocol influences two factors of your deployment: protection and latency. Throughput, by contrast, is largely independent of the replication protocol selected.

Step 4: Adding a Resource

A resource is the collective term that refers to all aspects of a particular replicated data set. We will define our resource in a file called /etc/drbd.d/test.res.

Add the following content to the file, on both nodes (remember to replace the variables in the content with the actual values for your environment).

Take note of the hostnames, we need to specify the network hostname which can be obtained by running the command uname -n.

resource test {
        on tecmint.tecmint.lan {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.56.101:7789;
        }
        on server1.tecmint.lan {
                device /dev/drbd0;
                disk /dev/sdb1;
                meta-disk internal;
                address 192.168.56.102:7789;
        }
}

where:

  • on hostname: the on section states which host the enclosed configuration statements apply to.
  • test: is the name of the new resource.
  • device /dev/drbd0: specifies the new virtual block device managed by DRBD.
  • disk /dev/sdb1: is the block device partition which is the backing device for the DRBD device.
  • meta-disk: Defines where DRBD stores its metadata. Using Internal means that DRBD stores its meta data on the same physical lower-level device as the actual production data.
  • address: specifies the IP address and port number of the respective node.

Also note that if the options have equal values on both hosts, you can specify them directly in the resource section.

For example the above configuration can be restructured to:

resource test {
        device /dev/drbd0;
        disk /dev/sdb1;
        meta-disk internal;
        on tecmint.tecmint.lan {
                address 192.168.56.101:7789;
        }
        on server1.tecmint.lan {
                address 192.168.56.102:7789;
        }
}

Step 5: Initializing and Enabling the Resource

To interact with DRBD, we will use the following administration tools which communicate with the kernel module in order to configure and administer DRBD resources:

  • drbdadm: a high-level administration tool for DRBD.
  • drbdsetup: a lower-level administration tool used to attach DRBD devices to their backing block devices, to set up DRBD device pairs to mirror their backing block devices, and to inspect the configuration of running DRBD devices.
  • drbdmeta: the metadata management tool.

After adding all the initial resource configurations, we must bring up the resource on both nodes.

# drbdadm create-md test
Initialize Meta Data Storage

Next, we should enable the resource, which will attach the resource to its backing device, set the replication parameters, and connect the resource to its peer:

# drbdadm up test

Now if you run the lsblk command, you will notice that the DRBD device/volume drbd0 is associated with the backing device /dev/sdb1:

# lsblk
List Block Devices

To disable the resource, run:

# drbdadm down test

To check the resource status, run the following command (note that the Inconsistent/Inconsistent disk state is expected at this point):

# drbdadm status test
OR
# drbdsetup status test --verbose --statistics   # for a more detailed status
Check Resource Status on Nodes

Step 6: Set Primary Resource/Source of Initial Device Synchronization

At this stage, DRBD is now ready for operation. We now need to tell it which node should be used as the source of the initial device synchronization.

Run the following command on only one node to start the initial full synchronization:

# drbdadm primary --force test
# drbdadm status test
Set Primary Node for Initial Device

Once the synchronization is complete, the status of both disks should be UpToDate.

Step 7: Testing DRBD Setup

Finally, we need to test if the DRBD device will work well for replicated data storage. Remember, we used an empty disk volume, therefore we must create a filesystem on the device, and mount it, to test if we can use it for replicated data storage.

We can create a filesystem on the device with the following command, on the node where we started the initial full synchronization (which has the resource with primary role):

# mkfs -t ext4 /dev/drbd0 
Make Filesystem on Drbd Volume

Then mount it as shown (you can give the mount point an appropriate name):

# mkdir -p /mnt/DRDB_PRI/
# mount /dev/drbd0 /mnt/DRDB_PRI/

Now copy or create some files in the above mount point and do a long listing using the ls command:

# cd /mnt/DRDB_PRI/
# ls -l 
List Contents of Drbd Primary Volume

Next, unmount the device (ensure that the mount is not open, and change directory after unmounting it to prevent any errors) and change the role of the node from primary to secondary:

# umount /mnt/DRDB_PRI/
# cd
# drbdadm secondary test

On the other node (which has the resource with a secondary role), make it primary, then mount the device on it and perform a long listing of the mount point. If the setup is working fine, all the files stored in the volume should be there:

# drbdadm primary test
# mkdir -p /mnt/DRDB_SEC/
# mount /dev/drbd0 /mnt/DRDB_SEC/
# cd /mnt/DRDB_SEC/
# ls  -l 
Test DRBD Setup Working on Secondary Node

For more information, see the man pages of the user space administration tools:

# man drbdadm
# man drbdsetup
# man drbdmeta

Reference: The DRBD User’s Guide.

Summary

DRBD is extremely flexible and versatile, which makes it a storage replication solution suitable for adding HA to just about any application. In this article, we have shown how to install DRBD in CentOS 7 and briefly demonstrated how to use it to replicate storage.

Source
