Clearing the (hybrid and multi-) clouds of confusion


    Despite cloud computing being a well-accepted technology that has slipped easily into the common vernacular, there is still confusion around the different cloud options out there, specifically the concepts of multi-cloud and hybrid cloud. While some of this is due to slightly hazy marketing, it is largely down to misunderstanding. We know, just from looking at many of the cars on the street today, that a hybrid is a combination of two things (in the case of the image on this page, a bobcat and a bird), but how does that differ from multi-cloud?

    A blog written in 2017 by our own Terri Schlosser previously addressed this, but having had a number of conversations with confused customers and partners over the last year or so, I decided to record a very brief video to help clarify the situation.

    Follow this link to watch the video, and if you have any thoughts, please leave a comment on this blog, contact me at matthew.johns@suse.com or via Twitter. I hope that you find it useful in understanding more about what can be at times a very confusing set of terms. If you’d like to read more about cloud in general, then please visit our Cloud Solutions page on suse.com, or get in contact with us to see how SUSE can support you in your journey to the cloud.


      Source

      HTTP download speed difference in windows vs Linux | Elinux.co.in | Linux Cpanel/ WHM blog


      HTTP download speed difference in Windows 7 vs Linux

      I ran into a strange situation: a Windows PC showing limited internet transfer speeds for no apparent reason, while the same test on a Linux box gave good speed.

      After extensive debugging, I was able to find the root cause of the problem.

      It was Windows fragmenting HTTP packets locally, essentially a consequence of how Windows assembles HTTP headers, and I found a fix for it.

      Several TCP settings restrict download speed on the Windows box, so to allow fast downloads of large files, I modified the settings below:

      These were my initial TCP settings

      C:\Windows\system32>netsh interface tcp show global

      Querying active state…

      TCP Global Parameters

      ———————————————-

      Receive-Side Scaling State: disabled

      Chimney Offload State : automatic

      NetDMA State: enabled

      Direct Cache Access (DCA): disabled

      Receive Window Auto-Tuning Level: disabled

      Add-On Congestion Control Provider: none

      ECN Capability: disabled

      RFC 1323 Timestamps : disabled

      ** The above autotuninglevel setting is the result of Windows Scaling heuristics

      overriding any local/policy configuration on at least one profile.

      C:\Windows\system32>netsh interface tcp show heuristics

      TCP Window Scaling heuristics Parameters

      ———————————————-

      Window Scaling heuristics : enabled

      Qualifying Destination Threshold: 3

      Profile type unknown: normal

      Profile type public : normal

      Profile type private: restricted

      Profile type domain : normal

      Thus I did:

      # disable heuristics

      C:\Windows\system32>netsh interface tcp set heuristics wsh=disabled

      Ok.

      # enable receive-side scaling

      C:\Windows\system32>netsh int tcp set global rss=enabled

      Ok.

      # manually set autotuning profile

      C:\Windows\system32>netsh interface tcp set global autotuninglevel=experimental

      Ok.

      # set congestion provider

      C:\Windows\system32>netsh interface tcp set global congestionprovider=ctcp

      Ok.

      C:\Windows\system32>netsh interface tcp show global

      Querying active state…

      TCP Global Parameters

      ———————————————-

      Receive-Side Scaling State: enabled

      Chimney Offload State : automatic

      NetDMA State: enabled

      Direct Cache Access (DCA): disabled

      Receive Window Auto-Tuning Level: experimental

      Add-On Congestion Control Provider: ctcp

      ECN Capability: disabled

      RFC 1323 Timestamps : disabled

      After changing these settings downloading is fast again, hitting the internet connection’s limit.

      Source

      Find Exact Installation Date And Time Of Your Linux OS | Elinux.co.in | Linux Cpanel/ WHM blog

      On Fedora, RHEL and its clones such as CentOS, Scientific Linux, Oracle Linux, you can find it using the following command:

      rpm -qi basesystem

      Sample output

      [[email protected] ~]# rpm -qi basesystem
      Name : basesystem
      Version : 10.0
      Release : 7.el7.centos
      Architecture: noarch
      Install Date: Thu 29 Mar 2018 05:05:32 PM IST
      Group : System Environment/Base
      Size : 0
      License : Public Domain
      Signature : RSA/SHA256, Fri 04 Jul 2014 06:16:57 AM IST, Key ID 24c6a8a7f4a80eb5
      Source RPM : basesystem-10.0-7.el7.centos.src.rpm
      Build Date : Fri 27 Jun 2014 04:07:10 PM IST
      Build Host : worker1.bsys.centos.org
      Relocations : (not relocatable)
      Packager : CentOS BuildSystem http://bugs.centos.org
      Vendor : CentOS
      Summary : The skeleton package which defines a simple CentOS Linux system
      Description :
      Basesystem defines the components of a basic CentOS Linux
      system (for example, the package installation order to use during
      bootstrapping). Basesystem should be in every installation of a system,
      and it should never be removed.

      Source

      Unleash powerful Linux container-building capabilities with Buildah – Red Hat Enterprise Linux Blog

      Balancing size and features is a universal challenge when building software. So, it’s unsurprising that this holds true when building container images. If you don’t include enough packages in your base image, you end up with images which are difficult to troubleshoot, missing something you need, or just cause different development teams to add the exact same package to layered images (causing duplication). If you build it too big, people complain because it takes too long to download – especially for quick and dirty projects or demos. This is where Buildah comes in.

      In the currently available ecosystem of build tools, there are two main kinds of build tools:

      1. Ones which build container images from scratch.
      2. Those that build layered images.

      Buildah is unique in that it elegantly blurs the line between both – and, it has a rich set of capabilities for each. One of those rich capabilities is multi-stage builds.

      At Red Hat Summit 2018 in San Francisco, Scott McCarty and I boiled the practice of building production ready containers down into five key tenets – standardize, minimize, delegate, process, and iterate (video & presentation).

      Two tenets in particular are often at odds – standardize and minimize. It makes sense to standardize on a rich base image, while at the same time minimizing the content in layered builds. Balancing both is tricky, but when done right, reaps the benefits of OCI image layers at scale (lots of applications) and improves registry storage efficiency.

      Multi-stage builds

      A particularly powerful example of how to achieve this balance is the concept of multi-stage builds. Since build dependencies like compilers and package managers are rarely required at runtime, we can exclude them from the final build by breaking it into two parts. We can do the heavy lifting in the first part, then use the build artifacts (think Go binaries or jars) in the second. We will then use the container image from the second build in production.

      Using this methodology leverages the power of rich base images, while at the same time, results in a significantly smaller container image. The resultant image isn’t carrying additional dependencies that aren’t used during runtime. The multi-stage build concept became popular last year with the release of Docker v17.05, and OpenShift has long had a similar capability with the concept of chaining builds.

      OK, multi-stage builds are great, you get it, but to make this work right, the two builds need to be able to copy data between them. Before we tackle this, let’s start with some background.

      Buildah background

      Buildah was a complete rethink of how container image builds could and should work. It follows the Unix philosophy of small, flexible tools. Multi-stage builds were part of the original design and have been possible since its inception. With the release of Buildah 1.0, users can now take advantage of the simplicity of using multi-stage builds with the Dockerfile format. All of this, with a smaller tool, no daemon, and tons of flexibility during builds (ex. build time volumes).

      Below we’ll take a look at how to use Buildah to accomplish multi-stage builds with a Dockerfile and also explore a simpler, yet more sophisticated way to tackle them.

      Using Dockerfiles:

      $ buildah bud -t [image:tag] .

      …and that’s it! Assuming your Dockerfile is written for multi-stage builds and sits in the directory where the command is executed, everything will just work. So if this is all you’re looking for, know that it’s now trivial to accomplish with Buildah in Red Hat Enterprise Linux 7.5.
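      For reference, a Dockerfile following this two-stage pattern looks roughly like the sketch below. It is reconstructed from the Buildah commands used later in this post rather than copied from the original repository, and the working directories are assumptions:

```dockerfile
# --- build stage: full Go toolchain, produces a static binary ---
FROM golang:1.7.3
WORKDIR /go
RUN go get -d -v golang.org/x/net/html
COPY app.go .
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

# --- runtime stage: minimal image, only the binary and CA certs ---
FROM alpine:latest
RUN apk --no-cache add ca-certificates
COPY --from=0 /go/app .
CMD ["./app"]
```

      The `COPY --from=0` line is what makes it multi-stage: it pulls the compiled binary out of the first stage, so the Go toolchain never reaches the final image.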

      Now, let’s dig a little deeper and take a look at using Buildah’s native commands to achieve the same outcome and some reasons why this can be a powerful alternative for certain use cases.

      For clarity, we’ll start from Alex Ellis’s blog post that demonstrates the benefits of performing multi-stage builds. We use this example simply to compare and contrast the Dockerfile version with Buildah’s native capabilities; it’s not an endorsement of any underlying technologies such as Alpine Linux or APK. These examples could all be done in Fedora, but that would make the comparison less clear.

      Using Buildah Commands

      Using his https://github.com/alexellis/href-counter repository, we can convert the included Dockerfile.multi file into a simple script like this:

      First Build

      #!/bin/bash

      # build container

      buildcntr1=$(buildah from golang:1.7.3)

      buildmnt1=$(buildah mount $buildcntr1)

      Using simple variables like this is not required, but it makes the later commands clearer to read, so it’s recommended. Think of buildcntr1 as a handle representing the container build, while buildmnt1 represents the directory where the container’s filesystem is mounted.

      buildah run $buildcntr1 go get -d -v golang.org/x/net/html

      This is the first command verbatim from the original Dockerfile. All that’s needed is to change RUN to run and point Buildah at the container we want to execute the command in. Once the command completes, we are left with a local copy of the Go dependency. Next we need to get our source file into the container, and Buildah has a native directive for copying content into a container build:

      buildah copy $buildcntr1 app.go .

      Alternatively, we can use the system command to do the same thing by referencing the mount point:

      cp app.go $buildmnt1/go

      For this example both of these lines accomplish the same thing. We can use buildah’s copy command the same way the COPY command works in a Dockerfile, or we can simply use the host’s cp command to copy files into the container. In the rest of this tutorial, we’ll rely on the host’s command.

      Now, let’s build the code:

      buildah run $buildcntr1 /bin/sh -c "CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app ."

      Second Build

      The same RUN-to-run change applies to the remaining commands. First, create the second, runtime container and mount it:

      # runtime container

      buildcntr2=$(buildah from alpine:latest)

      buildmnt2=$(buildah mount $buildcntr2)

      This separate, minimal runtime image is what we’ll use to run our application in production.

      buildah run $buildcntr2 apk --no-cache add ca-certificates

      The same tweak applies to this RUN command.

      #buildah copy $buildcntr2 $buildmnt1/go/app .

      Or:

      cp $buildmnt1/go/app $buildmnt2

      Here we have the same option as above. To bring the compiled application into the second build, we can use the copy command from buildah or the host.

      Now, add the default command to the production image.

      buildah config --cmd ./app $buildcntr2

      Finally, we unmount and commit the image, and optionally clean up the environment:

      #unmount & commit the image

      buildah unmount $buildcntr2

      buildah commit $buildcntr2 multi-stage:latest


      #clean up build

      buildah rm $buildcntr1 $buildcntr2

      Don’t forget that Buildah can also push the image to your desired registry using buildah push.

      The beauty of Buildah is that we can continue to leverage the simplicity of the Dockerfile format while no longer being bound by its limitations. People do some nasty, nasty things in a Dockerfile to hack everything onto a single line, which makes them hard to read, difficult to maintain, and inelegant.

      When you combine the power of being able to manipulate images with native Linux tooling from the build host, you are now free to go beyond the Dockerfile commands! This opens up a ton of new possibilities for the content of container images, the security model involved, and the process for building.

      A great example of this was explored in one of Tom Sweeney’s blog posts on creating minimal containers. Tom’s example of leveraging the build host’s package manager is a great one, and means we no longer require something like “yum” to be available in the final image.

      On the security side, we no longer require access to the Docker socket which is a win for performing builds from Kubernetes/OpenShift. In fairness Buildah currently requires escalated privileges on the host, but soon this will no longer be the case. Finally, on the process side, we can leverage Buildah to augment any existing build process, be it a CI/CD pipeline or building from a Kubernetes cluster to create simple and production-ready images.

      Buildah provides all of the primitives needed to take advantage of the simplicity of Dockerfiles combined with the power of native Linux tooling, and is also paving the way to more secure container builds in OpenShift. If you are running Red Hat Enterprise Linux, or possibly an alternative Linux distribution, I highly recommend taking a look at Buildah and maximizing your container build process for production.

      Source

      What Is /dev/shm in linux?


      Shared (Virtual) Memory (SHM)


      Shared memory, as its name implies, is a method for processes to share state: both processes define the same memory area as “shared”, and they can then exchange information simply by writing into it. This used to be, and still is somewhat, faster than the alternative of sending network or pipe-based messages between processes. If you see memory as a means of storing data, a file on a file system can likewise be seen as shared memory (i.e. a shared file).

      It is difficult to account for shared memory. Does it belong to one process? Both? Neither? If we naively sum the memory belonging to multiple processes, we grossly over-count.

      As the name implies, shared (virtual) memory refers to virtual memory that is shared by more than one process and can therefore be used by multiple programs simultaneously. Although virtual memory allows processes to have separate (virtual) address spaces, there are times when you need processes to share memory.

      Shared memory (SHM) is a method of interprocess communication (IPC) whereby several processes share a single chunk of memory to communicate. It provides the fastest way for processes to pass large amounts of data to one another.


      /dev/shm is simply an implementation of the traditional shared memory concept. It is an efficient means of passing data between programs: one program creates a memory portion, which other processes (if permitted) can access. This speeds things up on Linux.

      shm / shmfs is also known as tmpfs, a common name for a temporary file storage facility on many Unix-like operating systems. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage device. If you type the mount command you will see /dev/shm listed as a tmpfs file system. It is therefore a file system which keeps all files in virtual memory. Everything in tmpfs is temporary in the sense that no files are created on your hard drive, and if you unmount a tmpfs instance, everything stored therein is lost. By default, almost all Linux distros are configured to use /dev/shm.

      Difference between tmpfs and swap

      • tmpfs uses memory, whereas swap uses persistent storage devices.
      • tmpfs shows up as a file system in df output, whereas swap doesn’t.
      • swap has general size recommendations; tmpfs doesn’t, and its size varies with the system’s purpose.
      • tmpfs makes applications faster on loaded systems; swap helps the system breathe in memory-full situations.
      • A full swap indicates a heavily loaded system, degraded performance, and a possible crash.
      • A full tmpfs does not necessarily mean heavy load or an imminent crash.
      • tmpfs is an enhancement, whereas swap is a must-have feature!
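      The first two points are easy to verify (a sketch; swapon is part of util-linux, and output varies by system):

```shell
# /dev/shm shows up in df like any other file system
df -h /dev/shm

# swap devices are listed separately; prints nothing if no swap is configured
swapon --show || true
```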



        Source

        Darkened Skye Guide | GamersOnLinux


        darkenedsky87.jpg

        Darkened Skye is an action game based on Skittles candy, in which Skye can cast magic spells by combining Skittles. From fireballs, ice balls and lightning to firewalking, floating, shrinking and necromancy… Skye can use them to eliminate enemies and solve puzzles.

        darkenedsky83.jpg

        Follow my step-by-step guide on installing, configuring and optimizing Darkened Skye in Linux with PlayOnLinux.

        Note: This guide applies to the Retail CD-ROM version of Darkened Skye. Other versions may require additional steps.

        Tips & Specs:
        To learn more about PlayOnLinux and Wine configuration, see the online manual: PlayOnLinux Explained

        Mint 18.3 64-bit

        PlayOnLinux: 4.2.12
        Wine: 3.0

        Wine Installation
        Click Tools

        Select “Manage Wine Versions”
        wine01.png

        Look for the Wine Version: 3.0

        Select it
        Click the arrow pointing to the right
        wine02.png

        Click Next

        Downloading Wine

        wine04.png

        Extracting

        Downloading Gecko

        wine05.png

        Installed

        wine06.png

        Wine 3.0 is installed and you can close this window

        Copy Disk Data

        1. Create a new folder on your Desktop
        2. Name it Darkened Skye
        3. Insert Disk 1 and copy all the data into your new folder
        4. Insert Disk 2 and copy all the data into the new folder

        Merge and over-write any files

        darkenedsky01.png

        Note: Leave Disk 2 in the drive

        PlayOnLinux Setup
        Launch PlayOnLinux

        Click Install
        darkenedsky02.png

        Click “Install a non-listed program”

        darkenedsky03.png

        Select “Install a program in a new virtual drive”

        Click Next
        darkenedsky04.png

        Name the virtual drive: darkenedskye

        Click Next
        darkenedsky05.png

        Check all three options:

        • Use another version of Wine
        • Configure Wine
        • Install some libraries

        Click Next
        darkenedsky06.png

        Select Wine 3.0

        Click Next
        darkenedsky07.png

        Select “32 bits windows installation”

        Click Next
        darkenedsky08.png

        Wine Configuration

        Applications Tab
        Windows version: Windows XP

        Click Apply
        darkenedsky09.png

        Graphics Tab
        Check “Automatically capture the mouse in full-screen windows”

        Check “emulate a virtual desktop”
        Desktop size: 1280×960
        Click OK
        darkenedsky10.png

        PlayOnLinux Packages (DLLs, Libraries, Components)

        Check the following:

        • POL_Install_corefonts
        • POL_Install_d3dx9
        • POL_Install_tahoma

        Click Next
        darkenedsky11.png

        Note: All packages should automatically download and install
        Click Browse

        Navigate to your Darkened Skye folder on your Desktop

        Select “Setup.exe”
        Click Open
        darkenedsky13.png

        Click Next again

        Click “Install Now”

        darkenedsky15.png

        Click Next

        darkenedsky16.png

        Click Yes

        darkenedsky17.png

        Uncheck “Create shortcut on desktop”

        Click Next
        darkenedsky18.png

        Check “Direct 3D”

        Screen: 1280x960x32
        Check “Fit video to screen”
        Check “Full Screen”
        Click Apply
        Click Exit
        darkenedsky19.png

        Click “Don’t play now”

        darkenedsky20.png

        Installation may crash

        Do not click Cancel!!!
        Click Next
        darkenedsky21.png

        PlayOnLinux Shortcut
        Select “Skye.exe”

        Click Next
        darkenedsky22.png

        Name the shortcut: Darkened Skye

        Click Next
        darkenedsky23.png

        Select “I don’t want to make another shortcut”

        Click Next
        darkenedsky24.png

        PlayOnLinux Configure
        Select “Darkened Skye”

        Click Configure
        darkenedsky25.png

        General Tab
        Wine version: 3.0

        darkenedsky26.png

        Note: Click the down-arrow to select other versions of Wine. Click the + to download other versions of Wine.

        Display Tab
        Video memory size: Enter the amount of memory your video card/chip uses

        darkenedsky27.png

        Close Configure

        Launch Darkened Skye by clicking Run

        darkenedsky28.png

        Note: Click debug to see errors and bugs

        Conclusion:
        Darkened Skye is definitely a low-budget game with some simple game mechanics. I like how the dialog makes fun of adventure gaming, but it’s not the most original or clever way of implementing it. Widescreen support doesn’t exist, so you have to run at 1280×960; your Linux Desktop should resize to match and appear fullscreen. Otherwise, set your Linux Desktop to 1280×960 before launching the game.

        Screenshots:

        darkenedsky80.jpg

        darkenedsky81.jpg

        darkenedsky85.jpg

        darkenedsky90.jpg

        darkenedsky91.jpg

        darkenedsky94.jpg

        darkenedsky96.jpg

        Source

        Sysget – A Front-end For Popular Package Managers

        by sk · October 11, 2018



        Source

        OWASP Security Shepherd- Session Management Challenge One – Solution – LSB – ls /blog

        We have another solution in the OWASP Security Shepherd challenges and we enjoyed completing this one. You can find out about Session Management from OWASP here. So let’s get on with the challenge!!

        Below is the screen we are presented with; if we click on the Administrators Only button, we are told we are not admin. Simple enough: we need to escalate our privileges to admin to complete the challenge.

        sesh1

        Apparently the dogs have been released. This challenge will require a proxy for us to intercept the packet before it hits the server to see what is going across the airwaves. We will use Burp Suite for this task which comes as a default tool in Kali Linux.


        You can find out how to configure your browser to work with Burp Suite here. So let’s hit the Admin button again and catch the packet in Burp. [ Click on images for a better view. ]

        sesh2

        At the bottom of the data being sent over the wire we can see a few Boolean statements. AdminDetected=false, what can we do with that?


        Let’s change it to true and forward the packet to the server.

        sesh3

        Whoops!! That was detected on the server, probably best to not do that again. So what’s next? Let’s look at the packet again to see what other information we can extract from it. We will send the packet again, click the admin button, catch it in the proxy and inspect the packet.

        sesh2

        Looking more carefully at the packet this time, we should notice a strange cookie called checksum. Its value looks like an MD5 hash. So let’s right-click on the packet in Burp and send it to our decoder tab to decode the value.


        sesh4

        Bingo!! When we decode the value we can see that it checks whether userRole=admin. This cookie seems to be verifying that the user is an admin, merely encoded with the MD5 algorithm. We can’t just send that back to the server; that is a normal request and we would be right back at the start. So maybe we need to change it slightly and then send it to the server?

        How about we lengthen the word admin to administrator?

        sesh5

        Let’s quickly encode that back to MD5 with the tabs on the right hand side, replace the checksum in the sending packet with our new checksum and then forward that packet to the server.
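        The re-encoding step can also be reproduced from a shell with md5sum (the exact cookie string is an assumption based on the screenshots):

```shell
# -n keeps the trailing newline out of the hashed input,
# matching the raw cookie value the browser would send
echo -n 'userRole=administrator' | md5sum | cut -d' ' -f1
```

        Paste the resulting 32-character hex digest in place of the original checksum before forwarding the packet.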

        sesh6

        Looks fine and dandy, will we gain privileges? Let’s Forward the packet and see what happens.

        sesh7

        Perfecto!! To be honest, we didn’t get this on the first go; it was a bit of a challenge, but I managed to get there in the end. Hacking requires attention to detail, and knowing when cookies are sent in an HTTP request helps us manipulate those cookies. Having a basic understanding of hashing helps too, as we were able to identify the hash used in the cookie. So another level of SecShep DEFEATED!!

        Thanks for reading and I hope it helps you in some way.

        QuBits 2018-10-10


        Source

        Greg Kroah-Hartman: Outside Phone Vendors Aren’t Updating Their Linux Kernels

        David on Saturday October 06, 2018 @03:34PM

        from the downstream-developers dept.

        “Linux runs the world, right? So we want to make sure that things are secure,” says Linux kernel maintainer Greg Kroah-Hartman. When asked in a new video interview which bug makes them most angry, he first replies “the whole Spectre/Meltdown problem. What made us so mad, in a way, is we were fixing a bug in somebody else’s layer!”

        One also interesting thing about the whole Spectre/Meltdown is the complexity of that black box of a CPU is much much larger than it used to be. Right? Because they’re doing — in order to eke out all the performance and all the new things like that, you have to do extra-special tricks and things like that. And they have been, and sometimes those tricks come back to bite you in the butt. And they have, in this case. So we have to work around that.

        But a companion article on Linux.com notes that “Intel has changed its approach in light of these events. ‘They are reworking on how they approach security bugs and how they work with the community because they know they did it wrong,’ Kroah-Hartman said.” (And the article adds that “for those who want to build a career in kernel space, security is a good place to get started…”)

        Kroah-Hartman points out in the video interview that “we’re doing more and more testing, more and more builds,” noting “This infrastructure we have is catching things at an earlier stage — because it’s there — which is awesome to see.” But security issues can persist thanks to outside vendors beyond their control. Linux.com reports:
        Hardening the kernel is not enough, vendors have to enable the new features and take advantage of them. That’s not happening. Kroah-Hartman releases a stable kernel every week, and companies pick one to support for a longer period so that device manufacturers can take advantage of it. However, Kroah-Hartman has observed that, aside from the Google Pixel, most Android phones don’t include the additional hardening features, meaning all those phones are vulnerable. “People need to enable this stuff,” he said.

        “I went out and bought all the top of the line phones based on kernel 4.4 to see which one actually updated. I found only one company that updated their kernel,” he said. “I’m working through the whole supply chain trying to solve that problem because it’s a tough problem. There are many different groups involved — the SoC manufacturers, the carriers, and so on. The point is that they have to push the kernel that we create out to people.”

        “The good news,” according to Linux.com, “is that unlike with consumer electronics, the big vendors like Red Hat and SUSE keep the kernel updated even in the enterprise environment. Modern systems with containers, pods, and virtualization make this even easier. It’s effortless to update and reboot with no downtime.”

        Source

        How to install Moodle on Debian 9 • LinuxCloudVPS Blog

        how to install moodle on debian 9

        Moodle is a free and open-source learning management system designed to give teachers and educators the tools to create personalized learning environments with dynamic online courses that help students and other users achieve their learning goals. Today we will learn how to install the latest Moodle 3.5 release on Debian 9, with the Apache web server, MariaDB, and PHP 7.

        Moodle comes with hundreds of built-in features such as:

        • Modern and easy to use interface
        • Personalized Dashboard
        • Collaborative tools and activities
        • All-in-one calendar
        • Secure authentication and mass enrollment
        • Multilingual capability
        • Direct learning paths
        • Multimedia Integration
        • Customizable site design and layout
        • and much more …

        1. Login via SSH

        Connect to your server via SSH as user root, using the following command:

        ssh root@IP_ADDRESS -p PORT_NUMBER

        Make sure to replace "IP_ADDRESS" and "PORT_NUMBER" with your server's actual IP address and SSH port number.

        2. Update the OS Packages

        Once logged in, run the following command to update your OS packages:

        apt-get update
        apt-get upgrade

        3. Install Apache Web Server

        To install the Apache web server on your server, run the following command:

        apt-get install apache2

        Once the installation is complete, start Apache and enable it to start automatically on system boot:

        systemctl start apache2
        systemctl enable apache2

        4. Install MariaDB

        Moodle stores most of its data in a database, so we will install the MariaDB database server:

        apt-get install mariadb-server mariadb-client

        When the MariaDB installation is complete, run the following command to secure your MariaDB installation:

        mysql_secure_installation

        5. Install PHP 7

        Next, we will install PHP 7 and all the additional PHP modules which will be required by Moodle:

        apt-get install php7.0 libapache2-mod-php7.0 php7.0-pspell php7.0-curl php7.0-gd php7.0-intl php7.0-mysql php7.0-xml php7.0-xmlrpc php7.0-ldap php7.0-zip php7.0-soap php7.0-mbstring

        6. Download and install Moodle

        Before we download the moodle package, first let’s navigate to the default Apache web server root directory:

        cd /var/www/html

        To download the latest moodle package, use the following command:

        wget https://download.moodle.org/stable35/moodle-latest-35.tgz

        root@host:/# wget https://download.moodle.org/stable35/moodle-latest-35.tgz
        --2018-09-15 12:56:34-- https://download.moodle.org/stable35/moodle-latest-35.tgz
        Resolving download.moodle.org (download.moodle.org)… 104.20.218.25, 104.20.219.25, 2400:cb00:2048:1::6814:da19, …
        Connecting to download.moodle.org (download.moodle.org)|104.20.218.25|:443… connected.
        HTTP request sent, awaiting response… 200 OK
        Length: 46447511 (44M) [application/x-gzip]
        Saving to: ‘moodle-latest-35.tgz’

        moodle-latest-35.tgz 100%[=====================================================================================================>] 44.29M 60.7MB/s in 0.7s

        2018-09-15 12:56:36 (60.7 MB/s) - ‘moodle-latest-35.tgz’ saved [46447511/46447511]

        Once the download is complete, extract the archive:

        tar -xvzf moodle-latest-35.tgz

        Then change the ownership and permissions of the extracted Moodle directory with the following commands:

        chown -R www-data:www-data /var/www/html/moodle
        chmod -R 775 /var/www/html/moodle

        Additionally, you will also need to create a directory for the Moodle data:

        mkdir /var/moodledata

        And set the correct ownership and permissions:

        chown www-data:www-data /var/moodledata
        chmod 775 /var/moodledata
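        If you are unsure what mode 775 actually grants, here is a quick throwaway check you can run against a temporary directory (not the real data directory):

```shell
# 775 = rwxrwxr-x: the owner and group may write, everyone else may
# only read and traverse the directory.
mkdir -p /tmp/moodledata-demo
chmod 775 /tmp/moodledata-demo
mode=$(stat -c '%a' /tmp/moodledata-demo)
echo "$mode"   # 775
rm -rf /tmp/moodledata-demo
```

        The group write bit is what lets the www-data user (which Apache runs as on Debian) create files in the data directory.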

        7. Configure MariaDB and create a new database

        Before you create a new moodle database, you will need to modify the default MariaDB configuration file. Moodle requires that you change the default storage engine to innodb and change the default file format to Barracuda. You will also need to set innodb_file_per_table in order for Barracuda to work properly.

        To edit the MariaDB configuration file, run the following command:

        nano /etc/mysql/mariadb.conf.d/50-server.cnf

        Then add the following lines just below the [mysqld] section:

        default_storage_engine = innodb
        innodb_file_per_table = 1
        innodb_file_format = Barracuda
        innodb_large_prefix = 1

        Save and exit the file and restart the MariaDB server with:

        systemctl restart mariadb
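        If you want to double-check that the new lines really sit under the [mysqld] section and not some other section, here is a small sketch that validates a local sample copy of the file; point the path at the real 50-server.cnf on your server instead:

```shell
# Write a sample of the expected config, then verify that
# innodb_file_format is set inside the [mysqld] section.
# Replace the /tmp path with /etc/mysql/mariadb.conf.d/50-server.cnf
# to check the real file.
cnf=/tmp/50-server.cnf.sample
cat > "$cnf" <<'EOF'
[mysqld]
default_storage_engine = innodb
innodb_file_per_table = 1
innodb_file_format = Barracuda
innodb_large_prefix = 1
EOF
fmt=$(awk '/^\[mysqld\]/{s=1;next} /^\[/{s=0} s && $1=="innodb_file_format"{print $3}' "$cnf")
echo "$fmt"   # Barracuda
```

        A setting placed under the wrong section is silently ignored by the server, which is a common reason the Moodle installer later complains about the storage engine.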

        You can now log in to the MariaDB server as user root and create a new user and database for the Moodle installation:

        mysql -u root -p
        CREATE DATABASE moodle DEFAULT CHARACTER SET utf8 COLLATE utf8_unicode_ci;
        GRANT ALL PRIVILEGES ON moodle.* TO 'moodle_user'@'localhost' IDENTIFIED BY 'PASSWORD';
        FLUSH PRIVILEGES;
        exit;

        Don’t forget to replace 'PASSWORD' with an actual strong password.
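        One simple way to generate such a password is to read random bytes from /dev/urandom; the 20-character length and alphanumeric character set here are arbitrary choices:

```shell
# Generate a random 20-character alphanumeric password for moodle_user
pass=$(tr -dc 'A-Za-z0-9' < /dev/urandom | head -c 20)
echo "${#pass}"
echo "$pass"
```

        Store the generated value somewhere safe, since you will need it again when the Moodle installer asks for the database credentials.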

        8. Configure Apache Web Server

        If you have a valid domain name which you would like to use to access your Moodle installation, you will need to create a new Apache virtual host for your domain name with the following content:

        nano /etc/apache2/sites-available/yourdomain.com.conf
        <VirtualHost *:80>
        ServerAdmin admin@yourdomain.com
        DocumentRoot /var/www/html/moodle
        ServerName yourdomain.com
        ServerAlias www.yourdomain.com

        <Directory /var/www/html/moodle/>
        Options FollowSymLinks
        AllowOverride All
        Require all granted
        </Directory>

        ErrorLog /var/log/apache2/yourdomain.com-error_log
        CustomLog /var/log/apache2/yourdomain.com-access_log common
        </VirtualHost>

        Save the file and enable the virtual host with the following command:

        a2ensite yourdomain.com.conf

        Once you enable the virtual host, you will need to restart the Apache web server:

        systemctl restart apache2

        9. Finish the Moodle installation in your browser

        If your DNS records are properly configured and your domain points to your server, you can access your Moodle installation by visiting http://yourdomain.com in your browser. Choose your preferred language to continue with the installation.

        installing moodle on debian 9

        Verify that all Moodle directory paths are correct and click on Next.

        install moodle on debian 9

        Choose the database type.

        install moodle on debian

        Enter the database name, username, and password of the moodle database we have created earlier.

        Moodle Installation on Debian 9

        Follow the on-screen instructions to finish the installation. In the end, you should see the following screen where you need to configure your main administrator account.

        How do you install Moodle on Debian

        Congratulations! You have now successfully installed Moodle on your server. For more information on how to configure and use Moodle, you can check their official documentation.

        Of course, you don’t have to install Moodle on Debian 9, if you use one of our Managed Debian Cloud Hosting services, in which case you can simply ask our expert system administrators to install Moodle on Debian 9 for you. They are available 24×7 and will take care of your request immediately.

        PS. If you liked this post, on how to install Moodle on Debian 9, please share it with your friends on the social networks using the buttons below or simply leave a comment in the comments section. Thanks.

        Source
