Linux Scoop — Nitrux 1.0.16




Nitrux 1.0.16 – See What’s New

Nitrux 1.0.16 is the latest release of Nitrux OS, based on the development branch of Ubuntu 18.10 Cosmic Cuttlefish and powered by the Linux kernel 4.18 series. This release also brings the latest software updates, bug fixes, performance improvements, and ready-to-use hardware support.

It uses the latest version of Nomad Desktop as the default desktop environment, built on top of KDE Plasma 5.13.90 and Qt 5.11.1. The Software Center was updated to use a new web-scraper backend, allowing automated sorting and listing of AppImages.

Download Nitrux 1.0.16





Source

Configure your web application pentesting lab

By

Shashwat Chaudhary


April 04, 2017

  • Disclaimer – TL;DR: some of the stuff here can be used to carry out illegal activity; our intention, however, is to educate

In the previous tutorial, we set up our web application pentesting lab. However, it's far from ready, and we need to make some changes to get it working as per our needs. Here's the link to the previous post if you didn't follow it:

Set up your web app pentesting lab

Contents

  1. Fixing the problems
  2. Changing credentials
  3. Adding recaptcha key
  4. Enabling disabled stuff
  5. Installing missing stuff
  6. Giving write privileges

Fixing the problems

If you remember, in the previous post we reached this point:

There’s some stuff in red color

All the stuff in red needs fixing. If you are lucky, we have the same set of issues to fix. Otherwise, you'll have to do some googling to figure out how to fix the problems which you are facing and I am not.

Changing the MySQL username and password

The default credentials in the config.inc.php file are 'root' and 'p@ssw0rd'. We change them to the correct MySQL login credentials – 'root' and '' (empty) in my case. Change them to match your own MySQL credentials. This gets rid of our biggest worry – Unable to connect to database!

This is the biggest problem. Solving it means we can create our database; some modules may not work
perfectly, but DVWA will run. Without fixing this, we won't even be able to start.
To fix it, open the /opt/lampp/htdocs/DVWA-master/config/config.inc.php file in your favorite text editor.

This default password isn't the password of my MySQL database. In my case, the password is empty, i.e. two single quotes ('').
Update the value here. If your MySQL password is something else, use that. Change
the username too if need be.
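As a rough sketch, the same edit can also be made non-interactively. The db_user/db_password key names below match DVWA's stock config, but verify them in your copy and substitute your own credentials:

cd /opt/lampp/htdocs/DVWA-master/config
sudo cp config.inc.php config.inc.php.bak   # back up the config before editing it
sudo sed -i "s/'db_password' ] = 'p@ssw0rd'/'db_password' ] = ''/" config.inc.php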

Now we’ll fix the other remaining issues.

Fixing missing recaptcha key

Firstly, we need to solve the missing reCAPTCHA key problem. Go to the URL linked in the original post (Google's reCAPTCHA signup page); you'll see a form like this
Fill in the form; the values don't matter much
You obtain a site key and a secret key (site key = public key, secret key = private key)
Open the config.inc.php file in your favourite text editor
Edit the recaptcha public key and private key fields. Here is what I did.
Now we have a recaptcha key. One red down, 3 to go.
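If you want to confirm where those fields live, a quick grep will locate them (key names assumed from DVWA's stock config; verify in your copy):

grep -n 'recaptcha' /opt/lampp/htdocs/DVWA-master/config/config.inc.php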

Fixing disabled allow_url_include

We simply have to locate the configuration file and change the value of the parameter from Off to On; the steps (and a scripted version) follow below.

The PHP configuration file is located at /opt/lampp/etc/php.ini
Edit it with your favourite text editor; you'll need root privileges (sudo)
Locate the allow_url_include line using your text editor's search feature
Change Off to On
Restart the lampp service
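As a sketch, the same edit and restart can be scripted. The paths assume a default XAMPP install; check the exact php.ini line format on your system before pointing sed at it:

sudo sed -i 's/^allow_url_include\s*=\s*Off/allow_url_include = On/' /opt/lampp/etc/php.ini   # flip the setting
sudo /opt/lampp/lampp restart   # restart XAMPP so PHP picks it up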

Reload the page, and you'll see that the issue is fixed

Note: Any other function which is disabled can be enabled in a similar manner. All these settings live in the php.ini file; you just need to search for the corresponding line and edit it.

Fixing missing modules

If a module is shown as missing, then we need to install it. In my case, everything is installed, and since you are most likely also using XAMPP, everything should be installed for you too. However, if that is not the case, then you have to figure out how to install the modules: if you aren't using XAMPP and did everything manually, then apt-get would be the way to go; otherwise, look at the documentation of XAMPP (or whichever bundle you are using).

Fixing File Ownership

We need to give the www-data user write access to two directories. We can use the chgrp and chmod commands in unison to give only the privileges that are needed, or we could go the lazy way and use chmod 777 (full read, write and execute privileges for everyone). I'm feeling lazy, so I'm just gonna go the chmod way. Run the command below:

chmod 777 <directory>

Replace directory with the correct directory.
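If you'd rather not make the directories world-writable, a more restrained sketch looks like this. The path is illustrative (DVWA's uploads directory under the XAMPP layout used above); apply it to whichever directories the setup page flags in red:

sudo chgrp www-data /opt/lampp/htdocs/DVWA-master/hackable/uploads   # hand the directory to the web server's group
sudo chmod g+w /opt/lampp/htdocs/DVWA-master/hackable/uploads        # and give that group write access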

This is the last thing that needs to be done
Everything is green, finally! Also, notice the credentials; we'll need them later.
“admin // password”
Database created and populated with tables.
Finally, the damn vulnerable application is running.

The username is “admin” and the password is “password” (the “admin // password” we saw three pics ago).

Everything is running perfectly. This is the page you should see after successful login.

I’ll leave you at the welcome page of DVWA. In the next tutorial, we’ll begin proper exploitation of the intentional vulnerabilities, moving from trivial stuff to the really hard stuff. The first two tutorials complete the installation and configuration parts.

Source

Clearing the (hybrid and multi-) clouds of confusion


    Despite cloud computing being a generally well-accepted and widely used technology that has slipped into the common vernacular very easily, there is still some confusion around the different types of cloud options out there, specifically around the concepts of multi-cloud and hybrid cloud. While some of this is due to slightly hazy marketing, largely it is down to misunderstanding. We know, just from looking at many of the cars on the street today, that a hybrid is a combination of two things (in the case of the image on this page, a bobcat and a bird), but how does that differ from multi-cloud?

    A blog written in 2017 by our own Terri Schlosser previously addressed this, but having had a number of conversations with confused customers and partners over the last year or so, I decided to record a very brief video to help clarify the situation.

    Follow this link to watch the video, and if you have any thoughts, please leave a comment on this blog, contact me at matthew.johns@suse.com or via Twitter. I hope that you find it useful in understanding more about what can be at times a very confusing set of terms. If you’d like to read more about cloud in general, then please visit our Cloud Solutions page on suse.com, or get in contact with us to see how SUSE can support you in your journey to the cloud.


      Source

      HTTP download speed difference in Windows vs Linux


      HTTP download speed difference in Windows 7 vs Linux

      I have a strange situation: a Windows PC is showing limited internet transfer speeds for no apparent reason, while the same test on a Linux box gives good speed.

      After some intense debugging, I was able to diagnose the root cause of the problem.

      The culprit was on the Windows side: how Windows handles HTTP traffic locally. We came across some TCP settings which restrict download speed on the Windows box, so in order to permit fast downloads of large files, we modified the settings below:

      These were my initial TCP settings

      C:\Windows\system32>netsh interface tcp show global

      Querying active state…

      TCP Global Parameters

      ———————————————-

      Receive-Side Scaling State: disabled

      Chimney Offload State : automatic

      NetDMA State: enabled

      Direct Cache Acess (DCA): disabled

      Receive Window Auto-Tuning Level: disabled

      Add-On Congestion Control Provider: none

      ECN Capability: disabled

      RFC 1323 Timestamps : disabled

      ** The above autotuninglevel setting is the result of Windows Scaling heuristics overriding any local/policy configuration on at least one profile.

      C:\Windows\system32>netsh interface tcp show heuristics

      TCP Window Scaling heuristics Parameters

      ———————————————-

      Window Scaling heuristics : enabled

      Qualifying Destination Threshold: 3

      Profile type unknown: normal

      Profile type public : normal

      Profile type private: restricted

      Profile type domain : normal

      Thus I did:

      # disable heuristics

      C:\Windows\system32>netsh interface tcp set heuristics wsh=disabled

      Ok.

      # enable receive-side scaling

      C:\Windows\system32>netsh int tcp set global rss=enabled

      Ok.

      # manually set autotuning profile

      C:\Windows\system32>netsh interface tcp set global autotuning=experimental

      Ok.

      # set congestion provider

      C:\Windows\system32>netsh interface tcp set global congestionprovider=ctcp

      Ok.

      C:\Windows\system32>netsh interface tcp show global

      Querying active state…

      TCP Global Parameters

      ———————————————-

      Receive-Side Scaling State: enabled

      Chimney Offload State : automatic

      NetDMA State: enabled

      Direct Cache Acess (DCA): disabled

      Receive Window Auto-Tuning Level: experimental

      Add-On Congestion Control Provider: ctcp

      ECN Capability: disabled

      RFC 1323 Timestamps : disabled

      After changing these settings downloading is fast again, hitting the internet connection’s limit.
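      Should these changes misbehave, the original values can be restored with the same commands – this is simply the mirror image of the first “show global” listing above:

      # restore the original settings
      C:\Windows\system32>netsh interface tcp set heuristics wsh=enabled
      C:\Windows\system32>netsh interface tcp set global rss=disabled
      C:\Windows\system32>netsh interface tcp set global autotuninglevel=disabled
      C:\Windows\system32>netsh interface tcp set global congestionprovider=none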

      Source

      Find Exact Installation Date And Time Of Your Linux OS

      On Fedora, RHEL and its clones such as CentOS, Scientific Linux, Oracle Linux, you can find it using the following command:

      rpm -qi basesystem

      Sample output

      # rpm -qi basesystem
      Name : basesystem
      Version : 10.0
      Release : 7.el7.centos
      Architecture: noarch
      Install Date: Thu 29 Mar 2018 05:05:32 PM IST
      Group : System Environment/Base
      Size : 0
      License : Public Domain
      Signature : RSA/SHA256, Fri 04 Jul 2014 06:16:57 AM IST, Key ID 24c6a8a7f4a80eb5
      Source RPM : basesystem-10.0-7.el7.centos.src.rpm
      Build Date : Fri 27 Jun 2014 04:07:10 PM IST
      Build Host : worker1.bsys.centos.org
      Relocations : (not relocatable)
      Packager : CentOS BuildSystem http://bugs.centos.org
      Vendor : CentOS
      Summary : The skeleton package which defines a simple CentOS Linux system
      Description :
      Basesystem defines the components of a basic CentOS Linux
      system (for example, the package installation order to use during
      bootstrapping). Basesystem should be in every installation of a system,
      and it should never be removed.
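      If you only want the date itself, rpm's query-format option can print just that field (a convenience one-liner for the same RPM-based systems):

      rpm -q basesystem --qf '%{INSTALLTIME:date}\n'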

      Source

      Unleash powerful Linux container-building capabilities with Buildah – Red Hat Enterprise Linux Blog

      Balancing size and features is a universal challenge when building software, so it's unsurprising that this holds true when building container images. If you don't include enough packages in your base image, you end up with images that are difficult to troubleshoot or missing something you need, or you cause different development teams to add the exact same package to layered images (causing duplication). If you build it too big, people complain because it takes too long to download – especially for quick and dirty projects or demos. This is where Buildah comes in.

      In the currently available ecosystem of build tools, there are two main kinds of build tools:

      1. Ones which build container images from scratch.
      2. Those that build layered images.

      Buildah is unique in that it elegantly blurs the line between both – and, it has a rich set of capabilities for each. One of those rich capabilities is multi-stage builds.

      At Red Hat Summit 2018 in San Francisco, Scott McCarty and I boiled the practice of building production ready containers down into five key tenets – standardize, minimize, delegate, process, and iterate (video & presentation).

      Two tenets in particular are often at odds – standardize and minimize. It makes sense to standardize on a rich base image, while at the same time minimizing the content in layered builds. Balancing both is tricky, but when done right, reaps the benefits of OCI image layers at scale (lots of applications) and improves registry storage efficiency.

      Multi-stage builds

      A particularly powerful example of how to achieve this balance is the concept of multi-stage builds. Since build dependencies like compilers and package managers are rarely required at runtime, we can exclude them from the final build by breaking it into two parts. We can do the heavy lifting in the first part, then use the build artifacts (think Go binaries or jars) in the second. We will then use the container image from the second build in production.

      Using this methodology leverages the power of rich base images, while at the same time, results in a significantly smaller container image. The resultant image isn’t carrying additional dependencies that aren’t used during runtime. The multi-stage build concept became popular last year with the release of Docker v17.05, and OpenShift has long had a similar capability with the concept of chaining builds.

      OK, multi-stage builds are great, you get it, but to make this work right, the two builds need to be able to copy data between them. Before we tackle this, let’s start with some background.

      Buildah background

      Buildah was a complete rethink of how container image builds could and should work. It follows the Unix philosophy of small, flexible tools. Multi-stage builds were part of the original design and have been possible since its inception. With the release of Buildah 1.0, users can now take advantage of the simplicity of using multi-stage builds with the Dockerfile format. All of this, with a smaller tool, no daemon, and tons of flexibility during builds (ex. build time volumes).

      Below we’ll take a look at how to use Buildah to accomplish multi-stage builds with a Dockerfile and also explore a simpler, yet more sophisticated way to tackle them.

      Using Dockerfiles:

      $ buildah bud -t [image:tag] .

      ….and that's it! Assuming your Dockerfile is written for multi-stage builds and sits in the directory where the command is executed, everything will just work. So if this is all you're looking for, know that it's now trivial to accomplish with Buildah in Red Hat Enterprise Linux 7.5.
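      For context, a multi-stage Dockerfile of the kind buildah bud consumes looks roughly like this – a sketch adapted from the href-counter example discussed below, so treat the exact versions and paths as illustrative:

      # first stage: build the Go binary with the full golang toolchain image
      FROM golang:1.7.3
      WORKDIR /go/src/github.com/alexellis/href-counter/
      RUN go get -d -v golang.org/x/net/html
      COPY app.go .
      RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .

      # second stage: copy only the binary into a minimal runtime image
      FROM alpine:latest
      RUN apk --no-cache add ca-certificates
      COPY --from=0 /go/src/github.com/alexellis/href-counter/app .
      CMD ["./app"]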

      Now, let’s dig a little deeper and take a look at using Buildah’s native commands to achieve the same outcome and some reasons why this can be a powerful alternative for certain use cases.

      For clarity, we'll start from Alex Ellis's blog post that demonstrates the benefits of performing multi-stage builds. This example is used simply to compare and contrast the Dockerfile version with Buildah's native capabilities; it's not an endorsement of any underlying technologies such as Alpine Linux or APK. These examples could all be done in Fedora, but that would make the comparison less clear.

      Using Buildah Commands

      Using his https://github.com/alexellis/href-counter repository, we can convert the included Dockerfile.multi file to a simple script like this:

      First Build

      #!/bin/bash

      # build container

      buildcntr1=$(buildah from golang:1.7.3)

      buildmnt1=$(buildah mount $buildcntr1)

      Using simple variables like this is not required, but it makes the later commands clearer to read, so it's recommended. Think of buildcntr1 as a handle which represents the container build, while the variable buildmnt1 holds the directory where the container is mounted.

      buildah run $buildcntr1 go get -d -v golang.org/x/net/html

      This is the first command, verbatim from the original Dockerfile. All that's needed is to change RUN to run and point Buildah at the container we want to execute the command in. Once the command completes, the Go library dependencies are in place inside the build container. Next we need to bring in the source file; Buildah has a native directive to copy content into a container build:

      buildah copy $buildcntr1 app.go .

      Alternatively, we can use a system command to do the same thing by referencing the mount point:

      cp app.go $buildmnt1/go

      For this example, both of these lines accomplish the same thing. We can use buildah's copy command the same way the COPY command works in a Dockerfile, or we can simply use the host's cp command to copy the file into the container. In the rest of this tutorial, we'll rely on the host's commands.

      Now, let's build the code. Again the only tweak is changing RUN to run and executing the command in the same build container:

      buildah run $buildcntr1 /bin/sh -c "CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app ."

      Second Build

      Now let's define a separate runtime container that we'll use to run our application in production:

      # runtime container

      buildcntr2=$(buildah from alpine:latest)

      buildmnt2=$(buildah mount $buildcntr2)

      The same tweak applies to the RUN command here, executed in the new runtime container:

      buildah run $buildcntr2 apk --no-cache add ca-certificates


      #buildah copy $buildcntr2 $buildmnt1/go/app .

      Or:

      cp $buildmnt1/go/app $buildmnt2

      Here we have the same options as above: to bring the compiled application into the second build, we can use the copy command from buildah or from the host.

      Now, add the default command to the production image.

      buildah config --cmd ./app $buildcntr2

      Finally, we unmount and commit the image, and optionally clean up the environment:

      #unmount & commit the image

      buildah unmount $buildcntr1 $buildcntr2

      buildah commit $buildcntr2 multi-stage:latest


      #clean up build

      buildah rm $buildcntr1 $buildcntr2

      Don't forget that Buildah can also push the image to your desired registry using `buildah push`.

      The beauty of Buildah is that we can continue to leverage the simplicity of the Dockerfile format while no longer being bound by its limitations. People do some nasty, nasty things in a Dockerfile to hack everything onto a single line, which makes Dockerfiles hard to read, difficult to maintain, and inelegant.

      When you combine the power of being able to manipulate images with native Linux tooling from the build host, you are now free to go beyond the Dockerfile commands! This opens up a ton of new possibilities for the content of container images, the security model involved, and the process for building.

      A great example of this was explored in one of Tom Sweeney’s blog posts on creating minimal containers. Tom’s example of leveraging the build host’s package manager is a great one, and means we no longer require something like “yum” to be available in the final image.

      On the security side, we no longer require access to the Docker socket which is a win for performing builds from Kubernetes/OpenShift. In fairness Buildah currently requires escalated privileges on the host, but soon this will no longer be the case. Finally, on the process side, we can leverage Buildah to augment any existing build process, be it a CI/CD pipeline or building from a Kubernetes cluster to create simple and production-ready images.

      Buildah provides all of the primitives needed to take advantage of the simplicity of Dockerfiles combined with the power of native Linux tooling, and is also paving the way to more secure container builds in OpenShift. If you are running Red Hat Enterprise Linux, or possibly an alternative Linux distribution, I highly recommend taking a look at Buildah and maximizing your container build process for production.

      Source

      What Is /dev/shm in linux?


      Shared (Virtual) Memory (SHM)


      Shared memory, as its name implies, is a method for processes to share state: both processes define the same memory area as “shared”, and they can then exchange information simply by writing into it. This used to be (and still is, somewhat) faster than the alternative of sending network or pipe-based messages between processes. If you see memory as a means of storing data, a file on a file system can likewise be seen as shared memory (i.e. a shared file).

      It is difficult to account for shared memory. Does it belong to one process? Both? Neither? If we naively sum the memory belonging to multiple processes, we grossly “over-count”.

      As the name implies, shared (virtual) memory refers to virtual memory that is shared by more than one process and can therefore be used by multiple programs simultaneously. Although virtual memory allows processes to have separate (virtual) address spaces, there are times when you need processes to share memory.

      Shared memory (SHM) is thus a method of interprocess communication (IPC) whereby several processes share a single chunk of memory to communicate, and it provides the fastest way for processes to pass large amounts of data to one another.


      /dev/shm is nothing but an implementation of this traditional shared memory concept. It is an efficient means of passing data between programs: one program creates a memory portion which other processes (if permitted) can access. This speeds things up on Linux.


      shm / shmfs is also known as tmpfs, a common name for a temporary file storage facility on many Unix-like operating systems. It is intended to appear as a mounted file system, but one which uses virtual memory instead of a persistent storage device. If you type the mount command, you will see /dev/shm listed as a tmpfs file system. It is, therefore, a file system which keeps all files in virtual memory. Everything in tmpfs is temporary in the sense that no files will be created on your hard drive, and if you unmount a tmpfs instance, everything stored therein is lost. By default, almost all Linux distros are configured to use /dev/shm.
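      You can check this on your own machine with the standard mount and df utilities (output will vary by system):

      mount | grep shm
      df -h /dev/shm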

      Difference between tmpfs and swap

      • tmpfs uses memory, whereas swap uses persistent storage devices.
      • tmpfs shows up as a file system in df output, whereas swap does not.
      • swap has general size recommendations; tmpfs does not – its size varies with the system's purpose.
      • tmpfs makes applications faster on loaded systems; swap helps the system breathe in memory-full situations.
      • A full swap indicates a heavily loaded system, with degraded performance and possible crashes.
      • A full tmpfs does not necessarily mean heavy load or an imminent crash.
      • tmpfs is an enhancement, whereas swap is a must-have feature!
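      To illustrate that tmpfs sizing is up to you (the path and size below are arbitrary examples, not recommendations), you can mount your own instance:

      sudo mkdir -p /mnt/mytmpfs
      sudo mount -t tmpfs -o size=512m tmpfs /mnt/mytmpfs   # a 512 MB memory-backed file system
      df -h /mnt/mytmpfs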



        Source

        Darkened Skye Guide | GamersOnLinux


        darkenedsky87.jpg

        Darkened Skye is an action game based on the Skittles candy, in which Skye can cast magic spells by combining Skittles. From fireballs, ice balls and lightning to firewalking, floating, shrinking and necromancy, Skye can use them to eliminate enemies and solve puzzles.

        darkenedsky83.jpg

        Follow my step-by-step guide on installing, configuring and optimizing Darkened Skye in Linux with PlayOnLinux.

        Note: This guide applies to the Retail CD-ROM version of Darkened Skye. Other versions may require additional steps.

        Tips & Specs:
        To learn more about PlayOnLinux and Wine configuration, see the online manual: PlayOnLinux Explained

        Mint 18.3 64-bit

        PlayOnLinux: 4.2.12
        Wine: 3.0

        Wine Installation
        Click Tools

        Select “Manage Wine Versions”
        wine01.png

        Look for the Wine Version: 3.0

        Select it
        Click the arrow pointing to the right
        wine02.png

        Click Next

        Downloading Wine

        wine04.png

        Extracting

        Downloading Gecko

        wine05.png

        Installed

        wine06.png

        Wine 3.0 is installed and you can close this window

        Copy Disk Data

        1. Create a new folder on your Desktop
        2. Name it Darkened Skye
        3. Insert Disk 1 and copy all the data into your new folder
        4. Insert Disk 2 and copy all the data into the new folder

        Merge and over-write any files

        darkenedsky01.png

        Note: Leave Disk 2 in the drive

        PlayOnLinux Setup
        Launch PlayOnLinux

        Click Install
        darkenedsky02.png

        Click “Install a non-listed program”

        darkenedsky03.png

        Select “Install a program in a new virtual drive”

        Click Next
        darkenedsky04.png

        Name the virtual drive: darkenedskye

        Click Next
        darkenedsky05.png

        Check all three options:

        • Use another version of Wine
        • Configure Wine
        • Install some libraries

        Click Next
        darkenedsky06.png

        Select Wine 3.0

        Click Next
        darkenedsky07.png

        Select “32 bits windows installation”

        Click Next
        darkenedsky08.png

        Wine Configuration

        Applications Tab
        Windows version: Windows XP

        Click Apply
        darkenedsky09.png

        Graphics Tab
        Check “Automatically capture the mouse in full-screen windows”

        Check “emulate a virtual desktop”
        Desktop size: 1280×960
        Click OK
        darkenedsky10.png

        PlayOnLinux Packages (DLLs, Libraries, Components)

        Check the following:

        • POL_Install_corefonts
        • POL_Install_d3dx9
        • POL_Install_tahoma

        Click Next
        darkenedsky11.png

        Note: All packages should automatically download and install
        Click Browse

        Navigate to your Darkened Skye folder on your Desktop

        Select “Setup.exe”
        Click Open
        darkenedsky13.png

        Click Next again

        Click “Install Now”

        darkenedsky15.png

        Click Next

        darkenedsky16.png

        Click Yes

        darkenedsky17.png

        Uncheck “Create shortcut on desktop”

        Click Next
        darkenedsky18.png

        Check “Direct 3D”

        Screen: 1280x960x32
        Check “Fit video to screen”
        Check “Full Screen”
        Click Apply
        Click Exit
        darkenedsky19.png

        Click “Don’t play now”

        darkenedsky20.png

        Installation may crash

        Do not click Cancel!!!
        Click Next
        darkenedsky21.png

        PlayOnLinux Shortcut
        Select “Skye.exe”

        Click Next
        darkenedsky22.png

        Name the shortcut: Darkened Skye

        Click Next
        darkenedsky23.png

        Select “I don’t want to make another shortcut”

        Click Next
        darkenedsky24.png

        PlayOnLinux Configure
        Select “Darkened Skye”

        Click Configure
        darkenedsky25.png

        General Tab
        Wine version: 3.0

        darkenedsky26.png

        Note: Click the down-arrow to select other versions of Wine. Click the + to download other versions of Wine

        Display Tab
        Video memory size: Enter the amount of memory your video card/chip uses

        darkenedsky27.png

        Close Configure

        Launch Darkened Skye by clicking Run

        darkenedsky28.png

        Note: Click debug to see errors and bugs

        Conclusion:
        Darkened Skye is definitely a low-budget game with some simple game mechanics. I like how the dialog makes fun of adventure gaming, but it's not the most original or clever way of implementing it. Widescreen support doesn't exist, so you have to run at 1280×960; your Linux desktop should resize to match and appear fullscreen. Otherwise, set your Linux desktop to 1280×960 before launching the game.

        Screenshots:

        darkenedsky80.jpg

        darkenedsky81.jpg

        darkenedsky85.jpg

        darkenedsky90.jpg

        darkenedsky91.jpg

        darkenedsky94.jpg

        darkenedsky96.jpg

        Source

        Sysget – A Front-end For Popular Package Managers

        by
        sk
        ·
        October 11, 2018

        Sysget - A Front-end For Popular Package Managers


        Source

        OWASP Security Shepherd – Session Management Challenge One – Solution

        We have another solution in the OWASP Security Shepherd challenges and we enjoyed completing this one. You can find out about Session Management from OWASP here. So let’s get on with the challenge!!

        Below is the screen we are presented with; if we click on the Administrators Only button, we are told we are not admin. Simple enough – we need to escalate our privileges to admin to complete the challenge.

        sesh1

        Apparently the dogs have been released. This challenge will require a proxy to intercept the packet before it hits the server, so we can see what is going across the airwaves. We will use Burp Suite for this task, which comes as a default tool in Kali Linux.


        You can find out how to configure your browser to work with Burp Suite here. So let’s hit the Admin button again and catch the packet in Burp. [ Click on images for a better view. ]

        sesh2

        At the bottom of the data being sent over the wire, we can see a few Boolean parameters. AdminDetected=false – what can we do with that?


        Let's change it to true and forward the packet to the server.

        sesh3

        Whoops!! That was detected on the server; probably best not to do that again. So what's next? Let's look at the packet again to see what other information we can extract from it. We'll hit the admin button again, catch the packet in the proxy, and inspect it.

        sesh2

        Looking more carefully at the packet this time, we should notice that there is a strange cookie in there called checksum. The checksum looks to be an encoded value. So let's right-click on the packet in Burp and send it to our Decoder tab to decode it.


        sesh4

        Bingo!! When we decode the value, we can see that it contains userRole=admin. This cookie seems to be checking whether the user is an admin, and it is merely encoded. We can't just send that back to the server as-is – that is a normal request, and we'd just be back at the start. So maybe we need to change it slightly and then send it to the server?

        How about we lengthen the word admin to administrator?

        sesh5

        Let's quickly encode that back using the tabs on the right-hand side, replace the checksum in the outgoing packet with our new value, and then forward the packet to the server.

        sesh6

        Looks fine and dandy, will we gain privileges? Let’s Forward the packet and see what happens.

        sesh7

        Perfecto!! To be honest, we didn't get this on the first go, and it was a bit of a challenge, but we managed to get there in the end. Hacking requires attention to detail, and knowing when cookies are sent in an HTTP request helps us manipulate those cookies. Having a basic understanding of encodings helps too, as we were able to identify how the cookie value was encoded. So another level of SecShep DEFEATED!!
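        One side note: since the checksum could be decoded back to readable text, it has to be a reversible encoding rather than a one-way hash, and Base64 is the usual suspect here (an assumption – Burp's Decoder shows which transformation it applied). Assuming Base64, the replacement cookie value could also be crafted from a shell:

        echo -n 'userRole=administrator' | base64   # assumed encoding; prints the value to paste into the checksum cookie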

        Thanks for reading and I hope it helps you in some way.

        QuBits 2018-10-10


        Source
