
Sunday, February 18, 2024

Linpmem - A Physical Memory Acquisition Tool For Linux


Like its Windows counterpart, Winpmem, this is not a traditional memory dumper. Linpmem offers an API for reading from any physical address, including reserved memory and memory holes, but it can also be used for normal memory dumping. Furthermore, the driver offers a variety of access modes to read physical memory (byte, word, dword, qword, and buffer access mode), where buffer access mode is appropriate in most standard cases. If a read requires an aligned byte/word/dword/qword access, Linpmem will do precisely that.

Currently, Linpmem provides the following features:

  1. Read from physical address (access mode byte, word, dword, qword, or buffer)
  2. CR3 info service (specify target process by pid)
  3. Virtual to physical address translation service

Cache control will be added in the future to support the specialized read access modes.

Building the kernel driver

At least for now, you must compile the Linpmem driver yourself. A method to load a precompiled Linpmem driver on other Linux systems is in progress, but not finished yet. That said, compiling the Linpmem driver is not difficult; it essentially comes down to running 'make'.

Step 1 - getting the right headers

You need make and a C compiler (we recommend gcc, but clang should work as well).

Make sure you have the Linux headers installed (using whatever package manager your target distribution provides). The exact package name may vary by distribution. A quick (distro-independent) way to check whether the package is installed:

ls -l /usr/lib/modules/`uname -r`/

That's it, you can proceed to step 2.

Foreign system: currently, if you want to compile the driver for another system (e.g., because you want to create a memory dump but can't compile on the target), you have to download the header package directly from the package repositories of that system's Linux distribution. Double-check that the package version exactly matches the release and kernel version running on the foreign system. If the other system uses a self-compiled kernel, you have to obtain a copy of that kernel's build directory. Then point the KDIR environment variable at either directory:

export KDIR=path/to/extracted/header/package/or/kernel/root

Step 2 - make

Compiling the driver is simple; just type:

make

This should produce linpmem.ko in the current working directory.

You might want to check precompiler.h beforehand and choose whether to compile for release or debug (e.g., with debug printing). There aren't many other precompiler settings right now.

Loading The Driver

The linpmem.ko module can be loaded using insmod path-to-linpmem.ko, and unloaded with rmmod path-to-linpmem.ko. (This loads the driver only for the current uptime; it does not persist across reboots.) If you compiled for debug, also take a look at dmesg.

After loading, you need to create the device node in order to talk to the driver:

mknod /dev/linpmem c 42 0

If you can't talk to the driver, check the dmesg log to verify that 42 was indeed the registered major number:

[12827.900168] linpmem: registered chrdev with major 42

Usually, though, the kernel will assign exactly this requested number.
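If you'd rather not hard-code the major number, you can read it from /proc/devices instead; this one-liner assumes the character device is registered under the name 'linpmem', as the dmesg line above suggests:

mknod /dev/linpmem c $(awk '$2 == "linpmem" {print $1}' /proc/devices) 0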

You can use chown on the device to give it to your user if you do not want to keep a root console open all the time. (Or just keep using it in a root console.)

  • Watch dmesg output. Please report errors if you see any!
  • Warning: if Linpmem prints a dmesg error telling you to reboot, do so immediately.
  • Warning: this is an early version.

Usage

Demo Code

There is example code demonstrating and explaining (in detail) how to interact with the driver. The user-space API reference can also be found in ./userspace_interface/linpmem_shared.h.

  1. cd demo
  2. gcc -o test test.c
  3. (sudo) ./test // <= you need sudo if you did not use chown on the device.

This code is important if you want to understand how to interact with the driver directly instead of going through a library. It can also be used as a quick function test.
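To give a rough idea of the interaction pattern, here is a minimal, hypothetical sketch: the real ioctl command codes and request layout are defined in ./userspace_interface/linpmem_shared.h, and the names below (struct linpmem_read, LINPMEM_IOCTL_READ_PHYS) are placeholders, not the actual API. Consult the header and demo/test.c for the real definitions.

/* Hypothetical sketch; see linpmem_shared.h and demo/test.c for the real API. */
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/ioctl.h>
#include <unistd.h>

struct linpmem_read {          /* placeholder request layout */
    uint64_t phys_address;     /* physical address to read from */
    uint64_t size;             /* number of bytes (buffer access mode) */
    uint8_t *out;              /* user buffer receiving the data */
};

/* Placeholder ioctl code; substitute the real one from linpmem_shared.h. */
#define LINPMEM_IOCTL_READ_PHYS _IOWR('p', 1, struct linpmem_read)

int main(void)
{
    int fd = open("/dev/linpmem", O_RDWR);   /* device created via mknod */
    if (fd < 0) { perror("open /dev/linpmem"); return 1; }

    uint8_t buf[256];
    struct linpmem_read req = { 0x1000, sizeof(buf), buf };

    if (ioctl(fd, LINPMEM_IOCTL_READ_PHYS, &req) != 0)
        perror("ioctl");
    else
        printf("first byte at 0x1000: 0x%02x\n", buf[0]);

    close(fd);
    return 0;
}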

Command Line Interface Tool

There is an (optional) basic command-line interface tool for Linpmem, the pmem CLI tool. It can be found here: https://github.com/vobst/linpmem-cli. Aside from the source code, a precompiled CLI tool as well as a precompiled static library and headers can be found there (signed). Note: this is a preliminary version; be sure to check for updates, as many additions and enhancements will follow soon.

The pmem CLI tool can be used for testing the various functions of Linpmem in a (relatively) safe and convenient manner. Linpmem can also be loaded by this tool instead of using insmod/rmmod, with some extra options planned for future releases. This also has the advantage that pmem auto-creates the right device for you for immediate use. It is extremely portable and runs on any Linux system (it has even been tested on Linux 2.6).

$ ./pmem -h
Command-line client for the linpmem driver

Usage: pmem [OPTIONS] [COMMAND]

Commands:
  insmod  Load the linpmem driver
  help    Print this message or the help of the given subcommand(s)

Options:
  -a, --address <ADDRESS>            Address for physical read operations
  -v, --virt-address <VIRT_ADDRESS>  Translate address in target process' address space (default: current process)
  -s, --size <SIZE>                  Size of buffer read operations
  -m, --mode <MODE>                  Access mode for read operations [possible values: byte, word, dword, qword, buffer]
  -p, --pid <PID>                    Target process for cr3 info and virtual-to-physical translations
      --cr3                          Query cr3 value of target process (default: current process)
      --verbose                      Display debug output
  -h, --help                         Print help (see more with '--help')
  -V, --version                      Print version
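For example, combining the options documented above, a first smoke test could be a single read at some physical address (the address below is only an example; whether it is readable depends on your system):

$ sudo ./pmem -a 0x1000 -m byte
$ sudo ./pmem -a 0x1000 -s 256 -m buffer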

If you want to compile the CLI tool yourself, change to its directory and follow the instructions in the (CLI) Readme to build it. Otherwise, just download the prebuilt program; it should work on any Linux. To load the kernel driver with the CLI tool:

# pmem insmod path/to/linpmem.ko

The advantage of using the pmem tool to load the driver is that you do not have to create the device file yourself, and future releases will let you choose who owns the linpmem device.

Libraries

The pmem command-line interface is only a thin wrapper around a small Rust library that exposes an API for interfacing with the driver. More advanced users can use this library directly. The library is automatically compiled (as a portable static library) along with the pmem CLI tool when building from https://github.com/vobst/linpmem-cli, but it is also included (precompiled) here (signed). Note: this is a preliminary version; more to follow soon.

If you do not want to use the usermode library and prefer to interface with the driver directly on your own, you can find its user-space API/interface and documentation in ./userspace_interface/linpmem_shared.h. We also provide example code in demo/test.c that explains how to use the driver directly.

Memdumping tool

Not implemented yet.

Tested Linux Distributions

  • Debian, self-compiled 6.4.X, Qemu/KVM, not paravirtualized.
    • PTI: off/on
  • Debian 12, Qemu/KVM, fully paravirtualized.
    • PTI: on
  • Ubuntu server, Qemu/KVM, not paravirtualized.
    • PTI: on
  • Fedora 38, Qemu/KVM, fully paravirtualized.
    • PTI: on
  • Baremetal Linux test, AMI BIOS: Linux 6.4.4
    • PTI: on
  • Baremetal Linux test, HP: Linux 6.4.4
    • PTI: on
  • Baremetal, Arch[-hardened], Dell BIOS, Linux 6.4.X
  • Baremetal, Debian, 6.1.X
  • Baremetal, Ubuntu 20.04 with Secure Boot on. Works, but sign driver first.
  • Baremetal, Ubuntu 22.04, Linux 6.2.X

Handling Secure Boot

If the system reports the following error message when loading the module, it might be because of secure boot:

$ sudo insmod linpmem.ko
insmod: ERROR: could not insert module linpmem.ko: Operation not permitted

There are different ways to still load the module. The obvious one is to disable secure boot in your UEFI settings.

If your distribution supports it, a more elegant solution would be to sign the module before using it. This can be done using the following steps (tested on Ubuntu 20.04).

  1. Install mokutil:
    $ sudo apt install mokutil
  2. Create the signing key material:
    $ openssl req -new -newkey rsa:4096 -keyout mok-signing.key -out mok-signing.crt -outform DER -days 365 -nodes -subj "/CN=Some descriptive name/"
    Make sure to adjust the options to your needs. In particular, consider the key length (-newkey), the validity (-days), the option to set a key passphrase (-nodes; leave it out if you want to set a passphrase), and the common name to include in the certificate (-subj).
  3. Register the new MOK:
    $ sudo mokutil --import mok-signing.crt
    You will be asked for a password, which is required in the following step. Consider using a password that you can type on a US keyboard layout.
  4. Reboot the system. It will enter a MOK enrollment menu. Follow the instructions to enroll your new key.
  5. Sign the module: once the MOK is enrolled, you can sign your module.
    $ /usr/src/linux-headers-$(uname -r)/scripts/sign-file sha256 path/to/mok-signing.key path/to/mok-signing.crt path/to/linpmem.ko

After that, you should be able to load the module.

Note that from a forensic-readiness perspective, you should prepare a signed module before you need it, as the system will reboot twice during the process described above, destroying most of your volatile data in memory.

Known Issues

  • Huge page reads are not implemented. Linpmem recognizes a huge page and rejects the read, for now.
  • Reading from memory-mapped I/O and DMA space is currently done with CPU caching enabled.
  • No locks are taken during the page table walk. This might lead to odd results when concurrent modifications are going on. This is a general (and mostly unsolvable) problem of reading live RAM without halting the entire OS.
  • Secure Boot (Ubuntu): please sign your driver prior to use.
  • Any CPU-powered memory encryption, e.g., AMD SME, Intel SGX/TDX, ...
  • Pluton chips?

(Please report potential issues if you encounter anything.)

Under work

  • Loading a precompiled driver on any Linux.
  • Processor cache control, e.g., for uncached reading of mapped I/O and DMA space.

Future work

  • Arm/Mips support. (far future work)
  • Legacy kernels (such as 2.6), unix-based kernels

Acknowledgements

Linpmem, as well as Winpmem, would not exist without the work of our predecessors on the (now retired) REKALL project: https://github.com/google/rekall.

  • We would like to thank Mike Cohen and Johannes Stüttgen for their pioneering work and open-source contribution on PTE remapping, a technique which is still in use 10 years later.

Our open source contributors:

  • Viviane Zwanger
  • Valentin Obst



Monday, February 7, 2022

Fhex - A Full-Featured HexEditor

This project was born with the aim of developing a lightweight but useful tool. The reason is that existing hex editors have various limitations (e.g., too many dependencies, missing hex coloring features, etc.).


This project is based on qhexedit2 and the capstone and keystone engines. New features could be added in the future; PRs are welcome.

Features
  • Chunks loader - Used to load only a portion of large files without exhausting memory (use ALT + left/right arrows to move among chunks). Please note that in chunk mode, all operations (e.g., search) apply only to the current chunk, except for file save (the entire file is saved). However, each time you edit a chunk, save it before moving to another chunk, otherwise you will lose your changes.
  • Search and replace (UTF-8, HEX, regex, reverse search supported) [CTRL + F]
  • Colored output (white spaces, ASCII characters, 0xFF, UTF-8 and NULL bytes have different colors)
  • Interpret selected bytes as integer, long, unsigned long [CTRL + B]
  • Copy & Paste [CTRL + C and CTRL + V]
  • Copy selected unicode characters [CTRL + Space]
  • Zeroing all the selected bytes [Delete or CTRL + D]
  • Undo & Redo [CTRL + Z and CTRL + Y]
  • Drag & Drop (Hint: Drag&Drop two files to diff them)
  • Overwrite the same file or create a new one [CTRL + S]
  • Goto offset [CTRL + G]
  • Insert mode supported, in order to insert new bytes instead of overwriting the existing ones [INS]
  • Create new instances [CTRL + N]
  • Basic text viewer for the selected text [CTRL + T]
  • Reload the current file [F5]
  • Compare two different files at byte level
  • Browsable Binary Chart (see later for details) [F1]
  • Hex - Dec number converter [F2]
  • Hex String escaper (e.g. from 010203 to \x01\x02\x03) [F3]
  • Pattern Matching Engine (see later for details)
  • Disassembler based on Capstone Engine [F4]
  • Assembler based on Keystone Engine [F4]
  • Zoom-Out/Zoom-In bytes view (CTRL + Up/Down or CTRL + -/+)
  • Shortcuts for all these features
Pattern Matching Engine

At startup, Fhex can load a configuration file (from ~/fhex/config.json) in JSON format with a list of strings or byte sequences to highlight and a comment/label to show next to the matches.

Examples:

{
    "PatternMatching": [
        {
            "string": "://www.",
            "color": "rgba(250,200,200,50)",
            "message": "Found url"
        },
        {
            "bytes": "414243",
            "color": "rgba(250,200,200,50)",
            "message": "Found ABC"
        }
    ]
}

To activate pattern matching, press CTRL + P. At the end, Fhex will also show an offset list with all the result references. Note: labels with comments are added only if the window is maximized; if labels are not displayed correctly, try running pattern matching again.

Binary Chart

Fhex can chart the loaded binary file (note: to compile the project, you now also need qt5-charts installed on the system). The y-axis range is 0 to 255 (in hex, 0x0 to 0xff, i.e., the byte values). The x-axis range is 0 to the file size.

The chart plots the byte values of the binary file and lets you focus only on the relevant sections. For example, if a binary file contains an area full of null bytes, you can easily detect it from the chart.

License

GPL-3




Sunday, February 6, 2022

AzureHunter - A Cloud Forensics Powershell Module To Run Threat Hunting Playbooks On Data From Azure And O365


A Powershell module to run threat hunting playbooks on data from Azure and O365 for Cloud Forensics purposes.


Getting Started

1. Check that you have the right O365 Permissions

The following roles are required in Exchange Online in order to have read-only access to the UnifiedAuditLog: View-Only Audit Logs or Audit Logs.

These roles are assigned by default to the Compliance Management role group in Exchange Admin Center.

NOTE: if you are a security analyst, incident responder or threat hunter and your organization is NOT giving you read-only access to these audit logs, you need to seriously question what their detection and response strategy is!


NOTE: your admin can verify these requirements by running Get-ManagementRoleEntry "*\Search-UnifiedAuditLog" in your Azure tenancy cloud shell or a local PowerShell instance connected to Azure.


2. Ensure ExchangeOnlineManagement v2 PowerShell Module is installed

Please make sure you have ExchangeOnlineManagement (EXOv2) installed. You can find instructions on the web or go directly to my little KB on how to do it at the soc analyst scrolls


3. Either Clone the Repo or Install AzureHunter from the PSGallery

3.1 Cloning the Repo
  1. Clone this repository
  2. Import the module: Import-Module .\source\AzureHunter.psd1

3.2 Install AzureHunter from the PSGallery

All you need to do is:

Install-Module AzureHunter -Scope CurrentUser
Import-Module AzureHunter

What is the UnifiedAuditLog?

The unified audit log contains user, group, application, domain, and directory activities performed in the Microsoft 365 admin center or in the Azure management portal. For a complete list of Azure AD events, see the list of RecordTypes.

The UnifiedAuditLog is a great source of cloud forensic information, since it contains a wealth of data on multiple types of cloud operations: ExchangeItems, SharePoint, Azure AD, OneDrive, Data Governance, Data Loss Prevention, Windows Defender Alerts and Quarantine events, Threat Intelligence events in Microsoft Defender for Office 365, and the list goes on and on!


AzureHunter Data Consistency Checks

AzureHunter implements some useful logic to ensure that the highest log density is mined and exported from Azure & O365 Audit Logs. In order to do this, we run two different operations for each cycle (batch):

  1. Automatic Window Time Reduction: this check ensures that the time interval is reduced to the optimal interval based on the ResultSizeUpperThreshold parameter, which defaults to 20k. That is, if the amount of logs returned within your designated TimeInterval is higher than ResultSizeUpperThreshold, an automatic adjustment will take place (see the example after this list).
  2. Sequential Data Check: verifies that the returned record indexes are sequentially valid.
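For example, assuming ResultSizeUpperThreshold is exposed as a parameter of Search-AzureCloudUnifiedLog (check Get-Help Search-AzureCloudUnifiedLog for the authoritative parameter list), you could lower the threshold to force smaller windows:

Search-AzureCloudUnifiedLog -StartDate "2020-03-06T10:00:00" -EndDate "2020-06-09T12:40:00" -TimeInterval 12 -ResultSizeUpperThreshold 10000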



Usage

Ensure you connect to ExchangeOnline

It's recommended that you run Connect-ExchangeOnline before running any AzureHunter commands. The module checks for an active remote session and attempts to connect, but some versions of PowerShell don't allow this and you need to do it yourself regardless.
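For example (the UPN below is illustrative):

Connect-ExchangeOnline -UserPrincipalName analyst@yourtenant.onmicrosoft.com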


Run AzureHunter

AzureHunter has two main commands: Search-AzureCloudUnifiedLog and Invoke-HuntAzureAuditLogs.

The purpose of Search-AzureCloudUnifiedLog is to implement complex logic that ensures the highest possible percentage of UnifiedAuditLog records is mined from Azure. By default, it will export extracted and deduplicated records to a CSV file.

The purpose of Invoke-HuntAzureAuditLogs is to provide a flexible interface into hunting playbooks stored in the playbooks folder. These playbooks are designed so that anyone can contribute with their own analytics and ideas. So far, only two very simple playbooks have been developed: AzHunter.Playbook.Exporter and AzHunter.Playbook.LogonAnalyser. The Exporter takes care of exporting records after applying de-duplication and sorting operations to the data. The LogonAnalyser is in beta mode and extracts events where the Operations property is UserLoggedIn. It is an example of what can be done with the playbooks and how easy it is to construct one.

When running Search-AzureCloudUnifiedLog, you can pass in a list of playbooks to run per log batch. Search-AzureCloudUnifiedLog will pass on the batch to the playbooks via Invoke-HuntAzureAuditLogs.

Finally, Invoke-HuntAzureAuditLogs can be used standalone. If you have an export of UnifiedAuditLog records, you can load them into a Powershell array, pass them to this command, and specify the relevant playbooks.


Example 1 | Run search on Azure UnifiedAuditLog and extract records to CSV file (default behaviour)
Search-AzureCloudUnifiedLog -StartDate "2020-03-06T10:00:00" -EndDate "2020-06-09T12:40:00" -TimeInterval 12 -AggregatedResultsFlushSize 5000 -Verbose

This command will:

  • Search data between the dates in StartDate and EndDate
  • Implement a window of 12 hours between these dates, which will be used to sweep the entire length of the time interval (StartDate --> EndDate). This window will be automatically reduced and adjusted to provide the maximum amount of records within the window, thus ensuring higher quality of output. The time window slides sequentially until reaching the EndDate.
  • The AggregatedResultsFlushSize parameter specifies the size of the record batches that will be processed by downstream playbooks. Here we are telling AzureHunter to process a batch once the total amount of records reaches 5000. This way, you can get results on the fly, without having to wait for hours until a huge span of records is exported to CSV files.

Example 2 | Run Hunting Playbooks on CSV File

We assume that you have exported UnifiedAuditLog records to a CSV file; if so, you can then do:

$RecordArray = Import-Csv .\my-exported-records.csv
Invoke-HuntAzureAuditLogs -Records $RecordArray -Playbooks 'AzHunter.Playbook.LogonAnalyser'

You can run more than one playbook by separating them with commas; they will run sequentially:

$RecordArray = Import-Csv .\my-exported-records.csv
Invoke-HuntAzureAuditLogs -Records $RecordArray -Playbooks 'AzHunter.Playbook.Exporter', 'AzHunter.Playbook.LogonAnalyser'

Why?

Since the aftermath of the SolarWinds Supply Chain Compromise, many tools have emerged from the deep forges of cyberforensicators, carefully developed by cyber blacksmith ninjas. These tools usually help you perform cloud forensics in Azure. My intention with AzureHunter is not to bring more noise to this crowded space; however, I found myself needing to address some gaps that I have observed in some of the tools in the space (I might be wrong, though, since there is a proliferation of tools out there and I don't know them all...):

  1. Azure cloud forensic tools don't usually address the complications of the Powershell API for the UnifiedAuditLog. This API is very unstable and inconsistent when exporting large quantities of data. I wanted to develop an interface that is fault tolerant (enough) to address some of these issues focusing solely on the UnifiedAuditLog since this is the Azure artefact that contains the most relevant and detailed activity logs for users, applications and services.
  2. Azure cloud forensic tools don't usually put focus on developing extensible Playbooks. I wanted to come up with a simple framework that would help the community create and share new playbooks to extract different types of meaning off the same data.

If, however, you are looking for a more feature rich and mature application for Azure Cloud Forensics I would suggest you check out the excellent work performed by the cyber security experts that created the following applications:

I'm sure there is a more extensive list of tools, but these are the ones I could come up with. Feel free to suggest some more.


Why Powershell?
  1. I didn't want to re-invent the wheel
  2. Yes, the Powershell interface to Azure's UnifiedAuditLog is unstable, but in terms of time-to-production it would have taken me an insane amount of hours to write a whole new interface in languages such as .NET, Golang or Python to achieve the same objectives. In the meanwhile, the world of Cyber Defense and Response does not wait!

TODO
  • Specify standard playbook metadata attributes that need to be present so that AzureHunter can leverage them.
  • Allow for playbooks to specify dependencies on other playbooks so that one needs to be run before the other. Playbook chaining could produce interesting results and avoid code duplication.
  • Develop Pester tests and Coveralls results.
  • Develop documentation in ReadTheDocs.
  • Allow for the specification of playbooks in SIGMA rule standard (this might require some PR to the SIGMA repo)


Tuesday, January 16, 2018

Easy-To-Use Live Forensics Toolbox For Linux Endpoints - Linux Expl0rer






Easy-to-use live forensics toolbox for Linux endpoints written in Python & Flask.

Capabilities

ps
  • View full process list
  • Inspect process memory map & fetch memory strings easily
  • Dump process memory in one click
  • Automatically search for the hash in public services

users
  • users list

find
  • Search for suspicious files by name/regex

netstat
  • Whois

logs
  • syslog
  • auth.log (user authentication log)
  • ufw.log (firewall log)
  • bash history

anti-rootkit
  • chkrootkit

yara
  • Scan a file or directory using YARA signatures by @Neo23x0
  • Scan a running process memory address space
  • Upload your own YARA signature

Requirements
  • Python 2.7
  • YARA
  • chkrootkit

Installation
  1. Clone repository
git clone https://github.com/intezer/linux_expl0rer
  2. Install required packages
pip install -r requirements.txt
  3. Set up VT/OTX API keys
nano config.py
Edit the following lines:
VT_APIKEY = '<key>'
OTX_APIKEY = '<key>'
  4. Install YARA
sudo apt-get install yara
  5. Install chkrootkit
sudo apt-get install chkrootkit

Start Linux Expl0rer server
sudo python linux_explorer.py

Usage
  1. Start your browser
firefox http://127.0.0.1:8080
  2. Do stuff


Sunday, January 14, 2018

Linux Memory Cryptographic Keys Extractor - CryKeX





CryKeX - Linux Memory Cryptographic Keys Extractor

Properties:
  • Cross-platform
  • Minimalism
  • Simplicity
  • Interactivity
  • Compatibility/Portability
  • Application independence
  • Process Wrapping
  • Process Injection

Dependencies:
  • Unix - should work on any Unix-based OS
    • BASH - the whole script
    • root privileges (optional)
Limitations:
  • AES and RSA keys only
  • Fails most of the time for Firefox browser
  • Won't work for disk encryption (LUKS) and PGP/GPG
  • Needs proper user privileges and memory authorizations

How it works
Some work has already been published on the security of cryptographic keys in DRAM. Basically, we need to find something that looks like a key (entropic and of a specific length) and then confirm its nature by analyzing the memory structure around it (C data types).
The idea is to dump the live memory of a process and use those techniques to find probable keys, since the memory mapping doesn't change. Thankfully, tools exist for that purpose.
The script is not only capable of injecting into already running processes, but also of wrapping new ones, by launching them separately and injecting shortly afterwards. This makes it capable of dumping keys from almost any process/binary on the system.
Of course, access to a process's memory is restricted by the kernel, which means that you will still require the proper privileges for a process.
Linux disk encryption (LUKS) uses an anti-forensic technique to mitigate this issue; however, extracting keys from a whole-memory dump is still possible.
The Firefox browser uses somewhat similar memory management, and thus seems not to be affected.
The same goes for PGP/GPG.

HowTo
Installing dependencies:
sudo apt install gdb aeskeyfind rsakeyfind || echo 'have you heard about source compiling?'
An interactive example for OpenSSL AES keys:
openssl aes-128-ecb -nosalt -out testAES.enc
Enter a password twice, then some text, and before terminating:
CryKeX.sh openssl
Finally, press Ctrl+D 3 times and check the result.
OpenSSL RSA keys:
openssl genrsa -des3 -out testRSA.pem 2048
When prompted for passphrase:
CryKeX.sh openssl
Verify:
openssl rsa -noout -text -in testRSA.pem
Let's extract keys from SSH:
echo 'Ciphers aes256-gcm@openssh.com' >> /etc/ssh/sshd_config
ssh user@host
CryKeX.sh ssh
From OpenVPN:
echo 'cipher AES-256-CBC' >> /etc/openvpn/server.conf
openvpn yourConf.ovpn
sudo CryKeX.sh openvpn
TrueCrypt/VeraCrypt is also affected: Select "veracrypt" file in VeraCrypt, mount with password "pass" and:
sudo CryKeX.sh veracrypt
Chromium-based browsers (thanks Google):
CryKeX.sh chromium
CryKeX.sh google-chrome
Despite Firefox not being explicitly affected, Tor Browser Bundle is still susceptible due to tunneling:
CryKeX.sh tor
As said, you can also wrap processes:
apt install libssl-dev
gcc cipher.c -o cipher -lcrypto
CryKeX.sh cipher
 wrap
 cipher





Saturday, May 27, 2017

Tools to analyze MS OLE2 files and MS Office documents, for malware analysis, forensics and debugging - oletools



oletools is a package of python tools to analyze Microsoft OLE2 files (also called Structured Storage, Compound File Binary Format or Compound Document File Format), such as Microsoft Office documents or Outlook messages, mainly for malware analysis, forensics and debugging. It is based on the olefile parser. See http://www.decalage.info/python/oletools for more info.


News
  • 2016-11-01 v0.50: all oletools now support python 2 and 3.
    • olevba: several bugfixes and improvements.
    • mraptor: improved detection, added mraptor_milter for Sendmail/Postfix integration.
    • rtfobj: brand new RTF parser, obfuscation-aware, improved display, detect executable files in OLE Package objects.
    • setup: now creates handy command-line scripts to run oletools from any directory.
  • 2016-06-10 v0.47: olevba added PPT97 macros support, improved handling of malformed/incomplete documents, improved error handling and JSON output, now returns an exit code based on analysis results, new --relaxed option. rtfobj: improved parsing to handle obfuscated RTF documents, added -d option to set output dir. Moved repository and documentation to GitHub.
  • 2016-04-19 v0.46: olevba does not deobfuscate VBA expressions by default (much faster), new option --deobf to enable it. Fixed color display bug on Windows for several tools.
  • 2016-04-12 v0.45: improved rtfobj to handle several anti-analysis tricks, improved olevba to export results in JSON format.
See the full changelog for more information.

Tools:
  • olebrowse: A simple GUI to browse OLE files (e.g. MS Word, Excel, Powerpoint documents), to view and extract individual data streams.
  • oleid: to analyze OLE files to detect specific characteristics usually found in malicious files.
  • olemeta: to extract all standard properties (metadata) from OLE files.
  • oletimes: to extract creation and modification timestamps of all streams and storages.
  • oledir: to display all the directory entries of an OLE file, including free and orphaned entries.
  • olemap: to display a map of all the sectors in an OLE file.
  • olevba: to extract and analyze VBA Macro source code from MS Office documents (OLE and OpenXML).
  • MacroRaptor: to detect malicious VBA Macros
  • pyxswf: to detect, extract and analyze Flash objects (SWF) that may be embedded in files such as MS Office documents (e.g. Word, Excel) and RTF, which is especially useful for malware analysis.
  • oleobj: to extract embedded objects from OLE files.
  • rtfobj: to extract embedded objects from RTF files.
  • and a few others (coming soon)

Projects using oletools:
oletools are used by a number of projects and online malware analysis services, including Viper, REMnux, FAME, Hybrid-analysis.com, Joe Sandbox, Deepviz, Laika BOSS, Cuckoo Sandbox, Anlyz.io, ViperMonkey, pcodedmp, dridex.malwareconfig.com, and probably VirusTotal. (Please contact me if you have or know a project using oletools)

Download and Install:
The recommended way to download and install/update the latest stable release of oletools is to use pip:
  • On Linux/Mac: sudo -H pip install -U oletools
  • On Windows: pip install -U oletools
This should automatically create command-line scripts to run each tool from any directory: olevba, mraptor, rtfobj, etc.
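For example, a typical first pass over a suspicious document (the file name is illustrative) could be:

oleid suspicious_document.doc
olevba suspicious_document.doc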
To get the latest development version instead:
  • On Linux/Mac: sudo -H pip install -U https://github.com/decalage2/oletools/archive/master.zip
  • On Windows: pip install -U https://github.com/decalage2/oletools/archive/master.zip
See the documentation for other installation options.

Documentation:
The latest version of the documentation can be found online, otherwise a copy is provided in the doc subfolder of the package.




Saturday, December 31, 2016

A Tool For Forensic File System Reconstruction - RecuperaBit




Software that attempts to reconstruct file system structures and recover files. Currently it supports only NTFS.

RecuperaBit attempts reconstruction of the directory structure regardless of:
  • missing partition table
  • unknown partition boundaries
  • partially-overwritten metadata
  • quick format


You can get more information about the reconstruction algorithms and the architecture used in RecuperaBit by reading my MSc thesis or checking out the slides.

Usage
usage: main.py [-h] [-s SAVEFILE] [-w] [-o OUTPUTDIR] path

Reconstruct the directory structure of possibly damaged filesystems.

positional arguments:
  path                  path to the disk image

optional arguments:
  -h, --help            show this help message and exit
  -s SAVEFILE, --savefile SAVEFILE
                        path of the scan save file
  -w, --overwrite       force overwrite of the save file
  -o OUTPUTDIR, --outputdir OUTPUTDIR
                        directory for restored contents and output files
The main argument is the path to a bitstream image of a disk or partition. RecuperaBit automatically determines the sectors from which partitions start.

RecuperaBit does not modify the disk image; however, it does read some parts of it multiple times during execution. It should also work on real devices, such as /dev/sda, but this is not advised.
Optionally, a save file can be specified with -s. The first time, after the scanning process, results are saved in the file. On subsequent runs, the file is read to analyze only the interesting sectors and speed up the loading phase.

Overwriting the save file can be forced with -w .

RecuperaBit includes a small command-line interface that allows the user to recover files and export the contents of a partition in CSV or body file format. These are exported in the directory specified by -o (or recuperabit_output by default).
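For example, a full run combining these options (paths are illustrative) could look like:

python main.py -s scan.save -o recovered_files /path/to/disk.img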

Pypy
RecuperaBit can be run with the standard cPython implementation, however speed can be increased by using it with the Pypy interpreter and JIT compiler:
pypy main.py /path/to/disk.img

Recovery of File Contents
Files can be restored one at a time or recursively, starting from a directory. After the scanning process has completed, you can check the list of partitions that can be recovered by issuing the following command at the prompt:
recoverable

Each line shows information about a partition. Let's consider the following output example:
Partition #0 -> Partition (NTFS, 15.00 MB, 11 files, Recoverable, Offset: 2048, Offset (b): 1048576, Sec/Clus: 8, MFT offset: 2080, MFT mirror offset: 17400)
If you want to recover files starting from a specific directory, you can either print the tree on screen with the tree command (very verbose for large drives) or you can export a CSV list of files (see help for details).

If you would rather extract all files from the Root and Lost Files nodes, you need to know the identifier of the root directory, which depends on the file system type. The identifiers for the file systems supported by RecuperaBit are:
File System Type    Root Id
NTFS                5

The id for Lost Files is -1 for every file system.

Therefore, to restore Partition #0 in our example, you need to run:
restore 0 5
restore 0 -1
The files will be saved inside the output directory specified by -o .



Thursday, September 22, 2016

Forensic Challenges - Labs




URLs

Host Forensics

Computer Forensic Investigation
http://www.shortinfosec.net/2008/07/competition-computer-forensic.html/
Digital Forensics Tool Testing Images
http://dftt.sourceforge.net/
DigitalCorpora
http://digitalcorpora.org/
DFRWS 2014 Forensics Rodeo
http://www.cs.uno.edu/~golden/dfrws-2014-rodeo.html
ForGe Forensic test image generator
https://github.com/hannuvisti/forge
ISFCE Sample Practical Exercise
http://www.isfce.com/sample-pe.htm
Linux LEO Supplemental Files
http://linuxleo.com/
NIST CFREDS
http://www.cfreds.nist.gov/dfr-test-images.html
http://www.cfreds.nist.gov/Hacking_Case.html
p0wnlabs Sample Challenges
http://www.p0wnlabs.com/free/forensics
Samples from Automating DFIR Series
http://www.hecfblog.com/2015/02/automating-dfir-how-to-series-on.html
volatility memory samples
https://code.google.com/p/volatility/wiki/FAQ

Network Forensics

Chris Sanders Packet Captures
http://chrissanders.org/packet-captures/
DigitalCorpora Packet Dumps
http://digitalcorpora.org/corpora/packet-dumps
Enron Email Dataset
http://www.cs.cmu.edu/~enron/
Ethereal Sample Captures
http://www.stearns.org/toolscd/current/pcapfile/README.ethereal-pcap.html
Evil Fingers PCAP Challenges
https://www.evilfingers.com/repository/pcaps_challenge.php
Kholia's Packet Captures
https://github.com/kholia/my-pcaps
LBNL-FTP-PKT
http://ee.lbl.gov/anonymized-traces.html/
MAWI Working Group Traffic Archive
http://mawi.wide.ad.jp/mawi/
PacketLife Capture Collection
http://packetlife.net/captures/
pcapr
http://www.pcapr.net
PCAPS Repository
https://github.com/markofu/pcaps
SANS DFIR Challenge
https://digital-forensics.sans.org/community/challenges
Spy Hunter Holiday Challenge
http://blog.mywarwithentropy.com/2015/11/spy-hunter-holiday-challenge-2015.html
http://blog.mywarwithentropy.com/2014/11/spy-hunter-holiday-challenge-2014.html
Tcpreplay Sample Captures
http://tcpreplay.appneta.com/wiki/captures.html
Wireshark Network Analysis Book Supplements
http://www.wiresharkbook.com/studyguide.html
Wireshark Sample Captures
http://wiki.wireshark.org/SampleCaptures
Xplico Sample captures
http://wiki.xplico.org/doku.php?id=pcap:pcap

Malware Analysis

Contagio
http://contagiodump.blogspot.com/
FakeAVs blog
http://www.fakeavs.com/
malc0de
http://malc0de.com/database/
MalShare
http://malshare.com/
Open Malware / Offensive Computing
http://openmalware.org/
theZoo / Malware DB
http://ytisf.github.io/theZoo/
VirusShare.com / VXShare
http://virusshare.com/
Virusign
http://www.virusign.com/
VX Heaven
http://vxheaven.org/
VXVault
http://vxvault.siri-urz.net
Georgia Tech malrec Page
http://panda.gtisc.gatech.edu/malrec/
Malware Traffic
http://malware-traffic-analysis.net/
Kernelmode Forum
http://www.kernelmode.info
Malware Hub Forum
http://malwaretips.com/categories/malware-hub.103/
Public Documents about APTs
https://github.com/kbandla/APTnotes
CLEAN MX realtime database
http://support.clean-mx.de/clean-mx/viruses.php
Joxean Koret's List
http://malwareurls.joxeankoret.com
MalwareBlacklist.com
http://www.malwareblacklist.com
Sucuri Research Labs
http://labs.sucuri.net/?malware
Android Sandbox
http://androidsandbox.net/samples/
Contagio Mobile Malware
http://contagiominidump.blogspot.com/
HoneyDrive
http://bruteforce.gr/honeydrive
maltrieve
http://maltrieve.org/

Online and CTFs

Black T-Shirt Cyber Forensics Challenge
https://cyberforensicschallenge.com/
DEFCON CTF Archive
https://www.defcon.org/html/links/dc-ctf.html
DFRWS
http://www.dfrws.org/2013/challenge/index.shtml
http://www.dfrws.org/2010/challenge/
http://www.dfrws.org/2011/challenge/index.shtml
http://www.dfrws.org/2007/challenge/index.shtml
http://www.dfrws.org/2006/challenge/
http://www.dfrws.org/2005/challenge/
Digital Forensics Security Treasure Hunt
http://digitalforensics.securitytreasurehunt.com/
ENISA CERT Training Material
https://www.enisa.europa.eu/activities/cert/support/exercise
ForensicKB Practicals
http://www.forensickb.com/2008/01/forensic-practical.html
http://www.forensickb.com/2008/01/forensic-practical-2.html
http://www.forensickb.com/2010/01/forensic-practical-exercise-3.html
http://www.forensickb.com/2010/06/forensic-practical-exercise-4.html
http://www.forensickb.com/2011/01/simple-forensic-puzzle-1.html
http://www.forensickb.com/2011/02/forensic-puzzle-6.html
HackEire CTF
https://github.com/markofu/hackeire
Honeynet Challenges
https://www.honeynet.org/challenges
http://old.honeynet.org/scans/index.html
Jack Crook's DFIR Challenges
https://docs.google.com/file/d/0B_xsNYzneAhEN2I5ZXpTdW9VMGM
I Smell Packets
http://ismellpackets.com/
Network Forensics Puzzle Contest
http://forensicscontest.com/puzzles
RingZer0 Team
http://ringzer0team.com/challenges
UMass Trace Repository
http://traces.cs.umass.edu/

Source: amanhardikar

By OffSec