Posts

Plaso 20221229 released

The Plaso team is delighted to announce a new Plaso release, 20221229. This release has a mixture of new features and under the hood improvements.

Notable changes

- The sources.conf configuration file has been moved to the formatter configuration (#4287).
- Updated the maximum worker limit to 99 (#4312).
- Event generation has been split from event data extraction and can be configured using timeliner.yaml (see the sketch below).
- Unified single-line and multi-line text parsers.
- Changes to the locate parser with thanks to @sydp (#4395).
- Added support for Safari Downloads.plist with thanks to @chb2mn (#4486).
- Fix for an issue that did not surface before the 20221227 release with thanks to @william-billaud (#4526).

The full list of cleanups, performance tweaks and bug fixes can be found in the release milestone.

Upcoming changes in future releases

- Additional improvements to Windows EventLog resource extraction and message formatting (#4259).
- Various legacy/backwards compatibility co…
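For readers curious what the new timeliner.yaml configuration looks like, the sketch below follows the layout of Plaso's bundled data/timeliner.yaml; the data type and attribute names here are hypothetical placeholders, so consult the file shipped with the release for the authoritative schema.

# Hypothetical timeliner.yaml entry (placeholder data type and attributes)
---
data_type: 'example:log:entry'
attribute_mappings:
- name: 'creation_time'
  description: 'Creation Time'
- name: 'last_access_time'
  description: 'Last Access Time'
place_holder_event: true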

DFIR for good

December is typically the time of year we think of donating to charity. There are many ways we can help others, including with DFIR. The following is one such tale, authored by Assen Tasheff and copied with permission.

Once upon a time

Once upon a time there was a humanitarian aid organization. They had a limited budget to spend on commercial software, so Linux was their operating system of choice. For DFIR they relied on a custom toolset that leveraged the Sleuth Kit (TSK) [1] in an EDR (live-response) solution. All was well.

The EDR solution functioned flawlessly for years, until the day came that their hardware needed to be upgraded. This hardware upgrade also forced the organization to upgrade their version of Linux. This is where the organization was confronted with the (Linux) Logical Volume Manager, or LVM. As a result, their custom-developed EDR solution stopped working. Although the organization could now enjoy faster hardware, they were left without a functioning DFIR solution.

Transitioning Forensics Wiki to GitHub

Copied on 20 November 2022 from https://forensicswiki.xyz/page/Main_Page with permission. Authored by Simson Garfinkel.

We created the Forensics Wiki so that digital forensics practitioners would have a place where they could record things that they had learned, for the benefit of themselves, their co-workers, and the community as a whole. The Forensics Wiki was modeled on Wikipedia, in that we created it as a non-commercial space without advertising. But the Forensics Wiki soon developed ground rules that were fundamentally different from Wikipedia.

Unlike Wikipedia, information on the Forensics Wiki does not need to be cited. Because the Forensics Wiki is created and maintained by practitioners, much of the information that the wiki contains is either original information that the authors have learned through the process of reverse engineering, or else information they couldn't attribute to a specific source due to security concerns.

Find the needle faster with hashR data

Co-author: Janosch Köpper

A challenge in compromise investigations is the volume of data to be analysed. In a previous article we showed how hashR can be used to generate custom hash sets. In this article we demonstrate how such a custom hash set can greatly speed up your investigation by making it possible to find files (new binaries, modified configs) that are not part of a base (operating system) image.

In this article we are going to walk through investigating a compromised GCE VM running CentOS. Let's assume we get an alert from our detection systems that this VM connected to an IP address associated with a nation state (APT) actor.

Processing and preparing the data

First, we will run dfTimewolf's gcp_forensics to acquire the disk from the compromised VM and prepare our investigative environment:

dftimewolf gcp_forensics --instances <compromised_vm_name> --analysis_project_name <analysis_project_name> <compromised_vm_project_name>

This command will perform the following steps…
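To make this concrete, here is the same command with hypothetical values filled in: a compromised VM named web-frontend-1 in project prod-env, collected into an analysis project named ir-analysis (all three names are placeholders, not values from this investigation):

dftimewolf gcp_forensics --instances web-frontend-1 --analysis_project_name ir-analysis prod-env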

Plaso 20220930 released

The Plaso team is delighted to announce a new Plaso release, 20220930. This release has a mixture of new features and under the hood improvements.

Notable changes

New parsers and parser plugins and improvements to format support:

- Windows Defender History Log (#4167), PostgreSQL application log parser (#4175) and enhanced format support of the IIS parser (#4146) with thanks to @Fryyyyy
- Windows User Access Logging parser (#4176) with thanks to @hur
- Snort/Suricata fast-format alert log parser (#4220) with thanks to @jgru
- Android logcat parser (#4214) and .viminfo parser (#4233) with thanks to @sydp
- Enhanced format support of the AWS ELB text parser plugin (#4189) with thanks to @alexgoedeke
- iOS CarPlayApp.plist plist plugin (#4156) with thanks to @studiawan

Support for processing the content of .dmg, .iso and .vhd[x] “as archives” was added. To specify which archive types should be processed, the `--process-archives` option has been superseded by the `--archives` option.
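A minimal sketch of the new option in use, assuming tar and zip are among the supported archive type names (check `log2timeline.py --help` in this release for the exact values):

log2timeline.py --archives=tar,zip --storage-file timeline.plaso evidence.raw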
Timesketch, Header Mapping for CSV imports

Introduction

Timesketch is an open-source tool for collaborative forensic timeline analysis. The Timesketch user interface allows a user to upload CSV, JSONL and Plaso storage files. This blogpost will focus on Timesketch's CSV import capability. CSV is a common denominator format for data exchange between analytic tools.

Timesketch's current CSV import feature requires the file to have a set of columns representing a message, a timestamp and its description. These fields need to be named in a certain way, otherwise the Timesketch server will discard them. For this reason, users usually have to pre-process their CSVs between export from a tool and import into Timesketch to satisfy this constraint.

This blogpost describes a new solution to overcome this limitation, i.e., uploading a CSV without one or more of the required headers. This new functionality is called header mapping because it allows the user to map one or more missing required headers…
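To make the constraint concrete, below is a minimal CSV that the default import accepts; the header names message, datetime and timestamp_desc are our reading of Timesketch's required columns, and the data row is made up for illustration:

message,datetime,timestamp_desc
"Suspicious login from 198.51.100.7",2022-11-20T10:00:00+00:00,Event Logged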