Posts

Plaso 20220930 released

The Plaso team is delighted to announce a new Plaso release, 20220930. This release has a mixture of new features and under-the-hood improvements.

Notable changes

New parsers and parser plugins, and improvements to format support:
- Windows Defender History Log (#4167), PostgreSQL application log parser (#4175) and enhanced format support of the IIS parser (#4146), with thanks to @Fryyyyy
- Windows User Access Logging parser (#4176), with thanks to @hur
- Snort/Suricata fast-format alert log parser (#4220), with thanks to @jgru
- Android logcat parser (#4214) and .viminfo parser (#4233), with thanks to @sydp
- Enhanced format support of the AWS ELB text parser plugin (#4189), with thanks to @alexgoedeke
- iOS CarPlayApp.plist plist plugin (#4156), with thanks to @studiawan

Support for processing the content of .dmg, .iso and .vhd[x] "as archives" was added. To specify which archive types should be processed, the `--process-archives` option has been superseded by the `-…
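
As a quick, hedged illustration of picking up the new parsers (not part of the release notes; the flag names below reflect standard log2timeline.py usage and are worth confirming with `log2timeline.py --help`):

```bash
# List the parsers and plugins known to the installed Plaso version,
# to confirm the parsers added in this release are available.
log2timeline.py --info

# A typical extraction run over a source; with this release .dmg, .iso and
# .vhd[x] content can also be processed "as archives" (see the release notes
# for the option that replaces --process-archives).
log2timeline.py --storage-file timeline.plaso evidence.dmg
```
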
Timesketch, Header Mapping for CSV imports

Introduction

Timesketch is an open-source tool for collaborative forensic timeline analysis. The Timesketch user interface allows a user to upload CSV, JSONL and Plaso storage files. This blogpost focuses on Timesketch's CSV import capability. CSV is a common denominator format for exchanging data between analytic tools. Timesketch's current CSV import feature requires the file to have a set of columns representing a message, a timestamp and the timestamp's description, and these fields need to be named in a certain way, otherwise the Timesketch server will discard them. For this reason, users usually have to pre-process their CSVs between export from a tool and import into Timesketch to satisfy this constraint. This blogpost describes a new solution to overcome this limitation, i.e. uploading a CSV without one or more of the required headers. This new functionality is called header mapping because it allows the user to map one or more missing required headers…
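
For readers unfamiliar with the current constraint, here is a minimal sketch of a CSV that should import without any remapping; the column names (message, datetime, timestamp_desc) are as I recall them from the Timesketch documentation, so double-check them against your server version:

```bash
# Hypothetical sample file; column names assumed from the Timesketch docs.
cat > events.csv <<'EOF'
message,datetime,timestamp_desc
"Suspicious login from 10.0.0.5",2022-10-12T08:15:00+00:00,Logon Time
"Outbound connection to 203.0.113.7",2022-10-12T08:16:30+00:00,Connection Time
EOF
```

Header mapping removes the need to rename columns like these by hand before upload.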

Generate your own hash sets with HashR

The HashR team is pleased to announce the first public release of HashR. HashR extracts files from a source and uploads hashes, metadata and the actual content of the files to a given data sink. By doing so, it allows you to generate your own hash sets by extracting and hashing the actual files from complex data sources like physical or cloud disk images, which you can then use during Blue Team operations.

There are many hash set providers out there (e.g. NSRL), but shared hash sets have some limitations:
- They come in different formats, contain different metadata and use different hashing algorithms, so they are difficult to use consistently.
- They are updated infrequently, e.g. every couple of months.
- By definition they don't provide the actual content of the file, complicating further investigation once a matching hash is identified.
- The hashed files come only from public sources; if your organisation is using custom base OS images, you won't find their hashes in those sets.

HashR has…
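
To make the idea of generating your own hash set concrete, here is a minimal sketch that is not HashR itself, only a rough stand-in using standard tools against a mounted copy of a base image:

```bash
# Minimal illustration only -- not HashR. HashR additionally extracts files
# from complex sources (physical or cloud disk images) and stores metadata
# and file content in a data sink.
# /mnt/base_image is a hypothetical mount point of a golden OS image.
find /mnt/base_image -type f -print0 | xargs -0 sha256sum > base_image_hashes.txt
```
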

Plaso 20220724 released

The Plaso team is pleased to announce a new Plaso release, 20220724. This release has a mixture of new features and under-the-hood improvements.

Notable changes

- An iOS application privacy report log JSON-L parser plugin and SQLite parser plugins for iOS Screen Time and iOS net usage were added, with thanks to @rick-slin.
- A Microsoft Office 365 audit log parser was added, with thanks to @Zawadidone.
- A parser for Atlassian Confluence access logs was added (#4123).
- Local time support was added to the dpkg parser, with thanks to @plague006.
- The mactime parser has been renamed to bodyfile parser (#4131).
- A tagging rule for Windows EventLog Cleared was added, with thanks to @pyllyukko.
- Initial support for WEVT_TEMPLATE resources was added for Windows EventLog resource extraction (#163).
- The log2timeline.py `--no_vss` option has been deprecated in favour of `--vss_stores=none` (#4150); see the example after this list.
- The elasticsearch and elasticsearch_ts output modules have been removed in fa…
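
A small, hedged example of the VSS option change mentioned above (the `--storage-file` flag is standard log2timeline.py usage; confirm with `log2timeline.py --help`):

```bash
# Previously: log2timeline.py --no_vss --storage-file timeline.plaso disk.raw
# With this release, skip Volume Shadow Snapshot processing with:
log2timeline.py --vss_stores=none --storage-file timeline.plaso disk.raw
```
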

Investigating a GKE Container

As containerized applications have gained popularity and become the most common method for deploying applications in the cloud, digital forensics analysts need to be familiar with the process and tools to examine these containers. A previous blog post, Container Forensics with Docker Explorer, dives into the details of analyzing a compromised container with the open source tool Docker Explorer. This article focuses on the analysis of Google Kubernetes Engine (GKE) containers running containerd, and the process and open source tools that aid in forensic analysis of a potentially compromised container. The research work for this article was conducted using a GKE cluster. In this article, the commands starting with gcloud are Google Cloud Platform (GCP) specific commands, and kubectl commands are Kubernetes commands that are the same for all cloud providers.

In our investigation, we received signals indicating scanning activity towards the web server running on a GKE cluster. All we know at this…
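
As a starting point for that kind of investigation, a minimal sketch of the first commands one might run (placeholder names such as CLUSTER_NAME, ZONE and PROJECT_ID are hypothetical):

```bash
# GCP-specific: fetch kubectl credentials for the cluster under investigation.
gcloud container clusters get-credentials CLUSTER_NAME --zone ZONE --project PROJECT_ID

# Provider-agnostic: identify the nodes and the pod backing the scanned web server.
kubectl get nodes -o wide
kubectl get pods --all-namespaces -o wide
```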