Pearls and pitfalls of timeline analysis

Background

Timeline analysis is a commonly used method in digital forensics. A timeline is typically a chronological representation of events found in digital media, such as a log file or a file system. An event in the context of digital forensics refers to an action or occurrence recognized by software [1]. Events can be generated or triggered by the system, by the user, or in other ways. Note that events in a timeline do not need to be limited to the digital space.


Timeline analysis can be a very valuable technique, and therefore many sources talk about how to create timelines. Unfortunately, these sources rarely cover the caveats of the analysis technique.


In “Let's talk about time” [2] Alexander went into various challenges of time in digital forensics and incident response and provided some recommendations on how to account for them. This article will first go into depth on some of the challenges specific to timeline analysis and then look at some strategies to overcome them.


What is my timeline telling me?

A question rarely answered in sources about timeline analysis is: what does my timeline actually represent?


“Think of a timeline as if it were the outline to a story. The milestones you include should help tie together the narrative.” [3]


In digital forensics, the timelines created by most tools mostly capture low-level, system-type events, such as the creation of a file in a file system or a log line being written. For example, the SleuthKit tool mactime creates timelines of file activity [4], and Plaso log2timeline creates a timeline of various different types of system events [5]; both are different types of low-level timelines.


The task of an analyst is to determine the relevant events (milestones) and how these make up the narrative of the story relevant for the case. Several aspects of system event timelines make it challenging to piece together a story, such as:


  1. Date and time values;

  2. Information underflow;

  3. Information overflow;

  4. Interpreting timeline entries;

  5. Relationships between timeline entries.


1. Perils of date and time values

In digital forensics there are many aspects an analyst needs to think about when dealing with date and time values:


  1. Universal time (UTC) versus local time

  2. Time zones and daylight savings

  3. Calendars

  4. Different types of ambiguous representations

  5. Accuracy and precision

  6. Date and time values that have a special meaning

  7. Clock drifts and shifts

  8. Date and time manipulation

A. Universal time (UTC) versus local time

Coordinated Universal Time (UTC) [6] is a time standard [7] used by most modern digital systems to regulate their clocks and time. UTC, the successor of Greenwich Mean Time (GMT), does not have a time zone offset and is not adjusted for daylight saving time.


Local time refers to a digital system using a local time zone, and sometimes daylight saving time, to regulate its clocks and time; for example Europe/Amsterdam, which uses Central European Time (CET) and, during daylight saving time, Central European Summer Time (CEST).


Representing date and time in UTC allows us to easily compare events from systems in different time zones.
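To illustrate, here is a minimal Python sketch, using the standard library zoneinfo module (Python 3.9 and later), that converts a local Europe/Amsterdam time to UTC for comparison:

from datetime import datetime, timezone
from zoneinfo import ZoneInfo

# A local time observed on a system configured for Europe/Amsterdam.
local_time = datetime(2021, 4, 11, 7, 15, 32, tzinfo=ZoneInfo("Europe/Amsterdam"))

# Convert to UTC so events from systems in different time zones can be compared.
utc_time = local_time.astimezone(timezone.utc)

print(local_time.isoformat())  # 2021-04-11T07:15:32+02:00
print(utc_time.isoformat())    # 2021-04-11T05:15:32+00:00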

B. Time zones and daylight savings

A time zone is a single time standard typically bound to a geographical area [8]. A time zone typically has an offset in hours and minutes from UTC, such as Central European Time (CET), which has an offset of 1 hour (60 minutes).


Daylight saving time is the practice of adjusting clocks during months with extended daylight so that darkness falls at a later clock time [9].


Time zone and daylight saving agreements change over the course of history. To prevent ambiguity, it is good practice to represent time zone and daylight saving offsets in actual hours and minutes, such as “2021-04-11T07:15:32+02:00”, instead of “2021-04-11 07:15:32 (CEST)”.
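A short sketch of why an explicit offset is preferred: the same zone name maps to different UTC offsets depending on the date, so a named zone such as “CEST” leaves room for interpretation that “+02:00” does not.

from datetime import datetime
from zoneinfo import ZoneInfo

tz = ZoneInfo("Europe/Amsterdam")

# The same zone, before and during daylight saving time.
print(datetime(2021, 1, 11, 7, 15, 32, tzinfo=tz).isoformat())  # 2021-01-11T07:15:32+01:00
print(datetime(2021, 4, 11, 7, 15, 32, tzinfo=tz).isoformat())  # 2021-04-11T07:15:32+02:00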

C. Calendars

A calendar is a system of organizing days [10], for example the Gregorian [11] or Julian [12] calendars.


Most digital systems use the Gregorian calendar; however, other calendars are known to be used, such as Julian Day [13] in the Notes Storage Facility (NSF) file format.

D. Different types of ambiguous representations

There are many different ways to represent date and time values [14]. Some of these formats are ambiguous and lead to interpretation errors when used internationally; for example, “04/01/2021” means January 4, 2021 to an inhabitant of the Netherlands, but April 1, 2021 to an inhabitant of the United States. Or consider “1618742863”, which could be the number of seconds since January 1, 1970 00:00:00 UTC, or something completely different.


Another cause of ambiguous representation is when certain information is left out, such as the year as observed in older variants of syslog [15].


To prevent misinterpretation it is good practice to represent date and time values in an unambiguous format (at least for the Gregorian calendar) such as ISO 8601 [16] or a format like "January 4, 2021".
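For example, a minimal sketch that renders “1618742863” in ISO 8601, under the assumption, which must be verified against the relevant format specification, that the value is a POSIX timestamp:

from datetime import datetime, timezone

# Assumption: 1618742863 is the number of seconds since January 1, 1970
# 00:00:00 UTC (a POSIX timestamp); verify this against the specification.
print(datetime.fromtimestamp(1618742863, tz=timezone.utc).isoformat())
# 2021-04-18T10:47:43+00:00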

E. Accuracy and precision

Accuracy and precision [17] are 2 concepts often ignored by digital forensic tools and analysts. It is common practice for tools to normalize date and time values using a granularity of seconds, as in the case of mactime, or microseconds, in the case of Plaso log2timeline.


Normalization [18] makes sense from the perspective of being able to compare different date and time values. However it also means that we might be making modifications to our input data [19], such as:

  1. Changing a low-granularity date and time value, such as a 1-day FAT access time, into a higher-granularity date and time value, such as the number of microseconds since January 1, 1970 00:00:00.000000.

  2. Changing a high-granularity date and time value, such as a 100ns FILETIME value, into a lower-granularity date and time value, such as the number of seconds since January 1, 1970 00:00:00.


Both can lead to incorrect identification of relevant events (milestones) in timeline analysis. Be warned that a lot of timeline analysis tools and formats suffer from granularity changes.

I. Changing to higher-granularity

Let’s say we have a naive FAT implementation that applies a floor function [20] to the current time, reducing it to just the date. For example, an access event occurring April 18, 2021 at 14:37:33 CEST is stored as April 18, 2021 local time in a storage format with 2-second datetime granularity [21], which makes it 2021-04-18T00:00:00+02:00.


Now what if the FAT implementation were to use a rounding function instead? This would make the stored access time 2021-04-19T00:00:00+02:00. So a difference in the behavior of the FAT implementation can mean a deviation of 1 day in how the date and time value is stored.
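A minimal sketch of the two behaviors, flooring versus rounding the same local time to a date:

from datetime import datetime, timedelta

local_time = datetime(2021, 4, 18, 14, 37, 33)

# Floor to the date, as the naive FAT implementation described above does.
floored = datetime(local_time.year, local_time.month, local_time.day)

# Round to the nearest date (up from midday), a different implementation choice.
rounded = floored + timedelta(days=1) if local_time.hour >= 12 else floored

print(floored.isoformat())  # 2021-04-18T00:00:00
print(rounded.isoformat())  # 2021-04-19T00:00:00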


Let’s use fls and mactime on an image that contains 2021-04-18T00:00:00+02:00 as the date and time value:

TZ=CEST fls -r -m / fat12.raw > bodyfile

mactime -d -y -z UTC -b bodyfile


This will represent the event in UTC as:

2021-04-17T22:00:00Z,22,.a..,r/rrwxrwxrwx,0,0,584,"/a_directory/another_file"


As you can see, we lost the information that this event came from a FAT file system and that its datetime value granularity is 1 day. If we now search this timeline for events that occurred around 14:37 on April 18, 2021, a naive search for “2021-04-18T14:37” will exclude the event.


To prevent misinterpretation, a timeline date and time search method should account not only for the datetime value and storage granularity, but also for deviations due to implementation differences.
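A minimal sketch of such a granularity-aware search, where the hypothetical could_match function treats a stored value as an interval instead of an exact point in time:

from datetime import datetime, timedelta, timezone

def could_match(event_time, granularity, range_start, range_end):
    # Treat the stored value as the interval [event_time, event_time + granularity)
    # and check whether it overlaps the search range.
    return event_time < range_end and event_time + granularity > range_start

# The FAT access time from the example above: 1-day granularity, 2021-04-17T22:00:00Z.
event_time = datetime(2021, 4, 17, 22, 0, 0, tzinfo=timezone.utc)

# Searching for events between 14:00 and 15:00 UTC on April 18, 2021.
range_start = datetime(2021, 4, 18, 14, 0, 0, tzinfo=timezone.utc)
range_end = datetime(2021, 4, 18, 15, 0, 0, tzinfo=timezone.utc)

print(could_match(event_time, timedelta(days=1), range_start, range_end))     # True
print(could_match(event_time, timedelta(seconds=1), range_start, range_end))  # False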

II. Changing to lower-granularity

An example of changing high-granularity date and time values into lower-granularity ones is provided in “Testing digital forensic data processing tools” [22] section “4. Be transparent about representation modifications it makes”. In this example the mactime tool infers that 3 distinct file activities (events) are 1, based on the lower granularity of the date and time values.


To prevent misinterpretation, event grouping methods should take at least the full datetime value granularity into account, and preferably also additional sources of information that support the hypothesis that the events make up a single group. In the previous example one such source could be the USN journal.
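A minimal sketch of grouping that respects the full stored granularity; the event tuples are hypothetical:

from collections import defaultdict

# Hypothetical parsed events: (timestamp at full stored granularity,
# granularity in seconds, event type).
events = [
    ("2012-04-04T14:42:32.6580068", 1e-7, "creation"),
    ("2012-04-04T14:42:32.6580068", 1e-7, "modification"),
    ("2012-04-04T14:42:32", 1.0, "access"),
]

# Group only events whose timestamps match at the full stored granularity;
# "14:42:32" (1-second granularity) is not the same point in time as
# "14:42:32.6580068" (100ns granularity).
groups = defaultdict(list)
for timestamp, granularity, event_type in events:
    groups[(timestamp, granularity)].append(event_type)

for (timestamp, granularity), event_types in sorted(groups.items()):
    print(timestamp, event_types)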

F. Date and time values that have a special meaning

Some date and time values can have a special meaning in a certain context, for example FILETIME {0x7fffffff, 0xffffffff} which is “kickoff time” in the context of Windows Kerberos PAC_LOGON_INFO [23].


To prevent misinterpretation it is good practice to represent such date and time values with their intended meaning.
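A minimal sketch of preserving such meanings when formatting FILETIME values; the mapping of special values is an assumption for illustration, with 0x7fffffffffffffff being the “kickoff time” example mentioned above:

from datetime import datetime, timedelta, timezone

# Assumed mapping of special 64-bit FILETIME values to their meaning in
# this context; 0 is commonly used to indicate "not set".
SPECIAL_FILETIME_VALUES = {
    0x0000000000000000: "Not set",
    0x7FFFFFFFFFFFFFFF: "Never (kickoff time)",
}

FILETIME_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def format_filetime(filetime):
    special_meaning = SPECIAL_FILETIME_VALUES.get(filetime)
    if special_meaning:
        return special_meaning
    # FILETIME counts 100-nanosecond intervals since January 1, 1601 00:00:00 UTC.
    return (FILETIME_EPOCH + timedelta(microseconds=filetime // 10)).isoformat()

print(format_filetime(0x7FFFFFFFFFFFFFFF))  # Never (kickoff time)
print(format_filetime(0))                   # Not set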

G. Clock drifts and shifts

The internal clocks of digital systems can drift [24]. NTP or equivalents can be used to keep the clocks of multiple digital systems mostly synchronized, but that means the date and time on such systems are adjusted over time.


On most modern systems the drift and corrections are relatively small and might not be sufficiently significant for the timeline analysis. However, a faulty clock can seriously mess up a timeline.

H. Date and time manipulation

Date and time values can be manipulated by software as a feature, such as the behavior observed for OLE compound file attachments in Microsoft Outlook in the SecureTemp folder, or deliberately, such as a user changing the system clock to backdate the creation of a document.


2. Information underflow

In “Testing digital forensic data processing tools” [22] section “4. Be transparent about representation modifications it makes” an example was given of a file activity timeline. This example demonstrated that the mactime tool incorrectly inferred that 3 distinct file activities were 1.


Reflecting on this from an analysis perspective, one could argue that the mactime tool, and as a result the analyst, has too little information to correctly deduce file activity. This article will refer to this concept as information underflow.


Information underflow happens when a tool or an analyst does not have sufficient information to prove or disprove a hypothesis (or answer an investigative question). Not being aware of having information underflow can lead to incorrect deductions.


Information underflow can easily happen when doing timeline analysis; common causes are:


  • Not having sufficient relevant information in the timeline to answer your investigative question. For example, you are trying to determine if a file has been tampered with, but you only have information from the $MFT and not from $UsnJrnl:$J in your timeline. Or not having the relevant information at all, for example due to log rotation.

  • Treating proximity of entries in a timeline as causality. For example, you find a command being executed just before the time of interest. You conclude the 2 are related; however, upon further inspection the command is executed on a regular basis (see the sketch after this list).

  • The order of entries in a timeline. The order of events can change due to changes to datetime value granularity, such as in the previously mentioned FAT access time example, potentially leading to conclusions like: a file was accessed before it was created and therefore must have been tampered with.
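As a sketch of the proximity-versus-causality pitfall, a naive check for whether a command executes at (roughly) fixed intervals; the is_recurring helper and its tolerance are hypothetical:

from datetime import datetime, timedelta

def is_recurring(timestamps, tolerance=timedelta(minutes=1)):
    # A command that runs at (roughly) fixed intervals, for example a scheduled
    # task, is weak evidence of causality even when it runs just before the
    # time of interest.
    if len(timestamps) < 3:
        return False
    timestamps = sorted(timestamps)
    intervals = [later - earlier for earlier, later in zip(timestamps, timestamps[1:])]
    return max(intervals) - min(intervals) <= tolerance

executions = [datetime(2021, 4, 18, hour, 0) for hour in (10, 11, 12, 13)]
print(is_recurring(executions))  # True: the command runs hourly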


3. Information overflow

At the other end of the spectrum is information overflow. It is not uncommon for Plaso to generate timelines that consist of multiple millions of events. Such “super timelines” can provide the analyst with more information than might be necessary to answer the investigative questions.


So what about always using targeted timeline collections [25]? In practice it is not always clear what information might be required in a timeline to answer the investigative question. In one of those less common cases, a timeline of approximately 20 million events contained only a few events from a USN journal (extracted from a volume snapshot) that indicated when the system was compromised. One of the contributing factors here was that the operating system was reinstalled after the compromise. This specific artifact was not part of the set used for targeted collection.


Information overflow can also happen due to other factors. The more heterogeneous the timeline (the more different types of events it contains), the broader the understanding of the analyst needs to be of what these events represent, and the easier it is to make links, including incorrect ones, between similar-looking events.


4. Interpreting timeline entries

Sources like [26] and [27] stress the importance of understanding what entries in the timeline represent and how to interpret them. A less often mentioned aspect of a timeline is how it represents grouped entries.


In the example in “Testing digital forensic data processing tools” [22] section “4. Be transparent about representation modifications it makes”, we discussed how the mactime tool’s MACB grouping results in a file activity timeline that is easy to misinterpret. However, misrepresentation does not only apply to situations where grouping is applied, but also to the reverse. For example, consider the following MRUList Windows Registry key:


Key path: HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.jpg\OpenWithList

Values: a, b, c, d, e, MRUList


The corresponding Windows Registry key has a last-written time that indicates the last time the key (content and/or metadata) was updated. The 6 Windows Registry values themselves do not have a date and time value directly associated with them. A timeline tool could represent this as:


<date time>, <description>, <source>, <key path>, <value name>, <value contents>


For example

2012-04-07T13:52:39.103000200Z,Last Written,REG,HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.jpg\OpenWithList,a,Lyskamm.jpg


2012-04-07T13:52:39.103000200Z,Last Written,REG,HKEY_CURRENT_USER\Software\Microsoft\Windows\CurrentVersion\Explorer\FileExts\.jpg\OpenWithList,b,Schreckhorn.jpg


...


This representation can be misleading, since from the timeline it looks like the individual values were last written on April 7, 2012. This makes it easier to (incorrectly) conclude that “Schreckhorn.jpg” was last used at that time, whereas in reality the full list of MRU .jpg files was updated in a single write and “Schreckhorn.jpg” was last used several days before.
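One way to reduce this risk is to emit a single timeline entry for the key write that lists all values together. A minimal sketch; the entry format mirrors the example above but is hypothetical:

# Emit one entry for the key write, listing all values, so the timeline does
# not suggest each value was individually written at the last-written time.
key_path = ("HKEY_CURRENT_USER\\Software\\Microsoft\\Windows\\CurrentVersion"
            "\\Explorer\\FileExts\\.jpg\\OpenWithList")
last_written = "2012-04-07T13:52:39.103000200Z"
values = {"a": "Lyskamm.jpg", "b": "Schreckhorn.jpg"}

entry = "{0:s},Last Written,REG,{1:s},[{2:s}]".format(
    last_written, key_path,
    "; ".join("{0:s}: {1:s}".format(name, data) for name, data in values.items()))
print(entry)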


When analyzing timelines, it is essential that you, as the analyst, understand what an entry in the timeline is representing.


Sometimes you need additional context to fully understand what an entry in the timeline is representing. For example, the behavior of updating file access times can be controlled by configuration on most modern operating systems, such as NtfsDisableLastAccessUpdate on Windows [28] or noatime on Linux [29]. Different settings can lead to different conclusions based on the same timeline.


5. Relationships between timeline entries

Plaso mainly generates source-level events [30]. For example, a process execution is represented as 2 different source-level events: one for the process start and one for the process termination.


When such source-level events are represented in an entry-per-event timeline, the analyst can observe the events around the time of interest. However, the entry-per-event representation makes it harder to observe the duration of the process execution, and therefore harder to observe (system-level) activity.
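A minimal sketch of pairing such source-level events into an interval; the event tuples and process name are hypothetical:

from datetime import datetime, timezone

# Hypothetical source-level events: (timestamp, event type, process name).
events = [
    (datetime(2021, 4, 18, 14, 30, tzinfo=timezone.utc), "start", "example.exe"),
    (datetime(2021, 4, 18, 14, 45, tzinfo=timezone.utc), "termination", "example.exe"),
]

# Pair process start and termination events so the duration of the
# (system-level) activity becomes visible.
running = {}
for timestamp, event_type, process in sorted(events):
    if event_type == "start":
        running[process] = timestamp
    elif event_type == "termination" and process in running:
        print("{0:s} ran for {1!s}".format(process, timestamp - running.pop(process)))
# example.exe ran for 0:15:00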


Overcoming some of the challenges of timeline analysis

Pointing out problems and shortcomings is easy; coming up with sustainable solutions to overcome them is hard work. Let’s look at a couple of solutions that tools like Plaso and Timesketch have been working on.


Dynamic time representation

As part of Plaso we have been working on migrating the internal Plaso timestamp to dfDateTime objects [31], with the intent to be able to handle the following aspects of date and time values within automation (see the sketch after this list):

  • A. Universal time (UTC) versus local time

  • B. Time zones and daylight savings

  • D. Different types of ambiguous representations

  • E. Accuracy and precision

  • F. Date and time values that have a special meaning
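As an illustration, a minimal sketch based on the dfdatetime Python module (assuming its PosixTime API): the object keeps track of properties such as granularity, instead of only passing around a bare integer timestamp.

from dfdatetime import posix_time

# Wrap a POSIX timestamp in a dfDateTime object instead of a bare integer;
# the object knows its own granularity and properties.
date_time = posix_time.PosixTime(timestamp=1618742863)
print(date_time.CopyToDateTimeString())  # 2021-04-18 10:47:43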


As part of Plaso release 20210606 [32], a first iteration of dynamic time representation was introduced. The psort.py “--dynamic-time” option allows you to represent date and time values closer to their original granularity in the output.


Output of a FAT file system timeline without the dynamic time option:

0000-00-00T00:00:00.000000+00:00,Content Modification Time,FILE,File stat,TSK:/$FAT1,filestat,TSK:/$FAT1,-

...

2003-08-21T00:00:00.000000+00:00,Last Access Time,FILE,File stat,TSK:/file1.dat Type: file,filestat,TSK:/file1.dat,-

2003-08-21T01:20:38.000000+00:00,Content Modification Time,FILE,File stat,TSK:/file1.dat Type: file,filestat,TSK:/file1.dat,-

2003-08-21T01:20:38.000000+00:00,Creation Time,FILE,File stat,TSK:/file1.dat Type: file,filestat,TSK:/file1.dat,-


Output of a FAT file system timeline with the dynamic time option:

1970-01-01T00:00:00+00:00,Content Modification Time,FILE,File stat,TSK:/$FAT1,filestat,TSK:/$FAT1,-

...

2003-08-21T00:00:00+00:00,Last Access Time,FILE,File stat,TSK:/file1.dat Type: file,filestat,TSK:/file1.dat,-

2003-08-21T01:20:38+00:00,Content Modification Time,FILE,File stat,TSK:/file1.dat Type: file,filestat,TSK:/file1.dat,-

2003-08-21T01:20:38+00:00,Creation Time,FILE,File stat,TSK:/file1.dat Type: file,filestat,TSK:/file1.dat,-


If you recall the FAT access date and time example mentioned previously, there is more we could do here, such as representing the access time as “2003-08-21+00:00”. Another thing we could do is represent “1970-01-01T00:00:00+00:00” as “Not set”, or exclude it from the timeline. Expect more dynamic time related changes in future releases.


Using dfDateTime objects also allows Plaso to programmatically preserve the datetime value granularity, and thereby accuracy and precision, so that tools like Timesketch can take granularity into account when searching for an event that occurred in a specific time range.


Overcoming information under- and overflow with automation

Automating part of your analysis can help reduce error-prone and repetitive tasks that are a result of having too much non-relevant information in the timeline (information overflow), or help identify that data might be missing from your timeline (information underflow).


A very basic example of such automation would be to label events (timeline entries) of interest. Both Plaso analysis plugins [33, 34] and Timesketch analyzers [35, 36] have means to automate such tasks (see the sketch after this list). Such automation has been used to:

  • Enrich events with contextual information for example what additional information you have on a specific IP address, domain name or file hash;

  • Find anomalies, for example domain names that look very similar to that of your organization, or gaps in your logs;

  • Determine behavioral baselines, for example regular log-on and log-off times for a specific account;

  • And more.
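A minimal sketch of labeling events of interest, in the spirit of a Plaso analysis plugin or Timesketch analyzer; the event format and rule are hypothetical, not the actual plugin APIs:

# A look-alike of example.com; in practice this set would come from
# threat intelligence or be derived from your organization's domains.
LOOKALIKE_DOMAINS = {"examp1e.com"}

def label_event(event):
    # Adds labels to a timeline event (here a plain dict) based on simple rules.
    labels = []
    if event.get("domain") in LOOKALIKE_DOMAINS:
        labels.append("lookalike-domain")
    return labels

event = {"datetime": "2021-04-18T10:47:43+00:00", "domain": "examp1e.com"}
print(label_event(event))  # ['lookalike-domain']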


Analysis plugins and analyzers are useful when you repeat a certain task often. However, sometimes the case at hand calls for something unique. Here an interactive programming environment like Colab can be more beneficial [37].


Note that a current challenge with the automation of analysis is that the resulting logic is very specific to (1) the data model of the tool, (2) the data on which the analysis is run and (3) the investigative question being answered. Having a solid baseline understanding of these 3 factors is essential in interpreting the results of your automated analysis.


Another method of addressing data missing from your timeline (information underflow) is to have the (analysis) tooling guide the user to the data it needs to be able to answer investigative questions, and flag potential information underflow situations. We have been talking about this functionality since the early days of the Digital Forensics Artifacts definitions [38] in GRR, and there are many aspects to this:

  • How to represent investigative questions?

  • How to represent data sources?

  • How to represent the information in a data source that is needed to answer an investigative question?


As another step towards an automated solution, Timesketch introduced the data finder [39]. Seeing that this is still actively being worked on, expect more about this feature in a future post.


Law of the instrument

Sometimes a timeline is not the most effective data representation to answer an investigative question. Here alternate data representations, such as a file system hierarchy or a lateral movement map [40], might be better suited.


Be cautious of the law of the instrument: “I suppose it is tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." [41].


Summary

Time in digital forensics is a complex concept that is easily taken for granted; more so when analyzing timelines. This article is not intended to provide an all-encompassing list of the challenges specific to timeline analysis and strategies to overcome them. It covered some of the more common challenges and how tools like Plaso and Timesketch can help overcome some of them.


If you have additional ideas don’t hesitate to reach out via email or on the Open Source DFIR Slack community.

