This chapter explains further artifacts that an investigator can obtain during forensic analysis on Windows.
Event Logs
Windows event log files, as the name suggests, are special files that store significant events, such as when a user logs on to the computer, when a program encounters an error, system changes, RDP access, application-specific events and so on. Cyber investigators are always interested in event log information because it provides a wealth of historical information about access to the system. In the following Python script we are going to process both legacy and current Windows event log formats.
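Before diving into the full script, the following minimal sketch shows how the two formats can be told apart with the same libraries we use later. It assumes a log file that has already been extracted from an image to the local disk; the file name evidence.evt used here is purely hypothetical −
import pyevt    # legacy .Evt (Windows XP/2003) format
import pyevtx   # current .evtx (Vista and later) format

log_path = "evidence.evt"   # hypothetical, already-extracted log file

# Each library exposes a signature check, so the correct parser can be chosen.
if pyevt.check_file_signature(log_path):
    evt_log = pyevt.open(log_path)
    print("Legacy event log with {} records".format(evt_log.number_of_records))
elif pyevtx.check_file_signature(log_path):
    evtx_log = pyevtx.open(log_path)
    print("Current event log with {} records".format(evtx_log.number_of_records))
else:
    print("Not a recognised Windows event log")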
For this Python script, we need to install the third-party modules pytsk3, pyewf, unicodecsv, pyevt and pyevtx. We can follow the steps given below to extract information from event logs −
- First, search for all the event logs that match the input argument.
- Then, perform file signature verification.
- Now, process each event log found with the appropriate library.
- Lastly, write the output to a spreadsheet.
Python Code
Let us see how to use Python code for this purpose −
First, import the following Python libraries −
from __future__ import print_function
import argparse
import unicodecsv as csv
import os
import pytsk3
import pyewf
import pyevt
import pyevtx
import sys
from utility.pytskutil import TSKUtil
Now, provide the arguments for the command-line handler. Note that it accepts three arguments – first is the path to the evidence file, second is the type of evidence file and third is the name of the event log to process.
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Information from Event Logs')
    parser.add_argument("EVIDENCE_FILE", help="Evidence file path")
    parser.add_argument("TYPE", help="Type of Evidence", choices=("raw", "ewf"))
    parser.add_argument("LOG_NAME",
                        help="Event Log Name (SecEvent.Evt, SysEvent.Evt, etc.)")
    parser.add_argument("-d", help="Event log directory to scan",
                        default="/WINDOWS/SYSTEM32/WINEVT")
    parser.add_argument("-f", help="Enable fuzzy search for either evt or evtx extension",
                        action="store_true")
    args = parser.parse_args()

    if os.path.exists(args.EVIDENCE_FILE) and os.path.isfile(args.EVIDENCE_FILE):
        main(args.EVIDENCE_FILE, args.TYPE, args.LOG_NAME, args.d, args.f)
    else:
        print("[-] Supplied input file {} does not exist or is not a "
              "file".format(args.EVIDENCE_FILE))
        sys.exit(1)
Now, interact with the event logs to query the existence of the user-supplied path by creating our TSKUtil object. It can be done with the help of the main() method as follows −
def main(evidence, image_type, log, win_event, fuzzy):
    tsk_util = TSKUtil(evidence, image_type)
    event_dir = tsk_util.query_directory(win_event)
    if event_dir is not None:
        if fuzzy is True:
            event_log = tsk_util.recurse_files(log, path=win_event)
        else:
            event_log = tsk_util.recurse_files(log, path=win_event, logic="equal")
        if event_log is not None:
            event_data = []
            for hit in event_log:
                event_file = hit[2]
                temp_evt = write_file(event_file)
Now, define a method that writes the entire contents of an event log to the current directory, and then perform signature verification on that temporary copy −
def write_file(event_file):
    with open(event_file.info.name.name, "w") as outfile:
        outfile.write(event_file.read_random(0, event_file.info.meta.size))
    return event_file.info.name.name

                # The following block continues the loop inside main():
                if pyevt.check_file_signature(temp_evt):
                    evt_log = pyevt.open(temp_evt)
                    print("[+] Identified {} records in {}".format(
                        evt_log.number_of_records, temp_evt))
                    for i, record in enumerate(evt_log.records):
                        strings = ""
                        for s in record.strings:
                            if s is not None:
                                strings += s + "\n"
                        event_data.append([
                            i, hit[0], record.computer_name,
                            record.user_security_identifier,
                            record.creation_time, record.written_time,
                            record.event_category, record.source_name,
                            record.event_identifier, record.event_type,
                            strings, "",
                            os.path.join(win_event, hit[1].lstrip("//"))
                        ])
                elif pyevtx.check_file_signature(temp_evt):
                    evtx_log = pyevtx.open(temp_evt)
                    print("[+] Identified {} records in {}".format(
                        evtx_log.number_of_records, temp_evt))
                    for i, record in enumerate(evtx_log.records):
                        strings = ""
                        for s in record.strings:
                            if s is not None:
                                strings += s + "\n"
                        event_data.append([
                            i, hit[0], record.computer_name,
                            record.user_security_identifier, "",
                            record.written_time, record.event_level,
                            record.source_name, record.event_identifier,
                            "", strings, record.xml_string,
                            os.path.join(win_event, hit[1].lstrip("//"))
                        ])
                else:
                    print("[-] {} not a valid event log. Removing temp "
                          "file...".format(temp_evt))
                    os.remove(temp_evt)
                    continue
            write_output(event_data)
        else:
            print("[-] {} Event log not found in {} directory".format(
                log, win_event))
            sys.exit(3)
    else:
        print("[-] Win XP Event Log Directory {} not found".format(win_event))
        sys.exit(2)
Lastly, define a method for writing the output to a spreadsheet as follows −
def write_output(data):
    output_name = "parsed_event_logs.csv"
    print("[+] Writing {} to current working directory: {}".format(
        output_name, os.getcwd()))
    with open(output_name, "wb") as outfile:
        writer = csv.writer(outfile)
        writer.writerow([
            "Index", "File name", "Computer Name", "SID",
            "Event Create Date", "Event Written Date",
            "Event Category/Level", "Event Source", "Event ID",
            "Event Type", "Data", "XML Data", "File Path"
        ])
        writer.writerows(data)
Once you successfully run the above script, we will get the event log information in a spreadsheet (parsed_event_logs.csv) in the current working directory.
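As a quick follow-up, the generated report can be triaged with a few lines of Python. The sketch below simply counts records per event source in parsed_event_logs.csv; it assumes the script above has already been run in the current directory −
import csv
from collections import Counter

source_counts = Counter()
with open("parsed_event_logs.csv", "r") as report:
    reader = csv.reader(report)
    next(reader)                      # skip the header row
    for row in reader:
        source_counts[row[7]] += 1    # row[7] holds the "Event Source" column

# Print the ten most frequent event sources.
for source, count in source_counts.most_common(10):
    print("{}: {}".format(source, count))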
Internet History
Internet history is very useful for forensic analysts, as most cyber-crimes happen over the internet. Since we are discussing Windows forensics, let us see how to extract internet history from Internet Explorer, which comes with Windows by default.
In Internet Explorer, the internet history is saved in the index.dat file. Let us look at a Python script that extracts the information from index.dat files.
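As a quick preview of the parsing library, the sketch below uses pymsiecf directly on an index.dat file that has already been exported from an image; the local file name used here is only an assumption, and the attributes mirror those used in the full script below −
import pymsiecf

index_path = "index.dat"   # an index.dat file already exported from the image

# Verify the signature before trying to parse the file.
if pymsiecf.check_file_signature(index_path):
    index_dat = pymsiecf.open(index_path)
    print("Parsed {} records".format(index_dat.number_of_items))
    for item in index_dat.items:
        # URL records carry the visited location and a hit counter.
        if isinstance(item, pymsiecf.url):
            print("{} ({} hits)".format(item.location, item.number_of_hits))
else:
    print("Not a valid index.dat file")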
We can follow the steps given below to extract information from index.dat files −
- First, search for index.dat files within the system.
- Then, extract the information from those files by iterating through them.
- Now, write all this information to a CSV report.
Python Code
Let us see how to use Python code for this purpose −
First, import the following Python libraries −
from __future__ import print_function
import argparse
from datetime import datetime, timedelta
import os
import pytsk3
import pyewf
import pymsiecf
import sys
import unicodecsv as csv
from utility.pytskutil import TSKUtil
Now, provide arguments for the command-line handler. Note that it accepts two arguments – first is the path to the evidence file and second is the type of evidence file −
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Getting information from internet history')
    parser.add_argument("EVIDENCE_FILE", help="Evidence file path")
    parser.add_argument("TYPE", help="Type of Evidence", choices=("raw", "ewf"))
    parser.add_argument("-d", help="Index.dat directory to scan", default="/USERS")
    args = parser.parse_args()

    if os.path.exists(args.EVIDENCE_FILE) and os.path.isfile(args.EVIDENCE_FILE):
        main(args.EVIDENCE_FILE, args.TYPE, args.d)
    else:
        print("[-] Supplied input file {} does not exist or is not a "
              "file".format(args.EVIDENCE_FILE))
        sys.exit(1)
Now, interpret the evidence file by creating an object of TSKUtil and iterate through the file system to find index.dat files. It can be done by defining the main() function as follows −
def main(evidence, image_type, path):
    tsk_util = TSKUtil(evidence, image_type)
    index_dir = tsk_util.query_directory(path)
    if index_dir is not None:
        index_files = tsk_util.recurse_files("index.dat", path=path, logic="equal")
        if index_files is not None:
            print("[+] Identified {} potential index.dat files".format(
                len(index_files)))
            index_data = []
            for hit in index_files:
                index_file = hit[2]
                temp_index = write_file(index_file)
Now, define a function that copies the contents of an index.dat file to the current working directory so that it can later be processed by the third-party module −
def write_file(index_file):
    with open(index_file.info.name.name, "w") as outfile:
        outfile.write(index_file.read_random(0, index_file.info.meta.size))
    return index_file.info.name.name
Now, use the following code to perform signature validation with the help of pymsiecf's check_file_signature() function −
                # The following block continues the loop inside main():
                if pymsiecf.check_file_signature(temp_index):
                    index_dat = pymsiecf.open(temp_index)
                    print("[+] Identified {} records in {}".format(
                        index_dat.number_of_items, temp_index))
                    for i, record in enumerate(index_dat.items):
                        try:
                            data = record.data
                            if data is not None:
                                data = data.rstrip("\x00")
                        except AttributeError:
                            if isinstance(record, pymsiecf.redirected):
                                index_data.append([
                                    i, temp_index, "", "", "", "", "",
                                    record.location, "", "", record.offset,
                                    os.path.join(path, hit[1].lstrip("//"))])
                            elif isinstance(record, pymsiecf.leak):
                                index_data.append([
                                    i, temp_index, record.filename, "",
                                    "", "", "", "", "", "", record.offset,
                                    os.path.join(path, hit[1].lstrip("//"))])
                            continue
                        index_data.append([
                            i, temp_index, record.filename, record.type,
                            record.primary_time, record.secondary_time,
                            record.last_checked_time, record.location,
                            record.number_of_hits, data, record.offset,
                            os.path.join(path, hit[1].lstrip("//"))
                        ])
                else:
                    print("[-] {} not a valid index.dat file. Removing "
                          "temp file..".format(temp_index))
                    os.remove("index.dat")
                    continue

                os.remove("index.dat")
            write_output(index_data)
        else:
            print("[-] Index.dat files not found in {} directory".format(path))
            sys.exit(3)
    else:
        print("[-] Directory {} not found".format(path))
        sys.exit(2)
Now, define a method that writes the output to a CSV file, as shown below −
def write_output(data):
    output_name = "Internet_Indexdat_Summary_Report.csv"
    print("[+] Writing {} with {} parsed index.dat files to current "
          "working directory: {}".format(output_name, len(data), os.getcwd()))
    with open(output_name, "wb") as outfile:
        writer = csv.writer(outfile)
        writer.writerow(["Index", "File Name", "Record Name", "Record Type",
                         "Primary Date", "Secondary Date", "Last Checked Date",
                         "Location", "No. of Hits", "Record Data",
                         "Record Offset", "File Path"])
        writer.writerows(data)
After running the above script, we will get the information from the index.dat files in a CSV file.
Volume Shadow Copies
A shadow copy is a technology included in Windows for taking backup copies or snapshots of computer files, manually or automatically. It is also called the Volume Snapshot Service or Volume Shadow Copy Service (VSS).
With the help of these VSS files, forensic experts can obtain historical information about how the system changed over time and what files existed on the computer. Shadow copy technology requires the file system to be NTFS for creating and storing shadow copies.
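As a small illustration of the underlying pyvshadow library, the sketch below opens a raw image of a single NTFS volume and reports how many shadow copies it contains. This is only a sketch under assumptions: the file name ntfs_volume.raw is hypothetical, the image is assumed to begin at the NTFS volume itself, and the open() call and number_of_stores attribute follow the usual libvshadow Python bindings. The full script below instead works with a complete disk image and computes the partition offset for us −
import pyvshadow

# Hypothetical raw image of a single NTFS volume (no partition table in front).
vss_volume = pyvshadow.volume()
vss_volume.open("ntfs_volume.raw")

print("Found {} volume shadow copies".format(vss_volume.number_of_stores))
for index in range(vss_volume.number_of_stores):
    vss_store = vss_volume.get_store(index)   # one store per snapshot
    print("Opened volume shadow copy {}".format(index))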
In this section, we are going to see a Python script which helps in accessing any volume shadow copies present in the forensic image.
For this Python script, we need to install the third-party modules pytsk3, pyewf, unicodecsv, pyvshadow and vss. We can follow the steps given below to extract information from VSS files −
- First, access the volume of the raw image and identify all the NTFS partitions.
- Then, extract the information from the shadow copies by iterating through them.
- Now, at last, create a file listing of the data within the snapshots.
Python Code
Let us see how to use Python code for this purpose −
First, import the following Python libraries −
from __future__ import print_function
import argparse
from datetime import datetime, timedelta
import os
import pytsk3
import pyewf
import pyvshadow
import sys
import unicodecsv as csv
from utility import vss
from utility.pytskutil import TSKUtil
from utility import pytskutil
Now, provide arguments for the command-line handler. Here it will accept two arguments – first is the path to the evidence file and second is the output file.
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='Parsing Shadow Copies')
    parser.add_argument("EVIDENCE_FILE", help="Evidence file path")
    parser.add_argument("OUTPUT_CSV", help="Output CSV with VSS file listing")
    args = parser.parse_args()
Now, validate the input file path's existence and also separate the directory from the output file.
    directory = os.path.dirname(args.OUTPUT_CSV)
    if not os.path.exists(directory) and directory != "":
        os.makedirs(directory)

    if os.path.exists(args.EVIDENCE_FILE) and os.path.isfile(args.EVIDENCE_FILE):
        main(args.EVIDENCE_FILE, args.OUTPUT_CSV)
    else:
        print("[-] Supplied input file {} does not exist or is not a "
              "file".format(args.EVIDENCE_FILE))
        sys.exit(1)
Now, interact with the evidence file's volume by creating the TSKUtil object. It can be done with the help of the main() method as follows −
def main(evidence, output):
    tsk_util = TSKUtil(evidence, "raw")
    img_vol = tsk_util.return_vol()
    if img_vol is not None:
        for part in img_vol:
            if tsk_util.detect_ntfs(img_vol, part):
                print("Exploring NTFS Partition for VSS")
                explore_vss(evidence, part.start * img_vol.info.block_size,
                            output)
    else:
        print("[-] Must be a physical preservation to be compatible "
              "with this script")
        sys.exit(2)
Now, define a method for exploring the parsed volume shadow copies as follows −
def explore_vss(evidence, part_offset, output):
    vss_volume = pyvshadow.volume()
    vss_handle = vss.VShadowVolume(evidence, part_offset)
    vss_count = vss.GetVssStoreCount(evidence, part_offset)
    if vss_count > 0:
        vss_volume.open_file_object(vss_handle)
        vss_data = []
        for x in range(vss_count):
            print("Gathering data for VSC {} of {}".format(x, vss_count))
            vss_store = vss_volume.get_store(x)
            image = vss.VShadowImgInfo(vss_store)
            vss_data.append(pytskutil.openVSSFS(image, x))
        write_csv(vss_data, output)
Lastly, define the method for writing the result to a spreadsheet as follows −
def write_csv(data, output):
    if data == []:
        print("[-] No output results to write")
        sys.exit(3)

    print("[+] Writing output to {}".format(output))
    append = False
    if os.path.exists(output):
        append = True
    with open(output, "ab") as csvfile:
        csv_writer = csv.writer(csvfile)
        headers = ["VSS", "File", "File Ext", "File Type", "Create Date",
                   "Modify Date", "Change Date", "Size", "File Path"]
        if not append:
            csv_writer.writerow(headers)
        for result_list in data:
            csv_writer.writerows(result_list)
Once you successfully run this Python script, we will get the information residing in the volume shadow copies in a spreadsheet.
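One practical way to use that listing is to compare snapshots with each other. The following sketch, which assumes the CSV produced above was saved under the hypothetical name vss_listing.csv, collects the file paths recorded for each shadow copy and reports paths that appear in the highest-numbered snapshot but not in the lowest one −
import csv
from collections import defaultdict

paths_by_vsc = defaultdict(set)
with open("vss_listing.csv", "r") as report:   # hypothetical OUTPUT_CSV name
    reader = csv.reader(report)
    next(reader)                               # skip the header row
    for row in reader:
        # row[0] holds the VSS number, row[8] the full file path.
        paths_by_vsc[row[0]].add(row[8])

if paths_by_vsc:
    first = min(paths_by_vsc, key=int)
    last = max(paths_by_vsc, key=int)
    new_paths = paths_by_vsc[last] - paths_by_vsc[first]
    print("{} paths present in VSC {} but not in VSC {}".format(
        len(new_paths), last, first))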