It can be argued that nothing demonstrates the concept of evidence dynamics better than Internet artifacts. On a modern end-user computer system, the bulk of the user’s interaction with the system will likely be related to Internet communication of some sort. Every click of a link, every bookmark, and every search query can leave telltale traces on the user’s system.
In this presentation, Cory will discuss the use of open source tools to explore and analyze the local artifacts generated by modern web browsers and web applications.
Forensic tools, both open and closed source, are commonly deployed as dynamically linked executables in a relatively unstable environment (general-purpose operating systems – such as Windows and Linux – with automatic system updates). It has been shown (*) that in such an environment it is not possible to obtain positive assurance that tool behaviour is correct. This presentation illustrates a method for constructing a stable computer forensic tool system using open source components such as Linux and TSK. Positive validation of such a system is meaningful because the system is stable: uncontrolled state changes do not occur. The benefits of open source tools for validation are discussed. Practical examples of how tools would be validated in the US (Daubert) and New Zealand (Evidence Act 2006) are also discussed.
This talk will cover the updates and changes to The Sleuth Kit and Autopsy from the past year. It will also provide a quick introduction to the tools. For users in the audience, the talk will cover new Sleuth Kit tools and the new Autopsy interface. Autopsy 3.0 is being developed using a Java framework that will allow for a more powerful user experience and make it easier for other developers to make plug-in modules. For the developers in the audience, the talk will cover new C++ classes and interfaces and multi-threaded support. A new C++ framework will also be presented that will allow for easier end-to-end integration in the future.
Commercially available forensic analysis applications put a great deal of functionality in the hands of a wide range of forensic analysts. As the sophistication of cyber-incidents continues to increase, there is a need for innovation that can only be met through the use of open-source tools and frameworks, written and deployed by those on the front lines of the analysis and response efforts. This presentation will discuss extending the plugin-based approach used by tools such as RegRipper to provide a “scanner” capability that goes beyond AV scanners, and allows for the retention and expansion of institutional knowledge.
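The plugin approach described above can be sketched as follows. This is an illustrative toy in Python, not RegRipper's actual (Perl-based) API: each plugin encodes one piece of institutional knowledge as a small function, and the scanner runs every registered plugin over the parsed artifact data. The plugin names and the flat name-to-value data layout are assumptions made for the example.

```python
# Illustrative plugin-based scanner sketch (not RegRipper's real interface).
from typing import Callable, Dict, List

# A plugin takes parsed artifact data and returns a list of findings.
Plugin = Callable[[Dict[str, str]], List[str]]

PLUGINS: List[Plugin] = []

def plugin(func: Plugin) -> Plugin:
    """Register a scanner plugin; new knowledge is added as new plugins."""
    PLUGINS.append(func)
    return func

@plugin
def suspicious_run_key(data: Dict[str, str]) -> List[str]:
    # Flag Run-key values that launch executables from a Temp directory.
    return [f"Run key '{name}' launches from Temp: {value}"
            for name, value in data.items()
            if name.startswith("Run\\") and "\\Temp\\" in value]

@plugin
def cleartext_password(data: Dict[str, str]) -> List[str]:
    # Flag values whose name suggests a stored password.
    return [f"Possible stored password in '{name}'"
            for name in data if "password" in name.lower()]

def scan(data: Dict[str, str]) -> List[str]:
    """Run every registered plugin and collect all findings."""
    findings: List[str] = []
    for p in PLUGINS:
        findings.extend(p(data))
    return findings
```

Because each check lives in its own function, an analyst who learns a new indicator can capture it as a plugin once and every later scan benefits, which is the "retention of institutional knowledge" the talk describes.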
This presentation will describe the development of a new technique for recovering data from a file system, focusing on FAT32. It complements the existing approaches of carving and metadata examination. The idea is that by considering only FAT directory entries, that is, ignoring the file allocation tables and other data structures, it should be possible to reconstruct the layout of the files on the file system.
Using the metadata recovered from the directory entries, that is, the creation dates, last-modification dates, file lengths, and starting clusters, it should be possible to reconstruct the history of operations on the file system and therefore work out where files were laid out on the disk.
A preliminary version of the system will be presented; it uses FUSE to create a virtual file system that presents a read-only view of the recovered files.
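The starting point of this technique can be sketched in Python: decoding the metadata fields of a single 32-byte FAT short directory entry. The field offsets follow the on-disk FAT32 format; everything else about the example (the sample values, the returned dict layout) is illustrative, not part of the presented system.

```python
# Sketch: parse one 32-byte FAT32 short directory entry, recovering the
# metadata the talk relies on (dates, length, starting cluster).
import struct
from datetime import datetime

def fat_datetime(date: int, time: int) -> datetime:
    """Decode a packed FAT date/time pair (seconds have 2 s resolution)."""
    return datetime(
        1980 + (date >> 9), (date >> 5) & 0x0F, date & 0x1F,
        time >> 11, (time >> 5) & 0x3F, (time & 0x1F) * 2,
    )

def parse_dir_entry(raw: bytes) -> dict:
    """Parse a 32-byte FAT32 short directory entry into a metadata record."""
    assert len(raw) == 32
    name = raw[0:8].decode("ascii").rstrip()
    ext = raw[8:11].decode("ascii").rstrip()
    (attr,) = struct.unpack_from("<B", raw, 11)
    crt_time, crt_date = struct.unpack_from("<HH", raw, 14)
    # Bytes 20-31: cluster high word, write time/date, cluster low word, size.
    hi, wrt_time, wrt_date, lo, size = struct.unpack_from("<HHHHI", raw, 20)
    return {
        "name": f"{name}.{ext}" if ext else name,
        "attr": attr,
        "created": fat_datetime(crt_date, crt_time),
        "modified": fat_datetime(wrt_date, wrt_time),
        "start_cluster": (hi << 16) | lo,
        "size": size,
    }
```

Given the starting cluster and file length of each entry, plus timestamps to order the create/delete operations, the layout of contiguous files can be inferred without ever consulting the allocation tables.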
bulk_extractor is a high-performance carving and feature extraction tool. Instead of operating on individual files, bulk_extractor scans an entire disk image from beginning to end and extracts salient details that are of use in the typical digital forensics investigation. The tool demonstrates a new approach to computer forensics, stream-based forensics, which eschews file extraction and instead relies on parallelizable operations performed on bulk data. This tool has given us a high-performance platform that has allowed us to explore new forensic ideas such as memory carving, histogram analysis, and context-based stop lists. Although bulk_extractor was developed as a prototype, it has proved useful in actual police investigations, two of which we recount.
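The stream-based idea can be illustrated with a toy sketch; this is not bulk_extractor's implementation (which is C++ and multi-threaded), and the page size, overlap, and email regex are assumptions chosen for the example. The image is scanned page by page with no file-system interpretation, each page's features feed a histogram, and a stop list suppresses known-uninteresting features.

```python
# Toy sketch of stream-based feature extraction with a histogram and a
# stop list; not bulk_extractor's actual code.
import re
from collections import Counter

EMAIL_RE = re.compile(rb"[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}")
PAGE, OVERLAP = 4096, 64  # overlap so features spanning a page edge survive

def scan_image(image: bytes, stop_list=frozenset()) -> Counter:
    """Scan raw bulk data page by page; return a feature histogram."""
    hist: Counter = Counter()
    for off in range(0, len(image), PAGE):
        page = image[off : off + PAGE + OVERLAP]
        for m in EMAIL_RE.finditer(page):
            addr = m.group().lower()
            if addr not in stop_list:
                hist[addr] += 1
    return hist
```

Because every page is processed independently, the pages can be handed to a pool of worker threads, which is what makes the approach parallelizable; a real tool would also deduplicate features found twice in an overlap region by their absolute offset.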
This presentation will cover the Automated Network Triage and Rapid Evidence Acquisition Project. This project focuses on easy-to-use, low-cost digital forensic investigation tools that allow for the automation of each phase of the digital investigation process. The project's philosophy and goals concerning process automation in digital forensic investigations will be discussed briefly, followed by a description and demonstration of the developed tools. Comments from digital forensic investigators in developed and developing countries who have used the project's tools will then be examined. The presentation will conclude with future project development goals.
In 2010, Brazilian Federal Police experts produced 9,050 reports resulting from the analysis of approximately 4.6 PB of data related to cyber crimes. Many national police operations resulted in the seizure of several hundred computers. A typical forensic analysis with a powerful tool like The Sleuth Kit yields a few gigabytes of data to be analyzed later. This often produces data, but not always knowledge.
Analyzing and correlating this data in a timely manner requires an efficient indexing process. The objective of the ForeIndex framework is to enable such a process, allowing the analysis of correlated data from several different computers.
ForeIndex achieves efficient indexing by distributing the processing across a controlled cluster. The result is a practical way to transform the extracted data into forensic knowledge that can be used judicially, correlating information from different computers and users.
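The distributed-indexing idea can be sketched as a map/merge pattern; this is an illustrative example, not ForeIndex's code, and the document identifiers and worker partitioning are assumptions. Each cluster node builds a partial inverted index over its share of the seized evidence, and the partials are merged so that a single term lookup correlates hits across every computer.

```python
# Illustrative map/merge inverted-index sketch (not ForeIndex itself).
from collections import defaultdict

def build_partial_index(docs: dict) -> dict:
    """Map step: one worker indexes its documents {doc_id: text}."""
    index = defaultdict(set)
    for doc_id, text in docs.items():
        for term in text.lower().split():
            index[term].add(doc_id)
    return index

def merge_indexes(partials) -> dict:
    """Merge step: union the posting sets produced by every worker."""
    merged = defaultdict(set)
    for partial in partials:
        for term, postings in partial.items():
            merged[term] |= postings
    return merged
```

In the merged index, the postings for one term span evidence from different machines, which is exactly the cross-computer correlation the framework aims at; in a real deployment the map step runs on separate cluster nodes rather than in one process.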
This paradigm is quickly reaching the end of its rope in the face of huge and growing data sets; it is simply not scalable. The other major problem is that this approach does not lend itself to GUI integration, and that (as far as most users are concerned) is a crucial weakness for OSFT. Finally, building a robust, extensible, and integrated environment around unstable plaintext input/output interfaces is extremely difficult and, generally, not a sound engineering approach.
Fortunately, we do not have to reinvent very much. A careful look shows that the modern web faces many of the same technical challenges and offers a large number of robust open tools and approaches that go a long way toward solving our problems.
In this presentation, we present a metadata-centric view of the investigation and demonstrate how current tools can be easily adapted to utilize the more structured data interfaces, such as JSON, used by web applications. Once forensic tools start talking the language(s) of web tools, we can immediately leverage the fast and scalable data stores that have been developed for the web. We can also immediately take advantage of the web GUI development infrastructure to quickly put together complete investigation environments.
We will present a case study based on The Sleuth Kit and other tools to show how this transformation can be accomplished with reasonable effort. Most importantly, once the nucleus of a system is developed, it is quite easy to extend it with new capabilities. Ultimately, by embracing more scalable data stores than the ones employed by commercial vendors, OSFT can finally leapfrog them and present investigators with what they really need.
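The adaptation step described above can be sketched as a small translation layer: converting The Sleuth Kit's pipe-delimited body-file output (as produced by `fls -m`) into JSON records that web data stores and GUI frameworks consume natively. The field order below follows the TSK 3 body-file format; treat it as an assumption and verify it against your TSK version.

```python
# Sketch: translate one TSK body-file line into a JSON record.
import json

# TSK 3 body format (assumed): md5|name|inode|mode|uid|gid|size|atime|mtime|ctime|crtime
BODY_FIELDS = ["md5", "name", "inode", "mode", "uid", "gid",
               "size", "atime", "mtime", "ctime", "crtime"]

def body_line_to_json(line: str) -> str:
    """Convert a pipe-delimited body line into a JSON object string."""
    values = line.rstrip("\n").split("|")
    rec = dict(zip(BODY_FIELDS, values))
    for key in ("size", "atime", "mtime", "ctime", "crtime"):
        rec[key] = int(rec[key])  # numeric fields, so stores can index them
    return json.dumps(rec, sort_keys=True)
```

Once metadata flows as JSON, it can be bulk-loaded into a document store and queried from a browser-based front end with no bespoke parsing on the GUI side, which is the leverage the abstract argues for.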
This talk will chronicle the endeavor to build software that performs routine processing of Windows hard-drive disk images using publicly available tools, with an emphasis on open source. The goal is to create timelines, extract files, and initiate the collection of operating-system information from the registry with the push of a button. Tools exist separately to handle each of these actions, but combining the tasks will expedite analysts' responses. Additional emphasis was placed on aligning the tools on the same platform and language.
Automating tasks lets investigators deal with large evidence sets efficiently and enforce standardized procedures. In this talk, we'll explore the various ways you can use The Sleuth Kit from your favorite scripting language, including bash, Perl, Python, and, yes, Visual Basic. We'll cover several examples, including comparing $STANDARD_INFORMATION to $FILE_NAME attribute timestamps on NTFS file systems and extracting files of your choosing. No C programming required!
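The timestamp comparison mentioned above can be sketched in a few lines of Python. $FILE_NAME timestamps are rarely modified by user-mode code, so a $STANDARD_INFORMATION time that predates the corresponding $FILE_NAME time is a classic timestomping indicator. The dict layout here is an assumption for the example; in practice the values would come from a TSK tool such as istat or a scripting binding.

```python
# Sketch: flag NTFS timestamps where $STANDARD_INFORMATION predates
# $FILE_NAME, a common timestomping heuristic. Input layout is illustrative.
def timestomp_indicators(si: dict, fn: dict) -> list:
    """Return the names of timestamps where the $SI value predates $FN."""
    return [key for key in ("crtime", "mtime")
            if key in si and key in fn and si[key] < fn[key]]
```

Run over every entry in a file-system listing, a check like this turns a manual spot-check into a repeatable, standardized procedure, which is the point of scripting The Sleuth Kit in the first place.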
If you want to tell your fellow attendees about an open source project that you are involved with, about problems that you want solved, or about anything else related to open source digital forensics, we will have a lightning talk session at the end of the day. Talks are limited to 5 minutes and slots will be filled on a first come, first served basis. A signup sheet will be available at the registration desk. If you want to use a slide or two, they must be loaded onto the laptop before the session starts.