Hi folks,
Puzzle #5 is now closed! Thank you all for your entries. The answers and winners will be up soon. Stay tuned for Puzzle #6, which comes out next week… 🙂
Our latest forensics puzzle has a malware twist to it, and was written by Lenny Zeltser. Lenny teaches the Reverse-Engineering Malware (REM) course at the SANS Institute.
It was a morning ritual. Ms. Moneymany sipped her coffee as she quickly went through the email that arrived during the night. One of the messages caught her eye, because it was clearly spam that somehow got past the email filter. The message extolled the virtues of buying medicine on the web and contained a link to the on-line pharmacy. “Do people really fall for this stuff?” Ms. Moneymany thought. She was curious to know how the website would convince its visitors to make the purchase, so she clicked on the link.
The website was slow to load, and seemed to be broken. There was no content on the page. Disappointed, Ms. Moneymany closed the browser’s window and continued with her day.
She didn’t realize that her Windows XP computer had just been infected.
You are the forensic investigator. You possess the network capture (PCAP) file that recorded Ms. Moneymany’s interactions with the website. Your mission is to understand what probably happened to Ms. Moneymany’s system after she clicked the link. Your analysis will start with the PCAP file and will reveal a malicious executable.
Here is the network capture file for this puzzle. The MD5 hash of this PCAP file is c09a3019ada7ab17a44537b069480312. Please use the Official Submission Form to submit your answers.
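Before you start carving, it is worth confirming that your copy of the capture matches that hash. Here is a minimal Python sketch; the filename “infection.pcap” is just a placeholder for wherever you saved the download:

import hashlib

# Verify the evidence file's integrity before analysis.
# "infection.pcap" is a placeholder; substitute the path to your copy.
EXPECTED_MD5 = "c09a3019ada7ab17a44537b069480312"

def md5sum(path, chunk_size=8192):
    digest = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()

actual = md5sum("infection.pcap")
print("match" if actual == EXPECTED_MD5 else "MISMATCH: " + actual)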
1. As part of the infection process, Ms. Moneymany’s browser downloaded two Java applets. What were the names of the two .jar files that implemented these applets?
2. What was Ms. Moneymany’s username on the infected Windows system?
3. What was the starting URL of this incident? In other words, on which URL did Ms. Moneymany probably click?
4. As part of the infection, a malicious Windows executable file was downloaded onto Ms. Moneymany’s system. What was the file’s MD5 hash? Hint: it ends in “91ed”.
5. What is the name of the packer used to protect the malicious Windows executable? Hint: This is one of the most popular freely-available packers seen in “mainstream” malware. (A quick inspection sketch follows this question list.)
6. What is the MD5 hash of the unpacked version of the malicious Windows executable file?
7. The malicious executable attempts to connect to an Internet host using an IP address which is hard-coded into it (there was no DNS lookup). What is the IP address of that Internet host?
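For questions 4 through 6, a little scripting helps once you have carved the executable out of the capture. Here is a hedged starting point, not the official method: it assumes the third-party pefile module is installed and that the carved file was saved as “carved.exe” (a placeholder name). It prints the file’s MD5 and its PE section names, which often carry a packer’s telltale fingerprint.

import hashlib
import pefile  # third-party module: pip install pefile

PATH = "carved.exe"  # placeholder: the executable carved from the pcap

# MD5 of the (still packed) executable, for question 4.
with open(PATH, "rb") as f:
    print("MD5:", hashlib.md5(f.read()).hexdigest())

# Section names often hint at the packer, for question 5.
pe = pefile.PE(PATH)
for section in pe.sections:
    name = section.Name.rstrip(b"\x00").decode(errors="replace")
    print("section", name, "- raw size", section.SizeOfRawData)

Run the same MD5 calculation again on the unpacked copy to answer question 6.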
Prize: Lenovo Ideapad S10-2 netbook
Deadline is 5/13/10 (11:59:59PM UTC-11) (In other words, if it’s still 5/13/10 anywhere in the world, you can submit your entry.)
Consider using an automated tool for extracting file artifacts (web pages, executable files, etc.) embedded in the network capture file. Doing this manually tends to be slow and error-prone.
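If you would rather script the extraction yourself, here is a rough scapy-based sketch (scapy is a third-party Python library; the filename is a placeholder). It is deliberately naive (it concatenates TCP payloads in capture order, ignoring retransmissions, out-of-order segments, and gzip- or chunk-encoded HTTP bodies), but it is enough to spot which streams carried Windows executables or Java archives:

from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

# Naive per-stream reassembly: concatenate TCP payloads in capture order.
streams = defaultdict(bytes)
for pkt in rdpcap("infection.pcap"):          # placeholder filename
    if IP in pkt and TCP in pkt:
        payload = bytes(pkt[TCP].payload)
        if payload:
            key = (pkt[IP].src, pkt[TCP].sport, pkt[IP].dst, pkt[TCP].dport)
            streams[key] += payload

# Flag streams whose HTTP body starts with a well-known file signature.
for key, data in streams.items():
    body = data.split(b"\r\n\r\n", 1)[-1]     # crude: drop any HTTP headers
    if body.startswith(b"MZ"):
        print(key, "-> Windows executable (MZ)")
    elif body.startswith(b"PK"):
        print(key, "-> ZIP/JAR archive (PK)")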
Also, note that to complete a comprehensive analysis of this incident, we should examine the malicious executable that found its way onto Ms. Moneymany’s system. That task is outside the scope of this particular puzzle, but we may look at it in a later puzzle.
When answering this puzzle, remember that you will be working with real-world malicious software. Be careful not to infect yourself! Use an isolated system, which you will be able to reinstall at the end of your investigation.
Use the Official Submission form to submit your solution. All responses should be submitted as plain text. Microsoft Word documents, PDFs, etc. will not be reviewed.
When grading your solutions, we will not just look for correct answers, but will also look at the explanation of how you derived your answers. The winning solution will stand out due to its elegance, insights, and readability. In the event of a tie, the entry submitted first will receive the prize.
You are welcome to collaborate with other people and discuss ideas back and forth. You can even submit as a team (there will be only one prize). However, please do not publish the answers before the deadline, or you (and your team) will be automatically disqualified.
By submitting your answer to this puzzle, you agree to license your solution’s text according to the Creative Commons v3 “Attribution” License.
Coding is always encouraged. We love to see well-written, easy-to-use tools which automate even small sections of the analysis process. Graphical and command-line tools are all eligible. You are welcome to build upon the work of others, as long as their work has been released under a license that allows free derivative works.
Exceptional solutions may be incorporated into the SANS Network Forensics Investigative Toolkit (SNIFT kit) and/or Reverse-Engineering Malware course materials. Authors agree that their code submissions will be freely published under the GPL license, in order to further the state of network forensics knowledge. Exceptional submissions may also be used as examples and tools in the Reverse-Engineering Malware or Network Forensics course. All authors will receive full credit for their work.
If you’re interested in malware analysis, here are a few resources to help you get started:
• Building a Malware Analysis Toolkit Using Free Tools
• Using VMware for Malware Analysis
• Introduction to Malware Analysis Webcast
Lenny Zeltser holds the copyright for this puzzle. He thanks Anand Sastry, Sherri Davidoff and Slava Frid for their feedback when creating this puzzle.
Here it is, finally, the announcement of the Puzzle #4 winner, finalists, and semifinalists. Once again, a huge congratulations to everyone who sent in correct answers to what was arguably our most difficult contest yet!
And as we’re sort of beginning to expect, we were totally blown away by the quality of the analysis we received. While there were lots of correct guesses at the “X-tra Credit”, many of you found solid ways to demonstrate (with references and citations) your passive fingerprinting of the active fingerprinting tool. Nice.
I’ll be following up with commentary and emails to a few of you and answering previous posts and the like, over the next few days. In the meantime, please do check out the Finalist submissions, particularly that of our winner… (drum roll)…
Sébastien Damaye has seriously thrown down the gauntlet on this one, and deserves an uncontested First Prize. (We’ve already begun to use his tools to look at other pcaps.)
At the core of the solution to this puzzle, and so many other similar real-world puzzles, is the ability to look at stochastic data and do a sufficiently deep (and sometimes fuzzy) statistical analysis to determine what was going on. Lots of you made impressive inroads on how to shake out that analysis, but Sébastien gave us a new tool to bring things like sequence and acknowledgement number distributions into stark view. Rather than describe his efforts further myself, I’ll direct you to his own impressive write-up at aldeid.com.
Congratulations, Sébastien! Your shiny new netbook is on its way soon!
Of course there are several other submissions we want to mention (in order of submission):
As a few other folks did, Eugenio Delfa began an excellent first pass with Snort to look for malfeasance and to identify the port scanner. His new Python script looks useful as well, allowing command-line statistical inspection without all the awk’ing and sorting I typically do with tcpdump or tshark output.
Eric Kollmann starts right off with a correct identification of nmap based on its known behavior, including the predictable things it does with SYN packets, and its use of a bogus ICMP code in the OS fingerprinting tests. His development of a new exe (“nfc”), and tweaks to Satori are welcome additions to his ongoing contributions to the community.
Arvind Doraiswamy submitted a Perl script to extract and summarize flow data as well, and Adam Bray’s pkts2db.pl and scansearcher.pl are solid contributions.
Thanks again to everyone who participated, and more than that, hold on to your hats. Puzzle #5 is imminent, and looks like a lot of fun!
WINNERS:
Sébastien Damaye (wins a Lenovo Netbook)
Finalists:
Adam Bray
Arvind Doraiswamy
Eric Kollmann
Eugenio Delfa
Semifinalists:
Ahmed Adel Mohamed
Christian
Garima
Jason Kendall
Juan Garrido & Pedro Sanchez
Peter Chong
Sterling Thomas
Tom Samstag
Vikrant
Correct Answers:
Adam Bray
Ahmed Adel Mohamed
Anand Harikrishnan
Arvind Doraiswamy
Chad Stewart
Chris Steenkamp
Christian
David Clements
Eric Kollmann
Eugenio Delfa
Francisco Pecorella
Garima
Gustavo Delgado
Jason Kendall
Juan Garrido & Pedro Sanchez
Marco Castro
Masashi Fujiwara
Matt McKnew
Peter Chong
Sébastien Damaye (wins a Lenovo Netbook)
Sterling Thomas
Takuro Uetori
Tom Samstag
Vikrant
Winter Faulk
Here are the answers to Puzzle #4. Another big thanks to everyone who played. 🙂
Answer 1: 10.42.42.253
Answer 2: TCP Connect
Answer 3: 10.42.42.50, 10.42.42.56, & 10.42.42.25
Answer 4: 00:16:cb:92:6e:dc
Answer 5: 10.42.42.50
Answer 6: 135, 139
X-TRA CREDIT: The tool used was nmap. There are many ways to try to fingerprint the tool, but one fast way is to look at the TCP window sizes coming from the scanning system. In the case of nmap, some things stand out, including SYN packets with a window size of 31337. A Google search on that turns up Fyodor’s patent application. 🙂
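If you want to see that for yourself, a few lines of Python with scapy (a third-party library) will tally the window sizes on the scanner’s SYN packets. This is a minimal sketch, assuming the capture was saved under its original name; the scanner address comes from Answer 1 above:

from collections import Counter
from scapy.all import rdpcap, IP, TCP

SCANNER = "10.42.42.253"   # from Answer 1
windows = Counter()
for pkt in rdpcap("evidence04.pcap"):
    if IP in pkt and TCP in pkt and pkt[IP].src == SCANNER and pkt[TCP].flags.S:
        windows[pkt[TCP].window] += 1

for win, count in windows.most_common(10):
    print("window", win, ":", count, "SYNs")

If nmap behaved as described above, 31337 should show up prominently in that tally.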
The first scan, run with “nmap 10.42.42.1/24”, would have yielded results that looked something like this:
Starting Nmap 4.76 ( http://nmap.org ) at 2009-11-02 18:33 EST
All 1000 scanned ports on 10.42.42.25 are closed
Interesting ports on 10.42.42.50:
Not shown: 998 closed ports
PORT STATE SERVICE
135/tcp open msrpc
139/tcp open netbios-ssn
All 1000 scanned ports on 10.42.42.56 are closed
Interesting ports on 10.42.42.253:
Not shown: 999 closed ports
PORT STATE SERVICE
3128/tcp open squid-http
Nmap done: 256 IP addresses (4 hosts up) scanned in 468.46 seconds
(Though of course you couldn’t have known about 10.42.42.253, which was the scanner itself, as it would have used the loopback interface for that, and so the external packet sniffer wouldn’t have seen those bits.)
The second scan, using nmap’s “-A” option would have yielded results like this:
Starting Nmap 4.76 ( http://nmap.org ) at 2009-11-02 18:42 EST
All 1000 scanned ports on 10.42.42.25 are closed
MAC Address: 00:16:CB:92:6E:DC (Apple Computer)
Device type: phone|media device|general purpose|web proxy|specialized
Running: Apple embedded, Apple iPhone OS 1.X, Apple Mac OS X 10.2.X|10.3.X|10.4.X|10.5.X, Blue Coat SGOS 5.X, FreeBSD 4.X, VMware ESX Server 3.0.X
Too many fingerprints match this host to give specific OS details
Network Distance: 1 hop
Interesting ports on 10.42.42.50:
Not shown: 998 closed ports
PORT STATE SERVICE VERSION
135/tcp open msrpc Microsoft Windows RPC
139/tcp open netbios-ssn
MAC Address: 70:5A:B6:51:D7:B2 (Unknown)
Device type: general purpose
Running: Microsoft Windows XP
OS details: Microsoft Windows 2000 SP4, Windows XP SP2 or SP3, or Windows Server 2003
Network Distance: 1 hop
Service Info: OS: Windows
All 1000 scanned ports on 10.42.42.56 are closed
MAC Address: 00:26:22:CB:1E:79 (Unknown)
Too many fingerprints match this host to give specific OS details
Network Distance: 1 hop
Interesting ports on 10.42.42.253:
Not shown: 999 closed ports
PORT STATE SERVICE VERSION
3128/tcp open http-proxy Squid webproxy 2.7.STABLE3
Device type: general purpose
Running: Linux 2.6.X
OS details: Linux 2.6.17 – 2.6.25
Network Distance: 0 hops
OS and Service detection performed. Please report any incorrect results at http://nmap.org/submit/ .
Nmap done: 256 IP addresses (4 hosts up) scanned in 78.42 seconds
(Again, you wouldn’t have seen nmap inspect the host it was running on, but the results are included for completeness.)
At last, the long-awaited Puzzle #3 winners! Thank you all for your terrific submissions, and your patience as we tested each one carefully. Congratulations to everyone who sent in the correct answers.
As always, we were tremendously impressed by the quality of the entries. We received a wide variety of creative, original submissions, including file carving tools, network-layer tools, HTTP, XML and Plist analysis tools, graphical tools, command-line tools, and more. It was very hard to narrow down a winner, and there were several production-quality tools which will now be covered in future SANS “Network Forensics” curriculum. Please check out all the Finalist submissions!
The winner is… Matt Sabourin, for his elegant tool, “findappletv.py”. Matt’s tool is simple to use. It parses a pcap and creates a report for each potential AppleTV client, containing “Search Terms Sent by Client,” “Movie Items Viewed by Client,” “Overview of Recognized Requests,” and more. It also creates an overview report for all clients. Each of these reports can easily be included in the appendix of a professional forensics report. We could definitely envision using this in a real forensics case to quickly summarize AppleTV usage information. Congratulations, Matt! Your AppleTV is on its way.
We’d also like to call attention to several other submissions (in no particular order):
Amar Yousif created two excellent tools: applejuice and gzippedNOT. Amar’s “gzippedNOT” parses gzipped content out of HTTP responses. This tool will be AWESOME for squid proxy analysis as well. 🙂 “Applejuice” dumps out the list of search queries for each AppleTV IP address. “Applejuice” also wins the Best Name Award!
Richard Springs built two great tools: transmute.rb and scarabsieve.rb. Scarabsieve parses through any WebScarab-logged traffic, carves it all out, dumps it into a directory, and prints MD5 and SHA1 hashes for each carved file. This script alone is very useful for any WebScarab user. Richard also wrote “transmute.rb” to convert any pcap into the WebScarab log format so that scarabsieve can parse it. Wow! Nice work.
Sébastien Damaye built a tool called “pyHttpXtract.py” to extract all the files in the packet capture and list out the search requests. This tool even goes a step further and automatically creates a graphical web interface which you can scroll through to view all the files. He also submitted a companion tool, webObjects.py, which pulls AppleTV searches out of the packet capture and prints them out. Sébastien included a *fantastic* writeup which everybody should read. We were really impressed.
Franck Guénichot lived up to his reputation as network forensics hacker extraordinaire with his excellent tool, “httpdumper.” This tool displays HTTP conversations, and filters and dumps their contents (automatically decompressing gzipped content). Franck also submitted two handy tools, macfinder.rb and plist.rb. Franck’s writeup is very thorough; definitely check it out for a great walk-through of the solutions.
Tom Samstag wrote a really cool tool, httpAnalyzer, which creates a graphical web interface that lets you browse through HTTP traffic. It includes MD5 and SHA1 hashes of each file contained in the packet capture. The interface is very user-friendly! Tom’s httpAnalyzer is easily extensible, and we hope we’ll see it again in future contests. (Note: when you load the page, httpAnalyzer makes a request to jQuery.com, apparently to fetch an up-to-date copy of the jQuery JavaScript library. If you are using it for forensics work, you’ll want to block outbound traffic.) Tom also wrote a very handy tool called “trafficAnalyzer.sh,” which analyzes a pcap and reports basic info such as a packet count, MAC addresses, and IP addresses.
Lou Arminio built a Plist parser to analyze Apple plist files, as well as an HTTP analyzer called “httpparse”. On top of that, he created a great tool called pcaputil, which analyzes TCP flows, carves files out of selected flows, and creates MD5 sums. These are three handy little tools. Nice work!
Michael_Nijs built upon an open-source pcap analysis tool, read_pcap.py, adding the option to parse GET and POST requests and display the values of any parameter in the URL. We appreciated that he leveraged existing code and built a useful extension.
Alan Tu wrote a script, http_analysis.pl, which leverages tshark’s powerful HTTP dissection capability, outputs handy information to a file, and can also produce filtered pcaps. Alan also wrote an HTTP response extractor, http_rx.pl, and polished his TCP stream analysis tool, stream.pl. Check them out!
Wesley McGrew wrote an excellent tool, “atvsnarf.py,” which carves out plist files and creates a CSV file with useful information about AppleTV traffic from a pcap. The tool is very easy to use, and a great foundation for detailed forensic analysis. His writeup is outstanding, too; read about how he identified six request types from the pcap file and incorporated these into atvsnarf.py’s output.
These tools are great! Thank you all for making your work available to the community. We hope you’ll continue to maintain and extend your code.
Many thanks to everyone who participated. We hope to see you guys in future contests.
WINNERS:
Matt Sabourin
Finalists:
Amar Yousif
Richard Springs
Sébastien Damaye
Franck Guénichot
Tom Samstag
Lou Arminio
Michael_Nijs
Alan Tu
Wesley McGrew
After reviewing the submissions so far, it seems that question #2 is perhaps a little too ambiguous. We’re amending it to read:
For the FIRST port scan that Mr. X conducted, what type was it?
If you’ve already posted a submission, please re-evaluate your answer accordingly, and feel free to re-submit!
Also, we’ll be extending the deadline by two weeks to 3/18/10.
Cheers!
After much deliberation, we’ve decided to again offer a Lenovo IdeaPad S10-2 to the winner of Contest #4.
As mentioned before, this model is the same as the free netbooks Sec558 students will get in Orlando!
The MOST ELEGANT solution wins. Deadline is 03/04/10. Good luck!!
While a fugitive in Mexico, Mr. X remotely infiltrates the Arctic Nuclear Fusion Research Facility’s (ANFRF) lab subnet over the Interwebs. Virtually inside the facility (pivoting through a compromised system), he conducts some noisy network reconnaissance. Sadly, Mr. X is not yet very stealthy.
Unfortunately for Mr. X, the lab’s network is instrumented to capture all traffic (with full content). His activities are discovered and analyzed… by you!
Here is the packet capture containing Mr. X’s activity. As the network forensic investigator, your mission is to answer the following questions (a rough starting-point sketch follows the question list):
1. What was the IP address of Mr. X’s scanner?
2. For the FIRST port scan that Mr. X conducted, what type of port scan was it? (Note: the scan consisted of many thousands of packets.) Pick one:
3. What were the IP addresses of the targets Mr. X discovered?
4. What was the MAC address of the Apple system he found?
5. What was the IP address of the Windows system he found?
6. What TCP ports were open on the Windows system? (Please list the decimal numbers from lowest to highest.)
X-TRA CREDIT (You don’t have to answer this, but you get super bonus points if you do): What was the name of the tool Mr. X used to port scan? How can you tell? Can you reconstruct the output from the tool, roughly the way Mr. X would have seen it?
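As a rough starting point for questions 3, 5, and 6 (a sketch, not the official solution), the scapy snippet below lists which hosts answered the scan with SYN/ACK and on which ports. It assumes the third-party scapy library is installed and that you saved the capture under its original name from the evidence link below:

from collections import defaultdict
from scapy.all import rdpcap, IP, TCP

# Hosts that answered with SYN/ACK, and the ports they answered on.
open_ports = defaultdict(set)
for pkt in rdpcap("evidence04.pcap"):
    if IP in pkt and TCP in pkt and pkt[TCP].flags.S and pkt[TCP].flags.A:
        open_ports[pkt[IP].src].add(pkt[TCP].sport)

for host, ports in sorted(open_ports.items()):
    print(host, sorted(ports))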
Deadline is 3/18/10 (11:59:59PM UTC-11) (In other words, if it’s still 3/18/10 anywhere in the world, you can submit your entry.)
Please use the Official Submission form to submit your answers. Here is your evidence file:
http://forensicscontest.com/contest04/evidence04.pcap
MD5 (evidence04.pcap) = 804648497410b18d9a7cb1d4b2252ef7
The MOST ELEGANT solution wins. In the event of a tie, the entry submitted first will receive the prize. Coding is always encouraged. We love to see well-written, easy-to-use tools which automate even small sections of the evidence recovery. Graphical and command-line tools are all eligible. You are welcome to build upon the work of others, as long as their work has been released under an approved Open Source License. All responses should be submitted as plain text. Microsoft Word documents, PDFs, etc. will NOT be reviewed.
Feel free to collaborate with other people and discuss ideas back and forth. You can even submit as a team (there will be only one prize). However, please do not publish the answers before the deadline, or you (and your team) will be automatically disqualified. Also, please understand that the contest materials are copyrighted and that we’re offering them publicly for the community to enjoy. You are welcome to publish full solutions after the deadline, but please use proper attributions and link back. If you are interested in using the contest materials for other purposes, just ask first.
Exceptional solutions may be incorporated into the SANS Network Forensics Investigative Toolkit (SNIFT kit). Authors agree that their code submissions will be freely published under the GPL license, in order to further the state of network forensics knowledge. Exceptional submissions may also be used as examples and tools in the Network Forensics course. All authors will receive full credit for their work.
Deadline is 3/18/10 (11:59:59PM UTC-11). Here’s the Official Submission form. Good luck!!
Copyright 2010, Lake Missoula Group, LLC. All rights reserved.
Here are the answers for Puzzle #3. Big thanks to everyone who entered! 🙂
Just wanted to send a hint out for those of you who are out to win Ann’s AppleTV.
We’ve received lots of submissions with the correct answer, but to win the AppleTV, you’ll need to go a step beyond manual extraction with Wireshark or Network Miner. Imagine if you had a huge packet capture containing LOTS of AppleTV traffic. There’s no way you could extract that manually!
Can you build a tool that will automatically list each of the movies that a user previewed? Or all of the terms that Ann searched for? Carve out files transferred and their MD5sums? Even perhaps reconstruct what Ann saw on the AppleTV based on the traffic content?
To win the AppleTV, you’ll need to be creative and take things to a level beyond just manual extraction. (By the way, we suspect that the underlying traffic for the AppleTV is the same format as iTunes traffic.)
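As a nudge in that direction, here is a tiny sketch of the sort of thing we mean (assuming scapy is installed, the capture is saved as “evidence03.pcap”, and the AppleTV sends its searches as ordinary HTTP GET query strings; all of those are assumptions, not givens). It simply lists every GET request and its parsed query parameters so you can see where the interesting fields live:

from urllib.parse import urlparse, parse_qs
from scapy.all import rdpcap, IP, TCP, Raw

# List every HTTP GET request and its query parameters.
# "evidence03.pcap" is a placeholder filename.
for pkt in rdpcap("evidence03.pcap"):
    if IP in pkt and TCP in pkt and Raw in pkt:
        payload = bytes(pkt[Raw].load)
        if payload.startswith(b"GET "):
            path = payload.split(b" ", 2)[1].decode(errors="replace")
            print(pkt[IP].src, path, parse_qs(urlparse(path).query) or "")

A winning entry will of course go well beyond this, but it shows where to start digging.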
Submissions are due by the end of 2/1/10 (next Monday night). Good luck!!