As SANS instructors, we are always looking to improve our skills and knowledge. I’ve been teaching SEC401 (GSEC) and SEC560 (GPEN) for some time now, and I had been looking for additional training to supplement my knowledge in the areas those classes cover. I had heard so many great things about SEC504 (GCIH) that I decided to take it. Even though it has some overlap with 401 and 560, it covers a lot of different tools and techniques, and it approaches the material more from the viewpoint of a blue team member.
One of the tools we use in this course is called SpiderFoot. Now, this is a very interesting tool, and as with many tools in the information security world, there are both commercial and open-source community versions. This tool is focused on pulling OSINT about a target domain or keyword. OSINT, or Open Source INTelligence, is the practice of discovering information about a target from publicly available sources, such as DNS records, social media, and website text, and correlating the data and metadata found against various other sources. In this way, one can build a detailed map of a target organization from very little initial seed data.
The challenge here is that sometimes it gets things wrong. Here’s a recent example. I decided to run the tool against a friend’s site to see what it could come up with. Over a period of a couple of hours, it scraped metadata to find some usernames, then compared those to various other websites where those users might also have accounts. It worked pretty well, except that one of the users had a very common name, and his username was in the format of ‘firstnamelastname.’ The tool found a user account of the same name on Wikipedia, where that user had posted some pretty ignorant and hateful things. Upon verification, this was a totally separate person who happened to have the same name and username.
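The cross-site username check described above can be sketched in a few lines. The site list and URL templates below are illustrative assumptions for this post, not SpiderFoot’s actual module configuration; a real tool would then request each URL to see whether the account exists, and, as the example shows, a human still has to verify that a matching account actually belongs to the same person.

```python
# Minimal sketch of cross-site username correlation, as described above.
# NOTE: the sites and URL templates here are illustrative assumptions,
# not SpiderFoot's real module list.

SITE_TEMPLATES = {
    "Wikipedia": "https://en.wikipedia.org/wiki/User:{username}",
    "GitHub": "https://github.com/{username}",
    "Reddit": "https://www.reddit.com/user/{username}",
}


def candidate_profiles(username: str) -> dict:
    """Build the profile URLs that would need to be checked (and then
    verified by a human) for a username found in scraped metadata."""
    return {site: tpl.format(username=username)
            for site, tpl in SITE_TEMPLATES.items()}


if __name__ == "__main__":
    # 'firstnamelastname' stands in for the common-name username from the story.
    for site, url in candidate_profiles("firstnamelastname").items():
        print(f"{site}: {url}")
```

The key point is that a hit on one of these URLs proves only that *a* user with that name exists, not that it is the same person.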
Now, this is an incredibly powerful and useful tool. However, it is critical that a competent researcher actually verify its findings. In this case, some of its findings were false positives, but the internet at large is not so thorough or careful. Imagine if this were being run by a competing political candidate, or by a corporate rival. They might pull information like this and either mistakenly believe it is true, or deliberately share it knowing that by the time anyone bothers to verify it, the damage to public reputation will already have been done.
So, what’s the solution here? Should a business try to force potentially harmful information to be scrubbed from public sources? Do they even have a right to wipe part of public internet history? The act of trying to destroy the data may itself seem even more damning, even if the information is not true. Tools such as this will continue to be produced and will only become easier to use and more powerful. It is critical that organizations (and public individuals) engage in reputation management by looking for information like this before others do. If it cannot be removed, one should at least have an answer ready for it, or even make a public note of it in advance to get ahead of the possible issue.
For more information and training on subjects like this, please refer to the SANS course catalog, as there are classes which cover these tools and techniques in detail.
*** This is a Security Bloggers Network syndicated blog from SANS Blog authored by SANS Blog. Read the original post at: http://feedproxy.google.com/~r/SANSForensics/~3/2FDBWQhuO7Q/spiderfoot-and-the-dangers-of-doxing