While I’ve written a lot of code in my time, I don’t think I’ve ever fully appreciated how complex it can be to write secure code. We go about our lives taking for granted that our apps will just work, and hopefully the programmers used the right techniques to keep us out of trouble. Recently, I’ve started exploring buffer overflows (BOFs) as part of my Penetration Testing Professional (PTP) course by eLearnSecurity. I had heard the term “buffer overflow” and had actually seen one happen while using an application, but never from a security angle. Generally, it appeared as an app crash that was resolved by restarting the app, fixing my immediate issue and letting me carry on. But I always knew there was much more happening underneath. This article is my braindump of that deeper exploration, an attempt to reinforce this new knowledge in my own head. Hopefully it can help you, too.
Having just finished the BOF topic and having a better understanding of what actually causes them, it made me realize a few things:
- Building a good working BOF is non-trivial.
- Having a deep understanding of system architectures is a key advantage to building them.
- I’m grateful for my developer background, since it helped me understand the logic flow.
The premise of a BOF is to identify payloads that cause a memory corruption error in an application or process, because invalid input has overwritten an important memory location, such as the saved return address on the stack or a CPU register like EIP. That’s why an understanding of system architectures is so important. You’re working to overwrite registers at the machine-code level, which is basically unreadable without a good debugger like Immunity Debugger or IDA. And unless you’re an expert in assembly code (and sometimes even if you are), (Read more...)
*** This is a Security Bloggers Network syndicated blog from The Ethical Hacker Network authored by Rey Bango. Read the original post at: http://feedproxy.google.com/~r/eh-net/~3/5h5r_aDgk3E/