In today’s world of cyber security, an ever-growing number of new vulnerabilities (CVEs) are uncovered and published every day.

As skills and tooling improve, we have collectively become better at discovering vulnerabilities in all kinds of technology. This applies to white hats and black hats alike (i.e. security professionals and malicious hackers). To prevent hackers from abusing vulnerabilities in an organization’s software, it makes sense to find those vulnerabilities first and mitigate them.

The process of reverse engineering a binary application does exactly this. Nordic Resilience recommends that an organization commission a reverse engineering audit of a binary application if either of the following is true:

  1. The organization has developed a binary application itself, and that application resides on one or more hosts in its DMZ.
  2. The organization uses a vendor’s binary application that has not undergone a reverse engineering security audit.

If Nordic Resilience audits a vendor’s binary application, we follow the responsible disclosure model. This model dictates that we (Nordic Resilience) contact the vendor and share all technical details regarding any vulnerabilities we discover. The vendor is then typically given 90 days to patch or otherwise remedy the vulnerabilities before details are publicly released.

The methodology for conducting a reverse engineering security audit depends on the binary application itself:

What language is it written in?
How complex is the binary application?
Is the source code available, and are debugging symbols included in the compiled binary?

Generally speaking, the process of reverse engineering consists of three phases:

  1. Fuzzing the binary application with input in order to detect uncaught exceptions and crashes.
  2. Manually examining the crashes in order to determine whether they can be leveraged to create an exploit.
  3. Manually creating an exploit that compromises the confidentiality, integrity and/or availability of the application or the system it runs on.

If source code is also provided for the binary application, additional security audits can be conducted, such as a (brief) review of the source code to determine whether it contains coding practices that are poor from a security point of view. Examples of such checks include:

Are buffers containing cryptographic secrets zeroed after their use?
Are the return values from each function (such as malloc) religiously checked for errors?
Is pointer arithmetic excessively common in the code?

At the end of the reverse engineering audit, the organization will receive a single commercial-grade report containing a non-technical section for the C-suite members of the organization, as well as a technical section providing in-depth details regarding the vulnerabilities that were observed. Lastly, all vulnerabilities will be manually scored with a risk assessment (CVSS or Low/Medium/High/Critical) to help the organization prioritize the remediation of each one.

The reverse engineering of a single binary usually takes around eight days, but this depends on the complexity of the binary application.