We used to get a pointer to file data without locking and for some samples
this pointer would be invalidated by the time we used it. Now, we just
store the offset for the sections that should be hashed as part of the
Authenticode hash computation and get the file data pointer right before
it's needed.
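The deferred-pointer approach can be sketched as follows: keep only (offset, length) pairs for the regions to hash, then resolve the file-data pointer right before the bytes are consumed. This is an illustrative sketch, not the actual ClamAV code; the names `hash_region` and `sum_regions` are hypothetical, and a simple byte-sum stands in for the real digest.

```c
#include <stddef.h>
#include <stdint.h>

/* Hypothetical: one region of the file that participates in the hash. */
typedef struct {
    size_t offset;
    size_t length;
} hash_region;

/* Instead of caching `base + offset` pointers early (which a re-mapped
 * file buffer would invalidate), keep only offsets and compute the
 * pointer right before the bytes are read. */
uint32_t sum_regions(const unsigned char *base,
                     const hash_region *regions, size_t nregions)
{
    uint32_t sum = 0;
    for (size_t i = 0; i < nregions; i++) {
        const unsigned char *p = base + regions[i].offset; /* late deref */
        for (size_t j = 0; j < regions[i].length; j++)
            sum += p[j];
    }
    return sum;
}
```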
A more reliable way to calculate the authenticode hash appears to
be to hash the header (minus the checksum and security table) and
then just hash everything between the end of the header and the
start of the security section.
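Concretely, the bytes that feed the hash fall into a handful of contiguous ranges: the header up to the CheckSum field, the header after CheckSum up to the security data-directory entry, the rest of the header, and then everything from the end of the header to the start of the security section. The sketch below illustrates this split; the function and struct names are hypothetical, and the offsets used in testing are made-up values rather than ones from a real binary.

```c
#include <stddef.h>

typedef struct { size_t start; size_t end; } byte_range; /* [start, end) */

/* Authenticode hashes the whole file except:
 *   - the 4-byte CheckSum field in the optional header,
 *   - the 8-byte security-table entry in the data directory,
 *   - the security (certificate) section itself at the end of the file.
 * Given those offsets, the bytes to hash fall into four ranges. */
size_t authenticode_ranges(size_t checksum_off, size_t secdir_entry_off,
                           size_t header_end, size_t security_start,
                           byte_range out[4])
{
    out[0] = (byte_range){0, checksum_off};                    /* up to CheckSum  */
    out[1] = (byte_range){checksum_off + 4, secdir_entry_off}; /* skip 4 bytes    */
    out[2] = (byte_range){secdir_entry_off + 8, header_end};   /* skip 8 bytes    */
    out[3] = (byte_range){header_end, security_start};         /* rest of file    */
    return 4;
}
```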
The diff is confusing, but basically I moved the countersignature
verification code into its own function, and in asn1_parse_mscat
we now loop through the unauthenticatedAttributes to find the
counterSignature attribute (instead of assuming it's the first
attribute in the list).
We also now do time-validation in the case where an unauthAttrs
section exists but doesn't include a counterSignature.
If no unauthenticatedAttributes sections exist, the code will now judge
validity based on whether the code signing certificate is valid at the
time of the scan.
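The lookup change amounts to something like the sketch below: scan the attribute list for the counterSignature OID rather than dereferencing the first entry, and report absence so the caller can fall back to scan-time validation. The `attribute` struct and function name are simplified stand-ins for the real parsed ASN.1 objects.

```c
#include <string.h>

/* Simplified stand-in for a parsed unauthenticated attribute. */
typedef struct {
    const char *oid;   /* dotted-decimal OID string */
    const void *value;
} attribute;

/* pkcs-9 counterSignature attribute OID. */
#define OID_COUNTERSIGNATURE "1.2.840.113549.1.9.6"

/* Return the index of the counterSignature attribute, or -1 if the
 * list doesn't contain one (in which case validity is judged against
 * the time of the scan instead). */
int find_countersignature(const attribute *attrs, int nattrs)
{
    for (int i = 0; i < nattrs; i++)
        if (strcmp(attrs[i].oid, OID_COUNTERSIGNATURE) == 0)
            return i;
    return -1;
}
```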
In my sample set of 2,000 signed binaries, there were 69 with x509
certificates included that didn't seem to comply with the spec. These
weren't in the actual certificate chain used to verify the binary,
though, and the Windows verification API had no problems with them, so
we shouldn't either. The specific errors varied:
- 54 - expected NULL following RSA OID - For some
binaries this was due to an old "DUMMY CERTIFICATE" included
for some reason.
- 8 - module has got an unsupported length (392) - Binaries from
  one company include 392-bit RSA keys for some reason
- 7 - expected [0] version container in TBSCertificate - Some
  really old certificates don't seem to include the version
  number (maybe the RFC didn't include one at the time?)
The following changes were made:
- The code to calculate the authenticode hash was not properly
accounting for the case where a PE had sections that either
overlapped with each other or overlapped with the PE header.
One common case for this is UPX-packed binaries, where the
first section with data on disk starts at offset 0x400, which
overlaps with the specified PE header by 0xC00 bytes.
- The code didn't wrap accesses to fields in the Security
DataDirectory with EC32(), so it seems likely that authenticode
parsing always encountered issues on big endian systems. I
think I fixed all of the accesses in cli_checkfp_pe, but there
might still be issues here. I'll test this further.
- We parse the authenticode data header to better ensure that it's
  PKCS7 we are trying to parse, and not one of the other types
- cli_checkfp_pe should now finish faster in the case where there
is no authenticode data and we don't want to compute the section
hashes.
- Fixed a potential memory leak in one cli_checkfp_pe failure case
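On the endianness point: EC32() converts a 32-bit little-endian on-disk field to host byte order. A portable illustration of what such a conversion must guarantee (not ClamAV's actual macro definition) is to assemble the value byte by byte, which is correct on both little- and big-endian hosts:

```c
#include <stdint.h>

/* Read a 32-bit little-endian field (as all PE header fields are
 * stored on disk) into host byte order, regardless of the host's
 * endianness.  Reading the field through a raw struct pointer
 * without conversion gives the wrong value on big-endian systems. */
uint32_t read_le32(const unsigned char *p)
{
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16)
         | ((uint32_t)p[3] << 24);
}
```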
This doesn't add support to actually verify whitelisting rules
against SHA384 signatures, but makes it so that verification
doesn't fail completely if there is a SHA384 certificate somewhere
in the signature.
In the case where nested signatures are present, we still don't parse out
the nested signatures, but now signature verification based on the
non-nested signatures can continue.
Everything should be working, but I'm having a hard time finding a binary
to test with that doesn't encounter other parsing issues (no countersignature,
extra data in the unauthenticatedAttributes section, etc.)