The recent fixes to the fmap bounds for nested (duplicate) fmaps
introduced a subtle arithmetic bug that was detected by OSS-Fuzz:
```c
scanat = m->nested_offset + *at % m->pgsz;
```
should have been:
```c
scanat = (m->nested_offset + *at) % m->pgsz;
```
Without the parentheses, `scanat` could exceed `m->pgsz`, causing a
buffer overflow (out-of-bounds read) in the subsequent `memchr()` call.
See:
- https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=40452
- https://bugs.chromium.org/p/oss-fuzz/issues/detail?id=40455
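A worked example with hypothetical values makes the precedence bug
concrete (`%` binds tighter than `+`):
```c
/* Hypothetical values: nested_offset = 0x3200, *at = 0x0e00, pgsz = 0x1000. */
scanat = m->nested_offset + *at % m->pgsz;   /* 0x3200 + (0x0e00 % 0x1000) == 0x4000 */
scanat = (m->nested_offset + *at) % m->pgsz; /* (0x3200 + 0x0e00) % 0x1000 == 0x0000 */
/* In the buggy form, scanat (0x4000) exceeds pgsz (0x1000), so a read of
 * up to pgsz - scanat bytes from the page runs past the end of the page. */
```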
This commit also tightens up some of the other bounds checks done with
the `CLI_ISCONTAINED()` macro so that the checks limit the bounds to the
nested fmap and not the original map.
In addition, I've added a `CLI_ISCONTAINED_0_TO()` macro that drops the
now-redundant lower-bound checks when the "bigger" buffer starts at
offset 0. This should silence a bunch of (benign) warnings and
medium-severity Coverity issues.
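For context, a minimal sketch of the two macros; this illustrates the
idea and is not the exact libclamav definitions:
```c
/* Sketch: is the small buffer [sb, sb + sb_size) fully inside the big
 * buffer [bb, bb + bb_size)? Written to avoid integer overflow. */
#define CLI_ISCONTAINED(bb, bb_size, sb, sb_size)      \
    ((size_t)(bb_size) > 0 && (size_t)(sb_size) > 0 && \
     (size_t)(sb_size) <= (size_t)(bb_size) &&         \
     (size_t)(sb) >= (size_t)(bb) &&                   \
     (size_t)(sb) - (size_t)(bb) <= (size_t)(bb_size) - (size_t)(sb_size))

/* When the big buffer starts at offset 0, the sb >= bb comparison is
 * trivially true (and triggers "always true" warnings), so drop it: */
#define CLI_ISCONTAINED_0_TO(bb_size, sb, sb_size)     \
    ((size_t)(bb_size) > 0 && (size_t)(sb_size) > 0 && \
     (size_t)(sb_size) <= (size_t)(bb_size) &&         \
     (size_t)(sb) <= (size_t)(bb_size) - (size_t)(sb_size))
```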
Also fixed a possible use of an uninitialized variable
(`old_hook_lsig_matches`) in `cli_magic_scan()`.
Finally, I removed an unnecessary NULL-check on `filebase` in
`fmap_dup_to_file()` that Coverity was unhappy with.
- CID 361074: fmap.c: Possible invalid dereference if status != success
  and the new map was not yet allocated.
- CID 361077: others.c: Structurally dead code revealed a bug in the
  cli_recursion_stack_get_size() function.
- CID 361080, 361078, 361083: sigtool.c: Inverted check for whether the
  engine needs to be freed, which could leak the engine structure.
- CID 361075: sigtool.c: Missed a `return -1` that should've been `goto
  done;` and would leak the new_map buffer.
- CID 361079: sigtool/vba.c: We checked whether to free new_map on
  failure only if ctx also needed to be freed, which would leak new_map
  if ctx was not allocated yet.
The previous commit broke alerting when exceeding the recursion limit:
recursion tracking is now so effective that, by limiting the final layer
of recursion to a scan of the fmap, we prevented the limit from ever
being hit.
This commit removes the restriction that files at the final recursion
layer only get an fmap scan (aka "raw scan"), so that we can actually
hit the recursion limit and alert as intended.
Also tidied up the cache_clean check so it checks the
`fmap->dont_cache_flag` at the right point (before caching) instead of
before setting the "CLEAN" verdict.
Note: The `cache_clean` variable appears to be used to record the clean
status so the `ret` variable can be re-used without losing the verdict.
This is of course only required because the verdict is stored in the
error enum. *cough*
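A rough sketch of the tidied flow; the variable and helper names here
are illustrative, not the exact ones in the scanner:
```c
/* Record the verdict before `ret` gets reused for later calls... */
if (CL_CLEAN == ret) {
    cache_clean = true;
}
/* ...do remaining work that may clobber `ret`... */

/* ...and only consult the don't-cache flag at the point of caching: */
if (cache_clean && !fmap->dont_cache_flag) {
    clean_cache_add(hash, hashed_size, ctx); /* hypothetical helper name */
}
```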
Also fixed a couple typos.
The fmap module provides a mechanism for creating a mapping into an
existing map at an offset and length. It is used when a file is found
within an uncompressed archive or when embedded files are found via
embedded file type recognition in scanraw(). This is the
"fmap_duplicate()" function. Duplicate fmaps just reference the original
fmap's 'data' or file handle/descriptor while allowing the caller to
treat it like a new map using offsets and lengths that don't account for
the original/actual file dimensions.
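A hedged usage sketch; `fmap_duplicate()` is named in this commit, but
the exact signature shown here is an assumption:
```c
/* Create a view of `original` covering an embedded file found at
 * `offset` with length `len`; no data is copied. */
cl_fmap_t *nested = fmap_duplicate(original, offset, len, "embedded-file");
if (NULL != nested) {
    /* Callers address the nested map starting at 0; the fmap layer
     * translates this internally using nested_offset. */
    const uint8_t *magic = fmap_need_off_once(nested, 0, 4);
    /* ... scan the nested map, then release the duplicate ... */
}
```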
fmaps keep track of this with `m->nested_offset` & `m->real_len`, which
admittedly have confusing names. I found incorrect uses of these in a
handful of locations. Notably:
- In cli_magic_scan_nested_fmap_type().
The force-to-disk feature would have been checking incorrect sizes and
may have written incorrect offsets for duplicate fmaps.
- In XDP parser.
- A bunch of places from the previous commit when making dupe maps.
This commit fixes those and adds lots of documentation to the fmap.h API
to try to prevent confusion in the future.
nested_offset should never be referenced outside of fmap.c/h.
The fmap_* functions for accessing or reading map data have two
implementations, mem_* or handle_*, depending on the data source.
I found issues with some of these so I made a unit test that covers each
of the functions I'm concerned about for both types of data sources and
for both original fmaps and nested/duplicate fmaps.
With the tests, I found and fixed issues in these fmap functions:
- handle_need_offstr(): must account for the nested_offset in dupe maps.
- handle_gets(): must account for nested_offset and use len & real_len
correctly.
- mem_need_offstr(): must account for nested_offset in dupe maps.
- mem_gets(): must account for nested_offset and use len & real_len
correctly.
Moved the CDBRANGE() macro out of the function definition for better
legibility.
Fixed a few warnings.
Scan recursion is the process of identifying files embedded in other
files and then scanning them, recursively.
Internally this process is more complex than it may sound because a file
may have multiple layers of types before finding a new "file".
At present we treat the recursion count in the scanning context as an
index into both our fmap list AND our container list. These two lists
are conceptually a part of the same thing and should be unified.
But what's concerning is that the "recursion level" isn't actually
incremented or decremented at the same time that we add a layer to the
fmap or container lists but instead is more touchy-feely, increasing
when we find a new "file".
To account for this shadiness, the size of the fmap and container lists
has always been a little longer than our "max scan recursion" limit so
we don't accidentally overflow the fmap or container arrays (!).
I've implemented a single recursion-stack as an array, similar to before,
which includes a pointer to each fmap at each layer, along with the size
and type. Push and pop functions add and remove layers whenever a new
fmap is added. A boolean argument when pushing indicates if the new layer
represents a new buffer or new file (descriptor). A new buffer will reset
the "nested fmap level" (described below).
This commit also provides a solution for an issue where we detect
embedded files more than once during scan recursion.
For illustration, imagine a tarball named foo.tar.gz with this structure:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| baz.exe | PE | 0 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| │ └── hello.txt | ASCII | 2 | 0 |
| └── sfx.7z | 7Z | 1 | 1 |
| └── world.txt | ASCII | 2 | 0 |
(A) If we scan for embedded files at any layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| ├── foo.tar | TAR | 1 | 0 |
| │ ├── bar.zip | ZIP | 2 | 1 |
| │ │ └── hola.txt | ASCII | 3 | 0 |
| │ ├── baz.exe | PE | 2 | 1 |
| │ │ ├── sfx.zip | ZIP | 3 | 1 |
| │ │ │ └── hello.txt | ASCII | 4 | 0 |
| │ │ └── sfx.7z | 7Z | 3 | 1 |
| │ │ └── world.txt | ASCII | 4 | 0 |
| │ ├── sfx.zip | ZIP | 2 | 1 |
| │ │ └── hello.txt | ASCII | 3 | 0 |
| │ └── sfx.7z | 7Z | 2 | 1 |
| │ └── world.txt | ASCII | 3 | 0 |
| ├── sfx.zip | ZIP | 1 | 1 |
| └── sfx.7z | 7Z | 1 | 1 |
(A) is bad because it scans content more than once.
Note that for the GZ layer, it may detect the ZIP and 7Z if the
signature hits on the compressed data, which it might, though
extracting the ZIP and 7Z will likely fail.
The reason the above doesn't happen now is that we restrict
embedded-type scans for a bunch of archive formats, GZ and TAR included.
(B) If we scan for embedded files at the foo.tar layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| ├── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 2 | 1 |
| │ └── hello.txt | ASCII | 3 | 0 |
| └── sfx.7z | 7Z | 2 | 1 |
| └── world.txt | ASCII | 3 | 0 |
(B) is almost right, and we can achieve it easily enough by only
scanning for embedded content in the current fmap when the "nested fmap
level" is 0.
The upside is that it should safely detect all embedded content, even if
it may think sfx.zip and sfx.7z are in foo.tar instead of in baz.exe.
The biggest risk I can think of affects ZIPs. SFXZIP detection
is identical to ZIP detection, which is why we don't allow SFXZIP to be
detected inside of a ZIP. If we only allow embedded type scanning at
fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP
if bar.exe was not compressed in foo.zip and if non-compressed files
extracted from ZIPs aren't extracted as new buffers:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.zip | ZIP | 0 | 0 |
| └── bar.exe | PE | 1 | 1 |
| └── sfx.zip | ZIP | 2 | 2 |
Provided that we ensure all files extracted from zips are scanned in
new buffers, option (B) should be safe.
(C) If we scan for embedded files at the baz.exe layer, we may detect:
| description | type | rec level | nested fmap level |
| ------------------------- | ----- | --------- | ----------------- |
| foo.tar.gz | GZ | 0 | 0 |
| └── foo.tar | TAR | 1 | 0 |
| ├── bar.zip | ZIP | 2 | 1 |
| │ └── hola.txt | ASCII | 3 | 0 |
| └── baz.exe | PE | 2 | 1 |
| ├── sfx.zip | ZIP | 3 | 1 |
| │ └── hello.txt | ASCII | 4 | 0 |
| └── sfx.7z | 7Z | 3 | 1 |
| └── world.txt | ASCII | 4 | 0 |
(C) is right, but it's harder to achieve. For this example we could get
it by restricting 7ZSFX and ZIPSFX detection to when we're scanning an
executable. But that may mean losing detection of archives embedded
elsewhere.
And we'd have to identify allowable container types for each possible
embedded type, which would be very difficult.
So this commit aims to solve the issue the (B)-way.
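In code, the (B)-way gate might look something like this (the names are
illustrative only, not the actual scanner code):
```c
/* Only look for embedded files when this fmap is the first layer of the
 * current buffer, i.e. the "nested fmap level" is 0; deeper nested fmaps
 * are views into the same bytes and would re-detect the same content. */
if (0 == current_layer_nested_fmap_level(ctx)) { /* hypothetical accessor */
    ret = scan_for_embedded_files(ctx);          /* hypothetical wrapper around scanraw() */
}
```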
Note that in all situations, we still have to scan with file typing
enabled to determine if we need to reassign the current file type, such
as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-
compressed. Detection of DMG and a handful of other types relies on
finding data partway through or near the end of a file before
reassigning the entire file as the new type.
Other fixes and considerations in this commit:
- The utf16 HTML parser has weak error handling, particularly with respect
to creating a nested fmap for scanning the ascii decoded file.
This commit cleans up the error handling and wraps the nested scan with
the recursion-stack push()/pop() for correct recursion tracking.
Before this commit, each container layer had a flag to indicate if the
container layer is valid.
We need something similar so that the cli_recursion_stack_get_*()
functions ignore normalized layers. Details...
Imagine an LDB signature for HTML content that specifies a ZIP
container. If the signature actually alerts on the normalized HTML and
you don't ignore normalized layers for the container check, it will
appear as though the alert is in an HTML container rather than a ZIP
container.
This commit accomplishes this with a boolean you set in the scan context
before scanning a new layer. When the new fmap is created, that flag is
used to set a similar flag on the layer, and the context flag is then
reset so that layers created afterwards don't inherit it.
The flag allows the new recursion_stack_get() function to ignore
normalized layers when iterating the stack to return a layer at a
requested index, negative or positive (illustrated in the sketch below).
Scanning extracted/normalized javascript and VBA should also use the
'layer is normalized' flag.
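An illustrative use of the index lookup; the function name comes from
this commit's description, but the exact signature is an assumption:
```c
/* Ask for the parent layer's type while skipping normalized layers, so
 * an LDB container condition matched inside normalized HTML still sees
 * the real ZIP container rather than an HTML "container": */
cli_file_t container = cli_recursion_stack_get_type(ctx, -2);
/* index -1 == current (non-normalized) layer, -2 == its parent */
```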
- This commit also fixes Heuristic.Broken.Executable alert for ELF files
to make sure that:
A) these only alert if cli_append_virus() returns CL_VIRUS (aka it
respects the FP check).
B) all broken-executable alerts for ELF only happen if the
SCAN_HEURISTIC_BROKEN option is enabled.
- This commit also cleans up the error handling in cli_magic_scan_dir().
This was needed so we could correctly apply the layer-is-normalized-flag
to all VBA macros extracted to a directory when scanning the directory.
- Also fix an issue where exceeding scan maximums wouldn't cause embedded
file detection scans to abort. Granted we don't actually want to abort
if max filesize or max recursion depth are exceeded... only if max
scansize, max files, and max scantime are exceeded.
Add an 'abort_scan' flag to the scan context to protect against
depending on correct error propagation for fatal conditions; see the
sketch below. Setting this flag in the scan context should guarantee
that a fatal condition deep in scan recursion isn't lost, which would
otherwise result in more stuff being scanned instead of aborting. This
shouldn't be necessary, but some status codes like CL_ETIMEOUT never
used to be fatal, and it's easier to do this than to verify that every
parser only returns CL_ETIMEOUT and other "fatal status codes" in fatal
conditions.
- Remove the duplicate is_tar() prototype from filetypes.c and include
  is_tar.h instead.
- Presently we create the fmap hash when creating the fmap.
  This wastes a bit of CPU if the hash is never needed.
  Now that we're creating fmaps for all embedded files discovered with
  file type recognition scans, this is a much more frequent occurrence
  and really slows things down.
  This commit fixes the issue by only creating fmap hashes as needed;
  see the sketch below. This should not only resolve the performance
  impact of creating fmaps for all embedded files, but should also
  improve performance in general.
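The lazy-hash idea in sketch form (the field and helper names are
assumptions, not the actual fmap internals):
```c
/* Compute the hash on first request instead of at fmap creation. */
const unsigned char *fmap_get_md5(fmap_t *map)
{
    if (!map->have_md5) {
        compute_fmap_md5(map); /* hypothetical: hashes the map's bytes */
        map->have_md5 = true;
    }
    return map->md5;
}
```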
- Add an allmatch check to the zip parser after the central-header meta
  match, so that we don't get multiple alerts with the same match except
  in allmatch mode. Clean up error handling in the zip parser a tiny bit.
- Fixes to ensure that the scan limits such as scansize, filesize,
recursion depth, # of embedded files, and scantime are always reported
if AlertExceedsMax (--alert-exceeds-max) is enabled.
- Fixed an issue where non-fatal alerts for exceeding scan maximums may
mask signature matches later on. I changed it so these alerts use the
"possibly unwanted" alert-type and thus only alert if no other alerts
were found or if all-match or heuristic-precedence are enabled.
- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata
when the --gen-json feature is enabled. These will show up once under
"ParseErrors" the first time a limit is exceeded. In the present
implementation, only one limits-exceeded events will be added, so as to
prevent a malicious or malformed sample from filling the JSON buffer
with millions of events and using a tonne of RAM.
Yara rule files may contain multiple signatures. If one of the
signatures fails to load because of a parse error in the yara rule
condition, the rest of the rules still load. This is fine, but it seems
that something isn't properly cleaned up, resulting in runtime crashes
when running the correctly loaded rules.
Specifically, the crash occurs because of an assert() that expects the
operation stack to be empty when it is not. A simple fix is to print an
error or debug message instead of crashing. It's not the right fix, but
it at least prevents the crash.
Resolves: https://bugzilla.clamav.net/show_bug.cgi?id=12077
Also fixed a bunch of warnings in the yara module caused by comparing
different integer types.
Adds functionality equivalent to ClamScan's --gen-json option to ClamD.
Behavior for GenerateMetadataJson is the same as with --gen-json.
If Debug is enabled, it will print out the JSON after each scan.
If LeaveTemporaryFiles is enabled, it will drop a metadata.json file
in the scan temp directory, which of course may be customized using
the TemporaryDirectory option.
To build with code signing, the macOS build must be configured with:
```sh
cmake .. \
    -G Xcode \
    -D CLAMAV_SIGN_FILE=ON \
    -D CODE_SIGN_IDENTITY="...your codesign ID..." \
    -D DEVELOPMENT_TEAM_ID="...your team ID..."
```
You can find the codesign ID using:
```sh
/usr/bin/env xcrun security find-identity -v -p codesigning
```
The team ID should also be listed in the identity description.
Also I changed the package name for APPLE to be "clamav" so it doesn't
put "ClamAV <version>" in the PKG PackageInfo like this:
com.cisco.ClamAV 0.104.0.libraries
Instead, it should just be something like:
com.cisco.clamav.libraries
Version is a separate field in that file and shouldn't be in the name.
At present the .msi installer is only installing documentation component
files and the vcredist files but fails to install clamav libraries,
programs, and dependencies.
It appears that explicitly installing the NEWS & README files under the
documentation component before calling "include(CPack)" was causing the
MSI installer to think it needed to install the documentation component
but nothing else.
This commit removes the component name, since we don't want to use
components in the Windows MSI installer anyways. This appears to resolve
the issue so that the MSI installer installs all the desired files.
When locale is UTF-8, check that signature pattern bytes are < 0x80
before using the isalpha() and toupper() functions since that can lead
to segfaults and/or unintended matches.
For example take a LDB signature with a case-insensitive subsignature
containing byte 0xb5. The uint16_t value of pattern->pattern[i] is
0x10b5 since 0xb5 is OR'd with the CLI_MATCH_NOCASE (0x1000) flag.
```
Locale: C
  isalpha((unsigned char)(0x10b5 & 0xff)): 0
  toupper((unsigned char)(0x10b5 & 0xff)): b5

Locale: en_US.UTF-8
  isalpha((unsigned char)(0x10b5 & 0xff)): 1
  toupper((unsigned char)(0x10b5 & 0xff)): 39c
```
U+00B5 is the Micro Sign (also known as Mu)
U+03BC is the Greek Small Letter Mu
U+039C is the Greek Capital Letter Mu
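The guard described above, in sketch form (illustrative, not the exact
matcher code):
```c
#include <ctype.h>
#include <stdint.h>

/* Only consult ctype functions for bytes < 0x80, so that a UTF-8 locale
 * can't misclassify bytes like 0xB5 (Micro Sign) as alphabetic. */
static uint8_t pattern_byte_toupper(uint16_t pattern_word)
{
    uint8_t b = pattern_word & 0xff; /* strip flag bits like CLI_MATCH_NOCASE */
    if (b < 0x80 && isalpha(b))
        return (uint8_t)toupper(b);
    return b;
}
```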
Zero-byte CDIFFs are sometimes issued in place of real CDIFFs to force
freshclam to download a whole CVD because using a CDIFF would be less
efficient or otherwise problematic.
There is a bug where freshclam fails to detect that a downloaded CDIFF
is empty. This bug produces an ugly warning message and may require the
user to run freshclam up to 3 times before they get over the empty-CDIFF
hump and are back to normal updates.
This commit resolves this bug by checking the size of the downloaded
CDIFF patch and returning an appropriate status code.
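The check itself is simple; a sketch with hypothetical names, including
the status code:
```c
/* After downloading the CDIFF patch: a zero-byte file is an intentional
 * signal to fall back to downloading the whole CVD. */
if (0 == cdiff_file_size) {
    status = FC_EEMPTYFILE; /* hypothetical status code */
    goto done;
}
```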
There is a bug where freshclam fails to detect that a downloaded CDIFF
is empty. In 0.103, this combined with a CDN caching issue could result
in freshclam downloading a daily.cvd but failing to update, putting it
in a sort of infinite loop. In 0.104 this issue manifests slightly
differently, requiring freshclam to run up to 3 times before you get
over the empty-CDIFF hump and are back to normal updates.
This commit updates an existing cdiff test with the zero-byte cdiff + an
out-of-date CVD to confirm the bug. The following commit will fix it.
The freshclam.dat file shouldn't be in the Docker images or else
everyone using the image will have the same UUID.
This commit deletes it after each update.
If running multiple parallel processes of "xor_testfile.py" there was a
race condition between checking for the existence of the directory and
creating it. Now this is handled as a dependency in CMake.
Remove the README and COPYING entries from the .dockerignore file.
These are now required by CPack for the build to succeed.
Also removed the autotools entries, since they no longer exist.
The libclamunrar (and libclamunrar_iface) SO versions tracked
libclamav's SO version in the old Autotools build system.
We accidentally rolled the version backwards, setting it to be similar
to UnRAR's project version. Since the official UnRAR project doesn't
have a Unix SO version that we "should" match, and to prevent the
theoretical possibility of a collision if an old and a new clamav were
installed on the same box, we should make libclamunrar's version track
libclamav's, as it did before (and before 0.104 is released with CMake
as the stable, and only, build system).
For Windows to match 0.103 installer behavior, include NEWS.md and
README.md and rename the html directory to UserManual during the
install.
Unfortunately I can't match the behavior for the main page for the
user manual. It is now called index.html instead of UserManual.html
and is inside the UserManual directory instead of at the top level.
Add the process memory scanning feature from ClamWin's ClamScan.
This commit extends that feature to make it available in ClamDScan
as well.
This adds three new options to ClamScan and ClamDScan on Windows:
* --memory
* --kill
* --unload
--allmatch and --stream are available for ClamDScan.
To reduce code duplication, this refactors clamd related code
used in both scanmem.c and proto.c into clamdcom.
Moved send_fdpass(), send_stream(), chkpath(), dconnect(), and
dsresult(); as well as some type definitions.
Special thanks to Gianluigi Tiesi for allowing us to integrate the
Windows process memory scanning feature from ClamWin into ClamAV.
Currently, ReceiveTimeout sets CURLOPT_TIMEOUT, which is an absolute
timeout on the HTTP download and not particularly useful without knowing
the size of the file or the throughput available to download it.
Change it to use CURLOPT_LOW_SPEED_TIME instead, and set the related low
speed limit (CURLOPT_LOW_SPEED_LIMIT) to 1 byte per second. This will allow
the ReceiveTimeout to abort the attempt if the download is not making
any significant progress.
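The change in libcurl terms; CURLOPT_LOW_SPEED_LIMIT and
CURLOPT_LOW_SPEED_TIME are standard libcurl options, while the
surrounding variable names are illustrative:
```c
#include <curl/curl.h>

/* Abort if average throughput stays below 1 byte/sec for
 * receive_timeout seconds, rather than imposing a hard deadline on
 * the whole download. */
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, (long)receive_timeout);
```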
Restore the documentation, default and sample options back to before
2fd28e1d09 and
f5d465a864.
This fixes #266 and avoids problems caused by the Ubuntu default
ReceiveTimeout of 30 seconds.
If ncurses or pdcurses are static libraries, they are not properly
detected.
First, the user compiling clamav needs to specify if the include path is
for NCURSES or PDCURSES, which will differentiate the two. I've updated
the INSTALL.md file to show this.
Second, the wrong variable was being used to add the include path to the
Curses::curses target, which means that clamdtop would fail to include
ncurses.h. I fixed this.
Xcode (and perhaps some other generators?) do not like targets that have
only object files. See:
https://cmake.org/cmake/help/latest/command/add_library.html#object-libraries
And: https://cmake.org/pipermail/cmake/2016-May/063479.html
This issue manifests when using `-G Xcode` on macOS: the library
dylibs are missing when linking with other binaries.
This commit removes the object libraries for libclamav, libfreshclam,
libclamunrar_iface, libclamunrar, libclammspack, and (lib)common
because they were used by static or shared libs that didn't
themselves have any added sources.
Add getter & setter for the debug flag, so it isn't referenced by unit
tests or other code that links with libclamav. This is needed because
global variables are exported symbols on Windows.
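A sketch of the accessor pair; the names and the setter's return
behavior are assumptions:
```c
/* Exported accessors so tests and tools never reference the global
 * directly; on Windows, exported global variables are awkward to link
 * against, while functions are not. */
uint8_t cli_get_debug_flag(void);
uint8_t cli_set_debug_flag(uint8_t new_value); /* returns the previous value */
```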
The Jenkinsfile renames the tarball, removing the version string suffix.
This is problematic because A) we want that suffix when we publish
release candidates and B) the tarball should extract with the same
directory name as the tarball name.
CMake/CPack is already used to build:
- TGZ source tarball
- WiX-based installer (Windows)
- ZIP install packages (Windows)
This commit adds support for building:
- macOS PKG installer
- DEB package
- RPM package
This should also enable building FreeBSD packages but, while I was able
to build all of the static dependencies using Mussels, CMake/CPack 3.20
doesn't appear to have the FreeBSD generator despite it being in the
documentation.
The package names will be in this format:
clamav-<version><suffix>.<os>.<arch>.<extension>
This includes changing the Windows .zip and .msi installer names.
E.g.:
- clamav-0.104.0-rc.macos.x86_64.pkg
- clamav-0.104.0-rc.win.win32.msi
- clamav-0.104.0-rc.win.win32.zip
- clamav-0.104.0-rc.win.x64.msi
- clamav-0.104.0-rc.linux.x86_64.deb
- clamav-0.104.0-rc.linux.x86_64.rpm
Notes about building the packages:
I've only tested this with building ClamAV using static dependencies that
I build using the clamav_deps "host-static" recipes from the "clamav"
Mussels cookbook. E.g.:
msl build clamav_deps -t host-static
Here's an example configuration to build clam in this way, installing to
/usr/local/clamav:
```sh
cmake .. \
-D CMAKE_FIND_PACKAGE_PREFER_CONFIG=TRUE \
-D CMAKE_PREFIX_PATH=$HOME/.mussels/install/host-static \
-D CMAKE_INSTALL_PREFIX="/usr/local/clamav" \
-D CMAKE_MODULE_PATH=$HOME/.mussels/install/host-static/lib/cmake \
-D CMAKE_BUILD_TYPE=RelWithDebInfo \
-D ENABLE_EXAMPLES=OFF \
-D JSONC_INCLUDE_DIR="$HOME/.mussels/install/host-static/include/json-c" \
-D JSONC_LIBRARY="$HOME/.mussels/install/host-static/lib/libjson-c.a" \
-D ENABLE_JSON_SHARED=OFF \
-D BZIP2_INCLUDE_DIR="$HOME/.mussels/install/host-static/include" \
-D BZIP2_LIBRARY_RELEASE="$HOME/.mussels/install/host-static/lib/libbz2_static.a" \
-D OPENSSL_ROOT_DIR="$HOME/.mussels/install/host-static" \
-D OPENSSL_INCLUDE_DIR="$HOME/.mussels/install/host-static/include" \
-D OPENSSL_CRYPTO_LIBRARY="$HOME/.mussels/install/host-static/lib/libcrypto.a" \
-D OPENSSL_SSL_LIBRARY="$HOME/.mussels/install/host-static/lib/libssl.a" \
-D LIBXML2_INCLUDE_DIR="$HOME/.mussels/install/host-static/include/libxml2" \
-D LIBXML2_LIBRARY="$HOME/.mussels/install/host-static/lib/libxml2.a" \
-D PCRE2_INCLUDE_DIR="$HOME/.mussels/install/host-static/include" \
-D PCRE2_LIBRARY="$HOME/.mussels/install/host-static/lib/libpcre2-8.a" \
-D CURSES_INCLUDE_DIR="$HOME/.mussels/install/host-static/include" \
-D CURSES_LIBRARY="$HOME/.mussels/install/host-static/lib/libncurses.a" \
-D ZLIB_INCLUDE_DIR="$HOME/.mussels/install/host-static/include" \
-D ZLIB_LIBRARY="$HOME/.mussels/install/host-static/lib/libz.a" \
-D LIBCHECK_INCLUDE_DIR="$HOME/.mussels/install/host-static/include" \
-D LIBCHECK_LIBRARY="$HOME/.mussels/install/host-static/lib/libcheck.a"
```
Set CPACK_PACKAGING_INSTALL_PREFIX to customize the resulting package's
install location. This can be different than the install prefix. E.g.:
```sh
-D CMAKE_INSTALL_PREFIX="/usr/local/clamav" \
-D CPACK_PACKAGING_INSTALL_PREFIX="/usr/local/clamav" \
```
Then `make` and then one of these, depending on the platform:
```sh
cpack # macOS: productbuild is default
cpack -G DEB # Debian-based
cpack -G RPM # RPM-based
```
On macOS you'll need to `pip3 install markdown` so that the NEWS.md file
can be converted to HTML and rendered in the installer.
On RPM-based systems, you'll need rpmbuild (install the rpm-build
package).
This commit also fixes an issue where the html manual (if present) was
not correctly added to the Windows (or now other) install packages.
- Fix the num-to-hex function for the Windows installer GUID.
- Fix the win32 CPack build.
- Fix the macOS CPack build.
The access-denied and excludepath tests both relied on the full path of
the test file being in the expected results. This fails if you're
working within a path that has a symlink, because clamd and clamdscan
determine real paths before scanning and end up sending back the real
path in the results, not the original path.
This fixes the tests by removing the full paths from the expected
results.
I also cleaned up some type safety warnings.
The CURL_CA_BUNDLE environment variable, used by freshclam & clamsubmit
to specify a custom path to a CA bundle, is undocumented.
Feature was added here: https://bugzilla.clamav.net/show_bug.cgi?id=12504
Resolves: https://github.com/Cisco-Talos/clamav/issues/175
Also document:
- clamd/clamscan: using LD_LIBRARY_PATH to find libclamunrar_iface.so/dylib
- sigtool: using SIGNDUSER, SIGNDPASS for auth creds when building CVD
This info also needs to be added to the online documentation.
* Changed rename() on Windows to go via w32_rename(); rename() doesn't
  work on Windows if the destination file already exists.
* Changed access() and buildcld() to support UNC paths: access() uses
  CreateFileA(), and buildcld() opens an absolute path to the temp
  directory.
Move all step-by-step instructions for installing dependencies to
docs.clamav.net.
INSTALL.md serves to direct folks to our online documentation (or
the offline copy in the release tarball), and as a reference for
all custom config options.
Add some introductory CMake material to help people new to CMake.
Add un-install instructions.
Also fix broken links in README.md.
For reference, version 0.103 started at FLEVEL 120, and we're already at
124 with v0.103.3.
Ordinarily we would reserve 10 FLEVELs for each feature release, but
we're implementing a new Long Term Support (LTS) program and will be
starting with 0.103, which means additional critical bug fixes for the
0.103 series for the next 2-3 years.
This commit pushes v0.104's FLEVEL to 140 to ensure that there will be
enough FLEVELs for future 0.103 patch versions.
docs: Fix a few typos
There are small typos in:
- libclamav/others_common.c
- libclamav/pe.c
- libclamav/unzip.c
Fixes:
- Should read `descriptor` rather than `desriptor`.
- Should read `record` rather than `reocrd`.
- Should read `overarching` rather than `overaching`.