mirror of https://github.com/Cisco-Talos/clamav
21 Commits (3dff2e2f5219d939a842765b92c023a4f37109c9)

89b72cb002 | Sigtool: Add --fuzzy-img option to generate image fuzzy hash | 3 years ago

Add `sigtool --fuzzy-img` option to generate image fuzzy hash. Also fix assorted warnings, mostly ensuring enough buffer space so format strings aren't truncated. For the dsig change: the returned string is allocated and is not const. The caller will have to free it.

15eef50656 | Code cleanup: Refactor to clean up formatting issues | 3 years ago

Refactored the clamscan code that determines 'what to scan' in order to clean up some very messy logic, and also to get around a difference in how vscode and clang-format handle formatting #ifdef blocks in the middle of an else/if.

In addition to refactoring, there is a slight behavior improvement. With this change, doing `clamscan blah -` will now scan `blah` and then also scan `stdin`. You can even do `clamscan - blah` to scan `stdin` and then scan `blah`. Before, the `-` had to be the only "filename" argument in order to scan from stdin.

In addition, added a bunch of extra empty lines, or changed multi-line function calls to single-line function calls, in order to get around a bug in clang-format where these two options do not play nicely together:

- AlignConsecutiveAssignments: true
- AlignAfterOpenBracket: true

AlignAfterOpenBracket does not take into account the spaces inserted by AlignConsecutiveAssignments, so you end up with stuff like this:

```c
bleeblah = 1;
blah     = function(arg1,
                arg2,
                arg3);
//              ^--- these args 4-left from where they should be.
```

VSCode, meanwhile, somehow fixes this whitespace issue, so code that is correctly formatted by VSCode doesn't have this bug, meaning that:

1. The clang-format check in GH Actions fails.
2. We'd all have to stop using format-on-save in VSCode and accept the bug if we wanted those GH Actions tests to pass.

Adding an empty line before variable assignments from multi-line function calls evades the buggy behavior. This commit should resolve the clang-format GitHub Actions test failures, for now.
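
Concretely, the workaround described above looks roughly like this; a minimal sketch reusing the placeholder names from the snippet in the commit message, not actual ClamAV code:

```c
bleeblah = 1;

/* The empty line above breaks the consecutive-assignment group, so
 * AlignConsecutiveAssignments inserts no extra padding here and the
 * wrapped arguments line up under the opening parenthesis as intended. */
blah = function(arg1,
                arg2,
                arg3);
```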

d1656ee241 | Increase default file maximums | 3 years ago

- MaxFileSize: 25M -> 100M
- MaxScanSize: 100M -> 400M
- StreamMaxLength: 25M -> 100M
- MaxEmbeddedPE: 10M -> 40M
- MaxHTMLNormalize: 10M -> 40M
- MaxHTMLNoTags: 2M -> 8M
- MaxScriptNormalize: 5M -> 20M
- PCREMaxFileSize: 25M -> 100M
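
For reference, applications that embed libclamav can set the same limits programmatically through the public `cl_engine_set_num()` API. This is not part of the commit above, just a minimal sketch (error handling trimmed, values mirror the new defaults):

```c
#include <clamav.h>

int main(void)
{
    struct cl_engine *engine;

    if (cl_init(CL_INIT_DEFAULT) != CL_SUCCESS)
        return 1;

    engine = cl_engine_new();
    if (!engine)
        return 1;

    /* Raise the scan limits to match the new defaults (sizes in bytes). */
    cl_engine_set_num(engine, CL_ENGINE_MAX_FILESIZE,         100 * 1024 * 1024);
    cl_engine_set_num(engine, CL_ENGINE_MAX_SCANSIZE,         400L * 1024 * 1024);
    cl_engine_set_num(engine, CL_ENGINE_MAX_EMBEDDEDPE,        40 * 1024 * 1024);
    cl_engine_set_num(engine, CL_ENGINE_MAX_HTMLNORMALIZE,     40 * 1024 * 1024);
    cl_engine_set_num(engine, CL_ENGINE_MAX_HTMLNOTAGS,         8 * 1024 * 1024);
    cl_engine_set_num(engine, CL_ENGINE_MAX_SCRIPTNORMALIZE,   20 * 1024 * 1024);
    cl_engine_set_num(engine, CL_ENGINE_PCRE_MAX_FILESIZE,    100 * 1024 * 1024);

    /* ... load signatures with cl_load() and compile with cl_engine_compile() ... */

    cl_engine_free(engine);
    return 0;
}
```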

09128bc435 | Logging: Fix log-level bug | 3 years ago

We must pass the loglevel from logg to mprintf. Bug recently introduced with improvement to logg() API.

a21cc6dcd7 | Add explicit log level parameter to application logging API | 3 years ago

* Added loglevel parameter to logg()
* Fix logg and mprintf internals with new loglevels
* Update all logg calls to set loglevel
* Update all mprintf calls to set loglevel
* Fix hidden logg calls
* Executed clam-format
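
To illustrate the shape of this kind of API change, here is a minimal sketch of a logger that takes an explicit level and forwards it to the lower-level printer. The type and function names are hypothetical stand-ins, not ClamAV's actual logg()/mprintf signatures:

```c
#include <stdarg.h>
#include <stdio.h>

/* Hypothetical log levels and function names, for illustration only. */
typedef enum { LOG_DEBUG, LOG_INFO, LOG_WARNING, LOG_ERROR } loglevel_t;

/* Lower-level printer: picks the output stream based on the level. */
static void my_mprintf(loglevel_t level, const char *fmt, va_list args)
{
    FILE *out = (level >= LOG_WARNING) ? stderr : stdout;
    fprintf(out, "[%d] ", (int)level);
    vfprintf(out, fmt, args);
}

/* Application-facing logger: the level is an explicit parameter and is
 * passed straight through to the printer. */
static void my_logg(loglevel_t level, const char *fmt, ...)
{
    va_list args;
    va_start(args, fmt);
    my_mprintf(level, fmt, args);
    va_end(args);
}

int main(void)
{
    my_logg(LOG_INFO, "database loaded: %d signatures\n", 42);
    my_logg(LOG_ERROR, "can't open file: %s\n", "example.txt");
    return 0;
}
```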

dda0a70d90 | cdiff: Replace cdiff-apply feature with Rust implementation | 3 years ago

Apply both .cdiff and .script CVD patches. Note: A script is a non-compressed and unsigned file containing cdiff commands. There is no header or footer that should be processed.

This Rust-based implementation of the cdiff-apply feature includes equivalent features as found in the C-based implementation:

- cdiff file signature validation against sha256 of the file contents
- Gz decoding of file contents
- File open command
- File close command
- Signature add command
- Line delete command
- Xchg command
- Move command
- Unlink command

This Rust implementation adds cdiff-apply unit tests to verify correct functionality.

140c88aa4e | Bump copyright for 2022 | 3 years ago

Includes minor format corrections.

cca5f489b0 | freshclam, clamd: Increase max config line length | 3 years ago

ClamAV's option parser, used for `freshclam.conf`, `clamd.conf`, and `clamav-milter.conf`, has a max line length of 512 characters. By request, this commit increases the line length to 1024 to accommodate very long `DatabaseMirror` options when using access tokens in the URI.

Resolves: https://github.com/Cisco-Talos/clamav/issues/281

0a24f70218 | Rename Heuristics.Email.ExceedsMax alerts | 4 years ago

Rename Heuristics.Email.ExceedsMax alerts to start with Heuristics.Limits.Exceeded.Email instead, so that all heuristic alerts for exceeded scan limits have the same prefix.

db013a2bfd | libclamav: Fix scan recursion tracking | 4 years ago

Scan recursion is the process of identifying files embedded in other files and then scanning them, recursively. Internally this process is more complex than it may sound, because a file may have multiple layers of types before finding a new "file".

At present we treat the recursion count in the scanning context as an index into both our fmap list AND our container list. These two lists are conceptually a part of the same thing and should be unified. But what's concerning is that the "recursion level" isn't actually incremented or decremented at the same time that we add a layer to the fmap or container lists, but instead is more touchy-feely, increasing when we find a new "file". To account for this shadiness, the size of the fmap and container lists has always been a little longer than our "max scan recursion" limit so we don't accidentally overflow the fmap or container arrays (!).

I've implemented a single recursion stack as an array, similar to before, which includes a pointer to each fmap at each layer, along with the size and type. Push and pop functions add and remove layers whenever a new fmap is added. A boolean argument when pushing indicates if the new layer represents a new buffer or new file (descriptor). A new buffer will reset the "nested fmap level" (described below). A minimal sketch of this kind of stack follows this commit entry.

This commit also provides a solution for an issue where we detect embedded files more than once during scan recursion. For illustration, imagine a tarball named foo.tar.gz with this structure:

| description     | type  | rec level | nested fmap level |
| --------------- | ----- | --------- | ----------------- |
| foo.tar.gz      | GZ    | 0         | 0                 |
| └── foo.tar     | TAR   | 1         | 0                 |
| ├── bar.zip     | ZIP   | 2         | 1                 |
| │ └── hola.txt  | ASCII | 3         | 0                 |
| └── baz.exe     | PE    | 2         | 1                 |

But suppose baz.exe embeds a ZIP archive and a 7Z archive, like this:

| description     | type  | rec level | nested fmap level |
| --------------- | ----- | --------- | ----------------- |
| baz.exe         | PE    | 0         | 0                 |
| ├── sfx.zip     | ZIP   | 1         | 1                 |
| │ └── hello.txt | ASCII | 2         | 0                 |
| └── sfx.7z      | 7Z    | 1         | 1                 |
| └── world.txt   | ASCII | 2         | 0                 |

(A) If we scan for embedded files at any layer, we may detect:

| description           | type  | rec level | nested fmap level |
| --------------------- | ----- | --------- | ----------------- |
| foo.tar.gz            | GZ    | 0         | 0                 |
| ├── foo.tar           | TAR   | 1         | 0                 |
| │ ├── bar.zip         | ZIP   | 2         | 1                 |
| │ │ └── hola.txt      | ASCII | 3         | 0                 |
| │ ├── baz.exe         | PE    | 2         | 1                 |
| │ │ ├── sfx.zip       | ZIP   | 3         | 1                 |
| │ │ │ └── hello.txt   | ASCII | 4         | 0                 |
| │ │ └── sfx.7z        | 7Z    | 3         | 1                 |
| │ │ └── world.txt     | ASCII | 4         | 0                 |
| │ ├── sfx.zip         | ZIP   | 2         | 1                 |
| │ │ └── hello.txt     | ASCII | 3         | 0                 |
| │ └── sfx.7z          | 7Z    | 2         | 1                 |
| │ └── world.txt       | ASCII | 3         | 0                 |
| ├── sfx.zip           | ZIP   | 1         | 1                 |
| └── sfx.7z            | 7Z    | 1         | 1                 |

(A) is bad because it scans content more than once. Note that for the GZ layer, it may detect the ZIP and 7Z if the signature hits on the compressed data, which it might, though extracting the ZIP and 7Z will likely fail. The reason the above doesn't happen now is that we restrict embedded type scans for a bunch of archive formats to include GZ and TAR.

(B) If we scan for embedded files at the foo.tar layer, we may detect:

| description     | type  | rec level | nested fmap level |
| --------------- | ----- | --------- | ----------------- |
| foo.tar.gz      | GZ    | 0         | 0                 |
| └── foo.tar     | TAR   | 1         | 0                 |
| ├── bar.zip     | ZIP   | 2         | 1                 |
| │ └── hola.txt  | ASCII | 3         | 0                 |
| ├── baz.exe     | PE    | 2         | 1                 |
| ├── sfx.zip     | ZIP   | 2         | 1                 |
| │ └── hello.txt | ASCII | 3         | 0                 |
| └── sfx.7z      | 7Z    | 2         | 1                 |
| └── world.txt   | ASCII | 3         | 0                 |

(B) is almost right. And we can achieve it easily enough by only scanning for embedded content in the current fmap when the "nested fmap level" is 0. The upside is that it should safely detect all embedded content, even if it may think the sfx.zip and sfx.7z are in foo.tar instead of in baz.exe.

The biggest risk I can think of affects ZIPs. SFXZIP detection is identical to ZIP detection, which is why we don't allow SFXZIP to be detected if inside of a ZIP. If we only allow embedded type scanning at fmap-layer 0 in each buffer, this will fail to detect the embedded ZIP if the bar.exe was not compressed in foo.zip and if non-compressed files extracted from ZIPs aren't extracted as new buffers:

| description     | type  | rec level | nested fmap level |
| --------------- | ----- | --------- | ----------------- |
| foo.zip         | ZIP   | 0         | 0                 |
| └── bar.exe     | PE    | 1         | 1                 |
| └── sfx.zip     | ZIP   | 2         | 2                 |

Provided that we ensure all files extracted from zips are scanned in new buffers, option (B) should be safe.

(C) If we scan for embedded files at the baz.exe layer, we may detect:

| description     | type  | rec level | nested fmap level |
| --------------- | ----- | --------- | ----------------- |
| foo.tar.gz      | GZ    | 0         | 0                 |
| └── foo.tar     | TAR   | 1         | 0                 |
| ├── bar.zip     | ZIP   | 2         | 1                 |
| │ └── hola.txt  | ASCII | 3         | 0                 |
| └── baz.exe     | PE    | 2         | 1                 |
| ├── sfx.zip     | ZIP   | 3         | 1                 |
| │ └── hello.txt | ASCII | 4         | 0                 |
| └── sfx.7z      | 7Z    | 3         | 1                 |
| └── world.txt   | ASCII | 4         | 0                 |

(C) is right. But it's harder to achieve. For this example we could get it by restricting 7ZSFX and ZIPSFX detection to only when scanning an executable. But that may mean losing detection of archives embedded elsewhere. And we'd have to identify allowable container types for each possible embedded type, which would be very difficult. So this commit aims to solve the issue the (B)-way.

Note that in all situations, we still have to scan with file typing enabled to determine if we need to reassign the current file type, such as re-identifying a Bzip2 archive as a DMG that happens to be Bzip2-compressed. Detection of DMG and a handful of other types relies on finding data partway through or near the end of a file before reassigning the entire file as the new type.

Other fixes and considerations in this commit:

- The utf16 HTML parser has weak error handling, particularly with respect to creating a nested fmap for scanning the ascii-decoded file. This commit cleans up the error handling and wraps the nested scan with the recursion-stack push()/pop() for correct recursion tracking. Before this commit, each container layer had a flag to indicate if the container layer is valid. We need something similar so that the cli_recursion_stack_get_*() functions ignore normalized layers. Details: imagine an LDB signature for HTML content that specifies a ZIP container. If the signature actually alerts on the normalized HTML and you don't ignore normalized layers for the container check, it will appear as though the alert is in an HTML container rather than a ZIP container. This commit accomplishes this with a boolean you set in the scan context before scanning a new layer. Then, when the new fmap is created, it will use that flag to set a similar flag for the layer. The context flag is then reset so that anything after this doesn't have that flag. The flag allows the new recursion_stack_get() function to ignore normalized layers when iterating the stack to return a layer at a requested index, negative or positive. Scanning normalized extracted/normalized javascript and VBA should also use the 'layer is normalized' flag.

- This commit also fixes the Heuristic.Broken.Executable alert for ELF files to make sure that: A) these only alert if cli_append_virus() returns CL_VIRUS (aka it respects the FP check), and B) all broken-executable alerts for ELF only happen if the SCAN_HEURISTIC_BROKEN option is enabled.

- This commit also cleans up the error handling in cli_magic_scan_dir(). This was needed so we could correctly apply the layer-is-normalized flag to all VBA macros extracted to a directory when scanning the directory.

- Also fix an issue where exceeding scan maximums wouldn't cause embedded file detection scans to abort. Granted, we don't actually want to abort if max filesize or max recursion depth are exceeded, only if max scansize, max files, and max scantime are exceeded. Add an 'abort_scan' flag to the scan context, to protect against depending on correct error propagation for fatal conditions. Instead, setting this flag in the scan context should guarantee that a fatal condition deep in scan recursion isn't lost, which would result in more stuff being scanned instead of aborting. This shouldn't be necessary, but some status codes like CL_ETIMEOUT never used to be fatal, and it's easier to do this than to verify that every parser only returns CL_ETIMEOUT and other "fatal status codes" in fatal conditions.

- Remove the duplicate is_tar() prototype from filetypes.c and include is_tar.h instead.

- Presently we create the fmap hash when creating the fmap. This wastes a bit of CPU if the hash is never needed. Now that we're creating fmaps for all embedded files discovered with file-type recognition scans, this is a much more frequent occurrence and really slows things down. This commit fixes the issue by only creating fmap hashes as needed. This should not only resolve the performance impact of creating fmaps for all embedded files, but also should improve performance in general.

- Add an allmatch check to the zip parser after the central-header meta match, so that we don't get multiple alerts with the same match except in allmatch mode. Clean up error handling in the zip parser a tiny bit.

- Fixes to ensure that the scan limits such as scansize, filesize, recursion depth, # of embedded files, and scantime are always reported if AlertExceedsMax (--alert-exceeds-max) is enabled.

- Fixed an issue where non-fatal alerts for exceeding scan maximums may mask signature matches later on. I changed it so these alerts use the "possibly unwanted" alert type and thus only alert if no other alerts were found, or if all-match or heuristic-precedence are enabled.

- Added the "Heuristics.Limits.Exceeded.*" events to the JSON metadata when the --gen-json feature is enabled. These will show up once under "ParseErrors" the first time a limit is exceeded. In the present implementation, only one limits-exceeded event will be added, so as to prevent a malicious or malformed sample from filling the JSON buffer with millions of events and using a tonne of RAM.
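
The recursion-stack sketch referenced above: a minimal, self-contained push/pop layer stack keyed by fmap pointers. The struct and function names are hypothetical and simplified; this is not the actual cli_recursion_stack_*() implementation in libclamav:

```c
#include <stdbool.h>
#include <stddef.h>

#define MAX_RECURSION 17          /* illustrative limit, not ClamAV's configured value */

typedef struct fmap fmap_t;       /* stand-in for libclamav's fmap type */

typedef struct {
    fmap_t  *map;                 /* buffer for this layer */
    size_t   size;                /* size of the mapped data */
    int      type;                /* detected file type for this layer */
    bool     is_new_buffer;       /* true when this layer starts a new buffer/file */
    unsigned nested_fmap_level;   /* 0 when the layer begins a new buffer */
} recursion_layer_t;

typedef struct {
    recursion_layer_t stack[MAX_RECURSION];
    size_t depth;                 /* number of layers currently pushed */
} recursion_stack_t;

/* Push a layer whenever a new fmap is added; a new buffer resets the
 * nested fmap level, a nested view of the same buffer increments it. */
static int recursion_stack_push(recursion_stack_t *s, fmap_t *map, size_t size,
                                int type, bool is_new_buffer)
{
    if (s->depth >= MAX_RECURSION)
        return -1;                /* exceeded max scan recursion */

    recursion_layer_t *layer = &s->stack[s->depth];
    layer->map           = map;
    layer->size          = size;
    layer->type          = type;
    layer->is_new_buffer = is_new_buffer;
    layer->nested_fmap_level =
        (is_new_buffer || s->depth == 0) ? 0 : s->stack[s->depth - 1].nested_fmap_level + 1;

    s->depth++;
    return 0;
}

/* Pop the layer when scanning of that fmap completes. */
static void recursion_stack_pop(recursion_stack_t *s)
{
    if (s->depth > 0)
        s->depth--;
}
```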

d8dc3f00f9 | ClamD: Add GenerateMetadataJson option, like clamscan --gen-json | 4 years ago

Adds functionality to ClamD equivalent to ClamScan's --gen-json option. Behavior for GenerateMetadataJson is the same as with --gen-json. If Debug is enabled, it will print out the JSON after each scan. If LeaveTemporaryFiles is enabled, it will drop a metadata.json file in the scan temp directory, which of course may be customized using the TemporaryDirectory option.

23dfe8fc4c | ClamScan, ClamDScan: process memory scanning (Windows) | 4 years ago

Add the process memory scanning feature from ClamWin's ClamScan. This commit extends that feature to make it available in ClamDScan as well.

This adds three new options to ClamScan and ClamDScan on Windows:

* --memory
* --kill
* --unload

--allmatch and --stream are available for ClamDScan.

To reduce code duplication, this refactors clamd-related code used in both scanmem.c and proto.c into clamdcom. Moved send_fdpass(), send_stream(), chkpath(), dconnect(), and dsresult(), as well as some type definitions.

Special thanks to Gianluigi Tiesi for allowing us to integrate the Windows process memory scanning feature from ClamWin into ClamAV.

1b276dbe93 | Change ReceiveTimeout to use CURLOPT_LOW_SPEED_TIME | 4 years ago

Currently ReceiveTimeout sets CURLOPT_TIMEOUT, which is an absolute timeout on the HTTP download and not particularly useful without knowing the size of the file or the throughput available to download it. Change it to use CURLOPT_LOW_SPEED_TIME instead, and set the related low-speed limit (CURLOPT_LOW_SPEED_LIMIT) to 1 byte per second. This allows ReceiveTimeout to abort the attempt if the download is not making any significant progress.

Restore the documentation, default, and sample options back to before.
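
For reference, this is how the two libcurl options mentioned above fit together; a minimal sketch, where the URL and the 30-second window are illustrative values rather than freshclam's actual configuration:

```c
#include <stdio.h>
#include <curl/curl.h>

int main(void)
{
    CURL *curl;
    CURLcode res;

    curl_global_init(CURL_GLOBAL_DEFAULT);
    curl = curl_easy_init();
    if (!curl)
        return 1;

    curl_easy_setopt(curl, CURLOPT_URL, "https://database.clamav.net/daily.cvd");

    /* Instead of an absolute CURLOPT_TIMEOUT, abort only if the transfer
     * drops below 1 byte/second for 30 consecutive seconds. */
    curl_easy_setopt(curl, CURLOPT_LOW_SPEED_LIMIT, 1L);
    curl_easy_setopt(curl, CURLOPT_LOW_SPEED_TIME, 30L);

    res = curl_easy_perform(curl);
    if (res != CURLE_OK)
        fprintf(stderr, "download failed: %s\n", curl_easy_strerror(res));

    curl_easy_cleanup(curl);
    curl_global_cleanup();
    return (res == CURLE_OK) ? 0 : 1;
}
```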

4a9cff9214 | CMake: support Xcode builds | 4 years ago

Xcode (and perhaps some other generators?) does not like targets that have only object files. See: https://cmake.org/cmake/help/latest/command/add_library.html#object-libraries and https://cmake.org/pipermail/cmake/2016-May/063479.html

This issue manifests when using `-G Xcode` on macOS: the library dylibs are missing when linking with other binaries. This commit removes the object libraries for libclamav, libfreshclam, libclamunrar_iface, libclamunrar, libclammspack, and (lib)common, because they were used by static or shared libs that didn't themselves have any added sources.

Add a getter & setter for the debug flag, so it isn't referenced by unit tests or other code that links with libclamav. This is needed because global variables are exported symbols on Windows.
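
The getter/setter pattern mentioned above, in a minimal form; the names below are hypothetical stand-ins, not libclamav's actual symbols:

```c
/* Hypothetical names for illustration; not libclamav's actual API. */

/* The flag itself stays internal to the library translation unit. */
static int g_debug_flag = 0;

/* Exported accessors: code that links with the library (e.g. unit tests)
 * calls these instead of referencing the global directly, which matters
 * on Windows where exporting data symbols from a DLL is awkward. */
int get_debug_flag(void)
{
    return g_debug_flag;
}

void set_debug_flag(int enabled)
{
    g_debug_flag = enabled ? 1 : 0;
}
```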

d46832d5cf | clamav.net URL update for new docs, github issues | 4 years ago

Replace new bugzilla ticket links with links to github issues. Replace clamav.net/documentation links with docs.clamav.net equivalents.

27d51762b3 | Added Windows services for clamd and freshclam | 4 years ago

Added a feature to start FreshClam & ClamD as Windows services. Special thanks to Gianluigi Tiesi for allowing us to integrate this feature from ClamWin directly into ClamAV.

Added an internal --service-mode option for FreshClam and ClamD. This is used when Windows starts FreshClam or ClamD as a service, so that they will register with the service manager. Code found in service.c.
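
For context on what "register with the service manager" involves, here is a minimal, generic Win32 service skeleton. It illustrates the standard SCM handshake only; it is not ClamAV's actual service.c, and the service name is a placeholder:

```c
#include <windows.h>

/* Generic Win32 service skeleton; names are placeholders. */
static SERVICE_STATUS        g_status;
static SERVICE_STATUS_HANDLE g_status_handle;

static void report_status(DWORD state)
{
    g_status.dwServiceType      = SERVICE_WIN32_OWN_PROCESS;
    g_status.dwCurrentState     = state;
    g_status.dwControlsAccepted = (state == SERVICE_RUNNING) ? SERVICE_ACCEPT_STOP : 0;
    SetServiceStatus(g_status_handle, &g_status);
}

static void WINAPI service_ctrl(DWORD ctrl)
{
    if (ctrl == SERVICE_CONTROL_STOP) {
        report_status(SERVICE_STOP_PENDING);
        /* ... signal the worker loop to shut down ... */
        report_status(SERVICE_STOPPED);
    }
}

static void WINAPI service_main(DWORD argc, LPTSTR *argv)
{
    (void)argc;
    (void)argv;

    g_status_handle = RegisterServiceCtrlHandler(TEXT("exampleservice"), service_ctrl);
    if (!g_status_handle)
        return;

    report_status(SERVICE_RUNNING);
    /* ... run the normal application work loop here ... */
}

int main(void)
{
    SERVICE_TABLE_ENTRY table[] = {
        { TEXT("exampleservice"), service_main },
        { NULL, NULL }
    };

    /* Blocks until the service stops; fails if the process was not started
     * by the Service Control Manager (e.g. launched from a console). */
    if (!StartServiceCtrlDispatcher(table))
        return 1;
    return 0;
}
```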

8632035c9c | Common: cert util memory free fix | 4 years ago

This is a bit of a style thing, as freeing a NULL pointer isn't dangerous. Still, we should only try to free the certificate name if it is non-NULL, and then set it to NULL so it can't accidentally be re-used if the code is later modified.
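
The pattern being described is roughly the following; a generic sketch with a hypothetical struct and field name, not the actual cert-util code:

```c
#include <stdlib.h>

struct cert_info {
    char *name;   /* hypothetical field standing in for the certificate name */
};

static void cert_info_reset(struct cert_info *cert)
{
    /* free(NULL) is harmless, but checking first makes the intent explicit,
     * and nulling the pointer prevents accidental reuse after free. */
    if (cert->name != NULL) {
        free(cert->name);
        cert->name = NULL;
    }
}
```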

4f51994fad | FreshClam: Fix tests, configs for IPv6-only systems | 4 years ago

Some config settings and some tests hardcoded 127.0.0.1. This switches them to localhost, so they'll work on systems that don't support IPv4.

0255f29a72 | Blacklist & Whitelist verbiage | 4 years ago

Improvements to use modern block list and allow list verbiage:

- blacklist -> block list
- whitelist -> allow list
- blacklisted -> blocked
- whitelisted -> allowed

In the case of certificate verification, use "trust" or "verify" when something is allowed.

Also changed domainlist -> domain list (or DomainList) to match.

a746d344df | Remove Autotools build system & built-in LLVM | 4 years ago

CMake is now required to build. The built-in LLVM is no longer available.

Also removed support for libltdl calls, which are not used in the CMake builds and were only used when building with Autotools.

TODO: Fix CMake LLVM support & update it to work with modern versions.

c025afd683 | Rename "shared" library to "common" | 4 years ago

The name "shared" is confusing, especially now that these features are built as a static library instead of being compiled directly into the various applications.