It's more natural to use uint8_t * (std::byte needs C++17, and LLVM already
uses uint8_t * pervasively), and most callers use uint8_t * instead of char *.
The functions were recently moved into `llvm::compression::zlib::`, so
downstream projects need to adapt anyway.
Refactor compression namespaces across the project, making way for a possible
introduction of alternatives to zlib compression.
Changes are as follows:
* Relocate the `llvm::zlib` namespace to `llvm::compression::zlib`.
Reviewed By: MaskRay, leonardchan, phosek
Differential Revision: https://reviews.llvm.org/D128953
This adds support for `#pragma omp atomic compare fail`. It has Parser & AST support for now.
Reviewed By: tianshilei1992
Differential Revision: https://reviews.llvm.org/D123235
Adds support for the reserved locator 'omp_all_memory' for use
in depend clauses with 'out' or 'inout' dependence-types.
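A hedged usage sketch (the function and variables are illustrative, not from the patch): a task with an `omp_all_memory` dependence is ordered against every other task that has an 'out' or 'inout' dependence, without naming the individual list items.

  void example(void) {
    int a = 0;
  #pragma omp task depend(out : a)
    a = 1;
    // Ordered after the task above and any other task with an out/inout dependence.
  #pragma omp task depend(inout : omp_all_memory)
    a += 1;
  }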
Differential Revision: https://reviews.llvm.org/D125828
This is a 1.9% reduction in PCH size in my measurements.
In abbreviated records, VBR6 seems to be slightly better than VBR8 for locations
that may be delta-encoded (i.e. not the first).
Differential Revision: https://reviews.llvm.org/D125952
Much of the size of PCH/PCM files comes from stored SourceLocations.
These are encoded using (almost) their raw value, VBR-encoded. Absolute
SourceLocations can be relatively large numbers, so this commonly takes
20-30 bits per location.
We can reduce this by exploiting redundancy: many "nearby" SourceLocations are
stored differing only slightly and can be delta-encoded.
Random-access loading of AST nodes constrains how long these sequences
can be, but we can do it at least within a node that always gets
deserialized as an atomic unit.
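A minimal sketch of the encoding idea (this is not the actual ASTWriter/ASTReader code): within one atomically-deserialized record, each location is stored as a zigzag-mapped delta from the previous one, so nearby locations need only a few VBR bits.

  #include <cstdint>
  #include <vector>

  // Map a signed delta to an unsigned value of small magnitude so that the
  // following VBR encoding stays short for nearby locations.
  static uint32_t zigzag(int32_t V) {
    return (static_cast<uint32_t>(V) << 1) ^ static_cast<uint32_t>(V >> 31);
  }

  std::vector<uint32_t> deltaEncodeLocs(const std::vector<uint32_t> &RawLocs) {
    std::vector<uint32_t> Encoded;
    uint32_t Prev = 0;
    for (uint32_t Loc : RawLocs) {
      Encoded.push_back(zigzag(static_cast<int32_t>(Loc - Prev))); // small if nearby
      Prev = Loc;
    }
    return Encoded;
  }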
TypeLoc is implemented in this patch as it's a relatively small change
that shows most of the API.
This saves ~3.5% of PCH size. I have local changes applying this technique
further that save another 3%; I think it's possible to get to 10% total.
Differential Revision: https://reviews.llvm.org/D125403
This diff changes the serialization of the `ORIGINAL_PCH_DIR`
entry in module files to be serialized relative to the module's
`BaseDirectory`. This will allow for the module to be relocatable
across machines.
The path is restored relative to the module's BaseDirectory on
deserialization.
Reviewed By: urnathan
Differential Revision: https://reviews.llvm.org/D124946
This diff changes the serialization of the `SUBMODULE_TOPHEADER`
entry in module files to be serialized relative to the module's
`BaseDirectory`. This matches the behavior of the
`SUBMODULE_HEADER` entry and will allow for the module to be
relocatable across machines.
The path is restored relative to the module's `BaseDirectory` on
deserialization.
Reviewed By: urnathan
Differential Revision: https://reviews.llvm.org/D124938
This diff adds a new frontend flag `-fmodule-file-home-is-cwd`.
The behavior of this flag is similar to
`-fmodule-map-file-home-is-cwd` but does not require the module
map files to be modified to have inputs relative to the cwd.
Instead, the output modules will have their `BaseDirectory` set
to the cwd and will try to resolve paths relative to that.
The motivation for this change is to support relocatable PCM
files that are built on different machines with different paths,
without having to alter module map files, which is sometimes not
possible as they are provided by third parties.
Reviewed By: urnathan
Differential Revision: https://reviews.llvm.org/D124874
ASTWriter::associateDeclWithFile shows up prominently in clangd performance profiles due to O(n^2) behaviour when inserting DeclIDs into SortedFileDeclIDs. Instead of doing that, this patch just appends to the DeclIDs vector and sorts it once at the end.
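A rough sketch of the pattern (member names are hypothetical, not the actual ASTWriter fields): appending is O(1) amortized and a single sort at the end is O(n log n), whereas inserting each ID at its sorted position is O(n^2) overall.

  #include <algorithm>
  #include <cstdint>
  #include <vector>

  using DeclID = uint32_t;

  struct FileDeclIDsInfo {
    std::vector<DeclID> DeclIDs; // left unsorted while declarations are recorded
  };

  void addDecl(FileDeclIDsInfo &Info, DeclID ID) {
    Info.DeclIDs.push_back(ID); // cheap append instead of sorted insertion
  }

  void finalize(FileDeclIDsInfo &Info) {
    std::sort(Info.DeclIDs.begin(), Info.DeclIDs.end()); // one sort at the end
  }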
Reviewed By: akyrtzi
Differential Revision: https://reviews.llvm.org/D124840
Returning `std::array<uint8_t, N>` offers better ergonomics for users of the hashing functions than returning a `StringRef` (a short sketch follows below):
* When returning `StringRef`, client code is "jumping through hoops" to do string manipulations instead of dealing with fixed array of bytes directly, which is more natural
* Returning `std::array<uint8_t, N>` avoids the need for the hasher classes to keep a field just for the purpose of wrapping it and returning it as a `StringRef`
As part of this patch also:
* Introduce `TruncatedBLAKE3` which is useful for using BLAKE3 as the hasher type for `HashBuilder` with non-default hash sizes.
* Make `MD5Result` inherit from `std::array<uint8_t, 16>` which improves & simplifies its API.
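A hedged sketch of the resulting ergonomics (the hasher below is a stand-in, not the exact LLVM API): a fixed-size byte array can be indexed and iterated directly, with no StringRef detour.

  #include <array>
  #include <cstddef>
  #include <cstdint>
  #include <cstdio>

  // Stand-in for a hasher's final(); the real MD5/BLAKE3 classes differ.
  std::array<uint8_t, 16> computeDigest() {
    std::array<uint8_t, 16> Digest{};
    for (std::size_t I = 0; I < Digest.size(); ++I)
      Digest[I] = static_cast<uint8_t>(I * 7);
    return Digest;
  }

  void printDigest() {
    for (uint8_t Byte : computeDigest())
      std::printf("%02x", Byte); // operate on the raw bytes directly
    std::printf("\n");
  }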
Differential Revision: https://reviews.llvm.org/D123100
Previously, if a `#pragma clang assume_nonnull begin` was at the
end of a preamble with a `#pragma clang assume_nonnull end` at the
end of the main file, clang would diagnose an unterminated begin in
the preamble and an unbalanced end in the main file.
With this change, those errors no longer occur and the case above is
now properly handled. I've added a corresponding test to clangd,
which makes use of preambles, in order to verify this works as
expected.
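A minimal reproduction of the now-handled case (file contents are illustrative); the `begin` is the last directive of the preamble and the matching `end` is at the end of the main file:

  // main.c -- the preamble covers everything up to the first declaration.
  #include <stddef.h>
  #pragma clang assume_nonnull begin   // last directive of the preamble

  int *get_pointer(void);              // first declaration: the preamble ends here

  #pragma clang assume_nonnull end     // previously diagnosed as unbalanced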
Differential Revision: https://reviews.llvm.org/D122179
For header units we build the top-level module directly from the header
that it represents, and macros defined in this TU need to be emitted (when
such a definition is live at the end of the TU).
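An illustrative example (hypothetical header): when this header is built as a header unit, the macro definition is still live at the end of the TU and therefore has to be emitted so that importers see it.

  // number.h, built as a header unit and consumed via `import "number.h";`
  #define NUMBER_MAX 100 // live at the end of the TU: must be serialized
  inline int clamp_number(int N) {
    return N > NUMBER_MAX ? NUMBER_MAX : N;
  }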
Differential Revision: https://reviews.llvm.org/D121097
Update `WeakUndeclaredIdentifiers` to hold a collection of weak
aliases per identifier instead of only one.
This also allows the "used" state to be removed from `WeakInfo`
because it is really only there as an alternative to removing
processed map entries, and we can represent that using an empty set
now. The serialization code is updated for the removal of the field.
Additionally, a PCH test is added for the new functionality.
The records are grouped by the "target" identifier, which was already
being used as a key for lookup purposes. We also store only one record
per alias name; combined, this means that diagnostics are grouped by
the "target" and limited to one per alias (which should be acceptable).
Fixes PR28611.
Fixes llvm/llvm-project#28985.
Reviewed By: aaron.ballman, cebowleratibm
Differential Revision: https://reviews.llvm.org/D121927
Co-authored-by: Rachel Craik <rcraik@ca.ibm.com>
Co-authored-by: Jamie Schmeiser <schmeise@ca.ibm.com>
Currently, ASTContext.getAttributedType() takes the attribute kind,
ModifiedType and EquivType as the hash to decide whether an AST node
has already been generated or not. But this is not enough for btf_type_tag,
as the attribute might have the same ModifiedType and EquivType but
still have a different string associated with the attribute.
For example, for a data structure like below,
struct map_value {
int __attribute__((btf_type_tag("tag1"))) __attribute__((btf_type_tag("tag3"))) *a;
int __attribute__((btf_type_tag("tag2"))) __attribute__((btf_type_tag("tag4"))) *b;
};
The current ASTContext.getAttributedType() will produce
an AST similar to below:
struct map_value {
int __attribute__((btf_type_tag("tag1"))) __attribute__((btf_type_tag("tag3"))) *a;
int __attribute__((btf_type_tag("tag1"))) __attribute__((btf_type_tag("tag3"))) *b;
};
and this is incorrect.
It is very difficult to use the current AttributedType as it is hard to
get the tag information. To fix the problem, this patch introduces
BTFTagAttributedType, which is similar to AttributedType
in many ways but carries an additional BTFTypeTagAttr. The tag itself can
be retrieved through the BTFTypeTagAttr.
With the new BTFTagAttributedType, the debuginfo code can be greatly
simplified compared to the previous TypeLoc-based approach.
Differential Revision: https://reviews.llvm.org/D120296
With a sufficiently large output buffer, the only failure is Z_MEM_ERROR.
Check it and call the noreturn report_bad_alloc_error if applicable.
resize_for_overwrite may call report_bad_alloc_error as well.
Now that there is no other error type, we can replace the return type with void
and simplify call sites.
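A hedged sketch of the new shape (this is not the exact `llvm::compression::zlib::compress` signature): with the destination sized by compressBound(), Z_MEM_ERROR is the only remaining zlib failure, so it is funneled into a noreturn bad-alloc report and the function returns void.

  #include <zlib.h>
  #include <cstdint>
  #include <cstdio>
  #include <cstdlib>
  #include <vector>

  // Stand-in for llvm::report_bad_alloc_error.
  [[noreturn]] static void reportBadAlloc(const char *Msg) {
    std::fprintf(stderr, "%s\n", Msg);
    std::abort();
  }

  void compressBytes(const std::vector<uint8_t> &In, std::vector<uint8_t> &Out,
                     int Level) {
    uLongf DestLen = ::compressBound(In.size());
    Out.resize(DestLen); // resize_for_overwrite in the real code
    if (::compress2(Out.data(), &DestLen, In.data(), In.size(), Level) == Z_MEM_ERROR)
      reportBadAlloc("zlib compression: out of memory");
    Out.resize(DestLen); // shrink to the actual compressed size
  }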
Reviewed By: ikudrin
Differential Revision: https://reviews.llvm.org/D121512
Since D106876, PCM files don't report module maps as input files unless they contributed to the compilation.
Reporting only module maps of (transitively) imported modules is not enough, though. For modules marked with `[no_undeclared_includes]`, other module maps affect the compilation by introducing anti-dependencies.
This patch makes sure such module maps are reported as input files.
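A hedged module map sketch of the anti-dependency (module names are made up): `Restricted` may only include headers from the modules it names with `use`, so the module maps that define the other modules influence which includes get rejected even though those modules are never imported.

  // module.modulemap (illustrative)
  module Restricted [no_undeclared_includes] {
    header "restricted.h"
    use Allowed // only headers from 'Allowed' may be included
  }
  module Allowed { header "allowed.h" }
  module Other { header "other.h" } // its module map is an anti-dependency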
Depends on D120463.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D120464
clang has support for lazy headers in module maps - if size and/or
modtime are provided in the cppmap file, headers are only resolved when
an include directive for a file with that size/modtime is encountered.
Before this change, the lazy resolution was all-or-nothing per module.
That means as soon as even one file in that module potentially matched
an include, all lazy files in that module were resolved. With this
change, only files with matching size/modtime will be resolved.
The goal is to avoid unnecessary stat() calls on non-included files,
which is especially valuable on networked file systems with higher
latency.
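For reference, a hedged sketch of the module map syntax involved (numbers are made up): headers that carry size/mtime attributes are the lazy ones, and with this change only the entry whose attributes match an encountered include gets resolved.

  module MyLib {
    header "a.h" { size 1452 mtime 1614000210 } // resolved only when an include matches
    header "b.h" { size 889 mtime 1614000211 }  // no longer resolved just because a.h was
  }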
Differential Revision: https://reviews.llvm.org/D120569
This is an initial enabling patch for module partition support.
We add enumerations for partition interfaces/implementations.
This means that the module kind enumeration now occupies three
bits, so the AST streamer is adjusted for this. Adding one bit there
seems preferable to trying to overload the meanings of existing
kinds (and we will also want to add a C++20 header unit case later).
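For context, a minimal sketch of the C++20 source forms that the new module-kind enumerators describe (module and partition names are illustrative):

  // A-Part.cppm: module partition interface unit
  export module A:Part;
  export int part_value();

  // A-Impl.cppm: module partition implementation unit
  module A:Impl;
  int helper() { return 42; }

  // A.cppm: primary module interface, re-exporting its partition
  export module A;
  export import :Part;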
Differential Revision: https://reviews.llvm.org/D114714
Recently we observed high memory pressure caused by clang during some parallel builds.
We discovered that we have several projects that have a large number of #define directives
in their TUs (on the order of millions), which caused huge memory consumption in clang due
to a lot of allocations for MacroInfo. We would like to reduce the memory overhead of
clang for a single #define to reduce the memory overhead for these files, to allow us to
reduce the memory pressure on the system during highly parallel builds. This change achieves
that by removing the SmallVector in MacroInfo and instead storing the tokens in an array
allocated using the bump pointer allocator, after all tokens are lexed.
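A rough sketch of the storage strategy (not the actual MacroInfo layout): tokens are gathered while lexing and then copied once into memory obtained from a bump-pointer allocator, replacing the per-macro SmallVector.

  #include "llvm/Support/Allocator.h"
  #include <algorithm>
  #include <cstdint>
  #include <vector>

  struct Token { uint32_t Kind = 0; uint32_t Loc = 0; }; // simplified stand-in

  struct MacroTokenStorage {
    const Token *Tokens = nullptr; // points into the bump-pointer allocator
    unsigned NumTokens = 0;
  };

  MacroTokenStorage storeTokens(llvm::BumpPtrAllocator &Alloc,
                                const std::vector<Token> &Lexed) {
    Token *Mem = Alloc.Allocate<Token>(Lexed.size());
    std::copy(Lexed.begin(), Lexed.end(), Mem); // single copy after lexing finishes
    return {Mem, static_cast<unsigned>(Lexed.size())};
  }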
The added unit test with 1000000 #define directives illustrates the problem. Prior to this
change, on arm64 macOS, clang's PP bump pointer allocator allocated 272007616 bytes, and
used roughly 272 bytes per #define. After this change, clang's PP bump pointer allocator
allocates 120002016 bytes, and uses only roughly 120 bytes per #define.
For an example test file that we have internally with 7.8 million #define directives, this
change produces the following improvement on arm64 macOS: Persistent allocation footprint for
this test case file as it's being compiled to LLVM IR went down 22% from 5.28 GB to 4.07 GB
and the total allocations went down 14% from 8.26 GB to 7.05 GB. Furthermore, this change
reduced the total number of allocations made by the system for this clang invocation from
1454853 to 133663, an order of magnitude improvement.
The recommit fixes the LLDB build failure.
Differential Revision: https://reviews.llvm.org/D117348
Recently we observed high memory pressure caused by clang during some parallel builds.
We discovered that we have several projects that have a large number of #define directives
in their TUs (on the order of millions), which caused huge memory consumption in clang due
to a lot of allocations for MacroInfo. We would like to reduce the memory overhead of
clang for a single #define to reduce the memory overhead for these files, to allow us to
reduce the memory pressure on the system during highly parallel builds. This change achieves
that by removing the SmallVector in MacroInfo and instead storing the tokens in an array
allocated using the bump pointer allocator, after all tokens are lexed.
The added unit test with 1000000 #define directives illustrates the problem. Prior to this
change, on arm64 macOS, clang's PP bump pointer allocator allocated 272007616 bytes, and
used roughly 272 bytes per #define. After this change, clang's PP bump pointer allocator
allocates 120002016 bytes, and uses only roughly 120 bytes per #define.
For an example test file that we have internally with 7.8 million #define directives, this
change produces the following improvement on arm64 macOS: Persistent allocation footprint for
this test case file as it's being compiled to LLVM IR went down 22% from 5.28 GB to 4.07 GB
and the total allocations went down 14% from 8.26 GB to 7.05 GB. Furthermore, this change
reduced the total number of allocations made by the system for this clang invocation from
1454853 to 133663, an order of magnitude improvement.
Differential Revision: https://reviews.llvm.org/D117348
This patch replaces the exact include count of each file in `HeaderFileInfo` with a set of included files in `Preprocessor`.
The number of includes isn't a property of a header file but rather preprocessor state. The exact number of includes is not used anywhere except for statistics tracking.
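A small sketch of the state change (type and member names are hypothetical): whether a header has already been entered becomes a membership test in a preprocessor-owned set rather than a per-header counter.

  #include <set>

  struct FileEntry; // opaque stand-in for clang's FileEntry

  struct IncludeState { // conceptually owned by the Preprocessor
    std::set<const FileEntry *> IncludedFiles;

    bool alreadyIncluded(const FileEntry *FE) const {
      return IncludedFiles.count(FE) != 0;
    }
    void markIncluded(const FileEntry *FE) { IncludedFiles.insert(FE); }
  };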
Reviewed By: vsapsai
Differential Revision: https://reviews.llvm.org/D114095
The patch was reverted because it caused a crash during PCH build -- we
missed updating the RParenLoc in TreeTransform<Derived>::TransformAutoType.
This relands 55d96ac and 37ec65e with a test and fix.
This patch adds support for `atomic compare` in the parser. The support
in Sema and CodeGen will come soon. For now, it simply emits an error when it
is encountered.
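For illustration, one of the OpenMP 5.1 forms the parser now recognizes (still rejected with a diagnostic until the Sema and CodeGen support lands):

  void cas(int &x, int e, int d) {
  #pragma omp atomic compare
    if (x == e) {
      x = d; // conditional-update form of 'atomic compare'
    }
  }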
Reviewed By: ABataev
Differential Revision: https://reviews.llvm.org/D115561
This reverts commit cc56c66f27.
Fixed a bad assertion: the target of a UsingShadowDecl must not have
*local* qualifiers, but it can be a typedef whose underlying type is qualified.
Currently there's no way to find the UsingDecl that a typeloc found its
underlying type through. Compare to DeclRefExpr::getFoundDecl().
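A small example of the situation being modeled (names are illustrative): the new sugar records that `S` below was reached through the using-declaration rather than written as `ns::S`.

  namespace ns { struct S {}; }
  using ns::S; // introduces a UsingShadowDecl for ns::S

  S Value;     // 'S' is now UsingType sugar over the underlying RecordType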
Design decisions:
- a sugar type, as there are many contexts this type of use may appear in
- UsingType is a leaf like TypedefType, the underlying type has no TypeLoc
- not unified with UnresolvedUsingType: a single name is appealing,
but being sometimes-sugar is often fiddly.
- not unified with TypedefType: the UsingShadowDecl is not a TypedefNameDecl or
even a TypeDecl, and users think of these differently.
- does not cover other rarer aliases like objc @compatibility_alias,
in order to have a concrete API that's easy to understand.
- implicitly desugared by the hasDeclaration ASTMatcher, to avoid
breaking existing patterns and following the precedent of ElaboratedType.
Scope:
- This does not cover types associated with template names introduced by
using declarations. A future patch should introduce a sugar TemplateName
variant for this. (CTAD deduced types fall under this)
- There are enough AST matchers to fix the in-tree clang-tidy tests and
probably any other matchers, though more may be useful later.
Caveats:
- This changes a fairly common pattern in the AST people may depend on matching.
Previously, typeLoc(loc(recordType())) matched whether a struct was
referred to by its original scope or introduced via using-decl.
Now, the using-decl case is not matched, and needs a separate matcher.
This is similar to the case of typedefs but nevertheless both adds
complexity and breaks existing code.
Differential Revision: https://reviews.llvm.org/D114251
WG14 adopted the _ExtInt feature from Clang for C23, but renamed the
type to be _BitInt. This patch does the vast majority of the work to
rename _ExtInt to _BitInt, which accounts for most of its size. The new
type is exposed in older C modes and all C++ modes as a conforming
extension. However, there are functional changes worth calling out:
* Deprecates _ExtInt with a fix-it to help users migrate to _BitInt.
* Updates the mangling for the type.
* Updates the documentation and adds a release note to warn users what
is going on.
* Adds new diagnostics for use of _BitInt to call out when it's used as
a Clang extension or as a pre-C23 compatibility concern.
* Adds new tests for the new diagnostic behaviors.
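A brief example of the user-visible behavior described above (the widths are arbitrary):

  _BitInt(37) Wide = 1;         // conforming in C23; a Clang extension elsewhere
  unsigned _BitInt(7) Tiny = 0;
  _ExtInt(37) Old = 1;          // now deprecated, with a fix-it suggesting _BitInt(37)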
I want to call out the ABI break specifically. We do not believe that
this break will cause a significant imposition for early adopters of
the feature, and so this is being done as a full break. If it turns out
there are critical uses where recompilation is not an option for some
reason, we can consider using ABI tags to ease the transition.
Problem:
A PCM file includes references to all module maps used in the compilation that created it. This leads to PCM rebuilds in distributed compilations, as some module maps could be missing in an isolated compilation. (For example, in our distributed build system we create a temp folder for every compilation with only the modules and headers that are needed for that particular command.)
Solution:
Add only the affecting module map files to a PCM file.
Reviewed By: rsmith
Differential Revision: https://reviews.llvm.org/D106876
For each selector encountered in the source code, we need to load
selectors from the imported modules and check that we are calling a
selector with compatible types.
At the moment, for each module we are storing methods declared in the
headers belonging to this module and methods from the transitive closure
of imported modules. When a module is imported by a few other modules,
methods from the shared module are duplicated in each importer. As a
result, we can end up with lots of identical methods that we try to add
to the global method pool. Doing this duplicate work is useless and
relatively expensive.
Avoid processing duplicate methods by storing in each module only its
own methods and not storing methods from dependencies. Collect methods
from dependencies by walking the graph of module dependencies.
The issue was discovered and reported by Richard Howell. He has done the
hard work for this fix as he has investigated and provided a detailed
explanation of the performance problem.
Differential Revision: https://reviews.llvm.org/D110123
To reduce the number of explicit builds of a single module, we can try to squash multiple occurrences of the module with different command-lines (and context hashes) by removing benign command-line options. The greatest contributors to benign differences between command-lines are the header search paths.
In this patch, the lookup cache in `HeaderSearch` is used to identify paths that were actually used when implicitly building the module during scanning. This information is serialized into the unhashed control block of the implicitly-built PCM. The dependency scanner then loads this and may use it to prune the header search paths before computing the context hash of the module and generating the command-line.
We could also prune the header search paths when serializing `HeaderSearchOptions` into the PCM. That way, we could do it only once instead of on every load of the PCM file by the dependency scanner. However, that would result in a PCM file whose contents don't produce the same context hash as the original build, which would probably be highly surprising.
There is an alternative approach to storing extra information into the PCM: wire up preprocessor callbacks to capture the used header search paths on-the-fly during preprocessing of modularized headers (similar to what we currently do for the main source file and textual headers). Right now, that's not compatible with the fact that we do an actual implicit build producing PCM files during dependency scanning. The second run of dependency scanner loads the PCM from the first run, skipping the preprocessing altogether, which would result in different results between runs. We can revisit this approach when we stop building implicitly during dependency scanning.
Depends on D102923.
Reviewed By: dexonsmith
Differential Revision: https://reviews.llvm.org/D102488
This is part of a patch series working towards the ability to make
SourceLocation into a 64-bit type to handle larger translation units.
NFC: this patch introduces typedefs for the integer type used by
SourceLocation and makes all the boring changes to use the typedefs
everywhere, but for the moment, they are unconditionally defined to
uint32_t.
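A minimal sketch of the shape of the change (the identifier below is hypothetical, not necessarily the name introduced by the patch): the raw encoding type is spelled in one place so a later change can widen it to 64 bits.

  #include <cstdint>

  using SourceLocationUIntTy = uint32_t; // unconditionally 32-bit for now

  struct SourceLocationSketch {
    SourceLocationUIntTy ID = 0;
    SourceLocationUIntTy getRawEncoding() const { return ID; }
  };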
Patch originally by Mikhail Maltsev.
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D105492