This change-set puts the 93d08acaac functionality
under an -add-omp-offload-notes switch that is OFF by default.
The CUDA toolchain is not able to handle ELF images with LLVMOMPOFFLOAD
notes for an unknown reason (see https://reviews.llvm.org/D99551#2950272).
I am disabling the ELF notes embedding until the CUDA issue is triaged and resolved.
Differential Revision: https://reviews.llvm.org/D108246
Otherwise it fails on Ubuntu Bionic with:
```
scan-build-py-14: Run 'scan-view /tmp/scan-build-2021-08-09-09-14-36-765350-nx9s888s' to examine bug reports.
scan-build-py-14: Internal error.
Traceback (most recent call last):
File "/usr/lib/llvm-14/lib/libscanbuild/__init__.py", line 125, in wrapper
return function(*args, **kwargs)
File "/usr/lib/llvm-14/lib/libscanbuild/analyze.py", line 72, in scan_build
number_of_bugs = document(args)
File "/usr/lib/llvm-14/lib/libscanbuild/report.py", line 35, in document
for bug in read_bugs(args.output, html_reports_available):
File "/usr/lib/llvm-14/lib/libscanbuild/report.py", line 282, in read_bugs
for bug in parser(bug_file):
File "/usr/lib/llvm-14/lib/libscanbuild/report.py", line 421, in parse_bug_html
for line in handler.readlines():
File "/usr/lib/python3.6/encodings/ascii.py", line 26, in decode
return codecs.ascii_decode(input, self.errors)[0]
UnicodeDecodeError: 'ascii' codec can't decode byte 0xc3 in position 3360: ordinal not in range(128)
scan-build-py-14: Please run this command again and turn on verbose mode (add '-vvvv' as argument).
```
I suspect it is caused by a problem in Python 3.6.
Reviewed By: phosek, isthismyaccount
Differential Revision: https://reviews.llvm.org/D107887
The patch adds ELF notes into SHT_NOTE sections of ELF offload images
passed to clang-offload-wrapper.
The new notes use a null-terminated "LLVMOMPOFFLOAD" note name.
There are currently three types of notes:
- VERSION: a string (not null-terminated) identifying the version of the
  ELF offload image structure. The current version '1.0' does not put any
  restrictions on the structure of the image. If we ever need to come up with
  a common structure for ELF offload images (e.g. to be able to analyze the
  images in libomptarget in some standard way), then we will introduce new versions.
- PRODUCER: a vendor-specific name of the producing toolchain.
  Upstream LLVM uses "LLVM" (not null-terminated).
- PRODUCER_VERSION: a vendor-specific version of the producing toolchain.
  Upstream LLVM uses LLVM_VERSION_STRING with an optional <space> LLVM_REVISION.
None of the three notes is currently mandatory.
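For readers unfamiliar with the SHT_NOTE encoding, below is a minimal, self-contained sketch of how one such record is laid out. The numeric note type values and the helper itself are illustrative assumptions, not the actual clang-offload-wrapper code.
```
// Illustration of the generic SHT_NOTE record encoding used for these notes.
// The note type IDs and this helper are assumptions for illustration only.
#include <cstdint>
#include <cstring>
#include <string>
#include <vector>

// Hypothetical type IDs for the three LLVMOMPOFFLOAD notes.
enum OffloadNoteType : uint32_t {
  NT_VERSION = 1,          // descriptor e.g. "1.0" (no terminating NUL)
  NT_PRODUCER = 2,         // descriptor e.g. "LLVM"
  NT_PRODUCER_VERSION = 3, // descriptor e.g. "13.0.0" or "13.0.0 <revision>"
};

// Serialize one note record: a 12-byte header, the NUL-terminated name, then
// the raw descriptor, with name and descriptor padded to 4-byte boundaries.
std::vector<uint8_t> makeOffloadNote(uint32_t Type, const std::string &Desc) {
  const char Name[] = "LLVMOMPOFFLOAD";  // note name, NUL included
  const uint32_t NameSz = sizeof(Name);  // 15 bytes before padding
  const uint32_t DescSz = Desc.size();   // descriptor is not NUL-terminated
  auto Pad = [](uint32_t N) { return (N + 3) & ~3u; };

  std::vector<uint8_t> Out(12 + Pad(NameSz) + Pad(DescSz), 0);
  std::memcpy(Out.data() + 0, &NameSz, 4); // n_namesz
  std::memcpy(Out.data() + 4, &DescSz, 4); // n_descsz
  std::memcpy(Out.data() + 8, &Type, 4);   // n_type
  std::memcpy(Out.data() + 12, Name, NameSz);
  std::memcpy(Out.data() + 12 + Pad(NameSz), Desc.data(), DescSz);
  return Out;
}
```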
Differential Revision: https://reviews.llvm.org/D99551
Only the bare name is completed, with no args.
For args to be useful we need arg names. These *are* in the tablegen but
not currently emitted in a usable form, so this is left as future work.
C++11, C2x, GNU, declspec, and MS syntaxes are supported, with the appropriate
attribute spellings suggested for each.
`#pragma clang attribute` is supported but not terribly useful, as we
only reach completion if parens are balanced (i.e. the line is not truncated).
There's no filtering of which attributes might make sense in this
grammatical context (e.g. attached to a function). In a code-completion context
this is hard to do, and it would only work in a few cases :-(
There's also no filtering by langopts: this is because currently the
only way of checking is to try to produce diagnostics, which requires a
valid ParsedAttr, and that is hard to get.
This should be fairly simple to fix but requires some tablegen changes
to expose the logic without the side-effect.
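As a rough illustration (these declarations are made up, not taken from the patch or its tests), the supported spellings at which attribute names can now be completed look like this:
```
// Illustrative only: one example attribute per supported spelling; the
// completer suggests attribute names appropriate to each syntax.  Compiling
// the __declspec line requires -fdeclspec or -fms-extensions.
[[nodiscard]] int parseThing();                  // C++11 / C2x [[...]] spelling
__attribute__((always_inline)) inline void go(); // GNU spelling
__declspec(dllexport) void apiEntry();           // declspec / MS spelling
#pragma clang attribute push (__attribute__((annotate("example"))), apply_to = function)
void annotatedByPragma();
#pragma clang attribute pop
```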
Differential Revision: https://reviews.llvm.org/D107696
Linking `libclang.so` is currently broken on Solaris:
ld: fatal: option --version-script requires option -z gnu-version-script-compat to be specified
While Solaris `ld` supports a considerable subset of `--version-script`,
there are some elements of the syntax that aren't supported.
The fix is equivalent to D78510 <https://reviews.llvm.org/D78510>.
Additionally, the use of C-style comments is a GNU extension
that can easily be avoided by using `#` as the comment character, which is
supported by GNU `ld`, `gold`, and `lld`.
Tested on `amd64-pc-solaris2.11`, `sparcv9-sun-solaris2.11`,
`x86_64-pc-linux-gnu`.
Differential Revision: https://reviews.llvm.org/D107559
The LLVMLineEditor library is part of the LLVM dylib. Move it into
LLVM_LINK_COMPONENTS to avoid duplicate linking when the dylib is being
used. This fixes building a standalone clang against an installed LLVM
without static libraries.
Differential Revision: https://reviews.llvm.org/D107558
Use `clang_target_link_libraries` to avoid duplicate libraries when
the same symbol is provided both by a static library and a larger
dylib, fixing linking with win32 dylibs. This fixes errors like
these:
ld.lld: error: duplicate symbol: llvm::createStringError(std::__1::error_code, char const*)
>>> defined at libLLVMSupport.a(Error.cpp.obj)
>>> defined at libLLVM-14git.dll
This matches how other clang tools declare their dependencies.
Differential Revision: https://reviews.llvm.org/D107231
This function is marked with CINDEX_LINKAGE, but was never added to the
export list / linker script.
Reviewed By: jrtc27
Differential Revision: https://reviews.llvm.org/D106974
There are some platforms that might not have version script support;
don't try to use a version script on those.
Reviewed By: MaskRay, hubert.reinterpretcast
Differential Revision: https://reviews.llvm.org/D106914
We have the basic infrastructure in place. We can recover from simple errors
(recovering from errors in template instantiations is not yet supported). It
looks like we are in a reasonably functional state for LLVM 13.
Differential revision: https://reviews.llvm.org/D106813
The changes made in 0cf37a3b0617457daaed3224373ffa07724f8482 are not
compatible with gold, which does not seem to support a symbol version
with the name `local`.
This is part of a patch series working towards the ability to make
SourceLocation into a 64-bit type to handle larger translation units.
NFC: this patch introduces typedefs for the integer type used by
SourceLocation and makes all the boring changes to use the typedefs
everywhere, but for the moment they are unconditionally defined as
uint32_t.
Patch originally by Mikhail Maltsev.
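A minimal sketch of the idea follows; it is a simplified stand-in for the real clang/Basic/SourceLocation.h, not the actual patch:
```
// Simplified stand-in: the raw encoding type is hidden behind typedefs so a
// later patch can widen it without touching every user again.
#include <cstdint>

class SourceLocation {
public:
  using UIntTy = uint32_t; // would become uint64_t for 64-bit SourceLocation
  using IntTy = int32_t;

  UIntTy getRawEncoding() const { return ID; }
  static SourceLocation getFromRawEncoding(UIntTy Encoding) {
    SourceLocation L;
    L.ID = Encoding;
    return L;
  }

private:
  UIntTy ID = 0; // users now spell this type via the typedefs, not uint32_t
};
```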
Reviewed By: tmatheson
Differential Revision: https://reviews.llvm.org/D105492
We can build it with -Werror=global-constructors now. This helps
in situations where libSupport is embedded as a shared library,
potentially with a dlopen/dlclose scenario, and when command-line
parsing or other facilities may not be involved. Avoiding the
implicit construction of these cl::opt globals can avoid double-registration
issues and other kinds of unwanted behavior.
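For context, one common way to achieve this is sketched below; the flag name is made up and this is not necessarily the exact pattern the patch uses:
```
// One way to avoid a dynamically initialized global: construct the cl::opt
// lazily behind an accessor instead of at load time.  "use-fancy-feature" is
// a hypothetical flag.
#include "llvm/Support/CommandLine.h"

// Before: a namespace-scope cl::opt runs its constructor whenever the shared
// library is loaded, even if command-line parsing never happens.
//   static llvm::cl::opt<bool> UseFancyFeature("use-fancy-feature",
//                                              llvm::cl::init(false));

// After: no global constructor.  Note that the option only registers itself
// the first time the accessor runs, so it must be touched before
// cl::ParseCommandLineOptions if it should be settable from the command line.
static llvm::cl::opt<bool> &useFancyFeatureOpt() {
  static llvm::cl::opt<bool> Opt("use-fancy-feature", llvm::cl::init(false));
  return Opt;
}

bool shouldUseFancyFeature() { return useFancyFeatureOpt(); }
```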
Reviewed By: lattner, jpienaar
Differential Revision: https://reviews.llvm.org/D105959
We can build it with -Werror=global-constructors now. This helps
in situations where libSupport is embedded as a shared library,
potentially with a dlopen/dlclose scenario, and when command-line
parsing or other facilities may not be involved. Avoiding the
implicit construction of these cl::opt globals can avoid double-registration
issues and other kinds of unwanted behavior.
Reviewed By: lattner, jpienaar
Differential Revision: https://reviews.llvm.org/D105959
We can build it with -Werror=global-constructors now. This helps
in situations where libSupport is embedded as a shared library,
potentially with a dlopen/dlclose scenario, and when command-line
parsing or other facilities may not be involved. Avoiding the
implicit construction of these cl::opt globals can avoid double-registration
issues and other kinds of unwanted behavior.
Reviewed By: lattner, jpienaar
Differential Revision: https://reviews.llvm.org/D105959
Original commit message:
[clang-repl] Implement partial translation units and error recovery.
https://reviews.llvm.org/D96033 contained a discussion regarding efficient
modeling of error recovery. @rjmccall has outlined the key ideas:
Conceptually, we can split the translation unit into a sequence of partial
translation units (PTUs). Every declaration will be associated with a unique PTU
that owns it.
The first key insight here is that the owning PTU isn't always the "active"
(most recent) PTU, and it isn't always the PTU that the declaration
"comes from". A new declaration (that isn't a redeclaration or specialization of
anything) does belong to the active PTU. A template specialization, however,
belongs to the most recent PTU of all the declarations in its signature - mostly
that means that it can be pulled into a more recent PTU by its template
arguments.
The second key insight is that processing a PTU might extend an earlier PTU.
Rolling back the later PTU shouldn't throw that extension away. For example, if
the second PTU defines a template, and the third PTU requires that template to
be instantiated at float, that template specialization is still part of the
second PTU. Similarly, if the fifth PTU uses an inline function belonging to the
fourth, that definition still belongs to the fourth. When we go to emit code in
a new PTU, we map each declaration we have to emit back to its owning PTU and
emit it in a new module for just the extensions to that PTU. We keep track of
all the modules we've emitted for a PTU so that we can unload them all if we
decide to roll it back.
Most declarations/definitions will only refer to entities from the same or
earlier PTUs. However, it is possible (primarily by defining a
previously-declared entity, but also through templates or ADL) for an entity
that belongs to one PTU to refer to something from a later PTU. We will have to
keep track of this and prevent unwinding to later PTU when we recognize it.
Fortunately, this should be very rare; and crucially, we don't have to do the
bookkeeping for this if we've only got one PTU, e.g. in normal compilation.
Otherwise, PTUs after the first just need to record enough metadata to be able
to revert any changes they've made to declarations belonging to earlier PTUs,
e.g. to redeclaration chains or template specialization lists.
It should even eventually be possible for PTUs to provide their own slab
allocators which can be thrown away as part of rolling back the PTU. We can
maintain a notion of the active allocator and allocate things like Stmt/Expr
nodes in it, temporarily changing it to the appropriate PTU whenever we go to do
something like instantiate a function template. More care will be required when
allocating declarations and types, though.
We would want the PTU to be efficiently recoverable from a Decl; I'm not sure
how best to do that. An easy option that would cover most declarations would be
to make multiple TranslationUnitDecls and parent the declarations appropriately,
but I don't think that's good enough for things like member function templates,
since an instantiation of that would still be parented by its original class.
Maybe we can work this into the DC chain somehow, like how lexical DCs are.
We add a different kind of translation unit `TU_Incremental` which is a
complete translation unit that we might nonetheless incrementally extend later.
Because it is complete (and we might want to generate code for it), we do
perform template instantiation, but because it might be extended later, we don't
warn if it declares or uses undefined internal-linkage symbols.
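To make the bookkeeping concrete, here is a hypothetical sketch of a per-PTU record and the rollback step; the real clang-repl types differ in detail and these names are stand-ins:
```
// Hypothetical sketch of per-PTU bookkeeping; TranslationUnitDecl and Module
// stand in for the corresponding clang/LLVM classes.
#include <vector>

struct TranslationUnitDecl; // owns the declarations parsed for one PTU
struct Module;              // code emitted for one PTU (or an extension of it)

struct PartialTranslationUnit {
  TranslationUnitDecl *TUPart = nullptr;
  Module *TheModule = nullptr; // unloaded wholesale if the PTU is rolled back
};

class IncrementalState {
  std::vector<PartialTranslationUnit> PTUs; // in parse order

public:
  // Error recovery: drop the most recent PTU and undo its effects on
  // redeclaration chains and lookup tables (the undo itself is not shown).
  void rollBackLastPTU() {
    if (!PTUs.empty())
      PTUs.pop_back();
  }
};
```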
This patch teaches clang-repl how to recover from errors by disconnecting the
most recent PTU and updating the primary PTU lookup tables. For instance:
```
./clang-repl
clang-repl> int i = 12; error;
In file included from <<< inputs >>>:1:
input_line_0:1:13: error: C++ requires a type specifier for all declarations
int i = 12; error;
^
error: Parsing failed.
clang-repl> int i = 13; extern "C" int printf(const char*,...);
clang-repl> auto r1 = printf("i=%d\n", i);
i=13
clang-repl> quit
```
Differential revision: https://reviews.llvm.org/D104918
This reverts commit 6775fc6ffa.
It also reverts "[lldb] Fix compilation by adjusting to the new ASTContext signature."
This reverts commit 03a3f86071.
We see some failures on the lldb infrastructure; these changes might play a role
in them. Let's revert for now and see if the bots become green.
Ref: https://reviews.llvm.org/D104918
https://reviews.llvm.org/D96033 contained a discussion regarding efficient
modeling of error recovery. @rjmccall has outlined the key ideas:
Conceptually, we can split the translation unit into a sequence of partial
translation units (PTUs). Every declaration will be associated with a unique PTU
that owns it.
The first key insight here is that the owning PTU isn't always the "active"
(most recent) PTU, and it isn't always the PTU that the declaration
"comes from". A new declaration (that isn't a redeclaration or specialization of
anything) does belong to the active PTU. A template specialization, however,
belongs to the most recent PTU of all the declarations in its signature - mostly
that means that it can be pulled into a more recent PTU by its template
arguments.
The second key insight is that processing a PTU might extend an earlier PTU.
Rolling back the later PTU shouldn't throw that extension away. For example, if
the second PTU defines a template, and the third PTU requires that template to
be instantiated at float, that template specialization is still part of the
second PTU. Similarly, if the fifth PTU uses an inline function belonging to the
fourth, that definition still belongs to the fourth. When we go to emit code in
a new PTU, we map each declaration we have to emit back to its owning PTU and
emit it in a new module for just the extensions to that PTU. We keep track of
all the modules we've emitted for a PTU so that we can unload them all if we
decide to roll it back.
Most declarations/definitions will only refer to entities from the same or
earlier PTUs. However, it is possible (primarily by defining a
previously-declared entity, but also through templates or ADL) for an entity
that belongs to one PTU to refer to something from a later PTU. We will have to
keep track of this and prevent unwinding to later PTU when we recognize it.
Fortunately, this should be very rare; and crucially, we don't have to do the
bookkeeping for this if we've only got one PTU, e.g. in normal compilation.
Otherwise, PTUs after the first just need to record enough metadata to be able
to revert any changes they've made to declarations belonging to earlier PTUs,
e.g. to redeclaration chains or template specialization lists.
It should even eventually be possible for PTUs to provide their own slab
allocators which can be thrown away as part of rolling back the PTU. We can
maintain a notion of the active allocator and allocate things like Stmt/Expr
nodes in it, temporarily changing it to the appropriate PTU whenever we go to do
something like instantiate a function template. More care will be required when
allocating declarations and types, though.
We would want the PTU to be efficiently recoverable from a Decl; I'm not sure
how best to do that. An easy option that would cover most declarations would be
to make multiple TranslationUnitDecls and parent the declarations appropriately,
but I don't think that's good enough for things like member function templates,
since an instantiation of that would still be parented by its original class.
Maybe we can work this into the DC chain somehow, like how lexical DCs are.
We add a different kind of translation unit `TU_Incremental` which is a
complete translation unit that we might nonetheless incrementally extend later.
Because it is complete (and we might want to generate code for it), we do
perform template instantiation, but because it might be extended later, we don't
warn if it declares or uses undefined internal-linkage symbols.
This patch teaches clang-repl how to recover from errors by disconnecting the
most recent PTU and updating the primary PTU lookup tables. For instance:
```
./clang-repl
clang-repl> int i = 12; error;
In file included from <<< inputs >>>:1:
input_line_0:1:13: error: C++ requires a type specifier for all declarations
int i = 12; error;
^
error: Parsing failed.
clang-repl> int i = 13; extern "C" int printf(const char*,...);
clang-repl> auto r1 = printf("i=%d\n", i);
i=13
clang-repl> quit
```
Differential revision: https://reviews.llvm.org/D104918
This reverts commit 3ec88ca60b, which reverted e386871e1d due to an ASan build
failure.
This patch removes the newlines in the test case which seem to introduce the
failure.
Differential revision: https://reviews.llvm.org/D104898
This change is intended as initial setup. The plan is to add
more semantic checks later. I plan to update the documentation
as more semantic checks are added (instead of documenting the
details up front). Most of the code closely mirrors that for
the Swift calling convention. Three places are marked as
[FIXME: swiftasynccc]; those will be addressed once the
corresponding convention is introduced in LLVM.
Reviewed By: rjmccall
Differential Revision: https://reviews.llvm.org/D95561
C++23 will make these conversions ambiguous, so fix them to make the
codebase forward-compatible with C++23. (A follow-up change I've made
will make this ambiguous/invalid even before C++23, so we don't regress
this; it generally improves the code anyway.)
This adds a new llvm::thread class with the same interface as std::thread
except there is an extra constructor that allows us to set the new thread's
stack size. On Darwin even the default size is boosted to 8MB to match the main
thread.
It also switches all users of the older C-style `llvm_execute_on_thread` API
family over to `llvm::thread` followed by either a `detach` or `join` call and
removes the old API.
Moved definition of DefaultStackSize into the .cpp file to hopefully
fix the build on some (GCC-6?) machines.
This adds a new llvm::thread class with the same interface as std::thread
except there is an extra constructor that allows us to set the new thread's
stack size. On Darwin even the default size is boosted to 8MB to match the main
thread.
It also switches all users of the older C-style `llvm_execute_on_thread` API
family over to `llvm::thread` followed by either a `detach` or `join` call and
removes the old API.
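A minimal usage sketch follows; the exact spelling of the stack-size parameter is an assumption based on the description above, not a quotation of the header:
```
// Minimal usage sketch.  It assumes the stack size is an optional byte count
// passed as the first constructor argument (spelled here as
// std::optional<unsigned>); the actual header may spell this differently.
#include "llvm/Support/thread.h"
#include <optional>

void runWithBigStack() {
  // Request an 8 MiB stack instead of the platform default.
  llvm::thread Worker(std::optional<unsigned>(8u << 20), [] {
    // ... deeply recursive work that benefits from the larger stack ...
  });
  Worker.join(); // as with std::thread, either join() or detach()
}
```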
This is an ELF-specific option which isn't supported for Windows/MinGW
targets, even if the MinGW linker otherwise uses an ld.bfd-like linker
interface.
Differential Revision: https://reviews.llvm.org/D105148
This patch adds support for unbundling an archive file. It takes an
archive file along with a set of offload targets as input.
The output is a device-specific archive for each given offload target.
The input archive contains code objects bundled using
clang-offload-bundler. Each generated device-specific archive contains
a set of device code object files named
<Parent Bundle Name>-<CodeObject-GPUArch>.
Entries in the input archive can be of any binary type supported by
clang-offload-bundler, such as *.bc. Output archives will
contain files of the same type.
Example Usage:
clang-offload-bundler --unbundle --inputs=lib-generic.a -type=a
-targets=openmp-amdgcn-amdhsa--gfx906,openmp-amdgcn-amdhsa--gfx908
-outputs=devicelib-gfx906.a,deviceLib-gfx908.a
Reviewed By: jdoerfert, yaxunl
Differential Revision: https://reviews.llvm.org/D93525
I find that as I develop I'm moving between many different languages (C++, C#, JavaScript) all the time. As I move between the file types I like to keep `clang-format` as my formatting tool of choice (hence why I initially added C# support in {D58404}). I know those other languages have their own tools, but I'd have to learn them all, work out how to configure them, and they may or may not have integration into my IDE or source control.
I am increasingly finding that I'm editing JSON files as part of my daily work, and my editor and git commit hooks are just not set up to run [[ https://stedolan.github.io/jq/ | jq ]], so I tend to go to [[ https://jsonformatter.curiousconcept.com/ | JSON Formatter ]] and copy and paste back and forth to get nicely formatted JSON. This is a painful process and I'd like a new one that causes me much less friction.
This has come up from time to time:
{D10543}
https://stackoverflow.com/questions/35856565/clang-format-a-json-file
https://bugs.llvm.org/show_bug.cgi?id=18699
I would like to stop having to do that and have JSON formatting as a first-class
clang-format-supported `Language` (even if it has minimal style settings at present).
This revision adds support for formatting JSON using LLVM's built-in JSON
serialization library, with control at present limited to the indentation level.
This adds an additional Language to the .clang-format file to separate these
settings from those of your other supported languages.
Reviewed By: HazardyKnusperkeks
Differential Revision: https://reviews.llvm.org/D93528
This is mostly a mechanical change, but a testcase that contains
parts of the StringRef class (clang/test/Analysis/llvm-conventions.cpp)
isn't touched.
A new revision identical to https://reviews.llvm.org/D101139.
The parent revision of the aforementioned revision seems to cause pre-merge checks to fail opaquely; seeing if creating a new revision will work.
Reviewed By: phosek
Differential Revision: https://reviews.llvm.org/D104138
Given an invalid SourceLocation, translateSourceLocation will call
clang_getNullLocation, and then do nothing with the result. But
clang_getNullLocation has no side effects: it just constructs and
returns a null CXSourceLocation value.
Surely the intention was to //return// that null CXSourceLocation to
the caller, instead of throwing it away and pressing on anyway.
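The shape of the fix, paraphrased with stand-in definitions rather than the verbatim libclang code:
```
// Paraphrase with stand-in types: the null location is now returned instead
// of being discarded.
struct CXSourceLocation { const void *ptr_data[2]; unsigned int_data; };
struct SourceLocation {
  bool Valid = false;
  bool isValid() const { return Valid; }
};

CXSourceLocation clang_getNullLocation() { return CXSourceLocation{}; }

CXSourceLocation translateSourceLocation(const SourceLocation &Loc) {
  if (!Loc.isValid())
    return clang_getNullLocation(); // before the fix, this result was dropped
  CXSourceLocation Result{};
  // ... fill in Result from the valid Loc, as before ...
  return Result;
}
```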
Reviewed By: miyuki
Differential Revision: https://reviews.llvm.org/D104442
This patch moves enabling system header deps from `clang-scan-deps` into the `DependencyScanning` library. This will make it easier to preserve semantics of the original TU command-line for modular dependencies (see D104036).
Reviewed By: arphaman
Differential Revision: https://reviews.llvm.org/D104033
This moves another piece of logic specific to `clang-scan-deps` into the `DependencyScanning` library. This makes it easier to check how the original command-line looked like in the library and will enable the library to stop inventing `-Wno-error` for modular dependencies (see D104036).
Reviewed By: arphaman
Differential Revision: https://reviews.llvm.org/D104031
The `clang-scan-deps` tool has some logic that parses and modifies the original Clang command-line. The goal is to setup `DependencyOutputOptions` by injecting `-M -MT <target>` and prevent the creation of output files.
This patch moves the logic into the `DependencyScanning` library, and uses the parsed `CompilerInvocation` instead of the raw command-line. The code is simpler and can be used from the C++ API as well.
The `-o /dev/null` arguments are not necessary, since the `DependencyScanning` library only runs a preprocessing action, so there's no way it'll produce an actual object file.
Related: The `-M` argument implies `-w`, which would appear on the command-line of modular dependencies even though it was not on the original TU command line (see D104036).
Some related tests were updated.
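A rough sketch of what such an adjustment can look like on the parsed invocation; the field names follow clang::DependencyOutputOptions, but the exact set of tweaks made by the patch may differ:
```
// Sketch only: adjust the parsed invocation directly rather than injecting
// "-M -MT <target>" into the raw command line.
#include "clang/Frontend/CompilerInvocation.h"
#include "llvm/ADT/StringRef.h"

static void adjustForDepScanning(clang::CompilerInvocation &CI,
                                 llvm::StringRef MakeTarget) {
  clang::DependencyOutputOptions &Deps = CI.getDependencyOutputOpts();
  Deps.IncludeSystemHeaders = true;        // what -sys-header-deps would imply
  Deps.Targets = {MakeTarget.str()};       // what "-MT <target>" would set
  Deps.OutputFile.clear();                 // no dependency file is written
  CI.getFrontendOpts().OutputFile.clear(); // preprocess-only, no object output
}
```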
Reviewed By: arphaman
Differential Revision: https://reviews.llvm.org/D104030