Commit Graph

856 Commits

Author SHA1 Message Date
Saurabh Jha 696becbd13 [Matrix] Remove bitcast when casting between matrices of the same size
In matrix type casts, we were emitting a bitcast when the matrices had the same size. This was incorrect, and this patch fixes that.
Also added some new CodeGen tests for signed <-> unsigned conversions.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D101754
2021-05-03 15:31:43 +01:00
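
A minimal sketch of the kind of conversion affected by the change above, assuming the matrix extension is enabled with -fenable-matrix; the type and function names are illustrative:

```
typedef int   ix2x2_t __attribute__((matrix_type(2, 2)));
typedef float fx2x2_t __attribute__((matrix_type(2, 2)));

// Both matrix types have the same total size, but a bitcast would merely
// reinterpret the bits; the cast must convert each element from int to float.
fx2x2_t to_float(ix2x2_t M) {
  return (fx2x2_t)M;
}
```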
Yaxun (Sam) Liu c58a6a6fb4 [HIP] Fix device lib selection
Choose the optimized device library bitcode based on floating-point
options, for performance.

Reviewed by: Artem Belevich, Fangrui Song

Differential Revision: https://reviews.llvm.org/D101654
2021-05-01 20:31:11 -04:00
Hamza Mahfooz be2277fbf2
[Matrix] Support #pragma clang fp
From https://bugs.llvm.org/show_bug.cgi?id=49739:

Currently, `#pragma clang fp` is ignored for matrix types.

For the code below, the `contract` fast-math flag should be added to the generated `llvm.matrix.multiply` call and the `fadd`:

```
typedef float fx2x2_t __attribute__((matrix_type(2, 2)));

void foo(fx2x2_t &A, fx2x2_t &C, fx2x2_t &B) {
  #pragma clang fp contract(fast)
  C = A*B + C;
}
```

Reviewed By: fhahn, mibintc

Differential Revision: https://reviews.llvm.org/D100834
2021-04-22 11:45:34 +01:00
Saurabh Jha 71ab6c98a0
[Matrix] Implement C-style explicit type conversions for matrix types.
This implements C-style type conversions for matrix types, as specified
in clang/docs/MatrixTypes.rst.

Fixes PR47141.

Reviewed By: fhahn

Differential Revision: https://reviews.llvm.org/D99037
2021-04-10 11:48:41 +01:00
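
A small sketch of the C-style conversions introduced above, assuming -fenable-matrix; the typedef and function names are illustrative:

```
typedef double dx3x3_t __attribute__((matrix_type(3, 3)));
typedef int    ix3x3_t __attribute__((matrix_type(3, 3)));

// C-style casts between matrix types with the same dimensions perform an
// elementwise conversion, here double -> int.
ix3x3_t truncate(dx3x3_t M) {
  return (ix3x3_t)M;
}
```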
Saurabh Jha 7d8513b7f2 [clang] Move int <-> float scalar conversion to a separate function
As a prelude to https://reviews.llvm.org/D99037, move the int <-> float
conversion into a separate function that can be reused by the matrix cast
code.

Differential Revision: https://reviews.llvm.org/D100051
2021-04-07 12:34:16 -07:00
Simon Pilgrim 2901dc7575 Don't directly dereference getAs<> casts to avoid potential null dereferences. NFCI.
Replace with castAs<> which asserts the cast is valid.

Fixes a number of static analyzer warnings.
2021-04-06 12:24:19 +01:00
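
A hedged sketch of the pattern being replaced, using the clang AST API; the helper function names are illustrative:

```
#include "clang/AST/Expr.h"
#include "clang/AST/Type.h"

using namespace clang;

// getAs<> returns null when the type is not actually a VectorType, so
// dereferencing the result directly can crash on unexpected input.
static unsigned numEltsUnsafe(const Expr *E) {
  return E->getType()->getAs<VectorType>()->getNumElements();
}

// castAs<> asserts the cast is valid, documenting the invariant and
// silencing static-analyzer warnings about potential null dereferences.
static unsigned numEltsChecked(const Expr *E) {
  return E->getType()->castAs<VectorType>()->getNumElements();
}
```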
Simon Pilgrim 731b3d7664 [clang] Use Constant::getAllOnesValue helper. NFCI.
Avoid -1ULL which MSVC tends to complain about
2021-03-12 15:16:36 +00:00
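
A brief sketch of the preferred helper; the wrapper function is illustrative:

```
#include "llvm/IR/Constants.h"
#include "llvm/IR/Type.h"

using namespace llvm;

// Instead of spelling the constant as -1ULL (which MSVC tends to warn about),
// ask for an all-ones value of the given type directly.
static Constant *allOnes(Type *Ty) {
  return Constant::getAllOnesValue(Ty);
}
```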
David Blaikie bd2bdad19e void cast to suppress -Wunused-variable in non-asserts build 2021-03-11 17:51:31 -08:00
Florian Hahn c92ec0dd92
[Matrix] Add support for matrix-by-scalar division.
This patch extends the matrix spec to allow matrix-by-scalar division.

Originally support for `/` was left out to avoid ambiguity for the
matrix-matrix version of `/`, which could either be elementwise or
specified as matrix multiplication M1 * (1/M2).

For the matrix-scalar version, no ambiguity exists; `*` is also
an elementwise operation in that case. Matrix-by-scalar division
is commonly supported by systems such as MATLAB, Mathematica, and NumPy.

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D97857
2021-03-11 22:21:23 +00:00
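
A minimal sketch of the newly allowed operation, assuming -fenable-matrix; names are illustrative:

```
typedef float fx2x2_t __attribute__((matrix_type(2, 2)));

// Matrix-by-scalar division is elementwise; matrix / matrix remains invalid.
fx2x2_t scale_down(fx2x2_t M, float S) {
  return M / S;
}
```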
Bevin Hansson c4944a6f53 [Fixed Point] Add codegen for conversion between fixed-point and floating point.
The patch adds the required methods to FixedPointBuilder
for converting between fixed-point and floating point,
and uses them from Clang.

This depends on D54749.

Reviewed By: leonardchan

Differential Revision: https://reviews.llvm.org/D86632
2021-01-12 13:53:01 +01:00
Alan Phipps 9f2967bcfe [Coverage] Add support for Branch Coverage in LLVM Source-Based Code Coverage
This is an enhancement to LLVM Source-Based Code Coverage in clang to track how
many times individual branch-generating conditions are taken (evaluate to TRUE)
and not taken (evaluate to FALSE). Individual conditions may be combined into
larger boolean expressions using boolean logical operators. This functionality
is similar to what GCOV supports, except that it is closely anchored to the
ASTs.

Differential Revision: https://reviews.llvm.org/D84467
2021-01-05 09:51:51 -06:00
Juneyoung Lee 9b29610228 Use unary CreateShuffleVector if possible
As mentioned in D93793, there are quite a few places where unary `IRBuilder::CreateShuffleVector(X, Mask)` can be used
instead of `IRBuilder::CreateShuffleVector(X, Undef, Mask)`.
Let's update them.

Actually, it would have been more natural if the patches were made in this order:
(1) let them use unary CreateShuffleVector first
(2) update IRBuilder::CreateShuffleVector to use poison as a placeholder value (D93793)

The order is swapped, but in terms of correctness it is still fine.

Reviewed By: spatel

Differential Revision: https://reviews.llvm.org/D93923
2020-12-30 22:36:08 +09:00
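
A sketch of the cleanup described above, using the IRBuilder API; the helper and mask are illustrative:

```
#include "llvm/ADT/ArrayRef.h"
#include "llvm/IR/IRBuilder.h"

using namespace llvm;

// Broadcast lane 0 of a 4-element vector. The unary overload supplies the
// second shuffle operand implicitly (undef before D93793, poison after),
// replacing CreateShuffleVector(Vec, UndefValue::get(Vec->getType()), Mask).
static Value *splatLane0(IRBuilder<> &B, Value *Vec) {
  int Mask[] = {0, 0, 0, 0};
  return B.CreateShuffleVector(Vec, Mask);
}
```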
Joe Ellis dad07baf12 [clang][AArch64][SVE] Avoid going through memory for VLAT <-> VLST casts
This change makes use of the llvm.vector.extract intrinsic to avoid
going through memory when performing bitcasts between vector-length
agnostic types and vector-length specific types.

Depends on D91362

Reviewed By: c-rhodes

Differential Revision: https://reviews.llvm.org/D92761
2020-12-16 12:24:32 +00:00
Kevin P. Neal acd4950d4f [FPEnv] Correct constrained metadata in fp16-ops-strict.c
This test shows we're in some cases not getting strictfp information from
the AST. Correct that.

Differential Revision: https://reviews.llvm.org/D92596
2020-12-08 10:18:32 -05:00
Tim Northover c5978f42ec UBSAN: emit distinctive traps
Sometimes people get minimal crash reports after a UBSAN incident. This change
tags each trap with an integer representing the kind of failure encountered,
which can aid in tracking down the root cause of the problem.
2020-12-08 10:28:26 +00:00
Kevin P. Neal 2069403cdf [FPEnv] Use strictfp metadata in casting nodes
The strictfp metadata was added to the casting AST nodes in D85960, but
we aren't using that metadata yet. This patch adds that support.

In order to avoid lots of ad-hoc passing around of the strictfp bits I
updated the IRBuilder when moving from a function that has the Expr* to a
function that lacks it. I believe we should switch to this pattern to keep
the strictfp support from being overly invasive.

For the purpose of testing that we're picking up the right metadata, I
also made my tests use a pragma to make the AST's strictfp metadata not
match the global strictfp metadata. This exposes issues that we need to
deal with in subsequent patches, and I believe this is the right method
for almost all of our clang strictfp tests.

Differential Revision: https://reviews.llvm.org/D88913
2020-11-06 11:56:12 -05:00
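
A hedged sketch of the kind of scenario described above, assuming support for the MSVC-style float_control pragma; the exact pragma used in the tests may differ:

```
// The pragma puts the following code under strict FP-exception semantics even
// when the global (command-line) FP model is the default, so the int -> float
// cast should be emitted as a constrained conversion carrying strictfp metadata.
#pragma float_control(except, on)
float strict_cast(int X) {
  return (float)X;
}
```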
Alex Lorenz 701456b523 [darwin] add support for __isPlatformVersionAtLeast check for if (@available)
The __isPlatformVersionAtLeast routine is an implementation of the `if (@available)` check
that uses the _availability_version_check API on Darwin, which is supported on
macOS 10.15, iOS 13, tvOS 13 and watchOS 6.

Differential Revision: https://reviews.llvm.org/D90367
2020-11-02 16:28:09 -08:00
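
A brief sketch of a check that would lower to this routine on a Darwin target; the function is illustrative:

```
// Clang lowers this availability guard to a call to __isPlatformVersionAtLeast
// (falling back to the older check routine on OS versions that lack the
// _availability_version_check API).
void use_new_api() {
  if (__builtin_available(macOS 10.15, *)) {
    // Safe to call APIs introduced in macOS 10.15 here.
  }
}
```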
Bevin Hansson 9fa7f48459 [Fixed Point] Add fixed-point to floating point cast types and consteval.
Reviewed By: leonardchan

Differential Revision: https://reviews.llvm.org/D86631
2020-10-13 13:26:56 +02:00
Amy Kwan 2e7117f847 [PowerPC] Implement the 128-bit vec_[all|any]_[eq | ne | lt | gt | le | ge] builtins in Clang/LLVM
This patch implements the vec_[all|any]_[eq | ne | lt | gt | le | ge] builtins for vector signed/unsigned __int128.

Differential Revision: https://reviews.llvm.org/D87910
2020-09-23 16:49:40 -04:00
Cullen Rhodes 2ddf795e8c Reland "[CodeGen][AArch64] Support arm_sve_vector_bits attribute"
This relands D85743 with a fix for the test
CodeGen/attr-arm-sve-vector-bits-call.c that disables the new pass
manager with '-fno-experimental-new-pass-manager'. The test was failing due
to IR differences under the new pass manager, which broke the Fuchsia
builder [1]. Reverted in 2e7041f.

[1] http://lab.llvm.org:8011/builders/fuchsia-x86_64-linux/builds/10375

Original summary:

This patch implements codegen for the 'arm_sve_vector_bits' type
attribute, defined by the Arm C Language Extensions (ACLE) for SVE [1].
The purpose of this attribute is to define vector-length-specific (VLS)
versions of existing vector-length-agnostic (VLA) types.

VLSTs are represented as VectorType in the AST and fixed-length vectors
in the IR everywhere except in function args/return. Implemented in this
patch is codegen support for the following:

  * Implicit casting between VLA <-> VLS types.
  * Coercion of VLS types in function args/return.
  * Mangling of VLS types.

Casting is handled by the CK_BitCast operation, which has been extended
to support the two new vector kinds for fixed-length SVE predicate and
data vectors, where the cast is implemented through memory rather than a
bitcast which is unsupported. Implementing this as a normal bitcast
would require relaxing checks in LLVM to allow bitcasting between
scalable and fixed types. Another option was adding target-specific
intrinsics, although codegen support would need to be added for these
intrinsics. Given this, casting through memory seemed like the best
approach as it's supported today and existing optimisations may remove
unnecessary loads/stores, although there is room for improvement here.

Coercion of VLSTs in function args/return from fixed to scalable is
implemented through the AArch64 ABI in TargetInfo.

The VLA and VLS types are defined by the ACLE to map to the same
machine-level SVE vectors. VLS types are mangled in the same way as:

  __SVE_VLS<typename, unsigned>

where the first argument is the underlying variable-length type and the
second argument is the SVE vector length in bits. For example:

  #if __ARM_FEATURE_SVE_BITS==512
  // Mangled as 9__SVE_VLSIu11__SVInt32_tLj512EE
  typedef svint32_t vec __attribute__((arm_sve_vector_bits(512)));
  // Mangled as 9__SVE_VLSIu10__SVBool_tLj512EE
  typedef svbool_t pred __attribute__((arm_sve_vector_bits(512)));
  #endif

The latest ACLE specification (00bet5) does not contain details of this
mangling scheme; it will be specified in the next revision.  The
mangling scheme is otherwise defined in the appendices to the Procedure
Call Standard for the Arm Architecture; see [2] for more information.

[1] https://developer.arm.com/documentation/100987/latest
[2] https://github.com/ARM-software/abi-aa/blob/master/aapcs64/aapcs64.rst#appendix-c-mangling

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D85743
2020-08-28 15:57:09 +00:00
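
A small sketch of the implicit VLA <-> VLS conversions this enables, assuming the target compiles with SVE and -msve-vector-bits=512 so that __ARM_FEATURE_SVE_BITS is defined; names are illustrative:

```
#include <arm_sve.h>

#if __ARM_FEATURE_SVE_BITS == 512
typedef svint32_t fixed_int32_t __attribute__((arm_sve_vector_bits(512)));

// VLA -> VLS and VLS -> VLA conversions are implicit; under the hood the
// CK_BitCast is currently lowered through memory rather than a real bitcast.
fixed_int32_t to_fixed(svint32_t V) { return V; }
svint32_t to_scalable(fixed_int32_t V) { return V; }
#endif
```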
JF Bastien 82d29b397b Add an unsigned shift base sanitizer
It's not undefined behavior for an unsigned left shift to overflow (i.e. to
shift bits out), but it has been the source of bugs and exploits in certain
codebases in the past. As we do in other parts of UBSan, this patch adds a
dynamic checker which acts beyond UBSan and checks other sources of errors. The
option is enabled as part of -fsanitize=integer.

The flag is named -fsanitize=unsigned-shift-base, matching the existing
shift-base and shift-exponent flags.

<rdar://problem/46129047>

Differential Revision: https://reviews.llvm.org/D86000
2020-08-27 19:50:10 -07:00
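
A minimal sketch of what the new check diagnoses when built with -fsanitize=unsigned-shift-base (or -fsanitize=integer); the function is illustrative:

```
// Well-defined in C/C++, but the sanitizer reports a runtime error whenever
// set bits are shifted out of the left-hand operand.
unsigned shift_out(unsigned X) {
  return X << 28;  // flagged at run time if any of the top 28 bits of X are set
}
```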
Cullen Rhodes 2e7041fdc2 Revert "[CodeGen][AArch64] Support arm_sve_vector_bits attribute"
Test CodeGen/attr-arm-sve-vector-bits-call.c is failing on some builders
[1][2]. Reverting whilst I investigate.

[1] http://lab.llvm.org:8011/builders/fuchsia-x86_64-linux/builds/10375
[2] https://luci-milo.appspot.com/p/fuchsia/builders/ci/clang-linux-x64/b8870800848452818112

This reverts commit 42587345a3.
2020-08-27 21:31:05 +00:00
Cullen Rhodes 42587345a3 [CodeGen][AArch64] Support arm_sve_vector_bits attribute
This patch implements codegen for the 'arm_sve_vector_bits' type
attribute, defined by the Arm C Language Extensions (ACLE) for SVE [1].
The purpose of this attribute is to define vector-length-specific (VLS)
versions of existing vector-length-agnostic (VLA) types.

VLSTs are represented as VectorType in the AST and fixed-length vectors
in the IR everywhere except in function args/return. Implemented in this
patch is codegen support for the following:

  * Implicit casting between VLA <-> VLS types.
  * Coercion of VLS types in function args/return.
  * Mangling of VLS types.

Casting is handled by the CK_BitCast operation, which has been extended
to support the two new vector kinds for fixed-length SVE predicate and
data vectors, where the cast is implemented through memory rather than a
bitcast which is unsupported. Implementing this as a normal bitcast
would require relaxing checks in LLVM to allow bitcasting between
scalable and fixed types. Another option was adding target-specific
intrinsics, although codegen support would need to be added for these
intrinsics. Given this, casting through memory seemed like the best
approach as it's supported today and existing optimisations may remove
unnecessary loads/stores, although there is room for improvement here.

Coercion of VLSTs in function args/return from fixed to scalable is
implemented through the AArch64 ABI in TargetInfo.

The VLA and VLS types are defined by the ACLE to map to the same
machine-level SVE vectors. VLS types are mangled in the same way as:

  __SVE_VLS<typename, unsigned>

where the first argument is the underlying variable-length type and the
second argument is the SVE vector length in bits. For example:

  #if __ARM_FEATURE_SVE_BITS==512
  // Mangled as 9__SVE_VLSIu11__SVInt32_tLj512EE
  typedef svint32_t vec __attribute__((arm_sve_vector_bits(512)));
  // Mangled as 9__SVE_VLSIu10__SVBool_tLj512EE
  typedef svbool_t pred __attribute__((arm_sve_vector_bits(512)));
  #endif

The latest ACLE specification (00bet5) does not contain details of this
mangling scheme; it will be specified in the next revision.  The
mangling scheme is otherwise defined in the appendices to the Procedure
Call Standard for the Arm Architecture; see [2] for more information.

[1] https://developer.arm.com/documentation/100987/latest
[2] https://github.com/ARM-software/abi-aa/blob/master/aapcs64/aapcs64.rst#appendix-c-mangling

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D85743
2020-08-27 15:11:58 +00:00
Christopher Tetreault 19e883fc59 [SVE] Remove calls to VectorType::getNumElements from clang
Reviewed By: RKSimon

Differential Revision: https://reviews.llvm.org/D82582
2020-08-26 11:12:26 -07:00
Bevin Hansson 577f8b157a [Fixed Point] Add codegen for fixed-point shifts.
This patch adds codegen to Clang for fixed-point shift
operations.

Reviewed By: leonardchan

Differential Revision: https://reviews.llvm.org/D83294
2020-08-24 14:37:16 +02:00
Bevin Hansson 808ac54645 [Fixed Point] Use FixedPointBuilder to codegen fixed-point IR.
This changes the methods in CGExprScalar to use
FixedPointBuilder to generate IR for fixed-point
conversions and operations.

Since FixedPointBuilder emits padded operations slightly
differently than the original code, some tests change.

Reviewed By: leonardchan

Differential Revision: https://reviews.llvm.org/D86282
2020-08-24 14:37:07 +02:00
Bevin Hansson 1a995a0af3 [ADT] Move FixedPoint.h from Clang to LLVM.
This patch moves FixedPointSemantics and APFixedPoint
from Clang to LLVM ADT.

This will make it easier to use the fixed-point
classes in LLVM for constructing an IR builder for
fixed-point and for reusing the APFixedPoint class
for constant evaluation purposes.

RFC: http://lists.llvm.org/pipermail/llvm-dev/2020-August/144025.html

Reviewed By: leonardchan, rjmccall

Differential Revision: https://reviews.llvm.org/D85312
2020-08-20 10:29:45 +02:00
Bruno Ricci 473fbc90d1
[clang][NFC] Store a pointer to the ASTContext in ASTDumper and TextNodeDumper
In general there is no way to get to the ASTContext from most AST nodes
(Decls are one of the exceptions). This will be a problem when implementing
the rest of APValue::dump since we need the ASTContext to dump some kinds of
APValues.

The ASTContext* in ASTDumper and TextNodeDumper is not always non-null.
This is because we still want to be able to use the various dump() functions
in a debugger.

No functional changes intended.

Reverted in fcf4d5e449 since a few dump()
functions in lldb were missed.
2020-07-03 13:59:22 +01:00
Bruno Ricci fcf4d5e449
Revert "[clang][NFC] Store a pointer to the ASTContext in ASTDumper and TextNodeDumper"
This reverts commit aa7fd905e4.

I missed some dump() functions.
2020-07-02 19:40:09 +01:00
Bruno Ricci aa7fd905e4
[clang][NFC] Store a pointer to the ASTContext in ASTDumper and TextNodeDumper
In general there is no way to get to the ASTContext from most AST nodes
(Decls are one of the exceptions). This will be a problem when implementing
the rest of APValue::dump since we need the ASTContext to dump some kinds of
APValues.

The ASTContext* in ASTDumper and TextNodeDumper is not always
non-null. This is because we still want to be able to use the various
dump() functions in a debugger.

No functional changes intended.
2020-07-02 19:29:02 +01:00
Bevin Hansson fefa34faf5 [CodeGen] Use the common semantic for fixed-point codegen, not the result semantic.
Summary:
Using the result semantic is wrong in some cases, such as
unsigned fixed-point + signed integer. In this case, the
result semantic is unsigned and the common semantic is
signed.

Reviewers: leonardchan

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D82662
2020-06-29 16:22:29 +02:00
Melanie Blower f4aaed3bf1 Reland D81869 "Modify FPFeatures to use delta not absolute settings"
This reverts commit defd43a5b3,
with a correction to address the MSan report.

To solve https://bugs.llvm.org/show_bug.cgi?id=46166 where the
floating point settings in PCH files aren't compatible, rewrite
FPFeatures to use a delta in the settings rather than absolute settings.
With this patch, these floating point options can be benign.

Reviewers: rjmccall

Differential Revision: https://reviews.llvm.org/D81869
2020-06-27 01:34:57 -07:00
Melanie Blower defd43a5b3 Revert "Revert "Revert "Modify FPFeatures to use delta not absolute settings"""
This reverts commit 9518763d71.
Memory sanitizer fails in CGFPOptionsRAII::CGFPOptionsRAII dtor
2020-06-26 08:47:04 -07:00
Melanie Blower 9518763d71 Revert "Revert "Modify FPFeatures to use delta not absolute settings""
This reverts commit b55d723ed6.
Reapply Modify FPFeatures to use delta not absolute settings

To solve https://bugs.llvm.org/show_bug.cgi?id=46166 where the
floating point settings in PCH files aren't compatible, rewrite
FPFeatures to use a delta in the settings rather than absolute settings.
With this patch, these floating point options can be benign.

Reviewers: rjmccall

Differential Revision: https://reviews.llvm.org/D81869
2020-06-26 08:00:08 -07:00
Melanie Blower b55d723ed6 Revert "Modify FPFeatures to use delta not absolute settings"
This reverts commit 3a748cbf86.
I'm reverting this commit because I forgot to format the commit message
properly. Sorry for the thrash.
2020-06-26 07:52:57 -07:00
Melanie Blower 3a748cbf86 Modify FPFeatures to use delta not absolute settings 2020-06-26 07:41:09 -07:00
Tyker 51e4aa87e0 attempt to fix failing buildbots after 3bab88b7ba
Prevent IR-gen from emitting consteval declarations

Summary: with this patch, instead of emitting calls to consteval functions, IR-gen will emit a store of the already computed result.
2020-06-15 12:58:37 +02:00
Kirill Bobyrev 550c4562d1 Revert "Prevent IR-gen from emitting consteval declarations"
This reverts commit 3bab88b7ba.

This patch causes test failures:
http://lab.llvm.org:8011/builders/clang-cmake-armv7-quick/builds/17260
2020-06-15 12:14:15 +02:00
Tyker 3bab88b7ba Prevent IR-gen from emitting consteval declarations
Summary: with this patch, instead of emitting calls to consteval functions, IR-gen will emit a store of the already computed result.

Reviewers: rsmith

Reviewed By: rsmith

Subscribers: cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D76420
2020-06-15 10:47:14 +02:00
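
A short sketch of the effect described above (C++20); the names are illustrative:

```
consteval int square(int X) { return X * X; }

// The call is a constant expression, so IR-gen now stores the precomputed
// result (9) instead of emitting a call to square().
int get_nine() {
  return square(3);
}
```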
Akira Hatanaka c9a52de002 [CodeGen] Simplify the way lifetime of block captures is extended
Rather than pushing inactive cleanups for the block captures at the
entry of a full expression and activating them during the creation of
the block literal, just call pushLifetimeExtendedDestroy to ensure the
cleanups are popped at the end of the scope enclosing the block
expression.

rdar://problem/63996471

Differential Revision: https://reviews.llvm.org/D81624
2020-06-11 16:06:22 -07:00
John McCall 7fac1acc61 Set the LLVM FP optimization flags conservatively.
Functions can have local pragmas that override the global settings.
We set the flags eagerly based on global settings, but if we emit
an expression under the influence of a pragma, we clear the
appropriate flags from the function.

In order to avoid doing a ton of redundant work whenever we emit
an FP expression, configure the IRBuilder to default to global
settings, and only reconfigure it when we see an FP expression
that's not using the global settings.

Patch by Michele Scandale!

https://reviews.llvm.org/D80462
2020-06-11 18:16:41 -04:00
Arthur Eubanks ce7d3e1c55 Reland (again) D80966 [codeview] Put !heapallocsite on calls to operator new
Check that getDebugInfo() is not null, as in the first revision, before
calling getDebugInfo()->addHeapAllocSiteMetadata().
Otherwise this would crash on a new-expression in a default argument.

---

Clang marks calls to operator new as heap allocation sites, but the
operator declared at global scope returns a void pointer. There is no
explicit cast in the code, so the compiler has to write down the
allocated type itself.

Also generalize a cast to use CallBase, so that we mark heap alloc sites
when exceptions are enabled.

Differential Revision: https://reviews.llvm.org/D80966
2020-06-09 09:27:32 -07:00
Arthur Eubanks a92ce3b706 Revert "Reland D80966 [codeview] Put !heapallocsite on calls to operator new"
This reverts commit b6e143aa54.

Causes https://bugs.chromium.org/p/chromium/issues/detail?id=1092370#c5.
Will investigate and reland (again).
2020-06-08 12:49:41 -07:00
Fangrui Song fc935fc35b Reland D80979 [clang] Implement VectorType logic not operator
With a fix to use -triple %itanium_abi_triple

Differential Revision: https://reviews.llvm.org/D80979
2020-06-08 09:32:30 -07:00
Nico Weber abca3b7b2c Revert "[clang] Implement VectorType logic not operator."
This reverts commit a0de3335ed.
Breaks check-clang on Windows, see e.g.
https://reviews.llvm.org/D80979#2078750 (but fails on all
other Windows bots too).
2020-06-08 06:45:21 -04:00
Jun Ma a0de3335ed [clang] Implement VectorType logic not operator.
Differential Revision: https://reviews.llvm.org/D80979
2020-06-08 08:41:01 +08:00
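
A minimal sketch of the vector logical-not introduced above, using the GCC vector extension; the typedef and function are illustrative:

```
typedef int v4si __attribute__((vector_size(16)));

// Logical not applies elementwise: each lane becomes -1 (all ones) where the
// input lane was zero, and 0 otherwise, mirroring vector comparisons.
v4si logical_not(v4si V) {
  return !V;
}
```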
Fangrui Song b6e143aa54 Reland D80966 [codeview] Put !heapallocsite on calls to operator new
With a change to use `CGM.getCodeGenOpts().getDebugInfo() != codegenoptions::NoDebugInfo`
instead of `getDebugInfo()`,
to fix `Profile-<arch> :: instrprof-gcov-multithread_fork.test`

See CodeGenModule::CodeGenModule: `EmitGcovArcs || EmitGcovNotes` can
set `clang::CodeGen::CodeGenModule::DebugInfo`.

---

Clang marks calls to operator new as heap allocation sites, but the
operator declared at global scope returns a void pointer. There is no
explicit cast in the code, so the compiler has to write down the
allocated type itself.

Also generalize a cast to use CallBase, so that we mark heap alloc sites
when exceptions are enabled.

Differential Revision: https://reviews.llvm.org/D80966
2020-06-07 13:35:20 -07:00
Florian Hahn 4affc444b4 [Matrix] Implement * binary operator for MatrixType.
This patch implements the * binary operator for values of
MatrixType. It adds support for matrix * matrix, scalar * matrix and
matrix * scalar.

For the matrix-matrix case, the number of columns of the first operand
must match the number of rows of the second. For the scalar-matrix variants,
the element type of the matrix must match the scalar type.

Reviewers: rjmccall, anemet, Bigcheese, rsmith, martong

Reviewed By: rjmccall

Differential Revision: https://reviews.llvm.org/D76794
2020-06-07 11:11:27 +01:00
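
A minimal sketch of the supported forms, assuming -fenable-matrix; names are illustrative:

```
typedef float fx2x3_t __attribute__((matrix_type(2, 3)));
typedef float fx3x4_t __attribute__((matrix_type(3, 4)));
typedef float fx2x4_t __attribute__((matrix_type(2, 4)));

// matrix * matrix: inner dimensions must match (2x3 times 3x4 gives 2x4);
// matrix * scalar and scalar * matrix are elementwise.
fx2x4_t multiply_and_scale(fx2x3_t A, fx3x4_t B, float S) {
  return (A * B) * S;
}
```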
Douglas Yung 059ba74bb6 Revert "[codeview] Put !heapallocsite on calls to operator new"
This reverts commit 672ed53860.

This commit is hitting an assertion failure across multiple bots in the test:
Profile-<arch> :: instrprof-gcov-multithread_fork.test

Failing bots include:
http://lab.llvm.org:8011/builders/llvm-avr-linux/builds/2205
http://lab.llvm.org:8011/builders/clang-cmake-aarch64-lld/builds/8967
http://lab.llvm.org:8011/builders/clang-cmake-armv7-full/builds/10789
http://lab.llvm.org:8011/builders/sanitizer-x86_64-linux/builds/27750
http://lab.llvm.org:8011/builders/sanitizer-ppc64be-linux/builds/16751
2020-06-06 23:30:46 +00:00
Reid Kleckner 672ed53860 [codeview] Put !heapallocsite on calls to operator new
Clang marks calls to operator new as heap allocation sites, but the
operator declared at global scope returns a void pointer. There is no
explicit cast in the code, so the compiler has to write down the
allocated type itself.

Also generalize a cast to use CallBase, so that we mark heap alloc sites
when exceptions are enabled.

Reviewed By: MaskRay

Differential Revision: https://reviews.llvm.org/D80966
2020-06-05 12:52:38 -07:00
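
A brief sketch of the case handled above, assuming CodeView debug info is enabled (e.g. clang-cl /Z7, or -gcodeview together with -g); the type and function are illustrative:

```
struct Widget { int X; };

// The global operator new returns void* and there is no explicit cast here,
// so the compiler itself records the allocated type by attaching
// !heapallocsite metadata to the call.
Widget *make_widget() {
  return new Widget();
}
```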