Commit Graph

19876 Commits

Daniel Neilson 367c2aea4e [InstCombine] Properly change GEP type when reassociating loop invariant GEP chains
Summary:
This is a fix to PR37005.

Essentially, rL328539 ([InstCombine] reassociate loop invariant GEP chains to enable LICM) contains a bug
whereby it will convert:
%src = getelementptr inbounds i8, i8* %base, <2 x i64> %val
%res = getelementptr inbounds i8, <2 x i8*> %src, i64 %val2
into:
%src = getelementptr inbounds i8, i8* %base, i64 %val2
%res = getelementptr inbounds i8, <2 x i8*> %src, <2 x i64> %val

It does this by swapping the index operands when the GEPs are in a loop, %val is loop variant,
and %val2 is loop invariant.

This fix creates new GEP instructions if the index operand swap would change the type
of %src from vector to scalar, or vice versa.

Reviewers: sebpop, spatel

Reviewed By: sebpop

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D45287

llvm-svn: 329331
2018-04-05 18:51:45 +00:00
Sanjay Patel deaf4f354e [InstCombine] use pattern matchers for fsub --> fadd folds
This allows the folds to apply to vectors with undef elements.
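
For illustration, a hypothetical vector case that the matchers can now handle (sketch, not taken from the patch's tests):
  ; before: subtract a negative constant that has an undef lane
  %r = fsub <2 x float> %x, <float -2.0, float undef>
  ; after: folded to an add of the negated constant, undef lane preserved
  %r = fadd <2 x float> %x, <float 2.0, float undef>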

llvm-svn: 329316
2018-04-05 17:06:45 +00:00
Sanjay Patel 236442e063 [InstCombine] cleanup; NFC
llvm-svn: 329282
2018-04-05 13:24:26 +00:00
Florian Hahn 6e0043365b [LoopInterchange] Add stats counter for number of interchanged loops.
Reviewers: samparker, karthikthecool, blitz.opensource

Reviewed By: samparker

Differential Revision: https://reviews.llvm.org/D45209

llvm-svn: 329269
2018-04-05 10:39:23 +00:00
Florian Hahn 831a757728 [LoopInterchange] Preserve LoopInfo after interchanging.
LoopInterchange relies on LoopInfo being up-to-date, so we should
preserve it after interchanging. This patch updates restructureLoops to
move the BBs of the interchanged loops to the right place.

Reviewers: davide, efriedma, karthikthecool, mcrosier

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D45278

llvm-svn: 329264
2018-04-05 09:48:45 +00:00
Taewook Oh e0db533feb [CallSiteSplitting] Do not perform callsite splitting inside landing pad
Summary:
If the callsite is inside a landing pad, do not perform callsite splitting.

Callsite splitting uses the utility function llvm::DuplicateInstructionsInSplitBetween, which eventually calls llvm::SplitEdge. llvm::SplitEdge calls llvm::SplitCriticalEdge with an assumption that the function returns nullptr only when the target edge is not a critical edge (and further assumes that if the return value was not nullptr, the predecessor of the original target edge always has a single successor because critical edge splitting was successful). However, this assumption is not true because SplitCriticalEdge returns nullptr if the destination block is a landing pad. This invalid assumption results in an assertion failure.

A fundamental solution would be to fix llvm::SplitEdge so that it does not rely on the invalid assumption. However, that would involve a lot of work because the current API assumes that llvm::SplitEdge never fails. Instead, this patch makes callsite splitting not attempt splitting if the callsite is in a landing pad.

Attached test case will crash with assertion failure without the fix.

Reviewers: fhahn, junbuml, dberlin

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D45130

llvm-svn: 329250
2018-04-05 04:16:23 +00:00
Evgeniy Stepanov 1f1a7a719d hwasan: add -hwasan-match-all-tag flag
Sometimes instead of storing addresses as is, the kernel stores the address of
a page and an offset within that page, and then computes the actual address
when it needs to make an access. Because of this the pointer tag gets lost
(gets set to 0xff). The solution is to ignore all accesses tagged with 0xff.

This patch adds a -hwasan-match-all-tag flag to hwasan, which makes hwasan ignore
(treat as valid) accesses through pointers with a particular pointer tag value.

Patch by Andrey Konovalov.

Differential Revision: https://reviews.llvm.org/D44827

llvm-svn: 329228
2018-04-04 20:44:59 +00:00
Benjamin Kramer 1fc0da4849 Make helpers static. NFC.
llvm-svn: 329170
2018-04-04 11:45:11 +00:00
Nicolai Haehnle eb7311ffb1 StructurizeCFG: Test for branch divergence correctly
Fixes cases like the new test @nonuniform. In that test, %cc itself
is a uniform value; however, when reading it after the end of the loop in
basic block %if, its value is effectively non-uniform, so the branch is
non-uniform.

This problem was encountered in
https://bugs.freedesktop.org/show_bug.cgi?id=103743; however, this change
in itself is not sufficient to fix that bug, as there is another issue
in the AMDGPU backend.

As discovered after committing an earlier version of this change, this
exposes a subtle interaction between this pass and DivergenceAnalysis:
since we remove and re-create branch instructions, we can no longer rely
on DivergenceAnalysis for branches in subregions that were already
processed by the pass.

Explicitly remove branch instructions from DivergenceAnalysis to
avoid dangling pointers as a matter of defensive programming, and
change how we detect non-uniform subregions.

Change-Id: I32bbffece4a32f686fab54964dae1a5dd72949d4

Differential Revision: https://reviews.llvm.org/D43743

llvm-svn: 329165
2018-04-04 10:58:15 +00:00
Craig Topper 7d3aba6687 [SimplifyCFG] Teach merge conditional stores to handle cases where the PostBB has more than 2 predecessors by inserting a new block for the store.
Summary:
Currently merge conditional stores can't handle cases where PostBB (the block we need to move the store to) has more than 2 predecessors.

This patch removes that restriction by creating a new block with only the 2 predecessors we care about and an unconditional branch to the original block. This provides a place to put the store.
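
A hypothetical sketch of the CFG shape this enables (not from the patch's tests); %post has three predecessors, two of which conditionally store to %p, and a new block holding the merged store is inserted in front of %post:
if.a:                     ; store under condition A
  store i32 1, i32* %p
  br label %post
if.b:                     ; store under condition B
  store i32 2, i32* %p
  br label %post
other:                    ; unrelated third predecessor
  br label %post
post:
  ; ...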

Reviewers: efriedma, jmolloy, ABataev

Reviewed By: efriedma

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D39760

llvm-svn: 329142
2018-04-04 03:47:17 +00:00
Ikhlas Ajbar 1376d934ed [Hexagon] peel loops with runtime small trip counts
Move the check canPeel() to Hexagon Target before setting PeelCount.

Differential Revision: https://reviews.llvm.org/D44880

llvm-svn: 329129
2018-04-03 22:55:09 +00:00
Sanjay Patel 81b3b10a95 [InstCombine] allow more fmul folds with 'reassoc'
The tests marked with 'FIXME' require loosening the check
in SimplifyAssociativeOrCommutative() to optimize completely;
that's still checking isFast() in Instruction::isAssociative().

llvm-svn: 329121
2018-04-03 22:19:19 +00:00
Vlad Tsyrklevich 07cf78cdad Fix bad copy-and-paste in r329108
llvm-svn: 329118
2018-04-03 21:40:27 +00:00
Gor Nishanov d4712715dd [coroutines] Respect alloca alignment requirements when building coroutine frame
Summary:
If an alloca needs to be stored in the coroutine frame, has an alignment specified, and that alignment does not match the natural alignment of the alloca type, insert appropriate padding into the coroutine frame to make sure that it gets the requested alignment.

For example, for a packed type (whose natural alignment is 1) with an alloca alignment of 8, we may need to insert a padding field with the required number of bytes to make sure it is properly aligned.

```
%PackedStruct = type <{ i64 }>
...
  %data = alloca %PackedStruct, align 8
```

If the previous field in the coroutine frame had alignment 2, we would have [6 x i8] inserted before %PackedStruct in the coroutine frame:

```
%f.Frame = type { ..., i16, [6 x i8], %PackedStruct }
```

Reviewers: rnk, lewissbaker, modocache

Reviewed By: modocache

Subscribers: EricWF, llvm-commits

Differential Revision: https://reviews.llvm.org/D45221

llvm-svn: 329112
2018-04-03 20:54:20 +00:00
Florian Hahn 9467ccf447 [LoopInterchange] Add remark for calls preventing interchanging.
It also updates test/Transforms/LoopInterchange/call-instructions.ll
to use accesses where we can prove dependence after D35430.


Reviewers: sebpop, karthikthecool, blitz.opensource

Reviewed By: sebpop

Differential Revision: https://reviews.llvm.org/D45206

llvm-svn: 329111
2018-04-03 20:54:04 +00:00
Vlad Tsyrklevich d17f61ea3b Add the ShadowCallStack attribute
Summary:
Introduce the ShadowCallStack function attribute. It's added to
functions compiled with -fsanitize=shadow-call-stack in order to mark
functions to be instrumented by a ShadowCallStack pass to be submitted
in a separate change.

Reviewers: pcc, kcc, kubamracek

Reviewed By: pcc, kcc

Subscribers: cryptoad, mehdi_amini, javed.absar, llvm-commits, kcc

Differential Revision: https://reviews.llvm.org/D44800

llvm-svn: 329108
2018-04-03 20:10:40 +00:00
Alexey Bataev d5b1f7892f [SLP] Fixed formatting, NFC.
llvm-svn: 329091
2018-04-03 17:48:14 +00:00
Daniel Neilson 901acfab0c [InstCombine] Fold compare of int constant against a splatted vector of ints
Summary:
Folding patterns like:
  %vec = shufflevector <4 x i8> %insvec, <4 x i8> undef, <4 x i32> zeroinitializer
  %cast = bitcast <4 x i8> %vec to i32
  %cond = icmp eq i32 %cast, 0
into:
  %ext = extractelement <4 x i8> %insvec, i32 0
  %cond = icmp eq i32 %ext, 0

Combined with existing rules, this allows us to fold patterns like:
  %insvec = insertelement <4 x i8> undef, i8 %val, i32 0
  %vec = shufflevector <4 x i8> %insvec, <4 x i8> undef, <4 x i32> zeroinitializer
  %cast = bitcast <4 x i8> %vec to i32
  %cond = icmp eq i32 %cast, 0
into:
  %cond = icmp eq i8 %val, 0

When we construct a splat vector via a shuffle and bitcast the vector into an integer type for comparison against an integer constant, we can simplify the comparison to compare the splatted value against the integer constant.

Reviewers: spatel, anna, mkazantsev

Reviewed By: spatel

Subscribers: efriedma, rengolin, llvm-commits

Differential Revision: https://reviews.llvm.org/D44997

llvm-svn: 329087
2018-04-03 17:26:20 +00:00
Alexey Bataev 428e9d9d87 [SLP] Fix PR36481: vectorize reassociated instructions.
Summary:
If the load/extractelement/extractvalue instructions are not originally
consecutive, the SLP vectorizer is unable to vectorize them. This patch
allows reordering of such instructions.

This patch does not support reordering of repeated instructions; that must
be handled in a separate patch.
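
A hypothetical illustration (not from the patch's tests): loads of consecutive memory written in a shuffled order, which can now be reordered and vectorized into a single <4 x i32> load plus a shuffle:
  %p0 = getelementptr inbounds i32, i32* %p, i64 0
  %p1 = getelementptr inbounds i32, i32* %p, i64 1
  %p2 = getelementptr inbounds i32, i32* %p, i64 2
  %p3 = getelementptr inbounds i32, i32* %p, i64 3
  %a = load i32, i32* %p2
  %b = load i32, i32* %p0
  %c = load i32, i32* %p3
  %d = load i32, i32* %p1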

Reviewers: RKSimon, spatel, hfinkel, mkuper, Ayal, ashahid

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D43776

llvm-svn: 329085
2018-04-03 17:14:47 +00:00
Alexey Bataev df989c54cf Recommit "[SLP] Fix issues with debug output in the SLP vectorizer."
The primary issue here is that using NDEBUG alone isn't enough to guard
debug printing -- instead the DEBUG() macro needs to be used so that the
specific pass debug logging check is employed. Without this, every
asserts-enabled build was printing out information when it hit this.

I also fixed another place where we had multiple statements in a DEBUG
macro to use {}s to be a bit cleaner. And I fixed a place that used
errs() rather than dbgs().

llvm-svn: 329082
2018-04-03 16:40:33 +00:00
Benjamin Kramer 2fc3b18922 Revert "[SLP] Fix PR36481: vectorize reassociated instructions."
This reverts commit r328980 and r329046. Makes the vectorizer crash.

llvm-svn: 329071
2018-04-03 14:40:33 +00:00
Alexander Potapenko ac70668cff MSan: introduce the conservative assembly handling mode.
The default assembly handling mode may introduce false positives in the
cases when MSan doesn't understand that the assembly call initializes
the memory pointed to by one of its arguments.

We introduce the conservative mode, which initializes the first
|sizeof(type)| bytes for every |type*| pointer passed into the
assembly statement.
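
For example (hypothetical IR, not from the patch), given an assembly call that takes a pointer argument:
  call void asm sideeffect "nop", "r"(i32* %p)
in the conservative mode MSan treats the first sizeof(i32) = 4 bytes pointed to by %p as initialized by the call.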

llvm-svn: 329054
2018-04-03 09:50:06 +00:00
Chandler Carruth 597bfd8448 [SLP] Fix issues with debug output in the SLP vectorizer.
The primary issue here is that using NDEBUG alone isn't enough to guard
debug printing -- instead the DEBUG() macro needs to be used so that the
specific pass debug logging check is employed. Without this, every
asserts-enabled build was printing out information when it hit this.

I also fixed another place where we had multiple statements in a DEBUG
macro to use {}s to be a bit cleaner. And I fixed a place that used
`errs()` rather than `dbgs()`.

llvm-svn: 329046
2018-04-03 05:27:28 +00:00
Ikhlas Ajbar b7322e8ac7 peel loops with runtime small trip counts
For Hexagon, peeling loops with small runtime trip count is beneficial for our
benchmarks. We set PeelCount in HexagonTargetInfo.cpp and we use PeelCount set
by the target for computing the desired peel count.

Differential Revision: https://reviews.llvm.org/D44880

llvm-svn: 329042
2018-04-03 03:39:43 +00:00
Haicheng Wu 7f0daaeb86 [SLP] Distinguish "demanded and shrinkable" from "demanded and not shrinkable" values when determining the minimum bitwidth
We use two approaches for determining the minimum bitwidth.

   * Demanded bits
   * Value tracking

If demanded bits doesn't result in a narrower type, we then try value tracking.
We need this if we want to root SLP trees with the indices of getelementptr
instructions since all the bits of the indices are demanded.

There is a missing piece, though. We need to be able to distinguish "demanded
and shrinkable" from "demanded and not shrinkable". For example, the bits of %i
in

%i = sext i32 %e1 to i64
%gep = getelementptr inbounds i64, i64* %p, i64 %i

are demanded, but we can shrink %i's type to i32 because it won't change the
result of the getelementptr. On the other hand, in

%tmp15 = sext i32 %tmp14 to i64
%tmp16 = insertvalue { i64, i64 } undef, i64 %tmp15, 0

it doesn't make sense to shrink %tmp15 and we can skip the value tracking.

Ideas are from Matthew Simpson!

Differential Revision: https://reviews.llvm.org/D44868

llvm-svn: 329035
2018-04-03 00:05:10 +00:00
Brian Gesiak 64521bed0d [Coroutines] Avoid assert splitting hidden coros
Summary:
When attempting to split a coroutine with 'hidden' visibility (for
example, a C++ coroutine that is inlined when compiled with the option
'-fvisibility-inlines-hidden'), LLVM would hit an assertion in
include/llvm/IR/GlobalValue.h:240: "local linkage requires default
visibility". The issue is that the visibility is copied from the source
of the function split in the `CloneFunctionInto` function, but the linkage
is not. To fix, create the new function first with external linkage,
then copy the linkage from the original function *after* `CloneFunctionInto`
is called.

Since `GlobalValue::setLinkage` in turn calls `maybeSetDsoLocal`, the
explicit call to `setDSOLocal` can be removed in CoroSplit.cpp.

Test Plan: check-llvm

Reviewers: GorNishanov, lewissbaker, EricWF, majnemer, rnk

Reviewed By: rnk

Subscribers: llvm-commits, eric_niebler

Differential Revision: https://reviews.llvm.org/D44185

llvm-svn: 329033
2018-04-02 23:39:40 +00:00
Reid Kleckner 298ffc609b [InstCombine] Don't strip function type casts from musttail calls
Summary:
The cast simplifications that instcombine does here do not make any
attempt to obey the verifier rules for musttail calls. Therefore we have
to disable them.

Reviewers: efriedma, majnemer, pcc

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D45186

llvm-svn: 329027
2018-04-02 22:49:44 +00:00
Reid Kleckner a9e9918ee4 Treat inlining a notail call as a regular, non-tail call
Otherwise, we end up inlining a musttail call into a non-tail position,
which breaks verifier invariants.

Fixes PR31014

llvm-svn: 329015
2018-04-02 21:23:16 +00:00
Sanjay Patel cbb0450540 [InstCombine] add folds for icmp + sub (PR36969)
(A - B) >u A --> A <u B
C <u (C - D) --> C <u D

https://rise4fun.com/Alive/e7j

Name: ugt
  %sub = sub i8 %x, %y
  %cmp = icmp ugt i8 %sub, %x
=>
  %cmp = icmp ult i8 %x, %y
  
Name: ult
  %sub = sub i8 %x, %y
  %cmp = icmp ult i8 %x, %sub
=>
  %cmp = icmp ult i8 %x, %y

This should fix:
https://bugs.llvm.org/show_bug.cgi?id=36969

llvm-svn: 329011
2018-04-02 20:37:40 +00:00
Rong Xu 5a8d4c3357 [DeadArgumentElim] Clone function level metadatas
Some function-level metadata, such as the function entry count, is not cloned in
DeadArgumentElim. This happens a lot in LTO/ThinLTO because DeadArgumentElim runs
after internalization.

This patch clones the metadata from the original function to the new function.
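
A hypothetical sketch of the kind of metadata involved (not from the patch's tests): the entry count attached to the original function should survive removal of a dead argument:
define i32 @foo(i32 %dead, i32 %x) !prof !0 {
  ret i32 %x
}
!0 = !{!"function_entry_count", i64 100}
After DeadArgumentElim rewrites @foo without %dead, the !prof metadata is now copied to the new function.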

Differential Revision: https://reviews.llvm.org/D44127

llvm-svn: 328991
2018-04-02 17:27:38 +00:00
Gor Nishanov b0316d96ae [coroutines] Add support for llvm.coro.noop intrinsics
Summary:
A recent addition to Coroutines TS (https://wg21.link/p0913) adds a pre-defined coroutine noop_coroutine that does nothing.
To implement this feature, we implemented an llvm.coro.noop intrinsic that returns a coroutine handle to a coroutine that does nothing when resumed or destroyed.
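
A minimal usage sketch (the exact intrinsic signature shown here is my assumption, not quoted from the patch):
declare i8* @llvm.coro.noop()

define i8* @get_noop_handle() {
  ; returns a handle that is safe to resume or destroy any number of times
  %hdl = call i8* @llvm.coro.noop()
  ret i8* %hdl
}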

Reviewers: EricWF, modocache, rnk, lewissbaker

Reviewed By: modocache

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D45114

llvm-svn: 328986
2018-04-02 16:55:12 +00:00
Alexey Bataev 3decaf4275 [SLP] Fix PR36481: vectorize reassociated instructions.
Summary:
If the load/extractelement/extractvalue instructions are not originally
consecutive, the SLP vectorizer is unable to vectorize them. This patch
allows reordering of such instructions.

Reviewers: RKSimon, spatel, hfinkel, mkuper, Ayal, ashahid

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D43776

llvm-svn: 328980
2018-04-02 14:51:37 +00:00
Teresa Johnson 974706ebf7 [ThinLTO] Add an import cutoff for debugging/triaging
Summary:
Adds -import-cutoff=N, which will stop importing during the thin link
after N imports. The default is -1 (no limit).

Reviewers: wmi

Subscribers: inglorion, llvm-commits

Differential Revision: https://reviews.llvm.org/D45127

llvm-svn: 328934
2018-04-01 15:54:40 +00:00
David Green f80ebc8d21 [LoopRotate] Rotate loops with loop exiting latches
If a loop has a loop-exiting latch, it can be profitable
to rotate the loop if doing so leads to the simplification of
a phi node. Perform rotation in these cases even if loop
rotate itself didn't simplify the loop to get there.

Differential Revision: https://reviews.llvm.org/D44199

llvm-svn: 328933
2018-04-01 12:48:24 +00:00
Fangrui Song 956ee79795 Fix a bunch of typos. NFC
llvm-svn: 328907
2018-03-30 22:22:31 +00:00
Peter Collingbourne d03bf12c1b DataFlowSanitizer: wrappers of functions with local linkage should have the same linkage as the function being wrapped
This patch resolves link errors when the address of a static function is taken, and that function is uninstrumented by DFSan.

This change resolves bug 36314.

Patch by Sam Kerner!

Differential Revision: https://reviews.llvm.org/D44784

llvm-svn: 328890
2018-03-30 18:37:55 +00:00
Krzysztof Parzyszek fce30c2ba3 Revert "peel loops with runtime small trip counts"
This reverts commit r328854; it breaks some Hexagon tests.

llvm-svn: 328875
2018-03-30 16:55:44 +00:00
Ikhlas Ajbar 66c8ba5a50 peel loops with runtime small trip counts
For Hexagon, peeling loops with small runtime trip count is beneficial for our
benchmarks. We set PeelCount in HexagonTargetInfo.cpp and we use PeelCount set
by the target for computing the desired peel count.

Differential Revision: https://reviews.llvm.org/D44880

llvm-svn: 328854
2018-03-30 03:05:34 +00:00
David Blaikie f423062aff Fix some layering in StripNonLineTableDebugInfo, moving its declaration from IPO.h to Utils.h to match its implementation
llvm-svn: 328844
2018-03-29 22:42:08 +00:00
David Blaikie 7883340331 Remove unused header to fix layering.
llvm-svn: 328842
2018-03-29 22:35:59 +00:00
David Blaikie 4778bb88ef Remove unused headers to fix layering
llvm-svn: 328840
2018-03-29 22:31:39 +00:00
David Blaikie c90289b5d3 llvm-c: Split Utils out of Scalar.h
To fix layering (so that Scalar.h, a libScalarOpts header, isn't
included from Utils - which libScalarOpts depends on).

llvm-svn: 328839
2018-03-29 22:31:38 +00:00
Evgeniy Stepanov 50635dab26 Add msan custom mapping options.
Similarly to https://reviews.llvm.org/D18865 this adds options to provide custom mapping for msan.
As discussed in http://lists.llvm.org/pipermail/llvm-dev/2018-February/121339.html

Patch by vit9696(at)avp.su.

Differential Revision: https://reviews.llvm.org/D44926

llvm-svn: 328830
2018-03-29 21:18:17 +00:00
Philip Reames 5c14ed89f6 [NFC][LICM] Rearrange checks to have the cheap bail out first
llvm-svn: 328822
2018-03-29 20:32:15 +00:00
Haicheng Wu c7cc87922e [JumpThreading] Don't select an edge that we know we can't thread
In r312664 (D36404), JumpThreading stopped threading edges into
loop headers. Unfortunately, I observed a significant performance
regression as a result of this change. Upon further investigation,
the problematic pattern looked something like this (after
many high level optimizations):

while (true) {
    bool cond = ...;
    if (!cond) {
        <body>
    }
    if (cond)
        break;
}

Now, naturally we want jump threading to essentially eliminate the
second if check and hook up the edges appropriately. However, the
above-mentioned change prevented it from doing this because it would
have to thread an edge into the loop header.

Upon further investigation, what is happening is that since both branches
are threadable, JumpThreading picks one of them arbitrarily. In my
case, because of the way that the IR ended up, it tended to pick
the one to the loop header, bailing out immediately after. However,
if it had picked the one to the exit block, everything would have
worked out fine (because the only remaining branch would then be folded,
not threaded, which is acceptable).

Thus, to fix this problem, we can simply eliminate loop headers from
consideration as possible threading targets earlier, to make sure that
if there are multiple eligible branches, we can still thread one of
the ones that don't target a loop header.

Patch by Keno Fischer!
Differential Revision: https://reviews.llvm.org/D42260

llvm-svn: 328798
2018-03-29 16:01:26 +00:00
David Green b0aa36f9c2 [LoopRotate] Restructuring LoopRotation.cpp to create Loop Rotation Pass with Loop Rotation Utility Interface
The existing LoopRotation.cpp is implemented as a loop pass instead of
being a utility. The user cannot easily perform the loop rotation selectively
(or on demand) under different optimization levels. For example, the loop
rotation is needed as part of the logic to convert a loop into a loop with
a bottom test for a transformation. If the loop rotation is simply added as a
loop pass before the transformation, the pass is skipped if the code is compiled at
-O0 or if it is explicitly disabled by the user, causing the compiler to
generate incorrect code. Furthermore, as a loop pass it will rotate all loops
instead of just the relevant ones.

We provide a utility interface for the loop rotation so that the loop rotation
can be called on demand. The changeset is as follows:

- Create a new file lib/Transforms/Utils/LoopRotationUtils.cpp and move the main
  implementation of class LoopRotate into this file.
- Create a new file llvm/include/Transform/Utils/LoopRotationUtils.h with the
  interface LoopRotation(...).
- Original LoopRotation.cpp is changed to use the utility function LoopRotation
  in LoopRotationUtils.cpp. This is done in the same way the community did for
  the mem-to-reg implementation.

Patch by Jin Lin!

Differential Revision: https://reviews.llvm.org/D44595

llvm-svn: 328766
2018-03-29 08:48:15 +00:00
Benjamin Kramer 6b995a4a7e [Transforms] Make sure to include the c binding header when defining c binding functions
Otherwise the definitions can't see the extern C declarations and get
name mangled, making it impossible for users to call them. This breaks
the Go bindings.

llvm-svn: 328765
2018-03-29 07:56:53 +00:00
David Blaikie 8ad9a97310 Plumb useAA through TargetTransformInfo to remove Transforms->CodeGen header dependency
Thanks to echristo for the pointers on direction.

llvm-svn: 328737
2018-03-28 22:28:50 +00:00
David Blaikie eb8cc04ea2 Oops - moved slightly too many things from Scalar to Utils. Move LoopSimplifyCFG things back
llvm-svn: 328720
2018-03-28 18:03:25 +00:00
David Blaikie a373d18eb7 Transforms: Introduce Transforms/Utils.h rather than spreading the declarations amongst Scalar.h and IPO.h
Fixes layering - Transforms/Utils shouldn't depend on including a Scalar
or IPO header, because Scalar and IPO depend on Utils.

llvm-svn: 328717
2018-03-28 17:44:36 +00:00
Alexander Potapenko 4e7ad0805e [MSan] Introduce ActualFnStart. NFC
This is a step towards the upcoming KMSAN implementation patch.
KMSAN is going to prepend a special basic block containing
tool-specific calls to each function. Because we still want to
instrument the original entry block, we'll need to store it in
ActualFnStart.

For MSan this will still be F.getEntryBlock(), whereas for KMSAN
it'll contain the second BB.

llvm-svn: 328697
2018-03-28 11:35:09 +00:00
Alexander Potapenko e1d5877847 [MSan] Add an isStore argument to getShadowOriginPtr(). NFC
This is a step towards the upcoming KMSAN implementation patch.
The isStore argument is to be used by getShadowOriginPtrKernel(),
it is ignored by getShadowOriginPtrUserspace().

Depending on whether a memory access is a load or a store, KMSAN
instruments it with different functions, __msan_metadata_ptr_for_load_X()
and __msan_metadata_ptr_for_store_X().

Those functions may return different values for a single address,
which is necessary in the case the runtime library decides to ignore
particular accesses.

llvm-svn: 328692
2018-03-28 10:17:17 +00:00
Xin Tong 0272cb077f 80-line wrap. NFC
llvm-svn: 328660
2018-03-27 19:43:02 +00:00
Rong Xu 662f38b16f [PGO] Fix branch probability remarks assert
Fixed counter/weight overflow that leads to an assertion. Also fixed the help
string for pgo-emit-branch-prob option.

Differential Revision: https://reviews.llvm.org/D44809

llvm-svn: 328653
2018-03-27 18:55:56 +00:00
Krzysztof Parzyszek 5d93fdfa89 [LV] Add TTI::shouldMaximizeVectorBandwidth to allow enabling it per target
The default implementation returns false and keeps the current behavior.

Differential Revision: https://reviews.llvm.org/D44735

llvm-svn: 328632
2018-03-27 16:14:11 +00:00
Max Kazantsev b1ad66ff12 [LoopUnroll][NFC] Remove redundant canPeel check
We check `canPeel` twice: when evaluating the number of iterations to be peeled
and within the method `peelLoop` that performs peeling. This method is only
executed if the calculated peel count is positive. Thus, the check in `peelLoop` can
never fail. This patch replaces this check with an assert.

Differential Revision: https://reviews.llvm.org/D44919
Reviewed By: fhahn

llvm-svn: 328615
2018-03-27 09:40:51 +00:00
Sam Parker 90b7f4f72c [IRCE] Enable decreasing loops of non-const bound
As a follow-up to r328480, this updates the logic for the decreasing
safety checks in a similar manner:
- CanBeMax is replaced by CannotBeMaxInLoop which queries
  isLoopEntryGuardedByCond on the maximum value.
- SumCanReachMin is replaced by isSafeDecreasingBound which includes
  some logic from parseLoopStructure and, again, has been updated to
  use isLoopEntryGuardedByCond on the given bounds.

Differential Revision: https://reviews.llvm.org/D44776

llvm-svn: 328613
2018-03-27 08:24:53 +00:00
Sanjay Patel 0e3167cb30 [InstCombine] improve code comment; NFC
llvm-svn: 328560
2018-03-26 17:52:02 +00:00
Sebastian Pop d870aea03e [InstCombine] reassociate loop invariant GEP chains to enable LICM
This change brings performance of zlib up by 10%. The example below is from a
hot loop in longest_match() from zlib.

do.body:
  %cur_match.addr.0 = phi i32 [ %cur_match, %entry ], [ %2, %do.cond ]
  %idx.ext = zext i32 %cur_match.addr.0 to i64
  %add.ptr = getelementptr inbounds i8, i8* %win, i64 %idx.ext
  %add.ptr2 = getelementptr inbounds i8, i8* %add.ptr, i64 %idx.ext1
  %add.ptr3 = getelementptr inbounds i8, i8* %add.ptr2, i64 -1

In this example %idx.ext1 is loop invariant. It will be moved above the use of
the loop induction variable %idx.ext so that it can be hoisted out of the loop by
LICM. The operands that have dependences carried by the loop will be sunk down
in the GEP chain. This patch will produce the following output:

do.body:
  %cur_match.addr.0 = phi i32 [ %cur_match, %entry ], [ %2, %do.cond ]
  %idx.ext = zext i32 %cur_match.addr.0 to i64
  %add.ptr = getelementptr inbounds i8, i8* %win, i64 %idx.ext1
  %add.ptr2 = getelementptr inbounds i8, i8* %add.ptr, i64 -1
  %add.ptr3 = getelementptr inbounds i8, i8* %add.ptr2, i64 %idx.ext

llvm-svn: 328539
2018-03-26 16:19:31 +00:00
Sanjay Patel 4fd4fd610c [InstCombine] distribute fmul over fadd/fsub
This replaces a large chunk of code that was looking for compound
patterns that include these sub-patterns. Existing tests ensure that
all of the previous examples are still folded as expected.

We still need to loosen the FMF check.
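
A hypothetical example of the shape of the fold (constants chosen for illustration, not from the tests):
  %a = fadd fast float %x, 3.0
  %m = fmul fast float %a, 2.0
  ; -> distribute the multiply; the constant product folds:
  %t = fmul fast float %x, 2.0
  %m = fadd fast float %t, 6.0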

llvm-svn: 328502
2018-03-26 15:03:57 +00:00
Sanjay Patel 2455fef497 [InstCombine] check uses before creating instructions for fmul distribution
As the tests show, we could create extra instructions without any obvious benefit.

llvm-svn: 328498
2018-03-26 14:25:43 +00:00
Krzysztof Parzyszek 0b377e0ae9 [LSR] Allow giving priority to post-incrementing addressing modes
Implement TTI interface for targets to indicate that the LSR should give
priority to post-incrementing addressing modes.

Combination of patches by Sebastian Pop and Brendon Cahoon.

Differential Revision: https://reviews.llvm.org/D44758

llvm-svn: 328490
2018-03-26 13:10:09 +00:00
Max Kazantsev a55749312b [LoopUnroll] Fix dangling pointers in SCEV
The current logic of loop SCEV invalidation in the Loop Unroller implicitly relies on
the fact that the exit count of outer loops cannot depend on exiting blocks of
inner loops, which is true in the current implementation of backedge-taken count
calculation but is wrong in general. As a result, when we only forget the loop that
we have just unrolled, we may still have cached data for its outer loops (in particular,
exit counts) which keeps references to blocks of the inner loop that could have been
changed or even deleted.

The attached test demonstrates a situation where, after unrolling of the innermost loop,
the outermost loop contains a dangling pointer to a non-existent block. The problem
shows up when we apply the patch https://reviews.llvm.org/D44677 that makes SCEV
smarter about exit count calculation. I am not sure if the bug exists without that patch;
it appears that things are currently accidentally correct just because in practice the exact
backedge-taken count for outer loops with complex control flow inside is never calculated.
But when SCEV learns to do so, this problem shows up.

This patch replaces the existing logic of SCEV loop invalidation with a correct one, which
happens to be invalidation of the outermost loop (which also leads to invalidation of all
loops inside of it). It is the only way to ensure that no outer loop keeps dangling pointers
to removed blocks, or just outdated information that has changed after unrolling.

Differential Revision: https://reviews.llvm.org/D44818
Reviewed By: samparker

llvm-svn: 328483
2018-03-26 11:31:46 +00:00
Benjamin Kramer 8840f644b4 [DeadArgElim] Strip allocsize attributes when deleting an argument.
Since allocsize refers to the argument number it gets invalidated when
an argument is removed and the numbers shift.

llvm-svn: 328481
2018-03-26 09:44:24 +00:00
Sam Parker 53a423a417 [IRCE] Enable increasing loops of variable bounds
CanBeMin is currently used which will report true for any unknown
values, but often a check is performed outside the loop which covers
this situation:
    
for (int i = 0; i < N; ++i)
  ...
    
if (N > 0)
  for (int i = 0; i < N; ++i)
    ...
    
So I've added 'LoopGuardedAgainstMin', which reports whether N is
greater than the minimum value; this then allows loops with a variable
trip count to be optimised. I've also moved the increasing bound
checking into its own function and replaced SumCanReachMax with another
isLoopEntryGuardedByCond function.

llvm-svn: 328480
2018-03-26 09:29:42 +00:00
Sanjay Patel 93e64dd9a1 [PatternMatch] allow undef elements when matching vector FP +0.0
This continues the FP constant pattern matching improvements from:
https://reviews.llvm.org/rL327627
https://reviews.llvm.org/rL327339
https://reviews.llvm.org/rL327307

Several integer constant matchers also have this ability. I'm
separating matching of integer/pointer null from FP positive zero
and renaming/commenting to make the functionality clearer.
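
For example (hypothetical IR), a +0.0 vector constant with an undef lane can now be recognized as +0.0 when matching a compare such as:
  %c = fcmp ogt <2 x float> %x, <float 0.0, float undef>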

llvm-svn: 328461
2018-03-25 21:16:33 +00:00
Sanjay Patel 841aac04d4 [InstCombine] peek through more icmp of FP cast + bitcast
This is an extension of rL328426 as noted in D44367. 

llvm-svn: 328448
2018-03-25 14:01:42 +00:00
Sanjay Patel 745a9c62c2 [InstCombine] peek through FP casts for sign-bit compares (PR36682)
This pattern came up in PR36682:
https://bugs.llvm.org/show_bug.cgi?id=36682
https://godbolt.org/g/LhuD9A

Equality checks are planned as a follow-up enhancement.
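
A hypothetical sketch of the kind of pattern involved (not from the tests): the fpext does not change the sign bit, so the sign-bit test can be done on the narrower type:
  %e = fpext float %x to double
  %b = bitcast double %e to i64
  %c = icmp slt i64 %b, 0
  ; ->
  %b2 = bitcast float %x to i32
  %c  = icmp slt i32 %b2, 0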

Differential Revision: https://reviews.llvm.org/D44367

llvm-svn: 328426
2018-03-24 15:45:02 +00:00
Sanjay Patel 286074e8a1 [InstCombine] fix formatting; NFC
llvm-svn: 328425
2018-03-24 15:41:59 +00:00
David Blaikie 53f51c1df8 Remove unused header from EntryExitInstrumenter
Fixes layering, since Transforms/Utils doesn't depend on CodeGen, so
shouldn't include headers from it.

llvm-svn: 328399
2018-03-24 00:06:14 +00:00
Philip Reames 6a1f3446b5 [GuardWidening] Group code by class [NFC]
llvm-svn: 328387
2018-03-23 23:41:47 +00:00
David Blaikie 4fe1fe1418 Fix Layering, move instrumentation transform headers into Instrumentation subdirectory
llvm-svn: 328379
2018-03-23 22:11:06 +00:00
Fedor Sergeev 6660fd0f95 [PM][FunctionAttrs] add NoUnwind attribute inference to PostOrderFunctionAttrs pass
Summary:
This was motivated by the absence of PruneEH functionality in the new PM.
It was decided that the proper way to do PruneEH is to add NoUnwind inference
into PostOrderFunctionAttrs and then perform normal SimplifyCFG on top.

This change generalizes attribute handling implemented for (a removal of)
Convergent attribute, by introducing a generic builder-like class
   AttributeInferer

It registers all the attribute inference requests, storing per-attribute
predicates into a vector, and then goes through an SCC node, scanning all
the instructions to check that they do not break the attribute assumptions.

The main idea is that as soon as all the instructions from all the functions
of the SCC node conform to the attribute assumptions, we are free to infer
the attribute as set for all the functions of the SCC node.

It handles two distinct cases of attributes:
   - those that might break due to derefinement of the function code

     for these attributes we are allowed to apply inference only if all the
     functions are "exact definitions". Example - NoUnwind.

   - those that do not care about derefinement

     for these attributes we are allowed to apply inference as soon as we see
     any function definition. Example - removal of Convergent attribute.

Also in this commit:
* Converted all the FunctionAttrs tests to use FileCheck and added new-PM
  invocations to them

* FunctionAttrs/convergent.ll test demonstrates a difference in behavior between
   new and old PM implementations. Marked with FIXME.

* PruneEH tests were converted to new-PM as well, using function-attrs+simplify-cfg
  combo as intended

* some of "other" tests were updated since function-attrs now infers 'nounwind'
  even for old PM pipeline

* -disable-nounwind-inference hidden option added as a possible workaround for a supposedly
  rare case when nounwind being inferred by default presents a problem

Reviewers: chandlerc, jlebar

Reviewed By: jlebar

Subscribers: eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D44415

llvm-svn: 328377
2018-03-23 21:46:16 +00:00
Sanjay Patel 32381d7c7e [InstCombine] simplify code for FP intrinsic shrinking; NFCI
llvm-svn: 328372
2018-03-23 21:18:12 +00:00
Alex Shlyapnikov 83e7841419 [HWASan] Port HWASan to Linux x86-64 (LLVM)
Summary:
Porting HWASan to Linux x86-64, first of the three patches, LLVM part.

The approach is similar to ARM case, trap signal is used to communicate
memory tag check failure. int3 instruction is used to generate a signal,
access parameters are stored in nop [eax + offset] instruction immediately
following the int3 one.

One notable difference is that x86-64 has to untag the pointer before use
due to the lack of feature comparable to ARM's TBI (Top Byte Ignore).

Reviewers: eugenis

Subscribers: kristof.beyls, llvm-commits

Differential Revision: https://reviews.llvm.org/D44699

llvm-svn: 328342
2018-03-23 17:57:54 +00:00
Andrew Kaylor a237866faf Fix a block copying problem in LICM
Differential Revision: https://reviews.llvm.org/D44817

llvm-svn: 328336
2018-03-23 17:36:18 +00:00
Sanjay Patel 713ca3d36a [InstCombine] reduce code duplication; NFC
llvm-svn: 328323
2018-03-23 15:07:35 +00:00
Sanjay Patel 6de89ce3f7 [InstCombine] improve variable name; NFC
llvm-svn: 328322
2018-03-23 14:48:31 +00:00
Matthew Simpson 6c289a1c74 [SLP] Stop counting cost of gather sequences with multiple uses
When building the SLP tree, we look for reuse among the vectorized tree
entries. However, each gather sequence is represented by a unique tree entry,
even though the sequence may be identical to another one. This means, for
example, that a gather sequence with two uses will be counted twice when
computing the cost of the tree. We should only count the cost of the definition
of a gather sequence rather than its uses. During code generation, the
redundant gather sequences are emitted, but we optimize them away with CSE. So
it looks like this problem just affects the cost model.

Differential Revision: https://reviews.llvm.org/D44742

llvm-svn: 328316
2018-03-23 14:18:27 +00:00
Florian Hahn f73c3ece7f Revert r328307: [IPSCCP] Use constant range information for comparisons of parameters.
Reverted for now, due to it causing verifier failures.

llvm-svn: 328312
2018-03-23 12:49:39 +00:00
Florian Hahn b1feec087e [IPSCCP] Use constant range information for comparisons of parameters.
For comparisons with parameters, we can use the ParamState lattice
elements which also provide constant range information. This improves
the code for PR33253 further and gets us closer to use
ValueLatticeElement for all values.

Also, as we are using the range information in the solver directly, we
do not need tryToReplaceWithConstantRange afterwards anymore.
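
A hypothetical illustration (not from the patch's tests), assuming every caller passes a value in [0, 10) for %x:
define internal i1 @callee(i32 %x) {
  ; the ParamState range for %x makes this compare provably true
  %c = icmp slt i32 %x, 100
  ret i1 %c
}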

Reviewers: dberlin, mssimpso, davide, efriedma

Reviewed By: mssimpso

Differential Revision: https://reviews.llvm.org/D43762

llvm-svn: 328307
2018-03-23 11:56:00 +00:00
Florian Hahn 52436a587e [LoopUnroll] Simplify induction variables after peeling too.
Loop peeling also has an impact on the induction variables, so we should
benefit from induction variable simplification after peeling too.

Reviewers: sanjoy, bogner, mzolotukhin, efriedma

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D43878

llvm-svn: 328301
2018-03-23 10:38:12 +00:00
David Blaikie 301627f875 Move SampleProfile.h into IPO along with the rest of the IPO pass headers
llvm-svn: 328262
2018-03-22 22:42:44 +00:00
David Blaikie 376294c23a Finish moving the IPSCCP pass from Scalar to IPO - moving the registration
llvm-svn: 328259
2018-03-22 22:07:53 +00:00
David Blaikie 3bbf5af0ac Fix layering between SCCP and IPO SCCP
Transforms/Scalar/SCCP.cpp implemented both the Scalar and IPO SCCP, but
this meant Transforms/Scalar included Transforms/IPO headers, creating
a circular dependency (IPO already depends on Scalar). So move the IPO
SCCP shims out into IPO and make the basic library implementation accessible
from Scalar/SCCP.h to be used from the IPO/SCCP.cpp implementation.

llvm-svn: 328250
2018-03-22 21:41:29 +00:00
David Blaikie 2965a01e98 Move the initialization of the Meta Renamer pass over to IPO along with the rest of it that was moved in r328209
llvm-svn: 328234
2018-03-22 19:36:54 +00:00
Daniel Neilson 710d7b9945 [InstCombineCalls] Update deprecated API usage (NFC)
Summary:
Just updating a call to MemSetInst::getAlignment() to MemSetInst::getDestAlignment(). The
former has been deprecated.

llvm-svn: 328227
2018-03-22 18:36:15 +00:00
Matt Morehouse 236cdaf84c [SimplifyCFG] Create attribute for fuzzing-specific optimizations.
Summary:
When building with libFuzzer, converting control flow to selects or
obscuring the original operands of CMPs reduces the effectiveness of
libFuzzer's heuristics.

This patch provides an attribute to disable or modify certain optimizations
for optimal fuzzing signal.

Provides a less aggressive alternative to https://reviews.llvm.org/D44057.

Reviewers: vitalybuka, davide, arsenm, hfinkel

Reviewed By: vitalybuka

Subscribers: junbuml, mehdi_amini, wdng, javed.absar, hiraditya, llvm-commits, kcc

Differential Revision: https://reviews.llvm.org/D44232

llvm-svn: 328214
2018-03-22 17:07:51 +00:00
Anna Thomas 9b1176b0ef [LoopPredication] Add profitability check based on BPI
Summary:
LoopPredication is not profitable when the loop is known to always exit
through some block other than the latch block.
A coarse grained latch check can cause loop predication to predicate the
loop, and unconditionally deoptimize.

However, without predicating the loop, the guard may never fail within the
loop during the dynamic execution because the non-latch loop termination
condition exits the loop before the latch condition causes the loop to
exit.
We teach LP about this using the BranchProbabilityInfo pass.

Reviewers: apilipenko, skatkov, mkazantsev, reames

Reviewed by: skatkov

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D44667

llvm-svn: 328210
2018-03-22 16:03:59 +00:00
David Blaikie 0368417595 Move MetaRenamer from Transforms/UTils to Transforms/IPO since it implements part of IPO.h
llvm-svn: 328209
2018-03-22 15:57:47 +00:00
Florian Hahn 9bc0bc4b9b [CallSiteSplitting] Preserve DominatorTreeAnalysis.
The dominator tree analysis can be preserved easily.
Some other kinds of analysis can probably be preserved
too.

Reviewers: junbuml, dberlin

Reviewed By: dberlin

Differential Revision: https://reviews.llvm.org/D43173

llvm-svn: 328206
2018-03-22 15:23:33 +00:00
Sanjay Patel 94c91b78e7 [InstCombine] add folds for xor-of-icmp signbit tests (PR36682)
This is a retry of r328119 which was reverted at r328145 because
it could crash by trying to combine icmps with different operand
types. This version has a check for that and additional tests.

Original commit message:

This is part of solving:
https://bugs.llvm.org/show_bug.cgi?id=36682

There's also a leftover improvement from the long-ago-closed:
https://bugs.llvm.org/show_bug.cgi?id=5438

https://rise4fun.com/Alive/dC1
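
A hypothetical example of the signbit fold (not taken from the tests): the signs of %x and %y differ exactly when the sign bit of their xor is set:
  %a = icmp slt i32 %x, 0
  %b = icmp slt i32 %y, 0
  %r = xor i1 %a, %b
  ; ->
  %s = xor i32 %x, %y
  %r = icmp slt i32 %s, 0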

llvm-svn: 328197
2018-03-22 14:08:16 +00:00
Florian Hahn 3bb822e7d6 [CloneFunction] Preserve DT in DuplicateInstructionsInSplitBetween.
DuplicateInstructionsInSplitBetween can preserve the DT by passing
through DT to SplitEdge.

Reviewers: sanjoy, junbuml, anna, kuhar

Reviewed By: kuhar

Differential Revision: https://reviews.llvm.org/D44629

llvm-svn: 328189
2018-03-22 11:38:53 +00:00
David Blaikie 2be3922807 Fix a couple of layering violations in Transforms
Remove #include of Transforms/Scalar.h from Transform/Utils to fix layering.

Transforms depends on Transforms/Utils, not the other way around. So
remove the header and the "createStripGCRelocatesPass" function
declaration (& definition) that is unused and motivated this dependency.

Move Transforms/Utils/Local.h into Analysis because it's used by
Analysis/MemoryBuiltins.cpp.

llvm-svn: 328165
2018-03-21 22:34:23 +00:00
Reid Kleckner 762331be07 Revert r328119 "[InstCombine] add folds for xor-of-icmp signbit tests (PR36682)"
This asserts when compiling safe_numerics_unittest.cpp in Chromium with
MSan.

llvm-svn: 328145
2018-03-21 20:35:36 +00:00
Sanjay Patel 778032f39d [InstCombine] add folds for xor-of-icmp signbit tests (PR36682)
This is part of solving:
https://bugs.llvm.org/show_bug.cgi?id=36682

There's also a leftover improvement from the long-ago-closed:
https://bugs.llvm.org/show_bug.cgi?id=5438

https://rise4fun.com/Alive/dC1

llvm-svn: 328119
2018-03-21 17:17:13 +00:00
Daniel Neilson 6f1eb58e92 [MemCpyOpt] Update to new API for memory intrinsic alignment
Summary:
This change is part of step five in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. In particular, this changes the
MemCpyOpt pass to cease using:
1) The old getAlignment() API of MemoryIntrinsic in favour of getting source & dest specific
alignments through the new API.
2) The old IRBuilder CreateMemCpy/CreateMemMove single-alignment APIs in favour of the new
API that allows setting source and destination alignments independently.

We also add a few tests to fill gaps in the testing of this pass.

Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965, rC322964, rL322963 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments. ( rL323597 )
Step 3) Update Clang to use the new IRBuilder API. ( rC323617 )
Step 4) Update Polly to use the new IRBuilder API. ( rL323618 )
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use [get|set]DestAlignment()
and [get|set]SourceAlignment() instead. ( rL323886, rL323891, rL324148, rL324273, rL324278,
rL324384, rL324395, rL324402, rL324626, rL324642, rL324653, rL324654, rL324773, rL324774,
rL324781, rL324784, rL324955, rL324960, rL325816, rL327398, rL327421 )
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.

Reference
   http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
   http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

llvm-svn: 328097
2018-03-21 14:14:55 +00:00
Justin Lebar 038cbc5c13 Re-re-land: Teach CorrelatedValuePropagation to reduce the width of udiv/urem instructions.
Summary:
If the operands of a udiv/urem can be proved to fit within a smaller
power-of-two-sized type, reduce the width of the udiv/urem.
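
A hypothetical sketch of the narrowing (not from the patch's tests), assuming the operands are proven to fit in 16 bits:
  %d = udiv i32 %x, %y
  ; ->
  %x16 = trunc i32 %x to i16
  %y16 = trunc i32 %y to i16
  %d16 = udiv i16 %x16, %y16
  %d   = zext i16 %d16 to i32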

Backed out for causing performance regressions.  Re-landing
because we've determined that these regressions were noise.

Original Differential Revision: https://reviews.llvm.org/D44102

llvm-svn: 328096
2018-03-21 14:08:21 +00:00
Philip Reames 23aed5ef6f [MustExecute] Move isGuaranteedToExecute and related rourtines to Analysis
Next step is to actually merge the implementations and get both implementations tested through the new printer.

llvm-svn: 328055
2018-03-20 22:45:23 +00:00
Shoaib Meenai 3f689c8632 [ObjCARC] Add funclet token to ARC marker
The inline assembly generated for the ARC autorelease elision marker
must have a funclet token if it's emitted inside a funclet, otherwise
the inline assembly (and all subsequent code in the funclet) will be
marked unreachable by WinEHPrepare.

Note that this only applies for the non-O0 case, since at O0, clang
emits the autorelease elision marker itself rather than deferring to the
backend. The fix for clang is handled in a separate change.

Differential Revision: https://reviews.llvm.org/D44641

llvm-svn: 328042
2018-03-20 20:45:41 +00:00
Xin Tong a713ebea24 [MergeICmps] Break eagerly out of loop
llvm-svn: 327972
2018-03-20 12:03:25 +00:00
Xin Tong bdbd97ed9a [MergeICmp] Fix a bug in entry block shuffled to middle of the chain
Summary: Fix a bug in entry block shuffled to middle of the chain.

Reviewers: davide, courbet

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D44642

llvm-svn: 327971
2018-03-20 11:57:54 +00:00
Andrei Elovikov 8b8253fdc7 [LV] Let recordVectorLoopValueForInductionCast to check if IV was created from the cast.
Summary:
It turned out to be error-prone to expect the callers to handle that - better to
leave the decision to this routine and have the required data explicitly
passed to the function.

This handles the case that was missed in r322473 and fixes the assert
mentioned in PR36524.

Reviewers: dorit, mssimpso, Ayal, dcaballe

Reviewed By: dcaballe

Subscribers: Ka-Ka, hiraditya, dneilson, hsaito, llvm-commits

Differential Revision: https://reviews.llvm.org/D43812

llvm-svn: 327960
2018-03-20 09:04:39 +00:00
Sanjay Patel 0ce3086777 [InstCombine] canonicalize fcmp+select to fabs
This is complicated by -0.0 and nan. This is based on the DAG patterns 
as shown in D44091. I'm hoping that we can just remove those DAG folds 
and always rely on IR canonicalization to handle the matching to fabs.
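
A hypothetical sketch of the canonicalization (not from the tests; the FMF needed to make it legal is what the -0.0/nan discussion above is about):
  %neg = fsub float -0.0, %x
  %cmp = fcmp olt float %x, 0.0
  %sel = select i1 %cmp, float %neg, float %x
  ; -> with suitable FMF:
  %sel = call float @llvm.fabs.f32(float %x)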

We would still need to delete the broken code from DAGCombiner to fix 
PR36600:
https://bugs.llvm.org/show_bug.cgi?id=36600

Differential Revision: https://reviews.llvm.org/D44550

llvm-svn: 327858
2018-03-19 15:14:30 +00:00
Alexander Potapenko fa0217276a [MSan] fix the types of RegSaveAreaPtrPtr and OverflowArgAreaPtrPtr
Despite their names, RegSaveAreaPtrPtr and OverflowArgAreaPtrPtr
used to be i8* instead of i8**.

This is important, because these pointers are dereferenced twice
(first in CreateLoad(), then in getShadowOriginPtr()), but for some
reason MSan allowed this - most certainly because it was possible
to optimize getShadowOriginPtr() away at compile time.

Differential revision: https://reviews.llvm.org/D44520

llvm-svn: 327830
2018-03-19 10:08:04 +00:00
Alexander Potapenko 014ff63f24 [MSan] Don't create zero offsets in getShadowPtrForArgument(). NFC
For MSan instrumentation with MS.ParamTLS and MS.ParamOriginTLS being
TLS variables, the CreateAdd() with ArgOffset==0 is a no-op, because
the compiler is able to fold the addition of 0.

But for KMSAN, which receives ParamTLS and ParamOriginTLS from a call
to the runtime library, this introduces a stray instruction which
complicates reading/testing the IR.

Differential revision: https://reviews.llvm.org/D44514

llvm-svn: 327829
2018-03-19 10:03:47 +00:00
Alexander Potapenko e0bafb4359 [MSan] Introduce insertWarningFn(). NFC
This is a step towards the upcoming KMSAN implementation patch.
KMSAN is going to use a different warning function,
__msan_warning_32(uptr origin), so we'd better create the warning
calls in one place.

Differential Revision: https://reviews.llvm.org/D44513

llvm-svn: 327828
2018-03-19 09:59:44 +00:00
Anastasis Grammenos 3a589103a4 [LICM] Salvage DI from dying Instructions
LICM deletes trivially dead instructions which it won't attempt to sink.
Attempt to salvage debug values which reference these instructions.

llvm-svn: 327800
2018-03-18 15:59:19 +00:00
Roman Lebedev e6da3063a5 [InstCombine] peek through unsigned FP casts for zero-equality compares (PR36682)
Summary:
This pattern came up in PR36682 / D44390
https://bugs.llvm.org/show_bug.cgi?id=36682
https://reviews.llvm.org/D44390
https://godbolt.org/g/oKvT5H

See also D44416
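
A hypothetical example of the fold (not from the tests): an unsigned integer converts to +0.0 only when it is zero, so the FP compare can become an integer compare:
  %f = uitofp i32 %x to float
  %c = fcmp oeq float %f, 0.0
  ; ->
  %c = icmp eq i32 %x, 0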

Reviewers: spatel, majnemer, efriedma, arsenm

Reviewed By: spatel

Subscribers: wdng, llvm-commits

Differential Revision: https://reviews.llvm.org/D44424

llvm-svn: 327799
2018-03-18 15:53:02 +00:00
Sanjay Patel 63b1028953 [InstCombine] add nnan requirement for sqrt(x) * sqrt(y) -> sqrt(x*y)
This is similar to D43765.
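
A hypothetical sketch (not from the tests): without nnan the fold is invalid when %x and %y are both negative, since sqrt(x)*sqrt(y) is NaN but sqrt(x*y) is not:
  %a = call fast float @llvm.sqrt.f32(float %x)
  %b = call fast float @llvm.sqrt.f32(float %y)
  %m = fmul fast float %a, %b
  ; ->
  %p = fmul fast float %x, %y
  %m = call fast float @llvm.sqrt.f32(float %p)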

llvm-svn: 327797
2018-03-18 14:32:54 +00:00
Oren Ben Simhon fdd72fd522 [X86] Added support for nocf_check attribute for indirect Branch Tracking
X86 Supports Indirect Branch Tracking (IBT) as part of Control-Flow Enforcement Technology (CET).
IBT instruments ENDBR instructions used to specify valid targets of indirect call / jmp.
The `nocf_check` attribute has two roles in the context of X86 IBT technology:
	1. Appertains to a function - do not add ENDBR instruction at the beginning of the function.
	2. Appertains to a function pointer - do not track the target function of this pointer by adding nocf_check prefix to the indirect-call instruction.

This patch implements `nocf_check` context for Indirect Branch Tracking.
It also auto-generates `nocf_check` prefixes before indirect branches to jump tables that are guarded by range checks.

Differential Revision: https://reviews.llvm.org/D41879

llvm-svn: 327767
2018-03-17 13:29:46 +00:00
Craig Topper 71d69b2ea5 [CorrelatedValuePropagation] Use SelectInst::getCondition/getTrueValue/getFalseValue instead of getOperand for readability. NFC
llvm-svn: 327728
2018-03-16 18:18:47 +00:00
Philip Reames 8a106272e8 [LICM/mustexec] Extend first iteration must execute logic to fcmps
This builds on the work from https://reviews.llvm.org/D44287. It turned out supporting fcmp was much easier than I realized, so let's do that now.

As an aside, our -O3 handling of floating point IVs leaves a lot to be desired. We do convert the float IV to an integer IV, but do so late enough that many other optimizations are missed (e.g. we don't vectorize).

Differential Revision: https://reviews.llvm.org/D44542

llvm-svn: 327722
2018-03-16 16:33:49 +00:00
Brian M. Rzycki f65ddc5fa2 [JumpThreading] Track unreachable BBs to avoid processing
JumpThreading iterates over F until the IR quiesces. Transforming
unreachable BBs increases compile time, and it is also possible for the IR
to never stabilize, causing JumpThreading to hang. An older attempt at
fixing this problem was D3991 where removeUnreachableBlocks(F)
was called before JumpThreading began. This has a few drawbacks:
 * expensive - the routine attempts to fix up the IR to identify
   additional BBs that can be removed along with unreachable BBs.
 * aggressive - does not identify and preserve the shape of the IR.
   At a minimum it does not preserve loop hierarchies.
 * invasive - by altering reachable blocks it may disrupt IR shapes
   that could otherwise have been jump-threaded.

This patch avoids removeUnreachableBlocks(F) and instead tracks
unreachable BBs in a SmallPtrSet using DominatorTree to validate the
initial state of all BBs. We then rely on subsequent passes to identify
and remove these unreachable blocks from F.

Reviewers: dberlin, sebpop, kuhar, dinesh.d

Reviewed by: sebpop, kuhar

Subscribers: hiraditya, uabelho, llvm-commits

Differential Revision: https://reviews.llvm.org/D44177

llvm-svn: 327713
2018-03-16 15:13:47 +00:00
Florian Hahn fc97b6173f [LoopUnroll] Peel off iterations if it makes conditions true/false.
If the loop body contains conditions of the form IndVar < #constant, we
can remove the checks by peeling off #constant iterations.

This improves codegen for PR34364.

Reviewers: mkuper, mkazantsev, efriedma

Reviewed By: mkazantsev

Differential Revision: https://reviews.llvm.org/D43876

llvm-svn: 327671
2018-03-15 21:34:43 +00:00
Philip Reames a21d5f1e18 [LICM] Ignore exits provably not taken on first iteration when computing must execute
It is common to have conditional exits within a loop which are known not to be taken on some iterations, but not necessarily all. This patch extends our reasoning around guaranteed-to-execute (used when establishing whether it's safe to dereference a location from the preheader) to handle the case where an exit is known not to be taken on the first iteration and the instruction of interest *is* known to be executed on the first iteration.

This case comes up in two major ways:
* If we have a range check which we've been unable to eliminate, we frequently know that it doesn't fail on the first iteration.
* Pass ordering. We may have a check which will be eliminated through some sequence of other passes, but depending on the exact pass sequence we might never actually do so or we might miss other optimizations from passes run before the check is finally eliminated.

The initial version (here) is implemented via InstSimplify. At the moment, it catches a few cases, but misses a lot too. I added test cases for missing cases in InstSimplify which I'll follow up on separately. Longer term, we should probably wire SCEV through to here to get much smarter loop aware simplification of the first iteration predicate.

Differential Revision: https://reviews.llvm.org/D44287

llvm-svn: 327664
2018-03-15 21:04:28 +00:00
Diego Caballero cae4994a58 [LV] Test commit. Removing white space.
This is just to check that I have commit access privilege.

llvm-svn: 327656
2018-03-15 19:34:27 +00:00
Philip Reames 422024a1b7 [EarlyCSE] Don't hide earler invariant.scopes
If we've already established an invariant scope with an earlier generation, we don't want to hide it in the scoped hash table with one with a later generation.  I noticed this when working on the invariant-load handling, but it also applies to the invariant.start case as well.

Without this change, my previous patch for invariant-load regresses some cases, so I'm pushing this without waiting for review.  This is why you don't make last minute tweaks to patches to catch "obvious cases" after it's already been reviewed.  Bad Philip!

llvm-svn: 327655
2018-03-15 18:12:27 +00:00
Philip Reames ca587fe0b4 [EarlyCSE] Reuse invariant scopes for invariant load
This is a follow-up to https://reviews.llvm.org/D43716 which rewrites the invariant load handling using the new infrastructure. It's slightly more powerful, but only in somewhat minor ways for the moment. It's not clear that DSE of stores to invariant locations is actually interesting, since it's unclear why IR would contain such a construct to start with.

Note: The submitted version is slightly different than the reviewed one.  I realized the scope could start for an invariant load which was proven redundant and removed.  Added a test case to illustrate that as well.

Differential Revision: https://reviews.llvm.org/D44497

llvm-svn: 327646
2018-03-15 17:29:32 +00:00
Ulrich Weigand f4ceef8d3f [Debug] Retain both copies of debug intrinsics in HoistThenElseCodeToIf
When hoisting common code from the "then" and "else" branches of a condition
to before the "if", the HoistThenElseCodeToIf routine will attempt to merge
the debug location associated with the two original copies of the hoisted
instruction.

This is a problem in the special case where the hoisted instruction is a
debug info intrinsic, since for those the debug location is considered
part of the intrinsic and attempting to modify it may result in invalid
IR.  This is the underlying cause of PR36410.

This patch fixes the problem by handling debug info intrinsics specially:
instead of hoisting one copy and merging the two locations, the code now
simply hoists both copies, each with its original location intact.  Note
that this is still only done in the case where both original copies are
otherwise (i.e. apart from location metadata) identical.

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D44312

llvm-svn: 327622
2018-03-15 12:28:48 +00:00
Fedor Sergeev 194a407bda [New PM][IRCE] port of Inductive Range Check Elimination pass to the new pass manager
There are two nontrivial details here:
* Loop structure update interface is quite different with new pass manager,
  so the code to add new loops was factored out

* BranchProbabilityInfo is not a loop analysis, so it can not be just getResult'ed from
  within the loop pass. It can't even be queried through getCachedResult as LoopCanonicalization
  sequence (e.g. LoopSimplify) might invalidate BPI results.

  Complete solution for BPI will likely take some time to discuss and figure out,
  so for now this was partially solved by making BPI optional in IRCE
  (skipping a couple of profitability checks if it is absent).

Most of the IRCE tests got their corresponding new-pass-manager variant enabled.
Only two of them depend on BPI, both marked with TODO, to be turned on when BPI
starts being available for loop passes.

Reviewers: chandlerc, mkazantsev, sanjoy, asbirlea
Reviewed By: mkazantsev
Differential Revision: https://reviews.llvm.org/D43795

llvm-svn: 327619
2018-03-15 11:01:19 +00:00
Andrei Elovikov f9b8035f3c [LoopUnroll] Ignore ephemeral values when checking full unroll profitability.
Summary:
Before this patch the call graph looks like this in the LoopUnrollPass:

  tryToUnrollLoop
    ApproximateLoopSize
      collectEphemeralValues
      /* Use collected ephemeral values */
    computeUnrollCount
      analyzeLoopUnrollCost
        /* Bail out from the analysis if loop contains CallInst */

This patch moves collection of the ephemeral values to the tryToUnrollLoop
function and passes the collected values into both ApproximateLoopSize (as
before) and additionally starts using them in analyzeLoopUnrollCost:

  tryToUnrollLoop
    collectEphemeralValues
    ApproximateLoopSize(EphValues)
      /* Use EphValues */
    computeUnrollCount(EphValues)
      analyzeLoopUnrollCost(EphValues)
        /* Ignore ephemeral values - they don't contribute to the final cost */
        /* Bail out from the analysis if loop contains CallInst */
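
A hedged sketch of the collection step (CodeMetrics::collectEphemeralValues is the assumed helper behind collectEphemeralValues above; the surrounding unroller plumbing is omitted):

  #include "llvm/ADT/SmallPtrSet.h"
  #include "llvm/Analysis/AssumptionCache.h"
  #include "llvm/Analysis/CodeMetrics.h"
  #include "llvm/Analysis/LoopInfo.h"

  // Collect the ephemeral values once, then skip them when counting the
  // instructions the unroller would actually have to pay for.
  static unsigned countNonEphemeralInstructions(const llvm::Loop *L,
                                                llvm::AssumptionCache *AC) {
    llvm::SmallPtrSet<const llvm::Value *, 32> EphValues;
    llvm::CodeMetrics::collectEphemeralValues(L, AC, EphValues);
    unsigned Count = 0;
    for (llvm::BasicBlock *BB : L->blocks())
      for (const llvm::Instruction &I : *BB)
        if (!EphValues.count(&I))      // ephemeral values contribute no cost
          ++Count;
    return Count;
  }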

Reviewers: mzolotukhin, evstupac, sanjoy

Reviewed By: evstupac

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D43931

llvm-svn: 327617
2018-03-15 09:59:15 +00:00
George Burgess IV cedfa6da81 Remove unused variable; NFC
llvm-svn: 327597
2018-03-15 02:58:36 +00:00
Matt Davis 9407bb5f54 [CleanUp] Remove NumInstructions field from LoopVectorizer's RegisterUsage struct.
Summary:
This variable is largely unused, aside from reporting the number of instructions in DEBUG builds.

The only use of NumInstructions is in debug output to represent the LoopSize.  That value can be misleading as it also includes metadata instructions (e.g., DBG_VALUE) which have no real impact. If we do choose to keep this around, we probably should guard it with a DEBUG macro, as it's not used in production builds.

Reviewers: majnemer, congh, rengolin

Reviewed By: rengolin

Subscribers: llvm-commits, rengolin

Differential Revision: https://reviews.llvm.org/D44495

llvm-svn: 327589
2018-03-14 23:30:31 +00:00
Philip Reames 0adbb19409 [EarlyCSE] Exploit open ended invariant.start scopes
If we have an invariant.start with no corresponding invariant.end, then the memory location becomes invariant indefinitely after the invariant.start. As a result, anything dominated by the start is guaranteed to see the value the memory location had when the invariant.start executed.

This patch adds an AvailableInvariants table which tracks the generation a particular memory location became invariant and then uses that information to allow value forwarding that would otherwise be disallowed by potentially aliasing stores. (Reminder: In EarlyCSE everything clobbers everything by default.)
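
One way to picture the bookkeeping, as a conceptual C++ model (not the pass's actual data structures; all names are illustrative):

  #include <map>

  // A location that became invariant at generation G may forward a value
  // recorded at generation >= G to any later load, even if may-alias stores
  // bumped the generation in between.
  struct InvariantScopes {
    std::map<const void *, unsigned> StartGen;   // location -> generation

    void startInvariant(const void *Loc, unsigned Gen) { StartGen[Loc] = Gen; }

    bool mayForward(const void *Loc, unsigned ValueRecordedAtGen) const {
      auto It = StartGen.find(Loc);
      return It != StartGen.end() && ValueRecordedAtGen >= It->second;
    }
  };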

This should be compatible with the MemorySSA variant, but the design is generational. We can and should add first-class support for invariant.start within MemorySSA at a later time.  I took a quick look at doing so, but probably need some input from a MemorySSA expert.

Differential Revision: https://reviews.llvm.org/D43716

llvm-svn: 327577
2018-03-14 21:35:06 +00:00
Hiroshi Yamauchi e6a3dc7699 Simplify more cases of logical ops of masked icmps.
Summary:
For example,

((X & 255) != 0) && ((X & 15) == 8) -> ((X & 15) == 8).
((X & 7) != 0) && ((X & 15) == 8) -> false.
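
Both folds can be checked exhaustively over the low bits; a standalone C++ check (illustration only, not the InstCombine code):

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t X = 0; X < 0x10000; ++X) {
      // ((X & 255) != 0) && ((X & 15) == 8)  ==>  ((X & 15) == 8)
      assert((((X & 255) != 0) && ((X & 15) == 8)) == ((X & 15) == 8));
      // ((X & 7) != 0) && ((X & 15) == 8)    ==>  false
      assert(!(((X & 7) != 0) && ((X & 15) == 8)));
    }
    return 0;
  }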

Reviewers: davidxl

Reviewed By: davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D43835

llvm-svn: 327450
2018-03-13 21:13:18 +00:00
Anna Thomas 5ac72f94f3 Test Commit NFC. Updated comment
llvm-svn: 327436
2018-03-13 19:38:45 +00:00
Haicheng Wu aee0af3e23 [SLP] clean some formats
llvm-svn: 327433
2018-03-13 18:44:19 +00:00
Rafael Espindola f5220fb68f [ThinLTO] Clear dllimport when setting dso_local.
This is PR36686.

If a user of a library is LTOed with that library we take the
opportunity to set dso_local, but we don't clear dllimport, which
creates an invalid IR.

llvm-svn: 327408
2018-03-13 15:24:51 +00:00
Sanjay Patel 204edeca56 [InstCombine] fix fmul reassociation to avoid creating an extra fdiv
This was supposed to be an NFC refactoring that will eventually allow
eliminating the isFast() predicate, but there's a rare possibility
that we would pessimize the code as shown in the test case because
we failed to check 'hasOneUse()' properly. This version also removes
an inefficiency of the old code; we would look for: 
(X * C) * C1 --> X * (C * C1)
...but that pattern is always handled by 
SimplifyAssociativeOrCommutative().

llvm-svn: 327404
2018-03-13 14:46:32 +00:00
Daniel Neilson 41e781d5f1 [SROA] Take advantage of separate alignments for memcpy source and destination
Summary:
This change is part of step five in the series of changes to remove alignment argument from
memcpy/memmove/memset in favour of alignment attributes. In particular, this changes the
SROA pass to cease using the old getAlignment() & setAlignment() APIs of MemoryIntrinsic in
favour of getting source & dest specific alignments through the new API. This allows us
to enhance visitMemTransferInst to be more aggressive setting the alignment in memcpy
calls that it creates, as well as to only change the alignment of a memcpy/memmove
argument that it replaces.
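
A hedged sketch of the kind of call the pass now emits; the alignment parameter types differ across LLVM releases (plain unsigned at the time of this patch, MaybeAlign in newer trees), and the helper name is made up:

  #include "llvm/IR/IRBuilder.h"

  // Emit a memcpy carrying independent source and destination alignments.
  static llvm::CallInst *emitCopy(llvm::IRBuilder<> &B, llvm::Value *Dst,
                                  unsigned DstAlign, llvm::Value *Src,
                                  unsigned SrcAlign, llvm::Value *Size) {
    // Old single-alignment form: B.CreateMemCpy(Dst, Src, Size, Align);
    return B.CreateMemCpy(Dst, DstAlign, Src, SrcAlign, Size);
  }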

Steps:
Step 1) Remove alignment parameter and create alignment parameter attributes for
memcpy/memmove/memset. ( rL322965, rC322964, rL322963 )
Step 2) Expand the IRBuilder API to allow creation of memcpy/memmove with differing
source and dest alignments. ( rL323597 )
Step 3) Update Clang to use the new IRBuilder API. ( rC323617 )
Step 4) Update Polly to use the new IRBuilder API. ( rL323618 )
Step 5) Update LLVM passes that create memcpy/memmove calls to use the new IRBuilder API,
and those that use MemIntrinsicInst::[get|set]Alignment() to use [get|set]DestAlignment()
and [get|set]SourceAlignment() instead. ( rL323886, rL323891, rL324148, rL324273, rL324278,
rL324384, rL324395, rL324402, rL324626, rL324642, rL324653, rL324654, rL324773, rL324774,
rL324781, rL324784, rL324955, rL324960, rL325816 )
Step 6) Remove the single-alignment IRBuilder API for memcpy/memmove, and the
MemIntrinsicInst::[get|set]Alignment() methods.

Reference
   http://lists.llvm.org/pipermail/llvm-dev/2015-August/089384.html
   http://lists.llvm.org/pipermail/llvm-commits/Week-of-Mon-20151109/312083.html

Reviewers: chandlerc, bollu, efriedma

Reviewed By: efriedma

Subscribers: efriedma, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D42974

llvm-svn: 327398
2018-03-13 14:25:33 +00:00
Eugene Leviant 6f42a2cd91 [Evaluator] Evaluate load/store with bitcast
Differential revision: https://reviews.llvm.org/D43457

llvm-svn: 327381
2018-03-13 10:19:50 +00:00
Clement Courbet 9f0b3170bc [MergeICmps] Make sure that the comparison only has one use.
Summary: Fixes PR36557.

Reviewers: trentxintong, spatel

Subscribers: mstorsjo, llvm-commits

Differential Revision: https://reviews.llvm.org/D44083

llvm-svn: 327372
2018-03-13 07:05:55 +00:00
Vlad Tsyrklevich aab6000684 Reland r327041: [ThinLTO] Keep available_externally symbols live
Summary:
This change fixes PR36483. The bug was originally introduced by a change
that marked non-prevailing symbols dead. This broke LowerTypeTests
handling of available_externally functions, which are non-prevailing.
LowerTypeTests uses liveness information to avoid emitting thunks for
unused functions.

Marking available_externally functions dead is incorrect; the functions
are used even though the function definitions are not. This change keeps them
live, and lets the EliminateAvailableExternally/GlobalDCE passes remove
them later instead.

(Reland with a suspected fix for a unit test failure I haven't been able
to reproduce locally)

Reviewers: pcc, tejohnson

Reviewed By: tejohnson

Subscribers: grimar, mehdi_amini, inglorion, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D43690

llvm-svn: 327360
2018-03-13 05:08:48 +00:00
Saleem Abdulrasool f159a389df ObjCARC: address review comments from majnemer
I forgot to incorporate these comments into the original revision.  This
is just code cleanup addressing the feedback, NFC.

llvm-svn: 327351
2018-03-12 23:48:20 +00:00
Volkan Keles 4ecdb44a64 BlockExtractor: Don’t delete functions directly
Blocks may have function calls, so don’t erase functions
directly to avoid erasing a function that has a user.

llvm-svn: 327340
2018-03-12 22:28:18 +00:00
Saleem Abdulrasool 8b342680bf ObjCARC: teach the cloner about funclets
In the case that the CallInst that is being moved has an associated
operand bundle which is a funclet, the move will construct an invalid
instruction.  The new site will have a different token and needs to be
reassociated with the new instruction.

Unfortunately, there is no way to alter the bundle after the
construction of the instruction.  Replace the call instruction cloning
with a custom helper to clone the instruction and reassociate the
funclet token.

llvm-svn: 327336
2018-03-12 21:46:09 +00:00
Vedant Kumar 3a408538f0 Remove the LoopInstSimplify pass (-loop-instsimplify)
LoopInstSimplify is unused and untested. Reading through the commit
history the pass also seems to have a high maintenance burden.

It would be best to retire the pass for now. It should be easy to
recover if we need something similar in the future.

Differential Revision: https://reviews.llvm.org/D44053

llvm-svn: 327329
2018-03-12 20:49:42 +00:00
Michael Zolotukhin a3d8ef0f08 Improve caching scheme in ProvenanceAnalysis.
Summary:
ProvenanceAnalysis::related(A, B) currently memoizes its results, and on big
tests the cache grows too large, and we're spending most of the time
growing/looking through DenseMap.

This patch reduces the size of the cache by normalizing keys first: we do that
by calling GetUnderlyingObjCPtr on the input values. The results of
GetUnderlyingObjCPtr are also memoized in a separate cache.
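
The shape of the scheme, as an illustrative C++ sketch (names and key types are placeholders, not the actual ObjCARC code):

  #include <map>
  #include <utility>

  struct ProvenanceCache {
    std::map<const void *, const void *> UnderlyingCache;                 // value -> underlying object
    std::map<std::pair<const void *, const void *>, bool> RelatedCache;   // normalized pair -> result

    const void *getUnderlying(const void *V) {
      auto It = UnderlyingCache.find(V);
      if (It != UnderlyingCache.end())
        return It->second;
      const void *U = stripToUnderlyingObject(V);   // stands in for GetUnderlyingObjCPtr
      UnderlyingCache.emplace(V, U);
      return U;
    }

    bool related(const void *A, const void *B) {
      std::pair<const void *, const void *> Key(getUnderlying(A), getUnderlying(B));
      auto It = RelatedCache.find(Key);
      if (It != RelatedCache.end())
        return It->second;
      bool R = computeRelated(Key.first, Key.second);   // the expensive query
      RelatedCache.emplace(Key, R);
      return R;
    }

    // Stand-ins for the real analysis entry points.
    static const void *stripToUnderlyingObject(const void *V) { return V; }
    static bool computeRelated(const void *, const void *) { return true; }
  };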

The patch doesn't bring noticeable changes to compile time on CTMark; however,
it significantly helps one of our internal tests.

Reviewers: gottesmm

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D44270

llvm-svn: 327328
2018-03-12 20:36:25 +00:00
Craig Topper ee99aa4dd0 [InstCombine] Replace calls to getNumUses with hasNUses or hasNUsesOrMore
getNumUses is a linear time operation. It traverses the use list to the end and counts as it goes. Since we are only interested in small constant counts, we should use hasNUses or hasNUsesOrMore, which terminate the traversal as soon as they can provide the answer.

There are still two other locations in InstCombine, but changing those would force a rebase of D44266 which if accepted would remove them.
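
For reference, a minimal sketch of the replacement predicates on llvm::Value (the wrapper names are made up):

  #include "llvm/IR/Value.h"

  // These predicates stop walking the use list as soon as the answer is
  // known, unlike getNumUses(), which always walks to the end.
  static bool hasExactlyTwoUses(const llvm::Value *V) { return V->hasNUses(2); }
  static bool hasAtLeastTwoUses(const llvm::Value *V) { return V->hasNUsesOrMore(2); }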

Differential Revision: https://reviews.llvm.org/D44398

llvm-svn: 327315
2018-03-12 18:46:05 +00:00
Craig Topper 3b4ad9c12d [CallSiteSplitting] Use !Instruction::use_empty instead of checking for a non-zero return from getNumUses
getNumUses is a linear operation. It walks a linked list to get a count. So in this case it's better to just ask whether there are any users rather than how many.

llvm-svn: 327314
2018-03-12 18:40:59 +00:00
Eugene Leviant 19e238746b [ThinLTO] Recommit of import global variables
This was reverted in r326638 due to link problems and fixed
afterwards.

llvm-svn: 327254
2018-03-12 10:30:50 +00:00
Justin Lebar 24b6640b1b Back out "Re-land: Teach CorrelatedValuePropagation to reduce the width of udiv/urem instructions."
This reverts r326908, originally landed as D44102.

Reverted for causing performance regressions on x86.  (These regressions
are not yet understood.)

llvm-svn: 327252
2018-03-12 09:26:09 +00:00
Florian Hahn a7dcfa746e [PartialInlining] Use isInlineViable to detect constructs preventing inlining.
Use isInlineViable to prevent inlining of functions with non-inlinable
constructs, in case cost analysis is skipped.

Reviewers: efriedma, sfertile, davide, davidxl

Reviewed By: efriedma

Differential Revision: https://reviews.llvm.org/D42846

llvm-svn: 327207
2018-03-10 14:53:44 +00:00
Ulrich Weigand 019dd2316d Revert "[Debug] Retain both sets of debug intrinsics in HoistThenElseCodeToIf"
This reverts commit r327175 as problems in debug info generation were shown.

llvm-svn: 327176
2018-03-09 22:00:10 +00:00
Ulrich Weigand fa4e63c0d6 [Debug] Retain both sets of debug intrinsics in HoistThenElseCodeToIf
When hoisting common code from the "then" and "else" branches of a condition
to before the "if", there is no need to require that debug intrinsics match
before moving them (and merging them).  Instead, we can simply always keep
all debug intrinsics from both sides of the "if".

This fixes PR36410, which describes a problem where as a result of the attempt
to merge debug locations for two debug intrinsics we end up with an invalid
intrinsic, where the scope indicated in the !dbg location no longer matches
the scope of the variable tracked by the intrinsic.

In addition, this has the benefit that we no longer throw away information
that is actually still valid, helping to generate better debug data.

Reviewed By: vsk

Differential Revision: https://reviews.llvm.org/D44312

llvm-svn: 327175
2018-03-09 21:37:07 +00:00
Renato Golin 038ede2a16 [NFC] Consolidate six getPointerOperand() utility functions into one place
There are six separate instances of getPointerOperand() utility.
LoopVectorize.cpp has one of them,
and I don't want to create a 7th one while I'm trying to move
LoopVectorizationLegality into a separate file
(eventual objective is to move it to Analysis tree).

See http://lists.llvm.org/pipermail/llvm-dev/2018-February/120999.html
for llvm-dev discussions

Closes D43323.

Patch by Hideki Saito <hideki.saito@intel.com>.

llvm-svn: 327173
2018-03-09 21:05:58 +00:00
Peter Collingbourne 2974856ad4 Use branch funnels for virtual calls when retpoline mitigation is enabled.
The retpoline mitigation for variant 2 of CVE-2017-5715 inhibits the
branch predictor, and as a result it can lead to a measurable loss of
performance. We can reduce the performance impact of retpolined virtual
calls by replacing them with a special construct known as a branch
funnel, which is an instruction sequence that implements virtual calls
to a set of known targets using a binary tree of direct branches. This
allows the processor to speculatively execute valid implementations of the
virtual function without allowing for speculative execution of calls
to arbitrary addresses.
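
A hypothetical C++-level illustration of the idea (not the emitted code; the target and dispatch names are made up):

  int implA(int);
  int implB(int);
  int implC(int);

  // An indirect virtual call is replaced by comparisons against the known
  // targets followed by direct calls, so only valid implementations can be
  // executed speculatively. The real lowering arranges the comparisons as a
  // binary tree over the sorted target addresses; a chain is shown for clarity.
  int dispatch(int (*fn)(int), int arg) {
    if (fn == &implA) return implA(arg);
    if (fn == &implB) return implB(arg);
    if (fn == &implC) return implC(arg);
    __builtin_trap();   // unknown target: never a valid devirtualized callee
  }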

This patch extends the whole-program devirtualization pass to replace
certain virtual calls with calls to branch funnels, which are
represented using a new llvm.icall.jumptable intrinsic. It also extends
the LowerTypeTests pass to recognize the new intrinsic, generate code
for the branch funnels (x86_64 only for now) and lay out virtual tables
as required for each branch funnel.

The implementation supports full LTO as well as ThinLTO, and extends the
ThinLTO summary format used for whole-program devirtualization to
support branch funnels.

For more details see RFC:
http://lists.llvm.org/pipermail/llvm-dev/2018-January/120672.html

Differential Revision: https://reviews.llvm.org/D42453

llvm-svn: 327163
2018-03-09 19:11:44 +00:00
Chad Rosier 95d9ccb2a0 [JumpThreading] Don't restrict cast-traversal to i1
In r263618, JumpThreading learned to look through simple cast instructions, but
only if the source of those cast instructions was a phi/cmp i1 (in an effort to
limit compile time effects). I think this condition is too restrictive. For
switches with limited value range, InstCombine will readily introduce an extra
trunc instruction to a smaller integer type (e.g. from i8 to i2), leaving us in
the somewhat perverse situation that jump-threading would work before running
instcombine, but not after. Since instcombine produces this pattern, I think we
need to consider it canonical and support it in JumpThreading.  In general,
for limiting recursion, I think the existing restriction to phi and cmp nodes
should be sufficient to avoid looking through unprofitable chains of
instructions.

Patch by Keno Fischer!
Differential Revision: https://reviews.llvm.org/D42262

llvm-svn: 327150
2018-03-09 16:43:46 +00:00
Renato Golin 6b62039bb0 [LV] Fix vectorizer's isUniform() abuse triggers assert in SCEV
Fixes PR36311.

See more detailed analysis in
https://bugs.llvm.org/show_bug.cgi?id=36311.

isUniform() information is recomputed after LV started transforming the
underlying IR and that triggered an assert in SCEV.

From vectorizer's architectural perspective, such information, while
still useful in vector code gen, should not be recomputed after the
start of transforming the LLVM IR. Instead, we should collect and cache
such information during the analysis phase of LV and use the cached info
during code gen.

From the symptom perspective, this assert as it stands right now is not
very useful. Legality already rejected loops that would trigger the
assert. As such, commenting out the assert is NFC from vectorizer's
functionality perspective. On top of that, just above the assertion, we
check for unit-strided load/store or gather/scatter. Addresses can't be
uniform below that check.

From vectorization theory point of view, we don't have to reject all
cases of stores to uniform addresses. Eventually, we should support
safe/profitable cases.

This patch resolves the issue by removing the useless assertion that is
invoking LAA's isUniform(), which requires an up-to-date DomTree; once
vector code gen starts modifying the CFG, we no longer have an up-to-date
DomTree.

Patch by Hideki Saito <hideki.saito@intel.com>.

llvm-svn: 327109
2018-03-09 10:31:31 +00:00
Eric Christopher 3caa0fd050 Revert "[ThinLTO] Keep available_externally symbols live"
This reverts commit r327041 and the followup attempts at fixing the testcase as they're still failing.

llvm-svn: 327094
2018-03-09 01:25:18 +00:00
Adrian Prantl 5b477be72a LowerDbgDeclare: ignore dbg.declares for allocas with volatile access
There is no point in lowering a dbg.declare describing an alloca that
has volatile loads or stores as users, since the alloca cannot be
elided. Lowering the dbg.declare will result in larger debug info that
may also have worse coverage than just describing the alloca.

rdar://problem/34496278

llvm-svn: 327092
2018-03-09 00:45:04 +00:00
Philip Reames fbffd126b8 [NFC] Factor out a helper function for checking if a block has a potential early implicit exit.
llvm-svn: 327065
2018-03-08 21:25:30 +00:00
Kuba Mracek 8842da8e07 [asan] Fix a false positive ODR violation due to LTO ConstantMerge pass [llvm part, take 3]
This fixes a false positive ODR violation that is reported by ASan when using LTO. In cases, where two constant globals have the same value, LTO will merge them, which breaks ASan's ODR detection.

Differential Revision: https://reviews.llvm.org/D43959

llvm-svn: 327061
2018-03-08 21:02:18 +00:00
Kuba Mracek f0bcbfef5c Revert r327053.
llvm-svn: 327055
2018-03-08 20:13:39 +00:00
Kuba Mracek 584bd10803 [asan] Fix a false positive ODR violation due to LTO ConstantMerge pass [llvm part, take 2]
This fixes a false positive ODR violation that is reported by ASan when using LTO. In cases, where two constant globals have the same value, LTO will merge them, which breaks ASan's ODR detection.

Differential Revision: https://reviews.llvm.org/D43959

llvm-svn: 327053
2018-03-08 20:05:45 +00:00
Vlad Tsyrklevich 7b66ef1036 [ThinLTO] Keep available_externally symbols live
Summary:
This change fixes PR36483. The bug was originally introduced by a change
that marked non-prevailing symbols dead. This broke LowerTypeTests
handling of available_externally functions, which are non-prevailing.
LowerTypeTests uses liveness information to avoid emitting thunks for
unused functions.

Marking available_externally functions dead is incorrect; the functions
are used even though the function definitions are not. This change keeps them
live, and lets the EliminateAvailableExternally/GlobalDCE passes remove
them later instead.

I've also enabled EliminateAvailableExternally for all optimization
levels, I believe it being disabled for O1 was an oversight.

Reviewers: pcc, tejohnson

Reviewed By: tejohnson

Subscribers: grimar, mehdi_amini, inglorion, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D43690

llvm-svn: 327041
2018-03-08 18:48:03 +00:00
Kuba Mracek e834b22874 Revert r327029
llvm-svn: 327033
2018-03-08 17:32:00 +00:00
Kuba Mracek 0e06d37dba [asan] Fix a false positive ODR violation due to LTO ConstantMerge pass [llvm part]
This fixes a false positive ODR violation that is reported by ASan when using LTO. In cases, where two constant globals have the same value, LTO will merge them, which breaks ASan's ODR detection.

Differential Revision: https://reviews.llvm.org/D43959

llvm-svn: 327029
2018-03-08 17:24:06 +00:00
Farhana Aleen 89196642f7 [AMDGPU] Increased vector length for global/constant loads.
Summary: GCN ISA supports instructions that can read 16 consecutive dwords from memory through the scalar data cache;
         LoadStoreVectorizer should take advantage of the wider vector length and pack 16/8 elements of dwords/quadwords.

Author: FarhanaAleen

Reviewed By: rampitec

Subscribers: llvm-commits, AMDGPU

Differential Revision: https://reviews.llvm.org/D44179

llvm-svn: 326910
2018-03-07 17:09:18 +00:00
Justin Lebar eccfbf1bcd Re-land: Teach CorrelatedValuePropagation to reduce the width of udiv/urem instructions.
Summary:
If the operands of a udiv/urem can be proved to fit within a smaller
power-of-two-sized type, reduce the width of the udiv/urem.

Backed out for failing an assert in clang bootstrap builds.  Re-landing
with a fix for handling non-power-of-two inputs (e.g. udiv i24).
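
For intuition, a standalone check of the underlying fact (illustration only):

  #include <cassert>
  #include <cstdint>

  int main() {
    // If both operands are known to fit in 8 bits, a 32-bit udiv/urem gives
    // the same result as the 8-bit operation, so the narrower type suffices.
    for (uint32_t a = 0; a < 256; ++a)
      for (uint32_t b = 1; b < 256; ++b) {
        assert(a / b == uint32_t(uint8_t(a) / uint8_t(b)));
        assert(a % b == uint32_t(uint8_t(a) % uint8_t(b)));
      }
    return 0;
  }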

Original Differential Revision: https://reviews.llvm.org/D44102

llvm-svn: 326908
2018-03-07 16:56:49 +00:00
Farhana Aleen 347d12b4ce Revert "[AMDGPU] Widened vector length for global/constant address space."
This reverts commit ce988cc100dc65e7c6c727aff31ceb99231cab03.

llvm-svn: 326907
2018-03-07 16:55:27 +00:00
Farhana Aleen 0d03d0588d [AMDGPU] Widened vector length for global/constant address space.
llvm-svn: 326904
2018-03-07 16:29:05 +00:00
Justin Lebar eeeb0eb049 Revert rL326898: "Teach CorrelatedValuePropagation to reduce the width of udiv/urem instructions."
Breaks bootstrap builds: clang built with this patch asserts while
building MCDwarf.cpp: Assertion `castIsValid(op, S, Ty) && "Invalid
cast!"' failed.

llvm-svn: 326900
2018-03-07 16:05:43 +00:00
Justin Lebar cb9e89c39b Teach CorrelatedValuePropagation to reduce the width of udiv/urem instructions.
Summary:
If the operands of a udiv/urem can be proved to fit within a smaller
power-of-two-sized type, reduce the width of the udiv/urem.

Reviewers: spatel, sanjoy

Subscribers: llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D44102

llvm-svn: 326898
2018-03-07 15:11:13 +00:00
Sven van Haastregt 19f531d31e [LoadStoreVectorizer] Differentiate between <1 x T> and T
The LoadStoreVectorizer thought that <1 x T> and T were the same types
when merging stores, leading to a crash later.

Patch by Erik Hogeman.

Differential Revision: https://reviews.llvm.org/D44014

llvm-svn: 326884
2018-03-07 10:29:28 +00:00
Evgeny Stupachenko 204ade4102 Add early exit on reassociation of 0 expression.
Summary:

Before this patch, an attempt to reassociate ((v * 16) * 0) * 1 fell into an infinite loop.

Reviewers: pankajchawla

Differential Revision: http://reviews.llvm.org/D41467

From: Evgeny Stupachenko <evstupac@gmail.com>
                         <evgeny.v.stupachenko@intel.com>
llvm-svn: 326861
2018-03-07 02:17:08 +00:00
Eugene Zelenko e2fc88a2fe [Transforms] Add missing header for InstructionCombining.cpp, in order to export LLVMInitializeInstCombine as extern "C". Fixes PR35947.
Patch by Brenton Bostick.

Differential revision: https://reviews.llvm.org/D44140

llvm-svn: 326843
2018-03-06 23:06:13 +00:00
Sebastian Pop bf6e1c26cf DA: remove uses of GEP, only ask SCEV
The Dependence Analysis (DA) has been broken for quite some time,
as it uses the GEP representation to "identify" multi-dimensional arrays.
It even wrongly detects multi-dimensional arrays in singly nested loops:

from test/Analysis/DependenceAnalysis/Coupled.ll, example @couple6
;; for (long int i = 0; i < 50; i++) {
;; A[i][3*i - 6] = i;
;; *B++ = A[i][i];

DA used to detect two subscripts, which makes no sense in the LLVM IR
or in C/C++ semantics, as there are no guarantees as in Fortran of
subscripts not overlapping into a next array dimension:

maximum nesting levels = 1
SrcPtrSCEV = %A
DstPtrSCEV = %A
using GEPs
subscript 0
    src = {0,+,1}<nuw><nsw><%for.body>
    dst = {0,+,1}<nuw><nsw><%for.body>
    class = 1
    loops = {1}
subscript 1
    src = {-6,+,3}<nsw><%for.body>
    dst = {0,+,1}<nuw><nsw><%for.body>
    class = 1
    loops = {1}
Separable = {}
Coupled = {1}

With the current patch, DA will correctly work on only one dimension:

maximum nesting levels = 1
SrcSCEV = {(-2424 + %A)<nsw>,+,1212}<%for.body>
DstSCEV = {%A,+,404}<%for.body>
subscript 0
    src = {(-2424 + %A)<nsw>,+,1212}<%for.body>
    dst = {%A,+,404}<%for.body>
    class = 1
    loops = {1}
Separable = {0}
Coupled = {}

This change removes all uses of GEP from DA, and we now only rely
on the SCEV representation.

The patch does not turn on -da-delinearize by default, and so the DA analysis
will be more conservative in the case of multi-dimensional memory accesses in
nested loops.

I disabled some interchange tests, as the DA is not able to disambiguate
the dependence anymore. To make DA stronger, we may need to
compute a bound on the number of iterations based on the access functions
and array dimensions.

The patch cleans up all the CHECKs in test/Transforms/LoopInterchange/*.ll to
avoid checking for snippets of LLVM IR: this form of checking is very hard to
maintain. Instead, we now check for output of the pass that is more meaningful
than dozens of lines of LLVM IR. Some tests now require -debug messages and thus
are only enabled with asserts.

Patch written by Sebastian Pop and Aditya Kumar.

Differential Revision: https://reviews.llvm.org/D35430

llvm-svn: 326837
2018-03-06 21:55:59 +00:00
Sanjay Patel 1f2f5d18d3 [InstCombine] simplify min/max canonicalization; NFCI
llvm-svn: 326828
2018-03-06 19:01:18 +00:00
Sanjay Patel 7ed0bc26ac [ValueTracking] move helpers for SelectPatterns from InstCombine to ValueTracking
Most of the folds based on SelectPatternResult belong in InstSimplify rather than
InstCombine, so the helper code should be available to other passes/analysis.

llvm-svn: 326812
2018-03-06 16:57:55 +00:00
Florian Hahn 517dc51c48 [CallSiteSplitting] Do not crash when BB's terminator changes.
Change doCallSiteSplitting to iterate until we reach the terminator instruction.
tryToSplitCallSite can replace BB's terminator in case BB is a successor of
itself. Then IE will be invalidated and we also have to check the current
terminator.

Reviewers: junbuml, davidxl, davide, fhahn

Reviewed By: fhahn, junbuml

Differential Revision: https://reviews.llvm.org/D43824

llvm-svn: 326793
2018-03-06 14:00:58 +00:00
Florian Hahn f0a25f7253 [CloneFunction] Support BB == PredBB in DuplicateInstructionsInSplit.
In case PredBB == BB and StopAt == BB's terminator, StopAt != &*BI will
fail, because BB's terminator instruction gets replaced.

By using BB.getTerminator() we get the current terminator which we can use
to compare.

Reviewers: sanjoy, anna, reames

Reviewed By: anna

Differential Revision: https://reviews.llvm.org/D43822

llvm-svn: 326779
2018-03-06 13:12:32 +00:00
Xin Tong 8fd561f572 [MergeICmp] Simplify how BCECmpBlock instructions are blacklisted
llvm-svn: 326761
2018-03-06 02:24:02 +00:00
Xin Tong 98af9efca5 [MergeICmp] Fix printing. NFC
llvm-svn: 326760
2018-03-06 02:04:57 +00:00
Daniel Neilson 82daad31fe [RewriteStatepoints] Fix stale parse points
Summary:
RewriteStatepointsForGC collects parse points for further processing.
During the collection if a callsite is found in an unreachable block
(DominatorTree::isReachableFromEntry()) then all unreachable blocks are
removed by removeUnreachableBlocks(). Some of the removed blocks could
have been reachable according to DominatorTree::isReachableFromEntry().
In this case the collected parse points became stale and resulted in a
crash when accessed.

The fix is to unconditionally canonicalize the IR to
removeUnreachableBlocks and then collect the parse points.

The added test crashes with the old version and passes with this patch.

Patch by Yevgeny Rouban!

Reviewed by: Anna

Differential Revision: https://reviews.llvm.org/D43929

llvm-svn: 326748
2018-03-05 22:27:30 +00:00
Daniel Neilson bdda115e19 [InstCombine] Don't blow up in foldICmpWithCastAndCast on vector icmp instructions.
Summary:
Presently, InstCombiner::foldICmpWithCastAndCast() implicitly assumes that it is
only invoked with icmp instructions of integer type. If that assumption is broken,
and it is called with an icmp of vector type, then it fails (asserts/crashes).

This patch addresses the deficiency. It allows it to simplify
icmp (ptrtoint x), (ptrtoint/c) of vector type into a compare of the inputs,
much as is done when the type is integer.

Reviewers: apilipenko, fedor.sergeev, mkazantsev, anna

Reviewed By: anna

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D44063

llvm-svn: 326730
2018-03-05 18:05:51 +00:00
Craig Topper 8452faceae [InstCombine] Add constant vector support to getMinimumFPType for visitFPTrunc.
This patch teaches getMinimumFPType to support shrinking a vector of ConstantFPs. This should improve our ability to combine vector fptrunc with fp binops.

Differential Revision: https://reviews.llvm.org/D43774

llvm-svn: 326729
2018-03-05 18:04:12 +00:00
Florian Hahn 0b7c6422fb [IPSCCP] Add getCompare which returns either true, false, undef or null.
getCompare returns true, false or undef constants if the comparison can
be evaluated, or nullptr if it cannot. This is in line with what
ConstantExpr::getCompare returns. It also allows us to use
ConstantExpr::getCompare for comparing constants.

Reviewers: davide, mssimpso, dberlin, anna

Reviewed By: davide

Differential Revision: https://reviews.llvm.org/D43761

llvm-svn: 326720
2018-03-05 17:33:50 +00:00
Sanjay Patel 53ffabdfcb [CVP] fix formatting; NFC
llvm-svn: 326711
2018-03-05 16:08:34 +00:00
Xin Tong 8345c0e3a5 [MergeICmp] We can discard initial blocks that do other work
Summary:
We can discard initial blocks that do other work;
we do not need to limit ourselves to just the first block in the chain.

Reviewers: courbet, davide

Reviewed By: courbet

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D44029

llvm-svn: 326698
2018-03-05 13:54:47 +00:00
Clement Courbet 34be1b0288 [MergeICmps][NFC] Improve logging.
llvm-svn: 326683
2018-03-05 08:21:47 +00:00
Fedor Indutny 364b9c2adb [CallSiteSplitting] fix use after-free
Iterating through predecessors of `TailBB` while removing their
terminators leads to a use-after-free, because the predecessor list is
changing on each removal.

llvm-svn: 326668
2018-03-03 22:34:38 +00:00
Fedor Indutny f9e09c1dd0 [CallSiteSplitting] properly split musttail calls
Summary:
`musttail` calls can't be naively split. The split blocks must
include not only the call instruction itself, but also (optional)
`bitcast` and `return` instructions that follow it.

Clone `bitcast` and `ret`, place them into the split blocks, and
remove the tail block when done.

Reviewers: junbuml, mcrosier, davidxl, davide, fhahn

Reviewed By: fhahn

Subscribers: JDevlieghere, llvm-commits

Differential Revision: https://reviews.llvm.org/D43729

llvm-svn: 326666
2018-03-03 21:40:14 +00:00
Sanjay Patel 1a8d5c3d1f [InstCombine] (~X) - (~Y) --> Y - X
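
The identity follows from ~X == -X - 1 in two's complement, so (~X) - (~Y) == (-X - 1) - (-Y - 1) == Y - X; a standalone check for i8 (illustration only, not the InstCombine code):

  #include <cassert>
  #include <cstdint>

  int main() {
    for (uint32_t x = 0; x < 256; ++x)
      for (uint32_t y = 0; y < 256; ++y)
        assert(uint8_t(~uint8_t(x) - ~uint8_t(y)) == uint8_t(y - x));
    return 0;
  }
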
llvm-svn: 326660
2018-03-03 17:53:25 +00:00
Chandler Carruth a4619d9944 [ThinLTO] Revert r325320: Import global variables
This caused some links to fail with ThinLTO due to missing symbols as
well as causing some binaries to have failures at runtime. We're working
with the author to get a test case, but want to get the tree green
again.

Further, it appears to introduce a data race. While the test usage of
threads was disabled in r325361 & r325362, that isn't an acceptable fix.
I've reverted both of these as well. This code needs to be thread safe.
Test cases for this are already on the original commit thread.

llvm-svn: 326638
2018-03-02 23:40:08 +00:00
Vedant Kumar 7fc591f8bb [AggressiveInstCombine] Use use_empty() instead of !getNumUses(), NFC
use_empty() runs in O(1), whereas getNumUses() runs in O(# uses).

llvm-svn: 326635
2018-03-02 23:22:49 +00:00
Sanjay Patel e29375d04c [InstCombine] rearrange visitFMul; NFCI
Put the simplest non-FMF folds first, so it's easier to
see what's left to fix/group/add with the FMF folds.

llvm-svn: 326632
2018-03-02 23:06:45 +00:00
Vedant Kumar f69baf64eb [Utils] Salvage debug info in block simplification
In stage2 -O3 builds of llc, this results in small but measurable
increases in the number of variables with locations, and in the number
of unique source variables overall.

(According to llvm-dwarfdump --statistics, there are 123 additional
variables with locations, which is just a 0.006% improvement).

The size of the .debug_loc section of the llc dsym increases by 0.004%.

llvm-svn: 326629
2018-03-02 22:46:48 +00:00
Vedant Kumar 334fa57456 [Utils] Salvage debug info in recursive inst deletion
In stage2 -O3 builds of llc, this results in a 0.3% increase in the
number of variables with locations, and a 0.2% increase in the number of
unique source variables overall.

The size of the .debug_loc section of the llc dsym increases by 0.5%.

llvm-svn: 326621
2018-03-02 21:36:35 +00:00
Craig Topper c7461e1aad [InstCombine] Rewrite the binary op shrinking in visitFPTrunc to avoid creating overly small ConstantFPs that we'll just need to extend again.
Instead of returning the smaller FP constant we now return the minimal Type the constant can fit into. We also return the Type of the input to any fp extends. The legality checks are then done on just the size of these Types. If we find something profitable we then emit FPTruncs in front of the smaller binop and assume those FPTruncs will be constant folded or combined with any ConstantFPs or fpextends.

Differential Revision: https://reviews.llvm.org/D44038

llvm-svn: 326617
2018-03-02 21:25:18 +00:00
Sanjay Patel 2fd0acf05a [InstCombine] partly fix FMF for fmul+log2 fold
The code was checking that all of the instructions in the 
sequence are 'fast', but that's not necessary. The final 
multiply is all that we need to check (tests adjusted). 
The fmul doesn't need to be fully 'fast' either, but that 
can be another patch.

llvm-svn: 326608
2018-03-02 20:32:46 +00:00
Yaxun Liu 3c42f1c3c9 LoopUnroll: respect pragma unroll when AllowRemainder is disabled
Currently, when AllowRemainder is disabled, the pragma unroll count is not
respected even though there is no remainder. This bug causes a loop to be
fully unrolled in many cases even though the user specifies an unroll
count. This especially affects OpenCL/CUDA since in many cases a loop
contains convergent instructions and currently AllowRemainder is
disabled for such loops.
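
A hedged example of the intended behaviour, assuming Clang's loop pragma (the function is made up); with a trip count that is a multiple of the requested count there is no remainder, so the count should be honoured:

  void scale(float *a) {
  #pragma clang loop unroll_count(4)
    for (int i = 0; i < 16; ++i)   // 16 iterations: no remainder for a count of 4
      a[i] *= 2.0f;
  }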

Differential Revision: https://reviews.llvm.org/D43826

llvm-svn: 326585
2018-03-02 16:22:32 +00:00
Florian Hahn 515acd64fd [LV][CFG] Add irreducible CFG detection for outer loops
This patch adds support for detecting outer loops with irreducible control
flow in LV. Current detection uses SCCs and only works for innermost loops.
This patch adds a utility function that works on any CFG, given its RPO
traversal and its LoopInfoBase. This function is a generalization
of isIrreducibleCFG  from lib/CodeGen/ShrinkWrap.cpp. The code in
lib/CodeGen/ShrinkWrap.cpp is also updated to use the new generic utility
function.

Patch by Diego Caballero <diego.caballero@intel.com>

Differential Revision: https://reviews.llvm.org/D40874

llvm-svn: 326568
2018-03-02 12:24:25 +00:00
Fedor Indutny 1571b1271e [ArgumentPromotion] don't break musttail invariant PR36543
Summary:
Do not break musttail invariant by promoting arguments of musttail
callee or caller.

Reviewers: sanjoy, dberlin, hfinkel, george.burgess.iv, fhahn, rnk

Reviewed By: rnk

Subscribers: rnk, llvm-commits

Differential Revision: https://reviews.llvm.org/D43926

llvm-svn: 326521
2018-03-02 00:59:27 +00:00
Sanjay Patel d0cdb2f861 [InstCombine] allow fmul fold with less than 'fast'
This is a retry of r326502 with updates to the reassociate 
test file that I missed the first time.

@test15_reassoc in the supposed -reassociate test file 
(except that it tests 2 other passes too...) shows that
there's no clear responsiblity for reassociation transforms.

Instcombine now gets that case, but only because the
constant values are identical. Otherwise, it would still
miss that pattern. 

Reassociate doesn't get that case because it hasn't been 
updated to use less than 'fast' FMF.

llvm-svn: 326513
2018-03-02 00:14:51 +00:00
Sanjay Patel eb5d046890 revert r326502: [InstCombine] allow fmul fold with less than 'fast'
I forgot that I added tests for 'reassoc' to -reassociate, but
surprisingly that file calls -instcombine too, so it is affected.
I'll update that file and try again.

llvm-svn: 326510
2018-03-01 23:39:24 +00:00
Sanjay Patel 7373ae5c9a [InstCombine] allow fmul fold with less than 'fast'
llvm-svn: 326502
2018-03-01 22:53:47 +00:00
Craig Topper 2915bc0046 [SimplifyLibCalls] Update an obviously copy and pasted header comment to match this file. NFC
llvm-svn: 326475
2018-03-01 20:05:09 +00:00
Sanjay Patel f3b1af7aa4 [InstCombine] simplify code for (X*Y) * X => (X*X) * Y ; NFCI
llvm-svn: 326444
2018-03-01 15:50:26 +00:00