Commit Graph

Jonas Paulsson 6228aeda65 [LSR / TTI / SystemZ] Eliminate TargetTransformInfo::isFoldableMemAccess()
isLegalAddressingMode() has recently gained the extra optional Instruction*
parameter, and therefore it can now do the job that previously only
isFoldableMemAccess() could do.

The SystemZ implementation of isLegalAddressingMode() has gained the
functionality of checking for offsets, which used to be done with
isFoldableMemAccess().

The isFoldableMemAccess() hook has been removed everywhere.

Review: Quentin Colombet, Ulrich Weigand
https://reviews.llvm.org/D35933

llvm-svn: 310463
2017-08-09 11:28:01 +00:00
Jonas Paulsson 5052771af3 [LoopStrengthReduce] Don't neglect the Fixup.Offset in isAMCompletelyFolded().
In the recursive call to isAMCompletelyFolded(), the passed offset should be
the sum of F.BaseOffset and Fixup.Offset.

Review: Quentin Colombet.
llvm-svn: 310462
2017-08-09 11:27:46 +00:00
Simon Dardis c6be2251b7 [mips] PR34083 - Wimplicit-fallthrough warning in MipsAsmParser.cpp
Assert that a binary expression is actually a binary expression,
rather than potentially incorrectly attempting to handle it as a
unary expression.

This resolves PR34083.

Thanks to Simon Pilgrim for reporting the issue!

llvm-svn: 310460
2017-08-09 10:47:52 +00:00
Gabor Horvath 18bda5b0f2 Suppress a warning. NFC.
llvm-svn: 310459
2017-08-09 10:38:53 +00:00
Oliver Stannard 7f569a2d54 [AsmParser] Hash is not a comment on some targets
The '#' token is not a comment for all targets (on ARM and AArch64 it marks an
immediate operand), so we shouldn't treat it as such.

Comments are already converted to AsmToken::EndOfStatement by
AsmLexer::LexLineComment, so this check was unnecessary.

Differential Revision: https://reviews.llvm.org/D36405

llvm-svn: 310457
2017-08-09 09:40:51 +00:00
Chandler Carruth 2cd28b2ba0 [LCG] Completely remove the map-based association of post-order numbers
to Nodes when removing ref edges from a RefSCC.

This map based association turns out to be pretty expensive for large
RefSCCs and pointless as we already have embedded data members inside
nodes that we use to track the DFS state. We can reuse one of those and
the map becomes unnecessary.

This also fuses the update of those numbers into the scan across the
pending stack of nodes so that we don't walk the nodes twice during the
DFS.

With this I expect the new PM to be faster than the old PM for the test
case I have been optimizing. That said, it also seems simpler and more
direct in many ways. The side storage was always pretty awkward.

The last remaining hot-spot in the profile of the LCG once this is done
will be the edge iterator walk in the DFS. I'll take a look at improving
that next.

llvm-svn: 310456
2017-08-09 09:37:39 +00:00
Davide Italiano c163fac184 [GlobalOpt] Switch an explicit loop to llvm::all_of(). NFCI.
llvm-svn: 310453
2017-08-09 09:23:29 +00:00
Chandler Carruth 9c3deaa653 [LCG] Special case when removing a ref edge from a RefSCC leaves
that RefSCC still connected.

This is common and can be handled much more efficiently. As soon as we
know we've covered every node in the RefSCC with the DFS, we can simply
reset our state and return. This avoids numerous data structure updates
and other complexity.

On top of other changes, this appears to get new PM back to parity with
the old PM for a large protocol buffer message source code. The dense
map updates are very hot in this function.

llvm-svn: 310451
2017-08-09 09:14:34 +00:00
Chandler Carruth 23c2f44cc7 [LCG] Switch one of the update methods for the LazyCallGraph to support
limited batch updates.

Specifically, allow removing multiple reference edges starting from
a common source node. There are a few constraints that play into
supporting this form of batching:

1) The way updates occur during the CGSCC walk, about the most we can
   functionally batch together are those with a common source node. This
   also makes the batching simpler to implement, so it seems
   a worthwhile restriction.
2) The far and away hottest function for large C++ files I measured
   (generated code for protocol buffers) showed a huge amount of time
   was spent removing ref edges specifically, so it seems worth focusing
   there.
3) The algorithm for removing ref edges is very amenable to this
   restricted batching. There are just both API and implementation
   special casing for the non-batch case that gets in the way. Once
   removed, supporting batches is nearly trivial.

This does modify the API in an interesting way -- now, we only preserve
the target RefSCC when the RefSCC structure is unchanged. In the face of
any splits, we create brand new RefSCC objects. However, all of the
users were OK with it that I could find. Only the unittest needed
interesting updates here.

How much does batching these updates help? I instrumented the compiler
when run over a very large generated source file for a protocol buffer
and found that the majority of updates are intrinsically updating one
function at a time. However, nearly 40% of the total ref edges removed
are removed as part of a batch of removals greater than one, so these
are the cases batching can help with.

When compiling the IR for this file with 'opt' and 'O3', this patch
reduces the total time by 8-9%.

Differential Revision: https://reviews.llvm.org/D36352

llvm-svn: 310450
2017-08-09 09:05:27 +00:00
Craig Topper b049158a55 [X86] Add the rest of the ADC and SBB instructions to isDefConvertible.
I don't know if this really affects anything. Just thought it was weird that we had all of the ADD/SUB/AND/OR/XOR instructions.

llvm-svn: 310447
2017-08-09 06:17:49 +00:00
Craig Topper 5706c01c0b [InstCombine] Use regular dyn_cast instead of a matcher for a simple case. NFC
llvm-svn: 310446
2017-08-09 06:17:48 +00:00
Serguei Katkov 6ea2e81cf6 [ImplicitNullCheck] Fix the bug when dependent instruction accesses memory
It is possible that a dependent instruction may access memory.
In this case we must reject the optimization, because the memory change would
be visible in the null handler basic block, and we would execute an instruction
which we must not execute if the check fails.

Reviewers: sanjoy, reames
Reviewed By: sanjoy
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D36392

llvm-svn: 310443
2017-08-09 05:17:02 +00:00
Zachary Turner 5448dabbdd [PDB] Fix an issue writing the publics stream.
In the refactor to merge the publics and globals stream, a bug
was introduced that wrote the wrong value for one of the fields
of the PublicsStreamHeader.  This caused debugging in WinDbg
to break.

We had no way of dumping any of these fields, so in addition to
fixing the bug I've added dumping support for them along with a
test that verifies the correct value is written.

llvm-svn: 310439
2017-08-09 04:23:59 +00:00
Zachary Turner 946204c83e [PDB] Merge Global and Publics Builders.
The publics stream and globals stream are very similar. They both
contain a list of hash buckets that refer into a single shared stream,
the symbol record stream. Because of the need for each builder to manage
both an independent hash stream as well as a single shared record
stream, making the two builders be independent entities is not the right
design. This patch merges them into a single class, of which only a
single instance is needed to create all 3 streams.  PublicsStreamBuilder
and GlobalsStreamBuilder are now merged into the single GSIStreamBuilder
class, which writes all 3 streams at once.

Note that this patch does not contain any functionality change. So we're
still not yet writing any records to the globals stream. All we're doing
is making it so that when we do start writing records to the globals,
this refactor won't have to be part of that patch.

Differential Revision: https://reviews.llvm.org/D36489

llvm-svn: 310438
2017-08-09 04:23:25 +00:00
Eugene Zelenko d8e3efd5c2 [AMDGPU] Revert r310429 changes in AMDKernelCodeT.h which broke some build bots.
llvm-svn: 310430
2017-08-09 00:06:29 +00:00
Eugene Zelenko d16eff816b [AMDGPU] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 310429
2017-08-08 23:53:55 +00:00
Quentin Colombet 8dd90fb54b Revert "[GlobalISel] Remove the GISelAccessor API."
This reverts commit r310115.

It causes a linker failure for one of the AArch64 unittests on one of the
Linux bots:
http://lab.llvm.org:8011/builders/clang-ppc64le-linux-multistage/builds/3429

: && /home/fedora/gcc/install/gcc-7.1.0/bin/g++   -fPIC
-fvisibility-inlines-hidden -Werror=date-time -std=c++11 -Wall -W
-Wno-unused-parameter -Wwrite-strings -Wcast-qual
-Wno-missing-field-initializers -pedantic -Wno-long-long
-Wno-maybe-uninitialized -Wdelete-non-virtual-dtor -Wno-comment
-ffunction-sections -fdata-sections -O2
-L/home/fedora/gcc/install/gcc-7.1.0/lib64 -Wl,-allow-shlib-undefined
-Wl,-O3 -Wl,--gc-sections
unittests/Target/AArch64/CMakeFiles/AArch64Tests.dir/InstSizes.cpp.o  -o
unittests/Target/AArch64/AArch64Tests
lib/libLLVMAArch64CodeGen.so.6.0.0svn lib/libLLVMAArch64Desc.so.6.0.0svn
lib/libLLVMAArch64Info.so.6.0.0svn lib/libLLVMCodeGen.so.6.0.0svn
lib/libLLVMCore.so.6.0.0svn lib/libLLVMMC.so.6.0.0svn
lib/libLLVMMIRParser.so.6.0.0svn lib/libLLVMSelectionDAG.so.6.0.0svn
lib/libLLVMTarget.so.6.0.0svn lib/libLLVMSupport.so.6.0.0svn -lpthread
lib/libgtest_main.so.6.0.0svn lib/libgtest.so.6.0.0svn -lpthread
-Wl,-rpath,/home/buildbots/ppc64le-clang-multistage-test/clang-ppc64le-multistage/stage1/lib
&& :
unittests/Target/AArch64/CMakeFiles/AArch64Tests.dir/InstSizes.cpp.o:(.toc+0x0):
undefined reference to `vtable for llvm::LegalizerInfo'
unittests/Target/AArch64/CMakeFiles/AArch64Tests.dir/InstSizes.cpp.o:(.toc+0x8):
undefined reference to `vtable for llvm::RegisterBankInfo'

The particularity of this bot is that it is built with
BUILD_SHARED_LIBS=ON

However, I was not able to reproduce the problem so far.
Reverting to unblock the bot.

llvm-svn: 310425
2017-08-08 22:22:30 +00:00
Nemanja Ivanovic 3f98fb4a3f My commit r310346 introduced some valid warnings. This cleans them up.
llvm-svn: 310424
2017-08-08 22:17:31 +00:00
Jessica Paquette d36945bf3a [MachineOutliner] Ensure AArch64 outliner doesn't mess with W30 or LR
Before, the outliner would mark all instructions that read from/modify LR as
illegal. This doesn't handle W30, which overlaps with LR. This shouldn't be
outlined.

This commit fixes that by making modifiesRegister() and readsRegister() take a
TRI argument so that they also consider W30. This makes sure that instructions
which touch either W30 or LR won't be outlined.

https://reviews.llvm.org/D36435

llvm-svn: 310422
2017-08-08 21:51:26 +00:00
Wei Mi bb9106ac4b [GVN] Remove stale entries in phitranslate cache when new phi is generated for PRE
When a new phi is generated for scalarpre of an expression, the phiTranslate cache
will become stale: before PRE, the candidate expression must not be available in a
predecessor block, and phiTranslate caches that information. After PRE, the
expression becomes available in all predecessor blocks, so the related entries
in the phiTranslate cache become stale. The patch simply removes the stale entries
so phiTranslate can be recomputed next time.

The stale entries in the phiTranslate cache do not affect correctness but cause
missed PRE opportunities for later instructions.

Differential Revision: https://reviews.llvm.org/D36124

llvm-svn: 310421
2017-08-08 21:40:14 +00:00
Nuno Lopes 598d1632e1 BasicAA: assert on another case where aliasGEP shouldn't get a PartialAlias response
llvm-svn: 310420
2017-08-08 21:25:26 +00:00
Dehao Chen 34cfcb29aa Make ICP use PSI to check for hotness.
Summary: Currently, ICP checks the count against a fixed value to see if it is hot enough to be promoted. This does not work for SamplePGO because the sampled count may be much smaller. This patch uses PSI to check if the count is hot enough to be promoted.

Reviewers: davidxl, tejohnson, eraman

Reviewed By: davidxl

Subscribers: sanjoy, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D36341

llvm-svn: 310416
2017-08-08 20:57:33 +00:00
Reid Kleckner e2e82061f9 [codeview] Emit nested enums and typedefs from classes
Previously we limited ourselves to only emitting nested classes, but we
need other kinds of types as well.

This fixes the Visual Studio STL visualizers, so that users can
visualize std::string and other objects.

llvm-svn: 310410
2017-08-08 20:30:14 +00:00
Craig Topper 364359e4fc [InstCombine] Support pulling left shifts through a subtract with constant LHS
We already support pulling through an add with constant RHS. We can do the same for subtract.

Differential Revision: https://reviews.llvm.org/D36443

llvm-svn: 310407
2017-08-08 20:14:11 +00:00
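A quick sanity check of the fold described in the change above, as a
standalone C++ snippet (a sketch for illustration, not part of the patch; it
assumes the subtract case is (C1 - X) << C2 --> (C1 << C2) - (X << C2)):

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t C1 = 1000u, C2 = 3u; // illustrative constants
    for (uint64_t i = 0; i < 100000; ++i) {
      uint32_t X = static_cast<uint32_t>(i) * 0x9E3779B9u; // spread of test values
      // A left shift distributes over subtraction in two's-complement arithmetic.
      uint32_t lhs = (C1 - X) << C2;
      uint32_t rhs = (C1 << C2) - (X << C2);
      assert(lhs == rhs);
    }
    return 0;
  }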
Nirav Dave 8a813cf646 [DAG] Introduce peekThroughBitcast function. NFCI.
llvm-svn: 310405
2017-08-08 20:01:18 +00:00
Nirav Dave 515116d7c2 [DAG] Update comments. NFC.
llvm-svn: 310404
2017-08-08 19:52:19 +00:00
Connor Abbott 249fc7bd2a [AMDGPU] Add llvm.amdgpu.update.dpp intrinsic
Summary:
Now that we've made all the necessary backend changes, we can add a new
intrinsic which exposes the new capabilities to IR producers. Since
llvm.amdgpu.update.dpp is a strict superset of llvm.amdgpu.mov.dpp, we
should deprecate the former. We also add tests for all the functionality
that was added in previous changes, now that we can access it via an IR
construct.

Reviewers: tstellar, arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye

Differential Revision: https://reviews.llvm.org/D34718

llvm-svn: 310399
2017-08-08 18:52:22 +00:00
Chad Rosier 4d852597f8 [NewGVN] Use a cast instead of a dyn_cast.
Differential Revision: https://reviews.llvm.org/D36478

llvm-svn: 310397
2017-08-08 18:41:49 +00:00
Zachary Turner 59e3ae827d [PDB] Fix linking of function symbols and local variables.
The compiler outputs PROC32_ID symbols into the object files
for functions, and these symbols have an embedded type index
which, when copied to the PDB, refer to the IPI stream.  However,
the symbols themselves are also converted into regular symbols
(e.g. S_GPROC32_ID -> S_GPROC32), and type indices in the regular
symbol records refer to the TPI stream.  So this patch applies
two fixes to function records.
  1. It converts ID symbols to the proper non-ID record type.
  2. After remapping the type index from the object file's index
     space to the PDB file/IPI stream's index space, it then
     remaps that index to the TPI stream's index space.

Besides functions, during the remapping process we were also
discarding symbol record types which we did not recognize.
In particular, we were discarding S_BPREL32 records, which is
what MSVC uses to describe local variables on the stack.  So
this patch fixes that as well by copying them to the PDB.

Differential Revision: https://reviews.llvm.org/D36426

llvm-svn: 310394
2017-08-08 18:34:44 +00:00
Anna Thomas 9b6e12f3dc [LoopVectorize] Fix assertion failure in Fcmp vectorization
Summary:
When vectorizing fcmps we can trip on incorrect cast assertion when setting the
FastMathFlags after generating the vectorized FCmp.
This can happen if the FCmp can be folded to true or false directly. The fix
here is to set the FastMathFlag using the FastMathFlagBuilder *before* creating
the FCmp Instruction. This is what's done by other optimizations such as
InstCombine.
Added a test case which trips on cast assertion without this patch.

Reviewers: Ayal, mssimpso, mkuper, gilr

Reviewed by: Ayal, mssimpso

Subscribers: llvm-commits, mzolotukhin

Differential Revision: https://reviews.llvm.org/D36244

llvm-svn: 310389
2017-08-08 18:07:44 +00:00
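A minimal sketch of the pattern described above, using the LLVM C++ API (an
illustration only, not the actual patch): configure the fast-math flags on the
IRBuilder before calling CreateFCmp, so the flags apply even if the compare
folds to a constant and there is no Instruction to cast to.

  #include "llvm/IR/IRBuilder.h"
  #include "llvm/IR/LLVMContext.h"
  #include "llvm/IR/Module.h"
  #include "llvm/Support/raw_ostream.h"
  #include <iterator>
  using namespace llvm;

  int main() {
    LLVMContext Ctx;
    Module M("fmf-demo", Ctx);
    auto *FTy = FunctionType::get(Type::getInt1Ty(Ctx),
                                  {Type::getFloatTy(Ctx), Type::getFloatTy(Ctx)},
                                  /*isVarArg=*/false);
    Function *F = Function::Create(FTy, Function::ExternalLinkage, "cmp", &M);
    BasicBlock *BB = BasicBlock::Create(Ctx, "entry", F);
    IRBuilder<> B(BB);

    FastMathFlags FMF;
    FMF.setNoNaNs();
    B.setFastMathFlags(FMF); // flags configured up front, before building the FCmp

    Value *Cmp = B.CreateFCmpOLT(&*F->arg_begin(), &*std::next(F->arg_begin()));
    // If both operands were constants, Cmp could be a ConstantInt here, so the
    // buggy pattern cast<Instruction>(Cmp)->setFastMathFlags(FMF) would assert.
    B.CreateRet(Cmp);
    M.print(outs(), nullptr);
    return 0;
  }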
Tim Northover f370f2e3c6 Revert "[ARM] Fix assembly and disassembly for VMRS/VMSR"
This reverts r310243. Only MVFR2 is actually restricted to v8 and it'll be a
little while before we can get a proper fix together. Better that we allow
incorrect code than reject correct in the meantime.

llvm-svn: 310384
2017-08-08 17:16:46 +00:00
Craig Topper b498a23f0e [KnownBits][ValueTracking] Move the math for calculating known bits for add/sub into a static method in KnownBits object
I want to reuse this code in SimplifyDemandedBits handling of Add/Sub. This will make that easier.

Wonder if we should use it in SelectionDAG's computeKnownBits too.

Differential Revision: https://reviews.llvm.org/D36433

llvm-svn: 310378
2017-08-08 16:29:35 +00:00
Alex Bradbury 6b34069ab7 [RISCV] Fix warning about unused getSubtargetFeatureName()
llvm-svn: 310375
2017-08-08 16:20:39 +00:00
Nuno Lopes c7d4110aa7 BasicAA: aliasGEP shouldn't get a PartialAlias response here
add an assert() to ensure that's the case (as I'm not convinced it won't happen)

llvm-svn: 310373
2017-08-08 16:13:24 +00:00
Simon Pilgrim 91b7b991d4 [DAGCombiner] simplifyShuffleMask - handle UNDEF inputs from shuffles as well as BUILD_VECTOR
Minor extension to D36393

llvm-svn: 310372
2017-08-08 16:10:33 +00:00
Alex Bradbury 04f06d922f [RISCV] Add basic RISCVAsmParser (missing files)
This commit adds the files missing from rL310361. Apologies for the noise.

Differential Revision: https://reviews.llvm.org/D23563

llvm-svn: 310363
2017-08-08 14:43:36 +00:00
Alex Bradbury 1a4272914d [RISCV] Add basic RISCVAsmParser
This doesn't yet support parsing things like %pcrel_hi(foo), but will handle
basic instructions with register or immediate operands.

Differential Revision: https://reviews.llvm.org/D23563

llvm-svn: 310361
2017-08-08 14:32:35 +00:00
Nemanja Ivanovic 979dcb6f09 [PowerPC] Don't crash on larger splats achieved through 1-byte splats
We've implemented a 1-byte splat using XXSPLTISB on P9. However, LLVM will
produce a 1-byte splat even for wider element BUILD_VECTOR nodes. This patch
prevents crashing in that situation.

Differential Revision: https://reviews.llvm.org/D35650

llvm-svn: 310358
2017-08-08 13:52:45 +00:00
Nemanja Ivanovic bed7136eee Appease compilers that have the -Wcovered-switch-default switch.
llvm-svn: 310356
2017-08-08 12:41:56 +00:00
Amjad Aboud 6fa6813aec [X86] Improved X86::CMOV to Branch heuristic.
Resolved PR33954.
This patch contains two more constraints that aim to reduce the noise cases where we convert CMOV into branch for small gain, and end up spending more cycles due to overhead.

Differential Revision: https://reviews.llvm.org/D36081

llvm-svn: 310352
2017-08-08 12:17:56 +00:00
Nemanja Ivanovic 809fbfa6a1 [PowerPC] Eliminate compares - add i32 sext/zext handling for SETLE/SETGE
Adds handling for SETLE/SETGE comparisons on i32 values. Furthermore, it adds
the handling for the special case where RHS == 0.

Differential Revision: https://reviews.llvm.org/D34048

llvm-svn: 310346
2017-08-08 11:20:44 +00:00
Simon Pilgrim ef44228acb [DAGCombiner] Simplify shuffle mask index if the referenced input element is UNDEF
Fixes one of the cases in PR34041.

Differential Revision: https://reviews.llvm.org/D36393

llvm-svn: 310344
2017-08-08 11:03:30 +00:00
Daniel Sanders 0554004698 [globalisel][tablegen] Add support for importing 'imm' operands.
Summary:
This patch enables the import of rules containing 'imm' operands that do not
constrain the acceptable values using predicates. Support for ImmLeaf will
arrive in a later patch.

Depends on D35681

Reviewers: ab, t.p.northover, qcolombet, rovka, aditya_nandakumar

Reviewed By: rovka

Subscribers: kristof.beyls, javed.absar, igorb, llvm-commits

Differential Revision: https://reviews.llvm.org/D35833

llvm-svn: 310343
2017-08-08 10:44:31 +00:00
Chandler Carruth 6e35c31d2d [PM] Fix a likely more critical infloop bug in the CGSCC pass manager.
This was just a bad oversight on my part. The code in question should
never have worked without this fix. But it turns out, there are
relatively few places that involve libfunctions that participate in
a single SCC, and unless they do, this happens to not matter.

The effect of not having this correct is that each time through this
routine, the edge from write_wrapper to write was toggled between a call
edge and a ref edge. First time through, it becomes a demoted call edge
and is turned into a ref edge. Next time it is a promoted call edge from
a ref edge. On, and on it goes forever.

I've added the asserts which should have always been here to catch silly
mistakes like this in the future as well as a test case that will
actually infloop without the fix.

The other (much scarier) infinite-inlining issue I think didn't actually
occur in practice, and I simply misdiagnosed this minor issue as that
much more scary issue. The other issue *is* still a real issue, but I'm
somewhat relieved that so far it hasn't happened in real-world code
yet...

llvm-svn: 310342
2017-08-08 10:13:23 +00:00
Craig Topper 8e351e9018 [InstCombine] Cast to BinaryOperator earlier in foldSelectIntoOp to simplify the code.
We no longer need the explicit operand count check or the later dynamic cast.

llvm-svn: 310339
2017-08-08 06:19:24 +00:00
Tom Stellard 03aa3aee11 AMDGPU: Fix warnings introduced by r310336
llvm-svn: 310337
2017-08-08 05:52:00 +00:00
Tom Stellard 20287697f8 AMDGPU: Move R600 parts of AMDGPUISelDAGToDAG into their own class
Summary: This refactoring is required in order to split the R600 and GCN tablegen files.

Reviewers: arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D36286

llvm-svn: 310336
2017-08-08 04:57:55 +00:00
Chandler Carruth 7c888dca46 [PM] Fix new LoopUnroll function pass by invalidating loop analysis
results when a loop is completely removed.

This is very hard to manifest as a visible bug. You need to arrange for
there to be a subsequent allocation of a 'Loop' object which gets the
exact same address as the one which the unroll deleted, and you need the
LoopAccessAnalysis results to be significant in the way that they're
stale. And you need a million other things to align.

But when it does, you get a deeply mysterious crash due to actually
finding a stale analysis result. This fixes the issue and tests for it
by directly checking we successfully invalidate things. I have not been
able to get *any* test case to reliably trigger this. Changes to LLVM
itself caused the only test case I ever had to cease to crash.

I've looked pretty extensively at less brittle ways of fixing this and
they are actually very, very hard to do. This is a somewhat strange and
unusual case as we have a pass which is deleting an IR unit, but is not
running within that IR unit's pass framework (which is what handles this
cleanly for the normal loop unroll). And where there isn't a definitive
way to clear *all* of the stale cache entries. And where the pass *is*
updating the core analysis that provides the IR units!

For example, we don't have any of these problems with Function analyses
because it is easy to clear out function analyses when the functions
themselves may have been deleted -- we clear an entire module's worth!
But that is too heavy of a hammer down here in the LoopAnalysisManager
layer.

A better long-term solution IMO is to require that AnalysisManager's
make their keys durable to this kind of thing. Specifically, when
caching an analysis for one IR unit that is conceptually "owned" by
a higher level IR unit, the AnalysisManager should incorporate this into
its data structures so that we can reliably clear these results without
having to teach each and every pass to do so manually as we do here. But
that is a change for another day as it will be a fairly invasive change
to the AnalysisManager infrastructure. Until then, this fortunately
seems to be quite rare.

llvm-svn: 310333
2017-08-08 02:24:20 +00:00
Eugene Zelenko 59e128266c [AMDGPU] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 310328
2017-08-08 00:47:13 +00:00
Kostya Serebryany e863796dca [libFuzzer] simplify code, NFC
llvm-svn: 310326
2017-08-08 00:17:20 +00:00
Kostya Serebryany 22e5f9a16a [libFuzzer] remove stale code
llvm-svn: 310325
2017-08-08 00:14:49 +00:00
Kostya Serebryany 854be98c93 [libFuzzer] simplify the implementation of -print_coverage=1
llvm-svn: 310324
2017-08-08 00:12:09 +00:00
Matt Arsenault bd57cea6e4 AMDGPU: Implement getMinimumNopSize
llvm-svn: 310310
2017-08-07 22:00:58 +00:00
George Karpenkov 00e25c5459 Do not instrument libFuzzer itself when built with -DLLVM_USE_SANITIZE_COVERAGE
Fixes regression from https://reviews.llvm.org/D36295

Differential Revision: https://reviews.llvm.org/D36428

llvm-svn: 310305
2017-08-07 20:56:11 +00:00
Dehao Chen 08f8831e57 Move the SampleProfileLoader right after EarlyFPM.
Summary: The SampleProfileLoader pass does need to happen after some early cleanup passes so that inlining can happen correctly inside the SampleProfileLoader pass.

Reviewers: chandlerc, davidxl, tejohnson

Reviewed By: chandlerc, tejohnson

Subscribers: sanjoy, mehdi_amini, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D36333

llvm-svn: 310296
2017-08-07 20:23:20 +00:00
Evgeny Stupachenko c675290680 Reapply fix PR23384 (part 3 of 3) r304824 (was reverted in r305720).
The root cause of reverting was fixed - PR33514.

Summary:
The patch makes instruction count the highest priority for
 LSR solution for X86 (previously registers had highest priority).

Reviewers: qcolombet

Differential Revision: http://reviews.llvm.org/D30562

From: Evgeny Stupachenko <evstupac@gmail.com>
                         <evgeny.v.stupachenko@intel.com>
llvm-svn: 310289
2017-08-07 19:56:34 +00:00
Aaron Ballman 428f0fe910 Removing an unused variable that was missed with the refactoring in r310272; NFC.
llvm-svn: 310285
2017-08-07 19:26:17 +00:00
Connor Abbott 79f3ade51a [AMDGPU] Add pseudo "old" source to all DPP instructions
Summary:
Any instruction with the DPP modifier may leave certain lanes of the output
unwritten if bound_ctrl=1 is set or any bits in bank_mask or row_mask
aren't set, so the destination register may be both defined and modified.
The right way to handle this is to add a constraint that the destination
register is the same as one of the inputs. We could tie the destination
to the first source, but that would be too restrictive for some use-cases
where we want the destination to be some other value before the
instruction executes. Instead, add a fake "old" source and tie it to the
destination. Effectively, the "old" source defines what value unwritten
lanes will get. We'll expose this functionality to users with a new
intrinsic later.

Also, we want to use DPP instructions for computing derivatives, which
means we need to set WQM for them. We also need to enable the entire
wavefront when using DPP intrinsics to implement nonuniform subgroup
reductions, since otherwise we'll get incorrect results in some cases.
To accommodate this, add a new operand to all DPP instructions which will
be interpreted by the SI WQM pass. This will be exposed with a new
intrinsic later. We'll also add support for Whole Wavefront Mode later.

I also fixed llvm.amdgcn.mov.dpp to overwrite the source and fixed up
the test. However, I could also keep the old behavior (where lanes that
aren't written are undefined) if people want it.

Reviewers: tstellar, arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye

Differential Revision: https://reviews.llvm.org/D34716

llvm-svn: 310283
2017-08-07 19:10:56 +00:00
Matt Arsenault 36b4b0bed7 AMDGPU: Remove -mcpu=SI
Leftover from before amdgcn/r600 split.

llvm-svn: 310277
2017-08-07 18:30:35 +00:00
Matt Arsenault 9d288e69f5 AMDGPU: Remove redundant opt level check
addOptimizedRegAlloc isn't used for -O0 already.

llvm-svn: 310275
2017-08-07 18:12:48 +00:00
Matt Arsenault 3db456820d AMDGPU: Remove FixControlFlowLiveIntervals pass
This hasn't done anything in a long time. This was
running after the control flow pseudos were expanded,
so this would never find them. The control flow pseudo
expansion was moved to solve the problem this pass was
supposed to solve in the first place, except handling
it earlier also fixes it for fast regalloc which doesn't
use LiveIntervals.

Noticed by checking LCOV reports.

llvm-svn: 310274
2017-08-07 18:12:47 +00:00
Craig Topper 7091a743b4 [InstCombine] Support (X | C1) & C2 --> (X & C2^(C1&C2)) | (C1&C2) for vector splats
Note: the original code I deleted incorrectly listed this as (X | C1) & C2 --> (X & C2^(C1&C2)) | C1, which is only valid if C1 is a subset of C2. It relied on SimplifyDemandedBits to remove any extra bits from C1 before we got to that code.

My new implementation avoids relying on that behavior so that it can be naively verified with Alive.

Differential Revision: https://reviews.llvm.org/D36384

llvm-svn: 310272
2017-08-07 18:10:39 +00:00
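As a standalone check of the corrected fold above (a sketch for illustration,
not part of the patch), (X | C1) & C2 == (X & (C2 ^ (C1 & C2))) | (C1 & C2)
holds even when C1 is not a subset of C2:

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t C1 = 0x0000FF0Fu, C2 = 0x00FFFF00u; // C1 is not a subset of C2
    for (uint64_t i = 0; i < 100000; ++i) {
      uint32_t X = static_cast<uint32_t>(i) * 0x9E3779B9u;
      uint32_t lhs = (X | C1) & C2;
      uint32_t rhs = (X & (C2 ^ (C1 & C2))) | (C1 & C2);
      assert(lhs == rhs); // the old form "... | C1" would fail for these constants
    }
    return 0;
  }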
Matt Arsenault aac47c1c00 AMDGPU: Use a custom areInlineCompatible
Fixes not inlining OpenCL library functions on AMDGPU,
which don't have an explicitly set target-cpu.

llvm-svn: 310269
2017-08-07 17:08:44 +00:00
Simon Dardis b1b52c0200 [DebugInfo][DWARF] Address paulr's comment on rL310253.
llvm-svn: 310267
2017-08-07 16:08:11 +00:00
Sanjay Patel 807f92b8ff [x86] revert r310208 to investigate test-suite failures (PR34105 / PR34097)
llvm-svn: 310264
2017-08-07 15:47:48 +00:00
Simon Dardis 02d9945e6f [DebugInfo][DWARF] Correct some usages of PRIx32 to PRIx64
These lead to tests failing spuriously as the values after being rendered to a
string were incorrect.

Reviewers: clayborg

Differential Revision: https://reviews.llvm.org/D36319

llvm-svn: 310262
2017-08-07 15:37:57 +00:00
Alexey Bataev 9581b42589 [SLP] General improvements of SLP vectorization process.
Patch tries to improve two-pass vectorization analysis, existing in SLP vectorizer. What it does:

1. Defines key nodes that are the vectorization roots. Previously, vectorization started only if a StoreInst or ReturnInst was found. Now, vectorization starts for all instructions with no users and void type (terminators, StoreInst) plus CallInsts.
2. CmpInsts, InsertElementInsts and InsertValueInsts are stored in an
array. This array is processed only after the vectorization of the first
key node after these instructions is finished. Vectorization goes
in reverse order to try to vectorize as much code as possible.

Reviewers: mzolotukhin, Ayal, mkuper, gilr, hfinkel, RKSimon

Subscribers: ashahid, anemet, RKSimon, mssimpso, llvm-commits

Differential Revision: https://reviews.llvm.org/D29826

llvm-svn: 310260
2017-08-07 15:25:49 +00:00
Matt Arsenault 8728c5f2db AMDGPU: Cleanup subtarget features
Try to avoid mutually exclusive features. Don't use
a real default GPU, and use a fake "generic". The goal
is to make it easier to see which set of features are
incompatible between feature strings.

Most of the test changes are due to random scheduling changes
from not having a default fullspeed model.

llvm-svn: 310258
2017-08-07 14:58:04 +00:00
Alexey Bataev 53d523c9eb Revert "[SLP] General improvements of SLP vectorization process."
This reverts commit r310255.

llvm-svn: 310257
2017-08-07 14:51:52 +00:00
Nirav Dave 3d3bde7682 [DAG] Extend visitSCALAR_TO_VECTOR optimization to truncated vector.
Relanding after case to insert explicit truncation as necessary.

Allow SCALAR_TO_VECTOR of EXTRACT_VECTOR_ELT to reduce to
EXTRACT_SUBVECTOR of vector shuffle when output is smaller. Marginally
improves vector shuffle computations.

Reviewers: efriedma, RKSimon, spatel

Subscribers: javed.absar, llvm-commits

Differential Revision: https://reviews.llvm.org/D35566

llvm-svn: 310256
2017-08-07 14:07:49 +00:00
Alexey Bataev faace8f1f1 [SLP] General improvements of SLP vectorization process.
Summary:
Patch tries to improve two-pass vectorization analysis, existing in SLP vectorizer. What it does:
1. Defines key nodes that are the vectorization roots. Previously, vectorization started only if a StoreInst or ReturnInst was found. Now, vectorization starts for all instructions with no users and void type (terminators, StoreInst) plus CallInsts.
2. CmpInsts, InsertElementInsts and InsertValueInsts are stored in an array. This array is processed only after the vectorization of the first key node after these instructions is finished. Vectorization goes in reverse order to try to vectorize as much code as possible.

Reviewers: mzolotukhin, Ayal, mkuper, gilr, hfinkel, RKSimon

Subscribers: ashahid, anemet, RKSimon, mssimpso, llvm-commits

Differential Revision: https://reviews.llvm.org/D29826

llvm-svn: 310255
2017-08-07 14:03:17 +00:00
Simon Dardis ec4ea99766 [DebugInfo][DWARF] Use PRIx64 explicitly in output.
llvm-svn: 310253
2017-08-07 13:30:03 +00:00
Michael Zuckerman 680ac10aa7 [X86][LLVM]Expanding Supports lowerInterleavedStore() in X86InterleavedAccess (VF16 stride 4).
This patch expands the support of lowerInterleavedStore to 16x8i stride 4.

LLVM creates suboptimal shuffle code-gen for AVX2. Overall, this patch is a specific fix for the pattern (Stride=4, VF=16), and we plan to include more patterns in the future.

The patch goal is to optimize the following sequence:
At the end of the computation, we have ymm2, ymm0, ymm12 and ymm3, each
holding 16 chars:

c0, c1, ..., c16
m0, m1, ..., m16
y0, y1, ..., y16
k0, k1, ..., k16

And these need to be transposed/interleaved and stored like so:

c0 m0 y0 k0 c1 m1 y1 k1 c2 m2 y2 k2 c3 m3 y3 k3 ....

Differential Revision: https://reviews.llvm.org/D35829

llvm-svn: 310252
2017-08-07 13:22:39 +00:00
Dmitry Preobrazhensky 50805a0b83 [AMDGPU][MC] Corrected VOP3 version of v_interp_* instructions for VI
See bug 32621: https://bugs.llvm.org//show_bug.cgi?id=32621

Reviewers: vpykhtin, SamWot, arsenm

Differential Revision: https://reviews.llvm.org/D35902

llvm-svn: 310251
2017-08-07 13:14:12 +00:00
Andre Vieira 7dffb9bfa6 [ARM] Fix assembly and disassembly for VMRS/VMSR
This patch addresses two issues with assembly and disassembly for VMRS/VMSR:

1. Currently, VMRS/VMSR instructions accessing fpsid, mvfr{0-2} and fpexc are
   accepted for non-ARMv8-A targets.

2. All VMRS/VMSR instructions accept writing/reading to PC and SP, when only
   ARMv7-A and ARMv8-A should be allowed to write/read SP, and none to PC.

This patch addresses those issues and adds tests for these cases.

Differential Revision: https://reviews.llvm.org/D36306

llvm-svn: 310243
2017-08-07 08:41:05 +00:00
Vitaly Buka 5d432ec929 [asan] Fix asan dynamic shadow check before copyArgsPassedByValToAllocas
llvm-svn: 310242
2017-08-07 07:35:33 +00:00
Vitaly Buka 629047de8e [asan] Disable checking of arguments passed by value for --asan-force-dynamic-shadow
Fails with "Instruction does not dominate all uses!"

llvm-svn: 310241
2017-08-07 07:12:34 +00:00
Guy Blank 5ca01695f7 [SelectionDAG] reset NewNodesMustHaveLegalTypes flag between basic blocks
The NewNodesMustHaveLegalTypes flag is set to false at the beginning of CodeGenAndEmitDAG, and set to true after legalizing types.
But before calling CodeGenAndEmitDAG we build the DAG for the basic block.
So for the first basic block NewNodesMustHaveLegalTypes would be 'false' during the SDAG building, and for all other basic blocks it would be 'true'.

This patch sets the flag to false before SDAG building each basic block.

Differential Revision:
https://reviews.llvm.org/D33435

llvm-svn: 310239
2017-08-07 05:51:14 +00:00
Davide Italiano b53b075bb1 [Reassociate] Use a range loop for clarity. NFCI.
While here, rename `i` to `Rank` as the latter is more
self-explanatory (and this code also uses `I` two lines below to
identify an Instruction).

llvm-svn: 310238
2017-08-07 01:57:21 +00:00
Davide Italiano a5cdc22e70 [Reassociate] Try to bail out early when canonicalizing.
This commit rearranges the checks to avoid calls to getRank()
when not needed (e.g. when RHS == LHS).

llvm-svn: 310237
2017-08-07 01:49:09 +00:00
Craig Topper 576fb91aef [InstCombine] Remove shift handling from OptAndOp.
Summary: This is all handled by SimplifyDemandedBits.

Reviewers: spatel, davide

Reviewed By: davide

Subscribers: davide, llvm-commits

Differential Revision: https://reviews.llvm.org/D36382

llvm-svn: 310234
2017-08-06 23:30:49 +00:00
Craig Topper a1693a2ed3 [InstCombine] Support (X ^ C1) & C2 --> (X & C2) ^ (C1&C2) for vector splats.
llvm-svn: 310233
2017-08-06 23:11:49 +00:00
Craig Topper 9cbdbefd0f [InstCombine] Support '(C - X) ^ signmask -> (C + signmask - X)' and '(X + C) ^ signmask -> (X + C + signmask)' for vector splats.
llvm-svn: 310232
2017-08-06 22:17:21 +00:00
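The per-lane arithmetic behind these folds can be checked in isolation (a
sketch for illustration, not part of the patch): XOR with the sign-mask flips
only the sign bit, which in two's complement is the same as adding the
sign-mask, so both identities hold modulo 2^32.

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t C = 1234567u, SignMask = 0x80000000u;
    for (uint64_t i = 0; i < 100000; ++i) {
      uint32_t X = static_cast<uint32_t>(i) * 2654435761u;
      assert(((C - X) ^ SignMask) == ((C + SignMask) - X)); // (C - X) ^ signmask
      assert(((X + C) ^ SignMask) == (X + C + SignMask));   // (X + C) ^ signmask
    }
    return 0;
  }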
Martin Storsjo c9263f4e49 [llvm-dlltool] Map the "arm64" machine type
Differential Revision: https://reviews.llvm.org/D36365

llvm-svn: 310223
2017-08-06 19:58:13 +00:00
Matt Arsenault a7eb14afc7 AMDGPU: Fix typo in feature description
llvm-svn: 310217
2017-08-06 18:13:23 +00:00
Sanjay Patel a923c2ee95 [x86] use more shift or LEA for select-of-constants
We can convert any select-of-constants to math ops:
http://rise4fun.com/Alive/d7d

For this patch, I'm enhancing an existing x86 transform that uses fake multiplies 
(they always become shl/lea) to avoid cmov or branching. The current code misses 
cases where we have a negative constant and a positive constant, so this is just 
trying to plug that hole.

The DAGCombiner diff prevents us from hitting a terrible inefficiency: we can start 
with a select in IR, create a select DAG node, convert it into a sext, convert it 
back into a select, and then lower it to sext machine code.

Some notes about the test diffs:

1. 2010-08-04-MaskedSignedCompare.ll - We were creating control flow that didn't exist in the IR.
2. memcmp.ll - Choosing -1 or 1 is the case that got me looking at this again. I
   think we could avoid the push/pop in some cases if we used 'movzbl %al' instead of an xor on
   a different reg? That's a post-DAG problem though.
3. mul-constant-result.ll - The trade-off between sbb+not vs. setne+neg could be addressed if 
   that's a regression, but I think those would always be nearly equivalent.
4. pr22338.ll and sext-i1.ll - These tests have undef operands, so I don't think we actually care about these diffs.
5. sbb.ll - This shows a win for what I think is a common case: choose -1 or 0.
6. select.ll - There's another borderline case here: cmp+sbb+or vs. test+set+lea? Also, sbb+not vs. setae+neg shows up again.
7. select_const.ll - These are motivating cases for the enhancement; replace cmov with cheaper ops.

Assembly differences between movzbl and xor to avoid a partial reg stall are caused later by the X86 Fixup SetCC pass.

Differential Revision: https://reviews.llvm.org/D35340

llvm-svn: 310208
2017-08-06 16:27:07 +00:00
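The underlying idea above, that any select-of-constants can become math ops,
can be illustrated with a small standalone check (a sketch for illustration,
not part of the patch): for integer constants A and B, (cond ? A : B) equals
B + cond*(A - B) in wrap-around arithmetic, and the fake multiply is what the
backend then turns into shl/lea.

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t A = static_cast<uint32_t>(-1), B = 1; // the "choose -1 or 1" case
    for (uint32_t cond = 0; cond <= 1; ++cond) {
      uint32_t sel = cond ? A : B;
      uint32_t math = B + cond * (A - B);
      assert(sel == math);
    }
    return 0;
  }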
Simon Pilgrim 17e290f1d3 [X86] Add comment to match closing Defs = [FPSW]. NFCI.
llvm-svn: 310202
2017-08-06 13:21:09 +00:00
Meador Inge 70ab7cc55c [AVR] Compute code model if one is not provided
The patch from r310028 fixed things to work with the new
`LLVMTargetMachine` constructor that came in on r309911.
However, the fix was partial since an object of type
`CodeModel::Model` must be passed to `LLVMTargetMachine`
(not one of `Optional<CodeModel::Model>`).

This patch fixes the problem in the same fashion that r309911
did for other machines: by checking if the passed optional
code model has a value and using `CodeModel::Small` if not.

llvm-svn: 310200
2017-08-06 12:02:17 +00:00
Craig Topper b5bf016015 [InstCombine] Support ~(c-X) --> X+(-c-1) and ~(X-c) --> (-c-1)-X for splat vectors.
llvm-svn: 310195
2017-08-06 06:28:41 +00:00
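The first of those folds is the two's-complement identity ~y == -y - 1 applied
per lane; a standalone numeric check (a sketch for illustration, not part of
the patch):

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t c = 42u;
    for (uint64_t i = 0; i < 100000; ++i) {
      uint32_t X = static_cast<uint32_t>(i) * 0x9E3779B9u;
      // ~(c - X) == X + (-c - 1), evaluated modulo 2^32.
      uint32_t lhs = ~(c - X);
      uint32_t rhs = X + (0u - c - 1u);
      assert(lhs == rhs);
    }
    return 0;
  }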
Craig Topper 6bfa2aee78 [X86] Enable isel to use the PAUSE instruction even when SSE2 is disabled
Summary:
On older processors this instruction encoding is treated as a NOP.

MSVC doesn't disable intrinsics based on features the way clang/gcc does. Because the PAUSE instruction encoding doesn't crash older processors, some software out there uses these intrinsics without checking for SSE2.

This change also seems to be consistent with gcc behavior.

Fixes PR34079

Reviewers: RKSimon, zvi

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36361

llvm-svn: 310190
2017-08-05 23:34:44 +00:00
Craig Topper 9ffda5ab86 [InstCombine] Fold (C - X) ^ signmask -> (C + signmask - X).
llvm-svn: 310186
2017-08-05 20:00:44 +00:00
Craig Topper 65dd32afbc [InstCombine] Teach the code that pulls logical operators through constant shifts to handle vector splats too.
llvm-svn: 310185
2017-08-05 20:00:42 +00:00
Craig Topper 1bbcab9ca5 [InstCombine] Support vector splats in foldSelectICmpAnd.
Unfortunately, it looks like there are some other missed optimizations in the generated code for some of these cases. I'll try to look at some of those next.

llvm-svn: 310184
2017-08-05 20:00:41 +00:00
Dinar Temirbulatov cc2294a4eb [SLPVectorizer] Add extra parameter to setInsertPointAfterBundle to handle different opcodes, NFCI.
Differential Revision: https://reviews.llvm.org/D35769

llvm-svn: 310183
2017-08-05 18:43:52 +00:00
Sanjay Patel 94da1de1ce [InstCombine] refactor trunc(binop) transforms; NFCI
In addition to moving the shift transforms over, we may want to
detect too-wide rotate patterns here (PR34046). 

llvm-svn: 310181
2017-08-05 15:19:18 +00:00
Florian Hahn d51a35e339 [ARM] The ARM backend is MachineVerifier clean now.
Summary: Thanks everyone involved in fixing the outstanding issues.

Reviewers: rovka, MatzeB, efriedma

Reviewed By: MatzeB

Subscribers: aemerson, javed.absar, llvm-commits, kristof.beyls

Differential Revision: https://reviews.llvm.org/D36153

llvm-svn: 310180
2017-08-05 15:14:06 +00:00
Chandler Carruth 691d0243a5 [LCG] Remove yet another variable only used inside of asserts.
llvm-svn: 310174
2017-08-05 08:33:16 +00:00
Benjamin Kramer ef42fd43f4 [LCG] Fold otherwise unused variable into assert.
No functionality change intended.

llvm-svn: 310173
2017-08-05 08:28:48 +00:00
Matt Arsenault b94972cb82 IPRA: Don't crash on null getCallPreservedMask
Kernels aren't callable, so they don't have a call preserved mask.

llvm-svn: 310172
2017-08-05 07:50:18 +00:00
Chandler Carruth adbf14ab85 [LCG] Completely remove the parent set and leaf tracking for RefSCCs.
After the previous series of patches, this is now trivial and deletes
a pretty astonishing amount of complexity. This has been a long time
coming, as the move toward a PO sequence of RefSCCs started eroding the
underlying use cases for this half of the data structure.

Among the biggest advantages here is that now there aren't two
independent data structures that need to stay in sync.

Some of my profiling has also indicated that updating the parent sets
was among the most expensive parts of the lazy call graph. Eliminating
it whole sale is likely to be a nice win in terms of compile time.

Last but not least, I had previously discussed with some folks keeping
it around for asserts and other correctness checking, but once the
fundamentals of the parent and child checking were implemented without
the parent sets, their value in correctness checking was tiny and
nowhere near worth the cost of the complexity required to keep everything
up-to-date.

llvm-svn: 310171
2017-08-05 07:37:00 +00:00
Chandler Carruth 38bd6b50ef [LCG] Re-implement the basic isParentOf, isAncestorOf, isChildOf, and
isDescendantOf methods on RefSCCs in terms of the forward edges rather
than the parent sets.

This is technically slower, but probably not interestingly slower, and
all of these routines were already so expensive that they're guarded
behind both !NDEBUG and EXPENSIVE_CHECKS.

This removes another non-critical usage of parent sets.

I've also added some comments to try and help clarify to any potential
users the costs of these routines. They're mostly useful for debugging,
asserts, or other queries.

llvm-svn: 310170
2017-08-05 06:24:09 +00:00
Chandler Carruth c718b8e7c3 [LCG] Add the concept of a "dead" node and use it to avoid a complex
walk over the parent set.

When removing a single function from the call graph, we previously would
walk the entire RefSCC's parent set and then walk every outgoing edge
just to find the ones to remove. In addition to this being quite high
complexity in theory, it is also the last fundamental use of the parent
sets.

With this change, when we remove a function we transform the node
containing it to be recognizably "dead" and then teach the edge
iterators to recognize edges to such nodes and skip them the same way
they skip null edges.

We can't move fully to using "dead" nodes -- when disconnecting two live
nodes we need to null out the edge. But the complexity this adds to the
edge sequence isn't too bad and the simplification of lazily handling
this seems like a significant win.

llvm-svn: 310169
2017-08-05 05:47:37 +00:00
Joel Jones 60711ca253 [AArch64] LSE Atomics reorg - part 1
Add memory synchronization semantics to LSE Atomics.

The memory semantics feature will be added in a subsequent patch.

In this patch, several corrections were added to the existing LSE Atomics
implementation, based on the ARM Errata D11904 from 05/12/2017.

Patch by: steleman

Differential Revision: https://reviews.llvm.org/D35319

llvm-svn: 310167
2017-08-05 04:30:55 +00:00
Chandler Carruth 39df40d8c2 [LCG] Replace an implicit bool operator with a named function. (NFC)
The definition of 'false' here was already pretty vague and debatable,
and I'm about to add another potential 'false' that would actually make
much more sense in a bool operator. Especially given how rarely this is
used, a nicely named method seems better.

llvm-svn: 310165
2017-08-05 04:04:06 +00:00
Chandler Carruth 403d3c4b2b [LCG] When removing a dead function and clearing out the data
structures, actually null out the graph pointers as well. We won't ever
update these, and we certainly shouldn't be calling any methods on them,
so it seems good to defensively nuke them.

llvm-svn: 310164
2017-08-05 03:37:39 +00:00
Chandler Carruth 7cb23e705f [LCG] Rather than walking the directed graph structure to update graph
pointers in node objects, just walk the map from function to node.

It doesn't have stable ordering, but works just as well and is much
simpler. We don't need ordering when just updating internal pointers.

llvm-svn: 310163
2017-08-05 03:37:39 +00:00
Chandler Carruth 2c58e1a45c [LCG] Remove the complex walk of the parent sets to update graph
pointers.

This is completely unnecessary as we have a trivial list of RefSCCs now
that we can walk.

llvm-svn: 310162
2017-08-05 03:37:38 +00:00
Chandler Carruth 13ffd110ad [LCG] Remove the use of the parent sets to compute connectivity when
merging RefSCCs.

The logic to directly use the reference edges is simpler and not
substantially slower (despite the comments to the contrary) because this
is not actually an especially hot part of LCG in practice.

llvm-svn: 310161
2017-08-05 03:37:37 +00:00
Craig Topper fc5283092b [InstCombine] In foldSelectICmpAnd, if we need to truncate from the 'and' type to the 'select' type, do it after shifting right instead of just bailing.
Previously we were always trying to emit the zext or truncate before any shift. This meant if the 'and' mask was larger than the size of the truncate we would skip the transformation.

Now we shift the result of the and right first leaving the bit within the range of the truncate.

This matches what we are doing in foldSelectICmpAndOr for the same problem.

llvm-svn: 310159
2017-08-05 01:45:17 +00:00
Reid Kleckner 7662d50d10 [X86] Teach fastisel to select calls to dllimport functions
Summary:
Direct calls to dllimport functions are very common on Windows. We should
add them to the -O0 fast path.

Reviewers: rafael

Subscribers: llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D36197

llvm-svn: 310152
2017-08-05 00:10:43 +00:00
Kostya Serebryany a84a6c1e48 [libFuzzer] use the in-binary pc table (instead of PCs captured at run-time) to implement -exit_on_src_pos
llvm-svn: 310151
2017-08-04 23:49:53 +00:00
Kostya Serebryany be7a35769d [libFuzzer] print PCs using the in-binary PC-table instead of relying on PCs captured at run-time
llvm-svn: 310148
2017-08-04 23:13:58 +00:00
Adrian McCarthy b41f03e768 Enable llvm-pdbutil to list enumerations using native PDB reader
This extends the native reader to enable llvm-pdbutil to list the enums in a
PDB and it includes a simple test. It does not yet list the values in the
enumerations, which requires an actual implementation of
NativeEnumSymbol::FindChildren.

To exercise this code, use a command like:

    llvm-pdbutil pretty -native -enums foo.pdb

Differential Revision: https://reviews.llvm.org/D35738

llvm-svn: 310144
2017-08-04 22:37:58 +00:00
Sanjay Patel e12d734be3 [InstCombine] narrow truncated add/sub/mul with constant
Name: narrow_sub
  %sub = sub i32 C1, %x
  %r = trunc i32 %sub to i8
  =>  
  %xn = trunc i32 %x to i8
  %narrowC = trunc i32 C1 to i8
  %r = sub i8 %narrowC, %xn
 
Name: narrow_add
  %add = add i32 %x, C1
  %r = trunc i32 %add to i8
  =>  
  %xn = trunc i32 %x to i8
  %narrowC = trunc i32 C1 to i8
  %r = add i8 %xn, %narrowC
  
Name: narrow_mul
  %mul = mul i32 %x, C1
  %r = trunc i32 %mul to i8
  =>  
  %xn = trunc i32 %x to i8
  %narrowC = trunc i32 C1 to i8
  %r = mul i8 %xn, %narrowC


http://rise4fun.com/Alive/QpS

This doesn't solve PR34046 (failure to recognize rotate):
https://bugs.llvm.org/show_bug.cgi?id=34046
...but it reduces an extra complication in the description examples 
to a form that we can more easily match.

llvm-svn: 310141
2017-08-04 22:30:34 +00:00
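These folds work because truncation commutes with add, sub and mul in
two's-complement arithmetic; a standalone numeric check (a sketch for
illustration, not part of the patch):

  #include <cassert>
  #include <cstdint>

  int main() {
    const uint32_t C1 = 0xDEADBEEFu; // illustrative wide constant
    for (uint64_t i = 0; i < 100000; ++i) {
      uint32_t x = static_cast<uint32_t>(i) * 0x9E3779B9u;
      uint8_t xn = static_cast<uint8_t>(x), cn = static_cast<uint8_t>(C1);
      assert(static_cast<uint8_t>(C1 - x) == static_cast<uint8_t>(cn - xn)); // narrow_sub
      assert(static_cast<uint8_t>(x + C1) == static_cast<uint8_t>(xn + cn)); // narrow_add
      assert(static_cast<uint8_t>(x * C1) == static_cast<uint8_t>(xn * cn)); // narrow_mul
    }
    return 0;
  }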
Reid Kleckner cefb333582 [Support] Use FILE_SHARE_DELETE to fix RemoveFileOnSignal on Windows
Summary:
Tools like clang that use RemoveFileOnSignal on their output files
weren't actually able to clean up their outputs before this change.  Now
the call to llvm::sys::fs::remove succeeds and the temporary file is
deleted. This is a stop-gap to fix clang before implementing the
solution outlined in PR34070.

Reviewers: davide

Subscribers: llvm-commits, hiraditya

Differential Revision: https://reviews.llvm.org/D36337

llvm-svn: 310137
2017-08-04 21:52:00 +00:00
Kyle Butt 74f61dd8ef BlockPlacement: add a flag to force cold block outlining w/o a profile.
NFC.

llvm-svn: 310129
2017-08-04 21:13:41 +00:00
Kostya Serebryany 64426e3ba8 [libFuzzer] re-enable fuzzer-printcovpcs.test
llvm-svn: 310126
2017-08-04 20:47:22 +00:00
Nico Weber 69ca5322d4 Revert r310055, it caused PR34074.
llvm-svn: 310123
2017-08-04 20:40:38 +00:00
Nico Weber b24df62bb6 Revert r310058, it caused PR34073.
llvm-svn: 310118
2017-08-04 20:24:13 +00:00
Amara Emerson 56dca4e3ca [SCEV] Preserve NSW information for sext(subtract).
Pushes the sext onto the operands of a Sub if NSW is present.
Also adds support for propagating the nowrap flags of the
llvm.ssub.with.overflow intrinsic during analysis.

Differential Revision: https://reviews.llvm.org/D35256

llvm-svn: 310117
2017-08-04 20:19:46 +00:00
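The nsw-based reasoning can be checked numerically (a sketch for illustration,
not part of the patch): when a 32-bit subtraction is known not to wrap,
sign-extending the difference gives the same value as subtracting the
sign-extended operands.

  #include <cassert>
  #include <cstdint>

  int main() {
    const int32_t pairs[][2] = {{100, 7}, {-5, 123456}, {INT32_MIN + 10, -3}, {42, -42}};
    for (const auto &p : pairs) {
      int64_t wide = static_cast<int64_t>(p[0]) - static_cast<int64_t>(p[1]); // sext(a) - sext(b)
      assert(wide >= INT32_MIN && wide <= INT32_MAX); // the "nsw" precondition
      int32_t narrow = static_cast<int32_t>(wide);    // equals a - b since it didn't wrap
      assert(static_cast<int64_t>(narrow) == wide);   // sext(a - b) == sext(a) - sext(b)
    }
    return 0;
  }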
Quentin Colombet c046208c52 [GlobalISel] Remove the GISelAccessor API.
Its sole purpose was to avoid spreading around ifdefs related to
building global-isel. Since r309990, GlobalISel is not optional anymore,
thus, we can get rid of this mechanism all together.

NFC.

llvm-svn: 310115
2017-08-04 20:15:46 +00:00
Quentin Colombet c7652e22d7 [GlobalISel] Remove a stale comment in CMake.
Thanks to Diana Picus <diana.picus@linaro.org> for noticing.

NFC

llvm-svn: 310114
2017-08-04 20:15:41 +00:00
Kostya Serebryany 27cba58898 [libFuzzer] make a test more robust
llvm-svn: 310113
2017-08-04 20:09:15 +00:00
Kostya Serebryany 1d7a33b8ae [libFuzzer] remove the now redundant 'LLVMFuzzer-' prefix from libFuzzer tests
llvm-svn: 310110
2017-08-04 20:05:25 +00:00
Kostya Serebryany 785cec91a4 [libFuzzer] split one test into several
llvm-svn: 310106
2017-08-04 20:01:04 +00:00
George Karpenkov b0c2bb572d [libFuzzer tests] Only enable libFuzzer tests if
-DLIBFUZZER_ENABLE_TESTS=ON is set.

llvm-svn: 310100
2017-08-04 19:29:16 +00:00
Ulrich Weigand a11f63a952 [SystemZ] Add support for 128-bit atomic load/store/cmpxchg
This adds support for the main 128-bit atomic operations,
using the SystemZ instructions LPQ, STPQ, and CDSG.

Generating these instructions is a bit more complex than usual
since the i128 type is not legal for the back-end.  Therefore,
we have to hook the LowerOperationWrapper and ReplaceNodeResults
TargetLowering callbacks.

llvm-svn: 310094
2017-08-04 18:57:58 +00:00
Ulrich Weigand 02f1c02c27 [SystemZ] Eliminate unnecessary serialization operations
We currently emit a serialization operation (bcr 14, 0) before every
atomic load and after every atomic store.  This is overly conservative.
The SystemZ architecture actually does not require any serialization
for atomic loads, and requires a serialization after an atomic store only if
we need to enforce sequential consistency.  This is what other compilers
for the platform implement as well.

llvm-svn: 310093
2017-08-04 18:53:35 +00:00
Evgeny Stupachenko 38197c66a1 Fix PR33514
Summary:
The bug was uncovered after the fix of PR23384 (part 3 of 3).
The patch restricts pointer multiplication in SCEV computation for ICmpZero.

Reviewers: qcolombet

Differential Revision: http://reviews.llvm.org/D36170

From: Evgeny Stupachenko <evstupac@gmail.com>
                         <evgeny.v.stupachenko@intel.com>
llvm-svn: 310092
2017-08-04 18:46:13 +00:00
Kostya Serebryany 0c079d06d3 [libFuzzer] make trace-pc.test more reliable
llvm-svn: 310091
2017-08-04 18:43:39 +00:00
Connor Abbott 66b9bd6e50 [AMDGPU] Implement llvm.amdgcn.set.inactive intrinsic
Summary:
This intrinsic lets us set inactive lanes to an identity value when
implementing wavefront reductions. In combination with Whole Wavefront
Mode, it lets inactive lanes be skipped over as required by GLSL/Vulkan.
Lowering the intrinsic needs to happen post-RA so that RA knows that the
destination isn't completely overwritten due to the EXEC shenanigans, so
we need another pseudo-instruction to represent the un-lowered
intrinsic.

Reviewers: tstellar, arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye

Differential Revision: https://reviews.llvm.org/D34719

llvm-svn: 310088
2017-08-04 18:36:54 +00:00
Connor Abbott 92638ab625 [AMDGPU] Add support for Whole Wavefront Mode
Summary:
Whole Wavefront Wode (WWM) is similar to WQM, except that all of the
lanes are always enabled, regardless of control flow. This is required
for implementing wavefront reductions in non-uniform control flow, where
we need to use the inactive lanes to propagate intermediate results, so
they need to be enabled. We need to propagate WWM to uses (unless
they're explicitly marked as exact) so that they also propagate
intermediate results correctly. We do the analysis and exec mask munging
during the WQM pass, since there are interactions with WQM for things
that require both WQM and WWM. For simplicity, WWM is entirely
block-local -- blocks are never WWM on entry or exit of a block, and WWM
is not propagated to the block level.  This means that computations
involving WWM cannot involve control flow, but we only ever plan to use
WWM for a few limited purposes (none of which involve control flow)
anyways.

Shaders can ask for WWM using the @llvm.amdgcn.wwm intrinsic. There
isn't yet a way to turn WWM off -- that will be added in a future
change.

Finally, it turns out that turning on inactive lanes causes a number of
problems with register allocation. While the best long-term solution
seems like teaching LLVM's register allocator about predication, for now
we need to add some hacks to prevent ourselves from getting into trouble
due to constraints that aren't currently expressed in LLVM. For the gory
details, see the comments at the top of SIFixWWMLiveness.cpp.

Reviewers: arsenm, nhaehnle, tpr

Subscribers: kzhuravl, wdng, mgorny, yaxunl, dstuttard, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D35524

llvm-svn: 310087
2017-08-04 18:36:52 +00:00
Connor Abbott de068fe8b4 [AMDGPU] refactor WQM pass in preparation for WWM (NFCI)
Summary:
Right now, the WQM pass conflates two different things when tracking the
Needs of an instruction:

1. Needs can be StateWQM, which is propagated to other instructions, and
means that this instruction (and everything it depends on) must be
calculated in WQM.
2. Needs can be StateExact, which is not propagated to other
instructions, and means that this instruction must not be calculated in
WQM and WQM-ness must not be propagated past this instruction.

This works now because there are only two different states, but in the
future we want to be able to express things like "calculate this in WQM,
but please disable WWM and don't propagate it" (to implement
@llvm.amdgcn.set.inactive). In order to do this, we need to split the
per-instruction Needs field in two: a new Needs field, which can only
contain StateWQM (and in the future, StateWWM) and is propagated to
sources, and a Disables field, which can also contain just StateWQM or
nothing for now.

We keep the per-block tracking the same for now, by translating
Needs/Disables to the old representation with only StateWQM or
StateExact. The other place that needs special handling is when we
emit the state transitions. We could just translate back to the old
representation there as well, which we almost do, but instead of 0 as a
placeholder value for "any state," we explicitly OR together all the
states an instruction is allowed to be in. This lets us refactor the
code in preparation for WWM, where we'll need to be able to handle
things like "this instruction must be in Exact or WQM, but not WWM."

Reviewers: arsenm, nhaehnle, tpr

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D35523

llvm-svn: 310086
2017-08-04 18:36:50 +00:00
Connor Abbott 8c217d0a29 [AMDGPU] Add an llvm.amdgcn.wqm intrinsic for WQM
Summary:
Previously, we assumed that certain types of instructions needed WQM in
pixel shaders, particularly DS instructions and image sampling
instructions. This was OK because, with OpenGL, the assumption was
correct. But we want to start using DPP instructions for derivatives as
well as other things, so the assumption that we can infer whether to use
WQM based on the instruction won't continue to hold. This intrinsic lets
frontends like Mesa indicate what things need WQM based on their
knowledge of the API, rather than second-guessing them in the backend.
We need to keep around the old method of enabling WQM, but eventually we
should remove it once Mesa catches up. For now, this will let us use DPP
instructions for computing derivatives correctly.

Reviewers: arsenm, tpr, nhaehnle

Subscribers: kzhuravl, wdng, yaxunl, dstuttard, llvm-commits, t-tye

Differential Revision: https://reviews.llvm.org/D35167

llvm-svn: 310085
2017-08-04 18:36:49 +00:00
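A sketch of the new intrinsic in use, assuming the overloaded single-operand form; the frontend wraps a value whose computation it knows must run in whole-quad mode:

```
declare float @llvm.amdgcn.wqm.f32(float)

define amdgpu_ps float @needs_wqm(float %v) {
  ; The frontend knows from API semantics that the computation feeding
  ; %v needs whole quads enabled (e.g. ahead of a DPP-based derivative),
  ; so it marks the value explicitly instead of relying on the backend
  ; to infer WQM from the instruction kind.
  %q = call float @llvm.amdgcn.wqm.f32(float %v)
  ret float %q
}
```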
Marcello Maggioni 8de4bbdaa5 [MachineOperand] Add ChangeToTargetIndex method. NFC
Differential Revision: https://reviews.llvm.org/D36301

llvm-svn: 310083
2017-08-04 18:24:09 +00:00
Reid Kleckner af3e93ac93 [Support] Remove getPathFromOpenFD, it was unused
Summary:
It was added to support clang warnings about includes with case
mismatches, but it ended up not being necessary.

Reviewers: twoh, rafael

Subscribers: hiraditya, llvm-commits

Differential Revision: https://reviews.llvm.org/D36328

llvm-svn: 310078
2017-08-04 17:43:49 +00:00
George Karpenkov 96d6008145 Fixing buildbots: do not register check-fuzzer if clang or asan are not
present.

llvm-svn: 310077
2017-08-04 17:43:29 +00:00
George Karpenkov a5de052362 Drop Windows support from libFuzzer tests.
Differential Revision: https://reviews.llvm.org/D36205

llvm-svn: 310076
2017-08-04 17:43:28 +00:00
George Karpenkov 8ecdd7be15 Port libFuzzer tests to LIT. Do not require two-stage build for check-fuzzer.
This revision ports all libFuzzer tests apart from the unittest to LIT.
The advantages of doing so include:

 - Tests being self-contained
 - Much easier debugging of a single test
 - No need for using a two-stage compilation

The unit-test is still compiled using CMake, but it does not need a
freshly built compiler.

NOTE: The previous two-stage bot configuration will NOT work, as in the
second-stage build LLVM_USE_SANITIZER is set, which prevents ASAN from
being built.
Thus bots will be reconfigured in the next few commits.

Differential Revision: https://reviews.llvm.org/D36295

llvm-svn: 310075
2017-08-04 17:19:45 +00:00
Easwaran Raman ff77cc750c [Inliner] Fix a typo in option description. NFC.
llvm-svn: 310073
2017-08-04 17:15:17 +00:00
Javed Absar 9cda599151 [ARM] Use searchable-table for banked registers
This is a continuation of https://reviews.llvm.org/D36219

This patch uses reverse mapping (encoding->name) in
ARMInstPrinter::printBankedRegOperand to get rid of
hard-coded values (as pointed out by @olista01).

Reviewed by: @fhahn, @rovka, @olista01
Differential Revision: https://reviews.llvm.org/D36260

llvm-svn: 310072
2017-08-04 17:10:11 +00:00
Reid Kleckner da748f1c3d [ArgPromotion] Preserve alignment of byval argument in new alloca
The frontend may have requested a higher alignment for any reason, and
downstream optimizations may already have taken advantage of it.  We
should keep the same alignment when moving the allocation from the
parameter area to the local variable area.

Fixes PR34038

llvm-svn: 310071
2017-08-04 17:09:11 +00:00
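A rough IR sketch of the situation being fixed (function names and types are illustrative): the byval parameter carries a larger-than-ABI alignment, and the alloca that replaces it should keep that alignment rather than falling back to the type's natural alignment.

```
; Before promotion: the frontend requested 16-byte alignment for the
; byval copy, even though i64's ABI alignment is only 8 bytes.
define internal void @callee(i64* byval align 16 %p) {
  %v = load i64, i64* %p
  ret void
}

; After promotion (sketch): the replacement allocation keeps the
; requested alignment, e.g.
;   %p.copy = alloca i64, align 16
; rather than dropping back to align 8.
```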
Craig Topper 4e22ee6745 [ConstantInt] Use ConstantInt::getValue instead of Constant::getUniqueInteger in a few places where we obviously have a ConstantInt. NFC
getUniqueInteger will ultimately call ConstantInt::getValue, but calling ConstantInt::getValue should be inlined.

llvm-svn: 310069
2017-08-04 16:59:29 +00:00
Chad Rosier 14fc82a1df [AArch64] Fix an assertion for pre-index generation with unscaled loads/stores.
Differential Revision: https://reviews.llvm.org/D36248
PR34035

llvm-svn: 310066
2017-08-04 16:44:06 +00:00
Dehao Chen 63799512b2 Adjust the hotness threshold from 99.9% to 99%.
Summary: We originally set the hotness threshold at 99.9% to be consistent with GCC FDO, but the inline heuristics of the two compilers differ: LLVM uses a bottom-up algorithm while GCC uses a priority-based one. The LLVM algorithm tends to inline too much too early, which prevents hot callsites from being inlined further into their callers. Given this restriction, we think it is reasonable to lower the hotness threshold to give priority to callsites that are really hot. Our experiments show that this change improves performance on large applications. Note that the inline heuristic has great room for further tuning; once the heuristics are refined, we could adjust this threshold to allow inlining of less hot callsites.

Reviewers: davidxl, tejohnson, eraman

Reviewed By: tejohnson

Subscribers: sanjoy, llvm-commits

Differential Revision: https://reviews.llvm.org/D36317

llvm-svn: 310065
2017-08-04 16:20:54 +00:00
Benjamin Kramer bda212a65d [InstCombine] Fold single-use variable into assert.
Avoids unused variable warnings in Release builds. No functional change.

llvm-svn: 310064
2017-08-04 16:08:41 +00:00
Craig Topper 760ff6ee87 [InstCombine] Remove the (not (sext)) case from foldBoolSextMaskToSelect and inline the remaining code to match visitOr
Summary:
The (not (sext)) case is really (xor (sext), -1), which should have been simplified to (sext (xor x, 1)) before we got here. So we shouldn't need to handle it.

With that taken care of, we only need two cases, so we don't need the swap anymore. This brings us in sync with the equivalent code in visitOr, so inline this to match.

Reviewers: spatel, eli.friedman, majnemer

Reviewed By: spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36240

llvm-svn: 310063
2017-08-04 16:07:20 +00:00
Craig Topper 3b74a68cc7 [InstCombine] Use ConstantInt::getFalse to reduce some code. NFC
llvm-svn: 310062
2017-08-04 16:07:18 +00:00
Charles Saternos 75da10d1b2 [ThinLTO] Add FunctionAttrs to ThinLTO index
Adds function attributes to the index: ReadNone, ReadOnly, NoRecurse, NoAlias. These attributes will be used for future ThinLTO optimizations that will propagate function attributes across modules.

llvm-svn: 310061
2017-08-04 16:00:58 +00:00
Sanjay Patel 79e7f6b3e3 [InstCombine] narrow lshr with constant
Name: narrow_shift
Pre: C1 < 8
%zx = zext i8 %x to i32
%l = lshr i32 %zx, C1
  =>  
%narrowC = trunc i32 C1 to i8
%ns = lshr i8 %x, %narrowC
%l = zext i8 %ns to i32

http://rise4fun.com/Alive/jIV

This isn't directly applicable to PR34046 as written, but we
need to have more narrowing folds like this to be sure that
rotate patterns are recognized.

llvm-svn: 310060
2017-08-04 15:42:47 +00:00
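A concrete instance of the fold above with C1 = 3 (within the precondition C1 < 8), as it might look in IR; the before/after functions are illustrative:

```
; Before: shift the widened value.
define i32 @before(i8 %x) {
  %zx = zext i8 %x to i32
  %l  = lshr i32 %zx, 3
  ret i32 %l
}

; After (sketch): shift in the narrow type, then extend.
define i32 @after(i8 %x) {
  %ns = lshr i8 %x, 3
  %l  = zext i8 %ns to i32
  ret i32 %l
}
```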
Dmitry Preobrazhensky 4b11a78a6e [AMDGPU][MC] Enabled expressions as operands
See bug 33579: https://bugs.llvm.org//show_bug.cgi?id=33579

Reviewers: vpykhtin, SamWot, arsenm

Differential Revision: https://reviews.llvm.org/D36091

llvm-svn: 310059
2017-08-04 13:55:24 +00:00
Simon Pilgrim 5c63586489 [DAGCombiner] Extending pattern detection for vector shuffle.
If all the operands of a BUILD_VECTOR extract elements from the same vector, then split the vector efficiently based on the maximum vector access index.

Committed on behalf of @jbhateja (Jatin Bhateja)

Differential Revision: https://reviews.llvm.org/D35788

llvm-svn: 310058
2017-08-04 12:46:35 +00:00
Filipe Cabecinhas fb9d2a8775 [DSE] Merge stores when the later store only writes to memory locations the early store also wrote to.
Summary:
This fixes PR31777.

If both stores' values are ConstantInt, we merge the two stores
(shifting the smaller store appropriately) and replace the earlier (and
larger) store with an updated constant.

In the future we should also support vectors of integers. And maybe
float/double if we can.

Reviewers: hfinkel, junbuml, jfb, RKSimon, bkramer

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D30703

llvm-svn: 310055
2017-08-04 12:28:36 +00:00
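A little-endian sketch of the kind of pair that can now be merged: the later, smaller store lies entirely within the bytes written by the earlier store, so the earlier constant is updated and the later store goes away. Function names are illustrative.

```
; Before: the i16 store rewrites the low two bytes of the earlier i64 store.
define void @before(i64* %p) {
  store i64 0, i64* %p
  %q = bitcast i64* %p to i16*
  store i16 1, i16* %q
  ret void
}

; After (sketch): a single store with the merged constant.
define void @after(i64* %p) {
  store i64 1, i64* %p
  ret void
}
```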
Nikolai Bozhenov 1545eb3408 [InstCombine] Canonicalize clamp of float types to minmax in fast mode.
Summary:
This commit allows matchSelectPattern to recognize clamp of float
arguments in the presence of FMF the same way as already done for
integers.

This case is a little different though. With integers, given the
min/max pattern is recognized, DAGBuilder starts selecting MIN/MAX
"automatically". That is not the case for float, because for them only
full FMINNAN/FMINNUM/FMAXNAN/FMAXNUM ISD nodes exist and they do care
about NaNs. On the other hand, some backends (e.g. X86) have only
FMIN/FMAX nodes that do not care about NaNS and the former NAN/NUM
nodes are illegal thus selection is not happening. So I decided to do
such kind of transformation in IR (InstCombiner) instead of
complicating the logic in the backend.

Reviewers: spatel, jmolloy, majnemer, efriedma, craig.topper

Reviewed By: efriedma

Subscribers: hiraditya, javed.absar, n.bozhenov, llvm-commits

Patch by Andrei Elovikov <andrei.elovikov@intel.com>

Differential Revision: https://reviews.llvm.org/D33186

llvm-svn: 310054
2017-08-04 12:22:17 +00:00
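A sketch of the float clamp pattern this change lets matchSelectPattern recognize, assuming appropriate fast-math flags on the compares (the exact flags required are not spelled out here):

```
; clamp(x, 0.0, 1.0) written as two fcmp/select pairs; with fast-math
; flags this can now be treated as a min/max pattern the same way the
; integer form already is.
define float @clamp01(float %x) {
  %lo.cmp = fcmp fast olt float %x, 0.0
  %lo     = select i1 %lo.cmp, float 0.0, float %x   ; max(x, 0.0)
  %hi.cmp = fcmp fast ogt float %lo, 1.0
  %r      = select i1 %hi.cmp, float 1.0, float %lo  ; min(lo, 1.0)
  ret float %r
}
```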
Florian Gross 2feb105882 [AMDGPU] Fixed MSVC build break
Error was:

field of type 'llvm::ArgDescriptor' has private default constructor
const AMDGPUFunctionArgInfo AMDGPUArgumentUsageInfo::ExternFunctionInfo{};
                                                                        ^

llvm-svn: 310048
2017-08-04 10:53:07 +00:00
Zoran Jovanovic 1c17001235 [mips][microMIPS] Extending size reduction pass with ADDIUSP and ADDIUR1SP
Author: milena.vujosevic.janicic
The patch extends the size reduction pass for microMIPS.
The following instructions are examined and transformed, if possible:
ADDIU instruction is transformed into 16-bit instruction ADDIUSP
ADDIU instruction is transformed into 16-bit instruction ADDIUR1SP
Usage of u_int64_t was replaced by uint64_t to avoid the issues that caused the previous version of the patch to be reverted.
Differential Revision: https://reviews.llvm.org/D34511

llvm-svn: 310044
2017-08-04 10:18:44 +00:00
Max Kazantsev 9505470033 Do not declare a variable which is used only in assert. NFC
llvm-svn: 310034
2017-08-04 07:41:24 +00:00
Max Kazantsev 2f6ae28152 [IRCE] Handle loops with step different from 1/-1
This patch generalizes IRCE to handle IV steps that are not equal to 1 or -1.

Differential Revision: https://reviews.llvm.org/D35539

llvm-svn: 310032
2017-08-04 07:01:04 +00:00
Stanislav Mekhanoshin 6c7a8d0b5f [AMDGPU] Preserve inverted bit in SI_IF in presence of SI_KILL
If SI_KILL appears between the SI_IF and SI_END_CF, we need to preserve
the bits actually flipped by the if rather than restoring the original
mask.

Differential Revision: https://reviews.llvm.org/D36299

llvm-svn: 310031
2017-08-04 06:58:42 +00:00
Dylan McKay 0547447831 [AVR] Update target machine to use new constructor parameters
The required parameters were changed in r309911.

llvm-svn: 310028
2017-08-04 05:48:20 +00:00
Max Kazantsev 07da1ab23a [IRCE] Recognize loops with unsigned latch conditions
This patch enables recognition of loops with ult/ugt latch conditions.

Differential Revision: https://reviews.llvm.org/D35302

llvm-svn: 310027
2017-08-04 05:40:20 +00:00
Craig Topper c2d3c631ff [InstCombine] Move the call to foldSelectICmpAnd into foldSelectInstWithICmp. NFCI
llvm-svn: 310025
2017-08-04 05:12:37 +00:00
Craig Topper a86ca08d26 [InstCombine] Remove unnecessary casts. NFC
We're calling an overload of getOpcode that already returns Instruction::CastOps.

llvm-svn: 310024
2017-08-04 05:12:35 +00:00
Daniel Jasper 3f47a5c054 Prevent unused warning in non-assert builds (introduced in r310014).
llvm-svn: 310022
2017-08-04 05:05:29 +00:00
Victor Leschuk 56b03d0dd6 Un-revert r310014: false revert, it wasn't the cause of build break
llvm-svn: 310021
2017-08-04 04:51:15 +00:00
Victor Leschuk 21713ebfb1 Revert r310014 as it breaks build lld-x86_64-darwin13
llvm-svn: 310020
2017-08-04 04:43:54 +00:00
Reid Kleckner 02aeadcf3d [Support] Update comments about stdout, raw_fd_ostream, and outs()
The full story is in the comments:

  // Do not attempt to close stdout or stderr. We used to try to maintain the
  // property that tools that support writing file to stdout should not also
  // write informational output to stdout, but in practice we were never able to
  // maintain this invariant. Many features have been added to LLVM and clang
  // (-fdump-record-layouts, optimization remarks, etc) that print to stdout, so
  // users must simply be aware that mixed output and remarks is a possibility.

NFC, I am just updating comments to reflect reality.

llvm-svn: 310016
2017-08-04 01:39:23 +00:00
Adrian Prantl fd8c8e9fe6 Teach GlobalSRA to update the debug info for split-up globals.
This is similar to what we are doing in "regular" SROA and creates
DW_OP_LLVM_fragment operations to describe the resulting variables.

rdar://problem/33654891

llvm-svn: 310014
2017-08-04 01:19:54 +00:00
Connor Abbott 00755362b9 [AMDGPU] Add missing hazard for DPP-after-EXEC-write
Summary:
Following the docs, we need at least 5 wait states between an EXEC write
and an instruction that uses DPP.

Reviewers: tstellar, arsenm

Subscribers: kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D34849

llvm-svn: 310013
2017-08-04 01:09:43 +00:00
George Karpenkov 5bd0503680 Disable libFuzzer tests on Windows
Differential Revision: https://reviews.llvm.org/D36297

llvm-svn: 310009
2017-08-04 00:26:12 +00:00
Matt Arsenault 5c921a9291 AMDGPU: Remove pointless asserts
llvm-svn: 310007
2017-08-04 00:00:13 +00:00
Teresa Johnson 8482e56920 Use profile summary to disable peeling for huge working sets
Summary:
Detect when the working set size of a profiled application is huge,
by comparing the number of counts required to reach the hot percentile
in the profile summary to a large threshold*.

When the working set size is determined to be huge, disable peeling
to avoid bloating the working set further.

*Note that the selected threshold (15K) is significantly larger than the
largest working set value in SPEC cpu2006 (which is gcc at around 11K).

Reviewers: davidxl

Subscribers: mehdi_amini, mzolotukhin, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D36288

llvm-svn: 310005
2017-08-03 23:42:58 +00:00
Matt Arsenault a176cc5b93 AMDGPU: Don't use report_fatal_error for unsupported call types
llvm-svn: 310004
2017-08-03 23:32:41 +00:00
Matt Arsenault a202538bfa AMDGPU: Remove error on calls for amdgcn
Repurpose the -amdgpu-function-calls flag. Rather
than require it to emit a call, only use it to
run the always inline path or not.

llvm-svn: 310003
2017-08-03 23:24:05 +00:00
Matt Arsenault 817c253e60 AMDGPU: Fix implicitarg.ptr handling special inputs
llvm-svn: 310002
2017-08-03 23:12:44 +00:00
Martell Malone 346a5fdc9b Support: WOA64 and WOA Signals
Reviewers: rnk

Differential Revision: https://reviews.llvm.org/D21813

llvm-svn: 310001
2017-08-03 23:12:33 +00:00
Matt Arsenault 8623e8d864 AMDGPU: Pass special input registers to functions
llvm-svn: 309998
2017-08-03 23:00:29 +00:00
Eric Christopher 52854dcd34 Fix typo.
llvm-svn: 309997
2017-08-03 22:41:12 +00:00
Matt Arsenault 7016f13450 AMDGPU: Add analysis pass for function argument info
This will allow only adding necessary inputs to callee functions
that need special inputs forwarded from the kernel.

llvm-svn: 309996
2017-08-03 22:30:46 +00:00
Easwaran Raman 974d4eea93 [Inliner] Increase threshold for hot callsites without PGO.
Summary:
This increases the inlining threshold for hot callsites. Hotness is
defined in terms of block frequency of the callsite relative to the
caller's entry block's frequency. Since this requires BFI in the
inliner, this only affects the new PM pipeline. This is enabled by
default at -O3.

This improves the performance of some internal benchmarks. Notably, an
internal benchmark for Gipfeli compression
(https://github.com/google/gipfeli) improves by ~7%. Povray in SPEC2006
improves by ~2.5%. I am running more experiments and will update the
thread if other benchmarks show improvement/regression.

In terms of text size, LLVM test-suite shows a 1.22% text size
increase. Diving into the results, 13 of the benchmarks in the
test-suite increases by > 10%. Most of these are small, but
Adobe-C++/loop_unroll (17.6% increase) and tramp3d (20.7% size increase)
have >250K text size. On a large application, the text size increases by
2%

Reviewers: chandlerc, davidxl

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36199

llvm-svn: 309994
2017-08-03 22:23:33 +00:00
Eugene Zelenko 79220eaeec [Mips] Fix some Clang-tidy modernize-use-using and Include What You Use warnings; other minor fixes (NFC).
llvm-svn: 309993
2017-08-03 22:12:30 +00:00
Matt Arsenault a52391f2db DAG: Provide access to Pass instance from SelectionDAG
This allows accessing an analysis pass during lowering.

llvm-svn: 309991
2017-08-03 21:54:00 +00:00
Quentin Colombet 250e050a50 [GlobalISel] Make GlobalISel a non-optional library.
With this change, the GlobalISel library gets always built. In
particular, this is not possible to opt GlobalISel out of the build
using the LLVM_BUILD_GLOBAL_ISEL variable any more.

llvm-svn: 309990
2017-08-03 21:52:25 +00:00
Davide Italiano 5974c31d91 [NewGVN] Fix the case where we have a phi-of-ops which goes away.
Patch by Daniel Berlin, fixes PR33196 (and probably something else).

llvm-svn: 309988
2017-08-03 21:17:49 +00:00
Reid Kleckner 175af4bcc7 [PDB] Fix section contributions
Summary:
PDB section contributions are supposed to use output section indices and
offsets, not input section indices and offsets.

This allows the debugger to look up the index of the module that it
should look up in the modules stream for symbol information. With this
change, windbg can now find line tables, but it still cannot print local
variables.

Fixes PR34048

Reviewers: zturner

Subscribers: hiraditya, ruiu, llvm-commits

Differential Revision: https://reviews.llvm.org/D36285

llvm-svn: 309987
2017-08-03 21:15:09 +00:00
Hiroshi Yamauchi 144ee2b4d7 [LVI] Constant-propagate a zero extension of the switch condition value through case edges
Summary:
(This is a second attempt as https://reviews.llvm.org/D34822 was reverted.)

LazyValueInfo currently computes the constant value of the switch condition through case edges, which allows the constant value to be propagated through the case edges.

But we have seen a case where a zero-extended value of the switch condition is used past case edges for which the constant propagation doesn't occur.

This patch adds a small piece of logic to handle such a case in getEdgeValueLocal().

This is motivated by the Python 2.7 eval loop in PyEval_EvalFrameEx() where the lack of the constant propagation causes longer live ranges and more spill code than necessary.

With this patch, we see that the code size of PyEval_EvalFrameEx() decreases by ~5.4% and a performance test improves by ~4.6%.

Reviewers: sanjoy

Reviewed By: sanjoy

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36247

llvm-svn: 309986
2017-08-03 21:11:30 +00:00
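A sketch of the situation described above: on the edge into %case5 the switch condition is known to be 5, and with this change a user of LVI (e.g. Correlated Value Propagation) can fold the zero-extended use as well. Names are illustrative.

```
define i64 @use_zext(i32 %x) {
entry:
  switch i32 %x, label %default [
    i32 5, label %case5
  ]

case5:
  ; %x is known to be 5 on this edge, so the zext can be folded to i64 5.
  %e = zext i32 %x to i64
  ret i64 %e

default:
  ret i64 0
}
```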
George Karpenkov f020c98912 [libFuzzer] Un-reverting change in tests after fixing the failure on Linux.
Differential Revision: https://reviews.llvm.org/D36242

llvm-svn: 309982
2017-08-03 20:28:16 +00:00
Nico Weber 0a920e76b8 Fix llvm-for-windows-on-linux build after LLVM r272701.
The file is called "intrin.h". When building targeting Windows on a Linux
system, with the SDK mounted in a case-insensitive file system, "Intrin.h" will
miss clang's intrin.h header (because that's not in a case-insensitive file
system) but then find intrin.h in the Microsoft SDK. clang can't handle the
SDK's intrin.h.

https://reviews.llvm.org/D36281

llvm-svn: 309980
2017-08-03 20:10:47 +00:00
Teresa Johnson 9a18a6f08b Disable loop peeling during full unrolling pass.
Summary:
Peeling should not occur during the full unrolling invocation early
in the pipeline, but rather later with partial and runtime loop
unrolling. The later loop unrolling invocation will also eventually
utilize profile summary and branch frequency information, which
we would like to use to control peeling. And for ThinLTO we want
to delay peeling until the backend (post thin link) phase, just as
we do for most types of unrolling.

Ensure peeling doesn't occur during the full unrolling invocation
by adding a parameter to the shared implementation function, similar
to the way partial and runtime loop unrolling are disabled.

Performance results for ThinLTO suggest this has a neutral to positive
effect on some internal benchmarks.

Reviewers: chandlerc, davidxl

Subscribers: mzolotukhin, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D36258

llvm-svn: 309966
2017-08-03 17:52:38 +00:00
Dehao Chen f58df39529 Do not want to use BFI to get profile count for sample pgo
Summary: For SamplePGO, we already record the callsite count in the call instruction itself. So we do not want to use BFI to get profile count as it is less accurate.

Reviewers: tejohnson, davidxl, eraman

Reviewed By: eraman

Subscribers: sanjoy, llvm-commits, mehdi_amini

Differential Revision: https://reviews.llvm.org/D36025

llvm-svn: 309964
2017-08-03 17:11:41 +00:00
Tim Northover 869fa74d4b Revert "[AArch64] Simplify AES*Tied pseudo expansion (NFC)."
This reverts commit r309821.

My suggestion was wrong because it left the MachineOperands tied which
confused the verifier. Since there's no easy way to untie operands, the
original BuildMI solution is probably best.

llvm-svn: 309962
2017-08-03 16:59:36 +00:00
Changpeng Fang ef4dbb46da AMDGPU/SI: Don't fix a PHI under uniform branch in SIFixSGPRCopies only when sources and destination are all sgprs
Summary:
  If a PHI has at least one VGPR operand, we have to fix the PHI
in SIFixSGPRCopies.

Reviewer:
  Matt

Differential Revision:
  http://reviews.llvm.org/D34727

llvm-svn: 309959
2017-08-03 16:37:02 +00:00
Nirav Dave 3fc1c2365c [DAG] Allow merging of stores of vector loads
Remove the restriction disallowing merging of stores of vector loads into
a larger store of a larger vector load.

Reviewers: RKSimon, efriedma, spatel

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D36158

llvm-svn: 309951
2017-08-03 15:51:20 +00:00
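A sketch of the kind of pattern this now allows: two adjacent vector loads stored to adjacent locations, which the DAG combiner may merge into a single wider load and store on targets where the wider type is legal. Names and types are illustrative.

```
define void @copy_pair(<2 x float>* %src, <2 x float>* %dst) {
  %s1 = getelementptr <2 x float>, <2 x float>* %src, i64 1
  %d1 = getelementptr <2 x float>, <2 x float>* %dst, i64 1
  ; Two consecutive <2 x float> loads...
  %a = load <2 x float>, <2 x float>* %src
  %b = load <2 x float>, <2 x float>* %s1
  ; ...stored consecutively; may now become one <4 x float> load/store.
  store <2 x float> %a, <2 x float>* %dst
  store <2 x float> %b, <2 x float>* %d1
  ret void
}
```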
Nico Weber bfde70b097 Revert r309923, it caused PR34045.
llvm-svn: 309950
2017-08-03 15:41:26 +00:00
Sanjay Patel 7cf745cc94 [NewGVN] fix typos; NFC
llvm-svn: 309946
2017-08-03 15:18:27 +00:00
Robert Lougher 10f740df4d [LiveDebugVariables] Use lexical scope to trim debug value live intervals
The debug value live intervals computed by Live Debug Variables may extend
beyond the range of the debug location's lexical scope. In this case,
splitting of an interval can result in an interval outside of the scope being
created, causing extra unnecessary DBG_VALUEs to be emitted. To prevent this,
trim the intervals to the lexical scope.

This resolves PR33730.

Reviewers: aprantl

Differential Revision: https://reviews.llvm.org/D35953

llvm-svn: 309933
2017-08-03 11:54:02 +00:00
Simon Dardis 51296593a8 [SelectionDAG] Resolve PR33978.
rL306209 taught SelectionDAG how to add the dereferenceable flag when
expanding memcpy and memmove. The fix however contained a nit where
the offset + size was constructed as an APInt of PointerSize rather
than PointerSizeInBits.

This led to isDereferenceableAndAlignedPointer() getting truncated values or
values which would be sign-extended within that function, leading to
incorrect results.

Thanks to Alex Crichton for reporting the issue!

This resolves PR33978.

Reviewers: inouehrs

Differential Revision: https://reviews.llvm.org/D36236

llvm-svn: 309930
2017-08-03 09:38:46 +00:00
Ewan Crawford e18490c8be [Cloning] Move distinct GlobalVariable debug info metadata in CloneModule
Duplicating the distinct Subprogram and CU metadata nodes seems like the incorrect thing to do in CloneModule for GlobalVariable debug info, as it results in the scope of the GlobalVariable DI no longer being consistent with the rest of the module and the new CU being absent from llvm.dbg.cu.

Fixed by adding RF_MoveDistinctMDs to MapMetadata flags for GlobalVariables.

Current unit test IR after clone:
```
@gv = global i32 1, comdat($comdat), !dbg !0, !type !5

define private void @f() comdat($comdat) personality void ()* @persfn !dbg !14 {

!llvm.dbg.cu = !{!10}

!0 = !DIGlobalVariableExpression(var: !1)
!1 = distinct !DIGlobalVariable(name: "gv", linkageName: "gv", scope: !2, file: !3, line: 1, type: !9, isLocal: false, isDefinition: true)
!2 = distinct !DISubprogram(name: "f", linkageName: "f", scope: null, file: !3, line: 4, type: !4, isLocal: true, isDefinition: true, scopeLine: 3, isOptimized: false, unit: !6, variables: !5)
!3 = !DIFile(filename: "filename.c", directory: "/file/dir/")
!4 = !DISubroutineType(types: !5)
!5 = !{}
!6 = distinct !DICompileUnit(language: DW_LANG_C99, file: !7, producer: "CloneModule", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: !5, globals: !8)
!7 = !DIFile(filename: "filename.c", directory: "/file/dir")
!8 = !{!0}
!9 = !DIBasicType(tag: DW_TAG_unspecified_type, name: "decltype(nullptr)")
!10 = distinct !DICompileUnit(language: DW_LANG_C99, file: !7, producer: "CloneModule", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: !5, globals: !11)
!11 = !{!12}
!12 = !DIGlobalVariableExpression(var: !13)
!13 = distinct !DIGlobalVariable(name: "gv", linkageName: "gv", scope: !14, file: !3, line: 1, type: !9, isLocal: false, isDefinition: true)
!14 = distinct !DISubprogram(name: "f", linkageName: "f", scope: null, file: !3, line: 4, type: !4, isLocal: true, isDefinition: true, scopeLine: 3, isOptimized: false, unit: !10, variables: !5)
```

Patched IR after clone:
```
@gv = global i32 1, comdat($comdat), !dbg !0, !type !5

define private void @f() comdat($comdat) personality void ()* @persfn !dbg !2 {

!llvm.dbg.cu = !{!6}

!0 = !DIGlobalVariableExpression(var: !1)
!1 = distinct !DIGlobalVariable(name: "gv", linkageName: "gv", scope: !2, file: !3, line: 1, type: !9, isLocal: false, isDefinition: true)
!2 = distinct !DISubprogram(name: "f", linkageName: "f", scope: null, file: !3, line: 4, type: !4, isLocal: true, isDefinition: true, scopeLine: 3, isOptimized: false, unit: !6, variables: !5)
!3 = !DIFile(filename: "filename.c", directory: "/file/dir/")
!4 = !DISubroutineType(types: !5)
!5 = !{}
!6 = distinct !DICompileUnit(language: DW_LANG_C99, file: !7, producer: "CloneModule", isOptimized: false, runtimeVersion: 0, emissionKind: FullDebug, enums: !5, globals: !8)
!7 = !DIFile(filename: "filename.c", directory: "/file/dir")
!8 = !{!0}
!9 = !DIBasicType(tag: DW_TAG_unspecified_type, name: "decltype(nullptr)")
```

Reviewers: aprantl, probinson, dblaikie, echristo, loladiro
Reviewed By: aprantl
Subscribers: llvm-commits
Differential Revision: https://reviews.llvm.org/D36082

llvm-svn: 309928
2017-08-03 09:23:03 +00:00
Diana Picus 930e6ec8f3 [ARM] GlobalISel: Select simple G_GLOBAL_VALUE instructions
Add support in the instruction selector for G_GLOBAL_VALUE for ELF and
MachO for the static relocation model. We don't handle Windows yet
because that's Thumb-only, and we don't handle Thumb in general at the
moment.

Support for PIC, ROPI, RWPI and TLS will be added in subsequent commits.

Differential Revision: https://reviews.llvm.org/D35883

llvm-svn: 309927
2017-08-03 09:14:59 +00:00
Dinar Temirbulatov a0beedef1c [X86] SET0 to use XMM registers where possible PR26018 PR32862
Differential Revision: https://reviews.llvm.org/D35965

llvm-svn: 309926
2017-08-03 08:50:18 +00:00