Commit Graph

26589 Commits

Author SHA1 Message Date
Sanjay Patel e4ef62291b [SDAG] improve expansion of ctpop+setcc
This should not cause any visible change in output, but it's
more efficient because we were producing non-canonical 'sub x, 1'
and 'setcc ugt x, 0'. As mentioned in the TODO, we should also
be handling the inverse predicate.
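
For reference, the identity behind the expansion can be brute-forced in a
standalone C++ program (illustrative only, using the GCC/Clang popcount
builtin; this is not the SDAG code):

    #include <cstdio>

    int main() {
      // ctpop(x) == 1  <=>  x != 0 && (x & (x - 1)) == 0; the latter maps to
      // the canonical 'add x, -1' / 'setcc ne x, 0' forms.
      for (unsigned x = 0; x < 256; ++x) {
        bool OneBit = __builtin_popcount(x) == 1;
        bool Expanded = x != 0 && (x & (x - 1)) == 0;
        if (OneBit != Expanded)
          return std::printf("mismatch at %u\n", x), 1;
      }
      std::puts("identity holds for all i8 values");
      return 0;
    }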

llvm-svn: 364302
2019-06-25 12:49:35 +00:00
Simon Pilgrim 69fc111184 [TargetLowering] SimplifyDemandedBits SIGN_EXTEND_VECTOR_INREG -> ANY/ZERO_EXTEND_VECTOR_INREG
Simplify SIGN_EXTEND_VECTOR_INREG if the extended bits are not required/known zero.

Matches what we already do for SIGN_EXTEND.

Reapplies rL363802 but now with legality checks added at rL364290

llvm-svn: 364299
2019-06-25 12:19:12 +00:00
Simon Pilgrim b23c942ce4 [VectorLegalizer] ExpandANY_EXTEND_VECTOR_INREG/ExpandZERO_EXTEND_VECTOR_INREG - widen source vector
The *_EXTEND_VECTOR_INREG opcodes were relaxed back around rL346784 to support source vector widths that are smaller than the output - it looks like the legalizers were never updated to account for this.

This patch inserts the smaller source vector into an undef vector of the same width as the result before performing the shuffle+bitcast, to correctly handle this.

Part of the yak shaving to solve the crashes from rL364264 and rL364272

llvm-svn: 364295
2019-06-25 11:31:37 +00:00
Simon Pilgrim 49b3778e32 [TargetLowering] SimplifyDemandedBits - legal checks for SIGN/ZERO_EXTEND -> ZERO/ANY_EXTEND
As part of the fix for rL364264 + rL364272 - limit the *_EXTEND conversion to !TLO.LegalOperations || isOperationLegal cases.

We'll improve X86 legality in future commits.

llvm-svn: 364290
2019-06-25 10:51:15 +00:00
Roman Lebedev cdd43eac4f [Codegen] TargetLowering::SimplifySetCC(): omit urem when possible
Summary:
This addresses the regression that is being exposed by D50222 in `test/CodeGen/X86/jump_sign.ll`
The missing fold, at least partially, looks trivial:
https://rise4fun.com/Alive/Zsln
i.e. if we are comparing a `urem`-by-non-power-of-two against zero,
and the `urem` operand may have at most a single bit set (or no bits set at all),
the `urem` is not needed.
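
The claim can be brute-forced in a standalone C++ program (an illustrative
checker, not part of the patch):

    #include <cstdio>

    static bool isPow2(unsigned V) { return V && !(V & (V - 1)); }

    int main() {
      // If x has at most one bit set, x is 0 or a power of two, and the only
      // divisors of a power of two are powers of two. So for any
      // non-power-of-two divisor d > 1, (x % d == 0) holds iff x == 0.
      for (unsigned x = 0; x < 256; ++x) {
        if (x != 0 && !isPow2(x))
          continue; // keep only values with at most a single bit set
        for (unsigned d = 2; d < 256; ++d) {
          if (isPow2(d))
            continue; // the fold applies to non-power-of-two divisors
          if ((x % d == 0) != (x == 0))
            return std::printf("mismatch x=%u d=%u\n", x, d), 1;
        }
      }
      std::puts("the urem can indeed be omitted");
      return 0;
    }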

Reviewers: RKSimon, craig.topper, xbolva00, spatel

Reviewed By: xbolva00, spatel

Subscribers: xbolva00, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63390

llvm-svn: 364286
2019-06-25 10:01:42 +00:00
Clement Courbet 3bc5ad551a [ExpandMemCmp] Move all options to TargetTransformInfo.
Split off from D60318.

llvm-svn: 364281
2019-06-25 08:04:13 +00:00
Craig Topper 079924b0b7 Revert r363802, r363850, and r363856 "[TargetLowering] SimplifyDemandedBits..."
This reverts the following patches.
"[TargetLowering] SimplifyDemandedBits SIGN_EXTEND_VECTOR_INREG -> ANY/ZERO_EXTEND_VECTOR_INREG"
"[TargetLowering] SimplifyDemandedBits ZERO_EXTEND_VECTOR_INREG -> ANY_EXTEND_VECTOR_INREG"
"[TargetLowering] SimplifyDemandedBits - add ANY_EXTEND_VECTOR_INREG support"

We can end up with an any_extend_vector_inreg with a 256 bit result type
and a 128 bit source type. This is allowed by the ISD opcode, but the
generic operation legalizer is only able to expand cases where the
total vector width is the same.

The X86 backend creates these mismatched cases for zext_vec_inreg/sext_vec_inreg.
The SimplifyDemandedBits changes are allowing those nodes to become
aext_vec_inreg. For the zext/sext cases, the X86 backend has Custom
handling and never lets them get to the generic legalizer. We need to do the same
for aext_vec_inreg.

llvm-svn: 364264
2019-06-25 01:32:42 +00:00
Roland Froese ea08248b2b [CodeGen] Add missing vector type legalization for ctlz_zero_undef
Widen vector result type for ctlz_zero_undef and cttz_zero_undef the same as
ctlz and cttz.

Differential Revision: https://reviews.llvm.org/D63463

llvm-svn: 364221
2019-06-24 19:27:07 +00:00
Matt Arsenault faeaedf8e9 GlobalISel: Remove unsigned variant of SrcOp
Force using Register.

One downside is the generated register enums require explicit
conversion.

llvm-svn: 364194
2019-06-24 16:16:12 +00:00
Matt Arsenault e3a676e9ad CodeGen: Introduce a class for registers
Avoids using a plain unsigned for registers throughout codegen.
Doesn't attempt to change every register use, just something a little
more than the set needed to build after changing the return type of
MachineOperand::getReg().

llvm-svn: 364191
2019-06-24 15:50:29 +00:00
Simon Pilgrim 69144a925e [DAGCombine] visitMUL - allow shift by zero in MulByConstant.
This can occur under certain circumstances when undefs are created later on in the constant multipliers (e.g. in this case due to SimplifyDemandedVectorElts). It's better to let the shift by zero occur and perform any cleanup afterward.

Fixes OSS Fuzz #15429

llvm-svn: 364179
2019-06-24 12:47:17 +00:00
Fangrui Song f955d5f623 SlotIndexes: delete unused functions
llvm-svn: 364154
2019-06-23 16:05:29 +00:00
Fangrui Song 6620e3b2f6 SlotIndexes: simplify IdxMBBPair operators
llvm-svn: 364152
2019-06-23 13:16:03 +00:00
Craig Topper 6ddc7912b0 [SelectionDAG] Remove the code that attempts to calculate the alignment for the second half of a split masked load/store.
The code divides the alignment by 2 if the original alignment is
equal to the original VT size. But this wouldn't be correct
if the alignment was larger than the VT size.

The memory operand object already takes care of calling MinAlign
on the base alignment and the memory pointer offset. So we don't
need any special code at all.
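
A minimal standalone sketch of the computation (MinAlign mirrored from
LLVM's Support/MathExtras.h; the example values are hypothetical):

    #include <cstdint>
    #include <cstdio>

    // Largest power of two dividing both A and B: the lowest set bit of A|B.
    constexpr uint64_t MinAlign(uint64_t A, uint64_t B) {
      return (A | B) & (1 + ~(A | B));
    }

    int main() {
      // Second half of a split access at offset 32 from a base aligned to 64
      // or 128: MinAlign already yields 32, while dividing the base alignment
      // by 2 would be wrong whenever the alignment exceeds the VT size.
      std::printf("%llu\n", (unsigned long long)MinAlign(64, 32));  // 32
      std::printf("%llu\n", (unsigned long long)MinAlign(128, 32)); // 32
      std::printf("%llu\n", (unsigned long long)MinAlign(16, 32));  // 16
      return 0;
    }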

llvm-svn: 364151
2019-06-23 07:00:46 +00:00
Fangrui Song 43e14390b0 Make GlobalISel depend on SelectionDAG after D63169
GlobalISel/IRTranslator.cpp now references SelectionDAG/FunctionLoweringInfo.cpp.
This fixes a link error in -DBUILD_SHARED_LIBS=on builds:

    ld.lld: error: undefined symbol: llvm::FunctionLoweringInfo::clear()
    >>> referenced by IRTranslator.cpp:2198 (../lib/CodeGen/GlobalISel/IRTranslator.cpp:2198)
    >>>               lib/CodeGen/GlobalISel/CMakeFiles/LLVMGlobalISel.dir/IRTranslator.cpp.o:(llvm::IRTranslator::finalizeFunction())

llvm-svn: 364124
2019-06-22 01:30:17 +00:00
Amara Emerson fe4625fb24 [GlobalISel][IRTranslator] Change switch table translation to generate jump tables and range checks.
This change makes use of the newly refactored SwitchLoweringUtils code from
SelectionDAG in order to generate jump tables and range checks where appropriate.

Much of this code is ported from SDAG with some modifications. We generate
G_JUMP_TABLE and G_BRJT instructions when JT opportunities are found. This means
that targets which previously relied on the naive one MBB per case stmt
translation will now start falling back until they add support for the new opcodes.

For range checks, we don't generate any previously unused operations. This
just recognizes contiguous ranges of case values and generates a single block per
range. Single case value blocks are just a special case of ranges so we get that
support almost for free.

There are still some optimizations missing that I haven't ported over, and
bit-tests are also unimplemented. This patch series is already complex enough.

Actual arm64 support for selection of jump tables is coming in a later patch.
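
For illustration, a dense switch like the one below (a hypothetical example,
not taken from the patch's tests) is the shape that now becomes a
G_JUMP_TABLE + G_BRJT pair instead of one MBB per case statement:

    #include <cstdio>

    int classify(int V) {
      switch (V) { // contiguous case values -> a single jump table
      case 0: return 10;
      case 1: return 11;
      case 2: return 12;
      case 3: return 13;
      case 4: return 14;
      case 5: return 15;
      default: return -1;
      }
    }

    int main() { std::printf("%d\n", classify(3)); return 0; }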

Differential Revision: https://reviews.llvm.org/D63169

llvm-svn: 364085
2019-06-21 18:10:38 +00:00
Simon Pilgrim 0da13ed1f6 [DAGCombine] narrowExtractedVectorBinOp - pull out repeated getOpcode(). NFCI.
llvm-svn: 364076
2019-06-21 16:44:51 +00:00
Simon Pilgrim ca9933c22d [DAGCombine] narrowInsertExtractVectorBinOp - reuse "extract from insert" detection code.
Move the "extract from insert detection code" into a lambda helper function.

llvm-svn: 364059
2019-06-21 14:46:21 +00:00
Fangrui Song dc8de6037c Simplify std::lower_bound with llvm::{bsearch,lower_bound}. NFC
llvm-svn: 364006
2019-06-21 05:40:31 +00:00
Amara Emerson bc0d08e0ee [GlobalISel][Localizer] Allow localization of G_INTTOPTR and chains of instructions.
G_INTTOPTR can prevent the localizer from moving G_CONSTANTs, but since it's
essentially a side effect free cast instruction we can remat both instructions.
This patch changes the localizer to enable localization of the chains by
iterating over the entry block instructions in reverse order. That way, uses will
be localized first, and then the defs are free to be localized as well.

This also changes the previous SmallPtrSet of localized instructions to use a
SetVector instead. We're dealing with pointers and need deterministic iteration
order.

Overall, this change improves ARM64 -O0 CTMark code size by around 0.7% geomean.

Differential Revision: https://reviews.llvm.org/D63630

llvm-svn: 364001
2019-06-21 00:36:19 +00:00
Simon Pilgrim 801c0f12b0 [DAGCombiner] Use getAPIntValue() instead of getZExtValue() where possible.
Better handling of out-of-i64-range values due to large integer types or from fuzz tests.

llvm-svn: 363955
2019-06-20 17:36:23 +00:00
Jordan Rupprecht 02508decf4 [DAGCombiner][NFC] Remove unused var
llvm-svn: 363954
2019-06-20 17:30:01 +00:00
Amy Huang 7fac5c8d94 Store a pointer to the return value in a static alloca and let the debugger use that
as the variable address for NRVO variables.

Subscribers: hiraditya, cfe-commits, llvm-commits

Tags: #clang, #llvm

Differential Revision: https://reviews.llvm.org/D63361

llvm-svn: 363952
2019-06-20 17:15:21 +00:00
Evandro Menezes aa10f05044 [CodeGen] Fix formatting and comments (NFC)
llvm-svn: 363947
2019-06-20 16:34:00 +00:00
Simon Pilgrim 1d8093249f [DAGCombiner] Support (shl (zext (srl x, C)), C) -> (zext (shl (srl x, C), C)) non-uniform folds.
Use matchBinaryPredicate instead of isConstOrConstSplat to let us handle non-uniform shift cases. 

llvm-svn: 363929
2019-06-20 14:42:27 +00:00
Simon Pilgrim 98a0ac5c0f [DAGCombine] Add TODOs for some combines that should support non-uniform vectors
We tend to only test for scalar/scalar consts when really we could support non-uniform vectors using ISD::matchUnaryPredicate/matchBinaryPredicate etc.

llvm-svn: 363924
2019-06-20 12:48:49 +00:00
Simon Pilgrim a487628270 [DAGCombine] Reduce scope of ShAmtVal variable. NFCI.
Fixes cppcheck warning.

Use the more capable getAPIntVal() instead of getZExtValue() as well since I'm here.

llvm-svn: 363921
2019-06-20 10:56:37 +00:00
Petar Avramovic 153bd24eda [MIPS GlobalISel] Select integer to floating point conversions
Select G_SITOFP and G_UITOFP for MIPS32.

Differential Revision: https://reviews.llvm.org/D63542

llvm-svn: 363912
2019-06-20 09:05:02 +00:00
Petar Avramovic 4b4dae1c76 [MIPS GlobalISel] Select floating point to integer conversions
Select G_FPTOSI and G_FPTOUI for MIPS32.

Differential Revision: https://reviews.llvm.org/D63541

llvm-svn: 363911
2019-06-20 08:52:53 +00:00
Simon Pilgrim 046d49a8dc [DAGCombine] Use ConstantSDNode::getAPIntValue() instead of getZExtValue().
Use getAPIntValue() in a few more places. Most of the time getZExtValue() is fine, but occasionally there's fuzzed code or someone decides to create i65536 or something.....
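
A sketch of the difference, assuming LLVM's APInt API (not the actual
DAGCombiner code):

    #include "llvm/ADT/APInt.h"

    // getZExtValue() asserts once the value needs more than 64 bits, which a
    // fuzzed i65536 constant will trigger; APInt-level predicates stay safe
    // at any width.
    bool isOne(const llvm::APInt &C) {
      return C.isOneValue();          // width-agnostic
      // vs. C.getZExtValue() == 1;   // asserts when C has > 64 active bits
    }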

llvm-svn: 363887
2019-06-19 22:14:24 +00:00
Simon Pilgrim f05369768c [TargetLowering] SimplifyDemandedBits - add ANY_EXTEND_VECTOR_INREG support
Move 'lowest' demanded elt -> bitcast fold out of ZERO_EXTEND_VECTOR_INREG into ANY_EXTEND_VECTOR_INREG case.

llvm-svn: 363856
2019-06-19 18:34:58 +00:00
Simon Pilgrim 6016fb726c [TargetLowering] SimplifyDemandedBits ZERO_EXTEND_VECTOR_INREG -> ANY_EXTEND_VECTOR_INREG
Simplify ZERO_EXTEND_VECTOR_INREG if the extended bits are not required.

Matches what we already do for ZERO_EXTEND.

llvm-svn: 363850
2019-06-19 18:00:24 +00:00
Simon Pilgrim c3994f77cb [TargetLowering] SimplifyDemandedBits SIGN_EXTEND_VECTOR_INREG -> ANY/ZERO_EXTEND_VECTOR_INREG
Simplify SIGN_EXTEND_VECTOR_INREG if the extended bits are not required/known zero.

Matches what we already do for SIGN_EXTEND.

llvm-svn: 363802
2019-06-19 13:58:02 +00:00
Simon Pilgrim 9eed5d2f78 [DAGCombiner] Support (shl (ext (shl x, c1)), c2) -> (shl (ext x), (add c1, c2)) non-uniform folds.
Use matchBinaryPredicate instead of isConstOrConstSplat to let us handle non-uniform shift cases. 

llvm-svn: 363793
2019-06-19 12:41:37 +00:00
Simon Pilgrim 8c49366c9b [DAGCombiner] Support (shl (ext (shl x, c1)), c2) -> 0 non-uniform folds.
Use matchBinaryPredicate instead of isConstOrConstSplat to let us handle non-uniform shift cases. 

This requires us to tweak matchBinaryPredicate to allow it to (optionally) handle constants with different type widths.
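
Both this fold and the (add c1, c2) variant above can be sanity-checked with
a standalone brute force over i8 zero-extended to i16 (illustrative C++, not
the DAGCombiner code):

    #include <cstdint>
    #include <cstdio>

    int main() {
      // (shl (zext (shl x, c1)), c2) folds to (shl (zext x), c1 + c2) when
      // c2 >= the number of bits added by the extension (8 here), and to 0
      // when c1 + c2 >= the outer bit width (16 here).
      for (unsigned x = 0; x < 256; ++x)
        for (unsigned c1 = 0; c1 < 8; ++c1)
          for (unsigned c2 = 8; c2 < 16; ++c2) {
            uint16_t LHS = (uint16_t)((uint16_t)(uint8_t)(x << c1) << c2);
            uint16_t RHS =
                c1 + c2 >= 16 ? 0 : (uint16_t)((uint16_t)x << (c1 + c2));
            if (LHS != RHS)
              return std::printf("mismatch x=%u c1=%u c2=%u\n", x, c1, c2), 1;
          }
      std::puts("folds verified for i8 -> i16");
      return 0;
    }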

llvm-svn: 363792
2019-06-19 12:25:29 +00:00
Simon Pilgrim bb6b856183 [DAGCombiner] visitSHL - pull out repeated shift amount VT. NFCI.
llvm-svn: 363789
2019-06-19 11:31:26 +00:00
Simon Pilgrim d954a53633 [DAGCombine] Fix (shl (ext (shl x, c1)), c2) -> (shl (ext x), (add c1, c2)) comment. NFCI.
We pre-extend, not post.

llvm-svn: 363787
2019-06-19 11:17:48 +00:00
Chen Zheng c5b918de58 [NFC] Move some hardware loop checking code to a common place for other uses.
Differential Revision: https://reviews.llvm.org/D63478

llvm-svn: 363758
2019-06-19 01:26:31 +00:00
Matt Arsenault 9cac4e6d14 Rename ExpandISelPseudo->FinalizeISel, delay register reservation
This allows targets to make more decisions about reserved registers
after isel. For example, it should now be certain whether there are calls or
stack objects in the frame, which could have been introduced by
legalization.

Patch by Matthias Braun

llvm-svn: 363757
2019-06-19 00:25:39 +00:00
Amara Emerson d11ea2c8c5 [GlobalISel][Localizer] Remove redundant set lookup.
After changing the algorithm to only process the entry block we never revisit
a processed instruction.

llvm-svn: 363745
2019-06-18 22:08:40 +00:00
Jinsong Ji ba43840bfe [MachinePipeliner][NFC] Do resource tracking log only when requested.
In most cases we don't need resource tracking debug output,
so leave it off by default.

llvm-svn: 363733
2019-06-18 20:24:49 +00:00
Simon Pilgrim 5bef886cd8 [TargetLowering] SimplifyDemandedBits - Cleanup ANY_EXTEND handling
Match SIGN_EXTEND + ZERO_EXTEND handling - will be adding ANY_EXTEND_VECTOR_INREG support in a future patch.

llvm-svn: 363716
2019-06-18 18:22:30 +00:00
Simon Pilgrim 032b54f8e8 [TargetLowering] SimplifyDemandedBits - Merge ZERO_EXTEND+ZERO_EXTEND_VECTOR_INREG handling
Other than adding consistent demanded elts handling which was a trivial addition, the other differences in functionality will be added in later patches.

llvm-svn: 363713
2019-06-18 18:08:30 +00:00
Simon Pilgrim b6e7108dcd [TargetLowering] SimplifyDemandedBits - Merge SIGN_EXTEND+SIGN_EXTEND_VECTOR_INREG handling
Other than adding consistent demanded elts handling which was a trivial addition, the other differences in functionality will be added in later patches.

llvm-svn: 363710
2019-06-18 17:57:53 +00:00
Simon Pilgrim 9aa25be149 [TargetLowering] SimplifyDemandedVectorElts - support MUL and ANY_EXTEND_VECTOR_INREG
Also fold ANY_EXTEND_VECTOR_INREG -> BITCAST if we only need the bottom element.

Fixes temporary regression introduced in rL363693.

llvm-svn: 363694
2019-06-18 15:49:35 +00:00
Simon Pilgrim 83bacd8d72 [SelectionDAG] Legalize vaargs that require vector splitting
This adds vector splitting for vaarg instructions during type legalization

Committed on behalf of @luke (Luke Lau)

Differential Revision: https://reviews.llvm.org/D60762

llvm-svn: 363671
2019-06-18 12:24:02 +00:00
Tom Stellard 1f7f64665c GlobalISel: Remove redundant pass initialization
Summary:
All the GlobalISel passes are initialized when the target calls
initializeGlobalISel(), so we don't need to call the initializers
from the pass constructors.

Reviewers: qcolombet, t.p.northover, paquette, dsanders, aemerson, aditya_nandakumar

Reviewed By: aemerson

Subscribers: rovka, kristof.beyls, hiraditya, volkan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63235

llvm-svn: 363642
2019-06-18 02:05:06 +00:00
Matt Arsenault 5a321b899e GlobalISel: Use the original flags when lowering fneg to fsub
This was ignoring the flag on fneg, and using the source instruction's
flags. Also fixes tests missing from r358702.

Note the expansion itself isn't correct without nnan, but that should
be fixed separately.

llvm-svn: 363637
2019-06-17 23:48:43 +00:00
Peter Collingbourne fb9ce100d1 hwasan: Add a tag_offset DWARF attribute to instrumented stack variables.
The goal is to improve hwasan's error reporting for stack use-after-return by
recording enough information to allow the specific variable that was accessed
to be identified based on the pointer's tag. Currently we record the PC and
lower bits of SP for each stack frame we create (which will eventually be
enough to derive the base tag used by the stack frame) but that's not enough
to determine the specific tag for each variable, which is the stack frame's
base tag XOR a value (the "tag offset") that is unique for each variable in
a function.

In IR, the tag offset is most naturally represented as part of a location
expression on the llvm.dbg.declare instruction. However, the presence of the
tag offset in the variable's actual location expression is likely to confuse
debuggers which won't know about tag offsets, and moreover the tag offset
is not required for a debugger to determine the location of the variable on
the stack, so at the DWARF level it is represented as an attribute so that
it will be ignored by debuggers that don't know about it.

Differential Revision: https://reviews.llvm.org/D63119

llvm-svn: 363635
2019-06-17 23:39:41 +00:00
Amara Emerson 146882242f [GlobalISel][Localizer] Rewrite localizer to run in 2 phases, inter & intra block.
Inter-block localization is the same as what currently happens, except now it
only runs on the entry block because that's where the problematic constants with
long live ranges come from.

The second phase is a new intra-block localization phase which attempts to
re-sink the already localized instructions further right before one of the
multiple uses.

One additional change is to also localize G_GLOBAL_VALUE as they're constants
too. However, on some targets like arm64 it takes multiple instructions to
materialize the value, so some additional heuristics with a TTI hook have been
introduced to attempt to prevent code size regressions when localizing these.

Overall, these changes improve CTMark code size on arm64 by 1.2%.

Full code size results:

Program                                         baseline       new       diff
------------------------------------------------------------------------------
 test-suite...-typeset/consumer-typeset.test    1249984      1217216     -2.6%
 test-suite...:: CTMark/ClamAV/clamscan.test    1264928      1232152     -2.6%
 test-suite :: CTMark/SPASS/SPASS.test          1394092      1361316     -2.4%
 test-suite...Mark/mafft/pairlocalalign.test    731320       714928      -2.2%
 test-suite :: CTMark/lencod/lencod.test        1340592      1324200     -1.2%
 test-suite :: CTMark/kimwitu++/kc.test         3853512      3820420     -0.9%
 test-suite :: CTMark/Bullet/bullet.test        3406036      3389652     -0.5%
 test-suite...ark/tramp3d-v4/tramp3d-v4.test    8017000      8016992     -0.0%
 test-suite...TMark/7zip/7zip-benchmark.test    2856588      2856588      0.0%
 test-suite...:: CTMark/sqlite3/sqlite3.test    765704       765704       0.0%
 Geomean difference                                                      -1.2%

Differential Revision: https://reviews.llvm.org/D63303

llvm-svn: 363632
2019-06-17 23:20:29 +00:00
Michael Berg f9bff2a55e Propagate fmf in IRTranslate for fneg
Summary: This case is related to D63405 in that we need to be propagating FMF on negates.

Reviewers: volkan, spatel, arsenm

Reviewed By: arsenm

Subscribers: wdng, javed.absar

Differential Revision: https://reviews.llvm.org/D63458

llvm-svn: 363631
2019-06-17 23:19:40 +00:00
Daniel Sanders 184c8ee920 [globalisel] Fix iterator invalidation in the extload combines
Summary:
Change the way we deal with iterator invalidation in the extload combines as it
was still possible to neglect to visit a use. Even worse, it happened in the
in-tree test cases and the checks weren't good enough to detect it.

We now take a cheap copy of the use list before iterating over it. This
prevents iterator invalidation from occurring and has the nice side effect
of making the existing schedule-for-erase/schedule-for-insert mechanism
moot.

Reviewers: aditya_nandakumar

Reviewed By: aditya_nandakumar

Subscribers: rovka, kristof.beyls, javed.absar, volkan, Petar.Avramovic, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D61813

llvm-svn: 363616
2019-06-17 20:56:31 +00:00
Matt Arsenault 3e140066bc GlobalISel: Ignore callsite attributes when picking intrinsic type
A target intrinsic may be defined as possibly reading memory, but the
call site may have additional knowledge that it doesn't read
memory. The intrinsic lowering will expect the pessimistic assumption
of the intrinsic definition, so the chain should still be used.

I fixed the same bug in SelectionDAG in r287593.

llvm-svn: 363580
2019-06-17 17:01:35 +00:00
Matt Arsenault a7f09f3c9e GlobalISel: Verify intrinsics
I keep using the wrong instruction when manually writing tests. This
really needs to check the number of operands, but I don't see an easy
way to do that right now.

llvm-svn: 363579
2019-06-17 17:01:32 +00:00
Whitney Tsang 15b7f5b72d PHINode: introduce setIncomingValueForBlock() function, and use it.
Summary:
There is PHINode::getBasicBlockIndex() and PHINode::setIncomingValue(),
but no function to replace the incoming value for a specified BasicBlock*
predecessor.
Clearly, there are a lot of places that could use that functionality.
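
A usage sketch of the new helper (compiles against LLVM headers;
updateIncoming is a hypothetical wrapper, not code from the patch):

    #include "llvm/IR/Instructions.h"

    void updateIncoming(llvm::PHINode *PN, llvm::BasicBlock *BB,
                        llvm::Value *NewV) {
      // Previously callers had to pair two calls by hand:
      //   PN->setIncomingValue(PN->getBasicBlockIndex(BB), NewV);
      PN->setIncomingValueForBlock(BB, NewV);
    }
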

Reviewer: craig.topper, lebedev.ri, Meinersbur, kbarton, fhahn
Reviewed By: Meinersbur, fhahn
Subscribers: fhahn, hiraditya, zzheng, jsji, llvm-commits
Tag: LLVM
Differential Revision: https://reviews.llvm.org/D63338

llvm-svn: 363566
2019-06-17 14:38:56 +00:00
Sam Parker 1bd3d00e7e [CodeGen] Check for HardwareLoop Latch ExitBlock
The HardwareLoops pass finds exit blocks with a SCEVable exit count.
If the target specifies to update the loop counter in a register,
through a phi, we need to ensure that the exit block is a latch so
that we can insert the phi with the correct value for the incoming
edge.

Differential Revision: https://reviews.llvm.org/D63336

llvm-svn: 363556
2019-06-17 13:39:28 +00:00
Luis Marques 2e46312ffd [DAGCombiner] [CodeGenPrepare] More comprehensive GEP splitting
Some GEPs were not being split, presumably because that split would just be 
undone by the DAGCombiner. Not performing those splits can prevent important 
optimizations, such as preventing the element indices / member offsets from 
being (partially) folded into load/store instruction immediates. This patch:

- Makes the splits also occur in the cases where the base address and the GEP 
  are in the same BB.
- Ensures that the DAGCombiner doesn't reassociate them back again.

Differential Revision: https://reviews.llvm.org/D60294

llvm-svn: 363544
2019-06-17 10:54:12 +00:00
Simon Pilgrim ef78e55205 [SelectionDAG] Fold insert_subvector(undef, extract_subvector(v, c), c) -> v in getNode
This is already done in DAGCombiner::visitINSERT_SUBVECTOR, but this helps a number of shuffles across different vector widths recognise when they come from the same source.

llvm-svn: 363542
2019-06-17 10:14:52 +00:00
Sander de Smalen 5d6ee76c16 Describe stack-id as an enum
This patch changes MIR stack-id from an integer to an enum,
and adds printing/parsing support for this in MIR files. The default
stack-id '0' is now renamed to 'default'.

This should make MIR tests that have stack objects with different stack-ids
more descriptive. It also clarifies code operating on StackID.

Reviewers: arsenm, thegameg, qcolombet

Reviewed By: arsenm

Differential Revision: https://reviews.llvm.org/D60137

llvm-svn: 363533
2019-06-17 09:13:29 +00:00
Sanjay Patel c8d88ad1a9 [CodeGenPrepare][x86] shift both sides of a vector select when profitable
This is based on the example/discussion in PR37428:
https://bugs.llvm.org/show_bug.cgi?id=37428

Proper vector shift instructions don't appear until AVX2, so we may generate several
extra instructions within a loop trying to compensate for that. It's difficult to
recover from that shift expansion later than this, so use the existing TLI hook and
splat analysis to enable better codegen.

This extends CGP functionality introduced with:
rL201655

Differential Revision: https://reviews.llvm.org/D63233

llvm-svn: 363511
2019-06-16 15:29:03 +00:00
Michael Berg ad6bb86b2d adding more fmf propagation for selects plus updated tests
llvm-svn: 363484
2019-06-15 04:53:51 +00:00
Fangrui Song 968b5f84af Revert "adding more fmf propagation for selects plus tests"
This reverts rL363474. -debug-only=isel was added to some tests that
don't specify `REQUIRES: asserts`. This causes failures on
-DLLVM_ENABLE_ASSERTIONS=off builds.

I chose to revert instead of fixing the tests because I'm not sure
whether we should add `REQUIRES: asserts` to more tests.

llvm-svn: 363482
2019-06-15 03:51:08 +00:00
Matt Arsenault 9487278010 Reapply "GlobalISel: Avoid producing Illegal copies in RegBankSelect"
This reapplies r363410, avoiding null dereference if there is no
AltRegBank.

llvm-svn: 363478
2019-06-15 00:33:26 +00:00
Mitch Phillips 0d44f129bb Revert "GlobalISel: Avoid producing Illegal copies in RegBankSelect"
This patch breaks UBSan build bots. See
https://github.com/google/sanitizers/wiki/SanitizerBotReproduceBuild for
a guide as to how to reproduce the error.

This reverts commit c2864c0de0.
This reverts rL363410.

llvm-svn: 363476
2019-06-14 23:45:34 +00:00
Michael Berg 69394bedc5 adding more fmf propagation for selects plus tests
llvm-svn: 363474
2019-06-14 23:30:52 +00:00
Guozhi Wei d2210af332 [MBP] Move a latch block with conditional exit and multi predecessors to top of loop
The current findBestLoopTop can find and move one kind of block to the top: a latch block with a single successor. Another common case is:

    * a latch block
    * it has two successors: one is the loop header, the other is the exit
    * it has more than one predecessor

If it is placed below one of its predecessors P, only P can fall through to it; all other predecessors need a jump to it, followed by another conditional jump to the loop header. If it is instead moved before the loop header, all its predecessors jump to it and then fall through to the loop header, so every predecessor except P saves one taken branch.
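
As a hypothetical C++ example of this shape, the if/else join below gives
the latch multiple predecessors while it keeps two successors (the loop
header and the exit):

    #include <cstdio>

    int sum(const int *A, int N) {
      int S = 0;
      for (int I = 0; I < N; ++I) { // latch: increment, compare, exit branch;
        if (A[I] > 0)               // reached from both arms of the if/else
          S += A[I];
        else
          S -= A[I];
      }
      return S;
    }

    int main() {
      int A[] = {1, -2, 3};
      std::printf("%d\n", sum(A, 3)); // 6
      return 0;
    }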

Differential Revision: https://reviews.llvm.org/D43256

llvm-svn: 363471
2019-06-14 23:08:59 +00:00
Amara Emerson f79d3bc724 [GlobalISel] Add a G_BRJT opcode.
This is a branch opcode that takes a jump table pointer, jump table index and an
index into the table to do an indirect branch.

We pass both the table pointer and JTI to allow targets like ARM64 to more
easily use the existing jump table compression optimization without having to
walk up the block to find a paired G_JUMP_TABLE.

Differential Revision: https://reviews.llvm.org/D63159

llvm-svn: 363434
2019-06-14 17:55:48 +00:00
Matt Arsenault c2864c0de0 GlobalISel: Avoid producing Illegal copies in RegBankSelect
Avoid producing illegal register bank copies for reg_sequence and
phi. The default implementation assumes it is possible to pick any
operand's bank and use that for the result, introducing a copy for
operands with a different bank. This does not check for illegal
copies. It is not legal to introduce a VGPR->SGPR copy, so any VGPR
operand requires the result to be a VGPR.

The changes in getInstrMappingImpl aren't strictly necessary, since
AMDGPU now just bypasses this for reg_sequence/phi. This could be
replaced with an assert in case other targets run into this. It is
currently responsible for producing the error for unsatisfiable
copies, but this will be better served with a verifier check.

For phis, for now assume any undetermined operands must be
VGPRs. Eventually, this needs to be able to defer mapping these
operations. This also does not yet have a way to check for whether the
block is in a divergent region.

llvm-svn: 363410
2019-06-14 15:22:25 +00:00
Sanjay Patel 7ea378b940 [CodeGenPrepare] propagate debuginfo when copying a shuffle
llvm-svn: 363409
2019-06-14 15:05:35 +00:00
Matt Arsenault 731a81598e RegBankSelect: Remove checks for invalid mappings
Avoid a check for valid and a set of redundant asserts. The place
InstructionMapping is constructed asserts all of the default fields
are passed anyway for an invalid mapping, so don't overcomplicate
this.

llvm-svn: 363391
2019-06-14 13:42:40 +00:00
Matt Arsenault 3062e87a1e Fix not calling TargetCustom PSVs printer
If the enum value was greater than the starting target custom value,
the custom printer wasn't called.

llvm-svn: 363386
2019-06-14 13:26:34 +00:00
David Blaikie 4129e3e0f8 DebugInfo: Include enumerators in pubnames
This is consistent with GCC's behavior (which is the de facto standard
for pubnames). Though I find the presence of enumerators from enum
classes to be a bit confusing, possibly a bug on GCC's end (since they
can't be named unqualified, unlike the other names - and names nested in
classes don't go in pubnames, for instance - presumably because one must
name the class first & that's enough to limit the scope of the search)

llvm-svn: 363349
2019-06-14 01:58:56 +00:00
Amy Huang 49275272e3 Use fully qualified name when printing S_CONSTANT records
Summary:
Before, it was using the fully qualified name only for static data members.
Now it does so for all variable names, to match MSVC.

Reviewers: rnk

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63012

llvm-svn: 363335
2019-06-13 22:53:43 +00:00
Amara Emerson fb0a40f064 [GlobalISel][IRTranslator] Add debug loc with line 0 to constants emitted into the entry block.
Constants, including G_GLOBAL_VALUE, are all emitted into the entry block which
lets us use the vreg def assuming it dominates all other users. However, it can
cause jumpy debug behaviour since the DebugLoc attached to these MIs are from
a user instruction that could be in a different block.

Fixes PR40887.

Differential Revision: https://reviews.llvm.org/D63286

llvm-svn: 363331
2019-06-13 22:15:35 +00:00
Jinsong Ji 1c88445840 [MachinePipeliner] Don't check boundary node in checkValidNodeOrder
This was exposed by PowerPC target enablement.

In ScheduleDAG, if we haven't seen any uses in this scheduling region,
we will create a dependence edge to ExitSU to model the live-out latency.
This is required for vreg defs with no in-region use, and prefetches with
no vreg def.

When we build NodeOrder in Scheduler, we ignore these boundary nodes.
However, when we check Succs in checkValidNodeOrder, we did not skip
them, so we still assume all the nodes have been sorted and are in order in
the Indices array. So when we call lower_bound() for ExitSU, it returns
Indices.end(), causing memory issues in the following node access.
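
A minimal standalone illustration of the failure mode (not the scheduler
code; the values are made up):

    #include <algorithm>
    #include <cstdio>
    #include <vector>

    int main() {
      std::vector<int> Indices = {1, 3, 5}; // sorted order of real nodes
      int ExitSU = 9;                       // boundary node, never inserted
      auto It = std::lower_bound(Indices.begin(), Indices.end(), ExitSU);
      if (It == Indices.end())
        std::puts("boundary node: skip it rather than dereference end()");
      else
        std::printf("found %d\n", *It);
      return 0;
    }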

Differential Revision: https://reviews.llvm.org/D63282

llvm-svn: 363329
2019-06-13 21:51:12 +00:00
David Bolvansky 896ece41e4 [Codegen] Merge tail blocks with no successors after block placement
Summary:
I found the following case, in which tail blocks with no successors have merging opportunities after block placement.

Before block placement:

bb0:
    ...
    bne a0, 0, bb2

bb1:
    mv a0, 1
    ret 

bb2:
    ...

bb3:
    mv a0, 1
    ret

bb4:
    mv a0, -1
    ret

The conditional branch bne in bb0 is opposite to beq.

After block placement:

bb0:
    ...
    beq a0, 0, bb1

bb2:
    ...

bb4:
    mv a0, -1
    ret

bb1:
    mv a0, 1
    ret

bb3:
    mv a0, 1
    ret

After block placement, a new tail merging opportunity appears: bb1 and bb3 can be merged into one block. So the conditional constraint for merging tail blocks with no successors should be removed. In my experiments on RISC-V, this decreases code size.

Author of original patch: Jim Lin

Reviewers: haicheng, aheejin, craig.topper, rnk, RKSimon, Jim, dmgreen

Reviewed By: Jim, dmgreen

Subscribers: xbolva00, dschuff, javed.absar, sbc100, jgravelle-google, aheejin, kito-cheng, dmgreen, PkmX, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D54411

llvm-svn: 363284
2019-06-13 18:11:32 +00:00
David Stenberg 1278a19282 Remove ';' after namespace's closing bracket [NFC]
llvm-svn: 363267
2019-06-13 14:02:55 +00:00
Diogo N. Sampaio 0be2d25ecc [FIX] Forces shrink wrapping to consider any memory access as aliasing with the stack
Summary:
Relate bug: https://bugs.llvm.org/show_bug.cgi?id=37472

The shrink wrapping pass prematurely restores the stack, at a point where the stack might still be accessed.
Taking an exception can cause the stack to be corrupted.

As a first approach, this patch is overly conservative, assuming that any instruction that may load or store could access
the stack.

Reviewers: dmgreen, qcolombet

Reviewed By: qcolombet

Subscribers: simpal01, efriedma, eli.friedman, javed.absar, llvm-commits, eugenis, chill, carwil, thegameg

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63152

llvm-svn: 363265
2019-06-13 13:56:19 +00:00
Jeremy Morse d2cd9c23b4 [NFC] Sink a function call into LiveDebugValues::process
This was requested in D62904, which I successfully missed. This is just
a refactor and shouldn't change any behaviour.

llvm-svn: 363259
2019-06-13 13:11:57 +00:00
Simon Pilgrim 6b56ad164c [CodeGen] Add getMachineMemOperand + MachineMemOperand::Flags allocator helper wrapper. NFCI.
Pre-commit for D62726 on behalf of @luke (Luke Lau)

llvm-svn: 363257
2019-06-13 12:58:55 +00:00
Jeremy Morse bf2b2f08b0 [DebugInfo] Honour variable fragments in LiveDebugValues
This patch makes the LiveDebugValues pass consider fragments when propagating
DBG_VALUE insts between blocks, fixing PR41979. Fragment info for a variable
location is added to the open-ranges key, which allows distinct fragments to be
tracked separately. To handle overlapping fragments things become slightly
funkier. To avoid excessive searching for overlaps in the data-flow part of
LiveDebugValues, this patch:
 * Pre-computes pairings of fragments that overlap, for each DILocalVariable
 * During data-flow, whenever something happens that causes an open range to
   be terminated (via erase), any fragments pre-determined to overlap are
   also terminated.

The effect of which is that when encountering a DBG_VALUE fragment that
overlaps others, the overlapped fragments do not get propagated to other
blocks. We still rely on later location-list building to correctly handle
overlapping fragments within blocks.

It's unclear whether a mixture of DBG_VALUEs with and without fragmented
expressions are legitimate. To avoid surprises, this patch interprets a
DBG_VALUE with no fragment as overlapping any DBG_VALUE _with_ a fragment.

Differential Revision: https://reviews.llvm.org/D62904

llvm-svn: 363256
2019-06-13 12:51:57 +00:00
Nikola Prica 076ae0d2e2 [DebugInfo] Move Value struct out of DebugLocEntry as DbgValueLoc (NFC)
Since DebugLocEntry::Value is used as part of both DwarfDebug and
DebugLocEntry, make it a separate class.

Reviewers: aprantl, dstenb

Reviewed By: aprantl

Differential Revision: https://reviews.llvm.org/D63213

llvm-svn: 363246
2019-06-13 10:23:26 +00:00
Jeremy Morse 181bf0cefb [DebugInfo] Use FrameDestroy to extend stack locations to end-of-function
We aim to ignore changes in variable locations during the prologue and
epilogue of functions, to avoid using space documenting location changes
that aren't visible. However in D61940 / r362951 this got ripped out as
the previous implementation was unsound.

Instead, use the FrameDestroy flag to identify when we're in the epilogue
of a function, and ignore variable location changes accordingly. This fits
in with existing code that examines the FrameSetup flag.

Some variable locations get shuffled in modified tests as they now cover
greater ranges, which is what would be expected. Some additional
single-location variables are generated too. Two tests are un-xfailed,
they were only xfailed due to r362951 deleting functionality they depended
on.

Apparently some out-of-tree backends don't accurately maintain FrameDestroy
flags -- if you're an out-of-tree maintainer and see changes in variable
locations disappear due to a faulty FrameDestroy flag, it's safe to back
this change out. The impact is just slightly more debug info than necessary.

Differential Revision: https://reviews.llvm.org/D62314

llvm-svn: 363245
2019-06-13 10:03:17 +00:00
Simon Pilgrim 4e0648a541 [TargetLowering] Add MachineMemOperand::Flags to allowsMemoryAccess tests (PR42123)
As discussed on D62910, we need to check whether particular types of memory access are allowed, not just their alignment/address-space.

This NFC patch adds a MachineMemOperand::Flags argument to allowsMemoryAccess and allowsMisalignedMemoryAccesses, and wires up calls to pass the relevant flags to them.

If people are happy with this approach I can then update X86TargetLowering::allowsMisalignedMemoryAccesses to handle misaligned NT load/stores.

Differential Revision: https://reviews.llvm.org/D63075

llvm-svn: 363179
2019-06-12 17:14:03 +00:00
Matt Arsenault f29366b1f5 StackProtector: Use PointerMayBeCaptured
This was using its own, outdated list of possible captures. This was
at minimum not catching cmpxchg and addrspacecast captures.

One change is now any volatile access is treated as capturing. The
test coverage for this pass is quite inadequate, but this required
removing volatile in the lifetime capture test.

Also fixes some infrastructure issues to allow running just the IR
pass.

Fixes bug 42238.

llvm-svn: 363169
2019-06-12 14:23:33 +00:00
Anton Afanasyev 339b39b773 [MIR] Skip hoisting to basic blocks which may throw exceptions or return
Summary:
Fix hoisting into basic blocks that are not legal for hoisting because
they may be terminated by an exception or are return blocks.

Reviewers: john.brawn, RKSimon, MatzeB

Subscribers: hiraditya, llvm-commits

Tags: #llvm

Differential Revision: https://reviews.llvm.org/D63148

llvm-svn: 363164
2019-06-12 13:51:44 +00:00
Hsiangkai Wang 93be25b580 [NFC] Correct comments in RegisterCoalescer.
Differential Revision: https://reviews.llvm.org/D63124

llvm-svn: 363119
2019-06-12 02:58:04 +00:00
Amara Emerson d133c15925 [GlobalISel] Add a G_JUMP_TABLE opcode.
This opcode generates a pointer to the address of the jump table
specified by the source operand, which is a jump table index.

It will be used in conjunction with an upcoming G_BRJT opcode to support
jump table codegen with GlobalISel.

Differential Revision: https://reviews.llvm.org/D63111

llvm-svn: 363096
2019-06-11 19:58:06 +00:00
Jinsong Ji ef2d6d99c0 [PowerPC] Enable MachinePipeliner for P9 with -ppc-enable-pipeliner
Implement necessary target hooks to enable MachinePipeliner for P9 only.
The pass is off by default, can be enabled with -ppc-enable-pipeliner for P9.

Differential Revision: https://reviews.llvm.org/D62164

llvm-svn: 363085
2019-06-11 17:40:39 +00:00
Simon Pilgrim 266f43964e [TargetLowering] Add allowsMemoryAccess(MachineMemOperand) helper wrapper. NFCI.
As suggested by @arsenm on D63075 - this adds a TargetLowering::allowsMemoryAccess wrapper that takes a Load/Store node's MachineMemOperand to handle the AddressSpace/Alignment arguments and will also implicitly handle the MachineMemOperand::Flags change in D63075.

llvm-svn: 363048
2019-06-11 11:00:23 +00:00
Simon Pilgrim 287e78c82b [DAGCombine] GetNegatedExpression - constant float vector support (PR42105)
Add support for negation of constant build vectors.

Differential Revision: https://reviews.llvm.org/D62963

llvm-svn: 363040
2019-06-11 09:44:33 +00:00
Sander de Smalen cbeb563cfb Change semantics of fadd/fmul vector reductions.
This patch changes how LLVM handles the accumulator/start value
in the reduction, by never ignoring it regardless of the presence of
fast-math flags on callsites. This change introduces the following
new intrinsics to replace the existing ones:

  llvm.experimental.vector.reduce.fadd -> llvm.experimental.vector.reduce.v2.fadd
  llvm.experimental.vector.reduce.fmul -> llvm.experimental.vector.reduce.v2.fmul

and adds functionality to auto-upgrade existing LLVM IR and bitcode.

Reviewers: RKSimon, greened, dmgreen, nikic, simoll, aemerson

Reviewed By: nikic

Differential Revision: https://reviews.llvm.org/D60261

llvm-svn: 363035
2019-06-11 08:22:10 +00:00
Matt Arsenault c5830f5f05 AtomicExpand: Don't crash on non-0 alloca
This now produces garbage on AMDGPU with a call to a nonexistent,
anonymous libcall but won't assert.

llvm-svn: 363022
2019-06-11 01:35:07 +00:00
Matt Arsenault 383e72fcfe AMDGPU: Expand < 32-bit atomics
Also fix AtomicExpand asserting on atomicrmw fadd/fsub.

llvm-svn: 363021
2019-06-11 01:35:00 +00:00
Puyan Lotfi 4d89462a1c [MIR-Canon] Fixing non-determinism that was breaking bots (NFC).
An earlier fix of a subtle iterator invalidation bug had uncovered a
nondeterminism that was present in the MultiUsers bag. Problem was that
MultiUsers was being looked up using pointers.

This patch is an NFC change that numbers each multiuser and processes each in
numbered order. This fixes the test failure on netbsd and will likely fix the
green-dragon bot too.

llvm-svn: 363012
2019-06-11 00:00:25 +00:00
Jessica Paquette b22954384e [GlobalISel] Translate memset/memmove/memcpy from undef ptrs into nops
If the source is undef, then just don't do anything.

This matches SelectionDAG's behaviour in SelectionDAG.cpp.

Also add a test showing that we do the right thing here.
(irtranslator-memfunc-undef.ll)

Differential Revision: https://reviews.llvm.org/D63095

llvm-svn: 362989
2019-06-10 21:53:56 +00:00
Francis Visoiu Mistrih a438432acc [FastISel] Skip creating unnecessary vregs for arguments
This behavior was added in r130928 for both FastISel and SD, and then
disabled in r131156 for FastISel.

This re-enables it for FastISel with the corresponding fix.

This is triggered only when FastISel can't lower the arguments and falls
back to SelectionDAG for it.

FastISel contains a map of "register fixups" where at the end of the
selection phase it replaces all uses of a register with another
register that FastISel sometimes pre-assigned. Code at the end of
SelectionDAGISel::runOnMachineFunction is doing the replacement at the
very end of the function, while other pieces that come in before that
look through the MachineFunction and assume everything is done. In this
case, the real issue is that the code emitting COPY instructions for the
liveins (physreg to vreg) (EmitLiveInCopies) is checking if the vreg
assigned to the physreg is used, and if it's not, it will skip the COPY.
If a register wasn't replaced with its assigned fixup yet, the copy will
be skipped and we'll end up with uses of undefined registers.

This fix moves the replacement of registers before the emission of
copies for the live-ins.

The initial motivation for this fix is to enable tail calls for
swiftself functions, which were blocked because we couldn't prove that
the swiftself argument (which is callee-save) comes from a function
argument (live-in), because there was an extra copy (vreg to vreg).

A few tests are affected by this:

* llvm/test/CodeGen/AArch64/swifterror.ll: we used to spill x21
(callee-save) but never reload it because it's attached to the return.
We now don't even spill it anymore.
* llvm/test/CodeGen/*/swiftself.ll: we tail-call now.
* llvm/test/CodeGen/AMDGPU/mubuf-legalize-operands.ll: I believe this
test was not really testing the right thing, but it worked because the
same registers were re-used.
* llvm/test/CodeGen/ARM/cmpxchg-O0.ll: regalloc changes
* llvm/test/CodeGen/ARM/swifterror.ll: get rid of a copy
* llvm/test/CodeGen/Mips/*: get rid of spills and copies
* llvm/test/CodeGen/SystemZ/swift-return.ll: smaller stack
* llvm/test/CodeGen/X86/atomic-unordered.ll: smaller stack
* llvm/test/CodeGen/X86/swifterror.ll: same as AArch64
* llvm/test/DebugInfo/X86/dbg-declare-arg.ll: stack size changed

Differential Revision: https://reviews.llvm.org/D62361

llvm-svn: 362963
2019-06-10 16:53:37 +00:00
Jeremy Morse bcff417292 [DebugInfo] Terminate all location-lists at end of block
This commit reapplies r359426 (which was reverted in r360301 due to
performance problems) and rolls in D61940 to address the performance problem.
I've combined the two to avoid creating a span of slow-performance, and to
ease reverting if more problems crop up.

The summary of D61940: This patch removes the "ChangingRegs" facility in
DbgEntityHistoryCalculator, as its overapproximate nature can produce incorrect
variable locations. An unchanging register doesn't mean a variable doesn't
change its location.

The patch kills off everything that calculates the ChangingRegs vector.
Previously ChangingRegs spotted epilogues and marked registers as unchanging if
they weren't modified outside the epilogue, increasing the chance that we can
emit a single-location variable record. Without this feature,
debug-loc-offset.mir and pr19307.mir become temporarily XFAIL. They'll be
re-enabled by D62314, using the FrameDestroy flag to identify epilogues, I've
split this into two steps as FrameDestroy isn't necessarily supported by all
backends.

The logic for terminating variable locations at the end of a basic block now
becomes much more enjoyably simple: we just terminate them all.

Other test changes: inlined-argument.ll becomes XFAIL, but for a longer term.
The current algorithm for detecting that a variable has a single-location
doesn't work in this scenario (inlined function in multiple blocks), only other
bugs were making this test work. fission-ranges.ll gets slightly refreshed too,
as the location of "p" is now correctly determined to be a single location.

Differential Revision: https://reviews.llvm.org/D61940

llvm-svn: 362951
2019-06-10 15:23:46 +00:00
Nikola Prica abc1dff7e4 [DebugInfo] More strict debug range for stack variables
A variable's stack location can stretch longer than it should. If a
variable is placed on the stack in some nested basic block, its range
can be calculated to extend up to the next occurrence of the variable's
DBG_VALUE, or up to the end of the function, thus covering basic
blocks that should not be included in the variable's location range.
This happens because the DbgEntityHistoryCalculator ends register
locations at the end of a basic block only if the variable's location
register has been changed throughout the function, which is not the
case for the register used to reference stack objects.

This patch also tries to produce a single value location if the location
list builder managed to merge all the locations into one.

Reviewers: aprantl, dstenb, jmorse

Reviewed By: aprantl, dstenb, jmorse

Subscribers: djtodoro, ivanbaev, asowda

Tags: #debug-info

Differential Revision: https://reviews.llvm.org/D61600

llvm-svn: 362923
2019-06-10 08:41:06 +00:00
QingShan Zhang ab846da7e8 [DAGCombine] Match a pattern where a wide type scalar value is stored by several narrow stores
This opportunity was found in SPEC 2017 557.xz_r, where it is used by the SHA encrypt/decrypt code; see sha-2/sha512.c.

typedef unsigned long long u64; /* 64-bit type assumed; not shown in the excerpt */

static void store64(u64 x, unsigned char* y)
{
    for(int i = 0; i != 8; ++i)
        y[i] = (x >> ((7-i) * 8)) & 255;
}

static u64 load64(const unsigned char* y)
{
    u64 res = 0;
    for(int i = 0; i != 8; ++i)
        res |= (u64)(y[i]) << ((7-i) * 8);
    return res;
}
load64 was already handled by https://reviews.llvm.org/D26149;
this patch implements the store pattern.

Match a pattern where a wide type scalar value is stored by several narrow
stores. Fold it into a single store or a BSWAP and a store if the targets
supports it.

Assuming little endian target:
i8 *p = ...
i32 val = ...
p[0] = (val >> 0) & 0xFF;
p[1] = (val >> 8) & 0xFF;
p[2] = (val >> 16) & 0xFF;
p[3] = (val >> 24) & 0xFF;

=>
*((i32)p) = val;

i8 *p = ...
i32 val = ...
p[0] = (val >> 24) & 0xFF;
p[1] = (val >> 16) & 0xFF;
p[2] = (val >> 8) & 0xFF;
p[3] = (val >> 0) & 0xFF;

=>
*((i32)p) = BSWAP(val);
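
The equivalence can be checked directly on a little-endian host (standalone
C++ using the GCC/Clang bswap builtin; not part of the patch):

    #include <cstdint>
    #include <cstdio>
    #include <cstring>

    // store64 above writes big-endian bytes; on little-endian targets the
    // combine turns the eight byte stores into one bswap + 64-bit store.
    static void store64(uint64_t X, unsigned char *Y) {
      for (int I = 0; I != 8; ++I)
        Y[I] = (X >> ((7 - I) * 8)) & 255;
    }

    int main() {
      uint64_t X = 0x0123456789ABCDEFULL;
      unsigned char A[8], B[8];
      store64(X, A);
      uint64_t Swapped = __builtin_bswap64(X); // *((u64 *)p) = BSWAP(val)
      std::memcpy(B, &Swapped, 8);
      std::puts(std::memcmp(A, B, 8) == 0 ? "equivalent" : "mismatch");
      return 0;
    }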

Differential Revision: https://reviews.llvm.org/D62897

llvm-svn: 362921
2019-06-10 05:40:21 +00:00