forked from OSchip/llvm-project

27 Commits
54e22f33d9  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

    Recommitting with compiler time improvements
    Recommitting after fixup of 32-bit aliasing sign offset bug in DAGCombiner.
    * Simplify Consecutive Merge Store Candidate Search
    Now that address aliasing is much less conservative, push through a
    simplified store merging search and chain alias analysis which only
    checks for parallel stores through the chain subgraph. This is
    cleaner, as it separates non-interfering loads/stores from the
    store-merging logic.
    When merging stores, search up the chain through a single load, and
    find all possible stores by looking down through a load and a
    TokenFactor to all stores visited.
    This improves the quality of the output SelectionDAG and the output
    Codegen (save perhaps for some ARM cases where we correctly construct
    wider loads, but then promote them to float operations which require
    more expensive constant generation).
    Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)
    Additional Minor Changes:
      1. Finishes removing unused AliasLoad code
      2. Unifies the chain aggregation in the merged stores across code
         paths
      3. Re-add the Store node to the worklist after calling
         SimplifyDemandedBits.
      4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
         arbitrary, but seems sufficient to not cause regressions in
         tests.
      5. Remove Chain dependencies of memory operations on CopyFromReg
         nodes, as these are captured by data dependence.
      6. Forward load-store values through TokenFactors containing
         {CopyToReg,CopyFromReg} values.
      7. Peephole to convert buildvector of extract_vector_elt to
         extract_subvector if possible (see
         CodeGen/AArch64/store-merge.ll)
      8. Store merging for the ARM target is restricted to 32-bit, as in
         some contexts invalid 64-bit operations are being generated. This
         can be removed once appropriate checks are added.
    This finishes the change Matt Arsenault started in r246307 and
    jyknight's original patch.
    Many tests required some changes as memory operations are now
    reorderable, improving load-store forwarding. One test in
    particular is worth noting:
      CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
      forwarding converts a load-store pair into a parallel store and
      a memory-realized bitcast of the same value. However, because we
      lose the sharing of the explicit and implicit store values we
      must create another local store. A similar transformation
      happens before SelectionDAG as well.
    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
llvm-svn: 297695
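To illustrate the store merging this enables, a minimal hypothetical .ll sketch (not a test from this commit; the x86-64 encoding in the comment is an assumption):

```llvm
; With the less conservative chain alias analysis, DAGCombiner can prove
; these two stores independent and merge them into one wider store.
define void @merge_const_stores(i16* %p) {
  %p1 = getelementptr i16, i16* %p, i64 1
  store i16 1, i16* %p, align 4
  store i16 2, i16* %p1, align 2
  ; After merging, this is typically a single 32-bit store of 0x00020001
  ; (little-endian), e.g. movl $131073, (%rdi) on x86-64.
  ret void
}
```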
							
ce52b80744  [SDAG] Revert r296476 (and r296486, r296668, r296690).

    This patch causes compile times for some patterns to explode. I have a
    (large, unreduced) test case that slows down by more than 20x and
    several test cases slow down by 2x. I'm sending some of the test cases
    directly to Nirav and following up with more details in the review
    log, but this should unblock anyone else hitting this.
llvm-svn: 296862
						
f830dec3f2  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

    Recommitting after fixup of 32-bit aliasing sign offset bug in DAGCombiner.
    * Simplify Consecutive Merge Store Candidate Search
    Now that address aliasing is much less conservative, push through a
    simplified store merging search and chain alias analysis which only
    checks for parallel stores through the chain subgraph. This is
    cleaner, as it separates non-interfering loads/stores from the
    store-merging logic.
    When merging stores, search up the chain through a single load, and
    find all possible stores by looking down through a load and a
    TokenFactor to all stores visited.
    This improves the quality of the output SelectionDAG and the output
    Codegen (save perhaps for some ARM cases where we correctly construct
    wider loads, but then promote them to float operations which require
    more expensive constant generation).
    Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)
    Additional Minor Changes:
      1. Finishes removing unused AliasLoad code
      2. Unifies the chain aggregation in the merged stores across code
         paths
      3. Re-add the Store node to the worklist after calling
         SimplifyDemandedBits.
      4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
         arbitrary, but seems sufficient to not cause regressions in
         tests.
      5. Remove Chain dependencies of memory operations on CopyFromReg
         nodes, as these are captured by data dependence.
      6. Forward load-store values through TokenFactors containing
         {CopyToReg,CopyFromReg} values.
      7. Peephole to convert buildvector of extract_vector_elt to
         extract_subvector if possible (see
         CodeGen/AArch64/store-merge.ll)
      8. Store merging for the ARM target is restricted to 32-bit, as in
         some contexts invalid 64-bit operations are being generated. This
         can be removed once appropriate checks are added.
    This finishes the change Matt Arsenault started in r246307 and
    jyknight's original patch.
    Many tests required some changes as memory operations are now
    reorderable, improving load-store forwarding. One test in
    particular is worth noting:
      CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
      forwarding converts a load-store pair into a parallel store and
      a memory-realized bitcast of the same value. However, because we
      lose the sharing of the explicit and implicit store values we
      must create another local store. A similar transformation
      happens before SelectionDAG as well.
    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
llvm-svn: 296476
							
73cd0194cf  Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."

    This reverts commit r296252 until 256-bit operations are more
    efficiently generated in X86.
llvm-svn: 296279
						
beabf456df  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

    Recommitting after fixup of 32-bit aliasing sign offset bug in DAGCombiner.
    * Simplify Consecutive Merge Store Candidate Search
    Now that address aliasing is much less conservative, push through a
    simplified store merging search and chain alias analysis which only
    checks for parallel stores through the chain subgraph. This is
    cleaner, as it separates non-interfering loads/stores from the
    store-merging logic.
    When merging stores, search up the chain through a single load, and
    find all possible stores by looking down through a load and a
    TokenFactor to all stores visited.
    This improves the quality of the output SelectionDAG and the output
    Codegen (save perhaps for some ARM cases where we correctly construct
    wider loads, but then promote them to float operations which require
    more expensive constant generation).
    Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)
    Additional Minor Changes:
      1. Finishes removing unused AliasLoad code
      2. Unifies the chain aggregation in the merged stores across code
         paths
      3. Re-add the Store node to the worklist after calling
         SimplifyDemandedBits.
      4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
         arbitrary, but seems sufficient to not cause regressions in
         tests.
      5. Remove Chain dependencies of memory operations on CopyFromReg
         nodes, as these are captured by data dependence.
      6. Forward load-store values through TokenFactors containing
         {CopyToReg,CopyFromReg} values.
      7. Peephole to convert buildvector of extract_vector_elt to
         extract_subvector if possible (see
         CodeGen/AArch64/store-merge.ll)
      8. Store merging for the ARM target is restricted to 32-bit, as in
         some contexts invalid 64-bit operations are being generated. This
         can be removed once appropriate checks are added.
    This finishes the change Matt Arsenault started in r246307 and
    jyknight's original patch.
    Many tests required some changes as memory operations are now
    reorderable, improving load-store forwarding. One test in
    particular is worth noting:
      CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
      forwarding converts a load-store pair into a parallel store and
      a memory-realized bitcast of the same value. However, because we
      lose the sharing of the explicit and implicit store values we
      must create another local store. A similar transformation
      happens before SelectionDAG as well.
    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
llvm-svn: 296252
							
93f9d5ce04  Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."

    This reverts commit r293893, which is miscompiling lua on ARM and
    breaking bootstrapping for x86-windows.
llvm-svn: 293915
						
4442667fc5  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

    Recommitting after fixing X86 inc/dec chain bug.
    * Simplify Consecutive Merge Store Candidate Search
    Now that address aliasing is much less conservative, push through a
    simplified store merging search and chain alias analysis which only
    checks for parallel stores through the chain subgraph. This is
    cleaner, as it separates non-interfering loads/stores from the
    store-merging logic.
    When merging stores, search up the chain through a single load, and
    find all possible stores by looking down through a load and a
    TokenFactor to all stores visited.
    This improves the quality of the output SelectionDAG and the output
    Codegen (save perhaps for some ARM cases where we correctly construct
    wider loads, but then promote them to float operations which require
    more expensive constant generation).
    Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)
    Additional Minor Changes:
      1. Finishes removing unused AliasLoad code
      2. Unifies the chain aggregation in the merged stores across code
         paths
      3. Re-add the Store node to the worklist after calling
         SimplifyDemandedBits.
      4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
         arbitrary, but seems sufficient to not cause regressions in
         tests.
      5. Remove Chain dependencies of memory operations on CopyFromReg
         nodes, as these are captured by data dependence.
      6. Forward load-store values through TokenFactors containing
         {CopyToReg,CopyFromReg} values.
      7. Peephole to convert buildvector of extract_vector_elt to
         extract_subvector if possible (see
         CodeGen/AArch64/store-merge.ll)
      8. Store merging for the ARM target is restricted to 32-bit, as in
         some contexts invalid 64-bit operations are being generated. This
         can be removed once appropriate checks are added.
    This finishes the change Matt Arsenault started in r246307 and
    jyknight's original patch.
    Many tests required some changes as memory operations are now
    reorderable, improving load-store forwarding. One test in
    particular is worth noting:
      CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
      forwarding converts a load-store pair into a parallel store and
      a memory-realized bitcast of the same value. However, because we
      lose the sharing of the explicit and implicit store values we
      must create another local store. A similar transformation
      happens before SelectionDAG as well.
    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
llvm-svn: 293893
							
ca74dd79e9  [mips] Recommit: "N64 static relocation model support"

    This patch makes one change to GOT handling and two changes to N64's
    relocation model handling. Furthermore, the jumptable encodings have
    been corrected for static N64.

    Big GOT handling is now done via a new SDNode, MipsGotHi - this node
    is unconditionally lowered to an lui instruction.

    The first change to N64's relocation handling is the lifting of the
    restriction that N64 always uses PIC. Now it is possible to target
    static environments.

    The second change adds support for 64-bit symbols and enables them by
    default. Previously N64 had patterns for sym32 mode only. In this mode
    all symbols are assumed to have 32-bit addresses. sym32 mode support
    is selectable with attribute 'sym32'. A follow-on patch for clang will
    add the necessary frontend parameter.

    This partially resolves PR/23485. Thanks to Brooks Davis for reporting
    the issue!

    This version corrects a "Conditional jump or move depends on
    uninitialised value(s)" error detected by valgrind present in the
    original commit.

    Reviewers: dsanders, seanbruno, zoran.jovanovic, vkalintiris

    Differential Revision: https://reviews.llvm.org/D23652
llvm-svn: 293279
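A hypothetical llc-level sketch of what this enables (illustrative only; `sym32` is the subtarget feature the message refers to):

```llvm
; Static (non-PIC) code generation for N64, with and without the sym32
; assumption that all symbols have 32-bit addresses.
; RUN: llc -march=mips64 -relocation-model=static < %s
; RUN: llc -march=mips64 -relocation-model=static -mattr=+sym32 < %s
@g = global i64 0

define i64 @read_g() {
  %v = load i64, i64* @g
  ret i64 %v
}
```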
						
d32a421f75  Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."

    This reverts commit r293184, which is failing in LTO builds.
llvm-svn: 293188
						
de6516c466  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

    * Simplify Consecutive Merge Store Candidate Search
    Now that address aliasing is much less conservative, push through a
    simplified store merging search and chain alias analysis which only
    checks for parallel stores through the chain subgraph. This is
    cleaner, as it separates non-interfering loads/stores from the
    store-merging logic.
    When merging stores, search up the chain through a single load, and
    find all possible stores by looking down through a load and a
    TokenFactor to all stores visited.
    This improves the quality of the output SelectionDAG and the output
    Codegen (save perhaps for some ARM cases where we correctly construct
    wider loads, but then promote them to float operations which require
    more expensive constant generation).
    Some minor peephole optimizations to deal with improved SubDAG shapes (listed below)
    Additional Minor Changes:
      1. Finishes removing unused AliasLoad code
      2. Unifies the chain aggregation in the merged stores across code
         paths
      3. Re-add the Store node to the worklist after calling
         SimplifyDemandedBits.
      4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
         arbitrary, but seems sufficient to not cause regressions in
         tests.
      5. Remove Chain dependencies of memory operations on CopyFromReg
         nodes, as these are captured by data dependence.
      6. Forward load-store values through TokenFactors containing
         {CopyToReg,CopyFromReg} values.
      7. Peephole to convert buildvector of extract_vector_elt to
         extract_subvector if possible (see
         CodeGen/AArch64/store-merge.ll)
      8. Store merging for the ARM target is restricted to 32-bit, as in
         some contexts invalid 64-bit operations are being generated. This
         can be removed once appropriate checks are added.
    This finishes the change Matt Arsenault started in r246307 and
    jyknight's original patch.
    Many tests required some changes as memory operations are now
    reorderable, improving load-store forwarding. One test in
    particular is worth noting:
      CodeGen/PowerPC/ppc64-align-long-double.ll - Improved load-store
      forwarding converts a load-store pair into a parallel store and
      a memory-realized bitcast of the same value. However, because we
      lose the sharing of the explicit and implicit store values we
      must create another local store. A similar transformation
      happens before SelectionDAG as well.
    Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
llvm-svn: 293184
							
5b67a4f75f  Revert "[mips] N64 static relocation model support"

    This reverts commit r293164. There are multiple tests failing.
llvm-svn: 293170
						
09e65efd09  [mips] N64 static relocation model support

    This patch makes one change to GOT handling and two changes to N64's
    relocation model handling. Furthermore, the jumptable encodings have
    been corrected for static N64.

    Big GOT handling is now done via a new SDNode, MipsGotHi - this node
    is unconditionally lowered to an lui instruction.

    The first change to N64's relocation handling is the lifting of the
    restriction that N64 always uses PIC. Now it is possible to target
    static environments.

    The second change adds support for 64-bit symbols and enables them by
    default. Previously N64 had patterns for sym32 mode only. In this mode
    all symbols are assumed to have 32-bit addresses. sym32 mode support
    is selectable with attribute 'sym32'. A follow-on patch for clang will
    add the necessary frontend parameter.

    This partially resolves PR/23485. Thanks to Brooks Davis for reporting
    the issue!

    Reviewers: dsanders, seanbruno, zoran.jovanovic, vkalintiris

    Differential Revision: https://reviews.llvm.org/D23652
llvm-svn: 293164
						
f5bf03c7ef  Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."

    Reverting due to ARM MCJIT and MIPS LLD error. This reverts commit
    r289659.
llvm-svn: 289667
						
8527ab0ad2  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

Retrying after fixing the previous failure by removing load-store
factoring through token factors, in favor of improved token factor
operand pruning.
Simplify Consecutive Merge Store Candidate Search
Now that address aliasing is much less conservative, push through
simplified store merging search which only checks for parallel stores
through the chain subgraph. This is cleaner, as it separates
non-interfering loads/stores from the store-merging logic.
When merging stores, search up the chain through a single load, and
find all possible stores by looking down through a load and a
TokenFactor to all stores visited. This improves the quality of the
output SelectionDAG and generally the output CodeGen (with some
exceptions).
Additional Minor Changes:
   1. Finishes removing unused AliasLoad code
   2. Unifies the chain aggregation in the merged stores across
      code paths
   3. Re-add the Store node to the worklist after calling
      SimplifyDemandedBits.
   4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
      arbitrary, but seemed sufficient to not cause regressions in
      tests.
This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.
Many tests required some changes as memory operations are now
reorderable. Some tests relying on the order were changed to use
volatile memory operations
Noteworthy tests:
    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.
    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -
      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.
    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?
    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.
    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores
    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls
    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores
    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.
    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.
    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.
Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel
Differential Revision: https://reviews.llvm.org/D14834
llvm-svn: 289659
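To make the pr18023 point concrete, a hypothetical fragment (not the removed test itself):

```llvm
; The non-volatile store to @a may legally be reordered past the volatile
; load of @b: volatility constrains the volatile operation itself, not
; unrelated non-aliasing accesses around it.
@a = global i32 0
@b = global i32 0

define i32 @reorder_ok() {
  store i32 1, i32* @a
  %v = load volatile i32, i32* @b
  ret i32 %v
}
```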
							
bedb5d906c  Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."

    This reverts commit r289221, which appears to be triggering an
    assertion.
llvm-svn: 289226
						
fd51ff4fd8  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

Retrying after fixing overly aggressive load-store forwarding optimization.
Simplify Consecutive Merge Store Candidate Search
Now that address aliasing is much less conservative, push through
simplified store merging search which only checks for parallel stores
through the chain subgraph. This is cleaner, as it separates
non-interfering loads/stores from the store-merging logic.
When merging stores, search up the chain through a single load, and
find all possible stores by looking down through a load and a
TokenFactor to all stores visited. This improves the quality of the
output SelectionDAG and generally the output CodeGen (with some
exceptions).
Additional Minor Changes:
   1. Finishes removing unused AliasLoad code
   2. Unifies the chain aggregation in the merged stores across
      code paths
   3. Re-add the Store node to the worklist after calling
      SimplifyDemandedBits.
   4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
      arbitrary, but seemed sufficient to not cause regressions in
      tests.
This finishes the change Matt Arsenault started in r246307 and
jyknight's original patch.
Many tests required some changes as memory operations are now
reorderable. Some tests relying on the order were changed to use
volatile memory operations
Noteworthy tests:
    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.
    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -
      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.
    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?
    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.
    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores
    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls
    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores
    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.
    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.
    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.
Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel
Differential Revision: https://reviews.llvm.org/D14834
llvm-svn: 289221
							
a81682aad4  Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."

    This reverts commit r284151, which appears to be triggering LTO
    failures on Hexagon.
llvm-svn: 284157
						
4b36957243  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

   Retrying after upstream changes.
   Simplify Consecutive Merge Store Candidate Search
   Now that address aliasing is much less conservative, push through
   simplified store merging search which only checks for parallel stores
   through the chain subgraph. This is cleaner, as it separates
   non-interfering loads/stores from the store-merging logic.
   When merging stores, search up the chain through a single load, and
   find all possible stores by looking down through a load and a
   TokenFactor to all stores visited. This improves the quality of the
   output SelectionDAG and generally the output CodeGen (with some
   exceptions).
   Additional Minor Changes:
       1. Finishes removing unused AliasLoad code
       2. Unifies the chain aggregation in the merged stores across
       code paths
       3. Re-add the Store node to the worklist after calling
       SimplifyDemandedBits.
       4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
       arbitrary, but seemed sufficient to not cause regressions in
       tests.
   This finishes the change Matt Arsenault started in r246307 and
   jyknight's original patch.
   Many tests required some changes as memory operations are now
   reorderable. Some tests relying on the order were changed to use
   volatile memory operations
   Noteworthy tests:
    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.
    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -
      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.
    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?
    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.
    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores
    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls
    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and
      merges two stores
    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.
    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.
    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.
    CodeGen/AMDGPU/vgpr-spill-emergency-stack-slot-compute.ll -
      This test appears to work but no longer exhibits the spill behavior.
Reviewers: arsenm, hfinkel, tstellarAMD, jyknight, nhaehnle
Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, dsanders, resistor, tstellarAMD, t.p.northover, spatel
Differential Revision: https://reviews.llvm.org/D14834
llvm-svn: 284151
							
e524f50882  Revert "In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled."

    This reverts commit r282600 due to test failures with MCJIT.
llvm-svn: 282604
						
e17e055b75  In visitSTORE, always use FindBetterChain, rather than only when UseAA is enabled.

  Simplify Consecutive Merge Store Candidate Search
  Now that address aliasing is much less conservative, push through
  simplified store merging search which only checks for parallel stores
  through the chain subgraph. This is cleaner, as it separates
  non-interfering loads/stores from the store-merging logic.
  When merging stores, search up the chain through a single load, and
  find all possible stores by looking down through a load and a
  TokenFactor to all stores visited. This improves the quality of the
  output SelectionDAG and generally the output CodeGen (with some
  exceptions).
  Additional Minor Changes:
    1. Finishes removing unused AliasLoad code
    2. Unifies the chain aggregation in the merged stores across
       code paths
    3. Re-add the Store node to the worklist after calling
       SimplifyDemandedBits.
    4. Increase GatherAllAliasesMaxDepth from 6 to 18. That number is
       arbitrary, but seemed sufficient to not cause regressions in
       tests.
  This finishes the change Matt Arsenault started in r246307 and
  jyknight's original patch.
  Many tests required some changes as memory operations are now
  reorderable. Some tests relying on the order were changed to use
  volatile memory operations
  Noteworthy tests:
    CodeGen/AArch64/argument-blocks.ll -
      It's not entirely clear what the test_varargs_stackalign test is
      supposed to be asserting, but the new code looks right.
    CodeGen/AArch64/arm64-memset-inline.ll -
    CodeGen/AArch64/arm64-stur.ll -
    CodeGen/ARM/memset-inline.ll -
      The backend now generates *worse* code due to store merging
      succeeding, as we do not do a 16-byte constant-zero store efficiently.
    CodeGen/AArch64/merge-store.ll -
      Improved, but there still seems to be an extraneous vector insert
      from an element to itself?
    CodeGen/PowerPC/ppc64-align-long-double.ll -
      Worse code emitted in this case, due to the improved store->load
      forwarding.
    CodeGen/X86/dag-merge-fast-accesses.ll -
    CodeGen/X86/MergeConsecutiveStores.ll -
    CodeGen/X86/stores-merging.ll -
    CodeGen/Mips/load-store-left-right.ll -
      Restored correct merging of non-aligned stores
    CodeGen/AMDGPU/promote-alloca-stored-pointer-value.ll -
      Improved. Correctly merges buffer_store_dword calls
    CodeGen/AMDGPU/si-triv-disjoint-mem-access.ll -
      Improved. Sidesteps loading a stored value and merges two stores
    CodeGen/X86/pr18023.ll -
      This test has been removed, as it was asserting incorrect
      behavior. Non-volatile stores *CAN* be moved past volatile loads,
      and now are.
    CodeGen/X86/vector-idiv.ll -
    CodeGen/X86/vector-lzcnt-128.ll -
      It's basically impossible to tell what these tests are actually
      testing. But, looks like the code got better due to the memory
      operations being recognized as non-aliasing.
    CodeGen/X86/win32-eh.ll -
      Both loads of the securitycookie are now merged.
    CodeGen/AMDGPU/vgpr-spill-emergency-stack-slot-compute.ll -
      This test appears to work but no longer exhibits the spill
      behavior.
Reviewers: arsenm, hfinkel, tstellarAMD, nhaehnle, jyknight
Subscribers: wdng, nhaehnle, nemanjai, arsenm, weimingz, niravd, RKSimon, aemerson, qcolombet, resistor, tstellarAMD, t.p.northover, spatel
Differential Revision: https://reviews.llvm.org/D14834
llvm-svn: 282600
							
0d97270ae5  [mips] Use --check-prefixes where appropriate. NFC.

llvm-svn: 273669
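For readers unfamiliar with the idiom, a hypothetical sketch (the function and CHECK lines are illustrative, not from this commit). --check-prefixes lets one FileCheck invocation match both shared and per-configuration lines:

```llvm
; RUN: llc -march=mips < %s | FileCheck %s --check-prefixes=ALL,O32
; RUN: llc -march=mips64 < %s | FileCheck %s --check-prefixes=ALL,N64
@g = global i64 7

define i64 @get() {
; ALL-LABEL: get:
; O32: lw
; N64: ld
  %v = load i64, i64* @g
  ret i64 %v
}
```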
						
81fa35ca24  Now that we have a soft-float attribute, use it instead of the hard-coded command line option for the Mips soft float tests.

llvm-svn: 236801
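A minimal sketch of the mechanism (the test content is hypothetical; "use-soft-float" is the function attribute spelling LLVM uses):

```llvm
; The per-function attribute replaces a hard-coded -soft-float flag in
; the RUN line, so one file can exercise both modes.
define float @fadd_soft(float %a, float %b) #0 {
  %r = fadd float %a, %b
  ret float %r
}

attributes #0 = { "use-soft-float"="true" }
```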
						
79e6c74981  [opaque pointer type] Add textual IR support for explicit type parameter to getelementptr instruction

							One of several parallel first steps to remove the target type of pointers,
replacing them with a single opaque pointer type.
This adds an explicit type parameter to the gep instruction so that when the
first parameter becomes an opaque pointer type, the type to gep through is
still available to the instructions.
* This doesn't modify gep operators, only instructions (operators will be
  handled separately)
* Textual IR changes only. Bitcode (including upgrade) and changing the
  in-memory representation will be in separate changes.
* geps of vectors are transformed as:
    getelementptr <4 x float*> %x, ...
  ->getelementptr float, <4 x float*> %x, ...
  Then, once the opaque pointer type is introduced, this will ultimately look
  like:
    getelementptr float, <4 x ptr> %x
  with the unambiguous interpretation that it is a vector of pointers to float.
* address spaces remain on the pointer, not the type:
    getelementptr float addrspace(1)* %x
  ->getelementptr float, float addrspace(1)* %x
  Then, eventually:
    getelementptr float, ptr addrspace(1) %x
Importantly, the massive amount of test case churn has been automated by
the same crappy python code. I had to manually update a few test cases
that wouldn't fit the script's model (r228970, r229196, r229197,
r229198). The python script just massages stdin and writes the result to
stdout; I then wrapped that in a shell script to handle replacing files,
then used the usual find+xargs to migrate all the files.
update.py:
import fileinput
import sys
import re
ibrep = re.compile(r"(^.*?[^%\w]getelementptr inbounds )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
normrep = re.compile(       r"(^.*?[^%\w]getelementptr )(((?:<\d* x )?)(.*?)(| addrspace\(\d\)) *\*(|>)(?:$| *(?:%|@|null|undef|blockaddress|getelementptr|addrspacecast|bitcast|inttoptr|\[\[[a-zA-Z]|\{\{).*$))")
def conv(match, line):
  if not match:
    return line
  line = match.groups()[0]
  if len(match.groups()[5]) == 0:
    line += match.groups()[2]
  line += match.groups()[3]
  line += ", "
  line += match.groups()[1]
  line += "\n"
  return line
for line in sys.stdin:
  if line.find("getelementptr ") == line.find("getelementptr inbounds"):
    if line.find("getelementptr inbounds") != line.find("getelementptr inbounds ("):
      line = conv(re.match(ibrep, line), line)
  elif line.find("getelementptr ") != line.find("getelementptr ("):
    line = conv(re.match(normrep, line), line)
  sys.stdout.write(line)
apply.sh:
for name in "$@"
do
  python3 `dirname "$0"`/update.py < "$name" > "$name.tmp" && mv "$name.tmp" "$name"
  rm -f "$name.tmp"
done
The actual commands:
From llvm/src:
find test/ -name *.ll | xargs ./apply.sh
From llvm/src/tools/clang:
find test/ -name *.mm -o -name *.m -o -name *.cpp -o -name *.c | xargs -I '{}' ../../apply.sh "{}"
From llvm/src/tools/polly:
find test/ -name *.ll | xargs ./apply.sh
After that, check-all (with llvm, clang, clang-tools-extra, lld,
compiler-rt, and polly all checked out).
The extra 'rm' in the apply.sh script is due to a few files in clang's test
suite using interesting unicode stuff that my python script was throwing
exceptions on. None of those files needed to be migrated, so it seemed
sufficient to ignore those cases.
Reviewers: rafael, dexonsmith, grosser
Differential Revision: http://reviews.llvm.org/D7636
llvm-svn: 230786
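A before/after sketch of the textual change for the common scalar case (the struct and function here are hypothetical):

```llvm
%struct.S = type { i32, float }

define float* @field_addr(%struct.S* %s) {
  ; New syntax: the type to GEP through (%struct.S) is an explicit first
  ; operand. The old syntax implied it from the pointer operand:
  ;   getelementptr inbounds %struct.S* %s, i64 0, i32 1
  %p = getelementptr inbounds %struct.S, %struct.S* %s, i64 0, i32 1
  ret float* %p
}
```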
							
a576281694  Move the Mips target to storing the ABI in the TargetMachine rather than on MipsSubtargetInfo.

    This required a bit of massaging at the MC level to handle, since MC
    is a) largely a collection of disparate classes with no hierarchy, and
    b) there's no overarching equivalent to the TargetMachine, only the
    subtarget via MCSubtargetInfo (which is the base class of
    TargetSubtargetInfo).

    We're now storing the ABI at both the TargetMachine level and the MC
    level because the AsmParser and the TargetStreamer both need to know
    what ABI we have in order to parse assembly and emit objects. The
    target streamer has a pointer to the one in the asm parser and is
    updated when the asm parser is created. This is fragile, as the FIXME
    comment notes, but shouldn't be a problem in practice since we always
    create an asm parser before attempting to emit object code via the
    assembler.

    The TargetMachine now contains the ABI so that the DataLayout can be
    constructed dependent upon the ABI. All testcases have been updated to
    use the -target-abi command line flag so that we can set the ABI
    without using a subtarget feature.

    Should be no change visible externally here.
llvm-svn: 227102
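A hypothetical RUN-line sketch of the new flag (the test body is illustrative):

```llvm
; Select the ABI directly rather than via a subtarget feature.
; RUN: llc -march=mips64 -target-abi n32 < %s
; RUN: llc -march=mips64 -target-abi n64 < %s
define i32 @zero() {
  ret i32 0
}
```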
						
c43cda84ff  [mips] Promote i32 arguments to i64 for the N32/N64 ABI and fix <64-bit structs...

    Summary: ... and after all that refactoring, it's possible to
    distinguish softfloat floating point values from integers, so this
    patch no longer breaks softfloat to do it.

    Remove direct handling of i32's in the N32/N64 ABI by promoting them
    to i64. This more closely reflects the ABI documentation and also
    fixes problems with stack arguments on big-endian targets. We now rely
    on signext/zeroext annotations (already generated by clang) and the
    Assert[SZ]ext nodes to avoid the introduction of unnecessary sign/zero
    extends.

    It was not possible to convert three tests to use signext/zeroext.
    These tests are bswap.ll, ctlz-v.ll, and cttz-v.ll. It's not possible
    to put signext on a vector type, so we just accept the sign extends
    here for now. These tests don't pass the vectors the same way clang
    does (clang puts multiple elements in the same argument, these map one
    element to one argument), so we don't need to worry too much about it.

    With this patch, all known N32/N64 bugs should be fixed and we now
    pass the first 10,000 tests generated by ABITest.py.

    Subscribers: llvm-commits

    Differential Revision: http://reviews.llvm.org/D6117
llvm-svn: 221534
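A minimal sketch of the annotation this relies on (hypothetical function, not from the patch):

```llvm
; Under N64 the i32 argument is promoted to i64; the signext annotation
; (enforced via AssertSext during lowering) lets the explicit sext fold
; away instead of emitting a redundant extension.
define i64 @widen(i32 signext %x) {
  %e = sext i32 %x to i64
  ret i64 %e
}
```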
						
726f1ea2c5  [mips] Improve robustness of some tests.

    Summary: This is done by removing some hardcoded registers like $at
    and by not expecting a single-digit register to be selected.

    Contains work done by Matheus Almeida.

    Reviewers: matheusalmeida, dsanders

    Reviewed By: dsanders

    Subscribers: tomatabacu

    Differential Revision: http://reviews.llvm.org/D4227
llvm-svn: 215640
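A hypothetical sketch of the more robust style (the instructions checked are illustrative): capture whichever register the compiler selects instead of hardcoding $at or a single-digit register:

```llvm
; RUN: llc -march=mips < %s | FileCheck %s
define i32 @big_const() {
; CHECK-LABEL: big_const:
; Capture the register that materializes the high half, then reuse the
; capture, rather than hardcoding $1/$at.
; CHECK: lui $[[HI:[0-9]+]], 1
; CHECK: ori ${{[0-9]+}}, $[[HI]], 2
  ret i32 65538
}
```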
						
9fe0ad0c07  [mips] Add calling convention tests covering O32, N32, and N64.

    Summary: I had difficulty finding tests for the N32 and N64 ABI, so
    I've added a collection of calling convention tests based on the
    document MIPS ABIs Described (MD00305), the MIPSpro N32 Handbook, and
    the SYSV ABI. Where the documents/implementations disagree, I've used
    GCC to resolve the conflict.

    A few interesting details:
    * For N32, LLVM uses 64-bit pointers when saving $ra despite pointers
      being 32-bit. I've yet to find a supporting statement in the ABI
      documentation, but the current behaviour matches GCC.
    * For O32, the non-variable portion of a varargs argument list is also
      subject to the rule that floating-point is passed via GPRs (on
      N32/N64 only the variable portion is subject to this rule). This
      agrees with GCC's behaviour and the SYSV ABI, but contradicts part
      of the MIPSpro N32 Handbook which talks about O32's behaviour.
    * The N32 implementation has the wrong callee-saved register list. (I
      already have a fix for this but will commit it as a follow-up.)

    I've left RUN-TODO lines in for O32 on MIPS64. I don't plan to support
    this case for now, but we should revisit it.

    Reviewers: matheusalmeida, vmedic

    Reviewed By: matheusalmeida

    Differential Revision: http://reviews.llvm.org/D3339
llvm-svn: 206370
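To make the O32 varargs rule concrete, a hypothetical fragment (register assignment per the ABI description above; not one of the added tests):

```llvm
; Under O32 the double is in the fixed portion of a varargs call, yet it
; is still passed in GPRs (the even-aligned pair $6/$7), not in FPU
; registers.
declare void @take_varargs(i32, double, ...)

define void @caller() {
  call void (i32, double, ...) @take_varargs(i32 1, double 2.0, i32 3)
  ret void
}
```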