Before this change a memory reference identifier had the form:
<STMT>_<ACCESSTYPE><ID>_<MEMREF>, e.g., Stmt_bb9_Write0_MemRef_tmp11
After this change, we use the format:
<STMT>_<ACCESSTYPE><ID>, e.g., Stmt_bb9_Write0
The name of the array that is accessed through a memory reference is not
necessary to uniquely identify a memory reference, but was only added to
provide additional information for debugging. We drop this information now
for the following two reasons:
1) This shortens the names and consequently improves readability
2) This removes a second location where we decide on the name of a scop array,
leaving us only with the location where the actual scop array is created.
After 2), having only a single location to name scop arrays will allow us to
change the naming convention of scop arrays more easily, which we will do in a
future commit to reduce compilation time.
llvm-svn: 302004
When we introduced support in r297375 for hoisting loads that are known to be
dereferenceable without any conditional guard, we forgot to keep the check that
verifies that no other write into the very same location exists. This change
now ensures that dereferenceable loads are still allowed to access everything,
but are only hoisted in case no conflicting write exists.
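As an illustration, consider the following hedged sketch (the function and
variable names are made up, and we assume *cnt is known to be dereferenceable,
e.g., because it points into a local allocation): the load of *cnt could be
hoisted without an execution guard, but the write in the same loop must
prevent the hoisting.
void f(long *A, long *cnt, long n) {
  for (long i = 0; i < n; i++) {
    A[i] = *cnt;     /* load of a location assumed known-dereferenceable */
    *cnt = *cnt + 1; /* conflicting write to the very same location */
  }
}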
This resolves llvm.org/PR32778
Reported-by: Huihui Zhang <huihuiz@codeaurora.org>
llvm-svn: 301582
The AssumptionCache removal of r289756 has been reverted in
r290086/r290087. A different solution has been implemented in r291671
which keeps the AssumptionCache. We can therefore use it again in Polly.
This reverts r289791.
llvm-svn: 298089
With the previous defaults, ScopInfo applied the profitability heuristic for
scalar accesses (-polly-unprofitable-scalar-accs=true) and the
-polly-prune-unprofitable pass was disabled by default
(-polly-enable-prune-unprofitable=false), as that pruning had already been done.
This change switches the defaults to -polly-unprofitable-scalar-accs=false and
-polly-enable-prune-unprofitable=true such that the scalar access
heuristic check is done by the PruneUnprofitable pass. This allows passes
between ScopInfo and PruneUnprofitable to optimize away scalar accesses.
Without enabling such intermediate passes, there is no change in
behaviour of profitability checks in a PassManagerBuilder built
pass chain, but it allows us to cover this configuration with the
buildbots.
Suggested-by: Tobias Grosser <tobias@grosser.es>
llvm-svn: 298081
ScopInfo's normal profitability heuristic considers SCoPs where all
statements have scalar writes as not profitably optimizable and
invalidates the SCoP in that case. However, -polly-delicm and
-polly-simplify may be able to remove some of the scalar writes such
that the flag -polly-unprofitable-scalar-accs=false allows disabling
that part of the heuristic.
In cases where DeLICM (or other passes after ScopInfo) are not
successful in removing scalar writes, the SCoP is still not profitably
optimizable. The schedule optimizer would again try computing another
schedule, resulting in slower compilation.
The -polly-prune-unprofitable pass applies the profitability heuristic
again before the schedule optimizer, so that Polly can still bail out even with
-polly-unprofitable-scalar-accs=false.
Differential Revision: https://reviews.llvm.org/D31033
llvm-svn: 298080
For experiments it is sometimes helpful to provide parameter bound information
to Polly and to not use these parameter bounds for simplification.
Add a new option "-polly-ignore-parameter-bounds" which does precisely this.
llvm-svn: 298077
For experiments it is sometimes helpful to not take any inbounds assumptions.
Add a new option "-polly-ignore-inbounds" which does precisely this.
llvm-svn: 298073
In subsequent changes we will make Polly a little bit more lazy in adding
parameter dimensions to different sets. As a result, not all parameters will
always be part of the parameter space. This change ensures that we do not use
the '-1' returned when a parameter dimension cannot be found, but instead
simply do not try to eliminate a dimension that does not exist.
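A minimal sketch of this pattern using the isl C API (the helper is
hypothetical, not the actual Polly code):
#include <isl/set.h>

/* Only eliminate the parameter if it is actually part of the space;
   isl_set_find_dim_by_id returns -1 when the dimension cannot be found. */
static __isl_give isl_set *drop_param_if_present(__isl_take isl_set *Set,
                                                 __isl_keep isl_id *Id) {
  int Pos = isl_set_find_dim_by_id(Set, isl_dim_param, Id);
  if (Pos < 0)
    return Set; /* parameter not present; nothing to eliminate */
  return isl_set_project_out(Set, isl_dim_param, Pos, 1);
}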
llvm-svn: 298054
For several years now, isl has been able to perform most operations on sets
with differing parameter spaces by expanding the parameter space on demand,
relying on named isl ids to distinguish different parameter dimensions.
By not always expanding to full dimensionality, the sets remain smaller and can
likely be operated on faster. This change by itself did not yet result in
measurable performance benefits, but it is a step into the right direction
needed to ensure that subsequent changes indeed can work with lower-dimensional
sets and these sets do not get blown up by accident when later intersected with
the domain context.
llvm-svn: 298053
Introduce ScopStmt::getSurroundingLoop() to replace getFirstNonBoxedLoopFor.
getSurroundingLoop() returns the precomputed surrounding/first non-boxed
loop. Except in ScopDetection, the list of boxed loops is only used to
get the surrounding loop. getFirstNonBoxedLoopFor also requires LoopInfo
at every use, which is not necessarily available everywhere we may want to
use it.
Differential Revision: https://reviews.llvm.org/D30985
llvm-svn: 297899
This new pass removes unnecessary accesses and writes. It currently
supports 2 simplifications, but more are planned.
It removes write accesses that write a loaded value back to the location
it was loaded from. It is a typical artifact from DeLICM. Removing it
will get rid of bogus dependencies later in dependency analysis.
It also removes statements without side-effects. ScopInfo already
removes these, but the removal of unnecessary writes can result in
more side-effect free statements.
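A hedged example of the first simplification (illustrative code, not an actual
test case): after DeLICM, a statement may store the very value it just loaded
back to the same location.
void f(double *A, double *B, long n) {
  for (long i = 0; i < n; i++) {
    double tmp = A[i]; /* load */
    B[i] = 2.0 * tmp;
    A[i] = tmp;        /* stores the just-loaded value back; removable */
  }
}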
Differential Revision: https://reviews.llvm.org/D30820
llvm-svn: 297473
In case LLVM pointers are annotated with !dereferenceable attributes/metadata
or LLVM can look at the allocation from which a pointer is derived, we know
that dereferencing these pointers is safe and can be done unconditionally. We
use this information to prove certain pointers safe to hoist and then hoist
them unconditionally.
llvm-svn: 297375
Our current scop modeling enters an infinite loop when trying to model code
that has unreachable instructions (e.g.,
test/ScopInfo/BoundChecks/single-loop.ll), as the number of basic blocks
returned by the LLVM Loop* does not include unreachable basic blocks that
branch off from the core loop body. This arises for example in the following
piece of code:
for (i = 0; i < N; i++) {
  if (i > 1024)
    abort();   <- this abort might be translated to an unreachable
  A[i] = ...
}
This patch adds these unreachable basic blocks in our per loop basic block
count to ensure that the schedule construction does not assume a loop has been
processed completely, despite certain unreachable basic blocks still remaining.
The infinite loop is only observable in combination with
https://reviews.llvm.org/D12676 or a similar patch.
llvm-svn: 297156
Multi-disjunct access maps can easily result in inbounds assumptions which
explode in case of many memory accesses and many parameters. This change
reduces the compilation time of a larger kernel from over 15 minutes to less
than 16 seconds.
An interesting test case is test/ScopInfo/multidim_param_in_subscript.ll,
which has the memory access
[n] -> { Stmt_for_body3[i0, i1] -> MemRef_A[i0, -1 + n - i1] }
that requires folding, but where only a single disjunct remains. We can still
model this test case even when using only limited memory folding.
For people only reading commit messages, here is the comment that explains
what memory folding is:
To recover memory accesses with array size parameters in the subscript
expression we post-process the delinearization results.
We would normally recover from an access A[exp0(i) * N + exp1(i)] into an
array A[][N] the 2D access A[exp0(i)][exp1(i)]. However, another valid
delinearization is A[exp0(i) - 1][exp1(i) + N] which - depending on the
range of exp1(i) - may be preferable. Specifically, for cases where we
know exp1(i) is negative, we want to choose the latter expression.
As we commonly do not have any information about the range of exp1(i),
we do not choose one of the two options, but instead create a piecewise
access function that adds the (-1, N) offsets as soon as exp1(i) becomes
negative. For a 2D array such an access function is created by applying
the piecewise map:
[i,j] -> [i, j] : j >= 0
[i,j] -> [i-1, j+N] : j < 0
After this patch we generate only the first case, except for situations where
we can prove the first case to be invalid and can consequently select the
second without introducing disjuncts.
llvm-svn: 296679
Without this simplification for a loop nest:
void foo(long n1_a, long n1_b, long n1_c, long n1_d,
         long p1_b, long p1_c, long p1_d,
         float A_1[][p1_b][p1_c][p1_d]) {
  for (long i = 0; i < n1_a; i++)
    for (long j = 0; j < n1_b; j++)
      for (long k = 0; k < n1_c; k++)
        for (long l = 0; l < n1_d; l++)
          A_1[i][j][k][l] += i + j + k + l;
}
the assumption:
n1_a <= 0 or (n1_a > 0 and n1_b <= 0) or
(n1_a > 0 and n1_b > 0 and n1_c <= 0) or
(n1_a > 0 and n1_b > 0 and n1_c > 0 and n1_d <= 0) or
(n1_a > 0 and n1_b > 0 and n1_c > 0 and n1_d > 0 and
p1_b >= n1_b and p1_c >= n1_c and p1_d >= n1_d)
is taken rather than the simpler assumption:
p1_b >= n1_b and p1_c >= n1_c and p1_d >= n1_d.
The former is less strict, as it allows arbitrary values of p1_* in case the
loop is not executed at all. However, in practice these precise constraints
explode when combined across different accesses and loops. For now it seems
to make more sense to take less precise, but more scalable constraints by
default. In case we find a practical example where more precise constraints
are needed, we can think about allowing such precise constraints in specific
situations where they help.
This change speeds up the new test case from taking very long (at least a
minute, probably a lot more) to below a second.
llvm-svn: 296456
Instead of counting the number of read-only accesses, we now count the number of
distinct read-only array references when checking if a run-time alias check
may be too complex. The run-time alias check is quadratic in the number of
base pointers, not the number of accesses.
Before this change we accidentally skipped SPEC's lbm test case.
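A hedged illustration (made-up code, not lbm itself): the loop below performs
four read-only accesses per iteration, but they reference only two distinct
read-only arrays, so together with the output array the quadratic run-time
alias check only has to consider three base pointers.
void f(double *Out, const double *A, const double *B, long n) {
  for (long i = 0; i < n - 1; i++)
    Out[i] = A[i] + A[i + 1] + B[i] + B[i + 1];
}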
llvm-svn: 295567
Trying to fold such dimensions results in a division by zero,
which crashes the compiler. As such arrays are likely to invalidate the
scop anyhow (but are not illegal in LLVM-IR), there is no point in trying
to optimize the array layout. Hence, we just avoid the folding of
constant dimensions of size zero.
llvm-svn: 295415
Before this change wrapping range metadata resulted in exponential growth of
the context, which made context construction of large scops very slow. Instead,
we now just do not model the range information precisely, in case the number
of disjuncts in the context has already reached a certain limit.
llvm-svn: 295360
Commit r230230 introduced the use of range metadata to derive bounds for
parameters, instead of just looking at the type of the parameter. As part of
this commit support for wrapping ranges was added, where the lower bound of a
parameter is larger than the upper bound:
{ 255 < p || p < 0 }
However, at the same time, for wrapping ranges support for adding bounds given
by the size of the containing type has accidentally been dropped. As a result,
the range of the parameters was not guaranteed to be bounded any more. This
change makes sure we always add the bounds given by the size of the type and
then additionally add bounds based on signed wrapping, if available. For a
parameter p with a type size of 32 bit, the valid range is then:
{ -2147483648 <= p <= 2147483647 and (255 < p or p < 0) }
llvm-svn: 295349
The range of valid values derived for a scalar evolution expression might
be a range [12, 8), where the upper bound is smaller than the lower bound and
where the range is expected to possibly wrap around. We theoretically could
model such a range as a union of two non-wrapping ranges, but do not do this
as of yet. Instead, we just do not derive any bounds. Before this change,
we could have obtained bounds where the maximal possible value is strictly
smaller than the minimal possible value, which is incorrect and also caused
assertions during scop modeling.
llvm-svn: 294891
This change clarifies that we indeed want to use the original base address
when creating the ScopArrayInfo that corresponds to a given memory access.
This change prepares for https://reviews.llvm.org/D28518.
llvm-svn: 294734
This replaces the use of getOriginalAddrPtr, a value that is stored in
ScopArrayInfo and might at some point not be unique any more. However, the
access value is defined to be unique.
This change is an update on r294576, which only clarified that we need the
original memory access, but where we still depended on having one base
pointer per scop.
This change removes unnecessary uses of MemoryAccess::getOriginalBaseAddr() in
preparation for https://reviews.llvm.org/D28518.
llvm-svn: 294733
By using the public interface MemoryAccess::getScopArrayInfo() we avoid the
direct access to the ScopArrayInfoMap and as a result also do not need to
use the BasePtr as key. This change makes the code cleaner.
The const-cast we introduce is a little ugly. We may consider dropping const
correctness for getScopArrayInfo() at some point.
This change removes unnecessary uses of MemoryAccess::getBaseAddr() in
preparation for https://reviews.llvm.org/D28518.
llvm-svn: 294655
LLVM's coding conventions suggest using auto only in obvious cases. Hence, we
change this code to actually declare the types used. We also replace the
variable name 'SAI' with the name 'Array', as this improves readability.
llvm-svn: 294654
When building alias groups, we sort different ScopArrays into unrelated groups.
Historically we identified arrays through their base pointer, as no
ScopArrayInfo class was yet available. This change updates the alias group
construction to reference arrays through their ScopArrayInfo object.
This change removes unnecessary uses of MemoryAccess::getBaseAddr() in
preparation for https://reviews.llvm.org/D28518.
llvm-svn: 294649
During SCoP construction we sometimes inspect the underlying IR by looking at
the base address of a MemoryAccess. In such cases, we always want the original
base address. Make this clear by calling getOriginalBaseAddr().
This is a non-functional change as getBaseAddr maps to getOriginalBaseAddr
at the moment.
This change removes unnecessary uses of MemoryAccess::getBaseAddr() in
preparation for https://reviews.llvm.org/D28518.
llvm-svn: 294576
The base address of a memory access is already an llvm::Value. Hence, there is
no need to go through SCEV, but we can directly work with the llvm::Value.
Also use 'Value *' instead of 'auto' for cases where the type is not obvious.
llvm-svn: 294575
Summary:
Instead of forbidding such access functions completely, we verify that their
base pointer has been hoisted and only assert in case the base pointer was
not hoisted.
I was trying for a little while to get a test case that ensures the assert is
correctly fired in case of invariant load hoisting being disabled, but I could
not find a good way to do so, as llvm-lit immediately aborts if a command
yields a non-zero return value. As we do not generally test our asserts,
not having a test case here seems OK.
This resolves http://llvm.org/PR31494
Suggested-by: Michael Kruse <llvm@meinersbur.de>
Reviewers: efriedma, jdoerfert, Meinersbur, gareevroman, sebpop, zinob, huihuiz, pollydev
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D28798
llvm-svn: 292213
Move the function getFirstNonBoxedLoopFor, which is used in ScopBuilder
and in ScopInfo, to Support/ScopHelpers to make it reusable in other
locations. No functionality change.
Patch by Sameer Abu Asal.
Differential Revision: https://reviews.llvm.org/D28754
llvm-svn: 292168
Before this change, this code has been mixed with a check for non-affine
loops (and, when originally introduced, was also duplicated). By creating
a separate loop and explicitly documenting this property, the current
behavior becomes a lot more clear.
llvm-svn: 292140
The loop body in buildAliasGroups is still too large to easily scan it. Hence,
we split the loop body out into a separate function to improve readability.
llvm-svn: 292138
Instead of modifying the original alias group and repurposing it as read-write
access group when splitting accesses in read-only and read-write accesses, we
just keep all three groups: the original alias group, the set of read-only
accesses and the set of read-write accesses. This allows us to remove some
complicated iterator handling and also allows for more code-reuse in
calculateMinMaxAccess.
llvm-svn: 292137
It seems over time we added an additional map that maps from the base address
of a read-only access to the actual access. However this map is never used.
Drop the creation and use of this map to simplify our alias check generation
code.
llvm-svn: 292126
The alias group will anyhow be cleared at the end of this function and is not
used afterwards. We avoid an explicit clear() call at multiple places to
improve readability of this code.
llvm-svn: 292125
Hoisting small vectors out of a loop seems to be a pure performance
optimization, which is unlikely to have great impact in practice. As this
hoisting just increases code-complexity, we fold the SmallVectors back into
the loop.
In subsequent commits, we will further simplify and structure this code, but
we committed this change separately to provide an explanation to make clear
that we purposefully reverted this optimization.
llvm-svn: 292122
The function buildAliasGroups got very large. We extract out the splitting
of alias groups to reduce its size and to better document the current behavior.
llvm-svn: 292121
The function buildAliasGroups got very large. We extract out the actual
construction of alias groups to reduce its size and to better document the
current behavior.
llvm-svn: 292120
To benefit from the type safety guarantees of C++11 typed enums, which would have
caught the type mismatch fixed in r291960, we make MemoryKind a typed enum.
This change also allows us to drop the 'MK_' prefix and to instead use the more
descriptive full name of the enum as prefix. To reduce the amount of typing
needed, we use this opportunity to move MemoryKind from ScopArrayInfo to a
global scope, which means the ScopArrayInfo:: prefix is not needed. This move
also makes sense historically. In the beginning of Polly we had different
MemoryKind enums in both MemoryAccess and ScopArrayInfo, which were later
canonicalized to one. During this canonicalization we just chose the enum in
ScopArrayInfo, but did not consider moving this shared enum to global scope.
Reviewed-by: Michael Kruse <llvm@meinersbur.de>
Differential Revision: https://reviews.llvm.org/D28090
llvm-svn: 292030
The AssumptionCache was removed in r289756 after being replaced by an
additional operand list of affected values in r289755. The absence of that
cache means that we now have to manually search for llvm.assume intrinsics, as
other passes (LazyValueInfo, CodeMetrics) now do, or take an
llvm::Instruction's user lists into account (as ScalarEvolution does).
llvm-svn: 289791
This allows us to delinearize code such as the one below, where the array
sizes are A[][2 * n] as there are n times two elements in the innermost
dimension. Alternatively, we could try to generate another dimension for the
struct in the innermost dimension, but as the struct has constant size,
recovering this dimension is easy.
struct com {
  double Real;
  double Img;
};

void foo(long n, struct com A[][n]) {
  for (long i = 0; i < 100; i++)
    for (long j = 0; j < 1000; j++)
      A[i][j].Real += A[i][j].Img;
}

int main() {
  struct com A[100][1000];
  foo(1000, A);
}
llvm-svn: 288489
After having built memory accesses we perform some additional transformations
on them to increase the chances that our delinearization guesses the right
shape. Only after these transformations do we take the assumption that the
array shape we predict is such that no out-of-bounds memory accesses arise.
Before this change, the construction of the memory access, the access folding
that improves the representation for certain parametric subscripts, and taking
the assumption were all done right after a memory access was created. In this
change we split this now into three separate iterations over all memory
accesses. This means that only after all memory accesses have been built do we
start to canonicalize accesses and take assumptions. This split prepares for
future canonicalizations that must consider all memory accesses for deriving
additional beneficial transformations.
llvm-svn: 288479
Feasibility is checked late on its own, but the early check is hidden behind
the "PollyProcessUnprofitable" guard. This change makes sure we opt
out early if the runtime context is infeasible anyway.
llvm-svn: 288329
Since we do not necessarily treat memory intrinsics as non-affine
anymore, we have to check for them explicitly before we try to hoist an
access.
llvm-svn: 287270
Assumptions can either be added for a given basic block, in which case the set
describing the assumptions is expected to match the dimensions of its domain.
In case no basic block is provided a parameter-only set is expected to describe
the assumption.
The piecewise expressions that are generated by the SCEVAffinator sometimes
have a zero-dimensional domain (e.g., [p] -> { [] : p <= -129 or p >= 128 }),
which looks similar to a parameter-only domain, but is still a set domain.
This change adds an assert that checks that we always pass parameter domains to
addAssumptions if BB is null, to make mismatches here fail early.
We also change visitTruncExpr to always convert to parameter sets, if BB is
null. This change resolves http://llvm.org/PR30941
Another alternative to this change would have been to inspect all code to make
sure we directly generate in the SCEV affinator parameter sets in case of empty
domains. However, this would likely complicate the code which combines parameter
and non-parameter domains when constructing a statement domain. We might still
consider doing this at some point, but as this likely requires several non-local
changes this should probably be done as a separate refactoring.
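A minimal sketch of such a conversion using the isl C API (the helper name is
made up and the actual code may differ):
#include <isl/set.h>

static __isl_give isl_set *as_param_set(__isl_take isl_set *Set) {
  /* A zero-dimensional set such as [p] -> { [] : p >= 128 } still carries a
     (0-dimensional) set tuple; isl_set_params drops the tuple and yields the
     parameter-only set expected by addAssumptions when BB is null. */
  if (isl_set_dim(Set, isl_dim_set) == 0)
    return isl_set_params(Set);
  return Set;
}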
Reported-by: Eli Friedman <efriedma@codeaurora.org>
llvm-svn: 286444
In r248701 "Allow switch instructions in SCoPs" support for switch statements
was introduced, but support for switch statements in loop latches was
incomplete. This change completely disables switch statements in loop latches.
The original commit changed addLoopBoundsToHeaderDomain to support non-branch
terminator instructions, but this change was incorrect: it added a check for
BI != null to the if-branch of a condition, but BI was used in the else branch
as well. As a result, when a non-branch terminator instruction is encountered, a
nullptr dereference is triggered. Due to missing test coverage, this bug was
overlooked.
r249273 "[FIX] Approximate non-affine loops correctly" added code to disallow
switch statements for non-affine loops, if they appear in either a loop latch
or a loop exit. We adapt this code to now prohibit switch statements in
loop latches even if the control condition is affine.
We could possibly add support for switch statements in loop latches, but such
support should be evaluated and tested separately.
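A hypothetical sketch (not the original test case from PR30952) of source code
whose loop latch may end in a switch terminator rather than a conditional
branch:
void f(long *A) {
  long i = 0;
loop:
  A[i] = i;
  switch (++i) {  /* this switch terminates the loop's latch block */
  default:
    goto loop;    /* back edge */
  case 1024:
    break;        /* loop exit */
  }
}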
This fixes llvm.org/PR30952
Reported-by: Eli Friedman <efriedma@codeaurora.org>
llvm-svn: 286426
Add asserts that verify that the memory accesses of a new copy statement
are defined for all domain instances the copy statement is defined for.
llvm-svn: 286047
We don't actually check whether a MemoryAccess is affine in very many
places, but one important one is in checks for aliasing.
Differential Revision: https://reviews.llvm.org/D25706
llvm-svn: 285746
When adding an llvm.memcpy instruction to AliasSetTracker, it uses the raw
source and target pointers which preserve bitcasts.
MemAccInst::getPointerOperand() also returns the raw target pointer, but
Scop::buildAliasGroups() did not use the raw pointer for the source. This led
to mismatches
between AliasSetTracker and ScopInfo on which pointer to use.
Fixed by also using raw pointers in Scop::buildAliasGroups().
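A hedged illustration (made-up code): with typed pointers, both memcpy
operands appear in the IR as i8* values produced by bitcasts, and
AliasSetTracker records exactly those raw pointers.
#include <string.h>

void f(float *Dst, float *Src, unsigned long n) {
  /* In the IR this memcpy takes i8* operands created by bitcasts of Dst and
     Src; these raw (bitcast) pointers are what AliasSetTracker sees. */
  memcpy(Dst, Src, n * sizeof(float));
}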
llvm-svn: 285071
Summary: Otherwise the lack of an iteration order results in non-determinism in codegen.
Reviewers: _jdoerfert, zinob, grosser
Tags: #polly
Differential Revision: https://reviews.llvm.org/D25863
llvm-svn: 284845
With this option one can disable the heuristic that assumes that statements
with a scalar write access cannot be profitably optimized. Such statement
instances necessarily have WAW-dependences to themselves. With DeLICM, scalar
accesses can be changed to array accesses, which can avoid these
WAW-dependences.
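A hedged sketch of the kind of code this concerns (made up, not a specific
test case): the scalar x is written in every iteration of the outer loop,
which the polyhedral model represents as a scalar write access; DeLICM may map
x onto A[i] instead.
void f(double *A, double B[][64], long n) {
  for (long i = 0; i < n; i++) {
    double x = 0.0;              /* scalar write in every outer iteration */
    for (long j = 0; j < 64; j++)
      x += B[i][j];
    A[i] = x;                    /* DeLICM can map x onto A[i] instead */
  }
}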
llvm-svn: 283233
ScopArrayInfo used to determine base pointer origins by looking up whether the
base pointer is a load. The "base pointer" for scalar accesses is the
llvm::Value being accessed. This is only a symbolic base pointer; it
represents the alloca variable (.s2a or .phiops) generated for it at code
generation.
This patch disables determining base pointer origin for scalars.
A test case where this caused a crash will be added in the next commit. In that
test SAI tried to get the origin base pointer that was only declared later and
therefore did not yet exist. This is probably only possible for scalars used in
PHINode incoming blocks.
llvm-svn: 283232
This is the fourth patch to apply the BLIS matmul optimization pattern on matmul
kernels (http://www.cs.utexas.edu/users/flame/pubs/TOMS-BLIS-Analytical.pdf).
BLIS implements gemm as three nested loops around a macro-kernel, plus two
packing routines. The macro-kernel is implemented in terms of two additional
loops around a micro-kernel. The micro-kernel is a loop around a rank-1
(i.e., outer product) update. In this change we perform copying to created
arrays, which is the last step to implement the packing transformation.
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D23260
llvm-svn: 281441
The alias to the array element is read-only and a primitive type (pointer),
therefore use the value directly instead of a reference to it.
llvm-svn: 281311
We do not need the size of the outermost dimension in most cases, but if we
allocate memory for newly created arrays, that size is needed.
Reviewed-by: Michael Kruse <llvm@meinersbur.de>
Differential Revision: https://reviews.llvm.org/D23991
llvm-svn: 281234
When running the clang static analyser to check for memory issues, this code
originally showed a double free, as the analyser was unable to understand that
isl_set_free always returns NULL and consequently later uses of the isl object
we just freed will never be reached. Without this knowledge, the analyser has
to issue a warning.
We refactor the code to make it clear that for empty maps the current loop
iteration is aborted.
llvm-svn: 280940
... but instead rely on the assumptions that we derive for load/store
instructions.
Before we were able to delinearize arrays, we used GEP pointer instructions
to derive information about the likely range of induction variables, which
gave us more freedom during loop scheduling. Today, this is not needed
any more as we delinearize multi-dimensional memory accesses and as part
of this process also "assume" that all accesses to these arrays remain
inbounds. The old derive-assumptions-from-GEP code has consequently become
mostly redundant. We drop it both to clean up our code, but also to improve
compile time. This change reduces the scop construction time for 3mm in
no-asserts mode on my machine from 48 to 37 ms.
llvm-svn: 280601
LLVM's coding guideline suggests not using @brief for one-sentence doxygen
comments, to improve readability. Switch this once and for all to ensure people
do not copy @brief comments from other parts of Polly when writing new code.
llvm-svn: 280468
Change the code around setNewAccessRelation to allow using an existing array
element for memory instead of an ad-hoc alloca. This facility will be used for
DeLICM/DeGVN to convert scalar dependencies into regular ones.
The changes necessary include:
- Make the code generator use the implicit locations instead of the alloca ones.
- A test case
- Make the JScop importer accept changes of scalar accesses for that test case.
- Adapt the MemoryAccess interface to the fact that the MemoryKind can change.
They are named (get|is)OriginalXXX() to get the status of the memory access
before any change by setNewAccessRelation() (some properties such as
getIncoming() do not change even if the kind is changed and are still
required). To get the modified properties, there is (get|is)LatestXXX(). The
old accessors without Original|Latest become synonyms of the
(get|is)OriginalXXX() to not make functional changes in unrelated code.
Differential Revision: https://reviews.llvm.org/D23962
llvm-svn: 280408
There are some constraints on maps that can be access relations. In builds with assertions enabled, verify
- The access domain is the same space as the statement's domain (modulo parameters).
- Whether an access is defined for every instance of the statement. (codegen does not yet support partial access relations)
- Whether the access range links to an array, represented by a ScopArrayInfo.
- The number of access dimensions equals the dimensions of the array.
- The array is not an indirect access. (also not supported by codegen)
Differential Revision: https://reviews.llvm.org/D23916
llvm-svn: 280404
getAccessFunctions() is dead code and the 'BB' argument
of getOrCreateAccessFunctions() is not used. This patch deletes
getAccessFunctions and transforms AccFuncMap into
a std::vector<std::unique_ptr<MemoryAccess>> AccessFunctions.
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D23759
llvm-svn: 279394
Extend the jscop interface to allow the user to export arrays. It is required
that the already existing arrays in the list of arrays correspond to arrays
of the SCoP. Each array that is appended to the list will be newly created.
Furthermore, we allow the user to modify access expressions to reference
any array in case it has the same element type.
Reviewed-by: Tobias Grosser <tobias@grosser.es>
Differential Revision: https://reviews.llvm.org/D22828
llvm-svn: 277263
Do not process SCoPs with infeasible runtime context in the new
ScopInfoWrapperPass. Do not compute dependences for such SCoPs in the new
DependenceInfoWrapperPass.
Patch by Utpal Bora <cs14mtech11017@iith.ac.in>
Differential Revision: https://reviews.llvm.org/D22402
llvm-svn: 276631
Commit r275056 introduced a gcc compile failure due to us using two
types named 'Type', the first being the newly introduced member variable
'Type', the second being llvm::Type. We resolve this issue by renaming
the newly introduced member variable to AccessType.
llvm-svn: 275057
Summary:
With a struct we can use named accessors instead of generic std::get<3>()
calls. This increases readability of the source code.
Reviewers: jdoerfert
Subscribers: pollydev, llvm-commits
Differential Revision: http://reviews.llvm.org/D21955
llvm-svn: 275056
We now compute the invalid context of memory accesses only for the domain under
which the memory access is executed. Without limiting ourselves to this
restricted domain, invalid accesses outside of the domain of actually executed
statement instances may result in the execution domain of the statement
becoming empty despite the fact that the statement will actually be executed.
As a result, such scops would use uninitialized values for their computations,
which results in incorrect computations.
This fixes http://llvm.org/PR27944 and unbreaks the
-polly-position=before-vectorizer buildbots.
llvm-svn: 275053
For LLVM, the memory accesses from non-affine loops should be visible;
however, for Polly those non-affine loops should be invisible/boxed.
This fixes llvm.org/PR28245
Contributed-by: Huihui Zhang <huihuiz@codeaurora.org>
Differential Revision: http://reviews.llvm.org/D21591
llvm-svn: 274842
It is only used internally by the ScopInfo pass. By moving it into its
own header file we avoid it being processed by files that use only ScopInfo.
llvm-svn: 273983
The methods in ScopBuilder are used for the construction of a Scop,
while the remaining classes of ScopInfo are required by all passes that
use Polly's polyhedral analysis.
llvm-svn: 273982
This function is used by both ScopInfo and ScopBuilder. A common
location for this function is required when ScopInfo and ScopBuilder are
separated into separate files in the next commit.
llvm-svn: 273981
This patch adds a new function pass ScopInfoWrapperPass so that the
polyhedral description of a region, the SCoP, can be constructed and
used in a function pass.
Patch by Utpal Bora <cs14mtech11017@iith.ac.in>
Differential Revision: http://reviews.llvm.org/D20962
llvm-svn: 273856
1. The SCoP object is not owned by ScopBuilder. It just creates a SCoP and
hands over ownership through the getScop() method.
2. ScopInfoRegionPass owns the SCoP object for a given region.
Patch by Utpal Bora <cs14mtech11017@iith.ac.in>
Differential Revision: http://reviews.llvm.org/D20912
llvm-svn: 273855