Commit Graph

177 Commits

Alexey Bataev 93a38d60be [OPENMP][NVPTX]Increment iterator only when it is used, NFC.
llvm-svn: 344574
2018-10-16 00:09:06 +00:00
Alexey Bataev 4ac58d1a4b [OPENMP][NVPTX]Reduce memory usage in target region.
Additional reduction of the global memory usage in the target regions
without parallel regions.

llvm-svn: 344413
2018-10-12 20:19:59 +00:00
Alexey Bataev 9bfe91da3d [OPENMP][NVPTX]Reduce memory usage in orphaned functions.
If the function has globalized variables and is called in the context of
target/teams/distribute regions, it does not need to globalize 32
copies of the same variables for memory coalescing; it is enough to
have just one copy, because there is no parallel region.
The patch does this by adding a call to the `__kmpc_parallel_level` function
and checking its return value. If the code sees that the parallel level is
0, then only one variable is allocated, not 32.

llvm-svn: 344356
2018-10-12 16:04:20 +00:00
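A minimal sketch of the check described above, assuming the two-argument `__kmpc_parallel_level(loc, gtid)` entry point shown elsewhere in this log; the helper name and the simplified declaration are invented for illustration, this is not the actual generated IR:

```
/* Declaration simplified for illustration. */
extern int __kmpc_parallel_level(void *loc, int gtid);

/* At parallel level 0 a single copy of the globalized variable is enough;
   inside a parallel region 32 lane copies are kept for memory coalescing. */
int num_globalized_copies(void *loc, int gtid) {
  return __kmpc_parallel_level(loc, gtid) == 0 ? 1 : 32;
}
```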
Alexey Bataev ff23bb6622 [OPENMP][NVPTX]Reduce memory use for globalized vars in
target/teams/distribute regions.

The previously introduced globalization scheme that uses memory coalescing
may increase memory usage for the variables that are declared in
target/teams/distribute contexts. We don't need 32 copies of such
variables, just one. The patch reduces memory use in this case.

llvm-svn: 344273
2018-10-11 18:30:31 +00:00
Alexey Bataev 9ea3c38597 [OPENMP][NVPTX] Support memory coalescing for globalized variables.
Added support for memory coalescing to improve performance for
globalized variables. From now on, all globalized variables are
represented as arrays of 32 elements, and each thread accesses these
elements using `tid & 31` as the index.

llvm-svn: 344049
2018-10-09 14:49:00 +00:00
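A minimal sketch of the layout this describes; the struct and function names are invented, not the actual codegen:

```
/* Each globalized variable becomes an array of 32 slots; every thread picks
   its slot with `tid & 31`, so consecutive lanes of a warp touch consecutive
   addresses and the accesses coalesce. */
struct GlobalizedVar {
  int slots[32]; /* one copy per lane of a warp */
};

int *private_copy(struct GlobalizedVar *g, int tid) {
  return &g->slots[tid & 31];
}
```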
Alexey Bataev 6bc2732f71 [OPENMP][NVPTX] Fix emission of __kmpc_global_thread_num() for non-SPMD
mode.

__kmpc_global_thread_num() should be called before initialization of the
runtime.

llvm-svn: 343857
2018-10-05 15:27:47 +00:00
Alexey Bataev fd006c44ee [OPENMP] Fix emission of the __kmpc_global_thread_num.
Fixed emission of __kmpc_global_thread_num() so that it is no longer
interleaved with alloca instructions. Also fixes emission of the
__kmpc_global_thread_num() calls in the target outlined regions so
that they are not made before the runtime is initialized.

llvm-svn: 343856
2018-10-05 15:08:53 +00:00
Jonas Hahnfeld 3ca4701d35 [OpenMP][NVPTX] Simplify codegen for orphaned parallel, NFCI.
Worker threads fork off to the compiler generated worker function
directly after entering the kernel function. Hence, there is no
need to check whether the current thread is the master if we are
outside of a parallel region (neither SPMD nor parallel_level > 0).

Differential Revision: https://reviews.llvm.org/D52732

llvm-svn: 343618
2018-10-02 19:12:54 +00:00
Alexey Bataev e79451a517 [OPENMP][NVPTX] Handle `requires datasharing` flag correctly with
lightweight runtime.

The datasharing flag must be set to `1` when executing an SPMD-mode compatible directive with reduction or lastprivate clauses.

llvm-svn: 343492
2018-10-01 16:20:57 +00:00
Alexey Bataev 4da1d5c33f [OPENMP] Simplify code, NFC.
llvm-svn: 343483
2018-10-01 14:40:06 +00:00
Gheorghe-Teodor Bercea 8233af90e1 [OpenMP] Make default parallel for schedule in NVPTX target regions in SPMD mode achieve coalescing
Summary: Set default schedule for parallel for loops to schedule(static, 1) when using SPMD mode on the NVPTX device offloading toolchain to ensure coalescing.

Reviewers: ABataev, Hahnfeld, caomhin

Reviewed By: ABataev

Subscribers: jholewinski, guansong, cfe-commits

Differential Revision: https://reviews.llvm.org/D52629

llvm-svn: 343260
2018-09-27 20:29:00 +00:00
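Roughly what the new default is equivalent to writing by hand; the loop and clauses below are illustrative, not taken from the patch:

```
/* In SPMD mode on the NVPTX toolchain the loop is now scheduled as if
   schedule(static, 1) were specified: consecutive threads execute
   consecutive iterations, so their accesses to a[i] coalesce. */
void scale(float *a, int n, float s) {
  #pragma omp target teams distribute parallel for schedule(static, 1) \
      map(tofrom: a[0:n])
  for (int i = 0; i < n; ++i)
    a[i] *= s;
}
```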
Gheorghe-Teodor Bercea 02650d4c2c [OpenMP] Make default distribute schedule for NVPTX target regions in SPMD mode achieve coalescing
Summary: For the OpenMP NVPTX toolchain choose a default distribute schedule that ensures coalescing on the GPU when in SPMD mode. This significantly increases the performance of offloaded target code and reduces the number of registers used on the GPU side.

Reviewers: ABataev, caomhin, Hahnfeld

Reviewed By: ABataev, Hahnfeld

Subscribers: Hahnfeld, jholewinski, guansong, cfe-commits

Differential Revision: https://reviews.llvm.org/D52434

llvm-svn: 343253
2018-09-27 19:22:56 +00:00
Kelvin Li 1408f91a25 [OPENMP] Add support for OMP5 requires directive + unified_address clause
Add support for OMP5.0 requires directive and unified_address clause.
Patches to follow will include support for additional clauses.

Differential Revision: https://reviews.llvm.org/D52359

llvm-svn: 343063
2018-09-26 04:28:39 +00:00
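A hedged example of the new declarative directive and clause; the target region is only there to make the snippet self-contained:

```
/* OpenMP 5.0: assert that all target devices share a unified address
   space with the host. */
#pragma omp requires unified_address

void set_first(int *p) {
  #pragma omp target map(tofrom: p[0:1])
  p[0] = 42;
}
```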
Alexey Bataev 2adecff1aa [OPENMP][NVPTX] Enable support for lastprivates in SPMD constructs.
Previously we could not use lastprivates in SPMD constructs; the patch
allows lastprivates in SPMD mode with an uninitialized runtime.

llvm-svn: 342738
2018-09-21 14:22:53 +00:00
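An illustrative SPMD-compatible construct with a lastprivate clause of the kind this change enables; the clauses are examples, not taken from the patch:

```
int last_index(int n) {
  int last = -1;
  /* lastprivate applies to the loop constructs; the map clause brings the
     sequentially last value back from the device. */
  #pragma omp target teams distribute parallel for lastprivate(last) \
      map(tofrom: last)
  for (int i = 0; i < n; ++i)
    last = i;
  return last;
}
```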
Alexey Bataev bd8ff9bd70 [OPENMP] Fix PR38710: static functions are not emitted as implicitly
'declare target'.

All functions referenced in implicit or explicit target regions must
be emitted during code emission for the device.

llvm-svn: 341093
2018-08-30 18:56:11 +00:00
Alexey Bataev 80a9a61ded [OPENMP][NVPTX] Add options -f[no-]openmp-cuda-force-full-runtime.
Added options -f[no-]openmp-cuda-force-full-runtime to [not] force use
of the full runtime for OpenMP offloading to CUDA devices.

llvm-svn: 341073
2018-08-30 14:45:24 +00:00
Alexey Bataev 8d8e1235ab [OPENMP][NVPTX] Add support for lightweight runtime.
If the target construct can be executed in SPMD mode and it is a
loop-based directive with static scheduling, we can use the lightweight
runtime support.

llvm-svn: 340953
2018-08-29 18:32:21 +00:00
Alexey Bataev 97b722121e [OPENMP] Fix processing of declare target construct.
The attribute is marked as inheritable since OpenMP 5.0 supports it, plus
additional fixes to support the new functionality.

llvm-svn: 339704
2018-08-14 18:31:20 +00:00
Stephen Kelly f2ceec4811 Port getLocStart -> getBeginLoc
Reviewers: teemperor!

Subscribers: jholewinski, whisperity, jfb, cfe-commits

Differential Revision: https://reviews.llvm.org/D50350

llvm-svn: 339385
2018-08-09 21:08:08 +00:00
Alexey Bataev 8521ff6ec4 [OPENMP] ThreadId in serialized parallel regions is 0.
The first argument of the parallel outlined functions, when called as
serialized parallel regions, should be a pointer to the global thread id,
which is always 0.

llvm-svn: 337957
2018-07-25 20:03:01 +00:00
Alexey Bataev 3dd1f9d61d [OPENMP, NVPTX] Globalize only captured variables.
Sometimes we may try to globalize non-variable declarations, which can
lead to a compiler crash.

llvm-svn: 337191
2018-07-16 16:49:20 +00:00
Gheorghe-Teodor Bercea ad4e579407 [OpenMP] Initialize data sharing stack for SPMD case
Summary: In the SPMD case, we need to initialize the data sharing and globalization infrastructure. This covers the case when an SPMD region calls a function in a different compilation unit.

Reviewers: ABataev, carlo.bertolli, caomhin

Reviewed By: ABataev

Subscribers: Hahnfeld, jholewinski, guansong, cfe-commits

Differential Revision: https://reviews.llvm.org/D49188

llvm-svn: 337015
2018-07-13 16:18:24 +00:00
Alexey Bataev b99dcb5f31 [OPENMP, NVPTX] Do not globalize local variables in parallel regions.
In generic data-sharing mode we are allowed to not globalize local
variables that escape their declaration context iff they are declared
inside of the parallel region. We can do this because L2 parallel
regions are executed sequentially and, thus, we do not need to put
shared local variables in the global memory.

llvm-svn: 336567
2018-07-09 17:43:58 +00:00
Alexey Bataev 91433f6877 [OPENMP, NVPTX] Reduce the number of the globalized variables.
The patch improves the analysis of the variables that should be
globalized. From now on, instead of all parallel directives, it will check
only distribute parallel .. directives and check only whether
firstprivate/lastprivate variables must be globalized.

llvm-svn: 335632
2018-06-26 17:24:03 +00:00
Alexey Bataev 12c62908b5 [OPENMP, NVPTX] Fix reduction of the big data types/structures.
If a shuffle is required for reduced structures/big data types, the
current code may cause a compiler crash because of the loading of
aggregate values. The patch fixes this problem.

llvm-svn: 335377
2018-06-22 19:10:38 +00:00
Alexey Bataev 4065b9ae48 [OPENMP, NVPTX] Fix globalization of the variables passed to orphaned
parallel region.

If the current construct requires sharing of a local variable in the
inner parallel region, this variable must be globalized to avoid a
runtime crash.

llvm-svn: 335285
2018-06-21 20:26:33 +00:00
Alexey Bataev 7b55d2d554 [OPENMP, NVPTX] Emit simple reduction if requested.
If a simple reduction is requested, use the simple reduction instead of
the runtime function calls.

llvm-svn: 334962
2018-06-18 17:11:45 +00:00
Alexey Bataev 0baba9e728 [OPENMP, NVPTX] Fixed codegen for orphaned parallel region.
If an orphaned parallel region is found, the following code must be emitted:
```
if(__kmpc_is_spmd_exec_mode() || __kmpc_parallel_level(loc, gtid))
  Serialized execution.
else if (IsMasterThread())
  Prepare and signal worker.
else
  Outlined function call.
```

llvm-svn: 333301
2018-05-25 20:16:03 +00:00
Alexey Bataev 673110d5d5 [OPENMP, NVPTX] Add check for SPMD mode in orphaned parallel directives.
If the orphaned directive is executed in SPMD mode, we need to emit the
check for the SPMD mode and run the orphaned parallel directive in
sequential mode.

llvm-svn: 332467
2018-05-16 13:36:30 +00:00
Alexey Bataev 2a3320a928 [OPENMP, NVPTX] Do not globalize variables with reference/pointer types.
In generic data-sharing mode we do not need to globalize
variables/parameters of reference/pointer types. They already are placed
in the global memory.

llvm-svn: 332380
2018-05-15 18:01:01 +00:00
Alexey Bataev df093e7b45 [OPENMP, NVPTX] Do not use SPMD mode for target simd and target teams
distribute simd directives.

Directives `target simd` and `target teams distribute simd` must be
executed in non-SPMD mode.

llvm-svn: 332129
2018-05-11 19:45:14 +00:00
Alexey Bataev bf5c84861c [OPENMP, NVPTX] Initial support for L2 parallelism in SPMD mode.
Added initial support for L2 parallelism in SPMD mode. Note, though,
that the orphaned parallel directives are not currently supported in
SPMD mode.

llvm-svn: 332016
2018-05-10 18:32:08 +00:00
Adrian Prantl 9fc8faf9e6 Remove \brief commands from doxygen comments.
This is similar to the LLVM change https://reviews.llvm.org/D46290.

We've been running doxygen with the autobrief option for a couple of
years now. This makes the \brief markers in our comments
redundant. Since they are a visual distraction and we don't want to
encourage more \brief markers in new code either, this patch removes
them all.

Patch produced by

for i in $(git grep -l '\@brief'); do perl -pi -e 's/\@brief //g' $i & done
for i in $(git grep -l '\\brief'); do perl -pi -e 's/\\brief //g' $i & done

Differential Revision: https://reviews.llvm.org/D46320

llvm-svn: 331834
2018-05-09 01:00:01 +00:00
Alexey Bataev 504fc2d0cd [OPENMP, NVPTX] Codegen for critical construct.
Added correct codegen for the critical construct on NVPTX devices.

llvm-svn: 331652
2018-05-07 17:23:05 +00:00
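An illustrative use of the construct on a device; the surrounding loop and clauses are just examples, not from the patch:

```
int count_positive(const int *a, int n) {
  int count = 0;
  #pragma omp target parallel for map(to: a[0:n]) map(tofrom: count)
  for (int i = 0; i < n; ++i) {
    if (a[i] > 0) {
      /* only one thread at a time executes the critical section */
      #pragma omp critical
      count += 1;
    }
  }
  return count;
}
```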
Alexey Bataev d7ff6d647f [OPENMP, NVPTX] Added support for L2 parallelism.
Added initial codegen for level 2, 3, etc. parallelism. Currently, all
second-, third-, etc. level parallel regions will run sequentially.

llvm-svn: 331642
2018-05-07 14:50:05 +00:00
Alexey Bataev fac26cf4ca [OPENMP] Add support for reductions on simd directives in target
regions.

Added codegen for `simd reduction()` constructs in target directives.

llvm-svn: 331393
2018-05-02 20:03:27 +00:00
Alexey Bataev 18fa2323b6 [OPENMP] Emit names of the globals depending on target.
Some symbols are not allowed to be used as names on some targets. The patch
tries to unify the emission of the names of LLVM globals so they can be
used on different targets.

llvm-svn: 331358
2018-05-02 14:20:50 +00:00
Alexey Bataev 2091ca6c97 [OPENMP] Do not cast captured by value variables with pointer types in
NVPTX target.

When generating the wrapper function for the offloading region, we need
to call the outlined function and cast the arguments correctly to follow
the ABI. Usually, variables captured by value are cast to the `uintptr_t`
type, but this should not be done for variables of pointer type.

llvm-svn: 330620
2018-04-23 17:33:41 +00:00
Alexey Bataev 9ff8083d98 [OPENMP] General code improvements.
llvm-svn: 330154
2018-04-16 20:16:21 +00:00
Alexey Bataev c0f879bcec [OPENMP] Additional attributes for the pointer parameters.
Added attributes for better optimization of the OpenMP code.

llvm-svn: 329751
2018-04-10 20:10:53 +00:00
Alexey Bataev e290ec02c7 [OPENMP, NVPTX] Fix codegen for the teams reduction.
Added NUW flags for all the add/mul/sub operations and replaced sdiv by udiv,
as we operate on unsigned values only (addresses converted to integers).

llvm-svn: 329411
2018-04-06 16:03:36 +00:00
Alexey Bataev 03f270c900 [OPENMP] Added emission of offloading data sections for declare target
variables.

Added emission of the offloading data sections for the variables within
declare target regions, plus fixes emission of variables marked as
declare target outside of a declare target region.

llvm-svn: 328888
2018-03-30 18:31:07 +00:00
Gheorghe-Teodor Bercea 36cdfad062 [OpenMP][Clang] Add call to global data sharing stack initialization on the workers side
Summary: The workers also need to initialize the global stack. The call to the initialization function needs to happen after the kernel_init() function is called by the master. This ensures that the per-team data structures of the runtime have been initialized.

Reviewers: ABataev, grokos, carlo.bertolli, caomhin

Reviewed By: ABataev

Subscribers: jholewinski, guansong, cfe-commits

Differential Revision: https://reviews.llvm.org/D44749

llvm-svn: 328219
2018-03-22 17:33:27 +00:00
Alexey Bataev 173142171e [OPENMP, NVPTX] Codegen for target distribute parallel combined
constructs in generic mode.

Fixed codegen for distribute parallel combined constructs. We have to
pass the shared lower and upper bounds from the distribute region and
read them in the inner parallel region. The patch is for generic mode.

llvm-svn: 327990
2018-03-20 15:41:05 +00:00
Alexey Bataev 63cc8e96c3 [OPENMP, NVPTX] Globalization of the private redeclarations.
If generic codegen is enabled and a private copy of the original
variable escapes its declaration context, this private copy should be
globalized just like the original variable.

llvm-svn: 327985
2018-03-20 14:45:59 +00:00
Alexey Bataev a453f36085 [OPENMP, NVPTX] Reworked castToType() function, NFC.
Reworked function castToType to use more frontend functionality rather
than the backend.

llvm-svn: 327873
2018-03-19 17:53:56 +00:00
Alexey Bataev 634b5baa4e [OPENMP] Fix build with MSVC, NFC.
llvm-svn: 327868
2018-03-19 17:18:13 +00:00
Alexey Bataev b7f3cba84c [OPENMP, NVPTX] Emit correct thread id.
We emitted a fake thread id for the outlined function in NVPTX codegen.
The patch adds emission of the real thread id.

llvm-svn: 327867
2018-03-19 17:04:07 +00:00
Mikael Holmen 9f373a379d Fix compilation warning introduced in r327654
The compiler complained about

../tools/clang/lib/CodeGen/CGOpenMPRuntimeNVPTX.cpp:184:15: error: unused variable 'CSI' [-Werror,-Wunused-variable]
    if (auto *CSI = CGF.CapturedStmtInfo) {
              ^
1 error generated.

I don't know this code but it seems like an easy fix so I push it anyway
to get rid of the warning.

llvm-svn: 327694
2018-03-16 07:27:57 +00:00
Alexey Bataev c99042ba97 [OPENMP, NVPTX] Improve globalization of the variables captured by value.
If the variable is captured by value and the corresponding parameter in
the outlined function escapes its declaration context, this parameter
must be globalized. To globalize it we need to get the address of the
original parameter, load the value, store it to the global address and
use this global address instead of the original.

The patch improves globalization for parallel/teams regions and functions in
declare target regions.

llvm-svn: 327654
2018-03-15 18:10:54 +00:00
Gheorghe-Teodor Bercea d3dcf2f05d [OpenMP] Add OpenMP data sharing infrastructure using global memory
Summary:
This patch handles the Clang code generation phase for the OpenMP data sharing infrastructure.

TODO: add a more detailed description.

Reviewers: ABataev, carlo.bertolli, caomhin, hfinkel, Hahnfeld

Reviewed By: ABataev

Subscribers: jholewinski, guansong, cfe-commits

Differential Revision: https://reviews.llvm.org/D43660

llvm-svn: 327513
2018-03-14 14:17:45 +00:00
Gheorghe-Teodor Bercea 7d80da15a0 [OpenMP] Remove implicit data sharing code gen that aims to use device shared memory
Summary: Remove this scheme for now since it will be covered by another more generic scheme using global memory. This code will be worked into an optimization for the generic data sharing scheme. Removing this completely and then adding it via future patches will make all future data sharing patches cleaner.

Reviewers: ABataev, carlo.bertolli, caomhin

Reviewed By: ABataev

Subscribers: jholewinski, guansong, cfe-commits

Differential Revision: https://reviews.llvm.org/D43625

llvm-svn: 326948
2018-03-07 21:59:50 +00:00
Rafael Espindola 51ec5a9ce3 Pass a GlobalDecl to SetInternalFunctionAttributes. NFC.
This just reduces the noise in a followup patch.

Part of D43900.

llvm-svn: 326385
2018-02-28 23:46:35 +00:00
Carlo Bertolli 79712097c7 [OpenMP] Extend NVPTX SPMD implementation of combined constructs
Differential Revision: https://reviews.llvm.org/D43852

This patch extends the SPMD implementation to all target constructs and guards this implementation under a new flag.

llvm-svn: 326368
2018-02-28 20:48:35 +00:00
Sander de Smalen 891af03a55 Recommit rL323952: [DebugInfo] Enable debug information for C99 VLA types.
Fixed build issue when building with g++-4.8 (specialization after instantiation).

llvm-svn: 324173
2018-02-03 13:55:59 +00:00
Sander de Smalen 4e9a1264dd Reverting patch rL323952 due to build errors that I
haven't encountered in local builds.

llvm-svn: 323956
2018-02-01 12:27:13 +00:00
Sander de Smalen 17c4633e7f [DebugInfo] Enable debug information for C99 VLA types
Summary:
This patch enables debugging of C99 VLA types by generating more precise
LLVM Debug metadata, using the extended DISubrange 'count' field that
takes a DIVariable.
    
This should implement:
  Bug 30553: Debug info generated for arrays is not what GDB expects (not as good as GCC's)
https://bugs.llvm.org/show_bug.cgi?id=30553

Reviewers: echristo, aprantl, dexonsmith, clayborg, pcc, kristof.beyls, dblaikie

Reviewed By: aprantl

Subscribers: jholewinski, schweitz, davide, fhahn, JDevlieghere, cfe-commits

Differential Revision: https://reviews.llvm.org/D41698

llvm-svn: 323952
2018-02-01 11:25:10 +00:00
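A small, hedged illustration of the kind of code whose debug info this change improves: a C99 variable-length array whose bound is only known at run time.

```
#include <stdio.h>

void fill(int n) {
  if (n <= 0)
    return;
  int vla[n]; /* the array's bound is the runtime value of n */
  for (int i = 0; i < n; ++i)
    vla[i] = i * i;
  printf("last element: %d\n", vla[n - 1]);
}
```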
Alexey Bataev a9b9cc0d79 [OPENMP] Remove more empty SourceLocations() from the code.
Removed more empty SourceLocations() from the OpenMP code and replaced
them with the correct locations for better debug info emission.

llvm-svn: 323232
2018-01-23 18:12:38 +00:00
Alexey Bataev 475a7440f1 [OPENMP] Replace calls of getAssociatedStmt().
getAssociatedStmt() returns the outermost captured statement for the
OpenMP directive. It may return an incorrect region in the case of combined
constructs. Reworked the code to reduce the number of calls of
getAssociatedStmt() and used the getInnermostCapturedStmt() and
getCapturedStmt() functions instead.
In the case of firstprivate variables, it may lead to the generation of extra
allocas for private copies even if the variable is passed by value
into the outlined function and could be used directly as the private copy.

llvm-svn: 322393
2018-01-12 19:39:11 +00:00
Alexey Bataev aee9389b04 [OPENMP] Fix debug info for outlined functions in NVPTX + add more tests.
Fixed the names of emitted outlined functions in the NVPTX target, plus
extra tests for the debug info.

llvm-svn: 322022
2018-01-08 20:09:47 +00:00
Alexey Bataev b2575930b3 [OPENMP] Fix casting in NVPTX support library.
If the reduction requires a shuffle in the NVPTX codegen, we may need to
cast the reduced value to an integer type. This casting was implemented
incorrectly and could cause a compiler crash. The patch fixes this problem.

llvm-svn: 321818
2018-01-04 20:18:55 +00:00
Alexey Bataev 7cae94e74c [OPENMP] Add debug info for generated functions.
Most of the functions generated for OpenMP were emitted with debug info
disabled. The patch fixes this for a better user experience.

llvm-svn: 321816
2018-01-04 19:45:16 +00:00
Jonas Hahnfeld fa059ba59e [OpenMP] Further adjustments of nvptx runtime functions
Pass in default value of 1, similar to previous commit r318836.

Differential Revision: https://reviews.llvm.org/D41012

llvm-svn: 321486
2017-12-27 10:39:56 +00:00
Gheorghe-Teodor Bercea b4c74c6603 [OpenMP] Add function attribute for triggering data sharing.
Summary:
The backend should only emit data sharing code for the cases where it is needed.
A new function attribute is used by Clang to enable data sharing only for the cases where OpenMP semantics require it and there are variables that need to be shared.

Reviewers: hfinkel, Hahnfeld, ABataev, carlo.bertolli, caomhin

Reviewed By: ABataev

Subscribers: cfe-commits, jholewinski

Differential Revision: https://reviews.llvm.org/D41123

llvm-svn: 320527
2017-12-12 21:38:43 +00:00
Alexey Bataev b45d43c397 [OPENMP] Do not mark captured variables as artificial in debug info.
Captured variables should not be marked as artificial parameters in
outlined functions in debug info.

llvm-svn: 318843
2017-11-22 16:02:03 +00:00
Jonas Hahnfeld 891c7fb19d [OpenMP] Adjust arguments of nvptx runtime functions
In the future the compiler will analyze whether the OpenMP
runtime needs to be (fully) initialized and avoid that overhead
if possible. The functions already take an argument to transfer
that information to the runtime, so pass in the default value 1.
(This is needed for binary compatibility with libomptarget-nvptx
currently being upstreamed.)

Differential Revision: https://reviews.llvm.org/D40354

llvm-svn: 318836
2017-11-22 14:46:49 +00:00
Gheorghe-Teodor Bercea eb89b1d46f [OpenMP] Add implicit data sharing support when offloading to NVIDIA GPUs using OpenMP device offloading
Summary:
This patch is part of the development effort to add support in the current OpenMP GPU offloading implementation for implicitly sharing variables between a target region executed by the team master thread and the worker threads within that team.

This patch is the first of three required for successfully performing the implicit sharing of master thread variables with the worker threads within a team. The remaining two patches are:
- Patch D38978 to the LLVM NVPTX backend, which ensures the lowering of shared variables to a device memory space that allows the sharing of references;
- Patch (coming soon) is a patch to libomptarget runtime library which ensures that a list of references to shared variables is properly maintained.

A simple code snippet which illustrates an implicit data sharing situation is as follows:

```
#pragma omp target
{
   // master thread only
   int v;
   #pragma omp parallel
   {
      // worker threads
      // use v
   }
}
```

Variable v is implicitly shared from the team master thread which executes the code in between the target and parallel directives. The worker threads must operate on the latest version of v, including any updates performed by the master.

The code generated in this patch relies on the LLVM NVPTX patch (mentioned above), which prevents v from being lowered to the thread-local memory of the master thread, where the reference to this variable would be un-shareable with the workers. This ensures that the code generated by this patch is correct.
Since the parallel region is outlined, the passing of arguments to the outlined regions must preserve the original order of arguments. The runtime therefore maintains a list of references to shared variables, ensuring that they are passed in the correct order. The passing of arguments to the outlined parallel function is performed in a separate function which the data sharing infrastructure constructs in this patch. The function is inlined when optimizations are enabled.

Reviewers: hfinkel, carlo.bertolli, arpith-jacob, Hahnfeld, ABataev, caomhin

Reviewed By: ABataev

Subscribers: cfe-commits, jholewinski

Differential Revision: https://reviews.llvm.org/D38976

llvm-svn: 318773
2017-11-21 15:54:54 +00:00
Mandeep Singh Grang 789b19a6b6 [clang] Remove redundant return [NFC]
Reviewers: rsmith, sfantao, mcrosier

Reviewed By: mcrosier

Subscribers: jholewinski, cfe-commits

Tags: #clang

Differential Revision: https://reviews.llvm.org/D39915

llvm-svn: 318074
2017-11-13 19:29:31 +00:00
Alexey Bataev 5d7edca316 [OPENMP] Codegen for `#pragma omp target parallel for simd`.
Added codegen for `#pragma omp target parallel for simd` and clauses.

llvm-svn: 317813
2017-11-09 17:32:15 +00:00
Alexey Bataev fb0ebecf0e [OPENMP] Codegen for `#pragma omp target parallel for`.
llvm-svn: 317719
2017-11-08 20:16:14 +00:00
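An illustrative use of the combined directives the two commits above add codegen for; the clauses are just examples:

```
void saxpy(int n, float a, const float *x, float *y) {
  #pragma omp target parallel for simd map(to: x[0:n]) map(tofrom: y[0:n])
  for (int i = 0; i < n; ++i)
    y[i] = a * x[i] + y[i];
}
```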
Alexander Richardson 6d989436d0 Convert clang::LangAS to a strongly typed enum
Summary:
Convert clang::LangAS to a strongly typed enum

Currently both clang AST address spaces and target specific address spaces
are represented as unsigned which can lead to subtle errors if the wrong
type is passed. It is especially confusing in the CodeGen files as it is
not possible to see what kind of address space should be passed to a
function without looking at the implementation.
I originally made this change for our LLVM fork for the CHERI architecture
where we make extensive use of address spaces to differentiate between
capabilities and pointers. When merging the upstream changes I usually
run into some test failures or runtime crashes because the wrong kind of
address space is passed to a function. By converting the LangAS enum to a
C++11 enum class we can catch these errors at compile time. Additionally, it is now
obvious from the function signature which kind of address space it expects.

I found the following errors while writing this patch:

- ItaniumRecordLayoutBuilder::LayoutField was passing a clang AST address
  space to  TargetInfo::getPointer{Width,Align}()
- TypePrinter::printAttributedAfter() prints the numeric value of the
  clang AST address space instead of the target address space.
  However, this code is not used so I kept the current behaviour
- initializeForBlockHeader() in CGBlocks.cpp was passing
  LangAS::opencl_generic to TargetInfo::getPointer{Width,Align}()
- CodeGenFunction::EmitBlockLiteral() was passing an AST address space to
  TargetInfo::getPointerWidth()
- CGOpenMPRuntimeNVPTX::translateParameter() passed a target address space
  to Qualifiers::addAddressSpace()
- CGOpenMPRuntimeNVPTX::getParameterAddress() was using
  llvm::Type::getPointerTo() with an AST address space
- clang_getAddressSpace() returns either a LangAS or a target address
  space. As this is exposed to C I have kept the current behaviour and
  added a comment stating that it is probably not correct.

Other than this the patch should not cause any functional changes.

Reviewers: yaxunl, pcc, bader

Reviewed By: yaxunl, bader

Subscribers: jlebar, jholewinski, nhaehnle, Anastasia, cfe-commits

Differential Revision: https://reviews.llvm.org/D38816

llvm-svn: 315871
2017-10-15 18:48:14 +00:00
Alexey Bataev 36f2c4df12 [OPENMP] Fix types for the target specific parameters in debug mode.
Incorrect types were used for target-specific parameters in debug mode;
the original pointers should be used rather than the pointee types.

llvm-svn: 313186
2017-09-13 20:20:59 +00:00
Alexey Bataev 07ed94a7c7 [OPENMP] Fix compiler crash on argument translate for NVPTX.
When translating arguments for the NVPTX target, it was not taken into account
that a function may have a variable number of arguments. The patch fixes this
problem.

llvm-svn: 310920
2017-08-15 14:34:04 +00:00
Alexey Bataev 3c595a6b2c [OPENMP] Generalization of calls of the outlined functions.
General improvement of the outlined function calls.

llvm-svn: 310840
2017-08-14 15:01:03 +00:00
Alexey Bataev 3b8d5586ec [OPENMP][DEBUG] Set proper address space info if required by target.
Arguments passed to the outlined function must have correct address
space info for proper debug info support. The patch sets the global address
space for arguments that are mapped and passed by reference.

Also, cuda-gdb does not handle reference types correctly, so reference
arguments are represented as pointers.

llvm-svn: 310387
2017-08-08 18:04:06 +00:00
Alexey Bataev 4aa19052f3 Revert "[OPENMP][DEBUG] Set proper address space info if required by target."
This reverts commit r310377.

llvm-svn: 310379
2017-08-08 16:45:36 +00:00
Alexey Bataev 5a497136be [OPENMP][DEBUG] Set proper address space info if required by target.
Arguments passed to the outlined function must have correct address
space info for proper debug info support. The patch sets the global address
space for arguments that are mapped and passed by reference.

Also, cuda-gdb does not handle reference types correctly, so reference
arguments are represented as pointers.

llvm-svn: 310377
2017-08-08 16:29:11 +00:00
Alexey Bataev 6a824b9a45 Revert "[OPENMP][DEBUG] Set proper address space info if required by target."
This reverts commit r310360.

llvm-svn: 310364
2017-08-08 14:44:43 +00:00
Alexey Bataev 59b81e51d3 [OPENMP][DEBUG] Set proper address space info if required by target.
Arguments passed to the outlined function must have correct address
space info for proper debug info support. The patch sets the global address
space for arguments that are mapped and passed by reference.

Also, cuda-gdb does not handle reference types correctly, so reference
arguments are represented as pointers.

llvm-svn: 310360
2017-08-08 14:25:14 +00:00
Alexey Bataev d90ec748a8 Revert "[OPENMP][DEBUG] Set proper address space info if required by target."
This reverts commit r310104.

llvm-svn: 310135
2017-08-04 21:27:11 +00:00
Alexey Bataev be83fad57e [OPENMP][DEBUG] Set proper address space info if required by target.
Arguments passed to the outlined function must have correct address
space info for proper debug info support. The patch sets the global address
space for arguments that are mapped and passed by reference.

Also, cuda-gdb does not handle reference types correctly, so reference
arguments are represented as pointers.

llvm-svn: 310104
2017-08-04 19:46:10 +00:00
Alexey Bataev 2c7eee5b84 [OPENMP] Unify generation of outlined function calls.
llvm-svn: 310098
2017-08-04 19:10:54 +00:00
Alexey Bataev 56223237b0 [DebugInfo] Add kind of ImplicitParamDecl for emission of FlagObjectPointer.
Summary:
If the first parameter of the function is an ImplicitParamDecl, codegen
automatically marks it as an implicit argument with the `this` or `self`
pointer. Added an internal kind to ImplicitParamDecl to separate
'this', 'self', 'vtt' and other implicit parameters from other kinds of
parameters.

Reviewers: rjmccall, aaron.ballman

Subscribers: cfe-commits

Differential Revision: https://reviews.llvm.org/D33735

llvm-svn: 305075
2017-06-09 13:40:18 +00:00
Mehdi Amini 6aa9e9b41a IRGen: Add optnone attribute on function during O0
Amongst other things, this will help LTO correctly handle/honor files
compiled with -O0, helping to debug failures.
It also seems in line with how we handle other options, like how
-fno-inline adds the appropriate attribute as well.

Differential Revision: https://reviews.llvm.org/D28404

llvm-svn: 304127
2017-05-29 05:38:20 +00:00
Benjamin Kramer 674d579271 Make helper functions static. NFC.
llvm-svn: 304028
2017-05-26 20:08:24 +00:00
Arpith Chacko Jacob fc711b1f47 [OpenMP] Teams reduction on the NVPTX device.
This patch implements codegen for the reduction clause on
any teams construct for elementary data types.  It builds
on parallel reductions on the GPU.  Subsequently,
the team master writes to a unique location in a global
memory scratchpad.  The last team to do so loads and
reduces this array to calculate the final result.

This patch emits two helper functions that are used by
the OpenMP runtime on the GPU to perform reductions across
teams.

Patch by Tian Jin in collaboration with Arpith Jacob

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29879

llvm-svn: 295335
2017-02-16 16:48:49 +00:00
Arpith Chacko Jacob 101e8fb1f3 [OpenMP] Parallel reduction on the NVPTX device.
This patch implements codegen for the reduction clause on
any parallel construct for elementary data types.  An efficient
implementation requires hierarchical reduction within a
warp and a threadblock.  It is complicated by the fact that
variables declared in the stack of a CUDA thread cannot be
shared with other threads.

The patch creates a struct to hold reduction variables and
a number of helper functions.  The OpenMP runtime on the GPU
implements reduction algorithms that use these helper
functions to perform reductions within a team.  Variables are
shared between CUDA threads using shuffle intrinsics.

An implementation of reductions on the NVPTX device is
substantially different to that of CPUs.  However, this patch
is written so that there are minimal changes to the rest of
OpenMP codegen.

The implemented design allows the compiler and runtime to be
decoupled, i.e., the runtime does not need to know of the
reduction operation(s), the type of the reduction variable(s),
or the number of reductions.  The design also allows reuse of
host codegen, with appropriate specialization for the NVPTX
device.

While the patch does introduce a number of abstractions, the
expected use case calls for inlining of the GPU OpenMP runtime.
After inlining and optimizations in LLVM, these abstractions
are unwound and performance of OpenMP reductions is comparable
to CUDA-canonical code.

Patch by Tian Jin in collaboration with Arpith Jacob

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29758

llvm-svn: 295333
2017-02-16 16:20:16 +00:00
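An illustrative elementary-type reduction of the kind this patch generates GPU code for; the directive and clauses are examples, not taken from the patch:

```
float dot(const float *x, const float *y, int n) {
  float sum = 0.0f;
  /* the reduction is lowered to warp/team shuffles on the device */
  #pragma omp target parallel for reduction(+: sum) \
      map(to: x[0:n], y[0:n]) map(tofrom: sum)
  for (int i = 0; i < n; ++i)
    sum += x[i] * y[i];
  return sum;
}
```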
Arpith Chacko Jacob bd6344c0be Revert r295319 while investigating buildbot failure.
llvm-svn: 295323
2017-02-16 14:25:35 +00:00
Arpith Chacko Jacob 8e170fc857 [OpenMP] Parallel reduction on the NVPTX device.
This patch implements codegen for the reduction clause on
any parallel construct for elementary data types.  An efficient
implementation requires hierarchical reduction within a
warp and a threadblock.  It is complicated by the fact that
variables declared in the stack of a CUDA thread cannot be
shared with other threads.

The patch creates a struct to hold reduction variables and
a number of helper functions.  The OpenMP runtime on the GPU
implements reduction algorithms that use these helper
functions to perform reductions within a team.  Variables are
shared between CUDA threads using shuffle intrinsics.

An implementation of reductions on the NVPTX device is
substantially different to that of CPUs.  However, this patch
is written so that there are minimal changes to the rest of
OpenMP codegen.

The implemented design allows the compiler and runtime to be
decoupled, i.e., the runtime does not need to know of the
reduction operation(s), the type of the reduction variable(s),
or the number of reductions.  The design also allows reuse of
host codegen, with appropriate specialization for the NVPTX
device.

While the patch does introduce a number of abstractions, the
expected use case calls for inlining of the GPU OpenMP runtime.
After inlining and optimizations in LLVM, these abstractions
are unwound and performance of OpenMP reductions is comparable
to CUDA-canonical code.

Patch by Tian Jin in collaboration with Arpith Jacob

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29758

llvm-svn: 295319
2017-02-16 14:03:36 +00:00
Arpith Chacko Jacob cca61a3a74 [OpenMP] Codegen support for 'target teams' on the NVPTX device.
This is a simple patch to teach OpenMP codegen to emit the construct
in Generic mode.

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29143

llvm-svn: 293183
2017-01-26 15:43:27 +00:00
Arpith Chacko Jacob 2cd6eeabfd [OpenMP] Support for the proc_bind-clause on 'target parallel' on the NVPTX device.
This patch adds support for the proc_bind clause on the SPMD construct
'target parallel' on the NVPTX device.  Since the parallel region is created
upon kernel launch, this clause can be safely ignored on the NVPTX device at
codegen time for level 0 parallelism.

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29128

llvm-svn: 293069
2017-01-25 16:55:10 +00:00
Arpith Chacko Jacob 86f9e46365 Reverting commit because an NVPTX patch sneaked in. Break up into two
patches.

llvm-svn: 293003
2017-01-25 01:45:59 +00:00
Arpith Chacko Jacob 4dbf368e14 [OpenMP] Codegen support for 'target teams' on the host.
This patch adds support for codegen of 'target teams' on the host.
This combined directive has two captured statements, one for the
'teams' region, and the other for the 'parallel'.

This target teams region is offloaded using the __tgt_target_teams()
call. The patch sets the number of teams as an argument to
this call.

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29084

llvm-svn: 293001
2017-01-25 01:38:33 +00:00
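An illustrative 'target teams' region of the kind this commit offloads via __tgt_target_teams(); the clause value is just an example:

```
void teams_hello(void) {
  /* the number of teams is passed as an argument to __tgt_target_teams() */
  #pragma omp target teams num_teams(4)
  {
    /* executed by the initial thread of each team */
  }
}
```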
Arpith Chacko Jacob e04da5dee2 [OpenMP] Support for the num_threads-clause on 'target parallel' on the NVPTX device.
This patch adds support for the SPMD construct 'target parallel' on the
NVPTX device. This involves ignoring the num_threads clause on the device
since the number of threads in this combined construct is already set on
the host through the call to __tgt_target_teams().

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D29083

llvm-svn: 292999
2017-01-25 01:18:34 +00:00
Arpith Chacko Jacob 44a87c9f1b [OpenMP] Codegen for the 'target parallel' directive on the NVPTX device.
This patch adds codegen for the 'target parallel' directive on the NVPTX
device.  We term offload OpenMP directives such as 'target parallel' and
'target teams distribute parallel for' as SPMD constructs.  SPMD constructs,
in contrast to Generic ones like the plain 'target', can never contain
a serial region.

SPMD constructs can be handled more efficiently on the GPU and do not
require the Warp Loop of the Generic codegen scheme. This patch adds
SPMD codegen support for 'target parallel' on the NVPTX device and can
be reused for other SPMD constructs.

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D28755

llvm-svn: 292428
2017-01-18 19:35:00 +00:00
Arpith Chacko Jacob 19b911cb75 [OpenMP] Codegen support for 'target parallel' on the host.
This patch adds support for codegen of 'target parallel' on the host.
It is also the first combined directive that requires two or more
captured statements.  Support for this functionality is included in
the patch.

A combined directive such as 'target parallel' has two captured
statements, one for the 'target' and the other for the 'parallel'
region.  Two captured statements are required because each has
different implicit parameters (see SemaOpenMP.cpp).  For example,
the 'parallel' has 'global_tid' and 'bound_tid' while the 'target'
does not.  The patch adds support for handling multiple captured
statements based on the combined directive.

When codegen'ing the 'target parallel' directive, the 'target'
outlined function is created using the outer captured statement
and the 'parallel' outlined function is created using the inner
captured statement.

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D28753

llvm-svn: 292419
2017-01-18 18:18:53 +00:00
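An illustrative 'target parallel' combined directive: codegen builds the 'target' outlined function from the outer captured statement and the 'parallel' outlined function from the inner one, as described above. The body is only an example:

```
#include <omp.h>

void fill_ids(int *a, int n) {
  #pragma omp target parallel map(tofrom: a[0:n])
  {
    int tid = omp_get_thread_num(); /* each thread of the parallel region */
    if (tid < n)
      a[tid] = tid;
  }
}
```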
Arpith Chacko Jacob 42793e000a Revert r292374 to debug Windows buildbot failure.
llvm-svn: 292400
2017-01-18 15:36:05 +00:00
Arpith Chacko Jacob 68019578a3 [OpenMP] Codegen support for 'target parallel' on the host.
This patch adds support for codegen of 'target parallel' on the host.
It is also the first combined directive that requires two or more
captured statements.  Support for this functionality is included in
the patch.

A combined directive such as 'target parallel' has two captured
statements, one for the 'target' and the other for the 'parallel'
region.  Two captured statements are required because each has
different implicit parameters (see SemaOpenMP.cpp).  For example,
the 'parallel' has 'global_tid' and 'bound_tid' while the 'target'
does not.  The patch adds support for handling multiple captured
statements based on the combined directive.

When codegen'ing the 'target parallel' directive, the 'target'
outlined function is created using the outer captured statement
and the 'parallel' outlined function is created using the inner
captured statement.

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D28753

llvm-svn: 292374
2017-01-18 15:14:52 +00:00
Malcolm Parsons c6e4583dbb Remove unused lambda captures. NFC
llvm-svn: 291939
2017-01-13 18:55:32 +00:00
Arpith Chacko Jacob bb36fe8dba [OpenMP] Basic support for a parallel directive in a target region on an NVPTX device
Summary:

This patch introduces support for the execution of parallel constructs in a target
region on the NVPTX device.  Parallel regions must be in the lexical scope of the
target directive.

The master thread in the master warp signals parallel work for worker threads in worker
warps on encountering a parallel region.

Note: The patch does not yet support capture of arguments in a parallel region so
the test cases are simple.

Reviewers: ABataev
Differential Revision: https://reviews.llvm.org/D28145

llvm-svn: 291565
2017-01-10 15:42:51 +00:00
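An illustrative shape of the code this commit supports: a parallel construct in the lexical scope of the target directive, with no captured arguments, matching the note above:

```
void empty_parallel(void) {
  #pragma omp target
  {
    /* the master thread of the master warp signals the workers here */
    #pragma omp parallel
    {
      /* worker threads execute this (empty) body */
    }
  }
}
```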