Commit Graph

17 Commits

Diego Caballero d7058acc14 [mlir] Add MemRef filter to affine data copy optimization
This patch extends the affine data copy optimization utility with an
optional memref filter argument. When the filter is provided, data copy
optimization generates copies only for that memref.

Note: this patch is just porting the memref filter feature from Uday's
'hop' branch: https://github.com/bondhugula/llvm-project/tree/hop.
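
A hedged sketch of invoking the filtered utility (the signature and the
AffineCopyOptions fields reflect the LoopUtils declarations of this era to
the best of recollection; treat parameter names as approximate, and 'buf'
as a stand-in for the memref Value to filter on):

```
AffineCopyOptions options = {/*generateDma=*/false,
                             /*slowMemorySpace=*/0,
                             /*fastMemorySpace=*/1,
                             /*tagMemorySpace=*/0,
                             /*fastMemCapacityBytes=*/32 * 1024 * 1024};
DenseSet<Operation *> copyNests;
// 'buf' is a hypothetical memref-typed Value; copies are generated only
// for it, and every other memref referenced in [begin, end) is skipped.
affineDataCopyGenerate(block->begin(), block->end(), options,
                       /*filterMemRef=*/buf, copyNests);
```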

Reviewed By: bondhugula

Differential Revision: https://reviews.llvm.org/D74342
2020-02-14 13:41:45 -08:00
Mehdi Amini 308571074c Mass update the MLIR license header to mention "Part of the LLVM project"
This is an artifact of merging MLIR into LLVM; the file headers are
now aligned with the rest of the project.
2020-01-26 03:58:30 +00:00
River Riddle e62a69561f NFC: Replace ValuePtr with Value and remove it now that Value is value-typed.
ValuePtr was a temporary typedef during the transition to a value-typed Value.
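
Illustrative only (not from the commit): code now traffics in Value
directly by value, e.g.:

```
// Value is now a cheap value-typed handle, copied and passed by value
// rather than through a ValuePtr alias.
Value result = op->getResult(0);
for (Value operand : op->getOperands())
  process(operand);  // 'process' is a hypothetical helper
```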

PiperOrigin-RevId: 286945714
2019-12-23 16:36:53 -08:00
Mehdi Amini 56222a0694 Adjust License.txt file to use the LLVM license
PiperOrigin-RevId: 286906740
2019-12-23 15:33:37 -08:00
River Riddle 35807bc4c5 NFC: Introduce new ValuePtr/ValueRef typedefs to simplify the transition to Value being value-typed.
This is an initial step toward refactoring the representation of OpResult, as proposed in: https://groups.google.com/a/tensorflow.org/g/mlir/c/XXzzKhqqF_0/m/v6bKb08WCgAJ

This change will make it much simpler to incrementally transition all of the existing code to use value-typed semantics.

PiperOrigin-RevId: 286844725
2019-12-22 22:00:23 -08:00
Christian Sigg c900d4994e Fix a number of Clang-Tidy warnings.
PiperOrigin-RevId: 270632324
2019-09-23 02:34:27 -07:00
Uday Bondhugula 4f32ae61b4 NFC - Move explicit copy/dma generation utility out of pass and into LoopUtils
- turn the copy/DMA generation method into a utility in LoopUtils,
  allowing it to be reused elsewhere.

- no functional/logic change to the pass/utility

- trim down header includes in files affected

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#124

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/124 from bondhugula:datacopy 9f346e62e5bd9dd1986720a30a35f302eb4d3252
PiperOrigin-RevId: 269106088
2019-09-14 13:23:48 -07:00
River Riddle f1b100c77b NFC: Finish replacing FunctionPassBase/ModulePassBase with OpPassBase.
These base classes were temporary during the generalization of FunctionPass/ModulePass to OpPass.
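
Illustrative of the generalized form (the create* names are hypothetical):

```
// FunctionPassBase -> OpPassBase<FuncOp>
// ModulePassBase   -> OpPassBase<ModuleOp>
std::unique_ptr<OpPassBase<FuncOp>> createMyFunctionPass();   // hypothetical
std::unique_ptr<OpPassBase<ModuleOp>> createMyModulePass();   // hypothetical
```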

PiperOrigin-RevId: 268970259
2019-09-13 13:34:27 -07:00
Uday Bondhugula 8c9dc690eb pipeline-data-transfer: remove dead tag alloc's and improve test coverage for replaceMemRefUsesWith / pipeline-data-transfer
- address remaining comments from PR tensorflow/mlir#87 to improve test
  coverage for pipeline-data-transfer/replaceAllMemRefUsesWith
- remove dead tag allocs the same way they are removed for the replaced buffers

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#106

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/106 from bondhugula:followup 9e868666d047e8d43e5f82f43e4093b838c710fa
PiperOrigin-RevId: 267144774
2019-09-04 06:59:09 -07:00
Uday Bondhugula b1ef9dc22c Fix affine data copy generation corner cases/bugs
- the [begin, end) range identified for copying could end in the middle
  of a block, which makes hoisting invalid in some cases. Change the
  range identification to always end at the end of the block.

- add test cases to exercise these (with the fast memory capacity set to
  a minimal value so that single-element memref buffers are generated at
  the innermost loop)

- the begin/end of the block range for data copying was being confused
  with the insertion points for the copy-in and copy-out code. In cases
  where we choose to hoist transfers, these are separate.

- when copy loops are single-iteration ones, promote their bodies at the
  end of the pass (see the sketch after this list)

- change the default fast memory space to 1 (setting it to zero made the
  pass generate DMA ops that won't verify in the default case, since the
  DMA ops check that the src/dest memref spaces differ)
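
A minimal sketch of that promotion step, assuming the LoopUtils helper
promoteIfSingleIteration of this era ('f' is the function being
processed; the walk is illustrative, not the pass's exact code):

```
// Collapse single-iteration copy loops: promoteIfSingleIteration
// replaces an affine.for whose trip count is statically 1 with its body.
f.walk([](AffineForOp forOp) { (void)promoteIfSingleIteration(forOp); });
```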

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>
Co-Authored-By: Mehdi Amini <joker.eph@gmail.com>

Closes tensorflow/mlir#88

COPYBARA_INTEGRATE_REVIEW=https://github.com/tensorflow/mlir/pull/88 from bondhugula:datacopy 88697267c45e850c3ced87671e16e4a930c02a42
PiperOrigin-RevId: 266980911
2019-09-03 11:53:16 -07:00
River Riddle ffde975e21 NFC: Move AffineOps dialect to the Dialect sub-directory.
PiperOrigin-RevId: 264482571
2019-08-20 15:36:39 -07:00
River Riddle ba0fa92524 NFC: Move LLVMIR, SDBM, and StandardOps to the Dialect/ directory.
PiperOrigin-RevId: 264193915
2019-08-19 11:01:25 -07:00
Jacques Pienaar 79f53b0cf1 Change from llvm::make_unique to std::make_unique
Switch to the C++14 standard method, as llvm::make_unique has been
removed (https://reviews.llvm.org/D66259). Also mark some targets as
C++14 to ease upcoming integrates.
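
A minimal before/after sketch (Foo is a hypothetical type):

```
#include <memory>

struct Foo { int v; };  // hypothetical

// Before (removed in https://reviews.llvm.org/D66259):
//   auto p = llvm::make_unique<Foo>();
// After, the C++14 standard equivalent:
auto p = std::make_unique<Foo>();
```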

PiperOrigin-RevId: 263953918
2019-08-17 11:06:03 -07:00
Mehdi Amini 926fb685de Express ownership transfer in PassManager API through std::unique_ptr (NFC)
Since raw pointers are always passed around for IR constructs without
implying any ownership transfer, it is error-prone to transfer ownership
implicitly the same way. For example, this code can seem harmless:

  Pass *pass = ....
  pm.addPass(pass);
  pm.addPass(pass);
  pm.run(module);
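
but it registers the same pass twice, handing the pass manager ownership
of a single object two times and setting up a double free. With the
std::unique_ptr-based API the transfer is explicit at the call site; a
minimal sketch (MyPass is a hypothetical pass type):

```
std::unique_ptr<Pass> pass = std::make_unique<MyPass>();
pm.addPass(std::move(pass));
// A second pm.addPass(std::move(pass)) now visibly hands over a
// moved-from (null) pointer instead of silently double-owning the pass.
pm.run(module);
```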

PiperOrigin-RevId: 263053082
2019-08-12 19:13:12 -07:00
River Riddle 1e42954032 NFC: Standardize the terminology used for parent ops/regions/etc.
There are currently several different terms used to refer to a parent IR unit in 'get' methods: getParent/getEnclosing/getContaining. This CL standardizes all of these methods on 'getParent*'.
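
Illustrative of the standardized accessors:

```
// Parent accessors now uniformly use the 'getParent*' prefix.
Region *parentRegion = op->getParentRegion();
Operation *parentOp = op->getParentOp();
auto parentFunc = op->getParentOfType<FuncOp>();
```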

PiperOrigin-RevId: 262680287
2019-08-09 20:07:52 -07:00
Alex Zinenko 58e66d71e7 AffineDataCopyGeneration: don't use CL flag values inside the pass
The AffineDataCopyGeneration pass relied on command-line flags for
internal logic in several places, which made it unusable in a library
context (i.e., outside a standalone mlir-opt binary that does the
command-line parsing). Define configuration flags in the constructor
instead, and set them to command-line-based defaults to maintain the
original behavior.
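
A hedged sketch of library-style construction under this change (the
factory name follows the pass-creation convention of the time; the exact
parameter list is an assumption):

```
// Configuration flows through the constructor instead of cl::opt
// globals, so the pass is usable without mlir-opt's flag parsing.
pm.addPass(createAffineDataCopyGenerationPass(
    /*slowMemorySpace=*/0, /*fastMemorySpace=*/1, /*tagMemorySpace=*/0,
    /*minDmaTransferSize=*/1024,
    /*fastMemCapacityBytes=*/32 * 1024 * 1024));
```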

PiperOrigin-RevId: 261322364
2019-08-02 08:04:30 -07:00
Uday Bondhugula 18b8d4352b Introduce explicit copying optimization by generalizing the DMA generation pass
Explicit copying to contiguous buffers is a standard technique to avoid
conflict misses and TLB misses, and improve hardware prefetching
performance. When done in conjunction with cache tiling, it nearly
eliminates all cache conflict and TLB misses, and a single hardware
prefetch stream is needed per data tile.

- generalize/extend the DMA generation pass (renamed to a data copying
  pass) to perform either point-wise explicit copies to fast memory
  buffers or DMAs (depending on a command-line option). All logic is the
  same as the erstwhile -dma-generate.

- -affine-dma-generate is now renamed -affine-data-copy; when the -dma
  flag is provided, DMAs are generated; otherwise, explicit point-wise
  copy loops are generated by default.

- point-wise copying could be used for CPUs (or GPUs); some indicative
  performance numbers with a C version of the MLIR code, compiled with
  and without this optimization (about a 2x improvement here).

  With a matmul on 4096^2 matrices on a single core of an Intel Skylake
  i7-8700K with clang 8.0.0:

  clang -O3:                                  518s
  clang -O3 with MLIR tiling (128x128):       24.5s
  clang -O3 with MLIR tiling + data copying:  12.4s
  (code equivalent to test/Transforms/data-copy.mlir func @matmul)

- fix some misleading comments.

- change default fast-mem space to 0 (more intuitive now with the
  default copy generation using point-wise copies instead of DMAs)

On a simple 3-d matmul loop nest, code generated with -affine-data-copy:

```
  affine.for %arg3 = 0 to 4096 step 128 {
    affine.for %arg4 = 0 to 4096 step 128 {
      %0 = affine.apply #map0(%arg3, %arg4)
      %1 = affine.apply #map1(%arg3, %arg4)
      %2 = alloc() : memref<128x128xf32, 2>
      // Copy-in Out matrix.
      affine.for %arg5 = 0 to 128 {
        %5 = affine.apply #map2(%arg3, %arg5)
        affine.for %arg6 = 0 to 128 {
          %6 = affine.apply #map2(%arg4, %arg6)
          %7 = load %arg2[%5, %6] : memref<4096x4096xf32>
          affine.store %7, %2[%arg5, %arg6] : memref<128x128xf32, 2>
        }
      }
      affine.for %arg5 = 0 to 4096 step 128 {
        %5 = affine.apply #map0(%arg3, %arg5)
        %6 = affine.apply #map1(%arg3, %arg5)
        %7 = alloc() : memref<128x128xf32, 2>
        // Copy-in LHS.
        affine.for %arg6 = 0 to 128 {
          %11 = affine.apply #map2(%arg3, %arg6)
          affine.for %arg7 = 0 to 128 {
            %12 = affine.apply #map2(%arg5, %arg7)
            %13 = load %arg0[%11, %12] : memref<4096x4096xf32>
            affine.store %13, %7[%arg6, %arg7] : memref<128x128xf32, 2>
          }
        }
        %8 = affine.apply #map0(%arg5, %arg4)
        %9 = affine.apply #map1(%arg5, %arg4)
        %10 = alloc() : memref<128x128xf32, 2>
        // Copy-in RHS.
        affine.for %arg6 = 0 to 128 {
          %11 = affine.apply #map2(%arg5, %arg6)
          affine.for %arg7 = 0 to 128 {
            %12 = affine.apply #map2(%arg4, %arg7)
            %13 = load %arg1[%11, %12] : memref<4096x4096xf32>
            affine.store %13, %10[%arg6, %arg7] : memref<128x128xf32, 2>
          }
        }
        // Compute.
        affine.for %arg6 = #map7(%arg3) to #map8(%arg3) {
          affine.for %arg7 = #map7(%arg4) to #map8(%arg4) {
            affine.for %arg8 = #map7(%arg5) to #map8(%arg5) {
              %11 = affine.load %7[-%arg3 + %arg6, -%arg5 + %arg8] : memref<128x128xf32, 2>
              %12 = affine.load %10[-%arg5 + %arg8, -%arg4 + %arg7] : memref<128x128xf32, 2>
              %13 = affine.load %2[-%arg3 + %arg6, -%arg4 + %arg7] : memref<128x128xf32, 2>
              %14 = mulf %11, %12 : f32
              %15 = addf %13, %14 : f32
              affine.store %15, %2[-%arg3 + %arg6, -%arg4 + %arg7] : memref<128x128xf32, 2>
            }
          }
        }
        dealloc %10 : memref<128x128xf32, 2>
        dealloc %7 : memref<128x128xf32, 2>
      }
      %3 = affine.apply #map0(%arg3, %arg4)
      %4 = affine.apply #map1(%arg3, %arg4)
      // Copy out result matrix.
      affine.for %arg5 = 0 to 128 {
        %5 = affine.apply #map2(%arg3, %arg5)
        affine.for %arg6 = 0 to 128 {
          %6 = affine.apply #map2(%arg4, %arg6)
          %7 = affine.load %2[%arg5, %arg6] : memref<128x128xf32, 2>
          store %7, %arg2[%5, %6] : memref<4096x4096xf32>
        }
      }
      dealloc %2 : memref<128x128xf32, 2>
    }
  }
```

With -affine-data-copy -dma:

```
  affine.for %arg3 = 0 to 4096 step 128 {
    %0 = affine.apply #map3(%arg3)
    %1 = alloc() : memref<128xf32, 2>
    %2 = alloc() : memref<1xi32>
    affine.dma_start %arg2[%arg3], %1[%c0], %2[%c0], %c128_0 : memref<4096xf32>, memref<128xf32, 2>, memref<1xi32>
    affine.dma_wait %2[%c0], %c128_0 : memref<1xi32>
    %3 = alloc() : memref<1xi32>
    affine.for %arg4 = 0 to 4096 step 128 {
      %5 = affine.apply #map0(%arg3, %arg4)
      %6 = affine.apply #map1(%arg3, %arg4)
      %7 = alloc() : memref<128x128xf32, 2>
      %8 = alloc() : memref<1xi32>
      affine.dma_start %arg0[%arg3, %arg4], %7[%c0, %c0], %8[%c0], %c16384, %c4096, %c128_2 : memref<4096x4096xf32>, memref<128x128xf32, 2>, memref<1xi32>
      affine.dma_wait %8[%c0], %c16384 : memref<1xi32>
      %9 = affine.apply #map3(%arg4)
      %10 = alloc() : memref<128xf32, 2>
      %11 = alloc() : memref<1xi32>
      affine.dma_start %arg1[%arg4], %10[%c0], %11[%c0], %c128_1 : memref<4096xf32>, memref<128xf32, 2>, memref<1xi32>
      affine.dma_wait %11[%c0], %c128_1 : memref<1xi32>
      affine.for %arg5 = #map3(%arg3) to #map5(%arg3) {
        affine.for %arg6 = #map3(%arg4) to #map5(%arg4) {
          %12 = affine.load %7[-%arg3 + %arg5, -%arg4 + %arg6] : memref<128x128xf32, 2>
          %13 = affine.load %10[-%arg4 + %arg6] : memref<128xf32, 2>
          %14 = affine.load %1[-%arg3 + %arg5] : memref<128xf32, 2>
          %15 = mulf %12, %13 : f32
          %16 = addf %14, %15 : f32
          affine.store %16, %1[-%arg3 + %arg5] : memref<128xf32, 2>
        }
      }
      dealloc %11 : memref<1xi32>
      dealloc %10 : memref<128xf32, 2>
      dealloc %8 : memref<1xi32>
      dealloc %7 : memref<128x128xf32, 2>
    }
    %4 = affine.apply #map3(%arg3)
    affine.dma_start %1[%c0], %arg2[%arg3], %3[%c0], %c128 : memref<128xf32, 2>, memref<4096xf32>, memref<1xi32>
    affine.dma_wait %3[%c0], %c128 : memref<1xi32>
    dealloc %3 : memref<1xi32>
    dealloc %2 : memref<1xi32>
    dealloc %1 : memref<128xf32, 2>
  }
```

Signed-off-by: Uday Bondhugula <uday@polymagelabs.com>

Closes tensorflow/mlir#50

PiperOrigin-RevId: 261221903
2019-08-01 16:31:58 -07:00