Commit Graph

Matt Arsenault dc6c78596b GlobalISel: Implement fewerElementsVector for select
llvm-svn: 352601
2019-01-30 04:19:31 +00:00
Craig Topper a8710a6e7e [IR] Use CallBase to simplify some code
Summary:
This patch does the following to simplify the asm-goto patch

- Move isInlineAsm from CallInst to CallBase to share with CallBrInst in the asm-goto patch.
- Forward CallSite's data_operands_begin()/data_operands_end() to CallBase's implementation.
- Forward CallSite's getOperandBundlesAsDefs to CallBase.

Reviewers: chandlerc

Reviewed By: chandlerc

Subscribers: nickdesaulniers, llvm-commits

Differential Revision: https://reviews.llvm.org/D57415

llvm-svn: 352600
2019-01-30 03:43:41 +00:00
Heejin Ahn d6f487863d [WebAssembly] Exception handling: Switch to the new proposal
Summary:
This switches the EH implementation to the new proposal:
https://github.com/WebAssembly/exception-handling/blob/master/proposals/Exceptions.md
(The previous proposal was
 https://github.com/WebAssembly/exception-handling/blob/master/proposals/old/Exceptions.md)

- Instruction changes
  - Now we have one single `catch` instruction that returns an except_ref
    value
  - `throw` can now take a variable number of operands
  - `rethrow` no longer has a 'depth' argument
  - `br_on_exn` queries an except_ref to see if it matches the tag and
    branches to the given label if true.
  - `extract_exception` is a pseudo instruction that simulates popping
    values from the wasm stack. This is to make `br_on_exn`, a very special
    instruction, work: `br_on_exn` puts values onto the stack only if it
    is taken, and the number of values can vary depending on the tag.

- Now that there's only one `catch` per `try`, this patch removes all special
  handling for terminate pads with a call to `__clang_call_terminate`.
  Before, that was the only case in which there were two catch clauses (a
  normal `catch` and a `catch_all`) per `try`.

- Make `rethrow` act as a terminator like `throw`. This splits the BB after
  `rethrow` in WasmEHPrepare, and deletes an unnecessary `unreachable`
  after `rethrow` in LateEHPrepare.

- Because we now stop at all catchpads (we add a wasm `catch` instruction
  that catches all exceptions), this patch creates a new
  `findWasmUnwindDestinations` function in SelectionDAGBuilder.

- Now we use the `br_on_exn` instruction to figure out whether an except_ref
  matches the current tag or not; LateEHPrepare generates this sequence
  for catch pads:
```
  catch
  block i32
  br_on_exn $__cpp_exception
  end_block
  extract_exception
```

- Branch analysis for `br_on_exn` in WebAssemblyInstrInfo

- Other various misc. changes to switch to the new proposal.

Reviewers: dschuff

Subscribers: sbc100, jgravelle-google, sunfish, llvm-commits

Differential Revision: https://reviews.llvm.org/D57134

llvm-svn: 352598
2019-01-30 03:21:57 +00:00
Matt Arsenault 6d8e1b456a GlobalISel: Use appropriate extension for legalizing select conditions
llvm-svn: 352597
2019-01-30 02:57:43 +00:00
Sam Clegg fc37198df1 Add enum values to CodeGenOpt::Level
The absolute values of this enum are important, not least because
they get printed by SelectionDAGISel, e.g.:
  `Before: -O2 ; After: -O0`
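
For reference, a sketch of the enum with the values pinned down as described;
not a verbatim copy of llvm/Support/CodeGen.h:

```
namespace CodeGenOpt {
// Explicit values, so the numbers SelectionDAGISel prints are meaningful.
enum Level {
  None = 0,      // -O0
  Less = 1,      // -O1
  Default = 2,   // -O2, -Os
  Aggressive = 3 // -O3
};
} // namespace CodeGenOpt
```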

Differential Revision: https://reviews.llvm.org/D57430

llvm-svn: 352587
2019-01-30 02:08:34 +00:00
Max Kazantsev 23e642248d [NFC] Use ArrayRef instead of SmallVectorImpl where possible
llvm-svn: 352466
2019-01-29 09:39:15 +00:00
Matt Arsenault cdd191d9db AMDGPU: Add DS append/consume intrinsics
Since these pass the pointer in m0 unlike other DS instructions, these
need to worry about whether the address is uniform or not. This
assumes the address is dynamically uniform, and just uses
readfirstlane to get a copy into an SGPR.

I don't know if these have the same 16-bit add for the addressing mode
offset problem on SI or not, but I've just assumed they do.

Also includes some misc. changes to avoid test differences between the
LDS and GDS versions.

llvm-svn: 352422
2019-01-28 20:14:49 +00:00
Jessica Paquette 9f6afad913 [GlobalISel] Add G_FSIN and G_FCOS generic instructions
This introduces generic instructions for floating point sin and cos, G_FCOS and
G_FSIN. It updates the tests, etc.
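
As an illustration only (not part of this patch): a hypothetical target's
LegalizerInfo might mark the new opcodes as libcalls, assuming the GlobalISel
rule-builder API of this era:

```
#include "llvm/CodeGen/GlobalISel/LegalizerInfo.h"
#include "llvm/CodeGen/TargetOpcodes.h"
using namespace llvm;

// Hypothetical target legalizer rules for the new opcodes.
struct MyLegalizerInfo : LegalizerInfo {
  MyLegalizerInfo() {
    const LLT S32 = LLT::scalar(32);
    const LLT S64 = LLT::scalar(64);
    getActionDefinitionsBuilder({TargetOpcode::G_FSIN, TargetOpcode::G_FCOS})
        .libcallFor({S32, S64}) // emit calls to sinf/cosf and sin/cos
        .scalarize(0);          // break vectors down to scalars first
    computeTables();
  }
};
```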

https://reviews.llvm.org/D57197
1/3

llvm-svn: 352400
2019-01-28 18:34:16 +00:00
Alina Sbirlea 3d1d95ca55 [AliasSetTracker] Update signature to aliasesPointer [NFCI].
llvm-svn: 352399
2019-01-28 18:30:05 +00:00
Michael Berg 685d5f675e [NFC] TLI query with default(on) behavior wrt DAG combines for fmin/fmax target control
llvm-svn: 352396
2019-01-28 18:03:08 +00:00
Tim Corringham 824ca3f3dd [AMDGPU] Add intrinsics for 16 bit interpolation
Summary:
Added the intrinsics llvm.amdgcn.interp.p1.f16() and
llvm.amdgcn.interp.p2.f16() and related LIT test.

The p1 intrinsic generates code appropriate for both 16- and 32-bank LDS.

Reviewers: #amdgpu, dstuttard, arsenm, tpr

Reviewed By: #amdgpu, arsenm

Subscribers: jvesely, mgorny, arsenm, kzhuravl, wdng, nhaehnle, yaxunl, dstuttard, tpr, t-tye, llvm-commits

Differential Revision: https://reviews.llvm.org/D46754

llvm-svn: 352357
2019-01-28 13:48:59 +00:00
James Y Knight 575c0855c0 [opaque pointer types] Remove GraphTraits specialization for Type.
The only caller has been deleted in r352076, and I'd like to minimize
the amount of code walking Type hierarchies generically, to make it
easier to identify code depending on pointee types.

llvm-svn: 352353
2019-01-28 13:25:57 +00:00
Craig Topper 453150bc18 [X86] Add new variadic avx512 compress/expand intrinsics that use vXi1 types for the mask argument.
Remove and autoupgrade the old intrinsics

llvm-svn: 352343
2019-01-28 07:03:03 +00:00
Matt Arsenault 816c9b3e25 GlobalISel: Factor fewerElementsVector into separate functions
llvm-svn: 352332
2019-01-27 21:53:09 +00:00
Martin Storsjo e5eb6fb950 [COFF] Add new relocation types.
Differential Revision: https://reviews.llvm.org/D57291

llvm-svn: 352324
2019-01-27 19:53:36 +00:00
Simon Pilgrim adca820927 [TTI] Add generic SADDSAT/SSUBSAT costs
Add generic cost calculation for the SADDSAT/SSUBSAT intrinsics. This uses the generic costs for sadd_with_overflow/ssub_with_overflow, plus an extra sign comparison and selects based on the sign/overflow.

This completes PR40316.
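
For intuition, a scalar C++ model of the expansion being costed;
`__builtin_add_overflow` stands in for sadd_with_overflow:

```
#include <cstdint>

int32_t saddsat(int32_t x, int32_t y) {
  int32_t Sum;
  bool Ov = __builtin_add_overflow(x, y, &Sum); // sadd_with_overflow
  // Extra sign comparison: overflow saturates toward the sign of x
  // (signed overflow requires both operands to share x's sign).
  int32_t Sat = (x < 0) ? INT32_MIN : INT32_MAX;
  return Ov ? Sat : Sum; // select on the overflow bit
}
```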

Differential Revision: https://reviews.llvm.org/D57239

llvm-svn: 352315
2019-01-27 13:51:59 +00:00
Thomas Preud'homme 5cb1193075 Revert "Add support for prefix-only CLI options"
This reverts commit r351038.

llvm-svn: 352310
2019-01-27 09:02:46 +00:00
Matt Arsenault 211e89d4dd GlobalISel: Implement narrowScalar for mul
llvm-svn: 352300
2019-01-27 00:52:51 +00:00
Craig Topper 3b5e01b386 [X86] Remove and autoupgrade vpconflict intrinsics that take a mask and passthru argument.
We have unmasked versions as of r352172

llvm-svn: 352270
2019-01-26 06:27:01 +00:00
Craig Topper 6c9c7d0796 [X86] Remove GCCBuiltins from 512-bit cvt(u)qqtops, cvt(u)qqtopd, and cvt(u)dqtops intrinsics. Add new variadic uitofp/sitofp with rounding mode intrinsics.
Summary: See clang patch D56998 for a full description.

Reviewers: RKSimon, spatel

Reviewed By: RKSimon

Subscribers: llvm-commits

Differential Revision: https://reviews.llvm.org/D56999

llvm-svn: 352266
2019-01-26 02:41:54 +00:00
Matt Arsenault cdc201fcde GlobalISel: Fix address space limit in LLT
The IR enforced limit for the address space is 24-bits, but LLT was
only using 23-bits. Additionally, the argument to the constructor was
truncating to 16-bits.

A similar problem still exists for the number of vector elements. The
IR enforces no limit, so if you try to use a vector with > 65535
elements the IRTranslator asserts in the LLT constructor.
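
An illustrative standalone sketch of the two mismatches described; the field
widths follow the description, not LLT's actual layout:

```
#include <cassert>
#include <cstdint>

struct PtrTyBefore {
  uint64_t AddrSpace : 23;                    // one bit short of the IR's 24-bit limit
  PtrTyBefore(uint16_t AS) : AddrSpace(AS) {} // ctor argument truncated to 16 bits
};

struct PtrTyAfter {
  uint64_t AddrSpace : 24;                    // matches the IR-enforced limit
  PtrTyAfter(unsigned AS) : AddrSpace(AS) {}
};

int main() {
  assert(PtrTyAfter(1u << 20).AddrSpace == (1u << 20));  // round-trips
  assert(PtrTyBefore(1u << 20).AddrSpace != (1u << 20)); // lost to truncation
}
```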

llvm-svn: 352264
2019-01-26 01:42:13 +00:00
Artem Belevich dfad526943 [NVPTX] Some nvvm.read.ptx.sreg intrinsics should have IntrInaccessibleMemOnly attribute.
These intrinsics may return different values every time they are called
and should not be CSE'd. IntrInaccessibleMemOnly appears to be the right
attribute to model this behavior.

Differential Revision: https://reviews.llvm.org/D57259

llvm-svn: 352256
2019-01-26 00:28:32 +00:00
Vedant Kumar 13ef84fced [MC] Teach the MachO object writer about N_FUNC_COLD
N_FUNC_COLD is a new MachO symbol attribute. It's a hint to the linker
to order a symbol towards the end of its section, to improve locality.

Example:

```
void a1() {}
__attribute__((cold)) void a2() {}
void a3() {}
int main() {
  a1();
  a2();
  a3();
  return 0;
}
```

A linker that supports N_FUNC_COLD will order _a2 to the end of the text
section. From `nm -njU` output, we see:

```
_a1
_a3
_main
_a2
```

Differential Revision: https://reviews.llvm.org/D57190

llvm-svn: 352227
2019-01-25 18:30:22 +00:00
Clement Courbet b120127001 Revert r351954 "Add a value_type to ArrayRef."
This breaks arm self-hosted buildbots.

llvm-svn: 352206
2019-01-25 15:25:52 +00:00
Sam McCall 1e7491ea9c [JSON] Work around excess-precision issue when comparing T_Integer numbers.
Reviewers: bkramer

Subscribers: kristina, llvm-commits

Differential Revision: https://reviews.llvm.org/D57237

llvm-svn: 352204
2019-01-25 15:05:33 +00:00
Simon Pilgrim d6e1e3569c Fix line endings and trim trailing whitespace. NFCI.
llvm-svn: 352198
2019-01-25 14:29:57 +00:00
Javed Absar a3e3d85286 [TblGen] Extend !if semantics through new feature !cond
This patch extends the TableGen language with a !cond operator.
Instead of embedding !if inside !if, which can get cumbersome,
one can now use !cond.
Below is an example that converts an integer 'x' into a string:

    !cond(!lt(x,0) : "Negative",
          !eq(x,0) : "Zero",
          !eq(x,1) : "One",
          1        : "MoreThanOne")

Reviewed By: hfinkel, simon_tatham, greened
Differential Revision: https://reviews.llvm.org/D55758

llvm-svn: 352185
2019-01-25 10:25:25 +00:00
Craig Topper 6fd9af587a [X86] Add non-masked versions of vpconflict intrinsics so we can use a select in the header file in clang.
I'll remove and autoupgrade the old intrinsics in a future commit.

llvm-svn: 352172
2019-01-25 07:08:07 +00:00
Matt Arsenault 1b1e685f10 GlobalISel: Support fewerElementsVector for icmp/fcmp
Also legalize 64-bit compares for AMDGPU

llvm-svn: 352157
2019-01-25 02:59:34 +00:00
Matt Arsenault ca676343a9 GlobalISel: Implement fewerElementsVector for extensions
llvm-svn: 352155
2019-01-25 02:36:32 +00:00
Matt Arsenault 990f507704 GlobalISel: Add convenience mutations to scalarize
llvm-svn: 352143
2019-01-25 00:51:00 +00:00
Matt Arsenault 7ba2d82c34 GlobalISel: Add helper to LLT to get a scalar or vector
llvm-svn: 352136
2019-01-25 00:10:49 +00:00
Aditya Nandakumar 3ba0d94bce [GISel]: Change how CSE is enabled by default for each pass
https://reviews.llvm.org/D57178

This adds a hook in TargetPassConfig to query whether CSE should be
enabled. By default this hook returns false only at the O0 opt level, but
this can be overridden by the target.
Because CSE is now enabled by default above O0, a few tests needed their
run lines updated to disable CSE (by passing in -O0).

reviewed by: arsenm

llvm-svn: 352126
2019-01-24 23:11:25 +00:00
Matt Arsenault baa5d2e69c RegBankSelect: Support some more complex part mappings
llvm-svn: 352123
2019-01-24 22:47:04 +00:00
Roman Lebedev a95a7105ef [IRBuilder] Remove positivity check from CreateAlignmentAssumption()
Summary:
An alignment must be a non-zero positive power of two; anything and
everything else is UB. We should not be checking all these prerequisites
here; violating them is simply UB.
Also, that check was likely confusing middle-end passes.

While there, `CreateIntCast()` should be called with `/*isSigned*/ false`.
Think about it: there are two explanations. "An alignment should be positive",
therefore the sign bit is unset, so `zext` and `sext` are equivalent.
Or a second one: you have `i2 0b10` - a valid alignment;
now you `sext` it to `i3 0b110` - no longer a valid alignment.

Reviewers: craig.topper, jyknight, hfinkel, erichkeane, rjmccall

Reviewed By: hfinkel, rjmccall

Subscribers: hfinkel, llvm-commits

Differential Revision: https://reviews.llvm.org/D54653

llvm-svn: 352089
2019-01-24 19:32:48 +00:00
James Y Knight 2c36240a82 Fix emission of _fltused for MSVC.
It should be emitted when any floating-point operations (including
calls) are present in the object, not just when calls to printf/scanf
with floating point args are made.

The difference caused by this is very subtle: in static (/MT) builds,
on x86-32, in a program that uses floating point but doesn't print it,
the default x87 rounding mode may not be set properly upon
initialization.

This commit also removes the walk of the types pointed to by pointer
arguments in calls. (To assist in opaque pointer types migration --
eventually the pointee type won't be available.)

The latter implies that it will no longer consider a call like
`scanf("%f", &floatvar)` as sufficient to emit _fltused on its
own. And without _fltused, `scanf("%f")` will abort with error R6002. This
new behavior is unlikely to bite anyone in practice (you'd have to
read a float, and do nothing with it!), and is also consistent with
MSVC.

Differential Revision: https://reviews.llvm.org/D56548

llvm-svn: 352076
2019-01-24 18:34:00 +00:00
Julian Lettner b62e9dc46b Revert "[Sanitizers] UBSan unreachable incompatible with ASan in the presence of `noreturn` calls"
This reverts commit cea84ab93a.

llvm-svn: 352069
2019-01-24 18:04:21 +00:00
Simon Pilgrim 2f018de6a3 [TargetLowering] Rename getExpandedFixedPointMultiplication to expandFixedPointMul. NFCI.
Match the (much shorter) name used in various legalization methods.

llvm-svn: 352056
2019-01-24 15:46:54 +00:00
Simon Pilgrim 47ca8606ba [TTI] Add generic SADDO/SSUBO costs
Added x86 scalar sadd_with_overflow/ssub_with_overflow costs.

llvm-svn: 352045
2019-01-24 13:36:45 +00:00
Simon Pilgrim a131e4e296 [TTI] Add generic UADDSAT/USUBSAT costs
Add generic cost calculation for the UADDSAT/USUBSAT intrinsics. This falls back to using the generic costs for uadd_with_overflow/usub_with_overflow plus a select.
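
As with SADDSAT/SSUBSAT above, a scalar model of the costed expansion; the
unsigned case needs only the overflow op plus a single select:

```
#include <cstdint>

uint32_t uaddsat(uint32_t x, uint32_t y) {
  uint32_t Sum;
  // uadd_with_overflow, then one select on the overflow bit.
  return __builtin_add_overflow(x, y, &Sum) ? UINT32_MAX : Sum;
}

uint32_t usubsat(uint32_t x, uint32_t y) {
  uint32_t Diff;
  // usub_with_overflow: clamp to zero on wrap-around.
  return __builtin_sub_overflow(x, y, &Diff) ? 0 : Diff;
}
```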

Differential Revision: https://reviews.llvm.org/D56907

llvm-svn: 352044
2019-01-24 12:27:10 +00:00
Simon Pilgrim 2d1964b90f [TTI] Add generic UADDO/USUBO costs
Added x86 scalar uadd_with_overflow/usub_with_overflow costs.

Differential Revision: https://reviews.llvm.org/D56907

llvm-svn: 352043
2019-01-24 12:10:20 +00:00
Julian Lettner cea84ab93a [Sanitizers] UBSan unreachable incompatible with ASan in the presence of `noreturn` calls
Summary:
UBSan wants to detect when unreachable code is actually reached, so it
adds instrumentation before every `unreachable` instruction. However,
the optimizer will remove code after calls to functions marked with
`noreturn`. To avoid this UBSan removes `noreturn` from both the call
instruction as well as from the function itself. Unfortunately, ASan
relies on this annotation to unpoison the stack by inserting calls to
`_asan_handle_no_return` before `noreturn` functions. This is important
for functions that do not return but access the stack memory, e.g.,
unwinder functions *like* `longjmp` (`longjmp` itself is actually
"double-proofed" via its interceptor). The result is that when ASan and
UBSan are combined, the `noreturn` attributes are missing and ASan
cannot unpoison the stack, so it has false positives when stack
unwinding is used.

Changes:
  1. UBSan now adds the `expect_noreturn` attribute whenever it removes
     the `noreturn` attribute from a function
  2. ASan additionally checks for the presence of this attribute

Generated code:
```
call void @__asan_handle_no_return    // Additionally inserted to avoid false positives
call void @longjmp
call void @__asan_handle_no_return
call void @__ubsan_handle_builtin_unreachable
unreachable
```

The second call to `__asan_handle_no_return` is redundant. This will be
cleaned up in a follow-up patch.

rdar://problem/40723397

Reviewers: delcypher, eugenis

Tags: #sanitizers

Differential Revision: https://reviews.llvm.org/D56624

llvm-svn: 352003
2019-01-24 01:06:19 +00:00
David Callahan d2eeb2516d Update entry count for cold calls
Summary:
Profile sample files include the number of times each entry or inlined
call site is sampled. This is translated into the entry count metadata
on functions.

When sample data is being read, if a call site that was inlined
in the sample program is considered cold and not inlined, then
the entry count of the out-of-line functions does not reflect
the current compilation.

In this patch, we note call sites where the function was not inlined
and as a last action of the sample profile loading, we update the
called function's entry count to reflect the calls from these
call sites which are not included in the profile file.

Reviewers: danielcdh, wmi, Kader, modocache

Reviewed By: wmi

Subscribers: davidxl, eraman, llvm-commits

Differential Revision: https://reviews.llvm.org/D52845

llvm-svn: 352001
2019-01-24 00:55:23 +00:00
Mircea Trofin ec02630278 [llvm] Clarify responsibility of some of DILocation discriminator APIs
Summary:
Renamed setBaseDiscriminator to cloneWithBaseDiscriminator, to match
similar APIs. Also changed its behavior to copy over the other
discriminator components, instead of eliding them.

Renamed cloneWithDuplicationFactor to
cloneByMultiplyingDuplicationFactor, which more closely matches what
this API does.
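
A hedged usage sketch of the renamed APIs; the Optional-style returns and the
helper `retag` are illustrative assumptions, not quoted from the patch:

```
#include "llvm/IR/DebugInfoMetadata.h"
#include "llvm/IR/Instruction.h"
using namespace llvm;

void retag(Instruction &I, unsigned BaseDisc, unsigned Factor) {
  const DILocation *Loc = I.getDebugLoc();
  if (!Loc)
    return;
  // Copies over the other discriminator components instead of eliding them.
  if (auto NewLoc = Loc->cloneWithBaseDiscriminator(BaseDisc))
    I.setDebugLoc(*NewLoc);
  // Scales the duplication-factor component, e.g. after unrolling by Factor.
  if (auto Dup = Loc->cloneByMultiplyingDuplicationFactor(Factor))
    I.setDebugLoc(*Dup);
}
```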

Reviewers: dblaikie, wmi

Reviewed By: dblaikie

Subscribers: zzheng, llvm-commits

Differential Revision: https://reviews.llvm.org/D56220

llvm-svn: 351996
2019-01-24 00:10:25 +00:00
Reid Kleckner e80799e6af [ADT] Notify ilist traits about in-list transfers
Summary:
Previously, no client of ilist traits needed to know about transfers
of nodes within the same list, so as an optimization, ilist did not call
transferNodesFromList in that case. However, now there are clients that
want to use ilist traits to cache instruction ordering information to
optimize dominance queries of instructions in the same basic block.
This change updates the existing ilist traits users to detect in-list
transfers and do nothing in that case.

After this change, we can start caching instruction ordering information
in LLVM IR data structures. There are two main ways to do that:
- by putting an order integer into the Instruction class
- by maintaining order integers in a hash table on BasicBlock

I plan to implement and measure both, but I wanted to commit this change
first to enable other out of tree ilist clients to implement this
optimization as well.
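
A standalone analogue of the pattern (not the LLVM API itself); traits that
only maintain per-node parent/membership bookkeeping can ignore the in-list
case:

```
#include <list>

struct Block {
  std::list<int> Insts;

  // Mirrors ilist's transferNodesFromList hook, which after this change
  // also fires for transfers within the same list.
  void transferNodesFromList(Block &Src,
                             std::list<int>::iterator First,
                             std::list<int>::iterator Last) {
    if (this == &Src)
      return; // in-list transfer: parent/membership is unchanged
    // Cross-list move: update whatever is keyed on the owning list here.
    (void)First;
    (void)Last;
  }
};
```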

Reviewers: lattner, hfinkel, chandlerc

Subscribers: hiraditya, dexonsmith, llvm-commits

Differential Revision: https://reviews.llvm.org/D57120

llvm-svn: 351992
2019-01-23 22:59:52 +00:00
Andrea Di Biagio d768d35515 [MC][X86] Correctly model additional operand latency caused by transfer delays from the integer to the floating point unit.
This patch adds a new ReadAdvance definition named ReadInt2Fpu.
ReadInt2Fpu allows x86 scheduling models to accurately describe delays caused by
data transfers from the integer unit to the floating point unit.
ReadInt2Fpu currently defaults to a delay of zero cycles (i.e. no delay) for all
x86 models excluding BtVer2. That means this patch is a functional change
for the Jaguar cpu model only.

Tablegen definitions for instructions (V)PINSR* have been updated to account for
the new ReadInt2Fpu. That read is mapped to the GPR input operand.
On Jaguar, int-to-fpu transfers are modeled as a +6cy delay. Before this patch,
that extra delay was added to the opcode latency. In practice, the insert opcode
only executes for 1cy. Most of the latency is actually contributed by the
so-called operand latency. According to the AMD SOG for family 16h, (V)PINSR*
latency is defined by the expression f+1, where f is defined as a forwarding delay
from the integer unit to the fpu.

When printing instruction latency from MCA (see InstructionInfoView.cpp) and llc
(only when the flag -print-schedule is specified), we now need to account for any
extra forwarding delays. We do this by checking if scheduling classes declare
any negative ReadAdvance entries. Quoting a code comment in TargetSchedule.td:
"A negative advance effectively increases latency, which may be used for
cross-domain stalls". When computing the instruction latency for the purpose of
our scheduling tests, we now add any extra delay to the formula. This avoids
regressing existing codegen and mca schedule tests. It comes with the cost of an
extra (but very simple) hook in MCSchedModel.

Differential Revision: https://reviews.llvm.org/D57056

llvm-svn: 351965
2019-01-23 16:35:07 +00:00
Simon Pilgrim f87226eb70 [IR] Match intrinsic parameter by scalar/vectorwidth
This patch replaces the existing LLVMVectorSameWidth matcher with LLVMScalarOrSameVectorWidth.

The matching args must be either scalars or vectors with the same number of elements, but in either case the scalar/element type can differ, specified by LLVMScalarOrSameVectorWidth.

I've updated the _overflow intrinsics to demonstrate this - allowing it to return a i1 or <N x i1> overflow result, matching the scalar/vectorwidth of the other (add/sub/mul) result type.

The masked load/store/gather/scatter intrinsics have also been updated to use this, although as we specify the reference type to be llvm_anyvector_ty we guarantee the mask will be <N x i1>, so there is no change in behaviour.

Differential Revision: https://reviews.llvm.org/D57090

llvm-svn: 351957
2019-01-23 16:00:22 +00:00
Clement Courbet c7956346da Re-land rL322538 "Add a value_type to ArrayRef."
llvm-svn: 351954
2019-01-23 14:20:59 +00:00
Brendon Cahoon 59d9973146 [Pipeliner] Add two pragmas to control software pipelining optimization
#pragma clang loop pipeline(disable)
  
    Disable SWP optimization for the next loop.
    “disable” is the only possible value.
  
#pragma clang loop pipeline_initiation_interval(number)
  
    Set the initiation interval for the SWP
    optimization of the next loop to the specified
    number, which must be a positive value
    greater than 0.
  
These pragmas can be used for debugging or for reducing
compile time. It is possible to disable SWP for
specific loops to save compilation time or to find bugs
by excluding certain loops from SWP. It is possible to set
the initiation interval to a concrete number to save
compilation time by avoiding extra pipeliner passes, or
to check the created schedule for a specific initiation interval.
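
A minimal usage sketch; the loop bodies are illustrative only:

```
void no_swp(int *a, const int *b, int n) {
#pragma clang loop pipeline(disable) // do not software-pipeline this loop
  for (int i = 0; i < n; ++i)
    a[i] += b[i];
}

void fixed_ii(int *a, const int *b, int n) {
#pragma clang loop pipeline_initiation_interval(10) // force II = 10
  for (int i = 0; i < n; ++i)
    a[i] += b[i];
}
```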

This is the LLVM part of the fix.

The Clang part of the fix: https://reviews.llvm.org/D55710

Patch by Alexey Lapshin!

Differential Revision: https://reviews.llvm.org/D56403

llvm-svn: 351923
2019-01-23 03:26:10 +00:00
Peter Collingbourne 73078ecd38 hwasan: Move memory access checks into small outlined functions on aarch64.
Each hwasan check requires emitting a small piece of code like this:
https://clang.llvm.org/docs/HardwareAssistedAddressSanitizerDesign.html#memory-accesses

The problem with this is that these code blocks typically bloat code
size significantly.

An obvious solution is to outline these blocks of code. In fact, this
has already been implemented under the -hwasan-instrument-with-calls
flag. However, as currently implemented this has a number of problems:
- The functions use the same calling convention as regular C functions.
  This means that the backend must spill all temporary registers as
  required by the platform's C calling convention, even though the
  check only needs two registers on the hot path.
- The functions take the address to be checked in a fixed register,
  which increases register pressure.
Both of these factors can diminish the code size effect and increase
the performance hit of -hwasan-instrument-with-calls.

The solution that this patch implements is to involve the aarch64
backend in outlining the checks. An intrinsic and pseudo-instruction
are created to represent a hwasan check. The pseudo-instruction
is register allocated like any other instruction, and we allow the
register allocator to select almost any register for the address to
check. A particular combination of (register selection, type of check)
triggers the creation in the backend of a function to handle the check
for specifically that pair. The resulting functions are deduplicated by
the linker. The pseudo-instruction (really the function) is specified
to preserve all registers except for the registers that the AAPCS
specifies may be clobbered by a call.

To measure the code size and performance effect of this change, I
took a number of measurements using Chromium for Android on aarch64,
comparing a browser with inlined checks (the baseline) against a
browser with outlined checks.

Code size: Size of .text decreases from 243897420 to 171619972 bytes,
or a 30% decrease.

Performance: Using Chromium's blink_perf.layout microbenchmarks I
measured a median performance regression of 6.24%.

The fact that a perf/size tradeoff is evident here suggests that
we might want to make the new behaviour conditional on -Os/-Oz.
But for now I've enabled it unconditionally, my reasoning being that
hwasan users typically expect a relatively large perf hit, and ~6%
isn't really adding much. We may want to revisit this decision in
the future, though.

I also tried experimenting with varying the number of registers
selectable by the hwasan check pseudo-instruction (which would result
in fewer variants being created), on the hypothesis that creating
fewer variants of the function would expose another perf/size tradeoff
by reducing icache pressure from the check functions at the cost of
register pressure. Although I did observe a code size increase with
fewer registers, I did not observe a strong correlation between the
number of registers and the performance of the resulting browser on the
microbenchmarks, so I conclude that we might as well use ~all registers
to get the maximum code size improvement. My results are below:

Regs | .text size | Perf hit
-----+------------+---------
~all | 171619972  | 6.24%
  16 | 171765192  | 7.03%
   8 | 172917788  | 5.82%
   4 | 177054016  | 6.89%

Differential Revision: https://reviews.llvm.org/D56954

llvm-svn: 351920
2019-01-23 02:20:10 +00:00