There's a data race on the UninterestingChunks set. The code assumes
all the tasks have completed, so ensure the unused results do
complete. This started showing up about 50% of the time when running
operands-skip-parallel.ll after the recent switch to DenseSet;
previously it failed much less frequently with std::set.
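A minimal sketch of the shape of the fix, assuming each chunk job is
handed out as a std::future (the names here are illustrative, not the
actual llvm-reduce scheduling code):

```cpp
#include <future>
#include <vector>

// Jobs may insert into the shared UninterestingChunks set from their
// worker threads, so the main thread must wait for every job, including
// the ones whose results it will discard, before reading the set.
void drainPendingJobs(std::vector<std::future<bool>> &PendingJobs) {
  for (std::future<bool> &Job : PendingJobs)
    Job.wait(); // unused results must still run to completion
  // Only now is it safe to iterate over UninterestingChunks.
}
```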
We should introduce a mechanism to terminate unused results
early. Alternatively, I've been thinking about ways to make the
reduction order smarter. I frequently have tests that take multiple
minutes to compile and hit the failure. It may be helpful to see which
chunks took the least time and prefer those over just taking the first
result.
Most tools accept .ll or .bc inputs interchangeably, but some don't.
Default to writing temporary files that match the input. This
will also aid reducing deserialization bugs.
Previously, this unconditionally emitted text IR. I ran
into a bug that manifested in broken disassembly, so the
desired output was the bitcode format. Emit bitcode if the
input format was binary bitcode, the requested output file
ends in .bc, or an explicit -output-bitcode option was used.
This was also trying to write the bitcode to the failed output
file on failure, which asserts. Also, consistently use
ToolOutputFile instead of one path manually removing
the temp file.
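A hedged sketch of the format decision and the ToolOutputFile usage
described above; the helper and parameter names are illustrative, not
the tool's exact code:

```cpp
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/IR/Module.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/ToolOutputFile.h"

using namespace llvm;

static void writeOutput(Module &M, StringRef OutputFilename,
                        bool InputWasBitcode, bool OutputBitcodeFlag) {
  // Match the input format unless bitcode was requested explicitly.
  // (ends_with is spelled endswith in older LLVM releases.)
  bool EmitBitcode = InputWasBitcode || OutputBitcodeFlag ||
                     OutputFilename.ends_with(".bc");
  std::error_code EC;
  ToolOutputFile Out(OutputFilename, EC, sys::fs::OF_None);
  if (EC)
    return; // the real code reports the error; never write to a failed file
  if (EmitBitcode)
    WriteBitcodeToFile(M, Out.os());
  else
    M.print(Out.os(), /*AAW=*/nullptr);
  Out.keep(); // the open and the write succeeded, so keep the file
}
```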
Each delta pass run should have guaranteed the output is still
interesting, so it is pointless to recheck this each
iteration. I have many issues that take multiple minutes
to reproduce, so this ends up being a huge waste of time.
Also, remove broken line counting. This never worked, since
getLines was failing to open the temporary file, which had just
been deleted.
We inconsistently use outs() or errs(), which makes test logs confusing.
We also inconsistently add a blank line afterward.
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D136130
Without this patch, the assertion below triggers on the test case,
because we are using different oracles for the verification.
Assertion failed: (Targets == NoChunksCounter.count() && "number of chunks changes when reducing"), function runDeltaPass, file Delta.cpp, line 272.
This should cut down on the visual noise when reducing. Still keep output when we run a pass or when we successfully reduce.
Notably, this also suppresses redirecting the test output to stdout/stderr.
Reviewed By: regehr
Differential Revision: https://reviews.llvm.org/D131922
This was done by adding --abort-on-invalid-reduction to remove-function-bodies-used-in-globals.ll and fixing the fallout.
Aliases must have a GlobalValue or ConstantExpr aliasee, and the aliasee must be a definition if it's a GlobalValue.
Don't RAUW a function with null if an alias points to it, and similarly don't delete the body of such a function.
Don't delete the entire body of a function when reducing blocks, preserve at least one block.
Also make debugging these sorts of things easier by dumping the module when --abort-on-invalid-reduction triggers.
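A rough sketch of the alias check implied by the rules above (this
mirrors the idea, not the pass's exact code; a fuller check would also
look through ConstantExpr users):

```cpp
#include "llvm/IR/Function.h"
#include "llvm/IR/GlobalAlias.h"

using namespace llvm;

// An alias needs a defined aliasee, so a function with alias users must
// keep its body and must not be RAUW'd with null.
static bool hasDirectAliasUser(const Function &F) {
  for (const User *U : F.users())
    if (isa<GlobalAlias>(U))
      return true; // deleting F's body would leave a dangling alias
  return false;
}
```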
Reviewed By: regehr
Differential Revision: https://reviews.llvm.org/D131505
Adds support for reading and writing LTO bitcode files (a rough sketch follows the list):
- Emit a summary if the original bitcode file had a summary.
- Use split LTO units if the original bitcode file used them.
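A hedged sketch of the writing side; how the summary index is rebuilt
for the reduced module is elided, and `Index` is a stand-in name:

```cpp
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/IR/Module.h"
#include "llvm/IR/ModuleSummaryIndex.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

static void writeLTOBitcode(const Module &M, raw_ostream &OS,
                            const ModuleSummaryIndex *Index) {
  // Passing a non-null index makes WriteBitcodeToFile emit a summary
  // block, as an LTO producer would. Whether split LTO units are in use
  // can be read back from the "EnableSplitLTOUnit" module flag.
  WriteBitcodeToFile(M, OS, /*ShouldPreserveUseListOrder=*/false, Index);
}
```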
Reviewed By: arsenm
Differential Revision: https://reviews.llvm.org/D127168
The current testcase I'm trying to reduce only reproduces with IPRA
enabled and requires handling multiple functions.
The only real difference vs. the IR is the extra indirection to find
the underlying MachineFunction, so treat the ReduceWorkItem as the
module instead of the function.
The ugliest piece of this is MachineModuleInfo. It not only tracks
actual module state, but has a number of transient fields used for
isel and/or the asm printer. These
shouldn't do any harm for the use here, though they should be
separated out.
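Roughly, the work item ends up shaped like this (the struct name is
illustrative):

```cpp
#include "llvm/CodeGen/MachineModuleInfo.h"
#include "llvm/IR/Module.h"
#include <memory>

// The module is the unit of work; for MIR reduction, the machine
// functions are reached through one extra indirection, the
// MachineModuleInfo owned alongside the module.
struct ReduceWorkItemSketch {
  std::unique_ptr<llvm::Module> M;
  std::unique_ptr<llvm::MachineModuleInfo> MMI; // null for plain IR input
};
```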
Previously the options category given to cl::HideUnrelatedOptions was
local to llvm-reduce.cpp and as a result only options declared in that
file were visible in the -help options listing. This was a bit
unfortunate since there were several useful options declared in other
files. This patch addresses that.
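The mechanism, sketched (the category and option names are
illustrative): declare one category visible to all files, tag options
with it, and pass it to cl::HideUnrelatedOptions.

```cpp
#include "llvm/Support/CommandLine.h"

using namespace llvm;

// A shared category; options declared in any file can reference it.
static cl::OptionCategory LLVMReduceCategory("llvm-reduce options");

static cl::opt<bool> ExampleOption("example-option",
                                   cl::desc("An option declared elsewhere"),
                                   cl::cat(LLVMReduceCategory));

int main(int argc, char **argv) {
  // Hide everything except options in our category from -help.
  cl::HideUnrelatedOptions({&LLVMReduceCategory});
  cl::ParseCommandLineOptions(argc, argv);
  return 0;
}
```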
Differential Revision: https://reviews.llvm.org/D118682
This patch adds parallel processing of chunks. When reducing very large
inputs, e.g. functions with 500k basic blocks, processing chunks in
parallel can significantly speed up the reduction.
To allow modifying clones of the original module in parallel, each clone
needs its own LLVMContext object. To achieve this, each job parses the
input module with its own LLVMContext. If a job successfully reduces
the input, it serializes the resulting module as bitcode into a
result array.
To ensure parallel reduction produces the same results as serial
reduction, only the first successfully reduced result is used, and
results of other successful jobs are dropped. Processing resumes after
the chunk that was successfully reduced.
The number of threads to use can be configured using the -j option.
It defaults to 1, which means serial processing.
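A hedged sketch of the per-job flow (names hypothetical; the reduction
and interestingness-test plumbing is elided):

```cpp
#include "llvm/ADT/SmallString.h"
#include "llvm/Bitcode/BitcodeWriter.h"
#include "llvm/IR/LLVMContext.h"
#include "llvm/IR/Module.h"
#include "llvm/IRReader/IRReader.h"
#include "llvm/Support/MemoryBufferRef.h"
#include "llvm/Support/SourceMgr.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

static SmallString<0> runJob(StringRef InputBitcode) {
  LLVMContext Ctx; // per-job context: nothing shared across threads
  SMDiagnostic Err;
  std::unique_ptr<Module> M =
      parseIR(MemoryBufferRef(InputBitcode, "<job input>"), Err, Ctx);
  SmallString<0> Result;
  // ... apply this chunk's reduction and run the interestingness test ...
  if (M /* && the reduced module is still interesting */) {
    raw_svector_ostream OS(Result);
    WriteBitcodeToFile(*M, OS); // the context dies with the job; only
                                // the bitcode blob outlives it
  }
  return Result;
}
```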
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D113857
This patch moves the logic to clone and check a new chunk into a new
function, to allow re-use in a follow-up patch that implements parallel
reductions.
Reviewed By: dblaikie
Differential Revision: https://reviews.llvm.org/D113856
Textual LLVM IR files are much bigger and take longer to write to disk.
To avoid the extra cost incurred by serializing to text, this patch adds
an option to save temporary files as bitcode instead.
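A minimal sketch of such an option; the exact spelling in the tool is
an assumption here, inferred from the description rather than the source:

```cpp
#include "llvm/Support/CommandLine.h"

// Hypothetical spelling of the flag added by this patch.
static llvm::cl::opt<bool> TmpFilesAsBitcode(
    "write-tmp-files-as-bitcode",
    llvm::cl::desc("Write temporary files as bitcode instead of textual IR"),
    llvm::cl::init(false));
```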
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D113858
Having a separate counting method runs the risk of a mismatch between
the actual reduction method and the counting method.
Instead, create an Oracle that always returns true for shouldKeep(), run
the reduction, and count how many times shouldKeep() was called. The
module should not be modified if shouldKeep() always returns true.
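The trick, sketched as a standalone toy (the real Oracle carries more
state than this):

```cpp
// Keep everything and count the queries: after a full pass run, Count is
// exactly the number of reducible targets, with no separate counting
// method that could drift out of sync with the reduction itself.
struct CountingOracleSketch {
  int Count = 0;
  bool shouldKeep() {
    ++Count;     // one query per reducible target
    return true; // keep everything, so the module stays unmodified
  }
};
```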
Reviewed By: Meinersbur
Differential Revision: https://reviews.llvm.org/D113537
Sometimes if llvm-reduce is interrupted in the middle of a delta pass on
a large file, it can take quite some time for the tool to start actually
doing new work if it is restarted again on the partially-reduced file. A
lot of time ends up being spent testing large chunks when these large
chunks are very unlikely to actually pass the interestingness test. In
cases like this, the tool will complete faster if the starting
granularity is reduced to a finer amount. Thus, we introduce a command
line flag that automatically divides the chunks into smaller subsets a
fixed, user-specified number of times prior to beginning the core loop.
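In effect, with a hypothetical Chunk type standing in for the real one,
the flag does something like this before the core loop:

```cpp
#include <vector>

struct Chunk { int Begin, End; }; // half-open [Begin, End)

// Split every divisible chunk in half, Times times, so a restarted
// reduction begins with finer-grained candidates.
static std::vector<Chunk> subdivide(std::vector<Chunk> Chunks, int Times) {
  for (int I = 0; I < Times; ++I) {
    std::vector<Chunk> Next;
    for (const Chunk &C : Chunks) {
      int Mid = C.Begin + (C.End - C.Begin) / 2;
      if (Mid > C.Begin && Mid < C.End) {
        Next.push_back({C.Begin, Mid});
        Next.push_back({Mid, C.End});
      } else {
        Next.push_back(C); // single-element chunks can't be split
      }
    }
    Chunks = std::move(Next);
  }
  return Chunks;
}
```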
Reviewed By: aeubanks
Differential Revision: https://reviews.llvm.org/D112651
(Second try. Need to link against CodeGen and MC libs.)
The llvm-reduce tool has been extended to operate on MIR (import, clone and
export). Current limitation is that only a single machine function is
supported. A single reducer pass that operates on machine instructions (while
on SSA-form) has been added. Additional MIR specific reducer passes can be
added later as needed.
Differential Revision: https://reviews.llvm.org/D110527
Use Module& wherever possible.
Since every reduction immediately turns Chunks into an Oracle, directly pass Oracle instead.
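Illustrative shape of the change; the function and type names below are
stand-ins, not the exact declarations:

```cpp
// Old style: every pass received the chunk list and built its own Oracle.
//   void reduceFunctionsDeltaPass(const std::vector<Chunk> &ChunksToKeep,
//                                 Module *Program);
// New style: the driver builds the Oracle once and passes Module by
// reference, since no pass ever needs the raw chunks.
//   void reduceFunctionsDeltaPass(Oracle &O, Module &Program);
```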
Reviewed By: hans
Differential Revision: https://reviews.llvm.org/D111122
This introduces a flag that aborts if we ever reduce to IR that fails
the verifier.
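A minimal sketch of the flag; the option's spelling is
--abort-on-invalid-reduction, while the surrounding helper is
illustrative:

```cpp
#include "llvm/IR/Module.h"
#include "llvm/IR/Verifier.h"
#include "llvm/Support/CommandLine.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"

using namespace llvm;

static cl::opt<bool> AbortOnInvalidReduction(
    "abort-on-invalid-reduction",
    cl::desc("Abort if a reduction step produces IR that fails the verifier"));

static void checkReduction(const Module &M) {
  if (verifyModule(M, &errs())) { // returns true if the module is broken
    if (AbortOnInvalidReduction)
      report_fatal_error("invalid reduction");
    // Otherwise the invalid chunk is simply treated as uninteresting.
  }
}
```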
Reviewed By: swamulism, arichardson
Differential Revision: https://reviews.llvm.org/D101279
Some reduction passes may create invalid IR. I am not aware of any use
case where we would like to proceed reducing invalid IR. Various utils
used here, including CloneModule, assume the module to clone is valid
and crash otherwise.
Ideally, no reduction pass would create invalid IR, but some currently
do. ReduceInstructions can be fixed relatively easily (D86210), but
others are harder. For example, ReduceBasicBlocks may result in
invalid PHI nodes.
For now, skip the chunks. If we get to the point where all reduction
passes result in valid IR, we may want to turn this into an assertion.
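The skip, sketched; applyReduction stands in for whichever pass is
running, and the real loop has more bookkeeping:

```cpp
#include "llvm/IR/Verifier.h"
#include "llvm/Support/raw_ostream.h"
#include "llvm/Transforms/Utils/Cloning.h"

using namespace llvm;

static bool tryChunk(const Module &M) {
  std::unique_ptr<Module> Clone = CloneModule(M);
  // applyReduction(*Clone); // may currently produce invalid IR
  if (verifyModule(*Clone, &errs()))
    return false; // invalid IR: skip the chunk before anything downstream,
                  // a later CloneModule included, can crash on it
  // ... run the interestingness test on Clone ...
  return true;
}
```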
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D86212
This helps with both debugging llvm-reduce and sometimes getting useful results even if llvm-reduce crashes.
Reviewed By: lebedev.ri
Differential Revision: https://reviews.llvm.org/D85996
Summary:
If there was a single target to begin with, we couldn't increase
granularity, because a single target can only occupy a single chunk,
and we would immediately give up.
Likewise, if we started with multiple targets but ended up with a
single one, we wouldn't finish reducing it; it would always end up
being "interesting".
Reviewers: dblaikie, nickdesaulniers, diegotf
Reviewed By: dblaikie
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D84318
This is how it should've been and brings it more in line with
std::string_view. There should be no functional change here.
This is mostly mechanical from a custom clang-tidy check, with a lot of
manual fixups. It uncovers a lot of minor inefficiencies.
This doesn't actually modify StringRef yet, I'll do that in a follow-up.
This modifies the tool somewhat to only create files when about to run
the "interestingness" test, and delete them immediately after - this
means some more files will be created sometimes (when "double checking"
work - which should probably be fixed/avoided anyway).
This now creates temporary files, rather than only unique ones, and also
uses ToolOutputFile (without ever calling "keep") to ensure the files
are deleted as soon as the interestingness test is run.
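The lifetime trick, sketched: ToolOutputFile deletes its file on
destruction unless keep() is called, so never calling keep() scopes the
file to the interestingness test.

```cpp
#include "llvm/ADT/SmallString.h"
#include "llvm/Support/FileSystem.h"
#include "llvm/Support/ToolOutputFile.h"

using namespace llvm;

static void runInterestingnessTest(/* Module &M, the test command, ... */) {
  SmallString<128> Path;
  int FD;
  if (sys::fs::createTemporaryFile("llvm-reduce", "ll", FD, Path))
    return; // the real code reports the error
  ToolOutputFile Out(Path, FD);
  // ... write the candidate IR to Out.os() and run the test against Path ...
  // No Out.keep(): the temporary disappears when Out goes out of scope.
}
```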
llvm-svn: 371696
Summary:
This also changes all the outs() statements to errs() so the output and
progress streams don't get mixed.
This has been added because D64176 had flaky tests, which I believe were caused by the reduced file being catted into `FileCheck` instead of being passed from STDOUT directly.
Reviewers: chandlerc, dblaikie, xbolva00
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66314
llvm-svn: 369060
Summary: This diff also changed the check in `Delta.cpp` to verify interestingness, so it exits when the input isn't interesting.
Subscribers: llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D66251
llvm-svn: 368915
This reverts commits:
"Added Delta IR Reduction Tool"
"[Bugpoint redesign] Added Pass to Remove Global Variables"
"Added Tool as Dependency to tests & fixed warnings"
Reduce/remove-funcs.ll is failing on bots.
llvm-svn: 368122
Summary:
This pass tries to remove Global Variables, as well as their derived uses. For example, if a variable `@x` is used by `%call1` and `%call2`, both these uses and the definition of `@x` are deleted. Moreover, if `%call1` or `%call2` are used elsewhere, those uses are also deleted, and so on recursively.
I'm still uncertain if this pass should remove derived uses, I'm open to suggestions.
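A simplified sketch of the recursive deletion; real code must also cope
with constant users and use-list invalidation:

```cpp
#include "llvm/IR/Instruction.h"
#include "llvm/IR/User.h"
#include "llvm/IR/Value.h"

using namespace llvm;

// Delete every instruction that transitively uses V, depth-first, so V
// itself ends up unused and can then be erased by the caller (e.g. with
// GV->eraseFromParent() for a global variable).
static void deleteDerivedUses(Value *V) {
  while (!V->use_empty()) {
    User *U = V->user_back();
    if (auto *I = dyn_cast<Instruction>(U)) {
      deleteDerivedUses(I); // drop the users of this user first
      I->eraseFromParent();
    } else {
      break; // non-instruction users (constants etc.) are elided here
    }
  }
}
```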
Subscribers: mgorny, llvm-commits
Tags: #llvm
Differential Revision: https://reviews.llvm.org/D64176
llvm-svn: 368115