We should not deserialize unused declarations from the PCH file. Achieve
this by storing the top level declarations during parsing
(HandleTopLevelDecl ASTConsumer callback) and analyzing/building a call
graph only for those.
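For illustration, a minimal sketch (not the actual patch) of collecting the
top-level declarations through the ASTConsumer callback; the class and member
names are hypothetical and the callback signature is the one in current clang:

  #include "clang/AST/ASTConsumer.h"
  #include "clang/AST/DeclGroup.h"
  #include "llvm/ADT/SetVector.h"

  class LocalDeclCollector : public clang::ASTConsumer {
    // Declarations handed to us while parsing the main file. Decls that are
    // only reachable via lazy PCH deserialization never show up here, so the
    // call graph is built (and decls deserialized) only for what is recorded.
    llvm::SetVector<clang::Decl *> LocalTUDecls;

  public:
    bool HandleTopLevelDecl(clang::DeclGroupRef DG) override {
      for (clang::Decl *D : DG)
        LocalTUDecls.insert(D);
      return true; // keep parsing
    }
  };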
Tested the patch on a sample ObjC file that uses PCH. With the patch,
the analysis is 17.5% faster and clang consumes 40% less memory.
Got about a 10% overall build/analysis time decrease on a large Objective-C
project.
A bit of CallGraph refactoring/cleanup as well.
llvm-svn: 154625
As per Jordy's review. Creating a symbol here is more flexible; however
I could not come up with an example where it was needed. (What
constraints can be added on a symbol constrained to 0?)
llvm-svn: 154542
we use the same Expr* as the one being currently visited. This is preparation for transitioning to having
ProgramPoints refer to CFGStmts.
This required a bit of trickery. We wish to keep the old Expr* bindings in the Environment intact,
as plenty of logic relies on them and there is no reason to change them, but we sometimes want the Stmt* for
the ProgramPoint to be different from the Expr* being used for bindings. This requires adding an extra
argument for some functions (e.g., evalLocation). This looks a bit strange for some clients, but
it will look a lot cleaner when we start using CFGStmt* in the appropriate places.
As some fallout, the diagnostic arrows are a bit different, since some of the node locations have changed.
I have audited these, and they look reasonable.
llvm-svn: 154214
consolidate some commonly used category strings into global references (more of this can be done, I just did a few).
Fixes <rdar://problem/11191537>.
llvm-svn: 154121
Store this info inside the function summary generated for all analyzed
functions. This is useful for coverage stats and can be helpful for
analyzer state space search strategies.
llvm-svn: 153923
properly reason about such accesses, but we shouldn't emit bogus "uninitialized value" warnings
either. Fixes <rdar://problem/11127008>.
llvm-svn: 153913
Fixes a false positive (radar://11152419). The current solution of
adding the info into 3 places is quite ugly. Pending a generic pointer
escapes callback.
llvm-svn: 153731
count.
This is an optimization for the "retry without inlining" option. Here, if we
failed to inline a function due to reaching the basic block max count,
we are going to store this information and not try to inline it
again in the translation unit. This can be viewed as a function summary.
On sqlite, with this optimization, we are 30% faster than before and
cover 10% more basic blocks (partially because the number of times we
reach timeout is decreased by 20%).
llvm-svn: 153730
The analyzer gives up path exploration under certain conditions. For
example, when the same basic block has been visited more than 4 times.
With inlining turned on, this could lead to a decrease in code coverage.
Specifically, if we give up inside an inlined function, the rest of the
parent's basic blocks will not get analyzed.
This commit introduces an option to enable a re-run along the failed path,
in which we do not inline the last inlined call site. This is done by
enqueueing the node before the processing of the inlined call site
with a special policy encoded in the state. The policy tells us not to
inline the call site along the path.
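For illustration only, a rough sketch (using current clang state-trait macros,
not the actual implementation; the trait name and helper are hypothetical) of
how such a policy can be encoded in the state:

  #include "clang/StaticAnalyzer/Core/PathSensitive/ProgramState.h"
  #include "clang/StaticAnalyzer/Core/PathSensitive/ProgramStateTrait.h"

  // Hypothetical per-path set of call sites we must not inline again.
  REGISTER_SET_WITH_PROGRAMSTATE(DoNotInlineCallSites, const clang::Stmt *)

  static bool mayInline(clang::ento::ProgramStateRef State,
                        const clang::Stmt *CallSite) {
    // If this call site was recorded on the current path, skip inlining it.
    return !State->contains<DoNotInlineCallSites>(CallSite);
  }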
This led to a ~10% increase in the number of paths analyzed, even though
we expected a much greater coverage improvement.
The option is turned off by default for now.
llvm-svn: 153534
This required adding a change count token to BugReport, but also allowed us to ditch ImmutableList as the BugReporterVisitor data type.
Also, remove the hack from MallocChecker, now that visitors appear in the opposite order. This is not exactly a fix, but the common case -- custom diagnostics after generic ones -- is now the default behavior.
llvm-svn: 153369
Specifically, we use the last store of the leaked symbol in the leak diagnostic.
(No support for struct fields since the malloc checker doesn't track those
yet.)
+ Infrastructure to track the regions used in store evaluations.
This approach is more precise than iterating the store to
obtain the region bound to the symbol, which is what the RetainCount
checker does. The region corresponds to what is written in the code at the
last store, and we do not rely on the store implementation to support
this functionality.
llvm-svn: 153212
The symbol-aware stack hint combines the checker-provided message
with the information about how the symbol was passed to the callee: as
a parameter or a return value.
For malloc, the generated messages look like this:
"Returning from 'foo'; released memory via 1st parameter"
"Returning from 'foo'; allocated memory via 1st parameter"
"Returning from 'foo'; allocated memory returned"
"Returning from 'foo'; reallocation of 1st parameter failed"
(We are yet to handle cases when the symbol is a field in a struct or
an array element.)
llvm-svn: 152962
The only use of AggExprVisitor was in #if 0 code (the analyzer's incomplete C++ support), so there is no actual behavioral change anyway.
llvm-svn: 152856
BugVisitor DiagnosticPieces.
When checkers create a DiagnosticPieceEvent, they can supply an extra
string, which will be concatenated with the call exit message for every
call on the stack between the diagnostic event and the final bug report.
(This is a simple version, which could be/will be further enhanced.)
For example, this is used in Malloc checker to produce the ",
which allocated memory" in the following example:
  static char *malloc_wrapper() {  // 2. Entered call from 'use'
    return malloc(12);             // 3. Memory is allocated
  }

  void use() {
    char *v;
    v = malloc_wrapper();          // 1. Calling 'malloc_wrapper'
    // 4. Returning from 'malloc_wrapper', which allocated memory
  }                                // 5. Memory is never released; potential memory leak
llvm-svn: 152837
track whether the referenced declaration comes from an enclosing
local context. I'm amenable to suggestions about the exact meaning
of this bit.
llvm-svn: 152491
as aborted, but didn't treat such cases as sinks in the ExplodedGraph.
Along the way, add basic support for CXXCatchStmt, expanding the set of code we actually analyze (hopefully correctly).
Fixes: <rdar://problem/10892489>
llvm-svn: 152468
We do not reanalyze a function which has already been analyzed as an
inlined callee. As per PRELIMINARY testing, this gives over
50% run time reduction on some benchmarks without decreasing the
number of bugs found.
Turning the mode on by default.
llvm-svn: 152440
Essentially, a bug centers around a story for various symbols and regions. We should only include
the path diagnostic events that relate to those symbols and regions.
The pruning is done by associating a set of interesting symbols and regions with a BugReporter, which
can be modified at BugReport creation or by BugReporterVisitors.
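For illustration, a hedged sketch (written against current clang class names,
which differ from the ones in this patch; the function and message are
hypothetical) of a checker marking the symbol its report is about as
interesting:

  #include "clang/StaticAnalyzer/Core/BugReporter/BugReporter.h"
  #include "clang/StaticAnalyzer/Core/BugReporter/BugType.h"
  #include "clang/StaticAnalyzer/Core/PathSensitive/CheckerContext.h"
  #include <memory>

  using namespace clang;
  using namespace ento;

  // Marking the leaked symbol as interesting lets the path-diagnostic
  // machinery drop events that do not relate to that symbol.
  static void reportLeak(CheckerContext &C, SymbolRef Sym, const BugType &BT) {
    if (ExplodedNode *N = C.generateErrorNode()) {
      auto R = std::make_unique<PathSensitiveBugReport>(
          BT, "Potential memory leak", N);
      R->markInteresting(Sym);
      C.emitReport(std::move(R));
    }
  }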
This patch reduces the diagnostics emitted in several of our test cases. I've vetted these as
having the desired behavior. The only regression is a missing null check diagnostic for the return
value of realloc() in test/Analysis/malloc-plist.c. This will require some investigation to fix,
and I have added a FIXME to the test case.
llvm-svn: 152361
analysis to make the AST representation testable. They are represented by a
new UserDefinedLiteral AST node, which is a sugared CallExpr. All semantic
properties, including full CodeGen support, are achieved for free by this
representation.
UserDefinedLiterals can never be dependent, so no custom instantiation
behavior is required. They are mangled as if they were direct calls to the
underlying literal operator. This matches g++'s apparent behavior (but not its
actual mangling, which is broken for literal-operator-ids).
User-defined *string* literals are now fully-operational, but the semantic
analysis is quite hacky and needs more work. No other forms of user-defined
literal are created yet, but the AST support for them is present.
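For illustration (my example, not from the patch), a user-defined string
literal of the kind that is now operational; the suffix is arbitrary:

  #include <cstddef>
  #include <string>

  // Literal operator for user-defined *string* literals.
  std::string operator"" _s(const char *str, std::size_t len) {
    return std::string(str, len);
  }

  int main() {
    // Sugar for a call to operator"" _s("hello", 5); represented in the AST
    // as a UserDefinedLiteral wrapping that CallExpr.
    std::string s = "hello"_s;
    return s.size() == 5 ? 0 : 1;
  }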
This patch committed after midnight because we had already hit the quota for
new kinds of literal yesterday.
llvm-svn: 152211
command line options for inlining tuning.
This adds the option for stack depth bound as well as function size
bound.
+ minor doxygenification
llvm-svn: 151930
funopen, setvbuf.
Teach the checker and the engine about these APIs to resolve malloc
false positives. As I am adding more of these APIs, it is clear that all
this should be factored out into a separate callback (for example,
region escapes). Malloc, KeyChainAPI and RetainRelease checkers could
all use it.
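For illustration (my example, not part of the patch), one pattern that used to
produce a spurious leak report:

  #include <cstdio>
  #include <cstdlib>

  void set_stream_buffer(FILE *f) {
    char *buf = static_cast<char *>(malloc(BUFSIZ));
    if (!buf)
      return;
    // buf escapes into the stream and must stay alive until the stream is
    // closed, so reporting a leak when this function returns would be wrong.
    setvbuf(f, buf, _IOFBF, BUFSIZ);
  }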
llvm-svn: 151737
This introduces a concept of a "prunable" PathDiagnosticEvent. Currently this is a flag, but
we may evolve the concept to make this more dynamically inferred.
llvm-svn: 151663
When an allocated buffer is passed to the CF/NS...NoCopy functions,
ownership is transferred unless the deallocator argument is set to
'kCFAllocatorNull'.
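A hedged illustration of the two cases (my example; the wrapper names are
arbitrary):

  #include <CoreFoundation/CoreFoundation.h>

  // Ownership of the malloc-allocated 'buf' transfers to the CFString, which
  // will free it; the malloc checker should not report a leak here.
  CFStringRef adopt(char *buf) {
    return CFStringCreateWithCStringNoCopy(
        kCFAllocatorDefault, buf, kCFStringEncodingUTF8, kCFAllocatorMalloc);
  }

  // With kCFAllocatorNull, ownership is NOT transferred: the caller must
  // keep 'buf' alive and free it itself after releasing the string.
  CFStringRef borrow(char *buf) {
    return CFStringCreateWithCStringNoCopy(
        kCFAllocatorDefault, buf, kCFStringEncodingUTF8, kCFAllocatorNull);
  }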
llvm-svn: 151608
that provides the behavior of the C++11 library trait
std::is_trivially_constructible<T, Args...>, which can't be
implemented purely as a library.
Since __is_trivially_constructible can have zero or more arguments, I
needed to add Yet Another Type Trait Expression Class, this one
handling arbitrary arguments. The next step will be to migrate
UnaryTypeTrait and BinaryTypeTrait over to this new, more general
TypeTrait class.
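For example (illustrative; the types are mine), the builtin takes the type to
construct followed by zero or more argument types:

  #include <type_traits>

  struct Pod { int x; };
  struct NonTrivial { NonTrivial(int); };

  static_assert(__is_trivially_constructible(Pod), "trivial default construction");
  static_assert(__is_trivially_constructible(Pod, const Pod &), "trivial copy");
  static_assert(!__is_trivially_constructible(NonTrivial, int), "user-provided ctor");

  // The C++11 library trait whose behavior it provides:
  static_assert(std::is_trivially_constructible<Pod, const Pod &>::value, "");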
Fixes the Clang side of <rdar://problem/10895483> / PR12038.
llvm-svn: 151352
When we find two leak reports with the same allocation site, report only
one of them.
Provide a helper method to BugReporter to facilitate this.
llvm-svn: 151287
Make this call an exception in ExprEngine::invalidateArguments:
'int pthread_setspecific(pthread_key_t, const void *)' stores
a value into thread-local storage. The value can later be retrieved
with 'void *pthread_getspecific(pthread_key_t)'. So even though the
parameter is 'const void *', the region escapes through the
call.
(Here we just blacklist the call in the ExprEngine's default
logic. Another option would be to add a checker which evaluates
the call and triggers the call to invalidate regions.)
Teach the Malloc Checker, which treats all system calls as safe, about
this API.
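A small illustration (mine, not from the patch) of the escape pattern:

  #include <pthread.h>
  #include <cstdlib>

  void stash_in_tls(pthread_key_t key) {
    void *buf = malloc(64);
    if (!buf)
      return;
    // Despite the 'const void *' parameter, buf escapes into thread-local
    // storage and can be retrieved later via pthread_getspecific(key), so it
    // must not be flagged as leaked when this function returns.
    pthread_setspecific(key, buf);
  }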
llvm-svn: 151220
block pointer that returns a block literal which captures (by copy)
the lambda closure itself. Some aspects of the block literal are left
unspecified, namely the capture variable (which doesn't actually
exist) and the body (which will be filled in by IRgen because it can't
be written as an AST).
Because we're switching to this model, this patch also eliminates
tracking the copy-initialization expression for the block capture of
the conversion function, since that information is now embedded in the
synthesized block literal. -1 side tables FTW.
llvm-svn: 151131
Holding the constructor directly makes no sense when list-initialized arrays come into play. The constructor is now held in a CXXConstructExpr, if construction is what is done. The new design can also distinguish properly between list-initialization and direct-initialization, as well as implicit default-initialization constructors and explicit value-initialization constructors. Finally, doing it this way removes redundancy from the AST because CXXNewExpr doesn't try to handle both the allocation and the initialization responsibilities.
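For illustration (my example), the initialization forms the new representation
distinguishes:

  struct S {
    S() {}
    explicit S(int) {}
  };

  void examples() {
    S *a = new S;                  // default-initialization
    S *b = new S();                // value-initialization
    S *c = new S(42);              // direct-initialization via a CXXConstructExpr
    int *d = new int[3]{1, 2, 3};  // C++11 list-initialized array
    delete a;
    delete b;
    delete c;
    delete[] d;
  }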
This breaks the static analysis of new expressions. I've filed PR12014 to track this.
llvm-svn: 150682
piece can always be generated.
The default end-of-path diagnostic piece was failing to generate on a
BlockEdge outgoing from a basic block without a terminator,
resulting in a very simple diagnostic being rendered (e.g., no path
highlighting or custom visitors). Reuse another function, which essentially
does the same thing, and correct it not to fail when a block
has no terminator.
llvm-svn: 150659
is general goodness because representations of member pointers are
not always equivalent across member pointer types on all ABIs
(even though this isn't really standard-endorsed).
Take advantage of the new information to teach IR-generation how
to do these reinterprets in constant initializers. Make sure this
works when intermingled with hierarchy conversions (although
this is not part of our motivating use case). Doing this in the
constant-evaluator would probably have been better, but that would
require a *lot* of extra structure in the representation of
constant member pointers: you'd really have to track an arbitrary
chain of hierarchy conversions and reinterpretations in order to
get this right. Ultimately, this seems less complex. I also
wasn't quite sure how to extend the constant evaluator to handle
foldings that we don't actually want to treat as extended
constant expressions.
llvm-svn: 150551
(In response to Ted's review of r150112.)
This moves the logic which checked whether a symbol escapes through a
parameter into invalidateRegionCallback (instead of the post-CallExpr visit).
To accommodate the change, added a CallOrObjCMessage parameter to
checkRegionChanges callback.
llvm-svn: 150513
This seems to negatively affect compile time on some ObjC tests
(which use a lot of partial diagnostics I assume). I have to come
up with a way to keep them inline without including Diagnostic.h
everywhere. Now adding a new diagnostic requires a full rebuild
of e.g. the static analyzer which doesn't even use those diagnostics.
This reverts commit 6496bd10dc3a6d5e3266348f08b6e35f8184bc99.
This reverts commit 7af19b817ba964ac560b50c1ed6183235f699789.
This reverts commit fdd15602a42bbe26185978ef1e17019f6d969aa7.
This reverts commit 00bd44d5677783527d7517c1ffe45e4d75a0f56f.
This reverts commit ef9b60ffed980864a8db26ad30344be429e58ff5.
llvm-svn: 150006
- Capturing variables by-reference and by-copy within a lambda
- The representation of lambda captures
- The creation of the non-static data members in the lambda class
that store the captured variables
- The initialization of the non-static data members from the
captured variables
- Pretty-printing lambda expressions
There are a number of FIXMEs, both explicit and implied, including:
- Creating a field for a capture of 'this'
- Improved diagnostics for initialization failures when capturing
variables by copy
- Dealing with temporaries created during said initialization
- Template instantiation
- AST (de-)serialization
- Binding and returning the lambda expression; turning it into a
proper temporary
- Lots and lots of semantic constraints
- Parameter pack captures
llvm-svn: 149977
- Move the offending methods out of line and fix transitive includers.
- This required changing an enum in the PPCallback API into an unsigned.
llvm-svn: 149782
Fix all the files that depended on transitive includes of Diagnostic.h.
With this patch in place changing a diagnostic no longer requires a full rebuild of the StaticAnalyzer.
llvm-svn: 149781
Original log:
Convert ProgramStateRef to a smart pointer for managing the reference counts of ProgramStates. This leads to a slight memory
improvement, and a simplification of the logic for managing ProgramState objects.
llvm-svn: 149339
Original log:
Convert ProgramStateRef to a smart pointer for managing the reference counts of ProgramStates. This leads to a slight memory
improvement, and a simplification of the logic for managing ProgramState objects.
llvm-svn: 149336
At this point this is largely cosmetic, but it opens the door to replace
ProgramStateRef with a smart pointer that more eagerly acts in the role
of reclaiming unused ProgramState objects.
llvm-svn: 149081
to the underlying consumer implementation. This allows us to unique reports across analyses to multiple functions (which
shows up with inlining).
llvm-svn: 148997
This is accomplished by periodically reclaiming nodes in the graph. This was an optimization
done before the CFG was linearized, but the CFG linearization destroyed that optimization since each
freshly created node couldn't be reclaimed and we only looked at a window of nodes created between
each ProcessStmt. This optimization can be reclaimed by merely expanding the window to N nodes.
llvm-svn: 148888
- Add atomic-to/from-nonatomic cast types
- Emit atomic operations for arithmetic on atomic types
- Emit non-atomic stores for initialisation of atomic types, but atomic stores and loads for every other store / load
- Add a __atomic_init() intrinsic which does a non-atomic store to an _Atomic() type. This is needed for the corresponding C11 stdatomic.h function.
- Enables the relevant __has_feature() checks. The feature isn't 100% complete yet, but it's done enough that we want people testing it.
Still to do:
- Make the arithmetic operations on atomic types (e.g. _Atomic(int) foo = 1; foo++;) use the correct LLVM intrinsic if one exists, not a loop with a cmpxchg.
- Add a signal fence builtin
- Properly set the fenv state in atomic operations on floating point values
- Correctly handle things like _Atomic(_Complex double) which are too large for an atomic cmpxchg on some platforms (this requires working out what 'correctly' means in this context)
- Fix the many remaining corner cases
llvm-svn: 148242
My hope is to reimplement this from first principles based on the simplifications of removing unneeded node builders
and re-evaluating how C++ calls are handled in the CFG. The hope is to turn inlining "on-by-default" as soon as possible
with a core set of things working well, and then expand over time.
llvm-svn: 147904
We already have a more conservative check in the compiler (if the
format string is not a literal, we warn). Still adding it here for
completeness and since this check is stronger - only triggered if the
format string is tainted.
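For illustration (my example), the kind of case the check flags:

  #include <cstdio>

  void echo_user_input() {
    char fmt[256] = {0};
    // scanf marks fmt as tainted (attacker-controlled) input.
    if (scanf("%255s", fmt) == 1)
      printf(fmt);  // tainted data used as a format string
  }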
llvm-svn: 147714