BugReporter is used to process ALL bug reports. By using a shared map,
we keep mappings from PathDiagnosticPieces to LocationContexts alive
well beyond the point where we are processing a given report. This
state is inherently error prone, and is analogous to using a global
variable. Instead, just create a temporary map, one per report,
and throw it away when we are done with it. No extra state.
llvm-svn: 180974
...and don't consider '0' to be a null pointer constant if it's the
initializer for a float!
Apparently null pointer constant evaluation looks through both
MaterializeTemporaryExpr and ImplicitCastExpr, so we have to be more
careful about types in the callers. For RegionStore this just means giving
up a little more; for ExprEngine this means handling the
MaterializeTemporaryExpr case explicitly.
Follow-up to r180894.
llvm-svn: 180944
Previously, this was scattered across Environment (literal expressions),
ExprEngine (default arguments), and RegionStore (global constants). The
former special-cased several kinds of simple constant expressions, while
the latter two deferred to the AST's constant evaluator.
Now, these are all unified as SValBuilder::getConstantVal(). To keep
Environment fast, the special cases for simple constant expressions have
been left in, but the main benefits are that (a) unusual constants like
ObjCStringLiterals now work as default arguments and global constant
initializers, and (b) we're not duplicating code between ExprEngine and
RegionStore.
This actually caught a bug in our test suite, which is awesome: we stop
tracking allocated memory if it's passed as an argument along with some
kind of callback, but not if the callback is 0. We were testing this in
a case where the callback parameter had a default value, but that value
was 0. After this change, the analyzer now (correctly) flags that as a
leak!
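The shape of the test was roughly like this (a sketch with made-up names; the
real test lives in the analyzer test suite):
  #include <cstdlib>

  typedef void (*Freer)(void *);

  void enqueue(void *buffer, Freer releaseCallback = 0) {
    if (releaseCallback)
      releaseCallback(buffer);   // ownership passes to the callback
    // otherwise the caller keeps ownership of 'buffer'
  }

  void test() {
    void *buf = std::malloc(16);
    // The default argument is 0, so 'buf' does not escape through the
    // callback; the analyzer now correctly reports this as a leak.
    enqueue(buf);
  }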
<rdar://problem/13773117>
llvm-svn: 180894
This goes with r178516, which instructed the analyzer not to inline the
constructors and destructors of C++ container classes. This goes a step
further and does the same thing for iterators, so that the analyzer won't
falsely decide we're trying to construct an iterator pointing to a
nonexistent element.
The heuristic for determining whether something is an iterator is the
presence of an 'iterator_category' member. This is controlled under the
same -analyzer-config option as container constructor/destructor inlining:
'c++-container-inlining'.
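As a sketch (illustrative type, not from the patch), anything shaped like this
is treated as an iterator, and its constructors and destructors are left
uninlined unless 'c++-container-inlining' is set to 'true':
  #include <iterator>

  // The presence of an 'iterator_category' member is the whole heuristic.
  struct NodeIterator {
    typedef std::forward_iterator_tag iterator_category;
    typedef int value_type;
    int *Ptr;

    NodeIterator(int *P) : Ptr(P) {}   // not inlined under the heuristic
    int &operator*() const { return *Ptr; }
  };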
<rdar://problem/13770187>
llvm-svn: 180890
This doesn't appear to be the cause of the slowdown. I'll have to try a
manual bisect to see if there's really anything there, or if it's just
the bot itself taking on additional load. Meanwhile, this change helps
with correctness.
This changes an assertion and adds a test case, then re-applies r180638,
which was reverted in r180714.
<rdar://problem/13296133> and PR15863
llvm-svn: 180864
Much of this patch outside of PathDiagnostics.h is just minor
syntactic changes due to the return type for operator* and the like
changing for the iterator, so the real focus should be on
PathPieces itself.
This change is motivated so that we can do efficient insertion
and removal of individual pieces from within a PathPiece, as if
it were a kind of "IR" for static analyzer diagnostics. We
currently implement path transformations by iterating over an
entire PathPiece and making a copy. This isn't very natural for
some algorithms.
We use an ilist here instead of std::list because we want operations
to rip out/insert nodes in place, just like IR manipulation. This
isn't being used yet, but opens the door for more powerful
transformation algorithms on diagnostic paths.
llvm-svn: 180741
This seems to be causing quite a slowdown on our internal analyzer bot,
and I'm not sure why. Needs further investigation.
This reverts r180638 / 9e161ea981f22ae017b6af09d660bfc3ddf16a09.
llvm-svn: 180714
Casts to bool (and _Bool) are equivalent to checks against zero,
not truncations to 1 bit or 8 bits.
This improved reasoning does cause a change in the behavior of the alpha
BoolAssignment checker. Previously, this checker complained about statements
like "bool x = y" if 'y' was known not to be 0 or 1. Now it does not, since
that conversion is well-defined. It's hard to say what the "best" behavior
here is: this conversion is safe, but might be better written as an explicit
comparison against zero.
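For example (illustrative):
  void test(int y) {
    if (y != 0 && y != 1) {
      // 'y' is known to be neither 0 nor 1 here.
      bool b = y;   // modeled as 'b = (y != 0)'; alpha BoolAssignment no longer
                    // warns, since the conversion is well-defined (b is true)
      if (b) {
        // This branch is now correctly known to be taken.
      }
    }
  }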
More usefully, besides improving our model of booleans, this fixes spurious
warnings when returning the address of a local variable cast to bool.
<rdar://problem/13296133>
llvm-svn: 180638
The 2 functions were computing the same location using different logic (each one had edge case bugs that the other
one did not). Refactor them to rely on the same logic.
The location of the warning reported in text/command line output format will now match that of the plist file.
There is one change in the plist output as well. When reporting an error on a BinaryOperator, we use the location of the
operator instead of the beginning of the BinaryOperator expression. This matches our output on command line and
looks better in most cases.
llvm-svn: 180165
The analyzer represents all pointer-to-pointer bitcasts the same way, but
this can be problematic if an implicit base cast gets layered on top of a
manual base cast (performed with reinterpret_cast instead of static_cast).
Fix this (and avoid a valid assertion) by looking through cast regions.
Using reinterpret_cast this way is only valid if the base class is at the
same offset as the derived class; this is checked by -Wreinterpret-base-class.
In the interest of performance, the analyzer doesn't repeat this check
anywhere; it will just silently do the wrong thing (use the wrong offsets
for fields of the base class) if the user code is wrong.
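The pattern in question looks roughly like this (names are made up; see
PR15394 for the real report):
  struct Base { int x; };
  struct Derived : public Base { int y; };

  int test(Derived *d) {
    // Manual base cast using reinterpret_cast instead of static_cast.  This
    // is only valid because 'Base' is at offset 0 within 'Derived'
    // (-Wreinterpret-base-class diagnoses the invalid cases).
    Base *b = reinterpret_cast<Base *>(d);
    // An implicit derived-to-base cast can later be layered on top of the
    // cast region created here; the analyzer now looks through such regions.
    return b->x;
  }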
PR15394
llvm-svn: 180052
Add a CXXDefaultInitExpr, analogous to CXXDefaultArgExpr, and use it both in
CXXCtorInitializers and in InitListExprs to represent a default initializer.
There's an additional complication here: because the default initializer can
refer to the initialized object via its 'this' pointer, we need to make sure
that 'this' points to the right thing within the evaluation.
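An illustrative case (C++11 in-class initializers):
  struct Rect {
    int width = 4;
    int area = width * width;   // default initializer; refers to 'this->width'
  };

  void test() {
    Rect r;   // the implicit constructor applies both default initializers,
              // represented in the AST with CXXDefaultInitExpr
    (void)r;
  }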
llvm-svn: 179958
Introduce a new helper function, which computes the first symbolic region in
the base region chain. The corresponding symbol has been used for assuming that
a pointer is null. Now, it will also be used for checking if it is null.
This ensures that we are tracking a null pointer correctly in the BugReporter.
llvm-svn: 179916
The analyzer uses LazyCompoundVals to represent rvalues of aggregate types,
most importantly structs and arrays. This allows us to efficiently copy
around an entire struct, rather than doing a memberwise load every time a
struct rvalue is encountered. This can also keep memory usage down by
allowing several structs to "share" the same snapshotted bindings.
However, /lookup/ through LazyCompoundVals can be expensive, especially
since they can end up chaining back to the original value. While we try
to reuse LazyCompoundVals whenever it's safe, and cache information about
this transitivity, the fact is it's sometimes just not a good idea to
perpetuate LazyCompoundVals -- the tradeoffs just aren't worth it.
This commit changes RegionStore so that binding a LazyCompoundVal to a struct
will do a memberwise copy if the struct is simple enough. Today's definition
of "simple enough" is "up to N scalar members" (see below), but that could
easily be changed in the future. This is enough to bring the test case in
PR15697 back down to a manageable analysis time (within 20% of its original
time, in an unfair test where the new analyzer is not compiled with LTO).
The actual value of "N" is controlled by a new -analyzer-config option,
'region-store-small-struct-limit'. It defaults to "2", meaning structs with
zero, one, or two scalar members will be considered "simple enough" for
this code path.
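A sketch of what falls under the default limit (illustrative types; the limit
can be raised by passing a different value for the -analyzer-config key):
  struct Point { int x, y; };          // two scalar members: "simple enough"
  struct Big   { int a, b, c, d; };    // four members: still copied lazily

  void test(Point p, Big b) {
    Point p2 = p;   // bound as a memberwise copy: p2.x = p.x, p2.y = p.y
    Big   b2 = b;   // bound as a LazyCompoundVal, as before
    (void)p2;
    (void)b2;
  }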
It's worth noting that a more straightforward implementation would do this
on load, not on bind, and make use of the structure we already have for this:
CompoundVal. A long time ago, this was actually how RegionStore modeled
aggregate-to-aggregate copies, but today it's only used for compound literals.
Unfortunately, it seems that we've special-cased LazyCompoundVal in certain
places (such as liveness checks) but failed to similarly special-case
CompoundVal in all of them. Until we're confident that CompoundVal is
handled properly everywhere, this solution is safer, since the entire
optimization is just an implementation detail of RegionStore.
<rdar://problem/13599304>
llvm-svn: 179767
A C++ overloaded operator may be implemented as an instance method, and
that instance method may be called on an rvalue object, which has no
associated region. The analyzer handles this by creating a temporary region
just for the evaluation of this call; however, it is possible that /by
creating the region/, the analyzer ends up in a previously-explored state.
In this case we don't need to continue along this path.
This doesn't actually show any behavioral change now, but it starts being
used with the next commit and prevents an assertion failure there.
llvm-svn: 179766
In the committed example, we now see a note that tells us when the pointer
was assumed to be null.
This is the only case in which getDerefExpr returned null (failed to get
the dereferenced expr) throughout our regression tests. (There were multiple
occurrences of this one.)
llvm-svn: 179736
We always register the visitor on a node in which the value we are tracking is live and constrained. However,
the visitation can restart at a node, later on the path, in which the value is under constrained because
it is no longer live. Previously, we just silently stopped tracking in that case.
llvm-svn: 179731
This was slightly tricky because BlockDecls don't currently store an
inferred return type. However, we can rely on the fact that blocks with
inferred return types will have return statements that match the inferred
type.
<rdar://problem/13665798>
llvm-svn: 179699
When computing the value of a ?: expression, we rely on the last expression in
the previous basic block to be the resulting value of the expression. This is
not the case for the binary "?:" operator (GNU extension) in C++, where the
last basic block contains the expression for the condition subexpression,
which is an rvalue, whereas the true subexpression is an lvalue.
Note that the operator evaluation just happens to work in C, since there the
true subexpression is an rvalue (like the condition subexpression). The CFG is
the same in the C and C++ cases, but the AST nodes are different, with the
lvalue-to-rvalue conversion happening after the BinaryConditionalOperator
evaluation.
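For reference, the problematic form (GNU extension) looks like this:
  int fallback = 7;

  int &pick(int &x) {
    // GNU binary conditional, equivalent to 'x ? x : fallback'.  In C++ the
    // result is the glvalue 'x' (or 'fallback'), while the condition that
    // ends the predecessor CFG block is the rvalue produced from 'x'.
    return x ?: fallback;
  }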
Changed the logic to only use the last expression from the predecessor if it
matches either the true or the false subexpression. Note, the logic needed
fortification anyway: L and R were passed but not even used by the function.
Also, change conjureSymbolVal to correctly compute the type when the
expression is a glvalue.
llvm-svn: 179574
While we don't do anything intelligent with pointers-to-members today,
it's perfectly legal to need a temporary of pointer-to-member type to, say,
pass by const reference. Tweak an assertion to allow this.
PR15742 and PR15747
llvm-svn: 179563
Structs and arrays can take advantage of the single top-level global
symbol optimization (described in the previous commit) just as well
as scalars.
No intended behavioral change.
llvm-svn: 179555
Now that we're invalidating global regions properly, we want to continue
taking advantage of a particular optimization: if all global regions are
invalidated together, we can represent the bindings of each region with
a "derived region value" symbol. Essentially, this lazily links each
global region with a single symbol created at invalidation time, rather
than binding each region with a new symbolic value.
We used to do this, but haven't been for a while; the previous commit
re-enabled this code path, and this handles the fallout.
<rdar://problem/13464044>
llvm-svn: 179554
This fixes a regression where a call to a function we can't reason about
would not actually invalidate global regions that had explicit bindings.
void test_that_now_works() {
  globalInt = 42;
  clang_analyzer_eval(globalInt == 42); // expected-warning{{TRUE}}
  invalidateGlobals();
  clang_analyzer_eval(globalInt == 42); // expected-warning{{UNKNOWN}}
}
This has probably been around since the initial "cluster" refactoring of
RegionStore, if not longer.
<rdar://problem/13464044>
llvm-svn: 179553
There are few cases where we can track the region, but cannot print the note,
which makes the testing limited. (Though, I’ve tested this manually by making
all regions non-printable.) Even though the applicability is limited now, the enhancement
will be more relevant as we start tracking more regions.
llvm-svn: 179396
Before:
1. Calling 'foo'
   2. Doing something interesting
   3. Returning from 'foo'
4. Some kind of error here
After:
1. Calling 'foo'
   2. Doing something interesting
3. Returning from 'foo'
4. Some kind of error here
The location of the note is already in the caller, not the callee, so this
just brings the "depth" attribute in line with that.
This only affects plist diagnostic consumers (i.e. Xcode). It's necessary
for Xcode to associate the control flow arrows with the right stack frame.
<rdar://problem/13634363>
llvm-svn: 179351
In this code
int getZero() {
  return 0;
}
void test() {
  int problem = 1 / getZero(); // expected-warning {{Division by zero}}
}
we generate these arrows:
+-----------------+
|                 v
int problem = 1 / getZero();
                ^   |
                +---+
where the top one represents the control flow up to the first call, and the
bottom one represents the flow to the division.* It turns out, however, that
we were generating the top arrow twice, as if attempting to "set up context"
after we had already returned from the call. This resulted in poor
highlighting in Xcode.
* Arguably the best location for the division is the '/', but that's a
different problem.
<rdar://problem/13326040>
llvm-svn: 179350
Previously, the analyzer used isIntegerType() everywhere, which uses the C
definition of "integer". The C++ predicate with the same behavior is
isIntegerOrUnscopedEnumerationType().
However, the analyzer is /really/ using this to ask if it's some sort of
"integrally representable" type, i.e. it should include C++11 scoped
enumerations as well. hasIntegerRepresentation() sounds like the right
predicate, but that includes vectors, which the analyzer represents by their
elements.
This commit audits all uses of isIntegerType() and replaces them with the
general isIntegerOrEnumerationType(), except in some specific cases where
it makes sense to exclude scoped enumerations, or any enumerations. These
cases now use isIntegerOrUnscopedEnumerationType() and getAs<BuiltinType>()
plus BuiltinType::isInteger().
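For instance (illustrative):
  enum class Severity { Low, High };   // C++11 scoped enumeration

  void test(Severity s) {
    // 's' is not an "integer type" in the C sense, but it is integrally
    // representable, so the analyzer must still be able to reason about it.
    if (s == Severity::High) {
      // ...
    }
  }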
isIntegerType() is hereby banned in the analyzer - lib/StaticAnalyzer and
include/clang/StaticAnalyzer. :-)
Fixes real assertion failures. PR15703 / <rdar://problem/12350701>
llvm-svn: 179081
This is important because sometimes two nodes are identical, except the
second one is a sink.
This bug has probably been around for a while, but it wouldn't have been an
issue in the old report graph algorithm. I'm ashamed to say I actually looked
at this the first time around and thought it would never be a problem...and
then didn't include an assertion to back that up.
PR15684
llvm-svn: 178944
This turns on not only destructor inlining, but inlining of constructors
for types with non-trivial destructors. Per r178516, we will still not
inline the constructor or destructor of anything that looks like a
container unless the analyzer-config option 'c++-container-inlining' is
set to 'true'.
In addition to the more precise path-sensitive model, this allows us to
catch simple smart pointer issues:
#include <memory>
void test() {
  std::auto_ptr<int> releaser(new int[4]);
} // memory allocated with 'new[]' should not be deleted with 'delete'
<rdar://problem/12295363>
llvm-svn: 178805
Improvement of r178684 and r178685.
Jordan has pointed out that I should not rely on the value of the condition to know which expression branch
has been taken. It will not work in cases where the branch condition is an unknown value (ex: we do not track the constraints for floats).
The better way of doing this would be to find out if the current node is the right or left successor of the node
that has the ternary operator as a terminator (which is how this is done in other places, like ConditionBRVisitor).
llvm-svn: 178701
The lifetime of a temporary can be extended when it is immediately bound
to a local reference:
const Value &MyVal = Value("temporary");
In this case, the temporary object's lifetime is extended for the entire
scope of the reference; at the end of the scope it is destroyed.
The analyzer was modeling this improperly in two ways:
- Since we don't model temporary constructors just yet, we create a fake
temporary region when it comes time to "materialize" a temporary into
a real object (lvalue). This wasn't taking base casts into account when
the binding being materialized was Unknown; now it always respects base
casts except when the temporary region is itself a pointer.
- When actually destroying the region, the analyzer did not actually load
from the reference variable -- it was basically destroying the reference
instead of its referent. Now it does do the load.
This will be more useful whenever we finally start modeling temporaries,
or at least those that get bound to local reference variables.
<rdar://problem/13552274>
llvm-svn: 178697
1) Look for the node where the condition expression is live when checking if
it is constrained to true or false.
2) Fix a bug in ProgramState::isNull, which was masking the problem. When
the expression is not a symbol (which is the case when it is Unknown), return
an unconstrained value instead of a value constrained to "false"!
(Thankfully, other callers of isNull have not been affected by the bug.)
llvm-svn: 178684
- Find the correct region to represent the first array element when
constructing a CXXConstructorCall.
- If the array is trivial, model the copy with a primitive load/store.
- Don't warn about the "uninitialized" subscript in the AST -- we don't use
the helper variable that Sema provides.
<rdar://problem/13091608>
llvm-svn: 178602
Refactor invalidateRegions to take SVals instead of Regions as input and teach RegionStore
about processing LazyCompoundVal as a top-level “escaping” value.
This addresses several false positives that get triggered by the NewDelete checker, but the
underlying issue is reproducible with other checkers as well (for example, MallocChecker).
llvm-svn: 178518
This is a heuristic to make up for the fact that the analyzer doesn't
model C++ containers very well. One example is modeling that
'std::distance(I, E) == 0' implies 'I == E'. In the future, it would be
nice to model this explicitly, but for now it just results in a lot of
false positives.
The actual heuristic checks if the base type has a member named 'begin' or
'iterator'. If so, we treat the constructors and destructors of that type
as opaque, rather than inlining them.
This is intended to drastically reduce the number of false positives
reported with experimental destructor support turned on. We can tweak the
heuristic in the future, but we'd rather err on the side of false negatives
for now.
<rdar://problem/13497258>
llvm-svn: 178516
Certain properties of a function can determine ahead of time whether or not
the function is inlineable, such as its kind, its signature, or its
location. We can cache this value in the FunctionSummaries map to avoid
rechecking these static properties for every call.
Note that the analyzer may still decide not to inline a specific call to
a function because of the particular dynamic properties of the call along
the current path.
No intended functionality change.
llvm-svn: 178515
The summaries lasted for the lifetime of the map anyway; no reason to
include an extra allocation.
Also, use SmallBitVector instead of BitVector to track the visited basic
blocks -- most functions will have less than 64 basic blocks -- and
use bitfields for the other fields to reduce the size of the structure.
No functionality change.
llvm-svn: 178514
This is controlled by the 'suppress-c++-stdlib' analyzer-config flag.
It is currently off by default.
This is more suppression than we'd like to do, since obviously there can
be user-caused issues within 'std', but it gives us the option to wield
a large hammer to suppress false positives the user likely can't work
around.
llvm-svn: 178513
Evaluating a C++ new expression now includes generating an intermediate
ExplodedNode, and this node could very well represent a previously-
reachable state in the ExplodedGraph. If so, we can short-circuit the
rest of the evaluation.
Caught by the assertion a few lines later.
<rdar://problem/13510065>
llvm-svn: 178401
We can check if the receiver is nil in the node that corresponds to the StmtPoint of the message send.
At that point, the receiver is guaranteed to be live. We will find at least one unreclaimed node due to
my previous commit (look for StmtPoint instead of PostStmt) and the fact that the nil receiver nodes are tagged.
+ a couple of extra tests.
llvm-svn: 178381
trackNullOrUndefValue tries to find the first node that matches the statement it is tracking.
Since we collect PostStmt nodes (in node reclamation), none of those might be on the
current path, so relax the search to look for any StmtPoint.
llvm-svn: 178380
Add a new callback that notifies checkers when a const pointer escapes. Currently, this only works
for const pointers passed as top-level parameters into a function. We need to differentiate const
pointer escapes from regular escapes since the contents pointed to by a const pointer will not change:
if it's a file handle, the file cannot be closed through it, but delete is still allowed on const pointers.
This should suppress several false positives reported by the NewDelete checker on llvm codebase.
llvm-svn: 178310
We should only suppress a bug report if the IDCed or null returned nil value is directly related to the value we are warning about. This was
not the case for nil receivers - we would suppress a bug report that had an IDCed nil receiver on the path regardless of how it’s
related to the warning.
1) Thread EnableNullFPSuppression parameter through the visitors to differentiate between tracking the value which
is directly responsible for the bug and other values that visitors are tracking (ex: general tracking of nil receivers).
2) In trackNullOrUndef, specifically address the case when the value of a message send is nil due to the receiver being nil.
llvm-svn: 178309
These types will not have a CXXConstructExpr to do the initialization for
them. Previously we just used a simple call to ProgramState::bindLoc, but
that doesn't trigger proper checker callbacks (like pointer escape).
Found by Anton Yartsev.
llvm-svn: 178160
The visitor should look for the PreStmt node, since that is the node in which the receiver is known to be nil. Also, tag the nil
receiver nodes with a special tag for consistency.
llvm-svn: 178152
Register the nil tracking visitors with the region and refactor trackNullOrUndefValue a bit.
Also adds the cast and paren stripping before checking if the value is an OpaqueValueExpr
or ExprWithCleanups.
llvm-svn: 178093
This addresses an undefined value false positive from concreteOffsetBindingIsInvalidatedBySymbolicOffsetAssignment.
Fixes PR14877; radar://12991168.
llvm-svn: 177905
These aren't generated by default, but they are needed when either side of
the comparison is tainted.
Should fix our internal buildbot.
llvm-svn: 177846
In C, comparisons between signed and unsigned numbers are always done in
unsigned-space. Thus, we should know that "i >= 0U" is always true, even
if 'i' is signed. Similarly, "u >= 0" is also always true, even though '0'
is signed.
Part of <rdar://problem/13239003> (false positives related to std::vector)
llvm-svn: 177806
For two concrete locations, we were producing another concrete location and
then casting it to an integer. We should just create a nonloc::ConcreteInt
to begin with.
No functionality change.
llvm-svn: 177805
We can support the full range of comparison operations between two locations
by canonicalizing them as subtraction, as in the previous commit.
This won't work (well) if either location includes an offset, or (again)
if the comparisons are not consistent about which region comes first.
<rdar://problem/13239003>
llvm-svn: 177803
Canonicalizing these two forms allows us to better model containers like
std::vector, which use "m_start != m_finish" to implement empty() but
"m_finish - m_start" to implement size(). The analyzer should have a
consistent interpretation of these two symbolic expressions, even though
it's not properly reasoning about either one yet.
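A simplified sketch of the container layout in question (field names follow
libstdc++'s std::vector):
  struct IntVector {
    int *m_start;
    int *m_finish;

    bool empty() const { return !(m_start != m_finish); }
    unsigned size() const { return m_finish - m_start; }
  };

  void test(const IntVector &v) {
    if (!v.empty()) {
      // With both forms canonicalized as 'm_finish - m_start', the analyzer
      // can treat this size() consistently with the emptiness check above.
      unsigned n = v.size();
      (void)n;
    }
  }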
The other unfortunate thing is that while the size() expression will only
ever be written "m_finish - m_start", the comparison may be written
"m_finish == m_start" or "m_start == m_finish". Right now the analyzer does
not attempt to canonicalize those two expressions, since it doesn't know
which length expression to pick. Doing this correctly will probably require
implementing unary minus as a new SymExpr kind (<rdar://problem/12351075>).
For now, the analyzer inverts the order of arguments in the comparison to
build the subtraction, on the assumption that "begin() != end()" is
written more often than "end() != begin()". This is purely speculation.
<rdar://problem/13239003>
llvm-svn: 177801
We just treat this as opaque symbols, but even that allows us to handle
simple cases where the same condition is tested twice. This is very common
in the STL, which means that any project using the STL gets spurious errors.
Part of <rdar://problem/13239003>.
llvm-svn: 177800
The algorithm used here was ridiculously slow when a potential back-edge
pointed to a node that already had a lot of successors. The previous commit
makes this feature unnecessary anyway.
This reverts r177468 / f4cf6b10f863b9bc716a09b2b2a8c497dcc6aa9b.
Conflicts:
lib/StaticAnalyzer/Core/BugReporter.cpp
llvm-svn: 177765
For a given bug equivalence class, we'd like to emit the report with the
shortest path. So far to do this we've been trimming the ExplodedGraph to
only contain relevant nodes, then doing a reverse BFS (starting at all the
error nodes) to find the shortest paths from the root. However, this is
fairly expensive when we are suppressing many bug reports in the same
equivalence class.
r177468-9 tried to solve this problem by breaking cycles during graph
trimming, then updating the BFS priorities after each suppressed report
instead of recomputing the whole thing. However, breaking cycles is not
a cheap operation because an analysis graph minus cycles is still a DAG,
not a tree.
This fix changes the algorithm to do a single forward BFS (starting from the
root) and to use that to choose the report with the shortest path by looking
at the error nodes with the lowest BFS priorities. This was Anna's idea, and
has the added advantage of requiring no update step: we can just pick the
error node with the next lowest priority to produce the next bug report.
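A minimal sketch of the idea (not the actual BugReporter code): one forward
BFS from the root assigns each node a priority, and reports are then tried in
increasing order of their error node's priority.
  #include <map>
  #include <queue>
  #include <vector>

  struct Node {
    std::vector<const Node *> Succs;
  };

  // Single forward BFS from the root; the priority of a node is its distance.
  std::map<const Node *, unsigned> computePriorities(const Node *Root) {
    std::map<const Node *, unsigned> Priority;
    std::queue<const Node *> Worklist;
    Priority[Root] = 0;
    Worklist.push(Root);
    while (!Worklist.empty()) {
      const Node *N = Worklist.front();
      Worklist.pop();
      unsigned Next = Priority[N] + 1;
      for (const Node *Succ : N->Succs)
        if (Priority.insert(std::make_pair(Succ, Next)).second)
          Worklist.push(Succ);
    }
    return Priority;
  }
Picking the next report to try is then just a matter of taking the remaining
error node with the smallest priority; nothing needs to be updated when a
report turns out to be invalid.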
<rdar://problem/13474689>
llvm-svn: 177764
This fixes some mistaken condition logic in RegionStore that caused
global variables to be invalidated when /any/ region was invalidated,
rather than only as part of opaque function calls. This was only
being used by CStringChecker, and so users will now see that strcpy()
and friends do not invalidate global variables.
Also, add a test case we don't handle properly: explicitly-assigned
global variables aren't being invalidated by opaque calls. This is
being tracked by <rdar://problem/13464044>.
llvm-svn: 177572
Due to improper modelling of copy constructors (specifically, their
const reference arguments), we were producing spurious leak warnings
for allocated memory stored in structs. In order to silence this, we
decided to consider storing into a struct to be the same as escaping.
However, the previous commit has fixed this issue and we can now properly
distinguish leaked memory that happens to be in a struct from a buffer
that escapes within a struct wrapper.
Originally applied in r161511, reverted in r174468.
<rdar://problem/12945937>
llvm-svn: 177571
In this case, the value of 'x' may be changed after the call to indirectAccess:
struct Wrapper {
  int *ptr;
};
void indirectAccess(const Wrapper &w);
void test() {
  int x = 42;
  Wrapper w = { &x };
  clang_analyzer_eval(x == 42); // TRUE
  indirectAccess(w);
  clang_analyzer_eval(x == 42); // UNKNOWN
}
This is important for modelling return-by-value objects in C++, to show
that the contents of the struct are escaping in the return copy-constructor.
<rdar://problem/13239826>
llvm-svn: 177570
This is a bit of old code trying to deal with the fact that functions that
take pointers often use them to access an entire array via pointer
arithmetic. However, RegionStore already conservatively assumes you can use
pointer arithmetic to access any part of a region.
Some day we may want to go back to handling this specifically for calls,
but we can do that in the future.
No functionality change.
llvm-svn: 177569
With the assurance that the trimmed graph does not contain cycles,
this patch is safe (with a few tweaks), and provides the performance
boost it was intended to.
Part of performance work for <rdar://problem/13433687>.
llvm-svn: 177469
Having a trimmed graph with no cycles (a DAG) is much more convenient for
trying to find shortest paths, which is exactly what BugReporter needs to do.
Part of the performance work for <rdar://problem/13433687>.
llvm-svn: 177468
This fixes a crash when analyzing LLVM that was exposed by r177220 (modeling of
trivial copy/move assignment operators).
When we look up a lazy binding for “Builder”, we see the direct binding of Loc at offset 0.
Previously, we believed the binding, which led to a crash. Now, we do not believe it as
the types do not match.
llvm-svn: 177453
The whole reason we were doing a BFS in the first place is because an
ExplodedGraph can have cycles. Unfortunately, my removeErrorNode "update"
doesn't work at all if there are cycles.
I'd still like to be able to avoid doing the BFS every time, but I'll come
back to it later.
This reverts r177353 / 481fa5071c203bc8ba4f88d929780f8d0f8837ba.
llvm-svn: 177448
Splitting the graph trimming and the path-finding (r177216) already
recovered quite a bit of performance lost to increased suppression.
We can still do better by also performing the reverse BFS up front
(needed for shortest-path-finding) and only walking the shortest path
for each report. This does mean we have to walk back up the path and
invalidate all the BFS numbers if the report turns out to be invalid,
but it's probably still faster than redoing the full BFS every time.
More performance work for <rdar://problem/13433687>
llvm-svn: 177353
r175234 allowed the analyzer to model trivial copy/move constructors as
an aggregate bind. This commit extends that to trivial assignment
operators as well. Like the last commit, one of the motivating factors here
is not warning when the right-hand object is partially-initialized, which
can have legitimate uses.
<rdar://problem/13405162>
llvm-svn: 177220
When we generate a path diagnostic for a bug report, we have to take the
full ExplodedGraph and limit it down to a single path. We do this in two
steps: "trimming", which limits the graph to all the paths that lead to
this particular bug, and "creating the report graph", which finds the
shortest path in the trimmed path to any error node.
With BugReporterVisitor false positive suppression, this becomes more
expensive: it's possible for some paths through the trimmed graph to be
invalid (i.e. likely false positives) but others to be valid. Therefore
we have to run the visitors over each path in the graph until we find one
that is valid, or until we've ruled them all out. This can become quite
expensive.
This commit separates out graph trimming from creating the report graph,
performing the first only once per bug equivalence class and the second
once per bug report. It also cleans up that portion of the code by
introducing some wrapper classes.
This seems to recover most of the performance regression described in my
last commit.
<rdar://problem/13433687>
llvm-svn: 177216
...in favor of this typedef:
typedef llvm::DenseMap<const ExplodedNode *, const ExplodedNode *>
        InterExplodedGraphMap;
Use this everywhere the previous class and typedef were used.
Took the opportunity to ArrayRef-ize ExplodedGraph::trim while I'm at it.
No functionality change.
llvm-svn: 177215
I removed this check in the recursion->iteration commit, but forgot that
generatePathDiagnostic may be called multiple times if there are multiple
PathDiagnosticConsumers.
llvm-svn: 177214
Fixes a FIXME, improves dead symbol collection, suppresses a false positive,
which resulted from reusing the same symbol twice for simulation of 2 calls to the same function.
Fixing this led to 2 possible false negatives in the CString checker. Since the checker is still alpha and
the solution will not require a revert of this commit, move the tests to a FIXME section.
llvm-svn: 177206
The previous generatePathDiagnostic() was intended to be tail-recursive,
restarting and trying again if a report was marked invalid. However:
(1) this leaked all the cloned visitors, which weren't being deleted, and
(2) this wasn't actually tail-recursive because some local variables had
non-trivial destructors.
This was causing us to overflow the stack on inputs with large numbers of
reports in the same equivalence class, such as sqlite3.c. Being iterative
at least prevents us from blowing out the stack, but doesn't solve the
performance issue: suppressing thousands (yes, thousands) of paths in the
same equivalence class is expensive. I'm looking into that now.
<rdar://problem/13423498>
llvm-svn: 177189
We discovered that sqlite3.c currently has 2600 reports in a single
equivalence class; it would be good to know if this is a recent
development or what.
(For the curious, the different reports in an equivalence class represent
the same bug found along different paths. When we're suppressing false
positives, we need to go through /every/ path to make sure there isn't a
valid path to a bug. This is a flaw in our after-the-fact suppression,
made worse by the fact that that function isn't particularly optimized.)
llvm-svn: 177188
In the test case below, the value V is not constrained to 0 in ErrorNode but it is in node N.
So we used to fail to register the Suppression visitor.
We also need to change the way we determine that the Visitor should kick in because the node N belongs to
the ExplodedGraph and might not be on the BugReporter path that the visitor sees. Instead of trying to match the node,
turn on the visitor when we see the last node in which the symbol is ‘0’.
llvm-svn: 177121
When BugReporter tracks C++ references involved in a null pointer violation, we
want to differentiate between a null reference and a reference to a null pointer. In the
first case, we want to track the region for the reference location; in the second, we want
to track the null pointer.
In addition, the core creates CXXTempObjectRegion to represent the location of the
C++ reference, so teach FindLastStoreBRVisitor about it.
This helps null pointer suppression to kick in.
(Patch by Anna and Jordan.)
llvm-svn: 176969
r176737 fixed bugreporter::trackNullOrUndefValue to find nodes for an lvalue
even if the rvalue node had already been collected. This commit extends that
to call statement nodes as well, so that if a call is contained within
implicit casts we can still track the return value.
No test case because node reclamation is extremely finicky (dependent on
how the AST and CFG are built, and then on our current reclamation rules,
and /then/ on how many nodes were generated by the analyzer core and the
current set of checkers). I consider this a low-risk change, though, and
it will only happen in cases of reclamation when the rvalue node isn't
available.
<rdar://problem/13340764>
llvm-svn: 176829
The visitor used to assume that the value it's tracking is null in the first node it examines. This is not true
if we are registering the Suppress Inlined Defensive Checks visitor while traversing the path in another visitor
(such as FindLastStoreBRVisitor): when we restart with the IDC visitor, the invariant of the visitor does
not hold, since the symbol we are tracking no longer exists at that point.
I had to pass the ErrorNode when creating the IDC visitor because, in some cases, node N is
neither the error node nor will be visible along the path (we had not finalized the path at that point
and are dealing with the ExplodedGraph).
We should revisit the other visitors, which might not be aware that they might get nodes that are
later on the path than the trigger point.
This suppresses a number of inline defensive checks in JavaScriptCore.
llvm-svn: 176756
r176010 introduced the notion of "interesting" lvalue expressions, whose
nodes are guaranteed never to be reclaimed by the ExplodedGraph. This was
used in bugreporter::trackNullOrUndefValue to find the region that contains
the null or undef value being tracked.
However, the /rvalue/ nodes (i.e. the loads from these lvalues that produce
a null or undef value) /are/ still being reclaimed, and if we couldn't
find the node for the rvalue, we just give up. This patch changes that so
that we look for the node for either the rvalue or the lvalue -- preferring
the former, since it lets us fall back to value-only tracking in cases
where we can't get a region, but allowing the latter as well.
<rdar://problem/13342842>
llvm-svn: 176737
Previously, ReturnVisitor waited to suppress a null return path until it
had found the inlined "return" statement. Now, it checks up front whether
the return value was NULL, and suppresses the warning right away if so.
We still have to wait until generating the path notes to invalidate the bug
report, or counter-suppression will never be triggered. (Counter-suppression
happens while generating path notes, but the generation won't happen for
reports already marked invalid.)
This isn't actually an issue today because we never reclaim nodes for
top-level statements (like return statements), but it could be an issue
some day in the future. (But, no expected behavioral change and no new
test case.)
llvm-svn: 176736
The second modification does not lead to any visible result, but, theoretically, is what we should
have been looking at to begin with since we are checking if the node was assumed to be null in
an inlined function.
llvm-svn: 176576
Inlining brought a few "null pointer use" false positives, which occur because
the callee defensively checks if a pointer is NULL, whereas the caller knows
that the pointer cannot be NULL in the context of the given call.
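The pattern looks roughly like this (illustrative names):
  int valueOrZero(const int *p) {
    if (!p)           // defensive check inside the callee
      return 0;
    return *p;
  }

  void caller(int *q) {
    // The caller knows 'q' is non-null here, but after inlining valueOrZero()
    // the analyzer retains the 'q == 0' assumption from the defensive check...
    int v = valueOrZero(q);
    // ...and may then report a spurious null dereference here on that path.
    *q = v + 1;
  }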
This is a first attempt to silence these warnings by tracking the symbolic value
along the execution path in the BugReporter. The new visitor finds the node
in which the symbol was first constrained to NULL. If the node belongs to
a function on the active stack, the warning is reported, otherwise, it is
suppressed.
There are several areas for follow up work, for example:
- How do we differentiate the cases where the first check is followed by
another one, which does happen on the active stack?
Also, this only silences a fraction of null pointer use warnings. For example, it
does not do anything for the cases where NULL was assigned inside a callee.
llvm-svn: 176402
Previously we were assuming that we'd never ask for the sub-region bindings
of a bitfield, since a bitfield cannot have subregions. However,
unification of code paths has made that assumption invalid. While we could
take advantage of this by just checking for the single possible binding,
it's probably better to do the right thing, so that if/when we someday
support unions we'll do the right thing there, too.
This fixes a handful of false positives in analyzing LLVM.
<rdar://problem/13325522>
llvm-svn: 176388
Most map types have an operator[] that inserts a new element if the key
isn't found, then returns a reference to the value slot so that you can
assign into it. However, if the value type is a pointer, it will be
initialized to null. This is usually no problem.
However, if the user /knows/ the map contains a value for a particular key,
they may just use it immediately:
// From ClangSACheckersEmitter.cpp
recordGroupMap[group]->Checkers
In this case the analyzer reports a null dereference on the path where the
key is not in the map, even though the user knows that path is impossible
here. They could silence the warning by adding an assertion, but that means
splitting up the expression and introducing a local variable. (Note that
the analyzer has no way of knowing that recordGroupMap[group] will return
the same reference if called twice in a row!)
We already have logic that says a null dereference has a high chance of
being a false positive if the null came from an inlined function. This
patch simply extends that to references whose rvalues are null as well,
silencing several false positives in LLVM.
<rdar://problem/13239854>
llvm-svn: 176371
By returning the (key, value) binding pairs, we save lookups afterwards.
This also enables further work later on.
No functionality change.
llvm-svn: 176230
Consider this case:
int *p = 0;
p = getPointerThatMayBeNull();
*p = 1;
If we inline 'getPointerThatMayBeNull', we might know that the value of 'p'
is NULL, and thus emit a null pointer dereference report. However, we
usually want to suppress such warnings as error paths, and we do so by using
FindLastStoreBRVisitor to see where the NULL came from. In this case, though,
because 'p' was NULL both before and after the assignment, the visitor
would decide that the "last store" was the initialization, not the
re-assignment.
This commit changes FindLastStoreBRVisitor to consider all PostStore nodes
that assign to this region. This still won't catch changes made directly
by checkers if they re-assign the same value, but it does handle the common
case in user-written code and will trigger ReturnVisitor's suppression
machinery as expected.
<rdar://problem/13299738>
llvm-svn: 176201
This enables constructor inlining for types with non-trivial destructors.
The plan is to enable destructor inlining within the next month, but that
needs further verification.
<rdar://problem/12295329>
llvm-svn: 176200
This potentially reduces a performance optimization of throwing away
PreStmtPurgeDeadSymbols nodes. I'll investigate the performance impact
soon and see if we need something better.
llvm-svn: 176149
This is essentially the same problem as r174031: a lazy binding for the first
field of a struct may stomp on an existing default binding for the
entire struct. Because of the way RegionStore is set up, we can't help
but lose the top-level binding, but then we need to make sure that accessing
one of the other fields doesn't come back as Undefined.
In this case, RegionStore is now correctly detecting that the lazy binding
we have isn't the right type, but then failing to follow through on the
implications of that: we don't know anything about the other fields in the
aggregate. This fix adds a test when searching for other kinds of default
values to see if there's a lazy binding we rejected, and if so returns
a symbolic value instead of Undefined.
The long-term fix for this is probably a new Store model; see
<rdar://problem/12701038>.
Fixes <rdar://problem/13292559>.
llvm-svn: 176144
Fixes PR15358 and <rdar://problem/13295437>.
Along the way, shorten path diagnostics that say "Variable 'x'" to just
be "'x'". By the context, it is obvious that we have a variable,
and so this just consumes text space.
llvm-svn: 176115
Normally, we need to look through derived-to-base casts when creating
temporary object regions (added in r175854). However, if the temporary
is a pointer (rather than a struct/class instance), we need to /preserve/
the base casts that have been applied.
This also ensures that we really do create a new temporary region when
we need to: MaterializeTemporaryExpr and lvalue CXXDefaultArgExprs.
Fixes PR15342, although the test case doesn't include the crash because
I couldn't isolate it.
llvm-svn: 176069
These nodes are never consulted by any analyzer client code; they are
used only as machinery for removing dead bindings. Once successor nodes
are generated, they can be safely removed.
This greatly reduces the number of nodes that are generated in some cases,
lowering the memory regression when analyzing Sema.cpp introduced by
r176010 from 14% to 2%.
llvm-svn: 176050
r175026 added support for default values, but didn't take reference
parameters into account, which expect the default argument to be an
lvalue. Use createTemporaryRegionIfNeeded if we can evaluate the default
expr as an rvalue but the expected result is an lvalue.
Fixes the most recent report of PR12915. The original report predates
default argument support, so that can't be it.
llvm-svn: 176042
While RegionStore checks to make sure casts on TypedValueRegions are valid,
it does not do the same for SymbolicRegions, which do not have perfect type
info anyway. Additionally, MemRegion::getAsOffset does not take a
ProgramState, so it can't use dynamic type info to determine a better type
for the regions. (This could also be dangerous if the type of a super-region
changes!)
Account for this by checking that a base object region is valid on top of a
symbolic region, and falling back to "symbolic offset" mode if not.
Fixes PR15345.
llvm-svn: 176034
This was triggering assertion failures when analyzing the LLVM codebase. This
is fallout from r175988.
I've got delta chewing away on a test case, but I wanted the fix to go
in now.
llvm-svn: 176011
r175988 modified the ExplodedGraph trimming algorithm to retain all
nodes for "lvalue" expressions. This patch refines that notion to
only "interesting" expressions that would be used for diagnostics.
llvm-svn: 176010
This required more changes than I originally expected:
- ObjCIvarRegion implements "canPrintPretty" et al
- DereferenceChecker indicates the null pointer source is an ivar
- bugreporter::trackNullOrUndefValue() uses an alternate algorithm
to compute the location region to track by scouring the ExplodedGraph.
This allows us to get the actual MemRegion for variables, ivars,
fields, etc. We only hand construct a VarRegion for C++ references.
- ExplodedGraph no longer drops nodes for expressions that are marked
'lvalue'. This is to facilitate the logic in the previous bullet.
This may lead to a slight increase in size in the ExplodedGraph,
which I have not measured, but it is likely not to be a big deal.
I have validated each of the changed plist outputs.
Fixes <rdar://problem/12114812>
llvm-svn: 175988
Use Optional<CFG*> where invalid states were needed previously. In the one case
where that's not possible (beginAutomaticObjDtorsInsert) just use a dummy
CFGAutomaticObjDtor.
Thanks for the help from Jordan Rose & discussion/feedback from Ted Kremenek
and Doug Gregor.
Post commit code review feedback on r175796 by Ted Kremenek.
llvm-svn: 175938
This Decl shouldn't be the canonical Decl; it should be the Decl used by
the CXXBaseSpecifier in the subclass. Unfortunately, that means continuing
to throw getCanonicalDecl() on all comparisons.
This fixes MemRegion::getAsOffset's use of ASTRecordLayout when redeclarations
are involved.
llvm-svn: 175913
Previously, we had the decisions about inlining spread out
over multiple functions.
In addition to the refactor, this commit ensures
that we will always inline BodyFarm functions as long as the Decl
is available. This fixes false positives due to those functions
not being inlined when no or minimal inlining is enabled (such as
shallow mode).
llvm-svn: 175857
This is a follow-up to r175830, which made sure a temporary object region
created for, say, a struct rvalue matched up with the initial bindings
being stored into it. This does the same for the case in which the AST
actually tells us that we need to create a temporary via a
MaterializeTemporaryExpr. I've unified the two code paths and moved a static
helper function onto ExprEngine.
This also caused a bit of test churn, causing us to go back to describing
temporary regions without a 'const' qualifier. This seems acceptable; it's
our behavior from a few months ago.
<rdar://problem/13265460> (part 2)
llvm-svn: 175854
When creating a temporary region (say, when a struct rvalue is used as
the base of a member expr), make sure we account for any derived-to-base
casts. We don't actually record these in the LazyCompoundVal that
represents the rvalue, but we need to make sure that the temporary region
we're creating (a) matches the bindings, and (b) matches its expression.
Most of the time this will do exactly the same thing as before, but it
fixes spurious "garbage value" warnings introduced in r175234 by the use
of lazy bindings to model trivial copy constructors.
<rdar://problem/13265460>
llvm-svn: 175830
This allows MemRegion and MemRegionManager to avoid asking over and over
again whether a class is a virtual base or a non-virtual base.
Minor optimization/cleanup; no functionality change.
llvm-svn: 175716
- When deciding if we can reuse a lazy binding, make sure to check if there
are additional bindings in the sub-region.
- When reading from a lazy binding, don't accidentally strip off casts or
base object regions. This slows down lazy binding reading a bit but is
necessary for type sanity when treating one class as another.
A bit of minor refactoring allowed these two checks to be unified in a nice
early-return-using helper function.
<rdar://problem/13239840>
llvm-svn: 175703
RegionStoreManager::getInterestingValues() returns a pointer to a
std::vector that lives inside a DenseMap, which is constructed on demand.
However, constructing one such value can lead to constructing another
value, which will invalidate the reference created earlier.
Fixed by delaying the new entry creation until the function returns.
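Not the actual RegionStore code, but the shape of both the bug and the fix
(assuming LLVM's DenseMap, whose insertions can invalidate references into
the map):
  #include "llvm/ADT/DenseMap.h"
  #include <vector>

  static llvm::DenseMap<int, std::vector<int>> Cache;

  const std::vector<int> &getInterestingValues(int Key) {
    llvm::DenseMap<int, std::vector<int>>::iterator It = Cache.find(Key);
    if (It != Cache.end())
      return It->second;

    // Build the value locally first.  If this computation recursively calls
    // getInterestingValues() and inserts another entry, a reference taken
    // from Cache[Key] up front could be invalidated by the rehash; a local
    // vector cannot be.
    std::vector<int> Values;
    Values.push_back(Key);

    // Create the map entry only when we are done.
    return Cache[Key] = Values;
  }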
llvm-svn: 175582
If a base object is at a 0 offset, RegionStoreManager may find a lazy
binding for the entire object, then try to attach a FieldRegion or
grandparent CXXBaseObjectRegion on top of that (skipping the intermediate
region). We now preserve as many layers of base object regions necessary
to make the types match.
<rdar://problem/13239840>
llvm-svn: 175556
This just adds a very simple check that if a DerivedToBase CastExpr is
operating on a value with known C++ object type, and that type is not the
base type specified in the AST, then the cast is invalid and we should
return UnknownVal.
In the future, perhaps we can have a checker that specifies that this is
illegal, but we still shouldn't assert even if the user turns that checker
off.
PR14872
llvm-svn: 175239
...after a host of optimizations related to the use of LazyCompoundVals
(our implementation of aggregate binds).
Originally applied in r173951.
Reverted in r174069 because it was causing hangs.
Re-applied in r174212.
Reverted in r174265 because it was /still/ causing hangs.
If this needs to be reverted again, it will be punted far into the future.
llvm-svn: 175234
Previously, we were scanning the current store. Now, we properly scan the
store that the LazyCompoundVal came from, which may have very different
live symbols.
llvm-svn: 175232
Previously, whenever we had a LazyCompoundVal, we crawled through the
entire store snapshot looking for bindings within the LCV's region. Now, we
just ask for the subregion bindings of the lazy region and only visit those.
This is an optimization (so no test case), but it may allow us to clean up
more dead bindings than we were previously.
llvm-svn: 175230
This is going to be used in the next commit.
While I'm here, tighten up assumptions about symbolic offset
BindingKeys, and make offset calculation explicitly handle all
MemRegion kinds.
No functionality change.
llvm-svn: 175228
In C++, constants captured by lambdas (and blocks) are not actually stored
in the closure object, since they can be expanded at compile time. In this
case, they will have no binding when we go to look them up. Previously,
RegionStore thought they were uninitialized stack variables; now, it checks
to see if they are a constant we know how to evaluate, using the same logic
as r175026.
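One way this arises (illustrative; the same applies to blocks):
  int test() {
    const int bufferSize = 8;
    // 'bufferSize' is a constant, so the lambda does not need to capture it
    // and nothing is stored for it in the closure object.  When the analyzer
    // looks it up inside the lambda there is no binding, and it now falls
    // back to the constant evaluator instead of reporting garbage.
    auto getSize = [] { return bufferSize * 2; };
    return getSize();
  }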
This particular code path is only for scalar variables. Constant arrays and
structs are still unfortunately unhandled; we'll need a stronger solution
for those.
This may have a small performance impact, but only for truly-undefined
local variables, captures in a non-inlined block, and non-constant globals.
Even then, in the non-constant case we're only doing a quick type check.
<rdar://problem/13105553>
llvm-svn: 175194
Previously, we were handling only simple integer constants for globals and
the smattering of implicitly-valued expressions handled by Environment for
default arguments. Now, we can use any integer constant expression that
Clang can evaluate, in addition to everything we handled before.
PR15094 / <rdar://problem/12830437>
llvm-svn: 175026
The checkPointerEscape callback previously did not specify how a
pointer escaped. This change includes an enum which describes the
different ways a pointer may escape. This enum is passed to the
checkPointerEscape callback when a pointer escapes. If the escape
is due to a function call, the call is passed. This changes
previous behavior where the call is passed as NULL if the escape
was due to indirectly invalidating the region the pointer referenced.
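Roughly what a checker sees after this change ('MyChecker' is hypothetical;
the signature and enumerator names are as they appear in later Clang sources,
so treat the exact spelling at this revision as an assumption):
  #include "clang/StaticAnalyzer/Core/Checker.h"
  #include "clang/StaticAnalyzer/Core/PathSensitive/CallEvent.h"
  #include "clang/StaticAnalyzer/Core/PathSensitive/CheckerContext.h"

  using namespace clang;
  using namespace ento;

  class MyChecker : public Checker<check::PointerEscape> {
  public:
    ProgramStateRef checkPointerEscape(ProgramStateRef State,
                                       const InvalidatedSymbols &Escaped,
                                       const CallEvent *Call,
                                       PointerEscapeKind Kind) const {
      // 'Kind' describes how the pointers escaped: passed directly to a call,
      // reachable from a region a call may have invalidated, or otherwise.
      if (Kind == PSK_IndirectEscapeOnCall) {
        // Unlike before, 'Call' is still provided here instead of being NULL.
      }
      return State;
    }
  };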
A patch by Branden Archer!
llvm-svn: 174677
This patch makes sure that we do not reinitialize static globals when
the function is called more than once along a path. The motivation is
code with initialization patterns that rely on 2 static variables, where
one of them has an initializer while the other does not. Currently, we
reset the static variables with initializers on every visit to the
function along a path.
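The kind of pattern in question (names made up):
  int computeExpensiveValue() { return 42; }

  int getCachedValue() {
    static int cachedValue;            // static without an initializer
    static bool initialized = false;   // static with an initializer

    // If the analyzer re-ran the initialization of 'initialized' on a second
    // call along the same path, it would wrongly reset it to false and model
    // the value as being recomputed.
    if (!initialized) {
      cachedValue = computeExpensiveValue();
      initialized = true;
    }
    return cachedValue;
  }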
llvm-svn: 174676
This is a "quick fix".
The underlying issue is that when a const pointer to a struct is passed
into a function, we do not invalidate the pointer fields. This results
in false positives that are common in C++ (since copy constructors are
prevalent). (Silences two llvm false positives.)
llvm-svn: 174468
This is a more natural order of evaluation, and it is very important
for visualization in the static analyzer. Within Xcode, the arrows
will not jump from right to left, which looks very visually jarring.
It also provides a more natural location for dataflow-based diagnostics.
Along the way, we found a case in the analyzer diagnostics where we
needed to indicate that a variable was "captured" by a block.
-fsyntax-only timings on sqlite3.c show no visible performance change,
although this is just one test case.
Fixes <rdar://problem/13016513>
llvm-svn: 174447
...again. The problem has not been fixed and our internal buildbot is still
getting hangs.
This reverts r174212, originally applied in r173951, then reverted in r174069.
Will not re-apply until the entire project analyzes successfully on my
local machine.
llvm-svn: 174265
Inlining these functions is essential for correctness. We often have
cases where we do not inline calls. For example, the shallow mode and
when reanalyzing previously inlined ObjC methods as top level.
llvm-svn: 174245
This allows us to keep from chaining LazyCompoundVals in cases like this:
CGRect r = CGRectMake(0, 0, 640, 480);
CGRect r2 = r;
CGRect r3 = r2;
Previously we only made this optimization if the struct did not begin with
an aggregate member, to make sure that we weren't picking up an LCV for
the first field of the struct. But since LazyCompoundVals are typed, we can
make that inference directly by comparing types.
This is a pure optimization; the test changes are to guard against possible
future regressions.
llvm-svn: 174211
It's causing hangs on our internal analyzer buildbot. Will restore after
investigating.
This reverts r173951 / baa7ca1142990e1ad6d4e9d2c73adb749ff50789.
llvm-svn: 174069
This is a hack to work around the fact that we don't track extents for our
default bindings:
CGPoint p;
p.x = 0.0;
p.y = 0.0;
rectParam.origin = p;
use(rectParam.size); // warning: uninitialized value in rectParam.size.width
In this case, the default binding for 'p' gets copied into 'rectParam',
because the 'origin' field is at offset 0 within CGRect. From then on,
rectParam's old default binding (in this case a symbol) is lost.
This patch silences the warning by pretending that lazy bindings are never
made from uninitialized memory; not only is that untrue, but the original
default binding is still getting overwritten (see FIXME test cases).
The long-term solution is tracked in <rdar://problem/12701038>
PR14765 and <rdar://problem/12875012>
llvm-svn: 174031
This fixes false positives: the includeSuffix was only set on the first
iteration through the function, resulting in invalid regions being
produced by getLazyBinding (e.g. zoomRegion.y).
llvm-svn: 174016
Redefine the shallow mode to inline all functions for which we have a
definite definition (ipa=inlining). However, only inline functions that
are up to 4 basic blocks large and cut the max exploded nodes generated
per top level function in half.
This makes shallow faster and allows us to keep inlining small
functions. For example, we would keep inlining wrapper functions and
constructors/destructors.
With the new shallow mode, analyzing sqlite3 takes 104s, whereas
deep mode takes 658s and the previous shallow mode took 209s.
llvm-svn: 173958
This is faster for the analyzer to process than inlining the constructor
and performing a member-wise copy, and it also solves the problem of
warning when a partially-initialized POD struct is copied.
Before:
CGPoint p;
p.x = 0;
CGPoint p2 = p; <-- assigned value is garbage or undefined
After:
CGPoint p;
p.x = 0;
CGPoint p2 = p; // no-warning
This matches our behavior in C, where we don't see a field-by-field copy.
<rdar://problem/12305288>
llvm-svn: 173951
When the analyzer sees an initializer, it checks if the initializer
contains a CXXConstructExpr. If so, it trusts that the CXXConstructExpr
does the necessary work to initialize the object, and performs no further
initialization.
This patch looks through any implicit wrapping expressions like
ExprWithCleanups to find the CXXConstructExpr inside.
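As a rough illustration (the types are made up; PR15070's actual test case
may differ), an initializer whose CXXConstructExpr is wrapped in an
ExprWithCleanups looks something like this:
struct Name {
  Name(const char *);
  ~Name();
};
struct Entry {
  Entry(const Name &N);
};
struct Table {
  Entry First;
  // Constructing Entry from a Name temporary wraps the CXXConstructExpr in an
  // ExprWithCleanups; the analyzer now looks through the wrapper and trusts
  // the constructor to initialize 'First'.
  Table() : First(Name("first")) {}
};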
Fixes PR15070.
llvm-svn: 173557
The idea is to introduce a higher level "user mode" option for
different use scenarios. For example, if one wants to run the analyzer
for a small project each time the code is built, they would use
the "shallow" mode.
The user mode option will influence the default settings for the
lower-level analyzer options. For now, this just influences the ipa
modes, but we plan to find more optimal settings for them.
llvm-svn: 173386
The idea is to eventually place all analyzer options under
"analyzer-config". In addition, this lays the ground for introduction of
a high-level analyzer mode option, which will influence the
default setting for IPAMode.
llvm-svn: 173385
Before:
Calling implicit default constructor for 'Foo' (where Foo is constructed)
Entered call from 'test' (at "=default" or 'Foo' declaration)
Calling default constructor for 'Bar' (at "=default" or 'Foo' declaration)
After:
Calling implicit default constructor for 'Foo' (where Foo is constructed)
Calling default constructor for 'Bar' (at "=default" or 'Foo' declaration)
This only affects the plist diagnostics; this note is never shown in the
other diagnostics.
llvm-svn: 172915
Suppress the warning by just not emitting the report. The sink node
would get generated, which is fine since we did reach a bad state.
Motivation
Due to the way code is structured in some of these macros, we do not
reason correctly about it and report false positives. Specifically, the
following loop reports a use-after-free. Because of the way the code is
structured inside of the macro, the analyzer assumes that the list can
have cycles, so you end up with a use-after-free report in a loop that is
safely deleting elements of the list. (The user does not have a way to
teach the analyzer about shape of data structures.)
SLIST_FOREACH_SAFE(item, &ctx->example_list, example_le, tmpitem) {
if (item->index == 3) { // if you remove each time, no complaints
assert((&ctx->example_list)->slh_first == item);
SLIST_REMOVE(&ctx->example_list, item, example_s, example_le);
free(item);
}
}
llvm-svn: 172883
The issue here is that if we have two leaks reported at the same line for
which we cannot print the corresponding region info, they will get
treated as the same issue by issue_hash+description. We need to augment
the issue_hash with the allocation info to differentiate the two issues.
Add a "hash" (the offset from the beginning of the function) representing
the allocation site to solve the issue.
We might want to generalize the solution in the future when we decide to
track more than just these two locations from the diagnostics.
llvm-svn: 171825
Instead of using several callbacks to identify the pointer escape event,
checkers now can register for the checkPointerEscape.
Converted the Malloc checker to use the new callback.
SimpleStreamChecker will be converted next.
llvm-svn: 170625
performance heuristic
After inlining a function with more than 13 basic blocks 32 times, we
stop inlining it. The idea is that inlining large functions has
drastic performance implications.
has already been inlined, we know that we've analyzed it in many
contexts.
The following metrics are used:
- Large function is a function with more than 13 basic blocks (we
should switch to another metric, like cyclomatic complexity)
- We consider that we've inlined a function many times if it's been
inlined 32 times. This number is configurable with -analyzer-config
max-times-inline-large=xx
This heuristic addresses a performance regression introduced with
inlining on one benchmark. The analyzer on this benchmark became 60
times slower with inlining turned on. The heuristic allows us to analyze
it in 24% of the time. The performance improvements on the other
benchmarks I've tested with are much lower - under 10%, which is
expected.
llvm-svn: 170361
We don't handle array destructors correctly yet, but we now apply the same
hack (explicitly destroy the first element, implicitly invalidate the rest)
for multidimensional arrays that we already use for linear arrays.
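A quick sketch of the kind of declaration this affects (the type is
invented):
struct Resource {
  ~Resource() {}
};

void test() {
  Resource Grid[2][3];
  // The analyzer explicitly destroys only the first element of Grid and
  // implicitly invalidates the rest, the same approximation used for
  // one-dimensional arrays.
}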
<rdar://problem/12858542>
llvm-svn: 170000
top level.
This heuristic is already turned on for non-ObjC methods
(inlining-mode=noredundancy). If a method has been previously analyzed,
while being inlined inside of another method, do not reanalyze it as top
level.
This commit applies it to ObjCMethods as well. The main caveat here is
that to catch the retain release errors, we are still going to reanalyze
all the ObjC methods but without inlining turned on.
Gives 21% performance increase on one heavy ObjC benchmark, which
suffered large performance regressions due to ObjC inlining.
llvm-svn: 169639
This is the case where the analyzer tries to print out source locations
for code within a synthesized function body, which of course does not have
a valid source location. The previous fix attempted to do this during
diagnostic path pruning, but some diagnostics have pruning disabled, and
so any diagnostic with a path that goes through a synthesized body will
either hit an assertion or emit invalid output.
<rdar://problem/12657843> (again)
llvm-svn: 169631
This reduces analysis time by 1.2% on one test case (Objective-C), but
also cleans up some of the code conceptually. We can possibly
just make RegionBindingsRef -> RegionBindings, but I wanted to stage
things.
After this, we should revisit Jordan's optimization of not canonicalizing
the immutable AVL trees for the cluster bindings as well.
llvm-svn: 169571
Previously we would search for the last statement, then back up to the
entrance of the block that contained that statement. Now, while we're
scanning for the statement, we just keep track of which blocks are being
exited (in reverse order).
llvm-svn: 169526
This doesn't seem to make much of a difference in practice, but it does
have the potential to avoid a trip through the constraint manager.
llvm-svn: 169524
Whenever we touch a single bindings cluster multiple times, we can delay
canonicalizing it until the final access. This has some interesting
implications, in particular that we shouldn't remove an /empty/ cluster
from the top-level map until canonicalization.
This is good for a 2% speedup or so on the test case in
<rdar://problem/12810842>
llvm-svn: 169523
This feature was probably intended to improve diagnostics, but was only
used when dumping the Environment. It shows what location a given value
was loaded from, e.g. when evaluating an LValueToRValue cast.
llvm-svn: 169522
uncovered.
This required manually correcting all of the incorrect main-module
headers I could find, and running the new llvm/utils/sort_includes.py
script over the files.
I also manually added quite a few missing headers that were uncovered by
shuffling the order or moving headers up to be main-module-headers.
llvm-svn: 169237
The stop-gap here is to just drop such objects when processing the InitListExpr.
We still need a better solution.
Fixes <rdar://problem/12755044>.
llvm-svn: 168757
This was also covered by <rdar://problem/12753384>. The static analyzer
evaluates a CXXConstructExpr within an initializer expression and
RegionStore doesn't know how to handle the resulting CXXTempObjectRegion
that gets created. We need a better solution than just dropping the
value, but we need to better understand how to implement the right
semantics here.
Thanks to Jordan for his help diagnosing the behavior here.
llvm-svn: 168741
The AllocaRegion did not have the superRegion (based on LocationContext)
as part of its hash. As a consequence, the AllocaRegions from
different frames were uniqued to be the same region.
llvm-svn: 168599
In code like this:
void foo() {
bar();
baz();
}
...the location for the call to 'bar()' was being used as a backup location
for the call to 'baz()'. This is fine unless the call to 'bar()' is deemed
uninteresting and that part of the path deleted.
(This looks like a logic error as well, but in practice the only way 'baz()'
could have an invalid location is if the entire body of 'foo()' is
synthesized, meaning the call to 'bar()' will be using the location of the
call to 'foo()' anyway. Nevertheless, the new version better matches the
intent of the code.)
Found by Matt Beaumont-Gay using ASan. Thanks, Matt!
llvm-svn: 168080
This fixes a few cases where we'd emit path notes like this:
+---+
1| v
p = malloc(len);
^ |2
+---+
In general this should make path notes more consistent and more correct,
especially in cases where the leak happens on the false branch of an if
that jumps directly to the end of the function. There are a couple places
where the leak is reported farther away from the cause; these are usually
cases where there are several levels of nested braces before the end of
the function. This still matches our current behavior for when there /is/
a statement after all the braces, though.
llvm-svn: 168070
This allows us to properly remove dead bindings at the end of the top-level
stack frame, using the ReturnStmt, if there is one, to keep the return value
live. This in turn removes the need for a check::EndPath callback in leak
checkers.
This does cause some changes in the path notes for leak checkers. Previously,
a leak would be reported at the location of the closing brace in a function.
Now, it gets reported at the last statement. This matches the way leaks are
currently reported for inlined functions, but is less than ideal for both.
llvm-svn: 168066
We do this by using the "most recent" good location: if a synthesized
function 'A' calls another function 'B', the path notes for the call to 'B'
will be placed at the same location as the path note for calling 'A'.
Similarly, the call to 'A' will have a note saying "Entered call from...",
and now we just don't emit that (since the user doesn't have a body to look
at anyway).
Previously, we were doing this for the "Calling..." notes, but not for the
"Entered call from..." or "Returning to caller". This caused a crash when
the path entered and then exiting a call within a synthesized body.
<rdar://problem/12657843>
llvm-svn: 168019
conditions.
The adjustment is needed only in case of dynamic dispatch performed by
the analyzer - when the runtime declaration is different from the static
one.
Document this explicitly in the code (by adding a helper). Also, use
canonical Decls to avoid matching against the case where the definition
is different from the found declaration.
This fix suppresses the testcase I added in r167762, so add another
testcase to make sure we do test commit r167762.
llvm-svn: 167780
Suppresses a leak false positive (radar://12663777).
In addition, we'll need to rewrite the adjustReturnValue() method not to
return UnknownVal by default, but rather assert in cases we cannot
handle. To make it possible, we need to correctly handle some of the
edge cases we already know about.
llvm-svn: 167762
Previously, RegionStore was being VERY conservative in saying that because
p[i].x and p[i].y have a concrete base region of 'p', they might overlap.
Now, we check the chain of fields back up to the base object and check if
they match.
This only kicks in when dealing with symbolic offset regions because
RegionStore's "base+offset" representation of concrete offset regions loses
all information about fields. In cases where all offsets are concrete
(s.x and s.y), RegionStore will already do the right thing, but mixing
concrete and symbolic offsets can cause bindings to be invalidated that
are known to not overlap (e.g. p[0].x and p[i].y).
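For example (a sketch; the field and variable names are only illustrative):
struct Point { int x, y; };

void test(Point *p, int i) {
  p[i].x = 1;
  p[i].y = 2;   // no longer invalidates the binding for p[i].x above:
                // the field chains (.x vs .y) are compared and do not match
  int v = p[i].x;
  (void)v;
}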
This additional refinement is tracked by <rdar://problem/12676180>.
<rdar://problem/12530149>
llvm-svn: 167654
As Anna pointed out, ProgramStateTrait.h is a relatively obscure header,
and checker writers may not know to look there to add their own custom
state.
The base macro that specializes the template remains in ProgramStateTrait.h
(REGISTER_TRAIT_WITH_PROGRAMSTATE), which allows the analyzer core to keep
using it.
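For instance, a checker source file can now declare its own trait with one of
the convenience macros. A rough sketch (the trait name and element type are
made up, and this assumes the macros are reachable through the usual checker
headers):
#include "clang/StaticAnalyzer/Core/PathSensitive/CheckerContext.h"

using namespace clang;
using namespace ento;

// Declares a set of SymbolRefs as checker-specific program state.
REGISTER_SET_WITH_PROGRAMSTATE(LockedHandles, SymbolRef)

// Inside a checker callback the trait is then used through ProgramState:
//   ProgramStateRef State = C.getState();
//   State = State->add<LockedHandles>(Sym);
//   if (State->contains<LockedHandles>(Sym)) { /* ... */ }
//   C.addTransition(State);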
llvm-svn: 167385
This will simplify checkers that need to register for leaks. Currently,
they have to register for both: check dead and check end of path.
I've modified the SymbolReaper to consider everything on the stack dead
if the input StackLocationContext is 0.
(This is a bit disruptive, so I'd like to flush out all the issues
asap.)
llvm-svn: 167352
These are CallEvent-equivalents of helpers already accessible in
CheckerContext, as part of making it easier for new checkers to be written
using CallEvent rather than raw CallExprs.
llvm-svn: 167338
Also, move the REGISTER_*_WITH_PROGRAMSTATE macros to ProgramStateTrait.h.
This doesn't get rid of /all/ explicit uses of ProgramStatePartialTrait,
but it does get a lot of them.
llvm-svn: 167276
Previously, every call to a ConstraintManager's isNull would do a full
assumeDual to test feasibility. Now, ConstraintManagers can override
checkNull if they have a cheaper way to do the same thing.
RangeConstraintManager can do this in less than half the work.
<rdar://problem/12608209>
llvm-svn: 167138
Our one basic suppression heuristic is to assume that functions do not
usually return NULL. However, when one of the arguments is NULL it is
suddenly much more likely that NULL is a valid return value. In this case,
we don't suppress the report here, but we do attach /another/ visitor to
go find out if this NULL argument also comes from an inlined function's
error path.
This new behavior, controlled by the 'avoid-suppressing-null-argument-paths'
analyzer-config option, is turned off by default. Turning it on produced
two false positives and no new true positives when running over LLVM/Clang.
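A sketch of the pattern this targets (function and variable names are made
up):
int *lookup(int *Slot) {
  if (!Slot)
    return 0;   // returns null precisely because its argument was null
  return Slot;
}

void test(int *P) {
  int *Q = lookup(P);
  // With the option on, the null return is not suppressed as an inlined
  // function's error path; instead the null argument 'P' is tracked further.
  *Q = 1;
}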
This is one of the possible refinements to our suppression heuristics.
<rdar://problem/12350829>
llvm-svn: 166941
Additionally, don't collect PostStore nodes -- they are often used in
path diagnostics.
Previously, we tried to track null arguments in the same way as any other
null values, but in many cases the necessary nodes had already been
collected (a memory optimization in ExplodedGraph). Now, we fall back to
using the value of the argument at the time of the call, which may not
always match the actual contents of the region, but often will.
This is a precursor to improving our suppression heuristic.
<rdar://problem/12350829>
llvm-svn: 166940
path notes for cases where a value may be assumed to be null, etc.
Instead of having redundant diagnostics, do a pass over the generated
PathDiagnostic pieces and remove notes from TrackConstraintBRVisitor
that are already covered by ConditionBRVisitor, whose notes tend
to be better.
Fixes <rdar://problem/12252783>
llvm-svn: 166728
After every 1000 CFGElements processed, the ExplodedGraph trims out nodes
that satisfy a number of criteria for being "boring" (single predecessor,
single successor, and more). Rather than controlling this with a cc1 option,
which can only disable this behavior, we now have an analyzer-config option,
'graph-trim-interval', which can change this interval from 1000 to something
else. Setting the value to 0 disables reclamation.
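For example (following the abbreviated cc1 style used elsewhere in these
notes):
... -analyzer-config graph-trim-interval=500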
The next commit relies on this behavior to actually test anything.
llvm-svn: 166528
This is actually required by the C++ standard in
[basic.stc.dynamic.allocation]p3:
If an allocation function declared with a non-throwing
exception-specification fails to allocate storage, it shall return a
null pointer. Any other allocation function that fails to allocate
storage shall indicate failure only by throwing an exception of a type
that would match a handler of type std::bad_alloc.
We don't bother checking for the specific exception type, but just go off
the operator new prototype. This should help with a certain class of lazy
initialization false positives.
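A small sketch of the distinction (the type is invented):
#include <new>

struct Widget { int Value; };

void test() {
  Widget *W1 = new Widget;                 // may throw, never returns null:
  W1->Value = 1;                           // no null-dereference warning here
  Widget *W2 = new (std::nothrow) Widget;  // declared non-throwing, so it may
  if (W2)                                  // return null; the check matters
    W2->Value = 2;
  delete W2;
  delete W1;
}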
<rdar://problem/12115221>
llvm-svn: 166363
This actually looks through several kinds of expression, such as
OpaqueValueExpr and ExprWithCleanups. The idea is that binding and lookup
should be consistent, and so if the environment needs to be modified later,
the code doing the modification will not have to manually look through these
"transparent" expressions to find the real binding to change.
This is necessary for proper updating of struct rvalues as described in
the previous commit.
llvm-svn: 166121
In C++, rvalues that need to have their address taken (for example, to be
passed to a function by const reference) will be wrapped in a
MaterializeTemporaryExpr, which lets CodeGen know to create a temporary
region to store this value. However, MaterializeTemporaryExprs are /not/
created when a method is called on an rvalue struct, even though the 'this'
pointer needs a valid value. CodeGen works around this by creating a
temporary region anyway; now, so does the analyzer.
The analyzer also does this when accessing a field of a struct rvalue.
This is a little unfortunate, since the rest of the struct will soon be
thrown away, but it does make things consistent with the rest of the
analyzer.
This allows us to bring back the assumption that all known 'this' values
are Locs. This is a revised version of r164828-9, reverted in r164876-7.
<rdar://problem/12137950>
llvm-svn: 166120
This was only used by OSAtomicChecker and makes it more
difficult to update values for expressions that the environment
may look through instead (it's not the same as IgnoreParens).
With this gone, we can have bindExpr bind to the inner
expression that getSVal will find.
Groundwork for <rdar://problem/12137950>
llvm-svn: 165866
I believe the removed assert in CheckerManager says it best:
InlineCall is a special hacky callback to allow intrusive
evaluation of the call (which simulates inlining). It is
currently only used by OSAtomicChecker and should go away
at some point.
OSAtomicChecker has gone away; inlineCall can now go away as well!
llvm-svn: 165865
This time, actually uncomment the code that's supposed to fix the problem.
This reverts r165671 / 8ceb837585ed973dc36fba8dfc57ef60fc8f2735.
llvm-svn: 165676
Author: Jordan Rose <jordan_rose@apple.com>
Date: Wed Oct 10 21:31:21 2012 +0000
[analyzer] Treat fields of unions as having symbolic offsets.
This allows only one field to be active at a time in RegionStore.
This isn't quite the correct behavior for unions, but it at least
would handle the case of "value goes in, value comes out" from the
same field.
RegionStore currently has a number of places where any access to a union
results in UnknownVal being returned. However, it is clearly missing
some cases, or the original issue wouldn't have occurred. It is probably
now safe to remove those changes, but that's a potentially destabilizing
change that should wait for more thorough testing.
Fixes PR14054.
git-svn-id: https://llvm.org/svn/llvm-project/cfe/trunk@165660 91177308-0d34-0410-b5e6-96231b3b80d8
This reverts commit cf9030e480f77ab349672f00ad302e216c26c92c.
llvm-svn: 165671
This allows only one field to be active at a time in RegionStore.
This isn't quite the correct behavior for unions, but it at least
would handle the case of "value goes in, value comes out" from the
same field.
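For example (a rough sketch of the intended behavior):
union Value {
  int AsInt;
  float AsFloat;
};

void test() {
  Value V;
  V.AsInt = 42;
  int X = V.AsInt;   // intent: a value stored to a field can be read back from
                     // the same field; other union accesses are still treated
                     // conservatively (UnknownVal)
  (void)X;
}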
RegionStore currently has a number of places where any access to a union
results in UnknownVal being returned. However, it is clearly missing
some cases, or the original issue wouldn't have occurred. It is probably
now safe to remove those changes, but that's a potentially destabilizing
change that should wait for more thorough testing.
Fixes PR14054.
llvm-svn: 165660
Some implicit statements, such as the implicit 'self' inserted for "free"
Objective-C ivar access, have invalid source locations. If one of these
statements is the location where an issue is reported, we'll now look at
the enclosing statements for a valid source location.
<rdar://problem/12446776>
llvm-svn: 165354
In C++, overriding virtual methods are allowed to specify a covariant
return type -- that is, if the return type of the base method is an
object pointer type (or reference type), the overriding method's return
type can be a pointer to a subclass of the original type. The analyzer
was failing to take this into account when devirtualizing a method call,
and anything that relied on the return value having the proper type later
would crash.
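A minimal sketch of the C++ case:
struct Base {
  virtual Base *clone() const;
};
struct Derived : Base {
  Derived *clone() const;   // covariant return type
};

Base *duplicate(const Base &B) {
  // If B's dynamic type is Derived, devirtualizing B.clone() yields a value
  // of type Derived*; the analyzer now inserts a derived-to-base cast so the
  // result matches the statically expected Base*.
  return B.clone();
}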
In Objective-C, overriding methods are allowed to specify ANY return type,
meaning we can NEVER be sure that devirtualizing will give us a "safe"
return value. Of course, a program that does this will most likely crash
at runtime, but the analyzer at least shouldn't crash.
The solution is to check and see if the function/method being inlined is
the function that static binding would have picked. If not, check that
the return value has the same type. If the types don't match, see if we
can fix it with a derived-to-base cast (the C++ case). If we can't,
return UnknownVal to avoid crashing later.
<rdar://problem/12409977>
llvm-svn: 165079
These functions are store-agnostic, and would benefit from information in
DynamicTypeInfo but gain nothing from the store type.
No intended functionality change.
llvm-svn: 165078
table, making it printable with the ConfigDump checker. Along the
way, fix a really serious bug where the value was getting parsed
from the string in code that was in an assert() call. This means
in a Release-Asserts build this code wouldn't work as expected.
llvm-svn: 165041
By analogy with C structs, this seems to be legal, if probably discouraged.
It's only if the ivar is read from or written to that there's a problem.
Running a program that gets the "address" of an instance variable does in
fact return the offset when the base "object" is nil.
This isn't a full revert because r164442 includes some diagnostic tweaks
as well; those have been kept.
This partially reverts r164442 / 08965091770c9b276c238bac2f716eaa4da2dca4.
llvm-svn: 164960
The original intent of this commit was to catch potential null dereferences
early, but it breaks the common "home-grown offsetof" idiom (PR13927):
(((struct Foo *)0)->member - ((struct Foo *)0))
As it turns out, this appears to be legal in C, per a footnote in
C11 6.5.3.2: "Thus, &*E is equivalent to E (even if E is a null pointer)".
In C++ this issue is still open:
http://www.open-std.org/jtc1/sc22/wg21/docs/cwg_active.html#232
We'll just have to make sure we have good path notes in the future.
This reverts r164441 / 9be016dcd1ca3986873a7b66bd4bc027309ceb59.
llvm-svn: 164958
string in the config table so that it can be dumped as part of the
config dumper. Add a test to show that these options are sticking
and can be cross-checked using FileCheck.
llvm-svn: 164954
This is related to but not blocked by <rdar://problem/12137950>
("Return-by-value structs do not have associated regions")
This reverts r164875 / 3278d41e17749dbedb204a81ef373499f10251d7.
llvm-svn: 164952
It is possible and valid to have a state manager and associated objects
without having a SubEngine or checkers.
Patch by Olaf Krzikalla!
llvm-svn: 164947
Previously the analyzer treated all inlined constructors like lvalues,
setting the value of the CXXConstructExpr to the newly-constructed
region. However, some CXXConstructExprs behave like rvalues -- in
particular, the implicit copy constructor into a pass-by-value argument.
In this case, we want only the /contents/ of a temporary object to be
passed, so that we can use the same "copy each argument into the
parameter region" algorithm that we use for scalar arguments.
This may change when we start modeling destructors of temporaries,
but for now this is the last part of <rdar://problem/12137950>.
llvm-svn: 164830
An rvalue has no address, but calling a C++ member function requires a
'this' pointer. This commit makes the analyzer create a temporary region
in which to store the struct rvalue and use as a 'this' pointer whenever
a member function is called on an rvalue, which is essentially what
CodeGen does.
More of <rdar://problem/12137950>. The last part is tracking down the
C++ FIXME in array-struct-region.cpp.
llvm-svn: 164829
Struct rvalues are represented in the analyzer by CompoundVals,
LazyCompoundVals, or plain ConjuredSymbols -- none of which have associated
regions. If the entire structure is going to persist, this is not a
problem -- either the rvalue will be assigned to an existing region, or
a MaterializeTemporaryExpr will be present to create a temporary region.
However, if we just need a field from the struct, we need to create the
temporary region ourselves.
This is inspired by the way CodeGen handles calls to temporaries;
support for that in the analyzer is coming next.
Part of <rdar://problem/12137950>
llvm-svn: 164828
Previously, we'd just keep constraints around forever, which means we'd
never be able to merge paths that differed only in constraints on dead
symbols.
Because we now allow constraints on symbolic expressions, not just single
symbols, this requires changing SymExpr::symbol_iterator to include
intermediate symbol nodes in its traversal, not just the SymbolData leaf
nodes.
This depends on the previous commit to be correct. Originally applied in
r163444, reverted in r164275, now being re-applied.
llvm-svn: 164622
No tests, but this allows the optimization of removing dead constraints.
We can then add tests that we don't do this prematurely.
<rdar://problem/12333297>
Note: the added FIXME to investigate SymbolRegionValue liveness is
tracked by <rdar://problem/12368183>. This patch does not change the
existing behavior.
llvm-svn: 164621
This is a heuristic intended to greatly reduce the number of false
positives resulting from inlining, particularly inlining of generic,
defensive C++ methods that live in header files. The suppression is
triggered in the cases where we ask to track where a null pointer came
from, and it turns out that the source of the null pointer was an inlined
function call.
This change brings the number of bug reports in LLVM from ~1500 down to
around ~300, a much more manageable number. Yes, some true positives may
be hidden as well, but from what I have looked at, the vast majority of silenced
reports are false positives, and many of the true issues found by the
analyzer are still reported.
I'm hoping to improve this heuristic further by adding some exceptions
next week (cases in which a bug should still be reported).
llvm-svn: 164449
Before, PathDiagnosticConsumers that did not support actual path output
would (sensibly) cause the generation of the full path to be skipped.
However, BugReporterVisitors may want to see the path in order to mark a
BugReport as invalid.
Now, even for a path generation scheme of 'None' we will still create a
trimmed graph and walk backwards through the bug path, doing no work other
than passing the nodes to the BugReporterVisitors. This isn't cheap, but
it's necessary to properly do suppression when the first path consumer does
not support path notes.
In the future, we should try only generating the path and visitor-provided
path notes once, or at least only creating the trimmed graph once.
llvm-svn: 164447
This is intended to allow visitors to make decisions about whether a
BugReport is likely a false positive. Currently there are no visitors
making use of this feature, so there are no tests.
When a BugReport is marked invalid, the invalidator must provide a key
that identifies the invalidation (intended to be the visitor type and a
context pointer of some kind). This allows us to reverse the decision
later on. Being able to reverse a decision about invalidation gives us more
flexibility, and allows us to formulate conditions like "this report is
invalid UNLESS the original argument is 'foo'". We can use this to
fine-tune our false-positive suppression (coming soon).
llvm-svn: 164446
Rather than saying "Null pointer value stored to 'foo'", we now say
"Passing null pointer value via Nth parameter 'foo'", which is much better.
The note is also now on the argument expression as well, rather than the
entire call.
This paves the way for continuing to track arguments back to their sources.
<rdar://problem/12211490>
llvm-svn: 164444
Like with struct fields, we want to catch cases like this early,
so that we can produce better diagnostics and path notes:
PointObj *p = nil;
int *px = &p->_x; // should warn here
*px = 1;
llvm-svn: 164442
We want to catch cases like this early, so that we can produce better
diagnostics and path notes:
Point *p = 0;
int *px = &p->x; // should warn here
*px = 1;
llvm-svn: 164441
their implementations are unavailable. Start by simulating dispatch_sync().
This change is largely a bunch of plumbing around something very simple. We
use AnalysisDeclContext to conjure up a fake function body (using the
current ASTContext) when one does not exist. This is controlled
under the analyzer-config option "faux-bodies", which is off by default.
The plumbing in this patch is largely to pass the necessary machinery
around. CallEvent needs the AnalysisDeclContextManager to get
the function definition, as one may get conjured up lazily.
BugReporter and PathDiagnosticLocation needed to be relaxed to handle
invalid locations, as the conjured body has no real source locations.
We do some primitive recovery in diagnostic generation to generate
some reasonable locations (for arrows and events), but it can be
improved.
llvm-svn: 164339
While we definitely want this optimization in the future, we're not
currently handling constraints on symbolic /expressions/ correctly.
These should stay live even if the SymExpr itself is no longer referenced
because we could recreate an identical SymExpr later. Only once the SymExpr
can no longer be recreated -- i.e. a component symbol is dead -- can we
safely remove the constraints on it.
This liveness issue is tracked by <rdar://problem/12333297>.
This reverts r163444 / 24c7f98828e039005cff3bd847e7ab404a6a09f8.
llvm-svn: 164275
in ObjCMethods.
Extend FunctionTextRegion to represent ObjC methods as well as
functions. Note: it is not clear what type the ObjCMethod region should
return. Since the type of the FunctionText region is not currently used,
defer solving this issue.
llvm-svn: 164046
in NSException to a helper object in libAnalysis that can also
be used by Sema. Not sure if the predicate name 'isImplicitNoReturn'
is the best one, but we can massage that later.
No functionality change.
llvm-svn: 163759
Again, GCC is more aggressive about reusing temporary space than we are,
leading to Release build crashes for this undefined behavior.
PR13710 (though it may not be the only problem there)
llvm-svn: 163747
Currently we don't update the dynamic type of a C++ object when it is
cast. This can cause the situation above, where the static type of the
region is now known to be a subclass of the dynamic type.
Once we start updating DynamicTypeInfo in response to the various kinds
of casts in C++, we can re-add this assert to make sure we don't miss
any cases. This work is tracked by <rdar://problem/12287087>.
In -Asserts builds, we will simply not return any runtime definition
when our DynamicTypeInfo is known to be incorrect like this.
llvm-svn: 163745
Using the static type may be inconsistent with later calls. We should just
report that there is no inlining definition available if the static type is
better than the dynamic type. See next commit.
This reverts r163644 / 19d5886d1704e24282c86217b09d5c6d35ba604d.
llvm-svn: 163744
While PR13724 is still an issue, it's not actually an issue in the STL.
We can keep this option around in case there turn out to be widespread
false positives due to poor modeling of the C++ standard library functions,
but for now we'd like to get more data.
This reverts r163633 / c6baadceec1d5148c20ee6c902a102233c547f62.
llvm-svn: 163647
reinterpret_cast does not provide any of the usual type information that
static_cast or dynamic_cast provide -- only the new type. This can get us
in a situation where the dynamic type info for an object is actually a
superclass of the static type, which does not match what CodeGen does at all.
In these cases, just fall back to the static type as the best possible type
for devirtualization.
Should fix the crashes on our internal buildbot.
llvm-svn: 163644
C++11 [expr.call]p1: ...If the selected function is non-virtual, or if the
id-expression in the class member access expression is a qualified-id,
that function is called. Otherwise, its final overrider in the dynamic type
of the object expression is called.
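A minimal example of the two forms:
struct Base { virtual void f(); };
struct Derived : Base { virtual void f(); };

void test(Derived &D) {
  D.f();        // virtual dispatch: the final overrider Derived::f is called
  D.Base::f();  // qualified-id: Base::f is called directly, no dynamic dispatch
}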
<rdar://problem/12255556>
llvm-svn: 163577
The option allows to always inline very small functions, whose size (in
number of basic blocks) is set using -analyzer-config
ipa-always-inline-size option.
llvm-svn: 163558
This is a (heavy-handed) solution to PR13724 -- until we know we can do
a good job inlining the STL, it's best to be consistent and not generate
more false positives than we did before. We can selectively whitelist
certain parts of the 'std' namespace that are known to be safe.
This is controlled by analyzer config option 'c++-stdlib-inlining', which
can be set to "true" or "false".
This commit also adds control for whether or not to inline any templated
functions (member or non-member), under the config option
'c++-template-inlining'. This option is currently on by default.
llvm-svn: 163548
I need to see how this breaks on other platforms when I fix the issue
that Benjamin Kramer pointed out.
This includes r163489 and r163490, plus a two line change.
llvm-svn: 163512
r163489, "Take another crack at stabilizing the emission order of analyzer"
r163490, "Use isBeforeInTranslationUnitThan() instead of operator<."
llvm-svn: 163497
diagnostics without using FoldingSetNodeIDs. This is done
by doing a complete recursive comparison of the PathDiagnostics.
Note that the previous method of comparing FoldingSetNodeIDs did
not end up relying on unstable things such as pointer addresses, so
I suspect this may still have some issues on various buildbots because
I'm not sure if the true source of non-determinism has been eliminated.
The tests pass for me, so the only way to know is to commit this change
and see what happens.
llvm-svn: 163489
ObjCSelfInitChecker stashes information in the GDM to persist it across
function calls; it is stored in pre-call checks and retrieved post-call.
The post-call check is supposed to clear out the stored state, but was
failing to do so in cases where the call did not have a symbolic return
value.
This was actually causing the inappropriate cache-out from r163361.
Per discussion with Anna, we should never actually cache out when
assuming the receiver of an Objective-C message is non-nil, because
we guarded that node generation by checking that the state has changed.
Therefore, the only states that could reach this exact ExplodedNode are
ones that should have merged /before/ making this assumption.
r163361 has been reverted and the test case removed, since it won't
actually test anything interesting now.
llvm-svn: 163449
Previously, we'd just keep constraints around forever, which means we'd
never be able to merge paths that differed only in constraints on dead
symbols.
Because we now allow constraints on symbolic expressions, not just single
symbols, this requires changing SymExpr::symbol_iterator to include
intermediate symbol nodes in its traversal, not just the SymbolData leaf
nodes.
llvm-svn: 163444
RegionStoreManager was only treating a SymbolicRegion's symbol as live
if there was a binding referring to the region itself.
No test case because constraints are currently not being cleaned out
of the constraint manager at all (even if the symbol is legitimately dead).
llvm-svn: 163443
This is necessary because further analysis will assume that the SVal's
type matches the AST type. This caused a crash when trying to perform
a derived-to-base cast on a C++ object that had been new'd to be another
object type.
Yet another crash in PR13763.
llvm-svn: 163442
with at least one subtle bug in MacOSXKeyChainAPIChecker, where
calling the method was a substitute for assuming a symbolic value
was null (which is not the case).
We still keep ConstraintManager::getSymVal(), but we use that as
an optimization in SValBuilder and ProgramState::getSVal() to
constant-fold SVals. This is only if the ConstraintManager can
provide us with that information, which is no longer a requirement.
As part of this, introduce a default implementation of
ConstraintManager::getSymVal() which returns null.
For Checkers, introduce ConstraintManager::isNull(), which queries
the state to see if the symbolic value is constrained to be a null
value. It does this without assuming it has been implicitly constant
folded.
llvm-svn: 163428
When adding the next statement to the CoreEngine's work list, we take care
of all the special cases first. We certainly shouldn't be building
PostStmts with null statements (the diagnostics machinery assumes such
StmtPoints do not exist), and we should find out sooner if we're missing
a special case.
A refinement of r163402 that should help prevent further issues like PR13760.
llvm-svn: 163409
GCC destroys temporary objects more aggressively than clang, so this
results in incorrect behavior when compiling GCC Release builds.
We could avoid this issue under C++11 by preventing getAs from being
called when 'this' is an rvalue:
template<class ElemTy> const ElemTy *getAs() const & { ... }
template<class ElemTy> const ElemTy *getAs() const && = delete;
Unfortunately, we do not have compatibility macros for this behavior yet.
This will hopefully fix PR13760 and PR13762.
llvm-svn: 163402
(as was the case before this code was refactored). We also shouldn't
need to specially handle BinaryOperators since the eagerly-assume heuristic tags
such nodes.
llvm-svn: 163374
implicit pointer-to-boolean conversions in condition expressions. This would
result in inconsistent diagnostic emission between C and C++.
A consequence of this is now ConditionBRVisitor and TrackConstraintBRVisitor may
emit redundant diagnostics, for example:
"Assuming pointer value is null" (TrackConstraintBRVisitor)
"Assuming 'p' is null" (ConditionBRVisitor)
We need to reconcile the two, and perhaps prefer one over the other in some
cases.
llvm-svn: 163372
With some particularly evil casts, we can get an object whose dynamic type
is not actually a subclass of its static type. In this case, we won't even
find the statically-resolved method as a devirtualization candidate.
Rather than assert that this situation cannot occur, we now simply check
that the dynamic type is not an ancestor or descendant of the static type,
and leave it at that.
This error actually occurred analyzing LLVM: CallEventManager uses a
BumpPtrAllocator to allocate a concrete subclass of CallEvent
(FunctionCall), but then casts it to the actual subclass requested
(such as ObjCMethodCall) to perform the constructor.
Yet another crash in PR13763.
llvm-svn: 163367
A bizarre series of coincidences led us to generate a previously-seen
node in the middle of processing an Objective-C message, where we assume
the receiver is non-nil. We were assuming that such an assumption would
never "cache out" like this, and blithely went on using a null ExplodedNode
as the predecessor for the next step in evaluation.
Although the test case committed here is complicated, this could in theory
happen in other ways as well, so the correct fix is just to test if the
non-nil assumption results in an ExplodedNode we've seen before.
<rdar://problem/12243648>
llvm-svn: 163361
CXXDestructorCall now has a flag for when it is a base destructor call.
Other kinds of destructor calls (locals, fields, temporaries, and 'delete')
all behave as "whole-object" destructors and do not behave differently
from one another (specifically, in these cases we /should/ try to
devirtualize a call to a virtual destructor).
This was causing crashes in both our internal buildbot, the crash still
being tracked in PR13765, and some of the crashes being tracked in PR13763,
due to an assertion failure. (The behavior under -Asserts happened to be
correct anyway.)
Adding this knowledge also allows our DynamicTypePropagation checker to do
a bit less work; the special rules about virtual method calls during a
destructor only require extra handling during base destructors.
llvm-svn: 163348
While destructors will continue to not be inlined (unless the analyzer
config option 'c++-inlining' is set to 'destructors'), leaving them out
of the CFG is an incomplete model of the behavior of an object, and
can cause false positive warnings (like PR13751, now working).
Destructors for temporaries are still not on by default, since
(a) we haven't actually checked this code to be sure it's fully correct
(in particular, we probably need to be very careful with regard to
lifetime-extension when a temporary is bound to a reference,
C++11 [class.temporary]p5), and
(b) ExprEngine doesn't actually do anything when it sees a temporary
destructor in the CFG -- not even invalidate the object region.
To enable temporary destructors, set the 'cfg-temporary-dtors' analyzer
config option to '1'. The old -cfg-add-implicit-dtors cc1 option, which
controlled all implicit destructors, has been removed.
llvm-svn: 163264
If a region is bound to a symbolic value, we should track the symbol.
(The code I changed was not previously exercised by the regression
tests.)
llvm-svn: 163261
The problem is that the value of 'this' in a C++ member function call
should always be a region (or NULL). However, if the object is an rvalue,
it has no associated region (only a conjured symbol or LazyCompoundVal).
For now, we handle this in two ways:
1) Actually respect MaterializeTemporaryExpr. Before, it was relying on
CXXConstructExpr to create temporary regions for all struct values.
Now it just does the right thing: if the value is not in a temporary
region, create one.
2) Have CallEvent recognize the case where its 'this' pointer is a
non-region, and just return UnknownVal to keep from confusing clients.
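A sketch of the pattern in question (names are made up):
struct Size {
  int Width, Height;
  int area() const { return Width * Height; }
};

Size makeSize();

int test() {
  // Calling a member function on a struct rvalue: the object has no region
  // of its own (only a conjured symbol or LazyCompoundVal), so 'this' needs
  // the handling described above.
  return makeSize().area();
}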
The long-term problem is being tracked internally in <rdar://problem/12137950>,
but this makes many test cases pass.
llvm-svn: 163220
This turned out to have many implications, but what eventually seemed to
make it unworkable was the fact that we can get struct values (as
LazyCompoundVals) from other places besides return-by-value function calls;
that is, we weren't actually able to "treat all struct values as regions"
consistently across the entire analyzer core.
Hopefully we'll be able to come up with an alternate solution soon.
This reverts r163066 / 02df4f0aef142f00d4637cd851e54da2a123ca8e.
llvm-svn: 163218
SimpleSValBuilder processes a couple trivial identities, including 'x - x'
and 'x ^ x' (both 0). However, the former could appear with arguments of
floating-point type, and we weren't checking for that. This started
triggering an assert with r163069, which checks that a constant value is
actually going to be used as an integer or pointer.
llvm-svn: 163159
All clients of BasicValueFactory should be using QualTypes instead, and
indeed it seems they are. This caught the (fortunately harmless) bug
fixed in the previous commit.
No intended functionality change.
llvm-svn: 163069
The current logic would actually create a float- or double-sized signed
integer value of 1, which is not at all the same.
No test because the value would be swallowed by an Unknown as soon as it
gets added to or subtracted from the original value, but it enables the cleanup
in the next patch.
llvm-svn: 163068
This allows us to correctly symbolicate the fields of structs returned by
value, as well as get the proper 'this' value for when methods are called
on structs returned by value.
This does require a moderately ugly hack in the StoreManager: if we assign
a "struct value" to a struct region, that now appears as a Loc value being
bound to a region of struct type. We handle this by simply "dereferencing"
the struct value region, which should create a LazyCompoundVal.
This should fix recent crashes analyzing LLVM and on our internal buildbot.
<rdar://problem/12137950>
llvm-svn: 163066
Previously, we preferred to get a result type by looking at the callee's
declared result type. This allowed us to handle references, which are
represented in the AST as lvalues of their pointee type. (That is, a call
to a function returning 'int &' has type 'int' and value kind 'lvalue'.)
However, this results in us preferring the original type of a function
over a casted type. This is a problem when a function pointer is casted
to another type, because the conjured result value will have the wrong
type. AdjustedReturnValueChecker is supposed to handle this, but still
doesn't handle the case where there is no "original function" at all,
i.e. where the callee is unknown.
Now, we instead look at the call expression's value kind (lvalue, xvalue,
or prvalue), and adjust the expr's type accordingly. This will have no
effect when the function is inlined, and will conjure the value that will
actually be used when it is not.
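For example (a sketch):
int &pick(int &A, int &B);

void test(int X, int Y) {
  // The call has type 'int' and value kind 'lvalue' in the AST; conjuring the
  // result based on the value kind gives a usable lvalue even when pick() is
  // not inlined.
  pick(X, Y) = 5;
}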
This makes AdjustedReturnValueChecker /nearly/ unnecessary; unfortunately,
the cases where it would still be useful are where we need to cast the
result of an inlined function or a checker-evaluated function, and in these
cases we don't know what we're casting /from/ by the time we can do post-
call checks. In light of that, remove AdjustedReturnValueChecker, which
was already not checking quite a few calls.
llvm-svn: 163065
This is similar to how we divide up the StaticAnalyzer libraries to separate
core functionality to what is clearly associated with Frontend actions.
llvm-svn: 163050
More generally, this adds a new configuration option 'c++-inlining', which
controls which C++ member functions can be considered for inlining. This
uses the new -analyzer-config table, so the cc1 arguments will look like this:
... -analyzer-config c++-inlining=[none|methods|constructors|destructors]
Note that each mode implies that all the previous member function kinds
will be inlined as well; it doesn't make sense to inline destructors
without inlining constructors, for example.
The default mode is 'methods'.
llvm-svn: 163004
PathDiagnostics are actually profiled and uniqued independently of the
path on which the bug occurred. This is used to merge diagnostics that
refer to the same issue along different paths, as well as by the plist
diagnostics to reference files created by the HTML diagnostics.
However, there are two problems with the current implementation:
1) The bug description is included in the profile, but some
PathDiagnosticConsumers prefer abbreviated descriptions and some
prefer verbose descriptions. Fixed by including both descriptions in
the PathDiagnostic objects and always using the verbose one in the profile.
2) The "minimal" path generation scheme provides extra information about
which events came from macros that the "extensive" scheme does not.
This resulted not only in different locations for the plist and HTML
diagnostics, but also in diagnostics being uniqued in the plist output
but not in the HTML output. Fixed by storing the "end path" location
explicitly in the PathDiagnostic object, rather than trying to find the
last piece of the path when the diagnostic is requested.
This should hopefully finish unsticking our internal buildbot.
llvm-svn: 162965
Basically, do the correct thing to fix the XML generation error, rather
than making it even worse by unilaterally dereferencing a null pointer.
llvm-svn: 162964
(__builtin_* etc.) so that it isn't possible to take their address.
Specifically, introduce a new type to represent a reference to a builtin
function, and a new cast kind to convert it to a function pointer in the
operand of a call. Fixes PR13195.
llvm-svn: 162962
reanalyzed.
The policy on what to reanalyze should be in AnalysisConsumer with the
rest of visitation order logic.
There is no reason why ExprEngine needs to pass the Visited set to
CoreEngine, it can populate it itself.
llvm-svn: 162957
If the current path diagnostic does /not/ have files associated with it, we
were simply skipping on to the next diagnostic with 'continue'. But that
also skipped the close tag for the diagnostic's <dict> node.
Part of fixing our internal analyzer buildbot.
llvm-svn: 162939
This heuristic addresses the case when a pointer (or ref) is passed
to a function, which initializes the variable (or sets it to something
other than '0'). On the branch where the inlined function does not
set the value, we report use of undefined value (or NULL pointer
dereference). The access happens in the caller and the path
through the callee would get pruned away with regular path pruning. To
solve this issue, we previously disabled diagnostic pruning completely
on undefined and null pointer dereference checks, which entailed very
verbose diagnostics in most cases. Furthermore, not all of the
undef value checks had the diagnostic pruning disabled.
This patch implements the following heuristic: if we pass a pointer (or
ref) to the region (on which the error is reported) into a function and
its value is either undef or 'NULL' (and is a pointer), do not prune
the function.
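A sketch of the pattern (names are invented):
void init(int *Out);   // may leave *Out untouched on some path

int test() {
  int Value;       // uninitialized
  init(&Value);
  // If the inlined init() has a branch that never writes *Out, the read below
  // uses an undefined value; because &Value was passed into the call, the
  // path through init() is no longer pruned from the diagnostic.
  return Value;
}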
llvm-svn: 162863
a comma separated collection of key:value pairs (which are strings). This
allows a general way to provide analyzer configuration data from the command line.
No clients yet.
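Later patches described in these notes use the flag in this form (abbreviated
cc1 invocation, option values chosen only as examples):
... -analyzer-config ipa-always-inline-size=3,c++-inlining=methods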
llvm-svn: 162827
Specifically, CallEventManager::getCaller was looking at the call site for
an inlined call and trying to see what kind of call it was, but it only
checked for CXXConstructExprClass. (It's not using an isa<> here to avoid
doing three more checks on the statement class.)
This caused an unreachable when we actually did inline the constructor of a
temporary object.
PR13717
llvm-svn: 162792
When exiting a function, the analyzer looks for the last statement in the
function to see if it's a return statement (and thus bind the return value).
However, the search for "the last statement" was accepting statements that
were in implicitly-generated inlined functions (i.e. destructors). So we'd
go and get the statement from the destructor, and then say "oh look, this
function had no explicit return...guess there's no return value". And /that/
led to the value being returned being declared dead, and all our leak
checkers complaining.
llvm-svn: 162791
No test case since this is a debug option that we will never turn on by
default since it makes the leak checkers much less useful. (We'll only report
leaks at the end of analysis if -analyzer-purge=none.)
llvm-svn: 162772
This helper function (in the clang::ento::bugreporter namespace) may add more
than one visitor, but conceptually it's tracking a single use of a null or
undefined value and should do so as best it can.
Also, the BugReport parameter has been made a reference to underscore that
it is non-optional.
llvm-svn: 162720
As Anna pointed out to me offline, it's a little silly to walk backwards through
the graph to find the store site when BugReporter will do the exact same walk
as part of path diagnostic generation.
llvm-svn: 162719
Previously, if we were tracking stores to a variable 'x', and came across this:
x = foo();
...we would simply emit a note here and stop. Now, we'll step into 'foo' and
continue tracking the returned value from there.
<rdar://problem/12114689>
llvm-svn: 162718
The two callers are using this in order to be conservative, so let's just
clarify the information that's actually being provided here. This is not
related to inlining decisions in any way.
No functionality change.
llvm-svn: 162717
Because the CXXNewExpr appears after the CXXConstructExpr in the CFG, we don't
actually have the correct region to construct into at the time we decide
whether or not to inline. The long-term fix (discussed in PR12014) might be to
introduce a new CFG node (CFGAllocator) that appears before the constructor.
Tracking the short-term fix in <rdar://problem/12180598>.
llvm-svn: 162689
This allows us to better reason about status objects, like Clang's own
llvm::Optional (when its contents are trivially destructible), which are
often intended to be passed around by value.
We still don't inline constructors for temporaries in the general case.
<rdar://problem/11986434>
llvm-svn: 162681
This allows checkers (like the MallocChecker) to process the effects of the
bind. Previously, using a memory-allocating function (like strdup()) in an
initializer would result in a leak warning.
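For instance (assuming a POSIX strdup; the function below is only
illustrative):
#include <cstring>

char *copyName(const char *Name) {
  // The bind performed by this initializer now goes through checkBind, so
  // MallocChecker knows 'Copy' owns the allocation and no longer reports a
  // spurious leak at the declaration itself.
  char *Copy = strdup(Name);
  return Copy;   // ownership passes to the caller
}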
This does bend the expectations of checkBind a bit; since there is no
assignment expression, the statement being used is the initializer value.
In most cases this shouldn't matter because we'll use a PostInitializer
program point (rather than PostStmt) for any checker-generated nodes, though
we /will/ generate a PostStore node referencing the internal statement.
(In theory this could have funny effects if someone actually does an
assignment within an initializer; in practice, that seems like it would be
very rare.)
<rdar://problem/12171711>
llvm-svn: 162637
generated for a given diagnostic to another. Because PathDiagnostics
are specific to a given PathDiagnosticConsumer, store in
a FoldingSet a unique hash for a PathDiagnostic (that will be the same
for the same bug for different PathDiagnosticConsumers) that
stores a list of files generated. This can then be read by the
other PathDiagnosticConsumers.
This fixes breakage in the PLIST-HTML output.
llvm-svn: 162580
More generally, any time we try to track where a null value came from, we
should show if it came from a function. This usually isn't necessary if
the value is symbolic, but if the value is just a constant we previously
just ignored its origin entirely. Now, we'll step into the function and
recursively add a visitor to the returned expression.
<rdar://problem/12114609>
llvm-svn: 162563
With inlining, retain count checker starts tracking 'self' through the
init methods. The analyser results were too noisy if the developer
did not follow 'self = [super init]' pattern (which is common
especially in older code bases) - we reported self init anti-pattern AND
possible use-after-free. This patch teaches the retain count
checker to assume that [super init] does not fail when it's not consumed
by another expression. This silences the retain count warning that warns
about possibility of use-after-free when init fails, while preserving
all the other checking on 'self'.
llvm-svn: 162508
Until we have full support for pointers-to-members, we can at least
approximate some of their use by tracking null and non-null values.
We thus treat &A::m_ptr as a non-null void * symbol, and MemberPointer(0)
as a pointer-sized null constant.
This enables support for what is sometimes called the "safe bool" idiom,
demonstrated in the test case.
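The "safe bool" idiom referred to above looks roughly like this (class and
member names are invented):
class Status {
  typedef int Status::*SafeBool;
  int Dummy;
  bool Failed;
public:
  explicit Status(bool DidFail) : Dummy(0), Failed(DidFail) {}
  // Converts to a pointer-to-member rather than to bool, so the object can be
  // tested in a condition without permitting accidental integer conversions.
  operator SafeBool() const {
    if (Failed)
      return 0;              // modeled as a pointer-sized null constant
    return &Status::Dummy;   // modeled as a non-null void * symbol
  }
};

void test(const Status &S) {
  if (S) {
    // success path: the analyzer can now follow the null / non-null split
  }
}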
llvm-svn: 162495
This is trivial; the UserDefinedConversion always wraps a CXXMemberCallExpr
for the appropriate conversion function, so it's just a matter of
propagating that value to the CastExpr itself.
llvm-svn: 162494
A CXXDefaultArgExpr wraps an Expr owned by a ParmVarDecl belonging to the
called function. In general, ExprEngine and Environment ought to treat this
like a ParenExpr or other transparent wrapper expression, with the inside
expression evaluated first.
However, if we call the same function twice, we'd produce a CFG that contains
the same wrapped expression twice, and we're not set up to handle that. I've
added a FIXME to the CFG builder to come back to that, but meanwhile we can
at least handle expressions that don't need to be explicitly evaluated:
literals. This probably handles many common uses of default parameters:
true/false, null, etc.
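For illustration, a sketch of the common case this covers (hypothetical
function, not from the test suite):

  void dispose(void *ptr, bool warnIfNull = false) {
    (void)ptr;
    (void)warnIfNull;
  }

  void test(void *p) {
    dispose(p);        // the CXXDefaultArgExpr wraps the literal 'false'
    dispose(p, true);  // explicit argument; handled as before
  }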
Part of PR13385 / <rdar://problem/12156507>
llvm-svn: 162453
As part of this change, I discovered that a few of our tests were not testing
the RangeConstraintManager. Luckily all of those passed when I moved them
over to use that constraint manager.
llvm-svn: 162384
Also rename 'getCurrentBlockCounter()' to 'blockCount()'.
This ripples into a bunch of code simplifications; mostly aesthetic,
but makes the code a bit tighter.
llvm-svn: 162349
No need to have the "get", the word "conjure" is a verb too!
Getting a conjured symbol is the same as conjuring one up.
This shortening is largely cosmetic, but just this simple change
cleaned up a handful of lines, making them less verbose.
llvm-svn: 162348
Under -analyzer-ipa=basic-inlining, only C functions, blocks, and C++ static
member functions are inlined -- essentially, the calls that behave like simple
C function calls. This matches the behavior in Xcode 4.4.
C++ support still has some rough edges, and we don't want users to be worried
about them if they download and run their own checker. (In particular, the
massive number of false positives when analyzing LLVM comes from inlining
defensively-written code in contexts where more aggressive assumptions are
implicitly made. This problem is not unique to C++, but it is exacerbated by
the higher proportion of code that lives in header files in C++.)
The eventual goal is to be comfortable enough with C++ support (and simple
Objective-C support) to advance to -analyzer-ipa=inlining as the default
behavior. See the IPA design notes for more details.
llvm-svn: 162318
This reduces duplication across the Basic and Range constraint managers, and
keeps their internals free of dealing with the semantics of C++. It's still
a little unfortunate that the constraint manager is dealing with this at all,
but this is pretty much the only place to put it so that it will apply to all
symbolic values, even when embedded in larger expressions.
llvm-svn: 162313
By doing this in the constraint managers, we can ensure that ANY reference
whose value we don't know gets the effect, even if it's not a top-level
parameter.
llvm-svn: 162246
Generating a sink is significantly different behavior from generating a
normal node, and a simple boolean parameter can be rather opaque. Per
offline discussion with Anna, adding new generation methods is the
clearest way to communicate intent.
No functionality change.
llvm-svn: 162215
Forgetting to at least cast the result was giving us Loc/NonLoc problems
in SValBuilder (hitting an assertion). But the standard (both C and C++)
does actually guarantee that && and || will result in the actual values
1 and 0, typed as 'int' in C and 'bool' in C++, and we can easily model that.
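For illustration (hypothetical code):

  int test(int x, int y) {
    // In C the results below have type 'int' and are exactly 0 or 1 (in C++
    // they are 'bool'); the analyzer now binds those concrete values.
    int both = (x && y);
    int either = (x || y);
    return both + either;
  }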
PR13461
llvm-svn: 162209
Our current handling of 'throw' is all CFG-based: it jumps to a 'catch' block
if there is one and the function exit block if not. But this doesn't really
get the right behavior when a function is inlined: execution will continue on
the caller's side, which is always the wrong thing to do.
Even within a single function, 'throw' completely skips any destructors that
are to be run. This is essentially the same problem as @finally -- a CFGBlock
that can have multiple entry points, whose exit points depend on whether it
was entered normally or exceptionally.
Representing 'throw' as a sink matches our current (non-)handling of @throw.
It's not a perfect solution, but it's better than continuing analysis in an
inconsistent or even impossible state.
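For illustration, a sketch of why continuing past a 'throw' is wrong
(hypothetical code):

  void unlock() {}

  void test(bool fail) {
    struct Guard { ~Guard() { unlock(); } } guard;
    if (fail)
      // ~Guard() should run before the exception propagates, but the
      // CFG-based model skips it; a sink avoids analyzing that state.
      throw 1;
  }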
<rdar://problem/12113713>
llvm-svn: 162157
The CFG approximates @throw as a return statement, but that's not good
enough in inlined functions. Moreover, since Objective-C exceptions are
usually considered fatal, we should be suppressing leak warnings like we
do for calls to noreturn functions (like abort()).
The comments indicate that we were probably intending to do this all along;
it may have been inadvertently changed during a refactor at one point.
llvm-svn: 162156
This fixes several issues:
- removes an egregious hack where PlistDiagnosticConsumer would forward to HTMLDiagnosticConsumer,
but diagnostics wouldn't be generated consistently in the same way if PlistDiagnosticConsumer
was used by itself.
- emitting diagnostics to the terminal (using clang's diagnostic machinery) is no longer a special
case, just another PathDiagnosticConsumer. This also magically resolved some duplicate warnings,
as we now use PathDiagnosticConsumer's diagnostic pruning, which has scope for the entire translation
unit, not just the scope of a BugReporter (which is limited to a particular ExprEngine).
As an interesting side-effect, diagnostics emitted to the terminal also have their trailing "." stripped,
just like with diagnostics emitted to plists and HTML. This required some tests to be updated, but now
the tests have higher fidelity with what users will see.
There are some inefficiencies in this patch. We currently generate the report graph (from the ExplodedGraph)
once per PathDiagnosticConsumer, which is a bit wasteful, but that could be pulled up higher in the
logic stack. There is some intended duplication, however, as we now generate different PathDiagnostics (for the same issue)
for different PathDiagnosticConsumers. This is necessary to produce the diagnostics that a particular
consumer expects.
llvm-svn: 162028
and remove ASTContext reference (which was frequently bound to a dereferenced
null pointer) from the recursive lump of printPretty functions. In so doing,
fix (at least) one case where we intended to use the 'dump' mode, but that
failed because a null ASTContext reference had been passed in.
llvm-svn: 162011
This is the other half of C++11 [class.cdtor]p4 (the destructor side
was added in r161915). This also fixes an issue with post-call checks
where the 'this' value was already being cleaned out of the state, thus
being omitted from a reconstructed CXXConstructorCall.
llvm-svn: 161981
With reinterpret_cast, we can get completely unrelated types in a region
hierarchy together; this was resulting in CXXBaseObjectRegions being layered
directly on an (untyped) SymbolicRegion, whose symbol was from a completely
different type hierarchy. This was what was causing the internal buildbot to
fail.
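For illustration, a sketch of the kind of code involved (hypothetical types):

  struct Unrelated { int u; };
  struct Base { int x; };
  struct Derived : Base { int y; };

  int test(Unrelated *p) {
    // The cast relates two independent type hierarchies; accessing the Base
    // subobject used to layer a CXXBaseObjectRegion directly on a symbolic
    // region whose type is 'Unrelated'.
    Derived *d = reinterpret_cast<Derived *>(p);
    return d->x;
  }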
Reverts r161911, which merely masked the problem.
llvm-svn: 161960
Previously we were checking -analyzer-ipa=dynamic-bifurcate only, and
unconditionally inlining everything else that had an available definition,
even under -analyzer-ipa=inlining (but not under -analyzer-ipa=none).
llvm-svn: 161916
C++11 [class.cdtor]p4: When a virtual function is called directly or
indirectly from a constructor or from a destructor, including during
the construction or destruction of the class's non-static data members,
and the object to which the call applies is the object under
construction or destruction, the function called is the final overrider
in the constructor's or destructor's class and not one overriding it in
a more-derived class.
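A minimal illustration of the rule (hypothetical classes):

  struct Base {
    Base() { init(); }      // resolves to Base::init(), not Derived::init()
    virtual void init() {}
  };

  struct Derived : Base {
    virtual void init() {}  // never the target of the call above
  };

  void test() { Derived d; }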
llvm-svn: 161915
While there is now some duplication between SimpleCall and the CXXInstanceCall
sub-hierarchy, this is much better than copy-and-pasting the devirtualization
logic shared by both instance methods and destructors.
An unfortunate side effect is that there is no longer a single CallEvent type
that corresponds to "calls written as CallExprs". For the most part this is a
good thing, but the checker callback eval::Call still takes a CallExpr rather
than a CallEvent (since we're not sure if we want to allow checkers to
evaluate other kinds of calls). A mistake here will be caught by a cast<> in
CheckerManager::runCheckersForEvalCall.
No functionality change.
llvm-svn: 161809
Virtual base regions are never layered, so simply stripping them off won't
necessarily get you to the correct casted class. Instead, what we want is
the same logic for evaluating dynamic_cast: strip off base regions if possible,
but add new base regions if necessary.
llvm-svn: 161808
This can occur with multiple inheritance, which jumps from one parent to
the other, and with virtual inheritance, since virtual base regions always
wrap the actual object and can't be nested within other base regions.
This also exposed some incorrect logic for multiple inheritance: even if B
is known not to derive from C, D might still derive from both of them.
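For example, with a hypothetical hierarchy:

  struct B { virtual ~B() {} };
  struct C { virtual ~C() {} };
  struct D : public B, public C {};

  C *test(B *b) {
    // B does not derive from C, but the object may still be a D, which
    // derives from both; the resulting region jumps from one parent
    // subobject to the other.
    return dynamic_cast<C *>(b);
  }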
llvm-svn: 161798
...and /do/ strip CXXBaseObjectRegions when casting to a virtual base class.
This allows us to enforce the invariant that a CXXBaseObjectRegion can always
provide an offset for its base region if its base region has a known class
type, by only allowing virtual bases and direct non-virtual bases to form
CXXBaseObjectRegions.
This does mean some slight problems for our modeling of dynamic_cast, which
needs to be resolved by finding a path from the current region to the class
we're trying to cast to.
llvm-svn: 161797
This was causing a crash when we tried to re-apply a base object region to
itself. It probably also caused incorrect offset calculations in RegionStore.
PR13569 / <rdar://problem/12076683>
llvm-svn: 161710
This mostly affects pure virtual methods, but would also affect parent
methods defined inline in the header when analyzing the child's source file.
llvm-svn: 161709
when we don't need to split.
In some cases we know that a method cannot have a different
implementation in a subclass:
- the class is declared in the main file (private)
- all the method declarations (including the ones coming from super
classes) are in the main file.
This can be improved further, but might be enough for the heuristic.
(When we are too aggressive about splitting the state, efficiency suffers.
When we fail to split the state, coverage might suffer.)
llvm-svn: 161681
Both methods need to clear out existing bindings and provide a new default
binding. Originally KillStruct always provided UnknownVal as the default,
but it has allowed symbolic values for quite some time (for handling returned
structs in C).
No functionality change.
llvm-svn: 161637
This should speed up activities that need to access bindings by cluster,
such as invalidation and dead-bindings cleaning. In some cases all we save
is the cost of building the region cluster map, but other times we can
actually avoid traversing the rest of the store.
In casual testing, this produced a speedup of nearly 10% analyzing SQLite,
with /less/ memory used.
llvm-svn: 161636
This makes it faster to access and invalidate bindings with symbolic offsets
by only computing this information once.
No intended functionality change.
llvm-svn: 161635
An ASTContext's RecordLayoutInfo can only be used to look up offsets of
direct base classes, and we need the offset to make non-symbolic bindings
in RegionStore. This change makes sure that we have one layer of
CXXBaseObjectRegion for each base we are casting through.
This was causing crashes on an internal buildbot.
llvm-svn: 161621
This is an initial (unoptimized) version. We split the path when
inlining ObjC instance methods. On one branch we always assume that the
type information for the given memory region is precise. On the other we
assume that we don't have the exact type info. It is important to check
since the class could be subclassed and the method can be overridden. If
we always inline we can lose coverage.
Had to refactor some of the call eval functions.
llvm-svn: 161552
Unfortunately, generalized region printing is very difficult:
- ElementRegions are used both for casting and as actual elements.
- Accessing values through a pointer means going through an intermediate
SymbolRegionValue; symbolic regions are untyped.
- Referring to implicitly-defined variables like 'this' and 'self' could be
very confusing if they come from another stack frame.
We fall back to simply not printing the region name if we can't be sure it
will print well. This will allow us to improve in the future.
llvm-svn: 161512
The main blocker on this (besides the previous commit) was that
ScanReachableSymbols was not looking through LazyCompoundVals.
Once that was fixed, it's easy enough to clear out malloc data on return,
just like we do when we bind to a global region.
<rdar://problem/10872635>
llvm-svn: 161511
RegionStore currently uses a (Region, Offset) pair to describe the locations
of memory bindings. However, this representation breaks down when we have
regions like 'array[index]', where 'index' is unknown. We used to store this
as (SubRegion, 0); now we mark them specially as (SubRegion, SYMBOLIC).
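For illustration (hypothetical code):

  void test(int buf[], unsigned index, int value) {
    // 'index' is unknown, so the binding's location is now recorded as
    // (buf, SYMBOLIC) instead of being conflated with (buf, 0).
    buf[index] = value;
  }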
Furthermore, ProgramState::scanReachableSymbols depended on the existence of
a sub-region map, but RegionStore's implementation doesn't provide for such
a thing. Moving the store-traversing logic of scanReachableSymbols into the
StoreManager allows us to eliminate the notion of SubRegionMap altogether.
This fixes some particularly awkward broken test cases, now in
array-struct-region.c.
llvm-svn: 161510
Instead of sprinkling dynamic type info propagation throughout
ExprEngine, the added checker would add the more precise type
information on known APIs (e.g., ObjC alloc, new) and propagate
the type info in other cases (e.g., ObjC init method, casts (the second is
not implemented yet)).
Add handling of ObjC alloc, new and init to the checker.
llvm-svn: 161357
Like base constructors, delegating constructors require no further
processing in the CFGInitializer node.
Also, add PrettyStackTraceLoc to the initializer and destructor logic
so we can get better stack traces in the future.
llvm-svn: 161283
Because of this, we would previously emit NO path notes when a parameter
is constrained to null (because there are no stores). Now we show where we
made the assumption, which is much more useful.
llvm-svn: 161280
The visitor walks back through the ExplodedGraph as expected, but
it wasn't actually keeping track of when a value was assigned. This
meant that it only worked when the value was assigned when the variable
was defined.
Tests in the next commit (dependent on another change).
llvm-svn: 161276
In the following code, find the type of the symbolic receiver by
following it and updating the dynamic type info in the state when we
cast the symbol from id to MyClass *.
MyClass *a = [[self alloc] init];
return 5/[a testSelf];
llvm-svn: 161264
engine.
The code that was supposed to split the tie in a deterministic way is
not deterministic. Most likely one of the profile methods uses a
pointer. After this change we do finally get the consistent diagnostic
output. Testing this requires running the analyzer on large code bases
and diffing the results.
llvm-svn: 161224
This makes the diagnostic output order deterministic.
1) This makes order of text diagnostics consistent from run to run.
2) Previously, the nondeterminism also resulted in different bugs being
reported (from one run to another) with plist-html output.
llvm-svn: 161151
While usually we'd use a symbolic region rather than a straight-up Unknown,
we can still generate unknowns via array subscripts with symbolic indexes.
(And if this ever changes in the future, we still shouldn't crash.)
llvm-svn: 161059
This was causing a crash in our array-to-pointer logic, since the region
was clearly not an array.
PR13440 / <rdar://problem/11977113>
llvm-svn: 161051
This removes explicit checks for 'this' and 'self' from
Store::enterStackFrame. It also removes getCXXThisRegion() as a virtual
method on all CallEvents; it's now only implemented in the parts of the
hierarchy where it is relevant. Finally, it removes the option to ask
for the ParmVarDecls attached to the definition of an inlined function,
saving a recomputation of the result of getRuntimeDefinition().
No visible functionality change!
llvm-svn: 161017
Previously, we were only checking the origin expressions of inlined calls.
Checkers using the generic postCall and older postObjCMessage callbacks were
ignored. Now that we have CallEventManager, it is much easier to create
a CallEvent generically when exiting an inlined function, which we can then
use for post-call checks.
No test case because we don't (yet) have any checkers that depend on this
behavior (which is why it hadn't been fixed before now).
llvm-svn: 161005
- Retrieves the type of the object/receiver from the state.
- Binds self during stack setup.
- Only explores the path on which the method is inlined (no
bifurcation to explore the path on which the method is not inlined).
llvm-svn: 160991
This ensures that it is valid to reference-count any CallEvents, and we
won't accidentally try to reclaim a CallEvent that lives on the stack.
It also hides an ugly switch statement for handling CallExprs!
There should be no functionality change here.
llvm-svn: 160986
This allows us to get around the C++ "virtual constructor" problem
when we'd like to create a CallEvent from an ExplodedNode, an inlined
StackFrameContext, or another CallEvent. The solution has three parts:
- CallEventManager uses a BumpPtrAllocator to allocate CallEvent-sized
memory blocks. It also keeps a cache of freed CallEvents for reuse.
- CallEvents all have protected copy constructors, along with cloneTo()
methods that use placement new to copy into CallEventManager-managed
memory, vtables intact.
- CallEvents owned by CallEventManager are now wrapped in an
IntrusiveRefCntPtr. Going forwards, it's probably a good idea to create
ALL CallEvents through the CallEventManager, so that we don't accidentally
try to reclaim a stack-allocated CallEvent.
All of this machinery is currently unused but will be put into use shortly.
llvm-svn: 160983
We were treating this like a CXXDefaultArgExpr, but
SubstNonTypeTemplateParmExpr actually appears when a template is
instantiated, i.e. we have all the information necessary to evaluate it.
This allows us to inline functions like llvm::array_lengthof.
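A sketch of the pattern, modeled loosely on llvm::array_lengthof (hypothetical
code):

  template <class T, unsigned N>
  unsigned lengthof(T (&)[N]) {
    // After instantiation, N appears as a SubstNonTypeTemplateParmExpr
    // wrapping a known constant, which can now be evaluated when inlining.
    return N;
  }

  unsigned test() {
    int arr[4];
    return lengthof(arr);   // evaluates to 4
  }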
<rdar://problem/11949235>
llvm-svn: 160846
It's a good thing CallEvents aren't created all over the place yet.
I checked all the uses this time and the private copy constructor
/really/ shouldn't cause any more problems.
llvm-svn: 160845
instead of walking to the preceding PostStmt node. There are cases where the last evaluated
expression does not appear in the ExplodedGraph.
Fixes PR 13466.
llvm-svn: 160819
After discussion, the type-based dispatch was decided to be bad for
maintenance and made it very easy for subtle bugs to creep in. Instead,
we'll just be very careful when we do have to allocate these on the heap.
llvm-svn: 160817
Our BugReporter knows how to deal with implicit statements: it looks in
the ParentMap until it finds a parent with a valid location. However, since
initializers are not in the body of a constructor, their sub-expressions are
not in the ParentMap. That was easy enough to fix in AnalysisDeclContext.
...and then even once THAT was fixed, there's still an extra funny case
of Objective-C object pointer fields under ARC, which are initialized with
a top-level ImplicitValueInitExpr. To catch these cases,
PathDiagnosticLocation will now fall back to the start of the current
function if it can't find any other valid SourceLocations. This isn't great,
but it's miles better than a crash.
(All of this is only relevant when constructors and destructors are being
inlined, i.e. under -cfg-add-initializers and -cfg-add-implicit-dtors.)
llvm-svn: 160810
This workaround is fairly lame: we simulate the first element's constructor
and destructor and rely on the region invalidation to "initialize" the rest
of the elements.
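For illustration (hypothetical code):

  struct Item {
    Item() {}
    ~Item() {}
  };

  void test() {
    // Only items[0]'s constructor and destructor are simulated; the other
    // elements are covered by region invalidation.
    Item items[4];
  }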
llvm-svn: 160809
Previously we were using ParentMap and crawling through the parent DeclStmt.
This should be at least slightly cheaper (and is also more flexible).
No (intended) functionality change.
llvm-svn: 160807
Most of the logic here is fairly simple; the interesting thing is that
we now distinguish complete constructors from base or delegate constructors.
We also make sure to cast to the base class before evaluating a constructor
or destructor, since non-virtual base classes may behave differently.
This includes some refactoring of VisitCXXConstructExpr and VisitCXXDestructor
in order to keep ExprEngine.cpp as clean as possible (leaving the details for
ExprEngineCXX.cpp).
llvm-svn: 160806
This modifies BugReporter and friends to handle CallEnter and CallExitEnd
program points that came from implicit call CFG nodes (read: destructors).
This required some extra handling for nested implicit calls. For example,
the added multiple-inheritance test case has a call graph that looks like this:
testMultipleInheritance3
~MultipleInheritance
~SmartPointer
~Subclass
~SmartPointer
***bug here***
In this case we correctly notice that we started in an inlined function
when we reach the CallEnter program point for the second ~SmartPointer.
However, when we reach the next CallEnter (for ~Subclass), we were
accidentally re-using the inner ~SmartPointer call in the diagnostics.
Rather than guess if we saw the corresponding CallExitEnd based on the
contents of the active path, we now just ask the PathDiagnostic if there's
any known stack before popping off the top path.
(A similar issue could have occurred without multiple inheritance, but there
wasn't a test case for it.)
llvm-svn: 160804
- Some cleanup (the TODOs) will be done after ObjC method inlining is
complete.
- Simplified CallEvent::getDefinition not to require ISDynamicDispatch
parameter.
- Also addressed Jordan's comments from r160530.
llvm-svn: 160768
value by scanning the path, rather than assuming we have visited the '?:' operator
as a terminator (which sets a value indicating which expression to grab the
final ternary expression value from).
llvm-svn: 160760
As pointed out by Anna, we only differentiate between explicit message sends
This also adds support for ObjCSubscriptExprs, which are basically the same
as properties in many ways. We were already checking these, but not emitting
nice messages for them.
This depends on the llvm::PointerIntPair change in r160456.
llvm-svn: 160461
We will need to be able to easily reconstruct a CallEvent from an ExplodedNode
for diagnostic purposes, and that's exactly what factory functions are for.
CallEvent objects are small enough (four pointers and a SourceLocation) that
returning them through the stack is fairly cheap. Clients who just need to use
existing CallEvents can continue to do so using const references.
This uses the same sort of "kind-field-dispatch" as SVal, though most of the
nastiness is contained in the DISPATCH and DISPATCH_ARG macros at the end of
the file. (We can't use a template for this because member-pointers to base
class methods don't call derived-class methods even when casting to the
derived class. We can't use variadic macros because they're a C99 feature.)
llvm-svn: 160459
ObjC properties are handled through their semantic form of ObjCMessageExprs
and their wrapper PseudoObjectExprs, and have been for quite a while. The
syntactic ObjCPropertyRefExprs do not appear in the CFG and are not visited
by ExprEngine.
No functionality change.
llvm-svn: 160458
This code has been moved around multiple times, but seems to have been
obsolete ever since we started handling references like pointers.
llvm-svn: 160375
instead push the terminator for the branch down into the basic blocks of the subexpressions of '&&' and '||'
respectively. This eliminates some artificial control-flow from the CFG and results in a more
compact CFG.
Note that this patch only alters the branches 'while', 'if' and 'for'. This was complex enough for
one patch. The remaining branches (e.g., do...while) can be handled in a separate patch, but they
weren't immediately tackled because they were less important.
It is possible that this patch introduces some subtle bugs, particularly w.r.t. destructor placement.
I've tried to audit these changes, but it is also known that the destructor logic needs some refinement
in the area of '||' and '&&' regardless (i.e., there are known bugs).
llvm-svn: 160218
Previously we were using the static type of the base object to inline
methods, whether virtual or non-virtual. Now, we try to see if the base
object has a known type, and if so ask for its implementation of the method.
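A minimal sketch (hypothetical classes):

  struct Shape {
    virtual double area() const { return 0; }
  };

  struct Circle : Shape {
    virtual double area() const { return 3.14; }
  };

  double test() {
    Circle c;
    Shape &s = c;
    // The static type of 's' is Shape, but its dynamic type is known to be
    // Circle, so Circle::area() is the definition that gets inlined.
    return s.area();
  }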
llvm-svn: 160094
This is probably not so useful yet because it is not path-sensitive, though
it does try to show inlining with indentation.
This also adds a dump() method to CallEvent, which should be useful for
debugging.
llvm-svn: 160030
Also contains a number of tweaks to inlining that are necessary
for constructors and destructors. (I have this enabled on a private
branch, but it is very much unstable.)
llvm-svn: 160023
In order to accomplish this, we now build the callee's stack frame
as part of the CallEnter node, rather than the subsequent BlockEdge node.
This should not have any effect on perceived behavior or diagnostics.
This makes it safe to re-enable inlining of member overloaded operators.
llvm-svn: 160022
While this work is still fairly tentative (destructors are still left out of
the CFG by default), we now handle destructors in the same way as any other
calls, instead of just automatically trying to inline them.
llvm-svn: 160020
These are currently unused, but are intended to be used in lieu of PreStmt
and PostStmt when the call is implicit (e.g. an automatic object destructor).
This also modifies the Data1 field of ProgramPoints to allow storing any
pointer-sized value, as opposed to only aligned pointers. This is necessary
to store SourceLocations.
There is currently no BugReporter support for these; they should be skipped
over in any diagnostic output.
This commit also tags checkers that currently rely on function calls only
occurring at StmtPoints.
llvm-svn: 160019
This was a regression introduced during the CallEvent changes; a call to
FunctionDecl::hasBody was also being used to replace the decl found by
lookup with the actual definition. To keep from making this mistake again
(particularly if/when we start inlining Objective-C methods), this commit
adds a "getDefinition()" method to CallEvent, which should do the right
thing under any circumstances.
llvm-svn: 159940
We use LazyCompoundVals to avoid copying the contents of structs and arrays
around in the store, and when we need to pass a struct around that already
has a LazyCompoundVal we just use the original one. However, it's possible
that the first field of a struct may have a LazyCompoundVal of its own, and
we currently can't distinguish a LazyCompoundVal for the first element of a
struct from a LazyCompoundVal for the entire struct. In this case we should
just drop the optimization and make a new LazyCompoundVal that encompasses
the old one.
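For illustration, a sketch of the ambiguous case (hypothetical types):

  struct Inner { int x, y; };
  struct Outer { Inner first; int z; };

  void test(Outer o) {
    Inner i = o.first;  // LazyCompoundVal covering only the first field
    Outer  w = o;       // LazyCompoundVal covering the whole struct; both
                        // start at the same offset of the same region
  }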
PR13264 / <rdar://problem/11802440>
llvm-svn: 159866
very simple semantic analysis that just builds the AST; minor changes for the lexer
to pick up source locations I didn't think about before.
The comments AST is modeled along the ideas of the HTML AST: block and inline content.
* Block content is a paragraph, a command that has a paragraph as an argument,
or a verbatim command.
* Inline content is placed within some block. Inline content includes plain
text, inline commands and HTML as tag soup.
llvm-svn: 159790
This required moving the ctors for IntegerLiteral and FloatingLiteral out of
line, which shouldn't change anything, as they are usually called through Create
methods that are already out of line.
ASTContext::Deallocate has been a no-op for a long time; drop it from ASTVector
and make it independent from ASTContext.h
Pass the StorageAllocator directly to AccessedEntity so it doesn't need to
have a definition of ASTContext around.
llvm-svn: 159718
Our current inlining support (specifically RegionStore::enterStackFrame)
doesn't know that calls to overloaded operators may be calls to non-static
member functions, and that in these cases the first argument should be
treated as 'this'. This caused incorrect results and sometimes crashes.
The long-term fix will be to rewrite RegionStore::enterStackFrame to use
CallEvent and its subclasses, but for now we can just disable these
problematic calls by classifying them under a new CallEvent,
CXXMemberOperatorCall.
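For illustration (hypothetical type):

  struct Counter {
    int value;
    // A non-static member operator: in 'a + b' below, 'a' is the implicit
    // 'this' object, not an ordinary first argument, which is the case
    // RegionStore::enterStackFrame does not yet understand.
    Counter operator+(const Counter &rhs) const {
      Counter result = { value + rhs.value };
      return result;
    }
  };

  Counter test(Counter a, Counter b) {
    return a + b;   // now classified as a CXXMemberOperatorCall
  }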
llvm-svn: 159692
...and instead add an accessor. We're not using this today, but it's something
that should probably stay in the source for potential clients, and it doesn't
cost a lot. (ObjCPropertyAccess is only created on the stack, and right now
there's only ever one alive at a time.)
This reverts r159581 / commit 8e674e1da34a131faa7d43dc3fcbd6e49120edbe.
llvm-svn: 159595
we are encountering some scalability issues with memory usage. The
appropriate long term fix is to make the analysis more scalable, but
this will at least prevent the analyzer swapping when
analyzing very large functions.
llvm-svn: 159578
The preObjCMessage and postObjCMessage callbacks now take an ObjCMethodCall
argument, which can represent an explicit message send (ObjCMessageSend) or an
implicit message generated by a property access (ObjCPropertyAccess).
llvm-svn: 159559
Previously, the CallEvent subclass ObjCMessageInvocation was just a wrapper
around the existing ObjCMessage abstraction (over message sends and property
accesses). Now, we have an abstract CallEvent, ObjCMethodCall, with subclasses
ObjCMessageSend and ObjCPropertyAccess.
In addition to removing yet another wrapper object, this should make it easy
to add an ObjCSubscriptAccess call event soon.
llvm-svn: 159558
This involved refactoring some common pointer-escapes code onto CallEvent,
then having MallocChecker use those callbacks for whether or not to consider
a pointer's /ownership/ as escaping. This still needs to be pinned down, and
probably we want to make the new argumentsMayEscape() function a little more
discerning (content invalidation vs. ownership/metadata invalidation), but
this is a good improvement.
As a bonus, also remove CallOrObjCMessage from the source completely.
llvm-svn: 159557
This is intended to replace CallOrObjCMessage, and is eventually intended to be
used for anything that cares more about /what/ is being called than /how/ it's
being called. For example, inlining destructors should be the same as inlining
blocks, and checking __attribute__((nonnull)) should apply to the allocator
calls generated by operator new.
llvm-svn: 159554
Previously:
...the comment said DFS...
...the WorkList being instantiated said BFS...
...and the implementation was actually DFS...
...due to an unintentional change in 2010...
...and everything kept working anyway.
This fixes our std::deque implementation of BFS, but switches back to a
SmallVector-based implementation of DFS.
We should probably still investigate the ramifications of DFS vs. BFS,
especially for large functions (and especially when we hit our block path
limit), since this might completely change our memory use. It can also mask
some bugs and reveal others depending on when we halt analysis. But at least
we will not have this kind of little mistake creep in again.
llvm-svn: 159397
We don't handle exceptions yet, so we treat them as sinks. ExprEngine
hardcodes messages that are known to raise Objective-C exceptions like -raise,
but it was only checking for +raise:format: and +raise:format:arguments: on
NSException itself, not subclasses.
<rdar://problem/11724201>
llvm-svn: 159010
express library-level dependencies within Clang.
This is no more verbose really, and plays nicer with the rest of the
CMake facilities. It should also have no change in functionality.
llvm-svn: 158888
The default global placement new just returns the pointer it is given.
Note that other custom 'new' implementations with placement args are not
guaranteed to do this.
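For illustration (hypothetical code):

  #include <new>

  void test(void *buf) {
    // The default global placement new returns 'buf' unchanged, so 'p' can
    // be modeled as pointing into the provided storage.
    int *p = new (buf) int(42);
    *p = 7;
  }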
In addition, we need to invalidate placement args, since they may be updated by
the allocator function. (Also, right now we don't properly handle the
constructor inside a CXXNewExpr, so we need to invalidate the placement args
just so that callers know something changed!)
This invalidation is not perfect because CallOrObjCMessage doesn't support
CXXNewExpr, and all of our invalidation callbacks expect that if there's no
CallOrObjCMessage, the invalidation is happening manually (e.g. by a direct
assignment) and shouldn't affect checker-specific metadata (like malloc state);
hence the malloc test case in new-fail.cpp. But region values are now
properly invalidated, at least.
The long-term solution to this problem is to rework CallOrObjCMessage into
something more general, rather than the morass of branches it is today.
<rdar://problem/11679031>
llvm-svn: 158784
This happens in C++ mode right at the declaration of a struct VLA;
MallocChecker sees a bind and tries to see if it's an escaping bind.
It's likely that our handling of this is still incomplete, but it fixes a
crash on valid without disturbing anything else for now.
llvm-svn: 158587
This does not actually give us the right behavior for reinterpret_cast
of references. Reverting so I can think about it some more.
This reverts commit 50a75a6e26a49011150067adac556ef978639fe6.
llvm-svn: 158341
These casts only appear in very well-defined circumstances, in which the
target of a reinterpret_cast or a function formal parameter is an lvalue
reference. According to the C++ standard, the following are equivalent:
reinterpret_cast<T&>( x)
*reinterpret_cast<T*>(&x)
[expr.reinterpret.cast]p11
llvm-svn: 158338
While collections containing nil elements can still be iterated over in an
Objective-C for-in loop, the most common Cocoa collections -- NSArray,
NSDictionary, and NSSet -- cannot contain nil elements. This checker adds
that assumption to the analyzer state.
This was the cause of some minor false positives concerning CFRelease calls
on objects in an NSArray.
llvm-svn: 158319
CmpRuns.py can be used to compare issues from different analyzer runs.
Since it uses the issue line number to decide whether two issues are the same, adding a new
line to the beginning of a file makes all issues in the file reported as
new.
The hash will be an opaque value which could be used (along with the
function name) by CmpRuns to identify the same issues. This way, we only
fail to identify the same issue from two runs if the function it appears
in changes (not perfect, but much better than nothing).
llvm-svn: 158180
I falsely assumed that the memory spaces are equal when we reach this
point; they might not be when the memory space of one or more of them is the
stack or Unknown. We don't want a region from the Heap space to alias
something in another memory space.
llvm-svn: 158165
Add a concept of symbolic memory region belonging to heap memory space.
When comparing symbolic regions allocated on the heap, assume that they
do not alias.
Use the symbolic heap region to suppress a common false positive pattern in
the malloc checker: code that relies on malloc not returning memory that
aliases other malloc allocations or the stack.
llvm-svn: 158136
In addition, I've made the pointer and reference typedefs 'void' rather than T*
just so they can't get misused. I would've omitted them entirely but
std::distance likes them to be there even if it doesn't use them.
This rolls back r155808 and r155869.
Review by Doug Gregor incorporating feedback from Chandler Carruth.
llvm-svn: 158104
When we timeout or exceed a max number of blocks within an inlined
function, we retry with no inlining starting from a node right before
the CallEnter node. We assume the state of that node is the state of the
program before we start evaluating the call. However, the node pruning
removes this node as unimportant.
Teach the node pruning to keep the predecessors of the call enter nodes.
llvm-svn: 157860
improved the pruning heuristics. The current heuristics are pretty good, but they make diagnostics
for uninitialized variable warnings particularly useless in some cases.
llvm-svn: 157734
a horrible bug in GetLazyBindings where we falsely appended a field suffix when traversing 3 or more
layers of lazy bindings. I don't have a reduced test case yet, but I have added the original source
to an internal regression test suite. I'll see about coming up with a reduced test case.
Fixes <rdar://problem/11405978> (for real).
llvm-svn: 156580