With the assurance that the trimmed graph does not contain cycles,
this patch is safe (with a few tweaks) and provides the performance
boost it was intended to provide.
Part of performance work for <rdar://problem/13433687>.
llvm-svn: 177469
Having a trimmed graph with no cycles (a DAG) is much more convenient for
trying to find shortest paths, which is exactly what BugReporter needs to do.
Part of the performance work for <rdar://problem/13433687>.
llvm-svn: 177468
This fixes a crash when analyzing LLVM that was exposed by r177220 (modeling of
trivial copy/move assignment operators).
When we look up a lazy binding for 'Builder', we see the direct binding of
Loc at offset 0. Previously we believed that binding, which led to a crash;
now we reject it because the types do not match.
llvm-svn: 177453
The whole reason we were doing a BFS in the first place is because an
ExplodedGraph can have cycles. Unfortunately, my removeErrorNode "update"
doesn't work at all if there are cycles.
I'd still like to be able to avoid doing the BFS every time, but I'll come
back to it later.
This reverts r177353 / 481fa5071c203bc8ba4f88d929780f8d0f8837ba.
llvm-svn: 177448
Splitting the graph trimming and the path-finding (r177216) already
recovered quite a bit of performance lost to increased suppression.
We can still do better by also performing the reverse BFS up front
(needed for shortest-path-finding) and only walking the shortest path
for each report. This does mean we have to walk back up the path and
invalidate all the BFS numbers if the report turns out to be invalid,
but it's probably still faster than redoing the full BFS every time.
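As a rough sketch (illustrative only; not the actual BugReporter
interfaces), the up-front pass computes each node's distance to the error
node once, so every report can then walk "downhill" along a shortest path:
#include <map>
#include <queue>
#include <vector>
struct Node { std::vector<Node *> preds; };
std::map<Node *, unsigned> reverseBFS(Node *errorNode) {
  std::map<Node *, unsigned> dist; // distance from each node to the error
  std::queue<Node *> work;
  dist[errorNode] = 0;
  work.push(errorNode);
  while (!work.empty()) {
    Node *n = work.front(); work.pop();
    for (Node *p : n->preds)
      if (dist.insert({p, dist[n] + 1}).second)
        work.push(p);
  }
  return dist; // a shortest path always steps to a node whose distance
}              // is one less than the current node's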
More performance work for <rdar://problem/13433687>
llvm-svn: 177353
r175234 allowed the analyzer to model trivial copy/move constructors as
an aggregate bind. This commit extends that to trivial assignment
operators as well. Like the last commit, one of the motivating factors here
is not warning when the right-hand object is partially-initialized, which
can have legitimate uses.
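For illustration, the newly handled pattern looks roughly like this
(hypothetical example):
struct Point { int x, y; };
void test() {
  Point a;
  a.x = 1;  // 'a' is deliberately only partially initialized
  Point b;
  b = a;    // trivial assignment operator: now modeled as an aggregate
}           // bind, with no warning about copying the uninitialized a.y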
<rdar://problem/13405162>
llvm-svn: 177220
When we generate a path diagnostic for a bug report, we have to take the
full ExplodedGraph and limit it down to a single path. We do this in two
steps: "trimming", which limits the graph to all the paths that lead to
this particular bug, and "creating the report graph", which finds the
shortest path through the trimmed graph to any error node.
With BugReporterVisitor false positive suppression, this becomes more
expensive: it's possible for some paths through the trimmed graph to be
invalid (i.e. likely false positives) but others to be valid. Therefore
we have to run the visitors over each path in the graph until we find one
that is valid, or until we've ruled them all out. This can become quite
expensive.
This commit separates out graph trimming from creating the report graph,
performing the first only once per bug equivalence class and the second
once per bug report. It also cleans up that portion of the code by
introducing some wrapper classes.
This seems to recover most of the performance regression described in my
last commit.
<rdar://problem/13433687>
llvm-svn: 177216
...in favor of this typedef:
typedef llvm::DenseMap<const ExplodedNode *, const ExplodedNode *>
    InterExplodedGraphMap;
Use this everywhere the previous class and typedef were used.
Took the opportunity to ArrayRef-ize ExplodedGraph::trim while I'm at it.
No functionality change.
llvm-svn: 177215
I removed this check in the recursion->iteration commit, but forgot that
generatePathDiagnostic may be called multiple times if there are multiple
PathDiagnosticConsumers.
llvm-svn: 177214
Fixes a FIXME, improves dead symbol collection, and suppresses a false
positive that resulted from reusing the same symbol for the simulation of
two calls to the same function. Fixing this led to two possible false
negatives in the CString checker; since the checker is still alpha and the
solution will not require reverting this commit, the tests have been moved
to a FIXME section.
llvm-svn: 177206
The previous generatePathDiagnostic() was intended to be tail-recursive,
restarting and trying again if a report was marked invalid. However:
(1) this leaked all the cloned visitors, which weren't being deleted, and
(2) this wasn't actually tail-recursive because some local variables had
non-trivial destructors.
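For illustration, (2) bites even in simple cases like this (hypothetical
example):
#include <string>
int generate(int depth);
int generateWrapper(int depth) {
  std::string note = "retrying"; // non-trivial destructor must run after
  return generate(depth + 1);    // the call returns, so this is not a
}                                // tail call and stack frames pile up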
This was causing us to overflow the stack on inputs with large numbers of
reports in the same equivalence class, such as sqlite3.c. Being iterative
at least prevents us from blowing out the stack, but doesn't solve the
performance issue: suppressing thousands (yes, thousands) of paths in the
same equivalence class is expensive. I'm looking into that now.
<rdar://problem/13423498>
llvm-svn: 177189
We discovered that sqlite3.c currently has 2600 reports in a single
equivalence class; it would be good to know if this is a recent
development or what.
(For the curious, the different reports in an equivalence class represent
the same bug found along different paths. When we're suppressing false
positives, we need to go through /every/ path to make sure there isn't a
valid path to a bug. This is a flaw in our after-the-fact suppression,
made worse by the fact that that function isn't particularly optimized.)
llvm-svn: 177188
In the test case below, the value V is not constrained to 0 in ErrorNode but it is in node N.
So we used to fail to register the Suppression visitor.
We also need to change the way we determine when the visitor should kick in,
because node N belongs to the ExplodedGraph and might not be on the
BugReporter path that the visitor sees. Instead of trying to match the node,
turn the visitor on when we see the last node in which the symbol is '0'.
llvm-svn: 177121
When BugReporter tracks C++ references involved in a null pointer violation, we
want to differentiate between a null reference and a reference to a null pointer. In the
first case, we want to track the region for the reference location; in the second, we want
to track the null pointer.
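Roughly, the two cases being distinguished (hypothetical example):
void test(int *p) {
  if (p) return;
  int &r = *p;        // null reference: track the reference's location
  int *const &q = p;  // reference to a null pointer: track the pointer
  (void)r; (void)q;
}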
In addition, the core creates CXXTempObjectRegion to represent the location of the
C++ reference, so teach FindLastStoreBRVisitor about it.
This helps null pointer suppression to kick in.
(Patch by Anna and Jordan.)
llvm-svn: 176969
r176737 fixed bugreporter::trackNullOrUndefValue to find nodes for an lvalue
even if the rvalue node had already been collected. This commit extends that
to call statement nodes as well, so that if a call is contained within
implicit casts we can still track the return value.
No test case because node reclamation is extremely finicky (dependent on
how the AST and CFG are built, and then on our current reclamation rules,
and /then/ on how many nodes were generated by the analyzer core and the
current set of checkers). I consider this a low-risk change, though, and
it will only happen in cases of reclamation when the rvalue node isn't
available.
<rdar://problem/13340764>
llvm-svn: 176829
The visitor used to assume that the value it's tracking is null in the first
node it examines. This does not hold if we register the
suppress-inlined-defensive-checks (IDC) visitor while traversing the path in
another visitor (such as FindLastStoreBRVisitor): when we restart with the
IDC visitor, the invariant is broken because the symbol we are tracking no
longer exists at that point.
I had to pass the ErrorNode when creating the IDC visitor because, in some
cases, node N is neither the error node nor visible along the path (we had
not finalized the path at that point and are dealing with the raw
ExplodedGraph).
We should revisit the other visitors, which might not be aware that they can
get nodes later on the path than their trigger point.
This suppresses a number of inline defensive checks in JavaScriptCore.
llvm-svn: 176756
r176010 introduced the notion of "interesting" lvalue expressions, whose
nodes are guaranteed never to be reclaimed by the ExplodedGraph. This was
used in bugreporter::trackNullOrUndefValue to find the region that contains
the null or undef value being tracked.
However, the /rvalue/ nodes (i.e. the loads from these lvalues that produce
a null or undef value) /are/ still being reclaimed, and if we couldn't
find the node for the rvalue, we just gave up. This patch changes that so
that we look for the node for either the rvalue or the lvalue -- preferring
the former, since it lets us fall back to value-only tracking in cases
where we can't get a region, but allowing the latter as well.
<rdar://problem/13342842>
llvm-svn: 176737
Previously, ReturnVisitor waited to suppress a null return path until it
had found the inlined "return" statement. Now, it checks up front whether
the return value was NULL, and suppresses the warning right away if so.
We still have to wait until generating the path notes to invalidate the bug
report, or counter-suppression will never be triggered. (Counter-suppression
happens while generating path notes, but the generation won't happen for
reports already marked invalid.)
This isn't actually an issue today because we never reclaim nodes for
top-level statements (like return statements), but it could be an issue
some day in the future. (But, no expected behavioral change and no new
test case.)
llvm-svn: 176736
The second modification does not lead to any visible result but,
theoretically, is what we should have been looking at to begin with, since
we are checking whether the node was assumed to be null in an inlined
function.
llvm-svn: 176576
Inlining brought a few "null pointer use" false positives, which occur because
the callee defensively checks if a pointer is NULL, whereas the caller knows
that the pointer cannot be NULL in the context of the given call.
This is a first attempt to silence these warnings by tracking the symbolic value
along the execution path in the BugReporter. The new visitor finds the node
in which the symbol was first constrained to NULL. If the node belongs to
a function on the active stack, the warning is reported, otherwise, it is
suppressed.
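The basic shape of the pattern being suppressed (hypothetical example):
int callee(int *p) {
  if (!p)      // defensive check: creates a path where p == 0
    return 0;
  return *p;
}
int caller(int *p) {
  callee(p);   // after inlining, the p == 0 path escapes into the caller
  return *p;   // previously warned here; now suppressed, since p was first
}              // constrained to null in callee, off the active stack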
There are several areas for follow up work, for example:
- How do we differentiate the cases where the first check is followed by
another one that does happen on the active stack?
Also, this only silences a fraction of null pointer use warnings. For example, it
does not do anything for the cases where NULL was assigned inside a callee.
llvm-svn: 176402
Previously we were assuming that we'd never ask for the sub-region bindings
of a bitfield, since a bitfield cannot have subregions. However,
unification of code paths has made that assumption invalid. While we could
take advantage of this by just checking for the single possible binding,
it's probably better to do the right thing, so that if/when we someday
support unions we'll do the right thing there, too.
This fixes a handful of false positives in analyzing LLVM.
<rdar://problem/13325522>
llvm-svn: 176388
Most map types have an operator[] that inserts a new element if the key
isn't found, then returns a reference to the value slot so that you can
assign into it. However, if the value type is a pointer, it will be
initialized to null. This is usually no problem.
However, if the user /knows/ the map contains a value for a particular key,
they may just use it immediately:
// From ClangSACheckersEmitter.cpp
recordGroupMap[group]->Checkers
In this case the analyzer reports a null dereference on the path where the
key is not in the map, even though the user knows that path is impossible
here. They could silence the warning by adding an assertion, but that means
splitting up the expression and introducing a local variable. (Note that
the analyzer has no way of knowing that recordGroupMap[group] will return
the same reference if called twice in a row!)
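For reference, the workaround the user would otherwise need looks like this
(types and names are hypothetical):
#include <cassert>
#include <map>
#include <string>
#include <vector>
struct GroupInfo { std::vector<std::string> Checkers; }; // hypothetical
void addChecker(std::map<std::string, GroupInfo *> &recordGroupMap,
                const std::string &group, const std::string &checker) {
  GroupInfo *info = recordGroupMap[group];        // split the expression...
  assert(info && "group is known to be present"); // ...just for an assert
  info->Checkers.push_back(checker);
}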
We already have logic that says a null dereference has a high chance of
being a false positive if the null came from an inlined function. This
patch simply extends that to references whose rvalues are null as well,
silencing several false positives in LLVM.
<rdar://problem/13239854>
llvm-svn: 176371
By returning the (key, value) binding pairs, we save lookups afterwards.
This also enables further work later on.
No functionality change.
llvm-svn: 176230
Consider this case:
int *p = 0;
p = getPointerThatMayBeNull();
*p = 1;
If we inline 'getPointerThatMayBeNull', we might know that the value of 'p'
is NULL, and thus emit a null pointer dereference report. However, we
usually want to suppress such warnings as error paths, and we do so by using
FindLastStoreBRVisitor to see where the NULL came from. In this case, though,
because 'p' was NULL both before and after the assignment, the visitor
would decide that the "last store" was the initialization, not the
re-assignment.
This commit changes FindLastStoreBRVisitor to consider all PostStore nodes
that assign to this region. This still won't catch changes made directly
by checkers if they re-assign the same value, but it does handle the common
case in user-written code and will trigger ReturnVisitor's suppression
machinery as expected.
<rdar://problem/13299738>
llvm-svn: 176201
This enables constructor inlining for types with non-trivial destructors.
The plan is to enable destructor inlining within the next month, but that
needs further verification.
<rdar://problem/12295329>
llvm-svn: 176200
This potentially weakens a performance optimization that throws away
PreStmtPurgeDeadSymbols nodes. I'll investigate the performance impact
soon and see if we need something better.
llvm-svn: 176149
This is essentially the same problem as r174031: a lazy binding for the first
field of a struct may stomp on an existing default binding for the
entire struct. Because of the way RegionStore is set up, we can't help
but lose the top-level binding, but then we need to make sure that accessing
one of the other fields doesn't come back as Undefined.
In this case, RegionStore is now correctly detecting that the lazy binding
we have isn't the right type, but then failing to follow through on the
implications of that: we don't know anything about the other fields in the
aggregate. The fix adds a check, when searching for other kinds of default
values, for a lazy binding we previously rejected; if one is found, we
return a symbolic value instead of Undefined.
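The shape of the problem, with hypothetical types:
struct Inner { int a; };
struct Outer { Inner first; int second; };
Outer getOuter();
int test() {
  Outer o = getOuter(); // default binding: all of 'o' is one symbol
  Inner i = o.first;    // lazy binding at offset 0 stomps that binding
  (void)i;
  return o.second;      // must now come back symbolic, not Undefined
}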
The long-term fix for this is probably a new Store model; see
<rdar://problem/12701038>.
Fixes <rdar://problem/13292559>.
llvm-svn: 176144
Fixes PR15358 and <rdar://problem/13295437>.
Along the way, shorten path diagnostics that say "Variable 'x'" to just
be "'x'". By the context, it is obvious that we have a variable,
and so this just consumes text space.
llvm-svn: 176115
Normally, we need to look through derived-to-base casts when creating
temporary object regions (added in r175854). However, if the temporary
is a pointer (rather than a struct/class instance), we need to /preserve/
the base casts that have been applied.
This also ensures that we really do create a new temporary region when
we need to: MaterializeTemporaryExpr and lvalue CXXDefaultArgExprs.
Fixes PR15342, although the test case doesn't include the crash because
I couldn't isolate it.
llvm-svn: 176069
These nodes are never consulted by any analyzer client code; they are used
only by the machinery for removing dead bindings. Once successor nodes have
been generated, they can be safely removed.
This greatly reduces the number of nodes generated in some cases, lowering
the memory regression introduced by r176010 when analyzing Sema.cpp from
14% to 2%.
llvm-svn: 176050
r175026 added support for default values, but didn't take reference
parameters into account, which expect the default argument to be an
lvalue. Use createTemporaryRegionIfNeeded if we can evaluate the default
expr as an rvalue but the expected result is an lvalue.
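A minimal sketch of the fixed case (hypothetical):
void report(const int &code = 42); // reference param with default argument
void test() {
  report(); // the default expr evaluates to an rvalue; we now create a
}           // temporary region so the reference has an lvalue to bind to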
Fixes the most recent report of PR12915. The original report predates
default argument support, so that can't be it.
llvm-svn: 176042
While RegionStore checks to make sure casts on TypedValueRegions are valid,
it does not do the same for SymbolicRegions, which do not have perfect type
info anyway. Additionally, MemRegion::getAsOffset does not take a
ProgramState, so it can't use dynamic type info to determine a better type
for the regions. (This could also be dangerous if the type of a super-region
changes!)
Account for this by checking that a base object region is valid on top of a
symbolic region, and falling back to "symbolic offset" mode if not.
Fixes PR15345.
llvm-svn: 176034
This was triggering assertion failures when analyzing the LLVM codebase. This
is fallout from r175988.
I've got delta chewing away on a test case, but I wanted the fix to go
in now.
llvm-svn: 176011
r175988 modified the ExplodedGraph trimming algorithm to retain all
nodes for "lvalue" expressions. This patch refines that notion to
only "interesting" expressions that would be used for diagnostics.
llvm-svn: 176010
This required more changes than I originally expected:
- ObjCIvarRegion implements "canPrintPretty" et al
- DereferenceChecker indicates the null pointer source is an ivar
- bugreporter::trackNullOrUndefValue() uses an alternate algorithm
to compute the location region to track by scouring the ExplodedGraph.
This allows us to get the actual MemRegion for variables, ivars,
fields, etc. We only hand construct a VarRegion for C++ references.
- ExplodedGraph no longer drops nodes for expressions that are marked
'lvalue'. This is to facilitate the logic in the previous bullet.
This may lead to a slight increase in size in the ExplodedGraph,
which I have not measured, but it is likely not to be a big deal.
I have validated each of the changed plist outputs.
Fixes <rdar://problem/12114812>
llvm-svn: 175988
Use Optional<CFG*> where invalid states were needed previously. In the one case
where that's not possible (beginAutomaticObjDtorsInsert) just use a dummy
CFGAutomaticObjDtor.
Thanks for the help from Jordan Rose & discussion/feedback from Ted Kremenek
and Doug Gregor.
Post commit code review feedback on r175796 by Ted Kremenek.
llvm-svn: 175938
This Decl shouldn't be the canonical Decl; it should be the Decl used by
the CXXBaseSpecifier in the subclass. Unfortunately, that means continuing
to call getCanonicalDecl() in all comparisons.
This fixes MemRegion::getAsOffset's use of ASTRecordLayout when redeclarations
are involved.
llvm-svn: 175913
Previously, we had the decisions about inlining spread out
over multiple functions.
In addition to the refactor, this commit ensures
that we will always inline BodyFarm functions as long as the Decl
is available. This fixes false positives due to those functions
not being inlined when no or minimal inlining is enabled such (as
shallow mode).
llvm-svn: 175857
This is a follow-up to r175830, which made sure a temporary object region
created for, say, a struct rvalue matched up with the initial bindings
being stored into it. This does the same for the case in which the AST
actually tells us that we need to create a temporary via a
MaterializeTemporaryExpr. I've unified the two code paths and moved a static
helper function onto ExprEngine.
This also caused a bit of test churn, sending us back to describing
temporary regions without a 'const' qualifier. This seems acceptable; it's
our behavior from a few months ago.
<rdar://problem/13265460> (part 2)
llvm-svn: 175854
When creating a temporary region (say, when a struct rvalue is used as
the base of a member expr), make sure we account for any derived-to-base
casts. We don't actually record these in the LazyCompoundVal that
represents the rvalue, but we need to make sure that the temporary region
we're creating (a) matches the bindings, and (b) matches its expression.
Most of the time this will do exactly the same thing as before, but it
fixes spurious "garbage value" warnings introduced in r175234 by the use
of lazy bindings to model trivial copy constructors.
<rdar://problem/13265460>
llvm-svn: 175830
This allows MemRegion and MemRegionManager to avoid asking over and over
again whether a class is a virtual base or a non-virtual base.
Minor optimization/cleanup; no functionality change.
llvm-svn: 175716
- When deciding if we can reuse a lazy binding, make sure to check if there
are additional bindings in the sub-region.
- When reading from a lazy binding, don't accidentally strip off casts or
base object regions. This slows down lazy binding reading a bit but is
necessary for type sanity when treating one class as another.
A bit of minor refactoring allowed these two checks to be unified in a nice
early-return-using helper function.
<rdar://problem/13239840>
llvm-svn: 175703
RegionStoreManager::getInterestingValues() returns a pointer to a
std::vector that lives inside a DenseMap, which is constructed on demand.
However, constructing one such value can lead to constructing another
value, which will invalidate the reference created earlier.
Fixed by delaying the new entry creation until the function returns.
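A generic sketch of the hazard and the fix (assuming LLVM's DenseMap; not
the actual analyzer code):
#include "llvm/ADT/DenseMap.h"
#include <vector>
llvm::DenseMap<int, std::vector<int>> Cache;
std::vector<int> computeValues(int Key); // may itself populate Cache
const std::vector<int> &getValues(int Key) {
  // BAD: taking a reference into the map first, then doing work that can
  // insert into Cache, may reallocate the map and leave 'Slot' dangling:
  //   std::vector<int> &Slot = Cache[Key];
  //   Slot = computeValues(Key);
  // FIXED: compute first, create the entry last.
  std::vector<int> Result = computeValues(Key);
  return Cache[Key] = std::move(Result);
}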
llvm-svn: 175582
If a base object is at a 0 offset, RegionStoreManager may find a lazy
binding for the entire object, then try to attach a FieldRegion or
grandparent CXXBaseObjectRegion on top of that (skipping the intermediate
region). We now preserve as many layers of base object regions as necessary
to make the types match.
<rdar://problem/13239840>
llvm-svn: 175556
This just adds a very simple check that if a DerivedToBase CastExpr is
operating on a value with known C++ object type, and that type is not the
base type specified in the AST, then the cast is invalid and we should
return UnknownVal.
In the future, perhaps we can have a checker that specifies that this is
illegal, but we still shouldn't assert even if the user turns that checker
off.
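A hypothetical way such a cast can arise:
struct Base { int b; };
struct Derived : Base { int d; };
struct Unrelated { int u; };
Base *test(Unrelated *v) {
  Derived *d = reinterpret_cast<Derived *>(v); // dynamic type is Unrelated
  return d; // implicit DerivedToBase cast on a value of the wrong type:
}           // we now return UnknownVal rather than asserting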
PR14872
llvm-svn: 175239
...after a host of optimizations related to the use of LazyCompoundVals
(our implementation of aggregate binds).
Originally applied in r173951.
Reverted in r174069 because it was causing hangs.
Re-applied in r174212.
Reverted in r174265 because it was /still/ causing hangs.
If this needs to be reverted again, it will be punted far into the future.
llvm-svn: 175234
Previously, we were scanning the current store. Now, we properly scan the
store that the LazyCompoundVal came from, which may have very different
live symbols.
llvm-svn: 175232
Previously, whenever we had a LazyCompoundVal, we crawled through the
entire store snapshot looking for bindings within the LCV's region. Now, we
just ask for the subregion bindings of the lazy region and only visit those.
This is an optimization (so no test case), but it may allow us to clean up
more dead bindings than we were previously.
llvm-svn: 175230
This is going to be used in the next commit.
While I'm here, tighten up assumptions about symbolic offset
BindingKeys, and make offset calculation explicitly handle all
MemRegion kinds.
No functionality change.
llvm-svn: 175228
In C++, constants captured by lambdas (and blocks) are not actually stored
in the closure object, since they can be expanded at compile time. In this
case, they will have no binding when we go to look them up. Previously,
RegionStore thought they were uninitialized stack variables; now, it checks
to see if they are a constant we know how to evaluate, using the same logic
as r175026.
This particular code path is only for scalar variables. Constant arrays and
structs are still unfortunately unhandled; we'll need a stronger solution
for those.
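For illustration, the lambda case (hypothetical example):
int test() {
  const int limit = 32;             // constant: not odr-used, so never
  auto get = [=] { return limit; }; // stored in the closure object
  return get(); // previously looked like an uninitialized stack variable;
}               // now constant-evaluated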
This may have a small performance impact, but only for truly-undefined
local variables, captures in a non-inlined block, and non-constant globals.
Even then, in the non-constant case we're only doing a quick type check.
<rdar://problem/13105553>
llvm-svn: 175194
Previously, we were handling only simple integer constants for globals and
the smattering of implicitly-valued expressions handled by Environment for
default arguments. Now, we can use any integer constant expression that
Clang can evaluate, in addition to everything we handled before.
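For example (illustrative):
const int kShift = 3;
const int kSize = (1 << kShift) + 2; // any ICE Clang can fold: here, 10
int test(int i) {
  int buf[kSize]; // the analyzer now knows kSize == 10
  buf[0] = i;
  return buf[0];
}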
PR15094 / <rdar://problem/12830437>
llvm-svn: 175026
The checkPointerEscape callback previously did not specify how a
pointer escaped. This change includes an enum which describes the
different ways a pointer may escape. This enum is passed to the
checkPointerEscape callback when a pointer escapes. If the escape
is due to a function call, the call is passed. This changes the
previous behavior, where the call was passed as NULL if the escape
was due to indirectly invalidating the region the pointer referenced.
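A sketch of the callback's shape (from memory; the exact names in this
revision may differ):
enum PointerEscapeKind {
  PSK_DirectEscapeOnCall,   // pointer passed directly to a function
  PSK_IndirectEscapeOnCall, // region reachable from an argument invalidated
  PSK_EscapeOther           // escaped some other way (no call available)
};
ProgramStateRef checkPointerEscape(ProgramStateRef State,
                                   const InvalidatedSymbols &Escaped,
                                   const CallEvent *Call, // non-null for calls
                                   PointerEscapeKind Kind) const;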
A patch by Branden Archer!
llvm-svn: 174677
This patch makes sure that we do not reinitialize static globals when
the function is called more than once along a path. The motivation is
code with initialization patterns that rely on 2 static variables, where
one of them has an initializer while the other does not. Currently, we
reset the static variables with initializers on every visit to the
function along a path.
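The motivating pattern looks roughly like this (hypothetical example):
int *getCached() {
  static int initialized;  // no initializer
  static int *p = 0;       // initializer: was being re-run on every call
  if (!initialized) {
    p = new int(42);
    initialized = 1;
  }
  return p; // re-zeroing 'p' on a second call discarded the cached value
}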
llvm-svn: 174676
This is a "quick fix".
The underlying issue is that when a const pointer to a struct is passed
into a function, we do not invalidate the pointer fields. This results
in false positives that are common in C++ (since copy constructors are
prevalent). (Silences two llvm false positives.)
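The kind of code affected (hypothetical example):
struct Wrapper { int *p; };
void process(const Wrapper *w); // may legitimately write through w->p
int test() {
  int x = 0;
  Wrapper w;
  w.p = &x;
  process(&w);   // 'w' is const here, but *w.p can still change; without
                 // invalidating the pointee, the analyzer keeps x == 0
  return 10 / x; // and reports a false division by zero
}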
llvm-svn: 174468
This is a more natural order of evaluation, and it is very important
for visualization in the static analyzer. Within Xcode, the arrows
no longer jump from right to left, which looked visually jarring.
It also provides a more natural location for dataflow-based diagnostics.
Along the way, we found a case in the analyzer diagnostics where we
needed to indicate that a variable was "captured" by a block.
-fsyntax-only timings on sqlite3.c show no visible performance change,
although this is just one test case.
Fixes <rdar://problem/13016513>
llvm-svn: 174447
...again. The problem has not been fixed and our internal buildbot is still
getting hangs.
This reverts r174212, originally applied in r173951, then reverted in r174069.
Will not re-apply until the entire project analyzes successfully on my
local machine.
llvm-svn: 174265
Inlining these functions is essential for correctness. We often have
cases where we do not inline calls: for example, in shallow mode, and when
reanalyzing previously inlined ObjC methods as top-level.
llvm-svn: 174245
This allows us to keep from chaining LazyCompoundVals in cases like this:
CGRect r = CGRectMake(0, 0, 640, 480);
CGRect r2 = r;
CGRect r3 = r2;
Previously we only made this optimization if the struct did not begin with
an aggregate member, to make sure that we weren't picking up an LCV for
the first field of the struct. But since LazyCompoundVals are typed, we can
make that inference directly by comparing types.
This is a pure optimization; the test changes are to guard against possible
future regressions.
llvm-svn: 174211
It's causing hangs on our internal analyzer buildbot. Will restore after
investigating.
This reverts r173951 / baa7ca1142990e1ad6d4e9d2c73adb749ff50789.
llvm-svn: 174069
This is a hack to work around the fact that we don't track extents for our
default bindings:
CGPoint p;
p.x = 0.0;
p.y = 0.0;
rectParam.origin = p;
use(rectParam.size); // warning: uninitialized value in rectParam.size.width
In this case, the default binding for 'p' gets copied into 'rectParam',
because the 'origin' field is at offset 0 within CGRect. From then on,
rectParam's old default binding (in this case a symbol) is lost.
This patch silences the warning by pretending that lazy bindings are never
made from uninitialized memory, but not only is that not true, the original
default binding is still getting overwritten (see FIXME test cases).
The long-term solution is tracked in <rdar://problem/12701038>
PR14765 and <rdar://problem/12875012>
llvm-svn: 174031
Fixes false positives: the includeSuffix flag was only set on the first
iteration through the function, resulting in invalid regions being produced
by getLazyBinding (ex: zoomRegion.y).
llvm-svn: 174016
Redefine the shallow mode to inline all functions for which we have a
definite definition (ipa=inlining). However, only inline functions that
are up to 4 basic blocks large and cut the max exploded nodes generated
per top level function in half.
This makes shallow faster and allows us to keep inlining small
functions. For example, we would keep inlining wrapper functions and
constructors/destructors.
With the new shallow mode, it takes 104s to analyze sqlite3, whereas
deep mode takes 658s and the previous shallow mode took 209s.
llvm-svn: 173958