Currently most of the test files have a separate dwarf and a separate
dsym test with almost identical content (only the build step is
different). Adding dwo symbol file handling to the test suite would
increase this to a 3-way duplication. The purpose of this change is to
eliminate this redundancy by generating 2 test cases (one dwarf and
one dsym) for each test function specified (dwo handling will be added
in a later commit).
Main design goals:
* There should be no boilerplate code in each test file to support
multiple debug info formats in most of the tests (custom scenarios are
acceptable in special cases), so adding a new test case is easier and
we can't miss one of the debug info types.
* In case of a test failure, the debug symbols used during the test run
have to be clearly visible in the output of dotest.py, to make
debugging easier both from build bot logs and from local test runs
* Each test case should have a unique, fully qualified name so we can
run exactly 1 test with "-f <test-case>.<test-function>" syntax
* Test output should be grouped by test file the same way it is now
(displaying dwarf/dsym results separately isn't desirable)
Proposed solution (the main logic lives in lldbtest.py; the rest of the
changes are test cases fixed up for the new style):
* Have only 1 test function in the test files that will run for each
debug info format separately; this test function should just call
"self.build(...)" to build an inferior with the right debug info
* When a class is created by Python (the class object, not a class
instance), we generate a new test method for each debug info format in
the test class, named "<test-function>_<debug-info>", and remove the
original test method (see the sketch after this list). This way
unittest2 sees multiple test methods (1 for each debug info format,
pretty much as it does now) and will handle test selection and failure
reporting correctly (the debug info format is visible at the end of the
test name)
* Add a new annotation, @no_debug_info_test, to disable the generation
of multiple tests for each debug info format when the test doesn't have
an inferior
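A minimal sketch of the method-generation idea described above, assuming
a metaclass-based approach (the names LLDBTestCaseFactory,
DEBUG_INFO_FORMATS, and the self.debug_info attribute are illustrative,
not necessarily the actual lldbtest.py implementation):

  # Minimal sketch: when the test class object is created, replace every
  # test method with one variant per debug info format and drop the original.
  DEBUG_INFO_FORMATS = ["dwarf", "dsym"]

  class LLDBTestCaseFactory(type):
      def __new__(mcs, name, bases, attrs):
          new_attrs = {}
          for attr_name, attr_value in attrs.items():
              is_test = attr_name.startswith("test") and callable(attr_value)
              if is_test and not getattr(attr_value, "__no_debug_info_test__", False):
                  for debug_info in DEBUG_INFO_FORMATS:
                      # Bind the loop variables as default arguments so each
                      # generated method keeps its own format.
                      def generated(self, original=attr_value, debug_info=debug_info):
                          self.debug_info = debug_info
                          return original(self)
                      new_attrs["%s_%s" % (attr_name, debug_info)] = generated
              else:
                  new_attrs[attr_name] = attr_value
          return super(LLDBTestCaseFactory, mcs).__new__(mcs, name, bases, new_attrs)

  def no_debug_info_test(func):
      """Keep a test from being replicated for every debug info format."""
      func.__no_debug_info_test__ = True
      return func

With a factory like this as the metaclass of the test base class, a file
defining test_foo ends up exposing test_foo_dwarf and test_foo_dsym to
unittest2, so "-f <test-case>.test_foo_dwarf" selects exactly one variant.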
Differential revision: http://reviews.llvm.org/D13028
llvm-svn: 248883
Summary: Updated `append_to_remote_wd` to work for both remote and local.
Reviewers: clayborg, ovyalov
Reviewed By: ovyalov
Subscribers: tberghammer, lldb-commits
Differential Revision: http://reviews.llvm.org/D10288
llvm-svn: 239203
Summary:
This change adds the infrastructure to mark tests as xfail for specific
Android API levels.
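A hedged sketch of what such an API-level-conditional xfail can look like
(the decorator name, the environment-variable lookup, and the helper below
are illustrative assumptions, not the actual dotest.py implementation):

  import os
  import unittest

  def _android_api_level():
      # Placeholder: a real harness would query the connected device (e.g.
      # ro.build.version.sdk); an environment variable keeps the sketch
      # self-contained.
      value = os.environ.get("ANDROID_API_LEVEL")
      return int(value) if value else None

  def expectedFailureAndroid(api_levels=None):
      """Mark a test as an expected failure on the given Android API levels."""
      def decorator(func):
          level = _android_api_level()
          if level is not None and (api_levels is None or level in api_levels):
              return unittest.expectedFailure(func)
          return func
      return decorator

  # Usage: @expectedFailureAndroid(api_levels=[16]) on the affected test method.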
Test Plan: dotest.py TestChangeProcessGroup on an Android API 16 device.
Reviewers: chying, labath, chaoren
Reviewed By: labath, chaoren
Subscribers: tberghammer, lldb-commits
Differential Revision: http://reviews.llvm.org/D10261
llvm-svn: 239126
It consistently times out. The test is already XTIMEOUT'd, but it still
causes the test run to take an extra 5 minutes waiting for the timeout
to expire.
llvm.org/pr23731
llvm-svn: 238833
If the test fails for some reason, we can end up leaking a process. This has a tendency to
confuse the buildbots. I have added a cleanup hook to make sure we clean up the child.
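A sketch of the cleanup-hook idea, assuming lldbtest's addTearDownHook
mechanism (the helper below and its details are illustrative, not the
committed code):

  import subprocess

  def launch_child_with_cleanup(test, args):
      """Spawn a child process and register a hook that reaps it even if
      the test fails before reaching its normal teardown."""
      child = subprocess.Popen(args)

      def cleanup():
          if child.poll() is None:  # child is still running
              child.kill()
              child.wait()

      test.addTearDownHook(cleanup)
      return child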
llvm-svn: 238421
Summary:
Previously, we wait()ed for events from the inferior's process group. This resulted in a
failure if the inferior changed its process group in the middle of execution. To avoid this, I
pass -1 to the wait() call. The __WNOTHREAD flag makes sure we don't actually wait for events
from any process, but only from the processes (threads) which are our children (or traced by
us). Since this happens on the monitor thread, which is dedicated to monitoring a single
inferior, we will get events only from this inferior.
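A rough Python analogue of the waiting pattern described above (the actual
change is in the native monitor code; os.waitpid stands in for the wait()
call, and __WNOTHREAD has no direct Python equivalent, so it is only
mentioned in a comment):

  import os

  def monitor_loop(inferior_pid):
      # Wait for any child (-1) instead of the inferior's process group, so
      # events keep arriving even if the inferior calls setpgid(). The native
      # code additionally passes __WNOTHREAD so only children of (or processes
      # traced by) the monitor thread itself are reported.
      while True:
          pid, status = os.waitpid(-1, 0)
          if pid != inferior_pid:
              continue
          if os.WIFEXITED(status) or os.WIFSIGNALED(status):
              return status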
Test Plan: All tests pass on Linux. I have added a test to check the new functionality.
Reviewers: chaoren, ovyalov
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D10061
llvm-svn: 238405