Currently most of the test files have a separate dwarf and a separate
dsym test with almost identical content (only the build step is
different). Adding dwo symbol file handling to the test suite would
increase this to a 3-way duplication. The purpose of this change is to
eliminate this redundancy by generating 2 test cases (one dwarf and one
dsym) for each test function specified (dwo handling will be added in a
later commit).
Main design goals:
* There should be no boilerplate code in each test file to support
multiple debug info formats in most tests (custom scenarios are
acceptable in special cases), so adding a new test case is easier and
we can't miss one of the debug info types.
* In case of a test failure, the debug symbols used during the test run
have to be clearly visible in the output of dotest.py to make debugging
easier both from buildbot logs and from local test runs
* Each test case should have a unique, fully qualified name so we can
run exactly 1 test with "-f <test-case>.<test-function>" syntax
* Test output should be grouped by test file the same way as it is now
(displaying dwarf/dsym results separately isn't preferable)
Proposed solution (the main logic is in lldbtest.py; the rest of the
change is test cases fixed up for the new style):
* Have only 1 test function in each test file that will run for each
debug info format separately; this test function should call just
"self.build(...)" to build an inferior with the right debug info
* When a class is created by Python (the class object, not the class
instance), we generate a new test method for each debug info format in
the test class with the name "<test-function>_<debug-info>" and remove
the original test method (see the sketch after this list). This way
unittest2 sees multiple test methods (1 for each debug info format,
pretty much as it does now) and will handle test selection and failure
reporting correctly (the debug info is visible at the end of the test
name)
* Add a new @no_debug_info_test decorator to disable the generation of
multiple tests per debug info format when the test doesn't have an
inferior
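A minimal sketch of the idea (not the actual lldbtest.py
implementation; the factory metaclass, attribute names, and example
class below are illustrative):

  import unittest

  DEBUG_INFO_FORMATS = ["dwarf", "dsym"]  # dwo to be added in a later commit

  def no_debug_info_test(func):
      # Mark a test so that only the original method is kept and no
      # per-debug-info variants are generated for it.
      func.__no_debug_info_test__ = True
      return func

  class DebugInfoTestFactory(type):
      # When the test class object is created, replace each test method
      # with one "<test-function>_<debug-info>" variant per format.
      def __new__(mcs, name, bases, attrs):
          new_attrs = {}
          for attr_name, attr in attrs.items():
              is_test = attr_name.startswith("test") and callable(attr)
              if not is_test or getattr(attr, "__no_debug_info_test__", False):
                  new_attrs[attr_name] = attr
                  continue
              for fmt in DEBUG_INFO_FORMATS:
                  def make_variant(original, debug_info):
                      def variant(self):
                          self.debug_info = debug_info  # the real self.build(...) would consult this
                          return original(self)
                      return variant
                  new_attrs["%s_%s" % (attr_name, fmt)] = make_variant(attr, fmt)
          return super().__new__(mcs, name, bases, new_attrs)

  class ExampleTestCase(unittest.TestCase, metaclass=DebugInfoTestFactory):
      def build(self):
          pass  # stand-in for a real builder that would honour self.debug_info

      def test_something(self):
          # runs as test_something_dwarf and test_something_dsym
          self.build()

With this, a name like ExampleTestCase.test_something_dwarf identifies
exactly one variant, matching the "-f <test-case>.<test-function>"
selection described above.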
Differential revision: http://reviews.llvm.org/D13028
llvm-svn: 248883
ExprCommandWithTimeoutsTestCase had an expectedFailureFreeBSD
decorator, removed in r247799. It had been flakey
on the FreeBSD buildbot but passed locally. John Wolfe has since
observed a local failure, so add expectedFlakeyFreeBSD until we can
investigate and likely increase the timeout in the test.
llvm.org/pr19605 (FreeBSD)
llvm.org/pr20275 (equivalent Linux issue)
llvm-svn: 247822
Remove the expectedFailureFreeBSD decorator from
ExprCommandWithTimeoutsTestCase.
This test passes locally but was marked XFAIL due to failures on the
FreeBSD buildbot. That buildbot has been retired as it was overloaded,
and we will investigate again if this fails once a new buildbot is in
place.
llvm.org/pr19605
llvm-svn: 247799
The test was flaky on the Android buildbot because the 10ms test
completed before we got a chance to interrupt it. Increase the duration
to 50ms to hopefully get more consistent results.
llvm-svn: 245555
SUMMARY
Flakey tests get two chances to pass
Also switched a bunch of tests to use the new decorator.
TEST PLAN
Add one of these decorators to a test
Edit a test to pass on the first invocation, confirm test appears as pass
Edit a test to fail on the first invocation and pass on the second, confirm test appears as xfail
Edit a test to fail on two consecutive runs, confirm test appears in results as fail/error
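A rough sketch of the "two chances" retry idea, assuming a
unittest-style test method; the real lldb decorators also take platform
checks and bug numbers, and report a pass-on-retry as an expected
failure, all of which is omitted here:

  import functools

  def expectedFlakey(func):
      # Give the wrapped test a second chance: if the first attempt
      # raises, reset the fixture and run it once more.
      @functools.wraps(func)
      def wrapper(self, *args, **kwargs):
          try:
              return func(self, *args, **kwargs)
          except Exception:
              self.tearDown()
              self.setUp()
              return func(self, *args, **kwargs)
      return wrapper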
Differential Revision: http://reviews.llvm.org/D10721
llvm-svn: 240789
Adds @skipIfPlatform and @skipUnlessPlatform decorators, which skip a
test if / unless the target platform is in the provided platform list.
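A sketch of what such decorators can look like; getPlatform() below
stands in for however the test base class reports the target platform
and is an assumption, not the actual API:

  import functools

  def skipIfPlatform(platform_list):
      # Skip the test if the target platform is in platform_list.
      def decorator(func):
          @functools.wraps(func)
          def wrapper(self, *args, **kwargs):
              if self.getPlatform() in platform_list:
                  self.skipTest("skipped on " + self.getPlatform())
              return func(self, *args, **kwargs)
          return wrapper
      return decorator

  def skipUnlessPlatform(platform_list):
      # Skip the test unless the target platform is in platform_list.
      def decorator(func):
          @functools.wraps(func)
          def wrapper(self, *args, **kwargs):
              if self.getPlatform() not in platform_list:
                  self.skipTest("requires one of: " + ", ".join(platform_list))
              return func(self, *args, **kwargs)
          return wrapper
      return decorator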
Test Plan:
ninja check-lldb shows no regressions.
When running cross platform, tests which cannot run on the target platform are
skipped.
Differential Revision: http://reviews.llvm.org/D8665
llvm-svn: 233547
The test assumed that the function we were calling would continue to
sleep for the requested time even if it was interrupted. That is not
true of std::this_thread::sleep_for, at least not on OS X. Fix the test
case so that if it wakes up early, it goes back to sleep until the time
is actually greater than the end point.
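The fix amounts to sleeping in a loop until the deadline has really
passed. The inferior itself does this with std::this_thread::sleep_for
in C++; here is the same pattern sketched in Python purely for
illustration:

  import time

  def sleep_until(deadline):
      # Keep sleeping until the deadline has actually been reached, even
      # if an individual sleep wakes up early (e.g. it was interrupted).
      while True:
          remaining = deadline - time.monotonic()
          if remaining <= 0:
              return
          time.sleep(remaining)

  # e.g. sleep for at least 5 seconds:
  # sleep_until(time.monotonic() + 5.0)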
<rdar://problem/18523742>
llvm-svn: 219234
Many of the test executables use pthreads directly. This isn't
portable to Windows, so this patch converts these tests to use C++11
threads and mutexes. Since the Windows implementation of the
std::thread classes throws and catches from header files, this patch
also disables exceptions when compiling with clang on Windows.
Reviewed by: Todd Fiala, Ed Maste
Differential Revision: http://reviews.llvm.org/D4816
llvm-svn: 215562
The following intermittently-failing tests have been flipped from
skip to XFAIL on some combo of Linux and MacOSX:
TestCallStopAndContinue.py (Linux, MacOSX)
TestCallWithTimeout.py (Linux)
TestConvenienceVariables.py (Linux)
TestStopHookMultipleThreads.py (Linux)
The following new tests have been marked XFAIL but are just
intermittently failing:
TestMultipleDebug.py (definitely intermittent on MacOSX, not sure I've seen
it pass yet on Linux)
llvm-svn: 212762
Marked skipped for Linux:
TestCallStopAndContinue
TestConvenienceVariables
TestStopHookMultipleThreads
Fixed up the gdb-remote port-grabbing code to use a random port in a
wide range, and to allow that to fail more gracefully. This appears to
have resolved some intermittent gdb-remote failures.
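A sketch of the port-grabbing idea; the range and retry count here are
made up for illustration and are not the values used in the test suite:

  import random
  import socket

  def grab_random_port(low=12000, high=20000, attempts=10):
      # Try to bind a random port from a wide range, failing gracefully
      # if no free port is found after a few attempts.
      for _ in range(attempts):
          port = random.randint(low, high)
          sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
          try:
              sock.bind(("127.0.0.1", port))
          except OSError:
              sock.close()
              continue
          return sock, port
      raise RuntimeError("no free port found after %d attempts" % attempts)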
llvm-svn: 212662
This test was skipped as it used to segfault on FreeBSD. It seems the
original issue has since been fixed, so re-enable the test.
llvm-svn: 201169
Setting "mydir" by hand has led to many test suite failures because of
copy and paste, where new test cases were based off of other test cases
and the "mydir" variable wasn't updated. Now you can call your
superclass's "compute_mydir()" function with "__file__" as the sole
argument and the relative path will be computed for you.
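The pattern in a test file then looks roughly like this (the import
path below is an assumption; compute_mydir is the helper the commit
describes):

  from lldbtest import TestBase   # import path assumed

  class MyTestCase(TestBase):
      # The test's relative directory is now computed from __file__
      # instead of being copy-pasted by hand.
      mydir = TestBase.compute_mydir(__file__)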
llvm-svn: 196985
I now see no unexpected failures on FreeBSD on a local run of the test
suite.
llvm.org/pr17214
llvm.org/pr17225
llvm.org/pr17231
llvm.org/pr17232
llvm.org/pr17233
llvm-svn: 190709