Currently most of the test files have a separate dwarf and a separate
dsym test with almost identical content (only the build step is
different). Adding dwo symbol file handling to the test suite would
increase this to a 3-way duplication. The purpose of this change is to
eliminate this redundancy by generating 2 test cases (one dwarf and one
dsym) for each test function specified (dwo handling will be added in a
later commit).
Main design goals:
* There should be no boilerplate code in each test file to support
multiple debug info formats in most of the tests (custom scenarios are
acceptable in special cases), so adding a new test case is easier and
we can't accidentally miss one of the debug info types.
* In case of a test failure, the debug symbols used during the test run
have to be clearly visible in the output of dotest.py to make
debugging easier both from build bot logs and from local test runs
* Each test case should have a unique, fully qualified name so we can
run exactly 1 test with "-f <test-case>.<test-function>" syntax
* Test output should be grouped by test file the same way as it is now
(displaying dwarf/dsym results separately isn't desirable)
Proposed solution (main logic in lldbtest.py; the rest are test cases
fixed up for the new style):
* Have only 1 test function in each test file that will run for each
debug info format separately; this test function should just call
"self.build(...)" to build an inferior with the right debug info
* When a class is created by Python (the class object, not the class
instance), we will generate a new test method for each debug info format
in the test class with the name "<test-function>_<debug-info>" and
remove the original test method (see the sketch after this list). This
way unittest2 sees multiple test methods (1 for each debug info format,
pretty much as now) and will handle test selection and failure reporting
correctly (the debug info format will be visible at the end of the test
name)
* Add a new annotation, @no_debug_info_test, to disable the generation
of multiple tests for each debug info format when the test doesn't have
an inferior
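As an illustration only (a minimal sketch, not the actual lldbtest.py
code; DEBUG_INFO_FORMATS, no_debug_info_test and TestBaseMeta are
made-up names), the per-debug-info test generation could look like this:

    # Sketch: replace each test method with one variant per debug info
    # format at class creation time; methods marked @no_debug_info_test
    # are left untouched.
    DEBUG_INFO_FORMATS = ["dwarf", "dsym"]  # dwo to be added later

    def no_debug_info_test(func):
        """Mark a test so no per-debug-info variants are generated."""
        func.no_debug_info_test = True
        return func

    class TestBaseMeta(type):
        def __new__(mcs, name, bases, attrs):
            new_attrs = {}
            for attr_name, attr in attrs.items():
                if (attr_name.startswith("test") and callable(attr)
                        and not getattr(attr, "no_debug_info_test", False)):
                    # test_foo -> test_foo_dwarf, test_foo_dsym
                    for debug_info in DEBUG_INFO_FORMATS:
                        def variant(self, attr=attr, debug_info=debug_info):
                            self.debug_info = debug_info
                            return attr(self)
                        variant.__name__ = "%s_%s" % (attr_name, debug_info)
                        new_attrs[variant.__name__] = variant
                else:
                    new_attrs[attr_name] = attr
            return type.__new__(mcs, name, bases, new_attrs)

A test class would opt in by using this metaclass (via __metaclass__ in
Python 2), and self.build(...) would then pick the build rule matching
self.debug_info.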
Differential revision: http://reviews.llvm.org/D13028
llvm-svn: 248883
Summary:
Before:
AssertionError: False is not True : Process is launched successfully
After:
AssertionError: False is not True : Command 'run a.out' failed.
>>> error: invalid target, create a target using the 'target create' command
>>> Process could not be launched successfully
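A rough sketch of how such a message could be assembled (the helper name
and exact formatting are illustrative, not the real lldbtest.py code;
'result' is assumed to be an lldb.SBCommandReturnObject):

    def run_cmd_failure_message(command, result):
        """Echo the failed command and the command interpreter's error
        output in the assertion message, one '>>>' line per error line."""
        lines = ["Command '%s' failed." % command]
        for line in result.GetError().splitlines():
            lines.append(">>> %s" % line)
        return "\n".join(lines)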
Reviewers: clayborg
Reviewed By: clayborg
Subscribers: lldb-commits, vharron
Differential Revision: http://reviews.llvm.org/D9948
llvm-svn: 238363
Removed expectedFailureLinux from failures that I was unable to
reproduce, and updated and improved some other comments near XFAIL tests
Differential Revision: http://reviews.llvm.org/D8676
llvm-svn: 233716
Adds @skipIfPlatform and @skipUnlessPlatform decorators which will skip if /
unless the target platform is in the provided platform list.
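A rough sketch of what such decorators can look like (getPlatform() here
is a simplified stand-in; the in-tree implementation asks the remote
platform when running cross-platform and may differ in details):

    import platform
    import unittest2

    def getPlatform():
        # Simplified stand-in for the test suite's platform lookup.
        return platform.system().lower()

    def skipIfPlatform(oslist):
        """Skip the test if the target platform is in the provided list."""
        return unittest2.skipIf(getPlatform() in oslist,
                                "skip on %s" % ", ".join(oslist))

    def skipUnlessPlatform(oslist):
        """Skip the test unless the target platform is in the list."""
        return unittest2.skipUnless(getPlatform() in oslist,
                                    "requires one of %s" % ", ".join(oslist))

A test method would then be annotated with, for example,
@skipUnlessPlatform(["linux", "android"]).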
Test Plan:
ninja check-lldb shows no regressions.
When running cross platform, tests which cannot run on the target platform are
skipped.
Differential Revision: http://reviews.llvm.org/D8665
llvm-svn: 233547
To run tests against a different target platform, many extra compiler
flags are needed to specify the sysroot, include dirs, etc. The
environment variable CFLAGS_EXTRAS seems suited for this purpose, except
that several Makefiles clobber the existing flags. This change modifies
all of them to append to CFLAGS_EXTRAS instead.
Test Plan:
Verify no regressions in ninja check-lldb.
Run tests using CFLAGS_EXTRAS to specify cross compilation flags for a different
target running lldb-server platform.
Differential Revision: http://reviews.llvm.org/D8559
llvm-svn: 233066
Summary: This patch adds a few comments for GetValueDidChange and improves the TestValueVarUpdate.py test, which checks ValueObject::GetValueDidChange for complex types.
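For context, a hedged usage fragment (the function and variable names
are made up; 'frame' and 'thread' are a live lldb.SBFrame and
lldb.SBThread from an existing debug session):

    import lldb

    def check_did_change(frame, thread, name):
        """Read a variable, step once, then ask LLDB whether the value
        changed. For complex types the aggregate SBValue itself should
        report the change, not only its scalar children."""
        value = frame.FindVariable(name)
        old = value.GetValue()   # establish a baseline for change tracking
        thread.StepOver()
        return value.GetValueDidChange(), old, value.GetValue()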
Reviewers: zturner, granata.enrico, clayborg
Reviewed By: clayborg
Subscribers: jingham, lldb-commits, granata.enrico, zturner, clayborg
Differential Revision: http://reviews.llvm.org/D8103
llvm-svn: 231526
The following lldb unit tests fail check-lldb on Ubuntu:
TestDataFormatterStdMap.py
TestDataFormatterStdVBool.py
TestDataFormatterStdVector.py
TestDataFormatterSynthVal.py
TestEvents.py
TestInitializerList.py
TestMemoryHistory.py
TestReportData.py
TestValueVarUpdate.py
These unit test failures are for non-core functionality. The intent is
to reduce the check-lldb FAILs to core-functionality FAILs now and to
circle back and fix these at a later date.
llvm-svn: 222608