Summary:
It seems that reading register data is the biggest bottleneck in LLGS at the moment. Sending
four registers instead of the full GPR set reduces the jThreadsInfo processing time about
6-fold. Until we figure out where this time is going, this commit limits the amount of data we
send to provide a more fluid debugging experience.
Reviewers: tberghammer, ovyalov
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D11264
llvm-svn: 242517
Summary:
This commit adds initial support for the jThreadsInfo packet to lldb-server. The current
implementation does not expedite inferior memory. I have also added a description of the new
packet to our protocol documentation (mostly taken from Greg's earlier commit message).
Reviewers: clayborg, ovyalov, tberghammer
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D11187
llvm-svn: 242402
This allows stepping operations that don't ever do a public stop to get all the info they need without having to send a jThreadsInfo packet, since those tend to be large.
This patch will be followed by one that detects when we do a public stop; when that happens, we will send a jThreadsInfo packet to get all expedited registers and memory.
llvm-svn: 242352
Summary:
This commit integrates MainLoop into NativeProcessLinux. By registering a SIGCHLD handler with
the llgs main loop, we can get rid of the special monitor thread in NPL, which saves us a lot of
thread ping-pong when responding to client requests (e.g. qThreadInfo processing time has been
reduced by about 40%). It also makes the code simpler, IMHO.
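To illustrate the mechanism, here is a minimal, self-contained sketch of the pattern (not the actual NativeProcessLinux code; the program structure and names are made up for the example):
#include <sys/types.h>
#include <sys/wait.h>
#include <csignal>
#include <cstdio>
#include <unistd.h>

// Sketch: instead of a dedicated monitor thread blocked in waitpid(), the
// single main loop is woken by SIGCHLD and then reaps every pending child
// state change without blocking.
int main() {
  sigset_t chld_set;
  sigemptyset(&chld_set);
  sigaddset(&chld_set, SIGCHLD);
  sigprocmask(SIG_BLOCK, &chld_set, nullptr); // deliver SIGCHLD only via sigwait()

  if (fork() == 0)
    _exit(42); // child exits immediately, raising SIGCHLD in the parent

  int sig = 0;
  sigwait(&chld_set, &sig); // stand-in for the main loop's wait

  // The "SIGCHLD callback": drain all pending events without blocking.
  int status;
  pid_t pid;
  while ((pid = waitpid(-1, &status, WNOHANG)) > 0)
    if (WIFEXITED(status))
      printf("child %d exited with status %d\n", pid, WEXITSTATUS(status));
  return 0;
}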
Reviewers: ovyalov, clayborg, tberghammer, chaoren
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D11150
llvm-svn: 242305
should send when detaching and leaving the remote process/system
halted. Previously only the 'D' initial char was sent, which
resumed the process like a normal detach.
llvm-svn: 242256
Summary:
- Consolidate Unix signals selection in UnixSignals.
- Make Unix signals available from platform.
- Add jSignalsInfo packet to retrieve Unix signals from remote platform.
- Get a copy of the platform signals for each remote process.
- Update SB API for signals.
- Update signal utility in test suite.
Reviewers: ovyalov, clayborg
Subscribers: chaoren, jingham, labath, emaste, tberghammer, lldb-commits
Differential Revision: http://reviews.llvm.org/D11094
llvm-svn: 242101
Summary:
This is the first part of our effort to make llgs single-threaded. Currently, llgs consists of
about three threads, and the synchronisation between them is a major source of latency when
debugging Linux and Android applications.
In order to be able to go single-threaded, we must have the ability to listen for events from
multiple sources (primarily, client commands coming over the network and debug events from the
inferior) and perform the necessary actions. For this reason I introduce the concept of a MainLoop.
A main loop has the ability to register callbacks, which will be invoked upon receipt of certain
events. MainLoopPosix can listen for file descriptors and signals.
For the moment, I have merely made the GDBRemoteCommunicationServerLLGS class use MainLoop
instead of waiting on the network socket directly, but the other threads still remain. In the
follow-up patches I intend to migrate NativeProcessLinux to this class and remove the remaining
threads.
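As an illustration of the concept, here is a self-contained sketch of the pattern MainLoopPosix implements (not LLDB's actual MainLoop API; the registration scheme, stdin fd and SIGINT choice are assumptions made for the example):
#include <sys/select.h>
#include <csignal>
#include <cstdio>
#include <functional>
#include <map>
#include <unistd.h>

// Sketch: callbacks are "registered" for file descriptors and signals, and a
// single pselect()-based loop dispatches them on one thread.
static volatile sig_atomic_t g_signal_flag = 0;
static void SignalHandler(int) { g_signal_flag = 1; }

int main() {
  std::map<int, std::function<void()>> fd_callbacks;
  bool done = false;

  // Read callback for stdin (in llgs this would be the client socket) and a
  // callback for SIGINT (llgs would listen for SIGCHLD instead).
  fd_callbacks[STDIN_FILENO] = [&done] {
    char buf[128];
    ssize_t n = read(STDIN_FILENO, buf, sizeof buf);
    if (n <= 0)
      done = true;
    else
      printf("fd callback: read %zd bytes\n", n);
  };
  auto signal_callback = [&done] {
    printf("signal callback: got SIGINT\n");
    done = true;
  };

  // Keep SIGINT blocked except while sitting in pselect(), so delivery cannot
  // race with the dispatch code below.
  struct sigaction sa = {};
  sa.sa_handler = SignalHandler;
  sigemptyset(&sa.sa_mask);
  sigaction(SIGINT, &sa, nullptr);
  sigset_t blocked, wait_mask;
  sigemptyset(&blocked);
  sigaddset(&blocked, SIGINT);
  sigprocmask(SIG_BLOCK, &blocked, &wait_mask);
  sigdelset(&wait_mask, SIGINT);

  while (!done) {
    fd_set readfds;
    FD_ZERO(&readfds);
    int maxfd = -1;
    for (const auto &entry : fd_callbacks) {
      FD_SET(entry.first, &readfds);
      if (entry.first > maxfd)
        maxfd = entry.first;
    }
    // Wait for either a readable fd or a (temporarily unblocked) signal.
    int n = pselect(maxfd + 1, &readfds, nullptr, nullptr, nullptr, &wait_mask);
    if (g_signal_flag) {
      g_signal_flag = 0;
      signal_callback();
    }
    if (n > 0)
      for (const auto &entry : fd_callbacks)
        if (FD_ISSET(entry.first, &readfds))
          entry.second();
  }
  return 0;
}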
Reviewers: ovyalov, clayborg, amccarth, zturner, emaste
Subscribers: tberghammer, lldb-commits
Differential Revision: http://reviews.llvm.org/D11066
llvm-svn: 242018
jGetLoadedDynamicLibrariesInfos. This packet is similar to
qXfer:libraries:read except that lldb supplies the number of solibs
that should be reported about, and the start address for the list
of them. At the initial process launch we'll read the full list
of solibs linked by the process -- at this point we could be using
qXfer:libraries:read -- but on subsequent solib-loaded notifications,
we'll be fetching a smaller number of solibs, often only one or two.
A typical Mac/iOS GUI app may have a couple hundred different
solibs loaded - doing all of the loads via memory reads takes
a couple of megabytes of traffic between lldb and debugserver.
Having debugserver summarize the load addresses of all the solibs
and sending it in JSON requires a couple of hundred kilobytes
of traffic. It's a significant performance improvement when
communicating over a slower channel.
This patch leaves all of the logic for loading the libraries
in DynamicLoaderMacOSXDYLD -- it only calls over to ProcessGDBRemote
to get the JSON result.
If the jGetLoadedDynamicLibrariesInfos packet is not implemented,
the normal technique of using memory read packets to get all of
the details from the target will be used.
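For illustration, an exchange might look roughly like this (the JSON keys and values below are assumptions made for the example, not the exact wire format):
--> jGetLoadedDynamicLibrariesInfos:{"image_count":1,"image_list_address":4297035776}
<-- {"images":[{"load_address":4294967296,"pathname":"/usr/lib/dyld"}]}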
<rdar://problem/21007465>
llvm-svn: 241964
Summary:
This commit avoids the Platform instance when spawning or attaching to a process in lldb-server.
Instead, I have the server call a (static) method of NativeProcessProtocol directly. The reason
for this is that I believe that NativeProcessProtocol should be decoupled from the Platform
(after all, it always knows which platform it is running on, unlike the rest of lldb).
Additionally, the kind of platform actions a NativeProcessProtocol instance needs is likely to
differ greatly from the platform actions of the lldb client, so I think the separation makes sense.
After this, the only dependency NativeProcessLinux has on PlatformLinux is the ResolveExecutable
method, which needs additional refactoring.
This is a resubmit of r241672, after it was reverted due to build failures on non-Linux
platforms.
Reviewers: ovyalov, clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D10996
llvm-svn: 241796
Summary:
This is used on non-unix platforms, where qXfer:libraries-svr4:read
doesn't make sense. Windows, for instance, uses it.
Reviewers: clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D11036
llvm-svn: 241712
platform-specific symbols that are not implemented on OS X.
The build error that caused this is
Undefined symbols for architecture x86_64:
"lldb_private::NativeProcessProtocol::Attach(unsigned long long, lldb_private::NativeProcessProtocol::NativeDelegate&, std::__1::shared_ptr<lldb_private::NativeProcessProtocol>&)", referenced from:
lldb_private::process_gdb_remote::GDBRemoteCommunicationServerLLGS::AttachToProcess(unsigned long long) in liblldb-core.a(GDBRemoteCommunicationServerLLGS.o)
"lldb_private::NativeProcessProtocol::Launch(lldb_private::ProcessLaunchInfo&, lldb_private::NativeProcessProtocol::NativeDelegate&, std::__1::shared_ptr<lldb_private::NativeProcessProtocol>&)", referenced from:
lldb_private::process_gdb_remote::GDBRemoteCommunicationServerLLGS::LaunchProcess() in liblldb-core.a(GDBRemoteCommunicationServerLLGS.o)
ld: symbol(s) not found for architecture x86_64
clang: error: linker command failed with exit code 1 (use -v to see invocation)
llvm-svn: 241688
Summary:
This commit avoids the Platform instance when spawning or attaching to a process in lldb-server.
Instead, I have the server call a (static) method of NativeProcessProtocol directly. The reason
for this is that I believe that NativeProcessProtocol should be decoupled from the Platform
(after all, it always knows which platform it is running on, unlike the rest of lldb).
Additionally, the kind of platform actions a NativeProcessProtocol instance needs is likely to
differ greatly from the platform actions of the lldb client, so I think the separation makes sense.
After this, the only dependency NativeProcessLinux has on PlatformLinux is the ResolveExecutable
method, which needs additional refactoring.
Reviewers: ovyalov, clayborg
Subscribers: lldb-commits
Differential Revision: http://reviews.llvm.org/D10996
llvm-svn: 241672
Make the python target definition file have highest priority so that we can set
the remote stub breakpoint pc offset using it.
Reviewers: clayborg
Subscribers: ted, deepak2427, lldb-commits
Differential revision: http://reviews.llvm.org/D10775
llvm-svn: 241063
- Avoid sending the qfThreadInfo, qsThreadInfo packets if we have a stop reply packet with the threads already (save 2 round trip packets)
- Include the qname, qserial and qkind in the JSON info (see the illustrative snippet after this list)
- Report the qname, qserial and qkind to the thread so it can cache them to avoid many packets on MacOSX and iOS
- Don't clear all discoverable settings when we exec, just the ones we need to. This saves 1-5 packets for each exec.
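For illustration only (the key names are the ones mentioned above; the values are made up), a thread entry in the JSON info might carry:
{ "tid":1580681, "qname":"com.apple.main-thread", "qkind":"serial", "qserial":1 }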
llvm-svn: 240988
There are a couple of bugs in the XML register info handling which this patch fixes:
+ conflicting variable names in a lambda: both the capture list and the parameter list contain a variable called 'name'.
+ prev_reg_num, which sets the register number, should be incremented after each register is processed.
+ Windows errors regarding empty strings and the 'xi:' prefix disappearing from 'xi:include' node name.
Reviewers: clayborg
Subscribers: lldb-commits, deepak2427
Differential Revision: http://reviews.llvm.org/D10731
llvm-svn: 240768
A few extras were fixed
- Symbol::GetAddress() now returns an Address object, not a reference. There were places where people were accessing the address of a symbol when the symbol's value wasn't an address. On MacOSX, undefined symbols have a value of zero, and some places were using the symbol's address and getting an absolute address of zero (since an Address object with no section and an m_offset whose value isn't LLDB_INVALID_ADDRESS is considered an absolute address). So fixing this required some changes to make sure people were getting what they expected.
- Since some places want to access the address as a reference, I added a few new functions to symbol:
Address &Symbol::GetAddressRef();
const Address &Symbol::GetAddressRef() const;
Linux test suite passes just fine now.
<rdar://problem/21494354>
llvm-svn: 240702
A "qSymbol::" is sent when shared libraries have been loaded by hooking into the Process::ModulesDidLoad() function from within ProcessGDBRemote. This function was made virtual so that the ProcessGDBRemote version is called, which then first calls the Process::ModulesDidLoad(), and then it queries for any symbol lookups that the remote GDB server might want to do.
This allows debugserver to request the "dispatch_queue_offsets" symbol so that it can read the queue name, queue kind and queue serial number and include this data as part of the stop reply packet. Previously each thread would have to do 3 memory reads in order to read the queue name.
This is part of reducing the number of packets that are sent between LLDB and the remote GDB server.
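Schematically, the qSymbol exchange looks like this (the values are placeholders, not a captured session):
--> qSymbol::                                    lldb tells the stub it can answer symbol lookups
<-- qSymbol:<hex-encoded symbol name>            the stub asks for a symbol, e.g. dispatch_queue_offsets
--> qSymbol:<address in hex>:<hex-encoded name>  lldb answers (or qSymbol::<hex-encoded name> if unknown)
<-- OK                                           the stub needs nothing more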
<rdar://problem/21494354>
llvm-svn: 240466
This patch adds a listener to the AsyncThread in ProcessGDBRemote, specifically for dealing with any async notification packets.
From the broadcast our listener receives, we can process the notify packet from the event data. A handler function then sets the thread stop info from this packet and updates lldb by setting the process private state to stopped, allowing the async thread to go back to sleep and letting the main thread handle the implications of a state change.
When sending a vCont in nonstop mode we also get a different reply from all-stop mode, an OK response as opposed to a stop reply. So a condition is added to handle this and set the process state without the stop-reply data.
Reviewers: clayborg
Subscribers: lldb-commits, labath, ted, aidan.dodds, deepak2427
Differential Revision: http://reviews.llvm.org/D10544
llvm-svn: 240397
We have been working on reducing the packet count that is sent between LLDB and the debugserver on MacOSX and iOS. Our approach to this was to reduce the packets required when debugging multiple threads. We currently make one qThreadStopInfoXXXX call (where XXXX is the thread ID in hex) per thread except the thread that stopped with a stop reply packet. In order to implement multiple thread infos in a single reply, we need to use structured data, which means JSON. The new jThreadsInfo packet will attempt to retrieve all thread infos in a single packet. The data is very similar to the stop reply packets, but packaged in JSON and uses JSON arrays where applicable. The JSON output looks like:
[
  { "tid":1580681,
    "metype":6,
    "medata":[2,0],
    "reason":"exception",
    "qaddr":140735118423168,
    "registers": {
      "0":"8000000000000000",
      "1":"0000000000000000",
      "2":"20fabf5fff7f0000",
      "3":"e8f8bf5fff7f0000",
      "4":"0100000000000000",
      "5":"d8f8bf5fff7f0000",
      "6":"b0f8bf5fff7f0000",
      "7":"20f4bf5fff7f0000",
      "8":"8000000000000000",
      "9":"61a8db78a61500db",
      "10":"3200000000000000",
      "11":"4602000000000000",
      "12":"0000000000000000",
      "13":"0000000000000000",
      "14":"0000000000000000",
      "15":"0000000000000000",
      "16":"960b000001000000",
      "17":"0202000000000000",
      "18":"2b00000000000000",
      "19":"0000000000000000",
      "20":"0000000000000000"},
    "memory":[
      {"address":140734799804592,"bytes":"c8f8bf5fff7f0000c9a59e8cff7f0000"},
      {"address":140734799804616,"bytes":"00000000000000000100000000000000"}
    ]
  }
]
It contains an array of dictionaries with all of the key value pairs that are normally in the stop reply packet, including the expedited registers. Notice that it also contains expedited memory in the "memory" key. Any values in this memory will get included in a new L1 cache in lldb_private::Process where, if a memory read request is made and that memory request fits into one of the L1 memory cache blocks, it will use that memory data. If a memory request fails in the L1 cache, it will fall back to the L2 cache, which is the same block-sized caching we were using before these changes. This allows a process to expedite memory that you are likely to use and it reduces packet count. On MacOSX with debugserver, we expedite the frame pointer backchain for a thread (up to 256 entries) by reading 2 pointers worth of bytes at the frame pointer (for the previous FP and PC), and follow the backchain. Most backtraces on MacOSX and iOS now don't require us to read any memory!
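Here is a minimal, self-contained sketch of the two-level lookup described above (illustrative only; the struct, member and function names are not the actual lldb_private cache code):
#include <cstddef>
#include <cstdint>
#include <cstring>
#include <map>
#include <vector>

struct MemoryCacheSketch {
  // L1: arbitrarily sized blocks expedited by the stop reply ("memory" key),
  // keyed by their start address.
  std::map<uint64_t, std::vector<uint8_t>> l1_blocks;

  // Returns true only if the whole request fits inside one expedited block.
  bool ReadL1(uint64_t addr, uint8_t *buf, size_t size) {
    auto it = l1_blocks.upper_bound(addr);
    if (it == l1_blocks.begin())
      return false;
    --it; // last block starting at or before addr
    uint64_t start = it->first;
    const std::vector<uint8_t> &bytes = it->second;
    if (addr + size > start + bytes.size())
      return false; // request does not fit entirely in this block
    std::memcpy(buf, bytes.data() + (addr - start), size);
    return true;
  }

  // L2 stand-in: the fixed-block-size cache that may issue memory read packets.
  bool ReadL2(uint64_t, uint8_t *, size_t) { return false; }

  // Try the L1 cache first; on a miss, fall back to the L2 cache.
  bool Read(uint64_t addr, uint8_t *buf, size_t size) {
    if (ReadL1(addr, buf, size))
      return true;
    return ReadL2(addr, buf, size);
  }
};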
We will try these packets out and if successful, we should port these to lldb-server in the near future.
<rdar://problem/21494354>
llvm-svn: 240354
For some communication channels, sending large packets can be very
slow. In those cases, it may be faster to compress the contents of
the packet on the target device and decompress it on the debug host
system. For instance, communicating with a device using something
like Bluetooth may be an environment where this tradeoff is a good one.
This patch adds new fields to the response to the "qSupported" packet
(which returns a "qXfer:features:" response) -- SupportedCompressions
and DefaultCompressionMinSize. These tell you what the remote
stub can support.
If lldb wants to enable compression and can handle one of those
algorithms, it can send a QEnableCompression packet specifying the
algorithm and, optionally, the minimum packet size to use compression
on. lldb may have better knowledge about the best tradeoff for
a given communication channel.
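Schematically, the negotiation might look like the following (the exact field syntax and values are not spelled out in this message, so treat them as assumptions for illustration):
--> qSupported:...
<-- ...;SupportedCompressions=lzfse,zlib-deflate,lz4,lzma;DefaultCompressionMinSize=384;...
--> QEnableCompression:type:zlib-deflate;
<-- OK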
I added support to debugserver and lldb to use the zlib APIs
(if -DHAVE_LIBZ=1 is in CFLAGS and -lz is in LDFLAGS) and the
libcompression APIs on Mac OS X 10.11 and later
(if -DHAVE_LIBCOMPRESSION=1). libz only offers "zlib-deflate" compression.
libcompression can support deflate, lz4, lzma, and a proprietary
lzfse algorithm. libcompression has been hand-tuned for Apple
hardware, so it should be preferred if available.
debugserver currently only adds the SupportedCompressions when
it is being run on an Apple watch (TARGET_OS_WATCH). Comment
that #if out from RNBRemote.cpp if you want to enable it to
see how it works. I haven't tested this on a native system
configuration but surely it will be slower to compress & decompress
the packets in a same-system debug session.
I haven't had a chance to add support for this to
GDBRemoteCommunicationServer.cpp yet.
<rdar://problem/21090180>
llvm-svn: 240066
In order to support asynchronous notifications for non-stop mode, this patch adds a packet read thread. This is done by implementing AppendBytesToCache() from the communications class, which continually reads packets into a packet queue. To initialize this thread, StartReadThread() must be called by the client, so since llgs and the platform tools use the GDBRemoteCommunication code, they must also call this function, as must ProcessGDBRemote.
When the read thread detects an async notify packet it broadcasts this event, where the matching listener will be added in the next non-stop patch.
Packets are now accessed by calling ReadPacket(), which pops a packet from the queue, instead of using WaitForPacketWithTimeoutMicroSecondsNoLock().
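The hand-off between the read thread and ReadPacket() is a standard producer/consumer queue. A self-contained sketch of the idea (illustrative only; these are not the actual GDBRemoteCommunication members or signatures):
#include <condition_variable>
#include <mutex>
#include <queue>
#include <string>

class PacketQueueSketch {
  std::mutex m_mutex;
  std::condition_variable m_cv;
  std::queue<std::string> m_packets;

public:
  // Called from the read thread as complete packets are extracted from the
  // byte cache (the role AppendBytesToCache() plays in this patch).
  void Push(std::string packet) {
    {
      std::lock_guard<std::mutex> lock(m_mutex);
      m_packets.push(std::move(packet));
    }
    m_cv.notify_one();
  }

  // Called by consumers in place of the old wait-for-packet call: pops the
  // next packet, blocking until one arrives.
  std::string ReadPacket() {
    std::unique_lock<std::mutex> lock(m_mutex);
    m_cv.wait(lock, [this] { return !m_packets.empty(); });
    std::string packet = std::move(m_packets.front());
    m_packets.pop();
    return packet;
  }
};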
Reviewers: vharron, clayborg
Subscribers: lldb-commits, labath, ted, domipheus, deepak2427
Differential Revision: http://reviews.llvm.org/D10085
llvm-svn: 239824
Summary:
This should solve the issue of sending denormalized paths over gdb-remote
if we stick to GetPath(false) in GDBRemoteCommunicationClient, and let the
server handle any denormalization.
Reviewers: ovyalov, zturner, vharron, clayborg
Reviewed By: clayborg
Subscribers: tberghammer, emaste, lldb-commits
Differential Revision: http://reviews.llvm.org/D9728
llvm-svn: 238604
Since interaction with the python interpreter is moving towards
being more isolated, we won't be able to include this header from
normal files anymore; all includes of it should be localized to
the python library, which will live under source/bindings/API/Python
after a future patch.
None of the files that were including this header actually depended
on it anyway, so it was just a dead include in every single instance.
llvm-svn: 238581
Summary:
Previously, we reported the inferior receiving SIGSEGV (or SIGILL, SIGFPE, SIGBUS) as an "exception"
to LLDB, presumably to match OSX behaviour. Besides the fact that we were basically lying to the
user, this was also causing problems with inferiors which handle SIGSEGV by themselves, since
LLDB was unable to reinject this signal back into the inferior.
This commit changes LLGS to report SIGSEGV as a signal. This has necessitated some changes in the
test-suite, which had previously used eStopReasonException to locate threads that crashed. Now it
uses platform-specific logic, which in the case of linux searches for eStopReasonSignaled with
signal=SIGSEGV.
I have also added the ability to set the description of StopInfoUnixSignal using the description
field of the gdb-remote packet. The linux stub uses this to display additional information about
the segfault (invalid address, address access protected, etc.).
Test Plan: All tests pass on linux and osx.
Reviewers: ovyalov, clayborg, emaste
Subscribers: emaste, lldb-commits
Differential Revision: http://reviews.llvm.org/D10057
llvm-svn: 238549
qEcho:%s
where '%s' is any valid string. The response to this packet is the exact packet itself with no changes, just reply with what you received!
This will help us recover much more gracefully from packets timing out. Currently, if a packet times out, LLDB will quickly hose up the debug session. For example, if we send an "abc" packet and expect "ABC" back in response, but the "abc" command takes longer than the current timeout value, this will happen:
--> "abc"
<-- <<<error: timeout>>>
Now we want to send "def" and get "DEF" back:
--> "def"
<-- "ABC"
We got the wrong response for the "def" packet because we didn't sync up with the server to clear any current responses from previously issued commands.
The fix is to modify GDBRemoteCommunication::WaitForPacketWithTimeoutMicroSecondsNoLock() so that when it gets a timeout, it syncs itself back up with the server by sending a "qEcho:%u" packet, where %u is an increasing integer, one for each time we time out. We then wait for 3 timeout periods to sync back up. So the above "abc" session would look like:
--> "abc"
<-- <<<error: timeout>>> 1 second
--> "qEcho:1"
<-- <<<error: timeout>>> 1 second
<-- <<<error: timeout>>> 1 second
<-- "abc"
<-- "qEcho:1"
The first timeout is from trying to get the response; then we know we timed out, so we send the "qEcho:1" packet and wait for 3 timeout periods to get back in sync, knowing that we might actually get the response for the "abc" packet in the meantime...
In this case we would actually succeed in getting the response for "abc". But let's say the remote GDB server is deadlocked and will never respond; then it would look like:
--> "abc"
<-- <<<error: timeout>>> 1 second
--> "qEcho:1"
<-- <<<error: timeout>>> 1 second
<-- <<<error: timeout>>> 1 second
<-- <<<error: timeout>>> 1 second
We then disconnect and say we lost connection.
We might also have a bad GDB server that just dropped the "abc" packet on the floor. We can still recover in this case and it would look like:
--> "abc"
<-- <<<error: timeout>>> 1 second
--> "qEcho:1"
<-- "qEcho:1"
Then we know our remote GDB server is still alive and well, and it just dropped the "abc" response on the floor and we can continue to debug.
<rdar://problem/21082939>
llvm-svn: 238530
In ProcessGDBRemote we currently have a single packet, m_last_stop_packet, used to set the thread stop info.
However in non-stop mode we can receive several stop reply packets in a sequence for different threads. As a result we need to use a container to hold them before they are processed.
This patch also changes the return type of CheckPacket() so we can detect async notification packets.
Reviewers: clayborg
Subscribers: labath, ted, deepak2427, lldb-commits
Differential Revision: http://reviews.llvm.org/D9853
llvm-svn: 238323
This change also gets rid of an unused Debugger instance in
GDBRemoteCommunicationServerLLGS and of the command interpreter in
lldb-platform, which was used only for enabling logging.
Differential revision: http://reviews.llvm.org/D9876
llvm-svn: 238319
We now have one API we should use for all XML within LLDB, in XML.h. This API will make it easy to back the XML parsing with different libraries in case libxml2 doesn't work on all platforms. It also allows the only place for #ifdef ...XML... to be in XML.h and XML.cpp. The API is designed so it will still compile with or without XML support, and there is a static function "bool XMLDocument::XMLEnabled()" that can be called to see if XML is currently supported. All APIs will return errors, false, or nothing when XML isn't enabled.
Converted all locations that used XML over to using the host XML implementation.
Added target.xml support to debugserver. Extended the XML register format to work for LLDB by including extra attributes and elements where needed. This allows the target.xml to replace the qRegisterInfo packets and allows us to fetch all register info in a single packet.
<rdar://problem/21090173>
llvm-svn: 238224
The main issue was that Communication::Disconnect() was calling its Connection::Disconnect(), but this wouldn't release the pipes that the ConnectionFileDescriptor was using. We also have someone holding a strong reference to the Process, so that when you re-run, the target replaces its m_process_sp, but the old process doesn't get destructed because someone has a strong reference to it. I need to track that down. But even if we have an outstanding strong reference to a process, we need to call Process::Finalize() to have it release as much of its resources as possible to avoid memory bloat.
Removed the ProcessGDBRemote::SetExitStatus() override and replaced it with ProcessGDBRemote::DidExit().
Now we aren't leaking file descriptors and the stand alone test suite should run much better.
llvm-svn: 238089
Summary:
The test in TestPlatformCommand which runs "platform process list" has
been timing out for Android when running running dosep.py with
LLDB_TEST_THREADS=8. This patch increases the packet timeout to a large
value of 1min to accommodate the long time required for a response for
the qfProcessInfo packet on Android.
Test Plan: LLDB_TEST_THREADS=8 ./dosep.py on Android.
Reviewers: chaoren
Reviewed By: chaoren
Subscribers: tberghammer, lldb-commits
Differential Revision: http://reviews.llvm.org/D9866
llvm-svn: 237752
r237411 exposed the following issue: ProcessGDBRemote used the description field in the
stop-reply to set the description of the StopInfo. In the case of watchpoints, the packet
description contains the raw address that got hit, which is not exactly the information we want
to display to the user as the stop info. Therefore, I have changed the code to use the packet
description only if the StopInfo does not already have a description. This makes the behavior
equivalent to the pre-r237411 behavior, as back then the SetDescription call got ignored for
watchpoints.
llvm-svn: 237436
There were two versions of DoAttachToProcessWithId: one that takes
a pid_t, and another that takes a pid_t and a ProcessAttachInfo.
There were no callers of the former version, and all of the
implementations of this version were simply forwarding calls to
one version or the other.
llvm-svn: 237281
Summary:
This patch is the beginnings of support for non-stop mode in the remote protocol, letting a user examine stopped threads while other threads execute freely.
Non-stop mode is enabled using the setting target.non-stop-mode, which sends a QNonStop packet when establishing the remote connection.
Changes are also made to treat the '?' stop reply packet differently in non-stop mode, according to spec https://sourceware.org/gdb/current/onlinedocs/gdb/Remote-Non_002dStop.html#Remote-Non_002dStop.
A setting for querying the remote for default thread on setup is also included.
Handling of '%' async notification packets will be added next.
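As a sketch, enabling the mode boils down to the following (the setting name is quoted from above; the QNonStop syntax follows the GDB spec linked above, and the reply shown is an assumption):
(lldb) settings set target.non-stop-mode true
--> QNonStop:1
<-- OK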
Reviewers: clayborg
Subscribers: lldb-commits, ADodds, ted, deepak2427
Differential Revision: http://reviews.llvm.org/D9656
llvm-svn: 237239
Removed some unused variables, added some consts, changed some casts
to const_cast. I don't think any of these changes are very
controversial.
Differential Revision: http://reviews.llvm.org/D9674
llvm-svn: 237218