Thoughts on software architecture and development, and methods and techniques for improving the quality thereof.

David B. Robins

Code Visions: Improving software quality
Over-the-air debugging

By David B. Robins; tags: C++, Python, Tools, Debugging; Saturday, April 11, 2015 15:57 EST

I had the idea a few months ago to debug our embedded device application over a Bluetooth (Low Energy, BLE) link. Three weeks ago I finished over-the-air firmware updates, and this last week I got a chance to try my over-the-air debugging plan; as it happened, it worked fairly well.

We use SEGGER's J-Link GDB server to debug our application over a USB tag cable. I figured that I could write a GDB server that would instead use Bluetooth to communicate. It would be able to leverage GDB for symbol lookup, and to examine and write memory, although not (at least at first) to run and step (stopping execution would terminate the Bluetooth connection). I looked up the GDB remote serial protocol, but as it happened an open source Python GDB server project already existed: pyOCD. I made some fixes to make it work with Python 3 (and to stop it obnoxiously logging to the console), and wrote BLE transport and target classes.

When the pyOCD GDB server is launched, it runs in its own thread. My implementations of the read and write memory functions (readBlock32, readMem, writeBlock32, and writeMem) leveraged my existing asyncio-based tools framework that communicates with our embedded device over BLE using Bluegiga USB dongles. The functions use loop.call_soon_threadsafe to invoke the BLE communication to read or write on the main thread, and wait on an event until it completes.
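Sketched in C++ for concreteness (the actual implementation is Python using asyncio as described above; the task queue, BleReadWord, and ReadWord here are hypothetical stand-ins, and the main loop that drains the queue is not shown), the cross-thread call looks something like this:

#include <cstdint>
#include <functional>
#include <future>
#include <mutex>
#include <queue>

std::queue<std::function<void()>> g_mainThreadTasks;  // drained by the main (BLE) loop
std::mutex g_mtxTasks;

uint32_t BleReadWord(uint32_t addr);  // hypothetical: BLE transaction, main thread only

// Called on the GDB server's thread by the read-memory handler.
uint32_t ReadWord(uint32_t addr)
{
   std::promise<uint32_t> prom;
   std::future<uint32_t> fut = prom.get_future();
   {
      std::lock_guard<std::mutex> lock(g_mtxTasks);
      g_mainThreadTasks.push([&prom, addr] { prom.set_value(BleReadWord(addr)); });
   }
   return fut.get();  // block this thread until the main loop has run the task
}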

This is of course a dangerous facility to leave enabled on shipping code, which is why it is (until we have better general security) enabled only under #ifdef DEBUGGER; our builds are automatically tagged with these feature defines so it will be clear when it is enabled.
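In sketch form, the guard is as simple as this (the init hook name is hypothetical):

#ifdef DEBUGGER
   // Present only in builds tagged with the DEBUGGER feature define:
   // registers the BLE handlers that serve the GDB server's memory reads/writes.
   DebugTargetInit();
#endif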

Having this over-the-air memory debugging support is more convenient than having to attach a cable, and allows some useful debugging of state at remote locations too. More importantly, attaching the normal debugger usually disrupts the device state (losing any information that could be obtained), or, once attached, gets disrupted by static; this will not. One of the current issues we're tracking has been very hard to reproduce, and we don't have enough physical debuggers to attach to every lab device. This new facility will be a great aid in resolving such issues.

The compiler optimized out my destructor (and it was right)

By David B. Robins; tags: C++, Bugs; Tuesday, February 10, 2015 12:02 EST

I do most of my testing with a debug build, which has fewer optimizations than release to make debugging easier (not "none", because that makes the binary image too big for the embedded device; -Og, actually). I was doing some pre-release testing with a release build, and noticed that the destructor for one of my objects didn't appear to be called in release builds (using -Os). This object was somewhat of a special case, since it was created with placement new and destroyed with an explicit destructor call; and I always had a faint feeling of not-quite-rightness with it, and now I know why.

The destructor set a state in the object to mark it as free, since it was one of several slots that could be activated when associated with a Bluetooth advertisement or connection. This is where you should be having uneasy feelings, because looking at destroyed objects is not kosher at all; in fact, it is undefined behavior (C++ standard N3690, 3.8 Object lifetime [basic.life], paragraph 5). Thus, the compiler reasons that if you are not allowed to look at the object's fields, then there's no point in wasting good CPU time setting them! So, the state was never set to free, and (in this particular case) the advertisement never expired (although this defect was hidden by another issue for a long time).

Fixed by using an explicit Free function.
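A minimal sketch of the pattern (names hypothetical; the real object is an advertisement/connection slot):

#include <cstdint>
#include <new>

struct Slot
{
   bool m_fInUse = true;

   // Bad: reading m_fInUse after the destructor has run is undefined
   // behavior ([basic.life] p5), so at -Os the store may be optimized out.
   //~Slot() { m_fInUse = false; }

   // Good: an explicit function leaves the object alive, so the store and
   // the later read are both well-defined.
   void Free() { m_fInUse = false; }
};

alignas(Slot) uint8_t g_rgbSlot[sizeof(Slot)];  // static storage for the slot

int main()
{
   Slot *pSlot = new (g_rgbSlot) Slot;  // created with placement new
   pSlot->Free();                       // was: pSlot->~Slot();
   return pSlot->m_fInUse ? 1 : 0;      // safe: the object was never destroyed
}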

That's not a hash… that's an encoding!

By David B. Robins; tags: Development, Hiring; Wednesday, February 4, 2015 20:13 EST

The Trello Android Developer job page requires applicants to solve a programming puzzle, the solution of which must be sent with an application. (In case the question changes or the page moves, searching for the source string "acdegilmnoprstuw" or for 680131659347, the "hash" of "leepadg", should bring up the question.) The question is presented as reversing the hash of a string, i.e., given a (numeric) hash value, producing the original string. For strong hashes used in the real world, such as MD5 or SHA-1, this is a difficult problem and almost certainly requires a degree of brute-forcing, i.e., enumerating all the possible strings (billions and billions…), putting them through the hash function, and checking whether the result matches the given hash value. Doing it significantly more efficiently would make someone famous and (from generating Bitcoins alone) rich.

However, the Trello developer question is actually an encoding, and while that doesn't technically exclude it from being a hash, one desirable quality in a hash is that it's difficult to go from hash value to original string (or even to a possible original string, accounting for collisions), whereas being able to do that is a necessary property of an encoding. (I expect the Trello problem writers know this, and want to filter brute-force attempts, which, given the much smaller space than real-world hashes, are feasible here.) Encoding vs. Encryption vs. Hashing is a well-written post that clarifies the differences. It is trivial for someone with the most basic understanding of algebra to reverse the encoding in the question (it's a "weeder" question; I'm certain they'll have harder ones to ask in an actual interview).

I'd refrain if others hadn't already posted some solutions on the web (since it's a pain to have your screening question's answer available for cutting and pasting, although a few quick live (phone or in person) questions to the candidate should suffice to detect ignorant copying), but here's a Perl one-liner that solves it:

perl -wle '$t=956446786872726; while($t) { unshift @a, $t % 37; $t -= $a[0]; $t /= 37; } shift @a; print map { substr("acdegilmnoprstuw", $_, 1) } @a'

Given that it's folded into one line, it won't win you any style points anyway. While the Android developer position doesn't specify a language, the Node.js developer opening (also posted) specifically requires JavaScript or CoffeeScript. And note that at least one of the answers posted online does present a brute force and ignorance "solution"; perhaps a trap for those who might ignorantly search and copy?
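Unrolled into C++ for readability (the alphabet, the base of 37, and the seed of 7 are all taken from the one-liner above):

#include <cstdint>
#include <iostream>
#include <string>

int main()
{
   char const *letters = "acdegilmnoprstuw";
   int64_t t = 956446786872726;  // the "hash" given in the question
   std::string s;
   while (t > 7)  // the encoding starts from a seed of 7
   {
      s.insert(s.begin(), letters[t % 37]);  // low base-37 digit is the last letter
      t /= 37;                               // peel it off
   }
   std::cout << s << '\n';
}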

Wireless mouse problem

By David B. Robins; tags: Tools, Bugs; Wednesday, January 28, 2015 11:18 EST

Just a minor observation today, but it was driving me nuts.

One day I found that my Microsoft wireless mouse was stalling and not responding after I stopped using it for a few minutes. I'd be typing, go to move the mouse again, and it would take a few seconds to respond, kind of stuttering and jumping around. It had never done this before, so I suspected a Windows update might have broken something, but when I searched I didn't see reports of similar problems.

It turns out that this time I had plugged the mouse's wireless USB dongle into a "charging port" on my USB hub. Apparently it didn't like that, and it caused the observed behavior. Plugging it back into a regular port fixed it.

Buddy allocator

By David B. Robins; tags: C++, Development, Embedded; Saturday, January 24, 2015 09:32 EST

Dynamic memory allocation is generally to be avoided on memory-constrained embedded devices, and we are very constrained: after the vendor's "soft device" we only get 6K of the total 16K RAM to use (a future chip upgrade will provide 32K, but we're months from getting it and will likely have to support the 16K chips for a while). However, we got to a point where it was necessary to allocate and deallocate buffers, since we couldn't afford to have them all exist statically at the same time. I had concerns about the built-in malloc being a first-fit allocator, due to hard-to-predict fragmentation, so I decided to go with a buddy allocator.

I have posted the source to GitHub with permission of my employer; it is buildable using SCons, and by default builds a project that runs a test suite using GoogleTest. Currently it is set to build for MinGW on Windows (patches to make it more general are welcome), and I build it for the device using GNU tools for ARM embedded processors (GCC 4.8 and 4.9). The documentation there explains how to use it in a project. I quote from the implementation notes:

The implementation sacrifices some speed to save memory by not using free lists; instead, a bitmap representing a contiguous array of the smallest block is used to track allocations, with 2 bits per smallest block (see above). The first (high) bit is set if the block is in-use; the second is set if it is the end of a logical block. For example, the initial state is 00 00 00 ... 00 01, meaning one free block the size of the whole arena. It can then be split recursively down to the minimum size by flagging the appropriate smallest-block as an end-block.
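In sketch form (names hypothetical, and the packing order of the 2-bit entries within a byte is my assumption; see the repository for the real code):

#include <cstdint>

enum { cSmallest = 64 };                 // smallest blocks in the arena (example size)
static uint8_t g_rgbMap[cSmallest / 4];  // 2 bits per smallest block, 4 per byte

// The 2-bit entry for smallest-block i: high bit = in-use, low bit = end.
static unsigned BlockBits(unsigned i)
{
   return (g_rgbMap[i / 4] >> (2 * (i % 4))) & 0x3u;
}

static bool FInUse(unsigned i) { return (BlockBits(i) & 0x2u) != 0; }
static bool FEnd(unsigned i)   { return (BlockBits(i) & 0x1u) != 0; }

// Initial state per the notes: all zeroes except the final end bit, i.e.,
// one free logical block spanning the whole arena.
static void InitMap()
{
   for (uint8_t &b : g_rgbMap)
      b = 0;
   g_rgbMap[(cSmallest - 1) / 4] |= uint8_t(1u << (2 * ((cSmallest - 1) % 4)));
}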

It does make use of C++11 features, and I appreciate that some embedded projects may be stuck with ancient compilers on C++98 or worse; I make no apologies for using modern tools: being able to use std::unique_ptr and RAII helps avoid leaks.

Note that I'm only using "copyright" to stop someone else from claiming the work and copyrighting it and causing problems for me; otherwise I'd make it public domain. I don't care what you do with the code, although attribution would be appreciated. I hope it is useful to someone out there.

Code review tools and SVNKit tricks

By David B. Robins; tags: Tools, Bugs, Java; Wednesday, January 21, 2015 11:44 EST

For a few months now, with the expectation of adding a new developer to our embedded group (present population: me), I've been looking for a code review tool. Admittedly, I'd been spoiled by SmartBear Collaborator (formerly Code Collaborator, but they're pitching it as document review too) at Exacq/Tyco, which was terrific but is also $500 per named user (which is not that bad, but while we're not in bad shape we're also not in "let's drop a few thousand on a development tool" mode). Before that, we did code review by email (not great), and at Microsoft a couple of the dev managers really liked reviews being in-person, so one would pack up a DPK (diff package), throw it on a share, and interrupt someone's flow to get a review.

The highest-recommended open source tool was Review Board, so I set it up on my Amazon EC2 server and monkeyed with it until it saw my SVN repository, and left it for a while. Then I tried using it for a review, and hoo boy, I know it's free (so's Linux), but it's all kinds of terrible. The command-line tools it required for pre-commit review didn't work, so we tried post-commit (in a branch); while I could comment, the comments were hard to see and open after creating them, and I don't believe replies were supported. It also didn't support updating a changeset after review comments, something Collaborator did pretty seamlessly most of the time.

Another recommendation was Atlassian Crucible, which looked pretty good in their overview/screenshots, and we're already using a number of their OnDemand (now "Cloud") tools (JIRA bug database, Confluence wiki, HipChat), so I gave it a try. I first downloaded the Windows 64-bit version on my laptop, which installed easily (to give the Windows ecosystem credit, Windows installers are usually good), but it had trouble accessing my SVN repository. I poked around a little, but since I was eventually going to host it on a Linux server anyway, I figured I'd eliminate Windows from the equation and installed the Linux version. I must say, I was very favorably impressed with how easy it was to install (and that it didn't require a "blessed" Linux distro, working fine on Arch); unlike on Windows, Linux installers are hit and miss when they exist at all. In this case it was: unzip, make a data directory, run a script and tell it where to find the config, run a script to start the service, connect via web browser. However, I had the same SVN repo access issues.

My SVN repository is accessed via an SSH tunnel (svn+ssh protocol). Atlassian recommends using their provided jsvn script to analyze Subversion connectivity issues, so I tried that; same error. I cranked up the Java log level to "FINEST", which showed me the internal protocol data, and at some point it gave me an SVN E170001 "authorization failed" error. I cut and pasted the same commands to a local svnserve session I initiated with the same user, and—no error (same with the standard svn command-line utility). Logging svnserve is extremely hacky, but I set it up and saw that it was logging a different username than the SSH user—the client's local user, in fact. I added the command-line parameters to the log file and saw it was passing a --tunnel-user argument. Why? Since it used SVNKit for Subversion access, I downloaded and built the current trunk and fired up jdb. I didn't find out why it sent the local username rather than the SSH username, but I did find an undocumented property (similar to the documented ones) that I could use to set the remote SVN username: svnkit.ssh2.author (.username is the SSH user). Setting that let me in; in fact, I didn't even need to set it in FISHEYE_OPTS, since it seems to have been saved in the Fisheye/Crucible server user's ~/.subversion directory.
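For reference, setting the property looks like this (the value is a placeholder; as noted, once it's saved under ~/.subversion it need not stay in the environment):

export FISHEYE_OPTS="-Dsvnkit.ssh2.author=<svn-username>"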

That made everything happy, and I could see something much like Trac's "timeline" view for commits, search the repository, etc.; I created a test review and it worked decently well, except that it doesn't have anything like Collaborator's defects. It does have a way to mark a comment as a defect, but there's no way to close a defect, just unmark it (and use a convention like inserting the text [RESOLVED]). The linked ticket is #6 in Fisheye/Crucible tickets by votes, and the one above it is similar (a checkbox for the reviewer to acknowledge a comment); however, it's 2.5 years old (the other ticket is marked as duplicating it, and it's 7.5 years old), so I'm not getting my hopes up.

We do plan to get a license (we're on the 30-day trial now): it's only $10 each for Fisheye and Crucible for 1-5 users, and I believe that's perpetual rather than recurring, and includes a year of support. I upgraded to PostgreSQL a few days after installing, since the built-in "HSQLDB" isn't supported outside evaluation. This was also super-painless following the migration page, a pleasant surprise.

Learning Android opens doors

By David B. Robins; tags: Development, Mobile; Tuesday, December 23, 2014 23:32 EST

For a little while now I've been thinking I should learn a little about mobile development. The last time I considered it I was so put off by the thought of using Java that I gave up; but this time I gritted my teeth, installed Android Studio, and created a project. I went with a blank Activity, which I learned is sort of like the fundamental UI building block of an Android project, and used the visual editor to add some controls: some static text, a button (originally "Go!" but it later became "Unlock"), and a multiline text area to be used for status/logs. Later, a static image (of holly, for Christmas) was added, and I'm sure I violated all good positioning practices to get it where I wanted it without losing the button.

From there I started reading about the Android support for BLE (Bluetooth Low Energy) and added basic scan code to the onCreate handler and a handler for the button click event. Originally I connected on the button click event; eventually I did that in onCreate and had it send the authentication message (PL_RoomAuth) and modified the button to (1) be enabled only when connected, and (2) send a PL_Unlock message to unlock the door when clicked. I also updated the read-only log area with certain events: seeing a yLink (door), including the remote address (take that, iOS!); connection; disconnection (disables the unlock button, too); and button unlock. Honestly, it wasn't much trouble at all; a fun little project: Java's a lot less painful than it once was.

Now obviously it's a proof of concept. There are buckets of things it doesn't handle that the real guest app does: it probably doesn't do backgrounding well; the UI isn't pretty; it doesn't handle logging in to our cloud site; it's not very flexible about which doors it will unlock. Those are all easy to fix (and rather tedious); this is just "Look, I can unlock doors with Android!", and a "my first Android app". Am I likely to want to work in mobile development in future? No, I don't think so; but I can do it if pressed, and if I see a need it's easier for me to whip up an app than it was before.

This experience also validates my preferred method of testing candidates: I use a problem that requires that they both consume and produce an interface; in the specific case for C and C++, the interface consumed is the Python C API (which tends to make people think it's a Python problem when it isn't). Learning and implementing new interfaces is a huge part of programming, and once the skill is developed it allows for rapid expansion into new areas. Even embedded programming is far more about new interfaces and constraints than it is about hardware.

Edit: I have made the project source available on GitHub.

Fix one thing, break another

By David B. Robins; tags: C++, Development, Embedded, Bugs; Wednesday, November 19, 2014 15:58 EST

I've been doing a bunch of work with the nRF51822's on-chip UART this week; I found an issue with the vendor's code and contributed a fix back. Their code has generally been good, and I imagine that closing down the UART while the other side is transmitting, then expecting to be able to reopen and pick up where it left off, isn't something people usually do. I had suspected some random UART issues but hadn't isolated them yet, so I wrote a test that:

  1. sent an incrementing stream of bytes from one chip to the other and verified it arrived with no gaps;
  2. verified that the other side saw the same first number (had to make some RTS/CTS fixes there);
  3. went to full duplex;
  4. had one side shut down the UART randomly, wait, and re-enable, and verified nothing was lost (this is where I had to make the library code fix).

I also found an interesting case where a fix exposed a bug: I fixed a function that was missing a return value on one path (which you'd expect GCC with -Wall to warn about, but it doesn't, at least if the function has branches):

bool YlinkB::FSendMboxMessage(yikes::message::Header const &msg, size_t cb)
{
   if (!m_bfoMboxOut.FAppend(reinterpret_cast<uint8_t const *>(&msg), cb))   // this appends to the output queue
      NotReachedReturn(false);
   MboxWrite();  // this says "start writing until out of data", using an ISerialUser callback which reads from the above queue
   return true;  // this was missing
}

Elsewhere I had this code:

yikes::message::LL_Power msg;
APP_ERROR_CHECK_BOOL(FSendMboxMessage(msg, sizeof(msg)));

while (FMboxBusy()) // don't close until message is sent
   __WFE();  // "wait for events" instruction, basically uses less power than pure busy wait

Serial::Close();
sd_power_system_off();

APP_ERROR_CHECK_BOOL is basically an assert, and the while loop wasn't there before. When FSendMboxMessage fell off the end of the function, it happened to return 0, and the assert fired. However, the assert's infinite loop allowed the queued LL_Power message to finish sending, and since it was looping forever it behaved much like the chip being powered off; i.e., it seemed like it was working properly.

When I added the return true, the assert no longer fired, and (no while loop yet) Serial::Close shut down the serial port before the LL_Power message finished sending. I added the while loop to wait for transmission to finish. (I could have put it into Serial::Close itself, but that wasn't the right thing; it's OK for it to shut down the port with data to send in the general case; it can resume later.)

How many CTOs does it take to change a light bulb?

By David B. Robins; tags: Business, Management; Sunday, November 16, 2014 15:25 EST

It's been way too long since I've written here; 'tis the nature of blogs; I do have a number of technical matters to write about in my queue, mostly on embedded topics; they're next.

At my former employer, Exacq (a Tyco company), I always had confidence in the competence, both technical and managerial, of those I reported to. The director above us seven area managers (API/integration (me), client, server, enterprise, web/mobile, IP cameras, analog cameras) was technically competent and, I thought, a good manager; he kept an eye on what was happening and was quick to stop by or email if something seemed out of place, but he would accept a reasoned argument and didn't need things to be his way just because he was boss. He had written large parts of the code base and still maintained a few modules. (Am I saying you have to have engineering experience to manage engineers well? No, I didn't say that; but it does seem to correlate well, based on experience and what I've heard.) His boss, the VP of Engineering, was an engineer and also knew his stuff, although understandably didn't get to do much hands-on engineering any more. There was also clear delineation of responsibility: if something affected API it came to me, and the decisions were mine to make. Obviously I was accountable for them and would get pulled up if I tried to do something that didn't align with the business's goals, but I was left alone to plan and progress toward those goals (and to do a few side things like brown-bag technical lunches and building an automated test suite).

Note the number of directors in the software development organization (for all Exacq's products): 1. (Number of dev managers, which would be equivalent to director at smaller companies, for Word? 1.)

Based on that, we have about 3x too many ("senior") directors in my current org; if you also factor in the ratio of ICs in the org, it bumps to 9x.

Resurrection on demand

By David B. Robins; tags: C++, Development, Architecture, Embedded; Sunday, July 27, 2014 17:07 EST

It's been a few months since I've written an entry here, and that's mainly due to the job change to Senior Development Manager at Yikes. But I've been writing a whole lot of C++ and Python using technologies and techniques new to me and have plenty to write about on sundry topics.

Let me first describe the system we're building here at Yikes (and by "here" I mean scattered all over North America). The near-term goal is a hotel check-in system whereby a guest can bypass the front desk, go directly to their room, and unlock it with their phone (I know, I know, first-world problems); longer-term, the plan is to expand into the proximity space generally. To make this system work there are several components (the existence of none of which is secret in itself, just don't expect to see any schematics; but you didn't, did you?): a mobile phone app, a cloud API/database server, some on-property ("property" is jargon for "hotel" here) small form-factor computers, and a device with radios, affectionately called a "yLink", that resides in each door and controls the lock. It is on this yLink we are going to focus, since although I've been architecting various parts of the system, the yLink is where I've been writing code.

This device consists of a custom board with two chips, each of which has Bluetooth (Low Energy or "Smart" 4.0) connectivity responsibilities: the first, or "A", chip talks to the other hotel systems, and the second, or "B", chip talks to a guest's phone and the door lock, unlocking it if the right authentication is received.

The device runs on a battery (besides the physical difficulty of getting a power cable into a door, there are fire and liability concerns). Obviously, the longer this battery lasts the better (due to both cost and the inconvenience of staff having to replace it). A measurement a month or so ago showed it lasting only two days, when we're hoping for something more along the lines of 18 months; but I have reason to believe that measurement is spurious and am trying to get details of its circumstances, since I've been running my development yLink 8-10+ hours a day for weeks off the same battery.

Nonetheless, wherever it's at now, if we can do a little work for a large return in power saving, we should do it. And it struck me that while there are reasons the "A" chip needs to run continually (it has to scan for advertisements and keep a clock, which it already does in a low-power mode), if the "B" chip has no room authorizations pending (phones to authenticate and then allow in if matched), it can shut down completely. I read through the relevant sections of the (Nordic) API and the nRF51 chip manual, checked that we had "A" connected to a reset pin, and started experimenting.

To give the chip a chance to finish current operations, the per-second "tick" handler checks whether anything is pending and then invokes the API to go to System OFF mode, sd_power_system_off. I later realized that the "A" side would need to know the chip was shut down (so it didn't offer needless resets and so it could sync communications), so using an existing messaging system I had "B" send a message first. "A" can set a flag and then knows that when it next needs to send a message to "B", it needs to pulse the reset pin low for 100 µs and wait a couple hundred ms for boot-up first.

Obviously… it's critical that "B" check for messages on wake-up, and process them, before it finds it's not busy and decides to sleep again!
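In outline, the "B"-side tick handler does something like the following (FBusy and SendShutdownNotice are hypothetical stand-ins; the drain-then-power-off tail is the same sequence shown in the "Fix one thing, break another" entry above):

// Called once per second; if no work is pending, tell "A" we are going
// away and enter System OFF. "A" resurrects us later via the reset pin.
void OnTick()
{
   if (FBusy())            // room authorizations pending, messages queued, etc.
      return;
   SendShutdownNotice();   // queue the "shutting down" message to "A"
   while (FMboxBusy())     // let that message finish sending
      __WFE();
   Serial::Close();
   sd_power_system_off();  // System OFF; execution resumes only after reset
}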

We have employed the services of a contractor to do proper power profiling for the device in various modes; it may turn out, for example, that booting up is expensive and usage patterns show that it makes sense to wait for, say, 2 minutes of no activity before shutting down rather than doing it immediately when not busy. As another interesting note, we had been building the code (using Keil's ARMCC) at debug optimization (-O0); I added a debug flag to our build (we use SCons), and with it off the build enables optimization (-O2): code size dropped about 30% on each side. It remains to be evaluated what effect this has on power consumption. At some point I may try building with GCC, but I've heard that Keil does much better, and we're still well under the free code size limit (32k), so it's not a high priority.

Content on this site is licensed under a Creative Commons Attribution 3.0 License and is copyrighted by the authors.