Monday, May 22, 2006

Flexibility, a blessing?

The electronics industry at large sort of revels in the fact that it's flexible and ever-changing, that we can build products that, with the click of a mouse, turn into something completely different. I sometimes wonder, though, if this is entirely a good thing, at least for the blokes who make these things.

My argument is simple. Ninety-nine percent of electronic products today contain some form of programmable element (usually a microprocessor), which sits on top of other hardware. While flexibility in the form of a re-flashable microprocessor certainly helps everyone (most importantly, the end user), the MPU has traditionally relied upon a layer of bedrock: the hardware on which it runs, which remained relatively unchanged over a product's life. These days, though, programmable logic has changed much of this, and we have systems where the hardware, previously an impermeable and immutable layer, now shifts around like quicksand beneath developers' feet. This hardware-du-jour phenomenon has made designing embedded systems doubly difficult, with constant bickering between the hardware guys, the software guys and (oh horrors!) management. Instruments designed with such an arrangement must now additionally tag all data with a 'hardware version' number, sometimes one for each chip the data passes through.
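To make that tagging concrete, here's a minimal sketch in C of the kind of per-record header such an instrument might carry. Every field name below is invented for illustration, not taken from any real design:

    #include <stdint.h>

    /* Hypothetical record header: each programmable stage that touches
     * the data stamps its own version, so offline tools can tell which
     * hardware-of-the-day produced a given record. */
    typedef struct {
        uint32_t magic;           /* sanity marker for framing */
        uint16_t board_revision;  /* the fixed PCB, the one true bedrock */
        uint16_t fpga_bitstream;  /* version of the FPGA image loaded */
        uint16_t mpu_firmware;    /* version of the MPU/DSP code */
        uint16_t payload_words;   /* number of 32-bit samples that follow */
    } record_header_t;

One header per record sounds cheap enough, until you realise those version fields have to stay in sync across every tool that will ever read the data back.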

On the other hand, it can be a good thing from the "more data the merrier" school of thought. No, I don't mean more work; I only mean that the more data you preserve (and the closer to the sensor you preserve it), the better you can process it offline. The only trouble, of course, is bandwidth.
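To put numbers on that bandwidth problem, here's a back-of-the-envelope sketch; the figures are invented, purely illustrative:

    #include <stdio.h>

    int main(void)
    {
        /* Hypothetical front end: a 14-bit ADC at 100 MS/s, with
         * samples stored as 16-bit words. Numbers are made up. */
        const double sample_rate_hz  = 100e6;
        const double bits_per_sample = 16.0;
        const double raw_bps = sample_rate_hz * bits_per_sample;

        printf("Raw stream off the ADC: %.1f Gb/s\n", raw_bps / 1e9);       /* 1.6 Gb/s */

        /* Decimate by 64 in the FPGA before the data leaves the board,
         * and suddenly an ordinary link can carry it. */
        printf("After 64x decimation:   %.1f Mb/s\n", raw_bps / 64 / 1e6);  /* 25.0 Mb/s */
        return 0;
    }

The closer to the sensor you ship raw samples, the less you commit to in hardware, but 1.6 Gb/s is a lot to ask of any off-board link.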

Debugging such an application can invoke some rather extreme displays of hair-pulling, mostly when you need to figure out whether the problem lies in software or hardware. I have a hard enough time doing it alone; I wonder how the folks at companies with separate 'hardware teams' and 'software teams' manage to see eye to eye :)

In the end, though, decisions postponed in the name of "flexibility" are usually postponed out of sheer laziness, and out of hope that someone else will pick up the slack. Eventually no one does, and another deadline whooshes by. When will it stop??? :-(

2 comments:

Bharath said...

Well written. Have you tried using any of the on-chip logic analyzers out there to debug? In another life, I used the one by Synplicity (forget the name) and thought it was pretty decent. Obviously, the pain of debugging FPGA logic (vs s/w) is getting a snack from the vending machine while it does place-and-route (PAR) :P

One of the things you should consider posting about one of these days is the debate over network processors vs FPGAs for networking applications. I haven't really grasped the advantage of one over the other. Since the larger vendors (Cisco/Juniper) tend to go ASIC, what the smaller guys do gives a sign of who is winning the battle. Looks like vendors are adopting NPs in increasing numbers; I think it has to do with the fact that many networking people come from a s/w background. If you can come up with a comparison, that would be great.

Jim said...

Yes, until the eval license expired... I must convince management to buy a copy of ChipScope, but I fear it's too late now (I'll probably be done and out of there in a bit). The endless PAR runs are not made easier by the increasing laziness on the part of tool programmers. ISE 8.1, for example, leaks memory each time a PAR run completes (watch the system resources).

Not having actually used FPGAs for networking apps, I can only guess at the advantages by extrapolating from what I'm used to (i.e., DSP apps), but I'll try. Thanks for the idea!