Trick or Treat

It is not yet Halloween, but it's coming. This week I resurrected an old trick for a client and thought I would share it here as a technical/tutorial contribution instead of my usual rhetoric.

A great deal of engineering is capturing a process so that it can be modified and repeated to achieve a desired result. In electronics development, that generally means a series of invocations of various EDA programs of different sorts and from different vendors. Each of these invocations works on what is referred to as “source code” and creates some intermediate representation of that code, which is then fed into the next EDA invocation in the process, and so on.

Along with the source code there is generally a tool-specific EDA setup file that specifies how the process should be completed. Typical source code might be Verilog, VHDL, C++, SystemC, SystemVerilog, etc. The method for capturing the process might be a scripting language such as tcsh, bash, perl, python, tcl, etc. All of these languages have multiple methods for procedurally configuring the source code or the scripted process to achieve variations on the result from the same source code.

However, in many cases, the setup files for the actual EDA tools do not support variations in their invocation within the same setup file.

In fact, different setup files are often the expected method for achieving variation. So, the scripting language is often called upon to choose from a selection of setup files based on a variation that only the script has the ability to understand. By selecting a setup file consistent with the desired configuration, the script modifies how the EDA tool interprets the source code, and the tool’s contribution to the variation in results is realized.

This often leads to a number of similar and somewhat redundant setup files that need to be maintained. Generally, the difference between two setup files is less than 50% of the content of the file. It could even be just one or two out of hundreds or thousands of lines of setup information.

This leads to a process maintenance nightmare where one desired change in setup that is common to all of the various setup files requires that they all be modified and verified to validate the process.

The trick to simplify the redundant maintenance of setup files is itself simple: use one of the methods built into the source languages listed above, the “text preprocessor.”

Specifically, I use the ‘C’ code preprocessor invoked by the command:
cpp -E -P -D<variationMacro1> -D<variationMacro2> … <setupFile> -o <temporarySetupFile>

This allows all of the ‘C’ preprocessor constructs to be used in the setup file. The scripting language can invoke the ‘C’ preprocessor as shown above and then pass the temporarySetupFile to the EDA tool.

The only issue arises when ‘C’ preprocessor constructs conflict with the syntax of the tool setup file. There are several ways to get around this, but I find that using the ‘sed’ editor before and/or after “cpp” allows for management of ‘cpp’-sensitive characters like ‘#’, ‘/*’, and ‘//’.

For example, replacing ‘#’ with ‘##’ in the setup file before ‘cpp’, and then reversing the replacement after, preserves a ‘#’ character that might be needed in the setup file. Replacing ‘//’ with ‘%’ or some other special character keeps ‘cpp’ from interpreting what follows ‘//’ in the setup file as a comment. These edits will not work if the original file has ‘##’ or ‘%’ sequences that should be preserved, but I have found that, by understanding the syntax of the setup file and the syntax of ‘cpp’, it is generally not difficult to work around any conflicts with some innocuous character substitution. And if ‘cpp’ does not meet your needs, have a look at ‘m4’. I have never had to go there, but if you do and it works out, then maybe you can write the next tutorial.

This trick can be used in more processes than just EDA development. It can be used in any text-based setup file, source code, template, etc. to create customized variations based on ‘cpp’ constructs.

So, there is the treat.

Eating an Elephant

Everyone knows how to eat an elephant.

Turns out you can eat a whale, a cow, or a chicken the same way and you don’t even have to know which animal it is.

Consulting is an interesting business, at least mine has been.

Sometimes you get to walk up to a clean whiteboard, capture the project goals and help a company design a whole new product. Maybe that’s rare for others, but I enjoyed that type of work all of my employed career and even a couple of times since beginning consulting.

At times, consulting work presents a problem that is in the middle or at the end of the development cycle, and as a consultant you don’t necessarily get the opportunity to influence, or even fully appreciate, the big picture. Often, to be efficient, a consultant must identify only as much of the context of the problem as is needed to deliver the solution the client has requested. In these scenarios the successful consultant needs to be able to quickly understand the environment and design practices of the client, identify the discrepancies, and propose or implement an improvement. More often than not, the challenge of the task is not the solution, but correctly identifying the optimum scope of the problem.

Lately, I have been providing assistance in ASIC emulation, also called rapid prototyping. In these cases, the actual product is generally near the end of its development cycle. It is generally a large, complex system with multiple processor cores, interfaces, memory systems, etc. There is generally firmware and hardware involved, and an elaborate hardware/firmware/software design flow and integration. The focus of this type of assignment is rapidity. It would be impossible to be cost effective while trying to understand the entire scope of the project. So the consultant must quickly learn the environment and the design flow, and only enough of the product details to assist in the process improvement as assigned.

His best tool is experience, which allows him to quickly recognize and adapt to variations in process. For example, most ASIC design teams use version control, and most currently use SVN, yet they all use SVN differently to achieve similar goals. An experienced consultant knows the goals, recognizes SVN, its variants, or other revision control methods, and quickly understands and adapts to the methods and variations used by the client. The same is true for design flow scripting, the suite of EDA tools, and even the organization of the client’s design resources and systems network.

Consultants don’t always get to eat the whole animal and may not even know what kind of animal it is. A consultant just does his part in taking bites, and a good one takes bigger bites.

Rules of Debug and Testing

This week I am debugging code. I am doing some pro bono work for a customer that asked for a test feature to be added to a product so that performance can be evaluated and improved. Rule #1: you can’t improve what you can’t measure. The customer has been good to me in the past and I am expecting that the result of the evaluation may lead to new business. So, I am writing a SPI interface that will allow the product to offload a ridiculous amount of raw signal data that can be fed into an ideal system model, evaluated, and compared to the product’s performance. Rule #2: compare multiple interpretations of a system and validate each individually. There really isn’t a “golden” model; there are just multiple interpretations, and verification is a process of evaluating consensus. Is the implementation wrong, is it the test, or is it the presumption of desired behavior?

I spent an hour and described, in a document, the interface and the data stream format, which will be captured by SPI-to-USB adapters connected to a PC and stored as raw data files on an HDD. Then I wrote the code in a couple of hours and installed it into the code database for the product. Since I architected the code database and already had a SPI slave module in the project library, all I had to code was the state machine that captures the data, formats it, and FIFOs it to the SPI module. The stream is real-time, and the capture rate can be either slower or faster than the serial rate, so the state machine and the data format have to handle both underflow and overflow conditions. The data captured does not match the transfer size of the SPI, so the stream also has to be “packed”. And, since experience with the high speed adapter in the past has shown that transfer errors can occur when pushing the rate limit, a CRC is added to the stream at “block” intervals to protect data integrity.

Now it is time to debug my code, and that in any effort is N times more work than the concept or the implementation phase. Rule #3: schedule 2 units of time for concept, 1 unit of time for implementation, and a minimum of 3 units of time for testing. When I was in charge of large ASIC designs, that often meant 2 to 6 months in concept, which included a lot of detailed documentation, then 1 to 3 months writing code, followed by 6 to 18 months of testing. What I find is that if you skimp on the concept phase, the implementation phase is longer due to revisiting. And if you skimp on the last phase, which, once they hear that coding and initial testing are complete, is generally what upper management and marketing will push you to do, the product will fail miserably in the field. During the first phase, design for test was always part of the product documentation, and during the second phase I also allocated resources to developing a test plan for the product.

On this particular project the concept wasn’t totally new, so I did skimp on phase 1 by leveraging past experience, and so far I have gotten away with it in phase 2. However, because phases 1 and 2 went so fast, I have a feeling phase 3 is going to take longer. This is not because I shorted the first 2 phases but because this particular rule is not going to scale to the smallness of this effort. In other words, I am probably looking at a 1-2-6 ratio where 1 unit is half of a day. Fortunately the original product already has an extensible design-for-test architecture and testbench, so adding the new feature to that part of the database was no big effort.

Rule #4: testing is an exponential problem. Thorough testing grows exponentially with the number of modes and states. For example, in this simple little add-on I have defined 8 modes for collecting the data, and there are the overflow and underflow conditions as well as several exceptions to verify. Exceptions are conditions that the state machine can’t control and that are outside the list of defined conditions and expected operation. These conditions are often the most difficult to test and verify. I generally break the verification process into 3 steps. The first is to get a basic mode operational. I pick the simplest scenario and write a test to find the easiest implementation failures, which usually include syntax and interface issues as well as initialization, idle, and return-to-idle type issues. This is where I am after about a day of testing.

The second step is a thorough test of defined modes and scenarios. Ideally these tests can be automated in both execution and evaluation. In an ongoing system development these become known as the regression test, which is run on a snapshot of the database on a repeating schedule to confirm that verified functionality remains stable as the implementation evolves with features and corrections. This step is usually fairly quick to develop but time-consuming to execute. If you have the resources, this can be parallelized, even with the other two steps.
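The regression idea can be sketched in a few lines of script (the directory layout and test names here are hypothetical; in a real flow each test would invoke a simulation and exit 0 on pass, nonzero on fail):

```shell
# Create a couple of stand-in tests; in practice each would run a
# simulation against the database snapshot and check its results.
mkdir -p regression
printf '%s\n' 'exit 0' > regression/basic_mode.sh
printf '%s\n' 'exit 0' > regression/overflow.sh

# Run every test in the suite, log its output, and tally the results.
pass=0; fail=0
for t in regression/*.sh; do
  if sh "$t" > "$t.log" 2>&1; then
    echo "PASS $t"; pass=$((pass+1))
  else
    echo "FAIL $t"; fail=$((fail+1))
  fi
done
echo "passed $pass, failed $fail"
```

Because each test is an independent script with its own log, the loop is trivial to schedule from cron and to split across machines for parallel execution.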

The third step I refer to as stress testing. In this step the boundaries of operation are explored for both proper operation and proper handling of exception scenarios. It usually involves at least two types of testing: directed run-up to a boundary, and randomized testing. Where boundaries of operation are known and have defined limits, tests that run up to the limit, cross the limit, and return to within limits are written specifically. However, with many modes of operation, many defined limits, and many externally controlled inputs, it is often difficult to prioritize the testing of every possible exception condition. It may even be difficult to test every possible proper operational condition. This is where randomized testing can be applied.

Randomized testing used to be the Holy Grail and was the part of testing that would get cut when marketing and budgetary pressures pushed. Now, however, due to the exponential rule, it is replacing directed mode and boundary testing. This is what has driven the SystemVerilog language and the formal verification methodologies. The proper term might be constraint-based testing, and the difficulty is measuring coverage and tracing failures to root cause, especially in fault-tolerant systems.

Even if you do a thorough job of testing, the product will fail in the field. Rule #5: all products have failures lurking, regardless of the amount of testing or experience in the field. The concern is the likelihood, frequency, severity, and recovery of the failures. I offer the following wager to anyone: $1 million USD to anyone that can prove a product has no failure modes, as long as they will match the wager if I can prove it has one. The value of a product can best be measured by the thoroughness of its errata sheet. We even see this today in people comparison shopping by looking at online customer reviews (really customer complaints). If a product has well documented complaints, we can evaluate them and determine if those shortcomings affect how we are going to use (value) the product. Rule #6: testing and documenting errata is often more valuable than fixing errata.

So, am I doing all three steps of testing, including randomized testing, on my SPI feature? Probably not; what do you expect for free? I am currently testing the basic mode. When that is done I will test all 16 mode scenarios, I might add it to my regression suite for the whole product, and I will look at the most obvious exception modes, primarily transfer truncations due to unexpected negation of the SPI slave select lines. I will leave randomized testing to my client in the field, and he and I both will benefit from a re-programmable FPGA instead of a multi-million dollar ASIC investment. Rule #7: product integrity is limited by product value.


I Need a Wake-up Call

Buy low and sell high is one of those obvious sayings in the investment field. And then there is the law of supply and demand for price setting. Right now I have a supply of time and not a lot of demand. So my time, in theory, is a little cheaper, and I am investing it on three fronts. First, I am trying to get the word out that I am available for contract, in other words, marketing to increase demand. This post is just such a contribution. Second, I am working with a past client that has some ideas but is not ready to pull the funding trigger, so I am doing a little pro bono work in another effort to increase demand. And third, I am working on an open source based project that might turn into an entrepreneurial product if I can ever get out of the quagmire that is open source (more on that later).

So, which of these investments will provide future income? If future performance matches past experience, then none of them will. To date, the only means of gaining a new client engagement has been someone I have worked with in the past knowing of an opportunity and strongly suggesting me for the effort. Even though I search job sites, cold call managers and companies, have presented at a conference, and have talked at length with agency recruiters, none of these efforts has ever produced a new client engagement. One of my most fruitful engagements, for both me and my client, came when a past neighbor, who knew me because my kids babysat his kids, suggested to a client he worked with in the RF field that I might know something about FPGAs.

I am somewhat reminded of what Mona Lisa Vito said at the end of one of my favorite movies: “You know, this could be a sign of things to come. You win all your cases, but with somebody else’s help. Right? You win case, after case, and then afterwards, you have to go up to somebody and you have to say ‘thank you’! Oh my God, what a nightmare!” It may not be a nightmare, but while you are waiting for your next win, and the help that you depend on to get that win, it can be a bit scary when you feel that it is not totally in your control. So, to all of you who have recommended me and continue to recommend me, thank you again. And to you and everyone else, please send me a new wake-up call soon, before my dream job gets to the nightmare stage.

Two’s Company, but what we need is a Crowd

My next observation after attending EELive this year.

Crowd source funding is popular. Maybe it was just the choices I made in personalizing my conference schedule, but it appears I need to add Kickstarter to my bookmarks right alongside my favorites of Google, Expedia, Amazon, and Wikipedia. I wonder if I can get as good at using it as I have the others.

Trendy marketing really does make the difference. With Kickstarter you start with a product message, an entertaining, visionary, and maybe informative video (not too much, because vagueness creates mystery), and a prototype (maybe just a mockup; video special effects make it look like a prototype). Marketing now precedes product development, and with marketing you can convince many to risk a little instead of trying to land one big investor who probably wants to see a history of income growth and a two-dimensional product roadmap before writing the check. A Kickstarter investor is a bit like a high-tech QVC shopper. He watches the video and hits the buy-now button. Except no sales representatives need to be standing by and no inventory needs to be sitting in a warehouse.

There are more technologies and ideas than products. Everywhere I looked I saw open source hardware, open source software, cheap and free and easily hooked to the internet. Websites that advertise hundreds of project ideas, development platforms, and cloud services. All in search of a product. Or are they in search of a profit? I don’t know. The product seems to be the development platform and the consumer is the developer. Case in point: http://www.sparkproducts.com This company initially tried a Kickstarter campaign for a WiFi connected light bulb socket adapter that used a cloud service to connect your lightbulbs to a phone application. They wanted more than 4000 people to sign up to buy these at $59 each and, surprisingly, may have actually gotten about half that. Apparently, though, for the other 2000 people they needed, $2000 was more than they were willing to pay to control all of the lights in their average US home from their cell phone.

With that campaign expired without full funding, the company took the WiFi guts out of their product and campaigned again on Kickstarter with just a WiFi development board at $39. They already had the design and manufacturing of these boards figured out. In fact, so do Texas Instruments, Microchip, Atmel, and several module companies. But they have a cool video and they open source everything, and that is attractive. So even though they only asked for a few hundred backers, they got over 5,000. So, the lesson is: you can’t get 2000 people to buy your Internet of Things (IoT) product, but you can get 5000 people to try to do it better.

And another lesson may be: sell the hardware cheap, give away the software, and get people to develop lots of products that depend on your free cloud service. Which is free just like Netflix streaming and LogMeIn were.

EELive – ESC 2014 Presentation

Here are the slides from the ESC 2014 presentation (ESC Slides). This project was completed under contract between Provident Systems and Advanced Microwave Products. Everything from the interfaces to the video and audio codecs, to the DAC of the transmitter and conversely the ADC of the receiver, was implemented in two Altera Cyclone III FPGAs, one for the transmitter and one for the receiver. This included all of the COFDM processing as well as encryption, framing, packetization, buffering, and data loss compensation for the delivery of the video, audio, and data.


Cheap, Cheap

Two weeks ago, I attended a conference, EELive 2014. Last fall I decided I should pursue some exposure to increase my network in hopes of finding new clients. I searched Google for “embedded systems conference” and found one called exactly that, ESC. I submitted a proposal to present and they accepted. They also changed the name to EELive. ESC was still embedded, pardon the pun, in the conference as a track.

So I attended for four days, handing out business cards and doing my best to schmooze. And I presented a case study of my COFDM transceiver work. Mine was the second-to-last session of the conference, delivered to a couple of dozen die-hards. Did I achieve some exposure? I hope so, but I also made a few observations about the industry in which I currently participate, and those may prove more useful than I had planned. I started writing a post with some of those observations in a somewhat random order, and now that it has grown too large for one, I will break it up into a week or more of shorter posts. Here is the first.

My first observation: hardware is cheap. I have been to conferences before and I have a closet full of backpacks, water bottles, and logo’d footballs. But this time I brought home 3 (could have been more) very capable hardware development kits as free SWAG. One is a low power Bluetooth dongle, another is a near field communications kit complete with a fairly good sized color LCD screen, and the third is a very capable 32-bit micro à la Raspberry Pi. I was convinced I had really scored some valuable stuff until I discovered what they all cost at their manufacturers’ websites. I had been reading about the Arduino and Raspberry Pi phenomena, but I did not realize that these were just the most publicized of a whole catalogue of cheap, very powerful development platforms.

Is software still cheap? Software engineers have generally earned less than hardware engineers, and I think that is still true. However, everything is now full of software code. So although a hundred lines of software code may still be cheaper to develop than a few hundred ASIC gates, there is a lot more demand for code than for gates. The gates market seems to be saturated while the code market is still hungry. And the software cost required to build a microprocessor-based product far exceeds the hardware cost.

The real market is ideas. I don’t know how much a good idea is really worth or how much one costs to develop but Google and Facebook buy a good idea for about a billion dollars or more just about every week. The amount they pay far exceeds the hardware or the software cost. What they seem to be paying for is just the idea. And not just the idea itself but the popularity of the idea. So more specifically the real market seems to be a popular idea.

Emulation vs Integration

Technology acronyms generally become jargon that loses its original meaning, or maybe takes on an expanded meaning that loses its original precision. A list of adulterated acronyms could be another interesting discussion, but for this piece I wanted to discuss the term ASIC. And, specifically, the letter ‘I’, which stands for integrated. ASIC is Application Specific Integrated Circuit and is supposed to differentiate from just an integrated circuit (IC); we could discuss that evolution as well, but again, for now let’s focus on “integrated”. Before “integrated” there was “discrete”, and integrated meant bringing all of those discrete circuit functions into a single package, sometimes called a monolithic device, to create a single unit of complex functionality. Through advances in design and manufacturing technology, that single unit of complex functionality has become exponentially more complex. There are primarily two ways that this complexity in design has been managed in the development process: abstraction and reuse.

Abstraction is the idea that complex functionality can be described at higher abstract levels and then synthesized into the fundamental “discrete” components that create the intended function. Levels of abstraction evolved from transistor level to gate level to cell or slice level and then sort of stalled at register transfer level (RTL). RTL is currently the most common level of abstraction, and efforts to evolve to higher levels have not really been successful. Examples have been behavioral level, system level, transaction level, and most recently high level. Yes, the current effort is just called high level abstraction (HLA). I guess if that is successful we’ll start working on VHLA. That’s supposed to be a joke for those who remember the progression from IC to LSI and on to VLSI, the terms that preceded ASIC as the popular name for what we colloquially call a “chip”. Nonetheless, these efforts to comprehend and describe more complex functionality have not progressed significantly in at least 20 years.

So, until HLA makes progress, the more significant method for handling ASIC complexity growth in integrated functionality has been reuse. Reuse, in its simplest definition, is in some ways a reversion back to discrete design. The difference is that the modern discrete is a much more complex and configurable building block than a transistor or gate, and discretes are now integrated into the design by way of a computer simulation instead of being physically wired up on a breadboard. Each building block is designed and tested as a unit, and often completely implemented down to its final physical form, before being “integrated” with other blocks into the final system on a single silicon die, or possibly multiple silicon dies in a single package. This design methodology is now commonly referred to as System on a Chip (SoC). The building blocks being reused are CPU cores, communication cores like USB, PCIe, and ethernet transceivers, memory management cores, etc. In SoC we refer to these reusable discretes as IP (intellectual property), which is yet another bastardization of a term, one that invites confusion with patent work.

Many SoC efforts get to a point where, due to the high cost and/or the long lead times of getting an SoC from concept to product, the SoC developer decides they would like an emulation of the SoC suitable for evaluation and verification by the next pipe stage in the development cycle. Basically a breadboard instead of a simulation, because breadboards are faster and facilitate verification in the intended application environment. For example, the SoC may contain a number of microprocessor systems that need firmware development, the SoC may communicate with another system whose software or hardware development could be started early, or some initial in-system validation of the SoC concept may be desired to improve confidence in the investment of time and money required to complete the product. For whatever reason, the decision to emulate the SoC seems to come late and is often a compromise.

My suggestion is to think about integration again instead of emulation. Use a fast prototyping methodology to build the system, the S, before the chip, the C. Then integrate the S on the C. Balance the value of an early prototype that closely matches the target product, the SoC, against the cost of supporting the limitations of the prototyping methodology. For example, FPGAs are often the basis of a fast prototyping methodology. However, FPGAs may be slower and have IO limitations. So, architect the SoC to work within these limits in the FPGA and scale to the capability of the ASIC technology. The idea is to get back to integration of the building blocks instead of trying to cram an emulation in as an afterthought.