
Rules of Debug and Testing

This week I am debugging code. I am doing some pro bono work for a customer that asked for a test feature to be added to a product so that performance can be evaluated and improved. Rule #1: you can’t improve what you can’t measure. The customer has been good to me in the past, and I am expecting that the result of the evaluation may lead to new business. So, I am writing a SPI interface that will allow the product to offload a ridiculous amount of raw signal data that can be fed into an ideal system model, evaluated and compared to the product’s performance. Rule #2: compare multiple interpretations of a system and validate each individually. There really isn’t a “golden” model; there are just multiple interpretations, and verification is a process of evaluating consensus. Is the implementation wrong, is it the test, or is it the presumption of desired behavior?

I spent an hour and described, in a document, the interface and the data stream format, which will be captured by SPI-to-USB adapters connected to a PC and stored as raw data files on an HDD. Then I wrote the code in a couple of hours and installed it into the code database for the product. Since I architected the code database and already had a SPI slave module in the project library, all I had to code was the state machine that captured the data, formatted it and FIFO’d it to the SPI module. The stream is real-time and the capture rate can be either slower or faster than the serial rate, so the state machine and the data format have to handle both underflow and overflow conditions. The captured data also does not match the transfer size of the SPI, so the stream has to be “packed”. And, since past experience with the high-speed adapter has shown that transfer errors can occur when pushing the rate limit, a CRC is added to the stream at “block” intervals to protect data integrity.
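
For illustration, here is a minimal SystemVerilog sketch of the receive-side check for that block format, the kind of check the PC-side analysis would run over the captured raw files. The 16-bit word width, the 256-word block length and the CRC-16-CCITT polynomial are assumptions made up for the example, not the actual stream parameters.

// Illustrative only, not the product code: verify the trailing CRC word of
// one captured block.  Word width, block length and polynomial are assumed.
package stream_fmt_pkg;
  localparam int WORD_W      = 16;
  localparam int BLOCK_WORDS = 256;  // payload words between CRC words

  // Bit-serial CRC-16-CCITT update over one word, MSB first.
  function automatic logic [15:0] crc16_step(logic [15:0]       crc_in,
                                             logic [WORD_W-1:0] data);
    logic [15:0] crc = crc_in;
    for (int i = WORD_W-1; i >= 0; i--) begin
      logic fb = crc[15] ^ data[i];
      crc = {crc[14:0], 1'b0};
      if (fb) crc ^= 16'h1021;
    end
    return crc;
  endfunction

  // Returns 1 if the last word of a captured block matches the CRC computed
  // over the payload words that precede it.
  function automatic bit block_crc_ok(logic [WORD_W-1:0] blk[]);
    logic [15:0] crc = 16'hFFFF;
    if (blk.size() != BLOCK_WORDS + 1) return 0;  // payload plus one CRC word
    for (int i = 0; i < BLOCK_WORDS; i++)
      crc = crc16_step(crc, blk[i]);
    return (crc == blk[BLOCK_WORDS]);
  endfunction
endpackage

On the transmit side the same running CRC would be computed word by word as the packed stream is pushed into the SPI FIFO, with the CRC word emitted at the end of each block.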

Now it is time to debug my code, and that, in any effort, is N times more work than the concept or implementation phase. Rule #3: schedule 2 units of time for concept, 1 unit for implementation and a minimum of 3 units for testing. When I was in charge of large ASIC designs, it often meant 2 to 6 months in concept, which included a lot of detailed documentation, then 1 to 3 months writing code, followed by 6 to 18 months of testing. What I find is that if you skimp on the concept phase, the implementation phase is longer due to revisiting. And if you skimp on the last phase, which, once they hear that coding and initial testing are complete, is generally what upper management and marketing will push you to do, the product will fail miserably in the field. During the first phase, design for test was always part of the product documentation, and during the second phase I also allocated resources to developing a test plan for the product.

On this particular project the concept wasn’t totally new, so I did skimp on phase 1 by leveraging past experience, and so far I have gotten away with it in phase 2. However, because phases 1 and 2 went so fast, I have a feeling phase 3 is going to take longer. This is not because I shorted the first 2 phases but because this particular rule is not going to scale down to the smallness of this effort. In other words, I am probably looking at a 1-2-6 ratio where 1 unit is half of a day. Fortunately, the original product already has an extensible design-for-test architecture and testbench, so adding the new feature to that part of the database was no big effort.

Rule #4: Testing is an exponential problem. Thorough testing grows exponentially with the number of modes and states. For example, in this simple little add-on I have defined 8 modes for collecting the data, and there are the overflow and underflow conditions as well as several exceptions to verify. Exceptions are conditions that the state machine can’t control and that are outside the list of defined conditions and expected operation. These conditions are often the most difficult to test and verify. I generally break the verification process into 3 steps. The first is to get a basic mode operational. I pick the simplest scenario and write a test to find the easiest implementation failures, which usually include syntax and interface issues as well as initialization, idle and return-to-idle type issues. This is where I am after about a day of testing.
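
To give a flavor of that first step, here is a small SystemVerilog assertion sketch aimed at the initialization and idle class of issues. The signal names (rst_n, state, fifo_push) and the IDLE encoding are hypothetical; a checker like this would be bound to the real capture state machine.

// Illustrative checker, not the product code: after reset releases, the
// capture FSM should sit in IDLE and push nothing toward the SPI FIFO.
module reset_idle_checker (
  input logic       clk,
  input logic       rst_n,
  input logic [2:0] state,      // hypothetical capture FSM state
  input logic       fifo_push   // hypothetical push strobe toward the SPI FIFO
);
  localparam logic [2:0] IDLE = 3'd0;

  property p_comes_up_idle;
    @(posedge clk) $rose(rst_n) |-> ((state == IDLE) && !fifo_push) [*4];
  endproperty

  assert property (p_comes_up_idle)
    else $error("capture FSM not idle immediately after reset release");
endmodule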

The second step is a thorough test of the defined modes and scenarios. Ideally these tests can be automated in both execution and evaluation. In an ongoing system development these become known as the regression test, which is run on a snapshot of the database on a repeating schedule to confirm that verified functionality stays stable as the implementation evolves with features and corrections. This step is usually fairly quick to develop but time consuming to execute. If you have the resources this can be parallelized, even with the other two steps.
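
Here is a minimal sketch of the “automated evaluation” half of that idea, assuming the DUT’s captured words and a reference model’s expected words have already been collected into queues; the function name and word width are illustrative.

// Illustrative self-checking compare: lets a regression run pass or fail
// itself instead of relying on a human reading waveforms.
function automatic int compare_streams(logic [15:0] dut_q[$],
                                       logic [15:0] ref_q[$]);
  int errors = 0;
  if (dut_q.size() != ref_q.size()) begin
    $error("length mismatch: dut=%0d ref=%0d words", dut_q.size(), ref_q.size());
    errors++;
  end
  for (int i = 0; (i < dut_q.size()) && (i < ref_q.size()); i++)
    if (dut_q[i] !== ref_q[i]) begin
      $error("word %0d mismatch: dut=%h ref=%h", i, dut_q[i], ref_q[i]);
      errors++;
    end
  return errors;
endfunction

A regression run then reduces to launching the tests, summing the returned error counts and reporting pass or fail, with no waveform viewing required.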

The third step I refer to as stress testing. In this phase the boundaries of operation are explored for both proper operation and proper handling of exception scenarios. This step usually involves at least two types of testing: directed run-up to a boundary, and randomized testing. Where boundaries of operation are known and have defined limits, tests that run up to the limit, cross the limit and return to within limits are written specifically. However, with many modes of operation, many defined limits and many externally controlled inputs, it is often difficult to prioritize the testing of every possible exception condition. It may even be difficult to test every possible proper operational condition. This is where randomized testing can be applied.
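
For the directed side, the run-up pattern itself is simple to express. The sketch below only generates and prints a sequence of burst lengths that approach, hit, cross and return from an assumed FIFO depth limit; in the real test each length would drive a capture burst at the DUT.

// Illustrative boundary run-up sequence; the depth limit is an assumption.
module boundary_runup_demo;
  localparam int FIFO_DEPTH = 512;  // assumed limit, for illustration only

  initial begin
    // Approach the limit, cross it, then return to within limits.
    int lengths [6] = '{FIFO_DEPTH-2, FIFO_DEPTH-1, FIFO_DEPTH,
                        FIFO_DEPTH+1, FIFO_DEPTH,   FIFO_DEPTH-1};
    foreach (lengths[i])
      $display("burst %0d: length %0d (limit %0d)", i, lengths[i], FIFO_DEPTH);
  end
endmodule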

Randomized testing used to be the Holy Grail and was the part of testing that would get cut when marketing and budgetary pressures pushed. Now, however, due to the exponential rule it is replacing directed mode and boundary testing. This is what has driven the SystemVerilog language and the formal verification methodologies. The proper term might be constrained-random testing, and the difficulty is measuring coverage and tracing failures to root cause, especially in fault-tolerant systems.
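
In that spirit, here is a minimal constrained-random stimulus sketch in SystemVerilog. The mode width, the FIFO depth and the distribution weights are assumptions invented for the example, not the real design’s numbers.

// Illustrative constrained-random transaction: burst lengths are biased
// toward the assumed FIFO boundary, where overflow/underflow behavior lives.
class capture_txn;
  rand bit [2:0] mode;        // one of the 8 capture modes described above
  rand int       burst_len;   // samples in this capture burst
  rand int       idle_gap;    // idle cycles before the next burst

  int fifo_depth = 512;       // assumed limit; not a random variable

  constraint c_len  { burst_len inside {[1 : 2*fifo_depth]}; }
  constraint c_bias { burst_len dist {
                        [fifo_depth-4 : fifo_depth+4] :/ 60,
                        [1            : fifo_depth-5] :/ 20,
                        [fifo_depth+5 : 2*fifo_depth] :/ 20 }; }
  constraint c_gap  { idle_gap inside {[0:64]}; }
endclass

module rand_demo;
  initial begin
    capture_txn t = new();
    repeat (5) begin
      if (!t.randomize()) $error("randomize() failed");
      $display("mode=%0d burst_len=%0d idle_gap=%0d",
               t.mode, t.burst_len, t.idle_gap);
    end
  end
endmodule

Functional coverage collected alongside the randomization is what then tells you whether the biased picks actually reached the boundary and exception cases.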

Even if you do a thorough job of testing, the product will fail in the field. Rule #5: all products have failures lurking, regardless of the amount of testing or experience in the field. The concern is the likelihood, frequency, severity and recovery of the failures. I offer the following wager to anyone: $1 million USD to anyone who can prove a product has no failure modes, as long as they will match the wager if I can prove it has one. The value of a product can best be measured by the thoroughness of its errata sheet. We even see this today in people comparison shopping by reading online customer reviews (really, customer complaints). If a product has well-documented complaints we can evaluate them and determine whether those shortcomings affect how we are going to use (value) the product. Rule #6: Testing and documenting errata is often more valuable than fixing errata.

So, am I doing all three steps of testing, including randomized testing, on my SPI feature? Probably not; what do you expect for free? I am currently testing the basic mode, and when done I will test all 16 mode scenarios. I might add it to my regression suite for the whole product, and I will look at the most obvious exception modes, primarily transfer truncations due to unexpected negation of the SPI slave select lines. I will leave randomized testing to my client in the field, and he and I will both benefit from a reprogrammable FPGA instead of a multi-million dollar ASIC investment. Rule #7: Product integrity is limited by product value.
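
That truncation exception can be captured compactly as an assertion. The sketch below uses hypothetical names for the slave select, bit counter and state signals, and simply requires that an unexpected mid-word negation of slave select sends the capture logic back to idle within a bounded number of clocks.

// Illustrative checker, not the product code: recover from a truncated
// SPI transfer.  Signal names, widths and the recovery bound are assumed.
module trunc_checker #(parameter int MAX_RECOVERY = 8) (
  input logic       clk,
  input logic       ss_n,     // SPI slave select, active low
  input logic [3:0] bit_cnt,  // bits shifted within the current word
  input logic [2:0] state     // capture FSM state
);
  localparam logic [2:0] IDLE = 3'd0;

  property p_truncation_recovers;
    @(posedge clk) ($rose(ss_n) && (bit_cnt != 0))
      |-> ##[1:MAX_RECOVERY] (state == IDLE);
  endproperty

  assert property (p_truncation_recovers)
    else $error("capture FSM did not recover from a truncated SPI transfer");
endmodule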


Are Worker Status Laws Affecting the Contractor Market?

I am curious and would love to hear from others with any experience regarding worker status laws. First, I have recently come across a couple of posts regarding a “crack down” on worker status violations, probably in response to increased utilization of contracted workers as a result of recently legislated burdens of maintaining employees. Worker status is a determination of whether a worker must be classified as an employee or an independent contractor. The determination dictates whether the employer or the contractor bears the burden of employment taxes and worker protection under the law. Determination has always been a little gray, and each tax and legal entity involved has its own methods. Further, any difference in determination between the worker, the workee* and the taxing or legal entity is open to interpretation, with the taxing or legal entity getting the final call.

Second, I am seeing a proliferation of so-called W2 contract positions. My understanding is that these positions are offered as opposed to 1099 positions. Basically, W2 means an employee, either employed by the workee or by an agency contracted to the workee, and 1099 means the position is a traditional independent contractor position (see more in the next paragraph); not many of these exist in my field anymore. This is consistent with my first observation that 1099 positions are being scrutinized and that few 1099 positions would pass scrutiny. Basically, the W2 positions are what I would call “temps”, and these do not pay well. Surprisingly, I have found that many recruiters don’t really understand the difference between a W2 and a 1099 position as they apply to the law and tax implications, especially the difference in cost and value to the worker and the workee. I recently tried to explain to one recruiter why the rate for a 1099 worker needs to be significantly higher than that for an equivalent W2 and why the cost to the workee is the same. He said he understood but still claimed he was not seeing any difference in what was being offered.

Third, I have also been asked if I offer a corp-to-corp arrangement, which, I discovered some time ago, is the term for exactly what I do offer. Provident Systems is an S-corp which employs and pays me with a W2 and bears all of the employer burden for me, the employee of Provident Systems. A customer of Provident Systems gets a W9 Taxpayer ID, as it would from any other vendor, as required by IRS guidelines. And clients do not even have to provide a 1099 to Provident Systems, since 1099s are not required to be provided to corporations, including S-corporations. So a 1099 position, as opposed to a corp-to-corp, would be one in which the worker’s W9 indicated the worker as a “sole proprietor”, who thus claims his income on a Schedule C of a personal 1040. In contrast, a corp-to-corp arrangement is not much different from buying goods and services as a company would from any other vendor. Again, this is a distinction I have had trouble describing to recruiters looking for providers of short-term skilled services for their clients. Part of their problem is that the recruiting company they work for is looking to be the agency providing the W2 and then reselling the worker’s time to the client. Trying to convert these opportunities to a corp-to-corp makes their compensation calculation difficult.

Finally, it is my understanding that since Provident Systems is an S-corporation which files a W2 for its employee and reports its income on a corporate tax return, there is no gray area on worker status with its clients, much like there would be no gray area for a worker provided to a workee via a third-party agency, such as the recruiting agencies. It just happens that in this case the “agency” providing the worker is also owned by the worker.

So, I would appreciate any comments on two areas. First, from HR professionals: are you seeing a push to only look for so-called W2 contracts, aka temporary workers, through an agency? And second, from HR or legal professionals: is my information above correct? Is an employee of an S-corp (aka the owner) completely or mostly exempt from worker determination tests with respect to the S-corp’s clients?

* “workee” is to be read as employer or contract client, depending on worker status. Please excuse me for this liberty in terminology.

I Need a Wake-up Call

Buy low and sell high is one of those obvious sayings in the investment field. And then there is the law of supply and demand for price setting. Right now I have a supply of time and not a lot of demand, so my time is, in theory, a little cheaper, and I am investing it on three fronts. First, I am trying to get the word out that I am available for contract; in other words, marketing to increase demand. This post is just such a contribution. Second, I am working with a past client that has some ideas but is not ready to pull the funding trigger, so I am doing a little pro bono work in another effort to increase demand. And third, I am working on an open-source-based project that might turn into an entrepreneurial product if I can ever get out of the quagmire that is open source (more on that later).

So, which of these investments will provide future income? If future performance matches past experience, then none of them will. To date, the only way I have gained a new client engagement is when someone I have worked with in the past knew of an opportunity and strongly suggested me for the effort. Even though I have searched (and continue to search) job sites, cold-called managers and companies, presented at a conference and talked at length with agency recruiters, none of these efforts has ever produced a new client engagement. One of my most fruitful engagements, for both me and my client, came when a past neighbor, who knew me because my kids babysat his kids, suggested to a client he worked with in the RF field that I might know something about FPGAs.

I am somewhat reminded of what Mona Lisa Vito said at the end of one of my favorite movies: “You know, this could be a sign of things to come. You win all your cases, but with somebody else’s help, right? You win case after case, and then afterwards you have to go up to somebody and you have to say ‘thank you.’ Oh my God, what a nightmare!” It may not be a nightmare, but while you are waiting for your next win, and for the help that you depend on to get that win, it can be a bit scary to feel that it is not totally in your control. So, to all of you who have recommended me and continue to recommend me, thank you again. And to you and everyone else, please send me a new wake-up call soon, before my dream job gets to the nightmare stage.

New Cool Project?

If you followed a few posts from last year you might have noticed that I started an open-source project, blogged about it 3 times and then nothing. Three things happened. First, I hit a couple of roadblocks-slash-speed-bumps that slowed me down and discouraged me. Second, I became fully contracted, with a bit of a commute, which took away time for side projects. And third, I recognized that NIOS running Linux was at best passé and at worst never relevant, given that FPGA SoCs come with ARM processors now and running Linux on a NIOS is probably not practical.

So now I am back to NOT fully contracted, ready to fight through roadblocks and speed bumps, and looking for something relevant. If you have been reading my recent posts, and by “you” I mean me, since based on the stats for this blog it is more of a personal journal than a publication, then you know I have been exposed to all of the open-source platforms at the ESC conferences as well as to crowdfunding and entrepreneurship. So, I am off on another side project until one of those three things happens (see above).

I am open to suggestions if anyone has a good one.

Call Me First and Get a Discount

If it is not obvious from my greatly increased blog activity, let this post make it clear that my most recent contracted engagement expired about a month ago. Yes, I don’t seem to do much, blog-wise or otherwise, when I am fully contracted. My most recent contract started last May and was originally for 6 months, to assist an ASIC company with the emulation of their next-generation network adapter product. They asked me to take the lead on the emulation process and, after that was established and they had hired a team to support the effort, they extended my contract to maintain the process until they delivered the emulation product to its first internal firmware development customer. Of course, just as we reached that milestone, the company’s management purchased a competitor’s effort for a similar ASIC product and canceled their own. So although the emulation effort was successfully on track, the product it was supporting no longer existed and a lot of good work was abandoned. Worse than that, a lot of good people were released, and of course the purchased competitor’s division is offshore, and so on. That’s the bad news. The good news is I have recent experience with Xilinx Vivado and Virtex 7, experience with state-of-the-art ASIC partitioning to FPGAs and exposure to extensive use of SystemVerilog code. If you are reading this you have probably already been contacted by me and directed to this blog site, which, along with my LinkedIn profile, is my primary web presence. If you haven’t, then call me, tell me you saw this first, and I will give you a 50% discount on your first 20 hours of contracted service.

Two’s Company, but what we need is a Crowd

Here is my next observation after attending EELive this year.

Crowdsourced funding is popular. Maybe it was just the choices I made in personalizing my conference schedule, but it appears I need to add Kickstarter to my bookmarks right alongside my favorites of Google, Expedia, Amazon and Wikipedia. I wonder if I can get as good at using it as I have the others.

Trendy marketing really does make the difference. With Kickstarter you start with a product message, an entertaining, visionary and maybe informative video (not too informative, because vagueness creates mystery) and a prototype (maybe just a mockup; video special effects can make it look like a prototype). Marketing now precedes product development, and with marketing you can convince many to risk a little instead of trying to land one big investor who probably wants to see a history of income growth and a two-dimensional product roadmap before writing the check. A Kickstarter investor is a bit like a high-tech QVC shopper. He watches the video and hits the buy-now button. Except no sales representatives need to be standing by and no inventory needs to be sitting in a warehouse.

There are more technologies and ideas than products. Everywhere I looked I saw open-source hardware and open-source software, cheap or free and easily hooked to the internet. Websites advertise hundreds of project ideas, development platforms and cloud services, all in search of a product. Or are they in search of a profit? I don’t know. The product seems to be the development platform and the consumer is the developer. Case in point: http://www.sparkproducts.com. This company initially tried a Kickstarter campaign for a WiFi-connected light bulb socket adapter that used a cloud service to connect your light bulbs to a phone application. They wanted more than 4000 people to sign up to buy these at $59 each and, surprisingly, may have actually gotten about half that. Apparently, though, for the other 2000 people they needed, $2000 is more than they were willing to pay to control all of the lights in their average US home from their cell phone.

With that campaign expired without full funding, the company took the WiFi guts out of their product and campaigned again on Kickstarter with just a WiFi development board at $39. They already had the design and manufacturing of these boards figured out. In fact, so do Texas Instruments, Microchip, Atmel and several module companies. But they have a cool video, and they do open source everything, and that is attractive. So even though they only asked for a few hundred backers, they got over 5,000. So the lesson is: you can’t get 2000 people to buy your Internet of Things (IoT) product, but you can get 5000 people to try to do it better.

And another lesson may be: sell the hardware cheap, give away the software and get people to develop lots of products that depend on your free cloud service. Which is free just like Netflix streaming and LogMeIn once were.

EELive – ESC 2014 Presentation

Here are the slides from the ESC 2014 presentation (ESC Slides). This project was completed under contract between Provident Systems and Advanced Microwave Products. All of the project, from the interfaces to the video and audio codecs to the DAC of the transmitter and conversely the ADC of the receiver, was implemented in two Altera Cyclone III FPGAs: one for the transmitter and one for the receiver. This included all of the COFDM processing as well as encryption, framing, packetization, buffering and data loss compensation for the delivery of the video, audio and data.


Cheap, Cheap

Two weeks ago I attended a conference, EELive 2014. Last fall I decided I should pursue some exposure to increase my network in hopes of finding new clients. I googled “embedded systems conference” and found one called exactly that: ESC. I submitted a proposal to present and they accepted. They also changed the name to EELive. ESC was still embedded, pardon the pun, in the conference as a track.

So I attended for four days, handing out business cards and doing my best to schmooze. And I presented a case study of my COFDM transceiver work, in the second-to-last session of the conference, to a couple of dozen die-hards. Did I achieve some exposure? I hope so, but I also made a few observations about the industry in which I currently participate, and those may prove more useful than I had planned. I started writing a post with some of those observations in a somewhat random order, and now that it has grown too large for one post I will break it up into a week or more of shorter posts. Here is the first.

My first observation: hardware is cheap. I have been to conferences before and have a closet full of backpacks, water bottles and logo’d footballs. But this time I brought home 3 (could have been more) very capable hardware development kits as free swag. One is a low-power Bluetooth dongle, another is a near-field communications kit complete with a fairly good-sized color LCD screen, and the third is a very capable 32-bit micro à la Raspberry Pi. I was convinced I had really scored some valuable stuff until I discovered what they all cost at their manufacturers’ websites. I had been reading about the Arduino and Raspberry Pi phenomena, but I did not realize that these were just the most publicized of a whole catalogue of cheap, very powerful development platforms.

Is software still cheap? Software engineers have generally earned less than hardware engineers, and I think that is still true. However, everything is now full of software code. So although a hundred lines of software code may still be cheaper to develop than a few hundred ASIC gates, there is a lot more demand for code than for gates. The gates market seems to be saturated while the code market is still hungry. And the software cost required to build a microprocessor-based product far exceeds the hardware cost.

The real market is ideas. I don’t know how much a good idea is really worth or how much one costs to develop, but Google and Facebook buy a good idea for about a billion dollars or more just about every week. The amount they pay far exceeds the hardware or the software cost. What they seem to be paying for is just the idea. And not just the idea itself but the popularity of the idea. So, more specifically, the real market seems to be popular ideas.

Fully Engaged Again

I am back. Back on my blog and back to work. I like to say engaged vs. idle instead of out of work and back to work, so I am back to fully engaged. About two months ago my network paid off and one of my colleagues and good friends connected me to an ASIC company that wanted to create an FPGA emulation of their next-generation offering. A big IC of 20+ million gates needs to be partitioned into a number of the largest and fastest FPGAs that Xilinx can provide. So I am back in the ASIC world while still in the FPGA world. It is a great opportunity to bring old experience to bear and gain new experience in the growing field of FPGA emulation. I am getting exposed and re-exposed to processes of version control, cross-functional teams, SystemVerilog, a suite of ASIC and FPGA verification and synthesis tools, etc. I am also working most of the time back in an office (cubicle) environment. More on that later. So for those in my network who were pulling for me to get a new assignment, thanks for your support, and keep a lookout for new work, as this assignment won’t last forever.

What makes you an expert?

In college I studied electrical engineering and focused on microprocessor programming. I learned the Fortran, Pascal, COBOL and C programming languages as well as assembly code for the MC6800 and MC68000 microprocessors. I interned as a Fortran programmer at a power industry consulting firm. When I graduated I had offers from several great companies; I graduated when EE demand was probably at the highest it has ever been. I had two top choices. One was Trane, an HVAC company that wanted to start using microprocessors to control commercial HVAC equipment. They were ready to hire me straight out of college as an expert to help start their very first solid-state controls department. It would have been a fantastic opportunity to be considered an expert right out of college and to introduce a technology into an industry.

Instead, I chose an offer to become an IC designer with Delco Electronics, an automotive company. Integrated circuit design was easy to identify as a growth career, and I really liked the idea of being involved with cars. I had no idea how to design an IC, but they sent me to a one-week crash course at the University of Waterloo in Canada. The course was an in-depth study of a new design technology called VLSI and was based on a relatively new book authored by Mead and Conway. I went with one other NCG (new college graduate), and when we came back we were two of the experts in this new methodology.

After a few design cycles using the VLSI methodology, I discovered something called Espresso, which replaced Karnaugh maps with gate-level optimization of logic expressions. Karnaugh map experts could not initially see the advantage, but as soon as Espresso started producing more complex and more efficient logic gate structures, which resulted in lower-cost ICs, logic synthesis experts were sought. Soon after discovering the logic synthesis process, we discovered that a company called Synopsys was selling software that not only optimized logic but entire state machines, and was using the Chapter 8 code from the Verilog logic simulator, which we had been using for verification. Most people today don’t even realize that Verilog was originally a multi-state gate-level simulator for schematically entered gate-level designs, and that the part of the language that has become well known for its ability to describe hardware was originally Chapter 8 of about 25 chapters in the language reference manual. Chapter 8 was originally intended for describing the tests applied to the logic, something known today as a testbench. Synopsys wisely chose a subset of the Chapter 8 code to be their hardware description language, as many testbench designers were already familiar with the syntax. I had designed some of the testbenches for the ICs I was involved with and was very familiar with Chapter 8 code, so voilà, I was an expert.

Logic synthesis begat standard-cell-based design, which required automatic placement and then automatic routing (APAR). That’s right, they did not initially arrive at the same time, and I actually manually routed designs that had only been automatically placed in rows with routing channels in between. One design like that was enough. Eventually we had a full front-to-back synthesis and APAR process with clock insertion, static timing analysis, a code coverage process and an Electronic Design Automation (EDA) department to keep up with the constant new releases of all the software from EDA companies. I was an EDA pioneer, and pioneers who survive to the destination are called experts.

So, after participating in the development of an EDA process at Delco Electronics and at the same time helping with the design and implementation of automotive electronic controls, I accepted an offer to take a position at Ford’s electronics division. I was sought by them as both an EDA and an automotive expert. They had not yet adopted a complete EDA process and were embarking on a new powertrain microcontroller. They recognized that new IC architectures and the EDA process could leverage each other to create more cost-effective products. This synergy between the design and the EDA process got the moniker ASIC development. A bit of a misnomer, but then almost all technology vernacular is misnomer. Again I was one of the ASIC pioneers and had several victories at Ford, with gains in the cost effectiveness of both the product and the product development. I was recognized as an ASIC expert.

It was at this point that I started to understand the demands of being an expert. As an expert you are expected not just to contribute to the solution but to accurately identify the problem, then define the solution and quantify the results. At the same time, an expert must continuously monitor and adopt the leading technological advances of other experts and effectively communicate his own. An expert’s resources are his knowledge, his experience and his access to outside resources. Experts begin to lose mentors and gain colleagues, and they depend heavily on an extensive library of information. When I first started, my mentors were experts, and they could almost be measured by their bookshelves of published texts and binders of notes from past efforts. Today that library is more commonly the internet. The internet has given all of us the bookshelf of an expert. I first started mining the internet when Ford Microelectronics bought a modem and allowed a few of us limited access. I discovered Archie and immediately started increasing my reputation as an expert. Not always because I had the answer, but because I could get an answer and get it quickly. I could also sort through candidate answers to find either the best one or a selection of answers, giving my peers and superiors options. Answers and options are the currency of an expert. It is the experience and knowledge of an expert that allow him to leverage the internet library quickly, efficiently and above all accurately. That is why the internet alone cannot really make everyone an expert.

Every position I have held since Ford was offered to me as an expert, and each position gave me new experiences, new skills and new knowledge that allowed me to claim the needed expertise for the next assignment. As a consultant I haven’t always had immediate expertise in the areas my clients were seeking, but now I am an expert at being an expert: I have demonstrated that I know where to look, and where not to look, in order to accurately identify problems and propose solutions with options. Even though I have not seen the exact problem or found the precise solution before, I can apply my knowledge and experience of finding other problems and solutions, and the resources of colleagues and the internet, to new situations more effectively than those without this experience. Technology moves fast and almost no one has direct experience anymore. Whatever needs to be done has never been done before, and once it has been done it does not need to be done again, because it is quickly shared and absorbed by everyone who needs it, and soon after it is obsolete. So an expert is not one who has done exactly what IS NOW needed but one who has done what WAS THEN needed, and who can therefore confidently be expected to quickly identify, solve and communicate the problem at hand.