Call Me First and Get a Discount

If it is not obvious from my greatly increased blog activity, let this post make it clear: my most recent contracted engagement ended about a month ago. Yes, I don’t seem to do much, blog-wise or otherwise, when I am fully contracted. My most recent contract started last May and was originally for six months, to assist an ASIC company with the emulation of their next-generation network adapter product. They asked me to take the lead on the emulation process, and after that was established and they had hired a team to support the effort, they extended my contract to maintain the process until they delivered the emulation product to its first internal firmware development customer. Of course, just as we reached that milestone, the company’s management purchased a competitor’s effort for a similar ASIC product and canceled their own. So although the emulation effort was successfully on track, the product it was supporting no longer existed, and a lot of good work was abandoned. Worse than that, a lot of good people were released, and of course the purchased competitor’s division is offshore, and so on. That’s the bad news. The good news is that I have recent experience with Xilinx Vivado and Virtex-7, experience with state-of-the-art ASIC partitioning to FPGA, and exposure to extensive use of SystemVerilog code. If you are reading this, you have probably already been contacted by me and directed to this blog site, which, along with my LinkedIn profile, is my primary web presence. If you haven’t, then call me, tell me you saw this first, and I will give you a 50% discount on your first 20 hours of contracted service.

Emulation vs Integration

Technology acronyms generally become jargon that loses its original meaning, or perhaps takes on an expanded meaning that loses its original precision. A list of adulterated acronyms could be another interesting discussion, but for this piece I wanted to discuss the term ASIC, and specifically the letter ‘I’, which stands for integrated. ASIC is Application Specific Integrated Circuit, and the term is supposed to differentiate from just an integrated circuit (IC); we could discuss that evolution as well, but for now let’s focus on “integrated”. Before “integrated” there was “discrete”, and integrated meant bringing all of those discrete circuit functions into a single package, sometimes called a monolithic device, to create a single unit of complex functionality. Through advances in design and manufacturing technology, that single unit of complex functionality has become exponentially more complex. There are primarily two ways that this complexity has been managed in the development process: abstraction and reuse.

Abstraction is the idea that complex functionality can be described at higher abstract levels and then synthesized into the fundamental “discrete” components that create the intended function. Levels of abstraction evolved from transistor level to gate level to cell or slice level, and then sort of stalled at register transfer level (RTL). RTL is currently the most common level of abstraction, and efforts to evolve to higher levels have not really been successful. Examples have been behavioral level, system level, transaction level, and most recently high level. Yes, the current effort is just called high level abstraction (HLA). I guess if that is successful we’ll start working on VHLA. That’s supposed to be a joke for those who remember the progression from IC to LSI and on to VLSI, which all preceded ASIC as the popular terminology for what we colloquially call a “chip”. Nonetheless, these efforts to comprehend and describe more complex functionality have not progressed significantly in at least 20 years.
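To make the jump in abstraction concrete, here is a small sketch of my own (not from any particular tool flow) that models the same 2-bit adder two ways: once as explicit gates, the way a structural netlist would describe it, and once as a single arithmetic statement, the way RTL would. Python is used purely for readability; all the names are illustrative.

```python
def full_adder_gates(a, b, cin):
    """Gate level: explicit XOR/AND/OR structure, one bit at a time."""
    s = a ^ b ^ cin                          # sum bit from two XOR gates
    cout = (a & b) | (a & cin) | (b & cin)   # carry from three ANDs and an OR
    return s, cout

def add2_gate_level(a, b):
    """Structural description: wire two full adders together."""
    s0, c0 = full_adder_gates(a & 1, b & 1, 0)
    s1, c1 = full_adder_gates((a >> 1) & 1, (b >> 1) & 1, c0)
    return (c1 << 2) | (s1 << 1) | s0

def add2_rtl(a, b):
    """RTL-style description: just state the register transfer."""
    return (a + b) & 0b111  # 3-bit result, like a 3-bit sum register

# The synthesizer's job is to turn the RTL form into the gate form.
assert all(add2_gate_level(a, b) == add2_rtl(a, b)
           for a in range(4) for b in range(4))
```

The point of the sketch is the ratio of intent to mechanism: the RTL form says *what* transfers into the result register, and the gate form is what synthesis produces from it.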

So, until HLA makes progress, the more significant method for handling ASIC complexity growth has been reuse. Reuse, in its simplest definition, is in some ways a reversion back to discrete design. The difference is that the modern discrete is a much more complex and configurable building block than a transistor or gate, and discretes are now integrated into the design by way of a computer simulation instead of being physically wired up on a breadboard. Each building block is designed and tested as a unit, and often carried completely to its final physical implementation, before being “integrated” with other blocks into the final system on a single silicon die, or possibly multiple silicon dies in a single package. This design methodology is now commonly referred to as System on a Chip (SoC). The building blocks being reused are CPU cores; communication cores like USB, PCIe, and Ethernet transceivers; memory management cores; etc. In SoC we refer to these reusable discretes as IP (intellectual property), yet another bastardization of a term, inviting confusion with patent work.

Many SoC efforts get to a point where, due to the high cost and/or long lead times of getting an SoC from concept to product, the SoC developer decides they would like an emulation of the SoC suitable for evaluation and verification by the next pipeline stage in the development cycle. Basically a breadboard instead of a simulation, because breadboards are faster and facilitate verification in the intended application environment. For example, the SoC may contain a number of microprocessor systems that need firmware development; the SoC may communicate with another system whose software or hardware development could be started early; or some initial in-system validation of the SoC concept may be desired, to improve confidence in the investment of time and money required to carry it through to product. For whatever reason, the decision to emulate the SoC seems to come late and is often a compromise.

My suggestion is to think about integration again instead of emulation. Use a fast prototyping methodology to build the system, the S, before the chip, the C. Then integrate the S on the C. Balance the value of an early prototype that closely matches the target product, the SoC, against the cost of supporting the limitations of the prototyping methodology. For example, FPGAs are often the basis of a fast prototyping methodology, but they may be slower than the target and have I/O limitations. So architect the SoC to work within these limits in the FPGA and scale to the capability of the ASIC technology. The idea is to get back to integration of the building blocks instead of trying to cram in an emulation as an afterthought.

What makes you an expert?

In college I studied electrical engineering and focused on microprocessor programming. I learned Fortran, Pascal, COBOL and C programming languages as well as assembly code for MC6800 and MC68000 microprocessors. I interned as a Fortran programmer at a power industry consulting firm. When I graduated I had offers from several great companies. I graduated when EE demand was probably at the highest it has ever been. I had two top choices. One was Trane, an HVAC company, that wanted to start using microprocessors to control commercial HVAC equipment. They were ready to hire me straight out of college as an expert to help start their very first solid state controls department. It would have been a fantastic opportunity to be considered an expert right out of college and to introduce a technology into an industry.

Instead, I chose an offer to become an IC designer with Delco Electronics, an automotive company. Integrated circuit design was easy to identify as a growth career, and I really liked the idea of being involved with cars. I had no idea how to design an IC, but they sent me to a one-week crash course at the University of Waterloo in Canada. The course was an in-depth study of a new design technology called VLSI and was based on a relatively new book authored by Mead and Conway. I went with one other NCG (new college graduate), and when we came back we were two of the experts in this new methodology.

After a few design cycles using the VLSI methodology, I discovered something called espresso, which replaced Karnaugh maps with gate-level optimization of logic expressions. Karnaugh map experts could not initially see the advantage, but as soon as espresso started using more complex and more efficient logic gate structures, which resulted in lower-cost ICs, logic synthesis experts were sought. Soon after discovering the logic synthesis process, we discovered that a company called Synopsys was selling software that optimized not only logic but entire state machines, and was using the Chapter 8 code from the Verilog logic simulator, which we had been using for verification. Most people today don’t even realize that Verilog was originally a multi-state gate-level simulator for schematically entered gate-level designs, and that the part of the language that has become well known for its ability to describe hardware was originally Chapter 8 of about 25 chapters in the language reference manual. Chapter 8 was originally intended for describing the tests applied to the logic, something known today as a testbench. Synopsys wisely chose a subset of the Chapter 8 code to be their hardware description language, as many testbench designers were already familiar with the syntax. I had designed some of the testbenches for the ICs I was involved with and was very familiar with Chapter 8 code, so voila, I was an expert.
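To illustrate what two-level minimization buys over raw minterms, here is a hypothetical sketch (mine, not espresso itself, which works on much larger covers) of the three-input majority function: first as the four minterms read straight off the truth table, then as the minimized cover a Karnaugh map or a minimizer would find, with an exhaustive equivalence check of the kind a K-map performs by inspection.

```python
from itertools import product

def majority_minterms(a, b, c):
    """Sum of products straight from the truth table: four 3-input AND terms."""
    return ((a & b & ~c & 1) | (a & ~b & c & 1) |
            (~a & b & c & 1) | (a & b & c & 1))

def majority_minimized(a, b, c):
    """Minimized cover: three 2-input AND terms feeding one OR."""
    return (a & b) | (a & c) | (b & c)

# Exhaustive check over all 8 input combinations that the cheaper cover
# computes the same function as the raw minterm form.
assert all(majority_minterms(a, b, c) == majority_minimized(a, b, c)
           for a, b, c in product((0, 1), repeat=3))
```

Even on three inputs, the minimized form trades four 3-input products for three 2-input products; on the wide logic of a real IC, that kind of reduction is where the lower-cost silicon came from.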

Logic synthesis begat standard-cell-based design, which required automatic placement and then automatic routing (APAR). That’s right, they did not come at the same time initially, and I actually manually routed designs that had only been automatically placed in rows with routing channels in between. One design like that was enough. Eventually we had a full front-to-back synthesis and APAR process with clock insertion, static timing analysis, a code coverage process, and an Electronic Design Automation (EDA) department to maintain the constant new releases of all the software from EDA companies. I was an EDA pioneer, and pioneers that survive to the destination are called experts.

So, after participating in the development of an EDA process at Delco Electronics while also helping with the design and implementation of automotive electronic controls, I accepted an offer to take a position at Ford’s electronics division. I was sought by them as both an EDA and automotive expert. They had not yet adopted a complete EDA process and were embarking on a new powertrain microcontroller. They recognized that new IC architectures and the EDA process could leverage each other to create more cost-effective products. This synergy between the design and the EDA process got the moniker ASIC development. A bit of a misnomer, but then almost all technology vernacular is misnomer. Again I was one of the ASIC pioneers, and had several victories at Ford with gains in the cost effectiveness of both product and product development. I was recognized as an expert in ASIC.

It was at this point that I started to understand the demands of being an expert. As an expert you are expected not just to contribute to the solution but to accurately identify the problem, then define the solution and quantify the results. At the same time, an expert must continuously monitor and adopt the leading technological advances of other experts and effectively communicate his own. An expert’s resources are his knowledge, experience, and access to outside resources. Experts begin to lose mentors and gain colleagues, and depend heavily on an extensive library of information. When I first started, my mentors were experts, and they could almost be measured by their bookshelves of published texts and binders of notes from past efforts. Today that library is more commonly the internet. The internet has given all of us the bookshelf of an expert. I first started mining the internet when Ford Microelectronics bought a modem and allowed a few of us limited access. I discovered Archie and immediately started increasing my reputation as an expert. Not always because I had the answer, but because I could get an answer and get it quickly. I also could sort through candidate answers to either find the best one or a selection of answers, giving my peers and superiors options. Answers and options are the currency of an expert. It is the experience and knowledge of an expert that allows him to leverage the internet library quickly, efficiently, and above all accurately. That is why the internet alone cannot really make everyone an expert.

Every position I have held since Ford was offered to me as an expert, and each position gave me new experiences, new skills, and new knowledge that allowed me to claim the needed expertise for the next assignment. As a consultant I haven’t always had immediate expertise in the areas my clients were seeking, but now I am an expert at being an expert, so I have demonstrated that I know where to look, and where not to look, in order to accurately identify problems and propose solutions with options. Even though I have not seen the exact problem or found the precise solution before, I can apply my knowledge and experience of finding other problems and solutions, and the resources of colleagues and the internet, to new situations more effectively than those without this experience. Technology moves fast and almost no one has direct experience any more. Whatever needs to be done has never been done before, and once it has been done it does not need to be done again, because it is quickly shared and absorbed by everyone that needs it, and soon after it is obsolete. So an expert is not one that has done exactly what IS NOW needed before, but one that has done what WAS THEN needed before, and therefore can confidently be expected to quickly identify, solve, and communicate the problem at hand.

What’s New

Innovative (ADJECTIVE)

1. (of a product, idea, etc.) Featuring new methods; advanced and original.
2. (of a person) Introducing new ideas; original and creative in thinking: “an innovative thinker”.

Search the mission and values statements of many companies, the cover
letters and resumes of job seekers, or the position descriptions on
job boards, and “innovative” is a common word. Companies strive to be
innovative and they want innovative employees. Career seekers promote
that they are the innovative people companies want.

But how innovative are company managers and employment policies? Are
they really using new methods for compensation? Are their staffing
methods advanced and original? Is the workplace environment creative
and efficient?

For the past 8 years I have worked with some companies that I found to
be truly innovative in securing staffing resources by contracting those
resources and compensating them through a 1099 instead of employing
them and sending a W2.

What made these companies innovative is that they partitioned the
development of technology critical to their company’s core competency
between contracted and employed staffing. They identified and
contracted the development of technology that was core, but generalized
across their product and client market. They employed staffing to validate,
maintain, customize, and support the manufacture and marketing of their
products.

The advantage is that they could specialize and optimize their
staffing in both the contracted and employed areas instead of
underhiring or overhiring in either area. This avoided undesirable
and costly staffing reductions and improved time-to-market by avoiding
training and false starts caused by inexperienced or generalized
staffing.

There are a number of companies looking for innovative staffing to
work on innovative products right now. I am currently looking for the
ones that are also interested in innovative methods for staffing. If I
have reached one in this blog, please give me a call.