Semiconductor Industry News, Trends, and Technology, and SEMI Standards Updates

Implementing your Process Module Using CCF

Posted by Tim Hutchison: Senior Software Engineer on Feb 9, 2017 12:30:00 PM

You have designed the ultimate process that will revolutionize the semiconductor industry.  The parts have been collected, the process module assembled.   But now you need the software to make all the components work together.

As described in a recent CIMControlFramework (CCF) blog post about designing recipes, the recipe is the secret sauce for your process. The recipe directs the hardware to perform the process: how much time to spend in a step, the temperature, gas flows, pressure, and so on.

The recipe provides directions to the process module on how to perform the processing: how and when to enable or disable hardware components, what setpoints to apply, and how much time to spend on any given step. The process module (PM) software that you develop takes the recipe you have defined and performs the operations it describes. CCF stays out of your way to allow you to create your secret sauce.

CCF makes integrating your process module easy. It provides a simple process module interface through which CCF tells your module when to prepare for processing, when to prepare for transfer, and when to process material using the supplied recipe.
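
As a rough illustration of this division of labor, here is a sketch of what such an interface might look like. The names and signatures below are invented for this example and are not the actual CCF API:

```csharp
// Hypothetical process module interface in the spirit of the one described above;
// the real CCF interface names and signatures may differ.
using System;
using System.Collections.Generic;

public interface IProcessModule
{
    void PrepareForProcess();                          // e.g., pump down, stabilize temperature
    void PrepareForTransfer();                         // e.g., vent, open slit valve
    void Process(IDictionary<string, double> recipe);  // run the recipe setpoints
}

// Minimal example implementation that just records what it was asked to do.
public class EtchModule : IProcessModule
{
    public List<string> Log { get; } = new List<string>();

    public void PrepareForProcess() => Log.Add("PrepareForProcess");
    public void PrepareForTransfer() => Log.Add("PrepareForTransfer");

    public void Process(IDictionary<string, double> recipe)
    {
        // A real module would drive MFCs, valves, the chuck, etc. from these setpoints.
        foreach (var step in recipe)
            Log.Add($"Set {step.Key} = {step.Value}");
    }
}
```

The framework calls these methods at the appropriate points in the material flow; everything inside `Process` is yours.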

Your process module hardware may be made up of any number and type of hardware components (e.g., mass flow controllers, valves, a chuck) that will be used to execute the recipe. Because CCF is built on C# and Visual Studio rather than proprietary interfaces, the drivers for your hardware are yours to design and develop, and creating them is much easier. CCF makes it easy to connect to your hardware, whether via a PLC or by talking directly to the hardware.

CCF makes it incredibly simple to report data to a UI, a GEM host, and even an EDA client. Declare your status variable, update it, and publish; the data is reported to all three for you automatically.
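
The declare/update/publish pattern can be sketched generically. The class below illustrates the idea only; it is not the actual CCF status variable API:

```csharp
// Illustrative-only sketch of a declare/update/publish status variable.
// In CCF the reporters for the UI, GEM host, and EDA client are wired up for you.
using System;
using System.Collections.Generic;

public class StatusVariable<T>
{
    private readonly List<Action<string, T>> _subscribers = new List<Action<string, T>>();

    public string Name { get; }
    public T Value { get; private set; }

    public StatusVariable(string name, T initial) { Name = name; Value = initial; }

    // Each consumer (UI, GEM, EDA) registers a reporter once.
    public void Subscribe(Action<string, T> reporter) => _subscribers.Add(reporter);

    public void Update(T value) => Value = value;

    // One publish call fans the current value out to every registered consumer.
    public void Publish()
    {
        foreach (var reporter in _subscribers)
            reporter(Name, Value);
    }
}
```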

CCF takes the stress out of the necessary evil of moving material through the equipment to get it to your process module. It provides an interface for interacting with your process module allowing you to spend your time where it matters most - creating your secret sauce to help make you successful!

Topics: Semiconductor Industry, CIMControlFramework, Software

XP is Dead, It’s Time to Move On

Posted by Derek Lindsey: Product Manager on May 19, 2016 1:00:00 PM


When my daughter turned one year old, she got a very soft blanket as a birthday present. She loved that blanket and would take it everywhere with her. She couldn’t/wouldn’t go to sleep at night without it. When she got old enough to talk, she called it her special blanket or “spesh.” Needless to say, after many years of toting that blanket around, it started to wear out – in fact, it started getting downright nasty. She adamantly refused to part with it even though it was just a rag with little redeeming value.

A couple of years ago, Microsoft made the following announcement: “After 12 years, support for Windows XP ended April 8, 2014. There will be no more security updates or technical support for the Windows XP operating system. It is very important that customers and partners migrate to a modern operating system.”

In the immortal words of Dr. Leonard “Bones” McCoy from Star Trek, “It’s dead Jim!”


Many arguments have been proffered on both sides as to why users should stay with or move away from XP. Windows XP was first introduced in 2001. That makes the operating system 15 years old — an eternity in computer years. The main argument I see for upgrading from XP is that it is impossible to keep up with advances to the .NET framework and remain on the old operating system. By staying with XP, you are missing out on new features and technologies. These features include taking advantage of better hardware integration for improved system performance and being able to use 64-bit applications and memory space.

Since Microsoft no longer supports XP and no longer provides security updates for the OS, staying with XP is a security risk. Any security holes that have been discovered since Microsoft withdrew support have been ruthlessly targeted.

To come full circle, my daughter finally did give up the little rag that she had left of the blanket. I don’t remember what ultimately made her give it up. She is now 18 and a few months ago, we came across that small piece of her special little blanket that we had stored away. The rag brought back good memories, but we were both glad it had been retired. Isn’t it time to do the same with XP?

Topics: Microsoft, Software

Testing for and Finding Memory Leaks

Posted by Bill Grey: Distinguished Software Engineer on May 12, 2016 1:00:00 PM

An issue that inevitably crops up in long-running, complex software systems is memory use. In the worst cases it manifests as a crash after several hours or days of running when the software has consumed all available memory.

Another inevitability is that these out-of-memory crashes are found very late in the development cycle, just prior to a delivery date, or, worse, after delivery. Because the crashes take hours or days to occur, the testing cycles are very long; this causes a lot of stress for the development team and frequently delays delivery.

The rest of this blog contains a proposed process to find these issues sooner in the development process and some tools to help the developer investigate memory use.

Early and continuous testing of the software system is the key to avoiding delivery of memory leaks. As soon as possible, a dedicated system should be set up for endurance testing. The software should be built in debug mode, but it is not necessary to run it in a debugger. Preferably, for equipment control software, this testing would use a simulator in place of the hardware. Start as soon as enough of the software exists to perform some significant functionality in a repetitive manner, and let the test evolve as more functionality becomes available. For semiconductor equipment control software, a logical test is wafer cycling, as it exercises a good majority of the software.


This endurance test should be kept running during development, right up to delivery. The computer running the endurance test should be configured to collect Windows crash dumps for the software application(s) and have Windows Performance Monitor configured to monitor Private Bytes for the application(s), https://msdn.microsoft.com/en-us/library/windows/hardware/ff560134(v=vs.85).aspx. The test should be checked daily to see how the Private Bytes memory use has changed.  If the application has crashed, then the crash dump .DMP file can be collected and analyzed. Visual Studio can be used to open the .DMP file for analysis on the developer’s computer. 

The endurance test should be maintained and updated as the software is updated. However, since run time is important for this test, consider only updating it on a weekly basis unless the update is to fix an issue that caused the test to crash.

If the endurance test shows that the Private Bytes for the application increases steadily with no signs of levelling off, then the application probably has a memory leak.
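
From inside a .NET application, the same Private Bytes counter that Performance Monitor tracks can be sampled with `System.Diagnostics.Process`. The growth heuristic below is a simple illustration for logging trends from within the application; it is not a substitute for the external Performance Monitor setup described above:

```csharp
// Sample this process's private bytes and apply a crude "steadily increasing" check.
using System;
using System.Collections.Generic;
using System.Diagnostics;

public static class MemoryWatch
{
    public static long SamplePrivateBytes()
    {
        var p = Process.GetCurrentProcess();
        p.Refresh();                      // pick up the current counter values
        return p.PrivateMemorySize64;     // corresponds to "Private Bytes" for this process
    }

    // Crude leak heuristic: true if no sample drops by more than slackBytes
    // and the last sample is above the first.
    public static bool LooksMonotonicallyGrowing(IReadOnlyList<long> samples, long slackBytes)
    {
        for (int i = 1; i < samples.Count; i++)
            if (samples[i] + slackBytes < samples[i - 1]) return false;
        return samples[samples.Count - 1] > samples[0];
    }
}
```

Periodically appending `SamplePrivateBytes()` to the application log gives the daily endurance-test check a trend to read even when Performance Monitor data is unavailable.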

For C++ programs, Microsoft’s UMDH memory dump utility is very useful for tracking down what allocations are occurring in the application, https://msdn.microsoft.com/en-us/library/windows/hardware/ff560206(v=vs.85).aspx. The concept is to take two or more memory snapshots and analyze the differences to see what new objects have been created. Remember to have the software built in debug mode so full debug information is available in the memory dumps.

For .NET programs, newer versions of Visual Studio have built in memory profiling, https://msdn.microsoft.com/en-us/library/dd264934.aspx.

There are third party memory analyzers on the market that some have found to be useful. Most of these will report numerous false positives that the developer will have to wade through to get to the real leaks. Most third party memory analyzers for .NET seem to frequently report false positives for COM objects. 

The tools just provide the developer a location to review the code for leaks. It still requires diligence and expertise on the part of the developer to analyze the information and find the cause of the leak. Seldom do the tools create a treasure map with "X" marking the spot of the leak.

Having an endurance test running allows the developer to understand the memory profile of the software and watch how the profile changes as the software changes. Early detection is critical given the length of the testing cycle.

Topics: Microsoft, Software

CIMControlFramework Dynamic Model Creation

Posted by Derek Lindsey: Product Manager on Apr 14, 2016 1:00:00 PM


Have you ever watched one of those cooking shows where the chef spends a lot of time whipping up the ingredients to some elaborate dish, and, when it comes time to put the dish in the oven to bake, there is already a finished one in there? If only the real world worked that way. Sometimes it would be nice to be able to go to the oven and have a delicious meal already waiting for you.

The Cimetrix CIMControlFramework™ (CCF) product is unique among Cimetrix products in that it not only provides source code, but also combines several other Cimetrix products (CIMConnect, CIM300, and CIMPortal™ Plus) and takes full advantage of all the features provided by each product.

One of the features of CIMPortal Plus that is used in CCF is the concept of an equipment model. The equipment model describes the data that your equipment provides through Interface A. The tool hierarchy is modeled along with all of the parameters, events, and exceptions published by the tool. It used to be that CCF users had to manually create the tool hierarchy in their base equipment model. CCF would then populate the model with the parameters, events, and exceptions. If the tool hierarchy changed, the base model would have to be modified. It made changing the tool configuration much more difficult.

Starting with the CCF 4.0 release, a base equipment model that is common to all equipment was installed. Generally, CCF users will not need to modify the base model. CCF takes advantage of the modeling API provided by CIMPortal Plus to dynamically add hierarchy nodes to the base model depending on the components that are created in CCF. This new feature makes it easy to change the configuration of the CCF tool because the user does not have to make modifications to the base model and redeploy the package to be able to run CCF.

The dynamically created model is also compliant with the SEMI E164 Common Metadata standard. This compliance is possible because of the dynamic nature of model creation. The required elements of E164 are added to the equipment model dynamically during the startup of Tool Supervisor.

Having a dynamically created Interface A model that exactly matches your equipment structure and is guaranteed to be E164-compliant without having to do any extra work is similar to going to the oven and finding a delicious dish already cooked and waiting for you.

Topics: EDA, CIMControlFramework, Product Information, Software

CIMControlFramework Work Breakdown

Posted by Derek Lindsey: Product Manager on Mar 15, 2016 1:00:00 PM


“A journey of a thousand miles begins with a single step.” – Lao Tzu

“Watch out for that first step Mac, it’s a lulu!” – Bugs Bunny

These quotes by the great philosophers Lao Tzu and Bugs Bunny have more in common than would appear at first glance. At the beginning of a journey you face the unknown. There is excitement that it could be a great journey, but the uncertainty may make that first step the hardest to take. If you haven’t put in the preparation for a successful journey, that first step might be a lulu.

Similarly, when starting a new equipment control application, there is excitement for the great end product, but also some uncertainty about the best place to start. CIMControlFramework (CCF) offers a great training program to get you started and many building blocks for creating a first-class equipment control application. Even with these great starting tools, many users still ask, “Where do I go from here?”

The first step is to create a work breakdown of what it takes to create a successful equipment control application. There will obviously be tasks that are unique to each equipment control application, but most applications have some common tasks or epic user stories that have to be completed during the project. The order in which these stories are completed may depend on milestones and expectations for when they are accomplished, but they generally all need to be completed during the project.

  • Integrate Devices – CCF provides an Equipment layer with abstractions of most commonly used devices. Integrating these devices into CCF only requires the implementation of the abstract interface.

  • Material Movement Through the Tool – CCF provides a flexible scheduler with complete working examples of different types of scheduling that could be done.

  • Implement the Process Module – CCF provides a process module interface that allows the rest of CCF to communicate with your process module – your secret sauce.

  • Create an Operator Interface (OI) – CCF provides an OI framework that allows commands to be sent and updates to be made. It also provides some default screens that use this interface. It also allows for insertion and use of custom OI screens.

  • Simulation – CCF provides a simulator that can be used in place of real hardware. The simulator can be used to deliver/remove material, perform robot moves, and do simulated IO. This is invaluable in continuing development before the hardware is ready or if there is limited tool time for the developers.

  • Recipes (Process Recipes and Execution) – CCF provides a recipe manager for passing recipes through the tool. The default recipe can be used or custom recipes can be added.

  • I/O – CCF provides ASCII serial drivers and other common IO providers. Custom IO providers can also be included in CCF.

  • Data Collection and Storage – Deciding up front what data to store and what storage medium to use is recommended.

  • Factory Automation – CCF provides a fully integrated GEM, GEM300, and EDA implementation.

  • Diagnostics and Testing – The CCF logging package is a fantastic tool for debugging your application both on the tool and remotely.

  • Errors and Recovery – CCF provides an Alarms package for signaling of and recovery from error conditions.
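
Several of these stories follow the same pattern: CCF supplies an abstraction and the tool builder supplies the hardware-specific implementation. The sketch below illustrates the idea with an invented valve abstraction; these are not the actual CCF Equipment layer classes:

```csharp
// Hypothetical device abstraction in the spirit of the Equipment layer described above.
using System;

public abstract class AbstractValve
{
    public string Name { get; }
    public bool IsOpen { get; protected set; }

    protected AbstractValve(string name) => Name = name;

    public void Open()  { WriteHardware(true);  IsOpen = true;  }
    public void Close() { WriteHardware(false); IsOpen = false; }

    // The only thing a tool builder must implement: the hardware write,
    // whether via PLC, serial command, or direct I/O.
    protected abstract void WriteHardware(bool open);
}

// Simulated implementation, usable before the real hardware is available.
public class SimulatedValve : AbstractValve
{
    public SimulatedValve(string name) : base(name) { }

    protected override void WriteHardware(bool open)
        => Console.WriteLine($"{Name}: {(open ? "OPEN" : "CLOSE")}");
}
```

Swapping `SimulatedValve` for a real driver changes nothing in the scheduler or OI code that calls `Open()` and `Close()`, which is what makes simulation-first development practical.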

By going through CCF training and creating a work breakdown of the tasks that need to be done for your equipment control application, you can ensure that your first step will be the foundation of a successful journey.

Topics: CIMControlFramework, Product Information, Software

Build vs. Buy?

Posted by David Warren: Director of Software Engineering on Feb 18, 2016 1:06:00 PM

Every company that needs software must make a build versus buy decision at some point. Some choices are easy—it makes little sense to build your own office software for word processing, spreadsheets, or presentations. But what if you need software to control specialized physical equipment?

Classic advantages of building your own software are:

  • Canned software is generally targeted to meet many needs for most problems. Custom software is better suited to meet specific and uncommon needs.

  • Canned software has a fixed set of features, and it may be difficult to add or remove specific ones, which can leave you with software that contains unneeded features or is missing features that you do need. Custom software can be built to a project’s specifications, including all the features that are needed and none that aren’t.

  • The process of developing software builds in-house technical expertise. This expertise can be used to create competitive advantage through higher performance and faster reaction time in meeting the changing needs of the marketplace.

Classic advantages of using standardized software are:

  • Standardized software is generally less expensive than custom software because its cost can be spread across many customers and/or tools.

  • Standardized software can require less time depending on the degree of customization required.

  • Standardized software can be more reliable since it has been tested and used in many different applications.

  • Standardized software may provide more features than would otherwise be available.

Why Not Combine the Best of Both Options?


Buying a tool control framework can help you build your own tool control software and still get the benefits of using standardized software. The framework can take care of common problems while you focus on items unique to your specific tool. As a framework, features can be removed, replaced, or even modified as needed. You reduce your costs as well as your time-to-market by using a selection of reliable, field-proven features and including only those that are relevant and add value to your control system. You still retain and build your in-house technical expertise to create competitive advantages in controlling your equipment instead of treating tool control expertise as a commodity.

Using a tool control framework can be a smart way to improve your processes by using standardized software that is easy to customize. So why not consider it as an option for your next project?

If you are interested in downloading the data sheet on Cimetrix’ tool control framework software, CIMControlFramework, click here.

Topics: CIMControlFramework, Equipment Control-Software Products, Equipment Automation Framework, Software

Software Interfaces and API Method Signatures Should Remain Consistent During a Product's Lifecycle

Posted by Derek Lindsey: Product Manager on Jan 28, 2016 1:07:00 PM


I recently read The Martian by Andy Weir. Since this information comes out on the first page of the book, I don’t think I’m spoiling too much to say that it is the story of an astronaut, Mark Watney, who is lost in a dust storm during a mission to Mars. He is presumed dead by his crewmates and abandoned on the planet. Of course he is not dead, and he has to use training, skill, ingenuity, and luck to survive long enough to be rescued. Several times throughout the adventure, he has to connect life-supporting utilities, tanks, airlocks, and vehicles together using the connecting valves supplied on each component. Watney says, “I’ve said this many times before, but: Hurray for standardized valve systems!” This is obviously a work of fiction, but what would have happened if he had tried to attach a holding tank to the ascent vehicle and the valves had changed between versions?

Software customers should be able to share Mark Watney’s expectation that the valves don’t change during the mission. In the case of software, we aren’t talking about physical valves; rather, we are talking about software interfaces and API method signatures. In a real sense, the consistency of these signatures is as mission critical as the standardized valve connections were for the astronaut in The Martian. Changing method signatures, at the very least, forces users of the software to rebuild their applications. Often, such changes force software users to requalify their entire tool. This places an undue burden on them. Software users should reasonably expect the interfaces and API to remain constant through the life of the mission (i.e., within a version of the software, including minor releases and patches). A side note on this topic: if Cimetrix product management determines that a piece of software has a bug or does not conform to the SEMI standards on which our products are based, changes will be made to correct the problem. Similarly, if NASA determined that one of their connectors did not conform to the spec, they would immediately resolve the issue for the item that was out of spec.
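
In C# terms, the point can be illustrated with a hypothetical service (not Cimetrix code). Changing an existing method’s signature within a version breaks every caller at compile time, while adding an overload preserves compatibility:

```csharp
// Hypothetical API used only to illustrate signature stability.
using System;

public class RecipeService
{
    // Original v1.0 signature: must remain intact for the life of the version.
    // It now delegates to the richer overload with a default priority.
    public string SendRecipe(string name) => SendRecipe(name, priority: 0);

    // New capability added as an overload in a later release, so existing
    // callers of SendRecipe(string) still compile and behave as before.
    public string SendRecipe(string name, int priority)
        => $"sent {name} at priority {priority}";
}
```

Had `SendRecipe(string)` instead been changed to `SendRecipe(string, int)`, every existing caller would have needed a rebuild, which is exactly the burden the paragraph above describes.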

The Cimetrix release versioning process (see our January 14, 2016 blog) allows Cimetrix personnel and Cimetrix software users to be aware of what backward compatibility guarantees are made for a specific version of Cimetrix software.

We would like our software users to be able to say, “Hurray for compatible software versions!”

Topics: Semiconductor Industry, Software
