Semiconductor Industry News, Trends, and Technology, and SEMI Standards Updates

XP is Dead, It’s Time to Move On

Posted by Derek Lindsey: Product Manager on May 19, 2016 1:00:00 PM


When my daughter turned one year old, she got a very soft blanket as a birthday present. She loved that blanket and would take it everywhere with her. She couldn’t/wouldn’t go to sleep at night without it. When she got old enough to talk, she called it her special blanket or “spesh.” Needless to say, after many years of toting that blanket around, it started to wear out – in fact, it started getting downright nasty. She adamantly refused to part with it even though it was just a rag with little redeeming value.

A couple of years ago, Microsoft made the following announcement: “After 12 years, support for Windows XP ended April 8, 2014. There will be no more security updates or technical support for the Windows XP operating system. It is very important that customers and partners migrate to a modern operating system.”

In the immortal words of Dr. Leonard “Bones” McCoy from Star Trek, “It’s dead, Jim!”


Many arguments have been proffered on both sides as to why users should stay with or move away from XP. Windows XP was first introduced in 2001. That makes the operating system 15 years old — an eternity in computer years. The main argument I see for upgrading from XP is that it is impossible to keep up with advances to the .NET framework and remain on the old operating system. By staying with XP, you are missing out on new features and technologies. These features include taking advantage of better hardware integration for improved system performance and being able to use 64-bit applications and memory space.

Since Microsoft no longer supports XP and no longer provides security updates for the OS, staying with XP is a security risk. Any security holes that have been discovered since Microsoft withdrew support have been ruthlessly targeted.

To come full circle, my daughter finally did give up the little rag that she had left of the blanket. I don’t remember what ultimately made her give it up. She is now 18, and a few months ago we came across that small piece of her special little blanket that we had stored away. The rag brought back good memories, but we were both glad it had been retired. Isn’t it time to do the same with XP?

Topics: Microsoft, Software

Testing for and Finding Memory Leaks

Posted by Bill Grey: Distinguished Software Engineer on May 12, 2016 1:00:00 PM

An issue that inevitably crops up in long-running, complex software systems is memory use. In the worst cases it manifests as a crash after several hours or days of running when the software has consumed all available memory.

Another inevitability is that these out-of-memory crashes are found very late in the development cycle, just prior to a delivery date, or, worse, after delivery. Because the crashes take hours or days to occur, the testing cycles are very long; the crashes cause a lot of stress for the development team and frequently delay delivery.

The rest of this blog contains a proposed process to find these issues sooner in the development process and some tools to help the developer investigate memory use.

Early and continuous testing of the software system is the key to avoiding the delivery of memory leaks. As soon as possible, a dedicated system should be set up for endurance testing. The software should be built in debug mode, but it is not necessary to run it under a debugger. Preferably, for equipment control software, this would use a simulator in place of the hardware. Start the test as soon as enough of the software exists to exercise significant functionality in a repetitive manner, and let the test evolve as more of the software is developed, adding functionality as it becomes available. For semiconductor equipment control software, a logical test is wafer cycling, since it exercises a good majority of the software.


This endurance test should be kept running during development, right up to delivery. The computer running the endurance test should be configured to collect Windows crash dumps for the software application(s) and to have Windows Performance Monitor track Private Bytes for the application(s) (see https://msdn.microsoft.com/en-us/library/windows/hardware/ff560134(v=vs.85).aspx). The test should be checked daily to see how the Private Bytes memory use has changed. If the application has crashed, the crash dump .DMP file can be collected and analyzed; Visual Studio can open the .DMP file for analysis on the developer’s computer.
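
For teams that also want a long-running record of the trend alongside the Performance Monitor graphs, a small watcher program can append the counter value to a file. The sketch below is illustrative only; “MyEquipmentApp” is a hypothetical process name, to be replaced with the application under endurance test.

```csharp
// Minimal sketch: sample the monitored application's Private Bytes once a
// minute and append it to a CSV file for later trend analysis.
using System;
using System.Diagnostics;
using System.IO;
using System.Threading;

class PrivateBytesLogger
{
    static void Main()
    {
        // "MyEquipmentApp" is a placeholder instance name (process name without .exe).
        using (var privateBytes = new PerformanceCounter("Process", "Private Bytes", "MyEquipmentApp"))
        using (var log = new StreamWriter("private_bytes.csv", append: true))
        {
            while (true)
            {
                // NextValue() returns the current Private Bytes for the instance.
                log.WriteLine("{0:O},{1}", DateTime.Now, privateBytes.NextValue());
                log.Flush();
                Thread.Sleep(TimeSpan.FromMinutes(1));
            }
        }
    }
}
```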

The endurance test should be maintained and updated as the software is updated. However, since run time is important for this test, consider only updating it on a weekly basis unless the update is to fix an issue that caused the test to crash.

If the endurance test shows that the Private Bytes for the application increases steadily with no signs of levelling off, then the application probably has a memory leak.

For C++ programs, Microsoft’s UMDH memory dump utility is very useful for tracking down what allocations are occurring in the application, https://msdn.microsoft.com/en-us/library/windows/hardware/ff560206(v=vs.85).aspx. The concept is to take two or more memory snapshots and analyze the differences to see what new objects have been created. Remember to have the software built in debug mode so full debug information is available in the memory dumps.
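
As a rough sketch of that snapshot-and-compare workflow (the executable name, process ID, and file names below are placeholders), the documented steps from an elevated command prompt look something like this:

```
rem _NT_SYMBOL_PATH should point at the application's symbols before capturing.

rem Enable user-mode stack traces for the target executable, then restart the application.
gflags /i MyEquipmentApp.exe +ust

rem Capture a baseline snapshot and, some hours later, a second snapshot of process 1234.
umdh -p:1234 -f:baseline.log
umdh -p:1234 -f:later.log

rem Compare the snapshots; the output lists allocations that grew between them.
umdh baseline.log later.log > diff.log
```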

For .NET programs, newer versions of Visual Studio have built in memory profiling, https://msdn.microsoft.com/en-us/library/dd264934.aspx.

There are third-party memory analyzers on the market that some have found useful. Most of these report numerous false positives that the developer will have to wade through to get to the real leaks; the .NET analyzers, in particular, seem to report false positives for COM objects frequently.

The tools just provide the developer a location to review the code for leaks. It still requires diligence and expertise on the part of the developer to analyze the information and find the cause of the leak. Seldom do the tools create a treasure map with "X" marking the spot of the leak.

Having an endurance test running allows the developer to understand the memory profile of the software and watch how the profile changes as the software changes. Early detection is critical given the length of the testing cycle.

Topics: Microsoft, Software

Benefits of Being a Microsoft Gold Competency Partner

Posted by Richard Howard: Director of Tech Ops on Mar 10, 2016 1:02:00 PM


In November 2014, Cimetrix attained ISV (IP & Solution Development) Gold Competency Partner status with Microsoft®. Now you may be thinking, “So what? What could that possibly have to do with me as a client of Cimetrix?” That’s what I would have thought if I had read the headline without knowing what is involved in both achieving and maintaining that level with Microsoft. So let me briefly share the main value of Cimetrix being a Gold Competency Partner and why it matters to our clients and to Cimetrix.

A requirement for Cimetrix to reach the Gold level was that we had to have, at a minimum, three (3) products that passed the Gold Competency Test for Windows® 8. This test (commonly referred to as a “logo” test) ensures that the software applications adhere to patterns and practices consistent with Microsoft’s operating system architecture. Logo-compatible applications must conform to the following:

  1. Compatibility and Resilience – Apps are expected to be resilient and stable, and eliminating failures helps ensure that software is more predictable, maintainable, performant, and trustworthy.

  2. Adherence to Windows Security Best Practices – Using Windows security best practices will help avoid creating exposure to Windows attack surfaces. Attack surfaces are the entry points that a malicious attacker could use to exploit the operating system by taking advantage of vulnerabilities in the target software. One of the worst security vulnerabilities is the elevation of privilege.

  3. Support Windows Security Features – The Windows operating system has many features that support system security and privacy. Apps must support these features to maintain the integrity of the operating system. Improperly compiled apps can cause buffer overruns that may, in turn, cause denial of service or allow malicious code execution.

  4. Adherence to System Restart Manager Messages – When users initiate shutdown, they usually have a strong desire to see shutdown succeed; they may be in a hurry to leave the office and just want their computers to turn off. Apps must respect this desire by not blocking shutdown. While in most cases a shutdown may not be critical, apps must be prepared for the possibility of a critical shutdown.

  5. Support of a Clean, Reversible Installation – A clean, reversible installation allows users to successfully manage (deploy and remove) apps on their systems.

  6. Digitally Signing Files and Drivers – An Authenticode digital signature allows users to be sure that the software is genuine. It also allows one to detect whether a file has been tampered with, such as if it has been infected by a virus. Kernel-mode code signing enforcement is a Windows feature known as code integrity (CI), which improves the security of the operating system by verifying the integrity of a file each time the image of the file is loaded into memory. CI detects whether malicious code has modified a system binary file. It also generates a diagnostic and system-audit log event when the signature of a kernel module fails to verify correctly.

  7. Prevention of Blocked Installations or App Launches Based on an Operating System Version Check – It is important that customers are not artificially blocked from installing or running their app when there are no technical limitations. In general, if apps were written for Windows Vista or later versions of Windows, they should not have to check the operating system version.

  8. Does Not Load Services or Drivers in Safe Mode – Safe mode allows users to diagnose and troubleshoot Windows. Drivers and services must not be set to load in safe mode unless they are needed for basic system operations, such as storage device drivers, or for diagnostic and recovery purposes, such as anti-virus scanners. By default, when Windows is in safe mode, it starts only the drivers and services that came preinstalled with Windows.

  9. Follows User Account Control Guidelines – Some Windows apps run in the security context of an administrator account, and apps often request excessive user rights and Windows privileges. Controlling access to resources enables users to be in control of their systems and protects them against unwanted changes. An unwanted change can be malicious, such as a rootkit taking control of the computer, or the result of an action made by people who have limited privileges. The most important rule for controlling access to resources is to provide the least amount of access (a “standard user” context) necessary for a user to perform his or her necessary tasks. Following user account control (UAC) guidelines provides an app with the necessary permissions when they are needed by the app, without leaving the system constantly exposed to security risks. Most apps do not require administrator privileges at run time and should run just fine as a standard user.

  10. Installation to the Correct Folders by Default – Users should have a consistent and secure experience with the default installation location of files, while maintaining the option to install an app in the location of their choice. It is also necessary to store app data in the correct location to allow several people to use the same computer without corrupting or overwriting each other's data and settings. Windows provides specific locations in the file system to store programs and software components, shared app data, and app data specific to a user. (A small example of resolving these locations in .NET follows this list.)
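
To give a feel for what item 10 looks like in practice for a .NET application, the short sketch below resolves the per-user and machine-wide data folders through the framework rather than hard-coding paths. The company and application names are placeholders, and this is only an illustration, not code from the certification requirements themselves.

```csharp
// Minimal sketch: resolve per-user and shared application-data folders via
// Environment.GetFolderPath instead of hard-coding paths such as C:\ProgramData.
// "ExampleCompany" and "ExampleApp" are placeholder names.
using System;
using System.IO;

class DataFolders
{
    static void Main()
    {
        // Per-user, non-roaming data (typically C:\Users\<name>\AppData\Local).
        string userData = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.LocalApplicationData),
            "ExampleCompany", "ExampleApp");

        // Data shared by all users of the machine (typically C:\ProgramData).
        string sharedData = Path.Combine(
            Environment.GetFolderPath(Environment.SpecialFolder.CommonApplicationData),
            "ExampleCompany", "ExampleApp");

        Directory.CreateDirectory(userData);
        Directory.CreateDirectory(sharedData);

        Console.WriteLine(userData);
        Console.WriteLine(sharedData);
    }
}
```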

Microsoft provides a suite of tests that ensure compliance with the standards listed above. Cimetrix, as part of our release process, now runs the logo testing suite against all products prior to a scheduled release. To date we have received logo certification for our latest versions of CIM300, EDAConnect, and ECCE Plus. We have also submitted the latest release of CIMConnect for endorsement. We will continue to make sure all new product releases are subject to, and pass, the logo certification process. Committing to making sure our products are logo tested not only ensures our continued status as a Gold Competency Partner, but also lets our clients know of our commitment to deliver quality software that is compatible with Microsoft’s current operating systems.

The largest benefit Cimetrix receives from our Gold Partner status is access to Microsoft tools and technologies. As a Gold Competency Partner, Cimetrix receives premium MSDN subscriptions to ensure that each engineer in Engineering, Quality Engineering, and CT&S has the most up-to-date technology tools, training, and information they need to get their respective jobs done. Having access to the right tools ensures that our engineers can be as efficient and effective as possible. In addition, the cost savings of having these tools provided to us, as opposed to having to purchase a subscription for each engineer, is significant. By saving money on tools, we can devote those funds to product development.

Application certification and the tools provided by MSDN subscriptions are just a couple of examples of how our Gold Competency Partner status provides benefits to our clients. Cimetrix greatly values its partnership status with Microsoft. We are committed to continuing to adhere to the requirements and standards set by Microsoft in order to retain our Gold status.

Topics: Partners, CIM300, EDAConnect, ECCE, Microsoft

Using C# for Development at Cimetrix

Posted by Cimetrix on Apr 12, 2010 4:00:00 PM

by Vladimir Chumakov,
Software Engineer

We started using C# at Cimetrix about five years ago, when we began working on CIMPortal™, our Equipment Data Acquisition product. Later on, we used C# exclusively for development of our Equipment Client Connection Emulator (ECCE) tool; EDAConnect™, a client-side software library product for implementing the SEMI EDA Standards; and CIMControlFramework™, an equipment automation framework for tool control.

Here is why we chose - and keep using - C# for new project and product development at Cimetrix:

  • The biggest advantage of using C# is not the programming language itself but the extensive functionality provided by the Microsoft .NET Framework. The development time saved by using the .NET Framework could be measured in years.
    • We used ASP.NET libraries for development of CIMPortal’s Web GUI and implementation of the Interface A SOAP interfaces.
    • WinForms is far easier to use than the MFC library in C++ that we used before.
    • WCF is used in EDAConnect for the implementation of the Interface A SOAP interfaces and for inter-process communication in CIMControlFramework.
    • ADO.NET is the framework used for working with databases. We use it in the CIMStore and CIMControlFramework products.
    • And the best part is that Microsoft keeps improving the .NET Framework. Microsoft released the new 4.0 version of the .NET Framework today, April 12th. It contains many new features. The most exciting is the Parallel Computing Platform (http://msdn.microsoft.com/en-us/concurrency/default.aspx), which includes significant advancements for developers writing parallel and concurrent applications, including Parallel LINQ (PLINQ), the Task Parallel Library (TPL), new thread-safe collections, and a variety of new coordination and synchronization data structures (a small sketch follows this list).
  • Visual Studio (we currently use 2005 and 2008 versions) is an excellent development environment for both C++ and C# but has many features exclusive to C# that we take advantage of:
    • The Unit Testing Framework helps us with the creation and maintenance of test code.
    • C# Code refactoring (http://msdn.microsoft.com/en-us/library/ms379618%28VS.80%29.aspx). Refactoring is a formal and mechanical process used to modify existing code in such a way that it becomes 'better' while preserving the program's intended functionality. In addition to improving a program's overall design, the refactoring process tends to yield code which is far easier to maintain and extend in the long run.
  • C# Language advantages over C++
    • Automatic memory management makes it much easier to write memory-leak-free code.
    • 64-bit programming. There is no need to maintain two separate versions of the source code or to have different builds – the same C# application runs on both 32- and 64-bit versions of Windows and is automatically compiled on the fly into native 32- or 64-bit code.
    • Performance. Contrary to the common belief that C# is slower than C++, we've found that when features like immutable objects, lock-free containers, and automatic memory management are used together, applications written in C# are faster than similar applications written in C++.
  • There are still areas where C++ is better than C#
    • Application startup performance. Because C# applications are compiled at run time, on the fly, they take longer to start.
    • C++ templates are still more powerful than generics in C#.
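
As a small, self-contained illustration of the Parallel Computing Platform mentioned above (not code from any Cimetrix product), the sketch below uses the Task Parallel Library's Parallel.For and a PLINQ query; the loop body is just placeholder work.

```csharp
// Minimal sketch of two .NET 4.0 parallel features: the Task Parallel Library
// (Parallel.For) and PLINQ (AsParallel).
using System;
using System.Linq;
using System.Threading;
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        // TPL: run loop iterations concurrently across available cores.
        Parallel.For(0, 10, i =>
        {
            Console.WriteLine("Processing item {0} on thread {1}",
                i, Thread.CurrentThread.ManagedThreadId);
        });

        // PLINQ: evaluate a LINQ query in parallel.
        long sumOfSquares = Enumerable.Range(1, 1000000)
            .AsParallel()
            .Select(n => (long)n * n)
            .Sum();

        Console.WriteLine("Sum of squares: {0}", sumOfSquares);
    }
}
```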

All these advantages, especially the development time savings, are the reason why we use and will keep using C# at Cimetrix.


Topics: Programming Tools, WCF, Microsoft, .NET

The Tech Ahead

Posted by Cimetrix on Mar 9, 2010 4:00:00 AM

by Bill Grey,
Director of Research and Development

2009 was a tough year, and it is good to see the semiconductor industry coming back. With development projects ramping up, here is a peek at the new technologies coming out this year:

AMD has some new 45 nm Phenom II and Athlon II CPUs out and has the 6-core 45 nm Thuban CPU coming out later in Q2. 2011 will follow with the Llano 32 nm quad-core APU and a 32 nm Bulldozer-core CPU called Zambezi with up to 8 cores.

Intel has 32 nm rolling strong with the release of the Clarkdale CPU with 2 cores this quarter. They will follow up with the Gulftown processor around mid-year with 6 cores.

It doesn’t look like processing power will be much of a problem any more. =)

For developers, Microsoft will release Visual Studio 2010 and .NET 4.0 in April. More information may be found at http://msdn.microsoft.com/en-us/library/bb386063(VS.100).aspx.

Among the changes that got me excited are:

  • better support for parallel code development and debugging
  • debugging of mixed-mode native and managed code on 64-bit operating systems
  • the Visual F# programming language
  • reference highlighting in the editor (finally!)
  • call hierarchy navigation for C# and C++
  • box selection for copy/paste (finally!)
  • .NET background garbage collection instead of concurrent garbage collection for better performance
  • .NET tuple objects for structured data (a couple of these additions are sketched after this list)
  • .NET memory-mapped files (shared memory)
  • .NET String.IsNullOrWhiteSpace method indicates whether a string is null, empty, or consists only of white-space
  • Managed Extensibility Framework (MEF) to build extensible and composable applications
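
For a quick feel of two of the smaller .NET 4.0 additions listed above, here is a minimal sketch; the “ChamberPressure” value is just an illustrative placeholder.

```csharp
// Minimal sketch of two .NET 4.0 additions: Tuple objects for lightweight
// structured data, and String.IsNullOrWhiteSpace.
using System;

class Net40Sketch
{
    static void Main()
    {
        // Tuple<T1, T2>: group related values without declaring a class.
        Tuple<string, int> reading = Tuple.Create("ChamberPressure", 760);
        Console.WriteLine("{0} = {1}", reading.Item1, reading.Item2);

        // True for null, empty, or whitespace-only strings.
        Console.WriteLine(String.IsNullOrWhiteSpace("   "));    // True
        Console.WriteLine(String.IsNullOrWhiteSpace("GEM300")); // False
    }
}
```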

Office 2010 comes out the first half of this year with some new collaboration features such as co-authoring and PowerPoint presentation broadcasting: http://www.microsoft.com/office/2010/en/whats-new/default.aspx.

On the Windows side, Windows 7 is here in 32-bit and 64-bit flavors and is being adopted much faster than Vista was when it was released. Windows Server 2008 R2 is out for the server platform. For embedded systems, Windows Embedded Standard 2009 has replaced Windows XP Embedded, and a new version, Windows Embedded Standard 7 (based on Windows 7), is on the way.

How many semiconductor manufacturing tools will need, or will move to, a 64-bit operating system this year?

One item that could spur the move to Windows 7 is a change in hard drive technology that is not targeted to be supported by Windows XP. Hard drives are moving from 512-byte sectors to 4-kilobyte sectors, which will be incompatible with Windows XP. Some of the smarter drives may have a compatibility mode for Windows XP, but at the cost of reduced performance. This change will start in early 2011. http://news.bbc.co.uk/2/hi/technology/8557144.stm

Would you be interested in learning more about these emerging technologies and their effect on Cimetrix products? If there is a significant interest, Cimetrix plans to host a webinar on this topic in the near future.

Topics: Semiconductor Industry, Programming Tools, Windows 7, Microsoft, .NET, Visual Studio 2010, Office 2010

New Year, New Operating System

Posted by Cimetrix on Jan 6, 2010 10:25:00 AM

by Brent Forsgren,
EFA Practice Manager

It is the start of a new year, thank goodness! I wonder what is in store for my Global Services team this year. Last year was a tough year for the semiconductor market, but early indications and market experts are saying that 2010 should be much better than 2009.

On top of the market’s expected upward turn, Microsoft released Windows 7 in late 2009 to replace the not-so-popular Windows Vista. I expect that a significant portion of our customers’ equipment sales this year will be of existing technology and software. But for those of our customers who will be developing and selling new tools and software: will you jump to Windows 7, or will you wait for it to prove itself in the marketplace? Additionally, if you switch to Windows 7, will you also make the jump to a 64-bit architecture, or will you stay with the aging 32-bit architecture?

We welcome your comments and feedback! I would love to hear your thoughts and plans.  Please comment below or email me at brent.forsgren@cimetrix.com.

Topics: Customer Service, Doing Business with Cimetrix, Product Information, Global Services, Windows 7, Microsoft
