By Shannon Ghorbani, Mad City Labs
When evaluating nanopositioners and their performance, interpreting published specifications from multiple vendors is always challenging. Differences in definitions or units of measurement can make an apples-to-apples comparison difficult. Additionally, a technical specification is only a starting point. For most users, understanding each specification and how it will affect their real-world application is more critical.
Which Specifications Matter To Your Application?
Comparisons between different companies’ products always involve technical data sheets. Users are often confronted with a confusing array of specifications, compounded by the fact that no “standard” set of parameters exists to enable easy comparison between vendors. Even the same parameter may carry different definitions or measurement units from vendor to vendor. So, how can a user confirm those specifications are accurate or relevant? Are they achievable in a real-world setting, or only under unrealistic environmental or setup conditions? Understanding the story behind the specifications is key to making good choices for the application.
Determining a specification’s importance within a given application begins with understanding the circumstances under which that specification was recorded, as well as the metric by which it is gauged. Among the specifications often quoted for nanopositioners — including resolution, accuracy, and repeatability — closed-loop resolution stands out as uniquely verifiable using a number of different methods. By contrast, consider how the specification of “resolution” is defined by any given vendor. Some vendors describe their instrument’s resolution in terms of position noise or step resolution, without specifying whether those figures are RMS values or peak-to-peak values.
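The distinction matters because the two metrics can differ by a large factor for the same noise trace. As an illustrative sketch (not drawn from any vendor’s data sheet), the short Python example below computes both metrics for a simulated position-noise record; the Gaussian noise level of 0.1 nm RMS is an assumed value chosen purely for demonstration.

```python
import math
import random

def rms(samples):
    """Root-mean-square deviation of a noise trace about its mean position."""
    mean = sum(samples) / len(samples)
    return math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))

def peak_to_peak(samples):
    """Full excursion of the trace: maximum minus minimum."""
    return max(samples) - min(samples)

# Simulated position-noise trace: 10,000 samples of Gaussian noise
# with an assumed 0.1 nm RMS amplitude (illustrative only).
random.seed(0)
trace = [random.gauss(0.0, 0.1) for _ in range(10_000)]

print(f"RMS noise:          {rms(trace):.3f} nm")
print(f"Peak-to-peak noise: {peak_to_peak(trace):.3f} nm")
```

For Gaussian noise of this length, the peak-to-peak figure typically comes out several times larger than the RMS figure, so a vendor quoting peak-to-peak noise may appear to have a “worse” specification than one quoting RMS noise on an identical instrument.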