How to Optimize Imaging with Nikon Elements Software, Part 2

June 18, 2020

Using Imaging and Metrology Operations

In this second session of a three-part series, Technical Sales Representative James Bristol demonstrates advanced imaging and metrology operations within the Nikon Elements software platform. Nikon’s Elements software can be used with Nikon or third-party microscope cameras to improve your imaging. Learn how to use its features.

Did you miss Part 1, The Basics? You may view it here.










    Transcript

    Charles Zona (CZ): Good afternoon, and welcome to another McCrone Group webinar. My name is Charles Zona, and today we welcome back Jim Bristol of McCrone Microscopes & Accessories. Jim is going to talk to us today about How to Optimize Your Imaging with Nikon Elements Software. This is the second installment of a three-part series, and if you missed Part 1, there’s a recording of it on our website. Simply go to our resources tab, and click on Webinars. Part 3 of this series will be presented in the next few weeks.

    Before we get started, I would like to tell you a little bit about Jim’s background and experience. Jim is a technical sales representative with McCrone Microscopes & Accessories, with more than 30 years’ experience in optical instrumentation and software sales. Jim has successfully sold and supported complex imaging systems, such as laser scanning and spinning disc confocal microscopes, integrated systems for cell stereology and neuron tracing, as well as scanning electron microscopes and thermal microscope systems.

    Jim will field questions from the audience immediately following today’s presentation.

    This webinar is a little bit different in that it is a recording, and not taking place live. However, you can still ask questions by typing them into the questions field. We will answer all of your questions individually in the coming days. This webinar will be available on The McCrone Group website under the Resources tab. And now, I will hand the program over to Jim.

    Jim Bristol (JB): Thank you, Chuck, and thank you, everyone, for joining us today for our second session on Nikon Elements software, where we will discuss imaging operations.

    Extended Depth of Focus: The EDF function selects the in-focus areas from multiple z-stack images and produces one all-in-focus image. The composite image can be viewed and rotated as a virtual 3-D image, as it contains the z-axis information. That z-axis information allows us to use the dataset to create, display, and measure in three dimensions.
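    To make the idea concrete, here is a minimal focus-stacking sketch in Python with OpenCV and NumPy. It is not the Elements EDF module, only an illustration of the principle: for each pixel, keep the slice where local sharpness (the Laplacian response) is highest, and the index of that winning slice, times the z step, serves as a rough relative height map. The 3.1 µm default mirrors the step used with the 20X objective in the demonstration.

```python
import numpy as np
import cv2

def focus_stack(frames, z_step_um=3.1):
    """Fuse a z-stack (list of same-size BGR images) into one all-in-focus image.

    For every pixel we keep the slice with the strongest local Laplacian
    response; the winning slice index times the z step is a rough height map.
    """
    gray = [cv2.cvtColor(f, cv2.COLOR_BGR2GRAY) for f in frames]
    sharpness = [cv2.GaussianBlur(np.abs(cv2.Laplacian(g, cv2.CV_64F, ksize=5)),
                                  (9, 9), 0) for g in gray]          # smoothed focus measure
    best = np.argmax(np.stack(sharpness), axis=0)                    # sharpest slice per pixel
    rows, cols = np.indices(best.shape)
    fused = np.stack(frames)[best, rows, cols]                       # pick the winning pixels
    return fused, best * z_step_um                                   # image + relative heights (µm)
```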

    And, lastly, in this segment, we will show you the Elements Moviemaker to display the 3-D dataset.

    Extended Depth of Focus, 3-D Measurements and Moviemaker

    Last session, we showed you a very simple way to perform EDF on a manual microscope. Today, we have motorized the three axes of this microscope with an X-Y stage and a Z motor, and we’re going to show you additional ways to take EDF images and obtain z-axis information in that process.

    The first way is with this real-time EDF module, which comes as an option with the software. It allows us to set the top and bottom of the stack, and it allows us to align images. This is important if you’re using a stereomicroscope, because with many stereomicroscopes the optical axis is slightly off perpendicular, and when you move your Z height up and down, you are changing your X position as the camera sees it. You need to be able to realign those images to get a good stacked EDF image.

    So let’s go ahead and capture an EDF image here. I’m going to focus on the top of my image. I’m going to go all the way down, focusing down to the bottom, and focus on the last thing that comes into focus. I have a 3.1 micron step, which is optimal for this 20X objective. I could change that to a 1 micron step, but then I would have many more images, around 30, so I’m going to stay with what we have here, 3.1. We could include HDR here, but we’re going to talk about HDR a little later and will bring it up then. And then we’re just going to go ahead and tell the software to run this module.

    As you can see, we’re slowly moving down through the image. We are focusing from the top to the bottom because, since we are working with an upright microscope, we want to move the stage against gravity. And now we should come up here with our finished image. There it is. You can see that everything is now in one focal plane.

    Another way to do this is also under Applications: you can do a real-time EDF manually. You start the process and it sets up two different screens, and because we have a Z motor and we’re moving our z-axis using the motor, that information is again calibrated, so we can obtain the same image manually rather than having to do it through the automated module we just saw.

    And the last way, which we’ll talk about after we’ve finished with our measurements and Moviemaker, is how to do this manually when you don’t have a Z motor but you do have the ability to use the graduations on your fine focus knob.

    But now that we have our EDF image, we can project a 3-D surface view. By clicking this button, we now have a surface view that shows us some Z information.

    One of the things I can do software-wise is stretch the Z scale so the surface appears a little thicker, a little bit deeper. Now you can see that we can move this image around, and we can see that there is some depth. From this information you can see that there has been some etching done, and we have what appears to be about 10 or 11 microns’ worth of Z depth information. I could go a little bit further and take the scaling to 500X to make it a little deeper and a little easier for you to see, but then again, this is just a manipulation of the display. It does not change the data.

    We can go to the default view, and now what I’d like to do is show you some of the other things we can do with this. We can show it bottom to top, so it’s a positive, or we can show it top to bottom. We can put a wire surface grid over it so we get some idea, from a modeling standpoint, of what this looks like. We have VRML and STL exports of this data. VRML is Virtual Reality Modeling Language, which is most often used to show these types of 3-D events and 3-D images on the web, and STL is the stereolithography file format, which is most often used when you want to export the data to a CAD drawing or CAD application.
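    As a rough illustration of what an STL export contains, here is a bare-bones sketch (not the Elements exporter) that writes a height map, such as the relative heights from the focus-stacking sketch above, out as an ASCII STL surface. The pixel size and the random test data are placeholders.

```python
import numpy as np

def heightmap_to_stl(z_um, xy_um, path):
    """Write a height map (microns) as an ASCII STL surface: two triangles per
    pixel cell, with x/y spacing given by the pixel size in microns."""
    h, w = z_um.shape
    point = lambda r, c: (c * xy_um, r * xy_um, z_um[r, c])
    with open(path, "w") as f:
        f.write("solid surface\n")
        for r in range(h - 1):
            for c in range(w - 1):
                for tri in ((point(r, c), point(r + 1, c), point(r, c + 1)),
                            (point(r + 1, c), point(r + 1, c + 1), point(r, c + 1))):
                    # constant placeholder normal; most viewers recompute normals
                    f.write("facet normal 0 0 1\n  outer loop\n")
                    for x, y, z in tri:
                        f.write(f"    vertex {x:.3f} {y:.3f} {z:.3f}\n")
                    f.write("  endloop\nendfacet\n")
        f.write("endsolid surface\n")

# Placeholder data: a 64 x 64 surface with ~10 µm of relief and 0.5 µm pixels
heightmap_to_stl(np.random.rand(64, 64) * 10.0, xy_um=0.5, path="surface.stl")
```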

    Now with my mouse, I can zoom this back and with a right-click, I can see it.

    Let’s go ahead and look at some 3-D information here. A Z profile popped up automatically, and I’m now extending it a little bit. You can see that we are now showing, pretty much from where that red line is, exactly where we are in the 3-D, or z-axis, measurement. If I hold Control and click my mouse, I can move that line and show different portions, different areas.

    One of the other things that we can do is rotate the axes, so we can change the plane that we might be looking at.

    Now that we have this Z graph, this Z profile, there are things we can do with it. We can make horizontal and vertical measurements, we can measure areas under a particular part of the curve by marking things out, and we can get full width at half maximum information and angles. I’m just going to do a couple of simple measurements here. It looks like this is one of the tallest spots, so I’m going to do a vertical measurement. Click, and it shows up over here as a Z length. I’d also like to know the span here, so I can do a horizontal measurement: I go from this point to this point and come up with the information.
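    For a sense of what those profile measurements amount to, here is a hedged NumPy sketch done outside Elements on a 1-D Z profile: the peak height above a baseline (the vertical measurement), the lateral span between two chosen points (the horizontal measurement), and the full width at half maximum. The profile, pixel pitch, and endpoint indices below are made up for illustration.

```python
import numpy as np

def profile_metrics(z_um, x_step_um, i1, i2):
    """Basic measurements on a Z profile sampled every x_step_um microns."""
    z = np.asarray(z_um, dtype=float)
    baseline = np.median(z)                          # simple baseline estimate
    height = z.max() - baseline                      # vertical measurement (Z length)
    span = abs(i2 - i1) * x_step_um                  # horizontal measurement between two points
    half = baseline + height / 2.0
    above = np.where(z >= half)[0]
    fwhm = (above[-1] - above[0]) * x_step_um if above.size else 0.0
    return {"height_um": height, "span_um": span, "fwhm_um": fwhm}

# Illustrative profile: ~10 µm bump on a 2 µm baseline, sampled every 0.5 µm
profile = 2.0 + 10.0 * np.exp(-np.linspace(-3, 3, 200) ** 2)
print(profile_metrics(profile, x_step_um=0.5, i1=60, i2=140))
```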

    So now, all of this can eventually be exported, and under exporting, we have the ability to do a variety of exports: just the Z profile, just the graph measurements, or just the graph itself. We have the ability to control these through our export settings, but I’m just going to go ahead and say Export All to Excel.

    Click, and now Excel has opened up. And what do we have? We have an Excel report, we have the graph placed on it, it gives us the measurements that we had here, and all of the Z profile information. So, that’s our EDF and 3-D measurements, using the Z motor.

    Now, you can also do this and get the Z information even if you don’t have a Z motor by going to the Acquire function, under Capture Z, and Capture Manually, and this brings up a screen where you have the Z position that you’re going to move, and in this case, you’ll hopefully move one micron per increment or two microns per increment on your fine focus knob; I would put in “1.0”, and then you manually take one frame at a time. Then, it’s all put together, as you saw in the automated version.

    So, we’ll cancel out all those images. Now, for Moviemaker, which is the last thing in this segment that we’re going to talk about, I’m going to bring up an image that I’ve worked with before, one that I think will show this a little bit better.

    So, this is a part of a stepped product tool. As you can see, we can maneuver around, and we’re looking at about 1.5 mm. I’m going to rotate this so that it kind of looks like a series of steps, and then I’m going to zoom away. I’ll click on my Moviemaker, and up comes a strip. I’m going to go to my settings and change it from a 5 second to a 10 second video, at 20 frames a second. That expands my file, and now I’m going to zoom back. I want to click on what we call an Update Key and say this is my starting point. I’m going to bring it forward a little bit. Update Key, 2.5 seconds. Bring it forward a little bit more. Update Key, let’s do 4.5 seconds. Now I’m going to move it down and begin to rotate it. I’ll do an Update Key and make that 6.5 seconds. I’ll now do a left-click and bring it all the way to this position. Update Key, 7.5 seconds. And now I’ll bring it up so that it’s almost perfectly side-on. Update Key, make that 9 seconds. Now I can go ahead and play that movie. As you can see, it very smoothly works through the steps that we gave it. So that is EDF, 3-D measurement, and Moviemaker.
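    The Update Key workflow is keyframe animation: the software stores the view at each key time and interpolates everything in between. Here is a toy NumPy sketch of that idea (not the Elements API), using the 10-second, 20 frames-per-second clip and the key times from the demonstration; the rotation angles are made-up illustration values.

```python
import numpy as np

fps, duration = 20, 10.0                           # 20 frames/s over a 10 s clip
key_times = [0.0, 2.5, 4.5, 6.5, 7.5, 9.0]         # seconds where Update Key was pressed
key_angles = [0.0, 15.0, 35.0, 60.0, 90.0, 120.0]  # hypothetical view rotation at each key

t = np.arange(0, duration, 1.0 / fps)              # timestamp of every rendered frame
angles = np.interp(t, key_times, key_angles)       # smooth in-between values per frame
print(f"{t.size} frames; angle at 5.0 s = {angles[int(5.0 * fps)]:.1f} degrees")
```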

    High dynamic range image acquisition, or HDR, creates an image with appropriate brightness in both the dark and bright regions of a sample by combining multiple images acquired with different exposure settings.

    HDR or high dynamic range imaging. Many times, you have an image, like the live image that you see before you, where there are very dark areas, and light areas, and the software and the auto exposure of the camera have difficulty deciding what the exposure should be, given the fact that we have these light and dark areas. If we were to take a region of interest here, and move around and do our auto exposure based on that region of interest, then move it to the dark area, we can see that we get drastically different exposure settings and a drastically different looking image. HDR allows us to balance out the light and the dark areas so that we can see both areas equally well. Applications, Capture HDR Image, a small module pops up. We look again at what 30 milliseconds gives us, and we can see good detail in the dark area. We look at what 4 milliseconds gives us, and we can see the light area and its detail very well. We can choose the number of images we want to take between those two exposures, those high and low exposures. I’ll stick with five. Click OK. And here is the resulting image, where we now see the detail of the dark, as well as the detail of the light, together in the same image.
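    For a sense of how bracketed exposures can be combined in code, here is a short sketch using OpenCV’s Mertens exposure fusion. It is not the Elements HDR module, and the file names are placeholders standing in for five frames captured between roughly 4 ms and 30 ms, as in the demonstration.

```python
import cv2
import numpy as np

# Five bracketed exposures, darkest to brightest (placeholder file names)
files = ["expo_04ms.png", "expo_10ms.png", "expo_17ms.png", "expo_24ms.png", "expo_30ms.png"]
frames = [cv2.imread(f) for f in files]

fusion = cv2.createMergeMertens().process(frames)      # weights each pixel by how well exposed it is
out = np.clip(fusion * 255, 0, 255).astype(np.uint8)   # fusion result is roughly in [0, 1]
cv2.imwrite("hdr_fused.png", out)
```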

    HDR can be combined with Image Stitching or Large Image Grab, it can be combined with EDF. It’s an excellent module to have, and is an option in the Elements Software package.

    Image stitching, or Large Image Acquisition

    Large image acquisition generates a single high magnification, wide field-of-view image by automatically stitching multiple adjacent frames from a multipoint acquisition using a motorized stage, or from multiple single images captured manually.

    Image stitching, or as it is known in the Nikon Elements software, Large Image Grab. Here, with my 5X objective in place, I’m looking at the top of a surface mount device, yet I can’t get the entire device into a single field of view. This is where Large Image Grab can be a benefit.

    Manual Large Image Grab, a small window pops up; I’m going to verify that I have a light background for my shading. I’m going to tell the system to autocapture. So as I reach the overlap, it will automatically take an image for me and make it more efficient.

    I click on Start, capture that first image, and now I go ahead and move my stage down. You can do this with a manual stage, or, as I am here, manually with an automated stage.

    Here’s image one, and there’s image two. I click Finish.

    We now have a single image of that surface mount device. With everything in one field of view, we can scan around and look, but as I zoom in, you can see we begin to lose detail because of the limited resolution of a 5X objective.

    We go back to the live image and switch to a 20X objective. We’ll center this up in the corner here, and we’ll go to Acquire, Scan Large Image, using a motorized X-Y stage. This large window pops up for us to work with. As I said earlier, it’s always easier to work with two monitors in these types of scenarios, because of all of the windows that pop up and are necessary to operate the functions. Here, we’re going to use the top-left and bottom-right limits of this device to set the parameters the stage will need in order to capture all of the device. So we’re going to click left and top, move down here, and click bottom and right.

    Now, there are options. We can combine Large Image Grab with HDR and EDF; we’ll not do that at this time. We can also focus manually at the start, or refocus at every field or every other field for focusing purposes. I’m going to do the first image straight up, with just a focus at the beginning: set, move this window, click Scan, verify that I have the right objective in place, move, and adjust my focus so I’ve got the focus that I want. And now the system begins to build, in a snake raster format (left to right, down; right to left, down; left to right, down; and so on), the tiles that compose this Large Image Grab, or Image Stitching, procedure. As you can see, we’re almost done, about ten seconds to go. This is done rather quickly and very easily. You can do this manually as well, as we did earlier; for something as large as this, it just takes a little more time. Shading correction is done, and the image is composed. Let me move this over.
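    The snake-raster plan just described can be sketched in a few lines of Python; this is an illustration, not the Elements implementation. Given the left/top and right/bottom limits and the camera field of view, the positions run left to right, step down, then right to left, and so on. All numbers below are illustrative, not taken from the instrument.

```python
def snake_raster(left, top, right, bottom, fov_x, fov_y, overlap=0.15):
    """Stage positions (same units as the limits, e.g. microns) covering the region
    in a snake raster: left to right, step down, right to left, step down."""
    step_x, step_y = fov_x * (1 - overlap), fov_y * (1 - overlap)
    xs = [left + i * step_x for i in range(int((right - left) / step_x) + 1)]
    ys = [top + j * step_y for j in range(int((bottom - top) / step_y) + 1)]
    positions = []
    for j, y in enumerate(ys):
        row = xs if j % 2 == 0 else list(reversed(xs))   # reverse direction every other row
        positions.extend((x, y) for x in row)
    return positions

# Illustrative: a 3 x 2 mm region, ~0.65 x 0.48 mm camera field, 15% overlap
print(len(snake_raster(0, 0, 3000, 2000, fov_x=650, fov_y=480)), "tiles")
```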

    You can now see, as we zoom in, that we have much better detail. But if we focus on the top left here, where we started, everything is in focus. If I go to the bottom of the image and we zoom in, you’ll see that we’re somewhat out of focus. This is where the ability to do either an EDF or step-by-step focusing improves that capability. For the sake of time, I’ve gone ahead and captured a couple of those already. Here is that same image with a single focus done at each field for the best focus on the surface. You can see it’s very sharp here, and as we move down towards the bottom, it’s equally sharp and in focus.

    We can also do an EDF, because we have these etchings, this depth, in the top of the surface mount device that we can bring into play as well. So we did one with EDF, and as you can see, everything is in focus from top to bottom. That is Image Stitching, or Large Image Grab as we know it in the Nikon Elements software.
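    As a hedged sketch of the stitching step itself, here is how overlapping tiles can be combined with OpenCV’s high-level stitcher in scans mode. This is an illustration, not the Elements Large Image module, and the tile file names are placeholders.

```python
import cv2

tiles = [cv2.imread(p) for p in ["tile_r0c0.png", "tile_r0c1.png",
                                 "tile_r1c0.png", "tile_r1c1.png"]]

stitcher = cv2.Stitcher_create(cv2.Stitcher_SCANS)   # SCANS mode suits flat, translated tiles
status, mosaic = stitcher.stitch(tiles)
if status == cv2.Stitcher_OK:
    cv2.imwrite("large_image.png", mosaic)
else:
    print("stitching failed with status", status)
```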

    Live Compare enables easy image comparison between a sample image and a live image. Live observation side-by-side with a paused live image is also available in a split screen mode.

    Live Compare

    The Live Compare module is just that: a module to compare a live image to a captured image, usually of the same component at the same magnification. This is a very helpful operation in quality control and manufacturing.

    Here we have a simple piece of screening material that we have captured, and now I’m going to use Acquire, Live Compare, Transparency, and we have a live transparency of the same image overlaid on that captured image, which we’re using as a master. We can tell whether the live image matches up properly to the captured image. Instead of a transparency, we can do this in pseudo red and green, so the color makes it a little bit easier to distinguish the differences. We can also do it in a difference mode, which gives us more of a black-and-white view and makes it easier to tell whether things are exactly the way they should be or not. So, this is Live Compare, a great tool for quality control and manufacturing.
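    The three comparison modes have simple image-arithmetic analogues. Here is a rough sketch with OpenCV and NumPy, assuming a saved master image and a paused live frame of the same field; the file names are placeholders, and this is not the Elements Live Compare module.

```python
import cv2
import numpy as np

master = cv2.imread("master.png")        # captured reference ("master") image
live = cv2.imread("live_frame.png")      # current frame of the same field

# Transparency mode: 50/50 blend of master and live
blend = cv2.addWeighted(master, 0.5, live, 0.5, 0)

# Pseudo red/green mode: master in red, live in green; matching detail appears yellow
red_green = np.dstack([np.zeros(master.shape[:2], np.uint8),
                       cv2.cvtColor(live, cv2.COLOR_BGR2GRAY),
                       cv2.cvtColor(master, cv2.COLOR_BGR2GRAY)])

# Difference mode: bright pixels mark where live and master disagree
difference = cv2.absdiff(master, live)
```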

    Auto Measurement

    Auto Measurement counts the objects that are extracted from the image by creating a binary layer at a threshold. The results can be listed or exported as text or an Excel file. It is possible to save and reuse thresholds and parameters.

    Auto Measurement, also known as object counting. Here we set up our computer screen with an image on which we wish to count objects, our Automated Measurement tab, which allows us to set the protocol and the steps that we’ll use for counting, and our Automated Measurement Results window, which will automatically tabulate the results as we set it up. This is a black and white image: black objects against a white background. We could also be working with a color image, where we would use the pixel classifier in order to separate those objects from the background. But here, we’ll stick with a simple binary threshold.

    One of these three buttons here gives us a very good, simple initial thresholding. That looks fairly good, but there’s still a little black there, so I can make adjustments here, back and forth, by sliding this line. This looks to be pretty good. On this image, there are some artifacts that are nowhere near the objects we want to count, so we want to eliminate those by cleaning up the background, and we use this clean-off tool. We toggle up, and you can see that some of these change color. So now you can see that these marks here are no longer the same color red; they’re black, meaning they’ll be excluded from any of our measurements. We also have some objects that are connected to each other, not standing alone singly, so we can separate those by putting a separation line on, and you can see, if I zoom in on the image a little bit, that we’ve separated these two objects.

    We may also have a size factor: there are certain small objects we don’t want to count. So we can apply a size classifier and slide back and forth, and you can see these objects change color as we eliminate them. If we work in this area right here, this seems to be pretty good. Now, there are some objects here that are black based on size, but they’re also touching the sides. What we can do there is put in what’s called a measurement frame. That measurement frame gives us certain properties we can get to by going to our settings and clicking Measurements, Measurement Frame, and excluding objects that touch the frame border.

    So now we have cleaned up our background. We have separated those objects that were joined together. We’ve eliminated the smaller particles or objects that we do not want to count. And we’ve created a measurement frame excluding anything touching the edge of the frame.

    So at this stage of the game, we have done quite a few steps, and let’s go ahead and see what we end up with. So now, under Object Data, you see the individual objects, you see their area and you see their perimeter. Then down below, we see the mean, the standard deviation, and the maximum for each of those columns. We also have object statistics, which gives us the total number of objects we counted; the average area, and the average perimeter.

    Now we can export this data to Excel. We can set these parameters up with any one of these checkboxes, or any grouping that we want, but to keep it simple here, we’re going to first export the object statistics to Excel. Excel comes up, and as you can see, we have our source and we have our field numbers.

    Let me open these up a little bit so you can see them more easily. I realize it’s a small screen, but we have our mean, standard deviation, our areas, and total counts of objects. Now we can minimize that, go to Object Data, and export our object data. Under object data are each of the individual objects, with the statistics (mean, standard deviation, and max) at the bottom. So that is a quick tutorial on doing automated measurement, or object counting, utilizing NIS Elements.
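    For readers who want to see the same kind of workflow in code, here is a hedged scikit-image sketch (not the Elements module) covering the steps above: threshold, clean small artifacts, drop border-touching objects, separate touching objects with a watershed, then tabulate area and perimeter and export to Excel. The file name and size cutoff are placeholders.

```python
import pandas as pd
from scipy import ndimage as ndi
from skimage import io, filters, measure, morphology, segmentation

img = io.imread("dark_objects.png", as_gray=True)               # placeholder image

binary = img < filters.threshold_otsu(img)                      # dark objects on a light background
binary = morphology.remove_small_objects(binary, min_size=50)   # size classifier / background clean-up
binary = segmentation.clear_border(binary)                      # exclude objects touching the frame

# Separate touching objects: watershed on the distance transform
distance = ndi.distance_transform_edt(binary)
markers = measure.label(morphology.h_maxima(distance, 2))
labels = segmentation.watershed(-distance, markers, mask=binary)

table = pd.DataFrame(measure.regionprops_table(
    labels, properties=("label", "area", "perimeter")))
table.to_excel("object_data.xlsx", index=False)                 # like Export All to Excel
print(len(table), "objects; mean area:", table["area"].mean())
```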

    Macro Commands

    Macro commands allow you to consolidate individual steps that you might be using to adjust an image or to create an operation.

    You can create a one button tool that allows you to carry those out in a very efficient and effective manner. This works very well when you’re doing repetitive measurements, or repetitive operations on a series of images.

    Macro commands. Here, we’ve set up the same screen that we used for auto measurement. A macro command allows you to automate a number of steps that you would normally do manually. While a macro takes a little bit of time to set up, it’s well worth the effort when you’re doing multiple operations over a period of time.

    Under the macro heading here on the toolbar, we’ll go under Command. You can see all of the individual types of operations you have, just for image adjustment, annotations, converting the image, flipping the image, and so on. It takes a while to set one up, so we did one ahead of time here for thresholding. We’re going to take the same image that we just did our auto measurement on and automate those steps to make things a little more efficient. When you’re working on a series of images in the same batch and doing multiple repetitions, working with a macro makes things much more efficient and much more straightforward.

    So if I click on my Threshold button from the Macro panel, you can see that my image has quickly been thresholded. I have my measurement frame on it. I have certain artifacts that have already been eliminated, and certain objects that have been eliminated based on their overall size. Now, we specifically set this up so that we’re missing a few things, and what we’re missing here is that we still have some objects touching the border that have not been excluded. So we can very quickly make some slight adjustments here, and you can see that with just a slight adjustment of our thresholding, all of our objects touching the border are now excluded. We now have our information here in Object Data and here in Object Stats, and they can also be exported.

    We can also make slight adjustments to the size if we want to. So if I take the size down to 10 and hit Enter, you can see we can bring some of those objects in. You still have the individual adjustments; it’s just that by using the macro command, you can automate, and make more efficient, operations that you’re doing over and over again.
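    A minimal sketch of the macro idea outside Elements: wrap a fixed recipe of the steps above in one function and run it over a folder of images, so the repeated work becomes one call per batch. The folder name and size cutoff are placeholders.

```python
from pathlib import Path
import pandas as pd
from skimage import io, filters, measure, morphology, segmentation

def count_objects(path, min_size=50):
    """One fixed 'macro': threshold, clean, exclude border objects, count."""
    img = io.imread(path, as_gray=True)
    binary = segmentation.clear_border(
        morphology.remove_small_objects(img < filters.threshold_otsu(img), min_size))
    labels = measure.label(binary)
    areas = [r.area for r in measure.regionprops(labels)]
    return {"image": path.name, "count": len(areas),
            "mean_area": sum(areas) / len(areas) if areas else 0.0}

rows = [count_objects(p) for p in sorted(Path("batch_images").glob("*.png"))]
pd.DataFrame(rows).to_excel("batch_summary.xlsx", index=False)
```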

    Again, I’d like to thank Ryan McGaha of Nikon Metrology for his support in preparing this presentation. Join us for our next Nikon Elements software webinar, the third of our three-part series on Advanced Operations, where we will focus on Layer Thickness Measurement and Measurement Sequencer. Thank you for joining us.

    Please send me any of your questions or comments. I will respond to each of you and look forward to hearing from you. Thank you again.

    CZ: Okay, thank you for attending today’s webinar. If you have any questions for Jim, go ahead and type them into the questions field; we’re going to leave it open for a while, so type your questions in and we will answer them in the coming days after the presentation. I would like to thank Jim for doing this presentation, and all of you who tuned in today. We really appreciate your time. And please check out our Webinars page for upcoming McCrone Group webinars. Thank you.

    View Part 1 and Part 3 in this webinar series.
