Dave Hemker, CTO at Lam Research, sat down with Semiconductor Engineering to discuss key issues on the process and manufacturing side, along with developments that will reshape the semiconductor industry in the future. What follows are excerpts of that conversation.
SE: One of the big discussion topics these days is 3D NAND. How far can we go with 3D NAND? How many layers can we have?
Hemker: We’re at 32 layers now. People are talking about going to 48. In the labs, they’re looking at 60-plus or 90-plus. What’s fun about being at this stage is we don’t know, and the answer always is more surprising than you would expect. You do something and you think it’s amazing, and five years later you’re still doing it and improving it.
SE: 3D NAND typically is associated with data centers. Will it branch out?
Hemker: In data centers it’s more expensive than spinning hard drives. But when you look at the power consumption, the density and the scalability, it starts to look very attractive. It will hit 100% saturation in laptops. Even if you’re working on a desktop PC, your boot drive is going to be SSD and you will have a spinning drive to store things. The whole memory space is interesting. There are potential future inflections in memory, as well. If you look at 3D XPoint from Micron and Intel, it’s an indication of the way things can go in the future. In the past there was just memory. Then NAND came in and you had DRAM and NAND. There was volatile and non-volatile, and each had its distinct uses. It was simple to differentiate because they’re far apart in terms of density, cost and durability. Now you have a memory that is non-volatile, denser than DRAM but faster than traditional NAND, and cost-competitive. It fits right in this memory continuum, which they’ve dubbed storage-class memory. If it were as fast as DRAM and as inexpensive, it would be the universal memory. We’ll keep aiming for that.
SE: Where do we start running into problems that become intractable? Are we good into the Angstrom world?
Hemker: Once you push beyond 3nm, you’re into the Angstrom world. From my perspective, we’re living on this exponential curve of Moore’s Law. It has always been this steep when you’re on it. But to stay on it, we are having to turn more knobs simultaneously. If you look at logic transistors, there’s a good road map. It’s seven years out, and Intel always has the road ending in a fog bank. Still, just because you can’t see it doesn’t mean it ends. Predicting the end of Moore’s Law is like predicting a bear market. Eventually you’re going to be right. I’ve talked about the broadening of Moore’s Law. You see one node hanging on longer. So when you look at each section of where we have to make advances, you can slice up the fin on a finFET so it becomes a gate-all-around nanowire. There are then options to go to vertical nanowires. The 3D NAND development is an interesting one because it’s a kind of pathfinder for us. There may be options where you keep doing what you’re doing, but you then stack. Maybe solving that problem isn’t quite as difficult as the other ones. Technically, there is at least another 10 years. From an economic standpoint, that’s a whole other thing that’s obviously very important.
SE: This used to be the realm of quantum physics, right?
Hemker: Yes, and you’re getting quantum effects here. There are things that are three atoms wide. That’s the challenge for us from an etch and deposition perspective. All of the feature sizes now are on the order of nanometers. The variation is on the order of Angstroms because you have a 10 Angstrom window. Our critical feature control has to be under 0.5nm. That’s 5 Angstroms. A bond length is 2.5 Angstroms. We’re doing atomic-level engineering. That’s one of the new tools we’ll need. You need to peel off one layer in such a way that you have a perfect etch front, the same way you will need perfect conformality.
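To make the tolerances above concrete, here is a minimal unit-arithmetic sketch (assuming only the conversion 1 nm = 10 Angstroms and the figures quoted in the answer): a 0.5nm control target is 5 Angstroms, which is just two of the 2.5 Angstrom bond lengths cited.

```python
# Unit arithmetic for the tolerances quoted above (1 nm = 10 Angstroms).
ANGSTROMS_PER_NM = 10

control_nm = 0.5            # required critical-feature control, in nm
bond_length_angstrom = 2.5  # the bond length cited in the answer

control_angstrom = control_nm * ANGSTROMS_PER_NM
margin_in_bonds = control_angstrom / bond_length_angstrom

print(control_angstrom)  # 5.0 Angstroms
print(margin_in_bonds)   # 2.0 bond lengths
```

In other words, the entire allowed variation spans only two atomic bonds, which is what "atomic-level engineering" means in practice.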
SE: So what do people actually want and need? Is it the processor speed or is it what you’re trying to accomplish?
Hemker: I don’t lose sleep over the need for more computing power or the need for more memory storage. If you look at what’s going on with machine learning, why shouldn’t that be in my pocket in 10 years? I want to be able to pull it out and have a natural language interface. We’ll be looking at Watson on Jeopardy in the future and people will be holding up a small device and saying, ‘This has twice as much as that.’ Whether that’s in 10 years or 20 years we don’t know. You can see this with speech recognition, though. Early on, they were trying to be clever with algorithms, as opposed to storing every word and waveform somewhere so you can use a simple lookup table. We’re on that cusp as well with Alexa Voice from Amazon and Siri. There is still a lot of room for improvement, but this is a big change.
Visit the Semiconductor Engineering website to read the full article.