@HarneyBA node names are chosen so that each step down corresponds to a doubling of transistor density.
I.e. for planar transistors, the drop from 45nm to 32nm doubled the number of transistors per square millimeter.
Of course, the cost of each node drop rises faster than the gain in transistor density
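The "each node halves the area" claim is just inverse-square scaling of the feature size. A back-of-the-envelope sketch (the function name is mine, purely illustrative):

```python
# Ideal planar scaling: transistor density goes as the inverse square
# of the feature size, so 45nm -> 32nm is roughly a 2x density gain.
def density_gain(old_nm: float, new_nm: float) -> float:
    """Ideal area-scaling factor between two planar nodes."""
    return (old_nm / new_nm) ** 2

print(round(density_gain(45, 32), 2))  # ~1.98, i.e. roughly a doubling
```

Real nodes never hit the ideal exactly (45/32 was chosen *because* it approximates sqrt(2)), which is also why post-FinFET node names drifted away from any physical dimension.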
@HarneyBA that doesn't hold up anymore, however, in the era of FinFET transistors. Intel has been FinFET since 22nm, and pretty much every node smaller than that in the industry is also FinFET.
Node size in the FinFET era is also mostly meaningless below 15nm, because it doesn't account for fin pitch or density in any way.
@HarneyBA cost goes up a huge amount from this point forward. Below 10nm it requires new extreme-ultraviolet (EUV) lithography that is only just now coming to fruition.
450mm wafers are also just around the corner, and when they are ready, the ENTIRE industry will move to them at the same time. That will require ENORMOUS capital expense, the likes of which we haven't seen since 300mm wafers came out.
@HarneyBA why move to 450mm, you ask? 450mm yields about 2.25x as many dies per wafer while keeping roughly the CURRENT cost per processed wafer.
We saw the same benefits with each wafer size jump, from 100mm -> 150mm -> 200mm -> 300mm.
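The 2.25x figure is just the ratio of wafer areas; a quick sketch of each historical jump (ignoring edge loss and scribe lines, so these are upper bounds):

```python
# Usable dies scale roughly with wafer area, i.e. with diameter squared.
def area_ratio(new_mm: float, old_mm: float) -> float:
    """Ideal die-count ratio between two wafer diameters (edge loss ignored)."""
    return (new_mm / old_mm) ** 2

for old, new in [(100, 150), (150, 200), (200, 300), (300, 450)]:
    print(f"{old}mm -> {new}mm: {area_ratio(new, old):.2f}x the dies")
```

Note that 300mm -> 450mm gives the same 2.25x as 200mm -> 300mm did; in practice the gain is slightly better than area alone suggests, since edge loss shrinks as a fraction of a larger wafer.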
Of course, that ignores the cost of new equipment and R&D.
@HarneyBA A modern photolithography machine capable of laying patterns for 14nm nodes can easily cost $100M EACH. Most other tools cost around $5M-10M each, too.
@HarneyBA Of course.
It's fun to see hobbyists try to make usable devices with non-industry tools, but it cannot scale without gigantic capital investment if you are starting from scratch. Even if you want to borrow a foundry's fab, you will still need a gigantic financial investment *just* for the photomasks.
Also, for 3D chips my understanding is that they separate the "layers" with a glass-like substance? If they were to only create two layers and pull heat from either side, would that solve the thermal issue while still allowing for shorter connections? Obviously I'm nowhere near qualified to even ask the question properly, so pardon my ignorance.
There is actually no more interest in the industry in doing it, despite the obvious benefits. Equipment manufacturers don't want to end-of-life their current 300mm tech, fabs don't want to completely retool, and we would have to relearn the painful lessons of going up a wafer size again.
Going from 200mm to 300mm wafers was a VERY painful experience for the industry, even more so than going from any other size up to 200mm.
Here's the thing about chips and heat dissipation. Think of it like cooking a turkey. The outside of the turkey is VERY easy to cook, and cooks rapidly, but the core of the turkey takes hours. This is because of the thermal properties of turkey meat.
When cooling something, the opposite is also true. Those very same properties that make heating a turkey slow, also make cooling the inside of a turkey slow.
@AtypicalKernel @HarneyBA Applying that to 3d chips, once you start dealing with a chip that is an order of magnitude thicker than before, the very center of the chip will be very hard to keep thermally regulated.
The very same properties in materials you want for electrical insulation also, unfortunately, tend to come with heat insulation. In metals the two are linked because free electrons carry both charge and heat, and I'm not aware of many practical materials (diamond being the famous, expensive exception) that are great heat conductors and poor electrical conductors at the same time.
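To put a rough number on why stacking hurts: in steady state the temperature rise across each layer is dT = q * t / k (Fourier's law in 1D), so every extra die and dielectric interlayer adds its own dT on top. A toy estimate with illustrative values (the heat flux and conductivities below are typical textbook numbers, not from any real chip):

```python
# Toy 1D steady-state conduction: dT = q * t / k, where q is heat flux
# (W/m^2), t is layer thickness (m), and k is thermal conductivity (W/m/K).
def delta_t(q_w_per_m2: float, thickness_m: float, k_w_per_mk: float) -> float:
    """Temperature rise across one layer, ignoring spreading effects."""
    return q_w_per_m2 * thickness_m / k_w_per_mk

Q = 1e6  # 100 W/cm^2, in the ballpark of a high-end CPU hotspot

dt_si = delta_t(Q, 100e-6, 150.0)  # 100 um of silicon, k ~ 150 W/m/K
dt_ox = delta_t(Q, 1e-6, 1.4)      # 1 um of SiO2 dielectric, k ~ 1.4 W/m/K

print(f"silicon die: {dt_si:.2f} K rise, oxide interlayer: {dt_ox:.2f} K rise")
```

The striking part is that a 1-micron oxide interlayer contributes about as much temperature rise as 100 microns of silicon, and in a stack those rises add up layer by layer, which is exactly the "center of the turkey" problem above.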
Linux geeks doing what Linux geeks do...