Advanced Chip Packaging Satisfies Smartphone Needs
Clever chip packaging means mobile devices can be
smaller and smarter
By
Pushkar Apte, W. R. Bottoms, William Chen, George Scalise
Posted 28
Feb 2011 | 16:20 GMT
Illustration: Harry Campbell
We rely on our mobile devices for
an almost comically long list of functions: talking, texting, Web surfing,
navigating, listening to music, taking photos, watching and making videos.
Already, smartphones monitor blood pressure, pulse rate, and oxygen
concentration, and before long, they'll be measuring and reporting
air-pollutant concentrations and checking whether food is safe to eat.
And yet we don't want bigger devices or decreased battery life;
the latest Android phones, with their vivid 4.3-inch screens, are already
stretching the definition of pocket size,
to say nothing of the pockets themselves. The upshot is that the electronics
inside the devices have to do more, but without getting any larger, using more
power, or costing more.
Transistor density on state-of-the-art chips continues to
double at regular intervals, in keeping with the semiconductor industry's
decades-old defining paradigm, Moore's
Law. Today there are chips with billions of
transistors at a price per chip that has headed steadily down for decades.
Innovations that pack more and more circuits onto a chip will indeed continue,
as will the more recent trend of putting very different functions on a single
chip—for example, a microprocessor with an RF signal generator.
If we want to teach our smartphones new tricks, however, we'll
have to do more than equip them with denser chips. What we will need more than
ever are breakthroughs in an area not previously considered a major hub of
innovation: the packaging of those chips. Packaging refers loosely to the
conductors and other structures that interconnect the circuits, feed them with
electric power, discharge their heat, and protect them from damage when
dropped or otherwise jarred. But today, the drive to pack more functions into
a small space and reduce their power requirements demands that chip packages
do much more than they ever have before.
A packaged chip is a
sort of puzzle, with certain fixed and well-defined pieces. Before we talk
about how packaging designers are putting those pieces together in new ways,
it will help to review the standard ones.
The astoundingly complex manufacturing process that leads to a
chip starts with a wafer, a dinner-plate-size circle of a semiconductor
material, typically silicon. Manufacturers etch, print, implant, and perform
all sorts of other operations to turn a blank wafer into a grid of rectangles,
each about the size of a fingernail and mind-bogglingly dense with transistors
and interconnections. Sliced apart, those individual rectangles are what
specialists call die. Properly packaged, each die becomes a chip. These days,
many people use the terms chip and die interchangeably, but traditionally,
the word die referred to a
naked integrated circuit without packaging. We'll stick to that traditional
terminology here so that we can succinctly make it clear whether we mean a
packaged chip or an unpackaged die.
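The rough numbers here can be sketched with a standard first-order estimate of how many die a wafer yields; the 300-mm wafer and 10 x 10 mm die size below are assumptions chosen to match the "dinner-plate" and "fingernail" descriptions, not figures from the text.

```python
import math

def gross_die_per_wafer(wafer_diameter_mm, die_w_mm, die_h_mm):
    """Estimate how many whole die fit on a circular wafer.

    Uses the common first-order approximation: wafer area divided by
    die area, minus a correction for the partial die lost around the
    circular edge. Real counts are lower still (edge exclusion, scribe
    lanes, defects), so treat this as an upper bound.
    """
    die_area = die_w_mm * die_h_mm
    d = wafer_diameter_mm
    return int(math.pi * (d / 2) ** 2 / die_area
               - math.pi * d / math.sqrt(2 * die_area))

# A 300-mm wafer and a fingernail-size 10 x 10 mm die:
print(gross_die_per_wafer(300, 10, 10))  # → 640
```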
Inside your smartphone, you don't see naked die, of course. You
see little plastic slabs of varying sizes, with scores of tiny metal prongs
sticking out like insect legs, soldered onto a circuit board. The plastic
slabs are the exterior of the packages. The fragile die are inside them,
protected from damage during manufacture or use and connected to other chips
through those prongs and the traces on the circuit boards.
These circuit boards are critical, of course, to any electronic
system, but they don't actually occupy all that much space inside those
systems. In fact, if you open up a smartphone today, you'll find that the
amount of space allocated to electronics is rather small, so efficient use of
that space is key.
Starting in the mid-1970s, designers trying to pack more
functionality into a small space created systems on chips. What that means is
that they designed digital and analog circuitry, memory, logic, communication,
and power elements that were manufactured by a single process on a single die.
This integration wasn't easy, because the processes, materials, and
technologies optimal for each of these functions tend to be very different.
For example, a communication or analog chip might ideally use gallium arsenide
as the substrate. It might be built in 180-nanometer technology, which
basically means that the smallest features of the devices on that chip measure
roughly 180 nm across. A digital processor chip, on the other hand, would use
a silicon substrate with 32-nm technology. Power and noise considerations also
vary tremendously; the analog chip might require a much higher voltage, and
noise from the digital circuitry could interfere significantly with the
performance of the analog sections.
The upshot is that integration of all those functions onto a
single die requires compromises in every circuit type in order to use the same
process and material, thus lowering performance and increasing power
consumption. A process that works for multiple types of functions is optimal
for none.
Designers have many methods of creating a system-in-a-package (SiP).
So why bother to cram all those things onto one die? The main
advantage is proximity, which eliminates the signal-propagation delays that
can degrade performance. However, that advantage is often negated by other
factors: Incredibly long and complex combinations of processes often increase
costs and power consumption, while decreasing performance and yields. These
trade-offs make combining disparate functions on a single chip economically
unfeasible in many cases. Another barrier to this kind of integration is that
hardly any companies have the necessary expertise to make every single type of
circuitry needed in such a highly variegated die.
So, starting about a decade ago, designers began taking
another approach—the system-in-a-package (SiP).
An SiP is a combination of integrated circuits, transistors, and
other components (like resistors and capacitors) on two or more die installed
within a single package. A graphics processor is a good example. Along with
the processing circuitry, it has memory—both dynamic RAM and flash—as well as
passive components like resistors and capacitors sitting on top of a single
miniature circuit board, and the whole pile goes inside one package. With
smart design integration, an SiP may contain multiple and radically different
functions—incorporating, for example, microelectromechanical systems, optical
components, sensors, biochemical elements, or other devices within that
package. It can even contain multiple system-on-a-chip units that combine some
of these functions.
Basically, SiP lets designers mix and match components to get
higher performance and get their product to market quickly while spending less
on R&D, because they're using existing components. They don't have to go
through a long and expensive design cycle every time they need to add a
function; they can simply change part of the collection of die within the
package.
The SiP approach can also enable smaller products. We all
remember the bulky, single-function video cameras that tourists lugged around
years ago. As those cameras got smaller, the sizes of some components—the
battery, the lens, and the LCD display, for example—didn't really change much;
people want big displays and lots of power. And the size of a lens is set by
its aperture, image sensor, and focal length. So the burden of miniaturization
falls on the electronics: When a device shrinks to 66 percent of its original
volume, for example (from 450 cubic centimeters in 2006 to 300 cm³ today), the
electronics must shrink to a third or less of their original size.
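The arithmetic above can be made explicit. Only the 450 cm³ and 300 cm³ totals come from the text; the split between fixed components and electronics below is an assumed illustration.

```python
# If the fixed parts (battery, lens, display) don't shrink, the
# electronics must absorb the entire 150 cm^3 reduction.

total_before = 450.0   # cm^3 (2006 camcorder)
total_after = 300.0    # cm^3 (today)
fixed = 225.0          # cm^3 assumed for battery, lens, and display

electronics_before = total_before - fixed   # 225 cm^3
electronics_after = total_after - fixed     # 75 cm^3

print(total_after / total_before)                # device shrinks to ~0.67
print(electronics_after / electronics_before)    # electronics shrink to ~0.33
```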
SiP technology brings another benefit. Data paths between the
processor chip and the memory chip are shorter in comparison with those on a
circuit board, so data flow is faster and noise is reduced. With less distance
to travel, it takes less power to get there—another plus. This reduction in
size and increase in performance are the driving forces behind the continued
evolution of SiP architectures.
There's more than one
way to build a system-in-a-package. One of them is called package-on-package
(PoP). Remember that circuit board crammed with chips? It looks a little like
a suburban office park seen from the air. Well, what better way to cram in
more office space than to swap out some one-story buildings for multistory
replacements? That's what package-on-package designers are doing. They pack a
lot of circuitry into a small volume by stacking one set of connected die on
top of another set—flash and DRAM components, for example, on top of an
application-specific IC—and then putting them inside a single package so that
product designers and manufacturers can deal with them as single units. The
sets stack like Lego blocks, typically with logic on the bottom and memory on
top. Such structures are adaptable—manufacturers, when necessary, can vary the
memory density by swapping out the piece of the stack that holds the memory
components, for example. And each of the sets within the package can be tested
individually before stacking. After stacking, however, testing becomes more
difficult. And manufacturers will worry about possible warping of the
miniature circuit boards and die, which would reduce the yield during
assembly.
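The mix-and-match flexibility described above can be sketched as a toy model: each tier in the stack is tested before stacking, and the memory tier swaps out without touching the logic tier. All part names here are invented for illustration.

```python
from dataclasses import dataclass

@dataclass
class PackageTier:
    name: str
    contents: tuple   # die inside this tier's package
    tested_ok: bool   # tiers can be tested individually before stacking

def stack(*tiers):
    # Manufacturers stack only known-good tiers; after stacking,
    # test access becomes much harder.
    assert all(t.tested_ok for t in tiers), "stack only known-good tiers"
    return list(tiers)   # bottom-to-top: typically logic, then memory

logic = PackageTier("app-processor", ("ASIC",), tested_ok=True)
mem_2gb = PackageTier("memory-2GB", ("DRAM", "flash"), tested_ok=True)
mem_4gb = PackageTier("memory-4GB", ("DRAM", "DRAM", "flash"), tested_ok=True)

pop = stack(logic, mem_2gb)       # base configuration
pop_hi = stack(logic, mem_4gb)    # same logic tier, denser memory tier
print([t.name for t in pop_hi])   # → ['app-processor', 'memory-4GB']
```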
So PoP systems are a little pricey and therefore used only for
products whose prices can include a premium for better performance in a
smaller, low-power package. Manufacturers of high-end networking products were
early adopters of this approach; manufacturers of digital still cameras and
cellphones have since joined them. Smartphones and, more recently, tablet
computers are using PoPs mainly to integrate application-specific ICs with
memory. PoP continues to evolve and will likely migrate into other products
further down the consumer-electronics food chain.
Package-in-package (PiP)
is another variant of SiP. Instead of just naked die and other components
piled onto miniature circuit boards inside a single package, PiP adds packaged
die—in other words, chips—into the mix. So PiP puts chips within chips.
Semiconductor companies choose this option for business reasons as much as for
technical ones—it forces product manufacturers to buy multiple subsystems from
the same chip manufacturer. PiP integrates more functions and can improve
performance beyond that of PoP systems, but it is less flexible in combining
different devices, like memory chips, from different suppliers. It's also hard
to test. In some mobile applications—for example, the most advanced
smartphones—manufacturers gladly accept these drawbacks because PiP designs
can cram even more into a smaller space. But they haven't caught on as widely
as the PoP approach.
In all these packaging schemes, the most important consideration
is the electrical connections between the multiple die and the miniature
circuit boards that link them. The traditional and cheapest technology used
for these connections is wire bonding, which is in about 80 percent of the
packages produced today. Wires connect terminals on an individual chip to the
little circuit board inside the package. Then electrical paths on that circuit
board route signals among chips and to the leads that extend from the package,
enabling it to be connected to other devices within a system.
Despite repeated predictions that wire-bond technology has
reached its practical physical limit, it continues to reinvent itself: In the
past few years, manufacturers reduced the wire diameter to 15 micrometers to
enable them to cram more wire terminals onto the precious real estate of a
chip's surface. They also began changing the wire materials from gold to
copper, in response to the skyrocketing cost of gold.
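Beyond price, copper is also a better conductor than gold, which helps as wires get thinner. A quick sketch using the textbook formula R = ρL/A, with the 15-µm diameter from the text and an assumed, illustrative 1-mm wire length:

```python
import math

RHO_GOLD = 2.44e-8     # ohm*m, room-temperature resistivity
RHO_COPPER = 1.68e-8   # ohm*m

def wire_resistance(rho, length_m, diameter_m):
    area = math.pi * (diameter_m / 2) ** 2
    return rho * length_m / area

d = 15e-6   # 15-micrometer wire diameter from the text
L = 1e-3    # assumed 1-mm bond-wire length

print(round(wire_resistance(RHO_GOLD, L, d), 3))    # → 0.138 ohm
print(round(wire_resistance(RHO_COPPER, L, d), 3))  # → 0.095 ohm
```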
In a conventional wire-bond connection between two chips, the
electrical path runs from the closely spaced terminals at the edge of the chip
to terminals on the substrate. As the chip shrinks, so does the distance
between the individual terminals, and it becomes tricky for designers to avoid
short circuits and to keep the wires far enough apart to minimize cross talk.
Photo: ASE Group
TIES THAT BIND: Fanning
wires out from all sides of a chip and making those wires thinner gives
designers more electrical paths to choose from.
Nevertheless, many innovations are extending the life of this
technology. Some manufacturers, for example, are replacing single rows of
wires with multiple rows on the four edges of the chip to give designers more
options for electrical paths.
Alternatively, some designers have eliminated the wires
altogether and replaced them with "bumps" of solder, gold, or
copper. This approach earned the name flip-chip, because the side of the chip
with the bumps must be flipped face down to connect with the bumpy side of the
chip below or the underlying circuit board. As you can imagine, a small bump
of metal is smaller and shorter than a long wire and therefore can conduct a
signal much faster and at higher bandwidths. However, this advantage comes at
a cost—increasing the overall price of the package to 1.5 to 5 times that of a
wire-bond version. Not surprisingly, this technology has also gravitated
toward industries that need high performance and will pay for it. It is now
standard for high-speed and high-bandwidth microprocessors and graphics
processors because of its shorter delay time.
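The speed advantage follows from simple geometry: delay scales with interconnect length. The signal velocity, 2-mm wire, and 100-µm bump height below are assumed, illustrative values, not figures from the text.

```python
C = 3.0e8                  # speed of light, m/s
v = 0.5 * C                # assumed signal velocity in package interconnect

wire_bond_len = 2e-3       # assumed 2-mm bond wire
solder_bump_len = 100e-6   # assumed 100-um bump height

delay_wire = wire_bond_len / v     # propagation delay through the wire
delay_bump = solder_bump_len / v   # propagation delay through the bump

print(round(delay_wire * 1e12, 1))   # → 13.3 picoseconds
print(round(delay_bump * 1e12, 1))   # → 0.7 picoseconds
```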
A newcomer to the package scene is the wafer-level chip-scale
package. This technology is essentially a package without a package—the naked
die has extremely tiny solder balls on its active side, allowing it to connect
directly to a circuit board. These die are fragile, so to date this process
can be used only for very tiny die, and even these typically need to be
further protected with a coating on one side. The vast majority of smartphone
manufacturers are beginning to embrace this approach.
Designers have found another way to make SiP devices as small as
possible—one that might seem obvious. They simply make the wafer
thinner—taking a wafer that is, say, a little over 700 µm thick and reducing
it to perhaps 100 or even 50 µm or less. Because the size of the wafer
eventually determines the size of the package, and therefore the size of that
device you're carrying in your pocket, that change can make a big impact.
Mechanical grinding is the most popular way to thin a wafer. It's
just what you'd expect: Manufacturers physically grind the wafer down,
typically by rolling it through a slurry of water and abrasive particles or
rubbing it with diamond particles embedded in a resin. There are lots of other
ways to thin a wafer, including chemical mechanical polishing, which smooths
surfaces with the combination of chemical and mechanical forces, and chemical
etching, which uses chemical liquids or vapors to remove some of the wafer
material.
With the trend toward smaller packages, manufacturers are making
die thinner than was ever thought possible. For example, one manufacturer
recently privately demonstrated a flash memory die 10 µm thick and a tiny RF
device measuring 50 by 50 by 5 µm.
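A quick sketch of what thinning buys in a die stack: the total height of an eight-die stack at several of the die thicknesses mentioned above, with an assumed 20-µm bond line between layers (the eight-die count and bond-line value are illustrative assumptions).

```python
def stack_height_um(die_thickness_um, n_die, bondline_um=20):
    # Each layer contributes one die plus one adhesive/bond-line layer.
    return n_die * (die_thickness_um + bondline_um)

for t in (700, 100, 50, 10):   # micrometers
    print(t, stack_height_um(t, 8))
# An unthinned 700-um stack would be several millimeters tall;
# thinned to 50 um, the same eight die fit in ~0.56 mm.
```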
SiPs are the best way
to pack very different functions into a single electronic device. In the
future, the individual pieces in an SiP could be as diverse as RF antennas,
photodiodes, and drug delivery tubes—perhaps even a protein layer that could
allow the chip to connect with human tissue.
But we're not quite there yet. Putting such complex
devices into a single package will require new materials and control of their
interactions on the nanometer scale—and perhaps even on the molecular scale.
It won't be easy. There will be tough competition as consumers demand smaller
and smaller devices that do more and more. Designers are now investigating
taking packageless packaging beyond simply attaching naked die to circuit
boards; they are beginning to attach naked die directly to each other in three
dimensions. Some manufacturers are already making simple versions of these 3-D
modules, but this technology has a long way to
evolve before it can become a staple of the manufacture of high-volume
commercial products.
All these packaging innovations are remarkable, but the real
impact has to be measured by what they enable in the real world—and how they
will change society. Electronics are woven into the fabric of our lives and
are beginning to be woven, literally, into the clothes we wear. Increasingly,
they will be implanted in our bodies as well. Pacemakers, defibrillators, and
microfluidic pumps for drug delivery are in use; biosensors and other
implantable devices that can send data to external computers are on the way.
Devices that may allow control of epilepsy, Parkinson's disease, and migraines
are already in clinical trials. Future forms of packaging will not only have
to protect the electronics from the environment but also shield a sensitive
environment—the human body—from the electronics. These innovations will
improve our work, our health, our play, and even our longevity.
This article originally appeared
in print as "Good Things in Small Packages."
- By Dr. Dan Tracy, www.semi.org
- November 3rd, 2015
- New package form factors to satisfy high-performance, high-bandwidth, and low-power-consumption requirements in a thinner and smaller package.
- Packaging solutions that deliver system-in-package capabilities while satisfying low-cost requirements.
- Shorter lifetimes and differing reliability requirements. For high-end smartphones and tablets, for example, the key reliability requirement is passing the drop test, and packaging material solutions are essential to delivering that reliability.
- Shorter production ramp times to meet the time-to-market demands of the end product. This requirement is becoming critical, and it forces suppliers to hold redundant capacity that sits underutilized for part of the year.
It’s All about Packaging. In this Material World That
We Are Dealing With, Who Is Your Partner?
With the recent release of Apple's iPhone 6s, the ever-shrinking form factors
of Internet-enabled mobile devices, and the emergence of the Internet of
Things (IoT), advanced packaging is clearly the enabling technology for mobile
applications and for semiconductor devices fabricated at 16-nm and smaller
process nodes. These packages are forecast to grow at a compound annual growth
rate (CAGR) of over 15 percent through 2019. Packaging technologies have
evolved, and continue to evolve, to meet the growing integration requirements
of each new generation of mobile electronics. Materials are a key enabler for
adding functionality to thinner and smaller package designs and to
system-in-package solutions.
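As a quick sketch, a 15 percent CAGR compounds to roughly 75 percent total growth over the four years to 2019; the base of 100 below is an arbitrary unit, not a market figure from the text.

```python
base = 100.0   # arbitrary starting index for 2015
cagr = 0.15    # 15 percent compound annual growth rate

for year in range(2015, 2020):
    value = base * (1 + cagr) ** (year - 2015)
    print(year, round(value, 1))
# The 2019 value comes out near 175, i.e. ~75% above the 2015 base.
```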
Figure 1: Packaging Technology Evolution – Great Complexity in Smaller, Thinner Form
Factors, courtesy of TechSearch International, Inc.
The
observations related to mobile products include:
Packaging
must provide a low-cost solution and have an infrastructure in place to meet
steep ramps in electronic production. The move towards bumping and flip chip
has only accelerated with the growth in mobile electronics, though leadframe
and wirebond technologies remain as important low-cost alternatives for many
devices. Wafer bumping has been a major packaging market driver for over a
decade, and with the growth in mobile the move towards wafer bumping and flip
chip has only accelerated with finer pitch copper pillar bump technology
ramping up. Mobile also drives wafer-level packaging (WLP) and Fan-Out (FO)
WLP. New wafer level dielectric materials and substrate designs are required
for these emerging package form factors.
Going forward, the wearable and IoT markets will have varying packaging
requirements depending on the application, the end-use environment, and
reliability needs. Thin and small are a must, but, as in other applications,
cost versus performance will determine which package type is adopted for a
given wearable product, so once more leadframe and wire-bonded packages could
be the preferred solution. And in many wearable applications, materials
solutions must provide a lightweight and flexible package.
Such packaging solutions will remain the driver for materials consumption and
new materials development, and the outlook for these packages remains strong.
Materials will make possible even smaller and thinner packages with more
integration and functionality. Low-cost substrates, matrix leadframe designs,
new underfills, and die-attach materials are just some of the solutions for
reducing material usage and improving manufacturing throughput and efficiency.
SEMI and TechSearch International are once again partnering to prepare a
comprehensive market analysis of how current packaging technology trends will
affect demand in the packaging materials market. The new edition of the
"Global Semiconductor Packaging Materials Outlook" (GSPMO) report is a
detailed market research study that quantifies and highlights opportunities in
the packaging materials market. This new SEMI report is an essential business
tool for anyone interested in the plastic packaging materials arena. It will
help readers better understand the latest industry and economic trends, the
size and trajectory of the packaging materials market, and the respective
market drivers, with a forecast out to 2019. For example, FO-WLP is a
disruptive technology affecting the packaging materials segment, and the GSPMO
addresses its impact.
The new report will be published in the fourth quarter of 2015. To download
the 2013/2014 sample report or to preorder, please contact SEMI customer
service at 1.877.746.7788 (toll-free in the U.S.) or 1.408.943.6901
(international callers), or email mktstats@semi.org.
- By Ed Sperling, semiengineering.com
- September 28th, 2015
Is The Stacked Die Supply Chain Ready?
A
handful of big semiconductor companies began taking the wraps off 2.5D and
fan-out packaging plans in the past couple of weeks, setting the stage for the
first major shift away from Moore’s Law in 50 years.
Those moves coincide with reports that chip assemblers and foundries now have
commercial 2.5D chips under development. There have been indications for some time that this
trend is gathering steam. Equipment makers have been talking with analysts
about how advanced packaging will affect their growth plans. After almost a
year of delays, high-bandwidth memory was introduced into the market earlier
this year. And there have been announcements by foundries and OSATs that 2.5D chips are now in commercial production,
with many more on the way.
Still,
the process is far from smooth. It’s not that chips can’t be built using
interposers or microbumps or even bond wires to include more of what used to
be on a PCB in a single package. But in comparison to the supply chain for
planar CMOS, stacking die is a comparative newcomer. The tens of billions of
dollars spent on shrinking planar features dwarfs the amount that has been
spent on packaging multiple chips together, despite the fact that multi-chip
modules have been around since the 1990s. Foundry rules are still under
development. Some EDA tools and IP are available, but more still need to be
optimized for stacked die configurations. And experience in working with these
packaging approaches remains limited, even if they are gaining traction.
What’s ready
Nevertheless, chipmakers, IP vendors, packaging houses and foundries
are pitching a different story than they were at the beginning of the year.
Most now have some sort of advanced packaging strategy in place—a recognition
of just how expensive it has become to develop chips at 16/14nm, 10nm, and
7nm, and of how much business they'll leave on the table if they ignore the
many chipmakers that won't go there.
Marvell, for example, has just begun rolling out what it calls a
“virtual SoC” 2.5D architecture called MoChi, with the first LEGO-like modules
to be added throughout the remainder of 2015 using internally developed
interconnect technology.
“The
problem is not just cost anymore,” said Michael Zimmerman, vice president and
general manager of Marvell’s connectivity, storage and infrastructure
business. “It’s the total development effort measured in dollars and years.
There are not many suppliers that can justify spending billions of dollars,
the time it takes to get these chips to market, and the resources required to
make that happen. The goal is to restore reasonable time-to-market by going in
a reverse direction. Instead of massive integration, you can break the chip
into parts and separate the problems into modules. That allows the pace of
innovation in each die to be separate from other die.”
He
noted that initially there was a lot of skepticism about the approach, but in
the past few months that skepticism has evaporated. “When you consider that
the interconnect is 8 gigabits per second for one serial connection, and you
can put 25 wires in a 1mm space, that means you can have up to 50 gigabits per
second die to die with latency of 8 nanoseconds.”
Similar
stories are being repeated more frequently across the industry. ASE Group has
been working with AMD since 2007 to bring 2.5D packaging to market.
“We
had a cost issue with the interposer,” said Michael Su, AMD Fellow in charge
of die stacking design and technology at AMD. “But we have managed to decrease
that to a better price point. Two years ago, the technology was still in the
development stage. Since then we’ve decreased the number of features, added
yield learning and now there are multiple players making interposers.”
The result is a graphics card for the gaming market that is 40% shorter—small
enough to fit on a six-inch PCB—that runs 15 °C cooler, at 75 °C instead of
90 °C, and that is 16 decibels quieter. It also offers a 2X performance
increase over previous versions based on GDDR5, and twice the density, which
allows system makers to turn up the performance in other parts of the system
without exceeding the power budget. And with interposers now available
commercially from most of the major foundries, Su said, prices will continue
to decline.
Put in perspective, though, this was not a trivial project. It took
eight years of ironing out the kinks and thousands of iterations of chips to
get to that point—and a huge
investment by both AMD and ASE.
“There
are 240,000 bumps that we needed to connect together,” said Calvin Cheung,
vice president of business development and engineering at ASE. “You have to
make sure every one of those is connected. We also had to select the right
materials and equipment, and to figure out how to pick the right piece of
equipment.”
Cheung
noted that a lot of the costs involved are proportional to volume, meaning
prices will drop once volume increases and yield, materials and architectural
design are mature enough. But he added that the value of integrating different
components on multiple die cannot be overstated, because it allows flexibility
to be able to target multiple market segments with minimal effort and time.
IBM Microelectronics has been working on this technology
for at least a decade, as well. Now part of GlobalFoundries, the combined company is shipping 2.5D and full 3D IC parts using through-silicon vias. Gary Patton, the
company’s CTO, is watching similar trends unfold. “We’re definitely seeing an
uptick in requests for quotes for 2.5D solutions,” he said. “As the volumes
increase, it helps drive the cost down. And then people see it’s shipping and
start to realize this is real and they can use it.”
The story is much the same at TSMC. “The vector continues for the high-performance camp,”
said Tom Quan, one of the foundry’s directors. “It offers better bandwidth,
and for the consumer market it can be done in high volume with low cost. Some
of this will use a silicon interposer. But even if you do away with the PCB
and put it all in a package, you get better results.”
TSMC’s offerings in this area come in two flavors: a fan-out technology it
calls InFO (Integrated Fan-Out) and a full 2.5D approach it calls CoWoS
(Chip-on-Wafer-on-Substrate). Quan said the advantage of CoWoS is that it can
integrate the highest-performance die, using the latest technology, with
analog sensors at older technologies. “This is a big market. It includes IoT,
automotive, and high-performance computing. CoWoS will address the
high-performance needs; InFO will address the other two.”
The first version of TSMC’s plan is expected to roll out in 2016. A couple of
other iterations are planned for InFO, including through-mold vias and
through-InFO vias.
View from the trenches
This
all sounds like the road to stacked die is fully paved, but companies involved
in developing these chips are finding not everything is so perfect yet.
“If you look at planar silicon, from GDSII to the mask
shop, there are well-defined specs,” said Mike Gianfagna, vice president of
marketing at eSilicon. “If you pass the requirements, which are standard,
then the downstream supplier can make the chip. That’s still missing in 2.5D.
If you have warping problems, contact problems or yield issues, you don’t know
that up front. And if the chip fails, you have to sign waivers that it’s your
risk, not someone else’s.”
Gianfagna
said what’s particularly troublesome is testing of the interposer. “We don’t
have rules for that. It’s good enough to create a design, and you can build it
into the cost of designs, but we’re still one to two years away from getting
the benefit of yield learning and analysis so you can get chips out that are
cheaper, more efficient and more reliable. This is still a big step forward,
though. In the past we weren’t sure whether we could build it or that it would
yield. We’re now beyond that, and a growing number of companies want to be out
in front with this.”
The
first companies to fully embrace these issues were DRAM manufacturers, which
have been combining memory modules vertically to save space and reduce the
distance signals need to travel. The Hybrid Memory Cube (HMC) and
high-bandwidth memory (HBM) are both now fully tested and in commercial use.
“By increasing the density you get more performance,
compared with more DIMM slots, which makes your system performance go down,”
said Lou Ternullo, product marketing director at Cadence. “Customers are all asking for 3D support because they
want to be ready.”
The
big difference between the HMC and HBM is the interface. HBM uses microbumps,
which means that today the only way to connect that to logic is through an
interposer. So far there has not been much adoption outside of the graphics
market, but Ternullo said that by the end of the year there should be about a
half-dozen chips using HBM.
What
changes in 2.5D and 3D is that the manufacturers take on more of the ecosystem
role to overcome the known good die issues. Several sources say this is
particularly important for HBM, because unlike DRAM it cannot be put through
temperature cycles for testing. It has to be tested through the interface once
the 2.5D package is completed, and the only way to do that is with built-in
self-test (BiST).
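As a purely illustrative software model of what an on-die BiST engine does (real BiST is hardware, and real memory test algorithms are more elaborate), here is a simple march pattern that detects a stuck-at fault through the only access path available, the memory interface:

```python
def march_test(mem):
    """Run a simple write/read marching pattern; return failing addresses."""
    n = len(mem)
    fails = set()
    # Ascending march: write 0s everywhere.
    for addr in range(n):
        mem[addr] = 0
    # Ascending march: verify 0, then flip each cell to 1.
    for addr in range(n):
        if mem[addr] != 0:
            fails.add(addr)
        mem[addr] = 1
    # Descending march: verify 1s.
    for addr in reversed(range(n)):
        if mem[addr] != 1:
            fails.add(addr)
    return sorted(fails)

class FaultyMem(list):
    """Model a defect: address 5 is stuck at 0 regardless of writes."""
    def __setitem__(self, addr, value):
        if addr == 5:
            value = 0
        super().__setitem__(addr, value)

print(march_test([0] * 16))          # → [] (no failures)
print(march_test(FaultyMem([0] * 16)))  # → [5] (stuck-at fault found)
```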
Planning for 2.5D
Some
of the big changes involve mindset, as well. Just as power and security need
to be part of the up-front architecture of any chip these days, compared with
earlier generations where they were an afterthought, so do things like how
engineering teams are going to test the components in a 2.5D configuration,
understanding how certain IP will yield compared with other IP, and
understanding the interactions of analog and digital chips even if they aren’t
on the same die.
“Design for test is one of the critical areas that needs
to be considered,” said Asim Salim, vice president of manufacturing operations
at Open-Silicon. “Probing microbumps has been a challenge for us. We
now have some solutions. But two to three years ago we had to educate people
that this is even needed.”
Integrating
analog is another issue, and it varies greatly from one package to the next.
Salim said that if an A-to-D converter is used to connect to other modules,
for example, it will require a different kind of testing than if it’s
connected to the ball grid array on the package. The former requires a power-on self-test, while the latter can use external test. Testability is one of the
key areas, and getting it wrong can both increase the cost of the design and
decrease its reliability.
Another
area that needs to be considered up front is I/O coherency and what can be
done with new architectural approaches. What’s possible with multiple die is
more than what’s possible on a single die. “You can make two die behave like
one die,” said Marvell’s Zimmerman. “You also can connect multiple cores on
different die and turn it into many cores on different die.”
Full 3D-IC architectures are still expected to take at least a couple more years before they are in commercial use, according to a number of companies. But work has begun there as well, industry sources say. That problem is even tougher to solve, though compared with 5nm and 3nm, it may be a toss-up as to which approach is more difficult.
Conclusion
As with many developments in the semiconductor industry, when the entire supply chain begins turning in the same direction, the pieces line up quickly.
“We are really in the middle of the shift to 2.5D,” said
Wally Rhines, chairman and CEO of Mentor Graphics. “That will drive tools to do the integration of the
chip and the package.”
It
also will drive new opportunities for companies that have bet on this
technology as it matures and becomes much more flexible, with innovations
occurring at the module level rather than across an entire chip.
- By Ed Sperling, semiengineering.com, August 13th, 2015
2.5D Creeps Into SoC Designs
A
decade ago top chipmakers predicted that the next frontier for SoC
architectures would be the z axis, adding a third dimension to improve
throughput and performance, reduce congestion around memories, and reduce the
amount of energy needed to drive signals.
The
obvious market for this was applications processors for mobile devices, and
the first companies to jump on the stacked die bandwagon were big companies
developing test chips for high-volume devices. And that was about the last
serious statement of direction that anyone heard involving stacked die for a
few years.
What’s becoming apparent to chipmakers these days is that stacked die isn’t fading away, but it is changing. TSMC and Samsung are moving forward on both 2.5D and 3D-IC, according to multiple industry sources, and GlobalFoundries continues its work in this area—a direction that will get a big boost with the acquisition of IBM’s semiconductor unit.
“It’s still very definitely an extreme sport for some of
our customers,” said Jem Davies, ARM fellow and vice president of technology. “It looks
really exciting and it’s possibly the case that somebody who gets this right
is going to make some significant leaps. There are some physical rules in
this, though. If you can reduce the amount of power dissipated between a
device and memory, or if you’ve got multiple chips, the amount they dissipate
talking between them is a lot. If you’ve got a chip and memory, the closer you
can get these things to each other, the faster they’ll go and the less power
they will use. The Holy Grail here is in sight. There are a number of
technologies that we’re seeing people looking at using.”
Full
3D stacking with through-silicon vias has gained some ground in the memory
space, notably with backing from Micron and Samsung for the Hybrid Memory
Cube. But the real growth these days is coming in 2.5D configurations using a
“die on silicon interposer” approach, which leverages the interposer as the
substrate on which other components are added, and to a lesser extent
interposers connecting heterogeneous and homogeneous dies. Typically these
packages are being developed in lots of less than 10,000 units, but there are enough designs being turned into production chips that questions about whether this approach will survive are becoming moot.
“People
are becoming a lot more comfortable with this technology,” said Robert Patti,
CTO at Tezzaron. “Silicon interposers are much more readily available. You can
get them from foundries. We build them. And there are some factories now being
built in China to manufacture them.”
Relative costs
One
of the key factors in making this packaging approach attractive is the price
reduction of the interposer technology. Initial quotes several years ago from leading foundries were in the range of $1 per square millimeter. The price has since dropped to 1 to 2 cents per square millimeter for interposer die.
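At those prices, the economics are easy to sanity-check. The sketch below uses a hypothetical 25 mm x 25 mm interposer; the size is our assumption for illustration, not a figure from the article.

```python
# Back-of-the-envelope interposer cost at the quoted 1-2 cents per mm^2.
# The 25 mm x 25 mm interposer size is a hypothetical example.
area_mm2 = 25 * 25                                    # 625 mm^2
low, high = 0.01 * area_mm2, 0.02 * area_mm2
print(f"~${low:.2f} to ${high:.2f} per interposer")   # ~$6.25 to $12.50
```

A few dollars per interposer is what puts this in the neighborhood of advanced PCB or substrate pricing, as Patti notes below.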
“This
is now PCB-equivalent pricing,” said Patti. “This used to be only for
mil/aero, but we’re seeing these in more moderate numbers. They’re being
manufactured in batches of hundreds to thousands to tens of thousands. We’ve
had people look at this for high-end disk drives. The low-hanging fruit in
this area is high-bandwidth memories with logic. The focus is on high
bandwidth, and power comes along for the ride.”
What’s less clear is whether alternatives such as Intel’s Embedded Multi-die Interconnect Bridge (EMIB), which is available to the company’s 14nm foundry customers, and organic interposers, which are more flexible but at this point more expensive, will be price competitive with the silicon interposer approach.
But cost is a relative term here, and certainly not
confined just to the cost of the interposer. The semiconductor industry tends
to focus on price changes in very narrow segments, such as photomasks, while
ignoring total cost of design through manufacturing. That’s true even
for finFETs, where the focus has been on reduced leakage current
rather than big shifts in thermal behavior, particularly at 10nm and beyond.
HiSilicon Technologies, which designs production 2.5D
chips for Huawei, submitted a technical paper to the IEEE International
Reliability Physics Symposium in April that focuses on localized thermal
effects (LTE), which can affect everything from electromigration to chip aging. The paper identifies thermal
trapping behavior as one of the big problems with finFETs at advanced nodes,
saying that the average temperature of finFET circuits is lower due to less
leakage, but “temperature variation is much larger and some local hot spots
may experience very high self-heating.”
Planning for LTE isn’t always straightforward. It can be affected by
which functions are on—essentially
who’s using a device and what they’re doing with it—and how functions are laid
out on the silicon. And it can be made worse by full 3D packaging, because
thermal hot spots may shift depending upon what’s turned on, what’s dark, and
how conductive certain parts of the chip are.
“The problem there is thermal hot spot migration,” said
Norman Chang, vice president and senior product strategist at Ansys. “At a 3D conference one company showed off a DRAM stack on an SoC. The hotspot was in the center of the chip when it was
planar, but once the DRAM was added on top, the hot spot moved to the upper
right corner. So the big issue there is how you control thermal migration.”
Chang noted that thermal gradients in 2.5D are comparable to those in planar architectures, where keeping 75% of the silicon dark at any one time for power constraints generally keeps it cool enough to avoid problems.
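That 75%-dark rule of thumb can be read as a simple power-density budget. The numbers in the sketch below are hypothetical and only illustrate the arithmetic, not any real design limit.

```python
# Crude illustration of the "75% dark" heuristic as a power-density budget.
# All numbers are hypothetical; real limits depend on package and cooling.
max_density = 0.5           # assumed sustainable W/mm^2 averaged over the die
active_fraction = 0.25      # only 25% of the silicon switching at once
local_density = 2.0         # hypothetical W/mm^2 inside the active region

avg_density = local_density * active_fraction
print(avg_density <= max_density)   # True: the average stays within budget
```

The same arithmetic also shows why stacking changes the picture: put a second active layer on top and the average density doubles while the cooling path stays the same.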
Time to market
Cost is a consideration in the time it takes to design and
manufacture stacked die, as well. One of the initial promises of stacked die—particularly 2.5D—was that time to market would
be quicker than a planar SoC because not everything has to be developed at the
same process node. That hasn’t proven to be the case.
“The turnaround time is longer,” said Brandon Wang,
engineering group director at Cadence. “If you open a cell phone today and look at what’s
inside, there are chips all over the board. That requires glue logic, so what
happens is that with the next generation companies look at how much they can
get rid of. With a silicon interposer you can fit chips into a socket easier.
It takes longer, but it helps people win sockets.”
One thing that has helped is that the heterogeneous 2.5D market is mature enough that design teams have some history about what works and what doesn’t. Over
time, engineers get more comfortable with the design approach and it tends to
speed up. That same trend is observable with double patterning and finFETs,
where the initial implementations were much more time consuming than the
current batch of designs. Whether it will ever be faster than
pre-characterized IP on planar chips is a matter of debate. Still, at least
the gap is shrinking.
But
there also are some distinct advantages on the layout side, particularly for
networking and datacom applications. While designs may not be quicker at this
point, they are cleaner.
“Complex
timing sequences and cross-point control are where the real benefits of 2.5D
show up,” said Wang. “Cross point is a signal that crosses I/O points, and the
tough thing about cross point is data congestion. By going vertical you
provide another dimension for a crossover bridge.”
Testing of 2.5D packages has proved to be straightforward, as well. Full 3D
logic-on-logic testing has required a highly convoluted testing strategy,
which was outlined in the past by Imec. And more recently, the push for memory
stacks on logic has resulted in other approaches. But with 2.5D it has been a
matter of tweaking existing tools to deal with the interposer layer.
“You can still do a quick I/O scan chain and run tests
in parallel, so there is not a large test time,” said Steven Pateras, product
marketing director for test at Mentor Graphics. “You can access the die more easily. The only
complication is with the interconnect, and that’s pretty much the same as an
MCM (multi-chip module). That’s well understood.”
Reality check
While
stacked die pushes slowly into the mainstream, there are a number of other
technologies around the edges that could either improve its adoption or slow
it down. Fully depleted SOI is one such technology, particularly at 22nm and
below, where performance is significantly faster than at 28nm and where
operating voltage can be dropped below the voltage in 16/14nm finFETs.
CEA-Leti, for one, has bet heavily on three technology areas: FD-SOI,
2.5D, and monolithic 3D, according to Leti CEO Marie-Noëlle Semeria. “We see
the market going in two ways. One will be data storage, servers and consumer,
which will need high performance. That will be a requirement (and the
opportunity for 2.5D and 3D). Another market is the broad market for IoT,
which still has to be better defined. That will include the automotive and
self-driving market, medical devices and wearables. For that you need technology
with very good performance, low power and low cost. FD-SOI can answer this
market.”
Others
are convinced that stacked die, and in particular 2.5D, can move further
downstream as costs drop and more companies are comfortable working with the
technology.
“Right now, 2.5D is at the server and data center level,
and it will certainly be in more servers as time goes on,” said Ely Tsern,
vice president of the memory products group at Rambus. “But we also see it going forward as manufacturing
costs drop and yield increases.”
That’s
certainly evident at the EDA tool level, where companies are doing far more
architectural exploration than in the past. But whether that means more 2.5D
or 3D designs, and how quickly that shift happens, is anyone’s guess.
“Right now there is interest in exploring multiple
architectures that could change overall designs,” said Anand Iyer, director of
marketing for the low power platform at Calypto. “The big question people are asking is how you save
power and keep the same performance level. 2.5D is one way to reduce power,
and it’s one that many people are comfortable with. MCMs existed before this
and people are quite familiar with them. The new requirement we’re seeing is
how to simulate peak power more accurately. There are more problems introduced
if power integrity is not good.”
Iyer
noted that in previous generations, I/O tended to isolate the power. At
advanced nodes and with more communication to more devices, power integrity
has become a challenge. 2.5D is one way of helping to minimize that impact,
but it’s not the only way.
2.5D Timetable Coming Into Focus
After years of empty promises, the timetable for 2.5D is coming into better focus. Large and midsize chipmakers are behind it, real silicon is being developed, and contracts are being signed.
That doesn’t mean
all of the pieces are in place or that market uptake is at the neck of the
hockey stick. And it certainly doesn’t mean the semiconductor industry is
going to abandon development at the most advanced process nodes, or even
improvements at older nodes that could slow migration in all directions.
“Without a doubt not everything needs integration [on a single die],” said Joe Sawicki, vice president and general manager of the Design-To-Silicon Division at Mentor Graphics. “You’re not going to be looking for 6 billion transistors in a wearable device or even a fully integrated factory. But at the same time, a huge number of companies are doing 20nm designs.”
Although
the benefits are well known, 2.5D remains a new packaging approach with
different interconnects and new memory structures. There are still kinks to
iron out of the packaging process, work to be done across the supply chain,
and new tools to develop. Nevertheless—and
in spite of all those caveats—for the first time since the idea began gaining
serious attention several process nodes ago, dozens of companies have moved
beyond kicking the tires to developing what ultimately will be working
silicon.
“Cadence, Mentor Graphics and Ansys are aggressively developing tools to make 3D more predictable,” said Herb Reiter, president of EDA2ASIC consulting. “This kind of information has to flow through the materials to the Outsourced Semiconductor Assembly and Test (OSAT) and foundry and then to the customer.”
Critical pieces
under development include the second generation of high-bandwidth memory,
which SK Hynix is expected to begin sampling in the second quarter of 2015,
new and less costly interposer technologies and approaches, and new organic
substrates. There also are questions about whether Intel will allow its Embedded Multi-die Interconnect Bridge (EMIB) to be widely licensed or sold outside of its own foundry. Intel’s bridge technology allows for much tighter pitches than organic substrates.
“What we don’t know
is how costly [EMIB] will be to integrate in order to make it flush with the
surface of the substrate or whether it’s something you can do with the die,”
said Reiter. “But there also is work underway for organic substrates to make
them smoother and almost the same pitch as silicon. And there is work being
done to put resistors, capacitors and inductors on the interposers, which
significantly increases the value proposition of the interposer.”
What’s changed?
Perhaps the biggest
shift, though, is in the attitude of companies working with 2.5D and 3D-ICs.
What started out as something of an interesting architectural approach to
shorten distances and widen signal plumbing is now becoming much more accepted
as a future direction for many chipmakers.
“The objection was always cost and risk,” said Charlie Janac, chairman and CEO of Arteris.
“The complaint was that interposer technology is expensive and dies don’t
necessarily work in a multi-chip package. But dealing with advanced nodes is
horrible on the analog side. Memory and logic already have diverged to
different process technologies, and it makes sense now to put them on separate
dies. It also changes the dynamics of what’s important in an SoC. It makes the
interconnect much more important and the packaging houses much more
important.”
That change in attitude is widespread, even if the number of design starts is limited. Mike Gianfagna, vice president of marketing at eSilicon, said the company is “actively engaged” in several 2.5D projects.
“There’s still a
lot of discussion about what’s the right interposer, whether it should be
silicon or another material, how big it should be,” he said. “This is all
about getting multiple chips with similar bandwidth on a chip. Not all of it
is silicon interposer technology, either. Some of it uses other strategies.”
Open-Silicon
likewise has seen limited uptake on 2.5D, even though there is plenty of
interest.
“You still have to justify cost on a per customer basis,” said Steve Eplett, design technology and automation manager at Open-Silicon.
“If you can leverage a die across multiple customers that changes the
economics. The metrics for power consumed between two die also aren’t as good
as homogeneous solutions, but we have gotten that down to a minimal and
reasonable tradeoff. And with new memories coming on, the power for
communication will be a tiny fraction of an off-chip solution. It’s
unterminated CMOS at 1.2 volts versus terminated, on-board DDR.”
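The physics behind Eplett’s last point can be sketched roughly: an unterminated CMOS link spends only about C·V² per output transition, while a terminated DDR link pays that switching energy plus static current through its termination resistors whenever the line is held at a logic level. The load capacitance below is an assumed value for a short in-package trace, not a measured one.

```python
# Rough per-transition energy for an unterminated 1.2 V CMOS link.
# A terminated DDR interface pays this switching energy too, plus static
# termination power on top. C is an assumed load for a short trace.
C = 2e-12                     # 2 pF, assumed in-package trace load
V = 1.2                       # volts, per the quote
E_cmos = C * V * V            # joules per output transition
print(f"{E_cmos * 1e12:.2f} pJ per transition")   # 2.88 pJ
```

Picojoules per bit versus the tens of picojoules typical of terminated off-board signaling is the “tiny fraction” Eplett is describing.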
What’s still missing?
Not all the pieces
are there yet, either. While chips are being built, some of the process isn’t
automated or as clear-cut as the move to finFETs or FD-SOI at 28nm.
“We’re not seeing a
whole lot of co-design optimization where you can measure the tradeoffs of one
chip versus another,” said Drew Wingard, CTO at Sonics. “We need to do mix and
match in a more aggressive form. When you put together a system at the PCB level
there are standard interfaces. At the board level, you can always wish a new
component existed, but most system designers look at what’s available. 2.5D is
a practical way of dealing with that.”
Those
standard interfaces—the electrical
interfaces for tying together different chips inside a single package—are
under discussion by standards groups.
“The pool of die
that you can integrate in a standard way is a black hole right now,” said
Open-Silicon’s Eplett.
Still, most experts, standards groups and chipmakers see stacked die—both 2.5D and 3D ICs—as inevitable. While it makes sense for a company such as Intel to
continue pushing its very regular-shaped digital processor technology forward
for multiple more generations, the question is what else needs to go on that
die. If memory can be offloaded onto separate die, either with through-silicon
vias or interposers or bridges—or even bond wires—then distances will be
reduced, performance will increase, and the amount of power required to drive
signals will be cut significantly.
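The power claim in that last sentence follows from the fact that a driver’s dynamic power scales with the capacitance of the wire it charges, and wire capacitance grows roughly linearly with length. The per-millimeter capacitance and operating point below are assumed, order-of-magnitude values, not figures from the article.

```python
# Why shorter in-package links cut drive power: dynamic power scales with
# wire capacitance, which grows roughly linearly with trace length.
# The per-mm capacitance and operating point are assumed typical values.
cap_per_mm = 0.2e-12            # ~0.2 pF/mm, assumed
V, f, alpha = 1.0, 1e9, 0.5     # 1 V swing, 1 GHz, 50% activity factor

def drive_power_mw(length_mm):
    c = cap_per_mm * length_mm
    return alpha * c * V * V * f * 1e3   # per-wire dynamic power in mW

print(drive_power_mw(50))   # board-level trace
print(drive_power_mw(2))    # interposer-scale trace
```

Cutting a 50 mm board trace down to a 2 mm interposer route reduces per-wire drive power by the same 25x factor as the length, which is the whole argument for moving memory into the package.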
“There’s a lively debate going on right now about 2.5D and 3D,” said Chris Rowen, a Cadence fellow.
“This is a natural outgrowth of what’s already been done. There are limits
about how many processes you can put on a die, and if you have digital logic,
DRAM and analog, you can’t make it work without moving everything closer
together. This is aggregation at the packaging level.”
The outlook remains
optimistic, but cautiously so. As Sawicki noted, the obvious application for
the first chips was in the data center, where development costs were less of a
factor and power was the key metric to worry about. “For a number of reasons, that
hasn’t occurred yet, but virtually everything that is required to make that
happen we have put in place.”
So will this all change over the next couple of years? All signs point to yes. Whether those timetables will remain in place, though, remains to be seen.
Consider Packaging Requirements at the Beginning, Not the End, of the Design Cycle
By eecatalog.com
Today’s integrated circuit designs are driven by size, performance,
cost, reliability, and time-to-market. In order to optimize these design
drivers, the requirements of the entire system should be considered at
the beginning of the design cycle—from the end system product down to
the chips and their packages. Failure to include packaging in this
holistic view can result in missing market windows or getting to market
with a product that is more costly and problematic to build than an
optimized product. In this article, we provide guidelines covering several areas where the packaging team should be closely involved with the circuit design team.
Chip Design
As a starting consideration, chip packaging strategies should be developed prior to chip design completion. System timing budgets, power management, and thermal behavior can be defined at the beginning of the design cycle, eliminating the sometimes impossible constraints that are given to the package engineering team at the end of the design. In many instances chip designs end up being unnecessarily difficult to manufacture, have higher than necessary assembly costs and have reduced manufacturing yields because the chip design team used minimum design rules when looser rules could have been used.
Examples of these are using minimum pad-to-pad spacing when the pads could have been spread out, or using unnecessarily tight metal-to-pad clearance (Figure 1). These hard-learned lessons are well understood by the large chip manufacturers, yet often resurface with newer companies and design teams that have not experienced them. Using design-rule minimums puts unnecessary pressure on the manufacturing process, resulting in lower overall manufacturing yields.
Packaging
Semiconductor packaging has often been seen as a necessary evil, with most chip designers relying on existing packages rather than package customization for optimal performance. Wafer level and chipscale packaging methods have further perpetuated the belief that the package is less important and can be eliminated, saving cost and improving performance. The real fact is that the semiconductor package provides six essential functions: power in, heat out, signal I/O, environmental protection, fan-out/compatibility to surface mounting (SMD), and managing reliability. These functions do not disappear with the implementation of chipscale packaging, they only transfer over to the printed circuit board (PCB) designer. Passing the buck does not solve the problem since the PCB designers and their tools are not usually expected to provide optimal consideration to the essential semiconductor die requirements.
Packages
Packaging technology has evolved considerably over the past 40 years. The evolution has kept pace with Moore’s Law, increasing density while at the same time reducing cost and size. Hermetic pin grid arrays (PGAs) and side-brazed packages have mostly been replaced by lead-frame-based plastic quad flat packs (QFPs). Following those developments, laminate-based ball grid arrays (BGAs), quad flat no-lead packages (QFNs), chip-scale packaging and flip-chip direct attach became the dominant package choices.
The next generation of packages will employ through-silicon vias to allow 3D packaging with chip-on-chip or chip-on-interposer stacking. Such approaches promise to solve many of the packaging problems and usher in a new era. The reality is that each package type has its benefits and drawbacks, and no package type ever seems to become completely extinct. The designer needs an in-depth understanding of all of the packaging options to determine how each die design might benefit or suffer from any particular package type. If the designer does not have this expertise, it is wise to call in a packaging team that does.
Miniaturization
The push to put more and more electronics into a smaller space can inadvertently lead to unnecessary packaging complications. The ever-increasing push to produce thinner packages is a compromise against reliability and manufacturability. Putting unpackaged die on the board definitely saves space and can produce thinner assemblies, as in smart card applications. But this chip-on-board (COB) approach often runs into problems: the die may be difficult to bond because of their tight proximity to other components, or may end up with unnecessarily long bond wires, or wires at acute angles that can cause shorts, as PCB designers attempt to reconcile board line-and-space realities with wire-bond requirements.
Additionally, the use of minimum PCB design rules can complicate the assembly process, since PCB etch-process variations must be accommodated. Picking the right PCB manufacturer is important, too, as many users wrongly treat laminate substrate manufacturers and standard PCB shops as interchangeable. Often, designers will select materials and metal systems that were designed for surface mounting but turn out to be difficult to wire bond. Picking a supplier that makes the right metallization tradeoffs and maintains process discipline is important in order to maximize manufacturing yields.
Power
Power distribution, including decoupling capacitance and copper ground and power planes, has mostly been a job for the PCB designer. Many users wonder why decoupling is rarely embedded into the package as a complete unit. Cost or package-size limitations are typically the reasons cited. The reality is that semiconductor component suppliers usually don’t know the system requirements, power-fluctuation tolerance, and switching-noise mitigation of any particular installation, so power management is left to the system designer at the board level.
Thermal Management
Miniaturization results in less volume and heat spreading to dissipate heat. Often, there is no room or project funds available for heat sinks. Managing junction temperature has always been the job of the packaging engineer who must balance operating and ambient temperatures and packaging heat flow. Once again, it is important to develop a thermal strategy early in the design cycle that includes die specifics, die attachment material specification, heat spreading die attachment pad, thermal balls on BGA and direct thermal pad attachment during surface mount.
Signal Input/Output
Managing signal integrity has always been the primary concern of the packaging engineer. Minimizing parasitics, crosstalk, impedance mismatch, transmission line effects and signal attenuation are all challenges that must be addressed. The package must handle the input/output signal requirements at the desired operating frequencies without a significant decrease in signal integrity. All packages have signal characteristics specific to the materials and package designs.
Performance
There are a number of factors that impact performance including: on-chip drivers, impedance matching, crosstalk, power supply shielding, noise and PCB materials to name a few. The performance goals must be defined at the beginning of the design cycle and tradeoffs made throughout the design process.
Environmental Protection
The designer must also be aware that packaging choices have an impact on protecting the die from environmental contamination and/or damage. Next-generation chip-scale packaging (CSP) and flip chip technologies can expose the die to contamination. While the fab, packaging and manufacturing engineers are responsible for coming up with solutions that protect the die, the design engineer needs to understand the impact that these packaging technologies have on manufacturing yields and long-term reliability.
Involve your Packaging Team
Hopefully, these points have provided some insights on how packaging impacts many aspects of design and should not be relegated to just picking the right package at the end of the chip design. It is important that your packaging team be involved in the design process from initial specification through the final design review.
In today’s fast moving markets, market windows are shrinking so time to market is often the important differentiator between success and failure. Not involving your packaging team early in the design cycle can result in costly rework cycles at the end of the project, having manufacturing issues that delay the product introduction or, even worse, having impossible problems to solve that could have been eliminated had packaging been considered at the beginning of the design cycle.
System design incorporates many different design disciplines. Most designers are proficient in their domain specialty and not all domains. An important byproduct of these cross-functional teams is the spreading of design knowledge throughout the teams, resulting in more robust and cost effective designs.
John T. MacKay, President/Founder, Semi-Pac
John and his partner Tom Molinaro founded Semi-Pac in 1988. They both have been involved in the packaging industry since the late 70s. John has deep technical knowledge of the process and equipment used in integrated circuit fabrication and assembly.
Chip Design
As a starting consideration, chip packaging strategies should be developed prior to chip design completion. System timing budgets, power management, and thermal behavior can be defined at the beginning of the design cycle, eliminating the sometimes impossible constraints that are given to the package engineering team at the end of the design. In many instances chip designs end up being unnecessarily difficult to manufacture, have higher than necessary assembly costs and have reduced manufacturing yields because the chip design team used minimum design rules when looser rules could have been used.
Examples of these are using minimum pad-to-pad spacing when the pads could have been spread out or using unnecessary minimum metal to pad clearance (Figure 1). These hard taught lessons are well understood by the large chip manufacturers, yet often resurface with newer companies and design teams that have not experienced these lessons. Using design rule minimums puts unnecessary pressure on the manufacturing process resulting in lower overall manufacturing yields.
Figure 1. In this image, the bonding pads are grouped in tight clusters rather than evenly distributed across the edge of the chip. This makes it harder to bond to the pads and requires more-precise equipment to do the bonding, thus unnecessarily increasing the assembly cost and potentially impacting device reliability. |
Semiconductor packaging has often been seen as a necessary evil, with most chip designers relying on existing packages rather than package customization for optimal performance. Wafer level and chipscale packaging methods have further perpetuated the belief that the package is less important and can be eliminated, saving cost and improving performance. The real fact is that the semiconductor package provides six essential functions: power in, heat out, signal I/O, environmental protection, fan-out/compatibility to surface mounting (SMD), and managing reliability. These functions do not disappear with the implementation of chipscale packaging, they only transfer over to the printed circuit board (PCB) designer. Passing the buck does not solve the problem since the PCB designers and their tools are not usually expected to provide optimal consideration to the essential semiconductor die requirements.
Packages
Packaging technology has considerably evolved over the past 40 years. The evolution has kept pace with Moore’s Law increasing density while at the same time reducing cost and size. Hermetic pin grid arrays (PGAs) and side-brazed packages have mostly been replaced by the lead-frame-based plastic quad flat packs (QFP). Following those developments, laminate based ball grid arrays (BGA), quad flat pack no leads (QFN), chip scale and flip-chip direct attach became the dominate choice for packages.
The next generation of packages will employ through-silicon vias to allow 3D packaging with chip-on-chip or chip-on-interposer stacking. Such approaches promise to solve many of the packaging problems and usher in a new era. The reality is that each package type has its benefits and drawbacks and no package type ever seems to be completely extinct. The designer needs to have an in-depth understand of all of the packaging options to determine how each die design might benefit or suffer drawbacks from the use of any particular package type. If the designer does not have this expertise, it is wise to call in a packaging team that possesses this expertise.
Miniaturization
The push to put more and more electronics into a smaller space can inadvertently lead to unnecessary packaging complications. The ever-increasing push to produce thinner packages is a compromise against reliability and manufacturability. Putting unpackaged die on the board definitely saves space and can produce thinner assemblies, as in smart card applications. This chip-on-board (COB) approach often runs into problems, however: the die can be difficult to bond because of tight proximity to other components, or end up with unnecessarily long bond wires or wires at acute angles that can cause shorts, as PCB designers attempt to reconcile board line-and-space realities with wire-bond requirements.
Additionally, the use of minimum PCB design rules can complicate the assembly process, since PCB etch-process variations must be accommodated. Picking the right PCB manufacturer matters too; laminate substrate manufacturers and standard PCB shops are often treated as interchangeable by users, but they are not. Designers will often select materials and metal systems that were designed for surface mounting but turn out to be difficult to wire bond. Choosing a supplier that makes the right metallization tradeoffs and maintains process discipline is important for maximizing manufacturing yields.
Power
Power distribution, including decoupling capacitance and copper ground and power planes, has mostly been the job of the PCB designer. Many users wonder why decoupling is rarely embedded into the package as a complete unit; cost or package size limitations are the reasons typically cited. The reality is that semiconductor component suppliers usually don’t know the system requirements, power-fluctuation tolerance, or switching-noise mitigation of any particular installation. Power management is therefore left to the system designer at the board level.
Thermal Management
Miniaturization leaves less volume and less surface area for spreading and dissipating heat, and often there is no room, or project funding, for heat sinks. Managing junction temperature has always been the job of the packaging engineer, who must balance operating and ambient temperatures against the heat flow the package can support. Once again, it is important to develop a thermal strategy early in the design cycle, one that covers die specifics, die-attach material selection, a heat-spreading die-attach pad, thermal balls on BGAs, and direct thermal-pad attachment during surface mount.
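The junction-temperature balancing act described above can be sketched with the standard first-order estimate Tj = Ta + P × θJA. The sketch below is illustrative only; the function name and all θJA values are assumptions, and real numbers must come from the package datasheet and board-level characterization.

```python
# First-order junction-temperature estimate for an early thermal strategy.
# Theta values here are hypothetical; real figures come from the package
# datasheet and depend heavily on board copper and airflow.

def junction_temp(ambient_c, power_w, theta_ja_c_per_w):
    """Tj = Ta + P * theta_JA (junction-to-ambient thermal resistance)."""
    return ambient_c + power_w * theta_ja_c_per_w

# A package with an exposed thermal pad soldered to the PCB might see a
# theta_JA around 30 C/W; the same die without a thermal path could be
# 60 C/W or worse (assumed values for illustration).
tj_with_pad = junction_temp(ambient_c=50.0, power_w=1.5, theta_ja_c_per_w=30.0)
tj_without = junction_temp(ambient_c=50.0, power_w=1.5, theta_ja_c_per_w=60.0)

print(tj_with_pad)   # 95.0
print(tj_without)    # 140.0
```

Even with made-up numbers, the sketch shows why the thermal strategy has to be set early: the difference between the two package choices is the difference between a comfortable junction temperature and one near or beyond typical silicon limits.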
Signal Input/Output
Managing signal integrity has always been a primary concern of the packaging engineer. Minimizing parasitics, crosstalk, impedance mismatch, transmission-line effects, and signal attenuation are all challenges that must be addressed. The package must handle the input/output signal requirements at the desired operating frequencies without significantly degrading signal integrity. All packages have signal characteristics specific to their materials and designs.
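One of the effects listed above, impedance mismatch, can be made concrete with a small sketch. The reflection coefficient Γ = (ZL − Z0)/(ZL + Z0) gives the fraction of an incident signal reflected at a discontinuity; the impedance values below are assumed purely for illustration.

```python
# Reflection at an impedance discontinuity: Gamma = (ZL - Z0) / (ZL + Z0).
# A nonzero Gamma means part of the signal bounces back toward the driver,
# degrading signal integrity at high frequencies.

def reflection_coefficient(z_load_ohms, z_line_ohms):
    return (z_load_ohms - z_line_ohms) / (z_load_ohms + z_line_ohms)

# A 50-ohm board trace driving a package lead that presents 65 ohms
# (hypothetical values):
gamma = reflection_coefficient(65.0, 50.0)
print(round(gamma, 3))  # 0.13

# A matched termination reflects nothing:
print(reflection_coefficient(50.0, 50.0))  # 0.0
```

Thirteen percent of the signal reflected at every such transition adds up quickly across a die-package-board path, which is why the package's own impedance characteristics belong in the signal-integrity budget from the start.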
Performance
There are a number of factors that impact performance including: on-chip drivers, impedance matching, crosstalk, power supply shielding, noise and PCB materials to name a few. The performance goals must be defined at the beginning of the design cycle and tradeoffs made throughout the design process.
Environmental Protection
The designer must also be aware that packaging choices have an impact on protecting the die from environmental contamination and/or damage. Next-generation chip-scale packaging (CSP) and flip chip technologies can expose the die to contamination. While the fab, packaging and manufacturing engineers are responsible for coming up with solutions that protect the die, the design engineer needs to understand the impact that these packaging technologies have on manufacturing yields and long-term reliability.
Involve your Packaging Team
Hopefully, these points have provided some insight into how packaging affects many aspects of design, and why it should not be reduced to picking a package at the end of the chip design. Your packaging team should be involved in the design process from initial specification through the final design review.
In today’s fast-moving markets, market windows are shrinking, so time to market is often the differentiator between success and failure. Not involving your packaging team early in the design cycle can result in costly rework at the end of the project, manufacturing issues that delay product introduction or, worse, problems that are impossible to solve but could have been avoided had packaging been considered at the start.
System design incorporates many different design disciplines. Most designers are proficient in their domain specialty and not all domains. An important byproduct of these cross-functional teams is the spreading of design knowledge throughout the teams, resulting in more robust and cost effective designs.
John T. MacKay, President/Founder, Semi-Pac
John and his partner Tom Molinaro founded Semi-Pac in 1988. They both have been involved in the packaging industry since the late 70s. John has deep technical knowledge of the process and equipment used in integrated circuit fabrication and assembly.
Manufacturing And Packaging Changes For 2015
Good times are ahead for the semiconductor industry, but there are some tricky issues to navigate in 2015.
The predictions are segregated into four areas: Markets, Design, Semiconductors, and Tools and Flows. In this segment predictions related to semiconductors and packaging are explored.
2014 was a pivotal year for the semiconductor industry: it was the year the ecosystem recognized that Moore’s Law, while still on track technically, has become irrelevant to all but a few chipmakers. 28nm has become the node where most designs are likely to stay for a significant time, and the industry will start to improve that node in ways we have not seen before. New processes, materials and devices will be developed for 28nm, and packaging solutions such as 2.5D and 3D will become as important as process development.
The industry itself is doing so well that it is heading for a squeeze. “We see approximately 5% to 7% growth in semiconductor business for 2015 and 12% in the pure-play foundry business,” says Graham Bell, vice president of marketing at Real Intent. “Leading pure-play foundry TSMC will grow faster than the industry average and in 2015 its business will surpass that of all other pure-play foundries combined.”
When we couple that with a prediction from The 2015 Foundry Almanac, there could be trouble ahead. “Wafer-fab utilization at the four largest pure-play IC foundries will increase to an estimated 92% in 2014 compared to 89% in 2013 and 88% in 2012.” That does not provide a lot of headroom for growth unless the fabs start putting in extra capacity.
Other parts of the industry are also getting squeezed. “The worldwide market for DRAMs will rise 16% to $54.1 billion next year (2015),” according to DRAMeXchange forecasts. The company predicts that smartphones and tablet computers will drive that growth. Mobile DRAMs represent 40% of that, up from about 36% in 2014.
Sticking around at 28nm
Synopsys is convinced that 28nm is here to stay. Three different people within the company were ready to defend that conviction. “28nm will have a very, very long lifespan,” says Marco Casale-Rossi, product marketing manager within the Design Group of Synopsys. “This will be strengthened by FD-SOI, which may be a very compelling solution if supported by foundries.”
Rich Goldman, vice president of corporate marketing for Synopsys, provides a little more background reasoning. “As the focus is on cost, low power and integration rather than on processing power, the favored processes will be established process nodes at 28nm and above. As it is the last process node that does not require double patterning, 28nm will be a particularly long-lived node, becoming increasingly popular. This will play well for foundries offering specialty processes. FD-SOI will likely be applied to extend the capabilities and longevity of 28nm.”
These sentiments are echoed by Navraj Nandra, senior director of marketing for DesignWare analog/mixed-signal intellectual property, embedded memories and logic libraries at Synopsys. “The 28nm node is the mainstay of many devices and predictions are that this will be a ‘long node,’ meaning tape-outs will continue until 2020. The trend here is that some SoC developers are staying on 28nm rather than moving to finFET technologies due to costs or re-tooling their EDA and IP flows. To address this market, the semiconductor foundries are offering an array of 28nm technologies; the two most interesting are TSMC 28HPC and 28nm FD-SOI.”
Moore’s Law is still alive
None of this means that Moore’s Law is technically dead; for some markets it may even be accelerating. “The high-risk, high-reward transitions to new nodes at 16/14nm will be moving very quickly to 10nm and then 7nm,” believes Chi-Ping Hsu, senior vice president, chief strategy officer for EDA and chief of staff to the CEO at Cadence.
According to Nandra, three factors are driving fabrication technologies today. “The requirements for increased functionality, lower power and smaller form factors in mobile consumer devices are driving the demand for smaller technology nodes such as 14nm and 10nm. Using these technologies, designers benefit from the significant reduction in power consumption, both dynamic and leakage. Another key benefit has been performance, since these technologies use finFET devices that have much higher drive. All the major semiconductor fabrication suppliers are projecting volume production in 2015 for 16/14nm finFET.”
The Intel roadmap already includes 10nm, 7nm, and 5nm, “enough runway to ensure that Moore’s Law will be in play for years to come,” says Goldman.
“A close inspection of the memory process roadmaps reveals what they think is going to happen in 2015,” says Bell. “In 2015, we will see Samsung in volume production for 16/14nm DRAMs, followed at the end of the year by Micron and SK Hynix. For NAND flash we see a more aggressive roadmap, with 16/14nm already at volume. The transition to 12/10nm processes will take over at the end of 2015.”
Getting to these nodes creates some mind-boggling numbers. “At 10nm we will be able to integrate more than 100 billion transistors on a single silicon die,” says Casale-Rossi, “and almost 10 trillion transistors at 1nm, towards the end of the next decade.”
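Those two figures are roughly consistent with the classic assumption that transistor density scales with the inverse square of the node dimension. The sketch below is only a back-of-the-envelope check; node "names" no longer map cleanly onto physical feature sizes, so treat this as an order-of-magnitude illustration, not a projection.

```python
# Order-of-magnitude check of the quoted figures: if density scales as
# 1 / node^2, a die holding 100 billion transistors at 10nm would hold
# (10/1)^2 = 100x more at a nominal 1nm node. (Node names are marketing
# labels at this point, so this is illustrative arithmetic only.)

def scaled_transistor_count(base_count, base_node_nm, new_node_nm):
    return base_count * (base_node_nm / new_node_nm) ** 2

print(scaled_transistor_count(100e9, 10, 1))  # 1e+13, i.e. 10 trillion
```

The inverse-square assumption lands exactly on the 10-trillion figure quoted above, which suggests that is the rule of thumb behind it.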
Nandra sees that the only way to fill these chips is through IP. “SoC developers will use the opportunity to move to the next generation of interface IP protocols: USB 2.0 to USB 3.0; LPDDR3 to LPDDR4, PCIe 2.0 to PCIe 4.0. The 10-nm technology will only continue this trend with first tape-outs happening in 2015.”
Paul Pickle, president and chief operating officer for Microsemi, agrees about the importance of IP. “While the industry will still be working at advanced nodes down to 9nm, it will do so using IP blocks. However, there will be greater value in, say, a 28nm process backed by a rich IP portfolio than a 9nm process with no available IP.”
Casale-Rossi also puts it in perspective: “10% of design starts will be at 28nm and below in 2015. The future will not be decided by what is technically possible, but by what is economically affordable, on an application by application basis.”
Goldman provides another sobering note: “These processes (this means 20/22nm and 14/16nm for 2015, along with the emergence of 10nm designs) are dominated by finFET. This assumes that the foundries are able to resolve yield issues.”
Overcoming hurdles
There certainly are hurdles to overcome. The president and chief executive officer of Solido Design Automation says that “manufacturing variation will become increasingly problematic in 2015. To ensure optimal design, development teams will focus both on resolving the increasing number of variation issues, and on reducing the simulations required.”
And when things do go wrong, it is important to be able to find out why. “With this trend toward smaller and smaller structures in today’s advanced processes, finding and analyzing the root cause of failures will become increasingly challenging,” says Taqi Mohiuddin, senior director of marketing for Evans Analytical Group. “Even one individual atom out of place is now causing device performance issues or defects. The device looks as though it has been built correctly, which is putting more and more pressure on failure analysis tools and techniques that investigate all the way down to the transistor level.”
Packaging set to explode
The industry has been waiting for 2.5D IC packaging to become an affordable reality. Many hoped 2014 would be the year it happened, and were frustrated by the slow rate of development. But proponents are not backing down.
“Multi-die designs, especially with interposers, will start to become more mainstream,” says Aveek Sarkar, vice president of product engineering and support at Ansys-Apache. “This will be true especially for applications that need heterogeneous integration support to benefit from advanced processes for digital circuits and mature technologies for analog circuits.”
While most companies keep an eye on stacked die, the big question is timing. “Technology solutions are in place, but we’re just not seeing the pull,” says Steven Pateras, product marketing director for test at Mentor Graphics. He adds that there are still some big questions to resolve before this packaging approach gets rolling.
“How much testing do you do at the wafer level versus what you do at the package, whether it be 2.5 or 3D packages? The sensitivities are somewhat different at 3D because you are stacking multiple die and the total yield issue is more important,” Pateras notes. “The cost of the package is more important. There’s definitely a push toward going to known good die in 3D-IC. The cost of spending more time at a wafer level becomes more compelling. When people actually start doing 3D you will definitely see a move away from ‘probably good die’ to ‘known good die.’ We’re already seeing a move toward known good die for specific application areas anyway, despite the lack of 3D—automotive in particular. There’s a much greater thrust on quality now for the automotive sector, which is exploding.”
But stacked die continues to draw attention, and it should by no means be counted out.
“Semiconductor companies must increasingly be technology-agnostic when it comes to fabrication technologies,” says Pickle. “Innovation will continue to move past a myopic focus on Moore’s Law in a growing range of applications – not simply because scaling no longer delivers the same predicted economic benefits, but because so many of today’s system-level integration challenges require more than just smaller transistors, including the difficult combination of multiple types of analog, RF and mixed-signal devices functions into feature-rich SoCs that are built using multiple process technologies and advanced packaging techniques. An increasingly attractive option will be 2.5D packaging supporting 2,000 or 3,000 connections between dies.”
This is not a time when EDA appears to be holding things back. “The lines between PCB, package, interposer, and chip are being blurred,” points out Hsu. “Having design environments that are familiar to the principals in the system interconnect creation, regardless of being PCB, package, or die-centric by nature, provides a cockpit from which the cross-fabric structures can be created and optimized. Being able to provide all of the environments also means that data-interoperable sharing is smooth between the domains. Possessing analysis tools that operate independent of the design environment offers consistent results for all parties incorporating the cross-fabric interface data.”
But what will it take to get the ball rolling? The increasing importance of emerging markets may provide the economic incentive. “Automotive, health care, industrial and sensors will remain at the established technology node,” says Casale-Rossi. “They will perhaps be looking for alternatives such as 2.5D and 3D-IC, silicon photonics, among others, to improve integration and performance.”
Today, 2.5D and 3D-ICs are still taxiing while the supply-chain is being sorted out. “3D will still be over the horizon in 2015,” says Mike Gianfagna, vice president of marketing at eSilicon, “but interposer design will become more relevant in 2015. It might take another year to reach the mainstream however.”
Hsu suggests that “the foundry encroachment into the OSAT (outsourced assembly and test) space has begun with silicon-based 2.5D interposer technology and through silicon via (TSV) 3D die stacking offerings. As the pitches get finer on organic substrates and pricing on silicon options comes down, the foundries and the OSATs will be on an innovation spree to vie for dominance.”
But not all see this as the certain path forward. Bernard Murphy, chief technology officer at Atrenta, sees another possibility. “Intel is actively working on flip-chip and bump interconnects for die stacking, rather than TSVs. This is claimed to be simpler, lower cost and higher density than a TSV approach.” Murphy assumes this limits stacking to two die—maybe. “That hardly seems like a major limitation for most designs these days—and you presumably could still extend to side by side stacks on an interposer if needed.”
Murphy also expects more competition from monolithic 3D technologies. These are two or more layers of active components stacked above the same bulk silicon. “According to Qualcomm, TSV architectures are not really solving the interconnect issue and are costly. Monolithic 3D-ICs use much smaller vias, which can provide both more and finer-grained connections. This can provide a one-process-node advantage along with a 30% power savings, 40% performance gain, and 5% to 10% cost savings.”
Markets, economics and politics
Emerging markets such as the Internet of Things are highly cost-sensitive, but their complexity is reduced because of the lower demands on data processing.
“Typically these devices are designed with fewer metal layers, to save on cost, and the IP requirements are also simplified compared to the high-end application processors,” says Synopsys’ Nandra. “This brings on new design challenges where cost is a determining factor. A small amount of code storage is required in these applications and semiconductor foundries are qualifying embedded flash technologies to meet these requirements. Power is also a key consideration and the trend here is for the foundries to offer ultra-low-power technologies. Since data processing speed is not a key consideration for these devices but leakage power is, high-threshold devices will become popular.”
Bell sees another impact coming from politics. “The joint United States-China announcement of an end to China’s tariffs of up to 25% on advanced semiconductor components came as a surprise this past November.” Product groups for which tariffs will be eliminated include high-end medical equipment and game consoles, which only just recently became available to consumers in China. Bell explains the impact. “As with any tariff, local businesses are protected from foreign competition to give them an edge in economic growth. China certainly wants to promote its high-technology sector. However, big companies such as Lenovo and Huawei that are headquartered in China found themselves at a disadvantage when trying to compete with foreign companies on product cost. Both of them rely on imported ICs to build their systems.”
Bell says this will benefit large electronics systems companies, while China’s semiconductor foundry sector may be the loser. “SMIC is the leading Chinese foundry, with approximately 40% of its business from domestic customers. Its revenue in terms of dollars-per-wafer is much lower than other leading foundries because more than 80% of its business is in geometries larger than 45nm. With customers turning to foreign suppliers that can offer more leading-edge processes, SMIC may be forced into a consolidation with another leading pure-play foundry.”
The net result is that the cost of devices made in China may be lower and that will add up to even tougher consumer product competition.
[Last year, Semiconductor Engineering reviewed the 2014 predictions to see how close to the mark they came. You can see those in part one and part two of the retrospective. We will do the same with their predictions this year.]
Packaging Wars Begin
OSATs and foundries begin to ramp offerings and investments in preparation for mainstream multi-chip architectures.
At one time, the outsourced semiconductor assembly and test (OSAT) vendors dominated and handled the chip-packaging requirements for customers. The landscape began changing several years ago when TSMC entered the advanced packaging market. Since then, two other foundry vendors—Intel and now Samsung—have announced plans to enter the advanced packaging business. The moves put the three foundries in direct competition with the OSATs, although not all foundries are competing against the packaging houses.
Still, some foundries are taking a significant bite out of the business. For the new iPhone 7, TSMC is making the A10 application processor on a foundry basis for Apple, according to Yole Développement. Based on a 16nm finFET process, Apple’s A10 is housed in TSMC’s Integrated Fan-Out (InFO) technology, the research house said.
Now, TSMC is developing a second-generation fan-out package and is spending roughly $1 billion on advanced packaging. And Intel has said that its R&D spending on packaging exceeds that of the two largest OSATs combined.
In response to the foundries and other events, several OSATs are consolidating as part of an effort to combine their resources and R&D. For example, Advanced Semiconductor Engineering (ASE), the world’s largest OSAT, plans to merge with one large rival and is investing in another.
But the changing landscape will present some challenges for customers. First, many chipmakers and OEMs are still evaluating various next-generation package types for their future products. Finding the right solution is complex and difficult.
In addition, customers must select a vendor for their advanced packaging requirements. Customers could go one of the following routes:
• A turnkey service from a foundry. This involves everything from front-end manufacturing to IC-packaging and test.
• An OSAT.
• A combination of both foundries and OSATs.
So what’s the best path? The answer depends on the requirements. Each route has some advantages and disadvantages. For example, in the turnkey approach, the foundry manages the supply chain and production flow, thereby controlling cost and yield for customers. But the turnkey approach is less flexible and usually prevents customers from working with their preferred OSAT partners.
Ultimately, the decision comes down to several factors. “The market is going to determine who does it,” said Jim Walker, an analyst with Gartner. “It’s all based on cost.”
There are other considerations. At last count, for example, a dozen or so companies are developing fan-out packages, but it’s unclear if all vendors or technologies will succeed in the long run. “Since this is an emerging market, there is enough room for all of them until the market matures,” Walker said.
What is advanced packaging?
A number of OSATs and foundries are pursuing advanced packaging for good reason—it’s a hot market. In fact, some foundries are looking for new growth engines amid a slowdown in the IC industry, and packaging presents some new and sizeable opportunities.
In total, the advanced packaging market is expected to reach $30 billion by 2020, up from $20.2 billion in 2014, according to Yole. Flip-chip is still the largest market in this area, but fan-out is growing the fastest. The fan-out packaging market is projected to grow from $244 million in 2015 to $2.4 billion by 2020, according to Yole.
Meanwhile, the separate market for 2.5D technology using through-silicon vias (TSVs) is expected to grow at an annual rate of 22% from 2014 to 2020, according to the firm.
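The Yole figures quoted above can be sanity-checked with a compound-annual-growth-rate calculation, CAGR = (end/start)^(1/years) − 1. The function below is a simple illustration using the article's own numbers.

```python
# Compound annual growth rate: CAGR = (end / start) ** (1 / years) - 1.
# Used here to sanity-check the Yole market forecasts quoted in the text.

def cagr(start_value, end_value, years):
    return (end_value / start_value) ** (1.0 / years) - 1.0

# Fan-out: $244M (2015) to $2.4B (2020) works out to roughly 58% per year,
# which is why it is called the fastest-growing segment.
print(round(cagr(244e6, 2.4e9, 5), 3))   # 0.58

# The overall advanced packaging market, $20.2B (2014) to $30B (2020),
# grows at a far more modest ~6.8% per year.
print(round(cagr(20.2e9, 30e9, 6), 3))   # 0.068
```

The nearly tenfold gap between the two rates makes concrete why fan-out, not the much larger flip-chip segment, is where the competitive attention is focused.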
Momentum is building for advanced packaging for several reasons. In simple terms, chipmakers want smaller packages with more performance. “Nowadays, there are several considerations on how you implement the chip,” said Walter Ng, vice president of business management at UMC. “Packaging is as much an upfront decision as anything else. It can certainly impact overall cost and performance.”
For example, smartphones have traditionally incorporated a packaging technology called package-on-package (PoP). This utilizes flip-chip interconnects in a ball grid array (BGA) package. PoP stacks two or more separate dies on top of each other. A memory package is on the top, while an application processor or baseband die is on the bottom.
PoP is still used in smartphones, but the technology is running out of steam at thicknesses of 0.5mm to 0.4mm. “(PoP) also starts to show some limitations in terms of bandwidth and power,” said Doug Yu, senior director of integrated interconnect and package technology at TSMC.
Seeking to displace today’s PoP packages, OSATs and several foundries have been working on an array of new and competitive technologies.
One of those technologies is called fan-out wafer-level packaging. Wafer-level packaging involves packaging an IC while it’s still on the wafer, enabling smaller packages.
Wafer-level packaging involves two basic technologies—chip-scale packaging (CSP) and fan-out. CSP is a fan-in technology, where the I/Os are situated over the solder balls in the package. Fan-in runs out of steam at about 200 I/Os and 0.6mm profiles.
In fan-out, individual dies are embedded in an epoxy material. The interconnects, according to Deca Technologies, “are ‘fanned out’ through a redistribution layer (RDL) to the solder bumps,” enabling more I/Os.
“We see market opportunities for (fan-out) going from small low pin count applications all the way through to very high pin count devices such as FPGAs,” said Garry Pycroft, vice president of sales and marketing for fan-out packaging specialist Deca. “We also see many opportunities for multi-chip solutions, notably as companies look to partition their analog and digital blocks of their SoCs in different fab technologies to gain the most effective option.”
Indeed, fan-out provides customers with several options. For example, it enables a multi-die package with leading- and/or trailing-edge chips.
Or, instead of moving down the traditional scaling path with system-on-a-chip (SoC) designs, fan-out also enables highly-integrated, system-level packages with existing chips. “This is definitely a consideration, as SoC integration has too much NRE cost and a slower-time-to-market,” Gartner’s Walker said. “(Fan-out has) the ability to meet various volume demands for many IoT applications.”
Still, customers face some complex choices. Basically, there are three main types of high-density, fan-out technologies—chip-first/face-down; chip-first/face-up; and chip-last, sometimes known as RDL first.
“Chip first is a process whereby the die is attached to a temporary or permanent material structure prior to creating the RDL, which will extend from the die to the BGA/LGA interface,” said Ron Huemoeller, vice president of worldwide R&D at Amkor. “The reverse is true for a chip last process. The RDL is created first and the die is then mounted.”
The first wave of fan-out packages, called embedded wafer-level ball-grid array (eWLB), were chip-first/face-down. Generally, these are lower-density packages at 10μm line/space.
Today, ASE, JCET/STATS, Nanium and others are pursuing second-generation fan-out packages based on the chip-first/face-down approach. This technology, also called eWLB, is used for finer-pitch packages. It is ideal for smaller die sizes, lower I/Os and a fewer number of RDLs.
Meanwhile, TSMC and Deca are separately pursuing the chip-first/face-up approach. TSMC’s chip-first technology supports more I/Os, 3-plus RDL layers, and 2μm line/space.
Amkor is pursuing the chip-last approach. “It is used for application processors in combination with memory and other die,” Huemoeller said. “It deploys 3-plus layers of RDL and up to a 20mm square body size.”
Today, Apple is one of the first customers using high-density fan-out, but many other OEMs are taking a wait-and-see approach as the technology is still relatively expensive. “The traditional eWLB-type fan-out is cost-effective enough for even the second tier OEMs,” Huemoeller said. “The second-tier (OEMs) are willing to pay a slight premium to access the advanced technologies if the performance gain is big enough. However, it can only be a slight premium over standard product technologies.”
There are other issues. For example, OEMs want a second-source for a given fan-out package. The problem is there are no standards for fan-out, so OEMs must deal with proprietary solutions from vendors.
Besides fan-out, the market is heating up in other high-end packaging markets. In this segment, there is 2.5D stack die using interposers and TSVs. So far, the technology is gaining traction in FPGAs, graphics chips and memory.
In addition, Intel is pushing a technology called Embedded Multi-die Interconnect Bridge (EMIB). “The real key advantage of EMIB is that it requires only tiny pieces of silicon at the die borders to connect together the silicon in the package,” said Mark Bohr, senior fellow and director of process architecture and integration at Intel. “Compare this to an interposer in 2.5D. You’ve got one huge piece of silicon, which is much more expensive.”
Going with a foundry
Selecting the right technology is only part of the challenge. Vendor selection is also difficult, especially when choosing between a foundry or an OSAT.
Generally, there are two types of foundries in this arena. The first type doesn’t compete in the packaging market and works with OSATs, though many of these foundries also provide limited packaging production capabilities, such as interposer development and TSV formation.
The second type of foundry develops and sells its own package types and may provide a turnkey service. They will also work with OSATs, depending on the product type.
The full-service foundry provides an impressive list of offerings, but they also operate at a different cost structure than the OSATs. “The foundry guys are used to doing 40% to 45%, upwards to 50% gross margins,” Gartner’s Walker said. “The packaging guys are used to doing about 20% to 25%. (The foundries) have to be willing to accept less margin to do the same process steps as the packaging guys do.”
Still, the foundries can offset some of the costs in packaging. By providing front-end manufacturing services, they can absorb some of the margin at the backend.
There are other tradeoffs. “For the most part, the foundry guys are sole-sourced when you use them,” Walker said. In contrast, a chipmaker tends to use two or three OSATs for a given package.
Meanwhile, each foundry vendor has a different strategy. For example, TSMC provides a turnkey service for its 2.5D and fan-out packages. In doing so, the company provides customers with a total solution, according to TSMC’s Yu.
Intel, meanwhile, also provides a turnkey service, although it will work with OSATs. “We don’t put any artificial constraints on the business,” said Zane Ball, vice president in the Technology and Manufacturing Group at Intel and co-general manager of Intel Custom Foundry. “Typically, once customers see what our assembly and test capabilities are, that tends to be a highlight of the collaboration.”
Another foundry, Samsung, has a different strategy. “We are not isolating the OSATs,” said Kelvin Low, senior director of foundry marketing at Samsung. “We don’t think that is a good idea. We think having a healthy ecosystem around the foundry business is still important.”
Samsung recently opened up its internal packaging and substrate operations for customers. Today, the company offers 2.5D technologies, fan-out and other packages.
Like Intel and TSMC, Samsung provides a turnkey service. “We have customers that want that,” Low said. “For us, it’s more like being a coordinator. How we consign the service is up to us.”
In many cases, though, Samsung will initiate the discussions with customers on packaging and may even ramp up a package to a limited degree. But for the most part, Samsung wants to avoid the high-volume packaging game. It prefers to offload the high-volume business to the OSATs.
“We have not changed our strategy here. Working with ASE, Amkor and others is important,” Low said. “It’s hard for us to manufacture everything. It’s not practical.”
Still others have different strategies. For example, Micron sells a 3D DRAM product called the Hybrid Memory Cube (HMC). As part of the flow, GlobalFoundries handles the TSV formation process and other steps for Micron’s HMC.
GlobalFoundries also develops interposers on an R&D basis, but it doesn’t develop chip packages for the commercial market—nor does it want to compete against the OSATs. “We work with the OSATs,” said Gary Patton, chief technology officer at GlobalFoundries.
Meanwhile, UMC provides front-end TSV manufacturing services, but it is staying out of the packaging business and works with OSATs. “Large and smaller companies generally want flexibility,” UMC’s Ng said. “They want the flexibility to choose the solution. They don’t want to be told: ‘This is the solution and take it or leave it.’ So our strategy is to continue to work with the ecosystem partners. We want to try to support those companies. We don’t want to put them out of business.”
Working with OSATs
Like the foundries, the OSATs have some advantages and disadvantages in advanced packaging. OSATs may have some but not all of the technical capabilities in the arena. But unlike the foundries, OSATs are more flexible and can handle large product mixes. “OSATs are built to handle product shifts, reuse of equipment and market re-direction,” Amkor’s Huemoeller said. “(OSATs have the) ability to receive die from multiple foundries to produce the final package. This is critical for SiP packages.”
Still, customers must keep a close eye on the OSATs. Over time, fewer OSATs can afford to make the necessary investments for both mainstream and advanced packages. There is only a finite amount of R&D dollars to go around.
As a result of this and other factors, the OSATs are consolidating. “(Consolidation) will bring a benefit to customers,” said Tien Wu, chief operating officer at ASE, in a recent interview. “If the industry players can consolidate, it will bring more R&D dollars (for packaging).”
Case in point: ASE recently announced plans to merge with Siliconware Precision Industries (SPIL), the world’s third largest OSAT. Under the plan, ASE and SPIL will form a holding company. ASE and SPIL will be subsidiaries of the holding company. Through this arrangement, the two companies hope to pool their resources. That deal is still pending.
Then, earlier this year, ASE invested $60 million in Deca, a subsidiary of Cypress Semiconductor. ASE also will install Deca’s fan-out technology within its production plant in Taiwan. In addition to Deca’s technology, ASE is also working on five or so other fan-out package types. SPIL is working on at least three.
At the same time, the competition is working on a multitude of other fan-out packages, but the question is whether there is room for everyone.
“This is going to be a ubiquitous technology and will consume a major portion of the existing flip-chip packaging market,” Deca’s Pycroft said. “The market will splinter and some companies will focus their fan-out solution on a given sector. The potential is there for over a dozen companies to be engaged.”
An OSAT Reference Flow for Complex System-in-Package Design
With each new silicon process node, the complexity of SoC design rules and physical verification requirements increases significantly. The foundry and an EDA vendor collaborate to provide a “reference flow” – a set of EDA tools and process design kit (PDK) data that have been qualified for the new node. SoC design methodology teams leverage these tool recommendations when preparing their project plan, confident that the tool and PDK data will work together seamlessly.
The complexity of current package design is increasing dramatically, as well. The heterogeneous integration of multiple die as part of a “System-in-Package” (SiP) module design introduces new challenges to traditional package design methodologies. This has motivated both outsourced assembly and test (OSAT) providers and EDA companies to address how to best enable designers to adopt these package technologies. I was excited to see an announcement from Cadence and Advanced Semiconductor Engineering, or ASE, for the availability of a reference flow and design kit for SiP designs.
I recently had the opportunity to chat with John Park, Product Management Director, IC Packaging and Cross-Platform Solutions, at Cadence, about this announcement and the collaboration with ASE.
In preparation for our discussion, I tried to study up on some of the recent technical advances at ASE.
ASE SiP (and FOCoS) Technology
There is a growing market for advanced SiP offerings, spanning the mobile/consumer markets to very high-end compute applications. The corresponding packaging technology requirements share these characteristics:
- integration of multiple, heterogeneous die (and passives) in complex 2.5D and 3D configurations
- very high chip I/O count and package pin count
- high-density and high-performance signal interconnections between die
- compatibility with high volume manufacturing throughput
- compatibility with thermal management packaging options for high-performance applications (e.g., attachment of thermal interface material (TIM) and a heat sink)
Figure 1. SiP for smart watch – top view and cross-section. (From: Dick James, Chipworks, “Apple Watch and ASE Start New Era in SiP”.)
This package incorporates a laminate substrate with underfill, molding encapsulation, and EMI shielding, necessitating intricate Design for Assembly (DFA) rules.
Other SiP applications require high interconnect density between die and high SiP pin counts, as mentioned above – these requirements have necessitated a transition to the use of lithography and metal/dielectric deposition and patterning based on wafer level technology – e.g., < 2-3um L/S redistribution layers (RDL). The volume manufacturing (i.e., cost) requirement has driven development of a wafer-based, bump-attach technology for SiP.
The general class of these newer packages is denoted as fan-out wafer-level processing (FOWLP). ASE has developed a unique offering for high-performance SiP designs – Fan-Out Chip-on-Substrate (FOCoS).
Figure 2. Cross-section and assembly flow for ASE’s advanced SiP, FOCoS. (From: Lin, et al., “Advanced System in Package with Fan-out Chip on Substrate”, Int’l. Conference on Microsystems, Packaging, Assembly and Circuits Technology, 2015.)
The multiple die in the SiP are mounted face-down on an adhesive carrier, and presented to a unique molding process. The molding compound fills the volume between the dice – a replacement 300mm “wafer” of die and compound results, after the carrier is removed. RDL connectivity layers are patterned, underbump metal (UBM) is added, and solder balls are deposited. The multi-die configuration is then flip-chip bonded to a carrier, followed by underfill and TIM plus heat sink attach.
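The FOCoS assembly sequence described above can be captured as an ordered checklist. This is a descriptive sketch paraphrasing the article, not ASE’s actual process traveler:

```python
# The FOCoS assembly flow from the text, as an ordered list of steps.
# Step wording paraphrases the article; it is illustrative only.

FOCOS_FLOW = [
    "mount die face-down on adhesive carrier",
    "mold compound between dice (reconstituted 300mm wafer)",
    "remove carrier",
    "pattern RDL connectivity layers",
    "add under-bump metal (UBM)",
    "deposit solder balls",
    "flip-chip bond multi-die unit to substrate",
    "underfill",
    "attach TIM and heat sink",
]

for step_no, step in enumerate(FOCOS_FLOW, start=1):
    print(f"{step_no}. {step}")
```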
SiP-intelligent design
With that background, John provided additional insight on the Cadence-ASE collaboration.
“SiP technology leverages IC-based processing for RDL fabrication. Existing package design and verification tools needed to be supplanted. Cadence recently enhanced SiP Layout, to provide a 2.5D/3D constraint-driven and rules-driven layout platform. Batch routing support for the signal density of advanced heterogeneous die integration is required,” John highlighted.
“To accelerate the learning curve for the transition to SiP design, Cadence and ASE collaborated on the SiP-id capability – System-in-Package-intelligent-design.”
The figure below illustrates the combination of design kit data, tools, and reference flow information encompassed by this partnership.
Figure 3. SiP-id overview. ASE-provided design kit data highlighted in red.
ASE provided the Design for Assembly (DFA) and DRC rules data, for Cadence SiP Layout and Cadence Physical Verification System (PVS).
Further, there are a couple of key characteristics of SiP-id that are truly focused on design enablement.
- The DFA and DRC rules are used by SiP Layout for real time, interactive design checking (in 2D and 3D).
- ASE provides environment setup and workflow support to SiP designers, for managing the data interfaces to ASE, as illustrated below.
- As a result, this is a manufacturing sign-off based flow.
Figure 4. Customer interface with SiP-id.
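To give a flavor of what a DFA/DRC rule deck drives during interactive checking, here is a toy die-to-die spacing check. The 150µm minimum spacing and the die placements are invented for the example; real rule values come from the ASE design kit, and real checkers handle far richer geometry:

```python
# Toy illustration of an assembly-level spacing check of the kind the
# ASE DFA/DRC rules drive in SiP Layout. All values are hypothetical.

from itertools import combinations

MIN_DIE_SPACING_UM = 150.0  # hypothetical assembly rule

# Each die: name -> bounding box (x0, y0, x1, y1) in microns.
dies = {
    "logic":  (0.0, 0.0, 5000.0, 5000.0),
    "memory": (5100.0, 0.0, 8100.0, 4000.0),
}

def box_spacing(a, b):
    """Edge-to-edge spacing between two axis-aligned boxes (0 if they overlap)."""
    ax0, ay0, ax1, ay1 = a
    bx0, by0, bx1, by1 = b
    dx = max(bx0 - ax1, ax0 - bx1, 0.0)
    dy = max(by0 - ay1, ay0 - by1, 0.0)
    return (dx * dx + dy * dy) ** 0.5

violations = [
    (n1, n2, box_spacing(b1, b2))
    for (n1, b1), (n2, b2) in combinations(dies.items(), 2)
    if box_spacing(b1, b2) < MIN_DIE_SPACING_UM
]
for n1, n2, s in violations:
    print(f"spacing violation: {n1} <-> {n2} = {s:.0f}um "
          f"(rule: {MIN_DIE_SPACING_UM:.0f}um)")
```

The point of a sign-off based flow is that the same rule data used interactively here is the data the manufacturer will enforce at hand-off, so violations surface during layout rather than at assembly.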
SiP technology will continue to offer unique PPA (and cost) optimization opportunities, especially for designs integrating heterogeneous die. The collaboration with ASE and Cadence to provide assembly and verification design kit data and release-to-manufacturing reference flows is a critical enablement. ASE is clearly committed to assisting designers pursue the challenges of SiP integration – perhaps their SiP-id web site says it best:
“It is our intention to offer all ASE customers a set of efficient tools where designers can freely experiment with designs which can go beyond the current packaging limits… This is an ongoing effort by ASE, not only to develop fanout (such as Fan-Out Chip on Substrate, FOCoS), panel fanout, embedded substrates, 2.5D, but also to making design tools more user friendly, up-to-date and efficient.”
This is indeed an exciting time for the packaging technology industry.
For more information on Cadence SiP Layout, please follow this link. For more information on the SiP-id reference flow and customer interface to ASE, please follow this link.
-chipguy
OSAT Consolidation Continues
The merger of ASE Group and SPIL alters the competitive landscape, but more changes are still ahead.
For now, the companies will continue to operate separately, while their shares are traded under the ASX symbol on the New York Stock Exchange. ASE Industrial Holding serves as the parent company for ASE and SPIL. But the merger is almost certain to change the competitive landscape in the assembly, packaging and test markets.
All of these areas are tough markets. ASE posted net income of $670 million on 2017 revenue that was just slightly shy of $9.8 billion. While that may seem like a lot of money, compared with many segments in the technology world it’s a tight operating margin.
The ASE/SPIL deal is being very closely watched by Amkor Technology, JCET Group, and smaller contractors. OSATs are not just competing among themselves anymore. Increasingly, they are facing competition from TSMC, UMC and other foundries, which have pressed into chip packaging and testing services for several years now. There are also internal assembly and testing operations at some of the bigger semiconductor vendors, such as Intel, Samsung Electronics, and Texas Instruments, that take away certain business opportunities.
“There are a bunch of pressures in the OSAT business that will shape the industry in coming years,” says Risto Puhakka, president of VLSI Research. TSMC is competing in the high-end packaging business, with Apple as its big customer, and integrated device manufacturers are also competing in that field, he notes. (TSMC’s largest customer, not identified in its most recent 20-F filing, accounted for 22% of the foundry’s 2017 net revenue.)
There is more competition on the horizon, too. “The other pressure point that the OSATs feel is China,” says Puhakka. “There’s a substantial amount of packaging coming up in China, with the much lower cost, whether it’s through subsidies or something else. There is definitely pressure at the low end. It comes in the form of price pressure. Because the OSATs want to keep up the volume; their pricing is a much tougher environment. If you look at those two trends, you see people probably want to get bigger, they want to be operating in China, they want to be more competitive. The cycle in R&D is to get back that high-end business, and there are a number of things pushing in those directions. If you look at the OSAT business last year, there was growth, but nothing spectacular. Then, you look at the assembly equipment demand, which was spectacularly hot, which means a lot of equipment went to others—other than the traditional OSATs. It went mainly to China, to IDMs, to TSMC, Samsung.”
China’s OSAT industry is mostly made up of smaller firms, aside from Jiangsu Changjiang Electronics Technology (JCET), which owns STATS ChipPAC and other companies. JCET acquired STATS ChipPAC in 2015.
ASE and SPIL will be involved in integration initiatives in the near future, according to Puhakka. “The bigger question is, what will Amkor do? What will JCET do? The big players may want to buy from China. That’s not out of the question, but there would be some regulatory hurdles, I would imagine, to do that.”
The large OSATs today have geographically diverse operations throughout Asia. But China represents the largest growth opportunity.
“It is a market not to be ignored,” he says. “It’s just that you have the Chinese regulations, the requirement of joint ventures and technology transfers. It makes people very uneasy to do something like that. Those kinds of actions have limited how much business transfers to China. If you’re operating in China, you have ongoing IP protection issues. You’re constantly making decisions about what IP are you moving to China, what are you not. By default, people are basically saying, whatever you move to China, it becomes Chinese knowledge.”
Fig. 1: Pressures mount for OSATs. Source: CLSA
Bigger deals
Just as more limited opportunities and growing R&D investments fostered some mega-deals in the semiconductor business, similar forces are at work in the assembly, package and test world, which serves the semiconductor companies.
“It was certainly no surprise that ASE and SPIL came together, because of the increasingly challenging OSAT business environment and the major consolidation we’re seeing in our customer base,” says Hal Lasky, senior vice president of sales and marketing for JCET Group, who also serves as executive vice president and chief sales officer for STATS ChipPAC. “It’s kind of inevitable that we would see this at the OSAT level. Clearly, we’re a part of that as well, as we are now a part of the JCET Group. What does it mean for the competition? As a company, we embrace this change, and we see many opportunities arising due to this merger. There are many semiconductor companies, our customers, who see a combined market share of ASE and SPIL within their own TAM. I call it unhealthy, or maybe a little too high. We’ve had many chances to compete for market share where, without this merger, we wouldn’t have. I absolutely believe this merger enhances the competitive nature of the OSAT space. Maybe it gives us a higher bar to shoot at, which is not necessarily a bad thing for this very competitive OSAT industry.”
Lasky anticipates there will be more consolidation ahead, for the OSAT segment in particular, and the semiconductor industry in general.
“In the OSAT space, while I do expect us to follow the trend, we won’t see quite the pace. There is still a chance to see continued OSAT consolidation, but maybe not at the pace of our customer set. And the issue with OSATs is that the long tail of our industry—where the small players are not always of interest for M&A for the larger OSAT because the return you get versus the alternative of just competing for the business—when you look at that and the ROI and the deal around that, it doesn’t come out in favor of acquisition. Also, in the OSAT space, the technology gap between top-tier OSATs and the smaller OSATs continues to grow. That has an impact on the interest level in the larger OSATs to drive M&A with smaller ones.”
So rather than accelerating consolidation, consolidation among OSATs actually could slow down, he says. At the same time, TSMC will continue to compete with OSAT contractors in IC packaging services. “Their solutions in the wafer-level space—InFO and CoWoS—those are outstanding packaging solutions. They are targeted at key segments in our customer space, and they are staking out their portion of the application space. Within the overall application space, there is a very good fit for those products. They’re investing in the back end, and they’re doing it in a way that makes sense to their business. And it lets them optimize their overall business model. I see them continuing with that and continuing to stake out that position. While that’s certainly a challenge to the OSAT space, it’s not really a killer. But we need to adapt to that.”
Peaceful coexistence?
So can foundries and OSATS live and work together?
“There’s no question that the answer to that is yes,” Lasky says. “I believe very strongly that there’s plenty of opportunity. As we adapt to that shift in TAM, it’s mostly TSMC when you look at it, the bottom line is I believe very strongly in the spirit of co-opetition, because we continue to work very closely with the foundries to take care of and support our customers. Key to how we adapt as an OSAT industry really is finding where our strengths and our capabilities in the OSAT industry can take advantage of new growth areas, to take TAM back versus losing it to the foundries.”
The system-in-package module space is one area where OSATs can really shine, Lasky asserts.
“When these higher-level solutions involve multiple die, multiple devices, and you need to integrate at the packaging level to create a packaging-level solution, suddenly you need the abilities of OSATs, where in the past you might have done that as an EMS board-level solution,” he says. “There’s miniaturization, there are shielding concerns, there are a lot of different intricate process-level concerns, and it’s required at a very high yield. And those are all things that we play well in. We’re starting to see our TAM actually grow in that space in the OSAT world. There is no one packaging solution that’s going to wallpaper the entire application space. You need to find your strengths, find where your capabilities can let you grab share, and then go for that. Even in wafer-level, where the foundries have the very strong solutions for some of these processors, we have our own fan-out wafer-level and wafer-level CSP solutions that don’t make sense in the foundry space. The OSAT can do a better job.”
Ron Huemoeller, corporate vice president of Amkor and head of corporate R&D, likewise sees big changes and challenges in the OSAT industry.
“It’s a changing competitive environment and the OSAT market continues to narrow at the top, with only two OSATs remaining dominant in all phases of technology, ASE and Amkor, following the merger of SPIL and ASE. With fewer choices, more dependence on the premier OSATs is inevitable. It is important to note that developing and manufacturing new package platforms is expensive, and it requires a high degree of engineering expertise. It also requires perpetual funding in R&D to maintain competitiveness. Adding new blocks of capacity is very expensive – continually challenging ROI.”
Whether that will lead to more consolidation, and how quickly, remains to be seen.
“The OSAT business requires scale,” says Prasad Dhond, Amkor’s vice president and general manager of automotive. “There will continue to be some level of consolidation as players try to combine their resources to compete. This might now be more applicable to the smaller (Tier 2 and Tier 3) players, though. Foundries are making a push into certain segments of high-end packaging. They view this as an opportunity to cross-sell additional services and also to make their business stickier. However, from a fundamental business model standpoint, packaging margins are lower than what foundries are used to. It is not clear if foundries will be willing to make heavy CapEx investments in packaging when they could be using the capital for something else.”
So does that mean everyone will co-exist in their own space?
“One of the key aspects of foundry success in entering into the OSAT market segment is the bundling of their silicon with advanced packaging technology,” Huemoeller observes. “They secure the silicon sale by attaching it to the package technology. Foundries and OSATs are key components of the ecosystem. The foundries won’t engage in all aspects of the assembly and test business. There are niche areas they will play in, but there will always be a need for them to work with OSATs for other applications and if volumes exceed certain thresholds.”
Behind the ASE-SPIL deal
In its 20-F filing with the Securities and Exchange Commission for 2017, ASE provided an in-depth look at market pressures and developments in this space.
“We have significantly expanded our operations through both organic growth and acquisitions in recent years,” ASE says. “For example, we acquired the controlling interest of Universal Scientific in 2010 to expand our product offering scope to electronic manufacturing services; we also entered into a joint venture agreement with TDK Corporation in May 2015 to further expand our business in embedded substrates; in June 2016, we entered into the Joint Share Exchange Agreement with SPIL to take advantage of the synergy effect of business combination between SPIL and us; furthermore, we entered into a joint venture agreement with Qualcomm Incorporated in February 2018 to expand our SiP business. We expect that we will continue to expand our operations in the future. The purpose of our expansion is mainly to provide total solutions to existing customers or to attract new customers and broaden our product range for a variety of end-use applications. However, rapid expansion may place a strain on our managerial, technical, financial, operational and other resources. As a result of our expansion, we have implemented and will continue to implement additional operational and financial controls and hire and train additional personnel. Any failure to manage our growth effectively could lead to inefficiencies and redundancies and result in reduced growth prospects and profitability.”
It adds, “The successful consummation of the SPIL Acquisition is subject to a number of factors, including, among other things, obtaining all necessary antitrust or other regulatory approvals in Taiwan, the United States, the PRC and other jurisdictions where we do business. We received a no-objection letter in respect of the Share Exchange from the TFTC on November 16, 2016. On May 15, 2017, we received a letter from the FTC confirming that the non-public investigation on the Share Exchange has been closed. On November 24, 2017, we received approval from the Ministry of Commerce of the People’s Republic of China (MOFCOM) for the Share Exchange under the condition that ASE and SPIL maintain independent operations, among other conditions, for 24 months. In the event these conditions cannot be satisfied, we may re-evaluate our interest in SPIL and may consider, among other legally permissible alternatives, to dispose our SPIL shares at a loss, which may significantly affect our financial position. Notwithstanding the above, even if we are successful in consummating the SPIL Acquisition, we will be subject to regulatory restrictions requiring us to maintain separate operation of SPIL for a period of time, and we may face challenges in successfully integrating SPIL into our existing organization or in realizing anticipated benefits and cost synergies afterwards. Each of these risks could have a material adverse effect on our business and operations, including our relationship with customers, suppliers, employees and other constituencies, or otherwise adversely affect our financial condition and results of operations.”
The 20-F says, “The packaging and testing business is capital-intensive. We will need capital to fund the expansion of our facilities as well as fund our research and development activities in order to remain competitive. We believe that our existing cash, marketable securities, expected cash flow from operations and existing credit lines under our loan facilities will be sufficient to meet our capital expenditures, working capital, cash obligations under our existing debt and lease arrangements, and other requirements for at least the next twelve months. However, future capacity expansions or market or other developments may cause us to require additional funds…If we are unable to obtain funding in a timely manner or on acceptable terms, our results of operations and financial conditions may be materially and adversely affected.”
ASE has a co-opetition relationship with TSMC. The two companies have had a “strategic alliance” since 1997. ASE serves as the foundry’s non-exclusive, preferred provider of packaging and testing services for microchips fabricated by TSMC.
Conclusion
While OSATs will have one larger competitor to deal with in the near future, those companies look forward to the fray. TSMC’s muscling in on the high-end packaging business, especially when it comes to Apple’s custom application processors for the iPhone and the iPad, is a competitive challenge.
Yet OSATs retain expertise in the areas of SiP modules, molded interconnect substrates, substrate-like printed circuit boards, semiconductor embedded in substrate, and other emerging technologies. And while competition continues to ratchet up, there are always new opportunities around the edges for companies with the expertise and investment dollars to continue eking out a healthy living.
Choosing The Right Interconnect
Packaging options increasing as chipmakers vie for higher performance, lower power and faster time to market.
At the center of this frenzy of activity is the interconnect. Current options range from organic, silicon and glass interposers, to bridges that span different die at multiple levels. There also are various fan-out approaches that can achieve roughly the same high performance and low-power goals as the interposers and bridges.
What’s driving all of this activity is a recognition that the economic and performance benefits of shrinking features are dwindling. While this has been apparent on the analog side for some time, it’s now beginning to impact ASICs for a different reason—the immaturity of applications for which chips are being designed.
In artificial intelligence, deep learning and machine learning, which collectively represent one of the hot growth markets for chips, the training algorithms are in an almost constant state of flux. So are the decisions about how to apportion processing between the cloud, edge devices and mid-tier servers. That makes it far more difficult to commit to building an ASIC at advanced nodes, because by the time it hits the market it already may be obsolete.
The situation is much the same in the automotive segment, where much of the technology is still in transition. And in burgeoning markets such as medical electronics, augmented and virtual reality, IoT and IIoT, no one is quite sure what architectures will look like or where the commonalities ultimately will be. Unlike in the past, when chipmakers vied for a socket in a mobile phone or a PC or server, applications are either emerging or end markets are splintering.
That has helped push advanced packaging into the mainstream, where there are several important benefits:
• Performance can be improved significantly by routing signals through wider pipes—TSVs, bridges or even bonded metal layers, rather than thin wires.
• Distances between critical components can be reduced by placing different chips closer to each other rather than on the same die, thereby reducing the amount of energy required to send signals as well as the time it takes to move data.
• Components can be mixed and matched from multiple process nodes, which in the case of analog IP can be a huge time saver because analog circuitry does not benefit from shrinking features.
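The energy argument in the second bullet can be sketched with back-of-envelope numbers. The capacitance-per-length and voltage values below are generic rule-of-thumb assumptions, not figures from the article:

```python
# Illustrative comparison of interconnect switching energy; values are
# rough assumptions (~0.2 fF/um wire capacitance, 0.8 V swing).
C_PER_UM = 0.2e-15   # wire capacitance, farads per micron (assumed)
VDD = 0.8            # supply voltage in volts (assumed)

def energy_per_bit(length_um):
    """Dynamic switching energy C*V^2 for a wire of the given length."""
    return C_PER_UM * length_um * VDD**2

on_die = energy_per_bit(5000)     # ~5 mm route across a large SoC
in_package = energy_per_bit(500)  # ~0.5 mm hop between adjacent dies
print(f"on-die: {on_die*1e15:.0f} fJ/bit, in-package: {in_package*1e15:.0f} fJ/bit")
print(f"reduction: {on_die/in_package:.0f}x")
```

Because switching energy scales linearly with wire capacitance, and capacitance scales with length, a 10x shorter hop saves roughly 10x the energy per bit under these assumptions.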
Still, advanced packaging adds its own level of complexity. There are so many options in play in the packaging world that it isn’t clear which approaches will win. The outcome depends largely on the choice of interconnect, which serves as the glue between different chips.
“The key here is to shorten the time to development, particularly for AI,” said Patrick Soheili, vice president of business and corporate development at eSilicon. “On one side, you can’t afford not to do the chip right away because you can’t be left behind. But you also have to worry about future-proofing it. The goal is to get both.”
DARPA has been pushing chiplets as a way to standardize the assembly of components. The first commercial implementation of this sort of modular approach was developed by Marvell Semiconductor with its MoChi architecture. Marvell still uses that internally for its own chips, which it can customize for customers using a menu of options. DARPA’s CHIPS program takes that one step further, allowing chiplets from multiple companies to be mixed and matched and combined through an interposer.
“Chiplets are absolutely part of the solution,” said Soheili. “But this isn’t so easy. If a 7nm ASIC has to sit in the middle and connect to 180nm chiplets, something has to line up the data and send it over a link.”
Different types of interposers
As companies working with advanced packaging have discovered, this can be time-consuming and expensive. It is assumed that once these various approaches can be vetted and standardized, this process will become quicker and cheaper. That could involve sidestepping silicon interposers, which can run as high as $100 for the interposer itself in complex devices that require stitching of multiple reticles.
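To see why a reticle-stitched silicon interposer stays expensive per unit, a rough dies-per-wafer estimate helps. The interposer size below (two stitched 26mm x 33mm reticle fields) is an illustrative assumption, not a real product dimension:

```python
import math

# Classic dies-per-wafer approximation for a large, reticle-stitched
# silicon interposer on a 300 mm wafer (defects ignored).
def dies_per_wafer(wafer_mm, die_area_mm2):
    """Estimate gross die per wafer for a given die area in mm^2."""
    return int(math.pi * wafer_mm**2 / (4 * die_area_mm2)
               - math.pi * wafer_mm / math.sqrt(2 * die_area_mm2))

area = 2 * 858  # two stitched 26 mm x 33 mm reticle fields, in mm^2
print(dies_per_wafer(300, area), "interposers per 300 mm wafer")  # -> 25
```

With only about 25 candidates per wafer, the full wafer-processing cost is amortized over very few units, which is consistent with the per-interposer prices cited above.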
“There is overall agreement that silicon interposers are expensive,” said Ram Trichur, director of business development at Brewer Science. “The question is what to replace it with. The challenge with organic interposers has been warpage. There are a lot of companies addressing these challenges and working with certain formats for organic interposers. Some are directly mounted, others need a substrate.”
Kyocera, Shinko Electronics and Samsung independently have been developing organic interposers using epoxy films that can be built up using standard processes. One of the key issues here has been matching the coefficient of thermal expansion (CTE) with that of silicon. This isn’t a problem with silicon interposers, of course, but it has been an issue with organic laminates and underfill. Reducing the thickness of the interposer layer has been found to help significantly, according to several technical papers on the subject.
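The CTE-mismatch problem can be quantified with a quick strain calculation. The coefficients and temperature swing below are typical textbook values, not data from the papers mentioned:

```python
# Back-of-envelope differential expansion between silicon and an organic
# laminate over a reflow-scale temperature swing. Values are assumed.
CTE_SI = 2.6e-6       # silicon, 1/K
CTE_ORGANIC = 17e-6   # typical organic laminate, 1/K (assumed)

def edge_displacement_um(die_half_span_mm, delta_t_k):
    """In-plane mismatch displacement at the die edge, in microns."""
    strain = (CTE_ORGANIC - CTE_SI) * delta_t_k
    return strain * die_half_span_mm * 1000  # mm -> um

# 20 mm die (10 mm half-span), ~200 K swing during assembly
print(f"{edge_displacement_um(10, 200):.1f} um of differential expansion")
```

Tens of microns of relative movement at the die edge is far larger than fine bump pitches, which is why warpage and joint stress dominate organic-interposer engineering.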
Fig. 1: Organic interposer. Source: NVIDIA/SEMCO
It’s still not clear if this will be a commercially viable alternative to silicon interposers, however. “With an organic interposer you get the same lines and spaces as a silicon interposer, but by the time you address all of the issues you come up with basically the same cost at the end,” said Andy Heinig, a research engineer at Fraunhofer EAS. “The problem is that you need a system-level study to find out which is the best solution for a design. One of the variables is that you need to transfer a huge amount of data on these devices. If you reduce that to a certain point, you can use an organic interposer. But it’s more of a task to find that out than with a silicon interposer.”
Organic interposers aren’t the only alternative. “There is also work on glass interposers, which are tunable,” said Brewer’s Trichur. “The CTE of glass matches silicon, so you get low loss, which is suitable for high-frequency applications. Glass is also good for panel-level processes, and the cost is low.”
Fig. 2: Glass interposer in test vehicle. Source: Georgia Tech
Interposer alternatives
One of the big attractions of 2.5D silicon interposers, or “2.1D” organic interposers, is improved throughput using arrays of TSVs rather than skinny wires. That allows a multi-pipe connection to stacks of DRAM, known as high-bandwidth memory.
The current HBM 2 JEDEC standard, introduced in 2016, supports up to 8 stacked DRAM chips with an optional memory controller, which is similar to the Hybrid Memory Cube. HBM 2 supports transfer rates of up to 2 GT/s, with up to 256 GB/s bandwidth per package. Over the next couple years that will increase again with HBM 3, which will double the bandwidth to 512 GB/s. There is also talk of HBM 3+ and HBM 4, although exact speeds and time frames are not clear at this point.
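The quoted HBM2 bandwidth figure follows directly from the standard’s 1024-bit interface width; a quick arithmetic check, assuming only that interface width:

```python
# Per-package bandwidth = transfer rate (GT/s) * bus width (bits) / 8 bits/byte.
def hbm_bandwidth_gbs(transfer_rate_gts, bus_width_bits=1024):
    """Peak bandwidth in GB/s for a given transfer rate and bus width."""
    return transfer_rate_gts * bus_width_bits / 8

assert hbm_bandwidth_gbs(2) == 256   # HBM2: 2 GT/s -> 256 GB/s per package
assert hbm_bandwidth_gbs(4) == 512   # doubling the rate -> 512 GB/s
```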
The goal of all of these devices is to be able to move more data between processor and memory more quickly, using less power, and 2.5/2.1D are not the only approaches in play at the moment. Numerous industry sources say that some new devices are being developed using pillars—stacked logic/memory/logic—on top of fan-outs. TSMC has been offering this capability for some time with its InFO (Integrated Fan-Out) packaging technology.
Other high-end fan-outs use a different approach. “Fan-out takes the place of the interposer,” said John Hunt, senior director of engineering at Advanced Semiconductor Engineering (ASE). “Chip-last is closer to an inorganic interposer, and the yield right now is as high as 99% using 4 metal layers and 2.5 spacing. The real objective of an interposer is to increase the pitch of active devices so you can route HBM2. High-end fan-outs perform better thermally and electrically because the copper RDL is thicker and the vias are less resistive. But they only work in cases where you don’t need 1 micron lines.”
There are a number of options available with fan-out technology, as well, including chip-first, chip-last, die-up and die-down. There also are flip-chip, system-in-package, and fan-out on substrate.
What’s important is that there are many ways to tackle this problem, and high-speed interconnects are now available using multiple packaging approaches. Until a couple years ago, the primary choices were fan-out, fan-in, 2.5D and 3D-IC and multi-chip modules, and there were distinct performance and cost differences between all of those. There are currently more options on the table for all of those approaches, and the number of options continues to expand, thereby blurring the lines.
Bridges
Another approach uses low-cost bridges. Intel has its Embedded Multi-die Interconnect Bridge (EMIB), which it offers to Intel Foundry customers as an option for connecting multiple routing layers.
Fig. 3: Intel’s EMIB. Source: Intel.
Samsung, meanwhile, has announced an RDL bridge for its customers, as well, which accomplishes the same thing inside the redistribution layer (RDL).
Fig. 4: Samsung’s interconnect options. Source: Samsung
Both of those approaches can certainly cut the cost of advanced packaging, but they are more limited than an interposer. So while a bridge can provide a high-speed connection between two or more chips, there is a limit to how many HBM stacks can be connected to logic using this type of approach.
Moreover, while the bridges themselves are less expensive than interposers filled with through-silicon vias, they can be challenging to assemble because the connections are planar. The same kinds of warpage issues that affect multi-die packaging apply with bridge technology, as well.
Future goals and issues
One of the reasons this kind of in-package and inter-package interconnect technology is getting so much buzz lately is that the amount of data that needs to be processed is increasing significantly. Some of that must be processed locally, using multiple processors or cores, and some of it needs to be processed remotely, either in a mid-tier server or in the cloud. All of the compute models require massive throughput, and trying to build that throughput into a 7/5nm chip is becoming much more difficult.
The rule of thumb used to be that on-chip processing is always faster than off-chip processing. But the distance between two chips in a package can be shorter than routing signals from one side of an SoC to another over a skinny wire, which at advanced nodes may encounter RC delay. None of this is simple, however, and it gets worse in new areas such as 5G.
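The on-chip versus in-package comparison can be made concrete with a lumped RC (Elmore-style) delay estimate. All resistance and capacitance values below are illustrative assumptions, not process data:

```python
# Lumped RC delay (0.69*R*C to the 50% point) for a long, thin on-chip
# wire versus a short, wide package-level RDL trace. Unit values assumed.
def elmore_delay_ps(r_per_mm, c_per_mm, length_mm):
    """Delay in picoseconds; R in ohm/mm, C in fF/mm."""
    r = r_per_mm * length_mm
    c = c_per_mm * length_mm * 1e-15
    return 0.69 * r * c * 1e12

on_chip = elmore_delay_ps(r_per_mm=1000, c_per_mm=200, length_mm=5)  # thin SoC wire
in_pkg  = elmore_delay_ps(r_per_mm=20,   c_per_mm=300, length_mm=1)  # wide RDL trace
print(f"on-chip: {on_chip:.0f} ps, in-package: {in_pkg:.0f} ps")
```

Because RC delay grows with the square of length and thin wires have high resistance per millimeter, an unrepeated cross-die route can be orders of magnitude slower than a short, thick inter-chip trace, which is the counterintuitive result the rule of thumb misses.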
“There are several materials and process challenges,” said Brewer’s Trichur. “First, you’ve got the structural package issues. Then, when we get into 5G, you’ve got a gap in materials with integrated dielectrics. 5G will be the next materials challenge. So now you’ve got to integrate new materials and new processors, all in a small package. You’ve got more switches, and you also have to integrate antennas, which requires a new process and new materials in itself. This is a whole new challenge.”
Another market where advanced packaging will play a critical role is in AI/ML/DL. The key metrics there are performance and power, but the bigger challenge is being able to churn out new designs quickly. The problem in this segment is that the training algorithms are in an almost constant state of flux, so being able to add new processors or IP is time-sensitive. An 18-month development cycle will not work if the processor or memory architecture needs to change every six months.
Trying to utilize off-the-shelf components for a single-chip solution can cause its own set of issues. “One of the problems we’ve been seeing in big SoCs is that companies are trying to glue everything together and the IP models are at different levels of abstractions and different speeds,” said Kurt Shuler, vice president of marketing at ArterisIP. “That requires you to shim and hack the interconnect model to get it to work. Even then, because of the ancestry of the models, they weren’t developed for pins or TCM (tightly coupled memory) interfaces, or they are cycle-accurate or approximately timed or loosely timed. So we’re seeing things that were not developed on a large scale. They were developed as a point problem.”
Advanced packaging can help that to a point. But most advanced packaging so far has been more about a particular application and a particular project, rather than developing a platform that can be used by many companies.
“If it works well, you can do great things,” said Raymond Nijssen, vice president of systems engineering at Achronix. “But there are many forks in that road. There are solutions with interposers or without. There are different data rates, so you have some solutions with very high data rates. And if you are doing chiplets, it depends on why you are doing chiplets. Is it because you can’t afford that many balls on a package, or is it an issue of power efficiency because you have a hard ceiling on power usage?”
Conclusion
So far, there are no clear answers to any of these questions. But the good news is that there are plenty of options, and many of them already have been proven to work in real products on the market.
The next challenge will be to build economies of scale into the packaging world. That will require the industry to narrow down its choices. Until now, many of these packaging approaches have been expensive to implement, which is why they have shown up mainly in smartphones, where volumes are sufficient to offset the development cost, and in networking chips, where price is less of an issue.
In the future, advanced packaging will need to become almost ubiquitous to drive widespread applications of AI/ML/DL inference at edge nodes and in automotive and a variety of other new market segments. That requires repetition with some degree of flexibility on design—basically the equivalent of mass customization. This is the direction the packaging world ultimately will take, but it will require some hard choices about how to get there. The interconnect will remain the centerpiece of all of these decisions, but which interconnect remains to be seen.
New Issues In Advanced Packaging
The race is on to simulate thermal and electromagnetic effects.
There are a number of factors that are tilting more of the semiconductor industry toward advanced packaging. Among them:
- SoC interconnects and wires are not scaling at the same rate as transistors.
- Costs of designing and manufacturing chips at each new node are skyrocketing.
- Resistance and capacitance are increasing at each new node, along with heat and various types of noise.
These and other issues are driving many chipmakers to seriously consider advanced packaging options, including 2.5D, 3D-ICs, various flavors of fan-outs, and system-in-package.
“The cost to fab is so much higher today, but with 3D-IC and advanced packaging technologies that cost can be mitigated,” Krishnaswamy said. “You can bring down the cost by integrating two chips within the same package. And with a shorter distance of communication, the time it takes for a signal to travel from one chip to another can be less than the time it takes for a signal to travel from one end to another on a big chip at an advanced technology node.”
There is widespread agreement on that subject. “Packaging technology is becoming real, whether it’s side-by-side or stacked using TSVs or interposers,” said Navraj Nandra, senior director of marketing for the DesignWare Analog and MSIP Solutions Group at Synopsys. “And the packaging costs are worthwhile developing because products without advanced packaging are crazy expensive.”
Advanced packaging also can reduce the amount of power needed to drive those signals, because the interconnects are wider and the distances are shorter. And it makes heterogeneous integration much simpler in some respects, because such issues as heat, electromagnetic interference, and power noise can be dealt with at the floor-planning stage.
There is no shortage of examples. Apple’s A-10 processor is a case in point. So are network processors from Cisco and Huawei, and various GPUs and AI architectures. Today, all of the top semiconductor companies have designs in development based on some form of advanced packaging. But as many of them have discovered, this isn’t as easy as it sounds.
Fig. 1: The shape of things to come — fan-out chip on substrate. Source: ASE
“If it works, it can do great things,” said Raymond Nijssen, vice president of systems engineering at Achronix. “But it’s not a panacea. You have to decide how much memory to pair with a CPU, and that’s difficult to change because that memory is pre-packaged with the chip.”
He said that data rates for some solutions can be very high, but getting this wrong also can be expensive because it means throwing away multiple chips rather than one, including an expensive interposer layer. So yield is critical, and that is difficult to achieve with multiple die. “The Holy Grail is a module like a PCB, and we see a lot of interest in a more conventional system-in-package with no interposer.”
Complexity multiplied
For now, however, interposers are a key part of advanced packaging. Increasingly, so are stacked die on top of interposers or some other substrate, in the form of pillars. And to understand just how complex this gets, picture three-die stacks atop a silicon interposer with eight pillars, for a total of 24 chips plus the interposer.
“If you want to check for power integrity [on a design like this], where each three-die stack is composed of either three SoC die, or two SoC die with one memory die, you may want to include everything in that simulation,” said Norman Chang, chief technologist at Ansys. “But in terms of capacity, that would be impossible.”
With a single three-die stack on top of a silicon interposer, the power integrity simulation is no problem, he said. “We can do the extraction for every die, including the TSV from the die to the silicon interposer, along with the internal TSVs from the bottom die to the interposer, from the bottom die to the middle die, and from the middle die to the top die. A simultaneous power integrity simulation can be done for that scenario.”
Simulating 24 chips requires a different approach. One option is to create a chip-power model for one of the stacks while condensing the other chips into a model. It’s critical that the middle die be checked for thermal integrity, and the best way to do that is to use thermal simulation. Here, a chip thermal model could be generated for each die, including the silicon interposer and the TSV in the detailed model of the thermal simulation, so the whole thermal simulation can be done for the 24 die on top of the silicon interposer, Chang explained.
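Chang’s point about checking the middle die can be illustrated with a toy one-dimensional thermal model: each die adds thermal resistance between the dies above it and the heat-removal path, so dies farther from the sink run hotter. The power and resistance numbers below are invented for illustration only:

```python
# Toy 1D thermal stack: heat flows down through shared layers toward the
# sink, so each layer carries its own power plus everything above it.
def junction_temps(powers_w, r_layers_c_per_w, t_ambient=45.0):
    """Temperature at each die, bottom (index 0) nearest the heat sink.

    powers_w[i]         -- power of die i, in watts
    r_layers_c_per_w[i] -- thermal resistance directly below die i, in C/W
    """
    temps = []
    t = t_ambient
    for i, r in enumerate(r_layers_c_per_w):
        heat_crossing = sum(powers_w[i:])  # this die plus everything above it
        t += heat_crossing * r
        temps.append(t)
    return temps

# bottom logic 5 W, middle logic 5 W, top memory 1 W; 2 C/W per interface
print(junction_temps([5.0, 5.0, 1.0], [2.0, 2.0, 2.0]))  # -> [67.0, 79.0, 81.0]
```

Even this crude model shows the inner dies running well above the bottom die, which is why a per-die thermal model for the whole stack, as described above, is needed rather than checking each die in isolation.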
Thermal-induced stress is a new issue that is cropping up for 2.5D and 3D designs. “If you look at the dielectric layer, which is the ELK (extra low-k) layer in the process, then you may suffer stress,” Chang said. “Solder joints and solder bumps traditionally had to be watched for stress, but this is a new, thermal-induced stress due to heat breaking down the materials. At higher temperatures, the stress is more severe.”
Knowing the neighbors
Another problem showing up in these advanced packages is the combination of electromagnetic interference (EMI), electromagnetic compatibility (EMC), electromagnetic susceptibility (EMS), and simultaneous switching output (SSO). EMI is noise radiating out to the world, whereas EMS is noise coming into the chip from outside; EMS testing through direct power injection (DPI) is now a standard requirement in the ADAS market. In SSO, the signal must pass from one chip to the other within the package. A specific simulation is needed for this as well.
Indeed, the over-arching challenge of 2.5D/3D heterogeneous design is that when one block is designed, it is not enough to verify the correctness of this one block by itself.
“We all know that it is important to consider what will happen to a block when integrated either with another block or with the entire chip around it,” according to Magdy Abadir, vice president of marketing at Helic. “There are many variations of this problem, but the wishful thinking of most people is that if I know what the neighbor is doing, and I try to understand the neighbor, it may mean keeping the neighbor away a bit. Why is this intuition true? Because it turned out for capacitive and resistive interactions that it is true. If you keep them away, the resistance is infinity while the capacitance is zero – so it works. But with inductance, it doesn’t because inductance has to do with loops. Current loops can exist in very big loops, which can make inductive magnetic effects between blocks.”
If, for example, there are two blocks and they are separated by a certain distance, when they are simulated, they may not interact with each other. But when they are put into a big chip, it may be a different story. As most SoCs contain a C-ring – which the manufacturer puts in for reliability purposes – even though it is not electrically connected, it is very large compared to the size of these two blocks, and therefore it is part of the physics.
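Abadir’s loop argument can be made concrete with the textbook dipole approximation for two coaxial current loops: magnetic coupling falls off only polynomially with distance, so simply spacing blocks apart does not eliminate it the way it effectively does for capacitance. The loop sizes below are arbitrary:

```python
import math

# Dipole-limit mutual inductance of two coaxial circular loops of radii
# a and b at separation d >> a, b:  M = mu0 * pi * a^2 * b^2 / (2 * d^3).
MU0 = 4 * math.pi * 1e-7  # permeability of free space, H/m

def mutual_inductance(a_m, b_m, d_m):
    """Mutual inductance in henries (valid when d is much larger than a, b)."""
    return MU0 * math.pi * a_m**2 * b_m**2 / (2 * d_m**3)

m_near = mutual_inductance(1e-3, 1e-3, 5e-3)   # 1 mm loops, 5 mm apart
m_far  = mutual_inductance(1e-3, 1e-3, 10e-3)  # doubled separation
print(f"doubling the distance cuts M by {m_near/m_far:.0f}x, not to zero")
```

Coupling scales with the square of each loop’s area, so a large structure like a seal ring forms a loop whose magnetic interaction remains significant even with blocks that are nominally far away.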
“When you do the EM analysis, all of a sudden these two blocks start interfering with each other, and the problem is we have to be aware of the environment,” Abadir said. “We must be aware of what else exists on the chip even though it is not close by. This is a major challenge for a lot of engineering teams, and to SoC providers that cannot handle that kind of complexity. They really want to operate in a way that includes designing IP and putting it together, but that is not enough.”
Existing problem needs a new solution
While the need to simulate heterogeneous 2.5D and 3D designs has been a growing problem in the industry for at least the last five years, it is now reaching a critical stage. Today, it’s not uncommon for heterogeneous designs to contain a half-dozen different technologies inside a module that contains upward of 20 ICs. That’s because a specific manufacturing process may be chosen for optimum power, while the switching control, filters and switches each may be manufactured with different processes, where the materials and process formulas differ.
“And that’s not mentioning the laminate board that they sit on and all of the SMTs,” said Michael Thompson, RF solutions architect in Cadence’s Custom IC & PCB Group. “The ability to handle multiple technologies in both simulation and layout – as well as to verify that what I laid out and how I connected these ICs together with the surface mount technologies actually reflect the schematic – has become a critical problem in the industry over the last few years.”
To accommodate this design paradigm, tool providers are beginning to support multiple process technologies in their simulation tools to handle a variety of different PDKs within the same tool. This also means they can be simulated together and laid out as different technologies. And this is just the beginning of the work in this vein. Tool vendors currently are working to augment tools further.
At the same time, existing tools are still being used. In fact, SPICE is used to do simulation at the chip level, and at the transistor level, Abadir noted. “But what you are feeding SPICE typically in the simulations are RC resistance and capacitive kind of extracted netlists of the chip. You are not seeing the impact of inductance. You’re not seeing the magnetic impact. And even if you have started to extract inductance for the long wires and interesting pieces, it is not very easy for a lot of the tools that are used to operating on smaller blocks to take their outputs, which is their EM models, and put it back into a chip environment. So there is still work to be done. [Many] tools don’t have a way to take that information back in.”
Another consideration is the fact that the low-power aspect of 2.5D and 3D designs makes circuitry a lot more sensitive because voltage levels are dropping, so noise obviously will impact it more. “Adding to the misery, there are reliability issues with thermal, where wires might get thinner,” he said. “As that happens, the inductance goes up, so the analysis that I have done might shift a bit because of the changing parameters, which may result in something that stops functioning six months later, or worse, something that didn’t fail starts missing bits. In a situation like autonomous driving, I’m assuming it looks like it’s doing the data analysis and forecasting the accident before it happens, but if the chip starts failing a little bit, there’s no telling what could happen.”
More complexity ahead
Looking ahead, Ansys’ Chang expects a number of designs with three-die stacks will show up this year containing pillars on silicon interposers. Most of those will have a mix-and-match, wafer-to-wafer die stack on top of a silicon interposer. This will translate to the use of additional advanced packaging technologies, including package-on-package and a number of other variations.
The result is that complexity will continue to increase, but now it will happen in multiple directions. Understanding the impact of all of these pieces on each other is a huge challenge, and it’s one that will continue to evolve as advanced packaging continues to gain ground as an alternative to planar scaling.
—Ed Sperling contributed to this report.
Fan-Out Wars Begin
The number of low-density packaging options is increasing as the popularity of advanced packaging grows.
Amkor, ASE, STATS ChipPAC and others sell traditional low-density fan-out packages, although some new and competitive technologies are beginning to appear in the market. Low-density fan-out, sometimes called standard-density, is one of two main categories in the overall fan-out market. The other type is high-density fan-out.
Generally, fan-out technology provides a small form-factor package with more I/Os than other package types, but it isn’t the only packaging option on the table. Geared for mobile, IoT and related applications, low-density (or standard-density) fan-out is defined as a package with fewer than 500 I/Os and greater than 8μm line and space, according to Advanced Semiconductor Engineering (ASE), an outsourced semiconductor assembly and test (OSAT) vendor. Line and space refer to the width of a metal trace and the spacing between traces in a package.
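The line/space spec translates directly into routing density, because each routed trace consumes one line width plus one space. A quick comparison of the two classes defined in this article (the 2μm figure for high density is an assumption for illustration):

```python
# Traces routable per millimeter of package edge for a given line/space.
def traces_per_mm(line_um, space_um):
    """Each trace occupies one line width plus one adjacent space."""
    return 1000 // (line_um + space_um)

low_density  = traces_per_mm(8, 8)  # low/standard-density fan-out (>8 um L/S)
high_density = traces_per_mm(2, 2)  # a finer high-density RDL (assumed 2 um L/S)
print(low_density, high_density)    # traces per mm of routing channel
```

Quadrupling the line/space fineness quadruples the escape routing per millimeter, which is what separates the two fan-out classes in practice.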
Targeted for mid-range to high-end apps, high-density fan-out has more than 500 I/Os and less than 8μm line/space, according to ASE. TSMC’s InFO technology, the most notable example of high-density fan-out, is incorporated in Apple’s latest iPhones. Other OSATs are chasing after the high-density fan-out market.
The low-density market is also heating up. “InFO-Apple is the dominant one in high-density, but there is also a lot of standard-density (in the market). And there are a lot of devices that can go into this,” said Jan Vardaman, president of TechSearch International.
Among the drivers for standard-density fan-out are audio codecs, power management ICs, radar modules and RF, Vardaman said. And one company—Qualcomm—is the biggest customer in the arena. “It’s more than Qualcomm now,” she said. “We are seeing companies other than Qualcomm move into volumes in this space.”
The market may change in other respects. At last count, several vendors are shipping or readying at least six or more different low-density fan-out technology types. “It depends on how you count them,” said Jérôme Azemar, an analyst at Yole Développement. “In the long-run, there isn’t room for this many architectures, so it is likely that some will disappear or will simply get more and more similar, despite having different names.”
Fig. 1: Companies offering FO-WLP. Source: TechSearch International
Which fan-out technologies will prevail over the long haul comes down to cost, reliability and customer adoption, so chipmakers need to keep a close eye on this business. Here are just some of the main events in the arena:
- The original fan-out technology—embedded wafer-level ball-grid array (eWLB)—is seeing an increase in supply after being sold out for a long period of time.
- ASE and Deca are readying a new low-density fan-out line, a technology that appears to compete with eWLB.
- OSATs from China are moving into fan-out.
- Several packaging houses are pursuing panel-level fan-out, a low-density technology that promises to lower the cost of fan-out.
What is a fan-out?
Fan-out is a relative newcomer on the block. For decades, IC packaging was a straightforward process. “In conventional packaging, the finished wafer is cut up, or diced, into individual chips, which are then bonded and encapsulated,” explained Choon Lee, vice president of advanced packaging at Lam Research.
OSATs continue to use this method, but the big change occurred in the early 2000s, when the industry developed a technology called wafer-level packaging (WLP). “WLP, as its name implies, involves packaging the die while it is still on the wafer,” Lee said in a blog posting. “Because the sides are not coated with WLP, the resulting packaged chip is small in size (roughly the same size as the chip itself), an important consideration in footprint-sensitive devices such as our smartphones. Other advantages include streamlined manufacturing and the ability to test chip functionality before dicing.”
Fig. 2: Traditional vs. WLP packaging flow. Source: Lam Research
There are two main types of WLP packages—chip-scale packages (CSP) and fan-out. CSP is sometimes known as fan-in. “Packaging types are mainly driven by the end application,” said Pieter Vandewalle, senior director of marketing at KLA-Tencor. “Fan-in/fan-out WLP are mainly driven by mobile applications, which require high-performing, energy-efficient thin- and small-form-factor packages.”
Fan-in and fan-out are slightly different. One distinction is how the two package types incorporate the redistribution layers (RDLs). RDLs are the copper metal connection lines or traces that electrically connect one part of the package to another. RDLs are measured by line and space, which refer to the width of a metal trace and the spacing between traces. As stated above, low-density fan-out is greater than 8μm line/space.
In fan-in, the RDL traces are routed inwards. As a result, fan-in is limited and runs out of steam at about 200 I/Os and 0.6mm profiles.
But in fan-out, the RDL traces can be routed inward and outward, enabling thinner packages with more I/Os. “In fan-out, you expand the available area of the package,” said John Hunt, senior director of engineering at ASE.
Fig. 3: Fan-in to fan-out package. Source: ASE
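The fan-in ceiling described above is simple geometry: every ball must fit under the die itself, while fan-out can grow the footprint beyond the die. A sketch with an assumed 0.5mm ball pitch and illustrative die/package sizes:

```python
# Full-grid ball count that fits in a square footprint at a given pitch.
def max_ios(footprint_mm, ball_pitch_mm=0.5):
    """Maximum I/Os for a square footprint, assuming a full ball grid."""
    per_side = int(footprint_mm / ball_pitch_mm)
    return per_side * per_side

die = 5.0      # 5 mm x 5 mm die: fan-in is limited to this area
package = 8.0  # fan-out lets the package grow beyond the die edge
print(max_ios(die), "I/Os fan-in vs", max_ios(package), "I/Os fan-out")
```

Under these assumptions the fan-in package tops out around 100 I/Os, in line with the roughly 200-I/O ceiling cited above for finer pitches, while fan-out more than doubles the count simply by enlarging the footprint.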
In high-density fan-out, Apple is leading the charge. Traditionally, smartphones use package-on-package (PoP) technology for the application processor. In PoP, a memory package is on the top, while an application processor package is on the bottom.
Many smartphone OEMs are sticking with PoP, as the technology is mature and inexpensive. But PoP is running out of steam at thicknesses around 0.8mm. So Apple moved from PoP to fan-out for the application processor in its latest iPhones. Apple’s latest application processor is based on a 10nm process. The chip is housed in TSMC’s InFO fan-out technology, enabling a smaller and thinner package.
In another example, a customer could integrate different devices, such as digital, analog and RF, in a fan-out package. The digital die might be based on an advanced process, while analog and RF use mature technologies.
The dies with advanced and mature processes can be partitioned and then interconnected in the same package. “Fan-out allows you to combine multiple die, either homogeneously or heterogeneously, into an electrical interconnected package,” Hunt said. “We can not only put multiple dies in a package, but we can also put MEMS, filters, crystals and passives into it.”
Fan-out isn’t the only way to incorporate multiple dies into the package. Customers have several options, including 2.5D/3D, fan-out, system-in-package (SiP) and wirebond technology.
Today, 75% to 80% of all IC packages utilize an older interconnect scheme called wire bonding, according to TechSearch. For this, a system called a wire bonder stitches one chip to another chip or substrate using tiny wires.
At the high end, OSATs offer 2.5D/3D, a die-stacking technique that uses through-silicon vias (TSVs). Meanwhile, a SiP combines multiple dies and passives to create a standalone function.
What’s the best multi-die packaging technology? It depends on the application. “Whether it’s fan-out or SiP depends on the application, bandwidth requirements and available real estate. Both will offer significant performance improvements from wire-bonded devices,” said Cristina Chu, strategic business development director at TEL NEXX, part of TEL. “Time-to-market is a major advantage for SiP in complex FPGA devices. In some cases, these SiPs can even combine components from different process nodes in the same package.”
Traditional vs. new fan-out
Fan-out dates back to the mid-2000s, when Freescale and Infineon separately introduced the industry’s first fan-out package types.
In 2006, Freescale introduced a fan-out technology called Redistributed Chip Packaging (RCP). Then, in 2010, Freescale licensed RCP to Nepes. Nepes set up a 300mm line to make RCP technology in Korea. “Nepes is in production for radar and IoT modules,” TechSearch’s Vardaman said. (In 2015, NXP acquired Freescale.)
Originally, Infineon’s eWLB technology was designed for a baseband chip in cellular phones. Infineon still has a 200mm eWLB production line, which is used for radar modules, Vardaman said.
In 2007, Infineon also licensed the eWLB technology to ASE, and a year later, it struck a similar deal with STATS ChipPAC. Later, Infineon licensed eWLB to Nanium, now owned by Amkor. The licensing deals gave these OSATs the rights to make eWLB.
Initially, eWLB was a single-die package, but the technology eventually moved into more complex multi-die configurations with passives.
Fig. 4: eWLB product portfolio. Source: STATS ChipPAC
“In general, 2D eWLB devices are typical for low- to mid-density applications. The 2.5D and 3D eWLB devices are for high-end or high-performance applications that require more than 500 or 1,000 I/Os. However, there are cases where a 3D eWLB SiP has less than 500 I/Os because of the application requirements,” explained Seung Wook Yoon, director of group technology strategy for the JCET Group. JCET, China’s largest OSAT, acquired Singapore’s STATS ChipPAC in 2015.
2D eWLB has been shipping since 2009. “We do have a number of 2.5D and 3D eWLB devices that have been qualified by our customers, but they have not reached high-volume production levels yet,” Yoon said.
This package type is manufactured using a chip-first/face-down process flow. Chip-first/face-down is one of three variations of fan-out. The other two include chip-first/face-up and chip-last, sometimes known as RDL first.
In the chip-first/face-down flow, the chips are first processed on a wafer in the fab. Then, the chips are diced. Using a pick-and-place system, the dies are placed on a new wafer based on an epoxy molded compound. This is referred to as a reconstituted wafer.
A reconstituted wafer can be processed in either a 200mm or 300mm round format. The packaging process itself is conducted on this wafer. Then, the dies are cut, forming a chip housed in a fan-out package.
Chip-first has been in production for almost a decade. Chip-last, which has a different flow, has not been widely adopted yet.
Fig. 5: Chip first vs. chip last. Source: TechSearch International
There are some challenges. Reconstituted wafers are prone to warpage in the flow. And when the dies are embedded in a reconstituted wafer, they tend to move during the flow, causing an unwanted effect called die shift. This impacts the yield.
OSATs have overcome many of these challenges. Perhaps a bigger issue occurred in 2016 and 2017, when the two main eWLB packaging suppliers, STATS ChipPAC and Nanium, sold out their capacity for this package type due to demand from Qualcomm.
This, in turn, prompted customers to look for other types of packages, causing a pause in the eWLB market.
In response, STATS ChipPAC and ASE have expanded their eWLB capacities. Then, in 2017, Amkor bought Nanium, a move that provided some backing for the fan-out specialist.
Now eWLB has three suppliers with sufficient capacity, which should jumpstart the market. “There continues to be growing demand for FOWLP in low- to mid-density applications. We have fan-out customers in mobile, 5G or automotive applications that require less than 500 I/Os,” Yoon said. “There are a number of emerging market segments for FOWLP, such as 5G mmWave devices, MEMS, fingerprint sensors and automotive applications like advanced driver assistance systems (ADAS).”
In 2018, though, eWLB is expected to get some new competition. ASE, which supplies eWLB, also has been working on another low-density fan-out technology with Deca Technologies. Deca, a subsidiary of Cypress Semiconductor, is the original developer of this technology, dubbed the M-Series.
In addition, ASE is in the process of merging with Siliconware Precision Industries (SPIL), a Taiwan OSAT. SPIL is also working on a fan-out technology called TPI-FO.
Then, in the first half of 2018, ASE plans to move into production with the M-Series fan-out technology. Unlike eWLB, the M-Series is a chip-first/face-up technology.
Fig. 6: M-Series vs. eWLB. Source: ASE
The M-Series solves some of the issues with traditional fan-out. “[For traditional fan-out], you have to use a high-accuracy flip-chip bonder to place the die. It’s a relatively low-throughput process. It is around 8,000 dies an hour,” ASE’s Hunt said. “But one of the main problems is die shift. When you place the die after molding, it is not where you placed it. It moves.”
In response, Deca has developed a technology called adaptive patterning. First, the dies are placed on the wafer using a high-speed surface-mount system at a rate of 30,000-35,000 dies per hour. But the placement of each die is less accurate than a traditional system. So to compensate for the accuracy issues, Deca’s technology measures the actual position of every die on the wafer.
“We then recalculate the RDL pattern to accommodate every die shift in every wafer. That recalculation takes about 28 seconds. By the time the wafer gets to the imaging system, the pattern has been recalculated,” Hunt said.
The data then is fed into an imaging system. In eWLB, a traditional lithography system patterns a feature on a die. In contrast, Deca’s technology uses a proprietary laser direct imaging system, a form of direct-write lithography that writes features directly on the die without a mask.
In Deca’s technology, the laser direct imaging system aligns the entire RDL pattern to the measured die position, which supposedly solves the die shift problems.
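The per-die correction described above can be pictured as a coordinate transform: measure each die's translation and rotation after molding, then regenerate the RDL pattern in that measured frame. The sketch below is a deliberately simplified illustration of the idea, not Deca's actual algorithm; the function name and measurement values are invented.

```python
import math

def adapt_pattern(nominal_points, dx_um, dy_um, theta_rad):
    """Shift and rotate nominal RDL coordinates to a die's measured position."""
    cos_t, sin_t = math.cos(theta_rad), math.sin(theta_rad)
    return [(x * cos_t - y * sin_t + dx_um,
             x * sin_t + y * cos_t + dy_um) for x, y in nominal_points]

# Hypothetical measurement: die shifted 12µm in x, 5µm in y, rotated 0.1 mrad.
pads = [(0.0, 0.0), (100.0, 0.0), (0.0, 100.0)]  # nominal pad coordinates, µm
adapted = adapt_pattern(pads, 12.0, 5.0, 1e-4)
```

Because the pattern is recomputed per wafer rather than fixed in a mask, placement accuracy requirements on the pick-and-place step can be relaxed, which is what enables the higher-throughput surface-mount placement.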
Competitors are keeping an eye on the technology. “Deca M-Series solution has its own unique advantages, but has not yet been proven in high-volume production,” JCET’s Yoon said.
Here comes China
Meanwhile, there are other relative newcomers in fan-out, namely from China. For example, Jiangyin Changdian Advanced Packaging (JCAP) has a wafer-level package. JCAP is part of Jiangsu Changjiang Electronics Technology (JCET), which also acquired STATS ChipPAC.
Tianshui Huatian, another large OSAT in China, develops several package types, including fan-out. “The JCAP one is in production. Huatian is probably close to a production part. The versions in China are different from the original eWLB process,” TechSearch’s Vardaman said.
Tianshui Huatian’s fan-out technology is called eSiFO. In eSiFO, the wafer is etched, forming trenches. Dies are placed in the trenches using a pick-and-place system and then sealed.
Fig. 7: Process development of eSiFO. Source: Tianshui Huatian
“It uses a silicon carrier and does not require a mold compound. It is gaining traction primarily because there is considerably less stress and warpage. There is a minimal CTE mismatch between our silicon carrier wafer and the die embedded inside the dry etched trench. It is also a fundamentally simpler process,” said Allan Calamoneri, vice president of sales and marketing for Huatian Technology Group USA. “Currently, applications are in lower density, smaller packages and recently multi-die configurations. We are in qualification with some U.S. customers, but production volumes are currently only being shipped for China-based customers.”
What’s next? Today’s fan-out packages involve packaging a die in a round 200mm or 300mm wafer format. In R&D, some are working on panel-level fan-out, which involves packaging a die on a large square panel. The idea is to process more dies per unit area, which, in theory, reduces the cost by 20%.
Fig. 8: Comparison of number of die exposed on 300mm wafer to number of die on panel. Source: STATS ChipPAC, Rudolph
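The panel economics come down to substrate area and edge loss. A rough sketch follows; the 510mm x 515mm panel size and 10mm square die are assumed examples, and the gross-die formulas ignore saw streets and edge exclusion.

```python
import math

def gross_dies_on_wafer(diameter_mm, die_w_mm, die_h_mm):
    """Area-based gross die estimate with a standard edge-loss correction."""
    area = math.pi * (diameter_mm / 2) ** 2
    edge_loss = math.pi * diameter_mm / math.sqrt(2 * die_w_mm * die_h_mm)
    return int(area / (die_w_mm * die_h_mm) - edge_loss)

def gross_dies_on_panel(panel_w_mm, panel_h_mm, die_w_mm, die_h_mm):
    """A rectangular panel tiles dies with no round-edge waste."""
    return int(panel_w_mm // die_w_mm) * int(panel_h_mm // die_h_mm)

wafer = gross_dies_on_wafer(300, 10, 10)       # ~640 sites on a 300mm wafer
panel = gross_dies_on_panel(510, 515, 10, 10)  # 2,601 sites on the panel
print(wafer, panel)
```

In this example the panel holds roughly four times as many packages per substrate pass, which is where the projected cost reduction comes from, provided panel yield and handling cooperate.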
ASE-Deca, Nepes, Samsung and others are developing panel-level fan-out. Targeted for 2018 and 2019, panel-level fan-out packages will supposedly enable cheaper, low-density products.
But panel-level packaging is a difficult technology to master, and there are no standards in the arena. “The main parameter of choice is always cost,” Yole’s Azemar said. “The entrance of panel in the equation may change the landscape.”
So which low-density fan-out technologies will prevail over the long run? Some will continue to make inroads; others may take off or settle into niches. But it’s unclear whether there is room for everyone, despite the explosion of new applications in the market.
Toward High-End Fan-Outs
Denser interconnects, stacked die could rival 2.5D approaches.
These new fan-outs have denser interconnects than previous iterations, and in some cases they include multiple routing layers stacked on top of each other. TSMC has been offering this stacking capability for months with its Integrated Fan-Out (InFO) technology, and now some OSATs are stepping in with their own versions.
Until very recently, fan-outs were seen almost entirely as the low-cost advanced packaging option, basically shrinking components that would otherwise be found on a PCB and putting all of them inside a single package. There are a number of advantages to this approach. First, putting everything into a smaller package reduces material costs. Second, by shortening distances that signals have to travel compared with a larger, fully integrated SoC, performance goes up while the amount of power needed to drive those signals goes down. And third, by combining chips developed at different nodes into the same device, chipmakers can optimize floor-planning to minimize physical effects such as cross-talk, power noise and electromigration.
This doesn’t mean work on low-end fan-outs is decreasing. In fact, the opposite is true. New EDA tools and flows are being developed and introduced, and so are panel-level packaging approaches for devices with sufficient volume. But high-end fan-outs push this packaging approach in a new direction, where the emphasis is on reducing lines and spaces for higher density, as well as significantly improving performance. While most fan-outs have lines and spaces above 8μm, those numbers could be as low as 2μm for these new devices. (Line and space refer to the width of a metal trace and the gap between adjacent traces.)
Fig. 1: Low-density versus high-density fan-outs. Source: ASE/SEMI Industry Strategy Symposium
“Everyone was thinking of fan-out for low-end solutions, but we see it as a high-end solution, as well,” said John Hunt, senior director of engineering at Advanced Semiconductor Engineering (ASE). “You can do a fan-out with a chip up and another chip directly on top of that. That’s very good for photonics because they want it within 50 microns. You also can use the fan-out as a die substrate alternative, so you flip the chip around onto the package. That allows you to transfer heat down as well as up.”
Hunt said that fan-out chip-on-substrate, which basically combines multiple routing layers on a substrate, also can replace 2.5D for some applications. “One of the great things about fan-out approaches is there is not as much stress on the individual dies. This isn’t going to replace the high-density silicon interposer, but it will come in at a lower cost and it allows companies to re-use silicon.”
Fig. 2: Multi-layer routing in a fan-out chip-on-substrate. Source: ASE/ISS
High-end fan-outs also add a novel approach to dealing with thermal issues that have plagued vertical stacking. With monolithic 3D stacking, if one of the logic layers is packaged between two other layers, it’s difficult to get the heat out of the middle logic layer without some exotic cooling technology such as microfluidics. That typically results in the partial or total shutdown of the sandwiched logic, and limits the benefits of this kind of packaging. But with double-sided cooling, particularly with fan-out on substrate, thermal issues can be managed more easily.
“InFO will continue in its current form as an integrated memory pre-stacked package solution for the Apple processor,” said Ron Huemoeller, corporate vice president for R&D at Amkor. “Beyond that, it remains to be seen how far the current format will extend. Fan-out on substrate will be the new hot button for the industry, introduced in varying forms from low density to high density fan out on substrate.”
New markets
Fan-outs went mainstream in 2016, when Apple adopted TSMC’s InFO technology for the application processor in the iPhone 7. Since then, fan-outs have been used in a variety of applications ranging from high-volume consumer devices such as phones to automotive applications, where time to market, flexibility and performance are critical.
Automotive is a particularly attractive opportunity for advanced packaging because there is so much uncertainty about how chips such as sensors and sensor hubs will ultimately look and what protocols they will have to support. Technology is still evolving for these applications, which means that what gets developed today may have to be modified much more quickly than in the past. Automotive design cycles for electronics used to be five to seven years; they are now on the same rollout schedule as consumer electronics.
Advanced packaging can help in this regard. It makes it simpler to augment existing designs with some new components, including different memories or memory configurations, as well as additional features that may not be available when a device is originally designed. In these cases, advanced packaging essentially creates a platform on which new functionality can be added without having to re-do the entire design. At least some of those now involve chips that are stacked vertically as well as horizontally.
Fig. 3: Fan-out revenue forecast by market type. Source: Yole Développement.
“We’ve seen this with MEMS devices, where ASICs and MEMS sensors are stacked on top of each other,” said Ram Trichur, director of business development for Brewer Science‘s Advanced Packaging Unit. “It’s also being looked at for high-frequency applications, greater than 24 GHz, where the antenna link needs to be the size of the package. With 5G, there are severe losses because of the frequency, so you need to decrease the distance between the different functional parts. With 4G LTE, the antenna link was a flexible cable off the chip. With 5G and millimeter wave, the antenna length is a few millimeters, so it needs to be integrated into the package.”
Not so simple
Fan-outs have been under development for more than a decade. So far, there is no single approach that works for everything. Even at the high end of this market there are a number of different varieties, including fan-out on substrate and package-on-package fan-out, as well as chip-first or chip-last approaches.
“Fan-out is a very good technology for low-cost and mid-cost applications,” said Andy Heinig, a research engineer at Fraunhofer EAS. “But we also see technology limits for fan-out. You can put two to three routing layers on fan-outs, but at this point 95% have been done with only one routing layer. With two layers, the yield decreases. And in the end, if you don’t have 98% to 99% yield, the design doesn’t go into production.”
Heinig noted that one approach is to build the fan-out routing layers first, and then place the chip and mold it. This so-called chip-last approach is more flexible and simpler than starting with the chip first. But if more routing layers are added, the fan-out cost increases and the yield goes down, at least initially. And by the time all the factors are taken into account, the cost may be comparable to that of an interposer.
“For high-end applications, fan-out still cannot reach the requirements for HBM,” he said. “Bridges are another alternative, but they have some limitations, too. This involves a small piece of silicon which is put in between the processor and memory. If you have 1 HBM stack and one processor, you can align the processor and the memory with a bridge. But if you have four HBM stacks, there’s a problem aligning that with a bridge. So you can cut the cost of silicon, but there are a lot of steps to align the bridge. That makes it more expensive to develop, and in the end it may be more expensive than 2.5D.”
So at least for the time being, both 2.5D and high-end fan-outs will continue to overlap.
“2.5D will continue its slow growth in the HPC and automotive sector for specific applications,” said Amkor’s Huemoeller. “Graphics is a main driver still, but multi-logic configurations will also require 2.5D packaged structures to address the AI market, as well. Multi-die products will drive the packaging industry going forward with new product growth. Heterogeneous integration will fully deploy in multiple formats over the next couple of years, including SiP, sub-system modules, 2.5D and various silicon-to-silicon bridge concepts. The incorporation of mixed technologies in modular form will drive much of this.”
Industry buy-in
No matter how difficult advanced packaging is, it’s still simpler and less expensive than putting everything onto an SoC developed at the most advanced nodes. While short-channel effects, which are responsible for leakage current, were reduced at 16/14nm with the introduction of finFETs, leakage is increasing again at 10/7nm. Gate-all-around FETs have been proposed beyond that, but the cost is expected to increase significantly as new transistor types are added into the mix, along with EUV lithography.
“There is still a push to continue integrating at smaller nodes,” said Navraj Nandra, senior director of marketing for the DesignWare Analog and MSIP Solutions Group at Synopsys. “At the same time, people are coming up with packaging solutions, either side-by-side or stacked die using TSVs or an interposer. That’s becoming real. Packaging costs are low enough that it’s worthwhile developing products without crazy high expenses. Part of what’s driving that is deep neural networks, convolutional neural networks and machine learning, particularly on the inferencing side.”
This doesn’t displace scaling. But it does provide an alternative, as well as a possible way of extending scaling. And while this isn’t necessarily the cheapest solution, it can certainly be less costly than trying to develop everything at the latest node, such as analog sensors or PHYs.
“Advanced packaging is more for performance reasons or power reduction reasons and form factor more than cost,” said Rick Gottscho, chief technology officer at Lam Research. “It doesn’t displace scaling and trying to get higher density at the chip level. It’s complementary, and both will keep going. It certainly doesn’t replace the shrink approach to scaling.”
Mark Dougherty, vice president of advanced module engineering at GlobalFoundries, agrees. “It comes back to working things in parallel,” Dougherty said. “If you look at through-silicon vias, and 2.5D and 3D, it becomes a very application-specific question. It won’t obviate the need to scale at the die level, but depending on the solutions that the end customer is looking for, it opens up a lot more possibilities there. There is certainly the case of marrying logic with DRAM, or one technology generation with another. All of those things are happening. But it will be driven more by the application space.”
Conclusion
The number of packaging options available continues to grow. While that adds a fair amount of confusion, it’s also a sign that device scaling alone is getting too expensive and complex to continue every couple of years. Instead of moving to half-nodes, those continuing on the road map are jumping ahead to the next full node, and they are looking to extend that further with architectural options beyond a single planar die.
In that context, higher-performance, denser fan-outs are yet one more option that chipmakers increasingly are considering and adopting. Whether this is the beginning of more 3D integration, or simply a new alternative to platforms isn’t entirely clear yet. But packaging is becoming more complicated and much more customizable, and that trend is likely to continue for the foreseeable future.
What’s Next In Advanced Packaging
Wave of new options under development as scaling runs out of steam.
At a recent event, ASE, Leti/STMicroelectronics, TSMC and others described some of their new and advanced IC packaging technologies, which involve various product categories, such as 2.5D, 3D and fan-out. Some new packaging technologies are trickling out in the market, while others are still in R&D. Some will never take off due to technical and cost reasons.
Some manufacturers are expanding their packaging efforts in other ways. For example, Samsung’s semiconductor division recently acquired the panel-level fan-out unit from another affiliate, Samsung Electro-Mechanics (SEMCO). With that move, Samsung’s semi unit will expand its efforts in fan-out, propelling it into the panel-level fan-out market.
Across the industry, packaging is playing a bigger role and becoming a more viable option to develop new system-level chip designs. As a result, chipmakers and packaging houses are expanding their efforts.
Traditionally, the IC industry relied on traditional chip scaling and innovative architectures for new devices. In chip scaling, the idea is to pack more transistors on a monolithic die or system-on-a-chip (SoC) at each process node, enabling faster chips with a lower cost per transistor. But traditional chip scaling is becoming more difficult and expensive at each node.
While scaling remains an option for new designs, the industry is searching for alternatives. Another way to get the benefits of scaling is by putting multiple and advanced chips in an advanced package, also known as heterogeneous integration.
“In the old days, we tried to squeeze everything into one monolithic chip. But right now, it’s getting so expensive and the chip is getting so big,” said Calvin Cheung, vice president of engineering at ASE, in an interview at the recent IEEE Electronic Components and Technology Conference (ECTC). “Heterogeneous integration tackles the age-old issue by combining chips with different process nodes and technologies. The die-to-die interconnect distances are so close that it mimics the functional block interconnect distances inside an SoC.”
There are several ways to implement a chip design using heterogeneous integration, but this concept isn’t exactly new. Advanced packaging has been used in a limited form for decades in niche applications. The issue is cost, as the technology remains too expensive for many applications.
At ECTC, several companies described new packaging technologies, which aim to address the cost and other challenges in the arena. Among them:
- ASE described more details about a high-density fan-out technology that supports high bandwidth memory (HBM).
- STMicroelectronics and Leti jointly described a 3D packaging technology using chiplets. In chiplets, the idea is that you have a menu of modular chips, or chiplets, in a library. Then, you integrate them in a package using a die-to-die interconnect scheme.
- TSMC provided more details about its next-generation fan-out and 3D technologies.
Wirebond and flip-chip
At one time, IC packaging took a backseat in the semiconductor industry. The package was simply there to house a chip at the lowest possible cost. That’s no longer the case. Advancements in packaging enable chips with smaller form factors and pave the way toward new and advanced forms of heterogeneous integration.
Today, a multitude of IC package types are targeted for different applications. “I like to break it down into mobile and high performance,” said Jan Vardaman, president of TechSearch International. “Mobile has a different set of packages. Mobile has to hit that steep ramp and has to be extremely cost-sensitive. Thin is also important, so you have more room for the battery.”
High-performance requires different packages with more I/Os. For both markets, there is no one package type that can meet all requirements. “Different people have different approaches,” Vardaman said. “There are many ways to get to the top of the mountain.”
Another way to segment the packaging market is by interconnect type, which includes wirebond, flip-chip, wafer-level packaging and through-silicon vias (TSVs).
Some 75% to 80% of today’s packages are based on wire bonding, which is an older technology, according to TechSearch. Developed in the 1950s, a wire bonder stitches one chip to another chip or substrate using tiny wires. Wire bonding is used for low-cost legacy packages, mid-range packages and memory die stacking.
With wire bonding, the industry can stack and stitch together 16 flash memory dies, with 32-die stacks in the works, according to Kulicke & Soffa (K&S). “To keep up with the lower profiles and high-performance demands of modern memory applications, higher I/O counts, more die stacks and the use of longer overhang structures are inevitable,” said Basil Milton, a senior staff engineer at K&S. “These requirements generate new challenges for wirebond process engineers.”
To extend the capabilities of wirebond, the industry requires systems with finer looping and stitch-bond formation. Today, the mainstream loop heights for wirebond are 300 to 400µm (15 to 20X the wire diameter). At ECTC, K&S presented a paper in which it demonstrated 2X-wire-diameter loop heights of 35µm.
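Since the loop-height figures above are expressed as multiples of the wire diameter, the arithmetic is easy to check. A small sketch; the 20µm and 17.5µm wire diameters are inferred from the stated numbers, not given explicitly.

```python
def loop_height_um(wire_diameter_um, multiple):
    """Loop height expressed as a multiple of the bond-wire diameter."""
    return wire_diameter_um * multiple

# Mainstream looping: 15-20X of an assumed 20µm wire spans the 300-400µm range.
print(loop_height_um(20, 15), loop_height_um(20, 20))  # 300 400
# The demonstrated 2X loop at 35µm implies a 17.5µm wire.
print(loop_height_um(17.5, 2))  # 35.0
```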
For processors and other chips, though, wirebond doesn’t provide enough I/Os. To increase that number, flip-chip is one step above wirebond. Fan-out is in the middle in terms of I/Os, while 2.5D/3D is at the high end.
Fig. 1: Package technology vs. application. Source: ASE
Commercialized in the 1960s, flip-chip is still widely used today. In flip-chip, a sea of tiny copper bumps is formed on top of a chip. The device is flipped and mounted on a separate die or board. The bumps land on copper pads, forming an electrical connection.
Flip-chip is used for many package types. At ECTC, for example, JCET described details about its ongoing efforts to develop advanced packages using thin organic substrates and flip-chip.
Still in the works, JCET’s technology enables single- and multi-die packages, including 2.5D-like configurations without an interposer. “The salient feature of an ultra-thin substrate is its thickness, which can be an order of magnitude thinner than normal flip-chip laminate or build-up substrates,” said Nokibul Islam, director of group technology strategy at JCET.
Fan-out, 2.5D/3D and chiplets
After flip-chip, fan-out is next in the I/O hierarchy. Fan-out recently gained attention when Apple began using TSMC’s InFO fan-out package for its iPhones. This package integrates Apple’s application processor and third-party memory in the same unit, enabling more I/Os than other package types.
Amkor, ASE, JCET and others also provide fan-out. Fan-out is used to package chips for automotive, mobile devices and other applications. Fan-out doesn’t require an interposer, making it cheaper than 2.5D/3D.
The technology is classified as a wafer-level package, where the dies are packaged while on a wafer. “In FOWLP, chips are embedded inside epoxy molding compound, and then high-density redistribution layers (RDLs) and solder balls are fabricated on the wafer surface to produce a reconstituted wafer,” said Kim Yess, technology director for the Wafer Level Packaging Materials business unit at Brewer Science. “As the industry pushes fan-out to the limit, stress plays a big factor into it. You have stress and warpage.”
Fan-out is split into two segments, low-density and high-density. Low-density fan-out (less than 500 I/Os) is used for power management ICs, codecs and other devices, while high-density (more than 500 I/Os) is targeted for servers and smartphones.
Going forward, fan-out is extending its reach in both markets. “You are seeing a push to extend fan-out,” said Kim Arnold, executive director for wafer level packaging materials at Brewer Science. “The industry is finding ways for fan-out to offer the performance needed. The industry knows how to run the process. They also know the cost structure.”
At the high end, for example, ASE and TSMC are working on fan-out packages that support HBM, which addresses a big challenge in today’s systems—the memory wall.
In systems, data moves between the processor and memory. But this exchange incurs latency and consumes power, a bottleneck sometimes called the memory wall. DRAM, the main memory in systems, is one of the main culprits. Over the last decade, DRAM data rates have fallen behind memory bandwidth requirements.
One solution is high-bandwidth memory (HBM). Targeted for high-end systems, HBM stacks DRAM dies on top of each other and connects them with TSVs, enabling more I/Os and bandwidth.
Typically, HBM is integrated in a 2.5D package. In 2.5D, dies are stacked or placed side-by-side on top of an interposer, which incorporates TSVs. The interposer acts as the bridge between the chips and a board.
Generally, though, 2.5D with HBM is relegated to high-end applications. The big issue with 2.5D is cost. It’s too expensive for most applications.
To help lower the cost, the industry is working on fan-out packages with HBM. In a paper at ECTC, for example, ASE described a fan-out technology that integrates an ASIC with two HBM2 dies. For this, ASE is using a hybrid fan-out package called Fan Out Chip on Substrate (FoCoS).
ASE’s current FoCoS package is based on a process called chip-first. In contrast, the HBM version of FoCoS is a chip-last process, enabling a 30mm x 30mm package size with 2μm line/space and a 10μm via size. It has four RDL layers with stacked vias.
Fan-out with HBM has several advantages over 2.5D. “The electrical performance is better than a 2.5D interposer solution,” said John Hunt, senior director of engineering at ASE. “You have less insertion loss, better impedance control and lower warpage than 2.5D. It’s a lower cost solution with better electrical performance. The difference is that 2.5D can do finer lines and spaces. But we can route the HBM2 dies with our current 2μm line and space.”
Fig. 2: Critical signals in an HPC device. Source: ASE
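Those line/space numbers translate directly into routing density. A rough escape-routing sketch; it is simplified, ignoring power/ground pins and via keep-outs.

```python
def traces_per_mm(line_um, space_um):
    """Routing tracks per mm of edge on one layer: 1mm divided by the pitch."""
    return 1000.0 / (line_um + space_um)

def escape_capacity(edge_mm, layers, line_um=2.0, space_um=2.0):
    """Signal tracks that can escape along a given die-edge length."""
    return int(edge_mm * traces_per_mm(line_um, space_um) * layers)

print(traces_per_mm(2, 2))       # 250.0 tracks per mm per layer
# HBM2's 1,024-bit data bus: two RDL layers along ~3mm of edge would cover it.
print(escape_capacity(3.0, 2))   # 1500
```

This is why 2µm line/space with four RDL layers is enough to route two HBM2 stacks without resorting to the sub-micron geometries of a silicon interposer.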
Fan-out with HBM could take share away from 2.5D, but it won’t completely displace it. “When you look at what’s beyond fan-out, you see 2.5D or 3D. You find instances where people require an interposer. They need the performance. The same thing is true for 3D. You have places where 3D performance is required,” Brewer’s Arnold said.
Others also are developing new versions of fan-out. At ECTC, TSMC disclosed details about its next-generation fan-out technology, called 3D-MiM (MUST-in-MUST).
TSMC’s current fan-out technology is based on a package-on-package (PoP) scheme. In PoP, two dies (or more) are housed in the same package and connected using various interconnect technologies.
In contrast, 3D-MiM resembles an embedded die technology. “3D-MiM technology utilizes a more simplified architecture,” said An-Jhih Su, a researcher from TSMC. “First, there are no wafer bumps and flip-chip bonding during the 3D-MiM fan-out integration process, which reduces the assembly complexity and avoids the chip-package-interaction reliability challenges in a flip chip assembly. Second, a much thinner package profile is achieved for improved form factor, thermal, and electrical performance.”
Still in R&D, 3D-MiM consists of a substrate with separate tiers. A die or die stack can be embedded in each tier and connected to the board via a link.
In a three-tier configuration, for example, the top consists of an SoC. The middle tier consists of 8 memory dies, which are embedded and staggered in the substrate. The bottom tier also has 8 memory dies. All told, the package consists of one SoC and 16 memory dies in a footprint of 15mm x 15mm.
Embedded die packaging isn’t new. Generally, the technology presents various manufacturing and cost challenges. Bonding the tiers and aligning the dies are among the challenges.
Fan-out, meanwhile, is moving in other directions as well. For example, after years of R&D, panel-level fan-out technology is finally beginning to ramp up in the market, at least in limited volumes for a few vendors.
There are various challenges here, including the lack of panel-size standards and an established ecosystem. “There are a lot of new materials and equipment with a focus on panel-level processing entering the market,” said Tanja Braun, deputy group manager at Fraunhofer.
Chiplet mania
After years of modest success in developing 3D-IC packages, the industry is launching new versions of the technology. In 3D-IC, the idea is to stack logic dies on each other or memory on logic. The dies are connected using an active interposer.
Still in the early stages of development, chiplets are another form of 3D-ICs. There are various ways to integrate chiplets. For example, instead of a big SoC in a package, you break the device up into smaller dies and connect them.
“Chiplets enable heterogeneous integration of CMOS with non-CMOS devices,” said Ajit Paranjpe, CTO at Veeco. “For example, at ECTC, a few papers highlighted the benefits of moving the voltage regulators off the main CMOS die, especially for server chips that have a sea of cores and require several hundred watts of power. Moving the voltage regulators off-chip can reduce the die size of the expensive leading-edge CMOS (i.e. 10nm and 7nm) by 20% to 30%.”
The idea of putting together different modules like LEGOs has been talked about for years. So far, only Marvell has used this concept commercially, and that was exclusively for its own chips based on what it calls a modular chip (MoChi) architecture.
Now, government agencies, industry groups and companies are jumping on the chiplet bandwagon. The latest is STMicroelectronics and Leti, which jointly presented a paper at ECTC on a 3D system architecture using chiplets.
STMicroelectronics and Leti developed six multiprocessor chiplets based on a 28nm FD-SOI technology. The devices were placed on a 65nm active interposer and connected using copper pillars.
“These copper pillars offer a large chiplet-to-chiplet communication bandwidth through the interposer, with a reduced impact on the chiplet floorplan,” said Perceval Coudrain, a researcher at CEA-Leti. “This object integrates a total of 96 cores, offering a low-power computing fabric with a cache-coherent architecture and wide voltage ranges.”
Meanwhile, TSMC described its latest efforts in the area, which it calls System on Integrated Chips (SoIC) for 3D heterogeneous integration.
TSMC demonstrated the SoIC concept for a fan-out package. In TSMC’s InFO package, a memory die is on top, while a single SoC is situated on the bottom.
In TSMC’s SoIC technology, there could be three smaller SoCs or chiplets instead of one big SoC in the package. One SoC is on top and two are on the bottom, which are joined using a bonding process.
The idea is to break the big SoC into smaller chiplets, which presumably have a lower cost and better yield than a monolithic die. “Compared to the typical 3D-IC PoP, the SoIC-embedded InFO_PoP offers higher interconnect I/O bonding density, lower power consumption and a thinner package profile,” said TSMC’s F.C. Chen in a paper.
Needless to say, the industry faces some challenges with chiplets. “Given all the advantages, we would expect chiplet adoption to happen, but the primary question is at what pace? And this will be driven primarily by cost. So it will most likely be implemented primarily in high-end applications initially, and adopted more generally as costs are driven down,” said Warren Flack, vice president of lithography applications at Veeco’s Ultratech Business unit.
Integrating chiplets into a package is easier said than done. “In general, the more individual chiplets that are used to complete a single package, the greater the lithography challenges. This involves interconnect overlay, TSV processes for stacked chip interconnect, and productivity (system throughput) to provide the needed technical solution at affordable costs,” Flack said.
There are other challenges. “Smaller metal features for 3D stacking also drive the detection of smaller defects and control of tighter dimensions,” said Stephen Hiebert, senior director of marketing at KLA. “For heterogeneous integration, the quality requirements for each device being integrated are increasing rapidly. More demanding requirements for accurate die screening are emerging as the number and the value of ICs integrated into a system-in-package increase. For wafer-level and die-level inspections, small or subtle defects that may have been previously acceptable are becoming unacceptable when these die get incorporated in a complex, multi-device package. One bad die for system-in-package can kill the entire heterogeneous package.”
Conclusion
For the next wave of devices, IDMs and design houses have several options. Scaling remains on the list, but it’s no longer the only option.
“From an economic standpoint, how many companies can afford silicon at the bleeding edge nowadays? That number is shrinking. For the very high-performance markets, there is always going to be that need,” said Walter Ng, vice president of business management at UMC. “But everyone else has slowed down quite a bit. You can look at the need for advanced packaging in multiple places.”
The Race To Next-Gen 2.5D/3D Packages
New approaches aim to drive down cost, boost benefits of heterogeneous integration.
Intel, TSMC and others are exploring or developing future packages based on one emerging interconnect scheme, called copper-to-copper hybrid bonding. This technology provides a way to stack advanced dies using copper connections at the chip level, enabling new types of 3D-ICs, chiplets and memory cubes. Still in R&D, copper hybrid bonding and competitive schemes are promising, but they also present some technical and cost challenges.
Many companies and research organizations are working on it, and for good reason. In some cases, traditional system-on-a-chip (SoC) designs are becoming too unwieldy and expensive at advanced nodes. So the industry is scrambling to develop new device alternatives using a multitude of different approaches.
Today, meanwhile, the industry is developing or shipping 2.5D/3D and other advanced packages using existing interconnect schemes. Interconnects are used to connect a die to another die or to a separate interposer as in 2.5D. In many of these packages, the dies are stacked and connected using an interconnect technology called copper microbumps and pillars. Bumps and pillars provide small, fast electrical connections between different devices.
The most advanced microbumps and pillars are tiny structures with a 40μm pitch. Pitch is the center-to-center distance between adjacent bumps; a 40μm pitch, for example, combines a 25μm copper pillar with 15μm of spacing. Going forward, the industry can possibly scale the bump pitch to at or near 20μm. Beyond that, it will need a new interconnect solution.
There are several options on the table, but copper-to-copper hybrid bonding is the current favorite. The idea is to stack and connect dies directly using a copper-to-copper diffusion bonding technique, eliminating the need for bumps and pillars.
“A number of organizations and companies are planning to adopt direct bond interconnect or hybrid bonding as they get to 20μm to 10μm and below pitches,” said Jan Vardaman, president of TechSearch International. “It will probably be necessary as we go to a 10μm pitch and below.”
Copper hybrid bonding isn’t new. For years, the technology has been used for advanced CMOS image sensors. But migrating the technology for advanced chip stacking, such as memory on memory and memory on logic, is challenging and involves complex fab-level processes. And the timing of the technology remains a moving target, although the first products may appear by 2021 or sooner.
Nonetheless, there are several developments in the arena. Among them:
- Imec, Intel, Leti, Samsung, TSMC and others are working on copper hybrid bonding for future advanced packages.
- Xperi has developed a new version of its hybrid bonding technology. The company is licensing the technology to others.
- In R&D, the industry is working on hybrid bonding to enable new 3D DRAM types, namely 3DS (three-dimensional stacked) DRAMs. Some are developing new high-bandwidth memory (HBM) cubes.
Figure 1: 3D integration with hybrid bonding. Source: Xperi
Interconnect challenges
Today’s chips are housed in a plethora of IC package types. One way to segment the packaging market is by interconnect type, which includes wirebond, flip-chip, wafer-level packaging (WLP) and through-silicon vias (TSVs). These aren’t package types per se, but they designate how chips are connected to each other or onto the board.
Some 75% to 80% of today’s packages are based on wire bonding, according to TechSearch. A wire bonder stitches one chip to another chip or substrate using tiny wires. Wire bonding is used for many package types.
For many chips, wirebond doesn’t provide enough I/Os. To increase the I/Os, the industry uses different interconnect technologies, such as flip-chip, WLP and TSVs.
“All these technologies have their own unique sweet spot for different applications,” said Calvin Cheung, vice president of engineering at ASE. “If you look at the roadmaps, you can divide it into flip-chip, fan-out and 2.5D in density and package size. Density refers to the number of I/Os. Right now, 2.5D can handle the most I/Os. 2.5D can handle I/Os and power grounds in excess of a few hundred thousand bumps. For fan-out, it’s a medium-size density and package size. Then, for BGA, you are talking about a few hundred to a thousand I/Os.”
In flip-chip, a sea of larger solder bumps, or tiny copper bumps and pillars, are formed on top of a chip. The device is flipped and mounted on a separate die or board. The bumps land on copper pads, forming an electrical connection. Generally, the two structures are bonded using a system called a wafer bonder. The less aggressive pitches use a flip-chip bonder.
Fan-out is classified as WLP, where dies are packaged while on a wafer. Meanwhile, in 2.5D, dies are stacked or placed side-by-side on top of an interposer, which incorporates TSVs. The interposer acts as the bridge between the chips and a board.
Advanced packaging, such as 2.5D and fan-out, has been around for years. But it is mainly used for higher-end applications. It’s too expensive for many products.
Going forward, though, advanced packaging is expected to become a more viable option to develop new system-level chip designs. The traditional approach of scaling chips, to pack in more transistors, is becoming more difficult and expensive at each new node. So while scaling remains an option for new designs, the industry is searching for alternatives.
Another way to get the benefits of scaling is by putting multiple and complex chips in an advanced package, also known as heterogeneous integration. In one example of heterogeneous integration, chipmakers can incorporate an FPGA and an HBM in a 2.5D package. Targeted for high-end systems, HBM stacks DRAM dies on top of each other and connects them with TSVs, enabling more I/Os and bandwidth. For example, Samsung’s HBM2 technology consists of eight 8Gbit DRAM dies, which are stacked and connected using 5,000 TSVs.
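As a quick sanity check, the capacity of the cube described above follows directly from the cited die count and die density (a minimal sketch, using only the figures given in this article):

```python
# Samsung's HBM2 cube as described: eight 8-Gbit DRAM dies,
# stacked and connected with roughly 5,000 TSVs.
dies = 8
gbits_per_die = 8
total_gbits = dies * gbits_per_die   # 64 Gbit of DRAM in the stack
total_gbytes = total_gbits / 8       # 8.0 GB per cube
print(total_gbytes)
```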
In HBM, each DRAM die has microbumps on both sides, enabling a connection to another die. “We are talking about a 5µm TSV through 50µm thick DRAM dies, and microbumps with a 25µm diameter with a 55µm pitch,” said Thomas Uhrmann, director of business development at EV Group.
Getting back to the 2.5D example, the HBM and the FPGA are then stacked, connected and bonded to the interposer using tiny copper microbumps with 55µm pitches.
The bonding process isn’t done with a flip-chip bonder. For finer pitch requirements, the industry generally uses thermal compression bonding (TCB). A TCB bonder picks up a die and aligns the bumps to those from another die. It bonds the bumps using force and heat.
“TCB defines the formation of fine-pitch interconnects using force during soldering rather than reflow soldering. The lower the interconnect pitch gets, the higher the requirements are for flatness and deformation during bonding,” Uhrmann said.
Nonetheless, there are several nagging issues with today’s 2.5D and 3D technologies. Cost is one issue. In addition, TCB is a slow process with low throughputs.
“Many customers are going in the third dimension by stacking chips. Every time they are stacking chips, they have thousands of bumps or pillars. They have to glue those things to each other as they keep stacking up layers. All of the bumps or pillars need to be at the same height. Otherwise, the bumps don’t make contacts. Then, you basically could lose your entire package,” said Subodh Kulkarni, president and chief executive at CyberOptics.
Going forward, leading-edge chip customers are migrating to the next nodes at 10nm/7nm and beyond. This has several implications for the package. “You need more I/Os. You are able to integrate more functional blocks into the die. So you need more I/Os to route the functions,” ASE’s Cheung said.
To put more I/Os in the same area, you need to shrink the bump pitch beyond today’s 40µm spec. That requires smaller bumps and pillars. Using today’s technologies, the industry sees a path to scale the bump pitches around 20µm. This still remains a moving target, however.
Today, there are some examples in the market. Intel, for one, recently unveiled a new 3D CPU platform, code-named “Lakefield.” This combines a 10nm processor core with four of Intel’s 22nm processor cores in a package. The 3D technology, called Foveros, uses existing microbumps with a 36µm pitch, according to WikiChip.
Over time, many will stay at the current bump pitches. Some will push them to the limits. Beyond a certain point, though, there are some challenges with bumps and pillars.
In the copper pillar process flow, the dimensions of the pillars are first defined. A seed layer is then deposited on the substrate, and a resist is applied over it and patterned. Copper is plated in the defined areas, followed by a solder cap.
At 20µm pitches, the process becomes difficult. A 20µm pitch involves an 11µm to 12µm pillar with 8µm to 9µm of spacing, and at those dimensions the aspect ratios of the pillars become difficult to manage and control.
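To see why the aspect ratio becomes a problem, assume the pillar height stays at roughly 25µm while the diameter shrinks with the pitch (the height is a hypothetical, typical figure used for illustration; the article gives only pitch and diameter numbers):

```python
# Aspect ratio of a plated copper pillar: height / diameter.
# The ~25 µm height is an assumed value for illustration only.
def aspect_ratio(height_um, diameter_um):
    return height_um / diameter_um

ar_40um_pitch = aspect_ratio(25, 25)   # 25 µm pillar at a 40 µm pitch -> 1.0
ar_20um_pitch = aspect_ratio(25, 11)   # 11 µm pillar at a 20 µm pitch -> ~2.3
print(round(ar_40um_pitch, 1), round(ar_20um_pitch, 1))
```

Holding the height constant, halving the pitch more than doubles the aspect ratio, which is what makes the plating and resist steps harder to control.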
“The minimum microbump pitch can go below 20µm from a lithography standpoint. The minimum microbump CD is determined by the photoresist chemistry, the microbump height, and the numerical aperture of the imaging lens. The CD challenge for microbumps comes from other process steps such as the undercut of the copper seed layer during wet etch,” said Shankar Muthukrishnan, senior director of lithography marketing at Veeco.
What is hybrid bonding?
At around a 20µm pitch, then, the industry requires a new interconnect solution. The leading contender is copper-to-copper bonding. The idea is to stack and connect devices directly using fine-pitch copper connections, not microbumps and pillars.
There are several approaches here, such as copper-to-copper thermal compression bonding and copper-to-copper hybrid bonding.
Kulicke & Soffa and UCLA recently demonstrated a copper-to-copper TCB technology, enabling fine-pitch copper interconnects at ≤10μm. The researchers also developed an in-situ treatment that reduces copper oxidation.
In copper TCB, the idea is to form copper pillars on the surface of two wafers. The pads are then bonded using TCB. Still in R&D, copper TCB faces some reliability and cost challenges.
Copper-to-copper hybrid bonding, meanwhile, has the most momentum. With the technology, Intel, TSMC and others are exploring or devising a new class of fine-pitch 2.5D and 3D-ICs. TSMC recently provided more details about its next-generation 3D technologies, called System on Integrated Chips (SoIC) for 3D heterogeneous integration. Still in R&D, SoIC will use fine bump pitches with copper hybrid bonding.
TSMC and others are developing their own hybrid bonding technology. One company, Xperi, develops and licenses its own technology to others.
In hybrid bonding, you bond two structures together using different materials with a wafer bonder. Some are using standard materials, while others are exploring more exotic types like nano-pastes and nano-particles.
“Hybrid bonding is all about making good electrical conductivity between two die, and there are many methods to consider,” said Johanna Swan, director of packaging research and fellow at Intel. “We are looking at a slew of different materials based on what we think is best for our products.”
Hybrid bonding is different from a technology called “direct bonding,” which is used for today’s CMOS image sensors, MEMS and RF switches.
In direct bonding, a wafer is processed in a fab. Dielectric materials are exposed on one side of the wafer. Another wafer is processed the same way. Then, the two wafers undergo a dielectric-to-dielectric bonding process using a wafer bonder.
In hybrid bonding, the process is somewhat similar. The difference is that the two wafers are bonded together using a combination of two technologies: a dielectric-to-dielectric bond, which takes place at room temperature, and a metal-to-metal bond, which forms during a subsequent anneal. In this case, the metals involve a copper-to-copper bond.
Hybrid bonding can be used to bond two wafers together (wafer-to-wafer bonding) and a chip to a wafer (die-to-wafer bonding).
The hybrid bonding process is conducted in a front-end manufacturing flow in a fab, not at an OSAT. “We’re riding the coattails of front-end processes,” said Craig Mitchell, president of Invensas, which is part of Xperi. “We have to optimize the parameters for our application, but we are using existing equipment.”
Xperi refers to its hybrid bonding process as Direct Bond Interconnect (DBI). DBI follows a traditional copper damascene flow in a fab.
Once a wafer is processed in the fab, metal pads are recessed on the surface. The surface is planarized using chemical mechanical polishing (CMP). Then, the wafer undergoes a plasma activation step.
A separate wafer undergoes a similar process. The wafers are bonded using a two-step process. It’s a dielectric-to-dielectric bond, followed by a metal-to-metal connection.
Hybrid bonding works. For years, the industry has been using the technology to make advanced CMOS image sensors. For this, one wafer is logic while the other is a pixel array. The two wafers are bonded together.
Years ago, Sony licensed Xperi’s hybrid bonding technology for use in developing image sensors. Used in today’s smartphones, Sony’s image sensor consists of 6μm pitch interconnects.
“We’ve also demonstrated 1.6μm,” said Abul Nuruzzaman, senior director of product marketing at Xperi. “The industry has been talking about 1μm pitches.”
All told, hybrid bonding enables 250,000 to 1 million interconnects per square millimeter. In comparison, 40μm-pitch microbumps enable roughly 600 to 625 interconnects per square millimeter.
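Those density figures follow directly from the pitch: on a square grid, the area density is the square of the number of pads per linear millimeter (a minimal sketch):

```python
def interconnects_per_mm2(pitch_um):
    """Pads per square millimeter on a square grid with the given pitch."""
    per_mm = 1000 / pitch_um      # pads per linear millimeter
    return per_mm ** 2

print(interconnects_per_mm2(40))  # 625.0     -> the ~600-625 microbump figure
print(interconnects_per_mm2(2))   # 250000.0  -> low end of hybrid bonding
print(interconnects_per_mm2(1))   # 1000000.0 -> the "1 million" figure
```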
Now, the industry is working on hybrid bonding for advanced memory and logic die stacking. The goal is to develop more advanced 2.5D/3D products.
This is where the industry faces several challenges, which is why the technology is still in R&D. “Achieving a good copper-to-copper bond requires precise control over the topography following the copper CMP step,” said Stephen Hiebert, senior director of marketing at KLA. “If overpolished, the copper pad recess gets too large and there is a risk the pads will not join during hybrid bonding. If underpolished, copper residues can create electrical shorts.”
Meanwhile, Xperi has developed a new version of its hybrid bonding technology. This version is geared for die-to-wafer stacking with 40μm to 1.6μm pitches.
For this, the hybrid bonding process flow is the same, but there are more steps. Once the wafer is processed, the chips are diced, activated and then bonded on a wafer. “We see this as a critical solution for 2.5D and 3D integration moving forward,” Invensas’ Mitchell said. “For many 2.5D and 3D applications, you’re going to source dies that are of different sizes. They may come from different wafers, and even different fabs. Having a technique that allows you to take an individual die that’s known good and bonding it to another die that’s known good is an important capability for future electronics.”
Initially, Xperi’s new hybrid bonding technology is targeted for a new class of 3D memory, which will ship in the next two to three years. For example, the industry is developing 3DS DRAMs. Then, for HBM, 16 DRAM dies could be connected directly to each other with fine-pitch copper connections. It still requires TSVs between each layer.
Another application involves 2.5D, 3D-ICs and chiplets, where you stack memory-on-logic or logic-on-logic at the chip level. “Where we see this headed is in 2.5D and true 3D-IC chiplet concepts, allowing for a whole set of much higher density interconnects between those chiplets. You are getting to a point where you have almost chip-like interconnects, but you are able to use that between chips,” Mitchell said.
There are other advantages. “As chips get larger, the distances to travel from one end of the chip can get significant. But if you have three-dimensional interconnects, you get to microns,” he said. “This is important for power, latency, performance and heat. If you don’t have to drive a signal over a large area, you can use less current. This generates less heat.”
The new hybrid bonding technologies from Xperi and others aren’t simple and present some major challenges. “As people get to these kinds of pitches, you are going to need a front-end mentality,” TechSearch’s Vardaman said. “The environment has to be super clean. There can be no particles on the surface. Otherwise, you don’t have a bond. There’s a lot of issues to deal with.”
Those aren’t the only challenges. “Even more challenging are multiple layers or chip stacks, as sequential non-uniformities during bonding are influencing the next layer bonding. Therefore, tolerances and uniformity requirements are increasing. Even more importantly, the value of the die stack is rapidly increasing, meaning the cost of yield is increasing,” EV Group’s Uhrmann said.
Others agree. “For multiple devices combined in heterogeneous integration, one bad die results in the failure of the whole package,” KLA’s Hiebert said. “For hybrid bonding, we see several process control challenges that must be overcome to drive adoption of the technology into new logic and memory applications. Voids severely limit yield in hybrid bonding, so inline defect inspection for void-inducing particles is essential. For pitches less than 10µm, detection of particles in the 100nm to 200nm range becomes critical.”
Some issues aren’t so obvious. “Temporary bond/debond processes are needed for advanced 2.5D and 3D packages, but it really depends on what the end goal is,” said Kim Arnold, executive director for the advanced packaging business unit at Brewer Science. “The challenge for some processes is that they utilize a flip process, which requires two carriers. This means that ‘carrier 2’ has to withstand the debond method for ‘carrier 1.’ ”
Conclusion
Clearly, hybrid bonding is complicated. But the industry wants to make it work. With chip scaling slowing and becoming too expensive, the industry needs new and different approaches.
Otherwise, the IC industry itself may slow, if not grind to a gradual halt. It might be there already.