Monday, April 29, 2013

Failure Analysis Techniques: Resolution


STM, AFM, EELS, SIMS, TEM (Angstroms) < AES (2 nm) < XPS (5 nm) < SEM (10 nm) < BSE/EBSD (30 nm) < EDX/WDX/XRF (0.3 µm) < FTIR (3 µm) - listed from finest to coarsest spatial resolution.

Failure Analysis: Tools & Techniques


Microstructural analysis:
Topography: SEM (low voltage, inelastic collisions, higher resolution, low contrast) / BSE (high voltage, elastic collisions, lower resolution, high contrast)
Morphology: (lattice geometry, crystallographic structure) EBSD/TEM/AFM/STM

Material analysis:
Elemental: EDX/WDX/XRF
Chemical: (structural bonds, oxidation states) AES/XPS/EELS/SIMS/FTIR

Interaction between primary electrons & matter: SEM, TEM, BSE, EBSD, EDX & WDX
Interaction between primary X-Rays & matter: XPS, AES, XRF
Other techniques: Optical microscopy, X-Rays, CSAM, Curve Trace, TDR, IR & thermal imaging, SQUID, LSM (LIVA/OBIC for opens & TIVA/OBIRCH for shorts), x-sections, P-laps & FIB cuts

Making Sense of Physics-of-Failure Based Approaches


PoF is an alternative approach/methodology to reliability that focuses on failure mechanisms, failure sites & root-cause analysis, instead of the more conventional approach that looks at failure modes & effects alone. The PoF approach characterizes reliability through lifetime distributions (probability distribution of frequency of fails v/s time) instead of hazard rates (failure rate v/s time). PoF approaches involve the following steps:
1. Study of the hardware configuration: geometry, design, materials, structure
2. Study of life cycle loads: operational loads (power, voltage, bias, duty cycle) & environmental loads (temperature, humidity, vibration, shock)
3. Stress analysis: Stress-strength distributions/interference, cumulative damage assessment & endurance interference, FMEA, hypothesize failure mechanisms, failure sites & associated failure models, root cause analysis, calculate RPNs to rank & prioritize failures.
4. Reliability assessment: Rel metrics characterization, life estimation (see the sketch after this list), operating/design margin estimation.
5. Interpret & apply results: Design tradeoffs & optimization, ALT planning & development, PHM & HUMS planning.
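To make the life-estimation piece of step 4 concrete, here is a minimal Python sketch of a temperature-driven life estimate using the Arrhenius acceleration model. The activation energy and the use/stress temperatures are illustrative assumptions, not values from any particular qual.

import math

K = 8.617e-5  # Boltzmann constant in eV/K

def arrhenius_af(ea_ev, t_use_c, t_stress_c):
    # AF = exp[(Ea/k) * (1/T_use - 1/T_stress)], temperatures in Kelvin.
    t_use = t_use_c + 273.15
    t_stress = t_stress_c + 273.15
    return math.exp((ea_ev / K) * (1.0 / t_use - 1.0 / t_stress))

# Illustrative assumptions: Ea = 0.7 eV, use at 55 C, stress at 125 C.
af = arrhenius_af(0.7, 55.0, 125.0)
print(f"Acceleration factor: {af:.0f}")
# 1000 stress hours at 125 C then represent roughly af * 1000 field hours at 55 C.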

Why is the Exponential Distribution special?


1. Beta = 1
2. Constant failure rate (or, hazard rate = lambda) -> used to model useful life portion of the bathtub curve.
3. Memoryless: R(t+T | survival to T) = R(t), i.e., conditional reliability after surviving to age T is the same as that of a new unit.
4. A 3-parameter Weibull (eta, beta, gamma) with beta = 1 reduces to a 2-parameter exponential (eta = 1/lambda, plus location gamma).
5. A 2-parameter Weibull (eta, beta) with beta = 1 & gamma = 0 reduces to a 1-parameter exponential (with eta = MTTF = 1/lambda).
6. R(t=MTTF) = 36.8% & Q(t=MTTF) = 63.2%. (Properties 3 & 6 are checked numerically below.)
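A quick numeric check of properties 3 & 6, assuming scipy is available; the failure rate is an arbitrary illustration.

from scipy.stats import expon

lam = 0.001                 # failure rate per hour (illustrative)
mttf = 1.0 / lam            # for beta = 1, eta = MTTF = 1/lambda

def R(t):
    # Reliability R(t) = exp(-lambda * t), via the survival function.
    return expon.sf(t, scale=mttf)

# Property 6: R(MTTF) ~ 36.8%, Q(MTTF) ~ 63.2%.
print(R(mttf), 1 - R(mttf))           # 0.3679 0.6321

# Property 3 (memorylessness): P(survive t more | survived T) = R(t).
t, T = 100.0, 500.0
print(R(T + t) / R(T), R(t))          # both 0.9048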

Hypothesis Tests: How?


1. Define problem
2. Develop null & alternate hypotheses
3. Set up test parameters (1-sided v/s 2-sided, choose distribution & significance level or alpha)
4. Calculate test statistic & corresponding p-value
5. Compare p-value with alpha & interpret results (see the sketch below)
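As a sketch, the five steps map onto a 1-sample, 2-sided t-test like so (assuming scipy; the sample data & target mean are made up for illustration):

from scipy import stats

# Steps 1-2: problem & hypotheses. H0: mean = 25.0, Ha: mean != 25.0 (2-sided).
sample = [24.1, 25.3, 23.8, 24.9, 25.7, 24.4, 23.9, 25.1]
mu0 = 25.0

# Step 3: small n, population variance unknown -> t-test; alpha = 0.05.
alpha = 0.05

# Step 4: test statistic & corresponding p-value.
t_stat, p_value = stats.ttest_1samp(sample, mu0)

# Step 5: compare p-value with alpha & interpret.
print(f"t = {t_stat:.3f}, p = {p_value:.3f}")
print("Reject H0" if p_value < alpha else "Fail to reject H0")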

Hypothesis Tests : Which & When?


Test of Means:
1-sample or 2-sample: Use z-test for n>=30 or when population variance is known, else use t-test
> 2-samples: Use ANOVA (see the sketch after these lists)

Test of Variances:
1-sample: Use Chi-square test
2-samples: Use F-ratio test

Test of Proportions:
1-sample or 2-sample: Use z-test
>2-samples: Use Chi-square test
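A rough scipy sketch of two of the cases above - one-way ANOVA for >2 sample means & the F-ratio test for 2 sample variances (scipy has no canned F-ratio test, so the statistic is formed directly). All readings are made up.

import numpy as np
from scipy import stats

# Three lots' readings (made up). >2 sample means -> one-way ANOVA.
lot_a = [9.8, 10.1, 10.0, 9.9]
lot_b = [10.3, 10.4, 10.2, 10.5]
lot_c = [9.7, 9.9, 10.0, 9.8]
f_stat, p_anova = stats.f_oneway(lot_a, lot_b, lot_c)
print(f"ANOVA: F = {f_stat:.2f}, p = {p_anova:.4f}")

# Two sample variances -> F-ratio test: F = s1^2 / s2^2, with a
# 2-sided p-value from the F distribution.
s2a = np.var(lot_a, ddof=1)
s2b = np.var(lot_b, ddof=1)
f_ratio = s2a / s2b
dfa, dfb = len(lot_a) - 1, len(lot_b) - 1
p_var = 2 * min(stats.f.cdf(f_ratio, dfa, dfb), stats.f.sf(f_ratio, dfa, dfb))
print(f"F-ratio: F = {f_ratio:.2f}, p = {p_var:.4f}")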

Distributions


Distributions for Attribute/Finite Data:
Hypergeometric: Probability of r rejects in a sample of size n drawn from a population of size N containing d total rejects. (Intended for small, finite, well-characterized populations)
Binomial: Probability of r rejects in a sample of size n, where n < 10% of the population size N & the chance of a reject in any given trial stays the same (p). (Intended for large population sizes)
Poisson: Probability of r rejects (= defects or events) in an effectively infinite population, for a given failure rate (lambda). (Intended for n -> infinity & p -> 0)

Binomial distribution approximates the Hypergeometric distribution for large N (small sampling fraction).
Poisson distribution approximates the Binomial distribution when n is large & p is small (lambda = np). Both approximations are compared numerically below.
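A quick check of the approximation chain, assuming scipy; the population, reject count & sample size are made-up illustrations.

from scipy.stats import binom, hypergeom, poisson

# P(r rejects in a sample of n) under each model.
N, d, n, r = 10000, 100, 50, 2   # population, total rejects, sample size, rejects
p = d / N                        # chance of a reject in any one trial
lam = n * p                      # expected rejects per sample

print(hypergeom.pmf(r, N, d, n))  # exact for a finite population
print(binom.pmf(r, n, p))         # binomial approx. (n < 10% of N)
print(poisson.pmf(r, lam))        # Poisson approx. (large n, small p)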

Distributions for Continuous Data: Normal, Lognormal, Exponential, Weibull

SPC/Control Charts


Control charts are used to identify special causes of variation & separate them from common-cause variation. Special causes may show up as freaks/outliers, drifts, shifts, stratification, recurring patterns & systematic variation.
For variable data, use I-MR (for n=1), X(bar)-R (for n = 2 to 10) or X(bar)-s (for n > 10)

For attribute data:
1. Count/proportion of defectives is estimated through the binomial distribution. For constant sample size (n), chart the count of defectives using an np-chart; for variable sample size, chart the proportion of defectives using a p-chart (limits sketched below).

2. Count/rate of defects is estimated through the Poisson distribution. For constant sample size (n), chart the count of defects using a c-chart; for variable sample size, chart the rate of defects using a u-chart.
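A sketch of case 1: 3-sigma p-chart limits for variable sample sizes, assuming numpy; the counts & sample sizes are made-up data.

import numpy as np

defectives = np.array([4, 6, 3, 7, 5, 2])
sizes = np.array([200, 250, 180, 260, 220, 190])

p_bar = defectives.sum() / sizes.sum()           # center line
sigma = np.sqrt(p_bar * (1 - p_bar) / sizes)     # varies with sample size
ucl = p_bar + 3 * sigma
lcl = np.clip(p_bar - 3 * sigma, 0, None)        # LCL floored at zero

for n, p, lo, hi in zip(sizes, defectives / sizes, lcl, ucl):
    flag = "in control" if lo <= p <= hi else "OUT OF CONTROL"
    print(f"n={n}: p={p:.3f} (LCL={lo:.3f}, UCL={hi:.3f}) -> {flag}")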

IC Package Types or Outlines

Six Sigma & Process Variation


For a normal distribution:
-Approx 68% of variation is contained within +/- 1sigma
-Approx 95% of variation is contained within +/- 2sigma
-Approx 99.7% of variation is contained within +/- 3sigma

Cp = 1 when +/- 3 sigma is contained within spec limits.
Cp = 1.33 when +/- 4 sigma is contained within spec limits.
Cp = 1.50 when +/- 4.5 sigma is contained within spec limits.
Cp = 1.67 when +/- 5 sigma is contained within spec limits.
Cp = 2.00 when +/- 6 sigma is contained within spec limits.
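Both the coverage figures & the Cp table above follow directly from the normal distribution, as this quick scipy check shows (assuming a centered process):

from scipy.stats import norm

# Fraction of a normal distribution within +/- k sigma.
for k in (1, 2, 3):
    print(k, norm.cdf(k) - norm.cdf(-k))   # ~0.683, ~0.954, ~0.997

# For a centered process, Cp = (USL - LSL) / (6 sigma); with spec
# limits at +/- k sigma, Cp = 2k/6 = k/3.
for k in (3, 4, 4.5, 5, 6):
    print(k, round(k / 3, 2))              # 1.0, 1.33, 1.5, 1.67, 2.0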

Acceptance sampling: LTPD & AQL


AQL = definition of a threshold good lot.
LTPD = definition of a threshold bad lot.

The sampling plan is designed around the AQL/LTPD such that it defines:
1. MAX chance of ACCEPTING lots of quality that is equal or worse than LTPD. This chance/risk is BETA or CONSUMER's RISK.
2. MAX chance of REJECTING lots of quality that is equal or better than AQL. This chance/risk is ALPHA or PRODUCER's RISK.
Alpha (the probability of rejecting an AQL-quality lot) is usually set to 0.05. This equates to a 95% chance of accepting a threshold good lot.
Beta (the probability of accepting an LTPD-quality lot) is usually set to 0.10. This equates to a 90% chance of rejecting a threshold bad lot. (Both targets can be checked by evaluating a plan's OC curve, as below.)
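A sampling plan is checked against these targets by evaluating its OC curve at the AQL & LTPD using the binomial distribution. In this scipy sketch the plan (n, c) and the AQL/LTPD values are illustrative guesses, not values from a sampling-plan table.

from scipy.stats import binom

# Single sampling plan: draw n units, accept the lot if rejects <= c.
n, c = 134, 3
aql, ltpd = 0.01, 0.05    # threshold good / bad lot fraction defective

p_accept_aql = binom.cdf(c, n, aql)    # want >= 1 - alpha (0.95)
p_accept_ltpd = binom.cdf(c, n, ltpd)  # want <= beta (0.10)

print(f"P(accept | AQL)  = {p_accept_aql:.3f}")   # producer's risk = 1 - this
print(f"P(accept | LTPD) = {p_accept_ltpd:.3f}")  # consumer's risk = this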

Power, Confidence, Error, Significance


Reject null hypothesis when true (false positive) = alpha or Type 1 error
Accept null hypothesis when false (or false negative) = beta or Type 2 error
Reject null hypothesis when false: POWER = (1-beta)
Accept null hypothesis when true : CONFIDENCE (= 1-alpha)
At high power, beta is small => a real effect is likely to yield a p-value < alpha (the significance level), so most real effects are deemed significant.
At low power, beta is large => even a real effect is likely to yield a p-value > alpha, so most real effects are deemed insignificant (the power calculation below illustrates this).
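A minimal sketch, assuming scipy: for a fixed alpha, power rises with sample size. The idealized 2-sided 1-sample z-test, effect size & sample sizes are arbitrary illustrations.

from scipy.stats import norm

def power(delta_sigmas, n, alpha=0.05):
    # Power of a 2-sided, 1-sample z-test when the true mean is shifted
    # by delta_sigmas standard deviations.
    z_crit = norm.ppf(1 - alpha / 2)
    shift = delta_sigmas * n ** 0.5
    return norm.sf(z_crit - shift) + norm.cdf(-z_crit - shift)

# alpha stays fixed; beta shrinks (power grows) as n grows.
for n in (5, 20, 80):
    print(n, round(power(0.5, n), 3))   # ~0.20, ~0.61, ~0.99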

Thursday, April 25, 2013

2.5/3D TSV & Silicon Interposers: Weighing Pros v/s Cons


Benefits:
1. High density integration facilitating greater functionality (digital/logic, memory, analog, MEMS, optoelectronics, signal & power management) in a smaller footprint.
2. Improved memory bandwidth & power management.
3. Faster signal speeds & lower parasitics (noise, crosstalk, latencies, propagation delays, interference).
4. Modular design & die-partitioning permit use of mixed IC technology, improving product development & supply-chain flexibility/scalability.
5. Best of both worlds between PoP (= modularity, shorter & less complex development cycles/TTM) & SoC (= increased wiring densities, faster signal speeds, memory/power benefits) type package architectures.

Issues & concerns:
1. Cost: KGD yield related issues & manufacturing/test complexities drive cost upwards.
2. Thermal management: While 3D stacked memory (NAND Flash on S/D/RAM) and memory-on-logic (DSP with DRAM, GPU with SRAM) configurations have been successfully demonstrated & mass-produced, logic-on-logic has largely remained beyond the reach of the thermal envelopes of existing packaging materials.
3. Manufacturing complexities: Additional processes such as backgrinding, bonding/debonding to carrier wafers & stacking are involved. Considerations include deciding between via-first, via-middle & via-last flows; F2F & F2B chip-attach schemes; and W2W, D2D & D2W integration configurations. Thin die handling/die-attach and capillary underfill flow in tight interstitial spaces are challenges. PAM/NCP/NCFs with TCB will be needed.
4. Supply chain/ownership/business model needs clarity: backgrinding/stacking/bumping/dicing/bonding/debonding operations need clear process owners. Who does what, & what if things go wrong - these questions need clear answers.
5. The design tool kit to address multiple aspects (substrate/SoC design, SI/PI, RF & power management, EDA) is still in the exploratory & pathfinding phase.
6. Lack of standards across the industry for 2.5D/3D TSV facilitation. These are expected to be developed as needed & customized as required.

Product Development: Womb to Tomb, Cradle to the Grave


Technology Qual -> Node developed & validated -> Product Enablement -> Development Kickoff -> Logic Design -> Physical Design (partitioning/floorplanning/placement & layout) -> DRC -> Review/approve substrate drawings for substrate manufacturing & assembly bringup (tooling: jigs, fixtures, stencils, sockets, etc) -> Tapeout -> FAI -> First Silicon -> Assembly process development, optimization & validation (manufacturing windows & corner studies, material screens, material/equipment/process/recipe readiness) -> EVT builds -> Test charz (defects & yield, pareto of rejects) -> Reliability charz (AM qual, margin assessment, robustness testing, component & board level testing) -> Customer prototype builds -> Qualification -> Production Readiness (process/material specs, bill-of-materials, product flows, vendor lists, capacity & resource planning) -> Production Release through soft ramp -> HVM -> QMP (SPC/CoC) -> EOL

Six Sigma : Process & Design

Process: Aims to reduce process variation.
Define: Plan, scope, charter, schedule, team, objectives, milestones, deliverables
Measure: MSA, GR&R, Process Capability, Yields
Analyze: Hypothesis tests, ANOVA, PFMEA, Process Maps (KPIV/KPOV)
Improve: DoE
Control: SPC, Control Charts

Design: Aims to reduce cycle time and the need for rework.
Define: Plan, scope, charter, schedule, team, objectives, milestones, deliverables
Measure: Baseline, benchmark, functional parameters, specs & margins
Analyze: DFMEA, Risk analysis, GAP analysis
Develop: Deliver design
Optimize: DfX - tradeoffs
Validate: Prototype builds

Firefighting through methodical madness

1. Develop team
2. Define problem: Failure rate, lots affected, establish scope
3. Containment: Raise red flags, lots on hold, generate documentation, reliability assessment, sampling plans, increased checks & balances
4. Problem analysis: Process mapping, history tracking, establish commonalities & dependencies, consult FMEA, RCA/5W/5M, failure analysis, establish hypotheses, develop CAPA theories (short-term/mid-term/long-term)
5. Verify corrective actions: Engineering studies to duplicate the problem and verify effectiveness of CA
6. Implement corrective action: Release lots, provide disposition, soft ramp through full release with increased sampling, document lessons learnt
7. Implement preventive action: Mid-term/long-term actions to prevent any recurrences in future
8. Congratulate team

High Density Integration schemes

PoP, SoC & Die Stacking (wire-bonded only, wire-bonded + flip-chip, TSV/TSI 2.5D/3D, F2F FC bonded die).

Whats in a SoC?

Some digital logic (CPU, GPU & chipset logic such as GNB); memory (DDR RAM, cache); analog signal & power management (sensors, drivers, actuators, controllers); interconnect buses & interfaces (PCI, HT); and DfT structures (BIST, JTAG boundary scans)

An acronymously brief history of semiconductor packaging

CERDIP -> PDIP -> SOP/QFP -> BGA & FCA -> QFN -> CSP/WLP -> PoP/SiP -> SoC -> TSV/TSI

Monday, April 22, 2013

GS4

http://reviews.cnet.com/smartphones/samsung-galaxy-s4/4505-6452_7-35627724.html Samsung Galaxy S4

HTC One

http://reviews.cnet.com/smartphones/htc-one/4505-6452_7-35616143.html HTC One

Smartphone Components

Antenna + Switch & RFFE, Filter, Duplexer, Amplifier, Transceiver, Baseband, Application Processor [SOC + LPDDR3], Memory [Flash / SSD...