Friday, October 11, 2013

"'Gravity' and Reality: History's Worst Space Disasters | Space.com" http://feedly.com/k/GV6kTG

Sunday, July 14, 2013

Smart enough for your phone?

"ABCs of smartphone screens: 1080p and more (Smartphones Unlocked) | Dialed In - CNET Blogs" http://feedly.com/k/12NYwaZ

Friday, May 10, 2013

Intel Outside


More Memories


Small Outline Dual Inline Memory Module (SODIMM)
Double Data Rate Synchronous DRAM (DDR SDRAM)
Single Data Rate Synchronous DRAM (SDRAM)
Proprietary memory modules
L1 cache - Memory accesses at full microprocessor speed (10 nanoseconds, 4 kilobytes to 16 kilobytes in size)
L2 cache - Memory access of type SRAM (around 20 to 30 nanoseconds, 128 kilobytes to 512 kilobytes in size)
Main memory - Memory access of type RAM (around 60 nanoseconds, 32 megabytes to 128 megabytes in size)
Hard disk - Mechanical, slow (around 12 milliseconds, 1 gigabyte to 10 gigabytes in size)
Internet - Incredibly slow (between 1 second and 3 days, unlimited size)

Courtesy of HSW: http://www.howstuffworks.com/
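
To see why the hierarchy matters, here is a minimal Python sketch (not from the HSW article; the hit-rate split is an illustrative assumption) that estimates an average access time from the latencies listed above:

# Illustrative sketch: latencies taken from the list above (in ns), hit-rate split assumed
levels = [
    ("L1 cache",    10,         0.90),   # ~10 ns
    ("L2 cache",    25,         0.08),   # ~20-30 ns
    ("Main memory", 60,         0.019),  # ~60 ns
    ("Hard disk",   12_000_000, 0.001),  # ~12 ms expressed in ns
]

# Average access time = sum of (latency x fraction of accesses served there);
# even a 0.1% trip to disk dominates the total.
amat_ns = sum(latency * fraction for _, latency, fraction in levels)
print(f"Average access time: {amat_ns:,.0f} ns")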

Memories



Courtesy of Computer History Museum, Mountain View, CA.

Thursday, May 9, 2013

uP Timeline


1971 Intel 4004 Developed to drive calculators, the 4004 was a 4-bit chip with 2,300 transistors and clocked at 740KHz

1972 Intel 8008 The first 8-bit processor, the 8008 had an address space of 16KB and was clocked at 500KHz up to 800KHz

1974 Intel 8080 The 8080 was a significant step up, boasting a clock speed of 2MHz and able to address 64KB memory. Early desktop computers used this chip and the CP/M operating system

1976 Zilog Z80 Zilog was founded by ex-Intel engineers who created a compatible but superior chip to the 8080. The Z80 powered many CP/M machines, plus home computers like the ZX Spectrum

1978 Intel 8086 Famous as the first x86 chip, the 8086 was also Intel’s first 16-bit chip with about 29,000 transistors and was clocked initially at 4.77MHz

1982 Intel 80286 The 80286 was a high-performance upgrade of the 8086, and used by IBM in the PC-AT. First clocked at 6MHz, later versions ran up to 25MHz. The 286 had a 16MB address space and 134,000 transistors.
1985 Intel 80386 Intel’s first 32-bit chip, the 386 had 275,000 transistors – over 100 times that of the 4004. Versions of the 386 eventually reached 40MHz
1985 Acorn ARM produced as co-processor for BBC Micro Seeking a new chip to power future business computers, the makers of the BBC Micro decided to build their own, calling it the Acorn RISC Machine (ARM)
1987 Sun SPARC Like Acorn, Sun was looking for a new chip and decided to create its own. The Sparc architecture is still used today in Sun (now Oracle) systems, and supercomputers
1989 Intel 80486 A higher performance version of the 386, Intel’s 486 was the first x86 chip with over 1 million transistors (1.2 million). It was also the first with an on-chip cache and floating point unit

1990 IBM RS/6000 introduces Power chips IBM experimented with RISC chips in the 1970s, and this bore fruit with the RS/6000 workstation in 1990. The processor later developed into the Power chip used by IBM and Apple
1993 Intel Pentium The Pentium was a radical overhaul of Intel’s x86 line, introducing superscalar processing. Starting at 60MHz but eventually reaching 300MHz, the Pentium had 3,100,000 transistors
1995 Intel Pentium Pro Developed as a high-performance chip, the Pentium Pro introduced out-of-order execution and L2 cache inside the same package. This line later morphed into the Xeon line
1996 AMD K5 AMD had been manufacturing Intel chips under licence for years, but the K5 was its first in-house design, intended to compete with the Pentium
1999 AMD Athlon The AMD Athlon was the firm’s first processor that could beat Intel on performance. Starting at 500MHz, a later version was the first x86 chip to hit 1GHz and had 22 million transistors

2000 Intel Pentium 4 Another major redesign, the Pentium 4 introduced Intel’s Netburst architecture. It was clocked at 1.4GHz initially, rising to 3.8GHz, and had 42 million transistors

2001 Intel Itanium Developed by Intel and HP, Itanium is a 64-bit non-x86 architecture developed for parallelism and aimed at enterprise servers. The Itanium family has not been a great success
2002 TI Omap ARM TI became one of the largest makers of system-on-a-chip devices for smartphones and PDAs with the Omap family, combining an ARM CPU with circuitry such as GSM processors
2003 Intel Pentium-M (Centrino) The Pentium-M was designed specifically for laptops, and formed the core of Intel’s first Centrino platform. It had 77 million transistors and was clocked from 900MHz
2003 AMD Opteron While Intel laboured with Itanium, AMD introduced the first 64-bit x86 chips with the Opteron, which proved popular in workstations and servers. It had over 105 million transistors
2005 Intel Pentium-D Intel introduced its first dual-core chips in 2005, starting with the Pentium Extreme Edition. The Pentium D was the first mainstream desktop chip to follow suit
2006 AMD acquires ATI AMD bought up ATI, announcing ambitious plans to combine its x86 processors with ATI’s graphics processors
2006 Intel Xeon 5300 Intel’s first quad-core chips were the Xeon 5300 line for workstations and servers. Actually two dual-core dies joined together, these have a total of 582 million transistors
2008 Qualcomm SnapDragon ARM Wireless technology firm Qualcomm started producing high-performance smartphone chips based on the ARM architecture. SnapDragon is clocked at 1GHz and has 200 million transistors
2011 Intel Core i3,i5, i7 Intel’s latest chips, based on the Sandy Bridge architecture. The desktop processors have up to eight cores on a single chip and up to 995 million transistors
2011 AMD Fusion chips The Fusion line combines multiple CPU cores on a single chip along with ATI GPU cores, with the first chips having up to 1.45 billion transistors
2011 ARM announces ARMv8 64-bit architecture ARM unveils its specifications for future 64-bit chips. Although some years away, products based on ARMv8 could have as many as 128 cores
-excerpted from "40 years of the microprocessor" published in the Inquirer http://www.theinquirer.net/

Monday, April 29, 2013

Failure Analysis Techniques: Resolution


STM, AFM, EELS, SIMS, TEM (Angstroms) < AES (2nm) < XPS (5nm) < SEM (10nm) < BSE/EBSD (30nm) < EDX/WDX/XRF (0.3u) < FTIR (3u)

Failure Analysis: Tools & Techniques


Microstructural analysis:
Topography: SEM (low voltage, inelastic collisions, higher resolution, low contrast) / BSE (high voltage, elastic collisions, lower resolution, high contrast)
Morphology: (lattice geometry, crystallographic structure) EBSD/TEM/AFM/STM

Material analysis:
Elemental: EDX/WDX/XRF
Chemical: (structural bonds, oxidation states) AES/XPS/EELS/SIMS/FTIR

Interaction between primary electrons & matter: SEM, TEM, BSE, EBSD, EDX & WDX
Interaction between primary X-Rays & matter: XPS, AES, XRF
Other techniques: Optical microscopy, X-Rays, CSAM, curve trace, TDR, IR & thermal imaging, SQUID, LSM (LIVA/OBIC for opens & TIVA/OBIRCH for shorts), x-sections, P-laps & FIB cuts

Making Sense of Physics-of-Failure Based Approaches


PoF is an alternative approach/methodology to reliability that focuses on the failure mechanism, failure site & root cause analysis instead of the more conventional approach that looks at failure modes & effects alone. The PoF approach characterizes reliability through lifetime distributions (probability distribution of the frequency of fails v/s time) instead of hazard rates (failure rate v/s time). PoF approaches involve the following steps:
1. Study of the hardware configuration: geometry, design, materials, structure
2. Study of life cycle loads: operational loads (power, voltage, bias, duty cycle) & environmental loads (temperature, humidity, vibration, shock)
3. Stress analysis: Stress-strength distributions/interference, cumulative damage assessment & endurance interference, FMEA, hypothesize failure mechanisms, failure sites & associated failure models, root cause analysis, calculate RPNs to rank & prioritize failures (see the stress-strength sketch after this list).
4. Reliability assessment: Rel metrics characterization, life estimation, operating/design margin estimation.
5. Interpret & apply results: Design tradeoffs & optimization, ALT planning & development, PHM & HUMS planning.
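
For step 3, a minimal sketch of the classical stress-strength interference calculation, assuming independent normal stress & strength distributions and purely hypothetical numbers (not tied to any specific failure mechanism):

from math import sqrt
from scipy.stats import norm

def stress_strength_reliability(mu_strength, sd_strength, mu_stress, sd_stress):
    """Reliability = P(strength > stress) for independent normal
    stress & strength distributions (classical interference model)."""
    z = (mu_strength - mu_stress) / sqrt(sd_strength**2 + sd_stress**2)
    return norm.cdf(z)

# Hypothetical example: joint strength 100 +/- 10 MPa vs. applied stress 70 +/- 15 MPa
print(stress_strength_reliability(100, 10, 70, 15))   # ~0.95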

Why is the Exponential Distribution special?


1. Beta = 1
2. Constant failure rate (or, hazard rate = lambda) -> used to model useful life portion of the bathtub curve.
3. Memoryless: R(t+T | survival to T) = R(t), i.e. the chance of surviving a further time t does not depend on the age T already accumulated.
4. A 3-parameter Weibull (eta, beta, gamma) with beta = 1 reduces to a 2-parameter exponential (location gamma & scale eta = 1/lambda).
5. A 1-parameter Weibull (eta, beta=1, gamma=0) is the same as 1-parameter exponential (with eta = MTTF = 1/lambda)
6. R(t=MTTF) = 36.8% & Q(t=MTTF) = 63.2%.
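
A minimal sketch illustrating points 3 & 6 above, assuming a hypothetical MTTF of 10,000 hours:

import math

def rel_exponential(t, mttf):
    """R(t) = exp(-t/MTTF) = exp(-lambda*t) for the exponential model."""
    return math.exp(-t / mttf)

mttf = 10_000.0   # hypothetical MTTF in hours

# Point 6: R(MTTF) ~ 36.8%, so Q(MTTF) ~ 63.2%
print(rel_exponential(mttf, mttf))                       # ~0.368

# Point 3 (memorylessness): chance of surviving a further t hours, given
# survival to age T, equals R(t) regardless of T
t, T = 1_000.0, 5_000.0
conditional = rel_exponential(T + t, mttf) / rel_exponential(T, mttf)
print(conditional, rel_exponential(t, mttf))             # both ~0.905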

Hypothesis Tests: How?


1. Define problem
2. Develop null & alternate hypotheses
3. Set up test parameters (1-sided v/s 2-sided, choose distribution & significance level or alpha)
4. Calculate test statistic & corresponding p-value
5. Compare p-value with alpha & interpret results
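
A minimal sketch of these five steps as a 2-sample t-test in Python (hypothetical bond-pull data; scipy assumed available):

from scipy import stats

# Hypothetical bond-pull strength data (grams-force) from two assembly lines
line_a = [9.8, 10.1, 10.3, 9.9, 10.0, 10.2, 9.7, 10.4]
line_b = [9.5, 9.7, 9.6, 9.9, 9.4, 9.8, 9.6, 9.5]

alpha = 0.05                                        # step 3: significance level
t_stat, p_value = stats.ttest_ind(line_a, line_b)   # step 4: test statistic & p-value

# Step 5: compare p-value with alpha & interpret
if p_value < alpha:
    print(f"p = {p_value:.4f} < {alpha}: reject the null (means differ)")
else:
    print(f"p = {p_value:.4f} >= {alpha}: fail to reject the null")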

Hypothesis Tests : Which & When?


Test of Means:
1-sample or 2-sample: Use z-test for n>=30 or when population variance is known, else use t-test
> 2-samples: Use ANOVA

Test of Variances:
1-sample: Use Chi-square test
2-samples: Use F-ratio test

Test of Proportions:
1-sample or 2-sample: Use z-test
>2-samples: Use Chi-square test
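
Two of these cases in a minimal Python sketch (hypothetical data; numpy/scipy assumed): a one-way ANOVA for >2 sample means, and an F-ratio test for two sample variances:

import numpy as np
from scipy import stats

a = [4.1, 4.3, 4.0, 4.2, 4.4]
b = [4.6, 4.5, 4.9, 4.3, 4.8]
c = [4.2, 4.1, 4.3, 4.0, 4.2]

# >2 samples, test of means: one-way ANOVA
f_stat, p_anova = stats.f_oneway(a, b, c)

# 2 samples, test of variances: F = s1^2/s2^2 compared against an
# F distribution with (n1-1, n2-1) degrees of freedom (two-sided p-value)
s1, s2 = np.var(a, ddof=1), np.var(b, ddof=1)
f_ratio = s1 / s2
p_var = 2 * min(stats.f.cdf(f_ratio, len(a) - 1, len(b) - 1),
                stats.f.sf(f_ratio, len(a) - 1, len(b) - 1))

print(p_anova, p_var)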

Distributions


Distributions for Attribute/Finite Data:
Hypergeometric: Probability of r rejects in a sample of size n drawn from a population of size N containing d total rejects. (Intended for small, finite, well-characterized populations)
Binomial: Probability of r rejects in a sample of size n (with n < 10% of the population size N), where the chance of a reject in any given trial always stays the same (p). (Intended for large populations)
Poisson: Probability of r rejects (= defects or events) in an effectively infinite population, for a given failure rate (lambda). (Intended for n->infinity & p->0)

Binomial distribution approximates Hypergeometric distribution for large N.
Poisson distribution approximates Binomial distribution when n is large & p is small (np held constant).

Distributions for Continuous Data: Normal, Lognormal, Exponential, Weibull
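
A minimal sketch comparing the three attribute distributions above for the same lot (hypothetical numbers; scipy assumed), which also shows the approximations at work:

from scipy.stats import hypergeom, binom, poisson

N, d = 1_000, 20     # population size, total rejects in the population
n, r = 50, 2         # sample size, rejects observed in the sample
p = d / N            # chance of drawing a reject on any one trial

# scipy's hypergeom parameterization is pmf(k, M=N, n=d, N=n)
print(hypergeom.pmf(r, N, d, n))    # exact, finite population
print(binom.pmf(r, n, p))           # approximation for n < 10% of N
print(poisson.pmf(r, n * p))        # approximation for large n, small p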

SPC/Control Charts


Control charts are used to identify special causes of variation & separate them from common-cause variation. Special causes may show up as freaks/outliers, drifts, shifts, stratification, recurring patterns & systematic variation.
For variable data, use I-MR (for n=1), X(bar)-R (for n = 2 to 10) or X(bar)-s (for n > 10)

For attribute data:
1. Count/proportion of defectives is estimated through the binomial distribution. For constant sample size (n), track the count of defectives using an np-chart; for variable sample size, track the proportion of defectives using a p-chart.

2. Count/rate of defects is estimated through the Poisson distribution. For constant sample size (n), track the count of defects using a c-chart; for variable sample size, track the rate of defects using a u-chart.
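
A minimal sketch of the corresponding 3-sigma control limits (illustrative numbers; the formulas are the standard binomial/Poisson ones):

import math

def p_chart_limits(p_bar, n):
    """3-sigma limits for a p-chart (proportion defective, binomial based)."""
    sigma = math.sqrt(p_bar * (1 - p_bar) / n)
    return max(0.0, p_bar - 3 * sigma), p_bar + 3 * sigma

def c_chart_limits(c_bar):
    """3-sigma limits for a c-chart (count of defects, Poisson based)."""
    sigma = math.sqrt(c_bar)
    return max(0.0, c_bar - 3 * sigma), c_bar + 3 * sigma

print(p_chart_limits(0.02, 200))   # e.g. 2% defective, samples of 200 units
print(c_chart_limits(4.0))         # e.g. an average of 4 defects per unit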

IC Package Types or Outlines

Six Sigma & Process Variation


For a normal distribution:
-Approx 68% of variation is contained within +/- 1 sigma
-Approx 95% of variation is contained within +/- 2 sigma
-Approx 99.7% of variation is contained within +/- 3 sigma

Cp = 1 when +/- 3 sigma is contained within spec limits.
Cp = 1.33 when +/- 4 sigma is contained within spec limits.
Cp = 1.50 when +/- 4.5 sigma is contained within spec limits.
Cp = 1.67 when +/- 5 sigma is contained within spec limits.
Cp = 2.00 when +/- 6 sigma is contained within spec limits.
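
The Cp values above follow directly from Cp = (USL - LSL) / (6 * sigma); a minimal sketch with hypothetical spec limits (Cpk added to show the effect of an off-center mean):

def cp(usl, lsl, sigma):
    """Cp = (USL - LSL) / (6 * sigma): how many +/- 3-sigma spreads fit in the spec."""
    return (usl - lsl) / (6 * sigma)

def cpk(usl, lsl, mean, sigma):
    """Cpk additionally penalizes a process mean that is off-center."""
    return min(usl - mean, mean - lsl) / (3 * sigma)

# Hypothetical: spec = 10 +/- 0.6, process sigma = 0.1 -> +/- 6 sigma fits -> Cp = 2
print(cp(10.6, 9.4, 0.1))           # 2.0
print(cpk(10.6, 9.4, 10.05, 0.1))   # ~1.83 (mean shifted by 0.05)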

Acceptance sampling: LTPD & AQL


AQL = definition of a threshold good lot.
LTPD = definition of a threshold bad lot.

The sampling plan is designed around the AQL/LTPD such that it defines:
1. MAX chance of ACCEPTING lots of quality that is equal or worse than LTPD. This chance/risk is BETA or CONSUMER's RISK.
2. MAX chance of REJECTING lots of quality that is equal or better than AQL. This chance/risk is ALPHA or PRODUCER's RISK.
Alpha (probability of rejecting an AQL-quality lot) is usually set to 0.05, i.e. a 95% chance/confidence of accepting a threshold good lot.
Beta (probability of accepting an LTPD-quality lot) is usually set to 0.10, i.e. a 90% chance/confidence of rejecting a threshold bad lot.
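
A minimal sketch evaluating a hypothetical single-sampling plan (n = 134, accept on c <= 3 defects) against the AQL/LTPD risks described above; scipy assumed:

from scipy.stats import binom

# Hypothetical single-sampling plan: inspect n units, accept the lot if defects <= c
n, c = 134, 3
aql, ltpd = 0.01, 0.05    # threshold "good" & "bad" lot fraction defective

p_accept_aql  = binom.cdf(c, n, aql)     # chance of accepting an AQL-quality lot
p_accept_ltpd = binom.cdf(c, n, ltpd)    # chance of accepting an LTPD-quality lot

print(f"Producer's risk alpha = {1 - p_accept_aql:.3f}")   # roughly 0.05
print(f"Consumer's risk beta  = {p_accept_ltpd:.3f}")      # roughly 0.10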

Power, Confidence, Error, Significance


Reject null hypothesis when true (false positive) = alpha or Type 1 error
Accept null hypothesis when false (or false negative) = beta or Type 2 error
Reject null hypothesis when false: POWER = (1-beta)
Accept null hypothesis when true : CONFIDENCE (= 1-alpha)
At high power, beta is small: a true effect is likely to yield a p-value < alpha (the significance level), so most real effects are deemed significant.
At low power, beta is large: a true effect is likely to yield a p-value > alpha, so most real effects are missed & deemed insignificant.
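
A minimal sketch of how power grows with sample size for a one-sided z-test (hypothetical 0.5-sigma shift; scipy assumed):

from math import sqrt
from scipy.stats import norm

def power_one_sided_z(delta, sigma, n, alpha=0.05):
    """Power of a one-sided z-test to detect a true mean shift `delta`:
    P(reject the null | the null is false)."""
    z_crit = norm.ppf(1 - alpha)
    return norm.sf(z_crit - delta * sqrt(n) / sigma)

print(power_one_sided_z(delta=0.5, sigma=1.0, n=10))   # ~0.47 (low power)
print(power_one_sided_z(delta=0.5, sigma=1.0, n=50))   # ~0.97 (high power)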

Thursday, April 25, 2013

2.5/3D TSV & Silicon Interposers: Weighing Pros v/s Cons


Benefits:
1. High density integration facilitating greater functionality (digital/logic, memory, analog, MEMS, optoelectronics, signal & power management) in smaller footprint.
2. Improved memory bandwidth and power management.
3. Faster signal speeds & lower parasitics (noise, crosstalk, latencies, propagation delays, interference).
4. Modular design & die-partitioning permits use of mixed IC technology, improving product development & supply-chain flexibility/scalability.
5. Best of both worlds between PoP (= modularity, shorter & less complex development cycles/TTM) & SoC (= increased wiring densities, faster signal speeds, memory/power benefits) type package architectures.

Issues & concerns:
1. Cost: KGD yield related issues, manufacturing & test complexities drive cost upwards.
2. Thermal management: While 3D stacked memory (NAND Flash on S/D/RAM) and memory-on-logic (DSP with DRAM, GPU with SRAM) configurations have been successfully demonstrated & mass-produced, logic on logic has largely been beyond reach of thermal envelopes of existing packaging materials.
3. Manufacturing complexities: Additional processes such as backgrinding, bonding/debonding to carrier wafers & stacking are involved. Considerations include deciding between via-first, via-middle & via-last flows; F2F & F2B chip-attach schemes; W2W, D2D & D2W integration configurations. Thin die handling/die-attach and capillary underfill flow in tight interstitial spaces are challenges. PAM/NCP/NCFs with TCB will be needed.
4. Supply chain/ownership/business model needs clarity: backgrinding/stacking/bumping/dicing/bonding/debonding operations need clear process owners. Who does what & what if things go wrong - these questions need clear answers.
5. Design tool kit to address multiple aspects (substrate/SoC design, SI/PI, RF & power management, EDA) is still in the exploratory & pathfinding phase.
6. Lack of standards across the industry for 2.5D/3D TSV facilitation. These are expected to be developed as needed and customized as required.

Product Development: Womb to Tomb, Cradle to the Grave


Technology Qual -> Node developed & validated -> Product Enablement -> Development Kickoff -> Logic Design -> Physical Design (partitioning/floorplanning/placement & layout) -> DRC -> Review/approve substrate drawings for substrate manufacturing & assembly bringup (tooling: jigs, fixtures, stencils, sockets, etc) -> Tapeout -> FAI -> First Silicon -> Assembly process development, optimization & validation (manufacturing windows & corner studies, material screens, material/equipment/process/recipe readiness) -> EVT builds -> Test charz (defects & yield, pareto of rejects) -> Reliability Charz (AM qual, margin assessment, robustness testing, component & board level testing) -> customer prototype builds -> Qualification -> Production Readiness (process/material specs, bill-of-materials, product flows, vendor lists, capacity & resource planning) -> Production Release through soft ramp -> HVM -> QMP(SPC/CoC) -> EOL

Six Sigma : Process & Design

Process: Aims to reduce process variation
Define: Plan, scope, charter, schedule, team, objectives, milestones, deliverables
Measure: MSA, GR&R, Process Capability, Yields
Analyze: Hypothesis tests, ANOVA, PFMEA, Process Maps (KPIV/KPOV)
Improve: DoE
Control: SPC, Control Charts

Design: Aims to reduce cycle time and need for rework
Define: Plan, scope, charter, schedule, team, objectives, milestones, deliverables
Measure: Baseline, benchmark, functional parameters, specs & margins
Analyze: DFMEA, Risk analysis, GAP analysis
Develop: Deliver design
Optimize: DfX - tradeoffs
Validate: Prototype builds

Firefighting through methodical madness

1. Develop Team
2. Define Problem: Failure rate, lots affected, establish scope
3. Containment: Raise red flags, lots on hold, generate documentation, reliability assessment, sampling plans, increased checks & balances
4. Problem analysis: Process mapping, history tracking, establish commonalities & dependencies, consult FMEA, RCA/5W/5M, failure analysis, establish hypotheses, develop CAPA theories (short-term/mid-term/long-term)
5. Verify corrective actions: Engineering studies to duplicate problem and verify effectiveness of CA
6. Implement corrective action: Release lots, provide disposition, soft ramp through full release with increased sampling, document lessons learnt
7. Implement preventive action: Mid-term/long-term actions to prevent any recurrences in future
8. Congratulate team

High Density Integration schemes

PoP, SoC & Die Stacking (wire-bonded only, wire-bonded + flip-chip, TSV/TSI 2.5D/3D, F2F FC bonded die).

What's in a SoC?

Some digital logic (CPU, GPU & chipset logic such as GNB); memory (DDR RAM, cache); analog signal & power management (sensors, drivers, actuators, controllers); interconnect buses & interfaces (PCI, HT); and DfT structures (BIST, JTAG boundary scans)

An acronymously brief history of semiconductor packaging

CERDIP -> PDIP -> SOP/QFP -> BGA & FCA -> QFN -> CSP/WLP -> PoP/SiP -> SoC -> TSV/TSI

Monday, April 22, 2013

GS4

http://reviews.cnet.com/smartphones/samsung-galaxy-s4/4505-6452_7-35627724.html Samsung Galaxy S4

HTC One

http://reviews.cnet.com/smartphones/htc-one/4505-6452_7-35616143.html HTC One

Smartphone Components

Antenna + Switch & RFFE, Filter, Duplexer, Amplifier, Transceiver, Baseband, Application Processor [SOC + LPDDR3], Memory [Flash / SSD...