Monday 19 January 2015

CONGESTION CONTROL TECHNIQUES



Congestion needs to be analyzed after placement, and the routing results depend on how congested your design is. Routing congestion may be localized. Some of the things you can do to make sure routing is hassle-free are:

Placement blockages: The utilization constraint is not a hard rule, so if you want to specifically avoid placement in certain areas, use placement blockages (a command sketch follows this list).
Soft blockages (only buffers may be placed)
Hard blockages (no standard cells or buffers may be placed)
Partial blockages (same as density screens; placement is allowed only up to a specified density)
Halos (similar to a hard blockage, except that the blockage moves along with its macro)
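A minimal IC Compiler-style sketch of creating the first three blockage types; the command exists in Synopsys PnR tools, but treat the exact option names and the coordinates here as illustrative assumptions.

# Hard blockage: no standard cells or buffers allowed in this box
create_placement_blockage -bbox {837 114 1103 918} -type hard -name blk_hard_1
# Soft blockage: only buffers/inverters may be placed here
create_placement_blockage -bbox {1200 114 1400 918} -type soft -name blk_soft_1
# Partial blockage (density screen): block 60% of the area, i.e. allow up to 40% placement density
create_placement_blockage -bbox {400 400 800 800} -type partial -blocked_percentage 60 -name blk_partial_1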

Macro padding: Macro padding, or placement halos around the macros, are placement blockages around the edges of the macros. This makes sure that no standard cells are placed right at the macro pinouts, thereby giving extra breathing space for the macro pin connections to standard cells.

Cell padding: Cell padding refers to placement clearance applied to standard cells in PnR tools. This is typically done to ease placement congestion or to reserve space for later use in the flow.
For example, cell padding is typically applied to the buffers/inverters used to build the clock tree, so that space is reserved to insert decap cells next to them after CTS (see the sketch below).
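A rough sketch of both macro halos and clock-buffer padding implemented as keepout margins; create_keepout_margin follows IC Compiler II usage, and the {left bottom right top} margin format, option names and instance names are assumptions to check against your tool's documentation.

# Halo / padding around a macro: 10 um hard keepout on all four sides (instance name is a placeholder)
create_keepout_margin -type hard -outer {10 10 10 10} [get_cells u_sram_0]
# Cell padding on clock buffers: reserve some space on the left/right for post-CTS decap insertion
create_keepout_margin -type soft -outer {0.5 0 0.5 0} [get_cells -hierarchical *clk_buf*]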

Maximum Utilization constraint (density screens): Some tools let you specify maximum core utilization numbers for specific regions. If any region has routing congestion, utilization there can be reduced, thus freeing up more area for routing.

set_congestion_options -max_util 0.6 -coordinate {837 114 1103 918}

DIFFERENT TYPES OF CELLS IN VLSI


Well taps (tap cells): These are traditionally used to connect the n-well to Vdd and the substrate to GND. Tying the wells and substrate firmly to Vdd and GND reduces drift and helps prevent latch-up.
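A minimal sketch of tap-cell insertion, assuming an IC Compiler-style add_tap_cell_array command; the command options, cell name and spacing here are assumptions and will differ between tools and libraries.

# Insert a tap cell roughly every 30 um, staggered on alternate rows (names and values are placeholders)
add_tap_cell_array -master_cell_name TAPCELL_X1 -distance 30 -pattern stagger_every_other_row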




End cap cells: These library cells have no signal connectivity; they connect only to the power and ground rails. They are placed at the ends of the core rows to ensure that gaps do not occur between the well and implant layers, and to prevent DRC violations by satisfying the well tie-off requirements for the core rows.



Decap cells: These are capacitors added in the design between the power and ground rails, acting as local charge reservoirs, to counter functional failure due to dynamic IR drop. Dynamic IR drop happens at the active edge of the clock, when a high current is drawn from the power grid for a short duration. If the power source is far from a flop, there is a chance the flop can go into a metastable state. To overcome this, decaps are added: when the local current demand is high, the decaps discharge and provide a boost to the power grid.
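As a rough back-of-the-envelope relation (not from the original post), the local voltage droop that nearby decap can absorb is approximately

\Delta V \approx \frac{I \cdot \Delta t}{C_{decap}}

so a 10 mA transient lasting 100 ps supplied from 10 pF of nearby decap corresponds to roughly 100 mV of droop; more decap means less droop.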

[Figure: decap cell]

Tie cells: Tie-high and tie-low cells are used to connect the gate of a transistor to power or ground. At lower technology nodes, if the gate were connected directly to power/ground, the transistor might be spuriously turned on/off due to power or ground bounce, so these dedicated cells are used instead. They are part of the standard-cell library. Inputs that must be held at logic 1 are connected to tie-high cells, and inputs that must be held at logic 0 are connected to tie-low cells.
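A sketch of hooking constant inputs up to tie cells after placement, assuming an IC Compiler-style connect_tie_cells command; both the option names and the library cell names are assumptions to verify against your tool.

# Replace direct 1'b0/1'b1 connections with tie cells (cell and option names are illustrative)
connect_tie_cells -objects [get_cells -hierarchical *] -obj_type cell_inst -tie_high_lib_cell TIEHI_X1 -tie_low_lib_cell TIELO_X1 -max_fanout 8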



Filler cells: Filler cells are used to establish the continuity of the n-well and the implant layers along the standard-cell rows. Some small cells ("thin" cells) also lack a bulk (substrate) connection because of their small size. In those cases, abutment through inserted filler cells connects the substrates of those small cells to the power/ground nets, i.e. the thin cells use the bulk connection of neighbouring cells (this is one of the reasons a standalone LVS check fails on some cells).
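A minimal sketch of filler insertion at the end of the flow, assuming IC Compiler's insert_stdcell_filler command; the filler cell names and option names are illustrative assumptions.

# Fill the remaining gaps in the rows, largest fillers first, and tie their wells to the rails
insert_stdcell_filler -cell_without_metal "FILL8 FILL4 FILL2 FILL1" -connect_to_power {VDD} -connect_to_ground {VSS}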



Spare cells: These are just that: extra cells placed in your layout in anticipation of a future ECO. When I say future, I mean after you have taped out and got your silicon back. After silicon tests complete, it might become necessary to make some changes to the design. There might be a bug, or a very easy feature that will make the chip more valuable. This is where you try to use the existing "spare" cells in your design to incorporate the design change. For example, if you need a logic change that requires the addition of an AND gate, you can use an existing spare AND to make this change. This way, you ensure that the base-layer masks need no regeneration; only the metal connections have changed, and hence only the metal masks are regenerated for the next fabrication.
Kinds of spare cells: There are many variants of spare cells in a design. Designs are full of spare inverters, buffers, NANDs, NORs and specially designed configurable spare cells.


 

Inserting Spare Cells
Spare cells need to be added during the initial implementation. There are two ways to do this.
1) The designer adds separate modules containing the required cells. You start your PnR with the spare cells included, and must make sure that the tool hasn't optimized them away (see the sketch below). There can be more than one such spare module, and they will typically be named spare* or some similar convention. The inputs are tied to power or ground nets, as floating gates shouldn't be allowed in the layout. The outputs are left unconnected.
2) Spare cells can also be added to the design by instantiating the cells in the netlist itself.
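A minimal sketch of keeping the spare modules alive through synthesis and PnR using the standard set_dont_touch attribute (the instance name pattern is a placeholder):

# Prevent the tools from optimizing away the unused spare logic
set_dont_touch [get_cells -hierarchical spare*] true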

Tuesday 13 January 2015

Quality checks on Netlist

To start the physical design, these are the input files we get from the synthesis team:
1) Netlist,
2) SDC.

After receiving the database from the synthesis team, and prior to place and route, you can perform some sanity checks. To validate the quality of the constraints, read the netlist and the SDC file into PrimeTime, run check_timing, and generate reports that indicate the quality of the database, such as:
how many of the flip-flops are getting clocks,
how many flops are constrained,
how many ports are constrained, and whether there are violations; this will surely give some idea about the quality of the delivered database (a command sketch follows this list).
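A minimal PrimeTime-style sketch of these sanity checks; the file, design and report settings are placeholders, and library setup is omitted.

# Read the synthesized netlist and constraints (names are placeholders)
read_verilog top.v
link_design TOP
read_sdc top.sdc
# Constraint sanity: unclocked registers, unconstrained endpoints, missing input delays, etc.
check_timing -verbose
# Timing quality: worst setup paths
report_timing -delay_type max -max_paths 20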

In order to understand the quality of the database in terms of timing, generate timing reports and assess how good or bad the database is, how much you can optimize in the backend at the placement and routing stages, and which paths cannot meet timing even during placement.

After analysing the timing reports a bit, you can get an idea of which areas need to be packed closely during placement, so that you can create regions.

Generate report_area and report_reference -hierarchy reports in Design Compiler at the synthesis stage to better understand the design hierarchy.

Try performing formal verification between the RTL and the synthesized gates (RTL-versus-gates equivalence checking) to confirm that the post-synthesis design meets the formal verification requirement.

Signal integrity

Signal integrity is the ability of an electrical signal to carry information reliably and to resist the effects of high-frequency electromagnetic interference from nearby signals. Its main effects are crosstalk and electromigration (EM).

Crosstalk:
Switching of a signal on one net can interfere with a neighbouring net through cross-coupling capacitance. This effect is known as crosstalk. Crosstalk can lead to crosstalk-induced delay changes or static noise.

• Signal electromigration: Electromigration is the permanent physical movement of metal in thin wire connections, resulting from the displacement of metal ions by flowing electrons. Electromigration can lead to shorts and opens in wire connections, causing functional failure of the IC device.
High current densities cause wearing of the metal due to EM.

Crosstalk

Crosstalk, i.e. switching on one net coupling into a neighbouring net through cross-coupling capacitance (see the previous post), causes delay changes and static noise. Common ways to reduce it are listed below, followed by a short command sketch:

Double spacing   =>  more spacing => less coupling capacitance => less crosstalk
Multiple vias     =>  less resistance => less RC delay
Shielding          =>  constant cross-coupling capacitance => known (and bounded) crosstalk
Buffer insertion  =>  boosts the victim driver's strength
Net ordering      =>  change the net's neighbours within the same metal layer
Layer assignment => change the metal layer of one of the two nets if possible (one signal in metal3 and the other in metal4)
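A rough IC Compiler-style sketch of applying double spacing to a critical net using a non-default routing rule (NDR); the commands exist in Synopsys routers, but the exact option names here are assumptions, and the net name is a placeholder.

# Define an NDR with 2x spacing at default width, then attach it to a critical net
define_routing_rule NDR_2S -multiplier_spacing 2 -multiplier_width 1
set_net_routing_rule -rule NDR_2S [get_nets clk_core]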

What is Body Effect?

Body Effect:
In general, multiple MOS devices are made on a common substrate. As a result, the substrate voltage of all devices is normally equal. However, when the devices are connected in series, the source-to-substrate voltage increases as we proceed up the series chain (Vsb1 = 0, Vsb2 > 0), which results in Vth2 > Vth1.
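For reference (the standard textbook relation being described, not from the original post), the threshold voltage with body effect is

V_T = V_{T0} + \gamma \left( \sqrt{|2\phi_F| + V_{SB}} - \sqrt{|2\phi_F|} \right)

where \gamma is the body-effect coefficient and \phi_F the Fermi potential, so a larger source-to-substrate voltage V_{SB} gives a larger V_T.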

Sunday 11 January 2015

What is the Difference between synchronizer and lockup latch?


What is the Difference between synchronizer and lockup latch?

A synchronizer handles situations where data needs to move from a faster clock domain to a slower one or vice versa, whereas a lockup latch is not meant for crossings between truly asynchronous clock domains.
A lockup latch is used to add half a cycle of margin to avoid setup/hold issues between two flops.
Lock-up latches are used to allow scan chains to cross clock domains: they mitigate the skew between the two clock domains to ensure data is shifted reliably along the scan chain.

TEMPERATURE INVERSION

TEMPERATURE INVERSION:
Traditionally, MOSFET drive current (ID) reduced with increasing temperature. Hence, for most cases the worst-case delay corner used to be high temperature (100 °C or higher, depending on the target application). With transistor scaling, VDD and Vt have scaled, but not as aggressively as the rest of the parameters (such as gate oxide thickness, channel length, etc.). If you look at the MOSFET drive current equation,
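(the missing figure is presumably the standard long-channel square-law expression, reproduced here for reference)

I_D = \frac{1}{2} \, \mu C_{ox} \, \frac{W}{L} \, (V_{GS} - V_T)^2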


So, ID varies linearly with the mobility µ and quadratically with the overdrive voltage (VGS − VT). Both mobility and Vt reduce with increasing temperature, and vice versa. Interestingly, the current depends on the difference between Vgs and Vt, so there is contention between the mobility term and the (Vgs − Vt) term, and whichever has more impact on the final current determines whether the drive current increases or decreases with increasing temperature. For scaled nodes, i.e. at lower technologies, VDD has scaled to values like 0.9 V or lower, while Vt has not scaled as aggressively (0.3–0.4 V). Because of this, even though mobility improves at low temperature, the increased Vt and hence reduced (Vgs − Vt) has the greater impact on the current, resulting in less drive current at low temperature than at high temperature. Hence, at scaled technology nodes, low temperature becomes the SLOW corner, not high temperature, especially for HVT devices. This phenomenon is known as inverse temperature dependence (temperature inversion) in MOSFETs.


Friday 9 January 2015

SKIN EFFECT

Skin effect in semiconductor wires:
So far, we have considered the resistance of a semiconductor wire to be linear and constant. This is definitely the case for most semiconductor circuits. At very high frequencies, however, an additional phenomenon called the skin effect comes into play, such that the resistance becomes frequency-dependent.
                                       
High-frequency currents tend to flow primarily on the surface of a conductor, with the current density falling off exponentially with depth into the conductor. The skin depth \delta is defined as the depth where the current falls off to e^{-1} of its nominal value, and is given by

\delta = \sqrt{\frac{\rho}{\pi f \mu}}

with f the frequency of the signal, \rho the resistivity of the conductor, and \mu the permeability of the surrounding dielectric (typically equal to the permeability of free space, \mu = 4\pi \times 10^{-7} H/m). For aluminium at 1 GHz, the skin depth is about 2.6 µm.

The obvious question now is whether this is something we should be concerned about when designing state-of-the-art digital circuits.
The effect can be approximated by assuming that the current flows uniformly in an outer shell of the conductor of thickness \delta, as illustrated in the figure for a rectangular wire. Assuming that the overall conducting cross-section of the wire is now limited to approximately 2(W + H)\delta, we obtain the following expression for the resistance (per unit length) at high frequencies (f > f_s):

r(f) = \frac{\sqrt{\pi f \mu \rho}}{2(W + H)}
The increased resistance at higher frequencies may cause extra attenuation, and hence distortion, of the signal being transmitted over the wire. To determine the onset of the skin effect, we can find the frequency f_s at which the skin depth equals half the largest dimension (W or H) of the conductor. Below f_s the whole wire conducts current, and the resistance is equal to the (constant) low-frequency resistance of the wire. From the skin-depth expression above, we find

f_s = \frac{4\rho}{\pi \mu \, \max(W, H)^2}

Thursday 8 January 2015

LOCKUP LATCHES

Lockup latches:


Lock-up latches are necessary to avoid skew problems during the shift phase of scan testing. These problems occur if FFs driven by different clock lines are connected within a single scan chain.

e.g.: If you have a scan chain with multiple clock domains in it, the scan insertion tool should group registers from the same clock domain together. A lockup latch is then used when crossing between the two clock groups.

A lockup latch is used to prevent data corruption caused by new data overwriting the present data before the clock edge occurs, i.e. there is a potential risk of data arriving early and overwriting the present data that still has to be captured. This can be prevented by placing a lockup latch, a negative-level-sensitive D latch, between the two chains. It is especially used to avoid hold problems.


Tuesday 6 January 2015

NLDM VS CCS

NLDM: Non-Linear Delay Model.
CCS:  Composite Current Source model.

The basic difference between CCS and NLDM is:
NLDM uses a voltage source for driver modeling (a Thevenin equivalent), whereas CCS uses a current source for driver modeling (a Norton equivalent).


130nm library (NLDM - say 5% variation against SPICE simulation) characterisation.
130nm library (CCS - say 3% variation against SPICE simulation) characterisation.


NLDM Driver Model:
The NLDM driver model characterizes input-to-output delay and output transition times, with sensitivity to input transition time, output load and side-input states. These characteristics are obtained using a circuit simulator with an appropriate stimulus to cause an output transition.

The problem with NLDM modeling is that when the drive resistance RD becomes much smaller than Znet (the network load impedance), the ideal condition Vout = Vin arises, which is impossible in practical conditions.


NLDM only captures three output points, which is not sufficient to reflect the non-linearities of circuits at lower geometries (65nm and below) in the synthesized driver model during static timing analysis.

So with NLDM modeling, quantities like the cell delay and skew calculations will be inaccurate.

CCS Driver Model:
The CCS driver model is characterized by capturing the current waveform flowing into the load capacitor of the cell. The CCS driver model also has sensitivity to input transition time, output load and side-input states. The CCS driver model is essentially a current source with infinite driver resistance, hence it provides better accuracy in cases where the net impedance is very high. Note that the CCS timing model does not require synthesis of a driver model; the captured current waveform is the driver model itself.

CCS Receiver Model:

The CCS receiver model is characterized much like the NLDM receiver model, with additional granularity to reflect sensitivities such as Miller capacitance, state of side inputs, input transition times and output load. To accurately reflect the effect of Miller capacitance on input capacitance and net delay, the input capacitance is divided into two parts, C1 and C2. For STA delay calculation, C1 is used in the net delay calculation before the receiver waveform reaches the delay threshold point, and C2 is used after the receiver waveform crosses the delay threshold point.

That is the reason why we prefer CCS over NLDM at lower geometries.

Monday 5 January 2015

DOUBLE PATTERNING

What is Double patterning?
Double patterning is a technique used in the lithographic process that defines the features of integrated circuits at advanced process nodes. It enables designers to develop chips for manufacture on sub-30nm process nodes using current optical lithography systems.

                 The downsides of using double patterning include increased mask (reticle) and lithography costs, and the imposition of further restrictions on the ways in which circuits can be laid out on chip. This affects the complexity of the design process and the performance, variability and density of the resultant devices.

What does double patterning do and why do we need it?
Double patterning counters the effects of diffraction in optical lithography, which arise because the minimum dimensions of advanced process nodes are a fraction of the 193nm wavelength of the illuminating light source. These diffraction effects make it difficult to produce accurately defined deep sub-micron patterns using existing light sources and conventional masks: sharp corners and edges become blurs, and some small features on the mask won't appear on the wafer at all.



Here the original mask is split into two different masks, mask A and mask B, in order to avoid diffraction effects that cause shorts and opens. In other words, the mask (lithography) cost increases as we move to double patterning, and it also introduces some extra DRC rules in the design.

A number of reticle enhancement techniques have been introduced to counteract the diffraction problem as it has become more acute with each new process node.

Phase-shift masks were introduced at the 180nm process node. They alter the phase of the light passing through some areas of the mask, changing the way it is diffracted and so reducing the defocusing effect of mask dimensions that are less than the wavelength of the illuminating light. The downside of using phase-shift techniques is that the masks are more difficult and expensive to make.

Optical-proximity correction (OPC) techniques work out how to distort the patterns on a mask to counter diffraction effects, for example by adding small ‘ears’ to the corners of a square feature on the mask so that they remain sharply defined on the wafer. The technique introduces layout restrictions, has a computational cost in design, and means that it takes longer and costs more to make the corrected masks.

Sunday 4 January 2015

Benefits and risks of FINFET & FD-SOI compared to Bulk transistor.



Benefits and risks of finFETs compared to Bulk transistor.

Strengths:
- Significant reduction in power consumption (~50% over 32nm)
- Faster switching speed
- Effective speed/power trade-off possible with multi-Vt
- Availability of strain engineering

Weaknesses:
- Very restrictive design options, especially for analog – transistor drive strength is quantized to multiples of a single fin width
- Fin width variability and edge quality lead to variability in threshold voltage VT
- Extra manufacturing complexity and expense (~+3% according to Intel)

Opportunities:
- Low power makes 20nm technology deployable for mobile applications
- Increase CPU speeds beyond 4GHz

Threats:
- The potentially superior electrical performance and simpler manufacturing of fully depleted SOI


Benefits and risks of FD-SOI compared to Bulk transistor.

Strengths:
- Significant reduction in power consumption
- Faster switching speed
- Easier, standard manufacturing process
- Availability of back-biasing to control VT
- No doping variability
- Layout library compatible with existing bulk technologies

Weaknesses:
- High cost of initial wafers (~+10% over regular wafers, according to Intel)
- Limited number of wafer suppliers
- Variability in VT due to variations in the thickness of the silicon thin-film
- Multi-VT more complex to implement
- Lack of strain engineering
- Thin channel limits drive strength


Opportunities:
- Simpler and more flexible alternative to finFETs if the wafer cost issue can be overcome
- Better controllability for analog applications

Threats:
- High wafer cost threatens economic viability for wider market adoption