Aug 3, 2024

What is Partitioning in Physical Design ?


In this article, we have thoroughly explored several critical aspects of partitioning in VLSI physical design. We began with a brief overview of partitioning, emphasizing its importance in the design flow. Next, we delved deeper into various levels and types of partitioning, highlighting why it is a crucial process. We also covered the fundamental rules of partitioning, drawing on graph theory to explain its principles. Additionally, we examined both pin- and net-oriented netlists and concluded with detailed discussions on two different partitioning algorithms.

Design Flow and Partitioning:

The VLSI design cycle is broadly categorized into the Front End (FE) and the Back End (BE). The front end starts with the system specification and defines the logical behavior according to the functional specifications. At the end of the FE we get a technology-mapped gate-level netlist. The BE starts from there, and its main focus is to translate the circuit obtained in the FE into silicon, with proper placement of blocks, routing of the essential power lines, and so on. After all these steps the process leads to tape-out.



Partitioning is the initial step in the PD process. Partitioning means dividing a chip into smaller blocks: different functional blocks are separated so that placement and routing are simplified. The designer breaks the larger design into smaller functional modules/blocks and then implements these smaller modules during the RTL design phase. These smaller functional blocks are structurally instantiated or linked in the main module, which is called the TOP LEVEL module. This type of partitioning is called Logical Partitioning.

What is Partitioning ?

To simplify complex integrated circuit designs, they are divided into smaller parts called modules. These modules can be as simple as a few electrical components or as complex as fully functional integrated circuits (ICs). A tool called a partitioner splits the circuit into smaller subcircuits/partitions/blocks. It aims to reduce the number of connections between these partitions while adhering to design rules like maximum size and delay limits.



If each block is designed without considering the others, it can lead to problems. More connections between partitions can increase circuit delay and decrease reliability. Too many connections can create dependencies that slow down the design process. The main objective is to minimize connections between sub-circuits to improve performance and meet design constraints. Constraints may include limits on the logic size in a partition or the number of external connections (e.g., limited by the number of I/O pins on a chip). By following these points, designers aim to create efficient, reliable, and easily manageable integrated circuits.

Levels of Partitioning :



Circuit Partitioning (CP) is an important task in VLSI design. Partitioning algorithms are used in several applications, such as circuit layout, circuit packaging, and circuit simulation.


The levels of partitioning are as follows:

1. System Level Partitioning : A system is partitioned into a group of PCBs. Each subsystem can be designed as a single PCB.

2. Board Level Partitioning : A PCB is partitioned into subcircuits. Each subcircuit can be fabricated as a VLSI chip.

3. Chip Level Partitioning : The circuit assigned to the chip is divided into manageable subcircuits.


Why is Partitioning Important?

1. Physical packaging : Partitioning decomposes the system in order to satisfy the physical packaging constraints. The partitioning conforms to a physical hierarchy ranging from cabinets and cases to boards, chips, and modular blocks.

2. Divide and conquer strategy : Partitioning helps manage complex designs by breaking them into smaller parts. This approach allows team members to work on different sections, creates a logical order for design, converts the netlist into a physical layout for planning, assigns cells to specific areas for placement and RLC extraction, and coordinates between logic and layout for simulation.

3. System emulation & Rapid Prototyping : One way to emulate and prototype a system is by using FPGAs to build the hardware. Since FPGAs usually have less capacity than modern VLSI designs, these prototype systems use a hierarchical setup of multiple FPGAs. A partitioning tool is necessary to map the netlist onto the hardware.

4. Hardware & Software Codesign : For hardware and software codesign, partitioning is used to decompose the designs into hardware and software.

5. Management of Design Reuse : For huge designs, especially system-on-chip designs, design reuse has to be managed. Partitioning can identify clusters in the netlist and construct functional modules out of those clusters.


Rules of Partitioning :




1. Interconnections between Partitions: 
Reducing interconnections decreases delay and interaction between partitions, simplifying independent design and fabrication.

2. Delay Due to Partitioning: Partitioning a circuit may result in the critical path crossing partition boundaries multiple times, and each crossing adds delay.

3. Number of Terminals: The number of nets needed to connect a sub-circuit to the other sub-circuits must not exceed the sub-circuit's terminal count.

4. Number of Partitions : A large number of partitions can simplify the design of individual sections, but it may also increase fabrication costs and the number of interconnections between partitions.

5. Area of Each Partition: The area of each partition must stay within the limits imposed by packaging and floorplan constraints.

After Circuit Partitioning :

- Area occupied by each partition is estimated

- Possible shapes of blocks can be ascertained

- Number of terminals required by each block is known

- Netlist specifying connections between blocks is available

Graph Theory & Partitioning :

Graphs are used in physical design algorithms to describe and represent layout topologies.

A graph G(V,E) is made up of two sets :

(1) Elements : Set of nodes or vertices denoted as V

(2) Edges : relations between the elements, denoted as E

A hypergraph consists of nodes and hyperedges. In a hypergraph, an edge can be a set of any number of vertices. Hyperedges are commonly used to represent multi-pin nets or multi-point connections within a circuit. A hypergraph can be directed or undirected.

Order of Hypergraph = size of the vertex set

Size of Hypergraph = size of the edge set
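
As a small illustration of these definitions, a circuit hypergraph can be stored as a mapping from each hyperedge (net) to the set of vertices (cells) it connects. The Python sketch below uses invented net and cell names.

# Hypergraph of a tiny circuit: each net (hyperedge) is a set of cells (vertices).
nets = {
    "n1": {"u1", "u2", "u3"},   # a 3-pin net, i.e. a multi-pin hyperedge
    "n2": {"u2", "u4"},
    "n3": {"u3", "u4", "u5"},
}
vertices = set().union(*nets.values())

order = len(vertices)   # order of the hypergraph = size of the vertex set (5 here)
size = len(nets)        # size of the hypergraph  = size of the edge set   (3 here)
print(order, size)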







Pin & Net Oriented Netlist:

A netlist is a list that shows all the connections (nets) and the components they link together in a design. It can be organized in two ways:

(i) Pin-oriented: each design component has a list of associated nets

(ii) Net-oriented: each net has a list of associated design components


Netlists are created during logic synthesis and are a key input to physical design. A connectivity graph is a representation of the netlist as a graph. Cells, blocks and pads correspond to nodes, while their connections correspond to edges.
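
To make the two orientations concrete, here is a small Python sketch (cell and net names are invented): the same connectivity is simply indexed in two different ways, and a connectivity graph can be derived from either form.

from itertools import combinations

# Net-oriented netlist: each net lists the design components it connects.
net_oriented = {
    "clk": {"FF1", "FF2", "BUF1"},
    "d1":  {"AND1", "FF1"},
}

# Pin-oriented netlist: each component lists the nets attached to its pins.
pin_oriented = {}
for net, cells in net_oriented.items():
    for cell in cells:
        pin_oriented.setdefault(cell, set()).add(net)

# Connectivity graph: components are nodes; two components share an edge
# whenever they are connected by a common net.
edges = set()
for cells in net_oriented.values():
    edges.update(combinations(sorted(cells), 2))

print(pin_oriented)
print(edges)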



Partitioning Algorithm:





A cell is any logical or functional unit built from components. A partition or block is a grouped collection of components and cells. The k-way partitioning problem seeks to divide a circuit into k partitions. The most common partitioning objective is to minimize the number or total weight of cut edges while balancing the sizes of the partitions. Often, partition area is limited due to packing considerations and other boundary conditions implied by the system hierarchy, chip size, or floorplan restrictions. Circuit partitioning is very hard to solve: as the problem gets bigger, the time needed to find the best solution grows very quickly, and there is no known fast and exact method for balance-constrained partitioning.
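
To make the objective concrete, here is a small Python sketch (invented names, unit cell areas) that computes the cut size of a 2-way partition of a hypergraph, i.e. the number of nets spanning both blocks, and checks a simple balance constraint.

def cut_size(nets, block_of):
    # A net is cut if its cells do not all lie in the same block.
    return sum(1 for cells in nets.values()
               if len({block_of[c] for c in cells}) > 1)

def is_balanced(block_of, max_fraction=0.6):
    # With unit cell areas, neither block may exceed max_fraction of the total.
    sizes = [list(block_of.values()).count(b) for b in (0, 1)]
    return max(sizes) <= max_fraction * len(block_of)

nets = {"n1": {"a", "b"}, "n2": {"b", "c", "d"}, "n3": {"d", "e"}}
block_of = {"a": 0, "b": 0, "c": 1, "d": 1, "e": 1}
print(cut_size(nets, block_of), is_balanced(block_of))   # 1 True: only n2 is cut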

Partitioning methods can be classified in two ways:

1. Constructive or Iterative : A constructive algorithm creates a partitioning directly from the graph that represents the circuit or system, while iterative methods improve the quality of an already existing partitioning solution.

2. Deterministic or Probabilistic : Deterministic programs always produce the same solution each time they run, whereas probabilistic methods give different solutions on different runs because they use random numbers.




Methods like the Kernighan-Lin (KL) algorithm and the Fiduccia-Mattheyses (FM) algorithm can find good solutions and run relatively quickly, while optimization using simulated annealing can address particularly challenging partitioning problems. The KL algorithm's runtime is sensitive to the number of nodes and edges. It performs partitioning through iterative improvement passes and is based on exchanging (swapping) pairs of nodes, one node from each partition.
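
The following is a compact, illustrative single improvement pass of the KL heuristic on an undirected, unweighted graph, written in Python. It is a sketch rather than a production implementation: the function and variable names are invented, and the D values are simply recomputed instead of being updated incrementally as in the original algorithm.

import itertools

def kl_pass(adj, part_a, part_b):
    # One KL improvement pass; returns the (possibly improved) two blocks.
    a, b = set(part_a), set(part_b)
    locked, gains, swaps = set(), [], []

    def d_value(v, own, other):
        # D(v) = external cost - internal cost of vertex v
        return sum(1 for u in adj[v] if u in other) - sum(1 for u in adj[v] if u in own)

    for _ in range(min(len(a), len(b))):
        # Pick the unlocked pair (x in A, y in B) maximizing g = D(x) + D(y) - 2*c(x, y).
        g, x, y = max(
            ((d_value(x, a, b) + d_value(y, b, a) - 2 * (y in adj[x]), x, y)
             for x, y in itertools.product(a - locked, b - locked)),
            key=lambda t: t[0])
        gains.append(g)
        swaps.append((x, y))
        # Tentatively swap the pair and lock it; recomputing D on the partially
        # swapped sets is equivalent to KL's incremental D-value updates.
        a.remove(x); b.add(x)
        b.remove(y); a.add(y)
        locked.update((x, y))

    # Commit only the prefix of swaps with the largest cumulative gain.
    prefix = list(itertools.accumulate(gains))
    k = max(range(len(prefix)), key=lambda i: prefix[i])
    if prefix[k] <= 0:
        return set(part_a), set(part_b)          # no improving prefix: keep the input
    a, b = set(part_a), set(part_b)
    for x, y in swaps[:k + 1]:
        a.remove(x); b.add(x)
        b.remove(y); a.add(y)
    return a, b

# Two triangles joined by one edge; the pass swaps nodes 3 and 4, leaving a cut of one edge.
adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4}, 4: {3, 5, 6}, 5: {4, 6}, 6: {4, 5}}
print(kl_pass(adj, {1, 2, 4}, {3, 5, 6}))

In a full KL implementation this pass is repeated until a pass yields no positive gain.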

The FM algorithm's runtime is sensitive to the number of nodes and nets (hyperedges), and it is typically applied to large circuit netlists. Unlike KL, it moves one cell at a time, so it naturally handles partitions of unequal size and the presence of initially fixed cells. The FM algorithm offers the best trade-off between solution quality and runtime. 
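
To make the hyperedge-based bookkeeping concrete, the Python sketch below (invented names, two blocks labelled 0 and 1) computes the FM move gain of every cell: a net contributes +1 to a cell that is its only cell in that cell's block (moving it uncuts the net) and -1 to every cell of a net lying entirely inside one block (moving any of them would cut it).

def fm_gains(nets, block_of):
    # nets: net name -> set of cells; block_of: cell -> 0 or 1.
    gains = {c: 0 for c in block_of}
    for cells in nets.values():
        for blk in (0, 1):
            in_blk = [c for c in cells if block_of[c] == blk]
            if len(in_blk) == 1:
                gains[in_blk[0]] += 1      # moving the lone cell uncuts this net
            if len(in_blk) == len(cells):
                for c in cells:            # net entirely inside blk: any move cuts it
                    gains[c] -= 1
    return gains

nets = {"n1": {"a", "b"}, "n2": {"b", "c", "d"}, "n3": {"c", "d"}}
block_of = {"a": 0, "b": 0, "c": 1, "d": 1}
print(fm_gains(nets, block_of))   # {'a': -1, 'b': 0, 'c': -1, 'd': -1}

A full FM pass additionally keeps these gains in a bucket structure, repeatedly moves the highest-gain cell that keeps the partition balanced, locks it, and updates only the affected gains; that incremental update is what gives FM its near-linear runtime.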

Partition area is limited by packing considerations and other boundary conditions such as chip size or floorplan restrictions. Commercial tools use command-line options or a GUI to create and manage partitions. A typical partitioning tool restructures the netlist tree to create hierarchy nodes at the top level that can be physically divided, keeps those nodes evenly sized, and reduces the number of top-level connections between the hierarchy blocks. It finds a good way to divide a design into blocks that can be worked on separately, keeps the partitions balanced by size or number of instances, reduces the number of top-level connections between block I/O ports, and partitions the netlist without changing its functionality.


Watch the video lecture here:



Courtesy: Image by www.pngegg.com


Jul 5, 2024

What does an analog IC design engineer do?



An analog IC (Integrated Circuit) design engineer is responsible for designing, developing, and testing analog circuits that are used in electronic devices. 


Analog circuits deal with signals that vary continuously, as opposed to digital circuits that deal with signals that have only two states (on or off).

The specific responsibilities of an analog IC design engineer can vary depending on the company and the specific project, but generally include the following:

  • Designing and developing analog circuits using tools such as SPICE simulation software, schematic capture, and layout tools.
  • Conducting research to identify and evaluate new technologies and materials that can be used in the design of analog circuits.
  • Collaborating with other engineers and professionals, such as digital IC designers, PCB designers, and test engineers, to ensure that the analog circuit meets the requirements and specifications of the overall system. Best practices mentioned here : https://youtu.be/hvee5hyfjoo
  • Conducting thorough testing and verification of the analog circuit to ensure that it meets the required performance, power, and area (PPA) metrics. PPA explained here : https://youtu.be/Px5EIWeqy4I
  • Debugging and troubleshooting issues that arise during the design and testing process. Various industry-standard practices are discussed here : https://youtu.be/KPn2YaevUA4

Overall, the role of an analog IC design engineer is critical in ensuring that electronic devices function properly and meet the required specifications for their intended use.

How To Get Started In VLSI as a beginner ?



Getting started in VLSI (Very Large Scale Integration) can be an exciting and challenging journey. Here are some steps you can take to get started:

1. Acquire basic knowledge: Start by learning the basic concepts of digital electronics and computer architecture. It will be helpful to have a strong foundation in electronics, digital systems, and integrated circuits. You can take courses in electrical engineering or computer science or read books on these topics.

Get the VLSI fundamentals : Click Here 

2. Learn an HDL & Linux Basics : Learn one of the hardware description languages (HDLs), such as Verilog or VHDL, that are used to describe digital systems. HDLs are used to design and simulate digital circuits and are essential in VLSI design.

You can start with Verilog : Click Here 

Learn Linux basics : Click Here 

3. Learn programming languages: Familiarize yourself with programming languages such as C, TCL, Perl, Bash, and Python. These languages are commonly used in VLSI design and simulation.

Some of the self-learning (free) tutorials for you:

TCL : Click Here

PERL : Click Here

BASH : Click Here

Python : Click Here

4. Practice with design tools: Familiarize yourself with the design tools used in VLSI, such as Cadence, Synopsys, or Mentor Graphics tools. You can use these tools to create and simulate digital circuits. These are commercial tools, available in companies or as learning versions at registered VLSI training institutes. If you need to learn such a tool, consider joining a certification course.

There are many free or open-source tools available : 

1. Vivado (Installation: Click Here ), 

2. Electric VLSI Design System, 

3. Icarus-Verilog (Installation : Click Here ), 

4. Magic, 

5. NGSPICE (Installation : Click Here ), 

6. OpenTimer (Installation : Click Here ).


5. Join a VLSI design course: Consider enrolling in a VLSI design course, either online or at a university. This will give you hands-on experience in designing, simulating, and testing digital circuits.

6. Join a community: Join a VLSI design community or forum, where you can interact with professionals in the field and get tips and advice on designing digital circuits.

Join this community (Telegram Group) : https://t.me/vlsichaps

7. Read research papers: Read research papers on VLSI design to keep up-to-date with the latest developments and techniques.

Watch this for further guidance :  https://youtu.be/SIcpse82gsw

8. Practice, practice, practice: Finally, practice designing digital circuits on your own, starting with simple circuits and working your way up to more complex systems. The more you practice, the better you will become.

Overall, getting started in VLSI design requires a strong foundation in digital electronics and computer architecture, knowledge of HDLs, familiarity with design tools, practical experience through courses and design projects, and a commitment to continuous learning and practice.


Courtesy : Image by www.pexels.com

Jul 4, 2024

What is Clock Tree in VLSI?

In this article, we have provided a comprehensive overview of several critical concepts related to clock distribution in VLSI circuits. We began with a concise explanation of the clock tree, followed by an in-depth discussion of clock tree synthesis. The article covered various aspects of clock networks, including their fundamental structures and the challenges associated with them. Key parameters such as clock skew, jitter, latency, and slew were also examined, along with their impact on circuit performance. Additionally, we explored different clock distribution methods, such as conventional clock tree distribution, clock mesh, H-tree, and fishbone, culminating in an analysis of multi-source clock tree systems.


Clock Tree : 



A clock tree is the clock distribution network within a system or circuit. It includes the clocking circuitry and devices from the clock source to its destinations. The complexity of the clock tree and the number of clocking components used depend on the circuit and its functionality. Systems can have multiple ICs with different clock requirements, and a "clock tree" then refers to the various clocks feeding those ICs. Usually a single reference clock is cascaded and synthesized into multiple different output clocks.

Clock Tree Synthesis & Design Flow:



Clock Tree Synthesis (CTS) is the process of distributing the clock to each sequential element of the circuit. CTS is part of physical design in the back end of the flow. The RTL design is verified and synthesized into a technology-mapped gate-level netlist; floorplanning, power planning, and global and detailed placement are then done, and CTS comes after these steps (detailed routing follows CTS). Until this point an ideal clock is assumed. Once all sequential elements are placed, the real clock network is built during CTS. Without CTS the clock signal would go directly from the clock root to each leaf cell or flip-flop; after CTS a number of buffers are inserted so that delays can be balanced and power can be managed properly.

Clock Network:



A clock network consists of:

(1) Clock generators, (2) Clock elements, (3) Clock wires

A clock can be generated using a ring oscillator or another astable circuit, but these are susceptible to PVT variations. Off-chip clock generation is done using a crystal and an oscillator circuit, while on-chip local clock generation is done with a PLL or DLL. Routing a single clock around a chip is a difficult problem; routing multiple clocks with little skew is even harder. 

Depending on the application there are many timing components. The most common timing components are:

1. Crystals : A crystal uses a piece of quartz that is cut at a particular angle and mounted in a protective metal casing. It provides a frequency output when an electrical signal is applied.

2. Crystal Oscillators (Xos) : Crystal Oscillators (XOs) integrate the crystal with the oscillator circuit, enabling XOs to provide higher frequency outputs.

3. Clock Generators : Clock generators are integrated circuits (ICs) that generate multiple output frequencies from a single input reference frequency.

4. Voltage-Controlled Crystal Oscillators (VCXOs) : A self-contained oscillator that varies its output frequency in response to a control voltage from a voltage reference.

5. Clock Buffers : ICs for distributing multiple copies of a clock to multiple ICs with the same frequency requirements.

6. Jitter Attenuators/Jitter Cleaners : Jitter attenuators are clock generators with specialized circuitry for reducing jitter.

An off-chip clock has limitations such as:

(1) Frequency is limited – multiplier is required

(2) Uncontrolled clock phase – synchronization issue



A PLL can multiply the clock frequency; if clock multiplication is not required, a DLL can be used. A PLL uses an oscillator that creates a new clock, whereas a DLL uses a variable delay line that delays the input clock. A phase detector (PD) produces a signal proportional to the phase difference between the input and output clocks. A loop filter (LF) converts the phase error into a control signal (voltage). A voltage-controlled oscillator (VCO) creates a new clock signal based on this error signal. 



A DLL works on the same principle, but instead of changing the frequency it simply delays the input clock until the phases are aligned.
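
As a simple worked illustration (the numbers are illustrative, assuming an integer-N PLL with a reference divider M and a feedback divider N): f_out = (N / M) × f_ref, so a 25 MHz crystal reference with M = 1 and N = 40 produces a 1 GHz on-chip clock. A DLL, by contrast, leaves the frequency unchanged and only aligns the phase.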

Challenges of Clock Network:



1. Process Variations : Variability in device structure and other parameters leads to wafer-to-wafer variations. The direct impact of process variations is on the yield and the performance of the design. Variations in the standard cells result in mismatch across the various clock trees in the clock network.

2. Power Supply Variations : Supply voltage scaling causes variations in the switching activity across the die. Uneven power dissipation across the die is the result of fluctuations in the current demand over short intervals of time. Ripple or noise voltage is induced in the supply lines due to the presence of parasitic inductance. Current flows through the chip via interconnect, which has a finite resistance, and this leads to IR drop.

3. Temperature Variations : Temperature varies continuously across the chip while it is operating, due to the power dissipation on the chip. With the increase in temperature the drain current decreases. Both device and interconnect parameters depend on temperature and hence are affected by temperature variations.



Signal Integrity Issues : Signal integrity issues include crosstalk, electromigration (EM), and IR drop. Relative switching of wires, on account of capacitive coupling, results in crosstalk noise. With increasing clock frequencies, capacitive coupling dominates and results in significant delay in data paths. If the wire resistance is very high or the current through the transistor is higher than estimated, there is an unwanted voltage drop. This unpredicted drop causes timing degradation in the signal and clock nets, produces unwanted clock skew in the design, and hampers the signal integrity of the design.




Timing Violations : A design consists of millions of gates and multiple clocks. Timing violations due to setup and hold, clock skew, and signal integrity issues hinder timing closure for a design.

Design Complexity : Modern designs have millions of cells, and a single chip is broken down into a hierarchy of modules. Timing budgets are created for the whole design, which permits engineers to use a hierarchical design methodology and work on their modules for timing closure. Multistage clock gating structures are helpful for CTS. Multi-mode, multi-corner scenarios increase as technology moves toward a few nanometers. Therefore timing closure, as well as the clock network, has become a challenge for SoC designers.


Clock Skew:

Clock skew is defined as the difference between the insertion delays of two flops belonging to the same clock domain. Every component adds some delay to a signal transition, and the clock signal takes a finite time to travel from one point to another, so there is a time difference between the arrival of the clock at two different flip-flops. This difference between the clock arrival times of two flip-flops is the skew. 

There are two types of skew: (1) local and (2) global, and the skew value can be positive or negative. Local skew is the difference between the insertion delays of two communicating flops of the same clock domain. Global skew is the difference between the delays to the earliest-reached flip-flop and the latest-reached flip-flop for the same clock domain. If the capture flop receives the clock signal later than the launch flop, the result is positive skew. If the launch flop receives the clock signal later than the capture flop, the result is negative skew. 
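
A short worked example (illustrative numbers): if a clock edge reaches the launch flop at 1.0 ns and the capture flop at 1.2 ns, the skew is 1.2 - 1.0 = +0.2 ns (positive skew); if the capture flop instead sees the edge at 0.8 ns, the skew is 0.8 - 1.0 = -0.2 ns (negative skew).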


Jitter, Latency & Slew:

Clock jitter is defined as the deviation of a clock edge from its ideal position in time; it is the uncertainty in the clock edge. The cause may be noise, a fluctuating power source, or interference from nearby circuits. Jitter can occur in both directions, positive and negative, and can be modeled by adding uncertainty regions around the rising and falling edges of the clock waveform. Clock jitter can be characterized as cycle-to-cycle, period, or long-term jitter.

Clock Latency/Clock Insertion Delay:

Clock latency is defined as the amount of time taken by the clock signal to travel from its source to the sinks. 

Clock latency = Source latency + Network latency

Clock Source Latency / Source Insertion Delay is defined as the time taken by the clock signal to reach the clock definition point from the clock source.

Clock Network Latency / Network Insertion Delay is defined as the time taken by the clock signal to traverse from the clock definition point to the sinks of the clock.
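
For example (illustrative numbers): if the clock takes 0.5 ns from its source to the clock definition point (source latency) and 1.3 ns from that point to a flip-flop clock pin (network latency), the total clock latency at that sink is 0.5 + 1.3 = 1.8 ns.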


Slew is defined as the time taken to change the state of the signal from 0 → 1 or 1 → 0.

Clock Parameter & Their Impact :

Every practical clock has parameters like skew, jitter, latency, and slew, which are also its non-idealities.

Reasons behind Skew & Jitter:

(1) Clock generation : manufacturing device variations in clock drivers

(2) Interconnect variations : number of buffers, device variation, wire length and variation, coupling, load

(3) Environmental variations : power supply and temperature

Clock skew and jitter can limit the performance of a digital system. Designing a clock network that minimizes both is important.

How to control clock non-idealities :

- Balanced paths (H-tree network, matched RC trees) can eliminate skew.

- Clock grids can be used in the final stage of the clock distribution network to minimize absolute delay (not relative delay).

- If the paths are perfectly balanced, clock skew is zero.

- Distributed buffering reduces absolute delay and makes clock gating easier, but is sensitive to variations in the buffer delay.

- Shielding clock wires to minimize/eliminate coupling with neighboring signal nets.

- Keep a close eye on temperature and supply rail variations and their effects on skew and jitter.

- Power supply noise limits the performance of clock networks.


Conventional Clock Tree Distribution:



Single-point CTS is the choice for designs with lower frequencies and fewer sinks. As the name suggests, a single clock source distributes the clock to every corner of the design. In single-point CTS the point of divergence lies at the clock source, so the clock paths share a very large uncommon segment and are more susceptible to OCV variation. Clock gates are strategically placed near the source, saving a large amount of dynamic power.

Advantages:

(1) Simple to implement

(2) Better clock gating, reducing power dissipation

Disadvantages:

(1) Higher Insertion delay

(2) More uncommon clock path, more prone to OCV

(3) Tough to achieve low skew, due to the asymmetric distribution of sinks

(4) Conventional CTS is not a good choice for high-frequency designs with a large number of sinks spread all over the core region

Clock Mesh:


A clock mesh divides the clock domain into many grid areas. Clock signals connect to the clock mesh nodes through a pre-driver buffer chain, and the clock leaf cells get the clock signal from a nearby grid segment. A global mesh uses two layers of metal wiring, crossing each other, to spread the clock signal placed by the top-level chain across the whole clock domain while keeping clock skew and clock delay well controlled. A network of pre-mesh drivers drives the clock signal from the clock port to the inputs of the mesh drivers. The outputs of all the mesh drivers are shorted using a metal mesh, which carries the clock signal across the block using horizontal and vertical metal stripes. In a mesh structure power dissipation is high because clock gating cells are inserted after the mesh net, so clock gating is done only at the local level.

Advantages:

(1) Lower Skew

(2) Highly tolerant to On-Chip Variation

(3) Possible to achieve lower insertion delay

Disadvantage:

(1) More power dissipation (Dynamic)

(2) More routing resources required for creating mesh

(3) Difficult to implement

H tree & Fish Bone:

The H-tree structure is highly symmetrical, with the pre-drive buffers evenly distributed along the trunk. It can manage clock skew for clock domains with a large number of flip-flops, but the H-tree structure consumes more routing resources and more power.





Multi Source Clock Tree System:



A multi-source clock tree system (MSCTS) is a hybrid that combines the best aspects of a conventional clock tree and a pure clock mesh. A clock mesh delivers the best possible clock frequency, skew, and OCV results, whereas a conventional clock tree delivers the lowest power consumption and the easiest flow. Multisource CTS offers a compromise between the two methods while favoring the OCV-tolerant nature of the pure clock mesh. 

An MSCTS design comprises three different structures:

(i) Pre-mesh Clock Tree

(ii) Multisource Mesh Fabric

(iii) Moderately sized clock trees.



Pre-Mesh Clock Tree:

Each buffer in the pre-mesh tree drives four other buffers. The pre-mesh topology is similar to H-tree placement and routing. The H-tree structure is a uniform, scalable, predictable way to distribute the root clock over a large area, and it is also tolerant to corner-to-corner variation because of its balanced structure.

Multi Source Mesh Fabric:

The multi-source mesh fabric resembles a power/ground or clock mesh fabric, although it is less dense. The coarse fabric smooths out any remaining clock arrival time differences from the multiple H-tree buffers that directly drive the fabric. The measured skew at the mesh plane is effectively zero. 

Moderately Sized Clock Trees :

Multiple moderately sized clock trees are attached to the coarse mesh; this structure of multiple sources gives the technology its name. 


Benefits of Multisource CTS:

(1) Better performance and lower skew than a conventional clock tree

(2) Better OCV Tolerance than conventional CT

(3) Better multi-corner performance than conventional CT

(4) Less power consumption than pure clock mesh

(5) Greater tolerance for irregular designs with high macro density than pure clock mesh.

(6) Faster and easier flow than pure clock mesh.


Watch the video lecture here:

Courtesy : Image by www.pngegg.com, pexels (Nick Wood) 

VHDL/System Verilog in UPF. Episode : 4

 


In this article, we cover several important aspects related to UPF and HDL simulation. We start by discussing the integration of UPF with HDL simulation, highlighting the synergy between these two critical components in power management. Next, we explore the different categories and syntax of UPF functions, providing a detailed explanation of how they operate and their significance. The article also addresses UPF supply query functions, shedding light on their role in managing power supply information. Additionally, we delve into the concepts of supply nets and data types within HDL, explaining how they interact and contribute to efficient power management. We introduce the Switching Activity Interchange Format (S.A.I.F), which is essential for analyzing power consumption. Lastly, we discuss the System-Verilog and VHDL packages for UPF, illustrating how these packages facilitate the integration of UPF into various design environments and enhance simulation capabilities.


UPF & HDL Simulation:

The voltage value and full/partial state of a supply net are valid only when its on/off state is asserted. Every time the state or voltage value of the power or ground nets changes, the power of the corresponding design elements is evaluated.

If both the power and ground supply nets are on, the design element instances connected to the given supply pair are turned on. If either the power or the ground supply net is off, the power to the design element instances is turned off. In the turned-off state, every sequential element and every signal driven from within the powered-down element is said to be corrupted. Events that were scheduled before the power was turned off and whose target is inside a powered-down instance shall have no effect.


UPF Functions:


UPF Functions Syntax :





UPF Supply Query Functions :


This is the verification side of UPF. Here we look at the functions that can query the design:
get_supply_value : returns the supply value
get_supply_voltage : returns the supply voltage
get_supply_on_state : checks the on state
get_supply_state : returns the supply state
These are the UPF query functions required in verification.

Supply Net Data Types in HDL:


SystemVerilog:
typedef struct packed {
  int voltage;        // voltage in μV
  bit [31:0] state;   // net state
} supply_net_type;

VHDL:
type supply_net_type is record
  voltage : numeric_bit.signed(31 downto 0);  -- voltage in μV
  state   : bit_vector(31 downto 0);          -- net state
end record;
subtype net_state is bit_vector(1 downto 0);  -- the defined state bits


Supply Net in UPF:

The create_supply_net command provides an option that specifies the type of resolution to be used by the supply net.
The following resolution methods shall be provided:
Unresolved : The supply net may only be connected to a single output (this is the default).
One Hot : Multiple outputs may be connected to the supply net. At most, one of the outputs may be ON at any particular time.
Parallel : Multiple outputs may be connected to the supply net.
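
A minimal illustrative UPF snippet for the resolution option (the net and power-domain names are invented for the example):

# one_hot resolution: at most one of the connected supply sources may be ON at a time
create_supply_net VDD_SW -domain PD_CORE -resolve one_hot
# default (unresolved) supply net, driven by a single source
create_supply_net VDD -domain PD_CORE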

Switching Activity Interchange Format S.A.I.F:



SAIF stands for Switching Activity Interchange Format. It is designed to assist in the extraction and storage of the switching activity information generated by EDA tools. A SAIF file containing switching activity information can be generated using an HDL simulator, and this switching activity can then be back-annotated into a power analysis or optimization tool.
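
As an illustrative sketch of the back-annotation step (assuming a Synopsys-style Tcl shell; the file name and instance path are placeholders, and the exact option names should be checked against the tool documentation):

# map the simulation-generated switching activity onto the design instance
read_saif -input design.saif -instance tb_top/dut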

SystemVerilog Package for UPF :



VHDL package for UPF:


UPF Supply Net :


VHDL Package:


VHDL Package:





Watch the video lecture here : 




Courtesy : Image by www.pngegg.com