Showing posts with label 3D design Tools. Show all posts

Wednesday, April 22, 2009

Lisa McIlrath, R3 Logic: design tools for 3D IC are on the way

Mea culpa. I may have jumped to conclusions in yesterday’s post. Although it appears to those developing 3D IC integration processes that the design community hasn’t been heeding the call for much-needed design tools, after talking with Lisa McIlrath of R3 Logic, I realized there’s a lot more to it than that. The message has been heard, but these things are more convoluted than we think. In fact, it's a bit of a cart-before-the-horse situation: how can design tools be developed until the process technologies, characterization, and parameters have been determined, and prototypes tested? For that matter, each customer is likely to establish its own design rules.

Consider that 3D IC integration is still a very new field, and that everyone is very much in what McIlrath called “pathfinding mode”: exploring different designs before the parameters are figured out. She said that there’s lots of advance work being done to discover different customer needs. Design tools needed now for CMOS image sensors and soon for stacked memory may not necessarily turn out to be the same ones needed down the road for heterogeneous integration. Unfortunately, it’s still not clear what the winning applications will be.

For large companies, there's not much incentive to invest in development until the picture is clearer and the market is big enough. However, the buzz is that Cadence has been doing some internal development, and Synopsys and Mentor are sending some of their people to Friday’s 3D Integration Workshop at DATE 2009, in Nice, France. Clearly, the interest is growing.

McIlrath said that although it might be risky and difficult for small companies to set a course on a tool that may not be adopted, it’s also easier for them to be agile as the market shifts and changes. For example, R3 Logic discovered a need in the research community for a layout editor, so they partnered with Micro Magic to develop one. (Tezzaron Semiconductor, a pioneer in manufacturing stacked memory with TSVs, uses the tool and has endorsed it.) This week at DATE 2009, McIlrath says R3 Logic will be showcasing a 3D floorplanning tool scheduled for release this summer. Additionally, last month, the company announced a collaboration with STMicroelectronics and CEA-LETI to develop a full 3D design flow.

McIlrath says that although it’s regrettable that there aren’t more tools out there and available, she doesn’t agree that the lack of design solutions, or test for that matter, is a blocking factor in TSV adoption. “3D integration is going to go ahead with or without any particular player,” she said. “The design tools are coming. Our goal is to find the most appropriate tools to suit the needs of the users.” – F.v.T

Tuesday, April 21, 2009

DATE 2009 Addresses Design for 3D Integration

It looks as though the call is finally being heard. Those deeply involved in 3D IC integration using through-silicon vias (TSVs) as a method of interconnect have been banging these particular drums and sending up smoke signals to the design and test communities for quite some time now with the same message: WE NEED DESIGN SOLUTIONS and WE NEED METHODS FOR TEST.

During the 3D panel held last month at the IMAPS Device Packaging Symposium, panelists and attendees alike speculated about the whys and wherefores of the logjam for both, but no one could really come up with a concrete answer. While several smaller EDA vendors were acknowledged for having developed tools for 3D IC design (Javelin Design Automation, Micro Magic, and R3 Logic), the question of when “the big guys” (Mentor and Cadence) would jump on board went unanswered. The collective assumption is that until TSV is closer to market adoption, there’s no real need for the larger design houses to jump into the ring. Test for TSV is still very much an enigma.

Therefore, it was with delight that I reviewed the agenda for Friday’s workshop at DATE 2009, 3D Integration – Technology, Architecture, Design, Automation and Test. Workshop organizers Yann Gillou, of ST-Ericsson, and Erik Jan Marinissen and Geert Van der Plas, both of IMEC, have assembled a line-up designed to educate attendees from the design and test communities about the critical need for solutions, and to spark interaction between researchers, practitioners, and others interested in 3D IC integration.

Session 1, moderated by Lisa McIlrath of R3 Logic, kicks off with a keynote address by Sitaram Arkalgud, of SEMATECH, titled The Promise of Through-Silicon Vias, followed by invited speaker Riko Radojcic, from Qualcomm, who will outline requirements for the design-for-3D environment. The talk will focus on the design environment and EDA tools necessary for what Qualcomm identifies as the ‘Stage 1’ class of products, consisting of a functionally partitioned two-die stack. He will identify three classes of methodologies and associated EDA technologies.

The rest of the day addresses the gamut of issues surrounding design and test, ranging from power integrity issues and bandwidth optimization, to SOC test architecture, test strategies for 3D IC, and much more. In addition to live presentations and 22 poster sessions, the day will conclude with a panel discussion, The Future of 3D Integration From All Angles, moderated by Peter Ramm, of the Fraunhofer Institute. Panelists include Roger Carpenter, Javelin Design Automation; Krishnendu Chakrabarty, Duke University; Paul Siblerud, Semitool; Nicolas Sillon, CEA-LETI; Pascal Urard, STMicroelectronics; and Geert Van der Plas, IMEC.

They may not have the answers yet, but at least they’re getting the message. It’s a start. -- F.v.T

Wednesday, March 11, 2009

From the DPC: Panelists address burning questions for 3D IC integration

I’m glad I stuck around last evening for the 3D panel discussion on the status of 3D integration technologies, applications and roadmaps. As moderator Phil Garrou pointed out, it offered the opportunity to hear some commentary I might have otherwise missed by only attending the scheduled presentations. As a result, I, and a roomful of active participants, got a peek at the inside track of what’s happening.

Panelists included Bob Patti, of memory-maker Tezzaron Semiconductor; Eric Beyne, director of advanced packaging technologies at IMEC; renowned market analyst and keynoter, Jan Vardaman, of TechSearch International; C.J. Berry, of Amkor product development; and Bioh Kim, director of business development for EV Group. Garrou posed a line-up of statements and asked the panelists whether they agreed or disagreed, and why. Here’s what the panelists had to say.

In order of appearance, the three short term product drivers — CMOS Image sensors, memory on logic, and memory stacks — are paving the way for the ultimate goal, which is repartitioning. Repartitioning involves dividing chips into functions, producing them on separate wafers and stacking them. Do you agree that this is the ultimate goal?

With regard to the order of release, there was general consensus among the panelists. Vardaman elaborated her position on memory, saying she was “sticking to her guns” that DRAM will come before flash memory due to the cost of TSVs. “Solid state drive makers don’t have it on their horizon, so we know it’s further out there.” Berry offered that TSV is about evolutionary steps, and will require a mature supply chain to bring it to market.

Addressing repartitioning as the ultimate goal, Patti said that it all depended on the value proposition, and Berry concurred, adding that there was no simple answer to that. Beyne said that memory on logic will lead the way to repartitioning; for memory stacking alone, wire bonding is still the cheapest way to go. He added that 3D partitioning is really a different type of 3D where you’re adding blocks. “It’s a different study involving higher density and is not the same TSV technology. However, at that level, potential cost advantages and paybacks are higher,” he said.

A comment from the floor about seeing “the same pictures as last year” sparked the question: what has changed in a year? Vardaman responded that since last March, a number of companies now offer chip-on-chip solutions, which is the step before TSV. Some of the probe card companies — FormFactor, Wentworth Labs, Cascade Microtech — have been working on their probe card technologies; the thermal area is showing promise; and some of the small design tool companies like R3 Logic have made progress. “I think you’re seeing the same pictures because people are still working on this, and aren’t ready to go public with their findings yet,” she added.

Beyne said that in Europe, 3D processes are showing up in MEMS applications and in automotive applications. He also said that a better response from EDA vendors on the EDA issues is a good sign that things are coming, because those vendors don’t do anything until the market is ready.

“If you’re talking about taking something from first-article demonstration to high-volume production, seeing the same picture for a few years shouldn’t shock anybody,” added Berry.

Beginning with the adoption of TSV for CMOS image sensors and memory on logic in the 2008-2009 time frame, followed by backside illumination (BSI) in ’09, DRAM around 2010, and heterogeneous integration and repartitioning by 2014, Garrou asked if the panelists agreed with this sequence, if not these dates.

Berry agreed with the order, but also suggested that we might see some derivatives of heterogeneous integration by 2012 or 2013, with logic deconstructed on an interposer, for example. “I wouldn’t be surprised to see a simplified version of heterogeneous integration show up a year earlier,” he said.

Vardaman said that other than image sensors, things have shifted out a bit due to economic conditions. “How long does it take to put a 300mm line in? Capex doesn’t look good,” she pointed out.

While agreeing with the sequence, Beyne noted that the image being used in various presentations to depict heterogeneous integration “gives a completely false impression of what will happen. It will be a simpler version,” he said, adding that it’s a question of added value. The driver, from a business point of view, will be the advantage to “fab lite” manufacturers.
Garrou concurred, pointing out that the ability to do this in parts does have an advantage, especially from an IP perspective.

Patti said he agrees with the order of the way things will happen, and also sees it as a good time frame. “I agree flash will be behind DRAM, but phase-change memory will be before flash.” He explained that as phase-change memory is still in the design phase, it offers the opportunity to start from scratch. There’s no hurry to convert existing products to TSV, but when you’re starting with a new memory architecture, and have a problem that can be solved, it makes sense to incorporate TSV.

TSMC stepped forward with a roadmap to do TSV at 50µm pitch, with willingness to implement in 2011. Will TSMC hold to this roadmap, or push out the iTSV production capability?

The consensus among the panel — TSMC will likely push it out. “There aren’t enough customers to justify it, so I think it will push out,” noted Patti. “I’ve good reason to believe it’s likely to move because they’re demand driven. There would have to be strong customer demand.”

Berry noted that it’s a difficult business model for TSMC. “It’s in everyone’s best interest to wait,” he said. “It’s always better to delay.”

Do we all agree that for the most part, TSV will be a Fab/Foundry business?

While it seemed clear from the panel responses that the vias themselves will be created first in the foundry, who will take ownership of the post-fab processes remains to be seen. Berry said that it’s a question of risk mitigation; OSATs will be in a good position to support middle-end technologies.

Patti said that wherever it is, it will be important that one entity does all the post-fab processes in one spot: backside grinding, surface treatment, RDL, microbump, die stacking, assembly, and test. He added that since foundries got burned in the bumping business, he sees the task being taken on by the OSAT providers — albeit by a short list of OSATs who can do it.

Offering a perspective from equipment manufacturers, Kim pointed out that transferring very thin, delicate wafers between locations is a concern. “I’m not sure who will be the ones to do it,” he said, “but multiple processes done on very thin wafers, should be done at one location.”

Aside from test, do we all agree that equipment sets are ready for production?

The general consensus among panelists was that yes, with a few modifications in some areas, equipment is ready. “EVG is definitely ready,” said Kim.

From the floor, Ted Tessier of Flip Chip International pointed out that there’s still work to be done in the die placement area. “It’s not cost effective yet," he noted.

Design and Test: what is the hesitancy of Cadence and Mentor Graphics to address the design tool issue, and should it be done under a consortium umbrella?

There was general agreement among the panel that the design community doesn’t really lend itself to a consortium format. Vardaman observed that with regard to Cadence and Mentor, it is likely they are waiting for a small start-up company to develop the tools, and then they’ll acquire them.

Speaking from his position as an early adopter, Patti said that designs for memory can be done with existing tools. “We don’t like it, it’s a lot of heavy lifting, but it can be done,” he said. However, it will be a problem when it comes to heterogeneous integration. The bigger companies don’t have a compelling reason to develop tools yet, Patti added, because they aren’t losing business to a competitor by not having a 3D solution. He named R3 Logic and Micro Magic as two small design houses that currently have EDA tools for 3D.

Test – is this being done and kept under wraps?

Test was one of the most elusive areas, drawing the least response from panel and audience alike. Patti said he’s not sure how a wafer with 1.5M channels will ever be tested. He offered Tezzaron’s solution of built-in self-test and self-repair. Ultimately, he said, you’ll need to test the final package. “Tera-computing will require self-test and self-repair,” he added.

Approaches to testing memory and logic are completely different, said Beyne. He suggested that inspection is a more viable approach, such as with metrology tools. “A lot of metrology issues can be measured to complement the testing.” In the end, the final structure will need to be tested.

All in all, some direct answers to some fairly provocative questions. As panel discussions go, I’d say this one brought some interesting information to the forefront. If anyone who attended thinks I left something out, be sure to add your comments here. – F.v.T.

Thursday, February 26, 2009

3D EDA Tools – Coming out of the Woodwork

That didn’t take long. A post about one EDA tool introduction inspired a comment about a 3D layout editor that’s been on the market for 2 years. A mention of said comment in Tuesday’s email update and an email to the individual who posted the comment brought immediate response. This is the beauty of blogging; it results in an almost instantaneous sharing of information and inspires collaboration.

According to Mark Mangum, sales manager for EDA tools and chip design tools at Micro Magic, Inc., the company’s layout editor, MAX-3D, handles the physical design of the chip, and is particularly suited to TSV design. He explained that its ability to manage separate wafer levels with individual tech files is more effective than relying on a "super tech file" to handle the whole design. With this approach, each wafer level maintains its own tech file throughout the design process; there is an additional tech file for the interconnect. In addition, the tool’s speed and capacity are ideal for handling the size and complexity of TSV designs. A slower editor tends to decrease performance drastically.
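To make the per-level tech-file idea concrete, here is a minimal, hypothetical sketch in Python. The names and structure are invented for illustration only — this is not Micro Magic's actual data model or API — but it shows the basic shape: each wafer level in the stack carries its own rule set, with one extra rule set for the inter-level interconnect, rather than one merged "super tech file" for everything.

```python
from dataclasses import dataclass, field

@dataclass
class TechFile:
    """Process rules for one wafer level (or for the interconnect)."""
    name: str
    min_width_um: dict  # layer name -> minimum feature width, in microns

@dataclass
class Stack3D:
    """A 3D stack where each wafer level keeps its own tech file,
    plus one additional tech file for the inter-level interconnect."""
    levels: list = field(default_factory=list)  # TechFile per level, bottom-up
    interconnect: TechFile = None

    def rule(self, level_index: int, layer: str) -> float:
        # A rule is resolved against the owning level's own tech file,
        # so two levels built in different processes can disagree freely.
        return self.levels[level_index].min_width_um[layer]

# Two wafer levels from different (made-up) processes, plus TSV rules.
stack = Stack3D(
    levels=[
        TechFile("logic_130nm", {"metal1": 0.16}),
        TechFile("memory_180nm", {"metal1": 0.23}),
    ],
    interconnect=TechFile("tsv_rules", {"tsv": 5.0}),
)

print(stack.rule(0, "metal1"))  # 0.16
print(stack.rule(1, "metal1"))  # 0.23
print(stack.interconnect.min_width_um["tsv"])  # 5.0
```

The same "metal1" layer name resolves to different rules on different levels, which is exactly what a single flattened super tech file would struggle to express.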

Three important elements of good EDA tools are programmability, customizability, and compatibility with other tools in the toolbox. Mangum assured me that MAX-3D was developed with these considerations in mind. “Integration is a key selling point for our customers, so we've made an effort to make our tools work with others,” he said, adding that the tool was developed for “open architecture”, with ASCII data files and an open source scripting language. OpenAccess support was added for design data files due to customer demand, and is continually updated. To handle Pcell design data, a Pcell interpreter from IPL was added to allow users to read their Pcell data. "MAX-3D has real-time design rule checking (DRC), but because many customers use Mentor Calibre for signoff DRC, a direct interface to Calibre was added. We also support industry-standard file formats such as GDSII, LEF, DEF, etc., so MAX-3D users won't have to worry about 'vendor lock-in' of file formats - for design data, cells, or generators,” he said.

Mangum told me MAX-3D is being used by several universities, including MIT, Lincoln, Cornell, North Carolina State, and Penn State. Six companies have also incorporated it into their processes, mainly for developing test chips.

Are there other 3D tools in the works at Micro Magic? Mangum says yes, but is hush-hush about it. "We are working on some packaging-related development with a customer, but no word on when we'll be discussing it," he said. Simulation and verification are big blind spots in the industry right now, he added, but rumor has it, Mentor has something in the works on this.

I know one of Micro Magic’s customers is happy with the performance of the company's tools. An unsolicited endorsement appeared in my inbox shortly after I mentioned the product in my email. Gretchen Patti, technical communications specialist for Tezzaron Semiconductor, stated simply, yet enthusiastically: “About Micro Magic: Their tools are real! We use them.” That’s pretty much all I needed to know. – F.v.T.