Timely upgrade electromagnetic simulation software to meet new challenges

2021-09-14
Author: Fraank

Simulation software needs ongoing updates for two main reasons:

1. To meet the differing needs of different users, such as PCB products intended for different operating environments or platforms.
2. Because change requests arise after a period of real-world use: major corrections, bug fixes, new functions, or performance improvements.
Simulation software is professional software with a particularly high barrier to entry. It combines the latest academic research across many disciplines with the latest computer technology to deliver fast, accurate simulation and to guide real engineering design and R&D. A simulation product ships in versions for different platforms as well as feature-enhancement releases over time. Users care far more about the latter: the functional enhancements delivered in successive releases are the core value they are paying for.
Technology today moves quickly and software update cycles are faster still: the operating system pushes patches regularly, and mobile apps update almost daily. By contrast, simulation software is updated far more slowly, in recent years typically one major release per year. Abroad, a simulation software purchase generally includes several years of TECS (Technical Enhancements and Customer Support) service, which covers upgrades: during the TECS period users may run the vendor's latest version, and after it expires they must pay a renewal fee to keep receiving upgrades. In China, for various reasons, most customers have been far more reluctant to purchase TECS. As intellectual-property protection strengthens and public willingness to pay for knowledge grows, this situation should improve considerably.

A new version brings improvements across the board. Theory advances, methods improve, and computer systems evolve: underlying numerical libraries, communication libraries, instruction sets, and acceleration schemes are all moving targets, and a modern release crystallizes that accumulated progress. In product R&D, where market competition is essentially a race against time, the newest release is the most advanced productivity tool available and a real competitive advantage.

With the release of ANSYS 2019 R3, I would like to take the electromagnetic field simulation tool HFSS as an example and discuss, from several angles, why adopting the new version matters.

New challenges: making the impossible possible
Technology advances through constant iteration. Capabilities that did not exist before are added to new versions through new algorithms and improvements, implemented quickly and well. The list of such technologies is long: integral-equation solvers, finite arrays, the shooting-and-bouncing-rays method, domain decomposition, ISAR imaging, multipaction (micro-discharge) calculation, and so on. After decades of continuous development, HFSS now offers cross-scale simulation capability spanning scenarios from a single chip to an entire urban environment.

So when we hit a problem we cannot solve, it is worth pausing to ask: have we kept up with the software's release cadence? Is there now a better, faster way to do what used to be slow and cumbersome? Efficiency is the lifeblood of R&D; it is essentially a race against time, and the goal is to stay ahead of competitors.

Let's look at a few representative HFSS features to see how new releases turn impossible tasks into solvable ones.

Doppler imaging calculation for autonomous driving in HFSS SBR+
Doppler imaging is a core requirement in the development of ADAS (Advanced Driver Assistance Systems) technology. HFSS gained the ability to solve scene-level problems quickly when ANSYS acquired Delcross and its Savant product, whose core technology is the shooting-and-bouncing-rays algorithm (SBR+). At first, however, producing Doppler images, and animated results that evolve over time, required external data-processing software such as MATLAB. That workflow was workable, but far from convenient.

With the release of ANSYS 2019 R2 in June this year, this capability is built into HFSS, making accelerated Doppler computation of scene-level problems very convenient; the accelerated Doppler solver supports simulated radar frame rates of up to 100-300 frames per second. The following figure shows the feature's interface and the computed results for an autonomous-driving scene.

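The Doppler processing involved can be illustrated with a minimal sketch: an FFT across slow-time radar returns recovers a target's Doppler shift, and hence its radial velocity. All parameters here (carrier frequency, frame rate, pulse count, target speed) are illustrative assumptions, not values taken from HFSS.

```python
import numpy as np

# Hypothetical parameters -- purely illustrative, not from HFSS.
c = 3e8            # speed of light, m/s
fc = 77e9          # typical automotive radar carrier, Hz
frame_rate = 200   # frames/s, within the 100-300 range quoted above
n_pulses = 64      # pulses per frame (slow-time samples)
prf = frame_rate * n_pulses   # pulse repetition frequency, Hz

v_target = 10.0                    # assumed target radial velocity, m/s
fd_true = 2 * v_target * fc / c    # resulting Doppler shift, Hz

# Simulated slow-time returns: a complex sinusoid at the Doppler frequency.
t = np.arange(n_pulses) / prf
returns = np.exp(2j * np.pi * fd_true * t)

# Doppler processing: FFT over slow time, then pick the peak bin.
spectrum = np.abs(np.fft.fft(returns))
freqs = np.fft.fftfreq(n_pulses, d=1.0 / prf)
fd_est = freqs[np.argmax(spectrum)]
v_est = fd_est * c / (2 * fc)
print(f"true Doppler {fd_true:.0f} Hz, estimated {fd_est:.0f} Hz, v = {v_est:.1f} m/s")
```

The estimate is quantized to the FFT bin spacing (prf / n_pulses, here 200 Hz); a real radar pipeline extends this into a 2D range-Doppler map.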

Solving the micro-discharge problem
Micro-discharge refers to the discharge phenomenon caused by the migration of charged particles in high-power microwave equipment in a vacuum environment. It is very important for equipment safety and performance reliability. This has not been a direct reference area for HFSS. However, after the release of the 2019 R2 version, this problem has been properly solved, and its built-in new charged particle tracking solver (Multi-Paction Solver) can easily solve such problems.

Setting up this solution is straightforward, much like post-processing: add charge regions, add SEE (secondary electron emission) boundaries, add a solution setup linked to the discrete frequency sweep, and optionally link a Maxwell DC bias, all in a few steps. After solving, you obtain the evolution of the charged-particle count over time, even as an animated result, which provides very good simulation support for the design and research of such engineering problems, as shown in the figure.
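For intuition about what such a solver checks, here is a toy growth model (my own illustration, not the HFSS algorithm): each electron transit across a gap ends in a wall impact, and the SEE yield at the impact energy decides whether the population grows (yield above 1, potential discharge) or decays (yield below 1, safe).

```python
# Toy multipaction growth model -- illustrative only.
def electron_population(delta: float, n0: float = 1.0, transits: int = 20) -> float:
    """Electron count after a number of gap transits with constant SEE yield delta."""
    n = n0
    for _ in range(transits):
        n *= delta   # each impact multiplies the population by the SEE yield
    return n

# delta > 1: exponential growth -> multipaction risk.
grow = electron_population(1.3)
# delta < 1: the population dies out -> safe operating point.
decay = electron_population(0.8)
print(f"delta=1.3 -> {grow:.1f} electrons, delta=0.8 -> {decay:.4f}")
```

A real particle-tracking solver replaces the constant yield with an energy-dependent SEE curve and tracks particle trajectories in the computed fields.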

There are many such feature extensions across previous HFSS versions. When you encounter an unfamiliar simulation application, first check whether the latest version of HFSS can already solve it, and save yourself a detour.

Fast and accurate solution of aperiodic array antennas
Finite-array domain decomposition (FA-DDM) is an advanced HFSS technology for large array antennas. With its flexible modeling approach, fast mesh reuse, and high-performance domain decomposition algorithm, it accurately solves large element arrays, addressing the periodic planar-array problem.

But what about aperiodic arrays, or complex arrays mixing several periodicities?

• The 2019 R3 release delivers a breakthrough here: 3D-component technology and a virtual modeling and definition method for array elements, combined with DDM's fast full-array solving, achieve a major technical advance.

• The method handles multiple element types and both periodic and aperiodic array layouts, a great leap in flexibility and adaptability.

We will cover this in detail in next year's online seminar, so stay tuned.

Acceleration of the kernel matrix solver
Here are a few examples, drawn from the many new features in HFSS's release history, by way of illustration:

1) HFSS R15: direct matrix solver supports distributed solving (released 2014)
The direct matrix solver has the highest accuracy and is the most efficient for multi-port/multi-excitation cases. It can use the multi-core CPUs and memory of multiple compute nodes for distributed direct matrix solving. This feature requires the ANSYS Electronics HPC module.

2) HFSS R15: multi-level high-performance computing increases solvable scale and speed (released 2014)
Multi-level HPC works in two tiers: the first tier decomposes an optimization or parametric sweep into tasks across multiple compute nodes, and the second tier solves each node's task in parallel across multiple CPU cores or nodes. This makes full use of compute resources for ultra-large simulations, especially optimization and design-space exploration.
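The two-tier decomposition described above can be sketched as follows. The solver call and the sweep values are placeholders (a real deployment distributes tier 1 across cluster nodes rather than local threads).

```python
# Two-level task decomposition sketch -- illustrative, not the ANSYS scheduler.
# Tier 1: each parametric variation is an independent task.
# Tier 2: within a variation, frequency points are solved in parallel.
from concurrent.futures import ThreadPoolExecutor

def solve_frequency_point(param: float, freq: float) -> float:
    # Stand-in for one matrix solve; a real solver call goes here.
    return param * freq

def solve_variation(param: float, freqs: list) -> list:
    # Tier 2: parallel frequency-point solves within one variation.
    with ThreadPoolExecutor(max_workers=4) as inner:
        return list(inner.map(lambda f: solve_frequency_point(param, f), freqs))

variations = [1.0, 2.0, 3.0]    # hypothetical sweep of one design parameter
freqs = [1.0, 2.0, 4.0, 8.0]    # hypothetical frequency points, GHz

# Tier 1: distribute variations (across nodes in a real cluster).
with ThreadPoolExecutor(max_workers=len(variations)) as outer:
    results = list(outer.map(lambda p: solve_variation(p, freqs), variations))
print(results)
```

The point of the two tiers is that the outer level scales with the number of design points while the inner level scales with cores per node, so neither resource sits idle.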

3) HFSS R14: HPC brings a faster matrix solver (released 2012)
Matrix solving is the most resource-hungry part of an HFSS run; the Solver Profile shows it consuming the most memory and time. The HPC license brings a new multi-core matrix solver that delivers a substantial gain in raw computing efficiency over the traditional MP solver, with better scalability.

4) HFSS R14: improved DDM acceleration (released 2012)
The DDM algorithm extends FEM to distributed-memory environments, raising FEM's capability to an unprecedented level: DDM can solve problems that were unimaginable on earlier hardware. HFSS V15 further improved the DDM core algorithm, with a large gain in core efficiency.

2016-2019: frequency sweep efficiency improvements
Taking a Galileo test board as an example, let's look at a set of test data. This is a complex six-layer PCB with 39 ports and 24 nets; after meshing it has about 3.3 million tetrahedra and about 19.5 million unknowns, a fairly large SI parameter-extraction problem.

From the 2016 release to the 2019 release, SI solving gained a very considerable speedup, the result of years of work on multi-core solving. Here, an investment in 128 HPC cores yields roughly a 40x acceleration, a major advantage: under today's 5G applications, high-frequency and high-speed design depends ever more heavily on simulation.
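As a back-of-the-envelope check on that figure, Amdahl's law tells us what serial fraction of the workload is consistent with about 40x on 128 cores. This inference is mine, not from the benchmark report.

```python
# Amdahl's-law sanity check on the "128 cores -> ~40x" figure quoted above.
def amdahl_speedup(serial_fraction: float, cores: int) -> float:
    """Speedup of a workload with a fixed serial fraction on N cores."""
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

def serial_fraction_for(speedup: float, cores: int) -> float:
    """Invert Amdahl's law: which serial fraction yields this speedup?"""
    return (cores / speedup - 1.0) / (cores - 1.0)

s = serial_fraction_for(40.0, 128)
print(f"~40x on 128 cores implies a serial fraction of about {s:.1%}")
print(f"the same workload on 32 cores: {amdahl_speedup(s, 32):.1f}x")
```

A serial fraction under 2 percent is an unusually well-parallelized workload, which is the point of the benchmark: the matrix solve and sweep dominate and both scale.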

In addition, as a reference for cloud environments, here is a set of statistics for the same model on different cloud computing resource configurations.

Cloud computing has three predefined machine configurations by default, namely:

• Small: 8 cores, 224 GB node

• Medium: 16 cores, 224 GB node

• Large: 32 cores, 448 GB, two nodes

The data shows that, for efficiency, memory size should never be allowed to become the bottleneck. Memory is cheap; automatic HPC actively uses any extra memory on the system, and the frequency sweep can solve multiple frequency points in parallel. The software combines this with its ability to minimize the memory used during sweep extraction, packing as many frequency points as possible into the available memory.

Of course, with less available memory the solve will not be as fast, but the automatic HPC settings handle this situation on their own.
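One simple way to picture this memory-aware packing (an assumed scheme for illustration, not the actual ANSYS implementation): batch as many parallel frequency-point solves as the node's memory budget allows, then run the batches in sequence.

```python
# Sketch of memory-aware frequency-point batching -- assumed scheme, for
# illustration only. All numbers below are hypothetical.
def batch_frequency_points(freqs: list, mem_per_point_gb: float,
                           node_mem_gb: float) -> list:
    """Greedy batching: fill each parallel batch up to the memory budget."""
    per_batch = max(1, int(node_mem_gb // mem_per_point_gb))
    return [freqs[i:i + per_batch] for i in range(0, len(freqs), per_batch)]

# Hypothetical case: 10 frequency points at 60 GB each on a 224 GB node.
batches = batch_frequency_points(list(range(1, 11)), 60, 224)
print(batches)  # 3 points fit per batch -> 4 sequential batches
```

With less memory per node, `per_batch` shrinks and more batches run sequentially, which is exactly the graceful slowdown described above.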

2013-2019: broadband frequency sweep improvements
This example is a medium-scale PCB. It is challenging on two fronts: each frequency point is itself a sizeable solve, and the number of frequency points to sweep is very large, so the total cost is high. It is a large model with a complex frequency response, solved with up to 128 HPC cores. In the 2019 R2 release, the S-parameter sweep needs only about 5x the memory of the setup solve, yet runs 4.3 times faster than HFSS 14.

Below we compare HFSS 14, HFSS 15, and HFSS 2019 R2. The widest gap spans seven major versions and more than seven years. The data (see the table below) shows the new version's significant advantage in solving speed.

HPC core counts were tested at 8, 32, and 128 cores, the maximum supported by 1, 2, and 3 HPC Packs respectively. For the speed comparison, the mesh counts vary slightly between versions, so the runs do not correspond exactly; but the differences are small and the overall problem scale is comparable.

HFSS 15 uses a larger mesh and more memory, but this is an artifact of adaptive meshing: with the larger mesh, HFSS 15 actually achieves tighter accuracy convergence (0.007 vs. 0.01).

As the baseline, consider HFSS 14 using 8-core multiprocessing (the old matrix solver). Compared against the SDM-based 32-core and 128-core HPC runs, HFSS 15 delivers a more accurate analysis in less time via distributed frequency sweeping. Notably, HFSS 15 cut the benchmark's runtime from 3 days to 5.5 hours: HPC acceleration turned days into hours.

Updating to the 2019 release speeds up overall simulation a further 4x, from one design iteration per day to four. Spending memory to buy simulation speed is a cost-effective strategy, given how cheap memory is on the latest generation of machines.