When I joined the PCB industry 20 years ago, induction training was done directly on the job, rotating through each PCB process. The internship on the electroplating line left a particularly deep impression on me. Because of the harsh working environment and rapid staff turnover, the electroplating line had the longest "practice" period. A timer hung in front of each tank, and the PCB panels in the hanging baskets had to be moved manually from one tank to the next according to the set time until the entire electroplating sequence was completed. Loading and unloading in most other processes was also manual, and quality inspection after the etching line was done by eye. As labor costs have risen, most of the manual labor, and even a small part of the mental labor, in PCB manufacturing has been replaced by mechanization, electrification, automation and information technology. During the push toward intelligent manufacturing, it was found that most PCB manufacturing equipment could not fully support intelligent requirements, and the corresponding retrofits were aimed mainly at automation. At present there is no unified interface specification for PCB manufacturing equipment. Some manufacturers borrow the interface specifications of semiconductor equipment, but their high cost prevents wide adoption in PCB manufacturing for now. PCB manufacturers design their own specifications according to their own needs, and the status quo can only be described as fragmented.
1. AI artificial intelligence
Artificial intelligence is the discipline that uses computers to simulate certain human thought processes and intelligent behaviors (such as learning, reasoning, thinking and planning). It is a branch of computer science, and research in this field mainly covers robotics, speech recognition, image recognition, natural language processing, simulation systems and expert systems. When people talk about artificial intelligence, concepts such as machine learning (ML), deep learning (DL), deep neural networks (DNN) and convolutional neural networks (CNN) are often mentioned. AI is the goal being pursued, and machine learning is the main way to achieve it. Commonly used machine learning algorithms include linear regression, logistic regression, ensemble methods, support vector machines, neural networks and deep learning. Depending on whether data labels are available, machine learning can be divided into supervised learning, semi-supervised learning, unsupervised learning, reinforcement learning, structured learning and transfer learning. Deep learning is one of the most commonly used approaches in machine learning, especially in computer vision applications, and deep neural networks are a family of deep learning methods that imitate mechanisms of the human brain. Depth, convolution and neural networks are discussed separately below.
1.1. Convolution operation
Definition of the convolution operation: convolution is widely used in signals and linear systems, digital signal processing and image processing. By its mathematical definition (Equation 1), convolution is the weighted superposition of one function (the unit response, or convolution kernel) on another function (the input signal).
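The formula referred to as Equation (1) is not reproduced in the text; for reference, the standard textbook forms of the convolution of an input signal f with a kernel k, which Equation (1) is assumed to follow, are:

```latex
(f * k)(t) = \int_{-\infty}^{\infty} f(\tau)\, k(t - \tau)\, d\tau
\qquad
(f * k)[n] = \sum_{m=-\infty}^{\infty} f[m]\, k[n - m]
```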
In computer vision, the convolution kernel defines a particular pattern, and the convolution operation measures how similar each position in the image is to that pattern: the more similar the local content is to the pattern, the stronger the response. The convolution kernel is usually a small matrix with an odd number of rows and columns, while the digital image is a relatively large 2-dimensional matrix (or a multi-dimensional, multi-channel feature map). Convolution is performed as a sliding window, from left to right and from top to bottom, multiplying the values at corresponding positions of each channel and summing them. If the convolution kernel is regarded as the weights and written as a vector w, and the pixels at the corresponding image positions are written as a vector x, then the convolution result at that position can be expressed as formula (2), that is, the vector inner product plus a bias.
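A minimal NumPy sketch of the sliding-window computation described above, using a hypothetical 3x3 edge kernel (as in deep-learning frameworks, the kernel is not flipped, so strictly speaking this is cross-correlation, which is conventionally still called convolution):

```python
import numpy as np

def convolve2d(image, kernel, bias=0.0):
    """Valid (no-padding) 2D convolution: slide the kernel over the image,
    multiply corresponding positions, sum, and add a bias."""
    kh, kw = kernel.shape
    ih, iw = image.shape
    out = np.zeros((ih - kh + 1, iw - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = image[i:i + kh, j:j + kw]
            # Inner product of weights and pixels, plus bias (formula 2)
            out[i, j] = np.sum(patch * kernel) + bias
    return out

# Example: a vertical-edge kernel responds strongly wherever the local
# image content resembles the pattern the kernel defines
image = np.random.rand(8, 8)
kernel = np.array([[1, 0, -1],
                   [1, 0, -1],
                   [1, 0, -1]], dtype=float)
print(convolve2d(image, kernel).shape)  # (6, 6)
```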
The above explains the convolution operation from the point of view of functions; two familiar application examples provide further illustration. Figure 1 shows the morphological detection logic used in early AOI. A preset detection template matrix (the convolution kernel) slides over the scanned image; when a feature matches the pattern defined by the template, the corresponding feature information is generated and then compared with the feature information of the standard artwork to locate defects. Depending on the shape of the defect, AOI needs to define different detection logics (template matrices), such as T, Y, L, K and H. AOI engineers must iteratively tune the parameters of the detection template matrix, and a well-designed convolution kernel produces good inspection results. Figure 2 shows the Gaussian smoothing commonly used in image processing: the gray-value distribution of the processed image on the right is more uniform, and the noise in the image has been smoothly filtered out. The key to applying the convolution operation lies in the design of the convolution kernel. Its main functions in image processing are image preprocessing and feature extraction, and the resulting feature image is passed to the next stage for analysis and understanding.
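As a small sketch of the Gaussian smoothing in Figure 2 (assuming the opencv-python package is installed; the file name is hypothetical), a fixed Gaussian convolution kernel evens out the gray-value distribution and filters high-frequency noise:

```python
import cv2

# Hypothetical scanned-image path
img = cv2.imread("scan_block.png", cv2.IMREAD_GRAYSCALE)

# Gaussian smoothing with a 5x5 kernel and sigma = 1.5
smoothed = cv2.GaussianBlur(img, (5, 5), 1.5)

cv2.imwrite("scan_block_smoothed.png", smoothed)
```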
1.2. Deep Neural Network
A neural network is a mathematical model that processes information using a structure similar to the synaptic connections of the brain. It is also a biologically inspired programming paradigm that allows computers to learn from observed data and find good ways to solve problems. Artificial neural networks borrow two essential concepts from biological neural networks: the computing unit and the connection weight. The familiar example of "AOI equipment evaluation" illustrates how a neuron works.
Suppose a PCB manufacturer needs to purchase AOI equipment. AOI engineers usually decide whether to buy a supplier's AOI equipment based on factors such as inspection capability, false-call rate, adaptability to materials and processes, ease of operation and throughput. These evaluation items are not weighted equally: different PCB manufacturers have different concerns and set the weights differently. For example, manufacturer A produces high-end products with special materials and complex processes, so it assigns high weights to inspection capability and adaptability, while manufacturer B produces low-end products and may care more about ease of operation and scanning speed. Which items are used for evaluation (also called inputs, predictors or features, all of which are machine-learning concepts) depends on each manufacturer's actual situation; this selection process is feature extraction, or feature engineering. Feeding the evaluation items and their corresponding weights into the neuron model of Figure 3, the corresponding value is calculated with the linear function above and then passed through a nonlinear activation function to obtain a number between 0 and 1 (or -1 and 1), thereby simulating the logical behavior of a biological neuron. As the weights and biases change, different decision models are obtained. In deep learning, the weights w and biases b are driven by the data, and the weights and biases are the parameters of the convolution kernel.
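A minimal sketch of this single-neuron decision, with hypothetical feature scores and weights (the actual evaluation items and weightings would differ for each manufacturer):

```python
import numpy as np

def neuron(x, w, b):
    """Weighted sum of inputs plus bias, squashed by a sigmoid activation."""
    z = np.dot(w, x) + b             # linear part: inner product + bias
    return 1.0 / (1.0 + np.exp(-z))  # nonlinear activation, output in (0, 1)

# Hypothetical scores (0..1) for: inspection capability, false-call rate,
# material/process adaptability, ease of operation, throughput
x = np.array([0.9, 0.6, 0.8, 0.5, 0.7])

# Manufacturer A: high-end products, weights inspection capability and adaptability heavily
w_a = np.array([0.35, 0.15, 0.30, 0.10, 0.10])
# Manufacturer B: low-end products, weights ease of operation and speed heavily
w_b = np.array([0.10, 0.15, 0.10, 0.35, 0.30])

print("A's purchase score:", neuron(x, w_a, b=-0.5))
print("B's purchase score:", neuron(x, w_b, b=-0.5))
```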
If, as in a biological neural network, the output of each neuron is connected to the input of the next neuron, an artificial neural network is formed (see Figure 4). In the example above, the input for the evaluation item "false-call rate" is determined by the outputs of the previous layer of neurons; to predict the result, decisions must be made based on those upstream inputs and connection weights, such as: 1. whether multiple partitioned detection engines are used to filter out non-critical defects in non-critical areas; 2. whether a full-spectrum light source is used to ensure a clear image; 3. whether a non-contact linear motor is used to move the inspected object smoothly and obtain a stable image.
Deep learning is a powerful collection of learning algorithms for training neural networks. In a broad sense it is the process of solving for the relationship between input and output; in a narrow sense it is the process of solving for the weights and biases of the neurons. Neurons are connected into an input layer, hidden layers and an output layer according to the network type to form a deep neural network; the number of hidden layers and the functional modules determine the depth and type of the network, such as FNN, CNN, RNN and GAN. The deep-learning process first feeds the labeled training data into the network and, after processing by each layer, minimizes the error between the output and the expected value, that is, minimizes the loss function (the loss function measures the deviation between actual behavior and expected behavior). The weights and biases are updated iteratively through forward propagation, the back-propagation (BP) algorithm and the loss function, as sketched below. During training one often encounters vanishing or exploding gradients, slow training and overfitting; those topics are too large to discuss in depth here.
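A minimal NumPy sketch of this loop on a toy dataset (XOR, chosen purely for illustration): forward propagation computes the output, the loss measures the deviation from the labels, and back-propagation updates the weights and biases by gradient descent.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy labeled training set: XOR inputs and expected outputs
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
Y = np.array([[0], [1], [1], [0]], dtype=float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One hidden layer with 4 neurons; the weights and biases are the learned parameters
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
lr = 1.0

for step in range(5000):
    # Forward propagation
    H = sigmoid(X @ W1 + b1)        # hidden-layer activations
    out = sigmoid(H @ W2 + b2)      # network output

    loss = np.mean((out - Y) ** 2)  # mean-squared-error loss function

    # Back-propagation: gradients of the loss w.r.t. each weight and bias
    d_out = 2 * (out - Y) / len(X) * out * (1 - out)
    dW2, db2 = H.T @ d_out, d_out.sum(axis=0)
    d_H = d_out @ W2.T * H * (1 - H)
    dW1, db1 = X.T @ d_H, d_H.sum(axis=0)

    # Gradient-descent update of weights and biases
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

print("final loss:", loss)
print("predictions:", out.round(2).ravel())
```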
1.3. AI application scenarios
An artificial intelligence system consists mainly of three parts: 1. information input: perceiving the dynamically changing physical world through various sensing devices and thereby acquiring large amounts of data; 2. decision processing: applying the acquired data to a model obtained by machine learning for reasoning, prediction or decision-making; 3. execution output: performing the corresponding actions based on the results of the inference or prediction. In short, a prediction model is built from a large amount of input data using regression, ensemble methods and other machine-learning algorithms, and the model is then applied to real data sets to obtain predictions. AI is already widely used in finance, medical care, education, public security, transportation, communications, agriculture, meteorology and the service industries. Table 1 lists some common AI application scenarios.
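As a minimal sketch of the build-then-apply flow (using scikit-learn; the numbers are made up purely for illustration):

```python
from sklearn.linear_model import LinearRegression

# Made-up training data: e.g. process settings (inputs) vs. a measured outcome (label)
X_train = [[1.0, 20.0], [1.5, 22.0], [2.0, 25.0], [2.5, 27.0]]
y_train = [10.2, 11.8, 13.9, 15.1]

# Build the prediction model from labeled data
model = LinearRegression().fit(X_train, y_train)

# Apply the trained model to new data to obtain a prediction
print(model.predict([[1.8, 24.0]]))
```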
2. AOI automatic optical inspection
The basic concepts of AI were briefly introduced above, along with the computer-vision algorithms commonly used in AI; these vision algorithms are also widely used in AOI. Automatic optical inspection evolved from manual visual inspection. Its working principle is: first, the required image feature information is "learned" from the standard CAM data through vision algorithms; the learned model is then applied to the scanned image of each PCB for feature extraction, the resulting feature image is compared with the standard data, and the problem points to be detected are reported according to the given rules (inspection standards). A simplified sketch of this compare-and-report flow appears below. Since AOI is a typical application of computer vision, it faces the same difficulties as computer vision.
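A simplified, hypothetical sketch of the compare-and-report idea (not the algorithm of any particular AOI vendor), assuming OpenCV and a golden reference image already registered to the scan:

```python
import cv2

# Hypothetical file names; in practice the reference comes from the CAM data
# and the scanned image must first be aligned to it
reference = cv2.imread("golden_reference.png", cv2.IMREAD_GRAYSCALE)
scan = cv2.imread("board_scan.png", cv2.IMREAD_GRAYSCALE)

# Compare the scanned image against the reference
diff = cv2.absdiff(reference, scan)

# Apply an inspection rule: differences above a threshold are candidate defects
_, mask = cv2.threshold(diff, 40, 255, cv2.THRESH_BINARY)

# Report the location and size of each candidate defect
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
for c in contours:
    x, y, w, h = cv2.boundingRect(c)
    print(f"candidate defect at ({x}, {y}), size {w}x{h}")
```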
2.1. Visual difficulties in AOI
Information loss in the imaging process: when a person tries to understand an image, prior experience and knowledge are applied to the current observation, and the process of understanding is usually completed unconsciously. Computer vision, by contrast, has to draw on results and methods from mathematics, pattern recognition, artificial intelligence, psychophysiology, computer science, electronics and other disciplines. For AOI, because the 3D scene of the PCB is projected into 2D space, a great deal of information is lost, especially depth information; illumination, material properties, orientation and distance are all collapsed into a single measurement, the gray value. The same 2D projection can be produced by an infinite number of possible 3D scenes, so the inverse process from 2D back to 3D is ill-conditioned, an ill-posed problem: the observed data are not sufficient to constrain the solution, and prior knowledge or appropriate constraints must be introduced. For example, in AOI inspection it often happens that the scanned (2D) image shows an open circuit, but in the 3D scene it may be a real open circuit, or oxidation spots, residual glue or dust on the trace.
Local window and global view: image-analysis algorithms usually operate on a particular storage unit in memory and its neighboring units. When an image can only be seen through a local view, as if peering through a few small holes, interpreting it is usually very difficult. AOI scans a swath of specified width according to the resolution and divides it into image blocks of a specified size for processing, as sketched below, so AOI detection algorithms also analyze and process data locally. Unlike E-Test, AOI does not incorporate PCB netlist analysis; it only adds an auxiliary layer during logic processing for functional analysis.
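A minimal sketch of splitting a scanned swath into fixed-size blocks for local processing (the block size here is an arbitrary assumption):

```python
import numpy as np

def split_into_blocks(scan, block=512):
    """Divide a scanned swath into block x block tiles for local analysis;
    edge tiles may be smaller than the nominal block size."""
    tiles = []
    h, w = scan.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tiles.append(((y, x), scan[y:y + block, x:x + block]))
    return tiles

swath = np.zeros((2048, 4096), dtype=np.uint8)  # placeholder for a scanned swath
print(len(split_into_blocks(swath)))  # 4 x 8 = 32 blocks
```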
As a key quality-control process in PCB manufacturing, the AOI process still requires considerable manual participation in the confirmation step at the verification/repair station, where the operator has to work through the reported defects, many of which are false calls.