*Figure: Schematic of the PCA model training process in PSF data processing (data matrix construction → SVD → principal component selection → dimensionality reduction and coefficient extraction).*
The PCA model training in PSF data processing proceeds in four steps (a minimal code sketch follows the list):

1. **Data Matrix Construction:** Flatten the standardized PSF images into vectors to form a data matrix \(X_{\text{scaled}} \in \mathbb{R}^{N \times D}\), where \(N\) is the number of PSF samples and \(D\) the number of pixels per image.
2. **Singular Value Decomposition (SVD):** Rather than explicitly forming and eigendecomposing the covariance matrix, the library implementation computes the SVD of the image matrix: \(X_{\text{scaled}} = U \Sigma V^\top\). The columns of the right singular matrix \(V \in \mathbb{R}^{D \times D}\) are the principal component directions, and each squared singular value is proportional to the corresponding covariance eigenvalue (with the usual sample-covariance convention, \(\lambda_k = \sigma_k^2 / (N-1)\)).
3. **Principal Component Selection:** Sort the principal components by eigenvalue and keep the top \(K\). These form the projection matrix \(V_K = [v_1, v_2, \ldots, v_K] \in \mathbb{R}^{D \times K}\), where \(v_1\) is the direction of maximum variance, \(v_2\) the next, and so on.
4. **Dimensionality Reduction and Coefficient Extraction:** Project the centered data into the low-dimensional space: \(Z = X_{\text{centered}} V_K \in \mathbb{R}^{N \times K}\). Here \(X_{\text{centered}}\) denotes the mean-centered data; for standardized inputs this coincides with \(X_{\text{scaled}}\). Each row of \(Z\) holds the coefficients (also called "scores") of the corresponding PSF in the principal component space.
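A minimal NumPy sketch of these four steps, under the assumption that the PSF stamps arrive as a standardized array of shape \((N, H, W)\); array names and sizes are illustrative, and synthetic data stands in for real PSF cutouts:

```python
import numpy as np

# Placeholder data for the sketch: N standardized PSF stamps of size H x W.
rng = np.random.default_rng(0)
N, H, W = 500, 32, 32
psf_images = rng.normal(size=(N, H, W))

# 1. Data matrix construction: flatten each PSF into a D-dimensional row.
D = H * W
X_scaled = psf_images.reshape(N, D)

# Mean-center the columns; for standardized inputs the column means are
# already ~0, so X_centered coincides with X_scaled up to numerical noise.
X_centered = X_scaled - X_scaled.mean(axis=0)

# 2. SVD instead of an explicit covariance eigendecomposition:
#    X_centered = U @ diag(S) @ Vt, rows of Vt = principal directions.
U, S, Vt = np.linalg.svd(X_centered, full_matrices=False)

# Squared singular values are proportional to the covariance eigenvalues.
eigvals = S**2 / (N - 1)

# 3. Principal component selection: SVD returns singular values in
#    descending order, so the first K rows of Vt are the top-K components.
K = 20
V_K = Vt[:K].T            # projection matrix, shape (D, K)

# 4. Dimensionality reduction: each row of Z holds the K coefficients
#    ("scores") of one PSF in the principal component space.
Z = X_centered @ V_K      # shape (N, K)

# A PSF can be approximately reconstructed from its coefficients:
X_approx = Z @ V_K.T + X_scaled.mean(axis=0)
print(Z.shape, eigvals[:5])
```

If the library referenced above is scikit-learn, `PCA(n_components=K).fit_transform(X_scaled)` yields the same `Z` (up to component sign), since `sklearn.decomposition.PCA` likewise centers the data and solves the problem via SVD.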

We utilized a state-of-the-art optical processor, Jiuzhang 4...