This function invokes a pre-trained Long Short-Term Memory (LSTM) network that can reliably determine the number of factors. The maximum number of factors the network can identify is 10. The LSTM model is implemented in Python and trained with PyTorch (https://pytorch.org/) using CUDA 12.6 for acceleration. After training, the LSTM was saved as an LSTM.onnx file. The LSTM function performs inference by loading the LSTM.onnx file in both the Python and R environments. Therefore, please note that Python (suggested >= 3.11) and the libraries numpy and onnxruntime are required. @seealso check_python_libraries

To run this function, Python (suggested >= 3.11) is required, along with the numpy and onnxruntime libraries. See Details and Note for more information.

LSTM(
  response,
  cor.type = "pearson",
  use = "pairwise.complete.obs",
  vis = TRUE,
  plot = TRUE
)

Arguments

response

A required N × I matrix or data.frame consisting of the responses of N individuals to I items.

cor.type

A character string indicating which correlation coefficient (or covariance) is to be computed. One of "pearson" (default), "kendall", or "spearman". @seealso cor.

use

An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). @seealso cor.

vis

A Boolean variable that controls whether the factor retention results are printed: they are printed when TRUE and suppressed when FALSE. (default = TRUE)

plot

A Boolean variable that controls whether the LSTM plot is printed: it is printed when TRUE and suppressed when FALSE. (default = TRUE)

Value

An object of class LSTM, which is a list containing the following components:

nfact

The number of factors to be retained.

features

A 1 × 20 matrix containing all the features used by the LSTM to determine the number of factors.

probability

A 1 × 10 matrix containing the probabilities for factor numbers 1 through 10, where the value in the \(f\)-th column is the probability that the number of factors underlying the response is \(f\).
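The following is a hypothetical usage sketch; the response matrix is simulated here purely for illustration, and any N × I numeric matrix or data.frame can be used:

## Hypothetical usage sketch; requires a working Python environment
## with numpy and onnxruntime (see Note).
set.seed(123)
response <- matrix(rnorm(500 * 20), nrow = 500, ncol = 20)  # N = 500, I = 20
res <- LSTM(response, cor.type = "pearson", use = "pairwise.complete.obs")
res$nfact        # the estimated number of factors
res$features     # the 1 x 20 feature matrix fed to the network
res$probability  # the 1 x 10 matrix of class probabilities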

Details

A total of 1,000,000 datasets (data.datasets.LSTM) were simulated to extract features for training the LSTM. Each dataset was generated following the methods described by Auerswald & Moshagen (2019) and Goretzko & Bühner (2020), with the following specifications:

  • Factor number: F ~ U[1,10]

  • Sample size: N ~ U[100,1000]

  • Number of variables per factor: vpf ~ U[3,10]

  • Factor correlation: fc ~ U[0.0,0.5]

  • Primary loadings: pl ~ U[0.35,0.80]

  • Cross-loadings: cl ~ U[-0.2,0.2]

A population correlation matrix was created for each dataset based on the following decomposition: $$\mathbf{\Sigma} = \mathbf{\Lambda} \mathbf{\Phi} \mathbf{\Lambda}^T + \mathbf{\Delta}$$ where \(\mathbf{\Lambda}\) is the loading matrix, \(\mathbf{\Phi}\) is the factor correlation matrix, and \(\mathbf{\Delta}\) is a diagonal matrix whose diagonal elements are \(\Delta_{ii} = 1 - (\mathbf{\Lambda} \mathbf{\Phi} \mathbf{\Lambda}^T)_{ii}\). The purpose of \(\mathbf{\Delta}\) is to ensure that the diagonal elements of \(\mathbf{\Sigma}\) are 1.
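The R sketch below illustrates this decomposition under the simulation settings listed above. The independent-cluster loading pattern and uniform cross-loadings are simplifying assumptions for illustration, not the package's exact data generator:

## Sketch of the population correlation matrix decomposition.
set.seed(1)
nf  <- sample(1:10, 1)                  # factor number F ~ U[1,10]
vpf <- sample(3:10, 1)                  # variables per factor, vpf ~ U[3,10]
I   <- nf * vpf                         # total number of variables
fc  <- runif(1, 0.0, 0.5)               # factor correlation
Lambda <- matrix(runif(I * nf, -0.2, 0.2), I, nf)  # cross-loadings cl ~ U[-0.2,0.2]
for (f in 1:nf) {                       # primary loadings pl ~ U[0.35,0.80]
  rows <- ((f - 1) * vpf + 1):(f * vpf)
  Lambda[rows, f] <- runif(vpf, 0.35, 0.80)
}
Phi <- matrix(fc, nf, nf)               # factor correlation matrix
diag(Phi) <- 1
Sigma <- Lambda %*% Phi %*% t(Lambda)
Delta <- diag(1 - diag(Sigma))          # Delta_ii = 1 - (Lambda Phi Lambda^T)_ii
Sigma <- Sigma + Delta                  # diagonal elements of Sigma are now 1
## (A real generator would also check that all communalities stay below 1.)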

The response data for each subject were simulated using the following formula: $$X_i = L_i + \epsilon_i, \quad 1 \leq i \leq I$$ where \(L_i\) follows a normal distribution \(N(0, \sigma)\), representing the contribution of the latent factors, and \(\epsilon_i\) is a residual term following the standard normal distribution. \(L_i\) and \(\epsilon_i\) are uncorrelated, and the \(\epsilon_i\) are uncorrelated with each other.
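Continuing the sketch above, one common way to realize this model is to draw \(N\) rows of uncorrelated standard normal variables and mix them through a Cholesky factor of \(\mathbf{\Sigma}\); this is an illustrative assumption, not necessarily the package's exact procedure:

## Sample N subjects whose population correlation matrix is Sigma.
N <- sample(100:1000, 1)                # sample size N ~ U[100,1000]
Z <- matrix(rnorm(N * I), N, I)         # uncorrelated standard normals
X <- Z %*% chol(Sigma)                  # N x I responses with cor(X) ~= Sigma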

For each simulated dataset, two types of features were extracted (@seealso extractor.feature), as illustrated in the sketch after the list below. These features are as follows:

(1)

The top 10 largest eigenvalues.

(2)

The differences between the top 10 largest eigenvalues and the corresponding reference eigenvalues from Parallel Analysis (PA). @seealso PA
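Continuing the sketches above, the following illustrates how such a 20-dimensional feature vector could be computed. The PA reference eigenvalues here are means over random normal datasets of the same size, which may differ from the package's extractor.feature and PA implementations:

## Sketch: top-10 eigenvalues of the sample correlation matrix, plus
## their differences from Parallel Analysis reference eigenvalues.
## (Assumes I >= 10; shorter eigenvalue vectors would need padding.)
R.obs  <- cor(X, method = "pearson", use = "pairwise.complete.obs")
eig    <- sort(eigen(R.obs, symmetric = TRUE)$values, decreasing = TRUE)
n.rep  <- 100
eig.pa <- rowMeans(replicate(n.rep,
  sort(eigen(cor(matrix(rnorm(N * I), N, I)), symmetric = TRUE)$values,
       decreasing = TRUE)))
features <- c(eig[1:10], eig[1:10] - eig.pa[1:10])  # the 20 features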

The two types of features above were treated as sequence data with a sequence length of 10 (one value from each feature type per time step) to train the LSTM model, resulting in a final classification accuracy of 0.847.

The LSTM model is implemented in Python and trained with PyTorch (https://download.pytorch.org/whl/cu126) using CUDA 12.6 for acceleration. After training, the LSTM was saved as an LSTM.onnx file. The LSTM function performs inference by loading the LSTM.onnx file in both the Python and R environments.
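A minimal sketch of the R-side inference path via reticulate and onnxruntime is shown below, using the feature vector from the sketch above. The input layout (1 batch × 10 time steps × 2 feature channels) and the interpretation of the output as the 1 × 10 probability matrix are assumptions based on the description above, not verified internals of LSTM.onnx:

## Sketch of ONNX inference from R via reticulate + onnxruntime.
library(reticulate)
np   <- import("numpy")
ort  <- import("onnxruntime")
sess <- ort$InferenceSession("LSTM.onnx")
## Reshape the 20 features to (batch = 1, steps = 10, channels = 2);
## the real input shape of LSTM.onnx may differ.
x <- np$array(array(features, dim = c(1, 10, 2)), dtype = "float32")
input.name <- sess$get_inputs()[[1]]$name
out  <- sess$run(NULL, setNames(list(x), input.name))
prob <- out[[1]]                        # assumed 1 x 10 probability output
nfact <- which.max(prob)                # the estimated number of factors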

Note

Note that Python (suggested >= 3.11) and the libraries numpy and onnxruntime are required.

First, please ensure that Python is installed on your computer and that Python is included in the system's PATH environment variable. If not, please download and install it from the official website (https://www.python.org/).

If you encounter an error when running this function stating that the numpy and onnxruntime modules are missing:

Error in py_module_import(module, convert = convert) :
  ModuleNotFoundError: No module named 'numpy'

or

Error in py_module_import(module, convert = convert) :
  ModuleNotFoundError: No module named 'onnxruntime'

this means that the numpy or onnxruntime library is missing from your Python environment. The check_python_libraries function can help you install these two dependency libraries.

Of course, you can also install numpy and onnxruntime directly without using the check_python_libraries function. On Windows, run pip install numpy or pip install onnxruntime in Command Prompt or Windows PowerShell; on macOS, run the same commands in Terminal. On Linux, please ensure that pip is installed, then use the same commands to install the missing libraries.
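Because the errors above are raised by reticulate's py_module_import, you can also verify the Python setup directly from R before calling LSTM:

## Quick diagnostic: check that Python and the required modules are visible.
library(reticulate)
py_config()                          # shows which Python installation will be used
py_module_available("numpy")         # should return TRUE
py_module_available("onnxruntime")   # should return TRUE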

References

Auerswald, M., & Moshagen, M. (2019). How to determine the number of factors to retain in exploratory factor analysis: A comparison of extraction methods under realistic conditions. Psychological Methods, 24(4), 468-491. https://doi.org/10.1037/met0000200.

Goretzko, D., & Bühner, M. (2020). One model to rule them all? Using machine learning algorithms to determine the number of factors in exploratory factor analysis. Psychological Methods, 25(6), 776-786. https://doi.org/10.1037/met0000262.

Author

Haijiang Qin <Haijiang133@outlook.com>