This function extracts the features required by the pre-trained deep neural network (DNN). See DNN_predictor.

extractor.feature.DNN(
  response,
  cor.type = "pearson",
  use = "pairwise.complete.obs"
)

Arguments

response

A required N × I matrix or data.frame consisting of the responses of N individuals to I items.

cor.type

A character string indicating which correlation coefficient (or covariance) is to be computed: one of "pearson" (default), "kendall", or "spearman". See cor.

use

An optional character string giving a method for computing covariances in the presence of missing values. This must be one of the strings "everything", "all.obs", "complete.obs", "na.or.complete", or "pairwise.complete.obs" (default). See cor.
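
Both cor.type and use are presumably passed on to cor() when the inter-item correlation matrix is computed. A minimal sketch of that call under this assumption (object names are illustrative only):

## Illustrative only: how cor.type and use map onto a base-R cor() call.
## 'response' is an N x I response matrix that may contain missing values.
R <- cor(response,
         method = "pearson",                # cor.type
         use    = "pairwise.complete.obs")  # use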

Value

A 1 × 54 matrix containing all the features used by the DNN to determine the number of factors.

Details

Two types of features (six kinds, 54 features in total) are extracted, as follows:

1. Clustering-Based Features

(1)

Hierarchical clustering is performed using the correlation coefficients as dissimilarities. The 9 largest tree-node heights are extracted, each height is divided by the largest height, and the normalized heights of the 2nd through 9th nodes are used as features. See EFAhclust.

(2)

Hierarchical clustering is performed using the Euclidean distance as dissimilarity. The 9 largest tree-node heights are extracted, each height is divided by the largest height, and the normalized heights of the 2nd through 9th nodes are used as features. See EFAhclust.

(3)

K-means clustering is applied with the number of clusters ranging from 1 to 9. The within-cluster sums of squares (WSS) for 2 to 9 clusters are each divided by the WSS for a single cluster, and these ratios are used as features. See EFAkmeans.

These three kinds of features are based on clustering algorithms. The division serves to normalize the values: the raw clustering metrics often carry information unrelated to the number of factors, such as the number of items and the number of respondents, and normalization removes it. Only the 2nd through 9th values are used because at most the top F-1 values are needed to determine the number of factors F, while the first value is always 1 after division and therefore carries no information. Excluding it also keeps the model simple. A conceptual sketch of these computations is given below.
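The following base-R sketch illustrates one way such normalized clustering features could be computed. It is only an outline under explicit assumptions (1 - r is used as the correlation-based dissimilarity, and items are clustered on their raw response profiles); the package's EFAhclust and EFAkmeans functions may differ in these details.

## Conceptual sketch of the clustering-based features (assumes a complete
## N x I response matrix 'response' with no missing values).
R <- cor(response)

## (1) hierarchical clustering with a correlation-based dissimilarity
##     (1 - r is an assumption here; EFAhclust may use another transformation)
h.cor <- hclust(as.dist(1 - R))
height.cor <- sort(h.cor$height, decreasing = TRUE)[1:9]
feat.hclust.cor <- (height.cor / max(height.cor))[2:9]    # 8 features

## (2) hierarchical clustering with Euclidean distances between items
h.euc <- hclust(dist(t(response)))
height.euc <- sort(h.euc$height, decreasing = TRUE)[1:9]
feat.hclust.euc <- (height.euc / max(height.euc))[2:9]    # 8 features

## (3) k-means with 1 to 9 clusters; WSS ratios relative to a single cluster
wss <- sapply(1:9, function(k) kmeans(t(response), centers = k)$tot.withinss)
feat.kmeans <- (wss / wss[1])[2:9]                        # 8 features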

2. Traditional Exploratory Factor Analysis Features (Eigenvalues)

(4)

The 10 largest eigenvalues.

(5)

The ratios of the 10 largest eigenvalues to their corresponding reference eigenvalues from the Empirical Kaiser Criterion (EKC; Braeken & van Assen, 2017). See EKC.

(6)

The cumulative proportions of variance accounted for by the 10 largest eigenvalues.

Only the 10 largest values are used in each case, again to keep the model simple; a conceptual sketch of these eigenvalue-based features is given below.
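
The eigenvalue-based features can be sketched as follows, assuming a complete N x I response matrix. The EKC reference eigenvalues here are computed from the formula in Braeken and van Assen (2017); the package's EKC function should be taken as authoritative where the two differ.

## Conceptual sketch of the eigenvalue-based features.
R <- cor(response)
N <- nrow(response)
I <- ncol(response)
eigs <- eigen(R, symmetric = TRUE, only.values = TRUE)$values  # decreasing

## (4) the 10 largest eigenvalues
feat.eigen <- eigs[1:10]

## (5) ratios of the eigenvalues to their EKC reference eigenvalues
##     (reference formula following Braeken & van Assen, 2017)
ref <- numeric(I)
for (j in seq_len(I)) {
  ref[j] <- max((1 + sqrt(I / N))^2 *
                (I - sum(eigs[seq_len(j - 1)])) / (I - j + 1), 1)
}
feat.ekc <- eigs[1:10] / ref[1:10]

## (6) cumulative proportions of variance of the 10 largest eigenvalues
##     (the eigenvalues of a correlation matrix sum to I)
feat.cumvar <- cumsum(eigs)[1:10] / I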

See also

DNN_predictor, EFAhclust, EFAkmeans, EKC, cor

Author

Haijiang Qin <Haijiang133@outlook.com>

Examples

library(EFAfactors)
set.seed(123)

## Take the data.bfi dataset as an example.
data(data.bfi)

response <- as.matrix(data.bfi[, 1:25]) ## extract the 25 item responses
response <- na.omit(response) ## Remove samples with NA/missing values

## Transform the scores of reverse-scored items to normal scoring
response[, c(1, 9, 10, 11, 12, 22, 25)] <- 6 - response[, c(1, 9, 10, 11, 12, 22, 25)] + 1


## Run extractor.feature.DNN function with default parameters.
# \donttest{
 features <- extractor.feature.DNN(response)

 print(features)
#>  [1] 5.1343112 2.7518867 2.1427020 1.8523276 1.5481628 1.0735825 0.8395389
#>  [8] 0.7992062 0.7189892 0.6880888 4.2331814 2.7410872 2.1427020 1.8523276
#> [15] 1.5481628 1.0735825 0.8395389 0.7992062 0.7189892 0.6880888 0.2053724
#> [22] 0.3154479 0.4011560 0.4752491 0.5371756 0.5801189 0.6137005 0.6456687
#> [29] 0.6744283 0.7019518 0.8153791 0.6602762 0.5712842 0.4645855 0.4052522
#> [36] 0.3707678 0.3372263 0.3159913 0.4994727 0.3963800 0.3596445 0.2752157
#> [43] 0.2400672 0.2196391 0.1997694 0.1871900 0.8184622 0.7254026 0.6644626
#> [50] 0.5847990 0.5419842 0.5027078 0.4682304 0.4274335
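
 ## A hypothetical variant (not part of the original examples): the same
 ## feature extraction using Spearman correlations, e.g. for ordinal item scores.
 features.sp <- extractor.feature.DNN(response, cor.type = "spearman")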

# }