CDM: A function to estimate parameters of cognitive diagnosis models by the MMLE/EM (de la Torre, 2009; de la Torre, 2011)
or MMLE/BM (Ma & Jiang, 2020) algorithm. The function imports various functions from the GDINA package
to perform and extend parameter estimation for Cognitive Diagnosis Models (CDMs). The CDM function not
only performs parameter estimation for the most commonly used models (e.g., GDINA, DINA, DINO,
ACDM, LLM, or rRUM), but also incorporates Bayes modal estimation
(BM; Ma & Jiang, 2020) to obtain more reliable estimates, especially in small samples.
Monotonic constraints can also be imposed during estimation.
CDM(
  Y,
  Q,
  model = "GDINA",
  method = "EM",
  att.str = NULL,
  mono.constraint = FALSE,
  maxitr = 2000,
  verbose = 1
)
Y: A required \(N\) × \(I\) matrix or data.frame consisting of the responses of \(N\) individuals
to \(I\) items. Missing values need to be coded as NA.
Q: A required binary \(I\) × \(K\) matrix specifying which attributes are required to master
each item. The \(i\)th row of the matrix is a binary indicator vector indicating which
attributes are not required (coded by 0) and which attributes are required (coded by 1) to master
item \(i\).
model: Type of model to be fitted; can be "GDINA", "LCDM", "DINA", "DINO", "ACDM",
"LLM", or "rRUM". Default = "GDINA".
method: Type of method used to estimate the CDM parameters; one of "EM" and "BM". Default = "EM".
Note that "BM" is only available when model = "GDINA".
att.str: Specifies the attribute structure. NULL (default) means there is no structure. An attribute structure
needs to be specified as a list, which is handled internally by the att.structure function;
it can also be a matrix giving all permissible attribute profiles. See the sketch after the
argument descriptions and Example 3.
mono.constraint: Logical indicating whether monotonicity constraints should be satisfied during estimation.
Default = FALSE.
maxitr: A vector with one element per item or nonzero category, or a scalar applied to all items,
specifying the maximum number of EM or BM cycles allowed. Default = 2000.
verbose: Can be 0, 1, or 2, indicating whether to print no information, information
for the current iteration, or information for all iterations. Default = 1.
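For orientation, the following is a minimal sketch of a call that uses these arguments together. The attribute hierarchy list mirrors the A1 -> A2 and A2 -> A3 structure used in Example 3 below; Y and Q are assumed to be a response matrix and Q-matrix of the form described above.
att.str <- list(c(1, 2),   # attribute 1 is a prerequisite of attribute 2
                c(2, 3))   # attribute 2 is a prerequisite of attribute 3
CDM.obj <- CDM(Y, Q, model = "GDINA", method = "EM",
               att.str = att.str, mono.constraint = TRUE,
               maxitr = 2000, verbose = 1)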
An object of class CDM containing the following components:
analysis.obj: A GDINA object obtained from the GDINA package, or a list returned by the BM algorithm, depending on which estimation method is used.
alpha: Individuals' attribute parameters estimated by the EAP method.
P.alpha.Xi: Individuals' posterior probabilities of the attribute mastery patterns.
alpha.P: Individuals' marginal attribute mastery probability matrix.
P.alpha: Attribute prior weights used for calculating the marginalized likelihood in the last iteration.
model.fit: Some basic model-fit indices, including Deviance, npar, AIC, and BIC. See also fit.
pattern: The attribute mastery pattern matrix containing all possible attribute mastery patterns.
arguments: A list containing all input arguments.
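After fitting a model (see the examples below), these components can be accessed directly from the returned object. The lines below are a minimal sketch assuming CDM.obj is such a fitted object.
head(CDM.obj$alpha)        # EAP-based attribute profiles, one row per examinee
head(CDM.obj$alpha.P)      # marginal attribute mastery probabilities
head(CDM.obj$P.alpha.Xi)   # posterior probabilities over attribute patterns
CDM.obj$P.alpha            # attribute prior weights from the last iteration
CDM.obj$model.fit          # Deviance, npar, AIC, and BIC
CDM.obj$pattern            # all attribute mastery patterns considered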
CDMs are statistical models that fully integrate cognitive structure variables. They define examinees' response probabilities on items by making assumptions about how the required attributes interact. For dichotomous items, this probability is the probability of a correct response. Depending on how specific or general their assumptions are, CDMs can be divided into reduced CDMs and saturated CDMs.
Reduced CDMs make specific assumptions about the mechanisms of attribute interaction, so the interactions between attributes are explicitly defined. Representative reduced models include the Deterministic Input, Noisy and Gate (DINA) model (Haertel, 1989; Junker & Sijtsma, 2001; de la Torre & Douglas, 2004), the Deterministic Input, Noisy or Gate (DINO) model (Templin & Henson, 2006), the Additive Cognitive Diagnosis Model (A-CDM; de la Torre, 2011), and the reduced Reparametrized Unified Model (rRUM; Hartz, 2002), among others. Compared to reduced models, saturated models, such as the Log-Linear Cognitive Diagnosis Model (LCDM; Henson et al., 2009) and the generalized Deterministic Input, Noisy and Gate model (G-DINA; de la Torre, 2011), do not impose strict assumptions on the mechanisms of attribute interaction. When appropriate constraints are applied, saturated models can be reduced to the various reduced models (Henson et al., 2009; de la Torre, 2011).
The LCDM is a saturated CDM proposed within the framework of cognitive diagnosis. Unlike reduced models, which only consider the main effects of attributes, it also models the interactions between attributes and therefore makes more general assumptions. It defines the probability of a correct response as follows: $$ P(X_{pi}=1|\boldsymbol{\alpha}_{l}) = \frac{\exp \left[\lambda_{i0} + \boldsymbol{\lambda}_{i}^{T} \boldsymbol{h} (\boldsymbol{q}_{i}, \boldsymbol{\alpha}_{l}) \right]} {1 + \exp \left[\lambda_{i0} + \boldsymbol{\lambda}_{i}^{T} \boldsymbol{h} (\boldsymbol{q}_{i}, \boldsymbol{\alpha}_{l}) \right]} $$ $$ \boldsymbol{\lambda}_{i}^{T} \boldsymbol{h}(\boldsymbol{q}_{i}, \boldsymbol{\alpha}_{l}) = \sum_{k=1}^{K^\ast}\lambda_{ik}\alpha_{lk} +\sum_{k=1}^{K^\ast-1}\sum_{k'=k+1}^{K^\ast} \lambda_{ikk'}\alpha_{lk}\alpha_{lk'} + \cdots + \lambda_{i12 \cdots K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk} $$ where \(P(X_{pi}=1|\boldsymbol{\alpha}_{l})\) represents the probability of an examinee with attribute mastery pattern \(\boldsymbol{\alpha}_{l}\) (\(l=1,2,\cdots,L\) and \(L=2^{K^\ast}\)) correctly answering item \(i\). Here, \(K^\ast = \sum_{k=1}^{K} q_{ik}\) denotes the number of attributes in the collapsed q-vector, \(\lambda_{i0}\) is the intercept parameter, and \(\boldsymbol{\lambda}_{i}=(\lambda_{i1}, \lambda_{i2}, \cdots, \lambda_{i12}, \cdots, \lambda_{i12{\cdots}K^\ast})\) represents the effect vector of the attributes. Specifically, \(\lambda_{ik}\) is the main effect of attribute \(k\), \(\lambda_{ikk'}\) is the interaction effect between attributes \(k\) and \(k'\), and \(\lambda_{i12{\cdots}K^\ast}\) represents the interaction effect of all required attributes.
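To make the notation concrete, the short base-R sketch below evaluates this logit-link response probability for one hypothetical item measuring \(K^\ast = 2\) attributes; the \(\lambda\) values are purely illustrative, not estimates from any data set.
lambda0  <- -2.0   # intercept
lambda1  <-  1.5   # main effect of attribute 1
lambda2  <-  1.2   # main effect of attribute 2
lambda12 <-  1.8   # interaction between attributes 1 and 2
lcdm.p <- function(alpha) {                      # alpha = collapsed attribute vector of length 2
  eta <- lambda0 + lambda1 * alpha[1] + lambda2 * alpha[2] +
    lambda12 * alpha[1] * alpha[2]
  exp(eta) / (1 + exp(eta))                      # logistic transformation
}
sapply(list(c(0, 0), c(1, 0), c(0, 1), c(1, 1)), lcdm.p)
# roughly 0.12, 0.38, 0.31, 0.92: the probability rises as more required attributes are mastered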
The G-DINA, proposed by de la Torre (2011), is another saturated model that offers three types of link functions: identity link, log link, and logit link, which are defined as follows: $$P(X_{pi}=1|\boldsymbol{\alpha}_{l}) = \delta_{i0} + \sum_{k=1}^{K^\ast}\delta_{ik}\alpha_{lk} +\sum_{k=1}^{K^\ast-1}\sum_{k'=k+1}^{K^\ast}\delta_{ikk'}\alpha_{lk}\alpha_{lk'} + \cdots + \delta_{12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk} $$ $$log \left[P(X_{pi}=1|\boldsymbol{\alpha}_{l}) \right] = v_{i0} + \sum_{k=1}^{K^\ast}v_{ik}\alpha_{lk} +\sum_{k=1}^{K^\ast-1}\sum_{k'=k+1}^{K^\ast}v_{ikk'}\alpha_{lk}\alpha_{lk'} + \cdots + v_{12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk} $$ $$logit \left[P(X_{pi}=1|\boldsymbol{\alpha}_{l}) \right] = \lambda_{i0} + \sum_{k=1}^{K^\ast}\lambda_{ik}\alpha_{lk} +\sum_{k=1}^{K^\ast-1}\sum_{k'=k+1}^{K^\ast}\lambda_{ikk'}\alpha_{lk}\alpha_{lk'} + \cdots + \lambda_{12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk} $$ Where \(\delta_{i0}\), \(v_{i0}\), and \(\lambda_{i0}\) are the intercept parameters for the three link functions, respectively; \(\delta_{ik}\), \(v_{ik}\), and \(\lambda_{ik}\) are the main effect parameters of \(\alpha_{lk}\) for the three link functions, respectively; \(\delta_{ikk'}\), \(v_{ikk'}\), and \(\lambda_{ikk'}\) are the interaction effect parameters between \(\alpha_{lk}\) and \(\alpha_{lk'}\) for the three link functions, respectively; and \(\delta_{i12{\cdots }K^\ast}\), \(v_{i12{\cdots}K^\ast}\), and \(\lambda_{i12{\cdots}K^\ast}\) are the interaction effect parameters of \(\alpha_{l1}{\cdots}\alpha_{lK^\ast}\) for the three link functions, respectively. It can be observed that when the logit link is adopted, the G-DINA model is equivalent to the LCDM model.
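As an illustration of the identity link, the sketch below builds the response probabilities of all \(2^{K^\ast}\) collapsed attribute patterns for one hypothetical item with \(K^\ast = 2\); the \(\delta\) values are made up for demonstration.
delta0  <- 0.10    # intercept: probability for examinees mastering neither attribute
delta1  <- 0.30    # main effect of attribute 1
delta2  <- 0.25    # main effect of attribute 2
delta12 <- 0.30    # interaction effect of attributes 1 and 2
patterns <- expand.grid(a1 = 0:1, a2 = 0:1)      # all collapsed attribute patterns
P <- delta0 + delta1 * patterns$a1 + delta2 * patterns$a2 +
  delta12 * patterns$a1 * patterns$a2
cbind(patterns, P)
# P = 0.10, 0.40, 0.35, 0.95 for patterns (0,0), (1,0), (0,1), (1,1)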
Specifically, the A-CDM can be formulated as: $$P(X_{pi}=1|\boldsymbol{\alpha}_{l}) = \delta_{i0} + \sum_{k=1}^{K^\ast}\delta_{ik}\alpha_{lk} $$
The rRUM can be written as: $$log \left[P(X_{pi}=1|\boldsymbol{\alpha}_{l}) \right] = \lambda_{i0} + \sum_{k=1}^{K^\ast}\lambda_{ik}\alpha_{lk} $$
The item response function for the linear logistic model (LLM) can be given by: $$logit\left[P(X_{pi}=1|\boldsymbol{\alpha}_{l}) \right] = \lambda_{i0} + \sum_{k=1}^{K^\ast}\lambda_{ik}\alpha_{lk} $$
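These reduced models are requested through the model argument of CDM(). The sketch below is a hedged illustration of fitting the A-CDM, LLM, and rRUM to the same simulated data and comparing their fit indices; it reuses the simulation helpers shown in the examples and assumes the default attribute distribution of sim.data.
library(Qval)
set.seed(123)
Q   <- sim.Q(3, 20)                              # 3 attributes, 20 items
IQ  <- list(P0 = runif(20, 0.0, 0.2),            # item quality, as in the examples below
            P1 = runif(20, 0.8, 1.0))
dat <- sim.data(Q = Q, N = 300, IQ = IQ, model = "GDINA")$dat
fit.acdm <- CDM(dat, Q, model = "ACDM", method = "EM", verbose = 0)
fit.llm  <- CDM(dat, Q, model = "LLM",  method = "EM", verbose = 0)
fit.rrum <- CDM(dat, Q, model = "rRUM", method = "EM", verbose = 0)
lapply(list(ACDM = fit.acdm, LLM = fit.llm, rRUM = fit.rrum),
       function(x) x$model.fit)                  # Deviance, npar, AIC, BIC for each model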
In the DINA model, every item is characterized by two key parameters: guessing (g) and slipping (s). Within the traditional parameterization of the DINA model, a latent variable \(\eta\), specific to an examinee with attribute mastery pattern \(\boldsymbol{\alpha}_{l}\) responding to item \(i\), is defined as follows: $$ \eta_{li}=\prod_{k=1}^{K}\alpha_{lk}^{q_{ik}} $$
If an examinee whose attribute mastery pattern is \(\boldsymbol{\alpha}_{l}\) has acquired every attribute required by item \(i\), \(\eta_{li}\) is given a value of 1; if not, \(\eta_{li}\) is set to 0. The DINA model's item response function can be concisely formulated as follows: $$P(X_{pi}=1|\boldsymbol{\alpha}_{l}) = (1-s_i)^{\eta_{li}}g_i^{(1-\eta_{li})} = \delta_{i0}+\delta_{i12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk} $$
\((1-s_i)^{\eta_{li}}g_i^{(1-\eta_{li})}\) is the original expression of the DINA model, while \(\delta_{i0}+\delta_{i12{\cdots}K^\ast}\prod_{k=1}^{K^\ast}\alpha_{lk}\) is the equivalent form obtained by constraining the G-DINA model. Here, \(g_i = \delta_{i0}\) and \(1-s_i = \delta_{i0}+\delta_{i12{\cdots}K^\ast}\).
In contrast to the DINA model, the DINO model assumes that an examinee can correctly respond to an item if he/she has mastered at least one of the item's measured attributes. Like the DINA model, the DINO model also includes guessing and slipping parameters. Therefore, the main difference between DINO and DINA lies in their respective \(\eta_{li}\) formulations. In the DINO model, \(\eta_{li}\) is given by: $$\eta_{li} = 1-\prod_{k=1}^{K}(1 - \alpha_{lk})^{q_{ik}}$$
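The contrast between the conjunctive DINA rule and the disjunctive DINO rule can be seen in the small base-R sketch below; the q-vector, attribute patterns, and item parameters are hypothetical.
q.i   <- c(1, 1, 0)                              # item i requires attributes 1 and 2
alpha <- rbind(c(0, 0, 0), c(1, 0, 1),           # four example attribute mastery patterns
               c(1, 1, 0), c(1, 1, 1))
g.i <- 0.15                                      # guessing parameter
s.i <- 0.10                                      # slipping parameter
eta.dina <- apply(alpha, 1, function(a) prod(a ^ q.i))           # 1 only if ALL required attributes are mastered
eta.dino <- apply(alpha, 1, function(a) 1 - prod((1 - a) ^ q.i)) # 1 if AT LEAST ONE required attribute is mastered
p.dina   <- (1 - s.i) ^ eta.dina * g.i ^ (1 - eta.dina)          # DINA item response function
cbind(eta.dina, eta.dino, p.dina)
# eta.dina = 0, 0, 1, 1; eta.dino = 0, 1, 1, 1; p.dina = 0.15, 0.15, 0.90, 0.90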
de la Torre, J. (2009). DINA Model and Parameter Estimation: A Didactic. Journal of Educational and Behavioral Statistics, 34(1), 115-130. DOI: 10.3102/1076998607309474.
de la Torre, J., & Douglas, J. A. (2004). Higher-order latent trait models for cognitive diagnosis. Psychometrika, 69(3), 333-353. DOI: 10.1007/BF02295640.
de la Torre, J. (2011). The Generalized DINA Model Framework. Psychometrika, 76(2), 179-199. DOI: 10.1007/s11336-011-9207-7.
Haertel, E. H. (1989). Using restricted latent class models to map the skill structure of achievement items. Journal of Educational Measurement, 26(4), 301-323. DOI: 10.1111/j.1745-3984.1989.tb00336.x.
Hartz, S. M. (2002). A Bayesian framework for the unified model for assessing cognitive abilities: Blending theory with practicality (Unpublished doctoral dissertation). University of Illinois at Urbana-Champaign.
Henson, R. A., Templin, J. L., & Willse, J. T. (2009). Defining a Family of Cognitive Diagnosis Models Using Log-Linear Models with Latent Variables. Psychometrika, 74(2), 191-210. DOI: 10.1007/s11336-008-9089-5.
Huebner, A., & Wang, C. (2011). A note on comparing examinee classification methods for cognitive diagnosis models. Educational and Psychological Measurement, 71, 407-419. DOI: 10.1177/0013164410388832.
Junker, B. W., & Sijtsma, K. (2001). Cognitive assessment models with few assumptions, and connections with nonparametric item response theory. Applied Psychological Measurement, 25(3), 258-272. DOI: 10.1177/01466210122032064.
Ma, W., & Jiang, Z. (2020). Estimating Cognitive Diagnosis Models in Small Samples: Bayes Modal Estimation and Monotonic Constraints. Applied Psychological Measurement, 45(2), 95-111. DOI: 10.1177/0146621620977681.
Templin, J. L., & Henson, R. A. (2006). Measurement of psychological disorders using cognitive diagnosis models. Psychological Methods, 11(3), 287-305. DOI: 10.1037/1082-989X.11.3.287.
Tu, D., Chiu, J., Ma, W., Wang, D., Cai, Y., & Ouyang, X. (2022). A multiple logistic regression-based (MLR-B) Q-matrix validation method for cognitive diagnosis models: A confirmatory approach. Behavior Research Methods. DOI: 10.3758/s13428-022-01880-x.
################################################################
# Example 1 #
# use MMLE/EM to fit the GDINA model #
################################################################
set.seed(123)
library(Qval)
## generate Q-matrix and data to fit
K <- 3
I <- 30
Q <- sim.Q(K, I)
IQ <- list(
P0 = runif(I, 0.0, 0.2),
P1 = runif(I, 0.8, 1.0)
)
data.obj <- sim.data(Q = Q, N = 500, IQ = IQ,
model = "GDINA", distribute = "mvnorm")
#> distribute = mvnorm
#> model = GDINA
#> number of attributes: 3
#> number of items: 30
#> num of examinees: 500
#> average of P0 = 0.096
#> average of P1 = 0.909
#> sigma = 0.5
#> cutoffs = 0 0.674 -0.674
## using MMLE/EM to fit GDINA model
CDM.obj <- CDM(data.obj$dat, Q, model = "GDINA",
method = "EM", maxitr = 2000, verbose = 1)
#>
#> Iter = 1 Max. abs. change = 0.58606 Deviance = 14908.15
#> Iter = 2 Max. abs. change = 0.14853 Deviance = 12479.08
#> Iter = 3 Max. abs. change = 0.01178 Deviance = 12454.33
#> Iter = 4 Max. abs. change = 0.00468 Deviance = 12452.42
#> Iter = 5 Max. abs. change = 0.00208 Deviance = 12452.00
#> Iter = 6 Max. abs. change = 0.00099 Deviance = 12451.91
#> Iter = 7 Max. abs. change = 0.00047 Deviance = 12451.89
#> Iter = 8 Max. abs. change = 0.00023 Deviance = 12451.89
#> Iter = 9 Max. abs. change = 0.00011 Deviance = 12451.89
#> Iter = 10 Max. abs. change = 0.00005 Deviance = 12451.89
################################################################
# Example 2 #
# use MMLE/BM to fit the DINA data #
################################################################
# \donttest{
set.seed(123)
library(Qval)
## generate Q-matrix and data to fit
K <- 5
I <- 30
Q <- sim.Q(K, I)
IQ <- list(
P0 = runif(I, 0.0, 0.2),
P1 = runif(I, 0.8, 1.0)
)
data.obj <- sim.data(Q = Q, N = 500, IQ = IQ,
model = "DINA", distribute = "horder")
#> distribute = horder
#> model = DINA
#> number of attributes: 5
#> number of items: 30
#> num of examinees: 500
#> average of P0 = 0.102
#> average of P1 = 0.905
#> theta_mean = -0.064 , theta_sd = 0.994
#> a = 1.5 1.5 1.5 1.5 1.5
#> b = 0 -0.75 -1.5 1.5 0.75
## using MMLE/BM to fit GDINA model
CDM.obj <- CDM(data.obj$dat, Q, model = "GDINA",
method = "BM", maxitr = 1000, verbose = 2)
#> Iter = 1 Max. abs. change = 0.57769 Deviance = 16086.644
#> Iter = 2 Max. abs. change = 0.25431 Deviance = 12359.953
#> Iter = 3 Max. abs. change = 0.20289 Deviance = 12111.890
#> Iter = 4 Max. abs. change = 0.25585 Deviance = 11992.248
#> Iter = 5 Max. abs. change = 0.15919 Deviance = 11908.937
#> Iter = 6 Max. abs. change = 0.04063 Deviance = 11864.146
#> Iter = 7 Max. abs. change = 0.01975 Deviance = 11848.966
#> Iter = 8 Max. abs. change = 0.01129 Deviance = 11843.813
#> Iter = 9 Max. abs. change = 0.00668 Deviance = 11841.916
#> Iter = 10 Max. abs. change = 0.00410 Deviance = 11841.148
#> Iter = 11 Max. abs. change = 0.00258 Deviance = 11840.795
#> Iter = 12 Max. abs. change = 0.00165 Deviance = 11840.613
#> Iter = 13 Max. abs. change = 0.00106 Deviance = 11840.510
#> Iter = 14 Max. abs. change = 0.00069 Deviance = 11840.449
#> Iter = 15 Max. abs. change = 0.00045 Deviance = 11840.412
#> Iter = 16 Max. abs. change = 0.00029 Deviance = 11840.390
#> Iter = 17 Max. abs. change = 0.00019 Deviance = 11840.377
#> Iter = 18 Max. abs. change = 0.00012 Deviance = 11840.369
#> Iter = 19 Max. abs. change = 0.00008 Deviance = 11840.364
#>
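## With small samples, BM estimation can be combined with monotonic constraints,
## as discussed by Ma and Jiang (2020). The call below is an illustrative sketch
## reusing data.obj and Q from this example; verbose = 0 suppresses the iteration log.
CDM.mono <- CDM(data.obj$dat, Q, model = "GDINA",
                method = "BM", mono.constraint = TRUE,
                maxitr = 1000, verbose = 0)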
# }
################################################################
# Example 3 #
# use MMLE/EM to fit the GDINA model with attribute structures #
################################################################
# \donttest{
set.seed(123)
library(Qval)
## generate Q-matrix and data to fit
K <- 3
I <- 30
att.str <- list(c(1,2),
c(2,3))
Q <- sim.Q(K, I, att.str)
IQ <- list(
P0 = runif(I, 0.0, 0.2),
P1 = runif(I, 0.8, 1.0)
)
data.obj <- sim.data(Q = Q, N = 500, IQ = IQ, att.str=att.str,
model = "GDINA")
#> attribute structure:
#> A1->A2, A2->A3
#> model = GDINA
#> number of attributes: 3
#> number of items: 30
#> num of examinees: 500
#> average of P0 = 0.086
#> average of P1 = 0.899
#> Dirichlet's alpha = 3, 3, 3, 3
#> att.prior:
#> 000 : 0.316
#> 100 : 0.318
#> 110 : 0.129
#> 111 : 0.237
#>
## using MMLE/EM to fit GDINA model with attribute structures
CDM.obj <- CDM(data.obj$dat, Q, model = "GDINA", att.str=att.str,
method = "EM", maxitr = 2000, verbose = 1)
#>
#> Iter = 1 Max. abs. change = 0.52722 Deviance = 15355.30
#> Iter = 2 Max. abs. change = 0.13287 Deviance = 12314.50
#> Iter = 3 Max. abs. change = 0.03133 Deviance = 12266.06
#> Iter = 4 Max. abs. change = 0.01233 Deviance = 12261.57
#> Iter = 5 Max. abs. change = 0.00914 Deviance = 12260.75
#> Iter = 6 Max. abs. change = 0.00770 Deviance = 12260.44
#> Iter = 7 Max. abs. change = 0.00646 Deviance = 12260.24
#> Iter = 8 Max. abs. change = 0.00549 Deviance = 12260.07
#> Iter = 9 Max. abs. change = 0.00498 Deviance = 12259.94
#> Iter = 10 Max. abs. change = 0.00472 Deviance = 12259.82
#> Iter = 11 Max. abs. change = 0.00440 Deviance = 12259.73
#> Iter = 12 Max. abs. change = 0.00406 Deviance = 12259.64
#> Iter = 13 Max. abs. change = 0.00373 Deviance = 12259.57
#> Iter = 14 Max. abs. change = 0.00342 Deviance = 12259.50
#> Iter = 15 Max. abs. change = 0.00313 Deviance = 12259.45
#> Iter = 16 Max. abs. change = 0.00286 Deviance = 12259.40
#> Iter = 17 Max. abs. change = 0.00261 Deviance = 12259.36
#> Iter = 18 Max. abs. change = 0.00239 Deviance = 12259.33
#> Iter = 19 Max. abs. change = 0.00219 Deviance = 12259.30
#> Iter = 20 Max. abs. change = 0.00200 Deviance = 12259.27
#> Iter = 21 Max. abs. change = 0.00184 Deviance = 12259.25
#> Iter = 22 Max. abs. change = 0.00167 Deviance = 12259.23
#> Iter = 23 Max. abs. change = 0.00153 Deviance = 12259.22
#> Iter = 24 Max. abs. change = 0.00141 Deviance = 12259.21
#> Iter = 25 Max. abs. change = 0.00131 Deviance = 12259.20
#> Iter = 26 Max. abs. change = 0.00121 Deviance = 12259.19
#> Iter = 27 Max. abs. change = 0.00112 Deviance = 12259.18
#> Iter = 28 Max. abs. change = 0.00104 Deviance = 12259.17
#> Iter = 29 Max. abs. change = 0.00097 Deviance = 12259.17
#> Iter = 30 Max. abs. change = 0.00090 Deviance = 12259.16
#> Iter = 31 Max. abs. change = 0.00084 Deviance = 12259.15
#> Iter = 32 Max. abs. change = 0.00079 Deviance = 12259.15
#> Iter = 33 Max. abs. change = 0.00074 Deviance = 12259.15
#> Iter = 34 Max. abs. change = 0.00069 Deviance = 12259.14
#> Iter = 35 Max. abs. change = 0.00065 Deviance = 12259.14
#> Iter = 36 Max. abs. change = 0.00061 Deviance = 12259.14
#> Iter = 37 Max. abs. change = 0.00057 Deviance = 12259.13
#> Iter = 38 Max. abs. change = 0.00054 Deviance = 12259.13
#> Iter = 39 Max. abs. change = 0.00051 Deviance = 12259.13
#> Iter = 40 Max. abs. change = 0.00048 Deviance = 12259.13
#> Iter = 41 Max. abs. change = 0.00045 Deviance = 12259.13
#> Iter = 42 Max. abs. change = 0.00043 Deviance = 12259.13
#> Iter = 43 Max. abs. change = 0.00041 Deviance = 12259.12
#> Iter = 44 Max. abs. change = 0.00038 Deviance = 12259.12
#> Iter = 45 Max. abs. change = 0.00036 Deviance = 12259.12
#> Iter = 46 Max. abs. change = 0.00034 Deviance = 12259.12
#> Iter = 47 Max. abs. change = 0.00033 Deviance = 12259.12
#> Iter = 48 Max. abs. change = 0.00031 Deviance = 12259.12
#> Iter = 49 Max. abs. change = 0.00029 Deviance = 12259.12
#> Iter = 50 Max. abs. change = 0.00028 Deviance = 12259.12
#> Iter = 51 Max. abs. change = 0.00027 Deviance = 12259.12
#> Iter = 52 Max. abs. change = 0.00025 Deviance = 12259.12
#> Iter = 53 Max. abs. change = 0.00024 Deviance = 12259.12
#> Iter = 54 Max. abs. change = 0.00023 Deviance = 12259.12
#> Iter = 55 Max. abs. change = 0.00022 Deviance = 12259.12
#> Iter = 56 Max. abs. change = 0.00021 Deviance = 12259.12
#> Iter = 57 Max. abs. change = 0.00020 Deviance = 12259.11
#> Iter = 58 Max. abs. change = 0.00019 Deviance = 12259.11
#> Iter = 59 Max. abs. change = 0.00018 Deviance = 12259.11
#> Iter = 60 Max. abs. change = 0.00017 Deviance = 12259.11
#> Iter = 61 Max. abs. change = 0.00016 Deviance = 12259.11
#> Iter = 62 Max. abs. change = 0.00016 Deviance = 12259.11
#> Iter = 63 Max. abs. change = 0.00015 Deviance = 12259.11
#> Iter = 64 Max. abs. change = 0.00014 Deviance = 12259.11
#> Iter = 65 Max. abs. change = 0.00014 Deviance = 12259.11
#> Iter = 66 Max. abs. change = 0.00013 Deviance = 12259.11
#> Iter = 67 Max. abs. change = 0.00013 Deviance = 12259.11
#> Iter = 68 Max. abs. change = 0.00012 Deviance = 12259.11
#> Iter = 69 Max. abs. change = 0.00011 Deviance = 12259.11
#> Iter = 70 Max. abs. change = 0.00011 Deviance = 12259.11
#> Iter = 71 Max. abs. change = 0.00011 Deviance = 12259.11
#> Iter = 72 Max. abs. change = 0.00010 Deviance = 12259.11
#> Iter = 73 Max. abs. change = 0.00010 Deviance = 12259.11
summary(CDM.obj)
#> ==============================================
#> Number of items = 30
#> Number of attributes = 3
#> Number of individuals = 500
#>
#> Model Fit:
#> Deviance npar AIC BIC
#> 12259.11 95.00 12449.11 12849.50
#>
#> Distribution of Alpha Patterns:
#> 000 100 110 111
#> freq 153 157 47 143
#> prop 0.306 0.314 0.094 0.286
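## Illustrative follow-up (output not shown): with att.str specified, the patterns
## considered in estimation should match the permissible profiles listed in the
## summary above (000, 100, 110, 111).
CDM.obj$pattern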
# }