1 Introduction
A latent variable is one that is not directly observed but is instead inferred from observed behaviour, such as the choices agents make. This paper explains the latent class logit model: what the model does, how it is estimated, and the advantages and disadvantages of using it.
The latent class logit (lclogit) or discrete mixture logit model is implemented in a Stata module that uses the EM algorithm to fit latent class conditional logit models (Pacifico and Yoo, 2013). Traditionally, latent class models are estimated with gradient-based optimization techniques such as the Newton-Raphson or Berndt-Hall-Hall-Hausman (BHHH) algorithm (Berndt et al., 1974). As the number of parameters or latent classes grows, these techniques find it increasingly difficult to maximize the likelihood, and computing the gradient takes more time. Bhat (1997) and Train (2008) showed that the expectation-maximization (EM) algorithm can be used in place of these traditional algorithms, as it makes the estimation numerically more stable and handles models with a large number of parameters efficiently.
Compared with the mixed logit model, the latent class logit model has a lower computational cost and a faster processing time. The EM algorithm iterates until the likelihood converges to a maximum. The discrete mixture approach is also more flexible and easier to implement.
2 EM algorithm for latent class logit
Train (2008) built on Bhat (1997)'s latent class research to show that the EM algorithm can be applied to models with a large number of parameters. This paper follows the EM algorithm as presented in Pacifico and Yoo (2013). Let $N$ be the number of agents, $J$ the number of alternatives, and $T$ the number of choice scenarios, and let $y_{njt} = 1$ if agent $n$ chooses alternative $j$ in situation $t$, and $0$ otherwise.
$$P_n(\beta_c) = \prod_{t=1}^{T} \prod_{j=1}^{J} \left[ \frac{\exp(\beta_c x_{njt})}{\sum_{k=1}^{J} \exp(\beta_c x_{nkt})} \right]^{y_{njt}} \qquad (1)$$
Equation 1 gives the choice probability of a conditional logit model conditional on class membership, where $P_n(\beta_c)$ is the probability of agent $n$'s observed sequence of choices given membership in class $c$, $\beta_c$ is the parameter vector for class $c$, and $c = 1, \dots, C$ indexes the classes.
$$\pi_{cn}(\theta) = \frac{\exp(\theta_c z_n)}{1 + \sum_{l=1}^{C-1} \exp(\theta_l z_n)} \qquad (2)$$
Equation 2 gives the membership probability, or weight, $\pi_{cn}(\theta)$ for class $c$, modelled as a fractional multinomial logit of agent characteristics $z_n$, where $\theta = (\theta_1, \theta_2, \dots, \theta_{C-1})$ denotes the class membership parameters and the parameter of class $C$ is normalized to zero.
$$\ln L(\beta, \theta) = \sum_{n=1}^{N} \ln \sum_{c=1}^{C} \pi_{cn}(\theta) P_n(\beta_c) \qquad (3)$$
Equation 3 shows the sample log likelihood, obtained by summing each agent's log unconditional likelihood over agents, where the unconditional likelihood mixes the class-specific likelihoods over the class shares.
$$\beta^{s+1} = \arg\max_{\beta} \sum_{n=1}^{N} \sum_{c=1}^{C} h_{cn}(\beta^s, \theta^s) \ln P_n(\beta_c), \qquad \theta^{s+1} = \arg\max_{\theta} \sum_{n=1}^{N} \sum_{c=1}^{C} h_{cn}(\beta^s, \theta^s) \ln \pi_{cn}(\theta) \qquad (4)$$
The terms being maximized in Equation 4 are weighted log likelihoods, with each choice situation of each agent treated as an observation. The superscript $s$ indexes the estimates from the $s$th iteration, and $h_{cn}(\beta^s, \theta^s)$ is the posterior probability that agent $n$ belongs to class $c$:
$$h_{cn}(\beta^s, \theta^s) = \frac{\pi_{cn}(\theta^s) P_n(\beta_c^s)}{\sum_{l=1}^{C} \pi_{ln}(\theta^s) P_n(\beta_l^s)} \qquad (5)$$
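For intuition, consider a hypothetical two-class illustration (the numbers are invented for exposition and are not from any data): if the current class shares are 0.6 and 0.4, and agent $n$'s observed choice sequence has likelihood 0.02 under class 1's parameters and 0.08 under class 2's, Equation 5 gives

$$h_{1n} = \frac{0.6 \times 0.02}{0.6 \times 0.02 + 0.4 \times 0.08} = \frac{0.012}{0.044} \approx 0.27, \qquad h_{2n} = \frac{0.032}{0.044} \approx 0.73,$$

so the agent is weighted mostly toward class 2 in the next M step even though class 1 has the larger prior share.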
The updating procedure can be implemented easily in Stata by exploiting the clogit and fmlogit routines as follows. $\beta^{s+1}$ is computed by fitting a conditional logit model (clogit) $C$ times, each time using $h_{cn}(\beta^s, \theta^s)$ for a particular $c$ to weight the observations on each $n$. $\theta^{s+1}$ is obtained by fitting a fractional multinomial logit model (fmlogit) that takes $h_{1n}(\beta^s, \theta^s), h_{2n}(\beta^s, \theta^s), \dots, h_{Cn}(\beta^s, \theta^s)$ as dependent variables. When $z_n$ includes only the constant term, so that each class share is the same for all agents, that is, when $\pi_{cn}(\theta) = \pi_c(\theta)$, each class share can be updated directly with the following analytical solution, without fitting the fractional multinomial logit model:
$$\pi_c(\theta^{s+1}) = \frac{\sum_{n=1}^{N} h_{cn}(\beta^s, \theta^s)}{\sum_{l=1}^{C} \sum_{n=1}^{N} h_{ln}(\beta^s, \theta^s)} \qquad (6)$$
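To make the updating concrete, the following is a minimal sketch of how one M step could be written directly in Stata for a five-class model, assuming the posteriors from Equation 5 have already been stored in variables h1, ..., h5 and that first flags one row per agent (these variable names are hypothetical; lclogit performs all of these steps internally):

    forvalues c = 1/5 {
        * class-c update: conditional logit weighted by the posteriors
        quietly clogit y price contract local wknown tod seasonal ///
            [iweight = h`c'], group(gid)
        matrix beta`c' = e(b)    // updated parameters for class c
    }
    * with a constant-only membership model, Equation 6 reduces to the
    * average posterior across agents, since posteriors sum to one
    forvalues c = 1/5 {
        quietly summarize h`c' if first
        display "updated share of class `c': " r(mean)
    }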
3 The lclogit command
The lclogit Stata command is a user-written program that provides a numerically stable, faster, and more cost-effective method for nonparametric estimation of mixing distributions. These characteristics allow the command to estimate a large number of latent classes in a short period of time. In addition, lclogit works with log probabilities and Stata's generate command, which further reduces estimation time. Because lclogit does not have its own maximum likelihood evaluator, it uses the clogit evaluator.
The results are displayed in a table by using the estimates store and estimates table programs, with the columns labelled by class. If there are 20 or more latent classes, the results are reported as a matrix rather than a table.
Pacifico and Yoo (2013) state that the lclogit command requires certain options. group() and id() take numeric variables that identify the choice occasions and the choice makers, respectively; if cross-sectional data are used, the same variable can be supplied to both. The number of latent classes, nclasses(#), is another required option and is typically selected using the CAIC and BIC information criteria. The convergence(#) option sets the threshold at which convergence of the log likelihood is declared, and iterate(#) sets the maximum number of iterations. The membership(varlist) option specifies the time-constant independent variables for the fractional multinomial logit model of class membership. A sketch of a full invocation is given below.
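As an illustrative sketch combining these options (the convergence and iteration values are invented for exposition, not taken from Pacifico and Yoo; the data and variables are those of Section 6):

• lclogit y price contract local wknown tod seasonal, group(gid) id(pid) nclasses(5) membership(x1) convergence(0.00001) iterate(150) seed(1234567890)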
4 Post-estimation command: lclogitpr
Pacifico and Yoo (2012) state that lclogitpr predicts the probabilities of selecting each alternative in a choice occasion. Its options, illustrated in the example calls after this list, include:
• class(numlist) specifies the classes for which the probabilities are computed;
• pr0 estimates the unconditional choice probability;
• up estimates the class shares, that is, the prior probabilities that the agent belongs to each class.
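Example calls (the stub names p0 and share are arbitrary; the stub-then-option syntax follows the calls used in Section 6, which also uses the cp option to obtain posterior class membership probabilities):

• lclogitpr p0, pr0

• lclogitpr share, up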
5 Post-estimation command: lclogitcov
This command summarizes the heterogeneity in the choice model coefficients across classes by predicting their variances and covariances; an example call follows the option list below.
"The default setting stores the predicted variances in a set of variables named var1, var2, ..., where vark is the predicted variance of the coefficient on the kth variable listed in varlist, and to store the predicted covariances in cov12, cov13, ..., cov23, ..., where covkj is the predicted covariance between the coefficients on the kth variable and the jth variable in varlist" (Pacifico and Yoo, 2012, p. 631).
• nokeep displays the average covariance matrix and drops the predicted variances and covariances after reporting;
• varname(stubname) states that the predicted variances should be saved as stubname1, stubname2, ...;
• covname(stubname) states that the predicted covariances should be saved as stubname12, stubname13, ...;
• matrix(name) stores the reported average covariance matrix in a Stata matrix called name.
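A hypothetical call (the variable list and the matrix name V are arbitrary) that reports and stores the average covariance matrix of the price and contract coefficients:

• lclogitcov price contract, nokeep matrix(V)

• matlist V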
6 Application
The lclogit command is illustrated with the example that Pacifico and Yoo (2013) used to estimate a latent class logit model. The data concern households' choices of electricity supplier. The sample consists of 100 customers, each facing at least 12 choice situations in which exactly one of 4 suppliers must be chosen. The data contain: the price of the contract (price); the length of contract offered by the supplier, in years (contract); whether the supplier is a local company (local); whether the supplier is a well-known company (wknown); whether the supplier offers a time-of-day rate instead of a fixed rate (tod); and whether the supplier offers a seasonal rate instead of a fixed rate (seasonal). The first choice situations are shown in Table 1, where y is the dummy variable for the choice made, and pid and gid are numeric variables identifying the agents and the choice situations, respectively.
Table 1: Variables and Data

       y   price   contract   local   wknown   tod   seasonal   gid   pid   x1
  1.   0       7          5       0        1     0          0     1     1   27
  2.   0       9          1       1        0     0          0     1     1   27
  3.   0       0          0       0        0     0          1     1     1   27
  4.   1       0          5       0        1     1          0     1     1   27
  5.   0       7          0       0        1     0          0     2     1   27
  6.   0       9          5       0        1     0          0     2     1   27
  7.   1       0          1       1        0     1          0     2     1   27
  8.   0       0          5       0        0     0          1     2     1   27
  9.   0       9          5       0        0     0          0     3     1   27
 10.   0       7          1       0        1     0          0     3     1   27
 11.   0       0          0       0        1     1          0     3     1   27
 12.   1       0          0       1        0     0          1     3     1   27
The above table was derived by using the following commands in Stata:
• use http://fmwww.bc.edu/repec/bocode/t/traindata.dta
• set seed 1234567890
• by pid, sort: egen x1=sum(round(rnormal(0.5),1))
• list in 1/12, sepby(gid)
The CAIC and BIC information criteria were used to select the optimal number of latent classes. The estimation results in Table 2 show that the CAIC decreases from 2337.273 to 2292.538 when the fifth class is added and then increases to 2313.103 when the sixth class is added. The BIC behaves the same way, except that its turning point occurs at eight latent classes. This example uses the 5 classes selected by the CAIC. The following commands were used in Stata:
• forvalues c = 2/10 {

• quietly lclogit y price contract local wknown tod seasonal, group(gid) id(pid) nclasses(`c') membership(x1) seed(1234567890)

• matrix b = e(b)

• matrix ic = nullmat(ic) \ `e(nclasses)', `e(ll)', `=colsof(b)', `e(caic)', `e(bic)'

• }

(output omitted)

• matrix colnames ic = "Classes" "LLF" "Nparam" "CAIC" "BIC"

• matlist ic, name(columns)
Table 2: Number of Class Selection

  Classes         LLF   Nparam      CAIC       BIC
        2   -1211.232       14  2500.935  2486.935
        3   -1117.521       22  2258.356  2336.356
        4   -1084.559       30  2337.273  2307.273
        5   -1039.771       38  2292.538  2254.538
        6   -1027.633       46  2313.103  2267.103
        7    -999.9628      54  2302.605  2248.605
        8    -987.7199      62  2322.96   2260.96
        9    -985.1933      70  2362.748  2292.748
       10    -966.3487      78  2369.901  2291.901
Table 3 reports the estimated model with 5 classes. Class 1 is the largest class, with an average share of 30 percent. Because the membership model includes x1, the class shares are agent specific, so the reported shares are averages over agents; they are computed with the lclogitpr command as follows:
• by `e(id)', sort: generate first = _n==1

• lclogitpr cp, cp

• egen double cpmax = rowmax(cp1-cp5)
Table 3: Choice model parameters and average class shares

  Variable      Class1    Class2    Class3    Class4    Class5
  price         -0.315    -0.562    -0.887    -1.497    -0.762
  contract       0.025    -0.083    -0.470    -0.380    -0.538
  local          3.072     4.512     0.400     0.803     0.526
  wknown         2.256     3.405     0.424     1.075     0.317
  tod           -2.183    -7.872    -8.245   -15.229    -5.356
  seasonal      -2.484    -7.705    -6.225   -14.419    -7.760
  Class Share    0.300     0.174     0.112     0.254     0.160

  Class membership model (class 5 is the base category)
  Variable      Class1    Class2    Class3    Class4    Class5
  x1            -0.011     0.024    -0.022    -0.027     0.000
  _cons          0.902    -0.556     0.172     1.119     0.000
• summarize cpmax if first, sep(0)
The lclogitpr command can also be used to assess how well the model differentiates the preferences of the classes. The mean highest posterior probability of about 0.96, shown in Table 4, indicates that the model does a good job of distinguishing the preferences of each class. The model itself was fit with the following Stata command: lclogit y price contract local wknown tod seasonal, group(gid) id(pid) nclasses(5) membership(x1) seed(1234567890)
Table 4: Fitness of Model

  Variable   Obs       Mean   Std. Dev.        Min   Max
  cpmax      100   .9596674    .0860159   .5899004     1
Each respondent is assigned to the class that gives that agent the "highest posterior probability" (Pacifico and Yoo, 2013). This classification makes it possible to examine how well the model predicts the choice outcomes within each class: for each class, both the unconditional and the conditional (on membership in that class) choice probabilities of the chosen alternatives are computed.
• lclogitpr pr, pr

• generate byte class = .
  (4780 missing values generated)

• forvalues c = 1/`e(nclasses)' {

• quietly replace class = `c' if cpmax==cp`c'

• }

• forvalues c = 1/`e(nclasses)' {

• quietly summarize pr if class == `c' & y==1

• local n=r(N)

• local a=r(mean)

• quietly summarize pr`c' if class == `c' & y==1

• local b=r(mean)

• matrix pr = nullmat(pr) \ `n', `c', `a', `b'

• }

• matrix colnames pr = "Obs" "Class" "Uncond Pr" "Cond PR"

• matlist pr, name(columns)
Table 5 shows the conditional and unconditional choice probabilities of the model. Pacifico and Yoo (2013) state that the benchmark average conditional probability is 0.5, while the benchmark unconditional probability is 0.25. The probabilities in the table are higher than these benchmarks, which indicates that the model predicts the observed choice situations well. The following Stata code was used:
• matrix list e(PB)

(output omitted)
Table 5: Conditional and Unconditional Probabilities

  Obs   Class   Uncond Pr    Cond Pr
  129       1    .3364491   .5387555
  336       2    .3344088   .4585939
  191       3    .3407353   .5261553
  300       4    .4562778   .7557497
  239       5    .4321717   .6582177
7 Conclusion
lclogit is a Stata command that uses the EM algorithm to fit latent class conditional logit models with a discrete mixing distribution. The EM algorithm allows models with many parameters to be estimated in a shorter period of time, at a lower computational cost, and with greater numerical stability than gradient-based methods. The accompanying post-estimation commands can be used to assess how well the model differentiates the preferences of the classes, and the CAIC and BIC information criteria are used to select the number of latent classes.
References
Berndt, E. R., Hall, B. H., Hall, R. E., and Hausman, J. A. (1974). Estimation and inference in nonlinear structural models. In Annals of Economic and Social Measurement, Volume 3, Number 4, pages 653-665. NBER.

Bhat, C. R. (1997). An endogenous segmentation mode choice model with an application to intercity travel. Transportation Science, 31(1):34-48.

Pacifico, D. and Yoo, H. I. (2012). A Stata module for estimating latent class conditional logit models via the expectation-maximization algorithm. Technical report, School of Economics, The University of New South Wales.

Pacifico, D. and Yoo, H. I. (2013). lclogit: A Stata command for fitting latent-class conditional logit models via the expectation-maximization algorithm. The Stata Journal, 13(3):625-639.

Train, K. E. (2008). EM algorithms for nonparametric estimation of mixing distributions. Journal of Choice Modelling, 1(1):40-69.
A systematic comparison of continuous and discrete mixture models

Stephane Hess (Centre for Transport Studies, Imperial College London, [email protected], Tel: +44 (0)20 7594 6105, Fax: +44 (0)20 7594 6102)

Michel Bierlaire (Transport and Mobility Laboratory, School of Civil and Environmental Engineering, École Polytechnique Fédérale de Lausanne, [email protected], Tel: +41 (0)21 693 25 37, Fax: +41 (0)21 693 55 70)

John W. Polak (Centre for Transport Studies, Imperial College London, [email protected], Tel: +44 (0)20 7594 6089, Fax: +44 (0)20 7594 6102)

November 17, 2006

Report TRANSP-OR 061117
Transport and Mobility Laboratory
School of Architecture, Civil and Environmental Engineering
École Polytechnique Fédérale de Lausanne
transp-or.epfl.ch
Abstract

Modellers are increasingly relying on the use of continuous random coefficients models, such as Mixed Logit, for the representation of variations in tastes across individuals. In this paper, we provide an in-depth comparison of the performance of the Mixed Logit model with that of its far less commonly used discrete mixture counterpart, making use of a combination of real and simulated datasets. The results not only show significant computational advantages for the discrete mixture approach, but also highlight greater flexibility, and show that, across a host of scenarios, the discrete mixture models are able to offer comparable or indeed superior model performance.
1 Introduction and context

Allowing for variations in behaviour across decision makers is one of the most fundamental principles in discrete choice modelling, given that the assumption of a purely homogeneous population cannot in general be seen to be valid. The typical way of allowing for such variation is through a deterministic approach, linking the taste heterogeneity to variations in socio-demographic factors such as income or trip purpose.

While appealing from the point of view of interpretation (and especially for forecasting), it is often not possible to represent all variations in tastes in a deterministic fashion, for reasons of data quality, but also due to inherent randomness in choice behaviour. For this reason, random coefficient structures, such as the Mixed Multinomial Logit (MMNL) model, which allow for random variations in behaviour across respondents, have an important advantage in terms of flexibility. In general, such models have the disadvantage that their choice probabilities take on the form of integrals that do not possess a closed form solution, such that numerical processes, typically simulation, are required during estimation and application of the models. This greatly limited the use of these structures for many years after their initial developments. Over recent years, gains in computer speed and the efficiency of simulation based estimation processes (see for example Hess et al., 2006) have however led to increased interest in the MMNL model in particular, by researchers and, to a lesser degree, also practitioners.

Despite the improvements in estimation capability, the cost of using the MMNL model remains high. While this might be acceptable in many cases, another important issue remains, namely the choice of distribution to be used for representing the random variations in tastes across respondents. Here, there is a major risk of producing misleading results when making an inappropriate choice of distribution, as discussed by Hess et al. (2005). In this paper, we explore an alternative approach, based on the idea of replacing the continuous distribution functions by discrete distributions, spreading the mass among several discrete values. Mathematically, the model structure of a DM model is a special case of a latent class model (cf. Kamakura and Russell, 1989; Chintagunta et al., 1991), assigning different coefficient values to different parts of the population of respondents, a concept discussed in the field of transport studies for example by Greene and Hensher (2003) and Lee et al. (2003). Latent class approaches make use of two sub-models, one for class allocation, and one for within class choice. The former models the probability of an individual being assigned to a specific class as a function of attributes of the respondent and possibly of the alternatives in the choice set. The within class model is then used to compute the class-specific choice probabilities for the different alternatives, conditional on the tastes within that class. The actual choice probability for individual n and alternative i is given by a sum of the class-specific choice probabilities, weighted by the class allocation choice probabilities for that specific individual.

The latent class approach is appealing from the point of view that it allows for differences in sensitivities across population groups, where the group allocation can be related to socio-demographic characteristics. However, in practice, it may not always be possible to explain group allocation with the help of a probabilistic model relating the outcome to observed variables. This situation is similar to the case where taste heterogeneity cannot be explained deterministically, leading to a requirement for using random coefficients models. As such, in this paper, we explore the use of models in which the class allocation probabilities are independent of explanatory variables, and are simply given by constants that are to be estimated during model calibration. As such, the resulting model exploits the class membership concept in the context of random coefficients models, with a limited set of possible values for the coefficients.

Thus far, there have seemingly been only two applications of this approach in the area of transport research, by Gopinath (1995), in the context of mode choice for freight shippers, and by Dong and Koppelman (2003), who made use of discrete mixtures of MNL models in the analysis of mode choice for work trips in New York, referring to the resulting model as the "Mass Point Mixed Logit model". Although the properties of DM models have been discussed by several other authors (e.g. Wedel et al., 1999), the model structure does not seem to have received widespread exposure or application, despite its many appealing characteristics.

Given the above discussion, part of the aim of this paper is to re-explore the potential advantages of DM models, with the hope of encouraging their more widespread use. Additionally, the paper aims to offer a systematic comparison of the performance of discrete and continuous mixture models across a host of situations, making use of simulated data.

The remainder of this paper is organised as follows. The next section sets out the theory behind DM models. Section 3 presents a case study using real data, while Section 4 uses four different simulated datasets in a systematic comparison of discrete and continuous mixture models. Finally, Section 5 presents the conclusions of the paper.
2 Methodology

We begin by introducing some general notation, which is used throughout the remainder of this paper. Specifically, let $x_{in}$ be a vector defining the attributes of alternative $i$ as faced by respondent $n$ (potentially including interactions with socio-demographic variables), and let $\beta$ be a vector defining the tastes of the decision maker, where, in purely deterministic models, $\beta$ is constant across respondents. Let $x_n$ be a vector grouping together the individual vectors $x_{jn}$ across the alternatives contained in the choice set of respondent $n$, and let $\gamma$ represent an additional set of parameters, which can for example contain the structural parameters (and possibly allocation parameters) used to represent inter-alternative correlation in a Generalised Extreme Value (GEV) context. In a very general form, we can then define $P_n(i \mid x_n, C_n, \gamma, \beta)$ to give the choice probability of alternative $i$ for individual $n$, with a choice set $C_n$, conditional on the observed vector $x_n$, and for given values for the vectors of parameters $\beta$ and $\gamma$ (to be estimated). Due to the potential inclusion of socio-demographic attributes in $x_n$, this notation allows for deterministic variations in tastes across respondents.

In a discrete mixture context, the number of possible values for the taste coefficients $\beta$ is finite. Here, we divide the set of parameters $\beta$ into two sets; $\bar\beta$ represents a part of $\beta$ containing deterministic parameters, while $\hat\beta$ is a set of $K$ random parameters that have a discrete distribution. Within this set, the parameter $\hat\beta_k$ has $m_k$ mass points $\hat\beta_k^j$, $j = 1, \dots, m_k$, each of them associated with a probability $\pi_k^j$, where we impose the conditions that¹
$$0 \le \pi_k^j \le 1, \quad k = 1, \dots, K; \; j = 1, \dots, m_k, \qquad (1)$$

and

$$\sum_{j=1}^{m_k} \pi_k^j = 1, \quad k = 1, \dots, K. \qquad (2)$$
For each realisation $\hat\beta_1^{j_1}, \dots, \hat\beta_K^{j_K}$ of $\hat\beta$, the choice probability is given by

$$P_n\left(i \mid x_n, C_n, \gamma, \beta = \langle \bar\beta, \hat\beta_1^{j_1}, \dots, \hat\beta_K^{j_K} \rangle\right), \qquad (3)$$

where the deterministic part $\bar\beta$ stays constant across realisations of the vector $\hat\beta$.

The unconditional (on a specific realisation of $\beta$, not on the distribution of $\hat\beta$) choice probability for alternative $i$ and decision maker $n$ can now be written straightforwardly as a mixture over the discrete distributions of the various elements contained in $\hat\beta$ as:

$$P_n\left(i \mid x_n, C_n, \gamma, \bar\beta, \hat\beta, \pi\right) = \sum_{j_1=1}^{m_1} \cdots \sum_{j_K=1}^{m_K} P_n\left(i \mid x_n, C_n, \gamma, \beta = \langle \bar\beta, \hat\beta_1^{j_1}, \dots, \hat\beta_K^{j_K} \rangle\right) \pi_1^{j_1} \cdot \ldots \cdot \pi_K^{j_K}, \qquad (4)$$

where $\bar\beta$, $\hat\beta$ and $\pi$ (with $\pi = \langle \pi_1^1, \dots, \pi_1^{m_1}, \dots, \pi_K^1, \dots, \pi_K^{m_K} \rangle$) are vectors of parameters to be estimated in a regular maximum likelihood estimation procedure. An obvious advantage of this approach is that, if the model (3) used inside the mixture has a closed form, then so does the DM itself.
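As a hypothetical illustration with invented numbers: for a single random coefficient ($K = 1$) with two mass points, equation (4) collapses to

$$P_n(i) = \pi_1^1\, P_n\left(i \mid \beta = \langle \bar\beta, \hat\beta_1^1 \rangle\right) + \pi_1^2\, P_n\left(i \mid \beta = \langle \bar\beta, \hat\beta_1^2 \rangle\right),$$

so with $\pi_1^1 = 0.3$, $\pi_1^2 = 0.7$ and class-specific MNL probabilities of 0.5 and 0.2, the mixed probability is $0.3 \times 0.5 + 0.7 \times 0.2 = 0.29$.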
In this paper, we mainly focus on the simple case where the underlying choice model is of MNL form; however, the form given in equation (4) is appropriate for any underlying model, where, with an underlying GEV structure, the resulting model obtains a closed form expression, avoiding the need for simulation in estimation and application. In this case, the vector $\gamma$ would contain parameters that determine the nesting structure of the model. The approach can easily be extended to the case of combined discrete and continuous random taste variation, by partitioning $\beta$ into three parts: the above defined parts $\bar\beta$ and $\hat\beta$, and an additional part $\tilde\beta$, whose elements follow continuous distributions². This however leads to a requirement to use simulation, as with all continuous mixture models.

¹These constraints can be avoided by setting $\pi_i = e^{\alpha_i} / \sum_{j=1}^{J} e^{\alpha_j}$, where the $\alpha_j$ with $j = 1, \dots, J$ are estimated without constraints. While avoiding the need for constraints, this formulation becomes highly non-linear and difficult to handle in estimation.
Finally, independently of the additional treatment of random variations in tastes, a treatment of repeated choice observations analogous to the standard continuous mixture treatment, with tastes varying across individuals, but not across observations for the same individual, is made possible by replacing the conditional choice probabilities for individual observations in equation (4) by probabilities for sequences of choices, and by using the resulting DM term inside the log-likelihood function.
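Written out under the paper's notation (our own restatement of the treatment just described, with $i_{nt}$ denoting the alternative chosen by individual $n$ in observation $t$), the probability of a sequence of $T$ choices becomes

$$P_n\left(i_{n1}, \dots, i_{nT}\right) = \sum_{j_1=1}^{m_1} \cdots \sum_{j_K=1}^{m_K} \left[\prod_{t=1}^{T} P_n\left(i_{nt} \mid x_n, C_n, \gamma, \beta = \langle \bar\beta, \hat\beta_1^{j_1}, \dots, \hat\beta_K^{j_K} \rangle\right)\right] \pi_1^{j_1} \cdot \ldots \cdot \pi_K^{j_K},$$

with the taste realisation held fixed across the $T$ observations for the same individual.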
Several issues arise in the estimation of DM models. Firstly, the non-concavity of the log likelihood function does not allow the identification of a global maximum, even for discrete mixtures of MNL. Given the potential presence of a high number of local maxima, performing several estimations from various starting points is advisable. Also, it is good practice to use starting values other than 0 or 1 for the $\pi_k^j$ parameters. Secondly, constrained maximum likelihood must be used to account for constraints (1) and (2). Thirdly, clustering of mass points (for example around the mode of the true distribution) is a frequent phenomenon with DM models, and the use of additional bounds on the mass points can be useful, based on the definition of (potentially mutually exclusive) a priori intervals for the individual mass points. In this context, a heuristic is needed to determine the optimal number of support points in actual applications.

For the purpose of this analysis, the model was coded into BIOGEME (Bierlaire, 2003), where various constraints on the parameters can be imposed to address the issues described above. This also allows modellers to test the validity of specific assumptions, such as a mass at zero for the VTTS, a concept discussed for example by Cirillo and Axhausen (2006).

²This approach can then also be used to include error components for correlation or heteroscedasticity.
3 VTTS case study

In this section, we present the findings of an analysis making use of real world data. We first give a brief description of the data in Section 3.1, before looking at model specification in Section 3.2. The estimation results are presented in Section 3.3.

3.1 Data

The study presented here makes use of Stated Preference (SP) data collected as part of a recent value of time study undertaken in Denmark (Burge and Rohr, 2004). Specifically, we make use of data describing a binary choice process for car travellers, with alternatives described only in terms of travel cost and travel time. Each respondent was presented with 9 choice situations, including one with a dominating alternative.
After eliminating the observations with a dominating alternative, as w