Abstract

Survival traits and selectively genotyped datasets are typically not normally distributed, so the models commonly used to identify QTL may not be statistically appropriate for their analysis. The objective of the present study was to compare models for identifying QTL associated with survival traits, particularly when combined with selective genotyping. Data were simulated to model the survival distribution of a population of chickens challenged with Marek's disease virus. Cox proportional hazards (CPH), linear regression (LR), and Weibull models were compared for their appropriateness for analyzing the data, their power to identify associations of marker alleles with survival, and their estimation of effects, both when all individuals were genotyped (full genotyping) and when selective genotyping was used. Little difference in power was found between the CPH and LR models under low censoring, for both full and selective genotyping. Because the simulated data were not transformed to follow a Weibull distribution, the Weibull model generally had less power than the other two models and overestimated effects. Effect estimates from LR and CPH were unbiased when all individuals were genotyped, but were overestimated when selective genotyping was used. LR is therefore preferred for analyzing survival data when the amount of censoring is low, because of its ease of implementation and interpretation. Including the phenotypes of non-genotyped individuals in a selective genotyping analysis increased power, but inflated the false positive rate of LR; the CPH model is therefore preferred for this scenario, although transforming the data may also make the Weibull model appropriate in this case. The results presented herein are directly applicable to interval mapping analyses.
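The overestimation of effects under selective genotyping can be illustrated with a minimal sketch. This is not the paper's actual simulation: it assumes hypothetical parameter values (2,000 individuals, a biallelic marker with an effect of 0.5 on log survival time, genotyping of the top and bottom 20% of phenotypes) and uses a simple log-normal survival model analyzed by LR on log survival time.

```python
import numpy as np

# Hypothetical simulation: log-normal survival times with a biallelic marker
# effect, analyzed by linear regression (LR) on log survival time, under
# full vs. selective genotyping. All parameter values are illustrative.
rng = np.random.default_rng(1)
n = 2000
beta_true = 0.5                                    # true marker effect on log survival time
genotype = rng.integers(0, 2, size=n)              # marker allele (0 or 1)
log_time = 2.0 + beta_true * genotype + rng.normal(0.0, 0.5, size=n)

def lr_effect(g, y):
    """OLS slope of y on g; for a binary marker this equals the difference in group means."""
    return np.cov(g, y)[0, 1] / np.var(g, ddof=1)

# Full genotyping: every individual contributes genotype and phenotype.
beta_full = lr_effect(genotype, log_time)

# Selective genotyping: genotype only the phenotypic extremes (top/bottom 20%),
# and analyze only those individuals.
lo, hi = np.quantile(log_time, [0.2, 0.8])
selected = (log_time <= lo) | (log_time >= hi)
beta_sel = lr_effect(genotype[selected], log_time[selected])

print(f"full genotyping estimate:      {beta_full:.2f}")  # close to the true 0.5
print(f"selective genotyping estimate: {beta_sel:.2f}")   # inflated upward
```

With full genotyping the LR estimate is approximately unbiased, whereas restricting the analysis to the phenotypic extremes inflates the estimated effect, consistent with the overestimation described above.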