The uncertainty associated with parameter estimates is essential for population model building, evaluation, and simulation. It is summarized by the standard error (SE), whose estimation is sometimes questionable. Here, we evaluate the SEs provided by different nonlinear mixed-effects estimation methods, together with their parameter estimation performance. Methods based on maximum likelihood (FO and FOCE in NONMEM™, nlme in S-PLUS™, and SAEM in MONOLIX) and on Bayesian theory (WinBUGS) were evaluated on datasets simulated from a one-compartment PK model under 9 different designs. Bootstrap techniques were applied to FO, FOCE, and nlme. We compared SE estimates, parameter estimates, convergence, and computation time. Regarding SE estimates, the methods provided concordant results for fixed effects. For random effects, SAEM tended to underestimate the SEs, whereas WinBUGS tended to overestimate them. With sparse data, FO provided biased SE estimates and discordant results between bootstrapped and original datasets. Regarding parameter estimates, FO showed a systematic bias in both fixed and random effects. WinBUGS provided biased estimates, but only with sparse data. SAEM and WinBUGS converged systematically, whereas FOCE failed to converge in half of the cases. Applying the bootstrap with FOCE yielded CPU times too long for routine application, and the bootstrap with nlme resulted in frequent crashes. In conclusion, FO provided biased parameter estimates and biased SE estimates for random effects. Methods such as FOCE provided unbiased results, but convergence was the main issue. The bootstrap did not improve SE estimates for FOCE, except when confidence intervals for the random effects were needed. WinBUGS gave consistent results but required long computation times. SAEM was in between, showing slightly underestimated SEs but unbiased parameter estimates.
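The abstract itself contains no code. As a loose, hedged illustration of the simulation-plus-bootstrap workflow it describes, the Python sketch below simulates data from a one-compartment model with first-order absorption, log-normal between-subject variability, and proportional residual error, then computes nonparametric bootstrap SEs by resampling subjects. All parameter values, sampling times, and the naive-pooled least-squares fit are assumptions chosen for illustration only; the actual study used FO/FOCE, nlme, SAEM, and WinBUGS, none of which is reimplemented here.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(0)

# One-compartment model with first-order absorption (illustrative parameterization).
def conc(t, ka, cl, v, dose=100.0):
    ke = cl / v
    return dose * ka / (v * (ka - ke)) * (np.exp(-ke * t) - np.exp(-ka * t))

# Simulate one dataset: log-normal between-subject variability, proportional residual error.
# All numerical values below are hypothetical.
def simulate(n_subj=24, times=np.array([0.5, 1, 2, 4, 8, 12, 24])):
    data = []
    for _ in range(n_subj):
        ka = 1.0 * np.exp(rng.normal(0, 0.3))
        cl = 4.0 * np.exp(rng.normal(0, 0.3))
        v = 20.0 * np.exp(rng.normal(0, 0.3))
        y = conc(times, ka, cl, v) * (1 + rng.normal(0, 0.1, times.size))
        data.append((times, y))
    return data

# Naive-pooled fit of the fixed effects; a crude stand-in for the
# mixed-effects estimation step (FO, FOCE, SAEM, etc. are not reimplemented).
def fit_pooled(data):
    t = np.concatenate([d[0] for d in data])
    y = np.concatenate([d[1] for d in data])
    theta, _ = curve_fit(conc, t, y, p0=[1.0, 4.0, 20.0], maxfev=10000)
    return theta  # ka, CL, V

# Nonparametric bootstrap over subjects: the SE of each fixed effect is the
# standard deviation of its estimate across resampled datasets.
def bootstrap_se(data, n_boot=200):
    estimates = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(data), len(data))
        estimates.append(fit_pooled([data[i] for i in idx]))
    return np.std(np.asarray(estimates), axis=0, ddof=1)

data = simulate()
print("pooled estimates (ka, CL, V):", fit_pooled(data))
print("bootstrap SEs:", bootstrap_se(data))
```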