The loss ratio, which measures loss payments relative to premium income, is the measure most extensively used in evaluating insurance underwriting results and in insurance decision making. It is especially instrumental in setting insurance rates, in testing the adequacy of rates, and in evaluating insurers' financial strength. The loss ratio is sometimes used to determine the loss reserve (Breslin et al. (1978), p. 14). It is also used in assessing the efficiency of preventive measures taken to reduce loss frequency or severity, and in assessing the underwriting performance of specific insurance lines, especially new insurance programs.

There is no uniform definition of the loss ratio, and many versions are used in practice. Moreover, the common measures are often severely biased. One may well find a company in which one line has a traditional loss ratio of, say, 80% while another has a loss ratio of 120%, although a more careful analysis, using the concepts discussed in this article, may show that the second line is more profitable than the first.

The purpose of this note is to examine the bias that may result from the use of approximated and sometimes theoretically inadequate measures of the loss ratio, and to emphasize the bias that may arise from ignoring the timing of premium and loss payments. The authors suggest the use of a corrected loss ratio: the ratio of the present value of losses to the present value of premiums. This ratio may be especially instrumental in inflationary periods, since it can easily be adjusted to deal with "real" figures. The choice of an appropriate discount factor and the relationship of the proposed measure to current accounting and actuarial techniques are also discussed.
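The mechanics of the corrected loss ratio can be shown with a minimal sketch. The Python fragment below is illustrative only: the cash-flow amounts, payment dates, and the 12% discount rate are hypothetical assumptions chosen to mirror the 80%/120% example above, not figures from the article.

```python
def present_value(cash_flows, rate):
    """Discount a list of (time_in_years, amount) pairs back to time 0."""
    return sum(amount / (1.0 + rate) ** t for t, amount in cash_flows)

def corrected_loss_ratio(premium_flows, loss_flows, rate):
    """Present value of losses divided by present value of premiums."""
    return present_value(loss_flows, rate) / present_value(premium_flows, rate)

# Hypothetical example: each line collects 100 of premium at inception (t = 0).
premiums = [(0.0, 100.0)]

# Line A: traditional loss ratio 80%, losses paid within half a year.
losses_a = [(0.5, 80.0)]

# Line B: traditional loss ratio 120%, losses paid out over a long tail.
losses_b = [(4.0, 60.0), (6.0, 60.0)]

rate = 0.12  # assumed annual discount rate, e.g. in an inflationary period

print(corrected_loss_ratio(premiums, losses_a, rate))  # approx. 0.76
print(corrected_loss_ratio(premiums, losses_b, rate))  # approx. 0.69
```

Under these assumed timings, the line with the 120% traditional loss ratio turns out to have the lower corrected loss ratio (about 0.69 versus 0.76), precisely because its losses are paid years after the premium is collected; this is the kind of reversal the note is concerned with.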