I. Introduction

In the last two decades, issues of borrowing and firm value have receded into the background as structured financial instruments and exotic options have come into prominence. During this period, financial engineers have developed models to value these instruments and to describe the paths of financial time series. These models, however, generally rest on the assumptions of conventional financial theory, such as the normal distribution and the random walk, which have been deeply criticized in a number of studies (Ramasamy and Helmi, 2011).

Because the existing conventional theory oversimplifies the complicated structure of financial markets and explains its problems only under ideal and normal conditions, accurate and credible models are needed in order to build an effective theory of finance (Velasquez, 2009). Conventional theory, which rests on Bachelier's (1900) study, reduces tail probabilities to an insignificant level by assuming a normal distribution. This treatment of the tails underlies the models of Markowitz (1952), Sharpe (1964), Fama (1965, 1970) and Black and Scholes (1973), the foundational studies of conventional theory. The scandals and crises of the last 50 years, including Enron (2001), Parmalat (2003), Black Monday (1987), the collapse of Long-Term Capital Management (LTCM) in 1998 and the mortgage crisis of 2008, demonstrate that the random walk assumption of the Efficient Market Hypothesis does not hold: if the distribution of price changes were normal, a five-standard-deviation move would be observed roughly once every 7,000 years, yet in practice such changes occur almost every 4 to 5 years.
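The 7,000-year figure follows directly from the normal tail probability. A minimal sanity check in Python, assuming roughly 250 trading days per year (an assumption of this sketch, not a figure from the text):

```python
import math

# Two-sided tail probability of a five-standard-deviation move
# under the normal distribution: P(|Z| > 5) = erfc(5 / sqrt(2))
p = math.erfc(5 / math.sqrt(2))  # about 5.7e-7

# Assuming roughly 250 trading days per year, the expected waiting
# time for one such move is about 7,000 years
years = 1.0 / (p * 250)
```

With daily returns, one five-sigma day is expected only about once in seven millennia under normality, which is why observing such moves every few years argues against the normal distribution assumption.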
As Mandelbrot and Hudson (2004) put it, a chain is only as strong as its weakest link, and the weakest links in conventional theory are the Efficient Market Hypothesis, the random walk and the normal distribution assumptions.

The first and most important criticism of conventional theory came from Mandelbrot, who pointed out that it ignores the self-similarity and long memory properties of financial time series. Although the long memory issue is associated with Mandelbrot, the first person to address the topic was Hurst (1951); for this reason, Mandelbrot (1972) used the notation H for the long memory parameter in honor of Hurst.1 Hurst showed that there were significant correlations between successive floods. According to his studies, a good flood year tended to be followed by another good flood year, and similarly a bad flood year by another bad one; that is, the sequence of good and bad years is not a random process (Grabbe, 2001). Hurst explained this correlation via a power law. One of his most crucial observations can be stated as follows: natural events generally appear to follow a normal distribution, but only when the order in which the events occur is not taken into account (Mandelbrot and Hudson, 2004). The statistic Hurst introduced measures the persistence between events and is known as the Rescaled Range (R/S): the range of the partial sums of the deviations from the mean of a time series, rescaled by its standard deviation (Chaitip et al., 2011). Inspired by the Bible, Mandelbrot and Wallis (1968) named this persistence the "Joseph Effect" and carried it into financial time series theory as the long memory issue (Grabbe, 2001).2 Mandelbrot (1963, 1983) also contributed to the emergence of self-similarity and the fractal dimension as new topics in finance theory.
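The R/S statistic and the power-law behavior it obeys can be sketched in a few lines. The following is a minimal illustration of the naive estimator, not Hurst's original procedure; the series length and window sizes are arbitrary choices made for this example:

```python
import math
import random

def rescaled_range(x):
    """R/S of a series: the range of the partial sums of the
    deviations from the mean, rescaled by the standard deviation."""
    n = len(x)
    mean = sum(x) / n
    devs = [v - mean for v in x]
    partial, cum = [], 0.0
    for d in devs:                 # partial sums of the deviations
        cum += d
        partial.append(cum)
    r = max(partial) - min(partial)              # range R
    s = math.sqrt(sum(d * d for d in devs) / n)  # standard deviation S
    return r / s

def hurst_exponent(x, window_sizes):
    """Estimate H from the power law E[R/S] ~ c * n^H by fitting a
    least-squares line to log(mean R/S) against log(window size n)."""
    log_n, log_rs = [], []
    for n in window_sizes:
        chunks = [x[i:i + n] for i in range(0, len(x) - n + 1, n)]
        mean_rs = sum(rescaled_range(c) for c in chunks) / len(chunks)
        log_n.append(math.log(n))
        log_rs.append(math.log(mean_rs))
    mn = sum(log_n) / len(log_n)
    mr = sum(log_rs) / len(log_rs)
    num = sum((a - mn) * (b - mr) for a, b in zip(log_n, log_rs))
    den = sum((a - mn) ** 2 for a in log_n)
    return num / den               # slope = estimate of H

# White noise has no memory, so H should land near 0.5 (small-sample
# R/S estimates are biased slightly upward); a persistent "Joseph
# effect" series would give H > 0.5.
random.seed(1)
noise = [random.gauss(0.0, 1.0) for _ in range(1024)]
h = hurst_exponent(noise, [16, 32, 64, 128, 256])
```

An estimate of H above 0.5 indicates persistence (good years following good years), below 0.5 anti-persistence, and 0.5 the memoryless case assumed by the random walk.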
In fact, all of these contributions paved the way for the evolution of conventional finance theory.

According to conventional finance theory, which rests on the random walk assumption of Fama (1965), asset returns are uncorrelated and asset prices therefore evolve according to geometric Brownian motion. Almost all models of conventional finance theory are built on this assumption, and the probability distributions of returns are taken to be normal. …
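The geometric Brownian motion price process assumed by conventional theory follows the exact update S(t+dt) = S(t) exp((mu - sigma^2/2) dt + sigma sqrt(dt) Z) with Z standard normal. A minimal simulation sketch, with purely illustrative parameter values:

```python
import math
import random

def simulate_gbm(s0, mu, sigma, dt, n_steps, rng):
    """One path of geometric Brownian motion using the exact
    log-normal step: each increment multiplies the price by
    exp((mu - 0.5*sigma^2)*dt + sigma*sqrt(dt)*Z)."""
    path = [s0]
    for _ in range(n_steps):
        z = rng.gauss(0.0, 1.0)
        path.append(path[-1] * math.exp((mu - 0.5 * sigma ** 2) * dt
                                        + sigma * math.sqrt(dt) * z))
    return path

# Illustrative parameters: one year of daily steps with 5% drift
# and 20% annual volatility (values chosen for this example only)
path = simulate_gbm(s0=100.0, mu=0.05, sigma=0.2,
                    dt=1.0 / 252, n_steps=252, rng=random.Random(7))
```

Because the Gaussian increments are drawn independently at each step, simulated log returns are uncorrelated by construction; this is exactly the memoryless property that the long memory literature discussed above calls into question.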