6 results for "Otabek, Sattarov"
Search Results
2. Enhanced Bitcoin Price Direction Forecasting With DQN
- Author
- Azamjon Muminov, Otabek Sattarov, and Daeyoung Na
- Subjects
- Bitcoin, reinforcement learning, deep Q-network, Pearson correlation, reward function, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
In the Bitcoin trading landscape, predicting price movements is paramount. Our study focuses on identifying the key factors influencing these price fluctuations. Utilizing the Pearson correlation method, we extract essential data points from a comprehensive set of 14 data features. We consider historical Bitcoin prices, representing past market behavior; trading volumes, which highlight the level of trading activity; network metrics that provide insights into Bitcoin’s blockchain operations; and social indicators: sentiment analyzed from Twitter and Bitcoin-related search trends tracked on Google and Twitter. These social indicators give us a more nuanced understanding of the digital community’s sentiment and interest levels. With this curated data, we develop a predictive model using a Deep Q-Network (DQN). A defining aspect of our model is its innovative reward function, a multi-faceted design tailored to enhance prediction of Bitcoin price direction. This function blends several critical factors: it rewards prediction accuracy, incorporates confidence scaling, applies an escalating penalty for consecutive incorrect predictions, and includes time-based discounting to prioritize recent market trends. This composite approach ensures that the model is not only precise in its immediate predictions but also adaptable and responsive to the evolving patterns of the cryptocurrency market. Notably, in our tests, the model achieved an impressive F1-score of 95%, offering substantial promise for traders and investors. (A minimal illustrative sketch of this reward design follows this entry.)
- Published
- 2024
- Full Text
- View/download PDF
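The reward design described in the abstract above (an accuracy reward, confidence scaling, an escalating penalty for consecutive misses, and time-based discounting) can be pictured with a minimal sketch. The function name, weights, and exact functional form below are illustrative assumptions, not the paper's formulation.

```python
# Hypothetical sketch of a multi-faceted reward for direction prediction.
# All names, weights, and the exact form are assumptions for illustration.

def direction_reward(correct: bool,
                     confidence: float,        # model confidence in [0, 1]
                     consecutive_errors: int,  # wrong predictions in a row so far
                     steps_ago: int,           # age of the prediction, in steps
                     gamma: float = 0.99,      # time-based discount factor
                     base: float = 1.0,        # base reward for a correct call
                     penalty: float = 0.5) -> float:
    """Scalar reward for a single up/down prediction."""
    if correct:
        # Reward accuracy, scaled by how confident the prediction was.
        r = base * confidence
    else:
        # Penalty grows with each consecutive incorrect prediction.
        r = -penalty * (1 + consecutive_errors)
    # Discount older outcomes so recent market behaviour dominates training.
    return (gamma ** steps_ago) * r

# Example: a confident, correct prediction made three steps ago.
print(direction_reward(correct=True, confidence=0.9, consecutive_errors=0, steps_ago=3))
```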
3. Forecasting Bitcoin Volatility Through on-Chain and Whale-Alert Tweet Analysis Using the Q-Learning Algorithm
- Author
- Muminov Azamjon, Otabek Sattarov, and Jinsoo Cho
- Subjects
- Bitcoin trend prediction, data features, historical price, cryptoquant data, sentiment analysis, Q-learning, Electrical engineering. Electronics. Nuclear engineering, TK1-9971
- Abstract
As the adoption of cryptocurrencies, especially Bitcoin (BTC), continues to rise in today’s digital economy, understanding their unpredictable nature becomes increasingly critical. This research paper addresses that need by investigating the volatile nature of the cryptocurrency market, focusing mainly on Bitcoin trend prediction using on-chain data and whale-alert tweets. Employing a Q-learning algorithm, a type of reinforcement learning, we analyze variables such as transaction volume, network activity, and significant Bitcoin transactions highlighted in whale-alert tweets. Our findings indicate that the algorithm effectively predicts Bitcoin trends when on-chain and Twitter data are integrated. Consequently, this study offers valuable insights that could guide investors toward informed Bitcoin investment decisions, playing a pivotal role in cryptocurrency risk management. (A minimal illustrative Q-learning sketch follows this entry.)
- Published
- 2023
- Full Text
- View/download PDF
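The trend-prediction setup described above can be pictured with a minimal tabular Q-learning sketch. The state encoding (binned on-chain metrics plus a whale-alert tweet flag), the two-action set, and all hyperparameters are assumptions for illustration; the paper's actual design may differ.

```python
import numpy as np

# Minimal tabular Q-learning sketch for up/down trend prediction.
N_STATES = 64              # e.g. binned combinations of volume, activity, and tweet flags (assumed)
ACTIONS = ("up", "down")   # predicted trend direction

Q = np.zeros((N_STATES, len(ACTIONS)))
alpha, gamma, epsilon = 0.1, 0.95, 0.1   # illustrative hyperparameters

def choose_action(state: int) -> int:
    """Epsilon-greedy selection over the two trend predictions."""
    if np.random.rand() < epsilon:
        return np.random.randint(len(ACTIONS))
    return int(np.argmax(Q[state]))

def update(state: int, action: int, reward: float, next_state: int) -> None:
    """Standard Q-learning temporal-difference update of the Q-table."""
    td_target = reward + gamma * np.max(Q[next_state])
    Q[state, action] += alpha * (td_target - Q[state, action])
```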
4. Recommending Cryptocurrency Trading Points with Deep Reinforcement Learning Approach
- Author
- Otabek Sattarov, Azamjon Muminov, Cheol Won Lee, Hyun Kyu Kang, Ryumduck Oh, Junho Ahn, Hyung Jun Oh, and Heung Seok Jeon
- Subjects
- trading, machine learning, deep reinforcement learning, moving average, double cross strategy, day trading, swing trading, position trading, scalping, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
Investors’ net profit can increase rapidly if they correctly decide which of three actions to take: buying, selling, or holding stocks. The right action depends on a large number of stock market measurements, so choosing it requires specific knowledge from investors. Economists have proposed several strategies and indicator factors intended to identify the best trading option in a stock market. However, many investors lost capital when they traded on the basis of these strategies’ recommendations, which suggests the stock market needs further research that can offer investors a stronger guarantee of success. To address this challenge, we applied a machine learning algorithm, deep reinforcement learning (DRL), to the stock market. As a result, we developed an application that observes historical price movements and takes actions on real-time prices. We tested the proposed algorithm on the historical data of three crypto coins: Bitcoin (BTC), Litecoin (LTC), and Ethereum (ETH). The experiment on Bitcoin via the DRL application shows that the investor earned 14.4% net profit within one month. Similarly, tests on Litecoin and Ethereum finished with 74% and 41% profit, respectively. (A minimal illustrative trading-environment sketch follows this entry.)
- Published
- 2020
- Full Text
- View/download PDF
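The buy/sell/hold decision process in the abstract above can be pictured with a minimal trading environment that a DRL agent could interact with. The window size, the all-in/all-out position handling, and the net-worth-change reward are assumptions made for this example, not the paper's exact formulation.

```python
from dataclasses import dataclass
from typing import List

# Illustrative buy/sell/hold environment: the agent observes a recent price
# window and acts at the current price; reward is the change in net worth.
BUY, SELL, HOLD = 0, 1, 2

@dataclass
class TradingEnv:
    prices: List[float]      # historical close prices
    window: int = 10         # length of the observed price window
    cash: float = 1000.0     # starting capital
    coins: float = 0.0       # coins currently held
    t: int = 10              # current time index (>= window)

    def observe(self) -> List[float]:
        """Recent price window the agent conditions its action on."""
        return self.prices[self.t - self.window:self.t]

    def step(self, action: int) -> float:
        """Apply the action at the current price; return the change in net worth."""
        price = self.prices[self.t]
        worth_before = self.cash + self.coins * price
        if action == BUY and self.cash > 0:
            self.coins += self.cash / price     # go all-in at the current price
            self.cash = 0.0
        elif action == SELL and self.coins > 0:
            self.cash += self.coins * price     # liquidate the position
            self.coins = 0.0
        self.t += 1
        worth_after = self.cash + self.coins * self.prices[self.t]
        return worth_after - worth_before
```

Rewarding the per-step change in net worth keeps the learning signal aligned with the net-profit figures the abstract reports, while remaining simple enough for a sketch.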
5. Reducing GPS Error for Smart Collars Based on Animal’s Behavior
- Author
- Azamjon Muminov, Otabek Sattarov, Cheol Won Lee, Hyun Kyu Kang, Myeong-Cheol Ko, Ryumduck Oh, Junho Ahn, Hyung Jun Oh, and Heung Seok Jeon
- Subjects
- GPS error, filter, correction, virtual fence, livestock, smart collar, IoT, machine learning, SVM, Technology, Engineering (General). Civil engineering (General), TA1-2040, Biology (General), QH301-705.5, Physics, QC1-999, Chemistry, QD1-999
- Abstract
Global Positioning Systems (GPS) are successfully used in many fields, such as navigation, meteorology, military tasks, mapping, virtual fencing, and more. Smart collars are currently the most convenient devices for determining animal location in virtual fencing systems; however, these systems still suffer from environmental effects and limited direct-visibility propagation, which can degrade the performance of GPS receivers. This article presents a method for improving animal location accuracy with a virtual-fence smart collar worn around the animal’s neck, based on the maximum probable distance the animal can move from one point to another. The proposed approach first checks the current position of the animal and, after receiving a GPS signal from the satellites, calculates the distance between the two GPS fixes. Second, the method checks the animal’s behavior during the period between the two fixes. Finally, the approach calculates the maximum distance the animal could have moved during that period. If the animal could cover the measured distance in the time between the two signals, the second signal is taken as the correct position; otherwise, the position is set to the farthest point the animal could have reached. Real-time animal behavior is classified using a Support Vector Machine (SVM). The proposed method was verified in seven days of experiments, and the results were sufficiently successful: the locations reconstructed by our approach were very close to the true positions. The mean deviation of the traveled path from the marked line decreased to 16.2, 5, and 0 m for running, walking, and resting, respectively, whereas the unfiltered geolocations from the GPS receiver were significantly farther from the animal’s actual position: 148.8, 182.7, and 136.2 m for running, walking, and resting. (A minimal illustrative sketch of the plausibility check follows this entry.)
- Published
- 2019
- Full Text
- View/download PDF
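The plausibility check in the abstract above can be pictured with a short sketch: compare the distance between two consecutive GPS fixes with the maximum distance the animal could have covered given its SVM-classified behaviour, and clamp implausible fixes. The speed limits and the linear clamping rule are illustrative assumptions, not the paper's values.

```python
import math

# Assumed per-behaviour speed limits (metres per second), for illustration only.
MAX_SPEED_MPS = {"resting": 0.2, "walking": 1.5, "running": 6.0}

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance between two GPS fixes, in metres."""
    R = 6_371_000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

def correct_fix(prev, new, dt_s, behaviour):
    """Accept the new fix if the implied speed is plausible for the behaviour;
    otherwise clamp it to the farthest point the animal could have reached."""
    d = haversine_m(prev[0], prev[1], new[0], new[1])
    d_max = MAX_SPEED_MPS[behaviour] * dt_s
    if d <= d_max or d == 0.0:
        return new
    # Move only the plausible fraction of the way from prev towards new
    # (an adequate approximation over the short distances involved here).
    f = d_max / d
    return (prev[0] + f * (new[0] - prev[0]), prev[1] + f * (new[1] - prev[1]))
```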
6. Recommending Cryptocurrency Trading Points with Deep Reinforcement Learning Approach
- Author
- Heung Seok Jeon, Cheol Won Lee, Azamjon Muminov, Ryumduck Oh, Hyun Kyu Kang, Junho Ahn, Hyung Jun Oh, and Otabek Sattarov
- Subjects
- Net profit, Cryptocurrency, Profit (accounting), engineering and technology, trading, day trading, Technology, moving average, swing trading, position trading, Microeconomics, Chemistry, economics and business, electrical engineering, electronic engineering, information engineering, Reinforcement learning, General Materials Science, Day trading, Instrumentation, QH301-705.5, Swing trading, Fluid Flow and Transfer Processes, finance, deep reinforcement learning, Process Chemistry and Technology, social sciences, General Engineering, networking & telecommunications, QC1-999, Computer Science Applications, double cross strategy, machine learning, Biology (General), QD1-999, TA1-2040, Capital (economics), Stock market, Business, scalping, Engineering (General). Civil engineering (General), Physics
- Abstract
Investors’ net profit can increase rapidly if they correctly decide which of three actions to take: buying, selling, or holding stocks. The right action depends on a large number of stock market measurements, so choosing it requires specific knowledge from investors. Economists have proposed several strategies and indicator factors intended to identify the best trading option in a stock market. However, many investors lost capital when they traded on the basis of these strategies’ recommendations, which suggests the stock market needs further research that can offer investors a stronger guarantee of success. To address this challenge, we applied a machine learning algorithm, deep reinforcement learning (DRL), to the stock market. As a result, we developed an application that observes historical price movements and takes actions on real-time prices. We tested the proposed algorithm on the historical data of three crypto coins: Bitcoin (BTC), Litecoin (LTC), and Ethereum (ETH). The experiment on Bitcoin via the DRL application shows that the investor earned 14.4% net profit within one month. Similarly, tests on Litecoin and Ethereum finished with 74% and 41% profit, respectively.
- Published
- 2020
- Full Text
- View/download PDF