
This is the phrase with which I began every second trading day back in 2021. I remember sitting covered in charts and numbers, proud of my new trading system, and then bam – half of my deposit was gone. Because some smart guy made a statement about crypto and the market went crazy.
Sounds familiar, right? I am sure every algo trader has been through this. It seemed I had calculated and tested everything, and the system worked perfectly on historical data… But then came the real market: “Hello, volatility, long time no see!”
After another such “adventure”, I got angry and decided to get to the bottom of it. Surely it couldn’t be impossible to predict these market tantrums! I think I dug through every existing study on volatility. And you know what is funniest? The solution turned out to lie at the intersection of old methods and new technologies.
In this article, I will share my journey from despair to a working volatility forecasting system. No boring stuff or academic jargon – just real experience and working solutions. I will show you how I combined MetaTrader 5 with Python (spoiler: they did not get along right away), how I made machine learning work for me, and what pitfalls I encountered along the way.
The main insight I gained from this whole story is that you cannot blindly trust either classic indicators or trendy neural networks. I remember spending a week tuning a very complex neural network, only for a simple XGBoost model to show better results. Or how plain Bollinger Bands once saved a deposit where all the smart algorithms had failed.
I also realized that in trading, as in boxing, the main thing is not the force of the blow, but the ability to anticipate it. My system does not make supernatural predictions. It simply helps you be prepared for market surprises and increase your trading strategy’s safety margin in time.
In short, if you are tired of your algorithms tripping over every little bit of volatility, welcome to my world. I will tell you everything as is, with code examples, charts and analysis. Let’s get started.
After months of experimentation and in-depth analysis of market data, a concept was born for a system capable of predicting volatility with practically useful accuracy. The key discovery was that volatility, unlike price, behaves almost like a stationary process: it tends to revert to its mean value and forms stable, recurring patterns. It is this feature that makes forecasting it not only possible, but also practically applicable in real trading.
The system is based on the powerful combination of MetaTrader 5 and Python, where each tool showcases its strengths. MetaTrader 5 acts as a reliable source of market data. It provides us with historical quotes and a real-time data stream with minimal delays. And Python becomes our analytical lab, where a rich set of machine learning libraries (Sklearn, XGBoost, PyTorch) helps extract valuable patterns from this data and confirm hypotheses about the stationarity of volatility.
The system architecture consists of three key levels:
The models are trained on a unique dataset, including quotes on different timeframes — from tick to daily. This allows the system to recognize three key market conditions: low volatility, trending, and explosive. Based on this information, recommendations are formed on optimal entry levels and protective orders. Due to the stationarity of volatility, the system is able not only to identify the current state, but also to predict transitions between these states.
The main feature of the system is its adaptability. It does not just issue fixed recommendations, but adapts them to the current market situation. For each trading situation, the system offers an individual set of parameters based on the forecast of future volatility. This adaptability is particularly effective due to persistent patterns in volatility behavior.
In the following sections, we’ll examine each system component in detail, show the actual code, and share the results of backtesting. You will see how theoretical concepts about volatility stationarity are transformed into a practical tool for market analysis.
Before we dive into system development, let’s take a look at installing all the necessary software. From my own experience, I know that it is the setup of the MetaTrader 5-Python connection that causes many people to stumble, so I will try to tell you not only how to install everything, but also how to avoid the main pitfalls.
Let’s start with Python. We need version 3.8 or higher, which can be downloaded from the official python.org website. During installation, be sure to check “Add Python to PATH”, otherwise you will have to add paths manually later. After installing Python, the first thing we will do is create a virtual environment for the project. This is not a mandatory step, but it is very useful – it will protect us from library version conflicts.
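Creating and activating the environment takes two commands (the folder name `venv` is just a convention):

```shell
python -m venv venv
venv\Scripts\activate
```

On Linux or macOS, the activation line is `source venv/bin/activate` instead.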
Now let’s install the necessary libraries. We will need a few basic tools: numpy and pandas for working with data, scikit-learn and xgboost for machine learning, pytorch for neural networks, and, of course, a library for working with MetaTrader 5. Here is the command to install the entire package:
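Assuming the standard PyPI package names (the official Python bridge to the terminal is published as `MetaTrader5`), the command looks like this:

```shell
pip install numpy pandas scikit-learn xgboost torch MetaTrader5
```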
Let’s take a closer look at installing MetaTrader 5. You need to download it from your broker’s website – this is important because versions may differ. When installing, choose a folder with a simple path, without Cyrillic characters or spaces — this will save you a lot of hassle when setting up communication with Python.
After installing the terminal, don’t forget to enable algorithmic trading (the AlgoTrading button) and allow DLL imports in the terminal settings. It sounds obvious, but I spent a couple of hours debugging until I remembered these options.
Now comes the fun part – checking the connection between Python and MetaTrader 5. I developed a small script to make sure everything works as it should:
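A minimal version of that check might look like this (it requires the terminal to be installed and running; these are the standard calls of the `MetaTrader5` package):

```python
import MetaTrader5 as mt5

# Establish a connection to the running MetaTrader 5 terminal
if not mt5.initialize():
    print("initialize() failed, error code:", mt5.last_error())
    quit()

# Basic sanity checks: package/terminal version and terminal state
print("MetaTrader 5 version:", mt5.version())
print("Terminal info:", mt5.terminal_info())

# Always release the connection when done
mt5.shutdown()
```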
What to look for when problems arise? The most common stumbling block is MetaTrader 5 initialization. If the script cannot connect to the terminal, first check whether MetaTrader 5 itself is running. It seems obvious, but believe me, even experienced developers sometimes forget about it.
If the terminal is running but there is still no connection, check your administrator rights and firewall settings. Windows sometimes likes to play it safe and block the connection.
For development, I recommend using VS Code or PyCharm — both editors are great for Python development. Install the extension for Python and Jupyter – this will greatly simplify debugging and testing code.
The final check is to try getting some historical data:
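For example, requesting the last 100 hourly bars (EURUSD here is just an example symbol; use any instrument your broker provides):

```python
import MetaTrader5 as mt5
import pandas as pd

if not mt5.initialize():
    raise RuntimeError(f"initialize() failed: {mt5.last_error()}")

# 100 most recent H1 bars, starting from the current bar (position 0)
rates = mt5.copy_rates_from_pos("EURUSD", mt5.TIMEFRAME_H1, 0, 100)
mt5.shutdown()

df = pd.DataFrame(rates)
df["time"] = pd.to_datetime(df["time"], unit="s")  # convert Unix timestamps
print(df.head())
```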
If the code runs without errors, your development environment is ready to go! In the next section, we will look at receiving and handling data from MetaTrader 5.
Before we dive into complex calculations, let’s make sure we are receiving data from the trading terminal correctly. I wrote a simple script to help you test MetaTrader 5 and look at the data structure:
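The original script is not reproduced here, but a sketch of it would check three things: the connection, the symbol, and the integrity of the received bars:

```python
import MetaTrader5 as mt5
import pandas as pd

if not mt5.initialize():
    raise RuntimeError(f"initialize() failed: {mt5.last_error()}")

symbol = "EURUSD"  # assumption: substitute your own instrument
info = mt5.symbol_info(symbol)
print(f"Symbol: {info.name}, digits: {info.digits}, spread: {info.spread}")

rates = mt5.copy_rates_from_pos(symbol, mt5.TIMEFRAME_H1, 0, 1000)
mt5.shutdown()

df = pd.DataFrame(rates)
df["time"] = pd.to_datetime(df["time"], unit="s")

# Quality checks: bar count, date range, gaps
print(f"Bars received: {len(df)}")
print(f"Range: {df['time'].min()} .. {df['time'].max()}")
print(f"Missing values: {df.isna().sum().sum()}")
print(df[["time", "open", "high", "low", "close", "tick_volume"]].head())
```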
This code will display all the information we need to check the validity of the connection and the quality of the received data. Once you launch it, you will see:
After the launch, we immediately see that everything works as it should. If a problem occurs somewhere, the script will show at what stage exactly something went wrong.
In the following sections, we will use this data to calculate volatility, but first it is important to ensure that the underlying data retrieval works correctly.
When I first started working with volatility forecasting, I thought the main thing was a cool machine learning model. Practice quickly showed that the quality of data preparation is what really matters. Let me show you how I prepare data for our forecasting system.
Here is the full preprocessing code I am using:
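The first stage of the pipeline is loading the raw bars (again, EURUSD is just an example symbol); the feature engineering is described next:

```python
import MetaTrader5 as mt5
import pandas as pd

if not mt5.initialize():
    raise RuntimeError(f"initialize() failed: {mt5.last_error()}")

# 10,000 hourly bars: enough history for training without going stale
rates = mt5.copy_rates_from_pos("EURUSD", mt5.TIMEFRAME_H1, 0, 10_000)
mt5.shutdown()

data = pd.DataFrame(rates)
data["time"] = pd.to_datetime(data["time"], unit="s")
```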
First, we download the last 10,000 H1 bars from MetaTrader 5. Why exactly that many? Through trial and error, I found this to be the optimal amount: enough data for training, but not so much history that the market regime has changed significantly.
Now the most interesting part begins. The VolatilityProcessor class does all the dirty work of preparing the data. Here is what is going on under the hood:
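The article does not list the exact 15 features, so the class below is an illustrative sketch under assumptions: rolling realized volatility over several windows, Parkinson-style high-low ranges, volatility-of-volatility, and simple return and volume features, plus the future-volatility target over 12 bars. All names and window sizes are hypothetical.

```python
import numpy as np
import pandas as pd

class VolatilityProcessor:
    """Illustrative feature engineering for volatility forecasting.

    Feature names and windows are assumptions; the article only states
    that roughly 15 features are built from OHLC + volume data.
    """

    def __init__(self, horizon: int = 12):
        self.horizon = horizon  # forecast horizon in bars (12 x H1 = half a day)

    def make_features(self, df: pd.DataFrame) -> pd.DataFrame:
        out = pd.DataFrame(index=df.index)
        ret = np.log(df["close"]).diff()

        # Realized volatility over several windows
        for w in (6, 12, 24, 48):
            out[f"rv_{w}"] = ret.rolling(w).std()

        # High-low range measures (Parkinson-style)
        hl = np.log(df["high"] / df["low"])
        for w in (12, 24):
            out[f"hl_{w}"] = hl.rolling(w).mean()

        # Volatility-of-volatility and mean-reversion distance
        out["vov_24"] = out["rv_12"].rolling(24).std()
        out["rv_ratio"] = out["rv_12"] / out["rv_48"]

        # Simple volume and return features
        out["vol_change"] = df["tick_volume"].pct_change()
        out["ret_1"] = ret
        out["ret_abs_12"] = ret.abs().rolling(12).mean()
        return out

    def make_target(self, df: pd.DataFrame) -> pd.Series:
        # Future realized volatility over the next `horizon` bars
        ret = np.log(df["close"]).diff()
        return ret.rolling(self.horizon).std().shift(-self.horizon)
```

Usage is simply `X = VolatilityProcessor().make_features(data)` on the DataFrame loaded from the terminal, followed by dropping the NaN rows produced by the rolling windows.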
As a result, we get 15 features – the optimal number for our task. I tried adding more (all sorts of exotic indicators), but it only made the results worse.
The target variable is the future volatility over the next 12 periods. Why 12? On hourly data, this gives us a forecast for the next half day — enough to make trading decisions, but not so much that the forecast becomes meaningless.
In the next section, we will build a machine learning model that will work with this prepared data. But remember, no matter how cool the model we use, it will not save poorly prepared data.
So, we have reached the most interesting part – creating a forecasting model. Initially, I took the obvious route – regression to predict the exact value of future volatility. The logic was simple: we get a specific number, multiply it by some ratio, and there you have your stop loss level.
I started with the simplest code – a basic XGBRegressor with minimal settings. The parameters are few: one hundred trees, learning rate 0.1 and depth 5. It was naive to think that this would be sufficient, but who has not made such mistakes at the beginning of their journey?
The results were unimpressive, to put it mildly. The R-squared hovered around 0.05-0.06, meaning the model explained only 5-6% of the variation in the data. The standard deviation of the predictions turned out to be almost three times smaller than that of the actual values. The Mean Absolute Error looked decent, but it was a trap.
Why a trap? Because the model had simply learned to predict values close to the average. During quiet periods everything looked great, but as soon as real action started, the model cheerfully missed it.
Weeks were spent trying to improve the regression model. I tried different neural network architectures, added more and more new features, experimented with different loss functions, and tweaked hyperparameters until I was completely exhausted.
Everything turned out to be useless. Sometimes, I managed to raise the R-square to 0.15-0.20, but at what cost? The model became unstable, overfitted, and, most importantly, still missed the most important moments of high volatility.
And then it dawned on me: why do we need an exact volatility value at all? A trader does not care whether the volatility is 0.00234 or 0.00256. What is important is whether it will be significantly higher than usual.
This is how the idea was born to reformulate the problem as a classification. Instead of predicting specific values, we began to define two states: normal/low volatility (label 0) and high volatility above the 75th percentile (label 1).
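Labeling reduces to thresholding the future-volatility series at its 75th percentile:

```python
import numpy as np
import pandas as pd

def label_volatility(future_vol: pd.Series, pct: float = 75.0) -> pd.Series:
    """Label 1 when future volatility exceeds its pct-th percentile, else 0."""
    threshold = np.nanpercentile(future_vol, pct)
    return (future_vol > threshold).astype(int)

vol = pd.Series([0.8, 1.0, 1.1, 0.9, 2.5, 0.7, 3.0, 1.0])
print(label_volatility(vol).tolist())  # [0, 0, 0, 0, 1, 0, 1, 0]
```

One practical caveat: in a live system the threshold should be computed on the training window only, otherwise the label definition leaks future information into the backtest.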
First, we got clearer signals. Instead of vague numeric predictions, there was now a definite answer: expect a surge or not. This approach turned out to be much easier to interpret and integrate into a trading system.
Secondly, the model became better at handling extreme values. In regression, the outliers were “smeared out” toward the mean; in classification, they formed a clear pattern within the high-volatility class.
Thirdly, practical applicability has increased. A trader needs a clear signal to act. It turned out to be much easier to adjust the levels of protective orders for two states than to try to scale them to a continuum of values.
After the transition to classification, the results improved dramatically. Precision reached approximately 70%, which meant that out of every 10 high-volatility signals, 7 actually materialized. Recall of around 65% meant we were catching about two-thirds of all dangerous moments. Most importantly, the model became truly applicable in trading.
Now that the basic structure of the model has been defined, let’s talk in the next section about how to integrate it into a real trading system and what specific trading decisions can be made based on its signals. I am sure this will be the most interesting part of our journey into the world of volatility forecasting.
How do you like this approach? It would be interesting to know if you have used something similar in your practice. And if so, what other volatility metrics do you find useful?
The indicator I have developed is a comprehensive tool for predicting volatility spikes in the Forex market. Unlike conventional volatility indicators, which simply show the current state, our indicator predicts the likelihood of strong movements over the next 12 hours.
The main window of the indicator is divided into three main parts. The top section displays a Japanese candlestick chart of the last 100 bars for a clear representation of the current price dynamics. Green and red candles traditionally show rising and falling market movements.
The central part contains the main element of the indicator – a semicircular probability scale. It shows the current probability of a volatility spike as a percentage, from 0 to 100. The indicator arrow is colored differently depending on the risk level: green for a low probability of up to 50%, orange for a medium probability of 50% to 70%, and red for a high probability of over 70%.
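The zone logic reduces to a tiny function, with thresholds taken from the description above:

```python
def risk_zone(probability: float) -> str:
    """Map spike probability (0-100%) to the indicator's color zone."""
    if probability < 50:
        return "green"   # low risk: standard settings
    if probability <= 70:
        return "orange"  # medium risk: widen protective orders
    return "red"         # high risk: review risk management

print(risk_zone(35), risk_zone(60), risk_zone(85))  # green orange red
```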
The forecast is based on an analysis of the current market state and historical volatility patterns. The model takes into account data from the last 20 bars to build a forecast and predicts the likelihood of increased volatility over the next 12 hours. The greatest forecast accuracy is achieved in the first 4-6 hours after the signal.
At low probability (green zone), the market is likely to continue its calm movement. This is a good time to work with the standard settings of the trading system. During such periods, regular stop loss levels can be used.
When the indicator shows an average probability and the arrow turns orange, you should exercise extra caution. At such times, it is recommended to increase protective orders by about a quarter of the standard size.
When the probability of a spike is high and the arrow turns red, risk management should be seriously reviewed. During such periods, it is recommended to increase stop losses by at least one and a half times and, perhaps, refrain from opening new positions until the situation stabilizes.
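These rules of thumb are easy to encode; the multipliers follow the text (+25% in the orange zone, at least 1.5x in the red zone), while the function name is my own:

```python
def adjusted_stop(base_stop_points: float, zone: str) -> float:
    """Scale the stop-loss distance according to the indicator's risk zone."""
    multipliers = {"green": 1.0, "orange": 1.25, "red": 1.5}
    return base_stop_points * multipliers[zone]

print(adjusted_stop(40, "orange"))  # → 50.0
```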
The control elements are located on the bottom panel of the indicator. Here you can select a trading instrument, time interval, and set the threshold for receiving notifications. By default, the threshold is set at 70% – this is the optimal value that provides a balance between the number of signals and their reliability.
The indicator’s forecast accuracy reaches 70% for high volatility signals. This means that out of ten warnings about a possible surge, seven actually come true. At the same time, the indicator captures approximately two-thirds of all significant market movements.
It is important to understand that the indicator does not predict the direction of price movement, but only the likelihood of increased volatility. Its main purpose is to warn the trader about a possible strong market movement so that they can adjust their trading strategy and protective order levels in advance.
In future versions of the indicator, it is planned to add the ability to automatically adjust stop losses based on predicted volatility. This will further automate the risk management process and make trading more secure.
The indicator perfectly complements existing trading systems, acting as an additional filter for risk management. Its main advantage is the ability to warn of potential strong movements in advance, giving the trader time to prepare and adjust their strategy.
Forecasting volatility remains one of the key tasks in modern trading. The path described in this article, from a simple regression model to a high-volatility classifier, demonstrates that a simple solution is sometimes more effective than a complex one. The developed system achieves forecast precision of approximately 70% and captures roughly two-thirds of all significant market movements.
The main conclusion is that for practical application, what is more important is not the exact value of future volatility, but a timely warning about its potential surge. The created indicator successfully solves this problem, allowing traders to adjust their trading strategies and protective order levels in advance. The combination of classical analysis methods with modern machine learning technologies opens up new possibilities for market risk management.
The key to success turned out to be not the complexity of the algorithms, but the correct formulation of the problem and high-quality data preparation. This approach can be adapted to different trading instruments and timeframes, making trading safer and more predictable.

