Correlation Heatmap Matrix [TradingFinder] 20 Assets Variable🔵 Introduction
Correlation is one of the most important statistical and analytical metrics in financial markets, data mining, and data science. It measures the strength and direction of the relationship between two variables.
The correlation coefficient always ranges between -1 and +1: a perfect positive correlation (+1) means that two assets or currency pairs move together in the same direction and at a constant ratio, a correlation of zero (0) indicates no clear linear relationship, and a perfect negative correlation (-1) means they move in exactly opposite directions.
While the Pearson Correlation Coefficient is the most common method for calculation, other statistical methods like Spearman and Kendall are also used depending on the context.
In financial market analysis, correlation is a key tool for Forex, the Stock Market, and the Cryptocurrency Market because it allows traders to assess the price relationship between currency pairs, stocks, or coins. For example, in Forex, EUR/USD and GBP/USD often have a high positive correlation; in stocks, companies from the same sector such as Apple and Microsoft tend to move similarly; and in crypto, most altcoins show a strong positive correlation with Bitcoin.
Using a Correlation Heatmap in these markets visually displays the strength and direction of these relationships, helping traders make more accurate decisions for risk management and strategy optimization.
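For readers who want to reproduce the underlying calculation, here is a minimal Pine Script sketch of a pairwise Pearson correlation (an illustration only, not the indicator's source code; the two symbols and the 50-bar period are arbitrary placeholders):
//@version=6
indicator("Pairwise correlation sketch")
len = input.int(50, "Correlation Period")
a = request.security("FX:EURUSD", timeframe.period, close)
b = request.security("FX:GBPUSD", timeframe.period, close)
corr = ta.correlation(a, b, len)   // Pearson correlation of the two close series over the last `len` bars
plot(corr, "EURUSD vs GBPUSD correlation")
hline(0)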
🟣 Correlation in Financial Markets
In finance, correlation refers to measuring how closely two assets move together over time. These assets can be stocks, currency pairs, commodities, indices, or cryptocurrencies. The main goal of correlation analysis in trading is to understand these movement patterns and use them for risk management, trend forecasting, and developing trading strategies.
🟣 Correlation Heatmap
A correlation heatmap is a visual tool that presents the correlation between multiple assets in a color-coded table. Each cell shows the correlation coefficient between two assets, with colors indicating its strength and direction. Warm colors (such as red or orange) represent strong negative correlation, cool colors (such as blue or cyan) represent strong positive correlation, and mid-range tones (such as yellow or green) indicate correlations that are close to neutral.
🟣 Practical Applications in Markets
Forex : Identify currency pairs that move together or in opposite directions, avoid overexposure to similar trades, and spot unusual divergences.
Crypto : Examine the dependency of altcoins on Bitcoin and find independent movers for portfolio diversification.
Stocks : Detect relationships between stocks in the same industry or find outliers that move differently from their sector.
🟣 Key Uses of Correlation in Trading
Risk management and diversification: Select assets with low or negative correlation to reduce portfolio volatility.
Avoiding overexposure: Prevent opening multiple positions on highly correlated assets.
Pairs trading: Exploit temporary deviations between historically correlated assets for arbitrage opportunities.
Intermarket analysis: Study the relationships between different markets like stocks, currencies, commodities, and bonds.
Divergence detection: Spot when two typically correlated assets move apart as a possible trend change signal.
Market forecasting: Use correlated asset movements to anticipate others’ behavior.
Event reaction analysis: Evaluate how groups of assets respond to economic or political events.
❗ Important Note
It’s important to note that correlation does not imply causation — it only reflects co-movement between assets. Correlation is also dynamic and can change over time, which is why analyzing it across multiple timeframes provides a more accurate picture. Combining correlation heatmaps with other analytical tools can significantly improve the precision of trading decisions.
🔵 How to Use
The Correlation Heatmap Matrix indicator is designed to analyze and manage the relationships between multiple assets at once. After adding the tool to your chart, start by selecting the assets you want to compare (up to 20).
Then, choose the Correlation Period that fits your trading strategy. Shorter periods (e.g., 20 bars) are more sensitive to recent price movements, making them suitable for short-term trading, while longer periods (e.g., 100 or 200 bars) provide a broader view of correlation trends over time.
The indicator outputs a color-coded matrix where each cell represents the correlation between two assets. Warm colors like red and orange signal strong negative correlation, while cool colors like blue and cyan indicate strong positive correlation. Mid-range tones such as yellow or green suggest correlations that are close to neutral. This visual representation makes it easy to spot market patterns at a glance.
One of the most valuable uses of this tool is in portfolio risk management. Portfolios with highly correlated assets are more vulnerable to market swings. By using the heatmap, traders can find assets with low or negative correlation to reduce overall risk.
Another key benefit is preventing overexposure. For example, if EUR/USD and GBP/USD have a high positive correlation, opening trades on both is almost like doubling the position size on one asset, increasing risk unnecessarily. The heatmap makes such relationships clear, helping you avoid them.
The indicator is also useful for pairs trading, where a trader identifies assets that are usually correlated but have temporarily diverged — a potential arbitrage or mean-reversion opportunity.
Additionally, the tool supports intermarket analysis, allowing traders to see how movements in one market (e.g., crude oil) may impact others (e.g., the Canadian dollar). Divergence detection is another advantage: if two typically aligned assets suddenly move in opposite directions, it could signal a major trend shift or a news-driven move.
Overall, the Correlation Heatmap Matrix is not just an analytical indicator but also a fast, visual alert system for monitoring multiple markets at once. This is particularly valuable for traders in fast-moving environments like Forex and crypto.
🔵 Settings
🟣 Logic
Correlation Period : Number of bars used to calculate correlation between assets.
🟣 Display
Table on Chart : Enable/disable displaying the heatmap directly on the chart.
Table Size : Choose the table size (from very small to very large).
Table Position : Set the table location on the chart (top, middle, or bottom in various alignments).
🟣 Symbol Custom
Select Market : Choose the market type (Forex, Stocks, Crypto, or Custom).
Symbol 1 to Symbol 20: In custom mode, you can define up to 20 assets for correlation calculation.
🔵 Conclusion
The Correlation Heatmap Matrix is a powerful tool for analyzing correlations across multiple assets in Forex, crypto, and stock markets. By displaying a color-coded table, it visually conveys both the strength and direction of correlations — warm colors for strong negative correlation, cool colors for strong positive correlation, and mid-range tones such as yellow or green for near-zero or neutral correlation.
This helps traders select assets with low or negative correlation for diversification, avoid overexposure to similar trades, identify arbitrage and pairs trading opportunities, and detect unusual divergences between typically aligned assets. With support for custom mode and up to 20 symbols, it offers high flexibility for different trading strategies, making it a valuable complement to technical analysis and risk management.
Seasonality Monte Carlo Forecaster [BackQuant]Seasonality Monte Carlo Forecaster
Plain-English overview
This tool projects a cone of plausible future prices by combining two ideas that traders already use intuitively: seasonality and uncertainty. It watches how your market typically behaves around this calendar date, turns that seasonal tendency into a small daily “drift,” then runs many randomized price paths forward to estimate where price could land tomorrow, next week, or a month from now. The result is a probability cone with a clear expected path, plus optional overlays that show how past years tended to move from this point on the calendar. It is a planning tool, not a crystal ball: the goal is to quantify ranges and odds so you can size, place stops, set targets, and time entries with more realism.
What Monte Carlo is and why quants rely on it
• Definition . Monte Carlo simulation is a way to answer “what might happen next?” when there is randomness in the system. Instead of producing a single forecast, it generates thousands of alternate futures by repeatedly sampling random shocks and adding them to a model of how prices evolve.
• Why it is used . Markets are noisy. A single point forecast hides risk. Monte Carlo gives a distribution of outcomes so you can reason in probabilities: the median path, the 68% band, the 95% band, tail risks, and the chance of hitting a specific level within a horizon.
• Core strengths in quant finance .
– Path-dependent questions : “What is the probability we touch a stop before a target?” “What is the expected drawdown on the way to my objective?”
– Pricing and risk : Useful for path-dependent options, Value-at-Risk (VaR), expected shortfall (CVaR), stress paths, and scenario analysis when closed-form formulas are unrealistic.
– Planning under uncertainty : Portfolio construction and rebalancing rules can be tested against a cloud of plausible futures rather than a single guess.
• Why it fits trading workflows . It turns gut feel like “seasonality is supportive here” into quantitative ranges: “median path suggests +X% with a 68% band of ±Y%; stop at Z has only ~16% odds of being tagged in N days.”
How this indicator builds its probability cone
1) Seasonal pattern discovery
The script builds two day-of-year maps as new data arrives:
• A return map where each calendar day stores an exponentially smoothed average of that day’s log return (yesterday→today). The smoothing (90% old, 10% new) behaves like an EWMA, letting older seasons matter while adapting to new information.
• A volatility map that tracks the typical absolute return for the same calendar day.
It calculates the day-of-year carefully (with leap-year adjustment) and indexes into a 365-slot seasonal array so “March 18” is compared with past March 18ths. This becomes the seasonal bias that gently nudges simulations up or down on each forecast day.
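A minimal Pine Script sketch of maintaining such a day-of-year map is shown below. The 90/10 smoothing follows the description above, but the day-of-year computation, the leap-day handling, and the assumption of a daily chart are simplified placeholders rather than the indicator's actual implementation:
//@version=6
indicator("Seasonal return map sketch")
var float[] seasonalRet = array.new_float(365, 0.0)   // EWMA of each calendar day's log return
var float[] seasonalVol = array.new_float(365, 0.0)   // EWMA of each calendar day's absolute log return
logRet = math.log(close / close[1])                   // yesterday -> today log return
doy = math.floor((time - timestamp(year, 1, 1, 0, 0)) / 86400000.0)   // crude 0-based day-of-year
idx = math.min(doy, 364)
if bar_index > 0
    array.set(seasonalRet, idx, 0.9 * array.get(seasonalRet, idx) + 0.1 * logRet)            // 90% old, 10% new
    array.set(seasonalVol, idx, 0.9 * array.get(seasonalVol, idx) + 0.1 * math.abs(logRet))
plot(array.get(seasonalRet, idx), "Seasonal drift for today's calendar day")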
2) Choice of randomness engine
You can pick how the future shocks are generated:
• Daily mode uses a Gaussian draw with the seasonal bias as the mean and a volatility that comes from realized returns, scaled down to avoid over-fitting. It relies on the Box–Muller transform internally to turn two uniform random numbers into one normal shock.
• Weekly mode uses bootstrap sampling from the seasonal return history (resampling actual historical daily drifts and then blending in a fraction of the seasonal bias). Bootstrapping is robust when the empirical distribution has asymmetry or fatter tails than a normal distribution.
Both modes seed their random draws deterministically per path and day, which makes plots reproducible bar-to-bar and avoids flickering bands.
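As an illustration of the Daily-mode draw, here is a minimal sketch of a seeded Box-Muller normal shock in Pine Script; the function name, the seeding formula, and the example mean and volatility values are assumptions for demonstration, not the script's internal code:
//@version=6
indicator("Box-Muller shock sketch")
gaussianShock(float mean, float stdev, int pathId, int dayId) =>
    seed = pathId * 1000 + dayId                          // deterministic seed per path and forecast day
    u1 = math.max(math.random(0.0, 1.0, seed), 1.0e-10)   // uniform draw, floored to avoid log(0)
    u2 = math.random(0.0, 1.0, seed + 1)
    z = math.sqrt(-2.0 * math.log(u1)) * math.cos(2.0 * math.pi * u2)   // standard normal via Box-Muller
    mean + stdev * z
plot(gaussianShock(0.0005, 0.01, 1, bar_index), "Example daily shock")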
3) Volatility scaling to current conditions
Markets do not always live in average volatility. The engine computes a simple volatility factor from ATR(20)/price and scales the simulated shocks up or down within sensible bounds (clamped between 0.5× and 2.0×). When the current regime is quiet, the cone narrows; when ranges expand, the cone widens. This prevents the classic mistake of projecting calm markets into a storm or vice versa.
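A rough sketch of this kind of regime scaling is shown below. The description above only specifies ATR(20)/price and the 0.5x-2.0x clamp, so the 252-bar baseline used here to normalise the factor is an assumption for illustration:
//@version=6
indicator("Volatility scaling sketch")
currentVol  = ta.atr(20) / close                       // today's relative range
baselineVol = ta.sma(ta.atr(20) / close, 252)          // assumed long-run baseline for normalisation
volFactor   = math.min(math.max(currentVol / baselineVol, 0.5), 2.0)   // clamp between 0.5x and 2.0x
plot(volFactor, "Shock multiplier (cone width scaling)")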
4) Many futures, summarized by percentiles
The model generates a matrix of price paths (capped at 100 runs for performance inside TradingView), each path stepping forward for your selected horizon. For each forecast day it sorts the simulated prices and pulls key percentiles:
• 5th and 95th → approximate 95% band (outer cone).
• 16th and 84th → approximate 68% band (inner cone).
• 50th → the median or “expected path.”
These are drawn as polylines so you can immediately see central tendency and dispersion.
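The percentile step can be reproduced with Pine's built-in array helpers, as in the sketch below; random placeholder prices stand in for the real simulated paths of one forecast day:
//@version=6
indicator("Percentile bands sketch")
simPrices = array.new_float()                          // simulated prices for one forecast day
for i = 0 to 99
    shock = 0.02 * (math.random(0.0, 1.0, i + bar_index) - 0.5)   // placeholder instead of a real Monte Carlo shock
    array.push(simPrices, close * math.exp(shock))
p05 = array.percentile_linear_interpolation(simPrices, 5)    // outer cone, lower
p16 = array.percentile_linear_interpolation(simPrices, 16)   // inner cone, lower
p50 = array.percentile_linear_interpolation(simPrices, 50)   // expected (median) path
p84 = array.percentile_linear_interpolation(simPrices, 84)   // inner cone, upper
p95 = array.percentile_linear_interpolation(simPrices, 95)   // outer cone, upper
plot(p50, "Median")
plot(p16, "16th percentile")
plot(p84, "84th percentile")
plot(p05, "5th percentile")
plot(p95, "95th percentile")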
5) A historical overlay (optional)
Turn on the overlay to sketch a dotted path of what a purely seasonal projection would look like for the next ~30 days using only the return map, no randomness. This is not a forecast; it is a visual reminder of the seasonal drift you are biasing toward.
Inputs you control and how to think about them
Monte Carlo Simulation
• Price Series for Calculation . The source series, typically close.
• Enable Probability Forecasts . Master switch for simulation and drawing.
• Simulation Iterations . Requested number of paths to run. Internally capped at 100 to protect performance, which is generally enough to estimate the percentiles for a trading chart. If you need ultra-smooth bands, shorten the horizon.
• Forecast Days Ahead . The length of the cone. Longer horizons dilute seasonal signal and widen uncertainty.
• Probability Bands . Draw all bands, just 95%, just 68%, or a custom level (display logic remains 68/95 internally; the custom number is for labeling and color choice).
• Pattern Resolution . Daily leans on day-of-year effects like “turn-of-month” or holiday patterns. Weekly biases toward day-of-week tendencies and bootstraps from history.
• Volatility Scaling . On by default so the cone respects today’s range context.
Plotting & UI
• Probability Cone . Plots the outer and inner percentile envelopes.
• Expected Path . Plots the median line through the cone.
• Historical Overlay . Dotted seasonal-only projection for context.
• Band Transparency/Colors . Customize primary (outer) and secondary (inner) band colors and the mean path color. Use higher transparency for cleaner charts.
What appears on your chart
• A cone starting at the most recent bar, fanning outward. The outer lines are the ~95% band; the inner lines are the ~68% band.
• A median path (default blue) running through the center of the cone.
• An info panel on the final historical bar that summarizes simulation count, forecast days, number of seasonal patterns learned, the current day-of-year, expected percentage return to the median, and the approximate 95% half-range in percent.
• Optional historical seasonal path drawn as dotted segments for the next 30 bars.
How to use it in trading
1) Position sizing and stop logic
The cone translates “volatility plus seasonality” into distances.
• Put stops outside the inner band if you want only ~16% odds of a stop-out due to noise before your thesis can play.
• Size positions so that a test of the inner band is survivable and a test of the outer band is rare but acceptable.
• If your target sits inside the 68% band at your horizon, the payoff is likely modest; outside the 68% but inside the 95% can justify “one-good-push” trades; beyond the 95% band is a low-probability flyer—consider scaling plans or optionality.
2) Entry timing with seasonal bias
When the median path slopes up from this calendar date and the cone is relatively narrow, a pullback toward the lower inner band can be a high-quality entry with a tight invalidation. If the median slopes down, fade rallies toward the upper band or step aside if it clashes with your system.
3) Target selection
Project your time horizon to N bars ahead, then pick targets around the median or the opposite inner band depending on your style. You can also anchor dynamic take-profits to the moving median as new bars arrive.
4) Scenario planning & “what-ifs”
Before events, glance at the cone: if the 95% band already spans a huge range, trade smaller, expect whips, and avoid placing stops at obvious band edges. If the cone is unusually tight, consider breakout tactics and be ready to add if volatility expands beyond the inner band with follow-through.
5) Options and vol tactics
• When the cone is tight : Prefer long gamma structures (debit spreads) only if you expect a regime shift; otherwise premium selling may dominate.
• When the cone is wide : Debit structures benefit from range; credit spreads need wider wings or smaller size. Align with your separate IV metrics.
Reading the probability cone like a pro
• Cone slope = seasonal drift. Upward slope means the calendar has historically favored positive drift from this date, downward slope the opposite.
• Cone width = regime volatility. A widening fan tells you that uncertainty grows fast; a narrow cone says the market typically stays contained.
• Mean vs. price gap . If spot trades well above the median path and the upper band, mean-reversion risk is high. If spot presses the lower inner band in an up-sloping cone, you are in the “buy fear” zone.
• Touches and pierces . Touching the inner band is common noise; piercing it with momentum signals potential regime change; the outer band should be rare and often brings snap-backs unless there is a structural catalyst.
Methodological notes (what the code actually does)
• Log returns are used for additivity and better statistical behavior: sim_ret is applied via exp(sim_ret) to evolve price.
• Seasonal arrays are updated online with EWMA (90/10) so the model keeps learning as each bar arrives.
• Leap years are handled; indexing still normalizes into a 365-slot map so the seasonal pattern remains stable.
• Gaussian engine (Daily mode) centers shocks on the seasonal bias with a conservative standard deviation.
• Bootstrap engine (Weekly mode) resamples from observed seasonal returns and adds a fraction of the bias, which captures skew and fat tails better.
• Volatility adjustment multiplies each daily shock by a factor derived from ATR(20)/price, clamped between 0.5 and 2.0 to avoid extreme cones.
• Performance guardrails : simulations are capped at 100 paths; the probability cone uses polylines (no heavy fills) and only draws on the last confirmed bar to keep charts responsive.
• Prerequisite data : at least ~30 seasonal entries are required before the model will draw a cone; otherwise it waits for more history.
Strengths and limitations
• Strengths :
– Probabilistic thinking replaces single-point guessing.
– Seasonality adds a small but meaningful directional bias that many markets exhibit.
– Volatility scaling adapts to the current regime so the cone stays realistic.
• Limitations :
– Seasonality can break around structural changes, policy shifts, or one-off events.
– The number of paths is performance-limited; percentile estimates are good for trading, not for academic precision.
– The model assumes tomorrow’s randomness resembles recent randomness; if regime shifts violently, the cone will lag until the EWMA adapts.
– Holidays and missing sessions can thin the seasonal sample for some assets; be cautious with very short histories.
Tuning guide
• Horizon : 10–20 bars for tactical trades; 30+ for swing planning when you care more about broad ranges than precise targets.
• Iterations : The default 100 is enough for stable 5/16/50/84/95 percentiles. If you crave smoother lines, shorten the horizon or run on higher timeframes.
• Daily vs. Weekly : Daily for equities and crypto where month-end and turn-of-month effects matter; Weekly for futures and FX where day-of-week behavior is strong.
• Volatility scaling : Keep it on. Turn off only when you intentionally want a “pure seasonality” cone unaffected by current turbulence.
Workflow examples
• Swing continuation : Cone slopes up, price pulls into the lower inner band, your system fires. Enter near the band, stop just outside the outer line for the next 3–5 bars, target near the median or the opposite inner band.
• Fade extremes : Cone is flat or down, price gaps to the upper outer band on news, then stalls. Favor mean-reversion toward the median, size small if volatility scaling is elevated.
• Event play : Before CPI or earnings on a proxy index, check cone width. If the inner band is already wide, cut size or prefer options structures that benefit from range.
Good habits
• Pair the cone with your entry engine (breakout, pullback, order flow). Let Monte Carlo do range math; let your system do signal quality.
• Do not anchor blindly to the median; recalc after each bar. When the cone’s slope flips or width jumps, the plan should adapt.
• Validate seasonality for your symbol and timeframe; not every market has strong calendar effects.
Summary
The Seasonality Monte Carlo Forecaster wraps institutional risk planning into a single overlay: a data-driven seasonal drift, realistic volatility scaling, and a probabilistic cone that answers “where could we be, with what odds?” within your trading horizon. Use it to place stops where randomness is less likely to take you out, to set targets aligned with realistic travel, and to size positions with confidence born from distributions rather than hunches. It will not predict the future, but it will keep your decisions anchored to probabilities—the language markets actually speak.
Weekly High/Low Weekday Stats by [M1rage]Patch: Conditional Statistics by Day of the Weekly Extremum
A new feature has been added that builds a conditional distribution by weekdays.
What’s new
Two new settings:
Condition: Weekly High on — fix the weekday on which the weekly High formed.
Condition: Weekly Low on — fix the weekday on which the weekly Low formed.
The table updates automatically:
Left column — probabilities of the weekly Low given the selected day of the High.
Right column — probabilities of the weekly High given the selected day of the Low.
Column headers now display labels in the format:
Weekly Low | High=Tue
Weekly High | Low=Thu
Line color best indices grouped by Artificial Intelligence
The script uses the best buy indicators, such as moving average crossovers, RSI, and others selected by AI. The idea is to determine whether the stock is classified as a strong buy (yellow line), a buy (green line), or a sell (red line).
TimeSeriesBenchmarkMeasuresLibrary "TimeSeriesBenchmarkMeasures"
Time Series Benchmark Metrics. \
Provides a comprehensive set of functions for benchmarking time series data, allowing you to evaluate the accuracy, stability, and risk characteristics of various models or strategies. The functions cover a wide range of statistical measures, including accuracy metrics (MAE, MSE, RMSE, NRMSE, MAPE, SMAPE), autocorrelation analysis (ACF, ADF), and risk measures (Theils Inequality, Sharpness, Resolution, Coverage, and Pinball).
___
Reference:
- github.com .
- medium.com .
- www.salesforce.com .
- towardsdatascience.com .
- github.com .
mae(actual, forecasts)
In statistics, mean absolute error (MAE) is a measure of errors between paired observations expressing the same phenomenon. Examples of Y versus X include comparisons of predicted versus observed, subsequent time versus initial time, and one technique of measurement versus an alternative technique of measurement.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Absolute Error (MAE).
___
Reference:
- en.wikipedia.org .
- The Orange Book of Machine Learning - Carl McBride Ellis .
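As a plain illustration of the formula (not the library's internal implementation), a standalone Pine sketch of MAE might look like this:
//@version=6
indicator("MAE sketch")
maeOf(float[] actual, float[] forecasts) =>
    float sumAbs = 0.0
    for i = 0 to array.size(actual) - 1
        sumAbs += math.abs(array.get(actual, i) - array.get(forecasts, i))
    sumAbs / array.size(actual)
plot(maeOf(array.from(1.0, 2.0, 3.0), array.from(1.5, 1.5, 3.5)))   // (0.5 + 0.5 + 0.5) / 3 = 0.5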
mse(actual, forecasts)
The Mean Squared Error (MSE) is a measure of the quality of an estimator. As it is derived from the square of Euclidean distance, it is always a positive value that decreases as the error approaches zero.
Parameters:
actual (array) : List of actual values.
forecasts (array) : List of forecasts values.
Returns: - Mean Squared Error (MSE).
___
Reference:
- en.wikipedia.org .
rmse(targets, forecasts, order, offset)
Calculates the Root Mean Squared Error (RMSE) between target observations and forecasts. RMSE is a standard measure of the differences between values predicted by a model and the values actually observed.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - RMSE value.
nmrse(targets, forecasts, order, offset)
Normalised Root Mean Squared Error.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
order (int) : Model order parameter that determines the starting position in the targets array, `default=0`.
offset (int) : Forecast offset related to target, `default=0`.
Returns: - NRMSE value.
rmse_interval(targets, forecasts)
Root Mean Squared Error for a set of interval windows. Computes RMSE by converting interval forecasts (with min/max bounds) into point forecasts using the mean of the interval bounds, then compares against actual target values.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - RMSE value for the combined interval list.
mape(targets, forecasts)
Mean Absolute Percentage Error (MAPE).
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
Returns: - MAPE value.
smape(targets, forecasts, mode)
Symmetric Mean Absolute Percentage Error (SMAPE). Calculates the symmetric percentage error between actual targets and forecasts. SMAPE is a common metric for evaluating forecast accuracy, expressed as a percentage; lower values indicate better forecast accuracy.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecasts.
mode (int) : Type of method: default=0:`sum(abs(Fi-Ti)) / sum(Fi+Ti)` , 1:`mean(abs(Fi-Ti) / ((Fi + Ti) / 2))` , 2:`mean(abs(Fi-Ti) / (abs(Fi) + abs(Ti))) * 100`
Returns: - SMAPE value.
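The three modes listed above translate directly into code. The sketch below is an independent illustration of those formulas (zero denominators and other edge cases are not handled), not the library's own implementation:
//@version=6
indicator("SMAPE sketch")
smapeOf(float[] targets, float[] forecasts, int mode = 0) =>
    n = array.size(targets)
    float num = 0.0
    float den = 0.0
    float acc = 0.0
    for i = 0 to n - 1
        t = array.get(targets, i)
        f = array.get(forecasts, i)
        num += math.abs(f - t)
        den += f + t
        acc += mode == 1 ? math.abs(f - t) / ((f + t) / 2.0) : math.abs(f - t) / (math.abs(f) + math.abs(t))
    mode == 0 ? num / den : mode == 1 ? acc / n : acc / n * 100.0
plot(smapeOf(array.from(100.0, 110.0), array.from(102.0, 108.0)))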
mape_interval(targets, forecasts)
Mean Absolute Percentage Error (MAPE) for a set of interval windows.
Parameters:
targets (array) : List of target observations.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - MAPE value for the combined interval list.
acf(data, k)
Autocorrelation Function (ACF) for a time series at a specified lag.
Parameters:
data (array) : Sample data of the observations.
k (int) : The lag period for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - The autocorrelation value at the specified lag, ranging from -1 to 1.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
acf_multiple(data, k)
Autocorrelation function (ACF) for a time series at a set of specified lags.
Parameters:
data (array) : Sample data of the observations.
k (array) : List of lag periods for which to calculate the autocorrelation. Must be a non-negative integer.
Returns: - List of ACF values for provided lags.
___
The autocorrelation function measures the linear dependence between observations in a time series
at different time lags. It quantifies how well the series correlates with itself at different
time intervals, which is useful for identifying patterns, seasonality, and the appropriate
lag structure for time series models.
ACF values close to 1 indicate strong positive correlation, values close to -1 indicate
strong negative correlation, and values near 0 indicate no linear correlation.
___
Reference:
- statisticsbyjim.com
adfuller(data, n_lag, conf)
Augmented Dickey-Fuller test for stationarity.
Parameters:
data (array) : Data series.
n_lag (int) : Maximum lag.
conf (string) : Confidence Probability level used to test for critical value, (`90%`, `95%`, `99%`).
Returns: - `adf` The test statistic.
- `crit` Critical value for the test statistic at the selected confidence level.
- `nobs` Number of observations used for the ADF regression and calculation of the critical values.
___
The Augmented Dickey-Fuller test is used to determine whether a time series is stationary
or contains a unit root (non-stationary). The null hypothesis is that the series has a unit root
(is non-stationary), while the alternative hypothesis is that the series is stationary.
A stationary time series has statistical properties that do not change over time, making it
suitable for many time series forecasting models. If the test statistic is less than the
critical value, we reject the null hypothesis and conclude the series is stationary.
___
Reference:
- www.jstor.org
- en.wikipedia.org
theils_inequality(targets, forecasts)
Calculates Theil's Inequality Coefficient, a measure of forecast accuracy that quantifies the relative difference between actual and predicted values.
Parameters:
targets (array) : List of target observations.
forecasts (array) : List of forecast values.
Returns: - Theil's Inequality Coefficient value, value closer to 0 is better.
___
Theil's Inequality Coefficient is calculated as: `sqrt(Sum((y_i - f_i)^2)) / (sqrt(Sum(y_i^2)) + sqrt(Sum(f_i^2)))`
where `y_i` represents actual values and `f_i` represents forecast values.
This metric ranges from 0 to infinity, with 0 indicating perfect forecast accuracy.
___
Reference:
- en.wikipedia.org
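Following the formula quoted above, an independent Pine sketch of the coefficient (illustrative only) could read:
//@version=6
indicator("Theil's inequality sketch")
theilsOf(float[] y, float[] f) =>
    float se = 0.0
    float sy = 0.0
    float sf = 0.0
    for i = 0 to array.size(y) - 1
        yi = array.get(y, i)
        fi = array.get(f, i)
        se += math.pow(yi - fi, 2)
        sy += yi * yi
        sf += fi * fi
    math.sqrt(se) / (math.sqrt(sy) + math.sqrt(sf))   // 0 = perfect forecast
plot(theilsOf(array.from(1.0, 2.0, 3.0), array.from(1.1, 1.9, 3.2)))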
sharpness(forecasts)
The average width of the forecast intervals across all observations, representing the sharpness or precision of the predictive intervals.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Sharpness The sharpness level, which is the average width of all prediction intervals across the forecast horizon.
___
Sharpness is an important metric for evaluating forecast quality. It measures how narrow or wide the
prediction intervals are. Higher sharpness (narrower intervals) indicates greater precision in the
forecast intervals, while lower sharpness (wider intervals) suggests less precision.
The sharpness metric is calculated as the mean of the interval widths across all observations, where
each interval width is the difference between the upper and lower bounds of the prediction interval.
Note: This function assumes that the forecasts matrix has at least 2 columns, with the first column
representing the lower bounds and the second column representing the upper bounds of prediction intervals.
___
Reference:
- Hyndman, R. J., & Athanasopoulos, G. (2018). Forecasting: principles and practice. OTexts. otexts.com
resolution(forecasts)
Calculates the resolution of forecast intervals, measuring the average absolute difference between individual forecast interval widths and the overall sharpness measure.
Parameters:
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The average absolute difference between individual forecast interval widths and the overall sharpness measure, representing the resolution of the forecasts.
___
Resolution is a key metric for evaluating forecast quality that measures the consistency of prediction
interval widths. It quantifies how much the individual forecast intervals vary from the average interval
width (sharpness). High resolution indicates that the forecast intervals are relatively consistent
across observations, while low resolution suggests significant variation in interval widths.
The resolution is calculated as the mean absolute deviation of individual interval widths from the
overall sharpness value. This provides insight into the uniformity of the forecast uncertainty
estimates across the forecast horizon.
Note: This function requires the forecasts matrix to have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (sites.stat.washington.edu)
- (www.jstor.org)
coverage(targets, forecasts)
Calculates the coverage probability, which is the percentage of target values that fall within the corresponding forecasted prediction intervals.
Parameters:
targets (array) : List of target values.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - Percent of target values that fall within their corresponding forecast intervals, expressed as a decimal value between 0 and 1 (or 0% and 100%).
___
Coverage probability is a crucial metric for evaluating the reliability of prediction intervals.
It measures how well the forecast intervals capture the actual observed values. An ideal forecast
should have a coverage probability close to the nominal confidence level (e.g., 90%, 95%, or 99%).
For example, if a 95% prediction interval is used, we expect approximately 95% of the actual
target values to fall within those intervals. If the coverage is significantly lower than the
nominal level, the intervals may be too narrow; if it's significantly higher, the intervals may
be too wide.
Note: This function requires the targets array and forecasts matrix to have the same number of
observations, and the forecasts matrix must have at least 2 columns (min, max) representing
the lower and upper bounds of prediction intervals.
___
Reference:
- (www.jstor.org)
pinball(tau, target, forecast)
Pinball loss function, measures the asymmetric loss for quantile forecasts.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
target (float) : The actual observed value to compare against.
forecast (float) : The forecasted value.
Returns: - The Pinball loss value, which quantifies the distance between the forecast and target relative to the specified quantile level.
___
The Pinball loss function is specifically designed for evaluating quantile forecasts. It is
asymmetric, meaning it penalizes underestimates and overestimates differently depending on the
quantile level being evaluated.
For a given quantile τ, the loss function is defined as:
- If target >= forecast: (target - forecast) * τ
- If target < forecast: (forecast - target) * (1 - τ)
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
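The asymmetric definition above reduces to a one-line function. The sketch below (an illustration, not the library source) shows how under- and over-forecasting the 0.9 quantile are penalised differently:
//@version=6
indicator("Pinball loss sketch")
pinballOf(float tau, float target, float forecast) =>
    target >= forecast ? (target - forecast) * tau : (forecast - target) * (1.0 - tau)
plot(pinballOf(0.9, 105.0, 100.0), "Forecast too low")    // (105 - 100) * 0.9 = 4.5
plot(pinballOf(0.9, 100.0, 105.0), "Forecast too high")   // (105 - 100) * 0.1 = 0.5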
pinball_mean(tau, targets, forecasts)
Calculates the mean pinball loss for quantile regression.
Parameters:
tau (float) : The quantile level (between 0 and 1), where 0.5 represents the median.
targets (array) : The actual observed values to compare against.
forecasts (matrix) : The forecasted values in matrix format with at least 2 columns (min, max).
Returns: - The mean pinball loss value across all observations.
___
The pinball_mean() function computes the average Pinball loss across multiple observations,
making it suitable for evaluating overall forecast performance in quantile regression tasks.
This function leverages the asymmetric Pinball loss function to evaluate how well forecasts
capture specific quantiles of the target distribution. The choice of which column from the
forecasts matrix to use depends on the quantile level:
- For τ ≤ 0.5: Uses the first column (min) of forecasts
- For τ > 0.5: Uses the second column (max) of forecasts
This loss function is commonly used in quantile regression and probabilistic forecasting
to evaluate how well forecasts capture specific quantiles of the target distribution.
___
Reference:
- (www.otexts.com)
XAUUSD Strength Dashboard with VolumeXAUUSD Strength Dashboard with Volume Analysis
📌 Description
This advanced Pine Script indicator provides a multi-timeframe dashboard for XAUUSD (Gold vs. USD), combining price action analysis with volume confirmation to generate high-probability trading signals. It detects:
✅ Break of Structure (BOS)
✅ Fair Value Gaps (FVG)
✅ Change of Character (CHOCH)
✅ Trendline Breaks (9/21 SMA Crossover)
✅ Volume Spikes (Confirmation of Strength)
The dashboard displays strength scores (0-100%) and action recommendations (Strong Buy/Buy/Neutral/Sell/Strong Sell) across multiple timeframes, helping traders identify confluences for better trade decisions.
🎯 How It Works
1. Multi-Timeframe Analysis
Fetches data from 1m, 5m, 15m, 30m, 1h, 4h, Daily, and Weekly timeframes.
Compares trend direction, BOS, FVG, CHOCH, and volume spikes across all timeframes.
2. Volume-Confirmed Strength Score
The Strength Score (0-100%) is calculated using:
Trend Direction (25 points) → 9 SMA vs. 21 SMA
Break of Structure (20 points) → New highs/lows with momentum
Fair Value Gaps (10 points) → Imbalance zones
Change of Character (10 points) → Shift in market structure
Trendline Break (20 points) → SMA crossover confirmation
Volume Spike (15 points) → High volume confirms moves
Score Interpretation:
≥75% → Strong Buy (High confidence bullish move)
60-74% → Buy (Bullish but weaker confirmation)
40-59% → Neutral (No strong bias)
25-39% → Sell (Bearish but weaker confirmation)
≤25% → Strong Sell (High confidence bearish move)
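To make the weighting concrete, a simplified Pine sketch of the scoring logic is shown below. The trend, trendline-break, and volume checks follow the point values listed above, while the break-of-structure test is a crude placeholder and the FVG and CHOCH components (10 points each) are omitted, so treat it as an illustration of the idea rather than the indicator's actual detection logic:
//@version=6
indicator("Strength score sketch")
smaFast = ta.sma(close, 9)
smaSlow = ta.sma(close, 21)
volSpike = volume > ta.sma(volume, 20) * 1.5          // 1.5x volume spike, matching the default multiplier
bosUp    = close > ta.highest(high, 20)[1]            // simplified bullish break-of-structure placeholder
score = 0.0
score += (smaFast > smaSlow ? 25 : 0)                 // trend direction (9 SMA vs. 21 SMA)
score += (bosUp ? 20 : 0)                             // break of structure
score += (ta.crossover(smaFast, smaSlow) ? 20 : 0)    // trendline break (SMA crossover)
score += (volSpike ? 15 : 0)                          // volume spike confirmation
plot(score, "Strength score (partial)")
hline(75, "Strong Buy threshold")
hline(60, "Buy threshold")
hline(40, "Neutral floor")
hline(25, "Sell floor")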
3. Dashboard & Chart Markers
Dashboard Table: Shows Trend, BOS, Volume, CHOCH, TL Break, Strength %, Key Level, and Action for each timeframe.
Chart Markers:
🟢 Green Triangles → Bullish BOS
🔴 Red Triangles → Bearish BOS
🟢 Green Circles → Bullish CHOCH
🔴 Red Circles → Bearish CHOCH
📈 Green Arrows → Bullish Trendline Break
📉 Red Arrows → Bearish Trendline Break
"Vol↑" (Lime) → Bullish Volume Spike
"Vol↓" (Maroon) → Bearish Volume Spike
🚀 How to Use
1. Dashboard Interpretation
Higher Timeframes (D/W) → Show the dominant trend.
Lower Timeframes (1m-4h) → Help with entry timing.
Strength Score ≥75% or ≤25% → Look for high-confidence trades.
Volume Spikes → Confirm breakouts/reversals.
2. Trading Strategy
📈 Long (Buy) Setup:
Higher TFs (D/W/4h) show bullish trend (↑).
Current TF has BOS & Volume Spike.
Strength Score ≥60%.
Key Level (Low) holds as support.
📉 Short (Sell) Setup:
Higher TFs (D/W/4h) show bearish trend (↓).
Current TF has BOS & Volume Spike.
Strength Score ≤40%.
Key Level (High) holds as resistance.
3. Customization
Adjust Volume Spike Multiplier (Default: 1.5x) → Controls sensitivity to volume spikes.
Toggle Timeframes → Enable/disable higher/lower timeframes.
🔑 Key Benefits
✔ Multi-Timeframe Confluence → Avoids false signals.
✔ Volume Confirmation → Filters low-quality breakouts.
✔ Clear Strength Scoring → Removes emotional bias.
✔ Visual Chart Markers → Easy to spot key signals.
This indicator is ideal for gold traders who follow institutional order flow, market structure, and volume analysis to improve their trading decisions.
🎯 Best Used With:
Support/Resistance Levels
Fibonacci Retracements
Price Action Confirmation
🚀 Happy Trading! 🚀
MistaB SMC Navigation ToolkitMistaB SMC Navigation Toolkit
A complete Smart Money Concepts (SMC) toolkit designed for precision navigation of market structure, order flow, and premium/discount trading zones. Perfect for traders following ICT-style concepts and multi-timeframe confluence.
Features
✅ Order Blocks (OBs)
• Automatic bullish & bearish OB detection
• Optional displacement & high-volume filters
• Midline display for quick equilibrium view
• Auto-expiry and broken OB cleanup
✅ Fair Value Gaps (FVGs)
• Bullish & bearish gap detection
• HTF bias filtering for higher accuracy
• Compact boxes with labels
• Automatic removal when filled
✅ Market Structure (BoS / CHoCH)
• Fractal-based swing detection
• Break of Structure & Change of Character labeling
• Dynamic HTF bias dimming
✅ Premium / Discount Zones
• Auto-calculated mid-level
• Highlighted zones for optimal trade placement
✅ Higher Timeframe (HTF) Confirmation
• Configurable confirmation timeframe
• On-chart HTF status label (Bullish / Bearish / Not Required)
✅ Automatic Cleanup System
• Fast or delayed cleanup for expired/broken zones
• Dimmed colors for invalidated levels
How to Use
Set your preferred HTF in the settings.
Look for OB/FVGs aligned with HTF bias.
Enter in discount zones for longs or premium zones for shorts.
Confirm with BoS / CHoCH signals before entry.
Manage trades towards opposing liquidity zones or HTF levels.
Disclaimer
This indicator is for educational purposes only. It does not provide financial advice or guarantee future results. Always practice proper risk management and test thoroughly before live trading.
Correlation HeatMap Matrix Data [TradingFinder]🔵 Introduction
Correlation is a statistical measure that shows the degree and direction of a linear relationship between two assets.
Its value ranges from -1 to +1 : +1 means perfect positive correlation, 0 means no linear relationship, and -1 means perfect negative correlation.
In financial markets, correlation is used for portfolio diversification, risk management, pairs trading, intermarket analysis, and identifying divergences.
Correlation HeatMap Matrix Data TradingFinder is a Pine Script v6 library that calculates and returns raw correlation matrix data between up to 20 symbols. It only provides the data – it does not draw or render the heatmap – making it ideal for use in other scripts that handle visualization or further analysis. The library uses ta.correlation for fast and accurate calculations.
It also includes two helper functions for visual styling :
CorrelationColor(corr) : takes the correlation value as input and generates a smooth gradient color, ranging from strong negative to strong positive correlation.
CorrelationTextColor(corr) : takes the correlation value as input and returns a text color that ensures optimal contrast over the background color.
Library
"Correlation_HeatMap_Matrix_Data_TradingFinder"
CorrelationColor(corr)
Parameters:
corr (float)
CorrelationTextColor(corr)
Parameters:
corr (float)
Data_Matrix(Corr_Period, Sym_1, Sym_2, Sym_3, Sym_4, Sym_5, Sym_6, Sym_7, Sym_8, Sym_9, Sym_10, Sym_11, Sym_12, Sym_13, Sym_14, Sym_15, Sym_16, Sym_17, Sym_18, Sym_19, Sym_20)
Parameters:
Corr_Period (int)
Sym_1 (string)
Sym_2 (string)
Sym_3 (string)
Sym_4 (string)
Sym_5 (string)
Sym_6 (string)
Sym_7 (string)
Sym_8 (string)
Sym_9 (string)
Sym_10 (string)
Sym_11 (string)
Sym_12 (string)
Sym_13 (string)
Sym_14 (string)
Sym_15 (string)
Sym_16 (string)
Sym_17 (string)
Sym_18 (string)
Sym_19 (string)
Sym_20 (string)
🔵 How to use
Import the library into your Pine Script using the import keyword and its full namespace.
Decide how many symbols you want to include in your correlation matrix (up to 20). Each symbol must be provided as a string, for example FX:EURUSD .
Choose the correlation period (Corr\_Period) in bars. This is the lookback window used for the calculation, such as 20, 50, or 100 bars.
Call Data_Matrix(Corr_Period, Sym_1, ..., Sym_20) with your selected parameters. The function will return an array containing the correlation values for every symbol pair (upper triangle of the matrix plus diagonal).
For example :
var string Sym_1 = '' , var string Sym_2 = '' , var string Sym_3 = '' , var string Sym_4 = '' , var string Sym_5 = '' , var string Sym_6 = '' , var string Sym_7 = '' , var string Sym_8 = '' , var string Sym_9 = '' , var string Sym_10 = ''
var string Sym_11 = '', var string Sym_12 = '', var string Sym_13 = '', var string Sym_14 = '', var string Sym_15 = '', var string Sym_16 = '', var string Sym_17 = '', var string Sym_18 = '', var string Sym_19 = '', var string Sym_20 = ''
switch Market
'Forex' => Sym_1 := 'EURUSD' , Sym_2 := 'GBPUSD' , Sym_3 := 'USDJPY' , Sym_4 := 'USDCHF' , Sym_5 := 'USDCAD' , Sym_6 := 'AUDUSD' , Sym_7 := 'NZDUSD' , Sym_8 := 'EURJPY' , Sym_9 := 'EURGBP' , Sym_10 := 'GBPJPY'
,Sym_11 := 'AUDJPY', Sym_12 := 'EURCHF', Sym_13 := 'EURCAD', Sym_14 := 'GBPCAD', Sym_15 := 'CADJPY', Sym_16 := 'CHFJPY', Sym_17 := 'NZDJPY', Sym_18 := 'AUDNZD', Sym_19 := 'USDSEK' , Sym_20 := 'USDNOK'
'Stock' => Sym_1 := 'NVDA' , Sym_2 := 'AAPL' , Sym_3 := 'GOOGL' , Sym_4 := 'GOOG' , Sym_5 := 'META' , Sym_6 := 'MSFT' , Sym_7 := 'AMZN' , Sym_8 := 'AVGO' , Sym_9 := 'TSLA' , Sym_10 := 'BRK.B'
,Sym_11 := 'UNH' , Sym_12 := 'V' , Sym_13 := 'JPM' , Sym_14 := 'WMT' , Sym_15 := 'LLY' , Sym_16 := 'ORCL', Sym_17 := 'HD' , Sym_18 := 'JNJ' , Sym_19 := 'MA' , Sym_20 := 'COST'
'Crypto' => Sym_1 := 'BTCUSD' , Sym_2 := 'ETHUSD' , Sym_3 := 'BNBUSD' , Sym_4 := 'XRPUSD' , Sym_5 := 'SOLUSD' , Sym_6 := 'ADAUSD' , Sym_7 := 'DOGEUSD' , Sym_8 := 'AVAXUSD' , Sym_9 := 'DOTUSD' , Sym_10 := 'TRXUSD'
,Sym_11 := 'LTCUSD' , Sym_12 := 'LINKUSD', Sym_13 := 'UNIUSD', Sym_14 := 'ATOMUSD', Sym_15 := 'ICPUSD', Sym_16 := 'ARBUSD', Sym_17 := 'APTUSD', Sym_18 := 'FILUSD', Sym_19 := 'OPUSD' , Sym_20 := 'USDT.D'
'Custom' => Sym_1 := Sym_1_C , Sym_2 := Sym_2_C , Sym_3 := Sym_3_C , Sym_4 := Sym_4_C , Sym_5 := Sym_5_C , Sym_6 := Sym_6_C , Sym_7 := Sym_7_C , Sym_8 := Sym_8_C , Sym_9 := Sym_9_C , Sym_10 := Sym_10_C
,Sym_11 := Sym_11_C, Sym_12 := Sym_12_C, Sym_13 := Sym_13_C, Sym_14 := Sym_14_C, Sym_15 := Sym_15_C, Sym_16 := Sym_16_C, Sym_17 := Sym_17_C, Sym_18 := Sym_18_C, Sym_19 := Sym_19_C , Sym_20 := Sym_20_C
= Corr.Data_Matrix(Corr_period, Sym_1 ,Sym_2 ,Sym_3 ,Sym_4 ,Sym_5 ,Sym_6 ,Sym_7 ,Sym_8 ,Sym_9 ,Sym_10,Sym_11,Sym_12,Sym_13,Sym_14,Sym_15,Sym_16,Sym_17,Sym_18,Sym_19,Sym_20)
Loop through or index into this array to retrieve each correlation value for your custom layout or logic.
Pass each correlation value to CorrelationColor() to get the corresponding gradient background color, which reflects the correlation’s strength and direction (negative to positive).
For example :
Corr.CorrelationColor(SYM_3_10)
Pass the same correlation value to CorrelationTextColor() to get the correct text color for readability against that background.
For example :
Corr.CorrelationTextColor(SYM_1_1)
Use these colors in a table or label to render your own heatmap or any other visualization you need.
Average hourly move by @zeusbottradingThis Pine Script called "Average hourly move by @zeusbottrading" calculates and displays the average percentage price movement for each hour of the day using the full available historical data.
How the script works:
It tracks the high and low price within each full hour (e.g., 10:00–10:59).
It calculates the percentage move as the range between high and low relative to the average price during that hour.
For each hour of the day, it stores the total of all recorded moves and the count of occurrences across the full history.
At the end, the script computes the average move for each hour (0 to 23) and determines the minimum and maximum averages.
Using these values, it creates a color gradient, where the hours with the lowest average volatility are red and the highest are green.
It then displays a table in the top-right corner of the chart showing each hour and its average percentage move, color‑coded according to volatility.
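A simplified reconstruction of that hourly bookkeeping is sketched below (it assumes an intraday chart; the variable names and the absence of the colour-graded table are simplifications, not the author's code):
//@version=6
indicator("Average hourly move sketch")
var float[] sumMove = array.new_float(24, 0.0)   // accumulated % moves per hour of day
var int[]   cnt     = array.new_int(24, 0)       // number of completed hours observed
var float hrHigh = high
var float hrLow  = low
newHour = hour != hour[1]
if newHour and bar_index > 0
    h = hour[1]                                                   // the hour that just finished
    movePct = (hrHigh - hrLow) / ((hrHigh + hrLow) / 2) * 100     // range relative to that hour's average price
    array.set(sumMove, h, array.get(sumMove, h) + movePct)
    array.set(cnt, h, array.get(cnt, h) + 1)
    hrHigh := high
    hrLow  := low
else
    hrHigh := math.max(hrHigh, high)
    hrLow  := math.min(hrLow, low)
avgNow = array.get(cnt, hour) > 0 ? array.get(sumMove, hour) / array.get(cnt, hour) : na
plot(avgNow, "Average % move for the current hour of day")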
What it can be used for:
Identifying when the market is historically most volatile or calm during the day.
Helping plan trade entries and exits based on expected volatility.
Comparing hourly volatility patterns across different markets or instruments.
Adjusting position size and risk management according to the anticipated volatility in a particular hour.
Using long-term historical data to understand recurring daily volatility patterns.
In short, this script is a useful tool for traders who want to fine‑tune their trading strategies and risk management by analyzing time‑based volatility profiles.
Awesome Indicator# Moving Average Ribbon with ADR% - Complete Trading Indicator
## Overview
The **Moving Average Ribbon with ADR%** is a comprehensive technical analysis indicator that combines multiple analytical tools to provide traders with a complete picture of price trends, volatility, relative performance, and position sizing guidance. This multi-faceted indicator is designed for both swing and positional traders looking for data-driven entry and exit signals.
## Key Components
### 1. Moving Average Ribbon System
- **4 Customizable Moving Averages** with default periods: 13, 21, 55, and 189
- **Multiple MA Types**: SMA, EMA, SMMA (RMA), WMA, VWMA
- **Color-coded visualization** for easy trend identification
- **Flexible configuration** allowing users to modify periods, types, and colors
### 2. Average Daily Range Percentage (ADR%)
- Calculates the average daily volatility as a percentage
- Uses a 20-period simple moving average of (High/Low - 1) * 100
- Helps traders understand the stock's typical daily movement range
- Essential for position sizing and stop-loss placement
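For reference, the stated ADR% formula fits in a couple of lines of Pine; this sketch requests daily data so it works on any chart timeframe, which is an adaptation rather than the indicator's own code:
//@version=6
indicator("ADR% sketch")
adrPct = request.security(syminfo.tickerid, "D", ta.sma(high / low - 1, 20) * 100)   // 20-day SMA of (High/Low - 1) * 100
plot(adrPct, "ADR%")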
### 3. Volume Analysis (Up/Down Ratio)
- Analyzes volume distribution over the last 55 periods
- Calculates the ratio of volume on up days vs down days
- Provides insight into buying vs selling pressure
- Values > 1 indicate more buying volume, < 1 indicate more selling volume
### 4. Absolute Relative Strength (ARS)
- **Dual timeframe analysis** with customizable reference points
- **High ARS**: Performance relative to benchmark from a high reference point (default: Sep 27, 2024)
- **Low ARS**: Performance relative to benchmark from a low reference point (default: Apr 7, 2025)
- Uses NSE:NIFTY as default comparison symbol
- Color-coded display: Green for outperformance, Red for underperformance
### 5. Relative Performance Table
- **5 timeframes**: 1 Week, 1 Month, 3 Months, 6 Months, 1 Year
- Shows stock performance **relative to benchmark index**
- Formula: (Stock Return - Index Return) for each period
- **Color coding**:
- Lime: >5% outperformance
- Yellow: -5% to +5% relative performance
- Red: <-5% underperformance
### 6. Dynamic Position Allocation System
- **6-factor scoring system** based on price vs EMAs (21, 55, 189)
- Evaluates:
- Price above/below each EMA
- EMA alignment (21>55, 55>189, 21>189)
- **Allocation recommendations**:
- 100% allocation: Score = 6 (all bullish signals)
- 75% allocation: Score = 4
- 50% allocation: Score = 2
- 25% allocation: Score = 0
- 0% allocation: Score = -2, -4, -6 (bearish signals)
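Assuming each of the six checks contributes +1 when bullish and -1 when bearish (an assumption that reproduces the even scores listed above), the allocation logic can be sketched as follows; it is an illustration of the scheme, not the indicator's code:
//@version=6
indicator("Allocation score sketch")
ema21  = ta.ema(close, 21)
ema55  = ta.ema(close, 55)
ema189 = ta.ema(close, 189)
signOf(bool bullish) =>
    bullish ? 1 : -1
score = signOf(close > ema21) + signOf(close > ema55) + signOf(close > ema189) + signOf(ema21 > ema55) + signOf(ema55 > ema189) + signOf(ema21 > ema189)
alloc = score == 6 ? 100 : score == 4 ? 75 : score == 2 ? 50 : score == 0 ? 25 : 0
plot(alloc, "Suggested allocation %")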
## Display Tables
### Performance Table (Top Right)
Shows relative performance vs benchmark across multiple timeframes with intuitive color coding for quick assessment.
### Metrics Table (Bottom Right)
Displays key statistics:
- **ADR%**: Average Daily Range percentage
- **U/D**: Up/Down volume ratio
- **Allocation%**: Recommended position size
- **High ARS%**: Relative strength from high reference
- **Low ARS%**: Relative strength from low reference
## How to Use This Indicator
### For Trend Analysis
1. **Moving Average Ribbon**: Look for price above ascending MAs for bullish trends
2. **MA Alignment**: Bullish when shorter MAs are above longer MAs
3. **Color coordination**: Use consistent color scheme for quick visual analysis
### For Entry/Exit Timing
1. **Performance Table**: Enter when showing consistent outperformance across timeframes
2. **Volume Analysis**: Confirm entries with U/D ratio > 1.5 for strong buying
3. **ARS Values**: Look for positive ARS readings for relative strength confirmation
### For Position Sizing
1. **Allocation System**: Use the recommended allocation percentage
2. **ADR% Consideration**: Adjust position size based on volatility
3. **Risk Management**: Lower allocation in high ADR% stocks
### For Risk Management
1. **ADR% for Stop Loss**: Set stops at 1-2x ADR% below entry
2. **Relative Performance**: Reduce positions when consistently underperforming
3. **Volume Confirmation**: Be cautious when U/D ratio deteriorates
## Best Practices
### Timeframe Recommendations
- **Intraday**: Use lower MA periods (5, 13, 21, 55)
- **Swing Trading**: Default settings work well (13, 21, 55, 189)
- **Position Trading**: Consider higher periods (21, 50, 100, 200)
### Market Conditions
- **Trending Markets**: Focus on MA alignment and relative performance
- **Sideways Markets**: Rely more on ADR% for range trading
- **Volatile Markets**: Reduce allocation percentage regardless of signals
### Customization Tips
1. Adjust reference dates for ARS calculation based on significant market events
2. Change comparison symbol to sector-specific indices for better relative analysis
3. Modify MA periods based on your trading style and market characteristics
## Technical Specifications
- **Version**: Pine Script v6
- **Overlay**: Yes (plots on price chart)
- **Real-time Updates**: Yes
- **Data Requirements**: Minimum 252 bars for complete calculations
- **Compatible Timeframes**: All standard timeframes
## Limitations
- Performance calculations require sufficient historical data
- ARS calculations depend on selected reference dates
- Volume analysis may be less reliable in low-volume stocks
- Relative performance is only as good as the chosen benchmark
This indicator is designed to provide a comprehensive analysis framework rather than simple buy/sell signals. It's recommended to use this in conjunction with your overall trading strategy and risk management rules.
Swing Point Volume Z-ScoreSWING POINT VOLUME Z-SCORE INDICATOR
A volume analysis tool that identifies statistical volume spikes at swing points with optional higher timeframe confirmation.
This indicator uses Leviathan's method of swing detection. All credit to him for his amazing work (and any mistakes mine). I was also inspired by Trading Riot, whose Capitulation indicator gave me the idea to create this one.
WHAT IT DOES
This indicator combines three analytical approaches:
- Volume Z-score calculation to measure volume significance statistically
- Automatic swing point detection (higher highs, lower lows, etc.)
- Optional higher timeframe volume confirmation
The Z-score measures how many standard deviations current volume is from the average, helping identify when volume activity is genuinely elevated rather than relying on visual assessment.
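As an illustration of the Z-score component on its own (swing-point detection and higher-timeframe confirmation are omitted), a minimal Pine sketch follows; the 50-bar lookback is an assumed default:
//@version=6
indicator("Volume Z-score sketch")
len = input.int(50, "Lookback")
volMean  = ta.sma(volume, len)
volStdev = ta.stdev(volume, len)
zScore   = volStdev > 0 ? (volume - volMean) / volStdev : na   // standard deviations above/below average volume
plot(zScore, "Volume Z-score")
hline(1.0, "Normal activity (green)")
hline(2.0, "Elevated activity (orange)")
hline(3.0, "Potential institutional activity (red)")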
VISUAL SYSTEM
The indicator uses a color-coded approach for quick assessment:
GREEN - Normal Activity (Z-Score 1.0-2.0)
Above-average volume levels
ORANGE - Elevated Activity (Z-Score 2.0-3.0)
High volume activity that may indicate increased interest
RED - Potential Institutional Activity (Z-Score 3.0+)
Very high volume levels that could suggest significant market participation
HIGHER TIMEFRAME CONFIRMATION
When enabled, the indicator checks volume on a higher timeframe:
- Checkmark symbol indicates HTF volume also shows elevation
- X symbol indicates HTF volume doesn't confirm
- Auto-selects appropriate higher timeframe or allows manual selection
KEY FEATURES
Statistical Approach: Uses Z-score methodology rather than arbitrary volume thresholds
Adaptive Thresholds: Can adjust based on market volatility conditions
Swing Focus: Concentrates analysis on structurally important price levels
Volume Trends: Shows whether volume is accelerating or decelerating
Success Tracking: Monitors how often HTF confirmation proves effective
DISPLAY OPTIONS
Basic Mode: Essential features with clean interface
Advanced Mode: Additional customization and analytics
Label Sizing: Four size options to fit different screen setups
Table Position: Moveable info table with transparency control
Custom Colors: Adjustable for different chart themes
PRACTICAL APPLICATIONS
May help identify:
- Volume spikes at support/resistance levels
- Potential accumulation or distribution zones
- Breakout confirmation with volume backing
- Areas where larger market participants might be active
Works on all liquid markets and timeframes, though generally more effective on 15-minute charts and higher.
USAGE NOTES
This is an analytical tool that highlights statistically significant volume events. It should be used as part of a broader analysis approach rather than as a standalone trading system.
The indicator works best when combined with:
- Price action analysis
- Support and resistance identification
- Trend analysis
- Proper risk management
Default settings are designed to work well across most instruments, but users can adjust parameters based on their specific needs and trading style.
TECHNICAL DETAILS
Built with Pine Script v5
Compatible with all TradingView subscription levels
Open source code available for review and learning
Works on stocks, forex, crypto, futures, and other liquid instruments
The statistical approach helps remove some subjectivity from volume analysis, though like all technical indicators, it should be used thoughtfully as part of a complete trading plan.
[Pandora][Swarm] Rapid Exponential Moving AverageENVISIONING POSSIBILITY
What is the theoretical pinnacle of possibility? The current state of algorithmic affairs falls far short of my aspirations for achievable feasibility. I'm lifting the lid off of Pandora's box once again, very publicly this time, as a brute force challenge to conventional 'wisdom'. The unfolding series of time mandates a transcendental systemic alteration...
THE MOVING AVERAGE ZOO:
The realm of digital signal processing for trading is filled with familiar antiquated filtering tools. Two families of filtration, being 'infinite impulse response' (EMA, RMA, etc.) and 'finite impulse response' (WMA, SMA, etc.), are prevalently employed without question. These filter types are the mules and donkeys of data analysis, broadly accepted for use in finance.
At first glance, they appear sufficient for most tasks, offering a basic straightforward way to reduce noise and highlight trends. Yet, beneath their simplistic facade lies a constellation of limitations and impediments, each having its own finicky quirks. Upon closer inspection, identifiable drawbacks render them far from ideal for many real-world applications in today's volatile markets.
KNOWN FUNDAMENTAL FLAWS:
Despite commonplace moving average (MA) popularity, these conventional filters suffer from an assortment of fundamental flaws. Most of them don't genuinely address core challenges of how to preserve the true dynamics of a signal while suppressing noise and retaining cutoff frequency compliance. Their simple cookie cutter structures make them ill-suited in actuality for dynamic market environments. In reality, they often trade one problem for another dilemma, forsaking analytics to choose between distortion and delay.
A deeper-seated issue remains within frequency compliance: how adequately a filter respects (or disrespects) the underlying signal's spectral properties according to its assigned periodic parameter. Traditional MAs habitually distort phase relationships, causing delayed reactions with surplus lag or exaggerations with excessive undershoot/overshoot. For applications requiring timely resilience, such as algorithmic trading, these shortcomings are often functionally unacceptable. What's needed are vigorous filters that can more accurately retain signal behaviors while minimizing lag without sacrificing smoothness and uniformity. Until then, the public MA zoo remains a collection of corny compromises rather than a favorable toolbelt of solutions.
P.S.: In PSv7+, in my opinion, many of these geriatric MAs deserve no future with easy access for the naive, who simply don't know these filters are most likely creating bigger problems than they solve.
R.E.M.A.
What is this? I prefer to think of it as the "radical EMA", definitely along my lines of a retire everything morte algorithm. This isn't your run of the mill average from the petting zoo. I would categorize it as a paradigm shifting rampant economic masochistic annihilator, sufficiently good enough to begin ruthlessly executing moving averages left and right. Um, yeah... that kind of moving average destructor as you may soon recognize with a few 'Filters+' settings adjustments, realizing ordinary EMA has been doing us an injustice all this time.
Does it possess the capability to relentlessly exterminate most averaging filters in existence? Well, it's about time we find out, by uncaging it on the loose into the greater economic wilderness. Only then can we truly find out if it is indeed a radical exponential market accelerant whose time has come. If it is, then it may eventually become a reality erasing monolithic anomaly destined for greatness, ultimately changing the entire landscape of trading in perpetuity.
UNLEASHING NEXT-GEN:
This lone next generation exoweapon algorithm is intended to initiate the transformative beginning stages of mass filtration deprecation. However, it won't be the only one, just the first arrival of its alien kind from me. Welcome to notion #1 of my future filtration frontier, on this episode of the algorithmic twilight zone. Where reality takes a twisting turn one dimension beyond practical logic, after persistent models of mindset disintegrate into insignificance, followed by illusory perception confronted into cognitive dissonance.
An evolutionary path to genuine advancement resides outside the prison of preconceptions, manifesting only after divergence from persistent binding restrictions of dogmatic doctrines. Such a genesis in transformative thinking will catalyze unbounded cognitive potential, plowing the way for the cultivation of total redesigns of thought. Futuristic innovative breakthroughs demand the surrender of legacy and outmoded understandings.
Now that the world's largest assembly of investors has been assembled, there are additional tasks left to perform. I'm compelled to deploy this mathematical weapon of mass financial creation into its rightful destined hands, to "WE THE PEOPLE" of TV.
SCRIPT INTENTION:
Deprecate anything and everything as any non-commercial member sees fit. This includes your existing code formulations already in working functional modes of operation AND/OR future projects in the works. Swapping is nearly as simple as copying and pasting with meager modifications, after you have identified comparable likeness in this indicator's settings with a visual assessment. Results may become eye opening, but only if you dare to look and test.
Where you may suspect a ta.filter() is lacking sufficient luster or may be flat out majorly deficient, employing rema, drema, trema, or qrema configurations may be a more suitable replacement. That's up to you to discern. My code satire already identifies likely bottom of the barrel suspects that either belong in the extinction record or have already been marked for deprecation. They are ordered more towards the bottom by rank where they belong. SuperSmoother is a masterpiece here to stay, being my original go-to reference filter. Everything you see here is already deprecated, including REMA...
REMA CHARACTERISTICS
- VERY low lag
- No overshoot
- Frequency compliant
- Proper initialization at bar_index==0
- Period parameter accepts positive floating point numerics (AND integers!)
- Infinite impulse response (IIR) filter
- Compact code footprint
- Minimized computational overhead
BTC Correlation PercentagePurpose
This indicator displays the correlation percentage between the current trading instrument and Bitcoin (BTC/USDT) as a text label on the chart. It helps traders quickly assess how closely an asset's price movements align with Bitcoin's fluctuations.
Key Features
Precise Calculation: Shows correlation as a percentage with one decimal place (e.g., 25.6%).
Customizable Appearance: Allows adjustment of colors, position, and calculation period.
Clean & Simple: Displays only essential information without cluttering the chart.
Universal Compatibility: Works on any timeframe and with any trading pair.
Input Settings
Core Parameters:
BTC Symbol – Ticker for Bitcoin (default: BINANCE:BTCUSDT).
Correlation Period – Number of bars used for calculation (default: 50 candles).
Show Correlation Label – Toggle visibility of the correlation label.
Visual Customization:
Text Color – Label text color (default: white).
Background Color – Label background color (default: semi-transparent blue).
Border Color – Border color around the label (default: gray).
Label Position – Where the label appears on the chart (default: top-right).
Interpreting Correlation Values
70% to 100% → Strong positive correlation (asset moves in sync with BTC).
30% to 70% → Moderate positive correlation.
-30% to 30% → Weak or no correlation.
-70% to -30% → Moderate negative correlation (asset moves opposite to BTC).
-100% to -70% → Strong negative correlation.
Practical Use Cases
For Altcoins: A correlation above 50% suggests high dependence on Bitcoin’s price action.
For Futures Trading: Helps assess systemic risks tied to BTC movements.
During High Volatility: Determines whether an asset’s price change is driven by its own factors or broader market trends.
How It Works
The indicator recalculates automatically with each new candle. For the most reliable results, it is recommended for use on daily or higher timeframes.
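For reference, a minimal sketch of the underlying calculation using Pine's built-in Pearson correlation. The input names mirror the settings above, but the table rendering and exact defaults are illustrative only.
```
//@version=5
indicator("BTC Correlation % (sketch)", overlay=true)

// Inputs mirror the settings described above; defaults are illustrative.
btcSymbol = input.symbol("BINANCE:BTCUSDT", "BTC Symbol")
corrLen   = input.int(50, "Correlation Period")

btcClose = request.security(btcSymbol, timeframe.period, close)

// Pearson correlation of closes over the lookback, shown as a percentage.
corrPct = ta.correlation(close, btcClose, corrLen) * 100

var table t = table.new(position.top_right, 1, 1)
if barstate.islast
    table.cell(t, 0, 0, "BTC corr: " + str.tostring(corrPct, "#.0") + "%",
         text_color=color.white, bgcolor=color.new(color.blue, 60))
```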
This tool provides traders with a quick, visual way to gauge Bitcoin’s influence on other assets, improving decision-making in crypto markets. 🚀
TRI - Smart Zones============================================================================
# TRI - SMART ZONES v2.0
## Professional Smart Money Concepts Indicator for Pine Script v6
============================================================================
## 📊 OVERVIEW
**TRI - Smart Zones** is a comprehensive Smart Money Concepts indicator that
combines multiple institutional trading concepts into a single, powerful tool.
Built with Pine Script v6 for optimal performance and reliability.
## 🎯 CORE FEATURES
### **Fair Value Gaps (FVG)**
- **Detection**: Automatic identification of price imbalances
- **Types**: Bullish and Bearish Fair Value Gaps
- **Threshold**: Customizable gap size requirements (0.1% default)
- **Extension**: Configurable zone projection length
- **Mitigation**: Real-time tracking of gap fills
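As a rough illustration of the FVG logic (not the script's exact implementation): a bullish gap exists when the current bar's low sits above the high from two bars back by at least the threshold, with the mirror case for bearish gaps.
```
//@version=6
indicator("FVG Detection (sketch)", overlay=true, max_boxes_count=100)

// Illustrative inputs; the published threshold default is 0.1%.
gapThresholdPct = input.float(0.1, "Min gap size (%)", step=0.05)
extendBars      = input.int(20, "Zone extension (bars)")

// Bullish FVG: current low gaps above the high from two bars back.
bullGap = low - high[2]
bullFVG = bullGap > 0 and bullGap / high[2] * 100 >= gapThresholdPct

// Bearish FVG: current high gaps below the low from two bars back.
bearGap = low[2] - high
bearFVG = bearGap > 0 and bearGap / low[2] * 100 >= gapThresholdPct

if bullFVG
    box.new(bar_index - 2, low, bar_index + extendBars, high[2],
         bgcolor=color.new(color.green, 85), border_color=color.green)
if bearFVG
    box.new(bar_index - 2, low[2], bar_index + extendBars, high,
         bgcolor=color.new(color.red, 85), border_color=color.red)
```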
### **Order Blocks (OB)**
- **Detection**: Volume-based institutional footprint identification
- **Types**: Bullish and Bearish Order Blocks
- **Method**: Pivot-based volume analysis with configurable lookback
- **Validation**: Market structure confirmation required
- **Extension**: Adjustable zone projection
### **BSL/SSL Liquidity Levels**
- **Multi-Timeframe**: Automatic higher timeframe reference
- **Dynamic**: Real-time level updates and extensions
- **Visual**: Clear line markings with timeframe labels
- **Smart**: Adaptive timeframe selection based on current chart
### **Fibonacci Extensions**
- **ZigZag Integration**: Advanced pivot point detection
- **Levels**: Customizable Fibonacci ratios (38.2%, 61.8%, 100%, 161.8%)
- **Projection**: Dynamic extension from swing points
- **Visual**: Subtle dashed lines with level/price labels
### **Smart Dashboard**
- **Zone Statistics**: Real-time FVG and OB counts
- **Success Rates**: Mitigation percentages for each zone type
- **Market Bias**: Intelligent bullish/bearish/neutral assessment
- **Positioning**: Customizable location and size
### **Zone Analysis Engine**
- **Technical Confluence**: RSI, ADX, ATR, Volume analysis
- **VWAP Integration**: Institutional price reference
- **Confidence Scoring**: High/Mid/Low signal classification
- **Signal Arrows**: Visual trade direction indicators
## 🔔 ALERT SYSTEM
### **Market Structure Alerts**
- `Market Bias Changed` - Shift in overall market sentiment
- `BSL Touched` - Buy Side Liquidity level reached
- `SSL Touched` - Sell Side Liquidity level reached
### **Zone Touch Alerts**
- `OB Touched` - Any Order Block interaction
- `Bullish OB Touched` - Bullish Order Block touch
- `Bearish OB Touched` - Bearish Order Block touch
- `FVG Touched` - Any Fair Value Gap interaction
- `Bullish FVG Touched` - Bullish FVG touch
- `Bearish FVG Touched` - Bearish FVG touch
- `Zone Touched` - Any Smart Zone interaction
- `Bullish Zone Touched` - Any bullish zone touch
- `Bearish Zone Touched` - Any bearish zone touch
## ⚙️ CONFIGURATION
### **Zone Detection**
- Enable/disable FVG and OB detection independently
- Maximum zones per type (3-15, default: 8)
- Zone-specific threshold and extension settings
### **Visual Customization**
- Individual color schemes for each zone type
- Adjustable transparency levels
- Configurable line styles and widths
- Dashboard positioning and sizing options
### **Technical Analysis**
- RSI, ADX, ATR period customization
- Volume threshold multipliers
- Confidence level color coding
- Signal display toggle
## 🚀 PINE SCRIPT v6 OPTIMIZATIONS
- **User-Defined Types**: Structured data for zones and statistics
- **Methods**: Type-specific operations for better code organization
- **Enhanced Arrays**: Optimized memory management
- **Switch Statements**: Improved performance for zone classification
- **Error Handling**: Robust input validation and edge case management
- **Performance**: Efficient algorithms for real-time analysis
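For readers unfamiliar with Pine Script v6 user-defined types and methods, here is a minimal hypothetical sketch of how a zone could be modeled. The field and method names are illustrative, not the script's actual internals.
```
//@version=6
indicator("Zone type (sketch)", overlay=true)

// Hypothetical structure for a Smart Zone.
type Zone
    float top
    float bottom
    int   createdAt
    bool  mitigated = false

// A method keeps zone-specific logic attached to the type.
method isTouched(Zone z, float price) =>
    price <= z.top and price >= z.bottom

var zones = array.new<Zone>()

// Example: register a zone around the first close, then track touches.
if barstate.isfirst
    zones.push(Zone.new(close * 1.01, close * 0.99, bar_index))

if zones.size() > 0
    z = zones.get(0)
    if z.isTouched(close)
        z.mitigated := true

plot(zones.size(), "Tracked zones", display=display.data_window)
```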
## 📈 TRADING APPLICATIONS
### **Entry Strategies**
- Zone confluence for high-probability setups
- Multi-timeframe confirmation via BSL/SSL
- Fibonacci extension targets
- Signal arrows for directional bias
### **Risk Management**
- Zone mitigation for stop-loss placement
- Market bias for position sizing
- Dashboard statistics for strategy validation
### **Market Analysis**
- Institutional footprint identification
- Liquidity level mapping
- Market structure assessment
- Trend continuation vs reversal analysis
## 🔧 TECHNICAL SPECIFICATIONS
- **Version**: Pine Script v6
- **Overlay**: True (draws on price chart)
- **Max Objects**: 100 boxes, 100 lines, 50 labels
- **Performance**: Optimized for real-time analysis
- **Compatibility**: All TradingView chart types and timeframes
Simple Leveraged PnLThis script shows your live trade PnL, ROE, R:R ratio, margin, leverage, entry, TP, and SL directly on the chart.
It draws:
Green/red zones for your Take Profit and Stop Loss ranges.
A pinned info card (movable to any corner of the chart) showing all key trade details in one place.
You can fully customize:
Card position (top/middle/bottom × left/middle/right)
Text size, colors, and background
Zone transparency
It works for both Long and Short positions and updates in real time.
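The arithmetic behind the card is simple. Below is a hedged sketch assuming the usual margin/leverage convention; names and defaults are illustrative, not the published script's inputs.
```
//@version=5
indicator("Leveraged PnL (sketch)", overlay=true)

// Illustrative inputs -- not the published script's exact names or defaults.
isLong   = input.bool(true, "Long position?")
entry    = input.price(100.0, "Entry price")
margin   = input.float(1000.0, "Margin")
leverage = input.float(10.0, "Leverage")

notional = margin * leverage          // position size at entry
qty      = notional / entry           // units bought or sold
pnl      = (close - entry) * qty * (isLong ? 1 : -1)
roePct   = pnl / margin * 100         // return on equity (the margin posted)

plot(roePct, "ROE %", display=display.data_window)
plot(pnl, "PnL", display=display.data_window)
```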
☑️VMA Win % Dashboard for Different LengthsVMA Win % Dashboard for Different Lengths
Overview
This Pine Script indicator evaluates the performance of a Variable Moving Average (VMA) for lengths 13 to 17. It tracks the success rate of price hitting target levels during bullish or bearish trends and displays results in a table. It is part of a combination that includes two other indicators: ✅ VMA Avg ATR + Days to Targets Total Improved 🎯 and 📊 Visual MTF VMA Dashboard🔄️.
How It Works
1. Inputs:
- ATR Length: 14 periods (for volatility).
- VMA Lengths: 13, 14, 15, 16, 17.
2. VMA Calculation:
- Uses closing price.
- Measures price increases (pdm) and decreases (mdm).
- Smooths data to calculate a Directional Movement Index (DMI).
- Adjusts VMA based on momentum and volatility (a sketch of one common construction appears after this list).
3. Trend Detection:
- Bullish: VMA rises (green).
- Bearish: VMA falls (red).
- Neutral: No direction (white).
- Confirms trends align with daily and 195-minute timeframes.
4. Performance Tracking:
- Trend Start: Records price, ATR, and time when a trend begins.
- Price Movement: Tracks highest (bullish) or lowest (bearish) price.
- Targets:
---- T1: Starting price ± historical average movement (ATR-based).
---- T2: Starting price ± 6x ATR.
- Statistics:
---- Counts hits (reached T1/T2) and misses (didn’t reach T1).
---- Calculates win percentages: % of trends hitting T1.
5. Dashboard:
- Table with columns: VMA Length, Win % Up, Win % Down.
- Shows win percentages for each length (e.g., 75.23%).
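Below is a hedged sketch of one common DMI-based Variable Moving Average construction matching the calculation steps above. The published indicator's exact smoothing, trend confirmation, and target logic may differ.
```
//@version=5
indicator("VMA (sketch)", overlay=true)

length = input.int(15, "VMA Length")
src    = input.source(close, "Source")

k   = 1.0 / length
pdm = math.max(src - nz(src[1], src), 0.0)   // price increase
mdm = math.max(nz(src[1], src) - src, 0.0)   // price decrease

var float pdmS = 0.0
var float mdmS = 0.0
pdmS := (1 - k) * pdmS + k * pdm
mdmS := (1 - k) * mdmS + k * mdm

s   = pdmS + mdmS
pdi = s != 0 ? pdmS / s : 0.0
mdi = s != 0 ? mdmS / s : 0.0

var float pdiS = 0.0
var float mdiS = 0.0
pdiS := (1 - k) * pdiS + k * pdi
mdiS := (1 - k) * mdiS + k * mdi

d  = math.abs(pdiS - mdiS)
s1 = pdiS + mdiS

// Volatility/directional index used to speed the average up or slow it down.
var float iS = 0.0
iS := (1 - k) * iS + k * (s1 != 0 ? d / s1 : 0.0)
hhv = ta.highest(iS, length)
llv = ta.lowest(iS, length)
vI  = hhv - llv != 0 ? (iS - llv) / (hhv - llv) : 0.0

var float vma = na
vma := (1 - k * vI) * nz(vma[1], src) + k * vI * src

// Green rising, red falling, white flat -- matching the trend colors above.
plot(vma, "VMA", color = vma > vma[1] ? color.green : vma < vma[1] ? color.red : color.white)
```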
Use Cases
- Trend Trading: Confirms trend direction and success rate.
- Optimization: Finds the best VMA length.
- Risk Management: Sets ATR-based trade targets.
- Combination: Complements ✅ VMA Avg ATR + Days to Targets Total Improved 🎯 and 📊 Visual MTF VMA Dashboard🔄️ for a complete strategy.
Example
- VMA 15: 80% Win Up, 55% Win Down → Best for bullish trades.
- VMA 13: 75% Win Up, 60% Win Down → More balanced.
Limitations
- Based on historical data, not future predictions.
- Only analyzes trends aligned with higher timeframes.
- No VMA lines or signals plotted on the chart.
Combined Futures Open Interest [Sam SDF-Solutions]The Combined Futures Open Interest indicator is designed to provide comprehensive analysis of market positioning by aggregating open interest data from the two nearest futures contracts. This dual-contract approach captures the complete picture of market participation, including rollover dynamics between front and back month contracts, offering traders crucial insights into institutional positioning and market sentiment.
Key Features:
Dual-Contract Aggregation: Automatically identifies and combines open interest from the first and second nearest futures contracts (e.g., ES1! + ES2!), providing a complete view of market positioning that single-contract analysis might miss.
Multi-Period Analysis: Tracks open interest changes across multiple timeframes:
1 Day: Immediate market sentiment shifts
1 Week: Short-term positioning trends
1 Month: Medium-term institutional flows
3 Months: Quarterly positioning aligned with contract expiration cycles
Smart Data Handling: Utilizes last known values when data is temporarily unavailable, preventing false signals from data gaps while clearly indicating when stale data is being used.
EMA Smoothing: Incorporates a customizable Exponential Moving Average (default 65 periods) to identify the underlying trend in open interest, filtering out daily noise and highlighting significant deviations.
Dynamic Visualization:
Color-coded main line showing directional changes (green for increases, red for decreases)
Optional fill areas between OI and EMA to visualize momentum
Separate contract lines for detailed rollover analysis
Customizable labels for significant percentage changes
Comprehensive Information Table: Displays real-time statistics including:
Current total open interest across both contracts
Period-over-period changes in absolute and percentage terms
EMA deviation metrics
Visual status indicators for quick assessment
Contract symbols and data quality warnings
Alert System: Configurable alerts for:
Significant daily changes (customizable threshold)
EMA crossovers indicating trend changes
Large percentage movements suggesting institutional activity
How It Works:
Contract Detection: The indicator automatically identifies the base futures symbol and constructs the appropriate contract codes for the two nearest expirations, or accepts manual symbol input for non-standard contracts.
Data Aggregation: Open interest data from both contracts is retrieved and summed, providing a complete picture that accounts for positions rolling between contracts.
Historical Comparison: The indicator calculates changes from multiple lookback periods (1/5/22/66 days) to show how positioning has evolved across different time horizons.
Trend Analysis: The EMA overlay helps identify whether current open interest is above or below its smoothed average, indicating momentum in position building or reduction.
Visual Feedback: The main line changes color based on daily changes, while the optional table provides detailed numerical analysis for traders requiring precise data.
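The sketch below illustrates the symbol construction and the multi-period change math described above. How open interest itself is requested depends on the data feed, so a placeholder series is used and clearly marked as such.
```
//@version=5
indicator("Combined OI changes (sketch)")

// Front/back month continuous tickers built from the chart's futures root.
root  = syminfo.prefix + ":" + syminfo.root
front = root + "1!"   // e.g. "CME_MINI:ES1!"
back  = root + "2!"

// PLACEHOLDER: stand-in for the summed open interest of both contracts.
// The real script requests OI for `front` and `back`; that request mechanism
// is feed-specific and is not reproduced here.
totalOI = volume

emaLen = input.int(65, "OI EMA length")
oiEma  = ta.ema(totalOI, emaLen)

// Lookbacks of 1/5/22/66 bars approximate 1 day / 1 week / 1 month / 3 months on a daily chart.
chg1d = totalOI - totalOI[1]
chg1w = totalOI - totalOI[5]
chg1m = totalOI - totalOI[22]
chg3m = totalOI - totalOI[66]

plot(totalOI, "Total OI (placeholder)", color = chg1d >= 0 ? color.green : color.red)
plot(oiEma, "OI EMA", color = color.gray)
plot(chg1w, "Chg 1W", display=display.data_window)
plot(chg1m, "Chg 1M", display=display.data_window)
plot(chg3m, "Chg 3M", display=display.data_window)
```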
___________________
This indicator is essential for futures traders, particularly those focused on index futures, commodities, or currency futures where understanding the aggregate positioning across nearby contracts is crucial. It's especially valuable during rollover periods when positions shift between contracts, and for identifying institutional accumulation or distribution patterns that single-contract analysis might miss. By combining multiple timeframe analysis with intelligent data handling and clear visualization, it simplifies the complex task of monitoring open interest dynamics across the futures curve.
Quant Signals: Market Sentiment Monitor HUDWavelets & Scale Spectrum
This indicator is ideal for traders who adapt their strategy to market conditions — such as swing traders, intraday traders, and system developers.
Trend-followers can use it to confirm trending conditions before entering.
Mean-reversion traders can spot choppy markets where reversals are more likely.
Risk managers can monitor volatility shifts and regime changes to adjust position size or pause trading.
It works best as a market context filter — telling you the “weather” before you decide on the trade.
Wavelets are like tiny “measuring rulers” for price changes. Instead of looking at the whole chart at once, a wavelet looks at differences in price over a specific time scale — for example, 2 bars, 4 bars, 8 bars, and so on.
The scale spectrum is what you get when you measure volatility at several of these scales and then plot them against scale size.
If the spectrum forms a straight line on a log–log chart, it means price changes follow a consistent pattern across time scales (a power-law relationship).
The slope of that line gives the Hurst exponent (H) — telling you whether moves tend to persist (trend) or reverse (mean-revert).
The height of the line gives you the volatility (σ) — the average size of moves.
This approach works like a microscope, revealing whether the market’s behaviour is consistent across short-term and long-term horizons, and when that behaviour changes.
This tool applies a wavelet-based scale-spectrum analysis to price data to estimate three key market state measures inside a rolling window:
Hurst exponent (H) — measures persistence in price moves:
H > ~0.55 → market is trending (moves tend to continue).
H < ~0.45 → market is choppy/mean-reverting (moves tend to reverse).
Values near 0.5 indicate a neutral, random-walk-like regime.
Volatility (σ) — the average size of price swings at your chart’s timeframe, optionally annualized. Rising volatility means larger price moves, falling volatility means smaller moves.
Fit residual — how well the observed multi-scale volatility fits a clean power-law line. Low residual = stable behaviour; high residual = structural change (possible regime shift).
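A simplified sketch of the scale-spectrum idea, assuming plain standard deviations of log-price differences at dyadic lags instead of a true wavelet transform. The slope of the log-log fit approximates H, with 0.55/0.45 used as the trend/mean-reversion cutoffs mentioned above; the published tool's estimator and residual check are not reproduced here.
```
//@version=5
indicator("Scale-Spectrum Hurst (sketch)")

// Rolling window for the per-scale volatility estimates (illustrative default).
win = input.int(256, "Rolling window")

logP = math.log(close)

// Volatility of log-price changes at lags of 2, 4, 8, and 16 bars.
s2  = ta.stdev(logP - logP[2],  win)
s4  = ta.stdev(logP - logP[4],  win)
s8  = ta.stdev(logP - logP[8],  win)
s16 = ta.stdev(logP - logP[16], win)

// Least-squares slope of log(sigma) vs log(lag) over the four scales.
x1 = math.log(2)
x2 = math.log(4)
x3 = math.log(8)
x4 = math.log(16)
y1 = math.log(s2)
y2 = math.log(s4)
y3 = math.log(s8)
y4 = math.log(s16)
xm = (x1 + x2 + x3 + x4) / 4
ym = (y1 + y2 + y3 + y4) / 4
num = (x1 - xm) * (y1 - ym) + (x2 - xm) * (y2 - ym) + (x3 - xm) * (y3 - ym) + (x4 - xm) * (y4 - ym)
den = (x1 - xm) * (x1 - xm) + (x2 - xm) * (x2 - xm) + (x3 - xm) * (x3 - xm) + (x4 - xm) * (x4 - xm)
hurst = num / den   // sigma(lag) ~ lag^H, so the slope estimates H

plot(hurst, "H estimate", color = hurst > 0.55 ? color.green : hurst < 0.45 ? color.red : color.gray)
hline(0.5, "Random walk")
```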
Quant Signals: Entropy w/ ForecastThis is the first of many quantitative signals I plan to create for TV users.
Most technical analysis (TA) tools—like moving averages, oscillators, or chart patterns—are heuristic: they’re based on visually identifiable shapes, threshold crossovers, or empirically chosen rules. These methods rarely quantify the information content or structural complexity of market data. By quantifying market predictability before making a forecast, this method filters out noise and focuses your trading only during statistically favorable conditions—something traditional TA cannot objectively measure.
This MEPP-based approach is quantitative and model-free:
It comes from information theory and measures Shannon entropy rate to assess how predictable the market is at any moment.
Instead of interpreting price formations, it uses a data-compression algorithm (Lempel–Ziv) to capture hidden structure in the sequence of returns.
Forecasts are generated using a principle from statistical physics (Maximum Entropy Production), not historical chart patterns.
In short, this method measures the market's predictability BEFORE deciding a directional forecast is worth trusting. The tool informs TA traders of the market's current regime: whether it is smooth and predictable or volatile and turbulent.
Technical Introduction:
In information theory, Shannon entropy measures the uncertainty (or information content) in a sequence of data. For markets, the entropy rate captures how much new information price returns generate over time:
Low entropy rate → price changes are more structured and predictable.
High entropy rate → price changes are more random and unpredictable.
By discretizing recent returns into quartile-based states, this indicator:
Calculates the normalized entropy rate as a regime filter.
Uses MEPP to forecast the next state that maximizes entropy production.
Displays both the regime status (predictable vs chaotic) and the forecast bias (bullish/bearish) in a dashboard.
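A hedged sketch of the discretization step and a plain block-entropy estimate, normalized by H_max = 2 bits. The published tool uses a Lempel–Ziv entropy-rate estimator and a MEPP forecast, which this illustration does not reproduce.
```
//@version=5
indicator("Return-State Entropy (sketch)")

// Illustrative window; the published tool's lookback may differ.
win = input.int(128, "Window (bars)")

ret = math.log(close / close[1])

// Quartile boundaries of recent returns define four discrete states.
q25 = ta.percentile_linear_interpolation(ret, win, 25)
q50 = ta.percentile_linear_interpolation(ret, win, 50)
q75 = ta.percentile_linear_interpolation(ret, win, 75)
state = ret <= q25 ? 1 : ret <= q50 ? 2 : ret <= q75 ? 3 : 4

// Empirical state distribution over the window.
c1 = math.sum(state == 1 ? 1 : 0, win)
c2 = math.sum(state == 2 ? 1 : 0, win)
c3 = math.sum(state == 3 ? 1 : 0, win)
c4 = math.sum(state == 4 ? 1 : 0, win)

// -p*log2(p), with 0*log(0) treated as 0.
plogp(p) => p > 0 ? -p * math.log(p) / math.log(2) : 0.0

H     = plogp(c1 / win) + plogp(c2 / win) + plogp(c3 / win) + plogp(c4 / win)   // 0..2 bits
Hnorm = H / 2.0                                                                  // H / H_max

plot(Hnorm, "Entropy (norm)", color = Hnorm > 0.6 ? color.red : color.green)
hline(0.6, "Regime threshold")
```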
Measurements & How to Use Them
TLDR: HIGH ENTROPY -> information generation/market shift -> Don't trust forecast/strategy
1. H (bits/sym)
Shannon entropy rate of the last μ discrete returns, in bits per symbol (0–2).
Lower → more predictable; higher → more random.
Use as a raw measure of market structure.
2. H_max (log₂Ω)
Theoretical maximum entropy for Ω states. Here Ω = 4 → H_max = 2.0 bits.
Reference value for normalization.
3. Entropy (norm)
H / H_max, scaled between 0 and 1.
< 0.5–0.6 → predictable regime; > 0.6 → chaotic regime.
Main regime filter — forecasts are more reliable when below your threshold.
4. Regime
Label based on Entropy (norm) vs your entThresh.
LOW (predictable) = higher odds forecast will be correct.
HIGH (chaotic) = forecasts less reliable.
5. Next State (MEPP Forecast)
Discrete return state (1–4) predicted to occur next, chosen to maximize entropy production:
Large Down (strong bearish)
Small Down (mild bearish)
Small Up (mild bullish)
Large Up (strong bullish)
Use as your bias direction.
6. Bias
Simplified label from the Next State:
States 1–2 = Bearish bias (red)
States 3–4 = Bullish bias (green)
Align strategy direction with bias only in LOW regime.