Barndorff-Nielsen and Shephard Jump Statistic [Loxx]The following comments and descriptions are from "Problems in the Application of Jump Detection Tests to Stock Price Data" by Michael William Schwert; Professor George Tauchen, Faculty Advisor.
This indicator applies several jump detection tests to intraday stock price data sampled at various frequencies. It finds that the choice of sampling frequency affects both the number of jumps detected by these tests and the timing of those jumps. Furthermore, although these tests are designed to identify the same phenomenon, they find different numbers and timings of jumps when performed on the same data. These results suggest that these jump detection tests are probably identifying different types of jump behavior in stock price data, so they are not really substitutes for one another.
In recent years there has been a great deal of interest in studying jumps in asset price movements. Reasons why it is important to know when and how frequently jumps occur include risk management and the pricing and hedging of derivative contracts. Investors would benefit greatly from knowing the properties of jumps, since large instantaneous drops in asset prices result in large instantaneous losses. The effect of jumps on derivative pricing is equally significant, especially considering the important role derivatives play in modern financial markets. When asset price movements are continuous, investors can perfectly hedge derivative contracts such as options, but when jumps occur, they cause a change in the derivative price that is non-linear to the change in the price of the underlying asset. Thus, jumps introduce an unhedgeable risk to the holders of derivative contracts.
The ability to identify realized jumps in the financial markets could provide helpful information such as how frequently jumps occur, how large the jumps are, and whether they tend to occur in clusters. With this goal in mind, several authors have developed tests to determine whether or not an asset price movement is a statistically significant jump. These tests take advantage of the high-frequency intraday price data available today through electronic sources. Barndorff-Nielsen and Shephard (2004, 2006) use the difference between an estimate of variance and a jump-robust measure of variance to detect jumps over the course of a day. Approaching the problem differently, Jiang and Oomen (2007) exploit high order sample moments of returns to identify days that include jumps. Aït-Sahalia and Jacod (2008) also exploit high order sample moments of returns to detect jumps by comparing price data sampled at two different frequencies. Lee and Mykland (2007) test for jumps at individual price observations by scaling returns by a local volatility measure. While these tests employ different strategies for detecting jumps, they are all designed to identify the same phenomenon.
For this indicator we are focused on the Barndorff-Nielsen and Shephard jump statistic.
Barndorff-Nielsen and Shephard (2004, 2006) developed a test that uses high-frequency price data to determine whether there is a jump over the course of a day. Their test compares two measures of variance: Realized Variance, which converges to the integrated variance plus a jump component as the time between observations approaches zero; and Bipower Variation, which converges to the integrated variance as the time between observations approaches zero and is robust to jumps in the price path, an important fact for this application. The integrated variance of a price process is the integral of the square of the instantaneous volatility term σ(t) of the underlying price process (equation 2.2.2 in the paper), taken over the course of a day. Since prices cannot be observed continuously, one cannot calculate integrated variance exactly, and must estimate it instead.
For our purposes here, this is calculated as:
r_i = log( p_(t_i) / p_(t_(i-1)) )
This is the geometric (log) return from time t_(i-1) to time t_i.
Then, Realized Variance and Bipower Variation are described by the following functions (see code for details)
realizedVariance(float src, int per)
and
bipowerVariance(float src, int per)
Huang and Tauchen (2005) also consider Relative Jump, a measure that approximates the percentage of total variance attributable to jumps:
RJ = (RV - BV) / RV
This statistic approximates the ratio of the sum of squared jumps to the total variance and is useful because it scales out long-term trends in volatility so one can compare the relative contribution of jumps to the variance of two price series with different volatilities.
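To make the calculation concrete, here is a minimal Pine Script sketch of Realized Variance, Bipower Variation, and Relative Jump over a rolling window. It illustrates the formulas above rather than reproducing the published realizedVariance()/bipowerVariance() functions, and finite-sample correction factors are omitted.

//@version=5
indicator("RV/BV/RJ sketch", overlay=false)
per = input.int(78, "Window (returns)")
src = input.source(close, "Source")
// geometric (log) return
r = math.log(src / src[1])
// Realized Variance: sum of squared returns over the window
rv = math.sum(r * r, per)
// Bipower Variation: scaled sum of products of adjacent absolute returns
mu1 = math.sqrt(2.0 / math.pi)   // E|Z| for a standard normal Z
bv = (1.0 / (mu1 * mu1)) * math.sum(math.abs(r) * math.abs(r[1]), per)
// Relative Jump: share of total variance attributable to jumps
rj = rv != 0 ? (rv - bv) / rv : na
plot(rj, "Relative Jump")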
To develop a statistical test to determine whether there is a significant difference between RV and BV, one needs an estimate of integrated quarticity. Andersen, Bollerslev, and Diebold (2004) recommend using the jump-robust realized Tri-Power Quarticity. I've included commentary in the code to better explain how this indicator is calculated; see the code for details.
How to use this indicator
When the bars turn gray, it's an indication that a jump has occurred in the market; it serves as a warning that price jumped. I've included a percent point function (or inverse cumulative distribution function) to cut off Z-score values depicted by the histogram. The top line at 3 is the empirical maximum Z-score value and serves merely as a point of reference. The red line is the cutoff line calculated using the PPF. When the histogram is green, no jumps have been detected. This indicator also includes alerts, signals, and bar coloring. I've also expanded the possible source types using my own Expanded Source Types library so you can test different log return methods as inputs. It is recommended to use window sizes of 7, 16, 78, 110, 156, and 270 returns for sampling intervals of 1 week, 1 day, 1 hour, 30 minutes, 15 minutes, and 5 minutes, respectively.
If you'd like to better understand the PPF, see here: Distributions in python
Included:
Bar coloring
Signals
Alerts
Loxx's Expanded Source Types
Stochastic of Two-Pole SuperSmoother [Loxx]Stochastic of Two-Pole SuperSmoother is a Stochastic Indicator that takes as its input the Two-Pole SuperSmoother of price. It includes gradient coloring and Discontinued Signal Line (DSL) signals with alerts.
What is Ehlers' Two-Pole Super Smoother?
From "Cycle Analytics for Traders Advanced Technical Trading Concepts" by John F. Ehlers
A SuperSmoother filter is used anytime a moving average of any type would otherwise be used, with the result that the SuperSmoother filter output would have substantially less lag for an equivalent amount of smoothing produced by the moving average. For example, a five-bar SMA has a cutoff period of approximately 10 bars and has two bars of lag. A SuperSmoother filter with a cutoff period of 10 bars has a lag a half bar larger than the two-pole modified Butterworth filter. Therefore, such a SuperSmoother filter has a maximum lag of approximately 1.5 bars and even less lag into the attenuation band of the filter. The differential in lag between moving average and SuperSmoother filter outputs becomes even larger when the cutoff periods are larger.
Market data contain noise, and removal of noise is the reason for using smoothing filters. In fact, market data contain several kinds of noise. I’ll group one kind of noise as systemic, caused by the random events of trades being exercised. A second kind of noise is aliasing noise, caused by the use of sampled data. Aliasing noise is the dominant term in the data for shorter cycle periods.
It is easy to think of market data as being a continuous waveform, but it is not. Using the closing price as representative for that bar constitutes one sample point. It doesn't matter if you are using an average of the high and low instead of the close, you are still getting one sample per bar. Since sampled data is being used, there are some DSP aspects that must be considered. For example, the shortest analysis period that is possible (without aliasing) is a two-bar cycle. This is called the Nyquist frequency, 0.5 cycles per sample. A perfect two-bar sine wave cycle sampled at the peaks becomes a square wave due to sampling. However, sampling at the cycle peaks cannot be guaranteed, and the interference between the sampling frequency and the data frequency creates the aliasing noise. The noise is reduced as the data period is longer. For example, a four-bar cycle means there are four samples per cycle. Because there are more samples, the sampled data are a better replica of the sine wave component. The replica is better yet for an eight-bar data component. The improved fidelity of the sampled data means the aliasing noise is reduced at longer and longer cycle periods. The rate of reduction is 6 dB per octave. My experience is that the systemic noise rarely is more than 10 dB below the level of cyclic information, so that we create two conditions for effective smoothing of aliasing noise:
1. It is difficult to use cycle periods shorter than two octaves below the Nyquist frequency. That is, an eight-bar cycle component has a quantization noise level 12 dB below the noise level at the Nyquist frequency. Longer cycle components therefore have a systemic noise level that exceeds the aliasing noise level.
2. A smoothing filter should have sufficient selectivity to reduce aliasing noise below the systemic noise level. Since aliasing noise increases at the rate of 6 dB per octave above a selected filter cutoff frequency and since the SuperSmoother attenuation rate is 12 dB per octave, the SuperSmoother filter is an effective tool to virtually eliminate aliasing noise in the output signal.
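Before moving on to DSL, here is a minimal Pine Script sketch of the two-pole SuperSmoother itself (input names and defaults are illustrative, not the published script's):

//@version=5
indicator("Two-Pole SuperSmoother sketch", overlay=true)
per = input.int(10, "Cutoff period")
src = input.source(close, "Source")
a1 = math.exp(-1.414 * math.pi / per)
b1 = 2.0 * a1 * math.cos(1.414 * math.pi / per)
c2 = b1
c3 = -a1 * a1
c1 = 1.0 - c2 - c3
var float ss = na
ss := c1 * (src + nz(src[1], src)) / 2.0 + c2 * nz(ss[1], src) + c3 * nz(ss[2], src)
plot(ss, "Two-Pole SuperSmoother")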
What are DSL (Discontinued Signal Lines)?
A lot of indicators use signal lines to make it easier to determine the trend (or some desired state of the indicator). The idea of the signal line is simple: by comparing a value to its smoothed (slightly lagging) state, a read on the current momentum/state is obtained.
The discontinued signal line inherits that simple signal line idea and extends it: instead of one signal line, there are multiple lines depending on the current value of the indicator.
The "signal" line is calculated the following way:
When a certain level is crossed into the desired direction, the EMA of that value is calculated for the desired signal line
When that level is crossed into the opposite direction, the previous "signal" line value is simply "inherited" and it becomes a kind of a level
This way it becomes a combination of signal lines and levels, trying to capture the good from both methods.
In simple terms, DSL uses the concept of a signal line and improves on it by inheriting the previous signal line's value and turning it into a level.
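A minimal Pine Script sketch of the DSL idea is below; RSI stands in as a placeholder oscillator and the EMA length is arbitrary, so treat the names and defaults as assumptions rather than the published implementation.

//@version=5
indicator("DSL sketch", overlay=false)
len    = input.int(14, "Oscillator length")
dslLen = input.int(9,  "DSL EMA length")
mid    = 50.0
osc = ta.rsi(close, len)
// While the oscillator is above the mid level, the upper line follows an EMA of the
// oscillator; otherwise it freezes and acts as a level. The lower line mirrors this.
alpha = 2.0 / (dslLen + 1)
var float dslUp = na
var float dslDn = na
dslUp := osc > mid ? nz(dslUp[1], osc) + alpha * (osc - nz(dslUp[1], osc)) : nz(dslUp[1], osc)
dslDn := osc < mid ? nz(dslDn[1], osc) + alpha * (osc - nz(dslDn[1], osc)) : nz(dslDn[1], osc)
plot(osc, "Oscillator")
plot(dslUp, "DSL up", color=color.green)
plot(dslDn, "DSL down", color=color.red)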
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
Adaptive Two-Pole Super Smoother Entropy MACD [Loxx]Adaptive Two-Pole Super Smoother Entropy (Math) MACD is an Ehlers Two-Pole Super Smoother that is transformed into an MACD oscillator using entropy mathematics. Signals are generated using Discontinued Signal Lines.
What is Ehlers' Two-Pole Super Smoother?
From "Cycle Analytics for Traders Advanced Technical Trading Concepts" by John F. Ehlers
A SuperSmoother filter is used anytime a moving average of any type would otherwise be used, with the result that the SuperSmoother filter output would have substantially less lag for an equivalent amount of smoothing produced by the moving average. For example, a five-bar SMA has a cutoff period of approximately 10 bars and has two bars of lag. A SuperSmoother filter with a cutoff period of 10 bars has a lag a half bar larger than the two-pole modified Butterworth filter. Therefore, such a SuperSmoother filter has a maximum lag of approximately 1.5 bars and even less lag into the attenuation band of the filter. The differential in lag between moving average and SuperSmoother filter outputs becomes even larger when the cutoff periods are larger.
Market data contain noise, and removal of noise is the reason for using smoothing filters. In fact, market data contain several kinds of noise. I’ll group one kind of noise as systemic, caused by the random events of trades being exercised. A second kind of noise is aliasing noise, caused by the use of sampled data. Aliasing noise is the dominant term in the data for shorter cycle periods.
It is easy to think of market data as being a continuous waveform, but it is not. Using the closing price as representative for that bar constitutes one sample point. It doesn't matter if you are using an average of the high and low instead of the close, you are still getting one sample per bar. Since sampled data is being used, there are some DSP aspects that must be considered. For example, the shortest analysis period that is possible (without aliasing) is a two-bar cycle. This is called the Nyquist frequency, 0.5 cycles per sample. A perfect two-bar sine wave cycle sampled at the peaks becomes a square wave due to sampling. However, sampling at the cycle peaks cannot be guaranteed, and the interference between the sampling frequency and the data frequency creates the aliasing noise. The noise is reduced as the data period is longer. For example, a four-bar cycle means there are four samples per cycle. Because there are more samples, the sampled data are a better replica of the sine wave component. The replica is better yet for an eight-bar data component. The improved fidelity of the sampled data means the aliasing noise is reduced at longer and longer cycle periods. The rate of reduction is 6 dB per octave. My experience is that the systemic noise rarely is more than 10 dB below the level of cyclic information, so that we create two conditions for effective smoothing of aliasing noise:
1. It is difficult to use cycle periods shorter than two octaves below the Nyquist frequency. That is, an eight-bar cycle component has a quantization noise level 12 dB below the noise level at the Nyquist frequency. Longer cycle components therefore have a systemic noise level that exceeds the aliasing noise level.
2. A smoothing filter should have sufficient selectivity to reduce aliasing noise below the systemic noise level. Since aliasing noise increases at the rate of 6 dB per octave above a selected filter cutoff frequency and since the SuperSmoother attenuation rate is 12 dB per octave, the SuperSmoother filter is an effective tool to virtually eliminate aliasing noise in the output signal.
What are DSL (Discontinued Signal Lines)?
A lot of indicators use signal lines to make it easier to determine the trend (or some desired state of the indicator). The idea of the signal line is simple: by comparing a value to its smoothed (slightly lagging) state, a read on the current momentum/state is obtained.
The discontinued signal line inherits that simple signal line idea and extends it: instead of one signal line, there are multiple lines depending on the current value of the indicator.
The "signal" line is calculated the following way:
When a certain level is crossed into the desired direction, the EMA of that value is calculated for the desired signal line
When that level is crossed into the opposite direction, the previous "signal" line value is simply "inherited" and it becomes a kind of a level
This way it becomes a combination of signal lines and levels, trying to capture the good from both methods.
In simple terms, DSL uses the concept of a signal line and improves on it by inheriting the previous signal line's value and turning it into a level.
Included:
Bar coloring
Alerts
Signals
Loxx's Expanded Source Types
Filtered, N-Order Power-of-Cosine, Sinc FIR Filter [Loxx]Filtered, N-Order Power-of-Cosine, Sinc FIR Filter is a Discrete-Time FIR Digital Filter that uses the Power-of-Cosine family of FIR filters. This is an N-order algorithm that allows up to 50 values for alpha (the order, or depth, of the filter). This one differs from the previous Power-of-Cosine filters I've published in that it uses Windowed-Sinc filtering. I've also included a Dual Element Lag Reducer using Kalman velocity, a standard deviation filter, and a clutter filter. You can read about each of these below.
Impulse Response
What are FIR Filters?
In discrete-time signal processing, windowing is a preliminary signal shaping technique, usually applied to improve the appearance and usefulness of a subsequent Discrete Fourier Transform. Several window functions can be defined, based on a constant (rectangular window), B-splines, other polynomials, sinusoids, cosine-sums, adjustable, hybrid, and other types. The windowing operation consists of multiplying the given sampled signal by the window function. For trading purposes, these FIR filters act as advanced weighted moving averages.
A finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying).
The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly N+1 samples (from first nonzero element through last nonzero element) before it then settles to zero.
FIR filters can be discrete-time or continuous-time, and digital or analog.
A FIR filter is (similar to, or) just a weighted moving average filter, where (unlike a typical equally weighted moving average filter) the weights of each delay tap are not constrained to be identical or even of the same sign. By changing various values in the array of weights (the impulse response, or time shifted and sampled version of the same), the frequency response of a FIR filter can be completely changed.
An FIR filter simply CONVOLVES the input time series (price data) with its IMPULSE RESPONSE. The impulse response is just a set of weights (or "coefficients") that multiply each data point. Then you just add up all the products and divide by the sum of the weights and that is it; e.g., for a 10-bar SMA you just add up 10 bars of price data (each multiplied by 1) and divide by 10. For a weighted-MA you add up the product of the price data with triangular-number weights and divide by the total weight.
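A minimal Pine Script sketch of that convolution is below, with triangular (WMA-style) weights standing in for an arbitrary impulse response; swap in any coefficient set you like.

//@version=5
indicator("FIR convolution sketch", overlay=true, max_bars_back=500)
len = input.int(10, "Taps")
src = input.source(close, "Source")
firSum = 0.0
wSum   = 0.0
for i = 0 to len - 1
    w = float(len - i)      // triangular-number weights; replace with any impulse response
    firSum += w * src[i]
    wSum   += w
plot(firSum / wSum, "FIR output")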
What is a Standard Deviation Filter?
If the price, the filter output, or both don't move more than (standard deviation) * multiplier, then the trend stays the same as the previous bar's trend. This will appear on the chart as "stepping" of the moving average line. This works similarly to SuperTrend or Parabolic SAR but is a more naive filtering technique.
What is a Clutter Filter?
For our purposes here, this is a filter that compares the slope of the trading filter output to a threshold to determine whether to shift trends. If the slope is up but the slope doesn't exceed the threshold, then the color is gray and this indicates a chop zone. If the slope is down but the slope doesn't exceed the threshold, then the color is gray and this indicates a chop zone. Alternatively, if either the up or down slope exceeds the threshold, then the trend turns green for up and red for down. For demonstration purposes, an EMA is used as the moving average. This acts to reduce the noise in the signal.
What is a Dual Element Lag Reducer?
Modifies an array of coefficients to reduce lag by the Lag Reduction Factor. It uses a generic version of a Kalman velocity component to accomplish this. Lag reduction is achieved by applying the following to the array:
coeff[i] = 2 * coeff[i] - coeff[i - LagReductionFactor]
The response time vs. noise battle still holds true: high lag reduction means more noise is present in your data! Please note that the beginning coefficients to which the modifying step cannot be applied (coefficients whose indices are < LagReductionFactor) are simply multiplied by two for additional smoothing.
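A small Pine sketch of that step applied to a coefficient array, using the indexing inferred above (treat the exact offset as an assumption and check it against the published code):

//@version=5
indicator("Lag reducer sketch", overlay=true)
lagReductionFactor = input.int(2, "Lag Reduction Factor", minval=1)
reduceLag(float[] coeffs, int k) =>
    out = array.copy(coeffs)
    for i = 0 to array.size(coeffs) - 1
        if i < k
            // indices the modifier cannot reach are simply doubled for extra smoothing
            array.set(out, i, 2.0 * array.get(coeffs, i))
        else
            array.set(out, i, 2.0 * array.get(coeffs, i) - array.get(coeffs, i - k))
    out
// tiny demo: reduce lag of flat 5-tap weights and plot the first modified coefficient
demo = array.from(1.0, 1.0, 1.0, 1.0, 1.0)
plot(array.get(reduceLag(demo, lagReductionFactor), 0), "First coefficient")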
What's a Windowed-Sinc Filter?
Windowed-sinc filters are used to separate one band of frequencies from another. They are very stable, produce few surprises, and can be pushed to incredible performance levels. These exceptional frequency domain characteristics are obtained at the expense of poor performance in the time domain, including excessive ripple and overshoot in the step response. When carried out by standard convolution, windowed-sinc filters are easy to program, but slow to execute.
The sinc function sinc (x), also called the "sampling function," is a function that arises frequently in signal processing and the theory of Fourier transforms.
In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by
sinc(x) = sin(x) / x
In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by
sinc(x) = sin(pi * x) / (pi * x)
For our purposes here, we use the normalized sinc function.
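A minimal Pine sketch of a windowed-sinc low-pass FIR kernel applied as a weighted moving average follows. The Hamming window and the defaults are arbitrary choices for illustration, not the published filter's parameters.

//@version=5
indicator("Windowed-sinc sketch", overlay=true, max_bars_back=500)
len    = input.int(21, "Taps")
cutoff = input.float(0.1, "Normalized cutoff (0..0.5)", step=0.01)
src    = input.source(close, "Source")
sinc(float x) =>
    x == 0.0 ? 1.0 : math.sin(math.pi * x) / (math.pi * x)
firSum = 0.0
wSum   = 0.0
mid    = (len - 1) / 2.0
for i = 0 to len - 1
    wnd = 0.54 - 0.46 * math.cos(2.0 * math.pi * i / (len - 1))   // Hamming window
    c   = wnd * 2.0 * cutoff * sinc(2.0 * cutoff * (i - mid))     // windowed sinc
    firSum += c * src[i]
    wSum   += c
plot(firSum / wSum, "Windowed-sinc FIR")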
Included
Bar coloring
Loxx's Expanded Source Types
Signals
Alerts
Related indicators
Variety, Low-Pass, FIR Filter Impulse Response Explorer
STD-Filtered, Variety FIR Digital Filters w/ ATR Bands
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter
STD/C-Filtered, Truncated Taylor Family FIR Filter
STD/Clutter-Filtered, Kaiser Window FIR Digital Filter
STD/Clutter Filtered, One-Sided, N-Sinc-Kernel, EFIR Filt
Variety, Low-Pass, FIR Filter Impulse Response Explorer [Loxx]Variety Low-Pass FIR Filter, Impulse Response Explorer is a simple impulse response explorer of 16 of the most popular FIR digital filtering windowing techniques. The Y-values are the values of the coefficients produced by the selected algorithm; the X-values are the sample index. This indicator also allows you to turn on Sinc Windowing for all window types except for Rectangular, Triangular, and Linear. This is an educational indicator to demonstrate the differences between popular FIR filters in terms of their coefficient outputs. It is also meant to complement other indicators I've published or will publish that implement advanced FIR digital filters (see below to find applicable indicators).
Inputs:
Number of Coefficients to Calculate = Sample size; for example, this would be the period used in SMA or WMA
FIR Digital Filter Type = FIR windowing method you would like to explore
Multiplier (Sinc only) = applies a multiplier effect to the Sinc Windowing
Frequency Cutoff = this is necessary to smooth the output and get rid of noise. The lower the number, the smoother the output.
Turn on Sinc? = turn this on if you want to convert the windowing function from regular function to a Windowed-Sinc filter
Order = This is used for power of cosine filter only. This is the N-order, or depth, of the filter you wish to create.
What are FIR Filters?
In discrete-time signal processing, windowing is a preliminary signal shaping technique, usually applied to improve the appearance and usefulness of a subsequent Discrete Fourier Transform. Several window functions can be defined, based on a constant (rectangular window), B-splines, other polynomials, sinusoids, cosine-sums, adjustable, hybrid, and other types. The windowing operation consists of multiplying the given sampled signal by the window function. For trading purposes, these FIR filters act as advanced weighted moving averages.
A finite impulse response (FIR) filter is a filter whose impulse response (or response to any finite length input) is of finite duration, because it settles to zero in finite time. This is in contrast to infinite impulse response (IIR) filters, which may have internal feedback and may continue to respond indefinitely (usually decaying).
The impulse response (that is, the output in response to a Kronecker delta input) of an Nth-order discrete-time FIR filter lasts exactly N+1 samples (from first nonzero element through last nonzero element) before it then settles to zero.
FIR filters can be discrete-time or continuous-time, and digital or analog.
A FIR filter is (similar to, or) just a weighted moving average filter, where (unlike a typical equally weighted moving average filter) the weights of each delay tap are not constrained to be identical or even of the same sign. By changing various values in the array of weights (the impulse response, or time shifted and sampled version of the same), the frequency response of a FIR filter can be completely changed.
An FIR filter simply CONVOLVES the input time series (price data) with its IMPULSE RESPONSE. The impulse response is just a set of weights (or "coefficients") that multiply each data point. Then you just add up all the products and divide by the sum of the weights and that is it; e.g., for a 10-bar SMA you just add up 10 bars of price data (each multiplied by 1) and divide by 10. For a weighted-MA you add up the product of the price data with triangular-number weights and divide by the total weight.
What's a Low-Pass Filter?
A low-pass filter is the type of frequency domain filter that is used for smoothing sound, image, or data. This is different from a high-pass filter that is used for sharpening data, images, or sound.
What's a Windowed-Sinc Filter?
Windowed-sinc filters are used to separate one band of frequencies from another. They are very stable, produce few surprises, and can be pushed to incredible performance levels. These exceptional frequency domain characteristics are obtained at the expense of poor performance in the time domain, including excessive ripple and overshoot in the step response. When carried out by standard convolution, windowed-sinc filters are easy to program, but slow to execute.
The sinc function sinc (x), also called the "sampling function," is a function that arises frequently in signal processing and the theory of Fourier transforms.
In mathematics, the historical unnormalized sinc function is defined for x ≠ 0 by
sinc(x) = sin(x) / x
In digital signal processing and information theory, the normalized sinc function is commonly defined for x ≠ 0 by
sinc(x) = sin(pi * x) / (pi * x)
For our purposes here, we use the normalized sinc function.
Included Windowing Functions
N-Order Power-of-Cosine (this one is really N-different types of FIR filters)
Hamming
Hanning
Blackman
Blackman Harris
Blackman Nutall
Nutall
Bartlet Zero End Points
Bartlet-Hann
Hann
Sine
Lanczos
Flat Top
Rectangular
Linear
Triangular
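As a concrete illustration, here is a minimal Pine Script sketch that generates coefficients for a few of the listed windows and plots them against the sample index, in the spirit of this explorer. The formulas are the textbook definitions; the published script's exact parameterization may differ.

//@version=5
indicator("Window coefficients sketch", overlay=false)
n       = input.int(32, "Number of coefficients")
winType = input.string("Hamming", "Window", options=["Hamming", "Hann", "Blackman"])
windowCoef(int i, int len, string t) =>
    x = 2.0 * math.pi * i / (len - 1)
    switch t
        "Hamming"  => 0.54 - 0.46 * math.cos(x)
        "Hann"     => 0.50 - 0.50 * math.cos(x)
        "Blackman" => 0.42 - 0.50 * math.cos(x) + 0.08 * math.cos(2.0 * x)
// plot the coefficient whose index equals the bar's offset from the last bar,
// which mimics an impulse-response explorer (Y = coefficient, X = sample index)
idx = last_bar_index - bar_index
plot(idx < n ? windowCoef(idx, n, winType) : na, "Coefficient", style=plot.style_histogram)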
If you wish to dive deeper to get a full explanation of these windowing functions, see here: en.wikipedia.org
Related indicators
STD-Filtered, Variety FIR Digital Filters w/ ATR Bands
STD/C-Filtered, N-Order Power-of-Cosine FIR Filter
STD/C-Filtered, Truncated Taylor Family FIR Filter
STD/Clutter-Filtered, Kaiser Window FIR Digital Filter
STD/Clutter Filtered, One-Sided, N-Sinc-Kernel, EFIR Filt
STD-Filtered, N-Pole Gaussian Filter [Loxx]This is a Gaussian Filter with Standard Deviation Filtering that works for orders (poles) higher than the usual 4 poles originally available in Ehlers' Gaussian Filter formulas. Because of that, it is a sort of generalized Gaussian filter that can calculate a Gaussian Filter of arbitrary order (pole count), which makes it a fairly unique indicator. For this implementation, the practical mathematical maximum is 15 poles, beyond which the precision of the calculation breaks down: the coefficients for levels above 15 poles grow so large that additional poles add very little. Despite this practical precision limit, I've left the upper bound of poles open-ended so you can try poles of order 15 and above yourself. The default is set to 5 poles, which is 1 pole greater than the normal maximum of 4 poles.
The purpose of the standard deviation filter is to filter out noise; by default it will filter 1 standard deviation. Adjust this number and the filter selections (price, both, GMA, none) to reduce the signal noise.
What is Ehlers Gaussian filter?
This filter can be used for smoothing. It rejects high frequencies (fast movements) better than an EMA and has lower lag. It was published by John F. Ehlers in "Rocket Science For Traders".
A Gaussian filter is one whose transfer response is described by the familiar Gaussian bell-shaped curve. In the case of low-pass filters, only the upper half of the curve describes the filter. The use of gaussian filters is a move toward achieving the dual goal of reducing lag and reducing the lag of high-frequency components relative to the lag of lower-frequency components.
A gaussian filter with...
One Pole: f = alpha*g + (1-alpha)*f[1]
Two Poles: f = alpha^2*g + 2*(1-alpha)*f[1] - (1-alpha)^2*f[2]
Three Poles: f = alpha^3*g + 3*(1-alpha)*f[1] - 3*(1-alpha)^2*f[2] + (1-alpha)^3*f[3]
Four Poles: f = alpha^4*g + 4*(1-alpha)*f[1] - 6*(1-alpha)^2*f[2] + 4*(1-alpha)^3*f[3] - (1-alpha)^4*f[4]
where g is the current input (price) and f[k] is the filter value k bars ago.
and so on...
For an equivalent number of poles the lag of a Gaussian is about half the lag of a Butterworth filter: Lag = N*P / pi^2, where
N is the number of poles, and
P is the critical period
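A minimal Pine sketch of the generalized filter follows, built by cascading a one-pole Gaussian stage N times (mathematically equivalent to the direct recursive forms above). The alpha-from-period mapping shown is the one commonly attributed to Ehlers' derivation; treat it as an assumption and check it against the script's own code.

//@version=5
indicator("N-pole Gaussian sketch", overlay=true)
per   = input.int(14, "Cutoff period")
poles = input.int(4,  "Poles", minval=1, maxval=15)
src   = input.source(close, "Source")
beta  = (1.0 - math.cos(2.0 * math.pi / per)) / (math.pow(2.0, 1.0 / poles) - 1.0)
alpha = -beta + math.sqrt(beta * beta + 2.0 * beta)
// cascade: each stage is f = alpha*g + (1 - alpha)*f[1], fed by the previous stage
var float[] stages = array.new_float(poles, na)
g = src
for i = 0 to poles - 1
    prev = nz(array.get(stages, i), g)
    f    = alpha * g + (1.0 - alpha) * prev
    array.set(stages, i, f)
    g := f
plot(g, "N-pole Gaussian")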
Special initialization of filter stages ensures proper working in scans with as few bars as possible.
From Ehlers' book: "The first objective of using smoothers is to eliminate or reduce the undesired high-frequency components in the price data. Therefore these smoothers are called low-pass filters, and they all work by some form of averaging. Butterworth low-pass filters can do this job, but nothing comes for free. A higher degree of filtering is necessarily accompanied by a larger amount of lag. We have come to see that is a fact of life."
References John F. Ehlers: "Rocket Science For Traders, Digital Signal Processing Applications", Chapter 15: "Infinite Impulse Response Filters"
Included
Loxx's Expanded Source Types
Signals
Alerts
Bar coloring
Related indicators
STD-Filtered, Gaussian Moving Average (GMA)
STD-Filtered, Gaussian-Kernel-Weighted Moving Average
One-Sided Gaussian Filter w/ Channels
Fisher Transform w/ Dynamic Zones
R-sqrd Adapt. Fisher Transform w/ D. Zones & Divs .
STD-Filtered, Gaussian Moving Average (GMA) [Loxx]STD-Filtered, Gaussian Moving Average (GMA) is a 1-4 pole Ehlers Gaussian Filter with standard deviation filtering. This indicator should perform similarly to the Ehlers Fisher Transform.
The purpose of the standard deviation filter is to filter out noise; by default it will filter 1 standard deviation. Adjust this number and the filter selections (price, both, GMA, none) to reduce the signal noise.
What is Ehlers Gaussian filter?
This filter can be used for smoothing. It rejects high frequencies (fast movements) better than an EMA and has lower lag. It was published by John F. Ehlers in "Rocket Science For Traders". First implemented in Wealth-Lab by Dr René Koch.
A Gaussian filter is one whose transfer response is described by the familiar Gaussian bell-shaped curve. In the case of low-pass filters, only the upper half of the curve describes the filter. The use of gaussian filters is a move toward achieving the dual goal of reducing lag and reducing the lag of high-frequency components relative to the lag of lower-frequency components.
A gaussian filter with...
one pole is equivalent to an EMA filter.
two poles is equivalent to EMA(EMA())
three poles is equivalent to EMA(EMA(EMA()))
and so on...
For an equivalent number of poles the lag of a Gaussian is about half the lag of a Butterworth filter: Lag = N * P / (2 * pi^2), where
N is the number of poles, and
P is the critical period
Special initialization of filter stages ensures proper working in scans with as few bars as possible.
From Ehlers' book: "The first objective of using smoothers is to eliminate or reduce the undesired high-frequency components in the price data. Therefore these smoothers are called low-pass filters, and they all work by some form of averaging. Butterworth low-pass filters can do this job, but nothing comes for free. A higher degree of filtering is necessarily accompanied by a larger amount of lag. We have come to see that is a fact of life."
References John F. Ehlers: "Rocket Science For Traders, Digital Signal Processing Applications", Chapter 15: "Infinite Impulse Response Filters"
Included
Loxx's Expanded Source Types
Signals
Alerts
Bar coloring
Related indicators
STD-Filtered, Gaussian-Kernel-Weighted Moving Average
One-Sided Gaussian Filter w/ Channels
Fisher Transform w/ Dynamic Zones
R-sqrd Adapt. Fisher Transform w/ D. Zones & Divs.
Improved Lowry Up-Down Volume + Stocks Indicator (source: docs.cmtassociation.org)
In Paul F. Desmond's award-winning 2002 paper entitled "Identifying Bear Market Bottoms and New Bull Markets", he proposed an indicator for panic buying and selling that can be used to determine major market bottoms.
The paper explains that major bear markets should contain at least one, and often more than one, 90% down day. Recoveries out of bear markets, or beginnings of new bull markets, should have at least one of the following conditions:
1) At least one 90% up volume day
2) At least two back-to-back 80% up volume days
Up and down volume days are defined as:
1) 90% up volume day - a day where up volume is at least 90% of total volume (equivalently, down volume is at most 10% of total volume)
2) 90% down volume day - a day where down volume is at least 90% of total volume (equivalently, up volume is at most 10% of total volume)
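A minimal Pine Script sketch of this detection logic is below. The USI:UVOL and USI:DVOL symbols (NYSE up/down volume) are assumptions for illustration; substitute whichever exchange feeds you prefer.

//@version=5
indicator("Lowry 90% day sketch", overlay=false)
upVol = request.security("USI:UVOL", timeframe.period, close)
dnVol = request.security("USI:DVOL", timeframe.period, close)
total   = upVol + dnVol
upRatio = total != 0 ? upVol / total : na
ninetyUp = upRatio >= 0.90                         // 90% up volume day
ninetyDn = upRatio <= 0.10                         // 90% down volume day
backToBack80Up = upRatio >= 0.80 and upRatio[1] >= 0.80
plot(upRatio * 100, "Up volume %")
plotshape(ninetyUp, title="90% up day", style=shape.triangleup, location=location.bottom, color=color.green)
plotshape(ninetyDn, title="90% down day", style=shape.triangledown, location=location.top, color=color.red)
plotshape(backToBack80Up, title="Back-to-back 80% up", style=shape.circle, location=location.bottom, color=color.teal)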
Several scripts exist in Tradingview to show this indicator for Up and Down volume, along with arrows or indicators for green up days or red down days.
However, this script is an improved version as it allows you the option to customize a couple parameters:
1) You may choose whether you'd like to use volume or stocks - sometimes it's better to have confluence between volume and actual stocks at the 90% threshold
2) You may choose the exchanges to consider - the paper discusses the NYSE, but this allows expansion into the NYSE, NASDAQ, DOW, and even a combined NYSE + NASDAQ + DOW indicator
3) It uniquely codes in the ability to plot a buy signal not only for 90% up days but also for two back-to-back 80% up days - which is in the spirit of the original paper
I hope you enjoy this script and please let me know if you'd like me to make any modifications or additions.
Thank you, sincerely,
Jim Bosse
Strategy Myth-Busting #3 - BB_BUY+SuperTrend - [MYN]This is part of a new series we are calling "Strategy Myth-Busting" where we take open public manual trading strategies and automate them. The goal is to not only validate the authenticity of the claims but to provide an automated version for traders who wish to trade autonomously.
Our third one we are automating is one of the strategies from "The Best 3 Buy And Sell Indicators on Tradingview + Confirmation Indicators (The Golden Ones)" from "Online Trading Signals (Scalping Channel)". No formal backtesting was done by them, so we wanted to validate their claims.
If you know of or have a strategy you want to see myth-busted or just have an idea for one, please feel free to message me.
This strategy uses a combination of 2 open-source public indicators:
BB_Buy and Sell by guikroth (default settings)
SuperTrend from TradingView's Technicals (default settings)
Trading Rules
15 min candles
Long
Long condition when BB_BUY indicates buy signal and SuperTrend is green
Short
Short condition when BB_BUY indicates Sell signal and SuperTrend is red
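As a rough sketch of how these rules might be automated (not the published strategy): the SuperTrend side uses TradingView's built-in ta.supertrend() with default-style settings, while a plain Bollinger-band cross stands in as a placeholder for the third-party BB_Buy and Sell signal.

//@version=5
strategy("Myth-busting #3 sketch", overlay=true)
// SuperTrend direction: negative direction means uptrend (green)
[st, stDir] = ta.supertrend(3.0, 10)
stGreen = stDir < 0
stRed   = stDir > 0
// Placeholder for the BB_Buy and Sell signal (NOT guikroth's original logic)
basis  = ta.sma(close, 20)
dev    = 2.0 * ta.stdev(close, 20)
bbBuy  = ta.crossover(close, basis - dev)
bbSell = ta.crossunder(close, basis + dev)
if bbBuy and stGreen
    strategy.entry("Long", strategy.long)
if bbSell and stRed
    strategy.entry("Short", strategy.short)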
J_TPO Velocity VariationThis one is a very random indicator but with an excellent concept. Unfortunately, I don't know much about the origin of this indicator or who made it. Still, the first appearance was around 2004 on a Meta Trader forum. There are a lot of variations of the J_TPO indicator. One of them is the J_TPO Velocity. The difference from the original version is that it uses the price range of the latest candles to change the magnitude of the indicator value, but the concept is the same.
More info here
In its original form, it is an oscillator between -1 and +1: a nonparametric statistic quantifying how well prices are ordered in consecutive ups (+1) or downs (-1), with intermediate cases in between. The velocity variation adds the price range, and this script variation adds a baseline as a filter for the indicator. This indicator will work as a confirmation indicator; using it with the trend filter will work as an entry indicator.
Besides the columns representing the indicator's values, 2 more signals will be printed on the chart. One is the middle cross, the other the kicking middle cross. The first will print a signal when the J_TPO crosses the middle line (0) in favor of the trend. A diamond will be printed when the baseline is above 0 and the cross is upwards; the inverse applies for crosses downwards. The other signal is the kicking middle cross, which will appear when the cross comes after an opposite cross. This will give only one signal per cross in the same direction, which may help identify the trend direction earlier.
Crypto Portfolio ManagementCrypto Portfolio Management
This is an indicator not like the other ones you regularly see in TradingView. The main difference is that this indicator does not plot a value for each candle bar as you would see with RSI or MACD. It is actually a table, and it uses TradingView's great database of assets to display some valuable information that cannot easily be found elsewhere. These metrics are some of the basic ones used by portfolio managers to decide what they want to hold in their portfolio. The basic idea is that you should hold assets in your basket that are less correlated to the benchmark.
Benchmark in the traditional context refers to the main market indices, like the S&P 500 in the US market. But those already have a lot of tools available. My effort was for crypto investors who are trying to rebalance their portfolio every month or week to have some good metrics to make decisions. Because of this I used Bitcoin as the crypto market benchmark. So, everything is compared to Bitcoin in this script. I'm gonna explain the terms used in the table's columns below.
MAKE SURE YOU SET YOUR CHART TO DAILY AND USE THE EXCHANGE WITH THE MAXIMUM AVAILABLE DATA.
Y-Exp
This is the yearly expected return of the asset. It is simply the mean of the asset's yearly returns. (These calculations are not typical in TradingView, where we usually calculate on each bar and give a value on the same bar; here this value only changes once a year.) Remember that the higher this value is, the better, because the asset has historically shown good returns. But there is a tip: always check the available historical data for any asset that you add. If you add an asset that has only one year of data available, or you use data from an exchange that only recently listed the coin, the results will be insignificant and cannot be trusted. You should always select coins and markets (coins can be changed in the settings) that have the most data available.
Y-SDev
This is a little more complicated than the previous one. This is the standard deviation of the yearly returns, a classic measure of RISK in financial markets. The higher the value, the more risk is involved with the asset that you have added. If you add two assets that have the same returns but different standard deviations, the rational choice is the asset with the lower standard deviation.
The standard deviation is a good place to start, but there are some considerations to keep in mind (it gets complicated here, and the average user can safely ignore the next few phrases): the standard deviation and mean of the yearly returns are random variables. These variables have a theoretical probability density function, and these functions are not the Gaussian normal distribution. Because of this, in professional usage these returns should be transformed to a normal distribution, have all these terms calculated there, and then be transformed back to their original state before being used for any serious investment decision. I think these calculations can be done on TradingView, but I need your support to do this, in the form of likes and shares of my scripts and ideas.
M-Exp and M-SDev
These terms are like the previous ones, but calculated on monthly returns. As with the yearly returns, the monthly values change only once a monthly candle closes, so be patient when using this indicator.
I highly recommend not making decisions on monthly data due to the noise involved with this market, although in the long run it is OK. So go with yearly returns and wait at least 3 years to see your results.
CorToBTC
Basically you want to buy something that is less correlated with the benchmark. This is the correlation of the asset to Bitcoin.
Sharpe Ratio
This is one of the most used metrics as a risk-adjusted return measurement. You can google it for more information. The higher this value, the better. Remember that with any investment it is important to understand the risks associated with the assets that you are buying.
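A minimal Pine sketch of how the return, risk, and Sharpe columns can be computed for the chart's own symbol is below (risk-free rate assumed to be zero; the published table gathers these per asset via security calls):

//@version=5
indicator("Portfolio metrics sketch", overlay=false)
var float[] yearlyReturns = array.new_float()
var float   yearClose     = na
if timeframe.change("12M")
    if not na(yearClose)
        array.push(yearlyReturns, close[1] / yearClose - 1.0)   // return of the year just closed
    yearClose := close[1]
meanRet = array.size(yearlyReturns) > 0 ? array.avg(yearlyReturns) : na
sdRet   = array.size(yearlyReturns) > 1 ? array.stdev(yearlyReturns) : na
sharpe  = not na(sdRet) and sdRet != 0 ? meanRet / sdRet : na
plot(meanRet, "Y-Exp")
plot(sdRet, "Y-SDev")
plot(sharpe, "Sharpe")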
DownFromATH
This is a metric that I didn't see anywhere else on TradingView but that is familiar from platforms like CoinMarketCap. It is a real calculation of the percentage down from the ATH (All Time High), i.e., how many percent the coin is down from the maximum price the asset has experienced until now.
***
Remember, you can change all the assets except the main asset. If this script reaches 500 likes, I will update it continuously.
FunctionLAPACKdtrsmLibrary "FunctionLAPACKdtrsm"
subroutine in LAPACK (the linear algebra package), used to solve one of the following matrix equations:
op( A )*X = alpha*B, or X*op( A ) = alpha*B,
where alpha is a scalar, X and B are m by n matrices, A is a unit, or
non-unit, upper or lower triangular matrix and op( A ) is one of
op( A ) = A or op( A ) = A**T.
The matrix X is overwritten on B.
reference:
netlib.org
dtrsm(side, uplo, transa, diag, m, n, alpha, a, lda, b, ldb)
solves one of the matrix equations
op( A )*X = alpha*B, or X*op( A ) = alpha*B,
where alpha is a scalar, X and B are m by n matrices, A is a unit, or
non-unit, upper or lower triangular matrix and op( A ) is one of
op( A ) = A or op( A ) = A**T.
The matrix X is overwritten on B.
Parameters:
side : string , On entry, SIDE specifies whether op( A ) appears on the left or right of X as follows:
SIDE = 'L' or 'l' op( A )*X = alpha*B.
SIDE = 'R' or 'r' X*op( A ) = alpha*B.
uplo : string , specifies whether the matrix A is an upper or lower triangular matrix as follows:
UPLO = 'U' or 'u' A is an upper triangular matrix.
UPLO = 'L' or 'l' A is a lower triangular matrix.
transa : string , specifies the form of op( A ) to be used in the matrix multiplication as follows:
TRANSA = 'N' or 'n' op( A ) = A.
TRANSA = 'T' or 't' op( A ) = A**T.
TRANSA = 'C' or 'c' op( A ) = A**T.
diag : string , specifies whether or not A is unit triangular as follows:
DIAG = 'U' or 'u' A is assumed to be unit triangular.
DIAG = 'N' or 'n' A is not assumed to be unit triangular.
m : int , the number of rows of B. M must be at least zero.
n : int , the number of columns of B. N must be at least zero.
alpha : float , specifies the scalar alpha. When alpha is zero then A is not referenced and B need not be set before entry.
a : matrix, Triangular matrix.
lda : int , specifies the first dimension of A.
b : matrix, right-hand side matrix B, and on exit is overwritten by the solution matrix X.
ldb : int , specifies the first dimension of B.
Returns: void, modifies matrix b.
usage:
dtrsm ('L', 'U', 'N', 'N', 5, 3, 1.0, a, 7, b, 6)
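For illustration only, here is a self-contained Pine sketch of what the 'L', 'U', 'N', 'N' case computes: solve A*X = alpha*B by back-substitution for an upper-triangular A, overwriting B with X. This is not the library's code; to use the library itself, import it and call dtrsm() with the documented signature as in the usage line above.

//@version=5
indicator("Triangular solve sketch", overlay=true)
solveUpper(matrix<float> a, matrix<float> b, float alpha) =>
    m = matrix.rows(b)
    n = matrix.columns(b)
    for j = 0 to n - 1
        for ii = 0 to m - 1
            i = m - 1 - ii                      // back-substitution: bottom row first
            s = alpha * matrix.get(b, i, j)
            if i < m - 1
                for k = i + 1 to m - 1
                    s -= matrix.get(a, i, k) * matrix.get(b, k, j)
            matrix.set(b, i, j, s / matrix.get(a, i, i))
    b
// tiny demo: A is 2x2 upper triangular, B is 2x1; expected X = [1.5, 2.0]
var a = matrix.new<float>(2, 2, 0.0)
var b = matrix.new<float>(2, 1, 0.0)
if barstate.isfirst
    matrix.set(a, 0, 0, 2.0)
    matrix.set(a, 0, 1, 1.0)
    matrix.set(a, 1, 1, 4.0)
    matrix.set(b, 0, 0, 5.0)
    matrix.set(b, 1, 0, 8.0)
    solved = solveUpper(a, b, 1.0)
plot(matrix.get(b, 0, 0), "X[0,0]")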
Copy/Paste LevelsCopy/Paste Levels allows levels to be pasted onto your chart from a properly formatted source.
This tool streamlines the process of adding lines to your chart, and sharing lines from your chart.
More than one ticker at a time!
This indicator will only draw lines on charts it has values for!
This means you can input levels for every ticker you need all at once, one time, and only be displayed the levels for the current chart you are looking at. When you switch tickers, the levels for that ticker will display. (Assuming you have levels entered for that ticker)
The formatting is as follows:
Ticker,Color,Style,Width,Lvl1,Lvl2,Lvl3;
Ticker - Any ticker on Tradingview can be used in the field
Color - Available colors are: Red,Orange,Yellow,Green,Blue,Purple,White,Black,Gray
Style - Available styles are: Solid,Dashed,Dotted
Width - This can be any negative integer, ex.(-1,-2,-3,-4,-5)
Lvls - These can be any positive number (decimals allowed)
Semi-Colons separate sections, each section contains enough information to create at least 1 line.
Each additional level added within the same section will have the same styling parameters as the other levels in the section.
Example:
2 solid lines colored red with a thickness of 2 on QQQ, 1 at $300 and 1 at $400.
QQQ,RED,SOLID,-2,300,400;
IMPORTANT MUST READ!!!
Remember to not include any spaces between commas and the entries in each field!
ex. ; QQQ, red, dotted, -1, 325; <- Wrong
ex. ;QQQ,red,dotted,-1,325; <- Right
However,
All fields must be filled out, to use default values in the fields, insert a space between the commas.
ex. ;QQQ,red,dotted,,325; <- Wrong
ex. ;QQQ,red,dotted, ,325; <- Right
While spaces cannot be included, line breaks can!
I recommend for easier typing and viewing to include a line break for each new line (if changing styling or ticker)
Example:
2 solid lines, one red at $300, one green at $400, both default width. Written in a single line AND using multiple lines, both give the same output.
QQQ,red,solid, ,300;QQQ,green,solid, ,400;
or
QQQ,red,solid, ,300;
QQQ,green,solid, ,400;
In this following screenshot you can see more examples of different formatting variations.
The textbox contains exactly what is pasted into the settings input box.
As you can see, capitalization does not matter.
Default Values:
Color = optimal contrast color. If this field is filled in with a space, it will display the optimal contrast color for the user's background.
Style = solid
Width = -1
More Examples:
Multi-Ticker: drawing 3 lines at $300, all default values, on 3 different tickers
SPY, , , ,300;QQQ, , , ,300;AAPL, , , ,300
or
SPY, , , ,300;
QQQ, , , ,300;
AAPL, , , ,300
Multiple levels: There is no limit* to the number of levels that can be included within 1 section.
* only TV default line limit per indicator (500)
This will be 4 lines all with the same styling at different values on 2 separate tickers.
SPY,BLUE,SOLID,-2,100,200,300,400;QQQ,BLUE,SOLID,-2,100,200,300,400
or
SPY,BLUE,SOLID,-2,100,200,300,400;
QQQ,BLUE,SOLID,-2,100,200,300,400
Semi-colons must separate sections, but are not required at the beginning or end; it makes no difference whether they are added there or not.
SPY,BLUE,SOLID,-2,100,200,300,400;
QQQ,BLUE,SOLID,-2,100,200,300,400
==
SPY,BLUE,SOLID,-2,100,200,300,400;
QQQ,BLUE,SOLID,-2,100,200,300,400;
==
;SPY,BLUE,SOLID,-2,100,200,300,400;
QQQ,BLUE,SOLID,-2,100,200,300,400;
All the above output the same results.
Hope this is helpful for people,
Enjoy!
Real-Fast Fourier Transform of Price Oscillator [Loxx]Real-Fast Fourier Transform Oscillator is a simple Real-Fast Fourier Transform Oscillator. You have the option to turn on an inverse filter as well as min/max filters to fine-tune the oscillator. This oscillator is normalized by default. This indicator demonstrates how one can easily turn the RFFT algorithm into an oscillator.
What is the Discrete Fourier Transform?
In mathematics, the discrete Fourier transform (DFT) converts a finite sequence of equally-spaced samples of a function into a same-length sequence of equally-spaced samples of the discrete-time Fourier transform (DTFT), which is a complex-valued function of frequency. The interval at which the DTFT is sampled is the reciprocal of the duration of the input sequence. An inverse DFT is a Fourier series, using the DTFT samples as coefficients of complex sinusoids at the corresponding DTFT frequencies. It has the same sample-values as the original input sequence. The DFT is therefore said to be a frequency domain representation of the original input sequence. If the original sequence spans all the non-zero values of a function, its DTFT is continuous (and periodic), and the DFT provides discrete samples of one cycle. If the original sequence is one cycle of a periodic function, the DFT provides all the non-zero values of one DTFT cycle.
What is the Complex Fast Fourier Transform?
The complex Fast Fourier Transform algorithm transforms N real or complex numbers into another N complex numbers. The complex FFT transforms a real or complex signal x in the time domain into a complex two-sided spectrum X in the frequency domain. You must remember that zero frequency corresponds to n = 0, positive frequencies 0 < f < f_c correspond to values 1 ≤ n ≤ N/2 − 1, while negative frequencies −f_c < f < 0 correspond to N/2 + 1 ≤ n ≤ N − 1. The value n = N/2 corresponds to both f = f_c and f = −f_c. f_c is the critical or Nyquist frequency with f_c = 1/(2*T), or half the sampling frequency. The first harmonic X_1 corresponds to the frequency 1/(N*T).
The complex FFT requires the list of values (resolution, or N) to be a power of 2. If the input size is not a power of 2, then the input data will be padded with zeros to fit the size of the closest power of 2 upward.
What is Real-Fast Fourier Transform?
It has conditions similar to the complex Fast Fourier Transform, except that the input data must be purely real. If the time series data has the basic type complex64, only the real parts of the complex numbers are used for the calculation. The imaginary parts are silently discarded.
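To make the mechanics concrete, here is a minimal Pine sketch of a naive real-input DFT for a single frequency bin. A real FFT computes all bins at once and far more efficiently, but the underlying sum is the same (names and defaults are illustrative).

//@version=5
indicator("Real DFT sketch", overlay=false, max_bars_back=500)
N   = input.int(64, "Window (samples)")
bin = input.int(4,  "Frequency bin (0..N/2)")
src = input.source(close, "Source")
re = 0.0
im = 0.0
for t = 0 to N - 1
    x = src[N - 1 - t]                              // oldest sample first
    re += x * math.cos(2.0 * math.pi * bin * t / N)
    im -= x * math.sin(2.0 * math.pi * bin * t / N)
plot(math.sqrt(re * re + im * im), "Bin magnitude")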
Included
A moving-window-from-last-bar setting. You can lock the oscillator in place on the current bar by adding 1 to the Last Bar setting every time a new bar appears.
[ChasinAlts] Best Volatility Indicator I hope you all enjoy this one, as it does a great job at finding runners. I did try to search for an example script to reference for quite a while when I first dreamt up this idea because I needed assistance implementing it. This script in particular was one that I began long ago but put on the back-burner because I couldn't figure out how to implement the flow of logic until I came across a library titled 'Conditional Averages' published by the "Pinecoders" account. Thus, the logic in this code is partially derived from that library. To understand what the functions/logic at the beginning of the 'Functions' section do, you must understand how TV presents its data through the charts.
Whether on the 1sec TF or the 1day (or ANY other), the only time TV prints a bar/candle is when a trade occurs for that asset (i.e. a change in volume). Even if Open=Close on the same candle, the candle will print with the updated price. The % of candles printed out of the TOTAL possible amount that COULD HAVE been printed is the ultimate output that's calculated in the script. So, if the lookback setting=10min on the 1min TF and only 7 out of the last 10 candles have printed, then the value will appear as 70(%). There are MANY benefits to using this method to measure volatility, but it's vital to recall that the indicator does nothing to provide the direction of future price movement. One thing I've noticed is that when a coin is just beginning its ascent and its move is considerably larger/longer than all the other coins, OR the plot's angle is very steep, it is usually the end of a move and the direction is about to abruptly reverse, continuing with its volatility. As volatility increases more and more the plot gets brighter and brighter…and also vice versa.
The settings are as follows:
1) which set of Kucoin’s Margin Coins to use (8 possible sets with 32 coins in each set).
2) input how many minutes ago to start counting the total printed candles from (i.e. if the setting is input as 1440, the count begins from exactly 24hrs (1440min) ago up to the present candle).
3) there are 3 different lines to choose from to be able to plot:
i. ‘Includes Open==Close’ = adds to count when bar prints but price does NOT change (=t1)
ii. ‘Does NOT include Open==Close’ = count ONLY updates upon price movement (=t2)
iii. ‘Difference’ = (( t1 - t2 ) / t1 ) *100
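A stripped-down Pine sketch of the "percent of possible candles that actually printed" idea for the chart's own symbol on a 1-minute chart is below. The published script applies this across a basket of Kucoin margin pairs via security calls; the counting approach here is only an illustration.

//@version=5
indicator("Printed-candle % sketch", overlay=false)
lookbackMin = input.int(1440, "Lookback (minutes)")
// keep the open times of recent bars and drop the ones older than the lookback window
var int[] times = array.new_int()
array.push(times, time)
cutoff = time - lookbackMin * 60 * 1000
while array.size(times) > 0 and array.get(times, 0) < cutoff
    _removed = array.shift(times)
// on a 1-minute chart, the number of possible candles equals the lookback in minutes
printedPct = 100.0 * array.size(times) / lookbackMin
plot(printedPct, "Printed %")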
*** I’ve got some more great ones I will be uploading soon. Just have to create a description for them
Peace out,
- ChasinAlts
CFB-Adaptive, Williams %R w/ Dynamic Zones [Loxx]CFB-Adaptive, Williams %R w/ Dynamic Zones is a Jurik-Composite-Fractal-Behavior-Adaptive Williams % Range indicator with Dynamic Zones. These additions to the WPR calculation reduce noise and return a signal that is more viable than WPR alone.
What is Williams %R?
Williams %R , also known as the Williams Percent Range, is a type of momentum indicator that moves between 0 and -100 and measures overbought and oversold levels. The Williams %R may be used to find entry and exit points in the market. The indicator is very similar to the Stochastic oscillator and is used in the same way. It was developed by Larry Williams and it compares a stock’s closing price to the high-low range over a specific period, typically 14 days or periods.
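For reference, here is a minimal Pine sketch of the base Williams %R calculation before the CFB-adaptive period and Dynamic Zones are layered on top (the published indicator's internals differ):

//@version=5
indicator("Williams %R sketch", overlay=false)
len = input.int(14, "Length")
hh  = ta.highest(high, len)
ll  = ta.lowest(low, len)
wpr = -100.0 * (hh - close) / (hh - ll)
plot(wpr, "Williams %R")
hline(-20)
hline(-80)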
What is Composite Fractal Behavior ( CFB )?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle, (DC), that cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle momentum, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes them by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is the Jurik Volty used in the Jurik Filter?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used as both a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically result in lower profits. In other words, late comers get what's left on the table after the feast has already begun.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph .D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here’ s a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading lev- els. However, these oscillator- driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for the buy zone and the value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description we have: build the distribution f(x) of the price during the lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find the value of x such that the probability of the price being greater than or equal to x equals the probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
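For readers who prefer code, here is a minimal Python sketch of that percentile logic. The 70-bar lookback and 10% probabilities are illustrative defaults, not values taken from the original article, and the function name is hypothetical.

import numpy as np

def dynamic_zones(indicator, lookback=70, p_buy=0.10, p_sell=0.10):
    # For each bar, build the empirical distribution of the indicator over the
    # lookback window and find the cutoffs where P{X <= buy} = p_buy and
    # P{X >= sell} = p_sell, as described above.
    buy_zone, sell_zone = [], []
    for i in range(lookback, len(indicator) + 1):
        window = np.asarray(indicator[i - lookback:i], dtype=float)
        buy_zone.append(np.quantile(window, p_buy))          # lower dynamic zone
        sell_zone.append(np.quantile(window, 1.0 - p_sell))  # upper dynamic zone
    return buy_zone, sell_zone

Repeating this for every bar produces the two sequences that plot as the dynamic buy and sell zones.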
Included:
Bar coloring
3 signal variations w/ alerts
Divergences w/ alerts
Loxx's Expanded Source Types
Natural Market Mirror (NMM) and NMAs w/ Dynamic Zones [Loxx]
Natural Market Mirror (NMM) and NMAs w/ Dynamic Zones is a very complex indicator derived from Sloman's Ocean Theory. This indicator contains 3 core outputs; whichever output you select to create the long/short signal will be highlighted and bounded by Dynamic Zones. Pre-smoothing of the source input is available; you only need to increase the period length to greater than 1. The smoothing algorithm used here is Ehlers' Two-pole Super Smoother. This indicator should be used as you would use the popular QQE, the difference being that this indicator is multilayered and momentum-adaptive, while QQE is based on a fixed RSI.
The three core indicators calculations are as follows:
NMM = Natural Market Mirror, solid line
NMF = Natural Moving Average Fast, dashed line (white when off)
NMA = Natural Moving Average Regular, dashed line (yellow when off)
Whichever one you select as the signal output base, that line will increase in width and change color to match the trend of the input price. The Dynamic Zones will then readjust around the selected output and form a new bounding zone for the signal output.
What is the Ocean Natural Market Mirror?
Created by Jim Sloman, the NMM is a momentum indicator that automatically adjusts to volatility without being programmed to do so. For more info, read his guide "Ocean Theory, an Introduction".
What is the Ocean Natural Moving Average?
Also created by Jim Sloman, the NMA is a moving average that automatically adjusts to volatility.
What are Dynamic Zones?
As explained in "Stocks & Commodities V15:7 (306-310): Dynamic Zones by Leo Zamansky, Ph.D., and David Stendahl"
Most indicators use a fixed zone for buy and sell signals. Here's a concept based on zones that are responsive to past levels of the indicator.
One approach to active investing employs the use of oscillators to exploit tradable market trends. This investing style follows a very simple form of logic: Enter the market only when an oscillator has moved far above or below traditional trading levels. However, these oscillator-driven systems lack the ability to evolve with the market because they use fixed buy and sell zones. Traders typically use one set of buy and sell zones for a bull market and substantially different zones for a bear market. And therein lies the problem.
Once traders begin introducing their market opinions into trading equations, by changing the zones, they negate the system’s mechanical nature. The objective is to have a system automatically define its own buy and sell zones and thereby profitably trade in any market — bull or bear. Dynamic zones offer a solution to the problem of fixed buy and sell zones for any oscillator-driven system.
An indicator’s extreme levels can be quantified using statistical methods. These extreme levels are calculated for a certain period and serve as the buy and sell zones for a trading system. The repetition of this statistical process for every value of the indicator creates values that become the dynamic zones. The zones are calculated in such a way that the probability of the indicator value rising above, or falling below, the dynamic zones is equal to a given probability input set by the trader.
To better understand dynamic zones, let's first describe them mathematically and then explain their use. The dynamic zones definition:
Find V such that:
For dynamic zone buy: P{X <= V}=P1
For dynamic zone sell: P{X >= V}=P2
where P1 and P2 are the probabilities set by the trader, X is the value of the indicator for the selected period and V represents the value of the dynamic zone.
The probability inputs P1 and P2 can be adjusted by the trader to encompass as much or as little data as the trader would like. The smaller the probability, the fewer data values above and below the dynamic zones. This translates into a wider range between the buy and sell zones. If a 10% probability is used for P1 and P2, only those data values that make up the top 10% and bottom 10% for an indicator are used in the construction of the zones. Of the values, 80% will fall between the two extreme levels. Because dynamic zone levels are penetrated so infrequently, when this happens, traders know that the market has truly moved into overbought or oversold territory.
Calculating the Dynamic Zones
The algorithm for the dynamic zones is a series of steps. First, decide the value of the lookback period t. Next, decide the value of the probability Pbuy for the buy zone and the value of the probability Psell for the sell zone.
For i = 1 to the last lookback period, build the distribution f(x) of the price during the lookback period i. Then find the value Vi1 such that the probability of the price being less than or equal to Vi1 during lookback period i is equal to Pbuy. Find the value Vi2 such that the probability of the price being greater than or equal to Vi2 during lookback period i is equal to Psell. The sequence of Vi1 for all periods gives the buy zone. The sequence of Vi2 for all periods gives the sell zone.
In the algorithm description we have: build the distribution f(x) of the price during the lookback period i. The distribution here is empirical, namely how many times a given value of x appeared during the lookback period. The problem is to find the value of x such that the probability of the price being greater than or equal to x equals the probability selected by the user. Probability is the area under the distribution curve, so the task is to find the value of x such that the area under the distribution curve to the right of x equals the probability selected by the user. That x is the dynamic zone.
Included
Bar coloring
3 types of signal output options
Alerts
Loxx's Expanded Source Types
Futures Exchange Sessions 3.0
Description
The ultimate conclusion to the Futures Exchange Sessions 2.0 indicator. In version 3.0 the user gets full control of the start and end times of three separate dynamic boxes and one horizontal line. If the user wants to visually keep track of killzones, lunches, or any other time span in a trading day, version 3.0 will dynamically expand and keep track of price within the time specified by the user.
Inputs and Style
Everything about the three dynamic boxes and one horizontal line can be independently configured. Color, style, border, and width can all be adjusted. In the Settings, each box has a text box so the user can give each one a unique name.
Timezone
All of the start and end times are in EST. Additionally, each box and line needs a start-of-day reference. This is controlled by a setting called Start Day Timezone, where the user specifies a timezone whose midnight marks the start of the day. In general, if a box or line resides within a particular session, pick the corresponding timezone. If the user's box/line fits in the Asian Session, then choose Asia/Shanghai. If the box/line is within the London Session, then choose Europe/London. And the same goes for the New York Session.
Special Notes
If the start time is within one period of the Start Day Timezone set in the Settings, the line/box won't display
Boxes and time lines only display when the timeframe is <= 30 minutes
To turn off a box's text label, set its opacity to 0%
Goertzel Cycle Period [Loxx]
Goertzel Cycle Period is an indicator that uses the Goertzel algorithm to extract the cycle period from a ticker's price input, which can then be injected into advanced, adaptive indicators and technical analysis algorithms.
The following information is extracted from: "MESA vs Goertzel-DFT, 2003 by Dennis Meyers"
Background
MESA, which stands for Maximum Entropy Spectral Analysis, is a widely used mathematical technique designed to find the frequencies present in data. MESA was developed by J. P. Burg for his Ph.D. dissertation at Stanford University in 1975. The use of the MESA technique for stocks has been written about in many articles and has been popularized as a trading technique by John Ehlers.
The Fourier Transform is a mathematical technique named after the famed French mathematician Jean Baptiste Joseph Fourier (1768-1830). Its digital form, namely the discrete-time Fourier Transform (DFT), is a widely used mathematical technique for finding the frequencies of discrete-time sampled data. The use of the DFT has been written about in many articles in this magazine (see references section).
Today, both MESA and the DFT are widely used in science and engineering for digital signal processing. The application of MESA and Fourier mathematical techniques is prevalent in our everyday life, in everything from television to cell phones to wireless internet to satellite communications.
MESA Advantages & Disadvantage
MESA is a mathematical technique that calculates the frequencies of a time series from the autoregressive coefficients of the time series. We have all heard of regression. The simplest regression is the straight-line regression of price against time, where price(t) = a + b*t and where a and b are calculated such that the squared distance between price and the best-fit straight line is minimized (also called least squares fitting). With autoregression we attempt to predict tomorrow's price by a linear combination of M past prices.
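As a concrete illustration of that last sentence, the Python sketch below fits an ordinary least-squares AR(M) model. This is only the plain regression idea described above, not the Burg/MESA estimator itself, and the function names are made up for the example.

import numpy as np

def fit_autoregression(prices, m=3):
    # Build rows of m consecutive prices and regress the next price on them
    prices = np.asarray(prices, dtype=float)
    X = np.column_stack([prices[i:len(prices) - m + i] for i in range(m)])
    y = prices[m:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def predict_next(prices, coeffs):
    # Linear combination of the last m prices (oldest first), as described above
    m = len(coeffs)
    return float(np.dot(coeffs, prices[-m:]))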
One of the major advantages of MESA is that the frequencies examined are not constrained to multiples of 1/N (1/N is the DFT frequency spacing and N is the number of sample points). For instance, with the DFT and N data points we can only look at frequencies of 1/N, 2/N, ..., 0.5. With MESA we can examine any frequency band within that range and any frequency spacing between i/N and (i+1)/N. For example, if we had 100 bars of price data, we might be interested in looking only for cycles between 3 bars per cycle and 30 bars per cycle, with a frequency spacing of 0.5 bars/cycle. The DFT would examine all cycle lengths between 2 and 50 bars with a frequency spacing constrained to 1/100.
Another major advantage of MESA is that the dominant spectral (frequency) peaks of the price series, if they exist, can be identified with fewer samples than the DFT technique requires. For instance, if we had a 10-bar price period and a high signal-to-noise ratio, we could accurately identify this period with 40 data samples using the MESA technique. The same resolution might take 128 samples with the DFT. One major disadvantage of the MESA technique is that with low signal-to-noise ratios, that is below 6 dB (signal amplitude/noise amplitude < 2), the ability of MESA to find the dominant frequency peaks is severely diminished (see Kay, Ref 10, p 437). With noisy price series this disadvantage can become a real problem. Another disadvantage of MESA is that when the dominant frequencies are found, another procedure has to be used to get the amplitudes and phases of those frequencies. This two-stage process can make MESA much slower than the DFT and FFT. The FFT stands for Fast Fourier Transform, a computationally efficient algorithm designed to rapidly evaluate the DFT. We will show in examples below the comparisons between the DFT and MESA using constructed signals with various noise levels.
DFT Advantages and Disadvantages.
The mathematical technique called the DFT takes a discrete time series (price) of N equally spaced samples and transforms or converts this time series, through a mathematical operation, into a set of N complex numbers defined in what is called the frequency domain. Why would we want to do that? Well, it turns out that we can do all kinds of neat analysis tricks in the frequency domain which are just too hard to do, computationally, with the original price series in the time domain. If we make the assumption that the price series we are examining is made up of signals of various frequencies plus noise, then in the frequency domain we can easily filter out the frequencies we have no interest in and minimize the noise in the data. We could then transform the result back into the time domain and produce a filtered price series that hopefully would be easier to trade. The advantages of the DFT, and its fast computation algorithm the FFT, are that it is extremely fast in calculating the frequencies of the input price series. In addition, it can determine frequency peaks for very noisy price series even when the signal amplitude is less than the noise amplitude. One of the disadvantages of the FFT is that straight-line trends, parabolic trends and edge effects in the price series can distort the frequency spectrum. In addition, end effects in the price series can distort the frequency spectrum. Another disadvantage of the FFT is that it needs a lot more data than MESA for spectral resolution. However, this disadvantage has largely been nullified by the speed of today's computers.
The Goertzel algorithm attempts to resolve these problems...
What is the Goertzel algorithm?
The Goertzel algorithm is a technique in digital signal processing (DSP) for efficient evaluation of the individual terms of the discrete Fourier transform (DFT). It is useful in certain practical applications, such as recognition of dual-tone multi-frequency signaling (DTMF) tones produced by the push buttons of the keypad of a traditional analog telephone. The algorithm was first described by Gerald Goertzel in 1958.
Like the DFT, the Goertzel algorithm analyses one selectable frequency component from a discrete signal. Unlike direct DFT calculations, the Goertzel algorithm applies a single real-valued coefficient at each iteration, using real-valued arithmetic for real-valued input sequences. For covering a full spectrum, the Goertzel algorithm has a higher order of complexity than fast Fourier transform (FFT) algorithms, but for computing a small number of selected frequency components, it is more numerically efficient. The simple structure of the Goertzel algorithm makes it well suited to small processors and embedded applications.
The main calculation in the Goertzel algorithm has the form of a digital filter, and for this reason the algorithm is often called a Goertzel filter.
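A minimal Python sketch of that single-bin recurrence is shown below, purely to illustrate the one-real-coefficient-per-iteration structure described above; it is not the cycle-extraction code used by the indicator.

import math

def goertzel_power(samples, target_freq, sample_rate):
    # Estimate the squared magnitude of one frequency component (one DFT bin)
    n = len(samples)
    k = int(0.5 + n * target_freq / sample_rate)  # nearest DFT bin to the requested frequency
    omega = 2.0 * math.pi * k / n
    coeff = 2.0 * math.cos(omega)                 # the single real-valued coefficient
    s_prev, s_prev2 = 0.0, 0.0
    for x in samples:
        s = x + coeff * s_prev - s_prev2          # one multiply-add per sample (the "filter")
        s_prev2, s_prev = s_prev, s
    return s_prev * s_prev + s_prev2 * s_prev2 - coeff * s_prev * s_prev2

Running this once per frequency of interest is what makes the method efficient when only a handful of frequencies, such as the DTMF tones mentioned below, need to be checked.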
Where is Goertzel algorithm used?
This package contains the advanced mathematical technique called the Goertzel algorithm for discrete Fourier transforms. This mathematical technique is currently used in today's space-age satellite and communication applications and is applied here to stock and futures trading.
While the mathematical technique called the Goertzel algorithm is unknown to many, this algorithm is used every day without people even knowing it. When you press a cell phone button, have you ever wondered how the telephone company knows which button tone you pushed? The answer is the Goertzel algorithm. This algorithm is built into tiny integrated circuits and immediately detects which of the 12 button tones (frequencies) you pushed.
Future Additions:
Bartels test for cycle significance, testing output cycles for utility
Hodrick Prescott Detrending, smoothing
Zero-Lag Regression Detrending, smoothing
High-pass or Double WMA filtering of source input price data
References:
1. Burg, J. P., "Maximum Entropy Spectral Analysis", Ph.D. dissertation, Stanford University, Stanford, CA, May 1975.
2. Kay, Steven M., "Modern Spectral Estimation", Prentice Hall, 1988.
3. Marple, Lawrence S. Jr., "Digital Spectral Analysis With Applications", Prentice Hall, 1987.
4. Press, William H., et al., "Numerical Recipes in C++: The Art of Scientific Computing", Cambridge Press, 2002.
5. Oppenheim, A., Schafer, R. and Buck, J., "Discrete Time Signal Processing", Prentice Hall, 1996, pp. 663-634.
6. Proakis, J. and Manolakis, D., "Digital Signal Processing: Principles, Algorithms and Applications", Prentice Hall, 1996, pp. 480-481.
7. Goertzel, G., "An Algorithm for the Evaluation of Finite Trigonometric Series", American Mathematical Monthly, Vol. 65, 1958, pp. 34-35.
Dual Fibonacci Zone & Ranged Vol DCA Strategy - R3c0nTrader
What does this do?
This is for educational purposes and allows one to backtest two Fibonacci Zones simultaneously. This also includes an option for Ranged Volume as a parameter.
Pre-requisites:
First off, this is a Long-only strategy, as I wrote it with DCA in mind. It cannot be used for shorting. Shorting defeats the purpose of a DCA bot, whose goal is to build a Long position, not a Short one. If you want to short, there are plenty of free scripts out there that do this.
You must have some base knowledge or experience with Fibonacci trading, understanding what is ADX, +DI (and -DI), etc.
You can use this script without a 3Commas account and see how a 3Commas DCA bot would perform. However, I highly recommend that inexperienced users get a free account and go through the tutorials, FAQs, and knowledge base. This will give you a base understanding of the settings you will see in this strategy and why you need to know them. Only then should you try testing this strategy with a paper bot.
Background
After I had created and released "Fibonacci Zone DCA Strategy", I began expanding and testing other ideas.
The first idea was to add Ranged Volume to the Fibonacci Zone DCA strategy which I wanted for providing further confirmation before entering a trade. The second idea was to add a second Fibonacci Zone that was just as configurable as the first Fibonacci Zone. I managed to add both and they can be easily enabled or disabled via the strategy settings menu.
Things Got Real Interesting
Things got real interesting when I started testing strategies with two Fibonacci zones. Here's a quick list of what I found I was able to do:
Mix and match exit strategies. I could set the Fib-1 zone strategy to exit with a take profit % and separately set the Fib-2 zone strategy to exit when the price crosses the top-high fib border
Trade the trend. A common phrase amongst traders is "the trend is your friend", and with the help of an additional Fib Zone I was able to trade the trend more often by using two different Fib Zone strategies which, if configured properly, can shorten the time to re-deploy capital, increase the number of closed trades, and in some cases increase net profit.
Trade both bull market uptrends and bear market downtrends in the same strategy. I found I could configure one Fib Zone strategy to be really good in uptrends and another Fib Zone strategy to be really good in downtrends. In some cases, with both Fib Zone strategies enabled together in a single strategy I got better results than if the strategies were backtested separately.
There are many other trade strategies I am finding with this. One could be to trade a convergence or divergence of the two different Fib Zones. This could possibly be achieved by setting one strategy to have a different Fibonacci length.
Credits:
Thank you "EvoCrypto" for granting me permission to use "Ranged Volume" to create this strategy
Thank you "eykpunter" for granting me permission to use "Fibonacci Zones" to create this strategy
Thank you "junyou0424" for granting me permission to use "DCA Bot with SuperTrend Emulator" which I used for adding bot inputs, calculations, and strategy
Jurik CFB Adaptive QQE [Loxx]
Jurik CFB Adaptive QQE is a Double Jurik-Filtered, Composite Fractal Behavior (CFB) adaptive, Qualitative Quantitative Estimation indicator. This indicator includes both fixed and CFB-adaptive calculations, as well as three different types of RSI calculations including Jurik's RSX.
What is Qualitative Quantitative Estimation (QQE)?
The Qualitative Quantitative Estimation (QQE) indicator works like a smoother version of the popular Relative Strength Index ( RSI ) indicator. QQE expands on RSI by adding two volatility based trailing stop lines. These trailing stop lines are composed of a fast and a slow moving Average True Range (ATR).
There are many indicators for many purposes. Some of them are complex and some are comparatively easy to handle. The QQE indicator is a really useful analytical tool and one of the most accurate indicators. It offers numerous strategies for using the buy and sell signals. Essentially, it can help detect trend reversals and enter the trade at optimal positions.
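To make the fast/slow ATR idea concrete, here is a hedged Python sketch of the common public QQE formulation: a smoothed RSI plus bands built from the "ATR" of that RSI. The smoothing lengths and the 4.236 factor come from widely circulated QQE code, not necessarily from this indicator's exact implementation, and the trailing-line logic that turns the bands into stops is omitted for brevity.

def ema(values, period):
    # Simple exponential moving average used for all smoothing below
    alpha = 2.0 / (period + 1)
    out, prev = [], values[0]
    for v in values:
        prev = alpha * v + (1.0 - alpha) * prev
        out.append(prev)
    return out

def qqe_bands(rsi_values, rsi_period=14, smoothing=5, factor=4.236):
    # Smooth the RSI, measure its own bar-to-bar volatility (an "ATR" of the RSI),
    # then project the upper/lower bands used to build the trailing stop lines
    wilders_period = rsi_period * 2 - 1
    rsi_ma = ema(rsi_values, smoothing)
    atr_rsi = [abs(a - b) for a, b in zip(rsi_ma[1:], rsi_ma)]
    slow_atr = ema(ema(atr_rsi, wilders_period), wilders_period)
    upper = [r + factor * a for r, a in zip(rsi_ma[1:], slow_atr)]
    lower = [r - factor * a for r, a in zip(rsi_ma[1:], slow_atr)]
    return rsi_ma, upper, lower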
What is Wilders' RSI?
The Relative Strength Index (RSI) is a well-known momentum-based oscillator used to measure the speed (velocity) as well as the change (magnitude) of directional price movements. Essentially, RSI, when graphed, provides a visual means to monitor both the current and historical strength and weakness of a particular market. The strength or weakness is based on closing prices over the duration of a specified trading period, creating a reliable metric of price and momentum changes. Given the popularity of cash-settled instruments (stock indexes) and leveraged financial products (the entire field of derivatives), RSI has proven to be a viable indicator of price movements.
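For reference, a minimal Python sketch of Wilder's RSI calculation (a simple-average seed followed by Wilder's recursive smoothing):

def wilder_rsi(closes, period=14):
    # Separate the bar-to-bar changes into gains and losses
    gains, losses = [], []
    for prev, curr in zip(closes, closes[1:]):
        change = curr - prev
        gains.append(max(change, 0.0))
        losses.append(max(-change, 0.0))
    # Seed with simple averages, then apply Wilder's smoothing to each new bar
    avg_gain = sum(gains[:period]) / period
    avg_loss = sum(losses[:period]) / period
    for g, l in zip(gains[period:], losses[period:]):
        avg_gain = (avg_gain * (period - 1) + g) / period
        avg_loss = (avg_loss * (period - 1) + l) / period
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)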
What is RSX RSI?
RSI is a very popular technical indicator because it takes into consideration market speed, direction and trend uniformity. However, its widely criticized drawback is its noisy (jittery) appearance. The Jurik RSX retains all the useful features of RSI, but with one important exception: the noise is gone with no added lag.
What is Rapid RSI?
Rapid RSI Indicator, from Ian Copsey's article in the October 2006 issue of Stocks & Commodities magazine.
Rapid RSI resembles Wilder's RSI, but uses an SMA instead of a Wilder MA for internal smoothing of the price change accumulators.
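Relative to the Wilder version sketched earlier, the change amounts to replacing the recursive smoothing with a plain average over the period; the sketch below is an assumption based solely on that description.

from statistics import mean

def rapid_rsi(closes, period=14):
    # Same gain/loss split as Wilder's RSI, but simple averages instead of Wilder smoothing
    gains = [max(closes[i] - closes[i - 1], 0.0) for i in range(1, len(closes))]
    losses = [max(closes[i - 1] - closes[i], 0.0) for i in range(1, len(closes))]
    avg_gain = mean(gains[-period:])
    avg_loss = mean(losses[-period:])
    if avg_loss == 0:
        return 100.0
    return 100.0 - 100.0 / (1.0 + avg_gain / avg_loss)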
What is Composite Fractal Behavior (CFB)?
All around you mechanisms adjust themselves to their environment. From simple thermostats that react to air temperature to computer chips in modern cars that respond to changes in engine temperature, r.p.m.'s, torque, and throttle position. It was only a matter of time before fast desktop computers applied the mathematics of self-adjustment to systems that trade the financial markets.
Unlike basic systems with fixed formulas, an adaptive system adjusts its own equations. For example, start with a basic channel breakout system that uses the highest closing price of the last N bars as a threshold for detecting breakouts on the up side. An adaptive and improved version of this system would adjust N according to market conditions, such as momentum, price volatility or acceleration.
Since many systems are based directly or indirectly on cycles, another useful measure of market condition is the periodic length of a price chart's dominant cycle (DC), the cycle with the greatest influence on price action.
The utility of this new DC measure was noted by author Murray Ruggiero in the January '96 issue of Futures Magazine. In it, Mr. Ruggiero used it to adaptively adjust the value of N in a channel breakout system. He then simulated trading 15 years of D-Mark futures in order to compare its performance to a similar system that had a fixed optimal value of N. The adaptive version produced 20% more profit!
This DC index utilized the popular MESA algorithm (a formulation by John Ehlers adapted from Burg's maximum entropy algorithm, MEM). Unfortunately, the DC approach is problematic when the market has no real dominant cycle, because the mathematics will produce a value whether or not one actually exists! Therefore, we developed a proprietary indicator that does not presuppose the presence of market cycles. It's called CFB (Composite Fractal Behavior) and it works well whether or not the market is cyclic.
CFB examines price action for a particular fractal pattern, categorizes the patterns it finds by size, and then outputs a composite fractal size index. This index is smooth, timely and accurate.
Essentially, CFB reveals the length of the market's trending action time frame. Long trending activity produces a large CFB index and short choppy action produces a small index value. Investors have found many applications for CFB which involve scaling other existing technical indicators adaptively, on a bar-to-bar basis.
What is the Jurik Volty used in the Jurik Filter?
One of the lesser known qualities of Jurik smoothing is that the Jurik smoothing process is adaptive. "Jurik Volty" (a sort of market volatility) is what makes Jurik smoothing adaptive. The Jurik Volty calculation can be used both as a standalone indicator and to smooth other indicators that you wish to make adaptive.
What is the Jurik Moving Average?
Have you noticed how moving averages add some lag (delay) to your signals? ... especially when price gaps up or down in a big move, and you are waiting for your moving average to catch up? Wait no more! JMA eliminates this problem forever and gives you the best of both worlds: low lag and smooth lines.
Ideally, you would like a filtered signal to be both smooth and lag-free. Lag causes delays in your trades, and increasing lag in your indicators typically results in lower profits. In other words, latecomers get what's left on the table after the feast has already begun.
Included
-Toggle bar color on/off
Multiple EMA
An exponentially weighted moving average reacts more significantly to recent price changes than a simple moving average (SMA), which applies an equal weight to all observations in the period.
Here, I have merged multiple EMAs into one indicator. Traders will find it very convenient to have several widely used EMAs combined into a single indicator. The period and color of each EMA can also be changed in the settings.
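A small Python sketch of the underlying calculation, bundling a few commonly used lengths; the 8/20/50/200 set is illustrative, not a fixed property of the indicator.

def ema_series(closes, period):
    # Exponential weighting: 2/(period+1) on the newest price, the rest on yesterday's EMA
    alpha = 2.0 / (period + 1)
    out, prev = [], closes[0]
    for price in closes:
        prev = alpha * price + (1.0 - alpha) * prev
        out.append(prev)
    return out

def multiple_emas(closes, periods=(8, 20, 50, 200)):
    # One series per length, returned as a dict keyed by period
    return {p: ema_series(closes, p) for p in periods}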
About the practicality of these EMAs:
Every EMA suggests the sentiments in a period of time.
The longer EMAs (i.e. the 50- and 200-day) tend to be used more by long-term investors, while short-term investors tend to use the 8- and 20-day EMAs.
One may prefer to short or to hedge a position when the 200-day moving average is broken to the downside, and vice versa for longs. Normally one may expect a move of around 2-3% on either side when the level is broken with volumes supporting it.
Bjorgum Double Tap
█ OVERVIEW
Double Tap is a pattern recognition script aimed at detecting Double Tops and Double Bottoms. Double Tap can be applied to the broker emulator to observe historical results, run as a trading bot for live trade alerts in real time with entry signals, take profit, and stop orders, or to simply detect patterns.
█ CONCEPTS
How Is A Pattern Defined?
Doubles are technical formations that are both reversal patterns and breakout patterns. These formations typically have a distinctive “M” or a “W” shape with price action breaking beyond the neckline formed by the center of the pattern. They can be recognized when a pivot fails to break when tested for a second time and the retracement that follows breaks beyond the key level opposite. This can trap entrants that were playing in the direction of the prior trend. Entries are made on the breakout with a target projected beyond the neckline equal to the height of the pattern.
Pattern Recognition
Patterns are recognized through the use of zig-zag; a method of filtering price action by connecting swing highs and lows in an alternating fashion to establish trend, support and resistance, or derive shapes from price action. The script looks for the highest or lowest point in a given number of bars and updates a list with the values as they form. If the levels are exceeded, the values are updated. If the direction changes and a new significant point is made, a new point is added to the list and the process starts again. Meanwhile, we scan the list of values looking for the distinctive shape to form as previously described.
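As a rough illustration of that scan, the Python sketch below checks the most recent zig-zag pivots for the "M" shape of a Double Top. The 0.5% tolerance and the data layout are assumptions made for the example, not the script's actual parameters.

def detect_double_top(pivot_prices, current_price, tolerance=0.005):
    # pivot_prices: alternating zig-zag pivot values ending [..., high1, neckline_low, high2]
    if len(pivot_prices) < 3:
        return None
    high1, neckline, high2 = pivot_prices[-3:]
    highs_match = abs(high1 - high2) <= tolerance * high1  # second test of the pivot fails to break it
    broke_neckline = current_price < neckline              # retracement breaks the key level opposite
    if highs_match and broke_neckline:
        height = max(high1, high2) - neckline
        return {"target": neckline - height,               # pattern height projected beyond the neckline
                "stop": max(high1, high2)}                 # invalidation beyond the pattern high
    return None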
█ STRATEGY RESULTS
Back Testing
Historical back testing is the most common method to test a strategy due in part to the general ease of gathering quick results. The underlying theory is that any strategy that worked well in the past is likely to work well in the future, and conversely, any strategy that performed poorly in the past is likely to perform poorly in the future. It is easy to poke holes in this theory, however, as for one to accept it as gospel, one would have to assume that future results will match what has come to pass. The randomness of markets may see to it otherwise, so it is important to scrutinize results. Some commonly used methods are to compare to other markets or benchmarks, perform statistical analysis on the results over many iterations and on differing datasets, walk-forward testing, out-of-sample analysis, or a variety of other techniques. There are many ways to interpret the results, so it is important to do research and gain knowledge in the field prior to taking meaningful conclusions from them.
👉 In short, it would be naive to place trust in one good backtest and expect positive results to continue. For this reason, results have been omitted from this publication.
Repainting
Repainting is simply the difference in behaviour of a strategy in real time vs the results calculated on the historical dataset. The strategy, by default, will wait for confirmed signals and is thus designed not to repaint. Waiting for bar close for entries aligns results in the real-time data feed with those calculated on historical bars, which contain far less data. By doing this we align the behaviour of the strategy on the 2 data types, which brings significance to the calculated results. To override this behaviour and introduce repainting, one can select "Recalculate on every tick" from the properties tab. It is important to note that by doing this, alerts may not align with results seen in the strategy tester when the chart is reloaded, and thus to do so is to forgo backtesting and restricts the strategy to forward testing only.
👉 It is possible to use this script as an indicator as opposed to a full strategy by disabling "Use Strategy" in the "Inputs" tab. Basic alerts for detection will be sent when patterns are detected as opposed to complex order syntax. For alerts mid-bar enable "Recalculate on every tick" , and for confirmed signals ensure it is disabled.
█ EXIT ORDERS
Limit and Stop Orders
By default, the strategy will place a stop loss at the invalidation point of the pattern. This point is beyond the pattern high in the case of Double Tops, or beneath the pattern low in the case of Double Bottoms. The target or take profit point is an equal-legs measurement, or 100% of the pattern height in the direction of the pattern bias. Both the stop and the limit level can be adjusted from the user menu as a percentage of the pattern height.
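The level arithmetic for a short entry on a Double Top, with both levels expressed as a percentage of the pattern height as in the user menu, might look like the hypothetical helper below (an illustration, not the script's code).

def double_top_exit_levels(pattern_high, neckline, stop_pct=100.0, limit_pct=100.0):
    # Pattern height is the distance from the pattern high down to the neckline
    height = pattern_high - neckline
    stop_price = neckline + height * (stop_pct / 100.0)    # 100% places the stop at the pattern high
    limit_price = neckline - height * (limit_pct / 100.0)  # 100% targets one full pattern height below
    return stop_price, limit_price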
Trailing Stops
Optional from the menu is the implementation of an ATR-based trailing stop. The trailing stop is designed to begin when the target projection is reached. From there, the script looks back a user-defined number of bars for the highest or lowest point +/- the ATR value. For tighter stops the user can look back a lesser number of bars, or decrease the ATR multiple. When using either Alertatron or Trading Connector, each change in the trail value will trigger an alert to update the stop order on the exchange to reflect the new trail price. This reduces the latency and slippage that can occur when relying on alerts alone, since real exchange orders fill faster and remain in place in the event of a disruption in communication between your strategy and the exchange, which ensures a higher level of safety.
👉 It is important to note that in the case the trailing stop is enabled, limit orders are excluded from the exit criteria. Rather, the point in time that the limit value is exceeded is the point that the trail begins. As such, this method will exit by stop loss only.
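One plausible reading of that trailing logic for a long position is sketched below: once the target has been hit, trail at the lowest low of the last few bars minus an ATR multiple, never moving the stop down. The lookback of 5 and multiple of 1.0 are illustrative, and the actual script may anchor to highs instead.

def trail_stop_long(lows, atr_value, prev_trail=None, lookback=5, atr_mult=1.0):
    # Candidate stop: lowest low over the lookback window, offset by a multiple of ATR
    candidate = min(lows[-lookback:]) - atr_mult * atr_value
    if prev_trail is None:
        return candidate
    return max(prev_trail, candidate)  # a trailing stop only ratchets in the trade's favour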
█ ALERTS
Five Built-in 3rd Party Destinations
The following are five options for delivering alerts from Double Tap to live trade execution via third party API solutions or chat bots to share your trades on social media. These destinations can be selected from the input menu and alert syntax will automatically configure in alerts appropriately to manage trades.
Custom JSON
JSON, or JavaScript Object Notation, is a readable format for structuring data. It is used primarily to transmit data between a server and a web application. In regard to this script, this may be a custom intermediary web application designed to catch alerts and interface with an exchange API. The JSON message is a trade map for an application to read, equipped with where it's been, where it's going, targets, stops, and quantity; a full diagnostic of the current state and its previous state. A web application could be configured to follow the messages sent in this format and conduct trades in sync with alerts running on the TV server.
Below is an example of a rendered JSON alert:
{
"passphrase": "1234",
"time": "2022-05-01T17:50:05Z",
"ticker": "ETHUSDTPERP",
"plot": {
"stop_price": 2600.15,
"limit_price": 3100.45
},
"strategy": {
"position_size": 0.1,
"order_action": "buy",
"market_position": "long",
"market_position_size": 0,
"prev_market_position": "flat",
"prev_market_position_size": 0
}
}
Trading Connector
Trading Connector is a third party fully autonomous Chrome extension designed to catch alert webhooks from TradingView and interface with MT4/MT5 to execute live trades from your machine. Alerts to Trading Connector are simple; just select the destination from the input drop down menu, set your ticker in the "TC Ticker" box in the "Alert Strings" section and enter your URL in the alert window when configuring your alert.
Alertatron
Alertatron is an automated algo platform for cryptocurrency trading that is designed to automate your trading strategies. Although the platform is currently restricted to crypto, it offers a versatile interface with highly flexible syntax for complex market orders and conditions. To direct alerts to Alertatron, select the platform from the 3rd party drop down, configure your API key in the "Alertatron Key" box and add your URL in the alert message box when making alerts.
3 Commas
3 Commas is an easy and quick-to-use click-and-go third party crypto API solution. Alerts are simple, without overly complex syntax. Messages are simply pasted into alerts and executed as alerts are triggered. There are 4 boxes at the bottom of the "Inputs" tab where the appropriate messages are to be placed. These messages can be copied from 3 Commas after the bots are set up and pasted directly into the settings menu. Remember to select 3 Commas as a destination from the third party drop down and place the appropriate URL in the alert message window.
Discord
Some may wish to share their trades with their friends in a Discord chat via webhook chat bot. Messages are configured to notify of the pattern type with targets and stop values. A bot can be configured through the integration menu in a Discord chat to which you have appropriate access. Select Discord from the 3rd party drop down menu and place your chat bot URL in the alert message window when configuring alerts.
👉 For further information regarding alert setup, refer to the platform specific instructions given by the chosen third party provider.
█ IMPORTANT NOTES
Setting Alerts
For alert messages to be properly delivered on order fills it is necessary to place the following placeholder in the alert message box when creating an alert.
{{strategy.order.alert_message}}
This placeholder will auto-populate the alert message with the appropriate syntax that is designated for the 3rd party selected in the user menu.
Order Sizing and Commissions
The values that are sent in alert messages are populated from live metrics calculated by the strategy. This means that the actual values in the "Properties" tab are used and must be set by the user. The initial capital, order size, commission, etc. are all used in the calculations, so it is important to set these prior to executing live trades. Be sure to set the commission to the values used by the exchange as well.
👉 It is important to understand that the calculations on the account size take place from the beginning of the price history of the strategy. This means that if historical results have inflated or depleted the account size from the beginning of trade history until now, the values sent in alerts will reflect the calculated size based on the inputs in the "Properties" tab. To start fresh, the user must set the date in the "Inputs" tab to the current date as to remove trades from the trade history. Failure to follow this instruction can result in an unexpected order size being sent in the alert.
█ FOR PINECODERS
• With the recent introduction of matrices in Pine, the script utilizes a matrix to track pivot points with the bars they occurred on, while tracking if that pivot has been traded against to prevent duplicate detections after a trade is exited.
• Alert messages are populated with placeholders; a capability that previously was only possible in alertcondition(), but has recently been extended to `strategy.*()` functions for use in the `alert_message` argument. This allows live trade values to populate in strategy alert messages.
• New arguments have been added to strategy.exit() , which allow differentiated messages to be sent based on whether the exit occurred at the stop or the limit. The new arguments used in this script are `alert_profit` and `alert_loss` to send messages to Discord